Wolfgang Osten (Ed.)

Fringe 2005
The 5th International Workshop on Automatic Processing of Fringe Patterns
With 448 Figures and 14 Tables
Professor Dr. Wolfgang Osten
Institut für Technische Optik
Universität Stuttgart
Pfaffenwaldring 9
70569 Stuttgart
Germany
[email protected]

Library of Congress Control Number: 2005931371

ISBN-10 3-540-26037-4 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-26037-0 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2006
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: by the authors
Production: LE-TEX Jelonek, Schmidt & Vöckler GbR, Leipzig
Cover design: design & production GmbH, Heidelberg

Printed on acid-free paper 7/3142/YL - 5 4 3 2 1 0
Conference Committee
Organizers and Conference Chairs:
Wolfgang Osten (Germany)
Werner Jüptner (Germany)
Program Committee:
Armando Albertazzi (Brazil)
Anand Asundi (Singapore)
Gerd von Bally (Germany)
Josef J.M. Braat (Netherlands)
Zoltan Füzessy (Hungary)
Christophe Gorecki (France)
Peter J. de Groot (USA)
Min Gu (Australia)
Igor Gurov (Russia)
Klaus Hinsch (Germany)
Jonathan M. Huntley (UK)
Yukihiro Ishii (Japan)
Guillermo Kaufmann (Argentina)
Gerald Roosen (France)
Fernando Mendoza Santoyo (Mexico)
Joanna Schmit (USA)
Rajpal S. Sirohi (India)
Paul Smigielski (France)
Mitsuo Takeda (Japan)
Ralph P. Tatam (UK)
Hans Tiziani (Germany)
Vivi Tornari (Greece)
Michael Totzeck (Germany)
Satoru Toyooka (Japan)
James D. Trolinger (USA)
Theo Tschudi (Germany)
Richard Kowarschik (Germany)
Malgorzata Kujawinska (Poland)
Vladimir Markov (USA)
Kaoru Minoshima (Japan)
Erik Novak (USA)
Tilo Pfeifer (Germany)
Ryszard J. Pryputniewicz (USA)
Ramon Rodrigues Vera (Mexico)
Elmar E. Wagner (Germany)
Alfred Weckenmann (Germany)
Günther Wernicke (Germany)
James C. Wyant (USA)
Ichirou Yamaguchi (Japan)
Toyohiko Yatagai (Japan)

Session Chairs (Sessions 1-5 and Poster Session):
W. Jüptner (Germany), M. Takeda (Japan), F. Mendoza Santoyo (Mexico), J. M. Huntley (UK), K. Creath (USA), P. J. de Groot (USA), G. Häusler (Germany), J. D. Trolinger (USA), I. Yamaguchi (Japan), V. Markov (USA), T. Yatagai (Japan), H. Tiziani (Germany), A. Albertazzi (Brazil), A. Asundi (Singapore), G. Wernicke (Germany), N. Demoli (Croatia)
Preface

In 1989 the time was ripe to create a workshop series dedicated to the discussion of the latest results in the automatic processing of fringe patterns. This idea was promoted by the insight that automatic, high-precision phase measurement techniques would play a key role in all future industrial applications of optical metrology. However, such a workshop must take place in a dynamic environment. Therefore the main topics of the previous events were always adapted to the most interesting subjects of the new period. In 1993 new principles of optical shape measurement, setup calibration, phase unwrapping and nondestructive testing were the focus of discussion, while in 1997 new approaches in multi-sensor metrology, active measurement strategies and hybrid processing technologies played a central role. The 2001 meeting, the first in the 21st century, was dedicated to optical methods for micro-measurements, hybrid measurement technologies and new sensor solutions for industrial inspection.

The fifth workshop takes place in Stuttgart, the capital of the state of Baden-Württemberg and the centre of a region with a long and remarkable tradition in engineering. Thus, after Berlin 1989 and Bremen 1993, 1997 and 2001, Stuttgart is the third Fringe city where international experts will meet to share new ideas and concepts in optical metrology.

This volume contains the papers presented during FRINGE 2005. The focus of this meeting was directed especially to resolution-enhanced technologies, new approaches in wide-scale 4D optical metrology, and advanced computer-aided measurement techniques. Since optical metrology is becoming more and more important for industrial inspection, sophisticated sensor systems and their application to challenging measurement problems were again chosen as one of the central topics of the workshop. This extended scope was again honored by a great response to our call for papers.

Scientists from all around the world offered more than 110 papers. This enormous response demanded a rigorous review of the papers to
select the best out of the overwhelming number of excellent papers. This hard job had to be done by the program committee, since there is a strict limit on the number of papers that can be presented and discussed during our workshop without holding parallel sessions. The papers presented at this workshop are grouped under five topics:

1. New Methods and Tools for Data Processing
2. Resolution Enhanced Technologies
3. Wide Scale 4D Optical Metrology
4. Hybrid Measurement Technologies
5. New Optical Sensors and Measurement Systems

Each session is introduced by an acknowledged expert who gives an extensive overview of the topic and reports on the state of the art. The classification of all submitted papers into these topics was again a difficult job that often required compromises. We hope that our decisions will be accepted by the audience. On this occasion we would like to express our deep thanks to the international program committee for helping us to find a good solution in every situation.

The editor would like to thank all the authors, who spent much time and effort on the preparation of their papers. Our appreciation also goes to Dr. Eva Hestermann-Beyerle and Monika Lempe of Springer Heidelberg for providing excellent conditions for the publication of these proceedings. My deep thanks go to the members of the ITO staff. The continuous help given by Gabriele Grosshans, Ruth Edelmann, Christa Wolf, Reinhard Berger, Witold Gorski, Ulrich Droste, Jochen Kauffmann and Erich Steinbeißer was the basis for a successful FRINGE 2005. Finally, special thanks and appreciation go to my co-chair, Werner Jüptner, for sharing with me the spirit of the 5th Fringe workshop.

Looking forward to FRINGE 2009.

Stuttgart, September 2005
Wolfgang Osten
Table of Contents Conference Committee .............................................................................V Preface ....................................................................................................VII Table of Contents .................................................................................... IX
Key Note R.S. Sirohi Optical Measurement Techniques............................................................2
Session 1: New Methods and Tools for Data Processing M. Kujawinska New Challenges for Optical Metrology: Evolution or Revolution ......14 P. de Groot, X. Colonna de Lega Interpreting interferometric height measurements using the instrument transfer function...................................................................30 K.A. Stetson Are Residues of Primary Importance in Phase Unwrapping? ............38 W. Wang, Z. Duan, S.G. Hanson, Y. Miyamoto, M. Takeda Experimental Study of Coherence Vortices: Birth and Evolution of Phase Singularities in the Spatial Coherence Function........................46 C.A. Sciammarella, F.M. Sciammarella Properties of Isothetic Lines in Discontinuous Fields...........................54 R. Dändliker Heterodyne, quasi-heterodyne and after ...............................................65 J.M. Huntley, M.F. Salfity, P.D. Ruiz, M.J. Graves, R. Cusack, D.A. Beauregard Robust three-dimensional phase unwrapping algorithm for phase contrast magnetic resonance velocity imaging ......................................74
R. Onodera, Y. Yamamoto, Y. Ishii Signal processing of interferogram using a two-dimensional discrete Hilbert transform.....................................................................................82 J.A. Quiroga, D. Crespo, J.A. Gomez Pedrero, J.C. Martinez-Antón Recent advances in automatic demodulation of single fringe patterns ...................................................................................................................90 C. Breluzeau, A. Bosseboeuf, S. Petitgrand Comparison of Techniques for Fringe Pattern Background Evaluation ...................................................................................................................98 W. Schumann Deformed surfaces in holographic Interferometry. Similar aspects concerning nonspherical gravitational fields.......................................107 I. Gurov, A. Zakharov Dynamic evaluation of fringe parameters by recurrence processing algorithms...............................................................................................118 T. Haist, M. Reicherter, A. Burla, L. Seifert, M. Hollis, W. Osten Fast hologram computation for holographic tweezers .......................126 Y. Fu, C. Quan, C. Jui Tay, H. Miao Wavelet analysis of speckle patterns with a temporal carrier ...........134 C. Shakher, S. Mirza, V. Raj Singh, Md. Mosarraf Hossain, R.S. Sirohi Different preprocessing and wavelet transform based filtering techniques to improve Signal- to-noise ratio in DSPI fringes ............142 J. Liesener, W. Osten Wavefront Optimization using Piston Micro Mirror Arrays ............150 E. Hack, P. Narayan Gundu Adaptive Correction to the Speckle Correlation Fringes using twisted nematic LCD ..........................................................................................158 R. Doloca, R. Tutsch Random phase shift interferometer .....................................................166
V. Markov, A. Khizhnyak Spatial correlation function of the laser speckle field with holographic technique.................................................................................................175 Q. Kemao, S. Hock Soon, A. Asundi Fault detection from temporal unusualness in fringe patterns..........183 T. Böttner, M. Kästner The Virtual Fringe Projection System (VFPS) and Neural Networks .................................................................................................................191 F.J. Cuevas, F. Mendoza Santoyo, G. Garnica, J. Rayas, J.H. Sossa Fringe contrast enhancement using an interpolation technique .......195 S. Drobczynski, H. Kasprzak Some remarks on accuracy of imaging polarimetry with carrier frequency ................................................................................................204 A. Federico, G.H. Kaufmann Application of weighted smoothing splines to the local denoising of digital speckle pattern interferometry fringes ....................................208 R.M. Groves, S.W. James, R.P. Tatam Investigation of the fringe order in multi-component shearography surface strain measurement ..................................................................212 Q. Kemao, S. Hock Soon, A. Asundi Metrological Fringe inpainting.............................................................217 B. Kemper, P. Langehanenberg, S. Knoche, G. von Bally Combination of Digital Image Correlation Techniques and Spatial Phase Shifting Interferometry for 3D-Displacement Detection and Noise Reduction of Phase Difference Data ..........................................221 T. Kozacki, P. Kniazewski, M. Kujawinska Photoelastic tomography for birefringence determination in optical microelements ........................................................................................226 A. Martínez, J.A. Rayas, R. Corsero Optimization of electronic speckle pattern interferometers ..............230
K. Patorski, A. Styk Properties of phase shifting methods applied to time average interferometry of vibrating objects ......................................................234 P.D. Ruiz, J.M. Huntley Depth-resolved displacement measurement using Tilt Scanning Speckle Interferometry..........................................................................238 G. Sai Siva, L. Kameswara Rao New Phase Unwrapping Strategy for Rapid and Dense 3D Data Acquisition in Structured Light Approach .........................................242 P.A.A.M. Somers, N. Bhattacharya Determination of modulation and background intensity by uncalibrated temporal phase stepping in a two-bucket spatially phase stepped speckle interferometer.............................................................247
Session 2: Resolution Enhanced Technologies K. Sugisaki, M. Hasegawa, M. Okada, Z. Yucong, K. Otaki, Z. Liu, M. Ishii, J. Kawakami, K. Murakami, J. Saito, S. Kato, C. Ouchi, A. Ohkubo, Y. Sekine, T. Hasegawa, A. Suzuki, M. Niibe, M. Takeda EUVA's challenges toward 0.1nm accuracy in EUV at-wavelength interferometry ........................................................................................252 M. Totzeck Some similarities and dissimilarities of imaging simulation for optical microscopy and lithography .................................................................267 I. Harder, J. Schwider, N. Lindlein A Ronchi-Shearing Interferometer for compaction test at a wavelength of 193nm .............................................................................275 J. Zellner, B. Dörband, H. Feldmann Simulation and error budget for high precision interferometry. ......283 G. Jäger, T. Hausotte, E. Manske, H.-J. Büchner, R. Mastylo, N. Dorozhovets, R. Füßl, R. Grünwald Progress on the wide scale Nano-positioning- and Nanomeasuring Machine by Integration of Optical-Nanoprobes .................................291
J.J.M. Braat, P. Dirksen, A.J.E.M. Janssen Through-Focus Point-Spread Function Evaluation for Lens Metrology using the Extended Nijboer-Zernike Theory ......................................299 C.D. Depeursinge, A.M. Marian, F. Montfort, T. Colomb, F. Charriére, J. Kühn, E. Cuche, Y. Emery, P. Marquet Digital Holographic Microscopy (DHM) applied to Optical Metrology: A resolution enhanced imaging technology applied to inspection of microscopic devices with subwavelength resolution ...........................308 T. Tschudi, V.M. Petrov, J. Petter, S. Lichtenberg, C. Heinisch, J. Hahn An adaptive holographic interferometer for high precision measurements.........................................................................................315 T. Yatagai, Y. Yasuno, M. Itoh Spatio-Temporal Joint Transform Correlator and Fourier Domain OCT.........................................................................................................319 W. Hou Subdivision of Nonlinearity in Heterodyne Interferometers .............326
Session 3: Wide Scale 4D Optical Metrology O. Loffeld Progress in SAR Interferometry ..........................................................336 P. Aswendt, S. Gärtner, R. Höfling New calibration procedure for measuring shape on specular surfaces .................................................................................................................354 T. Bothe, W. Li, C. von Kopylow, W. Jüptner Fringe Reflection for high resolution topometry and surface description on variable lateral scales ...................................................362 J. Kaminski, S. Lowitzsch, M.C. Knauer, G. Häusler Full-Field Shape Measurement of Specular Surfaces.........................372 L.P. Yaroslavsky, A. Moreno, J. Campos Numerical Integration of Sampled Data for Shape Measurements: Metrological Specification.....................................................................380
P. Pfeiffer, L. Perret, R. Mokdad, B. Pecheux Fringe analysis in scanning frequency interferometry for absolute distance measurement ...........................................................................388 I. Yamaguchi, S. Yamashita, M. Yokota Surface Shape Measurement by Dual-wavelength Phase-shifting Digital Holography ................................................................................396 Y. Ishii, R. Onodera, T. Takahashi Phase-shifting interferometric profilometry with a wide tunable laser source ......................................................................................................404 P. Andrä, H. Schamberger, J. Zänkert Opto-Mechatronic System for Sub-Micro Shape Inspection of Innovative Optical Components for Example of Head-Up-Displays 411 X. Peng, J. Tian 3-D profilometry with acousto-optic fringe interferometry...............420 E. Garbusi, E.M. Frins, J.A. Ferrari Phase-shifting shearing interferometry with a variable polarization grating recorded on Bacteriorhodopsin...............................................428 G. Khan, K. Mantel, N. Lindlein, J. Schwider Quasi-absolute testing of aspherics using Combined Diffractive Optical Elements ....................................................................................432 G. Notni, P. Kühmstedt, M. Heinze, C. Munkelt, M. Himmelreich Selfcalibrating fringe projection setups for industrial use.................436 J. Tian, X. Peng 3-D shape measurement method using point-array encoding ...........442 H. Wagner, A. Wiegmann, R. Kowarschik, F. Zöllner 3D measurement of human face by stereophotogrammetry..............446 M. Wegiel, M. Kujawinska Fast 3D shape measurement system based on colour structure light projection................................................................................................450
Session 4: Hybrid Measurement Technologies L. Koenders, A. Yacoot Tip Geometry and Tip-Sample Interactions in Scanning Probe Microscopy (SPM) .................................................................................456 N. Demoli, K. Šariri, D.Vukicevic, M.Torzynski Applications of time-averaged digital holographic interferometry...464 P. Picart, M. Grill, J. Leval, J.P. Boileau, F. Piquet Spatio-temporal encoding using digital Fresnel holography .............472 G. Wernicke, M. Dürr, H. Gruber, A. Hermerschmidt, S. Krüger, A. Langner High resolution optical reconstruction of digital holograms .............480 J. Engelsberger, E-H.Nösekabel, M. Steinbichler Application of Interferometry and Electronic Speckle Pattern Interferometry (ESPI) for Measurements on MEMS ........................488 J. Trolinger, V. Markov, J. Kilpatrick Full-field, real-time, optical metrology for structural integrity diagnostics ..............................................................................................494 T. Doll, P. Detemple, S. Kunz, T. Klotzbücher 3D Micro Technology: Challenges for Optical Metrology .................506 S. Grilli, P. Ferraro, D. Alfieri, M. Paturzo, L. Sansone, S. De Nicola, P. De Natale Interferometric Technique for Characterization of Ferroelectric Crystals Properties and Microengineering Process............................514 J. Müller, J. Geldmacher, C. König, M. Calomfirescu, W. Jüptner Holographic interferometry as a tool to capture impact induced shock waves in carbon fibre composites .........................................................522 H. Gerhard, G. Busse Two new techniques to improve interferometric deformationmeasurement: Lockin and Ultrasound excited Speckle-Interferometry .................................................................................................................530
A. Weckenmann, A. Gabbia Testing formed sheet metal parts using fringe projection and evaluation by virtual distortion compensation....................................539 V.B. Markov, B.D. Buckner, S.A. Kupiec, J.C. Earthman Fatigue damage precursor detection and monitoring with laser scanning technique.................................................................................547 G. Montay, I. Lira, M. Tourneix, B. Guelorget, M. François, C. Vial Analysis of localization of strains by ESPI, in equibiaxial loading (bulge test) of copper sheet metals........................................................551 P. Picart, J. Leval, J.P. Boileau, J.C. Pascal Laser vibrometry using digital Fresnel holography ...........................555 P. Picart, J. Leval, M. Grill, J.P. Boileau, J.C. Pascal 2D laser vibrometry by use of digital holographic spatial multiplexing .................................................................................................................563 V. Sainov, J. Harizanova, S. Ossikovska, W. Van Paepegem, J. Degrieck, P. Boone Fatigue Detection of Fibres Reinforced Composite Materials by Fringes Projection and Speckle Shear Interferometry.......................567 L. Salbut, M. Jozwik Multifunctional interferometric platform specialised for active components of MEMS/MOEMS characterisation..............................571 V. Tornari, E. Tsiranidou, Y. Orphanos, C. Falldorf, R. Klattenhof, E. Esposito, A. Agnani, R. Dabu, A. Stratan, A. Anastassopoulos, D. Schipper, J. Hasperhoven, M. Stefanaggi, H. Bonnici, D. Ursu Laser Multitask ND Technology in Conservation Diagnostic Procedures ..............................................................................................575
Session 5: New Optical Sensors and Measurement Systems T.-C. Poon Progress in Scanning Holographic Microscopy for Biomedical Applications............................................................................................580 K. Creath, G.E. Schwartz The Dynamics of Life: Imaging Temperature and Refractive Index Variations Surrounding Material and Biological Systems with Dynamic Interferometry .......................................................................588 M. Józwik, C. Gorecki, A. Sabac, T. Dean, A. Jacobelli Microsystem based optical measurement systems: case of optomechanical sensors…………………………………………………….597 C. Richter, B. Wiesner, R. Groß, G. Häusler White-light interferometry with higher accuracy and more speed ...605 F. Depiereux, R. Schmitt, T. Pfeifer Novel white light Interferometer with miniaturised Sensor Tip .......613 W. Mirandé Challenges in the dimensional Calibration of sub-micrometer Structures by Help of optical Microscopy ...........................................622 A. Albertazzi, A. Dal Pont A white light interferometer for measurement of external cylindrical surfaces ...................................................................................................632 J. Millerd, N. Brock, J. Hayes, M. North-Morris, B. Kimbrough, J. Wyant Pixelated Phase-Mask Dynamic Interferometers ...............................640 K.D. Hinsch, H. Joost, G. Gülker Tomographic mapping of airborne sound fields by TV-holography 648 S. Toyooka, H. Kadono, T. Saitou, P. Sun, T. Shiraishi, M. Tominaga Dynamic ESPI system for spatio-temporal strain analysis ................656
V. Striano, G. Coppola, P. Ferraro, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, R. Marcelli Digital holographic microscope for dynamic characterization of a micromechanical shunt switch..............................................................662 Y. Emery, E. Cuche, F. Marquet, S. Bourquin, P. Marquet Digital Holographic Microscopy (DHM): Fast and robust 3D measurements with interferometric resolution for Industrial Inspection................................................................................................667 R. Höfling, C. Dunn Digital Micromirror Arrays (DMD) – a proven MEMS technology looking for new emerging applications in optical metrology .............672 C. Bräuer-Burchardt, M. Palme, P. Kühmstedt, G. Notni Optimised projection lens for the use in digital fringe projection.....676 K. Mantel, J. Lamprecht, N. Lindlein, J. Schwider Absolute Calibration of Cylindrical Specimens in Grazing Incidence Interferometry........................................................................................682 A. Michalkiewicz, J. Krezel, M. Kujawinska, X. Wang, P.J. Bos Digital holographic interferometer with active wavefront control by means of liquid crystal on silicon spatial light modulator..................686 K.-U. Modrich In-Situ-Detection of Cooling Lubricant Residues on Metal Surfaces Using a Miniaturised NIR-LED-Photodiode-System .........................690 E. Papastathopoulos, K. Körner, W. Osten Chromatic Confocal Spectral Interferometry - (CCSI) .....................694 F. Wolfsgruber, C. Rühl, J. Kaminski, L. Kraus, G. Häusler, R. Lampalzer, E.-B. Häußler, P. Kaudewitz, F. Klämpfl, A. Görtler A simple and efficient optical 3D-Sensor based on “Photometric Stereo” (“UV-Laser Therapy”) ............................................................702
Appendix: New Products ...........................................................707
Key Note
Optical Measurement Techniques
Given by Rajpal S. Sirohi, Bhopal (India)
Optical Measurement Techniques

R.S. Sirohi
Vice-Chancellor, Barkatullah University
Bhopal 462 026, India
1 Introduction

Man's romance with light may date back millions of years, but light as a measurement tool is of recent origin. Light is used for sensing a variety of parameters, and its domain of applications is so vast that it pervades all branches of science, engineering, technology, biomedicine, agriculture, etc. Devices that use light for sensing, measurement and control are termed optical sensors. Optical sensing is generally non-contact and non-invasive, and it provides very high measurement accuracy. In many cases the accuracy can be varied over a large range. In these sensors, an optical wave is both the information sensor and the carrier of information. Any one of the following characteristics of a wave may be modulated by the measured quantity (measurand): amplitude or intensity, phase, polarization, frequency, and direction of propagation. However, the detected quantity is always intensity, as detectors cannot follow the optical frequency. The measurand modifies the characteristics of the wave in such a way that, on demodulation, it results in a change of intensity; this change of intensity is related to the measured quantity. In some measurements, the intensity of the wave is modulated directly, and hence no demodulation before detection is needed. Measurement of phase is often used; phase can be measured by direct and also by indirect methods. Indirect methods of measuring phase make use of interferometry. I will therefore confine my attention to some of the
techniques developed by us over the last couple of decades. Further, I will confine myself for the moment to two areas:

1. Collimation Testing, and
2. Speckle Interferometry

Many applications require a collimated beam, and a number of methods for testing collimation are available. We researched this topic and developed several novel techniques. Similarly, we carried out detailed investigations in speckle shear interferometry.
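As background to the phase-measurement remark above: since detectors record only intensity, the phase is commonly recovered from several phase-shifted intensity frames. The sketch below uses the generic four-step phase-shifting algorithm, a standard textbook scheme rather than a method specific to this paper; all numerical values are illustrative assumptions.

```python
import numpy as np

# Four-step phase-shifting demodulation (generic textbook scheme).
# Four interferograms are recorded with reference phase shifts of
# 0, pi/2, pi and 3*pi/2:
#   I_k = I0 * (1 + V * cos(phi + k*pi/2)),  k = 0..3
# so that I3 - I1 = 2*I0*V*sin(phi) and I0_frame - I2 = 2*I0*V*cos(phi).

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped phase (in [-pi, pi]) from four frames."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic example: a linear phase ramp lying inside (-pi, pi),
# so no unwrapping step is needed.
x = np.linspace(0.0, 1.0, 256)
phi_true = 4.0 * x - 2.0
bias, visibility = 1.0, 0.8          # assumed bias and fringe visibility
frames = [bias * (1.0 + visibility * np.cos(phi_true + k * np.pi / 2))
          for k in range(4)]

phi_rec = four_step_phase(*frames)
print(np.max(np.abs(phi_rec - phi_true)))   # residual at rounding-error level
```

The quadrant-aware arctangent (np.arctan2) is what makes the recovered phase unambiguous over a full 2*pi range, which a plain arctangent of the ratio would not provide.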
2 Collimation Testing

A laser beam is used for a variety of measurements in all branches of science, engineering and technology. Usually a laser oscillating in the TEM00 mode is used. In some applications, the beam has to be expanded to a larger diameter. Conversion of the small-diameter, large-divergence beam emitted by the laser into a large-diameter, low-divergence beam is done with an inverted telescope arrangement. The foci of the two lenses must coincide, and their optical axes must be aligned, so that a diffraction-limited incident beam emerges diffraction-limited. The purpose of collimation testing is to check whether the two foci are coincident and the axes aligned. In general, the foci are not coincident, and hence the emergent beam will be either divergent or convergent, depending on the locations of the focal points of the two lenses. Interferometric methods are commonly used for checking collimation. These methods can be grouped as follows [1-21, 23-34]:

a. Shear interferometry [1, 2, 7, 21-24]
b. Classical interferometry [2, 8, 30]
c. Talbot interferometry [5, 6, 13, 16, 19, 24]
d. Hybrid methods [32]
e. Special techniques [10, 18, 27, 31, 33]

All these methods require a long coherence length and hence are used only to collimate laser beams.

2.1 Shear Interferometry

A plane-parallel plate is one of the most convenient elements for introducing linear shear, in both the reflected and the transmitted beams [1]. For reasons of high fringe contrast, a reflection arrangement is preferred. An interference pattern is observed in the region of overlap. For a divergent or convergent beam incident on the plate, a system of equally spaced
straight fringes is obtained. However, for a collimated beam there is uniform illumination in the superposed region, i.e. only a single fringe is formed. This offers a quick method of realizing correct collimation. However, determining the location of the lens for an infinite fringe on a finite beam introduces a certain inaccuracy. This problem is resolved by the use of a wedge plate with a small wedge angle of about 10 arc seconds [7]. The wedge plate can be used in two orientations, namely (i) shear and wedge directions parallel, and (ii) shear and wedge directions perpendicular. Interference between the beams reflected from the front and rear surfaces of the wedge plate results in a straight fringe pattern. For a collimated beam, the fringes run perpendicular to the wedge direction. However, for a divergent or convergent beam, there is a change of fringe width in case (i) and a change of orientation in case (ii). Usually the second configuration is used, and collimation is achieved when the fringes run parallel to a fiduciary line drawn on the face of the plate itself. This offers better accuracy than the uncoated plane-parallel plate. Wedge-plate shear interferometry certainly provides an improvement over the plane-parallel-plate method, as aligning the fringe pattern parallel to a fiduciary mark can be done more accurately than determining an infinite fringe width over a finite aperture. However, there is a need to dispense with the fiduciary mark; essentially, one seeks self-referencing methods [12, 13, 34]. This can be achieved with a pair of wedge plates. The plates are arranged anti-parallel, i.e. the wedge angles of the two plates are oppositely directed. This composite plate can be used in two orientations: (i) wedge direction perpendicular to the shear direction (orthogonal configuration), and (ii) wedge direction parallel to the shear direction (parallel configuration) [13].
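The sensitivity of these plate methods can be illustrated numerically. The following is a rough sketch based on standard two-beam interference relations; the plate thickness, refractive index and wavelength are illustrative assumptions, not values from this paper. A plane-parallel plate of thickness t and index n at incidence angle theta shears the beam laterally by s = t*sin(2*theta)/sqrt(n^2 - sin^2(theta)), and a wavefront of radius of curvature R then produces straight fringes of spacing lambda*R/s in the overlap region:

```python
import math

def lateral_shear(t, n, theta):
    """Lateral separation of the front- and back-surface reflections
    of a plane-parallel plate (same units as t)."""
    return t * math.sin(2 * theta) / math.sqrt(n**2 - math.sin(theta)**2)

lam = 633e-9            # He-Ne wavelength (m), assumed
t, n = 10e-3, 1.515     # plate thickness and refractive index, assumed
theta = math.radians(45)
s = lateral_shear(t, n, theta)   # about 7.5 mm for these values

for R in (1e2, 1e3, 1e4):        # wavefront radius of curvature (m)
    spacing = lam * R / s        # fringe spacing in the overlap region
    print(f"R = {R:8.0f} m -> fringe spacing = {spacing * 1e3:.1f} mm")
```

As R tends to infinity (perfect collimation) the fringe spacing exceeds any finite aperture, so only a single fringe is seen, which is the criterion the text describes for the plane-parallel plate.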
At collimation, straight-line fringes normal to the wedge direction are formed in both halves of the composite plate. However, for non-collimated illumination the fringe pattern rotates in opposite directions in the orthogonal configuration, and the fringe width changes in the parallel configuration. A number of methods have been demonstrated wherein a single wedge plate, together with additional optics, simulates a two-wedge-plate interferometer. The wedge plates can also be arranged in tandem [12, 13, 17, 25]. If, instead of an uncoated wedge plate, a plate coated on both sides is used, some interesting results are observed: first, the fringe pattern can be observed both in reflection and in transmission, though the transmission pattern is recommended. It can be shown that the principal fringe has satellites when the plate is illuminated by a convergent or divergent beam [22, 23]. The satellite fringes disappear, and sharp, equidistant straight fringes are observed, when the illumination on the plate is collimated. The transmission pattern is recommended over the reflection pattern because (i) the contrast of the fringes
is high, and (ii) the satellite fringes are stronger. This method is better than that based on the uncoated wedge plate, as it relies on the appearance and disappearance of satellite fringes. It may be emphasized that certain reflectivity values may give better results than others. For testing the collimation of short-duration laser beams, a cyclic interferometer with a shear element inside the interferometer has been proposed [18,31]. The shear element may, for example, introduce radial shear.

2.2 Classical Interferometry
Classical interferometry can be used for collimation testing, but it requires a fairly large path difference between the two beams [2,8]. The basic idea is that if the incident beam is collimated, it should give either a fringe-free field or a straight-line fringe pattern irrespective of the path difference between the two beams. If the incident beam departs from collimation, however, the pattern arises from interference between two spherical waves, displaying circular or, in general, curved fringes. Obviously the sensitivity depends on the path difference, and the setup is highly susceptible to vibration. This technique is therefore seldom used for collimation testing.

2.3 Talbot Interferometry
A certain class of objects, when illuminated by a coherent beam, image themselves at specific distances along the direction of propagation. These objects have to satisfy the Montgomery condition. A linear grating is an example of such an object. Under Talbot imaging, periodicity in the transverse direction is translated into periodicity along the longitudinal direction. A linear grating will image at equal distances; theoretically, an infinite number of identical images are formed. However, when the illumination is either convergent or divergent, the grating pitch in the image changes and the Talbot images are not equi-spaced.
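The self-imaging condition can be made concrete with the standard primary Talbot distance z_T = 2p²/λ and, for illumination from a point source at finite distance R, the purely geometric pitch magnification p' = p(R + z)/R. The sketch below uses assumed numerical values for illustration:

```python
def talbot_distance(pitch_m, wavelength_m):
    """Primary Talbot self-imaging distance z_T = 2 p^2 / lambda."""
    return 2.0 * pitch_m ** 2 / wavelength_m

def projected_pitch(pitch_m, source_distance_m, z_m):
    """Geometric pitch magnification under spherical (non-collimated)
    illumination from a point source at distance R: p' = p (R + z) / R.
    As R -> infinity (collimation) the pitch is unchanged."""
    R = source_distance_m
    return pitch_m * (R + z_m) / R

# Example (assumed values): 10-um-pitch grating, He-Ne laser
p = 10e-6
zT = talbot_distance(p, 632.8e-9)          # ~0.32 mm
p_div = projected_pitch(p, 0.5, zT)        # divergent beam: pitch grows
```

A collimation test then amounts to detecting whether the pitch at a Talbot plane still equals p; any residual divergence shows up as the pitch change that the moiré techniques below exploit.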
However, for a small departure from collimation, which is usually the case, the positions of the Talbot planes may be assumed to lie at the locations dictated by collimated illumination, but the grating pitch changes. If another grating is placed at a Talbot plane, a moiré pattern is formed [3,5,6,19]. This moiré pattern can have either finite or infinite fringe width, depending on the orientation of the grating. Departure from collimation will either produce moiré fringes when infinite fringe width is used, or rotate the moiré pattern in the case of finite fringe width. This therefore poses the same problem encountered with the plane parallel plate and wedge plate based techniques. Therefore a dual grating is
conceived which, when illuminated by a coherent beam, images itself at the Talbot planes. At an appropriate Talbot plane another identical grating is used, thereby producing moiré patterns in both halves. The moiré fringes in these patterns run parallel to each other if the illumination is collimated. This is a self-referencing technique, like the double wedge plate method [11,16]. The dual grating has been produced in two different configurations. Several other types of gratings, such as circular and spiral gratings, have also been used for collimation testing [24,29].

2.4 Hybrid Techniques
These techniques combine shear interferometry with the Talbot effect. Several combinations have been adopted and found to provide reasonable sensitivities [32].

2.5 Special Techniques
All self-referencing techniques could be called special techniques, as they provide double the sensitivity. One can also explore phase conjugation for this purpose. Since a phase conjugate mirror converts a diverging wave into a converging wave and vice versa, it offers an exciting way of collimating a beam. A Michelson interferometer setup is used in which one of the mirrors is replaced by a phase conjugate mirror [10,15,33]. In general one observes curved or circular fringes; only at correct collimation does one observe straight-line fringes or a fringe-free field. The method offers double the sensitivity of the plane parallel plate or wedge plate techniques, as the interference arises between diverging and converging waves of nearly the same curvature. The self-referencing feature can, moreover, be introduced by using a double-mirror arrangement instead of the single mirror of the Michelson interferometer. The angle between the mirrors is taken to be very small, presenting two fields; the interference pattern in one field acts as a reference for the other.
3 Speckle interferometry
Coherent light reflected from a rough surface, or transmitted through a medium having refractive-index inhomogeneities or random surface height variations, as in a ground glass, shows a grainy structure in space, which is
called a speckle pattern, and the grains are called speckles [35]. They arise from the self-interference of a large number of randomly de-phased waves. The speckle pattern degrades image quality under coherent illumination and hence was considered the bane of holographers. Methods were therefore investigated to eliminate or reduce the speckle noise. It was soon realized, however, that the speckle pattern is also a carrier of information. This realization gave rise to a class of techniques known as speckle metrology [36,37]. Initially it was applied to fatigue testing, but it has slowly evolved into a technique comparable to holographic interferometry. The technique is applied to deformation measurement, contouring, stress analysis, vibration measurement, etc. Earlier, speckle interferometry was carried out with photoemulsions for recording the speckle pattern. It can, however, also be carried out with electronic detection, so phase shifting can be easily performed and processing can be almost real time, providing 3-D display of deformation maps [38]. Further, the configuration can be designed to measure all three components of the deformation vector simultaneously, and equipment for such measurements is commercially available [39]. Equipment for directly measuring displacement derivatives, and hence strains, slopes and curvature, is also commercially available [40]. The technique has therefore come out of the laboratory environment and is used in the field. Speckle interferometry, unlike holographic interferometry, uses an imaging geometry and, as in classical interferometry, a reference beam is added to code phase information into intensity variation [41]. The reference beam can be specular or diffuse and is generally added axially, except where spatial phase shifting is incorporated [42]. In the latter case it makes a very small angle with the object wave so that an appropriate fringe frequency is produced.
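The subtraction mode of electronic speckle pattern interferometry can be sketched numerically: with an object speckle field plus axial reference beam, the absolute difference of the intensities recorded before and after deformation shows correlation fringes that are dark wherever the deformation phase change is a multiple of 2π. All data below are synthetic and the linear deformation profile is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
# Random speckle phases of the object field (synthetic)
obj_phase = rng.uniform(0, 2 * np.pi, (N, N))
x = np.linspace(0, 1, N)
# Assumed deformation: phase change grows linearly across the field (0..6*pi)
dphi = 6 * np.pi * x[None, :] * np.ones((N, 1))

def espi_intensity(phase):
    # Object speckle field plus axial, unit-amplitude reference beam
    return np.abs(np.exp(1j * phase) + 1.0) ** 2

I_before = espi_intensity(obj_phase)
I_after = espi_intensity(obj_phase + dphi)
fringes = np.abs(I_after - I_before)   # dark where dphi = 2*pi*k
```

Since I = 2 + 2cos(phase), the difference vanishes identically where dphi is a multiple of 2π, so this synthetic pattern shows three dark correlation fringes across the field.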
In shear speckle interferometry, the reference beam is built in: one sheared field acts as a reference to the other [43]. The shear methods are grouped under the following categories (Table 1):

Table 1. Shear types and fringe formation

Shear type                      Fringe formation
1. Lateral (linear) shear       G(x+Δx, y+Δy) − G(x, y)
2. Rotational shear             G(r, θ+Δθ) − G(r, θ)
3. Radial shear                 G(r±Δr, θ) − G(r, θ)
4. Inversion shear              G(x, y) − G(−x, −y)
5. Folding shear                G(x, y) − G(−x, y) (folding about the y-axis);
                                G(x, y) − G(x, −y) (folding about the x-axis)
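The shear operations of Table 1 can be mimicked numerically on a sampled field G. The sketch below (integer-pixel shifts and a synthetic tilt field; all values assumed for illustration) also reproduces the doubled tilt sensitivity of folding and inversion noted in the text:

```python
import numpy as np

def lateral_shear(G, dx, dy):
    """G(x+dx, y+dy) - G(x, y) via integer-pixel shifts (wraps at edges)."""
    return np.roll(G, shift=(-dy, -dx), axis=(0, 1)) - G

def inversion_shear(G):
    """G(x, y) - G(-x, -y): 180-degree rotation about the field centre."""
    return G - G[::-1, ::-1]

def folding_shear_y(G):
    """G(x, y) - G(-x, y): folding about the y-axis."""
    return G - G[:, ::-1]

# Example: pure tilt phase G = 0.1*x on a grid symmetric about the centre
N = 64
x = np.arange(N) - (N - 1) / 2.0
G = 0.1 * x[None, :] * np.ones((N, 1))
fold = folding_shear_y(G)      # = 0.2*x: doubled tilt sensitivity
inv = inversion_shear(G)       # also 0.2*x for a pure tilt
```

For the pure tilt field the folded and inverted differences both equal 0.2x, twice the original 0.1x tilt, while the lateral shear yields the constant partial slope 0.1 per pixel.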
Linear shear provides fringe patterns that depict partial slope. Rotational shear can fully compensate a rotationally symmetric deflection/deformation and hence presents fringes that are due to departure from circular symmetry [44]. Radial shear provides radially increasing sensitivity, with maximum sensitivity at the periphery [45]. Folding yields double the sensitivity for tilt. In the case of inversion and folding there is full superposition even though shear has been applied; this is also true for rotational shear. A structural engineer would be interested in obtaining deflection, strains, bending moments, etc. from the same experiment. This is done by placing an opaque plate with several apertures in front of the imaging lens [46]. These apertures may carry shear elements, etc. By a judicious choice of aperture configuration, it is possible to obtain in-plane and out-of-plane displacement components, partial slopes and curvature fringes from a single double-exposure specklegram. The technique works well with photographic recording, as the different information can be retrieved at the frequency plane through Fourier filtering. The method employs both frequency and theta multiplexing. Several other interesting techniques with enhanced sensitivity have also been reported [47-49]. It is, however, difficult to implement multiplexing techniques in electronic speckle pattern interferometry due to the limited resolution of the CCD array. It may nevertheless be possible to overcome this limitation using techniques similar to those used in digital holographic interferometry.
4 Acknowledgements
This paper contains information that has been reported in some form in several publications. I would therefore like to express my sincere gratitude to all my students and colleagues who have contributed to the development of these techniques.
5 References
1. MVRK Murty, Use of a single plane parallel plate as a lateral shearing interferometer with a visible gas laser source, Appl. Opt., 3, 531-534 (1964)
2. P Langenbeck, Improved collimation test, Appl. Opt., 9, 2590-2593 (1970)
3. DE Silva, A simple interferometric method of beam collimation, Appl. Opt., 10, 1980-1983 (1971)
4. JC Fouere and D Malacara, Focusing errors in collimating lens or mirror: Use of a moiré technique, Appl. Opt., 13, 1322-1326 (1974)
5. P Hariharan and ZS Hegedus, Double grating interferometers II. Applications to collimated beams, Opt. Commun., 14, 148-152 (1975)
6. K Patorski, S Yokozeki and T Suzuki, Collimation test by double grating shearing interferometer, Appl. Opt., 15, 1234-1240 (1976)
7. MVRK Murty, Lateral Shear Interferometers, in Optical Shop Testing, Ed. D. Malacara, John Wiley & Sons, pp. 105-148 (1978)
8. M Bass and JS Whittier, Beam divergence determination and collimation using retroreflectors, Appl. Opt., 23, 2674-2675 (1984)
9. MW Grindel, Testing collimation using shearing interferometry, Proc. SPIE, 680, 44-46 (1986)
10. WL Howes, Lens collimation and testing using a Twyman-Green interferometer with self-pumped phase-conjugating mirror, Appl. Opt., 25, 473-474 (1986)
11. MP Kothiyal and RS Sirohi, Improved collimation testing using Talbot interferometry, Appl. Opt., 26, 4056-4057 (1987)
12. RS Sirohi and MP Kothiyal, Double wedge plate shearing interferometer for collimation test, Appl. Opt., 26, 4054-4056 (1987)
13. MP Kothiyal, RS Sirohi and K-J Rosenbruch, Improved techniques of collimation testing, Opt. Laser Technol., 20, 139-144 (1988)
14. CW Chang and DC Su, Collimation method that uses spiral gratings and Talbot interferometry, Opt. Lett., 16, 1783-1784 (1991)
15. RP Shukla, M Dokhanian, MC George and P Venkateshwarlu, Laser beam collimation using phase conjugate Twyman-Green interferometer, Opt. Eng., 30, 386-390 (1991)
16. MP Kothiyal, KV Sriram and RS Sirohi, Setting sensitivity in Talbot interferometry with modified gratings, 23, 361-365 (1991)
17. DY Xu and K-J Rosenbruch, Rotatable single wedge plate shearing interference technique for collimation testing, Opt. Eng., 30, 391-396 (1991)
18. TD Henning and JL Carlsten, Cyclic shearing interferometer for collimating short coherence length laser beams, Appl. Opt., 31, 1199-1209 (1992)
19. AR Ganesan and P Venkateshwarlu, Laser beam collimation using Talbot interferometry, Appl. Opt., 32, 2918-2920 (1993)
20. KV Sriram, MP Kothiyal and RS Sirohi, Self-referencing collimation testing techniques, Opt. Eng., 32, 94-100 (1993)
21. KV Sriram, P Senthilkumaran, MP Kothiyal and RS Sirohi, Double wedge interferometer for collimation testing: new configurations, Appl. Opt., 32, 4199-4203 (1993)
22. RS Sirohi, T Eiju, K Matsuda and P Senthilkumaran, Multiple beam wedge plate shear interferometry in transmission, J. Mod. Opt., 41, 1747-1755 (1994)
23. P Senthilkumaran, KV Sriram, MP Kothiyal and RS Sirohi, Multiple beam wedge plate shear interferometer for collimation testing, Appl. Opt., 34, 1197-1202 (1994)
24. KV Sriram, MP Kothiyal and RS Sirohi, Collimation testing with linear dual field, spiral and evolute gratings: A comparative study, Appl. Opt., 33, 7258-7260 (1994)
25. J Choi, GM Perera, MD Aggarwal, RP Shukla and MV Mantravadi, Wedge plate shearing interferometers for collimation testing: Use of moiré technique, Appl. Opt., 34, 3628-3638 (1995)
26. JH Chen, MP Kothiyal and HJ Tiziani, Collimation testing of a CO2 laser beam with a shearing interferometer, Opt. Laser Technol., 12, 179-181 (1995)
27. DY Xu and S Chen, Novel wedge plate beam tester, Opt. Eng., 34, 169-172 (1995)
28. JS Darlin, KV Sriram, MP Kothiyal and RS Sirohi, A modified wedge plate shearing interferometer for collimation testing, Appl. Opt., 34, 2886-2887 (1995)
29. JS Darlin, V Ramya, KV Sriram, MP Kothiyal and RS Sirohi, Some investigations in Talbot interferometry for collimation testing, J. Opt. (India), 42, 167-175 (1996)
30. CS Narayanamurthy, Collimation testing using temporal coherence, Opt. Eng., 35(4), 1161-1164 (1996)
31. JS Darlin, MP Kothiyal and RS Sirohi, Self-referencing cyclic shearing interferometer for collimation testing, J. Mod. Opt., 44, 929-939 (1997)
32. JS Darlin, MP Kothiyal and RS Sirohi, A hybrid wedge plate-grating interferometer for collimation testing, Opt. Eng., 37(5), 1593-1598 (1998)
33. JS Darlin, MP Kothiyal and RS Sirohi, Phase conjugate Twyman-Green interferometer with increased sensitivity for laser beam collimation, J. Mod. Opt., 45, 2371-2378 (1998)
34. JS Darlin, MP Kothiyal and RS Sirohi, Wedge plate interferometry – a new dual field configuration for collimation testing, Opt. Laser Technol., 30, 225-228 (1998)
35. JC Dainty (Ed.), Laser Speckle and Related Phenomena, Springer, Berlin (1975)
36. R Jones and C Wykes, Holographic and Speckle Interferometry, Cambridge University Press, Cambridge, England (1989)
37. RS Sirohi (Ed.), Speckle Metrology, Marcel Dekker, New York (1993)
38. PK Rastogi (Ed.), Digital Speckle Pattern Interferometry, Wiley, New York (2001)
39. Steinbichler Optotechnik GmbH, Germany
40. Bremer Institut fuer Angewandte Strahltechnik (BIAS), Germany
41. RS Sirohi, Speckle Interferometry, Contemporary Physics, 43(3), 161-180 (2002)
42. RS Sirohi, J Burke, H Helmers and KD Hinsch, Spatial phase-shifting for pure in-plane displacement and displacement derivatives measurement in electronic speckle pattern interferometry (ESPI), Appl. Opt., 36(23), 5787-5791 (1997)
43. RS Sirohi, Speckle shear interferometry - A review, J. Opt. (India), 13, 95-113 (1984)
44. RK Mohanty, C Joenathan and RS Sirohi, NDT speckle rotational shear interferometry, NDT International (UK), 18, 203-205 (1985)
45. C Joenathan, CS Narayanamurthy and RS Sirohi, Radial and rotational slope contours in speckle shear interferometry, Opt. Commun., 56, 309-312 (1986)
46. RK Mohanty, C Joenathan and RS Sirohi, Speckle and speckle shear interferometers combined for simultaneous determination of out-of-plane displacement and slope, Appl. Opt., 24, 3106-3109 (1985)
47. N Krishna Mohan, T Santhanakrishnan, P Senthilkumaran and RS Sirohi, Simultaneous implementation of Leendertz and Duffy methods for in-plane displacement measurement, Opt. Commun., 124, 235-239 (1996)
48. T Santhanakrishnan, N Krishna Mohan, P Senthilkumaran and RS Sirohi, Slope change contouring of 3D-deeply curved objects by multiaperture speckle shear interferometry, Optik, 104, 27-31 (1996)
49. T Santhanakrishnan, N Krishna Mohan, PK Palanisamy and RS Sirohi, Various speckle interferometric configurations for contouring and slope change measurement, J. Instrum. Soc. India, 27, 16-22 (1997)
SESSION 1
New Methods and Tools for Data Processing Chairs: Werner Jüptner Bremen (Germany) Mitsuo Takeda Tokyo (Japan) Fernando Mendoza Santoyo Guanajuato (Mexico) Jonathan M. Huntley Loughborough (UK)
Invited Paper
New Challenges for Optical Metrology: Evolution or Revolution Malgorzata Kujawinska Institute of Micromechanics & Photonics, Warsaw Univ. of Technology, 8, Sw. A. Boboli Str., 02-525 Warsaw, Poland
1 Introduction
Although experimental interferometry began several centuries ago, it was Thomas Young who reported one of the first examples of quantitative fringe analysis when, in the early 1800s, he estimated the wavelength of light by measuring the spacing of interference fringes. Later in the same century, Michelson and Morley employed fringe measurement in their interferometric experiments. As interferometry began to develop into a mature subject, interferometric metrology was primarily concerned with measurements of optical surfaces in two dimensions, but the routine quantitative interpretation of surface form and wavefront deviation was not practical in the absence of computers. Optical workshops used interferometry as a null-setting technique, with craftsmen polishing surfaces until fringes were removed or linearised. The real revolution in interferometry and, more generally, in optical metrology was brought about by the invention of the laser in the early sixties. This highly coherent and efficient light source not only increased the application of classical interferometry but also enabled the practical development of measurement techniques such as holographic interferometry, ESPI and interferometric grid-based techniques. The information coded in interferograms required quantitative analysis; however, fringe numbers and spacings were at first determined manually and relied strongly on the a priori knowledge of human operators. At the end of the 1980s we experienced the next revolution in full-field, fringe-based optical metrology. This was due to the rapid development of personal computers with image-processing capabilities and matrix detectors (CCD and later CMOS), as well as the introduction of temporal [1] and spatial [2] phase-based interferogram analysis methods. This was also the time when the Fringe Workshop was born, together with at least three other international conferences (Fringe Analysis'89 FASIG (UK), Interferometry'89 (Poland), Interferometry: Techniques and Analysis (USA)) covering automatic fringe pattern analysis applied to coherent and non-coherent methods of fringe generation. The topics of the Fringe Workshop steadily expanded, and its original focus changed in accordance with the needs of the surrounding world. In 1993 the expansion was towards shape measurement and material fault detection. Fringe'97 dealt with scaled metrology, active measurement and hybrid processing technologies, while the first meeting of the 21st century focused on optical methods for micromeasurements and new optical sensors. The new topics for the 2005 meeting are resolution-enhanced technologies and wide-scale 4D optical metrology, with special emphasis on modern measurement strategies combining physical modeling, computer-aided simulation and experimental data acquisition, as well as new approaches for extending the existing resolution limits. These changes of focus and subject have been driven by new needs expressed by researchers and industry, and also by advances in technology, which provide new sources, detectors, optoelectronics and electromechanics with enhanced properties. It seems to me, however, that we have experienced evolution of the approaches, methods and apparatus concepts rather than revolution. Today, sixteen years after the first Fringe, I was asked by the Program Committee of the Fifth Fringe Workshop to look both back in time and into the future: to evaluate what has happened in research and application of optical metrology, and to discuss the visionary concepts and advancements in methods and photonics technologies which may create a new generation of optical metrology tools and expand their application to new areas of research, industry, medicine and multimedia technologies.
The full topic is too wide to cover in this presentation; I will therefore focus on the analysis of a few areas in optical metrology. These include:
- new methods and tools for the generation, acquisition, processing and evaluation of data in optical metrology, including active phase manipulation methods,
- novel concepts of instrumentation for micromeasurements.
Most of these topics are developing through evolution of the tasks, approaches and tools; however, some of them are, or may become, the subject of technological revolution.
2 New methods for the generation, acquisition, processing and evaluation of data
The success in implementing optical full-field measuring methods in industry, medicine and commerce depends on the capability to provide quick, accurate and reliable generation, acquisition, processing and evaluation of data which may be used directly in a given application or as input data for CAD/CAM, FEM, specialized medical or computer graphics and virtual reality software. As already mentioned, the revolution in automatic fringe pattern processing was brought about by the introduction of temporal (phase shifting [1]) and spatial (2D Fourier transform [2]) phase-based interferogram analysis, which started hundreds of works devoted to modifications, improvements and extensions of these methods. M. Takeda gave an excellent overview, focusing on the analogies and dualities in fringe generation and processing, during Fringe'97 [3]. It was shown that all spatial, temporal and spectral fringe pattern analysis methods have common roots, which is why they develop in parallel, although their applications and full implementation are sometimes restricted by the lack of proper technological support. Let us see where we are now. A fringe pattern obtained as the output of a measuring system may be modified physically by optoelectronic and mechanical hardware (sensors and actuators) and virtually by image processing software [4]. These modifications refer to the phase and amplitude (intensity) of the signal produced in space and time, so that the general form of the fringe pattern is given by

I(x, y, t) = a0(x, y) + Σ_{m=1}^{∞} am(x, y) cos m{2π(f0x x + f0y y + ν0 t) + α(t) + φ(x, y)}    (1)
where am(x,y) is the amplitude of the mth harmonic of the signal, f0x and f0y are the fundamental spatial frequencies, ν0 is the temporal frequency and α is the phase shift value. The measurand is coded in the phase φ(x,y); (x,y) and t represent the space and time coordinates of the signal. In addition, several active ways have recently emerged in which the phase coded in a fringe pattern may be modified in order to allow more convenient analysis. One such method relies on placing an active beam-forming device, e.g. an LCOS modulator, in the reference beam of the interferometer [5]. This makes it possible to reduce the number of fringes in the final interferogram, correct systematic errors or introduce a proper reference wavefront (including conical or other higher-order wavefronts if necessary). An example of an active interferogram generation process, resulting in the subtraction of an initial microelement shape in order to facilitate the measurement of the deformation of a vibrating object, is shown in Fig. 1. The general scheme of the fringe pattern (FP) analysis process is shown in Fig. 2. After passive or active FP generation, acquisition and preprocessing, the fringe pattern is analysed. Although phase measuring methods are used in most commercial systems, two alternative approaches should be addressed:
Fig. 1. Active phase correction of a micromembrane: a) initial interferogram obtained in a Twyman-Green interferometer, b) phase map mod 2π used for phase correction, c) interferogram after phase correction, and shapes computed from d) the initial and e) the final interferogram.
• intensity methods, in which we work passively on the image intensity distribution(s) captured by a detector [6]. These include: fringe extremum localization methods (skeletoning, fringe tracking), which are widely applied in fault detection or NDT, coupled with neural network or fuzzy logic approaches [7]; phase evaluation by regularization methods [8], which in principle may deliver the unwrapped phase directly from a fringe pattern, but which at present are computationally expensive and suffer from multiple restrictions; and contrast extremum localization methods for white-light interferometry;
18
New Methods and Tools for Data Processing
• phase methods, for which we actively modify the fringe pattern(s) in order to provide the additional information needed to solve the sign ambiguity [6,9]. Temporal heterodyning (introducing running fringes), realized electronically, is very advantageous as it delivers the phase directly with no 2π ambiguity. For a long time it required scanning the measurement field of view with a single detector; recently, however, CMOS photo-sensor technology has made it possible to add functionality [10] and create light detection systems with parallel phase detection at all pixels. This requires high-end camera systems with active pixel sensors and high-speed video capabilities. Optical metrology, being a smaller market, does not generate commercially available specialized CMOS sensors for temporal heterodyne phase analysis; however, I strongly believe that in future the extended usage of optical metrology will show the market that this functionality is worth the extra expense of additional silicon real estate and development cost. When that happens, we will experience the next revolution in fringe-based optical metrology, allowing rapid and highly accurate analysis of arbitrary fringe patterns. Spatial heterodyning (the Fourier transform method, PLL and spatial carrier phase shifting methods) grows in importance with the development of high-resolution CCD and CMOS detectors, the need to analyse time-varying objects, and the need to perform measurements in unstable environments. Temporal and spatial phase shifting are discrete versions of the above methods, in which the time- or spatially-varying interferogram is sampled over a single period. The extended Fourier analysis of these methods [9] allows their full understanding and the development of numerous algorithms which are insensitive to a variety of errors, though usually at the expense of an increased number of images required for the phase calculations.
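As a minimal illustration of temporal phase shifting, the classic four-frame algorithm recovers the wrapped phase from frames shifted by π/2. The single-harmonic signal model and all numerical values below are assumptions for demonstration, not taken from the references.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Classic 4-step algorithm for frames shifted by alpha = 0, pi/2, pi, 3*pi/2:
    I0 - I2 = 2*a1*cos(phi), I3 - I1 = 2*a1*sin(phi),
    so phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi]."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic demonstration with the assumed model I = a0 + a1*cos(alpha + phi)
rng = np.random.default_rng(1)
phi_true = rng.uniform(-np.pi, np.pi, (64, 64))
frames = [0.5 + 0.4 * np.cos(a + phi_true)
          for a in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
phi_est = four_step_phase(*frames)
```

Note that the recovered phase is still wrapped modulo 2π; turning it into a continuous map is exactly the unwrapping problem discussed later in the text.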
We experience a constant evolution of analysis methods and algorithms; however, the most significant solutions and rapid changes may come from hardware modifications.
Fig. 2. The general scheme of the fringe pattern analysis process.
A very good example of such a procedure is the evolution of spatial phase shifting and spatial carrier phase shifting methods. They were first introduced into interferometric, holographic and ESPI configurations for dynamic event analysis in the mid to late eighties [11]; however, they have only recently been implemented commercially in single-shot interferometers [12], which allow measurement of an object in the presence of significant vibrations, or measurement of how the sample actually vibrates (thanks to a micropolarizer phase-shifting array overlaid on the CCD detector). Such a compact, hardware-based solution efficiently brings optical metrology to industry and other customers, and may in future have a much higher impact on intelligent manufacturing, medical or multimedia applications.
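The spatial-carrier idea can be sketched with a Takeda-style Fourier-transform demodulation of a single synthetic fringe pattern: isolate one carrier sideband in the spectrum, shift it back to DC, and take the angle. The hard spectral window and all numerical values here are illustrative assumptions.

```python
import numpy as np

def ft_demodulate(I, f0x):
    """Fourier-transform demodulation of a fringe pattern with a
    horizontal spatial carrier f0x (cycles/pixel). Sketch only: the
    sideband is selected with a hard circular window of assumed radius."""
    ny, nx = I.shape
    S = np.fft.fft2(I)
    fx = np.fft.fftfreq(nx)
    fy = np.fft.fftfreq(ny)
    FX, FY = np.meshgrid(fx, fy)
    mask = (FX - f0x) ** 2 + FY ** 2 < (f0x / 2) ** 2   # keep +f0x sideband
    c = np.fft.ifft2(S * mask)
    x = np.arange(nx)
    # Remove the carrier and take the wrapped phase
    return np.angle(c * np.exp(-2j * np.pi * f0x * x)[None, :])

# Demonstration on a synthetic pattern with an assumed smooth test phase
ny = nx = 128
yy, xx = np.mgrid[0:ny, 0:nx]
phi_true = 0.8 * np.sin(2 * np.pi * xx / nx) * np.sin(2 * np.pi * yy / ny)
f0 = 0.25
I = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * xx + phi_true)
phi_est = ft_demodulate(I, f0)
```

A single frame suffices, which is exactly why this family of methods suits time-varying objects and unstable environments; the price is that the measured phase must be band-limited well inside the carrier separation.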
A new approach, which has gained a lot of attention during the last ten years, is phase reconstruction through digital holography [13]. This is again a concept which became practical due to the availability of high-resolution matrix detectors. Digital recording and numerical reconstruction of holograms enable convenient phase manipulation [14] which, without the need to produce secondary interferograms (as in classical holographic interferometry [15]), makes it possible to perform a wide range of measurements, especially in support of microtechnology. Another very interesting issue connected with digital holography and digital holographic interferometry is the possibility of performing remote measurements or structure monitoring. This relies on capturing and transferring digital data through the Internet, with optoelectronic near-real-time reconstruction of the digital holograms at a distant location [14,16]. The great challenges for DH and DHI are to increase significantly the object size and to develop a versatile camera for a variety of industrial and medical applications. The next constant challenge in fringe pattern analysis is phase unwrapping. As mentioned above, the only method which measures the phase with no 2π ambiguity is temporal heterodyning. All other methods, including [17]:
- path-dependent and path-independent phase unwrapping,
- hierarchical unwrapping,
- regularized phase-tracking techniques,
have several limitations and are often computationally expensive. We still await a new, revolutionary approach to solve this problem efficiently; it slows down the phase calculations significantly and often prevents full automation of the measurement process. The phase unwrapping procedures finalize the fringe measurement process, which reduces a fringe pattern to a continuous phase map (see Fig. 2). However, to solve a particular engineering problem, phase scaling [18], which converts the phase map into a physical quantity, has to be implemented.
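A minimal path-following (Itoh-type) unwrapper shows why the problem is hard: it is path dependent and breaks down in the presence of residues or noise. The sketch below works only on a smooth, residue-free synthetic phase, which is an assumption for demonstration.

```python
import numpy as np

def unwrap_rows_from_seed_column(phi_w):
    """Minimal 2-D path-following unwrap (sketch): Itoh's method applied
    along the first column, then along each row. Being path dependent,
    it fails wherever residues are present - exactly the hard part the
    text alludes to."""
    out = phi_w.copy()
    out[:, 0] = np.unwrap(phi_w[:, 0])   # seed column
    return np.unwrap(out, axis=1)        # then each row from its seed

# Demonstration on a smooth synthetic phase (assumed), up to ~10 rad
ny = nx = 100
yy, xx = np.mgrid[0:ny, 0:nx]
phi_true = 0.002 * ((xx - 50) ** 2 + (yy - 50) ** 2)
phi_wrapped = np.angle(np.exp(1j * phi_true))
phi_unwrapped = unwrap_rows_from_seed_column(phi_wrapped)
```

On this clean input the result differs from the true phase only by a global multiple of 2π; on real speckle-corrupted data the same scan-line strategy propagates errors across the whole field, which is why the more robust (and slower) methods listed above exist.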
Further processing is strongly application-oriented and is developing rapidly due to the strong need for full-field metrology data in CAD/CAM/CAE, rapid prototyping, intelligent manufacturing, medicine and multimedia technology. To show the importance and complexity of this stage of data processing, I refer to just one example, connected with the great demand for realistic imaging of real three-dimensional objects in multimedia techniques [19]. These 3D objects are most often used in computer-generated scenes and works with (virtual reality, simulators and games) or without (film and animation) implemented interaction. In general, multimedia techniques require information about the shape and texture of existing 3D objects in a form compatible with existing applications. For shape representation of virtual 3D objects, a triangle mesh or parametric surface should be used [20]. Also, to deliver additional colour information, a texture should be created and mapped onto the virtual model [21]. The processing path of a 3D object is shown in Fig. 3. Structured light projection systems deliver information about a 3D object in the form of (x,y,z,R,G,B) co-ordinates from a single direction, known as two-and-a-half-dimensional (2.5D) data. In consequence, an object has to be measured from N overlapping directions to cover the whole surface and to create a complete virtual representation. In most cases, each directional cloud of points (CoP) is located in its own co-ordinate space, because the relative position of object and system changes during measurement. After capture, the data are preprocessed. The software environment imports data in the form of CoPs. It should work efficiently with a huge number of points, sometimes more than ten million. It has to enable the user to pre-process, fit directional CoPs, and convert and export them to a compatible form. Pre-processing algorithms are used for data smoothing, noise removal and simplification of the results in order to decrease the number of points. The component CoPs are automatically merged to create one virtual object. Next, a triangle mesh or parametric description is calculated from the main CoP, with a texture map attached. Finally, the virtual object is exported into a format compatible with the currently supported application.

Fig. 3. Processing path of a 3D object.

In order to illustrate the measurement challenges, the complexity of the processing and the diversity of the objects to be scanned, a virtual model of full hussar armor rendered in a virtual reality application [19] is shown in Fig. 4. The measurements were done in the Kórnik castle (Poland). During a two-week measurement session, more than 50 objects were scanned from more than 800 directions. The raw measurement data take up approximately 80 GB of hard drive space, and the number of measurement points is greater than 2 billion.
Fig. 4. Virtual model of full hussar armor rendered in virtual reality application: a-d) different views.
Recently, the main challenge facing optical metrology specialists is not only to provide quick and accurate measurements in arbitrary environments, but also to prepare the measurement data for further complex analysis and application-oriented tasks.
3 New challenges and solutions in development of novel instrumentation for micromeasurements

Novel materials, microcomponents and microsystems (MEMS/MOEMS, electronic chips) require constant modification of measurement tools and procedures. To date this has been realized using bulk optics systems (Fig. 5a); however, such tools often cannot meet the requirements of micromeasurements, which include: integrated multiple functions; improved performance, specifically high spatial and temporal resolution and nanometre accuracy; inexpensive, compact and batch-fabricated designs; portability; low power consumption; and easy, massively parallel operation. Supporting micro- and nanotechnology with measurement and testing requires providing "rulers" that allow measurement of very small dimensions. These "rulers" should be based on a novel strategy which fits the dimensions of the object to the measurement tool (Fig. 5b). This can be achieved with a lab-on-chip strategy and/or an M-O platform approach. Novel MEMS and MOEMS technologies offer new possibilities to create measuring devices. Usually micro-optical
MEMS consist of elements used to shape and steer an optical beam (actively or passively) and of electro-optical elements (laser diodes, detectors, etc.). For these functionalities generalized platforms are needed, which simplify the assembly of the MOEMS. The platforms then become an enabling technology for the design of complex micro-optical systems in which micro-optical components are fabricated with extremely tight dimensional tolerances, precise positioning on chip, and well-controlled microactuation. Following this strategy, the first new measuring architectures have been proposed: a Michelson interferometer with a MEMS-based actuator [22], a waveguide-based multifunctional microinterferometric system [23] and an on-chip integrated optical scanning confocal microscope [24].
Fig. 5. From macroscopic scale to microsystem concept of optical metrology (courtesy of C. Gorecki).
The first one takes the form of an in-plane beam-steering platform (micro-optical bench, MOB), which consists of a passive micromirror and a beamsplitter integrated with a movable micromirror (Fig. 6) and fixing elements for mounting optical devices (diffractive elements, laser diodes, etc.). Such a device finds applications in low-cost, mass-produced miniature spectroscopy, but it can easily be modified into an active Twyman-Green interferometer allowing microshape and out-of-plane displacement measurement.
Fig. 6. Michelson interferometer with MEMS-based actuator: a) scheme, b) photograph of a comb drive actuator with mirror.
Another example of a miniature measurement system based on micro-optics is the novel multifunctional waveguide microinterferometer produced with low-cost technology (moulding) and material (PMMA) (Fig. 7). It consists of one or several measurement modules, including:
Fig. 7. The scheme of multifunctional integrated waveguide microinterferometric system.
- a grating (moiré) microinterferometer (or ESPI) for in-plane displacement/strain measurements [25],
- a Twyman-Green interferometer (or digital holographic interferometer) for out-of-plane displacement/shape measurement [23,25],
- a digital holographic interferometer for u, v, w displacement determination [26].
The system also includes an illuminating/detection module, in which a VCSEL light source and a CMOS matrix are integrated on one platform, and may include an active beam manipulation module which introduces phase shifting or linear carrier fringes for rapid interferogram analysis. The next example, the on-chip scanning confocal microscope, is obtained by the "smart pixel" solution [24]. The individual pixel is configured from a vertical-cavity surface-emitting laser (VCSEL) flip-chip bonded to a microactuator that moves the integrated microlens up and down, flying directly above the specimen (Fig. 8). The use of the optical feedback of the laser cavity as an active detection system simplifies the microscope design, because the light source and detector are unified parts of the VCSEL itself. The microscope can be fabricated as a single device (Fig. 8a) or as an array-type device (Fig. 8b), the so-called multi-probe architecture. The focusing system of the multi-probe microscope must contain an array of microlenses, where each microlens is moved by an individual vertical actuator. This can be constructed as a two-silicon-wafer system, where one wafer carries an array of microlenses on moving microstructures (membranes, beams) and the second contains the steering electrodes. Using such an array of confocal microscopes, light from multiple pixels can be acquired simultaneously. Each microscope is able to capture 2-D images (and 3-D object reconstructions) with improved dynamic range and improved sensitivity due to the independent control of illumination of each pixel. The miniature confocal microscope can possess a lateral resolution of 1 to 2 µm. The multiprobe (array) approach will in future allow the fundamental limitation of single-optical-axis imaging to be overcome, namely the tradeoff between the field of view (FOV) and image resolution.
If this bottleneck problem is solved, industry will be able to check at high speed the quality of new products produced by silicon technologies, which in turn will support the development of micro-optics wafer-based technology (Fig. 9).
Fig. 8. Chip-scale optical scanning confocal microscope: a) an individual “smart pixel”, b) multiprobe system.
Fig. 9. An exemplary silicon wafer with a multitude of active micromembranes requiring parallel testing: a) photo of the wafer, b) close-up of individual elements, c) result of a single micromembrane shape measurement, and d) processed results in the form of the distribution of P-V values over the whole wafer (average deformation of the 0.45 × 0.45 mm² membranes, in nm).
Interestingly, commercial realization of an array microscope has already started [27]. Researchers from DMETRIX Inc. recently introduced a new generation of optical system with FOV-to-physical-lens-diameter ratio (FDR) values around eight (for classical microscope objectives this value is on the order of 25 to 50). This allows a large number of microscope objectives to be assembled to simultaneously image different parts of the same object at high resolution. The system consists of an array of multi-element micro-optical systems overlaying a 2-D detector array (Fig. 10). Each of the individual (aspheric) optical systems has a numerical aperture of 0.65 and an FOV on the order of 250 µm in diameter. The specially designed custom complementary image detector captures the data at 2500 frames/s with 3.5 µm pixels; it has 10 parallel output channels and operates at a master clock of 15 MHz. The design of the whole array as a monolithic ensemble does not require any form of stitching or image postprocessing, making the completely reconstructed image available immediately after completion of the scan. DMETRIX's microscope is focused on biological applications (histopathology); however, the array approach can be extended to other modalities, including epi-illumination microscopy, confocal microscopy, and interferometric and digital holographic microscopes. It is anticipated that the development of array-based, ultra-fast, high-resolution microscope systems will launch the next chapter of digital microscopy. It
is difficult to predict whether this is mere evolution or a revolution in micromeasurement, as it depends strongly on the funding allocated to the implementation of this concept.
Fig. 10. The 8×10 array of miniature microscope objectives constructed from several monolithic plates [27].
4 Conclusions

Revolutions in science do not happen often; however, they influence our lives significantly. We have certainly experienced the revolution connected with the introduction of lasers, powerful desktop computers and matrix detectors. However, several problems remain to be solved. At the moment, the evolution of active optoelectronic and MEMS-based devices, as well as of phase analysis and processing methods, brings us to a higher level of fulfilling the requirements formulated by the users of measurement systems. Several efficient solutions have been demonstrated. The presented concepts of micromeasurement systems demonstrate that sophisticated photonic and micromechanical devices, and their associated electronic control, can be made small, low power, and inexpensive, even permitting the device to be disposable. We are also close to converting our 2D image world into a 3D or even 4D one based on active data capture, processing and visualization. If this concept is fully implemented, it will be a revolution in IT technologies; however, it is a real challenge for system designers and software developers. On the other hand, possible future analysis by the temporal heterodyning method, performed electronically by customized CMOS cameras, may totally transform our software-based fringe pattern analysis concept. However, a hardware-based technological revolution requires a critical mass of product quantity; otherwise it is not financially viable, and is therefore doomed to evolutionary rather than revolutionary changes.
5 Acknowledgments

We gratefully acknowledge the financial support of the EU within the Network of Excellence for Micro-Optics (NEMO) and of the Ministry of Scientific Research and Information Technology within the statutory work realized at the Institute of Micromechanics and Photonics, Warsaw University of Technology.
6 References
1. Bruning, J.H., et al. (1974) Digital wavefront measuring interferometer for testing optical surfaces and lenses. Applied Optics 13:2693-2703
2. Takeda, M., Ina, H., Kobayashi, S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. JOSA 72:156-160
3. Takeda, M. (1997) The philosophy of fringes – analogies and dualities in fringe generation and analysis. In: Jüptner, W., Osten, W. (eds) Akademie Verlag Series in Optical Metrology 3:17-26
4. Kujawinska, M., Kosinski, C. (1997) Adaptability: problem or solution? In: Jüptner, W., Osten, W. (eds) Akademie Verlag Series in Optical Metrology 3:419-431
5. Kacperski, J., Kujawinska, M., Wang, X., Bos, P.J. (2004) Active microinterferometer with liquid crystal on silicon (LCOS) for extended range static and dynamic micromembrane measurement. Proc. SPIE 5532:37-43
6. Robinson, D.W., Reid, G.T. (eds) (1993) Interferogram analysis: digital fringe pattern measurement techniques. IOP Publishing, Bristol
7. Jüptner, W., Kreis, Th., Mieth, U., Osten, W. (1994) Application of neural networks and knowledge-based systems for automatic identification of fault-indicating fringe patterns. Proc. SPIE 2342:16-24
8. Servin, M., Marroquin, J.L., Cuevas, F. (1997) Demodulation of a single interferogram by use of a two-dimensional regularized phase-tracking technique. Opt. Eng. 36:4540-4548
9. Malacara, D., Servin, M., Malacara, Z. (1998) Optical testing: analysis of interferograms. Marcel Dekker, New York
10. Lauxtermann, S. (2001) State of the art in CMOS photo sensing and applications in machine vision. In: Osten, W., Jüptner, W. (eds) Proc. Fringe 2001. Elsevier, Paris: 539-548
11. Kujawinska, M. (1993) Spatial phase measurement methods. In: Robinson, D.W., Reid, G.T. (eds) Interferogram Analysis. IOP Publishing, Bristol: 141-193
12. Millerd, J., et al. (2005) Modern approaches in phase measuring metrology. Proc. SPIE 5856:14-22
13. Schnars, U. (1994) Direct phase determination in hologram interferometry with use of digitally recorded interferograms. JOSA A 11:2011-2015
14. Michalkiewicz, A., et al. (2005) Phase manipulation and optoelectronic reconstruction of digital holograms by means of LCOS spatial light modulator. Proc. SPIE 5776:144-152
15. Kreis, Th. (1996) Holographic Interferometry. Akademie Verlag, Berlin
16. Baumbach, T., Osten, W., Kopylow, Ch., Jüptner, W. (2004) Application of comparative digital holography for distant shape control. Proc. SPIE 5457:598-609
17. Ghiglia, D.C., Pritt, M.D. (1998) Two-dimensional phase unwrapping. John Wiley & Sons, New York
18. Osten, W., Kujawinska, M. (2000) Active phase measuring metrology. In: Rastogi, P., Inaudi, D. (eds) Trends in Optical Nondestructive Testing and Inspection. Elsevier Science BV: 45-69
19. Sitnik, R., Kujawinska, M., Zaluski, W. (2005) 3DMADAMC system: optical 3D shape acquisition and processing path for VR applications. Proc. SPIE 5857 (in press)
20. Foley, J.D., van Dam, A., Feiner, S.K., Hughes, J.F., Phillips, R.L. (1994) Introduction to Computer Graphics. Addison-Wesley
21. Saito, T., Takahashi, T. (1990) Comprehensible rendering of 3-D shapes. SIGGRAPH '90: 197-206
22. Sasaki, M., Briand, D., Noell, W., de Rooij, N.F., Hane, K. (2004) Three-dimensional SOI-MEMS constructed by buckled bridges and vertical comb drive actuator. IEEE J. Selected Topics in Quantum Electronics 10:456-461
23. Kujawinska, M., Gorecki, C. (2002) New challenges and approaches to interferometric MEMS and MOEMS testing. Proc. SPIE 4900:809-823
24. Gorecki, C., Heinis, D. (2005) A miniaturized SNOM sensor based on the optical feedback inside the VCSEL cavity. Proc. SPIE 5458:183-187
25. Kujawinska, M. (2002) Modern optical measurement station for micro-materials and microelements studies. Sensors and Actuators A 99:144-153
26. Michalkiewicz, A., Kujawinska, M., Krezel, J., Salbut, L., Wang, X., Bos, P.J. (2005) Phase manipulation and optoelectronic reconstruction of digital holograms by means of LCOS spatial light modulator. Proc. SPIE 5776:144-152
27. Olszak, A., Descour, M. (2005) Microscopy in multiplex. OE Magazine, SPIE, Bellingham, May: 16-18
Interpreting interferometric height measurements using the instrument transfer function Peter de Groot and Xavier Colonna de Lega Zygo Corporation Laurel Brook Rd, Middlefield, CT 06455, USA
1 Introduction Of the various ways of characterizing a system, one of the most appealing is the instrument transfer function or ITF. The ITF describes system response in terms of an input signal’s frequency content. An every-day example is the graph of the response of an audio amplifier or media player to a range of sound frequencies. It is natural therefore to characterize surface profiling interferometers according to their ITF. This is driven in part by developments in precision optics manufacturing, which increasingly tolerance components as a function of spatial frequency [1]. Metrology tools must faithfully detect polishing errors over a specified frequency range, and so we need to know how such tools respond as a function of lateral feature size. Here we review the meaning, applicability, and calculation of the ITF for surface profiling interferometers. This review leads to useful rules of thumb as well as some cautions about what can happen when we apply the concept of a linear ITF to what is, fundamentally, a nonlinear system. Experimental techniques and example results complete the picture. Our approach is informal, as is appropriate for a conference paper. The foundation for a rigorous understanding of the ITF is well documented in the literature, including the well-known books by Goodman [2].
2 Linear systems ITF is most commonly understood to apply to linear systems, which share certain basic properties that lend themselves naturally to frequency analysis. Principally, the response of a linear system is the sum of the responses that each of the component signals would produce individually. Thus if
two frequency components are present in an input signal, we can propagate them separately and add up the results. Another property of linear systems is that the response for a given spatial frequency f along a coordinate x is given by a corresponding ITF value characteristic of the system alone, independent of signal magnitude and phase. Thus to determine the output g′ given an input g, we write

G′(f) = ITF(f) G(f)    (1)

where

G(f) = FT{g(x)},   G′(f) = FT{g′(x)}    (2)

and the Fourier Transform is defined by

FT{ · } = ∫ { · } exp(−2πifx) dx,    (3)

with the integral taken from −∞ to +∞. This is a powerful way of predicting system response to diverse stimuli.
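A minimal numerical sketch of Eq. (1): filter a synthetic profile through a hypothetical low-pass ITF in the frequency domain. The signal and cutoff frequencies are illustrative values, not taken from the paper:

```python
import numpy as np

# Sample a test profile g(x): two spatial frequencies on a 1 mm window.
n, length = 1024, 1.0e-3                      # samples, metres
x = np.linspace(0.0, length, n, endpoint=False)
g = np.sin(2*np.pi*20e3*x) + 0.5*np.sin(2*np.pi*200e3*x)  # 20 and 200 cycles/mm

# Hypothetical low-pass ITF: unity below 100 cycles/mm, zero above.
f = np.fft.fftfreq(n, d=length/n)             # cycles per metre
itf = (np.abs(f) < 100e3).astype(float)

# Eq. (1): G'(f) = ITF(f) G(f), then back to the spatial domain.
g_out = np.fft.ifft(itf * np.fft.fft(g)).real

# The 200 cycles/mm component is rejected; the 20 cycles/mm one survives.
```

Because the system is assumed linear, the two components propagate independently: the output is exactly the 20 cycles/mm term, with the high-frequency term suppressed by the zero ITF value at its frequency.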
3 OTF for optical imaging

A familiar ITF is the optical transfer function or OTF, which describes how an optical system reproduces images at various spatial frequencies. The modulus of the OTF is the modulation transfer function (MTF). An approach to the OTF is to consider the effect of a limiting aperture in the pupil plane of an unaberrated imaging system. A plane wavefront generated by a point source illuminates a perfectly flat object (top left diagram in Fig. 1). The object reflectivity profile may be dissected in terms of sinusoidal amplitude gratings over a range of spatial frequencies. Allowing each constituent grating its own DC offset, each grating generates three diffraction orders: −1, 0, +1. The separation of the ±1 orders in the pupil plane is proportional to the grating frequency. According to the Abbé principle, if the pupil aperture captures all of the diffracted beams, then the system resolves the corresponding frequency. Assuming that the optical system is perfect and that it obeys the sine condition, the principal rays in Fig. 1 show that the optical system faithfully reproduces the amplitude reflectivity frequency content up to a limiting
frequency NA/λ. This coherent imaging MTF is therefore a simple rectangle, as shown in the top-right of Fig. 1.

Fig. 1. Illustration of incoherent and coherent light imaging systems (left) and the corresponding MTF curves (right).
The reasoning is much the same for an extended, incoherent source (lower left of Fig. 1) [3], although the results are very different. The various source points in the pupil generate overlapping, mutually incoherent images that add together as intensities. As we move across the pupil, the obscurations of the ±1 diffraction orders vary. The calculation reduces to the autocorrelation of the pupil-plane light distribution, which for a uniformly-filled disk is

MTF(f) = (2/π) [φ − cos(φ) sin(φ)],  where  φ = cos⁻¹(λf / 2NA).    (4)

This curve, shown in the lower right of Fig. 1, declines gradually from unity at zero frequency out to twice the coherent frequency limit. Incoherent imaging is often preferred in microscopes because of this higher frequency limit and softer transfer function, which suppresses ringing and other coherent artifacts. Note that coherent systems are linear in amplitude and incoherent systems are linear in intensity. This leads to an ambiguity in the ITF for partially coherent light, addressed pragmatically by the apparent transfer function, which uses the ratio of the output and input modulations for single, isolated frequencies while simply ignoring spurious harmonics [4].
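The diffraction-limited incoherent MTF is easy to evaluate numerically. The sketch below is illustrative (the wavelength and NA are example values, not from the paper); the function goes to zero at the cutoff 2NA/λ:

```python
import numpy as np

def incoherent_mtf(f, wavelength, na):
    """Diffraction-limited incoherent MTF, Eq. (4); zero beyond f = 2 NA / lambda."""
    s = np.clip(wavelength * np.asarray(f, dtype=float) / (2.0 * na), 0.0, 1.0)
    phi = np.arccos(s)
    return (2.0 / np.pi) * (phi - np.cos(phi) * np.sin(phi))

# Example: a 0.8 NA objective at 550 nm; cutoff = 2*0.8/550e-9 ≈ 2909 cycles/mm.
f = np.linspace(0.0, 3.0e6, 7)                # cycles per metre
m = incoherent_mtf(f, 550e-9, 0.8)
print(m[0])                                   # ≈ 1.0 at zero frequency
```

The clip to [0, 1] simply extends the formula past the cutoff, where the autocorrelation of the pupil (and hence the MTF) vanishes.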
4 ITF for optical profilers The ITF is so useful that it is tempting to use it even for systems that are explicitly nonlinear. Traditional tactile tools, for example, are nonlinear at high spatial frequencies because of the shape of the stylus; but their response is often plotted as a linear ITF [5]. If we are lucky, we find that over some limited range the system is satisfactorily approximated as linear. This is the case of optical profilers as well, with appropriate cautions.
Fig. 2. Comparison of the diffracted beams from amplitude (upper: reflectivity grating, PV = 100%) and phase (lower: height profile grating, PV = λ/4) gratings illustrates the complex diffraction behavior of height objects, leading to nonlinear response when profiling surface heights.
Returning to the elementary concept of constituent gratings, consider coherent illumination of an object that has uniform reflectivity but a varying height. The surface impresses upon the incident wavefront a phase profile that propagates through the system to the image plane as a complex amplitude. Using any one of the known interferometric techniques, we can estimate the imaged phase profile and convert this back to height. Just as before, a Fourier Transform of the object wavefront yields sinusoidal phase gratings over a range of spatial frequencies. Each grating generates diffracted beams, although Fig. 2 shows that for phase gratings, the light spreads into higher angles than just the -1, 0, 1 orders present with amplitude gratings. Generally, the deeper the grating, the stronger and more numerous the higher diffraction orders, resulting in a very different situation from simple imaging. Spatial frequencies couple together, resulting in harmonics and beat signals in the imaged wavefront, inconsistent
with the simple formula of Eq. (1). The response of the system is now inseparable from the nature of the object itself. Unavoidably, interferometers are nonlinear devices, as are all optical tools that encode height information as wavefront phase. The solution to this dilemma is to restrict ourselves to small surface heights, where small means much less than λ/4. For such small heights, diffraction from a phase grating is once again limited to the −1, 0, +1 orders and the higher orders become insignificant. The optical system responds to these small surface heights in much the same way as it images pure intensity objects, suggesting that we may be able to approximate the ITF by the OTF. This last idea gains credence by considering a simple example. Arrange an interferometer so that the reference phase is balanced at the point where the intensity is most sensitive to changes in surface height h. Then

I(h) = I₀ + I′ sin(kh)    (5)

where I₀ is the DC offset, I′ is the amplitude of the intensity signal and k = 2π/λ. Inversion of Eq. (5) as the approximation

h ≈ (I − I₀) / (I′k)    (6)
shows a linear relationship between height and intensity. More sophisticated algorithms reduce in this limit to the same kind of simple linear equation. For a coherent system such as a laser Fizeau, the variation I − I₀ in Eq. (6) is proportional to the amplitude, since it is the product of the reference and object waves that gives rise to the measured intensity. For small surface heights, the coherent interferometer ITF is the same as the coherent imaging OTF. Similarly, for an incoherent system, we add together the interference intensity patterns for multiple source points, a calculation that mimics that of the incoherent imaging OTF. To summarize the key conclusions of this section: (1) The measurement of surface heights optically, e.g. by interferometry, is a fundamentally nonlinear process. (2) A linear interferometer ITF is a reasonable approximation in the limit of very small surface deviations (much less than λ/4). (3) In the limit of small surface deviations, the interferometer ITF is the same as its imaging OTF.
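The small-height argument can be checked numerically. This sketch (with illustrative values for λ, I₀ and I′) compares the linear inversion of Eq. (6) against the exact intensity of Eq. (5):

```python
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
I0, Ic = 1.0, 0.8                             # DC offset and fringe amplitude

def intensity(h):
    """Eq. (5): quadrature-biased interference intensity."""
    return I0 + Ic * np.sin(k * h)

def height_linear(I):
    """Eq. (6): linear inversion, valid only for h << lambda/4."""
    return (I - I0) / (Ic * k)

h_small = 5e-9                                # 5 nm, far below lambda/4
h_large = 120e-9                              # approaching lambda/4 ≈ 158 nm

err_small = abs(height_linear(intensity(h_small)) - h_small)
err_large = abs(height_linear(intensity(h_large)) - h_large)
# err_small is picometre-scale; err_large is tens of nanometres.
```

The error of the linearization grows as roughly k²h³/6 (the cubic term of the sine), which is why the λ/4 restriction stated above is essential for treating the instrument as linear.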
5 Measuring interferometer ITF

Fig. 3. Comparison of the theoretical ITF magnitude (Eq. (4)) and experimental results for a white-light interference microscope using a 100X, 0.8 NA Mirau objective and incoherent illumination. The data derive from the profile of a 40-nm step object.
As a consequence of conclusion (3) above, it is sufficient to describe an interferometer's imaging properties to infer how it will respond to shallow height features. Of the many ways to measure OTF, one of the most convenient is to image a sharp reflectivity step [6], generated e.g. by depositing a thin layer (much less than λ/4) of chrome over one half of a flat glass plate. The idea is to determine the frequency content of the image via Fourier analysis and compare it to that of the original object. The ratio of the frequency components directly provides the OTF. The experiment does not require interferometry; we may even wish to block the reference beam to suppress interference effects. Curiosity at least demands that we attempt the same experiment by directly profiling a step height [7]. The ITF in Fig. 3 for one of our white-light interferometers illustrates how closely the magnitude of the resulting experimental ITF matches the prediction based on the incoherent imaging MTF calculated from Eq. (4). The resolution of low-magnification systems is often limited by the camera. Fig. 4 shows the ITF of our laser Fizeau interferometer configured for coherent imaging. The coherent optical ITF is assumed equal to one for the theory curve over the full spatial frequency range shown, while the finite pixel size modulates the ITF by a sinc function.
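The Fourier-ratio idea can be sketched as follows; the Gaussian blur standing in for the instrument, and the profile dimensions, are purely illustrative:

```python
import numpy as np

def transfer_from_step(measured, ideal):
    """Estimate transfer-function magnitude from two profiles of the same step.

    The ratio of edge-spectrum magnitudes is taken only where the ideal
    spectrum is safely non-zero.
    """
    M, I = np.fft.rfft(measured), np.fft.rfft(ideal)
    mask = np.abs(I) > 1e-3 * np.abs(I).max()
    tf = np.zeros(M.shape[0])
    tf[mask] = np.abs(M[mask]) / np.abs(I[mask])
    return tf

# Ideal 40 nm step and a simulated "measured" profile blurred by the optics.
n = 512
ideal = np.where(np.arange(n) < n // 2, 0.0, 40e-9)
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
kernel /= kernel.sum()                         # normalized blur kernel
measured = np.convolve(ideal, kernel, mode="same")

itf = transfer_from_step(measured, ideal)
# itf ≈ 1 at low spatial frequencies and rolls off at high frequencies.
```

In the actual experiment, `measured` would be either the camera image of a chrome reflectivity step (giving the OTF) or the profiled height map of a shallow step (giving the quasi-linear ITF).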
Fig. 4. The predicted and experimental ITF curves for this 100-mm aperture coherent laser Fizeau interferometer are dominated by the lateral resolution of the 640×480 camera. Here the data stop at Nyquist because the sampling is too sparse above this frequency.
Fig. 5. Theoretical ITF curves for 2.5X, 5X, 20X and 100X microscope objectives illustrate the spatial frequency overlap achieved in typical microscope setups, and the influence of the camera at low magnification.
Fig. 5 shows the coverage of a range of microscope objectives in incoherent imaging, including the effects of the camera. At lower magnifications, the lobes correspond to frequencies for which the optical resolution surpasses that of the camera. This figure illustrates how a range of objectives on a turret provides complete coverage over a wide spatial frequency range.
6 Conclusions Much of this paper has emphasized the precariousness of using a linear ITF for what is fundamentally a nonlinear process of encoding height into the phase of a complex wave amplitude. A more accurate model begins with an explicit calculation of this amplitude, then propagates the wavefront through the system to determine what the instrument will do. Nonetheless, a kind of quasi-linear ITF is an increasingly common way to thumbnail the capabilities and limitations of interferometers in terms of lateral feature size, and to evaluate the effects of aberrations, coherence, defocus and diffraction [8]. As we have seen, the basic requirement for a meaningful application of a linear ITF is that the surface deviations be small. This allows us to estimate the expected behaviour for coherent illumination, as in laser Fizeau systems, and incoherent illumination, which is the norm for interference microscopes. Happily, in this limit of small departures, the profiling behaviour follows closely that of imaging, so that with appropriate cautions we can get a good idea of expected performance using the imaging OTF as a guide to the expected ITF.
References and notes
1. Wolfe, R., Downie, J., Lawson, J. (1996) Measuring the spatial frequency transfer function of phase-measuring interferometers for laser optics. Proc. SPIE 2870: 553-557
2. Goodman, J. (1985) Statistical Optics. John Wiley & Sons
3. To be truly incoherent, the source pupil should be much larger than the imaging pupil. Fig. 1 is a simplification to illustrate the basic idea.
4. Reynolds, G., DeVelis, J., Parrent, G., Thompson, B. (1989) The New Physical Optics Notebook: Tutorials in Fourier Optics. AIP: 139
5. Lehmann, P. (2003) Optical versus tactile geometry measurement: alternatives or counterparts. Proc. SPIE 5144: 183-196
6. Barakat, R. (1965) Determination of the optical transfer function directly from the edge spread function. J. Opt. Soc. Am. 55: 1217
7. Takacs, P., Li, M., Furenlid, K., Church, E. (1993) A step-height standard for surface profiler calibration. Proc. SPIE 1993: 65-74
8. Novak, E., Ai, C., Wyant, J. (1997) Transfer function characterization of laser Fizeau interferometer for high spatial-frequency phase measurements. Proc. SPIE 3134: 114-121
Are Residues of Primary Importance in Phase Unwrapping? Karl A. Stetson Karl Stetson Associates, LLC 2060 South Street Coventry, CT 06238
1 Introduction

Phase-step interferometry and related techniques have given rise to the problem of phase unwrapping, that is, how to add and subtract multiples of 2π to the values of a wrapped phase map in order to create the continuous phase distribution whose measurement is desired. Wrapped phase maps are obtained by calculating phase via the arctangent function, which can generate phase values over an interval of no more than 2π. Ghiglia and Pritt, in Reference 1, have discussed this problem and its many solutions at length, and the majority of techniques they present are based upon establishing what are called branch cuts that connect what are referred to as residues in the wrapped phase map. These branch cuts define paths across which unwrapping may not proceed, and a successful set of branch cuts will allow phase unwrapping to proceed around them so as to generate the most nearly continuous phase map possible. The purpose of this paper is to examine the concept of residues as applied to the phase maps generated in electronic holographic interferometry and to consider how they arise. Further, it suggests that residues are actually imperfect indicators of a more primary phenomenon in these phase maps. The goal of this discussion is to encourage the use of phase unwrapping methods that do not use residues and branch cuts in their operation [2,3].
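For a one-dimensional phase record free of residues, the add-or-subtract-multiples-of-2π procedure is straightforward. A brief sketch using NumPy's path-following unwrapper on a synthetic example (not data from the paper):

```python
import numpy as np

# A smooth phase ramp spanning several multiples of 2*pi ...
true_phase = np.linspace(0.0, 6 * np.pi, 200)

# ... observed only modulo 2*pi, as an arctangent-based method delivers it.
wrapped = np.angle(np.exp(1j * true_phase))   # values in (-pi, pi]

# Path-following unwrap: add/subtract 2*pi wherever a jump exceeds pi.
unwrapped = np.unwrap(wrapped)

# With no residues, the result matches the original up to a 2*pi offset.
offset = true_phase[0] - unwrapped[0]
print(np.max(np.abs(unwrapped + offset - true_phase)))  # ~0
```

The difficulties discussed in the remainder of the paper arise in two dimensions, where the result of such a path-following procedure can depend on the path taken whenever residues are present.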
2 Residues Residues are detected in discrete two-dimensional phase maps by making counterclockwise circuits around every set of four neighboring data points
in the phase map and summing the number of positive and negative phase transitions greater than π. The resulting sum around the circuit is usually zero, but will occasionally be plus or minus one, in which case a residue is detected and assigned to the center of the circuit of four points. Ref. 1 strongly relates residues in discrete, two-dimensional phase maps to residues as defined in the theory of complex functions of two dimensions, and it is shown that residues in bounded complex functions are associated with points where the amplitude of the function goes to zero. It has further been shown experimentally [4] that such phenomena exist in fully developed laser speckle patterns and that the points where the amplitude of a speckle pattern is zero exhibit what are called optical vortexes, points about which the phase cycles by 2π. Reference 4 goes on to identify the translation of such points as the main source of residues in speckle interferograms.
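The residue-detection circuit just described can be sketched directly. The example below (a synthetic illustration, not the paper's data) sums the wrapped phase differences counterclockwise around every 2×2 loop and tests the detector on a phase vortex:

```python
import numpy as np

def wrap(d):
    """Wrap a phase difference into the interval [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def residues(phase):
    """Sum wrapped differences counterclockwise around each 2x2 loop.

    Returns an array one smaller in each dimension than the phase map;
    entries are +1, 0 or -1 (in units of 2*pi) at the loop centres.
    """
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # along the first row
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # down the second column
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # back along the second row
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # up the first column
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A synthetic vortex: phase circulates by 2*pi about the array centre.
y, x = np.mgrid[-16:16, -16:16] + 0.5
vortex = np.arctan2(y, x)
print(residues(vortex).sum())                 # 1: a single residue detected
```

Away from the vortex the wrapped differences telescope to zero, so only the loop enclosing the zero-amplitude point registers a residue, consistent with the connection to optical vortexes discussed above.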
Fig. 1. A histogram of pixel values from a typical electronic hologram with the camera lens set to f/5.6.
In electronic holographic interferometry, also known as electronic speckle pattern interferometry, we may question the relevance of optical vortices and residues in two-dimensional complex functions. In Ref. 4, the speckle patterns examined were expanded so that their characteristic speckle size was much larger than the pixels of the camera recording the patterns. In a practical electronic holography system, it is common to have speckles that are smaller than the camera pixels. For example, with laser light at 633 nm and the camera lens set between f/5.6 and f/11, the speckle size will range from 4.25 µm to 8.5 µm. For a camera in the 2/3-inch format, the pixel cell size will be on the order of 11.6 by 13.5 µm. Furthermore, the speckle patterns are usually unpolarized and therefore not fully developed. The effect of this can be seen in Fig. 1, which shows a histogram of the pixels in a typical electronic holography image. Note that there are nearly no pixels with a value of zero. Furthermore, in electronic holography the phase function measured is the phase difference between two images of the same object with little if any lateral shift between their speckle patterns.
3 Phase Inclusions

As discussed in Section 2.5 of Ref. 1, the goal of unwrapping can be described as the elimination from the phase map of all transitions greater than π, and, if this is possible for the final phase map, there is really no difficulty. In such a case, as pointed out in the reference above, the unwrapping process is path independent and can proceed along any path with the same result. Such a phase map is also free of residues. In reality, this circumstance is rarely the case, and the number of transitions greater than π can only be minimized. The first assertion of this paper is the obvious one that any wrapped phase map containing residues will, by necessity, generate a final unwrapped phase map that contains some transitions greater than π. These remaining transitions greater than π are central to the thesis of this paper, and for convenience they require a name. Herráez et al. have referred to them as phase breaks, by which they distinguish them from phase wraps [5]. In this paper, we call them phase inclusions and make an analogy to particles of foreign matter in an otherwise homogeneous material. We may also think of these remaining transitions greater than π as gaps in the continuous phase that must be included in the final phase map, and thus the word inclusion seems additionally appropriate. The next assertion is that whenever a residue is detected in a phase map, a phase inclusion must exist between one pair of neighboring points among the four in the circuit. This requires amplification, so consider the example in Table 1, taken from a wrapped phase map of an actual electronic holography interferogram. There is a residue of –1 within the circuit of B3, B4, C4, & D3 and a residue of +1 in the circuit of C3, C4, D4, & D3. These residues are generated by the transitions greater than π between cells B3 & C3 and between cells C3 & D3. Simple trial and error will show that it is impossible to add or subtract any multiples of 256 from these
numbers in any pattern that will remove all transitions greater than π from the circuits surrounding the cells with the residues. The value of 215 at cell C3 is clearly the problem, and what must be done is to subtract 256 from it to leave –41. This will reduce the number of transitions greater than π from three to one and place that transition between the two residues. It is this final transition greater than π that we call a phase inclusion.

Table 1. A set of 25 points containing two residues, taken from a wrapped phase map generated by electronic holographic interferometry.
[Table 1 data: a 5 × 5 array of 8-bit wrapped phase values, rows A-E and columns 1-5, including the value 215 at cell C3 and residue markers of –1 and +1; the original tabular layout is not reproduced here.]
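The trial-and-error adjustment described for cell C3 can be reproduced numerically. Because the tabular layout of Table 1 did not survive reproduction, the neighboring values 50 and 24 below are hypothetical stand-ins; only the value 215 at C3 comes from the text (on the 8-bit scale, 256 corresponds to 2π and 128 to π).

```python
def count_inclusions(column, period=256):
    """Count neighboring transitions greater than half a period (i.e. > pi)."""
    return sum(abs(a - b) > period // 2 for a, b in zip(column, column[1:]))

col = [50, 215, 24]             # hypothetical B3, C3 = 215 (from the text), D3
print(count_inclusions(col))    # -> 2: both transitions exceed 128

col[1] -= 256                   # subtract one period from C3, leaving -41
print(count_inclusions(col))    # -> 0
```

In the full two-dimensional circuits of Table 1 the same subtraction reduces the count of transitions greater than π from three to one, and the surviving transition is the phase inclusion.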
Fig. 2. A residue map for a wrapped phase map from the electronic holographic interferogram shown in Fig. 3.
4 Relationship between Residues and Phase Inclusions

It is significant to note that the residues in Table 1 occur as a pair of opposite polarity, and, in phase-step holographic interferometry, residues generally do occur in bipolar pairs. The reason is that residues straddle phase inclusions. Residues occur singly when they are at the edge of the phase map and the corresponding phase inclusion is between two pixels along this edge. Otherwise, one phase inclusion will generate two bipolar residues, and the pair of residues indicates the presence of the inclusion. Figure 2 illustrates this for the wrapped phase map of a disk shown in Fig. 3. The positive residues are rendered as pixel values of 255, the negative residues as 0, and zero residues as 127. Values outside the object are rendered as zero.
Fig. 3. The wrapped phase map of a disk for which the residue map in Fig. 2 was calculated.
The pairing of residues is clear where they are relatively isolated, but becomes confused where they are clustered. The fact that the density of residues is greatest in the center of the disk, where the phase gradient of the deformation is greatest, and least at the edges, where the gradient is smallest, is consistent with a model of noise combining with the phase gradient to create phase inclusions. This is further supported by the fact that there are more pairs of bipolar residues in the horizontal direction, where the phase gradient is vertical. To get an indication of the noise level, consider Fig. 4, which shows a histogram of part of an interferogram of an undeformed object for which the phase gradient is zero. While most of the pixel values are within a range of about 20 units out of 255, a phase range of about 28 deg., there are pixel values spanning a range of 111 units, or 156 deg., and this approaches 180 deg., or π of phase. This random variation in calculated phase, combined with a phase gradient due to the object deformation, can easily produce phase inclusions in the data, and the higher the phase gradient, the more inclusions and residues. As noted above, we expect the phase inclusions to be aligned in the direction of the phase gradient.
Fig. 4. A histogram of an undeformed object showing pixel variation due to noise.
If phase inclusions could be identified a priori in a wrapped phase map, it would make unwrapping trivial, but they are only evident a posteriori, after unwrapping. Branch cuts based on residues serve to prevent phase inclusions from being unwrapped if they indicate correctly the locations of all phase inclusions. Unfortunately, residues may not always indicate the correct locations of phase inclusions. This is particularly true when phase inclusions occur diagonally, as shown in Table 2. Both sets of three phase inclusions have the same residue pattern, but they are quite different, and there is a question as to how the branch cuts are to be drawn. There would be a temptation to put a branch cut between the negative and positive residues in the center of the array, especially if other residues were available to connect to the residues at the edges, and such an incorrect branch cut would lead to serious errors in unwrapping.

Table 2. Residues for sets of diagonally occurring phase inclusions. The pixels are indicated as spots and the phase inclusions by arrows.
5 Conclusions

In discrete phase maps, phase inclusions and residues are inseparably linked, with phase inclusions being of primary importance and residues indicating where phase inclusions lie. Phase inclusions result when noise in the phase measurement combines with the gradient of the phase being measured to create steps greater than π that must not be unwrapped. Phase inclusions, unlike residues, cannot be used to guide phase unwrapping; residues, however, are inadequate guides because they do not clearly define the locations of phase inclusions. In general, then, it is better to use methods that do not rely on residues for phase unwrapping. To date, it would appear that there are only two such methods, which are cited in refs. 2 and 3. Of these, we recommend the method of ref. 2, which calculates unwrap regions based upon the idea that the locations of phase wraps depend upon the phase reference used in the arctangent calculation whereas the locations of phase inclusions do not. These unwrap regions are then used to
guide the unwrapping process in a way that guarantees that phase inclusions will be ignored.
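The path dependence that residues introduce can be sketched in a few lines. The tilted-plane values and the single noise-corrupted pixel below are illustrative choices, not data from the paper; the sketch only shows that a residue-free map unwraps to the same value along any path, while one strong noise spike riding on the gradient makes two paths disagree by 2π.

```python
import numpy as np

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_along(w, path):
    """Itoh unwrapping of the wrapped map w along a pixel path."""
    total = w[path[0]]
    for prev, cur in zip(path, path[1:]):
        total += wrap(w[cur] - w[prev])
    return total

y, x = np.mgrid[0:16, 0:16]
clean = wrap(0.5 * x + 0.3 * y)          # smooth tilted plane: residue-free

noisy = clean.copy()                      # one strong noise spike on the gradient
noisy[7, 7] = wrap(clean[7, 7] + 3.0)

# Path A skirts the corrupted pixel; path B runs straight through it.
path_a = [(0, j) for j in range(16)] + [(i, 15) for i in range(1, 16)]
path_b = ([(i, 0) for i in range(8)] + [(7, j) for j in range(1, 16)]
          + [(i, 15) for i in range(8, 16)])

# Residue-free map: both paths agree. Corrupted map: they differ by 2*pi.
print(np.isclose(unwrap_along(clean, path_a), unwrap_along(clean, path_b)))
diff = unwrap_along(noisy, path_a) - unwrap_along(noisy, path_b)
print(np.isclose(abs(diff), 2 * np.pi))
```

The 2π disagreement is exactly the error that branch cuts, or the unwrap regions of ref. 2, are meant to prevent.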
6 References

1. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping (John Wiley, New York, 1998), Chap. 2.
2. K. A. Stetson, J. Wahid, and P. Gauthier, “Noise-immune phase unwrapping by use of calculated wrap regions,” Appl. Opt. 36, 4830-4838 (1997).
3. T. J. Flynn, “Two-dimensional phase unwrapping with minimum weighted discontinuity,” J. Opt. Soc. Am. A 14, 2692-2701 (1997).
4. J. M. Huntley and J. R. Buckland, “Characterization of sources of 2π phase discontinuity in speckle interferograms,” J. Opt. Soc. Am. A 28, 3268-3270 (1995).
5. M. A. Herráez, J. G. Boticario, M. J. Lalor, and D. R. Burton, “Agglomerative clustering-based approach for two-dimensional phase unwrapping,” Appl. Opt. 44, 1129-1140 (2005).
Experimental Study of Coherence Vortices: Birth and Evolution of Phase Singularities in the Spatial Coherence Function

Wei Wang a, Zhihui Duan a, Steen G. Hanson b, Yoko Miyamoto a, and Mitsuo Takeda a

a The University of Electro-Communications, Dept. of Info. & Comm. Engg., Chofu, Tokyo, 182-8585, Japan
b Risoe National Laboratory, Dept. for Optics and Plasma Research, OPL-128, P.O. Box 49, 4000 Roskilde, Denmark
1 Introduction

Optical vortices have been known for a long time, and extensive studies have been made of their basic properties since the seminal work of Nye and Berry in the early 1970s [1]. While previous studies have primarily centered on phase singularities in fully coherent, monochromatic optical fields, recent theoretical research on Young's interference with partially coherent light has revealed numerous new effects and predicted the existence of phase singularities in the phase of a complex coherence function [2,3]. This new type of phase singularity, referred to as a coherence vortex, has come to attract increasing attention because of its unique properties. Here, we present the first direct experimental evidence of coherence vortices and experimentally investigate the mechanism for the birth and evolution of phase singularities in a spatial coherence function along the optical axis.
2 Principle

A schematic diagram of the proposed system is illustrated in Fig. 1. A conventional Michelson interferometer composed of two plane mirrors is illuminated by an extended quasi-monochromatic, spatially incoherent light source (S) located at some distance Δf from the focal plane of lens L1. Light emitted from point A(x₀, y₀) of the source is collected by lens L1
Fig. 1. Optical system for synthesis of the coherence vortex. Abbreviations are defined in the text.
and is split into two beams by the beam splitter BS. One beam is reflected from mirror M1, which serves as the reference field at the origin, and the other is reflected from mirror M2, which serves as the three-dimensionally displaced field to be correlated with the reference field. Mirrors M1 and M2 are located at distances z = Z and z = Z + ΔZ, respectively, from the lens L1. The interference fringes generated on the CCD image sensor are the result of the superposition of the two mutually displaced optical field distributions, imaged by lens L2 onto the CCD image sensor. The point source ũ₀(x₀, y₀) at A creates a field distribution behind lens L1 [4]:

$$u(x,y,z)=u_0(x_0,y_0)\,\frac{f}{j\lambda B(z)}\,\exp\{j2\pi(f+z+\Delta f)/\lambda\}
\times\exp\left\{j\pi\left[\Delta f\left(x^{2}+y^{2}\right)+(z-f)\left(x_0^{2}+y_0^{2}\right)\right]/\lambda B(z)\right\}
\times\exp\left\{-j2\pi f\left(x_0x+y_0y\right)/\lambda B(z)\right\},\qquad(1)$$

where λ is the wavelength of the light, (x, y, z) are coordinates behind lens L1 with their origin at the center of lens L1, f is the focal length of lens L1, and B(z) ≡ f² − Δf·z + f·Δf. The field ũ(x, y, Z) at object mirror M1 and the field ũ(x, y, Z + 2ΔZ) at the corresponding location in the other arm of the interferometer are imaged and superimposed to form interference fringes on the CCD image sensor. Because each point on the extended source is
assumed to be completely incoherent with respect to any other point on the source, the overall intensity on the image sensor contributed by all the source points becomes a sum of the fringe intensities obtained from the individual point sources:

$$I(x,y,Z)=\iint\left|u(x,y,Z)+u(x,y,Z+2\Delta Z)\right|^{2}dx_0\,dy_0,\qquad(2)$$
where the integration is performed over the area of the extended source. After some straightforward algebra, this intensity distribution becomes

$$I(x,y,Z)\approx A\left\{1+\left|\tilde\mu(x,y,2\Delta Z)\right|\cos\left[\varphi(x,y,2\Delta Z)+4\pi\Delta Z/\lambda+2\pi\Delta f^{2}\Delta Z\left(x^{2}+y^{2}\right)/\lambda B(Z)B(Z+2\Delta Z)\right]\right\},\qquad(3)$$

where $A=2f^{2}\iint I_0(x_0,y_0)\,dx_0\,dy_0\,/\,\lambda^{2}B(Z)B(Z+2\Delta Z)$, $I_0=|\tilde u_0|^{2}$, and the complex degree of coherence $\tilde\mu=|\tilde\mu|\exp(j\varphi)$ is now given by

$$\tilde\mu(x,y,2\Delta Z)=\frac{\iint I_0(x_0,y_0)\exp\left[-j2\pi\Delta Zf^{2}\left(x_0^{2}+y_0^{2}\right)/\lambda B(Z)B(Z+2\Delta Z)+j4\pi\Delta f\,\Delta Z\,f\left(x_0x+y_0y\right)/\lambda B(Z)B(Z+2\Delta Z)\right]dx_0\,dy_0}{\iint I_0(x_0,y_0)\,dx_0\,dy_0}.\qquad(4)$$
Eq. (4) has a form similar to that of the 3-D complex degree of coherence derived from the generalized van Cittert-Zernike theorem [5]:

$$\mu(\Delta x,\Delta y,2\Delta z)=\frac{\iint I_0(x_0,y_0)\exp\left[-j2\pi\Delta z\left(x_0^{2}+y_0^{2}\right)/\lambda f^{2}-j2\pi\left(x_0\Delta x+y_0\Delta y\right)/\lambda f\right]dx_0\,dy_0}{\iint I_0(x_0,y_0)\,dx_0\,dy_0}.\qquad(5)$$
Apart from the difference in the scaling factor, the proposed system can give a simultaneous full-field visualization of a three-dimensional coherence function in the form of the fringe contrast and the fringe phase, with the magnification controllable through the scaling factor. From the analogy to optical diffraction theory [6-8], our problem of producing a coherence vortex for some particular optical path difference 2ΔZ can be reduced to the problem of finding real and nonnegative aperture distributions that produce optical fields with a phase singularity on the optical axis. A circular aperture whose transmittance has the form of a spiral zone plate can satisfy the above requirement if we choose

$$I_0(x_0,y_0)=\frac{1}{2}\left\{1+\cos\left[2\pi\gamma\left(x_0^{2}+y_0^{2}\right)+\arctan\left(y_0/x_0\right)\right]\right\},\qquad 0\le x_0^{2}+y_0^{2}\le R^{2},\qquad(6)$$

where R is the radius of the circular source I₀(x₀, y₀), and γ is a variable that determines the location of the coherence vortices along the ΔZ-axis. Substituting the proposed source distribution, Eq. (6), into Eq. (4), we obtain the following degree of coherence:
$$\tilde\mu\big|_{\Delta Z=0}=1;\qquad
\tilde\mu\big|_{\Delta Z=\pm\gamma\lambda B(Z)B(Z+2\Delta Z)/f^{2}}\propto
\frac{J_{1}\!\left(4\pi\Delta f\,\gamma R\sqrt{x^{2}+y^{2}}/f^{2}\right)}{8\pi\Delta f\,\gamma R\sqrt{x^{2}+y^{2}}/f^{2}}
\otimes\frac{x\pm jy}{\left(x^{2}+y^{2}\right)^{3/2}},\qquad(7)$$

where ⊗ denotes the convolution operation. In evaluating Eq. (7) we have used the 2-D Riesz kernel [9]:

$$\mathcal{F}\left\{\exp\left[j\arctan(y/x)\right]\right\}=\frac{v+ju}{2\pi\left(u^{2}+v^{2}\right)^{3/2}},\qquad(8)$$

where ℱ{·} stands for the Fourier transform. As seen from Eq. (7), a spatially incoherent source whose irradiance distribution has the same form as Eq. (6) produces fields that exhibit a high degree of coherence for three optical path differences, ΔZ = 0 and ΔZ = ±γλB(Z)B(Z + 2ΔZ)/f², and at the two side peaks a coherence vortex can be clearly observed, with inverted topological charge at one peak relative to the other. Equation (7) thus gives three correlation peaks, and we can control the distance between the central peak and the side peaks carrying the coherence vortices by changing the parameter γ of the zone plate with a spatial light modulator.
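A source irradiance of the spiral-zone-plate form of Eq. (6) is straightforward to synthesize for display on the SLM. The grid size, γ value, and unit source radius below are arbitrary illustrative choices, not the experimental parameters.

```python
import numpy as np

def spiral_zone_plate(n=512, gamma=8.0):
    """I0 = (1/2){1 + cos[2*pi*gamma*(x0^2 + y0^2) + arctan(y0/x0)]} on a disk
    of radius R = 1, following the form of Eq. (6)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x ** 2 + y ** 2
    i0 = 0.5 * (1.0 + np.cos(2 * np.pi * gamma * r2 + np.arctan2(y, x)))
    i0[r2 > 1.0] = 0.0               # circular aperture: zero outside the disk
    return i0

src = spiral_zone_plate()
print(bool(src.min() >= 0.0 and src.max() <= 1.0))   # -> True: real and nonnegative
```

The nonnegativity check matters because an incoherent source irradiance must be real and nonnegative, which is exactly the constraint that motivates Eq. (6).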
3 Experiments

Experiments have been conducted to demonstrate the validity of the proposed technique. A schematic illustration of the experimental system is shown in Fig. 2. Linearly polarized light from a 15 mW He-Ne laser was expanded and collimated by collimator lens C to illuminate a liquid-crystal-based spatial light modulator (SLM), which modulates the light intensity transmitted by analyzer P placed immediately behind the SLM. A computer-generated spiral zone plate pattern was displayed on the SLM and was imaged onto a rotating ground glass GG by a combination of lenses L1 and L2 through pinhole PH, which functions as a spatial filter to smooth out the discrete pixel structure of the SLM. The image of the spiral zone plate on the rotating ground glass serves as a quasi-monochromatic incoherent light source. The light from this spatially
Fig. 2. Schematic illustration of the experimental system: C, collimator lens; P, polarizer; L1, L2, L3 and L4, lenses; PH, pinhole; GG, ground glass; BS, beam splitters.
incoherent source, placed at some distance from the focal plane of lens L3, was collected by L3 and introduced into a Michelson interferometer consisting of prism beam splitter BS, reference mirror MR, and object mirror MO, the surface of which is imaged by lens L4 onto the sensor plane of the CCD camera. The experiments were performed as follows. First, we designed a zone plate source that produces high coherence peaks for the mirror distance ΔZ = 1 mm by choosing the correct value for parameter γ. Then we observed the fringes virtually located on mirror MO with the CCD camera, with lens L4 focused on MO. By moving mirror MR, we changed the optical path difference between the two arms of the Michelson interferometer and measured the visibility of the fringes along the optical axis from the recorded interferograms. Fig. 3 shows the irradiance distribution of the source, which has the shape of a computer-generated spiral zone plate. We detected the coherence vortices by moving the reference mirror along the optical axis. The fringes recorded by the CCD camera for different optical path differences ΔZ are shown in Fig. 4(a)-(g). As predicted by the theoretical analysis, coherence vortices with inverted topological charge are readily observed at the positions ΔZ = ±1.0 mm in Fig. 4(b) and (f), respectively, which correspond to the plus and minus first-order coherence peaks. We can also observe high coherence when ΔZ is equal to zero, at the position of the zeroth-order coherence peak. As theoretically predicted by Schouten et al. [2], the coherence vortices have a degree of coherence equal to zero, and hence no fringe contrast, while the intensities of the field do not vanish; this is quite different from traditional optical vortices. From the recorded interferogram in Fig. 4(c), we can directly calculate the
Fig. 3. Source irradiance distribution designed to have the shape of a spiral zone plate.
complex degree of coherence by the Fourier transform method (FTM) [10]. The result is shown in Fig. 5. As expected, a cone-like structure, whose apex with a degree of coherence equal to zero indicates the position of a phase singularity in the coherence function, is observed for the
Fig. 4. The interferograms recorded for different optical path differences ΔZ; panels (a)-(g) span ΔZ from -1.5 mm to +1.5 mm in 0.5 mm steps.
Fig. 5. The distributions of (a) amplitude and (b) phase of the complex degree of coherence around the coherence vortex.
amplitude of the complex degree of coherence. In addition, we can also observe that the corresponding phase of this coherence function has a helical structure. Fig. 5 provides the first direct experimental evidence of the existence of phase singularities in coherence functions.
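The Fourier-transform method of Ref. [10], used here to obtain Fig. 5, can be sketched in one dimension: isolate one carrier sidelobe of the fringe spectrum, inverse-transform it, and read the fringe contrast (here the degree of coherence) from the magnitude and the phase from the argument. The carrier frequency and test modulations below are illustrative values, not the experimental ones.

```python
import numpy as np

n = 1024
x = np.arange(n)
f0 = 64 / n                                         # carrier: 64 cycles per record
contrast = 0.5 + 0.3 * np.cos(2 * np.pi * x / n)    # slowly varying |mu|
phi = np.sin(2 * np.pi * x / n)                     # slowly varying phase
fringe = 1.0 + contrast * np.cos(2 * np.pi * f0 * x + phi)

spec = np.fft.fft(fringe)
window = np.zeros_like(spec)
window[32:96] = spec[32:96]                         # keep only the +f0 sidelobe
analytic = 2.0 * np.fft.ifft(window)                # ~ contrast * exp(j(carrier + phi))

recovered_contrast = np.abs(analytic)
recovered_phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x

err = np.abs(recovered_contrast - contrast).max()
print(err < 0.02)                                   # -> True
```

In the experiment the same filtering is done on the two-dimensional spectrum of the recorded interferogram, giving |μ| and its helical phase simultaneously over the whole field.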
4 Conclusions

In summary, we have presented evidence of coherence vortices for the first time and experimentally investigated the properties of phase singularities in the coherence function. Unlike for conventional optical vortices, the intensity at a coherence vortex does not vanish, but its fringe contrast becomes zero. Furthermore, the proposed method for synthesizing coherence vortices facilitates direct observation of the detailed local properties of a coherence vortex and introduces new opportunities to explore other topological phenomena of the coherence function.
Acknowledgments Part of this work was supported by Grant-in-Aid of JSPS B(2) No.15360026, Grant-in-Aid of JSPS Fellow 15.52421, and by The 21st Century Center of Excellence (COE) Program on “Innovation of Coherent Optical Science” granted to The University of Electro-Communications.
References

1. Nye, J F and Berry, M V (1974) Dislocations in wave trains. Proc. R. Soc. Lond. A 336:165-190.
2. Schouten, H F, Gbur, G, Visser, T D and Wolf, E (2003) Phase singularities of the coherence functions in Young's interference pattern. Opt. Lett. 28(12):968-970.
3. Gbur, G and Visser, T D (2003) Coherence vortices in partially coherent beams. Opt. Comm. 222:117-125.
4. Goodman, J W (1968) Introduction to Fourier optics. McGraw-Hill, New York.
5. Rosen, J and Yariv, A (1996) General theorem of spatial coherence: application to three-dimensional imaging. J. Opt. Soc. Am. A 13:2091-2095.
6. Rosen, J and Takeda, M (2000) Longitudinal spatial coherence applied for surface profilometry. Appl. Opt. 39(23):4107-4111.
7. Wang, W, Kozaki, H, Rosen, J and Takeda, M (2002) Synthesis of longitudinal coherence functions by spatial modulation of an extended light source: a new interpretation and experimental verifications. Appl. Opt. 41(10):1962-1971.
8. Takeda, M, Wang, W, Duan, Z and Miyamoto, Y (2005) Coherence holography: holographic imaging with coherence function. Holography 2005, International Conference on Holography, Varna, Bulgaria.
9. Larkin, K G, Bone, D J and Oldfield, M A (2001) Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform. J. Opt. Soc. Am. A 18(8):1862-1870.
10. Takeda, M, Ina, H and Kobayashi, S (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72:156-160.
Properties of Isothetic Lines in Discontinuous Fields

C.A. Sciammarella a, F.M. Sciammarella b

a Dipartimento di Ingegneria Meccanica e Gestionale, Politecnico di Bari, Viale Japigia 182, 70126 Bari, ITALY, [email protected]
b Department of Mechanical, Materials and Aerospace Engineering, Illinois Institute of Technology, 10 West 32nd St., 60616 Chicago (IL), USA, [email protected]

1 Introduction

Optical methods that retrieve displacement or frequency information produce fringes generated by the beating of two close spatial frequencies of the real or virtual gratings (deformed and undeformed) that are the carriers of the information (moiré fringes). The analysis of displacements and strains requires knowledge of the topology of the fringe patterns. Moiré fringes have a dual interpretation: they can be seen as lines of equal projected displacement, in which case the phase-modulated-signal interpretation is used, or they can be considered as frequency-modulated spatial signals. Utilizing the phase-modulation concept, the moiré fringes are loci of equal projected displacement, the isothetic lines. The displacements on a plane are given by a vector,
$$\mathbf{a}=u(x,y)\,\mathbf{i}+v(x,y)\,\mathbf{j}.\qquad(1)$$
These two components produce separate families of moiré fringes whose light intensity is given by

$$I_\alpha(x,y)=I_0\left[1+Q\cos\phi_\alpha(x,y)\right],\qquad\alpha=x,y,\qquad(2)$$

where I₀ is the background intensity, Q is the visibility of the fringes, and φ_α(x, y) is the phase of the signal. The moiré fringes are characterized by the property

$$\phi_\alpha(x,y)=\frac{2\pi u_\alpha}{p}+c,\qquad(3)$$
where p is the pitch, or fundamental frequency, of the real or virtual carrier generating the fringes; the moiré fringes are isophase loci. If we consider the x-axis as the projection axis, the fringes are integral solutions of the differential equation [1]

$$\frac{dy}{dx}=-\frac{\partial\phi_1(x,y)/\partial x}{\partial\phi_1(x,y)/\partial y}.\qquad(4)$$

In the above equation the subscript 1 indicates the phase of the moiré fringes corresponding to the x-direction, the u(x, y) family. A similar equation can be written for the φ₂(x, y) family. The phase functions φ_α(x, y) are not independent of each other, because they are subject to the restrictions imposed by the basic assumptions of continuum mechanics. The system of equations (4) and the corresponding system for the other family of fringes have solutions that leave the phase indeterminate; these points are called singular points. At singular points the two partial derivatives are equal to zero. The shape of the isothetic lines in the neighbourhood of a singular point is characterized by the Jacobian matrix seen below,
$$J=\begin{pmatrix}\dfrac{\partial^{2}\phi_1(x,y)}{\partial x^{2}} & \dfrac{\partial^{2}\phi_1(x,y)}{\partial x\,\partial y}\\[2mm]\dfrac{\partial^{2}\phi_1(x,y)}{\partial y\,\partial x} & \dfrac{\partial^{2}\phi_1(x,y)}{\partial y^{2}}\end{pmatrix}_{0},\qquad(5)$$

where the subscript 0 indicates that the derivatives are taken at the singular point of coordinates (x₀, y₀). It can be shown that the behaviour of the lines depends on the solutions of the characteristic equation

$$\lambda^{2}-\lambda S+\Delta=0,\qquad(6)$$

where

$$S=\operatorname{trace}J=\left(\frac{\partial^{2}\phi_1(x,y)}{\partial x^{2}}\right)_{0}+\left(\frac{\partial^{2}\phi_1(x,y)}{\partial y^{2}}\right)_{0},\qquad(7)$$

$$\Delta=\det J=\left(\frac{\partial^{2}\phi_1(x,y)}{\partial x^{2}}\right)_{0}\left(\frac{\partial^{2}\phi_1(x,y)}{\partial y^{2}}\right)_{0}-\left(\frac{\partial^{2}\phi_1(x,y)}{\partial x\,\partial y}\right)_{0}^{2}.\qquad(8)$$
There is a large variety of singular points. A discussion and graphical examples of some frequently found singular points, as well as some singular lines, can be found in [1]. The φ_α(x, y) cannot be arbitrary functions: one of the hypotheses of the continuum is that the functions φ_α(x, y) are analytic. Analyticity requires that the functions have a single gradient vector ∇φ_α(x, y) at each point; the isothetic lines cannot intersect. The other important property is that the isothetic lines are either closed lines or they begin and end at boundaries. The properties described above are enough to understand the topography of the phase function and therefore to provide the necessary rules for phase unwrapping. There are of course solid mechanics problems where the analyticity of the displacement function does not apply. If the displacement function is not analytic, the phase functions can have a large variety of shapes. A similar problem arises when optical techniques are used to obtain the shapes of surfaces. Three-dimensional surfaces are also second-order tensors, and hence when one wants to interpret fringes of complex surfaces one faces the same problem that we have indicated in the case of displacement information.
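The characteristic equation (6) classifies a singular point in the standard way for a linearized two-dimensional system: the sign of Δ and of the discriminant S² − 4Δ separates saddle points, nodes, and spiral or center points. The following is a minimal sketch of that classification; the naming follows common dynamical-systems usage rather than Ref. [1] verbatim.

```python
def classify_singular_point(S, Delta, tol=1e-12):
    """Classify a singular point from the trace S and determinant Delta of the
    Jacobian, i.e. from the roots of lambda**2 - S*lambda + Delta = 0."""
    if Delta < -tol:
        return "saddle"                  # real eigenvalues of opposite sign
    if S * S - 4.0 * Delta >= -tol:
        return "node"                    # real eigenvalues of the same sign
    return "center" if abs(S) <= tol else "spiral (focus)"

print(classify_singular_point(0.0, -1.0))   # -> saddle
print(classify_singular_point(2.0, 1.0))    # -> node
print(classify_singular_point(0.0, 1.0))    # -> center
```

In fringe analysis, a saddle corresponds to hyperbolic isothetic lines around the singular point, while a center corresponds to closed loops.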
2 Fringe pattern dislocations

The analysis of displacement fields of a more general nature than those generated by an analytic function takes us to the concept of dislocations. Dislocation theory is used extensively by materials scientists to analyse crystal kinematics and dynamics. A natural extension of the interpretation of the lack of a single-valued solution for the displacement field is to use the concept of dislocation to analyse fringe patterns. This fundamental step in fringe analysis was taken by Aben in his pioneering work on the interpretation of photoelastic fringes [2]. Following Aben's model, the introduction of the definition of dislocations in moiré fringes, together with preliminary results, can be found in [3], [4], [5]. In this paper this subject is explored further. Let us define a dislocation in a moiré fringe pattern as a departure in the pattern from the continuity properties outlined in the introduction. Figure 1 shows a tensile pattern of a particulate composite specimen; the material is a live solid propellant. The pattern shown is the v-pattern, corresponding to the axial direction of the specimen. In Figure 2 the dislocation is defined by the Burgers circuit indicated in the figure. When one draws the circuit in a pattern that contains a dislocation, the circuit will not close after the removal of the dislocation. The length of the
Burgers vector is equal to the pitch p of the real or virtual grating generating the pattern times the number of fringes introduced at the dislocation; in the present example, two fringes.
Fig. 1. Moiré pattern corresponding to v(x,y) in a propellant grain tensile specimen. Load applied in the y-direction.

Fig. 2. (a) Burgers circuit around a fringe dislocation of the pattern in Fig. 1, (b) Burgers circuit with the dislocation present, (c) Burgers vector of magnitude 2p.
The Burgers vector defined here is not the same Burgers vector utilized in crystal analysis. The dislocations that appear in fringe patterns are manifestations of the presence of dislocations in the field under analysis. The dislocations are positive or negative sources of displacement: positive sources are defined as sources that increase the displacement field, and negative sources are sources that reduce it. This convention follows the definition of tensile deformation as positive and compressive deformation as negative. The isothetic lines corresponding to a dislocation in the field will also have dislocations. This problem can be looked at in a slightly different way by considering the phase interpretation of displacements. Recall equation (3): the phase of the fringes is proportional to the displacements. For each fringe family we have a phase function which geometrically has the interpretation of the rotating vector that generates the phase-modulated function. Let us now consider that a displacement vector has two projections, and each projection has a rotating vector with a corresponding phase. The phases of the rotating vectors cannot be independent, because they are the result of the projections of the same vector.
3 Singularities of the displacement field

If there are no discontinuities in the field, the solution of a two-dimensional continuum mechanics problem is a vector field of the form defined in
(1). The vector field is characterized by trajectories that satisfy the differential equation

$$\frac{dx}{u}=\frac{dy}{v}.\qquad(9)$$

The above equation defines the tangent to the trajectories, which should be curves that either end at the boundaries of the domain or are closed curves. The trajectories can never intersect inside the domain. It is a well known fact that a vector field defined by (1) can be represented as the sum of two vector fields,

$$\mathbf{a}=\mathbf{a}_\phi+\mathbf{a}_\psi.\qquad(10)$$

The vector field

$$\mathbf{a}_\phi=\nabla\Phi\qquad(11)$$
$$I_n(x,y)=a(x,y)+b(x,y)\cos\left[\varphi(x,y)+2\pi\left(f_{0nx}x+f_{0ny}y\right)+\alpha_n\right]\equiv a(x,y)+b(x,y)\cos\left[\Phi(x,y)\right],\qquad(1)$$
where a(x,y) is the background intensity distribution, possibly corrupted by noise, b(x,y) is the fringe amplitude map, and φ(x,y) is the optical phase map to be determined. f0nx and f0ny are the spatial frequencies of an optional linear carrier generated by a tilt between the interfering wavefronts, and αn (n = 1, ...) are optional additional shifts between the n interferograms. We will assume that a(x,y) and b(x,y) are the same for the n interferograms considered. The 2-D Fourier spectrum of a fringe pattern intensity given by Eq. (1) is classically written as
$$\tilde I_n(f_x,f_y)=A(f_x,f_y)+C\left(f_x-f_{0nx},f_y-f_{0ny}\right)+C^{*}\left(f_x+f_{0nx},f_y+f_{0ny}\right),\qquad(2)$$
where A(fx, fy) is the Fourier transform of a(x,y), C(fx, fy) is the Fourier transform of c(x,y) = (1/2) b(x,y) exp{i[φ(x,y) + αn]}, and C* is the complex conjugate of C. It is well known that when the spatial variations of a(x,y), b(x,y) and φ(x,y) are slow with respect to the carrier frequency components, the Fourier spectrum contains three main separated peaks: a peak around the origin related to background intensity variations, and two symmetrical sidelobes C and C* centered around (f0x, f0y) and (-f0x, -f0y) that are related to the phase-modulated fringe carrier.
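The three-peak structure of Eq. (2) is easy to verify on a synthetic fringe pattern. The carrier frequencies, Gaussian background, and slow phase modulation below are illustrative values, not those used in the paper.

```python
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n]
f0x, f0y = 15.0 / n, 25.0 / n                       # carrier: 15 and 25 fringes
a = 0.5 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 80.0 ** 2))
phi = 0.5 * np.sin(2 * np.pi * x / n)               # slowly varying optical phase
intensity = a + 0.3 * np.cos(2 * np.pi * (f0x * x + f0y * y) + phi)

spec = np.abs(np.fft.fftshift(np.fft.fft2(intensity)))
cy = cx = n // 2

# Background peak A at the origin plus two conjugate sidelobes C and C*.
print(np.unravel_index(spec.argmax(), spec.shape))                 # -> (128, 128)
print(np.isclose(spec[cy + 25, cx + 15], spec[cy - 25, cx - 15]))  # -> True
```

Masking out the central peak and locating the next maximum finds the sidelobe at the carrier bin, which is exactly what the filtering methods below exploit.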
3 Interferogram background extraction techniques

An easy method to evaluate the background intensity distribution is to record a fringe-less pattern [7]. Such a pattern can be obtained by adjusting the optical path difference of the interferometer to a value larger than the coherence length of the light source, by inserting a stop in the reference beam or, for interferometers including an objective or a lens, by defocusing. In all these cases, the resulting image typically has a lower average intensity, a lower spatial frequency bandwidth and/or various disturbances, so it is only an approximation of the true interferogram background intensity.
New Methods and Tools for Data Processing
Another simple method is to perform a low-pass filtering in the real space or in the Fourier domain. However, the choice of the filter type, of its cut-off frequency, of the kernel size and of the number of filtering steps is somewhat arbitrary. Indeed, it depends on the fringe pattern, and there is no simple way to check the validity of the result. This method is limited to the restrictive case where the fringe pattern background intensity a(x,y) has slow spatial variations with respect to the total optical phase Φ(x,y).
Fig. 1. Fourier transform methods of background extraction. a) Interferograms, b) Fourier spectra, c) Extracted background with method 1 (top) and method 2 (bottom). Interferogram size: 256x256. Tilt x: 15 fringes, tilt y: 25 fringes; Gaussian background: standard deviation 140 pixels, offsets X and Y: 30 pixels
The background of fringe patterns with a linear spatial carrier can be extracted by using fast Fourier transform (FFT) techniques [7,8] (Fig. 1). In these methods, data within a frequency window around the modulated carrier sidelobes in the Fourier space are replaced by data in the same frequency windows taken either from another spectrum or from another quadrant of the same spectrum. The background is then computed by the inverse Fourier transform of the modified Fourier spectrum. In the former case, sidelobe data are replaced by data in the same spatial frequency ranges taken in the Fourier spectrum of a fringe pattern with carrier fringes in a direction perpendicular to those in the original interferogram (case 1 in Fig. 1b). In the second case they are replaced, within the same frequency window, by data mirrored with respect to the fx axis (case 2 in Fig. 1b) or by data after a 90° rotation (case 3 in Fig. 1b). The main advantage of these FFT methods
is that noise and high spatial frequency components are kept in the extracted background. A drawback is that background data with spatial frequency components in the frequency window of the filter used to remove the fringe carrier may be altered. For cases 1 and 3 this is notably the case when the background spatial frequencies close to those of the fringe carrier are not isotropic. For case 2, fx components of the spatial frequencies of the background around f0x are correctly extracted, while fy components around f0y are altered. An additional drawback of the first method is the need to record a second interferogram with a precisely adjusted 90° rotation of the fringes with respect to the first interferogram. Application of the first and second FFT techniques is demonstrated on a simulated interferogram in Fig. 1. The simulated interferograms (Fig. 1a) are fringe patterns of a tilted plane with a background built by superimposing an off-centered Gaussian background and a scaled image of a surface with scratches. Fig. 1b is the corresponding Fourier spectrum in logarithmic scale. According to the method, complex data within the circle B are replaced by complex data within the circles A, A' or A'', and a similar procedure is used for the symmetrical modulated carrier sidelobe. The extracted backgrounds are displayed in Fig. 1c. They are correctly retrieved in this case. More generally, we found that some parasitic undulations often appear near the background image boundaries. As integer fringe numbers along the x and y axes were chosen in the simulations to limit spectral leakage effects, it is thought that these undulations occur when there is a fringe contrast discontinuity along the boundaries. In summary, FFT methods can be applied only to fringe patterns with a fringe carrier and may provide a background with artefacts near its boundaries.
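The mirror-replacement variant (case 2) can be sketched in a few lines of NumPy. This is one plausible rendering of the method, not the authors' code; the sidelobe coordinates and window radius below are illustrative assumptions:

```python
import numpy as np

def extract_background_case2(I, f0x, f0y, radius):
    # Case 2 of the FFT background extraction: complex data in a circular
    # window around each carrier sidelobe are replaced by the spectrum
    # mirrored with respect to the fx axis, then inverse-transformed.
    ny, nx = I.shape
    S = np.fft.fft2(I)
    fy = np.fft.fftfreq(ny, 1.0 / ny)[:, None]   # signed frequency bins
    fx = np.fft.fftfreq(nx, 1.0 / nx)[None, :]
    mirror = S[(-np.arange(ny)) % ny, :]         # fy -> -fy on the FFT grid
    for sx, sy in ((f0x, f0y), (-f0x, -f0y)):
        win = (fx - sx) ** 2 + (fy - sy) ** 2 <= radius ** 2
        S[win] = mirror[win]
    return np.fft.ifft2(S).real
```

On a simulated tilted-plane fringe pattern with integer fringe numbers (as in the paper's simulations) the carrier is removed exactly while a slowly varying Gaussian background is returned almost unchanged.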
Techniques that potentially provide a better approximation of the true background of fringe patterns, with or without a fringe carrier, are phase shifting techniques. They consist in recording interferograms with several phase shifts α_n between them. The first one is simply based on the addition of two fringe patterns with a π shift between them. Eq. 1 shows that this provides twice the background intensity map if the phase shift is strictly equal to π. A simple calculation shows that when the actual phase shift is equal to π ± ε (ε << π), the estimated background is corrupted by a residual fringe term proportional to ε.

4 Background extraction by time-averaged interferometry

When the sample is vibrated sinusoidally and the interferogram is averaged over many vibration periods, the recorded intensity becomes

I_n(x,y) = a(x,y) + b(x,y)·J0(4πa/λ)·cos[φ(x,y) + 2π(f_0nx·x + f_0ny·y) + α_n]   (4)

where a is the vibration amplitude, λ the mean detected wavelength and J0 is the Bessel function of the first kind of zero integer order (Fig. 2).
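The two-frame π-shift estimate can be checked numerically; amplitudes and the shift error ε below are arbitrary illustrative values:

```python
import numpy as np

# Numerical check of the two-frame pi-shift background estimate:
# adding two fringe patterns shifted by exactly pi cancels the cosine
# term, while a shift error eps leaves a residual fringe of order eps.
x = np.linspace(0.0, 1.0, 512)
a, b, phase = 120.0, 50.0, 2 * np.pi * 40 * x   # illustrative values

def frame(shift):
    return a + b * np.cos(phase + shift)

exact = 0.5 * (frame(0.0) + frame(np.pi))         # exactly the background a
eps = 0.01
biased = 0.5 * (frame(0.0) + frame(np.pi + eps))  # residual ~ (b*eps/2)*|sin(phase)|
print(np.max(np.abs(exact - a)))                  # essentially zero
print(np.max(np.abs(biased - a)))                 # close to b*eps/2 = 0.25
```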
Fig. 2. Bessel function J0(x)
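The identity behind Eq. 4 — that averaging a sinusoidally phase-modulated fringe over one vibration period multiplies the cosine term by J0(4πa/λ) — can be verified numerically with a small self-contained sketch (all values illustrative):

```python
import math

# Averaging cos(phi + z*sin(omega*t)) over one vibration period gives
# J0(z)*cos(phi), with z = 4*pi*a/lambda (Eq. 4 of the text).
def time_average(phi, z, n=4096):
    return sum(math.cos(phi + z * math.sin(2 * math.pi * k / n)) for k in range(n)) / n

def j0(z, n=4096):
    # J0 via its integral representation, same quadrature as above
    return sum(math.cos(z * math.sin(2 * math.pi * k / n)) for k in range(n)) / n

z = 2.40483  # first zero of J0: the fringe term vanishes
for phi in (0.0, 1.0, 2.5):
    print(time_average(phi, z))                      # ~0 for every phase
print(time_average(1.0, 1.0) - j0(1.0) * math.cos(1.0))  # identity holds to machine precision
```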
Fig. 3. Piezoelectric vibrating system
Table 1. First eight zeros of the Bessel function J0 and corresponding values of dJ0/dx, dJ0/da and of the vibration amplitude a for λ = 0.6 µm.
Root order      1        2        3        4        5        6        7        8
Value           2.4048   5.5200   8.6537   11.7915  14.9309  18.0710  21.2116  24.3524
dJ0/dx         -0.5175   0.3398  -0.2712   0.2323  -0.2064   0.1876  -0.1732   0.1616
dJ0/da (%/nm)   1.084    0.712    0.568    0.487    0.432    0.393    0.363    0.338
a (nm)          114.8    263.6    413.2    563.0    712.9    862.8    1012.8   1162.7
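The table entries can be reproduced from scratch: the zeros of J0 are bracketed (they are spaced roughly π apart) and refined by bisection, and the vibration amplitude follows from a = z·λ/4π. The quadrature size and bracketing bounds are implementation choices, not values from the paper:

```python
import math

def j0(x):
    # Bessel J0 via its integral representation (periodic trapezoidal rule)
    n = 2000
    return sum(math.cos(x * math.sin(math.pi * k / n)) for k in range(n)) / n

def j0_zero(lo, hi, tol=1e-10):
    # bisection, assuming j0 changes sign on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if j0(lo) * j0(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam = 600.0  # mean wavelength in nm (0.6 um, as in Table 1)
brackets = [(2.0, 4.0)] + [(2.4048 + math.pi * (k - 0.5), 2.4048 + math.pi * (k + 0.5))
                           for k in range(1, 8)]
zeros = [j0_zero(lo, hi) for lo, hi in brackets]
amps = [z * lam / (4 * math.pi) for z in zeros]  # vibration amplitude a = z*lambda/(4*pi)
print([round(z, 4) for z in zeros])   # 2.4048, 5.5201, 8.6537, ...
print([round(a, 1) for a in amps])    # 114.8, 263.6, 413.2, ...
```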
When the vibration amplitude a is adjusted such that 4πa/λ corresponds to a zero of J0 (Table 1), the second term of Eq. 4 cancels out and the time-averaged interferogram becomes simply equal to the true background intensity distribution. It is obvious from the shape of the J0 function and from the values of the derivative dJ0/da for a values corresponding to the first 8 roots of J0 (see Table 1) that the choice of a high zero order is preferable to minimize the error related to an incorrect adjustment. However, beyond the third zero of J0 there is only a slight improvement. Let us emphasize that this technique can theoretically be applied to any fringe pattern recorded by a two-beam interferometer and to any background intensity distribution. This is of course true only if the vibration amplitude is homogeneous, i.e. when the frequency does not correspond to a resonance of a part of the vibrating surface. It can as well be adapted to other interferometric techniques that allow time-averaged interferometry, like electronic speckle pattern interferometry and holographic interferometry. Then the vibration amplitudes must be adjusted to the zeros of the corresponding fringe contrast modulation function. Some experiments were performed with an interference microscope to validate this time-averaged interferometry method. Experimental conditions and test samples were selected to provide fringe patterns with high spatial frequency non-uniformities. Test samples were vibrated with a simple piezoelectric translator powered with an alternating voltage (Fig. 3). The vibration amplitude was adjusted in each case to visually minimize the fringe contrast. This background estimation method is particularly well suited to interference microscopy measurements because a homogeneous vibration amplitude could be obtained in most cases. Fig. 4a shows an interferogram recorded on a tilted silicon nitride flat membrane fabricated by KOH bulk micromachining on a silicon wafer.
A close look at this interferogram shows that it is entirely corrupted by quasi-horizontal parasitic fringes with a vertical spatial frequency relatively close to that of the real fringe carrier. These fringes are related to unwanted
Fig. 4. Background measurement by time-averaged interferometry on a tilted transparent silicon nitride membrane. a) Static interferogram recorded with a Michelson X5 objective. b) Measured background intensity distribution. Interferogram size: 1.5mm x 1.5mm
interferences in the optical set-up when a highly coherent light source is used. Fig. 4b is the background intensity image obtained by vibrating the sample at 1 kHz with an amplitude corresponding to the 3rd root of the Bessel function J0(4πa/λ) (see Table 1). It demonstrates that, as expected, the contrast of the main interference fringes could be fully cancelled while the parasitic fringes are kept intact in the background image. The results of another measurement performed on the same silicon nitride transparent membrane, but with a larger field of view, are shown in Fig. 5a. For this measurement, the sample was put on a rough surface to get a background with an inhomogeneous reflectivity. Fig. 5b and 5c display the interferograms recorded for vibration amplitudes adjusted visually to values respectively lower than and equal to the 3rd root of the Bessel function J0(4πa/λ). In that case, the fringe contrast on the surrounding frame and on the membrane could not be cancelled simultaneously. Nevertheless, the high spatial variations of the background intensity are correctly retrieved.
Fig. 5. Background measurement by time-averaged interferometry on a transparent silicon nitride membrane over a rough surface. a) Static interferogram recorded with a Michelson X5 objective. b) and c) Interferograms recorded on the sample vibrated at 500 Hz with a vibration amplitude respectively lower than and approximately equal to the 3rd root of J0(4πa/λ).
5 Conclusion

Starting from a critical analysis of the main existing techniques for fringe pattern background evaluation, we proposed in this paper an alternative technique based on the cancellation of the fringe contrast by vibrating the whole sample surface. This technique can be applied regardless of the fringe pattern content and background spatial frequencies, and does not require any computation. Its accuracy is limited by the need to adjust the vibration amplitude precisely over the whole measurement field. Experiments in progress showed that some simple on-the-fly image processing can be performed to improve the accuracy of this adjustment.
6 References
[1] Servin, M, Rodriguez-Vera, R, Malacara, D (1995) Noisy fringe pattern demodulation by an iterative phase lock loop. Opt. and Lasers in Eng. 23: 355-365
[2] Gdeisat, M.A, Burton, D.R, Lalor, M.J (2000) Real-time pattern demodulation with a second-order digital phase-locked loop. Appl. Opt. 39(29): 5326-5336
[3] Quiroga, J.A, Servin, M, Marroquin, J.L, Gomez-Pedrero, J.A (2003) An isotropic n-dimensional quadrature transform and its application in fringe pattern processing. Proc. SPIE 5144: 259-267
[4] Larkin, K.G, Bone, D.J, Oldfield, M.A (2001) Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral quadrature transform. J. Opt. Soc. Am. A 18(8): 1862-1870
[5] Servin, M, Marroquin, J.L, Cuevas, F.J (1997) Demodulation of a single interferogram by use of a two-dimensional regularized phase-tracking technique. Appl. Opt. 36(19): 4540-4548
[6] Legarda-Sáenz, R, Osten, W, Jüptner, W (2002) Improvement of the regularized phase tracking technique for the processing of nonnormalized fringe patterns. Appl. Opt. 41(26): 5519-5526
[7] Roddier, C, Roddier, F (1987) Interferogram analysis using Fourier transform techniques. Appl. Opt. 26(9): 1668-1673
[8] Baldi, A, Bertolino, F (2001) On the application of the 2D fast Fourier transform to the surface reconstruction by optical profilometers. Proc. XII ADM Int. Conf., Rimini, Italy: B1:9-16
[9] Lovric, D, Vucic, Z, Gladic, J, Demoli, N, Mitrovic, S, Milas, M (2003) Refined Fourier-transform method of analysis of two-dimensional digitized interferograms. Appl. Opt. 42(8): 1477-1484
[10] Petitgrand, S, Yahiaoui, R, Bosseboeuf, A, Danaie, K (2001) Quantitative time-averaged microscopic interferometry for micromechanical device vibration mode characterization. Proc. SPIE 4400: 51-60
Deformed surfaces in holographic interferometry. Similar aspects concerning nonspherical gravitational fields
Walter Schumann, Zurich, Switzerland
1 Derivatives of the optical path difference, strain, rotation, changes of curvature, fringe and visibility vectors

The basic expression in holographic interferometry for a small surface deformation is the optical path difference D = u·(k − h) = λQ. Here u is the displacement, h, k are unit vectors on the incident and reflected rays, λ the wavelength and Q the fringe order. In case of a large deformation, when using two modified holograms [1], the exact expression becomes D = (λ/2π)(φ − φ') + (L − L'), where L, L' denote the distances from the image points P, P' to a point K of fringe localisation. The phases at the image points P, P' are φ = (2π/λ)(L_T − L_S + p + q − q_T + p̄ + q̄ − q̄_T) + π − ψ and φ' = ... + Δψ, with the distances L_T, L_S, p, q, q_T, p̄, q̄, q̄_T, ... (see the figure), so that we obtain

D = L − L' + (L_S + p) − (L'_S + p') + (p + q) − (p' + q') + q̄ − q̄' + λΔψ/2π.   (1)

Many authors [2], ... have studied the recovering of the fringes. In digital holography [3] the modification must be simulated by the computer. The contrast of the fringes depends on the smallness of the derivative of D.
[Figure: geometry of recording, modification and reconstruction — laser, holograms, camera, image points P, P', fringe localisation centre K.]
The fringe spacing leads incidentally to the strains. Thus the differential dD = dL_S − dL'_S + d(L + p) − ... + dq̄ − dq̄' is primary. In particular, we have dL_S = dr·∇L_S = dr·Nh, ..., with the normal projector N = I − n⊗n. In the following we use the rules v·(a⊗b) = (v·a)b and (a⊗b)·w = a(b·w) for any dyadic. The 2D operator ∇ = a^α ∂/∂θ^α (α summed from 1 to 2) is the projected 3D operator N∇₃. Here θ¹, θ² are surface coordinates, a_α = ∂r/∂θ^α are the base vectors, a^α·a_β = δ^α_β, a_α·a_β = a_αβ and N = a^αβ a_α⊗a_β. We get so

dD = dr'·N'(k' − h') − dr·N(k − h) + dr̂'·N̂'(k' − ĉ') − dr̂·N̂(k − ĉ) − dρ_K·(k − k').

The deformations read N'dr' = F_N dr, N̂dr̂ = F̂_N dr̂, ... Only the semi-projection F_N = N + (∇⊗u)^T of the 3D deformation gradient F = I + (∇₃⊗u)^T intervenes here. The polar decomposition is F = QU, with the (orthogonal) rotation Q (Q^T Q = I) and the symmetric dilatation U, defined by the Cauchy–Green tensor F^T F = UU. At the surface the decomposition becomes, with a rotation Q_n (n' = Q_n n = Q_i n) and the in-plane dilatation V (NF^T FN = VV),

F_N = Q_n V = Q_i Q_p V.   (2)

For small values of a strain tensor γ, an inclination vector ω and a pivot rotation scalar Ω, implying the 2D permutation E = ε_αβ a^α⊗a^β (ε₁₁ = 0, ε₁₂ = −ε₂₁, ε₂₂ = 0), the decomposition is F_N = N + γ + ΩE + n⊗ω, with Q_i ≅ N + n⊗ω − ω⊗n, Q_p ≅ N + ΩE and V ≅ N + γ. We write also

∇⊗n = −B,   ∇⊗N = [B⊗n]^T + B⊗n.   (3,4)
The tensor B = B_αβ a^α⊗a^β = (1/r₁)e₁⊗e₁ + (1/r₂)e₂⊗e₂ describes the exterior curvature of a surface with principal values 1/r₁, 1/r₂. Eqs. 3, 4 correspond to the Frenet relations dn/ds = −e/r, de/ds = n/r in the case of a plane curve. The open bracket ...]^T in Eq. 4 indicates a transposition of the last two factors in the triadic, so that [B⊗n]^T = B_αβ a^α⊗n⊗a^β. At an isotropic, elastic surface we have γ = (W − ν₀EWE)/E₀, with coefficients ν₀, E₀, the stress tensor W and the involution E(...)E. The image P̃ is defined by dθ_P = 0 (i.e. Eq. 6) of the phase θ_P = 2π(p + q + p̄ + q̄)/λ for the rays of the aperture. We then get, with V'^(−1)Q'_n' = Q_n V^(−1) and V̄ = V + N(k − k'),

dD = dr'·N'[(k' − h') − Q_n V^(−1)(k − h)],   (5)
dρ_K·N̂[V̂Q̂_n^T(k̂ − ĉ) − (k − c)] = 0.   (6)

Next, the equation of a geodesic curve, relative to the arc s, can be written N d²r/ds² = 0, because the osculating plane contains the unit normal n. However, for any curve and its image we find

N d²r = V^(−1)[Q_n^T N'd²r' − (dr·D_V·dr)],   (7)
D_V = [(∇⊗V)N]^N − [(∇⊗Q_n)V]^N'·Q_n.   (8)
The sign ·^N marks a projection of the middle factor in a triadic. Using the integrability ∇·(EF^T) = 0, we could eliminate the rotation and would obtain D_V = [(∇⊗V)N]^N − [∇·(EVE)]V^(−1)E⊗EV with the involution. Finally, if we apply the formal relation (∇⊗n')·dr' = (∇⊗n')·Q_n V dr, we obtain the change of surface curvature by deformation

B' = −Q_n V^(−1)(∇⊗n')N' = Q_n V^(−1)[BQ_n^T − (∇⊗Q_n)·n]N'.   (9)

Consider now θ_R = 2π(ℓ + q + ℓ̄ + q̄)/λ. The relation dθ_R = 0 gives also Eq. 6. Therefore we find from neighbouring rays d²(ℓ + q + ℓ̄ + q̄) = 0, with d²q = (N̂d²r̂)·ĉ + dr̂·[B̂(n̂·ĉ) − N̂ĈN̂/q]·dr̂ + ..., where for N̂d²r̂ we apply Eq. 7, so that the total term N̂d²r̂(...) cancels because of Eq. 6. We use also the affine connection dr̂ = ℓM̂^T dk with the oblique projector M̂ = I − n̂⊗k/(n̂·k). Resolving dr̂·N̂dk = d²ℓ ..., we get a transformation dk̄ = ℓT dk, where

T = M̂{B̂(n̂·(k − ĉ)) − N̂ĈN̂/q − Q_n V^(−1)[B(n·(k − c)) − NCN/ℓ − D_V·V^(−1)(k − c)·K_n/ℓ]V^(−1)Q_n^T}M̂^T;

here Ĉ denotes the curvature tensor of the converging nonspherical wavefront at Ĥ. The inverses of the distances ℓ₁, ℓ₂ to the focal lines of the astigmatic interval R̃ (the origin at recording of the camera centre R at the reconstruction) are the eigenvalues of T. The ray aberration reads Kdr = Kdr̂ − pdk; therefore the bridge dr̂ = ℓM̂^T dk gives the virtual deformation

Kdr = G(Kdr̂),   G = ℓ(pT − K)Q_n V̂^(−1)M̂^T/(ℓ − p).   (10,11)

If the surface areas projected by the aperture overlap sufficiently, we should have k − k' ≈ −∇u_S/L' small because of the correlation, where f = K'u_S is the superposition vector. To apply Eq. 5, we use dr' = dr'_K' + M'^T(ℓ' − p' − L')dk' with M' = I − n'⊗k'/(n'·k') and dk' = m'dβ', with a unit vector m' and an angle dβ'. We write now dD/dβ' = m'·f'. The fringe vector f'_R' (fringe spacing [4]) and the visibility vector f'_K (distance of the homologous rays and contrast [5]) are

f'_R' = (ℓ' − p')G^T M'[k' − h' − Q_n V^(−1)(k − h)] − K'f(ℓ' − p' − L')/L',   (12)
f'_K = L'G^T M'[k' − h' − Q_n V^(−1)(k − h)].   (13)
2 Aspects of deformation for spherical and nonspherical gravitational fields, gravitational lens, rotating bodies

This section is only indirectly related to the previous subject. An extension should illustrate Eqs. 2, 3, 4, 7, 8, 9 and focus on the problem of general gravitational fields. For B ≡ 0, Eq. 9 gives the curvature B' of a surface in R³ as a deformed part of a plane in R². For a hypersurface R^k ⊂ R^n, n > k, this leads to the Ricci tensor R. We recall incidentally the components

R_αβ = Γ^λ_αλ,β − Γ^λ_αβ,λ + Γ^μ_αλ Γ^λ_μβ − Γ^μ_αβ Γ^λ_μλ

showing the Christoffel symbols Γ^λ_αβ = a^λμ(a_μα,β + a_μβ,α − a_αβ,μ)/2. But the projector N' = a_αβ a^α⊗a^β = I − n'_i⊗n'_i (α, β from 1 to k or from 0 to k−1; i from 1 to n−k) implies both the „metric tensor" a_αβ and the exterior orthogonal unit vectors n'_i. If we use these vectors, it can be seen that the Riemann–Christoffel tensor is

R^T = N'N'[∇⊗n'_i ⊗ ∇⊗n'_i − (∇⊗n'_i ⊗ ∇⊗n'_i)^]]]N'N' = B'_i⊗B'_i − [B'_i⊗B'_i]^]],   (14)

according to Eq. 4 and B'_i = −Q_n V^(−1)(∇⊗n'_i)N' (Eq. 9). The bracket ...^]] indicates a transposition of the factors 2 and 4. The Ricci tensor is the contraction of R, thus alternatively R = B'_i(B'_i··N') − B'_i·B'_i. For a spherical gravitational field first, one uses the Schwarzschild radius 2M̄ = 2GM/c², with the constant of gravitation G, the mass M and the velocity of light c, as well as polar coordinates r, θ, φ and the radius a of the central body. We define an angle ψ → sin²ψ = 2M̄/r, where 2M̄ = 2GM/c² for r > a and 2M̄ = κ∫₀^r ρ(r̂)r̂² dr̂ for r ≤ a, with κ = 8πG/c² and the „density" ρ. The fundamental form [6] is, by means of dr·K_n·dr = r²dθ² + r²sin²θ dφ²,
ds'² = −(cos²ψ/Y²)c²dt² + (cos²ψ)^(−1)dr² + dr·K_n·dr,   (15)

where Y = 1 for r > a. The projector K_n = N − k⊗k refers to the radial unit vector k(θ,φ). The space part ds'² = dr·(k⊗k/cos²ψ + K_n)·dr = dr·VV·dr gives V^(−1) = cosψ k⊗k + K_n. We obtain, with r' = rk + wn and dw/dr = w,_r = tanψ, the deformation gradient F_N = (k + w,_r n)⊗k + K_n and ds'² = dr·F^T F·dr = dr·[(1 + w,²_r)k⊗k + K_n]·dr, so that cos²ψ = 1/(1 + w,²_r). Eq. 2 becomes Q_n = F_N V^(−1) = k'⊗k + K_n with k' = k cosψ + n sinψ, n' = −k sinψ + n cosψ. Using the key relation (sinψ),_r = −ζ sinψ/2r, where ζ = 1 − κρr³/2M̄, we find −∇⊗n' = (k⊗k')ζ tanψ/2r + K_n sinψ/r and the 3D curvatures

B' = −Q_n V^(−1)(∇⊗n')N' = (sinψ/r)[−(ζ/2)k'⊗k' + K_n] = (1/r₁)k'⊗k' + (1/r₂)K_n,   (16)
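For orientation, the exterior special case can be written out explicitly (a standard textbook check [6,7], added here): with ψ defined by sin²ψ = 2M̄/r and Y = 1, Eq. 15 is exactly the Schwarzschild line element,

```latex
\cos^2\psi = 1-\sin^2\psi = 1-\frac{2\bar M}{r},\qquad
ds'^2 = -\Bigl(1-\frac{2\bar M}{r}\Bigr)c^2\,dt^2
        +\Bigl(1-\frac{2\bar M}{r}\Bigr)^{-1}dr^2
        + r^2\,d\theta^2 + r^2\sin^2\theta\,d\varphi^2,
```

and the key relation follows by differentiation for ρ = 0 (ζ = 1):

```latex
(\sin\psi)_{,r} = \frac{d}{dr}\sqrt{\tfrac{2\bar M}{r}}
               = -\tfrac12\sqrt{\tfrac{2\bar M}{r}}\,\frac{1}{r}
               = -\frac{\sin\psi}{2r}.
```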
R_3D = B'(B'··N') − B'·B' = (sin²ψ/r²)[−ζ(k'⊗k') + (1 − ζ/2)K_n] = (2/r₁r₂)k'⊗k' + (1/r₁r₂ + 1/r₂r₂)K_n,   (17)

as well as the known vase-like surface [7]. Second, as for the time-radial terms in Eq. 15, we introduce a vector r' = 2M̄k cosψ/Y + wn. Defining an angle χ → sinχ = 2M̄(cosψ/Y),_r cosψ, w,_r = cosχ/cosψ, we get an inclination k' = k sinχ + n cosχ, n' = k cosχ − n sinχ and the curvatures

B' = −Q_n V^(−1)(∇⊗n')N' = (K/r)h⊗h + (r/K)[Y(sinχ),_r/2M̄]k'⊗k' = (1/r₀)h⊗h + (1/r₁)k'⊗k',   (18)

R_2D = B'(B'··N') − B'·B' = [Y(sinχ),_r/2M̄](h⊗h + k'⊗k') = (1/r₀r₁)(h⊗h + k'⊗k').   (19)

Note that K = Yr cosχ/2M̄cosψ is not relevant in Eq. 19 and that both meridians have the same arc s'. Third, the field equation and its inverse are, with R = R_4D, the relation N'··N' = 4 and the energy-impulse tensor T,

R − (1/2)(R··N')N' = κT,   R = κ[T − (1/2)(T··N')N'],   (20,21)
implying also (∇'·T)N' = 0. The principal components of T in this static case are T₀₀ = ρ, T₁₁ = T₂₂ = T₃₃ = p, where p(r) is the „pressure". We now replace the curvatures of Eqs. 15, 17, on a meridian stripe R⁴ ⊂ R⁶, by

B'₁ = (1/r₀)h⊗h + (1/r₁)k'⊗k' + (1/r₂)K_n,   1/r₀ = −ω sinψ/2r,

and by B'₂ = (1/r̃₀)h⊗h + (1/r̃₁)k'⊗k', where the relations cosβ/r̃₀ = sinβ/r₀ and cosβ/r̃₁ = sinβ/r₁ must hold; β, π−β are the angles of n'₁, n'₂ with respect to n'. We can also use the definition ω → κp = (sin²ψ/r²)(ω − 1) besides κρ = (sin²ψ/r²)(1 − ζ). The 4D Ricci tensor becomes, with Eq. 21, tan²β = [(2 − ζ)/ω − 1]/(2/ζ − 1) and by composition (i summed from 1 to 2)

R_4D = B'_i(B'_i··N') − B'_i·B'_i = (κ/2)[(ρ + 3p)h⊗h + (ρ − p)(k'⊗k' + K_n)]
     = (sin²ψ/2r²)[(3ω − ζ − 2)h⊗h − (ω + ζ − 2)(k'⊗k' + K_n)]
     = (1/r₀r₁ + 1/r̃₀r̃₁ + 2/r₀r₂)h⊗h + (1/r₀r₁ + 1/r̃₀r̃₁ + 2/r₁r₂)k'⊗k' + (1/r₀r₂ + 1/r₁r₂ + 1/r₂r₂)K_n.   (22)

As we have 2/r₀r₂ = a⁰⁰Γ¹₀₀(Γ²₁₂ + Γ³₁₃) = a⁰⁰a¹¹a₀₀,₁/r = Y sinχ/M̄r,
the comparison with 1/r₀ = −ω sinψ/2r shows that

ω = Y(r sinχ/M̄ sin²ψ) = Y(cosψ/Y),_r (2r cosψ/sin²ψ).   (23)

Eq. 22 is now compatible with Eqs. 17, 19 and also with all the component equations of R_αβ if we have the connection

1/r₀r₁ + 1/r̃₀r̃₁ = 1/r̂₀r̂₁.   (24)

Thus Eq. 22 appears as an „intrinsic" form, expressed by the parts 17, 19 alone. Further, the combination 3R₁₁ − R₀₀ eliminates ω, so that we obtain, in case ρ(r) is a given function, a linear differential equation for 1/Y:

d[(cosψ/Y),_r cosψ/r]/dr = (1/Y) d(sin²ψ/2r²)/dr.   (25)

The following two special cases should be noted: a) r > a: ρ = 0, ζ = 1, ω = Y = 1, sinχ = sin⁴ψ/2 (Schwarzschild solution); b) r ≤ a, ρ = ρ₀: ζ = −2, ω = Y = 2cosψ/(3cosψ_a − cosψ) [8], sinχ = (a³/r³)sin⁴ψ/2, p = ρ₀(cosψ − cosψ_a)/(3cosψ_a − cosψ) (TOV equation). However, in general ρ(r) must satisfy an equation of state.

In the case r₂ ≠ r₃ of a nonspherical gravitational field we take k normal to the surfaces of constant potential U in the flat space. Here we have the equation U,_ss + 2U,_s/r̄ = κρ/2, with a normal arc s and the mean curvature 1/r̄ = 1/2r₂ + 1/2r₃. We define ψ → sin²ψ = 2(M̄P̄U,_s)^(1/2), with the key relations ζ = 1 − r̄(M̄P̄),_s/2M̄P̄ − κρr̄/4U,_s and (sinψ),_s = −ζ sinψ/2r̄. In the exterior (ρ = 0) we choose ζ = 1, M̄ = M, and thus get the equation 1 = −r̄(ln P̄),_s/2 for P̄(s). In the interior, however, it is convenient to define q(s) → U,_s = M̄/P̄q² (spherical case: U,_s = M̄/r²). We obtain sin²ψ = 2U,_s P̄q and the equation ζ = 1 − 2r̄(P̄q),_s/P̄q − κρr̄/2U,_s for P̄q(s).
Further, at any point the vector k differs by an angle α from another unit vector k*, which must be determined by the conditions of vanishing mixed terms. The inclination ψ will then be between k* and k'. The unit normal reads

n' = −[k cosα + (t cosγ + u sinγ)sinα]sinψ + n cosψ,

where t and u denote orthogonal unit vectors and γ is a second angle for these principal directions. In case of rotational symmetry we have γ ≡ 0 and

−∇⊗n' = ∇ψ⊗k' + (∇α⊗t cosα + ∇⊗k sinα + ∇⊗t)sinψ.

The condition t·∇ψ = 0 gives sinα = −t·∇ln(P̄U,_s) r̄/2Ū, with Ū cosα = U. The 3D tensor

B' = −Q_ψ V^(−1)(∇⊗n')N' = (1/r₁)k'⊗k' + B_k sinψ

is now symmetric and contains the 2D curvature tensor B_k (B_k·k = 0).
The two factors Ū and P̄ depend on α; thus an iteration must be applied. The generalization for the angle χ is

sinχ = (2M̄r̄/Ūq)(cosψ/Y),_s cosψ,   1/r₀ = −Ūω sinψ/2r̄,   1/r₁ = −Ūζ sinψ/2r̄,   1/r₀r₁ = Y(sinχ),_s q/2M̄r̄.

On a stripe R⁴ ⊂ R⁸ we use r̃₀ = r₀/tanβ, as well as the 4D curvatures

B'₁ = [(1/r₀)h⊗h + (1/r₁)k'⊗k' + B_k sinψ]/2,   B'₃ = ...,

and the 2D curvatures B'₂ = [(1/r̃₀)h⊗h + (1/r̃₁)k'⊗k']/2, B'₄ = ... . We write then

B'₁(B'₁··N') − B'₁·B'₁ = (1/2r₀r₁ + 1/r₀r̄)h⊗h + (1/2r₀r₁ + 1/r₁r̄)k'⊗k' + (sin²ψ/2)[−Ū(ω + ζ)B_k/2r̄ + (1/r₂r₃)K_n],   (26)

B'₃(B'₃··N') − B'₃·B'₃ = (1/2r₀r₁ + 1/r₀r̄)h⊗h + (1/2r₀r₁ + 1/r₁r̄)k'⊗k' + (sin²ψ/2)[−Ū(ω + ζ)EB_kE/2r̄ + (1/r₂r₃)K_n],   (27)

B'₂(B'₂··N') − B'₂·B'₂ = (1/2r̃₀r̃₁)(h⊗h + k'⊗k') = B'₄(B'₄··N') − B'₄·B'₄.   (28,29)

Here E is the 2D permutation tensor, 1/r₂r₃ = B_k··(EB_kE)/2 = K̄ the Gauss curvature, 1/r̄ = 1/2r₂ + 1/2r₃ and 1/r̄' = sinψ/r̄. Adding the four Eqs. 26-29, we obtain, with 1/r₂' = sinψ/r₂, 1/r₃' = sinψ/r₃ and B_k + EB_kE = 2K_n/r̄, the 4D Ricci tensor (i summed from 1 to 4)

R_4D = B'_i(B'_i··N') − B'_i·B'_i = (κ/2)[(ρ + 3p)h⊗h + (ρ − p)(k'⊗k' + K_n)]
     = sin²ψ(Ū/2r̄r̂)[(3ω − ζ − 2)h⊗h − (ω + ζ − 2)k'⊗k'] − sin²ψ[(Ū/2r̄r̂)(ω + ζ) − K̄]K_n
     = (1/r₀r̄ + 1/r₁r̄ + 1/r₂r₃)K_n + (1/r₀r₁ + 1/r̃₀r̃₁ + 2/r₀r̄)h⊗h + (1/r₀r₁ + 1/r̃₀r̃₁ + 2/r₁r̄)k'⊗k',   (30)

with

Ū = r̄r̂K̄,   κp = K̄ sin²ψ(ω − 1),   κρ = K̄ sin²ψ(1 − ζ),   (31-33)
if tan²β = [(2 − ζ)/ω − 1]2r̄/r̂Ūζ − 1. Similar to Eqs. 23-25 we have

ω = (2cosψ/K̄r̂ sin²ψ)Y ∂(cosψ/Y)/∂s,   1/r₀r₁ + 1/r̃₀r̃₁ = 1/r̂₀r̂₁,

(∂/∂s + 1/r̄)[cosψ ∂(cosψ/Y)/∂s] = (1/Y)[∂/(2r̂∂s) + K̄]sin²ψ,

and ∂(P̄q)/∂s = (K̄r̂ cosα − 2/r̄ − κρ/2U,_s)P̄q − κρr̄ cosα/2U,_s for P̄q(s). As for the general gravitational lens, outside with ζ = 1, ω = Y = 1, we use the equation of a geodesic curve N'd²r'/ds'² = 0. A type of Eqs. 7, 8 then gives the corresponding backwards deformation into the flat space. With an auxiliary sphere of radius p̂ = r̂/sinψ̂, approximating the hypersurface at ψ̂ = ψ, for r₁ and r̃₁ = r₁/tanβ, we write, similar to Eqs. 26-29, four parts

N₁d²r = −k sin²ψ[Ū(dr̂·h)² + Ūdr̂²/cos²ψ + 2r̄ dr·B_k·dr]/4r̄,
N₃d²r = −k sin²ψ[Ū(dr̂·h)² + Ūdr̂²/cos²ψ + 2r̄ dr·EB_kE·dr]/4r̄,   (34,35)

N₂d²r = N₄d²r = −k sin²ψ tanβ[Ū(dr̂·h)² + Ūdr̂²/cos²ψ]/4r̄.   (36,37)

The vector Nd²r_4D(σ') = (N₁d²r + N₃d²r)cosβ + (N₂d²r + N₄d²r)sinβ gives the image relation Nd²r_4D(σ') = −k sin²ψ K̄(dσ'² − 3dr·K_n·dr)/2, with K_n = N − k⊗k and if dr̂ = dr/2, dr̂·h = ic dt cosψ/2. For a 4D null geodesic or light ray, where dσ'² = k̄dϑ², k̄ → 0 holds, we then obtain simply (real when K̄ > 0, but imaginary when K̄ < 0)

Nd²r_4D(ϑ) = k sin²ψ K̄(3dr·K_n·dr)/2.   (38)
The surrounding field of a rotating star, for instance, is nonspherical. In the rotating system there, we may write for the scalar of the inertial force V = −(Ω²/2c²)r·K₀·r, where K₀ = N − k₀⊗k₀ is the projector for the equatorial plane and Ω denotes the angular velocity. The gravitational potential reads U = −GM/rc² = −M̄/r, and the gradient of the sum is ∇₃(U + V) = M̄(r − F̄K₀r)/r³ with F̄ = Ω²r³/M̄c². The normal of U + V = const. is k, and

B_k = −∇⊗k = [K_n − F̄K_nK₀ − k⊗∇F̄·K₀r − k⊗∇(lnW)·(r − F̄K₀r)]/W,   where W² = (k₀·r)² + (1 − F̄)²r·K₀·r.

In the equatorial plane we have k·∇F̄ = 0, k·k₀ = 0, W = (1 − F̄)r, α = 0. We thus get the curvatures 1/r̄ = (2 − F̄)/(1 − F̄)2r, K̄ = 1/(1 − F̄)r², sin²ψ = 2M̄P̄(U,_r + V,_r), and k·∇₃(U + V) = WM̄/r³, (sinψ),_r = −Ū sinψ/2r̄. The key relations lead to d(lnP̄)/dr and we find finally

P̄ = (1 − F̄/2)^(2/3)(1 − F̄)^(−1),   (39)

dσ'² = −(cos²ψ)c²dt² + (cos²ψ)^(−1)dr² + r²dφ² + r²dθ²,   (40)

cos²ψ = 1 − (2M̄/r)(1 − F̄/2)^(1/3) > 1 − 2M̄/r.   (41)

For small Ω, cos²ψ ≈ 1 − 2M̄/r + Ω²r²/3c². A Lorentz transformation leads, with 1 − Ω²r²/c² = 1/X, to Eq. 42. In comparison, Eq. 43 shows the Kerr solution [9] (also [10], Eq. (10.58)), where Δ/r² = 1 − 2M̄/r + a²/r²:

dσ'² = −[c dt̄ − Ωr²dφ̄/c]²X cos²ψ + [r dφ̄ − Ωr dt̄]²X + dr²/cos²ψ + ...   (42)
dσ'² = −[c dt̄ − a dφ̄]²Δ/r² + [(r² + a²)dφ̄ − ac dt̄]²/r² + r²dr²/Δ + r²dθ².   (43)

This tentative approach may be extended to the interior of the rotating body if ρ = ρ₀. In the equatorial plane we have 1/r̄ = (2 − F̄_a)/(1 − F̄_a)2r, K̄ = 1/(1 − F̄_a)r², sin²ψ = 2(U,_r + V,_r)P̄q, U,_r + V,_r = κρ₀(1 − F̄_a)r/6, r̂/r̄ = r̄(P̄r),_r/P̄r. The elimination of ζ gives the result for P̄q and, using the condition at r = a, for cos²ψ and 1/Y:

3r d(P̄q)/dr + (4 − F̄_a)P̄q = 6,   P̄q = Cr^(−(4−F̄_a)/(2−F̄_a)) + ...,   (44)

cos²ψ = 1 − (2M̄r²/a³)(1/(3 − F̄_a)){(r/a)^(2F̄_a/3(1−F̄_a))·[3(1 − F̄_a)/(2 − F̄_a)] − ...},   (45)

[2(1 − F̄_a)r/(2 − F̄_a)]·d[(cosψ/Y),_r cosψ]/dr + (cosψ/Y),_r cosψ = (1/Y)[(sin²ψ),_r/2 + 2sin²ψ/(2 − F̄_a)r].   (46)
References
1. Champagne, E B (1974) Holographic interferometry extended. International Optical Computing Conference, Zurich, IEEE: 73-74
2. Cuche, D E (2000) Modification methods in holographic and speckle interferometry. Interferometry in Speckle Light, Springer: 109-114
3. Osten, W (2003) Active metrology by digital holography. Speckle Metrology, SPIE 4933: 96-110
4. Stetson, K A (1974) Fringe interpretation for hologram interferometry of rigid-body motions and homogeneous deformations. J. Opt. Soc. Am. 64: 1-10
5. Walles, S (1970) Visibility and localization of fringes in holographic interferometry of diffusely reflecting surfaces. Ark. Fys. 40: 299-403
6. Schwarzschild, K (1916) Über das Gravitationsfeld eines Massenpunktes. Deutsche Akademie der Wissenschaften, Kl. Math.: 196
7. Misner, C W, Thorne, K S, Wheeler, J A (1972) Gravitation. W.H. Freeman and Company, New York: 837
8. Sexl, R U, Urbantke, H K (1981) Gravitation und Kosmologie. Wissenschaftsverlag, Wien: 240-243
9. Kerr, R P (1963) Gravitational field of a spinning mass as an example of algebraically special metrics. Physical Review Letters 11(5): 237-238
10. Goenner, H (1996) Einführung in die spezielle und allgemeine Relativitätstheorie. Spektrum Akademischer Verlag, Heidelberg: 303-304
Dynamic evaluation of fringe parameters by recurrence processing algorithms Igor Gurov, Alexey Zakharov Saint Petersburg State University of Information Technologies, Mechanics and Optics Sablinskaya Street 14, 197101 Saint Petersburg Russia
1 Introduction

Fringe processing methods are widely used in non-destructive testing and optical metrology. High accuracy, noise-immunity and processing speed are very important in practical use of the systems based on fringe formation and analysis. A few commonly used fringe processing methods are well known, like the Fourier transform (FT) method [1] and the phase-shifting interferometry (PSI) technique (see, e.g., [2]). The FT method is based on a description of interference fringes in the frequency domain using an integral transformation and can be classified as a non-parametric method, because it does not involve a priori information about fringe parameters in an explicit form. Indeed, the Fourier transformation formula

S(f) = F{s(x)} = ∫_−∞^+∞ s(x) exp(−j2πfx) dx   (1)
is valid for any function s(x), provided it satisfies an integrability condition. In non-parametric methods, a priori knowledge about general fringe properties is used mainly after the calculations, to interpret the processing results. The PSI methods utilize series of fringe samples or a few fringe patterns obtained with known phase shifts between them. This means that PSI methods belong to the parametric class, since a priori information about at least the fringe phase is used in explicit form. A new approach to interference fringe processing was recently described in detail [3], based on fringe description by stochastic differential equations in a state space, involving a priori information about fringe
New Methods and Tools for Data Processing
119
properties in a well-defined explicit form, which allows dynamic evaluation of interference fringe parameters. In the discrete case, the difference equations yield recurrence algorithms in which the fringe signal is predicted one discretization step ahead using the full information available before this step, and the fringe-signal prediction error is used for step-by-step dynamic correction of the fringe parameters. The developed recurrence fringe processing algorithms have been successfully applied to estimating interference fringe parameters in rough-surface profilometry [3], multilayer tissue evaluation [4], optical coherence tomography (OCT) [5] and the analysis of 2-D fringe patterns [6]. New results of applying the proposed approach to PSI were obtained recently [7]. In this paper, the general approach and the peculiarities of recurrence fringe processing algorithms are considered and discussed.
2 Fringe description based on the differential approach

The commonly used mathematical model of interference fringes is expressed as

s(x) = B(x) + A(x) \cos \Phi(x),   (2)

where B(x) is the background component, A(x) is the fringe envelope, and \Phi(x) is the fringe phase,

\Phi(x) = \varepsilon + 2\pi f_0 x + \varphi(x),   (3)

where \varepsilon is the initial phase at the point x = 0, f_0 is the mean fringe frequency, and \varphi(x) describes the phase-change nonlinearity. In the deterministic model of Eq. (2), the fringe background, envelope and phase nonlinearity are usually assumed to belong to an a priori known class of deterministic functions that vary slowly with respect to the cosine function \cos(2\pi f_0 x). This assumption allows one to use processing algorithms applicable to high-quality fringes obtained when measuring mirror-reflecting objects. In the parametric approach, a priori knowledge about the dependence of the fringe signal on its parameters is introduced into the processing algorithm in explicit form before the calculations. In this way, the fringe signal is initially defined as dependent on its parameters, i.e.
s(x) = s(x, \theta); \quad \theta = (B, A, \Phi, f)^T,   (4)

where \theta is the vector of fringe parameters in the state space \{\theta\}. This makes it possible to take into account more accurately the a priori knowledge about the supposed variations of the fringe parameters and their dynamic evolution.
If the fringe-signal background component B, amplitude A and fringe frequency f are supposed, e.g., to be constant, and the fringe phase \Phi varies linearly in a given interferometric system, one can write

d\theta/dx = (0, 0, 2\pi f, 0)^T.   (5)

It is evident that Eq. (5) describes ideal monochromatic fringes. If the fringe envelope follows a Gaussian law, which is inherent in low-coherence fringes, all possible envelopes are solutions of the differential equation for the envelope

dA/dx = -A (x - x_0)/\sigma^2,   (6)

where x_0 and \sigma are, respectively, the position of the envelope maximum and the Gaussian curve width parameter. Random variations of the fringe parameters can be introduced by modifying Eq. (5) as follows:

dB/dx = w_B(x), \quad dA/dx = w_A(x), \quad d\Phi/dx = 2\pi f + w_\Phi(x), \quad df/dx = w_f(x),   (7)

where w = (w_B, w_A, w_\Phi, w_f)^T is a random vector.
The first-order Eqs. (7) are stochastic differential equations of the Langevin kind, which can be rewritten in vectorial form as

d\theta/dx = \Psi(x, \theta) + w(x),   (8)

where the first term describes the deterministic evolution of the fringe parameters and the second one represents their random variations. The a priori information about the evolution of the parameter vector \theta is included through appropriate selection of the vectorial function \Psi and the supposed statistical properties of the "forming" noise w(x). It is important to emphasize that Eq. (8) also defines non-stationary and non-linear processes. In the discrete case, Eq. (8) is rewritten in the form of a stochastic difference equation defining a series of discrete samples at the points x_k = k\Delta x, k = 1, ..., K, where \Delta x is the discretization step. This provides the possibility of recurrence calculation at the k-th step in the form

\theta(k) = \theta(k/k-1) + w(k),   (9)

where \theta(k/k-1) is the value predicted from the (k-1)-th step to the k-th one, taking into account the concrete properties of Eq. (8). The prediction in Eq. (9) contains an error, i.e. a difference between the a priori knowledge at the (k-1)-th step and the real information at the k-th step. This difference is available for observation only as a fringe-signal error. To obtain the a posteriori information about the parameters at the k-th step, the signal error should be transformed into a correction of the fringe parameters, namely

\hat{\theta}(k) = \theta(k/k-1) + P(k) \{ s_{obs}(k) - s[k, \theta(k/k-1)] \},   (10)

where P(k) is a vectorial function that transforms the scalar difference between the observed signal sample value s_{obs}(k) and the modelled (predicted) one s(k, \theta) into a vectorial correction of the fringe parameters. The peculiarities of recurrence fringe processing algorithms based on the general formula of Eq. (10) are considered in the following section.
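As a concrete illustration of the predict-and-correct recurrence of Eqs. (9)-(10), the following sketch tracks a single unknown parameter, the fringe envelope A, of a noise-free cosine signal whose frequency and phase are assumed known; here P(k) reduces to a fixed gain times the signal's sensitivity to A. The gain and signal values are illustrative assumptions, not taken from the paper.

```python
import math

def track_amplitude(samples, f0, dx, gain=0.2):
    """Minimal scalar instance of the predict/correct recurrence, Eq. (10):
    the only unknown parameter is the envelope A; frequency and phase are
    assumed known, so P(k) is a fixed gain times ds/dA = cos(Phi_k)."""
    A_est = 0.0  # a priori estimate before any data
    for k, s_obs in enumerate(samples):
        c = math.cos(2 * math.pi * f0 * k * dx)
        s_pred = A_est * c          # signal predicted from current parameters
        error = s_obs - s_pred      # observable prediction error
        A_est += gain * error * c   # parameter correction, Eq. (10)
    return A_est

# synthetic noise-free fringe with true envelope A = 2.0
dx, f0 = 0.01, 5.0
signal = [2.0 * math.cos(2 * math.pi * f0 * k * dx) for k in range(400)]
A = track_amplitude(signal, f0, dx)
```

With the signal model matching the data exactly, the estimate converges to the true envelope within a few hundred steps.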
3 Recurrence fringe processing using the extended Kalman filtering algorithm

As is known, discrete Kalman filtering is defined by an observation equation and a system equation. The observation equation describes the evolution of the signal s(k) as a function of the signal parameters, and the system equation defines the dynamic evolution of the parameter vector \theta(k). For linear discrete Kalman filtering these equations are, respectively (see, e.g., [3]),

s(k) = C(k) \theta(k) + n(k),   (11)

\theta(k) = F(k-1) \theta(k-1) + w(k),   (12)

where C(k), F(k) are known matrix functions, n(k) is the observation noise, and w(k) is the system (forming) noise. In the discrete Kalman filtering algorithm (see Fig. 1), the parameter vector at the k-th step is predicted using the estimate \theta(k-1) obtained at the previous step. According to Eq. (12), the predicted estimate of the parameter vector is calculated as F(k-1)\theta(k-1). The a posteriori estimate is obtained using a recurrence equation involving the input signal sample s(k) in the form

\theta(k) = F(k-1)\theta(k-1) + P(k) [s(k) - C(k) F(k-1) \theta(k-1)],   (13)

where P(k) is the filter amplification factor. The useful component of an interferometric signal is defined by the nonlinear observation equation

s(k) = A(k) \cos \Phi(k) + n(k) = h(\theta(k)) + n(k),   (14)

and the a posteriori estimate of the vector of fringe parameters is expressed as

\hat{\theta}(k) = \theta(k/k-1) + P(k) [s(k) - h(\theta(k/k-1))].   (15)
Fig. 1. Scheme of linear Kalman filter
The Kalman filter amplification factor P(k) is calculated [3] in the following form, involving the covariance matrix R_{pr}(k) of the a priori estimation error of the vector \theta and the observation noise covariance matrix R_n:

P(k) = R_{pr}(k) C^T(k) [C(k) R_{pr}(k) C^T(k) + R_n]^{-1},   (16)

where C(k) = h'_\theta(\theta(k/k-1)) is obtained by local linearization of the nonlinear observation equation. The matrices R_{pr}(0) and R_n in Eq. (16) are evaluated a priori, taking into account the general correlation properties and dispersions of the fringe parameters and the observation noise. The covariance matrix of the a posteriori estimation error is determined as

R(k) = [I - P(k) C(k)] R_{pr}(k).   (17)

It is clearly seen that the Kalman filtering method allows introducing a priori information about the dynamic evolution of the fringe parameters, including their correlation properties, in well-defined form.
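A compact numerical sketch of the extended Kalman filter of Eqs. (11)-(17) may look as follows. The state is reduced to \theta = (A, \Phi), the fringe frequency is assumed known, and the noise covariances q and r are illustrative choices rather than values from the cited experiments.

```python
import numpy as np

def ekf_fringe(samples, f0, dx, q=1e-4, r=0.01):
    """Extended Kalman filter sketch for a two-parameter fringe state
    theta = (A, Phi) with observation h(theta) = A*cos(Phi), Eq. (14)."""
    theta = np.array([1.0, 0.0])   # initial guess for (A, Phi)
    R_pr = np.eye(2)               # a priori error covariance Rpr(0)
    F = np.eye(2)                  # system matrix; Phi advances by 2*pi*f0*dx
    Q = q * np.eye(2)
    est = []
    for s_obs in samples:
        # prediction step, Eq. (12)
        theta = F @ theta + np.array([0.0, 2 * np.pi * f0 * dx])
        P_cov = F @ R_pr @ F.T + Q
        # local linearization of h(theta), giving C(k)
        A, Phi = theta
        C = np.array([[np.cos(Phi), -A * np.sin(Phi)]])
        # amplification factor, Eq. (16), and correction, Eq. (15)
        K = P_cov @ C.T @ np.linalg.inv(C @ P_cov @ C.T + r)
        theta = theta + (K * (s_obs - A * np.cos(Phi))).ravel()
        # a posteriori error covariance, Eq. (17)
        R_pr = (np.eye(2) - K @ C) @ P_cov
        est.append(theta.copy())
    return np.array(est)

# noise-free test fringe: true envelope A = 1.5
dx, f0 = 0.005, 4.0
x = np.arange(1000) * dx
s = 1.5 * np.cos(2 * np.pi * f0 * x)
est = ekf_fringe(s, f0, dx)
```

The linearization C(k) is simply the gradient of h with respect to (A, \Phi); on a noise-free test fringe the filter locks onto the true envelope within a few samples.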
4 Experimental results

4.1 Dynamic recurrence processing of low-coherence interference fringes

The Kalman filtering algorithm described above has been used for processing OCT signals and recovering tomograms of multilayer tissues [4, 8]. Fig. 2 shows an example of an experimental tomogram represented in an inverse logarithmic grey-level scale for better visibility.
Fig. 2. (a) Optical coherence tomogram recovered by evaluating envelopes of low-coherence fringes in parallel depth-scans of multilayer tissue; (b) example of fringe envelope evaluation within a single depth-scan by the Kalman filtering algorithm; (c) example of fringes with variable local frequency; and (d) unwrapped phase (in radians) of the signal in (c), recovered dynamically by the extended Kalman filter (the sample number k is indicated on the horizontal axes)
The accuracy of the method was compared with well-known analogue amplitude demodulation methods such as signal rectification with subsequent low-pass filtering and synchronous amplitude demodulation [9]. It was found that the Kalman filtering method provides better resolution of fringe envelope variations.

4.2 Application to phase-shifting interferometry

It is well known that the PSI technique is one of the most accurate methods for measuring fringe phase. It provides high accuracy, with a phase error near 2\pi/1000 or less. The basic approach to fringe processing in PSI is usually a
least-squares fitting of the interferometric data series and phase estimation under the condition that the fitting error is minimized. The model of Eqs. (2)-(3) is characterized by the parameter vector \theta = (B, A, \Phi, f)^T, taking into account that the initial phase \varepsilon can be calculated as \varepsilon = \Phi(k) - 2\pi f k \Delta x. If the phase step 2\pi f \Delta x is not stable, e.g. due to external disturbances of the optical path difference in the interferometer, this may be interpreted by an observer as fringe frequency variations, i.e. f = f(k). Thus, knowing \Phi(k) and f(k), one can easily calculate the initial phase as

\hat{\varepsilon}(k) = \Phi(k) - 2\pi \sum_{k'=1}^{k} f(k') \Delta x.
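The summation above can be sketched directly; the drifting phase-step sequence below is a made-up example:

```python
import math

def initial_phase(phi_k, freqs, dx, k):
    """Recover the initial phase eps from the accumulated phase Phi(k)
    when the phase step 2*pi*f*dx varies from step to step:
    eps_hat(k) = Phi(k) - 2*pi * sum_{k'=1..k} f(k') * dx."""
    return phi_k - 2 * math.pi * dx * sum(freqs[:k])

# simulated phase steps drifting around a nominal frequency f0
dx, f0, eps = 1.0, 0.01, 0.7
freqs = [f0 * (1 + 0.1 * math.sin(0.3 * k)) for k in range(1, 21)]
phi = eps + 2 * math.pi * dx * sum(freqs)   # accumulated phase after 20 steps
eps_hat = initial_phase(phi, freqs, dx, 20)
```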
Fig. 3. (a) A priori supposed phase distribution equal to \pi/2; (b), (c) wavefront estimates after 5 and 10 recurrence processing steps, respectively; and (d) true tilted wavefront obtained after the 20th phase step
Fig. 3 shows the dynamic evolution of the initial phase estimates obtained when measuring a tilted wavefront. The number of lateral points at which fringe phases are calculated is 50×50. The phase-shift step was selected to be equal to 2\pi/100. It is seen that after approximately the 20th step the phase errors become small. It has been found [7] that the phase error becomes smaller than 2\pi/100 after approximately 20 processing steps and smaller than 2\pi/1000 after half of a fringe period has been processed. This confirms the high phase accuracy of the extended Kalman filtering algorithm.
5 Discussion and conclusions

Recurrence fringe processing methods are based on the difference-equation formalism, and a priori knowledge about the fringes has to be included in Eq. (16) before the calculations. This means that recurrence parametric methods are more specialized, which provides advantages in accuracy, noise immunity and processing speed. At first sight, the requirement of accurate a priori knowledge seems like a restriction. However, almost the same information is needed after the calculation in conventional methods to interpret the processing results. The parametric approach allows one to use a priori knowledge in well-defined form, including non-stationary and nonlinear fringe transformations. Thus, the parametric approach presents a flexible tool for dynamic fringe analysis and processing. The advantages of the recurrence algorithms considered here consist in high noise immunity and signal processing speed.
6 References

1. Takeda, M, Ina, H, Kobayashi, S (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72: 156-160
2. Greivenkamp, J E, Bruning, J H (1992) Phase shifting interferometry. In: Malacara, D (ed) Optical Shop Testing. Wiley, New York
3. Gurov, I, Ermolaeva, E, Zakharov, A (2004) Analysis of low-coherence interference fringes by the Kalman filtering method. J. Opt. Soc. Am. A 21: 242-251
4. Alarousu, E, Gurov, I, Hast, J, Myllylä, R, Zakharov, A (2003) Optical coherence tomography of multilayer tissue based on the dynamical stochastic fringe processing. Proc. SPIE 5149: 13-20
5. Alarousu, E, Gurov, I, Hast, J, Myllylä, R, Prykäri, T, Zakharov, A (2003) Optical coherence tomography evaluation of internal random structure of wood fiber tissue. Proc. SPIE 5132: 149-160
6. Zakharov, A, Volkov, M, Gurov, I, Temnov, V, Sokolovski-Tinten, K, von der Linde, D (2002) Interferometric diagnostics of ablation craters created by femtosecond laser pulses. J. Opt. Technol. 69: 478-482
7. Gurov, I, Zakharov, A, Voronina, E (2004) Evaluation of interference fringe parameters by recurrence dynamic data processing. Proc. ODIMAP IV: 60-71
8. Bellini, M, Fontana, R, Gurov, I, Karpets, A, Materazzi, M, Taratin, M, Zakharov, A (2005) Dynamic signal processing and analysis in the OCT system for evaluating multilayer tissues. To be published in Proc. SPIE
9. Gurov, I, Zakharov, A, Bilyk, V, Larionov, A (2004) Low-coherence fringe evaluation by synchronous demodulation and Kalman filtering method: a comparison. Proc. OSAV'2004: 218-224
Fast hologram computation for holographic tweezers Tobias Haist, Marcus Reicherter, Avinash Burla, Lars Seifert, Mark Hollis, Wolfgang Osten Institut für Technische Optik, Universität Stuttgart Pfaffenwaldring 9 70569 Stuttgart, Germany
1 Introduction

In this paper we give a short introduction to the basics of using consumer graphics boards for computing holograms. These holograms are employed in a holographic tweezer system in order to generate multiple optical traps. The phase-only Fourier holograms, generated at video frequency, are displayed on a liquid crystal display and then optically reconstructed by the microscope objective. By using a standard consumer graphics board (NVidia 6800GT) we outperform our fastest CPU-based solution, which employs machine coding and the SSE multimedia extensions, by a factor of more than thirty at a floating point precision of 32 bit. With the help of this fast computation it is now possible to control a large number of Gaussian or doughnut-shaped optical traps independently of each other in three dimensions at video frequency.
2 Holographic Tweezers

Holographic tweezers are a special case of optical tweezers in which the micromanipulation of small objects (e.g. cells) is realized by a holographically generated light field [1,2,3]. If modern spatial light modulators (SLM) are used as hologram media it is possible to change the trapping field in video real time. By superposition one can generate a large number of traps and control them in three dimensions with high accuracy. In addition it is possible to correct for field-dependent aberrations via the holograms [4] and to change the trapping potential (e.g. doughnut modes [2]).
The principal setup is depicted in Fig. 1. The SLM, a Holoeye LC-R-2500 reflective twisted-nematic liquid crystal on silicon (LCoS) display with XGA (1024 x 768) resolution (pixel pitch: 19 µm, fill factor: 93%), is illuminated by a 150 mW laser diode working at 830 nm (Laser 2000 LHWA-830-150). By a proper selection of the input and output polarization and the driving signal it is possible to obtain a linear 2\pi phase shift. For these settings one still has an amplitude modulation of about 70%. The phase-only holograms displayed on the LCD are coupled into the microscope objective (Zeiss Achroplan 100x, 1.0W) by a telescope and thereby Fourier transformed into the trapping volume.
Fig. 1. Principal setup for holographic tweezers
Of course the complete flexibility of this method is only exploited if one is able to compute the phase-only Fourier holograms in real time. A simple estimation of the computational cost (see below) shows that this is not trivial if ordinary personal computers (PC) without specialized hardware are used. For a single trap j the corresponding light field in the Fourier (hologram) plane equals

E_j(x, y) = E_0 \exp[ i(k_x x + k_y y) + i\alpha (x^2 + y^2) + i\psi(x, y) ],   (1)
where the lateral and axial position of the trap is given by the tilt terms \exp[i(k_x x + k_y y)] and the quadratic phase term \exp[i\alpha(x^2 + y^2)]. The additional phase \psi(x, y) determines the light field's potential. For Gaussian-shaped light distributions (and a Gaussian input field) this term equals zero. In optical trapping a doughnut mode is often advantageous. In this case a phase singularity has to be introduced, resulting in

\tan(\psi(x, y)/n) = y/x   (2)

for a doughnut of order n. Other light fields are of course possible. For M traps we just have to compute the phase of the superposition of the individual light fields:

\varphi(x, y) = \arctan\left[ \Im\left( \sum_{j=1}^{M} E_j \right) \Big/ \Re\left( \sum_{j=1}^{M} E_j \right) \right].   (3)
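Under the assumptions of Eqs. (1)-(3), a hologram for M traps can be computed by superposing the complex trap fields and keeping only the phase. The following sketch does this with NumPy; the grid size and the trap tuples are illustrative values, not the authors' parameters.

```python
import numpy as np

def tweezer_hologram(traps, nx=256, ny=192):
    """Phase-only Fourier hologram per Eqs. (1)-(3): superpose the complex
    fields E_j and keep only the resulting phase. Each trap is a tuple
    (kx, ky, alpha, n): tilt terms, defocus term, doughnut order."""
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    x -= nx / 2
    y -= ny / 2
    field = np.zeros((ny, nx), dtype=complex)
    for kx, ky, alpha, n in traps:
        psi = n * np.arctan2(y, x)   # phase singularity of order n, Eq. (2)
        field += np.exp(1j * (kx * x + ky * y + alpha * (x**2 + y**2) + psi))
    return np.angle(field)           # hologram phase, Eq. (3)

# one Gaussian trap and one first-order doughnut trap (illustrative values)
holo = tweezer_hologram([(0.1, 0.0, 0.0, 0), (0.0, 0.2, 1e-4, 1)])
```

The returned array holds the wrapped phase in [-\pi, \pi], which would then be mapped to the SLM's linear 2\pi modulation range.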
3 CPU-based computation

For one hundred doughnuts and a 1024 x 768 (XGA) pixel hologram we have to compute 78.6·10^6 hologram pixels. At least 14 operations (including trigonometry) are necessary for the computation of one pixel. For a 15 Hz update of the hologram we therefore need at least 14 · 15 · 78.6·10^6 = 16.5·10^9 floating point operations per second (16.5 GFlops). The peak performance of an Intel Pentium 4 CPU at 3.2 GHz is about 6.4 GFlops. So even if one were able to reach that peak performance, the computation of such holograms would not be fast enough on the CPU. The average achievable performance is considerably below the peak performance. In [5] it is reported that carefully handcoded matrix multiplication using the SSE extensions on the Pentium 4 reaches up to 3.3 GFlops. This average achievable performance of course strongly depends on the problem to be solved. For our handcoded hologram computation we achieved about 0.5 GFlops on a Pentium 4 at 3.0 GHz (also using handcoded SSE). To generate the holograms fast enough, basically two different approaches are possible. One might try to improve the algorithms and optimize the code for the computation, or one might use faster hardware. Specialized hardware for the computation of holograms [6,7] as well as high-speed digital signal processor boards are available, but the performance-to-cost ratio (GFlops/Euro) is much better if one uses consumer graphics boards (or even video-games hardware). Furthermore, by going that way one can strongly profit from the extraordinary performance growth over time in that field. Whereas for ordinary CPUs, according to Moore's law, we have more or less a doubling of the performance every eighteen months, this doubling of performance occurs every twelve months for graphics processing units (GPU).
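The cost estimate at the start of this section can be reproduced directly:

```python
# Reproducing the estimate from the text: 100 doughnuts, XGA hologram,
# at least 14 operations per (pixel, trap) evaluation, 15 Hz update rate.
pixels = 1024 * 768
traps = 100
ops_per_eval = 14
rate_hz = 15

pixel_evals = pixels * traps               # 78.6e6 hologram-pixel evaluations
flops = ops_per_eval * rate_hz * pixel_evals
print(flops / 1e9)                         # required GFlops
```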
4 GPU-based computation

In the past, several authors have already used graphics boards for the computation of holograms [8,9,10]. Their approaches were based on amplitude-modulation Fresnel holograms. It was therefore possible for them to render a hologram as the superposition of precomputed holograms which were translated and scaled. Translation, rotation, and scaling are of course basic operations which modern graphics boards perform very easily. For our phase-only Fourier holograms this is not possible, because translating the hologram would not translate the reconstruction but would lead to an additional phase tilt. Therefore it is necessary to process the different exponential terms of Eq. (1) separately and add up all the results. Fortunately, today it is also possible to implement the hologram computation directly on the GPU, because current GPUs are programmable in a quite flexible way. A lot of scientific applications running on GPUs have been proposed and implemented. For a good overview of this field the reader is referred to http://www.gpgpu.org. Currently we are using an NVidia 6800GT based AGP graphics board (about 350 Euro in spring 2005) within an ordinary PC. It incorporates 16 pixel shaders, each consisting of two floating point units, giving an overall peak performance of 51.2 GFlops at 400 MHz, since every unit can in principle work on 4 floats (red, green, blue, and alpha) in parallel. For implementing Eqs. (1) to (3) we use the straightforward algorithm depicted in Fig. 2. The computationally intensive parts are done in Cg, a freely available programming language for GPU programming [11]. The Cg syntax should be easily understandable if one is already familiar with C. A short example is shown in Fig. 3. The basic framework of the program is done in C++ using the OpenGL library.
Fig. 2. Algorithm for the computation of the phase-only Fourier holograms using the GPU.
The computation of the sum over the Ej (see Eq. (1) and (3)) was implemented by using a texture or a so-called p-buffer (pixel buffer). For both versions the basic processing and storage unit is a pixel. Every pixel on the GPU consists of four numbers (red, green, blue, and alpha). Therefore we are able to compute within every texture pixel two hologram pixels (real and imaginary part) at the same time. This approach is depicted in Fig. 4.
// -------------------------------------------------
// Compute the phase out of the complex (re, im) field.
// hcomplex is a samplerRECT holding two complex values per texel;
// PI_2 denotes 2*pi.
float hphase(in float2 wpos)
{
    float xx = (wpos.x - 0.5) / 2.0;
    float x  = floor(xx);
    float2 h;
    wpos = float2(x + 0.5, wpos.y);
    if (x == xx)                    // even hologram pixel: use (r, g)
        h = texRECT(hcomplex, wpos).xy;
    else                            // odd hologram pixel: use (b, a)
        h = texRECT(hcomplex, wpos).zw;
    float A = atan2(h.y, h.x) / PI_2;   // phase, normalized to one period
    return frac(A);
}

Fig. 3. Example Cg code computing the phase by an arctan operation
Fig. 4. Doubling of the performance is possible if two hologram pixels are computed within each pixel of the texture. Two complex values corresponding to two hologram pixels can be stored/processed by one GPU pixel.
The overall performance that can be achieved depends on the details of the implementation as well as on the number of doughnuts to be reconstructed. For 100 doughnuts our best solution results in 0.78 ms per
doughnut (1024 x 768 hologram size). This corresponds to at least 14.1 GFlops. The driver for the board was set to "dynamic overclocking" in order to obtain the best performance. For such large numbers of doughnuts it is advantageous to do the looping over all doughnuts within Cg as well. This outperforms our fastest CPU solution by a factor of more than thirty. The performance can be further increased if two graphics boards are used in one PC (linking is possible via the scalable link interface (SLI) of NVidia).

Table 1. Performance of hologram computation based on an NVidia 6800GT with dynamic overclocking for different numbers of doughnuts. "Best" always denotes the shortest time if the program is run several times. All time values are listed as initialization time/update time. Initialization is only done once while running the program. Results for four different implementations are shown.

            2 doughnuts           2 doughnuts           2 doughnuts           4 doughnuts
            for loop inside Cg    for loop outside Cg   p-buffer              for loop inside Cg
            Best      Avg         Best      Avg         Best      Avg         Best      Avg
1           31/0      47/0        15/0      15/0        16/0      16/0        47/0      47/0
10          31/16     32/15       16/0      31/15       31/16     31/16       31/0      31/0
100         141/110   141/110     171/172   187/172     203/188   203/188     93/62     109/78
255         312/297   328/297     453/422   453/422     500/484   500/484     250/219   250/219
5 Conclusions

We have shown that it is possible to considerably accelerate the computation of phase-only Fourier holograms by using a consumer graphics board (MSI 6800GT) instead of the ordinary CPU. Our fastest CPU solution (Pentium 4 @ 3.0 GHz), using handcoded assembly together with the SSE multimedia extensions, was outperformed by a factor of more than thirty, resulting in an average performance of 14.1 GFlops for 100 doughnuts. The presented algorithm can be used for the computation of holograms for an arbitrary number of traps (up to 250) located at different positions in three dimensions and having independent trapping potentials. We thank the Landesstiftung Baden-Württemberg for their financial support within the project "AMIMA".
References

1. Reicherter, M, Liesener, J, Haist, T, Tiziani, H J (1999) Optical particle trapping with computer-generated holograms written in a liquid crystal display. Optics Letters 24: 608-610
2. Dufresne, E R, Grier, D G (1998) Optical tweezer arrays and optical substrates created with diffractive optics. Rev. Sci. Instrum. 69: 1974-1977
3. Liesener, J, Reicherter, M, Haist, T, Tiziani, H J (2000) Multi-functional optical tweezers using computer-generated holograms. Opt. Commun. 185: 77-82
4. Reicherter, M, Gorski, W, Haist, T, Osten, W (2004) Dynamic correction of aberrations in microscopic imaging systems using an artificial point source. Proc. SPIE 5462: 68-78
5. Yotov, K, Li, X, Ren, G, Garzaran, M, Padua, D, Pingali, K, Stodghill, P (2005) Is search really necessary to generate high-performance BLAS. Proc. of the IEEE 93: 358-386
6. Ito, T, Masuda, N, Yoshimura, K, Shiraki, A, Shimobaba, T, Sugie, T (2005) Special-purpose computer HORN-5 for a real-time electroholography. Optics Express 13: 1923-1932
7. Lucente, M (1993) Interactive computation of holograms using a look-up table. Journal of Electronic Imaging 2: 28-34
8. Ritter, A, Böttger, J, Deussen, O, König, M, Strothotte, T (1999) Hardware-based rendering of full-parallax synthetic holograms. Applied Optics 38: 1364-1369
9. Petz, C, Magnor, M (2003) Fast hologram synthesis for 3D geometry models using graphics hardware. Proc. SPIE 5005: 266-275
10. Quentmeyer, T (2004) Delivering real-time holographic video content with off-the-shelf PC hardware. Master thesis, Massachusetts Institute of Technology
11. Fernando, R, Kilgard, M J (2003) The Cg Tutorial. Addison Wesley
Wavelet analysis of speckle patterns with a temporal carrier Yu Fu, Chenggen Quan, Cho Jui Tay and Hong Miao Department of Mechanical Engineering National University of Singapore 10 Kent Ridge Crescent, Singapore 119260
1 Introduction

Temporal phase analysis [1] and the temporal phase unwrapping technique [2] have been reported in recent years for measurements on continuously deforming objects. In this technique, a series of fringe or speckle patterns is recorded throughout the entire deformation history of the object. The intensity variation at each pixel is then analyzed as a function of time. There are several temporal phase analysis techniques; among them, the temporal Fourier transform is the predominant method. The accuracy of Fourier analysis is high when the signal frequency is high and the spectrum is narrow. However, in some cases the spectrum of the signal is wide due to the non-linear phase change along the time axis, and the varying spectra at different pixels increase the difficulty of an automatic filtering process. In recent years, the wavelet transform was introduced into temporal phase analysis to overcome the disadvantages of Fourier analysis. The concept was introduced by Colonna de Lega [3] in 1996, and some preliminary results [4] were presented. Our previous research [5,6] also showed the advantages of the wavelet transform over the Fourier transform in temporal phase analysis. The temporal phase analysis technique has the advantage of eliminating speckle noise, as it evaluates the phase pixel by pixel along the time axis. However, it does have its disadvantages: it cannot analyze a part of an object that is not moving with the rest, nor objects that deform in opposite directions at different parts. Determination of the absolute sign of the phase change is impossible by both temporal Fourier and wavelet analysis. This limits the technique to the measurement of deformation in one direction which is already known. Adding a carrier frequency to the image acquisition process is a method to overcome these problems. In this study, a temporal carrier is applied in ESPI and DSSI set-ups. The phase is retrieved by the temporal wavelet transform, and a phase unwrapping process in the time and spatial domains is not required. The phase variation due to the temporal carrier is also measured experimentally. After removing the effect of the temporal carrier, the absolute phase change is obtained.
2 Theory of wavelet phase extraction

When a temporal carrier is introduced in speckle interferometry, the intensity at each point can be expressed as

I_{xy}(t) = I_{0xy}(t) + A_{xy}(t) \cos[\phi_{xy}(t)] = I_{0xy}(t) + A_{xy}(t) \cos[\phi_C(t) + \varphi_{xy}(t)],   (1)

where I_{0xy}(t) is the intensity bias of the speckle pattern and \phi_C(t) is the phase change due to the temporal carrier. At each pixel the temporal intensity variation is a frequency-modulated signal and is analyzed by the continuous wavelet transform. The continuous wavelet transform (CWT) of a signal s(t) is defined as its inner product with a family of wavelet functions \psi_{a,b}(t):

W_s(a, b) = \int_{-\infty}^{+\infty} s(t) \psi^*_{a,b}(t)\, dt,   (2)

where

\psi_{a,b}(t) = \frac{1}{\sqrt{a}} \psi\left( \frac{t - b}{a} \right), \quad b \in R, \ a > 0,   (3)

a is a scaling factor related to the frequency, b is the time shift, and * denotes the complex conjugate. In this application, the complex Morlet wavelet is selected as the mother wavelet:

\psi(t) = \exp(-t^2/2) \exp(i \omega_0 t).   (4)

Here \omega_0 = 2\pi is chosen to satisfy the admissibility condition [7]. The CWT expands a one-dimensional temporal intensity variation of a given pixel into a two-dimensional plane of scaling factor a (which is related to the frequency) and position b (which is the time axis). The trajectory of the maximum of |W_{xy}(a, b)|^2 on the a-b plane is called a 'ridge'. The instantaneous frequency of the signal, \varphi'_{xy}(b), is calculated as

\varphi'_{xy}(b) = \omega_0 / a_{rb},   (5)
Fig. 1. Experimental Set-up of ESPI with temporal carrier
where a_{rb} denotes the value of a at the instant b on the ridge. The phase change \Delta\varphi_{xy}(t) can be calculated by integration of the instantaneous frequency in Eq. (5), so that a phase unwrapping procedure is not needed in the temporal and spatial domains. Subtracting the phase change \Delta\phi_C(t) due to the temporal carrier, the absolute phase change representing different physical quantities can be obtained at each pixel.
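A brute-force numerical sketch of Eqs. (2)-(5) is shown below, with the sampling, scale grid and test signal chosen arbitrarily: the CWT is evaluated by direct correlation with normalized Morlet wavelets, the ridge is taken as the scale of maximum squared modulus, and the instantaneous frequency follows as \omega_0 / a_{rb}.

```python
import numpy as np

def morlet_ridge_freq(signal, scales, dt):
    """CWT of Eqs. (2)-(3) with the complex Morlet wavelet of Eq. (4)
    (omega0 = 2*pi), ridge extraction as the scale of maximum |W(a,b)|^2,
    and the instantaneous frequency omega0/a_rb of Eq. (5)."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    power = np.empty((len(scales), n))
    for i, a in enumerate(scales):
        # psi_{a,0}(t) = (1/sqrt(a)) * exp(-(t/a)^2/2) * exp(i*omega0*t/a)
        psi = np.exp(-(t / a) ** 2 / 2) * np.exp(2j * np.pi * t / a) / np.sqrt(a)
        w = np.convolve(signal, np.conj(psi[::-1]), mode="same") * dt
        power[i] = np.abs(w) ** 2
    a_rb = scales[np.argmax(power, axis=0)]   # ridge scale at every instant b
    return 2 * np.pi / a_rb                   # instantaneous frequency, rad/s

# test signal: pure 10 Hz cosine, so the ridge should sit near a = 1/10
dt = 0.001
t = np.arange(1024) * dt
sig = np.cos(2 * np.pi * 10.0 * t)
freq = morlet_ridge_freq(sig, np.linspace(0.05, 0.2, 40), dt)
```

Away from the window edges, the recovered instantaneous frequency for this test signal settles near 2\pi·10 rad/s; integrating it along b then yields the continuous phase change without unwrapping.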
3 Temporal carrier with ESPI

When a vibrating object is measured using ESPI, the phase change of each point has opposite directions at different instants, and different points may have different frequencies of intensity variation due to the different amplitudes of the vibration. Figure 1 shows the experimental set-up of ESPI with a temporal carrier. The specimen tested in this study is a perspex cantilever beam with a diffuse surface. The beam is subjected to a sinusoidal vibration at the free end using a vibrator. To generate the temporal carrier, the reference plate is mounted on a computer-controlled piezoelectric transducer (PZT) stage. During the vibration of the cantilever beam, the reference plate undergoes a linear rigid-body motion at a certain velocity. In order to retrieve the phase change of the temporal carrier, a still reference block with a diffuse surface is mounted above the vibrating beam and is captured together with the beam. The object and reference beams are recorded on a CCD sensor.
Fig. 2. (a) Gray-value variation of one point on the cantilever beam; (b) modulus of the complex wavelet transform
Figures 2 shows the intensity variations and the modulus of the Morlet wavelet transform of a point on cantilever beam. Integration of 2S arb was carried out along the time axis to generate a continuous phase change 'M (t ) . Figure 3(a) shows the temporal phase change obtained on cantilever beam and on reference block. The difference between these two lines gives the absolute phase change of that point due to vibration. In a speckle interferometer, as it is shown in Fig. 1, a 2S phase change represents a displacement of O / 2 (=316.4nm) in the z direction. Figure 3(b) shows the temporal displacement obtained on that point. Figure 4(a) shows the outof-plane displacement on a cross section of the beam at different time intervals (T1 T0 ) , (T2 T0 ) and (T3 T0 ) [shown in Fig. 3(b)]. For comparison, temporal Fourier analysis was also applied on the same speckle patterns. Figure 4(b) shows the temporal displacements obtained by temporal Fourier transform. It was observed that CWT on each pixel generates a smoother spatial displacement distribution at different instants compared to the result of Fourier transform. The maximum displacement fluctuation
due to noise is around 0.04 µm for the Fourier transform, but only 0.02 µm for the wavelet analysis.
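The ridge-based phase retrieval described above can be sketched in NumPy. The signal model, sampling rate and scale grid below are illustrative assumptions, not the experimental values:

```python
import numpy as np

def morlet_cwt(signal, scales, fs, w0=6.0):
    """CWT with a complex Morlet wavelet, computed in the frequency domain."""
    n = len(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # angular frequency axis
    sig_hat = np.fft.fft(signal)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # analytic Morlet: Gaussian window around w0/s on positive frequencies
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        coeffs[i] = np.fft.ifft(sig_hat * psi_hat) * np.sqrt(s)
    return coeffs

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
f_inst = 50 + 30 * t                          # carrier plus slowly varying vibration term
signal = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)

f_grid = np.linspace(20, 120, 200)            # candidate frequencies (Hz)
scales = 6.0 / (2 * np.pi * f_grid)           # Morlet scale <-> frequency relation
W = morlet_cwt(signal, scales, fs)
ridge = np.abs(W).argmax(axis=0)              # ridge = maximum modulus at each instant
f_ridge = f_grid[ridge]                       # instantaneous frequency along the ridge
dphi = 2 * np.pi * np.cumsum(f_ridge) / fs    # integrate 2*pi*f(t) -> continuous phase
```

Subtracting the phase retrieved in the same way on the reference block from `dphi` would then give the absolute phase change due to vibration alone.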
Fig. 3. (a) Phase variation retrieved on the reference block and on the cantilever beam; (b) out-of-plane displacement of one point on the cantilever beam.
Fig. 4. The displacement distribution on one cross section at different time intervals obtained by (a) wavelet transform; (b) Fourier transform.
Fig. 5. Typical shearography fringe pattern and area of interest.
4 Temporal carrier with DSSI
Shearography is an optical technique that can measure displacement derivatives. Even when the displacement of the test object is in one direction, the phase change in shearography is in opposite directions at different parts of the object. In addition, areas of zero phase change also exist. In this case, introducing a temporal carrier is the only method to overcome these problems. The specimen tested in this study is a square plate with a blind hole, clamped at the edges by screws and loaded by compressed air. Similar to the ESPI setup mentioned above, a still reference block with a diffuse surface is mounted beside the object and is captured together with the plate. A modified Michelson shearing interferometer is adopted as the shearing device, so the temporal carrier can easily be introduced by shifting the mirror in one arm of the interferometer using a PZT stage. Figure 5 shows the specimen and the reference block with typical shearography fringes. Figure 6 shows the intensity variations of point R on the reference block and of points A and B (shown in Fig. 5) on the plate. Different frequencies are found at points A and B, as the directions of phase change at these two points are opposite. Following the same procedure as above, the absolute phase change can be obtained by temporal wavelet analysis. The combination of the phase changes of all points at a certain instant gives an instantaneous spatial phase distribution which is proportional to the
deflection derivative, in this case ∂w/∂y. Figure 7 shows a high-quality 3D plot of the reconstructed value of ∂w/∂y at a certain instant.
Fig. 6. Temporal intensity variations of (a) point R on reference block; (b) point A and (c) point B on the square plate.
Fig. 7. The 3D plot of the reconstructed value of ∂w/∂y.
5 Conclusion
This paper presents a novel method to retrieve the transient phase change on a vibrating or continuously deforming object using a combination of temporal wavelet analysis and the temporal carrier technique. The introduction of a temporal carrier ensures that the phase change of each point on the object is in one direction, so that temporal phase analysis methods can be applied. Two applications of the temporal carrier are illustrated with different
optical techniques. A complex Morlet wavelet is selected as the wavelet basis. The phase change is retrieved by extracting the ridge of the wavelet coefficients. As wavelet analysis extracts the instantaneous frequency with the highest energy (which is the frequency of the signal), it performs an adaptive filtering of the measured signal, thus limiting the influence of various noise sources and increasing the resolution of the measurement. A comparison between the temporal wavelet transform and the Fourier transform shows that wavelet analysis can significantly improve the result in temporal phase measurement. However, the continuous wavelet transform maps a one-dimensional intensity variation of a signal onto a two-dimensional plane of position and frequency, and then extracts the optimized frequencies. This is obviously a time-consuming process that requires high computing speed and a large memory. In this investigation, the computation time was about 10 times larger than that of the temporal Fourier transform. However, this disadvantage is becoming less significant with the rapid improvement in computing capacity.
Different preprocessing and wavelet transform based filtering techniques to improve signal-to-noise ratio in DSPI fringes
Chandra Shakher, Saba Mirza, Vijay Raj Singh, Md. Mosarraf Hossain and Rajpal S. Sirohi
Laser Applications and Holography Laboratory, Instrument Design Development Centre, Indian Institute of Technology Delhi, New Delhi – 110 016, India
1 Introduction
Digital Speckle Pattern Interferometry (DSPI) has emerged as a powerful tool for the measurement and monitoring of vibrations [1,2]. DSPI speckle interferograms, however, contain inherent speckle noise. Advances in digital image processing have opened many possibilities for improving DSPI fringes, but the methods investigated so far to reduce speckle noise in DSPI fringes are only partially successful. Methods based on the Fourier transform, such as low-pass filtering or spectral-subtraction image restoration, have proven quite efficient at reducing speckle noise. The Fourier method, however, does not preserve details of the object. This is a severe limitation because, in practice, test objects usually contain holes, cracks or shadows in the image field. The basic reason is that in the Fourier transform method the original function is expressed in terms of orthogonal basis functions, sine and cosine waves of infinite duration [3]. Thus errors are introduced when the filtered fringe pattern is used to evaluate the phase distribution. For better visual inspection and automatic fringe analysis of vibration fringes, a number of methods for optimizing the signal-to-noise ratio (SNR) have been reported [4]. Wavelets have emerged as a powerful tool for image filtering, and several publications have recently appeared on reducing speckle noise using wavelet filters [5-9]. Our investigations reveal that a filtering scheme based on a combination of preprocessing and wavelet filters is quite effective in reducing the speckle noise present in speckle fringes of vibrating objects. In this paper different filtering schemes based on wavelet filtering are presented for the removal of speckle noise from speckle fringes. Preprocessing of
speckle interferograms depends mainly upon the texture and number of speckles present in the speckle interferogram. The potential of the different filtering schemes is evaluated in terms of the speckle index / SNR of speckle fringes of vibrating objects.
2 DSPI Fringe Pattern Recording
For the measurement of vibration using DSPI, let us assume that the frequency of vibration ω of the harmonically vibrating object is greater than the frame rate of the CCD camera used to record the image of the object. Two time-averaged specklegrams of the vibrating plate are recorded. The intensity obtained when the two time-averaged specklegrams are subtracted is given by [6]
I(x, y) = 2 A_o A_r J_0[(2π/λ) γ w_0(x, y)] × cos[2(φ_o − φ_r)]    (1)
where A_o and A_r are the amplitudes of the object and reference wavefronts respectively; λ is the wavelength of the laser light used to illuminate the object (plate); φ_r is the phase of the reference beam and φ_o is a position-dependent phase of the object beam which corresponds to the original state of the object; w_0(x, y) is the position-dependent out-of-plane displacement amplitude of the harmonically vibrating object with respect to some reference position; γ is a geometric factor which depends on the angles of illumination and observation; and J_0 is the zero-order Bessel function. The term cos[2(φ_o − φ_r)] represents phase-dependent high-frequency speckle information. The Bessel function J_0 spatially modulates the brightness of the speckle pattern. The time-average subtraction method improves the SNR of the speckle interferograms. The speckle noise, however, cannot be removed by the mere subtraction process. The presence of undesired bright speckles in the areas of dark fringes, and similarly of dark speckles in the areas of bright fringes, introduces inaccuracy in measurements from the speckle interferogram. In coherent imaging a number of filters have been investigated to reduce speckle noise. Recently we have studied various schemes to remove noise from speckle interferograms. Some of the schemes that have the potential to handle speckle noise effectively are discussed below.
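For illustration, the J_0 fringe envelope in Eq. (1) can be evaluated numerically from the integral representation of the Bessel function; the range of trial vibration amplitudes below is an illustrative assumption:

```python
import numpy as np

def bessel_j0(x):
    """Zero-order Bessel function via J0(x) = (1/pi) * integral_0^pi cos(x*sin(theta)) d(theta)."""
    theta = np.linspace(0.0, np.pi, 2001)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.trapz(np.cos(np.outer(x, np.sin(theta))), theta, axis=1) / np.pi

lam = 632.8e-9                        # He-Ne wavelength (m), as in the experiment
gamma = 1.938                         # geometric factor from the experimental section
w0 = np.linspace(0.0, 0.5e-6, 500)    # trial vibration amplitudes (m), illustrative
envelope = bessel_j0(2 * np.pi / lam * gamma * w0)
# Dark fringes of the time-averaged pattern lie at the zeros of the envelope;
# the first zero of J0 is at 2.4048, i.e. w0 = 2.4048 * lam / (2*pi*gamma) ~ 125 nm.
```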
3 Filtering schemes
In a DSPI fringe pattern, speckle noise appears at different scales of resolution. To remove the noise, a filtering scheme is needed which can decompose the image at different scales and then remove the unwanted intensity variations. The filtering scheme should be such that the desired structure of minima and maxima remains the same. The intensity changes occur at different scales in the image, so that their optimal detection requires the use of operators of different sizes. A sudden intensity change produces a peak and a trough in the first derivative of the image. This requires that the vision filter have two characteristics: first, it should be a differential operator, and second, it should be capable of being tuned to act at any desired scale. Wavelet filters have these properties. Wavelets are new families of orthonormal basis functions, which do not need to be of infinite duration. When the wavelet decomposition function is dilated, it accesses lower-frequency information; when contracted, it accesses higher-frequency information. Wavelet filtering is computationally efficient and provides significant speckle reduction while maintaining the sharp features in the image [6]. One parameter used to test filtering schemes is the speckle index, the ratio of standard deviation to mean in a homogeneous area:

Speckle index C = (standard deviation / mean) = √var(x) / E(x) = σ/m,

where σ = standard deviation and m = mean. The SNR is the reciprocal of the speckle index, i.e. SNR = 1/C.
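A minimal sketch of the speckle index / SNR computation on synthetic data; the exponential intensity model for fully developed speckle and the block-averaging "filter" are assumptions for illustration only:

```python
import numpy as np

def speckle_index(region):
    """Speckle index C = sigma / m over a (nominally homogeneous) image region."""
    region = np.asarray(region, dtype=float)
    return region.std() / region.mean()

rng = np.random.default_rng(0)
# Homogeneous patch with fully developed speckle (exponential intensity statistics, C ~ 1)
noisy = 100.0 * rng.exponential(1.0, size=(64, 64))
# Crude "filtering": average 4x4 blocks, which should reduce C roughly fourfold
filtered = noisy.reshape(16, 4, 16, 4).mean(axis=(1, 3))

snr_noisy = 1.0 / speckle_index(noisy)
snr_filtered = 1.0 / speckle_index(filtered)
```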
4 Experimental
The DSPI set-up for recording the DSPI fringes is shown in Fig. 1. A beam from a 30 mW He-Ne laser of wavelength 632.8 nm is split into two beams by a beam splitter BS1. One of the beams fully illuminates the surface of the object under study and the other beam is used as the reference beam. The value of γ for our experimental setup is 1.938.
Fig. 1. Schematic of the DSPI setup for DSPI fringe recording
The object beam is combined with the reference beam to form a speckle interferogram that is converted into a video signal by a CCD camera. The analog video output from the CCD camera is fed to a PC-based image-processing system developed using National Instruments' IMAQ PCI-1408 card. A LabVIEW 5.0-based program in the graphical programming language was developed to acquire, process and display the interferograms. The program implements accumulated linear histogram equalization after subtraction of the interferograms. The histogram equalization alters the gray-level values of the pixels: it transforms the gray-level values of the pixels of an image to evenly occupy the range of the histogram (0 to 255 in an 8-bit image), increasing the contrast of the image. Pixels out of range are set to zero. The IMAQ PCI-1408 card is set to process the interferogram images at a rate of 30 images/second. One time-averaged interferogram of the vibrating object over the frame acquisition period (1/30 s) is grabbed and stored as a reference interferogram. The successive time-averaged interferograms are subtracted from the reference interferogram continuously and displayed on the computer screen.
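The gray-level stretch performed by the accumulated (cumulative) histogram equalization can be sketched as follows; the low-contrast test image is a made-up example:

```python
import numpy as np

def histogram_equalize(img):
    """Cumulative histogram equalization of an 8-bit image: remap gray levels so
    they evenly occupy 0..255 (assumes at least two distinct gray levels)."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first occupied bin
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)        # lookup table for the remap
    return lut[img]

# Low-contrast example: gray levels confined to 100..119
low_contrast = np.tile(np.arange(100, 120, dtype=np.uint8), (10, 1))
equalized = histogram_equalize(low_contrast)
```

After equalization, the lowest occupied gray level maps to 0 and the highest to 255, so the fringe contrast spans the full 8-bit range.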
To arrive at an optimum filtering scheme, initial experiments were conducted on loudspeakers / tweeters [6,8]. One typical result on tweeters is given in Fig. 2 and Fig. 3. The results show that speckle noise can be reduced significantly by using appropriate preprocessing and wavelet filtering.
Fig. 2. DSPI speckle interferograms recorded for a vibrating tweeter at (a) frequency 3 kHz and force 0.6 mV, (b) frequency 9.31 kHz and force 3 V and (c) frequency 2.41 kHz and force 3 V.
Fig. 3. Filtered speckle interferograms obtained by implementing Wiener filtering followed by Symlet wavelet filtering: (a), (b) and (c) correspond to the speckle interferograms in Fig. 2(a), 2(b) and 2(c) respectively; (d) line profile of the filtered interferogram shown in Fig. 2(c).
Taking a cue from these experiments, a systematic experiment was conducted on a cantilever beam fixed at one end, of dimensions 50 mm × 50 mm × 0.8 mm. The aspect ratio of the cantilever beam is a/b = 1 (where a and b are the length and width of the beam). The cantilever beam was made of aluminum (Young's modulus = 70 GPa, density = 2700 kg/m³). The surface of the beam was made flat on an optical grinding / polishing machine. A sketch of the cantilever beam with a point of loading P is shown in Fig. 4(a). A function generator (model number: HP 33120A) regulates the frequency and magnitude of the force of the exciter. The function generator was set to generate a sinusoidal signal. An unfiltered speckle interferogram recorded for the cantilever beam is shown in Fig. 4(b).
Fig. 4. (a) Sketch of the cantilever beam with a point of loading P (t = thickness; all dimensions in mm) and (b) unfiltered speckle interferogram for the cantilever beam of dimensions 50 mm × 50 mm × 0.8 mm, fixed at one edge with the others free (frequency: 1.937 kHz, force: 0.8 × 10⁻³ N, frequency parameter: 24.63)
The following filtering schemes are implemented on the recorded fringe pattern shown in Fig. 4(b):
1. Preprocessing by averaging followed by a Daubechies (db) wavelet.
2. Preprocessing by averaging or median filtering followed by a Symlet wavelet.
3. Preprocessing by sampling, thresholding and averaging followed by a Symlet wavelet.
4. Preprocessing by sampling, thresholding and averaging followed by a Biorthogonal wavelet.
The filtered images of Fig. 4(b) for averaging followed by a Daubechies (db) wavelet, averaging followed by a Symlet wavelet, the preprocessing scheme (consisting of averaging, sampling, thresholding, averaging) followed by a Symlet wavelet, and the preprocessing scheme (consisting of averaging, sampling, thresholding, averaging) followed by a Biorthogonal wavelet are shown in Fig. 5(a), Fig. 5(b), Fig. 5(c) and Fig. 5(d) respectively. The speckle index and SNR for the unfiltered speckle interferogram of Fig. 4(b) and the filtered interferograms shown in Fig. 5(a)-(d) are given in Table 1.
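As a minimal stand-in for the Symlet / Biorthogonal filters used above, a one-level Haar wavelet shrinkage in pure NumPy illustrates the decompose-threshold-reconstruct pattern; the synthetic fringe image and threshold are illustrative assumptions:

```python
import numpy as np

def haar2d(x):
    """One level of 2-D Haar decomposition: approximation + 3 detail bands."""
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2            # rows
    aa, ad = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    da, dd = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return aa, ad, da, dd

def ihaar2d(aa, ad, da, dd):
    """Exact inverse of haar2d."""
    a = np.empty((aa.shape[0], aa.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = aa + ad, aa - ad
    d[:, 0::2], d[:, 1::2] = da + dd, da - dd
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def denoise(img, thresh):
    """Soft-threshold the detail bands, keep the approximation band."""
    aa, ad, da, dd = haar2d(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return ihaar2d(aa, soft(ad), soft(da), soft(dd))

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
clean = np.cos(2 * np.pi * xx / 32.0)              # smooth synthetic "fringes"
noisy = clean + 0.3 * rng.normal(size=clean.shape)
smooth = denoise(noisy, thresh=0.3)
```

The practical schemes in the list above differ mainly in the wavelet family and in the preprocessing applied before decomposition.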
Fig. 5. The filtered speckle interferograms of Fig. 4(b) for (a) averaging followed by a Daubechies (db) wavelet, (b) averaging / median filtering followed by a Symlet wavelet, (c) the preprocessing scheme (consisting of averaging, sampling, thresholding, averaging) followed by a Symlet wavelet and (d) the preprocessing scheme (consisting of averaging, sampling, thresholding, averaging) followed by a Biorthogonal wavelet
Table 1. Speckle index and SNR with the different filtering schemes (speckle index C = σ/m, SNR = 1/C)

Image              Fig. 4(b)   Fig. 5(a)   Fig. 5(b)   Fig. 5(c)   Fig. 5(d)
Speckle index (C)  1.3100      0.3587      0.3398      0.1494      0.1236
SNR (1/C)          0.7633      2.788       2.9429      6.6946      8.0926
It is observed that there is significant reduction in speckle index of the filtered speckle interferogram by using appropriate pre-processing scheme followed by Biorthogonal wavelet filter.
Conclusions
Experimental results reveal that an appropriate preprocessing scheme followed by a Biorthogonal wavelet filter reduces the speckle index and increases the SNR significantly. This results in an enhancement of the contrast between dark and bright fringes. Figure 5(d) shows that the implementation of an appropriate preprocessing scheme followed by a Biorthogonal wavelet filter gives a clearer fringe pattern compared with the results of the filtering schemes shown in Fig. 5(a), Fig. 5(b) and Fig. 5(c).
References
1. P. Varman and C. Wykers, "Smoothening of speckle and moiré fringes by computer processing", Opt. Lasers Eng. 3, 87-100 (1982).
2. O. J. Lokberg, "ESPI – the ultimate holographic tool for vibration analysis?", J. Acoust. Soc. Am. 75, 1783-1791 (1984).
3. M. Takeda and K. Mutoh, "Fourier-transform profilometry for the automatic measurement of 3-D object shapes", Appl. Opt. 22, 3977-3982 (1983).
4. S. Kruger, G. Wernecke, W. Osten, D. Kayser, N. Demoli and H. Gruber, "The application of wavelet filters in convolution processors for the automatic detection of faults in fringe patterns", Proc. Fringe 2001 (Elsevier), edited by W. Osten and W. Juptner (2001).
5. G. H. Kaufmann and G. E. Galizzi, "Speckle noise reduction in television holography fringes using wavelet thresholding", Opt. Eng. 35, 9-14 (1996).
6. C. Shakher, R. Kumar, S. K. Singh, and S. A. Kazmi, "Application of wavelet filtering for vibration analysis using digital speckle pattern interferometry", Opt. Eng. 41, 176-180 (2002).
7. A. Federico and G. H. Kaufmann, "Evaluation of the continuous wavelet transform method for the phase measurement of electronic speckle pattern interferometry fringes", Opt. Eng. 41, 3209-3216 (2002).
8. C. Shakher and R. S. Sirohi, "Study of vibrations in square plate and tweeders using DSPI and wavelet transform", ATEM'03, Sept. 10-12, Nagoya, Japan (2003).
9. Y. Fu, C. J. Tay, C. Quan and L. J. Chen, "Temporal wavelet analysis for deformation and velocity measurement in speckle interferometry", Opt. Eng. 43, 2780-2787 (2004).
Wavefront Optimization using Piston Micro Mirror Arrays Jan Liesener, Wolfgang Osten Institut für Technische Optik, Universität Stuttgart Pfaffenwaldring 9, 70569 Stuttgart Germany
1 Introduction
Spatial light modulators (SLMs) are key elements in the field of active and adaptive optics, where defined control of light fields is required. The removal of aberrations in optical systems, for example, requires a modulator by which the phase of light fields can be influenced, thus shaping the outgoing wavefront. At present, deformable membrane mirrors are widely used, although their spatial resolution is very limited. Pixelated liquid crystal displays offer high resolution, but their polarization effects must be carefully considered [1][2]. A new type of SLM, a micro mirror array (MMA) developed by the IPMS (Fraunhofer Institut für Photonische Mikrosysteme), consists of an array of micromirrors that move with a piston-like motion perpendicular to their surfaces, enabling pure phase modulation. A breadboard was set up on which the MMA's wavefront shaping capability was tested by measuring the maximum achievable coupling efficiency (CE) into a monomode fiber after compensation of artificially induced wavefront errors. The artificial wavefront errors resembled typical wavefront errors expected for lightweight or segmented mirrors of space telescopes. Several wavefront optimization methods were applied. The methods can be divided into direct wavefront measurements, among them Shack-Hartmann wavefront measurements as well as interferometric phase measurements, and iterative methods that use the CE as the only measurand. The latter methods include a direct search algorithm and a genetic algorithm. Only minor changes in the optical setup were necessary in order to switch between the methods, which makes the methods directly comparable.
2 Micro Mirror Array description
The MMA fabricated at the Fraunhofer Institut für Photonische Mikrosysteme (FhG-IPMS, www.ipms.fraunhofer.de) consists of 240 × 200 micro mirrors, arranged on a regular grid with a pixel pitch of 40 µm. Unlike, for example, the flip mirror arrays developed by Texas Instruments (DLP technology, www.dlp.com), which operate in a binary on/off mode, the micro mirrors in this investigation perform a continuous motion perpendicular to their surface. Thereby, a pure phase shift of the light reflected from the mirrors is accomplished. The deflection of a single mirror is induced by applying a voltage to the electrode underlying the pixel, resulting in an equilibrium between the electrostatic force and the restoring force of the suspension arms (see Fig. 1). Since the maximum deflection of the mirrors is 360 nm, wavefront shaping with more than 720 nm of path shift has to be done in a wrapped manner, i.e. phase values are truncated to the interval [−π..π] by adding or subtracting multiples of 2π. The one-level architecture device is fabricated in a CMOS-compatible surface micro-machining process allowing individual addressing of each element in the matrix. The mirrors and actuators of the mechanism are formed simultaneously in one structural layer of aluminium. The MMA currently operates with a 5% duty cycle, and all CE measurements in this investigation refer to the 5% period in which the micro mirrors have the desired deflection value. The measurements could be performed by triggering all other hardware to the MMA driving board. The 5% duty cycle was not problematic in this investigation, but it could prove to be obstructive in other applications.
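The wrapped addressing of the piston mirrors can be sketched as follows; a reflection adds twice the deflection to the optical path, and the target wavefront below is an arbitrary example:

```python
import numpy as np

LAM = 633e-9        # operating wavelength (m)
STROKE = 360e-9     # maximum piston deflection (m)

def piston_pattern(target_phase):
    """Wrap a target reflected-wavefront phase (rad) into piston deflections.
    A deflection d adds 2*d of optical path, i.e. a phase of 4*pi*d/lambda,
    so one full 2*pi wrap corresponds to lambda/2 = 316.5 nm of deflection."""
    wrapped = np.mod(target_phase, 2 * np.pi)   # truncate to one 2*pi interval
    return wrapped * LAM / (4 * np.pi)          # deflection in metres

target = np.linspace(0.0, 20 * np.pi, 1000)     # e.g. a strong tilt across the array
d = piston_pattern(target)
```

The resulting deflections stay within [0, λ/2), comfortably inside the 360 nm stroke, while reproducing the target phase modulo 2π.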
Fig. 1. Left: Pixel structure of an MMA showing the mirrors and the suspension arms. Right: White light interferometric measurement of deflected and undeflected mirrors.
3 Breadboard description
Fig. 2. Drawing of the opto-mechanical setup for coupling efficiency maximization including the generation and compensation of wavefront errors.
With the opto-mechanical setup depicted in Fig. 2, all optimization routines for the fiber CE maximization can be performed with only minor alterations. A collimated beam with 633 nm wavelength is generated with the He-Ne laser (L), a spatial filter (SF) and a collimation lens (CL). The unit consisting of the polarizing beam splitter (BS1), the telescope (T1) and the deformable membrane mirror (DMM) acts as a wavefront error generator. The deformation of the DMM is transferred to the wavefront reflected from the DMM. The telescope is necessary for the adjustment of the beam diameters used to read out the DMM (35 mm) and the MMA (8 × 9.6 mm). The MMA's task is to compensate for the wavefront errors in reflection. BS2 directs the beam towards the focusing lens (FL), by which the light is focused onto the front end of a monomode fiber (MF). BS3 couples out some intensity for the Shack-Hartmann and the interferometric measurements. The telescope T2 acts as a beam reducer and projects an image of the MMA either onto the CCD chip or onto the microlens array in the Shack-Hartmann approach. Part of the initial collimated beam passes through BS1 to form a reference beam for the interferometric approach. The angle of the reference beam can be adjusted using the second adjustable mirror. Quarter-wave plates (QWP) control the transmittance/reflectance of the polarizing beam splitters (BS1, BS2, BS4).
4 Optimization methods
For the determination of the MMA phase pattern necessary to compensate the system's aberrations, four optimization methods were investigated and successfully applied. The compensation phase function is represented either by a set of Zernike coefficients (modal) or by a set of localized phase/amplitude values (zonal).
4.1 Direct Wavefront Measurements
Direct wavefront measurements return the shape of the wavefront reflected by the MMA. The difference from a calibration wavefront is calculated and subtracted from the MMA phase pattern. Measurements can be performed repeatedly, enabling closed-loop operation. In a Shack-Hartmann sensor [3], a microlens array is placed in the wavefront. The focal spot behind each lens is shifted according to the local tilt of the wavefront at the microlens position. This tilt information is used for the reconstruction of the wavefront shape. In our test setup the SHS is used to establish a closed-loop system for continuous measurement of the wavefront after compensation by the MMA. The sensor is placed in the conjugate plane of the MMA. A Mach-Zehnder type interferometer is established by also using the reference light path described above. By superposing the light coming from the MMA with the reference beam, which is not affected by the aberrations introduced by the DMM, an interference pattern is generated on the CCD. The reference beam is tilted so that the interferogram carries a carrier frequency. Fourier transform methods [4] are applied to extract the phase information from only one image, as opposed to phase-shifting interferometry, for which at least 3 images are necessary.
4.2 Iterative methods
With the iterative methods, the compensation pattern displayed by the MMA is iteratively modified while the fiber CE is monitored. The genetic algorithm [5] (implemented with the genetic algorithm components library GAlib, http://lancet.mit.edu/ga/) assesses several sets of parameters (Zernike coefficients) in terms of fiber CE. Only the best sets of parameters are selected; bad sets are discarded. In order to get back to the original number of sets, new sets are created by inheritance of the properties of two or more
previous sets (parents). Analogous to nature, the parameters of all new sets also undergo a certain degree of mutation in order to find the best fit to the environment. In the direct search algorithm [6], fractions of the MMA (zones) are optimized successively in random order. In each optimization step the deflection of one zone is changed (e.g. in four steps) while the fiber CE is observed. The (four) measured CEs are then used to calculate the optimal zone deflection with algorithms equivalent to the ones used in phase-shifting interferometry.
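One optimization step of the direct search can be sketched with the four-step phase-shifting formula; the cosine CE model and its parameters below are illustrative assumptions, not measured values:

```python
import numpy as np

def optimize_zone(measure_ce, current_phase):
    """Measure the CE at four trial deflections of one zone, offset by
    0, pi/2, pi and 3*pi/2, and compute the optimal zone phase with the
    standard four-step phase-shifting formula."""
    offsets = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    I = np.array([measure_ce(current_phase + o) for o in offsets])
    # CE(phi) ~ a + b*cos(phi - phi_opt)  =>  phi_opt from the four samples
    return current_phase - np.arctan2(I[3] - I[1], I[0] - I[2])

# Simulated zone response with a hidden optimum at phi_opt = 1.2 rad
phi_opt = 1.2
ce_model = lambda phi: 0.5 + 0.3 * np.cos(phi - phi_opt)
result = optimize_zone(ce_model, current_phase=0.0)
```

Iterating this step over all zones in random order, and finally smoothing the resulting phase map, reproduces the procedure described above.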
5 Optimization experiments
Using the different optimization methods, the MMA had to correct continuous wavefront errors of 0.7λ, 2.0λ and 7.9λ PV that were created with the DMM. A step wavefront error was generated by introducing the crossed edges of two microscopic cover plates. The gray-scaled phase errors can be seen in Table 1.
5.1 Performance of the optimization methods
Table 1 shows the wavefront errors that were corrected by the MMA. The fiber coupling efficiencies (CEs) that could be obtained with the different optimization methods are also listed there. The CE is defined as the ratio of the light intensity exiting the fiber to the entire light intensity in the fiber input plane (including all diffraction orders caused by the periodic MMA structure). Of all optimization methods, the direct search algorithm performed best with small and continuous wavefront errors. An important parameter for the optimization is the size of the zones, i.e. the number of pixels that are optimized simultaneously. On the one hand, the cells must not be chosen too large, since wavefront error variations within the cells are then not detected. This problem becomes serious for large slopes and especially for step errors. On the other hand, the cells must not be chosen too small, since the intensity
Table 1. Test results for the imposed wavefront errors (grayscaled wavefront phase plots omitted)

                         Error 1        Error 2        Error 3        Step error
WFE (rms)                0.13 λ         0.25 λ         1.2 λ          0.17 mm glass
WFE (PV)                 0.7 λ          2.0 λ          7.9 λ          plate edges

Achieved coupling efficiencies (elapsed time):
Direct search algorithm  63% (22 sec)   49% (1.5 min)  40% (12 min)   42% (12 min)
Genetic algorithm        62% (36 min)   47% (45 min)   unsuccessful   not applicable
Shack-Hartmann sensor    60% (0.4 sec)  48% (0.8 sec)  40% (0.8 sec)  not applicable
Interferometer           61% (2 sec)    46% (2 sec)    44% (2 sec)    47% (2 sec)
Without compensation     51%            1.4%           0.27%          8.1%
change at the detector, caused by the cell's phase variation during optimization, must be larger than the noise in the signal. A typical progression of the CE during the optimization is depicted in Fig. 3 (left). The CE rise is typically quadratic until all zones have been optimized once (at 480 optimization steps). A second optimization of all zones did not result in a further improvement (steps 481 to 960). The CE jump at the very end of the optimization is achieved by smoothing the phase distribution; smoothing interpolates the phase values that are not at the center of a zone. Contrary to the zonal optimization in the direct search algorithm, the genetic algorithm performs a modal (global) optimization in terms of Zernike modes. In principle it can cope better with large slopes than the direct search algorithm, provided the error can be represented by the chosen number of Zernike modes. Step errors cannot be represented by Zernike modes; therefore the GA is not suitable for step errors. The consequence of higher aberration amplitudes is an unpredictably longer optimization time. A further very important parameter of the optimization is the mutation magnitude. Too little mutation causes an unnecessarily long optimization time, and with too much mutation an iteration toward the "perfect" wavefront becomes unlikely. In Fig. 3 (right) a typical CE progression within a GA optimization is depicted. The mutation magnitude in this optimization run was cut in half every 150 generations.
Fig. 3. Typical coupling efficiency progression for the iterative methods with the intermediate continuous wavefront error using the direct search algorithm with 24x20 zones (left) and genetic algorithm with 66 Zernike modes (right).
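The single-image Fourier-transform phase extraction [4] used in the interferometric approach (Sect. 4.1) can be sketched as follows. The carrier of four camera pixels per fringe matches the setup described in this section, while the test phase itself is an arbitrary example:

```python
import numpy as np

def ft_phase(interferogram, carrier_px=4):
    """Fourier-transform fringe analysis: isolate the sideband between 0.5 and
    1.5 times the carrier frequency, then take the argument of its inverse FFT."""
    n = interferogram.shape[1]
    f0 = n // carrier_px                         # carrier frequency in FFT bins
    spec = np.fft.fft(interferogram, axis=1)
    mask = np.zeros(n)
    mask[f0 // 2: 3 * f0 // 2 + 1] = 1.0         # band-pass 0.5*f0 .. 1.5*f0
    analytic = np.fft.ifft(spec * mask, axis=1)
    phase = np.angle(analytic) - 2 * np.pi * f0 * np.arange(n) / n   # remove carrier
    return np.mod(phase + np.pi, 2 * np.pi) - np.pi                  # rewrap to [-pi, pi)

n = 256
y, x = np.mgrid[0:n, 0:n]
phi = 1.0 * np.cos(2 * np.pi * x / n) * np.cos(2 * np.pi * y / n)    # smooth test phase
fringes = 1.0 + np.cos(2 * np.pi * (n // 4) * x / n + phi)           # carrier + phase
recovered = ft_phase(fringes)
```

Because only the positive sideband is retained, the phase is recovered from a single camera frame, which is what makes this a "one shot" method.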
Wavefront control with the Shack-Hartmann sensor requires more equipment than the stochastic approaches. In turn we have a very powerful control that measures the wavefront in "one shot" and can also perform iterative measurements in a closed-loop manner. Measurable wavefront slopes are limited only by hardware parameters such as the focal length of the microlenses and the size of the subaperture of each microlens. In our setup a slope of 0.36 degrees can be measured, which corresponds to a wavefront tilt of 95λ over the MMA aperture; this is definitely sufficient for the given wavefront errors. As in the GA approach, step errors cannot be detected reasonably with the SHS, since, also from a mathematical point of view, the slope of a phase step is not defined; the slope, however, is the relevant measurement quantity with this technique. The interferometric approach provides "one shot" measurements with step error detection capability. The carrier frequency is adjusted so that one interference stripe has a period of approximately four camera pixels. The filtering in Fourier space allows spatial frequencies between 0.5 and 1.5 times the carrier frequency, i.e. local stripe periods between 8 and 2.67 camera pixels. This limits the measurable wavefront tilts to 96λ over the MMA aperture, which is far above the present wavefront tilts.
5.2 Comparison to flip mirror array
The same driving board that drives the piston MMA chip can also drive an MMA chip with flip mirrors that can likewise perform continuous motion. In this investigation, however, it was used in a binary on/off mode. Of the two MMA types, the piston-type MMA performed much better, since the tilt MMA operates in a binary amplitude mode, i.e. a large fraction of the incident light (about 50%) is taken out of the system and light is also diffracted into unwanted diffraction orders.
6 Summary and conclusions

With the given micromirror array, several wavefront optimization methods could be applied that greatly improved the fiber coupling efficiency. The improvement was especially significant for the piston-type MMA. Direct phase measurement methods (Shack-Hartmann sensor and interferometry) are superior when time is critical or when strong aberrations are present. The advantages of the iterative methods (direct search algorithm and genetic algorithm) are the simple experimental setup and the fact that no calibration is needed. These methods performed especially well with small aberrations.
Acknowledgements We thank EADS-Astrium and Fraunhofer-Institut für photonische Mikrosysteme (IPMS) for the effective collaboration and ESA-ESTEC for supervising and financing the project (16632/02/NL/PA).
References
1. Kohler, C, Schwab, X, Osten, W (2005) Optimally tuned spatial light modulators for digital holography. Submitted for publication in Applied Optics
2. Osten, W, Kohler, C, Liesener, J (2005) Evaluation and Application of Spatial Light Modulators for Optical Metrology. Optoel'05 (Proceedings Reunión Española de Optoelectrónica)
3. Seifert, L, Liesener, J, Tiziani, H.J (2003) Adaptive Shack-Hartmann sensor. Proc. SPIE 5144:250-258
4. Takeda, M, Ina, H, Kobayashi, S (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. Journal of the Optical Society of America 72:156-160
5. Schöneburg, E, Heinzmann, F, Feddersen, S (1994) Genetische Algorithmen und Evolutionsstrategien. Addison-Wesley
6. Liesener, J, Hupfer, W, Gehner, A, Wallace, K (2004) Tests on micromirror arrays for adaptive optics. Proc. SPIE 5553:319-329
Adaptive Correction to the Speckle Correlation Fringes using twisted nematic LCD
Erwin Hack and Phanindra Narayan Gundu
EMPA, Laboratory Electronics/Metrology, Überlandsstrasse 129, CH-8600 Dübendorf, Switzerland
1 Introduction

In digital speckle pattern correlation interferometry (DSPI), intensity patterns from the interference of the speckled object wave with a reference wavefront are recorded digitally [1]. Subtracting two interference patterns taken before and after an object change reveals a correlation fringe pattern. Speckle correlation fringes can be conceived as a smoothly varying intensity distribution multiplied by a noise term. The high noise content is due to the random distribution of speckle intensity and speckle phase across the image plane. Although intensity modulation and speckle phase can be eliminated by phase stepping [2,3] or other phase retrieval methods, noise is not removed completely because of the limited dynamic range of the sensor, set by the saturation of the camera, the digitisation depth and the electronic noise. Besides, these techniques require recording several frames. Fourier transform methods, temporal phase unwrapping and spatial phase shifting have been developed to overcome the sequential image capture. The speckle noise remaining in the phase map is generally eliminated by filtering techniques. Many digital processing techniques have been developed to reduce the speckle noise in the fringe pattern [4-8]. Lowpass and Wiener filtering have proved inefficient and inadequate because they smooth both the noise and the signal. Local averaging and median filtering with various kernel sizes and shapes and multiple iterations blur the image. Recently developed methods such as wavelet-based filtering have met with some success, but the intensity profile of the fringes is not restored completely. To obtain a phase distribution with minimal error, one needs to restore the smooth intensity profile of the fringes across the image plane.
To improve the intensity profile of the speckle correlation fringes, we reduce the speckle noise by reducing the range of the modulation intensity values. The pixelwise adaptive compensation is made only once, before the deformation of the object. It leaves the correlation fringes with a well-defined intensity envelope interspersed with notches or gaps. Hence, a simple morphological filtering – a dilation – is sufficient to obtain smooth correlation fringes.
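The dilation step can be sketched as follows. This is a minimal illustration with a synthetic fringe envelope and made-up notch positions; the flat-kernel grey-value dilation is implemented directly with shifted maxima rather than any particular library call.

```python
import numpy as np

# Sketch of the morphological post-processing: a grey-value dilation fills the
# dark notches left in an otherwise well-defined fringe envelope. The fringe
# profile and the notch positions are synthetic example data.
x = np.linspace(0, 6 * np.pi, 600)
envelope = 100.0 * np.abs(np.sin(x))          # smooth correlation fringes

rng = np.random.default_rng(1)
fringes = envelope.copy()
notches = rng.choice(x.size, size=150, replace=False)
fringes[notches] = 0.0                        # speckle "gaps" in the envelope

# Flat-kernel grey dilation of width 5: local maximum over shifted copies
smoothed = fringes.copy()
for shift in (-2, -1, 1, 2):
    smoothed = np.maximum(smoothed, np.roll(fringes, shift))

print(np.mean(np.abs(fringes - envelope)), np.mean(np.abs(smoothed - envelope)))
```

The dilated profile is much closer to the ideal envelope than the notched one, at the cost of a slight broadening near intensity minima.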
2 Speckle noise

The intensity observed on a CCD, where a beam scattered from a rough object interferes with a plane reference beam, is given by

I_i = I_0 + I_M \cos(\varphi_{sp,i} - \varphi_{ref}), \qquad I_0 = I_{ref} + I_{sp}, \qquad I_M = 2\sqrt{I_{ref}\, I_{sp}}   (1)
where the background and modulation intensities, I_0 and I_M, are expressed in terms of the reference and speckle wave intensities, I_ref and I_sp. After an object state change, an intensity pattern I_f is recorded. Assuming that neither the speckle intensity nor the speckle phase changes,

I_f = I_0 + I_M \cos(\varphi_{sp,f} - \varphi_{ref})   (2)
Correlating the two intensities, Eqs. 1 and 2, by subtraction leads to the well-known expression for the speckle correlation fringe pattern [1]

F = I_f - I_i = 2 I_M \sin\!\left(\frac{\Delta\varphi}{2}\right) \sin\!\left(\frac{\Delta\varphi}{2} + \varphi_{sp,i} - \varphi_{ref}\right)   (3)

where \Delta\varphi = \varphi_{sp,f} - \varphi_{sp,i} is the phase change due to the object state change. The speckle noise in the fringe pattern is multiplicative in nature and arises from the two highly varying terms in Eq. 3: the modulation intensity I_M and the speckle phase term

P_{sp} = \sin\!\left(\frac{\Delta\varphi}{2} + \varphi_{sp,i} - \varphi_{ref}\right)   (4)

Assuming uncorrelated speckle intensity and speckle phase distributions, the signal-to-noise ratio (SNR) is [9]
\mathrm{SNR} = \frac{\langle F \rangle^2}{\mathrm{var}[F]} = \frac{\langle I_M \rangle^2 \langle P_{sp} \rangle^2}{\mathrm{var}[I_M]\,\langle P_{sp}^2 \rangle + \langle I_M \rangle^2\, \mathrm{var}[P_{sp}]}   (5)
which is independent of the difference phase term, as expected for multiplicative noise. Eq. 5 shows that reducing the variance of the modulation intensity, of the speckle phase term, or of both will improve the SNR and lead to better fringe quality. We consider here a fully developed, polarized speckle field, whose intensity obeys negative exponential statistics and whose phase is uniformly distributed. The distribution of I_sp for an unresolved speckle pattern depends upon the average number of speckles, n, in one pixel of the CCD [1], from which the joint probability density function (pdf) of the modulation and background intensity can be deduced when a smooth reference wave is used [10]:
p(I_0, I_M) = \frac{n^n\, I_M \left( I_0 - I_{ref} - \frac{I_M^2}{4 I_{ref}} \right)^{n-2}}{2\, I_{ref}\, \Gamma(n-1)\, \langle I_{sp} \rangle^{n}}\, \exp\!\left( -n\, \frac{I_0 - I_{ref}}{\langle I_{sp} \rangle} \right)   (6)

with I_M \le \sqrt{4 I_{ref} (I_0 - I_{ref})} and n \ge 2. Note that due to the integration over the pixel the bracketed expression in the numerator is not zero. The joint pdf reaches its maximum value at

\hat{I}_0 = I_{ref} + \langle I_{sp} \rangle\, \frac{2n-3}{2n}, \qquad \hat{I}_M = \sqrt{\frac{2\, I_{ref}\, \langle I_{sp} \rangle}{n}}   (7)

From Eq. 6 the pdf of the modulation intensity alone is

p(I_M) = \frac{n\, I_M}{2\, I_{ref}\, \langle I_{sp} \rangle}\, \exp\!\left( -\frac{n\, I_M^2}{4\, I_{ref}\, \langle I_{sp} \rangle} \right)   (8)

which has its maximum at the same value as given in Eq. 7, while the interval [0, 2\hat{I}_M] includes 95.6% of all modulation intensity values.
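The marginal pdf of Eq. 8 can be checked numerically. The sketch below assumes the reading of Eq. 8 as p(I_M) ∝ I_M exp(−n I_M²/(4 I_ref⟨I_sp⟩)) and uses arbitrary example intensities; it verifies that the pdf integrates to one and peaks at the Î_M value given in Eq. 7.

```python
import numpy as np

# Numerical sanity check of the modulation-intensity pdf, Eq. 8: it should be
# normalized and peak at I_M = sqrt(2 I_ref <I_sp> / n). Intensity values are
# arbitrary example numbers, not measured data.
I_ref, I_sp_mean, n = 100.0, 50.0, 4

I_M = np.linspace(0.0, 400.0, 200001)
p = (n * I_M) / (2 * I_ref * I_sp_mean) * np.exp(-n * I_M**2 / (4 * I_ref * I_sp_mean))

dI = I_M[1] - I_M[0]
norm = np.sum(p) * dI              # rectangle-rule integral over the grid
I_M_hat = I_M[np.argmax(p)]        # numerically located maximum

print(norm, I_M_hat, np.sqrt(2 * I_ref * I_sp_mean / n))
```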
3 Reducing the variance of I_M

We note from Eq. 1 that I_M can be modified by varying I_ref in each pixel so as to obtain a constant modulation intensity, i.e. var[I_M] = 0. To illustrate the concept and its effect, we consider the out-of-plane fringe pattern expected from a point-loaded bending beam clamped at x = 0, with the bending line

z(x) = \left( \frac{x}{L} \right)^2 \left( 3 - \frac{x}{L} \right)

Fig. 1 shows the analytical bending line z(x) and, at the top, a cross-section through simulated speckle fringes, where we have assumed a modulation intensity distribution given by Eq. 8 for n = 4. The same cross-section is displayed with a constant modulation intensity, i.e. the smooth envelope multiplied only by the random speckle phase factor, Eq. 4.

Fig. 1. (a) Simulated cross-section through the out-of-plane speckle fringe pattern expected from a bending beam. (b) Smooth fringe pattern. (c) Speckle fringe pattern with constant modulation intensity. (d) Bending line.
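The simulation behind Fig. 1 can be sketched as follows, assuming that Eq. 8 corresponds to a Rayleigh distribution with scale sqrt(2 I_ref⟨I_sp⟩/n); the beam length, number of fringes and intensity levels are made-up example values.

```python
import numpy as np

# Sketch of the Fig. 1 simulation: out-of-plane correlation fringes of a
# point-loaded bending beam, with the modulation intensity drawn from Eq. 8
# for n = 4 and a uniformly distributed speckle phase term (Eq. 4).
rng = np.random.default_rng(0)
L, npix = 1.0, 1000
x = np.linspace(0.0, L, npix)

z = (x / L) ** 2 * (3.0 - x / L)              # bending line of Fig. 1
dphi = 12.0 * np.pi * z / z.max()             # phase change, ~6 fringes assumed

I_ref, I_sp_mean, n = 100.0, 50.0, 4
# Eq. 8 is a Rayleigh pdf with scale sigma = sqrt(2 I_ref <I_sp> / n)
I_M = rng.rayleigh(scale=np.sqrt(2 * I_ref * I_sp_mean / n), size=npix)
psi = rng.uniform(0.0, 2.0 * np.pi, size=npix)   # phi_sp,i - phi_ref

# Speckle fringe pattern, Eq. 3, and its smooth counterpart (constant I_M)
F_speckle = 2 * I_M * np.sin(dphi / 2) * np.sin(dphi / 2 + psi)
F_smooth = 2 * np.mean(I_M) * np.sin(dphi / 2)

print(F_speckle[:5])
```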
4 Experimental implementation

4.1 Amplitude-only SLM

The experimental implementation is performed using an amplitude-only spatial light modulator (Sony LCX016AL SLM with 832 x 624 pixels at a pitch of 32 µm) in a conventional ESPI set-up [9]. In general, twisted-nematic LCDs vary both the phase and the amplitude of the incident light together, but not independently. Nevertheless, it has been shown [11] how to obtain amplitude-only characteristics with elliptically polarized light. The phase and amplitude transmission characteristics are plotted in Fig. 2.
Fig. 2. Intensity variation and phase stability of the amplitude-only SLM: normalized intensity (amplitude, intensity) and phase change (deg.) versus gray level (0-255).
The plot shows that the incident intensity can be reduced by up to 96% while the phase changes by less than 3° over the entire dynamic range (0 to 255 grey levels) of the LCD.

4.2 Adaptive DSPI
The experimental verification is performed using a conventional DSPI set-up (Fig. 3). In this set-up, an F/3.5 imaging lens images both the LCD and the rough object onto the CCD. By working at ~1/3 magnification, we obtain a one-to-one pixel correspondence between the LCD and the CCD. The speckle intensity and the reference intensity are measured by shuttering the reference and object beam, respectively. From the values of I_sp and I_ref the modulation intensity I_M is estimated at each pixel according to Eq. 1. To avoid saturating the camera (or at least most of it), the total intensity must remain below 255 GL for a fraction P of the pixels (e.g. P = 99%). The modulation intensity is maximized by adaptively controlling the reference intensity transmission at each pixel within the dynamic range of the LCD, i.e. \tau \in [\tau_{min}, 1] (see Fig. 2).
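The pixelwise compensation can be sketched as follows. The target modulation, the intensity maps and the speckle statistics below are assumptions for illustration, not the measured data; the sketch only shows how a per-pixel transmission, clipped to the LCD range, narrows the spread of the modulation intensity.

```python
import numpy as np

# Sketch of the pixelwise adaptive compensation: the reference transmission
# tau is chosen per pixel so that I_M = 2*sqrt(tau * I_ref * I_sp) (Eq. 1)
# approaches a common target, clipped to the LCD range [tau_min, 1].
rng = np.random.default_rng(2)
tau_min = 0.04
I_ref = np.full((64, 64), 122.0)                   # smooth reference beam (GL)
I_sp = rng.exponential(scale=50.0, size=(64, 64))  # example speckle map (GL)

I_M_target = 128.0
# Invert I_M = 2*sqrt(tau * I_ref * I_sp) for tau, then clip to the LCD range
tau = np.clip(I_M_target**2 / (4.0 * I_ref * I_sp), tau_min, 1.0)

I_M = 2.0 * np.sqrt(tau * I_ref * I_sp)
print(np.std(I_M), np.std(2.0 * np.sqrt(I_ref * I_sp)))
```

Dark speckles remain below the target even at full transmission, which is why a residual spread (the "notches" mentioned above) survives the compensation.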
Fig. 3. Experimental realization of adaptive speckle correction. BS are non-polarizing beam splitters, P1 and P2 are linear polarizers, QWP1 and QWP2 are quarter-wave plates, TN-LCD is the twisted-nematic LCD, IL is the imaging lens.
I_{i,f} \le I_0 + I_M \le 255\ \mathrm{GL}, \qquad |I_f - I_i| \le 2 I_M \le 255\ \mathrm{GL}   (9)

In order to optimise the measurement, we have to choose an optimum modulation intensity that copes with as many speckle intensities as possible; from Eq. 8 we accommodate most of the speckles within an interval of twice the most probable values. Hence from Eq. 9

2\hat{I}_0 + 2\hat{I}_M = 2 I_{ref} + \langle I_{sp} \rangle\, \frac{2n-3}{n} + 2\sqrt{\frac{2\, I_{ref}\, \langle I_{sp} \rangle}{n}} \le 255\ \mathrm{GL}, \qquad 4\hat{I}_M = 4\sqrt{\frac{2\, I_{ref}\, \langle I_{sp} \rangle}{n}} \le 255\ \mathrm{GL}   (10)

Best results are expected for full modulation, i.e. I_M = 128 GL, which then calls for a background intensity of the same level. Due to the dynamic range of the LCD, the reference intensity can be varied within the interval [\tau_{min}\, I_{ref},\, I_{ref}]:

I_M \overset{!}{=} 2\sqrt{I_{sp,max}\, I_{ref}\, \tau_{min}} = 2\sqrt{I_{sp,min}\, I_{ref}}   (11)
Hence, the range of speckle intensities that can be accommodated by the adaptive technique is given by I_{sp,min} = \tau_{min}\, I_{sp,max}. These cases of bright and dark speckles correspond to a maximum total intensity of

I_{i,f}(\mathrm{bright}) \le I_{ref}\, \tau_{min} + I_{sp,max} + I_M \le 255\ \mathrm{GL}, \qquad I_{i,f}(\mathrm{dark}) \le I_{ref} + I_{sp,min} + I_M = I_{ref} + I_{sp,max}\, \tau_{min} + I_M   (12)

The optimum value I_M = 128 GL then yields the condition

I_{ref} + I_{sp,max} \le \frac{255\ \mathrm{GL}}{1 + \tau_{min}}   (13)

In our case, we have \tau_{min} = 0.04, from which I_{ref} + I_{sp,max} \le 245 GL. From Eq. 13 the two intensities are found to be I_{ref} = I_{sp,max} = 122 GL.
Conclusion

Theory, simulation and experimental investigation have shown the feasibility of improving speckle fringe patterns by modulating the I_M values pixelwise with an amplitude-only LCD SLM. From Fig. 9c we can see that without adaptive modulation the intensity in the central region of the fringe pattern is saturated and the fringes at the rim are rather weak. In contrast, from Fig. 9d we note that the fringe pattern after adaptive modulation is much more homogeneous, and fringes are also discernible along the rim. This can be understood from the histograms in Figs. 9e and 9f of the marked bright fringes in Figs. 9a and 9b, respectively. Most of the pixels are clustered close to zero grey value in Fig. 9e, resulting in a weak appearance of the fringe, whereas the pixels in Fig. 9f have a broad spectrum of values, leading to a better quality fringe pattern. The SNR has been improved significantly. Complete reduction of the spectrum to a single value was not possible with our LCD SLM because of the limitation of its dynamic range.
References
1. Rastogi, P.K (Ed.) (2001) Digital speckle pattern interferometry and related techniques. John Wiley & Sons, Chichester
2. Creath, K (1985) Phase-shifting speckle interferometry. Appl. Optics 24:3053-3058
3. Surrel, Y (1996) Design of algorithms for phase measurements by the use of phase stepping. Appl. Optics 35:51-60
4. Arsenault, H.H, April, G (1976) Speckle removal by optical and digital processing. J. Opt. Soc. Am. 66:177
5. Jain, A.K, Christensen, C.R (1980) Digital processing of images in speckle noise. Proc. SPIE 243:46-50
6. Lim, J.S, Nawab, H (1980) Techniques for speckle noise removal. Proc. SPIE 243:35-44
7. Federico, A, Kaufmann, G.H (2001) Comparative study of wavelet thresholding methods for denoising electronic speckle pattern interferometry fringes. Opt. Eng. 40:2598-2604
8. Sveinsson, J.R, Benediktsson, J.A (2002) Review of applications of wavelets in speckle reduction and enhancement of SAR images. Proc. SPIE 4541:47-58
9. Hack, E, Gundu, P.N, Rastogi, P.K (2005) Adaptive correction to the speckle correlation fringes using twisted nematic LCD. Appl. Opt. 44:2772-2781
10. Lehmann, M (1996) Phase-shifting speckle interferometry with unresolved speckles: A theoretical investigation. Opt. Commun. 128:325-340
11. Pezzaniti, J.L, Chipman, R.A (1993) Phase-only modulation of a twisted nematic liquid-crystal TV by use of the eigenpolarization states. Opt. Lett. 18:1567-1569
Random phase shift interferometer
Radu Doloca, Rainer Tutsch
Technische Universität Braunschweig, Institut für Produktionsmesstechnik
1 Introduction

Mechanical vibrations cause significant problems for most interferometric test methods. With the classic phase-shifting interferometric methods the experimental data are obtained sequentially, taking four or five frames with a CCD camera, which means that the fringe pattern must remain stable during the measurement. Because of floor vibrations, interferometric systems must therefore be mounted on a vibration-isolation table, which is usually expensive. Several vibration-tolerant solutions have been developed [1]. Taking the interferogram data at higher frame rates pushes the sensitivity to higher vibration frequencies [2,3]. With instantaneous phase-shifting techniques, using polarisation components [4,5] or holographic elements [6,7,8], the beams are split into multiple paths and phase-shifted interferograms are acquired simultaneously. This paper presents the concept and the first experimental setup of an interferometric system that is designed to work without vibration isolation and uses the random mechanical vibrations as a phase shifter. An additional detector system consisting of three photodiodes is used to determine the phase shifts present at the moments the interference images are taken. An adequate PSI algorithm for random phase shifts must be used.
2 Experimental setup

The system consists basically of a two-beam Fizeau interferometer. For a Fizeau interferometer only the relative oscillations between the reference and test plates influence the fringe pattern. This reduces the
demands on the adjustment of the optical components with respect to internal vibration sensitivity and thermal effects. Two orthogonally polarised laser beams of different wavelengths, from a continuous He-Ne laser with λ1 = 632.8 nm and a pulsed laser diode with λ2 = 780 nm (see Fig. 1), are coupled through beam splitter-1 and collimated by achromat-1 onto the reference and test plates, which are 50 mm in diameter.

Fig. 1. Experimental setup

The waves reflected from the test and reference surfaces are deviated by beam splitter-2 and pass through spatial filter-2. Using the polarising beam splitter-3, the interference fields are separated. The fringes from the pulsed laser are projected onto the sensor of a CCD camera, and the He-Ne fringes are collimated by achromat-2 onto the detector system that consists of three photodiodes. The achromats have the same focal length f = 300 mm, in order to achieve a magnification M = 1 at the photodiode system.

Fig. 2. The oscillating mounting system of the test plate

Fig. 3. The detector system, consisting of three photodiodes P1, P2, P3 placed in the interference field of the collimated He-Ne rays

Under the influence of mechanical vibrations, the relative oscillations between the test and reference plates lead to a continuous random phase shift between the test and reference beams. We assume the vibration-induced movements of the reference plate and the test plate to be rigid-body shifts and tilts.
3 Description of the method

Preliminary tests are performed on a vibration-isolation table. The mechanical holder of the test plate is oscillated around the X axis with a piezoelectric transducer, see Fig. 2. The post holder has a parallelepipedic form, with the X side much longer than the Z side, so that all points of the test surface and of the mounting system oscillate in phase around the X axis. Starting with simple oscillation functions, such as sinusoidal signals, the excitation is gradually made more complex up to random oscillations, in order to simulate the influence of floor vibrations. The oscillations of the test plate produce a continuous phase shift between the test and reference beams. The CCD camera is externally triggered and synchronised with the laser pulses, which are intense enough and short enough to freeze the pulsed-laser fringes on the CCD sensor and obtain good quality interferogram images.

The three photodiodes of the detector system are placed in the interference field of the collimated He-Ne beams and define a plane perpendicular to the optical axis. The sensitive area of each photodiode is one square millimetre. The analog signals of the photodiodes show the time dependence of the intensity of the He-Ne fringe pattern at three different measurement points; they are fed through an acquisition card to a computer running LabVIEW. In the case of a linear phase shift in time, which would correspond to a translation of the test plate with constant velocity along the optical axis, the signal of a photodiode is sinusoidal in time. According to the fundamental equation of PSI [9], the variation of the intensity measured by the photodiodes can be written as

I_i(t) = I'_i + I''_i \cos[\delta_i(t) + \Phi_i], \quad i = 1, 2, 3; \qquad \delta_i(t) = \frac{4\pi}{\lambda_1}\, z_i(t)   (1)

where I'_i is the intensity bias, I''_i is half the peak-to-valley intensity, \delta_i(t) is the time-varying phase shift introduced into the test beam, \Phi_i is related to the temporal phase shift, i is the photodiode index, and z_i(t) is the optical path difference (OPD) introduced into the test beam. For continuous oscillations of the test plate the photodiode signal looks more like a frequency-modulated signal, as can be seen in Fig. 4a for a 100 Hz oscillation frequency of the piezo transducer. In this representation the signals have an arbitrary intensity bias.
To correlate the photodiode signals with the test plate oscillations, a single-point laser vibrometer from Polytec is used. A laser vibrometer measures the vibration velocity of an object in the direction of the laser beam. It is oriented as in Fig. 2 and measures the oscillations along the Z axis of a point on the mechanical holder of the test plate. The vibrometer signal is also fed through the acquisition card and synchronised with the photodiode signals, as shown in Fig. 4b. The evaluation of the photodiode signals gives the variation of the phase shift with time at the three corresponding individual points on the test surface. The plane defined by these three points represents the variation of the phase shift for every point on the test surface. It can be observed that an extreme value of the photodiode signals corresponds to an extreme value of the vibrometer signal; we call these the main extreme values. Between two main extreme points, the signals present a series of maximum and minimum values, I_M and I_m, so the vibrations introduce phase shifts of up to several wavelengths. Because of the tilt, there is a spatial variation of the phase shift across the test surface. A photodiode that corresponds to a point placed at a higher level on the test surface, P1 for example, gives a signal with a larger number of maximum and minimum values, due to the larger oscillation amplitude. During the measurements, the interference fringes must remain several times larger than the sensitive area of a photodiode; otherwise the signal would be the integral of the light intensity over an area comparable with the fringe width, and the signal contrast would be very low. To avoid this effect, the mechanical oscillation should be limited to about 3×10^-3 degrees.

To determine the time dependence of the phase shift at the three individual points on the test surface, we set I'_i = 0 and \Phi_i = 0 in Eq. 1 and obtain

z_{ij}(t) = \frac{\lambda_1}{4\pi} \arccos\!\left( \frac{I_i(t)}{I''_i} \right), \quad i = 1, 2, 3 \ \text{(photodiode index)}   (2)

Using a computation algorithm, the main extreme values are identified. We apply this equation iteratively on every time interval T_{ij} = (t_{ij}, t_{i(j+1)}) between the maximum and minimum values and obtain the OPD variations z_i(t) for the entire measurement time.

Fig. 4. a) The simultaneous photodiode signals. b) The corresponding calculated oscillations of the three individual points on the test surface

The results (see Fig. 4b) show the oscillations of the three individual points on the test surface, with arbitrary offsets. The points oscillate in phase, but with different amplitudes, due to their different position levels. There is good agreement with the vibrometer signal, which shows the oscillations of a measurement point situated on the mounting system, at a higher level above the test plate (see Fig. 2). Combining the photodiode signals, we obtain the oscillating plane defined by the three individual points. Under the assumption of a rigid-body movement of the test plate we obtain the time dependence of the OPD shift z(x, y, t) at every measurement point on the test surface. Because of the arbitrary offsets of the z_i(t), the OPD shift is determined with reference to an arbitrarily tilted plane. Only shifts in the Z direction and tilts around the X and Y axes are effective for fringe modulation, while lateral movements of the test plate can be neglected.

While the movement of the test plate is measured continuously, the CCD camera records a number of interferograms of the laser diode. The trigger signal of the pulse laser is connected in parallel with the photodiode signals to the acquisition card. Comparing the disc interference image of the laser diode on the CCD camera with the disc interference image of the He-Ne laser at the detector system plane, we can determine the (x_i, y_i) coordinates of the three individual points on the test surface. Using the three-point form of the plane equation we obtain

z(x, y, t) = \frac{A - A_1 x - A_2 y}{A_3}   (3)

where A = \det(X_1 X_2 X_3), X_i = (x_i, y_i, z_i(t)), and A_i is the determinant obtained by replacing X_i with a column vector of 1s. Next we get the phase shift variation for every point on the test surface:
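The arccos inversion of Eq. 2 can be sketched on a single monotonic interval between two extreme values of the photodiode signal, where the phase runs through at most π and the branch of the arccosine is unique; the signal parameters below are assumed example values.

```python
import numpy as np

# Sketch of Eq. 2 on one monotonic interval between two extreme values: the
# phase delta_i(t) runs from 0 to pi, so arccos recovers it unambiguously.
lam1 = 632.8e-9                        # He-Ne wavelength (m)
t = np.linspace(0.0, 1.0, 500)

z_true = (lam1 / 4.0) * t              # OPD ramp: delta goes from 0 to pi
I2 = 40.0                              # half peak-to-valley intensity I''_i
I = I2 * np.cos(4.0 * np.pi * z_true / lam1)   # Eq. 1 with I'_i = 0, Phi_i = 0

# Eq. 2; clipping guards against rounding just outside [-1, 1]
z_rec = (lam1 / (4.0 * np.pi)) * np.arccos(np.clip(I / I2, -1.0, 1.0))

print(np.max(np.abs(z_rec - z_true)))
```

Over a full measurement the sign of the increment alternates between the identified extreme values, which is what the iterative algorithm in the text takes care of.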
\delta(x, y, t) = \frac{4\pi}{\lambda_1}\, z(x, y, t)   (4)
A four-step PSI algorithm can be used for arbitrary phase shifts introduced between the sequentially recorded interferograms. The correlation on the time scale between the trigger signal of the pulse laser and the photodiode signals makes it possible to determine, for every interference image and at any point (x, y), the random phase shift \delta_k(x, y) that occurs, where k = 1, 2, 3, 4 is the index of the interferogram. The equation system

I_k(x, y) = I'(x, y) + I''(x, y) \cos[\varphi(x, y) - \delta_k(x, y)]   (5)
with three unknowns (I'(x, y), the intensity bias; I''(x, y), half the peak-to-valley intensity modulation; and \varphi(x, y), the unknown phase) can be solved for the value of \varphi(x, y) at every point of the interferograms. First, the intensity bias I'(x, y) is eliminated. With the notations

c_{12} = \cos\delta_1 - \cos\delta_2, \quad s_{12} = \sin\delta_1 - \sin\delta_2, \quad c_{34} = \cos\delta_3 - \cos\delta_4, \quad s_{34} = \sin\delta_3 - \sin\delta_4, \quad R = \frac{I_1 - I_2}{I_3 - I_4}   (6)

the result of the PSI algorithm is

\varphi(x, y) = \tan^{-1}\!\left[ \frac{R\, c_{34} - c_{12}}{s_{12} - R\, s_{34}} \right]   (7)
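The four-step evaluation with arbitrary known shifts can be sketched for a single pixel. The intensity model I_k = I' + I'' cos(φ − δ_k) is used, and the four phase shifts, the bias, the modulation and the test phase are assumed example values.

```python
import numpy as np

# Single-pixel sketch of the four-step PSI evaluation with arbitrary but
# known phase shifts (Eqs. 5-7). All numerical values are example inputs.
delta = np.array([0.3, 2.2, 3.9, 5.1])   # random but known phase shifts (rad)
phi_true = 0.7                           # unknown phase to recover (rad)
Ib, Im = 2.0, 1.0                        # bias I' and half modulation I''

I = Ib + Im * np.cos(phi_true - delta)   # Eq. 5 at one pixel

c12 = np.cos(delta[0]) - np.cos(delta[1])
s12 = np.sin(delta[0]) - np.sin(delta[1])
c34 = np.cos(delta[2]) - np.cos(delta[3])
s34 = np.sin(delta[2]) - np.sin(delta[3])
R = (I[0] - I[1]) / (I[2] - I[3])        # Eq. 6

phi = np.arctan((R * c34 - c12) / (s12 - R * s34))   # Eq. 7

print(phi)
```

The arctan branch limits the result to (−π/2, π/2); in practice a quadrant-aware evaluation would be used before spatial unwrapping.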
The optical path difference (OPD) related to the test surface profile is given by

\mathrm{OPD}(x, y) = \frac{\lambda_2}{4\pi}\, \varphi(x, y)   (8)
The real profile, in the Z axis direction, is obtained after rotating the OPD profile (Eq. 8) by the tilt angle of the plane defined by the offsets of the z_i(t) oscillations. As depicted in Fig. 4b, the values of z_i(t) are calculated with reference to the plane of maximum tilt of the test plate. To find the value of the tilt angle, we iteratively change the offsets of the z_i(t) to obtain the equilibrium position of the test plate, using the fact that dz_i(t)/dt is maximal at the equilibrium position. The reconstruction of the OPD shifts z_i(t) also works for random oscillations of the piezoelectric transducer. The same computation algorithm is used for the identification of the main extreme values and of the maximum and minimum values of the photodiode signals. In the next step we will test the interferometer on a table without a vibration-isolation system. The photodiodes will then measure the relative oscillations between the reference and test plates. Suitable dimensions of the post holder must be found, so that the floor vibrations induce an adequate oscillating phase shift.
4 Summary

In this paper a two-beam interferometric system has been introduced that is designed to work without vibration isolation and uses random mechanical vibrations as a phase shifter. A detector system consisting of three photodiodes is placed in the interference field of a continuous He-Ne laser to determine the random phase shifts, and a CCD camera is simultaneously used to evaluate the fringes from a pulsed laser diode. A four-step PSI algorithm for random phase shifts was described.
5 References
1. Hayes, J (2002) Dynamic interferometry handles vibration. Laser Focus World, March 2002, pp. 109-113
2. Wizinowich, P.L (1990) Phase shifting interferometry in the presence of vibration: a new algorithm and system. Applied Optics 29:3271-3279
3. Deck, L (1996) Vibration-resistant phase-shifting interferometry. Applied Optics 35:6655-6662
4. Koliopoulos, C.L (1992) Simultaneous phase-shift interferometer. In: Doherty, V.J (Ed.) Advanced Optical Manufacturing and Testing II, Proc. SPIE 1531:119-127
5. Engineering Synthesis Design, Inc. (ESDI) (2005) Product information: Intellium H1000. Tucson, AZ
6. Hettwer, A, Kranz, J, Schwider, J (2000) Three channel phase-shifting interferometer using polarisation-optics and a diffraction grating. Optical Engineering 39:960-966
7. Millerd, J.E, Brock, N.J, Hayes, J.B, Wyant, J.C (2004) Instantaneous phase-shift point-diffraction interferometer. In: Creath, K, Schmit, J (Eds.) Interferometry XII: Techniques and Analysis, Proc. SPIE 5531:264-272
8. Millerd, J.E, Brock, N.J, Hayes, J.B, North-Morris, M.B, Novak, M, Wyant, J.C (2004) Pixelated phase-mask dynamic interferometer. In: Creath, K, Schmit, J (Eds.) Interferometry XII: Techniques and Analysis, Proc. SPIE 5531:304-314
9. Greivenkamp, J.E, Bruning, J.H (1992) Phase shifting interferometry. In: Malacara, D (Ed.) Optical Shop Testing, 2nd edn, Chap. 14
Spatial correlation function of the laser speckle field with holographic technique
Vladimir Markov, Anatoliy Khizhnyak
MetroLaser, Inc., 2572 White Road, Irvine, CA 92614, USA
1 Introduction

Speckle-field characterization has been a subject of interest since the very beginning of coherent optics. Although a variety of approaches has been tried, a reliable and consistent method for an accurate experimental estimation of the spatial correlation function of such a field has not yet been achieved. An analysis of the power spectral density [1] and of the autocorrelation function of the field's intensity distribution [2] allows one to derive such key parameters as the average lateral and longitudinal speckle size. A number of methods may be used to measure these values experimentally, such as correlation [1-3], speckle photography [4], and analysis of the spatial intensity [5]. However, they all provide qualitative, rather than quantitative, information, especially with regard to the longitudinal speckle size. This report discusses a method that allows for the direct measurement of the spatial correlation function of the speckle field and, as a result, of the 3-D dimensions of the speckles. The method is based on a fundamental feature of volume holography: the selection of that component of the reconstruction field which is matched to the spatial structure of the field used at recording [6].
2 Characterization of the speckle hologram

2.1 Recording stage
Let us consider a volume hologram recorded with a plane-wave object beam and speckle reference beam. No general solution has been established so far for characterizing such a hologram, although a partial solution
can be obtained by using certain approximations, such as the first Born approximation, which works well at low diffraction efficiency [7]. The method we will apply is known as holographic mode analysis [8]. It can be used when the following conditions are satisfied: (1) a volume hologram is employed; (2) its thickness ℓ encloses several speckles and the entire interaction area of the recording beams; (3) the spacing Λ of the holographic grating (cross-grating) is smaller than that of the intermodulation component. Under these conditions the reconstructed field can be written as

E(r) = A(R_0, R_R)\left[ M_1(r) \exp(\Delta_1 z) + M_2(r) \exp(\Delta_2 z) \right] + E_{OR}(R_0, R_R),   (3)

where A(R_0, R_R) = \iint_{P_0} R_0^*(x', y', z_0)\, R_R(x', y', z_0)\, dx'\, dy', E_{OR}(R_0, R_R) is the part of the reconstruction field R_R that is orthogonal to the recording field R_0, and \Delta_1 and \Delta_2 are the propagation constants of the modes M_1(r) and M_2(r), respectively.

2.2 Reconstruction
It follows from Eq. (1) that the amplitude of the reconstructed beam is proportional to the degree of orthogonality between the spatial functions of the reference beam used at recording and reconstruction of the hologram, i.e., the spatial correlation function of these two fields. Thus, by measuring the intensity of the reconstructed beam, it is possible to obtain a direct estimation of spatial correlation function for these two fields. We will show now that this correlation function coincides with the function of mutual intensity that is used to characterize the speckle field. When the ergodicity conditions of the light field that passes through the diffuser are satisfied, the integration in Eq. 1 can be substituted by an ensemble averaging:
$$\iint_{P_0} R_0^{*}(x,y,z_0)\,R_R(x,y,z_0)\,dx\,dy=\big\langle R_0^{*}(x,y,z_0)\,R_R(x,y,z_0)\big\rangle,\qquad(4)$$
which corresponds to the definition of the mutual intensity function $J_R(\vec r,\vec r\,')$. Following [11], we now introduce the normalized mutual intensity function $\mu_I(\vec r,\vec r\,')$, as this is the parameter that can be measured experimentally. In the Fresnel approximation, $\mu_I(\vec r,\vec r\,')$ is:
$$\mu_I(\vec r,\vec r\,')=\frac{\big|J_R(\vec r,\vec r\,')\big|^2}{J_R(\vec r,\vec r)\,J_R(\vec r\,',\vec r\,')}=\frac{\left|\displaystyle\iint_{P_0} P(\xi,\eta)\,\exp\!\big[i\tau(\xi^2+\eta^2)+i(\alpha\xi+\beta\eta)\big]\,d\xi\,d\eta\right|^2}{\left[\displaystyle\iint_{P_0} P(\xi,\eta)\,d\xi\,d\eta\right]^2},\qquad(6)$$
where P(ξ,η) is the real-valued aperture function of the illuminated area on the diffuser. The vector $\vec r=(x,y,z_0)$ describes the distance and direction from a point $\vec p=(\xi,\eta)$ on the diffuser at z = 0 to the point in the observation plane with transverse coordinates (x, y) at distance z0 from the diffuser. The vector $\vec r\,'=(x+\gamma,\,y+\delta,\,z_0+\varepsilon)$ describes the position of the second point in the space of the speckle field under analysis. The parameters τ, α, and β are:

$$\tau=\frac{k\varepsilon}{2z_0^2};\qquad \alpha=\frac{k\,(z_0\gamma-\varepsilon x)}{z_0^2};\qquad \beta=\frac{k\,(z_0\delta-\varepsilon y)}{z_0^2}.\qquad(7)$$
Thus, the diffraction efficiency of the volume hologram is directly proportional to the mutual intensity. Therefore, by recording a hologram of the plane and speckle waves and measuring its diffraction efficiency as a function of the spatial position of the reconstruction speckle beam, the correlation function of this beam can be estimated. Because the hologram completely replicates the spatial distribution of the recorded speckle field, the method should allow a complete characterization of the latter. As an example, let us consider a common experimental situation in which the diffuser is illuminated with a Gaussian beam, i.e.:

$$P(\vec p)=\exp\!\big[-b\,(\xi^2+\eta^2)\big],\qquad(8)$$

with $D=2/\sqrt{b}$ the diameter of the illuminated part of the diffuser at the 1/e value. Substituting Eq. (8) into Eq. (6), with the parameters of Eq. (7), yields:
$$\mu_I(\vec r,\vec r\,')=\frac{b^2}{b^2+\tau^2}\,\exp\!\left(-\frac{b\,(\alpha^2+\beta^2)}{2\,(b^2+\tau^2)}\right).\qquad(9)$$
Eq. (9) has an analytical form that allows us to treat the two most typical cases of speckle-field characterization: spatial decorrelation measured in the lateral direction and in the longitudinal direction.

2.2.1 Lateral shift
For a lateral shift only (γ ≠ 0, δ ≠ 0, ε = 0), the mutual correlation function takes the exponential form:

$$\mu_I(\vec r,\vec r\,')=\exp\!\left(-\frac{2\pi^2(\gamma^2+\delta^2)}{\lambda^2 z_0^2\,b}\right).\qquad(10)$$
The correlation function derived in Eq. (10) has its characteristic lateral scale at the 1/e value:

$$\sigma_{\perp}=\frac{\sqrt{2}}{\pi}\,\frac{\lambda z_0}{D}.\qquad(11)$$
2.2.2 Longitudinal shift
The longitudinal correlation function, derived for a displacement along the z-axis (γ = 0, δ = 0, ε ≠ 0), can be expressed as a Lorentzian curve:

$$\mu_I(\vec r,\vec r\,')=\left(1+\frac{\pi^2 D^4}{16\,\lambda^2 z_0^4}\,\varepsilon^2\right)^{-1},\qquad(12)$$

where the longitudinal scale is:

$$\sigma_{\parallel}=\frac{4}{\pi}\,\frac{\lambda z_0^2}{D^2}.\qquad(13)$$
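Assuming the scales of Eqs. (11) and (13), σ⊥ = (√2/π)λz0/D and σ∥ = (4/π)λz0²/D², a small helper makes the orders of magnitude concrete (the HeNe wavelength and geometry below are illustrative, not from the paper):

```python
import math

def speckle_scales(wavelength, z0, D):
    """Characteristic 1/e speckle sizes per Eqs. (11) and (13):
    lateral  sigma_perp = (sqrt(2)/pi) * lambda * z0 / D,
    longitudinal sigma_par = (4/pi) * lambda * z0**2 / D**2."""
    lateral = math.sqrt(2.0) / math.pi * wavelength * z0 / D
    longitudinal = 4.0 / math.pi * wavelength * z0**2 / D**2
    return lateral, longitudinal

# Example: HeNe illumination, observation 0.5 m behind a 2 mm diffuser spot.
s_lat, s_par = speckle_scales(633e-9, 0.5, 2e-3)
print(f"lateral: {s_lat*1e6:.1f} um, longitudinal: {s_par*1e3:.1f} mm")
```

The strong z0²/D² scaling of the longitudinal size illustrates why the longitudinal direction is the harder one to characterize experimentally.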
3 Experimental results
To verify experimentally the proposed method of mapping 3-D speckle, a volume hologram was recorded in a thick (ℓ = 2.5 mm to 7.0 mm) Fe:LiNbO3 crystal. The recording geometry was asymmetric, with a normally incident plane-wave object beam and a speckle-encoded reference beam at incidence angle θR, forming a grating with spacing Λ ≈
0.8 μm.

$$c(x,y,t_1,t_2)=\frac{\displaystyle\sum_{p=-P}^{P}\sum_{q=-Q}^{Q}\big[f_1(x+p,y+q)-\bar f_1\big]\big[f_2(x+p,y+q)-\bar f_2\big]}{\sqrt{\displaystyle\sum_{p=-P}^{P}\sum_{q=-Q}^{Q}\big[f_1(x+p,y+q)-\bar f_1\big]^2\,\sum_{p=-P}^{P}\sum_{q=-Q}^{Q}\big[f_2(x+p,y+q)-\bar f_2\big]^2}}\qquad(2)$$

where p and q are dummy variables; the block size is $(2P+1)\times(2Q+1)$; $f(x,y,t_i)$ is written as $f_i(x,y)$ for simplicity; and $\bar f_i$ is the average value of $f_i(x,y)$ in that block. The faults are successfully detected by the correlation coefficients $c(x,y,t_1,t_2)$, as shown in Fig. 2(c). Unfortunately, this result is not as encouraging in the presence of noise.
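A minimal sketch of the block-wise NCC of Eq. (2) on synthetic fringes (frame sizes, block half-widths, and the simulated fault are illustrative assumptions, not the authors' data):

```python
import numpy as np

def ncc_map(f1, f2, P=5, Q=5):
    """Block-wise normalized cross-correlation c(x,y,t1,t2) of Eq. (2):
    correlate the (2P+1) x (2Q+1) neighbourhoods of the two frames after
    removing the block means.  Border pixels are left at 1 for brevity."""
    c = np.ones_like(f1, dtype=float)
    for i in range(P, f1.shape[0] - P):
        for j in range(Q, f1.shape[1] - Q):
            b1 = f1[i-P:i+P+1, j-Q:j+Q+1]
            b2 = f2[i-P:i+P+1, j-Q:j+Q+1]
            b1 = b1 - b1.mean()
            b2 = b2 - b2.mean()
            denom = np.sqrt((b1**2).sum() * (b2**2).sum())
            c[i, j] = (b1 * b2).sum() / denom if denom > 0 else 0.0
    return c

# Two fringe frames; frame 2 carries a local phase fault around (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
phase = 0.3 * xx
fault = np.pi * np.exp(-((xx - 32)**2 + (yy - 32)**2) / 30.0)
f1 = np.cos(phase)
f2 = np.cos(phase + fault)
c = ncc_map(f1, f2)
print("correlation at the fault:", c[32, 32], " far from it:", c[10, 10])
```

The correlation stays near 1 where the frames agree and collapses at the fault, which is exactly the behaviour shown in Fig. 2(c).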
Fig. 2. Fault detection by the NCC approach: (a) frame 1; (b) frame 2; (c) normalized cross-correlation of (a) and (b).
2.3 Windowed Fourier transform approach
The WFT approach is proposed by combining the advantages of the FT approach (the transform-based processing is insensitive to noise) and the NCC approach (the block processing localizes the fringe features). The proposed scheme is illustrated in Fig. 3. We first construct a database containing all possible local fringe patterns (step 1). A block in frame 1 is compared with the database (step 2), and the most similar pattern in the database is selected (step 3). Finally, the selected pattern is compared with the block at the same location in frame 2 and their similarity is measured (step 4). If they are similar, no fault is declared; otherwise a fault is identified. Figure 3(c) shows that the WFT approach can detect faults successfully.
Fig. 3. Fault detection by the proposed WFT approach: (a) frame 1; (b) frame 2; (c) fault measure of (a) and (b); (i) construct a database; (ii) compare a local area with the database; (iii) select the most similar basis from the database; (iv) compare this basis with the same local area in the new frame to detect faults.
The algorithm is as follows. (i) Construct the WFT elements for the database as

$$k(x,y,\xi,\eta)=\exp\!\big[-(x^2+y^2)/2\sigma^2\big]\,\exp(j\xi x+j\eta y)\qquad(3)$$

where σ indicates the spatial extension of the patterns; $j=\sqrt{-1}$; and ξ and η are angular frequencies in the x and y directions, respectively. Different values of ξ and η give different WFT elements. (ii) Compute the similarity of a block centered at (x, y) in frame 1 and a WFT element in the database as

$$A_f(x,y,\xi,\eta,t_1)=\sum_{q=-Q}^{Q}\sum_{p=-P}^{P} f(x+p,\,y+q,\,t_1)\,k^{*}(p,q,\xi,\eta)\qquad(4)$$

where p and q are dummy variables; the block size is $(2P+1)\times(2Q+1)$.
Fig. 4. Fault detection in speckle correlation fringes: (a) frame 1; (b) frame 2; (c) phase difference of (a) and (b) using the FT approach; (d) correlation of (a) and (b) using the NCC approach; (e) fault measure of (a) and (b) using the WFT approach.
It is recommended that $P=Q=2\sigma$ and $\sigma=10$. (iii) The best WFT element is the one for which the similarity is highest:

$$\big[\xi(x,y,t_1),\,\eta(x,y,t_1)\big]=\arg\max_{\xi,\eta} A_f(x,y,\xi,\eta,t_1)\qquad(5)$$

$$r(x,y,t_1)=A_f\big[x,y,\xi(x,y,t_1),\eta(x,y,t_1),t_1\big]\qquad(6)$$

where $\xi(x,y,t_1)$ and $\eta(x,y,t_1)$ are the instantaneous frequencies at (x, y), and $r(x,y,t_1)$ is the highest similarity. (iv) Compute the similarity between the block centered at (x, y) in frame 2 and the selected element as

$$r(x,y,t_2)=\sum_{q=-Q}^{Q}\sum_{p=-P}^{P} f(x+p,\,y+q,\,t_2)\,k^{*}\big[p,q,\xi(x,y,t_1),\eta(x,y,t_1)\big]\qquad(7)$$

A fault measure (FM) is then defined and computed as

$$FM(x,y,t_1,t_2)=\frac{r(x,y,t_2)}{r(x,y,t_1)}\times 100\%\qquad(8)$$

A fault alarm is raised if $FM(x,y,t_1,t_2)$ drops below a preset threshold. In all the following examples, an FM threshold of 50% is used.
The four-step algorithm is thus realized by the preceding equations: step (i) by Eq. (3), step (ii) by Eq. (4), step (iii) by Eqs. (5)-(6), and step (iv) by Eqs. (7)-(8).
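The four steps can be sketched as follows; the coarse frequency database grid, the single-block evaluation, and the synthetic fault are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def wft_fault_measure(f1, f2, sigma=10.0, freqs=np.arange(-0.8, 0.81, 0.1)):
    """Four-step WFT fault detection, Eqs. (3)-(8), evaluated for the single
    block centred in the frames for brevity; freqs is a coarse database grid
    of the angular frequencies (xi, eta)."""
    P = Q = int(2 * sigma)                       # recommended P = Q = 2*sigma
    p, q = np.mgrid[-P:P+1, -Q:Q+1]
    win = np.exp(-(p**2 + q**2) / (2.0 * sigma**2))
    cx = cy = f1.shape[0] // 2
    b1 = f1[cx-P:cx+P+1, cy-Q:cy+Q+1]
    b2 = f2[cx-P:cx+P+1, cy-Q:cy+Q+1]

    # Steps 1-3: correlate the frame-1 block with every database element
    # k = win * exp(j*(xi*p + eta*q)) and keep the best match (Eqs. 3-6).
    best_k, r1 = None, 0.0
    for xi in freqs:
        for eta in freqs:
            k = win * np.exp(1j * (xi * p + eta * q))
            a = abs((b1 * np.conj(k)).sum())     # Eq. (4)
            if a > r1:
                best_k, r1 = k, a                # Eqs. (5)-(6)

    # Step 4: compare the selected element with the frame-2 block.
    r2 = abs((b2 * np.conj(best_k)).sum())       # Eq. (7)
    return 100.0 * r2 / r1                       # Eq. (8): fault measure in %

yy, xx = np.mgrid[0:64, 0:64]
f1 = np.cos(0.4 * xx)
fm_good = wft_fault_measure(f1, f1)                            # identical frames
fm_bad = wft_fault_measure(f1, np.cos(0.4 * xx + np.pi * (xx > 32)))
print("FM without fault: %.0f%%,  with fault: %.0f%%" % (fm_good, fm_bad))
```

With identical frames the fault measure is 100%; a local phase flip pulls it well below the 50% alarm threshold.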
3 Comparison results
Speckle correlation fringe patterns without and with faults are simulated, as shown in Fig. 4(a) and (b). They are tested using the FT, NCC and WFT approaches, and the results are shown in Fig. 4(c), (d) and (e), respectively. All the approaches detect the defects, but the faults are most easily identified in the WFT result.
4 Conclusions
In this paper, the temporal unusualness of faults is emphasized, and three approaches (Fourier transform, normalized cross-correlation and windowed Fourier transform) are analyzed and compared. The results show that all three approaches can detect the faults in the example, with the WFT being the most promising.
References
[1] Tichenor, D. A. and Madsen, V. P. (1979) Computer analysis of holographic interferograms for nondestructive testing. Opt. Eng. 18:469-472
[2] Robinson, D. W. (1983) Automatic fringe analysis with a computer image-processing system. Appl. Opt. 22:2169-2176
[3] Osten, W., Jüptner, W. and Mieth, U. (1993) Knowledge assisted evaluation of fringe patterns for automatic fault detection. Proc. SPIE 2004:256-268
[4] Jüptner, W., Mieth, U. and Osten, W. (1994) Application of neural networks and knowledge based systems for automatic identification of fault indicating fringe patterns. Proc. SPIE 2342:16-26
[5] Li, X. (2000) Wavelet transform for detection of partial fringe patterns induced by defects in nondestructive testing of holographic interferometry and electronic speckle pattern interferometry. Opt. Eng. 39:2821-2827
[6] Krüger, S., Wernicke, G., Osten, W., Kayser, D., Demoli, N. and Gruber, H. (2001) Fault detection and feature analysis in interferometric fringe patterns by the application of wavelet filters in convolution processors. Journal of Electronic Imaging 10:228-233
[7] Takeda, M., Ina, H. and Kobayashi, S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72:156-160
[8] Qian, K., Seah, H. S. and Asundi, A. K. (2003) Algorithm for directly retrieving the phase difference: a generalization. Opt. Eng. 42:1721-1724
[9] Kemao, Q. (2004) Windowed Fourier transform for fringe pattern analysis. Appl. Opt. 43:2695-2702
[10] Kemao, Q. (2004) Windowed Fourier transform for fringe pattern analysis: addendum. Appl. Opt. 43:3472-3473
[11] Kemao, Q., Soon, S. H. and Asundi, A. (2003) Instantaneous frequency and its application to strain extraction in moiré interferometry. Appl. Opt. 42:6504-6513
The Virtual Fringe Projection System (VFPS) and Neural Networks Thomas Böttner Institut für Mess- und Regelungstechnik, Universität Hannover Nienburger Str. 17, 30167 Hannover Germany Markus Kästner Institut für Mess- und Regelungstechnik, Universität Hannover Nienburger Str. 17, 30167 Hannover Germany
1 Introduction
Optical measurement systems like fringe projection systems (FPS) are complex systems with a great number of parameters influencing the measurement uncertainty. Purely experimental investigations are unable to determine the influence of the individual parameters on the measurement results. The virtual fringe projection system (VFPS) was developed for this purpose: it makes it possible to control each parameter individually and independently of the other parameters [1]. The VFPS is a computer simulation of a fringe projection system and was mainly developed to investigate different calibration methods. While several black-box calibration methods are presented in [1], neural networks are the main subject of this paper.
2 Neural Networks
Many different kinds of artificial neural networks have been developed, of which backpropagation networks are probably the best known. Most neural networks can be considered simply as a nonlinear mapping between an input space and an output space. When a neural network is provided with a set of training data, it will be able to respond with the correct answer after a learning phase, at least within a given error margin. A neural network's generalization ability refers to the quality of its nonlinear approximation on new input data.
Backpropagation and radial basis function (RBF) networks are appropriate for the approximation of functions [2]. Backpropagation networks construct global approximations to the nonlinear input-output mapping, whereas RBF networks construct local approximations. Mathematically, the calibration process of an FPS is the determination of the nonlinear calibration function f, which describes the relationship between the image coordinate system, consisting of the camera pixels (i, j) and the phase value φ, and the object coordinate system (X, Y, Z), i.e.

$$(X,Y,Z)=f(i,j,\varphi)\qquad(1)$$
Experimental investigations of backpropagation networks gave poor results; therefore only RBF networks will be considered. RBF networks consist of one hidden layer and one output layer. The hidden layer consists of radial basis neurons. The output value $o_i$ of neuron i is

$$o_i=h\big(\lVert \mathbf{w}_i-\mathbf{x}\rVert\big)\qquad(2)$$

with h a radial basis function such as the Gaussian bell-shaped curve

$$h(r)=\exp(-\alpha r^2),\qquad \alpha>0\qquad(3)$$

and input vector x and weight vector $\mathbf{w}_i$. Because the Gaussian curve is a localized function, i.e. h(r) → 0 as r → ∞, each RBF neuron approximates locally. The parameter α determines the radius of the area in input space to which each neuron responds. The output layer is a linear mapping from the hidden space into the output space; each output neuron directly delivers an output value. The number of hidden neurons is normally much greater than the number of input signals. In the case of the calibration task, there are three input signals (image coordinates and phase) and three output signals (object coordinates).
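A minimal RBF sketch in the spirit of Eqs. (2)-(3), with the linear output layer solved by least squares; the toy 3-in/3-out mapping merely stands in for the calibration function f(i, j, φ) → (X, Y, Z), and all sizes are illustrative:

```python
import numpy as np

def rbf_fit(x_train, y_train, centers, alpha=1.0):
    """Fit the linear output layer of an RBF network whose hidden units
    are Gaussians h(r) = exp(-alpha * r**2), as in Eqs. (2)-(3)."""
    r2 = ((x_train[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    H = np.exp(-alpha * r2)                       # hidden-layer outputs
    W, *_ = np.linalg.lstsq(H, y_train, rcond=None)
    return W

def rbf_eval(x, centers, W, alpha=1.0):
    r2 = ((x[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    return np.exp(-alpha * r2) @ W                # linear output layer

# Toy stand-in for the calibration mapping f(i, j, phi) -> (X, Y, Z):
# learn a smooth nonlinear 3-D -> 3-D function from scattered samples.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 3))             # "image" coordinates
Y = np.stack([np.sin(X[:, 0]), X[:, 1]**2, X[:, 0] * X[:, 2]], axis=1)
centers = rng.uniform(-1, 1, size=(150, 3))       # many more hidden units than inputs
W = rbf_fit(X, Y, centers)
err = np.abs(rbf_eval(X, centers, W) - Y).max()
print("max training error:", err)
```

Solving only the output weights keeps the training linear, which is one reason RBF networks are attractive for this kind of calibration task.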
3 Simulation and results
With the VFPS it is possible to evaluate exclusively the error of an RBF network, since the VFPS allows the calibration method to be investigated under ideal conditions, independently of other influences [1]. The dimension of the measuring volume used in the VFPS is 120 × 90 × 40 (with the unit set to one). For the calibration process, 20 × 15 × 10 calibration points p (in the X, Y and Z directions), uniformly distributed in the measuring volume, are used. With this setup, the RBF network is calculated using the VFPS. For this purpose the phase value φ for all calibration points p
will be calculated first (via a direct projection onto the projector plane and calculation of the corresponding phase value at this point). Subsequently, the points p are projected onto the image plane of the camera, delivering the corresponding image coordinates (i, j). Now the calibration function f can be determined with the aid of all known values (i, j, φ) and (X, Y, Z). In order to determine the resulting error of the RBF network, 10,000 points are randomly generated in the measuring volume and "measured" with the VFPS. The corresponding object coordinates of these points are then calculated by means of the previously determined RBF network. The resulting standard deviation of the RBF network was 9.8×10⁻⁴. For the same configuration, a polynomial method (method C in [1]) yielded a standard deviation of 9.2×10⁻⁴. Figure 1 shows the 3D gray-coded error map of the RBF network. Three section planes through the measuring volume show the deviation of the calculated values from the correct values. To show border effects, the volume is extended by 10% in each direction compared to the calibration process. Figure 2 additionally shows a similar diagram for the polynomial method mentioned above.
Fig. 1. Gray coded 3D error map for the RBF network, three section planes with the measurement volume visible
Fig. 2. Gray coded 3D error map for the polynomial method, three section planes with the measurement volume visible
4 Conclusion
The RBF network and the previously investigated polynomial method produce comparable results. So far, no evident arguments can be found to prefer one of them as the calibration method. Nevertheless, further investigations have to show the effects of different disturbances (like noise) on the different methods.
5 Acknowledgment
The authors gratefully acknowledge the support of the DFG.
6 References 1. Böttner, T, Seewig, J (2004) “Black box” calibration methods investigated with a virtual fringe projection system, Proc. of SPIE, Optical Metrology in Production Engineering 5457 : 150-157 2. Haykin, S (1999) Neural Networks, Prentice Hall, 290-294.
Fringe contrast enhancement using an interpolation technique F.J. Cuevas, F. Mendoza Santoyo, G. Garnica and J. Rayas Centro de Investigaciones en Óptica, A.C., Loma del Bosque 115, Col. Lomas del Campestre, CP. 37150, León, Guanajuato, México J.H. Sossa Centro de Investigación en Computación, Av. Othón de Mendizabal s/n, Zacatenco México, D.F., México
1 Introduction
A fringe pattern can be modeled with the following mathematical expression:

$$I(x,y)=a(x,y)+b(x,y)\cos\!\big(\omega_x x+\omega_y y+\phi(x,y)\big),\qquad(1)$$

where x, y are the coordinates of a pixel in the interferogram or fringe image, a(x,y) is the background illumination, b(x,y) is the amplitude modulation, and φ(x,y) is the phase term related to the physical quantity being measured; ω_x and ω_y are the angular carrier frequencies in the x and y directions. The main goal in metrology tasks is to calculate the phase term, which is proportional to the physical quantity being measured. The phase term φ(x,y) can be approximated using the phase-shifting technique (PST) [1-5], which needs at least three phase-shifted interferograms. The phase shift among the interferograms must be controlled, so this technique can be used only when stable mechanical conditions are maintained throughout the interferometric experiment. The phase-shifting technique can also be affected by background illumination variations due to experimental conditions. When the stability conditions mentioned are not fulfilled and a carrier frequency can be added, there are alternative techniques to estimate the phase term from a single fringe pattern, such as the Fourier method [6,7], the synchronous method [8] and the phase locked loop (PLL) method [9], among others. Recently, techniques using regularization, neural networks, and genetic algorithms have been proposed by Cuevas et al. [10-18] to approximate the phase term from a single image. Phase demodulation errors arise when the analyzed interferogram has irradiance variations due to the background illumination and amplitude modulation (a(x,y) and b(x,y)). In real fringe metrology applications, it is common to capture low- and variable-contrast fringe images. These contrast problems are generated by the use of different optical components and light sources, such as lenses and lasers, and they complicate the phase calculation with the above-mentioned techniques. In this paper, we present a technique to enhance the contrast of a fringe pattern using spline interpolation [19] to obtain well-contrasted fringes. Two splines are fitted over the maxima and the minima of the fringe irradiance; these splines (a max function and a min function) are then used to interpolate and enhance the contrast of the intermediate points of the fringe pattern. Preliminary results are presented for the method applied to a degraded, computer-simulated fringe pattern.
2 Fringe enhancement using spline interpolation
The phase detection techniques for fringe patterns [1-17] can work adequately only if a well-contrasted fringe image is available. For this reason, a contrast enhancement or normalization process is required prior to the fringe demodulation procedure. This paper is concerned with solving this fringe normalization problem. The main idea is to fit two spline functions over the irradiance maxima and minima of the fringes to approximate the functions a(x,y) and b(x,y) in Eq. (1). Each point of the fringe pattern is then normalized using its respective maximum and minimum values on the splines. The procedure can be described in the following way: 1. For each line in the fringe pattern, the irradiance maxima and minima are calculated using the first and second derivatives. Then, two lists containing the maximum and minimum fringe peaks are generated for line x = xi: min(I(xi,y)) = {ymin0, ymin1, …, yminn}, where ymin0x
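A minimal numerical sketch of this per-line envelope normalization; linear interpolation (np.interp) stands in for the splines of the paper, peaks are found from sign changes of the first difference, and the test signal is illustrative:

```python
import numpy as np

def normalize_line(I):
    """Normalize one interferogram line to roughly [-1, 1] by interpolating
    envelopes through its local maxima and minima (linear interpolation is
    a stand-in for the splines used in the paper)."""
    d = np.diff(I)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    x = np.arange(I.size)
    upper = np.interp(x, maxima, I[maxima])      # estimate of a(x) + b(x)
    lower = np.interp(x, minima, I[minima])      # estimate of a(x) - b(x)
    a = 0.5 * (upper + lower)                    # background illumination
    b = np.maximum(0.5 * (upper - lower), 1e-9)  # amplitude modulation
    return (I - a) / b

# A degraded fringe line: linearly varying background and contrast.
x = np.arange(400)
line = (100.0 + 0.2 * x) + (30.0 + 0.1 * x) * np.cos(0.2 * x)
norm = normalize_line(line)
print("normalized range:", norm.min(), norm.max())
```

Dividing out the envelope removes both the background drift and the contrast variation, leaving a signal the demodulation techniques of Section 1 can handle.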
$$e_x^i=\frac{2\pi}{\lambda}\left[\frac{x_0-x}{\big[(x_0-x)^2+(y_0-y)^2+(z_0-z)^2\big]^{1/2}}-\frac{x-x_{si}}{\big[(x-x_{si})^2+(y-y_{si})^2+(z-z_{si})^2\big]^{1/2}}\right]\qquad(4a)$$

$$e_y^i=\frac{2\pi}{\lambda}\left[\frac{y_0-y}{\big[(x_0-x)^2+(y_0-y)^2+(z_0-z)^2\big]^{1/2}}-\frac{y-y_{si}}{\big[(x-x_{si})^2+(y-y_{si})^2+(z-z_{si})^2\big]^{1/2}}\right]\qquad(4b)$$

$$e_z^i=\frac{2\pi}{\lambda}\left[\frac{z_0-z}{\big[(x_0-x)^2+(y_0-y)^2+(z_0-z)^2\big]^{1/2}}-\frac{z-z_{si}}{\big[(x-x_{si})^2+(y-y_{si})^2+(z-z_{si})^2\big]^{1/2}}\right]\qquad(4c)$$

with i = 1…3, where P0 = (x0, y0, z0) is the observer position (CCD camera position), Psi = (xsi, ysi, zsi) is the illumination point, and P = (x, y, z) is a point on the specimen surface. Then we can define the sensitivity function for each component as

$$S_x^i=\frac{\big(e_x^i\big)^2}{\big|\vec e^{\,i}(P)\big|^2}\times 100,\qquad S_y^i=\frac{\big(e_y^i\big)^2}{\big|\vec e^{\,i}(P)\big|^2}\times 100,\qquad S_z^i=\frac{\big(e_z^i\big)^2}{\big|\vec e^{\,i}(P)\big|^2}\times 100.\qquad(5a\text{-}c)$$
2.2 Theoretical cases for three divergent beams
We analysed a first case with the source positions Ps1 = (17.45 cm, 0 cm, -166 cm), Ps2 = (-8.7 cm, 15.11 cm, -166 cm) and Ps3 = (-8.7 cm, -15.11 cm, -166 cm). The incidence angle of each illumination source is θi = 6°, and the angular separation between illumination sources is ω = 120°. In the second case, the source positions were Ps1 = (167 cm, 0 cm, 0 cm), Ps2 = (-83.5 cm, 145 cm, 0 cm) and Ps3 = (-83.5 cm, -145 cm, 0 cm); the incidence angle is θi = 90° and the angular separation between illumination sources is ω = 120°. Finally, the third case was Ps1 = (0 cm, 0 cm, -167 cm), Ps2 = (0 cm, 161.3 cm, -43.22 cm) and Ps3 = (161.3 cm, 0 cm, -43.22 cm). The incidence angle for Ps1 is θi = 0°, and it is 75° for the last two sources; the angular separation between Ps2 and Ps3 is ω = 90°. The functions for each component of the sensitivity vector according to Eq. (5) are presented in Fig. 1 for each source, for the last case only. The observer position (CCD camera position) is P0 = (0 cm, 0 cm, -82 cm) and the specimen surface is considered plane. Figure 1 shows that if a large incidence angle is chosen and the sources are located on the x and y axes, the sensitivity functions Sx and Sy increase, whereas the sensitivity function Sz increases when the source is located near the optical axis. Of the proposed geometries, the one described in the third case makes it possible to have maximum sensitivity in each of the three directions.
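The third-case geometry can be checked numerically. The sketch below assumes the common sign convention e = (2π/λ)(n_obs − n_ill) for Eqs. (4a-c) and an illustrative HeNe wavelength (neither is stated explicitly in this excerpt):

```python
import numpy as np

LAM = 633e-9  # wavelength in m (HeNe assumed; not stated in this excerpt)

def sensitivity(P, P0, Ps):
    """Sensitivity vector e = (2*pi/lambda)*(n_obs - n_ill) at surface point
    P for observer P0 and source Ps (Eqs. 4a-c, sign convention assumed),
    and the percentage components Sx, Sy, Sz of Eqs. 5a-c."""
    P, P0, Ps = map(np.asarray, (P, P0, Ps))
    n_obs = (P0 - P) / np.linalg.norm(P0 - P)   # unit vector: point -> camera
    n_ill = (P - Ps) / np.linalg.norm(P - Ps)   # unit vector: source -> point
    e = 2.0 * np.pi / LAM * (n_obs - n_ill)
    S = 100.0 * e**2 / (e @ e)
    return e, S

# Third-case geometry (cm): source 1 sits on the optical axis, camera at P0.
P0 = (0.0, 0.0, -82.0)
Ps1 = (0.0, 0.0, -167.0)
e, S = sensitivity((0.0, 0.0, 0.0), P0, Ps1)
print("Sx, Sy, Sz at the field centre for source 1:", S)
```

At the field centre, the on-axis source gives all of its sensitivity to the z (out-of-plane) component, which is the behaviour the text describes for Sz.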
3 Conclusion
Simple geometries of ESPI systems were discussed, showing the influence of the source positions on the sensitivity vector. It can be observed from the analyzed cases that, in an optical system using three illumination beams, an illumination source located near the optical axis (incidence angle near 0°) gives maximum sensitivity in the w-direction. A second source positioned on the y axis with an incidence angle near 90° gives maximum sensitivity in the v-direction, and a third source on the x axis with an incidence angle near 90° gives maximum sensitivity in the u-direction.
4 Acknowledgments
The authors wish to thank the partial financial support of the Consejo de Ciencia y Tecnología del Estado de Guanajuato. R. R. Cordero thanks the support of the MECESUP PUC/9903 project and the Vlaamse Interuniversitaire Raad (VLIR-ESPOL, Componente 6).
Fig. 1. Percentage of each of the sensitivity vector components for each source, for the optical set-up with three illuminating beams: S1 at Ps1 = (0 cm, 0 cm, -167 cm), S2 at Ps2 = (0 cm, 161.3 cm, -43.22 cm) and S3 at Ps3 = (161.3 cm, 0 cm, -43.22 cm).
References
1. Martínez, A., Rodríguez-Vera, R., Rayas, J. A., Puga, H. J. (2003) Fracture detection by grating moiré and in-plane ESPI techniques. Optics and Lasers in Engineering 39(5-6): 525-536
2. Timoshenko, S. P., Goodier, J. N. (1970) Theory of Elasticity, McGraw-Hill International Editions, Singapore; Chapter 1: Introduction: 1-14; Chapter 5: Photoelastic and Moiré Experimental Methods: 150-167
3. Kreis, T. (1996) Holographic Interferometry, W. Jüptner and W. Osten eds., Akademie Verlag, New York; Chapter 3: Holographic Interferometry: 65-74; Chapter 4: Quantitative Evaluation of the Interference Phase: 126-129; Chapter 5: Processing of the Interference Phase: 186-189
4. Takeda, M., Ina, H., Kobayashi, S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. Journal of the Optical Society of America 72(1): 156-160
5. Martínez, A., Rayas, J. A., Rodríguez-Vera, R., Puga, H. J. (2004) Three-dimensional deformation measurement from the combination of in-plane and out-of-plane electronic speckle pattern interferometers. Applied Optics 43(24): 4652-4658
Properties of phase shifting methods applied to time average interferometry of vibrating objects K. Patorski, A. Styk Warsaw University of Technology Institute of Micromechanics and Photonics
1 Introduction
Time average interferometry allows easy finding of resonant frequencies and their vibration mode patterns, irrespective of the frequency value. Temporal phase shifting (TPS) for automatic interferogram analysis supports the method with contrast and modulation calculations [1, 2]. The properties of the TPS method applied to two-beam interferogram modulation calculations are summarized here. The modulation changes are introduced by a sinusoidal vibration. Simulations are conducted for two experimental errors: the phase step error and average intensity changes of the TPS frames (the latter might be caused by light source power changes and/or the CCD matrix operating in auto mode). Noise-free and intensity-noise fringe patterns obtained under null-field and finite-fringe detection modes are studied. Exemplary calculation results and experimental resonant mode visualizations of silicon membranes are shown. Next, the influence of the mentioned errors on phase-shift histograms is addressed.
2 TPS algorithms
The following algorithms have been applied for two-beam interferogram intensity modulation simulations and calculations using experimental data: a) the classical four frame algorithm [3-5]; b) a modified four frame algorithm [6]; c) the five frame modulation algorithm [3-5]; d) the Larkin five frame algorithm [7]; e) the four frame algorithm 4N1 using the frame sequence (1,3,4,5) [8]; f) the four frame algorithm 4N2 using the frame sequence (1,2,3,5) [8]. The last two algorithms emphasize the phase step errors. The number of detector pixels with the same phase shift angle can serve as a source of information on TPS errors. Five frame histograms of α(x,y) have been calculated using the following equations:
the well-known five frame formula introduced by Schwider et al. 1983, Cheng and Wyant 1985, and Hariharan et al. 1987 [3-5]:

$$\alpha(x,y)=\arccos\!\left[\frac{I_5-I_1}{2\,(I_4-I_2)}\right]\qquad(1)$$

and the equations presented by Kreis [9]:

$$\alpha(x,y)=\arccos\!\left[\frac{I_1(I_3-I_4)-(I_2-I_3-I_4)(I_2-2I_3+I_4)+I_5(I_2-I_3)}{4\,(I_2-I_3)(I_3-I_4)}\right]\qquad(2)$$

$$\alpha(x,y)=\arccos\!\left[\frac{(I_2-I_4)\,(I_1-2I_2+2I_3-2I_4+I_5)}{(I_2-I_4)(I_1-2I_3+I_5)+(I_1-I_5)(I_2-2I_3+I_4)}\right]\qquad(3)$$

To see the influence of average intensity changes of the TPS frames, the lattice-site representation of the phase shift angles [10] was calculated as well.
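Equation (1) can be verified numerically: for ideal frames I_n = a + b·cos(φ + nα), n = −2…2, the identity (I5 − I1)/(2(I4 − I2)) = cos α holds wherever sin φ ≠ 0. A quick sketch (the frame model and values are illustrative):

```python
import numpy as np

# Frames I_n = a + b*cos(phi + n*alpha), n = -2..2; then
# (I5 - I1) / (2*(I4 - I2)) = cos(alpha) wherever sin(phi) != 0.
rng = np.random.default_rng(2)
phi = rng.uniform(0.3, np.pi - 0.3, size=1000)   # keep sin(phi) away from 0
alpha_true = 0.95 * np.pi / 2                    # a slightly miscalibrated 90 deg step
I1, I2, I3, I4, I5 = (7.0 + 3.0 * np.cos(phi + n * alpha_true)
                      for n in (-2, -1, 0, 1, 2))

alpha = np.arccos((I5 - I1) / (2.0 * (I4 - I2)))
print("recovered phase step:", alpha.mean(), " true:", alpha_true)
```

In practice Eq. (1) is evaluated per pixel, and the spread of the recovered α values over the detector is what the histograms of Section 4 visualize.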
3 Numerical simulations and experimental works Let us comment a general case of simultaneous presence of the phase step and different average intensity errors. Parasitic modulations caused by those errors depend on their particular combination, the interferogram contrast, and orientation of two-beam interference fringes with respect to their contrast change direction. Average intensity changes of TPS frames represent a crucial factor for true visualization and measurement of the vibration amplitude. They influence the location and minima values of dark Bessel fringes in the case of null field detection mode and when the carrier fringes are not parallel to their contrast change direction. Although null field detection provides best results, stringent experimental conditions must be met. The detection with carrier fringes parallel to their contrast change direction (if possible) is recommended. Five frame algorithms give better modulation reproduction than four frame ones. Figure 1 shows exemplary results for a membrane vibrating at 170 kHz. a)
b)
c)
Fig. 1. a) grey level representation of simulated modulation distribution; b) cross-sections along columns 1 and 91; and c) experimentally determined modulation map using algorithm 4N1. Square membrane vibrating at 170 kHz; estimated frame recording errors: įIav § 5% (relative average intensity error) and įĮc § -200 (quasi-linear phase step error).
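The "dark Bessel fringes" refer to the standard time-average result (not derived in this summary): for a sinusoidal vibration, the fringe modulation is proportional to |J0(m)|, where m is the vibration-induced phase amplitude, so dark fringes sit at the zeros of J0. A standard-library sketch using the integral representation of J0:

```python
import math

def j0(x, n=2000):
    """Zero-order Bessel function from its integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x*sin(t)) dt (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) / n

# Time-average fringe modulation |J0(m)| versus the phase-modulation
# amplitude m; the first dark fringe sits at the first zero of J0 (~2.405).
for m in (0.0, 1.0, 2.405, 4.0):
    print(f"m = {m:5.3f}   |J0(m)| = {abs(j0(m)):.4f}")
```

This is why shifts in the positions and minima of the dark fringes translate directly into errors in the recovered vibration amplitude.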
4 Phase shift angle histograms
Before the histogram and lattice-site representation calculations, the component TPS frames should be noise-preprocessed, because high frequency intensity noise influences the bias and modulation of the interferogram intensity distribution [1-5]. For that purpose spin filtering was used [11]. The following conclusions have been drawn from the calculated histograms:
- Conventional and lattice-site phase shift angle representations provide limited information on average intensity changes, except for some cases related to the minimum and maximum intensity values of the first and fifth frame. Sharp asymmetries and/or quasi-central dips are found in the phase shift histograms; they correspond to vertical displacements of the characteristic quasi-elliptical patterns in the lattice-site representations.
- Clear dips appear in five frame histograms when I1 and I5 represent the Imin and Imax average intensity values, or vice versa. Sharp asymmetries are found in the histograms when I1 or I5 is the Imin or Imax value; in those cases the lattice-site patterns shift vertically as well.
- For I1 > I5 the lattice-site pattern shifts upwards; for I1 < I5 it shifts downwards.
- Lattice-site representations calculated from Eqs. (2) and (3) are much more irregular and generally give a different average shift angle αc (the most populated α value or dip location) than the widely used, shortest histogram equation (1). The reason might be the much longer forms of Eqs. (2) and (3).
Fig. 2. Upper row: histograms calculated using Eqs. (1) (left), (2) (center) and (3) (right); bottom row: lattice-site representations. Circular membrane vibrating at 833 kHz; average intensities of the TPS frames: 75.3, 68.3, 68.3, 69.1 and 67.1.
5 Acknowledgements This work was supported, in part, by the Ministry of Scientific Research and Information Technology grant No. 3T10C 001 27 and statutory funds.
6 References
1. Patorski, K, Sienicki, Z, Styk, A, (2005) Phase-shifting method contrast calculations in time-averaged interferometry: error analysis, Optical Engineering 44, in press
2. Patorski, K, Styk, A, (2005) Interferogram intensity modulation calculations using temporal phase shifting: error analysis, Proc. SPIE 5856, in press
3. Schwider, J, (1990) Advanced evaluation techniques in interferometry, Chap. 4 in Progress in Optics, ed. Wolf, E, 271-359, Elsevier, New York
4. Greivenkamp, J.E, Bruning, J.H, (1992) Phase shifting interferometry, Chap. 14 in Optical Shop Testing, ed. Malacara, D, 501-598, John Wiley & Sons, New York
5. Creath, K, (1994) Phase-shifting holographic interferometry, Chap. 5 in Holographic Interferometry, ed. Rastogi, P.K, 109-150, Springer-Verlag, Berlin
6. Schwider, J, Falkenstorfer, O, Schreiber, H, Zoller, A, Streibl, N, (1993) New compensating four-phase algorithm for phase-shift interferometry, Optical Engineering 32: 1883-1885
7. Larkin, K.G, (1996) Efficient nonlinear algorithm for envelope detection in white light interferometry, Journal of the Optical Society of America A 13: 832-843
8. Joenathan, C, (1994) Phase-measuring interferometry: new methods and error analysis, Applied Optics 33: 4147-4155
9. Kreis, T, (1996) Holographic Interferometry: Principles and Methods, Akademie Verlag, Berlin
10. Gutmann, B, Weber, H, (1998) Phase-shifter calibration and error detection in phase-shifting applications: a new method, Applied Optics 37: 7624-7631
11. Yu, Q, Liu, X, (1994) New spin filters for interferometric fringe patterns and grating patterns, Applied Optics 33: 3705-3711
Depth-resolved displacement measurement using Tilt Scanning Speckle Interferometry

Pablo D. Ruiz and Jonathan M. Huntley

Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Ashby Road, Loughborough, Leicestershire, LE11 3TU, United Kingdom
1 Tilt Scanning Interferometry

The first demonstrations of depth-resolved displacement field measurement have been presented recently. In those based on low-coherence interferometry (LCI) [1, 2], the system is sensitive only to the movement of scattering points lying within the coherence-gate slice selected by the reference mirror position. Wavelength scanning interferometry (WSI) systems decouple the depth resolution from the displacement sensitivity, and also appear to possess some additional practical advantages over LCI, the most important being an improved signal-to-noise ratio [3, 4]. In this paper we present a different approach to measuring depth-resolved displacements within semi-transparent materials, based on tilting the illumination angle during the acquisition of image sequences (Fig. 1). This provides the depth-dependent phase shifts that allow the reconstruction of the object structure and its internal displacements. In a proof-of-principle experiment, a collimated beam is steered by a mirror mounted on a tilting stage controlled by a ramp generator. An imaging system captures the interference between the scattered light that propagates nearly normal to the object surface and a smooth-wavefront reference beam R. The time-varying interference signal is recorded throughout the whole tilt scanning sequence. The test object was a beam manufactured in-house from clear cast epoxy resin seeded with a small amount of titanium oxide white pigment to increase the scattering within the material. In a three-point bending test, the beam was loaded with a ball-tip micrometer against two cylindrical rods, as shown in Fig. 2(a).
Fig. 1. By continuously tilting the illumination beam, depth-dependent Doppler shifts f1 and f2, corresponding to slices S1 and S2, are introduced in the time-varying interference signal.
Fourier transformation of the resulting 3D intensity distribution along the time axis reconstructs the scattering potential (magnitude spectrum) and the optical phase within the medium. Repeating the measurements with the object wave at equal and opposite angles about the observation direction resulted in two 3D phase-change volumes, the sum of which gave the out-of-plane-sensitive phase volume and the difference of which gave the in-plane phase volume. From these phase-change volumes the in-plane and out-of-plane depth-resolved displacement fields are obtained. A reference surface was placed just in front of the object and served to compensate for shifts of the spectral peaks along the horizontal axis x (see Fig. 2(a)) due to non-linearity of the tilting stage. The main measurement parameters were set as follows: camera exposure time Texp = 0.1397 s; framing rate FR = 7.16 fps; acquired frames Nf = 480; acquisition time T = Nf × Texp = 68.6 s; spatial resolution of the field of view (FOV): 256×256 pixels; size of FOV: 7.2×7.2 mm²; tilt-angle scanning range Δθ = 0.0048 rad; illumination angle θ = 45°; material refractive index n1 = 1.4; laser wavelength λ = 532 nm; laser power per beam: ~35 mW CW; loading-pin displacement: 40 μm along the z axis. It can be shown that the depth resolution of the system is:
δz = γλ / (n0 ξ Δθ)    (1)
where n0 is the refractive index of the medium surrounding the object (air) and ξ is a constant that depends on the material index of refraction and the illumination angle θ. Depending on the windowing function used to
Fig. 2. Experimental results: (a) Schematic view of an epoxy resin beam under 3-point bending load; (b) In-plane (top row) and out-of-plane (bottom row) wrapped phase-change distributions for different slices within the beam. Black represents −π and white +π. One fringe spacing is equivalent to ~0.38 μm and ~0.15 μm for in-plane and out-of-plane sensitivity, respectively.
evaluate the Fourier transform, γ = 2 or 4 for rectangular or Hanning windows, respectively. In our experiment n0 = 1 and γ = 4, and therefore the depth resolution was δz ~ 1.1 mm. The top row of Fig. 2(b) shows the in-plane (x axis) phase-change distribution for different slices within the epoxy resin beam, starting at the object surface z = 0 mm (left) in steps of −1.74 mm down to z = −5.22 mm (right). The out-of-plane phase-change distributions for the same depth slices are shown in the bottom row of Fig. 2(b). These phase maps have been corrected for the refraction due to the slightly bent surface of the beam. It can be seen that the gradient of the in-plane displacements is reversed as we move from the front to the back surface of the beam. This indicates a tensile state for the first front slices and a compressive state for the slice behind the neutral axis at z = −3.8 mm. A nearly flat phase distribution is obtained for the slice at the neutral axis (third column), as would
be expected. The out-of-plane displacements show different levels of bending as we approach the back surface from the front surface. The asymmetry of the distribution is produced by the position of the point of contact between the loading pin and the beam, which was ~2mm below the horizontal symmetry axis of the beam. The last slice at z = -5.22 mm starts to reveal detail of the local deformation around the point of contact. The reference surface can be seen in Fig. 2(b) at the bottom of each wrapped phase distribution. These results compare well with finite element simulations.
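As a quick numerical check, Eq. (1) can be evaluated with the experimental parameters quoted above. The value of ξ used below is an assumption, back-computed so that the result matches the reported δz ~ 1.1 mm; the paper does not state ξ explicitly here.

```python
def depth_resolution(gamma, wavelength, n0, xi, dtheta):
    """Eq. (1): depth resolution of the tilt scanning interferometer."""
    return gamma * wavelength / (n0 * xi * dtheta)

# Experimental parameters; xi = 0.4 is a hypothetical value chosen so the
# result reproduces the reported ~1.1 mm (not a value given in the paper).
dz = depth_resolution(gamma=4,            # Hanning window
                      wavelength=532e-9,  # laser wavelength, m
                      n0=1.0,             # surrounding medium (air)
                      xi=0.4,             # assumed
                      dtheta=0.0048)      # tilt scanning range, rad
print(f"depth resolution = {dz * 1e3:.2f} mm")
```

With these numbers the formula gives δz ≈ 1.1 mm, consistent with the value quoted in the text.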
2 Conclusion

Promising results were achieved by means of a novel technique, which we call Tilt Scanning Interferometry (TSI), to measure 3D depth-resolved displacement fields within semi-transparent scattering materials. A depth resolution δz ~ 1.1 mm was achieved for a tilting range of 0.0048 rad using a home-made tilting stage. By means of TSI, the scattering potential within the sample can be reconstructed in a 3D data volume, as in scanning optical coherence tomography. Most importantly, in-plane and out-of-plane displacements can be measured within the object under study with a sensitivity of σz ~ λ/30 (decoupled from the depth resolution) and up to a depth of ~6 mm with our simple system.
3 References
1. Gülker G, Kraft A (2003) Low-coherence ESPI in the investigation of ancient terracotta warriors. Speckle Metrology 2003, Trondheim, Norway, Proc. SPIE 4933: 53-58
2. Gastinger K, Winther S, Hinsch KD (2003) Low-coherence speckle interferometer (LCSI) for characterization of adhesion in adhesive-bonded joints. Speckle Metrology 2003, Trondheim, Norway, Proc. SPIE 4933: 59-65
3. Ruiz PD, Zhou Y, Huntley JM, Wildman RD (2004) Depth-resolved whole-field displacement measurement using wavelength scanning interferometry. Journal of Optics A: Pure and Applied Optics 6: 679-683
4. Ruiz PD, Huntley JM, Wildman RD (2005) Depth-resolved whole-field displacement measurement using wavelength scanning electronic speckle pattern interferometry. Applied Optics (in press)
New Phase Unwrapping Strategy for Rapid and Dense 3D Data Acquisition in Structured Light Approach

G Sai Siva and L Kameswara Rao

Department of Instrumentation, Indian Institute of Science, Bangalore-12, India
Abstract

The sinusoidal structured light projection (SSLP) technique, specifically the phase stepping method, is in widespread use to obtain accurate, dense 3D data. But if the object under investigation possesses surface discontinuities, the phase unwrapping stage (an intermediate step in SSLP) requires several additional images of the object with projected fringes of different spatial frequencies as input to generate a reliable 3D shape. On the other hand, the color-coded structured light projection (CSLP) technique is known to require a single image as input, but generates only sparse 3D data. We therefore propose the use of CSLP in conjunction with SSLP to obtain dense 3D data with a minimum number of input images. This approach is shown to be significantly faster and more reliable than the temporal phase unwrapping procedure that uses a complete exponential sequence. For example, for a measurement with the accuracy obtained by interrogating the object with 32 fringes in the projected pattern, the new strategy requires only 5 frames, compared to 24 frames required by the latter method.

Keywords: Structured light projection; shape measurement; phase stepping; phase unwrapping; color-coding; surface discontinuities
1 Introduction

The measurement of surface shape by use of projected structured light patterns is a well-developed technique. In particular, SSLP techniques have
been extensively used as they can give accurate and dense 3D data. The procedure involves projecting a pattern onto the object from an offset angle and recording the image of the pattern, which is phase modulated by the topographical variations of the object surface. An automated analysis is then carried out to extract the phase from the deformed fringe pattern, mostly using either the FFT [1] or phase stepping [2] method, both of which produce a wrapped phase distribution. The reconstruction of the surface profile of objects with inherent surface discontinuities, or with spatially isolated regions, is usually a difficult problem for standard phase unwrapping techniques. To overcome this problem, several phase unwrapping strategies have been developed [3][4][5]. All of them mandatorily require multiple phase maps generated by varying the spatial frequency of the projected fringe pattern either linearly or exponentially. Further, the degree of reliability varies from method to method. A different class of structured light projection techniques relies upon color-coded projection. These are capable of extracting 3D data from a single image; different color-coding strategies can be seen in [6],[7]. However, they can give only sparse 3D data. In the following sections we suggest an approach for obtaining dense 3D data of objects, even with surface discontinuities, while requiring a minimum number of input images compared to any of the contemporary phase unwrapping algorithms.
2 Method

The first step of profiling objects in the proposed method involves the generation of a wrapped phase map using the four-frame phase shifting algorithm. The fundamental concept of the phase stepping method, described in detail elsewhere [2], is only briefly reviewed here.

2.1 Phase Stepping Algorithm
When a sinusoidal fringe pattern is projected onto a 3D diffuse object, the deformed fringe pattern may be expressed in the general form

I(x, y) = a(x, y) + b(x, y) cos φ(x, y)    (1)
where a(x, y) and b(x, y) represent unwanted irradiance variations arising from the non-uniform light reflection by the test object. The phase function
φ(x, y) characterizes the fringe deformation and is related to the object shape h(x, y). The principal task is to obtain φ(x, y) from the measured fringe-pattern intensity distribution. Upon shifting the original projected fringe pattern by a fraction 1/N of its period P, the phase of the pattern represented by Eq. (1) is shifted by 2π/N. Using four images, φ(x, y) can be retrieved independently of the other parameters in Eq. (1):

φ(x, y) = arctan[(I4 − I2)/(I1 − I3)]    (2)

2.2 Phase Unwrapping
The object phase calculated according to Eq. (2) is wrapped into the range −π to π. The true phase of the object is

φun(x, y) = φ(x, y) + 2πn(x, y)    (3)

where n(x, y) is an integer. Unwrapping is simply the process of determining n(x, y). A conventional spatial phase unwrapping algorithm searches for locations of phase jumps in the wrapped phase distribution and adds or subtracts 2π to bring the relative phase between two neighboring pixels into the range −π to π. Thus, irrespective of the actual value of n(x, y) to be evaluated, such algorithms always assign ±1 and thereby fail to reliably unwrap phase maps when profiling objects with surface discontinuities. In order to determine n(x, y) we introduce the following procedure:

2.3 New Unwrapping Procedure
In this new approach, an additional image of the object, captured under illumination by a color-coded pattern, is used for calculating n(x, y). A color-coded pattern is generated in a computer using MATLAB and projected with the help of an LCD projector. The generated color-coded pattern comprises an array of rectangular bands, each band identified uniquely by its color and arranged in a specific sequence, as shown in Fig. 1. This pattern is projected onto the reference plane and its image (Cr(i,j)) is recorded. A non-planar real object distorts the projected structured pattern in accordance with its topographical variations (Co(i,j)). We thus know a priori the color expected to be returned by every point of an ideally planar object surface. Deviations in the detected color at any point on the non-planar object surface essentially correspond to local height deviations. Therefore, from the knowledge of the observed color (Co) and expected color (Cr), the height deviation at every point on the object surface can be expressed in terms of the difference of their band indices (m), as explained in [8]. If the width of every band (w) is made equal to the pitch of the gray-scale fringe pattern used in phase stepping, then m can be directly related to n(x, y) in Eq. (3). This is the basis for determining n(x, y) unambiguously. The procedure for extracting the necessary information from the deformed color-coded pattern and exploiting it to determine n(x, y) is presented in [8].
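The pipeline of Eqs. (1)-(3) can be sketched numerically: four π/2-stepped frames are simulated for a known phase profile, the wrapped phase is recovered with the four-frame formula, and the fringe-order map n(x, y) restores the true phase. In this sketch n is taken from the ground truth; in the proposed method it would come from the color-band index m. All values are illustrative.

```python
import numpy as np

# Simulate four pi/2-stepped fringe images (Eq. 1) for a known phase profile.
x = np.linspace(0.0, 1.0, 256)
phi_true = 6 * np.pi * x                 # spans several fringe orders
a, b = 100.0, 50.0                       # background and modulation (arbitrary)
I1, I2, I3, I4 = (a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4))

# Eq. (2): wrapped phase in (-pi, pi]; arctan2 handles the quadrants
phi_wrapped = np.arctan2(I4 - I2, I1 - I3)

# Eq. (3): unwrap with the fringe-order map n(x, y). Here n comes from the
# ground truth; in the method it is derived from the color-band index m.
n = np.round((phi_true - phi_wrapped) / (2 * np.pi))
phi_unwrapped = phi_wrapped + 2 * np.pi * n
```

Note that the wrapped phase alone is ambiguous wherever the object phase exceeds one fringe order; only the externally supplied n(x, y) makes the reconstruction unique.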
Fig. 1. Generated color-coded pattern
3 Experimental Results
Fig. 2. (a) Fringe pattern and (b) color-coded pattern on the surface of an object with two step discontinuities; (c) wrapped phase map obtained with phase stepping; (d) phase map after unwrapping with the help of Fig. 2(b)
It is impossible to unwrap the phase map in Fig. 2(a) correctly by conventional spatial methods because the phase jumps at the steps are too large (more than 2π). Even though it is impossible to determine the exact number of fringes shifted at each step height from the gray-scale fringe pattern (Fig. 2(a)) alone, the color-coded pattern on the object surface clearly reveals this information, as can be seen from Fig. 2(b).
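This failure mode can be reproduced in one dimension: when a genuine step between neighbouring samples exceeds π, a spatial unwrapper has no way to recover the lost multiple of 2π. A minimal sketch using NumPy's `unwrap`, with illustrative values:

```python
import numpy as np

# A profile with a genuine 2.5*pi phase step between adjacent samples.
true_phase = np.concatenate([np.zeros(50), np.full(50, 2.5 * np.pi)])
wrapped = np.angle(np.exp(1j * true_phase))     # the step wraps to 0.5*pi

# Spatial unwrapping assumes neighbour-to-neighbour jumps below pi, so it
# leaves the 0.5*pi jump untouched: the result is off by exactly 2*pi.
recovered = np.unwrap(wrapped)
error = true_phase[-1] - recovered[-1]          # one lost fringe order
```

The recovered profile ends at 0.5π instead of 2.5π: the discontinuity is silently under-counted, which is precisely the ambiguity the color-coded pattern resolves.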
4 Conclusions

The proposed approach, which combines CSLP and SSLP in a specific way, results in a new and more powerful method for generating rapid and dense 3D data. It is shown to be significantly faster and more reliable than temporal phase unwrapping using a complete exponential sequence; compared to the latter, the reduction in both image acquisition and analysis times by the factor [N(log2 S + 1)]/(N + 1) is an important advantage of the present approach (N: number of frames used in phase stepping; S: number of fringes in the projected pattern).
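The frame-count factor can be made concrete; for N = 4 phase steps and S = 32 fringes it reproduces the 24-versus-5 comparison given in the abstract:

```python
import math

def frames_exponential(N, S):
    # Temporal unwrapping with a complete exponential sequence:
    # N phase-stepped frames at each of log2(S) + 1 fringe densities.
    return N * (int(math.log2(S)) + 1)

def frames_proposed(N):
    # N phase-stepped frames plus a single color-coded frame.
    return N + 1

print(frames_exponential(4, 32), frames_proposed(4))  # 24 5
```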
5 References
[1] Takeda M et al. (1982) J. Opt. Soc. Am. 72(1): 156-160
[2] Srinivasan V et al. (1984) Applied Optics 23(18): 3105-3108
[3] Zhao H et al. (1994) Applied Optics 33: 4497-4500
[4] Huntley JM, Saldner HO (1997) Meas. Sci. Technol. 8: 986-992
[5] Zhang H et al. (1999) Applied Optics 38(16): 3534-3541
[6] Liu W et al. (2000) Applied Optics 39(20): 3504-3508
[7] Zhang L et al. (2002) Proc. of the 1st Int. Symp. on 3DPVT: 24-36
[8] Sai Siva G et al. (2005) Proc. SPIE 5856, Paper 78
Determination of modulation and background intensity by uncalibrated temporal phase stepping in a two-bucket spatially phase stepped speckle interferometer

Peter A.A.M. Somers and Nandini Bhattacharya

Optics Research Group, Delft University of Technology, Lorentzweg 1, NL-2628 CJ Delft, The Netherlands
1 Abstract

A new phase stepping method is presented, based on the combination of spatial and temporal phase stepping. The method comprises one fixed spatial phase step of π/2 and two arbitrary, unknown temporal phase steps. The method prevents phase errors caused by phase changes during temporal phase stepping. It is therefore particularly useful for dynamic applications, but will also improve system performance for quasi-static applications in non-ideal environments.
2 Introduction

Optical path differences between two interfering beams can be calculated modulo 2π by applying a phase stepping method. In general, three or four interferograms are involved in the calculation of phase; mostly multiples of π/2 or 2π/3 are applied as a phase step [1]. Phase stepped interferometers can be subdivided into two classes:
- temporally phase stepped systems
- spatially phase stepped systems
For systems in the first class, phase steps are applied sequentially, in general by a physical change of the optical path length in one of the interfering beams, for instance by displacing a mirror by means of a piezo element. After each phase step a new interferogram is acquired, and after the desired number of interferograms is obtained, phase is calculated. This approach has the advantage that all interferograms are acquired with one single camera. The disadvantage is that the object of interest, or the medium between the object and the interferometer, may have changed between two exposures, which will lead to errors in the calculation of phase. As a result this approach is not appropriate, without adaptation, for measurements of dynamic events. In addition, piezo elements suffer from hysteresis, drift, and non-linear behaviour, which is a disadvantage with respect to calibration of the phase stepping procedure. In systems of the second class, phase stepping can be realized by introducing a phase difference for adjacent pixels by applying an oblique reference beam. In general two to four pixels are involved. This method is known as spatial phase stepping [1]. The advantage of this approach is that the information necessary to calculate phase is present in a single image, representing one particular state of the object. A disadvantage is that a speckle should be large enough to cover the three or four adjacent pixels involved, which limits the efficient use of available light. An alternative method for spatial phase stepping is also based on simultaneous acquisition of two or more phase stepped interferograms. This can be implemented by dividing the interfering beams over two or more optical branches that each have a fixed phase step with respect to each other. Such a system has been realized recently: a shearing speckle interferometer with two optical branches allowing simultaneous acquisition of two phase stepped interferograms [2]. The phase step is π/2. The intensities I1 and I2 of the two phase stepped interferograms that can be acquired simultaneously with this system are:

I1 = IB + IM cos(φ)    (1)
I2 = IB + IM sin(φ)    (2)
where IB and IM are the background and modulation intensities, respectively, and φ is the phase. The phase step in Eq. 2 is −π/2. Phase φ can be calculated by Eq. 3, which can be derived from Eqs. 1 and 2:
φ = arctan[(I2 − IB)/(I1 − IB)]    (3)
In Eq. 3 the modulation intensity IM is eliminated, but the unknown background intensity IB is still present. In the next section a method will be presented that resolves IB, after which phase φ can be calculated.
3 Quadrature phase stepping

When three π/2 spatially phase stepped interferogram pairs are taken, each pair with an additional temporal phase step, this combination of spatial and temporal phase stepping yields six equations with five unknowns. For each pair the π/2 phase step is fixed; the temporal phase steps are assumed to be unknown. IB and IM are assumed not to have changed during temporal phase stepping, a requirement also to be met for conventional phase stepping.

I1 = IB + IM cos(φ1), I2 = IB + IM sin(φ1)    (4),(5)
I3 = IB + IM cos(φ2), I4 = IB + IM sin(φ2)    (6),(7)
I5 = IB + IM cos(φ3), I6 = IB + IM sin(φ3)    (8),(9)

We are particularly interested in φ1, the phase angle that represents the initial state of the object. After a change, a second set consisting of three pairs of π/2 phase stepped interferograms is taken, and another phase result, representative of the changed state of the object, is obtained. The phase change caused by the event can now be calculated modulo 2π by taking the difference between the initial and final phase. The method can be illustrated graphically by a parametric presentation of the three intensity pairs (Fig. 1). On the horizontal axis the intensities I1, I3, and I5, given by the cosine equations, are plotted. On the vertical axis the intensities given by the sine equations are plotted: I2, I4, and I6. The three points representing the three intensity pairs all lie on a circle with a radius of IM. The location of the centre of the circle is given by IB. Both IB and IM can be calculated with known geometrical methods.
Fig. 1. Parametric presentation of three pairs of π/2 spatially phase stepped interferograms. The temporal phase step between pairs is arbitrary. Intensity data belong to a single pixel.
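One possible form of the geometrical step can be sketched as follows: the three (cosine, sine) intensity pairs are three points on a circle with centre (IB, IB) and radius IM, so a circumcentre computation resolves both unknowns, after which Eq. (3) (in its quadrant-safe atan2 form) gives φ1. The pixel values below are simulated for illustration; the circumcentre route is one of several geometrical solutions, not necessarily the one used by the authors.

```python
import math

def circumcenter(p1, p2, p3):
    """Centre of the circle through three points, via the standard
    perpendicular-bisector intersection formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy

# Simulated single-pixel data: IB, IM and the phases are unknown to the method.
IB, IM = 120.0, 40.0
phases = [0.7, 1.9, 3.4]             # phi1 plus two arbitrary temporal steps
pairs = [(IB + IM * math.cos(p), IB + IM * math.sin(p)) for p in phases]

cx, cy = circumcenter(*pairs)        # both coordinates estimate IB
IM_est = math.hypot(pairs[0][0] - cx, pairs[0][1] - cy)
phi1 = math.atan2(pairs[0][1] - cy, pairs[0][0] - cx)   # Eq. (3), IB resolved
```

Because the temporal steps enter only as the angular positions of the three points on the circle, their values never need to be known, which is the core of the method's robustness.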
It is clear that the values of φ2 and φ3 can be arbitrary, so it is not necessary to calibrate the temporal phase steps. These steps can even be unknown, which allows the object, or the medium between the object and the interferometer, to change during phase stepping. As a result the method is very robust for quasi-static applications and particularly appropriate for dynamic applications. The method requires three temporally phase stepped pairs of π/2 spatially phase stepped interferograms, but can be extended to four or more.
4 Conclusions

A new phase stepping method has been presented, based on the combination of spatial and temporal phase stepping. The spatial phase step is fixed at π/2; the temporal phase steps are arbitrary and need not be known. At least three temporally phase stepped pairs of π/2 spatially phase stepped interferograms are required. The proposed method is very robust for quasi-static applications, and is particularly appropriate for dynamic applications, since the object is allowed to change during phase stepping.
5 Acknowledgements This research was supported by the Technology Foundation STW, the Applied Science Division of NWO and the Technology Programme of the Ministry of Economic Affairs.
6 References
1. Dorrio BV, Fernandez JL (1999) Phase-evaluation methods in whole-field optical measurement techniques. Measurement Science & Technology 10: R33-R55
2. Somers PAAM, van Brug H (2001) A single camera, dual image real-time phase-stepped shearing speckle interferometer. In: Osten W, Jüptner W (eds) Proceedings Fringe 2001, Elsevier, pp 573-580
SESSION 2
Resolution Enhanced Technologies

Chairs:
Katherine Creath, Tucson (USA)
Peter J. de Groot, Middlefield (USA)
Invited Paper
EUVA's challenges toward 0.1nm accuracy in EUV at-wavelength interferometry

Katsumi Sugisaki, Masanobu Hasegawa, Masashi Okada, Zhu Yucong, Katsura Otaki, Zhiqiang Liu, Mikihiko Ishii, Jun Kawakami, Katsuhiko Murakami, Jun Saito, Seima Kato, Chidane Ouchi, Akinori Ohkubo, Yoshiyuki Sekine, Takayuki Hasegawa, Akiyoshi Suzuki, Masahito Niibe* and Mitsuo Takeda**

The Extreme Ultraviolet Lithography System Development Association (EUVA), 3-23 Kanda Nishiki-cho, Chiyoda-ku, Tokyo, 101-0054, Japan
*University of Hyogo, 3-1-2 Kouto, Kamigori-cho, Ako-gun, Hyogo, 678-1205, Japan
**University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, 182-8585, Japan
1 Introduction

Extreme ultraviolet (EUV) lithography, using radiation at a wavelength of 13.5 nm, is a next-generation technology for fabricating fine device patterns below 32 nm in size. The wavefront tolerance of the projection optics used in EUV lithography is required to be less than λ/30 RMS, corresponding to 0.45 nm RMS. Wavefront metrologies are used to fabricate these optics. In the EUV region, the optics use multilayer-coated mirrors. In general, visible and ultraviolet interferometries are also applicable to evaluation of the optics. However, the wavefront measured with visible/ultraviolet light differs from that measured with EUV light due to the effect of the multilayer. Figure 1 shows the wavefront difference between wavelengths of 266 nm and 13.5 nm. Therefore, wavefront metrology at the operating wavelength (at-wavelength) is essential for developing such optics. Study of EUV wavefront metrology began in the early 1990s at Lawrence Berkeley National Laboratory [1]. A Japanese research project was started at the Association of Super-Advanced Electronics Technologies (ASET) in 1999; the Extreme Ultraviolet Lithography System Development Association (EUVA) has taken over this project.
Fig. 1. Wavefront difference between measurement wavelengths of λ = 266 nm and λ = 13.5 nm: 11.5 mλEUV RMS (gray scale from −0.0254 to 0.0344 λEUV).
The final goal of EUVA is to build the EUV Wavefront Metrology System (EWMS) by March 2006, which will evaluate six-mirror projection optics of NA 0.25 for mass-production EUV exposure tools. In order to develop metrological techniques for evaluating such optics with ultra-high accuracy, we have built an Experimental EUV Interferometer (EEI) at the NewSUBARU [2]. Using the EEI, six different metrological methods can be tested on the same test optic to determine the most suitable measurement methods. The six methods are point diffraction interferometry (PDI), line diffraction interferometry (LDI), lateral shearing interferometry (LSI), slit-type lateral shearing interferometry (SLSI), double-grating lateral shearing interferometry (DLSI) and cross-grating lateral shearing interferometry (CGLSI). The EEI works well and the six types of interferograms were successfully obtained. In this paper, we present our recent results, including a comparison among the six metrological methods, analyses of error factors, developments of calibration methods for achieving high accuracy, and systematic error evaluations as part of the assessment of the accuracy.
2 Interferometric methods and analyses

Figure 2 shows a schematic diagram of the EEI. The test optic is a Schwarzschild optic of NA 0.2. A coherent EUV beam from the long undulator of the NewSUBARU enters from the left of the figure. The beam is focused onto the 1st pinhole mask by a Schwarzschild-type illuminator. The EEI has five piezo-stages for precise alignment of optical
components such as the pinhole masks and gratings. Each mask and grating carries many kinds of patterns; by exchanging these patterns, we can easily change the type of the testing interferometer.
Fig. 2. Schematic diagram of the EEI: the EUV beam from the NewSUBARU undulator enters a vacuum chamber containing the illuminator (NA 0.01), gratings, pinhole/window masks, the Schwarzschild test optic (NA 0.2, magnification 1/20) and a CCD camera, all mounted on a vibration isolator.
2.1 PDI and LDI
The left-side schematic in Fig. 3 shows the PDI [3] and the LDI. The PDI uses a 650 nm pinhole. The pinhole generates an aberration-free spherical wavefront. The spherical wavefront is divided into the 0th and ±1st-order diffracted waves by a binary grating. These waves pass through the test optic and arrive at a 2nd pinhole mask, which has a small pinhole and a large window. The 0th-order wave passes through the small pinhole and is converted into a spherical wave again. One of the 1st-order diffracted waves goes through the large window, carrying the aberration information of the test optic. These two waves interfere and the interference fringes are observed by a CCD camera. In the LDI, the pinholes of the PDI are replaced by slits in order to increase the number of detectable photons. This is one solution that compensates for the degradation of the S/N ratio of the PDI for high-NA optics. Because the LDI utilizes diffraction by slits instead of pinholes, only one-dimensional information about the wavefront can be obtained.
In order to obtain two-dimensional data, two sets of measurements with perpendicular diffracting slits are required.
Fig. 3. Principles of the PDI, the LDI, the LSI, the SLSI and the DLSI.
2.2 LSI and SLSI
The middle schematic in Fig. 3 shows the LSI [4] and the SLSI. The 1st pinhole is illuminated by the EUV radiation. An aberration-free spherical wavefront is generated by diffraction at the 1st pinhole. The aberration-free wave goes through the optic under test. The wave passing through the optic is aberrated and is diffracted by a binary grating. An order-selection mask is placed at the image plane of the test optic. The mask has two large windows, which act as spatial filters. Only the ±1st-order diffracted waves can pass through the windows in the mask; the 0th and higher order diffracted waves are blocked. By using the order-selection mask for spatial filtering, noise is reduced and measurement precision is improved. The ±1st-order diffracted waves, which carry the aberration information of the test optic, interfere with each other and the interference fringes are observed by the CCD camera. By shifting the grating laterally, phase shifting measurement is achieved for high sensitivity. In the SLSI, the 1st pinhole of the LSI is replaced by a slit in order to increase the number of detectable photons. From the viewpoint of the quantity of light, the SLSI therefore has an advantage over the LSI. Because the SLSI utilizes diffraction by the slit instead of the pinhole, only one-dimensional information about the wavefront can be obtained. In order to obtain two-dimensional data, two sets of measurements with perpendicular diffraction directions are required, the same as for the LSI.

2.3 DLSI
The DLSI is a type of shearing interferometry [5]. The DLSI uses two gratings, placed before the object and image planes of the test optic, in conjugate positions of the optic under test. The illuminating EUV wave is divided into the 0th-order wave and the ±1st-order diffracted waves by the first binary grating. A two-window mask is placed at the object plane of the test optic. The 0th-order and +1st-order waves diffracted by the first grating are selected by this mask. The two waves pass through the test optic in mutually laterally shifted positions and are diffracted again by the second grating. Since the two gratings are placed in conjugate positions, the first grating is imaged onto the second grating. Therefore, the 0th-order and +1st-order waves diffracted by the first grating overlap completely and aberrations in the illuminating beam cancel out. The second mask, with a large window, selects two waves: the 2nd grating's 0th order of the 1st grating's +1st order, and the 2nd grating's −1st order of the 1st grating's 0th order. These two waves, having passed through the second mask, interfere, and the interference fringes are observed by the CCD camera. By shifting one of the gratings laterally, phase shifting measurement is achieved.

2.4 CGLSI
The CGLSI is based on the Digital Talbot Interferometer (DTI) [6][7] and the EUV lateral shearing interferometer (LSI). We use a crossed grating to divide and shear the beam passing through the test optic. One of its features is the use of order-selecting windows. The optical layout of the CGLSI is shown in Fig. 4. Two configurations of the CGLSI are available. The aberration-free spherical wavefront generated by an object pinhole passes through the test optic and is diffracted by the cross-grating located before the image plane. In the window-less type of CGLSI, with the cross-grating set at the Talbot plane, the retrieved image of the cross-grating is observed by the CCD camera as an interferogram. In this case, the interferogram is deformed by the aberrations of the test optic. In the 4-window type of CGLSI, four windows are set at the image plane and work as a spatial filter that blocks undesired orders of diffracted light. The four first-order diffracted beams (±1st order in the X direction, ±1st order in the Y direction) interfere on the CCD camera and form an interferogram. This interferogram is also deformed by the aberration of the test optic.
Fig. 4. Principle of the CGLSI.
The Fourier transform method (FTM) was applied to retrieve the differential wavefronts. Figure 5 shows the wavefront retrieval process of the CGLSI. First, applying a two-dimensional Fourier transform, we obtain the spatial-frequency spectrum of the interferogram. Second, we set a spectral band-pass filter around the carrier-frequency domain that corresponds to the pitch of the interferogram. After shifting the selected carrier-frequency spectrum to zero frequency and executing an inverse Fourier transform, we obtain a differential wavefront. This process is applied to the two spectra corresponding to the x- and y-differential wavefronts, as shown in Fig. 5. The phases of the two complex amplitude maps correspond to the differential wavefronts in the x-direction and the y-direction, respectively. The differential Zernike polynomial fitting method [8] was applied to retrieve the wavefront in the CGLSI; annular Zernike polynomials are used in the process. The phase shifting method and the FTM each have advantages and disadvantages. The phase shifting method is mainly influenced by factors that vary in the time domain, for example, light-intensity changes and vibrations of the system. The FTM is mainly influenced by factors that vary in the spatial domain, for example, the light intensity distribution of the interferogram and the light cross-talk among the different diffraction orders.
Fig. 5. Wavefront retrieval process of the CGLSI.
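The FTM steps described above can be sketched numerically. The interferogram below is synthetic, and the carrier frequency and filter width are illustrative choices, not values from the actual instrument.

```python
import numpy as np

# Synthetic carrier-fringe interferogram: I = 1 + cos(2*pi*f0*x + dW),
# where dW is the differential wavefront we want to recover.
N = 256
y, x = np.mgrid[0:N, 0:N] / N
f0 = 32                                           # carrier frequency (cycles/aperture)
dW = 1.5 * np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)  # assumed wavefront (rad)
I = 1 + np.cos(2 * np.pi * f0 * x + dW)

# 1) two-dimensional Fourier transform of the interferogram
S = np.fft.fftshift(np.fft.fft2(I))

# 2) band-pass filter around the +1 carrier lobe
fx = np.fft.fftshift(np.fft.fftfreq(N, d=1 / N))
FX, FY = np.meshgrid(fx, fx)
mask = (np.abs(FX - f0) < f0 / 2) & (np.abs(FY) < f0 / 2)

# 3) shift the selected lobe to zero frequency and inverse-transform
S_shifted = np.roll(S * mask, -f0, axis=1)
c = np.fft.ifft2(np.fft.ifftshift(S_shifted))

# 4) the phase of the complex amplitude map is the differential wavefront
dW_rec = np.angle(c)   # wrapped to (-pi, pi]; here |dW| < pi, so no unwrapping needed
err = np.abs(dW_rec - dW).max()
```

Applying the same steps to the second carrier lobe (along y) yields the y-differential wavefront.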
2.5 Comparison among six kinds of interferometers
Figure 6 shows the comparison results for five of the six interferometers, namely the PDI, the LDI, the CGLSI, the LSI, and the DLSI. We did not succeed in reconstructing the wavefront of the SLSI because of the non-uniformity of its interferogram. In the LSI, the low-order astigmatism was not correctly measured. In the DLSI, Z5, Z6, Z8, and Z9 show values different from those obtained by the other methods; it seems that the aberration compensation of the illuminator in the DLSI is not completely achieved. The wavefronts obtained by the PDI, the LDI, and the CGLSI agree well. Two points are noteworthy. First, the LDI uses two interferograms for wavefront retrieval, whereas the PDI and the CGLSI each use one; the agreement shows that the composition of the two LDI wavefronts succeeded well. Second, the wavefront from the point-diffraction method and the wavefront from the shearing method show good agreement. These three methods are considered candidates for installation in the EWMS.
Fig. 6. Comparison of Zernike coefficients of six kinds of interferometers.
3 Error factors and calibration methods

3.1 Error factors
Since the wavelength of the EUV is extremely short, noise such as shot noise and electronic noise is less significant than in conventional interferometers using visible or ultraviolet light. Instead, the major errors are induced by the geometrical configuration of the optical components. Figure 7 shows the major error factors of the PDI and the LSIs. The geometrical errors can be calibrated using calculated data based on accurate measurements of the system configuration. However, accurately measuring the configuration is a hard task; therefore, another calibration method is required. In the following sections, we describe the error factors in EUV wavefront metrology.
Fig. 7. Major error factors in EUV interferometry: diffraction effects of the grating, beam alignment to the initial and reference pinholes, aberration leakage through the pinholes, flare, and errors induced by the detector arrangement.
3.1.1 Errors in spherical wavefront emitted from the pinhole
Ideally, the pinholes are expected to eliminate all aberrations contained in the illuminating beam and to generate perfect spherical wavefronts. In practice, the pinhole cannot eliminate the aberrations perfectly. For example, a small amount of astigmatism passes through the pinhole easily. In addition, the pinhole substrate cannot completely block the beam outside the pinhole; that is, a weak wavefront carrying the aberrations of the illuminating beam penetrates the pinhole substrate. Although more aberrations can be eliminated using a smaller pinhole, the light intensity emitted from the pinhole also becomes small compared to the light transmitted through the pinhole substrate. In addition, misalignment of the beam illuminating the pinhole deforms the spherical wavefront, and fluctuations of the illuminating beam cause intensity variations. Therefore, the conditions related to the pinholes must be controlled carefully. The accuracy of spherical wavefronts generated by pinholes is discussed in other literature.[9-11]

3.1.2 Errors generated in the grating
When a converging or diverging beam is divided by a planar grating, the diffracted beam contains diffraction aberrations, caused by the diffraction angle varying with changes in the incident angle. The major diffraction aberration is coma. In addition, astigmatism is induced by grating tilt. Since the grating tilt is difficult to determine, another calibration method is required.
The grating generates not only the required diffraction orders but also unwanted ones. In particular, a transmission phase grating cannot be fabricated for the EUV region, and the unwanted diffraction orders act as noise in the interferogram.

3.1.3 Errors induced by the geometrical configuration of the detector
Interference fringes generated by two separated point sources are hyperbolically curved. The curved fringes cause coma, which is termed "hyperbolic coma." Similar to the grating, a detector tilt also induces astigmatism. These errors are reviewed in another paper.[11]

3.1.4 Errors due to flare
EUV radiation is scattered much more strongly than visible and ultraviolet radiation because of its extremely short wavelength. The scattered radiation is observed as flare. The flare of one of the interfering beams, including the unwanted diffraction orders, overlaps the other beam as noise. The flare effect is reviewed as a factor hindering measurements in another paper.[12]

3.2 Calibration methods
Calibration is essential for achieving accurate measurements. Therefore, we have been developing calibration techniques continuously.

3.2.1 Absolute PDI
The absolute PDI is the calibrated PDI.[12] Figure 8 shows its principle. The absolute PDI uses two measurements. The first is carried out using the standard PDI configuration with a pinhole-window mask (Fig. 8 (a)). The second is carried out with a window-window mask (Fig. 8 (b)) using the same diffraction orders of the grating; it serves as calibration data for the systematic error of the interferometer. Both measurements contain the same systematic errors, including the diffraction aberrations and the geometrical aberrations. Therefore, subtracting the first measurement from the second, the result is
the direct comparison between the wavefront aberration of the test optic and the ideal wavefront diffracted from the small pinhole. (In Fig. 8, T denotes the wavefront of the test optic, D the diffraction aberrations, and G the geometrical aberrations.)
Fig. 8. Principle of the absolute PDI.
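The calibration step of the absolute PDI can be sketched as follows. All wavefront shapes are invented for illustration, and the reading that the window-window measurement contains only the common systematic errors follows the description above.

```python
import numpy as np

# Hypothetical wavefront maps (nm) on a 64x64 pupil grid, for illustration only.
n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
T = 0.8 * (2 * (x**2 + y**2) - 1)   # wavefront of the test optic (defocus-like)
D = 0.3 * x * (x**2 + y**2)         # diffraction aberrations of the grating
G = 0.2 * (x**2 - y**2)             # geometrical (configuration) aberrations

# (a) pinhole-window mask: test wavefront plus systematic errors
W_a = T + D + G
# (b) window-window mask: both beams carry the test wavefront, so it cancels
#     and only the common systematic errors remain
W_b = D + G

# Subtracting the calibration measurement isolates the test-optic wavefront
T_cal = W_a - W_b
```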
3.2.2 Calibrating grating aberrations
For LSIs, calibrating the diffraction aberrations is important, because these aberrations are large: the grating is inserted into a beam of large NA. We therefore developed a calibration method for LSIs.[14] Figure 9 shows its principle. The method uses two measurements: one is a shearing measurement using the 0th- and +1st-order beams, the other uses the -1st- and 0th-order beams. The second measurement is shifted by the shear amount. After shifting the second measurement, we subtract the differential wavefront generated by the -1st and 0th orders from that generated by the 0th and +1st orders. The wavefronts of the test optic cancel out, and the diffraction aberrations are derived from the resulting difference.
(Fig. 9 labels: O(x, y), wavefront of the test optic; D1(x, y), aberration in the +1st and -1st diffraction orders.)
Fig. 9. Principle of the calibration method for LSI's.
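A one-dimensional sketch of this shift-and-subtract calibration, with invented wavefront and aberration shapes and the simplifying assumption that the +1st and -1st orders carry the same aberration:

```python
import numpy as np

n, s = 256, 8                    # grid size and shear in samples (illustrative)
x = np.linspace(-1, 1, n)
W = 0.5 * x**3                   # hypothetical test-optic wavefront
D1 = 0.1 * x**2                  # hypothetical first-order diffraction aberration

# Shear measurement with 0th and +1st orders:  W(x+s) + D(x) - W(x)
S_plus = (np.roll(W, -s) + D1) - W
# Shear measurement with -1st and 0th orders:  W(x) - (W(x-s) + D(x))
S_minus = W - (np.roll(W, s) + D1)

# Shift the second measurement by the shear amount, then subtract; the
# test-optic wavefront cancels and only aberration terms remain:
# diff(x) = D(x) + D(x + s)
diff = S_plus - np.roll(S_minus, -s)
```

(`np.roll` wraps at the array ends, so only the interior samples are meaningful, as in a real aperture where the sheared copies only partially overlap.)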
3.2.3 Removing flare effect
The flare can be removed by temporal and spatial filtering. Three methods have been proposed to cancel the flare effect. The first is referred to as the "dual-domain" method,[15] a hybrid of spatial filtering and phase shifting (temporal-domain filtering). The second is a specialized algorithm for phase-shifting analysis,[4] which exploits the different angular velocities of the phase of the true signal and of the noise; it is applied to the LSI using the ±1st diffraction orders. The third method uses the FTM and averages measurements with different initial phases.[16]
4 Systematic error evaluation

In order to assess the accuracy of our interferometer, we have evaluated a part of its systematic error.[17] Analysing the error is expected to help identify the error source, which is important for achieving high accuracy. The assessed method is the absolute PDI.

4.1 Evaluation method
This method is based on the absolute measurement.[18] In the absolute measurement, the wavefront of the test optic is measured, whereas in the systematic error evaluation the errors of the measuring system are measured. When the test optic is rotated, its wavefront rotates with it, but the errors of the measuring system do not. Therefore, rotating
and non-rotating components can be separated by measurements taken before and after rotation of the test optic. Figure 11 shows the principle of the systematic error derivation. First, the wavefront of the test optic is measured in its normal orientation. The measured wavefront Wm consists of the wavefront of the test optic and the systematic error Ws of the metrology. Then the test optic is rotated and the wavefront is measured again; this measured wavefront is returned to the original orientation numerically. Subtracting the wavefront measured after rotation (Wm0,α) from that measured before rotation (Wm0,0), we obtain the difference between the systematic errors at the two orientations; the test wavefronts are eliminated. The systematic error is derived from this difference. Note that the rotationally symmetric components and the nθ components of the systematic error cannot be distinguished, because these components are unchanged by the rotation. The number n is determined by 2π/α, where α is the rotation angle.
Fig. 11. Systematic error evaluation method. The systematic error is derived from the difference between the measurements before and after rotating the test optic.
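A minimal numerical sketch of the rotation method, using an invented test wavefront and a systematic error composed of 2θ and 4θ parts. It illustrates why the 90-degree rotation doubles the 2θ component of the systematic error in the difference, while the 4θ component (like any rotationally symmetric part) drops out.

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r, t = np.hypot(x, y), np.arctan2(y, x)

T = 0.5 * r**2 * np.cos(t)         # hypothetical test-optic wavefront
S2 = 0.2 * r**2 * np.cos(2 * t)    # systematic error, 2-theta component
S4 = 0.1 * r**2 * np.cos(4 * t)    # systematic error, 4-theta component
S = S2 + S4

Wm_0 = T + S                       # measurement at the normal orientation
Wm_90 = np.rot90(T) + S            # optic rotated by 90 deg, system unchanged
Wm_90_back = np.rot90(Wm_90, k=-1) # rotate the measurement back numerically
                                   # = T + rotated(S): the test wavefront is restored

diff = Wm_0 - Wm_90_back           # test wavefront cancels: diff = S - rotated(S)
# The 4-theta term is invariant under a 90-degree rotation, so it cancels in
# diff; the 2-theta term changes sign, so it survives doubled: diff = 2 * S2.
```

This is also why the 120-degree rotation is added in the experiment: it recovers the 4θ components that the 90-degree difference cannot see.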
With a 90-degree rotation, the rotationally symmetric and the 4θ components of the systematic error cannot be obtained. Therefore, we also rotated the optic by 120 degrees. With the 120-degree rotation, the 3θ components cannot be obtained, but the 4θ components can. We therefore obtained the rotationally asymmetric error by combining the 90-degree and 120-degree rotation measurements.

4.2 Result
We measured the wavefront at four different orientations of the test optic. Figure 12 shows the measured wavefronts. The RMS values range from 1.26 to 1.30 nm, and the measured wavefronts show good repeatability.
Fig. 12. Measured wavefronts at four orientations of the test optic: (a) 0°, 1.26 nm RMS; (b) 90°, 1.30 nm RMS; (c) 180°, 1.27 nm RMS; (d) 120°, 1.28 nm RMS.
Figure 13 shows the annular Zernike coefficients of the derived systematic errors. We calculated three systematic errors using three pairs of wavefronts: 0 and 90 degrees, 90 and 180 degrees, and 0 and 120 degrees. The RMS values of the systematic errors range from 0.075 to 0.086 nm, corresponding to about λ/170. The evaluated systematic errors are quite small compared to the wavefront of the test optic.
A Ronchi-Shearing Interferometer for compaction test at a wavelength of 193nm Irina Harder, Johannes Schwider, Norbert Lindlein Institute of Optics, Information and Photonics, University Erlangen-Nuernberg
Staudtstr. 7/B2, 91058 Erlangen Germany email:
[email protected] 1 Introduction A common problem in UV lithography systems is the deterioration of the optical components made of fused silica due to long time exposure with high energy radiation. One effect is colour centre formation, another one is the compaction or rarefaction of the fused silica [1]. The compaction or rarefaction leads on the one hand to stress induced birefringence as the structure of the fused silica is changed. On the other hand the refractive index is changed due to the change of the density. Although the change of the refractive index and the stress induced birefringence are very weak effects, they contribute nevertheless to the aberrations of a lithographic objective since long path lengths in fused silica are quite common. Also in case of the measurement long path lengths have to be used in order to achieve measurable aberrations. Therefore polished cubes of fused silica with a length of several hundred mm are used for the examination of this effect. The cubes are exposed for several months to a UV-laser beam of 0.2mJ/cm²-10mJ/cm² [2]. This leads to a deteriorated volume along the beam path with a diameter of about 2mm-3mm. For the measurement of the effect there are several methods used. The in situ measurement analyzes mainly the laser induced fluorescence [3], or the Raman scattering [3], or the absorption of the exposure beam [4], or the changes of its polarization [1]. After exposure the stress induced birefringence [2] or the wave front distortion due to the compaction or expansion of the material [5] are analyzed in a setup which is normally based on a HeNe-Laser and therefore the measuring wavelength is O=633nm. The measured wave front distortions are between 1nm/cm and 8nm/cm for this wavelength. How-
276
Resolution Enhanced Technologies
ever, the interesting wavelength domain for the measurement remains the DUV because the working wavelengths are situated there. We setup a Ronchi shearing interferometer with a working wavelength of O=193nm for compaction measurement. The interferometer is based on an ArF-Excimer Laser although this is a light source having poor spatial coherence. For a first test of the setup small samples of fused silica with a diameter of 1” and a thickness of 5mm are used. The samples were structured by reactive ion etching so that a flat step on the surface in the centre of the sample is achieved. The diameter of the step is 2mm and has the shape of a Yin-Yang (B) sign
2 Shearing interferometry with Ronchi phase gratings

2.1 Lateral shearing with Ronchi phase gratings
Basically, lateral shearing interferometry uses two copies of a wavefront which are laterally sheared with respect to one another. Interference fringes are only observed in the overlap region of the two shifted copies. The measured wavefront difference ΔW is then the difference of the two sheared wavefront copies W(x+½Δs, y) and W(x-½Δs, y) if the shear Δs is applied in the x-direction. For compaction measurement one has to deal with local disturbances on an otherwise undisturbed area, so a total separation of the two sheared disturbances can be achieved. In this case the wavefront error can be seen directly in the unwrapped image of the measured wavefront, with both positive and negative sign with respect to the undisturbed wavefront. Several setups are known for doubling the wavefront and applying the shear [6]. The setup used in this paper is based on two identical Ronchi phase gratings as the shearing unit. The first grating acts as a diffractive beam splitter. As a Ronchi grating is defined to have a duty cycle of 1:1, all even orders are suppressed. Apart from the zeroth order, only the two first diffraction orders receive appreciable intensity. In the case of a pure (opaque) Ronchi grating, the diffraction efficiency for each of the two first orders is 10.1%. The efficiency and the total intensity are increased by using a phase-only Ronchi grating: if the height of the phase structure is chosen to be λ/(2(n-1)), where n is the refractive index of the substrate material, the zeroth order is also suppressed. The efficiencies of the remaining orders of a phase-only Ronchi grating are shown in Tab. 1. If the two first diffraction orders are used as the two wavefront copies, a common-path interferometer with symmetric paths for the two interfering beams is achieved, as long as the impinging wave is nearly perpendicular to the plane of the
first Ronchi grating. An identical second Ronchi grating is placed parallel behind the first grating and simply parallelizes the two first orders [7] by introducing similar diffraction angles. The spacing between the two Ronchi gratings results in a lateral displacement of the two first orders. The shear is given by the distance a between the gratings and the diffraction angle α:

Δs = 2a tan(α) ≈ 2aλ/p,   (1)
where p is the period of the grating.

Table 1. Efficiency of a Ronchi phase grating

Order:      0th    1st     2nd    3rd    4th    5th
Efficiency: 0.00%  40.53%  0.00%  4.50%  0.00%  1.62%
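The entries of Tab. 1 and the shear of Eq. (1) can be checked with a few lines. The odd-order efficiency of an ideal π-phase Ronchi grating, (2/(mπ))², is a standard identity assumed here rather than taken from the paper.

```python
import numpy as np

def ronchi_phase_efficiency(m):
    """Diffraction efficiency of an ideal phase-only Ronchi grating
    (duty cycle 1:1, phase depth pi): the 0th and all even orders vanish,
    odd order m carries (2 / (m * pi))**2."""
    if m % 2 == 0:
        return 0.0
    return (2 / (m * np.pi)) ** 2

for m in range(6):
    print(f"order {m}: {100 * ronchi_phase_efficiency(m):.2f}%")

# Shear produced by two gratings separated by a (Eq. 1): ds = 2 a tan(alpha)
lam, p, a = 193e-9, 5e-6, 65e-3      # wavelength, grating period, grating spacing
alpha = np.arcsin(lam / p)           # first-order diffraction angle (~2.2 deg)
ds = 2 * a * np.tan(alpha)
print(f"shear = {1e3 * ds:.2f} mm")
```

With the values used later in the paper (p=5µm, a=65mm), Eq. (1) gives a shear of about 5.0mm, close to the quoted 5.06mm; the small difference presumably comes from rounding of the diffraction angle.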
2.2 Reduced coherence
Simple evaluation procedures for shearing interferograms require a total separation of the local features on a flat background. The common exposure setup for compaction tests produces disturbances with a size of some mm. To achieve a total separation of the disturbance on the two wavefront copies, shears of several mm with respect to the object plane are needed. This places high requirements on the coherence of the light source. In comparison to a laser light source like the HeNe laser, which is normally used for interferometric measurements, the excimer laser has both poor temporal and poor spatial coherence. The bandwidth is specified by the manufacturer to be 0.44nm at a wavelength of 193nm. This results in a coherence length of 85µm. As this shearing setup is a totally symmetric common-path interferometer for perpendicularly impinging waves, the temporal coherence has no effect on the visibility of the interference fringes. In addition, the excimer laser emits a spatial multi-mode pattern due to the short dwell time in the laser resonator. The resulting reduced spatial coherence limits the maximum possible shear, as the distance between two interfering wavefront points has to be smaller than the coherence area of the light source. The coherence area is defined by the first zero crossing of the absolute value of the complex coherence factor |µ12(Δs)|, which is a cross-correlation of two interfering wavefront points 1 and 2 with a spacing of Δs. The distribution of the complex coherence factor in a specified
plane can be calculated from the intensity distribution of the light source by using the Van Cittert-Zernike theorem. For a simple rectangular aperture with a diameter of d=1mm, the complex coherence factor is a sinc function with only a small central peak. Its width results in a maximum possible shear of Δs=0.306mm if a collimation lens with a focal length of f=650mm is used. Obviously this shear is too small to achieve a total separation of the disturbances we want to measure.

2.3 Use of a periodic light source
The Van Cittert-Zernike theorem can be used to tailor the spatial coherence, so that larger shears become possible. If a periodic intensity distribution is introduced as the light source instead of a homogeneous luminous area, the complex coherence factor will also show several peaks, like the diffraction image of a grating. This implies that the contrast of the interference fringes will return when the shear corresponds to the spacing of the first two orders of the complex coherence factor. The concept of using a periodic light source for a white-light shearing interferometer was introduced in [8] and systematically discussed in [9]. The tolerance for the shear is again given by the width of the peaks and therefore by the number of luminous periods. An easy way to realize a periodic light source is to illuminate an opaque grating with incoherent light. As the duty cycle of the grating influences the height of the peaks of the complex coherence factor, the contrast can be increased by narrowing the slits of the grating. Unfortunately, this also reduces the total intensity, so one has to compromise between the desired contrast and the needed intensity. We use a chromium grating on a fused-silica substrate with a duty cycle of 1:4. This leads to a theoretically possible contrast of 87% and a reduced intensity of 0.2 I0. The period of the grating is p=50µm, so the contrast returns for a shear of Δs=5.02mm in the plane of the sample under test (again with the collimation lens of f=650mm). In addition to enabling large shears, disturbing interference fringes are suppressed: only those which meet the condition of an optical path difference of Δs remain visible, so disturbances in the interference image are reduced.
3 Experimental setup

In the experimental setup, the remaining spatial coherence of the laser has to be destroyed to achieve an incoherent illumination of the periodic light source (see Fig. 1). Therefore a defocused spot on a rotating scatterer is imaged onto the light-source grating. One might argue that a pulsed laser delivers a stationary speckle pattern, but successive pulses produce differing speckle patterns, and in the measuring process an average over 100 pulses is used. The slits of the light-source grating must be parallel to the grooves of the two shearing gratings to achieve maximum contrast, so it must be possible to rotate and tilt the light-source grating.
Fig. 1. Experimental setup
The light is collimated by a lens made of fused silica with a focal length of f=650mm. The sample is then imaged by a telescopic setup onto the CCD (Quantix 1401E, Photometrics). The scaling is 1:5, as the CCD chip has a size of 7mm x 9mm. The lateral resolution is limited by the pixel size of the CCD, which is 6.8µm. As the DUV region suffers from a lack of optical materials, mainly fused silica and CaF2 are used for the fabrication of optical components like lenses. To reduce the aberrations, the collimation lens and the imaging lenses in this setup are best-form lenses made of fused silica; objectives made of several lenses would be too expensive in this case. The shearing unit consists of two Ronchi phase gratings made of fused silica with a period of 5µm and a surface-structure height of 170nm, placed in front of the first imaging lens. The theoretical height is 172nm for a wavelength of λ=193nm. Due to errors during the development of the structure, the duty cycle is not 1:1 over the whole grating. This leads to a small remaining second order, which is filtered by the aperture of the second imaging lens like the other higher orders. The diffraction angle is 2.2°.
The grating spacing must therefore be 65mm to achieve a shear of Δs=5.06mm. For phase-shifting purposes, the first grating is mounted on a piezo which moves perpendicular to the grooves of the grating. The wavefront under test is measured by five-frame phase-shifting interferometry [10]. It is well known that statistical error sources like air turbulence or intensity fluctuations during the measurement cause errors which cannot be eliminated by software afterwards. To avoid phase fluctuations due to air turbulence, the whole setup is shielded. Statistical intensity fluctuations cannot be eliminated in such an easy way; they originate mainly from the excimer laser and to some extent from inhomogeneities of the rotating scatterer. The shearing interferogram offers the possibility to eliminate fluctuations of the total intensity, as the impinging intensity can be monitored simultaneously with the measurement: the shifted areas where no interference occurs are also captured during the measurement (Fig. 2). Afterwards, every measured intensity frame is normalized by the sum of the intensity in the monitoring area with respect to the first frame.
Fig. 2. CCD image of the sheared wave front of the sample under test
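Reference [10] covers standard phase-shifting algorithms; a common five-frame (Hariharan-type) variant with π/2 phase steps, assumed here since the paper does not spell out the formula, looks like this on synthetic data:

```python
import numpy as np

# Five interferograms with pi/2 phase steps between frames;
# Hariharan five-frame formula: phi = atan2(2*(I2 - I4), 2*I3 - I1 - I5)
phi_true = np.linspace(-np.pi / 2, np.pi / 2, 100)   # synthetic test phase
a, b = 1.0, 0.6                                      # background and modulation
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(-2, 3)]
I1, I2, I3, I4, I5 = frames

phi = np.arctan2(2 * (I2 - I4), 2 * I3 - I1 - I5)    # recovered wrapped phase
```

The background a and modulation b drop out of the ratio, which is what makes the intensity normalization described above sufficient.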
4 First results

In Fig. 3a the unwrapped phase of the structured fused-silica sample is shown. The desired phase step nearly vanishes among the wavefront errors of the sample itself and of the optical setup. To achieve a useful sensitivity for small steps, those errors have to be reduced. First of all, a linear polynomial was fitted to the phase to eliminate the tilt of the measurement (Fig. 3b).
To get rid of the unwanted wavefront errors, the sample under test had already been measured before the structuring. Fig. 3c shows the unwrapped phase of the empty sample, corrected by a linear polynomial. This measurement was then subtracted from the tilt-corrected measurement of the structured sample (Fig. 3b). The resulting phase profile is shown in Fig. 3d. The background is now smaller than the desired phase steps, provided the repositioning of the sample after structuring is done well. The required precision for the repositioning must laterally be smaller than one pixel on the CCD chip, which in our case means 34µm in the sample plane. The double optical path difference of the disturbed area with respect to the undisturbed area can now be measured by determining the difference between the two sheared disturbances. For this sample we measured 2·OPD=0.13λ, which corresponds to a step height of 22nm. Unfortunately, the setup still suffers from instabilities of the light source, as can be seen from the statistical values of Fig. 3d. Some effort is still needed to stabilize the setup and so increase the certainty of the measurement.
Fig. 3. First measurement of a small phase step (a); a linear polynomial was fitted (b) and the measurement of the empty sample before structuring (c) was subtracted (d). The resulting phase difference between the sheared phase-step images was 2·OPD=0.13λ.
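The quoted step height can be checked from the measured optical path difference. The refractive index of fused silica at 193nm is an assumed literature value (about 1.56), not a number from the paper.

```python
# Step height from the measured OPD of a transmission sample: OPD = (n - 1) * h
lam = 193e-9
opd = 0.13 * lam / 2       # measured: 2*OPD = 0.13 lambda
n = 1.561                  # fused-silica index at 193 nm (assumed literature value)
h = opd / (n - 1)
print(f"step height = {1e9 * h:.0f} nm")   # ~22 nm, as quoted above
```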
5 References

1. Schenker R. et al. (1994) Deep ultraviolet damage of fused silica, J. Vac. Sci. Technol. B 12(6): 3275-3279
2. Liberman V. et al. (1999) Excimer-laser-induced densification of fused silica: laser-fluence and material-grade effects on scaling law, J. Non-Crystalline Solids 244: 159-171
3. Muehling Ch. et al. (2000) In situ diagnostics of pulse laser-induced defects in DUV transparent fused silica glasses, Nucl. Instr. and Meth. in Phys. Res. B 166/167: 698-703
4. Stafast H. et al. (2004) Vakuum-UV-Spektroskopie an synthetischem Quarzglas unter UV-Pulslaserbestrahlung [Vacuum-UV spectroscopy of synthetic fused silica under UV pulsed-laser irradiation], DGaO-Proceedings 2004
5. Kuehn B. et al. (2003) Compaction vs. expansion behaviour related to the OH-content of synthetic fused silica under prolonged UV-laser irradiation, J. Non-Crystalline Solids 330: 23-32
6. Malacara D. (1978) Optical Shop Testing, Wiley & Sons, New York, 105-148
7. Schreiber H. et al. (1997) A lateral shearing interferometer based on two Ronchi gratings in series, Appl. Opt. 36(22): 5321
8. Schwider J. (1984) Continuous lateral shearing interferometer, Appl. Opt. 23(23): 4403-4409
9. Wyant J.C. (1974) White light extended source shearing interferometer, Appl. Opt. 13: 200
10. Robinson D., Reid G.T. (1993) Interferogram Analysis, IOP Publishing, Bristol/Philadelphia, 94-140
Simulation and error budget for high precision interferometry

Johannes Zellner, Bernd Dörband, Heiko Feldmann
Carl Zeiss SMT AG, Carl-Zeiss-Str. 22, 73447 Oberkochen, Germany
1 Introduction

In interferometric measurements the total wavefront includes both the wavefront from the sample and the wavefront due to the experimental setup. The latter can be removed from the total wavefront by first recording a calibration without the sample. The desired wavefront of the sample is then obtained by subtracting the calibration wavefront from the total wavefront of the measurement. The drawback of this method is that it does not cope with drift errors, i.e. errors due to small changes in the experimental setup between calibration and measurement. We will show that these drift errors can be calculated by a computer simulation which takes the real experimental setup into account.
2 Targets

The drift errors depend both on the magnitude of the drift and on the experimental setup, including the wavefront contributions of the interferometer components. These contributions include the design wavefront of each component and the wavefront contributions due to manufacturing errors. Setting up an error budget for the individual interferometer components is one of the main targets of the interferometer simulation. This way, critical components can be spotted easily and their tolerance budget set up accurately. On the other hand, the interferometer simulation helps to relax tolerances for non-critical components and therefore helps to save production costs. As the drift errors depend both on the wavefront contributions of the individual interferometer components and on the drifts, another target of the interferometer simulation is to set up an error budget for the acceptable drifts. Last but not least, as the interferometer simulation determines the measurement errors for a given interferometric setup including all real-world disturbances such as manufacturing errors and drift errors, it allows qualifying the precision of the total setup.
3 Simulation of the experimental setup

For the sake of generality, our computational tool allows in principle the simulation of arbitrary interferometric setups. For the following considerations we use a Fizeau-interferometric setup which consists of: a point-like illuminating source, a tilted beam splitter, a collimator, a tilted Fizeau plate for generating the reference wave, a sample (for the simulations, e.g. a flat mirror), an "eyepiece", and a CCD camera which records the interferogram.

Fig. 1. Sketch of a simple Fizeau interferometer as used for the simulations (not to scale), consisting of source, beam splitter, collimator, Fizeau surface, sample, eyepiece, and CCD camera. The rays of the reference wave (reflected from the Fizeau surface) are drawn in light grey. The angle of the rays of the reference wave is greatly exaggerated.
Generally, there will be more components than those shown in Fig. 1 (like mirrors for example), which were left out for the sake of simplicity.
3.1 Wavefronts propagating differently through the system
The reference wave is reflected by the Fizeau surface, the back side of the Fizeau plate; the sample wave is reflected by the sample. Both waves interfere at the CCD camera. As we want to record the interference at the position of the sample, the sample has to be focused onto the CCD camera by moving the camera to the conjugate plane of the sample. The focus position depends on the cavity length, i.e. the distance between the Fizeau plane and the sample. As can be seen from Fig. 1, different parts of the illuminating wave are used to build the reference and the sample waves. The "shearing" of the waves (e.g. at the place of the Fizeau surface) depends on the cavity length. Therefore both waves propagate differently through the interferometer, seeing different parts of each component.

3.2 Wavefronts and interferograms
The source aperture has to be large enough to illuminate both waves completely. Thus the source aperture has, in fact, to be a little larger than the apertures propagated by the two waves through the system.
Fig. 2. Sample wavefront (left) and reference wavefront (right) relative to the source wave, which is indicated by the surrounding circle. The greyscale is arbitrary. The part of the reference wave which interferes at the CCD-camera is sheared diagonally to the top left relative to the illuminating wave and the sample wave, because of the diagonally tilted Fizeau surface. Both waves show significant coma due to the tilted beam-splitter. (Wavefronts are drawn with constant offsets and tilts removed.)
The resulting interferogram is obtained as the difference between the reference and the sample wave. If both waves are dominated by coma, as above, the interferogram will show significant astigmatism and focus, since the shearing of the two wavefronts effectively performs a differentiation of the wavefront.
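This shear-induced differentiation can be checked with a few lines of plain Python. The coefficients below are illustrative and not taken from the simulated setup: we shear a pure coma-like term x(x²+y²) by a small amount s in x and verify pointwise that the difference decomposes into defocus (x²+y²), astigmatism (x²−y²), tilt and piston terms.

```python
import random

def coma(x, y):
    """Coma-like wavefront term x*(x^2 + y^2) (illustrative unit coefficient)."""
    return x * (x * x + y * y)

def sheared_difference(x, y, s):
    """Interferogram contribution: wavefront sheared by s in x, minus original."""
    return coma(x + s, y) - coma(x, y)

def decomposition(x, y, s):
    """Expansion of the difference: 2s*defocus + s*astigmatism + tilt + piston."""
    defocus = x * x + y * y
    astigmatism = x * x - y * y
    tilt = 3 * s * s * x
    piston = s ** 3
    return 2 * s * defocus + s * astigmatism + tilt + piston

random.seed(0)
s = 0.01  # small shear, analogous to the lateral offset of the two waves
for _ in range(100):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    assert abs(sheared_difference(x, y, s) - decomposition(x, y, s)) < 1e-12
```

The dominant terms scale linearly with the shear s, which is the first-order differentiation the text refers to.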
4 Drifts

The interferogram measured as shown above contains all features of the interferometric setup, including design errors and manufacturing errors of the individual components. To calibrate the interferometer, an interferogram can be recorded with a calibration mirror instead of a real sample. This interferogram can later be subtracted from the measurements. In this way, the wavefront contributions of the interferometric setup can be removed from the total interferogram, allowing the desired wavefront contributions of the sample to be separated. In practice, calibration and measurement are subject to small drifts – small changes in the interferometric setup due to environmental influences. These can be small changes in air pressure or temperature as well as small motions of the interferometer components. Such drifts are not covered by the calibration and lead to measurement errors. The sensitivity to drifts increases with increasing wavefront contributions of the interferometer. In other words: if the interferometer were an ideal one, having no wavefront contributions itself, it would not be sensitive to any drifts.
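A minimal sketch of this calibration logic, with invented numbers: suppose the interferometer contributes a systematic wavefront C(x), and a drift shifts that contribution laterally by d between calibration and measurement. The residual error after subtracting the calibration is C(x+d) − C(x) ≈ d·C′(x), which vanishes for an ideal interferometer with C ≡ 0.

```python
def system_wavefront(x, amplitude):
    """Hypothetical systematic wavefront of the interferometer (in nm)."""
    return amplitude * x * x

def drift_error(x, amplitude, drift):
    """Residual error after calibration when the system wavefront drifted laterally."""
    measurement = system_wavefront(x + drift, amplitude)  # drifted state
    calibration = system_wavefront(x, amplitude)          # undrifted state
    return measurement - calibration

# The residual grows with the interferometer's own wavefront contribution ...
assert abs(drift_error(0.5, 100.0, 1e-3)) > abs(drift_error(0.5, 10.0, 1e-3))
# ... and an ideal interferometer (no contribution) is insensitive to the drift.
assert drift_error(0.5, 0.0, 1e-3) == 0.0
```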
5 How a simulation can help

Drift errors are influenced both by the wavefront contributions of the interferometer and by the magnitude and type of the drifts. Keeping drift errors small can therefore be achieved by reducing the wavefront contributions of the interferometer components and by reducing the magnitude of the possible drifts. Reducing the wavefront contributions of the interferometer components is limited by the design wavefront of the interferometer – the wavefront which is already present without any manufacturing errors.

5.1 Precision of the interferometer
Given different types of expectable drifts, a simulation of the setup allows the calculation of the resulting drift errors. These drift errors, which are obtained without taking any manufacturing errors into account, limit the measurement accuracy of the interferometer. Therefore the simulation allows the limiting accuracy of a given interferometric setup to be calculated.
5.2 Drift errors due to manufacturing errors
Manufacturing errors of the interferometer components usually increase the wavefront contribution of the interferometer. This in turn leads to increased drift errors. Given the expected drifts and the desired measurement accuracy of the interferometer, the simulation allows the tolerances of the individual interferometer components to be calculated precisely, based on the real experimental setup. For the simulation, manufacturing errors can be applied, for example, as aspheric deformations of the surfaces of interest. The simulation will therefore locate critical components with tight tolerances. On the other hand, it will locate non-critical components with relaxed tolerances and thereby helps to reduce production costs.

5.3 Reducing the drift
Reducing the magnitudes of the drifts themselves is the most obvious way to reduce the drift errors. The simulation allows the critical drifts to be spotted and their maximum allowable magnitudes to be calculated. On the other hand, the simulation helps to spot non-critical drifts, thereby relaxing the specification of their maximum allowable magnitude.
6 Examples

For the following examples, we consider a typical Fizeau interferometer as shown in Fig. 1. The phase shift between the reference and the sample wave is achieved by tilting the Fizeau plate. The speed of the system (collimator and eyepiece) was f/5.6.

6.1 Errors due to a specific drift
As a specific drift, we consider an additional tilt of the Fizeau plate of 0.0023 deg about the axis perpendicular to the plane shown in Fig. 1. This tilt was chosen to add 10 additional fringes to the interferogram at the CCD plane. The measurement error due to this drift can be calculated by computing the interferogram of the drifted state and then subtracting the calibration – the interferogram of the undrifted state. The result is shown in Fig. 3. Note that this error is already present in the absence of any manufacturing errors. Therefore, if the given tilt of the Fizeau plate (10 additional fringes at the CCD) were an expectable drift, the wavefront error shown in Fig. 3 would be the limiting precision of the interferometer setup.
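As a rough plausibility check, with assumed numbers that are not given in the paper (a HeNe wavelength of 633 nm and no magnification between the Fizeau surface and the CCD), the relation between an extra plate tilt θ and the added fringe count is N = 2θD/λ, since reflection doubles the tilt of the reference beam across the beam diameter D:

```python
import math

WAVELENGTH = 633e-9          # m, assumed HeNe source (not stated in the text)
TILT = math.radians(0.0023)  # additional Fizeau-plate tilt from the example

def added_fringes(beam_diameter):
    """Fringes added across the beam by an extra plate tilt (reflection doubles it)."""
    return 2 * TILT * beam_diameter / WAVELENGTH

# Beam diameter that would reproduce the quoted 10 extra fringes:
diameter = 10 * WAVELENGTH / (2 * TILT)
assert abs(added_fringes(diameter) - 10) < 1e-9
print(f"implied beam diameter: {diameter * 1e3:.1f} mm")
```

With these assumptions, the quoted 10 fringes correspond to a beam diameter of roughly 80 mm; a different wavelength or an internal magnification would change the number accordingly.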
Fig. 3. Measurement error in nm due to an additional drift of the Fizeau plate of 10 fringes (at the plane of the CCD). Tilts and constant offsets are removed from the plot.
6.2 Error contributions due to assumed manufacturing errors
Fig. 4. Total measurement error in nm due to an additional drift of the Fizeau plate of 10 fringes (at the CCD plane), given a spherical manufacturing error of 1 λ at the first surface of the eyepiece. This measurement error contains both the contributions of the interferometer setup according to the optical design and the contributions of the manufacturing error. Tilts and offsets are removed from the plot.
As a specific manufacturing error, we consider a spherical surface deformation of 1 λ on the first surface of the eyepiece, in combination with the drift of the Fizeau plate as given above. The measurement error is again calculated by subtracting the calibration from the drifted interferogram. The result is shown in Fig. 4. To separate the contributions of the manufacturing error to the measurement error from the contributions of the “ideal” interferometer (according to the optical design), the wavefront shown in Fig. 3 can be subtracted from the total wavefront shown in Fig. 4. The resulting plain contribution of the manufacturing error to the measurement error is shown in Fig. 5:
Fig. 5. Contribution of the spherical manufacturing error of 1 λ on the first surface of the eyepiece to the measurement error (in nm) if the drift of the Fizeau plate is 10 fringes (at the CCD plane) between calibration and measurement.
6.3 Error contributions due to known manufacturing errors
If the manufacturing error of a specific interferometer component is known, e.g. from a measurement of the wavefront contributions of this component, the resulting measurement error can be calculated by applying the known manufacturing error to the corresponding component and carrying out the calculations as shown above. Fig. 6 shows a measured wavefront of a real eyepiece and its contribution to the measurement error in combination with the assumed drift of the Fizeau plate as given above.
Fig. 6. Left: measured wavefront of an eyepiece in nm. The wavefront shows significant coma due to manufacturing errors (e.g. lens element decentration). Right: contribution of the wavefront on the left to the measurement error, assuming a drift of the Fizeau plate of 10 fringes at the CCD plane.
7 Results

The simulation was carried out for special interferometer setups used at Carl Zeiss SMT AG. Different drifts, such as motions and tilts of all relevant interferometer components, were investigated, as well as manufacturing errors on all interferometer components. The “eyepiece” turned out to be a critical component regarding manufacturing tolerances. Mirrors and the reflecting Fizeau surface also turned out to have rather tight surface tolerances. On the other hand, and not quite surprisingly, an axial shift of the source or the CCD-camera turned out to have little impact on the resulting drift errors.
8 Conclusions

An accurate calculation of the error budget of the individual interferometer components can only be based on a simulation of the total interferometric setup. We have shown that such a simulation can in fact make accurate predictions of the achievable measurement precision. Furthermore, a simulation allows the maximum allowable drift magnitudes to be calculated. Using a simulation of the interferometric setup therefore allows one to focus on critical components and drifts on the one hand, and to relax the requirements for non-critical components and drifts on the other.
Progress on the Wide-Scale Nanopositioning and Nanomeasuring Machine by Integration of Optical Nanoprobes

Gerd Jäger, Tino Hausotte, Eberhard Manske, Hans-Joachim Büchner, Rostyslav Mastylo, Natalja Dorozhovets, Roland Füßl, Rainer Grünwald

Technische Universität Ilmenau, Institut für Prozessmess- und Sensortechnik, PF 100 565, 98684 Ilmenau, Germany
Abstract

The paper describes the operation of a high-precision wide-scale three-dimensional nanopositioning and nanomeasuring machine (NPM-Machine) having a resolution of 0.1 nm over a positioning and measuring range of 25 mm x 25 mm x 5 mm. The NPM-Machine has been developed by the Technische Universität Ilmenau and is manufactured by SIOS Meßtechnik GmbH, Ilmenau. The machines are operating successfully in several German and foreign research institutes, including the Physikalisch-Technische Bundesanstalt (PTB). The integration of several optical and tactile probe systems and scanning force microscopes makes the NPM-Machine suitable for various tasks, such as large-area scanning probe microscopy, mask and wafer inspection, and circuit testing, as well as for measuring optical and mechanical precision workpieces such as micro lens arrays, concave lenses and mm-step height standards.
1 Introduction

If one believes the prediction made by Gordon Moore, according to which the number of transistors on a chip doubles every two years, more
than one thousand million transistors would have to be realized per chip in the year 2010. As a consequence, 45-nm structures will have to be implemented. However, the technological development will not have reached its ultimate destination by then. The way forward is depicted by the “Technology Roadmap for Semiconductors 2003”, which predicts that around the year 2016, 22-nm structures will have to be realized. In the face of these enormous technological objectives, high requirements will also have to be fulfilled by nanometrology, in particular by nanomeasuring and nanopositioning techniques. Thus, scanning probe microscopes that scan over large areas are required for mask and wafer inspection and for the testing of ICs, with those microscopes also being suited to industrial applications. In addition, nanomeasuring and nanopositioning devices are necessary for positioning and measuring, to within a nanometre, for example nanosurface and nanostructure standards and mechanical and optical high-precision parts, as well as for material analysis. A nanopositioning and nanomeasuring machine (NPM machine) offering a relatively large positioning and measuring range of 25 mm x 25 mm x 5 mm and a resolution of 0.1 nm has been developed at the Institute of Process Measurement and Sensor Technology of the TU Ilmenau. The structure and the operating principle of this machine, the integration of different sensor systems, and the measurements performed are explained.
2 Design and operation of the NPM-Machine

The vision is the development of highly capable, reliable nanopositioning and nanomeasuring machines (NPM-Machines) with sub-nanometre resolution across large ranges. Our NPM-Machines /1, 2, 3/ consist of the following main components:
- traceable linear and angular measurement instruments of high resolution and accuracy
- 3D nanopositioning stages (bearings, drives)
- nanoprobes (AFM, focus sensor) suitable for integration into the NPM-Machine
- control equipment

First of all, a new concept is required to assemble these main components in order to achieve uncertainties as small as possible.
2.1 Traceable linear and angular sensors
Fig. 1. Plane mirror interferometers
Fig. 1 shows the difference between a state-of-the-art plane mirror interferometer (left) and the interferometer developed by our institute (right) /4/. The main advantage of our plane mirror interferometer is that it has only one measuring beam. This is important for compliance with the Abbe comparator principle on all three measurement axes. The interferometer on the left needs two beams to achieve tilt invariance over a small angular range of the moving plane mirror.
Fig. 2. Single, double and triple beam plane mirror interferometers
Single, double and triple beam plane mirror interferometers can be applied in the NPM-Machine in order to measure the x-, y- and z-displacements and the pitch, yaw and roll angles (see Fig. 2).
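The angular measurement of a double beam interferometer can be sketched as follows (a simplification with invented numbers, not the machine's actual signal processing): two parallel measuring beams separated by a baseline b read lengths l₁ and l₂ off the same moving mirror, and a small tilt follows from their difference.

```python
import math

def tilt_from_two_beams(l1, l2, baseline):
    """Small-angle tilt of a plane mirror read by two parallel beams (radians)."""
    return (l1 - l2) / baseline

# Illustrative numbers: a 10 mm baseline and a 5 nm path difference
# correspond to a tilt of 0.5 microradians.
angle = tilt_from_two_beams(5e-9, 0.0, 10e-3)
assert math.isclose(angle, 5e-7)
```

A triple beam interferometer extends the same idea to two orthogonal tilt axes, and such angle readings are what the machine uses for angular control.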
2.2 Operation of the NPM-Machine
The approach of the NPM-Machine consists of a consistent realization of the Abbe comparator principle in all measurement axes at all times (see Fig. 3).
Fig. 3. Principle design of the nanomeasuring machine: 1) x-interferometer, 2) y-interferometer, 3) z-interferometer, 4) metrology frame made of Zerodur, 5) roll and yaw angle sensor, 6) pitch and yaw angle sensor, 7) surface-sensing probe, 8) sample, 9) corner mirror, 10) fixing points for the probe system
The intersection point of all length measurement axes is the point of contact between the probe and the sample. This Abbe offset-free design with three interferometers and a user-selectable surface-sensing probe provides extraordinary accuracy. The sample is placed on a movable corner mirror that is positioned by three-axis drive systems. The position of the corner mirror is measured by three plane mirror interferometers. Angular deviations of the guide systems are measured at the corner mirror by means of sensitive angle sensors and are used for angular control. Guide error compensation of the stages is achieved by a closed-loop control system. The electromagnetic drives used achieve high speed and, at the same time, a positioning resolution of less than 1 nm. The NPM-Machine has one electromagnetic drive each for the x- and y-axes and four drives for the z-axis. Therefore, the angular errors caused by the x- and y-linear guides can be compensated.

2.3 Nanoprobes integrated into the NPM-Machine
Many different nanosensor types can be used for integration into the NPM-Machine. A focus sensor /5/, a scanning force microscope and a metrological AFM have been developed by the Institute of Process Measurement and Sensor Technology. The central part of the focus sensor is a so-called hologram laser unit. This multifunctional element has made the extreme miniaturization of the sensor possible. The structure of the entire focus sensor is shown in Fig. 4. The lateral resolution is about 0.8 µm; it depends on the laser wavelength and the focal aperture of the probe. The optical system has been dimensioned such that a measurement range of about ± 3 µm is achieved. Thus, a resolution at the zero point of < 1 nm is made possible by the AD converter used. To be able to see the point of optical scanning on the surface of the sample, the focus sensor has been combined with a CCD camera microscope, which allows the user to spot interesting regions on the sample surface. The camera illumination is fed from an LED via optical fibres to minimize heat penetration into the measuring machine. The characteristic curve of the focus probe can be calibrated using the laser interferometers of the NPM machine.
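The calibration of the characteristic curve can be sketched as a least-squares fit of the focus signal against the interferometer reading. The gain, offset and sample points below are invented for illustration and do not come from the real sensor:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept (characteristic-curve fit)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Interferometer positions (nm) within the +/- 3 um range of the sensor,
# and a hypothetical focus-sensor output in arbitrary units (gain 0.002/nm).
positions = [-3000, -1500, 0, 1500, 3000]
signals = [0.002 * z + 0.1 for z in positions]
gain, offset = fit_line(positions, signals)
assert abs(gain - 0.002) < 1e-12
assert abs(offset - 0.1) < 1e-12

def calibrated_position(signal):
    """Convert a focus-sensor reading back into displacement (nm)."""
    return (signal - offset) / gain
```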
Fig. 4. Setup of focus sensor
The focus sensor has been used to design a scanning force microscope. The bending of the cantilever is detected by the focus sensor. Thanks to an integrated piezo translator, measurements in intermittent-contact mode as well as in contact mode are possible (Fig. 5).
Fig. 5. Scanning force microscope with focus sensor
The metrological AFM developed is the first AFM traced to international standards. The bending of the cantilever is measured by a plane mirror interferometer and additionally detected by the focus sensor.
3 Measurement results

Five step-height standards from 7 nm to 780 nm were measured with the NPM-Machine using the focus sensor as the probe system. The step-height standards had been calibrated at the PTB. The maximum difference between the mean values measured by the PTB and our own results was ± 1.3 nm. The achievable scanning speed is of special interest with regard to large-area scans. Scans at speeds of up to 500 µm/s have been carried out without an observable increase of the expanded uncertainty (k = 2), which ranged from 0.7 nm to 2 nm. The NPM-Machine provides for the first time a large scanning and measurement range of 25 mm x 25 mm x 5 mm with a resolution of 0.1 nm. The focus sensor has proven to be versatile in its possibilities for use. Step-height measurements up to 5 mm can be carried out. Fig. 6 shows the measurement results of a 1 mm step height.
Fig. 6. 1 mm-step height
Fig. 7 illustrates the long-range nanoscale measurement of a concave lens.
Fig. 7. Concave lens
A step-height standard of 780 nm was measured with the focus-sensor AFM developed here. A standard deviation of only 0.4 nm was calculated.
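The expanded uncertainty quoted above follows the usual convention U = k·u with coverage factor k = 2, where u is here taken as the standard uncertainty of the mean. A sketch with invented repeat readings (not the actual measurement data):

```python
import math
import statistics

def expanded_uncertainty(samples, k=2):
    """Expanded uncertainty U = k * standard uncertainty of the mean."""
    u = statistics.stdev(samples) / math.sqrt(len(samples))
    return k * u

# Hypothetical repeated step-height readings in nm:
readings = [779.6, 780.1, 780.4, 779.9, 780.0]
mean_height = statistics.fmean(readings)
U = expanded_uncertainty(readings)
print(f"step height: {mean_height:.2f} nm, U(k=2) = {U:.2f} nm")
```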
Conclusion

This paper describes the design and operation of a nanopositioning and nanomeasuring machine with a scanning and measurement range of 25 mm x 25 mm x 5 mm and a resolution of 0.1 nm in all measurement axes. The laser-interferometric measurement is free from first-order Abbe errors in all three coordinates. The presented single, double and triple beam
interferometers can be applied in NPM-Machines. A new high-speed focus sensor and a scanning force microscope with an integrated focus sensor are described. It was possible to measure step-height standards with uncertainties below 1 nm, as well as long-range samples at the nanometre scale.
Acknowledgements

The authors wish to thank all colleagues who have contributed to the developments presented here. Our special thanks are due to the Thuringian Ministry of Science, Research and Arts for promoting nanocoordinate metrology in the framework of joint projects, and to the German Research Foundation (DFG) for funding the Collaborative Research Center 622 “Nanopositioning and Nanomeasuring Machines” at the Technische Universität Ilmenau.
References

1. G. Jäger, E. Manske, T. Hausotte, W. Schott: Operation and analysis of a nanopositioning and nanomeasuring machine, Proceedings of the 17th Annual Meeting of the ASPE, St. Louis, Missouri, USA, 2002, pp. 299–304
2. G. Jäger, E. Manske, T. Hausotte, H. Büchner, R. Grünwald, R. Füßl: Application of miniature interferometers to nanomeasuring and nanopositioning devices, Proceedings of the Conference Scanning Probe Microscopy, Sensors and Nanostructures (TEDA), Beijing, China, May 2004, pp. 23–24
3. G. Jäger, E. Manske, T. Hausotte, R. Füßl, R. Grünwald, H. Büchner, W. Schott, D. Dontsov: Miniature interferometers developed for applications in nano-devices, Proceedings of the 7th International Conference on Mechatronic Technology, ICMT 2003, Taipei, Taiwan, December 2003, pp. 41–45
4. H.-J. Büchner, G. Jäger: Interferometrische Messverfahren zur berührungslosen und quasi punktförmigen Antastung von Messoberflächen, Technisches Messen 59 (1992) 2, pp. 43–47
5. R. Mastylo, E. Manske, G. Jäger: Development of a focus sensor and its integration into the nanopositioning and nanomeasuring machine, OPTO 2004, 25–27 May 2004, Nürnberg, Proceedings, pp. 123–126
Invited Paper
Through-Focus Point-Spread Function Evaluation for Lens Metrology using the Extended Nijboer-Zernike Theory

Joseph J.M. Braat1, Peter Dirksen2, Augustus J.E.M. Janssen3

1 Optics Research Group, Department of Imaging Science and Technology, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands ([email protected])
2 Philips Research Laboratories, Kapeldreef 75, B-3001 Leuven, Belgium ([email protected])
3 Philips Research Laboratories, Professor Holstlaan 4, 5656 AA Eindhoven, The Netherlands ([email protected])
1 Introduction

Lens metrology is of utmost importance in the field of optical lithography; both at the delivery stage and during the lifetime of the projection objective, a very high imaging quality has to be guaranteed. Frequent on-line tests of the optical quality are required, and they should be well adapted to the environment in which the objective has to function, viz. a semiconductor manufacturing facility. The most common method for high-precision lens characterization is at-wavelength optical interferometry [1]. A first limitation of this method is the availability of an appropriate coherent source at the desired wavelength. A second problem is the reference surface that is needed in virtually all interferometric set-ups. For these reasons, there has been much interest in lens quality assessment directly using the intensity distribution in the image plane. A reconstruction of the complex pupil function of the objective from a single intensity measurement is generally not possible. A combined intensity measurement in the image plane and the pupil plane, where possible, can give rise to an improved reconstruction method. More advanced methods use several through-focus images, but it is not possible to guarantee the uniqueness of the aberration reconstruction in such a numerical ‘inversion’ process [2]-[4]. In this paper we give an overview of a new method that is based on an analysis of the through-focus images of a point-like object, using a complex pupil function expansion in terms of Zernike polynomials.
While the original analysis of the diffracted intensity by Nijboer and Zernike, using the orthogonal circle polynomials, was limited to a close region around the image plane, the so-called Extended Nijboer-Zernike (ENZ) theory offers Bessel series expressions for the through-focus intensity distribution over a much larger range. Good convergence of the through-focus intensity is obtained over a range of typically 10 to 15 focal depths. For high-quality projection lenses, this is typically the range over which relevant intensity variations due to diffraction are observed, and the information from this focal volume is used for the reconstruction of both the amplitude and the phase of the complex pupil function of the lens. In a recent development, the theory has been extended to incorporate vector diffraction problems, so that the intensity distribution in the focal volume of an imaging system with a high (geometrical) aperture (e.g. sin α = 0.95) can be adequately described. From this analysis one can in principle also extract the so-called ‘polarization aberrations’ of the imaging system. As mentioned previously, the starting point in all cases is a point-like object source that is smaller than or comparable to the diffraction limit of the optical system to be analyzed. A sufficient number of defocused images serves to create three-dimensional ‘contours’ of the intensity distribution that are then used in the reconstruction or retrieval scheme. In Section 2 we briefly describe the basic features of the Extended Nijboer-Zernike theory, followed in Section 3 by a presentation of its implementation in the retrieval problem for characterizing the lens quality. In Section 4 we present experimental results, and in Section 5 we give some conclusions and an outlook towards further research and developments in this field.
2 Basic outline of the Extended Nijboer-Zernike theory

To a large extent, the imaging quality of an optical system is described by the properties of the complex (exit) pupil function. In Fig. 1 we have shown the geometry corresponding to the exit pupil and the image plane. The Cartesian coordinates on the exit pupil sphere are denoted by μ and ν, and the pupil radius is ρ_0. The distance from the centre E_0 of the exit pupil to the centre P_0 of the image plane is R. The Cartesian coordinates on the exit pupil sphere are normalized with respect to the pupil radius. The real-space coordinates (x,y) in the image plane are normalized with respect to the diffraction unit λ/s_0, where s_0 is the numerical aperture of the imaging system, and these coordinates are then denoted by (X,Y). In an analogous way, the axial coordinate z is normalized with respect to the axial diffraction unit,

u = λ / {1 − (1 − s_0²)^{1/2}},

and denoted by Z. As usual, the diffraction calculations are carried out using polar coordinates, (ρ,ϑ) for the pupil coordinates and (r,φ,z) for the image plane coordinates. The complex pupil function is written as

B(ρ,ϑ) = A(ρ,ϑ) exp{iΦ(ρ,ϑ)},    (2.1)

with A(ρ,ϑ) equal to the lens transmission function and Φ(ρ,ϑ) the aberration function (in radians) of the objective.
Fig. 1. The choice of coordinates in the pupil and the image space.
The point-spread function in image space is obtained from Fourier optics [5] and is written as

U(r,φ;f) = (1/π) ∫_0^{2π} ∫_0^1 exp{ifρ²} B(ρ,ϑ) exp{i2πrρ cos(ϑ − φ)} ρ dρ dϑ,    (2.2)
where the defocus parameter f has been included to allow through-focus evaluation of the image-space amplitude. The calculation of U(r,φ;f) can be done in a purely numerical way, but the Nijboer-Zernike theory has shown that a special representation of B(ρ,ϑ) in terms of the Zernike polynomials allows an analytical solution for f = 0. It has turned out that for nonzero values of f, as large as ±2π, a well-converging series expression for Eq. (2.2) can be found [6], and this solution has proven to be very useful
and effective [7] once we are confronted with the inversion problem described in the introduction.

2.1 Zernike representation of the pupil function
The common way to introduce Zernike polynomials in the pupil function representation is to put the amplitude transmission function A(ρ,ϑ) equal to unity (a frequently occurring situation in optical systems) and to apply the expansion only to the phase aberration function Φ(ρ,ϑ), according to

B(ρ,ϑ) = exp{iΦ(ρ,ϑ)} ≈ 1 + iΦ(ρ,ϑ) = 1 + i Σ_{n,m} α_n^m Z_n^{|m|}(ρ,ϑ),    (2.3)

with

Z_n^{|m|}(ρ,ϑ) = R_n^{|m|}(ρ) exp(imϑ).    (2.4)

The radial polynomial R_n^{|m|}(ρ) is the well-known Zernike polynomial of radial order n and azimuthal order |m| (with n − |m| ≥ 0 and even), and the azimuthal dependence is represented by the complex exponential function exp(imϑ). To represent all possible cosine and sine dependences in the Zernike polynomial expansion, the summation over n,m for the representation of Φ(ρ,ϑ) has to be extended to both positive and negative values of m, up to a chosen maximum value of |m|. In our analysis of the through-focus amplitude we prefer to use a Zernike polynomial expansion for the complete pupil function B(ρ,ϑ), and this leads to the following expression:

B(ρ,ϑ) = A(ρ,ϑ) exp{iΦ(ρ,ϑ)} = Σ_{n,m} β_n^m Z_n^{|m|}(ρ,ϑ),    (2.5)
where the coefficients β_n^m now represent both the amplitude and the phase of the complex pupil function. For sufficiently small values of the β-coefficients, an unequivocal reconstruction of the separate functions A(ρ,ϑ) and Φ(ρ,ϑ) is feasible.

2.2 Amplitude distribution in the focal region
The Extended Nijboer-Zernike theory preferably uses the general representation of the complex pupil function according to Eq. (2.5), and from this expression the amplitude in the focal plane is obtained as

U(r,φ;f) = Σ_{n,m} β_n^m U_n^m(r,φ;f),    (2.6)
with the functions U_n^m(r,φ;f) given by

U_n^m(r,φ;f) = 2 i^m V_n^m(r,f) exp(imφ).    (2.7)

The expression for V_n^m(r,f) reads

V_n^m(r,f) = ∫_0^1 exp(ifρ²) R_n^m(ρ) J_m(2πrρ) ρ dρ,   m ≥ 0,
V_n^m(r,f) = (−1)^m ∫_0^1 exp(ifρ²) R_n^{|m|}(ρ) J_{|m|}(2πrρ) ρ dρ,   m < 0.    (2.8)

It is a basic result of the Extended Nijboer-Zernike theory that the function V_n^m(r,f) can be written analytically as a well-converging series expansion over the domain of interest in the axial direction, say |f| ≤ 2π. The integrals in (2.8) are given by

V_n^m(r,f) = exp(if) Σ_{l=0}^{∞} (−if/(πr))^l Σ_{j=0}^{p} u_{lj} J_{|m|+l+2j+1}(2πr) / (2πr),    (2.9)

where p = (n − |m|)/2 and q = (n + |m|)/2. The coefficients u_{lj} are given by

u_{lj} = (−1)^p [(|m| + l + 2j + 1)/(q + l + j + 1)] C(|m|+j+l, l) C(j+l, l) C(l, p−j) / C(q+l+j, l),    (2.10)

where the binomial coefficients C(n,k) = ‘n over k’ are defined by n!/(k!(n−k)!) for integer k,n with 0 ≤ k ≤ n, and equal to zero for all other values of k and n.

2.3 Intensity distribution in the focal region
The intensity distribution is proportional to the squared modulus of the expression in Eq. (2.6) and can be written as

I(r,φ;f) = | Σ_{n,m} β_n^m U_n^m(r,φ;f) |²,    (2.11)

and in a first-order approximation we obtain

I_a(r,φ;f) ≈ |β_0^0|² |U_0^0(r,φ;f)|² + 2 Σ′_{n,m} Re{β_0^0 β_n^{m*} U_0^0(r,φ;f) U_n^{m*}(r,φ;f)}
           = 4 (β_0^0)² |V_0^0(r,f)|² + 8 β_0^0 Σ′_{n,m} Re{i^{−m} β_n^{m*} V_0^0(r,f) V_n^{m*}(r,f) exp(−imφ)},    (2.12)

where the two summation signs Σ′ exclude (n,m) = (0,0). The approximated expression I_a(r,φ;f) neglects all quadratic terms with factors β_n^m β_{n′}^{m′*} in them, which is reasonable if, in the pupil function expansion, the term with β_0^0, assumed to be > 0, is the dominant one.
3 Retrieval scheme for the complex pupil function

In this section we develop the system of linear equations that allows the Zernike coefficients to be extracted from the measured through-focus intensity function. We suppose that the intensity has been measured in a certain number of defocused planes (typically 2N+1, with e.g. N = 5). A typical step of the defocus parameter from plane to plane is e.g. 4π/(2N+1). The discrete data in the defocused planes are interpolated and, if needed, transformed from a square to a polar grid so that they optimally fit the retrieval problem. After these operations we effectively have the measured intensity function I(r,φ;f) at our disposal for further analysis.

3.1 Azimuthal decomposition
We first carry out a Fourier decomposition of the measured intensity distribution according to

Ψ^m(r,f) = (1/2π) ∫_{−π}^{π} I(r,φ;f) exp(imφ) dφ.    (3.1)

Our task is to match the measured function I(r,φ;f) to the analytical intensity function I_a(r,φ;f) of Eq. (2.12) in the focal volume by finding the appropriate coefficients β_n^m. The harmonic decomposition of I_a(r,φ;f) yields

Ψ_a^m(r,f) = (1/2π) ∫_{−π}^{π} I_a(r,φ;f) exp(imφ) dφ
           = 4 δ_{m0} (β_0^0)² ψ_0^0(r,f) + 4 β_0^0 Σ_{n≠0} [β_n^{m*} ψ_n^{m*}(r,f) + β_n^{−m} ψ_n^{−m}(r,f)].    (3.2)

Here we have used the shorthand notation

ψ_n^m(r,f) = i^m V_0^{0*}(r,f) V_n^m(r,f).    (3.3)

We now have at our disposal the harmonic decomposition of both the measured data set and the theoretically predicted intensity distribution that depends on the unknown β_n^m-coefficients. These coefficients can be evaluated by solving, for each harmonic component m, the approximate equality

Ψ_a^m(r,f) ≈ Ψ^m(r,f).    (3.4)

Our preferred solution of the 2m+1 equations is obtained by applying a multiplication with ψ_n^m(r,f) and integrating over the relevant region in the (r,f)-domain (inner product method). In this way, we get a system of linear equations in the coefficients β_n^m that can be solved by standard methods.
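The azimuthal decomposition (3.1) can be sketched numerically with a plain Riemann sum over an equidistant φ-grid (exact for trigonometric polynomials); the toy intensity pattern below is our own choice, not measured data:

```python
import cmath
import math

def harmonic_component(intensity, m, samples=360):
    """Discrete version of Eq. (3.1): (1/2pi) * integral of I(phi) exp(i m phi)."""
    total = 0j
    for k in range(samples):
        phi = -math.pi + 2 * math.pi * k / samples
        total += intensity(phi) * cmath.exp(1j * m * phi)
    return total / samples

# Toy azimuthal intensity profile: I(phi) = 1 + cos(2*phi).
intensity = lambda phi: 1.0 + math.cos(2 * phi)

# The m = 0 component picks up the mean, m = +/-2 pick up cos(2*phi)/2,
# and all other harmonics vanish.
assert abs(harmonic_component(intensity, 0) - 1.0) < 1e-9
assert abs(harmonic_component(intensity, 2) - 0.5) < 1e-9
assert abs(harmonic_component(intensity, -2) - 0.5) < 1e-9
assert abs(harmonic_component(intensity, 1)) < 1e-9
```

In the actual retrieval, these harmonic components feed the linear system in the β_n^m coefficients described above.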
4 Experimental results

The ENZ aberration retrieval scheme has been applied to a projection system that allows, in a controlled way, the addition or subtraction of a certain amount of a specific aberration. In this way, by controlled adjustment of the lens setting, we were able to execute two sets of consecutive measurements and detect the aberrational change between them. In Fig. 2 we show the results of these measurements, where each time a certain amount of extra aberration has been introduced (50 mλ in rms value). It can be seen that the measured aberration values correspond quite well to the changes in the objective predicted by the mechanical adjustments. The through-focus intensity contours have been obtained from a large number of recorded point-spread images at different dose values. For a fixed resist clipping level, we can track the various intensity contours from the developed resist images, which are automatically analyzed by an electron microscope. The accuracy of the method can be checked by a forward calculation of the point-spread function intensity using the retrieved pupil function. The measured and reconstructed intensity distributions show differences of 1% at the most. At present, the accuracy of the reconstructed wavefront aberration is of the order of a few mλ in rms value.
5 Conclusions and outlook

The Extended Nijboer-Zernike theory for point-source imaging has been applied to a lens metrology problem and has proven to be a versatile and accurate wavefront measurement method in an environment where, e.g., classical interferometry is difficult to implement. The wavefront retrieval process requires a reliable measurement of the through-focus intensity distribution. In our case, for a moderate-NA projection lens (NA = 0.30), we have used a large number of defocused resist images with widely varying exposure values. Automated resist contour evaluation of the developed images with an electron microscope yields an accurate representation of the through-focus intensity distribution.
[Fig. 2 consists of three bar diagrams, all values in mλ: (a) spherical added, +50 mλ spherical; (b) X-coma added, +50 mλ X-coma; (c) HV-astigmatism added, +50 mλ X-astigmatism. Each diagram shows nominal and detuned values for the components Spher., Y-coma, X-coma, Y-ast., X-ast., Y-thre. and X-thre.]
Fig. 2. Bar diagrams of the measured change in aberration value for three specific aberration types (spherical aberration, coma and astigmatism) using the retrieval method according to the Extended Nijboer-Zernike theory. The black bars indicate the reference aberration values, the white bars the values corresponding to the detuned systems (all values in units of mλ rms aberration).
With the aid of the reconstructed exit pupil function, the point-spread image of the aberrated projection lens can be calculated and we have observed a fit to better than 1% in intensity between the measured and calculated through-focus intensity distributions. Recent research has focused on the incorporation of high-NA vector diffraction into the ENZ-theory [8], on the effects of image blurring due to latent resist image diffusion during the post-exposure bake and on the influence of lateral smear by mechanical vibrations [9]. All these effects tend to obscure the real lens contribution to image degradation and they have to be taken into account in the retrieval process.
References

1. Malacara, D (1992) Optical Shop Testing, 2nd edition, Wiley, Hoboken NJ (USA)
2. Gerchberg, R W, Saxton, W O (1971) Phase determination from image and diffraction plane pictures in the electron microscope. Optik 34:277-286
3. Gerchberg, R W, Saxton, W O (1972) A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35:237-246
4. Fienup, J R (1982) Phase retrieval algorithms - a comparison. Appl. Opt. 21:2758-2769
5. Born, M, Wolf, E (1970) Principles of Optics, 4th rev. ed., Pergamon Press, New York (USA)
6. Janssen, A J E M (2002) Extended Nijboer-Zernike approach for the computation of optical point-spread functions. J. Opt. Soc. Am. A 19:849-857
7. Braat, J J M, Dirksen, P, Janssen, A J E M (2002) Assessment of an extended Nijboer-Zernike approach for the computation of optical point-spread functions. J. Opt. Soc. Am. A 19:858-870
8. Braat, J J M, Dirksen, P, Janssen, A J E M, van de Nes, A S (2003) Extended Nijboer-Zernike representation of the vector field in the focal region of an aberrated high-aperture optical system. J. Opt. Soc. Am. A 20:2281-2292
9. Dirksen, P, Braat, J J M, Janssen, A J E M, Leeuwestein, A (2005) Aberration retrieval for high-NA optical systems using the extended Nijboer-Zernike theory. To appear in Proc. SPIE 5754, Conference on Microlithography 2005, San Jose, USA, February 26 - March 4
Digital Holographic Microscopy (DHM) applied to Optical Metrology: A resolution enhanced imaging technology applied to inspection of microscopic devices with subwavelength resolution

Christian D. Depeursinge, Anca M. Marian, Frederic Montfort, Tristan Colomb, Florian Charrière, Jonas Kühn, STI-IOA, EPFL, 1015 Lausanne, Switzerland
Etienne Cuche, Yves Emery, Lyncée Tec SA, rue du Bugnon 7, CH-1005 Lausanne, Switzerland
and Pierre Marquet, Physiology Institute, Lausanne University, Switzerland
1 Introduction

Digital Holographic Microscopy is an imaging technique offering both sub-wavelength resolution and real-time observation capabilities. We show in this article that, while longitudinal accuracies can be as low as one nanometer in air, or even less in media of elevated refractive index, the lateral accuracy and the corresponding resolution are lower, but can be kept at a sub-micron level by the use of a high Numerical Aperture (N.A.) Microscope Objective (M.O.). In the present state of the art, it can currently be kept below 600 nm. We show that the use of high-N.A. objectives provides an effective means of adapting the sampling capacity of the digital camera (here a CCD) to the needs of hologram registration. On the other hand, the accuracy may also be limited by the weak intensities of the optical signals from nanometer-size diffracting objects.
2 Digital Holographic Microscopes (DHM)

Several optical arrangements have been selected for taking digital holograms of various specimens of diffracting objects. The most frequently used are the reflection geometry for surface topology measurement (Fig. 1) and
the transmission geometry for the measurement of object thicknesses and refractive indexes (Fig. 2). The originality of our approach is to provide high accuracy in the reconstructed images, both by using a slightly modified microscope design yielding digital holograms of microscopic objects, and by taking advantage of an interactive computer environment to easily reconstruct object shape from digital holograms. The use of a slightly off-axis configuration enables capture of the whole image information with a single hologram acquisition. By using a gated camera with an aperture time of a few tens of microseconds, or pulsed illumination sources, it is possible to avoid perturbations originating from parasitic movements, vibrations, or perturbing ambient light. On the other hand, the wavefront reconstruction rate may be as high as 15 frames/second, making DHM an ideal solution for systematic investigations on large volumes of micro-devices such as full wafers bearing MEMS, MOEMS and micro-optical devices. Real-time image reconstruction and rendering is henceforth possible, thus providing a new tool in the hands of micro- and nano-system engineers. This imaging modality is based on the reconstruction of the wave front in numerical form, directly from a single digitized hologram taken in a slightly off-axis geometry. In our DHM implementation, no time heterodyning or moving mirrors are required, and the microscope design is therefore simple and robust. DHM delivers quantitative data derived simultaneously from the amplitude and phase of the complex reconstructed wave front diffracted by the object. Microscopic objects can be imaged in transmission and reflection geometry. DHM provides an absolute phase image, which can be directly interpreted in terms of refractive index and/or profile of the object.
Very high accuracies can be achieved, comparable to those provided by high-quality interferometers, but DHM offers better flexibility and the capability of adjusting the reference plane in the computer, i.e. without repositioning the beam or the object. This computerized procedure adds much flexibility and can even be made transparent to the user.

Fig. 1. DHM configuration for use in reflection microscopy
The holograms are acquired with a CCD or CMOS camera and then digitized. A digital expression of the wavefront is formed in the hologram plane x-y and then propagated to the observation plane ξ-η according to the Fresnel diffraction law. The Fresnel-Huygens expression is given by the mathematical expression (1), which, in the paraxial approximation, can be put in the form (2) and evaluated numerically after discretization. The reconstructed wavefront simultaneously delivers the phase information, which reveals the 3D topography of the object surface, and the intensity image, as obtained by a conventional optical microscope.

Fig. 2. Geometry for use in transmission microscopy
\Psi(\xi,\eta) = \frac{1}{i\lambda} \iint \Psi_0(x,y)\, \frac{\exp(ik|\mathbf{r}-\mathbf{r}'|)}{|\mathbf{r}-\mathbf{r}'|}\, dx\, dy    (1)

\Psi(\xi,\eta) = \Phi(\xi,\eta)\, \frac{\exp(i 2\pi d/\lambda)}{i\lambda d} \iint R_D(x,y)\, I_H(x,y)\, \exp\!\left[ \frac{i\pi}{\lambda d} \left( (\xi-x)^2 + (\eta-y)^2 \right) \right] dx\, dy    (2)
Expression (2) for the reconstructed wavefront Ψ is the Fresnel transform of the un-propagated wavefront R_D·I_H; λ is the wavelength and d the propagation distance. Equation (2) provides the transformation of the wavefront Ψ from the hologram plane 0xy, where it is equal to R_D·I_H, to the observation plane 0ξη. The digital reference wave R_D, introduced in Ref. 1, and the digital phase mask Φ, introduced in Ref. 2 in order to correct the quadratic phase dependence and possible aberrations introduced by the M.O., play a major role in phase reconstruction. R_D is defined as a computed replica of the experimental reference wave R. If we assume that the hologram has been recorded in the off-axis geometry, with a plane wave as reference, R_D is defined as follows:
R_D(x,y) = \exp\!\left[ i\,\frac{2\pi}{\lambda}\,(k_x x + k_y y) + i\,\delta(t) \right]    (3)
where the parameters k_x, k_y define the propagation direction, and δ(t) is the phase delay between the object and reference waves, which can vary. The adjustment of several parameters is needed for proper reconstruction of the phase distribution. In particular, k_x and k_y compensate for the tilt aberration resulting from the off-axis geometry or from an imperfect orientation of the specimen surface. Equation (3) can be discretized and computed by calculation of the discrete Fresnel transform (see [2] and [3]).

2.1 High longitudinal resolution
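For reference, the discretized form of equations (2) and (3) reduces to a single FFT. The following is a minimal, illustrative single-FFT Fresnel reconstruction under our own naming and sampling assumptions; it is not the authors' implementation and it omits the aberration-correcting phase mask Φ.

```python
import numpy as np

def reconstruct(hologram, wavelength, d, dx, dy, kx=0.0, ky=0.0):
    """Illustrative single-FFT discrete Fresnel reconstruction of a hologram.

    hologram : real 2-D array I_H, sampled with pixel pitches dx, dy [m]
    wavelength, d : wavelength and reconstruction distance [m]
    kx, ky : direction cosines of the digital reference wave R_D,
             compensating the off-axis tilt (cf. equation (3)).
    Returns the complex reconstructed wavefront (amplitude and phase).
    """
    n, m = hologram.shape
    y = (np.arange(n) - n // 2)[:, None] * dy
    x = (np.arange(m) - m // 2)[None, :] * dx
    # digital replica of the plane reference wave, equation (3)
    R_D = np.exp(1j * 2.0 * np.pi / wavelength * (kx * x + ky * y))
    # quadratic chirp of the expanded Fresnel kernel in the hologram plane
    chirp = np.exp(1j * np.pi / (wavelength * d) * (x**2 + y**2))
    field = R_D * hologram * chirp
    # one FFT yields the Fresnel transform up to a constant phase factor
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
```

The remaining constant and quadratic phase factors in the observation plane (and any M.O. aberration correction) would be applied afterwards as a multiplicative phase mask.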
Fig. 3. Reconstructed image of a pentacene layer deposited on gold. The diagram shows the profile of the border of the layer: nanometric accuracies are obtained.
The accuracy of the reconstructed wavefront on the optical axis is given by the phase of the reconstructed wavefront.

[...]

I(n,m) = s_1^*(n,m)\, s_2(n,m) = a_1(n,m)\, a_2(n,m)\, \exp\{ j[\varphi_2(n,m) - \varphi_1(n,m)] \}, \quad s_i(n,m) = a_i(n,m)\, \exp[ j\varphi_i(n,m) ]    (1)
Here n,m denote column and row index, a_i(n,m) is the interferometric amplitude and φ_i(n,m) the interferometric phase. While any individual pixel phase in a SAR image will be hopelessly noisy (uniformly distributed in the interval (−π, π]), the interferometric phase, calculated as the phase difference of two corresponding pixels, will assume a meaningful value. This means that the two corresponding SAR image pixels, from which the interferogram pixel has been calculated by equation (1), will be noisy, but they will be noisy in the same way. The quantity measuring this similarity is the pixel coherence γ(n,m), defined as the normalized cross-correlation coefficient:

\gamma(n,m) = \frac{ E\{ s_1^*(n,m)\, s_2(n,m) \} }{ \sqrt{ E\{ |s_1(n,m)|^2 \}\; E\{ |s_2(n,m)|^2 \} } }    (2)
Conceptually the coherence can be defined as a complex value, while the classical coherence known from laser interferometry [5] may be interpreted as the absolute value of the complex coherence expressed by equation (2). A large body of work on interferometric phase and amplitude statistics, as well as on optimal filtering to reduce interferometric phase noise, has been published by various authors; an example is [7], and a compilation of several results can be found in [2,3]. It can be shown that the interferometric phase noise variance is a nonlinear function of the classical coherence, where the total coherence ρ(n,m) = |γ(n,m)| can be factorized into three individual, physically meaningful terms:

\rho(n,m) = \rho_{temporal}(n,m)\; \rho_{spatial}(n,m)\; \rho_{thermal}(n,m)    (3)
The spatial coherence describes the correlation or decorrelation effects caused by the different aspect angles of the two SAR sensors imaging the same scene. These effects generally grow with increasing baseline length, so for each interferometric mission, depending on incidence angle and employed wavelength, there is an upper baseline length limit which cannot be exceeded without completely decorrelating the individual SAR images. The thermal coherence describes the decorrelation caused by thermal sensor noise. Finally, the temporal coherence reflects the influence of the time delay between the acquisitions of the individual SAR scenes and the change in the scene content introduced by the time delay (vegetation growth, environmental or man-made changes, changing weather conditions, etc.).

1.1 Repeat Pass or Single Pass Interferometry?
It is the temporal decorrelation which introduced the two basic options of SAR interferometry, namely
- two or repeat pass interferometry
- single or one pass interferometry
While the first option images the same scene with the same instrument from different paths at different times, being prone to any temporal variation of the scene between the acquisition times, the second option uses two different instruments on the same platform, separated by a rigid baseline (a 60 m mast in the Shuttle Radar Topography Mission, 1-2 m in airborne experiments). Here the two scenes are essentially acquired at the same time, minimizing the temporal decorrelation. As examples of spaceborne repeat pass interferometry, the ERS-1 interferometric setups and the ERS-1/ERS-2 tandem mission may be regarded. The first spaceborne single pass interferometer mission was the Shuttle Radar Topography Mission (SRTM), which will be considered subsequently. While in repeat pass interferometry no mechanical coupling between the two sensors is present, this coupling, existing in the second option, introduces dynamic motion errors to the baseline vector in a way that the whole interferometric constellation becomes time-varying. While repeat pass interferometry seems feasible for spaceborne sensors showing very stable orbits that require almost no motion compensation, it is much more demanding for airborne sensors requiring more sophisticated motion compensation.
1.2 Across Track or Along Track Interferometry?
The spatial separation of the SAR sensors might be in the flight direction, giving rise to along track interferometry, or perpendicular to the motion vector, giving rise to across track interferometry. Along track interferometry allows for the determination of motion, such as oceanic surface flows [9-11, 13-15], arctic glacier flow and traffic monitoring, while across track interferometry in general enables the generation of high accuracy Digital Elevation Models (DEM). More recently, substantial developments have been achieved in the context of centimetric/millimetric-accuracy ground deformation monitoring via the Multiple-Pass Differential SAR Interferometry technique [16-18]. A topical review of Synthetic Aperture Radar Interferometry, with emphasis on processing aspects, can be found in [8]. In the following we will restrict our interest to across track SAR systems.
2 A Simple Across Track SAR Interferometer

[Fig. 1 shows the imaging geometry: the two SAR sensors S1 and S2 separated by the baseline B with orientation angle ξ, the look angle θ, the sensor height H, the slant ranges r and r+dr to the terrain surface point, and the point P at height z.]

Fig. 1. Principle of Across Track Interferometry
The principle of an across track interferometer is shown in Fig. 1. Any pixel of the coregistered scenes corresponds to one surface point which, before the coregistration, in one scene would be imaged at slant range r, showing a phase value proportional to r, while in the second scene the surface point would be imaged at slant range r+dr, showing a phase value proportional to that slightly altered range value.
Wide Scale 4D Optical Metrology
With the cosine law for arbitrary triangles we then have

(r+dr)^2 = r^2 + B^2 - 2Br\cos\gamma, \quad \text{where } \gamma = 90^\circ - \xi + \theta .    (4)

Hence we may write

(r+dr)^2 = r^2 + B^2 - 2Br\cos(90^\circ - \xi + \theta) = r^2 + B^2 + 2Br\sin(\theta - \xi),

and for the sine term we obtain

\sin(\theta - \xi) = \frac{(r+dr)^2 - r^2 - B^2}{2Br} .    (5)

For the right-angled triangle consisting of S1, the local surface point and the point P at height z we have

z = H - r\left[ \sqrt{1 - a^2}\,\cos\xi - a\,\sin\xi \right], \quad \text{where } a = a(r, B, dr) = \sin(\theta - \xi) = \frac{(r+dr)^2 - r^2 - B^2}{2Br},    (6)
indicating that we can determine the height of any imaged surface point from its slant range, the baseline length B and the slant range difference dr. Further parameters we need to know are the baseline orientation and the height H of the master satellite S1. Obviously the slant range difference dr is proportional to the interferometric phase difference dφ = φ_i:

\varphi_i = \frac{4\pi}{\lambda}\, dr \quad \text{and} \quad dr = \frac{\lambda}{4\pi}\, \varphi_i,    (7)

so that we are finally able to determine the height from the interferometric phase, provided that we know all the other parameters, like baseline length and orientation. It is worth noting that here the interferometric phase is essentially the unambiguous phase, which unfortunately is not directly observable from the interferogram.
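Equations (6) and (7) combine into a direct phase-to-height mapping. The following is a hedged numerical sketch (function and parameter names are our own; angles in radians, lengths in meters):

```python
import numpy as np

def height(phi, r, B, xi, H, wavelength):
    """Height of a surface point from the unwrapped interferometric phase.

    Combines equations (7) and (6):
      dr = wavelength * phi / (4*pi)
      a  = sin(theta - xi) = ((r+dr)**2 - r**2 - B**2) / (2*B*r)
      z  = H - r * (sqrt(1 - a**2) * cos(xi) - a * sin(xi))

    phi : unwrapped interferometric phase [rad]
    r   : slant range [m],  B : baseline length [m]
    xi  : baseline orientation angle [rad],  H : master sensor height [m]
    """
    dr = wavelength * phi / (4.0 * np.pi)
    a = ((r + dr) ** 2 - r ** 2 - B ** 2) / (2.0 * B * r)
    return H - r * (np.sqrt(1.0 - a ** 2) * np.cos(xi) - a * np.sin(xi))
```

A quick consistency check: constructing dr from a known look angle θ and inverting through the formula must return the original terrain height.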
3 Interferometric Processing

3.1 Raw Data Focussing

Interferometric processing starts with two complex raw data sets collected by the individual, spatially separated SAR sensors (Fig. 2).
Fig. 2. Two raw data sets (real part)
Due to the large number of individual scatterers in one antenna footprint, SAR raw data always resembles noise. Only very bright and isolated point-like scatterers yield some structure in the noise; in fact, they yield the SAR sensor's complex-valued point spread function. After the focussing process, which basically consists of a space-variant, phase-preserving two-dimensional matched filter operation, both raw data sets are converted into focussed Single Look Complex (SLC) images, the amplitude of any pixel reflecting the radar brightness and the phase being proportional to the distance (Fig. 3).

Fig. 3. Focussed Single Look Complex SAR images of the Titisee (Schwarzwald) area, raw data by Dornier (absolute values); the right image is rotated and shifted
For non-parallel flight tracks, the images will not necessarily be aligned and in general can be rotated against each other (see Fig. 3, right image).

3.2 Coregistration and Interferogram Formation
The process of coregistering the images eliminates relative shift and rotation of the two SAR images against each other. In a first step, a large number of imagelets in one image is selected, and then correspondences in the other image are searched. This can be done by correlation or by information-theoretical similarity measures (e.g. mutual information). As part of that procedure the relative shifts in two directions between matching imagelets are determined, and from that large number of relative shift vectors a similarity transformation from one image to the other is set up. The result of that transform is a pair of coregistered images that match pixelwise (Fig. 4). Usually coregistering the images implies subsampling and interpolation techniques applied to the complex data.
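The imagelet matching step described above can be illustrated with a frequency-domain cross-correlation. This sketch is ours, not the processing chain of the text: it recovers only the integer-pixel shift of one imagelet relative to another, and omits subpixel refinement and the subsequent similarity-transform fit.

```python
import numpy as np

def shift_estimate(master, slave):
    """Estimate the integer-pixel shift of `slave` relative to `master`
    from the peak of their circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(slave) * np.fft.fft2(master).conj())
    idx = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # map the circular peak position to signed shifts
    return tuple(int(i) if i <= n // 2 else int(i) - n
                 for i, n in zip(idx, xcorr.shape))
```

The autocorrelation of a non-constant imagelet peaks at zero lag, so the cross-correlation of a shifted copy peaks exactly at the applied shift.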
Fig. 4. Coregistered Images
After coregistering the images, the complex interferogram is formed by pixelwise multiplying one image with the complex conjugate of the other image (cf. equation (1)). The result is shown in Fig. 5. From the complex interferogram the phase image (Fig. 6) is readily found by calculating the arctan of the ratio of imaginary and real part of any pixel.
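Interferogram formation and phase extraction as just described are essentially one-liners; a sketch with our own function names follows (NumPy's `angle` implements the required four-quadrant arctangent):

```python
import numpy as np

def interferogram(s1, s2):
    """Pixelwise complex interferogram of two coregistered SLC images,
    equation (1): I = conj(s1) * s2."""
    return s1.conj() * s2

def phase_image(ifg):
    """Wrapped phase in (-pi, pi] - the 'fringe image' - obtained with the
    four-quadrant arctangent of imaginary over real part."""
    return np.angle(ifg)
```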
Fig. 5. Complex Interferogram
Fig. 6. Phase Image (modulo 2π)
Due to the arctan operation all pixel phases are wrapped into the ambiguity interval (−π, π], which is denoted as the wrap-around effect. The phase jumps, called fringes, now become clearly visible, introducing the term fringe image. Note in Fig. 6 the 'flat earth' phase contribution, which introduces a phase ramp of constant slope and is visible as phase fringes that are almost parallel to the azimuth flight track.

3.3 Interferogram Filtering, Phase Unwrapping, Phase to Height Conversion
For the phase-to-height conversion the phase image must be unwrapped which, due to the implicit phase noise stemming from lacking coherence, is not a trivial task. The result is shown in Fig. 7. Additionally, reference points (e.g. corner cubes or landmarks with known heights) are utilized to determine the absolute phase offset of the whole scene. From the unwrapped phase the geometric height of any pixel can be determined by means of equation (6) for the airborne case (Fig. 8). In spaceborne SAR interferometry, height determination is usually done in WGS-84 coordinates using vectorial calculus.
Fig. 7. Unwrapped Phase Image
Fig. 8. Slant Range Height Image
Due to phase noise the raw height image usually appears quite noisy, containing spikes and arbitrarily wrong individual pixel heights. These effects raise the need for filtering the phase images or for employing noise-eliminating phase unwrapping approaches, to be addressed later.

3.4 Geometrical Transformations - Orthoprojection
The height image is still in slant range azimuth coordinates. Hence it must be converted to ground range azimuth coordinates. In the airborne case the orthoprojection process essentially makes use of Pythagoras' theorem (Fig. 9); in the spaceborne case the mapping is performed onto some ellipsoidal reference plane. Finally, the orthoprojected height image can be displayed in pseudo-3D representation (Fig. 10) or as a red-green anaglyph. Provided that absolute GPS-track references are available, the image can be mapped in a geocoding process to any geodetical coordinate frame (e.g. Gauss-Krüger).
Fig. 9. Orthoprojected Height Image
Fig. 10. Orthoprojected Image
4 Crucial Issue - Phase Unwrapping

Phase unwrapping is an extremely critical operational issue. A very basic approach to phase unwrapping consists of first calculating finite phase differences (phase slopes), removing the 2π phase jumps by a modulo operation, and integrating the finite phase differences again [19]. It has been shown, for example in [8, 24], that the complexity of phase unwrapping essentially depends on the degree of coherence and on the phase slope. The coherence is a quality measure depending on mission design (geometric and temporal baseline, wavelength, mean incidence angle) and sensor design, and can only be improved by filtering out the uncorrelated noise stemming from disjoint parts of the power spectral densities of the complex SAR images (cf. [8]). This reduction of phase noise comes at the cost of decreasing the geometrical (slant range/azimuth) resolution. Further filtering (weighted spatial averaging) to reduce the noisiness of the phases can be applied to the complex interferogram (denoted as Multi Looking), again improving phase resolution at the cost of slant range/azimuth resolution. The phase slope is usually reduced by phase image flattening procedures, which first extract the flat earth contribution or demodulate the phase image with some nominal or coarsely known height image. After that process the residual phase image is unwrapped, and after unwrapping the previously extracted phase offsets are superimposed again.

Kalman filter based phase unwrapping interprets the in-phase and quadrature components of the complex interferogram as noisy nonlinear observations of the true unambiguous phase. This approach is followed and described in [20-24] and needs neither prefiltering nor phase slope flattening. Fig. 11 shows an undisturbed, unambiguous fractal phase image. From that phase image a complex interferogram was formed with superimposed complex noise of 10 dB. From that noisy interferogram the interferometric phase image in Fig. 12 was generated by an arctan operation. While classical phase unwrappers would try to unwrap that phase image, the Kalman filter directly processes the complex interferogram values and estimates the unambiguous phase image from the complex interferogram (Fig. 13). Rewrapping that result (Fig. 14), it becomes obvious that the Kalman filter eliminated the noise almost completely without, however, smoothing away the tiny details of the phase image. It has furthermore been shown in [23] that the Kalman filter approach maintains the fractal dimension of the phase image.
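The basic difference-wrap-integrate unwrapper mentioned at the start of this section can be sketched in one dimension as follows. This is an illustrative sketch only: it assumes phase slopes below π per sample and, unlike the Kalman filter approach, breaks down quickly at low coherence.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Basic 1-D phase unwrapping (cf. [19]): wrap the finite phase
    differences back into [-pi, pi) and integrate them again."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # remove the 2*pi jumps
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))
```

A smooth ramp wrapped into (−π, π] is recovered exactly as long as its slope stays below π per sample.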
Fig. 11. Unambiguous fractal phase image
Fig. 12. Noisy (wrapped around) phase image (coherence equivalent SNR=10 dB)
Fig. 13. Kalman Filter based Phase Unwrapping Result
Fig. 14. Rewrapped Phase Unwrapping Result
5 Recent and Future Interferometric Missions

While most of the airborne interferometric missions have been single or one pass interferometric missions, all of the satellite-based missions in the past have been two or repeat pass missions. It must be emphasized that the real breakthrough in SAR interferometry was achieved through the European ERS-1 satellite and its follow-on, ERS-2. The satellite orbit was determined with dm and cm accuracy; the baseline control was very good and many orbit pairs met the baseline conditions for repeat-pass interferometry. The ERS-2 SAR is identical to that of ERS-1. The satellite was launched in 1995 and has the same orbit parameters as ERS-1. Most important from the SAR interferometry point of view was the TANDEM mission [25], during which ERS-1 and ERS-2 were operated in parallel. ERS-2 followed ERS-1 on the same orbit at a 35 min delay. Together with the Earth's rotation, this orbit scenario assured that ERS-1 and ERS-2 imaged the same areas at the same look angle at a 1 day time lag. The orbits were deliberately tuned slightly out of phase such that a baseline of some 100 m allowed for cross-track interferometry. This virtual baseline between ERS-1 and ERS-2 could be kept very stable, because both satellites were affected by similar disturbing forces. The first of several TANDEM missions was executed in May 1996. Despite all the excellent scientific results obtained with ERS data, it should be kept in mind that the instrument had been designed for oceanographic imaging and hence used a very steep incidence angle of 23°. Accordingly, terrain slopes higher than about 20° could not be mapped.

5.1 Shuttle Radar Topography Mission
Based on the extremely successful Shuttle Imaging Radar SIR-C/X-SAR missions (see, e.g., the special SIR-C/X-SAR issue of IEEE Transactions on Geoscience and Remote Sensing 33 (4), 1995), the Shuttle Radar Topography Mission SRTM [27] was launched in February 2000, acquiring topographic mapping of the entire land mass within ±60° latitude during an 11-day flight. This first spaceborne single pass/dual-antenna across-track interferometer reused the existing SIR-C/X-SAR hardware, augmented by a second set of receive antennas for the C- and X-band SARs mounted at the tip of a 60 m boom, which extended from the cargo bay of the shuttle (Fig. 15). The mission was intended to combine the stability of orbital SAR platforms with the advantages of single pass interferometric imaging, thus eliminating all phase noise influences from temporal decorrelation. The German X-band data was intended to provide DEMs of about 6 m height accuracy and 25 m posting accuracy [28], imaging about 70% of the area covered by the C-band interferometer. Despite the tremendous success the mission finally turned into, one of the most serious problems during mission operation turned out to be the mast oscillations due to failing mast dampers.
Fig. 15. Shuttle Radar Topography Mission (image: DLR)
Fig. 16 shows oscillation amplitudes of several cm in each direction. As a drawback of those attitude instabilities, baseline length and baseline orientation angle turned out to be massively time-varying, introducing height errors of some 70 m without compensation. In order to cope with these effects, the phase-to-height conversion had to use time-varying baseline parameters. A lot of work has been performed to estimate these parameters over ocean and then to propagate the baseline parameters over land by dynamic models employing Kalman filtering techniques [29-34]. Fig. 17 shows a result of estimating the baseline length over an ocean track and then propagating the baseline over land with a Kalman filter.

Fig. 16. Origin of the outboard coordinate frame in inboard coordinates (x- and z-components in m)
A more detailed description can be found in [34]. A nice compilation of recent results can be found in [35].
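The predict-over-land / update-over-ocean idea can be illustrated with a toy scalar Kalman filter for the baseline length. This is our own simplified sketch (random-walk state model, scalar observation, invented parameter names), not the SRTM processing chain described in [29-34].

```python
import numpy as np

def kalman_baseline(meas, valid, q=1e-6, r_var=1e-4, b0=60.0):
    """Illustrative scalar Kalman filter for the baseline length.

    meas  : noisy baseline observations, used only where valid is True
            (e.g. over the ocean calibration track)
    valid : boolean mask; where False (over land) the filter predicts only.
    State model: b_k = b_{k-1} + w,  w ~ N(0, q)   (random walk)
    Observation: z_k = b_k + v,      v ~ N(0, r_var)
    """
    b, p = b0, 1.0
    out = np.empty(len(meas))
    for k, (z, ok) in enumerate(zip(meas, valid)):
        p = p + q                       # predict step
        if ok:                          # update only where data exist
            gain = p / (p + r_var)
            b = b + gain * (z - b)
            p = (1.0 - gain) * p
        out[k] = b
    return out
```

With a random-walk model the prediction over land simply holds the last ocean-calibrated estimate while its uncertainty grows; a realistic model would instead propagate an oscillatory baseline dynamic, as in the references.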
Fig. 17. Baseline length estimates over flight time (baseline estimates determined over the ocean calibration track, baseline estimates predicted over land, and nominal baseline estimates from the mission data base)
5.2 Interferometric Cartwheel
While the main problem in the Shuttle Radar Topography Mission turned out to be the mechanical coupling between the outboard antenna and the shuttle, giving rise to oscillations, a new generation of spaceborne interferometers aims at achieving highly stable interferometric constellations without any mechanical coupling. The basic idea is to use the implicit short- and medium-term stability of a group or cluster of satellites orbiting on identical orbits with slightly detuned orbit parameters. One of the most prominent missions developed in this context is CNES's Interferometric Cartwheel [36-38]. Another well known example is DLR's Interferometric Pendulum [39]. By employing a set of passive receive-only satellites, cost reductions may be realized, the transmit antenna and high-power electronics with the corresponding power supply being among the main cost drivers in SAR satellites. Using only passive receivers, such missions need to employ active SAR satellites, such as Envisat or TerraSAR-X, as illuminators. The Interferometric Cartwheel consists of a group of N passive (receive only) satellites (e.g. N=3). All satellites are copositioned in the same orbital plane; they move on identical (same semi-major axes) but slightly eccentric orbits (eccentricity
A_{+1}^j(x, y, d_0, t_j) \approx \alpha_j\, R_j(x,y)\, \exp\!\left[ -i\pi\lambda d_0 \left( u_j^2 + v_j^2 \right) \right] \times A(x - \lambda u_j d_0,\; y - \lambda v_j d_0,\; t_j)    (3)

Note that A_{+1}^{R1} and A_{+1}^{R2} are multiplexed in the reconstruction field because of the appropriate choice of the spatial frequencies {u_j, v_j}. When the two holograms are demultiplexed according to the method proposed in [3], each phase term can be extracted and is expressed as

\psi_j(x,y) = 2\pi\left( u_j x + v_j y \right) - \pi\lambda d_0\left( u_j^2 + v_j^2 \right) + \psi_0(x, y, t_j)    (4)
When the object is in a static state we have ψ_0(x,y,t_2) = ψ_0(x,y,t_1). When the object is under a dynamic excitation, we get ψ_0(x,y,t_2) = ψ_0(x,y,t_1) + Δφ(t_1,t_2). So the phase difference Δψ = ψ_2 − ψ_1 includes the time-varying phase change Δφ. Since in a static state we have Δψ_S = 2π((u_2−u_1)x + (v_2−v_1)y) − πλd_0(u_2²−u_1²+v_2²−v_1²), the phase difference includes a phase bias term. This term hides the useful information and thus must be removed. When Δψ_S is removed, the phase difference in a dynamic state is simply Δψ_D = Δφ(t_1,t_2). Thus spatio-temporal encoding makes it possible to study transient deformations using low-frame-rate cameras, the temporal resolution being limited only by the time difference between the two laser pulses.
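The removal of the static bias term Δψ_S can be sketched as the subtraction of a static-state reference acquisition followed by rewrapping. This is an illustrative sketch under our own naming, assuming such a reference pair is available; it is not the authors' processing code.

```python
import numpy as np

def dynamic_phase(psi1, psi2, psi1_static, psi2_static):
    """Time-varying phase change between the two multiplexed holograms.

    The static-state difference (the bias term Delta-psi_S, which contains
    the carrier spatial frequencies {u_j, v_j}) is removed by subtracting a
    reference pair acquired with the object at rest; the result is wrapped
    back into (-pi, pi].
    """
    dpsi = (psi2 - psi1) - (psi2_static - psi1_static)
    return np.angle(np.exp(1j * dpsi))   # rewrap to (-pi, pi]
```

For a constant imposed phase change Δφ riding on arbitrary linear carriers, the subtraction cancels the carriers exactly and returns Δφ.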
3 Experimental set-up
We used an interferometric set-up with two different paths for the reference wave, described in figure 1. The laser is a frequency-doubled (2ω) Nd:YAG pulsed laser (20 ns, 15 mJ). The mixing of the four waves on the CCD area produces the addition of two spatio-temporally shifted holograms of the object. The path switching is performed by polarization switching in a secondary Mach-Zehnder architecture. This is done using a high-speed Pockels cell, applying two different voltages so that it acts as a half-wave plate. Such a device is generally used for laser pulse generation in pulsed lasers. Each path of this secondary interferometer includes a telescopic system whose last lens is translated perpendicularly to its optical axis in order to produce suitable spatial frequencies for each reference wave.
Hybrid Measurement Technologies
475
The reference wave propagates along one of the two possible paths indicated in figure 1. The path is determined by the polarization of the reference wave before polarizing beam splitter no. 2: if the reference wave is s-polarized, it follows path no. 1; if it is p-polarized, it follows path no. 2. The switching between the two paths is produced by the Pockels cell placed just before polarizing beam splitter no. 2.
Fig. 1. Experimental set-up for dual space-time encoding
As pointed out, off-axis holographic recording is achieved using lens L2 on path no. 1 and lens L3 on path no. 2. The lenses are displaced off the afocal axis by means of two micrometric transducers to adjust the values of the spatial frequencies {u_j, v_j}. The lenses are adjusted such that there is no overlap between the five diffracted orders when the field is numerically reconstructed [3]. The digital holograms are reconstructed using a discrete version of the Fresnel transform programmed in Matlab 5.3 [5]. De-multiplexing is the step that determines the point-to-point relation between the two holograms; it is achieved according to [3]. The detector is a 12-bit digital CCD (PCO PixelFly) with 1024×1360 pixels, each of size p_x = p_y = 4.65 µm. The camera is driven by the CamWare software via a PCI acquisition board.
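The reconstruction itself amounts to a discrete Fresnel transform. A compact single-FFT sketch in Python/NumPy (not the authors' Matlab implementation; normalization factors are omitted) could look like this:

```python
import numpy as np

def fresnel_reconstruct(hologram, lam, d0, px, py):
    """Discrete Fresnel transform (single-FFT form): multiply the hologram
    by a quadratic phase factor and take a 2-D FFT. Sketch only; constant
    amplitude/phase prefactors of the Fresnel integral are dropped."""
    ny, nx = hologram.shape
    x = (np.arange(nx) - nx/2)[None, :] * px
    y = (np.arange(ny) - ny/2)[:, None] * py
    chirp = np.exp(1j*np.pi/(lam*d0) * (x**2 + y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

# Example call with the paper's pixel pitch and distance, random test data
field = fresnel_reconstruct(np.random.rand(256, 256), 532e-9, 1.4, 4.65e-6, 4.65e-6)
```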
4 Experimental results
We applied the measurement principle to a loudspeaker 40 mm in diameter placed 1400 mm in front of the CCD area. The laser delivered pulses at a rate of 50 Hz, so two pulses were selected with a time difference t_2 − t_1 = 20 ms. Note that the pulse delay is limited by the flash pump rate and can be reduced considerably by using a twin Nd:YAG laser or a double-pulse ruby laser. The CCD exposure was set to 30 ms. In full-frame mode, the CCD can acquire 7 frames/s. Thus the detector has a very low frame rate and the temporal resolution is only 1/7 s ≈ 142 ms; the two holograms generated by the two laser pulses could not have been recorded in two consecutive frames.
Fig. 2. Reconstructed spatio-temporally multiplexed holograms of the loudspeaker
Thus, the proposed method appears to be well adapted to the current state of the art in detector technology; its limitation is due only to the laser source. The path switching was performed before the second laser pulse was fired. Figure 2 shows the multiplexed holograms of the loudspeaker.
Figure 3 shows the phase difference Δψ_S (modulo 2π) when the loudspeaker is in a static state. This term has such high spatial frequencies that the 2π phase jumps cannot be observed with the naked eye. Computing the phase bias term allows its modulo-2π subtraction from Δψ_S, as shown in figure 4. After removal of the phase bias term, the phase change is uniform. This result is consistent with the fact that the loudspeaker is in a static state and thus does not produce any temporal phase change. The result shown in figure 4 therefore validates the measurement principle.
Fig. 3. Static phase difference Δψ_S
Fig. 4. Static phase difference after removal of the phase bias term
The loudspeaker was sinusoidally excited at a frequency of 2855 Hz, so the pulse delay corresponds to a phase shift of π/5 rad of the sinusoidal excitation. The parameters for illumination and recording were the same as in the static state.
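The π/5 figure follows directly from the excitation frequency and the pulse separation; a one-line check:

```python
import numpy as np

f_exc = 2855.0      # excitation frequency (Hz)
dt = 20e-3          # pulse separation t2 - t1 (s)

# Phase advance of the excitation between the two pulses, wrapped to
# [0, 2*pi): 2855 Hz * 20 ms = 57.1 cycles -> 0.1 cycle -> pi/5 rad
phase = (2*np.pi*f_exc*dt) % (2*np.pi)
```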
Figure 5 shows the wrapped phase change Δψ_D = Δφ(t_1,t_2), extracted after removal of the phase bias term. The deformation of the loudspeaker membrane between times t_1 and t_2 can be clearly seen; it is indicated by the 2π phase jumps.
Fig. 5. Vibration phase difference after removal of the phase bias term
Figure 6 shows the unwrapped version of the phase map of figure 5.
Fig. 6. Unwrapped phase map of figure 5
5 Conclusion
This paper has presented a new method for stationary or transient deformation analysis. The method requires compensation of a phase bias term in order to retrieve the phase change induced by the object deformation. The validation was performed successfully using a static state of the object, and the potential of the method was demonstrated through the application to a loudspeaker. This new technique could be applied in the future to full-field optical metrology of fast or very fast transient phenomena for which the frame rate of current solid-state detectors is too low.
6 References
1. Schnars, U, Jüptner, W (1994) Direct recording of holograms by a CCD target and numerical reconstruction. Applied Optics 33:179-181
2. Cuche, E, Bevilacqua, F, Depeursinge, C (1999) Digital holography for quantitative phase contrast imaging. Optics Letters 24:291-293
3. Picart, P, Moisson, E, Mounier, D (2003) Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms. Applied Optics 42:1947-1957
4. Doval, AF, Trillo, C, Cernadas, D, Dorrio, BV, Lopez, C, Fernandez, JL, Perez-Amor, M (2000) Measuring amplitude and phase of vibration with double exposure stroboscopic TV holography. In: Jacquot, P, Fournier, JM (eds) Interferometry in Speckle Light - Theory and Applications, 25-28 September 2000, Lausanne, Switzerland. Springer, Berlin:281-288
5. Picart, P, Leval, J, Mounier, D, Gougeon, S (2005) Some opportunities for vibration analysis with time-averaging in digital Fresnel holography. Applied Optics 44:337-343
6. Moore, AJ, Duncan, DP, Barton, JS, Jones, JDC (1999) Transient deformation measurement with electronic speckle pattern interferometry and a high speed camera. Applied Optics 38:1159-1162
7. Farrant, DI, Kaufmann, GH, Petzing, JN, Tyrer, JR, Oreb, BF, Kerr, D (1998) Measurement of transient deformation with dual-pulse addition electronic speckle pattern interferometry. Applied Optics 37:7259-7267
8. Pedrini, G, Froning, PH, Fessler, H, Tiziani, HJ (1997) Transient vibration measurements using multi-pulse digital holography. Optics and Laser Technology 29:505-511
High resolution optical reconstruction of digital holograms
Günther Wernicke, Matthias Dürr, Hartmut Gruber, Andreas Hermerschmidt*, Sven Krüger*, Andreas Langner
Humboldt University Berlin, Institute of Physics, Newtonstrasse 15, 12489 Berlin, Germany
* Holoeye Photonics AG, Einsteinstrasse 14, 12489 Berlin, Germany
1 Introduction
Electro-optical effects in liquid crystal displays make them suitable for amplitude and phase modulation of coherent wave fronts, so that they can be used as programmable diffractive elements. The devices are addressed by a digital signal at video frame rate; therefore they can act as adaptive optical elements, digital-optical interfaces or digital-analog interfaces. In practical applications, the important parameters of these devices are the available phase shift, the light efficiency, and the space-bandwidth product. The improved parameters of recent LC displays make them usable for digital holography as well as for many other coherent optical applications. The requirements for such usage can be summarized as follows:
- Small (square) pixels, high pixel number, high dynamic range (8 bit - 12 bit)
- High fill factor, high transmission and reflectivity
- Uncoupled amplitude or phase modulation, high contrast, phase modulation above 2π
- No flicker and cross talk, exact pixel-to-pixel addressing
- Flat panel surface (no wave front distortions)
- Homogeneous thickness of the liquid crystal layer
- Analog addressing (no time-sequential or field-sequential addressing)
- High frame rate, short response times, standard signal sources
In the Laboratory for Coherence Optics at Humboldt University we investigated the applicability of liquid crystal displays for the optical reconstruction of digital holograms.
In the first part of this paper we give a short overview of the technology of liquid crystal spatial light modulators. In the second part we show results of our investigations of the performance of the displays when illuminated with coherent light, and in the third part we show the application to the optical reconstruction of digital holograms.
2 Spatial light modulators
Spatial light modulators (SLMs) have become very important components in optical systems. Beyond the well-known shutter and display applications, the possibilities of phase modulation are increasingly the subject of current research and development [1,2]. SLMs using two-dimensional arrays of phase-modulating pixels are the basis of many new system proposals in adaptive optics, image processing and optical switching. One challenge is their implementation in diffractive optics in order to realize high-resolution phase functions [3]. With increased resolution and performance they might even compete with micro-lithographically fabricated diffractive elements in some applications. Using the available phase modulation, SLMs can be used in a wide range of application fields such as laser beam splitting and beam shaping for projection and material-processing applications.
2.1 Micro displays: Technologies and developments
Mainly driven by multimedia applications, displays with high resolution, efficiency and contrast have been developed. The increase in pixel number has been overcompensated by the decrease in pixel size, so that the total area of the displays has become smaller in recent years. However, these displays have not been developed for the purpose of realizing high-resolution phase-modulating spatial light modulators. The phase modulation of about 2π at 532 nm caused by the birefringence of the LC material, which we found rather early in liquid crystal display devices like the SONY LCX012BL series and used for the reconstruction of digital holograms [3, 4], can therefore be considered a more or less accidental side-effect. Since that time quite a few research groups have been looking for the right display type for coherent or diffractive applications, and there are now even institutes designing displays for special optical applications. The displays developed so far still lack important properties of a high-resolution two-dimensional phase modulator.
Video and data projectors, as well as rear-projection TVs, now mostly use micro-displays such as LCDs, DMDs and LCoS displays of small pixel size and high resolution. LCD technology has turned from translucent TFT LCDs with pixel sizes down to 15 µm to reflective LCoS displays, which can realize pixels smaller than 10 µm with an enormous fill factor of >90%. The liquid-crystal-on-silicon technology in particular is therefore very promising for delivering displays with high resolution, small pixels, and high light efficiency. The development of smaller pixels for translucent displays has the drawback of a reduced fill factor (i.e. the optically usable fraction of the overall area). For example, the Sony SVGA display LCX016 with a 1.3" panel diagonal and a 32 µm pixel has a fill factor of about 85%. The 0.9" LCX029 XGA display with a pitch of 18 µm but a pixel size of only 12 µm has a fill factor of only around 45%. Moreover, the micro-lens arrays mounted on these displays are more or less distorting because of their influence on the optical wave front.
2.2 Implemented micro display devices
Based on translucent SONY LC displays, the spatial light modulators LC 1004 and LC 2002 were developed by Holoeye Photonics AG. Measurements of the phase modulation and a subsequent adaptation of the driver electronics led to the successful implementation of a dynamic phase-modulating system with an almost linear modulation and a maximum phase shift of π. Such a system is suitable for addressing highly efficient diffractive phase functions. The limiting parameters of the system are determined by physical boundary conditions such as pixel number and size, response time, transmission etc. Some of these limitations can be overcome by devices based on LCoS displays, which have a high optical fill factor, high resolution and very small pixels. Three different LCoS systems with various resolution
Fig. 1. Spatial light modulator LC-R 3000 with WUXGA resolution
and pixel size have been tested for their suitability for phase-modulating SLM systems. The micro display MD800G6 Micromonitor is a high-resolution LCoS display with a 0.5 inch diagonal, 1.44 million dots and a pixel size of 12.55 µm × 12.55 µm. It has 800×600 SVGA resolution and a 91% fill factor. A second system currently in use is an LCoS prototype with XGA resolution and a 19 µm pixel size. Moreover, the Holoeye SLM LC-R 3000, based on a WUXGA-resolution micro display fabricated with LCoS technology, was investigated. The display has a 0.85 inch diagonal and a 9.5 µm pixel pitch and can address the HDTV standard of 1920×1080 with a lower border (120 pixels high) for other data. The fill factor is >92%. It has over 2.3 million pixels and can display up to 256 grey levels. This display device is shown in Fig. 1.
3 Automated system for the measurement of complex modulation
The complex modulation properties are of high importance for the performance of SLM systems, so the dynamic modulation range has to be investigated carefully. Different kinds of measurement systems have been proposed and discussed in the literature [1-3]. For the automated measurement of the complex modulation of micro displays in reflection, we built a comparatively simple measuring system based on a double-slit experiment (Fig. 2).
Fig. 2. Experimental set-up for SLM characterization in reflection
Fig. 3. Experimental result: phase distribution dependent on the gray value (λ = 633 nm)
Fig. 4. Phase change for different wavelengths in the range 355 nm to 780 nm
The LCoS display is addressed with an image consisting of two areas of equal size but different graylevels. By changing one of the graylevels, the mutual phase delay of the two waves created by the two slits can be measured. The interference patterns are recorded with a CCD camera and averaged along the direction perpendicular to the optical table in order to reduce noise. Each interference pattern is thus reduced to a single pixel row in the merged image shown in Fig. 3, where the interference patterns for 256 different graylevels are assembled. In Fig. 3 one can see that for gray values of about 200 the fringe contrast nearly disappears. The reason for the low contrast is the simultaneous change of polarization and phase by the LC molecules. The relation between these quantities can be expressed by the geometrical phase [5]. The dynamical part of the overall phase is caused by differences in the optical path length of two beams; in contrast, the geometrical phase is independent of the dynamical progression of the beam. The theoretical basis for the geometrical phase was developed by Pancharatnam in the 1950s. It can be shown that the phase difference between two single beams is affected not only by the optical path length but also by the different polarization states that each beam passes through. For applications of the SLM system as a phase-modulating element it is not necessary to distinguish between geometrical and dynamical phase, because both parts add up to the overall phase [6]. A publication of these results is in preparation. From the shape of the individual interference patterns, which are proportional to cos(Λx + Φ), the phase difference Φ can be determined as a function of the addressed graylevel, as shown in Fig. 4. These results were
verified for cw- and femtosecond lasers and no significant differences were measured.
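The extraction of Φ from each recorded fringe row can be sketched as a simple lock-in estimate. In the sketch below, `carrier` (the fringe carrier frequency in rad/pixel) and the synthetic test row are assumptions made purely to check the estimator; they are not values from the paper.

```python
import numpy as np

def fringe_phase(row, carrier):
    """Estimate the phase offset of a fringe row ~ cos(carrier*x + phi)
    by correlating with the complex carrier (a lock-in style estimate)."""
    x = np.arange(row.size)
    z = np.sum(row * np.exp(-1j * carrier * x))
    return np.angle(z)

# Synthetic check with a known phase offset
carrier, phi_true = 0.3, 1.2
row = np.cos(carrier*np.arange(1000) + phi_true)
phi_est = fringe_phase(row, carrier)
```

Running this over the 256 rows of the merged image would yield the phase-versus-graylevel curve of Fig. 4.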
4 Application in the optical reconstruction of digital holograms
For the optical reconstruction of digital holograms we set up the experiment shown in Fig. 5. The holograms were recorded with a 12-bit CCD camera (Kappa DX2N, 1382×1032 pixels, pixel size 4.65 µm × 4.65 µm). We used a frequency-doubled Adlas Nd:YAG laser with 140 mW output power. For the reconstruction, the LCoS LC-R 3000 was used, with a gamma curve linearized using the results of the measurements described above. This procedure is comparable to the well-known procedure of bleaching in conventional holography.
Fig. 5. Scheme for the recording and reconstruction of digital holograms
The hologram recorded by the CCD camera is used as the video signal addressed to the LCoS display. The display is illuminated with the laser and a reconstruction of the hologram is obtained. With an additional function of the software used, deformations and displacements of the object are detected: the optical wave fields are subtracted without any intermediate change in the system. This makes it possible to store one hologram and to subtract a second one, or to subtract a live stream of the camera, simulating
the double-exposure and the real-time technique of conventional holography, respectively. By a change in the geometry of the object, e.g. due to displacements, interference fringes become visible. Additionally, there is the possibility to add diffractive distributions to the hologram, realizing an additional lens for focusing the hologram reconstruction or a prism phase to shift the image away from the zeroth order of the reconstruction. The deformation of a thermoelectric device is shown as an example in Fig. 6.
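Adding a lens or prism phase to the addressed hologram is a pointwise multiplication by quadratic and linear phase factors. A hedged sketch (the focal length, tilt and pixel pitch below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def add_lens_and_prism(hologram, lam, f, tilt_x, px):
    """Multiply a hologram by a quadratic lens phase of focal length f
    and a linear prism phase that shifts the reconstruction away from
    the zeroth order. Sketch only; sign conventions may differ."""
    ny, nx = hologram.shape
    x = (np.arange(nx) - nx/2)[None, :] * px
    y = (np.arange(ny) - ny/2)[:, None] * px
    lens  = np.exp(-1j*np.pi/(lam*f) * (x**2 + y**2))
    prism = np.exp(1j*2*np.pi*tilt_x*x/lam)
    return hologram * lens * prism

# Example: 0.5 m lens and 1 mrad tilt on a unit-amplitude test hologram
out = add_lens_and_prism(np.ones((128, 128)), 532e-9, 0.5, 1e-3, 9.5e-6)
```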
Fig. 6. Deformation of a thermoelectric device
Fig. 7. Optical reconstruction of a microscopic digital hologram of a USAF resolution target
The spatial resolution of the optical reconstruction is in the µm range. An optical reconstruction of a microscopic hologram of a resolution target, recorded as a digital hologram by D. Carl [7], is shown in Fig. 7.
5 Conclusion High resolution spatial light modulators provide a new technology option for adaptive optics, which can be used for the realization of dynamic diffractive devices. In this work, we have used the possibilities of modern SLM technology for the optical reconstruction of digital holograms. By this method it is possible to reconstruct an optical wave field and to manipulate it practically in real time. Further investigations will evaluate applications in the field of optical metrology.
6 Acknowledgements The financial support of the German Ministry for Science and Technology under Grant Nr. 13N8096 (Humboldt University Berlin) and 13N8097 (HoloEye Photonics) is gratefully acknowledged.
7 References
1. D. A. Gregory, J. A. Loudin, J. C. Kirsch, E. C. Tam, F. T. S. Yu (1991) Use of the hybrid modulating properties of liquid crystal television. Appl. Opt. 30, 1374-1378
2. K. Ohkubo, J. Ohtsubo (1993) Evaluation of LCTV as spatial light modulator. Opt. Comm. 102, 116-124
3. G. Wernicke, S. Krüger, J. Kamps, H. Gruber, N. Demoli, M. Dürr, S. Teiwes (2004) Application of a liquid crystal spatial light modulator system as dynamic diffractive element and in optical image processing. J. Optical Communications 25, 141-148
4. G. Wernicke, S. Krüger, H. Gruber (2000) New challenges for spatial light modulator systems. New Prospects of Holography and 3D-Metrology - International Berlin Workshop, Strahltechnik-Bremen, Vol. 14, 27-28
5. P. Hariharan, H. Ramachandran, K. A. Suresh, J. Samuel (1997) The Pancharatnam phase as a strictly geometric phase: A demonstration using pure projections. J. Mod. Optics 44, 707-713
6. A. Langner (2004) Untersuchungen an reflektiven Flüssigkristall-Lichtmodulatoren (Investigations of reflective liquid-crystal light modulators). Diploma thesis, Humboldt University Berlin
7. D. Carl, B. Kemper, G. Wernicke, G. von Bally (2004) Parameter-optimized digital holographic microscope for high-resolution living-cell analysis. Applied Optics 43(36), 6536-6544
Application of Interferometry and Electronic Speckle Pattern Interferometry (ESPI) for Measurements on MEMS
J. Engelsberger, E.-H. Nösekabel, M. Steinbichler
Steinbichler Optotechnik GmbH, Am Bauhof 4, D-83115 Neubeuern, Germany
1 Introduction
For many years, interferometric measurement methods have been applied in research and industry for the investigation of the deformation and vibration behaviour of mechanical components. Besides the classical interferometers, electronic speckle pattern interferometers have been introduced to a wide range of applications in many industrial branches. Recently, these techniques have also been introduced to measure the deformations of MEMS, especially under vibration conditions. In the following, some measurements carried out with a modified Michelson interferometer and some carried out with a continuous-wave 3D-ESPI interferometer are described.
2 Measurements with a Michelson Interferometer
For flat and highly reflecting objects like mirrors and micro-mirrors, deformation measurements can be carried out directly with a Michelson interferometer. Usually, such an interferometer is equipped with a CCD camera connected to a fringe processing system. In order to obtain quantitative deformation results with high accuracy, temporal phase shifting is usually applied; with it, measurement resolutions of about λ/50 can be obtained. A common phase-shifting device consists of a piezo crystal and its computer-controlled driver, mounted on the reference mirror of the Michelson interferometer.
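Temporal phase shifting with four π/2 steps reduces to a standard arctangent formula. A minimal sketch with synthetic fringes (fringe contrast and phase ramp are assumed test values):

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Standard four-step phase-shifting formula for shifts 0, pi/2,
    pi, 3*pi/2:  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic fringes I_k = 1 + 0.8*cos(phi + k*pi/2) over a known phase ramp
phi = np.linspace(-1.5, 1.5, 256)
I = [1 + 0.8*np.cos(phi + k*np.pi/2) for k in range(4)]
phi_est = four_step_phase(*I)
```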
Fig. 1. Fringe pattern of a deformed micro mechanical actuator and quantified result presented as pseudo-3D plot
The deformation of a micro mechanical actuator structure as a typical result obtained by such an interferometer is shown in figure 1. The size of the mirror plate is about 2mm x 2mm with a thickness of about 30 µm. The torsion springs measure 2 mm x 30 µm x 30 µm. In figure 1, the left image shows the obtained fringe pattern, the right image shows a pseudo 3D plot of the quantified deformation.
3 Measurements with a 1D-ESPI Interferometer
In most cases, ESPI systems use continuous laser light to illuminate the investigated structure. The output beam of the laser is separated into two beams, the object beam and the reference beam. The object beam travels through a lens to the object and illuminates it. Due to the roughness and shape of the object, the backscattered object light no longer has a homogeneous wave front, but is modulated in a way that is characteristic for the object and shows a granular pattern of bright and dark spots, the "speckle pattern". The reference beam is guided by fibre optics directly to the recording camera. The backscattered light from the object travels to a CCD camera, where it is superposed with the reference beam, with which it creates an interference pattern. For a deformation measurement, the interference pattern coming from the unloaded object describes the "reference state" of the object. When the object is deformed by applying a load to the structure, the light coming from each object point changes its phase, depending on the amount of deformation, whereas the reference beam remains constant.
Fig. 2. Scheme of the basic set-up of a 1D-ESPI interferometer
This modifies the interference pattern on the CCD target for all object points that have moved. This interference pattern, which describes the "deformed state" of the object, is again recorded. The difference of the two recorded interference patterns describes the shape difference between the loaded and unloaded state in the direction of the sensitivity vector. The basic principle of an electronic speckle pattern interferometer is shown in figure 2. If measurements of MEMS are to be carried out with ESPI, the reflecting surfaces have to be treated in such a way that they scatter back the illuminating light. For measurements of sinusoidal vibrations, it is advantageous to introduce stroboscopic illumination such that the duty cycle of the illumination can be shifted in phase with respect to the excitation. By setting the duty cycle of the illumination such that the vibrating object is illuminated during its amplitude maximum, the vibration can be treated like a static deformation.
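The subtraction of the two recorded speckle interferograms can be illustrated with synthetic data. The random speckle phase and the smooth deformation ramp below are assumptions made purely for the demonstration; the point is that dark correlation fringes appear wherever the deformation phase is a multiple of 2π.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic interferograms: random speckle phase, plus a smooth
# deformation phase added in the "loaded" state (columns: 0 .. 4*pi)
n = 256
speckle = rng.uniform(0, 2*np.pi, (n, n))
defo = np.linspace(0, 4*np.pi, n)[None, :] * np.ones((n, 1))

I_ref  = 1 + np.cos(speckle)             # unloaded (reference) state
I_load = 1 + np.cos(speckle + defo)      # loaded (deformed) state

# Subtraction fringes: |I_load - I_ref| = 2|sin(speckle + defo/2)||sin(defo/2)|,
# so the fringe envelope vanishes where defo is a multiple of 2*pi
fringes = np.abs(I_load - I_ref)
```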
Fig. 3. Deformation of a micro-mechanic mirror vibrating at 200 Hz, evaluated results
Figure 3 shows the quantified deformation of a micro-mechanical image mirror vibrating at 200 Hz. The mirror has a size of 10 mm x 6 mm and a thickness of approx. 50 µm. For measurements of eigenfrequencies it is interesting to know, in addition to the frequency and the amplitude, the vibration phase with respect to the excitation. This vibration phase can easily be determined with the light modulator if it is synchronized with the excitation: if the excitation energy of an object vibrating at a natural frequency is kept constant and the duty cycle of the modulator is set to a small value, it is sufficient to shift the duty cycle of the light modulator with respect to the excitation until the obtained vibration amplitude reaches its maximum. The phase difference between the excitation and the illumination then indicates the vibration phase. Figure 4 shows the result of such a measurement: all mirrors of a micro mirror array have been excited at their respective eigenfrequencies, and for each of the mirrors, the phase difference with respect to the excitation has been determined.
4 Measurements with a 3D-ESPI Interferometer It is often interesting to learn about the vibration behaviour of MEMS in all three directions in space. In this case, a 3D-ESPI system can be applied. Although different techniques for the measurement of the 3-dimensional vibration behaviour exist, the easiest set-up uses 3 illumination directions and 1 observation.
Fig. 4. Vibration phases of independent mirrors of a micro mirror array. Each mirror is vibrating in one of its natural frequencies
Fig. 5. Scheme of the basic set-up of a 3D-ESPI interferometer
Consecutive speckle pattern acquisition using all illumination directions leads to three reference state speckle patterns. If the system is equipped with a light modulator, these measurements can also be carried out with sinusoidal vibrating objects. The acquisition procedure is repeated after the object has been loaded to obtain the deformed state speckle patterns. Now, the deformation phase maps can be calculated for each of the illumination directions. As the sensitivity vectors for the three measurements can be defined from the geometrical set-up of the system, it is easy to transform the intermediate phase images to deformation maps representing the deformations along the 3 axes of a Cartesian coordinate system. The scheme of such a set-up is shown in figure 5, the applied system in figure 6.
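The transformation from the three measured phase maps to Cartesian displacements reduces, per pixel, to solving a 3×3 linear system built from the sensitivity vectors. A sketch with assumed (purely illustrative) geometry, verified by a round trip through a known displacement:

```python
import numpy as np

# Assumed unit sensitivity vectors for the three illumination directions
# (one per row); in practice these follow from the geometry of the set-up.
S = np.array([[ 0.5, 0.0, 0.87],
              [-0.5, 0.0, 0.87],
              [ 0.0, 0.5, 0.87]])

def phases_to_displacement(dphi, lam):
    """Solve S @ d = lam/(2*pi) * dphi for the Cartesian displacement d
    at one pixel, given the three measured phase differences dphi."""
    return np.linalg.solve(S, lam/(2*np.pi) * np.asarray(dphi))

# Round trip: a known displacement is recovered from its three phases
lam = 532e-9
d_true = np.array([50e-9, -20e-9, 120e-9])
dphi = 2*np.pi/lam * S @ d_true
d_est = phases_to_displacement(dphi, lam)
```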
Fig. 6. The 3D-ESPI interferometer applied for the measurements
With such a set-up, it is possible to measure the resonant mode shapes, e.g. of silicon resonators, not only in the out-of-plane direction, but also torsional modes, in-plane rotations and in-plane translations, as shown in figures 7 and 8.
Fig. 7. The silicon resonator investigated with 3D-ESPI
Fig. 8. 299 Hz out-of-plane torsion (left), 371 Hz in-plane rotation (middle) and 773 Hz in-plane translation (right) of the silicon resonator
5 Acknowledgments
We would like to thank the Technische Universität Chemnitz, Prof. Dr. Gessner, Dr. Markert and Mr. Kurth, for their kind contribution of test results.
Full-field, real-time, optical metrology for structural integrity diagnostics Jim Trolinger, Vladimir Markov, Jim Kilpatrick MetroLaser, Inc., 2572 White Road, Irvine, CA 92614, USA
1 Introduction In this paper, we explore the relationship between two optical diagnostics techniques: Electronic Digital Holography (EDH) and Laser Doppler Vibrometry (LDV), two methods that can be used to analyze the relative micro-movements of points on surfaces. Both methods produce phase and phase difference maps of lightwaves reflected from a surface, which can be related to the makeup and condition of the underlying structures, as well as the integrity of the entire structure. We have used both extensively to locate and identify defects and to assess structural health. The interpretation of LDV as a real time variation of EDH can offer insights into new ways to improve and deploy the methods by carrying over signal and image processing procedures that have been developed separately for the two. Other investigators have made similar observations [1-3]. A primary motive for our research is to produce advanced sensors for space exploration applications. Such sensors can fulfill a need to equip robots with a wide range of advanced diagnostics capability. We show how robots equipped with EDH sensors could provide investigators with a presence in remote space environments to assess the characteristics and health, and provide microscopic examination, of both manmade and natural objects and structures.
2 Relating EDH and LDV
Digital holography. Digital holography is the division of coherent optics that deals with recording, storing, reconstructing, processing, and synthesizing wavefronts by using discrete values of information distributed over discrete points in space and time. Historical and geographical variations in terminology have led to a wide range of names that can be encompassed under the digital holography umbrella, including electronic speckle pattern interferometry (ESPI), TV holography, electro-optical holography, electronic holography, and LDV. Within a few years of the invention of holography, investigators had begun to examine the possibility of digital holography [4]. For many years, available computers fell far short of competing with analogue methods, and the spatial resolution of electronic detectors was far below that needed to compete with photographic materials. With new megapixel detector arrays; fast, low-cost computers; and new signal processing algorithms, this is changing fast, and electronic digital holography is replacing analogue methods in many applications.
Fig. 1. Recording and reconstructing wavefronts with digital holography. The hologram can be a set of ones and zeros in a computer memory or it can be a specific configuration of an optical device.
EDH is more often used as a tool in applications where the information is processed, analyzed, quantified, and compared, with the goal of viewing the results of the analysis as opposed to the raw wavefront itself. The information can be manipulated entirely inside the computer to do such things as retrieve and manipulate the object wavefront or add and subtract other wavefronts (interferometry). The ability to subtract two wavefronts is unique to electronic holography, since with optical means alone, they can only be added. This fact has been exploited in EDH beneficially and may also have application in LDV. Digital holography comes with all the advantages and disadvantages of digital information processing and handling. In many ways, a digital hologram can be thought of as an “ideal” hologram, because it provides a direct, real time link between optical wavefront information and computers, allowing one to access efficiently the power of both optics and electronics to record, process, and even create optical information. Available framing
detector arrays are about one centimeter square and contain about 2000 sensors on a side, and the associated spatial resolution is at least 10 to 20 times lower than that of even common photographic materials used in holography. So, compared to analogue holograms recorded on photographic materials, the equivalent size of a typical digital hologram is about one millimeter in diameter in terms of information capacity. This must be offset by the other benefits of digital recording. Much of the work, and many of the opportunities for advancement, lie in producing more efficient algorithms and sampling methods that overcome the spatial resolution limits of the recording devices and that compress and use the vast amounts of data that can be generated. Laser Doppler vibrometry. In the simplest form of LDV, a beam of laser light is focused onto a surface, and the scattered light from the surface, which is Doppler shifted by surface movement, is collected and mixed with a reference wave, producing a beat or heterodyne signal whose frequency is the difference of the two wave frequencies. Demodulating this signal retrieves the Doppler frequency, which is proportional to the surface velocity at the illumination point (with geometry factors) and which is also proportional to the rate of change of the phase of the scattered light wave. Integrating this signal provides a measure of surface displacement, which can quite easily be achieved with nanometer resolution. Likewise, this integral provides the phase of the light wave emerging from the surface. Therefore, such an LDV can be thought of as a tiny, continuously varying hologram of a point on a surface, actually an electronic, analogue hologram at this stage. The phase of the scattered wave can be determined with great precision, since the availability of temporal information allows a precise measurement of the relative phase between the object and reference waves (this is also known as heterodyne interferometry).
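The demodulate-then-integrate chain just described can be sketched numerically for a single point. The carrier frequency, vibration amplitude, and moving-average filter below are illustrative assumptions, not values from the text:

```python
import numpy as np

fs, fc, fv = 1.0e6, 40.0e3, 500.0      # sample rate, heterodyne carrier, vibration (Hz)
lam, a = 632.8e-9, 100e-9              # wavelength and vibration amplitude (m)
t = np.arange(0, 0.01, 1/fs)
phi = 4*np.pi*a/lam * np.sin(2*np.pi*fv*t)   # optical phase, double-pass geometry
s = np.cos(2*np.pi*fc*t + phi)               # detected heterodyne (beat) signal

# I/Q demodulation: mix to baseband, low-pass with a moving average whose
# nulls fall on fc and 2*fc, then take the unwrapped phase
bb = np.convolve(s * np.exp(-1j*2*np.pi*fc*t), np.ones(25)/25, mode='same')
phi_rec = np.unwrap(np.angle(bb))
x_rec = phi_rec * lam/(4*np.pi)              # phase -> displacement

x_int = x_rec[1000:-1000]                    # ignore filter edge effects
amp = (x_int.max() - x_int.min())/2
print(f"recovered vibration amplitude: {amp*1e9:.1f} nm")   # ~100 nm
```

The recovered phase excursion converts directly to displacement through λ/4π, illustrating how the LDV signal acts as a continuously updated single-point hologram.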
To produce such a hologram of a line or a finite area on a surface requires going digital, since continuous electronic detectors do not exist. A line or area on a surface can be imaged onto a linear or area detector array, where it is also mixed with a reference wave that floods the detectors, providing the temporally resolved phase of the wave at each imaged point. In practice, the lines, areas, and even the reference wave are digitized for efficiency, so beams are focused to points on a line or matrix. An example of such an LDV system is shown in Figure 2. The outgoing laser beam is split into an array of beams by a holographic optical element (HOE), and these are then focused onto a surface. Similarly, an HOE is used to combine the return signal with individual reference waves. These systems provide the amplitude and phase of the scattered signal from each illuminated point on the surface. Therefore the resulting data can be described as a real time, digital hologram of a line or area of surface, depending upon the detector array used. To capture the precise movement of the whole surface requires that the points be positioned sufficiently close to each other to resolve the spatial variations or modes.
Fig. 2. Linear array (multipoint) laser Doppler vibrometer
Optical scanning holography. A related holography method, in which holograms of a surface are produced point by point, sequentially, by scanning the object beam over the surface, has been demonstrated by Poon in a procedure described as optical scanning holography [5]. In this procedure, an object is scanned by a point source and the reflected light is mixed with a reference wave at a single detector. The small detector size is not a limiting factor, although the time required for scanning does lead to limitations. It would seem beneficial to deploy this method with multiple beams and a detector array to reduce the recording time.
3 Digital hologram recording devices and limitations

Framing arrays like CCD and CMOS require a certain amount of time, typically tens of microseconds, to produce a storable, few-megapixel hologram, and the most affordable systems can record up to about 30 holograms per second, although very high speed systems with reduced numbers of pixels are available at much higher cost. Sensors are typically between 5 and 10 microns in diameter, and a megapixel array is therefore between 5 and 10 millimeters in diameter. Two challenges with these detectors are to deal with the relatively small size of the equivalent hologram and to efficiently handle and exploit the vast amount of data that can be produced in a short period of time. Real time detector arrays, in which each detector continuously reads and transmits temporal data, can, in principle, record holograms continuously and in near real time, and are presently even more limited by available sensor size and number. Available real time arrays are limited to less than 10,000 detectors, each between 50 and 100 microns in diameter. Clearly this resolution limits the application to very special cases where the angle between the object and reference waves is less than one degree and phase changes over the detected area are small. Even so, some of the benefits of such recording could lead to remarkable new applications. For example, this kind of continuous phase detection enables heterodyne measurement that can provide nanometer-scale optical path length sensitivity, at least an order of magnitude better than is possible with static interferograms. The challenges here are in overcoming the spatial resolution limitation of such detectors and devising ways to transfer and store the vast amount of data that would be produced. One group working on such a sensor has approached this problem by employing on-chip processing, so the hologram is formed, analyzed, and used to extract and compress the specific useful data all on the same chip [6,7]. To be useful, holograms must record fringe information with a sufficient spatial resolution to define the details of interest in the object wavefront. To make the digital hologram, this means that the interferogram of the object and reference wave must be sampled sufficiently to capture every fringe whose information content is needed in the wavefront of interest that is being stored.
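The "vast amount of data" can be made concrete with a quick estimate from the figures quoted above; the 8-bit sample depth is an assumption:

```python
# Array size and frame rate quoted in the text; 8-bit depth is an assumption
pixels = 2000 * 2000                     # sensors per frame
rate = 30                                # holograms per second
bytes_per_s = pixels * rate              # at 1 byte per pixel
print(f"{pixels/1e6:.0f} Mpixel/frame -> {bytes_per_s/1e6:.0f} MB/s sustained")
```

Sustained rates of this order (over 100 MB/s) explain why compression and on-chip data reduction are active research topics for digital hologram recording.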
For a static recording, classical sampling methods would require that at least two individual detectors lie on each fringe required to store the object wave of interest. Fringe spacing is given by Λ ≈ λ/θ, where θ is the angle between the object and reference wave and λ is the wavelength. For practical cases with typical detector sizes of 5 to 10 microns in megapixel arrays, this limits the allowed angle between the object and reference wave to a few degrees. Methods exist, and are continuously under development, to circumvent such limitations, and classical methods may not be the optimal way to sample [8,9]. Scanning the object, scanning the detector [10], super resolution, and magnifying the interference fringe pattern to produce arrays of holograms that must be stitched together properly [11] are a few possibilities.
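Under the two-pixels-per-fringe criterion, the angle limit follows directly from the fringe spacing relation Λ ≈ λ/θ. A quick check with an assumed wavelength of 532 nm (the text does not specify one):

```python
import numpy as np

lam = 532e-9                         # assumed wavelength; the text gives none
theta_max = {}
for d in (5e-6, 10e-6):              # typical detector pitches quoted in the text
    # Nyquist criterion: fringe spacing lam/theta must span at least 2 pixels
    theta_max[d] = np.degrees(lam / (2*d))
    print(f"pitch {d*1e6:.0f} um -> max object-reference angle {theta_max[d]:.2f} deg")
```

For 5 to 10 micron pixels this gives roughly 1.5 to 3 degrees, consistent with the "few degrees" limit stated above.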
As an illustration of scanning to extract additional information, consider a specific case of a detector whose diameter, D, is larger than the period, P, of the fringes that we wish to resolve. The fringe pattern is

I = 1 + cos(2πx/P)

Fig. 3. Sampling a fringe pattern by scanning with a detector that is larger than a fringe
We consider a sinusoidal fringe pattern with period, P, and we scan the pixel of diameter D across the fringe pattern in a direction normal to the fringes. At any position, x, the energy collected by the pixel is given by the following integral:

E = ∫_{x−D/2}^{x+D/2} [1 + cos(2πx′/P)] dx′ = D + (P/π) sin(πD/P) cos(2πx/P)

Note that by scanning the detector, we have recovered the cosine wave, even though the pixel itself may be much larger than the cosine period. This is because the information is provided by the edges of the pixel as it is scanned, at the expense of some loss in contrast. The highest contrast occurs when the pixel diameter obeys D = qP/2, where q = 1, 3, 5, … Then:

E = D[1 + (2/qπ) cos(2πx/P)]

The signal contrast is reduced directly by q, and for even values of q, the contrast is zero and the information is lost. Consequently, to totally recover all frequencies would require that the effective pixel size, D, also be varied. This simple example is primarily an illustration that information can be extracted by scanning since, of course, it assumes that the variation is in one dimension, limiting its use in general. Any of these methods comes with the penalty of added information processing. For many applications, a change in the phase map is the quantity of greatest interest, not the absolute wavefronts themselves. Instead of reconstructing two wavefronts and subtracting one from the other, it is possible to reconstruct the difference only, eliminating two operations in the process. In addition, the subtraction procedure enables the removal of constant terms and the cancellation of noise and system optics effects. A unique phase difference algorithm was developed to allow wavefronts to be compared efficiently and accurately [12], and it also helps with noise cancellation and management. By solving for phase differences directly, the number of computer operations can be cut in half. Some of the greatest features of holographic interferometry are even easier to implement electronically and offer additional advantages.
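The contrast behaviour derived above for a scanned over-sized pixel can be verified numerically. This sketch integrates the fringe pattern over the pixel aperture for several values of q (arbitrary units, illustrative only):

```python
import numpy as np

P = 10.0                                   # fringe period, arbitrary units

def pixel_signal(x, D, n=4001):
    """Energy collected by a pixel of width D centred at x (numerical integral)."""
    xp = np.linspace(x - D/2, x + D/2, n)
    return np.mean(1 + np.cos(2*np.pi*xp/P)) * D

contrasts = {}
for q in (1, 2, 3):
    D = q*P/2                              # pixel width in units of half-periods
    E = np.array([pixel_signal(x, D) for x in np.linspace(0, P, 200)])
    contrasts[q] = (E.max() - E.min())/(E.max() + E.min())
    theory = 2/(q*np.pi) if q % 2 else 0.0
    print(f"q={q}: contrast {contrasts[q]:.3f} (theory {theory:.3f})")
```

The numerical contrast matches 2/qπ for odd q and vanishes for even q, confirming that the fringe survives scanning with reduced modulation.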
4 Four Applications

Landmine Detection. Sabatier [13,14] and his group have shown that LDV with acoustic-to-seismic coupling-based technology is a viable tool for locating landmines, especially those that are difficult to sense with other methods, i.e., those made of non-magnetic or non-metallic materials. The ground is vibrated acoustically and the LDV detects anomalies in the surface velocity immediately above the landmine, because the landmine behaves differently than soil in the overall mechanical system. A major remaining problem is to improve coverage and detection speed. This is a motivation for employing multiple beam systems, arrays, and digital holography. We have applied and compared all of these methods. In one field test, the multi-beam LDV [15,16] simultaneously measured the vibration of the ground at 16 points spread over a 1-meter line. It was used in two modes of operation: stop-and-stare and continuously scanning beams. The noise floor of measurements in the continuously scanning mode increased with increasing scanning speed. This increase in the velocity noise floor is caused by dynamic speckles. Either airborne sound or mechanical shakers can be used as a source to excite vibration of the ground. A specially designed loudspeaker array and mechanical shakers were used in the frequency range from 85-2000 Hz to excite vibrations in the ground and elicit resonances in the mine. Field experiments show that buried landmines can be detected within one square meter in several seconds using a system in a continuously scanning mode, with loudspeakers or shakers as the excitation source. Development of a new demodulation technique that can provide a lower noise floor in continuously scanning mode is a high-priority task for future work. This system is driven over an area to produce a scan of a surface. In some ways, these recordings are analogous to the synthetic aperture radar images that led Leith to employ holographic methods in his original work on holography.

Non Destructive Inspection of Space Shuttle Tiles. We have applied and compared multiple beam LDV and digital holography to analyze the structural integrity of space shuttle tiles, which had been purposely preprogrammed with typical defects. Defect detection is achieved by performing a vibrational spectral analysis on the surface as different frequencies are used to excite the test piece. At characteristic frequencies, the defects become apparent. In these tests, all of the programmed defects were located and classified. LDV systems are more sensitive by at least an order of magnitude than those that employ framing types of digital holography, i.e., CCD cameras. This is because the temporal information is used to improve the phase resolution and displacement sensitivity. The obvious disadvantage is the lower spatial resolution.

Vibrational Analysis. In a typical application, a digital holocamera and an LDV were used to analyse the vibrational characteristics of an aircraft component. LDVs similar to the one shown above have been used extensively for this purpose. The digital holocamera that we use employs a frequency doubled YAG laser to illuminate the object. In this case, digital holograms are formed on a 2000x2000 CCD array. Wavefronts are reconstructed through the use of an instantaneous phase shifting interferometry procedure that was reported elsewhere [17], wavefront subtractions are done in the computer, and phase difference maps are displayed.
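Computer-side wavefront subtraction can be sketched as wrapped-phase arithmetic: subtracting two wrapped phase maps and re-wrapping cancels the common static phase and leaves only the change. This is a generic illustration on synthetic data, not the specific algorithm of [12]:

```python
import numpy as np

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:256, 0:256]

# Hypothetical wrapped phase maps of two object states: a random static
# (speckle-like) phase common to both, plus a smooth deformation in state 2
static = rng.uniform(0, 2*np.pi, (256, 256))
change = 0.01*xx + 3*np.exp(-((xx - 128)**2 + (yy - 128)**2)/800.0)
phase1 = np.angle(np.exp(1j*static))
phase2 = np.angle(np.exp(1j*(static + change)))

# Subtract the wrapped phases and re-wrap into (-pi, pi]: the common static
# phase cancels and only the deformation (modulo 2*pi) survives
diff = np.angle(np.exp(1j*(phase2 - phase1)))
err = np.abs(np.angle(np.exp(1j*(diff - change)))).max()
print(f"max deviation from the true change: {err:.1e} rad")
```

Because the static phase cancels exactly, only the deformation fringes remain, which is the behaviour exploited in the phase-difference maps described above.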
The system determines instantaneously (within one vibrational cycle) the displacement difference between the two extreme positions of the blade as it vibrates. These recordings can be produced in the existing system at a rate up to about 20 frames per second, limited by the CCD camera. Commercially available cameras can now support such measurements at kilohertz rates.

Virtual Laboratories - The remote window concept. Digital holograms can be thought of as windows into other spaces and times [18]. If a hologram is produced in real time and the information is transmitted in near real time to another location where a second hologram like the original is synthesized, this second hologram is the optical equivalent of a window into the space where the original hologram is being recorded. Much more than closed-circuit TV, this provides a physical window that looks into someplace else, in a different time and place. The concept, taken to its fullest, has the ultimate promise of providing scientists with "eyes" in space. A robot equipped with "holographic windows or eyes" could provide a presence in space without actually being there. For example, a scientist looking into a holographic window on earth could see the true surface of Mars 150 million miles away just millimeters beyond the window. An experimental chamber of this type, originally developed for deployment on the International Space Station under the NASA SHIVA [19] (Spaceflight Holography Investigation in a Virtual Apparatus) program, is now being examined for its potential application as a planetary instrument. All of the necessary concepts were tested and validated in the SHIVA project [20]. A holocamera of this type was developed in the early seventies by Hughes Research Corporation for deployment on the moon. Difficulties and hardware limitations limited its potential, and it was never deployed. With the improvements in technology since that time, such applications should be reconsidered.
5 Concept for a Holographic Eye for a Robot

Figures 4 and 5 illustrate the concept of a holographic eye for a robot. The eye projects its own illumination to the target, then senses the full 3-D movement of the surface. A user who is located in a remote station can "see" with great precision the full dynamics of the structure in 3-D as fast as the data can be transmitted to the receiver station. This capability can be especially useful in space exploration, where it is critical that sensors provide the most complete information to the investigator.
Fig. 4. Block Diagram for a robotic digital holographic sensor
Fig. 5. Conceptual application of a digital holographic sensor in a robot, i.e., a "holographic robot eye".
6 Conclusions

We have shown how laser Doppler vibrometry can be considered a special case of digital holography. When used in a matrix form, LDV can provide an extremely precise, full 3-D dynamic analysis of a structure, and can provide the user with a presence in a remote location, leading to the capability to make precision measurements of structures and structural changes. Such measurement capability can provide information about the structural characteristics and health of either manmade objects, such as the robot itself, or of natural objects, such as geological formations. By recognizing that multipoint LDV is a special case of digital holography, we can exploit techniques that have been developed for either method to enhance the other.
7 References

1. Kilpatrick JM, Moore AJ, Barton JS, Jones JDC, Reeves M, Buckberry C (2000) Measurement of complex surface deformation by high-speed temporal phase-stepped digital speckle pattern interferometry. Opt Lett 25:1068-1070
2. Ruiz PD, Huntley JM, Shen Y, Coggrave CR, Kaufmann GH (2001) Low-frequency vibration measurement with high-speed phase-shifting speckle pattern interferometry and temporal phase unwrapping. In: Jüptner W, Osten W (eds) Fringe 2001, Elsevier, Paris, pp 247-252
3. Kauffmann J, Tiziani HJ (2002) Temporal speckle pattern interferometry for vibration measurement. Proc SPIE 4827:133-136
4. Huang TS, Prasada B (1966) Considerations on the generation and processing of holograms by digital computers. MIT/RLE Quart Prog Rep 81:199-205
5. Poon TC (2004) Recent progress in optical scanning holography. Journ of Holography and Speckle 1:6-25
6. Pui BH, Hayes-Gill BR, Clark M, Somekh M, See CW, Pieri JF, Morgan S, Ng A (2001) Optical VLSI processor fabricated via a standard CMOS process. Proc SPIE 4408:73-80
7. Pui BH, Hayes-Gill B, Clark M, Somekh MG, See CW, Pieri JF, Morgan SP, Ng A (2002) The design and characterisation of an optical VLSI processor for real time centroid detection. Analog Integrated Circuits and Signal Processing 32:67-75
8. Onural L, Ozaktas H (2005) Diffraction and holography from a signal processing perspective. Proc Int Conf on Holography, Optical Recording, and Processing of Information, Varna, Bulgaria
9. Onural L (2000) Sampling of the diffraction field. Applied Optics 39:5929-5935
10. Trolinger J, Markov V, Khizhnyak A (2005) Applications, challenges, and approaches for electronic, digital holography. Proc Int Conf on Holography, Optical Recording, and Processing of Information, Varna, Bulgaria
11. Gyimesi F, Borbely V, Raczkevi B, Fuzessy Z (2004) A speckle based photo stitching in holographic interferometry for measuring range extension. Journ of Holography and Speckle 1:39-45
12. Vikram CS, Witherow WK, Trolinger JD (1993) Algorithm for phase-difference measurement in phase-shifting interferometry. Applied Optics 32:6250-6252
13. Sabatier JM, Xiang N (1999) Laser-Doppler based acoustic-to-seismic detection of buried mines. Proc SPIE 3710:215-222
14. Xiang N, Sabatier JM (2000) Land mine detection measurements using acoustic-to-seismic coupling. Proc SPIE 4038:645-655
15. Lal AK, Zhang H, Aranchuk V, Hurtado E, Hess CF, Burgett RD, Sabatier JM (2003) Multi-beam LDV system for buried landmine detection. Proc SPIE 5089:579-590
16. Aranchuk S, Sabatier JM, Lal AK, Hess CF, Burgett RD, O'Neill M (2005) Multi-beam laser Doppler vibrometry for acoustic landmine detection using airborne and mechanically-coupled vibration. Proc SPIE 5794:624-632
17. Brock NJ, Millerd JE, Trolinger JD (1999) A simple real-time interferometer for quantitative flow visualization. AIAA Paper 99-0770, 37th Aerospace Sciences Meeting, Reno, NV
18. Trolinger JD, L'Esperance D (2004) Digital holography in real and virtual research - a window into remote places. SPIE Holography Newsletter, November
19. Trolinger JD, L'Esperance D, Rangel RH, Coimbra CFM, Witherow WK (2004) Design and preparation of a particle dynamics space flight experiment, SHIVA. Annals of the New York Academy of Sciences 1027:550-566
20. L'Esperance D, Coimbra CFM, Trolinger JD, Rangel RH (2005) Experimental verification of fractional history effects on the viscous dynamics of small spherical particles. Experiments in Fluids
Invited Paper
3D Micro Technology: Challenges for Optical Metrology
Theodore Doll, Peter Detemple, Stefan Kunz, Thomas Klotzbücher Institut für Mikrotechnik Mainz GmbH Carl-Zeiss Str. 18-20, 55129 Mainz Germany
1 Introduction

Production control and specification verification are two major demands in the successful industrial use of micro systems and have thus been brought into the focus of several research calls by the CEC [1] and other national authorities. The reason is that the tolerances typical of micro systems do not scale with the shrinking dimensions of their parts, even though three-dimensional assembly, especially of heterogeneous hybrid systems combining plastic, metal and electronic components, is naturally the best way of achieving low-cost mass products. That these components originate from precision engineering and from semiconductor technology indicates that micro systems metrology will always be seen from two different viewpoints of testing: 100% validation of serially manufactured parts on the one hand and statistical control of batch-parallel processes on the other. The first aspect requires online control or fast inspection, whilst the other may employ destructive inspection methods. As several technologies merge into hybrids, new multiparameter and multitool testing beds are being discussed; however, optical inspection remains the fastest and often the most cost-effective approach to dimensional metrology, despite the micro dimensions coming close to the wavelengths in use. Where these limitations lie and which measurements are desired are exemplified here from an application point of view.
2 Technologies

Historically, the so-called LIGA technique was the first important approach for the realization of non-silicon micro parts with high aspect ratio and structural dimensions from the µm up to the mm range. Combining deep X-ray lithography as a mastering step with subsequent electroforming for the fabrication of ultra-precise mould inserts, the LIGA technique allows the manufacture of high-quality micro components and micro structured parts, in particular from plastics but also from ceramics and metals, on a large scale. In recent years, different competing variants of this technique have been established which are mainly based on UV lithography and employ, e.g., the processing of thick photoresists such as SU-8 or methods of deep reactive ion etching of silicon such as the so-called ASE™ (Advanced Silicon Etch) process. These processes are also suitable for direct integration of electronics for the realization of advanced MEMS devices. Due to recent advances in mechanical ultra precision machining, laser micro machining and micro electro discharge machining, these techniques today also allow the realization of structural dimensions that were previously only accessible by lithography-based processes.
3 Measurement Challenges

Usually technologies come with their own specialized measurement setups and strategies; heterogeneous hybrid systems, however, need one approach that does it all. SEM pictures, as provided here, give an idea, but their metrological validity is still under research [3].

3.1 Three Dimensional Geometries

Precision machined micro holes are used for professional ink-jet printing heads, spinnerets and injection nozzles. Typical diameters range from 300 µm down to some tens of µm. Figure 1 shows a study for hydrogen gas turbines which employed laser drilling, sink erosion and wire erosion. Major demands are roundness and diameter tolerances, both ranging below 1 µm. Others are inside wall roughness and debris, deviations from cylinder geometry and circumferential burr [2]. SEM inspection, as seen in the inset, provides some idea; however, for industrial production and quality control, standard measuring methods and parameters are lacking.
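Sub-micron roundness of the kind demanded here is commonly evaluated by fitting a reference circle to a measured profile and taking the peak-to-valley radial deviation. The sketch below uses a least-squares (Kasa) circle fit on synthetic data with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hole profile: nominal radius 150 um, three-lobed form error
# of 0.3 um peak-to-valley plus measurement noise (all values illustrative)
theta = np.linspace(0, 2*np.pi, 360, endpoint=False)
r = 150.0 + 0.15*np.cos(3*theta) + 0.02*rng.standard_normal(theta.size)
x, y = r*np.cos(theta), r*np.sin(theta)

# Kasa least-squares circle fit: x^2 + y^2 = 2ax + 2by + c, with R^2 = c + a^2 + b^2
A = np.column_stack([2*x, 2*y, np.ones_like(x)])
a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
R = np.sqrt(c + a**2 + b**2)

dev = np.hypot(x - a, y - b) - R        # radial deviation from the fitted circle
roundness = dev.max() - dev.min()       # peak-to-valley roundness value
print(f"fitted radius {R:.2f} um, roundness {roundness:.2f} um")
```

Standards also define minimum-zone and minimum-circumscribed reference circles; the least-squares fit is simply the easiest to compute and illustrates the principle.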
Fig. 1. Hole machined in 0.5 mm V4A steel by laser drilling of starting holes, sink EDM of thread cones (inserts) and subsequent wire EDM refinement. SEM inspection indicates a 0.5 µm roundness tolerance; however, internal cylindricity, roughness and creep remain undetermined.
Other examples of 3D parameters in micro systems are given in figure 2. On the left, a miniature x,y,z biomedical sensor is shown, fabricated from solid single crystalline silicon by ASE™, in which the "joystick" protrudes from the sensor surface. The processing damage at the outer rim does not interfere with applicability. Critical parameters are, however, the precise thickness control of the suspension at the bottom and, for tactile probe applications, low tolerances of the shaft diameter. The SEM inspection suggests some shrinkage down the shaft [4].
Fig. 2. Three dimensional micro systems with demanding production requirements. Left: The silicon x,y,z (joystick) sensor needs proper bottom thickness and shaft diameter control. Right: The polymer micro gear box requires checking of complex geometries, interlacing and moulding shrinkage from LIGA processing.
The right photograph shows a micro gearbox subassembly made by polymer moulding from LIGA machined inserts [5]. Whilst the gearbox still works, problems with polymer shrinkage of the individual parts are obvious.

3.2 Sidewalls

As with the x,y,z sensor above, sidewall control is also important for setting up a process for hot embossing. Figure 3, left, shows a silicon etched structure that undergoes electroplating replication and finally ends up as PMMA micro channels for massively parallel capillary electrophoresis (right). Rigorously steep flanks are a must and, for high uniformity of the parallel ion drift, the sidewalls need uniform roughness over a macroscopic chip of ID-card dimensions [6].
Fig. 3. Mold insert fabrication for fluidic capillary electrophoresis polymer chips. Left: Silicon master structure made by ASE™, turned into nickel structures by electroplating. Right: PMMA channels replicated by hot embossing. Processing quality strongly depends on the etching steepness and sidewall roughness for successful mold separation.
Such smooth flanks are not a standard result of ASE processes. Due to its iterative etching and passivation scheme, sidewall ripple is inherent to this technology. Figure 4 highlights the problem and the high sidewall quality achieved by a post-process developed at IMM. The surface would appear as if defocused if there were no remaining sharp pits in the sidewall. Up to now, no roughness measurement of either the original or the smoothed sidewall could be realized.
Fig. 4. Sidewall roughness smoothing of ASE™ silicon structures before (right & insert) and after two steps of oxidation and HF etching (left). Destructive SEM inspection indicates substantial improvement; however, no quantitative access is known for easy process control.
Fig. 5. Holey silicon devices for gas diffusive membranes and electron optical elements (inset). Note the bottom curvature that influences the active areas, as well as the variance in cylindricity, burr and sidewall roughness.
3.3 Bottom Curvature
All deep reactive ion etching processes produce some bottom curvature. This becomes critical if membrane or opening areas directly influence the device performance. Figure 5 gives two examples of this. The larger photograph shows, in cross-section, a gas diffusive SiO2 membrane at the bottom of a silicon hole. It is used for helium leak detection, where the membrane withstands normal air pressure against vacuum. It is obvious
that silicon remainders forming the bottom curvature reduce the active diffusible membrane area. Even for such large hole diameters of 500 µm, there is no standard method for checking these edges of microstructures without using destructive preparation techniques. The problem becomes severe if the hole dimensions are drastically reduced, as shown in the inset. These micro grids made of heavily doped silicon are used in advanced electron microscopy, e.g. for energy filtering. The silicon structure acts like a structured metal foil but allows smaller hole sizes and higher hole density and thus leads to a reduction of Moiré effects. For industrial use, it would be important to easily control the bottom curvature in order to tailor the best potential distribution [7].
Fig. 6. Sink EDM tool fabricated by wire EDM turning (left) and micro electron optical structure fabricated with that tool (right, dissected). Proper non-destructive inspection of such internal micro geometries is still a major challenge.
3.4 Undercuts
Again with an electron microscopy application, figure 6 exemplifies the demands of measuring undercuts. The tool required for the sink erosion was machined by wire EDM turning. What would be required in this case are simple geometries inside a 2.5 mm diameter element, such as diameters, rotational symmetry and roundness of edges. As these products are individually made to specifications, destructive testing is rather unfavourable.
4 Standard Approaches

What is common to the tasks described above is that they almost all require coupled methods, e.g. for roughness and geometries, and, most importantly, sidewall data in cavities as small as 100 µm. These are
almost inaccessible by the standard methods given in table 1. Tilted SEM inspection offers some access to sidewalls for aspect ratios up to one. The tactile probes are, in general, able to do the job; however, the tip radii make them of limited use. The fibre optical probe would be the best choice, but it needs fundamental adaptation. Only micro tomography would give full access, even to undercuts, despite its price and its limitation to low-Z materials like polymers.

Table 1. State of the art in MEMS measurement

Type of probe                                               | Res. / Tip Ø | Uncertainty 1D, 2D (x, y) | Uncertainty 3D (z)       | Sidewall capability, AR
Video probes                                                | > 5 µm       | > 1 µm                    | > 1 µm                   | NO
Classical tactile probes                                    | 300 µm       | > 0.5 µm                  | > 0.5 µm                 | YES, 5
Tactile-optical probes ("fibre probes")                     | 20 µm        | 0.5 µm                    | 0.5 µm                   | Ø, 1
MEMS micro-probes                                           | 300 µm       | 50 nm                     | > 50 nm                  | Ø
SPMs Scanning probe microscopes (AFM, SNOM)                 | 20-50 nm     | < 10 nm                   | no 3D objects measurable | NO
Confocal microscopes (scanning) and other focussing sensors | < 1 µm       | > 0.5 µm                  | 1 µm                     | NO
Electron microscope, tilted beam                            | 3 nm         | 20 nm                     | 0.5 µm without tilt      | Ø, 1
Micro tomography                                            | > 2 µm       |                           |                          | YES
5 New Concepts and Outlook

Confocal scanning probes have the advantage of high resolution in the z-direction (< 1 nm) but low lateral resolution (1-2 µm) due to the wavelength of light. At aspect ratios below 1, combining multiple views (rotation of the object or the probe) will allow some access comparable to SEM views. Complex software to stitch the views is required, because the resolution is high in only one coordinate direction, as is the use of far-UV light sources. For stand-offs up to 5 mm, measurements at the bottom of deep and/or rotated structures might become feasible. For the most part, the confocal microscope will be used to test the multiple view techniques. As the work horse of classical micro topography is the interference microscope, its combination with automatic fringe evaluation software might
offer additional access to cavities not coming too close to wavelength restrictions. Other ideas are focus probes or confocal point probes that are combined with SPM probes and optical microscopes. With this it might be possible to combine fast orientation with the optical microscope, high speed optical scanning, and the high lateral resolution of the SPM technique. It is estimated that such advanced measurement strategies will reduce measurement times dramatically and that combined instruments will bridge the milli-, micro- and nanometre scales. In conclusion, the advent of multitools combining tactile and optical sensing is regarded as the most promising approach towards 3D micro systems measurement. Optics allows for enhanced speed, whilst mechanics should "bring the light down". Ideas to combine these within a novel fibre optical inspection scheme are still missing.
References
1. CEC FP6-2002-NMP-1, 17.12.2002
2. Neumann, F. (2005) Challenges for micrometrology. Proc. 12th Sensor Congress, Nuremberg, May 2005
3. Herrmann, K., Koenders, L., Wilkening, G. (2001) Dimensional metrology using scanning probe microscopy. Sino-German Symp. Micro Systems and Nano Technology, Braunschweig, Germany, 5-7 September 2001
4. Beccai, L., Roccella, S., Arena, A., Valvo, F., Valdastri, P., Menciassi, A., Dario, M. P., Schmitt, L., Staab, W., Schmitz, F. (2005) Design and fabrication of a hybrid silicon three-axial force sensor for biomechanical applications. Sensors and Actuators A 120(2):370-382
5. Nienhaus, M., Ehrfeld, W., Berg, U., Schmitz, F., Soultan, H. (2000) Tools and methods for automated assembly of miniaturized gear systems. In: Microrobotics and Microassembly II, 5-6 November 2000, Boston, USA (SPIE, Bellingham, WA), pp 33-43
6. Griebel, A., Rund, S., Schönfeld, F., Dörner, W., Konrad, W., Hardt, S. (2004) Integrated polymer chip for two-dimensional capillary gel electrophoresis. Lab Chip 4:18-23
7. Haase, F., Detemple, P., Schmitt, S., Lendle, A., Haverbeck, O., Doll, T., Gnieser, D., Bosse, H., Frase, G. (2005) Electron permeable membranes for environmental MEMS electron sources. To be presented at Eurosensors 2005
Interferometric Technique for Characterization of Ferroelectric Crystal Properties and Microengineering Processes
Simonetta Grilli, Pietro Ferraro, Domenico Alfieri, Melania Paturzo, Lucia Sansone, Sergio De Nicola, and Paolo De Natale
Istituto Nazionale di Ottica Applicata (INOA) del CNR, c/o Istituto di Cibernetica "E. Caianiello" del CNR, Via Campi Flegrei, 34 - c/o Compr. "Olivetti" - 80078 Pozzuoli (NA), Italy
1 Introduction

Ferroelectric crystals, such as lithium niobate (LN) or lithium tantalate, have emerged as an important class of materials for several technological applications, including quasi-phase-matched (QPM) frequency converters, electro-optic scanners and surface acoustic wave (SAW) devices. Most of these applications depend critically on the ability to micro-engineer ferroelectric domains, usually by the electric field poling process. Therefore, non-invasive and in-situ diagnostic methods for a thorough understanding of domain formation become very important. In particular, the fabrication of high-quality ferroelectric micro-engineered devices strictly depends on the ability to control the domain switching process during poling. In the case of low-conductivity ferroelectric materials, such as LN, electric field poling is usually monitored by controlling the poling current flowing in the external circuit, which gives information about the amount of charge delivered to the sample but ignores the spatial and temporal evolution of the domain walls [1].

In this paper we present an interferometric technique based on digital holography (DH) that provides the phase shift distribution of the object wavefield, due to the linear electro-optic (EO) and piezoelectric effects, for real-time visualization of ferroelectric domain switching. The temporal and spatial evolution of reversing domain regions in congruent LN crystal samples is presented. During the application of the external poling voltage, a Mach-Zehnder (MZ) type interferometer set-up generates an interference fringe pattern recorded by a solid-state array detector. The recorded digital holograms are used to numerically reconstruct both the amplitude and the phase of the wavefield transmitted by the crystal. Sequences of amplitude and phase maps of the domain walls, moving under the effect of the poling process, are obtained and collected into movies for real-time visualization of the domain evolution. Depending on the domain-wall velocity experienced during the poling process, a CCD or a high frame-rate CMOS camera can be employed. The technique can be used for real-time monitoring of the periodic poling process as an alternative to the poling current control method, giving spatial full-field information. Moreover, it provides in-situ and non-invasive characterization of the reversed domain structures obtained after the poling process. By this method it is possible to assess the quality of the fabricated periodically reversed samples, avoiding the destructive ex-situ selective etching usually adopted to reveal reversed domains [2].
2 Experimental procedure

Congruent LN crystal samples, (20×20×0.5) mm in size, are obtained by dicing single-domain 3-inch diameter crystal wafers, polished on both sides, and are mounted in a special holder for electric field poling and simultaneous interferometric investigation (see Fig. 1). The structure of the sample holder allows the simultaneous application of an external high voltage for reversing ferroelectric domains and laser illumination of the crystal sample along its z-axis direction, through the quartz windows [3].

Fig. 1. Schematic view of the sample holder used for electric field poling and simultaneous interferometric acquisitions (HVA high voltage amplifier)
Electrical contact on the sample surfaces is obtained by tap water. The sample area under poling and simultaneous laser illumination is 5 mm in diameter. The sample holder is inserted into one arm of a MZ type interferometer set-up as shown in Fig. 2. The horizontally polarized beam emitted by a frequency-doubled Nd:YAG laser at 532 nm is divided by a polarizing beam-splitter into two beams, which are properly expanded to obtain two plane parallel wavefields. The object wavefield propagates through the crystal sample along its z-axis direction. The two beams are then recombined by the second beam-splitter and the resulting interferogram is captured by the camera. The fringe pattern is digitized either by a CCD camera with (1024×1024) pixels of 6.7 µm size or by a CMOS camera with (512×512) pixels of 11.7 µm size. The CCD provides an acquisition time of about 10 s at a rate of about 12 frames/s and is used here in the case of slow LN poling [1]. A fast poling process is captured by the CMOS camera, which provides an acquisition time of about 1 s at a rate of about 800 frames/s.

It is well known that for EO materials such as LN the refractive index increases from n to n + Δn under a uniform external electric field in one domain, while in the oppositely oriented one it decreases from n to n − Δn, thus providing an index contrast across the domain wall [4]. In fact, the refractive-index change due to the linear EO effect along the z crystal axis depends on the domain orientation according to Δn ∝ r₁₃E₃, where E₃ is the external electric field parallel to the z crystal axis. The index difference across a domain wall is equal to 2Δn and causes a phase retardation of the transmitted beam.

Fig. 2. MZ type interferometer set-up used for real-time visualization of reversing ferroelectric domains by the linear EO and piezoelectric effect (PBS polarizing beam-splitter; BE beam-expander; BS beam-splitter)
This effect has been widely used for in-situ EO imaging of domain reversal in ferroelectric materials [4], thus avoiding the ex-situ invasive chemical etching process [2]. The phase retardation of a plane wave incident along the domain boundaries is also affected by the piezoelectric effect, which induces a negative or positive sample thickness variation Δd in reversed domain regions. Therefore, during electric field poling, an incident plane wave experiences a phase shift Δφ, mainly due to the linear EO and piezoelectric effects along the z crystal axis, according to

\[
\Delta\varphi = \frac{2\pi}{\lambda}\left[2\,\Delta n\,d + 2\,(n_0 - n_w)\,\Delta d\right]
             = \frac{2\pi}{\lambda}\left[r_{13}\,n_0^3 + 2\,(n_0 - n_w)\,k_3\right]U
\]
(1)

where the piezoelectric thickness change Δd is dependent on k₃, the ratio between the linear piezoelectric and the stiffness tensor (k₃ = 7.57×10⁻¹² m/V) [2,4], n_w = 1.33 is the refractive index of water and U is the applied voltage. Compared to amplitude-contrast EO imaging methods [4], the DH technique allows the object wavefield to be reconstructed in both amplitude and phase. The phase shift distribution provides high-contrast images of the reversing domain regions and quantitative information about the phase retardation at the domain walls. A DH technique was used by these authors in a previous paper [5] for in-situ visualization of switching ferroelectric domains in congruent LN. An improvement of the technique is proposed here to obtain domain reversal visualization with high spatial and temporal resolution by replacing the RGI set-up with a MZ type interferometer and by using higher-speed cameras.

DH is an imaging method in which the hologram resulting from the interference between the reference and the object complex fields, r(x,y) and o(x,y) respectively, is recorded by a camera and numerically reconstructed [6,7]. The hologram is multiplied by the reference wavefield in the hologram plane, namely the camera plane, to calculate the diffraction pattern in the image plane. The reconstructed field Γ(ν,µ) in the image plane, namely the plane of the object, at a distance d from the camera plane, is obtained by using the Fresnel approximation of the Rayleigh-Sommerfeld diffraction formula

\[
\Gamma(\nu,\mu) \propto \iint h(\xi,\eta)\,r(\xi,\eta)\,
\exp\!\left[\frac{i\pi}{\lambda d}\left(\xi^2 + \eta^2\right)\right]
\exp\!\left[-2i\pi\,(\xi\nu + \eta\mu)\right]d\xi\,d\eta
\]
(2)
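As a numerical illustration of the sensitivity predicted by Eq. (1): the following sketch evaluates the phase shift across a domain wall for an applied voltage. The values of r₁₃ and n₀ are typical literature values for congruent LN at 532 nm and are assumptions, not taken from this paper.

```python
import math

# Assumed material constants (literature-typical values for congruent LiNbO3)
r13 = 10.0e-12   # electro-optic coefficient [m/V] (assumed)
n0 = 2.32        # ordinary refractive index near 532 nm (assumed)
k3 = 7.57e-12    # piezoelectric/stiffness ratio [m/V] (value quoted in the paper)
n_w = 1.33       # refractive index of water
lam = 532e-9     # laser wavelength [m]

def phase_shift(U):
    """Phase shift across a domain wall (Eq. 1) for applied voltage U [V]."""
    return (2 * math.pi / lam) * (r13 * n0**3 + 2 * (n0 - n_w) * k3) * U

U = 10e3  # an assumed poling-scale voltage of 10 kV
print(f"phase shift at {U/1e3:.0f} kV: {phase_shift(U):.1f} rad")
```

With these assumed constants the EO term r₁₃n₀³ dominates over the piezoelectric term by roughly an order of magnitude.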
The reference wave r(ξ,η), in the case of a plane wave, is simply given by a constant value; h(ξ,η) = |r(ξ,η) + o(ξ,η)|² is the hologram function, λ is the laser source wavelength and d is the reconstruction distance, namely the distance measured between the object and the camera plane along the beam path. The coordinates (ν,µ) are related to the image plane coordinates (x′,y′) by ν = x′/(λd) and µ = y′/(λd). The reconstructed field Γ(ν,µ) is obtained by applying the Fast Fourier Transform (FFT) algorithm to the hologram h(ξ,η) multiplied by the reference wave r(ξ,η) and the chirp function exp[iπ/(λd)(ξ² + η²)]. In the discrete finite form of equation (2) the pixel size (Δx′, Δy′) of the reconstructed image differs from the pixel size (Δξ, Δη) of the camera array and is related to it as follows:

\[
\Delta x' = \frac{\lambda d}{N\,\Delta\xi}\,; \qquad \Delta y' = \frac{\lambda d}{N\,\Delta\eta}
\]
(3)

where N is the pixel number of the camera array. The 2D amplitude A(x′,y′) and phase φ(x′,y′) distributions of the object wavefield can then be obtained by simple calculations:

\[
A(x',y') = \left|\Gamma(x',y')\right|\,; \qquad
\varphi(x',y') = \arctan\frac{\mathrm{Im}\left[\Gamma(x',y')\right]}{\mathrm{Re}\left[\Gamma(x',y')\right]}
\]
(4)
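A minimal numerical sketch of the discrete Fresnel reconstruction of Eqs. (2)-(4), for a plane reference wave (r = 1). This NumPy implementation and the synthetic random hologram are illustrative assumptions, not the authors' code; the camera parameters match those quoted for case A.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, pixel_size):
    """Reconstruct amplitude and phase from a digital hologram via the
    discrete Fresnel transform (plane reference wave r = 1), Eqs. (2)-(4)."""
    N = hologram.shape[0]
    m = np.arange(N) - N / 2
    xi, eta = np.meshgrid(m * pixel_size, m * pixel_size)
    # chirp function exp[i*pi/(lambda*d) * (xi^2 + eta^2)]
    chirp = np.exp(1j * np.pi / (wavelength * d) * (xi**2 + eta**2))
    gamma = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    amplitude = np.abs(gamma)      # Eq. (4), amplitude
    phase = np.angle(gamma)        # Eq. (4), phase via arctan(Im/Re)
    dx_image = wavelength * d / (N * pixel_size)  # image-plane pixel, Eq. (3)
    return amplitude, phase, dx_image

# Illustration with synthetic data: 1024x1024 frame, 6.7 um pixels,
# reconstruction distance 125 mm (the CCD parameters of case A)
holo = np.random.rand(1024, 1024)
amp, phi, dx = fresnel_reconstruct(holo, 532e-9, 0.125, 6.7e-6)
print(f"image-plane pixel size: {dx*1e6:.1f} um")
```

For these parameters Eq. (3) gives an image-plane pixel of about 9.7 µm, consistent with the lateral resolution quoted for case A in the Results section.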
3 Results

Two different configurations have been used in this work. In case A the LN sample is subjected to a slow poling process by using a high series resistor in the external circuit (100 MΩ) [1]. The whole area under investigation (diameter 5 mm) is reversed in less than 10 s and the interferograms are acquired by the CCD camera. In case B another virgin LN crystal sample is reversed by fast poling (series resistor 5 MΩ) in order to reverse the whole crystal area in less than 1 s, and the interferograms are acquired by the CMOS camera. Amplitude and phase maps of the object wavefield are numerically reconstructed by the DH method as described in the previous section. The reconstruction distance is 125 mm in case A and 180 mm in case B, while the lateral resolution obtained in the reconstructed amplitude and phase images is 9.7 µm in case A and 16 µm in case B, according to (3). A reference interferogram of the sample in its initial virgin state is acquired before applying the external voltage and is used to calculate the phase shift experienced by the object wavefield during poling. The DH reconstruction is performed for both the reference hologram and the nth hologram, recorded during the domain switching, to obtain the corresponding phase distributions φ₀(x′,y′) and φₙ(x′,y′). The 2D phase shift map Δφ(x′,y′) = φₙ(x′,y′) − φ₀(x′,y′) is calculated for each hologram and the corresponding images are collected into a movie. Figs. 3 and 4 show some of the frames extracted from such movies in cases A and B, respectively. The out-of-focus real image term, generated by the DH numerical procedure [6,7], is filtered out for clarity. The in-focus image of the domain wall propagating during the application of the external voltage is clearly visible. The switching process always starts with nucleation at the electrode edges. It is interesting to note that a residual phase shift gradient is present at previously formed domain walls, as indicated in Figs. 3 and 4. This is probably due to the decay effect of the internal field related to the polarization hysteresis in ferroelectric crystals [8,9]. It is also important to note that crystal defects and non-uniformities are clearly visible and readily detectable in Figs. 3 and 4, due to their different EO behaviour [10]. Moreover, the highly temporally resolved frames in Fig. 4, obtained with the CMOS camera, show that the evolution of the domain walls is clearly influenced by the crystal defects, where the domain wall propagation appears to be partially blocked.
Fig. 3. Selected frames from the phase-map movie obtained in case A. The frame area is (5×5) mm² and the time t (in seconds) corresponding to each frame is (a) 4.2, (b) 4.6, (c) 5.0, (d) 6.3, (e) 6.7, (f) 7.9. The polarization axis is normal to the image plane
Fig. 4. Selected frames from the phase-map movie obtained in case B. The frame area is (5×5) mm² and the time t (in milliseconds) corresponding to each frame is (a) 390, (b) 430, (c) 470, (d) 510, (e) 530, (f) 580. The polarization axis is normal to the image plane
4 Conclusions

A DH technique for non-invasive real-time visualization of switching ferroelectric domains with high spatial and temporal resolution has been proposed and demonstrated in this paper. The technique provides the reconstruction of the phase shift distribution of the wavefield transmitted by the sample during poling, making use of the EO and piezoelectric effects occurring under the external voltage. The technique can be used as an accurate and high-fidelity method for monitoring the periodic poling process, as an alternative to the commonly used poling current control. Further experiments on photoresist-patterned samples, using a microscopic configuration of the MZ interferometer, are under way.
5 Acknowledgments

This research was partially funded by the Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) within the project "Microdispositivi in Niobato di Litio" n. RBNE01KZ94 and partially by the MIUR project n. 77 DD N.1105/2002.
6 References
1. Myers, L., Eckardt, R., Fejer, M., Byer, R., Bosenberg, W., Pierce, J. (1995) Quasi-phase-matched optical parametric oscillators in bulk periodically poled LiNbO3. Journal of the Optical Society of America B 12:2102-2116
2. Nassau, K., Levinstein, H., Loiacono, G. (1966) Ferroelectric lithium niobate. 1. Growth, domain structure, dislocations and etching. Journal of Physics and Chemistry of Solids 27:983-988
3. Wengler, M., Müller, M., Soergel, E., Buse, K. (2003) Poling dynamics of lithium niobate crystals. Applied Physics B 76:393-396
4. Gopalan, V., Mitchell, T. (1999) In situ video observation of 180° domain switching in LiTaO3 by electro-optic imaging microscopy. Journal of Applied Physics 85:2304-2311
5. Grilli, S., Ferraro, P., Paturzo, M., Alfieri, D., De Natale, P., de Angelis, M., De Nicola, S., Finizio, A., Pierattini, G. (2004) In-situ visualization, monitoring and analysis of electric field domain reversal process in ferroelectric crystals by digital holography. Optics Express 12:1832-1842
6. Schnars, U., Jüptner, W. (2002) Digital recording and numerical reconstruction of holograms. Measurement Science and Technology 13:R85-R101
7. Grilli, S., Ferraro, P., De Nicola, S., Finizio, A., Pierattini, G., Meucci, R. (2001) Whole optical wavefields reconstruction by digital holography. Optics Express 9:294-302
8. Paturzo, M., Alfieri, D., Grilli, S., Ferraro, P., De Natale, P., de Angelis, M., De Nicola, S., Finizio, A., Pierattini, G. (2004) Investigation of electric internal field in congruent LiNbO3 by electro-optic effect. Applied Physics Letters 85:5652-5654
9. de Angelis, M., De Nicola, S., Finizio, A., Pierattini, G., Ferraro, P., Grilli, S., Paturzo, M. (2004) Evaluation of the internal field in lithium niobate ferroelectric domains by an interferometric method. Applied Physics Letters 85:2785-2787
10. de Angelis, M., Ferraro, P., Grilli, S., Paturzo, M., Sansone, L., Alfieri, D., De Natale, P., De Nicola, S., Finizio, A., Pierattini, G. (2005) Two-dimensional mapping of the electro-optic phase retardation in lithium niobate crystals by digital holography. Optics Letters (to be published)
Holographic interferometry as a tool to capture impact induced shock waves in carbon fibre composites J. Müller1, J. Geldmacher1, C. König2, M. Calomfirescu3, W. Jüptner1 1 BIAS GmbH, Klagenfurter Str. 2, 2 Bremer Institut für Konstruktionstechnik BIK, Badgasteiner Str. 1, 3 Faserinstitut Bremen e.V., Am Biologischen Garten 2, 28359 Bremen, Germany
Abstract

In this work an analysis of impacts on carbon fibre structures using holographic interferometry is presented. Impacts are caused e.g. by stones or hail at the high motion speeds of vehicles. An impact is defined as a force acting for a time shorter than the travelling time of the impact waves through the structure. The measurements are therefore performed using a pulsed laser, making it possible to record digital interferograms at different times after the impact [1]. The impact is produced by an air-driven projectile and the holograms are stored digitally using a CCD camera. The numerical evaluation of the interferograms then gives access to the out-of-plane displacements of the surface. From these a 2D strain field is extracted and analysed quantitatively. The experiments cover the influence of different parameters: the contact time during impact, the momentum of the projectile and the evolving wave forms (longitudinal, transversal, bending wave). The effect of these parameters is also investigated for different layer designs of the composite. Due to the anisotropic properties of carbon fibre composites, not much is known about the damage tolerance and failure limits, especially in the dynamic case. The goal of these experiments is therefore to gain a deeper understanding of the dynamic behaviour of these materials and to provide the dynamic material parameters, which can be used for numerical simulations on the one hand and for design and construction on the other.
Hybrid Measurement Technologies
523
1 Introduction

The designers of modern airplanes and automobiles make extensive use of carbon fibre reinforced structures. These composites have advantages concerning weight and durability and therefore reduce costs. But due to high motion speeds they are exposed to damage by impacts caused e.g. by stones, birds or hail. Most of the analytically and experimentally gained parameters for structural analysis only cover the static case. For the design it is necessary to know the effects of highly dynamic loads, such as the different impact-induced wave forms, on the structures [2]. Additionally, the design of structural components is difficult due to the anisotropic nature of the composite. Therefore research is necessary to understand the behaviour of impact-induced waves in these materials. In the following we report on our work using optical methods to gain access to these waves and to the influence of different material parameters. We calculated the principal sum of strains from measurements performed using a holographic double exposure setup. These will be combined with results from experiments performed at the Bremer Institut für Konstruktionstechnik BIK. The photoelastic coating method (PCM) used there gives the principal difference of strains [3]. From these, a 2D representation of the principal strains ε₁ and ε₂ can be calculated. The data is then used for further work in finite element simulation at the Faserinstitut Bremen to gain understanding of the effects of wave forms on carbon fibre structures and to predict secondary damage potentially occurring outside the zone of a non-damaging impact.
2 Impact waves

Unlike in static cases, the loads caused by an impact are space- and time-dependent. The known dependency between the stress and the strain measured at the same time at a different part of the structure does not exist in this highly dynamic case. An impact is defined as a loading with a duration smaller than the running time of the wave through the structure:

\[
t_{impact} < \frac{l}{c}
\]
(1)
Here l is the typical dimension and c is the speed of the wave. The amplitude of the stress depends on the momentum carried into the structure; the wavelength depends on the contact time and the speed of the wave. Therefore three parameters have to be controlled: contact time, force distribution over time and momentum. Three kinds of waves need to be considered. The phase speed of the longitudinal wave is given by [4]

\[
c_{L0} = \sqrt{\frac{E}{\rho}}\,; \qquad c_{L1} = \sqrt{\frac{E}{\rho\,(1-\nu^2)}}
\]
(2)
for the one- and two-dimensional case. The 3D case is given by

\[
c_{L2} = \sqrt{\frac{E\,(1-\nu)}{\rho\,(1+\nu)(1-2\nu)}}
\]
(3)
where E is Young's modulus, ρ is the mass density and ν is Poisson's ratio. The second wave form is the transversal wave, which travels at roughly half the speed of the longitudinal wave:

\[
c_T = \sqrt{\frac{E}{2\rho\,(1+\nu)}}
\]
(4)
Another wave form is a mixture of the two preceding forms: the Rayleigh wave, which forms at free surfaces of the body. We are looking at flat plates, and for this two-dimensional case Rayleigh waves can be neglected because their wavelength would be much larger than the thickness of the panel. The third wave form that has to be considered is the bending wave. The phase speed of this wave is frequency dependent and given by

\[
c_B = \sqrt{\pi f h}\;\sqrt[4]{\frac{E}{3\rho\,(1-\nu^2)}}
    = \frac{\pi h}{\lambda}\,\sqrt{\frac{E}{3\rho\,(1-\nu^2)}}
\]
(5)
with h being the thickness of the plate. Note that c_B → 0 for large wavelengths and c_B → ∞ for λ → 0. This is called the anomalous dispersion. For physical reasons the upper limit for the phase speed is the speed of the Rayleigh wave [5, 6]:

\[
c_{B,max} = c_{Rayleigh} = q\,c_T
\]
(6)

where q can be approximated by q = (0.874 + 1.12ν)/(1 + ν) [4].
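Eqs. (2)-(5) can be bundled into a small helper for estimating the wave speeds from elastic constants. A minimal sketch; the material values in the example are assumed, steel-like numbers for illustration, not the parameters of the composite investigated in this paper.

```python
import math

def wave_speeds(E, rho, nu, f=None, h=None):
    """Phase speeds of Eqs. (2)-(5) for Young's modulus E [Pa], density
    rho [kg/m^3], Poisson's ratio nu, frequency f [Hz], plate thickness h [m]."""
    c_L0 = math.sqrt(E / rho)                                          # Eq. (2), 1D
    c_L1 = math.sqrt(E / (rho * (1 - nu**2)))                          # Eq. (2), 2D
    c_L2 = math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))   # Eq. (3)
    c_T = math.sqrt(E / (2 * rho * (1 + nu)))                          # Eq. (4)
    c_B = None
    if f is not None and h is not None:
        # Eq. (5): bending wave, frequency dependent (anomalous dispersion)
        c_B = math.sqrt(math.pi * f * h) * (E / (3 * rho * (1 - nu**2))) ** 0.25
    return c_L0, c_L1, c_L2, c_T, c_B

# Illustration with assumed steel-like values
cL0, cL1, cL2, cT, cB = wave_speeds(E=210e9, rho=7850, nu=0.3, f=10e3, h=2e-3)
print(f"c_L0 = {cL0:.0f} m/s, c_T = {cT:.0f} m/s, c_B = {cB:.0f} m/s")
```

The transversal speed comes out at roughly 60% of the one-dimensional longitudinal speed, in line with the "roughly half" statement above.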
3 Holographic double exposure method

The capturing of the travelling wave follows the well-known double exposure method [7], with the difference that the two holograms are recorded separately on a CCD target and the reconstruction is performed numerically [8]. The procedure used here is shown in Fig. 1. The first hologram H1 represents the unloaded reference state before the impact. The impact triggers the second laser pulse and hologram H2 is recorded at a defined time after the impact. From these two holograms the complex wavefields b′ are reconstructed and the phases are calculated:

\[
\varphi_n(x',y') = \arctan\frac{\mathrm{Im}\left[b'(x',y')\right]}{\mathrm{Re}\left[b'(x',y')\right]}
\]
(7)

The phase difference Δφ between the loaded and the unloaded state is then directly related to the displacement u_z of the surface:

\[
u_z = \Delta\varphi\,\frac{\lambda}{4\pi}
\]
(8)

In our case a third hologram H3 is recorded after the impact. This serves to remove any remaining deformations of the plate due to frictional forces of the anvil and to verify that the impact was non-destructive. This second difference phase DP2 is subtracted from the first one, resulting in the final difference phase DP3. This difference phase is then unwrapped (DP) and converted into a 2D field of the principal sum of strains according to

\[
(\varepsilon_1 + \varepsilon_2) = 2\,\frac{1-\nu}{\nu}\,\frac{u_z}{h}
\]
(9)

Fig. 1. Schematic procedure for the evaluation of holograms
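The evaluation chain of Eqs. (7)-(9) can be sketched as follows. This is an illustration, not the authors' code: λ is the ruby laser wavelength used below, while the Poisson's ratio and plate thickness are assumed placeholder values, and the unwrapping step is deliberately omitted.

```python
import numpy as np

LAM = 694e-9  # ruby laser wavelength [m]
NU = 0.3      # assumed Poisson's ratio (illustration value)
H = 2e-3      # assumed plate thickness [m] (illustration value)

def evaluate(phase_ref, phase_loaded):
    """Out-of-plane displacement (Eq. 8) and principal strain sum (Eq. 9)
    from two reconstructed phase maps (Eq. 7)."""
    # wrapped phase difference, mapped into (-pi, pi]
    dphi = np.angle(np.exp(1j * (phase_loaded - phase_ref)))
    # NOTE: a real evaluation unwraps dphi first; skipped in this sketch
    u_z = dphi * LAM / (4 * np.pi)             # Eq. (8)
    strain_sum = 2 * (1 - NU) / NU * u_z / H   # Eq. (9)
    return u_z, strain_sum

# half a fringe (difference phase pi) corresponds to lambda/4 of displacement
u, s = evaluate(np.array([0.0]), np.array([np.pi]))
```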
4 Experimental Setup

The light source used in this work is a pulsed ruby laser working in double-pulse mode at 694 nm. The emitted pulses have an energy of 1 J and a pulse duration of t_pulse = 30 ns. The beam is divided into object and reference beam by a 90/10 beamsplitter BS1, and a lens L1 is used to illuminate the sample with the object beam. The reference beam is expanded using a telescope arrangement. Both beams are brought to interference on the CCD target by a second beamsplitter BS2. Since, due to the sampling theorem, a CCD target can only record a limited angle between object and reference wave [7], the diverging lens L2 is introduced.
Fig. 2. Experimental setup for holographic double exposure
The impact unit consists of a pressure container which drives a steel ball in a tube, acting as the projectile. The velocity is recorded by two photosensors at the end of the tube. The force is recorded by a transducer mounted between the end of the tube and an anvil, which provides a constant contact surface during the impact. Experiments show that the force can be accurately reproduced for each impact. The duration of the impact can be controlled by choosing anvils of different sizes; the force is controlled by choosing different driving pressures and therefore different velocities of the projectile. The sample is a plate measuring 30 cm × 30 cm and is clamped at the upper and lower edges; the other two edges are free. The pulse laser is triggered by the first photosensor. The first pulse is emitted before the projectile hits the sample; the second pulse is triggered at a selectable time after the impact. The CCD camera is synchronized to this trigger in order to capture two images of the two pulses. Therefore, for each point in time that is to be captured, a single impact has to be performed. The reconstruction of the holograms is performed off-line on a personal computer.
5 Results

The material under investigation is a carbon fibre structure typically used in aircraft fuselages. It contains fibres in the directions 0°, 45°, -45° and 90° in equal fractions of 25%, which is the simplest case, without anisotropic behaviour. The impact was performed using a 7 mm steel ball with an average impact velocity of v_ave = 18.1 m/s. The contact time of the impact was 9.5 µs at a maximum impact force of 10.9 kN. Finite element simulations show that the impact produces a series of circular wave peaks (see Fig. 3a). The impact was non-damaging, and so was the wave itself. From these calculations the speeds of the transversal and the bending wave were also extracted.
Fig. 3. a) Simulation of displacement (×1000), 30 µs after impact (zoomed), b) Unwrapped phase differences at different times after impact (complete sample)
Figure 3b shows the corresponding measurements of the phase differences at 4 different times after the impact. It can be seen that the equal fractions of fibre directions lead to the expected circular wave front. The center section of the excited area is not resolved due to the large amplitude causing an undersampled fringe density; only the front part of the wave can be observed. Inspection of the difference phases after the impact also shows that no damage was done to the material. From these unwrapped difference phases, representative line scans as indicated by the white lines are extracted and the corresponding displacement is calculated according to Equ. 8. One typical result in the Y direction (fibre direction 90°) is shown in Figure 4. Here the displacement of the wave is shown for three points in time; the data points are shown only for the first one, for clarity. Two peaks of the wave front are identifiable. The speed of the slower one can be evaluated as indicated by the dotted line. The velocity is v = (1700±200) m/s, indicating a bending wave.
Fig. 4. Amplitude at different times after impact in Y-direction (90°)
The second peak, indicated by the dash-dotted line, travels at nearly double the speed of the slower one, at v = (3500±400) m/s. This velocity suggests a transversal wave, since the theoretical value from numerical calculations is v_T = 3705 m/s, and the wavelength is also longer than that of the slower one, which would not be the case for a bending wave (see Equ. 5). Calculations show that the longitudinal wave travels at v_L = 6282 m/s. This wave has not been observed yet.
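The speed evaluation indicated by the dotted and dash-dotted lines reduces to a difference quotient of a peak position tracked across two line scans. A minimal sketch; the positions below are made-up illustration values, not the measured data of Fig. 4.

```python
# Peak position of a wave front read from two line scans at different times
# (illustrative values, not the measured data)
y1, t1 = 0.030, 10e-6    # position [m] at time [s]
y2, t2 = 0.0825, 41e-6
v = (y2 - y1) / (t2 - t1)  # wave speed as a difference quotient
print(f"wave speed: {v:.0f} m/s")
```

In practice the quoted uncertainty (e.g. ±200 m/s) follows from the reading accuracy of the peak positions on the line scans.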
6 Summary and Outlook

In this paper we have shown the possibility of recording the amplitude of transient events like an impact using double exposure digital holography. We have shown that the resolution of digital holography is sufficient for recording the different wave forms. Comparisons to FEM simulations show good agreement concerning wave speeds, but further work is required for more accurate modelling of e.g. the scaling of the displacement and the damage effects of the impacts, such as delaminations. Future work will include the calculation of 2D maps of the principal strains by combining the results from holography and the PCM method. This data can then be used for evaluation of the FEM simulations. Further investigation is also needed concerning the frequency-dependent damping of bending waves and the effect of different layer designs of the composites on the damping. With more knowledge about the damping it becomes possible to make better predictions of the damage characteristics of structures and to give corresponding design rules.
7 Acknowledgements The authors would like to thank the Deutsche Forschungsgemeinschaft for funding this work under the grant number Ju 142/54-1.
8 References
1. Hariharan, P. (1984) Optical Holography: Principles. Cambridge University Press
2. Müller, D., Jüptner, W., Franz, T. (1996) Untersuchung der Stosswellenausbreitung in Faserverbundwerkstoffen mittels dynamischer Spannungsoptik und holografischer Interferometrie. Engineering Research 62(7/8):195-213
3. Franz, T. (1998) Experimentelle Untersuchung von impactbelasteten versteiften bzw. gekerbten Platten aus Faserverbundwerkstoffen. Dissertation, Universität Bremen
4. Cremer, L., Heckl, M. (1996) Körperschall. Springer Verlag, Berlin
5. Kolsky, H. (1963) Stress Waves in Solids. Dover Publications, New York
6. Goldsmith, W. (1960) Impact - The Theory and Physical Behaviour of Colliding Solids. Edward Arnold Ltd., London
7. Kreis, T. (1996) Holographic Interferometry: Principles and Methods. Akademie Verlag, Berlin
8. Schnars, U., Jüptner, W. (1994) Direct recording of holograms by a CCD target and numerical reconstruction. Applied Optics 33(2):179-181
Two new techniques to improve interferometric deformation measurement: Lockin and Ultrasound excited Speckle-Interferometry
Henry Gerhard, Gerhard Busse
University of Stuttgart, IKP-ZFP, Pfaffenwaldring 32, 70569 Stuttgart, Germany
1 Introduction

Speckle methods like Shearography and Electronic-Speckle-Pattern-Interferometry (ESPI) display changes of surface deformation in a fringe pattern [1]. Such a deformation can be induced e.g. by applying a pressure difference or by remote heating. Defects cause a distortion of the fringe pattern and so reveal themselves this way. However, the deformation of the whole sample makes it difficult to detect the much smaller superposed defect-induced distortions. We show how a deformation induced by modulated heating improves the detectability. The probability of defect detection (POD) is much higher if the heating acts selectively on defects. This is achieved by using elastic waves for enhanced loss-angle heating in defects ("ultrasound activated speckle-interferometry" [2,3]). The hidden defect is then marked by a "small bump" on the surface.
2 Principle 2.1 Electronic-Speckle-Pattern-Interferometry (ESPI)
In an ESPI-measurement a diffuse reflecting sample is exposed to a laser beam and imaged in this light by a CCD- or CMOS camera. A superposed reference beam causes an interferometric speckle pattern that responds to wavelength-sized deformations of the object. The superposition of the two speckle patterns “before and after deformation” results in fringes that display lines of equal surface deformation (similar to the equal-height-lines in
Hybrid Measurement Technologies
531
a map). The components of the object deformations, perpendicular (“out of plane”) or tangential (“in-plane”) to the object surface can be imaged this way simultaneously providing complete 3D-information about the deformation field [4]. For our experiments we used out of plane imaging where height difference between adjacent fringes is half the laser wavelength (l/2=328nm). Phase shifting technique and unwrapping algorithms were used to improve the signal-to-noise ratio and to obtain absolute deformation values [5]. The ESPI-system and the software were developed at our institute (IKP-ZfP). 2.2 Lockin Interferometry
Modulation techniques allow for noise suppression due to the reduction of bandwidth. As such filtering used to be performed by electronics called lockin amplifiers, the term is also applied to imaging, where it denotes the use of such a procedure at each pixel. An example is optical lockin thermography (OLT), where an absorbing sample is exposed to intensity-modulated light. The resulting surface temperature modulation propagates as a thermal wave into the object. At boundaries this wave is reflected back to the surface, where it affects both phase and magnitude of the initial wave by superposition. The stack of thermography images obtained during the modulated irradiation of the object is pixelwise Fourier transformed at the modulation frequency. This way the information from all images is narrowband filtered and compressed into an amplitude and a phase image of the temperature modulation [6-10]. We applied this principle to ESPI to perform optical lockin interferometry (OLI) [11,12]. In contrast to OLT, which monitors the temperature amplitude or the phase shift of the modulated temperature field at the surface of the object, OLI analyses the modulated deformation resulting from periodic heating (e.g. by intensity-modulated irradiation). The advantages are the same as before: one obtains an amplitude and a phase image at a much improved signal-to-noise ratio. It should be mentioned that phase is involved twice: both the lockin phase image and the amplitude image are derived by Fourier transformation from a sequence of interferometric phase images.
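The pixelwise Fourier evaluation at the modulation frequency can be sketched as follows (an illustrative NumPy implementation, not the authors' software; the function name and interface are our own):

```python
import numpy as np

def lockin_demodulate(stack, f_mod, f_sample):
    """Pixelwise single-frequency Fourier (lockin) evaluation of an image stack.

    stack: array of shape (n_frames, ny, nx), e.g. interferometric phase images
    f_mod: modulation (lockin) frequency in Hz
    f_sample: frame rate in Hz
    Returns amplitude and phase images of the modulation at f_mod.
    """
    n = stack.shape[0]
    t = np.arange(n) / f_sample
    # Correlate each pixel's time signal with sine and cosine references
    # at the lockin frequency (single-bin discrete Fourier transform).
    s = np.sin(2 * np.pi * f_mod * t)
    c = np.cos(2 * np.pi * f_mod * t)
    in_phase = np.tensordot(c, stack, axes=(0, 0)) * 2 / n
    quadrature = np.tensordot(s, stack, axes=(0, 0)) * 2 / n
    amplitude = np.hypot(in_phase, quadrature)
    phase = np.degrees(np.arctan2(quadrature, in_phase))
    return amplitude, phase
```

Averaging over many modulation periods is what suppresses broadband noise: only signal components at the lockin frequency survive the correlation.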
[Figure: setup schematic — laser, beam splitter, object and reference beams, lenses, filter, camera, sinusoidally modulated lamp (heat source), sample with defect and sinusoidally modulated deformation, computer]
Fig. 1. Principle of Lockin-ESPI
OLI is suited for imaging of hidden features, where the depth range depends on the thermal diffusion length µ = (2k/(ωρc))^(1/2) (k denotes thermal conductivity, ρ density and c specific heat, respectively) [13,14], which can be varied via the modulation frequency ω [14].

2.3 Ultrasound activated interferometry
Information in speckle interferometry images is coded by fringes. The dynamic range of a measurement is limited by the maximum number of resolvable fringes in the image. When the whole sample is illuminated, the whole surface is heated and deformed, while the effect of the defect may be quite small. We developed a method in which a defect responds selectively, so that the image displays mostly the defect-induced fringes and not the potentially confusing background of the intact structure. As a mechanical defect is generally characterized by local stress concentration and/or an enhanced mechanical loss angle ("hysteresis"), excitation of the sample by ultrasound together with internal friction in defects converts elastic wave energy locally into heat. The resulting local thermal expansion in the defect area causes a bump that reveals the hidden defect.
[Figure: setup schematic — laser, beam splitter, object and reference beams, lenses, camera, ultrasound transducer, sample with heat-generating defect and local thermal expansion, control unit, computer]
Fig. 2. Principle of ultrasound excited ESPI
3 Results
We present examples of the potential of the two methods described above.

3.1 Optical-Lockin-ESPI
Defect detection in polymethylmethacrylate (PMMA)
The first example is an investigation of a homogeneous PMMA model sample (120x86x6 mm³) with subsurface holes drilled perpendicular to the rear surface in order to simulate defects at different depths underneath the inspected front surface. The transparent specimen was painted black on the front side to hide the holes. The plate was illuminated at 0.02 Hz modulation frequency while ESPI images were continuously recorded.
In each single image taken out of the sequence, the defects can be detected neither in the wrapped (Fig. 3, left) nor in the unwrapped phase image (Fig. 3, right). Only a two-dimensional fit could reveal small differences in deformation (Fig. 4, left), at a poor signal-to-noise ratio. The phase image derived by Fourier transformation from the whole sequence of modulated deformation at the lockin frequency makes all holes clearly visible (Fig. 4, right). It has been shown previously that holes at different depths can be distinguished by variation of the modulation frequency, because it controls the depth range of the generated thermal wave probing the defect [11].
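The frequency dependence of the probed depth follows the thermal diffusion length µ = (2k/(ωρc))^(1/2). A minimal sketch of how the depth range scales with the lockin frequency (the PMMA material data are approximate textbook values, not taken from this experiment):

```python
import math

def diffusion_length(k, rho, c, f_lockin):
    """Thermal diffusion length mu = sqrt(2k / (omega * rho * c)).

    k: thermal conductivity [W/(m K)], rho: density [kg/m^3],
    c: specific heat [J/(kg K)], f_lockin: modulation frequency [Hz].
    """
    omega = 2 * math.pi * f_lockin
    return math.sqrt(2 * k / (omega * rho * c))

# Illustrative values for PMMA (approximate literature data):
for f in (0.09, 0.06, 0.03, 0.02):
    mu = diffusion_length(k=0.19, rho=1190.0, c=1470.0, f_lockin=f)
    print(f"f = {f:5.2f} Hz -> mu = {mu * 1e3:.2f} mm")
```

Lowering the modulation frequency increases µ, which is why deeper features only appear at the lower lockin frequencies.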
[Plots: intensity and expansion profiles vs. position in pixels]
Fig. 3. Left: wrapped-phase image, Right: same image after phase unwrapping
[Plots: filtered expansion and lockin phase [°] vs. position in pixels, with holes 1-3 marked]
Fig. 4. Left: demodulated image with a two-dimensional fit, Right: lockin phase image with a two-dimensional fit
Inclusions in a honeycomb structure
The high specific stiffness of honeycomb structures is of interest for aerospace applications. The critical part of such structures is where the skin is bonded to the core. Ingress of water or excessive glue may result in lower stiffness or too much weight, respectively. In our investigations we used a honeycomb structure (420x170x13 mm³) partially filled with glue. In the middle of the plate a marking strip is visible. The two areas of modified stiffness stand out clearly in the lockin phase image (Fig. 5, right) derived from the sequence at the excitation frequency, while they are hidden in the strong background deformation in a single image of the sequence (Fig. 5, left).

[Figure: regions labelled "filled with glue" and "marking strip"]
Fig. 5. Left: one wrapped interferometric phase image of the sequence, Right: lockin phase image from Fourier transform of the sequence at 0.06 Hz

Depth resolved measurements in wood
Wood is a natural material important for furniture, where genuine wood is used for the veneer layer and cheap wood for the core sheet. Delaminations caused by glue failure can be detected using OLI. The dimensions of the plate were given in [11]. Since that publication, the measurements have not only been improved with respect to the signal-to-noise ratio, but also extended to a depth-resolved measurement of the holes under different veneer thicknesses. At a frequency of 0.09 Hz only the holes under a thin veneer layer are visible. By decreasing the frequency, the penetration depth of the thermal waves increases, thus allowing more holes to be detected at lower frequencies. The grain structure of the veneer also gains contrast at lower frequencies. The applied lockin frequencies are given below the images.
Fig. 6. Depth resolved measurements of holes under different veneer thickness (lockin frequencies 0.09 Hz, 0.06 Hz, 0.03 Hz)
3.2 Ultrasound activated interferometry
Impacts, delaminations and embedded foil in CFRP
In this example, the cooling-down process of an impact-damaged CFRP sample (100x150x5 mm³) is shown over a period of 13.4 s after excitation for 3 s at an ultrasound input power of 200 W. However, the actually injected power is much lower because of the impedance mismatch between sample and transducer. The plate contains an impact damage in the center, some delaminations at the edges and a laminated foil in the upper right part. The defects (heated selectively due to their enhanced mechanical losses) are clearly visible in the time sequence (Fig. 7). The number of fringes is reduced since overall sample heating is avoided. The temporally shifted appearance of the delaminations allows one to investigate the nature of the defects and the depth at which they are located.
Fig. 7. Time sequence after ultrasound excitation
4 Conclusion
Two new techniques have been presented which improve interferometric defect detection. Optical lockin interferometry extracts depth-resolved weak structures from a strong background of overall deformation. The mechanism involved in this interferometric tomography is the frequency dependence of the thermal wave depth range. Phase images have the advantages of a larger depth range and insensitivity to variations of surface absorption and scattering. The second technique, ultrasound activated interferometry, responds selectively to the mechanical losses in defects, because ultrasound is converted into heat there. This is an alternative way to reduce the influence of background deformation and to enhance the probability of detection (POD).
5 References
[1] Cloud, G.: Optical Methods of Engineering Analysis. Cambridge University Press, Cambridge, 1995
[2] Salerno, A.; Danesi, S.; Wu, D.; Ritter, S.; Busse, G.: Ultrasonic loss angle with speckle interferometry. 5th International Congress on Sound and Vibration, University of Adelaide, 15.-18.12.1997
[3] Gerhard, H.; Busse, G.: Ultraschallangeregte Verformungsmessung mittels Speckle-Interferometrie [Ultrasound-excited deformation measurement by means of speckle interferometry]. In: DGZfP Berichtsband BB7-CD, 6. Kolloquium Qualitätssicherung durch Werkstoffprüfung, Zwickau, 13.-14.11.2001
[4] Ritter, S.; Busse, G.: 3D-Electronic-Speckle-Pattern-Interferometer (ESPI) in der zerstörungsfreien Werkstoff- und Bauteilprüfung [3D ESPI in non-destructive material and component testing]. Deutsche Gesellschaft für Zerstörungsfreie Prüfung e.V., Jahrestagung, 17.-19.05.1993, Garmisch-Partenkirchen, pp. 491-498
[5] Ghiglia, D.; Pritt, M.: Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software. Wiley, New York, 1998
[6] Patent DE 4203272 C2 (1992)
[7] Busse, G.; Wu, D.; Karpen, W.: Thermal wave imaging with phase sensitive modulated thermography. J. Appl. Phys. Vol. 71 (1992), pp. 3962-3965
[8] Carlomagno, G.; Bernardi, P.: Unsteady thermotopography in non-destructive testing. In: Proc. 3rd Biannual Exchange, St. Louis, USA, 24.-26. August 1976, pp. 33-39
[9] Rosencwaig, A.; Gersho, A.: Theory of the photoacoustic effect with solids. J. Appl. Phys. (1976), pp. 64-69
[10] Busse, G.: Optoacoustic phase angle measurement for probing a metal. Appl. Phys. Lett. Vol. 35 (1979), pp. 759-760
[11] Gerhard, H.; Busse, G.: Use of ultrasound excitation and optical-lockin method for speckle interferometry displacement imaging. In: Green, R.E. Jr.; Djordjevic, B.B.; Hentschel, M.P. (Eds.): Nondestructive Characterisation of Materials XI. Springer-Verlag, Berlin, 2003, pp. 525-534, ISBN 3-540-40154-7
[12] Gerhard, H.; Busse, G.: Zerstörungsfreie Prüfung mit neuen Interferometrie-Verfahren [Non-destructive testing with new interferometry methods]. Materialprüfung Vol. 45, No. 3 (2003), pp. 78-84
[13] Rosencwaig, A.; Busse, G.: High resolution photoacoustic thermal wave microscopy. Appl. Phys. Lett. 36 (1980), pp. 725-727
[14] Opsal, J.; Rosencwaig, A.: Thermal wave depth profiling: theory. J. Appl. Phys. 53 (1982), pp. 4240-4246
[15] Dillenz, A.; Zweschper, T.; Busse, G.: Elastic wave burst thermography for NDE of subsurface features. INSIGHT Vol. 42, No. 12 (2000), pp. 815-817
Testing formed sheet metal parts using fringe projection and evaluation by virtual distortion compensation
A. Weckenmann, A. Gabbia
Chair Quality Management and Manufacturing Metrology, University Erlangen-Nuremberg, Naegelsbachstr. 25, 91052 Erlangen, Germany
1 Introduction
Nowadays, forming technology allows the production of highly sophisticated free-form sheet material components, affording great flexibility to the design and manufacturing processes across a wide range of industries. Due to versatile forming technologies, sheet material parts with complex shapes can be produced at acceptable cost. A very important factor in the production and measurement of these objects is the elastic springback of the material, which corresponds to the linear part of the stress-strain (σ-ε) curve (see Fig. 1). The curve has a characteristic course depending on the material, but a feature point can be identified up to which elastic recovery is obtained (right case); it is larger for ductile materials and smaller for resilient materials. In cases where the transition between elastic and plastic behaviour is not obvious (left case), the elasticity limit is taken as the stress σs which causes a permanent strain of ε = 0.2% (Fig. 1a). In this work only deformations in the elastic region, i.e. stresses σ < σs, are subject to simulation.

[Figure: two stress-strain diagrams with yield stress σs and tensile strength σR marked]
Fig. 1. Classic stress-strain curve for a steel material
The objects therefore underlie forces that can modify their shape during movement and assembly, according to their structure. Such deformations require the work piece to be clamped during the measurement process with conventional measuring systems (e.g. tactile coordinate measuring systems) in order to set the work piece in its assembled state. In this state geometrical features can be inspected and compared to the respective tolerances. The conventional measuring systems require an accurate alignment of the clamped work piece. Therefore the inspection process chain consists of six steps (see Fig. 2). These steps are laborious and time consuming, and some actions (e.g. clamping) cannot be automated. With this traditional approach it is not possible to test 100% of the production. Progress in the field of optical coordinate measuring systems has brought up systems robust enough to be used in an industrial environment. Such systems as fringe projection allow a fast, parallel and contact-free acquisition of point clouds. The measured data consist of points on the surfaces visible within the measuring range and nearest to the camera. From this a surface model representing the work piece can be extracted, which can be used in finite element method (FEM) simulations of the clamping process. Using a virtual distortion compensation method with an FEM analysis, the inspection process chain can be significantly shortened (e.g. for a car door from 1 day to a few minutes) and automated. Such a process chain could serve to increase control over sheet material production processes and decrease inspection costs. With this method 100% of the production can be tested in line.

[Figure: tactile inspection chain — picking up, clamping, precise positioning, tactile measurement, unclamping, assembly (sources: www.ukon.de, www.zeiss.de); optical inspection chain — picking up, deposit in measuring range, optical measurement (fringe projection), virtual distortion compensation (MARC Mentat)]
Fig. 2. Measuring chain with tactile and with optical measurement system
2 Measuring system and measuring chain
Optical surface measuring systems allow a fast, parallel and contact-free sampling of the measurand's surface. Using a measuring system whose measuring range fits the space where the measurand will be located, one obtains data that can be used for the extraction of the work piece from its surroundings and for the virtual distortion compensation. Although several optical measurement principles have been developed for the measurement of surfaces in 3D (e.g. stereo vision, autofocus), fringe projection systems have turned out to be the most successful method on the market. Their success is based on fast, robust and accurate data acquisition combined with high configuration flexibility. In fringe projection systems the triangulation principle is used to calculate 3D coordinates. If a surface is illuminated by a light spot, a displacement in z direction is translated, via the spot's position, into a displacement in x direction with respect to the observation system under a calibrated angle θ (see Fig. 3). The spot is used as a marker for depth information.

[Figure: laser spot and CCD camera above a reference plane; a height change Δz shifts the observed spot laterally by Δx = Δz·tan θ]
Fig. 3. Triangulation principle with a single CCD camera
Fringe projection systems use a fringe pattern as a marker. The phase of a sinusoidal intensity fringe pattern serves as a parallel marker. This pattern can be projected by DMD projectors. The illuminated scene is observed by at least one camera under a triangulation angle with respect to the projection's optical axis. The phase difference between the captured picture and a picture captured by measuring a calibrated plane (reference plane) is proportional to the position in the plane's normal direction, according to the triangulation principle. This method is suitable for the measurement of non-reflective (diffusely scattering) surfaces. Specular reflection of the projected marker leads to outliers and makes an accurate measurement impossible. As the phase has to be determined from a neighbourhood of the sample point, outliers also have an impact on the measurement accuracy of their neighbouring measured points.
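The phase-to-height relation can be sketched for a simplified plane-reference model (illustrative code; the fringe period on the reference plane and the triangulation angle are assumed known from calibration, and the names are our own):

```python
import numpy as np

def height_from_phase(phase_obj, phase_ref, period, theta):
    """Convert a fringe projection phase map to height over a reference plane.

    Simplified model: a phase difference of 2*pi corresponds to a lateral
    fringe shift of one period, which the triangulation angle theta maps
    to a height z = (dphi / (2*pi)) * period / tan(theta).

    phase_obj, phase_ref: unwrapped phase maps [rad]
    period: fringe period on the reference plane [mm]
    theta: triangulation angle [rad]
    """
    dphi = phase_obj - phase_ref
    return dphi * period / (2 * np.pi * np.tan(theta))
```

A real system replaces this linear relation with a full calibration model, but the sketch shows why the reference-plane measurement is needed: only the phase difference carries height information.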
In the example described, a fringe projection system consisting of two cameras and one digital micromirror device (DMD) projector is used. Its measuring field has a size of 700 mm x 900 mm x 400 mm at a resolution of 0.5 mm in x-y direction and 0.15 mm in z direction. The system was designed for the measurement of large sheet metal parts of the sizes found in the car industry. The following information, objects and systems are required for this work: the work piece, the fringe projection system, a PC with FEM and data processing software, the clamping information, a CAD model of the clamped work piece as an STL (stereolithography) file, and the material properties. The work piece is first measured without a clamping system; in this measured model, holes and edges are detected. Then, with the clamping information (for example the positions of holes and edges) and through an FEM analysis (which needs the material properties), the virtually distortion compensated model is obtained (FEM model). Figure 4 shows the measuring and processing chain used for this analysis. The work piece itself is needed only for the measurement, which takes 8-10 seconds; the other steps are executed on a PC. Production does not have to be stopped to test the work piece. For this reason the method is better suited for an inline test than a conventional tactile measuring process. The most important steps are described in the following sections.

[Figure: processing chain — measurement with fringe projection (point cloud, triangle mesh, smoothing and remeshing); contour and hole detection; FEM analysis with material properties (E-modulus, Poisson ratio, density) and assembly information; comparison of the distortion compensated shape (STL file) with the CAD model (STL file); qualitative and quantitative analysis; results]
Fig. 4. Processing chain
3 Measured data
Fringe projection systems provide the measured coordinates (x, y, z) in a so-called point cloud, which is a list of all measured points (see Fig. 5). As no three-dimensional order is contained in a list, geometrical operations require a comparison of all points to find a point's neighbourhood relations.
Fig. 5. Measured point cloud
Fig. 6. Triangle mesh
Although the point cloud contains all measured information, for a complete surface model the space between the points has to be approximated as well. A linear approximation to the surface is a triangle mesh, in which the space between adjacent points is approximated by triangles (see Fig. 6). Methods like Delaunay triangulation allow the automated determination of a unique triangle patch representing the measured surface. In addition to the point cloud (or vertex list), a triangle mesh consists of a triangle list, each entry being a 3-vector containing the corner points' indices in the vertex list. Such a structure provides the point cloud's topology. Using the intersection of triangles, the mentioned neighbourhood information can be easily accessed, which increases the numerical efficiency of geometrical calculations. As fringe projection systems calculate up to a few million coordinates per measurement, the data have to be reduced for further analysis. Using the edge collapse algorithm, points in regions with small feature density can be removed iteratively as long as a given geometrical error threshold is not exceeded. Thus one can reduce the number of sample points to speed up the subsequent algorithms without losing geometrical information. The noise contained in the measured coordinates can be removed using adaptive smoothing. Several adaptive smoothing methods have been proposed in the past; in the research carried out, the local sphere assumption proposed in [2] was used. Such smoothing algorithms preserve geometrical features and remove noise within specific limits. As this measured data will be used in the later FEM analysis, noise would lead to wrong simulation results. Thus smoothing has to be performed as far as possible while preserving features. The resulting model is subject to further inspection.
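The vertex-list/triangle-list structure described above can be sketched as follows; building a vertex adjacency map once makes later neighbourhood queries cheap (illustrative code, not the software used by the authors):

```python
from collections import defaultdict

def vertex_adjacency(triangles):
    """Build vertex neighbourhood relations from a triangle list.

    triangles: iterable of 3-tuples of vertex indices into the vertex list.
    Returns a dict mapping each vertex index to the set of adjacent vertices.
    """
    adj = defaultdict(set)
    for a, b, c in triangles:
        # Each triangle edge connects two neighbouring vertices.
        adj[a].update((b, c))
        adj[b].update((a, c))
        adj[c].update((a, b))
    return dict(adj)

# Two triangles sharing the edge (1, 2):
mesh = [(0, 1, 2), (1, 3, 2)]
neighbours = vertex_adjacency(mesh)
```

With this map, algorithms like edge collapse or local smoothing can reach a point's neighbours directly instead of comparing all points of the cloud.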
4 Virtual distortion compensation
The measured, smoothed and thinned triangle mesh is the basis for the FEM model of the work piece (here with 180,000 degrees of freedom). First, the mesh points on the surface have to be translated by half of the metal thickness in normal direction to obtain the midsurface. Using e.g. the MARC Mentat pre-processor, each triangle in this mesh can be specified to represent a triangular shell element with a given thickness. The boundary conditions for the FEM problem are found by the feature extraction described above. As the fixing process acts with given translations and torques at the holes, the holes have to be detected on the mesh. From a nominal-actual comparison of the position and orientation of the holes, the resulting translations and torques can be determined. The corresponding nodes in the FEM model are then assigned the calculated translations and torques. Solvers like MARC are used to calculate the shape of the measured object under the given boundary conditions using the updated Lagrange algorithm. For a surface description, the nodes on the midsurface have to be shifted back in normal direction.

[Figure: FEM mesh with boundary conditions at points A and B; colour-coded overlay with positive and negative deviations]
Fig. 7. FEM model with boundary conditions
Fig. 8. Qualitative analysis (FEM model and CAD model)
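The midsurface offset step can be sketched as below (illustrative code; vertex normals are assumed to be precomputed, and a single thickness value is used for the whole part, as in the text):

```python
import numpy as np

def offset_to_midsurface(vertices, normals, thickness):
    """Shift measured surface points by half the sheet thickness along the
    vertex normals to obtain the midsurface for shell elements.

    vertices: (n, 3) array of measured surface points
    normals:  (n, 3) array of unit outward vertex normals
    thickness: sheet metal thickness (one value for the whole work piece)
    """
    return vertices - 0.5 * thickness * normals
```

After the FEM solution, the same offset is applied with the opposite sign to shift the midsurface nodes back to a surface description.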
In an experiment, the work piece was measured in a relaxed state and in two clamped states. By means of virtual distortion compensation, the unclamped state was transformed into the clamped states.
A qualitative analysis consists of a colour-coded comparison: the two shapes (CAD model and FEM model) are overlapped. Taking the surface of the CAD model as reference, positive or negative deviations can be characterized by the visualized colour in the specific area of the work piece under consideration (see Fig. 8).
[Figure: sectional profiles (50 mm x 25 mm detail) of the clamped and the virtually clamped work piece with the observed local point 1 marked]
Fig. 9. Quantitative analysis: clamped (red) and virtually clamped (green) datasets
A quantitative analysis of a cut through the data (see Fig. 9) shows the success of the method. Figure 10 shows the deviation of the marked points from their nominal positions with and without virtual distortion compensation.

[Bar chart: deviation in mm (0-6) at points 1-3, with and without FEM analysis]
Fig. 10. Deviation of a point from its nominal position with and without virtual distortion compensation
The limits of the method lie in the measurement uncertainty of the measured coordinates (measuring system), the stability and accuracy of the feature extraction algorithms, and the quality of the thickness assumption (there is only a single value for the whole work piece).
5 Conclusion
Fringe projection systems are suitable for the fast and contact-free measurement of formed sheet metal parts without clamping. The measurement result can be used to extract features of the object such as holes or edges. Some of these are relevant for the assembly process, others are subject to further inspection. From the information about the transformation of the assembly features from their actual to their nominal position, virtual distortion compensation can be used to calculate feature parameters of the distortion compensated shape. Thus the inspection process chain can be shortened and automated at the same time. The current limitation of the method is the measurement uncertainty of the measured coordinates. A further extension of this work would involve a comparison of the obtained results with tactile coordinate measurements.
6 References
1. Frankowski, G.; Chen, M.; Huth, T.: Real-time 3D shape measurement with digital stripe projection by Texas Instruments micromirror devices DMD. Proceedings of SPIE Vol. 3958, San Jose, USA, 22.-28.1.2000, pp. 90-106
2. Karbacher, S.; Häusler, G.: A new approach for modelling and smoothing of scattered 3D data. In: Ellson, R.N.; Nurre, J.H. (Eds.): Proceedings of SPIE Vol. 3313, San Jose, USA, 24.-30.1.1998, pp. 168-177
3. Weckenmann, A.; Gall, P.; Hoffmann, J.: Inspection of holes in sheet metal using optical measuring systems. In: Proceedings of the VIth International Science Conference Coordinate Measuring Technique, 21.-24.4.2004, Bielsko-Biala, Poland, pp. 339-346
4. Sun, Y.; Page, D.; Paik, J.K.; Koschan, A.; Abidi, M.A.: Triangle mesh-based edge detection and its application to surface segmentation and adaptive surface smoothing. In: Proceedings of the IEEE 2002 International Conference on Image Processing (ICIP 2002), 22.-25.9.2002, Rochester, New York, USA, pp. III-825-828
5. Steinke, P.: Finite-Elemente-Methode. Springer, Berlin, Heidelberg, New York, 2004
6. Asano, T.; Klette, R.; Ronse, C.: Geometry, Morphology, and Computational Imaging. Proceedings of the 11th International Workshop on Theoretical Foundations of Computer Vision, Dagstuhl Castle, Germany, 7.-12.4.2002
7. Weckenmann, A.; Gall, P.; Ernst, R.: Virtuell einspannen, Prüfprozess flächiger Leichtbauteile verkürzen [Virtual clamping, shortening the inspection process for flat lightweight parts]. In: Qualität und Zuverlässigkeit QZ 49 (2004) 11, pp. 49-51
8. Weckenmann, A.; Knauer, M.; Killmaier, T.: Uncertainty of coordinate measurements on sheet metal parts in automotive industry. In: Geiger, M.; Kals, H.; Shrivani, B.; Singh, U. (Eds.): Sheet Metal 1999. Proceedings of the 7th International SheMet, 27.-28.9.1999, Erlangen, pp. 109-116
9. Weckenmann, A.; Gall, P.; Ströhla, S.; Ernst, R.: Shortening of the inspection process chain by using virtual distortion compensation. In: 8th International Symposium on Measurement and Quality Control in Production, 12.-15.10.2004, Erlangen, Germany, VDI-Berichte 1860, pp. 137-143
10. Weckenmann, A.; Gall, P.; Gabbia, A.: 3D surface coordinate inspection of formed sheet material parts using optical measurement systems and virtual distortion compensation. In: Proceedings of the 8th International Symposium on Laser Metrology, 12.-18.2.2005, Merida, Mexico
Fatigue damage precursor detection and monitoring with laser scanning technique
V.B. Markov, B.D. Buckner, S.A. Kupiec, J.C. Earthman*
MetroLaser, Inc., 2572 White Road, Irvine, CA 92614, USA
* Department of Chemical Engineering and Materials Science, University of California, Irvine, CA 92697, USA
1 Introduction Over the last decade, investigations into fatigue damage precursors have established that metal components subjected to cyclic stress develop surface-evident defects such as microcracks and slip bands [1-4]. These defects evolve with the development of fatigue. Detection of such damage precursors on a component and examination of their evolution can thus provide information on the state of the critical parts and prognostics of their catastrophic failure. Evolving surface defects, although of a different nature, can similarly be observed on some materials when subjected to thermal cycling fatigue.
2 Technique The damage detection technique we use is to scan a focused laser beam over the component surface and then detect the scattered light intensity (Fig.1). The scattering signal variation as the beam sweeps over the microcracks provides information on the specimen surface. Additionally, the microscale surface texture (roughness) affects both the mean signal level and (under coherent illumination) the statistical properties of the speckle. The areal surface microcrack density tends to increase in characteristic patterns as fatigue progresses in the component, and knowledge of this pattern, combined with the measured crack density from the scanning technique, can provide a means of gauging the fatigue state of the component. Surface micro-roughness evolution can be more complex, but it does yield some information as well.
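The scan-line evaluation can be sketched as a simple peak count on the scattering signal (illustrative code only; the thresholding scheme is our own assumption, not the authors' hardware-based feature extraction):

```python
import numpy as np

def count_scatter_peaks(signal, n_sigma=3.0):
    """Count local maxima in a scan-line scattering signal that rise more
    than n_sigma standard deviations above the mean level.

    signal: 1D array of scattered-light intensity samples along the scan.
    Returns the number of detected peaks, a proxy for surface defect density.
    """
    x = np.asarray(signal, dtype=float)
    threshold = x.mean() + n_sigma * x.std()
    above = x > threshold
    # A local maximum is larger than its left neighbour and at least
    # as large as its right neighbour.
    local_max = np.zeros_like(above)
    local_max[1:-1] = (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])
    return int(np.count_nonzero(above & local_max))
```

Counting threshold crossings rather than storing images is what keeps the data throughput low compared with image-based techniques.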
[Figure: incident beam, reflected beam and scattered light from surface microcracks]
Fig. 1. Illustration of light scattering from defects on a fatigued surface
With this technique, we have investigated fatigue damage precursors on the surfaces of nickel-base superalloy turbine components under low-cycle fatigue conditions. The scanning approach allows selective exploration of the information space (sometimes with hardware-based feature extraction), requiring a much lower data throughput rate than an image-based technique would. As a result, the present technique is capable of scanning speeds substantially greater than those achieved with image processing methods. Specially designed Waspaloy specimens and sections of turbine rotors were tested using a servo-hydraulic MTS machine at ambient temperature under load control conditions, as well as at elevated temperature. The fatigue damage was monitored by scanning a laser beam along the specimen in situ and during periodic interruptions of the cyclic loading. Acetate replicas of the gage section surface were also made to examine the surface morphology using SEM. Comparisons of the results demonstrate that a rapid rise in the mean defect frequency corresponds to the emergence of surface relief features that follow the grain boundaries intersecting the surface in the areas of greatest stress. This surface relief can be attributed to relatively soft precipitate-free zones along the grain boundaries that preferentially deform under fatigue loading conditions, leading to the formation of microcracks. Fig. 2 shows a comparison of the visual count of slip-band groups and deformed grain boundaries with the scattering peak count along a Waspaloy fatigue coupon: regions with high fatigue defect densities also have high densities of scattering signal peaks, demonstrating that this technique provides a good estimate of surface fatigue damage.
[Plot: scanned peak count and microscopic defect count vs. position (µm) along the MWF 4695 slip band region]
Fig. 2. Comparison of scanned defect counts and microscopically sampled defect counts (in a 0.3 mm² area) at the same positions on the Waspaloy sample
Measurements of scattering peaks over the fatigue life have been performed as well (Fig. 3). These measurements show a general trend toward increasing surface defect density, though a few samples show anomalous variations, probably due to surface contamination, which is one of the major hurdles to practical implementation of the technology. This technology is being developed for both laboratory and field use, along with approaches utilizing compact beam-scanning devices for in-situ structural health monitoring in several areas.
[Figure 3: mean defect frequency (log scale) versus cycles (in thousands) for coarse-grained Waspaloy samples 95 s1 through 98 s2]
Fig. 3. Mean defect frequency on different Waspaloy samples with an increasing number of fatigue cycles
3 Acknowledgments Portions of the work reported here were supported under U.S. Army Aviation Applied Technology Directorate, RDECOM, Contract DAAH10-02C-0007 and U.S. Army Aviation & Missile Command (DARPA), Contract DAAH01-02-C-R192. The information contained herein does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
4 References 1. Schmidt, P, Earthman, JC (1995) Development of a scanning laser crack detection technique for corrosion fatigue testing of fine wire. J. Mater. Res. 10:372 2. Chou, KJC, Earthman, JC (1997) Characterization of low-cycle fatigue damage in Inconel 718 by laser light scattering. J. Mater. Res. 12:2048-2056 3. Earthman, JC, Angeles, J, Markov, V, Trolinger, J, Moffatt, J (2004) Scattered light scanning for fatigue damage precursor detection on turbine components. Materials Evaluation 62:460-465 4. Lee, C, Chao, YJ, Sutton, MA, Peters, WH, Ranson, WF (1989) Determination of plastic strains at notches by image-processing methods. Experimental Mechanics 29:214-220
Analysis of localization of strains by ESPI, in equibiaxial loading (bulge test) of copper sheet metals Guillaume Montay1, Ignacio Lira2, Marie Tourneix1, Bruno Guelorget1, Manuel François1 and Cristián Vial2 1 Université de Technologie de Troyes, Laboratoire des Systèmes Mécaniques et d’Ingénierie Simultanée (LASMIS FRE CNRS 2719), 12 rue Marie Curie, BP 2060, 10010 Troyes, France. 2 Pontificia Universidad Católica de Chile, Department of Mechanical and Metallurgical Engineering, Vicuña Mackenna 4860, Santiago, Chile.
1 Introduction The problem of strain localization is important in sheet metal forming, as it determines the forming limit diagram of the material. To analyze this process, engineers use a variety of tests. One of them is the bulge test. In it, an initially flat specimen is placed between a matrix and a blank holder, and hydraulic pressure is applied on one of its surfaces. An approximately equi-biaxial strain loading path is obtained [1,2]. In this paper we report on the application of electronic speckle-pattern interferometry (ESPI) to determine strain localization in the bulge test. A video sequence of images was captured and stored. The video allowed a posteriori analysis of the test. By subtracting pairs of images, fringes were obtained at load steps close to fracture. In this way, the progress of local strain rate at various positions on the apex of the dome was followed. The stages of diffuse and localized necking [3] were clearly seen.
2 Experimental procedure Experiments were performed on cold-rolled copper plates, 0.8 mm thick, annealed at 400 °C. Pressure was applied with a tensile testing machine through a hydraulic jack. Oil flow was 105 mm3/s, except near the end of the test, where it was changed manually to 52.5 mm3/s to follow the localization stage more precisely. A computer-connected pick-up sensor was used to monitor pressure. After reaching the maximum, a short period of nearly constant pressure ensued, followed by a slow decrease until fracture. Total strains between the initial flat state and the current state were found from white-light images of a square grid impressed on the specimen. The grid sizes in the x and y directions, Lx and Ly, were measured at different positions and at different load stages. This information (in pixels) was transformed into millimetres with the magnification factor of the imaging system. The strains were computed as εxx = ln(Lx/L0) and εyy = ln(Ly/L0), where L0 = 5 mm was the initial grid size. The strain in the third direction was obtained under the hypothesis of incompressibility and neglecting elastic strain, giving εzz = −(εxx + εyy). Because of the height variation, the magnification factor changed during the test. This change was followed by monitoring the image of a piece of graph paper glued onto the specimen near its apex.
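The strain computation just described can be sketched in a few lines. The grid readings fed in below are hypothetical, chosen only so that the thinning lands near the maximum reported later in the paper.

```python
import math

L0 = 5.0   # initial grid size, mm (from the text)

def strains_from_grid(Lx, Ly):
    """Logarithmic strains from deformed grid sizes, as described above.

    eps_zz follows from incompressibility, neglecting elastic strain.
    """
    eps_xx = math.log(Lx / L0)
    eps_yy = math.log(Ly / L0)
    eps_zz = -(eps_xx + eps_yy)
    return eps_xx, eps_yy, eps_zz

# Hypothetical near-equibiaxial grid reading close to the reported thinning
exx, eyy, ezz = strains_from_grid(9.6, 9.6)
thickness = 0.8 * math.exp(ezz)   # final thickness in mm for the 0.8 mm sheet
```

For these assumed readings the through-thickness strain is about −130 % and the predicted final thickness is near 0.22 mm, of the same order as the values quoted in the results section.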
Fig. 1. Example of a fringe pattern depicting eleven bright fringes. Fringes are contour lines of equal deformation, about 1 µm per fringe order
An interferometer with in-plane sensitivity in the y direction was placed above the specimen to measure deformation due to bulging. An expanded He-Ne laser beam was divided in two by a beam splitter. The two beams impinged from opposite sides onto the surface. They produced speckle patterns that were captured and stored at a rate of four pictures per second. Electronic fringes were obtained by subtracting pairs of images. From one fringe to the next, the displacement in the y direction between the two images was S = λ/(2 sin α), where α is the incidence angle. We used α = 18°, giving a sensitivity of about 1 µm per fringe. However, because of the change in height, the incidence angle of the laser beams had to be readjusted several times to maintain constant sensitivity. Figure 1 shows one of the fringe images; it corresponds to the central part of the dome. On this image, three parallel lines were drawn along the y direction; the one in the centre was close to the fracture zone. Fringe positions on these lines were measured with image-processing software. The strain increment along each line was obtained as εyy,inc = NS/L, where L is the length of the line in millimetres and N is the number of fringes that cross the line. The conversion from distance in pixels to length was carried out as explained above. Dividing the strain increment by the time interval Δt between the two images gave the strain rate. In practice, Δt was about 2 seconds.
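Numerically, the fringe-to-strain-rate conversion is a one-liner. In this sketch the fringe count follows figure 1, while the line length and time interval are assumed values for illustration only.

```python
import math

LAM = 632.8e-6                 # He-Ne wavelength, mm
ALPHA = math.radians(18.0)     # incidence angle used in the test

def strain_rate_from_fringes(n_fringes, line_length_mm, dt_s):
    """Strain increment and strain rate from an ESPI fringe count.

    S = LAM / (2 sin ALPHA) is the displacement per fringe (about 1 um);
    N fringes crossing a line of length L give eps_inc = N * S / L.
    """
    s_per_fringe = LAM / (2.0 * math.sin(ALPHA))   # mm per fringe
    eps_inc = n_fringes * s_per_fringe / line_length_mm
    return eps_inc, eps_inc / dt_s

# Eleven fringes (as in figure 1) over an assumed 20 mm line, dt = 2 s
eps_inc, rate = strain_rate_from_fringes(11, 20.0, 2.0)
```

With these assumptions the strain rate comes out near 3e-4 s⁻¹, the same order of magnitude as the curves in figure 2.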
3 Results As expected, total strains εxx and εyy were about equal, indicating isotropy. In the z direction, maximum strain was about 130%. This value corresponds to a final thickness of 222 µm, in agreement with the measured thickness after the crack, 238 µm.
Fig. 2. Strain rate as a function of average strain for the three lines in figure 1
Figure 2 depicts the strain rate as a function of the average strain ε for the three lines in figure 1. The plots start from an average strain of 27%, below which the strain rate was almost the same for the three lines. After that, the strain rate at the centre line increased a little faster. Strong localization started at ε ≈ 59%. Maximum strain rate was obtained at the central line, close to the final fracture. The two regions in figure 2 correspond to those of diffuse and localized necking.
4 Conclusions In this paper, an original application of ESPI in materials engineering has been described. The technique was used to analyze a bulge test in order to study strain localization by following the progress of the strain rate. Results show that ESPI allows clear detection of the two stages of localization, namely the diffuse and localized necks. Using this technique, forming limit diagrams can thus be established accurately.
Acknowledgments The support of Conicyt (Chile) through Fondecyt 1030399 and ECOS/CONICYT C01E04 research grants is gratefully acknowledged.
References 1. Atkinson M (1997) Accurate determination of biaxial stress-strain relationships from hydraulic bulging tests of sheet metals. International Journal of Mechanical Sciences 39:761-769 2. Gutscher G, Wu HC, Ngaile G, Altan T (2004) Determination of flow stress for sheet metal forming using the viscous pressure bulge (VPB) test. Journal of Materials Processing Technology 146:1-7 3. Rees DWA (1996) Sheet orientation and forming limits under diffuse necking. Applied Mathematical Modelling 20: 624-635
Laser vibrometry using digital Fresnel holography Pascal Picart, Julien Leval, Jean Pierre Boileau, Jean Claude Pascal Laboratoire d’Acoustique de l’Université du Maine, Avenue Olivier Messiaen, 72085 LE MANS Cedex 9, France ; email :
[email protected] 1 Introduction Recently, it has been demonstrated that digital Fresnel holography offers new opportunities for metrological applications, examples of which include object deformation [1], surface shape measurement [2], phase-contrast microscopic imaging [3] and twin-sensitivity measurements [4]. Nevertheless, vibration amplitude and phase retrieval, which is necessary for some applications, remains a challenge for full-field optical metrology. This paper presents a full-field vibrometer using digital Fresnel holography. A simple three-step algorithm is presented, so vibration amplitude and phase extraction does not require many images, and the information is full field, leading to direct full-field vibrometry. Experimental results are presented, and in particular the mean quadratic velocity is estimated using the full-field digital holographic vibrometer.
2 Theory A rough object submitted to a harmonic excitation and illuminated by a coherent laser beam induces a spatio-temporal optical phase modulation. Thus, at any time t, the surface of the illuminated object diffuses an optical wave written as

A(x, y, t) = A0(x, y) exp[iψ0(x, y)] × exp[iΔφm(x, y) sin(ω0t + φ0(x, y))]  (1)
where Δφm is the maximum amplitude at pulsation ω0 = 2π/T0, and φ0 is the phase of the mechanical vibration. In equation (1), A0 is the modulus of the diffused wave and ψ0 is a random phase uniformly distributed over the interval [−π, +π]. At any distance d0 the diffracted object field can be mixed with a reference wave of spatial frequencies {u0, v0}. Interference between the diffused wave and the plane reference wave R(x′, y′) = aR exp[2iπ(u0x′ + v0y′)] generates an instantaneous hologram. The instantaneous hologram is time-integrated by the solid-state sensor during an exposure time noted T. The digital reconstruction is computed by a discrete two-dimensional Fresnel transform of the interference pattern. After digital reconstruction at distance d0, the reconstructed +1 order is:
AR+1(x, y, d0, tj) ≈ NMλ⁴d0⁴ R*(x, y) exp[iπλd0(u0² + v0²)] × ∫_{tj}^{tj+T} A(x − λu0d0, y − λv0d0, t) dt  (2)
where {M, N} corresponds to the number of pixels. The +1 order is then localized at coordinate (λu0d0, λv0d0) in the image plane. Temporal integration can be derived considering the harmonic excitation expressed in equation (1). The phase of the digitally reconstructed object can be calculated and it is found to be
arg{AR+1(x, y, d0, tj)} = 2π(u0x + v0y) + πλd0(u0² + v0²) + ψ0(x, y) + Δφj(x, y)  (3)

with

Δφj(x, y) = arctan{ qj(x, y) sin[Δφj(x, y) + Θj(x, y)] / [1 + qj(x, y) cos(Δφj(x, y) + Θj(x, y))] }

where

qj exp(iΘj) = Σ_{k=−∞}^{+∞} P(kπT/T0) Jk(Δφm) exp[ik(ω0tj + φ0 + ω0T/2)]  (4)

and

P(x) = Σ_{n=1}^{+∞} (−1)ⁿ x^{2n−1} / (2n−1)!  (5)
In the ideal case where the exposure time T is infinitely small compared to the vibration period T0, the phase of the reconstructed object contains three unknowns, so only three equations are necessary to extract the amplitude and phase of the vibration. The set of three equations with three unknowns can be obtained by applying a π/2 phase shift between excitation and recording. Under these considerations, with (j = 1, 2, 3), we get
arg{AR+1(tj)} = ψj = ψ′0 + Δφm sin[ω0tj + φ0 + (j − 1)π/2]  (6)
It is straightforward to demonstrate that the amplitude and phase of the vibration may be extracted with the two following equations:
Δφm(x, y) = ½ [Δψ13²(x, y) + (Δψ23(x, y) + Δψ21(x, y))²]^{1/2}  (7)

φ0(x, y) = arctan[ Δψ13(x, y) / (Δψ23(x, y) + Δψ21(x, y)) ]  (8)

where Δψkl = ψk − ψl.
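As a minimal numerical sketch of this three-step extraction, the function below applies equations (7) and (8) to three π/2-shifted phase maps, using arctan2 for quadrant robustness. The operator between Δψ23 and Δψ21 did not survive reproduction; a plus sign is assumed here, since that choice makes the recovery exact for phases obeying equation (6).

```python
import numpy as np

def extract_vibration(psi1, psi2, psi3):
    """Vibration amplitude and phase from three pi/2-shifted phase maps.

    With dpsi_kl = psi_k - psi_l, equations (7)-(8) read
    amplitude = 0.5 * sqrt(dpsi13**2 + (dpsi23 + dpsi21)**2)
    phase     = atan2(dpsi13, dpsi23 + dpsi21)
    """
    d13 = psi1 - psi3
    d23 = psi2 - psi3
    d21 = psi2 - psi1
    return 0.5 * np.hypot(d13, d23 + d21), np.arctan2(d13, d23 + d21)

# Synthetic maps following equation (6), with a random speckle phase psi0'
rng = np.random.default_rng(0)
psi0 = rng.uniform(-np.pi, np.pi, (64, 64))
amp_true, phase_true = 3.0, 0.8
psi = [psi0 + amp_true * np.sin(phase_true + j * np.pi / 2) for j in range(3)]
amp, phase = extract_vibration(*psi)
```

Note that the random speckle phase ψ′0 cancels in every Δψkl, which is why the amplitude and phase come back exactly on a synthetic test.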
3 Influence of pulse width In practical situations, it is common for T to be not much smaller than T0. Because of the non-zero cyclic ratio Rc = T/T0, the phase measurement Δψkl includes error terms dΔψkl = dψk − dψl. The variation terms dψj are extracted from equation (3) and may be approximated as [5]
dψj ≈ qj sin(Δφj + Θj) / [1 + qj cos(Δφj + Θj)]  (9)
Considering a linear approximation of the phase error for the vibration amplitude and phase, it is found that:

dΔφm = α1,1 + Σ_{k=1}^{+∞} (α1,4k−1 + α1,4k+1) cos(4kω0t1 + 4kφ0 + 2kω0T)  (10)
dφ0 = (1/Δφm) × Σ_{k=0}^{+∞} (α1,4k+3 + α1,4k+5) sin[2(2k + 2)(ω0t1 + φ0 + ω0T/2)]  (11)
where α1k is the kth coefficient of the Fourier expansion of dψ1. These errors have a period of one quarter of the vibration period. In order to quantify the error it is useful to define a criterion which takes these characteristics into account. We have chosen criteria that correspond to the mean power of expressions (10) and (11). Figure 1 presents a 3D plot of the criterion for the Δφm measurement as a function of Δφm and Rc. This result shows the highly non-linear behavior of the distortion in relation to vibration amplitude and cyclic ratio. As a conclusion, it can be seen that the cyclic ratio Rc must be smaller than 1/Δφm to avoid distortion.
Fig. 1. 3D plot of criterion for amplitude distortion vs Rc and Δφm (%)
4 Experimental set-up The digital holographic set-up is described in figure 2. The object under sinusoidal excitation is a loudspeaker 60 mm in diameter, placed at d0 = 1400 mm from the detector area. The off-axis holographic recording is carried out using lens L2, which is displaced out of the afocal axis by means of two micrometric transducers [4]. The detector is a 12-bit digital CCD with (M×N) = (1024×1360) pixels of pitch px = py = 4.65 µm. Digital reconstruction was performed with K = L = 2048 data points. The synchronisation between acquisition and excitation is performed by use of a stroboscopic set-up. The system is based on mechanical shuttering with a rotating aperture [5]. Considering the mechanical and electronic devices, the cyclic ratio of the stroboscope is found to be Rc ≈ 1/18, so the stroboscopic set-up is designed for vibration amplitudes smaller than 18 rad in order to keep the amplitude distortion below 0.15 %.
Fig. 2. Experimental set-up
5 Experimental results The loudspeaker was excited in sinusoidal regime from 1.36 kHz to 4.32 kHz in steps of 40 Hz. Figure 3 shows the phase maps obtained at the different steps of the process for a frequency of 2 kHz. Figure 4 shows the vibration amplitude and phase at 2 kHz extracted from the three π/2 phase-shifted phase maps of figure 3, according to algorithms (7) and (8). In figure 4, the maximum amplitude is found to be 16.1 rad, so the distortion is less than 0.25 % for Δφm. The region of interest in each map contains about 240000 data points. The evaluation of the amplitude and the phase of the membrane of the loudspeaker determines its velocity along the z direction. A criterion usually used for quantifying the vibration is the mean quadratic velocity:
⟨vz²⟩ = (1/(S T0)) ∫∫_S ∫_0^{T0} vz²(x, y, t) dt dx dy  (12)
where S is the surface of the vibrating object and the velocity is given by
vz(x, y, t) = [λf0/(1 + cos θ)] Δφm(x, y) cos(ω0t + φ0)  (13)
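Equations (12) and (13) combine into a direct surface average, since the time integral of cos² over one period contributes a factor of 1/2. The sketch below assumes a uniform amplitude map at the 16.1 rad maximum reported at 2 kHz, a He-Ne wavelength, and normal illumination (θ = 0); all three are illustrative values, not the paper's actual parameters.

```python
import numpy as np

def mean_quadratic_velocity(dphi_m, lam, f0, theta):
    """Surface- and time-averaged squared velocity, per eqs. (12)-(13).

    Averaging cos^2 over one period gives 1/2, so this returns the
    surface mean of vz_max^2 / 2 (SI units if lam is in metres).
    """
    vz_max = lam * f0 / (1.0 + np.cos(theta)) * dphi_m   # velocity amplitude map
    return 0.5 * float(np.mean(vz_max ** 2))

# Uniform 16.1 rad amplitude map over an assumed 100 x 100 grid
dphi = np.full((100, 100), 16.1)
v2 = mean_quadratic_velocity(dphi, lam=632.8e-9, f0=2000.0, theta=0.0)
```

In practice the measured Δφm(x, y) map from figure 4 would replace the uniform array, so spatial non-uniformity of the membrane enters the average automatically.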
Fig. 3. Phase maps for vibration amplitude and phase reconstruction at 2 kHz
Fig. 4. Vibration amplitude (left) and vibration phase (right) for a frequency of 2 kHz
Figure 5 shows the mean quadratic velocity of the loudspeaker extracted from the set of data. The frequency varies from 1.36 kHz to 4.32 kHz in steps of 40 Hz.
Fig. 5. Mean quadratic velocity of the membrane of the loudspeaker
In order to validate the stroboscopic measurement, figure 6 shows a comparison between time-averaged measurement [6] and a computation of time-averaging from the stroboscopic result.
Fig. 6. Comparison between time-averaging and stroboscopic measurement (left: experimental; right: computation with stroboscopic result)
The experimental Bessel fringes and those which were computed are seen to be in close agreement. This confirms the relevance of the set-up to perform full field accurate amplitude and phase vibration measurements.
6 Conclusion This paper has discussed a full-field vibrometer based on digital Fresnel holography. The influence of the pulse width was studied and analytical expressions of the amplitude and phase distortion were proposed. As a result, it was demonstrated that amplitude and phase extraction are possible with a cyclic ratio of about 1/Δφm if the maximum vibration amplitude does not exceed Δφm. Thus, the use of a moderate cyclic ratio is possible in vibration analysis. Experimental results were presented and demonstrate the relevance of digital Fresnel holography in vibrometry. The comparison between time-averaging and stroboscopic measurement confirms the accuracy of the method.
7 References 1. Pedrini, G, Tiziani, HJ (1995) Digital double pulse holographic interferometry using Fresnel and image plane holograms. Measurement 18:251-260 2. Wagner, C, Seebacher, S, Osten, W, Jüptner, W (1999) Digital recording and numerical reconstruction of lensless Fourier holograms in optical metrology. Applied Optics 28:4812-4820 3. Dubois, F, Joannes, L, Legros, JC (1999) Improved three-dimensional imaging with a digital holography microscope with a source of partial spatial coherence. Applied Optics 38:7085-7094 4. Picart, P, Moisson, E, Mounier, D (2003) Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms. Applied Optics 42:1947-1957 5. Leval, J, Picart, P, Boileau, JP, Pascal, JC (2005) Full field vibrometry with digital Fresnel holography. Applied Optics (to be published) 6. Picart, P, Leval, J, Mounier, D, Gougeon, S (2003) Time-averaged digital holography. Optics Letters 28:1900-1902
2D laser vibrometry by use of digital holographic spatial multiplexing Pascal Picart, Julien Leval, Michel Grill, Jean Pierre Boileau, Jean Claude Pascal Laboratoire d’Acoustique de l’Université du Maine, Avenue Olivier Messiaen, 72085 LE MANS Cedex 9, France ; email :
[email protected] 1 Introduction In this paper we present opportunities for full-field two-dimensional vibrometry. We demonstrate that it is possible to simultaneously encode and decode the 2D amplitude and phase of harmonic mechanical vibrations. The process allows determination of the in-plane and out-of-plane components of the vibration of a sinusoidally excited object. The principle is based on spatial multiplexing in digital Fresnel holography [1].
2 Spatial multiplexing of digital holograms Spatial multiplexing is based on the incoherent addition of two views of the object of interest. For this, we used a twin Mach-Zehnder interferometer. The incoherent summation of holograms is realized by orthogonal polarization along the two interferometers. The spatial frequencies of the reference waves are adjusted such that there is no overlap between the five diffracted orders when the field is reconstructed [1]. Figure 1 presents the set-up. De-multiplexing is the step which consists in determining the point-to-point relation between the two holograms. It is achieved according to [1]. Digital reconstruction is performed following diffraction theory under the Fresnel approximation [2]. When the rough object is submitted to a harmonic excitation, it induces a spatio-temporal displacement vector which can be written as U(t) = ux sin(ω0t + φx)i + uy sin(ω0t + φy)j + uz sin(ω0t + φz)k, where {ux, uy, uz} are the maximum amplitudes at pulsation
ω0 = 2π/T0, and {φx, φy, φz} are the phases of the mechanical vibration along the three directions (see reference axes of figure 1).
Fig. 1. Twin Mach-Zehnder digital holographic interferometer for 2D vibrometry
Because of the sensitivity of the set-up, at any time tj at which we perform a recording followed by a reconstruction, the phase of the ith (i = 1, 2) reconstructed object is given by

ψij = ψ0 ± Δφx sin θ sin(ω0tj + φx) + Δφz(1 + cos θ) sin(ω0tj + φz)  (1)
where ψ0 is a random phase mainly due to the roughness of the object and Δφx,z = 2πux,z/λ.
3 2D amplitude and phase retrieval If we compute the phase ψij at three different synchronous times t1, t2, t3 such that ω0(t2 − t1) = π/2 and ω0(t3 − t1) = π, we can extract the quantities Δψikl = ψik − ψil. Synchronisation can be realized with a stroboscopic set-up [3]. The quantities Δψi13, Δψi21 and Δψi23 carry information on the in-plane and out-of-plane vibrations. Note that these quantities are determined modulo 2π and must be unwrapped. Determination of pure in-plane and out-of-plane phase terms is performed by computing the continuous quantities Δψkl_x = Δψ1kl − Δψ2kl and Δψkl_z = Δψ1kl + Δψ2kl. With these quantities it is now possible to extract the amplitude of the vibration according to the following algorithm [3]
ΔφA = ½ [Δψ13_A² + (Δψ23_A + Δψ21_A)²]^{1/2}  (2)
where A = x or A = z. For the vibration phase, the following relation holds
φA = arctan[ Δψ13_A / (Δψ23_A + Δψ21_A) ]  (3)
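To see the twin-sensitivity combination work end to end, the sketch below simulates the six phase maps of equation (1) (the speckle phase ψ0 is omitted since it cancels in every Δψkl), forms the difference and sum combinations, and applies equations (2) and (3). The angle and amplitudes are made-up values; the operators lost in reproduction are assumed to be plus signs, and the geometric gains 2 sin θ and 2(1 + cos θ) are divided out explicitly here so that ux- and uz-type amplitudes come back directly.

```python
import numpy as np

THETA = np.radians(18.0)                  # illustrative incidence angle
S = {1: +1.0, 2: -1.0}                    # opposite in-plane sensitivities

def phase_maps(dphi_x, phi_x, dphi_z, phi_z):
    """psi_ij per equation (1), for j = 0, 1, 2 (successive pi/2 steps)."""
    return {i: [S[i] * dphi_x * np.sin(THETA) * np.sin(phi_x + j * np.pi / 2)
                + dphi_z * (1 + np.cos(THETA)) * np.sin(phi_z + j * np.pi / 2)
                for j in range(3)]
            for i in (1, 2)}

def recover(maps):
    """In-plane (x) and out-of-plane (z) amplitude and phase, eqs. (2)-(3)."""
    out = {}
    for label, sign, gain in (('x', -1.0, 2 * np.sin(THETA)),
                              ('z', +1.0, 2 * (1 + np.cos(THETA)))):
        d = [maps[1][j] + sign * maps[2][j] for j in range(3)]  # dpsi_kl_A inputs
        d13, d23, d21 = d[0] - d[2], d[1] - d[2], d[1] - d[0]
        out[label] = (0.5 * np.hypot(d13, d23 + d21) / gain,    # amplitude
                      np.arctan2(d13, d23 + d21))               # phase
    return out

res = recover(phase_maps(1.5, 0.4, 2.5, -0.9))
```

The difference of the two interferometers' maps cancels the out-of-plane term, and their sum cancels the in-plane term, which is exactly why the ± sign in equation (1) enables the separation.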
4 Experimental results We applied the set-up and the measurement principle to an industrial automotive car joint made of elastomer material. The inspected zone on the piece is 58.4 mm by 15.4 mm, at a distance of 1348 mm from the CCD. Figure 2 shows the multiplexed holograms of the piece.
Fig. 2. Reconstructed multiplexed holograms of the car joint piece
Figure 3 shows the in-plane and out-of-plane vibrations for a frequency of 690 Hz. The determination of amplitude and phase allows the computation of the mean quadratic velocity along the two sensitivities. It is given by
⟨v²⟩ = (1/(S T0)) ∫∫_S ∫_0^{T0} v²(x, y, t) dt dx dy  (4)
where S is the surface of the object. Figure 4 shows the mean quadratic velocities extracted from the two sets of measurements for an excitation frequency varying from 200 Hz to 1000 Hz.
Fig. 3. Vibration amplitudes and phases along i and k directions at a frequency of 690 Hz
Fig. 4. Mean quadratic velocities along i and k directions
5 References 1. Picart, P, Moisson, E, Mounier, D (2003) Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms. Applied Optics 42:1947-1957 2. Kreis, Th, Adams, M, Jüptner, W (1997) Methods of digital holography : a comparison. Proceedings SPIE 3098:224-233 3. Leval, J, Picart, P, Boileau, JP, Pascal, JC (2005) Full field vibrometry with digital Fresnel holography. Applied Optics (to be published)
Fatigue Detection of Fibres Reinforced Composite Materials by Fringes Projection and Speckle Shear Interferometry Ventseslav Sainov a, Jana Harizanova a, Sonja Ossikovska a a Bulgarian Academy of Sciences, CLOSPI-BAS, Acad. G. Bonchev Str., bl. 101, PO Box 95, 1113 Sofia, Bulgaria Wim Van Paepegem b, Joris Degrieck b, Pierre Boone b b University of Gent, Belgium
1 Introduction Fatigue detection by fringe projection and speckle shear interferometry of different types of fibre-reinforced composite materials subjected to cyclic loading is presented. As the sensitivity of the applied methods can be varied over a broad range in comparison with other interferometric techniques, the inspection of the loaded specimens is realized in a wide dynamic range. A three-point bending test has been applied at two static loadings (pure tensile at 1 kN and 5 kN), with two consecutive displacements along the Z (normal) direction – 0.1 mm and 1.5 mm for both loading conditions. The results are presented as the difference ratio Δz/Δy, obtained by subtracting phase maps modulo 2π, corresponding to the shape differences after loading the specimens. The testing of a composite vessel subjected to cyclic loading (pressure) was also performed. Two-spacing phase-stepping fringe projection interferometry was applied for absolute coordinate measurement of the object. Derivatives of the in-plane and out-of-plane components of the displacement vector were obtained by lateral speckle shear interferometry. The experimentally obtained results for non-pre-loaded, loaded and cycled specimens are presented together with the results from the pure tensile tests and from the cyclic tests. The selected measurement methodology follows the tendency in the development of optical methods for remote measurements and non-destructive testing [1].
2 Testing of fibres reinforced materials before and after cyclic loading The tested samples are plates with dimensions 200 × 30 × 3 mm, cut from a unidirectional glass/epoxy fibre-reinforced composite with eight layers. All layers are reinforced with unidirectional glass fibres. The stacking sequence used is [+45/−45]_2s. The results from the usual tensile and cyclic tests are shown in Figs. 1 and 2, with loading along the y-axis.
[Figs. 1 and 2: load (kN) versus displacement (mm) curves from the tensile and cyclic tests of the [+45/−45]_2s UD glass-fibre specimens]
Fig. 1. Tensile test of specimen G3
Fig. 2. Cyclic test of specimen G4
Optical measurements are performed on the same samples by a three-point bending test. The results for relative coordinate measurement by fringe projection interferometry of cycled and non-cycled samples at different loadings (1 and 5 kN) and 1.5 mm normal displacement are shown in Fig. 3.
a) Normal displacement of non-cycled sample G3
b) Normal displacement of cycled sample G4
Fig. 3. Three point bending test of cycled and non-cycled samples.
A He-Ne laser (λ = 632.8 nm) is used as the light source in a Mach-Zehnder interferometer for generation of the projected fringes with spacing 0.5 mm, angle of illumination 70 deg, and distance 2 m, as described in [3]. All measurements are performed under static conditions. A five-step algorithm is used for phase calculation. Initial five frames with consecutive π/2 phase shifts of the projected fringes onto the loaded sample (the sample’s surface being the reference plane) are recorded. The results for the differential three-point bending test of non-cycled and cycled samples are presented in Fig. 4. Two different loadings (1 and 5 kN) with two different normal displacements (0.1 and 1.5 mm) for each loading state have been consecutively applied. The results, presented as Δz/Δy, are calculated by subtracting the measured values of the normal displacements z(x, y) for the two different loading states – 1 kN and 5 kN – obtained at a given normal displacement, z(0,0) = 0.1 or 1.5 mm respectively, in the center of the specimen:

Δz/Δy = (Δz5kN − Δz1kN)/Δy  (1)

For the central part (y = 0) of the samples, Δy is about 0.3 mm. As Δy → 0, Δz/Δy → dz/dy, which is more informative for the tested materials.
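As a sketch of this differential evaluation, with hypothetical displacement profiles standing in for the measured z(x, y) maps (the function name, the profiles, and their amplitudes are all illustrative; only the 0.3 mm step comes from the text):

```python
import numpy as np

DY = 0.3   # in-plane sampling step in the central part, mm (from the text)

def dz_dy_ratio(z_5kN, z_1kN, dy=DY):
    """First difference of normal displacement between the two load states,
    per the relation above: (z_5kN - z_1kN) / dy."""
    return (np.asarray(z_5kN) - np.asarray(z_1kN)) / dy

# Hypothetical dome profiles along y at the same imposed apex displacement
y = np.linspace(-15.0, 15.0, 101)
z_1kN = 0.10 * np.exp(-(y / 10.0) ** 2)   # dome at the 1 kN state
z_5kN = 0.10 * np.exp(-(y / 8.0) ** 2)    # slightly sharper dome at 5 kN
ratio = dz_dy_ratio(z_5kN, z_1kN)
```

At the apex the two profiles coincide by construction, so the ratio vanishes there and grows on the flanks, which is where shape differences between load states show up.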
Fig. 4. Differential three-point bending tests of cycled sample G4: a) at normal displacement 0.1 mm; b) at normal displacement 1.5 mm
3 Testing of fibres reinforced composite vessel In the case of real 3D objects, correct information on the object’s shape is necessary for determination of the three components of deformation on the curved surface [2]. The object for testing is a composite vessel. The results of shape measurement in the central part of the object, sized 100×100 mm, using two-spacing fringe projection interferometry (absolute coordinate measurement [3]) at d1 = 0.5 mm and d2 = 2 mm are presented in Fig. 5. The applied cyclic loading is nearly sinusoidal, with modulation from 300 to 500 kPa and 0.2 Hz frequency. The initial, interim and final results from macro measurements are obtained by lateral shear interferometry along the x direction (1% shear in the central 100×100 mm part of the object) at a static loading (static pressure) of about 200 kPa, and are shown in Fig. 6. The fibre bands form a number of fringes; an unusual mechanical response appears in the different surface zones.
a) Unwrapped phase map modulo 2π
b) 3D presentation
Fig. 5. Absolute coordinates measurement of the central part of composite vessel by two spacing fringes projection interferometry
Fig. 6. Fatigue of composite vessel under cyclic loading: a) initial state; b) after 400 cycles; c) after 600 cycles
4 Conclusions Fringe projection and speckle-shear interferometry are applied for fatigue detection of fibre-reinforced composite materials. The comparative results from tensile, cyclic and three-point bending tests are presented. The fatigue of the tested objects after cyclic loading is clearly identified. The presentation of the results in both cases (fringe projection and shearography) as the first difference of the normal displacement is more informative due to the higher sensitivity, which allows fatigue detection of composite materials and machine parts to be performed at low load levels, in working conditions and in real-time operating mode.
5 References 1. F. Chen, G. M. Brown, M. Song, “Overview of three-dimensional shape measurement using optical methods”, Opt. Eng., 39 (1), 2000, pp. 10-22 2. M. Y. Y. Hung, H. M. Shang, L. Yang, “Unified approach for holography and shearography in surface deformation measurement and nondestructive testing”, Opt. Eng., 42 (5), 2003, pp. 1197-1207 3. V. Sainov, G. Stoilov, J. Harisanova, P. Boone, “Phase-stepping interferometric system for relative and absolute co-ordinate measurement of real objects ”, Proc. of Int. Conf. OWLS V, Springer-Verlag Berlin, 2000, pp. 50-5
Multifunctional interferometric platform specialised for active components of MEMS/MOEMS characterisation Leszek Salbut1, Michal Jozwik2 1 Warsaw University of Technology Institute of Micromechanics and Photonics 8 Sw. A. Boboli St., 02-525 Warsaw Poland 2 Department d’Optique Universite de Franche-Comte 16 Route de Gray, 25030 Besancon Cedex France
1 Introduction Strict requirements with respect to reliability and lifetime of microsystems have to be fulfilled if they are used in practice. Both reliability and lifetime are strongly dependent on the material properties and mechanical design. In comparison with conventional technologies, the situation in microsystems technology is extremely complicated. Modern microsystems (MEMS and MOEMS) and their components are characterized by high-volume integration of a variety of materials. But it is well known that the material behavior in combination with new structural designs cannot be easily predicted by theoretical and numerical simulations. The objective of this work is to develop a new instrument and procedures for characterization of the mechanical behavior of MEMS elements. Highly sensitive and accurate measurements are required for automatic determination of the static shape of microelements and for monitoring the value and phase of out-of-plane displacement at chosen frequencies and stages of vibration. The measurement system is based on conventional two-beam interferometry [1] with cw and pulse light sources. It gives the possibility to combine the capabilities of time-average, stroboscopic and pulse interferometry techniques. The opto-electro-mechanical system and measurement techniques create the multifunctional interferometric platform MIP for testing various types of microelements. The efficiency of MIP is presented on examples
of the resonance frequencies and amplitude distributions in the vibration modes of an active micromembrane.
2 Measurement system A scheme of the measurement system is shown in Fig. 1a. It is based on a Twyman-Green microinterferometer integrated with an optical microscope and a variety of supporting devices for object manipulation and loading, and for synchronization between the object loading system and the pulse light source controller. Photos of the interferometric system, designed and manufactured at IMiF PW, are presented in Fig. 1b. As the light source (PLS), two types of lasers can be used: a pulse microlaser (λ = 543 nm, power 80 mW, frequency up to 50 kHz) or a diode pulse laser (λ = 630 nm, power 15 mW, frequency up to 2 MHz). The reference mirror is mounted on a piezoceramic transducer for realization of the phase shift required for automated fringe pattern analysis methods (PSM). Two manipulators with arms provide voltage for electrical loading of microelements and enable testing them on the silicon wafer (before cutting) and separately (after cutting).
Fig. 1. Scheme of the measurement system (a) for static and pulsed (stroboscopic) interferometry and photo of the measurement area (b)
3 Measurement procedures and exemplary results
With a cw light source, MIP can work as a conventional Twyman-Green interferometer for shape and deformation measurement, or it can be used for testing vibrating microelements by the time-average technique. To improve the visibility of the Bessel fringes obtained by the time-average technique, a special numerical procedure called the "four frame method" is used [2]. If the pulsed laser is applied, one of the following techniques for the study of non-static objects can be used:
- pulse interferometry – the most general method for testing moving elements [3],
- stroboscopic interferometry – a method for testing vibration modes [2,4],
- quasi-stroboscopic interferometry – a simplified stroboscopic method for qualitative low-frequency vibration analysis [2].
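As background to the time-average technique mentioned above, the period-averaged intensity of a two-beam interferogram of a sinusoidally vibrating surface is modulated by a zero-order Bessel function of the phase-modulation depth m (for out-of-plane amplitude A in reflection, m = 4πA/λ). The following sketch, with illustrative parameters not taken from the paper, verifies this numerically using only the integral definition of J0:

```python
import math

def time_avg_intensity(phase, m, n=2000):
    # Two-beam intensity 1 + cos(phase + m*sin(w*t)) averaged over one
    # vibration period; m is the phase-modulation depth of the vibration.
    acc = 0.0
    for k in range(n):
        wt = 2.0 * math.pi * k / n
        acc += 1.0 + math.cos(phase + m * math.sin(wt))
    return acc / n

def bessel_j0(x, n=2000):
    # J0 from its integral definition: J0(x) = (1/pi) * int_0^pi cos(x*sin(u)) du
    acc = 0.0
    for k in range(n):
        u = math.pi * (k + 0.5) / n
        acc += math.cos(x * math.sin(u))
    return acc / n

# time-average fringes: I = 1 + J0(m) * cos(phase)
```

The Bessel envelope falls to zero near m ≈ 2.405 (the first zero of J0), which is why raw time-average fringes are faint and benefit from numerical contrast enhancement such as the four frame method.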
Fig. 2. The principle of pulse and stroboscopic (a) and quasi-stroboscopic (b) techniques
Fig. 3. The vibration modes of the square membrane for resonance frequencies: a) 92.8 kHz; b) 107.1 kHz; c) 172 kHz (excitation signal = 5.6 VPP for each case) determined using the time average and quasi-stroboscopic methods
For vibration-mode profiling by pulse or stroboscopic interferometry, a light pulse of width δt, synchronized with the vibration excitation signal (see Fig. 2a) but with an adjustable delay time t_m, is used to freeze the object vibration at any point of the vibration cycle. The idea of the quasi-stroboscopic technique is shown in Fig. 2b. A pulsed laser and the amplifier activating the microelement under test are controlled by
the same signal. In this case the excitation signal and the light pulses are synchronized automatically, and the light illuminates the vibrating microelement at its maximum deflection. Due to the rectangular shape of the excitation signal, the shape of the microelement is quasi-stable during the relatively long light pulse (the situation is similar to the stroboscopic technique used with t_m = T/4). Fig. 3 shows exemplary results of vibration analysis of silicon micromembranes by the time-average and quasi-stroboscopic techniques. The first three resonance frequencies of a 1.35 × 1.35 mm² square silicon micromembrane with an active PZT layer (made by THALES under the European project OCMMM) were determined.
5 Acknowledgements
The work was supported in part by the European project OCMMM and by the Polish Scientific Council, project no. 4 T10C 021 24.
6 References
1. Kujawinska, M, Gorecki, C (2002) New challenges and approaches to interferometric MEMS and MOEMS testing. Proc. SPIE 4900: 809-823
2. Salbut, L, Patorski, K, Jozwik, M, Kacperski, J, Gorecki, C, Jacobelli, A, Dean, T (2003) Active microelements testing by interferometry using time-average and quasi-stroboscopic techniques. Proc. SPIE 5145: 23-32
3. Cloud, G (1995) Optical methods in engineering analysis. Cambridge University Press
4. Petitgrand, S, Yahiaoui, R, Danaie, K, Bosseboeuf, A (2001) 3D measurement of micromechanical devices vibration mode shapes with a stroboscopic interferometric microscope. Optics and Lasers in Engineering 36: 77-101
Laser Multitask ND Technology in Conservation Diagnostic Procedures
V. Tornari (1), E. Tsiranidou (1), Y. Orphanos (1), C. Falldorf (2), R. Klattenhof (2), E. Esposito (3), A. Agnani (3), R. Dabu (4), A. Stratan (4), A. Anastassopoulos (5), D. Schipper (6), J. Hasperhoven (6), M. Stefanaggi (7), H. Bonnici (8), D. Ursu (9)
(1) FORTH/IESL, (2) BIAS, (3) UNIVPM, (4) NILPRP, (5) Envirocoustics S.A., (6) Art Innovation b.v., (7) LRMH, (8) MMRI, (9) ProOptica
Introduction
Laser metrology techniques that have been applied successfully in industrial diagnostics have not yet been adapted to the investigation requirements of Cultural Heritage. The obstacle is the partial applicability of each individual technique, unsuited to the variety of diagnostic problems encountered in the field. This fragmented applicability hinders technology transfer and motivates the aim of integrating complementary properties into a system with the essential functionality. In particular, structural diagnosis in art conservation seeks to depict the mechanical state of the cultural treasure concerned in order to plan its restoration strategy. Conventional conservation practice relies on point-by-point finger-knocking on the exposed surfaces and acoustic differentiation of the surface sound. For movable items, and in emergency cases, x-ray imaging and thermography remain the ultimate tools. Modern optical metrology may provide better-suited alternatives, offering transportable, non-contacting, safe, sensitive and fast subsurface topography, provided a diagnostic methodology is developed that organizes the techniques into an integrated working procedure.
EC 5th FWP - DG Research, EESD, LASERACT EVK4-CT-2002-00096 Scientific coordinator:
[email protected], tel:+30 810 391394, fax:+30 810 391318, 1 Foundation for Research and Technology-Hellas/Institute of Electronic Structure and Laser, 71 110 Heraklion, Crete, Greece.
1 Integration concept and diagnostic methodology
The integration concept rests on two considerations, each constituting a separate step of methodology development. a) Techniques act on complementary advantages: holography-related techniques were tested to provide diverged object beams, allowing a larger field of view for artworks of moderate dimensions and high resolving power for complex micro-defect detection and parametric analysis, whereas scanning vibrometry was tested to allow remote access to distant objects of extended dimensions with larger but simpler defects [1-3]. b) Art classification table versus defect pathology: despite the broad range of objects and materials constituting the cultural heritage, two characteristic structural problems are persistently identified as dominating deterioration growth. These are the detachments and cracks formed in a plethora of artworks with multilayered structure and inhomogeneous materials. The experimental work was based on detection of the dominant conservation problems in simulated samples and real artworks. Assigning the suitability of each technique to the art classification table permits standardization of the inspection sequence according to artwork character and potential pathology.

Interconnection diagram of the operational procedure:
1st step, INSPECTION – SSS: DSHI→DSS*; MSS: SLDV*→DSS→DSHI; LSS: SLDV→DSS→DSHI*.
2nd step, ANALYSIS – SSS: defect map (a. image processing, b. defect detection, c. indication of a suggested type of defect); MSS: defect map, * plus vibration threshold; LSS: vibration threshold, * plus defect map.
3rd step: EVALUATION.
Fig. 1. Operational sequence for the development of integrated diagnosis (SSS: Small Scale Structures, MSS: Medium, LSS: Large Scale Structures). The technique marked with an asterisk is optional to the operator.
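The classification-driven inspection sequences of Fig. 1 can be encoded directly. The table below transcribes the figure (a trailing asterisk marks the optional technique); the helper function itself is purely an illustrative sketch:

```python
# Inspection sequences per structure scale, transcribed from Fig. 1;
# a trailing '*' marks the technique that is optional to the operator.
SEQUENCES = {
    "SSS": ["DSHI", "DSS*"],          # Small Scale Structures
    "MSS": ["SLDV*", "DSS", "DSHI"],  # Medium Scale Structures
    "LSS": ["SLDV", "DSS", "DSHI*"],  # Large Scale Structures
}

def inspection_plan(scale):
    """Return (ordered technique names, subset that is optional)."""
    seq = SEQUENCES[scale]
    ordered = [t.rstrip("*") for t in seq]
    optional = {t.rstrip("*") for t in seq if t.endswith("*")}
    return ordered, optional
```

For example, a medium-scale structure is inspected by SLDV (optional), then DSS, then DSHI, matching the second row of the diagram.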
2 Results
The feasibility tests led to the simultaneous development of interchangeable transportable modules based on the techniques of Digital Speckle Holographic Interferometry (DSHI), Digital Speckle Shearography (DSS) and Scanning Laser Doppler Vibrometry (SLDV), constituting one compact prototype. For the DSHI, a custom 8 ns pulsed laser at 532 nm based on microlaser pumping was additionally developed. The green pulse energy is adjustable in 50 steps by means of a λ/2 wave plate at 532 nm, rotated by a computer-controlled stepper motor, and a Glan polariser. Software integrating the art classification table with the operational parameters of the modules guides the operator, through an interactive user-friendly interface, to perform the investigation and conclude the diagnosis.
Fig. 2. Principle of operation for the multitask prototype system.
The prototype system in field action is shown in Fig. 3. Its compact dimensions allowed transportation under extreme out-of-laboratory conditions. Some characteristic results are shown in Fig. 4a-c.
Output parameters of the laser : 1. Wavelength: 532 nm 2. Pulse energy: > 10 mJ 3. Standard deviation: 1.5 m
Fig. 3. Photograph of the system during on-field investigation, and laser parameters.
Fig. 4. a) DSHI on a Maltese fortification for defect detection, b) DSS defect detection on a simulated sample, and c) SLDV on Maltese stone samples for definition of quality.
3 Discussion
For a technique to be qualified as suitable by the conservation community it must be non-destructive, non-invasive and non-contacting; acquire subsurface information and visualise the presence of defects; be capable of remote access and on-field transportation; be applicable to a variety of artworks, shapes and materials; and provide objective and repeatable results. The project successfully showed that these requirements can be met by integrating the complementary characteristics of laser techniques existing in optical metrology with the development of an art classification database and user-friendly integrated software.
4 References
1. V. Tornari, V. Zafiropulos, A. Bonarou, N. A. Vainos, and C. Fotakis, "Modern technology in artwork conservation: A laser-based approach for process control and evaluation", Optics and Lasers in Engineering, vol. 34, (2000), pp. 309-326
2. P. Castellini, E. Esposito, N. Paone, and E. P. Tomasini, "Non-invasive measurements of damage of frescoes paintings and icon by Laser Scanning Vibrometer: experimental results on artificial samples and real works of art", Proc. SPIE vol. 3411, 439-448, (1998)
3. V. Tornari, A. Bonarou, E. Esposito, W. Osten, M. Kalms, N. Smyrnakis, S. Stasinopulos, "Laser based systems for the structural diagnostic of artworks: an application to XVII-century Byzantine icons", Proc. SPIE vol. 4402, Munich Conference, June 18-22, 2001
SESSION 5
New Optical Sensors and Measurement Systems
Chairs: Toyohiko Yatagai, Tsukuba (Japan); Hans Tiziani, Stuttgart (Germany)
Invited Paper
Progress in Scanning Holographic Microscopy for Biomedical Applications Ting-Chung Poon Optical Image Processing Laboratory Bradley Department of Electrical and Computer Engineering Virginia Tech Blacksburg, Virginia 24061, USA
1 Introduction
Optical scanning holography (OSH) is a unique technique in that the holographic information of a three-dimensional (3-D) object is acquired with a single 2-D optical heterodyne scan. OSH has potential applications in areas such as 3-D holographic microscopy, recognition of 3-D objects, 3-D holographic television, 3-D optical cryptography, and 3-D optical remote sensing. In this talk, I will concentrate on the application of OSH to 3-D microscopy.
2 Generalized two-pupil processing system
Optical scanning holography starts with the so-called two-pupil processing system [1-3]. A generalized version of its set-up is shown in Fig. 1. p_1(x, y) and p_2(x, y) are the two pupil functions, located in the front focal plane of lens L1. The two pupils are illuminated by laser light of temporal frequencies ω_0 and ω_0 + Ω, respectively. The beamsplitter BS combines the two pupil fields, and the combined fields are projected, through an x-y scanner, onto the specimen slice T(x, y; z) located at a distance f + z_0 + z from lens L1, as shown in Fig. 1. We model the 3-D object as a stack of transverse slices, each represented by an amplitude transmittance T(x, y; z) that is thin and weakly scattering. We place the 3-D object in front of the Fourier-transform lens L2. M(x, y) is a mask placed in the back focal plane of lens L2. The
photodetector PD collects all the light after the mask and delivers a scanned current i(t). The electronic bandpass filter BPF, tuned to the frequency Ω, then delivers a heterodyne current i_Ω(t), which can be written as

i_Ω(t) ∝ Re[i_Ωp(x, y) exp(jΩt)],   (1)

where x = Vt, y = Vt, and V is the speed of the scanning beam.
Fig. 1. A generalized two-pupil optical system. BS – beamsplitter, BPF – bandpass filter, PD – photodetector, ⊗ – electronic multiplier, LPF – lowpass filter.
2.1 Coherency of imaging
When M(x, y) = 1, i_Ωp(x, y) in Eq. (1) becomes [4]

i_Ωp(x, y) ∝ ∫ P_1*(x′, y′; z + z_0) P_2(x′, y′; z + z_0) |T(x′ + x, y′ + y; z)|² dx′ dy′ dz,   (2)
where P_i(x′, y′; z + z_0) = F{p_i(x, y)}|_{k_x = k_0 x′/f, k_y = k_0 y′/f} * h(x′, y′; z + z_0), i = 1 or 2, with

F{p_i(x, y)}_{k_x, k_y} = ∫∫ p_i(x, y) exp(jk_x x + jk_y y) dx dy,

where * denotes the 2-D convolution involving the x and y coordinates, and finally

h(x, y; z) = exp(−jk_0 z) (jk_0 / 2πz) exp[−j(k_0 / 2z)(x² + y²)]

is the free-space spatial impulse response in Fourier optics, with k_0 being the wavenumber of the laser light [5]. Equation (2) corresponds to the case of incoherent imaging. Incoherent objects include fluorescent specimens in biology, or diffusely reflecting surfaces as encountered in remote sensing.
When M(x, y) = δ(x, y), i.e., the mask is a pinhole, and one of the pupils is also a pinhole, i.e., p_1(x, y) = δ(x, y), i_Ωp(x, y) in Eq. (1) becomes [6]

i_Ωp(x, y) ∝ ∫ P_2(x′, y′; z + z_0) T(x′ + x, y′ + y; z) dx′ dy′ dz.   (3)
This corresponds to coherent imaging, which is important for quantitative phase-contrast imaging of some biological specimens.

2.2 Detection scheme
This heterodyne current can be processed electronically in two channels by electronic multipliers and lowpass filters, as shown in Fig. 1, to give two outputs i_d(x, y) and i_q(x, y) to be displayed or stored in a computer. They are given by [6]
i_d(x, y) ∝ |i_Ωp(x, y)| cos θ   and   i_q(x, y) ∝ |i_Ωp(x, y)| sin θ,   (4)

where i_Ωp(x, y) = |i_Ωp(x, y)| exp(jθ(x, y)).
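The two-channel detection can be sketched as a digital lock-in: multiplying the heterodyne current by cos(Ωt) and sin(Ωt) references and lowpass filtering (here an average over an integer number of periods) recovers the two outputs of Eq. (4). The signal model and numerical values below are illustrative, not from the paper:

```python
import math

def demodulate(signal, omega, t_end, n=10000):
    # Multiply by in-phase and quadrature references and average over
    # t in [0, t_end) - an ideal lowpass over whole periods.
    s_d = s_q = 0.0
    for k in range(n):
        t = t_end * k / n
        s = signal(t)
        s_d += s * math.cos(omega * t)
        s_q += s * math.sin(omega * t)
    return 2.0 * s_d / n, 2.0 * s_q / n

# Heterodyne current |i|*cos(omega*t - theta) demodulates to
# (|i|*cos(theta), |i|*sin(theta)), as in Eq. (4).
omega = 2.0 * math.pi * 1.0e4
amp, theta = 0.8, 0.6
i_d, i_q = demodulate(lambda t: amp * math.cos(omega * t - theta),
                      omega, t_end=1.0e-3)   # 10 full periods
```

The double-frequency terms average to zero over whole periods, leaving only the DC products that carry the amplitude and phase of the heterodyne current.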
3 Optical scanning holography
The generalized two-pupil processing system can operate in a holographic mode by properly choosing the pupil functions. Taking the case of incoherent imaging, if we let p_1(x, y) = 1 and p_2(x, y) = δ(x, y), the two equations in Eq. (4) become [6]
i_d(x, y) ∝ ∫ {[k_0 / 2π(z + z_0)] sin[k_0(x² + y²) / 2(z + z_0)]} ⊗ |T(x, y; z)|² dz,   (5a)

i_q(x, y) ∝ ∫ {[k_0 / 2π(z + z_0)] cos[k_0(x² + y²) / 2(z + z_0)]} ⊗ |T(x, y; z)|² dz,   (5b)

where ⊗ denotes the 2-D correlation involving the x and y coordinates. Eqs. (5a) and (5b) are called, respectively, the sine- and cosine-Fresnel zone plate (FZP) holograms of the incoherent object |T(x, y; z)|².
Figure 2a) shows the original "fringe" 2-D pattern located at z = 0, i.e., |T(x, y; z)|² = I(x, y)δ(z), where I(x, y) represents the 2-D "fringe" pattern. Figures 2b) and 2c) show the sine-FZP and cosine-FZP holograms, respectively, and Figures 2d) and 2e) show their reconstructions. Reconstruction can simply be done by convolving the holograms with the free-space impulse response matched to the depth parameter z_0, h(x, y; z_0). Note that there is twin-image noise in these reconstructions. For reconstruction free of twin-image noise, we can construct a complex hologram H_c(x, y) according to the following equation [6]:

H_c(x, y) = i_q(x, y) + j i_d(x, y).   (6)
Figure 2f) shows the reconstruction of the complex hologram and it is evident that twin-image noise has been rejected completely.
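The twin-image rejection can be checked numerically. The sketch below, with illustrative grid and depth parameters not taken from the paper, builds sine-, cosine- and complex FZP holograms of an on-axis point at depth z_0 and reconstructs by correlating with the FZP matched to z_0 (equivalent to convolution with the conjugate impulse response); numpy is assumed available:

```python
import numpy as np

# Illustrative grid: 256x256, 10 um pixels, HeNe wavelength, z0 = 5 cm
N, lam, z0, dx = 256, 633e-9, 0.05, 10e-6
k0 = 2.0 * np.pi / lam
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
theta = k0 * (X**2 + Y**2) / (2.0 * z0)   # FZP phase for depth z0

i_d = np.sin(theta)          # sine-FZP hologram, cf. Eq. (5a)
i_q = np.cos(theta)          # cosine-FZP hologram, cf. Eq. (5b)
Hc = i_q + 1j * i_d          # complex hologram, Eq. (6): Hc = exp(j*theta)

kernel = np.exp(1j * theta)  # FZP matched to the depth parameter z0

def reconstruct(holo):
    # Circular cross-correlation with the matched FZP, computed via FFTs;
    # zero lag sits at index [0, 0].
    return np.fft.ifft2(np.fft.fft2(holo) * np.conj(np.fft.fft2(kernel)))

rec_c = reconstruct(Hc)      # complex hologram: single focused point
rec_s = reconstruct(i_d)     # sine hologram: point plus twin-image term
peak = np.unravel_index(np.argmax(np.abs(rec_c)), rec_c.shape)
```

The complex hologram reconstructs as a single sharp peak at the point's position, while the sine hologram contains only half of the matched chirp plus a conjugate chirp that reconstructs as defocused twin-image background.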
Fig. 2. a) Original "fringe" pattern, b) sine-FZP hologram, c) cosine-FZP hologram, d) reconstruction of the sine-hologram, e) reconstruction of the cosine-hologram, f) reconstruction of the complex hologram (no twin-image noise)
4 Scanning holographic fluorescence microscopy
We have applied the principles of optical scanning holography to 3-D microscopy [7]. Scanning holographic microscopy was first proposed in 1996 [8]. In 1997 we captured a hologram of fluorescent beads about 15 μm in size within a volume of about 2 mm × 2 mm × 2 mm, the first hologram of fluorescent information ever recorded [9]. The hologram is shown in Fig. 3a); Figs. 3b) and 3c) show its reconstruction at two planes.
Fig. 3. a) Hologram of fluorescent beads; b) and c) reconstructions at two planes [After Schilling et al., Optics Letters 22, 1507 (1997)].
In 1998, using optical scanning holography, we described and experimentally illustrated a method for three-dimensional (3-D) imaging of fluorescent inhomogeneities embedded in a turbid medium [10]. In 2002, Swoger et al. analyzed the use of optical scanning holography as a technique for high-resolution 3-D biological microscopy [11], and most recently Indebetouw et al. have demonstrated optical scanning holographic microscopy with a resolution of about 1 μm [12].
5 References
1. Lohmann, A, Rhodes, W (1978) Two-pupil synthesis of optical transfer functions. Applied Optics 17:1141-1150
2. Poon, TC, Korpel, A (1979) Optical transfer function of an acousto-optic heterodyning image processor. Optics Letters 4:317-319
3. Indebetouw, G, Poon, TC (1992) Novel approaches of incoherent image processing with emphasis on scanning methods. Optical Engineering 31:2159-2167
4. Poon, TC, Indebetouw, G (2003) Three-dimensional point spread functions of an optical heterodyne scanning image processor. Applied Optics 42:1485-1492
5. Poon, TC, Banerjee, P (2001) Contemporary Optical Image Processing with MATLAB®. Elsevier Science, Oxford
6. Poon, TC (2004) Recent progress in optical scanning. Journal of Holography and Speckle 1:6-25
7. Poon, TC, Schilling, B, Indebetouw, G, Storrie, B (2000) Three-dimensional Holographic Fluorescence Microscope. U.S. Patent # 6,038,041
8. Poon, TC, Doh, K, Schilling, B, Wu, M, Shinoda, K, Suzuki, Y (1996) Three-dimensional microscopy by optical scanning. Optical Engineering 34:1338-1344
9. Schilling, B, Poon, TC, Indebetouw, G, Storrie, B, Wu, M, Shinoda, K, Suzuki, Y (1997) Three-dimensional holographic fluorescence microscopy. Optics Letters 22:1506-1508
10. Indebetouw, G, Kim, T, Poon, TC, Schilling, B (1998) Three-dimensional location of fluorescent inhomogeneities in turbid media by scanning heterodyne holography. Optics Letters 23:135-137
11. Swoger, J, Martinez-Corral, M, Huisken, J, Stelzer, E (2002) Optical scanning holography as a technique for high-resolution three-dimensional biological microscopy. Journal of the Optical Society of America A 19:1910-1918
12. Indebetouw, G, Maghnouji, A, Foster, R (2005) Scanning holographic microscopy with transverse resolution exceeding the Rayleigh limit and extended depth of focus. Journal of the Optical Society of America A 22:892-898
The Dynamics of Life: Imaging Temperature and Refractive Index Variations Surrounding Material and Biological Systems with Dynamic Interferometry
Katherine Creath (a,b,c,d), Gary E. Schwartz (d,b)
(a) College of Optical Sciences, University of Arizona, 1630 E. University Blvd, Tucson, AZ, USA 85721-0094
(b) Biofield Optics, LLC, 2247 E. La Mirada St., Tucson, AZ, USA 85719
(c) Optineering, 2247 E. La Mirada St., Tucson, AZ, USA 85719
(d) Center for Frontier Medicine in Biofield Science, University of Arizona, 1601 N. Tucson Blvd., Su. 17, Tucson, AZ, USA 85719
1 Abstract
Dynamic interferometry is a highly sensitive means of obtaining phase information, determining phase at rates of a few measurements per second. Many different techniques have been developed to obtain multiple frames of interferometric data simultaneously. Commercial instruments have recently been designed to measure phase data in the presence of vibration and air turbulence. The sensitivity of these phase-measurement instruments is on the order of thousandths of a wavelength at visible wavelengths. This sensitivity enables the measurement of small temperature changes and of the thermal fields surrounding living biological objects as well as material objects. Temperature differences are clearly noticeable with a visible-wavelength source because thermal variations between an object and the ambient room air produce subtle changes in the refractive index of the air. Living objects can also easily be measured to monitor changes as a function of time. Unwrapping dynamic data in time enables the tracking of these subtle changes to better understand their dynamics and interactions. This technique has many promising applications in the biological and medical sciences for studying thermal fields around living objects. In this paper we outline methods of dynamic interferometry, discuss challenges and theoretical concerns, and
[email protected]; phone 520 626-1730; fax 520 882-6976
present experimental data comparing thermal fields measured with dynamic phase-measuring interferometry surrounding warm and cold material objects as well as living biological objects.
2 Introduction
Dynamic interferometers have been designed to measure phase data in the presence of vibration and air turbulence so that interferograms can be captured in "real" time [1], turbulence in the area near an object can be "frozen," and flows and vibrational motion can be followed [2]. They are designed to take all necessary interferometric data simultaneously, determining phase in a single snapshot [3,4,5,6]. Variations in optical path as a function of time can be calculated to obtain OPD movies ("burst" mode) of dynamic material and living biological objects. Any object not in thermal equilibrium with its environment is surrounded by a thermal field. Temperature variations in thermal fields alter the refractive index of the air surrounding objects. These subtle variations can be measured interferometrically, and with dynamic capabilities fluctuations in thermal fields can be frozen in time and followed over a period of time. In the study presented in this paper we focus on the difference between room-temperature and body-temperature objects and compare these with a human finger. The human body dynamically emits thermal energy related to metabolic processes. We hypothesize that the thermal emission from the human body is dynamic and cycles with time constants related to blood flow and respiration [7]. These cycles create small bursts of thermal energy that create convection and air currents. We have termed these subtle air currents generated by the human body "microbreezes" [8]. The thermal variations of these microbreezes modulate the refractive index of the air path.
Because dynamic interferometry can measure subtle changes in refractive index, and can thereby measure air currents and microbreezes, we hypothesize that this technique will enable us to visualize the thermal aura around parts of the human body such as a fingertip, and furthermore that we will be able to quantify the relative variations over time.
3 Dynamic interferometer
The specific type of dynamic interferometry used for this study is a spatial multichannel phase-measurement technique [9]. Multiple duplicate interferograms are generated with different relative phase shifts between the object and reference beams. These interferograms are then recorded using either a number of separate cameras or by multiplexing all of the interferograms onto a single camera. These interferometers are well suited for studying objects that are not vibrationally isolated or that are dynamically moving. They are able to "freeze" the motion of an object to obtain phase and surface-height information as a function of time. Commercial systems utilizing this type of phase measurement are manufactured by 4D Technology Corporation, Engineering Synthesis Design, Inc. and ADE Phase Shift. The system used for this study was a PhaseCam™ from 4D Technology [10].

Fig. 1. Schematic of the dynamic interferometer system used for this study. The object under study is between the collimating lens and the return mirror. (Schematic courtesy of 4D Technology Corporation).

A schematic of the 4D Technology PhaseCam is shown in Fig. 1. A single-mode polarized HeNe laser beam is expanded and collimated to provide illumination coupled by a polarization beam splitter. Quarter-wave plates are used to set orthogonal polarizations for the object and reference beams. The object beam is further expanded, collimated and directed at a return mirror. The cavity between the collimating lens and the return mirror is where the objects were placed for this study. When the object and reference beams are recombined inside the interferometer, the polarizations are kept orthogonal. The combined beams pass through optical transfer elements providing imaging of the return mirror onto the high-resolution camera. In one embodiment a holographic optical element creates four copies of the interference pattern that are mapped onto four camera quadrants (see
Fig. 2). A phase plate consisting of polarization components is placed in front of the camera to provide a different relative phase shift between the object and reference beams in each of the four quadrants of the image plane. Phase values are determined modulo 2π at each point in the phase map using the standard 4-frame algorithm (see Fig. 2) [11]. This calculation does not yield absolute phase differences in the object cavity. The arctangent phase calculation provides modulo-2π data, requiring that phase values be unwrapped to determine a phase map of the relative phase difference between the object and reference beams [12]. If data frames are tracked in time, and the phase values do not jump by more than half a fringe between frames of data, it is possible to track the relative phase differences dynamically (also known as unwrapping in time). The version of the software used for this study did not yet have the capability of unwrapping the phase in time. Because the interferograms are multiplexed onto different quadrants of the image plane, care needs to be taken to determine the pixel overlap of all
Fig. 2. Schematic showing the creation and encoding of 4 phase shifted interferograms onto the CCD camera. (Courtesy of 4D Technology Corporation).
the interferograms, to remove spatial optical distortion and to balance the gain values between interferograms. In practice, when measurements are taken, a reference file is generated from an average of a large number of frames of data with an empty cavity and subtracted from subsequent measurements to provide a null. Dynamic variations in the object cavity can be monitored by taking a “burst” of data comprised of a user-selectable number of data frames with a fixed time delay between frames. The sensitivity of phase-measurements taken with this instrument is on the order of thousandths of a wavelength at the visible HeNe wavelength (633 nm) enabling measurement of small
temperature changes in thermal fields surrounding material and living biological objects.
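The standard 4-frame algorithm referred to above can be sketched in a few lines. With frames phase-stepped by π/2, an arctangent of frame differences gives the wrapped phase; frame-ordering conventions vary between implementations, and this is one common form with synthetic test data:

```python
import math

def four_frame_phase(i1, i2, i3, i4):
    # Frames i1..i4 recorded with relative phase shifts 0, pi/2, pi, 3*pi/2:
    #   i_k = A + B*cos(phi + (k-1)*pi/2)
    # so i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi), giving phi mod 2*pi.
    return math.atan2(i4 - i2, i1 - i3)

def frames(phi, a=2.0, b=0.7):
    # Synthetic quadrant intensities for a pixel with phase phi,
    # background a and fringe modulation b (illustrative values).
    return tuple(a + b * math.cos(phi + k * math.pi / 2.0) for k in range(4))
```

The background A and modulation B cancel in the differences, which is why the per-pixel result depends only on the phase; this modulo-2π value is what must subsequently be unwrapped, spatially or in time.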
4 Results
The dynamic interferometer was set up as shown in Fig. 1. The object cavity between the collimating lens and the return mirror was enclosed with a cardboard tube except for a space of approximately 3 cm in which to place the object in the beam. This limited, as far as possible, the effect of ambient air currents on the measurements. A reference data set was taken with no object present, as the average of 30 consecutive measurements taken in a single burst. This reference data set is subtracted from all subsequent measurements, accounting for variations across the field due to the interferometer itself and creating a null cavity when no object is present. Figure 3 displays OPD maps of air, each taken in a single CCD frame (30 ms), under different conditions. All OPD maps are scaled from –0.05 to +0.05 waves OPD. Figure 3(A) shows the empty object cavity. Bright areas (white) are warmer than dark areas (black). Figure 3(B) shows a blast from a can of canned air. Note that the turbulence is easily frozen in time and that the canned air is cooler than the background air. Figure 3(C) shows the effect of a candle flame below the object beam. The area heated by the candle flame is clearly brighter than the darker ambient-temperature air. Figure 4(A) shows an OPD map of a screwdriver handle approximately 2 cm across at room temperature; these images are scaled the same as Fig. 3. The presence of the room-temperature screwdriver handle does not appear to affect the air path thermally at all. However, when the screwdriver handle is warmed to body temperature and placed in the beam, there is an obvious thermal gradient around it (Fig. 4(B)). Figure 4(C) shows a finger of the second author placed in the beam. Note that the thermal gradient around the finger is similar to that around the body-temperature screwdriver handle; the differences between these two objects are mainly in the surrounding "halo".
Fig. 3. OPD maps of air patterns recorded in a dynamic interferometer with different objects. (A) Empty cavity. (B) Blast of canned air. (C) Candle flame. Brighter shades (white) are warmer air temperatures and darker shades (black) are cooler. All OPD maps are scaled to the same range.
Fig. 4. (A) Room temperature screwdriver handle. (B) Body temperature screwdriver handle. (C) Human finger. All OPD maps are scaled the same as Fig. 3. Note “halo” around warm objects.
Figure 5 displays three consecutive OPD maps of dynamic air patterns, taken ~0.1 s apart, surrounding the tip of a human finger (A-C) and a screwdriver handle at finger temperature (D-F). The OPD maps were processed using ImageJ software [13], utilizing a lookup table to reveal structure and changes in structure over time. A number of distinctions can be seen in these figures: the screwdriver is more symmetric and static, while the finger is more asymmetric and dynamic. In the generated OPD movies it is possible to see pulsing around the finger, probably corresponding to the heart rate, that is not visible around the screwdriver handle.
Fig. 5. Consecutive OPD maps taken ~0.1s apart of a human finger (A-C) and a body temperature screwdriver handle (D-F). Lines indicate areas of equal optical path like a topographic map. Note there are more dynamic variations between images of the finger than the screwdriver handle.
5 Discussion and Conclusions
The sensitivity of these phase measurements is such that the eye can easily discern a 0.01-wave difference in OPD from an OPD map. Further calculation extends the repeatability to a sensitivity of around 0.001 waves. The refractive index of air is roughly 1.0003 and depends upon temperature, pressure, humidity, CO2 content and wavelength. These dependencies have been studied extensively for accurate distance measurements using light [14]. Operating with an interferometric measurement sensitivity on the order of 0.001 waves, variations on the order of 1 part in 10⁴ of the refractivity of air (n − 1) can be resolved. As seen in the OPD maps presented here, this type of variation is apparent in the fields around the human finger and can be extrapolated to be present around other living biological objects. Since we are interested in dynamic changes and not absolute values, we
feel that this technique shows promise for tracking dynamic changes in thermal fields around biological objects. The main limitation of the method used in this study was that the software was not yet able to unwrap phase in time. Unwrapping in time enables tracking specific air currents over time and determination of the refractive index (or OPD changes) between frames of phase data. This type of calculation could be invaluable for a number of different applications such as modal analysis and mapping of air turbulence in a telescope dome. Dynamic interferometry is relatively new. The first dynamic interferometers were designed simply to get around vibration and air turbulence issues. As the field is evolving it is becoming apparent that dynamic interferometry has a huge advantage over standard phase measurement interferometry by being able to follow dynamic motions and capture dynamic events. A survey of vendors of dynamic interferometers indicates that they are in the process of incorporating this type of analysis into their products. It is anticipated that in the not too distant future dynamic analysis and visualization of motions and flows will be the industry standard. The studies presented in this paper clearly show that it is possible to discern the difference between objects at different temperatures by looking at the gradient of the phase map around the object. This experiment has also shown that there is a difference in the dynamic air currents and temperature gradients around living biological objects and inanimate objects at the same temperature. Adding the dimension of time enables the study of subtle changes as a function of time. This type of measurement will enable the study of the dynamics of thermal emission from the human body. We anticipate that dynamic interferometry will enable the correlation of dynamic biofield measurements of thermal microbreezes to variations in metabolic function such as heart rate, respiration and EEG.
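The temporal phase unwrapping described above can be sketched with standard tools. A minimal sketch, assuming a synthetic phase stack and frame-to-frame phase changes below π (this is not the authors' software):

```python
import numpy as np

# Synthetic stack of wrapped phase maps: (n_frames, rows, cols), values
# in (-pi, pi]. Real data would come from the dynamic interferometer.
rng = np.random.default_rng(0)
true_phase = np.cumsum(rng.uniform(0.0, 1.0, size=(50, 4, 4)), axis=0)
wrapped = np.angle(np.exp(1j * true_phase))

# Temporal unwrapping: unwrap each pixel independently along the time
# axis; valid while the phase changes by less than pi between frames.
unwrapped = np.unwrap(wrapped, axis=0)

# OPD change between consecutive frames, in waves
opd_change = np.diff(unwrapped, axis=0) / (2 * np.pi)
```

With the phase continuous in time, per-pixel OPD changes between frames follow directly, which is the quantity needed for tracking air currents over time.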
6 Acknowledgements The authors wish to thank 4D Technology, Inc. for the use of their PhaseCam interferometer and specialized software they created for this study. One of the authors (GES) is partially supported at the University of Arizona by NIH grant P20 AT00774 from the National Center for Complementary and Alternative Medicine (NCCAM). The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of NCCAM or NIH.
7 References
1. Hayes, J. (2002). Dynamic interferometry handles vibration. Laser Focus World, 38(3):109.
2. Millerd, J.E., et al. (2004). Interferometric measurement of the vibrational characteristics of light-weight mirrors. In H.P. Stahl (Ed.), Optical Manufacturing and Testing V (Proc. SPIE Vol. 5180, pp. 211-218). Bellingham, WA: SPIE.
3. Wyant, J.C. (2003). Dynamic Interferometry. Optics and Photonics News, 14(4):36-41.
4. North-Morris, M.B., VanDelden, J., & Wyant, J.C. (2002). Phase-Shifting Birefringent Scatterplate Interferometer. Applied Optics, 41:668-677.
5. Koliopoulos, C.L. (1992). Simultaneous phase-shift interferometer. In V.J. Doherty (Ed.), Advanced Optical Manufacturing and Testing II (Proc. SPIE Vol. 1531, pp. 119-127). Bellingham, WA: SPIE.
6. Smythe, R., et al. (1984). Instantaneous Phase Measuring Interferometry. Optical Engineering, 23(4):361-364.
7. Creath, K., & Schwartz, G.E. (2005). The Dynamics of Life: Imaging Changing Patterns of Air Surrounding Material and Biological Systems with Dynamic Interferometry. J. Alt. Comp. Med., 11:222-235.
8. Creath, K., & Schwartz, G.E. (2004). Dynamic visible interferometric measurement of thermal fields around living biological objects. In K. Creath & J. Schmit (Eds.), Interferometry XII: Techniques and Analysis (Proc. SPIE Vol. 5531, pp. 24-31). Bellingham, WA: SPIE.
9. Creath, K., & Schmit, J. (2004). Phase-Measurement Interferometry. In B.D. Guenther et al. (Eds.), Encyclopedia of Modern Optics. New York: Academic Press.
10. Millerd, J.E., & Brock, N.J. (2003). Methods and apparatus for splitting, imaging, and measuring wavefronts in interferometry. USPTO. USA: MetroLaser, Inc.
11. Creath, K. (1988). Phase-measurement interferometry techniques. In E. Wolf (Ed.), Progress in Optics (Vol. 26, pp. 349-393). Amsterdam: Elsevier Science Publishers.
12. Robinson, D.W. (1993). Phase unwrapping methods. In D.W. Robinson & G.T. Reid (Eds.), Interferogram Analysis (pp. 194-229). Bristol: IOP Publishing.
13. Rasband, W.S. (1997-2005). ImageJ. Retrieved 5 May 2005, from http://rsb.info.nih.gov/ij.
14. Ciddor, P.E. (1996). Refractive index of air: new equations for the visible and near infrared. Applied Optics, 35(9):1566-1573.
Microsystem based optical measurement systems: case of opto-mechanical sensors
Michał Józwik, Christophe Gorecki, Andrei Sabac
Département d'Optique, FEMTO-ST, Université de Franche-Comté, 16 Route de Gray, 25030 Besançon Cedex, France
Thierry Dean, Alain Jacobelli
Thales Research & Technology France, Domaine de Corbeville, 91404 Orsay Cedex, France
1 Introduction MEMS technology offers a large field for the development of miniature optical sensors by combining planar waveguiding structures with micromachined silicon. The functions achieved may be passive or active. Passive functions, such as alignment between optical fibers and integrated optical devices by U-grooves and V-grooves on silicon, are attractive in terms of low-cost packaging [1], providing good reproducibility and precision of the fiber-to-waveguide connection. Active functions such as modulation or sensing, arising from the combination of integrated optics and mechanical structures, are also attractive because of the potential to produce low-cost opto-mechanical sensors [2]. In opto-mechanical sensors, the active structural element converts an external mechanical input signal (force, pressure, acceleration) into an electrical signal via a waveguide read-out of the micromechanical sensing element [2,3]. Structurally active elements are typically high-aspect-ratio components such as suspended beams or membranes. The most widespread application of micromachined opto-mechanical sensors is pressure sensing. In resonant pressure sensors, the detection of a frequency shift offers high accuracy, high stability and excellent repeatability [4]. The pressure range and sensitivity are limited by the dimensions and thickness of the membrane. In general, optical testing methods offer the advantage that the optical observation does not influence the mechanical behaviour. In particular, integrated opto-mechanical structures make it possible to monitor behaviour in situations where laboratory equipment with free-space beams
has no access to the parts to be observed. This situation is perfectly suited for reliability tests and for monitoring micromechanical performance during the lifetime of MEMS devices.
2 Design and testing of devices 2.1 Waveguide fabrication
Most published work refers to waveguide structures based on pure silica for the cladding layers, with either doped silica [4] or silicon nitride / silicon oxynitride (SiOxNy) [5-7] for the core layer. SiOxNy waveguides offer low attenuation and a refractive index that can be adjusted over a wide range; this makes them suitable for matching to single-mode fibers, since the mode-field profile of such waveguides can be tailored to that of silica-based optical fibers. Our single-mode buried channel waveguides consist of 1.5 µm thick silicon oxide claddings (n = 1.47) deposited by plasma-enhanced chemical vapour deposition (PECVD) [7]. The light is laterally confined by the resulting SiOxNy core rib with refractive index n = 1.53, 4 µm wide and 0.22 µm deep. PECVD, which works at a relatively high deposition rate and low deposition temperature, is compatible with well-established microelectronic processing. 2.2 Integrated Mach-Zehnder interferometer
The first type of device consists of three main parts: a silicon membrane, an integrated Mach-Zehnder interferometer (MZI), and a PZT layer acting as mechanical actuator. A schematic is shown in Fig. 1. The device contains a measuring arm of the MZI crossing a 1350 × 1350 µm, 5 µm thick membrane, acting as the interrogation system [8]. The reference arm, positioned outside the membrane, is rigid.
Fig. 1. Schematic of the device with Mach-Zehnder interferometer
The optical waveguides of the MZI are sandwiched between the SOI wafer and the PZT transducer. The 2.6 µm thick piezoelectric actuator is located on top of the membrane, integrated with the measuring arm of the MZI. After the fabrication process, the structures are separated by saw cutting into individual chips with dimensions of 37 × 6 mm2.
Fig. 2. Comparison of normalised amplitudes of the optical signal at the MZI output for the sensing arm at the center (a), at the quarter (b), and at the border (c) of the membrane.
Three positions of the measuring arm were considered: crossing the center of the membrane, at its quarter, and at the border of the membrane. The position of the sensing arm on the membrane influences the optical signal at the MZI output. The sensing arm should be located where the
largest change of refractive index is expected. The MZI optical signal also changes depending on the resonant mode of the membrane. We tested the signal amplitude vs. frequency for the three configurations of the sensing-arm position (Fig. 2). The application of the micromachined integrated MZI is in the area of resonant pressure sensors. We observed the highest amplitudes at the MZI output for the structure with the sensing arm placed at the quarter of the membrane. This MZI configuration was adopted and tested as a resonant pressure sensor at a frequency of 111.413 kHz. The output signal was compared with the excitation sinusoid from the generator in a synchronous detection module. In this way, the amplitude of the output optical signal and the phase change between electrical excitation and optical output can be directly visualised and measured. First, at 111.413 kHz an amplitude of 7.9 V was accompanied by a phase change of 90 degrees. Second, pressure was applied from -2000 Pa to 2000 Pa. The amplitude and phase changes observed at the output of the MZI are plotted in Fig. 3a.
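The synchronous detection step can be sketched as a software lock-in. A minimal sketch: the sample rate and the clean test signal below are hypothetical, with only the 111.413 kHz frequency, 7.9 V amplitude and 90° phase taken from the text:

```python
import numpy as np

# Software lock-in: recover amplitude and phase of the MZI output
# relative to the excitation sinusoid.
fs = 1.0e6                      # sample rate [Hz] (assumed)
f0 = 111_413.0                  # excitation frequency [Hz]
t = np.arange(100_000) / fs

amp, phase = 7.9, np.deg2rad(90.0)          # values to be recovered
signal = amp * np.sin(2 * np.pi * f0 * t + phase)

# Multiply by in-phase/quadrature references and low-pass (here: mean).
i_comp = 2.0 * np.mean(signal * np.sin(2 * np.pi * f0 * t))
q_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f0 * t))

rec_amp = np.hypot(i_comp, q_comp)          # recovered amplitude [V]
rec_phase_deg = np.rad2deg(np.arctan2(q_comp, i_comp))
```

Averaging over many excitation periods suppresses noise and off-frequency components, which is what makes the small pressure-induced amplitude and phase changes measurable.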
Fig. 3. The amplitude (solid) and phase (square symbols) of the MZI optical signal as a function of pressure (a), and the shift of the resonance frequency due to applied pressure (b).
The asymmetric behaviour of the amplitude is caused by stresses induced by the pressure; their value adds to the initial stress state caused by the technological process [9]. Fig. 3b presents the amplitude versus frequency; the three amplitude peaks correspond to resonance frequency changes. The results show that 4000 Pa of pressure corresponds to a 4000 Hz shift of the resonance frequency. This confirms the very good sensitivity of the presented device and proves its applicability as a resonant pressure sensor. 2.3 Integrated Michelson interferometer
The second device is an integrated Michelson interferometer (MI) (Fig. 4) fabricated on a silicon substrate using single-mode waveguides.
Fig. 4. Schematic of the device with Michelson interferometer
The input and output facets of the interferometer are obtained by high-precision saw dicing. The light source is a commercially available laser diode coupled via a polarisation-maintaining optical fiber to the input of the MI, while the output is linked by an optical fiber to a photodiode. Facing the cleaved waveguide end face of the reference arm of the MI is a mirror activated by an electrostatic actuator. The input light beam is divided into reference and sensing arms. The light in the sensing arm is guided up to the output plane where, after reflection from the measured micromechanical part, it is coupled back into the waveguide. The photodiode at the output provides information about the light intensity resulting from interference of the sensing and reference beams. The displacement of the electrostatic mirror generates a phase shift between the reference and sensing arms of the MI. This optical phase modulator is used for high-resolution optical heterodyning with phase-modulated single-sideband detection. In this paper we present simulation data and the first results of a chip version of the Michelson interferometer with dimensions of 5 × 40 mm2. Two MI configurations were considered. The first consists of two Y junctions with waveguides crossing at the MI center (distance 0). The second is a directional coupler, where two adjacent waveguides are designed such that light can be transferred from one waveguide to the other by coupling. The coupling is obtained by adjusting the distance between the waveguides, varying from 0 to 4 µm. Using the commercial integrated-optics software OlympIOs, we simulated the light propagation, and the power transfer was calculated by finite-difference 2D-BPM (Fig. 5) [10]. When the distance between the waveguides is equal to 0, the splitting yields exactly the same power at the two ends of the structure.
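The directional-coupler variant can be illustrated with textbook coupled-mode theory, in which the power in the second guide oscillates as sin²(κL). The exponential decay model for κ versus gap and its constants below are illustrative assumptions, not the simulated device:

```python
import numpy as np

# Coupled-mode sketch of a directional coupler: power in the adjacent
# guide oscillates as P2 = sin^2(kappa * L) along the coupling length.
def transferred_power(gap_um, length_um, kappa0=0.05, decay_um=1.0):
    """Fraction of power coupled into the adjacent guide after length L."""
    kappa = kappa0 * np.exp(-gap_um / decay_um)   # coupling coeff. [1/um], assumed model
    return np.sin(kappa * length_um) ** 2

# A 3 dB split (P2 = 0.5) requires kappa * L = pi/4:
gap_um = 1.0
kappa = 0.05 * np.exp(-gap_um / 1.0)
length_3db = (np.pi / 4.0) / kappa
```

Because κ falls off rapidly with the gap, sub-0.2 µm control of the waveguide separation (as noted below for the fabricated chips) directly determines the splitting ratio.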
Fig. 5. Light propagation in the Michelson interferometer simulated with OlympIOs.
Following the simulation results, a set of devices was fabricated (Fig. 6), but good performance is hard to obtain, mostly because of the photolithographic transfer: the deviations of the waveguide width and of the distance x must be smaller than 0.2 µm. In this case the optical attenuation of the waveguide is 0.5 dB/cm for TE polarisation, and the total loss of the device is about 10 dB.
Fig. 6. Photograph of the diced chips.
The goal of the proposed study is the implementation of a high-resolution MOEMS sensor based on an MI integrated with an electrostatic actuator, applied to the characterisation of the dynamic behaviour of micromechanical parts. Future experimental tests will concentrate on connecting the MI with a beam actuator and on output-signal demodulation.
3 Conclusions This paper describes the design and investigation of a family of MOEMS measurement devices based on light propagation in SiOxNy waveguide structures. The integration of planar optical waveguides with micromachined structures, and the inclusion of micro-optic elements within a MEMS environment, offers significant promise for achieving advanced functionality in opto-mechanical architectures. The required optical sources and detectors can be placed outside the opto-mechanical system, the light then being transported by fibers. As an example, a resonant pressure sensor based on a micromembrane with optical interrogation was designed, fabricated and tested. It works on the principle of a resonance frequency shift caused by the change of internal stress due to a change in the external physical environment. The presented pressure sensor combines the advantages of the resonant operational mode with a MEMS fabrication process and optical signal detection, providing high sensitivity and maintaining stable performance. The technology has to be optimised to decrease the initial stress state, which influences the sensitivity of the sensor. The results indicate that the sensor does not require vacuum encapsulation; low-cost packaging is sufficient. The presented devices with optical read-out open a new methodology for reliability testing and for monitoring mechanical performance during the lifetime of MEMS devices. In this case the microinterferometer is completely integrated with the MEMS and generally cannot be reused in other measuring systems. The second architecture is a waveguide version of an integrated Michelson interferometer (MI). The MI can provide measurement of position, displacements and vibrational characteristics. Fabrication and tests proving the functionality of the device were accomplished, but the integration with a micromechanical actuator is still under development.
4 References
1. M. Tabib-Azar and G. Beheim, "Modern trends in microstructures and integrated optics for communication, sensing, and actuation", Opt. Eng. 36, pp. 1307-1318, 1997
2. C. Gorecki, "Optical waveguides and silicon-based micromachined architectures", in: P. Rai-Choudhury (Ed.), MEMS and MOEMS – Technology and Applications, SPIE Press, Bellingham, 2000
3. E. Bonnotte, C. Gorecki, H. Toshiyoshi, H. Kawakatsu, H. Fujita, K. Wörhoff, K. Hashimoto, "Guided-wave acousto-optic interaction with phase modulation in a ZnO thin film transducer on a Silicon-based integrated Mach-Zehnder interferometer", IEEE J. of Lightwave Technol. 17, pp. 35-42, 1999
4. S. Valette, S. Renard, J.P. Jadot, P. Guidon, C. Erbeia, "Silicon-based integrated optics technology for optical sensor applications", Sensors and Actuators A21-A23, pp. 1087-1091, 1990
5. C. Gorecki, F. Chollet, E. Bonnotte, H. Kawakatsu, "Silicon-based integrated interferometer with phase modulation driven by acoustic surface waves", Opt. Lett. 22, pp. 1784-1786, 1997
6. K. Wörhoff, P.V. Lambeck, A. Driessen, "Design, Tolerance Analysis, and Fabrication of Silicon Oxynitride Based Planar Optical Waveguides for Communication Devices", J. Lightwave Technol. 17, No. 8, pp. 1401-1407, 1999
7. A. Sabac, M. Józwik, L. Nieradko, C. Gorecki, "Silicon oxynitride waveguides developed for opto-mechanical sensing functions", Proc. SPIE, Vol. 4944, pp. 214-218, 2003
8. A. Sabac, C. Gorecki, M. Józwik, T. Dean, A. Jacobelli, "Design, testing, and calibration of an integrated Mach-Zehnder-based optical readout architecture for MEMS characterization", Proc. SPIE, Vol. 5458, pp. 141-146, 2004
9. L. Sałbut, J. Kacperski, A.R. Styk, M. Józwik, C. Gorecki, H. Urey, A. Jacobelli, T. Dean, "Interferometric methods for static and dynamic characterizations of micromembranes for sensing functions", Proc. SPIE, Vol. 5458, pp. 16-24, 2004
10. OlympIOs, BBV Software BV, http://www.bbvsoftware.com
5 Acknowledgements This work was supported by the Growth Programme of the European Union (contract G1RD-CT-2000-00261). The development of the MI structure is mainly supported by the European Union Marie Curie Intra-European Fellowships (contract FP6-501428). Michał Józwik acknowledges financial support for his work at the Université de Franche-Comté. Special thanks to Lukasz Nieradko from FEMTO-ST and to Pascal Blind from the Centre de Transfert des Microtechniques for guidance and help in the realisation of the technological process.
White-light interferometry with higher accuracy and more speed Claus Richter, Bernhard Wiesner, Reinhard Groß and Gerd Häusler Max Planck Research Group, Institute of Optics, Information and Photonics, University of Erlangen-Nuremberg Staudtstr. 7/B2, 91058 Erlangen Germany
1 Introduction White-light interferometry is a well established optical sensor principle for shape measurements. It provides high accuracy on a great variety of surface materials. However, to accomplish future industrial tasks several problems have to be solved. One major task is to increase the scanning velocity. Common systems provide scanning speeds up to 16 µm/sec depending on the surface texture. We will introduce a system based on a standard white-light interferometer which can achieve scanning speeds up to 100 µm/sec with a standard frame rate of 25 Hz. With an add-on to the hardware we achieve up to 10 times higher modulation in the signal, compared to the standard setup. To cope with the sub-sampled signals, we introduce new evaluation methods. On a mirror we achieve a distance measurement uncertainty of 230 nm using a scanning speed of 100 µm/sec. On optically rough surfaces we achieve an improvement of the scanning speed up to 78 µm/sec without any loss of accuracy. Another major task concerns the white-light interferometry on rough surfaces (“Coherence Radar”) [1]. Here the physically limited measuring uncertainty is determined by the random phase of the individual speckle interferograms. As a consequence, the standard deviation of the measured shape data is given by the roughness of the given surface [4]. The statistical error in each measuring point depends on the brightness of the corresponding speckle; a dark speckle yields a more uncertain measurement than a bright one. If the brightness is below the noise threshold of the camera, the measurement fails completely and an outlier occurs.
We present a new method to reduce the measuring uncertainty and the number of outliers. In our method, we generate two or more statistically independent speckle patterns and evaluate these speckle patterns by assigning more weight to brighter speckles.
2 Increasing the scanning speed The major factor limiting the vertical scanning speed is the frame rate of the used video camera. With a standard frame rate of 25 Hz, the scanning speed cannot exceed 16 µm/sec. To fulfil future demands for industrial applications this speed has to be increased. 2.1 Increasing modulation at higher scanning speeds
Using a standard setup and simply increasing the scanning speed causes some difficulties. During the exposure of an image the linear positioning system is still moving, so the optical path difference between the reference arm and the object arm of the sensor is changing. This leads to a decrease of the modulation of the interferograms at high scanning speed [2]. To avoid this effect the optical path difference must be approximately constant. Our basic idea is to introduce a motion of the reference mirror to compensate for the motion of the positioning system during the exposure of one frame. During this time the optical path length of both arms is changing, but the optical path difference remains the same. In the time gap between two images the reference mirror is switched back to its initial state [3]. To test this setup we recorded the signals of 10,000 interferograms at several sample distances and calculated the mean modulation for each sample distance (see Fig. 1).
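The integration effect can be modelled with the usual sinc envelope: while the camera integrates one frame, the OPD sweeps through part of a fringe and the modulation is averaged down. A minimal sketch, assuming a 0.6 µm central wavelength (not a value from the paper) and a 100% duty cycle:

```python
import numpy as np

# While the camera integrates one frame, the stage moves by the sample
# distance dz, sweeping the OPD by 2*dz and averaging the fringe.
def modulation_factor(dz_um, lambda0_um=0.6):
    """Relative fringe modulation after integrating over an OPD span of 2*dz."""
    x = 2.0 * dz_um / lambda0_um        # OPD sweep in fringe periods
    return np.abs(np.sinc(x))           # np.sinc(x) = sin(pi*x)/(pi*x)
```

At dz = λ/2 a full fringe is averaged within one frame and the modulation vanishes, which is why the uncompensated modulation collapses toward the noise floor at large sample distances; the compensating mirror motion keeps the OPD nearly constant during exposure and so avoids this envelope.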
[Plot for Fig. 1: modulation [digits] (0-200) vs. sample distance [µm] (0-4.5), curves with and without compensation]
Fig. 1. Modulation of interferograms at different scanning speeds with and without compensating the integration effect
The object under test was a mirror. The grey curve shows the integrating effect of the camera: the modulation rapidly decreases to about 13 digits at large sample distances, corresponding to the background noise. With the compensation movement (black curve), the modulation at small sample distances is the same as with the standard setup, but at higher scan velocities the modulation remains high. With this setup it is possible to increase the scan velocity by a factor of 8 while obtaining the same modulation as with the standard system. 2.2 Evaluating sub-sampled signals
Carrying out measurements with large sample distances causes sub-sampling of the interferogram. Evaluating these signals with established algorithms does not provide the required accuracy. Two new approaches were developed to evaluate sub-sampled signals in white-light interferometry. The first is a centre-of-mass algorithm. It is very simple and enables very fast data processing; to improve this evaluation method, the interferogram is rectified beforehand. The second approach uses the information we have about the interferogram shape: to calculate the height information, we apply a cross-correlation between the recorded interferogram and a simulated interferogram. This method is more complex than the centre-of-mass algorithm and needs much more evaluation time.
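The centre-of-mass evaluation can be sketched on a synthetic correlogram; the signal model and all numbers below are illustrative assumptions, not the authors' data:

```python
import numpy as np

# Centre-of-mass evaluation on a synthetic, background-free correlogram:
# rectify, then take the intensity-weighted mean scan position as the
# height estimate.
z = np.linspace(0.0, 20.0, 4000)               # scan positions [um]
z0, lc, period = 8.0, 2.0, 0.42                # envelope centre, width, fringe period [um]
correlogram = np.exp(-((z - z0) / lc) ** 2) * np.cos(2 * np.pi * z / period)

rectified = np.abs(correlogram)                # rectification step
height = np.sum(z * rectified) / np.sum(rectified)
```

Rectification matters because the raw fringes average to roughly zero under a symmetric envelope; the weighted mean of the rectified signal tracks the envelope centre even when the fringes themselves are sub-sampled.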
2.3 Results
With this combination of hardware add-on and new evaluation methods we achieved a measurement uncertainty of 230 nm when measuring a mirror. The scanning speed was 100 µm/sec at a 25 Hz frame rate. On optically rough objects we were able to increase the scanning speed up to 78 µm/sec without great loss of accuracy. Figure 2 compares two cross-sections of a measured coin. The measurement on the left was done with a standard setup at a scanning speed of 4 µm/s; the measurement on the right was done with the new setup and the new evaluation methods at a scanning speed of 34 µm/s.
Fig. 2. Cross-section of a measurement (coin). Left side: normal setup, scanning speed 4 µm/s. Right side: new setup, scanning speed 34 µm/s
3 Better accuracy and reliability Another challenge for white-light interferometry is the measurement of rough surfaces. Generally we speak of a rough surface if height variations greater than λ/4 appear within the diffraction spot of the imaging system. In that case the well-known interference fringes disappear and a speckle pattern appears instead. Since the phase varies statistically from speckle to speckle, it does not carry any useful information and one can only evaluate the envelope of the interference signal ("correlogram"). Since this resembles a time-of-flight measurement, we called the method "coherence radar" [1]. Comparing the correlograms of different camera pixels, one can see two main features of the interference signal that differ from smooth surfaces:
- statistical displacement of the signal envelope ("physical measurement error")
- varying interference contrast
It has been shown [4] that the surface roughness can be evaluated from the ensemble of all those displacements. If we explore the reliability of one measured height value, we find [5] that the standard deviation of the height values σz(I) depends on the surface roughness σh, the average intensity ⟨I⟩ and the individual speckle intensity I:
σz(I) = (1/√2) · √(⟨I⟩ / I) · σh        (1)
The consequence of Eq. 1 is quite far-reaching, because it reveals that every measured height value is associated with a physical measurement error: the darker the speckle, the bigger this error. Hence, we are eager to create and select bright speckles. 3.1 Consequences of varying interference contrast
An additional error source that has to be taken into account is demonstrated in the following experiment: A rough surface was measured ten times. To ensure that at any time the same speckle pattern was measured, the object under test remained at the same position. A cross section through the surface is shown in Figure 3. There is a spreading of the height values in every pixel. Since the speckles are the same for all ten measurements the physical error and so the measured height value should remain the same. So the spreading has to be caused by another error source and that is the camera noise. A dark and a bright speckle are highlighted to point out the difference: The spreading is bigger for darker speckles due to the bigger share of the camera noise. In the worst case the interference contrast of the correlogram is below the noise threshold of the camera. This means, the measurement in that speckle fails completely and an outlier appears. If the camera noise is reduced, for example by cooling the CCD-chip or by applying higher integration time, the spreading will disappear, but not the physical measuring error. This way the repeatability would be perfect, but according to Eq. 1, the measured height is still unreliable.
Fig. 3. Profile of a rough surface measured ten times.
The consequences can be summarized as follows:
- bright speckles generate more reliable measurements
- bright speckles avoid outliers
Therefore, in order to improve the quality of a measurement one has to look for bright speckles. A posteriori solutions such as filtering the measured image are not an appropriate approach.
Our new approach is to offer not only one but two (or even more) decorrelated speckle patterns to the system. The combination displays better statistics: for a single speckle pattern the darkest speckles have the highest probability, but if we may select the brightest speckle in each pixel out of two (or more) independent speckle patterns, the most likely speckle intensity is shifted to higher values and the probability of ending up with a very dark speckle is small. Decorrelated speckle patterns can be generated either by the use of different wavelengths [6] or by moving the light source; in this case the camera sees different speckle patterns. In our setup we synchronized the camera with the light sources: for odd frame numbers only light source "one" was on, whereas for even frames only light source "two" was on. A separate signal evaluation is carried out for the correlograms recorded in odd and even camera frames. Subsequently, the SNR of both signals is estimated and the height value with the better SNR is chosen. The cost of this method is of course a reduction of the actual frame rate, but according to Eq. 1 there is a significantly higher reliability of the measured profile.
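The statistical benefit of choosing the brighter of two decorrelated speckles can be checked with a quick simulation, assuming fully developed speckle (negative-exponential intensity statistics); the mean intensity and the darkness threshold are illustrative:

```python
import numpy as np

# Fully developed speckle has negative-exponential intensity statistics,
# so dark speckles are the most likely. Taking, per pixel, the brighter
# of two decorrelated patterns shifts the distribution to higher values.
rng = np.random.default_rng(1)
mean_i = 1.0
pattern_a = rng.exponential(mean_i, size=1_000_000)
pattern_b = rng.exponential(mean_i, size=1_000_000)

combined = np.maximum(pattern_a, pattern_b)     # per-pixel brighter speckle

threshold = 0.1 * mean_i                        # "too dark" cut-off (assumed)
frac_dark_one = np.mean(pattern_a < threshold)  # about 1 - exp(-0.1)
frac_dark_two = np.mean(combined < threshold)   # about (1 - exp(-0.1))**2
```

With two independent patterns the probability of a pixel being dark in both is the square of the single-pattern probability, which is exactly the mechanism behind the reduced number of outliers reported below.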
3.3 Results
In an experimental verification, two LEDs with a central wavelength of 840 nm are used as light sources. They are placed in front of a beam splitter. One of the LEDs is mounted on a micrometer slide to shift the sources against each other. The rough object under test is a diffuse surface, and the signals are recorded by a standard 50 Hz camera. The LEDs are alternately switched on and off as described above. Subsequently, in ten thousand camera pixels the higher SNR value is estimated. For comparison with the standard setup, a second measurement is performed using only one LED; again, the SNR value is estimated in ten thousand camera pixels. Figure 4 displays the SNR distributions of the two measurements. The improvement by the "choice of the brighter speckle" is significant: the maximum of the probability is shifted to higher SNR values (indicated in Fig. 4 by the arrow) and the number of pixels with low SNR is significantly reduced. To quantify this, a series of measurements with different scanning speeds was performed and the share of pixels with an SNR not exceeding 4 was determined; an SNR value of at least 4 ensures a safe distinction from noise. The result is displayed in Figure 5: for all scanning speeds, the proportion of low-SNR camera pixels is smaller for two speckle patterns than for one. Figure 6 displays a part of a coin as a measurement example.
Fig. 4. Distribution of ten thousand SNR values measured with one (grey) and two (black) speckle patterns. The arrow shows the improvement of the SNR by the "choice of the brighter speckle"
Fig. 5. Percentage of camera pixels with a SNR < 4 for one and two speckle patterns.
Fig. 6. Part of a coin measured with one (left) and two (right) speckle patterns. The number of outliers has been significantly reduced.
4 References
1. Dresel, T, Häusler, G, Venzke, H (1992) Three-dimensional sensing of rough surfaces by coherence radar. Applied Optics 31: 919-925
2. Windecker, R, Haible, P, Tiziani, H J (1995) Fast coherence scanning interferometry for smooth, rough and spherical surfaces. Journal of Modern Optics 42: 2059-2069
3. Richter, C (2004) Neue Ansätze in der Weisslichtinterferometrie. Diploma Thesis, University Erlangen-Nuremberg
4. Ettl, P, Schmidt, B, Schenk, M, Laszlo, I, Häusler, G (1998) Roughness parameters and surface deformation measured by "Coherence Radar". Proceedings of SPIE Volume 3407: 133-140
5. Ettl, P (2001) Über die Signalentstehung bei Weißlichtinterferometrie. PhD Thesis, University Erlangen-Nuremberg
6. George, N, Jain, A (1973) Speckle Reduction Using Multiple Tones of Illumination. Applied Optics 12: 1202-1212
Novel white light Interferometer with miniaturised Sensor Tip Frank Depiereux, Robert Schmitt, Tilo Pfeifer Fraunhofer Institute for Production Technology IPT Dept. Metrology and Quality Management Steinbachstrasse 17, 52074 Aachen Germany
1 Introduction
White light interferometry is an established technique in metrology [1]. It makes it possible to obtain absolute distance measurements on different surfaces. White light systems are mostly bulky stand-alone solutions and cannot be used for certain measurement tasks, such as the inspection of small cavities. White light interferometers can also be realized as fiber-based systems, which provide a great potential for miniaturization. We describe such a fiber-based white light interferometer with its main, innovative components. In principle, the presented system is based on linking two interferometers: a measuring interferometer (the donor) and a receiving interferometer (the receiver) [2]. The donor was realized as a fiber-based Fabry-Perot solution, which allows a reduction of the sensor tip diameter to 800 µm. This sensor tip is very sturdy and can be used in an industrial environment. A Michelson interferometer has been used as receiver. Scanning of the measuring area in the time domain is replaced by spatial projection of the white light fringes onto a CCD, CMOS or line detector. The choice of the detector depends on the preferred measuring frequency-to-range combination. Because the fringe pattern is detected digitally, the measuring frequency is determined in the first instance by the frame rate of the chosen detector. A stepped CERTAL (oxygen-free aluminium) mirror element replaces commonly used phase shifting elements like piezos or linear stages. The range of the system can be designed as required by choosing the number and geometries of the steps [5]. A slight tilt of the mirror perpendicular to the direction of the incident beams results in a characteristic fringe pattern on the sensor chip.
2 Theoretical background
In contrast to laser interferometry, white light interferometry provides limited coherent areas in which interference is possible. Their extent depends on the FWHM (full width at half maximum) and the central wavelength of the light source. Short coherent light sources allow absolute distance measurement. The function of the sensor and the principle of white light interferometry can be described in the frequency domain by the transmission functions of the donor and the receiver, combined with the quasi-Gaussian power density spectrum of the light source [6]. The signal intensity and position result from the path differences in the sensor and the receiver [5].

2.1 Power density spectrum of the light source (Gaussian)

P(λ) = (2 / (√π · Δλ)) · exp(−4 (λ − λ0)² / Δλ²)   (1)
Here, λ0 is the central wavelength and Δλ the FWHM of the light source.

2.2 Transmission function of the sensor (Airy function)

T_G(x, k) = [(r1 − r2)² + 4 r1 r2 sin²(2π k x)] / [(1 − r1 r2)² + 4 r1 r2 sin²(2π k x)]   (2)
Here, k = 1/λ is the wave-number, and r1 and r2 are the reflectivities of the Fabry-Perot donor, where the first surface is the end surface of the fiber and the second surface is the surface of the measurement object at a distance x.

2.3 Transmission function of the receiver

T_E(y, k) = ½ · (1 + cos(2π k y))   (3)
The path difference y is that between the stepped mirror and the reference mirror in the receiver.
2.4 The signal intensity U_x(y)

U_x(y) = ∫_{−∞}^{+∞} P(λ) · T_G(x, k) · T_E(y, k) dk   (4)
In order to simplify these equations, differences in the running time of the waves due to changes in the refractive indices of the optical components are not taken into account. On the one hand there is a so-called main signature for equal geometric paths in the receiver (y = 0); on the other hand there are sub-signatures. These become visible on the mirror when the path lengths differ in the donor and the path lengths in the receiver are equal for x and y. There are redundant signatures which are n·4X = n·(x + y) = n·2x apart for a measured value X, as long as the path difference can be compensated by the geometry of the stepped mirror, i.e. as long as they appear within the measuring range. Only the first pair of sub-signatures is of interest for the detection, because the second pair does not provide more information. It is sufficient for the measurement evaluation to detect the main signature and one sub-signature, because the distance between the center values of the signatures is exactly x = 2X. The signature width depends on the distribution of the light source's power density. The theoretical signal described by (4) is shown in Fig. 1; both the main signature and the first two pairs of sub-signatures are visible. The width of the signatures relates to the SLD used. With the central wavelength λ0 = 846.9 nm and the FWHM Δλ = 15.6 nm, the coherence length of the SLD is:
l_c^SLD = λ0² / Δλ = (846.9 nm)² / 15.6 nm ≈ 46 µm   (5)

Fig. 1. Theoretical detector signal
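A numerical sketch of Eqs. (1)-(5) follows. The SLD values are taken from the text; the reflectivities r1, r2 and the donor gap x are assumed illustration values, not from the paper. The sketch reproduces the coherence-length value and the signature structure (main signature at y = 0, sub-signatures near y = ±2x).

```python
import numpy as np

lam0, dlam = 846.9e-9, 15.6e-9   # SLD centre wavelength and FWHM (text values)
k = np.linspace(1 / (lam0 + 5 * dlam), 1 / (lam0 - 5 * dlam), 4001)
lam = 1.0 / k                    # wave-number k = 1/lambda

# Eq. (1): quasi-Gaussian power density spectrum of the source
P = 2.0 / (np.sqrt(np.pi) * dlam) * np.exp(-4 * (lam - lam0) ** 2 / dlam ** 2)

# Eq. (2): Airy transmission of the Fabry-Perot donor (r1, r2 assumed)
r1 = r2 = 0.2
def T_G(x, k):
    s2 = np.sin(2 * np.pi * k * x) ** 2
    return ((r1 - r2) ** 2 + 4 * r1 * r2 * s2) / ((1 - r1 * r2) ** 2 + 4 * r1 * r2 * s2)

# Eq. (3): two-beam transmission of the Michelson receiver
def T_E(y, k):
    return 0.5 * (1 + np.cos(2 * np.pi * k * y))

# Eq. (4): detector signal for an assumed donor gap x vs. receiver path y;
# main signature at y = 0, first sub-signatures near y = +/- 2x
x = 120e-6
y = np.linspace(-350e-6, 350e-6, 1401)
TGx = T_G(x, k)
dk = k[1] - k[0]
U = np.array([np.sum(P * TGx * T_E(yi, k)) * dk for yi in y])

# Eq. (5): coherence length of the SLD
lc = lam0 ** 2 / dlam
print(f"l_c = {lc * 1e6:.0f} um")   # 46 um, as in the text
```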
3 Interferometer set-up
The setup (Fabry-Perot donor / Michelson receiver) is shown in Fig. 2. When light is emitted from the source (A), a short coherent wave reaches the Fabry-Perot donor via the single-mode fiber coupler (B). The reference wave, which originates from the end surface of the sensor tip (C), is superimposed with the measuring wave from the object (D) in the Michelson receiver. When the paths match, they interfere. The interference signals can be detected by a CMOS, CCD or line camera (E), depending on the desired measuring range and frequency. The light source is an SLD with an average power output of ~3 mW and a central wavelength of 850 nm, already pigtailed to a single-mode fiber. The light from the source is coupled into the fiber and transmitted to the Fabry-Perot sensor via the coupler (50/50). The single-mode fiber has a diameter of approx. 4.5 µm and a numerical aperture of 0.12, which, in conjunction with a collimating sensor tip, results in an almost collimated beam with a spot size of approx. 40 µm along the complete measuring range. The use of a focusing sensor tip is also possible, which increases the capability to measure trailing edges. The single-mode fiber has a further advantage: it acts as a spatial mode filter, so the spatial coherence is restored [6]. The beam from the fiber is collimated again in the receiver to illuminate the reference mirror (F) and the tilted stepped mirror (G). As mentioned above, the stepped mirror replaces commonly used phase shifting elements.
Fig. 2. Schematic system setup
4 CFP sensor tip
When the system is used with a bare fiber as sensor tip, the beam expands with a half angle of θ = 6.8° (given by the NA of the fiber). This results in a beam diameter of d_b ≈ 245 µm at a measurement distance of 1 mm. A focussed or collimated beam can be provided by the use of a gradient-index fiber [7].
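The quoted numbers can be checked quickly. The fiber diameter (4.5 µm), NA (0.12) and distance (1 mm) are from the text; adding the fiber diameter to the divergence cone is an assumption of this sketch.

```python
import math

na = 0.12                          # numerical aperture of the single-mode fiber
theta = math.degrees(math.asin(na))
print(f"half angle: {theta:.1f} deg")        # 6.9 deg, vs. the quoted 6.8 deg

fiber_d = 4.5e-6                   # fiber diameter from the text
z = 1e-3                           # measurement distance
d_b = fiber_d + 2 * z * math.tan(math.asin(na))
print(f"beam diameter: {d_b * 1e6:.0f} um")  # ~246 um, close to the quoted ~245 um
```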
Fig. 3. Collimating connector (Fa. Diamond)
A short piece of gradient-index fiber (2) is spliced (3) to the single-mode fiber and glued into the ferrule (1) of the chosen connector (Fig. 3). The GRIN fiber then has to be polished to the proper length to provide beam shaping. The same technique has been used to realise the miniaturized sensor tip. In order to achieve an outer diameter of the sensor head below 1 mm and a sensor shaft length of at least 50 mm, a newly designed CFP tube (carbon fiber reinforced plastic) acts as sensor tip [8]. The integration of the spliced fiber resulted in a Fabry-Perot sensor tip with a diameter of 0.8 mm. A prototype of the sensor, terminated with an E2000 connector, is shown in Fig. 4. Compared to other materials, the main advantage of CFP lies in its special properties: on the one hand the sensor is flexible enough to allow industrial handling, and on the other hand it is stiff enough to keep its shape, which is essential for measurement purposes.
Fig. 4. Sensor prototype [9]
5 Mirror element
As mentioned above, the sensor tip is connected to the Michelson receiver by the fiber coupler. The lens set-up between the fiber and the Michelson interferometer provides a collimated beam which illuminates the reference and the stepped mirror. The mirror (length 10 mm, width 7 mm) provides a measuring range that depends on the dimensions, number of steps and angle of the mirror. A mirror with ten steps, each 100 µm high, was used in the set-up (Fig. 5).
Fig. 5. Stepped mirror: (a) calibration step, (b) serpentine structure
To ensure a continuous measuring range, the required angle of incidence is ~0.6°. This angle follows from the selected step dimensions (step length and height). The visibility of the fringes diminishes continuously with increasing angle of incidence [5][10]. The use of the stepped mirror within the system delivered important information for improving its design. In order to increase the measuring distance to 1 mm while simultaneously detecting both the main and the sub-signature, a new design with a so-called “calibration step” was realized (Fig. 5a: first step). The advantage of this design is that the mirror always reflects a stably detectable main signature on the first step (height 1 mm). The main signature is important not only for signal processing but also for monitoring the receiver condition. The signal processing required for a mirror with planar steps is quite intensive because of the “signature jump” at the end of each step. Fig. 5b shows an improved mirror with a serpentine structure. This structure enables continuous signal detection without signature jumps. It has to be noted, however, that these mirrors are far more difficult to manufacture than the stepped versions.
6 Results
The combination of the stepped mirror with the light source results in signatures which are laterally spread across regions of the mirror, with maximum intensity in the centre of the signature. The CCD image (Fig. 7) shows the stepped mirror with signatures on different steps. The signature on the first step (calibration step) is the main signature. As described above, the main signature has a higher intensity than the sub-signatures (Fig. 7). The sub-signature is encircled and can be located on the third step. In order to acquire the distance between sensor and object it is necessary to filter and analyze the raw image data. After background subtraction, the noise can be reduced with a frequency filter (e.g. by means of an FFT).
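The described evaluation chain (background subtraction followed by an FFT frequency filter) might look like the following sketch on a synthetic line signal. The carrier frequency, envelope width, background and noise level are all assumed values for illustration.

```python
import numpy as np

# Synthetic raw line signal: background ramp + signature carrier + noise
n = 1024
px = np.arange(n)
rng = np.random.default_rng(1)
background = 40 + 0.02 * px
envelope = np.exp(-0.5 * ((px - 400) / 25.0) ** 2)
raw = background + 30 * envelope * np.cos(2 * np.pi * 0.12 * px) \
      + rng.normal(0, 2.0, n)

# 1) Background subtraction (here: remove a fitted linear trend)
coeffs = np.polyfit(px, raw, 1)
signal = raw - np.polyval(coeffs, px)

# 2) Noise reduction with a frequency (FFT band-pass) filter around
#    the known carrier frequency of the fringe pattern
spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(n)
band = (freqs > 0.08) & (freqs < 0.16)
spec[~band] = 0.0
filtered = np.fft.irfft(spec, n)

# The signature position is where the filtered signal magnitude peaks
peak_px = int(np.argmax(np.abs(filtered)))
print(peak_px)   # near the true signature centre at pixel 400
```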
Fig. 7. CCD image of four steps with main and sub signature
Fig. 8a shows the filtered grey-value signal together with a Gaussian fit to the peak. The pixel positions of the peaks of both the main and the sub-signature can now be used to calculate the distance. The linearity of the measured positions can be seen in Fig. 8b: the y-axis shows the pixel positions measured for the sensor-object distances given on the x-axis.
Fig. 8. (a) Processed image data with signature and peak, (b) Linear fit
620
New Optical Sensors and Measurement Systems
With this method it is possible to calibrate the system and obtain a linear relation between pixel position and distance. The measurement uncertainty depends on the capability to clearly separate two peaks.
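The peak evaluation can be sketched as a Gaussian sub-pixel fit (here via a parabola on the log-intensity, which is exact for a sampled Gaussian) followed by a linear pixel-to-distance calibration. All numbers are hypothetical illustration values, not measured data.

```python
import numpy as np

def gaussian_peak_position(y):
    """Sub-pixel peak position from a parabola fitted to the logarithm
    of the three samples around the maximum (exact for a sampled Gaussian)."""
    i = int(np.argmax(y))
    i = min(max(i, 1), len(y) - 2)
    la, lb, lc = np.log(y[i - 1]), np.log(y[i]), np.log(y[i + 1])
    return i + 0.5 * (la - lc) / (la - 2 * lb + lc)

# Check on a synthetic Gaussian signature centred at 123.4 px
px = np.arange(300)
sig = np.exp(-0.5 * ((px - 123.4) / 12.0) ** 2) + 1e-9
print(round(gaussian_peak_position(sig), 2))   # 123.4

# Linear pixel-to-distance calibration (hypothetical calibration data:
# peak pixel positions recorded at known sensor translations)
translation_um = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
peak_pixel = np.array([52.1, 171.9, 292.2, 412.0, 531.8])
slope, offset = np.polyfit(peak_pixel, translation_um, 1)
# distance corresponding to a newly measured peak at pixel 350:
print(round(slope * 350.0 + offset, 1))
```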
7 Summary
We presented the set-up of a novel fiber-based, miniaturized white light interferometer with a unique CFP sensor tip. A stepped mirror that replaces mechanical scanning components was introduced. Improved designs of this mirror open up further developments of the system, such as monitoring the stability and condition of the receiver (calibration step) on the one hand and optimized signal processing (serpentine structure) on the other. It was also shown how fringe patterns on different steps of the mirror encode the measurement (distance) information.
8 Acknowledgements
The presented results have arisen from a national research project supported by the German Ministry of Education and Science (BMBF) and the Stiftung Industrieforschung. The project is carried out by CeramOptech GmbH, Bonn; Precitec GmbH, Rodgau; and Mahr-OKM GmbH, Jena.
9 References
1. Wyant, J. C: White Light Interferometry. Optical Sciences Center, University of Arizona, Tucson, AZ 85721
2. Bludau, W: Lichtwellenleiter in Sensorik und optischer Nachrichtentechnik. Springer Verlag, Heidelberg
3. Bosselmann, T. (1985) Spektral-kodierte Positionsübertragung mittels fasergekoppelter Weißlichtinterferometrie. Universitätsbibliothek Hannover, Hannover
4. Koch, A. (1985) Streckenneutrale und busfähige faseroptische Sensoren für die Wegmessung mittels Weißlichtinterferometrie. Universität Hamburg-Harburg, VDI-Verlag, Düsseldorf
5. Chen, S., Meggitt, B.T., Rogers, A.J. (1990) Electronically-scanned white-light interferometry with enhanced dynamic range. Electronics Letters Vol. 26, No. 20, pp. 1663-1665
6. Company brochure, Superlum Diodes Ltd., Moscow, Russia
7. Cerini, A., Caloz, F., Pittini, R., Marazzi, S.: “High Power PS Connectors”. DIAMOND SA, Via dei Patrizi 5, 6616 Losone, Switzerland ([email protected])
8. Depiereux, F., Schmitz, S., Lange, S. (2003) “Sensoren aus CFK”. F&M Mechatronik, Hanser Verlag, 11-12/2003
9. Photography, courtesy of Felix Depiereux, Düsseldorf
10. Chen, S., Meggitt, B.T., Rogers, A.J. (1990) A novel electronic scanner for coherence multiplexing a quasi-distributed pressure sensor. Electron. Lett., Vol. 26, No. 17, pp. 1367-1369
Honorary Lecture
Challenges in the dimensional Calibration of submicrometer Structures by Help of optical Microscopy Werner Mirandé (Retired from) Section for Quantitative Microscopy Physikalisch Technische Bundesanstalt Bundesallee 100 38116 Braunschweig Germany
1 Introduction
Optical microscopes are well-established instruments for dimensional measurements on small structures. The main advantage of imaging methods that make use of the visible and UV parts of the electromagnetic spectrum is the minimal risk of damage to the objects to be measured. Measurement results with high accuracy, however, can only be obtained by carefully analysing the process of image formation in the microscope and accounting for all sources of systematic uncertainty, or by calibrating the system by use of traceable standards.
2 Basics 2.1 Image Formation in optical Microscopes
Essential components of a typical measuring microscope are the light source, the condenser, the objective lens, a tube lens and, in measuring systems, an electro-optical receiver system. Although there is a tendency to assume that the images at least qualitatively resemble the shape of the sample structures, the distortion of the features produced by the imaging system, or in optical microscopy by the illumination conditions, can sometimes be severe. In practice, because of diffraction at diaphragms that are introduced as aperture stops in the optical setup, the image of an object point results in a three-dimensional distribution of the complex amplitude or intensity, even when the system is free of aberrations and perfectly focussed. Objects that are representative in the present context are usually non-luminescent and therefore have to be illuminated with the help of an auxiliary light source and condenser system. The condenser-aperture diaphragm controls the maximum angle of incidence of the light cone of illumination. Some of this light is then transmitted through the object, or it is absorbed, reflected or scattered, with or without a change of phase or polarisation. In the objective lens system the objective aperture controls the maximum angle of inclination to the optical axis of marginal rays that can pass through the objective. In combination with the wavelength of light it usually determines the limit of resolution. The ratio between the condenser aperture and the objective aperture is a critical parameter in an imaging system, as it determines the total degree of spatial coherence, and consequently it also essentially affects the image intensity. A point to notice is that, even for an aperture ratio of one, the microscopic image formation of a non-luminescent object is partially coherent [1]. That is why pure phase objects give rise to an intensity distribution in a diffraction-limited, perfectly focussed system with an aperture ratio of one; for perfectly incoherent imaging this would not be the case.
Out of the collection of various particular arrangements and methods of observation that have been developed, each suitable for the study of certain types of objects or designed to bring out particular features of the object [2-4], only the conventional bright field and two dark field methods shall be discussed in some more detail.

2.1.1 Bright Field Methods
For dimensional measurements on structures on photomasks, or on other features on transparent substrates, so-called bright field imaging is a suitable method [5-8]. Fig. 1 shows the schematic setup of a typical bright field system. For the sake of simplicity it has been assumed that the focal lengths of the condenser lens, the objective lens and the tube lens are equal; in this case a magnification of 1 results. In a microscope for imaging objects in bright field illumination mode the objective lens has to do double duty, acting as the condenser of the illumination system as well as the imaging objective. An additional diaphragm in the illumination system may then act as condenser aperture in order to provide adequate conditions of spatial coherence. That is of interest, for instance, when the image contrast of topographical structures is to be enhanced.
Fig. 1. Schematic beam path of bright field microscope
2.1.2 Dark Field Methods
As will be shown later, dark field imaging can be advantageous in the context of edge localisation. In a common dark field system only the light scattered or diffracted by object details reaches the image plane. In the reflected light mode this can, for example, be achieved in a conventional microscope by an elliptical ring mirror at the periphery of the objective that directs the light onto the object at a suitable angle. Incidentally, image patterns with intensity distributions similar to those obtained by the methods mentioned above can also be produced by differential interference contrast or by special adjustment of confocal microscope systems [9,10].

2.2 Some Terms and Definitions
2.2.1 Precision
A fundamental and of course desirable quality of a measuring instrument is that it delivers the same result for a certain measurement every time. Thus, the consistency of measurement results is an important concept for characterising the quality of a measuring system. This property is usually called precision. The International Organisation for Standardisation (ISO) defines repeatability and reproducibility instead of precision for the variability observed in repeated measurements that are performed under the same conditions [11]. On the one hand the repeatability and reproducibility depend on the
scale and its relation to the image; on the other hand they include the effects of noise and thermal or mechanical drift. 2.2.2 Accuracy
According to [11] and [12], nowadays the reciprocal term uncertainty (in the present context, the total measurement uncertainty) is used. It is defined as a combination of the random and systematic uncertainties together with some estimate of the confidence in this number. It is also a parameter, associated with the result of a measurement, that characterises the dispersion of the values that could be attributed to the measurand.

2.2.3 Traceability
According to the ISO (International Organisation for Standardisation), traceability is a property of the result of a measurement or of the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties. For dimensional measurements the length reference of the PTB (Physikalisch-Technische Bundesanstalt) and of other NMIs (National Metrology Institutes) is the SI unit of length, as given by the definition of the metre [13].

2.3 Standards and Calibration
From the preceding discussion it is obvious that high precision or good reproducibility is not sufficient to guarantee that a measurement result has a high accuracy or a small uncertainty. Measurements performed with an instrument that provides excellently reproducible results can be precisely wrong because of significant systematic errors that have not been identified. But what can be done in order to reduce at least these errors or deviations? An often-used solution is to compensate a measuring instrument for systematic deviations by measuring samples with well-known values of the parameter to be measured. This process is called calibration, and the samples that are specially designed for this purpose are usually called standards. The efficiency of the calibration depends critically on the quality, and particularly on the uncertainty, of the known values of the standard on the one hand, and on its adequate use on the other. Such standards are widely used by various customers in the context of their quality management systems. They are also needed for vendor/buyer communication, for developing specifications and ensuring that products meet specifications, and sometimes for compliance with legal
requirements. The essential value of a traceable standard lies in the carefully estimated calibration uncertainty claimed by its purveyor and in the ultimate user's confidence in that claim. These qualities are then transferred to the user's subsequent in-house measurements. Now the question arises how these standards themselves can be calibrated. The PTB and other NMIs have been working on this task for more than two decades [14-16]. For this purpose, specially designed measuring microscopes, sophisticated procedures for evaluating the image information, and strategies like cross calibration [17] were developed in order to reduce the ever-present uncertainties to the lowest possible level. Apart from the optical microscopes that are mainly used for this task, other methods such as scatterometry, scanning electron microscopy and scanning force microscopy are employed by members of the working groups for Quantitative Microscopy and Ultra-High Resolution Microscopy at the PTB in Braunschweig, either for cross calibration or in order to obtain additional, more detailed information on the samples to be calibrated.

2.4 Edge Localisation
Frequently used dimensional standards for the calibration of measuring microscopes are pitch or linewidth standards. The pitch value is defined by the distance of congruent edges. The linewidth is the distance between two neighbouring edges of a sample structure. Accurate edge localisation, therefore, is a substantial task in the calibration of dimensional standards. If the measurements are performed by optical microscopy, usually an intensity distribution in the magnified image of the object has to be evaluated in order to determine the edge positions. The intensity distribution in the image, however, results from the reflected or transmitted profile of the complex amplitude across an object feature, and it depends on the relative reflectivities or transmittances, on the phase shifts induced by the materials composing the feature and the substrate, and on the coherence conditions. Because of diffraction at the apertures of the imaging system and residual aberrations, even perfect edges are represented by more or less blurred distributions of intensity in the image plane. By applying threshold or extreme-value criteria, edge localisation with a precision or reproducibility better than a nanometer can in principle be achieved in well-designed instruments [18]. But what about the uncertainty?
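A threshold-criterion edge localisation of the kind mentioned above can be sketched as follows. An error-function profile serves as a stand-in for a diffraction-blurred edge, and the 50 % threshold is an assumption that recovers the true edge only for symmetric, well-separated profiles, which is precisely the limitation discussed in this section.

```python
import numpy as np
from math import erf

def edge_positions(intensity, threshold):
    """Sub-pixel positions where the profile crosses the threshold,
    by linear interpolation between neighbouring samples."""
    s = intensity - threshold
    idx = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0]
    return idx + s[idx] / (s[idx] - s[idx + 1])

# Blurred image of a bright bar with edges at 100.3 px and 220.7 px;
# the blur width is an assumed illustration value
px = np.arange(400, dtype=float)
blur = 8.0
profile = np.array([0.5 * (erf((v - 100.3) / blur) - erf((v - 220.7) / blur))
                    for v in px])

# The 50 % threshold criterion recovers the edge positions of this
# symmetric, well-separated profile almost exactly
edges = edge_positions(profile, 0.5 * profile.max())
print(edges)   # two crossings, near 100.3 and 220.7
```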
3 Examples
Some challenging capabilities and problems of edge localization by using extreme-value criteria can be demonstrated in the context of the calibration of standards that are designated for the characterization of the tips of scanning force microscopes. These samples consist of silicon chips with surface structures that have been produced by wet etching. They are developed by the IPHT (Institut für Physikalische Hochtechnologie), Jena, in collaboration with the PTB and partners from industry [19, 20]. In the following examples it is assumed that the object structures consist of flat, isolated silicon bars on a silicon substrate. All are 100 nm in height and have perfect vertical edges. They are imaged in reflected light with quasi-monochromatic radiation at λ = 365 nm. The calculations have been performed by use of the software package MICROSIM [21], which is based on the RCWA (Rigorous Coupled Wave Analysis) method and was developed at the Institut für Technische Optik, University of Stuttgart. The intensity across the images of silicon bars with linewidths of 1500 nm and 300 nm is shown for bright field imaging and different conditions of polarisation in Fig. 2. Fig. 3 shows the modelled distribution of the field in the neighbourhood of the sample surface and the intensity across the image of a silicon bar with 300 nm linewidth that is imaged in conventional dark field mode with a circularly shaped condenser aperture.
4 Discussion
From the modelled distributions shown in Fig. 2 and Fig. 3 it becomes plainly visible that the extreme values of the image intensity are not located exactly at the positions of the edges, and that the deviation depends on the polarisation (TE: E parallel to the edge, TM: E perpendicular to the edge). However, in all cases the error for a linewidth measurement remains smaller than 50 nm. Incidentally, calculations using a model on the basis of scalar diffraction theory also do not reveal a deviation larger than 50 nm for bright field imaging of an object like that of Fig. 2.
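The qualitative behaviour (extrema offset from the geometric edges, and merging of the extrema for structures below λ/NA) can be illustrated with a much simplified, fully coherent scalar model: an ideal low-pass pupil at the cutoff NA/λ. This is not the rigorous RCWA or partially coherent model used for the figures, and the reflectivity contrast is an assumed value.

```python
import numpy as np

lam, na = 365e-9, 0.9          # wavelength and objective NA from the text
n, dx = 4096, 5e-9             # sampling grid (assumed)
x = (np.arange(n) - n // 2) * dx
f = np.fft.fftfreq(n, dx)
pupil = np.abs(f) <= na / lam  # coherent transfer function: ideal low-pass pupil

def coherent_image(width):
    """Coherent scalar image of a bright bar on a darker background
    (the amplitude contrast 1.0 vs 0.55 is an assumed illustration value)."""
    obj = np.where(np.abs(x) < width / 2, 1.0, 0.55).astype(complex)
    return np.abs(np.fft.ifft(np.fft.fft(obj) * pupil)) ** 2

# 1500 nm bar: two distinct maxima flank the centre, offset inwards from
# the geometric edges at +/- 750 nm by the diffraction ringing
img_wide = coherent_image(1500e-9)
half = n // 2
left = int(np.argmax(img_wide[:half]))
right = half + int(np.argmax(img_wide[half:]))
print(x[left] * 1e9, x[right] * 1e9)

# 300 nm bar (< lambda/NA ~ 406 nm): the edge extrema have merged
# into a single central maximum, as stated in the text
img_narrow = coherent_image(300e-9)
print(abs(x[int(np.argmax(img_narrow))]) * 1e9)   # ~0
```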
628
New Optical Sensors and Measurement Systems
Fig. 2. Modelled image intensity across Si bars with linewidth (a) 1500 nm, (b) 300 nm. Bright field imaging; objective numerical aperture: 0.9, condenser aperture: 0.6
Fig. 3. Modelled intensity for a Si bar with a linewidth of 300 nm. Dark field imaging; objective aperture: 0.85, ring-shaped condenser aperture
There is another unfavourable feature of conventional dark field imaging: for a linewidth not much smaller than the wavelength, the intensities of the extreme values already begin to merge. In contrast, the new dark field imaging methods AGID and FIRM [22] make it possible to separate the signals from the different edges by alternating grazing incidence illumination or by making use of frustrated total internal reflection. But also in this case a non-negligible deviation from the true linewidth results, and modelling based on rigorous diffraction theories has to be used to determine the systematic offsets. Alternating grazing incidence dark field illumination (AGID) is assumed for the calculation of the image distributions for the same object as in Fig. 3.
Fig. 4. Image distribution calculated for the same object as in Fig. 3 but with illumination from the left side according to the AGID method
5 Conclusion
A basic task in dimensional metrology is edge localisation. The distance of two neighbouring edges in an object structure, for instance, can be determined by evaluating the intensity distribution by use of threshold or extreme-value criteria. However, the distributions in the images begin to overlap for structures with dimensions below λ/NA, where λ is the wavelength and NA is the numerical aperture of the imaging lens. That is why the distances of the extreme values or of the thresholds become strongly dependent on the width of the structures, and for still smaller structures the extrema usually merge into one extremum. By use of a special new type of dark field illumination it becomes possible to separate the maxima of intensity representing the edges of single microstructures whose edges would not be resolved by conventional dark field techniques. But also with this method the positions of the extreme values in the image distribution have an offset to the true positions of the structure edges. In order to obtain traceable measurements, modelling of the image intensity on the basis of rigorous diffraction theories can be applied to compensate for the residual offsets from the exact edge positions [23]. The most direct connection to the length scale of a measuring microscope is achievable by making use of the object scanning method [24], where the object stage of the system is equipped with a laser interferometer.
6 Acknowledgements
The author wants to thank N. Kerwin and G. Ehret for performing the calculations and providing the figures, as well as A. Diener for kind help in preparing the final version of the manuscript.
7 References
1. Hopkins, H. H (1953) On the diffraction theory of optical images. Proc. Roy. Soc. Lond. A 217: 408-432
2. Pluta, M (1989) Advanced Light Microscopy, Vol. 2, Specialized Methods. PWN-Polish Scientific Publishers, Warszawa, 494 pages
3. Totzeck, M, Jacobsen, H, Tiziani, H. J (2000) Edge localisation of subwavelength structures by use of interferometry and extreme-value criteria. Applied Optics 39: 6295-6305
4. Bodermann, B, Michaelis, W, Diener, A, Mirandé, W (2003) New Methods for Measurements on Photomasks using dark field optical Microscopy. Proc. of 19th European Mask Conference on Mask Technology for Integrated Circuits and Micro-Components, GMM-Fachbericht 39: 47-52
5. Nyyssonen, D, Larrabee, R (1987) Submicrometer Linewidth Metrology in the Optical Microscope. J. Research of the National Bureau of Standards, Vol. 16
6. Potzick, J (1989) Automated Calibration of Optical Photomask Linewidth Standards at the National Institute of Standards and Technology. SPIE Symposium on Microlithography 1087: 165-178
7. Czaske, M, Mirandé, W, Fraatz, M (1991) Optical Linewidth Measurements on Masks and Wafers in the Micrometre and Submicrometre Range. Progress in Precision Engineering: 328-329
8. Nunn, J, Mirandé, W, Jacobsen, H, Talene, N (1997) Challenges in the calibration of a photomask linewidth standard developed for the European Commission. GMM-Fachbericht 21: 53-68
9. Lessor, D. L, Hartmann, J. S, Gordon, R. L (1979) Quantitative Surface Topography Determination by Nomarski Reflection Microscopy, I. Theory. J. Opt. Soc. Am. 69: 22-23
10. Kimura, S, Wilson, T (1994) Confocal scanning dark-field polarization microscopy. Applied Optics 33: 1274-1278
11. ISO, Geneva (1993) International Vocabulary of Basic and General Terms in Metrology. 2nd Edition
12. ISO, Geneva (1993) Guide to the Expression of Uncertainty in Measurement. 1st Edition
13. Bureau International des Poids et Mesures (1991) Le Système International d'Unités (SI), 6e Édition
14. Nyyssonen, D (1977) Linewidth Measurement with an Optical Microscope: the Effect of Operating Conditions on the Image Profile. Applied Optics 16: 2223-2230
15. Downs, M. J, Turner, N. P, King, R. J, Horsfield, A (1983) Linewidth Measurements on Photomasks using Optical Image-shear Microscopy. Proc. 50th PTB-Seminar Micrometrology PTB-Opt-15: 24-32
16. Mirandé, W (1983) Absolutmessungen von Strukturbreiten im Mikrometerbereich mit dem Lichtmikroskop. Proc. 50th PTB-Seminar Micrometrology PTB-Opt-15: 3-16
17. Bodermann, B, Mirandé, W (2003) Status of optical CD metrology at PTB. Proc. 188th PTB-Seminar, PTB-Bericht F-48: 115-129
18. Hourd, A. C et al. (2003) Implementation of 248 nm based CD Metrology for Advanced Reticle Production. Proc. of 19th European Mask Conference on Mask Technology for Integrated Circuits and Micro-Components, GMM-Fachbericht 39: 203-212
19. Hübner, U et al. (2003) Downwards to metrology in nanoscale: determination of the AFM tip shape with well known sharp-edged calibration structures. Appl. Phys. A 76: 913-917
20. Hübner, U et al. (2005) Prototypes of nanoscale CD-standards for high resolution optical microscopy and AFM. Proc. 5th euspen International Conference
21. Totzeck, M (2001) Numerical simulation of high-NA quantitative polarization microscopy and corresponding near-fields. Optik 112: 399-406
22. Mirandé, W, Bodermann, B (2003) New dark field microscopy methods. Proceedings of the 187th PTB-Seminar on Current Developments in Microscopy PTB-Opt-68: 73-86
23. Schröder, K. P, Mirandé, W, Geuther, H, Herrmann, C (1995) In quest of nm accuracy: supporting optical metrology by rigorous diffraction theory and AFM topography. Optics Communications 115: 568-575
24. Mirandé, W (1990) Strukturbreiten-Kalibrierung und Kontrolle. VDI-Berichte 870: 47-82
A white light interferometer for measurement of external cylindrical surfaces Armando Albertazzi G. Jr., Alex Dal Pont Universidade Federal de Santa Catarina Metrology and Automation Laboratory Cx Postal 5053, CEP 88 040-970, Florianópolis, SC Brazil
New Optical Sensors and Measurement Systems

1 Introduction
White light interferometry has been extensively used for profiling technical parts. It combines the high sensitivity of interferometry with the ability to perform absolute height measurements [1-8]. Parts with lateral sizes ranging from a few micrometers to over 100 mm can be measured. Height resolutions better than one nanometer and measurement ranges of up to several millimeters are achievable, which makes this technique excellent for industrial applications concerning geometric quality control. Several commercial systems based on this measurement principle are already available on the market.

A typical white light interferometer is a Michelson-like configuration. Light from a low-coherence source is collimated and directed to a partial mirror. Part of the light is directed to a reference surface, usually a high-quality mirror, and is reflected back to the imaging device. The remaining light is directed to the object to be measured, is reflected back to the imaging device, and is combined with the light reflected by the reference surface. An interference pattern is visible only in those regions where the optical path difference is smaller than the coherence length of the light source. The locus of points where the interference pattern is visible is a contour line for a given height. By moving either the part to be measured or the reference mirror, the entire surface can be scanned. An algorithm finds the position of maximum contrast of the interference pattern for each pixel of the image and assigns it a height value.

White light interferometers naturally measure in rectangular coordinates: X and Y are associated with the lateral dimensions of the image and Z with the heights. In this paper the authors extend white light interferometry to measure in cylindrical coordinates. A high precision 45° conical mirror is used both to illuminate cylindrical parts and to image the resulting interference pattern onto a CCD camera. This configuration opens possibilities for measuring high precision cylindrical or nearly cylindrical parts; both continuous and stepped surfaces can be measured. The measurement principle, practical considerations and performance results are presented here, as well as a few applications of practical interest.
2 The optical setup
45° conical mirrors have some interesting optical properties. They can be used to optically transform rectangular coordinates into cylindrical coordinates. Collimated light propagating in the Z direction is reflected by the conical mirror to propagate in the radial direction, as shown in Fig. 1. If a cylinder is aligned with the axis of the conical mirror, its image reflected in the 45° conical mirror is transformed in such a way that it is seen as a flat disc. If the observer is located at infinity, or a telecentric optical system is used, and if the alignment and mirror geometry are ideal, a perfect cylinder is transformed into a perfect flat disc. If the quality of the optical components and the alignment are good enough, the form deviations of the cylindrical surface map directly to flatness errors of the flat disc.
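The coordinate transform performed optically by the conical mirror can be mimicked numerically. The following sketch (all names and numerical values are illustrative, not from the paper) maps a point on a cylindrical surface to its flat-disc image under the idealized assumptions above: azimuth is preserved, axial position becomes disc radius, and the local cylinder radius becomes the disc "height".

```python
import numpy as np

def cylinder_to_disc(theta, z, r, rho0=1.0):
    """Ideal 45-degree conical-mirror mapping: azimuth theta is kept,
    axial position z becomes radial position on the disc (offset by a
    hypothetical inner radius rho0), and the local cylinder radius r
    becomes the 'height' of the disc."""
    rho = rho0 + z
    x = rho * np.cos(theta)   # Cartesian disc coordinates
    y = rho * np.sin(theta)
    return x, y, r            # disc height encodes the cylinder radius

# A perfect cylinder of radius 11.25 mm maps to a perfectly flat disc
theta = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
x, y, h = cylinder_to_disc(theta, z=0.5, r=11.25)
```

Form deviations of the cylinder (varying `r`) then appear directly as flatness deviations of the disc.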
Fig. 1. Reflection of a cylinder by a conical mirror: cylindrical surfaces become flat discs
To measure in cylindrical coordinates, the white light interferometer is modified as shown in Fig. 2. A near-infrared ultra-bright LED is used as a low-coherence light source with a coherence length of about 20 µm. The light is naturally expanded and split by a partial mirror into two components. The first component goes through the partial mirror, is collimated, reaches a flat reference mirror, is reflected back toward the partial mirror, and is then imaged onto a high resolution (1300 x 1030) digital camera. The second component is reflected toward the bottom of the figure by the partial mirror, is collimated and reaches the 45° conical mirror. The conical mirror reflects the collimated light radially toward the cylindrical surface to be measured, located inside the conical mirror. The light is reflected by the measured surface back to the conical mirror and is then imaged onto the camera. Unlike in most white light interferometers, the collimating lenses are placed after the partial mirror, since a larger clear aperture was needed for the image of the measured cylinder reflected by the conical mirror. Both collimating lenses are identical, to minimize the optical aberration differences between the two arms of the interferometer. The outer diameter of the 45° conical mirror is about 80 mm; it was designed to fit the set of diameters and heights of the cylindrical pieces to be measured, and was manufactured from aluminium on an ultra-precision turning machine with a diamond tool.
Fig. 2. Modified white light interferometer to measure in cylindrical coordinates
Interference patterns are visible where the optical path difference is smaller than the coherence length of the light source. A high precision motor moves the flat reference mirror across the measurement range, which produces equivalent changes in the radius of a virtual cylinder that scans the cylindrical measurement volume. The peak of maximum fringe contrast is searched by software for each pixel of the image and represents the height of the flat disc, which is equivalent to the radius at which the virtual cylinder crosses the actual measured shape. The measured heights are thus converted to radii and the actual 3D surface is reconstructed in cylindrical coordinates.
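The per-pixel search for the position of maximum fringe contrast can be sketched as follows; the envelope estimator (a running standard deviation) and the parabolic sub-step interpolation are common choices, not necessarily the ones used by the authors.

```python
import numpy as np

def contrast_peak(scan_positions, intensities, window=5):
    """Scan position of maximum fringe contrast for a single pixel.
    The coherence envelope is estimated by a running standard deviation;
    a parabola through the three samples around the maximum refines the
    estimate to a fraction of one scan step."""
    env = np.array([np.std(intensities[max(0, i - window):i + window + 1])
                    for i in range(len(intensities))])
    i = int(np.argmax(env))
    delta = 0.0
    if 0 < i < len(env) - 1:                      # parabolic interpolation
        denom = env[i - 1] - 2 * env[i] + env[i + 1]
        if denom != 0:
            delta = np.clip(0.5 * (env[i - 1] - env[i + 1]) / denom, -1.0, 1.0)
    step = scan_positions[1] - scan_positions[0]
    return scan_positions[0] + (i + delta) * step

# Synthetic white-light signal: fringes under a Gaussian coherence envelope
z = np.linspace(-30.0, 30.0, 601)                 # scan positions, µm
z0 = 4.0                                          # true envelope centre, µm
envelope = np.exp(-((z - z0) / 3.0) ** 2)         # short-coherence envelope
signal = 1.0 + envelope * np.cos(2 * np.pi * (z - z0) / 0.42)
peak = contrast_peak(z, signal)
```

The recovered `peak` is then converted to a radius of the virtual scanning cylinder.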
3 Alignments and Calibration
To align and calibrate the interferometer, a 22.5 mm diameter master cylinder was used as reference. The form error of the master cylinder is known to be better than ±0.5 µm. The master cylinder, mirrors and lenses were carefully aligned to minimize the number of visible residual fringes. The master cylinder was measured ten times and the apparent shape deviation of the master cylinder was computed from the mean values. Since the master cylinder was taken as the reference, its deviation from a perfect mathematical cylinder was assumed to be the systematic error of the interferometer. This systematic error map was saved and used to correct all further measurements.

The data sets from the ten repeated measurements were also analyzed to estimate typical random error components. The standard deviation was computed separately for each measured point on the cylindrical surface. A typical χ² distribution was obtained for the standard deviations of all measured points: the most frequent value was 0.11 µm and 95% of the values were smaller than 0.27 µm. The influences of other major error sources were analyzed and their contributions are presented in Table 1. The type A component (standard deviation) and the master cylinder uncertainty were the most significant ones. The overall expanded uncertainty was estimated to be about 1.0 µm at a 95% confidence level.

The alignment of the part to be measured with respect to the conical mirror axis is not a relevant error source. Translations and tilts of the measured cylinder relative to the conical mirror axis can easily be detected and corrected by software. However, a finer alignment reduces the measurement time, since a smaller scanning range is required.
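The calibration analysis described above (mean map as systematic error, per-pixel standard deviation as the random component) can be reproduced on synthetic data; all numbers below are illustrative, not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten hypothetical repeated measurements of the master cylinder (µm):
# a fixed "true" form error plus independent per-pixel random noise.
n_rep, n_pix = 10, 5000
form_error = 0.3 * np.sin(np.linspace(0.0, 2 * np.pi, n_pix))
measurements = form_error + rng.normal(0.0, 0.11, size=(n_rep, n_pix))

# Mean map = estimate of the interferometer's systematic error,
# which would be subtracted from all subsequent measurements.
systematic_map = measurements.mean(axis=0)

# Per-pixel standard deviation characterizes the random error component;
# its distribution over the surface follows a chi-like shape.
std_map = measurements.std(axis=0, ddof=1)
p95 = np.percentile(std_map, 95)
```

With a true noise level of 0.11 µm, the most frequent standard deviation and the 95th percentile come out close to the figures quoted in the text.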
Table 1. Uncertainty budget for cylindrical shape measurement

Symbol | Uncertainty source           | Value    | Distribution | Divider | u       | ν
uA     | Type A (standard deviation)  | 0.27 µm  | normal       | 1       | 0.27 µm | 9
uSE    | Systematic error uncertainty | 0.09 µm  | normal       | 1       | 0.09 µm | ∞
uCil   | Master cylinder uncertainty  | 0.50 µm  | rectangular  | √3      | 0.29 µm | ∞
uC     | Combined uncertainty         |          | normal       |         | 0.41 µm | 9
U95%   | Expanded uncertainty         |          | normal       | k = 2.32| 0.95 µm |
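The combined and expanded uncertainties in Table 1 follow the usual GUM root-sum-square combination, which is easy to verify:

```python
import math

# Uncertainty budget of Table 1 (all values in µm)
u_A = 0.27                   # type A (standard deviation), normal, divider 1
u_SE = 0.09                  # systematic error uncertainty, normal, divider 1
u_cil = 0.50 / math.sqrt(3)  # master cylinder, rectangular -> divider sqrt(3)

# Combined standard uncertainty (root sum of squares)
u_C = math.sqrt(u_A ** 2 + u_SE ** 2 + u_cil ** 2)

# Expanded uncertainty with the coverage factor quoted in Table 1
k = 2.32
U95 = k * u_C
```

This reproduces uC ≈ 0.41 µm and U95% ≈ 0.95 µm, i.e. the "about 1.0 µm" quoted in the text.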
At the present stage, the alignment requires some practice and about 15 minutes. It starts with a coarse alignment that tries to make the piston's image uniformly illuminated. After that, the software starts a loop in which it acquires one image, shifts the flat reference mirror by about 180° of phase, and acquires a second image. The images are subtracted and the result is squared. The result shows white areas in those regions of the measured cylinder where the interference pattern is visible. These white areas are equivalent to pseudo-interferometric fringes, as shown in Fig. 3. Mechanical stages are used to align the measured cylinder. The fine alignment is guided by the shape of the pseudo-interferometric fringes and is completed when only one fringe occupies the entire image.
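The pseudo-interferometric fringes used for alignment can be illustrated with two synthetic frames differing by a 180° phase step; the frames and the band-shaped contrast envelope are invented for the demonstration.

```python
import numpy as np

def pseudo_fringes(img_a, img_b):
    """Alignment map: difference of two frames taken with the reference
    mirror shifted by ~180 deg of phase, squared.  Where white-light
    interference is visible the fringe term flips sign between frames,
    so those regions show up bright; elsewhere the frames cancel."""
    diff = img_a.astype(float) - img_b.astype(float)
    return diff ** 2

# Hypothetical frames: uniform background plus interference in a band
x = np.linspace(-1.0, 1.0, 200)
band = np.exp(-(x / 0.2) ** 2)              # contrast envelope of the band
frame1 = 100 + 40 * band * np.cos(40 * x)
frame2 = 100 + 40 * band * np.cos(40 * x + np.pi)
fringes = pseudo_fringes(frame1, frame2)
```

The bright region marks where the coherence condition is satisfied; alignment is complete when one such fringe fills the image.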
4 Measurement examples
The interferometer has been successfully applied to measure the cylindrical deviation of gas compressor pistons. With diameters ranging from 17 mm to 26 mm, these pistons are made of steel and covered with a phosphate coating, which makes the cylinder surface quite rough. Fig. 4 shows the results of a measurement of a gas compressor piston on a greatly exaggerated scale. Note that the difference between the minimum and maximum radius is only about 7 µm. Quantitative analyses of a longitudinal and a transversal section of this piston are shown in Fig. 5.
Fig. 3. Alignment sequence using pseudo-interference fringes
Fig. 4. Measurement example of a gas compressor piston
Fig. 5. Quantitative analysis of the piston shown in Fig. 4.
Fig. 6 demonstrates that it is possible to measure stepped cylinders. The scale of the left part of the figure was chosen to make both cylindrical surfaces visible. The scale of the right part was chosen to emphasize the form deviation of the cylindrical surface with the larger diameter.
Fig. 6. Measurement results for a stepped piston
No surface preparation at all was needed in any case. For the continuous cylinder the scanning was done in one range only. For the stepped one, the scanning was done in two continuous regions close to the expected diameter value of each area. The scanning time for each cylindrical part was typically three to five minutes.
5 Conclusions
This paper shows that it is possible to extend white light interferometry to measure both continuous and stepped cylindrical surfaces. The optical setup is modified by introducing a high precision 45° conical mirror that optically transforms rectangular coordinates into cylindrical coordinates. A prototype of this new interferometer design was built, aligned and calibrated using a master cylinder as reference. At the current stage the prototype is not optimized, but it was possible to perform preliminary evaluations and to apply it to measure pistons of gas compressors. The typical measurement time ranges from three to five minutes. An overall expanded measurement uncertainty of about 1.0 µm was found, which is sufficient for several industrial applications. This configuration opens possibilities for new applications of high interest in mechanical engineering, such as wear measurement on cylindrical surfaces. The authors believe that improvements in the scanning mechanism and the use of a better reference cylinder can reduce the expanded uncertainty to below 0.3 µm. Current development efforts are focused on the measurement of inner cylindrical geometries and on algorithms for wear measurement.
6 Acknowledgments
The authors would like to thank Analucia V. Fantin, José R. Menezes, Danilo Santos, Fabricio Broering, Ricardo S. Yoshimura and Lucas B. de Oliveira for their help and encouragement, and MCT/TIB, Finep and Embraco for financial support.
7 References
1. Creath, K., Phase measurement interferometry methods. Progress in Optics XXVI, ed. E. Wolf, p. 349-442, 1988.
2. Dresel, T.; Häusler, G.; Venzke, H., Three-dimensional sensing of rough surfaces by coherence radar. Appl. Opt., v. 31, n. 7, p. 919-925, 1992.
3. De Groot, P.; Deck, L., Three-dimensional imaging by sub-Nyquist sampling of white light interferograms. Optics Letters, v. 18, n. 17, p. 1462-1464, 1993.
4. Häusler, G., et al., Limits of optical range sensors and how to exploit them. International Trends in Optics and Photonics ICO IV, T. Asakura, Ed. (Springer Series in Optical Sciences, v. 74, Springer Verlag, Berlin, Heidelberg, New York), p. 328-342, 1999.
5. Yatagai, T., Recent progress in white-light interferometry. Proc. SPIE Vol. 2340, p. 338-345, Interferometry '94: New Techniques and Analysis in Optical Measurements, M. Kujawinska, K. Patorski, Eds., Dec. 1994.
6. Helen, S. S.; Kothiyal, M. P.; Sirohi, R. S., Analysis of spectrally resolved white light interferograms: use of phase shifting technique. Optical Engineering 40(07), p. 1329-1336, Jul. 2001.
7. de Groot, P.; de Lega, X. C., Valve cone measurement using white light interference microscopy in a spherical measurement geometry. Optical Engineering 42(05), p. 1232-1237, May 2003.
8. de Groot, P.; Deck, L. L., Surface profiling by frequency-domain analysis of white light interferograms. Proc. SPIE Vol. 2248, p. 101-104, Optical Measurements and Sensors for the Process Industries, C. Gorecki, R. W. Preater, Eds., Nov.
Pixelated Phase-Mask Dynamic Interferometers James Millerd, Neal Brock, John Hayes, Michael North-Morris, Brad Kimbrough, and James Wyant 4D Technology Corporation 3280 E. Hemisphere Loop, Suite 146 Tucson, AZ 85706
1 Introduction
We demonstrate a new type of spatial phase-shifting, dynamic interferometer that can acquire phase-shifted interferograms in a single camera frame. The interferometer is constructed with a pixelated phase-mask aligned to a detector array. The phase-mask encodes a high-frequency spatial interference pattern on two collinear and orthogonally polarized reference and test beams. The wide spectral response of the mask and the true common-path design permit operation with a wide variety of interferometer front ends and with virtually any light source, including white light. The technique is particularly useful for measurement applications where vibration or motion is intrinsic. In this paper we present the designs of several types of dynamic interferometers, including a novel Fizeau configuration, and show measurement results.
2 Phase Sensor Configuration The heart of the system consists of a pixelated phase-mask where each pixel has a unique phase-shift. By arranging the phase-steps in a repeating pattern, fabrication of the mask and processing of the data can be simplified. A small number of discrete steps can be arranged into a “unit cell” which is then repeated contiguously over the entire array. The unit cell can be thought of as a super-pixel; the phase across the unit cell is assumed to change very little. By providing at least three discrete phase-shifts in a unit cell, sufficient interferograms are produced to characterize a sample surface using conventional interferometric algorithms.
The overall system concept is shown in Fig. 1 and consists of: a polarization interferometer that generates a reference wavefront R and a test wavefront T having orthogonal polarization states (which can be linear as well as circular) with respect to each other; a pixelated phase-mask that introduces an effective phase-delay between the reference and test wavefronts at each pixel and subsequently interferes the transmitted light; and a detector array that converts the optical intensity sensed at each pixel to an electrical charge. The pixelated phase-mask and the detector array may be located in substantially the same image plane, or positioned in conjugate image planes.
Fig. 1. Basic concept for the pixelated phase-mask dynamic interferometer
In principle, a phase-mask as shown in Fig. 1 could be constructed using an etched birefringent plate; however, such a device is difficult to manufacture accurately. An alternative approach is to use an array of micropolarizers. Kothiyal and Delisle [1] showed that the intensity of two beams having orthogonal circular polarization (i.e., right-hand circular and left-hand circular) that are interfered by a polarizer is given by

I(x, y) = 1/2 [ Ir + Is + 2 √(Ir Is) cos(Δφ(x, y) + 2αp) ]   (1)

where αp is the angle of the polarizer with respect to the x, y plane. The basic principle is illustrated in Figure 1. From this relation it can be seen that a polarizer oriented at zero degrees causes interference between the in-phase (i.e., 0°) components of the incident reference and test wavefronts R and T. Similarly, polarizers oriented at 45, 90 and 135 degrees interfere the in-phase quadrature (i.e., 90°), out-of-phase (i.e., 180°) and out-of-phase quadrature (i.e., 270°) components, respectively. The basic principle can be extended to an array format so that each pixel has a unique phase-shift transfer function.

Several methods can be used to construct the pixelated phase-mask. Nordin et al. [2] describe the use of micropolarizer arrays made from fine conducting wire arrays for imaging polarimetry in the near infrared spectrum. Recently, the use of wire grid arrays has also been demonstrated in the visible region of the spectrum [3]. The planar nature of the conducting strip structure permits using it as a polarizer over an extremely wide range of incident angles, including zero degrees, and over a broad range of wavelengths, provided the period remains much less than the wavelength. For circularly polarized input light, the micropolarizer array can be used directly. For linearly polarized input light, which is more typical of polarization interferometers, a quarter-wave retarder plate (zero order or achromatic [4]) can be used in combination with the micropolarizer array. The quarter-wave retarder may be adjoined to the oriented polarizer array to form the pixelated phase-mask; however, it can also be separated by other imaging optics.
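Equation (1) implies that the four polarizer orientations sample the interference in quadrature, so a standard 4-step formula recovers the phase. A quick numerical check (the test phase is arbitrary):

```python
import numpy as np

def pixel_intensity(delta_phi, alpha_p, I_r=1.0, I_s=1.0):
    """Eq. (1): interference of two orthogonally circularly polarized
    beams through a polarizer at angle alpha_p; the effective phase
    shift is twice the polarizer angle."""
    return 0.5 * (I_r + I_s
                  + 2 * np.sqrt(I_r * I_s) * np.cos(delta_phi + 2 * alpha_p))

delta_phi = 0.7                                  # arbitrary test phase (rad)
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])    # the four mask orientations
I0, I90, I180, I270 = pixel_intensity(delta_phi, angles)

# Standard 4-step recovery of the phase from the quadrature samples
recovered = np.arctan2(I270 - I90, I0 - I180)
```

The recovered phase equals the input phase, confirming that polarizer angle maps one-to-one onto phase shift.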
3 Data Processing
The effective phase-shift of each pixel of the polarization phase-mask can have any spatial distribution; however, a regularly repeating pattern is highly desirable. A preferred embodiment of the polarization phase-mask is an arrangement wherein neighboring pixels are in quadrature or out-of-phase with respect to each other; that is, there is a ninety-degree or one-hundred-eighty-degree relative phase shift between neighboring pixels. Multiple interferograms can thus be synthesized by combining pixels with like transfer functions. To generate a continuous fringe map of the kind opticians are accustomed to viewing for alignment, pixels with like transfer functions can be combined into a single image or interferogram. The phase difference is calculated at each spatial coordinate by combining and weighting the measured signals of neighboring pixels in a fashion similar to a windowed convolution algorithm. The phase difference and modulation index can be calculated by a variety of algorithms that are well known in the art [5]. This method provides an output phase-difference map having a total number of pixels equal to (N-w) times (M-v), where w and v are the sizes of the correlation window and N and M are the sizes of the array in the x and y directions, respectively. Thus, the resolution of the phase map is close to the original array size, although the spatial frequency content has been somewhat filtered by the convolution process.

Figure 1 illustrates two possible ways of arranging the polarization phase-mask and detector pixels (circular and stacked). We examined the sensitivity of both orientations as a function of phase gradient using a computer model and plot the results in Figure 2. The stacked orientation preferentially reduces the effects of sensor smear because each column of pixels has a constant signal level regardless of the input phase. However, the circular orientation has a significantly reduced sensitivity to phase gradients and is therefore the preferred orientation under most conditions.
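Parsing a pixelated-mask frame into four phase-shifted interferograms can be sketched as below; the 2x2 unit-cell layout and the per-cell phase computation (rather than the windowed 3x3 convolution described in the text) are simplifying assumptions for illustration.

```python
import numpy as np

def parse_and_phase(frame):
    """Split a frame taken through a 2x2 unit-cell mask with phase shifts
    [[0, 90], [270, 180]] degrees (hypothetical layout) into four
    interferograms and compute one wrapped-phase sample per unit cell."""
    I0 = frame[0::2, 0::2]
    I90 = frame[0::2, 1::2]
    I270 = frame[1::2, 0::2]
    I180 = frame[1::2, 1::2]
    return np.arctan2(I270 - I90, I0 - I180)     # wrapped phase per cell

# Synthetic frame: slowly varying phase sampled through the mask pattern
ny, nx = 8, 8
yy, xx = np.mgrid[0:ny, 0:nx]
phase = 0.05 * xx + 0.02 * yy                    # gentle tilt, radians
shifts = np.deg2rad(np.tile([[0, 90], [270, 180]], (ny // 2, nx // 2)))
frame = 1.0 + np.cos(phase + shifts)
wrapped = parse_and_phase(frame)
```

Because the phase varies slowly across each unit cell, the per-cell result closely tracks the true phase; a windowed convolution would additionally recover near-full resolution, as described above.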
(Plot axes: RMS error in waves versus tilt in waves/pixel, for the circular and stacked orientations, each with and without 50% sensor smear.)
Fig. 2. Simulated phase error as a function of fringe tilt for two pixel orientations (circular and stacked), with and without sensor smear.
4 Interferometer Configurations

4.1 Twyman-Green
One type of measurement system is illustrated in Fig. 3, wherein the pixelated phase-mask is used in conjunction with a Twyman-Green interferometer (TG). An afocal relay is used to form an image of the input pupil plane at the location of the pixelated phase-mask. The aperture is preferably selected so that the diffraction-limited spot size at the pixelated phase-mask is approximately 2 effective pixels in diameter, in order to avoid aliasing of the interference pattern's spatial frequency. This selection of the aperture ensures that spatial frequencies higher than the pixel spacing are not present in the final interference pattern.
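The aperture criterion can be turned into a rough working-f-number estimate using the Airy-disc diameter 2.44·λ·F#; the wavelength and the identification of an "effective pixel" with a 2x2 unit cell are assumptions for illustration, not the instrument's actual values.

```python
# Illustrative aperture selection: the Airy spot diameter ~ 2.44 * lambda * F#
# at the phase-mask should be about two effective pixels.
wavelength = 0.6328e-6         # metres (HeNe, assumed)
pixel = 9e-6                   # mask/CCD pixel pitch from Section 5, metres
effective_pixel = 2 * pixel    # assuming one 2x2 unit cell per effective pixel
target_spot = 2 * effective_pixel

f_number = target_spot / (2.44 * wavelength)   # required working F#
```

With these assumed numbers the relay must work at roughly F/23; a faster (lower F#) system would pass spatial frequencies above the pixel spacing and alias.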
Fig. 3. Twyman-Green implementation of a dynamic interferometer
4.2 Fizeau
The pixelated phase-mask can also be combined with a Fizeau-type interferometer employing either on-axis [5] or off-axis beams (shown in Figure 4). The on-axis configuration achieves very high uncalibrated accuracy due to the true common-path arrangement, but requires the additional step of path matching during alignment. The off-axis arrangement is simple to use but requires careful design of the optical imaging system in order to mitigate off-axis aberrations. We have built and demonstrated both types of systems.
Fig. 4. Fizeau implementation of the dynamic interferometer. The path matching module can be used with either an on- or off-axis configuration.
5 Measurement Results
We constructed a pixelated phase-mask sensor using a planar deposition technique. The pixel pitch of the mask and CCD was 9 microns, and the array was 1000 x 1000 pixels. The pixelated phase-mask was bonded directly in front of the CCD array.
Fig. 5. Measurements made with the TG interferometer. The checked pattern is a magnified grayscale image showing 24 x 17 pixels. The fringe pattern is synthesized by selecting every fourth pixel. The sawtooth map is generated with a 3x3 convolution phase algorithm.
Fig. 5 shows data measured with the pixelated phase-mask sensor configured as a Twyman-Green interferometer. A flat mirror was used as the test object. The angle between the mirrors was adjusted to give several fringes of tilt. The magnified image shows an area of 24 x 17 pixels from the CCD array. The grayscale of the image corresponds to the measured intensity at each pixel. The high contrast between adjacent pixels demonstrates the ability to accomplish discrete spatial phase shifting at the pixel level. Every 4th pixel was combined to generate a continuous fringe map or interferogram. A wrapped fringe map was calculated using the 3x3 convolution approach. The resulting sawtooth map, shown in Figure 5, had a total of 974 x 980 pixels, just under the actual CCD dimensions. We measured good fringe contrast with up to 170 fringes of tilt in each direction before the onset of unwrapping errors.

Figure 6 shows measurements of a mirror having a 2 meter radius of curvature using the TG interferometer, and of a 400 mm diameter mirror using a large aperture Fizeau interferometer. The mirror and interferometer were located on separate tables for the TG measurement, and a spider was introduced into the cavity for the Fizeau measurement to demonstrate the ability of the technique to successfully process high spatial frequency content without edge distortion or ringing.
Fig. 6. Measurement of a mirror with a 2 meter radius of curvature using the TG interferometer located on a separate table, and measurement of a flat mirror (400 mm dia.) using a large aperture Fizeau interferometer. Exposures were made in under 60 microseconds.
We performed a series of measurements to determine the instrument repeatability. Ten measurements were made of a test mirror, each measurement consisting of 16 averages. The results of the study are shown in Table 1. The uncalibrated accuracy, defined as the pixel-wise average of all 160 measurements, was limited mainly by the polarization beamsplitter. Precision, defined as the average deviation of each measurement subtracted from the calibrated surface on a pixel-by-pixel basis, was below 1 milliwave rms. Repeatability, defined as the standard deviation of the 10 measurements, was below 1/10th of a milliwave rms.
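The precision and repeatability figures can be illustrated on a synthetic stack of repeated measurements; the one-line definitions below are simplified approximations of the authors' procedure, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack: 10 measurements (in waves) of one test surface on a
# small grid; each measurement carries independent residual noise.
true_surface = 0.003 * rng.standard_normal((32, 32))
stack = true_surface + 0.0007 * rng.standard_normal((10, 32, 32))

calibrated = stack.mean(axis=0)   # pixel-wise average of all measurements

# Precision: average rms deviation of each measurement from the
# calibrated surface; repeatability: per-pixel std over the 10 runs.
precision = np.mean([np.sqrt(np.mean((m - calibrated) ** 2)) for m in stack])
repeatability = np.mean(stack.std(axis=0, ddof=1))
```

With 0.0007 waves of injected noise, both statistics come out below 1 milliwave rms, in the spirit of Table 1.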
Table 1. Measured performance for the pixelated phasemask interferometer using a flat reference.
Uncalibrated accuracy: 0.0039 waves rms
Precision: 0.0007 waves rms
Repeatability: 0.00008 waves rms
6 Summary
We have demonstrated a new type of dynamic measurement system that is based on a micropolarizer array and can work with any type of polarization interferometer to measure a variety of physical properties. The unique configuration overcomes many of the limitations of previous single-frame, phase-shifting interferometer techniques. In particular, it has a true common-path arrangement, is extremely compact, and is achromatic over a very wide range. We demonstrated high quality measurements with both Twyman-Green and Fizeau type interferometers. The technique is useful for many applications where vibration or motion is intrinsic to the process.
7 References
1. Kothiyal, P.; Delisle, R., "Shearing interferometer for phase shifting interferometry with polarization phase shifter," Applied Optics, Vol. 24, No. 24, pp. 4439-4442, 1985
2. Nordin, et al., "Micropolarizer array for infrared imaging polarimetry," J. Opt. Soc. Am. A, Vol. 16, No. 5, 1999
3. See, for example, U.S. Patent No. 6,108,131
4. Helen, et al., "Achromatic phase-shifting by a rotating polarizer," Optics Communications 154, pp. 249-254, 1998
5. See, for example, Malacara, et al., Interferogram Analysis for Optical Testing, Marcel Dekker, Inc., New York, 1998
6. U.S. Patent 4,872,755, October 1989
Tomographic mapping of airborne sound fields by TV-holography K. D. Hinsch, H. Joost, G. Gülker Applied Optics, Institute of Physics, Carl von Ossietzky University D-26111 Oldenburg Germany
1 Introduction
Optical detection of sound utilizes the pressure-induced change in the refractive index n. Phase-sensitive techniques measure the resulting modulation of the optical path at the sound frequency. Thus, any method that responds to the phase modulation of light scattered from a vibrating surface can also be applied to the sensing of sound. Since the pressure fluctuations in airborne sound, however, are extremely small, only an interferometric method of high sensitivity deserves consideration. Time-averaging TV-holography, or Electronic Speckle Pattern Interferometry (ESPI), with sinusoidal reference wave modulation and phase shifting is usually used for vibration studies with amplitudes in the range of only a few nanometers.

In the present study we use this technique for an acoustic challenge that requires mapping a three-dimensional sound field with high spatial resolution. The recordings represent a two-dimensional projection of the refractive index modulation of the sound field, integrated along the viewing direction. The three-dimensional field is obtained from many such projections through the sound field at different viewing angles in a tomographic setup. Inversion by filtered backprojection yields the three-dimensional sound amplitude and phase. These data have been used to optimize the sound field of a parametric acoustic array.

Parametric acoustic arrays are built to generate highly directional audio sound by nonlinear interaction of two ultrasonic waves differing in frequency by the audio frequency to be generated. Both waves are made to overlap in the air volume in front of the sound transducer and create the difference frequency by parametric mixing [1]. Since the wavelength of the ultrasound is smaller by one or two orders of magnitude than the dimensions of its sound source, it can be radiated with high directionality.
Now the angular diagram of the audio sound radiation is also very narrow, because it is governed by the length of the interaction volume. Due to the low efficiency of the nonlinear process, high-level ultrasound of more than 110 dB is needed. Arrays for applications at high audio sound pressure use piezoelectric transducers of PZT. Since the individual transducer elements are very small (

1 µm is achieved directly on the component during the manufacturing process. Up to now, in-situ detection of oil-based contamination films for quality control of surface cleanliness could not be carried out directly on the component during a running manufacturing process. Thus, the developed testing method creates new possibilities for industrial application: to increase product quality by efficient quality control, to reduce defects and the costs arising from them, and consequently to guarantee a more economic production of cleaning-sensitive components.
4 Summary
Because of the flexibility of this testing method with respect to both point-wise and areal measurements, as well as its low apparatus requirements, the testing method described in the present paper offers the possibility to considerably reduce the reject rate of cleaning-sensitive manufacturing in metal working by in-situ detection of oil-based contamination films. The developed in-situ detection represents an efficient and economic method for reproducible testing of surface cleanliness, especially in metal-cutting, which, due to its partly rough and changing surfaces, has not been possible up to now. In manufacturing processes in which the quantifiability of contamination films within defined limits is of interest for subsequent processes, radiation sources have to be used whose wavelength lies within the range of higher absorption coefficients.
Chromatic Confocal Spectral Interferometry - (CCSI) Evangelos Papastathopoulos, Klaus Körner and Wolfgang Osten ITO – Institut für Technische Optik Pfaffenwaldring 9 70569 Stuttgart Germany
1 Introduction In recent years, several methods have been proposed in order to characterize the geometry of complex surfaces. Due to their enhanced depth and lateral resolution the optical techniques of Confocal Microscopy (CM) and White-Light-Interferometry (WLI) have been established as standard methods for investigating the topology of various microscopical structures. In CM the depth information, necessary to construct a 3D-image, is obtained by selectively collecting the light emerging from a well defined focal plane, while in WLI the same information is rather obtained by analyzing the cross-correlation pattern created during the optical interference between the low-coherence light-field reflected from the sample and a reference field. Both techniques are based on sequential recording of the depth information, experimentally realized by mechanically scanning the distance between the investigated object and the microscope’s objective. Nevertheless, simultaneous acquisition of the entire depth information is possible using the so-called Focus-Wavelength-Encoding [1]. Here, a dispersive element is combined with the objective lens of the microscope, to induce a variation of the focal position, depending on the illumination wavelength (Chromatic Splitting). Finally, the light reflected from the sample is spectrally-analyzed to deliver the depth information. In WLI the spectrally-resolved measurement [2-6] results in an oscillatory waveform (Spectral Interference-SI) where the periodicity encloses the depth information. By use of these chromatic concepts, mechanical scanning is no longer necessary, the measurement is rather performed in a so-called “single-shot” manner. On the other hand, the method used to acquire the depth information does not effect the properties of the lateral image. In CM and
New Optical Sensors and Measurement Systems
695
WLI, focusing with a higher Numerical Aperture (NA) increases both the lateral resolution and the light-collection efficiency of the detection.
Fig. 1. (a-d) Simulated spectral waveforms arising from the optical interference between two identical fields with a Gaussian spectral profile, focused with various numerical apertures. (e) Combined schematic representation of the focused fields. After reflection from the sample and reference mirrors, the two fields are recombined and brought to optical interference (not shown here).
In the present communication, we theoretically address the hybrid technique of Chromatic Confocal Spectral Interferometry (CCSI). As shown in the following, the waveform acquired by SI undergoes a severe loss of contrast when high-NA focusing is employed. Combining the technique of SI with the chromatic concept allows for an effective compensation of this deficiency while a large dynamic range is retained for the topological measurement. Additionally, confocal filtering of the light emerging from the sample allows for an effective suppression of background signals, often encountered in WLI measurements of objects with a high degree of volume scattering (biological samples, thick polymer probes, etc.).
2 Spectral Interference at High Numerical Apertures
We assume two broad-band fields originating from the sample and reference arms of a Linik WLI microscope and suppose that both fields have identical Gaussian spectra and equal amplitudes (optimal contrast conditions). After reflection from the sample and reference mirrors the two fields are recombined and their optical interference is observed with a spectrometer. To simulate the emerging interference pattern we assume the
focused geometries depicted in Fig. 1e. The interference contribution for a single ray bundle through the optical system is given by [7]:

dI(\theta, k, z) = 2\sqrt{R_1 R_2}\,\cos(2kz\cos\theta + \varphi)\,\sin\theta\,\cos\theta\,d\theta \qquad (1)

where θ is the incidence angle of the ray bundle with respect to the axis of the optical system, k is the wavenumber of the light field, z the displacement of the sample with respect to the reference arm, R1 and R2 the reflectivities of the sample and reference mirrors respectively, and φ the relative phase between the sample and reference fields acquired during their propagation and reflection. The total interference signal recorded with the spectrometer is given by the integral of the ray-bundle contributions dI(θ, k, z) over the whole range of incidence angles:

I(k, z) = V(k) \int_0^{\theta_{max}} dI(\theta, k, z) \qquad (2)
where θmax is the maximum incidence angle, defined by the numerical aperture, and V(k) is the optical spectrum of the interfering fields. For a fixed sample position z = 4 µm and assuming equal reflectivities R1 = R2, we evaluated the integral in Eq. 2 numerically. The results are summarized in Fig. 1 (a-d) for various NA. At the relatively low NA = 0.1 (Fig. 1a) the interferometric signal exhibits a pronounced oscillatory behaviour. The frequency of this spectral modulation scales linearly with the displacement z, as readily seen in Eq. 1. Under these focusing conditions the paraxial approximation holds and the cos(θ) term can be approximated by unity. However, for higher NA this approximation fails, giving rise to a periodicity that is a function of the incidence angle θ. Consequently, after integrating over θ (Eq. 2), the contrast of the spectral interference is reduced, to such an extent that at NA = 0.7 the modulation is hard to analyze (Fig. 1c, 1d). The reduced spectral modulation creates the need for an alternative interferometric scheme, which is the subject of the following section.
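The contrast collapse described above can be reproduced with a short numerical sketch. Since the original display equations were lost in extraction, the explicit ray-bundle integrand and the Gaussian-spectrum parameters below (center 750 nm, illustrative width) are assumptions rather than the authors' exact values:

```python
import numpy as np

def spectral_interferogram(z, na, lam0=750e-9, half=150e-9, n=500):
    """Spectrally resolved interference signal in the spirit of Eq. 2.

    Integrates the single-ray-bundle term of Eq. 1 over all incidence
    angles up to theta_max = arcsin(NA).  Gaussian spectrum, equal
    reflectivities and zero extra phase are illustrative assumptions.
    """
    lam = np.linspace(lam0 - half, lam0 + half, n)   # wavelength grid
    k = 2.0 * np.pi / lam                            # wavenumbers
    theta = np.linspace(0.0, np.arcsin(na), n)
    w = np.sin(theta) * np.cos(theta)                # ray-bundle weight
    # Eq. 1 integrand, summed over theta (simple Riemann sum)
    fringe = (np.cos(2.0 * k[:, None] * z * np.cos(theta)) * w).sum(1)
    fringe /= w.sum()                                # normalise contrast
    v = np.exp(-((lam - lam0) / 60e-9) ** 2)         # spectrum V(k)
    return lam, v * fringe

# Contrast of the spectral modulation collapses as the NA grows:
for na in (0.1, 0.4, 0.7):
    _, sig = spectral_interferogram(4e-6, na)
    print(f"NA = {na:.1f}  peak-to-peak = {sig.max() - sig.min():.3f}")
```

At NA = 0.1 the peak-to-peak modulation is close to its maximum, while at NA = 0.7 it is strongly reduced, in line with Fig. 1 (a-d).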
3 Chromatic-Confocal Filtering
To acquire the interference patterns in Fig. 1 (a-d) we assumed a constant displacement z of the sample with respect to the reference field. However, the amplitude of modulation depends both on the NA employed and on
the displacement z. This effect is illustrated in Fig. 2, where the modulation depth is plotted as a function of z over the range 1-10 µm.
Fig. 2. Simulation of the modulation depth observed with spectral interferometry under various focusing conditions. The results are presented as a function of the displacement of the sample with respect to the reference field. The trend of the curves shown here resembles the depth-of-focus function reported for CM and WLI.
These results follow from the numerical evaluation of the interference using Eq. 2 for different displacements z and various focusing conditions. At NA = 0.1 the modulation depth hardly depends on the displacement z (less than 5% reduction over the 1-10 µm range). However, under sharp focusing conditions (higher NA) the dependence becomes more pronounced. At NA = 0.9 (Fig. 2, solid line), the amplitude of the interference drops by almost 90% within the first 500 nm. The plots in Fig. 2 resemble the depth-of-focus functions reported for CM and WLI [8]. Despite the loss of modulation at high NA, the interference always exhibits a maximum when the displacement z approaches 0. For z = 0 the interferometric scheme assumed here is perfectly symmetric and the optical interference is complete. This effect is exploited in CCSI. The basic idea behind this concept is to introduce a (chromatic) wavelength dependence of the focal plane in the sample arm of the interferometer, so that over a wide range of z a part of the broad light spectrum always interferes at equal optical paths with the reference. A possible experimental realization of this concept is schematically depicted in Fig. 3. The basis of the set-up is a standard Linik-type interferometer. To introduce the chromatic dependence of the focal position, a focusing Diffractive Optical Element (DOE) is added at the back Fourier plane of the objective lens. Insertion of the DOE results in a linear dependence of the focal position on the wave-number of the illumination field, i.e. the focal length for the "blue" part of the spectrum is larger than that for the "red" part.
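The modulation-depth curves of Fig. 2 can be traced with a companion sketch: the same assumed ray-bundle integrand is evaluated for a range of displacements z at fixed NA (spectrum parameters remain illustrative assumptions):

```python
import numpy as np

def modulation_depth(z, na, lam0=750e-9, half=150e-9, n=400):
    """Peak-to-peak spectral modulation at displacement z (cf. Fig. 2).

    Uses the same assumed Eq. 1/2 integrand as in the text; the
    Gaussian-spectrum parameters are illustrative.
    """
    lam = np.linspace(lam0 - half, lam0 + half, n)
    k = 2.0 * np.pi / lam
    theta = np.linspace(0.0, np.arcsin(na), n)
    w = np.sin(theta) * np.cos(theta)
    fringe = (np.cos(2.0 * k[:, None] * z * np.cos(theta)) * w).sum(1) / w.sum()
    v = np.exp(-((lam - lam0) / 60e-9) ** 2)
    sig = v * fringe
    return sig.max() - sig.min()

# At NA = 0.1 the depth barely changes over 1-10 um; at NA = 0.9 it
# collapses within a fraction of a micrometre:
for na in (0.1, 0.9):
    depths = [round(modulation_depth(z * 1e-6, na), 3) for z in (0.5, 1, 5, 10)]
    print(f"NA = {na:.1f}: {depths}")
```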
Fig. 3. Schematic representation of the modified Linik-Interferometer used for monitoring spectral interference. A diffractive Optical Element, located at the back Fourier-plane of one objective lens separates the focal positions of the different spectral components. To compensate for the group velocity mismatch of the two fields, the reference field propagates through a dispersive material of variable thickness. The recombined fields are focused on the entrance pinhole of the spectrometer and the interference is recorded by a CCD camera.
Accounting for the combined operation of the DOE with the objective lens, the focal position can be summarized by the expression

z_f(k) = A\,(k - k_0) \qquad (3)

where k0 is the wave-number corresponding to the center of the optical spectrum and A is a measure of the chromatic splitting. The interference component dI(θ, k, z) then becomes

dI'(\theta, k, z) = 2\sqrt{R_1 R_2}\,\cos\!\big(2k\,[z - A(k - k_0)]\cos\theta + \varphi\big)\,\sin\theta\,\cos\theta\,d\theta \qquad (4)

To derive the above expression, we assumed that the focal length for k0 is the same as that of the reference field, which is assumed to be achromatic.
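The effect of the chromatic coordinate in Eq. 4 can be illustrated numerically. As before, the explicit integrand and the Gaussian spectrum are assumptions for illustration; z = 4 µm, A = 7 µm² and NA = 0.7 follow the values used in the text:

```python
import numpy as np

def ccsi_interferogram(z=4e-6, a=7e-12, na=0.7, lam0=750e-9,
                       half=150e-9, n=600):
    """Spectral interferogram after inserting the DOE (cf. Eq. 4).

    The displacement z is replaced by the chromatic coordinate
    z - A*(k - k0), so a high-contrast wavelet survives around the
    wavelength where z = A*(k - k0), even at NA = 0.7.
    """
    lam = np.linspace(lam0 - half, lam0 + half, n)
    k = 2.0 * np.pi / lam
    k0 = 2.0 * np.pi / lam0
    theta = np.linspace(0.0, np.arcsin(na), n)
    w = np.sin(theta) * np.cos(theta)
    z_eff = z - a * (k - k0)                       # chromatic coordinate
    fringe = (np.cos(2.0 * k[:, None] * z_eff[:, None] * np.cos(theta))
              * w).sum(1) / w.sum()
    v = np.exp(-((lam - lam0) / 60e-9) ** 2)
    return lam, v * fringe

lam, with_doe = ccsi_interferogram()
_, no_doe = ccsi_interferogram(a=0.0)        # same NA, DOE removed
print("pp with DOE   :", round(with_doe.max() - with_doe.min(), 3))
print("pp without DOE:", round(no_doe.max() - no_doe.min(), 3))
```

With the DOE a narrow high-contrast wavelet appears near the stationary wavelength; with A = 0 the NA = 0.7 washout of Section 2 returns.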
Fig. 4. a) Simulated interference pattern following the insertion of the DOE. The chromatic splitting of the light field reflected by the sample induces a high-contrast modulation in the vicinity of z = A(k − k0). b) Due to the spatial filtering by the spectrometer pinhole, a confocal spectral filter (dashed line) is imposed on the interference signal (solid line).
Using the same field parameters as in Fig. 1 (a-d) we calculated the spectral interference by integrating dI´(θ, k, z) as in Eq. 2. The spectral interferogram acquired for z = 4 µm, A = 7 µm² and NA = 0.7 is depicted in Fig. 4a. A fast-oscillating wavelet is seen in the vicinity of 750 nm. The amplitude of this modulation is maximum when z = A(k − k0), with a contrast practically equal to unity. It has to be noted that not only the position of the spectral interference but also its periodicity encodes the information on the position z. This allows for an accurate measurement of z based both on the envelope of the modulation and on the spectral phase underlying the interferogram. The width of the wavelet in Fig. 4a is determined by the NA employed. At high NA the amplitude of the spectral interference drops faster (Fig. 2) and the wavelet becomes narrower. In Fig. 4a the interference pattern is confined within a spectral window from about 700 nm to 800 nm. Beyond this region the observed waveform originates from the non-interfering Gaussian light spectrum of the individual fields. Usually the entrance of a spectrometer comprises a pinhole or slit (Fig. 3), the opening of which significantly affects the resolution of the instrument. Focusing the two interfering fields onto the pinhole incorporates a spatial confocal filtering. This imposes a modification of the interference, since only the frequency components of the chromatically analyzed spectrum that are sharply focused propagate through the pinhole and contribute to the interference. This effect is included in the calculation by replacing the reflectivity term R2 by:
R_2 \rightarrow R_2 \left[\frac{\sin(u/2)}{u/2}\right]^2, \qquad u = 4k\,[z - A(k - k_0)]\,\sin^2(\theta_{max}/2) \qquad (5)

The added term resembles the confocal depth-response function, except that the axial position has been replaced by the chromatic coordinate z − A(k − k0). The dashed line in Fig. 4b represents the resulting confocal spectral filter, while the interference signal is depicted as a solid line. The confocal filtering does not affect the spectral contribution from the reference field, since no chromatic dispersion is involved, i.e. all spectral components are equally focused and propagate through the pinhole. The confocal filtering of the sample field is particularly advantageous for reducing the background signal in measurements where a high degree of volume scattering is involved (thick polymer probes, biological samples, etc.). As previously mentioned, in order to accomplish a high-contrast spectral modulation, the optical paths of the sample and reference fields must be approximately equal. This incorporates the requirement that the optical interference take place within the coherence length of the employed light field. However, the chromatic dispersion from the DOE induces a group-velocity mismatch between the various components of the light spectrum. Upon reflection from the sample, the "blue" part of the spectrum propagates a longer optical path than the "red" part before recombining with the reference. According to Eq. 3, the group delay of the sample field at the recombiner is a linear function of k. This resembles the effect of Group-Velocity Dispersion (GVD) for propagation within dispersive materials, where for a given geometrical distance the optical path also exhibits a linear dependence on k. Therefore, the group delays of the interfering fields can be matched by simply including a dispersive element in the reference arm of the interferometer, indicated as the GVD compensator in Fig. 3.
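The confocal spectral filter of Fig. 4b can be sketched as follows. Since the exact functional form of Eq. 5 was lost in extraction, the sinc²-shaped response below is an assumption modelled on the classical confocal depth response [8], with the axial coordinate replaced by the chromatic one:

```python
import numpy as np

def confocal_filter(lam, z=4e-6, a=7e-12, na=0.7, lam0=750e-9):
    """Assumed confocal spectral filter (cf. Eq. 5 and Fig. 4b).

    Models the pinhole transmission as [sin(u/2) / (u/2)]**2 with the
    axial coordinate replaced by the chromatic coordinate z - A*(k - k0).
    """
    k = 2.0 * np.pi / lam
    k0 = 2.0 * np.pi / lam0
    z_eff = z - a * (k - k0)
    u = 4.0 * k * z_eff * np.sin(np.arcsin(na) / 2.0) ** 2
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(u/(2*pi)) = sin(u/2)/(u/2)
    return np.sinc(u / (2.0 * np.pi)) ** 2

lam = np.linspace(650e-9, 850e-9, 2001)
f = confocal_filter(lam)
# The filter peaks at the wavelength where z = A*(k - k0), i.e. where
# the chromatically shifted focus lies exactly on the pinhole.
print("filter maximum near", round(lam[f.argmax()] * 1e9, 1), "nm")
```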
With this configuration, the optical path difference can be compensated without readjusting the length of the reference arm (no scanning is required).
4 Conclusion
In the present communication, we addressed the hybrid technique of Chromatic Confocal Spectral Interferometry (CCSI). A number of recent developments have proved the feasibility of encoding the depth information of topological measurements into the spectrum of broad-bandwidth, low-coherence light sources. The loss of contrast arising in SI measurements when a high NA is employed was discussed, along with how this deficiency is
lifted by incorporating a diffractive focusing element (DOE) into a typical Linik interference microscope. By means of numerical simulations, a qualitative description of the emerging chromatic spectral interference was also presented. On the basis of these simulations, a number of issues were raised concerning the confocal filtering of the light reflected from the sample and the compensation of the group-velocity mismatch induced by the DOE. The functional proposals presented in this discussion aim to contribute towards the development of so-called "single-shot" metrology suitable for the dynamic topology characterization of micro-structured surfaces.
5 References
1. Akinyemi, O, Boyde, A, Browne, MA, (1992) Chromatism and confocality in confocal microscopes. Scanning 14:136-143
2. Mehta, DS, Sugai, M, Hinosugi, H, Saito, S, Takeda, M, Kurokawa, T, Takahashi, H, Ando, M, Shishido, M, Yoshizawa, T, (2002) Simultaneous three-dimensional step-height measurement and high-resolution tomographic imaging with a spectral interferometric microscope. Appl. Opt. 41:3874-3885
3. Calatroni, J, Guerrero, AJ, Sainz, C, Escalona, R, (1996) Spectrally-resolved white-light interferometry as a profilometry tool. Opt. & Laser Tech. 28:485-489
4. Sandoz, P, Tribillon, G, Perrin, H, (1996) High-resolution profilometry by using phase calculation algorithms for spectroscopic analysis of white-light interferograms. J. Mod. Opt. 43:701-708
5. Li, G, Sun, PC, Lin, C, Fainman, Y, (2000) Interference microscopy for three-dimensional imaging with wavelength-to-depth encoding. Opt. Lett. 25:1505-1507
6. Pavlícek, P, Häusler, G, (2005) White-light interferometer with dispersion: an accurate fiber-optic sensor for the measurement of distance. Appl. Opt. 44:2978-2983
7. Kino, GS, Chim, SSC, (1990) Mirau correlation microscope. Appl. Opt. 29:3775-3783
8. Corle, TR, Kino, GS, (1996) Confocal scanning optical microscopy and related imaging systems. Academic Press, San Diego
A simple and efficient optical 3D-Sensor based on "Photometric Stereo" ("UV-Laser Therapy")
F. Wolfsgruber1, C. Rühl1, J. Kaminski1, L. Kraus1, G. Häusler1, R. Lampalzer2, E.-B. Häußler3, P. Kaudewitz3, F. Klämpfl4, A. Görtler5
1 Max Planck Research Group, Institute of Optics, Information and Photonics, University of Erlangen-Nuremberg; 2 3D-Shape GmbH, Erlangen; 3 Dermatologische Klinik und Poliklinik der Ludwig-Maximilians-Universität München; 4 Bayerisches Laserzentrum gGmbH, Erlangen; 5 TuiLaser AG, München
1 Introduction
We report on the present state of our research project "UV Laser Therapy". Its objective is the precise, sensor-controlled treatment of skin lesions such as dermatitis and psoriasis, using high-power (UV) excimer laser radiation. We present an optical 3D sensor that measures the lesion areas. The acquired 2D and 3D information is used to control the laser scanner for the reliable exposure of the lesion areas. The medical and commercial requirements for the sensor and the algorithms are high reliability, accurate and fast identification of the lesions and, last but not least, low cost. These requirements can be satisfied by a sensor based on "Photometric Stereo".
2 The Treatment System
The treatment system (see Fig. 1) consists of the 3D sensor, the laser scanner, and the UV laser. The 3D sensor measures the skin of a patient. The output of the sensor is the slope of the surface in each pixel. This data is used to control the laser to apply the correct radiation dose onto the skin. The slope and the additionally acquired 2D color images are used to automatically detect the diseased skin regions. The scanner directs the beam exclusively over the identified regions. Thus, the exposure dose on healthy skin is reduced to a minimum.
Fig. 1. Sketch of the complete treatment system
The laser (made by TuiLaser, Munich) allows high radiation doses to be applied to the skin, so that the number of treatments can be reduced in comparison to conventional light therapy. The whole therapy is more comfortable for the patient and reduces the time required of the physician.
3 Photometric Stereo
Photometric Stereo [1] is a simple and fast principle with high information efficiency [2]. It measures the surface slope precisely and allows surface anomalies to be detected with high sensitivity. The object is illuminated from four different directions (see Fig. 2). A CCD camera grabs an intensity image for each illumination direction. Based on the different shadings in each image, the surface normal can be computed for each camera pixel:
\mathbf{n}(x, y) = \frac{1}{\rho}\,(S^{T} S)^{-1} S^{T} E(x, y) \qquad (1)

n = surface normal, ρ = local reflectance, S = illumination matrix, E = irradiance vector
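The equation above is a per-pixel least-squares solve, which can be sketched directly. The helper name `surface_normals` and the synthetic flat-plane test scene are illustrative, not from the original:

```python
import numpy as np

def surface_normals(images, directions):
    """Recover per-pixel surface normals by Photometric Stereo (Eq. 1).

    images:     stack of m irradiance images, shape (m, h, w)
    directions: m unit illumination vectors, shape (m, 3) -> matrix S
    Solves rho * n = (S^T S)^-1 S^T E for every pixel at once.
    """
    m, h, w = images.shape
    e = images.reshape(m, -1)                           # irradiance vectors E
    g, *_ = np.linalg.lstsq(directions, e, rcond=None)  # g = rho * n
    rho = np.linalg.norm(g, axis=0)                     # local reflectance
    n = g / np.maximum(rho, 1e-12)                      # unit normals
    return n.T.reshape(h, w, 3), rho.reshape(h, w)

# Synthetic check: a flat Lambertian plane tilted towards +x,
# lit from four directions as in the sensor described above.
s = np.array([[0.5, 0.0, 0.866], [-0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866], [0.0, -0.5, 0.866]])
true_n = np.array([0.3, 0.0, 0.954])
imgs = (s @ true_n)[:, None, None] * np.ones((4, 8, 8)) * 0.7  # rho = 0.7
n, rho = surface_normals(imgs, s)
print(np.round(n[0, 0], 3), round(float(rho[0, 0]), 3))
```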
Fig. 2. Principle of Photometric Stereo
Fig. 3. Left: camera image; Right: intensity-encoded slope image
A complete measurement (3D data and RGB image) takes about 0.7 s. Fig. 3 displays a measurement example of a patient's knee with a psoriasis lesion. The change of the surface structure caused by the lesion is clearly observable in the slope image. In addition, we need accurate slope data to control the power of the laser. High accuracy is difficult to achieve because the method requires precise knowledge of the illumination parameters (direction and power). Our approach to obtaining these parameters consists of a new calibration procedure utilising a set of calibration gauges and of new evaluation algorithms [3]. The result of the calibration is displayed in Fig. 4.
Fig. 4. Result of the calibration: The measured slope of a tilted plane is more accurate with the additional direction calibration
4 Automatic Detection of the Lesions
The automatic detection of diseased skin is a segmentation problem. We have to distinguish healthy skin from psoriasis lesions and from the background. The background is segmented with a simple empirical rule [4]. The segmentation of the skin itself is more difficult because of the different manifestations of psoriasis. Additionally, the appearance of the psoriasis varies during the therapy. We achieve the best results by using a k-means clustering algorithm [5]. In a last step, remaining holes inside a lesion are closed automatically.
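The clustering step can be sketched with a minimal k-means (Lloyd's algorithm). The feature space (raw RGB), k = 3 and the farthest-point seeding are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

def kmeans_segment(pixels, k=3, iters=20):
    """Minimal k-means clustering for the pixel-segmentation step.

    Splits pixels into k clusters (e.g. lesion, healthy skin,
    background).  Farthest-point seeding keeps the sketch deterministic.
    """
    centers = [pixels[0].astype(float)]
    for _ in range(k - 1):                       # farthest-point seeding
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers],
                   axis=0)
        centers.append(pixels[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):                       # Lloyd iterations
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)                # nearest-center assignment
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Three well-separated synthetic colour clusters stand in for lesion,
# healthy skin and background pixels.
rng = np.random.default_rng(1)
blob = lambda c: np.asarray(c, float) + rng.normal(0.0, 2.0, (50, 3))
px = np.vstack([blob([200, 60, 60]), blob([220, 180, 160]), blob([30, 30, 30])])
labels, _ = kmeans_segment(px)
print(sorted(np.bincount(labels).tolist()))  # prints [50, 50, 50]
```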
Fig. 5. Left: region of interest of the camera image from Fig. 3; Right: the white line displays the result of the discrimination process
The reliability of this method depends on the manifestation of the psoriasis. In some cases (see Fig. 5), healthy skin is detected as part of the lesion (or the other way round). To overcome this problem, we are investigating additional methods that analyze the local surface structure of the skin. Untreated lesions have a significant surface texture which can be detected by local frequency-analysis methods such as Gabor filters. First experiments show promising results.
5 Conclusions
We presented an improved sensor setup based on Photometric Stereo that satisfies all the requirements (speed and accuracy) for "UV-Laser Therapy". The present study shows that the discrimination between healthy skin and skin lesions will be reliable in many cases. With a combination of 2D and 3D methods, we expect to obtain a reliable procedure for the automatic treatment of patients.
6 Acknowledgements
This work is supported by the "Bayerische Forschungsstiftung".
7 References
1. B. K. P. Horn, M. J. Brooks, Shape from Shading, MIT Press (1989)
2. C. Wagner and G. Häusler, Information theoretical optimization for optical range sensors. Applied Optics 42(27): 5418-5426 (2003)
3. C. Rühl, Optimierung von Photometrischem Stereo für die 3D-Formvermessung, Diploma Thesis, University of Erlangen (2005)
4. G. Gomez, E. Morales, Automatic Feature Construction and a Simple Rule Induction Algorithm for Skin Detection, Proceedings of the ICML Workshop on Machine Learning in Computer Vision: 31-38 (2002)
5. R. Cucchiara, C. Grana and M. Piccardi, Iterative fuzzy clustering for detecting regions of interesting skin lesions, Atti del Workshop su Intelligenza Artificiale, Visione e Pattern Recognition (in conjunction with AI*IA 2001): 31-38 (2001)
6. Asawanonda Pravit, Anderson R. Rox, Yuchiao Chang, Taylor Charles R., 308 nm Excimer Laser for Treatment of Psoriasis, Arch. Dermatol. 136: 619-624 (2000)
APPENDIX New Products
YOUR PARTNER IN 3D MEASUREMENT
Portable 3D measurement device for industrial and medical applications
Your advantages:
- Mobile hand-held 3D measurement device
- No tripod necessary
- Compact and flexible
- Very easy to use
- Available for industrial measurements and life sciences
Key features:
- 3D measurement within milliseconds
- FireWire COLOR camera
- Different measurement fields available
- Used with a laptop computer
- Complete software solution for different applications
- Database
CONTACT GFMesstechnik GmbH
Warthestr. 21, 14513 Teltow / Berlin Tel.: +49 (0) 3328-316760 Fax: +49 (0) 3328-305188 Web: www.gfmesstechnik.com E-Mail:
[email protected]
3-D-Camera
The VEW 3-D-Camera is a miniaturized fringe projection device for robust surface topometry. Due to its small size of 20 x 22 x 14 cm and an opening angle larger than 90°, the system can be optimally utilized under difficult measurement conditions with limited space and/or a small distance to the object's surface. Because of the large opening angle, the measurement field size reaches 1 x 1 m at a distance of 1 m with a measurement resolution of 0.1 mm. The allowed measurement distance ranges from 20 cm up to 4 m. The integrated high-power light source illuminates a measurement area of 2 x 2 m, rich in contrast, even under difficult environmental conditions. The measurement process is fully automated and delivers accurate X-, Y-, Z-coordinates.
Scope of delivery: 3D-Camera, tripod, system case, measurement and evaluation software, computer (optional)
Technical facts:
Housing: warp-resistant aluminum profile package, size 200 x 220 x 140 mm, with quick fastener (exchangeable adapter)
Measurement field diameter: 200 ... 4000 mm
Projector: 12-bit grays, resolution 1024 x 768 pixels, DVI interface, objective f = 12.5 mm
Camera: 8-bit grays, 1040 x 1392 pixels (opt. 1200 x 1600), IEEE 1394 (FireWire), objective f = 6.5 mm
Illumination: 160 W UHP lamp (switchable hi/lo intensity)
The objectives can be exchanged to adapt the 3D-Camera to special measurement tasks.
VEW Vereinigte Elektronikwerkstätten GmbH, Edisonstraße 19, PoB 330543, 28357 Bremen
Fon: (+49) 0421/271530, Fax: (+49) 0421/273608, E-Mail:
[email protected] DIE ENTWICKLER