Jürgen Valldorf · Wolfgang Gessner (Eds.)
Advanced Microsystems for Automotive Applications 2005
With 353 Figures
Dr. Jürgen Valldorf
VDI/VDE Innovation + Technik GmbH
Rheinstraße 10B
D-14513 Teltow
[email protected]

Wolfgang Gessner
VDI/VDE Innovation + Technik GmbH
Rheinstraße 10B
D-14513 Teltow
[email protected]

Library of Congress Control Number: 2005920594
ISBN 3-540-24410-7 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under German Copyright Law.

Springer is a part of Springer Science + Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2005
Printed in The Netherlands

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Jasmin Mehrgan
Cover design: deblik, Berlin
Production: medionet AG, Berlin
Printed on acid-free paper
Preface

Since 1995 the annual international forum on Advanced Microsystems for Automotive Applications (AMAA) has been held in Berlin. The event offers a unique opportunity for microsystems component developers, system suppliers and car manufacturers to present and discuss competing technological approaches to microsystems-based solutions in vehicles. The event's design, character and intention have remained unchanged, although it has matured over the years. The AMAA facilitates technology transfer and co-operation along the automotive value chain. At the same time it serves as an antenna for the newest tendencies and most advanced industrial developments in microsystems for automotive applications. The book accompanying the event has proven to be an efficient instrument for the diffusion of new concepts and technology results. The present volume, comprising the papers of the AMAA 2005, gives an overview of the state of the art and outlines imminent and mid-term R&D perspectives. The 2005 publication reflects, as in the past, the current state of discussions within industry. More than the previous publications, the AMAA 2005 "goes back" to the technological requirements and the developments indispensable for fulfilling market needs. The large share of contributions dealing with sensors as well as "sensor technologies and data fusion" exemplifies this tendency. In this context a paradigm shift can be observed. In the past, development focused predominantly on the detection and processing of single parameters originating from single sensors. Today, the challenge increasingly consists in extracting information about complex situations, involving a series of variables from different sensors, and in evaluating that information. Smart integrated devices using the information derived from the various sensor sources will be able to describe and assess a traffic situation or behaviour much faster and more reliably than a human being could.
Systems integration in an enlarged sense becomes the key issue. Prof. Färber, in his keynote paper, gives an excellent outline of how the understanding of biological systems can help to develop technical systems for cognitive perception and behaviour. We are particularly happy to present contributions from INTERSAFE on intersection safety issues. INTERSAFE is a subproject of the Integrated Project PREVENT on active vehicle safety, funded by the EU Commission.
My explicit thanks go to the authors for their valuable contributions to this publication and to the members of the Honorary and the Steering Committee for their commitment and support. Particular thanks are also addressed to the companies providing the demonstrator vehicles: A.D.C., Aglaia, Audi, Bosch, Continental, DaimlerChrysler, IBEO, University of Ulm and Toyota. I would like to thank the European Commission, the Senate of Berlin and the Ministry of Economics Brandenburg for their financial support through the Innovation Relay Centre Northern Germany, as well as the numerous organisations and individuals supporting the International Forum Advanced Microsystems for Automotive Applications 2005 for their material and immaterial help. Last but not least, I would like to express my sincere thanks to the Innovation Relay Centre team at VDI/VDE-IT, especially Jasmin Mehrgan for preparing this book for publication, and not forgetting Jürgen Valldorf, the conference chairman and project manager of this initiative.

Teltow/Berlin, March 2005
Wolfgang Gessner
Public Financiers

Berlin Senate for Economics and Technology
European Commission
Ministry for Economics Brandenburg
Supporting Organisations

Investitionsbank Berlin (IBB)
State Government of Victoria, Australia
mstnews
ZVEI - Zentralverband Elektrotechnik- und Elektronikindustrie e.V.
Hanser automotive electronic systems
Micronews - The Yole Développement Newsletter
enablingMNT
Co-Organisers

European Council for Automotive R&D (EUCAR)
European Association of Automotive Suppliers (CLEPA)
Advanced Driver Assistance Systems in Europe (ADASE)
Honorary Committee

Domenico Bordone
President and CEO Magneti Marelli S.p.A., Italy
Günter Hertel
Vice President Research and Technology DaimlerChrysler AG, Germany
Rémi Kaiser
Director Technology and Quality Delphi Automotive Systems Europe, France
Gian C. Michellone
President and CEO Centro Ricerche FIAT, Italy
Karl-Thomas Neumann
CEO, Member of the Executive Board Continental Automotive Systems, Germany
Steering Committee

Dr. Giancarlo Alessandretti, Centro Ricerche FIAT, Orbassano, Italy
Alexander Bodensohn, DaimlerChrysler AG, Frankfurt am Main, Germany
Serge Boverie, Siemens VDO Automotive, Toulouse, France
Geoff Callow, Technical & Engineering Consulting, London, UK
Bernhard Fuchsbauer, Audi AG, Ingolstadt, Germany
Wolfgang Gessner, VDI/VDE-IT, Teltow, Germany
Roger Grace, Roger Grace Associates, San Francisco, USA
Henrik Jakobsen, SensoNor A.S., Horten, Norway
Horst Kornemann, Continental Automotive Systems, Frankfurt am Main, Germany
Hannu Laatikainen, VTI Technologies Oy, Vantaa, Finland
Dr. Peter Lidén, AB Volvo, Göteborg, Sweden
Dr. Torsten Mehlhorn, Investitionsbank Berlin, Berlin, Germany
Dr. Roland Müller-Fiedler, Robert Bosch GmbH, Stuttgart, Germany
Paul Mulvanny, QinetiQ Ltd., Farnborough, UK
Dr. Andy Noble, Ricardo Consulting Engineers Ltd., Shoreham-by-Sea, UK
Gloria Pellischek, Clepa, Brussels, Belgium
David B. Rich, Delphi Delco Electronics Systems, Kokomo, USA
Dr. Detlef E. Ricken, Delphi Delco Electronics Europe GmbH, Rüsselsheim, Germany
Jean-Paul Rouet, Johnson Controls, Pontoise, France
Christian Rousseau, Renault S.A., Guyancourt, France
Patric Salomon, 4M2C, Berlin, Germany
Ernst Schmidt, BMW AG, Munich, Germany
John P. Schuster, Motorola Inc., Northbrook Illinois, USA
Bob Sulouff, Analog Devices Inc., Cambridge, USA
Berthold Ulmer, DaimlerChrysler AG, Brussels, Belgium
Egon Vetter, Ceramet Technologies, Melbourne, Australia
Hans-Christian von der Wense, Freescale GmbH, München, Germany
Arnold van Zyl, EUCAR, Brussels, Belgium
Conference chair: Dr. Jürgen Valldorf
VDI/VDE-IT, Teltow, Germany
Table of Contents

Introduction

Biological Aspects in Technical Sensor Systems   3
Prof. Dr. G. Färber, Technical University of Munich

The International Market for Automotive Microsystems, Regional Characteristics and Challenges   23
F. Solzbacher, University of Utah; S. Krüger, VDI/VDE-IT GmbH

Status of the Inertial MEMS-based Sensors in the Automotive   43
J.C. Eloy, Dr. E. Mounier, Dr. P. Roussel, Yole Développement

The Assessment of the Socio-economic Impact of the Introduction of Intelligent Safety Systems in Road Vehicles – Findings of the EU Funded Project SEiSS   49
S. Krüger, J. Abele, C. Kerlen, VDI/VDE-IT; H. Baum, T. Geißler, W. H. Schulz, University of Cologne

Safety

Special Cases of Lane Detection in Construction Areas   61
C. Rotaru, Th. Graf, Volkswagen AG; J. Zhang, University of Hamburg

Development of a Camera-Based Blind Spot Information System   71
L.-P. Becker, A. Debski, D. Degenhardt, M. Hillenkamp, I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation mbH

Predictive Safety Systems – Steps Towards Collision Avoidance and Collision Mitigation   85
P. M. Knoll, B.-J. Schäfer, Robert Bosch GmbH

Datafusion of Two Driver Assistance System Sensors   97
J. Thiem, M. Mühlenberg, Hella KGaA Hueck & Co.

Reducing Uncertainties in Precrash-Sensing with Range Sensor Measurements   115
J. Sans Sangorrin, T. Sohnke, J. Hoetzel, Robert Bosch GmbH

SEE – Sight Effectiveness Enhancement   129
H. Vogel, H. Schlemmer, Carl Zeiss Optronics GmbH

System Monitoring for Lifetime Prediction in Automotive Industry   149
A. Bodensohn, M. Haueis, R. Mäckel, M. Pulvermüller, T. Schreiber, DaimlerChrysler AG

Replacing Radar by an Optical Sensor in Automotive Applications   159
I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation GmbH

System Design of a Situation Adaptive Lane Keeping Support System, the SAFELANE System   169
A. Polychronopoulos, Institute of Communications and Computer Systems; N. Möhler, Fraunhofer Institute for Transportation and Infrastructure Systems; S. Ghosh, Delphi Delco Electronics Europe GmbH; A. Beutner, Volvo Technology Corporation

Intelligent Braking: The Seeing Car Improves Safety on the Road   185
R. Adomat, G. Geduld, M. Schamberger, A.D.C. GmbH; J. Diebold, M. Klug, Continental Automotive Systems

Roadway Detection and Lane Detection using Multilayer Laserscanner   197
K. Dietmayer, N. Kämpchen, University of Ulm; K. Fürstenberg, J. Kibbel, W. Justus, R. Schulz, IBEO Automobile Sensor GmbH

Pedestrian Safety Based on Laserscanner Data   215
K. Fürstenberg, IBEO Automobile Sensor GmbH

Model-Based Digital Implementation of Automotive Grade Gyro for High Stability   227
T. Kvisterøy, N. Hedenstierna, SensoNor AS; G. Andersson, P. Pelin, Imego AB

Next Generation Thermal Infrared Night Vision Systems   243
A. Kormos, C. Hanson, C. Buettner, L-3 Communications Infrared Products

Development of Millimeter-wave Radar for Latest Vehicle Systems   257
K. Nakagawa, M. Mitsumoto, K. Kai, Mitsubishi Electric Corporation

New Inertial Sensor Cluster for Vehicle Dynamic Systems   269
J. Schier, R. Willig, Robert Bosch GmbH

Powertrain

Multiparametric Oil Condition Sensor Based on the Tuning Fork Technology for Automotive Applications   289
A. Buhrdorf, H. Dobrinski, O. Lüdtke, Hella Fahrzeugkomponenten GmbH; J. Bennett, L. Matsiev, M. Uhrich, O. Kolosov, Symyx Technologies Inc.

Automotive Pressure Sensors Based on New Piezoresistive Sense Mechanism   299
C. Ernsberger, CTS Automotive

Multilayer Ceramic Amperometric (MCA) NOx-Sensors Using Titration Principle   311
B. Cramer, B. Schumann, H. Schichlein, S. Thiemann-Handler, T. Ochs, Robert Bosch GmbH

Comfort and HMI

Infrared Carbon Dioxide Sensor and its Applications in Automotive Air-Conditioning Systems   323
M. Arndt, M. Sauer, Robert Bosch GmbH

The Role of Speech Recognition in Multimodal HMIs for Automotive Applications   335
S. Goronzy, R. Holve, R. Kompe, 3SOFT GmbH

Networked Vehicle

Developments in Vehicle-to-vehicle Communications   353
Dr. D. D. Ward, D. A. Topham, MIRA Limited; Dr. C. C. Constantinou, Dr. T. N. Arvanitis, The University of Birmingham

Generic Remote Software Update for Vehicle ECUs Using a Telematics Device as a Gateway   371
G. de Boer, P. Engel, W. Praefcke, Robert Bosch GmbH

High Performance Fiber Optic Transceivers for Infotainment Networks in Automobiles   381
T. Wipiejewski, F. Ho, B. Lui, W. Hung, F.-W. Tong, T. Choi, S.-K. Yau, G. Egnisaban, T. Mangente, A. Ng, E. Cheung, S. Cheng, Astri

Components and Generic Sensor Technologies

Automotive CMOS Image Sensors   401
S. Maddalena, A. Darmont, R. Diels, Melexis Tessenderlo NV

A Modular CMOS Foundry Process for Integrated Piezoresistive Pressure Sensors   413
G. Dahlmann, G. Hölzer, S. Hering, U. Schwarz, X-FAB Semiconductor Foundries AG

High Dynamic Range CMOS Camera for Automotive Applications   425
W. Brockherde, C. Nitta, B.J. Hosticka, I. Krisch, Fraunhofer Institute for Microelectronic Circuits and Systems; A. Bußmann, Helion GmbH; R. Wertheimer, BMW Group Research and Technology

Performance of GMR-Elements in Sensors for Automotive Application   435
B. Vogelgesang, C. Bauer, R. Rettig, Robert Bosch GmbH

360-degree Rotation Angle Sensor Consisting of MRE Sensors with a Membrane Coil   447
T. Ina, K. Takeda, T. Nakamura, O. Shimomura, Nippon Soken Inc.; T. Ban, T. Kawashima, Denso Corp.

Low g Inertial Sensor based on High Aspect Ratio MEMS   459
M. Reze, J. Hammond, Freescale Semiconductor

Realisation of Fail-safe, Cost Competitive Sensor Systems with Advanced 3D-MEMS Elements   473
J. Thurau, VTI Technologies Oy

Intersafe

eSafety for Road Transport: Investing in Preventive Safety and Co-operative Systems, the EU Approach   487
F. Minarini, European Commission

A New European Approach for Intersection Safety – The EC-Project INTERSAFE   493
K. Fürstenberg, IBEO Automobile Sensor GmbH; B. Rüssler, Volkswagen AG

Feature-Level Map Building and Object Recognition for Intersection Safety Applications   505
A. Heenan, C. Shooter, M. Tucker, TRW Conekt; K. Fürstenberg, T. Kluge, IBEO Automobile Sensor GmbH

Development of Advanced Assistance Systems for Intersection Safety   521
M. Hopstock, Dr. D. Ehmanns, Dr. H. Spannheimer, BMW Group Research and Technology

Appendices

Appendix A: List of Contributors   533
Appendix B: List of Keywords   539
Introduction
Biological Aspects in Technical Sensor Systems
Prof. Dr. G. Färber, Technical University of Munich

Abstract

This paper concentrates on the information-processing aspects of biological and technical sensor systems. The focus is on visual information, and the topic is extended from acquisition to the interpretation and understanding of observed scenes. This will become more and more important for technical systems, especially for automotive applications.
1   Information Processing in Biological and Technical Systems

To understand the layers of information processing as developed by nature, the scheme of Rasmussen (figure 1) is presented [R83]:
Fig. 1. The Rasmussen 3-layer model of perception / action
The lowest layer relies on skills, either inherited or learned by training:

- The reaction to stimuli from outside is skill-based (layer 1).
- If there is a more complex or rare situation where no skills are available, rules out of a catalogue can be used: these "when-then" rules help to master situations where no predefined skill-based behaviour is found (rule-based: layer 2).
- Finally, if there are no rules that cover the conditions of a situation, more abstract knowledge is required, and the sequence "identification", "decision" and "planning" takes place to find new rules: knowledge-based behaviour in layer 3.

Figure 1 corresponds to the sensorimotor principle in biology, shown in the simplified figure 2. Biological systems are able

- to perceive multimodal sensory information and to abstract the sensory signals into an interpretation of the actual scene: "cognitive perception" means that the system understands the situation;
- to make decisions on the basis of the actual situation and of learned knowledge; furthermore, they are able to extract new knowledge from positive or negative experiences in this situation and to feed it into their knowledge base ("learning");
- to execute actions corresponding to these decisions: they need motor capabilities to influence the situation (e.g. to evade or to attack).
Fig. 2. The sensorimotor bow
Most scientists in biology and neurophysiology agree that "intelligence" can evolve only in such a "sensorimotor bow": cognition (cognitive perception and behaviour) requires a system with the ability to perceive and to act in its environment. The biological example may be analysed on different layers of the Rasmussen model, as we will see in the next chapter. Three examples including technical applications are:

- The vestibulo-ocular reflex (VOR) will be used later as a model for an automotive camera: here an interesting control loop belonging to layer 1 in figure 1 handles the problem of image stability on the retina [B01]; some other features belong to layer 2 (e.g. where to glance: saccade control, or switching off the visual channel during a saccade).
- The ability of animals with two legs to keep their equilibrium while standing and walking: this is a quite complex control loop with sensory inputs from the sense of balance as well as from proprioceptive and force sensors, controlling many muscles in the legs, arms and body. It is interesting to see that today's "humanoid robots" still rely on static equilibrium: here the biological model offers the chance of much better solutions, e.g. a running humanoid robot. For this example, layers 1 and 2 of figure 1 have to be modelled.
- It is an important capability of humans to perform complex manipulation tasks with the hand/arm combination: tactile as well as visual sensors interact in this sensorimotor bow. Hand-eye systems are an interesting model for future robotic manipulation: in a joint project with neurobiologists [HSSF99], the behaviour of normal and pathologic humans has been studied and modelled (figure 3). The model has been implemented on a technical manipulator, with very promising results that may help the step from today's robotic systems to future sensor-based robots able to adapt their behaviour to varying situations. Here the Rasmussen model must span all 3 layers (figure 4).
Fig. 3. Biological and technical hand-eye-coordination
The focus of the paper is on visual sensors, trying to bring some of the biological principles to technical vision systems in order to improve robustness as required for most future automotive applications. Some of the results presented here are part of the FORBIAS project [FB03]. This joint research project combines research in technical and medical/biological groups. It is funded by the BFS (Bavarian Research Foundation); the acronym stands for "Joint Research Project in Bioanalogue Sensorimotor Assistance Systems". Instead of "bioanalogue", the terms "bioinspired" or "biomorph" are often used.
Fig. 4. Structure of a technical hand-eye-coordination system

2   Biological and Technical Sensors
Biological sensors originated from the necessity to get all the information on the environment required to survive: on the one side to detect enemies early enough to evade, or to avoid dangerous situations (gas, temperature, …), and on the other side to find something to eat or drink, or to catch a victim; many sensors are also used to gather information on the state of the own body. So the human has learned

a. to see (eyes, here in the focus),
b. to hear (ears),
c. to feel (tactile, temperature, pain),
d. to smell (nose),
e. to taste (tongue and nose),
f. to sense balance (sense of balance, otoliths),
g. to feel pressure or forces.
The last two modalities are important for our kinaesthetic impressions (movements, accelerations, forces). In addition, many sensors in the muscles help to control movement very accurately, with actuator elements that are non-linear or show non-reproducible behaviour. Technical sensors today are good for the modalities a, b, c and g; d (the "electronic nose") is starting to come into use, e may not be important for technical systems, and for f we have inertial sensors with 3 or 6 degrees of freedom. The question is whether we should copy the physical principles of these biological sensors, or whether the analogy to biology should be restricted to the behaviour that has proved reasonable over millions of years. This behaviour consists of at least two elements: the physical characteristics (sensitivity, accuracy, range, dynamic behaviour, spatial and temporal resolution) and the type of perception and interpretation of the signals coming from the sensors (what does a given signal combination "mean" for the living organism?). Because of the construction of living organisms it does not make sense to copy the physical implementation (membrane potentials, the vast number of elements and interconnections, neurons), but the functionality has proved to meet the requirements. As an example: we are starting to understand the principles of neuronal processing, but implementing these functions by a one-to-one imitation in an artificial neuronal network may not be the adequate technical solution.

The second part of this chapter is dedicated to the biological model of the visual system. At first we consider the basic sensor "eye" and its characteristics:

- Dynamics (range of light intensity) is handled over 6 decades (1:10^6). This is accomplished by a logarithmic behaviour of the receptors and by "accommodation", the automatic control of the diaphragm (iris).
- Focussing of the eye by automatic lens control.
- Colour vision (cones for colour, rods for luminance), with variable resolution.
- Temporal resolution: limited by neuronal processing; not homogeneous, since moving objects are detected much better in the peripheral part of the view field.
- Spatial resolution: also very non-homogeneous, with very high pixel density in the central part (fovea), decreasing towards the peripheral areas. The human eye has about 6 million cones and about 120 million rods!
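The logarithmic receptor response mentioned above can be made concrete with a small sketch: intensity over six decades is compressed into a fixed number of code values, so each factor of 10 in light adds the same increment. The mapping and the 8-bit depth are illustrative assumptions, not a specification of any particular sensor:

```python
import math

# Sketch of a logarithmic sensor response over six decades (1:10^6).
# Intensities outside the range are clamped, as a real receptor saturates.

def log_response(intensity, i_min=1.0, decades=6, levels=256):
    """Map an intensity in [i_min, i_min * 10**decades] to a code value."""
    intensity = max(i_min, min(intensity, i_min * 10**decades))
    return round((math.log10(intensity / i_min) / decades) * (levels - 1))

# Equal code-value steps per decade: dim and bright scene parts both keep
# local contrast instead of the dim parts being crushed to black.
for i in (1, 10, 1e3, 1e6):
    print(i, log_response(i))
```

A linear sensor with the same 256 levels would spend almost all of its code values on the brightest decade; the logarithmic mapping is what lets one frame span headlights and shadows at once.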
Fig. 5. Characteristics of a HDRC CMOS-camera
It is not easy to copy that performance in technical systems. However, technical vision sensors are becoming better and better, and there are good chances to meet or even surpass the biological system:

- The dynamic behaviour of the eye is already feasible with the new CMOS cameras (figure 5). The HDRC principle allows the characteristics to be adapted to the requirements (e.g. by controlling the integration times); a logarithmic behaviour over a range of 1:10^8 can be implemented. The right side of figure 5 shows an example of the effect of logarithmic behaviour compared with a CCD camera.
- For many applications only distances of >2 m are required: this can be accomplished easily by a fixed-focus lens.
- Colour vision is available in single- and multiple-chip versions. Today's high-resolution chips allow high-quality colour even with a single chip.
- Temporal resolution is as good as that of the neuronal system: 25 to 50 images/s are easily achievable, with lower resolution even up to 1000 images/s, many more than the eye.
- Some groups have tried to build image sensors with resolution characteristics similar to the eye (rotational symmetry, decreasing resolution with increasing radius). As this is not in the technological mainstream, these devices are very expensive. However, image chips with constant spatial resolution are getting more and more pixels; for the consumer market, up to 12 million pixels are available at low cost. One solution here is to use a multifocal configuration: the view fields of two camera chips with different focal lengths deliver high resolution in the fovea (about 10°, telecamera) and lower resolution over a bigger angle (60°, wide-angle camera). Figure 6 shows the overlapping view area; in the "fovea" the resolution is 36 times higher.
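The factor of 36 quoted above follows directly from the two fields of view, assuming both chips have the same pixel count (the pixel counts cancel out, so only the 10° and 60° angles from the text enter):

```python
# Back-of-the-envelope check of the "36 times higher" resolution figure:
# two imagers with the same pixel count, one covering a ~10° "fovea"
# (telecamera) and one a 60° wide-angle field.

tele_fov_deg = 10   # telecamera field of view, from the text
wide_fov_deg = 60   # wide-angle camera field of view, from the text

# Same pixel count spread over a smaller angle means proportionally
# finer angular resolution per axis, and the square of that per area.
linear_ratio = wide_fov_deg / tele_fov_deg   # 6x finer per axis
area_ratio = linear_ratio ** 2               # 36x per unit solid angle

print(linear_ratio, area_ratio)  # 6.0 36.0
```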
Fig. 6. Multifocal camera system
Such a technical vision sensor configuration is already a quite good "bioanalogue eye", but the biological eye can additionally be moved in a very effective way (eye movements relative to the head, and movements of the head), so that the fovea can always be directed to the "region of interest" (ROI); the biological eye compensates for all disturbances arising from walking (head movements) or from driving, e.g. when we are sitting in a car. Nevertheless we always see a stabilized image; otherwise we would feel dizzy. It is not enough to just have a good sensor: it must be integrated into a sensor system that solves the problems just mentioned. The already mentioned vestibulo-ocular reflex (VOR) serves as a biological model that integrates the following additional components into the complete vision system:

- The oculomotor system, consisting of some powerful muscles that allow the eye to be moved in 3 degrees of freedom: up and down, to the right and left, and around the vision axis. The performance figures are impressive: the eye can be moved at up to 700°/s and with accelerations of up to 5,000°/s²!
- The vestibular sensor system: the sense of balance (otoliths and semicircular canals) delivers inertial signals for 6 degrees of freedom (3 translational and 3 rotational). The nervous system can derive translational and angular velocities as well as changes in position and angle (integration of the acceleration signals). This information on fast changes is used to compensate the disturbances.
Finally, there is the neuronal control loop that compensates the unintended movements with eye movements. It uses the vestibular signals as well as the so-called "retinal slip", information that is generated in the cerebellum from the retinal image. The neuronal system also allows intended saccadic movements of the eye to be generated, and it prevents images being processed during a saccade: for about 100 ms the input channel is blocked until the image is stable again. Figure 7 [G03] shows a simplified block diagram of the interaction of these components. Especially interesting is the lower block within the "inferior olive", which acts as a teacher, changing the synaptic weights so that the visual error is minimized.
Fig. 7. VOR: The vestibulo-ocular reflex
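The loop of figure 7 can be caricatured in a few lines: the inertial (gyro) signal drives a fast feed-forward counter-rotation of the camera, and the residual image motion ("retinal slip") slowly adapts the gain, playing the role the inferior olive plays as a teacher. The gains, signals and adaptation rule are illustrative assumptions, not the neuronal model itself:

```python
# Toy VOR-style stabilisation: counter-rotate the camera using the gyro,
# and trim the gain from the retinal slip so the slip is driven to zero.

def stabilise(head_rates, gain=1.0, learn_rate=0.1):
    """Return camera rate commands that counteract measured head rotation."""
    commands = []
    for rate in head_rates:
        cmd = -gain * rate                 # fast feed-forward from the gyro
        slip = rate + cmd                  # residual image motion on the sensor
        gain += learn_rate * slip * rate   # adapt gain to null future slip
        commands.append(cmd)
    return commands

# With a correctly adapted gain the slip is zero and every command exactly
# mirrors the head motion.
print(stabilise([5.0, 5.0, -3.0]))  # [-5.0, -5.0, 3.0]
```

The essential property, as in the biological reflex, is that the fast path needs no image processing at all; vision only supplies the slow error signal for calibration.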
The "optokinetic nystagmus" is an additional mechanism that generates eye movements in the opposite direction if visual stimuli with a large area move through the view field (to keep the image stable). At constant time intervals, saccades bring the eyes back to allow further movements in the same direction. Some effects observed in patients with a defect in the sense of balance can be explained easily by the VOR model. For instance, these patients tend to suffer heavy dizziness: the visual impressions do not correspond to the signals received from the sense of balance [B99]. For healthy persons, the dizziness felt in a simulator that stimulates only the visual system but not the sense of balance has the same explanation. The image stabilisation function does not work for
patients with a damaged sense of balance: if they walk on the street they cannot recognize the faces of people, even those they know well. The VOR model is well understood, and it is one of the main goals of FORBIAS to apply its principles to an automotive camera system. This means that a fast moveable platform and an inertial sensor system have to be added to the camera. Now we have copied the basic capabilities of the biological eye: apart from mechanical disturbances (blurring), the camera system always generates stable and sharp image sequences, and it also allows the camera view to be directed to a desired direction (gaze control). These image sequences can be stored, transmitted and presented on displays. Technically this is an immense quantity of data: if each pixel coming from a colour camera with a resolution of 500x700 pixels is coded in 2 bytes, and a frame rate of 25 frames/s is generated for 2 cameras (with 2 different focal lengths), a bandwidth B of

B = 2 · 500 · 700 · 2 · 25 bytes/s = 35 MB/s

has to be handled. But simply storing the data into some memory does not help at all: the image sequences have to be processed, and biological systems are able to interpret, to "understand", the relevant information contained in the images. Perception must result in immediate behavioural decisions and in actions according to the "sensorimotor paradigm"; tolerable times for understanding the images are in the range of a tenth of a second. The following chapter again looks at biological principles of perception, because exactly the same functional and timing requirements have to be met in technical applications, e.g. in a car using "machine vision".
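The bandwidth arithmetic above can be checked directly; the factors are exactly those given in the text:

```python
# Recompute the 35 MB/s figure: two cameras, 500x700 pixels,
# 2 bytes per pixel, 25 frames per second.

cameras = 2
width, height = 700, 500
bytes_per_pixel = 2
frames_per_second = 25

bandwidth = cameras * width * height * bytes_per_pixel * frames_per_second
print(bandwidth / 1e6, "MB/s")  # 35.0 MB/s
```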
3   Visual Perception in Biology
Many scientists in different fields are trying to understand the mechanisms behind the perceptual capabilities of the visual system. Figure 8 indicates where advances can be expected. In neurophysiological studies, the very first layers of image interpretation are quite well understood. In particular, the classic results of Hubel & Wiesel indicate that the image is locally analysed by spatial and spatio-temporal filters
(receptive fields, as in figure 9). Edges of different length and direction are detected, some of them only when moving with a given speed in a given direction. In higher layers there seems to be an abstraction from the location of the photoreceptor cells. Up to the cortex (V1 and V2), this type of feature is detected, and there are many hypotheses about what may happen in the next stages (bottom-up). Detailed models of these layers are still missing. However, it may be useful to use this feature classification for the first stages of image interpretation.
Fig. 8. Neurophysiological and psychophysical view
Fig. 9. Spatial and spatio-temporal receptive fields
Simple animals like flies have a much simpler vision system. Most important here is the detection of moving objects, mainly enemies. Figure 10 presents a simple model of the fly's eye. After the (facet) photoreceptors in the first stage, the Reichardt cells detect movements. In a second stage, spatial integration by so-called "large-field neurons" takes place, and it appears that in this simple animal there is a direct coupling to a behavioural stage that executes the required high-speed escape reaction [BH02].
Fig. 10. A simple model for the fly's eye
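The Reichardt cells mentioned above implement a correlation scheme that can be sketched in a few lines: two neighbouring photoreceptors, a delay on each, and the detector output is the difference of the two cross-correlations. A rightward-moving edge excites the left receptor first, so the delayed left signal coincides with the right signal and the output goes positive. The signals and the one-sample delay are illustrative assumptions:

```python
# Toy Reichardt correlation detector for two photoreceptor time series.

def reichardt(left, right):
    """Sum the motion output: positive for left-to-right motion."""
    out = 0.0
    prev_l, prev_r = 0.0, 0.0            # one-sample delay lines
    for l, r in zip(left, right):
        out += prev_l * r - prev_r * l   # delayed-left path minus delayed-right
        prev_l, prev_r = l, r
    return out

edge_right = ([0, 1, 0, 0], [0, 0, 1, 0])  # pulse hits left, then right
edge_left  = ([0, 0, 1, 0], [0, 1, 0, 0])  # pulse hits right, then left

print(reichardt(*edge_right), reichardt(*edge_left))  # 1.0 -1.0
```

Summing many such detector outputs over the visual field is exactly the role of the "large-field neurons" in the second stage of the model.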
In psychophysical studies, the impressions of people in experiments are analysed; these lie hierarchically on a much higher level than the simple features of (a) (neuropsychology, perception psychology). The performance of the system behaviour, and also its limits, can be studied in a top-down direction: the analysis of "optical illusions" is an interesting way to understand the mechanisms behind the system structure.

"Gestalt" perception is one example of the very good performance of biological vision. Independent of size (distance), location, rotation or viewing aspect, humans are able to identify and classify known object types with very few errors. We have stored "cues", or combinations of cues, for the "Gestalt", which are used for a first hypothesis of the object class. For classes very relevant to us we have stored first hypotheses for different viewing angles; it takes only a fraction of a second to be sure that something is an instance of a certain class. We also have the capability to mentally rotate the view of less well-known objects in our imagination to get a good starting point for the recognition process. The object-class hypothesis is reinforced by tracking the object over some time; any uncertainty may disappear with time. This is especially true if either the observer or the object is moving: you get new aspects, new information and a better impression of the object's form. One way to use this information is "motion stereo", handled later.
The information processing stages behind the eye tolerate even large defects in the sensor system itself. Many people do not even know that they cannot see well with both eyes: the interpretation system hides this fact. An interesting example is the "blind spot" of the eye: since the photoreceptors as well as the first neuronal stages are in front of the retina, the optical nerve has to pass through a hole in the retina. This is a quite large field without photoreceptors; we compensate for it, and it is even difficult for us to find the blind spot.
Fig. 11. See something that is not there
Figure 11 shows an example of an optical illusion that helps to understand the vision principles. There is no square structure in the image, yet everybody sees the form that overlaps the four concentric ring arrangements. This example shows that the idea of object recognition by combining basic features like edges is not always true: there are no edges in this image. For technical applications it will mostly be possible to restrict the number of object classes for a given domain. For automotive applications only traffic-relevant objects must be considered, which reduces the complexity of building first hypotheses. However, this is still one of the main problems of automotive vision systems: a technical perception process will need models for all object classes involved in the domain. Depth perception is another important capability of the biological vision system. All biological vision sensors produce 2-dimensional projections of the 3D scene, and the third dimension, the depth, disappears. However, for many biological purposes the distance to other objects is highly important. There are two types of distance of interest:
Biological Aspects in Technical Sensor Systems
Near distance, where a frog catches a fly or a human manipulates objects (at arm's length): distances of 0.2 to 2 m have to be measured with sufficient accuracy for these biological purposes.
Far distance, where a frog detects the dangerous stork, or a human finds either prey or enemy: distances of a few metres for the frog, or of 50 m and more for the human, have to be mastered.
For both distance types nature has developed different principles. Consider the human vision system. For near distances binocular vision is used: two eyes provide two different projections of the 3D scene onto 2-dimensional images, and stereo vision is the best-known principle for reconstructing the 3D scene. But there are other cues as well:
Evaluation of the vergence angle: the axes of the two eyes point at the interesting object; interpretation of the angle gives an impression of depth. Some of the “2-dimensional 3D pictures” use this principle.
Use of the depth of focus: the lens focus is adjusted until the image of an object is sharp; from the state of the lens the distance of the object can be inferred.
Stereo is often used as the technical solution because the biological systems also apply this principle. But it is well known that a person who can see with only one eye also has spatial impressions – the defect is compensated by other methods (as shown below). Technically, stereo vision is especially useful for manipulation purposes. Far distance vision, as mainly required for automotive applications, relies on other concepts that are also implemented by nature. The first is based on a priori knowledge of the biological system: a frog knows the size of a stork, and by combining this knowledge with the relative size (angle) in its visual field it knows the distance. The same is valid for humans: we know how big a car is, and by seeing it under a certain angle in the view field its distance can be estimated.
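The two geometric cues just described – stereo disparity for near range and known object size for far range – both follow from the pinhole camera model. A minimal sketch (function names and all numeric values are our own illustrative assumptions, not from the paper):

```python
# Pinhole-camera sketch of the two depth cues described above.
# All names and numbers are illustrative assumptions.

def depth_from_stereo(focal_px, baseline_m, disparity_px):
    """Stereo vision: depth = f * B / d (two simultaneous projections)."""
    return focal_px * baseline_m / disparity_px

def depth_from_known_size(focal_px, real_width_m, image_width_px):
    """A priori knowledge: a car of known width seen under a smaller
    angle (fewer pixels) must be farther away."""
    return focal_px * real_width_m / image_width_px

# Near range: 6.5 cm "eye" baseline, 800 px focal length, 100 px disparity
print(round(depth_from_stereo(800, 0.065, 100), 2))      # 0.52 m
# Far range: a 1.8 m wide car imaged 29 px wide
print(round(depth_from_known_size(800, 1.8, 29), 1))     # 49.7 m
```

Note how the same focal length appears in both formulas: the near-range cue needs two cameras, the far-range cue needs one camera plus stored knowledge about the object class.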
Figure 12 shows a well-known phenomenon: all figures here have the same absolute size, but by comparing them with the surrounding alignment lines our size impression is deceived. Second, if there are movements, either of the perceiving subject or of an observed object, the principle of motion stereo can be used: instead of two simultaneous images, a sequence of images with different points of view – together with knowledge about the distances between these points – is processed to obtain a 3D impression. Third, there are other cues that help us to gain spatial impressions.
Examples are certain textures or the effect of illumination by a light source coming from a known (or assumed) direction.
Fig. 12. Subjective size perception
Combinations of these principles allow an estimation of distances accurate enough for most applications. This is a good model for technical systems in automotive vision applications, but it implies that the system has some a priori knowledge and that it is able to identify the object class. Solving both problems still requires a lot of work and some insight into the biological model. Motion detection is the next problem where nature can provide solutions. For the frog's or the fly's vision system, detecting moving objects (prey or enemy) is the most important task. But in the human visual system, too, there are neurons specialised in movement: especially in the peripheral field, fast movements are detected very well. For automotive applications this principle may be applied as well: it is very important to detect moving objects (cars, bicycles) coming from the peripheral into the central view field. The detection of a moving object must trigger visual attention, which will start to build a new hypothesis that enters the scene interpretation process.
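The peripheral motion-detection principle can be sketched as simple frame differencing on a brightness grid. Everything below (grid size, threshold, names) is an invented toy illustration, not the paper's method:

```python
# Toy sketch of peripheral motion detection triggering visual attention.
# Frames are small grids of brightness values; the threshold is arbitrary.

def moving_cells(prev, curr, threshold=10):
    """Return coordinates where brightness changed strongly between frames."""
    hits = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > threshold:
                hits.append((x, y))
    return hits

frame_a = [[0, 0, 0, 0],
           [0, 0, 0, 0]]
frame_b = [[0, 0, 0, 90],   # something bright enters at the right edge
           [0, 0, 0, 85]]

attention = moving_cells(frame_a, frame_b)
print(attention)  # [(3, 0), (3, 1)] -> trigger a gaze shift, build a new hypothesis
```

In a real system the detected region would redirect the gaze (or a region of interest) so that the central, high-resolution analysis can build the object hypothesis.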
In technical applications the principle of sensor data fusion is quite important. Many projects today combine radar and visual sensors to obtain more robust knowledge about the scene. How is this handled in biology? Two examples shall be mentioned. The adaptation and calibration of a baby's vision system happens by data fusion of two sensors: the eyes and the touching hands. The word “grasp” means “to understand” as well as “to reach for”: by combining the two sensory inputs the baby learns to see; later on it can understand the scene without its hands. Sensor data fusion between the visual and the auditory sensors is important for human scene understanding: you hear where the sound is coming from, and you see what may be its cause. As long as both correspond there is no perceptual problem. If they do not, usually the vision sensor wins. This is often called the ventriloquism effect: if the speaking puppet moves its lips, the speech coming from the puppet player is heard from the direction of the lips. Technical applications often use active sensors (RADAR or LIDAR). There is no biological model for an active visual channel, but nature is quite successful with ultrasonic “vision”, where the bat is a wonderful model for technical applications such as parking aids. Here, fusion with a visual channel could be successful. In the FORBIAS project, neuroscientists are cooperating with engineers to find new principles, especially in visual perception. Some experimental work has already been done to identify the best depth cues and to propose technical applications. Future work will be directed at the detection of “Gestalt”, i.e. of classes of objects relevant to automotive applications. Already today it seems clear that real advances in machine vision can only be made if some knowledge is included to interpret e.g. traffic scenes. The question is how much knowledge will be necessary and how this knowledge can be used.
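The benefit of fusing two noisy sensor channels – radar and vision, or vision and hearing – can be made concrete with inverse-variance weighting, a standard textbook fusion rule used here purely for illustration (all numbers are assumed):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates.
    The fused variance is always smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Vision says the object is at 20.0 m (variance 4.0),
# radar says 21.0 m (variance 1.0)
pos, var = fuse(20.0, 4.0, 21.0, 1.0)
print(round(pos, 2), round(var, 2))  # 20.8 0.8
```

The channel with the smaller variance dominates the fused estimate – the statistical counterpart of “the vision sensor wins” in the ventriloquism example, where vision localises direction more precisely than hearing.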
In any case, technical systems are approaching the neighbourhood of “cognitive systems”. Cognition will be required if these systems are to provide the same degree of robustness as human drivers – much better than what is available in today's technical vision systems for industrial applications.
4
Behavioural Aspects
The scope of the paper is on biological sensors. However, to run biological sensors in an optimal way, the motor systems with the ability to change some sensor characteristics must behave optimally. For the biological vision sensor this includes:
Focus control (lens focus adapted to an object's distance)
Intensity control (pupil width)
Gaze control: directing the fovea towards the actual “region of interest”
Most behavioural reactions happen on layer 1 (skill based) and layer 2 (rule based) of the Rasmussen scheme (figure 1). Focus and intensity control as well as the stabilisation part of the VOR happen on layer 1, the optokinetic nystagmus on layer 2. Gaze direction happens partly in a reactive way (e.g. a short look to a peripheral region where something is moving) but partly also intentionally; here layer 3 is involved. To get the necessary knowledge from the environment, a sequence of saccades should be executed that maximises the information content or minimises the remaining uncertainty. Consider the situation of a car driver: he has to look forward to preceding and oncoming cars, to the side for crossing cars, and into the mirrors for the cars behind his own. All parts of the traffic scene have to be analysed fast enough that the risk of an accident is minimised. This needs a good gaze control strategy, and this behaviour involves a lot of experience and knowledge about possible traffic situations. Some of the principles involved in finding the adequate behaviour are:
Selection from a list of well-known alternative behaviours, with environmental cues used as conditions.
Pre-simulation of behavioural decisions: mental evaluation of the expected results on the basis of some knowledge about one's own capabilities, followed by execution.
Learning: e.g. bringing new behaviours into the list of alternatives or retrying a new behaviour for pre-simulation.
“Reasoning” in the sense of classical artificial intelligence is very often too slow; the result would come too late for a critical real-time situation. There are many psychological experiments concerning gaze control for given tasks.
Many of the results can also be applied to technical systems. Behavioural decisions for other actions – locomotion, manipulation, head movements, … – will follow similar laws. This, too, is a field where it is worth looking into the biological model.
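The “minimise the remaining uncertainty” gaze strategy mentioned above can be sketched as a greedy loop: always fixate the region about which least is known, and let each fixation reduce that region's uncertainty. All region names and numbers are invented for illustration:

```python
def plan_saccades(uncertainty, steps, decay=0.5):
    """Greedy gaze strategy: repeatedly fixate the region with the
    highest remaining uncertainty; each fixation halves it."""
    order = []
    u = dict(uncertainty)
    for _ in range(steps):
        region = max(u, key=u.get)
        order.append(region)
        u[region] *= decay
    return order

# Invented uncertainty values for a driving scene
scene = {"ahead": 0.9, "left_mirror": 0.6, "right": 0.7, "rear_mirror": 0.5}
print(plan_saccades(scene, 4))
# ['ahead', 'right', 'left_mirror', 'rear_mirror']
```

A real driver's strategy additionally weights regions by risk and by how fast their uncertainty grows while unobserved (cars move), but the greedy skeleton is the same.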
5
Perception and Autonomous Action in Technical Systems
Figure 13 shows an example of a system architecture as it may be used for a “cognitive technical system”. It resembles the principles shown in figures 1, 2 and 8: the relationship to the Rasmussen scheme is not as evident as that to the sensorimotor loop, and the comparison with the biological architecture in figure 8 demonstrates the similarity between biological and technical systems. Figure 13 concentrates on a “cognitive car” that provides functions for driver assistance systems as well as for autonomous driving:
Fig. 13. System architecture for a “cognitive car”

The physical body at the bottom shows all the “hardware”: the car with all its sensors and actuators, including the interfaces to information processing. On the right side there is the perception part. It has to estimate the ego-state (where the physical body is located, how fast it is moving and in which direction) and to do the “traffic sensing”: detecting all relevant objects such as lane markings, moving and stationary cars and other objects, and tracking them in the image sequence delivered by the sensors. By far the most successful principle applied here is the 4D approach of Dickmanns [DW99]: it is based on Extended Kalman Filters (EKF) that provide estimates of the state (including motion vectors) of all observed objects. The art is to detect features relevant to the object class and to track them in a robust and stable way.
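The predict-and-correct idea behind such state estimation can be illustrated with a one-dimensional constant-velocity Kalman filter. The real 4D approach uses full Extended Kalman Filters over many coupled state variables; the sketch below, including its noise parameters, is a deliberately simplified stand-in:

```python
# 1-D constant-velocity Kalman filter: a scalar stand-in for the
# Extended Kalman Filters of the 4D approach. The noise parameters
# (q, r) and the frame period dt are illustrative assumptions.

def kalman_track(measurements, dt=1.0, q=0.5, r=1.0):
    """Estimate position and velocity from noisy position measurements."""
    x, v = measurements[0], 0.0                  # state: position, velocity
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0      # state covariance
    for z in measurements[1:]:
        # predict with the constant-velocity model x' = x + v*dt
        x += v * dt
        n00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
        n01 = p01 + dt * p11
        n10 = p10 + dt * p11
        n11 = p11 + q
        # update with the new position measurement z (H = [1, 0])
        s = n00 + r                              # innovation variance
        k0, k1 = n00 / s, n10 / s                # Kalman gains
        innovation = z - x
        x += k0 * innovation
        v += k1 * innovation
        p00, p01 = (1 - k0) * n00, (1 - k0) * n01
        p10, p11 = n10 - k1 * n00, n11 - k1 * n01
    return x, v

# A target moving one unit per frame: the velocity estimate converges to ~1
pos, vel = kalman_track([float(i) for i in range(20)])
print(round(pos, 1), round(vel, 2))
```

The key property exploited by the 4D approach is that the filter carries a motion model: even between measurements, or during short occlusions, the object's state can be predicted forward in time.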
It may be necessary to fuse these results with the results of other sensors (like the object list of RADAR sensors) or with information available from a map, where the location of the body is well known thanks to DGPS information. The result of reading a traffic sign may be fused with the a priori knowledge that a known sign is located at this place. The road orientation from the map may also be used to stabilise the lane mark detection results of a vision system. All these results (descriptions from a perception process delivering object instances with their states) have to be stored in a dynamic database containing the actual state of all detected objects and the history of a few past seconds. This database is updated at about the video frame rate and has a rather short temporal horizon. From this “Dynamic Object Base” a cognitive process must interpret the traffic situation and detect the intentions of other players (“subjects”) in the traffic situation (a car trying to overtake from the rear, a car starting to change its lane, an overtaking manoeuvre on the other side of the road). The results are stored in another database with a larger time horizon that contains the actual traffic situation. This information is used together with the mission plan (what the car has to achieve, where to go), some value information (minimal risk for all participants) and some knowledge about the capabilities of the own body (e.g. acceleration, dynamics) to decide about the next behavioural steps to be forwarded to the actuator side of the car. Finally the selected behavioural steps have to be executed: for the active vision process the gaze control has to be performed (“minimise uncertainty”), and for the car itself the longitudinal (braking and throttle) and lateral (steering) control has to be done with the corresponding control loops.
Perception, interpretation of the traffic situation and behavioural decisions require cognitive abilities. They need general knowledge about relevant objects, traffic situations and the capabilities of the car. However, the time to decide on and execute the behavioural steps is very short – only fractions of a second are available to avoid risky situations and to stay in a safe state. The scheme in figure 13 is simplified: there are also many direct interconnections between the sensor and the actuator side, e.g. for the local control loops both for car motion and for gaze control, where the VOR model is implemented. This requires very fast information from the vision sensor, so that the retinal slip information can be used without time delay. Nevertheless, the scheme shows the principal organisation.
6
Future Aspects
The examples presented in this paper make clear that it is not the physics of the biological sensor that must be copied to achieve the performance of biological systems. Technical sensors may use different physical principles, and most of these are no worse than the biological ones. The difference lies in the interpretation process – the cognitive capabilities of biological beings. The signals coming from the sensors are processed in the context of the situation, and stored knowledge is used to “understand” the signal pattern. We know only part of the neuronal mechanisms, and we know something about behaviour. When we understand these principles better, we will also be able to realise the observed functions in technical systems. With the transition from the horse carriage to the automobile we have gained a lot of speed, mobility and comfort. But we have also lost the cognitive capabilities of the biological system “horse”: it was able to find its way home if its owner had drunk too much, and it avoided crashes with other carriages. To compensate for these functions, cars need some cognitive capabilities that may be deployed as “assistance functions” or even as “autonomous functions” – like the horse that finds its way home.
References

[B01] Brandt, Th.: Modelling brain function: The vestibulo-ocular reflex. Curr. Opin. Neurol. 2001; 14: 1-4.
[B99] Brandt, Th.: Vertigo: Its multisensory syndromes. 2nd Ed. Springer: London, 1999.
[BH02] Borst, A., Haag, J.: Neural networks in the cockpit of the fly. J. Comp. Physiol. 188: 419-437 (2002).
[DW99] Dickmanns, E.D., Wünsche, H.-J.: Dynamic Vision for Perception and Control of Motion. Chapter 28 in: B. Jähne, H. Haußecker, P. Geißler (Eds.): Handbook of Computer Vision and Applications, Vol. 3, Systems and Applications. Academic Press, 1999, pp. 569-620.
[FB03] Färber, G., Brandt, Th.: FORBIAS – Bioanalogue sensomotoric assistance systems. Proposal for a joint research project funded by BFS (Bavarian Research Foundation), Munich, 2003.
[G03] Glasauer, S.: Cerebellar contribution to saccades and gaze holding: a modelling approach. Ann. NY Acad. Sci. 1004: 206-219, 2003.
[HSSF99] Hauck, A., Sorg, M., Schenk, T., Färber, G.: What can be Learned from Human Reach-To-Grasp Movements for the Design of Robotic Hand-Eye Systems? Proc. IEEE Int. Conf. on Robotics and Automation (ICRA’99), pp. 2521-2526, May 1999.
[R83] Rasmussen, J.: Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 13, No. 3, pp. 257-266, 1983.
Prof. Dr. Georg Färber
TU München, Lehrstuhl RCS
Arcisstr. 21
80290 München
Germany
[email protected]
The International Market for Automotive Microsystems, Regional Characteristics and Challenges

F. Solzbacher, University of Utah
S. Krüger, VDI/VDE-IT GmbH

Abstract

Microsystems technologies are in widespread use, and an ever increasing number of automotive functions rely on MEMS/MOEMS applications. Beyond the simple quantitative market numbers, the correlations in the value chain, the strategic alliances, the willingness to cooperate and share information, and the RTD infrastructure are strong indicators for success in a specific application field and geographical region. This paper gives an overview of the key microsystems applications and their further market deployment. In addition to the market data, background information such as key players, competitive analysis, market-specific situations and interdependencies is reviewed. A brief outlook discusses how the greater picture changes with the introduction of solutions using data and sensor fusion approaches, and we suggest what the impact of this upcoming approach may be.
1
The Global Automotive Market
The objective of this section is to outline the boundary conditions for automotive microsystems, their market potential and potential future trends. It presents an informed view of major issues, trends and the behaviour of the major players, and includes global car production forecasts for the year 2015.
1.1
Competitive Arena
The automotive industry has become a very tough environment for suppliers and OEMs alike. It is one of the most global industries, with high competition across the entire production chain. Due to low profitability and increasing investment costs for new products, further consolidation of the industry can be expected. Market entry by newcomers is almost impossible.
Fig. 1. Porter 5-forces model of the automotive industry
Trends
Design, branding, marketing, distribution and a few key components to a large extent define a car maker's competitive position. Profits flow from sales, service, finance or leasing. Therefore, more and more OEMs are outsourcing extensive parts not only of their production but also of their R&D efforts to suppliers. Cost and innovation pressure are being passed on to the suppliers due to increasing competition and production overcapacities. The automotive supply chain is thus undergoing a tremendous transformation – suppliers are increasingly evolving into development (and risk-sharing) partners. The resulting high R&D and production investments that need to be advanced by automotive suppliers lead to further consolidation of the supplying industry. Current studies expect a decrease of the total number of suppliers from 5500 today to about 3500 in 2010 (from 800 to about 35 1st tier suppliers) [1]. Product development cycles continue to shorten, from an average of about 23.6 months today down to 18.3 months in 2010, which will put considerable strain even on the fastest suppliers and R&D partners. At the same time, the vertical integration of the manufacturing level is expected to decrease (OEM: from 39.5% in 2002 to 27.8% in 2010; supplier: from 46.1% in 2002 to 40% in 2010) [2]. The percentage of cars using common platforms will continue to rise from 65% (2000) to about 82% (2010) [3], allowing larger production volumes of identical or only slightly modified parts for suppliers and OEMs.
The International Market for Automotive Microsystems, Regional Characteristics and Challenges
1.2
Production Forecast
Even though some of the key European markets have stalled over the past years, the total automotive market will continue to grow on average by about 2.2% annually in units and 2.8% in market volume, from about m57 cars (2002) to about m76 cars in 2015. This growth will be driven by fast-emerging markets such as India, China and Thailand.
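The forecast is internally consistent as a compound-growth projection; a quick check (the function name and rounding are ours):

```python
# Sanity check of the quoted forecast: ~2.2% annual unit growth
# starting from about m57 cars in 2002 should land near m76 in 2015.

def project_units(base_millions, annual_growth, years):
    """Compound growth: units(t) = units(0) * (1 + g)^t."""
    return base_millions * (1.0 + annual_growth) ** years

print(round(project_units(57.0, 0.022, 2015 - 2002), 1))  # 75.6 -> "about m76"
```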
Fig. 2. Global car production forecast 2015 [4]
Automotive Industry Regional Specifics
When reflecting on the local customer requirements of the international automotive business, one has to take into account that the concept of a world car has never proved successful. Regional specifics in consumer expectations have to be addressed; different legislation, tolling, the volatility of exchange rates and competing production costs are further reasons for regional strategies. Table 1 lists some of the interesting regional specifics. The global automotive industry is in a transition towards a strong implementation of regional specifics and towards building a transnational manufacturing system along the whole value chain. This development will, with some time delay, be adopted by automotive microsystems supply and manufacturing networks.
Automotive MST R&D in Europe has assumed a leading role, combined with a long tradition in automobile manufacturing. Germany in particular has proven very strong in the development and production of medium and upper segment vehicles, paired with a strong public engineering attitude which translates into a high level of automation. In contrast to this, Europe as a whole remains a fairly conservative market where industrial customers (OEMs) tend to stick to known solutions.
Tab. 1. Comparison of the international markets
When looking for lead markets it is essential to have a closer look at Japan. The limited but nevertheless large market, with customers willing to pay for innovations, makes Japan very attractive. Almost no material resources, a high population, a limited number of highways due to the geographic situation, big cities with overwhelming traffic and a leading role in telecommunication infrastructure all support the introduction of high-tech solutions. On the downside, however, Japan still remains a largely “hermetic” market with its own complete value chain, where Europeans or Americans fail to capture a significant market share. Japan, on the other hand, is a strong exporter with increasing shares in Europe and high success in North America. North America is traditionally a very strong and competitive automotive market. The customer is very cost sensitive, which leads to “simple” solutions and a lack of high-tech innovations in the car. In contrast to the end customer's needs, regulations by public authorities (e.g. California) are very tough. With the exception of SUVs – which are counted as commercial vehicles in the US due to their “truck” structure, frame and engine, and which are seeing very high demand – the US has introduced fleet consumption rules, zero emission roadmaps and safety regulations. The ambivalence of the market, paired with a legislative system with high industrial liability, results in a rather high-tech-averse industrial approach. It further has to be mentioned that the US highway system does not compare to Europe or Asia, which makes e.g. the discussion about zero emission, hybrid, gasoline or diesel difficult in a global context.
Fuel consumption also does not seem to be a technology driver for cars in Eastern Europe. Eastern Europe cannot build on a long automotive tradition. It is a big but slowly growing market dominated by the international car industry. The customer base divides into two groups: one looking for the cheapest available vehicle, the other going for the top-of-the-range car serving as a status symbol. For microsystems applications the latter group is quite interesting, since these cars are delivered even better equipped than in the manufacturers' home markets. Such cars help to create a strong brand image. Branding appears to be one of the buzzwords for China as well. Initially, sheer population numbers in China triggered the high interest of the automotive industry. Observing current developments, however, it can be concluded that, compared to traditional new markets, China is building up a strong domestic automotive industry that is becoming internationally competitive in high-tech areas, including microsystems technology.
China
China's domestic car sales have grown at more than 10% annually over the past years and will account for about 15% of total global automotive market growth. Improved infrastructure, sales and distribution channels, the deregulation of the automotive market, the growing economy and the subsequent prosperity will lead to further demand growth. One common misconception about China is that it is a market for simple low-cost cars – on the contrary: Chinese customers clearly see the car as a status symbol of accomplishment and success. Safety and comfort are highly valued, and upper middle class or luxury European cars are therefore in high demand (BMW is already selling more 7 series cars in China than in Germany!). The OEM market is dominated by global-local joint ventures such as Shanghai Volkswagen and FAW (Changchun), which together account for more than 50% of total Chinese car production. The remaining international joint ventures add a further 43%, leaving little room for the remaining 20 domestic car makers. China's entry into the WTO has led to drastic cuts in import tariffs and to the fading out of local-content requirements for cars. Competition and quality will continue to increase, partly because European and US suppliers are setting up shop in China. Successful automotive suppliers such as SAIC (Shanghai Automotive Industry Group Corporation) assume more and more responsibility for quality and output while transitioning from contract manufacturers to fully developed suppliers.
Fig. 3. Automotive joint-ventures in China [5]
Just as for many of the emerging markets, when looking at MST requirements for cars in China one has to clearly distinguish between European upper middle class and luxury cars (starting at the VW Passat and above), which come fully equipped, and the probably larger bulk of very low-cost cars. An example of the latter is the Renault “Logan” built by Dacia, which is produced in Romania, sold for between 5000 and 7000 Euros, and – except for optional ABS and airbag – devoid of any “invisible helpers” or MST devices. Due to its strong economic growth – unlike a lot of the former eastern bloc countries, the Middle East and India – China is most likely to absorb a growing percentage of MST-relevant luxury cars. Thus, the customer request for ever improved and increased high-tech content in the car will keep driving MST demand. Current studies expect the number of potential customers to grow to about m170 by 2010 [4]. At the same time, China has been very active in identifying and attracting partners in core technology areas for the future development of its technology and industrial infrastructure base. Thus, initiatives to foster technology transfer as well as home-grown developments are beginning to bear fruit. China has started growing an increasing base of SME companies that supply the big 1st tiers and OEMs in the automotive as well as the white goods industry.
Examples are companies like Huadong (China Eastern) Electronics Corporation, which has evolved from a former military supplier into a high-tech company with financial interests in multiple SMEs such as GaoHua Technologies (Nanjing), a supplier of industrial pressure transducers that meet European standards. These companies have also found R&D partners in local universities and national laboratories. Furthermore, China is starting to attract more and more highly skilled, foreign-trained engineers and researchers back from the US and Europe due to the growing opportunities in their home country: people are moving back and starting companies or pursuing academic careers rather than staying in the US or Europe – the prospects of success are by now apparently larger than in the western world. US universities and employers already register a clear decline in high-profile foreign researchers from countries such as China and India. That is, even though it is still very common to come across company or administrative structures guided by the mind frame of engineers trained in the pre-WTO PRC, which focus on imitation rather than innovation, it is a western misconception to expect a 10 to 20 year delay until countries like China start to innovate. Already one can witness the pace at which China absorbs the newest technology in all fields, together with the people to produce, operate and develop it. At the same time, international joint ventures with technology drivers mainly from Europe and the US will create additional momentum for approaching international R&D standards. The European MST industry will have to take this into account when planning its strategic positioning for the coming two decades.
India
The Indian industry leader is Maruti Udyog with about 50% market share. Ford, Honda, Mitsubishi, Hyundai and Daewoo are amongst the most active global players. The component and system supplier market is highly fragmented and underdeveloped, leading to low productivity. Many foreign carmakers are unsatisfied with Indian component manufacturers and would prefer free imports. The sales volume of 3 billion USD of automotive components in India is comparable to the Portuguese auto parts industry [6]. India – due to its comparatively low economic growth – is an ideal market for a car essentially without MST content, as perceived and initiated by Renault's CEO Louis Schweitzer about eight years ago, against the trend of all other manufacturers.
ASEAN
Indonesia, Malaysia, the Philippines and Thailand are the four major markets of the Association of Southeast Asian Nations. The absolute increase in production from 2001 to 2007, in units:
Indonesia: 230,000 to 300,000
Malaysia: 416,000 to 550,000
Philippines: 64,000 to 145,000
Thailand: 460,000 to 870,000
The top positions of the ASEAN market are occupied by Malaysian automakers, followed by Toyota, Isuzu and Mitsubishi. The ASEAN market is booming, but highly exposed to economic and political crises. Furthermore, its current absolute market size is still very small.
Japan
For Japan, with a market volume decline of nearly 3 percent until 2007, the prospects do not look euphoric (2001: m9.134 cars; 2007: m8.873 cars). The Toyota group seems to be moving against the trend, with a volume growth from m6.1 cars in 2001 to m7.8 cars (2007) – their volume growth, however, comes entirely from new volume in the emerging markets, North America and Europe.
2
Automotive Microsystems
Automotive microsystems emerged in the eighties, starting with the introduction of Manifold Air Pressure (MAP) sensors, followed by airbag sensors. The driving force for the use of microsystems in cars is that they technically or economically facilitate the integration of new functionalities, leading to improved overall car safety, security, efficiency and comfort. Key factors are:
Low cost, due to a high degree of integration and low material use
Small size and weight, allowing use in weight-sensitive applications (e.g. sensors in the unsprung parts of suspension systems, such as tire pressure sensors) as well as the use of large numbers of systems without an unbearable increase in vehicle weight
High reliability: processes and test mechanisms originating in the semiconductor industry are highly developed, leading to low failure rates, and systems integration lowers the number of external interfaces
Low power consumption, allowing a large number of sensor systems without upgrading the car power grid, as well as some battery-driven sensors (e.g. tire pressure monitoring)
An interface to the car electronics exists or can easily be established
Enhanced functionality: the possibility to measure and control quantities that so far could not be measured or controlled
Today, modern cars feature up to 100 microsystems components, fulfilling sensory and actuator tasks in:
Engine/drivetrain management and control
Safety
On-board diagnostics
Comfort/convenience and security applications
The increasing number of sensors used, as well as data fusion strategies, has however led to a blurring of the boundaries between application fields. Figure 4 names, by way of example, some automotive functions strongly related to sensor input.
Fig. 4. Car functions and the respective sensors (source: based on DaimlerChrysler)
Technical Requirements for Automotive Microsystems
The use of microsystems as sensors typically requires close contact with the measured medium and often translates into harsh environmental conditions for the sensor. Microsystems therefore have to withstand and function under almost all automotive conditions present in a car. The following table gives a brief overview of such environments.

Temperature challenges: unlike commonly assumed about five years back, most of the relevant future applications do not require operation of MST devices at temperatures beyond 180°C. The current strategy pursued by MST suppliers and 1st and 2nd tiers is thus to enhance existing Si chip technology by improving packaging and metallization systems, rather than employing SOI, SiC or other exotic materials and technologies. Exceptions are:
- Exhaust gas sensors (operated at between 280 and 450°C)
- Cylinder pressure sensors (operated up to 650°C), which however are assumed to remain irrelevant for market use due to the high price per sensor in any existing technology
- Differential pressure sensors for soot filters in diesel engines (sensor operated at around 280°C) – current systems place the sensor at a sufficient distance from the hot exhaust system, reducing the pressure to implement new high-temperature-compatible systems
- Exhaust gas pressure sensors for variable turbine control in TDI diesel engines (operating temperatures around 280°C)
Tab. 2. Automotive environments
Low-temperature silicon fusion bonding for wafer-level packaging and encapsulation of MST chips is well established. Technological challenges are thus primarily to be found in the development of better high-temperature metallization systems.

Pressure challenges: peak pressures occur in the diesel injection system as well as in electrohydraulic brakes, with peak pressures around 1500 bar. To date, two to three competing high-pressure sensors exist in the market, primarily based on stainless steel diaphragms with a bonded piezoresistive Si chip. Core problems are found in sensor long-term reliability under pressure and temperature load cycles.

Media challenges: the majority of future devices will require operation in hostile environments such as hydraulic oil, exhaust gas, etc. This is a major concern for pressure and gas sensors. Gas sensors constitute one of the neglected fields in MST device technology. Examples of existing sensors are lambda sensors (O2 sensors for the catalytic converter) made from yttria-stabilized zirconia and air quality sensors for automatic flap control in HVAC systems. New emissions regulations for trucks and cars taking effect in 2007/2008 will require a further reduction of the NOx concentration. Ammonia (NH3) is injected into the exhaust gas to reduce the NOx concentration; thus, both a NOx and an NH3 sensor are required. Very little proven technology for reliable detection under automotive specifications exists today. The most mature technologies available use metal oxide gas-sensitive layers and electrochemical sensor principles, none of which currently offers sufficient long-term stability and selectivity. Current specifications require about 8-10 years and 700.000 km of sensor operation. Current solution approaches towards media separation involve wafer encapsulation (e.g. fusion and anodic bonding), the use of hermetic coating layers and new substrate materials (e.g. SiC on Si).
Besides the aforementioned sensors operating in specific harsh environments, even regular standard sensor/actuator systems have to meet automotive specifications, including some resistance to oil, fuel, salt water, ice and car wash chemicals. These translate into severe packaging and encapsulation requirements.
MST Applications
Microsystems applications for vehicles can be divided, according to their life cycle, into three major groups: established devices, systems currently being introduced, and systems still under research. Using total produced units as a measure, settled and saturated systems measuring pressure and acceleration constitute the biggest group. Systems currently being introduced are, for instance, predictive sensors (e.g. pre-crash detection), which try to derive a situation or status in the immediate or near future based on information gathered in the past or at this instant. Compared to already common systems, these sensors are quite complex, yet low volume and high price, including attractive margins for suppliers. The third group of MST devices, which is still in the R&D phase, consists of very complex sensors or sensors in highly challenging conditions. These devices often do not directly determine a certain quantity or value of interest. Sometimes, e.g. for oil condition sensors, a lifetime history might be needed to extract the data. Table 3 provides a brief overview of current microsystems, adding information on the respective application, the life cycle status, the challenges of the system and the system's future potential based on market forces. Governmental regulation remains the biggest driver for the introduction of MST technology. Further driving forces are X-by-wire, comfort features for which the customer is willing to pay the additional price, and fusion concepts leading to a new generation of sensors. The automation of the car and its driving asks for an improved understanding of the vehicle status and the traffic situation, as well as better communication and interaction with the driver. Microsystems can make the difference, leading to a feasible automated individual transport system of the future.

2.1 Sensor and Data Fusion
Many automotive MST development projects face the problem that the high-precision sensors needed to meet the functional specifications typically have to be replaced by less accurate sensors due to cost considerations. Hence, there is a huge potential for the use of data and sensor fusion technology to substitute expensive precision sensors and to create high-precision virtual sensors at modest cost. Widely available bus architectures, communication technologies and reliability requirements for safety applications further support the use of data and sensor fusion concepts. Data fusion refers to the fusing of information from several, possibly different physical sensors, i.e. the computation of new virtual sensor signals. These virtual sensors can in principle be of two different types:
- High-precision and self-calibrating sensors, i.e. improved versions of the physical sensors. The goal is either to achieve higher performance using existing sensors or to reduce system cost by replacing expensive sensors with cheaper ones and using sensor fusion to restore signal quality.
- Soft sensors, i.e. sensors that have no direct physical counterpart.
The International Market for Automotive Microsystems, Regional Characteristics and Challenges
Tab. 3a. Automotive microsystem Applications - Drivetrain and Safety
Tab. 3b. Automotive microsystem Applications - Diagnosis, Comfort and HMI
Figure 5 contains a schematic picture of how data and sensor fusion concepts may be applied to vehicles. On the left-hand side, different types of information sources are listed. These include the underlying real sensors used to measure characteristics of the car's environment, to monitor the vehicle's internal parameters and status, and to observe or anticipate the driver's intent and driving behaviour. Besides these car-centred input dimensions, communication technologies already add virtual sensors by feeding additional information into the car systems. In a similar fashion, information of broader interest originating from one vehicle can be shared with other vehicles.
Fig. 5. Data/sensor fusion concepts (based on NIRA Dynamics [7])
All signals are fed into a sensor integration unit, which merges the information from the different sources and allows the computation of virtual sensor signals. These, in turn, may be used as inputs to various control systems, such as anti-skid systems and adaptive cruise control systems, or in Human/Machine Interfaces (HMI) such as a dashboard or overhead display. The possibility to compute virtual sensor signals allows complex dimensions like oil quality or obstacle detection to be assessed. Additionally, fault diagnosis and self-test of the physical sensors can be improved: sensor fusion introduces analytical redundancy, which can be used to detect and isolate different sensor faults. This redundancy also implies that a system can be reconfigured in case one or more sensors break down, to achieve so-called degraded, or "limp-home", functionality. Classical designs rely on hardware redundancy to achieve these goals, which is a very expensive solution compared to using sensor fusion software.
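The virtual-sensor idea can be illustrated with a minimal sketch (the sensor values and variances below are hypothetical illustrations, not figures from the text): two inexpensive, noisy sensors measuring the same quantity are fused by inverse-variance weighting, yielding a virtual sensor whose variance is lower than that of either physical sensor.

```python
def fuse(readings_and_vars):
    """Inverse-variance weighted fusion of independent sensor readings.

    readings_and_vars: list of (reading, variance) tuples.
    Returns (fused_value, fused_variance); the fused variance is always
    smaller than the smallest individual variance.
    """
    weights = [1.0 / var for _, var in readings_and_vars]
    total = sum(weights)
    value = sum(w * r for w, (r, _) in zip(weights, readings_and_vars)) / total
    return value, 1.0 / total

# Two hypothetical pressure sensors observing a true value near 100 units
value, var = fuse([(101.2, 4.0), (99.1, 1.0)])
print(round(value, 2), var)  # prints: 99.52 0.8
```

The fused variance (0.8) is below that of even the better sensor (1.0), which is the mechanism behind replacing one expensive precision sensor with several cheap ones.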
In order to discuss the practical influence of data and sensor fusion concepts, some major effects have to be looked at more closely. Looking at table 2, a decision has to be made as to whether comparable parameters really have to be measured by different vehicle systems, or even by each vehicle, separately. One example would be road friction monitoring as an input for vehicle dynamics systems. No sensor system exists in the market today that can measure and predict road friction. Information from vehicle stability systems after a critical situation, however, allows road friction on a particular patch of road to be derived. If this information, originating in one or a set of cars, could be shared with other vehicles, a tremendous safety effect could be achieved at almost no additional cost, provided each car comes with a car-to-car communication module. Even within one individual car, quantities such as inertial, pressure and temperature data are measured several times. Some of this information is redundant; sometimes the same type of information is measured in a different range or position, because it is difficult to cover with one sensor only. Hence, looking at the overall information situation (table 2), it can be predicted that up to one fifth of all sensors might not be required if sensor and data fusion is used. The second effect to mention is the possibility to design a sensor for an information environment in which the accuracy required of the specific sensor decreases. Sensor fusion allows the design of cooperative systems in which single sensors assist each other and are designed to function as a set. This concept translates into virtual sensors and allows for much simpler units.
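The analytical-redundancy argument can also be sketched in a few lines (a toy residual test on made-up numbers, not any production algorithm): with three nominally redundant readings of the same quantity, an outlier channel can be isolated and the remaining channels used for degraded, "limp-home" operation.

```python
def isolate_fault(readings, threshold):
    """Flag sensor indices whose reading deviates from the median of the
    redundant set by more than `threshold` (a simple residual test)."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    return [i for i, r in enumerate(readings) if abs(r - median) > threshold]

# Three hypothetical redundant yaw-rate channels; channel 2 is stuck high
print(isolate_fault([0.51, 0.49, 3.70], threshold=0.5))  # prints: [2]
```

In a classical hardware-redundant design the same isolation would require dedicated duplicate sensors; here it falls out of signals that are already present for other functions.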
2.2 Trends
The bottom line of the past years' MST device and market development is that there will most likely not be a specific new "killer application" propelling MST device technology and market penetration onto a new level. There is a clear trend towards:
- Consolidation of existing sensor technologies – future developments focus on evolution rather than entirely new concepts; most of the known sensing mechanisms have, where suitable, been transferred to MST technology
- Evaluation of the potential of sensor and data fusion concepts, in order to a) gain additional, otherwise inaccessible information or b) reduce the number of sensors required and make use of the increasing redundancy of existing hardware sensors
- Standardisation of signals and interfaces in order to reduce cost and improve exchangeability
- Improved communication technology in order to remove wire harnesses (hostile environments plus cost issues)
In addition, a few technological and device challenges will remain, such as pre-crash sensing.
2.3 Global Innovation Networks
The automotive industry's R&D activities have undergone drastic changes over the past decade. The increasing complexity of future system components has led to research and development projects being addressed by teams representing (almost) the entire value and production chain, from component/chip supplier to 1st tier and OEM. The establishment of increasing communication between all the partners involved has been one of the major accomplishments of the European automotive and MST industry. OEMs have undergone a tremendous learning process, realising that for the successful use and implementation of MST components and technology it is essential to be involved in the definition of acute requirements and future needs. In consequence, a project engineer already has to have very wide knowledge in order to coordinate multi-faceted developments. Over the coming decade this development will move to the next stage. US suppliers have already started buying technology or outsourcing developments, chiefly to European facilities, due to a) the higher commercialisation rate of MST products in Europe (Germany), b) higher production and reliability competence and c) lower IP barriers compared to the US. R&D and technology competence is progressing more and more towards a global scale. With the increasing need for high-tech components in emerging markets such as China, more and more competence will move into these countries as well. This will eventually lead to R&D not only having to take into account the entire value or production chain, but also having to bridge intercultural barriers, since R&D partners will be spread all over the globe. The impact on the requirements for future R&D engineers and project managers in this field cannot be foreseen at this point in time. Likely scenarios are a) the creation of new positions/functions for international project managers and b) increased skill sets, including intercultural competence, for engineers.
Interestingly enough, international logistics companies such as Danzas have become pioneers in excelling at intercultural competence as a competitive advantage, since making it part of their company culture allows them to provide highly efficient and nationally customized service.
3 Summary and Outlook
MST for automotive markets and applications has certainly arrived at a watershed in its development. From a market perspective, automotive applications will remain an important cornerstone of MST development and products, but will make up an ever smaller proportion of the market. Due to the remaining high innovation pressure, it can still be expected that new devices, technologies and applications will keep being introduced, even if at a maybe slower pace than in the previous decade. It can be expected that harsh-environment-compatible MST and high-complexity MST or sensor systems will largely contribute to this remaining growth. Biomedical and consumer market applications will, however, outclass automotive MST in market volume (total market and quantities) by far. After a few years of experimenting with the new technology, Germany, as the key driver market for automotive MST, has returned to a more conservative approach towards new technologies and new MST devices: unless a device is a "need-to-have" item, the cost associated with the introduction of the new device (investment, initial failure rates, etc.) does not justify or warrant the potential competitive advantage. In line with this development, most major OEMs have to a large extent pulled out of earlier MST/MEMS commitments and R&D programs. Whereas until about 5 years ago the goal of many OEMs was to be the technology leader when introducing new systems, today the primary objective appears to be "first-to-follow", in order to be able to monitor the customer reaction as well as initial failure modes. BMW appears to be an exception to this rule, with the introduction of features such as the iDrive concept. It remains to be seen whether some emerging markets, such as China with its currently high customer desire for high-tech cars, will keep up their momentum or undergo a similar saturation and slowing phase. Legal requirements (e.g. new emissions regulations for diesel engines taking effect in 2007/2008) remain a powerful driver for continued innovation in this field. Strangely enough, even though some of the regulations are stricter in the US, European automotive OEMs still seem to be implementing a much larger number of new high-tech systems than their US counterparts. Likely reasons for this development are the extreme price competition and pressure on US brand cars, as well as the reduced necessity for MST high-tech devices due to lower traffic density, lower vehicle speeds, etc. Current trends point towards consolidation and data fusion – i.e. trying to use the systems already in place before adding new components, which add to the overall complexity and potential failure modes of the car's electronic system.
Another interesting issue is the future reliability of used cars. The complexity of today's cars leads to high failure rates and recalls already during early product life, covered under warranty. Already we are approaching a state in which cars tend to behave similarly to computer hardware and software (i.e. problems with tire pressure monitoring systems requiring a "system reboot", software glitches in navigation and MMI computers leading to partial loss of central car comfort functionalities such as HVAC, radio, etc.). Taking into account the complexity and cost of replacing some of the units (replacement of a navigation system can cost up to 7.000 Euros if a wire harness needs to be replaced), it remains an open question whether used high-tech cars will be affordable at all due to high maintenance cost. In the US, car manufacturers have already reacted with "certified warranties" that allow up to 4 years of used car warranty on all parts and labor. It can be assumed that, until the reliability of the new systems has reached a level comparable to that achieved in airplane control systems, cars that were built about 10 years ago will probably represent a peak in reliability: their mechanical systems have matured to very high reliability, but they are not yet so loaded with electronic components that their electrical/electronic reliability suffers. Finally, the increasing globalisation of the automotive supply industry, in unison with the need to cater to regional customer needs and desires, calls for a new generation of automotive and MST R&D engineers. High intercultural competence will be a key to R&D project and product success. Automotive MST has started to mature and will lose some of its original momentum. Just like car electrical systems, however, it has established itself as an essential component of individual transport systems and will continue to be an economic and innovation motor for the coming decades.
Acknowledgements

The authors wish to thank Mr. Goernig (ContiTemic) for the fruitful discussions on MST market penetration and the ongoing open-minded exchange of ideas. The authors would also like to thank Mr. Rous for researching and compiling a large proportion of the market material presented in this article.
References
[1] Price Waterhouse Coopers: Supplier Survival: Survival in the Modern Automotive Supply Chain, July 2002
[2] Center for Automotive Research: What Wall Street Wants … from the Auto Industry, April 2002
[3] Chuck Chandler: Globalisation: The Automotive Industry's Quest for a World-Car Strategy, 2000
[4] Mercer Management Consulting (Eds.): Automobilmarkt China 2010: Marke, Vertrieb und Service entscheiden den automobilen Wettbewerb in China, November 2004
[5] The McKinsey Quarterly, 2002, Ed. 1
[6] Francisco Veloso: The Automotive Supply Chain: Global Trends and Asian Perspectives, Massachusetts Institute of Technology, September 2000
[7] Forsell, U. et al.: Virtual Sensors for Vehicle Dynamics Applications, in: Advanced Microsystems for Automotive Applications 2001, Springer-Verlag, 2001
Prof. Dr.-Ing. Florian Solzbacher
Department of Electrical Engineering and Computing
University of Utah
50 S Central Campus Drive
UT 84112
USA
[email protected]

Dipl.-Ing. Sven Krüger
VDI/VDE-IT GmbH
Rheinstrasse 10B
14513 Teltow
Germany
[email protected]

Keywords: microsystems application, market, deployment, prediction, differentiation, production capacities, sensor and data fusion, competitive analysis, innovation networks, technological challenges
Status of the Inertial MEMS-based Sensors in the Automotive

J.C. Eloy, Dr. E. Mounier, Dr. P. Roussel, Yole Développement

Abstract

Inertial sensor applications are the most active among the MEMS markets. This paper analyzes the future market for accelerometers and gyros. Yole found that between 2003 and 2007 the compound annual growth rate (CAGR) of gyroscopes will be 25%, growing from $348 million in 2003 to $827 million in 2007, while the CAGR of acceleration sensors will reach 10%, growing from $351 million to $504 million. For the first time, the market for micromachined gyroscopes will exceed the acceleration sensor market in 2005. Both markets are now dominated by automotive applications.
1 Accelerometers, a Market of $504 Million in 2007
The following table (figure 1) shows the accelerometer market forecast for the 2003–2007 time period. The total market has been estimated at $351 million in 2003, $410 million in 2005 and $504 million in 2007. Today, automotive applications account for 90% of the overall market, for airbag deployment sensing and active suspension. The main characteristic of the automotive field is that it requires low-cost chips, in the range of $3 to $5 per component. In the accelerometer field, the main manufacturers are Bosch, Analog Devices and Freescale/Motorola. With a yearly production of 40 million cars (CAGR of 0.36%), we estimate that 180 million accelerometers will be necessary in 2005 for the automotive field alone. Figure 2 shows the accelerometer manufacturers' 2003 market share in $M sales. The first 8 manufacturers of accelerometers represent more than 90% of the total market share in number of components. The main manufacturers are Bosch, Analog Devices, Motorola (part of the production is sub-contracted to Dalsa), VTI Hamlin, X-Fab, Denso, Delphi-Delco and SensoNor (now Infineon). In 2003, the total volume of accelerometers for automotive was more than 100 million components for a market of more than $300 million. We should note that Infineon also uses pressure sensors as side airbag sensors placed inside the door structure.
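The growth rates quoted for these markets can be sanity-checked from the endpoint figures using the standard CAGR formula (the figures below are taken from the text; the function itself is just the textbook definition):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1.0 / years) - 1.0

# Gyroscopes: $348M (2003) -> $827M (2007); accelerometers: $351M -> $504M
print(round(cagr(348, 827, 4) * 100, 1))  # prints: 24.2  (the ~25% quoted)
print(round(cagr(351, 504, 4) * 100, 1))  # prints: 9.5   (the ~10% quoted)
```

The small differences to the rounded 25%/10% figures in the abstract are consistent with the endpoint values themselves being rounded to the nearest million.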
Fig. 1. Markets for MEMS-based accelerometers 2003-2007
For the airbag application, the specifications are the following:
- ±50g, auto-calibration and self-test, integration of multi-axis sensing for front shock detection
- Integration of 1 to 5 airbag sensors per car, for several axes of detection
- Price of a 1-axis sensor: < $2
- Price of a 3-axis sensor: $5 to $6
Fig. 2. Accelerometer manufacturers' 2003 market share (all applications)
The market shares in 2003 for airbag sensors were the following: ADI 27%, Freescale 15%, Bosch 35%, Delphi 9% and Denso 8%. Today, the trends are towards more sensors, in order to allow focused activation of airbags, and towards the integration of several axes of detection in one packaged sensor. For active suspension, a ±3g acceleration sensor with high accuracy is necessary. In 2003, VTI Technologies had the largest market share. For the ESP application, a ±3g acceleration sensor plus one gyro are required. In 2003, VTI Technologies had the largest market share, followed by Bosch. The business trend is a strong need due to the extended use of security systems for car stabilization.
2 Gyroscopes, a Market of $827 Million in 2007
The following table (figure 3) shows the gyro market forecast for the 2003–2007 time period. The market has been estimated at $348 million in 2003 and $827 million in 2007, corresponding to a CAGR of about 25%. Today, as for accelerometers, automotive applications make up 90% of the overall market, for:
- Rollover detection
- Navigation (GPS)
- Antiskid systems
Fig. 3. Markets for MEMS-based gyroscopes 2003-2007
The main characteristic of the automotive field is that it requires low-cost chips. The gyros' ASP is in the range of $15 to $30, which is still considered a high price for automotive components, thus restricting the use of gyros to high-end cars. We estimate that 48 million gyros will be necessary in 2005.
Automotive applications account for more than 90% of the market for gyroscopes, with:
- Rollover detection
- Navigation (GPS)
- ESP
This field requires low-cost gyros in the range of $15 to $30 per component, an ASP that is considered high for automotive components. For car applications, the main players are SSS-BAE, Bosch and others; 2005 production is estimated at more than 50 million units.
Fig. 4. Market shares for gyro manufacturers in 2003 (all applications)
For the rollover detection application, detection of angular rates as low as 0.5°/s is necessary. In 2003, Matsushita was the main supplier, followed by SensoNor/Infineon. It is mainly a Japanese market with few applications in North America, and the market evolution in 2005 is unclear. For the GPS application (loss of GPS signal in cities, tunnels, etc.), the measurement range is ±80°/s. The major players worldwide are Matsushita and VTI, the latter selling a 3x1-axis accelerometer on the US market. There is a strong need in automotive GPS, moving from high-end to low-end cars. For ESP, a ±3g acceleration sensor plus one gyro are needed. Bosch and SSS are the main manufacturers. There is a strong need due to the extended use of security systems for car stabilization.
Status of the inertial MEMS-based sensors in the Automotive
3 Most of the Inertial MEMS Devices are Made with Deep Reactive Ion Etching
Regarding the accelerometer microstructure, 40% of total production is comb-drive accelerometers (representing more than 50 million units in 2004). The companies manufacturing comb-drive accelerometers are Delphi Delco (less than 10 million units per year), Denso (10 million units per year) and Bosch. Matsushita also develops comb-drive accelerometers; today, these are at the feasibility stage. The production yield is in the range of 70% to 80% for accelerometers, and about 1800 accelerometers with an average size of 8 mm² are manufactured on a 6'' wafer. We estimate that 56% of accelerometers are surface micromachined and 44% are bulk micromachined, but some players (such as Bosch and ADI) use Deep RIE equipment in the surface micromachining process in order to benefit from the high etching rate. Using these data, we calculate that, in 2004, more than 30 Deep RIE machines should have produced about 100 million accelerometers (i.e. 75% of total production). Assuming a conservative scenario (in 2007, the use of Deep RIE will remain at 80% of total production), we estimate that almost 50 Deep RIE machines will be necessary in 2007. For gyroscopes, the production yield is in the range of 50% today. We estimate that 52% of gyros are silicon or quartz surface micromachined and 48% are bulk micromachined (SSS, for example, uses Deep RIE equipment), while some other players use Deep RIE equipment in the surface micromachining process in order to benefit from the high etching rate. We then estimate that 70% of the gyros for the automotive market are manufactured using Deep RIE equipment. For 2004, we calculate that fewer than 40 Deep RIE machines should have produced 70% of total gyroscope production. Assuming a conservative scenario for the future, 70% of total gyro production in 2007 will still be made using Deep RIE. With this hypothesis, we calculate that about 80 Deep RIE machines will be necessary in 2007 for the production of gyros.
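The equipment counts above follow from simple throughput arithmetic. The sketch below reproduces the accelerometer case; dies per wafer and yield are taken from the text, while the wafer-starts-per-tool-per-year figure is an assumption introduced here purely for illustration.

```python
def wafers_needed(units_per_year, dies_per_wafer, yield_rate):
    """Wafer starts needed per year to ship a given number of good dies."""
    good_dies_per_wafer = dies_per_wafer * yield_rate
    return units_per_year / good_dies_per_wafer

# Accelerometers: ~100 million DRIE units, ~1800 dies per 6'' wafer, ~75% yield
wafers = wafers_needed(100e6, 1800, 0.75)
print(round(wafers))         # prints: 74074 wafer starts per year
# With an assumed ~2,500 DRIE wafer starts per tool per year, this points
# to roughly 30 tools, consistent with the figure stated in the text.
print(round(wafers / 2500))  # prints: 30
```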
4 Conclusions
The MEMS inertial sensor markets will be widely dominated by automotive applications for the years to come, and new applications (both low-end and high-end) are driven by the availability of devices at an adapted cost with the right specifications. We forecast that in 2005, for the first time, the gyroscope market will exceed the accelerometer market. On the MEMS equipment side, inertial MEMS market growth is an opportunity for DRIE manufacturers, as the development of new applications in the fields of accelerometers and gyroscopes will drive the DRIE market.

J.C. Eloy, Dr. E. Mounier, Dr. P. Roussel
Yole Développement
45, rue Sainte-Geneviève
69006 Lyon
France
[email protected]

Keywords: inertial sensors, accelerometers, gyroscopes, market forecast, deep etching
The Assessment of the Socio-economic Impact of the Introduction of Intelligent Safety Systems in Road Vehicles – Findings of the EU Funded Project SEiSS

S. Krüger, J. Abele, C. Kerlen, VDI/VDE-IT
H. Baum, T. Geißler, W. H. Schulz, University of Cologne

Abstract

Road crashes take a tremendous human and societal toll in all EU member states. Each year, more than 125.000 people are killed and millions more are injured, many of them permanently. The costs of the road safety problem in the EU amount to up to 2% of its gross domestic product. New, safety-related technologies are promising instruments for reducing the number of accidents and their severity. The study delivers an overview of safety-related functions, identifies key variables and develops methods for the assessment of their socio-economic impact.
1 Introduction
Transport is a key factor in modern economies. The European Union with increasing demand for transport services needs an efficient transport system, and has to tackle the problems caused by transport: congestion, harmful effects to the environment and public health, and the heavy toll of road accidents. The costs of accidents and fatalities are estimated to be 2% of gross domestic product in the EU (EC 2003). It is the policy of the European Commission to aim at a 50% reduction of road fatalities by 2010. There is convincing evidence that the use of new technologies can contribute significantly to this reduction in the number of fatalities and injuries. For this reason the eSafety initiative aims to accelerate the development, deployment, and use of intelligent vehicle safety systems (IVSS). Intelligent safety systems for road vehicles are systems and smart technologies for crash avoidance, injury prevention, and upgrading of road holding and crash-worthiness of cars and commercial vehicles enabled by modern IT. Governments as well as marketing departments in the automotive industry face the dilemma to decide on new technologies or new paths of research and
50
Introduction
development, respectively, before reliable data exist. For this reason, it is essential to evaluate the safety impact of new technologies before they are marketed. Being aware of the methodological problems, it is necessary to provide a basis for rational and convincing decisions. Therefore the eSafety initiative as well as the European Commission are asking for a sound data base and a decision-supporting methodology. Even though the effects of introducing intelligent vehicle safety systems cannot be fully accounted for in advance, the problem remains for the evaluation of components or technologies. Defining the impact of the introduction of a specific technology is therefore a challenging task, because the exchangeability of technologies adds to the general problem of assessing the impact of a vehicle function. The use of technologies such as microsystems technology connects specific costs to technical possibilities. Other technologies will have different limitations and different advantages. When looking for break-even points, it becomes very important to get a better understanding of the financial scenarios. Therefore, independently of stakeholders such as scientists, suppliers, original equipment manufacturers, insurance companies, or public authorities, it becomes very important to find measures to assess and compare technologies, functions, and approaches.
The European Commission initiated this exploratory study in order to
- provide a survey of current approaches to assess the impact of new vehicle safety functions,
- develop a methodology to assess the potential impact of intelligent vehicle safety systems in Europe,
- provide factors for estimating the socio-economic benefits resulting from the application of intelligent vehicle safety systems; these factors, such as improved journey times, reduced congestion, infrastructure and operating costs, environmental impacts, medical care costs etc., will be the basis for a qualified monetary assessment,
- identify important indicators influencing market deployment and develop deployment scenarios for selected technologies/regions.
2
State of the Art
Investigations of the socio-economic impact of intelligent vehicle safety systems began in the late 1980s. Since then, the benefits of IVSS technologies and services have been assessed on the basis of more than 200 operational tests and early deployment experiences in North America, Europe, Japan, and Australia (PIARC 2000). Three broad categories of evaluation approaches are currently being used (OECD 2003):
- empirical data from laboratory measurements as well as real-world tests
- simulation
- statistical analysis
Several projects funded by EU Member States or the European Commission, as well as studies by the automotive industry and equipment suppliers, have already provided some data on the impact of intelligent vehicle safety systems. A large number of projects deal with technological research and development and provide a basis for further progress in the field (e.g. AIDE, CARTalk2000, CHAMELEON, EDEL, GST, HUMANIST, INVENT, PReVENT, PROTECTOR, RADARNET, SAFE-U). Several projects focus on accompanying measures in order to develop the sectoral innovation system and strengthen networks and co-operation (e.g. ADASE II, HUMANIST). Some projects reflect on the implementation of safety systems and on measures to support the application of new technologies (e.g. ADVISORS, RESPONSE). Finally, a number of projects discuss the costs and benefits of the technologies investigated (ADVISORS, CHAUFFEUR, DIATS, E-Merge, STARDUST, TRL report). However, a systematic assessment and coherent analysis of the potential socio-economic impact of intelligent vehicle safety systems is not yet available. Such an analysis is further complicated by the fact that many systems are not yet widely deployed. Reflecting on the socio-economic effects of IVSS, it is necessary to distinguish different levels of impact: operational analysis dealing with the technical assessment of operational effectiveness, socio-economic evaluation, and strategic assessment. This study argues that an assessment of the socio-economic impact of intelligent safety systems has to combine these different evaluation approaches.
3
Methodology of the Study
The suggested methodology consists of 14 major steps (see figure 1). It includes technology, function, market, and traffic inputs, and allows a differentiation at member state level. The relevant steps of the SEiSS methodology are:
1. Determination of the technology and functions interaction matrix (IVSS)
2. Assessment of functions interaction
3. Calculation of collision probability for IVSS differentiated for accident types
4. Estimate of the penetration rate for IVSS following specific market deployment scenarios
5. Prediction of number of accidents for specific IVSS setup
6. Prediction of accident severity for specific IVSS setup
7. Calculation of accident costs
8. Prediction of congestions
9. Calculation of time costs based on congestions
10. Calculation of vehicle operating costs
11. Calculation of emission costs differentiating into CO2 and pollution
12. Differentiation in cost effects with and without IVSS
13. Calculation of IVSS-specific costs
14. Calculation of benefit-cost ratio
4
Technology, Safety Functions and System Interaction
Technology is a prerequisite for an automotive function. On the basis of new technologies, a new safety function might be introduced. However, we face the problem of system interaction between different safety technologies. It is not possible to rely on an evaluation of single technologies in order to assess the system behaviour (step 2). It is therefore necessary to define the interacting areas. In the model, functions are correlated to a time pattern, i.e. the effect of a function is assessed with regard to its time slot in accident mitigation and its effectiveness. A specific IVSS set-up translates into specific time patterns for the different accident types. This time pattern correlates with collision probability (step 3). In order to calculate the accident severity (relevant for step 6), the same time-related pattern used for step 3 is used. The severity of an accident depends on the impact energy, which directly correlates with the impact speed and the absorption potential of passive safety systems. The latter can be translated into additional time for the specific accident type.
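As a hedged illustration, the relation between additional time, impact speed and severity described above can be sketched as follows. The constant-deceleration braking model, the deceleration value and the quadratic energy relation are textbook physics assumptions for illustration, not figures from the study:

```python
def impact_severity(v0, extra_time, decel=8.0):
    """Relative accident severity for an initial speed v0 (m/s) when an
    IVSS triggers braking extra_time seconds earlier; an assumed constant
    deceleration (8 m/s^2 here) reduces the impact speed, and severity is
    taken proportional to impact energy, i.e. the square of impact speed."""
    v_impact = max(0.0, v0 - decel * extra_time)
    return (v_impact / v0) ** 2  # 1.0 = unmitigated crash, 0.0 = avoided

# E.g. 0.5 s of additional time at 50 km/h (~13.9 m/s) roughly halves
# the impact energy in this simple model:
print(impact_severity(13.9, 0.5))
```

The quadratic shape is why even small gains in the time pattern translate into large severity reductions.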
Fig. 1. Relevant steps for SEiSS methodology
5
Market Deployment
The main goal of integrating the market perspective into the proposed model is to find a way of forecasting the diffusion of intelligent vehicle safety systems within the vehicle fleets of the countries considered, in other words the market deployment (step 4). The target figure for capturing the market perspective is therefore the equipment rate with intelligent vehicle safety systems. This figure influences the socio-economic impact of IVSS. Firstly, vehicles that are equipped with IVSS and vehicles or other road users
that are involved in crashes with those vehicles profit from the advantages of the crash-avoiding or crash-outcome-minimising effects of IVSS. Only the equipped vehicles therefore influence the overall socio-economic impact. Secondly, some IVSS may need a certain equipment rate to fully exploit their potential benefits. Car-to-car communicating systems in particular need a minimum number of equipped cars for the technology to function correctly. To forecast market deployment, i.e. to calculate an equipment rate at a given point in time, the time of availability of an intelligent safety system has to be determined, the time of market introduction has to be assessed, and a probable path of diffusion into the market has to be decided upon.
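As a hedged sketch, such a diffusion forecast could use a logistic curve — a common assumption for technology diffusion, not the specific model of the study; the saturation level, speed and midpoint below are made-up parameters:

```python
import math

def equipment_rate(year, year_intro, saturation=0.9, speed=0.6, midpoint=8.0):
    """Share of the vehicle fleet equipped with an IVSS, modelled as
    logistic diffusion after market introduction (all parameter values
    are illustrative assumptions, not SEiSS figures)."""
    if year < year_intro:
        return 0.0
    t = year - year_intro
    return saturation / (1.0 + math.exp(-speed * (t - midpoint)))

# The rate reaches half the saturation level at the midpoint year:
print(equipment_rate(2016, 2008))  # -> 0.45
```

A minimum-penetration requirement for cooperative systems can then be expressed as a threshold on this rate before any benefit is counted.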
6
Traffic Influence and Socio-economic Evaluation
Considering the methodological framework, a widespread approach for assessing the potential socio-economic impact is the welfare-economics-based cost-benefit analysis. The favourability of intelligent vehicle safety systems from society's point of view can be illustrated by confronting the socio-economic benefits with the system costs (investment, operating and maintenance costs). Benefit-cost ratios of more than 1 indicate that the system deployment pays off for the public. The cost-benefit analysis consists of the following calculation procedure:
- analyse the impacts of each case by traffic and safety indicators such as traffic flow, vehicle speed, time gaps and headways,
- work out the physical dimensions of the traffic impacts, such as total transport time, fuel consumption, level of pollution and number of accidents, for the with-case and the without-case,
- calculate the benefits (= resource savings) by valuing the physical effects with cost-unit rates (steps 7 to 11),
- aggregate the benefits, determine the system costs (investment costs, maintenance costs, operating costs), and work out the benefit-cost ratios (steps 12 to 14).
It is necessary to specify the general framework conditions for the analysis and define the relevant alternatives that will be compared (without-case: IVSS is not used; with-case: IVSS will be used). Furthermore, the proposed methodology requires separate calculations for three different speed patterns covering urban, rural and highway traffic. The whole approach described so far has to be calculated separately for cars and heavy-duty vehicles. In addition, differences in market deployment, vehicle mileage, safety system relevance, accident paths, and cost figures call for separate calculations. The resulting benefit-cost ratios for cars and heavy-duty vehicles in urban, rural and highway traffic, brought together, become the overall benefit-cost ratio of a specific IVSS set-up. The calculations can be made for a worst-case and a best-case scenario, leading to a defined bandwidth of the benefit-cost ratio.
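The aggregation across cases (steps 12 to 14) can be sketched as below; the case names and all benefit and cost figures are purely hypothetical placeholders:

```python
def benefit_cost_ratio(cases):
    """Aggregate benefits (resource savings of the with-case vs. the
    without-case) and system costs over all cases, then form the ratio;
    a result above 1 indicates a socio-economically favourable deployment."""
    total_benefits = sum(c["benefits"] for c in cases)
    total_costs = sum(c["costs"] for c in cases)
    return total_benefits / total_costs

# Hypothetical annual figures (million EUR) for one IVSS set-up:
cases = [
    {"name": "cars/urban",   "benefits": 60.0, "costs": 40.0},
    {"name": "cars/rural",   "benefits": 45.0, "costs": 25.0},
    {"name": "cars/highway", "benefits": 35.0, "costs": 15.0},
]
print(benefit_cost_ratio(cases))  # -> 1.75
```

Running the same aggregation with worst-case and best-case inputs yields the bandwidth of the benefit-cost ratio mentioned above.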
7
Additional Considerations
So far, the introduced methodology follows a clear path towards a comprehensive approach that allows the integration of system interaction and different disciplinary views on the problem. For the further development of the model, however, it is essential to account for more detailed reflections in specific fields, such as:
Speed-independent correlations for system interaction. The most important speed-independent scenario is the “out of control” scenario. Skidding, for example, has a significant effect on accidents and therefore should be adequately taken into consideration. Other effects like human perception might play a less important role in the assessment of IVSS interaction. For these cases a specific handling has to be defined.
Minimum penetration rate for cooperative systems. Functions like hazard warning based on car-to-car communication need a sufficient number of systems in the market to work. This aspect can be integrated into the model via the market deployment considerations.
Non-safety effects of the introduction of IVSS. For systems like ACC, non-safety-relevant effects are predicted at strong market penetration. In this case traffic flow might decrease because of an increased safety margin. This effect would influence congestion, an important value in the calculation. Other important effects are energy consumption as well as pollution, because additional IVSS operations add to these values. The inclusion of this correlation is planned and is indicated by dotted green lines in figure 1.
External parameters. Political influence might change deployment patterns or even the capabilities of intelligent vehicle safety systems. It is not planned to include such scenarios in the model at this stage of development. Differentiation between member states might be done in detail and might be used for the calculation of scenarios.
8
Conclusion
The proposed methodology allows for a comparative assessment of the introduction of different IVSS and in addition provides an absolute idea of the related costs and benefits. The proposed assessment methodology aims at a better understanding of the impact of the introduction of intelligent vehicle safety systems. To contribute to the overall picture, different disciplines have to be combined; a proper understanding of technology, accident causation, statistics, marketing, and traffic influence is therefore needed. Because there are specialists in each of these fields, it is suggested to rely on available data such as professional forecasts. For specific areas, e.g. figures for the definition of accident probability, accident mitigation and accident severity, additional research has to be carried out. Anybody undertaking an impact analysis needs this kind of information and faces the lack of relevant data. For better comparability of the results of different investigations, common databases should be used. So far, socio-economic effects have been calculated for single technologies and functions. The proposed model describes the possibilities of a comprehensive approach that covers the interaction of different technologies and functions as well as the facets of a multi-disciplinary approach.
References
[1] EC (2003): Communication from the Commission to the Council and the European Parliament, Information and Communications Technologies for Safe and Intelligent Vehicles (SEC(2003) 963), http://europa.eu.int/information_society/activities/esafety/doc/esafety_communication/esafety_communication_vf_en.pdf
[2] OECD (2003): Road Safety. Impact of New Technologies, Paris 2003.
[3] PIARC (2000): ITS Handbook 2000, Committee on Intelligent Transport, PIARC Paris 2000.
Sven Krüger, Dr. Johannes Abele, Dr. Christiane Kerlen VDI/VDE Innovation + Technik GmbH Rheinstr. 10b 14513 Teltow Germany
[email protected] [email protected] [email protected] Prof. Dr. Herbert Baum, Dr. Thorsten Geißler, Dr. Wolfgang H. Schulz University of Cologne Institute for Transport Economics Universitätsstr. 22 50932 Cologne Germany
[email protected] [email protected] [email protected]
Safety
Special Cases of Lane Detection in Construction Areas
C. Rotaru, Th. Graf, Volkswagen AG
J. Zhang, University of Hamburg
Abstract
This paper presents several methods that treat special cases arising in lane marking detection in construction areas on both highways and country roads. The system complements the lane marking detection methods by treating the special case of temporary yellow markings that override the normal white markings. It uses both position and color to separate the valid markings from the former ones, which are left in place but carry no meaning for the driver.
1
Introduction
Areas of construction on public roads are a permanent source of traffic problems. The special marking of these areas, the smaller lane widths and, above all, the large quantity of outdated information (lane markings, traffic signs) without semantics raise problems that are not present elsewhere. For a driver assistance system, one important aspect is the ability to distinguish between important and meaningless information in such an area. Most European countries use yellow lane markings that are superimposed on the existing white lane markings in construction areas. Such conditions do not make a gray-level-based approach very useful, since both yellow and white convert to relatively high intensity values. This makes a reliable distinction between them difficult if not impossible. In such situations a common approach is to signal to the driver that an unknown situation was encountered and to give up, waiting for better environmental conditions. An approach based on color has a greater chance of interpreting the scene, but it still has to solve specific issues. This paper focuses on the specific handling of these situations. The lane detection problem is not covered here; the underlying lane detection algorithms are presented in [1]. Lane detection algorithms focus on the detection of lane markings by using certain image features. There are many approaches to lane marking detection (see e.g. [2-5]), but few attempts (see [6]) to make the distinction between yellow and white markings. Both the difficulty of working with color data and the
complexity of the scenes have drastically limited the number of algorithms that promise to assist the driver in construction areas. This paper tries to fill this gap by describing solutions for the most common problems encountered in such environments.
2
Assumptions about the Environment
Inconsistent lane markings (white markings that are in the right place may or may not be replaced by yellow markings), missing markings (yellow markings may not be applied at the outer edges of the road) or incomplete markings are only a few cases which show the complexity of the environment. This suggests that an approach based strictly on color would have too many limitations in the number of situations it can handle. One way to overcome the problems is to use additional information (for example the road limits in the image) and to make some assumptions that limit the complexity of the system. These assumptions are listed below:
- A yellow marking is applied between two traffic lanes (the most common situation). Situations in which yellow markings are on the side of the road and a white marking is present in the middle cannot be treated without using information on the lane size (i.e. a calibrated system, which is beyond the scope of this paper).
- The outer white markings (at the left and right sides of the road) can be valid even if there are yellow markings on the street. If there are yellow markings located close to them, the white markings should be dropped.
3
Software Implementation
3.1
System Overview
The system input is given by the road feature detection system presented in [1]. It consists of lane markings expressed in picture coordinates as groups of vertical segments, the vertical limits of the road surface, average (H)ue, (S)aturation and (I)ntensity values for the road surface, and the source H, S, I images obtained from the grabbed RGB image. The system output is given by a trust value attached to each detected marking (0 = not valid, 255 = completely trusted), a flag indicating the presence/absence of yellow markings and a flag showing the quality of the detection.
Fig. 1. Histogram of H, S, I components in the detected lane marking areas at day (left) and night (right). Up: only white markings. Down: both yellow and white markings.
3.2
Color Information
The system uses the HSI color representation. An analysis of the three components is done in order to decide what criteria can be used to separate the white markings from the yellow ones. In figure 1 two specific situations are presented. The two histograms in the left column have been obtained from data taken at daytime, before and within a construction area. The histograms in the right column have been obtained from data taken at night. In order to plot all three components (H, S, I) on the same histogram, the respective domains were remapped to the interval 0..255. Each of the components is analyzed below with its specific advantages and disadvantages: Hue: In practice, the values for the yellow colors associated with the markings depend on the hardware and software setup. Nevertheless, they can generally be distinguished from the values associated with the white markings. In both lower histograms one can observe the presence of a peak for hue components near the beginning of the hue interval. In our experiments the value given by the camera was close to orange. In all cases in which the yellow lanes are not present, the hue component for white mostly consists of noisy values.
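For reference, the RGB-to-HSI conversion can be sketched with the standard textbook formulas below; the exact conversion used by the authors' camera and software setup may differ, so this is only an illustrative implementation:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to (H in degrees, S, I in [0, 1]) using the
    standard HSI formulas; H is undefined for achromatic (gray) values
    and is returned as 0 in that case."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0.0 else 1.0 - min(r, g, b) / i
    if s == 0.0:
        return 0.0, 0.0, i  # achromatic: hue carries no information
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(255, 255, 0))  # pure yellow: hue near 60 degrees, S = 1
```

This also illustrates the S = 0 degeneracy for white discussed below: for ideal white the hue is mathematically undefined, while real cameras deliver small but non-zero saturations.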
In the HSI representation white should be represented as having S = 0, and accordingly the H component is invalid. It is not always possible to invalidate hue using the saturation information given by the RGB to HSI conversion because of the inherent acquisition noise (the color camera delivers no true grayscale values, i.e. with S = 0, but values in which S is small yet not negligible). Such H values proved to have little influence on the algorithm. The chosen solution was to use the H values without accounting for the saturation. In the lower histograms of figure 1 one can see the peak that characterizes the yellow markings. Its raw value may not always be high enough to count alone as a criterion for distinguishing between the markings; still, hue is valuable information. Saturation: Comparing the lower histograms with the upper ones, it becomes clear that saturation values above a specific threshold (empirically found to be about 10% of the maximum saturation) are observed only if there are yellow markings. The yellow markings give a footprint between 15% and 70% of the maximum saturation. In some particular cases this criterion is still too weak. When the yellow markings are shining due to strong sunlight, the footprint tends to be close to 15% and the white areas are somewhere below 10%. Intensity: Depending on the lighting of the scene and the camera setup, yellow and white markings result in very close intensity levels in the picture. Taking into account the acquisition noise, it is almost impossible to distinguish between the two intensity levels in almost all cases. An exception can be seen in the lower-left histogram. In this case yellow markings that are not highly reflective generate a second group of lower intensity values on the histogram. Since this information is not always accurate, the intensity information is not used at all in this approach. The system starts by building the histograms for hue and saturation.
The number of saturation values bigger than 15% of the maximum value is computed. If this number is significant, the yellow flag is set. If the saturation data lies too close to the threshold, the hue is analyzed as well. If no significant percentage of values lies between dark orange and yellow, the algorithm concludes that there are no yellow markings present. If yellow markings are found, the algorithm runs further and marks the white lanes as not trusted (based on their hue and saturation average values). At early stages of the development, direct labeling into yellow and white lanes was tried without first evaluating the presence or absence of yellow markings; it produced very noisy results and even fake yellow markings when the markings did not have a strong footprint in the picture. Since singular values are not accurate enough, this approach focuses on evaluating values from all lane markings present in the picture. If the lane marking detector has not detected enough yellow markings (for example because the markings are not continuous or they are old), the above-mentioned algorithm does not have enough data and will not be able to perform well. In such cases a more sensitive but still accurate measure for the presence of yellow markings in the picture is needed. The function should be able to recognize a yellow lane marking that is not expected to be long or to have a strong footprint in the picture. Since it complements the other method, it was designed to work especially well in those cases where the other one fails (when the major part of the detected markings were white). The chosen function is based on the weighted deviation of the lane marking H and S values from the average values over all lanes. This function performs very well if the number of segments belonging to yellow markings is less than 10% of the total number of segments. In these cases the saturation and hue of the yellow marking show a significant deviation from the averages. After an extra check that the lane marking color is close to yellow, the algorithm concludes that the lane marking is yellow and that the detection was not accurate enough.
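The two-stage decision described above might look like the following sketch. The fraction thresholds, deviation weights, outlier factor and the dark-orange..yellow hue band are illustrative assumptions, not the authors' tuned values:

```python
def yellow_flag(saturations, hues, s_max, s_thresh=0.15, frac=0.05,
                hue_lo=20.0, hue_hi=60.0):
    """Global test: set the 'yellow markings present' flag if enough
    marking pixels exceed 15% of the maximum saturation; otherwise fall
    back to counting hues in an assumed dark-orange..yellow band."""
    high_s = sum(1 for s in saturations if s > s_thresh * s_max)
    if high_s > frac * len(saturations):
        return True
    yellowish = sum(1 for h in hues if hue_lo <= h <= hue_hi)
    return yellowish > frac * len(hues)

def sense_yellow(markings, w_h=1.0, w_s=2.0, factor=3.0,
                 hue_lo=20.0, hue_hi=60.0):
    """Fallback for sparse yellow markings: flag markings whose weighted
    H/S deviation from the across-marking averages is an outlier and
    whose colour is close to yellow."""
    h_avg = sum(m["h"] for m in markings) / len(markings)
    s_avg = sum(m["s"] for m in markings) / len(markings)
    devs = [w_h * abs(m["h"] - h_avg) + w_s * abs(m["s"] - s_avg)
            for m in markings]
    mean_dev = sum(devs) / len(devs)
    return [m for m, d in zip(markings, devs)
            if d > factor * mean_dev and hue_lo <= m["h"] <= hue_hi]
```

A single yellow marking among many whites dominates the deviation statistic, which is exactly the sparse case the fallback is meant to catch.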
Fig. 2. Typical yellow markings at day (left) and night (right)
3.3
Position Information
One common situation that occurs in construction areas is illustrated in the right image of figure 2. The yellow markings are applied on the center and right sides of the road, but no marking is applied over the old white marking on the left. In such situations, dropping all white markings found in the picture means eliminating really valuable information. There are no color-based criteria that enable the distinction between the invalid white markings and the
valid ones. The only clue here is the position with respect to the yellow markings. Two approaches are presented here. The first one uses the results of the road detection algorithm, which returns the image coordinates to which the road extends. Typically these coincide with the last left/right marking. Accordingly, the first algorithm computes the offset between these extents and the positions of the white lane markings that were marked as invalid by the color separation algorithm. If the result is negative (the lane marking starts above the highest limits of the road), the lane marking is considered valid. There are also cases in which such an approach is inefficient. The right image of figure 2 is such an example: the outer right white marking is not valid, since its semantics is overridden by the traffic indicators.
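A minimal sketch of this first position check, assuming each marking carries a topmost image row (`top_y`) and the trust value described in section 3.1 — the field names and sample coordinates are hypothetical:

```python
def restore_road_limit_whites(white_markings, road_extent_y):
    """Restore trust for invalidated white markings whose topmost point
    lies above the detected road extent (image y grows downwards, so a
    negative offset means the marking starts above the extent)."""
    for m in white_markings:
        if m["trust"] == 0 and m["top_y"] - road_extent_y < 0:
            m["trust"] = 255  # the marking forms the road's lateral limit
    return white_markings

whites = [{"top_y": 120, "trust": 0}, {"top_y": 300, "trust": 0}]
restore_road_limit_whites(whites, road_extent_y=200)
print([m["trust"] for m in whites])  # -> [255, 0]
```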
Fig. 3. Diagram of the system
The second algorithm can only be used when at least two yellow lane markings were detected. It works by estimating the average distance between these markings as a first-degree function dx = ay + b, where dx is the relative distance in the picture (in pixels) and y is the vertical picture position. Using this “template” distance, it checks the distance to the closest yellow marking for all white markings. The difference is then compared to 40% of the minimum distance of the yellow markings at that y position in the picture. If it is smaller, the white marking is dropped. If not, the algorithm checks whether the white marking is surrounded by yellow markings; if it is, it is dropped. This approach avoids leaving detected white markings that lie in the middle of the lane (see left image of figure 2) as valid in the output set.
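The template-distance test might look like the following sketch — a plain least-squares fit of dx = a·y + b over (y, dx) samples taken from the yellow markings; the helper names and sample values are made up:

```python
def fit_template_distance(samples):
    """Least-squares fit of dx = a*y + b to (y, dx) samples, where dx is
    the horizontal distance between neighbouring yellow markings at
    image row y (picture coordinates, pixels)."""
    n = len(samples)
    sy = sum(y for y, _ in samples)
    sd = sum(dx for _, dx in samples)
    syy = sum(y * y for y, _ in samples)
    syd = sum(y * dx for y, dx in samples)
    a = (n * syd - sy * sd) / (n * syy - sy * sy)
    b = (sd - a * sy) / n
    return a, b

def keep_white(y, dist_to_yellow, a, b, frac=0.4):
    """Drop a white marking (return False) when it lies closer to a
    yellow marking than 40% of the template distance at that image row."""
    return dist_to_yellow >= frac * (a * y + b)

a, b = fit_template_distance([(100, 60), (200, 110), (300, 160)])
print(a, b)  # -> 0.5 10.0
```

With this fit, a white marking 30 px from the nearest yellow at row 200 (template distance 110 px, threshold 44 px) would be dropped, while one at 50 px would be kept.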
3.4
Merging Results
The connections between the algorithms are described below. The “yellow/white separation” algorithm refers to the algorithm described in section 3.2 for the global analysis of the lane markings; the “sense yellow marking” algorithm is the algorithm from the end of the same section, used for a deeper analysis of the cases in which the lane detection delivered minimal results. The “position on road” algorithm is the first one presented in section 3.3 and the “relative position” algorithm is the second one presented in the same section. The “yellow markings present” flag is obtained from the “yellow/white separation” algorithm. If it is false, the “sense yellow marking” algorithm is run to confirm the conclusion. The flag indicating poor detection quality is set to false by default and will only be set to true if the “sense yellow marking” algorithm ran and concluded that there was at least one yellow lane marking. In figure 3 the activity diagram of the system is presented. In short, it works as follows: first, the source lane markings are checked one by one by the “yellow/white separation” algorithm, which marks all markings that have low saturation and non-yellow hue averages as not trusted. The “position on road” algorithm then restores the trust for those markings that form the lateral limits of the detected road surface. Of these lane markings, the ones that are close to the yellow ones are invalidated by the “relative position” algorithm. This is the final data released at the output of the system covered in this paper.
4
Experimental Results
The system was tested in both highway and country road scenarios. It showed that most of the encountered situations can be successfully interpreted. The yellow marking detection is stable even in cases when only a few of the markings were present at the output of the lane detector. At night, due to the reflective nature of the yellow markings, the results are usually better than during the day. The worst cases were encountered shortly after rainfall, when the street surface was still covered with water that severely reduced the contrast of the markings. If old markings with lower reflectivity were present, the stability of the system was affected.
The algorithm runs in less than 4 ms (all operations described in the paper) on a mobile P4 at 1.7 GHz. This makes it suitable as part of a real-time system.
5
Conclusion & Future Work
Using color information to separate yellow from white markings, complemented by position information, proved to be an effective way of dealing with most situations in construction areas. Future work will include the detection of the specific traffic signaling elements (indicators) that are present in these areas, to enhance the detection in cases when no markings are present. Temporal sequence information is being examined as one way to improve the stability of the system.
Special Cases of Lane Detection in Construction Areas
References
[1] C. Rotaru, “Extracting road features from color images using a cognitive approach”. Submitted to IEEE Conference on Intelligent Vehicles, Parma, Italy, 2004
[2] K. C. Kluge, “Performance evaluation of vision-based lane sensing: some preliminary tools, metrics and results”. IEEE Conference on Intelligent Transportation Systems, 1997
[3] D. Pomerleau and T. Jochem, “Rapidly Adapting Machine Vision for Automated Vehicle Steering”. IEEE Expert, 1996, Vol. 11, pp. 109-114
[4] S. Lakshmanan and K. Kluge, “LOIS: A real-time lane detection algorithm”. Proceedings of the 30th Annual Conference on Information Sciences and Systems, 1996, pp. 1007-1012
[5] M. Bertozzi and A. Broggi, “A Parallel Real-Time Stereo System for Generic Obstacle and Lane Detection”. Parma University, 1997
[6] Toshio Ito and Kenichi Yamada, “Study of Color Image Processing Methods to Aid Understanding of the Running Environment”
Dipl.-Ing. Calin Augustin Rotaru, Dr. Thorsten Graf Group Research Electronics, Volkswagen AG Brieffach 1776/0, D-38436 Wolfsburg, Germany
[email protected] [email protected] Prof. Dr. Jianwei Zhang Fachbereich Informatik, AB TAMS Vogt-Kölln-Straße 30, D-22527 Hamburg, Germany
[email protected]
Keywords: color image processing, yellow lane markings, driver assistance, construction areas
Development of a Camera-Based Blind Spot Information System L.-P. Becker, A. Debski, D. Degenhardt, M. Hillenkamp, I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation mbH Abstract The development of a camera-based Blind Spot Information System (BLIS), starting with the functional requirements up to the final product design, will be described. The paper focuses on the illustration of the software system, while recognizing that different aspects of hardware architecture play an important role. Different constraints for the successful execution of such a project will be outlined. Finally, the capabilities of the driver assistant system will be demonstrated.
1
Introduction and Overview
The product development of a market-relevant camera-based driver assistant system is still a challenge. The hardware demands of price, design and performance seem to contradict the functionality requirements and the high performance needs of an image processing software system. Chapter 2 covers this issue in more detail. The authors intend to describe how that contradiction was resolved in the development of a Blind Spot Information System (BLIS). The development history from the specified functionality (chapter 2.4) and the selected hardware architecture (chapter 2.3) to the resulting software design (chapter 4) will be traced. Different constraints for the successful execution of such a project will be outlined, such as:
- a prototype-oriented development strategy, in order to meet and demonstrate customer requirements at every project stage (see chapter 3.2),
- generation of an appropriate video database representing different environmental conditions and driving situations (see chapter 3.3); this database is crucial for the reproducibility and proof of progress of the algorithms,
- set-up of different testing environments and testing strategies for verification and validation of the system under laboratory and field test conditions (see chapter 3.5).
Safety
In order to reduce development time, it is necessary on the one hand to use a flexible software development platform in the laboratory as well as in the field. On the other hand, due to hardware limitations, it is necessary to create a compact and simple system. Our preferred solution will be outlined in chapter 3 (Development Process Strategy). Examples of the quality of the driver assistant system will be demonstrated in chapter 5 based on a number of traffic scenarios.
2 Motivation
2.1 Aglaia GmbH – A Mobile Vision Company
Since this paper is based on the accumulated knowledge of Aglaia GmbH Berlin employees, the company will be briefly presented. Aglaia is an independent hardware and software development company for camera-based driver assistant systems. It was founded in 1998 by professionals in industrial and automotive real-time image processing applications. Today, Aglaia sells its own automotive products, like high-dynamic CMOS cameras (including stereo cameras), CAN/LIN-Bridges and special development and testing tools. The development of customer-specific prototypes up to systems ready for serial production is also an essential part of the business concept.
2.2 The Need for a Blind Spot Information System
The well-known blind spot, outside the peripheral vision of a driver, is responsible for an estimated 830,000 accidents per year in the United States, according to the National Highway Traffic Safety Administration [1]. As can be read in [2], one third of all accidents in Germany outside urban areas can be traced to a lane change. Unfortunately, no reliable statistics are available at the European level about this type of accident. It can be seen, however, that the European Commission is making great efforts to reduce accidents that can be traced to the blind spot. On November 10, 2003, the European Parliament and Council adopted a new directive (Directive 2003/97/EC) on rear-view mirrors and supplementary indirect vision systems for motor vehicles [3]. This directive will improve road user safety by upgrading the performance of rear-view mirrors and accelerating the introduction of new technologies that increase the field of indirect vision for drivers of passenger cars, buses and trucks.
Even with rear-view mirrors, there is always the risk of blind spots when driving a car, especially in situations where the driver is inattentive and starts a lane change manoeuvre anyway. Since drivers have more distractions than ever and roadways are getting more congested, it is quite difficult to be aware of the current traffic situation around one's own car at all times. The described situation is even worse with trucks due to their size and shape. In addition, motorists are getting older and are less able to turn their necks to observe their blind spots. In order to make driving more comfortable and, in particular, safer with respect to the blind spot, a camera-based driver assistant system is introduced. When another vehicle enters the monitored zone, the driver immediately gets a visual indication that another vehicle is in the adjacent lane beside his/her own car. This information gives the driver a better basis for making the right decision, especially in case of a lane change. Both sides of the car are monitored in the same way.
2.3 Hardware Requirements
One of the key points of such a product is the choice of the sensor. For the described Blind Spot System, a CMOS video camera is used on each side of the vehicle. Generally speaking, a video sensor has the following advantages in comparison to other sensors:
- The sensor is passive. Sun or headlight illumination is used – no electromagnetic emission, no legal restrictions.
- The infrastructure of the road environment is designed for human visual perception (lane markings, traffic signs, etc.). Thus, it is very suitable for visual processing.
- Systems can be designed with standard electronic components.
- The information in one image is abundant, and thus a great deal of information can be extracted in a short time.
- CMOS technology offers a high dynamic range. Thus the images have comparable quality independent of weather and lighting conditions. In addition, the sensor characteristics can be adjusted according to the functional requirements of the system.
- The sensor, together with the optics, is very compact and offers a small package size including the processing unit.
Due to the small package size, the complete system can be integrated into the mirror base. This installation is especially appropriate for the sensor, because vibration is minimal and that part of the mirror is always fixed and protected quite well from any sort of mechanical damage.
In order to minimize the size and hardware costs of such a product, the hardware performance, especially processor frequency and memory, is very limited. The important data:
- 200 MHz Texas Instruments floating point DSP
- 256 KBytes DSP cache (for code and data)
- 1 MByte flash RAM (for code and parameters only)
- No additional external RAM
As an important interface to the car, the BLIS is connected to a LIN-Bus and a bi-directional data exchange is performed. Data from other sensors of the vehicle, as well as data from the other BLIS module, are required in order to meet the functional requirements. In particular, the information of the wheel speed sensors is utilized. Figure 1 shows one of the early hardware versions.
Fig. 1. BLIS Hardware

2.4 Functional Requirements
The main functional requirements will be described in the following paragraphs. It has to be taken into account that all requirements apply to both BLIS systems, independent of the side on which they are installed. One's own car is called the subject vehicle, while all other vehicles are called object vehicles. Position and speed values for object vehicles are very important requirements. The system shall detect all vehicles entering the monitored detection zone with a certain relative (negative or positive) speed. That means the system must detect vehicles approaching from behind and vehicles sliding back when they are overtaken by the subject car. The monitored detection area is divided into a “must detect” zone and a “may detect” zone (see figure 2). If a vehicle is within the “must detect” area and fulfils all warning requirements, BLIS must issue a warning on the appropriate side.
Fig. 2. Detection Zones
The system should be designed in such a way that it is able to detect passenger cars, trucks (also with trailers), buses and motorbikes, in both daylight and darkness; bicycles may be detected. At the same time, it should not react to parked vehicles, roadside fences, crash barriers, lampposts and so on. It is required that the BLIS system also works in bad visibility conditions (e.g. bad weather). If this is not possible, the system should inform the driver of this fact. The system shall also detect situations in which the optical path is blocked, e.g. by dirt, ice and so on. The system must detect relevant vehicles very quickly in order to inform the driver without significant delay, which makes a processing speed of 24 frames per second necessary. Since the system is installed for the lifetime of the vehicle, it shall deliver the required performance all the time, independent of services at a garage (e.g. dismantling of the complete mirror) or full load situations. The Blind Spot Information System shall work on motorways, rural roads and urban streets as well.
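The warning decision that these requirements imply can be sketched as a simple check of position and relative speed. The zone boundaries and speed limits below are purely illustrative assumptions, not values from the BLIS specification:

```python
def should_warn(lateral_m, longitudinal_m, rel_speed_kmh,
                must_zone=((0.5, 3.5), (-8.0, 0.0)),
                rel_speed_range=(-30.0, 40.0)):
    """Hypothetical warning test: True if an object vehicle lies inside
    the "must detect" zone and its relative speed (negative = sliding
    back, positive = approaching) is within the warning range."""
    (x_min, x_max), (y_min, y_max) = must_zone
    in_zone = x_min <= lateral_m <= x_max and y_min <= longitudinal_m <= y_max
    lo, hi = rel_speed_range
    return in_zone and lo <= rel_speed_kmh <= hi
```

A vehicle well inside the adjacent lane with moderate relative speed would trigger the warning, while one two lanes over would not.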
3 Development Process Strategy

3.1 Overview
As can be seen in figure 3, a prototype-oriented software development process is used. The parts marked yellow will be described in more detail in the following chapters. The division into prototype and firmware/framework ensures a productive development process without dealing with the restrictions of the target hardware in terms of computing power, programming environment and availability. Therefore, the target hardware can be developed simultaneously and is introduced into the project at a later time. The prototype development itself will be described in chapter 3.2. The video database represents different environmental conditions and driving situations (see chapter 3.3). This database is crucial for the reproducibility and the proof of progress of the algorithms. Based on that database, the test system evaluates the prototype and should reflect how well the algorithms fulfil the specifications (see chapter 3.5). As soon as the target hardware is available, the prototype code can be ported onto the target (firmware code), while the “operating system”, the so-called framework, has to be implemented exclusively for the hardware. The porting process is described in chapter 3.4.
Fig. 3. Strategy Overview

3.2 Prototype-Oriented Software Development Process
The prototype development process consists of several successive phases (see figure 4): mission definition (goal and scope of the project), feature set definition (features for the next prototype version/milestone), (re)design, implementation, evaluation (verification of functionality with the test system, estimation of processing and memory consumption) and an optional customer review. This is an iterative process, whereby the feature set can be refined and enriched in every cycle according to the specification. At every milestone the prototype meets the defined feature set for that development phase, which can be demonstrated either with a test vehicle or in the laboratory.
Fig. 4. Prototype Development
In order to implement the design of the system, which follows the specified functionality, a rapid development tool is used. At Aglaia GmbH, a special automotive software development platform (Cassandra®) is the basis of nearly all developments. The test system is also an integral part of that platform. Thus, it is possible to evaluate the algorithms easily and adapt them quickly, if necessary. The platform can be used very flexibly, without significant modifications of the configuration, on the test vehicle for field tests as well as in the laboratory with video sequences from the database.
3.3 Database
The video database consists of two parts. One part reflects the requirements and thus the typical driving behaviour of an average driver. In the case of BLIS, the database contains motorway, rural and city driving situations to an equal degree, during both daytime and nighttime. This part of the database is called the test database. A second part of the database can be used for the software development. Especially difficult traffic scenarios, which are very challenging for the algorithms, can be represented more strongly in this part than other scenes. This second part of the database is called the development database.
The development progress can be measured based on the development database, while the test database ensures that the progress doesn't have negative side effects on the overall performance. In order to precisely reproduce and forecast the behaviour of the system on the target, all additional information that will be used later on the final hardware must be recorded with the correct timing behaviour, like other sensor signals from the LIN-Bus, all camera images without any loss etc. This is done with the Aglaia Drive Recorder software, also based on Cassandra®. For specific traffic scenarios, the real-world video database is not sufficient. In these cases, artificial computer-generated traffic scenarios can be used. All information of the scene is automatically available (e.g. for later tests, see chapter 3.5). Also, with simple changes in the configuration and parameter set, a lot of different situations can be created. Only a certain fraction of the database should consist of simulated scenes, because they can model real scenarios only to a certain extent. Figure 5 shows an image out of a nighttime simulation.
Fig. 5. Nighttime Simulation
The database can also be used to support and test the porting process, as described in the following chapter.
3.4 Porting Process
As soon as the prototype meets the system requirements and the hardware is available, the solution can be ported. This includes a code and design review, a redesign (if necessary) and a re-implementation of the prototype units as firmware units. The correctness of the firmware unit code is proven by direct comparison with the prototype unit response to an identical input. In order to simplify the debug process, the framework can be emulated together with the firmware on a PC. In the final step, the correctness can be proven on the target. Special equipment and software ensures that the input is identical to the prototype input. The framework implements basic functionality on the target, like I/O from camera and LIN-Bus, interrupt handling, task scheduling, system boot procedures, dynamic code administration and so on. This part is usually reusable for other applications on the same hardware.
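The unit-by-unit correctness check can be sketched as a small comparison harness. The prototype and firmware stand-ins below are invented for illustration (a fixed-point port of a floating-point prototype, where a small tolerance absorbs the quantization error):

```python
def verify_port(prototype_fn, firmware_fn, inputs, tol=0.0):
    """Run both implementations on identical inputs and collect
    (index, prototype_result, firmware_result) for every deviation
    larger than tol."""
    mismatches = []
    for i, x in enumerate(inputs):
        p, f = prototype_fn(x), firmware_fn(x)
        if abs(p - f) > tol:
            mismatches.append((i, p, f))
    return mismatches

# toy stand-ins: a float prototype and its fixed-point firmware port
prototype_gain = lambda v: v * 0.5
firmware_gain = lambda v: int(v * 1024) / 2048  # Q10 fixed-point halving
```

An empty mismatch list proves the port bit-faithful within the tolerance; a non-empty list pinpoints the inputs that need a closer look.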
3.5 Test System
The test system has the following functions within the project:
- It measures the software development progress.
- It verifies the functionality with respect to the specification.
- It can be used for an automated parameter optimization.
The Aglaia test system is implemented in such a way that it is able to process the video database automatically and log the results for later evaluations. A test system needs nominal values. These values represent the information of an independent (sensor) measurement of a certain video image. More precisely, it is required that the exact position of every vehicle in the scene be known to the test system. Only with that information is it possible to evaluate the prototype functionality. In the case of BLIS, a manual evaluation of the test database and also of parts of the development database was performed. Special tools are used in order to automate that task.
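A sketch of such an automated evaluation against manually labelled nominal values (positions simplified to 1-D longitudinal offsets; the matching tolerance is an assumption):

```python
def evaluate(detections, ground_truth, max_dist=1.0):
    """Compare per-frame detected vehicle positions against labelled
    nominal values; returns (true_positives, false_positives, misses)."""
    tp = fp = miss = 0
    for det_frame, gt_frame in zip(detections, ground_truth):
        unmatched = list(gt_frame)
        for d in det_frame:
            match = next((g for g in unmatched if abs(d - g) <= max_dist), None)
            if match is None:
                fp += 1                 # detection with no labelled vehicle
            else:
                unmatched.remove(match)
                tp += 1                 # detection confirmed by a label
        miss += len(unmatched)          # labelled vehicles not detected
    return tp, fp, miss
```

Logged over the whole test database, these three counters give exactly the kind of progress measure the chapter describes.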
4 Software Design

4.1 Design Overview
Due to the hardware architecture, only 256 KBytes of cache are available for runtime code and data. Therefore, only a few lines of the whole image frame can be stored at a time. In addition, the variety of possible image content is almost infinite because of the camera-based approach that deals with highly varying environmental conditions. Thus, a central feature of the design concept is an efficient model of a complex environment at every processing step. The amount of data must be reduced at an early stage without loss of crucial information, which is quite challenging.
Figure 6 shows a schematic view of the designed system. On the interface side, 12-bit video images from the CMOS camera are processed, along with the wheel speed data from the LIN-Bus. Depending upon the lighting conditions and thus the image brightness, the appropriate subsystem (day or night) processes the video images. As a result, vehicle hypotheses are produced, with position and speed calculated for each vehicle. A post-processing system (for day and night) assigns each hypothesis to an already tracked vehicle or creates a new one. Finally, the information concerning speed and position is evaluated with respect to the detection area. If all warning criteria are fulfilled, a warning is issued. Thus, every image is subsequently reduced down to a single piece of information – LED on or off.
Fig. 6. Software Design
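The overall reduction from image to LED state can be sketched as a per-frame pipeline. The brightness threshold, the stand-in subsystems and the danger threshold are all invented for illustration:

```python
def blis_step(pixels, tracker, day_threshold=50.0, danger_threshold=0.7):
    """One processing cycle: pick the day or night subsystem by mean
    image brightness, track the resulting vehicle hypotheses, and
    reduce everything to a single boolean - LED on or off."""
    brightness = sum(pixels) / len(pixels)
    subsystem = day_subsystem if brightness >= day_threshold else night_subsystem
    hypotheses = subsystem(pixels)            # [(position, probability), ...]
    tracked = tracker(hypotheses)
    danger = max((p for _, p in tracked), default=0.0)
    return danger > danger_threshold          # LED state

# toy stand-ins for the real processing units
day_subsystem = lambda pixels: [((2.0, -3.0), 0.9)]
night_subsystem = lambda pixels: []
passthrough_tracker = lambda hyps: hyps
```

The real units are far more involved, but the shape of the data flow — subsystem selection, hypotheses, tracking, one-bit output — is the same.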
4.2 Day System Design
In order to fulfil the requirements, it is very important to estimate the speed and position of every vehicle as precisely as possible. Moreover, the system shall detect every sort of vehicle down to the size of a very small car and even a motorbike. As mentioned in chapter 2.3, the system uses a monocular camera. In comparison to stereo cameras, it is not possible to estimate the real position of a vehicle on the street from a single image. Therefore, the well-known stereo from motion approach is used: two successive images are used as if they were a stereo image pair. Since the car is moving between the images, the so-called ego motion of the car has to be taken into account. This is a combination of the speed and the yaw rate of the vehicle. As a first step, the Feature Extraction unit provides special image features for the subsequent processing steps. The features are dynamically extracted within every image and matched between two successive images. Based on the
known position and alignment of the camera (external calibration), in conjunction with the parameters for the optical attributes of the camera (internal calibration), it is possible to estimate the so-called time to contact of each feature with the camera plane, together with a motion vector. The next processing unit clusters features into groups which represent vehicles. Only those features are considered which fulfil certain criteria, like position, speed, reliability etc. The clustering step works independently of previous results and generates vehicle hypotheses for each image. Several plausibility tests are applied to each hypothesis. Since the motion flow is dense enough, the position on the street can be estimated by triangulation for each vehicle. Based on the time to contact, the speed can also be estimated. Finally, a probability for each hypothesis is estimated based on several properties.
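The time-to-contact idea can be illustrated with the standard expansion-rate formulation: if the image-plane distance between two features on the same object grows by a factor s between frames, the time to contact is dt/(s-1). This is only the textbook relation; the real system additionally folds in ego motion and camera calibration:

```python
def time_to_contact(d_prev, d_curr, dt):
    """Time to contact (s) from the change of an image-plane distance d
    between two successive frames taken dt seconds apart."""
    s = d_curr / d_prev        # image-scale change between the frames
    if s <= 1.0:
        return float('inf')    # object is not approaching
    return dt / (s - 1.0)
```

For an object at distance Z closing at speed v, the scale change between frames is Z/(Z - v*dt), and the formula recovers exactly (Z - v*dt)/v — the remaining time to contact at the second frame.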
4.3 Night System Design
Especially on motorways, lampposts are rarely seen. That is why the only reliable information available at night is the headlights of the vehicles. A special Headlight Detection unit extracts so-called blobs and measures the size and position of each light. In the next step, lights are grouped into pairs in order to represent passenger cars or trucks. Based on this information, the distance can be estimated. It is assumed that all blobs that are not grouped belong to a motorbike. In order to estimate the distance of the bike, other information has to be taken into account. The described processing step works independently of previous results and generates vehicle hypotheses for each image. Finally, a probability for each hypothesis is estimated based on several properties.
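A minimal sketch of this blob-pairing step, with blobs as (x, y, size) tuples in pixels. The tolerances, focal length and the assumed headlight spacing are illustrative — the paper gives no concrete values:

```python
def pair_headlights(blobs, y_tol=5.0, size_ratio=1.5):
    """Greedily group light blobs into left/right headlight pairs;
    blobs left over are treated as motorbike candidates."""
    pairs, singles = [], []
    remaining = sorted(blobs, key=lambda b: b[0])   # left to right
    while remaining:
        b = remaining.pop(0)
        match = next((c for c in remaining
                      if abs(c[1] - b[1]) <= y_tol
                      and max(c[2], b[2]) <= size_ratio * min(c[2], b[2])), None)
        if match is None:
            singles.append(b)
        else:
            remaining.remove(match)
            pairs.append((b, match))
    return pairs, singles

def distance_from_pair(pair, focal_px=800.0, headlight_spacing_m=1.5):
    """Range estimate from the pixel separation of a headlight pair,
    assuming a typical real-world spacing."""
    (x1, _, _), (x2, _, _) = pair
    return focal_px * headlight_spacing_m / abs(x2 - x1)
```

Two blobs at similar height and size become one car hypothesis whose distance follows from the pinhole model; an isolated blob becomes a motorbike candidate whose distance needs other cues, just as the text states.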
4.4 Post-Processing Components
In this chapter, the most important post-processing units will be described briefly. One of them is the Tracking unit. As mentioned in the context of the day and night subsystems, vehicle hypotheses are provided as input to the Tracking unit. The main function of this unit is to assign each new hypothesis to an existing, already tracked vehicle or to create a new one. The longer a vehicle is tracked and confirmed by new hypotheses, the higher its associated probability. With every update of a tracked object, the position and speed are updated based on a recursive estimation filter. Since the hardware provides fixed timing and processing of images, the position of every vehicle can be predicted for the next frame.
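The paper only states that a recursive estimation filter updates position and speed; an alpha-beta filter is one minimal instance of such a filter (the gains here are illustrative):

```python
def alpha_beta_update(state, measured_pos, dt, alpha=0.5, beta=0.1):
    """One predict/correct cycle for a tracked vehicle: predict the
    position for the current frame from the previous state, then blend
    in the new position hypothesis."""
    pos, vel = state
    predicted = pos + vel * dt          # prediction for this frame
    residual = measured_pos - predicted # innovation from the new hypothesis
    return (predicted + alpha * residual,
            vel + beta * residual / dt)
```

The prediction step is exactly what the fixed frame timing enables: with dt known in advance, each track's position for the next frame is available before the frame arrives.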
A final processing unit filters all vehicles that fulfil the requirements concerning speed and position. A global danger level is updated based on the probabilities of each tracked vehicle. This (danger) level reflects a general probability for the existence of a car within the blind spot. If the level is above a certain threshold, a warning is issued and indicated.
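One plausible way to combine per-track probabilities into such a global danger level (the paper does not give the exact rule) is the probability that at least one tracked vehicle really is in the blind spot:

```python
def danger_level(track_probs):
    """P(at least one vehicle in the blind spot) from independent
    per-track existence probabilities."""
    none_present = 1.0
    for p in track_probs:
        none_present *= (1.0 - p)
    return 1.0 - none_present

def led_on(track_probs, threshold=0.8):
    """Issue a warning when the danger level exceeds the threshold
    (threshold illustrative)."""
    return danger_level(track_probs) > threshold
```

Under this rule, two uncertain tracks can jointly push the level over the threshold even though neither would alone, which matches the idea of a global level fed by all tracked vehicles.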
4.5 System Robustness
Additional components ensure the correct functionality of the system. Some important ones will be described in the following. As mentioned in the previous chapters, the quality of the vehicle detection depends on the calibration of the camera. If the car is serviced in a garage, in the case of a fully loaded car, or because of changes due to the aging of the vehicle, the external calibration in particular might change significantly. In order to make sure that the mentioned scenarios don't affect the functionality, a continuous, dynamic re-calibration of the camera is performed during daytime and nighttime. Since the wheel speeds are processed in order to estimate the ego motion, the plausibility of this data is also constantly checked. As a result, transmission failures on the LIN-Bus will not affect the calculation. Also, some major defects in the wheel speed sensors, or even very different tire pressures, which can influence the yaw rate calculation, can be detected. If the sensor signals are no longer plausible, an appropriate message is presented to the driver. Another unit observes the image quality. For example, the quality can decrease due to bad visibility or because of a blocked sensor (e.g. a lens covered with dirt or ice or even completely clogged). If this unit calculates a high probability for one of the mentioned situations, an appropriate message is presented to the driver. As long as the image quality of only one camera is used, the information is ambiguous. Reference measurements of other sensors can improve the results significantly. That is why the image quality results of the camera on the other side are taken into account. But even then, some problem situations cannot be clearly determined. Future developments, especially concerning hardware, can improve the described self-diagnostic function.
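The wheel-speed plausibility check can be sketched as a cross-check of the yaw rate implied by each axle. The track width and tolerance are assumptions, and the real check is certainly more elaborate, but the principle — two redundant estimates that should agree — is the same:

```python
def yaw_rate_from_axle(v_left, v_right, track_width_m=1.5):
    """Yaw rate (rad/s) implied by the wheel-speed difference (m/s)
    across one axle."""
    return (v_right - v_left) / track_width_m

def wheel_speeds_plausible(front_lr, rear_lr, tol=0.05):
    """Flag wheel-speed data as implausible when front and rear axles
    imply clearly different yaw rates - a hint at a defective sensor
    or strongly differing tire pressures."""
    return abs(yaw_rate_from_axle(*front_lr) - yaw_rate_from_axle(*rear_lr)) <= tol
```

A single corrupted LIN frame or one stuck sensor shows up as a disagreement between the axles and can be rejected before it distorts the ego-motion estimate.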
5 Results
A bounding box marks the estimated position of a vehicle. The red and blue areas indicate the detection zones (see figure 2). The red circle on the lower left shows the warning condition.
Fig. 7. Day Heavy Rain
Fig. 8. Day Sunset
Fig. 9. Day Sunny
Fig. 10. Day Motorbike
Fig. 11. Day Snow with Sun-Reflections
Fig. 12. Day Tunnel-Entry
Fig. 13. Night Motorbike (Brightened)
Fig. 14. Night City-Lights (Brightened)
Fig. 15. Night City-Lights (Brightened)
Fig. 16. Night Tunnel (Brightened)
References
[1] Brett Clanton, Detroit News, from http://www.usatoday.com, 2004
[2] Jan C. Egelhaaf, Peter M. Knoll, Night Vision. Innovative Fahrerassistenzsysteme (IIR), 2003
[3] European Energy and Transport Forum, Road Safety, from http://europa.eu.int, 2004
Dipl.-Inform. Lars-Peter Becker
Tiniusstr. 12-15, 13089 Berlin
Germany
[email protected]

Keywords: driver assistant system, blind spot detection, image processing, camera-based, video-based, headlight detection, stereo from motion
Predictive Safety Systems – Steps Towards Collision Avoidance and Collision Mitigation

P. M. Knoll, B.-J. Schäfer, Robert Bosch GmbH

Abstract

Sensors to detect the vehicle environment are already being used today. Ultrasonic parking aids meanwhile have a high customer acceptance, and ACC (Adaptive Cruise Control) systems have recently been introduced in the market. New sensors are being developed at a rapid pace. On their basis, new functions are quickly implemented because of their importance for safety and convenience. Upon availability of high-dynamic CMOS imager chips, Video cameras will be introduced in vehicles. A computer platform with picture processing capability will exploit the high potential of new functions. Finally, sensor data fusion will significantly improve the performance of the systems. During the “PROMETHEUS” project at the end of the 1980s, the electronic components necessary for these systems – highly sensitive sensors and extremely efficient microprocessors – were not yet ready for high-volume series production and automotive applications. Now they are available.
1 Introduction
Almost every minute, on average, a person dies in or because of a crash. In 2000, more than 90,000 persons were killed in road traffic accidents in the Triad (Europe, USA and Japan), leading to a socioeconomic damage of more than 400 billion EUR. As a consequence, the EU Commission has defined with its e-Safety program the demanding goal of cutting the number of persons killed in half by the year 2010. Bosch wants to contribute significantly to this goal by developing Driver Assistance Systems in close cooperation with the OEMs and thus reduce the frequency and the severity of road accidents. In critical driving situations, only a fraction of a second may determine whether an accident occurs or not. Studies [1] indicate that about 60 percent of front-end crashes and almost one third of head-on collisions would not occur
if the driver could react one half second earlier. Every second accident at intersections could be prevented by faster reactions. An important aspect of developing active and passive safety systems is, therefore, the capability of the vehicle to perceive and interpret its environment by using appropriate sensors, to recognize and interpret dangerous situations, and to support the driver and his driving maneuvers in the best possible way. Microsystems technology plays an important role in the introduction of active safety systems. The sensor technologies are manifold: Ultrasonic, Radar, Lidar and Video sensors all contribute to gaining relevant and reliable data of the vehicle's surroundings. Sensor technology and sensor data processing, sensor data fusion and appropriate algorithms for function development allow the realization of functions for accident avoidance and mitigation.
2 Traffic Accidents – Causes and Means to Mitigate or to Avoid Them
Only recently, statistical material has been published [2] showing that the accident probability for vehicles equipped with the ESP system (ESP = Electronic Stability Program) is significantly lower than for vehicles without ESP. Additional improvement is expected from systems like PRE-SAFE. It combines active and passive safety by recognizing critical driving situations with increased accident probability. By evaluating the sensors of the ESP and the Brake Assist, it triggers preventive measures to prepare the occupants and the vehicle for a possible crash. To best protect the passengers from a potential accident, reversible belt pretensioners for occupant fixation, passenger seat positioning and sunroof closure are activated. As with the intervention of ESP in the vehicle dynamics, the release of collision mitigation means can be activated only when a vehicle parameter goes out of control or when an accident happens. Today, airbags are activated the moment sensors detect the impact. Typical reaction times are about 5 ms. In spite of the extremely short time available for the release of accident mitigation means, there is no doubt that airbags have contributed significantly to the mitigation of road accidents and, in particular, fatalities. But due to the extremely short time between the start of the event and the possible reaction of a system, the potential of today's systems is limited. This high accident avoidance potential can be transferred to an even higher extent to “predictive” driver assistance systems. They expand the detection
range of the vehicle by the use of surround sensors. With the signals of these sensors, objects and situations in the vicinity of the vehicle can be included in the calculation of collision mitigating and collision avoiding means.
3 Components of Predictive Driver Assistance Systems
Making use of the electronic surround vision, many driver assistance systems can be realized. Today, the components for the realization of these systems – highly sensitive sensors and powerful microprocessors – are available or under development with a realistic time schedule, and the realization of the “sensitive” automobile is fast approaching. Soon sensors will scan the environment around the vehicle, derive warnings from the detected objects, and perform driving maneuvers, all in a split second and faster than the most skilled driver. Electronic surround sensing is the basis for numerous driver assistance systems – systems that warn or actively intervene. Figure 1 shows the detection areas of different sensor types.
Fig. 1. Surround sensing: Detection fields of different sensors
An early warning allows an earlier reaction of the driver. Active driver assistance systems with vehicle interaction allow a vehicle reaction which is quicker than the normal reaction of the driver. The following sensors are available or under development.
3.1 Ultrasonic Sensors
Reversing and parking aids today use ultra short range sensors in ultrasonic technology. Figure 2 shows an ultrasonic sensor of the 4th generation. The driving and the signal processing circuitry are integrated in the sensor housing. The sensors have a detection range of approx. 3 m.
Fig. 2. Ultrasonic sensor, 4th generation
Ultrasonic parking aid systems have gained high acceptance with customers and are found in many vehicles. The sensors are mounted in the bumper fascia. When approaching an obstacle, the driver receives an acoustical and/or optical warning.
3.2 Long Range Radar 77 GHz

The 2nd generation long range sensor with a range of approx. 200 m is based on FMCW Radar technology. The narrow lobe with an opening angle of ±8° detects obstacles in front of the own vehicle and measures the distance to vehicles ahead. The CPU is integrated in the sensor housing. The sensor is multi-target capable and can measure distance and relative speed simultaneously. The angular resolution is derived from the signals of 4 Radar lobes. Series introduction was made in 2001 with the first generation. Figure 3 shows the 2nd generation sensor. It will be introduced into the market in March 2004. At that time this sensor & control unit will be the smallest and lightest of its kind on the market. The antenna window for the mm-waves is a lens of plastic material which can be heated to increase availability during the winter season. The unit is mounted in the air cooling slots of the vehicle front end or behind plastic bumper material by means of a model-specific bracket. Three screws enable the alignment in production and in service. Figure 3 shows the long range Radar sensor.
Fig. 3. 77 GHz Radar sensor with integrated CPU for Adaptive Cruise Control
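The simultaneous distance and relative-speed measurement is the textbook FMCW evaluation: the beat frequencies of an up-chirp and a down-chirp are combined to separate the range and Doppler contributions. The sketch below assumes idealized single-target beat frequencies; the real sensor resolves multiple targets across its four lobes:

```python
C = 3.0e8  # speed of light in m/s

def fmcw_range_speed(f_up, f_down, bandwidth_hz, chirp_time_s, carrier_hz):
    """Recover range (m) and closing speed (m/s, positive = approaching)
    from the up- and down-chirp beat frequencies of one target."""
    f_range = (f_up + f_down) / 2.0     # range-induced beat component
    f_doppler = (f_down - f_up) / 2.0   # Doppler-induced component
    rng = f_range * C * chirp_time_s / (2.0 * bandwidth_hz)
    speed = f_doppler * C / (2.0 * carrier_hz)
    return rng, speed
```

Feeding in the beat frequencies synthesized for a target at 100 m closing at 10 m/s (with an assumed 200 MHz sweep over 1 ms at a 77 GHz carrier) recovers exactly those values, which is why a single chirp pair suffices for both measurements.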
The information of this sensor is used to realize the ACC function (Adaptive Cruise Control). The system warns the driver against following too closely or automatically keeps a safe distance to the vehicle ahead. The set cruise speed and the safety distance are controlled by activating the brake or the accelerator. At speeds below 30 km/h the system switches off with an appropriate warning signal to the driver. In the future, additional sensors (Video, short range sensors) will be introduced in vehicles. They allow a plurality of new functions.
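The described behaviour — holding a set speed but yielding to a safe following distance when a vehicle is ahead — can be sketched as a simple controller. The time gap, gains and acceleration limits are illustrative assumptions, not Bosch values:

```python
def acc_command(own_speed, set_speed, lead_speed=None, gap_m=None,
                time_gap_s=1.8, k_gap=0.2, k_speed=0.5):
    """Commanded acceleration in m/s^2: plain cruise control without a
    lead vehicle, distance regulation otherwise; clamped to comfortable
    braking/acceleration limits."""
    if lead_speed is None:
        accel = k_speed * (set_speed - own_speed)
    else:
        desired_gap = own_speed * time_gap_s      # time-gap based safe distance
        accel = k_gap * (gap_m - desired_gap) + k_speed * (lead_speed - own_speed)
    return max(-3.0, min(1.5, accel))
```

With a free road, the controller accelerates toward the set speed; closing on a slower lead vehicle with too small a gap, it commands braking up to the clamp.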
3.3 Short Range Sensors

Besides ultrasonic sensors, 24 GHz radar sensors (Short Range Radar (SRR) sensors) or Lidar sensors can be used in future systems to build a “virtual safety belt” around the car with a detection range between 2 and 20 m, depending on the specific demands on the function performance. Objects are detected within this belt, their relative speeds to the own vehicle are calculated, and warnings to the driver or vehicle interactions can be derived. The release for 24 GHz UWB (Ultra Wide Band) was given in 2002 for the USA. In Europe it has been released by the ECC with some restrictions (use only until mid-2013, with deactivation in the vicinity of radio astronomy sites). As a substitute after the sunset date of this frequency, a new UWB band between 77 and 81 GHz has been released. The SARA consortium is working on a worldwide harmonization of these frequency bands to ensure a widespread application of these components.
3.4 Video Sensor

Figure 4 shows the current setup of the Robert Bosch camera module. The camera head is fixed on a small PC board with camera-relevant electronics. On the rear side of the camera board, the plug for the video cable is mounted. The whole unit is shifted into a windshield-mounted adapter.
Fig. 4.
Video camera module
CMOS technology with non-linear luminance conversion covers a wide luminance dynamic range and will significantly outperform current CCD cameras. Since the brightness of the scene cannot be controlled in the automotive environment, imagers with a very high dynamic range are needed. Due to the high information content of a video picture, video technology has the highest potential for future functions. These can be realized on the video sensor alone, or video signals can be fused with radar or ultrasonic signals. Regarding sensor technology, all aspects of highly sophisticated microsystems technology are covered by these surround sensors. Sensor performance is still at an early stage, and the cost of the components is still too high to allow widespread application. There is a huge potential for sensor performance improvement and cost reduction by introducing new microsystems technologies.
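The gain of non-linear conversion can be sketched numerically. The snippet below compresses roughly seven decades of luminance into an 8-bit code with a logarithmic response; the curve and the luminance limits are illustrative assumptions, not the actual imager characteristic.

```python
import math

def log_compress(luminance_cd_m2, l_min=0.01, l_max=100_000.0):
    """Map a wide luminance range (~7 decades) onto an 8-bit code
    with a logarithmic response, as HDR CMOS imagers approximate."""
    l = min(max(luminance_cd_m2, l_min), l_max)
    frac = math.log(l / l_min) / math.log(l_max / l_min)
    return round(255 * frac)

# A linear 8-bit sensor exposed for daylight would clip both ends of
# this range; the log response keeps shadow and highlight detail
# distinguishable across headlights, tunnels and bright sky.
for lum in (0.1, 10.0, 1_000.0, 100_000.0):
    print(lum, "->", log_compress(lum))
```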
4
Driver Assistance Systems for Convenience and for Safety
Figure 5 shows the enormous range of driver assistance systems on the way to the "safety vehicle". They can be subdivided into two categories: convenience systems, with the goal of semi-autonomous driving, and safety systems, with the goal of collision mitigation and collision avoidance.
Driver support systems without active vehicle intervention can be viewed as a preliminary stage to vehicle guidance. They warn the driver or suggest a driving maneuver. One example is the Bosch parking assistant, which gives the driver steering recommendations in order to park optimally in a parking space.
Fig. 5.
Driver assistance systems on the way to the safety vehicle
Another example is the night vision improvement system. As more than 40% of all fatalities occur at night, this function has a high potential for saving lives. Lane departure warning systems can also contribute significantly to the reduction of accidents, as almost 40% of all accidents are due to unintended lane departure. ACC, which was introduced to the market a few years ago, belongs to the group of active convenience systems and will be developed further towards better functionality. If longitudinal guidance is augmented by lane-keeping assistance (a video-based system for lateral guidance) and complex sensor data fusion algorithms are used, automatic driving is possible in principle. Passive safety systems comprise the predictive recognition of potential accidents and pedestrian protection functions. The highest demands regarding performance and reliability are put on active safety systems. They range from a simple parking stop, which automatically brakes the vehicle before reaching an obstacle, to Predictive Safety Systems (PSS).
4.1 Adaptive Cruise Control (ACC)
Figure 6 shows the basic function of the ACC system. With no vehicle in front, or with a vehicle at a safe distance ahead, the host vehicle cruises at the speed set by the driver (figure 6, top). If a vehicle is detected, ACC automatically adapts the speed by interaction with brake and accelerator such that the safety distance is maintained (figure 6, middle). In case of a rapid approach towards the vehicle in front, the system additionally warns the driver. If the car in front leaves the lane, the host vehicle accelerates back to the previously set speed (figure 6, bottom).
Fig. 6.
Basic function of ACC
To avoid excessive cornering speeds, the signals of the ESP system are evaluated simultaneously, and ACC automatically reduces the speed. The driver can override the ACC system at any time by activating the accelerator or by briefly applying the brake. The current systems of the first and second generation are active at speeds above 30 km/h. To avoid too many false alarms, stationary objects are suppressed. With the improved ACC of the 2nd generation this convenience function can also be used on smaller highways. The next step in functionality will come with the ACCplus function, which will brake the car to a standstill. ACC FSR (Full Speed Range), based on a data fusion of the long range radar with a video camera, will allow complete longitudinal control at all vehicle speeds, including urban areas with highly complex road traffic scenery.
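The decision logic described above can be summarized in a brief sketch. The 30 km/h cutoff and the free-road cruising follow the text; the 1.8 s time gap and the warning threshold are illustrative assumptions, not Bosch parameters.

```python
def acc_command(own_speed_kmh, set_speed_kmh, target_dist_m=None,
                target_speed_kmh=None, time_gap_s=1.8):
    """Very simplified first/second-generation ACC decision logic.
    Returns one of: 'off', 'cruise', 'follow', 'warn'.
    Time gap and closing-speed threshold are illustrative values."""
    if own_speed_kmh < 30:          # system deactivates below 30 km/h
        return 'off'
    if target_dist_m is None:       # free road: hold the set speed
        return 'cruise'
    safe_dist_m = own_speed_kmh / 3.6 * time_gap_s
    closing_kmh = own_speed_kmh - target_speed_kmh
    if closing_kmh > 30 and target_dist_m < safe_dist_m:
        return 'warn'               # rapid approach: warn the driver
    if target_dist_m < safe_dist_m:
        return 'follow'             # brake/accelerate to keep distance
    return 'cruise'
```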
Today's ACC system is a convenience function that supports more relaxed driving. Starting in 2005, Bosch will extend the functionality of ACC to "Predictive Safety Systems" and thus enter the field of safety systems.
4.2 Video System
The above-mentioned video technology will first be introduced for convenience functions whose behavior and interventions are transparent to the driver. Fig. 7 shows the basic principle of operation of a video system. The enormous potential of video sensing is intuitively obvious from the performance of human visual sensing. Although computerized vision has so far not come close to this performance, a considerable variety of information and related functions can already be achieved by video sensing:
Fig. 7.
Basic principle of a video sensor and functions being considered
- lane recognition and lane departure warning, position of the own car within the lane,
- traffic sign recognition (speed, no passing, ...) with an appropriate warning to the driver,
- obstacles in front of the car, collision warning,
- vehicle inclination for headlight adjustment.
New methods of picture processing will further improve the performance of these systems [4]. Besides measuring the distance to an obstacle, the camera can assist the ACC system by performing object detection and object classification. Special emphasis is put on the night vision improvement function in the introduction phase of video technology.
4.3 Predictive Safety Systems
Inattention is the cause of 68% of all rear-end collisions. In a further 11% the cause is inattention combined with following too closely, and 9% of rear-end collisions are caused by following too closely alone. These statistics [6] show that 88% of rear-end collisions can be influenced by longitudinal control systems. We assume a stepwise approach from convenience systems to safety systems, where the first step has been made with Adaptive Cruise Control.
Fig. 8. Analysis of the braking behavior during collisions.
In almost 50% of collisions the drivers do not brake at all. Emergency braking happens in only 39% of all vehicle-to-vehicle accidents, and in 31% of accidents without the influence of another vehicle, respectively. This analysis confirms that inattention is the most frequent cause of collision-type accidents, and it shows the high collision avoidance and collision mitigation potential of predictive driver assistance systems if the driver's braking process can be anticipated or an intervention can be made by the vehicle's computer. Predictive safety systems will pave the way to collision avoidance with full intervention in the dynamics of the vehicle. They are partly based on signals derived from additional sensors, allowing the vehicle's surroundings to be taken into account. From the measurement of the relative speed between detected obstacles
and the host vehicle, dangerous situations can be recognized at an early stage. Warnings and stepwise vehicle interventions can be derived. The introduction of predictive safety systems will most likely come together with convenience systems, with the safety systems using the same sensors. From 2005 on, Bosch will extend ACC, as the most important component of predictive safety systems, to Predictive Safety Systems (PSS) in three stages.

PSS1 addresses the cases with partial braking. It prepares the brake system for a possible emergency braking. In situations where an accident threatens, it builds up brake pressure, brings the brake pads into very light contact with the brake discs, and modifies the hydraulic brake assist. As a result, the driver gains important fractions of a second until the full braking effect is achieved.

In about half of all collisions, drivers crash into the obstacle without braking at all. Bosch is developing the two succeeding generations of Predictive Safety Systems for these kinds of accidents. PSS2 addresses the cases with no braking. It warns the driver of the danger of driving into the vehicle in front. The second generation does not only prepare the braking system; it also gives a timely warning to the driver about dangerous traffic situations, helping to prevent accidents in many cases. To do this it triggers a short, sharp operation of the brakes. Driver studies have shown that a sudden braking impulse is the best way of drawing the driver's attention to what is happening on the road; drivers react directly to the warning. Alternatively or additionally, the system can also warn the driver by means of optical or acoustic signals, or by a brief tightening of the normally loosely fastened safety belt.

PSS3 performs an emergency braking in the case of an unavoidable accident.
This third developmental stage of the Predictive Safety System will not only recognize an unavoidable collision with a vehicle in front; in this instance the system will also trigger automatic emergency braking with maximum vehicle deceleration. This will especially reduce the severity of an accident when the driver has failed to react at all to the previous warnings, or has reacted inadequately. Automatic control of vehicle functions demands a very high level of certainty in the recognition of objects and the assessment of accident risk. In order to reliably recognize that a collision is inevitable, further measuring systems – such as video sensors – will have to support the radar sensors.
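The three-stage escalation can be illustrated with a time-to-collision (TTC) sketch. The TTC thresholds used here are hypothetical and serve only to show the ordering of the stages, not actual Bosch parameters.

```python
def pss_stage(distance_m, closing_speed_ms):
    """Map a situation to a PSS escalation stage via time-to-collision.
    All threshold values below are hypothetical illustrations."""
    if closing_speed_ms <= 0:
        return None                 # not closing in on the obstacle
    ttc = distance_m / closing_speed_ms
    if ttc < 0.8:
        return 'PSS3: automatic emergency braking, maximum deceleration'
    if ttc < 1.6:
        return 'PSS2: short brake jerk / optical-acoustic warning'
    if ttc < 3.0:
        return 'PSS1: prefill brake pressure, precondition brake assist'
    return None
```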
5
Outlook
Political institutions have put the right emphasis in their programs to reduce fatalities and road traffic accidents, e.g. the European Union with the eSafety program and its vision of reducing fatalities by 50% by the year 2010, and the German government with programs such as INVENT. Car makers and suppliers have responded to these programs and are trying to make their contributions to reach this goal [5]. In conjunction with these programs there is a big challenge for microsystems technology: sensor technology and sensor (data) fusion, packaging and interconnection technologies, reliability and data security. The price of the components will play a dominant role: only low-cost components allow a widespread distribution of safety technologies, which is a precondition for the effectiveness of future accident prevention and mitigation.
6
References
[1] Enke, K.: "Possibilities for Improving Safety Within the Driver Vehicle Environment Loop", 7th Intl. Technical Conference on Experimental Safety Vehicles, Paris (1979)
[2] Anonymous accident statistics of the "Statistisches Bundesamt" (German Federal Statistics Office), Wiesbaden, Germany (1998 – 2001)
[3] Statistics from the "Gesamtverband der Deutschen Versicherungswirtschaft e.V." (Association of the German Insurance Industry) (2001)
[4] Seger, U.; Knoll, P.M.; Stiller, C.: "Sensor Vision and Collision Warning Systems", Convergence, Detroit (2000)
[5] Knoll, P.M.: "Predictive Safety Systems – Steps towards Collision Avoidance", VDA Technical Congress, Rüsselsheim, Germany (2004)
[6] NHTSA Report (2001)
Peter M. Knoll, Bernd-Josef Schaefer
Robert Bosch GmbH
AE-DA/EL2
Daimlerstr. 9
71229 Leonberg
Germany
[email protected] 97
Datafusion of Two Driver Assistance System Sensors
J. Thiem, M. Mühlenberg, Hella KGaA Hueck & Co.

Abstract
This contribution deals with data fusion between two sensors for driver assistance: a lidar-based ACC sensor and a CMOS-based vision sensor system for LDW. The main properties and experimental results of the proposed approach are described. The first fusion task is to supply the ACC sensor with lane information, obtained by the vision system, to improve the relevant-target determination and control strategy. Furthermore, the LDW sensor is concurrently able to use and verify ACC target hypotheses by vision-based object detection and tracking. This goes along with an improved estimation of the lateral target position and dynamics. Several test drives demonstrate the capability of this multiple sensor system. The main focus lies on the information processing in the vision sensor, i.e. lane and object detection. In addition, the fusion method and the data association inside the object detection module are specified.
1
Introduction
In general, the functionality of sensors for Advanced Driver Assistance Systems (ADAS) is optimised according to their primary application. Today's Adaptive Cruise Control (ACC) in upper-class cars depends on the reliability and consistency of the measurement data of a single radar or lidar sensor. For the first generation of comfort-oriented driver assistance this works, because the designated driving areas are highway-like and therefore moderate regarding the complexity of target vehicle movement, ego-vehicle dynamics and changes of the driving course. Signal processing and target tracking can be done in a model-based way, and scene interpretation can be based on transparent assumptions and constraints covering standard and uniform situations. Though the distance sensors are specialists for longitudinal control tasks, inconsistency or lack of data occurs in non-standard situations, e.g. construction sites with crash barriers and reflectors producing phantom objects, or curves with small radii with the effect of losing the relevant target. In contrast to longitudinal driver assistance, Lane Departure Warning (LDW) requires a different, additional physical sensor principle, in this case a
vision sensor and a processing unit to detect the lane markings in front of the ego-vehicle with the aid of image processing methods. Furthermore, the establishment of this second and heterogeneous ADAS sensor makes it possible to overcome physical limitations of the distance sensor. Combining the multiple inputs and utilizing the strengths of each sensor results in a better, improved ACC application. In this first step towards an ADAS sensor fusion roadmap, an ACC lidar sensor is combined with a CMOS vision sensor system for LDW, using the image processing capability of detecting visual patterns (vehicle rear), the better lateral resolution (vehicle width, lateral motion) and lane position information (ego-position, lane geometry). Both sensors cover the area in front of the car. From a topological view these coverage areas differ in range and in horizontal and vertical opening angle. There is also a difference in the physical features delivered by the sensors: the lidar sensor delivers a range map of the detected objects, clustered by the number and arrangement of beams. The grey-value image stream of the CMOS camera is processed by an image processing device; the output consists of edge features representing lane markings, object contours, other contrast patterns, etc. After introducing the system overview and explaining the sensor properties, this paper describes the main vision modules "lane detection" and "object detection". Hereafter, it focuses on the fusion aspects in section 5, i.e. the MAP fusion method and the data association inside the object detection. Finally, section 6 outlines experimental results.
2
System Overview
Significant in the redundant coverage area are the longitudinal distance measurements of the lidar sensor and the lateral object contour determination with the aid of the image sensor (figure 3). The combination of these data allows a precise localization of the target and also its classification, improving e.g. the ACC control strategy. Furthermore, the lane tracking of the LDW profits from additional stationary objects detected by the lidar sensor. Signal processing in the complementary areas enables the early prediction of new incoming objects (e.g. "cut-in") and the preconditioning of both systems. In general, sensor data fusion will have a lasting effect on the architecture of driver assistance systems in vehicles; the functional separation into sensors and applications, the sensor communication and common interfaces are only a few, but important, points to mention.
The redundant coverage area of both sensors in front of the car is important (figure 1) for the initial target detection (lidar) and tracking (lidar, vision). The area borders are the opening angle of the lidar, 16° in azimuth, and a fixed vision sensor range of 70 m for objects. Objects which drift into the vision sensor's right and left complementary areas will be tracked for a certain lifetime only by image processing, to bypass short-time object loss of the ACC, e.g. in narrow curves. In addition, objects that enter the vision area from left or right (near cut-in) should also be recognized initially by the LDW sensor.
Fig. 1.
Sensor covering areas and ranges.
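The coverage geometry can be made concrete with a small sketch. The 16° lidar opening angle and the 70 m vision object range are taken from the text; the vision opening angle is an assumed placeholder, since the paper does not quote it here.

```python
import math

LIDAR_HALF_ANGLE_DEG = 8.0    # 16 deg opening in azimuth (from the text)
VISION_RANGE_M = 70.0         # fixed vision range for objects (from the text)
VISION_HALF_ANGLE_DEG = 20.0  # assumed placeholder; not stated in the paper

def in_redundant_area(x_m, y_m):
    """True if a point (x forward, y left, ego coordinates) lies inside
    both the lidar lobe and the vision sensor's object area."""
    r = math.hypot(x_m, y_m)
    az = math.degrees(math.atan2(y_m, x_m))
    in_lidar = abs(az) <= LIDAR_HALF_ANGLE_DEG
    in_vision = r <= VISION_RANGE_M and abs(az) <= VISION_HALF_ANGLE_DEG
    return in_lidar and in_vision
```

A target drifting sideways in a narrow curve leaves the lidar lobe first but can remain inside the wider vision area, which is exactly the short-time tracking handover described above.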
The ACC and LDW sensors are arranged in a decentralized fusion network cluster and linked via High-Speed CAN 2.0A (figure 2). In our approach, other vehicles are initially detected and tracked by the ACC sensor, and the target attributes are delivered to the Fusion-CAN. The LDW uses these track lists to verify and measure the objects again with image processing methods, and returns the determined vision-based track lists, plus additional lane information, to the ACC for the fusion task. So the idea of this first step in sensor data fusion is to improve the existing ACC application, not to create a new application. Due to this approach the functional structure of both sensors can be preserved in most aspects, which means little modification and variation of the single sensor systems. In particular, sensor-specific signal processing and object tracking tasks remain in each device. A close sensor-internal data connection between these two tasks ensures optimum system performance without facing the problems of a centralized fusion architecture (e.g. high data bandwidth requiring FlexRay, a new system setup in case of sensor enlargement, costs, etc.). Because the fusion is based on object association, the data transfer is limited and a private CAN link fulfils all requirements, in our case with a bus load of about 35% of a 500 kBit/s CAN.
Fig. 2. Lidar/vision sensor cluster and the partitioning of the function blocks.

2.1 Lidar Sensor
The ACC system described in this paper is based on the optical lidar technology IDIS® (figure 3). The wavelength is 905 nm, so the sensor works actively in the near infrared (NIR).
Fig. 3.
Lidar sensor for ACC. Significant in the illustrations are the transmitter and receiver lenses of the sensor. The right picture shows the mounting position in the car's front bumper area (without the black IR lens hood).
The sensor emits 15 ns laser pulses in 16 single, horizontally arranged channels (multibeam) and measures the return pulse time delay of the reflected beams. The arrangement of the beams allows a certain spatial resolution in the azimuthal direction (1°), and the beams are capable of multi-target determination. The measurement data are therefore distance vectors. After signal preprocessing, object grouping and tracking, the lidar sensor measures the distance and estimates the relative velocity of the moving relevant targets in front of the ego-vehicle. Stationary objects are not taken into account for the ACC longitudinal control strategy, although they are part of the sensor-internal track list. In case of initial detection, objects become new tracks after a few cycles, according to the quality of the tracking process ('lifetime'). The sensor's properties are listed in Tab. 1.
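The ranging principle behind the pulse measurement is plain time-of-flight: the pulse travels to the target and back, so the distance is half the round-trip path.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(return_delay_s):
    """Pulse time-of-flight ranging: the distance to the reflector is
    half the round-trip path of the returned laser pulse."""
    return C * return_delay_s / 2.0

# A target at roughly 30 m returns the 905 nm pulse after about 200 ns.
print(round(tof_distance_m(200e-9), 1))
```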
Tab. 1. Lidar sensor properties

2.2 Vision Sensor
The LDW system is based on a forward-looking CMOS imager with a wide-angle lens and a separate image processing unit. The sensor works mainly in the visual spectrum and needs no additional special light source at night. In contrast to the distance sensors, it delivers a matrix of grey values representing the scenery's brightness distribution, leading to a 2D pattern- or template-based signal processing for object detection without depth information (geometric model). The lane detection task is based on a mixed feature-/model-based image processing approach. By sensing the position of lane boundaries, such as white lane markings in front of the car, it estimates the lane assignment, the curvature ahead and the ego-position of the vehicle in its own lane track. For reproducible system operation this assumes driving environments with at least a "model"-like character. Much investigation has been devoted to the image processing task, yielding a combination of an edge- and area-based lane marking detection algorithm adaptable to most road and illumination conditions. The object detection approach is based on contour information extracted from the grey values in the image scene. With this information, vehicle hypotheses are generated and fed into a multiple-target multiple-hypotheses tracking process.
Fig. 4. Vision sensor for LDW. The best mounting position, due to the perspective view of the road ahead, is behind the upper windscreen.

Tab. 2. Vision sensor properties
3
Lane Detection
ACC objects must be judged according to their track position to determine the relevant target. The determination of the lane width wLane and the ego-position by the LDW complements and improves this ACC task and results in a better, more precise lane-object mapping. Furthermore, the curvature cLane ahead is useful information. But due to the reduced look-ahead range of the vision sensor under adverse weather and lighting conditions, the availability and consistency of the curvature information from the LDW is limited. Therefore a decision is made in the fusion unit whether the curve information from the vision sensor wVision or from the yaw sensor wGyro is used for the track prediction. The decision is based on a measurement or prediction quality factor QSensor, e.g. the estimation error covariance. (1)
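This source selection can be sketched minimally, assuming that the quality factor is an estimation error covariance (smaller is better). The concrete comparison rule is our assumption; the paper only states that a quality-based decision is made in the fusion unit.

```python
def select_curvature(c_vision, q_vision, c_gyro, q_gyro):
    """Choose the curvature estimate whose quality factor is better.
    Here the quality factor is taken to be the estimation error
    covariance, so the smaller value wins (an assumption)."""
    return c_vision if q_vision <= q_gyro else c_gyro
```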
This completes the lane and ego-motion model to: (2)
Here, l is the lateral offset of the vehicle and Δψ the heading angle difference relative to the lane orientation. The slip angle β is not considered in this model. The lidar sensor object track m can be described by: (3)
with dm: target distance, vrel,m: relative velocity and nR,m and nL,m as the target border describing lidar beams (n = -8…-1, 1…8). So we can predict the lane boundaries in the range dm of the target m by:
(4)
(5)
and come to the relevant target parameter determination (6)
with nR,m=-1…-8 and nL,m=1…8. Now we can complete our target state vector with the relevance factor (7)
Fig. 5. Frame of the lane detection. Actual lane offset lEgo: 68 cm and lane width wLane: 337 cm, determined by LDW.

4 Object Detection
Once the multibeam lidar has detected an object, the lateral position and size of this target can be described accurately enough to establish a search area in the image plane, depending on the lidar measurement variances. The task of the vision sensor is now to detect and track all ACC objects.
Fig. 6. Demonstration of the fusion-based object detection.
The image processing methods are based on horizontal and vertical edge detection; the resulting edge candidates are analyzed with respect to their attributes and must be arranged in a certain geometrical orientation, e.g. the U-pattern [4]. This works in the daytime and under good conditions; at night or in low-light situations additional features, such as the tail-lights of the vehicles, have to be detected, so we use a multiple geometrical model approach for object segmentation. Finally, Kalman filtering is used to track the objects in the LDW system in parallel to the ACC tracking.
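The tracking step can be sketched as a one-dimensional constant-velocity Kalman filter; the cycle time and the noise parameters below are illustrative choices, not the values used in the LDW system.

```python
def kalman_step(x, P, z, dt=0.04, q=1.0, r=4.0):
    """One predict/update cycle of a 1D constant-velocity Kalman filter.
    x = [position, velocity], P = 2x2 covariance (list of lists),
    z = measured position. q, r: process/measurement noise (assumed)."""
    # predict: x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + Q
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # update with the position measurement, H = [1, 0]
    k0 = P[0][0] / (P[0][0] + r)
    k1 = P[1][0] / (P[0][0] + r)
    y = z - x[0]                       # innovation
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, P
```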
Fig. 7. Object detection. a) horizontal and vertical edges. b) segments based on grouped edges. c) pattern from model-based grouping. d) tracked object.

5 Data Fusion
The ACC and LDW sensors extract target attributes by tracking the objects separately. The parameters differ in quality and accuracy depending on the sensor's physical principle. For example, the lidar's distance measurements are of higher precision than the results obtained by the vision system. At this point, fusion is cooperative and feature-complementary, by simple assembly of predefined sensor measurements, assuming that these measurements are constantly available. On the other hand, object features are often competitive due to the different situation- and weather-dependent availability of the sensor systems (quality factor Qm,Sensor). So the merging of features can lead to a better result.
Fig. 8. The fusion object inherits the competitive attributes (here the lateral position) of the single sensors.

5.1 Maximum A Posteriori Estimation
For accurate merging of the measured object positions obtained by the two sensors, one further has to consider the sensors' different lateral and longitudinal resolutions. For this purpose, the uncertainties of the estimated positions are modelled with a gaussian probability function [1] (8)
where Pi(x, y) := P(xm,i, ym,i | x, y) denotes the probability that the measurement (xm,i, ym,i) of sensor i with a given uncertainty (σx,i, σy,i) represents the "real" object position (x, y). Regarding two sensor estimates, the likelihood of the fused object position can then be determined by the conditional probability P(x, y | xm,1, ym,1, xm,2, ym,2). This "a posteriori probability" describes the probability of the object position given that the sensors provide the measurements (xm,1, ym,1) and (xm,2, ym,2). Under the assumption of statistically independent measurements and the theorem of Bayes we get the expression
(9)
that can be simplified to (10)
Finally, the most probable object position can be obtained as “Maximum A Posteriori” estimation (MAP) by (11)
If the "a priori probability" p(x, y) is unknown or uniformly distributed, then the solution is simply (12)
However, we assume gaussian distributions here, so the resulting expression that has to be maximized in Eqn. 11 is gaussian as well. For this reason, no time-consuming optimization algorithm is necessary. The estimated optimal object position (xF, yF) and the resulting uncertainties (σx,F, σy,F) are given explicitly. In the case of Eqn. 12 we get (13)
(14)
since the gaussian function can be separated in x and y. In figure 9 the object positions measured by the lidar and vision sensor are shown. Here, the different lateral and longitudinal resolutions of the sensors become obvious. The resulting position of the fusion object, however, offers high accuracy in both the lateral and the longitudinal dimension, as the figure illustrates.
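Under a flat prior, the closed-form MAP estimate of Eqns. 12–14 is the familiar inverse-variance weighted mean, applied per coordinate. A compact sketch with invented measurement values:

```python
def map_fuse(xm1, s1, xm2, s2):
    """Fuse two gaussian measurements of one coordinate (means xm,
    standard deviations s): under a flat prior the MAP estimate is the
    inverse-variance weighted mean, and the fused variance shrinks."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    x_f = (w1 * xm1 + w2 * xm2) / (w1 + w2)
    s_f = (1.0 / (w1 + w2)) ** 0.5
    return x_f, s_f

# Invented example values: lidar precise in range (x),
# vision precise laterally (y).
x_f, sx_f = map_fuse(30.0, 0.2, 31.5, 2.0)   # longitudinal coordinate
y_f, sy_f = map_fuse(1.4, 1.0, 0.9, 0.1)     # lateral coordinate
```

The fused estimate leans towards the more certain sensor in each coordinate, which is exactly the behaviour visible in figure 9.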
Fig. 9. Fusion of the object location measured by the ACC and LDW sensor.

5.2 Data Association
In the process of object detection, existing image processing objects (IP objects) of the LDW sensor have to be associated with incoming ACC objects. Fig. 6, for example, illustrates a scene with 9 ACC targets, of which two stable IP objects are considered in the tracking cycle. Similar to the fusion of objects explained in the preceding section, we use the method of MAP estimation to calculate a measure of the geometrical "distance" or similarity between ACC targets and existing IP objects. Here, the goal is not to estimate the optimal position as in Eqn. 11, but to evaluate the probability itself. If we assume gaussian distributions, the probability function Eqn. 10 can be rewritten as (15)
that results for the optimal position (xF , yF ) in
(16)
to obtain the confidence factor C. This factor is zero if both coordinates are equal and will increase with the geometrical distance regarding the individual uncertainties. In case of Eqn. 12 this factor is generally given by (17)
The abbreviation c(•) is called the confidence function. In contrast to other tracking problems, IP objects are not point-like objects. Instead, they are described by the left and right border edges extracted in the edge detection process. Therefore, we have to associate the ACC target position with both the left and the right border of the vehicle hypothesis (18)
The fact that left and right border edges of a preceding vehicle should have the same distance xLIP = xRIP =: xIP leads to (19)
The confidence regarding the lateral position (20)
is arranged with an additional parameter λ
(21)
If the image processing can reliably extract both border points yLIP, yRIP (case i), and the ACC target is located between them, the resulting confidence is CLy = CRy = 0, i.e. λL = λR = ∞. If there is just one stable border point (cases ii and iii), a plausible vehicle width wmax is used and the parameter is chosen to widen or narrow the gaussian distribution of the ACC target. By this means, candidates that seem to be located inside (ii) or outside (iii) the plausible vehicle are treated differently. This mechanism is illustrated in figure 10.
Fig. 10. Adaptation of the confidence function: data association in case ii (left) and iii (right).
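For gaussian uncertainties, such a confidence reduces to a squared distance normalized by the combined variances. The exact placement of the widening parameter λ is our assumption, chosen only so that λ → ∞ reproduces the zero-confidence limit stated above.

```python
def confidence(p1, s1, p2, s2, lam=1.0):
    """Geometric 'distance' between an ACC target coordinate and an IP
    object border: zero for identical coordinates, growing with the
    separation relative to the combined uncertainties. Treating the
    widening parameter lam as a variance scale is an assumption; it
    reproduces the limit lam -> inf => confidence 0."""
    return (p1 - p2) ** 2 / (lam * (s1 ** 2 + s2 ** 2))
```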
6
Results
Fig. 11 shows an example of using the additional LDW lane information to improve the object-track assignment. The ego-vehicle is travelling on the left track in a highway-like situation with increased lane width. In this case the real lane width is 4.2 m, in contrast to the assumed value of 3.5 m of the ACC. The dotted lines are the measured lane borders from the LDW; the solid lines belong to the ACC hypothesis. One can see that the cut-in of the target directly in the right front of the ego-vehicle is at an advanced stage, so the ACC control strategy is able to react earlier to this event by using the lane border from the vision sensor.
Fig. 11. Example for using lane information from LDW instead of the ACC lane hypothesis (different scale for x- and y-axis).
The vision-based object detection capabilities are illustrated in figure 12. Once again a lane-change manoeuvre on the relevant track ahead is shown (double line symbol). The small bottom line represents the object width measured by the lidar; the top thick one is the fusion object attribute target width, improved by the vision system. The actual range in the given frame is about 60 m, so the accuracy of the fusion step depends primarily on the resolution of the vision sensor. Figure 13 points out the differences between the lidar and the vision sensor in lateral tracking. Here, the left and right border of the preceding test vehicle detected by the image processing and the ACC sensor are shown. With this information the error of the width estimation and finally the lateral accuracy can be determined (figure 14).
Fig. 12. Additional vision sensor based lateral parameter (thick top line) improves the position and width of the target.
The vision sensor system is capable of tracking up to 5 objects (own lane-track, neighbouring lane-tracks) in parallel. Of course, the performance of the vision sensor and the object detection depends on weather and lighting conditions. This has to be taken into account when defining the ACC longitudinal control strategy based on fusion.
Fig. 13. The target vehicle’s lateral position and width, tracked and interpolated by lidar (ACC) and vision (IP), travelling distance: 120 m, constant measurement distance: 30..40 m.
Fig. 14. Estimation of the lateral resolution of the Lidar and vision sensor.
7
Conclusion
Along with ACC, future luxury cars will be equipped with a vision sensor system realizing LDW. In our fusion approach both sensors deliver their object lists to a fusion module, which can be a software task located in one of the sensors, e.g. the ACC sensor. The entire system also includes edge-based multiple-target object detection and tracking in the vision sensor, triggered by potential ACC targets. The combination of all data from the lidar and vision sensor allows a precise localization of the target and also its classification, improving e.g. the ACC control strategy. Furthermore, the lane tracking of the LDW profits from additional stationary objects detected by the lidar sensor. Signal processing in the complementary areas also enables the early prediction of newly incoming objects (e.g. “cut-in”) and the preconditioning of both systems. Obviously, this additional vision sensor information causes some extra workload in the image processing of the LDW sensor unit, but it results in a more precise scenario description, advantageous in multi-track environments.
References
[1] Bähring D., Hoffmann C.: “Objektverifikation durch Fusion monoskopischer Videomerkmale mit FMCW-Radar für ACC“, Workshop Fahrerassistenzsysteme FAS 2003, Leinsweiler (Pfalz), Sept. 2003, p. 9 (2003).
[2] Darms M., Winner H.: “Fusion von Umfelddaten für Fahrerassistenzsysteme“, Workshop Fahrerassistenzsysteme FAS 2003, Leinsweiler (Pfalz), Sept. 2003, p. 13 (2003).
[3] Dickmanns E.D., Mysliwetz B.D.: “Recursive 3-D Road and Relative Ego-State Recognition“, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2 (1992).
[4] Gern A., Franke U., Levi P.: “Advanced Lane Recognition – Fusing Vision and Radar“, Proceedings of the IEEE Intelligent Vehicles Symposium 2000, Dearborn (MI), USA, p. 45-51 (2000).
[5] Hutchins R.G.: “Target Tracking Algorithms Employing Imaging Sensors”, Dissertation, University of California, San Diego (1988).
[6] Mühlenberg M.: “Bildsensoranwendung im Kraftfahrzeug am konzeptionellen Beispiel der Fahrspurerkennung“, VDI Elektronik im Kraftfahrzeug, Baden-Baden 2001, VDI-Berichte 1646, p. 879 (2001).
[7] Mühlenberg M., Thiem J., Rotter A.: “ACC- and LDW-Sensor in a Fusion Network“, 7th International Symposium on Advanced Vehicle Control 2004, Arnhem, The Netherlands, Aug. 2004.
[8] Rieder A.: “Fahrzeuge sehen – Multisensorielle Fahrzeugerkennung in einem verteilten Rechnersystem für autonome Fahrzeuge“, Dissertation, Universität der Bundeswehr München, Fakultät für Luft- und Raumfahrttechnik (2000).
Jörg Thiem, Martin Mühlenberg Hella KGaA Hueck & Co. Dept. GE-ADS Advanced Development – Systems and Products Beckumer Straße 130, 59552 Lippstadt Germany
[email protected] [email protected]
Keywords: sensor fusion, intelligent cruise assistance, driver assistance systems
Reducing Uncertainties in Precrash-Sensing with Range Sensor Measurements J. Sans Sangorrin, T. Sohnke, J. Hoetzel, Robert Bosch GmbH Abstract This paper focuses on the data processing development for Precrash-Sensing when only the object range is provided. If the object is close to the sensor, perturbations in the form of measurement uncertainty and other kinds of noise become critical due to the small amount of time available to make decisions. Nevertheless, these decisions are required to be accurate and highly reliable. The wide variety of object classes and possible situations in the vehicle surroundings aggravates the effects of all these perturbations, so that the signal processing development for Precrash becomes quite a challenge. Based on current techniques for object tracking and multilateration, crash-situations can be detected and crash-parameters such as the velocity at the impact point can be computed. Experiments were performed with a vehicle equipped with a multiple-sensor system based on two different sensor technologies (radar and ultrasonic).
1
Introduction
Since governments and authorities put much effort into the reduction of road accidents, the demands on vehicle safety are continuously growing. Thus, new technologies for passive safety are being developed to enhance the performance of current systems such as the airbag or the pyrotechnical belt pretensioner (see [1][2][4][6]). These systems reduce the risk and level of injury during a crash. Integrating information about the situation which precedes the first contact allows an optimised control of the restraints (see [3]). This motivates the development of Preventive Safety Systems (PSS) such as Precrash-Sensing. Therefore, in the scope of the European project PReVENT, Bosch researches and develops systems that contribute to the road safety targets set by the European Commission’s transport policy for 2010. The objective is to increase the protection of vehicle occupants and even pedestrians in case of impending crashes. These new systems are based on sensors collecting information about the vehicle surroundings. The aim of these systems is to implement different driver safety and convenience assistance functions (see [4]). Depending on the objective of the function, the area of interest varies. Hence, the selection of the technology becomes relevant, because the sensor’s field of view (FOV) has to cover this area of interest. Furthermore, the information provided by the sensors depends on the technology used. Ultrasonic sensors provide only the object range, whereas radar can additionally provide the object velocity and even the angle. The economic cost of the function also has to be taken into account in the selection of the technology, because parameters such as the number of sensors, the computational resources or the sensor features influence the system performance. After a dedicated selection of the technology, the fulfillment of the requirements has to be verified. Taking account of the requirements of several functions, it is possible to implement a platform which supports multiple functions (see [1]). The information provided by the sensor system is used to categorize the situation in the vehicle surroundings. Thus, in case of an impending crash-situation, the relevant information, such as the closing-velocity, is included in the Crash Object Interface (COI) (see [2]). Precrash makes high demands on the sensor system and data processing. High measurement rates are required in order to get enough data for reliable decisions. In the case of a system with multiple range sensors, multilateration is a common technique for sensor data-fusion. Using this method in vehicles, high accuracy of the single sensor data is required due to the small distances between the sensors. Therefore, single sensor measurements are pre-processed to obtain more consistency (see [5]). Moreover, random and systematic uncertainties in the position estimate are propagated to the computation of the object velocity. Methods to reduce these uncertainties are therefore needed to describe the vehicle environment and to perform accurate crash-predictions.
These methods are adaptations of current algorithms for object tracking and multilateration techniques, taking the functional requirements of Precrash into account.
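The effect of range noise on a finite-difference velocity estimate can be illustrated with a short calculation (an illustrative sketch, not the processing described in the paper): two independent range measurements with standard deviation sigma_r, differenced over an interval dt, yield a velocity error of sqrt(2) * sigma_r / dt.

```python
import math

def velocity_sigma(sigma_r: float, dt: float) -> float:
    """Standard deviation of v = (r2 - r1) / dt when r1 and r2 carry
    independent range noise with standard deviation sigma_r."""
    return math.sqrt(2.0) * sigma_r / dt

# Illustrative numbers (assumed, not from the paper): 1 cm range noise
# at a 10 ms measurement interval already maps to ~1.4 m/s velocity noise.
sigma_v = velocity_sigma(0.01, 0.010)
print(round(sigma_v, 3))  # 1.414
```

This is why high measurement rates alone are not sufficient: shrinking dt amplifies the propagated range noise, so smoothing or tracking over several measurements is needed.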
2
Precrash-Sensing System
2.1
Data Processing Architecture
The sensors mounted in the vehicle periphery communicate with a central electronic control unit (ECU). The distribution of the software components for data processing has to take account of the available computational resources in the sensors and ECUs. Fig. 1 gives the system architecture for Precrash data processing. This schema can be adapted to different technologies such as ultrasonic and short range radar (SRR) sensors. The data processing blocks are distributed among the available resources in the system components. In case of
the radar system, a microcontroller is available in each sensor and in the ECU, whereas for ultrasonic systems only a microcontroller is available in the ECU.
Fig. 1.
Data processing architecture
After a scan of the sensor FOV, the raw-data processing generates the measurement list. This list contains the ranges of the objects within the FOV. As given in Fig. 1, each sensor processes its own measurement list (1D-Tracking). Thus, each sensor describes the situation in vehicle surroundings from its own “point of view” and the necessary information, such as object distance and velocity, is included in the one-dimensional Object list (1D). Based on multilateration techniques, the information contained in each 1D-Object list is fused to a two-dimensional description of the situation. Thus, the relative object position and velocity can be computed in Cartesian coordinates and included in the 2D-Object list. This is the interface which contains the basic information for implementing the Precrash functions.
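The multilateration step can be sketched for the minimal case of two range sensors (an illustrative calculation; the sensor geometry and coordinate conventions are assumptions, not taken from the paper):

```python
import math

def multilaterate(r1: float, r2: float, baseline: float):
    """Intersect the range circles of two bumper sensors mounted at
    (0, +baseline/2) and (0, -baseline/2); returns the object position
    (x ahead of the bumper, y lateral)."""
    b = baseline / 2.0
    # Subtracting the two circle equations eliminates x and yields y directly:
    y = (r2 ** 2 - r1 ** 2) / (4.0 * b)
    x_sq = r1 ** 2 - (y - b) ** 2
    if x_sq < 0.0:
        raise ValueError("ranges are inconsistent with the sensor baseline")
    # Of the two mirror solutions, keep the one in front of the vehicle.
    return math.sqrt(x_sq), y

# Object at 5 m ahead, 0.5 m lateral offset, sensors 1 m apart:
x, y = multilaterate(5.0, math.sqrt(26.0), 1.0)
print(round(x, 3), round(y, 3))  # 5.0 0.5
```

The small baseline between the sensors is also visible here: the lateral coordinate y is divided by the baseline, so range errors are magnified laterally, which is exactly why the single-sensor pre-processing matters.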
2.2
Crash Object Interface
Precrash deals with highly threatening situations in the vehicle surroundings. Thus, by continuously analysing the information contained in the 2D-Object list, impending crash-situations can be detected. In this case, the relevant information is transmitted to the restraints by means of the Crash Object Interface (COI). The COI contains information about only one object. Since more than one object can be found within the area of interest for Precrash, the level of threat is assessed for each object contained in the 2D-Object list. Then, in case of an impending crash-situation, the most dangerous object is selected. From this selection, predictions of parameters such as the closing-velocity (cv), time-to-impact (tti) and offset (dy) at the contact point are computed. In order to
classify the crash severity, the closing-velocity including the angle of the vector (α) becomes advantageous.
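A minimal constant-velocity sketch of how the COI parameters cv, tti, dy and the angle α could be derived from a fused 2D object state (the kinematic model and sign conventions are assumptions for illustration; the paper does not specify its prediction model):

```python
import math

def crash_object_interface(x, y, vx, vy):
    """Constant-velocity prediction of COI parameters from the relative
    position (x ahead, y lateral) and relative velocity (vx, vy)."""
    if vx >= 0.0:
        return None                      # object is not closing in x
    tti = -x / vx                        # time until the object reaches the bumper line
    dy = y + vy * tti                    # lateral offset at the contact point
    cv = math.hypot(vx, vy)              # closing-velocity magnitude
    alpha = math.degrees(math.atan2(vy, -vx))  # angle of the closing-velocity vector
    return {"cv": cv, "tti": tti, "dy": dy, "alpha": alpha}

# Object 4 m ahead, 0.2 m lateral, approaching head-on at 10 m/s:
coi = crash_object_interface(x=4.0, y=0.2, vx=-10.0, vy=0.0)
print(coi["tti"], coi["cv"], coi["dy"])  # 0.4 10.0 0.2
```

Including α alongside the cv magnitude, as the text notes, distinguishes a head-on closing vector from an oblique one of equal speed, which matters for severity classification.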
Fig. 2.
Parameters of the Crash Object Interface
2.3
Precrash Functions
The aim of Preset is to improve the performance of current restraints by integrating information about the vehicle surroundings. Preset only provides additional information, so that the pyrotechnical restraints are not activated on this basis. However, this information improves the categorization of the situation and becomes especially advantageous for multi-stage airbags, which require an accurate estimation of the closing-velocity. Early detection of a crash is used for the function Prefire. This enables the advance activation of reversible restraints, which require a longer deployment time (e.g. 100-200 ms) than pyrotechnical ones. These systems might be activated in any tentative crash-situation. The most prevalent system is an electronic belt pretensioner. A reversible belt pretensioner keeps the occupants in position by removing the belt slack. Fixing the passengers in the current position increases the survival space. This system avoids a forward movement of the body during the crash before the pyrotechnical pretensioner is activated. Furthermore, in slow speed crashes (e.g. destination>Erlangen), this one-to-one mapping is no longer given. The user might say “navigate from Erlangen to Munich via Ingolstadt without using the highway”. This oral command comprises three or more menu steps in the graphical equivalent. It is now the task of the interpretation component of the dialogue in GUIDE to map the speech commands to appropriate system events. In GUIDE, these are events with appropriate parameters, e.g. NAVIGATION(DEST: Munich, VIA: Ingolstadt, ROUTEOPTION: NO_HIGHWAY).
Fig. 6.
GUI transition for user utterance: “Navigate to Erlangen”
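The mapping from an utterance to a single parameterised event could be sketched with a naive keyword-based slot filler (purely hypothetical; GUIDE's actual interpretation component is not described at this level, and the function and regular expressions here are invented for illustration):

```python
import re

def parse_navigation(utterance: str) -> dict:
    """Toy slot filler: collapse one spoken sentence into a single
    parameterised event instead of several separate menu steps."""
    event = {"type": "NAVIGATION"}
    m = re.search(r"\bto (\w+)", utterance)
    if m:
        event["DEST"] = m.group(1)
    m = re.search(r"\bvia (\w+)", utterance)
    if m:
        event["VIA"] = m.group(1)
    if "without using the highway" in utterance:
        event["ROUTEOPTION"] = "NO_HIGHWAY"
    return event

evt = parse_navigation("navigate from Erlangen to Munich via Ingolstadt "
                       "without using the highway")
print(evt)  # {'type': 'NAVIGATION', 'DEST': 'Munich', 'VIA': 'Ingolstadt', 'ROUTEOPTION': 'NO_HIGHWAY'}
```

A real interpretation component would of course work on recognizer output with confidence scores and a grammar rather than raw strings, but the essential point stands: one utterance, one event with several parameters.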
The dialogue is also responsible for asking clarifying questions if user input could not be interpreted correctly (e.g. because of low levels of confidence or identified OOVs). However, the overall behaviour of the system should be exactly the same as if the buttons were used, otherwise the user would be confused. As a result, the state transitions of the state chart defining the menu behaviour can be equivalent. But since three (or more) commands were actually given in one sentence, it is unnecessary or even unwanted for the three commands to be processed one after the other, each time showing the
same screen as if buttons were used. Still, the final screen or the final confirmation is supposed to be the same. An example of the ‘final’ screen is given in Figure 6 after the user uttered “Navigate to Erlangen”. The GUI ‘jumped’ from the radio to the navigation input screen, but ‘Erlangen’ is already displayed as the destination since this was contained in the speech command. As the street, ‘center’ is displayed since this is the default if no street name is specified.
4.2
Ensuring Consistency between GUI and Speech Dialogue
In GUIDE, each state and/or widget is equipped with an additional editor for specifying the vocabulary and grammar for that particular state. States/widgets can inherit the vocabulary and grammars from parent states but also define vocabulary and grammar that are valid for this state only. By doing so, the availability of certain commands that are valid in all dialogue steps concerning this application can be ensured. In order to ensure consistency between the graphical menu items and the speech commands, GUIDE allows the automatic generation of a base vocabulary for each state or widget. Such a base vocabulary consists of the textual commands of the GUI. These should be available as speech commands in any case. If the textual commands are changed later, the base vocabulary is changed accordingly. Abbreviations are treated separately. Simply providing the names/commands from the menu would not, however, lead to a natural dialogue, and it is thus possible to extend the vocabulary with alternative expressions to enable a more natural interaction. Just as the properties described in section 2.1 can be global and constant, the dialogue can have an overall, global behaviour, such as ‘the confirmation strategy is always very explicit’ or ‘explicit confirmation is only used when absolutely necessary’. On the other hand, there might be very state-specific confirmation strategies, e.g. in situations where PIN numbers are entered by the user. Other global properties could be e.g. whether speaker adaptation should be used. The level of confidence necessary to accept an utterance, on the other hand, could be state-specific as explained above, since different applications require different security levels.
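The inheritance of vocabularies along the state hierarchy can be sketched as follows (illustrative only; the class and method names are invented and are not the GUIDE API):

```python
# Hypothetical sketch: each dialogue state derives a base vocabulary from
# its textual GUI commands, may add alternative natural phrasings, and
# inherits everything defined by its parent states.
class DialogueState:
    def __init__(self, name, gui_commands, extra_phrases=(), parent=None):
        self.name = name
        self.parent = parent
        # base vocabulary generated from the GUI commands, extended by
        # state-specific alternative expressions
        self.local_vocab = set(gui_commands) | set(extra_phrases)

    def vocabulary(self):
        inherited = self.parent.vocabulary() if self.parent else set()
        return inherited | self.local_vocab

root = DialogueState("main", ["radio", "navigation", "phone"])
nav = DialogueState("navigation", ["destination", "route options"],
                    extra_phrases=["navigate to"], parent=root)

print(sorted(nav.vocabulary()))
```

Because the base entries are derived from the GUI commands, renaming a menu item automatically renames the corresponding speech command, which is the consistency property the text describes.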
4.3
Dialogue History and Multimodality in GUIDE
Another advantage of closely following the state hierarchy of the GUI is that modelling the dialogue history becomes relatively easy, since all parameters are kept in the global data pool that can also be accessed by the speech dialogue component. That means that if the user is currently entering something
that corresponds to a state very deep in the state hierarchy, ambiguities can be resolved by analysing the already entered parameters. Dialogue history is an extremely important factor if natural dialogue is desired. It is very unnatural if, when one parameter (e.g. the destination in the navigation application) is changed, all other parameters (such as route options) have to be re-entered, too. By closely coupling the speech and haptic input in the GUIDE model, a truly multimodal interaction can be ensured. The user can jump between speech and haptic input even within a dialogue turn. Events, whether resulting from haptic or voice input, are always processed in the same way. In addition to speech input, speech dialogue systems also use speech output (either pre-recorded speech or text-to-speech synthesis). So for each transition it can be specified whether, depending on whether it was triggered by a speech or a button event, the output should be graphical and/or verbal.
4.4
Usability Testing in GUIDE
Another main advantage of using such a tool for the design of speech-enabled HMIs is the fact that the behaviour can be tested immediately, since the HMI can be simulated at any time during the design process. Inconsistencies within the dialogue or between GUI and speech can be discovered very quickly. Of course, such testing cannot completely replace the usability studies that are necessary at different steps during HMI development. But the tool can be used for usability tests without the need for a working prototype realised on the target. In traditional HMI development the application can often be tested only when implemented on the target. However, due to very strict time schedules, the results of usability tests that are conducted after implementation often do not find their way into the product. Using GUIDE, usability tests can be conducted at very early project stages and thus there is a much higher chance that the results can be reflected in the final HMI. Even in later development stages changes can be incorporated quickly, since changes in one aspect of the HMI model are immediately reflected in all other components of the system.
5
Summary and Outlook
In this paper an HMI design tool was presented that allows for the simultaneous specification, design and implementation of HMIs. We focused on the integrated design of GUI and speech dialogue, which is a very important topic in speech dialogue system design. Using GUIDE, the development of truly multimodal interfaces is easily possible. The different mechanisms and properties of GUIDE were presented. Special challenges for speech dialogue systems in automotive environments were also discussed, and proposals for how to alleviate these problems were given. The speech-enabled version of GUIDE is currently under development. First aspects, as described in this paper, are already implemented. Methods for dialogue strategy development will follow soon.
References
[1] http://www.speechdat.org/SP-CAR/
[2] http://www.speecon.com
[3] S. Goronzy, Z. Valsan, M. Emele, J. Schimanowski: The Dynamic, Multi-Lingual Lexicon in SmartKom, Eurospeech 2003, pp. 1937-1940, Geneva, Switzerland (2003).
[4] S. Goronzy: Robust Adaptation to Non-Native Accents in Automatic Speech Recognition, Springer Verlag, Lecture Notes in Artificial Intelligence, 2002, ISBN 3-540-00325-8.
[5] C. Rosette: Elektronisch gesteuerte Systeme legen weiterhin zu. In: Elektronik Automotive, Heft 6/2002, p. 22f.
Dr. Silke Goronzy, Dr. Rainer Holve, Dr. Ralf Kompe 3SOFT GmbH, Frauenweiherstr. 14 91058 Erlangen Germany
[email protected]
Keywords: spoken dialogue, integrated tool-based UI design, GUIDE, speech recognition, speaker adaptation, confidence measures, OOV
Networked Vehicle
Developments in Vehicle-to-vehicle Communications Dr. D. D. Ward, D. A. Topham, MIRA Limited Dr. C. C. Constantinou, Dr. T. N. Arvanitis, The University of Birmingham Abstract The advanced electronic safety systems that will improve the efficiency and safety of road transport will rely on vehicle-to-vehicle and vehicle-to-infrastructure communications to realize their full potential. This paper examines the technologies that are presently in use for vehicle communications in the mobile environment, and indicates their limitations for achieving the range of functions proposed as part of future road transport developments. An area of research that shows considerable potential for these communication requirements is the use of mobile ad hoc networks, where the vehicles themselves are used to form a self-organizing network with minimal fixed infrastructure requirements. The development of this technology will be described, along with the technical issues that will need to be addressed in order to make effective use of it in the modern road transport system. In particular, a novel approach to the development of a framework of mobile ad hoc network routing protocols is described.
1
Introduction
Historically, both road vehicles and the electronic systems fitted to them have operated independently. Electronic systems have been developed and optimized to perform a specific function, such as engine management or anti-lock braking. Similarly the vehicles themselves have operated autonomously, without any knowledge of the surrounding environment except for that provided through the driver responding to traffic conditions, road signs and external command systems such as traffic signals. In the modern vehicle, many of the electronic systems are interconnected via a databus or network system. The most prevalent databus system is CAN (Controller Area Network), which is frequently used to network electronic modules that share responsibility for a particular set of global functions such as body control or powertrain management. The functionality that these systems can achieve through interoperation is frequently greater than could be
achieved by the modules acting independently. For example, functions such as traction control are implemented by interaction between the anti-lock braking system and the engine management system. Elements of the traction control functionality are to be found in both modules. However the vehicles themselves are still autonomous. In the future, vehicles will interact, both with each other and with the transport infrastructure, to achieve additional enhancements to the safety and efficiency of road transport. These interactions rely on a means of networking the vehicles. The term “intelligent transportation systems” (ITS) describes a collective approach to the problems of enhancing safety, mobility and traffic handling capacity, improving travel conditions and reducing adverse environmental effects with a future aim of automating existing surface transportation systems.
Tab. 1.
Summary of IVC applications
Communication requirements are central to any ITS infrastructure because they enable the transmission and distribution of the information needed for numerous control and coordination functions. For example, the European Commission has a stated aim to reduce fatalities on European roads by 50% by 2010. Much of this reduction will come from the deployment of advanced electronic safety systems. The “e-Safety” initiative [1] is conducting research into a number of areas where electronic systems can substantially enhance road safety. Many of the applications proposed can only realize their full potential through the use of vehicle-to-vehicle and/or vehicle-to-infrastructure communications. As highlighted above, an important subset of the communication requirements of an ITS is inter-vehicle communication (IVC), which is essential in realizing certain ITS applications as summarized in table 1. In order to meet the requirements of IVC, one solution is to implement a distributed communication network between the vehicles on the road. Mobile ad hoc networking is a candidate technology that can be employed to meet this requirement.
2
Introduction to ad hoc Networks
The term “ad hoc network” refers to a self-organizing network consisting of mobile communicating nodes connected by wireless links. Each node participating in an ad hoc network sends and receives messages and also acts as a router, forwarding messages to other nodes. Each node receives and analyses the addressing information in each message; the routing protocol then determines autonomously whether this message is retransmitted. Its decisions are generally based on information it continuously exchanges with its neighbours and on measurements it might take of its environment. The networking capability arises from the cooperative behaviour of nodes in forwarding messages for third-party nodes. In theory an ad hoc network does not require communication with a fixed infrastructure, since coordination within an ad hoc network is decentralized. Wireless LAN (WLAN) technology, as popularly used for networking computers in offices and homes, is an example of a wireless communication technology operated in broadcast mode, i.e. where all nodes communicate directly with a common point known as an access point. However, WLAN technology has the potential to be operated in an ad hoc mode, provided the network layer in each node is modified to provide routing functions, so that communication need not take place via the access point. Currently, WLAN nodes can be configured to communicate in an ad hoc mode in which communication between nodes is achieved by broadcasting messages to nodes within their direct communication range only.
An important aspect of an ad hoc network is that when communication is required between nodes that are not within direct communication range of one another, messages can be forwarded using intermediate nodes. This is illustrated in figures 1 and 2.
Fig. 1.
Ad hoc network communications example
Fig. 2.
Ad hoc network communications example
In figure 1, node A wishes to transmit a message to node B. However nodes A and B are not within each other’s communication range. In figure 2, an intermediate node C is introduced which is within the communication range of both nodes A and B. Now, A may transmit a message to B via node C using a “multi-hop” approach, A → C → B. The forwarding of messages in this way is achieved through the use of a routing protocol, the design of which is crucial in performing efficient, reliable and expedited message delivery. Although in principle messages can be forwarded over a long distance in an ad hoc network, there are a number of practical limitations. One such limitation is the delay introduced by the routing protocol, which could potentially accumulate to a significant level over long distances. For some of the safety-related applications listed in table 1, a significant delay may be unacceptable which highlights the requirement for the routing protocol to be optimized to function efficiently within its target operational environment.
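The multi-hop route discovery illustrated in figures 1 and 2 can be sketched with a breadth-first search over the connectivity graph (a simplified model; real ad hoc routing protocols additionally have to cope with node mobility, message loss and stale routes):

```python
from collections import deque

def discover_route(neighbours, src, dst):
    """Breadth-first route discovery: 'neighbours' maps each node to the
    nodes within its direct radio range; returns the hop sequence from
    src to dst, or None if the destination is unreachable."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            route = []              # walk the predecessor chain back to src
            while node is not None:
                route.append(node)
                node = prev[node]
            return route[::-1]
        for nxt in neighbours[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Figures 1 and 2: A and B are out of range of each other, but both reach C.
net = {"A": ["C"], "B": ["C"], "C": ["A", "B"]}
print(discover_route(net, "A", "B"))  # ['A', 'C', 'B']
```

In a vehicular network the connectivity map changes every time vehicles move, so any route found this way has a short lifetime, which is exactly the challenge the following paragraphs describe.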
An uncontrolled relaying of messages causes a phenomenon known as a “broadcast storm”, which is similar to an avalanche. For example, consider a message intended for a specific node four hops (or transmission radii) away which is broadcast to 10 other nodes; each of these nodes broadcasts it to another 10 neighbours, and this process repeats 4 times. Thus, the message ends up being relayed approximately 10,000 times, most of which are unnecessary, as only four message transmissions are actually needed. The “broadcast storm” is avoided by dynamically discovering a route to the destination through the cooperation of the intermediate nodes and by exploiting their knowledge of their neighbourhoods. The challenge for IVC is to achieve the successful delivery of a message when the nodes are moving at high speed, as in ITS applications, and where any route discovered will have a very short lifetime. Current networking technology performs satisfactorily with stationary or slow moving nodes.
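The back-of-the-envelope count from the example above can be checked directly (the fan-out of 10 and depth of 4 hops are the figures from the text; the ~10,000 quoted there is the dominant last hop of the total):

```python
def flood_transmissions(fanout: int, hops: int) -> int:
    """Number of transmissions when every receiver naively rebroadcasts:
    fanout + fanout^2 + ... + fanout^hops."""
    return sum(fanout ** h for h in range(1, hops + 1))

naive = flood_transmissions(10, 4)
routed = 4  # one transmission per hop along a discovered route
print(naive, routed)  # 11110 4
```

The four-orders-of-magnitude gap between naive flooding and routed forwarding is what makes route discovery worth its own control-traffic cost.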
3
Analysis of ITS Communications Technology
There are a number of radio communication technologies that could be considered for ITS applications. These include:
- Wide-area broadcasting, using analogue or digital techniques. Examples include the RDS (radio data system) transmissions superimposed on analogue FM radio broadcasts, and DAB.
- Cellular radio systems, such as GSM and 3G mobile telephones.
- Ad hoc networking techniques such as wireless LAN.
Wide-area broadcasting has the advantage that a single transmission can reach multiple vehicles, but the disadvantage that usually it is not possible to address an individual vehicle nor to verify that the information has been successfully received. Furthermore, it is highly inefficient in terms of spectrum utilization to support any reasonable number of communication sessions with independent vehicles. Broadcast services are usually, but not always, “free to air”. Broadcast services are therefore best suited to disseminating information that needs to be sent to a wide range of consumers, where the data volumes are such that discrimination can be made easily at the receiving node. An example of this is traffic messages transmitted using the RDS-TMC system to vehicle navigation systems, where the receiver in the vehicle applies a filter to the messages based on the vehicle’s intended route. Cellular radio systems have the advantage that a message can be transmitted to an individual vehicle, but the disadvantages that there is usually a latency associated with message delivery and that the services have to be paid for.
This communication latency arises fundamentally from the need to maintain and search a location register, such as the home and visitor registers in GSM. The GSM short messaging service (SMS) is popular for transmitting short items of information, but contrary to popular opinion delivery is not necessarily “instant”. Ad hoc networks appear to show the most promise for supporting the communication needs of ITS applications in the dynamic road traffic environment, and there is considerable interest in the global ITS community in the use of ad hoc networks for transport applications. In Europe, projects such as FleetNet [2] and CarTalk2000 [3] have used a self-organizing ad hoc radio network as the basis for communications. More recently, the PReVENT project [4] includes activities on wireless local danger warning and intersection warning. In the USA, activities are aimed at securing common frequency allocations for vehicle-to-vehicle communications in the 5.8 GHz band alongside the allocations already reserved for DSRC. A key feature of many of these activities is that wireless LAN technology is being proposed as the basis for the communications. In order for ad hoc networks to be used effectively for IVC, the following technical challenges have to be overcome:
- Selection of an appropriate physical layer for radio communications. Wireless LAN technology has been used as a starting point, based on the industrial, scientific and medical (ISM) frequency allocations in the 2.4 GHz and 5.8 GHz radio bands. However, the ISM bands are unlicensed and may be used for a wide variety of applications. Long-term, dedicated radio bands for IVC may be required. Some bands have already been allocated (e.g. the 5.9 GHz allocation for DSRC) but more may be needed in future. A dedicated communication standard, IEEE 802.11p, is currently under development. However, this addresses the physical layer only, i.e. single-hop communication issues in a vehicular environment.
- Determining the level of fixed infrastructure required. Although in theory ad hoc networks can exist with no fixed infrastructure, in practice fixed infrastructure will be required to communicate messages to the wider transport infrastructure and to fill in the gaps in communication that may exist during periods of low vehicular traffic density. Fixed infrastructure will always be needed because wireless ad hoc networks cannot become arbitrarily large, since their data throughput would then become arbitrarily small.
- Appropriate routing protocols to address the particular needs of IVC in the ITS context, which include (but are not limited to) defined message latency for safety-related applications, scalability, and the need for routing to be sensitive to the location of the vehicle and the direction of travel.
In the remainder of this paper the development of a routing protocol for vehicular ad hoc networks is considered.
4
Review of ad hoc Routing Schemes
Many routing protocols have been proposed for ad hoc networks with the goal of providing efficient routing schemes. These schemes are generally classified into two broad categories: topology-based routing and position-based routing [5].
4.1
Topology-based Routing
Topology-based routing protocols use information about the links that exist in the network to perform packet forwarding. Topology-based routing can be further broken down into three sub-categories: proactive, reactive and hybrid routing. Proactive protocols attempt to maintain a global view of network connectivity at each node by maintaining a routing table that stores routing information for all destinations, computed a priori. Routing information is exchanged periodically, or when changes are detected in the network topology. Routing information is thus maintained even for routes that are not in use. Proactive protocols are “high maintenance” in that they do not scale well with network size or with the speed of changes in topology. Reactive (also known as on-demand) routing protocols, however, only create routes when required by the source node and are based on a route request, reply and maintenance approach. Route discovery is by necessity based on flooding (it is assumed that the identity of the nodes is known a priori). On-demand routing generally scales better than proactive routing to large numbers of nodes since it does not maintain a permanent entry for each destination node in the network and a route is computed only when needed. However, a drawback of on-demand protocols is the latency involved in locating the destination node, as well as the fact that flooding is expensive in terms of network resource usage and reduces the network data carrying capacity. Hybrid routing protocols, the third category of topological protocols, are those that combine both proactive and reactive routing, e.g. the zone
routing protocol (ZRP) [6, 7]. The ZRP maintains zones; within a zone proactive routing is used, whereas a reactive paradigm is used for the location of destination nodes outside a zone. The advantage of zone routing is its scalability since the “global” routing table overhead is limited by zone size and route request overheads are reduced for nodes outside the local zone. Further detailed reviews of topological routing techniques can be found in [8] and the references therein.
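The on-demand route discovery that reactive protocols rely on — flooding a route request and suppressing duplicates until the destination is reached — can be sketched as a breadth-first search over the instantaneous link graph. This is a simplified illustration of the principle, not any specific protocol's packet format:

```python
from collections import deque

def discover_route(adjacency, source, destination):
    """Simplified on-demand route discovery: a route request floods
    outward from the source; the first copy to reach the destination
    fixes the route (breadth-first search over the current links)."""
    parent = {source: None}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        if node == destination:
            # Walk back along reverse pointers to build the route.
            route = []
            while node is not None:
                route.append(node)
                node = parent[node]
            return list(reversed(route))
        for neighbour in adjacency.get(node, ()):
            if neighbour not in parent:  # suppress duplicate requests
                parent[neighbour] = node
                frontier.append(neighbour)
    return None  # destination unreachable with the current topology

# Example topology: an A-B-C-D chain of bidirectional links.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(discover_route(links, "A", "D"))  # ['A', 'B', 'C', 'D']
```

The latency and flooding cost criticised above are visible here: every node in the connected component may be visited before a single route is found.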
4.2
Position-based Routing
Unlike topology-based routing, position-based routing forwards packets based on position information, reducing and in some cases eliminating the overhead generated by frequent topology updates [9]. Position-based routing requires a system that provides positioning information and a location service (LS) [10]. Although mobile nodes can disseminate their positioning information via flooding algorithms, a location service is important for scalability [11]. A location service helps a source node to discover the location of the destination node. A review of location services can be found in [5, 10]. Two packet forwarding paradigms are commonly used within position-based routing: restricted flooding and geographic forwarding (also referred to as “greedy forwarding”) [12]. In restricted flooding, a packet is flooded through a region defined using the positions of the source and destination nodes. Although restricted flooding is still affected by topology changes, the use of position information reduces the amount of control traffic, limiting the scope of route searches and reducing network congestion. When a route to the destination cannot be found, however, network-wide flooding of the route request message occurs, resulting in high bandwidth utilization and unnecessary network congestion. Geographic forwarding relies on the local state of the forwarding node to determine which neighbouring node is closest to the destination and should receive the packet, and is thus not affected by the underlying topology of the network. The selection of the neighbouring node depends on the optimization criteria of the algorithm. Even though geographic forwarding helps to reduce the routing overhead caused by topology updates, the lack of global topology information prevents it from predicting topology holes [5].
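The greedy next-hop selection at the heart of geographic forwarding can be sketched as follows; the distance-based optimization criterion is the most common choice, and the `None` return shows the "local maximum" failure at a topology hole mentioned above:

```python
import math

def greedy_next_hop(current, neighbours, destination):
    """Pick the neighbour geographically closest to the destination.
    Returns None when no neighbour improves on the current node --
    the 'local maximum' caused by a topology hole, which pure greedy
    forwarding cannot route around."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_d = None, dist(current, destination)
    for pos in neighbours:
        d = dist(pos, destination)
        if d < best_d:
            best, best_d = pos, d
    return best

# Destination at (10, 0); the neighbour at (6, 1) makes most progress.
print(greedy_next_hop((0, 0), [(2, 3), (6, 1), (-1, 0)], (10, 0)))  # (6, 1)
```

Note that the decision uses only the forwarding node's local neighbour table — no global route state is maintained, which is precisely why the scheme scales but cannot anticipate holes.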
Developments in Vehicle-to-vehicle Communications
5
IVC Operating Constraints
5.1
Operating Environment
The majority of ad hoc networking research in the development and comparison of routing protocols has evaluated performance on a 2-dimensional rectangular plane where nodes change their speed and direction randomly. This differs from the mobility model required for an ad hoc IVC network in several ways. Firstly, the movements of vehicles are spatially restricted to the road structure, constraining the mobility pattern significantly. Secondly, the speed of vehicles is often much higher than the node speeds used in the literature. Thirdly, and most importantly, the dynamic nature of vehicular traffic flow (i.e. traffic flow patterns and density) must be modelled in order to evaluate the performance of the routing protocol for the target applications. The effect of differing mobility models on the relative performance of routing protocols has been highlighted in [13, 14]. This emphasizes that a routing protocol evaluated without emulating the movement characteristics and spatial constraints of the target application cannot be assumed to reproduce, in a different operational environment, the quantitative results demonstrated in the literature. Another difference in mapping routing techniques to IVC is that no prior knowledge of the possible set of identifiers exists without maintaining either a centralized or a distributed database. As pointed out in [15], the possible number of identifiers can easily exceed a practical size and will be constantly changing, making such a database unmanageable to maintain. Hence, node IDs must be considered a priori unknown. Since vehicles are increasingly being equipped with positioning systems (e.g. GPS), it can be assumed that future vehicles will carry an accurate positioning system as standard, allowing vehicles to be addressed by position.
Vehicle ID must therefore consist of two fields, a geographical location field and a unique node identification number, as a minimum addressing requirement. In applications requiring data to be addressed to a specific destination, a vehicle's ID can be discovered through its current position and maintained by each neighbouring node only for as long as necessary. In this way, both conventional distributed and centralized node ID database solutions are avoided completely.
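The two-field address and the "maintained only as long as necessary" policy might be sketched as a time-limited neighbour cache; the field names, TTL value and structure are illustrative assumptions, not a definition from the paper:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleAddress:
    """Minimal two-field address: geographic position plus a unique
    node identifier (field names are illustrative, not a standard)."""
    position: tuple  # (latitude, longitude)
    node_id: int

class NeighbourCache:
    """Keep a neighbour's address only while it is fresh; entries
    expire instead of being stored in any central ID database."""
    def __init__(self, ttl_seconds=5.0):
        self.ttl = ttl_seconds
        self._entries = {}  # node_id -> (VehicleAddress, last_seen)

    def observe(self, addr, now=None):
        now = time.monotonic() if now is None else now
        self._entries[addr.node_id] = (addr, now)

    def lookup(self, node_id, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(node_id)
        if entry and now - entry[1] <= self.ttl:
            return entry[0]
        self._entries.pop(node_id, None)  # stale: forget the neighbour
        return None

cache = NeighbourCache(ttl_seconds=5.0)
cache.observe(VehicleAddress((52.45, 13.29), 42), now=0.0)
print(cache.lookup(42, now=3.0))  # fresh: address returned
print(cache.lookup(42, now=9.0))  # expired: None
```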
5.2
Routing Protocol Considerations
The potential size of an IVC ad hoc network, coupled with the dynamic nature of traffic flow, excludes the use of a purely proactive protocol for the following reasons. Firstly, continuous changes in vehicle connectivity will result in constant routing update packets being transmitted, compromising routing convergence (by the time a vehicle receives routing update information, it may already be “stale”). Secondly, as a consequence of control traffic consuming network resources (bandwidth and processing), the delivery of application data will be restricted. Thirdly, as the number of vehicles increases, the size of the routing update packet will increase proportionally, placing extra demands on network resources. However, proactive protocols may be suitable at a local level for a restricted number of vehicles where timeliness of delivery is imperative to the application and the relative velocity between vehicles is low. A purely reactive protocol assumes that the identity of a node is known a priori, in order for it to address a message to a particular destination. However, the creation and maintenance of a vehicle ID database is likely to be prohibitively complex. Even if a vehicle had such information, it would initially have no knowledge of a path to the destination. Therefore, finding a path would delay transmission and, for applications where timeliness of delivery is imperative, this would not be acceptable. Although route caching can be supported, there will still be an initialization period before information is built up, and the freshness of this information may be short-lived due to continuous changes in network connectivity. This further limits the protocol’s scalability. Thus, a purely reactive routing protocol is unsuitable for IVC. For low-priority applications where delay is acceptable, a modified version of the reactive routing scheme, taking into account the addressing issues, may offer a suitable solution. However, in fast-flowing traffic, message delivery may not be possible if links are continuously changing.
A hybrid scheme using pure versions of both the reactive and proactive routing paradigms would not be suitable without modifying their methodologies to take into account position information, although it offers scalability advantages. It might naturally be assumed that, since position-based routing delivers messages based on position and fulfils one of the addressing requirements of IVC, it would provide the best routing solution. However, in the case of restricted flooding, a vehicle’s position and ID are assumed to be known a priori so that the message can be flooded to the area where the vehicle is expected to be located. If the vehicle cannot be found, network-wide flooding may be required in order to locate the destination. This is clearly not acceptable for applications in which timeliness of delivery is imperative. The level of detail of geographical information required to support efficient restricted flooding must include not only the relative positions of neighbouring vehicles, but also their direction of motion relative to the vehicular traffic flow and the message destination region. As will be seen in section 6, the justification for maintaining this level of geographical information complexity in the routing layer is dictated by the ITS applications themselves. For low-priority applications, where a vehicle has prior knowledge of the destination, this technique may be suitable, although network-wide flooding for unicast transmissions must be avoided since the potential control overhead in locating a route could tie up network resources unnecessarily. Geographic routing suffers from the requirement for a location service. Although the method used in routing the message to the destination is effectively stateless, the location service will be affected by the underlying connectivity and may delay delivery whilst waiting for position information. The position information also needs to be accurate up to one hop away from the destination. The algorithmic complexity and maintenance overheads of implementing a location service can be considerable. A modified version of the geographic forwarding scheme may be appropriate for certain groups of applications, but the implementation of a location service is likely to be prohibitively complex for the range of application scenarios we are considering. The geocast scheme is a technique that can be applied to IVC applications where information is of relevance to vehicles in a particular region, such as incident warning.
6
Proposed IVC Routing Framework
Having investigated the requirements of various ITS application scenarios in table 1, it became evident that they have quite different quality of service (QoS) demands, message delivery requirements and differing regions to which the data is relevant. The region to which the message is relevant for each application will be referred to as the routing zone of relevance (RZR), adapted from [16]. For example, ITS application scenarios such as vehicle platooning and cooperative driving will have a very low threshold in terms of acceptable communication delay, since any excess delay could mean the difference between the application working as implemented and potentially causing an accident. The RZR for these applications is considered to be in the near vicinity of the source vehicle. On the other hand, applications such as mobile vending services and traffic information systems are not critically dependent on communication delays and have a wide-area RZR. Thus, it is imperative to assign data priority depending upon the safety-related implications of the application. The message delivery requirements for the various application scenarios also differ: platoons may require group delivery, an incident warning message a broadcast to vehicles within a specific region, and information for a specific vehicle, such as a reply to a traffic information enquiry, unicast delivery.
In order to implement an IVC ad hoc network which meets the requirements of the application scenarios, the need to use different routing scheme paradigms was identified, the selection of which depends on the application and its specific priority rating, required RZR and message delivery requirements. It is assumed that an accurate positioning system will be universally deployed in future vehicles (e.g. GPS or Galileo) and that there is neither a centralized nor a distributed database maintaining a list of vehicle identifiers, as discussed in section 4. The message delivery requirements of the IVC applications can be classified into three categories. The first classification consists of applications such as incident warning, or an approaching emergency vehicle warning, which require information to be broadcast to a geographic region. The required message delivery type in this case is a geocast delivery scheme. The second classification consists of applications such as a response to a traffic information request, where an expected RZR can be determined from the packet sent by the requesting vehicle, using a method similar to the “expected-zone” technique used in [17]. The response message is specifically addressed to the requesting vehicle using a unicast delivery scheme. The third classification covers applications such as platooning or cooperative driving, where communication between a number of vehicles is required in order to coordinate manoeuvres between vehicles. The required delivery type is multicasting, also known as group delivery. The second and third classifications will benefit from local connectivity information, specifically when the message is nearing its destination and when application communication is on a local scale and timeliness of delivery is an issue.
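The three delivery classifications above amount to a mapping from application class to delivery scheme, which might be sketched as follows (the application names are illustrative examples, not an exhaustive taxonomy):

```python
def delivery_scheme(application):
    """Map an application class to its message delivery type, following
    the three classifications described above."""
    geocast_apps = {"incident_warning", "emergency_vehicle_warning"}
    unicast_apps = {"traffic_info_response", "vending_reply"}
    multicast_apps = {"platooning", "cooperative_driving"}
    if application in geocast_apps:
        return "geocast"    # broadcast to a geographic region (the RZR)
    if application in unicast_apps:
        return "unicast"    # addressed reply to the requesting vehicle
    if application in multicast_apps:
        return "multicast"  # group delivery for coordinating vehicles
    raise ValueError(f"unclassified application: {application}")

print(delivery_scheme("incident_warning"))  # geocast
print(delivery_scheme("platooning"))        # multicast
```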
Maintaining network connectivity at the local level will aid delivery of unicast and multicast data for applications where timeliness of delivery is imperative as the message approaches its destination. Maintaining network connectivity information requires periodic exchanges of control packets. In a highly dynamic environment, where links are formed and broken frequently, the amount of control traffic required to maintain up-to-date connectivity prevents this type of protocol from scaling well with network size [18]. However, knowledge of network connectivity is considered important for the implementation of the second and third classes of IVC applications mentioned previously, for two reasons. Firstly, local connectivity information is important for applications such as platooning, cooperative driving and any application requiring coordination between vehicles where timeliness of delivery is imperative. Secondly, local connectivity knowledge will help reduce the number of retransmissions required for a packet to find its destination within the RZR.
These applications require communication between specific vehicles, which are generally in the immediate vicinity of the source vehicle. We therefore intend to maintain network connectivity information at the local level in zones called “local zones” (LZ) centred on each node. The size of the zone changes dynamically, depending on local vehicle traffic density, local mobility and local data traffic overhead. The above discussion leads to the conclusion that, to satisfy the communication routing requirements of the plethora of application scenarios we wish to consider, it is necessary to deploy a suite of routing protocols. This work is similar to the FleetNet project [2] in that a wide range of IVC applications has been examined, but unlike [2] the application requirements have been analysed in their totality in order to identify efficiencies through the synergies of their underlying protocol mechanisms. This has resulted in the definition of a routing framework satisfying the message delivery requirements of a number of different classes of IVC applications. The framework utilizes a hybrid routing approach that combines a routing scheme maintaining connectivity data at the local level with a position-based routing scheme beyond the LZ. Figure 3 shows a schematic diagram of the proposed IVC routing framework.
Fig. 3.
Proposed ITS routing framework
Within the ITS routing framework, independently of the message delivery type, when the source vehicle lies outside the RZR the message is forwarded towards the RZR using a routing technique called perimeter vehicle greedy forwarding (PVGF). PVGF is based on the principles of greedy forwarding [10, 12, 19]. When the message reaches the RZR, the routing technique employed within the RZR changes depending upon the required message delivery scheme. This highlights the need for cross-layer communication, as opposed to strict protocol layering, in a vehicular environment. If the message type is geocast, a routing technique called distance deferral forwarding (DDF) is used to deliver the message within the RZR. However, if the message is addressed to particular vehicle(s) within the RZR, the message is forwarded to the destination vehicle(s) using local zone routing (LZR) along with perimeter vehicle local zone routing (PVLZR). Both of these routing schemes utilize local connectivity information in order to locate the destination vehicle within the RZR. In the scenario where the source vehicle is a member of the RZR and addresses a message to a specific vehicle within the RZR, both LZR and PVLZR are used to route the message to the destination(s). The following section expands the routing framework presented in figure 3 and discusses its constituent algorithms in outline.
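The technique-selection logic just described can be summarized as a small dispatch function; this is a schematic sketch of the framework's decision tree, not the full protocol logic:

```python
def select_routing_technique(inside_rzr, delivery_type, destination_in_lz=None):
    """Choose the routing technique per the framework above: PVGF
    forwards towards the RZR from outside; inside the RZR the choice
    depends on the required delivery scheme."""
    if not inside_rzr:
        return "PVGF"  # greedy forwarding of the message towards the RZR
    if delivery_type == "geocast":
        return "DDF"   # distance deferral forwarding within the RZR
    # Unicast/multicast inside the RZR: use local-zone connectivity.
    return "LZR" if destination_in_lz else "PVLZR"

print(select_routing_technique(False, "geocast"))                          # PVGF
print(select_routing_technique(True, "geocast"))                           # DDF
print(select_routing_technique(True, "unicast", destination_in_lz=True))   # LZR
print(select_routing_technique(True, "multicast", destination_in_lz=False))  # PVLZR
```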
6.1
Protocol Properties
The IVC framework is currently being developed for the motorway (freeway) environment and assumes that the “hello” packet header will also contain road identification information, facilitating vehicle classification per road. Further classifications can optionally be added in order to classify vehicles at intersections. When a data packet m is transmitted by vehicle A, which can be either the source vehicle or a vehicle retransmitting m, the decision as to which type of routing protocol to apply depends upon the required type of message delivery (which is application dependent), as well as the location of A with respect to the RZR. For ITS applications where vehicle A is outside the RZR, such as a traffic interrogation request further along the motorway or a broadcast transmission
addressed to all vehicles in a particular region, the PVGF protocol is used to forward m towards the RZR. Unlike greedy routing techniques [13, 19], where recovery techniques are employed to route around topology holes, such approaches are not required for a vehicular ad hoc network on a highway. If partitions (topology holes) occur in the network, or no appropriate vehicle class exists in the neighbour table to forward m in the direction of the RZR, the packet is stored in the neighbour waiting table until a vehicle meeting the required vehicle classification is detected. While the packet is stored in the table, the protocol takes advantage of the dynamic nature of vehicular traffic flow by waiting until it encounters an appropriate neighbour that can forward the message in the direction of the RZR. The method employed in dealing with topology holes is similar to the method applied in [20]. Once the message has reached the RZR, or if the source vehicle is inside the RZR, the routing scheme changes according to the application delivery requirements, depending on whether unicast, multicast, geocast or anycast delivery is required within the RZR. Broadcasting inherently suffers from congesting the network during packet flooding; many schemes have been developed to reduce this effect [21, 22]. Applications requiring information such as an accident warning are considered to have safety-related implications and hence require a high-priority delivery service along with high reachability within the RZR. Thus, a broadcast scheme that reduces retransmissions while allowing speedy delivery with a high penetration rate within the RZR is imperative in order to prevent unnecessary usage of the transmission medium.
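The store-carry-forward handling of topology holes described above might be sketched as follows; the neighbour-table fields (direction flag, "progress" metric) and the waiting-table structure are illustrative assumptions, not the paper's exact data structures:

```python
def try_forward(packet, neighbours, rzr_direction, waiting_table):
    """Store-carry-forward for topology holes: if no neighbour lies in
    the direction of the RZR, hold the packet until traffic dynamics
    bring a suitable forwarder into range."""
    candidates = [n for n in neighbours if n["direction"] == rzr_direction]
    if candidates:
        # Greedy choice among suitable neighbours: most progress made.
        next_hop = max(candidates, key=lambda n: n["progress"])
        return ("forwarded", next_hop["id"])
    waiting_table.append(packet)  # topology hole: wait for a new neighbour
    return ("stored", None)

waiting = []
print(try_forward({"seq": 1}, [], "towards_rzr", waiting))   # ('stored', None)
print(try_forward({"seq": 2},
                  [{"id": "B", "direction": "towards_rzr", "progress": 120}],
                  "towards_rzr", waiting))                   # ('forwarded', 'B')
```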
The technique employed reduces the number of retransmissions through the use of a (re)transmission deferral time, which allows vehicles further away in the required forwarding direction to rebroadcast the packet first. Vehicles between the rebroadcasting vehicle and the source of the transmission will cancel their scheduled rebroadcasts for this packet. Thus, the further away a receiving vehicle B is from the transmitter A, the lower its deferral time. This scheme is discussed in more detail in [23]. When the message delivery type within the RZR is either unicast or multicast, the search for the destination utilises the local connectivity information maintained in the LZ. When the destination node is found within a node’s LZ, LZR is used to deliver m. The selection of the next-hop node is made depending on the location of the destination. This decision is repeated at each node receiving m within the LZ until m reaches its destination. However, if the destination does not exist in the LZ, PVLZR is used. In PVLZR, m is forwarded to the node furthest away within the LZ, called the perimeter node. LZR is then used to deliver the message to the perimeter node. At the perimeter node, if the destination is not within its LZ, the above procedure is repeated. Otherwise, if the destination is found within the LZ, LZR is used to route m to its destination.
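The distance-dependent deferral timer at the heart of this broadcast scheme can be sketched as follows; the linear mapping, radio range and maximum deferral constants are illustrative assumptions (the actual function is specified in [23]):

```python
def deferral_time(distance_m, max_range_m=250.0, max_defer_s=0.02):
    """(Re)transmission deferral: vehicles farther from the sender in
    the forwarding direction defer for less time and so rebroadcast
    first; closer vehicles hear that rebroadcast and cancel theirs."""
    distance_m = min(max(distance_m, 0.0), max_range_m)
    return max_defer_s * (1.0 - distance_m / max_range_m)

# The farther receiver gets the shorter deferral, so it transmits first.
assert deferral_time(200.0) < deferral_time(50.0)
print(deferral_time(250.0))  # 0.0 -> the farthest vehicle rebroadcasts immediately
```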
7
Conclusions and Future Work
Developments in road transport require a robust and scalable means of communicating information between road vehicles in order to implement and fully realize the benefits of ITS applications. Mobile ad hoc networks are seen as the most promising technology to fulfil the communications requirements of these applications. This paper has described the development of a routing protocol framework suitable for use in the road transport environment. Ongoing work will verify the performance of the protocols by conducting simulations using a microscopic traffic simulator combined with a network simulation tool in order to derive performance metrics for the proposed protocols.
References
[1] European Commission, “eSafety Initiative”, www.europa.eu.int/information_society/programmes/esafety/text_en.htm
[2] FleetNet project website, www.fleetnet.de
[3] CarTALK 2000 project website, www.cartalk2000.net
[4] PReVENT project website, www.prevent-ip.org
[5] M. Kasemann et al., “Analysis of a Location Service for Position-Based Routing in Mobile Ad Hoc Networks”, in Deutscher Workshop über Mobile Ad Hoc Netzwerke, Ulm, March 2002.
[6] Z.J. Haas and M.R. Pearlman, “The Performance of Query Control Schemes for the Zone Routing Protocol”, in ACM SIGCOMM, 1998.
[7] M.R. Pearlman and Z.J. Haas, “Determining the Optimal Configuration for the Zone Routing Protocol”, IEEE Journal on Selected Areas in Communications, 17(8), Aug. 1999.
[8] E.M. Royer and C.K. Toh, “A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks”, IEEE Personal Communications, 6(2), 1999, pp. 46-55.
[9] J. Tian, I. Stepanov and K. Rothermel, “Spatial Aware Geographic Forwarding for Mobile Ad Hoc Networks”, in 3rd ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2002), Lausanne, Switzerland, June 2002.
[10] M. Mauve and J. Widmer, “A Survey on Position-Based Routing in Mobile Ad Hoc Networks”, IEEE Network, 2001, pp. 30-39.
[11] H. Hartenstein et al., “A Simulation Study of a Location Service for Position-Based Routing in Mobile Ad Hoc Networks”, Reihe Informatik, University of Mannheim, June 2002.
[12] B. Karp and H.T. Kung, “GPSR: Greedy Perimeter Stateless Routing for Wireless Networks”, in ACM/IEEE MobiCom, August 2000.
[13] J. Tian et al., “Graph Based Mobility Model for Mobile Ad Hoc Network Simulation”, in 35th Annual Simulation Symposium, IEEE/ACM, San Diego, California, April 2002.
[14] T. Camp, J. Boleng and V. Davies, “A Survey of Mobility Models for Ad Hoc Network Research”, Wireless Communication and Mobile Computing (WCMC), Special Issue on Mobile Ad Hoc Networking: Research, Trends and Applications, 2(5), 2002, pp. 483-502.
[15] L. Briesemeister and G. Hommel, “Overcoming Fragmentation in Mobile Ad hoc Networks”, Journal of Communications and Networks, 2(3), 2000, pp. 182-187.
[16] W. Kremer, “Vehicle Density and Communication Load Estimation in Mobile Radio Local Area Networks (MR-LANs)”, in IEEE VTC, Denver, Colorado, USA, 10-13 May 1992.
[17] Y.-B. Ko and N.H. Vaidya, “Location-Aided Routing (LAR) in Mobile Ad Hoc Networks”, in 4th Annual Int. Conf. on Mobile Computing and Networking (MOBICOM’98), October 1998.
[18] Z.J. Haas and S. Tabrizi, “On Some Challenges and Design Choices in Ad-Hoc Networks”, in MILCOM 98, Bedford, MA.
[19] I. Stojmenovic, “Position Based Routing in Ad-Hoc Networks”, IEEE Communications Magazine, 40(7), July 2002, pp. 128-134.
[20] L. Briesemeister, L. Schafers and G. Hommel, “Disseminating Messages Among Highly Mobile Hosts Based on Inter-Vehicle Communication”, in IEEE Intelligent Vehicles Symposium, 2000.
[21] M. Sun and T.H. Lai, “Location Aided Broadcast in Wireless Ad Hoc Networks”, in Proceedings IEEE WCNC, Orlando, FL, 2002.
[22] S. Basagni, I. Chlamtac and V.R. Syrotiuk, “Geographic Messaging in Wireless Ad Hoc Networks”, in Proceedings of the IEEE 49th Annual International Vehicular Technology Conference (VTC’99), vol. 3, pp. 1957-1961, Houston, TX, May 16-20, 1999.
[23] D.A. Topham, D.D. Ward, Y. Du, T.N. Arvanitis and C.C. Constantinou, “Routing Framework for Vehicular Ad Hoc Networks: Regional Dissemination of Data Using a Directional Restricted Broadcasting Technique”, to be presented at WIT 2005, Hamburg, Germany, March 2005.
David Ward, Debra Topham MIRA Limited Watling Street Nuneaton CV10 0TU UK
[email protected] [email protected] Costas Constantinou, Theo Arvanitis Electronic, Electrical and Computer Engineering The University of Birmingham Edgbaston Birmingham B15 2TT UK
[email protected] [email protected] Keywords:
intelligent transportation systems, eSafety, road safety, vehicle to vehicle communications, ad hoc networks, routing protocols
Generic Remote Software Update for Vehicle ECUs Using a Telematics Device as a Gateway G. de Boer, P. Engel, W. Praefcke, Robert Bosch GmbH Abstract Exchanging the software in the ECUs during the lifetime of a vehicle will become a mandatory requirement, in order to update the vehicle’s functionality, to cope with software errors and to avoid recalls. Thus, Remote Software Download will become a dominating aspect of the manufacturer’s vehicle maintenance service in the future. Considering the number of ECUs in the vehicle, a crucial issue is the flash procedure between gateway and ECU. There are many efforts towards standardizing the ECU interfaces and the communication protocols for software download. Nevertheless, this standardization is neither finished nor well-established. In this paper, we propose a very generic and flexible approach for the update of the vehicle’s software using a telematics device as a gateway between the remote download server and the local ECUs within the vehicle. This work has been carried out in the context of the EAST-EEA project within the ITEA research programme funded by the German Government (BMBF) [2].
1
Scenario Description
Innovation in automotive electronics has become increasingly complex. As microprocessors become less expensive, more mechanical components and functions are being replaced by software-based functionality. As a result, high-end vehicles contain more than 60 electronic control units (ECUs). Market forecasts predict that the amount of software in a vehicle will double every two to three years. Considering this trend, maintaining ECU software will become a crucial aspect of the manufacturer’s vehicle maintenance service in the future. Exchanging the software in the ECUs during the lifetime of a vehicle will become a mandatory requirement, in order to update the vehicle’s functionality, to cope with software errors and to avoid recalls. Mechanisms for remote management of the vehicle software, which allow access to the vehicle in the field, become more and more important. A key use case is the remote update of ECU software. In this “software download” scenario, typically a powerful ECU with an Internet connection to the manufacturer plays the role of a gateway receiving software from a remote maintenance server and distributing it to the affected ECUs in the vehicle via the local network, i.e. via the Controller Area Network (CAN) [5].
Fig. 1.
Remote Maintenance Scenario
Taking into account the number of ECUs in the vehicle, a crucial issue is the flash procedure between gateway and ECU. A standardized flash procedure, identical for each ECU, would be helpful in order to avoid high complexity at the gateway. There are many efforts towards standardizing the ECU interfaces and the communication protocols for software download. Nevertheless, this standardization is neither finished nor well-established. Even though some manufacturers have introduced a quasi-standard within some high-end vehicles based on the Keyword Protocol 2000 (KWP2000) [6], a generic protocol common to all ECUs is out of sight today. Because KWP2000 defines an optional set of diagnostic services but neither mandatory service sequences and associated state machines nor service parameter values, flash procedure variants with different security mechanisms, flash services and flash sequences are still the status quo, even within one vehicle. In this paper, we propose a generic mechanism for wireless remote update of ECU software (“software download”) using an infotainment head unit as a gateway between the remote download server and the local ECUs within the vehicle. The paper describes the basic principles of this solution and gives an overview of the applied technologies and standards. It describes the overall system architecture, including the software architecture of a validator system. This work has been carried out in the context of the EAST-EEA project within the ITEA research programme funded by the German Government (BMBF).
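One way to cope with ECU-specific flash variants at the gateway is to drive the procedure from a data description of the step sequence rather than hard-coding one sequence per ECU. The sketch below illustrates that idea only; the service names, step list and handlers are hypothetical and are not KWP2000 services or the paper's actual mechanism:

```python
def flash_ecu(ecu_services, sequence, image):
    """Execute whatever step list accompanies the software image for a
    given ECU, so the gateway stays generic across flash variants."""
    log = []
    for step in sequence:
        service = ecu_services[step]  # look up this ECU's handler
        service(image, log)
    return log

# Hypothetical handlers for one ECU variant.
def unlock(image, log): log.append("security access granted")
def erase(image, log): log.append("flash memory erased")
def program(image, log): log.append(f"programmed {len(image)} bytes")
def verify(image, log): log.append("checksum verified")

services = {"unlock": unlock, "erase": erase, "program": program, "verify": verify}
result = flash_ecu(services, ["unlock", "erase", "program", "verify"], b"\x01\x02\x03")
print(result[-1])  # checksum verified
```

A different ECU variant would ship a different step list (e.g. omitting the unlock step), and the same gateway loop would still apply.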
2
Open Service Gateway Initiative (OSGi)
Some important new challenges posed by electronics in vehicles are the growing complexity of in-vehicle software, the shorter life-cycles of infotainment systems and applications, and the need to reduce the cost of software while significantly increasing its reliability and security. One of the most attractive and promising ideas addressing these new requirements is the platform defined by the Open Service Gateway Initiative (OSGi) [3]. The goal of the OSGi is to create open specifications for the delivery of multiple services over wide area networks to local networks and devices. Members of the OSGi come from the software, hardware and service provider businesses. OSGi addresses the complete end-to-end solution architecture from remote service provider to local device and concentrates on the specification of a service gateway that functions as a platform for many communications-based services. The specification defines an open architecture for software vendors, network operators and service providers to develop and deploy services in a harmonized way. An OSGi service gateway provides a general framework for building and managing various software applications. The service gateway enables, consolidates and manages Internet, voice, data and multimedia communications to and from the home, office and other locations. The service gateway also functions as an application server for high-value services such as telematics, energy management, control, security services, safety, health care monitoring, e-commerce, device control and maintenance. The gateway enables service providers to deliver services to client devices on the local network. In future in-vehicle multimedia systems, the devices and the central infotainment unit connected to a bus system can be seen as local network elements. In this manner, a service provider could connect to the central infotainment unit and devices to provide its services.
According to the contributors of OSGi, the service gateway will likely be integrated in whole or in part into existing product categories such as (digital and analogue) set-top boxes, cable modems, routers, alarm systems, consumer electronics and PCs. Key benefits of OSGi are:
- Platform independence: OSGi's APIs can be implemented on a wide range of hardware platforms and operating systems.
- Application independence: OSGi's specifications focus on defining common implementation APIs.
- Security: the specification incorporates various levels of system security features, ranging from digital signing of downloaded modules to fine-grained object access control.
- Multiple services: hosting multiple services from different providers on a single service gateway platform.
- Multiple local network technologies: e.g. Bluetooth, HAVi, HomePNA, HomeRF, IEEE-1394, LonWorks, powerline communication systems, USB, VESA (Video Electronics Standards Association).
- Multiple device access technologies: UPnP, Jini.
- Installation of new services via remote connections, even into a running environment.
In more detail, the OSGi provides a collection of APIs that define standards for a service gateway. A service is hereby a self-contained component that performs certain functionality, usually written with its interface and implementation separated, and that can be accessed through a set of defined APIs. A set of core and optional APIs together defines an OSGi compliant gateway.
Fig. 2. OSGi Service Gateway Components, Release 3
The core APIs address service delivery, dependency and life cycle management, resource management, remote service administration, and device management. The essential system APIs are listed below.
- Device Manager: recognition of new devices and automatic download of required driver software.
- HTTP Service: provides web access to bundles and servlets.
- Log Service: logging service for applications and the framework.
- Configuration Admin: configuration of bundles, maintenance of a configuration repository.
- Service Tracker: maintains a list of active services.
- User Admin: user administration, maintenance of data for authentication and user-related data, i.e. private keys, passwords, bio-profile, preferences.
- Wire Admin: administration of local communication links.
- Measurement: utility class for the consistent handling of measurements based on SI units.
- Position: utility class providing an information object describing the current position, including speed and track information.
- IO Connector Service: intermediate layer that abstracts communication protocols and devices.
- Jini Service: provision of Jini to services within OSGi.
- UPnP Service: provision of OSGi services in UPnP networks, UPnP device control.
The optional set of APIs defines mechanisms for client interaction with the gateway and data for management and, in addition, some of the existing Java APIs, including Jini. Moreover, OSGi Service Platform 3 provides mechanisms supporting current residential networking standards such as Bluetooth and HAVi. A very attractive feature when dealing with remote administration services is the capability of the platform to be maintained independently of garage services. While the OSGi system was originally designed to provide powerful mechanisms for remotely manipulating and managing software on the embedded service gateway, it can be used to extend this feature beyond the embedded device. With the strategy described in the following sections it is possible to perform software management on ECUs which are attached to the gateway with the same flexibility and security as within the gateway itself. This also comprises wireless remote access and thus avoids the return of the car to a maintenance station.
3 System Description
Making use of an OSGi infrastructure at the manufacturer's site, the gateway is administered from the remote infrastructure. This includes the software download (push) and installation, the start of new software packages and the de-installation of software. In principle, an OSGi gateway can be implemented on an arbitrary system. In an automotive environment it makes sense to integrate the gateway into existing powerful telematics systems which already provide a wireless communication connection [1]. Modern telematics systems are sophisticated software-intensive systems and typically integrate a broad range of infotainment devices, including communication features based on e.g. GSM/GPRS or UMTS. In our scenario, we use the Head Unit (HU) of the infotainment domain as a host for the remote OSGi gateway to the vehicle.
Fig. 3. Main Components
The components of the remote software download scenario according to fig. 3 are listed below.
- Head Unit: powerful telematics system hosting the Java-based OSGi service gateway. It is connected to the local vehicle network (CAN [5]) in order to access the ECUs in the vehicle. For software download and diagnosis purposes, this is done via a CAN Transport Layer software module according to ISO 15765 [7], accessed through the so-called Java Native Interface (JNI), which gives access to system interfaces outside the Java environment (e.g., lower network and device functions as well as operating system functions).
- ECU: Electronic Control Unit possessing a flash loader and a CAN interface, which allows the download of new software into the flash memory of the ECU via the CAN transport protocol ISO 15765.
- Remote Administration Server: the Administration and Maintenance Server provides services for the remote administration of the gateway. This includes the push of new Java software into the gateway and its installation and start.
The Head Unit is administered from the infrastructure by means of the Remote Administration Server, which may download, install, un-install, start and stop so-called OSGi bundles on the gateway. An OSGi bundle is a JAR archive and represents the deployment unit for shipping new services. Access to the gateway is performed via standardized OSGi APIs. The Administration and Maintenance Server provides services and software for the remote flash procedure. It maintains a database of "Flash Bundles" that may be downloaded to the HU.
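To make the deployment-unit idea concrete, the following Python sketch packs a hypothetical flash bundle as a zip/JAR-style archive (the source states that an OSGi bundle is a JAR archive). The entry names (firmware.bin, flash-config.dat, the controller class) are invented for illustration and do not reflect the actual bundle layout used by the authors.

```python
# Illustrative sketch of a "flash bundle" as a JAR-style deployment archive.
# A JAR is a zip file with a manifest; file names here are hypothetical.
import io
import zipfile

def build_flash_bundle(ecu_id: str, firmware: bytes, config: bytes) -> bytes:
    """Pack ECU firmware, flash configuration data and a reprogramming
    controller stub into a single archive (the OSGi deployment unit)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as jar:
        jar.writestr("META-INF/MANIFEST.MF",
                     f"Bundle-Name: flash-{ecu_id}\n")      # bundle metadata
        jar.writestr("firmware.bin", firmware)               # ECU software payload
        jar.writestr("flash-config.dat", config)             # flash process parameters
        jar.writestr("ReprogrammingController.class", b"")   # controller placeholder
    return buf.getvalue()

bundle = build_flash_bundle("ECU_A", b"\x01\x02", b"addr=0x8000")
with zipfile.ZipFile(io.BytesIO(bundle)) as jar:
    print(sorted(jar.namelist()))
```

Packing the ECU-specific controller next to the payload is what makes the bundle self-describing: the HU needs no ECU-specific logic pre-installed.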
A key issue is the content of the software package "Flash Bundle" (figure 4): it contains the software to be installed on the ECU plus related information about the flash process (e.g. configuration data) and, additionally, a slim Java application, the so-called reprogramming controller, which later controls the flash procedure of the ECU. The reprogramming controller contains dedicated KWP2000 service calls [6] and is tailored to the ECU to be flashed. This is a very generic approach that makes it possible to cope with different ECUs and flash loaders. Optionally, a GUI for visualization, tailored to the HU requirements, may be added.
Fig. 4. Flash Bundle

4 Downloading and Installing ECU Software
The installation of new ECU software is performed in several steps, as depicted in figure 5. First, based on standardized Java/OSGi mechanisms, a software package containing the new ECU software is downloaded to the HU using a TCP/IP connection (e.g. via GSM/GPRS). This comprises the following steps:
- After mutual authentication of the remote administration service and the HU, the software configuration of the vehicle is identified on the administration side based on the authentication information. The configuration of each vehicle is stored in a central database and assigned to a unique vehicle ID.
- After authentication, the server selects (or builds) a proper flash bundle dedicated to the ECU to be flashed (in the example, ECU A).
- The flash bundle is transferred to the HU.
- The flash bundle is installed and activated on the HU within the OSGi framework by means of standardized services provided by the OSGi gateway.
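The server-side selection step can be sketched as a lookup from vehicle ID to stored software configuration and then to a matching flash bundle. This is a hypothetical Python illustration; the database contents, VINs and bundle names are invented.

```python
# Hypothetical sketch of the server-side step: after authentication, the
# vehicle ID is mapped to its stored software configuration and a matching
# flash bundle is selected. All data below is invented for illustration.

VEHICLE_DB = {  # central database: vehicle ID -> installed ECU software versions
    "VIN123": {"ECU_A": "1.0", "ECU_B": "2.1"},
}
FLASH_BUNDLES = {  # available flash bundles, keyed by ECU and target version
    ("ECU_A", "1.1"): "flash-ECU_A-1.1.jar",
}

def select_flash_bundle(vin: str, ecu: str, target_version: str) -> str:
    config = VEHICLE_DB[vin]                   # identify vehicle configuration
    if config[ecu] == target_version:
        raise ValueError("ECU already up to date")
    return FLASH_BUNDLES[(ecu, target_version)]  # bundle to push to the HU

print(select_flash_bundle("VIN123", "ECU_A", "1.1"))
```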
Fig. 5. Download Process
Second, triggered by a software element in the vehicle or from the maintenance site, the reprogramming controller is executed. It connects itself to the CAN Transport Layer module and drives the flash process according to the ECU flash loader specification. This process consists of the following steps:
- The reprogramming controller initiates the communication with the flash loader unit on the ECU using the CAN/KWP2000 protocol and handles the flash process. Since the reprogramming controller executes dedicated KWP2000 service calls according to the ECU flash loader protocol, it is possible to reprogram different ECUs regardless of their flash loader. Herein lies the key aspect of the generic flash process: ECUs with different requirements regarding their flash procedure are not handled by one bulky flash mechanism pre-installed on the HU and applicable to all presently known ECUs. Instead, they are treated at the most generic level of communication, the KWP2000 calls. The software that handles the individual differences (the reprogramming controller and the configuration data) is transmitted together with the payload. This strategy even accommodates future ECUs whose flash procedures may differ from those of today, and provides maximum flexibility.
- The local ECU flash loader installs the downloaded software in the ECU.
- After installation of the ECU software, a notification about the installation result is given to the reprogramming controller and passed back to the server.
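The steps above can be sketched schematically. This is illustrative Python, not the actual Java controller: the service IDs follow the ISO 14230 (KWP2000) convention that a positive response is the service ID plus 0x40, but the mock ECU, block size and data layout are invented for the example.

```python
# Schematic sketch of the generic flash process driven by the reprogramming
# controller, using KWP2000-style service calls against a mock flash loader.

SID_START_SESSION    = 0x10  # StartDiagnosticSession (programming session)
SID_REQUEST_DOWNLOAD = 0x34  # RequestDownload
SID_TRANSFER_DATA    = 0x36  # TransferData
SID_TRANSFER_EXIT    = 0x37  # RequestTransferExit

class MockEcuFlashLoader:
    """Stands in for the ECU-side flash loader reached via the ISO 15765
    transport layer on CAN."""
    def __init__(self):
        self.flash = b""
    def request(self, sid, data=b""):
        if sid == SID_TRANSFER_DATA:
            self.flash += data       # program one block into flash memory
        return sid + 0x40            # positive response code

def reprogram(ecu, image, block_size=4):
    """Drive the ECU flash loader with KWP2000-style service calls."""
    if ecu.request(SID_START_SESSION) != SID_START_SESSION + 0x40:
        return False                 # ECU did not enter programming session
    ecu.request(SID_REQUEST_DOWNLOAD)
    for i in range(0, len(image), block_size):
        ecu.request(SID_TRANSFER_DATA, image[i:i + block_size])
    ecu.request(SID_TRANSFER_EXIT)
    return ecu.flash == image        # result is reported back to the server

ecu = MockEcuFlashLoader()
print("flash ok:", reprogram(ecu, b"NEW-FIRMWARE"))
```

Because only the generic request/response layer is fixed, swapping the controller (and its service sequence) per ECU is what makes the mechanism generic.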
After successful installation of ECU software, the reprogramming controller is terminated and the complete flash bundle is uninstalled and erased from the system. With this step, system resources are de-allocated, and the interface to the ECU is removed.
5 Summary
A flexible, secure and robust mechanism for the wireless remote update of ECU software, using an OSGi compliant telematics device as a gateway between the remote download server and the local ECUs within the vehicle, has been proposed. Since the reprogramming controller dedicated to a specific ECU is downloaded together with the ECU flashware within one software package, this solution averts the danger of incompatibilities between the HU and the ECU and reduces the complexity of the HU software: only a standardized CAN API has to be provided. Moreover, since the flash bundle is erased from the system after the flash process, access to the ECU is possible only temporarily, which improves system security. Since the OSGi infrastructure may be used for software distribution, a vehicle's ECU updates can be controlled via the OSGi infrastructure and multiple vehicles can be updated in parallel.
References
[1] V. Vollmer, M. Simons, A. Leonhardi, G. de Boer: Architekturkomponenten für zukünftige Multimediasysteme im Fahrzeug; 24. Tagung "Elektronik im Kfz", Haus der Technik, Essen, Juni 2004
[2] ITEA/EAST-ITEA: Project information; www.itea-office.org
[3] Open Services Gateway Initiative: OSGi Service Platform Release 3; www.osgi.org
[4] Java 2 Platform, Standard Edition; java.sun.com
[5] ISO 11898: Road vehicles – Controller Area Network (CAN)
[6] ISO 14230: Road vehicles – Diagnostic systems – Keyword Protocol 2000
[7] ISO 15765: Diagnostics on CAN, Part 2: Network layer services
Dr. Gerrit de Boer, Dr. Werner Praefcke, Peter Engel
Robert Bosch GmbH, FV/SLH, P.O. 77 77 77, D-31132 Hildesheim, Germany
[email protected]
[email protected]
[email protected]
Keywords: remote software download, remote maintenance, telematics gateway, OSGi
High Performance Fiber Optic Transceivers for Infotainment Networks in Automobiles
T. Wipiejewski, F. Ho, B. Lui, W. Hung, F.-W. Tong, T. Choi, S.-K. Yau, G. Egnisaban, T. Mangente, A. Ng, E. Cheung, S. Cheng, ASTRI

Abstract
We have developed fiber optic transceivers (FOT) based on leadframe and molding technology for large core PCS optical fiber systems. The transmitter module contains a VCSEL light source, an electronic driver chip, and some passive electronic components. The operating wavelength of the laser chip is in the near infrared at 850nm. An internal control circuit stabilizes the optical output power over the entire operating temperature range from -40°C to +105°C to within 1dB variation. Eye diagrams taken at 500Mbps are wide open from -40°C to 105°C. The extinction ratio is larger than 10dB and the rise and fall times are on the order of 0.5ns. The FOT is highly reliable and stable over 1000 temperature cycles from -40°C to 125°C. For the receiver side we developed high speed MSM photodetectors. The large area of the MSM photodetectors relaxes the coupling alignment tolerance to the core of the optical fiber. The MSM photodetector is capable of data rates of 3.2Gb/s. At this high speed the sensitivity is better than -18dBm for the MSM photodetector co-packaged with a suitable transimpedance amplifier (TIA). The technology meets the requirements of current and future infotainment networking applications.
1 Introduction
Optical interconnects have been used extensively in telecom and datacom applications for many years. We will briefly discuss the historical background of fiber optics and its migration into automobiles.
1.1 Migration of Photonics Applications
Figure 1 shows the migration of photonics applications over the last decades. Since the 1980s fiber optics has been widely applied in long-haul telecom systems. These systems use quartz glass single mode fibers of 9 μm core diameter to maximize distance and speed. In the early 90s, fiber optics applications penetrated from telecom to datacom systems. Transmission distances are as short as 100m or even less. Multimode optical fibers of 50 or 62.5 μm core diameter are employed instead of the single mode fibers in order to relax the optical coupling tolerances. The packages of transceivers migrated from butterfly packages to TO-can packages, which have much lower cost. The relaxed tolerances and higher manufacturing volume enabled interconnects of lower cost compared to typical telecom systems.
Fig. 1. Photonics applications migration from high performance driven telecom to datacom to low cost bitcom
Since the beginning of this millennium, new applications [1-4] have been emerging in the fields of automotive, industrial control, consumer electronics and optical backplane interconnects (Fig. 2). These applications are summarized as bitcom. They typically exhibit transmission distances from 10cm to 30m. Since the required bandwidth-distance product is smaller, one can use large core fibers such as plastic optical fiber (POF) [5] and polymer cladded silica (PCS) fiber. The core diameters of these fibers are normally 0.2mm to 1mm. The large optical core of POF or PCS fibers relaxes the mechanical alignment tolerances of the optical components. As a result, passive alignment can be realized. Conventional electronics packaging technologies such as automatic pick-and-place and transfer molding can be employed in the mass production of optical transceivers. The manufacturing cost of optical interconnects can also be substantially reduced due to the relaxed mechanical tolerances. From telecom to datacom to bitcom, the typical cost of fiber optic transmitter components can be reduced by about a factor of ten, depending on specifications and requirements.
Fig. 2. Short distance optical interconnect applications with their typical data rates: A) automotive infotainment and safety systems, B) industrial control systems, C) home networking, D) optical backplanes in high-end computers
1.2 Fiber Optics Technology for Automotives
Fiber optic networks were introduced to automobiles several years ago. MOST and IDB1394 are standards for infotainment systems [6-7]. Many European car manufacturers have already adopted the MOST system in their car models. The fiber optic transceiver production volume for automotive applications is already several million units per annum. Besides the infotainment system, fiber optics has also been introduced into safety systems: the Byteflight system of BMW [8] pioneers fiber optic links in the airbag control system.
Tab. 1. Benefits of fiber optics
The key benefits of fiber optic links are the absence of electromagnetic interference (EMI), both in terms of noise creation and susceptibility. In addition, fiber optic cables are thin, light weight, and highly flexible. The fiber optic connectors are robust and can easily carry high speed signals up to several hundred Mb/s (Tab. 1). A more detailed comparison between copper wires and optical fiber is shown in Table 2. The unshielded twisted pair (UTP) and shielded twisted pair (STP) cables are most commonly used, as well as coaxial-type cables for other high speed applications. The EMI immunity, light weight, and small bending radius of optical fibers are their main benefits for automotive applications.
Tab. 2. Comparison between copper wires and optical fiber.

1.3 Physical Layer of Infotainment Networks
We develop fiber optic transceiver (FOT) components for automotive applications. Figure 3 illustrates how these components would be placed in a typical ring architecture of the infotainment system. All infotainment devices, such as the radio head unit, the DVD player, or the GPS navigation, are connected by an optical fiber. A pair of FOT transmitter and FOT receiver is placed in each device. Currently, plastic optical fiber (POF) and 650nm LEDs are used in the fiber optics systems [1]. These systems exhibit a power budget of around 14dB and an operating temperature up to 85°C or 95°C. Therefore the current fiber systems are limited to the passenger compartment. For roof top implementations a maximum temperature of 105°C would be required, and for the engine compartment even up to 125°C. Polymer cladded silica (PCS) fiber is under consideration to overcome the current operating temperature limitations. This fiber can be operated even at 125°C.
Fig. 3. ASTRI transmitter and receiver components for the infotainment system in automobiles. The various infotainment devices (such as, for example, radio "A," CD player "B," navigation system "C") are connected in a fiber optic ring.
Vertical-cavity surface-emitting lasers (VCSELs) are suitable light sources for the new PCS fiber. The VCSEL has a very narrow beam emission angle. Therefore the light output of the VCSEL can be easily coupled into the 200 μm core of the PCS optical fiber. Since the VCSEL is a laser element, its maximum modulation speed is very high (>>GHz). VCSELs provide data transmission in the Gb/s range. Thus, optical systems with VCSELs are future proof for higher speed requirements in new systems. This is an advantage in particular for video-type applications. Table 3 shows a comparison of the characteristics of VCSELs and other light sources used in optical fiber systems. VCSELs combine the advantages of LEDs, such as easy packaging, with the high performance of edge-emitting lasers. Thus, they are almost the ideal choice for short distance optical interconnects. Resonant-cavity LEDs (RCLEDs) are similar to conventional LEDs, but they can provide somewhat higher speed. Their fabrication complexity is considered equivalent to that of VCSELs. However, they are not yet widely used in optical communication applications. VCSELs, on the other hand, have been employed in datacom modules for many years. Their production volume is millions of units per annum. VCSELs have also shown good reliability results in various applications.
Tab. 3. Optical sources for optical fiber systems

1.4 Comparison of Physical Layer
A comparison between the VCSEL/PCS fiber system and the LED/POF system is shown in Tab. 4. The VCSEL/PCS fiber system exhibits a power budget of around 21dB and an operating temperature range from -40°C to 105°C. In the future the operating temperature can potentially be increased to 125°C. The higher power budget enables the system designer to incorporate more in-line connectors per link. Thus, the system flexibility is greatly improved.
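The effect of the larger power budget on the number of in-line connectors can be illustrated with a back-of-the-envelope link budget. The 21dB (VCSEL/PCS) and 14dB (LED/POF) budgets come from the text; the per-connector loss, fiber loss and system margin below are assumed example values, not measured figures.

```python
# Sketch of a simple optical link budget: budget minus fiber loss and margin,
# divided by the per-connector insertion loss, gives the connector count.
# Loss and margin values are illustrative assumptions.

def max_inline_connectors(budget_db: float, fiber_loss_db: float,
                          connector_loss_db: float, margin_db: float) -> int:
    """How many in-line connectors fit into the optical power budget."""
    remaining = budget_db - fiber_loss_db - margin_db
    return max(0, int(remaining // connector_loss_db))

# assumed: 10 m link at 0.2 dB/m fiber loss, 1.5 dB per connector, 3 dB margin
fiber_loss = 10 * 0.2
for name, budget in [("LED/POF", 14.0), ("VCSEL/PCS", 21.0)]:
    print(name, max_inline_connectors(budget, fiber_loss, 1.5, 3.0))
```

Under these assumptions the 7dB of extra budget translates directly into several additional in-line connectors per link, which is the flexibility gain described above.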
Tab. 4. Comparison between VCSEL/PCS fiber system and LED/POF system.
The PCS fiber has the advantage of low optical loss at the standard VCSEL wavelength of 850nm. Standard POF is not suitable for this wavelength, since the optical loss is too high. Even at the loss minimum wavelength of 650nm the attenuation of POF is still relatively high. Another disadvantage is that the absorption minimum covers only a narrow spectrum. Therefore the loss of POF increases when the operating wavelength of the 650nm LED moves to longer wavelengths at higher temperatures. This limits the total power budget of the POF link. For telecom and datacom applications glass fibers are used because of their superior transmission performance in terms of speed and distance. The quartz glass single mode fiber has a very low optical loss of 0.2dB/km at 1550nm. At 1310nm the optical loss is 0.4dB/km and the chromatic dispersion is zero. Therefore transmission capacity in the Tb/s range has been demonstrated with single mode fibers. For shorter distances of hundreds of meters, multimode fibers are suitable. Table 5 summarizes the performance parameters of the various types of optical fiber. For automotive applications PCS fiber seems to be a good choice, because it also offers a small bending radius of 15mm or less.
Tab. 5. Comparison of various optical fibers used in telecom, datacom, and bitcom

2 Fiber Optic Transceivers
We have developed small size VCSEL-based fiber optic transceiver (FOT) modules with plastic packages for PCS fiber systems [9]. A photo of the FOT modules is shown in figure 4. The module on the left is the transmitter, the one on the right is the receiver module. The outer form factors of the two modules are mirror images of each other. The module dimensions of 9.7mm x 6.2mm x 3.6mm are very small compared to standard SFP modules. The maximum operating temperature of these modules is up to 105°C, much higher than that of typical datacom (85°C) or telecom (70°C) devices.
Fig. 4. High speed fiber optic transmitters (FOT) for a wide operating temperature range from -40°C to 105°C. Left: transmitter module (Tx), right: receiver module (Rx)

2.2 Output Power Stability
One challenge for the transmitter design is to stabilize the optical output of the VCSEL at high temperatures. Telecom products use a thermo-electric cooler to stabilize the device temperature, but its cost would be too high for bitcom products. In our approach we use a monitoring photodetector (MPD) to receive part of the optical output and feed the photocurrent back into the laser diode driver (LDD). Figure 5 shows the electrical circuit diagram of the transmitter. The LDD has an automatic gain control function to adjust the bias condition of the VCSEL according to the feedback photocurrent. An additional external input signal can be used to level the average laser output power.
Fig. 5. Electrical circuit diagram of the VCSEL based fiber optic transmitter.
We have obtained a stable optical power output with less than 0.4dB change over a wide temperature range from -40°C to 105°C, as shown in figure 6. This excellent value enables a tight output power specification of the transmitter.
The tight power specification results in a better link margin for the transmission system.
Fig. 6. FOT optical output power variation over a temperature range from -40°C to 105°C.

2.3 Optical Coupling Tolerance
We have designed an optical coupling system to couple the VCSEL output power into the PCS fiber. We optimized the optical coupling design to achieve a long working distance with a wide lateral alignment tolerance. Figure 7 shows a schematic of the optical coupling scheme.
Fig. 7. Schematic optical coupling design.
A ray tracing model was built to calculate the coupling efficiency. The coupling efficiency is simulated for different VCSEL offset positions and fiber offset positions. By using the optimized coupling lens, the -3dB lateral coupling tolerance of the VCSEL transmitter to the PCS fiber is around ±100 μm. This good value is obtained over a long longitudinal working distance of 500 μm. Figure 8 displays some measurement results.
Fig. 8. Coupling tolerance of the VCSEL transmitter to PCS fiber.

2.4 High Speed Performance
The current MOST infotainment system runs at a data rate of 25Mb/s. Since the data format is a bi-phase signal, the physical transmission rate is 50Mb/s. The VCSEL based FOT can easily meet the timing specifications of the 50Mb/s data link. The fast turn-on and turn-off signal transitions provide a low jitter value for the entire optical system.
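The doubling from 25 Mb/s of data to a 50 Mb/s physical rate follows directly from the bi-phase format: each data bit is transmitted as two half-bit line symbols. The sketch below uses one common bi-phase (Manchester-style) polarity convention for illustration; the exact line-code details of MOST are not specified in the text.

```python
# Sketch of why a 25 Mb/s bi-phase coded stream occupies 50 Mb/s on the
# physical medium: every data bit becomes two half-bit symbols, giving a
# guaranteed mid-bit transition. The polarity convention is one common choice.

def biphase_encode(bits):
    symbols = []
    for b in bits:
        symbols += [1, 0] if b else [0, 1]   # two line symbols per data bit
    return symbols

data = [1, 0, 1, 1, 0]
line = biphase_encode(data)
print(len(data), "data bits ->", len(line), "line symbols")
```

The guaranteed transition in every bit cell is what makes the code self-clocking, at the cost of the doubled line rate noted above.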
Fig. 9. FOT eye diagrams at 500Mb/s data rate for temperatures of (a) 25°C, (b) 95°C, and (c) 105°C.
The VCSEL FOT is also capable of much higher data rates in the Gb/s range. Other applications such as the IDB 1394 S400 standard require a data rate of 500Mb/s. Therefore we demonstrate the high speed performance of the FOT at the higher data rate of 500Mb/s. Figure 9 depicts the eye diagrams of the module at a 500Mbps data rate for the operating temperatures of 25°C, 95°C, and 105°C. The eye diagrams are wide open, indicating good and error-free transmission performance. The extinction ratio is greater than 10dB for all temperatures. The rise and fall times are on the order of 0.5ns.
2.5 Reliability Test Results
Preliminary reliability studies of the VCSEL transmitters were carried out. These studies are important because the coefficient of thermal expansion (CTE) of the encapsulating materials is usually greater than that of the VCSEL by one or two orders of magnitude. The CTE mismatch causes thermal stress on the VCSEL during thermal cycling. The VCSEL reliability may be degraded if the stress is too high. Figure 10 shows some reliability test results for temperature cycling from -40°C to +125°C. The optical output power of the VCSEL transmitters is highly stable over 100 temperature cycles. The change in output power is less than 0.5dB. This excellent result shows that the package design and materials are well suited for high reliability.
Fig. 10. Reliability results of VCSEL transmitter for temperature cycling from -40°C to 125°C.
3 MSM-PD Receiver

3.1 Photodetector for Large Core Fiber
The coupling of the fiber to the receiver becomes more challenging when the fiber core diameter is larger. Figure 11 shows the photodetector technology and optical media used for various optical transmission systems from very short reach up to long reach (40km). For low data rates below 1Gbps, the Si pin-PD is a good candidate for receiving light from POF, PCS or MM-fiber. For high data rates up to 3.2Gbps or 10Gbps, GaAs or InGaAs pin-PDs are used due to their higher speed. However, their application is usually limited to SM-fiber or MM-fiber systems, since their aperture size is designed to be around 70 μm; otherwise the RC time constant may limit the bandwidth.
Fig. 11. Photodetector technology and optical media used for various optical transmission systems.
POF and PCS fiber are actively being developed for Gigabit Ethernet. Gigabit per second transmission over POF has been demonstrated by several groups [10-11]. The demand for large area, high bandwidth photodetectors is increasing. For POF/PCS systems transmitting at data rates from 1Gbps to 3.2Gbps, the GaAs metal-semiconductor-metal photodetector (MSM-PD) is an attractive solution.
3.2 MSM Photodetector Design
The MSM photodetector consists of back-to-back Schottky diodes that use an interdigitated electrode configuration on top of the active area. Figure 12 shows a cross-sectional view of the device. The bias voltage V is applied between electrode pairs to create a depletion region within the GaAs semiconductor.
Fig. 12. Schematic of an MSM-PD with metal electrodes of width w and spacing s.
The MSM-PD has a lower capacitance per unit area than a pin-PD, thus allowing a larger active area for the same bandwidth. Another advantage of GaAs MSM-PDs is the option to monolithically integrate them with transimpedance amplifiers, as their fabrication processes are compatible. However, one disadvantage is the smaller responsivity due to the contact shadowing effect of the metal fingers. We have fabricated MSM-PDs on 4" GaAs wafers as shown in Fig. 13. The aperture diameter is 250 μm. This enables a loose alignment tolerance and a high bandwidth. The chip dimensions are 730 μm x 485 μm. The chip thickness is 200 μm.
Fig. 13. MSM-PD chip produced on a 4" wafer.
Figure 14 shows the drift time and RC time constant of an MSM-PD as a function of the finger spacing s.
Fig. 14. Drift time td, RC time tc, and resulting total time constant for an MSM-PD as a function of finger spacing s.
A larger finger spacing results in a drift-time-related speed limitation and also requires a higher bias voltage. For smaller finger spacings, the capacitance is the speed-limiting factor of the MSM-PD.
Since the capacitance of the MSM-PD is reduced by a factor of 0.28 in comparison to a pin-PD of the same diameter, the resulting speed at larger diameters is higher. Figure 15 shows a comparison of the time constants for a large area MSM-PD with finger spacing s = 2 μm and a pin-photodiode with an absorbing layer thickness of 2 μm. The MSM detector is significantly faster for diameters of 150 μm and above. For smaller diameters the drift time is more dominant, and thus the speed of the pin-diode is comparable to that of the MSM-PD.
Fig. 15. Comparison of time constant for large area MSM-PD and pin-PD.
3.3 Capacitance Measurement
The capacitance of the MSM-PD with 250 μm diameter at different bias voltages is shown in Fig. 16. The capacitance values are around 0.8pF at bias voltages from 0.5V to 10V. For comparison, the capacitance of a pin-PD with a diameter of 100 μm is 0.8pF at 5V bias voltage. From this, the capacitance of a pin-PD with 250 μm diameter is extrapolated to be 5pF under the same conditions.
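The extrapolation above is simple area scaling: a pin-PD behaves approximately as a parallel-plate capacitor, so its capacitance grows with the square of the diameter. The short check below reproduces the quoted 5pF figure from the measured 0.8pF reference value.

```python
# Quick check of the area scaling used in the text: pin-PD capacitance
# scales with active area (C proportional to d^2), so the measured 0.8 pF
# at 100 um diameter extrapolates to 5 pF at 250 um diameter.

def scale_capacitance(c_ref: float, d_ref_um: float, d_um: float) -> float:
    """Parallel-plate area scaling of capacitance with diameter."""
    return c_ref * (d_um / d_ref_um) ** 2

c_250 = scale_capacitance(0.8e-12, 100, 250)
print(round(c_250 * 1e12, 2), "pF")   # matches the 5 pF quoted in the text
```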
Fig. 16. Capacitance of the MSM-PD at different bias voltages in comparison with pin-PD.
3.4 Coupling Tolerance Comparison
Figure 17 shows the comparison of coupling tolerances between an MSM-PD of 250 μm active diameter and a pin-PD of 100 μm active diameter. The MSM-PD has a 3dB bandwidth of 2.2GHz, whereas the pin-PD has a 3dB bandwidth of 2GHz. With the use of the MSM-PD, significant improvements of the coupling tolerances of 80 μm and 4000 μm are obtained in the lateral and longitudinal axes, respectively.
Fig. 17. Comparison of coupling tolerances of MSM-PD and pin-PD to PCS fiber of 200 μm core diameter.
3.5 High Speed Performance
The eye diagram and bit error rate curve of the MSM-PD packaged with a transimpedance amplifier (TIA) are shown in Fig. 18. A PCS fiber is used to couple the optical source to the MSM-PD. The wavelength of the optical signal is 850nm. The signal is modulated at 3.2Gb/s with an average power of 10dBm. From the bit error rate curve, we interpolate a sensitivity of -18dBm at an error rate of 10⁻¹² for the PCS fiber system at 3.2Gb/s.
Fig. 18. Eye diagram and bit error rate of MSM-PD receiver coupled with PCS fiber. Wavelength = 850nm, data rate = 3.2Gb/s.
The MSM photodetector shows excellent high speed performance in large core optical fiber applications. Its large active area enables large alignment tolerances in current optical networks, while its speed provides the option for higher speed applications in the future, including video systems that require much larger bandwidths.
4
Conclusion
We have developed high performance fiber optic transceivers for infotainment networks in automobiles. The VCSEL based transmitter modules can operate up to 500Mb/s over a wide ambient temperature range from -40°C to 105°C. The optical output power is stabilized within 1dB variation. The extinction ratio is greater than 10dB. The small size leadframe style package provides loose alignment tolerances of ±100µm in the lateral axis for a working distance of 500µm to large core PCS optical fibers. The large alignment tolerances for the transmitter and in-line connectors enable very cost effective optical networks. The receiver for high speed large core optical fibers is based on MSM technology. The MSM receiver exhibits a sensitivity of -18dBm at 3.2Gb/s for a PCS fiber link. These transceivers overcome the limitations of current fiber optic technologies and provide higher performance in terms of operating temperature, speed and manufacturability. The VCSEL transmitter and the MSM based receiver modules meet the requirements of current applications and are future proof for new applications requiring higher data rates. Such systems, e.g. video links, will be introduced in the next generation of automobiles.
Acknowledgement We gratefully acknowledge the great support from our suppliers and partners. This work was financially supported by funds from the Hong Kong Innovation and Technology Commission (ITC).
Torsten Wipiejewski, F. Ho, B. Lui, W. Hung, F.-W. Tong, T. Choi, S.-K. Yau, G. Egnisaban, T. Mangente, A. Ng, E. Cheung, S. Cheng
ASTRI
5/F, Photonics Center, Hong Kong Science Park
Hong Kong
[email protected] Keywords:
VCSEL FOT, fiber optic transceiver, high speed photodetector, MSM photodetector, PCS optical fiber link, infotainment system
Components and Generic Sensor Technologies
Automotive CMOS Image Sensors S. Maddalena, A. Darmont, R. Diels, Melexis Tessenderlo NV Abstract After penetrating the consumer and industrial world for over a decade, digital imaging is slowly but inevitably gaining market share in the automotive world. Cameras will become a key sensor for increasing car safety, driving assistance and driving comfort. Automotive image sensors will be dominated by CMOS sensors, as the requirements differ from those of the consumer, industrial and medical markets. Dynamic range, temperature range, cost, speed and many other key parameters need to be optimized. For this reason, automotive sensors differ from other markets' sensors and need different design and processing techniques in order to achieve the automotive specifications. This paper shows how Melexis has developed two CMOS imagers to target the automotive safety market, and automotive CMOS imagers in general.
1
Introduction
For the past decade, CMOS and CCD image sensors have driven the creation of many new consumer applications like mobile camera phones. The introduction of imaging sensors in cars has suffered from a serious lag, because high-speed buses, high quality displays and compact low-cost signal processing units and memories were not yet available. Also, previous generation camera sensors did not yet perform sufficiently well and did not sufficiently match the stringent automotive needs. Today, with the availability of newly developed automotive camera sensors, Tier 1 suppliers can create revolutionary safety and comfort features for the car of tomorrow. Current camera ICs with high dynamic range, high sensitivity over a broad spectrum (both visible and near infrared), fast frame rate, non-blooming behaviour, low cost and in-field testing capabilities give automotive designers the ability to offer drivers options previously not possible. Two different kinds of applications exist, both with clearly distinct requirements and needs: Human Vision Applications, typically for comfort systems,
and Machine Vision Applications, typically safety critical and to be used in driver assistance systems.

In the first application group, the task of the imager is to show one or several camera pictures to the driver. These applications usually need a small and autonomous colour sensor, most of the time generating an NTSC or equivalent signal for plug-and-play operation. In this set we find, for example, park assist, assisted overtaking (seeing the opposite lane from another point of view, like the left mirror) and some blind-spot detection systems. A sub-set of the display set is night vision, where a black and white sensor of a higher cost and a higher resolution might be preferred.

The second set of applications deals with image processing and usually safety related applications like lane tracking, seat occupancy for smart airbags, automatic blind-spot detection, automatic cruise control, etc. In these applications it is crucial that the sensor is a snapshot, black and white + NIR, fully testable, highly reliable CMOS imager. As the connection is made to a computation unit and not a display, a parallel digital output is preferred to the common video standards. Although the Melexis sensors were designed to meet the demanding criteria of safety critical applications and perfectly match the machine vision applications' needs, they can also easily be used to build prototypes of the first category (Human Vision).

A good example to explain the need for an "intelligent camera" in the image processing category is unintended lane departure. Such events were cited in 43% of all fatal accidents occurring in 2001. Rumble strips on the outside of the road have been proven to reduce lane departure accidents by 30-55%. Intelligent cameras could be applied to provide lane departure warning on nearly all roads. Once consumers realize the significance of an intelligent camera, it will become a "must have" feature.
Existing systems just give a warning (e.g. a sound alarm, vibrations in the seat, a flashing light), but take no real action. A final goal is a car that can control the steering and braking based on intelligent camera and other sensor inputs, but this is still very far away. The intermediate step is smooth guidance through a combination of multiple sensor inputs. In this paper we will mostly discuss sensors for machine vision (image processing) and safety related applications in particular. A CMOS imager in automotive applications must meet several goals: low price, good performance at high and low temperature, little or no ageing under the influence of the severe environmental factors found in motor vehicles, and rigorous testability in production and in the field.
In response to these demanding automotive requirements, Melexis has developed two CMOS snapshot camera chips for both inside and outside vision requirements. This paper shows how these two parts, MLX75006 (CIF, 352x288 = 0.1Mpixels) and MLX75007 (Panoramic VGA, 750x400 = 0.3Mpixels), have met these challenges. The chip design ensures a high sensitivity over a wide spectrum and an excellent dynamic range, all at a high frame rate, and without CCD imager issues like blooming and difficult control. For surviving the harsh automotive conditions, an overmoulded plastic package with an integrated focus free glass lens stack has been developed. This compact chip-package-lens combination can meet the price expectations of the market, while allowing the necessary flexibility in field of view and NIR optimization. These devices are fully verified in production and can be tested for integrity in real time in the field.
2
Automotive Imager Requirements and Melexis Solutions
2.1. Technical and Commercial Requirements
As discussed in the introduction, an automotive image sensor should comply with several different requirements in order to meet the automotive market demands:
1. Good image quality: high sensitivity over a wide spectrum and a wide dynamic range, all over a broad temperature range of -40°C ... +105°C and under vibration (snapshot). Some applications also require colour.
2. Reliability: a camera designed for good test coverage, well tested in production, and allowing the user to do in-field self testing and real time integrity checks.
3. Compact system at optimum cost: a highly integrated camera with many internal features (ADC, ...), easily programmable and preferably with a matched glass lens system, again at minimum size and cost but without compromising the automotive qualification requirements.
2.2.
Solutions for Good Image Quality
Although it does not necessarily mean the same thing for both, "good image quality" is required for Machine and Human Vision Applications alike. Human Vision Applications put more emphasis on the aesthetic aspects of the image, while Machine Vision Applications focus more on sharpness, allowing edge
and pattern detection; "seeing" is key for both. The sensor needs to be sensitive enough to give a "good" image under all circumstances. As "seeing" is key, Melexis uses a process optimized for NIR sensitivity, yielding the response curve shown below, with the peak of sensitivity at a wavelength of 700nm and a high sensitivity up to 900nm. A high absolute sensitivity is achieved by using the relatively large 10µm x 10µm pixel size.
Fig. 1.
Spectral response relative to 555nm
As light conditions can change drastically (full sunlight, shadowing, tunnels, snow and rain, night, etc.) and a good picture is required under all those circumstances, an automotive image sensor needs an extremely high dynamic range. Dynamic range is one of the most limiting factors of common consumer image sensors. Those sensors never have to deal with dark objects in the shadows surrounded by light reflecting off a wet road with the sun low at the horizon. Consumer sensors are usually able to take pictures of very bright and very dark scenes, but not both at the same time. This is a must-have feature for automotive image sensors. CMOS image sensors offer the possibility of non-linear pixel responses. This non-linearity can be a piecewise linear response (also called multiple slopes), a logarithmic response or a mix of different responses. Melexis' solution is the piecewise linear response, where the total integration time is distributed over all the pixels so that the brightest ones see a lower integration time than the darkest ones and thus do not saturate. Not saturating allows detection of objects in very bright areas and thus extends the dynamic range to higher limits. As the darkest pixels see a normal linear response, their noise level remains low and the dynamic range close to the lowest limit is
unchanged. Together, this results in a significantly extended dynamic range, exceeding 100dB. Exact dynamic range calculations can differ strongly, depending on the calculation methods and assumptions used. Optimistic calculations can reach up to 120-130dB, but one could wonder whether all noise sources and temperature effects are taken into account.
Fig. 2.
Examples of high dynamic range pictures
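To make the 100dB figure concrete, a back-of-the-envelope calculation can be sketched. The numbers below (100 ke- full well, 30 e- noise floor, a 100x extension of the representable illumination through the multiple-slope response) are illustrative assumptions, not Melexis specifications:

```python
import math

def dr_db(max_signal, noise_floor):
    # dynamic range in dB: ratio of largest representable to smallest
    # detectable signal
    return 20 * math.log10(max_signal / noise_floor)

# hypothetical linear pixel: 100 ke- full well, 30 e- noise floor
linear = dr_db(100_000, 30)            # ~70 dB

# piecewise-linear (multiple-slope) response: the brightest pixels get a
# shorter integration time, so illumination levels e.g. 100x above the
# linear full well still map below saturation, adding 40 dB at the top
extended = dr_db(100_000 * 100, 30)    # ~110 dB
print(round(linear, 1), round(extended, 1))
```

This shows how compressing the bright end extends the range beyond 100dB while the dark-end noise floor, and hence the lower limit, stays unchanged.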
As the automotive world requires temperature ranges from -40°C up to at least +85°C, the image sensor must be designed and tuned to fit this broad range. By optimizing both design and technology, the Melexis sensors offer a guaranteed operational optical quality level from -40°C up to +105°C. The physical limitation of good high temperature behavior is leakage. This effect is an exponential function of temperature and degrades the image sensor's response in a non-uniform and non-predictable way, limiting the dynamic range and creating white spots randomly over the image map. The Melexis sensors include an internal "hot spot" rejection and a general black level compensation to counteract some of the high temperature leakage effects, exceeding the +85°C limit with good image quality.
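The exponential nature of the leakage can be illustrated with the common rule of thumb that silicon dark current roughly doubles every ~7°C (a generic approximation, not Melexis data):

```python
def leakage_factor(t_celsius, t_ref=25.0, doubling_step=7.0):
    # rule of thumb: silicon dark current roughly doubles every ~7 degC
    return 2.0 ** ((t_celsius - t_ref) / doubling_step)

factor = leakage_factor(105.0)  # roughly 2.8e3 under this rule of thumb
print(round(factor))
```

Going from room temperature to 105°C thus raises leakage by three orders of magnitude, which is why hot spot rejection and black level compensation are needed at the top of the temperature range.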
Fig. 3.
The same scene at 25°C and at 105°C
Figure 3 compares two identical scenes at different temperatures. The first picture is taken at room temperature and must be compared to the second picture taken at 105°C, with "hot spot" cancellation and offset compensation, but with identical integration time and speed settings.
In order to sustain the high quality level and avoid skewing of the images under all (automotive driving & vibration) conditions, a global shutter, where all pixels integrate and "freeze the image" at the same moment in time, is a key feature. Whereas for Human Vision Applications a global shutter is probably overkill, and not worth the disadvantages (a more complex pixel), for safety critical Machine Vision Applications a global shutter is a must. Whereas in a "rolling shutter" camera every pixel or line samples the image at a different moment in time, in a "global shutter" camera every pixel in the whole array integrates and samples (locks) the image at the same identical moment. Together with the possibility of very high frame rates, the Melexis cameras are capable of monitoring very fast moving objects, like a rotating fan, without any blurring effects. In order to design a good global shutter camera, one needs an efficient "memory" cell in every individual pixel and a sufficiently advanced technology to keep the image information stored in this cell during a complete read out cycle. Under extreme sunlight conditions, this is not guaranteed for every global shutter sensor on the market. The parameter with which this limitation is measured is called "shutter efficiency" and its behaviour depends strongly on the pixel design itself.
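The rolling-shutter skew can be quantified with a small model. The numbers below (2000 px/s apparent motion, 400 rows, a 16.7ms frame readout) are illustrative assumptions, not sensor specifications:

```python
def rolling_shutter_skew_px(speed_px_per_s, n_rows, frame_readout_s):
    # in a rolling shutter each row samples the scene one line time later
    # than the previous one, so a horizontally moving object is sheared by
    # the motion accumulated over the whole frame readout
    t_line = frame_readout_s / n_rows
    return speed_px_per_s * t_line * (n_rows - 1)

# object crossing the image at 2000 px/s, 400 rows read out in ~16.7 ms
print(round(rolling_shutter_skew_px(2000, 400, 1 / 60)))  # ~33 px shear
# a global shutter samples all rows simultaneously: the skew is zero by design
```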
Fig. 4.
Some colour pictures
In Machine Vision Applications, complex algorithms are constantly searching to identify edge transitions, correlations or other abrupt intensity changes. No
human brain can help to interpret strange artefacts, and any extra software overhead to compensate for the potential misalignments due to a rolling shutter is preferably avoided. Therefore, for this group of applications, a global shutter is an absolute requirement. On the other hand, the colour option is a must for Human Vision Applications (due to the market requirements of demanding high end users) but not always necessary for Machine Vision (only required if colour offers a significant difference in machine interpretation). Melexis and its partners have developed colour filters that can be added on top of a black & white camera in order to make a true colour camera. We have chosen a repetitive colour pattern of RGB filters, known as the Bayer pattern. Each block of 2x2 pixels then equals a 3-colour pixel and can be used to reconstruct a colour picture. As shown in figure 4, a high fidelity can be reached.
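The simplest reconstruction of a colour pixel from a 2x2 Bayer block can be sketched as follows (a minimal illustration of the principle; practical demosaicing uses interpolation across neighbouring blocks):

```python
def bayer_block_to_rgb(block):
    # block is a 2x2 tile of a Bayer RGGB mosaic:
    # [[R, G],
    #  [G, B]]
    (r, g1), (g2, b) = block
    return (r, (g1 + g2) / 2.0, b)   # average the two green samples

# uniform grey patch: all four Bayer samples equal -> a neutral RGB triple
print(bayer_block_to_rgb([[128, 128], [128, 128]]))  # (128, 128.0, 128)
```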
2.3. Solutions for Automotive Reliability & Testability for Image Sensors
As for any other component that plays a key role in automotive safety applications, reliability is an important topic that has to be addressed during the design and production phase of an automotive image sensor. The Melexis strategy for automotive image sensors follows the same line as used for all other automotive components:
1. design for testability on sensor level (the target is to efficiently reach high test coverage on chip) and on application level by using DFMEAs,
2. the automotive AEC-Q100 set of qualification tests,
3. 100% optical and electrical tests during production at different temperatures,
4. several built-in test modes and real-time integrity checks to allow the user to monitor the application in the field.
Getting high test coverage in a complex sensor chip like an imager is not straightforward. Therefore, special attention has been paid to this during the design cycles. The high test coverage is reached by using full electrical separation of some blocks, using advanced software tools to write optimized test patterns and, last but not least, using the continuous data stream of the sensor to incorporate information from internal nodes during operation. Together with a lead Tier 1 supplier, several DFMEA discussions were held on application level, in order to make sure Melexis can offer an image sensor meeting the automotive standards.
Before releasing the imager to automotive production, the standard automotive qualification tests are performed on the parts, all at appropriate temperature levels. As only standard processes are used, no major problems are to be expected. However, the less standard colour filter technology and the integrated lens stack (see further) will also be qualified following the same guidelines. In order to reach the automotive target of end-of-line failure levels of a few ppm, 100% of the outgoing production parts must be tested. With a sensor designed for test and high coverage, an efficient and optimized test procedure can be installed. Here, Melexis uses the experience and know-how built up over several years of testing another safety critical optical part, the MLX90255 linear optical array. Melexis is capable of doing a full optical & electrical test, both at wafer level and at final test, at any temperature between -40°C and +125°C with automated, high volume test equipment.
Fig. 5.
Mass volume automated test equipment
Building a robust safety critical application requires advanced integrity checks on all critical building blocks. As most machine vision applications require a powerful processing unit, and the built-in space is very limited, some systems are made as a so called "two-box solution": one part consists of the imager, lens and some discrete electronics placed at the critical sensing area, while the processing unit and voltage references are placed in another part inside the car. Such a system requires not only good validation of the building blocks themselves, but also good monitoring of the communication lines. The Melexis imagers offer several features to allow the user to indeed build a robust application, even using the two-box concept. After completion of every image, several known test patterns are sent by the camera to the processor, allowing the user to validate both uplink and downlink communication lines
(also the command word used is sent back). As modern image sensors integrated in standard CMOS technology comprise several other features like an internal ADC, white spot correction, advanced programming features etc., it is also important to have integrity verifications of this system-on-a-chip itself available. Interleaved after every line, and at the end of a full picture, known test patterns are sent by the camera, allowing the user to verify the integrity of several internal nodes in both the analog and digital domain of the imager itself. All these integrity modes are real time and can be interleaved with normal operation with very high efficiency. However, if something goes wrong, additional, more advanced test modes are also available to the user to identify where exactly a problem might occur. To access these advanced test modes, the sensor has to be taken out of normal operation, but fast single frame switching between test mode and normal mode is possible at all times. Although the expected failure rate of electronic sensors qualified for automotive usage (including image sensors!) is in the ppm range, full visibility of all key components, including the wiring, is crucial in order to build a robust final system. The variety of test modes implemented and made available with the Melexis imagers can guarantee this. Note that first generation safety critical driver assistance systems will certainly not override the driver's decisions, but mainly provide assistance and help in potentially dangerous situations.
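On the receiving side, such checks amount to comparing the echoed command word and the known test patterns against their expected values. The sketch below is purely hypothetical: the frame-trailer layout, field names and pattern values are illustrative and do not reflect the actual MLX register map or protocol.

```python
# hypothetical frame trailer: echoed command word plus a known test pattern
# (names and values are illustrative, not the actual sensor protocol)
KNOWN_PATTERN = [0xAA, 0x55, 0xF0, 0x0F]

def frame_is_trustworthy(trailer, command_sent):
    # downlink check: the sensor actually received the command we sent
    if trailer["command_echo"] != command_sent:
        return False
    # uplink check: the known pattern survived readout, wiring and transmission
    return trailer["test_pattern"] == KNOWN_PATTERN

trailer = {"command_echo": 0x3C, "test_pattern": [0xAA, 0x55, 0xF0, 0x0F]}
print(frame_is_trustworthy(trailer, 0x3C))  # True
```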
2.4. Solutions for Optimum Size & Cost
In the automotive world, besides performance and reliability, size and cost are also criteria of extreme importance. Using the experience and know-how from the optical linear array, Melexis has further developed with its partners the plastic overmoulded cavity package concept. This solution offers several advantages:
1. it is a well known, standard packaging technology, relatively low priced and offering quality performance similar to several other plastic packages used in the automotive world;
2. although the cavity offers an easy path for the light to enter the sensor's sensitive optical area, all other parts are overmoulded, including some sensitive electronics and, of course, the bondwires and bondpads;
3. because the optical sensing area can be accessed freely, there is also the opportunity of focus free lens mounting, giving several extra advantages to the end user in terms of size, ease of handling and, in the end, total cost.
Several years of experience with the plastic cavity concept used for the optical linear array have allowed us to elaborate it into a general image sensor package technology. Using standard processes and compounds, standard automotive quality levels can be reached. MSL level 3 is standard, but with the choice of an adequate compound, MSL level 1 can be the target. Currently, leadframe based solutions like MQFP type packages are widely accepted in the automotive world, but if QFN technology receives the same acceptance level as MQFP, the same cavity technology can be applied to leadless QFN types. Among the most critical points in the reliability of a package are the bondwires and the bondpads. As those are overmoulded in exactly the same way as in any other standard automotive plastic package, no difference in quality is to be expected. As long as a sufficient guard band is kept between the cavity and the bondpad positions, the plastic cavity packaging concept gives reliability performance equal to that of standard plastic packages. As the sensitive non-optical electronics is covered with plastic, no extra metal layers are required for coverage. This allows for a cheaper sensor technology and can also improve some electrical behavior like speed or matching. Also, a thick layer of plastic is a much more efficient optical sealant than thin metal layers.
Fig. 6.
MLX75006 (CIF sensor) in MQFP44 package, with and without Integrated Lens Stack
All optical systems require one or multiple lenses. This open package technology allows a lens to be mounted directly on the sensor. An integrated lens stack consisting of multiple lens interfaces, optimized to the so called Panoramic VGA resolution of the MLX75007 image sensor, has been developed for this purpose. The lens offers a 60 degree field of view and an F# of 2.5. The optical design is matched to our sensor's performance and optimized for a broad wavelength range up to the near infrared. Edge distortion is limited to a minimum and an MTF of 40% at 50lp/mm can typically be achieved over a broad wavelength range and field of view.
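These figures roughly pin down the focal length of the stack. The sketch below uses a simple pinhole/thin-lens model and assumes the 10µm pixel pitch quoted earlier in the paper and that the 60 degree field of view applies across the 750-pixel horizontal axis (both assumptions for illustration):

```python
import math

PIXEL_PITCH_UM = 10.0   # 10 um x 10 um pixels, from the sensitivity section
H_PIXELS = 750          # Panoramic VGA horizontal resolution
FOV_DEG = 60.0          # field of view of the integrated lens stack

sensor_width_mm = H_PIXELS * PIXEL_PITCH_UM / 1000.0           # 7.5 mm
# pinhole model: half the sensor width subtends half the field of view
focal_mm = (sensor_width_mm / 2) / math.tan(math.radians(FOV_DEG / 2))
print(round(sensor_width_mm, 1), round(focal_mm, 1))  # 7.5 mm wide, f ~ 6.5 mm
```

With F# = 2.5 this would correspond to an entrance pupil of roughly 2.6mm, a size compatible with an in-package lens stack.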
When mounting a lens using the plastic package or PCB board as a reference, extra means of adjustment are required due to the high build-up of tolerances. By placing the Integrated Lens Stack (ILS) directly inside the package, with the sensor area as reference plane, and by controlling the tolerances well, we can offer a fully focus free, integrated lens solution. Besides optimum size, the ease of processing also gives several advantages to the user:
1. no focusing is required by the end user, nor is it possible to get out of focus (by design),
2. the module (sensor + lens interface) can be completely tested by Melexis before shipping,
3. the target is to offer a completely hermetically and optically sealed solution, capable of withstanding the standard soldering techniques, and as such a plug & play component, similar to a lot of other currently used sensors.
2.5
How to design an automotive vision application: "in search of the best compromise".
Crucial to the end success of any (vision) application is good judgement of all application needs and the correct choice of both hardware and software. Every building block has its specific advantages, but also its price; therefore, as always, designing well means making optimized and well balanced compromises. As described in this paper, several specification requirements of the image sensor can and should be weighed against each other: global shutter vs. rolling shutter, colour vs. black & white, pixel size and resolution. As described above, a global shutter is a very interesting feature for a camera, but it requires a complex pixel design. Therefore, if not required, a rolling shutter might be an interesting alternative, with probably a higher light sensitivity at a lower price. The same is valid for colour: nice to have, but its need should be carefully balanced and questioned in the total system, as it brings significantly lower light sensitivity and dynamic range at an increased cost. Pixel size and resolution largely determine the end cost of the imager, but can also impact the lens cost and software needs. Too large lenses will be prohibitively expensive for the mass volume automotive market. Extremely high bandwidths and processing power due to megapixel image streams at very high frame rates are probably also prohibitive for both processor requirements and EMC qualification. For some specific applications, the "old-fashioned" CCD solution might even still be the best choice nowadays. For other applications, however, a CMOS global shutter image sensor will be required. A final automotive vision application will be the end result of a series of these balanced judgements: the right parts with the optimized software to get correct performance at an acceptable market price.
3
Conclusions
This paper has presented the main criteria with which an image sensor should comply if it is to be used in the highly demanding automotive world. Besides obvious parameters like image quality, size and cost, reliability and integrity verification means will also determine whether an automotive image sensor can be integrated into a safety critical automotive application. Furthermore, the unique combination of the overmoulded open package with a focus free integrated lens stack successfully addresses the dimensional and cost requirements.
Sam Maddalena, Arnaud Darmont, Roger Diels Product Group Manager OptoElectronic Sensors Melexis Tessenderlo NV Transportstraat 1 3980 Tessenderlo Belgium
[email protected],
[email protected],
[email protected] Keywords:
CMOS camera, automotive image sensor, high dynamic range, high sensitivity, fully programmable, near infra-red, advanced package, integrated lens, in-field test modes and integrity checks
A Modular CMOS Foundry Process for Integrated Piezoresistive Pressure Sensors G. Dahlmann, G. Hölzer, S. Hering, U. Schwarz, X-FAB Semiconductor Foundries AG Abstract In this paper a CMOS foundry process is presented which allows the integration of piezoresistive pressure sensors with mixed signal electronic circuits on a single chip. Based on a 1.0µm modular CMOS technology platform, two distinct MEMS process modules have been developed. The first of these process modules is based on silicon direct wafer bonding in a pre-CMOS process and is designed for the realisation of miniature absolute pressure sensors. The second is based on a conventional bulk micromachining post-CMOS process and allows the realisation of relative and absolute pressure sensors. Prototypes of the pressure sensors have been fabricated and characterised as discrete devices. The measurements have shown that the sensors have state-of-the-art performance. A set of reliability tests has been carried out, showing that the device characteristics remain stable even under harsh environmental conditions.
1
Introduction
The use of semiconductor foundries to outsource IC production has rapidly gained importance in recent years, not least in the automotive industry. Wafer fabrication in a foundry offers considerable advantages. Foundries can provide a second source for IC manufacturers with in-house production facilities, which enables them to react more flexibly to an increase or decrease in customer demand and to manage their own fabrication capacity more efficiently. Furthermore, foundries have given smaller fabless companies, for whom the investment in production facilities would otherwise have been prohibitively expensive, the opportunity to access the market. Outsourcing is a routine procedure when it comes to the manufacturing of integrated circuits. For micromechanical sensors, however, outsourcing is still an exception, even though the advantages of an outsourcing strategy are equally compelling. An example of a class of devices where sensor manufacturers could benefit from additional flexibility and a potential reduction of fabrication
cost is piezoresistive pressure sensors. These devices are widely used and there is a rapidly increasing variety of applications in the consumer goods, process control and automotive markets. Integrated pressure sensors, where the piezoresistive sensor and the signal conditioning electronics are placed on a single chip, are used for a number of automotive applications. Monolithic integration offers substantial advantages, such as a reduction of size and weight and the potential to reduce manufacturing cost. On the other hand, monolithic integration is not always possible, because harsh environmental conditions may require the electronics to be separated from the pressure sensor. Furthermore, the initial investment cost for technology development is very substantial and is only justified if the production quantities are high. As a consequence, today there are still relatively few applications where the integrated technology has replaced the conventional two-chip solution with a discrete pressure sensor and ASIC. Manifold absolute pressure (MAP) sensors and barometric absolute pressure (BAP) sensors are examples where integrated pressure sensors are the industry standard today. However, only industry leaders such as Freescale, Bosch or Denso have managed to develop viable technologies for these applications. Only recently, with the advent of tyre pressure monitoring systems, have new players tried to enter the market for integrated pressure sensors. Most notably, Elmos presented their newly developed technology in 2004 [1]. Whether integrated pressure sensors can be used for new applications and enter new markets will depend on whether the technology can be made cost-effective compared to conventional technology. Outsourcing the fabrication of integrated pressure sensors to a semiconductor foundry offers excellent prospects for making the technology more cost-effective.
With a foundry approach, a substantial part of the initial investment required for technology development can be saved. Moreover, the need to run and maintain a production line for a highly complex process can be eliminated. With an outsourcing strategy, integrated pressure sensor technology could therefore become a viable alternative to the conventional two-chip solution, also for medium-volume applications.
2
Technological Platform - 1.0µm Modular CMOS Process
A Modular CMOS Foundry Process for Integrated Piezoresistive Pressure Sensors

X-FAB is a semiconductor foundry with many years of experience in the development and volume production of integrated circuits. Besides the company's core competence in mixed-signal IC manufacturing, there is also a great deal of expertise in the manufacture of microsystems. The technology portfolio comprises modular CMOS and BiCMOS technologies ranging from 0.35µm to 1.0µm, as well as SOI and MEMS technologies. For the development of integrated pressure sensors, the 1.0µm CMOS technology was selected as the platform. X-FAB's 1.0µm CMOS technology is well-established and suitable for automotive, consumer and industrial applications. It has a modular architecture with a core process, which allows either 1.5V or 5V digital and basic analogue functionality. Around the core process there are a number of specific modules which can be freely combined with each other. These include:
- up to 3 metal layers (including power metal 3)
- EPROM and EEPROM non-volatile memory
- advanced analogue options: poly-poly capacitors, bipolar transistors, high-ohmic resistors
- high voltage transistors
- optical window for photo diodes
- ESD protection
In order to facilitate circuit design, there is a comprehensive library of primitive devices and a set of more complex analogue and digital library cells. Most of the common EDA platforms are supported with high-precision device models. The technology has been used for the implementation of diverse automotive ICs, including sensor interfaces. For numerous applications it has been demonstrated that the reliability requirements of the automotive industry can be met.
3
MEMS Process modules for integrated Pressure Sensors
This chapter describes the process modules that allow the monolithic integration of piezoresistive pressure sensors in the 1.0µm CMOS technology. There are two distinct processes: a pre-CMOS process based on wafer bonding and a post-CMOS process based on bulk silicon micromachining.
3.1
Pre-process Module Based on Wafer Bonding Technique
The first process module is a pre-CMOS process module, which means that the MEMS process steps in which the pressure sensitive membrane is created are carried out prior to the CMOS process. The process flow is shown in figure 1.
Fig. 1.
CMOS process with wafer bonding pre-process module
The basic idea of manufacturing pressure sensors using direct silicon wafer bonding was first presented by Petersen in 1988 [2]. The process starts with a silicon wafer, in which a cavity is created by anisotropic silicon etching. Next, a second wafer is bonded onto the first silicon wafer using silicon fusion bonding under vacuum. After annealing, the wafer stack is thinned back, first by grinding and then by chemical mechanical polishing (CMP), until only a thin layer of silicon remains over the cavity. This silicon layer over the cavity is the diaphragm of the piezoresistive pressure sensor. The vacuum inside the cavity provides the reference pressure for the absolute pressure sensor. At this point, the MEMS pre-process is completed and the CMOS front end process begins. The n-well implantation into the p-type substrate is also used to create an n-type region on the top surface of the membrane. Next follows
field oxidation and definition of active areas. Subsequently, gate oxide is grown and the polysilicon gates are defined. This is followed by the p- and n-type implantations for source and drain, where the p-type implantation is used to create the piezoresistive elements on the membrane. After this, an interlayer dielectric is deposited and the contact openings are etched. Depending on the complexity of the circuit, up to three layers of metal can be added, which are separated by glass interlayers. Finally the passivation is deposited and structured in order to provide contacts to the bond pads. The principal advantage of this integrated pressure sensor technology is that by adding the MEMS pre-process module the core CMOS process is left unchanged. In this way it can be ensured that all other process modules are available and can be combined with the pressure sensor module. Furthermore, the device libraries with primitive devices, digital and analogue cells remain available as well.
3.2
Post-Process Module Based on Bulk Micromachining Process
The second process module is a post-CMOS process module, which means that the MEMS-specific process steps are carried out after the core CMOS process. The membranes are manufactured using a conventional bulk micromachining process, where the silicon wafer is etched anisotropically from the backside. The process flow is shown in figure 2. As for the pre-process module described above, the key focus of the process development was to leave the core CMOS technology unchanged in order to ensure compatibility with the other CMOS process modules. The process flow is hence very similar to the one described in chapter 3.1. The wafer material consists of a p-type epitaxial layer on an n-type silicon substrate. The process flow starts with the n-well implantation and diffusion. The areas where the pressure sensor membrane will eventually be located are also n-doped, in order to provide the isolation for the p-type piezoresistors. Next follow the field oxidation using a LOCOS process, the gate oxidation and the creation of the polysilicon gates. A second polysilicon layer, which is isolated from the first one, is optional and allows the realisation of poly-poly capacitors or resistors with a higher sheet resistance. Subsequently, the n-type and p-type source and drain areas are created by implantation and diffusion. The p-type piezoresistors for the pressure sensor are created in this process step together with the source and drain areas for the p-channel transistors. The metallisation system consists of up to 3 metal layers, which are insulated by glass interlayers. A thicker top metal layer is optional for high power applications. The final
step of the CMOS process is the deposition of the passivation layer and the opening of the bond pads.
Fig. 2.
CMOS process with bulk micromachining post process module
The post-process then starts with the deposition of an oxide layer on the backside, which acts as the mask material for the bulk silicon etch. A front-to-backside alignment technique is used to pattern the backside of the wafer, where the mask openings are brought into line with the features on the front side. Finally, the membranes are created using time-controlled etching in potassium hydroxide (KOH) from the wafer backside. Optionally, a glass base wafer can be bonded to the pressure sensor wafer. This increases the stability of the pressure sensor and isolates the devices from thermo-mechanical packaging stress. In order to implement absolute pressure sensors, a solid glass wafer is bonded to the silicon wafer under vacuum. For a relative pressure sensor, the glass wafer contains openings in order to provide access to both sides of the membrane.
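Because the etch is time controlled, the remaining membrane thickness follows directly from the start thickness, the etch rate and the etch time. A minimal sketch of this relation; the wafer thickness and KOH etch rate below are illustrative assumptions, not values from the text:

```python
def membrane_thickness_um(wafer_um, etch_rate_um_per_min, etch_min):
    """Remaining silicon thickness after a time-controlled backside KOH etch."""
    remaining = wafer_um - etch_rate_um_per_min * etch_min
    if remaining <= 0.0:
        raise ValueError("wafer would be etched through")
    return remaining

# Illustrative numbers only: a 400 um thick wafer etched at 1 um/min
# for 380 min leaves a membrane of roughly 20 um.
thickness = membrane_thickness_um(400.0, 1.0, 380.0)
```

In practice the achievable thickness tolerance depends on how well the etch rate and start thickness are controlled, which is why the etch is terminated on time rather than on an etch stop here.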
4
Prototype Fabrication and Assembly
Pressure sensor prototype wafers have been manufactured for both process modules with the aim of characterising discrete pressure sensors and carrying out preliminary reliability tests. Since the discrete pressure sensor does not require several of the CMOS process layers, a simplified process flow has been used to manufacture the prototype wafers. The sequence of layers and the associated characteristics, however, remain essentially unchanged. Pressure sensor dies have been designed and manufactured for various pressure ranges:
- Wafer bonding process module: absolute pressure sensor (APS), 3bar, 6bar, 15bar
- Bulk micromachining process module: relative pressure sensor (RPS), 0.5bar, 1bar, 6bar, 10bar, 15bar
For characterisation, the pressure sensors are packaged. In order to obtain valid characterisation data for the pressure sensor chip, the package is selected so as to cause minimal degradation of the sensor performance. Distinct packages are used for absolute and relative pressure sensors. Absolute pressure sensors are mounted in a standard DIP-MK8 ceramic package. For the relative pressure sensors a custom package has been developed, where the chip is mounted onto a ceramic plate that contains an opening in order to provide the second pressure connection. For both sensor types a silicone-based adhesive is used to attach the silicon sensor to the substrate. No further protective coatings are applied, which leaves the pressure sensor chip directly exposed to the atmosphere.
5
Characterisation
For all pressure sensor types, pressure- and temperature-dependent characteristics have been measured. The Wheatstone bridge was excited with a constant voltage of 5V and no external compensation was used. The temperature range was -40°C to 125°C. The linearity error was defined as the maximum deviation from a best-fit straight line. The following characteristics have been measured for absolute pressure sensors fabricated using the wafer bonding process module:
Tab. 1.
Typical characteristics for absolute pressure sensors fabricated using wafer bonding process module.
For the relative pressure sensors that were fabricated using the bulk micromachining process module, the measured characteristics are:
Tab. 2.
Typical characteristics for relative pressure sensors fabricated using bulk micromachining process module.
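The linearity error used in these characterisations (maximum deviation from a best-fit straight line) can be evaluated numerically. A sketch with hypothetical bridge readings (the pressure sweep and millivolt values are invented for illustration), reporting the deviation relative to the full-scale output span:

```python
import numpy as np

def linearity_error_pct_fso(pressure_bar, output_mV):
    """Maximum deviation from a least-squares straight line, in % of FSO."""
    p = np.asarray(pressure_bar, dtype=float)
    v = np.asarray(output_mV, dtype=float)
    slope, intercept = np.polyfit(p, v, 1)          # best-fit straight line
    deviation = np.max(np.abs(v - (slope * p + intercept)))
    fso = v.max() - v.min()                         # full-scale output span
    return 100.0 * deviation / fso

# Hypothetical readings (mV) of a bridge excited with 5 V over a 0..6 bar sweep
p = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
v = [0.0, 10.1, 20.0, 30.2, 39.9, 50.1, 60.0]
err = linearity_error_pct_fso(p, v)
```

A well-behaved piezoresistive bridge yields errors well below one percent of FSO with this definition, consistent with the uncompensated-die figures discussed below.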
On the whole, the characteristics are comparable to the industry standard for uncompensated pressure sensor dies. The trade-off that results from using the CMOS layer sequence is that the piezo-coefficients are lower than in a discrete technology, where they can be optimised to yield a maximum sensitivity. A further consequence of this constraint is that the sheet resistance of the piezoresistors is relatively low. This issue was addressed by adapting the resistor layout in order to ensure that a practical value is achieved for the bridge resistance.
6
Reliability Tests
The reliability of a micromechanical sensor and the stability of its characteristics over time are very critical performance parameters. Thorough reliability testing and qualification schemes are required to ensure the correct functioning of the device over its lifetime. With micromechanical sensors the dilemma is, however, that there is no industry standard for reliability testing as there is for integrated circuits. There is a vast number of applications requiring sensors to work under very diverse environmental conditions, which has made standardisation impractical. As a result, tests and test conditions vary from application to application and from manufacturer to manufacturer. For the pressure sensors that were manufactured using the MEMS process modules of the 1.0µm CMOS technology, reliability tests have been carried out which were derived from qualification specifications for integrated circuits by the Automotive Electronics Council (AEC). Two environmental tests have been carried out with samples for both process options: a high temperature storage test and a temperature cycle test. As for the characterisation measurements, the package remains open during the stress tests and no protective coating is applied. The silicon chip is hence directly exposed to the atmosphere. The high temperature bake is carried out at 175°C for 500h; the temperature cycle test comprises 500 cycles from -65°C to 150°C. The pressure sensor characteristics are measured before and after the stress test. The results are shown in the figures below.
Fig. 3.
Offset drift for absolute pressure sensors fabricated using wafer bonding process module after stress test - left: high temperature bake (45 devices) - right: temperature cycle test (45 devices)
For the pressure sensors that were manufactured using the wafer bonding technique, the offset drift after high temperature storage is 0.3% (typical) and 0.6% (maximum). After temperature cycling, the drift is 0.1% (typical) and 0.4% (maximum). The values are relative to the full-scale output voltage (FSO).
Fig. 4.
Offset drift for relative pressure sensors fabricated using bulk micromachining process module after stress tests - left: high temperature bake (45 devices) - right: temperature cycle test (75 devices)
For the relative pressure sensors that were manufactured using the bulk micromachining process module, the offset drift after the stress tests is slightly higher. After high temperature storage, a drift of 0.4% (typical) and 0.7% (maximum) was measured. After the temperature cycle test, the drift was 0.4% (typical) and 1% (maximum). Since the layer sequence and the sensor geometry are essentially the same in both cases, the increased offset drift is mainly attributed to the reduced stability of the bulk micromachined sensor. This sensor chip does not have a stable base but is directly attached to the ceramic substrate, which makes it more susceptible to thermo-mechanical stress.
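Typical and maximum drift figures of this kind are simple sample statistics over the tested devices, with each device's offset shift expressed relative to the FSO. A sketch with hypothetical per-device offsets (the millivolt values and the choice of the median as "typical" are assumptions for illustration):

```python
import numpy as np

def offset_drift_pct_fso(offset_before_mV, offset_after_mV, fso_mV):
    """Per-device offset shift after a stress test, in % of full-scale output.

    Returns (typical, maximum) over the device sample; "typical" is taken
    here as the median, which is one common convention.
    """
    drift = 100.0 * np.abs(np.subtract(offset_after_mV, offset_before_mV)) / fso_mV
    return float(np.median(drift)), float(np.max(drift))

# Hypothetical offsets (mV) for five devices with FSO = 100 mV
typ, mx = offset_drift_pct_fso([1.0, -0.5, 0.2, 0.8, -1.1],
                               [1.3, -0.4, 0.3, 1.2, -1.0], 100.0)
```

The sign of the shift is deliberately discarded: for qualification purposes only the magnitude of the drift relative to FSO matters.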
7
Pressure Sensor IPs
Semiconductor foundries generally provide some form of design support in order to enable their customers to use foundry processes more efficiently and to minimise the time-to-market. Besides a detailed process specification and design rule documentation, X-FAB's design support also includes a comprehensive library of characterised circuit components, the so-called design kit. In the design kit, the circuit designer can find primitive devices, such as transistors or diodes, as well as analogue and digital cells, e.g. operational amplifiers, comparators, logic gates, etc. Moreover, there are IP blocks for more complex circuit sub-functions, such as memory blocks. For all library elements, models are available and most of the common EDA tools are supported. In this way, the design of integrated circuits can be made considerably more efficient. The designer can compose the circuit of well-characterised components and re-use working sub-circuits, instead of developing everything from scratch. Thus, expensive and time-consuming design iterations can be kept to a minimum. Since this concept has proven to be highly beneficial for the development of integrated circuits, X-FAB has extended the idea to integrated pressure sensors. A discrete pressure sensor thus becomes an IP block, which can be incorporated into a customer ASIC design. In this way the sensor, its signal conditioning electronics and further ASIC functionality can be combined in a system-on-chip. X-FAB provides GDS data and comprehensive characterisation data for all of the CMOS pressure sensors described above. Furthermore, in-house expertise in finite element modelling and sensor design allows the development of customer-specific pressure sensors. The feasibility of developing device models of pressure sensors for EDA platforms is currently being assessed.
8
Conclusions
Based on a 1.0µm CMOS technology platform, two process modules have been developed for the manufacture of integrated piezoresistive pressure sensors. Prototype sensors have been fabricated and characterised. The results show that the performance is comparable to state-of-the-art discrete pressure sensors. Preliminary reliability tests have been carried out, and it has been shown that the device characteristics remain stable. With the new technology it is possible for the first time to manufacture integrated pressure sensors in a semiconductor foundry. Sensor IPs are available for a set of pressure ranges and can be built into customer IC designs. With this approach, it is anticipated that the development cost and the time-to-market for integrated pressure sensors can be significantly reduced, which could make these devices attractive for an increasing number of applications.
References

[1] R. Bornefeld, W. Schreiber-Prillwitz, H.V. Allen, M.L. Dunbar, J.G. Markle, I. van Dommelen, A. Nebeling, J. Raben, "Low-Cost, Single-Chip Amplified Pressure Sensor in a Moulded Package for Tyre Pressure Measurement and Motor Management", in Advanced Microsystems for Automotive Applications, Springer, Berlin, 2004, pp. 39-50
[2] K. E. Petersen, P. Barth, et al., "Silicon fusion bonding for pressure sensors", in Proceedings of the Solid-State Sensor and Actuator Workshop, Hilton Head, 1988, pp. 144-147
Dr. Gerald Dahlmann, Gisbert Hölzer, Siegfried Hering, Uwe Schwarz MEMS Process Development X-FAB Semiconductor Foundries AG Haarbergstr. 67 D-99097 Erfurt Germany
[email protected]

Keywords: MEMS, integrated pressure sensor, system-on-chip, foundry process, bulk micromachining, wafer bonding
High Dynamic Range CMOS Camera for Automotive Applications

W. Brockherde, C. Nitta, B.J. Hosticka, I. Krisch, Fraunhofer Institute for Microelectronic Circuits and Systems
A. Bußmann, Helion GmbH
R. Wertheimer, BMW Group Research and Technology

Abstract
We have developed a CMOS camera featuring high sensitivity and a high dynamic range, which makes it highly suitable for automotive applications. The camera exhibits a resolution of 768x576 pixels and a dynamic range of 118dB at 50fps. The measured noise equivalent exposure is 66pJ/cm2 at a wavelength of 635nm, which corresponds to 4.9mlx at an integration time of 20ms. The centrepiece of the camera is a novel CMOS image sensor chip integrated in a standard 0.5µm CMOS technology. The image sensor employs a multi-exposure principle and was used to build an automotive camera, which contains the CMOS image sensor, camera electronics, a 2/3 inch lens and an IEEE 1394 FireWire interface. This interface enables the transfer of user-defined sub-frames of 512x256 pixels, which exhibit a 118dB dynamic range at 50fps, and a 64dB dynamic range when acquired at a single integration time.
1
Introduction
In numerous applications, image sensors are required that exhibit high sensitivity and a high dynamic range. One of the most demanding applications is automotive vision. Among the most crucial specifications for this application (and the most difficult to fulfil) are the dynamic range of about 120dB and the sensitivity, which should be better than 0.1lx [1]. Based on our experience with automotive CMOS image sensors, we have set the following minimum specifications: a dynamic range of 120dB, a signal-to-noise ratio corresponding to 8bit, and a noise equivalent power of 50µW/m2 at 635nm wavelength and 20ms integration time [2, 3]. To realize this, we have developed CMOS image sensors based on a multi-exposure approach. The 1st generation design featured a 256x256 pixel resolution in 1µm CMOS technology and yielded a dynamic range of 120dB at 50fps [2, 3].
In this paper, we report on an improved high dynamic range sensor realized in 0.5µm CMOS technology. The spatial resolution has been increased from 65k pixels to 442k pixels, while the noise equivalent exposure (NEE) has been drastically improved. We have retained the multi-exposure approach, which enables a linear imager characteristic over the entire brightness range and yields excellent image quality and a contrast superior to imagers with nonlinear characteristics.
2
Circuit Description
The basic architecture of the presented image sensor is depicted in figure 1. The readout circuitry is located at the bottom of the array and enables parallel column readout while sequentially accessing all rows. The column and row shift registers enable sub-frame readout if so desired.
Fig. 1.
Image sensor architecture
Each pixel contains the 3-transistor pixel circuit shown in figure 2 [2]. The photodiode is based on the pn-junction formed between an n+-diffusion and the p-substrate.
The column readout amplifier with correlated double sampling (CDS) capability is shown in more detail in figure 3. The CDS cancels the effects of reference voltage variations, all offset voltages, and low frequency noise. However, it doubles the high-frequency white noise power.
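The CDS behaviour stated above can be checked numerically: subtracting two samples that share the same static offset cancels the offset exactly, while the variances of the two uncorrelated white-noise contributions add, i.e. the noise power doubles (var(a - b) = var(a) + var(b)). A small simulation; the offset and noise values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
offset = 0.25    # static offset voltage, identical in both samples
sigma = 0.01     # RMS of the white noise per sample

reset_sample = offset + rng.normal(0.0, sigma, n)   # first CDS sample
video_sample = offset + rng.normal(0.0, sigma, n)   # second CDS sample
cds_output = video_sample - reset_sample

offset_residual = abs(cds_output.mean())          # ~0: offset is cancelled
noise_power_gain = cds_output.var() / sigma**2    # ~2: noise power is doubled
```

Correlated low-frequency noise behaves like the offset here (it is common to both samples and cancels), which is exactly why CDS trades a doubled white-noise floor for immunity to offsets and 1/f noise.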
Fig. 2. Pixel circuit (left)
Fig. 3. Column readout amplifier with CDS (right)

3
Pixel Circuit Design
Besides the area and the MTF, the most critical parameter of the pixel circuit is its noise, because it defines the dynamic brightness range, the signal-to-noise ratio, and the noise equivalent power, i.e. the minimum detectable irradiance at the photodiode. The noise referred to the photodiode consists of photon shot noise, dark current noise, reset noise, and the noise of the readout electronics. Hence, the noise depends on the photocurrent Iphoto, the dark current IS, the total photodiode capacitance CP, and the readout electronics noise. Although both CP and IS depend on the geometry of the pn-junction photodiode, it is the photodiode capacitance which is much more dependent on the area. For small-area photodiodes, the dark current is mostly generated at the sidewall parts of the junction, above all at the corners [4]. This means that the dependence of the dark current on the area is rather weak, especially for small-area diodes. The photodiode capacitance, on the other hand, is much less affected by the sidewall parts, so that it exhibits a rather heavy dependence on the area of the bottom planar part of the junction. If VREF is the reference voltage used for the reset operation, S the sensitivity of the photodiode, APD the photodiode area, IS the dark current of the photodiode, CP the photodiode capacitance, and tint the integration time, then we can calculate the dynamic range as
(1)
and the maximum signal-to-noise ratio as (2)
where the total noise of the readout electronics is referred to the photodiode. Note that the reset noise is doubled due to CDS. The noise equivalent power (NEP), i.e. the noise equivalent input irradiance, is given by
and is related to the noise equivalent exposure (NEE) as NEP = NEE/tint. A further important measure is the maximum voltage swing at the photodiode, which is given by
For the 0.5µm CMOS process used here we have assumed VREF=2V (for a power supply voltage of 3.3V) and tint=20ms (i.e. 50fps), and experimentally found S=0.2A/W at 635nm. Moreover, the dark current measurements confirmed that this current is nearly constant for small-area photodiodes, though it tends to rise with the photodiode area. Experimentally, we have found that the value IS=3.5fA is a good approximation for small-area photodiodes. If we assume that a sufficiently low-noise operational amplifier can be designed for the CDS stage and that the dark current contribution is not dominant and can be neglected, we can plot the isochor diagram shown in figure 4.
High Dynamic Range CMOS Camera for Automotive Applications
Fig. 4.
Isochor diagram for tint=20ms (parameter: APD)
As we plan to use the multi-exposure approach (i.e. several different integration times) again, the dynamic range with a single integration time must be at least 55dB: the isochor diagram then yields CPmin=1.5fF. On the other hand, a SNRmax equivalent to 8bit yields CPmin=5.5fF while NEPmax=50µW/m2 yields e.g. APDmin=37µm2 for CP=5.5fF and APDmin=87µm2 for CP=35fF for tint=20ms. The diagram shows that APDmin=50µm2 (i.e. fill factor of 50% at a pixel pitch of 10µm) is perfectly feasible and the capacitance CP should not exceed 10fF.
4
Realization and Measurements
A CMOS imager has been designed using the above optimization procedure and fabricated in a standard 0.5µm CMOS technology featuring p-substrate, n-well, single poly, and a triple metal layer (see figure 5). The chip realizes the multi-exposure method mentioned above [2, 3]. It uses up to 4 different integration times; the number of integration times can be defined by the user. The minimum integration time is 20µs, while the maximum integration time is 20ms for a single integration time at 50fps. The integration times can be interlaced, yielding e.g. 4 integrations within 20ms with 16ms for the longest integration time. The light sensitive area contains 768x576 pixels, while the total pixel count is 769x604. The on-chip voltage gain can be switched between 1 and 7.
Fig. 5.
Chip photomicrograph
The measured NEP was 33µW/m2 (i.e. 41 noise electrons) for tint=20ms. This is due to the small capacitance CP and the optimized low noise of the readout electronics. The NEE is then 66pJ/cm2, which yields a sensitivity of 4.9mlx at 20ms integration time and 635nm wavelength. The remaining technical data can be found in the table below.
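These figures are mutually consistent: NEP = NEE/tint, and converting the 635nm irradiance into an illuminance with the photopic luminous efficacy 683lm/W x V(635nm) reproduces the quoted 4.9mlx. The value V(635nm) ≈ 0.217 is a standard table value assumed here, not taken from the text:

```python
nee = 66e-12          # noise equivalent exposure in J/cm^2
t_int = 20e-3         # integration time in s

nep = nee / 1e-4 / t_int          # J/cm^2 -> J/m^2, then per second: W/m^2
nep_uW_per_m2 = nep * 1e6         # -> about 33 uW/m^2

v_635 = 0.217                     # photopic luminosity function at 635 nm (assumed)
efficacy = 683.0 * v_635          # luminous efficacy in lm/W at 635 nm
sensitivity_mlx = nep * efficacy * 1e3   # -> about 4.9 mlx
```

The lux figure is wavelength-specific: the same irradiance at a wavelength with a different V(λ) would correspond to a different illuminance.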
Fig. 6.
Camera demonstrator
5
Camera
To demonstrate the performance of the image sensor presented above, we have developed a camera demonstrator (see figure 6). The camera contains a processor which generates composite images exhibiting a dynamic range of 118dB from images acquired at different integration times. It features a 2/3 inch lens and an IEEE 1394 FireWire interface. This interface enables the transfer of user-defined sub-frames containing 512x256 pixels and featuring a 118dB dynamic range at 50fps. Full-size frames can be transferred at 50fps, exhibiting 64dB when acquired at a 20ms single integration time. Figure 7 shows a single image of a video sequence taken on a highway at sunset in high dynamic range mode with 4 different integration times.
Fig. 7.
Image sample acquired in high-dynamic range mode
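A minimal sketch of the multi-exposure composition underlying such images: for each pixel, take the longest integration time that did not saturate and normalise its reading by that integration time, which keeps the overall characteristic linear over the whole brightness range. The selection rule and the numbers below are illustrative assumptions; the on-chip implementation is not described at this level of detail in the text:

```python
def compose_hdr_pixel(samples, v_sat):
    """samples: mapping {integration_time_s: pixel_value}.
    Returns a linear radiance estimate (value per second of integration)."""
    for t_int in sorted(samples, reverse=True):   # longest exposure first
        if samples[t_int] < v_sat:                # not saturated -> use it
            return samples[t_int] / t_int
    t_min = min(samples)                          # all saturated: lower bound
    return samples[t_min] / t_min

# Bright pixel: the 20 ms and 2 ms exposures clip at v_sat, the 0.2 ms one does not
bright = compose_hdr_pixel({20e-3: 2.0, 2e-3: 2.0, 0.2e-3: 1.2}, v_sat=2.0)
# Dark pixel: even the longest exposure stays below saturation
dark = compose_hdr_pixel({20e-3: 0.4, 2e-3: 0.04, 0.2e-3: 0.004}, v_sat=2.0)
```

Because every output value is a reading divided by its integration time, the composite response stays linear, in contrast to imagers with logarithmic or otherwise compressed characteristics.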
6
Summary and Conclusions
We have presented a high-sensitivity, high dynamic range CMOS image sensor fabricated in 0.5µm CMOS technology. We have shown how the pixel layout and circuit design can be optimized for a given technology. When carefully optimized, the sensitivity of CMOS image sensors can match or even surpass that of CCD image sensors while exhibiting a superior dynamic range. Our optimization has yielded an image sensor which is highly suitable, among other uses, for automotive applications including night vision.
References

[1] B. Höfflinger, "Vision chips make driving safer", Europhotonics, June/July 2001, pp. 49-51
[2] M. Schanz, C. Nitta, T. Eckart, B.J. Hosticka, and R. Wertheimer, "A high dynamic range CMOS image sensor for automotive applications", Proc. of the 25th European Solid-State Circuits Conference, 21-23 September 1999, Duisburg, Germany, pp. 246-249
[3] M. Schanz, C. Nitta, A. Bußmann, B.J. Hosticka, and R. Wertheimer, "A high dynamic range CMOS sensor for automotive applications", IEEE Journal of Solid-State Circuits, vol. 35, no. 7, pp. 932-938, July 2000
[4] H.I. Kwon, I.M. Kang, B.-G. Park, J.D. Lee, and S.S. Park, "The analysis of dark signals in the CMOS APS imagers from the characterization of test structures", IEEE Trans. on Electron Devices, vol. 51, no. 2, pp. 178-183, Feb. 2004
[5] W. Brockherde, A. Bußmann, C. Nitta, B.J. Hosticka, and R. Wertheimer, "High-Sensitivity, High-Dynamic Range 768 x 576 Pixel CMOS Image Sensor", Proceedings of the 30th European Solid-State Circuits Conference, ESSCIRC 2004, Leuven, Belgium, Sept. 2004, pp. 411-414
Werner Brockherde, Christian Nitta, Bedrich Hosticka, Ingo Krisch Fraunhofer Institute for Microelectronic Circuits and Systems Finkenstr. 61, D-47057 Duisburg, Germany
[email protected] [email protected] [email protected] [email protected] Arndt Bußmann Helion GmbH Bismarckstr. 142, D-47057 Duisburg Germany
[email protected] Dr. Reiner Wertheimer BMW Group Research and Technology 80788 Munich, Germany
[email protected]

Keywords: CMOS camera, high-dynamic range camera, image sensor, automotive camera
Performance of GMR-Elements in Sensors for Automotive Application

B. Vogelgesang, C. Bauer, R. Rettig, Robert Bosch GmbH

Abstract
This paper presents an introduction to the principle of the GMR effect and investigations into three types of GMR sensors. Attention is directed to the automotive applications ABS/ESP, engine management systems and transmission control. A sensitive GMR element with an integrated circuit has been developed that fulfils all requirements of the automotive environment for incremental speed sensors. The measurements of the sensors were performed using both active (magnetic) and passive (steel) impulse wheels in a temperature range of -40°C to 170°C. In addition to these investigations, simulations of the magnetic field have been performed to define the magnetic circuit. In order to quantify a direct system advantage, the functionality is discussed in comparison with mass-produced products based on Hall and AMR technology.
1
Introduction
The requirements on sensor technology in the automotive sector have risen continuously in the last few years. In particular within the areas of vehicle dynamics and engine management systems, active sensors based on magnetic principles are an established commodity. The contactless principle of these sensors leads to a very high robustness in the automotive environment. The system requirements, e.g. for higher temperature stability and increasing mounting tolerances, are continuously rising. Fundamental changes in the automotive market and the ever increasing number of sensors in a vehicle have led to the fact that cost is the most important criterion for success in the sensor market; nevertheless, cost must be evaluated as the total system cost. In the past the magnetic sensors mounted in a motor vehicle have been passive inductive sensors, but the trend is moving towards active magnetic sensors, whose measurement principle is based on the Hall effect or on the anisotropic magnetoresistive (AMR) effect.
Current developments in the field of GMR (giant magnetoresistive) sensor technology show important functional advantages in relation to conventional sensor principles. This is due to an improved sensitivity and temperature stability, which will be crucial for future applications. The higher costs at the start of GMR production are offset by the improved function and robustness, which reduce the required system tolerances and thus the total system cost. Increased robustness through smart integration of compatible technologies and full supply chain test optimization can lead to a significant improvement in overall quality, reducing the total cost of ownership.
2
The GMR Effect
The strong dependence of the electrical resistance on an applied magnetic field is called the giant magnetoresistive (GMR) effect. It results from the magnetic coupling of adjacent ferromagnetic layers separated by thin non-ferromagnetic layers. This effect can be used for different GMR sensor structures, which differ in their stack design. In our investigations we examined multilayer (ML), multilayer with integrated hard magnetic bias layer (HMB) and spin-valve (SV) structures, which are described in the following sections.
Fig. 1. Sketch of the GMR-effect. Antiparallel (left) and parallel (right) magnetization of a GMR multilayer.

2.1
Multilayer
GMR multilayers (ML) consist of a stack of alternating layers of ferromagnetic and non-ferromagnetic materials (see figure 1) with layer thicknesses in the range of a few nanometers. The magnetic coupling of the ferromagnetic layers depends on the thickness of the non-ferromagnetic spacer layer.
Performance of GMR-Elements in Sensors for Automotive Application
For the GMR multilayer an appropriate thickness is chosen to generate an antiferromagnetic coupling of the adjacent ferromagnetic layers. Parallel coupling of the magnetic layers leads to interface scattering of only one type of electron spin, resulting in a low resistance of the stack, whereas in the antiparallel coupling state both types of electrons scatter at the correspondingly magnetized layers, resulting in a high resistance of the whole stack. The characteristic curve of the multilayer stack arising from this magnetic behaviour is shown in figure 2. The picture on the right-hand side of figure 2 shows a TEM image of the investigated multilayer structure. These GMR multilayers show their highest sensitivity in the magnetic field range of 10 to 20 mT or -10 to -20 mT. Therefore an external bias magnet is used to shift the working point into this range. The advantages of multilayer structures are their simple layer system, their high GMR effect level and their high stability against perturbing magnetic fields. Possible applications for multilayers in the automotive sensor area are e.g. incremental speed sensors, position sensors and current sensors.
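The bias-point behaviour described above can be sketched numerically with a toy model. The even symmetry of the curve, the 20% effect level, the 15 mT working point and the 5 mT transition width below are assumed illustrative numbers, not measured data from this paper:

```python
import numpy as np

# Toy multilayer characteristic: the resistance change is an even
# function of the field (the stack cannot distinguish field polarity)
# and falls off towards saturation. Numbers are illustrative only.
def gmr_ml_ratio(h_mT, gmr_max=0.20, h0_mT=15.0, w_mT=5.0):
    """Fractional resistance change dR/R versus applied field in mT."""
    return gmr_max / (1.0 + np.exp((np.abs(h_mT) - h0_mT) / w_mT))

def sensitivity(h_mT, dh=1e-3):
    """Magnitude of the local slope |d(dR/R)/dH|."""
    return abs(gmr_ml_ratio(h_mT + dh) - gmr_ml_ratio(h_mT - dh)) / (2 * dh)

# Without a bias magnet the element sits near H = 0, where this curve is
# flat; a back-bias magnet shifts the working point towards ~15 mT, near
# the steepest part of the characteristic.
s_unbiased = sensitivity(0.5)
s_biased = sensitivity(15.0)
```

In this toy model the biased working point is several times more sensitive than the unbiased one, which is the purpose of the external bias magnet mentioned above.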
Fig. 2. Characteristic curve (GMR vs. magnetic field) of a multilayer structure (left) and TEM picture of a multilayer stack (right)

2.2 Spin-Valves
Alternative GMR stacks suitable for sensor applications are spin-valves (SV). They consist of two ferromagnetic layers with a nonmagnetic spacer layer, which is thick in comparison to ML spacer layers in order to decouple the ferromagnetic layers. A typical SV structure is depicted in figure 3 a. One magnetic layer is referred to as the reference layer and the other as the free layer. The alignment of the reference layer is defined by the coupling with the antiferromagnet. The anti-parallel state occurs when the magnetization direction of the free layer is changed by external fields. Spin-valve structures have much lower GMR values than multilayer structures, but they feature low coercivities and high sensitivities (figure 3 b). The reference layer is selected such that it shows a large uniaxial anisotropy. The stability of this layer can be enhanced by adding a synthetic antiferromagnet (SAF) as shown in figure 3. The pinned layer is magnetically biased in the direction of the easy axis by means of the exchange bias effect, which describes the pinning of the magnetization of a ferromagnetic layer by an adjacent antiferromagnet.
Fig. 3. Typical spin-valve structure a) and its characteristic curve (GMR vs. magnetic field) b)
The advantages of this type of spin-valve structure are its high, adjustable sensitivity, its negligible hysteresis compared to MLs and its stability against perturbing magnetic fields of more than 100 mT. Their applications in the automotive sensor area can be extended to angle measurements.
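The angle-measurement use mentioned above can be sketched as follows. A spin-valve bridge output varies with the cosine of the angle between the free layer (which follows the external field) and the pinned reference direction; two bridges with reference directions 90 degrees apart deliver cosine and sine components, and an arctangent recovers the full field angle. The signals below are idealized toy values, not measured bridge outputs:

```python
import math

# Idealized spin-valve angle sensing: two bridges whose reference
# directions are 90 degrees apart give cos/sin components of the field
# angle, so atan2 recovers the full 0..360 degree range.
def sv_bridge_pair(theta_deg, amplitude=1.0):
    t = math.radians(theta_deg)
    return amplitude * math.cos(t), amplitude * math.sin(t)

def field_angle_deg(v_cos, v_sin):
    return math.degrees(math.atan2(v_sin, v_cos)) % 360.0

for true_angle in (10.0, 123.0, 250.0):
    assert abs(field_angle_deg(*sv_bridge_pair(true_angle)) - true_angle) < 1e-9
```

Because the ratio of the two bridge outputs is used, amplitude drifts (e.g. over temperature) largely cancel, which is the same argument made for ratiometric magnetic angle sensing in general.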
3 GMR Sensor Structures
For a benchmark of GMR sensors against Hall and AMR sensors, a GMR ASIC was developed to process the GMR structure signals and provide a current interface for incremental speed sensors (see figure 4 b). Discrete and integrated versions were both investigated to verify the feasibility of integrating the technological processes of the GMR layers and the CMOS without loss of functionality.
3.1 Discrete Sensor
Figure 4 a depicts the layout of a GMR multilayer sensor bridge for a discrete sensor. The four resistors of the Wheatstone bridge are arranged in a gradiometer geometry to suppress signals arising from interfering magnetic fields.
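The gradiometer idea can be illustrated with a small numerical sketch: the bridge responds to the field difference between two sensing locations, so a homogeneous interference field that hits both locations equally cancels. The linearized element model (1 kOhm base resistance, 5 Ohm/mT slope) is a made-up illustration, not a real GMR element:

```python
# Toy gradiometer bridge: output depends on the field DIFFERENCE between
# the left and right sensing locations, so uniform stray fields cancel.
def element_resistance(h_mT, r0=1000.0, slope=5.0):
    return r0 + slope * h_mT

def gradiometer_bridge_mV(h_left_mT, h_right_mT, v_supply_mV=5000.0):
    r1 = element_resistance(h_left_mT)
    r2 = element_resistance(h_right_mT)
    # Wheatstone bridge with the field-dependent arms on opposite sides:
    # the output is the difference of the two divider taps.
    return v_supply_mV * (r1 - r2) / (r1 + r2)

signal = gradiometer_bridge_mV(2.0, -2.0)        # wanted differential field
disturbed = gradiometer_bridge_mV(12.0, 8.0)     # same, plus 10 mT stray field
```

A purely common-mode field gives exactly zero output in this model, and a superimposed stray field changes the wanted signal only slightly (through the bridge's mild nonlinearity).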
Performance of GMR-Elements in Sensors for Automotive Application
In the discrete version the sensor is made up of the GMR sensor bridge and the ASIC. The two separate chips are connected with each other via bonding wires in an SOIC8 package in a stacked-die geometry.
Fig. 4. GMR sensor bridge layout with multilayer resistors arranged in a Wheatstone bridge configuration a). After signal processing the sensor triggers a pulse at every period via a current interface b).

3.2 Integrated Sensor
For robust handling and a smaller outline, an integrated version of the GMR sensor was also realized, as shown in figure 5. The integration is achieved by depositing the GMR layers on top of empty but pre-processed areas on the left- and right-hand sides of the integrated circuit.
Fig. 5. Integrated GMR sensor: The GMR meanders are deposited on the left- and right-hand sides of the integrated circuit a), the integrated GMR sensor in an SOIC8 package b) and a package similar to PSSO4.

4 Magnetic Simulation
In order to use GMR sensors in combination with steel wheels a suitable bias magnet is mandatory. The aim must be to develop a magnetic circuit in which the GMR element works close to maximum sensitivity in the middle of the operating range. Due to the non-linear characteristic curve of GMR elements
the requirements on the magnetic field distribution of a back-bias magnet are considerably higher than for back-bias magnets for Hall sensors.
Fig. 6. FEM model of a toothed steel wheel.
A valuable tool for designing an appropriate magnet is the numerical simulation of magnetic fields. This is done with simulation tools using a coupling between finite element (FEM) and boundary element (BEM) methods. Figure 6 shows a finite element model of a toothed steel wheel.
Fig. 7. Signal characteristic of a GMR bridge signal at an air gap of 6.0 mm.
After the generation of a finite element mesh the electromagnetic fields are calculated and the data is transferred and evaluated. Together with the known characteristic curve of the GMR elements the sensor signals can be computed. Depending on the signal results the magnetic circuit is changed and optimized in an iterative process. Figure 7 shows a GMR bridge signal with nearly offset-free signal characteristics.
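The iterative loop just described can be sketched in a few lines. The one-line "simulation" below is a toy stand-in for the real FEM/BEM field computation, and the 0.8 offset-per-millimetre coupling and 0.5 correction gain are invented numbers for illustration:

```python
import math

# Sketch of the iterative design loop: simulate the bridge signal for a
# candidate magnet position, evaluate the signal offset over one wheel
# period, correct the position, and repeat.
def simulated_bridge_signal(magnet_shift_mm, angle_deg):
    offset = 0.8 * magnet_shift_mm     # toy: offset grows with misalignment
    return offset + math.sin(math.radians(angle_deg))

def signal_offset(magnet_shift_mm, n=360):
    samples = [simulated_bridge_signal(magnet_shift_mm, a) for a in range(n)]
    return sum(samples) / n            # mean over one full wheel period

magnet_shift_mm = 2.0                  # deliberately bad starting geometry
for _ in range(20):                    # crude proportional correction loop
    magnet_shift_mm -= 0.5 * signal_offset(magnet_shift_mm)
```

After a few iterations the residual offset is negligible, mirroring the "nearly offset-free" signal characteristic reported above; in practice each loop iteration is of course a full field simulation rather than a one-line formula.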
5 Application

5.1 Automotive System Requirements
In this chapter we focus on three different automotive applications: wheel speed sensors, engine speed sensors and rotational speed sensors for transmission control. The geometrical, thermal, chemical and electrical system requirements differ depending on the target application. The three applications have the following in common: the environmental influences are challenging due to the mounting location in the engine compartment; sensors based on magnetic principles have become established because of their contactless, robust and thus very reliable function; and differential measurement principles are used wherever possible to suppress incident magnetic interference.

Wheel speed sensors are placed in the wheel hubs either inside or outside of the wheel bearings. Their task is to determine the rotational velocity of the wheels for systems like ABS, ESP etc. Each wheel and axle assembly is equipped with a toothed tone wheel or magnetic encoder that rotates with the wheel near the sensor. As the wheel rotates, the magnetic field fluctuates around the sensor, which works as an incremental sensor. An additional back-bias magnet is placed behind the speed sensor when a tone wheel is used. The output signal is transmitted via a two-wire cable to the corresponding control unit. Different impulse wheels are in use to generate the magnetic field fluctuations (figure 8): magnetic encoders with 32 to 54 pole pairs (usually 48) and axial or radial magnetization, and toothed tone wheels with 46 or 48 teeth. The speed sensor is attached to the hub near the brake disk, resulting in a required temperature range of -40°C to 150°C. The air gap requirements vary depending on the design of the hub, up to 3.5 mm. Furthermore the wheel speed sensor must be able to work with frequencies up to 4500 Hz. Sensors that are able to detect the direction of rotation are increasingly in demand.
Fig. 8. Incremental speed detection with magnetic encoders (a) and toothed tone wheels (b).
To characterize the performance of different sensors with respect to their suitability for wheel speed applications, measurable parameters have to be defined. The characteristic parameters discussed in this paper are air gap, duty cycle and jitter.

For engine management, incremental speed sensors are used to measure the speed of the crankshaft. Both magnetic encoders and toothed tone wheels are used to generate the fluctuations of the magnetic field for the sensor: radially magnetized encoders with 60-2 pole pairs and toothed tone wheels with 60-2 teeth. The required temperature range is similar to that for wheel speed applications, from -40°C to 150°C, with a similar air gap range of 0 mm to 3.5 mm. The frequencies that can occur at the output of the sensor can be as high as 10 kHz. A digital voltage output is required for sensors in engine management systems. Recognition of the direction of rotation is a feature that will be used in future crankshaft systems.

Rotational speed sensors for transmission control operate almost exclusively with metal tone wheels, either toothed or perforated with holes. The diameter and the number of teeth or holes vary widely depending on the transmission in which the sensor is applied. Therefore different sensor types can only be compared with reference to a defined impulse wheel, and the results cannot easily be transferred to every impulse wheel used in different transmissions. The temperature requirements in the transmission barely differ from those of the preceding applications; the range covers -40°C to 150°C. In contrast, the air gap and frequency ranges are greatly increased, to 5.5 mm and 12 kHz respectively. In addition to direction-of-rotation recognition, vibration recognition is the most desirable new feature for transmission sensors.
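The frequency figures above can be sanity-checked with a one-line relation: the output frequency of an incremental sensor is the increment count (teeth or pole pairs) times the rotation rate. The engine and wheel speeds used below are assumed round numbers for illustration, not values from the paper:

```python
# Output frequency of an incremental speed sensor:
#   f [Hz] = increments per revolution * rpm / 60
def signal_frequency_hz(increments_per_rev, rpm):
    return increments_per_rev * rpm / 60.0

crank_hz = signal_frequency_hz(58, 8000)   # 60-2 tone wheel at 8000 rpm engine speed
wheel_hz = signal_frequency_hz(48, 2100)   # 48-pole-pair encoder at ~2100 wheel rpm
```

Both assumed operating points stay below the respective 10 kHz and 4500 Hz requirements quoted above, which shows the specified limits leave headroom over typical operating speeds.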
For a comprehensive evaluation, the overall benefit for the system must be taken into account. However this will not be considered here, because it is outside the scope of this paper.
Tab. 1. Summarized requirements for incremental speed sensors in wheel speed, crankshaft and transmission applications.

5.2 GMR Sensor Performance
In wheel speed sensor applications the temperature and air gap requirements are – as mentioned before – the most critical parameters. GMR structures appear to be of great advantage for this application due to their excellent temperature stability and their enhanced sensitivity compared to AMR and Hall sensors. Therefore the focus of the investigations for the wheel speed application was the temperature dependence of the air gap when a standard magnetic encoder is used. Figure 9 shows two different types of wheel speed sensors: a bottom-read sensor with an integrated back-bias magnet for steel wheel applications is displayed on the left-hand side, and a side-read sensor and a wheel bearing with an integrated magnetic encoder on the right-hand side.
Fig. 9. Wheel speed sensors for ABS, ESP, ... systems.
For crankshaft applications, on the other hand, the parameters repeatability and phase shift are of most importance. For this reason, investigations for this application were made by measuring the 360° repeatability and the phase shift of the sensors over frequency and air gap at different temperatures. A radially magnetized encoder with 60-2 pole pairs was used as a reference wheel for the measurements. In figure 10 a) an example for a crankshaft application with a 60-2 tooth tone wheel and an attached incremental speed sensor is displayed. In transmission applications the requirements for the air gap are constantly growing due to the increasing mechanical clearance of the gear wheels. Therefore the air gap is the only parameter discussed here. It was measured in front of a rotating perforated disk wheel with 40 holes at room temperature. Figure 10 b) shows an automated transmission and its control module with an integrated rotational speed sensor.
Fig. 10. Speed sensor for crankshaft speed detection a) and rotational speed sensor for transmission control b).
In the following, the performance of a GMR speed sensor based on a gradiometer principle is demonstrated in comparison with other sensor technologies. Both discrete (two chips: GMR bridge and ASIC) and integrated (one chip) GMR sensors were investigated and showed excellent performance. The ability to integrate GMR processes with CMOS processes could be verified. The integrated version is much more suitable for automotive applications: it has smaller dimensions and its packaging is much more reliable than that of a discrete two-chip version. The summary of the air gap benchmark is listed in table 2. These values were obtained using a small sample size and are for development purposes only. The air gap is the most important parameter, because it can be an enabler for cost reduction in the system through reduced precision requirements on mechanical parts. The GMR sensors are compared to common Hall and AMR sensors available on the market. The air gaps for both GMR sensor types (spin-valve and multilayer) show a great improvement over traditional sensor technologies. The expected low temperature dependence of the air gap was confirmed for all three applications.
Tab. 2. Maximum air gaps of different sensor types for different applications, determined from measurements with the same impulse wheel for each application.
The crankshaft measurement results for the 360° repeatability and the phase shift are outstanding, especially for GMR sensors with spin-valve structures; the dependence of these parameters on temperature and frequency is very weak. In summary, the investigated GMR sensors show high potential for applications in the automotive area due to their enlarged air gap, low jitter and low temperature dependence. New developments in the automotive sector such as corner modules with integrated wheel speed sensors, as well as trends to reduce the size of wheel bearings and thus the magnetic stimulation, are demanding speed sensors for enlarged air gaps. This may lead to cost reductions in the systems and as a result boost the introduction of the new GMR technology into the market.
References
[1] V. Gussmann, D. Draxelmayer, J. Reiter, T. Schneider, R. Rettig: “Intelligent Hall Effect based Magnetosensors in Modern Vehicle Stability Systems”, Convergence 2000, Detroit, 2000-01-CO58
[2] J. Marek, H.-P. Trah, Y. Suzuki, I. Yokomori: “Sensors for Automotive Technology”, Wiley-VCH, Weinheim (2003)
[3] B. Vogelgesang, C. Bauer, R. Rettig: “Potenzial von GMR-Sensoren in Motor- und Fahrdynamiksystemen”, Sensoren und Messsysteme 2004, Ludwigsburg (2004)
[4] D. Hammerschmidt, E. Katzmaier, D. Tatschl, W. Granig (Infineon Technologies Austria AG), J. Zimmer (Infineon Technologies AG), B. Vogelgesang, R. Rettig (Robert Bosch GmbH): “Giant magneto resistors – sensor technology & automotive applications”, SAE 2005, in press
Dr. Birgit Vogelgesang, Dr. Christian Bauer, Dr. Rasmus Rettig Robert Bosch GmbH Business Unit Chassis Systems Robert-Bosch-Allee 1 74232 Abstatt Germany
[email protected] [email protected] [email protected] Keywords:
GMR, giant magnetoresistive, sensor, speed sensor, automotive sensor, magnetoresistive sensor, sensor application
360-degree Rotation Angle Sensor Consisting of MRE Sensors with a Membrane Coil T. Ina, K. Takeda, T. Nakamura, O. Shimomura, Nippon Soken Inc. T. Ban, T. Kawashima, Denso Corp. Abstract Progress in electronic vehicle control has created the need for contactless detection of rotation angles up to 360 degrees, for example for engine control valves or vehicle steering control. Conventional contactless sensors that can detect rotation angles up to 360 degrees have been optical or magnetic rotary encoders, which are too large and expensive for use in automobiles. The authors developed a small, low-cost sensor that is suited for mass production and able to detect rotation angles up to 360 degrees by combining MRE sensors with a membrane coil.
1 Background of Sensor Development
Progress in electronic vehicle control technology has created the need to detect rotation angles over a wider range with a higher degree of precision. In particular, crank angle sensors and steering wheel angle sensors are required to detect absolute angles up to 360 degrees. To meet this requirement, the authors have been engaged in the development of a rotation angle sensor that can linearly detect absolute angles up to 360 degrees. Typical rotation angle sensors available today are shown in Fig. 1. Optical sensors are sensitive to ambient conditions (temperature and staining), though they can detect angles more accurately than the other two types of sensors. Resolvers are large and expensive since they are constructed of wire-wound coils. Due to these disadvantages, both sensor types are intrinsically unsuitable for automotive use. Two-phase MRE sensors, on the other hand, consist of a pair of rotary magnets that produce a parallel magnetic field, whose angle is detected by two magnetism-sensing elements arranged with their sensitivity faces shifted by 90 or 45 degrees. Sensors of this type were considered suitable for development into inexpensive, high
accuracy rotation angle sensors for automotive use, since they would easily be integrated into single chips for miniaturization.
Fig. 1. Principle and disadvantages of conventional rotation angle sensors
The biggest problem with the MRE sensors available today is that their detection range is limited to rotation angles between zero (0) and 180 degrees. The authors used the principle of two-phase MRE sensors to develop a new sensor that can detect rotation angles up to 360 degrees.
2 Angle Detection Principle of Conventional MRE Sensors
The angle detection principle of currently available MRE sensors is shown in Fig. 2. A parallel main magnetic flux is produced between a pair of opposed rotary magnets. The rotation angle of the main magnetic flux is detected by COS and SIN bridge MREs. Since these bridge MREs are arranged with their phases shifted by 45 degrees from each other, the outputs of the SIN and COS bridges vary at twice the rotation angle of the main flux, as shown in Fig. 2. To obtain a sensor output proportional to the rotation angle, the ARCTAN value of these two outputs is calculated. Since this principle is based on the ratio of two bridge outputs, it enables high-accuracy angle detection almost independently of ambient temperature. However, since MRE sensors detect angles with a 180-degree period, they cannot detect a 360-degree rotation angle. In other words, this principle cannot distinguish between two angles differing by 180 degrees. This is because the MRE is a resistive element that in principle cannot identify the direction of magnetic flux. This fact had been a barrier to upgrading
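The 180-degree ambiguity can be demonstrated in a few lines. The bridge signals below are idealized doubled-angle sine and cosine waveforms standing in for the real SIN/COS bridge outputs:

```python
import math

# Conventional MRE readout: the bridge outputs vary with TWICE the flux
# angle, so the arctangent recovers the angle only modulo 180 degrees;
# theta and theta + 180 produce identical readings.
def bridge_outputs(theta_deg):
    t = math.radians(theta_deg)
    return math.cos(2.0 * t), math.sin(2.0 * t)   # COS and SIN bridge signals

def recovered_angle_deg(theta_deg):
    c, s = bridge_outputs(theta_deg)
    return (math.degrees(math.atan2(s, c)) / 2.0) % 180.0

# Two flux directions 180 degrees apart give the identical reading:
assert abs(recovered_angle_deg(30.0) - recovered_angle_deg(210.0)) < 1e-9
```

This is exactly the barrier described above: the ratio-based ARCTAN readout is temperature-robust, but it cannot tell which of the two 180-degree-separated candidates is the true flux direction.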
this type of sensor to one that can detect rotation angles up to 360 degrees.
Fig. 2. Angle detection principle of conventional MRE sensor
3 New MRE Sensor Capable of Detecting Rotation Angles up to 360 Degrees

3.1 Principle of Rotation Angle Detection up to 360 Degrees and Problems to Be Solved in New Sensor Development
The new method the authors devised for detecting rotation angles up to 360 degrees using the MRE sensor principle is outlined in Fig. 3. In this method, an auxiliary coil is wrapped around an MRE in a single direction. When energized, the auxiliary coil produces an auxiliary magnetic flux. This auxiliary flux acts on the main magnetic flux produced by the rotary magnets to produce a compound magnetic flux. The MRE detects this compound magnetic flux, whose direction differs from the one the MRE had detected before the auxiliary coil was energized. Under the condition that the auxiliary magnetic flux is set along the 0-degree direction in Fig. 3, the compound magnetic flux changes its direction clockwise with respect to the main magnetic flux if the direction of the main flux lies within the angle range of 0 to 180 degrees. Conversely, the compound magnetic flux changes its direction counterclockwise with respect to the main magnetic flux if the direction of the main flux lies within the angle range of 180 to 360 degrees. Thus the compound magnetic flux changes its phase relative to the main magnetic flux clockwise or counterclockwise depending on the direction of the main magnetic flux. The MRE detects this change in the direction of the compound magnetic flux to determine the angle range, either 0 to 180 degrees or 180 to 360 degrees, in which the phase angle of the main magnetic flux lies with respect to the reference angle (0). The auxiliary coil is energized (ON) for a predetermined time and de-energized (OFF) for another predetermined time. While the power to the auxiliary coil is OFF, the phase angle of the main magnetic flux is detected and two candidate rotation angles are calculated. The auxiliary coil is then energized to identify the direction of the main magnetic flux according to the method described above and thus to determine the correct rotation angle. In this way, the new method can detect a 360-degree rotation angle.
Fig. 3. Principle of rotation angle detection up to 360 degrees
The new method involved the following two problems. The first was that wrapping a wire around an MRE would increase the coil size, leading to an increase in sensor size and cost. The second was that, for certain phase angles of the main magnetic flux, it was difficult to identify the direction of the compound magnetic flux. The second problem is discussed in more detail with reference to the upper right section of Fig. 3. For a main magnetic flux with a phase angle near 90 or 270 degrees, the main and auxiliary magnetic fluxes produce a comparatively large phase difference, so the compound magnetic flux changes its phase angle sharply and the direction of the main magnetic flux can easily be identified. For the main magnetic flux with nearly a 0- or
180-degree phase angle, however, the phase angle of the main magnetic flux becomes nearly equal to that of the auxiliary magnetic flux. In this case, the compound magnetic flux does not change its phase angle significantly, making it difficult to identify the direction of the main magnetic flux. To solve the second problem, we had to devise a new method that enables the compound magnetic flux to change steeply even in an angle range where the phase difference between the main and compound magnetic flux remains almost unchanged.
Fig. 4. Method for improving identification performance in angle ranges near 0 and 180 deg.

3.2 Solution to the Second Problem
We took the following measures to solve the second problem. As shown in the left half of Fig. 4, changes in the direction of the main magnetic flux could be easily identified in rotation angle ranges near 90 and 270 degrees, while changes were difficult to identify in angle ranges near 0 and 180 degrees each of which is shifted by 90 degrees from the former angle range. If the MRE sensor could be rotated by 90 degrees integrally with the auxiliary coil, the characteristics would shift by 90 degrees, making the change in the direction of the main magnetic flux easy to identify in angle ranges near 0 and
180 degrees. Thus, we reached a conclusion that the angle ranges with small phase difference change could be eliminated by shifting the ranges with large phase difference change by 90 degrees. In practice, it was difficult to physically rotate the sensor during rotation angle detection. We put this idea into practical use by designing the required function into an electronic circuit.
Fig. 5. Signal switching in bridge circuit

Fig. 6. Detectable angle range of bridge circuits
We used the symmetry of the sensor's bridge circuit to rotate the MRE sensor artificially. The MRE sensor consists of four symmetrical MRE elements whose phases are shifted by 90 degrees from each other, as shown in Fig. 5. In the SIN bridge circuit (1), an electric current is supplied through the top and bottom terminals and the output is obtained from the right and left terminals. Since the four MRE elements are identical in construction, the electric current can instead be supplied to the right and left terminals, which are positioned at an angular distance of 90 degrees from the top and bottom terminals, with the output obtained from the top and bottom terminals (this method is called “alternating energization”). Alternating energization artificially rotates the sensor by 90 degrees. In the same manner, we can alternately energize the pair of COS bridges, whose phase is shifted by 45 degrees from the SIN bridges. Thus, four drive patterns can be prepared in total. Fig. 6 shows the ranges that allow easy rotation angle detection for each of the four bridge circuits (1 through 4 in Fig. 5) over the whole range of 0 to 360 degrees. For example, SIN bridge 1 covers the angle ranges around 90 degrees and 270 degrees; the figure also shows the angle ranges assigned to the other three bridge circuits. Together, the angle ranges of the four bridge circuits cover the whole range of 0 to 360 degrees.
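The pattern choice can be sketched as a nearest-centre lookup. The centres and their assignment to patterns below are an illustrative assumption, not the exact mapping of bridges 1 to 4 in Fig. 6:

```python
# Toy drive-pattern selection: each of the four drive patterns has its
# easiest-to-detect ranges centred 45 degrees apart (and repeating every
# 180 degrees); pick the pattern whose centre is closest to the
# candidate angle. Centre-to-pattern assignment is assumed.
def select_pattern(candidate_deg):
    centers = [90.0, 135.0, 0.0, 45.0]    # assumed centres for patterns 0..3
    c = candidate_deg % 180.0             # MRE candidate is only known mod 180
    dists = [min(abs(c - ctr), 180.0 - abs(c - ctr)) for ctr in centers]
    return dists.index(min(dists))

assert select_pattern(92.0) == 0          # near 90 deg: plain SIN bridge drive
assert select_pattern(3.0) == 2           # near 0 deg: 90-degree-rotated drive
```

The point of the lookup is the same as in the text: whatever the candidate angle, at least one of the four drive patterns places it in a range with a large, easily identified compound-flux shift.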
Fig. 7. Producing symmetric magnetic field by spiral coil
The structure of the SIN and COS bridge circuits for detecting the whole angle range of 0 to 360 degrees has been described above. To realize this detection model, an auxiliary coil is required to generate the auxiliary magnetic flux shown in Fig. 4. Fig. 7 shows this structure: in the auxiliary coil, the winding is laid out as an octagonal spiral in a plane on the MRE sensor. The reason we employed this structure is as follows. When an electric current is applied to the two-dimensionally arranged spiral coil, magnetic flux is generated in a radial manner over the coil plane according to the right-hand rule, as shown by the arrows in Fig. 7. The square-shaped spiral coil in Fig. 7b can generate the magnetic flux for the SIN bridge in Fig. 5; similarly, the coil shown on the right of Fig. 7b can generate the auxiliary magnetic flux for the COS bridge. This magnetic flux pattern enabled us to obtain the same characteristics as those shown in Fig. 4 without requiring the auxiliary coil to be rotated physically by 90 degrees. We could also identify the direction of the compound magnetic flux at all times, in response to the phase angle change of the main magnetic flux, by laying the octagonal spiral coil (with the same function as the two square-shaped spiral coils in Fig. 7b) orthogonally with respect to all MRE elements. Since the coil could be laminated with the MRE sensor in a planar manner, the dimensions of the new sensor remained almost the same as those of the base sensor. This solved the first problem at the same time; in other words, we could avoid an increase in sensor size and manufacturing cost.
3.3 Newly Devised Sensor
As discussed in the previous section, we could identify the direction of the main magnetic flux over the entire angle range from zero (0) to 360 degrees by devising an auxiliary spiral coil of octagonal shape and alternating energization. We made a prototype MRE sensor. The construction of the new sensor is shown in Fig. 8 and a photograph of a prototype sensor chip in Fig. 9. We formed an insulation layer after forming two bridge MREs on a silicon wafer, then laminated onto the sensor an auxiliary coil made by etching an aluminum foil into an octagonal spiral. We determined the number of coil turns to be 16 to 33, taking into account the dimensions of the MRE and the distance between adjacent coil turns. The resistance of the coil with 16 turns was approximately 50 Ω, so the sensor could be operated from a 5 V supply with 100 mA pulses at a 10% duty cycle.
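The coil drive figures quoted above are consistent, as a quick Ohm's-law check shows (the average-power line is our own addition, not a figure from the paper):

```python
# Arithmetic check of the coil drive figures: 5 V across ~50 Ohm gives
# the stated 100 mA pulse current; at 10% duty the average dissipation
# in the coil is about 50 mW.
R_coil_ohm = 50.0     # coil resistance at 16 turns (from the text)
V_supply = 5.0        # supply voltage, V
duty = 0.10           # 10% duty cycle

I_pulse_A = V_supply / R_coil_ohm       # 5 / 50 = 0.1 A = 100 mA
P_avg_W = duty * V_supply * I_pulse_A   # 0.1 * 5 * 0.1 = 0.05 W
```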
Fig. 8. Construction of the new sensor
As already discussed, a rotation angle is detected according to the following steps:
360-degree Rotation Angle Sensor Consisting of MRE Sensors with a Membrane Coil
(1) With no power applied to the auxiliary coil, a candidate rotation angle within a range of 0 to 180 degrees is detected and calculated. (2) According to the angle range within which the candidate angle falls, the applicable MRE alternating energization pattern is selected from the four patterns described above. (3) The auxiliary coil is then activated and the change in the MRE sensor output is used to identify the direction of the compound magnetic flux. (4) Based on the direction thus identified, the actual rotation angle within a range of 0 to 360 degrees is finally determined.
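The steps above can be sketched end to end with the same toy flux model as before. The 0.2 auxiliary flux ratio is an assumed value, and the test angles stay away from 0 and 180 degrees, where the real sensor would switch to an alternating-energization pattern:

```python
import math

# End-to-end sketch of the detection procedure (toy physics):
# coil off -> candidate angle modulo 180 from the doubled-angle bridge
# signals; coil on -> the sign of the compound-flux shift selects the
# candidate or candidate + 180.
def measure_mod180(true_deg):
    t = math.radians(true_deg)
    half = math.degrees(math.atan2(math.sin(2 * t), math.cos(2 * t))) / 2.0
    return half % 180.0

def coil_shift_deg(true_deg, aux_ratio=0.2):
    t = math.radians(true_deg)
    comp = math.degrees(math.atan2(math.sin(t), math.cos(t) + aux_ratio))
    return ((comp - true_deg + 90.0) % 180.0) - 90.0

def detect_angle(true_deg):
    candidate = measure_mod180(true_deg)
    return candidate if coil_shift_deg(true_deg) < 0.0 else candidate + 180.0

for a in (30.0, 100.0, 200.0, 300.0):
    assert abs(detect_angle(a) - a) < 1e-6
```

In this sketch the sign test resolves the 180-degree ambiguity exactly as described: the candidate from the coil-off measurement is kept for main flux directions in 0 to 180 degrees and shifted by 180 degrees otherwise.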
Fig. 9. Photograph of the new sensor chip

3.4 Sensor Evaluation Result
The evaluation of the prototype sensor showed that the direction of the main magnetic flux could be identified over the entire angle range, as we had expected. As shown in Fig. 10, the angle range in which the sensor output changes when the auxiliary coil is energized shifts by 45 degrees according to the energization pattern of the two bridge circuits of the MRE sensor. In addition, the output changes from a positive value to a negative value or vice versa according to the direction of the main magnetic flux, making it possible to identify the direction. It was confirmed that the combination of these bridge circuits and four energization patterns enabled detection of rotation angles from 0 to 360 degrees.
Fig. 10. Characteristics of the new sensor
4 Conclusion
We devised a unique MRE sensor consisting of a combination of an octagonally shaped auxiliary spiral coil and MRE sensor bridges that are energized alternately. This sensor is very small in size and can detect absolute angles from 0 to 360 degrees. Based on the research results, we will develop an angle sensor that can be used for engine control and vehicle steering control.
5 Acknowledgement
The authors would like to express their gratitude to Dr. Kunihiko Hara, Senior Executive Director of Nippon Soken, Inc., for his helpful advice in completing this study and thesis.
References
(1) Hans Theo Dorisen, Klaus Durkopp: “Mechatronics and drive-by-wire systems – advanced non-contacting position sensors”, Control Engineering Practice 11 (2003) 191-197
(2) Ichiro Shibasaki: “Properties of InSb Thin Films Grown by Molecular Epitaxy and Their Applications to Magnetic Field Sensors”, IEEJ Trans. SM, Vol. 123, No. 3, 2003
(3) Masanori Hayase, Takamasa Hatano, Takeshi Hatsuzawa: “Remote Power Sourceless Encoder Using Resonant Circuit with Loop-type Coil”, IEEJ Trans. SM, Vol. 122, No. 3, 2002
(4) Yoshiyuki Watanabe, Takashi Mineta, Seiya Kobayashi, Toshiaki Mitsui: “3-Axis Hall Sensor Fabricated by Microassembly Technique”, T. IEEJ Vol. 122-E, No. 4, 2002
(5) Osamu Shimoe, Yasunori Abe, Yukimasa Shonowaki, Shigenao Hashizume: “Magnetic Compass using Magneto-resistive Device”, Hitachi Metals Technical Review Vol. 18 (2002)
Toshikazu Ina, Kenji Takeda, Tsutomu Nakamura, Osamu Shimomura Nippon Soken Inc. 14 Iwaya, Shimohasumi-cho Nishio-shi Aichi-pref. 445-0012 Japan
[email protected] Takao Ban, Takashi Kawashima Denso Corp. 1-1, Showa-cho Kariya-shi Aichi-pref. 448-8661 Japan
Low g Inertial Sensor based on High Aspect Ratio MEMS

M. Reze, J. Hammond, Freescale Semiconductor

Abstract

This paper presents a new single-axis low-g inertial sensor based on a High Aspect Ratio MEMS (HARMEMS) transducer combined with a 0.25 µm SmartMOS™ mixed-signal ASIC in a single 16-pin Quad Flat No-lead (QFN) package. High-density digital signal processing (DSP) logic is used for filtering, trim and data formatting, while the associated non-volatile memory contains the device settings. The micro-machined transducer features an overdamped mechanical response and a high signal-to-noise ratio. This makes the device ideal for use in Vehicle Stability Control (VSC) or Electrical Parking Brake (EPB) applications, where high sensitivity and a small zero-g acceleration output error are required.
1 Market Perspective

1.1 Vehicle Stability Control
Since its debut on the roads in 1995, the electronic stability program/control (ESP/ESC) has evolved and is now recognized by the industry and several governments to have a huge safety benefit. Several international studies [1] have demonstrated through significant data collection that ESC significantly reduces the risk of a crash and helps save thousands of lives annually. As a matter of fact, several car manufacturers in Europe and in the US have introduced this equipment as standard on some of their car lines. The ESP is an additional improvement to the anti-lock braking system (ABS) and traction control system (TCS). Its basic function is to stabilize the vehicle when it starts to skid, by applying differential braking force to individual wheels and by reducing engine torque. This automatic reaction is engineered for improved vehicle stability, especially during severe cornering and on low-friction road surfaces, by helping to reduce over-steering and under-steering [2]. To implement ESP functionality, additional sensors must be added to the ABS system: a steering wheel angle sensor, a yaw rate sensor and a low-g acceleration sensor that measure the vehicle's dynamic response.
According to Strategy Analytics [3], the highest growth potential for accelerometers appears in VSC applications, with a worldwide system demand of 14.4 million units in 2007 and up to 17 million units in 2009, as shown in figure 1. In terms of regional market penetration, ESP did not take off in North America as it did in Europe or Japan. For 2004, nearly 38% of European passenger cars will be equipped, compared to 22% in the US. But things are changing. Surveys show that SUVs are more prone to rollover or loss of steering control in difficult driving conditions. Furthermore, Original Equipment Manufacturers (OEMs) are directly promoting their systems to consumers. Thus, the prospects for growth are strong, and 57% of the cars sold in Europe and in the US by 2009 should have ESP included (36% on a worldwide basis in 2009).
Fig. 1. Vehicle stability control market demand [Source: Strategy Analytics]

1.2 Electrical Parking Brake (EPB)
First introduced in the luxury car segment, this function is becoming more and more popular, as medium-segment cars are now also delivered with it. The basic principle of EPB is that the mechanical connections to the rear callipers are replaced with electric connections and actuators. This is a first step toward
full “brake by wire“ functionality. It simplifies assembly and service and brings several interesting new features: it improves brake pedal feel by reducing pedal travel compared with hydraulic brake systems; it improves interior spaciousness by eliminating the parking brake lever or pedal; it provides an anti-lock braking (ABS) function during dynamic parking brake application; and it enables an automatic hill-hold feature that prevents the car from rolling back on a hill. The system uses information from various sensors inside the vehicle, such as the wheel speed sensors and a lateral low-g accelerometer, which is used to detect the car's angle with respect to the ground.
1.3 Sensor requirements
In order to address the two types of application mentioned above, the low-g accelerometer needs to fulfil several requirements. As it is critical to detect very small accelerations, the sensor needs a high-sensitivity output and high accuracy (low noise, small zero-g acceleration shift over temperature). Furthermore, the device needs to be immune to the parasitic high-frequency content present in the car at chassis level. Low-energy signals with a large frequency bandwidth can be found, from a few hundred Hz during normal driving conditions to a few kHz due to shocks coming from the road (gravel, …). All frequencies above 1 kHz must be filtered out to avoid corrupting the sensor response. By definition, an inertial sensor is highly sensitive to acceleration of any origin, since the micromachined sensing element is based on a seismic mass moving relative to a fixed plate. The sensor output signal is typically cleaned of parasitic high frequencies via electronic low-pass filtering. A sensor with an overdamped transducer, which eliminates this unwanted higher-frequency acceleration content mechanically, provides an additional benefit. A sensor acceptable for the application will have a sensitivity in the range of 1 to 1.5 V/g for a 5 V or 3.3 V ratiometric power supply, allowing it to measure ±1.7 g of acceleration (1 g being earth gravity). Sensitivity error and cross-axis sensitivity should be less than 4%. The zero-g acceleration error needs to be below 125 mg over the full temperature range (-40°C to +125°C). Output noise should be below 700 µg/√Hz.
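As a quick plausibility check, the noise-density and sensitivity figures above can be combined numerically. The sketch below is illustrative only; the 400 Hz signal bandwidth is an assumption, not a value given in the paper:

```python
import math

# Convert the quoted noise density into an RMS noise figure over an
# assumed 400 Hz bandwidth (RMS noise = density * sqrt(bandwidth)).
noise_density_ug = 700.0          # µg/√Hz (spec limit from the text)
bandwidth_hz = 400.0              # assumed post-filter signal bandwidth

rms_noise_mg = noise_density_ug * math.sqrt(bandwidth_hz) / 1000.0
print(f"RMS noise over {bandwidth_hz:.0f} Hz: {rms_noise_mg:.1f} mg")  # 14.0 mg

# With 1.2 V/g sensitivity on a 5 V ratiometric supply, the theoretical
# span around mid-rail comfortably covers the required ±1.7 g.
sensitivity_v_per_g = 1.2
supply_v = 5.0
max_g = (supply_v / 2) / sensitivity_v_per_g
print(f"Theoretical full-scale: ±{max_g:.2f} g")  # ±2.08 g
```

This suggests the 700 µg/√Hz limit leaves the in-band RMS noise an order of magnitude below the 125 mg zero-g error budget.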
2 System-in-package Technologies
One of the most common methods used to sense acceleration is to measure the displacement of a seismic mass, which is translated into a variable capacitance measurement. The sensing element is a mechanical structure formed from semiconductor materials and manufactured using surface micromachining techniques. Moving plates (the seismic mass) are deflected from a rest position when subjected to acceleration. The change in distance between the moving plates and the fixed plates is a measure of the acceleration. The arms that hold the moving plates behave as springs, and the air squeezed between the plates damps the movement. All of Freescale's accelerometers consist of a surface-micromachined capacitive sensing element and a control ASIC for the signal conditioning (conversion, amplification and filtering) contained in a plastic integrated circuit package. A new low-g accelerometer has been developed using an innovative transducer technology focused on increasing the thickness of the structural layer to improve performance. Since the height of the device's movable structure is much larger than the spacings and widths, the technology is known as HARMEMS (High Aspect Ratio MEMS), the aspect ratio being that between trench depth and air-gap width.
2.1 Transducer: HARMEMS
The HARMEMS transducer utilizes differential capacitive sensing to translate acceleration into a capacitance change that can then be processed by the sensor circuitry.
Fig. 2. High aspect ratio MEMS or HARMEMS
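The differential sensing principle can be sketched with a simple parallel-plate model: as the proof mass moves, one gap shrinks while the other grows, and the capacitance difference is read out. All dimensions below are illustrative assumptions, not values from the paper:

```python
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def plate_capacitance(area_m2: float, gap_m: float) -> float:
    """Parallel-plate capacitance C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

def differential_dc(area_m2: float, gap_m: float, x_m: float) -> float:
    """Capacitance difference when the proof mass moves x toward one plate."""
    c_plus = plate_capacitance(area_m2, gap_m - x_m)   # gap shrinks on one side
    c_minus = plate_capacitance(area_m2, gap_m + x_m)  # and grows on the other
    return c_plus - c_minus

area = 25e-6 * 500e-6   # assumed 25 µm tall, 500 µm long plate
gap = 2e-6              # assumed 2 µm nominal gap
dc = differential_dc(area, gap, 0.1e-6)  # 0.1 µm deflection
print(f"Differential capacitance change: {dc * 1e15:.2f} fF")
```

Note how the differential arrangement doubles the usable signal and cancels the common-mode capacitance to first order.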
The term ‘high aspect ratio’ refers to the ratio of the structural layer thickness to the width of the key mechanical features in the transducer, such as the spring portion of the mass-spring system or the gap between movable and fixed capacitor plates [4]. The technology delivers this high aspect ratio through a combination of a 25 µm thick SOI layer and narrow trenches defined by deep reactive ion etching (DRIE) (see fig. 2). The HARMEMS process flow includes an SOI substrate with a buried thermal oxide layer, a 25 µm SOI layer, a field oxide for electrode isolation, a field nitride and polysilicon interconnect from above, as well as aluminum alloy bond pads. Hermetic sealing is accomplished via wafer bonding with glass frit (see fig. 3).
Fig. 3. Cross section of high aspect ratio MEMS (HARMEMS) with wafer-bonded above-vacuum hermetic sealing
The high aspect ratio of the technology, combined with the higher-than-vacuum hermetic sealing possible with glass frit wafer bonding, provides an overdamped mechanical response. In figure 4, the HARMEMS mechanical response is compared with a thin, underdamped polysilicon MEMS (‘Poly-MEMS’) device which has been in production for several years. The Poly-MEMS device is excited to resonance (for this design, above 10 kHz). By contrast, the HARMEMS device exhibits no resonance, but rather a cut-off frequency below 1 kHz. The designed resonant frequency of the HARMEMS device is between 1 and 5 kHz. In fairness, it is also theoretically possible to build a high aspect ratio polysilicon MEMS device, but that is not the device measured here. The larger sensor capacitances per unit area possible with a HARMEMS process flow lead to increased capacitance change versus acceleration (see table 1, fig. 5). As an example, with the thicker capacitor layer, the 25 µm
HARMEMS base capacitance increases by more than 10 times compared to a 2 µm polysilicon MEMS transducer. The ratio of mass to spring constant, meanwhile, remains constant. At the same time, the vertical spring constant of the HARMEMS device is 100 times higher than that of the Poly-MEMS device, offering substantially increased resistance to process and in-use vertical stiction. Any achievable increase in the mass-to-spring ratio is amplified by the thicker mechanical layer, further improving the signal-to-noise performance of the HARMEMS transducer.
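The scaling argument above can be made concrete with a small sketch: for in-plane motion, both the proof mass and the (beam-bending) spring constant scale linearly with the structural thickness, so the resonant frequency f0 = √(k/m)/2π is unchanged while the plate area, and hence the base capacitance, grows with thickness. The absolute k and m values below are illustrative assumptions:

```python
import math

def resonant_freq_hz(k_n_per_m: float, mass_kg: float) -> float:
    """Undamped resonant frequency of a mass-spring system."""
    return math.sqrt(k_n_per_m / mass_kg) / (2 * math.pi)

t_poly, t_harmems = 2.0, 25.0   # structural layer thickness, µm
k0, m0 = 10.0, 1e-8             # assumed spring constant and mass of the 2 µm device

# Scale both k and m linearly with thickness, as for in-plane beam springs.
scale = t_harmems / t_poly
f_poly = resonant_freq_hz(k0, m0)
f_harmems = resonant_freq_hz(k0 * scale, m0 * scale)

print(f"f0 unchanged: {f_poly:.0f} Hz vs {f_harmems:.0f} Hz")
print(f"Base capacitance scales by ~{scale:.1f}x")  # 12.5x, i.e. more than 10x
```

The ~12.5× area factor is consistent with the "more than 10 times" base-capacitance increase quoted in the text.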
Fig. 4. Comparison of normalized dynamic response of a traditional thin (underdamped) polysilicon MEMS device vs. overdamped HARMEMS
Tab. 1. Comparison of 25 µm HARMEMS and 2 µm Poly-MEMS
For a fixed transducer die area, this enables improved noise performance (see fig. 5). Using the same circuit, a HARMEMS accelerometer demonstrates a better than 50% decrease in power spectral density. It should be noted that the
HARMEMS transducer in this data also has a larger mass to spring ratio than the poly-MEMS device to which it is compared.
Fig. 5. Comparison of measured normalized spectral density (proportional to noise) for Poly-MEMS and HARMEMS low-g transducers
Finally, the high aspect ratio of the MEMS process also enables a small error tolerance of the system. Thicker capacitor plates mean less out-of-plane deformation of the sensor structure due to package stress over the automotive temperature range (-40°C to +125°C). In addition, the improved signal-to-noise ratios available in HARMEMS translate to a lower required gain of the transducer signal in the sensor system. Errors in the transducer, ASIC, or package are thereby reduced, giving a tighter total error for the product system.
2.2 Asic: SmartMOS™
SmartMOS™ is Freescale's family of mixed-signal technologies engineered to combine precision analog, high-speed CMOS logic, and high-voltage power transistor capabilities on a single chip. This technology is well suited to applications operating in the harsh electrical environments often found in automotive systems. SmartMOS™ is an analog CMOS technology based on mature dual-gate (0.25 µm minimum feature) logic. Most common electronic functions can be implemented, from voltage regulators, A/D converters and op-amps to E2PROM and MCU cores. The process has voltage-scalable analog CMOS devices with breakdown voltages up to 80 V. Lateral and vertical pnp devices can complement high-gain npn devices (beta > 100). Its unique trench-based isolation eliminates the need for intra-device and inter-device spacing for voltage support, thereby maximizing analog and power device shrink. Its high-density logic is ~25 kgates/mm² and allows integration of complex state machines or DSPs
with many parametric trimming options. Full digital signal conditioning can be implemented, which brings advantages such as programmability (filters, acceleration range, …) and auto-diagnostics that can be initiated periodically during operation (self-test).
2.3 Packaging: Quad Flat No Lead 16 pin
Packaging of a MEMS structure is a vital process, as it directly influences the final characteristics of the product (mechanical stress, shock transmission, etc.), its reliability and its cost.
Fig. 6. Stacked die approach
One of the accelerometer-specific requirements is that the sensing element must be protected from the plastic material injected during molding. This is solved by wafer-level packaging, in which the transducer is hermetically sealed with a glass frit by a silicon cap (fig. 3). Performed in a clean-room environment at wafer level, this process ensures that the sensing element is free of any particles that might disturb it. With such protection the sensing element can go through all the other process steps (scribing, packaging) without damage. Figure 6 shows a picture of the hermetically sealed sensing element and the control ASIC mounted on the lead frame, with wire bonding already complete. Compared to the current generation of devices in production, which are assembled side by side, the new accelerometer uses a stacked die approach. The advantages of this configuration are the smaller volume of the overall package (6 × 6 × 1.98 mm³)
and the improved manufacturing cycle time, as fewer process steps are needed.
3 Product Description
The single X-axis low-g sensor depicted in fig. 7 is manufactured using Freescale's system-in-package approach, combining a HARMEMS transducer and a SmartMOS8mv ASIC in a QFN package:
Fig. 7. Multichip approach based on separate sensing element and control IC

3.1 Asic Architecture Description
Voltage Regulator
Separate internal voltage regulators supply fixed voltages to the analog and digital circuitry. The voltage regulator module includes a power monitor which holds the device in reset following power-on until internal voltages have stabilized sufficiently for proper operation. The power monitor asserts internal reset when the external supply voltage falls below a predetermined level. A separate voltage reference provides a stable voltage which is used by the sensing element interface.
Oscillator
An internal oscillator operating at a nominal frequency of 2MHz provides a stable clock source. The oscillator is factory trimmed for best performance. A clock generator block divides the 2MHz clock as needed by other blocks. In the event of oscillator failure, an internal clock monitor provides a fault signal to
the control logic. An error code is then transmitted in place of acceleration data.
Programmable Data Array
A 256-bit programmable data array allows each device to be customized. The array interface incorporates parity circuitry for fault detection along with a locking mechanism to prevent unintended changes. Portions of the array are reserved for factory-programmed trim values.
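The parity protection described above can be sketched as follows. The 16-bit word organisation and the even-parity convention are assumptions for illustration; the paper does not specify how the 256-bit array is organised internally:

```python
def even_parity_bit(word: int) -> int:
    """Return the bit that makes the total number of 1s (word + parity) even."""
    return bin(word).count("1") & 1

def store(word: int) -> int:
    """Append the parity bit in bit position 16 (17-bit stored value)."""
    return word | (even_parity_bit(word) << 16)

def load(stored: int) -> int:
    """Check parity on readback; raise on a detected single-bit fault."""
    word = stored & 0xFFFF
    if (stored >> 16) != even_parity_bit(word):
        raise ValueError("parity error: data array corrupted")
    return word

# Round-trip works; a single flipped bit is detected on readback.
stored = store(0x1234)
assert load(stored) == 0x1234
```

A single parity bit per word detects any odd number of bit errors, which is typically sufficient for trim data that is also protected by a write-lock.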
Control Logic
A control logic block coordinates a number of activities within the device. These include: Post-reset device initialization; Self-test; Operating mode selection; Data array programming; Device support data transfers
SPI
A serial peripheral interface (SPI) port is provided to accommodate communication with the device. The SPI is a fully bidirectional port used for all configuration and control functions. Acceleration output is provided as 10-bit data.
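A host-side conversion of the 10-bit acceleration word might look like the sketch below. The offset-binary coding with zero g at mid-scale and the ±1.7 g span are assumptions for illustration; the actual register map and coding are not given in the paper:

```python
# Hypothetical decoding of the 10-bit acceleration word read over SPI.
RANGE_G = 1.7                 # assumed measurement span of ±1.7 g
FULL_SCALE = 2 ** 10          # 10-bit output, codes 0..1023
MID = FULL_SCALE // 2         # 512 counts assumed to represent 0 g

def counts_to_g(counts: int) -> float:
    """Map an offset-binary 10-bit code to acceleration in g."""
    return (counts - MID) * (2 * RANGE_G / FULL_SCALE)

print(counts_to_g(512))   # 0 g at mid-scale
print(counts_to_g(1023))  # just under +1.7 g
```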
Self-test Interface
The self-test interface provides a mechanism for applying a calibrated voltage to the sensing element. This results in deflection of the proof mass, causing reported acceleration results to be offset by a specified amount.

Σ-∆ Converter

A 16-bit sigma-delta converter provides the interface between the sensing element and the digital signal processing block. The output of the Σ-∆ converter is a 1-bit data stream at a nominal frequency of 1 MHz.
Digital Signal Processing Block
A digital signal processing (DSP) block is used to perform all filtering and correction operations. The DSP operates at the frequency of the Σ-∆ converter. Each device is factory programmed to select the acceleration range and filter characteristics for the device.
A temperature sensor with 8-bit resolution provides input to the digital signal processing block. Device temperature is incorporated into a correction value which is applied to each acceleration result. Low-pass filtering occurs in two stages. The serial data stream generated by the Σ-∆ converter is decimated and converted to parallel values by a sinc filter. The parallel data is then processed by an infinite impulse response (IIR) low-pass filter. A selection of low-pass filter characteristics is available. The cutoff frequency (fc) and the rate at which acceleration samples are determined by the device (ts) vary depending upon which filter is chosen. Power consumption is also affected, as higher sample rates require higher DSP clock frequencies, which in turn require more supply current. Several other functions could be implemented to offer even more flexibility. More channels could be added by multiplying the number of Σ-∆ converters to monitor other axes, in order to offer XY and XYZ accelerometers. An analog output can be provided by adding a 10-bit digital-to-analog converter (DAC) at the output of the DSP.
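The two-stage filter chain described above can be sketched as a first-order sinc (moving-average) decimator on the 1-bit Σ-∆ stream followed by a first-order IIR low-pass. The decimation ratio and filter coefficient below are illustrative assumptions, not the device's actual parameters:

```python
def sinc_decimate(bitstream, ratio=64):
    """Average each block of `ratio` one-bit samples into one parallel value."""
    return [sum(bitstream[i:i + ratio]) / ratio
            for i in range(0, len(bitstream) - ratio + 1, ratio)]

def iir_lowpass(samples, alpha=0.1):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = samples[0], []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A Σ-∆ stream with a constant 75% ones density (i.e. a DC input at 3/4 of
# full scale) should settle at 0.75 after both stages.
stream = [1, 1, 1, 0] * 256   # 1024 one-bit samples
filtered = iir_lowpass(sinc_decimate(stream))
print(round(filtered[-1], 3))  # → 0.75
```

Real parts typically use a higher-order sinc (CIC) decimator matched to the modulator order, but the structure — cheap averaging at the high rate, sharper IIR filtering at the low rate — is the same.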
Fig. 8. Zero-g output error of a QFN low-g device vs temperature (-40°C to 125°C)

3.2 Electrical Performance
Figures 8 and 9 show some of the benefits of a HARMEMS transducer, as seen in the output voltage error and sensitivity versus temperature of an analog-output QFN low-g accelerometer. The uncompensated zero-g output voltage varies by 103 mV from -40°C to 125°C, or equivalently 86 mg. By adding on-chip temperature compensation, this variation can be reduced to 19 mV over
temperature. Dividing by the device sensitivity of 1.2 V/g, this equates to 16 mg. The sensitivity variation is lower than 1% over -40°C to 125°C.
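The voltage-to-acceleration arithmetic quoted above is easy to verify; the check below simply divides the quoted voltage shifts by the 1.2 V/g sensitivity:

```python
# Checking the zero-g error figures against the 125 mg requirement.
sensitivity_v_per_g = 1.2

uncomp_shift_mv = 103.0   # uncompensated zero-g shift, -40°C to 125°C
comp_shift_mv = 19.0      # after on-chip temperature compensation

uncomp_mg = uncomp_shift_mv / sensitivity_v_per_g
comp_mg = comp_shift_mv / sensitivity_v_per_g
print(f"{uncomp_mg:.0f} mg uncompensated, {comp_mg:.0f} mg compensated")
# → 86 mg uncompensated, 16 mg compensated; both within the 125 mg budget
```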
Fig. 9. Normalized sensitivity of a QFN low-g device vs temperature (-40°C to 125°C)

4 Conclusion
We have presented a new low-g accelerometer based on a high aspect ratio sensing element. The sensitivity has been increased by reducing the distance between the fixed and the moving elements. A high signal-to-noise ratio and an overdamped frequency response have been achieved by increasing the thickness of the structural layer. This makes the device perfectly suited to vehicle dynamic control applications. Combined with a 0.25 µm mixed-signal ASIC, it contains a fully digital signal conditioning path allowing the implementation of programmable parameters such as filters or acceleration ranges. Thanks to the flexibility offered by this multichip approach and this type of ASIC architecture, a full family of single-, dual- and tri-axis low-g sensors fulfilling harsh automotive requirements will be developed.
Acknowledgements

The HARMEMS technology described herein is the result of a fruitful collaboration between CEA-LETI and Freescale. The authors acknowledge the work and effort of the entire team at LETI in Grenoble, France, led by Bernard
Diem, with significant contributions from Sophie Giroud, Denis Renaud, and Hubert Grange. The authors gratefully acknowledge the contributions and significant effort of the Freescale technology and product development teams for their work on HARMEMS, sensor packaging, and low g sensors. In particular, we recognize the work of Bishnu Gogoi, Ray Roop, Jan Vandemeer, Dan Koury, Gary Li, Arvind Salian, and Dave Mahadevan.
References

[1] Research study conducted by the University of Iowa.
[2] W. B. Ribbens, Understanding Automotive Electronics, Sixth Edition, SAE.
[3] Strategy Analytics, Automotive Sensor Demand 2002–2011, Market Forecast, October 2004.
[4] B. Diem, P. Rey, S. Renard, S. V. Bosson, H. Bono, F. Michel, M. T. Delaye, G. Delapierre, “SOI ‘SIMOX’; from bulk to surface micromachining, a new age for silicon sensors and actuators”, Sensors and Actuators A: Physical, Vol. 46, Issues 1–3, January–February 1995, pp. 8–16.
Matthieu Reze Freescale Semiconducteurs SAS Sensor Product Division BP 72329 31023 Toulouse Cedex 1, France
[email protected] Jonathan Hammond Freescale Semiconductor Inc Sensor Product Division 2100 E. Elliot Road Tempe, AZ 85284, United States of America
[email protected]

Keywords: low g accelerometer, HARMEMS, DRIE etching, surface micromachining
Realisation of Fail-safe, Cost Competitive Sensor Systems with Advanced 3D-MEMS Elements

J. Thurau, VTI Technologies Oy

Abstract

Developing MEMS structures is a first milestone in creating a successful final product. Industrialization reveals whether the concept is ready for mass production with an acceptable yield, meeting commercial targets as well as the requirements for reliability and performance over lifetime. By taking the robust bulk-micromachining technology from today's high-volume series production and combining it with modern DRIE technology into 3D-MEMS technology, VTI has developed its own concept to condense high performance into smaller dimensions. The next generation of low-g sensors will utilize this platform for single- as well as multiple-axis sensor systems. A new accelerometer family is created within one housing, so that to suit all measurement requirements it is only necessary to install the appropriate sensor component in the same PCB position. The main targets are to meet fail-safe requirements, bring system costs down, reduce size and enable new functional applications in the automotive environment. This article shows how to close the link between excellent sensor element development and integration into applications where sensor components need to fulfil advanced and evolving criteria for modern sensor systems.
1 Today's Market Situation
It was like religion in the past: there were two different approaches to micromachined sensors on the market – surface micromachining vs. bulk micromachining. Both had their dedicated technologies, leading to dedicated advantages that were used for different classes of acceleration sensors, as shown in table 1.
Tab. 1. Comparison of different micromachining approaches
Sometimes it happens in real life that two different approaches converge, leading to excellent synergy. In the motion MEMS market the classic categories seem to be merging more and more (figure 1): surface micromachining is getting thicker and bulk micromachining is achieving better aspect ratios, so that in the end there is common growth where a border line existed before – a development that benefits customers. More performance at a better price and in a smaller size is necessary to satisfy the increasing demand for higher numbers of motion sensors, driven by higher penetration rates in automotive safety and comfort systems as well as by the numerous commodity, sports and fitness applications identified by market research companies. To meet the volume requirements, VTI has invested heavily in a second clean room building, both to increase volume and for redundancy. The new second clean room is kept totally separate, with its own supplies, also from a risk point of view.
Fig. 1. Market trend: merging of motion MEMS technologies
Based on this knowledge, VTI has identified its critical success factors for meeting the market demand. The following metrics were defined to measure the development results for the next generation of accelerometers:
Performance: equal and better, to meet dedicated application requirements
Robustness: excellent vibration and shock endurance stability
Small size: fit into size-reduced housings and applications
Cost: help market penetration of performance applications
Communication: enhance system performance
Quality (MTBF and PPM): meet reliability and fail-safe application requirements
2 3D MEMS Structures
VTI developed its own SIMAS process to meet the requirements of modern motion and pressure sensing systems. Starting from today's robust bulk-micromachining technology, which is in stable high-volume series production, and combining it with Deep Reactive Ion Etching (DRIE) increases the efficiency of the silicon area in terms of signal quality. With this approach – the combination of KOH wet etching and DRIE etching – truly three-dimensional structures are achieved, called 3D-MEMS. Depending on the product, Silicon On Insulator (SOI) wafers are used to achieve exact dimensions of thin structures on one side of the wafer – e.g. for torsional springs.
Fig. 2. SIMAS process – left: DRIE etching for mass structuring; right: KOH wet etching for high-performance springs
The SIMAS process also uses a more sophisticated technique to generate springs. Starting from wafers several hundred µm thick – as in today's VTI MEMS structures – trenches are etched into the wafers by DRIE to achieve a good aspect ratio, with much better area utilization than KOH etching.
After separating the proof mass from the frame by literally “cutting” the wafer with DRIE, the springs are etched by VTI's legendary KOH wet etching in single-crystal silicon, leading to very exact dimensioning in height and shape. This is necessary to achieve the excellent shock resistance of the sensors of several 10,000 g, which is required for most applications. For this spring-shaping process all other structures are covered by a photo mask so that just the springs are etched “through small windows” (figure 2). The result is a large proof mass on the given silicon area with exactly defined, sensitive springs. After processing the middle wafer as described above, the sensing elements are hermetically sealed with upper and lower capping wafers, leading to wafer-level/chip-scale packaged MEMS elements. The anodic glass wafer bonding technology used for this process is the same as in today's mass production. During this 3-wafer stacking process, the medium in the sensing element is set to vacuum or a controlled pressure, depending on the required behavior of the final product. For structuring electrically isolated partitions on the same element surface, a vertical 3D glass isolation technique is used. In combination with the thick horizontal glass layers, a low parasitic capacitance structure is achieved. These sensor elements are typically contacted by SMD technology or wire bonding.
Fig. 3. 3-axis sensing element structure – 3D-MEMS – realized with SOI and DRIE
An example of SOI wafer utilization is the three-axis concept, where the insulator layer is used for dimensioning the height of the rotational spring. Due to the DRIE etching, appropriate aspect ratios are achieved to realize excellent utilization of the silicon area – creating smaller sensing elements (figure 3).
The more sophisticated technique to achieve exact spring thickness is the use of KOH in combination with DRIE etching, as in the single-axis accelerometer elements (figure 4).
Fig. 4. Left: new single-axis sensing element; right: 3D-MEMS proof mass structured by DRIE with springs formed by KOH wet etching
In all the structures shown, the measurement capacitances are formed between the released proof mass in the middle wafer and electrodes on the capping wafers. The main advantage is a design that is stable against misalignment of the DRIE etch trenches, avoiding one of the critical elements of DRIE finger structures. Due to the thick middle wafer, several hundred µm thick, the proof mass has a significant mass advantage, leading to excellent signal-to-noise ratios. With the new SIMAS process described here, the following design targets have been met compared with today's designs in mass production:
Reduction of sensing element size by 65%
Enhancement of capacitance dynamics by 50%
Improvement of relative capacitive sensitivity by 100%
Advancement of mechanical sensitivity by 200%
Improved static linearity by reducing the stray capacitance and by introducing a parallel-plate movement
Improved dynamic linearity through new damping structures (vibration rectification effect)
The process was set up so that it can be used as a platform technology for new products, which can be introduced as flexibly as ASICs in today's CMOS technology. This reduces the lead time for new MEMS elements on this platform significantly. Furthermore, VTI's excellent wafer capping process can easily be utilized to achieve an extremely good vacuum for high-Q products, even for extremely fragile structures. This makes the new platform flexible for upcoming new products.
3 Housing Concept – One Size Fits All
Coming from today's combination concept of the SCA610 x-axis, SCA620 z-axis and SCA1000 dual-axis sensor families, all designed for the same PCB footprint, it was more than natural to apply the same requirement to the new sensor generation. In a first approach, two different housings for single-axis and multi-axis products were considered. After detailed review it was decided that a single housing would be the best approach for all products on this platform. This added approximately 1 mm in length and width to the originally planned single-axis footprint, which is acceptable for automotive applications. The advantage is economy-of-scale production with the same housing for all related products – whether single-, dual- or three-axis accelerometers, or even other MEMS products – on the same highly automated production line, for excellent yield and PPM figures.
Fig. 5. Flexible housing platform
Mixing existing sensor elements and ASICs with next generations is even possible, enabling an easy transfer from generation to generation. This flexibility makes change management much more reliable for the specific application. Within the same housing, with uniform layout and pinout, a variety of products will be available, so that application-tailored performance, the number of axes and the safety level can be adapted to the requirements.
4 Packaging – Performance vs. Price
In automotive applications, temperature requirements of -40°C up to 150°C place extraordinary demands on MEMS packaging. The best results were
achieved with pre-molded packages, where the CTE mismatch between housing plastic and silicon MEMS is best buffered by silicone gel and adhesives. This leads to uniform behavior over temperature as well as negligible hysteresis effects. The disadvantage of the higher cost of this more sophisticated packaging method is compensated by a stable, high yield for the packaging process as well as better-performing products.
Fig. 6. Basic structure of sensing element and ASIC in premolded housing
The chosen Dual Flat Lead (DFL) concept is based on a lead-frame premolded package. The sensing element and the ASIC are picked and placed by a die bonder and positioned on silicone adhesive. The open cavity is filled with silicone gel and protected with a stainless steel lid, which is laser-marked with the product name as well as a unique serial number for traceability, both in clear text and as a 2D matrix code for automatic reading. The product is produced on a high-performance, automated line with an integrated test area.
Fig. 7.
New DFL component housing (7.6 x 8.6 x 3.3 mm³) and soldering meniscus due to 0.3 mm excess leads
The pins of the housing differ from those of the earlier SCA610 family housing, where gull wings were considered necessary to meet the thermal shock reliability requirements. The housing for the next generation was prototyped in two versions, one with gull wings and one with flat leads underneath the housing. Thermal shock comparison tests with traditional tin-lead solder as well as lead-free solder have shown that the results with the flat leads fully meet the harsh automotive reliability requirements. Compared to a standard QFN housing, the pins realized here have excess leads of 0.3 mm (figure 7). This increases the surface of the pins and, even more important, lets a clear meniscus build up during the soldering process, which improves the reliability of the solder joint. In addition, the solder joint and meniscus can be inspected automatically by visual inspection. The chosen housing concept is therefore dedicated to the reliability requirements of the harsh automotive environment. Due to the premolded housing, the performance of the sensor is the same before and after the soldering process; this avoids any over-temperature tests at the application's end of line. It should be pointed out that VTI is also pursuing housing solutions with over-molded low-cost QFN housings; so far these are dedicated to lower-performance applications.
5
Signal Conditioning
For the next generation of acceleration sensors, VTI has moved from mainly analogue ASICs with digitally programmed parameters towards more digital signal processing. The chosen technology, with 0.35 µm structures and a supply voltage of 3.0 V to 3.6 V, allows a higher level of integration, which was used to realize more features, a modern SPI interface and, for the single-axis accelerometer, an additional PWM interface. The sensing element, whose symmetrical dual-capacitor structure gives the best common-mode suppression, is read out by advanced sigma-delta conversion in an analogue interface block. This front end is very similar to today's ASICs; the signal conditioning, however, is performed digitally. This gives several degrees of freedom: the resolution can be varied between 8 and 16 bits depending on the application. Low resolution comes with a fast update rate for applications needing a high cut-off frequency, whereas high resolution is dedicated to slow applications such as inclination measurement. A fast SPI interface with up to 8 MHz clock frequency and strong driving capability transfers the information to the application controller. The SPI interface uses the same format for all sensor types (single, dual and three axes) to achieve the platform character with full interchangeability of motion products within the application. For the single-axis accelerometer, a Pulse Width Modulation (PWM) output is implemented as an alternative to the SPI interface; after rectification it can also be used as an analogue interface. A classic analogue output was considered but is no longer regarded as state of the art.
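The resolution trade-off described above can be illustrated with a small decoding helper. The register layout and scaling below are illustrative assumptions for this sketch, not VTI's documented SPI protocol; only the principle (an n-bit two's-complement code mapped linearly onto the configured full-scale range) follows the text:

```c
#include <assert.h>
#include <stdint.h>

/* Hedged sketch: decode one acceleration sample from the SPI stream.
 * The configurable resolution (8..16 bits) changes only the bit count;
 * the mapping of the two's-complement code onto the configured
 * full-scale range stays linear. Scaling is an assumption. */
double sample_to_g(uint16_t raw, unsigned bits, double fullscale_g)
{
    int32_t v = (int32_t)(raw & ((1u << bits) - 1u)); /* keep n valid bits */
    if (v & (1 << (bits - 1)))                        /* sign bit set?     */
        v -= (1 << bits);                             /* sign-extend       */
    /* map [-2^(n-1) .. 2^(n-1)-1] counts onto +/- fullscale_g */
    return (double)v * fullscale_g / (double)(1 << (bits - 1));
}
```

With 16-bit resolution each count is 256 times finer than with 8 bits, which is why the high-resolution mode suits slow inclination measurement while the low-resolution mode suits high-bandwidth applications.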
Fig. 8.
ASIC structures
The strength of VTI's current acceleration sensors is that no temperature compensation is needed to fulfil the offset-stability criteria of standard automotive applications. This strength has been kept. For special types requiring the highest offset stability, the ASICs incorporate a temperature compensation that can be calibrated on demand to achieve enhanced offset stability for upcoming applications.
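The on-demand compensation mentioned above can be pictured as a simple first-order correction. The model, the 25°C reference point and any coefficient values used with this sketch are made-up illustrations, not VTI calibration data:

```c
#include <assert.h>

/* Illustrative first-order offset compensation: residual offset drift is
 * modelled as c0 + c1*(T - 25 degC), with both coefficients calibrated
 * per device on demand. Model and coefficients are assumptions. */
double compensate_offset_g(double raw_g, double temp_degc,
                           double c0_g, double c1_g_per_k)
{
    return raw_g - (c0_g + c1_g_per_k * (temp_degc - 25.0));
}
```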
6
Safety
The good news first: VTI's 3D-MEMS structures guarantee that in-range failures and sticking are impossible, because the bulky mass structure etched into single-crystal silicon has no in-range position where electrostatic adhesion could be stronger than the low acceleration forces present in low-g applications. Single-crystal silicon does not show any creep or other deterioration over its lifetime. For applications which have to work immediately after power-up, VTI can guarantee that there is no start-up drift. This is quite important for failure-recognition algorithms as well as for low-power applications where the power is turned on only for a short time to save energy. An old friend can be found again in the single-axis sensor: the electrostatically forced self-test. In this test a voltage is applied to one of the capacitors, deflecting the proof mass to one side and driving the output to the rail. In the SCA800 series the self-test was modified so that the mass can be deflected in both directions. A control was implemented to release the mass automatically after a pre-defined threshold is reached. After a successful self-test the result can be read out via SPI. If the self-test fails, the sensor's logic recognizes this and sets a "signal not valid" status.
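The pass/fail decision of such a bidirectional self-test can be sketched as follows. The function name, the threshold scheme and the release criterion are illustrative assumptions, not VTI's specified limits:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hedged sketch of the self-test evaluation: the proof mass is deflected
 * electrostatically in both directions, must cross a threshold each time,
 * and must return close to zero after release. Limits are assumptions. */
bool selftest_pass(int16_t deflected_pos, int16_t deflected_neg,
                   int16_t released, int16_t threshold)
{
    bool pos_ok = deflected_pos >=  threshold;    /* reached +limit       */
    bool neg_ok = deflected_neg <= -threshold;    /* reached -limit       */
    bool rel_ok = released > -threshold / 4 &&    /* mass released again, */
                  released <  threshold / 4;      /* output near zero     */
    return pos_ok && neg_ok && rel_ok;            /* else: "signal not
                                                     valid" status is set */
}
```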
Fig. 9.
Self-test sensing-element signal of the single-axis SCA800 family
The ASIC diagnosis is designed so that malfunctions of the sensing element as well as of the interconnections are detected. In combination with a straightforward sensor structure, this leads to negligible field-failure rates as well as to extremely low FIT figures for system-safety calculations.
7
Meeting the Application
Today's main application is still Electronic Stability Control (ESC), which requires sensing a single axis in the vehicle's lateral direction. For this application an X-axis sensor (SCA810) or a Z-axis sensor (SCA820) can be used, depending on the orientation of the PCB in the car and the gyroscope type. Measuring longitudinal acceleration requires one additional axis, so a dual-axis sensor is the best fit; applications are 4x4 ABS, Electronic Parking Brake (EPB) and Hill Start Assist (HSA). Adding the vertical axis addresses rollover applications, leading to three-axis sensor requirements. A trend can be seen towards integrating all axis measurements into an Inertial Measurement Unit (IMU). The most challenging feature realized with the 3-axis sensor is the combination of different g-ranges. At one extreme, HSA and EPB need very good resolution at low g values, corresponding to small up- or downhill slopes of the car; a 0.2 g sensor would be sufficient for these applications. The ESC standard full-scale range of 1.5 g lies in the middle, while rollover applications as well as airbag-supporting functions call for 3.0 g to 5.0 g ranges. The challenge was to measure all of these with one sensing element that can be rotated and configured freely in three-dimensional space. It was met by the excellent over-acceleration behavior of the sensing elements as well as by sophisticated signal-conditioning structures realized as a linear mapping.
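The range-configuration idea can be sketched as a linear mapping with clipping. The 12-bit output width and the ranges used here are illustrative assumptions; only the principle (sensitivity follows the selected full-scale range, readings beyond it saturate) comes from the text:

```c
#include <assert.h>

/* Sketch of the linear-mapping idea: the same sensing element serves
 * several configured full-scale ranges, so the scale factor (g per count)
 * follows the selected range, and readings beyond it are clipped. */
int accel_to_counts(double accel_g, double range_g, unsigned bits)
{
    double lsb_g = range_g / (double)(1 << (bits - 1)); /* g per count */
    if (accel_g >  range_g) accel_g =  range_g;         /* clip at +FS */
    if (accel_g < -range_g) accel_g = -range_g;         /* clip at -FS */
    return (int)(accel_g / lsb_g);
}
```

At 12 bits, a 0.2 g HSA range would give roughly 0.1 mg per count, while the 1.5 g ESC range gives about 0.7 mg per count, which is why the low-g inclination applications profit most from the reconfigurable mapping.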
Fig. 10. Application acceleration directions
First results with single-axis SCA810 prototypes are very promising in terms of stability over temperature. Figure 11 shows the offset stability on the left and the sensitivity error on the right. Both are better by a factor of 4 than for today's sensors. Even allowing additional variance for potential lot-to-lot variation, the new technology proves its superiority for dedicated high-performance applications.
Fig. 11. First results with single-axis SCA810 prototypes: offset over temperature (left) and sensitivity error over temperature (right)
8
Outlook / Summary
Starting with VTI's current, extremely robust bulk-micromachining technology and combining it with more efficient DRIE etching and, for some products, SOI base wafers, VTI has created its 3D-MEMS technology, which combines the advantages of all MEMS technologies. With this technology core, modern ASICs for signal conditioning and housing technologies chosen for best application fit, a robust sensor platform with high reliability has been realized. Hence what matters most for commercialization, stable production with appropriate yield, can be achieved from the beginning thanks to the moderate change of technology. For automotive applications this guarantees that after design-in the technology can be produced in appropriate volume.
References
[1] T. Lehtonen, J. Thurau: "Monolithic Accelerometer for 3D Measurements", VTI Technologies Oy, AMAA Proceedings 2003
[2] J. Käppi, Tampere University of Technology, Finland; J. Syrjärinne, Nokia Mobile Phones, Finland; J. Saarinen, Tampere University of Technology, Finland: "MEMS-IMU Based Pedestrian Navigator for Handheld Devices", The Institute of Navigation GPS 2001, Salt Lake City, USA
[3] S. Bütefisch, A. Schoft, S. Büttgenbach: "Three-Axes Monolithic Silicon Low-g Accelerometer", Journal of Microelectromechanical Systems, Vol. 9, No. 4, December 2000
[4] R. Puers, S. Reyntjens: "Design and processing experiments of a new miniaturized capacitive triaxial accelerometer", Sensors and Actuators A 68, 1998
[5] G. Li, Z. Li, C. Wang, Y. Hao, T. Li, D. Zhang, G. Wu: "Design and fabrication of a highly symmetrical capacitive triaxial accelerometer", Journal of Micromechanics and Microengineering, 11, 2001
[6] M. A. Lemkin, M. A. Ortiz, N. Wongkomet, B. E. Boser, J. H. Smith: "A 3-axis surface micromachined ΣΔ accelerometer", ISSCC Dig. Tech. Papers, pp. 202-203, Feb. 1997
[7] H. Kuisma: "Inertial Sensors for Automotive Applications", Transducers '01, Munich, Germany, 2001
[8] H. Kuisma, T. Ryhänen, J. Lahdenperä, E. Punkka, S. Ruotsalainen, T. Sillanpää, H. Seppä: "A Bulk Micromachined Silicon Angular Rate Sensor", Transducers '97, Chicago, 1997
Jens Thurau
VTI Technologies Oy
Rennbahnstrasse 72-74
60528 Frankfurt
Germany
Keywords:
accelerometer, 3D-MEMS, automotive application, DRIE, bulk-micromachining, signal conditioning, safety
Intersafe
eSafety for Road Transport: Investing in Preventive Safety and Co-operative Systems, the EU Approach F. Minarini, European Commission
Introduction It is now almost three years since the eSafety initiative was launched in Europe jointly by industry and the European Commission. The initiative aims at bridging the gap between technology developments and their actual implementation in the market by fostering the introduction of new Information and Communication Technologies (ICT) and systems in future motor vehicles and infrastructure. The eSafety initiative therefore aims to improve road safety and efficiency through intelligent vehicle safety systems. When the eSafety initiative was launched in April 2002, the EU had 15 Member States. On 1 May 2004, 10 new countries joined, giving a European Union of 25 members and a population of 445 million. This adds new urgency for all modes of transport, but in particular for road transport and road safety. The European Commission, together with the European governments, has launched several initiatives to improve road safety and make the transport sector more sustainable overall. One example is the adoption in September 2001 of the Transport White Paper, which for the first time established a target of halving fatalities by 2010. The EU of 25 members had 50,000 road fatalities in 2002 (source: Eurostat). For 2010 we target a first reduction to 25,000 fatalities, which is an ambitious but not an impossible target. Several actions have been deployed at the political level, mainly based on strengthening enforcement and improving driver education and information through, for example, prevention campaigns. These measures are proving successful and the number of fatalities is decreasing. Nevertheless, to further reduce the number of fatalities and reach the 50% reduction, we developed an integrated approach in which the traditional "3 Es" (Education, Enforcement and Engineering) are extended with a fourth "E", represented by eSafety.
Fig. 1.
Evolution of fatalities in Europe
1
Integrated Safety Systems
As we know that between 90 and 95% of accidents are due to the human factor, and that in almost 75% of cases human behaviour is solely to blame, it is clear that our failings as drivers represent a significant safety risk to ourselves and other road users. It is in this framework that, within the eSafety initiative, Advanced Driver Assistance Systems and intelligent active safety have a major role to play in reducing the number of accidents and their impact. It is with this purpose that the Commission proposed a strategic objective on "eSafety for road and air transport" in the Information Society Technologies priority thematic area of Framework Programme 6. As a result of the first call for proposals, which closed at the end of April 2003, a key integrated project in the area of preventive safety and Advanced Driver Assistance Systems (ADAS) was selected for funding.
2
The PREVENT Project
This project is part of the Integrated Safety Initiative supported by EUCAR, which regroups other projects funded by the IST programme, such as AIDE (on the human-machine interface), EASIS (on electronic architecture for ADAS in vehicles) and GST (for online safety services), and a project funded by DG RTD called APROSYS (on passive safety). PREVENT focuses on the use of ICT to improve vehicle safety. It will develop, test and evaluate safety-related applications using advanced sensors and communication devices integrated into on-board systems for driver assistance. The project also links with national and European programmes.
Fig. 2.
Different activities and areas of preventive safety in PREVENT
PREVENT integrates different activities and areas of preventive safety. It is organised as a matrix, with applications developed in vertical and horizontal sub-projects in the following safety-function fields:
Safe Speed and Safe Following
Lateral Support and Driver Monitoring
Intersection Safety
Vulnerable Road Users and Collision Mitigation
and in the following cross-functional fields:
Code of Practice for developing and testing ADAS
Use of digital maps
Sensors and sensor data fusion
The PREVENT Consortium has 51 partners from industry, public authorities, RTD institutes, universities and public-private organisations; in particular, 12 car manufacturers and 16 suppliers are partners in the consortium. The total cost of the project is approximately EUR 55 million, to which the Commission contributes a grant of EUR 29.8 million. The project will run for four years, ending in January 2008. It will produce several new applications, improve existing ones, widely disseminate its activities and results, and liaise with other relevant research programmes; at its end, a public exhibition will present and test more than 20 prototypes integrating the project results.
3
Towards Co-operative Systems
The European Commission has a long history of research on the use of information and communication technologies for road and vehicle safety. Under the European Framework Research and Development Programmes, projects have been funded which have developed and demonstrated traffic telematics systems aimed at making transport safer, more efficient, more effective and more environmentally friendly. Many of these systems were aimed at improving the transport infrastructure, while others were based in the vehicles themselves. Mostly, the systems developed by these projects have operated autonomously or stand-alone. Although they hold great potential to improve road safety and efficiency, there are limits to what can be achieved by systems based solely on the road or solely in the vehicle, e.g. dealing with distant threats or anticipating road difficulties with time margins compatible with the driver's response time. This requires another class of systems whose intelligence is distributed between vehicles and roads. As the capacity and flexibility of information technology and communications increase and costs decrease, it becomes feasible to develop co-operative systems in which vehicles communicate with each other and with the infrastructure. In this way co-operative systems will greatly increase the quality and reliability of the information, support and protection available to road users, and the cost-effectiveness of applications. In spring 2004, three expert meetings were organised by Unit C.5 (ICT in Transport and the Environment) of DG Information Society. In these, experts in the field of transport telematics were invited to express their views on the objectives and priorities in the area of co-operative telematics systems for improving the safety and management of road transport. This was intended to provide the basis of a contribution to the IST Work Programme for RTD projects in the Sixth Framework Programme for 2005-2006.
The selected experts came from relevant public-sector bodies with responsibility for the road infrastructure and from the vehicle industry. The following definition of co-operative systems was agreed during the expert meetings: "Road operators, infrastructure, vehicles, their drivers and other road users will co-operate to deliver the most efficient, safe, secure and comfortable journeys. The vehicle-vehicle and vehicle-infrastructure co-operative systems will contribute to these objectives beyond the improvements achievable with stand-alone systems." Some aspects of such systems have already been investigated in previous Framework Programmes and recent projects, but it is sensible to make co-operative systems a stronger focus of future R&D, as vehicles are increasingly equipped with wireless communications, location detection, increased computing power and a multifunctional human-machine interface. Taking into account the results of the consultation process and the ongoing research initiatives, the requirements were developed for projects to be funded in the IST Work Programme 2005-2006 under the strategic objective "eSafety - Co-operative Systems for Road Transport". The main objectives are safety and efficiency. Such systems will enhance the support available to drivers and other road users and will provide for greater transport efficiency by making better use of the capacity of the available infrastructure and by managing varying demands. They will also, and primarily, increase safety by improving the quality and reliability of the information used by advanced driver assistance systems and by allowing the implementation of advanced safety applications.
The research will focus on advanced communication concepts, open interoperable and scalable system architectures, advanced sensor infrastructure, dependable software, robust positioning technologies and their integration into intelligent co-operative systems that support a range of core functions in the areas of road and vehicle safety as well as traffic management and control. This call for proposals was published in December 2004 and will close in March 2005; the available budget is EUR 82 million. The European funded projects on co-operative systems are expected to start at the end of the year. We believe that innovative concepts, technologies and systems will be developed, tested and widely disseminated, taking European excellence in the area of ICT for road and vehicle safety a stage further and thus contributing, with the support of the eSafety initiative, to the ambitious goal of halving fatalities by 2010.
Fabrizio Minarini
European Commission
Information Society Directorate General
Avenue de Beaulieu 31
1049 Brussels
Belgium
A New European Approach for Intersection Safety – The EC-Project INTERSAFE K. Fürstenberg, IBEO Automobile Sensor GmbH B. Rüssler, Volkswagen AG Abstract Intersection safety is a challenging subject due to the complexity of the heterogeneous environment. Furthermore, it is one of the most important areas under discussion, given the large number of accidents that occur at intersections. The European project INTERSAFE was therefore established within the Integrated Project PReVENT. INTERSAFE follows two approaches. The first is a bottom-up approach using state-of-the-art sensors (laser scanner and video) and infrastructure-to-vehicle communication; an innovative strategy for identifying static and dynamic objects, based on accurate positioning at the intersection, will be presented. The second is a top-down approach based on a driving simulator, with which different sensor configurations and communication methods can be evaluated and dangerous scenarios can be investigated as well. These two approaches will be introduced. The communication methods will be described, as well as the results of a detailed accident analysis based on selected European countries.
1
Introduction
In the 6th Framework Programme of the European Commission, the Integrated Project PReVENT includes intersection safety. The INTERSAFE project was created to generate a European approach to increasing safety at intersections. The project started on 1 February 2004 and will end in January 2007. The partners in the INTERSAFE project are: Vehicle manufacturers: BMW, VW, PSA, RENAULT Automotive suppliers: TRW Conekt, IBEO Institute / SME: INRIA / FCS
The main objective of the INTERSAFE project is to improve safety and to reduce, and in the long term avoid, fatal collisions at intersections. This objective will be achieved by combining sensors for the detection of crossing traffic and all other objects at the intersection with sensors for localisation of the host vehicle while approaching and traversing the intersection. Furthermore, there will be communication between the host vehicle and the infrastructure to exchange additional information about traffic, weather, road conditions, etc. A basic approach will be realised on a test vehicle with existing on-board sensors and off-the-shelf communication modules. In parallel, an advanced approach will develop driver-warning strategies using a driving simulator to evaluate and specify the needs for an extended intersection safety system.
2
INTERSAFE Concept & Vision
The INTERSAFE project follows two different approaches in parallel. The first is a bottom-up approach based on two laser scanners, one video camera and vehicle-to-infrastructure communication. All of these state-of-the-art devices will be installed on a VW test vehicle, as shown in figures 1 and 2. The laser scanners will be used for object detection and the video camera for road-marking detection. Highly accurate vehicle localisation is performed by fusing the outputs of the video and laser scanner systems. The laser scanner system tracks and classifies obstacles and other road users.
Fig. 1.
Bottom-up approach with state of the art sensors on a VW test vehicle
Fig. 2.
Fields of view of two ALASCA® sensors and one video camera used in the INTERSAFE approach
Furthermore, communication modules will be installed at selected intersections in public traffic to realise communication between the vehicle and the traffic lights. This approach will result in a basic intersection system which can be evaluated in public traffic at the selected intersections.
Fig. 3.
Top-down approach in the BMW driving simulator
The second approach is a top-down approach based on a BMW driving simulator (see figure 3). The driving simulator allows the analysis of dangerous situations independently of any restricted capabilities of the sensors for environmental detection. The results of this approach will be used to define an advanced intersection safety system, including requirements for advanced on-board sensors. The concept of INTERSAFE is shown in figure 4. Based on object detection, road-marking detection and navigation using natural landmarks (realised in the basic approach by matching information from the laser scanners and the video camera), as well as a detailed map of the intersection, a static world model is built. As a result of this model, all objects and the position of the ego vehicle are known precisely.
Fig. 4.
INTERSAFE concept & vision
In a second step, a dynamic risk assessment is performed. It is based on object tracking and classification, communication with the traffic management, and the intention of the driver. From this assessment, potential conflicts with other road users and the traffic management can be identified. Consequently, the intersection safety system is able to support the driver at intersections. In the INTERSAFE project, the consortium focuses mainly on stop-sign assistance, traffic-light assistance, turning assistance and right-of-way assistance. The two approaches (A-ISS, the Advanced Intersection Safety System, and B-ISS, the Basic Intersection Safety System) differ in their time to market and complexity; the architecture and the warning strategies will probably be the same.
3
Communication
Car-to-infrastructure communication offers additional information to approaching vehicles. It thus becomes possible to bring more safety to signalised intersections by communicating with the traffic signal system (TSS). Many different functions can benefit from this communication. One aim of the INTERSAFE project is to prevent the driver from missing a red light at intersections. Communication with the TSS enables the on-board application to estimate the time remaining before the light signal changes. With this information, an appropriate warning or intervention strategy can be developed to assist the driver. Besides the traffic-light violation warning, a comfort system can be realised that gives a speed recommendation to a driver approaching the intersection. This recommendation for reaching the green light enables better traffic flow and shorter stopping times at intersections. The functions mentioned above can be realised using unidirectional communication from the traffic light to the car, which is the first approach in the INTERSAFE project. Extending this technique to bidirectional communication offers additional possibilities for driver assistance and driver comfort systems. Once the approaching cars communicate bidirectionally with the TSS, the following functions can be realised: intelligent traffic-light control, which can be seen as an extension of the conventional induction loops, where the TSS "knows" further in advance how many cars will approach the intersection at a specific time; traffic-light priority for emergency vehicles; and, when the approaching cars "inform" the TSS of their arrival (as for the intelligent traffic-light control), routing of this information to all other cars to extend their survey area, even "around the corner". Figure 5 shows the facilities for communication with a traffic signal system.
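The green-light speed recommendation described above reduces to a simple calculation once the remaining red time is broadcast. The function name and parameters below are illustrative assumptions, not part of the INTERSAFE specification:

```c
#include <assert.h>

/* Hedged sketch: given the distance to the stop line and the broadcast
 * time until green, suggest a constant approach speed that arrives just
 * as the phase changes, capped at the legal limit. */
double green_wave_speed_kmh(double dist_m, double t_to_green_s,
                            double v_max_kmh)
{
    if (t_to_green_s <= 0.0)
        return v_max_kmh;                          /* already green       */
    double v_kmh = (dist_m / t_to_green_s) * 3.6;  /* m/s -> km/h         */
    return v_kmh < v_max_kmh ? v_kmh : v_max_kmh;  /* respect speed limit */
}
```

For example, 200 m from the stop line with 20 s of red remaining, the advisory speed would be 36 km/h, letting the driver roll through without stopping.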
Fig. 5.
Car-to-infrastructure communication facilities
3.1
Technological Basis
In accordance with the activities in the United States (Vehicle Safety Consortium, VSC) and in Europe (Car-to-Car Communication Consortium, C2CCC) in the field of car-to-car and car-to-infrastructure communication, the technological basis is IEEE 802.11a, also known as wireless LAN. One goal of these activities is to obtain an exclusive frequency band in the 5 GHz range for safety-relevant applications, as has been realised in the US (frequency band from 5.85 GHz to 5.925 GHz). Communication with the TSS should use the same band for its applications.
3.2
Communication Properties
Broadcasting the relevant information from the TSS to the cars within range of the radio link seems suitable for realising unidirectional communication with the TSS. A maximum range of 200 m should be sufficient: a vehicle driving at about 70 km/h will receive the transmitted data more than 10 s before arriving at the intersection. On streets with many traffic lights equipped with communication modules the range can, of course, be shorter. An update of the transmission every 100 ms should be adequate for intersection-related applications. With an initial maximum range of about 200 m, there seems to be no need for additional repeaters or multi-hop transmission. Nevertheless, once results from test sites in city areas with many possible occlusions are available, a need for multi-hop and repeaters may emerge.
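The timing claim above can be checked with one line of arithmetic; the helper below simply converts range and approach speed into available warning time:

```c
#include <assert.h>

/* Checks the advance-warning claim: seconds of advance information a
 * vehicle gets from a broadcast with the given radio range at the given
 * approach speed. 200 m at 70 km/h gives a little over 10 s. */
double advance_warning_s(double range_m, double speed_kmh)
{
    return range_m / (speed_kmh / 3.6);   /* km/h -> m/s */
}
```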
As for all safety-related applications with wireless communication, the integrity of the data from the TSS is important. Media access and the handling of priorities will be realised as specified in IEEE 802.11e. To guarantee that the transmitted data really originates from the TSS, an authentication mechanism with certification is needed: transmitted messages are digitally signed before sending and, when received, verified using the public key read from the certificate. For comfort-system applications at traffic signal systems, or other applications that require bidirectional communication, these techniques have to be extended. As mentioned before, however, INTERSAFE will in a first step focus on the simpler unidirectional communication. Nevertheless, an extension will not affect the standards and specifications stated above.
3.3
INTERSAFE System Architecture
Within the INTERSAFE project, a prototype intersection with communication will be built. In the beginning, a standard PC will replace the TSS controller board to provide unrestricted access to all required data (e.g. signal times); today's TSS controllers do not offer that possibility with sufficient accuracy. The PC will be equipped with an IEEE 802.11a WLAN card to realise the communication, and a GPS timestamp will ensure synchronisation of the sent and received data. With this setup, the communication possibilities for safety-related applications at intersections will be evaluated.
Fig. 6.
Communication system
Figure 6 shows the system architecture schematically. Following a successful demonstration of the system's functional efficiency, a re-engineering of a standard TSS controller can be considered in order to demonstrate feasibility in real traffic situations.
4
Accidentology - Relevant Scenarios
Based on a detailed accident analysis for intersections in selected European countries, the relevant scenarios which have to be addressed by an intersection safety system were determined. The three most important scenarios, covering more than 60% of the accidents at intersections, were identified. The strategy of the INTERSAFE applications is focused on warning the driver if a dangerous situation is predicted; thus a warning has to be generated only a few seconds before a potential crash.
Fig. 7.
The CASE VEHICLE (A) drives with an initial speed of 0 to 60 km/h. The final speed is 0 km/h and the final position is the stop line at the road sign. The DRIVER'S INTENTION (A) is to stop and cross the road, to stop and turn left or right, or not to stop. The ROAD SIGNS can be a stop sign (mainly), a traffic light or a give-way sign. The OPPONENT VEHICLE (B) drives at a constant speed of up to 40 km/h from right to left or vice versa.
A New European Approach for Intersection Safety – The EC-Project INTERSAFE
Fig. 8.
The CASE VEHICLE (A) drives with an initial speed of 0 to 60 km/h. The ROAD SIGN could be a stop sign (mainly), a traffic light or a give-way sign. The OPPONENT VEHICLE (B) drives at a constant speed of up to 60 km/h, from right to left or vice versa, or performs a left or right turn. The DRIVER'S INTENTION (B) is to stop and cross, or not to stop.
Fig. 9.
The CASE VEHICLE (A) drives with an initial speed of 0 to 60 km/h. The final speed is 0 km/h and the final position is a stop in the centre of the intersection. The DRIVER'S INTENTION (A) is to turn left. There are no ROAD SIGNS for the case/opponent vehicle. The OPPONENT VEHICLE (B) drives at a constant speed of up to 40 km/h from the opposite direction.
The three most important scenarios, covering more than 60% of the accidents at intersections, are taken as the reference for the requirements.
5
Requirements
Based on the relevant scenarios from the previous chapter and the warning strategy, requirements for the sensor system were formulated. INTERSAFE focuses on stop sign assistance, traffic light assistance, turning assistance and right-of-way assistance. The sensor systems should have a medium range of up to 80 m, with a very wide field of view of about ±125° around the front of the vehicle, and should be able to localise the vehicle accurately in position and orientation. Naturally, automotive requirements such as robustness to weather and lighting conditions are also taken into consideration.
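The coverage requirement (80 m range, ±125° field of view) can be expressed as a simple geometric test in vehicle coordinates; the coordinate convention and function names below are illustrative assumptions, not part of the project specification.

```python
import math

MAX_RANGE_M = 80.0     # medium-range requirement
HALF_FOV_DEG = 125.0   # +/- 125 degrees around the vehicle front

def in_field_of_view(x_m: float, y_m: float) -> bool:
    """True if a target at (x, y) in vehicle coordinates (x forward,
    y to the left) lies inside the required sensor coverage."""
    rng = math.hypot(x_m, y_m)
    bearing_deg = math.degrees(math.atan2(y_m, x_m))  # 0 deg = straight ahead
    return rng <= MAX_RANGE_M and abs(bearing_deg) <= HALF_FOV_DEG

print(in_field_of_view(50.0, 10.0))   # ahead and slightly left: True
print(in_field_of_view(-30.0, 0.0))   # directly behind: False
```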
Fig. 10. Field of vision for turning into a road with priority (left) and for turning off a road with priority (right) (not to scale).
The sensor systems of the bottom-up approach, which will be built up by spring 2005, will be applied to the three relevant scenarios.
6
Conclusion
The proposed solution to realise the INTERSAFE system is based on challenging technical objectives. The consortium is convinced that it will be able to fulfil the requirements to support the driver at intersections. The basic system with on-board sensors will provide a solution which can be tested on selected intersections. The advanced system will provide knowledge about the future needs of sensors, and about new opportunities to support the driver in more critical driving situations as well.
Acknowledgements INTERSAFE is a subproject of the IP PReVENT. PReVENT is part of the 6th Framework Programme, funded by the European Commission. The partners of INTERSAFE thank the European Commission for supporting the work of this project.
References
[1] Buchanan, A.: Lane Keeping Functions. 12th Symposium e-Safety, ATA EL 2004, Parma, Italy.
[2] Lages, U.: Laser Sensor Technologies for Preventive Safety Functions. 12th Symposium e-Safety, ATA EL 2004, Parma, Italy.
[3] Fuerstenberg, K.: Object Tracking and Classification for Multiple Active Safety and Comfort Applications using a Multilayer Laserscanner. Proceedings of IV 2004, IEEE Intelligent Vehicles Symposium, June 2004, Parma, Italy.
[4] Heenan, A.; Shooter, C.; Tucker, M.; Fuerstenberg, K.; Kluge, T.: Feature-Level Map Building and Object Recognition for Intersection Safety Applications. Proceedings of AMAA 2005, Conference on Advanced Microsystems for Automotive Applications, March 2005, Berlin, Germany.
[5] Hopstock, M. D.; Ehmanns, D.; Spannheimer, H.: Development of Advanced Assistance Systems for Intersection Safety. Proceedings of AMAA 2005, Conference on Advanced Microsystems for Automotive Applications, March 2005, Berlin, Germany.
Kay Ch. Fürstenberg
Research Management
IBEO Automobile Sensor GmbH
Fahrenkrön 125, 22179 Hamburg, Germany
[email protected]

Bernd Rössler
Volkswagen AG, Group Research, Electronic Systems
Letter box 11/17760, D-38436 Wolfsburg, Germany
[email protected]

Keywords: intersection safety, communication, relevant scenarios, video, laserscanner
Feature-Level Map Building and Object Recognition for Intersection Safety Applications A. Heenan, C. Shooter, M. Tucker, TRW Conekt K. Fürstenberg , T. Kluge, IBEO Automobile Sensor GmbH Abstract Accidents at intersections happen when drivers perform inappropriate manoeuvres. Advanced sensor systems will enable the development of Advanced Driver Assistance Systems (ADAS) which can assess the potential for a collision at a junction. Accurate localisation of the driver’s vehicle and path prediction of other road users can be fused with traffic signal status and other information. The ADAS system can use this fused data to assess the risks to the driver and other road users of potentially hazardous situations and warn the driver appropriately. The accurate localisation of the host vehicle is achieved by utilising individual sensors’ feature-level maps of the intersection. The INTERSAFE project will independently use video and Laserscanner sensing technologies for localisation and then fuse the individual outputs to improve the overall accuracy. The Laserscanner system will also be used to track and classify other road users and obstacles, providing additional data for the path prediction and risk assessment part of the application.
1
Introduction
European accident statistics show that up to a third of fatal and serious accidents occur at intersections. The INTERSAFE project aims to reduce and ultimately eliminate fatal collisions at intersections. The project will explore the accident prevention and mitigation possibilities of an integrated safety system by creating vehicle demonstrators that provide the driver with turning assistance and infrastructure status information. Furthermore the effectiveness of the safety system will be examined for higher-risk scenarios through its implementation and testing in a simulator. The sensor technologies being used in INTERSAFE are video and Laserscanner. Figure 1 shows an example intersection with a vehicle and the sensors’ fields of view overlaid. This paper will focus on the sensor technology challenges within the INTERSAFE project, namely, algorithm development for vehicle localisation, fusion of
the outputs of the video and Laserscanner systems and the use of the Laserscanner system to track and classify obstacles and other road users. The nature of the problem faced by the INTERSAFE project means that a very high accuracy in the localisation of the host vehicle in the intersection is required. Improvements to the current automotive sensor technologies are needed to achieve this required level of accuracy both for map creation and real-time localisation within intersections. The INTERSAFE system’s accuracy will require the fusion of high-level localisation data from two independent but complementary sensor technologies.
Fig. 1.
Example intersection with a vehicle and the sensors’ fields of view overlaid
As each of the two sensing technologies used in INTERSAFE can detect different types of features, they will have their own feature-level maps of the intersection containing only features detectable by, or relevant to, themselves. Each sensor will match sensor data with the feature-level map to provide an estimate of the position of the vehicle within the intersection. The estimate of vehicle position from each of the two sensors will be fused to provide a single estimate of the host vehicle position. The host position estimate will be used to determine the host vehicle location on a high-level feature map of the intersection. The high-level map contains data (such as position of lanes, stop lines, etc.) relevant to the risk assessment and collision avoidance algorithms. In the future the maps could be transmitted to the host vehicle as it approaches an intersection that requires extra
driver assistance. However, this is beyond the scope of the INTERSAFE project. The feature-level maps will be created semi-automatically. Intersection data reported by the sensors, along with any relevant vehicle dynamics and GPS data, will be recorded whilst driving through the intersection from the various available approach roads (fig. 2). Special reference features will be added to the intersection during data logging to allow the map building algorithms to determine the orientation of each logged section and to aid in linking the logs from different directions. The special features must be detectable by both sensing technologies to enable the two sensor-level maps to be referenced from the same point on the intersection. The data logs can then be post-processed to build up separate feature-level maps of the intersection for each sensor.
Fig. 2.
Video sensor datalogger screen shot
After the feature-level maps have been created they can be edited to remove the specially added features that would not appear on the actual junction and any features that may not be useful to the sensors for localisation or target discrimination. The Laserscanner system will also use the map to remove all measurements at fixed obstacles in the current range profile set, with the remaining objects being classified and reported to the INTERSAFE system as road users and potential threats. The sensors will be synchronised during operation using the video frame synchronisation signal to trigger the scan point of the Laserscanner system. This should simplify the host vehicle localisation task that fuses the data from the two sensor systems.
2
Video Sensor Feature-Level Data Collection and Map Building
The TRW Conekt automotive video sensor for Lane Departure Warning applications [1] will be modified and used in the INTERSAFE project to sense the position of intersection features (typically visible road markings) relative to the host vehicle. The position of features will be compared with a video sensor feature-level map built up on previous visits to the intersection in question. Data from this comparison combined with data from other vehicle sensors can be used to calculate the localisation of the host vehicle in the junction. The features detected and stored in the map by the video sensor for the INTERSAFE application must have detectable information in both the lateral and longitudinal directions for unambiguous localisation of the host vehicle to be possible.
2.1
Sensor Suite Specifications
Sensor: Wheel speed (×2)
Sense data: wheel speed
Output type: digital TTL
Output details: output range 0.3 V low, 5.0 V high; 44 pulses / wheel revolution

Sensor: BEI MotionPak™
Sense data: yaw rate, pitch rate, roll rate (°/sec); lateral, longitudinal and vertical acceleration (g)
Output type: analogue
Output details: max range ±500°/sec (rates), ±10 g (accelerations); configured range ±100°/sec (rates), ±2 g (accelerations); range output ±2.5 Vdc (rates), ±7.5 Vdc (accelerations)

Sensor: Correvit® SL
Sense data: outputs are configurable (see data sheet); Channel 1: longitudinal distance DL; Channel 2: angle ϕ (°)
Output type: digital TTL
Output details: pulse voltage 0–5 V; Channel 1: 340 pulses/m (configurable to 160–750 pulses/m); Channel 2: 50 Hz/°. Note: Channel 2 is FM, 10 kHz carrier wave, ±2 kHz

Sensor: Afusoft Raven 6
Sense data: latitude (degrees, minutes, decimal minutes, North or South); longitude (degrees, minutes, decimal minutes, East or West); true course over ground (°); speed over ground (km/h)
Output type: serial
Output details: NMEA 0183 standard format

Sensor: National Semiconductor greyscale HDR CMOS progressive scan imager
Sense data: raw video image
Output type: Cameralink
Output details: resolution 640 × 480; enabled field of view 22° (vertical), 54° (horizontal); intra-frame dynamic range variable 62–110 dB; inter-frame dynamic range 120 dB; frame rate 30 Hz
The Correvit SL and BEI MotionPak sensors are used to improve the accuracy of the feature-level map creation task. It is envisaged that a standard wheel-speed and yaw-rate sensor (as available on most current high-end vehicles) will provide sufficient accuracy for the localisation task.
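As a hedged illustration of how the 44 pulses-per-revolution wheel-speed output translates into vehicle speed, the sketch below assumes a tyre rolling circumference of 1.9 m, which is not specified in the paper.

```python
PULSES_PER_REV = 44          # from the wheel-speed sensor specification
WHEEL_CIRCUMFERENCE_M = 1.9  # assumed tyre rolling circumference

def speed_from_pulses(pulse_count: int, interval_s: float) -> float:
    """Vehicle speed in m/s from TTL pulses counted over a time interval."""
    revolutions = pulse_count / PULSES_PER_REV
    return revolutions * WHEEL_CIRCUMFERENCE_M / interval_s

# 220 pulses in 0.5 s -> 5 wheel revolutions -> 19.0 m/s (68.4 km/h)
print(speed_from_pulses(220, 0.5))
```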
2.2
Feature Extraction
The existing TRW video lane sensor is used to measure the relative position of lane markings in front of the vehicle. The lane markings are detected in the video image using proprietary feature extraction techniques. Line tracing and fitting algorithms are used to extract parametric line descriptions from the raw edge data. The line parameters are then corrected for inaccuracies caused by optical effects. Feature extraction from the video images is performed differently for map creation and localisation. For map creation, the raw image is logged with the vehicle dynamics data and post-processed to allow optimised line parameter estimates. For the online localisation (line tracking and matching), the line parameters are determined using optimised image processing techniques which run in real time on the system PC. Using parameterised lane markings reduces the amount of data which needs to be processed by the association and tracking algorithms.
Fig. 3.
Video sensor image and detected lines
2.3
Map Creation
During map creation, lane marking images and vehicle dynamics data will be logged for each approach road to the intersection. Each log is processed to generate a map containing the absolute position of each of the lane markings. The absolute positions are determined by fusing the lane marking positions relative to the vehicle with the vehicle dynamics data using an extended Kalman filter (EKF). The maps generated for the different approaches to the junction are then merged; this merging is performed automatically, using the known reference markers placed in or near the intersection. Limited manual editing of the map will then take place to produce a clean, low-level feature map (fig. 4) which can be used in the online video localisation system.
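The paper uses an extended Kalman filter; as a simplified illustration of the underlying predict/update cycle, the following scalar (one-dimensional, linear) Kalman filter fuses an odometry increment with a map-matched measurement. All numbers and names are illustrative, not taken from the project.

```python
def kalman_step(x, p, u, q, z, r):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: previous position estimate and its variance
    u, q: odometry increment and process-noise variance (prediction)
    z, r: map-matched measurement and its noise variance (update)"""
    x_pred, p_pred = x + u, p + q        # predict with vehicle dynamics
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)    # correct with the measurement
    p_new = (1.0 - k) * p_pred           # uncertainty shrinks after update
    return x_new, p_new

x, p = 0.0, 1.0                          # initial estimate and variance
x, p = kalman_step(x, p, u=1.0, q=0.05, z=1.2, r=0.1)
print(round(x, 3), round(p, 3))          # estimate pulled towards the measurement
```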
Fig. 4.
Video feature map
Fig. 5.
Schematic description of the ALASCA®
3
Laserscanner Obstacle Detection
The Automotive Laserscanner (ALASCA® for short) was developed to integrate Laserscanner technology into automobiles. It combines a 4-channel laser range finder with a scanning mechanism in a robust design suitable for integration into the vehicle. The object data, such as contour or velocity, are useful for many applications in the automotive area.
3.1
Principle of Operation
The main units of the ALASCA® are shown in figure 5. A laser diode emits short light pulses (red 'light' in fig. 5) which a rotating mirror deflects into the scene. The target around the scanner/car reflects the infrared beam (blue 'light' in fig. 5), and the mirror directs the returning light onto the receiver. This whole process takes less than a microsecond, and the elapsed time is a measure of the distance to the target. The angular position is supplied by an angle encoder. The measurement values are compensated to minimise the effects of temperature and of strong returns from highly reflective targets, yielding accurate and robust distance values. With the new measurement technology the ALASCA® is able to detect two echoes per measurement and to generate two distance values for each laser shot and layer (fig. 6). With this feature the ALASCA® can detect targets behind raindrops, fog or spray when driving behind other vehicles. The Laserscanner can measure through a dirty cover, recognise heavy soiling and report a malfunction of the Laserscanner system.
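The time-of-flight principle above reduces to a one-line computation; the 200 ns example value is illustrative.

```python
SPEED_OF_LIGHT_MPS = 299_792_458.0

def distance_from_pulse(round_trip_s: float) -> float:
    """Range in metres from the round-trip time of one laser pulse;
    the pulse travels to the target and back, hence the factor 2."""
    return SPEED_OF_LIGHT_MPS * round_trip_s / 2.0

# A pulse returning after about 200 ns corresponds to a target at ~30 m.
print(round(distance_from_pulse(200e-9), 1))   # 30.0
```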
Fig. 6.
Example for double-echo evaluation
The four scan planes give the ALASCA® a vertical field of view of approx. 3.2 degrees. In combination with the evaluation of the scan data (e.g. removing scan data resulting from measurements on the ground, rain or snow), the scanner system compensates for the pitch movements of the vehicle without losing contact with the tracked targets.
Fig. 7.
ALASCA® Scan data, 150° scan area and video frame of this scene
Fig. 7 shows a perspective view with a scan area of 150 degrees. One ALASCA® is mounted at the 0 m position (red circle). The four layers are shown in four different colours; the second echoes (the B-channel) are shown in lighter colours. The red line marks the zero-degree direction of the vehicle coordinate system. The oncoming black car can be found to the left of the orange circle in the scan data picture; the silver car in front is located part of the way along the red line. The red dots (in the orange circles in fig. 7) are distance values that have been processed and marked as ground data; the scan pre-processing marks measurements from dirt and rain as well. The object tracking and classification algorithms categorise the objects on both sides as background objects.
3.2
Technical Data ALASCA®
Range: 0.3 m up to 80 m (30 m on really dark targets)
Range resolution: ±2 cm
Horizontal field of view: up to 240 degrees (depending on the mounting position)
Scan frequency: 10 to 40 Hz
Vertical field of view: 3.2 degrees, subdivided into 4 layers
Horizontal angular resolution: 0.25 to 1 degree (depending on the scan frequency)
Interfaces: ARCnet / Ethernet, CAN
Eye-safe (laser class 1)
Waterproof to IP66 (even as a stand-alone unit), no external moving parts
Electrical power consumption: 14 W
3.3
The Laserscanner Fusion System
In recent projects the scan data fusion was very processing-intensive. The Laserscanner gathers the range profile over a certain time while the mirror rotates, with the host vehicle usually moving. Due to the time within the scan, actually equidistant targets measured at different angles appear shifted in distance with respect to the vehicle coordinate system. Every measurement therefore has to be shifted to a common time base with respect to the host vehicle's movement. Moreover, two Laserscanners do not usually measure the same object at the same time; if a fused scan simply combined the scan data directly, the same object would appear at different places in the vehicle coordinate system. Taking this into account, the ALASCA® scan data fusion is based on synchronised Laserscanners to provide a consistent fused scan. Fig. 8 shows a fusion system with two ALASCA® sensors, a fusion ECU and the vehicle control unit.
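The shift of each measurement to a common time base can be sketched as follows for the simplest case of a host vehicle driving straight ahead at constant speed; the data layout and the numbers are assumptions for illustration only.

```python
import math

def compensate_scan(points, scan_period_s, ego_speed_mps):
    """Shift each (angle_deg, range_m, time_in_scan_s) measurement to the
    time base of the scan end, assuming the host vehicle drives straight
    ahead (along +x) at constant speed. Returns (x, y) points in the
    vehicle coordinate system at the common time."""
    compensated = []
    for angle_deg, range_m, t in points:
        a = math.radians(angle_deg)
        x = range_m * math.cos(a)
        y = range_m * math.sin(a)
        # The vehicle keeps moving after this measurement was taken, so
        # the point slides backwards by the distance still to be travelled.
        dt = scan_period_s - t
        compensated.append((x - ego_speed_mps * dt, y))
    return compensated

# Two measurements of the same stationary post, 0.05 s apart in the scan,
# taken while driving at 20 m/s: after compensation they coincide.
pts = [(0.0, 11.0, 0.00), (0.0, 10.0, 0.05)]
print(compensate_scan(pts, scan_period_s=0.05, ego_speed_mps=20.0))
```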
Fig. 8.
Fusion System with vehicle control
3.4
Scanner Synchronisation
The first step of the scan data fusion of the ALASCA® sensors is the synchronisation of the Laserscanners. The ECU periodically sends signals (at the desired rotation frequency) to both Laserscanners, which adapt their rotation frequency to the synchronisation frequency and align the angle of the rotating mirror with the sync signal. For this purpose a fine-tuning motor speed controller is integrated into the Laserscanner. The accuracy of synchronisation (i.e. the time difference between the synchronisation signal and the zero-crossing of the direction of view) is about ±2 ms. The synchronisation ensures that both Laserscanners measure an object at almost the same time. Other sensors (such as a video camera) could be synchronised by the ECU as well.
3.5
Laserscanner Data Fusion
The two ALASCA® sensors send raw data (distance and angle measurements) to an ECU. The fusion module translates the scans into the vehicle coordinate system and fuses the two scans. Thanks to the Laserscanner synchronisation, there is no need for time shifting due to the movement of the host vehicle.
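A minimal sketch of this fusion step: each raw (angle, range) measurement is transformed into the vehicle coordinate system using the scanner's mounting pose, and the two synchronised scans are concatenated. The mounting poses below are hypothetical, not the demonstrator's actual configuration.

```python
import math

def to_vehicle_frame(scan, mount_x, mount_y, mount_yaw_deg):
    """Convert raw (angle_deg, range_m) measurements of one Laserscanner
    into (x, y) points in the vehicle coordinate system, given the
    scanner's mounting position and orientation."""
    yaw = math.radians(mount_yaw_deg)
    points = []
    for angle_deg, range_m in scan:
        a = math.radians(angle_deg) + yaw          # rotate into vehicle frame
        points.append((mount_x + range_m * math.cos(a),
                       mount_y + range_m * math.sin(a)))
    return points

def fuse_scans(scan_a, scan_b, pose_a, pose_b):
    """Fused scan: both synchronised raw scans expressed in one frame."""
    return to_vehicle_frame(scan_a, *pose_a) + to_vehicle_frame(scan_b, *pose_b)

# Hypothetical mounting: two scanners at the front corners, angled outwards.
fused = fuse_scans([(0.0, 5.0)], [(0.0, 5.0)],
                   (3.5, 0.8, 15.0), (3.5, -0.8, -15.0))
print([(round(x, 2), round(y, 2)) for x, y in fused])  # symmetric pair of points
```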
Fig. 9.
Object tracking using Laserscanners: road users are tracked crossing an intersection
3.6
Object Tracking and Classification
Established objects are recognised by comparing the segment parameters of a scan with the predicted parameters of known objects from the previous scan(s). Unrecognised segments are instantiated as new objects, initialised with default dynamic parameters. The tracking process is usually divided into three sub-processing steps, as shown in figure 10.
Fig. 10. The tracking process
For estimating the object state parameters, the Kalman filter is well known in the literature and, as an optimal linear estimator, is used in various applications [2]. A simplified Kalman filter, the alpha-beta tracker, is often used instead [3]. Here the Kalman filter was chosen, as it allows for the more complex dynamic models that are necessary for precise object tracking. We evaluated our data with different association methods, such as the nearest neighbour and the global nearest neighbour methods [4]. Object classification is based on the object outlines (static data) of typical road users, such as cars, trucks/buses, poles/trees, motorcycles/bicycles and pedestrians. Additionally, the history of the object classification and the dynamics of the tracked object are used to support the classification performance [5, 6]. In a simple algorithm, road users are classified by their typical angular outline using only the geometric data [7]; additionally, the object's history and its dynamic data are necessary to enable a robust classification [8]. If there is not enough information for an object classification, a hypothesis is generated based on the object's current appearance. This temporary assignment remains valid as long as no limiting parameter is violated; however, the classification is checked every scan to verify the assignment of the specified class [9].
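For contrast with the Kalman filter chosen in the project, the alpha-beta tracker mentioned above can be sketched in a few lines for a single one-dimensional track; the gains and sample time are illustrative assumptions.

```python
def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.5):
    """Track one object's position along a single axis with an alpha-beta
    filter: predict with constant velocity, then blend in each measurement."""
    x, v = measurements[0], 0.0          # initialise from the first scan
    estimates = [x]
    for z in measurements[1:]:
        x_pred = x + v * dt              # constant-velocity prediction
        residual = z - x_pred
        x = x_pred + alpha * residual    # position correction
        v = v + (beta / dt) * residual   # velocity correction
        estimates.append(x)
    return estimates, v

# Object moving at 10 m/s, position sampled every 0.1 s.
est, v = alpha_beta_track([0.0, 1.0, 2.0, 3.0, 4.0], dt=0.1)
print(round(est[-1], 2), round(v, 2))    # converges towards the true motion
```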
An environmental model supports the selection of a suitable class. The understanding of the traffic situation may further improve the classification results.
4
Laserscanner Feature-Level Map Building
GPS-based localisation could be sufficient in an environment with a free view of the sky, but it is still neither very accurate nor very reliable. In particular in urban areas, where most intersections are located, localisation using GPS systems is not satisfactory. Therefore other strategies are under development which enable a very accurate localisation in a very reliable manner. The approach within INTERSAFE focuses on localisation based on natural landmarks. In a first stage, a map of the intersection is generated by moving the Laserscanner across the intersection, taking the host vehicle's movement into account [10]. The obtained grid map is used to semi-automatically mark the natural landmarks, such as posts or trees. The landmarks are registered in a feature-level map, as shown in fig. 11.
Fig. 11. Laserscanner feature map (left) and video reference picture (right)
After the feature-level map has been generated, it is possible to recognise some of the landmarks registered in it. With that, every vehicle equipped with the Laserscanner is able to localise itself with respect to the recognised landmarks at the intersection. First results show that a very accurate localisation on the intersection is possible.
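Assuming known orientation and already-resolved landmark associations, the localisation step can be reduced to a very simple sketch: the vehicle position is the mean offset between the mapped and the observed landmark positions. This is a deliberate simplification of the actual matching algorithm; all coordinates are invented for illustration.

```python
def localise(map_landmarks, observed):
    """Estimate the vehicle position from matched landmarks.
    map_landmarks: absolute (x, y) of each landmark in the feature map.
    observed:      the same landmarks as (x, y) relative to the vehicle.
    With orientation known and associations resolved, the vehicle position
    is the mean of (map position - relative observation)."""
    n = len(map_landmarks)
    x = sum(m[0] - o[0] for m, o in zip(map_landmarks, observed)) / n
    y = sum(m[1] - o[1] for m, o in zip(map_landmarks, observed)) / n
    return x, y

# Two posts registered in the map and currently seen ahead of the vehicle:
landmark_map = [(50.0, 20.0), (52.0, 28.0)]
relative_obs = [(10.0, 0.0), (12.0, 8.0)]
print(localise(landmark_map, relative_obs))   # vehicle at (40.0, 20.0)
```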
5
Conclusion
The INTERSAFE project answers the need for increased safety at intersections on European roads. The INTERSAFE approach, presented in this paper, is focussed on existing Laserscanner and video technologies and the modifications needed to achieve the required accuracy of host vehicle localisation. Additional information about the road users on the intersection is provided for assistance applications. Using the two sensors and infrastructure-to-vehicle communication, collision avoidance applications (relevant to common intersection accidents) are under development. The INTERSAFE project will build a demonstrator vehicle that will allow evaluation of the developed applications in real-life situations, and a simulator to evaluate driver reaction to the ADAS in situations that cannot be assessed on real intersections (e.g. full collision, high speed, etc.). Initial results show that the very high level of accuracy required for the INTERSAFE application is achievable through the use of individual sensor feature maps and the fusion of the output from the two sensors.
Acknowledgements INTERSAFE is a subproject of the IP PReVENT. PReVENT is part of the 6th Framework Programme, funded by the European Commission. The partners of INTERSAFE thank the European Commission for supporting the work of this project.
References
[1] Buchanan, A.; Tucker, M.: A Low Cost Video Sensor for Lane Support. ITS World Congress 2002, Chicago.
[2] Welch, G.; Bishop, G.: An Introduction to the Kalman Filter. http://www.cs.unc.edu, 2001.
[3] Gavrila, D. M.; Giebel, J.: Shape based Pedestrian Detection and Tracking. Proceedings of IV 2002, IEEE Intelligent Vehicles Symposium, Versailles.
[4] Blackman, S.: Design and Analysis of Modern Tracking Systems. Artech House, London, 1999.
[5] Willhoeft, V.; Fuerstenberg, K. Ch.; Dietmayer, K. C. J.: New Sensor for 360° Vehicle Surveillance. Proceedings of IV 2001, IEEE Intelligent Vehicles Symposium, Tokyo.
[6] Fuerstenberg, K. Ch.; Willhoeft, V.: Pedestrian Recognition in Urban Traffic using Laserscanners. Proceedings of ITS 2001, 8th World Congress on Intelligent Transport Systems, Sydney, Paper 551.
[7] Fuerstenberg, K. Ch.; Hipp, J.; Liebram, A.: A Laserscanner for Detailed Traffic Data Collection and Traffic Control. Proceedings of ITS 2000, 7th World Congress on Intelligent Transport Systems, Turin, Paper 2335.
[8] Fuerstenberg, K. Ch.; Dietmayer, K. C. J.; Willhoeft, V.: Pedestrian Recognition in Urban Traffic using a Vehicle-based Multilayer Laserscanner. Proceedings of IV 2002, IEEE Intelligent Vehicles Symposium, Versailles, Paper IV-80.
[9] Dietmayer, K. C. J.; Sparbert, J.; Streller, D.: Model based Object Classification and Object Tracking in Traffic Scenes from Range Images. Proceedings of IV 2001, IEEE Intelligent Vehicles Symposium, Tokyo, Paper 2-1.
[10] Weiss, T.: Globale Positionsbestimmung eines Fahrzeugs durch Fusion von Fahrzeug- und GPS-Daten zur Erstellung einer digitalen Referenzkarte. Diplomarbeit, Universität Ulm, 2004.

Adam Heenan, Carl Shooter, Mark Tucker
TRW Conekt, Technical Centre
Stratford Road, Solihull, B90 4GW
United Kingdom
[email protected]
[email protected]
[email protected]

Kay Fürstenberg, T. Kluge
IBEO Automobile Sensor GmbH
Fahrenkroen 125
22179 Hamburg
Germany
[email protected]

Keywords: video, laserscanner, intersection, map-building, safety, object recognition, sensor-level features
Development of Advanced Assistance Systems for Intersection Safety M. Hopstock, Dr. D. Ehmanns, Dr. H. Spannheimer, BMW Group Research and Technology Abstract The major task of the project InterSafe, which is funded by the European Commission, is to reduce the number of accidents in intersection scenarios. To address this problem, the BMW Group chose a top-down approach to develop advanced safety systems for intersections. The relevant scenarios of turning, crossing and ignoring traffic lights were identified through the analysis of intersection accidents. Based on this, the controller will be developed within the realistic environment of the BMW Group dynamic driving simulator. This allows the requirements for the surveillance systems to be determined in parallel with the development of the other components, such as the Human-Machine Interface and the controller algorithm. Furthermore, the virtual environment allows the evaluation to take place with real test drivers in critical situations without endangering them. The result of the project will be an assistance system to enhance safety at intersections.
1
Motivation of a Top-Down Approach
The effectiveness of active safety systems depends heavily on novel sensors, which influence the reliability of situation detection. The development of safety systems often follows a classical bottom-up process led by the capabilities of key technologies. In such a process there is the risk of not considering all possible situations in the design of active safety systems [1]. In order to support both system function and new technology, a top-down approach was chosen by BMW Group Research and Technology. This strategy allows the needed interaction between the desired function (reduction of accidents) and the technical realisation (real system). The process starts with the analysis of relevant accidents (i.e. those with significant reduction potential) in order to define the required functions of the system. Then the technical feasibility of assistance systems can be determined using simulation tools or a driving simulator. At the end, sensor requirements complete the development. In contrast, a bottom-up approach starts with the definition of a new sensor technology and then derives the manner in which it could be used to reduce accidents.
Fig. 1.
Approaches for system development [2]
2
Statistical Analysis
For accident analysis, several sources can be used. A first approach is based on national statistics (e.g. for Germany: DESTATIS, the German Federal Statistical Office). These include all accidents recorded by the police at the accident site, but contain only a few relevant parameters. Additionally, in-depth studies (e.g. for Germany: GIDAS, the German In-Depth Accident Study) can be used to obtain detailed information. Compared to national statistics they comprise relatively few reports, but these are written in full detail by specially trained investigation teams on site. Since the in-depth data are a well-defined subset of the national statistics, they can be regarded as a representative sample. In Germany, there were 354,534 accidents with at least minor injuries in 2003. The distribution of pre-crash situations (accident types) per road type is displayed in figure 2. The intersection-related situations, turn across path (type 2) and turn into/straight crossing path (type 3), together account for 36% of all accidents. This indicates the importance of system development in this field.
Fig. 2.
Distribution of pre-crash situations per road type [3]
Analysing the main causes in figure 3, it is obvious that disregarding right of way clearly leads the ranking. Adding errors while turning, a large group (roughly one third) of preventable mistakes can be addressed and potentially avoided by supporting the driver with advanced assistance systems for intersection safety.
Fig. 3.
Distribution of main accident causes [4]
3
Accident Analysis
After defining relevant accident situations, it is important to analyse some representative cases in detail to get information on the initiating manoeuvres. This helps to interpret and understand why a critical situation (near-collision) in a particular case finally resulted in an accident (collision). One method of analysing the situation is to change relevant parameters like velocity, deceleration and steering angle and to study the resulting changes in the accident sequence. Suitable simulation systems are required for this, such as PC-Crash© or Matlab/Simulink©. Reconstruction data are an essential input, as the parameters for avoidance algorithms should be realistic; analysing an actual accident can provide them. An appropriate tool is PC-Crash©. This programme is based on a 3D impact model, which allows the calculation of multiple collisions and the visualisation of the sequence, as exemplarily shown in figure 4.
Fig. 4.
Reconstruction of an accident situation created with PC-Crash© [5]
Since PC-Crash© can only provide reconstruction data, a sufficient tool for system development, algorithm testing and a first performance test is a 2D simulation (e.g. based on Matlab/Simulink©). Stylised cars and road layouts represent the scenario and allow the effects of different assistance systems and driver reactions to warnings (e.g. no reaction, or braking/counter-steering) on the sequence to be determined, as can be seen in figure 5. On the left, a severe left-turn accident is visible; in the middle, it could be mitigated by braking; on the right, avoidance was enabled by an appropriate assistance system's intervention.
Fig. 5. 2D simulation results [5]
This detailed interpretation and simulation of accidents is the basis for the development of the safety application itself.
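A minimal sketch of such a 2D point-mass simulation follows, assuming invented geometry, speeds and warning time; the actual Matlab/Simulink© models are of course more detailed. It reproduces two of the outcomes of figure 5: no reaction versus braking after a warning.

```python
# Toy 2D crossing-path simulation: an ego vehicle (x axis) and an
# opponent (y axis) approach a 4 m x 4 m conflict zone at the origin.
# All positions, speeds and the warning time are illustrative.

def simulate(reaction: str, dt: float = 0.01) -> str:
    ex, ev = -40.0, 14.0   # ego: position [m], speed [m/s]
    oy, ov = -35.0, 12.0   # opponent: position [m], speed [m/s]
    t = 0.0
    while t < 10.0:
        if reaction == "brake" and t > 1.5:      # driver brakes after warning
            ev = max(0.0, ev - 6.0 * dt)
        ex += ev * dt
        oy += ov * dt
        # both vehicles inside the conflict zone at the same time?
        if abs(ex) < 2.0 and abs(oy) < 2.0:
            return "collision"
        t += dt
    return "no collision"

for reaction in ("none", "brake"):
    print(reaction, "->", simulate(reaction))
```

With these assumed values, the unbraked run ends in a collision while braking 1.5 s into the approach stops the ego vehicle just short of the conflict zone.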
4 Scenario Selection and System Development
As shown in the previous section, an assistance system shall prevent the driver from committing crucial mistakes that inevitably cause critical situations. Within the previously mentioned accident types turn across path and turn into/straight crossing path there exist several different scenarios. As systems shall not address only single scenarios, similar ones have been grouped to be covered by the same assistance function. The most relevant basic scenario groups are therefore:
Fig. 6. Left turn path (and collision with oncoming traffic)
Fig. 7. Straight crossing path (and collision with lateral traffic)
Fig. 8. Red-light crossing (and collision with other road users)
The red-light crossing scenario is not classified under any accident type and can involve multiple opponents such as pedestrians, other cars or railway trains. Even though it is not recorded as a separate accident type in the statistics, accidents caused by not adhering to traffic lights are also worth analysing, and the system development for assistance with traffic lights can benefit from the other situations. The systems will have to work within these basic scenarios and similar ones. These scenarios will be used to develop: the system functionality, the controller algorithm, the sensor system requirements and the Human Machine Interface (HMI). The development process of the intersection safety systems will follow a top-down approach. First, the system functionality has to be defined. The controller algorithm itself will be derived from a possible function. In order to cover all addressed scenarios, the first controller design will be developed independently of
sensor systems available in the near future. Simulation will be used to test the algorithms. The first implementation and testing will be done with Matlab/Simulink©; thus, no further knowledge of the sensor system is needed. The design of the HMI is directly connected to the system functionality. The main focus is to inform and warn the driver in such a manner that the driver can react quickly and appropriately. This task is quite complex: because the relatively high speeds require fast reactions, the HMI must provide an intuitively comprehensible warning. The HMI concept will be tested in the driving simulator.
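As an illustration of the kind of controller rule such a top-down process could produce — not the project's actual algorithm — a simple sketch might warn when both vehicles are predicted to reach the conflict point within the same short time window, and only while the driver can still react. The `reaction_time` and `window` parameters are assumed placeholders.

```python
# Hedged sketch of a sensor-independent warning rule: compare the
# predicted arrival times of ego vehicle and crossing opponent at
# the conflict point. All parameter values are illustrative.

def warn(d_ego: float, v_ego: float, d_opp: float, v_opp: float,
         reaction_time: float = 1.0, window: float = 1.0) -> bool:
    """d_* distances [m] to the conflict point, v_* speeds [m/s]."""
    if v_ego <= 0.0 or v_opp <= 0.0:
        return False
    t_ego = d_ego / v_ego
    t_opp = d_opp / v_opp
    conflict = abs(t_ego - t_opp) < window   # arrivals nearly coincide
    in_time = t_ego > reaction_time          # warning can still be acted on
    return conflict and in_time

print(warn(40.0, 14.0, 35.0, 12.0))
```

Such a rule needs only distances and speeds, which is why it can be designed and tested in simulation before any concrete sensor system is fixed.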
Fig. 9. Impression of virtual surrounding
The specification for the driving simulation itself can also be derived from the intersection scenarios. Approaching the intersection, the kinaesthetic and optic feedback has to be realistic at low speeds. At intersections braking until a complete stop reaches in standard situations values up to -4ms-2, which significantly differs to braking in highway traffic. Thus, a dynamic driving simulator has to be used in order to have a realistic impression of driving manoeuvres e.g. including the last twitch when reaching a stop.
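To put the −4 m/s² figure in perspective, a short worked computation shows the motion envelope the simulator must reproduce; the 50 km/h approach speed is an assumed example, not taken from the text.

```python
# Illustrative arithmetic for the quoted braking level: time and
# distance to a complete stop at constant deceleration.

def braking_time_and_distance(v0_kmh: float, decel: float) -> tuple[float, float]:
    """Time [s] and distance [m] to stop from v0_kmh [km/h] at decel [m/s^2]."""
    v0 = v0_kmh / 3.6
    return v0 / decel, v0 ** 2 / (2.0 * decel)

t_stop, d_stop = braking_time_and_distance(50.0, 4.0)
print(f"{t_stop:.1f} s, {d_stop:.1f} m")   # roughly 3.5 s over about 24 m
```

A sustained deceleration of this magnitude over several seconds is precisely what a fixed-base simulator cannot convey, hence the need for a dynamic platform.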
The optical impression is typically based on an urban environment with houses, streets, signs, etc. Therefore, the shape of these geometric elements has to be modelled in detail. Both aspects were considered while adapting the BMW driving simulator for the development of assistance systems for intersections. Figure 9 shows an example of the virtual image while approaching an intersection. Within this virtual surrounding, test persons can evaluate the system.

The key to system functionality is sensing technology. On the one hand, the movement of relevant vehicles in the surroundings has to be considered; on the other hand, signs and traffic lights. Within intersection scenarios, sight obstructions hinder reliable surveillance. In order to solve this problem, two technologies will be considered: autonomous onboard sensors as well as communication systems (car to car, car to infrastructure). Starting with the hypothesis of ideal sensors, the roadmap from existing technology up to ideal sensors will be examined in reverse. The system functionality resulting from the sensor capabilities can be identified, and thus the minimum sensor requirements can be derived. Additional communication technologies will be investigated regarding their potential. The final result of the development will be a matrix that shows the system functionality depending on the surveillance capability.

In order to evaluate these systems, test drivers will have to assess them. This assessment can only be done in a reproducible environment. Since the developed system only applies to critical situations, a virtual environment allows testing without endangering the test persons in real traffic. Within the driving simulator, the combination of a risk-free and realistic environment is possible. Following the top-down approach, the simulator also has the advantage of ideal sensor modelling.
Within a virtual world, functionality and driver behaviour can be tested in almost any traffic situation, including non-ideal weather conditions such as fog or snow and limited visibility into crossing streets.
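The functionality-versus-surveillance matrix mentioned above could, for example, be represented as a simple lookup table. The capability levels and function assignments below are invented placeholders for illustration, not project results.

```python
# Illustrative sketch of a functionality-vs-surveillance matrix:
# which assistance function becomes feasible at which sensing level.
# All entries are hypothetical placeholders.

FUNCTIONALITY_MATRIX = {
    "onboard sensors, line of sight": "warning for visible crossing traffic",
    "onboard + car-to-car":           "warning despite sight obstructions",
    "onboard + infrastructure":       "red-light and right-of-way assistance",
    "ideal sensors (simulator)":      "full intersection assistance",
}

def supported_function(capability: str) -> str:
    """Look up the assistance function a given surveillance level enables."""
    return FUNCTIONALITY_MATRIX.get(capability, "no assistance")

print(supported_function("onboard + car-to-car"))
```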
5 Conclusion
The major goal of the InterSafe project is the reduction of intersection-related accidents. In order to accomplish this goal, the BMW Group develops active safety systems using a top-down approach. This starts with an accident analysis to identify critical traffic scenarios. Based on the identified scenarios of turning at and crossing an intersection, the algorithm development begins with simulation studies. This allows the development of surveillance technologies such as autonomous onboard sensors or communication systems in parallel to the controller development. The system requirements can be derived from the simulation studies. The major tool during the development phase is the dynamic BMW Group driving simulator. This simulator allows the evaluation of complete systems, including sensing technology, controller and human-machine interface, within a virtual environment. Major advantages of this approach are the ability to test critical scenarios with test persons without endangering them, as well as reproducibility. Finally, the effect of the complete system can be assessed as a function of the surveillance capability.
References
[1] Meitinger, K.-H.; Ehmanns, D.; Heißing, B.: "Systematische Top-Down-Entwicklung von Kreuzungsassistenzsystemen", VDI Berichte 1864, 10.2004
[2] Meitinger, K.-H.; Ehmanns, D.; Heißing, B.: "Kreuzungsassistent", Doktorandenkolloquium, 11.2003
[3] DESTATIS – Statistisches Bundesamt Wiesbaden 2004: Verkehrsunfälle – Strukturdaten 2003
[4] GIDAS – TU Dresden und MH Hannover: Datenbank 2004
[5] Hopstock, M.: "Aktive Sicherheit und Unfallanalyse", Diplomarbeit, BMW Group, 06.2004
Matthias Hopstock, Dr. Dirk Ehmanns, Dr. Helmut Spannheimer
BMW Group Forschung und Technik GmbH
80788 München
Germany
[email protected]
[email protected]

Keywords: intersection safety, driver assistance, accident analysis, driving simulator
Appendix A List of Contributors
List of Contributors Abele, J. 49 Adomat, R. 185 Andersson, G. 227 Arndt, M. 323 Arvanitis, T.N. 353 Ban, T. 447 Bauer, C. 435 Baum, H. 49 Becker, L.-P. 71 Bennett, J. 289 Beutner, A. 169 Bodensohn, A. 149 Brockherde, W. 425 Buettner, C. 243 Buhrdorf, A. 289 Bußmann, A. 425 Cheng, S. 381 Cheung, E. 381 Choi, T. 381 Constantinou, C.C. 353 Cramer, B. 311 Dahlmann, G. 413 Darmont, A. 401 de Boer, G. 371 Debski, A. 71 Degenhardt, D. 71 Diebold, J. 185 Diels, R. 401 Dietmayer, K. 197 Dobrinski, H. 289 Egnisaban, G. 381 Ehmanns, D. 521 Engel, P. 371 Ernsberger, C. 299 Färber, G. 3 Fürstenberg, K. 197, 215, 493, 505 Geduld, G. 185 Geißler, T. 49 Ghosh, S. 169 Goronzy, S. 335 Graf, Th. 61 Hammond, J. 459
Hanson, C. 243 Haueis, M. 149 Hedenstierna, N. 227 Heenan, A. 505 Hering, S. 413 Hillenkamp, M. 71 Ho, F. 381 Hoetzel, J. 115 Hoffmann, I. 79, 159 Holve, R. 335 Hölzer, G. 413 Hopstock, M. 521 Hosticka, B.J. 425 Hung, W. 381 Ina, T. 447 Justus, W. 197 Kai, K. 257 Kämpchen, N. 197 Kawashima, T. 447 Kerlen, C. 49 Kibbel, J. 197 Klug, M. 185 Kluge, T. 505 Knoll, P.M. 85 Kolosov, O. 289 Kompe, R. 335 Kormos, A. 243 Krisch, I. 425 Krüger, S. 23, 49 Kvisterøy, T. 227 Lüdtke, O. 289 Lui, B. 381 Mäckel, R. 149 Maddalena, S. 401 Mangente, T. 381 Matsiev, L. 289 Minarini, F. 487 Mitsumoto, M. 257 Möhler, N. 169 Mühlenberg, M. 97 Nakagawa, K. 257 Nakamura, T. 447 Ng, A. 381
Nitta, C. 425 Ochs, T. 311 Pelin, P. 227 Polychronopoulos, A. 169 Praefcke, W. 371 Pulvermüller, M. 149 Rettig, R. 435 Reze, M. 459 Rotaru, C. 61 Rüssler, B. 493 Sans Sangorrin, J. 115 Sauer, M. 323 Schäfer, B.-J. 85 Schamberger, M. 185 Schichlein, H. 311 Schier, J. 269 Schlemmer, H. 129 Schreiber, T. 149 Schulz, R. 197 Schulz, W.H. 49 Schumann, B. 311 Schwarz, U. 413 Shimomura, O. 447 Shooter, C. 505 Sohnke, T. 115 Solzbacher, F. 23 Spannheimer, H. 521 Takeda, K. 447 Thiem, J. 97 Thiemann-Handler, S. 311 Thurau, J. 473 Tong, F.-W. 381 Topham, D.A. 353 Tucker, M. 505 Uhrich, M. 289 Vogel, H. 129 Vogelgesang, B. 435 Ward, D.D. 353 Wertheimer, R. 425 Willig, R. 269 Wipiejewski, T. 381 Yau, S.-K. 381 Zhang, J. 61
Appendix B List of Keywords
List of Keywords 3D-MEMS 459, 473 4w 269 α-Si 129 ACC 159, 185, 257, 269 acceleration sensor 269 accelerometer 43, 459, 473 active safety 185 ad hoc networks 353 adaptive cruise control 185, 257 aeronautics 129 air-conditioning 323 algorithm 227 amperometric 311 angular velocity sensor 269 APIA 185 assistant 159 automotive 129, 323 automotive application 459, 473 automotive camera 425 automotive image sensor 401 automotive MEMS 289 automotive sensor 435 bias 227 blind spot detection 71 braking distance 185 bulk micromachining 413, 459, 473 camera 159 camera-based 71 carbon dioxide 323 CMOS camera 401, 425 color image processing 61 communication 493 competitive analysis 23 confidence measures 335 construction areas 61 cross-system application 269 decision system 169 deep etching 43 demand controlled ventilation 323 deployment 23 differentiation 23
digital 227 distronic 185 DRIE 459, 473 DRIE etching 459 drive-by-wire 185 driver assistance 61 driver assistance accident analysis 521 driver assistance systems 59, 71, 97, 185 driving simulator 521 dual-band camera 129 EAS 269 emergency braking 185 eSafety 353 ESP 227, 269 exhaust gas sensor 311 fiber optic transceiver 381 FM-pulse doppler 257 FMCW 257 foundry process 413 frequency modulated continuous wave 257 FSRA 257 full speed range ACC 185, 257 fully programmable 401 gas sensor 323 giant magnetoresistant 435 GMR 435 gold 311 GUIDE 335 gyro 227 gyroscopes 43 HARMEMS 459 headlamp IR sensor 185 headlight detection 71 HHC 269 high dynamic range 401 high sensitivity 401 high speed photodetector 381 high-dynamic range camera 425 HIS 185 hydrogen 311 image fusion 129 image processing 71 image sensor 185, 425
in-field test modes and integrity checks 401 inertial sensor cluster 269 inertial sensors 43 infotainment system 381 infrared sensor 185, 323 innovation networks 23 integrated lens 401 integrated pressure sensor 413 integrated tool-based UI design 335 integration 269 intelligent cruise assistance 97 intelligent transportation systems 353 intersection 493, 505, 521 key IR sensor 185 KIS 185 lane departure warning 185 lane detection 197 lane keeping support 169, 185 laser 159 laserscanner 493, 505 LDW 185 LIDAR 197 life time prediction 149 LKS 185 long range radar 185 low g accelerometer 459 low speed following 257 LSF 257 LWIR 129 magnetoresistive sensor 435 map-building 505 market 23, 43 market forecast 43 MEMS 227, 413 micro system technology 289 microbolometer 129 micromechanics 149 microsystems application 23 mid-range sensor 185 millimeter-wave radar 257 modular concept 269 MSM photodetector 381 multi-use of sensors 185
multilayer laserscanner 197 navigation 227 near infra-red 401 NIR 129 NOx 311 object recognition 505 oil analysis 149 oil condition 289 oil level sensor 289 OOV 335 OSGi 371 oxygen partial pressure 311 passive 159 passive safety 185 PCS optical fiber link 381 perception layer 169 platinum 311 polarization-twisting cassegrain 257 pre-crash detection 185 pre-crash safety 257 precrash-sensing 115 prediction 23 production capacities 23 pulse doppler 257 R744 323 radar 115, 159, 185 relevant scenarios 493 reliability 149 remote maintenance 371 remote software download 371 road safety 353 roadway detection 197 rollover 227 ROM 269 RoSe 269 routing protocols 353 safety 459, 473, 505 safety systems 185 SDM 227 sensor 435 sensor and actuator control 169 sensor and data fusion 23 sensor application 435
sensor fusion 97 sensor technology 185 sensor-level features 505 short range radar sensor 185 signal accuracies and characteristics 269 signal conditioning 459, 473 silicon surface micromachining 269 situation analysis 115, 169 solid electrolyte 311 speaker adaptation 335 speech recognition 335 speed sensor 435 spoken dialogue 335 stability 227 stereo 159 stereo from motion 71 stop-and-go support 185 stopping distance 185 surface micromachining 459 SWIR 129 system design 169 system monitoring 149 system-on-chip 413 technological challenges 23 telematics gateway 371 thermal imager 129 titration 311 tuning fork 289 uncooled CMT 129 VCSEL FOT 381 vehicle to vehicle communications 353 video 159, 493, 505 video-based 71 vision based systems 185 vision enhancement 185 VSC 227 wafer bonding 413 yaw rate sensor 269 yellow lane markings 61