Smart Inspection Systems Techniques and Applications of Intelligent Vision
Full catalogue information on all books, journals and electronic products can be found on the Elsevier Science homepage at: http://www.elsevier.com
ELSEVIER PUBLICATIONS OF RELATED INTEREST

JOURNALS:
Advanced Engineering Informatics
Automatica
Computers in Industry
Control Engineering Practice
Engineering Applications of AI
Image and Vision Computing
International Journal of Machine Tool and Manufacture
Robotics and Autonomous Systems
Robotics and Computer-Integrated Manufacturing
Smart Inspection Systems Techniques and Applications of Intelligent Vision
D.T. Pham Manufacturing Engineering Centre Cardiff University Wales, UK
R.J. Alcock Metropolis Informatics S.A. Thessaloniki, Greece
ACADEMIC PRESS
An imprint of Elsevier Science
Amsterdam • Boston • London • New York • Oxford • Paris • San Diego • San Francisco • Singapore • Sydney • Tokyo
This book is printed on acid-free paper

Copyright 2003, Elsevier Science Ltd. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Academic Press, An Imprint of Elsevier Science, 84 Theobald's Road, London WC1X 8RR, UK http://www.academicpress.com

Academic Press, An Imprint of Elsevier Science, 525 B Street, Suite 1900, San Diego, California 92101-4495, USA http://www.academicpress.com

ISBN 0-12-554157-0

A catalogue record for this book is available from the British Library
Printed and bound in Great Britain by Biddles Ltd, www.biddles.co.uk
03 04 05 06 07 BL 9 8 7 6 5 4 3 2 1
Preface
Automated Visual Inspection (AVI) is a mechanised form of quality control normally achieved using one or more cameras connected to a computer system. Inspection is carried out to prevent unsatisfactory products from reaching the customer, particularly in situations where failed products can cause injury or even endanger life. Many humans are engaged in inspection tasks but, due to factors such as tiredness and boredom, their performance is often unreliable. In some cases, human inspection is not even possible when the part to be inspected is small, the production rates are high or there are hazards associated with inspecting the product. Therefore, AVI is gaining increasing importance in industry. Simultaneously, a growing amount of research has been aimed at incorporating artificial intelligence techniques into AVI systems to increase their capability.

The aim of this book is to enable engineers to understand the stages of AVI and how artificial intelligence can be employed in each one to create "smart" inspection systems. First, with the aid of examples, the book explains the application of both conventional and artificial intelligence techniques in AVI. Second, it covers the whole AVI process, progressing from illumination, through image enhancement, segmentation and feature extraction, to classification. Third, it provides case studies of implemented AVI systems and reviews of commercially-available inspection systems.

The book comprises seven chapters. Chapter One overviews the areas of AVI and artificial intelligence. The introduction to AVI discusses the requirements of AVI systems as well as its financial benefits. Seven popular artificial intelligence techniques are explained, namely expert systems, fuzzy logic, inductive learning, neural networks, genetic algorithms, simulated annealing and Tabu search.

Chapter Two covers image acquisition and enhancement. The key factor in image acquisition is the lighting. Common lighting system designs and light sources are detailed. The image enhancement methods presented range from traditional smoothing methods to the latest developments in smart enhancement.
Image segmentation is described in Chapter Three. A number of common segmentation techniques are covered, from the established methods of thresholding and edge detection to advanced segmentation based on artificial intelligence.

Chapter Four gives methods for feature extraction and selection. Several features are discussed, including first- and second-order features as well as window and object features. Statistical and classifier-based methods for selecting the optimal feature set are also described.

Classification techniques are the subject of Chapter Five. Four popular types of classifier are explained: Bayes' theorem classifiers, rule-based systems, neural networks and fuzzy classifiers. Synergistic classification, which combines the benefits of several individual classifiers, is also covered.

Chapter Six describes three applications of smart vision that have been implemented. The applications specified are the inspection of car engine seals and wood boards as well as the classification of textured images.

Finally, Chapter Seven reviews commercially-available inspection systems. Features of state-of-the-art vision systems are advanced cameras, intuitive development environments, intelligent algorithms and high performance. Many commercial inspection systems employ artificial intelligence to make the systems more effective, flexible and simple.

Each chapter concludes with exercises designed to reinforce and extend the reader's knowledge of the subject covered. Some of the exercises, based on a demonstration version of the ProVision image processing tool from Siemens contained in the CD ROM supplied with the book, also give the reader an opportunity to experience the key techniques discussed here.

Much of the work in this book derives from the AVI and artificial intelligence work carried out in the authors' award-winning Manufacturing Engineering Centre over the past 12 years. Several present and former members of the Centre have participated in AVI projects. They include Dr. E. Bayro-Corrochano, Dr. B. Cetiner, Dr. P.R. Drake, Mr. N.R. Jennings, Dr. M.S. Packianather, Dr. B. Peat, Dr. S. Sagiroglu and Dr. M. Yang, who are thanked for their contributions. Dr. B. Peat and Mr. A.R. Rowlands are also thanked for proof-reading the final manuscript.

The authors would like to acknowledge the financial support received for the work from the Engineering and Physical Sciences Research Council, the Department of Trade and Industry, the Teaching Company Directorate, the European Commission (BRITE-EURAM Programme) and the European Regional Development Fund (Knowledge-Based Manufacturing Centre, Innovative Technologies for Effective Enterprise and SUPERMAN projects). The work was performed in collaboration
with industrial companies, including Europressings Ltd. (UK), Federal Mogul (UK), Finnish Wood Research Ltd. (Finland), Palla Textilwerke GMBH (Germany) and Ocas NV (Belgium). In particular, the authors wish to thank Dr. S. Baguley, Mr. M. Chung, Mr. G. Hardern, Mr. B. Holliday, Mr. J. Mackrory, Mr. U. Seidl and their colleagues at Siemens for the assistance given to the Centre.

The authors would also like to acknowledge the partners who have worked with them over the years in collaborative research efforts in AVI. They include Prof. P. Estevez (University of Chile), Mr. J. Gibbons (Ventek Inc., Eugene, Oregon), Mr. T. Lappalainen (VTT Technical Research Centre, Finland) and Dr. R. Stojanovic (University of Patras, Greece). Dr. Alcock would personally like to express appreciation for the support of Metropolis Informatics S.A. (Thessaloniki, Greece), Prof. Y. Manolopoulos (Aristotle University of Thessaloniki), Dr. A.B. Chan and Dr. K. Lavangnananda.

Finally, the authors wish to thank Mr. N. Pinfield, Mr. H. van Dorssen, Mrs. L. Canderton and their colleagues at Academic Press and Elsevier Science for their expert support in the production of this book.

D.T. Pham
R.J. Alcock
Contents
1 Automated Visual Inspection and Artificial Intelligence 1
1.1 Automated Visual Inspection 2
1.1.1 Practical Considerations for AVI Systems 4
1.1.2 Financial Justifications for Automated Inspection 6
1.2 Applications of AVI 7
1.3 Artificial Intelligence 11
1.3.1 Expert Systems 11
1.3.2 Fuzzy Logic 12
1.3.3 Inductive Learning 14
1.3.4 Neural Networks 16
1.3.5 Genetic Algorithms, Simulated Annealing and Tabu Search 22
1.4 Artificial Intelligence in AVI 25
1.5 Summary 26
References 27
Problems 33
2 Image Acquisition and Enhancement 35
2.1 Image Acquisition 35
2.1.1 Lighting System Design 38
2.1.2 Light Sources 43
2.1.3 Choosing Optimum Lighting 46
2.1.4 Other Image Acquisition Issues 48
2.1.5 3D Object Inspection 49
2.2 Image Enhancement 51
2.2.1 Low-pass Filtering 53
2.2.2 Morphology 55
2.2.3 Artificial Intelligence for Image Enhancement 60
2.3 Discussion 63
2.4 Summary 63
References 64
Problems 67
3 Segmentation 69
3.1 Edge Detection 70
3.2 Thresholding 75
3.3 Region Growing 79
3.4 Split-and-merge 81
3.5 Window-based Subdivision 82
3.6 Template Matching 84
3.7 Horizontal and Vertical Profiling 85
3.8 AI-based Segmentation 85
3.9 Post-Processing of Segmented Images 87
3.9.1 Morphology 87
3.9.2 Hough Transform 87
3.9.3 AI-based Post Processing 88
3.10 Discussion 88
3.11 Summary 89
References 89
Problems 92
4 Feature Extraction and Selection 95
4.1 Window Features 99
4.1.1 First-Order Features 99
4.1.2 Fourier Features 102
4.1.3 Co-occurrence Features 104
4.2 Object Features 107
4.2.1 Object Shape Features 107
4.2.2 Object Shade Features 110
4.3 Features from Colour Images 110
4.4 Feature Selection 111
4.4.1 Statistical Approach 115
4.4.2 Classification-based Approach 117
4.4.3 Other Approaches 120
4.5 Discussion 121
4.6 Summary 122
References 122
Problems 125
5 Classification 129
5.1 Bayes' Theorem Classifiers 131
5.2 Rule-Based Classification 134
5.2.1 Fuzzy Rule-Based Classification 137
5.3 MLP Neural Network Classification 138
5.4 Synergistic Classification 142
5.4.1 Synergy using a Combination Module 143
5.4.2 Tree-based Synergistic Classifiers 148
5.5 Discussion 149
5.6 Summary 150
References 151
Problems 153
6 Smart Vision Applications 157
6.1 Inspection of Car Engine Seals 157
6.1.1 System Set-up 158
6.1.2 Inspection Process 160
6.1.3 Subsequent Work 164
6.2 Inspection of Wood Boards 165
6.2.1 System Set-up 166
6.2.2 Inspection Process 168
6.2.3 Subsequent Work 177
6.2.4 Colour and Computer Tomography Inspection of Wood 179
6.3 Texture Classification 180
6.3.1 Classification using First-order Features 180
6.3.2 Classification using Second-order Features 181
6.3.3 Other Recent Work on Texture Classification 186
6.4 Summary 186
References 186
Problems 190
7 Industrial Inspection Systems 193
7.1 Image Processing Functions of Inspection Systems 193
7.2 State-of-the-Art Vision Systems 194
7.2.1 Advanced Cameras 194
7.2.2 Intuitive Development Environments 197
7.2.3 Intelligent Algorithms 198
7.2.4 High Performance 202
7.3 Future of Industrial Inspection Systems 204
7.4 Summary 206
References 206
Problems 208
Appendix 209
Author Index 213
Index 219
Chapter 1 Automated Visual Inspection and Artificial Intelligence
Inspection is carried out in most manufacturing industries to ensure that low quality or defective products are not passed to the consumer. In financial terms, inspection is necessary because consumers who purchase unsatisfactory products are less likely to make a repeat purchase. More importantly, in the aerospace, automotive and food industries, failed products can cause injury or even fatal accidents. Many humans are engaged in inspection tasks but due to factors such as tiredness and boredom, their performance is often less than satisfactory. In some cases, human inspection is not even possible when the part to be inspected is very small or the production rates are very high. Thus, automated inspection is required. Another area where automated inspection is highly desirable is in the inspection of dangerous materials. These include inflammable, explosive or radioactive substances.

Automated Visual Inspection (AVI) is the automation of the quality control of manufactured products, normally achieved using a camera connected to a computer. AVI is considered to be a branch of industrial machine vision [Batchelor and Whelan, 1997]. Machine vision requires the integration of many aspects, such as lighting, cameras, handling equipment, human-computer interfaces and working practices and is not simply a matter of designing image processing algorithms. Industrial machine vision contrasts with high-level computer vision, which covers more theoretical aspects of artificial vision, including mimicking human or animal visual capabilities. Figure 1.1 shows a breakdown of artificial vision.

In modern manufacturing, quality is so important that AVI systems and human inspectors may be used together synergistically to achieve improved quality control [Sylla, 1993]. The machine vision system is used to inspect a large number of products rapidly. The human inspector can then perform slower but more detailed inspection on objects that the machine vision system considers to be borderline cases.
Figure 1.1 Examples of machine and computer vision (artificial vision divides into machine vision, which includes vision-based robot control and AVI, and computer vision, which includes scene understanding and mimicking human vision)
In AVI, many conventional image-processing functions, such as thresholding, edge detection and morphology, have been employed. However, much recent work has focussed on incorporating techniques from the area of artificial intelligence into the process. This book describes common artificial intelligence techniques and how they have been used in AVI. This chapter gives the typical stages of the AVI process, explains common artificial intelligence techniques and outlines areas where artificial intelligence has been incorporated into AVI.
1.1 Automated Visual Inspection

AVI operates by employing a camera to acquire an image of the object being inspected and then utilising appropriate image processing hardware and software routines to find and classify areas of interest in the image. Figure 1.2 shows the set-up of an AVI system based around a central computer. In this system, the computer controls the camera, lighting and handling system. It also takes images acquired by the camera, analyses them using image processing routines and then issues an appropriate action to be performed by the handling system. Images from the
inspected objects and the number of parts accepted and rejected may be displayed on a monitor or Visual Display Unit (VDU).
Figure 1.2 Typical AVI system structure
Generally, AVI involves the following processing stages (Figure 1.3):

• Image acquisition to obtain an image of the object to be inspected;
• Image enhancement to improve the quality of the acquired image, which facilitates later processing;
• Segmentation to divide the image into areas of interest and background. The result of this stage is called the segmented image, where objects represent the areas of interest;
• Feature extraction to calculate the values of parameters that describe each object;
• Classification to determine what is represented by each object.
Based on the classification, the parts are passed or failed. Accepted parts may then be graded. Another possible use of the classification information is as feedback for the production process. For example, it may be noticed that a particular type of defect is occurring frequently. This indicates that one of the machines or processes may not be operating optimally.
Figure 1.3 General AVI process structure
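To make the flow of these five stages concrete, the sketch below strings them together in Python using the OpenCV library (not the ProVision tool supplied with the book). The file name, threshold value and area limit are invented for illustration, and the closing size rule merely stands in for the trained classifiers described in Chapter Five.

import cv2

# Image acquisition: read a stored image (a camera grab in production).
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Image enhancement: smooth the image to suppress sensor noise.
smoothed = cv2.GaussianBlur(image, (5, 5), 0)

# Segmentation: a fixed threshold separates areas of interest from background.
_, binary = cv2.threshold(smoothed, 128, 255, cv2.THRESH_BINARY)

# Feature extraction: describe each segmented object by simple parameters.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    area = cv2.contourArea(contour)
    # Classification: a trivial size rule stands in for a trained classifier.
    print("REJECT" if area > 2500 else "ACCEPT", area)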
1.1.1 Practical Considerations for AVI Systems

Before implementing an AVI system, it is useful to consider some of the practical advice of Batchelor and Whelan [1997]:
System concept. AVI systems should not be developed just for the sake of doing so. If a cheaper or easier alternative solution to the problem is available, then this is preferable. Also, neither humans nor vision systems will achieve 100% accuracy consistently. Thus, considering the importance of quality, the best solution may be to use humans and machines together.
Requirements. It is important to start AVI system development with a clear system specification, detailing the customer requirements.
Design. The system should be designed to be as simple as possible. This is called the principle of Occam's razor. If two systems are developed with the same performance, the one with the simplest implementation should be chosen. The justification for this is that a simple system has fewer components that can fail.
Implementation. A larger part of the cost of the system development is spent on making the system work in a factory environment than on developing the image processing routines.
Visualisation. It is beneficial to attach a VDU to the inspection system to inform personnel of its operation. Users and managers feel more confident with a system when they can visualise what it is doing.

The general requirements of an AVI system are that it should be:

Accurate. It should improve upon human capabilities. The performance of human inspectors is usually well under 100%.
Fast. The system should be able to work in real time. This is very important because objects in production lines arrive for inspection in rapid succession.
Robust. The system should be insensitive to industrial environments. Such factors as variable light, vibrations and dust must be taken into consideration.
Complete. The system should be able to identify, as well as locate, defects. The grade of an object can depend not only on the number and size of defects but also their type and location. In addition, the system should accumulate and make available statistical information about its operation for performance analysis purposes. If a defect, which is caused during production, occurs frequently then the system should notify the workers in the factory to correct the problem.

Flexible. Ideally, the system should have some user-configurable modules so that it can be easily transferred from one product or production line to another. However, access to user-configurable parts of the system should be rigorously controlled and monitored.

Reliable. If failure of the system is detected, a backup system or alarm will be required.
Maintainable. The inspection equipment should be arranged so that all parts may easily be accessed. Also, computer programs should be written so that they are readable and easy to understand. Program code should contain a large number of comments and be logically structured.
Cost effective. The cost of developing and running the system should be more than compensated for by its economic benefits. Often, the major cost in the development of an inspection system is not the hardware but the cost of employing developers to write dedicated software. However, considering the importance of quality in today's marketplace for acquiring and maintaining customers, the payback time for AVI systems can be short.
1.1.2 Financial Justifications for Automated Inspection
The UK Industrial Vision Association has produced a list of twenty-one financial justifications for using machine vision, which is available upon request [UKIVA, 2002]. Cost benefits can be divided into three broad categories:

Cost of quality. Suppliers' quality can be monitored as well as the quality of finished products. Increased quality control should reduce the total cost of repair work required during the product guarantee.

Cost of materials. The production of scrap materials will be reduced. Also, parts can be taken out of the production process as soon as they are found to be faulty so that good quality material further down the production line is not added to the faulty part.

Cost of labour. This is generally a negligible effect and not a key motivator in the adoption of AVI.
One of the problems in the widespread acceptance of automated inspection systems is that companies who have installed them are unwilling to release details of their system or the cost savings because they want to maintain a competitive advantage over their rivals. For this reason, the European Union funded a programme to finance machine vision installations provided that the results of each project could be made public [Soini, 2000; HPCN-TTN, 2002]. Several machine vision projects have already yielded promising results. In the field of inspection, a bottle sorting system has been installed in a Finnish brewery. The system has increased line capacity from 4000 to 7000 crates per shift and gives cost savings equivalent to €50,000 per month.
1.2 Applications of AVI

Automated visual inspection has been applied to a wide range of products [Newman and Jain, 1995]. Due to the long set-up time for inspection systems, AVI is suited to tasks where a large number of products of the same type are made in a production-line environment. Table 1.1 gives examples of items for which automated inspection has been employed. Demant et al. [1999] detailed three main application areas of AVI systems in their practical guide to automated inspection:
Mark identification. Marks on products that need to be checked include bar codes and printed characters on labels. For the checking of characters, special Optical Character Recognition (OCR) algorithms are required to determine what words have been printed.

Dimension checking. Product dimensions can be verified as well as distances between sub-components on a part. Then, it can be determined if the part is manufactured to specification.
Presence verification. On an assembly, it is normally required to check if all parts are present and correctly positioned.

Whilst it is relatively simple for humans to look at an object and determine its quality, it is a complex process to write image processing algorithms to perform the same task. Inspection systems work well when the product they are dealing with does not have a complex shape and is relatively uniform from one product to the next. Therefore, AVI is well suited to the inspection of objects such as car engine seals. Natural products, such as wood and poultry, can differ significantly from one to the next. Thus, whilst it is possible to inspect natural products, more complex inspection set-ups are required.

The most mature of AVI tasks is that of inspecting printed circuit boards (PCBs) [Moganti et al., 1996]. There are several reasons for the popularity of AVI in this area. First, PCBs are man-made and so are regular. Second, their rate of production is very high, making their inspection virtually impossible for humans. Third, the quality requirements in the manufacture of PCBs are very high. Yu et al. [1988] found that the accuracy of human inspectors on multi-layered boards did not exceed 70%.
Object                        Authors                   Year
Apples                        Wen and Tao               1999
Automobile axles              Romanchik                 2001
Automotive compressor parts   Kang et al.               1999
Carpet                        Wang et al.               1997
Castings                      Tsai and Tseng            1999
Catfish                       Korel et al.              2001
Ceramic dishes                Vivas et al.              1999
Corn kernels                  Ni et al.                 1997
Cotton                        Tantaswadi et al.         1999
Electric plates               Lahajnar et al.           2002
Fish                          Hu et al.                 1998
Glass bottles                 Hamad et al.              1998
Grain                         Majumdar and Jayas        1999
Knitted fabrics               Bradshaw                  1995
Lace                          Yazdi and King            1998
Leather                       Kwak et al.               2000
LEDs                          Fadzil and Weng           1998
Light bulbs                   Thomas and Rodd           1994
Metal tubes                   Truchetet et al.          1997
Mushrooms                     Heinemann et al.          1994
Olives                        Diaz et al.               2000
Pistachio nuts                Pearson and Toyofuku      2000
Pizza topping                 Sun                       2000
Potatoes                      Zhou et al.               1998
Poultry                       Chao et al.               2002
Printed circuit boards        Chen et al.               2001
Pulp                          Duarte et al.             1999
Seeds                         Urena et al.              2001
Semiconductors                Kameyama                  1998
Solder joints                 Kim et al.                1999
Steel                         Wiltschi et al.           2000
Textiles                      Tolba and Abu-Rezeq       1997
Tiles                         Melvyn and Richard        2000
Web materials                 Hajimowlana et al.        1999
Weld                          Suga and Ishii            1998
Wood                          Pham and Alcock           1998

Table 1.1 Examples of automated inspection applications
Inspection methods for PCBs can be divided into contact and non-contact methods. Contact methods include electrical testing to find short circuits and open connections. AVI is a non-contact form of inspection. Due to the high quality requirements of PCB inspection, it is recommended that contact and non-contact testing methods be combined [Moganti et al., 1996].

Automated visual inspection tasks can be divided into four types, relating to the complexity of the images employed for the task: binary, grey-scale, colour and range [Newman and Jain, 1995]. When inspection tasks use binary images, the object generates a silhouette in front of a black or white background. Figure 1.4 shows an example of a binary image. For images acquired by a grey-scale camera, a simple fixed threshold is used to create the binary image. An advantage of using binary images is that it simplifies the requirements for image acquisition and lighting. In the simplest case, the image can be employed to determine the presence or absence of the part. However, typically, the image will be used to analyse the size, shape or position of the part. Then, it can be determined whether the part deviates significantly from its specifications.
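As a minimal sketch of this fixed-threshold step (the 2x2 image and the threshold value 128 are invented for illustration):

import numpy as np

# Grey-scale image as an array of intensities (values are illustrative).
grey = np.array([[12, 200],
                 [35, 250]], dtype=np.uint8)

# A fixed threshold creates the binary image: pixels brighter than the
# threshold become object (255) and the rest become background (0).
binary = np.where(grey > 128, 255, 0).astype(np.uint8)
print(binary)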
Figure 1.4 Example of a binary image

The use of grey-scale, or intensity, images widens the number of applications for automated inspection. In particular, it allows the inspection of surfaces for defects and the analysis of texture information. Figure 1.5 shows an example of a grey-scale image.
Figure 1.5 Example of a grey-scale image

Colour images are used in AVI when grey-scale images cannot give a suitable level of accuracy. Areas where colour adds important extra information include packaging, clothing and food inspection. Computers are more effective than humans at distinguishing between colours and "remembering" exactly what a colour looks like. In the past, grey-scale images have been utilised more than colour ones for industrial AVI for two reasons. First, there is less information to process. Second, grey-scale systems have been cheaper to purchase. However, technological advances have meant that grey-scale systems are increasingly being replaced by colour ones.

Inspection systems that use range information are useful for inspecting objects where 3D information is important. One method of 3D inspection is through the use of co-ordinate measuring machines (CMMs). However, CMMs are too slow for on-line inspection. Obtaining 3D information from 2D images is a complex computational problem.
1.3 Artificial Intelligence

Artificial intelligence (AI) involves the development of computer programs that mimic some form of natural intelligence. Some of the most common AI techniques with industrial applications are expert systems, fuzzy logic, inductive learning, neural networks, genetic algorithms, simulated annealing and Tabu search [Pham et al., 1998]. These tools have been in existence for many years and have found numerous industrial uses including classification, control, data mining, design, diagnosis, modelling, optimisation and prediction.
1.3.1 Expert Systems

Expert systems are computer programs embodying knowledge about a narrow domain for solving problems related to that domain [Pham and Pham, 1988]. An expert system usually comprises two main elements, a knowledge base and an inference mechanism. In many cases, the knowledge base contains several "IF THEN" rules but may also contain factual statements, frames, objects, procedures and cases. Expert systems based on rules are also called rule-based systems.

Figure 1.6 shows the typical structure of an expert system. First, domain knowledge is acquired from an expert. Second, facts and rules from the expert are stored in the knowledge base. Third, during execution of the system, the inference engine manipulates the facts and rules. Finally, the results of the inference process are presented to the operator in a user-friendly, often graphical, format.
Figure 1.6 Typical structure of an expert system (the expert or knowledge engineer supplies knowledge through knowledge acquisition into the knowledge base; the inference engine draws on the knowledge base and communicates with the user through a user interface)

An example of a rule that might be employed in inspection is:

if SIZE OF OBJECT > 25mm then REJECT
The inference mechanism manipulates the stored knowledge to produce solutions. This manipulation is performed according to a control procedure and search strategy. The control procedure may be either forward chaining or backward chaining, while the search strategy may be depth first, breadth first or best first. Most expert systems are nowadays developed using programs known as "shells". These are ready-made expert systems with inference and knowledge-storage facilities but without the domain knowledge. Some sophisticated expert systems are constructed with the help of "development environments". The latter are more flexible than shells in that they also provide means for users to implement their own knowledge representation and inference methods. Expert systems are now a very mature technology in artificial intelligence, with many commercial shells and development tools available to facilitate their construction. Consequently, once the domain knowledge to be incorporated in an expert system has been extracted, the process of building the system is relatively simple. The main problem in the development of expert systems is knowledge acquisition or the generation of the rules in the knowledge base.
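As an illustration of forward chaining, the fragment below repeatedly fires any rule whose IF part is satisfied by the working memory until no new facts appear. The two rules and the starting fact are invented and echo the object-size rule given above; a real shell would offer far richer knowledge representation and search.

facts = {"size_mm": 30}

# Each rule is an IF condition on the facts plus THEN conclusions.
rules = [
    (lambda f: f.get("size_mm", 0) > 25, {"oversize": True}),
    (lambda f: f.get("oversize"), {"decision": "REJECT"}),
]

# Forward chaining: keep firing rules until no rule adds a new fact.
changed = True
while changed:
    changed = False
    for condition, conclusions in rules:
        if condition(facts) and any(facts.get(k) != v for k, v in conclusions.items()):
            facts.update(conclusions)
            changed = True

print(facts)  # {'size_mm': 30, 'oversize': True, 'decision': 'REJECT'}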
1.3.2 Fuzzy Logic

A disadvantage of ordinary rule-based expert systems is that they cannot handle new situations not covered explicitly in their knowledge bases. These rule-based systems are unable to produce conclusions when such situations are encountered. Therefore, they do not exhibit a gradual reduction in performance when faced with unfamiliar problems, as human experts would. The use of fuzzy logic, which reflects the qualitative and inexact nature of human reasoning, can enable expert systems to be more resilient [Nguyen and Walker, 1999].

With fuzzy logic, the precise value of a variable is replaced by a linguistic description, the meaning of which is represented by a fuzzy set, and inferencing is carried out based on this representation. Knowledge in an expert system employing fuzzy logic can be expressed as qualitative statements or fuzzy rules. For instance, in the rule given previously, the value of a fuzzy descriptor, such as the word large, might replace the value ">25". Thus, the rule would become:

if SIZE OF OBJECT IS LARGE then REJECT

A reasoning procedure, known as the compositional rule of inference, enables conclusions to be drawn by generalisation (extrapolation or interpolation) from the qualitative information stored in the knowledge base. In rule-based expert systems, the compositional rule of inference is the equivalent of the modus-ponens rule:
if A then B
A is TRUE
=> B is TRUE

The key aspect of fuzzy logic is the membership function. Figure 1.7 gives examples of membership functions to describe object size. Here, the fuzzy descriptors used to characterise the size are small, medium and large. It can be seen that if an object is 30mm long, it is definitely large. However, if an object is 25mm long, it can be described as both medium and large to a certain extent. The membership functions used in the example are triangular. It is also possible to employ trapezoidal, curved or other shapes of membership functions.

One of the main problems in fuzzy logic is to determine effective membership functions. One system developed for this is called WINROSA. WINROSA, which has been employed commercially, is based on the fuzzy ROSA (Rule Oriented Statistical Analysis) technique. Fuzzy ROSA was developed at the University of Dortmund in Germany [Krone and Teuber, 1996]. The generated rules can be integrated into many existing fuzzy shells, such as those in the commercial packages DataEngine, FuzzyTech and Matlab.
Figure 1.7 Example fuzzy membership functions for object size (degree of membership of the small, medium and large descriptors plotted against size in mm, with the axis marked at 10, 20 and 30mm)
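The triangular functions of Figure 1.7 translate directly into code. The breakpoints below are read approximately from the figure and should be treated as assumptions; they reproduce the behaviour described in the text, where a 30mm object is definitely large while a 25mm object is partly medium and partly large.

def small(size_mm):
    # Full membership below 5mm, falling to zero at 15mm (assumed breakpoints).
    return max(0.0, min(1.0, (15.0 - size_mm) / 10.0))

def medium(size_mm):
    # Triangle peaking at 18.75mm, reaching zero at 10mm and 27.5mm (assumed).
    return max(0.0, 1.0 - abs(size_mm - 18.75) / 8.75)

def large(size_mm):
    # Zero below 20mm, rising to full membership at 30mm (assumed).
    return max(0.0, min(1.0, (size_mm - 20.0) / 10.0))

for size in (25.0, 30.0):
    print(size, small(size), medium(size), large(size))
# 25.0 -> medium about 0.29 and large 0.5; 30.0 -> definitely large (1.0)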
1.3.3 Inductive Learning

The acquisition of domain knowledge for the knowledge base of an expert system is generally a major task. In many cases, it has proved a bottleneck in the process of constructing an expert system. Automatic knowledge acquisition techniques have been developed to address this problem. Inductive learning, in this case the extraction of knowledge in the form of IF-THEN rules (or an equivalent decision tree), is an automatic technique for knowledge acquisition.

An inductive learning program usually requires a set of examples as input. Each example is characterised by the values of a number of attributes and the class to which it belongs. In the tree-based approach to inductive learning, the program builds a decision tree that correctly classifies the training example set. Attributes are selected according to some strategy (for example, to maximise the information gain) to divide the original example set into subsets. The tree represents the knowledge generalised from the specific examples in the set. Figure 1.8 shows a simple example of how a tree may be used to classify citrus fruit. It should be noted that if size were used at the root of the tree instead of colour, a different classification performance might be expected.
Figure 1.8 Decision tree for classifying citrus fruit (a colour test at the root node leads to size tests, whose small and large branches end in the leaf classes Mandarin, Orange, Lemon and Grapefruit)

In the rule-based approach, the inductive learning program attempts to find groups of attributes uniquely shared by examples in given classes and forms rules with the IF part as combinations of those attributes and the THEN part as the classes. After a new rule is formed, the program removes correctly classified examples from consideration and stops when rules have been formed to classify all examples in the training set.
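Read as code, one plausible rendering of the tree in Figure 1.8 is the nested test below. The figure only shows which attribute each node examines, so the split values (and the use of centimetres) are invented for illustration.

def classify_citrus(colour, size_cm):
    # Root node tests colour; a size node on each branch then
    # separates the two remaining classes (split points assumed).
    if colour == "orange":
        return "Mandarin" if size_cm < 7 else "Orange"
    return "Lemon" if size_cm < 9 else "Grapefruit"

print(classify_citrus("orange", 5))    # Mandarin
print(classify_citrus("yellow", 12))   # Grapefruit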
Several inductive learning algorithms and families of algorithms have been developed. These include ID3, AQ and RULES. The ID3 algorithm, developed by Quinlan [1983], produces a decision tree. At each node of the tree, an attribute is selected and examples are split according to the value that they have for that attribute. The attribute to employ for the split is based on its entropy value, as given in Equation (1.1). In later work, the ID3 algorithm was improved to become C4.5 [Quinlan, 1993].

Entropy(attribute) = -(P+ log2 P+) - (P- log2 P-)    (1.1)
where P+ is the proportion of examples that were correctly classified just using the specified attribute and P- is the proportion incorrectly classified. For implementation, 0.log(0) is taken to be zero.

The AQ algorithm, created by Michalski et al. [1986], uses the rule-based approach to inductive learning. Combinations of attributes and their values create rules. The rules are searched, from general to specific cases. A rule is considered to be more general if it has fewer conditions. A generated rule that does not misclassify any examples in the training set is kept. Thus, the derived rules give a 100% performance on the training set. To deal with numerical attributes, the attribute values need to be quantised so that they are like nominal attribute values. AQ has been through a number of revisions. In 1999, AQ18 was released [Kaufman and Michalski, 1999].

An extension to AQ that was designed to handle noise in the data is the CN2 algorithm [Clark and Niblett, 1989]. CN2 keeps rules that classify some examples incorrectly. It does not obtain a 100% accuracy on the training data but can give a better performance on unseen data. The formula used to assess the quality of a rule is:

(c + 1) / (c + i + n)    (1.2)

where c is the number of examples classified correctly by the rule, i is the number incorrectly classified by the rule and n is the number of classes.
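Both measures translate directly into code. The sketch below implements equations (1.1) and (1.2) exactly as written above; the example values at the end are invented.

import math

def entropy(p_plus, p_minus):
    # Equation (1.1): 0.log(0) is taken to be zero, as in the text.
    def term(p):
        return 0.0 if p == 0 else p * math.log2(p)
    return -term(p_plus) - term(p_minus)

def rule_quality(c, i, n):
    # Equation (1.2): c correct, i incorrect, n classes.
    return (c + 1) / (c + i + n)

print(entropy(0.9, 0.1))       # about 0.47: the attribute separates well
print(rule_quality(18, 2, 3))  # about 0.83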
Pham and Aksoy [1993; 1995a, b] developed the first three algorithms in the RULES (RULe Extraction System) family of programs. These programs were called RULES-1, 2 and 3. Later, the rule forming procedure of RULES-3 was improved by Pham and Dimov [1997a] and the new algorithm was called RULES-3 PLUS. Rules are generated and those with the highest 'H measure' are kept. The H measure is calculated as:

H = √(Ec/E) [2 - 2√((Eci/Ec)(Ei/E)) - 2√((1 - Eci/Ec)(1 - Ei/E))]    (1.3)
where: E is the total number of instances; Ec is the total number of instances covered by the rule (whether correctly or incorrectly classified); Eci is the number of instances covered by the rule and belonging to target class i (correctly classified); Ei is the number of instances in the training set belonging to target class i.

The first incremental learning algorithm in the RULES family was RULES-4 [Pham and Dimov, 1997b]. Incremental learning is useful in cases where not all the training data is known at the beginning of the learning process. RULES-4 employs a Short Term Memory (STM) to store training examples when they become available. The STM has a user-specified size called the STM size. When the STM is full, the RULES-3 PLUS procedure is used to generate rules. Alcock and Manolopoulos [1999] carried out experiments on RULES-4 to determine optimum values for the parameters. RULES-4 has three parameters that need to be set: the STM size, number of quantisation levels (Q) and the noise level (NL). It was seen that as the STM size increases, the performance improves. The most sensitive of the three parameters was Q. Choosing an appropriate value for Q has a large effect on classification accuracy.
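Transcribed into code, equation (1.3) with the four counts defined above becomes the short function below; the example counts are invented, and the function follows the reconstruction of equation (1.3) given here.

from math import sqrt

def h_measure(E, Ec, Eci, Ei):
    # Equation (1.3): E instances in total, Ec covered by the rule,
    # Eci covered and belonging to class i, Ei belonging to class i.
    return sqrt(Ec / E) * (2
                           - 2 * sqrt((Eci / Ec) * (Ei / E))
                           - 2 * sqrt((1 - Eci / Ec) * (1 - Ei / E)))

print(h_measure(E=100, Ec=20, Eci=18, Ei=25))  # about 0.23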
1.3.4 Neural Networks
There are many classifier architectures which are covered by the term Artificial Neural Networks (ANNs) [Pham and Liu, 1999]. These networks are models of the brain in that they have a learning ability and a parallel distributed architecture. Like inductive learning programs, neural networks can capture domain knowledge from examples. However, they do not archive the acquired knowledge in an explicit form such as rules or decision trees. Neural networks are considered a 'black box' solution since they can yield good answers to problems even though they cannot provide an explanation of why they gave a particular answer. They also have a good generalisation capability, as with fuzzy expert systems.

A neural network is a computational model of the brain. Neural network models usually assume that computation is distributed over several simple units called
neurons that are interconnected and operate in parallel (hence, neural networks are also called parallel-distributed-processing or connectionist systems).

The most popular neural network is the Multi-Layer Perceptron (MLP) which is a feedforward network: all signals flow in a single direction from the input to the output of the network. Feedforward networks perform a static mapping between an input space and an output space. The output at a given instant is a function only of the input at that instant. Recurrent networks, where the outputs of some neurons are fed back to the same neurons or to neurons in previous layers, are said to have a dynamic memory. The output of such networks at a given instant reflects the current input as well as previous inputs and outputs.

Neural networks "learn" a mapping by training. Some neural networks can be trained by being presented with typical input patterns and the corresponding target output patterns. The error between the actual and expected outputs is used to modify the strengths, or weights, of the connections between the neurons. This method of training is known as supervised training.
Figure 1.9 Artificial neuron (inputs 1 to N, each multiplied by a weight, are combined and passed through an activation function to produce the output)

The MLP is claimed to be the ANN with the nearest architecture to that of the brain. It consists of a number of artificial neurons. Figure 1.9 displays a neuron in a MLP. The output of each neuron is determined by the sum of its inputs and an activation function. In the figure, the N inputs to the neuron are labelled from 1 to N. These are multiplied by weights 1 to N, respectively, and combined (normally by
summation). By passing the result of the combination through an activation function, the final output of the neuron is derived.

In the MLP, the neurons are linked together by weighted connections. An example of this is shown in Figure 1.10, where circles represent neurons and lines represent weighted connections. The example network has three layers: an input layer, an output layer and an intermediate or hidden layer. Neurons called bias neurons, that have a constant output of one, are also included. The determination of optimal values for the weights is the learning phase of the network. The learning algorithm most commonly utilised to determine these weights is back-propagation (BP). This algorithm propagates the error from the output neurons and computes the weight modifications for the neurons in the hidden layers.
Figure 1.10 MLP neural network (circles represent neurons, arranged in input, hidden and output layers and joined by weighted connections; bias neurons feed the hidden and output layers)
Neurons in the input layer only act as buffers for distributing the input signals x i to the neurons in the hidden layer. Each neuron j in the hidden layer sums its input
signals xi, after multiplying them with the strengths of the respective connections wji from the input layer, and computes its output yj as a function f of the sum:

yj = f(Σi wji xi)    (1.4)
Function f can be a simple threshold or a sigmoidal, hyperbolic tangent or radial basis function. The output of the neurons in the output layer is calculated similarly. A common activation function is the sigmoidal function:
f(x) = 1 / (1 + e^(-x))    (1.5)
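Equations (1.4) and (1.5) amount to a dot product followed by a squashing function, as the following sketch shows; the input and weight values are invented.

import numpy as np

def sigmoid(x):
    # Equation (1.5)
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights):
    # Equation (1.4): weighted sum of the inputs passed through the activation.
    return sigmoid(np.dot(weights, inputs))

x = np.array([0.5, -0.2, 0.8])   # input signals (illustrative)
w = np.array([0.4, 0.1, -0.6])   # connection weights (illustrative)
print(neuron_output(x, w))       # about 0.43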
The backpropagation algorithm gives the change Δwji in the weight of a connection between neurons i and j as:

Δwji = η δj xi    (1.6)
where η is a parameter called the learning rate and δj is a factor depending upon whether neuron j is an output neuron or a hidden neuron. For output neurons:
δj = (df/dnetj) (yj(t) - yj)    (1.7)

and for hidden neurons:

δj = (df/dnetj) Σq wqj δq    (1.8)
In equation (1.7), netj is the total weighted sum of input signals to neuron j and yj(t) is the target output for neuron j. As there are no target outputs for hidden neurons, in equation (1.8), the difference between the target and actual output of a hidden neuron j is replaced by the weighted sum of the δq terms already obtained for neurons q connected to the output of j.

Thus, iteratively, beginning with the output layer, the δ term is computed for all neurons in all layers and weight updates determined for all connections. The weight updating process can take place after the presentation of each training pattern (pattern-based training) or after the presentation of the whole set of training patterns
(batch training). In either case, a training epoch is complete when all the training patterns have been presented once to the MLP. For all but the most trivial problems, several epochs are required for the MLP to be properly trained. A commonly-adopted method to speed up the training is to add a "momentum" term to equation (1.6), which effectively lets the previous weight change influence the new weight change, viz.:

Δwji(k+1) = η δj xi + α Δwji(k)    (1.9)

where Δwji(k+1) and Δwji(k) are weight changes in epochs (k+1) and (k), respectively, and α is the "momentum" coefficient. For interested readers, further details and code for the MLP using BP training can be found in [Pham and Liu, 1999; Eberhart and Dobbins, 1990].

Some neural networks are trained using unsupervised learning, where only the input patterns are provided during training. These networks learn automatically to cluster patterns into groups with similar features. Examples of such networks are the Kohonen Self-Organising Feature Map (SOFM) and Adaptive Resonance Theory (ART) [Pham and Chan, 1998].

A basic SOFM contains a two-dimensional map of neurons. Figure 1.11 gives an illustration of a 3x3 Kohonen SOFM. The neurons in a Kohonen map (and in an ART network) are different from those in a MLP network. MLP neurons process the inputs using mathematical functions. In the Kohonen map, the neurons simply store representative feature vectors. Each neuron has the same dimensionality as the input feature vector. During training, when patterns are presented, the neuron is found which is closest to the input pattern, normally calculated using Euclidean distance. The weights of that neuron are updated, as are its neighbours but to a decreasing extent according to their distance away from the selected neuron. Over a series of iterations, the weights of the neurons are adapted so that, finally, different areas of the map correspond to different types of input patterns. A variant of the SOFM network is a supervised network called the Learning Vector Quantisation (LVQ) network [Pham and Oztemel, 1996].
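A single SOFM training step, as just described, can be sketched as follows. The map size, learning rate, neighbourhood radius and random data are all assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((3, 3, 4))   # 3x3 map of 4-dimensional feature vectors
pattern = rng.random(4)           # one training pattern

# Find the neuron whose stored vector is closest to the input pattern
# (Euclidean distance).
distances = np.linalg.norm(weights - pattern, axis=2)
winner = np.unravel_index(np.argmin(distances), distances.shape)

# Update the winner and, to a decreasing extent, its neighbours on the map.
learning_rate, radius = 0.5, 1.0
for r in range(3):
    for c in range(3):
        d = np.hypot(r - winner[0], c - winner[1])
        influence = np.exp(-(d ** 2) / (2 * radius ** 2))
        weights[r, c] += learning_rate * influence * (pattern - weights[r, c])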
Figure 1.11 3x3 Kohonen SOFM
Figure 1.12 ART-1 network (an input layer holding training patterns is connected to an output layer of representative patterns by bottom-up and top-down weights)

ART aims to overcome the "stability-plasticity dilemma". Normally, after a neural network is trained, it becomes static and is unable to learn from new input patterns. This contrasts with humans, who are able to update their knowledge continuously. In ART, the first input pattern presented creates a new neuron. Subsequently, patterns are compared with existing neurons. If they are found to be sufficiently close to an existing neuron (according to some similarity criterion) the pattern is assigned to that neuron and the neuron's weights are updated. Otherwise, a new neuron is generated for that pattern. Thus, it is not certain how many neurons will be generated during training. This is different from other kinds of neural networks, where the number of neurons is fixed at the beginning of training. An illustration of ART-1, an early type of ART network, is shown in Figure 1.12. ART-2 has a
similar operating principle to ART-1 but a more complex architecture, enabling it to handle noisy analogue inputs.

Another popular development in neural networks is that of Radial Basis Functions (RBFs) [Schalkoff, 1997]. One particular advantage that RBFs have over MLPs is that they do not need to be retrained completely when new training data becomes available. A basic RBF network has three layers: input, hidden and output. The dimensionality of the hidden layer should be selected according to the problem. The connections between the input and hidden layers are non-linear but the hidden-to-output layer connections are linear. The idea of the hidden layer is to transform non-linearly separable patterns into linearly separable patterns so that the output layer can easily deal with them. The training phase for the RBF network involves finding the appropriate non-linear functions for the hidden layer. The output of hidden neuron i in a RBF network has the following form:
The output of hidden neuron i in an RBF network has the following form:

yi(x) = φ(||x − ci||)

where x is the input vector, ci is the centre vector stored by hidden neuron i and φ is a radial function, commonly a Gaussian.

... START menu item. Debugging and analysis are carried out using options from the TEST menu, such as BREAKPOINT, SINGLE STEP EXECUTION and DISPLAY TEST RESULTS.
Author Index
Abbott A.L., 91
Abdou I.E., 73, 89
Abu-Rezeq A.N., 8, 31
Acton S.T., 72, 89
Adler D.D., 124
Akiyama N., 66
Aksoy M.S., 15, 30, 180, 188
Alcock R.J., 8, 16, 24, 27, 30, 31, 66, 69, 77, 79, 88, 91, 110, 112, 117, 122, 123, 124, 135, 140, 152, 165, 168, 173, 187, 188, 189, 190
Araman P.A., 151, 188
Araujo H., 27
Armstrong W.W., 163, 186
Attolico G., 187
Balaban M.O., 29
Balakrishnan H., 181, 187
Batchelor B.G., 1, 4, 27, 40, 46, 64, 65, 75, 89
Bauer K.W., 118, 120, 122, 125
Bayro-Corrochano E.J., 60, 64, 66, 73, 74, 91, 143, 151, 161, 163, 164, 189
Beelman R.B., 28, 90
Belue L.M., 118, 120, 122
Berenguel M.A., 32
Bernard R., 29
Betrouni M., 28
Bhandari D., 66
Bhandarkar S.M., 87, 89
Biela P., 28
Blake C., 97, 121, 122
Blasco J., 27
Blasco M., 27
Bradshaw M., 8, 27
Braggins D., 38, 40, 44, 46, 64
Branca A., 180, 187
Breiman L., 116, 122
Brill F.Z., 119, 122
Brodatz P., 82, 89, 183, 184, 187, 190
Brown D.E., 122
Brunner C.C., 64, 90, 111, 122, 179, 187
Brzakovic D., 149, 151
Burke M.W., 39, 43, 64
Butler D.A., 64, 83, 90, 122, 187
Cagnoni S., 62, 66
Cao L.X., 28
Casasent D.A., 35, 64
Castleman K.R., 112, 115, 123
Cetiner B.G., 82, 91, 107, 124, 180, 181, 183, 187, 189
Chakraborty B., 116, 124
Chalana V., 32, 92
Chan A.B., 20, 30
Chan H., 124
Chang F.C., 66
Chao K., 8, 27, 142, 151
Chen B.T., 52, 64
Chen J., 46, 64, 92
Chen J.M., 81, 92
Chen T.Q., 8, 27
Chen Y.C., 91
Chen Y.R., 27, 151, 186, 188
Chen Y.S., 64
Cheng M.C., 86, 90
Cheng S., 123
Cheng W.C., 32
Chia Shun L., 124
Chiang S.C., 32
Cho T.H., 28, 149, 151
Choi Y.S., 62, 64
Cholley J.P., 79, 92
Chouchoulas A.A., 138, 153
Chung Y.K., 28
Clark P., 15, 27
Conners R.W., 107, 123, 149, 151, 179, 181, 187
Croft E.A., 65
Csapodi M., 86, 92
Dagli C.H., 29
Das D.N., 75, 90
Davies E.R., 131, 151
De Santis A., 72, 90
De Silva C.W., 28, 65
Demant C., 7, 27, 84, 90, 201, 206
Devijver P.A., 116, 123
Diaz R., 8, 27
Dimov S.S., 15, 16, 30, 136, 153
Dinstein I., 123, 188
Distante A., 187
Djurovic I., 66, 190
Dobbins R.W., 20, 27
Doh W.R., 28
Dong P., 186, 187
Dourado A., 27
Drake P.R., 115, 124, 148, 151, 177, 187, 188
Duarte F., 8, 27
Dyer C.R., 125
Early H., 151
Eberhart R.C., 20, 27
El Kwae E., 90
Enbody R., 124
Ercal F., 29
Estevez P.A., 119, 123, 177, 187
Fadzil M.H.A., 8, 27
Fang N., 86, 90
Faus G., 27
Fernandez M., 123, 187
Forrer J.B., 83, 90
Freeman H., 160, 187
Friedman J.H., 122
Funck J.W., 49, 64, 90, 122, 179, 187
Gecsei J., 163, 186
Giger M.L., 118, 123
Glover F., 25, 27
Goddard J.S., 186, 190
Goedhart B., 124
Goldberg D.E., 23, 28
Gomez Ortega J., 32
Gong W., 123
Gonzalez R.C., 70, 77, 79, 87, 90, 111, 123
Goodman E.D., 124
Goodsitt M.M., 124
Goonatilake S., 142, 152
Gorria P., 32
Gosine R.G., 28
Gudmundsson M., 74, 90
Guerra E., 51, 64
Gupta N.K., 190
Gwozdz F.B., 27
Hajimowlana S.H., 8, 28
Hamad D., 8, 28
Han J., 123
Hansen P., 25, 28
Haralick R.M., 104, 123, 181, 188
Hardin W., 49, 65, 195, 196, 206
Harding K., 46, 65
Harlow C.A., 107, 123
Harwood R.J., 32, 91
Hauta-Kasari M., 186, 188
Haykin S., 142, 152
Heinemann P.H., 8, 28, 79, 90
Helvie M.A., 124
Hickey D.S., 201, 206
Hill D.A., 64
Hodgson D.C., 64
Hong J., 29
Hou T.H., 72, 90
Hovland P., 124
Hruschka W.R., 27
Hsu W.H., 64
Hu B.G., 8, 28
Huber H.A., 91, 166, 188
Hughes R., 28, 90
Ishii A., 8, 31
Izumita M., 28
Jaaskelainen T., 188
Jain A.K., 7, 9, 29
Jayaraman S., 187
Jayas D.S., 8, 29
Jender H., 32
Jennings N.R., 190
Jullien G.A., 28
Jung W., 28
Juuso E., 91
Kabrisky M., 124
Kabuka M.R., 90
Kak A.C., 60, 66
Kamber M., 116, 123
Kameyama K., 8, 28
Kang D.K., 8, 28
Karaboga D., 22, 30
Karjalainen P.A., 48, 65
Kaufman K.A., 15, 28
Kawata S., 65
Keagy P.M., 64
Kelly M., 152
Keogh E., 122
Keyvan S., 142, 152
Khebbal S., 142, 152
Kim C.W., 48, 65, 87, 90, 149, 152
Kim T.H., 8, 28
Kim Y., 32, 92
King T.G., 8, 32
Kittler J., 92, 116, 123, 125
Kline D.E., 179, 188
Klinkhachorn P., 91
Kobayashi M., 65
Koivo A.J., 48, 65, 87, 90, 149, 152
Kopp F., 35, 65
Korel F., 8, 29
Kosugi Y., 28
Kothari R., 78, 91
Kovacic S., 29
Krishnapuram R., 62, 64
Krone A., 13, 29
Kundu M.K., 66
Kuo W., 72, 90
Kupinski M.A., 118, 123
Kwak C., 8, 29
Lahajnar F., 8, 29
Lampinen J., 120, 123
Lappalainen T., 48, 65, 167
Lavangnananda K., 24, 29
Lavrac N., 29
Lee J.S., 152
Lee L.C., 64
Lee M.F., 49, 50, 65
Lehtihet E.A., 44, 65, 117, 124
Lelieveldt B.P.F., 124
Lenz R., 188
Li P., 91
Liao K., 30, 65
Lin K., 123, 187
Lindeberg T., 32
Liu H., 120, 125
Liu X., 16, 20, 30
Loftus M., 51, 66
Luzuriaga D.A., 29
Lynn Abbott A., 190
MacKay D.J.C., 142, 152
Magee M., 133, 152
Majumdar S., 8, 29
Maldague X., 45, 65
Manolopoulos Y., 16, 24, 27, 112, 122
Maristany A.G., 122
Marszalec E., 179, 188
McKinney J.P., 188
McMillin C.W., 123, 187, 188
Melvyn L., 8, 29
Mertzios B.G., 124, 142, 143, 147, 152
Merz C.J., 122
Meyer-Base A., 120, 123
Michalski R.S., 15, 28, 29, 31
Michie D., 150, 152
Mitropoulos P., 66, 190
Mitzias D.A., 142, 143, 147, 152
Moganti M., 7, 9, 29
Moh J.L., 66
Molto E., 27
Moon Y.S., 28
Morrow C.T., 28, 90
Mozetic I., 29
Mukherjee D.P., 72, 89
Mukhopadhyay S., 75, 90
Murphey Y.L., 27
Murtovaara S., 74, 91
Muscedere R., 28
Nagarajah C.R., 92
Nakamura O., 45, 65
Newman T.S., 7, 9, 29
Ng C.N., 187
Ngo P.A., 32
Nguyen H.T., 12, 29
Ni B., 8, 30, 49, 65
Niblett T., 15, 27
Norton-Wayne L., 32, 91
Novini A., 47, 65
Occena L.G., 190
Oh I.S., 147, 152
Okahashi T., 28
Olshen R.A., 122
Otsu N., 77, 91
Oyeleye O., 44, 65, 117, 124
Oztemel E., 20, 30, 142, 153, 164, 180, 189
Packianather M.S., 115, 123, 124, 148, 151, 177, 187, 188
Paindavoine M., 32
Pal S.K., 62, 66, 116, 124
Panas S.M., 86, 92
Papadopoulos G., 66, 190
Park B., 151, 186, 188
Park S.B., 28
Park S.H., 28
Parkkinen J., 188
Paulsen M.R., 30, 65
Pavlidis T., 75, 91
Pearson T., 8, 30, 45, 66
Peat B.J., 165, 189
Pei M., 124
Peng Q., 51, 66
Perantonis S.J., 121, 124
Pernus F., 29
Petrick N., 124
Petrou M., 92, 125
Pham D.T., 8, 11, 15, 16, 20, 22, 30, 31, 60, 66, 69, 73, 74, 77, 79, 82, 88, 91, 107, 110, 117, 124, 135, 136, 140, 142, 152, 153, 158, 161, 163, 164, 165, 168, 173, 177, 180, 181, 188, 189, 190
Pham P.T.N., 30, 31
Pietikainen M., 179, 188
Pinz A., 32
Plataniotis K.N., 92
Poli R., 62, 66
Polzleitner W., 166, 190
Postaire J.G., 28
Pratt W.K., 73, 74, 89
Priddy K.L., 120, 124
Punch W.F., 119, 124
Quinlan J.R., 15, 31
Quirk K., 35, 66
Ramze-Rezaee M., 121, 124
Ravi V., 138, 153
Reiber J.H.C., 124
Reid J.F., 30, 65
Richard J., 8, 29
Roberts J.W., 28
Rodd M.G., 8, 31
Rodriguez F., 32
Rogers S.K., 124, 125
Romanchik D., 8, 31
Rooks B.W., 205, 207
Rosenfeld A., 60, 66, 125
Ross I., 190
Ruck D.W., 124
Russ J.C., 102, 103, 104, 124
Russo F., 61, 66, 74, 91
Sagiroglu S., 142, 143, 153, 177, 190
Sahiner B., 119, 124
Sahoo P.K., 77, 91
Sanby C., 79, 91
Sari-Sarraf H., 186, 190
Sawchuk A.A., 48, 66
Schalkoff R.J., 22, 31
Schatzki T.F., 64
Schmoldt D.L., 85, 91, 179, 190
Schwingshakl G., 166, 190
Semple E.C., 100, 125
Setiono R., 120, 125
Shanmugam K., 123, 188
Shaw G.B., 187
Shen Q., 138, 153
Shih F.Y., 60, 66
Siedlecki W., 117, 119, 125
Sinisgalli C., 72, 90
Sipe M.A., 64
Sklansky J., 117, 119, 125
Smolander S., 120, 123
Sobey P.J., 100, 107, 125
Soini A., 6, 31
Soltani S., 91
Someji T., 44, 66
Sommer H.J., 28, 90
Song K.Y., 82, 92, 97, 125
Song X.L., 152
Spiegelhalter D.J., 152
Stefani S.A., 80, 92
Steppe J.M., 120, 125
Stojanovic R., 44, 66, 177, 190
Stone C.J., 122
Streicher-Abel B., 27, 90, 206
Suen C.Y., 152
Suga Y., 8, 31
Sun D.W., 8, 31
Sundaram R., 72, 92
Sutinen R., 91
Sylla C., 1, 31
Sziranyi T., 86, 92
Tafuri M., 187
Tamaree N., 31
Tantaswadi P., 8, 31
Tao Y., 8, 32, 45, 67
Tarr G., 124
Taylor C.C., 152
Teuber P., 13, 29
Thomas A.D.H., 8, 31
Titus J., 45, 67
Tofang-Sazi K., 29
Tolba A.S., 8, 31
Tolias Y.A., 86, 92
Toncich D., 92
Toyofuku N., 8, 30
Tretiak O.J., 46, 64
Truchetet F., 8, 32, 79, 92
Tsai D.M., 8, 32
Tseng C.F., 8, 32
Tsunekawa S., 29
Urena R., 8, 32
Van Dommelen C.H., 46, 47, 67
Van Leeuwen D., 122
Vargas M., 32
Vasquez-Espinosa R.E., 123, 187
Venetsanopoulos A.N., 92
Venkataraman S., 187
Ventura J.A., 29, 81, 92
Vilainatre J., 31
Villalobos J.R., 51, 64
Viraivan P., 31
Virvilis V., 121, 124
Vivas C., 8, 32
Vujovic N., 149, 151
Wagner G., 205, 207
Walker E.A., 12, 29
Wang J., 8, 32
Waszkewitz P., 27, 90, 206
Watzel R., 120, 123
Webb A., 96, 117, 121, 125, 130, 132, 153
Wei D., 124
Wen W., 75, 92
Wen Z.Q., 8, 32, 45, 67
Weng C.J., 8, 27
Weniger R., 152
Wenzel D., 152
West P., 28, 47, 67, 90
West P.J., 28, 90
Weszka J.S., 107, 125
Whelan P.F., 1, 4, 27, 65, 75, 89
Widoyoko A., 188
Wiedenbeck J.K., 188
Williams M.E., 196, 207
Wilson A., 45, 67
Wiltschi K., 8, 32
Winstone L., 123
Wong A.K.C., 91
Woods R.E., 70, 77, 80, 87, 90, 111, 123
Worthy N.M., 122
Wright D.T., 202, 208
Wu Q.M., 65
Xia A., 75, 92
Yazdi H.R., 8, 32
Yoshimura T., 66
Yu S.S., 7, 32
Zhang H., 87, 89
Zhang J., 27
Zhou L.Y., 8, 32, 79, 92
Zhou Y., 27
Zhu S.Y., 72, 92
Zimmermann H.J., 138, 153
Zuech N., 195, 208
Index
3D Object Inspection, 10, 39, 41, 47-51, 65, 91, 197
Ambient Lighting, 36, 43
AQ, 15
Arc-discharge Lamps, 44
ART, 20, 21, 60, 131, 174
Artificial Intelligence, 2, 11-25, 60, 87, 88, 173, 198-202, 205-208
Back Propagation, 18-20, 142
Backlighting, 38, 41
Bayesian Classifiers, 72, 90, 124, 131-133, 149-152
Brightfield Illumination, 39, 47, 48
C4.5, 15, 31, 116, 150
CART, 116, 150
Cellular Neural Network, 86
Character Verification, 93, 127, 155
Circularity, 42, 107, 109, 134, 136, 160
Classification, 4, 11, 14, 16, 26, 33, 86, 88, 97-100, 109-122, 129-155, 161, 162, 173, 176, 177, 180, 186, 201
Colour, 111
Computer Vision, 1-2
Convolution Filters, 52-56, 60, 62, 67-73, 87, 109, 161, 170, 172, 201
Co-occurrence Matrices, 99, 104, 105, 107, 126, 181, 185, 186
Correlation, 50, 77, 84, 105, 113-116, 185
Cost of Inspection, 6
Darkfield Illumination, 39, 47, 48
Development Environments, 197
Diffuse Lighting, 38-41, 45-48
Digital Signal Processing, 203
Dilation, 55-58, 67, 68, 194
Dimension Checking, 7
Directional Lighting, 38-41, 44-48
Discriminant Analysis, 131
Dispersion, 100
Edge Detection, 2, 60, 70-75, 80, 89
Elongation, 107
Energy, 44, 60, 100, 105, 185
Entropy, 15, 60, 100, 105, 116, 185
Erosion, 55-58, 68, 194
Examples of AVI Applications, 8
Expert Systems, 11-16, 26, 33, 47, 161, 164, 169
Feature Extraction, 26, 33, 88, 95-110, 112, 122, 130
Feature Selection, 97, 111-121, 177
Features, 20, 26, 33, 88, 95-138, 143, 146-150, 154, 162-164, 174, 177, 181, 193, 197, 198, 201
First-order Features, 99-101, 107, 180, 181
Fluorescent Lighting, 41-49
Fourier Transform, 102, 104
Fractals, 186
Frontlighting, 38-41, 47
Future of AVI, 204, 205
Fuzzy Logic, 11-13, 26, 61, 62, 73, 74, 85, 88, 121, 142, 173, 201, 202
Fuzzy Rule-based Classification, 137, 177
GAIL, 24
Genetic Algorithms, 11, 22-26, 62, 73, 74, 87, 117-122, 152, 201, 202, 206
Grey-level Difference Method, 181-185
Hough Transform, 87
ID3, 15, 116, 150
Image Acquisition, 9, 26, 35-50, 63, 88, 89
Image Enhancement, 26, 33-36, 51-62
Inductive Learning, 11, 14-16, 24, 26, 116, 136
Infrared, 45, 197
Inter-class Variation, 112-116, 126
Intra-class Variation, 112-116, 126
KNN Classifier, 131, 133, 149, 150, 154
Kohonen SOFM, 20-22, 60, 120, 131, 149, 150, 163, 164
Kurtosis, 99, 100, 136, 180
Label Inspection, 34
Lacunarity, 186
Lasers, 44, 50
LEDs, 27, 45, 48, 67, 158, 159
Lighting, 1, 2, 9, 10, 36-50, 63, 67, 68, 75, 79, 88, 158-160, 168, 169, 172, 177, 205
Lighting Advisor, 47, 64
LVQ, 20, 150, 162, 177, 180, 185
Machine Vision, 1-2
Mark Identification, 7
Mean, 60, 77, 98-100, 110, 115, 125, 146, 180, 186, 202
Minimum-distance Classification, 131, 133, 149, 154
Morphology, 2, 52, 55, 63, 87, 177
Morphological Closing, 58-60, 87
Morphological Opening, 55-59, 87
Multi-Layer Perceptron (MLP), 17-20, 73, 74, 78, 85, 110, 117, 120, 121, 131, 138, 139, 142, 145-151, 177, 180, 185
  Hidden Neurons, 139, 140
  Input Neurons, 139
  Learning Rate, 19, 138, 140, 150, 163, 201
  Momentum, 20, 138-141, 164, 201
  Output Neurons, 139
  Stopping Criterion, 60, 138, 139
Neural Networks, 11, 16-22, 26, 60, 62, 73, 78, 85-88, 117-120, 130, 131, 138, 139, 142, 143, 146-151, 162, 163, 168, 173-176, 183-186, 198, 201, 202
Non-linear Classification, 22, 131, 135
Non-parametric Classification, 131
Object Features, 107
Omni-directional Lighting, 40
Packet Inspection, 68
Parallel Processing, 203-204
Parametric Classification, 131
Polarised Lighting, 39, 47, 48
Post-processing for Segmentation, 87, 88
Presence Verification, 7
Prewitt Edge Detector, 71, 72
Printed Symbol Inspection, 190-191
Profiling, 70, 177
ProVision Inspection Software, 34, 68, 93, 127, 155, 190-194, 208
Quartz-halogen Lamps, 44
Radial Basis Function Networks, 22, 120, 150
Region Growing, 70, 79-81, 89, 93
Requirements of AVI Systems, 5-6
Roberts' Edge Detector, 71, 72
Rule-based Classification, 11-15, 130, 131, 135-137, 149, 151, 161, 169
RULES, 15, 16, 30
RULES-3, 15, 16, 136, 180
RULES-4, 16
Seal Inspection, 157-164
Second-order Features, 99, 102, 104, 107, 181
Segmentation, 26, 33, 51, 52, 63, 69-93, 111, 122, 136, 150, 168, 169, 173, 177-180
Shade Features, 110
Shape Features, 107
Simulated Annealing, 11, 22-26, 74, 87
Skewness, 99, 100, 125, 180
Smart Cameras, 194-198, 201, 208
Smoothing, 52-56, 63, 77, 177, 194
Sobel Edge Detector, 68-74
Split-and-merge Segmentation, 70, 81
Standard Deviation, 77, 98-100, 180
Statistical Features, 99, 110, 181
Structured Lighting, 6, 38, 41, 44, 50
Supervised Classification, 17, 20, 130, 142
Synergistic Classification, 142-149, 154, 164, 180
Tabu Search, 11, 22, 25, 26
Template Matching, 70, 84, 92
Texture Classification, 180-186
Thresholding, 2, 70, 75-80, 88, 89, 92, 93, 172, 177, 186
Tonal Features, 99
Tungsten Lamps, 43, 44
Ultrasonic, 35
Ultraviolet, 45
Unsupervised Classification, 20, 130, 142
Wood Inspection, 165-179
X-ray, 35