“Engineering Asset Lifecycle Management” Proceedings of the 4th World Congress on Engineering Asset Management (WCEAM 2009), 28-30 September 2009
Editors Dimitris Kiritsis, Christos Emmanouilidis, Andy Koronios, and Joseph Mathew
Published by Springer-Verlag London Ltd
ISBN 978-1-84996-002-1
Proceedings of the 4th World Congress on Engineering Asset Management (WCEAM 2009) Ledra Marriott Hotel, Athens 28-30 September 2009
All Rights Reserved Copyright © Springer-Verlag 2010
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher.
PREFACE
The 4th World Congress on Engineering Asset Management, WCEAM 2009, held at the Ledra Marriott Hotel in Athens, Greece from 28 to 30 September 2009, represents a milestone in the history of WCEAM. It is the first to be organised under the auspices of the newly formed International Society of Engineering Asset Management (ISEAM), which will host WCEAM annually as its forum for the exchange of information on recent advances in this rapidly growing field. WCEAM 2009 was organised with the invaluable support of the newly formed Hellenic Maintenance Society (HMS), which acted as the local host in Greece. The inaugural WCEAM, envisioned and initiated by the CRC for Integrated Engineering Asset Management (CIEAM), was held in July 2006 on the Gold Coast, Australia, in conjunction with the International Conference on Maintenance Societies (ICOMS), hosted by the Maintenance Engineering Society of Australia (MESA), and the International Maintenance Systems (IMS) Conference, hosted by the Intelligent Maintenance Systems (IMS) Centre, USA. The 2nd WCEAM was co-organised with the Condition Monitoring Conference hosted by the British Institute for Non-Destructive Testing (BINDT) in April 2007 in Harrogate, UK. The third congress in the series was organised in Beijing in October 2008 by a consortium comprising the Division of Mechanical and Vehicle Engineering of the Chinese Academy of Engineering, the China Association of Plant Engineering, the National Science Foundation Industry/University Cooperative Research Center on Intelligent Maintenance Systems (IMS), USA, the Diagnosis and Self Recovery Engineering Research Centre at Beijing University of Chemical Technology, and CIEAM.
The theme of WCEAM 2009 was “Engineering Asset Lifecycle Management – A 2020 Vision for a Sustainable Future”. It fits well with the lessons learnt from the recent financial and economic crises, which severely impacted the global economy and showed that a sustainable future requires consideration of the various lifecycle aspects of business, industrial and public-good activities in their technical, organisational, economic, environmental and societal dimensions. The management of engineering assets over their lifecycle, with maintenance at its core, is a crucial element of global business sustainability, and its importance is gradually being recognised by corporate senior management. Our seven distinguished keynote speakers at WCEAM 2009 presented developments in a number of these areas. ISEAM envisions WCEAM as a global annual forum that promotes the interdisciplinary aspects of Engineering Asset Management (EAM). In view of this vision, WCEAM seeks to promote collaboration between organisations that share similar objectives and to provide a venue where matters of common interest are discussed. This year's program included a number of special sessions organised by the EFNMS, EURENSEAM and the Manufacturing Technology Platform (MTP) on Maintenance for Sustainable Manufacturing (M4SM) of the Intelligent Manufacturing Systems (IMS) international program. In addition, a session on modern maintenance training, along with a dedicated e-training workshop on Maintenance Management, was organised by the EU project “iLearn2Main”. WCEAM 2009 brought together over 170 leading academics, industry practitioners and research scientists from 29 countries to:
• Advance the body of knowledge in Engineering Asset Management (EAM),
• Strengthen the link between industry, academia and research,
• Promote the development and application of research, and
• Showcase state-of-the-art technologies.
Over 120 scientific and technical presentations reported outputs of research and development activities as well as the application of knowledge in the practical aspects of:
• Advanced Maintenance Strategies (RCM, CBM, RBI)
• Condition monitoring, diagnostics, and prognostics
• Decision support and optimization methods and tools
• Education and training in asset and maintenance management
• E-Maintenance
• Emerging technologies & embedded sensors and devices for EAM
• Human dimensions in integrated asset management
• Intelligent maintenance systems
• Lifecycle & sustainability considerations of physical assets
• Performance Monitoring and Management
• Planning and scheduling in asset and maintenance management
• Policy, Regulations, Practices and Standards for asset management
• Quality of information and knowledge management for EAM
• Risk management in EAM
• Safety, Health and Risk Management in EAM
• Self-maintenance and self-recovery engineering
• Strategic asset management for sustainable business
• Technologies for asset data management, warehousing and mining
• Wireless technologies in EAM
All full papers published in these proceedings were refereed for technical merit by specialist members of a peer review panel. We gratefully acknowledge the support of Bayer Technology GmbH as Gold Sponsor, the Intelligent Manufacturing Systems (IMS) international organisation as Satchel and Dinner Sponsor, the CRC for Integrated Engineering Asset Management (CIEAM) as Bronze Sponsor, and the ATHENA Research and Innovation Centre as Welcome Reception Sponsor of WCEAM 2009. Function sponsorship was undertaken by Gefyra SA and Atlantic Bulk Carriers Management Ltd, while the SBC business channel, the Supply Chain & Logistics and Plant Management magazines, and the supply-chain.gr portal were WCEAM 2009's publicity sponsors. We thank the Hellenic Maintenance Society (HMS) and the WCEAM Organising Committee for the enormous effort they contributed to making this conference a success. Thanks are also due to the members of the WCEAM International Scientific Committee for reviewing papers and for promoting the congress within their networks. Athens, the “mythical” city of Athena, the ancient Greek goddess of wisdom, and the Parthenon offered WCEAM 2009 participants an ideal environment for knowledge exchange and networking, which we trust will lead to much new and fruitful collaboration. Thank you for having been a part of WCEAM 2009. We look forward to meeting you at our next event, the 5th
WCEAM in Brisbane, Australia, from 25-27 October 2010. Dr. Dimitris Kiritsis Congress Chair
Dr. Christos Emmanouilidis Co-chair
Professor Joseph Mathew Co-chair
Congress Chairs Chair: Dr Dimitris Kiritsis, Ecole Polytechnique Fédérale de Lausanne, Switzerland. Co-Chairs: Dr Christos Emmanouilidis, C.E.T.I/R.C. Athena, Greece Professor Joseph Mathew, CRC for Integrated Engineering Asset Management (CIEAM), Australia. International Scientific Committee Adolfo Crespo Marquez, Spain Ajith Parlikad, UK Andrew Starr, UK Andy Koronios, Australia Andy Tan, Australia Anthony David Hope, UK Antonio J Marques Cardoso, Portugal Ashraf Labib, UK Basim Al-Najjar, Sweden Benoit Iung, France Birlasekaran Sivaswamy, Australia Bo-Suk Yang, Korea Brett Kirk, Australia Brigitte Chebel Morello, France Bruce Thomas, Australia Christos Emmanouilidis, Greece Dimitris Kiritsis, Switzerland Erkki Jantunen, Finland Gao Jinji, P R China George Tagaras, Greece Hong-Bae Jun, Korea Ioannis Antoniadis, Greece Ioannis Minis, Greece Jay Lee, USA Jayantha Liyanage, Norway
Jing Gao, Australia Joe Amadi-Echendu, South Africa Jong-Ho Shin, Switzerland Joseph Mathew, Australia Kari Komonen, Finland Kerry Brown, Australia Kondo Adjallah, France Len Gelman, UK Lin Ma, Australia Marco Garetti, Italy Margot Weijnen, The Netherlands Ming Zuo, Canada Mohd Salman Leong, Malaysia Mohsen Jafari, USA Nalinakash S Vyas, India Noureddine Zerhouni, France Peter Tse, Hong Kong, China Rhys Jones, Australia Roger Willett, New Zealand Seppo Virtanen, Finland Shozo Takata, Japan Stanislaw Radkowski, Poland Tadao Kawai, Japan Yiannis Bakouros, Greece
WCEAM 2009 Organising Committee Ian Curry, Hi Events, Australia Jane Davis, CIEAM, Australia Katerina Zerdeva, Zita Congress, Greece Professor Andy Koronios, CIEAM, University of South Australia, Australia Rhonda Hendicott, Hi Events, Australia Zacharias Kaplanidis, Zita Congress, Greece
Keynote Speakers
Dr Claudio Boer - Intelligent Manufacturing Systems
“IMS Global Network Support for Maintenance for Manufacturing”
Claudio is Chairman of the International Steering Committee for Intelligent Manufacturing Systems (IMS). Over the past year, he has worked to initiate the Manufacturing Technology Platform (MTP) program, an innovative program designed to make global collaboration on new and ongoing research easier for researchers.
Dr Panayotis Papanikolas - GEFYRA S.A.
“Maintenance Management of the Rion-Antirion Bridge”
Heinz Cznotka - Bayer Technology Services GmbH
“Proactive Asset Lifecycle Management”
Dr Heinz Cznotka is Director of the Competence Center Asset Management Consultancy and Senior Risk and Reliability Consultant. He has 20 years' experience in the oil, gas and petrochemical industry as Project Manager, Reliability Manager, Production Manager, Managing Director Maintenance Services and Technical Risk Manager. Dr Cznotka's core competences are Risk-Based Maintenance (RBM), Reliability-Centered Maintenance (RCM), turnaround optimization; lifecycle length and cost optimization; reliability and maintenance engineering and optimization; reliability- and maintenance-related EHS management; and in-service inspection.
Richard Edwards - EFNMS/IAM
“Why do we need asset management: trends and issues?”
Richard is a member of the Council of the Institute of Asset Management (IAM) and the current chairman of the Asset Management Technical and Professional Network, a joint venture between the IET and the IAM. He is also a Board member of the IAM. A Director of AMCL for ten years, Richard has extensive experience in the application and assessment of Asset Management.
Eric Luyer - IBM Corporation
“Leveraging Smart Asset Management in Engineering and Product Lifecycle Management”
Eric Luyer has global responsibility for managing industry marketing activities for the industrial manufacturing sector. With more than 25 years of experience, Eric has developed in-depth expertise in the industry, working in financial, ERP and Enterprise Asset Management software application environments, initially in Europe and for the last seven years worldwide. He has held senior positions in sales management, indirect channels, alliance partner management and industry marketing for software solution providers such as Comshare, Global SSA, Baan/Invensys and MRO Software. Eric is currently IBM's worldwide Manager of Product Marketing for Maximo Asset Management, positioning asset management in industrial manufacturing industries.
Panayotis is Vice-Chairman & Managing Director of GEFYRA S.A., the concessionaire company of the Rion-Antirion Bridge. As Technical Director during the construction of the longest multi-span cable-stayed bridge in the world, built in an environment that is aggressive in terms of durability, seismicity and wind, he was in charge of developing the inspection, monitoring and maintenance management plan for the bridge. Five years of operation have helped fine-tune what is considered today one of the most complete structural asset management plans for bridges.
Jürgen Potthoff - Bayer Technology Services
“Proactive Asset Lifecycle Management”
Jürgen has 16 years' experience with globally operating Engineering, Procurement & Construction (EPC) companies, working on projects in Europe, the Americas, Asia, MENA and Australia serving the petrochemical, refinery and chemical process industries. He has worked as a Lead Process Engineer and Feasibility Study Leader for world-scale petrochemical plants; as Sales & Project Manager for lump-sum turnkey EPC projects, including a 4-year local assignment to develop the Australian market; on the establishment and management of maintenance management and contracting services based on a risk-based maintenance concept; and for 3 years as a Reliability Engineering Manager with a major chemical company, on the development and global implementation of risk-based maintenance.
Kristian Steenstrup - Gartner Inc. “Operational Technology and the relationship to IT Management” Kristian is research vice president in Gartner’s Australian branch. He conducts and delivers research on Enterprise Business Systems including ERP, EAM, SCM, CRM and c-commerce. Kristian is responsible for vendor analysis in the Asia/Pacific market and is global research leader for Asset Intensive ERP II. Before joining Gartner Kristian worked in ERP and EAM system design and delivery for over 15 years in a number of global markets. During this period he was directly involved in emerging technologies in Asset Intensive industries such as Utilities, Mining, Rail and Defence.
Panos Zachariadis - Atlantic Bulk Carriers
“Construction, operation and lifecycle cost of ships – Realising that before maintenance comes maintainability”
Panos is Technical Director of Atlantic Bulk Carriers Management Ltd. From 1984 to 1997 he was Marine Superintendent for a New York bulk carrier and oil tanker shipping company. His shipping experience spans diverse areas such as sea service in bulk carriers and oil tankers, supervision of dry dock repairs, new-building specifications and supervision, ship operations and chartering. Mr Zachariadis holds a BSc in Mechanical Engineering from Iowa State University and an MSE in Naval Architecture and Marine Engineering from the University of Michigan. He is a founding member of the Marine Technical Managers Association (MARTECMA) of Greece.
SPONSORS Gold Sponsor
Satchel & Congress Dinner Sponsor
Welcome Reception Sponsor
Bronze Sponsor
Wednesday Lunch Sponsor
Publicity Sponsors
Exhibitors
Bayer Technology Services GmbH, 51368 Leverkusen, Germany
E-mail: [email protected]
www.bayertechnology.com
Monday Morning Tea Sponsor
Ledra Marriott Hotel, Athens, Greece
Program The organisers reserve the right to make changes to the program.
Sunday 27 September 2009
1930
Dinner at Horizons Restaurant - Optional dinner at additional cost
Sponsored by Bayer Technology Services
Monday 28 September 2009
0700
Exhibitors and poster presenters mount displays
0830
Conference Registration
0900-0930
Welcome – Opening session
Plenary session sponsored by Bayer Technology Services
Chair: Dimitris Kiritsis
0930-1015
Keynote address 1: Heinz Cznotka & Jürgen Potthoff, Bayer Technology Services GmbH Proactive Asset Lifecycle Management
1015-1100
Keynote address 2: Kristian Steenstrup, Research VP, Gartner Inc Operational technology and its relationship to IT Management
1100-1120
Coffee break
1120-1300
Sessions
1300-1420
Session 1: EURENSEAM - 1 - Strategic Engineering Asset Management
Session 2: Transport, Building and Structural Asset Management
Session 3: Lifecycle & Sustainability Considerations Of Physical Assets
Chair: J P Liyanage
Chair: Tony Hope
Chair: Andy Koronios
Ahonen, T., Collaborative development of Maintenance investment management: A case study in pulp and paper industry
Kaphle, M., Tan, ACC., Kim, E. and Thambiratnam, D., Application of acoustic emission technology in monitoring structural integrity of bridges
Frolov, V, Mengel, D, Bandara, W, Sun, Y and Ma, L., Building an ontology and process architecture for engineering asset management
Al-Najjar, B. and Ciganovic, R., A model for more accurate maintenance decisions
Nayak, R., Piyatrapoomi, N. and Weligamage, J., Application of text mining in analysing road crashes for road asset management
Karray, MH, Morello, BC and Zerhouni, N, Towards A Maintenance Semantic Architecture
Gudergan, G., The House of Maintenance - Identifying the potential for improvement in internal maintenance organizations by means of a capability maturity model
Pérez, AA, Vieira, ACV, Marques Cardoso, AJ, School Buildings Assets - Maintenance Management and Organization for Vertical Transportation Equipment
Koronios, A., Steenstrup, C. and Haider, A., Information and Operational Technologies Nexus for Asset Lifecycle Management
Parida, A. and Kumar, U., Integrated strategic asset performance assessment
Phillips, P., Diston, D., Starr, A., Payne, J., and Pandya, S., A review on the optimisation of aircraft maintenance with application to landing gears
Matsokis, A., Kiritsis, D., An advanced method for time treatment in product lifecycle management models
Rosqvist, T., Assessing the subjective added value of value nets: which network strategies are really win-win?
Piyatrapoomi, N. and Weligamage, J. Risk-based approach for managing road surface friction of road assets
Shin, J-H, Kiritsis, D., Xirouchakis, P., Function performance evaluation and its application for design modification based on product usage data
Lunch
World Congress of Engineering Asset Management 2009
Program Monday 28 September 2009 ... continued
1420-1540
Sessions
Session 4: EURENSEAM - 2 - Strategic Engineering Asset Management
Session 5: Transport, Building and Structural Asset Management
Session 6: Technologies For Asset Data Management, Warehousing & Mining
Chair: Bassim Al-Najjar
Chair: Joe Amadi-Echendu
Chair: Michael Purser
Al-Najjar, B., A Computerized model for assessing the return on investment in maintenance
Amadi-Echendu, J.E., Belmonte, H., von Holdt, C., and Bhagwan, J., A case study of condition assessment of water and sanitation infrastructure
Haider, A., Open Source Software Development for Asset Lifecycle Management
González Díaz, V., Fernández, JFG, Crespo Márquez, A., Case study: Warranty costs estimation according to a defined lifetime distribution of deliverables
Nastasie, D., Koronios, A., The role of standard information models in road asset management
Kans, M., Assessing maintenance management IT on the basis of IT maturity
Schuh, G. And Podratz, K., Remote service concepts for intelligent tool-machine systems
Nastasie, D., Koronios, A., The diffusion of standard information models in road asset management: - A study based on the human - technology environment
Mathew, A., Purser, M., Ma, L. and Barlow, M., Open standards-based system integration for asset management decision support
Wijnia, YC and Herder, PM, The state of Asset Management in the Netherlands
Ninikas, G., Athanasopoulos, Th., Marentakis, H., Zeimpekis, V., and Minis, I., Design and implementation of a real-time fleet management system for a courier operator
Peppard, J., Koronios, A. Gao, J., The data quality implications of the servitization - theory building
1540-1600
Coffee break
1600-1645
Keynote address 3: Richard Edwards, European Federation of National Maintenance Societies & Institute of Asset Management
Why do we need asset management: trends and issues
Chair: Christos Emmanouilidis
1650-1810
Sessions
1930-2100
Session 7: EFNMS - Engineering Asset Management in Europe
Session 8: Decision Support & Optimisation Methods & Tools
1700-1900 ISEAM AGM
Chair: Kari Komonen
Chair: Marco Macchi
Chair: Joe Mathew
Benetrix, L., Garnero, MA and Verrier, V., Asset management for fossil-fired power plants: methodology and an example of application
Cenna, AA, Pang, K, Williams, KC, Jones, MG, Micromechanics of wear and its application to predict the service life of pneumatic conveying pipelines
Olsson, C., Labib, A. and Vamvalis, C., CMMS – Investment or Disaster? Avoid the Pitfalls
Kim, JG, Jang, YS, Jeong, HE, Lim, J and Choi, BK, Flexible coupling numerical analysis method
Ulaga, S., Jakovcic, M. And Frkovic, D. Condition monitoring supported decision processes in maintenance
Godichaud M. , Pérès F. and Tchangani A., Disassembly process planning using Bayesian network
Welcome Cocktail Function - Sponsored by ATHENA Research & Innovation Centre
Program Tuesday 29 September 2009
0830
Delegate Arrival & Registration
0900-0945
Keynote address 4: Claudio Boer, Chairman, Intelligent Manufacturing Systems (IMS)
IMS Global Network Support for Maintenance for Manufacturing
Chair: Marco Garetti
1000-1140
Sessions
Session 9: Maintenance for Sustainable Manufacturing - 1
Session 10 - Strategic asset management for sustainable business
Session 11: Advanced Maintenance Strategies (RCM, CBM, RBI)
Chair: Marco Garetti
Chair: Ajith Parlikad
Chair: Lin Ma
Garetti, M. Welcome & Introduction to MTP M4SM Special Session
Furneaux, C.W., Brown, K.A., Tywoniak, S., and Gudmundsson, A., Performance of public private partnerships: an evolutionary perspective
Gorjian, N., Ma, L., Mittinty, M., Yarlagadda, P. and Sun, Y., A review on degradation models in reliability analysis
Pantelopoulos, S. (Industrial talk), Product maintenance in the ‘Internet of things’ world
Labib, A., Maintenance strategies: a systematic approach for selection of the right strategies
Gorjian, N., Ma, L., Mittinty, M., Yarlagadda, P. and Sun, Y., A review on reliability models with covariates
Gómez Fernández J F*, Álvarez Padilla F J, Fumagalli L, González Díaz V, Macchi M, Crespo Márquez A, Condition monitoring for the improvement of data center management orientated to the Green ICT
Rezvani, A., Srinivasan, R., Farhan, F., Parlikad, AK., and Jafari, M., Towards Value Based Asset Maintenance
Muhammad, M., Majid, A.A. and Ibrahim, N.A., A case study of reliability assessment for centrifugal pumps in a petrochemical plant
Liyanage J P, Badurdeen F, Strategies for integrating maintenance for sustainable manufacturing: developing integrated platforms
Yeoh, W., Koronios, A. And Gao, J., Ensuring Successful Business Intelligence Systems Implementation: Multiple Case Studies in Engineering Asset Management Organisations
Sun, Y., Ma, L., Purser, M. and Fidge, C., Optimisation of the reliability based preventive maintenance strategy
Tsutsui M, Takata S, Life Cycle Maintenance Planning System in consideration of operation and maintenance integration
Gao, J., Koronios, A., Kennett, S., Scott, H., Data quality enhanced asset management metadata model
Lee, WB, Moh, L-S, Choi, H-J, Lifecycle Engineering Asset Management
Session 12: Maintenance for Sustainable Manufacturing - 2
Session 13 - Planning, Scheduling & Performance Monitoring
Session 14 - Decision Support & Optimisation Methods & Tools
Chair: Shozo Takata
Chair: Seppo Virtanen
Chair: Colin Hoschke
Cannata A, Karnouskos S, Taisch M, Dynamic e-Maintenance in the era of SOA-ready device dominated industrial environments
Haider, A., Driving innovation through performance evaluation
Chebel-Morello, B., Haouchine, K., Zerhouni, N., Methodology to Conceive A Case Based System Of Industrial Diagnosis
Emmanouilidis, C. and Pistofidis, P. Design requirements for wireless sensor-based novelty detection in machinery condition monitoring
Kim, D., Lim, J-H., Zuo, MJ., Optimal schedules of two periodic preventive maintenance policies and their comparison
Lim, JI, Choi, BG, Kim, HJ, Kim, JG, and Park, CH., Optimum design of vertical pump for avoiding the reed frequency
Kans M, Ingwald A, Analysing IT functionality gaps for maintenance management
Lipia, TF, Zuo, MJ, and Lim, J-H., Optimal Replacement Decision Using Stochastic Filtering Process to Maximize Long Run Average Availability
Mokhtar, AA, Muhammad, M and Majid, MAA, Development of spreadsheet based decision support system for product distributions
Garetti, M - Session Wrap-Up
Seraoui, R., Chevalier, R. and Provost, D., EDF's plants monitoring through empirical modelling: performance assessment and optimization
Smalko Z, Woropay M, Ja, Z, The diagnostic decision in uncertain circumstances
1140-1200
Coffee break
1200-1310
Sessions
Program Tuesday 29 September 2009 ... continued
1310-1430
Lunch
1430-1610
Sessions
Session 15: Maintenance for Sustainable Manufacturing - 3 - Education & Training
Session 16: Advanced Maintenance Strategies
Session 17: Condition Monitoring, Diagnostics & Prognostics
Chair: Jan Franlund
Chair: Stanislaw Radkowski
Chair: Tony Rosqvist
Bakouros, Y., Panagiotidou, S. and Vamvalis, C., Education and Training Needs in Maintenance: How to conduct a self-audit in Maintenance Management
Bey-Temsamani, A., Engels, M., Motten, A., Vandenplas, S. and Ompusunggu, AP., Condition-based maintenance for OEM’s by application of data mining and prediction techniques
Chen, T., Xu, XL, Wang, SH and Deng, SP., The Construction and Application of Remote Monitoring and Diagnosis Platform for Large Flue Gas Turbine Unit
Emmanouilidis, C., Labib, A, Franlund, J., Dontsiou, M, Elina, L., Borcos, M., iLearn2Main: an e-learning system for maintenance management training
Gontarz, S. and Radkowski, S., Shape of specimen impact on interaction between earth and eigenmagnetic fields during the tension test
Gu, DS, Kim, BS, Lim, JI, Bae, YC, Lee, WR, Kim, HS, Comparison of vibration analysis with different modeling method of a rotating shaft system
Franlund, J, Training and Certification of Maintenance and Asset Management Professionals
Kiassat, C and Safaei, N., Integrating human reliability analysis into a comprehensive maintenance optimization strategy
Kim, BS, Gu, DS, Kim, JG, Kim, YC and Cho, BK, Rolling element bearing fault detection using acoustic emission signal analyzed by envelope analysis with discrete wavelet transform
Macchi, M., Ierace, S., Education in Industrial Maintenance Management: Feedback From Italian Experience
Mazhar, MI, Salman, M and Howard, I, Assessing the reliability of system modules used in multiple life cycles
Kim, H-E, Tan, ACC, Mathew, J, Kim, EYH, Cho, BK, Prognosis of Bearing Failure Based on Health State Estimation
Starr, A. Bevis, K., The role of education in industrial maintenance: the pathway to a sustainable future
Radkowski S, Gumiński R, Impact of vibroacoustic diagnostics on certainty of reliability assessment
Xu XL, Chen T, Wang SH, Research on Data-Based Nonlinear Fault Prediction Methods in Multi-Transform Domains for Electromechanical Equipment
Maintenance for Sustainable Manufacturing - 4 - M4SM Project kick-off (1st M4SM Meeting)
Session 18: Emerging Technologies in EAM
Session 19: Condition Monitoring, Diagnostics & Prognostics
Chair: Marco Garetti
Chair: Bo-Suk Yang
Chair: Andrew Starr
Espíndola, D., Pereira, CE, Pinho, M., IM:MR - A tool for integration of data from different formats
Jasiński M., Radkowski S., Use of bispectral-based fault detection method in the vibroacoustic diagnosis of the gearbox
Lumentut, MF and Howard, IM, Theoretical study of piezoelectric bimorph beams with two input base-motion for power harvesting
Maszak, J., Local meshing plane as a source of diagnostic information for monitoring the evolution of gear faults
Shim, M-C, Yang, B-S, Kong, Y-M, Kim, WC, Wireless condition monitoring system for large vessels
Yang, SW, Widodo, A, Caesarendra, W, Oh, JS, Shim, MC, Kim, SJ, Yang, BS and Lee, WH, Support vector machine and discrete wavelet transform for strip rupture detection based on transient current signal
Smit, JJ, Djairam, D., Zuang, Q., Emerging Technologies and Embedded Intelligence in Future Power Systems
Ierace, S., Garetti, M. and Cristaldi, L., Electric Signature Analysis as a cheap diagnostic and prognostic tool
1610-1630
Coffee break
1630-1750
Sessions
1915
Departure for Gala Dinner
1945
Conference Dinner - Athens Yacht Club, sponsored by IMS
Keynote address 5: Panos Zachariadis, Atlantic Bulk Carriers
Construction, operation and lifecycle cost of ships - Realising that before maintenance comes maintainability
Chair: N. Nassiopoulos
Program Wednesday 30 September 2009
0845
Delegate Arrival
0900-1300
WORKSHOP: INTEGRATION AND INTEROPERABILITY IN ENGINEERING ASSET MANAGEMENT (EAM)
0900-1020
Sessions
Session 20: e-Maintenance
Session 21: Policy, Regulations, Practices & Standards For Asset Management
Session 22: Condition Monitoring, Diagnostics & Prognostics
Chair: Erkki Jantunen
Chair: Ashraf Labib
Chair: Andy Tan
Baglee, D., The Development of a Mobile e-maintenance system utilizing RFID and PDA Technologies
Haider, A., A Roadmap for information technology governance
Kim, EY., Tan, ACC., Mathew, J. and Yang, B-S., Development of an Online Condition Monitoring System for Slow Speed Machinery
Jantunen, E., Gilabert, E., Emmanouilidis, C. and Adgar, A., e-Maintenance: a means to high overall efficiency
Mathew, A., Purser, M. Ma, L. and Mengel, D., Creating an asset registry for railway electrical traction equipment with open standards
Rgeai, M., Gu, F., Ball, A., Elhaj, M., Ghretli, M. Gearbox Fault Detection Using Spectrum Analysis of the Drive Motor Current Signal
Oyadiji, SO, Qi, S. and Shuttleworth, R., Development of Multiple Cantilevered Piezo Fibre Composite Beams Vibration Energy Harvester for Wireless Sensors
Stapelberg, RF, Corporate Executive Development for Integrated Assets Management in a New Global Economy
Zhu, Z., Oyadiji, SO, and Mekid, S., Design and Implementation of a Dynamic Power Management System for Wireless Sensor Nodes
1020-1040
Coffee break
1040-1200
Sessions
1200-1320
Session 23: Technologies For Asset Data Management, Warehousing & Mining
Session 24: Advanced Maintenance Strategies (RCM,CBM,RBI)
Session 25: Condition Monitoring, Diagnostics & Prognostics
Chair: Matt Barlow
Chair: Yiannis Bakouros
Chair: Ioannis Antoniadis
Grossmann, G, Stumptner, M, Mayer, W, and Barlow, M, A Service oriented architecture for data integration in asset management
Bohoris, G.A. and Kostagiolas, P.A., Inferences on nonparametric methods for the estimation of the reliability function with multiply censored data
Mpinos, CA and Karakatsanis, T., Development of a dynamic maintenance system for electric motor’s failure prognosis
Natarajan, K., Li, J. and Koronios, A., Data mining techniques for data cleaning
Gilabert, E., Gorritxategi, E., Conde, E., Garcia, A., Areitioaurtena, O. and Igartua, A., An advanced maintenance system for polygeneration applications
Gryllias, K.C., Yiakopoulos, C. and Antoniadis, I., Automated diagnostic approaches for defective rolling element bearings using minimal training pattern classification methods
Natarajan, K., Li, J. and Koronios, A., Detecting mis-entered values in large data sets
Kostagiolas, P.A., and Bohoris, G.A., Finite sample behaviour of the Hollander-Proschan goodness of fit with reliability and maintenance data
Pang, K., Cenna, AA, Williams, KC, and Jones, MD, Experimental determination of cutting and deformation energy factors for wear prediction of pneumatic conveying pipeline
Apostolidis, H. (industrial talk), Crisis in Maintenance and Maintenance in Crisis: Opportunities for Maintenance Re-engineering
Yachiku, H., Inoue, R., and Kawai, T., Diagnostic Support Technology by Fusion of Model and Semantic Network
Lunch - sponsored by GEFYRA SA
Program Wednesday 30 September 2009 ... continued
1320-1520
Sessions
Session 26: Workshop for e-training in Maintenance Management (HMS, M4SM, iLearn2Main)
Session 27: Safety, health and risk management in EAM
Session 28: Condition Monitoring, Diagnostics & Prognostics
Chair: Christos Emmanouilidis
Chair: Pantelis Botsaris
Chair: Tadao Kawai
Papathanassiou, N., Emmanouilidis, C., e-Learning in Maintenance Management Training and Competence Assessment: Development and Demonstration
Botsaris, P.N., Naris, A.D., and Gaidajis, G., A Risk Based Inspection (RBI) preventive maintenance programme: a case study
Fumagalli, L., Jantunen, E., Garetti, M. and Macchi, M., Diagnosis for improved maintenance services: Analysis of standards related to Condition Based Maintenance
Maintenance Management: Live & Interactive e-training and e-assessment workshop
Papazoglou, I.A., Aneziris, O.N., Konstandinidou, M., Bellamy, L.J., Damen, M., Assessing occupational risk for contact with moving parts of machines during maintenance
Widodo, A., and Bo-Suk Yang, Machine prognostics based on survival analysis and support vector machine
Skroubelos, G., Accident causes during repair and maintenance activities and managerial measures effectiveness
Younus, AM, Widodo, A and Yang, B-S., Image Histogram Features Based Thermal Image Retrieval to Pattern Recognition of Machine Condition
Training in Maintenance Management Panel Discussion & Evaluation (Jan Franlund, Ashraf Labib, Andrew Starr, Yiannis Bakouros, Cosmas Vamvalis)
Elforjani, M. and Mba, D., Acoustic emissions observed from a naturally degrading slow speed bearing and shaft
Addali, A., Al-lababidi, S., Yeung, H. and Mba, D., Measurement of gas content in two-phase flow with acoustic emission
1520-1540
Coffee break
1540-1625
Congress Closing Keynote Address 6: Panayiotis Papanikolas, GEFYRA SA Maintenance Management of the Rion-Antirion Bridge
1625-1710
Congress Closing Keynote Address 7: Eric Luyer, IBM Corporation Leveraging Smart Asset Management in Engineering and Product Lifecycle Management Chair: Joe Mathew
1710-1730
Closing Remarks - End of WCEAM 2009
1830
Visit to new Acropolis Museum and dinner at Dionysos Restaurant - optional at additional cost
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
COLLABORATIVE DEVELOPMENT OF MAINTENANCE INVESTMENT MANAGEMENT – A CASE STUDY IN PULP AND PAPER INDUSTRY
Jouko Heikkilä a, Toni Ahonen a, Jari Neejärvi b, Jari Ala-Nikkola c
a VTT Technical Research Centre of Finland, P.O. Box 1300, FI-33101 Tampere, Finland; [email protected]
b Myllykoski Paper, Myllykoskentie 1, FI-46800 Anjalankoski, Finland; [email protected]
c ABB Service, Myllykoskentie 1, FI-46800 Anjalankoski, Finland; [email protected]
The interest in purchasing external maintenance services is increasing as a strategic choice, as manufacturing companies focus more and more on their core business. In the case presented in this paper, an external service provider has taken responsibility for the maintenance functions as a strategic partner. A development project was started to support the creation of a profound and functional partnership, with a focus on supporting practical collaboration between the partners. The development concentrated on two activities in which both organisations clearly have important roles: management of maintenance investments and operator involvement in maintenance activities. The idea was to improve collaboration practices through practical collaborative development of these two key activities. Maintenance investment decision making requires functional collaboration between the organisations: the maintenance organisation is responsible for making investment proposals and the production organisation makes the final decisions. Originally, little collaboration was planned in this process, and there was also a demand for improving and clarifying the reasoning behind proposals. To meet this demand, a working group comprising representatives of both organisations and researchers developed commonly agreed criteria, a tool and a process for the preparation of maintenance investment proposals. The collaboration aspect was taken into account in the developed investment process. In this paper, we present the collaborative development process, the resulting maintenance investment management development, and key findings regarding the importance of practical joint development efforts in building trust and collaboration in a business partnership.
Key Words: Maintenance investment, Pulp and paper industry, Decision making, Management
1 INTRODUCTION
Demand for paper products is not expected to grow, especially in Europe. Investments in new production lines will therefore be very rare, and the role of maintenance investment management in the engineering asset management of ageing production lines will increase. A maintenance investment here typically means the replacement of worn equipment, excluding scheduled maintenance and the repair of failures. While scheduled maintenance and repair activities also include replacements of worn equipment, maintenance investments are typically more expensive and require specific planning and decision-making that is substantially based on cost-effectiveness (which is what makes it an investment). Unlike other investments, the main focus in maintenance investments is on restoring the original performance. The payback from a maintenance investment comes mainly from decreasing (corrective) maintenance and unavailability costs. Outsourcing of services in industry is a common trend that started in the 1960s and has increased significantly over the last 20 years (as summarised by, for example, Hendry (1995), Bailey et al. (2002), and Huai and Cui (2005)). Maintenance is one of the most commonly outsourced services (Bailey et al. 2002, Tarakci et al. 2009). The outsourcing trend is still going strong, even though numerous and severe drawbacks have been reported (Hendry 1995, anon. 2005). In the Finnish paper industry, nearly all activities at a plant have traditionally been carried out by the in-house maintenance organisation. In the competition for market shares, paper companies have sought effectiveness by focusing on their core business. Thus, companies are looking to outsource functions that are not seen as their core business. The outsourcing trend started with activities such as canteen and cleaning services, and now the maintenance services are
being outsourced. After such a long tradition of in-house operations, outsourcing is likely to raise fears and opposition in the organisation, as well as practical challenges in the partners' roles, distribution of work and cooperation. Improving the performance of the service, focusing management activities on the core business and reducing costs are the most common motives for outsourcing. However, in many cases it has been difficult to reach these goals because of unexpected management needs and "hidden costs" related to the cultural and cooperation aspects of the organisations (Hendry 1995, Bailey et al. 2002, anon. 2005, Huai & Cui 2005). In the case presented in this paper, the maintenance services were outsourced at the beginning of 2007. This started the build-up of a new form of cooperation between a paper company and a maintenance company. A development project was started to support managerial and practical cooperation. The project focused on two practical issues in cooperation: 1) the maintenance investment management process and 2) development of the role of operators in maintenance and development. In this paper, the first task and its results are presented.
2 HOW THE MAINTENANCE INVESTMENT MANAGEMENT WAS DEVELOPED
The initial state at the beginning of the development project was that the agreement between the paper company and the maintenance company had been signed and the agreed maintenance operations had started. Most of the maintenance staff remained, but some changes were made in maintenance management. Maintenance tasks and responsibilities were defined in the agreement. One of the agreed tasks allocated to the maintenance organisation was to prepare maintenance investment proposals. The proposals are then evaluated and the investment decisions are made by the paper company. A need to improve and clarify the reasoning for investments had already been identified, and the two-organisation proposal-decision model emphasised this need.
Too often the proposed reason for a maintenance investment was only that "it has to be done", which makes evaluating the necessity and profitability of the proposal quite difficult. Initially it was assumed – especially on the paper company side – that the maintenance company would prepare the maintenance investment proposals independently. Quite soon it became evident that at least some cooperation would be useful or even required. There were two reasons for this need: 1) the novelty of the maintenance organisation at the plant and 2) the need to coordinate maintenance investments and development investments. Even though most of the employees in the new maintenance organisation had already worked in the old organisation, the organisation itself and many of its managers were new at the plant. In addition, the existing maintenance and equipment history documentation was not complete enough to cover all the history knowledge held by the experienced personnel. A further challenge was the change of the maintenance information system. Several activities have been carried out to improve the maintenance documentation in order to make it a more reliable and complete basis for maintenance investment planning. Meanwhile, experienced production and maintenance personnel have an important additional role in producing and preparing information. In the near future, the maintenance history data will hopefully be more usable for maintenance investment planning, which will reduce the need for cooperation in information collection for this specific purpose. In some cases there are connections and overlaps between maintenance investments and development investments: the target of a planned maintenance investment may also be part of a planned development investment, or a planned development investment may set new requirements that are worth taking into account in maintenance investments.
Since, in this case, the preparation of maintenance investments and development investments has been separated into different organisations, sufficiently frequent communication between these organisations must be specifically planned and agreed. Coordination between development investments and maintenance investments would have taken place later in the decision-making process anyway, but coordinating already at the planning stage saves preparation work and enables better results. Since improving collaboration between the two organisations was the basic objective of the project, a collaborative process was chosen for developing maintenance investment preparation. The collaborative development process meant that representatives of both organisations together developed and agreed on the method to be used, starting from the criteria for investment selection and including all important aspects, such as the information to be collected, the means for information collection, cooperation related to information handling, responsibilities, and the annual schedule for the phases of preparation. External consultants from VTT guided the development process and supported it with their knowledge. The development consisted of 11 workgroup meetings during one year and additional development work between the meetings. The development could have been done as expert work by the consultants or by either of the organisations alone. Possibly, in that case, the developed method would have been more sophisticated and the development more efficient (up to the method document). However, the equality-based collaborative development process aimed to ensure an undisturbed start-up and sustainable use of the new practice by dispelling suspicions and by constructing collaboration between the two organisations in practice. The developed method is currently being implemented at the plant, which still involves some learning and finalising of practices.
The final success of the work remains to be seen.
3 THE DEVELOPED MAINTENANCE INVESTMENT IDENTIFICATION AND PROPOSAL PROCESS
The development work in this case focused on the first phase of the maintenance investment management process – the phase covering the tasks from the collection of information to the preparation of investment proposals. The second phase – handling of the proposals and the final decision making – was outside the scope of this case. The criteria to be used in investment candidate identification, candidate selection and final decision making were carefully selected. From the beginning it was clear to all parties that the economic criteria – namely the profitability of an investment – would be dominant in most cases. Production loss and unavailability-related costs in particular would be important criteria, while maintenance costs could be the triggering criterion in some cases. The opportunity to improve system performance (above the original level) is never a main criterion for a maintenance investment, but it may be an additional factor supporting the investment. The improvement may be related to production capacity, occupational or environmental safety, or quality. Plans for development investments should always be checked and taken into account when maintenance investments are prepared. Capability to invest may be a restricting factor in some situations; it may be restricted by financial, resource or production reasons. In practice, for a single maintenance investment, capability to invest is not an important issue, since maintenance investments are carried out within an annual budget. When proposals for maintenance investments are prepared, it should also be checked whether other options would be possible and more profitable. These other options – instead of a maintenance investment – may be an improvement of maintenance procedures or a development investment. One of the main objectives of the development was to improve information collection as a basis for identifying potential maintenance investments.
Collaboration and the related information exchange are known to be relevant from the integrated perspective of production and maintenance planning. For example, Sloan and Shanthikumar (2000) studied combined production and maintenance models and concluded that combined models yield a significantly greater (25 %) reward than traditional methods. An important source of information is the maintenance management information system. However, closer examination revealed that the failure and maintenance history data collected in the system was not complete and detailed enough to be used as the only information source for identifying potential maintenance investments. One specific drawback was that, even though very extensive downtime data existed in the production management system, it could not be automatically linked to the failure and maintenance data in the maintenance information system. Thus, sufficiently detailed information on how much production loss each failure caused was not easily available. Possibilities to link the two information systems are being examined, and management activities have been carried out to improve the maintenance and failure documentation. The quality of maintenance data already improved during the project. In addition to the general improvements in maintenance data collection mentioned above, an annual process was developed (figure 1) that focuses specifically on collecting and analysing maintenance investment related information. At the end of the process, the maintenance investment proposals are prepared. The process consists of three main phases: 1) identification and selection of potential maintenance investments, 2) profitability assessment based on a cost-benefit analysis, and 3) decision making and proposal finalisation. By making the preparation of maintenance investment proposals a year-long – or actually a continuous – process, better quality of proposals was sought.
[Figure 1 in the original is an annual-calendar diagram; only its content is summarised here.] Continuous tasks (responsible: supervisor and local service manager): identification of potential maintenance investments in morning meetings, Thursday meetings and root cause analyses, on the basis of incidents causing downtime and maintenance costs, with a rough estimate of investment benefits and costs. Periodical tasks: quarterly examinations of downtime information (service manager and local service manager); profitability assessment, proposals for major investments, selection of potential minor investments and examination of non-indicated risks (service and production managers); preparation of investment proposals in August (local service manager).
Figure 1. Annual schedule of preparation of maintenance investment proposals
The basis of the process is in the daily and weekly maintenance and production meetings. In these meetings it is frequently checked whether anything has happened that might indicate a need for a maintenance investment. These potential targets for maintenance investments are documented in a formal common inventory (database). The documentation in the inventory should include a note on why the investment should be done (what should be improved) and a coarse cost-benefit evaluation. An additional information source for potential maintenance investments is the root cause analyses, which are carried out for any failure or disturbance that has caused at least two hours of downtime in production. An increased number of failures and an increased need for maintenance are one kind of indication of a required maintenance investment. However, not all ageing equipment shows such clear symptoms of replacement need. Such equipment requires regular risk assessment as a means of identifying maintenance investment needs. The risk assessment is based on the criticality classification of equipment: the criticality in relation to production and safety has been defined for each device in the plant. In the risk assessment, the main factors to be examined are:
- equipment age compared to the typical age of such equipment
- changes in the environment, use or functional requirements for the equipment
- changes in the availability of spare parts and services.
Based on these factors, it is evaluated in which year the replacement will most probably be required. The evaluated year of failure relates to the likelihood part of the risk assessment. The consequence aspect is taken into account by comparing planned and unplanned replacement cases. The difference between the costs of planned and unplanned cases affects the preferable timing of replacement: if the unplanned replacement is much more expensive, the investment in replacement is likely to be profitable even long before the evaluated replacement date. On the other hand, if the difference in costs is small, a risk can be taken by postponing the replacement even past the evaluated date. On the basis of the risk assessment, potential targets for maintenance investment are added to the potential maintenance investment inventory. The risk assessment is carried out annually. The inventory of potential maintenance investments is examined four times a year. In this examination, the information related to these potential investments is updated and complemented. This quarterly examination serves the follow-up. In the second examination of the year, the cost-benefit analysis is carried out. On the basis of the analysis, budget proposals for major investments are prepared and forwarded to decision making, and other (minor) maintenance investments are selected for proposal finalisation. In the third annual examination, the maintenance investment proposals are finalised.
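The planned-versus-unplanned cost comparison described above can be illustrated with a small sketch. This is not the plant's actual tool; the cost figures and the threshold ratios are hypothetical assumptions chosen only to make the timing logic concrete.

```python
# Illustrative sketch (not the authors' tool): choosing whether to schedule a
# replacement before or after the evaluated failure year, based on how much
# more an unplanned (failure-driven) replacement would cost than a planned one.
# All thresholds and figures are hypothetical.

def preferred_replacement_year(evaluated_failure_year: int,
                               planned_cost: float,
                               unplanned_cost: float,
                               risk_margin_years: int = 2) -> int:
    """If unplanned replacement is much more expensive, replace well before
    the evaluated failure year; if the cost difference is small, the risk of
    postponing past that year may be acceptable."""
    ratio = unplanned_cost / planned_cost
    if ratio >= 2.0:        # failure would be costly: replace early
        return evaluated_failure_year - risk_margin_years
    if ratio <= 1.2:        # little to lose: postponing is tolerable
        return evaluated_failure_year + risk_margin_years
    return evaluated_failure_year

# A device whose replacement is evaluated to be needed around 2013:
print(preferred_replacement_year(2013, planned_cost=50_000,
                                 unplanned_cost=180_000))   # -> 2011
```

The threshold values would in practice be replaced by the commonly agreed criteria of the two organisations.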
4 PROFITABILITY ASSESSMENT BASED ON A COST-BENEFIT ANALYSIS
The practical approach and tool developed to support profitability assessment in this context was based on an LCP calculation model. The developed model is a combination of the practices utilised at the plant earlier, cost-based failure analyses (e.g. Rhee & Ishii, 2003) and LCP principles for a dynamic environment (Ahlmann, 2002). So far, profitability assessments have mostly been based on information about the investment payback time, which was, however, found to be too one-dimensional a measure if applied alone. In practice, utilising payback time as the only criterion does not take into account the profits over the lifecycle but favours investments with a short payback time. Thus, the target was that the developed approach and related tool should produce information both on the investment decision's effects on the time periods for which capital is invested (payback time) and on the anticipated lifecycle profits of individual investment targets. However, economic fluctuation and other key features of the dynamic business environment can affect which criteria are emphasised. In capital-intensive industries, production downtime typically generates most of the total costs related to equipment failures. For this reason, the main driver for implementing a maintenance investment often comes from the resulting system downtime and related costs, as in this case. However, the more comprehensive list of cost items used in our profitability assessment model is as follows:
- unavailability costs (production downtime)
- maintenance costs
  o failure-based maintenance costs
  o preventive maintenance costs
- energy consumption
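The comparative logic behind these cost items can be sketched briefly: annual costs are estimated for each scenario and the difference is the annual benefit attributed to the investment. The function and all figures below are illustrative assumptions, not the plant's actual model or data.

```python
# Hedged sketch of the comparative cost logic: annual costs are estimated for
# scenario (a) no investment and (b) investment made next year, and the
# difference is the annual benefit attributed to the investment.
# Cost figures are hypothetical.

def annual_cost(unavailability: float, failure_maintenance: float,
                preventive_maintenance: float, energy: float) -> float:
    """Sum of the cost items listed in the profitability assessment model."""
    return unavailability + failure_maintenance + preventive_maintenance + energy

no_investment = annual_cost(unavailability=120_000, failure_maintenance=40_000,
                            preventive_maintenance=15_000, energy=30_000)
with_investment = annual_cost(unavailability=20_000, failure_maintenance=8_000,
                              preventive_maintenance=12_000, energy=27_000)

annual_benefit = no_investment - with_investment
print(annual_benefit)   # -> 138000
```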
By evaluating the assumed change in the aforementioned cost items due to the considered investment, one can calculate the cost effects of the investment. Thus, our practical LCP-based calculation model is a comparative analysis tool for assessing future costs in two different scenarios, where a) no investment is made or b) the investment is made during the next year. In addition, the costs of implementing the investment are taken into consideration in the investment calculations. Implementation costs are categorised as follows:
- investment purchase costs
- installation-related unavailability costs
- other installation-related costs and costs generated by the need to modify surrounding assets
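A payback-time estimate then follows directly from these implementation cost categories and the annual benefit of the scenario comparison. This is a generic sketch under hypothetical figures, not the tool's actual formula.

```python
# Illustrative payback-time calculation: implementation costs (purchase,
# installation-related unavailability, other installation work) divided by
# the annual benefit estimated from the scenario comparison.
# All figures are hypothetical.

def payback_time_years(purchase: float, installation_unavailability: float,
                       other_installation: float, annual_benefit: float) -> float:
    """Years needed for the annual benefit to cover the implementation costs."""
    implementation_cost = purchase + installation_unavailability + other_installation
    return implementation_cost / annual_benefit

print(round(payback_time_years(purchase=150_000,
                               installation_unavailability=60_000,
                               other_installation=25_000,
                               annual_benefit=138_000), 2))   # -> 1.7
```

As the paper notes, such a payback figure is too one-dimensional on its own and is complemented by lifecycle-profit effects and qualitative information.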
The main results produced by our practical investment evaluation tool are the payback time of the investment and the investment's effects on the lifecycle costs of the considered target. Qualitative information on the target is also given to support decision-making: the description of the target, the identified need for the investment, and additional benefits related to the investment are described qualitatively. This complements the quantitative analysis, which is purely focused on the main drivers of a typical maintenance investment: unavailability, maintenance and energy costs. Thus, the following aspects of the potential investment target are analysed:
- environmental and occupational safety
- the target's capability to meet future demands (e.g. increase of capacity/speed)
- improvement of product quality
- synergy potential in maintenance.
From the list of potential investment targets, the most promising candidates are chosen for profitability assessment, based on commonly agreed preliminary criteria with both qualitative and quantitative aspects. The number of candidates at this phase should be larger than the number of investment targets typically funded within the considered maintenance investment budget. The profitability assessment phase results in a list of maintenance investment proposals for the next year, with both quantitative and qualitative descriptions of profitability and investment benefits, as well as other key features affecting decision-making.
5 CONCLUSIONS
In this case study, maintenance investment management was developed through collaborative development of operations models and practices, rather than as a purely technical method and tool development task. The objective was to develop the best practical means for investment management rather than the theoretically best means. The development was based on broad knowledge of maintenance investment management methodologies. The (theoretically) optimal solutions were modified for local circumstances and requirements, of which the newly started cooperation between the two companies was an important one. The aim was a practically useful and successful process and tool; how well this succeeded remains to be seen.
6 REFERENCES
1. Ahlmann, H.R. (2002) From Traditional Practice to the New Understanding: The Significance of Life Cycle Profit Concept in the Management of Industrial Enterprises. IFRIMmmm Conference, Växjö, Sweden, 6-7 May 2002. 16 pp.
2. anon. (2005) Calling a Change in the Outsourcing Market, The Realities for the World's Largest Organisations. Deloitte Development LLC.
3. Bailey, W., Masson, R. & Raeside, R. (2002) Outsourcing in Edinburgh and the Lothians. European Journal of Purchasing & Supply Management, 8, 83-95.
4. Hendry, J. (1995) Culture, Community and Networks: The Hidden Cost of Outsourcing. European Management Journal, 13(2), 193-200.
5. Huai, J. & Cui, N. (2005) Maintenance Outsourcing in the Electric Power Industry in China. In: Proceedings of the International Conference on Services Systems and Services Management, 13-15 June 2005. IEEE, 2, 1340-1345.
6. Rhee, S.J. & Ishii, K. (2003) Using Cost Based FMEA to Enhance Reliability and Serviceability. Advanced Engineering Informatics, 17(3-4), July-October 2003, 179-188.
7. Sloan, T.W. & Shanthikumar, G. (2000) Combined production and maintenance scheduling for a multiple-product, single-machine production system. Production and Operations Management, 9(4), Winter 2000.
8. Tarakci, H., Tang, K. & Teyarachakul, S. (2009) Learning effects on maintenance outsourcing. European Journal of Operational Research, 192, 138-150.
A MODEL FOR MORE ACCURATE MAINTENANCE DECISIONS (MMAMDEC)
Basim Al-Najjar a and Renato Ciganovic b
a Terotechnology, School for Technology and Design, Växjö University, Sweden; [email protected]
b Terotechnology, School for Technology and Design, Växjö University, Sweden; [email protected]
When CM technology is used to assess the state of a component and to plan maintenance actions, it is usual to apply predetermined levels for warnings and replacements. The replacement of a damaged component is usually done at a lower or higher level than the predetermined one, and both cases mean losses, because the probability of making the replacement exactly at the predetermined level is negligibly small. The accuracy of the assessment of a component's condition has a large technical and economic impact on the output of the machine and the production process, and consequently on company profitability and competitiveness. Higher accuracy in assessing the condition of a component yields a higher probability of avoiding failures and of planning maintenance actions at low cost. In this paper, techniques for assessing the state of a component using both mechanistic and statistical approaches are considered. The paper also applies the Cumulative Sum (CUSUM) chart for identifying the time of damage initiation and for reducing false alarms. Techniques for assessing the probability of failure of a component and its residual life, and for predicting the vibration level at the next planned measuring opportunity or planned stoppage, are introduced, discussed, computerised and tested. The problem addressed is: how is it possible to increase the accuracy of assessing the condition of a component? The major result achieved is the development of a model for more accurate assessment of the condition of a component/equipment by combining different approaches. The main conclusion is that, by applying the model, it is possible to enhance the accuracy of the assessment of the condition of a component/equipment, and consequently of the maintenance decision, since the integrated model provides comprehensive and relevant information on one platform.
Key Words: Integrated Approach for Maintenance Decisions, Effective Maintenance Decision, Prediction of Vibration Level, Probability of Failure, Residual Lifetime, Vibration Monitoring, Total Time on Test.
1 INTRODUCTION
Today, companies try to reduce their production costs to gain a competitive advantage in the market. Maintenance plays a key role in reducing production cost by enhancing availability and extending the life of production assets (Wu et al. 2007). Using condition monitoring (CM) technologies, production security and operating safety increase because the probability of detecting and treating problems increases (Wang 2002, Al-Najjar 2004, Wu et al. 2007). Furthermore, this results in much lower operational costs (Xiaodong et al. 2005). According to White (1994), it also improves safety for the workforce because the probability of a sudden machine breakdown decreases. Lack of maintenance can also have a huge negative impact on the surrounding environment through, for example, radioactive radiation, oil leakage or explosions. This is why maintenance can reduce safety hazards for the workforce and make them feel safer when working with the machines. Furthermore, White (1994) states that in many industries production has increased by around two to ten percent when condition-based maintenance (CBM) is applied. It is important to keep the machines in good condition, plan maintenance according to need, and try to avoid failures and disturbances. Without these enhancements, a company may face difficulties in maintaining and improving its customer service level, production quality, personnel safety, and profitability and competitiveness. When using CM technology, it is quite common to use vibration signal analysis for the detection of machine faults (Samanta and Al-Balushi 2003). Detection is made by comparing the vibration signals of a machine operating under normal conditions with those of a machine running under faulty conditions. Noise, randomness and deterioration may cause variation in the vibration level (Al-Najjar 2001, Samanta and Al-Balushi 2003).
Random fluctuation of the vibration measurements may occur due to uncontrolled factors that are independent of the component's condition. These can obstruct the detection of the component's condition (Samanta and Al-Balushi 2003).
When using a CBM system, there are analytical tools for interpreting the signals and assessing the condition of the component (Al-Najjar 2001, 2003). If the prediction of the vibration level in the near future utilises previous and current measurements as well as current and future operating conditions, the predicted value may support and enhance the accuracy of maintenance decisions. But even if this provides reliable information about the condition of a component, we do not know with certainty when the component will break (Al-Najjar 1997). Replacing components on the basis of an unreliable assessment of their condition leads to two types of losses: either a large part of the residual life is lost, or the component fails in service.
2 CUSUM CHART
Variation in the vibration level can occur due to noise, randomness in the vibration and a wide spectrum of deterioration causes (Al-Najjar 2001, Samanta and Al-Balushi 2003). The variation in the vibration level may lead to over- or under-estimation of the component/equipment condition. The economic losses that arise from false signals are significant, especially when the downtime cost is high (see Al-Najjar 2003). According to Al-Najjar (1997), the cumulative sum (CUSUM) chart of the condition measurements is a better indicator of a potential failure than the vibration measurement itself when the variation in the vibration level is appreciably large. The graph of CUSUM values shows the behaviour of the cumulative sum of the vibration level with a reduced probability of false alarms. The CUSUM chart is obtained by plotting the cumulative sum of deviations from a target value over the monitoring period with respect to a predetermined level (see Al-Najjar 1997).
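The CUSUM construction described above can be sketched in a few lines. This is a minimal illustration of the general technique (cumulative sum of deviations from a target, compared with a predetermined decision level), not the authors' exact implementation; the readings and levels are hypothetical.

```python
# Minimal CUSUM sketch: cumulative sum of deviations of vibration readings
# from a target (normal) mean level; an alarm is raised when the sum crosses
# a predetermined decision level. Readings and levels are hypothetical.

def cusum(readings, target, decision_level):
    """Return the CUSUM series and the index of the first alarm (or None)."""
    s, series, alarm = 0.0, [], None
    for i, x in enumerate(readings):
        s += x - target           # deviation from the normal mean level
        series.append(s)
        if alarm is None and s > decision_level:
            alarm = i
    return series, alarm

# Noisy readings around a mean of 2.0 mm/s, with damage initiating midway:
readings = [2.1, 1.9, 2.0, 2.2, 2.6, 2.9, 3.1, 3.4]
series, alarm = cusum(readings, target=2.0, decision_level=1.5)
print(alarm)   # -> 5
```

Because random fluctuations around the target largely cancel out in the sum, isolated noisy readings are less likely to trigger a false alarm than in a raw-level chart.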
3 PREDICTION OF THE VIBRATION LEVEL
Effective usage of Vibration-based Maintenance (VBM) within the TQMain concept for rotating machines provides the user with a long lead time to start planning cost-effective maintenance actions to avoid failures. This can be achieved by acting at an early stage in order to keep the probability of failure of a component low until a condition-based replacement is done, Al-Najjar (2000). In Al-Najjar (2001) and Al-Najjar and Alsyouf (2003), a model for predicting the vibration level in the near future is developed and tested, respectively. The tool utilises previous and current measurements and operating conditions to predict the future vibration level. A component's mean effective life can be prolonged if the operating conditions have been assessed properly, Al-Najjar (2000). When assessing the condition of a component, deterministic, probabilistic or combined approaches can be used. Deterministic methods include, for example, issues related to machine function, failure analysis and diagnostics, while probabilistic methods include assessment based on statistical tools, Al-Najjar (2007B). More specifically, this means prediction of the vibration level, assessment of the time to maintenance action and the probability of failure of the component. According to Al-Najjar (2000), the damage initiation and development phase represents more than half the total usable life of a component/equipment. The past history of a component, its current status and its operating and environmental conditions during the near future will affect the probability of failure, ibid. During the normal component state, i.e. when no damage is initiated, the CM parameter value, e.g. vibration level, fluctuates around its mean level xo, Al-Najjar (1997). Denote by xp the level at which the vibration approaches potential failure (initiation and development of damage), and by xr the replacement level.
When the damage under development has been confirmed, the CM parameter value as a function of time, x(t), is assumed to be non-decreasing, while the CM parameter level is usually considered stationary during the interval prior to the initiation of damage, see Herraty (1993) and Al-Najjar and Alsyouf (2003). According to Al-Najjar (1997), it is often difficult to tell with certainty that damage development has begun, especially when the number of CM measurements, e.g. vibration, is very small. The model shown in Eq. (1) is used for predicting the vibration level during the period until the next measurement or planned stoppage, see Al-Najjar (2001).

Yi+1 = Xi + a * exp(bi * Ti+1 * Zi^ci) + Ei        (1)
Yi+1 is the dependent variable representing the predicted value of the CM level at the next planned measuring time. The model expressed in Eq. (1) consists of three independent variables (X, Z and T) and three parameters (a, b and c). Ti+1 is the elapsed time since damage initiation was confirmed. The current CM level is denoted by Xi. Zi is the deterioration factor, which in its turn is a function of the current and anticipated load and the previous deterioration rate (Z = dx` * Lf/Lc). The parameter a is the gradient (slope) by which the value of the CM parameter increased from the moment it started to deviate from its normal state xo due to damage initiation until it was detected at the potential failure level xp. The parameters bi and ci are the model's non-linear constants. The model error Ei is assumed to be independent and identically normally distributed with zero mean and constant variance, N(0, s). Finally, i = 1, 2, …, n is the index of measuring opportunities after damage initiation.
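Ignoring the zero-mean error term, Eq. (1) can be evaluated directly. All numerical values below are illustrative only; in the paper the parameters a, b and c are estimated from the measured data.

```python
import math

def predict_vibration(x_i, a, b_i, c_i, t_next, z_i):
    """Predicted CM level at the next measuring opportunity, Eq. (1),
    without the error term E_i:

        Y_{i+1} = X_i + a * exp(b_i * T_{i+1} * Z_i**c_i)
    """
    return x_i + a * math.exp(b_i * t_next * z_i ** c_i)

# Example: current level 1.2 mm/s, slope a of 0.05 mm/s since damage
# initiation, assumed constants b and c and deterioration factor Z.
y_next = predict_vibration(x_i=1.2, a=0.05, b_i=0.3, c_i=1.5,
                           t_next=2.0, z_i=1.1)
```

Note how the exponential term grows with both the elapsed time T and the deterioration factor Z, so a higher anticipated load (larger Z) pushes the predicted level up for the same time horizon.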
4 ASSESSMENT OF PROBABILITY OF FAILURE AND RESIDUAL LIFETIME
The second technique uses failure and condition-based data for the same or similar components in a graphical Generalised Total Time on Test plot (GTTT-plot), Al-Najjar (2003). This technique was developed to assess the probability of failure of a component and its residual lifetime on demand. Such assessment is necessary to effectively enhance the information describing the condition of a component, e.g. a bearing, at any time or after each vibration measurement. This is especially important when the value of the CM parameter, e.g. vibration level, is increasing rapidly, or when it is relatively high and there is a risk of faster deterioration during the time until the next measurement, see Al-Najjar (2003). The GTTT-plot is obtained by plotting Ui = (Ti/Tn) = (Ti/n)/(Tn/n) on the y-axis versus ni/n on the x-axis. Here Ui indicates the average exhausted life length of the bearings until their vibration levels approached or exceeded x(i), divided by the average time generated by n components until their levels equalled or exceeded x(n), for x(i) < x(n) and i = 1, …, n. Consequently, ni/n gives an estimate of the probability of failure occurrence, where ni is the number of replacements that have been done until x(i) is exceeded, n represents the total number of bearings under consideration, and xr is the replacement vibration level, see Al-Najjar (2003).
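The coordinates (ni/n, Ui) can be computed as sketched below. This sketch follows the classical Total Time on Test construction from ordered life data; the generalised variant in Al-Najjar (2003) additionally orders the data by vibration level, which is not reproduced here. The lifetimes are invented for the example.

```python
def gttt_points(lifetimes):
    """(x, y) coordinates of a TTT-plot from complete life data.

    y_i = U_i = T_i / T_n, where T_i is the total time on test
    accumulated by all n units up to the i-th ordered lifetime,
    and x_i = i / n estimates the probability of failure occurrence.
    """
    t = sorted(lifetimes)
    n = len(t)
    prev, total = 0.0, 0.0
    ttt = []
    for i, ti in enumerate(t, start=1):
        # time on test contributed between the (i-1)-th and i-th event:
        # (n - i + 1) units are still running during this interval.
        total += (n - i + 1) * (ti - prev)
        prev = ti
        ttt.append(total)
    tn = ttt[-1]
    return [(i / n, ttt[i - 1] / tn) for i in range(1, n + 1)]

# Hypothetical bearing lifetimes (hours) until replacement level.
points = gttt_points([200, 350, 500, 800])
```

The resulting curve always ends at (1, 1); its shape relative to the diagonal indicates whether the failure rate is increasing or decreasing, which is what makes the plot useful for judging residual life.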
5 MODEL DEVELOPMENT
Prediction of the vibration level in the near future should not be done before damage initiation has been confirmed, in order to avoid unnecessary work due to false alarms that may arise because of randomness. This is where the CUSUM chart is useful, due to its ability to identify the moment when damage has initiated, see Al-Najjar (1997). Assessment of the probability of failure of a component and its residual lifetime does not by itself consider the current and future state of the component in operation, which is the reason for combining it with the tool for prediction of the vibration level and the CUSUM chart. The probability of failure and residual life help to enhance the information about the component by considering its age in comparison with historical data on the same or similar components. The CUSUM chart is computerised and its user-interface is shown in Fig. 2. The CUSUM chart has been developed in Microsoft Excel, since it is practical software for creating graphs. Fig. 2 displays the plot area intended for the accumulated sum of the deviations between each vibration measurement and the reference value (k). The x-axis represents the measurement number and the y-axis represents the CUSUM value. In this paper we apply the CUSUM chart technique in the same way as in Al-Najjar (1997). Therefore the reference level (k) is chosen midway between xo and xp. Furthermore, the reference level is checked against the mean vibration level of the measurements, and the CUSUM chart is only plotted when the accumulated value of the vibration measurements is equal to or less than zero. Plotting of the CUSUM is stopped when the measurements decline below the k level, because the process is then assumed to be in control.
Development of the PreVib, ProFail and ResLife software prototypes: For easy use of the model, and in order to achieve reliable and faster prediction of the vibration level, new software based on Eq. (1), called PreVib, was developed. In addition, ProFail & ResLife was developed for assessing the probability of failure of a component and its residual lifetime. The user-interface of PreVib is divided into two halves, Fig. 3: the left half represents input data for prediction and the right half presents the result in the form of a graph. The input data comprise database and non-database data, shown in the grey and white boxes, respectively. When a prediction is to be made, the segment (machine), asset (component, e.g. a rolling element bearing) and location (direction of the vibration measurements) should be specified. We should also specify in "Limit measurement" how many measurements to consider if we focus only on part of the measurements; otherwise "None" is used to consider all measurements in the prediction. The measurements can then be downloaded by pressing "Load", which uploads the data from a MIMOSA database. The "Mean vibration level (xo)", "Prediction time period" and "Near future load/present load" should all be specified before clicking "Predict". The desired future "Prediction time period" can be specified in, e.g., seconds, hours, days, weeks or months. When "Predict" is clicked, the predicted vibration level and the associated date are determined and plotted on the diagram on the right-hand side of the user-interface. The same procedure can be repeated any time the user needs to predict the vibration level, for example after the next measurement. The right half of the user-interface presents a graph containing two plots: the blue one representing the actual vibration level values and the red one showing the predicted vibration level values. The y-axis is the vibration level in, e.g., mm/sec and the x-axis is the calendar time of measurements and predictions. The graph also contains two dashed lines representing two different levels in the life of a mechanical component, i.e. the mean vibration level (xo) and the potential failure level (xp). These levels are determined and set based on data from identical components and are automatically retrieved when component data are uploaded. Some of the initial measurements belonging to the first phase are not shown in the graph, even though they have been considered when estimating the model's constants b and c. The first prediction, evidently, should be done after confirming damage initiation, i.e. when the vibration level exceeds or is close to xp, because as long as it fluctuates around xo no damage has been initiated and there is no need to predict the future level. To be able to predict the future vibration level, at least three measurements are needed. This is why some of the measurements belonging to the first phase of the component life are used. Observe that the first two predictions are sometimes special cases due to the use of measurements below the potential failure level to assess the model's parameters b and c. The reason is that we want to predict the future state as soon as we have confirmed the initiation of damage, instead of waiting for at least three measurements exceeding xp. Waiting could also cause uncertainty in the maintenance decisions. Also, because the deterioration process is stochastic, the vibration level
can change randomly over time. Therefore, some of the level values may reach a higher level than they should, i.e. exceed xp, which may be irrelevant to the severity of the deterioration. For each prediction, all previous measurements are used for estimating the model's parameters (b and c) and thus the predicted vibration level itself. The ProFail & ResLife software module is supplementary to the PreVib module for enhancing the accuracy of maintenance decisions; the modules can be used jointly or independently. As for PreVib, the ProFail & ResLife user-interface is also divided into two halves, Fig. 4. The left half represents the input data required for the assessment, and in the right half the result is shown in the form of a plot (graph). When assessing the probability of failure and residual lifetime of a component, the user must specify the "Segment" (machine), "Asset" (component, e.g. a rolling element bearing in this case) and "Assessment time point", i.e. the time at which the assessment should be done, followed by clicking the assess-button. The program then uploads the table of lifetime data from the MIMOSA database and presents the graph on the right half of the user-interface. The y-axis of the graph shows the proportion of the average exhausted lifetime and the x-axis shows the probability of failure for the analysed component. The "Probability of Failure" of the component in question and its "Residual lifetime" are then assessed and displayed on the left-hand side of the user-interface. The values can be inspected using the cross-hair pointer in the graph. If a new assessment is performed after the current assessment time point, the cross-hair pointer moves forward on the curve. Each point on the graph represents one of the component lifetimes in relation to the others.
The basic idea of developing a model for more accurate maintenance decisions (MMAMDec) through integrating the CUSUM chart, PreVib, ProFail and ResLife is based on the following: applying the CUSUM chart reduces the probability of false alarms and confirms the initiation of damage, which makes the prediction of the vibration level in the near future more effective, while assessment of the probability of failure of a component and its residual lifetime increases the information underlying the maintenance decision by describing the condition of the component from additional dimensions. Fig. 1 shows how the mentioned modules are integrated and the model's working steps. The vibration measurements should first be analysed using the CUSUM chart; they can be exported to a Microsoft Excel file from the original vibration measurement database. When damage initiation is confirmed by the CUSUM chart, i.e. the vibration level has approached xp, PreVib is applied. The software modules PreVib and ProFail & ResLife are MIMOSA compatible. MMAMDec working steps, see also Fig. 1:
1. Use the CUSUM chart to identify when xp is approached.
2. Predict the vibration level in the near future using the PreVib module. From this step you can either go to step 3, in order to enhance the information underlying the maintenance decision through the ProFail & ResLife module, or jump to step 4.
3. Assess the probability of failure of the component and its residual lifetime using the ProFail & ResLife module for the same dates as the predicted vibration levels, or another date (on demand). This helps to enhance the information required to confirm the result of step 2, reduce its significance or reject it.
4. Plan the required maintenance action by means of the information from the previous steps.
Fig.1. MMAMDec for integrating CUSUM, PreVib and ProFail&ResLife
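The working steps above can be sketched as a single decision routine. The `predict` and `assess` arguments stand in for the PreVib and ProFail & ResLife modules, which are not public software, and the one-sided max-reset CUSUM used here is a common textbook variant; the paper's exact start/stop rule differs slightly. All numbers are invented.

```python
def mmamdec(measurements, k, h, predict, assess):
    """Sketch of the MMAMDec working steps.

    1. Scan a one-sided CUSUM for the decision interval h being exceeded.
    2. Predict the near-future vibration level (PreVib stand-in).
    3. Assess probability of failure / residual life (ProFail & ResLife
       stand-in).
    4. Return the material for the maintenance decision.
    """
    total = 0.0
    for i, x in enumerate(measurements):
        total = max(0.0, total + (x - k))   # one-sided CUSUM, reset at 0
        if total > h:                        # damage initiation confirmed
            return {
                "initiation_index": i,
                "predicted_level": predict(measurements[: i + 1]),
                "failure_assessment": assess(i),
            }
    return None  # process in control: no prediction needed

decision = mmamdec(
    measurements=[0.5, 0.6, 0.9, 1.0, 1.1],
    k=0.8, h=0.35,
    predict=lambda xs: xs[-1] * 1.1,   # placeholder prediction rule
    assess=lambda i: "probability of failure from GTTT-plot",
)
```

Returning `None` while the process is in control mirrors the point made above: prediction is deliberately withheld until the CUSUM confirms damage initiation.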
6 MODEL TEST
The model is tested using real industrial data provided by the CNC machine manufacturer Goratu for a specific machine/motor. The company also provided the reference levels for the machine type, i.e. the mean vibration, potential failure and replacement levels. All the data needed for testing the models were retrieved from a MIMOSA database. Furthermore, Goratu provided lifetime data, installation times and replacement/removal times for previous motors and the one under analysis. Testing the model requires the same types of data that the three integrated modules use individually. For the CUSUM chart we need the vibration measurements and the corresponding dates of measurement, but also the levels xo and xp in order to decide the reference level k. PreVib uses the same types of data and levels, which are retrieved from the MIMOSA database. ProFail & ResLife uses life data. The first step of the model test is to identify when the potential failure level xp is approached. The CUSUM chart is an appropriate tool for detecting systematic change from a prescribed level. The changes in the mean of the vibration measurements relative to the reference value can be plotted: if the mean of the vibration measurements is equal to the reference value, the cumulative sum fluctuates about zero. Variations in the vibration level make the slope of the CUSUM chart either increase or decline. An upward slope indicates that the accumulated sum of the deviations between the vibration measurements and the reference value has increased, and vice versa for a downward slope. With the help of the CUSUM chart it is also possible to trace back the time point when the change occurred, see Fig. 2.
Fig.2. User-interface of CUSUM chart in Microsoft Excel using real data
For example, by eye-balling the CUSUM chart in Fig. 2, one can see when the slope of the chart changes direction by observing the corresponding value on the x-axis (Xi), which corresponds to the different measurements. It is then easy to go back to the CUSUM chart file, trace the date of the Xth measurement and start investigating the reason behind the change. CUSUM can also help us to find out whether an alarm is false or to confirm damage initiation. A characteristic of the CUSUM chart is that it cannot easily discover sudden deviations, because usually at least two to three measurements are needed to indicate a deviation. From Fig. 2 we can see that the decision interval is estimated at approximately 0.080 mm/sec, which means that passing this interval can be considered an initiation of a potential failure. Furthermore, we can see that the vibration measurement that passed the decision interval (potential failure initiation) was around measurement X50. Prior to X50, however, around 20 measurements had been registered near to or above the potential failure level. It is by using the CUSUM chart that these false alarms were discovered and thus reduced. We can also see that damage development increases steadily after the damage initiation point (X50), see Fig. 2. Once the damage initiation has been confirmed through the CUSUM chart, we can start applying the PreVib module for predicting the future vibration level and the ProFail module for assessing the probability of failure and residual lifetime, see Figs. 3 and 4.
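The traceback described above can be automated instead of eye-balled: find the measurement where the CUSUM first exceeds the decision interval, then walk back to the last minimum of the chart before the crossing, which estimates when the change actually began. The chart values and decision interval below are invented, not Goratu's data.

```python
def trace_initiation(cusum_values, h):
    """Return (start_index, alarm_index) for the first crossing of the
    decision interval h, where start_index is the lowest CUSUM point
    before the crossing (the estimated onset of the change), or None
    if the chart never crosses h.
    """
    for i, c in enumerate(cusum_values):
        if c > h:
            # trace back to the lowest CUSUM value before the crossing
            start = min(range(i + 1), key=lambda j: cusum_values[j])
            return start, i
    return None

# Illustrative chart: in-control drift downward, then a sustained rise.
chart = [-0.1, -0.3, -0.4, -0.2, 0.1, 0.5, 1.0]
result = trace_initiation(chart, h=0.4)
```

Here the alarm fires at the sixth point, but the traceback attributes the onset to the fourth point, where the chart stopped declining; this gap between onset and alarm is exactly why the chart "cannot easily discover sudden deviations".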
Fig.3. User-interface of the software program PreVib using real data
The software module PreVib was used to predict the vibration level for variable calendar times. The application of PreVib is shown in Fig. 3, where one can see that the predicted values are rather accurate, since they are close to the real measurements. By predicting the vibration level, the maintenance engineer is able to identify the optimum time for the replacement of the component.
Fig.4. User-interface of the software programs ProFail and ResLife using real data
Notice that the software module ProFail & ResLife shows a probability of failure of 100% for the same dates as the predicted vibration values, see Fig. 4. This is because ProFail & ResLife makes an assessment based on historical data for similar
or the same components, whose lifetimes in this particular case were shorter than the current one, see the life distributions in the left half of Fig. 4. More historical data would also be needed for the information provided by ProFail & ResLife to be meaningful. In the last step of MMAMDec, Fig. 1, the operator or maintenance engineer can plan the required maintenance action by means of the information given by the CUSUM chart, PreVib and possibly ProFail & ResLife.
7 RESULTS, DISCUSSION AND CONCLUSIONS
In this article we have presented an approach for integrating three different tools with the purpose of increasing the accuracy of assessing the condition of a component. By using the CUSUM chart, the probability of false alarms can be reduced. After damage initiation we applied the tool for predicting future vibration levels so that an optimum time for replacement can be identified. At the same time, we applied the tool for assessing the probability of failure and residual lifetime in order to enhance the information available. The main conclusion of this paper is that the developed model is able to integrate different types of data in one platform. The model also has a clear and systematic way of application. It creates a link between the modules so that the decision maker can, for example, for the same date predict the vibration level and assess the failure probability and residual lifetime of the component. In order to make more accurate maintenance decisions, we should also sort out false alarms due to randomness in the vibration signals. In the final step, the maintenance engineer can then plan the required maintenance actions with better accuracy.
8 REFERENCES
1 Al-Najjar, B. (1997) Condition-based maintenance: Selection and improvement of a cost-effective vibration-based policy in rolling element bearings. Doctoral thesis, ISSN 0280-722X, ISRN LUTMDN/TMIO—1006—SE, ISBN 91-628-2545X, Lund University, Inst. of Industrial Engineering, Sweden.
2 Al-Najjar, B. (2000) Accuracy, effectiveness and improvement of Vibration-based Maintenance in Paper Mills; Case Studies. Journal of Sound and Vibration, 229(2), 389-410.
3 Al-Najjar, B. (2001) Prediction of the vibration level when monitoring rolling element bearings in paper mill machines. International Journal of COMADEM, 4(2), 19-27.
4 Al-Najjar, B. (2003) Total Time on Test, TTT-plots for condition monitoring of rolling element bearings in paper mills. International Journal of COMADEM, 6(2), 27-32.
5 Al-Najjar, B. (2007A) The Lack of Maintenance and not Maintenance which Costs: A Model to Describe and Quantify the Impact of Vibration-based Maintenance on Company's Business. International Journal of Production Economics, 55(8).
6 Al-Najjar, B. (2007B) Establishing and running a condition-based maintenance policy; Applied example of vibration-based maintenance. WCEAM 2007, 106-115, 12-14 June, Harrogate, UK.
7 Bergman, B. (1977) Some graphical methods for maintenance planning. Annual Reliability and Maintainability Symposium, 467-471.
8 Bergman, B. and Klefsjö, B. (1995) Quality from customer needs to customer satisfaction. Studentlitteratur, Lund, Sweden.
9 Herraty, A.G. (1993) Bearing vibration - Failures and diagnosis. Mining Technology, 51-53.
10 Jardine, A.K.S., Joseph, T. and Banjevic, D. (1999) Optimizing condition-based maintenance decisions for equipment subject to vibration monitoring. Journal of Quality in Maintenance Engineering, 5(3), 192-202.
11 Lin, C. and Tseng, H. (2005) A neural network application for reliability modelling and condition-based predictive maintenance. International Journal of Advanced Manufacturing Technology, 25(1), 174-179.
12 Samanta, B. and Al-Balushi, K.R. (2003) Artificial neural network based fault diagnostics of rolling element bearings using time-domain features. Mechanical Systems and Signal Processing, 17(2), 317-328.
13 Xiaodong, Z., Xu, R., Chiman, K., Liang, S.Y., Qiulin, X. and Haynes, L. (2005) An integrated approach to bearing fault diagnostics and prognostics. Proceedings of the 2005 American Control Conference, 2750-2755.
14 Wang, W. (2002) A model to predict the residual life of rolling element bearings given monitored condition information to date. IMA Journal of Management Mathematics, 13(1), 3-16.
15 Wang, W. and Zhang, W. (2007) An asset residual life prediction model based on expert judgments. European Journal of Operational Research, 2, 496-505.
16 White, Glenn (1996) Maskinvibration, Vibrationsteori och principer för tillståndskontroll. Landskrona: Diatek vibrationsteknik.
17 Wu, S., Gebraeel, N., Lawley, M.A. and Yih, Y. (2007) A Neural Network Integrated Decision Support System for Condition-Based Optimal Predictive Maintenance Policy. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 37(2), 226-236.
Acknowledgement We would like to thank the EU for the support received within EU-IP DYNAMITE; this paper is part of the work done within DYNAMITE.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
THE HOUSE OF MAINTENANCE - IDENTIFYING THE POTENTIAL FOR IMPROVEMENT IN INTERNAL MAINTENANCE ORGANISATIONS BY MEANS OF A CAPABILITY MATURITY MODEL Prof. Dr. Guenther Schuh, Bert Lorenz, Cord-Philipp Winter, Dr. Gerhard Gudergan Research Institute for Operations Management at RWTH Aachen University (FIR), Pontdriesch 14/16, 52062 Aachen, Germany
In order to guarantee an efficient and effective employment of production equipment, it is essential to identify any possible potential for improving performance, not only in the production process, but also in supporting areas such as maintenance. One of the major tasks in increasing maintenance performance consists of systematically identifying the company's most significant weaknesses in maintenance organisation and thus being able to implement improvements where they are most needed. But how is a company to tackle this important task? To answer this question, this paper describes an assessment and improvement approach based on a capability maturity model (CMM). By means of this approach, the status quo of a maintenance organisation can be analysed and its individual improvement opportunities identified.
Key Words: Maintenance Management, Capability Maturity Model, Maintenance Assessment, Improvement Program, House of Maintenance Framework, Maintenance Performance
1 INTRODUCTION
Ever-changing market conditions, shorter product life cycles and increasing competitive pressure create the need for a more effective and efficient deployment of production facilities. Maintenance is a key factor for both efficiency and effectiveness in production [1]. Being an internal service provider, maintenance is essential in ensuring a high degree of equipment availability, as well as meeting quality requirements and thus being able to meet customers' needs without wasting resources. Due to its increasing significance for production, maintenance has become a major cause of cost increases. Nevertheless, it is essential to regard maintenance as a strategic factor for success and not a mere cost centre. Maintenance departments must also be improved continuously in order to enhance their performance. Improvement, however, requires the identification and understanding of unutilised potentials within maintenance. Target-oriented improvements are only possible if the object or organisation and its current situation are measurable, and hence rateable. It is, however, particularly difficult to develop a suitable method for objectively measuring and assessing a complex structure like a maintenance organisation. A further requisite for an improvement process is that both the starting point (in this case, the status quo of maintenance) and the target striven for are known beforehand. The maturity-level-system theory offers a promising approach for solving such problems. With this in mind, the Research Institute for Operations Management at RWTH Aachen University (FIR) has undertaken a research effort aimed at projecting the maturity-level system onto the management of internal maintenance. For this purpose, the CMM was first applied to internal maintenance as well as to all other relevant supporting departments.
Finally, the assessment framework (the House of Maintenance of FIR) was validated in practice, using business cases. The results of the research project are shown below for a practical case. As part of its research, the FIR has developed a diagnostic instrument, the “IH-Check” (“Maintenance-Check”), for recognizing strengths and weaknesses in the maintenance departments of mainly small and medium-sized enterprises (SME), and for recognizing potentials for improvement and methods for utilising the same.
Modern concepts of management, like Total Productive Maintenance (TPM), Reliability Centred Maintenance (RCM) or Risk Based Inspection (RBI), can help improve maintenance performance [2, 3, 4, 5]. Although these and other similar concepts have been used successfully by large enterprises, their application in SMEs is limited. This is validated by the results of expert surveys conducted among heads of maintenance departments [6].
[Figure 1 shows survey responses (N = 56) to the question "In your opinion, which concepts/methods are suitable for improving the maintenance performance of SMEs?" for the concepts reliability-oriented maintenance, Risk Based Maintenance (RBM), Total Productive Maintenance (TPM) and outsourcing, rated on the scale: suitable, suitable to a limited extent, hardly suitable, not suitable at all, unknown. Source: FIR Expert Study "Trends and Development Perspectives in Maintenance", 2004.]
Figure 1 Acceptance of existing maintenance concepts in operational service
It is essentially the first step of such improvement processes, namely the realistic evaluation of one's own strengths and weaknesses, which causes considerable problems within enterprises. Experts believe that the absence of an integrated approach is one of the causes of these problems (see fig. 1). Many existing management concepts in maintenance consist of single isolated solutions which are not part of an integrated improvement process. Additional challenges accompanying the application of modern concepts of maintenance management are described in figure 2.
[Figure 2 shows survey responses (N = 56) to the question "In your opinion, what are the main problems preventing the application of maintenance concepts in SMEs?", covering: systematic support in identifying potentials for improvement; a systematic and integrated view of maintenance; internal analysis (estimation of potentials in maintenance); insufficient involvement of employees in the process of improvement; insufficient consideration of the resources (human as well as financial) of SMEs; setting realistic goals concerning maintenance; consideration of qualitative assessment criteria. Source: FIR Expert Study "Trends and Development Perspectives in Maintenance", 2004.]
Figure 2 Causes of the lack of acceptance of existing maintenance concepts in SMEs
In order to support enterprises in this situation, FIR has developed “IH-Check”, a diagnostic instrument which helps to systematically reveal organisational weaknesses in maintenance.
[Figure 3 depicts the House of Maintenance with its nine fields of action: customer, partnerships, materials management, maintenance controlling, maintenance organisation, maintenance object, information and knowledge management, maintenance policy and strategy, and maintenance staff.]
Figure 3 House of Maintenance
The assessment is based on a framework called the "House of Maintenance" (see fig. 3), which consists of nine fields of action, describing the elements of a typical maintenance organisation on a generic level. These fields are defined by an individual set of nine assessment criteria, each of which contains a set of specific levels of maturity. The levels of maturity are developed according to the CMM [7]. The CMM is a structure of elements that describes certain aspects of maturity in an organization. It aids in defining and understanding processes within an organization and is based on a five-level process maturity continuum.
[Figure 4 illustrates the elements of the maintenance assessment: each field of action is analysed individually, an individual maturity level is calculated for each criterion, and from these the maturity level of the specific field of action is evaluated. For the field of action "Maintenance Controlling", the criteria are: data collection, identification of key figures, use of key figures, key figure comparisons, cost accounting, budget, calculation of profitability, and indirect costs. For the criterion data collection (sample question: "Do you collect maintenance-specific data?"), the answer possibilities correspond to the following maturity levels:
1 - No data is collected for maintenance.
2 - Little data is collected. Figures are only sporadic and cannot be verified. No evaluation of figures.
3 - Data is collected, but collection is not completely standardised. Evaluations are only carried out irregularly.
4 - A large amount of data is collected, certain data often more than once. A standard is specified for collection. Evaluations are carried out regularly.
5 - The collected data is complete without redundancy. It is evaluated in such a way that a forecast of the consequences is possible. All the key figures can be determined without any uncertainty.]
Figure 4 Elements of the maintenance assessment
Following the assessment, a company's individual maturity profile regarding maintenance management is developed, which determines the company's potential for improvement in maintenance. In combination with a prioritisation for identifying the crucial fields of action (e.g. by means of a pair-wise comparison), specific measures can be developed to exploit the company's full potential in maintenance. This paper describes the structure of the House of Maintenance, enlarging on its different fields of action and their dedicated maturity levels, and discusses the procedure of maintenance assessment and the identification of specific measures for improvement in maintenance organisation within a practical example.
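The two aggregation steps just described (maturity profile, then pair-wise prioritisation) can be sketched as follows. The scoring scheme and all field scores are invented for the example; the IH-Check's actual questionnaire and weighting are not published in this paper.

```python
def maturity_profile(scores):
    """Average maturity level (1-5) per field of action from the
    individual criterion scores; a simple unweighted aggregation."""
    return {field: sum(vals) / len(vals) for field, vals in scores.items()}

def prioritise(fields, prefer):
    """Pair-wise comparison: each field earns a point for every
    comparison it wins; higher totals mean higher priority.
    `prefer(a, b)` returns the more important of the two fields."""
    wins = {f: 0 for f in fields}
    for i, a in enumerate(fields):
        for b in fields[i + 1:]:
            wins[prefer(a, b)] += 1
    return sorted(fields, key=lambda f: wins[f], reverse=True)

# Illustrative criterion scores for two of the nine fields of action.
profile = maturity_profile({
    "Maintenance controlling": [2, 3, 2],
    "Maintenance staff": [4, 4, 3],
})
order = prioritise(
    ["Maintenance controlling", "Maintenance staff"],
    prefer=lambda a, b: a,   # placeholder judgment: prefer the first
)
```

In a real assessment, `prefer` would encode the management's judgment of relative importance, so a field with a low maturity score and many pairwise wins becomes the first target for improvement measures.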
2 FRAMEWORK: THE “HOUSE OF MAINTENANCE”
The assessment is based on a framework called the “House of Maintenance”, which consists of nine fields of action originating from our vast experience in maintenance organisation. The House of Maintenance is oriented towards current models concerning excellence and maintenance management [2, 8, 9, and 10]. The fields of action describe the elements of a typical maintenance organisation on a generic level. Their significance has been validated in the study among maintenance experts mentioned above. The nine fields of action represent all persons and sections/departments with a significant impact on the Overall Equipment Effectiveness (OEE) – the most important key figure for measuring maintenance performance. Maintenance staff provides the basis for all maintenance activities and is thus a key to a company’s performance whilst both production uptime and production quality depend on the maintenance department’s performance. Requirements for uptime and quality within production are determined by the production department, which therefore has a major impact on the configuration of all other fields of action. The latter have to be managed accordingly.
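The Overall Equipment Effectiveness mentioned above can be computed from three factors. The sketch below uses the standard OEE definition (general knowledge, not taken from this paper) and is illustrative only:

```python
# Standard OEE definition (general knowledge, not from this paper):
# OEE = availability x performance rate x quality rate, each in [0, 1].

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors."""
    for factor in (availability, performance, quality):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must lie in [0, 1]")
    return availability * performance * quality
```

For instance, 90 % availability, 95 % performance rate and 99 % quality rate give an OEE of roughly 0.846, i.e. about 84.6 %.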
While developing the House of Maintenance, we considered state-of-the-art developments in maintenance as well as issues specific to SMEs:
• The House of Maintenance has a practical and easy-to-communicate visualisation, readily understood by both management and shop floor workers.
• While maintenance is recognised as an internal service provider, it is essential to evaluate maintenance performance from the customer’s viewpoint – the customer being the production department. It is, therefore, necessary to include aspects such as service quality and customer orientation in the House of Maintenance structure.
• A growing degree of linking between maintenance and other internal organisational units, as well as external service providers, has increased the importance of interface management and IT support.
• Maintenance policy is supposed to aim at satisfying the needs of production departments in order to ensure efficient and effective production.
3 DEVELOPING STEPS FOR CAPABILITY MATURITY
As is the case in quality management and software engineering, the evaluation of individual criteria is based on a CMM. Such models can be employed to systemise and structure varying processes. They initiate a stable long-term process of optimisation by pointing out the course for future developments. The progress of development is quantified and can be checked at regular intervals. The IH-Check uses a total of five maturity levels, based on the typical CMM levels [7]. The adaptation of the content to in-house maintenance, however, represents a totally new development. Each benchmark is structured according to the House of Maintenance and is based on the five parameters (characteristics) of the CMM, namely, “improvisation”, “orientation”, “commitment”, “implementation” and “optimisation” (see fig. 5, top part). The first level of the CMM characterises a chaotic condition, in which improvements in maintenance are introduced sporadically. No consistent understanding of a fully integrated maintenance management exists in the company. The second level is characterised by an awareness of the importance of maintenance as an internal service provider and its contribution to a company’s value and operational performance. Very frequently, this development is initiated and carried out by individual employees.
[Figure 5 depicts the five maturity steps as a staircase over time, leading up to a continuous improvement process (CIP):
1. Improvisation (“There is much to do!”): to become conscious; to get real; chaos.
2. Orientation (“Let´s get started”): target planning; target formulation; breakup period.
3. Commitment (“We are getting better!”): company-wide rules; introduction; organisation.
4. Implementation (“Not bad at all!”): usage; measurement; company-wide implementation.
5. Optimisation (“There will always be room for improvement!”): integrated maintenance management; continuous improvement.
The bottom part develops the example “Orientation”. Characteristics: often, only a small number of employees realise the importance of the maintenance organisation; improvement is possible in several cases; employees begin to realise the importance of maintenance performance. Call for action: documentation of adopted measures; analysis of adopted measures and initiation of further activities; identification of the priority measures most essential to maintenance performance.]
Figure 5 CMM for an effective facility-oriented maintenance (modified version based on [7]) and development of the CMM (example: “Orientation”)
In the third level, all measures undertaken to improve the performance of maintenance are documented, evaluated, standardised and laid down in the form of operating instructions. The fourth level may be considered to be the stage in which the most important maintenance processes are understood. At this stage, a high level has already been attained; further improvements can only occur in small steps, demanding large inputs of effort. The fifth and highest level denotes a stage of integrated maintenance management which performs with high efficiency and effectiveness. By this stage, maintenance has been reorganised in effectively coordinated steps, and all employees, including external service providers, have adopted these aims and regulations and are prepared to accept and improve them continuously. Each of the five levels of the CMM may be considered a relatively stable condition of the maintenance organisation, based on actual and durable activities and processes. This implies that a maintenance organisation cannot be changed overnight and that no steps or levels may be skipped. The individual steps are consecutive and support each other interactively, so that any single level can be attained only after the requirements of the previous one are fulfilled.
4 PROCEDURE
Assessments are handled in multiple interactively working steps. The first step is to select those fields of action which are relevant and able to significantly improve the performance of internal maintenance. These are then compared and evaluated, thus adapting the assessment to the individual situation of the company. This is achieved by using a pair-wise comparison to weight the nine fields of action described in the House of Maintenance [11]. In the following step, the decisive fields of action are evaluated using the assigned criteria. The criteria are posed as questions, each with five standard statements as answer possibilities. These statements build on one another and represent the five levels of the applied CMM. The use of previously formulated statements within the assessment largely prevents subjective evaluations (see fig. 2).
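The pair-wise comparison step can be sketched as follows. This is a hypothetical illustration, not the FIR’s actual tool: it assumes the common scoring in which each field receives 2, 1 or 0 points against every other field (more, equally or less important), and a field’s weight is its share of all points awarded.

```python
# Hypothetical sketch of pair-wise comparison weighting (assumed 0/1/2 scoring).

FIELDS = [
    "Customer", "Maintenance policy and strategy", "Maintenance organisation",
    "Information and knowledge management", "Maintenance controlling",
    "Maintenance object", "Materials management", "Partnerships",
    "Maintenance staff",
]

def pairwise_weights(scores):
    """scores[i][j] in {0, 1, 2}: importance of field i relative to field j
    (diagonal entries are ignored and should be 0)."""
    row_sums = [sum(row) for row in scores]
    total = sum(row_sums)
    return {field: s / total for field, s in zip(FIELDS, row_sums)}
```

With all fields scored as equally important, each field receives the same weight of 1/9; the weights always sum to one.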
As an intensive evaluation process is very costly and consumes a vast amount of a company’s resources, which, especially in SMEs, are not always available in the desired quantities, it was necessary to limit the number of evaluation criteria for each field of action to a maximum of nine, giving a total of 81 evaluation criteria for the diagnosis and requiring four to six hours for the accomplishment of the assessment. The assessment takes place as a workshop using a questionnaire-based approach, including collective discussions to guarantee a highly objective evaluation process. The questionnaire, as seen in fig. 2, is used for the identification of the individual maturity level of each evaluation criterion. An evaluation of maturity levels for all relevant fields of action follows in accordance with the House of Maintenance. In order to consider all relevant views of maintenance as an internal service provider, the employees and head of the maintenance department, together with employees of production, controlling and purchasing, should be included in the analysis. On the one hand, this guarantees an integrated view of maintenance as part of the enterprise as a whole; on the other hand, an evaluation including representatives from all divisions within a company ensures a high commitment of the employees involved. At the end of the questionnaire-based survey, all information is consolidated and the individual maturity levels are calculated. For further analysis, results can be depicted for every field of action and its related criteria. This maturity evaluation is depicted in a radar chart (see fig. 6).
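The calculation of the individual maturity levels can be illustrated with a small sketch. The mapping below is an assumption inferred from the percentage scale visible in the radar charts, where level 1 corresponds to 0 % and level 5 to 100 %:

```python
# Illustrative sketch: mapping the selected CMM level (1-5) of each criterion
# to the 0-100 % scale used in the radar charts, and averaging per field.

def level_to_percent(level: int) -> int:
    """Level 1 -> 0 %, 2 -> 25 %, 3 -> 50 %, 4 -> 75 %, 5 -> 100 %."""
    if level not in (1, 2, 3, 4, 5):
        raise ValueError("maturity level must be 1..5")
    return (level - 1) * 25

def field_maturity(levels):
    """Mean maturity (in %) over the criteria of one field of action."""
    return sum(level_to_percent(l) for l in levels) / len(levels)
```

A field whose criteria were all rated at level 3, for example, would plot at 50 % on the radar chart.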
[Radar chart showing the maturity level (0–100 %) for each of the nine criteria of the field of action “Maintenance Controlling”: 1 tasks, 2 data collection, 3 identification of key figures, 4 use of key figures, 5 key figure comparison, 6 cost accounting, 7 budget, 8 calculation of profitability, 9 indirect costs.]
Figure 6 Evaluation of results in the form of radar charts (example: field of action “Maintenance Controlling”)
In addition, the maturity levels of all nine fields of action are condensed into a single diagram, giving a maturity profile for the company’s maintenance as a whole (see fig. 7). On the basis of this profile, it is possible to judge which fields of action should be developed first, and which level should be strived for. The aggregated result determined by the IH-Check also delivers a collective maturity score in the form of a percentage (0 to 100 %). It shows the stage that the maintenance organisation of the company has reached on its way towards a maintenance organisation aiming at excellent equipment effectiveness. This key performance indicator (KPI) can be effectively applied for internal marketing purposes within the organisation. Using the information derived in the weighting process within the House of Maintenance and in the creation of the enterprise’s individual maturity profile, the “IH-Check” then prioritises the fields of action, considering both the importance of each field of action to the individual company and its current level of maturity. The results of the prioritisation are depicted in a prioritisation matrix (see fig. 10). Within the matrix, fields of action with a high importance as well as a low level of maturity can clearly be identified as most crucial for improvement. Based on these insights, measures can be developed to improve maintenance performance and thereby increase the enterprise’s operational performance.
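The aggregation and prioritisation can be sketched as follows. This is a hypothetical formulation: the paper does not give the exact formulas, so the weighted mean for the collective score and the weight-times-gap product for the priority are assumptions.

```python
# Hypothetical sketch of the collective maturity score and the prioritisation.
# Exact formulas are not given in the paper; weighted mean and weight * gap
# are plausible assumptions introduced here for illustration.

def overall_score(weights, maturities):
    """Aggregated maturity score in [0, 1]; weights sum to 1, maturities in [0, 1]."""
    return sum(w * m for w, m in zip(weights, maturities))

def priority(weight, maturity, target=1.0):
    """Higher weight and lower maturity yield a higher improvement priority."""
    return weight * (target - maturity)
```

Under this formulation, a field with a high weighting and a large gap between its current and target maturity ranks highest, matching the reading of the prioritisation matrix.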
[Radar chart comparing the AS-IS maturity profile with a TO-BE scenario (0–100 %) for the nine fields of action: 1 customer, 2 maintenance policy and strategy, 3 maintenance organisation, 4 information and knowledge management, 5 maintenance controlling, 6 maintenance object, 7 materials management, 8 partnerships, 9 maintenance staff.]
Figure 7 Maturity profile (regarding a company’s maintenance as a whole) (example)
5 PRACTICAL APPLICATION
Practical experience has, in a variety of cases, shown that the “IH-Check” supports organisations in conducting a systematic, objective and substantiated diagnosis of their internal maintenance. This applies to a first self-assessment, which a project team can conduct in a short time and which is sufficient to lay the basis for a detailed discussion of problems within individual areas. The evaluation of criteria within a team has proved to be beneficial. Firstly, it encourages an exchange among the different interest groups and ultimately promotes an improved mutual understanding of the different points of view existing within the organisation. Secondly, it is precisely this team evaluation which is absolutely necessary to ensure the objectivity required for a realistic determination of the strengths and weaknesses existing at the different locations. Often enough, the highly distorted views which employees have of their own maintenance performance have to be put into proper perspective. In addition, initial valuable suggestions for potential improvement already emerge in the course of these discussions. The presentation of results as radar charts has likewise proved its merit: the results of the assessment, independent of the depth and scope of the survey, can be clearly and easily communicated to and interpreted by both maintenance employees and management.
6 CASE STUDY EXAMPLE
With the objective of optimising maintenance management, a company operating in the gas industry and the FIR conducted a joint project for assessing the potential of the maintenance management and deriving the optimal maintenance strategy. An analysis of the actual situation of the company’s maintenance management was conducted using the “IH-Check”. Both the current relevance of the respective fields of action and the level of maturity within maintenance management were assessed. After defining the relevant fields of action as described in the House of Maintenance, a pair-wise comparison was used to weight these fields of action, specifically adapting the House of Maintenance to the company’s individual situation (see fig. 8). “Customer”, “Maintenance Object” and “Information and Knowledge Management” were identified as fields of action with a high relevance for the company’s maintenance management, which is reflected in their weighting scores of 14.9 %, 14.0 % and 12.2 %.
[Pair-wise comparison sheets in which each of the nine fields of action is scored 0, 1 or 2 against every other field, and the resulting weighting of the fields of action; recoverable weighting values include 14.9 %, 14.0 %, 12.2 %, 12.0 %, 10.2 %, 9.6 % and 7.2 %.]
Figure 8 Weighting of individual fields of action (example)
Using the CMM approach, an evaluation of specific maturity levels followed for each field of action, revealing the company’s status quo regarding maintenance. The evaluation took place in a questionnaire-based workshop including collective discussions, as described above (see fig. 9). The status quo in maintenance identified in this way is depicted in figures 6 and 7.
[Figure 9 shows the questionnaire for the field of action “Maintenance Controlling” with its criteria (tasks, data collection, identification of key figures, use of key figures, key figure comparisons, cost accounting, budget, calculation of profitability, indirect costs). For the criterion “data collection”, the question “Do you collect maintenance specific data?” is answered by selecting one of five statements, each corresponding to a maturity level:
1 (0 %): No data is collected for maintenance.
2 (25 %): Little data is collected. Figures are only sporadic and cannot be verified. No evaluation of figures.
3 (50 %, marked in the example): Data is collected, but collection is not completely standardised. Evaluations are only carried out irregularly.
4 (75 %): A large amount of data is collected, certain data often more than once. For collection a standard is specified. Evaluations are carried out regularly.
5 (100 %): The collected data is complete without redundancy. It is evaluated in such a way that a forecast on the consequences is possible. All the key figures can be determined without any uncertainty.]
Figure 9 Evaluation of specific maturity levels using a CMM-based approach
Considering the weighting within the House of Maintenance as well as the gap between the status quo and the desired level of maturity for each field of action, the company’s current state was transformed into a prioritisation matrix (see fig. 10). The prioritisation matrix was used to identify those fields of action which combine a high weighting and a high gap – representing the potential for improvement within the respective field of action – and thus have a high priority concerning the improvement of maintenance performance.
By assessing the weighting of relevance of each field of action against the gap between the actual and achievable level of maturity (potential for improvement), the following fields of action were identified as most crucial in the company’s maintenance management:
1. Maintenance Controlling and Performance Management (MC)
2. Maintenance Policy and Strategy (MPS)
3. Customer of Maintenance (C)
Within these fields of action lay the biggest opportunities for improvement. These improvements were essential for building up a maintenance organisation that could provide the maximum value and profit to the whole company and its customers.
Abbreviations: C: Customer; MPS: Maintenance policy and strategy; MO: Maintenance organisation; IKM: Information and knowledge management; MC: Maintenance controlling; MOA: Maintenance objects and assets; MM: Material management; P: Partnership; MS: Maintenance staff.
Figure 10 Prioritisation matrix (example)
Based on the results of the maintenance assessment, the FIR suggested an improvement of maintenance controlling (as well as other measures, which will not be discussed in this paper) by introducing a performance management system (PMS) based on balanced scorecards [12]. In detail, the FIR recommended a two-level PMS for maintenance, consisting of a strategic and an execution level and five perspectives which provide a consistent monitoring of the company’s maintenance, focussing on Processes, Customer, Finance, Staff and External Services. After its implementation, the PMS would work as a tool for creating transparency and provide the basis for the continuous measurement of the actual maintenance performance in the sense of effectiveness and efficiency. The PMS was successfully implemented and increased the maintenance department’s performance within the company.
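A minimal data-structure sketch of the two-level PMS with the five perspectives named in the text follows. The KPI entry and its target value are hypothetical, introduced only for illustration:

```python
# Illustrative only: two-level PMS skeleton with the five perspectives from
# the text. The KPI entry ("OEE", 0.85) is a hypothetical example.

PERSPECTIVES = ("Processes", "Customer", "Finance", "Staff", "External Services")

pms = {
    level: {perspective: [] for perspective in PERSPECTIVES}
    for level in ("strategic", "execution")
}

# Registering a hypothetical KPI on the execution level:
pms["execution"]["Processes"].append({"kpi": "OEE", "target": 0.85})
```

Each perspective holds its own list of KPIs per level, so strategic and execution indicators can be monitored separately, as the two-level design suggests.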
7 CONCLUSION
The “IH-Check” assessment is a powerful tool, clearly identifying shortcomings in maintenance performance as well as potential for improvement, and thus enabling the introduction of specific measures that ensure continuous improvement. A periodic application of this instrument ensures a successful derivation and implementation of measures aiming at an improvement of maintenance performance, thereby ensuring effectiveness and efficiency in production. Based on the analysis of the current condition of a company’s maintenance, the “IH-Check” helps in efficiently introducing measures for improvement. The sequence of reorganisation projects in maintenance, until now more a result of reactions to external causes, is converted into a process of systematic improvements. Employees are no longer confronted with the introduction of new management concepts, but are directly involved in measures specifically derived for the company. The problem of measuring and assessing internal maintenance was solved with the introduction of the House of Maintenance framework and the associated assessment tool “IH-Check”. The actual status of a company’s maintenance can be measured easily, using the status quo analysis described. Applying the maturity-level model, it is also possible to identify salient fields of action and thus derive appropriate measures. Using the “IH-Check”, it is possible to convert a mere reorganisation of internal maintenance into a continuous improvement process. The resulting progressive improvements in maintenance contribute to improving the operational performance of the whole enterprise.
8 REFERENCES
1. Schuh G., Berbner J., Lorenz B., Franzkoch B. & Winter C.-P. (2008) Reliability leads to a better performance – results of an international survey in continuous process industries. Proceedings of the 3rd World Congress on Engineering Asset Management and Intelligent Maintenance Systems (WCEAM-IMS 2008), 1366-1374. Springer-Verlag London Ltd, Beijing.
2. Shirose K. (2007) Total Productive Maintenance. New Implementation Program in Fabrication and Assembly Industries. Seventh edition. JIPM-Solutions, Tokyo.
3. Moubray J. (1997) Reliability-Centered Maintenance. Second edition. Industrial Press Inc., New York.
4. McKone K. E., Schroeder R. G. & Cua K. O. (2001) The impact of total productive maintenance practices on manufacturing performance. Journal of Operations Management, Vol. 19, 39-58.
5. Khan F. I., Haddara M. & Krishnasamy L. (2008) A New Methodology for Risk-Based Availability Analysis. IEEE Transactions on Reliability, Vol. 57, No. 1, 103-112.
6. Schick E. (2004) Unpublished expert study “Trends and Development Perspectives in Maintenance”, conducted by the Research Institute for Operations Management at RWTH Aachen University (FIR).
7. Paulk M. C., Weber C. V., Curtis B. & Chrissis M. B. (1994) The Capability Maturity Model: Guidelines for Improving the Software Process (SEI Series in Software Engineering). Addison-Wesley Professional, Boston.
8. Marquez A. C. (2007) The Maintenance Management Framework: Models and Methods for Complex Systems Maintenance (Springer Series in Reliability Engineering). Springer-Verlag, Berlin.
9. Moore R. (2004) Making Common Sense Common Practice: Models for Manufacturing Excellence. Third edition. Butterworth-Heinemann, Oxford.
10. Al-Najjar B. & Alsyouf I. (2000) Improving effectiveness of manufacturing systems using total quality maintenance. Integrated Manufacturing Systems, Vol. 11, No. 4, 267-276.
11. Carrizosa E. & Messine F. (2007) An exact global optimization method for deriving weights from pairwise comparison matrices. Journal of Global Optimization, Vol. 38, No. 2, 237-247.
12. Lorenz B. & Winter C. (2008) Identification of Optimal Maintenance Strategy Mixes for Small and Medium Enterprises (SME). Euromaintenance Papers. Bemas, Brussels.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
INTEGRATED STRATEGIC ASSET PERFORMANCE ASSESSMENT
Aditya Parida and Uday Kumar
Division of Operation and Maintenance Engineering, Luleå University of Technology, Luleå, Sweden
Asset performance assessment forms an integral part of the business process of heavy and capital-intensive industry to ensure performance assurance. Managing asset performance is critical for long-term economic and business viability. Assessing asset performance is a complex issue, as it involves multiple inputs and outputs and various stakeholders’ dynamic requirements. The lack of integration among various stakeholders, and their changing requirements, in strategic asset performance assessment is still a problem for companies. It is a challenge to integrate a whole organization so that free flow and transparency of information is possible and each process is linked to achieve the company’s business goals. In this paper, various issues associated with an integrated strategic asset performance assessment are discussed.
Key Words: Asset performance assessment (APA), asset performance indicators (API), employee involvement, maintenance process.
1 INTRODUCTION
The global, dynamic and competitive business scenario, shaped by technological development and change during the last couple of decades, together with the prevailing economic slowdown, demands effective, safe and reliable integrated strategic engineering asset management. Further, outsourcing, the separation of asset owners and asset managers, and complex accountability for asset management make the assessment of asset performance, and its continuous control and evaluation, more critical. Organizations operating today face several kinds of challenges, such as highly dynamic business environments, complicated intellectual work at all levels of the company, efficient use of information and communication technologies (ICT), and a fast pace of information and knowledge renewal [1]. Thus, under this scenario of technological advancement and global competition, asset owners and managers are striving to monitor, assess and follow up asset performance. Health monitoring of strategic engineering assets is an important issue and challenge for management, as it provides information on plant and system health status needed to achieve higher productivity with minimum cost and safety with high reliability. The advancement in computer and information technology has a significant impact on asset management information systems for finding the asset health status and facilitating timely decision making. Advancements in sensor technologies, automated controls and data telemetry have made possible new and innovative methods in asset health monitoring. Rapid growth in networking systems, especially through the internet, has overcome the barriers of distance, allowing real-time data transfer to occur easily between different locations [2].
The corporate strategy of an organization describes how it intends to achieve its mission and objectives, and to create value for its stakeholders, such as shareholders, customers, employees, society, regulating authorities, suppliers and alliance partners. Without a comprehensive description of strategy, executives cannot easily communicate the strategy among themselves or to their employees [3]. Therefore, it is essential that the corporate strategy and objectives of an organization are converted into specific objectives integrating the different hierarchical levels of the organization. Under the challenges of an increasingly changing technological environment, implementing an appropriate performance assessment (PA) system in an organization becomes a necessity, because without an integrated assessment of performance it is difficult to manage and verify the results of the desired objectives of an organization. Maintenance of an asset is considered an important support function for management and is perceived as something that “can be planned and controlled” and that “creates additional value” [4]. To know the amount of additional value created, the assessment of asset performance needs to be integrated into the business process. Assessing the asset performance of an organisation is a complex issue due to multiple inputs and outputs which are influenced by stakeholders and other sub-processes. Many times, the contribution of maintenance to asset performance can only be assessed in terms of the losses incurred due to a lack of
maintenance activities. Lapses in maintenance have resulted in disasters and accidents with extensive losses, and with changes in the legal environment, asset managers may in future even be charged with "corporate killing" for actions or omissions in maintenance efforts [5]. These societal responsibilities to prevent loss of life and property, besides high asset maintenance costs, are compelling management to undertake asset PA as part of the business measurement system. An asset PA system ensures that all operational activities are aligned with the organization's corporate strategies and objectives in a balanced manner. The organization has to satisfy the needs of both external and internal stakeholders and identify the performance indicators (PIs) from an integrated and balanced point of view. The purpose of this paper is to discuss various issues associated with integrated strategic asset performance assessment. The structure of the paper is as follows: after the introduction in section 1, strategic issues of engineering assets are discussed in section 2. Section 3 deals with the integrated issues in engineering asset performance assessment. A discussion and conclusion are provided in section 4.
2 STRATEGIC ISSUES IN ENGINEERING ASSET
Strategy is concerned with the long-term direction of the firm: it deals with the overall plan for deploying the resources the firm possesses, entails the willingness to make trade-offs between different directions and different ways of deploying resources, and aims at a unique positioning vis-à-vis competitors, sustainable competitive advantage over rivals and lasting profitability [6]. An organization's strategy indicates how it intends to create value for its stakeholders, such as shareholders, customers, employees and society, through effective use of its assets. For maximum impact, the measurement system should focus on the organization's strategy and on how it expects to create future and sustainable value [4]. No two organizations develop and follow strategy in the same way: some approach strategy from a financial perspective of revenue and growth; some through services or products, focusing on their customers; others from a marketing or quality perspective; and still others from a human resources perspective. From observation of different organizations and critical analysis of the literature, strategic policies exist around shareholder value, customer satisfaction, process management, quality, innovation, human resources and information technology, amongst others. Engineering asset strategy is formulated from the corporate strategy, considering the integrated and whole life cycle of the asset. An integrated approach is essential because asset performance management involves various stakeholders with conflicting needs and multiple inputs and outputs. From the asset performance objectives, two sets of activities are undertaken. One set develops the key performance indicators for benchmarking performance against similar industries; the other formulates the activity plan, implementation, measurement and review, as given in Figure 1.
As shown in the figure, asset performance objectives are formulated according to stakeholders' requirements and the organization's integrated capability and capacity. To achieve the asset performance objectives, critical success factors are identified, from which key result areas are derived. From the key result areas, key performance indicators (KPIs) are developed for measuring and assessing asset performance. In the other set of activities, activity plans are made, on the basis of which implementation is carried out. After implementation, measurement and assessment of asset performance are undertaken, so that feedback and review can validate the asset performance objectives.
[Figure 1 depicts the strategic asset performance measurement process: corporate strategy feeds the asset strategy and the asset performance objectives; one branch derives critical success factors, key result areas and key performance indicators (PM), while a parallel branch runs from activity plan to implementation; feedback and review close the loop back to the asset performance objectives.]
Figure 1 Strategic asset performance measurement process (Adapted from [7])
Companies use scorecards as a strategic management system to manage their strategy over the long run, using the measurement focus to accomplish critical management processes [8]:
1. Clarify and translate vision and strategy
2. Communicate and link strategic objectives and measures
3. Plan, set targets, and align strategic initiatives
4. Enhance strategic feedback and learning
In an asset management strategy, various industry forces play important roles and need to be considered in the analysis. For new entrants, the entry barriers are experience and culture; for suppliers, there may be many service providers; for substitute products, the threat is better systems and processes; for customers, the issue is trust and good relationships; and for the industry, it is the competitors. The importance of strategic aspects of engineering assets cannot be overlooked, especially in the present business context. Examples of asset performance objectives could be achieving a higher OEE level, zero defects (zero quality complaints) and nil accidents. The KPIs translate aggregate measures from the shop-floor level to the strategic level. The real challenge lies in measuring all the KPIs, as some are intangible in nature and difficult to quantify. Organizations need a framework to align their performance measurement system with corporate strategic goals by setting objectives and defining key performance indicators at each level [9]. The performance measurement (PM), which forms part of the asset performance measurement system, needs to be aligned with the organizational strategy [10]. The PIs need to be considered from the perspective of the multiple hierarchical levels of the organization. According to [11], maintenance management needs to be carried out in both strategic and operational contexts, and the organization is generally structured into three levels.
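The OEE objective mentioned above is one of the few asset performance measures with a standard quantitative definition: overall equipment effectiveness is the product of availability, performance rate and quality rate. The sketch below uses this standard three-factor formula with purely illustrative figures (none of the numbers come from the paper):

```python
# OEE (Overall Equipment Effectiveness) = Availability x Performance x Quality.
# All input figures below are illustrative, not data from any real plant.

def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    availability = run_time / planned_time               # uptime vs planned time
    performance = (ideal_cycle_time * total_count) / run_time  # speed vs ideal speed
    quality = good_count / total_count                   # good units vs all units
    return availability, performance, quality, availability * performance * quality

# Example: an 8-hour (480 min) shift, 420 min actually running,
# ideal cycle time 1 min/unit, 378 units produced, 360 of them good.
a, p, q, o = oee(planned_time=480, run_time=420, ideal_cycle_time=1.0,
                 total_count=378, good_count=360)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%}, OEE {o:.1%}")
```

Such a shop-floor OEE figure is exactly the kind of operational measure that the KPI hierarchy discussed here rolls up to the strategic level.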
The three hierarchical levels considered by most firms are the strategic or top management level, the tactical or middle management level, and the functional/operational level [12]. Two major requirements of a successful corporate strategy relevant to performance measurement are:
1. Cascading down the objectives from the strategic to the shop-floor level
2. Aggregation of performance measurements from the shop floor to the strategic level.
2.1 Cascading down the objectives from strategic to shop floor level. The strategic objectives are formulated based on the requirements of both internal and external stakeholders. Plant capacity and resources are considered and matched against the long-term objectives. These corporate objectives are cascaded down the hierarchical levels of the organization through the tactical level, which considers tactical issues such as financial and non-financial aspects from both the effectiveness and the efficiency points of view. The bottom level is represented by functional personnel, including shop-floor engineers and operators. The corporate or business objective at the strategic level is communicated down through the levels of the organization and translated into objective measures in a language and with a meaning appropriate to the tactical or functional level. This cascading down of strategy forms part of the goal deployment of the organization.
2.2 Aggregation of performance measurements from shop floor to strategic level. Performance at the shop-floor level is measured and aggregated through the hierarchical levels of the organization to evaluate the achievement of the corporate objectives. The adoption of fair processes is the key to successful alignment of these goals; it helps to harness the energy and creativity of committed managers and employees to drive the desired organizational transformations [13]. This aggregation also leads to the empowerment of employees in the organization.
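The aggregation step described in 2.2 can be sketched as a weighted rollup of lower-level KPI scores through the three hierarchical levels. This is only an illustration of the mechanism; the KPI names, weights and scores below are hypothetical and not taken from the paper:

```python
# Sketch of KPI aggregation from the shop floor up to the strategic level.
# KPI names, weights and scores are hypothetical, chosen only to show the rollup.

def aggregate(kpis, weights):
    """Weighted aggregation of lower-level KPI scores (all on a 0-1 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(kpis[name] * w for name, w in weights.items())

# Functional/operational level: scores measured on the shop floor.
shop_floor = {"availability": 0.92, "rework_rate_score": 0.85, "mttr_score": 0.78}

# Tactical level: aggregate into one maintenance-effectiveness figure.
tactical = aggregate(shop_floor, {"availability": 0.5,
                                  "rework_rate_score": 0.3,
                                  "mttr_score": 0.2})

# Strategic level: combine the tactical KPI with, e.g., a cost-performance score.
strategic = aggregate({"maintenance_effectiveness": tactical, "cost_score": 0.70},
                      {"maintenance_effectiveness": 0.6, "cost_score": 0.4})
print(round(tactical, 3), round(strategic, 3))
```

In practice the weights would come from the goal-deployment exercise of 2.1, so that the same structure that cascades objectives down is reused to aggregate measurements up.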
3 INTEGRATED ISSUES IN ENGINEERING ASSET PERFORMANCE ASSESSMENT
Thompson [14] listed eight clusters of organizational competencies, which are linked to strategy content, the need to stay aware, and strategic change. The eight clusters are:
1. strategic awareness abilities
2. stakeholder satisfaction abilities
3. competitive strategic abilities
4. strategic implementation and change abilities
5. competency in quality and customer care
6. functional competencies
7. ability to avoid failures and crises
8. ability to manage ethically and with social responsibility.
Therefore, all successful organizations have to be aware, formulate a winning integrated strategy, and implement and manage it in a dynamic and competitive business environment. Companies use PM scorecards as a strategic management system to manage their strategy over the long run, using the measurement focus to accomplish critical management processes [8]:
• Clarify and translate vision and strategy
• Communicate and link strategic objectives and measures
• Plan, set targets, and align strategic initiatives
• Enhance strategic feedback and learning
The integrated issues in engineering asset PA are discussed below [15]:
1. Stakeholders' requirements. The external stakeholders' needs are to be assessed and responded to with matching asset and resource requirements and planning, together with the internal stakeholders' capability and capacity; these formulate the corporate objectives and strategy and are translated into targets and goals at the operational level, converting a subjective vision into objective goals. When considering the external stakeholders' needs, the prevailing and future business scenarios are examined, besides the competitors. Internal stakeholders' needs are considered from employee, management and organizational-culture perspectives, besides the capacity and capabilities of the asset and other resources.
2. Organizational issues. The asset PM system needs to be aligned with, and form an integral part of, the corporate strategy. This requires commitment from top management, and all employees must be made aware of the asset PM system through effective communication and training, so that they all speak the same language and are fully involved. Involving employees in the asset PM system at every stage (planning, implementation, monitoring and control) and at each hierarchical level can ensure success in achieving the asset performance and business strategies. Besides, all functional processes and areas, such as logistics, IT, human resources, marketing and finance, need to be integrated with the engineering assets.
3. Engineering asset requirements. From the stakeholders' needs, the demand for the engineering asset is analysed and the asset is designed. After concept development and validation, the engineering asset specifications are worked out. In addition, competitive products, cost of maintenance, risk management, correct product design, asset configuration and integration are considered from strategic and organizational perspectives. Operation and maintenance of the engineering asset may be outsourced partially or entirely.
4. How to measure? It is essential to select the right PIs for measuring asset performance from an integrated whole-life-cycle perspective for benchmarking, besides collecting the relevant data and analysing them for appropriate decision making. The asset PM reports developed after the data analysis are used for subsequent preventive and/or predictive decisions. The asset PM needs to be holistic, integrated and balanced [12].
5. Sustainability. Sustainable development is development that is consistent while contributing to a better quality of life for the stakeholders. This concept integrates and balances social, economic and environmental factors, amongst others.
6. Linking strategy with integrated asset performance assessment criteria. The linkage between integrated Enterprise Asset Management (EAM) measuring criteria, condition monitoring, IT and the hierarchical levels for decision making is given in Figure 2. The figure describes the linkage between the external and internal stakeholders' needs and considers the concept of integrated enterprise asset management from the different hierarchical needs, while linking performance measurement and assessment from the engineering asset's operational level to decision making at the strategic level. External effectiveness is highlighted by stakeholder needs such as return on investment and customer satisfaction. Internal effectiveness is highlighted through the desired organizational performance, reflected by optimized and integrated resource utilization for EAM; for example, the availability and performance speed of the equipment and machinery form part of the internal effectiveness, or back-end, process. Quality is the most important aspect, related not only to the product quality of the back-end process but also to the customer satisfaction of external effectiveness.
From the external stakeholders, the annual production level is decided, considering the customers' requirements, return on investment, internal plant capacity and plant availability. From the internal stakeholders, the organization considers departmental integration, employee requirements, organizational climate and skill enhancement. After formulation of the asset PA system, the multi-criteria PIs are placed under the multiple hierarchical levels of the organization.
[Figure 2 shows external stakeholders' needs and the company's internal needs feeding the organization's vision and objectives, which drive decision making at the strategic level, checked with PIs and KPIs. At the tactical/managerial level these translate into business and marketing strategies and the production/operational strategy and policy; at the operational level, into production planning, scheduling and control and maintenance planning, scheduling and control. Data from embedded sensors and the IT system are turned into information and performance indicators for productivity, process, cost, environment, employee satisfaction, growth and innovation, etc., within the enterprise asset management (EAM) system.]
Figure 2. Linkage between integrated Enterprise Asset Management’s (EAM) measuring criteria with condition monitoring, IT and hierarchical levels for decision making
4 DISCUSSION AND CONCLUSION
An asset cannot be managed without considering the integrated strategic issues relevant to an appropriate PA system, because of the various stakeholders' conflicting needs and the associated multiple inputs and outputs, including the tangible and intangible gains from the asset. For engineering asset PA, strategic issues are essential considerations. In the prevailing dynamic business scenario, asset PA is extensively used by business units and industries to assess progress against set goals and objectives in a quantifiable way, for both effectiveness and efficiency. An integrated asset PA provides the information management needs for effective decision making. Research results demonstrate that companies using integrated, balanced performance systems perform better than those that do not manage measurements [16]. In this paper the complexities of asset PM have been considered and discussed from the corporate strategy perspective of the organization. Since no two organizations are exactly alike, the asset PM framework and PI formulation from corporate strategy need to be specific to the organization. The concepts and strategic issues have been discussed for a successful engineering asset PM.
5 REFERENCES
1 Lönnqvist, A. (2004) Business Performance Measurement for Knowledge-Intensive Organizations, http://www.pmteam.tut.fi/julkaisut/HK.pdf, visited 22 August 2004.
2 Toran, F., Ramirez, D., Casan, S., Navarro, E. and Pelegri, J. (2000) Instrumentation and Measurement Technology, Vol. 2, IEEE IMTC, pp. 652-656.
3 Kaplan, R. S. and Norton, D. P. (2004) Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Harvard Business School Press, USA.
4 Liyanage, J. P. and Kumar, U. (2003) Towards a value-based view on operations and maintenance performance management, Journal of Quality in Maintenance Engineering, Vol. 9, pp. 333-350.
5 Mather, D. (2005) An introduction to the maintenance scorecard, The Plant Maintenance Newsletter, Edition 52, 13 April 2005.
6 Jelassi, T. and Enders, A. (2005) Strategies for e-Business, Prentice Hall, Essex, London.
7 Parida, A., Ahren, T. and Kumar, U. (2003) Integrating Maintenance Performance with Corporate Balanced Scorecard, Proceedings of the 16th International Congress of COMADEM, 27-29 August 2003, Växjö, Sweden, pp. 53-59.
8 Kaplan, R. S. and Norton, D. P. (1996) The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, pp. 322.
9 Kutucuoglu, K. Y., Hamali, J., Irani, J. and Sharp, J. M. (2001) A framework for managing maintenance using performance measurement systems, International Journal of Operations and Production Management, Vol. 21, No. 1/2, pp. 173-194.
10 Eccles, R. G. (1991) The performance measurement manifesto, Harvard Business Review, January-February, pp. 131-137.
11 Murthy, D. N. P., Atrens, A. and Eccleston, J. A. (2002) Strategic maintenance management, Journal of Quality in Maintenance Engineering, Vol. 8, No. 4, pp. 287-305.
12 Parida, A. and Chattopadhyay, G. (2007) Development of a Multi-Criteria Hierarchical Framework for Maintenance Performance Measurement (MPM), Journal of Quality in Maintenance Engineering, Vol. 13, No. 3, pp. 241-258.
13 Tsang, A. H. C. (1998) A strategic approach to managing maintenance performance, Journal of Quality in Maintenance Engineering, Vol. 4, No. 2, pp. 87-94.
14 Thompson, J. L. (1997) Lead with Vision: Manage the Strategic Challenge, International Thompson Business Press, London.
15 Parida, A. (2006) Development of a Multi-criteria Hierarchical Framework for Maintenance Performance Measurement: Concepts, Issues and Challenges, Doctoral thesis, Luleå University of Technology, Sweden, http://epubl.ltu.se/14021544/2006/37/index.html
16 Lingle, J. H. and Schiemann, W. A. (1996) From balanced scorecard to strategy gauge: Is measurement worth it? Management Review, March, pp. 56-62.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
ASSESSING THE SUBJECTIVE ADDED VALUE OF VALUE NETS: WHICH NETWORK STRATEGIES ARE REALLY WIN-WIN?
Tony Rosqvist (a), Toni Ahonen (b), Ville Ojanen (c), Arto Marttinen (d)
(a) VTT Technical Research Centre of Finland, PO Box 1000, FI-02044 VTT, Espoo
(b) VTT Technical Research Centre of Finland, PO Box 1300, FI-33101 Tampere, Finland
(c) Lappeenranta University of Technology, PO Box 20, FI-53851 Lappeenranta, Finland
(d) Metso Automation Inc, PO Box 310, FI-00811 Helsinki, Finland
As manufacturing companies increasingly focus on their core business, interest in the utilisation of external services provided by system suppliers and service companies increases, and a growing number of services are purchased from service supply networks. Furthermore, globalisation, the complexity of technological innovations and the demand for integrated solutions create a need for networking and collaboration. Establishing or improving the performance of networked service providers, the value net, is a long-term effort requiring the build-up of trust between the partners. A necessary condition for moving from a subcontractor relationship to a strategic network or partnership is a shared view of the joint gains in a prospective value net. How, then, do we evaluate the added value of moving to a new partnership? Which network strategies provide the win-win network solution? This paper is a tentative effort to answer these questions based on Decision Analysis.
Keywords: value net, strategic network, network orchestrator, win-win strategy, collaborative maintenance network
1 INTRODUCTION
A current trend is to outsource operations that do not belong to the core competence of a company. The main rationale for this development is the assumption that by subcontracting, the company can buy certain services more cheaply and have them better managed and implemented, thus providing more added value compared to keeping the same functions in-house. Furthermore, embedded technology in the assets, ever-increasing demand for production efficiency, and dynamic end-customer requirements increase the need for a variety of services. The main assumption here is that there exists, or will rapidly emerge, a competitive market of services able to provide collaborative services cost-effectively. Of course, there is always a tendency in manufacturing and servicing for some services to be based on know-how that is restricted to only a few service providers, resulting in the creation of an oligopolistic service market that is not cost-effective (the market effect of scarcity power). Striving for a competitive edge in a special area naturally leads to this type of oligopolistic service market, but on the other hand companies must avoid seizing all the customer value of their related services. This is characteristic of networked environments, where the market players aim at strategic partnering, forming a value net, which can be characterised as in Fig. 1, showing the 'strategic' differences between the network partners and the role of a network orchestrator.
Figure 1. Illustration of the requirements for trust, information transparency and the roles of the value net partners.
As the figure shows, service providers have different strategic roles in the value net: the principal service providers belong to the core of the network, whereas other service providers lie at the fringe of the value net, with decreasing strategic significance. Such a governance structure is typically worked out by the network orchestrator or network leader, who has the closest relationship with the customer. From the customer's perspective, there has been a clear need to decrease the number of closest partners in order to ease management. This favours one-stop shops that can integrate solutions and services and act as the main partner towards the customer. The network leader establishes the values and culture of the network, developing its guiding principles (e.g. centralisation vs. decentralisation, incentive systems, accountability rules, all entangled with their own tensions and trade-offs) while utilising best practices from the network itself. In contrast to the rigid control systems used to manage production units, the network orchestrator relies not just on rewards but on a combination of empowerment and trust, as well as training and certification, to manage a network that it does not own. Finally, orchestrators have a different way of creating value: value in the traditional firm comes from specialisation, refining skills in specific areas, protecting trade secrets, and keeping out rivals and even partners, whereas value nets create value through integration, bridging borders and leveraging intellectual property across the network. In other words, the social rationale of a value net is simply: i) doing 'more' than the organisation knows, and ii) knowing 'more' than the organisation does.
Of course, this has implications in managerial areas such as contracting / strategic partnering, relationship management (culture, trust), knowledge management (social networks, IT systems), and change management (strategy, leadership, 'integrator'), all looking at organisational boundaries from their own perspectives: legal, physical, work/activity and knowledge. The managerial challenges of strategic networks or partnerships are best illustrated by the results of a survey conducted in Finland in 2007-08 on the challenges Finnish industrial service providers meet in offering a client extended services for managing the technical and economic lifetime of fleet assets [1]. In the survey, covering 5 industrial partners (4 vendors, 1 client) and 11 interviewees, success factors for a service network were identified. The following key elements are crucial for building trust between the network partners:
- Pricing in relation to the added value created: how transparent should the price and added-value determinants of the services be?
- Service offering (coupling between service bundling and pricing): what kind of bundling of products/services will satisfy the client's need to get everything from 'one hatch'?
- Information management (open access to reliability performance of equipment and to maintenance performance and plans): to what extent should the information systems be open, or even shared, based on a common platform?
- Mixed cultures (balance between service-oriented and product-manufacturing cultures): how to reconcile the effectiveness of production with the flexibility of services?
- Intermediate operator / system integrator (ability to connect products and services from separate vendors): is an intermediate operator or system integrator needed, in the beginning of the new service development process only, or over the lifetime of the physical assets serviced?
- Knowledge management (scope of access to each other's data, IPR, business models, etc.): what kind of knowledge-related asymmetries [2] do we need to worry about?
Good management of the key elements above can be viewed as a necessary condition for moving towards a true value net in the particular industrial area considered in the survey. Obviously, this transition takes a lot of time and commitment from the managers of the companies in question. The remainder of the paper is organised as follows: in section 2 the different roles in a value net are defined and some demarcations are made for presenting a framework supporting value judgments on alternative service networks. Section 3 presents a value measurement framework based on value-tree analysis, which is a key feature of the generic assessment process presented in section 4. The paper concludes with a discussion of directions for future research in section 5.
2 ROLES IN A VALUE NET
[Figure 2 depicts the basic roles surrounding the physical asset: regulator, owner, end-user, operator, system supplier, service providers and other stakeholders.]
In principle, the network leader has several governance choices for a value net servicing a physical asset. The choices are reflected in the management of the different roles, accountabilities and functions conducted in collaboration. The roles and functions should be defined with particular attention to customer needs, which typically derive from the asset owner's needs together with the end-users' needs. This ensures that the established network processes are truly value-adding for the customer. In practice, network and partnership choices have to be made based on current relationships and the roles adopted. The basic roles tied together by the physical asset are illustrated in Fig. 2.
Figure 2. The basic question of the network orchestrator is: Which network strategies provide the highest joint gains in the value net, and at the same time meet possible constraints (e.g. regulatory)?
To narrow down the decision context of developing network strategies, we make the following demarcations:
- the owner of the physical asset is also the user and customer in the value net
- one of the suppliers or service providers is also the network orchestrator (see Fig. 1)
- the 'value' of running and maintaining the assets is also assessed by 'other stakeholders' (e.g. environmentalists) and the regulator, but these are usually not considered principal partners in the value net
- the added value provided by the value net is basically determined by the end-user, but the ultimate rationale of the value net is to provide 'value' for all partners, i.e. a win-win or joint-gains outcome
- the service agents are not all 'equal' but have different strategic positions in the value net (Fig. 1), and accordingly different network strategies
Formulation of the strategic objectives of the value net aims at maximising the value received by the customer; thus the analysis of customer objectives, together with the strategic objectives set by the individual service providers, lays the foundation for deriving the network objectives. In other words, the network strategy should be based on the customer's strategic objectives. For instance, in a situation where the customer's objectives are highly influenced by the dynamics of the business environment, the network strategy must include aspects of these dynamics as well. Or, if the customer wants the (physical) assets to have a certain performance (availability, reliability, maintainability, etc.), the value net has to manage its processes in such a way that the corresponding performance levels are met. The direct business-related value of networking for the partners can include increased business opportunities and profit, coping with the challenges of a dynamic market, benefiting from network-level reputation, and reaching economies of scale. In the following, we discuss value measurement further.
3 VALUE MEASUREMENT
From the point of view of the individual partner, perceived 'value' can be assessed using the standard Balanced Scorecard (BSC) approach [3-5]. The BSC perspectives are fourfold, and for each of them strategic objectives, goals and indicators can be defined according to the managerial plans. For instance, plant and maintenance objectives, and related Key Performance Indicators, are discussed in Rosqvist et al. [6].
The Learning & Growth Perspective. This perspective includes employee training and corporate cultural attitudes related to both individual and corporate self-improvement. In a value net, learning from each other and sharing knowledge can offer a competitive edge for the partners. The emphasis is therefore on an increased understanding of openness and transparency requirements, the construction of mutual trust, and the promotion of systematised feedback and interaction mechanisms. Indicators can be developed to support managing these issues. Such indicators are usually interpreted as leading indicators, i.e. indicators that signal future outcomes of customer satisfaction and financial performance.
The Business Process or Internal Perspective. This perspective refers to internal business processes. The managerial areas are operations management, customer management, innovation management and the management of regulatory and social issues. Proper indicators allow the managers to know how well their business is running and whether its products and services conform to customer requirements. Again, such indicators are usually interpreted as leading indicators. The above managerial areas may to a large extent be jointly managed through partnering. It is thus important for a value net to identify key managerial areas where joint gains can be achieved by reallocating managerial tasks and responsibilities within the network.
In particular, information-sharing principles need to be addressed: what is openly accessible to all partners, what is restricted to certain partners, what information is entered by whom, what ICT is used and who maintains it, what happens with IPR, etc.
The Customer Perspective. Recent management philosophy has shown an increasing realisation of the importance of customer focus and customer satisfaction in any business: if customers are not satisfied, they will eventually find other suppliers that will meet their needs. The mutual trust between the customer and the network is crucial. As customer value is something perceived by customers rather than objectively determined by the supplier, the significance of understanding and managing customer knowledge has to be emphasised. The main responsibility of the network orchestrator is to read the customer's 'signals' and 'transmit' them properly up to the furthermost partner in the network (see Fig. 1). The signals can relate to many attributes such as price, quality, service availability and selection, but also trust, branding and the overall functionality of the network. In essence, the value propositions given by the value net need to be achieved, maintained and monitored.
The Financial Perspective

The financial perspective relates to immediate economic determinants and economic results. Financial metrics have been criticised for their emphasis on short-term performance, reinforced by the move to quarterly financial reporting. For each partner in a value net, the financial performance determines the success of the network in the eyes of the partners. How this affects the decision to continue as a partner or move elsewhere depends on the position and the role of the partner in the value net: the furthermost partners are expected to base their partnering decisions more on short-term than long-term financial performance. The financial performance is also expected to be a key driver of changes in the network structure: if revenues and risks are perceived by some partners to be distributed in an unfair way, then the cohesion of the network is clearly threatened. Kaplan and Norton augmented their BSC performance measurement system with a strategy planning tool – the strategy map – which depicts the cause-effect relationships between the four standard perspectives or objectives in the BSC system [7]. A key insight is that factors in the ‘Learning & Growth’, ‘Business Process’ and ‘Customer’ perspectives can be interpreted as leading indicators for financial performance and competitiveness, with their respective lagging indicators [8,9]. It has to be remembered that the developments by Kaplan and Norton are connected to the strategy development of a single corporation. The strategy development of a value net can, however, utilise the strategy map idea. Any prospective network strategy can be formulated in terms of managerial objectives that the partners share and jointly try to achieve. The network partners may value a network strategy by assessing its impact on determinants of customer value, and further on financial determinants.
In principle, the added value related to a prospective network strategy may be subjectively assessed, by each partner, by comparing it to the existing network strategy. This is illustrated in Fig. 3, which is an adaptation of the strategy map by Kaplan and Norton [7]. The strategy map illustrates the structure of a shared network strategy so that the value assessment can verify that a prospective network strategy is a win-win strategy. It is important to note that a network strategy that yields added value to the customer may produce negative added value for the other network partners due to complexities introduced in contracting and knowledge management, inducing extra transaction costs. In principle, the value assessment framework includes all the strategic elements relevant in the survey referred to in the Introduction, providing a direction for further application-specific refinement with respect to the concerned managerial areas. Some of these aspects are discussed next. The role of a network orchestrator in a service supply network has been emphasised in our survey, as capabilities for structured strategic planning are often lacking, particularly in smaller service companies. In cases where a network orchestrator takes full responsibility for network strategy development, smaller companies may compensate for this lack by adopting dynamic, high-quality working methods in the implementation of the strategy, making them important partners in the service provision. Also with respect to information management, our survey has identified strategically important differences between typical players in a service supply network, especially with respect to transparency requirements, as indicated in Fig. 1. Regardless of these differences, the internal managerial perspective on information management requires integrated solutions to align the combination of strategies.
Based on the survey, the surrounding business environment and the related dynamics create the most fundamental needs for partnership development, with effects on pricing, service offering, as well as leadership (culture) and knowledge management. From the customer point of view, one of the most important, but at the same time most difficult, aspects to control during the contract period seems to be the network’s capability to continuously develop the services found strategically important. Openness and transparency are found important when finding common ground for assessing and capturing opportunities for the added value of improved services and/or business processes in the network. In the next section, a value assessment process is described that operationalises the strategy-map-based added value model in Fig. 3.
[Figure 3 diagram: the partnership added value (dependent on the network strategy) is decomposed into the added value of the service providers, of the network orchestrator and of the customer (user and/or owner of the physical asset). Financial determinants (improve cost structure, improve asset utilisation, expand revenue opportunities, enhance customer value) are linked by causal relationships to customer value determinants – product/service attributes (price, quality, availability, selection, functionality, flexibility) and the relationship (brand) – which in turn rest, via defining relationships, on the internal processes of the partners, where some functions are the property of the network and defined in the network strategy. Each partner valuates the performances in the various managerial areas differently; through partnering, a value-adding alternative network strategy is compared against the current network strategy (zero reference).]
Figure 3. A strategy map linking the internal managerial perspective (objectives) with the customer and financial perspectives (objectives) for the assessment of added value of a prospective network, formulated in terms of shared managerial objectives and goals that are expected to lead to added value for the customer and the partners, i.e. a win-win situation. The added value assessment is subjective and comparative in nature. (The reader is referred to the work of Kaplan and Norton on strategy maps).
4
THE ASSESSMENT PROCESS OUTLINE
To be able to use the added value model of the previous section, the following key issues need to be addressed, mainly by the network orchestrator:
- what managerial elements are incorporated in the current network strategy, and what changes could provide added value?
- how are the impacts of the prospective network strategy assessed?
- how are uncertainties (risks) incorporated in the assessment?
Figure 4 shows two distinct activities that are needed: the Network Strategy Formulation and the Added Value Assessment.
Figure 4. Activities to support network strategy development: the outcome is an action plan that outlines how the current partnership should be changed to create an improved win-win network strategy.

Network Strategy Formulation is expected to be a sensitive process led by the network orchestrator, with numerous mutual discussions with the existing and potential partners on wanted or possible changes in roles, accountabilities, etc. with respect to the current network strategy. The sense of trust in the leadership and orchestration is crucial. Basically, the outcome of the activity could be in the format of Table 1, which shows how the current and the prospective network strategies are formulated. The structuring is based on the BSC perspectives. In addition, risk management issues can be included in the form of real options that can be executed conditional on the occurrence of random events. For instance, if there is a sudden price increase for a certain raw material or component, or a radical breakdown of one partner’s production, there are options for the affected partner to switch the material/component to another, or for the network to be temporarily supplied by some other company outside the network, respectively. Such real options should be identified and agreed upon in the network strategy.

Table 1 Network strategy formulation – generic template

Management areas: financial | customer | business process | regulation & social
Current strategy (for each area): What is in place to implement the current network strategy?
Prospective strategy (for each area): What should be in place to implement the prospective improved network strategy?
The prospective new network strategy is then valuated in order to assess the added value for each partner, as well as for the customer. The valuation techniques follow the methods and techniques presented in Decision Analysis, e.g. [10,11]. The customer may, or may not, be included in the assessment; this depends on the character of the customer-network relationship. At the same time, it is verified that the new network strategy produces added value for each (principal) partner, i.e. that the prospective network strategy is a win-win strategy. The outcome of this activity is an action plan that indicates the next concrete measures to be performed in order to implement the new network strategy. In general, it is expected that the alternative network strategy entails only a small change in terms of strategy but nevertheless reflects a major change in attitude and trust among the partners. Any formal change in strategy is always connected with changed expectations on the outcome and on the way partners will act. If expectations are met, trust will build up, and the willingness to develop deeper partnerships will increase. Such a partnering process is incremental, evolving step by step. The formulation and valuation activities may be supported by a Group Decision Support System (GDSS), allowing effective ways of generating ideas, commenting, voting and arriving at an action plan. The use of a GDSS focuses on supporting the whole multi-phased process of group decision making with certain technologies, and it can be seen to fall under the larger umbrella of Group Support Systems (GSS), which may include any technologies used to make groups more productive [12,13].
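As an illustration of the comparative, partner-wise valuation described above, the sketch below applies a simple additive multi-attribute value model in the spirit of Decision Analysis. The partner names, attribute weights and scores are hypothetical, invented purely for illustration; in practice they would be elicited from the partners themselves.

```python
# Illustrative sketch (not from the paper): an additive multi-attribute
# value model to compare a prospective network strategy against the
# current one, per partner. All weights and scores are hypothetical.

def partner_value(weights, scores):
    """Additive value: sum of weight * single-attribute score (0..1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[a] * scores[a] for a in weights)

# BSC-style attributes; each partner scores each strategy on them.
weights = {"financial": 0.4, "customer": 0.3, "process": 0.2, "learning": 0.1}

current = {
    "orchestrator":     {"financial": 0.6, "customer": 0.5, "process": 0.5, "learning": 0.4},
    "service_provider": {"financial": 0.5, "customer": 0.5, "process": 0.6, "learning": 0.5},
}
prospective = {
    "orchestrator":     {"financial": 0.7, "customer": 0.6, "process": 0.5, "learning": 0.5},
    "service_provider": {"financial": 0.6, "customer": 0.6, "process": 0.6, "learning": 0.6},
}

# Added value of the prospective strategy relative to the current one
# (the zero reference); a win-win requires it to be positive for all.
added = {p: partner_value(weights, prospective[p]) - partner_value(weights, current[p])
         for p in current}
win_win = all(v > 0 for v in added.values())
```

A win-win strategy is indicated only when every partner's added value relative to the current strategy (the zero reference) is positive.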
Earlier studies have revealed several benefits of GDSS that are worth noting in coping with the problem area of this paper. These include process structuring, goal-oriented processes, parallelism (many people able to communicate at the same time), allowance of larger group sizes, automatic documentation, anonymity of group members, access to external information, and automated data analysis [13-16]. The problem area of this paper is complex, it needs structuring, it affects several people, and several people are needed to make decisions with regard to it. Therefore, it can be assumed that the use of a GDSS would provide significant benefits. A large part of the results could be produced during, for example, two strictly phased GDSS workshop sessions, of which the first would formulate the prospective network strategy, and the second would have the principles of value assessment, metrics, and action plans as outputs. However, earlier studies have also stressed the significance of detailed pre-planning: in pre-planning meetings, the purpose and goals for each GDSS session need to be carefully defined in order to make the process itself as efficient and effective as possible, and to maximise the quality of the concrete results of the sessions.
5
CONCLUSIONS
In the proposed framework for assessing the added value of a network strategy, value is a measure of the subjective preferences of the network partners. Thus, a win-win strategy cannot be proved other than by monitoring the customer-network relationship and the relationships between the partners. The partners have different roles, skills, expectations, etc. that need to be aligned in a network strategy for adding value to the customer and to each other. The Balanced Scorecard perspectives and strategy maps provide a good basis for developing a value-theoretic assessment framework that supports the network orchestrator in the formulation and valuation of a prospective network strategy. In the framework, the valuation is comparative, with the current network strategy as the reference point. The framework was developed as one answer to the need to improve partnering for maintenance and production services for a customer in the fertiliser business in Finland. The presented approach needs many test cases for refinement and validation and should be viewed as a reference for further research rather than a readily implementable method to develop value nets.
6
REFERENCES
1
Ojanen V, Lanne M, Reunanen M, Kortelainen H & Kässi T. (2008) New service development: success factors from the viewpoint of fleet asset management of industrial service providers. Fifteenth International Working Seminar on Production Economics, Pre-Prints Volume 1, 369-380.
2
Cimon Y. (2004) Knowledge-related asymmetries in strategic alliances. Journal of Knowledge Management, 8(3), 17-30.
3
Kaplan RS & Norton DP. (1992) The balanced scorecard - Measures that drive performance. Harvard Business Review, Jan-Feb, 71-79.
4
Kaplan RS & Norton DP. (2001a) Transforming the Balanced Scorecard from Performance measurement to Strategic Management: Part I. Accounting Horizons, 15, 87-104.
5
Kaplan RS & Norton DP. (2001b) Transforming the Balanced Scorecard from Performance measurement to Strategic Management: Part 2. Accounting Horizons, 15, 147-160.
6
Rosqvist T, Laakso K & Reunanen M. (2009) Value-driven maintenance planning for a production plant. Reliability Engineering and System Safety, 94, 97-110.
7
Kaplan RS & Norton DP. (2004) Strategy maps: Converting intangible assets into tangible outcomes. Boston: Harvard Business School Press.
8
Neely A, Bourne M & Kennerley M. (2000) Performance measurement system design: developing and testing a process-based approach. International Journal of Operations & Production Management; 20 (10): 1119-1145.
9
Fitzgerald L, Johnston R, Brignall S, Silvestro R & Voss C. (1991). Performance measurement in service business, Chartered Institute of Management Accountants (CIMA), London.
10
Keeney RL & Raiffa H. (1993) Decisions with Multiple Objectives: Preferences and Value Trade-offs. Cambridge University Press.
11
Keeney R. (1992) Value-focused thinking – a path to creative decision making. Harvard University Press.
12
Nunamaker J, Briggs R & Mittleman D. (1996) Lessons from a decade of Group Support Systems Research. Proceedings of the 29th Annual Hawaii International Conference on Systems Sciences, Jan 3-6, Maui, Hawaii.
13
Elfvengren K. (2006) Group Support System for Managing the Front End of Innovation: case applications in business-to-business enterprises. Lappeenranta University of Technology, Acta Universitatis Lappeenrantaensis 239, doctoral dissertation, Lappeenranta, Finland.
14
Jessup L & Valacich J. (1993) Group Support Systems: New Perspectives. Macmillan Publishing Company.
15
Weatherall A & Nunamaker J. (1995). Introduction to electronic meetings. Technicalgraphics.
16
Turban E, Aronson J & Liang TP. (2004) Decision support systems and intelligent systems, 7th ed. Prentice-Hall.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
APPLICATION OF ACOUSTIC EMISSION TECHNOLOGY IN MONITORING STRUCTURAL INTEGRITY OF BRIDGES

Manindra Kaphle a, Andy CC Tan a, Eric Kim a and David Thambiratnam a

a CRC for Integrated Engineering Asset Management, Faculty of Built Environment and Engineering, Queensland University of Technology, Brisbane, Australia.
Bridges are an important part of a nation’s infrastructure and reliable monitoring methods are necessary to ensure their safety and efficiency. Most bridges in use today were built decades ago and are now subjected to changes in load patterns that can cause localized distress, which can result in bridge failure if not corrected. Early detection of damage helps in prolonging the lives of bridges and preventing catastrophic failures. This paper briefly reviews the various technologies currently used in health monitoring of bridge structures and in particular discusses the application and challenges of acoustic emission (AE) technology. Some of the results from laboratory experiments on a bridge model are also presented. The main objectives of these experiments are source localisation and assessment. The findings of the study can be expected to enhance the knowledge of the acoustic emission process and thereby aid in the development of an effective bridge structure diagnostics system.

Key Words: Structural health monitoring, acoustic emission, damage, bridge structures

1
INTRODUCTION
Bridges are an important part of a nation’s infrastructure, and reliable monitoring methods are necessary to ensure their safety and structural well-being. Many bridges in use today were built decades ago and are now subjected to changes in load patterns that cause localized distress, which may result in bridge failure if not corrected. Early detection of damage and appropriate retrofitting can help in prolonging the lives of bridges and preventing failures. Bridge failures can cause huge financial losses as well as loss of lives, an example being the I-35W highway bridge collapse in Minnesota, USA in August 2007, which killed 13 people and injured 145. There are altogether 33,500 bridges in Australia, with a replacement value of about 16.4 billion dollars and about 100 million dollars in annual maintenance expenditure [1]. In the USA, out of a total of 593,416 bridges, 158,182 (around 26.7 percent) were identified as being either structurally deficient or functionally obsolete [2]. These statistics point to the need for cost-effective technology capable of monitoring the structural health of bridges, ensuring they remain operational during their intended lives. The aim of this paper is to compare various methods currently used in health monitoring of bridge structures and in particular to discuss the application and challenges of acoustic emission (AE) technology. Some of the results from laboratory experiments, which are aimed at source location and assessment, are also presented. The findings of the study can be expected to enhance the knowledge of acoustic emission wave propagation and signal analysis techniques and their applications in structural health monitoring.
2
LITERATURE REVIEW
2.1 Structural health monitoring techniques

Visual inspection has been the traditional tool for monitoring bridges. Bridges are inspected at regular intervals for visible defects by trained inspectors. Though simple, visual inspection results depend solely on the inspectors’ judgement, and small or hidden defects may go unnoticed. A range of newer techniques is available today that provide more reliable information than visual inspection. In commonly used vibration monitoring techniques, damage to a bridge is assessed by measuring changes in
the global properties (such as mass, stiffness and damping) of the whole structure and identifying the shifts in natural frequencies and changes in structural mode shapes [3-5]. But some damage may cause only negligible change in dynamic properties and may therefore go unnoticed. Additionally, these methods generally give the global picture, indicating the presence of damage in the whole structure; local methods are often necessary to find the exact location of the damage. Several non-destructive techniques are available for local health monitoring of bridge structures. The most commonly used techniques are based on mechanical waves (ultrasonic and acoustic), electromagnetic waves (magnetic testing, eddy current testing, radiographic testing) and fibre optics [4, 6]. The ultrasonic technique detects the geometric shape of a defect in a specimen using an artificially generated source signal and a receiver [7]. Magnetic particle testing uses powder to detect leaks of magnetic flux [8]. It can be an economic alternative to other methods but cannot be used for nonferrous materials. Eddy current testing is based on the principle that the eddy-current pattern changes due to the presence of a flaw in a structure [4]. It can detect cracks through paint and is effective for detecting cracks in welded joints, but testing can be expensive. In radiographic methods, a suitable energy source is used to generate radiation, and a flaw is detected when the radiation is recorded on the other side of the specimen. Laboratory results of radiographic testing are promising, but the large size of the equipment hampers its use in field investigations. Fibre optics can detect various parameters; displacement and temperature are the common ones. Sensing is based on the intensity, wavelength and interference of the light waves [9]. Advantages include their geometric conformity, capability for sensing a variety of perturbations, and freedom from electric interference [9].
But they can be costly, and placement within the structure during construction may be needed, precluding their use for already built bridges.
2.2 Acoustic emission technique

Acoustic emission (AE) waves are stress waves that arise from the rapid release of strain energy that follows microstructural changes in a material [10]. Common sources of AE are the initiation and growth of cracks, yielding, impacts, failure of bonds and fibre failure. AE waves generated within a material can be recorded by means of sensors placed on the surface. The AE technique involves the analysis of these recorded signals to obtain information about the source of the emission. Physically, AE waves consist of P waves (primary/longitudinal waves) and S waves (shear/transverse waves) and might further include Rayleigh (surface) waves, reflected waves, diffracted waves and others [11]. In plate-like structures, as signals travel away from the source, Lamb waves become the dominant mode of propagation [12]. Lamb waves primarily travel in two basic modes, symmetric (S0, extensional) and asymmetric (A0, flexural), though higher modes such as S1 and A1 can exist [13]. These modes travel with different velocities depending on the frequency and the thickness of the plate. Dispersion curves based on solutions to Lamb’s equation are used to relate velocity to the product of frequency and plate thickness [13]. Some of the advantages of the AE technique over other non-destructive techniques are its high sensitivity, its source localization capability and its ability to provide monitoring in real time, that is, damage detection as it occurs. The study of AE started in the 1950s, and the AE technique found its initial applications in monitoring pressure vessels and aerospace structures. AE was first applied to bridge monitoring in the early 1970s, but the use and study of AE for monitoring bridge structures rose with the rapid increase in computing resources and the development of sensor technology.
2.3 Applications of AE technology for bridge monitoring and challenges faced

AE is well suited to the study of the integrity of bridge structures, as it is able to provide continuous in-situ monitoring and is also capable of detecting a wide range of damage mechanisms in real time [12]. A general overview of applications of AE for monitoring bridges is given in [13]. The application of AE to steel bridges is discussed in [14], and its application to concrete is covered in [11]. A number of previous studies have explored the use of AE technology for monitoring bridge structures made of different materials, such as steel [12, 15, 16], concrete [17-19] and composites [20, 21], as well as masonry bridges [22]. Most of the studies have combined field testing with experiments performed in the laboratory. The traditional approach to AE monitoring involves the use of parameters of the recorded AE signals, such as amplitude, energy content, rise time and duration, in characterising damage [23]. This parameter-based approach is simple but may be insufficient, as not all waveform information is used during analysis. The waveform-based approach involves recording the whole AE signal waveform and studying its features, and is often regarded as better than the parameter-based method. Though the AE technique has been successfully applied to bridge monitoring, several challenges still exist. The large size of bridges creates practical problems, for example with accessing desired areas and the need for a large number of sensors to monitor the whole structure. The solution is to identify critical areas and then to monitor these targeted areas. As a large volume of data is generated during monitoring owing to the high sampling rate, effective data management becomes important. AE signals can arise from a number of sources, so distinguishing the sources of origin is critical in understanding the nature of damage.
For instance, in steel bridges likely sources of AE include crack growth, sudden joint failures, rubbing, fretting and traffic noise. The presence of noise sources that can mask the AE signals from real cracks has been identified as one of the biggest limitations of the AE monitoring system. Different suggestions have been made for noise suppression [24], but scope exists for further research. Proper analysis of recorded data to obtain reliable information about the source is another major challenge in
the AE technique. Frequency-based analysis provides important information about the nature of the source. Along with traditional Fourier-based analysis, other signal analysis techniques such as the short time Fourier transform (STFT) and wavelet analysis (WA) are gaining popularity [7].
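As a concrete illustration of the parameter-based approach described in Section 2.3, the sketch below extracts classical AE hit parameters (peak amplitude, rise time, duration, energy) from a waveform. The threshold, sampling rate and synthetic test burst are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Sketch (assumed values, not from the paper) of the parameter-based
# approach: extracting classical AE hit parameters from one waveform.

def ae_parameters(signal, fs, threshold):
    """Peak amplitude, rise time, duration and energy of one AE hit."""
    above = np.nonzero(np.abs(signal) > threshold)[0]
    if above.size == 0:
        return None                    # no hit: threshold never crossed
    first, last = above[0], above[-1]  # first and last threshold crossings
    peak = np.argmax(np.abs(signal))   # index of the peak amplitude
    return {
        "amplitude": float(np.abs(signal).max()),
        "rise_time": (peak - first) / fs,   # first crossing to peak
        "duration": (last - first) / fs,    # first to last crossing
        "energy": float(np.sum(signal[first:last + 1] ** 2) / fs),
    }

# Synthetic decaying tone burst as a crude stand-in for a recorded hit.
fs = 1e6
t = np.arange(0, 0.002, 1 / fs)
sig = np.exp(-3000 * t) * np.sin(2 * np.pi * 150e3 * t)
params = ae_parameters(sig, fs, threshold=0.1)
```

In practice commercial AE systems (such as the PAC system used later in this paper) compute these parameters in hardware for every hit.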
3
EXPERIMENTS
Since three important aspects of AE monitoring have been identified as source location, source identification and severity assessment [13], laboratory experiments were carried out to address them. Two sets of experiments were carried out to determine the source location in small and large plates using the popular time of arrival (TOA) method. In the TOA method, differences in the arrival times of the signals at different sensors, together with the velocity of the waves, are used to find the location of the source using triangulation techniques [25]. The influence of AE wave travel modes on the source location process was also studied. The next set of experiments was aimed at finding a way to determine the similarity between two different sources of AE signals. Knowing how to differentiate and classify signals from different sources can be expected to help in source identification and assessment. The AE analysing system used was the micro-disp PAC (Physical Acoustics Corporation) system, along with the AEwin software provided by the same company. The sensors used were R15a sensors (PAC), resonant at 150 kHz, and the preamplifiers used had a gain of 20 dB. Data was acquired at a sampling rate of 1 MHz for a duration of 15 ms. The system recorded a hit when signals reached a certain threshold, set at 60 dB.
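The TOA idea can be sketched as follows. This is not the authors' MATLAB implementation: the sensor layout, the use of four sensors (which makes the synthetic check well posed), and the simple grid search that replaces their iterative solver are all illustrative assumptions.

```python
import numpy as np

# Sketch of time-of-arrival (TOA) source location: arrival-time
# differences between sensor pairs, with an assumed wave velocity,
# constrain the source position. A grid search over candidate points
# minimises the mismatch with the observed differences.

def locate_source(sensors, dt, velocity, extent=0.3, step=0.001):
    """sensors: (n, 2) positions in m; dt: delays relative to sensor 0."""
    xs = np.arange(0.0, extent + step, step)
    grid = np.array([(x, y) for x in xs for y in xs])
    # Predicted travel time from every candidate point to every sensor.
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    t = d / velocity
    pred_dt = t - t[:, [0]]                  # differences vs sensor 0
    resid = np.sum((pred_dt - dt) ** 2, axis=1)
    return grid[np.argmin(resid)]            # best-fitting candidate

# Synthetic check on a 300 mm x 300 mm plate.
sensors = np.array([[0.05, 0.05], [0.25, 0.05], [0.25, 0.25], [0.05, 0.25]])
true_src = np.array([0.12, 0.18])            # hypothetical lead-break point
v = 5128.0                                   # longitudinal velocity, aluminium
t_true = np.linalg.norm(sensors - true_src, axis=1) / v
dt = t_true - t_true[0]                      # observable arrival differences
est = locate_source(sensors, dt, v)
```

The grid search recovers the synthetic source; real data adds uncertainty from threshold-crossing timing and, as shown below, from the choice of wave mode velocity.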
3.1 Source location experiments

3.1.1 Source location in small plate

A 300 mm by 300 mm aluminium plate of thickness 3 mm was used as a test specimen. Three sensors were placed at three different locations to record AE signals. The sources of AE signals were pencil lead breaks, which involved breaking 0.5 mm pencil leads at selected locations within the plate. Signals from pencil lead break tests have been found to closely resemble crack signals. MATLAB (R2008a, The MathWorks) codes were used for iteration purposes. Calculated locations were compared with the exact locations to verify the accuracy of the TOA method.

3.1.2 Source location in larger plate

Similar experiments were conducted on a 1.8 m by 1.2 m steel plate of thickness 3 mm that acted as part of the deck of a slab-on-girder bridge model. Again, three sensors were used to record data.
3.2 Signal similarity experiments

In a long steel beam (3 m long, 0.15 m wide, 75 mm thick), two sources of AE signals were generated: pencil lead breaks and steel ball drops (6 mm diameter balls dropped from a vertical height of 15 cm). Ten sets of each test were carried out. The signals were recorded by a sensor placed at a distance of 1.5 m from the source. They were then analysed to determine their similarity, with the aim of finding a way to classify signals from various sources. A parameter called magnitude squared coherence (MSC) was used to measure similarity. The magnitude squared coherence estimate is a function of frequency, with values between 0 and 1, that indicates how well two signals correspond to each other at each frequency, with a value of 1 indicating an exact match (MATLAB help guide R2008a, The MathWorks). MSC is calculated using the power spectral densities and the cross power spectral density of the signals [7].
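The MSC computation can be sketched with `scipy.signal.coherence`, the Python analogue of MATLAB's `mscohere`. The signals below are synthetic broadband stand-ins for repeated hits; the sampling rate matches the experiments, but everything else is an assumption for illustration.

```python
import numpy as np
from scipy.signal import coherence

fs = 1e6                     # 1 MHz sampling rate, as in the experiments
n = 15000                    # 15 ms record
rng = np.random.default_rng(0)

# Two signals sharing a common source waveform (stand-ins for two pencil
# lead breaks) and one unrelated signal (stand-in for a ball drop).
common = rng.standard_normal(n)
plb1 = common + 0.2 * rng.standard_normal(n)
plb2 = common + 0.2 * rng.standard_normal(n)
ball = rng.standard_normal(n)

# Welch-averaged magnitude squared coherence, one value per frequency bin.
f, msc_same = coherence(plb1, plb2, fs=fs, nperseg=1024)
f, msc_diff = coherence(plb1, ball, fs=fs, nperseg=1024)

mean_same = msc_same.mean()  # near 1: the signals share a source
mean_diff = msc_diff.mean()  # near 0: independent sources
```

Averaging the MSC over frequency gives a single similarity score, mirroring the mean values of 0.78 and 0.38 reported later in Section 4.2.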
4
RESULTS
4.1 Source location experiments

For the small plate source location experiment, the longitudinal wave velocity in aluminium (c_L = √(E/ρ) = 5128 m/s, where E is Young’s modulus and ρ the density) was used to calculate the source location. Fig. 1 shows the calculated positions and the exact locations of the pencil lead breaks, along with the sensor locations. A good correlation between the calculated and exact values is seen.
The result of the larger plate source location experiment using the longitudinal wave velocity in steel (c_L = 5188 m/s) is shown in Figure 2a. The exact and calculated values do not show a good match. The result using c = 3000 m/s, a value close to the transverse velocity (c_T = √(E/(2ρ(1+ν))), where ν is Poisson’s ratio), is shown in Figure 2b, where a much better correlation is obtained. The results indicate that the waves recorded by the sensors are not longitudinal waves, as in the first experiment.
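The velocities quoted above can be sanity-checked from these formulas; the material constants below are typical handbook values, assumed here rather than stated in the paper.

```python
import math

def c_longitudinal(E, rho):
    """Longitudinal wave velocity c_L = sqrt(E / rho)."""
    return math.sqrt(E / rho)

def c_transverse(E, rho, nu):
    """Transverse wave velocity c_T = sqrt(E / (2 * rho * (1 + nu)))."""
    return math.sqrt(E / (2 * rho * (1 + nu)))

# Aluminium: E ~ 71 GPa, rho ~ 2700 kg/m^3
cL_al = c_longitudinal(71e9, 2700)       # ~5128 m/s, the small plate value
# Steel: E ~ 211 GPa, rho ~ 7850 kg/m^3, nu ~ 0.3
cL_st = c_longitudinal(211e9, 7850)      # ~5185 m/s, near the quoted 5188 m/s
cT_st = c_transverse(211e9, 7850, 0.3)   # ~3215 m/s, same ballpark as 3000 m/s
```

The transverse estimate of roughly 3200 m/s supports the authors' choice of c = 3000 m/s as "close to the transverse velocity" for the larger plate.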
Figure 1 Source location in small plate
(a) Using c = 5188 m/s
(b) Using c = 3000 m/s
Figure 2 Source location in larger plate with two different velocities

Initial parts of sample signals from the two source location experiments are shown in Figure 3, along with the threshold value (dotted line), demonstrating how the threshold is crossed by the recorded signals.
(a)
(b)
Figure 3 Record of hits in small plate (a) and larger plate (b)

In Fig. 3a, the first arriving wave crosses the threshold and records a hit. On the other hand, in Fig. 3b, the initial portion consists of low-amplitude signals that do not cross the threshold and therefore do not trigger a hit. It is also seen that the initial component arrives about 90 µs before the triggering wave component. Using the velocity of the triggering wave (c = 3000 m/s) and the distance between the source and the sensor (in this case the signals were recorded by sensor S3 for a source at position (0.3, 1.2), so the distance was calculated to be 0.67 m), the velocity of the initial arriving wave can be calculated to be around 5000 m/s. This value is close to the longitudinal velocity of waves in steel. An initial conclusion can be drawn that, although longitudinal waves are present, they have attenuated to a level below the threshold, and the waves that record a hit by crossing the threshold are the transverse waves. More investigation is needed to check whether the waves seen are Lamb wave modes, as Lamb waves are common in large plate-like structures. Detailed frequency analysis of the signals by means of Fourier analysis and the short time Fourier transform (STFT) is expected to be useful in identifying the modes. Therefore, Fourier analysis was carried out for two parts of the sample signal from the large plate source experiment: the initial 90 µs portion and the next 230 µs portion. The frequency response diagrams are shown in Figure 4.
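The velocity estimate above can be reproduced directly from the numbers in the text:

```python
# Values from the text: the triggering wave travels at c = 3000 m/s over
# 0.67 m, and the initial low-amplitude component arrives 90 us earlier.
distance = 0.67        # m, from the source at (0.3, 1.2) to sensor S3
c_trigger = 3000.0     # m/s, velocity of the threshold-crossing wave
lead = 90e-6           # s, head start of the initial component

t_trigger = distance / c_trigger   # ~223 us travel time of triggering wave
t_initial = t_trigger - lead       # ~133 us travel time of initial wave
c_initial = distance / t_initial   # ~5000 m/s, close to c_L in steel
```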
(a) Initial 90 µs
(b) Next 230 µs
Figure 4 FFT of two portions of the signal from the large plate source location experiment

The major difference observed between Figures 4a and 4b is that frequency peaks around 47, 70 and 90 kHz appear in Fig. 4b, indicating that these lower-frequency wave modes arrive late and trigger a hit. To obtain more information, short time Fourier transform (STFT) analysis was carried out using the time-frequency toolbox [26]; the results are shown in Fig. 5.
Figure 5 STFT analysis of the signal

From the STFT plot in Fig. 5, it is clear that waves with frequencies around 100 to 180 kHz arrive at the beginning. Starting at around 90 µs, waves with a large variation in frequencies arrive. The frequencies gradually decrease from around 350 kHz to 30 kHz, with a peak value at around 45 kHz and 250 µs. Another similar wave pattern with decreasing frequency values emerges after 200 µs; these could be the reflected waves. For further insight into the Lamb wave phenomenon, study of the dispersion curve for steel is useful. It is given in Figure 6 and shows the variation of the velocities of the modes S0, A0, S1 and A1 with frequency and plate thickness. Using frequencies f of 100 kHz and 180 kHz and the 3 mm thickness t of the plate (f·t = 0.3 or 0.54 MHz·mm), a group velocity of around 5000 m/s is seen for the S0 mode in Fig. 6. This value matches the calculation made earlier for the initial, fast-arriving component. But for the slower triggering wave (of velocity 3000 m/s) to be the flexural mode (A0), a frequency of around 333 kHz or more is required (f·t = 1 MHz·mm). This high-frequency component is not conspicuous in the FFT analysis in Fig. 4b, though it can be seen in the STFT analysis in Fig. 5. But since signals with a wide range of frequencies are present, it is hard to ascertain that the 333 kHz component crosses the threshold first. It has to be added that, due to sensor sensitivity, frequencies in the higher range are not recorded properly, as sensors tend to be sensitive to signals near their resonant frequency, in this case 150 kHz.
Figure 6 Dispersion curves in steel [13]
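For readers wishing to reproduce this kind of analysis, the STFT of Fig. 5 amounts to a windowed FFT over sliding frames. The sketch below builds a minimal magnitude spectrogram in Python with NumPy on a synthetic stand-in signal; the sampling rate, burst timing and sweep rate are illustrative assumptions, not the experiment's values:

```python
import numpy as np

def stft_mag(x, fs, nperseg=128, noverlap=96):
    """Magnitude spectrogram via windowed FFT frames (a minimal STFT)."""
    hop = nperseg - noverlap
    win = np.hanning(nperseg)
    frames = [np.abs(np.fft.rfft(x[i:i + nperseg] * win))
              for i in range(0, len(x) - nperseg + 1, hop)]
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.array(frames).T  # rows: frequency bins, cols: time frames

# Synthetic stand-in for the AE record: an early 150 kHz burst followed by a
# component sweeping downward from ~350 kHz (all parameters are assumptions).
fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)
early = np.sin(2 * np.pi * 150e3 * t) * (t < 200e-6)
tau = np.clip(t - 200e-6, 0, None)
sweep = np.sin(2 * np.pi * (350e3 - 2e8 * tau) * tau)
freqs, S = stft_mag(early + sweep, fs)
peak_f0 = freqs[np.argmax(S[:, 0])]  # dominant frequency of the first frame
```

The original analysis used the Time-Frequency Toolbox [26]; `scipy.signal.stft` offers an equivalent off-the-shelf routine.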
4.2 Experiment 3
To judge the similarity of signals, magnitude squared coherence (MSC) is used. MSC values between signals from pencil lead break experiments have a mean of 0.78, while MSC values between pencil lead break and ball drop signals have a mean of 0.38. These differences are significant, showing the potential of magnitude squared coherence for signal classification purposes. Sample MSC values between two pencil lead break signals (a) and between pencil lead break and ball drop signals (b), computed using the MATLAB function mscohere (R2008a, The MathWorks), are shown in Fig. 7. Fig. 7a indicates a close match between the signals, especially up to 400 kHz, whereas Fig. 7b indicates less coherence.
(a)
(b)
Figure 7 MSC values versus frequencies
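The estimator behind mscohere is Welch-averaged cross- and auto-spectra, Cxy(f) = |Pxy|^2 / (Pxx Pyy). A NumPy-only sketch on synthetic records (lengths, noise levels and segment size are assumptions) illustrates why similar sources score near 1 and unrelated ones near 0:

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Welch-averaged magnitude squared coherence, as in MATLAB's mscohere."""
    hop = nperseg // 2
    win = np.hanning(nperseg)
    pxx = pyy = pxy = 0.0
    for i in range(0, len(x) - nperseg + 1, hop):
        X = np.fft.rfft(x[i:i + nperseg] * win)
        Y = np.fft.rfft(y[i:i + nperseg] * win)
        pxx = pxx + np.abs(X) ** 2
        pyy = pyy + np.abs(Y) ** 2
        pxy = pxy + X * np.conj(Y)
    return np.abs(pxy) ** 2 / (pxx * pyy)

rng = np.random.default_rng(0)
common = rng.standard_normal(4096)            # shared "source" waveform
x = common + 0.1 * rng.standard_normal(4096)  # two records of a similar source
y = common + 0.1 * rng.standard_normal(4096)
z = rng.standard_normal(4096)                 # an unrelated record

c_similar = msc(x, y).mean()    # similar sources: mean MSC near 1
c_different = msc(x, z).mean()  # unrelated sources: mean MSC near 0
```

`scipy.signal.coherence` implements the same estimator if SciPy is available.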
5 DISCUSSIONS
Experiments in this study have explored the use of frequency analysis for better interpretation of recorded AE data. The ability to determine the location of the source is an important advantage of the acoustic emission technique. AE propagation in solids is a complex phenomenon, as signals travel in various modes and mode conversions occur. For accurate source location, identification of the modes is necessary. It has been seen that progressive attenuation of longitudinal waves precludes their use in locating the source of AE events on a larger scale [15]. The results from the source location experiments confirm this. For signal analysis, STFT is found to be more informative than the Fourier transform, as both the frequencies and the times of occurrence of the different wave modes are seen in the STFT analysis. More rigorous analysis is required to ascertain whether the wave modes recorded are Lamb wave modes, transverse waves or other waves. Finite element analysis and other techniques such as wavelet analysis are expected to be beneficial and will be employed in the next stage. Similar sources have been found to give similar waveforms. Magnitude squared coherence (MSC), based on the power spectral frequency analysis of the signals, provides a simple way of judging signal similarity, as verified by the experimental results. Signal similarity can be an effective tool for signal classification, and thus for source identification and assessment, which are important aspects of the AE monitoring method. A crack waveform obtained from a laboratory experiment can act as a template for distinguishing a similar signal obtained in field testing from other noise sources. The waveform recorded by a sensor is influenced by the path travelled by the signals and by the sensor characteristics; the influence of these parameters needs further consideration.
6 CONCLUSIONS
The study of the acoustic emission technique for monitoring bridge structural integrity is growing continually. Though the AE technique has several distinct advantages over other non-destructive methods of monitoring, it has not yet become the preferred choice in bridge monitoring. The existence of noise sources that can mask AE signals from real damage has been identified as a major hindrance to the use of the AE technique in monitoring bridges. Signal processing techniques, including the frequency analysis tools used in this study, can be effective in distinguishing and removing noise from real signals. Proper analysis of recorded AE signals to deduce useful information about the nature of the source is another challenge. This study has aimed to address some of these issues by analysing the signals for source location and source assessment purposes. Though only laboratory tests have been carried out so far, knowledge from these tests is valuable in interpreting the results from actual
field tests. Plate-like structures and beams are common in bridges, and hence make ideal experimental specimens. As AE is generally used as a local monitoring technique, critical areas where damage is likely are identified and specifically monitored. In the experiments in this study, signals were recorded over distances up to slightly more than 1 m for plates and 1.5 m for beams. Studies on the attenuation of AE signals (not shown in this paper) have shown the possibility of recording AE signals effectively over larger distances (5-7 m); hence a fairly wide area could be monitored using strategically placed AE sensors. The approach in this study was to record experimental data first and then transfer and analyse the data later. Real-time analysis is an attractive option and, especially with the advanced computing resources available today, should be feasible to implement. To conclude, monitoring the structural integrity of bridges by AE provides insight into their current state and helps determine whether further steps are necessary to extend the lives of bridges and to ensure they perform safely and reliably.
7 REFERENCES
1 Austroads, (2004) Guidelines for Bridge Management - Structure Information. Austroads Inc: Sydney, Australia.
2 USDoT, (2006) 2006 Status of the Nation's Highways, Bridges, and Transit: Condition and Performance. U.S. Department of Transportation, Federal Highway Administration, Federal Transit Administration.
3 Shih, H.W., D.P. Thambiratnam, and T.H.T. Chan, (2009) Vibration based structural damage detection in flexural members using multi-criteria approach. Journal of Sound and Vibration. 323, 645-661.
4 Chang, P.C. and S.C. Liu, (2003) Recent research in nondestructive evaluation of civil infrastructures. Journal of Materials in Civil Engineering. p. 298-304.
5 Chang, P.C., A. Flatau, and S.C. Liu, (2003) Review paper: Health monitoring of civil infrastructure. Structural Health Monitoring. 2, 257-267.
6 Chong, K.P., N.J. Carino, and G. Washer, (2003) Health monitoring of civil infrastructures. Smart Materials and Structures. 12, 483-493.
7 Grosse, C.U., et al., (2004) Improvements of AE technique using wavelet algorithms, coherence functions and automatic data analysis. Construction and Building Materials. 18, 203-213.
8 Rens, K.L., T.J. Wipf, and F.W. Klaiber, (1997) Review of non-destructive evaluation techniques of civil infrastructure. Journal of Performance of Constructed Facilities. 11(2), 152-160.
9 Ansari, F., (2007) Practical implementation of optical fiber sensors in civil structural health monitoring. Journal of Intelligent Material Systems and Structures. 18, 879-889.
10 Vahaviolos, S.J., (1996) Acoustic emission: A new but sound NDE technique and not a panacea, in Non-Destructive Testing, D. Van Hemelrijck and A. Anastassopoulos, Editors. Balkema: Rotterdam.
11 Ohtsu, M., (1996) The history and development of acoustic emission in concrete engineering. Magazine of Concrete Research. 48(177), 321-330.
12 Holford, K.M., et al., (2001) Damage location in steel bridges by acoustic emission. Journal of Intelligent Material Systems and Structures. 12, 567-576.
13 Holford, K.M. and R.J. Lark, (2005) Acoustic emission testing of bridges, in Inspection and Monitoring Techniques for Bridges and Civil Structures, G. Fu, Editor. Woodhead Publishing Limited and CRC. p. 183-215.
14 Lozev, M.G., et al., (1997) Acoustic emission monitoring of steel bridge members. Virginia Transportation Research Council.
15 Maji, A.K., D. Satpathi, and T. Kratochvil, (1997) Acoustic emission source location using Lamb wave modes. Journal of Engineering Mechanics. p. 154-161.
16 Sison, M., et al., (1998) Analysis of acoustic emissions from a steel bridge hanger. Research in Nondestructive Analysis. 10, 123-145.
17 Colombo, S., et al., (2005) AE energy analysis on concrete bridge beams. Materials and Structures. 38, 851-856.
18 Shigeshi, M., et al., (2001) Acoustic emission to assess and monitor the integrity of bridges. Construction and Building Materials. 15, 35-49.
19 Yuyama, S., et al., (2007) Detection and evaluation of failures in high-strength tendon of prestressed concrete bridges by acoustic emission. Construction and Building Materials. 21, 491-500.
20 Rizzo, P. and F.L. di Scalea, (2001) Acoustic emission monitoring of carbon-fiber-reinforced-polymer bridge stay cables in large-scale testing. Experimental Mechanics. 41(3), 282-290.
21 Gostautas, R.S., et al., (2005) Acoustic emission monitoring and analysis of glass fiber-reinforced composite bridge decks. Journal of Bridge Engineering. 10(6), 713-721.
22 Melbourne, C. and A.K. Tomor, (2006) Application of acoustic emission for masonry arch bridges. Strain - International Journal for Strain Measurement. 42, 165-172.
23 Vallen, H., (2002) AE testing fundamentals, equipment, applications. NDT.net. 7(09).
24 Daniel, I.M., et al., (1998) Acoustic emission monitoring of fatigue damage in metals. Nondestructive Testing and Evaluation. 14, 71-87.
25 Nivesrangsan, P., J.A. Steel, and R.L. Reuben, (2007) Source location of acoustic emission in diesel engines. Mechanical Systems and Signal Processing. 21, 1103-1114.
26 Auger, F., et al., (1996) Time-Frequency Toolbox - For Use with MATLAB. CNRS (France) and Rice University (USA).
Acknowledgments The authors gratefully acknowledge the financial support from the QUT Faculty of Built Environment & Engineering and the Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM).
APPLICATION OF TEXT MINING IN ANALYSING ROAD CRASHES FOR ROAD ASSET MANAGEMENT
Richi Nayak1, Noppadol Piyatrapoomi2 and Justin Weligamage2
1 Faculty of Science and Technology, Queensland University of Technology, Brisbane, QLD 4001, Australia
2 Road Asset Management Branch, Queensland Government Department of Main Roads, Brisbane, Queensland, Australia
Traffic safety is a major concern world-wide. It is in both the sociological and economic interests of society that attempts should be made to identify the major and multiple contributory factors to road crashes. This paper presents a text mining based method to better understand the contextual relationships inherent in road crashes. By examining and analysing the crash report data in Queensland from the years 2004 and 2005, this paper identifies and reports the major and multiple contributory factors to those crashes. The outcome of this study will support road asset management in reducing road crashes.
Key Words: Text Mining, Road Crashes, Data Analysis

1 INTRODUCTION
Traffic safety is a major concern in many states around Australia, including the state of Queensland. Since 2000, there have been 296 fatalities on average per year in Queensland, as recorded by the Office of Economic and Statistical Research [1]. Data obtained for analysis in this paper shows that during the years 2004 and 2005, there were over 20,000 traffic crash investigation reports recorded involving Queensland motorists. The annual economic cost of road crashes in Australia is enormous - conservatively estimated at $18 billion per annum - and the social impacts are devastating [2]. The cost to the community through these crashes is very high. They also have a devastating impact on the emergency services and a range of other groups. In addition, it is inevitable that insurance companies will have to increase premiums to cover the ongoing cost of insuring those motorists and their vehicles. It is therefore in both the sociological and economic interests of society that attempts are made to identify the major and multiple contributory factors to those crashes. Statistical analysis of road crashes is not a new realm of research by any means. For many years, road safety engineers and researchers have attempted to deal with large volumes of information in order to gain an understanding of the economic and social impacts of car crashes. The hope is that, with this understanding, more efficient safety measures can be put into place to decrease the number of future road crashes [3]. Various data mining and statistical techniques have been used in the past in this domain. Researchers have attempted to investigate crash analysis through ordinary statistical tables and charting techniques [4, 5]. The issue with these techniques is that they limit human involvement in the exploration and knowledge discovery tasks.
Researchers have also attempted advanced data mining methods, including clustering, neural networks and decision trees, to reveal relationships between distractions and motor vehicle crashes. Major focuses of research on road crashes are the use of data mining to analyse freeway or highway accident frequency, the development of models to predict highway incident durations, and the use of data mining in the classification of accident reports [6, 7, 8]. Other studies include the use of data mining and situation-awareness for improving road safety, a comparison of driving performance and behaviour in 4WDs versus sedans through data mining of crash databases [5], and a study of the safety performance of intersections [9]. These studies revealed some interesting results; however, they are unable to properly analyse the cognitive aspects of the causes of the crashes. They often opt to leave out significant qualitative and textual information from data sets, as it is difficult to create meaningful observations from it. Ignoring this textual information results in a limited analysis and less substantial conclusions. Text mining methods attempt to bridge this gap. Text mining is the discovery of new, previously unknown information by automatically extracting it from different written (text) resources. Text mining methods are able to extract important concepts and emerging themes from the collection of text sources. Used in a practical situation, the possibilities for
knowledge discovery through the use of text mining are immense. To our knowledge, few if any published studies have utilised text mining in this data domain; however, earlier studies in the field indicate a real need for text mining in order to better understand the contextual relationships of road crash data. This paper presents a text mining based method to better understand the contextual relationships inherent in road crashes. By examining and analysing the crash report data in Queensland from the years 2004 and 2005, this paper identifies and reports the major and multiple contributory factors to those crashes. Analysis is performed to identify links between common factors recorded in crash reports. Of key concern are the causes of crashes, rather than the consequences. The outcome of this study will support road asset management in reducing road crashes. We hope these findings will be useful for reviewing the limitations of existing road facilities, for planning better public safety measures and, most importantly, for implementing and sustaining long-term public education on road safety, especially among young drivers and males, who historically are involved in a high proportion of road crashes each year.
2 TEXT MINING METHOD
Text mining is the discovery of new and previously unknown information automatically from different text resources, using natural language processing, computational linguistics, machine learning and information science methods [10]. The key element is the linking of the discovered information to form new facts or new hypotheses to be explored further by more conventional means of experimentation [10]. Text mining methods include the steps of processing the input text data, deriving rules and patterns within the newly processed data, and finally the evaluation and interpretation of the output rules and patterns.
2.1 Objectives
The focus of this paper is to determine the most common causes of road crashes so that appropriate measures can be taken by road asset management in the future to prevent these accidents from occurring. The objective is to investigate the nature of the crashes, and of the roads on which they occurred, as reported by Queensland traffic accident investigators within the period 2004 to 2005. Performing text mining on the crash description gives information about the causes of a crash that cannot necessarily be categorised into any particular field within a database. The crash reports, when pre-processed and grouped into clusters, can reveal insights that may formerly have gone unrecognised. The identified unusual and hidden relationships may be useful to government, businesses (insurance organisations, motoring associations) and individuals in better road asset management. Compared with a purely quantitative data set, textual information enables conclusions to be drawn about the circumstances that caused the accidents, as opposed to simply looking at what the accidents were.
2.2 Dataset
The data used for this analysis is collected from files containing information related to road crashes in the state of Queensland. The two files supplied are actual reports produced by traffic accident investigators within the period 2004 to 2005. They contain data for each reported road accident in this period according to 29 attributes, including the date, time, location and road conditions of the crash. More specifically, they include: Atmospheric, Carriageway, Crash_Description, Crash_Nature, Crash_Area, Crash_Date, Crash_Day_of_Week, Crash_Distance, Crash_Divided_Road, Crash_Landmark, Crash_Number, Crash_Speed_Limit, Crash_Time, District, Horizontal_Allignment, Lighting, Owner_ID, Roadway_Feature, Road_Section, Road_Surface, Traffic_Control and Vertical_Allignment. A preliminary review of the dataset helped us determine the most interesting and important attributes to use in our text mining analysis. For the purposes of text mining, the crash description was of highest significance; this is a character attribute containing values of up to 403 characters. The attribute "Street 2" is also an interesting piece of information: one value, "Warrego highway", appeared in over 241 reported crashes. It is noted that over 80% of crashes occurred under clear conditions, as shown in the attribute "Atmospheric", and over 65% of cases occurred during daylight hours. Also, a large number of reported crashes occurred on a sealed and dry road surface, as shown in the attribute "Road_Surface"; only a small number occurred on a wet surface. Another interesting attribute that came out of our observations was "Owner_ID". It represents the gender of the person involved in each of the described crashes, and it appears that almost all of them are MALE. The assumption is that this is due to a limitation of this particular crash report data.
It may or may not be a true reflection of the correct distribution of gender involved in road crashes in a broader sense.
2.3 Process: Step 1 - Pre-processing
Pre-processing of textual information is a time-consuming task but is essential in order to achieve results that are of value to the users of the information. An initial scan through the data set identified a number of potential problems that needed to be addressed before any text mining could take place, mainly noise and various inconsistencies between the different records, possibly because different forensic experts wrote the notes.
Punctuation: Punctuation was often omitted or used extraneously, and no consistent information was apparent in its use. Therefore, to simplify the text mining, all punctuation was removed and replaced with spaces. Specifically, the following characters were replaced: ~`!@#$%^&*()_+-={}[]|\;':"?/,.
Broken Words: The previous step left some words with gaps. There were also many gaps (spaces) within words that needed to be removed in order to obtain any value from the words. Such fragments are not actually "new" words, and during the text mining process they would have a low frequency, meaning they would be unlikely to be used. These gaps were consequently removed in order to obtain a more accurate result. Some examples are "trave lling" and "ro ad".
Inconsistency due to the use of abbreviations and different cases: Another problem encountered with the data set was the many inconsistencies between records. An example of this is "unit 1", for which variants including "u1" and "unit one" were used throughout the data set. This presents a problem in the context of road crash text mining because names of roads and highways could be abbreviated in a multitude of ways. In order to provide any meaningful recommendations, abbreviations also had to be standardised; it was agreed that most value is provided to the end user if the full word is used. Another inconsistency was the use of lowercase and uppercase to represent the same word. Consequently, the data was converted to lowercase to prevent text mining tools from separating the same words starting with upper and lower case. Without this step, identical words would not be grouped, misleading the results and compromising the integrity of the analysis. Converting all the text to lower case also meant there were fewer combinations to code for when transforming abbreviations to their full descriptions.
Spelling mistakes: Spelling mistakes were another common problem encountered during the pre-processing phase. They were removed by filtering through each record and correcting mistakes throughout the data set. If spelling mistakes are not corrected, text mining tools do not recognise the word, nor do they group identical words.
Some examples are "uint", "unti" and "utni", used frequently for the word "unit".
Common phrases: Finally, as part of the formatting functions, common phrases that comprise more than one word were combined into single words to assist the text analysis. The data was processed for words in close proximity to each other to create common combinations. This was important, as combinations such as "green light" and "police station" do not carry the same meaning when their words are not combined: for instance, the car could have been green and crashed into a light pole, rather than having gone through a green light and been hit in the middle of an intersection. Table 1 presents examples of the phrases replaced with concatenated words.
Table 1: Example replacements for common phrases

Original text -> Replacement
traffic light -> traffic-light
red light -> red-light
turning lane -> turning-lane
stop sign -> stop-sign
green light -> green-light
give way -> give-way
lost control -> lost-control
police station -> police-station
parking bay -> parking-bay
failed to stop -> failed-to-stop
road side -> road-side
bruce highway -> bruce-highway
round about -> roundabout
towing a trailer -> towing-a-trailer
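The clean-up steps above can be sketched as a small pipeline: lowercase, strip punctuation, collapse gaps, correct known misspellings, then standardise abbreviations and join common phrases. The mapping tables here are hypothetical stand-ins for the study's full lists, which the paper does not reproduce:

```python
import re

# Hypothetical clean-up tables illustrating the pre-processing rules.
SPELLING = {"uint": "unit", "unti": "unit", "utni": "unit"}
ABBREV = {"u1": "unit 1", "unit one": "unit 1"}
PHRASES = {"red light": "red-light", "green light": "green-light",
           "lost control": "lost-control", "give way": "give-way",
           "bruce highway": "bruce-highway", "round about": "roundabout"}

def preprocess(text: str) -> str:
    text = text.lower()                                                # unify case
    text = re.sub(r"[~`!@#$%^&*()_+={}\[\]|\\;:'\"?/,.-]", " ", text)  # punctuation -> spaces
    text = re.sub(r"\s+", " ", text).strip()                           # collapse gaps
    for wrong, right in SPELLING.items():                              # fix misspellings
        text = re.sub(rf"\b{wrong}\b", right, text)
    for k, v in {**ABBREV, **PHRASES}.items():                         # standardise, join phrases
        text = text.replace(k, v)
    return text
```

Applied to a sample record, `preprocess("Uint 1 lost control at the RED light!!")` yields `"unit 1 lost-control at the red-light"`.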
2.4 Process: Step 2 - Text Mining
The process of text mining includes converting unstructured text data into structured data, clustering the crash reports to identify links between common factors reported in crash reports, and viewing the concept links. We employ the Leximancer tool [13], based on Bayesian theory [14], to assess each word in the dataset and predict the concepts being discussed. It learns which word predicts which concept (or cluster) and forms concepts based on associated terms. It then positions clusters based on the terms they share with other clusters. It constructs a conceptual graphical map by measuring the frequency of occurrence of the main concepts and how often they occur close together within the text. A concept is treated as a cluster. Each term appearing in the text data is analysed to form a concept, allowing black-box discovery of patterns that may not otherwise be known. Concepts that are similar are merged and edited. For example, the concept list included concepts such as turn, turning and turned; direction, north, south, east and west; light and lights; approached and approaching; road and street; lane and lanes. Each of these combinations relates to the same thing, with one word merely a stem or variant of the other. As a result, these similar concepts are merged into one and renamed to reflect the true meaning of the concept; for example, Day, Time, Years and Week are merged into the single concept 'Time'. Some concepts that are not pertinent to the crash scenario being analysed, for example preceded and occurred, are removed. Many concepts are then put together to form a theme.
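The merge-and-count step can be illustrated in a few lines: map variant terms onto canonical concepts, drop stop words, and tally how often concepts co-occur within a report. The merge and stop-word tables below are illustrative, not Leximancer's actual internals:

```python
from collections import Counter
from itertools import combinations

# Hypothetical merge table mirroring the stem/synonym merges described above.
MERGE = {"turning": "turn", "turned": "turn", "lights": "light",
         "north": "direction", "south": "direction", "east": "direction",
         "west": "direction", "street": "road", "lanes": "lane",
         "day": "time", "week": "time", "years": "time"}
STOP = {"the", "a", "at", "and", "was", "into"}

def concepts(report: str) -> set:
    """Map each word to its canonical concept and drop stop words."""
    return {MERGE.get(w, w) for w in report.lower().split() if w not in STOP}

def cooccurrence(reports):
    """Count how often two concepts appear in the same report."""
    pairs = Counter()
    for r in reports:
        pairs.update(combinations(sorted(concepts(r)), 2))
    return pairs

pairs = cooccurrence(["Turning right at the lights",
                      "Turned right into the street"])
```

Here both toy reports reduce to the concept 'turn' alongside 'right', so the pair ('right', 'turn') is counted twice.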
Table 2: Example stop words excluded from the standard stop-word list

Word excluded from the stop list -> Rationale
Bald -> "Bald" tyres may be a cause of an accident.
Hit -> Hit may imply a collision.
Look, Looking -> May refer to where a driver was looking when the accident occurred.
Fast -> Speed may be the cause of an accident.
Indicate, Indicated -> Whether a driver indicated left or right.
Right -> Turning and merging right, as opposed to left, may have more of an impact in collisions (i.e. turning across traffic).
Following -> The car could be following too closely to another vehicle.
Two -> Could refer to Unit two or the number of vehicles involved in the accident.

3 ANALYSIS AND RESULTS
3.1 Dataset examination
The distribution of the data set is displayed in Figures 1 to 5, showing some significant correlations as well as disassociations between various attributes. The atmosphere was usually clear when the crash occurred, indicating that weather conditions were not a big factor in this dataset. The distribution of crash times peaks in the afternoon, especially between 3pm and 5pm, during the afternoon peak period when drivers are tired from working all day. Areas with a speed limit of 60km/h were where most crashes occurred, followed by areas with a speed limit of 100km/h. This is expected, as the majority of roads have a speed limit of either 60km/h or 100km/h. "No traffic control" appears as the most important contributory factor for the crashes in this dataset. The three most significant crash natures were angle, hit fixed obstruction/temporary object, and rear-end. Figure 6 shows that over a quarter of the accidents (28%) are classified as "rear-end", which is a high proportion of the data given there are 14 categories for this attribute. The data in Figure 7 displays the count of accidents grouped by the characteristic of the road where the accident occurred, according to the "Roadway Feature" attribute. The proportion of accidents that occur at some form of intersection (i.e. cross, interchange, multiple road, roundabout, T junction and Y junction) is 95% (excluding Not Applicable). This would indicate that there might not be enough controls in place around intersections to avoid an accident.
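Tallies like the 28% rear-end share are straightforward frequency counts over the categorical attribute. A toy tally (the non-rear-end counts are invented for illustration; only the 28% figure comes from the text) shows the computation:

```python
from collections import Counter

# Toy tally echoing the reported distribution: 28% of 100 hypothetical
# reports are rear-end; the other category counts are invented.
natures = (["rear-end"] * 28 + ["angle"] * 22 +
           ["hit fixed obstruction"] * 15 + ["other"] * 35)
counts = Counter(natures)
total = sum(counts.values())
shares = {k: round(100 * v / total) for k, v in counts.items()}
```

On real data the same computation would run over the Crash_Nature column of the report files.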
Figure 1 - ATMOSPHERIC attribute
Figure 2 - CRASH_TIME attribute
Figure 3 - CRASH_SPEED_LIMIT attribute
Figure 4 - TRAFFIC_CONTROL attribute
Figure 5 - CRASH_NATURE attribute
Figure 6 - Crash Nature Summary
Figure 7 - Roadway Feature Summary
Figure 8 – The cluster map

3.2 Cluster Analysis
The cluster map in Figure 8 shows the different clusters produced by the Leximancer text mining tool after the pre-processing had been performed. Several clusters immediately stand out, suggesting possible causes of road accidents, including intersections, rear-ending and loss of control. The list of concept terms generated by the Leximancer tool includes roundabout, intersection, traffic, bend, lane, injuries, left and right, give-way, rear and speed, among others. These terms alone give a good indication of possible causes of road accidents, as they are the most frequently appearing terms in the sample text once stop words have been removed. The concepts travelling, Unit 1, Unit 2, road, right, vehicle and intersection have the highest relative counts. The two highest frequency concept terms are Unit 1 and Unit 2. Unit 1 occurs 12,774 times whilst Unit 2 occurs only 7,286 times. Similarly, the concept terms ‘left’ and ‘right’ appear 3,111 and 6,446 times respectively. An analysis of why ‘right’ might appear more than twice as often as ‘left’ suggested that more accidents occur in right-hand lanes or whilst performing right-hand turns. Indeed, the relationship between ‘right’, ‘left’ and ‘intersection’ showed that the concept term ‘intersection’ is accompanied by the concept term ‘right’ 73.5% of the time, whilst it is accompanied by ‘left’ only 18% of the time. An immediate assumption regarding the reason for Unit 1 appearing nearly twice as many times as Unit 2 would be that single vehicle accidents occur more frequently than multi-vehicle accidents. However, this assumption may not necessarily be true; for example, Unit 1 may simply be repeated more times than Unit 2 within the same passage. Figure 9 indicates the strength of the relationship between Unit 1 and all other concept words, whilst Figure 10 indicates the strength of
the relationship between Unit 2 and all other concept words. The relationship from Unit 1 to Unit 2 is moderately strong, whilst the relationship from Unit 2 to Unit 1 is significantly stronger. The relative count of the first relationship shows that the term ‘Unit 2’ (7,286) is closely related to the term ‘Unit 1’ 100% of the time. The second relationship, however, shows that the term ‘Unit 1’ (12,774) is closely related to the term ‘Unit 2’ (7,286) only 57% of the time. This indicates that 43% of the time a second vehicle is not involved. It is therefore possible to conclude that nearly half of all road crashes in this case study are single vehicle accidents.
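The asymmetry between the two relationships is just a pair of conditional co-occurrence ratios computed from the quoted counts:

```python
# Concept frequencies quoted in the text (Figs. 9 and 10).
unit1_count = 12774  # reports mentioning Unit 1
unit2_count = 7286   # reports mentioning Unit 2
both = 7286          # Unit 2 essentially always co-occurs with Unit 1

p_u1_given_u2 = both / unit2_count  # = 1.0: Unit 2 never appears alone
p_u2_given_u1 = both / unit1_count  # ~0.57: Unit 1 accompanied by Unit 2
single_vehicle_share = 1 - p_u2_given_u1  # ~0.43: single-vehicle reports
```

This reproduces the 100%, 57% and 43% figures in the paragraph above.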
Figure 9: Unit 1 Concept
Figure 10: Unit 2 Concept
With the above discoveries in mind, the clusters can now be analysed to identify meaning in the grouping of concept words and their relative locations. The first meaningful clusters are the ‘vehicle’ and ‘lost-control’ clusters (as shown in Figure 11). These clusters appear in close proximity to each other, in fact overlapping, indicating a strong relationship between the two clusters. These two clusters include key words such as ‘towing’, ‘trailer’, ‘speed’ and ‘lost-control’. One possible conclusion that could be drawn from these concept words is that drivers can often lose control of their vehicles when speeding. Another is that drivers can easily lose control of their vehicles when towing a trailer. The ‘vehicle’ cluster may also indicate a relationship between these conclusions and ‘single driver’ accidents or ‘single vehicle’ accidents.
Figure 11: Driver Control Cluster
Figure 12: Rear-ending Cluster
The second meaningful cluster (as shown in Figure 12) is the ‘rear’ cluster, which also overlaps with the ‘Unit 2’ cluster. The concept words of the cluster include ‘slowed’, ‘stop’, ‘time’, ‘collided’ and ‘rear’. This combination of words could indicate a scenario of rear-ending, a common form of car crash in suburban areas. The concept terms ‘collided’ and ‘rear’ alone would suggest this to be the case. However, this is also supported by the terms ‘stop’ and ‘time’, indicating that perhaps a vehicle could not ‘stop in time’ and as a result collided with the vehicle in front.
Figure 13: Intersection Cluster
Figure 14: Speed Concept
The ‘intersection’ cluster (as shown in Figure 13) also seems to carry quite meaningful information. An immediate observation is that this cluster overlaps with both the ‘Unit 1’ and ‘Unit 2’ clusters, indicating that accidents at intersections often involve two vehicles. The ‘intersection’ cluster includes interesting concept words such as ‘red light’, ‘green light’, ‘intersection’, ‘intending’ and ‘give way’. These key words might indicate that failing to ‘give way’ at intersections is perhaps a common cause of road accidents. This cluster may also suggest that traffic lights are often involved in crashes at intersections. Although this data alone does not tell us exactly how the traffic lights might be related to the accidents, one conclusion is that perhaps people are not stopping for red lights or simply do not see them. However, the relationship between traffic lights and causes of crashes is an area for further investigation. Although these three clusters were perhaps the most meaningful, further conclusions could be drawn from the remaining clusters with further analysis. These three clusters in particular were chosen for analysis as they indicated possible causes of road accidents. The implications of these findings are discussed later. One last area of interest is how road accidents are influenced by speeding. An analysis of the relationship between the concept word ‘speed’ and all other concept words indicates that speeding may cause more accidents in ‘low speed’ areas such as roads or streets than in ‘high speed’ areas such as highways. As can be seen in Figure 14, the ‘speed’ concept is correlated with ‘road’ or ‘street’ (these two words were grouped in pre-processing) 63.6% of the time, whereas ‘highway’ and ‘motorway’ appear with ‘speed’ only 17.6% and 13.2% of the time respectively.
Whilst the word ‘speed’ alone does not necessarily indicate speeding, it can be assumed that if the word appears in a crash report, speed was an influencing factor.
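The co-occurrence percentages reported above can be reproduced with a simple calculation: for every report containing the anchor concept (‘speed’), count how often each other concept appears alongside it. The sketch below is illustrative only; the tokenised reports and the function name are our own assumptions, not the paper's actual pipeline (which used Leximancer).

```python
from collections import Counter

def cooccurrence_rates(reports, anchor, concepts):
    """For the reports containing `anchor`, return the fraction that
    also contain each concept in `concepts`."""
    with_anchor = [set(r) for r in reports if anchor in r]
    if not with_anchor:
        return {c: 0.0 for c in concepts}
    counts = Counter()
    for words in with_anchor:
        for c in concepts:
            if c in words:
                counts[c] += 1
    return {c: counts[c] / len(with_anchor) for c in concepts}

# Hypothetical pre-processed reports ('road' and 'street' were merged
# in pre-processing, as described in the paper).
reports = [
    ["speed", "road", "lost", "control"],
    ["speed", "road", "collided"],
    ["speed", "highway"],
    ["intersection", "red", "light"],
]
rates = cooccurrence_rates(reports, "speed", ["road", "highway", "motorway"])
```

On this toy data, ‘road’ co-occurs with ‘speed’ in two of the three speed-related reports, mirroring the kind of ratio shown in Figure 14.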
4
DISCUSSION AND CONCLUSION
Several conclusions were drawn from the analysis conducted above. These conclusions involved: (1) the likelihood of a second vehicle being involved in an accident; (2) the likelihood of an accident when turning right as opposed to turning left; (3) the influence of towing a trailer in losing control of a vehicle; (4) the influence of speed in losing control of a vehicle; (5) a person’s inability to stop resulting in a rear-ending accident; (6) the likelihood of more than one vehicle being involved in an intersection accident; and (7) the influence of speed zone category in speeding accidents. From these conclusions, various recommendations can be made. Our proposed recommendations are as follows: (a) greater awareness should be raised regarding following another vehicle too closely, better known as tail-gating; such awareness may help reduce the number of rear-ending incidents. (b) new controls should be determined, and existing controls improved, to prevent rear-ending accidents through signalling by the immobile vehicle; this would involve developing and improving methods of indicating to approaching vehicles that a vehicle is immobile, both for the vehicle itself and for any trailer attached to it. (c) to enhance future analysis of accident data, information capture can be improved by recording the presence or absence of right-hand turning lanes at the intersection (for those accidents occurring at an intersection). (d) new roadway features should be determined and existing ones improved; as mentioned in point (c), turning lanes are used to improve safety at intersections, but where these cannot be installed there may be a need to develop alternative controls. Whether turning lanes actually reduce accidents at intersections could also be the subject of additional research.
(e) drivers purchasing trailers should be made aware of the difficulty of controlling such vehicles and the implications associated with this. (f) speeding campaigns should target low speed zones rather than high speed zones, and speed cameras should be utilized more often in low speed areas to discourage speeding in these problem areas. (g) drivers should be reminded of ‘give way’ rules, and there should perhaps be a greater focus on these rules during driving exams, particularly on right-hand turns. Table 4 lists recommendations for improvement of road assets according to the features highlighted during text mining analysis of the crash data set. Finally, this paper has focused on the causes of road accidents and has not considered the consequences of such accidents, but recognises that this is an equally significant area of concern. Whilst there is some information regarding injuries and damage to vehicles, it is recommended that further research be conducted into the consequences of road accidents.
Table 4: Recommendations according to the features of interest from the crash data set

Other features:
- Losing control of vehicle or rolling on embankment
- Motorcyclist accidents
- Speeding through intersections
- Failure to obey signs
- Collision with inanimate objects
- Blood samples taken
- Accidents at traffic lights
- Police did not attend scene/minor accidents
- Collisions due to right-hand turns
- Towing trailers
- Serious accidents requiring hospitalisation
- Rear-end collisions

Recommendations for improvement (if possible):
- Increased signage of accident-prone areas; install road barriers if feasible.
- Increased regulations for gaining a motorcycle licence; encourage motorcyclists to be more careful on roads at all times.
- This figure is concerning; more driver education is needed; fixed speed cameras could be considered.
- Public awareness campaign might be required; decrease speed limits if warranted; stricter consequences for violation of traffic regulations.
- Reflector strips on guardrails and other inanimate objects.
- Install parking lane for stopped vehicles.
- Increase visibility of traffic lights or install signage on approach to lights; improve traffic lights at Southport and Nerang.
- Consider installing a right-hand turn lane; create signalised intersections.
- Educate drivers with long/heavy loads how to manoeuvre the vehicle properly.
- Investigate each of these accidents separately.
- Remind drivers to keep a safe distance behind other vehicles at all times.
5
REFERENCES
1 Australian Government, Department of Infrastructure, Transport, Regional Government and Road Safety. (2008) Road Safety, http://www.infrastructure.gov.au/roads/safety/, Retrieved October 2008.
2 Queensland Fire and Rescue. (18/09/2002) Firefighters called to record number of road crashes, http://www.fire.qld.gov.au/news/view.asp?id=207, Retrieved October 2008.
3 Abugessaisa, I. (2008) Knowledge discovery in road accidents database – Integration of visual and automatic data mining methods. International Journal of Public Information Systems, 2008 (1), 59-85. Retrieved October 20, 2008, from Emerald Insight database.
4 Gitelman, V. and Hakkert, A. S. (1997) The evaluation of road-rail crossing safety with limited accident statistics. Accident Analysis & Prevention, 29 (2), 171-179. Retrieved October 20, 2008, from Emerald Insight database.
5 Gurubhagavatula, I., Nkwuo, J. E., Maislin, G., and Pack, A. I. (2008) Estimated cost of crashes in commercial drivers supports screening and treatment of obstructive sleep apnea. Accident Analysis & Prevention, 40 (1), 104-115. Retrieved October 20, 2008, from Emerald Insight database.
6 Chatterjee, S. (1998) A connectionist approach for classifying accident narratives. Purdue University.
7 Li-Yen, C., & Wen-Chieh, C. (2005) Data mining of tree-based models to analyze freeway accident frequency. Journal of Safety Research.
8 Tseng, W., Nguyen, H., Liebowitz, J., & Agresti, W. (2005) Distractions and motor vehicle accidents: Data mining application on fatality analysis reporting system (FARS) data files. Industrial Management and Data Systems, 109 (9), 1188-1205. Retrieved October 20, 2008, from Emerald Insight database.
9 Queensland University of Technology. (2008) Retrieved October 22, 2008, from QUT Centre for Accident Research and Road Safety: http://www.carrsq.qut.edu.au
10 Hearst, M. A. (1999) Untangling Text Data Mining. The 37th Annual Meeting of the Association for Computational Linguistics, Maryland, June 20-26 (invited paper). http://www.ischool.berkeley.edu/~hearst/text-mining.html. Accessed 20 April 2007.
11 Jain, A. K., Murty, M. N., & Flynn, P. J. (1999) Data Clustering: A Review. ACM Computing Surveys (CSUR), 31 (3), 264-323.
12 Grossman, D. & Frieder, O. (2004) Information Retrieval: Algorithms and Heuristics. 2nd edn., Springer.
13 Smith, A. E. & Humphreys, M. S. (2006) Evaluation of unsupervised semantic mapping of natural language with Leximancer concept mapping. http://www.leximancer.com/documents/B144.pdf. Accessed 28 May 2007.
14 Han, J., & Kamber, M. (2001) Data Mining: Concepts and Techniques. San Diego, USA: Morgan Kaufmann.
Acknowledgments We would like to thank the CRC for Integrated Engineering Asset Management (CIEAM) for providing us the opportunity to conduct this case study. We would also like to thank the students of ITB239 and ITN239: Enterprise Data Mining for conducting some of the experiments, and Dan Emerson for assisting us in reformatting the figures.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
SCHOOL BUILDINGS ASSETS – MAINTENANCE MANAGEMENT AND ORGANIZATION FOR VERTICAL TRANSPORTATION EQUIPMENTS
Andrea Alonso Pérez a,b, Ana C. V. Vieira b,c and A. J. Marques Cardoso b
a Universidade de Vigo, ETSII, Campus Universitario Lagoas-Marcosende, C.P. 36310 Vigo, España.
b Universidade de Coimbra, FCTUC/IT, Departamento de Engenharia Electrotécnica e de Computadores, Pólo II – Pinhal de Marrocos, P – 3030-290 Coimbra, Portugal.
c Instituto Politécnico de Tomar, Escola Superior de Tecnologia de Tomar, Departamento de Engenharia Electrotécnica, Estrada da Serra – Quinta do Contador, P – 2300-313 Tomar, Portugal.
Maintenance of educational building assets is an important tool, not only for the wellness of students and other users, but also as an economic instrument maximizing item life cycles and minimizing maintenance costs. The law in force regarding the Abolition of Architectural Barriers was conceived to facilitate access to buildings for people with physical disabilities, but even after its endorsement there are still Portuguese schools without easy access, or even none at all, for such individuals. The vertical transportation of persons helps to avoid this discrimination, as it provides access to all building floors, with the highest possible comfort, for people with reduced mobility. This paper addresses the maintenance management and organization of Vertical Transportation Equipments. A standard example of a Maintenance Plan for Portuguese schools will be proposed, focusing on the elements and infrastructures related to vertical transportation of people with physical disabilities. A cost analysis of the maintenance and operation of these infrastructures will also be presented, in order to reach the optimum point that allows the reduction of these costs without reducing the reliability of components, thus improving their useful life and safeguarding the wellness of users. Key Words: Costs evaluation, assets maintenance management, maintenance planning and scheduling
1
INTRODUCTION
For the particular case of school buildings, a great number of maintenance actions may be organized in a systematic way, with foreseeable costs and controlled funds. It is considered adequate to adopt preventive maintenance strategies, condition based or planned and scheduled in time. Although this might be an effective method for preserving schools, preventive maintenance is highly dependent on the availability of human resources and budgets. Preventive maintenance efforts range from visual inspection only, to performance testing and analysis; from minor adjustments, cleaning and/or lubrication, to complete overhauls; from reconditioning, to complete replacement. One must identify the adequate maintenance strategy to follow. Simultaneously, a specific maintenance plan must be developed for each item. The maintenance program may be divided into several lists of planned activities, allowing for unplanned reactive activities and deferred ones. The elevator life cycle is longer than that of other transport systems, which is why its design, operation, safety and accessibility can lag behind new technologies, making access difficult for people with disabilities. For example, in the European Union there are more than four million elevators, 50% of them more than twenty-five years old, and many do not have an appropriate level of safety and accessibility by today's standards [1].
The Portuguese Law 123/97 of the 22nd of May of 1997 enumerates the technical standards that regulate building accessibility for people with disabilities [2]. A study developed in the framework of the Portuguese Secondary School Level of Education, the CARMAEE study [3], indicates that only 64.87% of the participating schools report the amount of money spent on outsourcing, for example on maintenance of elevators and acclimatization systems. According to the same study, Portuguese schools have adapted their buildings for the access of people with disabilities only with the following elements and percentages [3]:
- Elevators: 40.24%
- Ramps: 71.60%
- Wheelchair elevator platforms: 2.96%
To accomplish the maintenance plan, a period of time has to be considered that may, according to the literature, guarantee the reliability of the elevator and the safety of its users [2-8]. To settle on the maintenance strategy to apply, one must consider the complexity of the procedures, the process time, and the technicians necessary. According to the literature, maintenance operations regarding vertical transportation of persons may be grouped into two different types. The first group includes maintenance activities that can be accomplished by an employee of the school, due to their low incidence on the reliability of the equipment and the safety of users. This group of maintenance activities includes ground cleaning and button illumination, for example. The second group of vertical transportation maintenance activities includes those activities that, according to current law, must be accomplished by certified companies, Elevators Maintenance Companies (EMC). These activities have a technical character and, since they have a high incidence on element reliability and safety, they should be part of the maintenance objectives.
When developing the maintenance program, two different types of maintenance procedures were considered: inspection procedures and maintenance action procedures. By inspections, one means the procedures regarding maintenance activities aiming to verify the global state of operation of the elevator; they can be partial inspections or tests. The inspections will be made on preset dates and, according to their technical complexity, they can be carried out by a school employee or a technician from the EMC. Maintenance action procedures include operations triggered by inspection results, or by bad operation or damage of the equipment. As with the inspections, they will be accomplished by a technician or an employee, depending on their technical complexity. The operations to be accomplished by the technicians of the EMC shall be performed according to a preventive maintenance plan, including procedures differentiated by months, quarters, etc. On the other hand, the maintenance activities to be accomplished by school employees shall result from the following events:
- At the beginning of the academic period: preventive maintenance.
- During the academic period: preventive maintenance.
- At the end of the academic period: preventive maintenance.
- When there is damage or bad operation: corrective maintenance.
It is considered opportune to use preventive maintenance as the general model, in order to accomplish the defined objectives, and to resort to corrective maintenance when necessary. The maintenance plan is a very important tool, since its periodicity will regulate the vertical transport function, with direct effects on reliability and safety. There are several manuals, legislation, normative and technical documents according to which tests and inspections of the equipment should be accomplished following a stipulated periodicity, in order to increase equipment reliability and safety.
The elevator is a fully automatic piece of equipment. The departure, acceleration and stop functions are accomplished automatically in response to the calls, without the need for a human command. Therefore, the accomplishment of regular inspections is indispensable to guarantee users safe and comfortable trips. According to elevator manufacturers, elevators must be checked every month to guarantee user safety, equipment reliability and performance [4]. All maintenance reports must be kept near the equipment documents. These registers should be kept up to date whenever there are modifications to the equipment characteristics, and they should be available in the School Unit for those in charge of the maintenance as well as for the organisms responsible for carrying out tests and inspections.
2
MAINTENANCE PLAN OF VERTICAL TRANSPORT ELEMENTS
2.1 Maintenance activities to be accomplished by school employees
Schools that have among their staff employees prepared to accomplish maintenance work may organize these persons' work to accomplish the proposed activities. This will not only allow them to optimize productivity but will also reduce costs, which is important nowadays, since schools usually struggle with reduced budgets. Table 1 shows the activities to accomplish in each period of the year. Moreover, employees can also accomplish other daily recommendable inspections such as [6]:
- Test if there are abnormal noises when the elevator is in movement.
- Test that the platform moves without any interference.
- Test that the bridge plate moves without any interference when it is folding or unfolding.
- Test that the front barrier closes and blocks immediately after the platform ascends from the ground.

Table 1 Maintenance activities to be accomplished by school employees [5]

Before the academic period:
- Turn on the equipment according to manufacturer orientations.
- Travel inside the elevator, stopping on every floor.
- Test the alarm button.
- Test the opening door button.
- Verify the telephone operation.
- Turn on the lights of the floors and platforms of the elevators, also verifying if the access doors to the corridors are locked.
- Verify the equipment integrity.

During the academic period:
- Coordinate the cabin and corridors cleaning.
- Verify that there are no objects obstructing the accesses and circulations.
- Travel inside the elevator, stopping on every floor.
- Test the alarm button.
- Test the opening door button.
- Verify the telephone operation.
- Verify the equipment integrity.

After the academic period:
- Send the elevator to the first floor.
- Turn off the elevator in the command board, according to the manufacturer's orientation, and lock the board.
- Verify that the corridors elevator doors are locked, and turn off the corridor lights.

In case of damages or bad operation:
- Contact the EMC, requiring assistance.
- Place a poster indicating “Elevator in Maintenance”.

Reprinted from: School Elevators Usage and Maintenance (in Portuguese), 2005.

Elevator damages and/or bad operation may usually be associated with the following scenarios [5]:
- The minimum time the cabin door remains open is less than 3 seconds.
- The unevenness between the cabin floor and the pavement floor does not guarantee safe and perfect accessibility.
- There are buttons without light, buttons that do not obey the command, or the elevator does not answer the call at the pavement.
- There is alarm information on the cabin or pavement panels.
- There is cabin swinging, or excessive vibrations or noises during transport.
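The event-driven activities in Table 1 map naturally onto a lookup table keyed by school-year event. The sketch below is a minimal illustration; the shorthand event keys and the function name are our own assumptions, and only two of the four event groups are shown.

```python
# Event-driven checklist distilled from Table 1 (shorthand keys are ours).
CHECKLIST = {
    "before_term": [
        "Turn on the equipment according to manufacturer orientations",
        "Travel inside the elevator, stopping on every floor",
        "Test the alarm button",
        "Test the opening door button",
        "Verify the telephone operation",
        "Verify the equipment integrity",
    ],
    "damage_or_bad_operation": [
        "Contact the EMC, requiring assistance",
        "Place a poster indicating 'Elevator in Maintenance'",
    ],
}

def tasks_for(event):
    """Return the checklist for a school-year event, empty if unknown."""
    return CHECKLIST.get(event, [])
```

Keeping the plan as data rather than prose makes it straightforward to print job sheets per event or to log completion of each item.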
Cleaning deserves special mention, since it is usually outsourced despite its importance in avoiding accidents and ensuring user safety. Water infiltration in the facilities is harmful to the equipment, therefore the cabin, ground and walls of the elevator must always be kept dry. This procedure is also important to avoid slipping and possible injuries to users. Special care should be taken in cleaning the cabin interior, following the orientation of the manual supplied by the elevator manufacturer. For both stainless steel and laminated finishes, soap or neutral detergent and water must be used, applied with a cloth or sponge. Aggressive chemical products or abrasive materials, such as steel wool, should never be used [6].
2.2 Maintenance Plan to be accomplished by certified Elevators Maintenance Companies (EMC)
The legislation that regulates elevator maintenance in Europe is scarce. With regard to school facilities, for example, it only states the necessity for an inspection every two years by competent entities. On the other hand, manufacturers and other entities offer several recommendations. Although the legislation on safety inspections of elevators may differ according to locality, country, etc., technicians usually employ similar procedures [7]. In the hoist way, in the ditch, or on top of the cabin, the technician must organize the inspection in a safe and orderly way to avoid injuries, not only to himself but also to people occupying the immediately adjacent areas. To accomplish this work, loose clothing must be avoided. The technician must be equipped with safety glasses or protective glasses in the hoist way, since airborne particles swept there might cause eye injury. Minimum tools include a flashlight, a 183 cm wooden ruler, a magnifying glass, a small mirror, and a multimeter to measure voltages and grounds [7]. The minimum specific maintenance for all traction elevators may be accomplished according to the periodicity and procedures presented in Table 2:
Table 2 Maintenance plan to accomplish by the EMC [7, 8]

Weekly:
- Perform general inspection of machinery, sheaves, worm and gear motor, brake and selector of floor controllers. Action: lubricate as necessary.
- Empty drip pans, discard oil in an approved manner and check reservoir oil level. Observe brake operation. Action: adjust or repair if required.
- Inspect machinery, contacts, linkage and gearing. Action: lubricate.
- Inspect brushes and switches. Action: clean and repair.
- Inspect controllers, selectors, relays, connectors, contacts, etc. Action: clean and repair.
- Ride car and observe operation of doors, levelling, reopening devices, pushbuttons, lights, etc. If rails are lubricated, check conditions and service lubricators.
- Verify lamps in elevator cars, machine room, pit, hall lanterns, etc. Action: replace all burned out.
- Inspect all machine room equipment. Action: remove garbage, dust, oil, etc.
- Clean trash from pit and empty drip pans.
- Check condition of car switch handle. Action: replace emergency release glass.
- Check governor and tape tension sheave lubrication. Action: lubricate.
- Verify lamps in all lanterns, push buttons, car and corridor position indicators, direction indicators and in other signal fixtures. Action: replace all burned out.
Bi-monthly:
- Observe operation of elevator throughout its full range of all floors. This procedure serves to test controls, safety devices, levelling, relieving, and other devices.

Quarterly:
- Check door operation: brakes, checks, linkages, gears, wiring motors, check keys, set screw, contacts, chains, cams and door closer. Action: clean, adjust and lubricate.
- Check selector, brushes, dashpots, travelling cables, chain, pawl magnets, wiring, contacts, relays, tape drive and broken tape switch. Action: clean, adjust and lubricate.
- Check car: car door and gate tracks, pivots, hangers, car grill, side and top exits. Action: clean, adjust and lubricate.
- Inspect interior of cab: test intercommunication system, normal and emergency lights, fan, emergency call system or alarm, car station. Action: repair.
- Visually inspect controller, contacts and relays. Action: adjust or replace.
- Observe operation of signal and dispatching system. Action: repair.
- Inspect compensating hitches, buffers, rope clamps, slack cable switch, couplings, keyways and pulleys. Check load weighing device and dispatching time settings. Action: clean, adjust, repair, lubricate and replace.
- Check oil level in car and counterweight oil buffers. Action: add oil as required.
- Check brushes and commutators. Inspect switches for finish, grooving, eccentricity and mica level. Action: clean, adjust, repair, replace or refinish to provide proper commutation.
- Inspect brushes for tension, seating and wear. Action: replace or adjust.
- Check car ventilation system, car position indicators, direction indicators, hall lanterns and car and hall buttons. Action: replace or adjust.
- Check levelling operation: levelling switches, hoist way vanes, magnets, and inductors. Action: clean, adjust and repair.
- Check hoist way doors: car door or gate tracks, hangers, and up thrust eccentrics, linkages, jibs and interlocks. Action: clean and lubricate.
- Check car door or gate tracks, pivots, hangers. On hoist way doors: tracks, hangers and eccentrics, linkages, jibs and interlocks. Action: clean, adjust and lubricate.
- Inspect all fastenings and ropes for wear and lubrication: governor and hoist ropes. Inspect all rope hitches and shackles and equalize rope tension. Action: clean, lubricate, balance the rope tension, repair.
- Inspect hoist reduction gear, brake and brake drum, drive sheave and motor, and any bearing wear. In the car, test alarm bell system: fixtures, retiring cam devices, chain, dashpots, commutators, brushes, cam pivots, fastenings. Action: clean and adjust.
- Test emergency switch: inspect safety parts, pivots, setscrews, switches, adjustment of car and counterweight jibs, shoe or roller guides. Action: replace, lubricate and repair.
- In the pit, verify compensating sheave and inspect hitches. Action: lubricate.
- Inspect governor and tape tension sheave fastenings.
- Verify oil drip pans. Action: clean and empty.
- Verify all parts of safeties and moving parts. Action: clean and lubricate.
- Check clearance between safety jaws and guide rails. Action: adjust.
- Visually inspect all safety parts.

Semi-Annually:
- Examine governor rope. Action: clean and replace.
- Check controller, alignment of switches, relays, timers, contacts, hinge pins, etc. Action: clean with blower, adjust and lubricate.
- Check all resistance tubes and grids, oil in overload relays, settings and operation of overloads. Action: lubricate and adjust.
- Inspect fuses and holders and all controller connections.

Annually:
- In hoist way, examine guide rails, cams and fastenings. Action: clean.
- Inspect and test limit and terminal switches. Action: replace or repair.
- Check car shoes, jibs or roller guides. Action: adjust or replace.
- Check all overhead cams, sheaves, sills, bottom of platform, car tops, counterweights and hoist way walls. Action: clean.
- Inspect sheaves to ensure they are tight on shafts. Sound spokes and rim with hammer for cracks. Action: adjust or repair.
- Examine all hoist ropes for wear, lubrication, length and tension. Action: replace, lubricate and adjust.
- On tape drives, check hitches and broken tape switch. Action: replace or repair.
- Check car stile channels for bends or cracks; also car frame, cams, supports and car steadying plates. Action: replace or repair.
- Examine moving parts of vertical rising or collapsible car gates. Check pivot points, sheaves, guides and track wear. Action: lubricate and replace.
- Inspect guide shoe stems. Action: lubricate and replace.
- Check governor and tape tension sheave fastenings.
- For bi-parting doors, check: chains, tracks and sheaves, door contacts. Action: clean, lubricate, repair or replace.
- Clean car and counterweight guide rails using a non-flammable or high flash point solvent to remove lint, dust and excess lubricant.
- Examine brake cores, brake linings, and inspect for wear. Action: remove, clean, and lubricate; correct excess wear and adjust.
- Examine reservoirs of each hoisting motor and motor generator. Action: drain, flush and refill.
- Check all brushes for neutral settings, proper quartering and spacing on commutators. Action: restore.
- Group supervisory control systems installed must be checked out. The systems, dispatching scheduling and emergency servicing must be tested and adjusted in accordance with manufacturer's literature.

Reprinted from: Procurement Services Group: Elevator Maintenance, 2000 and Maintenance Engineering Handbook, 2002.
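The periodic plan in Table 2 lends itself to a simple scheduling structure: each task group has a periodicity, and the next due date follows from when it was last performed. The sketch below is illustrative; the day counts assigned to each periodicity and the example log are our own assumptions, not values from the sources cited in Table 2.

```python
from datetime import date, timedelta

# Approximate intervals in days for each periodicity in Table 2 (assumed).
PERIODICITY_DAYS = {
    "weekly": 7,
    "bi-monthly": 61,
    "quarterly": 91,
    "semi-annually": 182,
    "annually": 365,
}

def next_due(last_done, periodicity):
    """Next due date for a task group, given when it was last performed."""
    return last_done + timedelta(days=PERIODICITY_DAYS[periodicity])

def overdue(log, today):
    """Return the task groups whose next due date has passed.
    `log` maps periodicity -> date last performed."""
    return sorted(p for p, last in log.items() if next_due(last, p) < today)

# Hypothetical maintenance log for one elevator.
log = {"weekly": date(2009, 9, 1), "quarterly": date(2009, 5, 1)}
late = overdue(log, date(2009, 9, 28))  # both groups are past due here
```

A structure like this lets the school or the EMC flag overdue groups automatically instead of tracking the plan on paper.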
Elevator maintenance must consider three different areas: hoist way, pit and machine-room [7].
- Hoist way:
• Mechanical equipment: sheaves, buffers, door closers, floor selectors, limit switches, hoist way door hangers, closers, and door gibs, interlocks, to be sure they are in the right form.
• Hoist and governor ropes and their fastenings, to avoid wear and rust.
• Travelling cables, to verify their state, and cables for vibration and wear or tear.
• Rails, for correct alignment.
• Steadying plates, for excessive fluctuations.
- Pit: oil level in the buffer, rope stretch, debris or water leaks and safety shoes.
- Machine-room:
• Motors and generators should be clean, undercut and conditioned correctly, and brushes verified.
• Bearings must be inspected for wear or tear.
• Electric equipment must be grounded.
• Brakes and brake belts, to avoid defective safety.
• Shafts into the pulley, for correct alignment.
• Gears and bolts must be inspected to ensure that they are not loose or broken.
• Controllers, for correct operation.
• Switches.
• Safety equipment, for blocked or shorted contacts.
• Governors, to certify that the rope is appropriately placed in the sheave and correctly lubricated, and that rust doesn't affect its operation.
• Landing equipment, to verify whether broken buttons exist and whether the illumination is correct.
Maintenance operations for a hydraulic elevator are the same as for a traction elevator, but in addition the following must be considered [7]:
- Packing of the upper part of the cylinder and piston, to make sure that the oil is not draining excessively or returning to the tank through a deviation.
- Hydraulic machines, to be sure that there is enough oil so that the car still reaches the top landing with oil left in the tank.
- Load oil average, for cleaning.
It is recommended that safety tests be conducted by the elevator manufacturer.
In addition to ensuring compliance with safety regulations, inspections contribute to extending the equipment's life through preventive care and adjustment. A comprehensive elevator maintenance program may also include a full-load safety test, to be carried out every 5 years, and annual no-load tests [7].
3
MAINTENANCE CONTRACT
As previously stated, school organizations must decide how to organize the elevator maintenance: whether to accomplish maintenance internally, or to hire the maintenance services of an independent company or those offered by the company that manufactured and installed the elevators [5, 7]. EN 13269:2006 enunciates the instructions for the preparation of maintenance contracts. Maintenance contracts may be differentiated by the coverage offered, namely total maintenance or partial/conservative maintenance [9]. Partial or conservative maintenance is the simplest form of contractual service. This type of maintenance contract covers lubrication and minor cleaning of elevator equipment, and usually does not include parts replaced during maintenance activities or overtime emergency calls. The maintenance agreement limits the responsibility of the contractor and, while protecting the elevator contractor, such contracts are potentially costly to the owner. Contract prices shall be adjusted every year according to the changes in labour costs [7]. Total maintenance contracts usually cover both the parts and the work demanded to maintain the elevator system. Emergency call service may or may not be included, depending on individual needs. The exclusions in total maintenance contracts are normally fewer than in partial/conservative maintenance contracts, and the contractor's responsibility is larger since it is responsible for the preventive maintenance. Whenever a total maintenance contract excludes mandatory safety tests, the school administration must provide trained employees to shut down the equipment in case unsafe conditions develop. The omission of these items by the contractor increases the maintenance cost supported by the owner [5, 7]. Contract prices and services vary considerably from company to company, depending on the contract type and services required.
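The annual price adjustment tied to labour costs can be expressed as a simple escalation formula. The sketch below is our own illustration of one plausible clause, a proportional adjustment by the change in a labour cost index; the actual formula is whatever the contract stipulates.

```python
def adjusted_price(base_price, labour_index_change):
    """Annual contract price adjusted proportionally to the change in
    labour costs (e.g. 0.03 for a 3% rise). The proportional formula is
    an assumption; the contract's own escalation clause governs."""
    return round(base_price * (1.0 + labour_index_change), 2)

# A 3% labour cost rise on a 2400 EUR total maintenance contract.
new_price = adjusted_price(2400.00, 0.03)
```

With a 3% rise, the contract would move from 2 400.00 to 2 472.00 EUR under this assumed clause.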
Table 3 presents some considerations about procedures to be followed before making a maintenance contract or when revising it [5].
Table 3 Advice on how to choose a maintenance contractor and procedures to adopt during the contract's life [5]

Terms to consider when choosing a maintenance contractor:
- Demand the company's certificate of registration with the regulating agency, as well as the document corroborating that the responsible engineer is properly registered.
- Verify if the company has a vehicle, telephone, workshop, and parts for the installed equipment.
- Verify if the contract includes 24-hour attendance.
- Verify the references and services given to other customers.

Procedures to adopt during the contract's life:
- Demand the technician's functional identity.
- Control frequency and visiting hours.
- Demand a copy of the service record filled out and signed by the responsible person.
- Demand receipts, a guarantee of the use of original parts, and the executed services.

Reprinted from: Foundation for the Education Development, 2005.
Together with the maintenance contract, the following documents and information have to be provided, so that they can be easily accessed whenever necessary:
- The date when the equipment was put into service.
- Basic characteristics.
- Characteristics of the traction handles.
- Characteristics of the parts for which an inspection certificate was requested.
- Drawings of the installation.
- Schematic diagrams of the electric circuits.
- Licences of installation and operation (or the documentation that substitutes them, according to legislation).
- Technical guarantee of the equipment.
- Maintenance contract.
- Technical dossier of the equipment (supplied by the manufacturer and/or constructor).
- Manuals of the equipment (from the manufacturer).
- Maintenance manual of the equipment.
- Inspections, tests and verifications.

4 MAINTENANCE COSTS

Nowadays the maintenance costs depend on several factors, such as:
- Load capacity
- Speed
- Number of floors
- Automatic/semi-automatic doors
- Type of control: universal, with memory, etc.
- Type of maintenance
- 24-hour service
The following is an example of annual maintenance costs for a standard elevator, provided by an elevator company:
- Total maintenance: €2,400.00
- Partial maintenance: €1,056.00
- 24-hour service: €180.00

There is a large difference between the total and the partial maintenance cost. The school management executive board has to consider both options and choose the one that better fits its maintenance objectives.
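The comparison above is simple arithmetic, and can be sketched as a small calculation. The figures are the illustrative quotes given in the text; the function name and contract labels are hypothetical, not part of any real pricing system.

```python
# Example annual figures quoted above (EUR/year); illustrative only.
TOTAL_MAINTENANCE = 2400.00    # parts and labour included
PARTIAL_MAINTENANCE = 1056.00  # lubrication and minor cleaning only
SERVICE_24H = 180.00           # optional emergency call-out cover

def annual_cost(contract: str, with_24h: bool = False) -> float:
    """Annual contract cost for the chosen coverage ('total' or 'partial')."""
    base = TOTAL_MAINTENANCE if contract == "total" else PARTIAL_MAINTENANCE
    return base + (SERVICE_24H if with_24h else 0.0)

# Partial coverage plus 24-hour service is still well below total coverage:
partial_full = annual_cost("partial", with_24h=True)   # 1236.00
saving = TOTAL_MAINTENANCE - partial_full              # 1164.00
```

A board weighing the two options would of course also factor in the cost of parts and call-outs excluded from the partial contract, which this sketch ignores.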
Comparing this information with that from the CARMAEE project, it can be concluded that the average cost of €1,679.05 for elevator maintenance in Portuguese schools is within the limits given above.

5 CONCLUSIONS
There are more than four million elevators in the EU, 50% of which are more than twenty-five years old and do not offer a level of safety and accessibility appropriate for today's needs. The life cycle of elevators is longer than that of other transport systems, which is why their design, operation, safety and accessibility can lag behind new technologies, making access difficult for people with disabilities. This paper proposes a maintenance plan for the vertical transport equipment of Portuguese schools, with the aim of providing some general guidelines for school management executive boards.
6 REFERENCES
1 Technical Specification CEN/TS 81-82:2008 (2008) Safety rules for the construction and installation of lifts. Existing lifts. Part 82: Improvement of the accessibility of existing lifts for persons including persons with disability, AENOR, Madrid.
2 Ministry of Solidarity and Social Security (1997) Law nº 123/97 of 22 May 1997 (in Portuguese), Diário da República nº 118, 22 May 1997, Portugal.
3 Cação, C., Silva, F. & Ferreira, H. (2004) CARMAEE: Characterization of the Maintenance in School Buildings (in Portuguese), Final Year Project Dissertation, Department of Electrical and Computer Engineering, University of Coimbra, Portugal.
4 ThyssenKrupp Elevators (2005) Your Elevator: Preventive Maintenance (in Portuguese), 12th edition.
5 Foundation for the Education Development (2005) School Elevators Usage and Maintenance (in Portuguese), São Paulo.
6 Phantom, Phantom Elevator Maintenance Guidelines (in Spanish).
7 Robertson, J. (2002) Maintenance of Elevators and Special Lifts. In: L.R. Higgins & R.K. Mobley (eds) Maintenance Engineering Handbook, 6th edition, McGraw-Hill, United States of America.
8 New York State Office of General Services (2000) Procurement Services Group: Elevator Maintenance, New York.
9 European Committee for Standardization CEN/TC 319 (2006) EN 13269: Maintenance. Guidelines on Preparation of Maintenance Contracts, Brussels.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
A REVIEW ON THE OPTIMISATION OF AIRCRAFT MAINTENANCE WITH APPLICATION TO LANDING GEARS

P. Phillips¹, D. Diston¹, A. Starr², J. Payne³ and S. Pandya³

¹School of Mechanical, Aerospace and Civil Engineering, University of Manchester, Manchester, UK
²School of Aerospace, Automotive and Design Engineering, University of Hertfordshire, Hatfield, UK
³Messier Dowty Ltd, Gloucestershire, UK

Current maintenance programmes for key aircraft systems such as the landing gears are made up of several activities based around preventive and corrective maintenance scheduling. Within today's competitive aerospace market, innovative maintenance solutions are required to optimise aircraft maintenance, for both single aircraft and the entire fleet, ensuring that operators obtain the maximum availability from their aircraft. This has led to a move away from traditional preventive maintenance measures towards a more predictive maintenance approach, supported by new health monitoring technologies. Future aircraft life will be underpinned by health monitoring, with the ability to quantify the health of aerospace systems and structures offering competitive decision-making advantages that are now vital for retaining customers and attracting new business. One such aerospace system is the actuator mechanism used for the extension, retraction and locking of the landing gears, whose future will see the introduction of electromechanical replacements for the hydraulic systems present on the majority of civil aircraft. These actuators can be regarded as mission-critical systems that must be guaranteed to operate at both take-off and landing. Health monitoring of these actuation systems can guarantee reliability, reduce maintenance costs and increase their operational life span. Aerospace legislation dictates that any decisions regarding maintenance, safety and flight worthiness must be justified and strict procedures followed.
This has inevitably led to difficulties for health monitoring solutions in meeting the necessary requirements for aerospace integration. This paper provides the motivation for the research area by reviewing current aircraft maintenance practices and how health monitoring is likely to play a future strategic role in maintenance operations. This is achieved with reference to current research work into developing a health monitoring system to support novel electromechanical actuators for use in aircraft landing gears. The difficulties associated with integrating new health monitoring technology into an aircraft are also reviewed, with perspectives given on the reasons for the current slow integration of health monitoring systems into aerospace.

Keywords: Maintenance management, health monitoring, actuators, landing gear

1 INTRODUCTION
The airline industry is one of the most distinctive businesses in the world, involving a variety of complex operations: moving aircraft loaded with passengers and cargo over large distances and scheduling flights, crews and maintenance. Together these lead to substantial operating and maintenance costs, measured in both time and money. Aircraft maintenance forms an essential part of airworthiness, with its main objective being to ensure a fully serviced, operational and safe aircraft. If an aircraft is not maintained to the required level, passenger and crew safety is inevitably put at risk. Table 1 lists examples of incidents whose probable cause was insufficient maintenance [1]. There is also the risk that the aircraft may be unable to take off, leading to passenger dissatisfaction; likewise, it is plausible that the aircraft may be forced to land in undesirable locations, where spare parts or maintenance expertise are unavailable. Maintenance actions therefore have to be carried out at regular scheduled intervals, yet ideally be performed at minimum cost to the operator.
Table 1: Aircraft maintenance-related accidents

Aloha Airlines 737 (Hawaii, 1988): Inspection failure led to fuselage failure
BM AirTours 737 (Manchester, 1989): Wrong bolts led to windshield blowout
United Airlines DC10 (Iowa, 1989): Engine inspection failure led to loss of systems
Continental Express (Texas, 1991): Tail failure as task not completed before flight
Northwest Airlines (Tokyo, 1994): Incomplete assembly led to engine separation
ValueJet (Florida, 1996): Fire in hold due to incendiary cargo
One of the key systems that have to be maintained and kept fully operational is the aircraft landing gear. Landing gears are an essential part of any aircraft, even though they remain unused for most of the flight. Their main task is to absorb the horizontal and vertical energy of the aircraft as it touches down on the runway. During flight most modern aircraft retract and stow their landing gears, extending them only during the approach to landing. Aircraft extend and retract their landing gears using a variety of methods, including pneumatic, hydraulic or electric motor-driven drives, with the majority of retraction mechanisms being hydraulically powered. Most landing gears contain three actuators. The largest is the retraction actuator, which generates a force about a pivotal axis in order to raise the landing gear against weight and aerodynamic loads. The other two are the lock-stay actuator, which locks the landing gear in place once extended, and the door actuator, which ensures that the bay doors are successfully opened and closed for landing gear deployment. Figure 1 shows a typical arrangement of the down-lock and main retraction actuator positions.
Figure 1: Airbus A320 main gear

Hydraulic actuation systems have found popular use in aerospace due to their reliability and relative simplicity, and their widespread use has generated engineering experience and familiarity. They are also ideally suited to landing gear operation, as the hydraulic fluid provides constant lubrication and natural damping. There are also disadvantages: when used in aircraft they are heavy, require large volumes of space, operate noisily and require the correct disposal of hydraulic fluids in accordance with environmental legislation. There is currently a move within the aerospace industry towards replacing hydraulic drives with electrical counterparts as part of the 'more electric aircraft' concept [2]. The motivation for the use of Electro-Mechanical Actuators (EMAs) is the desire to reduce aircraft weight, arising from a combination of increasing fuel costs and environmental concerns. For example, environmental damage associated with air traffic has created the need to reduce aircraft fuel consumption and polluting emissions, and a key factor in achieving this is the reduction of weight. Landing gears contribute a significant amount of mass to the aircraft: on average 4% on civil aircraft and 3% on military aircraft [3]. The European drive for the replacement of hydraulic systems on landing gears stems in part from a large DTI-funded project known as ELGEAR [4], whose aim is to demonstrate the potential to reduce operating noise, increase operating efficiency, reduce installation volumes and, most significantly, reduce the landing gear mass by up to 12%. A further motivation for using electrically powered actuators is the real possibility that, with the move towards optimising engine efficiency, future aircraft engines will not produce hydraulic power. Innovative maintenance solutions, such as health monitoring systems, are required to support the introduction of the new electrical actuator technology and provide reliability assurances. Health monitoring systems will enable decisions to be taken regarding aircraft flight worthiness. Importantly for aircraft operators, they can provide uniquely optimised maintenance scheduling, allowing maintenance decisions to play a key part in driving forward operations strategies and providing a business-winning advantage. This paper provides the motivation and justification for the current research area of electromechanical actuator health monitoring for landing gears. This is done by reviewing the current maintenance practice for aircraft with reference to landing gears, and the path that future advanced maintenance solutions are likely to follow.
How health monitoring is likely to play a strategic operations role, offering benefits to operators, manufacturers and maintenance service providers, is also highlighted. The difficulties associated with integrating new health monitoring technology into an aircraft are reviewed, with perspectives given on the reasons for the current slow integration of health monitoring systems into aerospace.

2 ELECTROMECHANICAL RETRACTION ACTUATOR DESIGN
The design of the retraction actuator is based on a roller screw, which converts the rotary torque produced by the motor into linear motion. Roller screws consist of multiple threaded helical rollers assembled in a planetary arrangement around a screw, captured in place by a nut. Rotation of the nut with respect to the shaft produces axial translation of the nut; likewise, rotation of the screw can also translate the nut. A primary duplex motor connected to a gearbox linearly displaces the nut by rotating the screw, which moves a lever arm about a pivot to achieve retraction/extension of the landing gear. If the primary motor fails, an emergency drive ensures successful displacement of the actuator. Figure 2 provides a schematic of the actuator arrangement.
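The basic kinematics of the motor-gearbox-screw chain can be sketched as follows. This is a minimal illustration of how screw lead and gear ratio set the nut's linear speed; the numerical values are invented for illustration and are not Messier Dowty design data.

```python
def nut_linear_speed(lead_mm: float, motor_rpm: float, gear_ratio: float) -> float:
    """Linear speed of the roller-screw nut in mm/s.

    lead_mm: axial travel of the nut per screw revolution.
    gear_ratio: motor revolutions per screw revolution (reduction gearbox).
    """
    screw_rev_per_s = (motor_rpm / gear_ratio) / 60.0
    return lead_mm * screw_rev_per_s

# Illustrative case: a 10 mm lead screw driven at 3000 rpm through a 20:1
# gearbox turns at 2.5 rev/s, translating the nut at 25 mm/s.
speed = nut_linear_speed(lead_mm=10.0, motor_rpm=3000.0, gear_ratio=20.0)
```

The actual actuator sizing would of course be driven by the retraction load and stroke requirements rather than speed alone.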
(Figure 2 components: brake, emergency gearbox, primary gearbox, duplex motor, emergency motor, primary roller screw, emergency roller screw.)
Figure 2: The EMA retraction actuator

Hydraulic actuation has been used successfully in aerospace for many decades, proving reliable and robust and gaining the confidence of aircraft operators. Any replacement drive will therefore need to provide assurances that it is of equal robustness and reliability to the preceding system [5]. The replacement of a key aircraft system, such as the actuators, will inevitably require changes in the way maintenance actions are performed. As an example, it is easy to visually inspect a hydraulic actuator for faults such as fluid leaks or corrosion. An EMA, such as that shown schematically in Figure 2, is a complex mechanical system, and it is not so easy to inspect its individual subsystems and components. Most of the key components (e.g. gears, electrical wiring) are sealed within housing units, making access exceptionally difficult. Traditional visual inspections would therefore require a certain degree of landing gear dismantling. The move towards 'more electric aircraft' could therefore increase the time an aircraft spends in the maintenance hangar, increasing aircraft downtime costs. This helps justify the need for additional automated fault detection and diagnostic health monitoring incorporated into the design.
3 CURRENT MAINTENANCE PRACTICE
Maintenance programmes for aircraft, which include key systems such as the engines and landing gears, are made up of several activities based around preventive, corrective, on-condition and redesign maintenance. Preventive actions are taken at predetermined intervals based upon the number of operating hours or, often in the case of landing gears, the number of landings. This is supported by regularly scheduled inspections and tests, in which on-condition maintenance is performed based upon observations and test results. Each of these activities is in turn supported by corrective maintenance, conducted in response to discrepancies or failures within the aircraft during service. The final action type, redesign maintenance, takes the form of engineering modifications made in order to address arising safety or reliability issues that were unanticipated in the original design. It is essential that aircraft maintenance be performed at appropriate times and to the highest standard, to ensure system reliability and guarantee passenger safety. So that any potentially unsafe conditions can be identified and addressed, the country of aircraft registration and the civil aviation authority of the manufacturing country generate a set of mandatory guidelines known as airworthiness directives. These directives notify the aircraft operators that their aircraft may not conform to the appropriate standards and whether any actions (i.e. maintenance) must be taken. It is a legal requirement that operators follow the airworthiness directives, and country-specific authorities closely regulate them. Such authorities include the Federal Aviation Administration (USA), the Civil Aviation Safety Authority (Australia) and the Joint Aviation Authorities (Europe). Much of the major maintenance and repair work performed on aircraft is provided through service providers who carry out Maintenance, Repair and Overhaul (MRO) operations for the aircraft operators.
The landing gear is a critical assembly and a major key to maintaining the overall aircraft value. Operators cannot afford, nor are they willing, to risk compromising their landing gear MRO activities, and will look for the best combination of affordability, expertise, flexibility and the ability to offer customised solutions when choosing an MRO provider. An example of maintenance support for landing gears is as follows. In the event of a series of incidents such as 'hard landings' reported by the operators, major repair operations or complete gear overhauls will be conducted at an MRO provider's maintenance site. The operators themselves can carry out minor repairs and on-wing maintenance, also at predetermined intervals. Once landing gears have been received at the MRO maintenance facility, they are dismantled and the individual parts are put through a series of non-destructive tests. This testing identifies any developing failures, such as structural fatigue or internal corrosion, and its results determine whether the parts are repaired, replaced, scrapped or recycled [6]. There are a vast number of parts on a typical landing gear that need to be maintained and inspected. An example of key inspection areas, along with typical timescales, would be:

1. After 300 hours or after 1 year in service:
- Shock absorber nitrogen pressure check

2. After 600 hours:
- Landing gear hinge point visual inspections
- Leak inspection (oil, hydraulic fluid, etc.)
- Inspection of torque link play

3. After 7 years or 5,000 cycles:
- Landing gear overhaul

To understand maintenance costs it is necessary to look at the elements of maintenance in terms of time. Figure 3 gives a breakdown of the time elements covering maintenance actions. A breakdown such as this can show designers the areas in which they can influence related activity times. In corrective maintenance much of the time is spent on locating a defect, which often requires a sequence of disassembly and reassembly. Predicting fault location times is extremely difficult using traditional inspection techniques. The ability to automate this fault diagnosis, with advanced technologies and techniques, can help accurately predict the downtime required [7].
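The interval logic behind a schedule like the one above can be sketched as a simple threshold check. This is an illustrative toy only: real maintenance programmes track each item against its last completion date and the applicable airworthiness directives, not cumulative totals, and the function and item names here are invented.

```python
def due_inspections(flight_hours: float, years_in_service: float, cycles: int) -> list[str]:
    """Return the example landing-gear inspections that have come due,
    using the illustrative intervals listed above."""
    due = []
    if flight_hours >= 300 or years_in_service >= 1:
        due.append("shock absorber nitrogen pressure check")
    if flight_hours >= 600:
        due.append("600-hour inspections (hinge points, leaks, torque link play)")
    if years_in_service >= 7 or cycles >= 5000:
        due.append("landing gear overhaul")
    return due

# A gear at 650 flight hours and six months in service triggers the
# first two items but not the overhaul.
items = due_inspections(flight_hours=650, years_in_service=0.5, cycles=1200)
```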
(Figure 3 breaks total time into up time, comprising flying time and available-for-flying time, and down time, comprising flight preparation time (turnaround and pre-flight inspection) and maintenance time (preventive, modification and corrective). Corrective maintenance time is further divided into access, inspection, preparation, defect location and defect rectification times, the last covering rectification by adjustment, in-situ repair, remove-repair-and-refit, and remove-and-replace. The diagram also notes influencing factors: BITE effectiveness, fault diagnostic aids, equipment test/read-out capability, and technician skill, experience and training.)
Figure 3: Civil aircraft maintenance time relationships

4 CHANGING MAINTENANCE STRATEGIES

Currently the European market holds a 26% share of the worldwide MRO business, compared with 39% held by North America, and is expected to experience dramatic worldwide growth during the next 10 years [8]. There are, however, several hurdles that these MRO providers must overcome in order to maintain their leading global market shares [9], including:
- Growing competition from the Middle East
- Greater competition from Original Equipment Manufacturers (OEMs)
- Continuing pressure from airlines to reduce costs
These hurdles, coupled with increased demand for airline MRO, are forcing changes in the global aviation maintenance industry, including:
- MRO providers are expanding their geographical reach and capabilities in a bid to become regional and global full-service providers.
- Spending on MRO is expected to increase universally.
- Airlines are now seeking the next level of savings, which has raised the demand for more predictive maintenance strategies, with reliability and material solutions to complement outsourced maintenance and repair work.
- To drive further cost reductions, airlines are seeking to incorporate sophisticated maintenance management solutions into their aircraft, reducing investment in inventory and aiding improvements in airline operations and reliability.
Such factors have begun to dictate a change in maintenance strategy for operators and in the service solutions that MRO suppliers can provide. Changing economic climates have also led operators to begin seeking innovative technology solutions for maintenance management, which will help reduce the level of scheduled maintenance and hence optimise maintenance across aircraft fleets. In terms of landing gear, much of the current business offered to customers is contracted in the form of 'time and materials', which can be an expensive option for operators. The changing face of the aviation industry requires that maintenance management become increasingly tailored towards individual customers' needs, with cost-effective solutions being found that offer compromises between customer involvement and the level of commitment required from the providers. Figure 4 shows a matrix of different maintenance solutions and the level of commitment and partnership required from the operators and MRO providers.
(Figure 4 plots MRO support against aircraft operator involvement, each ranging from low to high: 'time and materials' requires a low level of commitment from both parties, preventive maintenance a medium level, and all-inclusive overhauls, through-life support, predictive maintenance and customised payment schemes a high level of mutual commitment.)
Figure 4: Maintenance support concepts
5 PREDICTIVE MAINTENANCE
In order to remain competitive and meet the demands and challenges facing operators and suppliers, new maintenance support concepts should offer several gains. For the operators these are reductions in unscheduled maintenance activity, lower total cost of ownership, reduced administrative burdens and overall optimisation of maintenance activities. This can be achieved by moving away from scheduled preventive maintenance actions and introducing new systems that can provide details of the in-service operation and condition of landing gear mechanisms, such as brakes, shock absorbers and actuators. Such systems, known as health monitoring systems [10], utilise a variety of data gained from on-board sensors in order to extract meaningful information. This information, when combined with expert knowledge such as component reliabilities, failure mechanisms and service/maintenance history, provides a quantification of system, subsystem and component health. Based upon this information, future corrective maintenance actions can be predicted, allowing the optimisation of aircraft maintenance. Incorporating health monitoring systems into aircraft landing gears in order to employ a predictive maintenance strategy [11] in place of preventive maintenance offers benefits to the operators, MRO providers and landing gear manufacturers, as described in Table 2.
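The "quantification of health" step described above can be illustrated with a toy fusion of normalised sensor features into a single health index. Every feature name, weight and value below is invented for illustration; the paper does not specify any particular health-index formulation.

```python
# Hypothetical expert-assigned weights for normalised degradation features
# of an EMA (all names and numbers are illustrative assumptions).
FEATURE_WEIGHTS = {
    "motor_current_rms": 0.40,  # electrical degradation indicator
    "vibration_level": 0.35,    # mechanical wear indicator
    "actuation_time": 0.25,     # end-to-end performance indicator
}

def health_index(features: dict[str, float]) -> float:
    """Return a health index in [0, 1], where 1 is fully healthy.

    Each feature is a normalised degradation score in [0, 1]:
    0 means nominal behaviour, 1 means severe deviation.
    """
    degradation = sum(FEATURE_WEIGHTS[name] * score
                      for name, score in features.items())
    return max(0.0, 1.0 - degradation)

# A reading with mild vibration growth and slightly raised motor current:
h = health_index({"motor_current_rms": 0.1,
                  "vibration_level": 0.3,
                  "actuation_time": 0.0})
```

In a real system the weights would come from reliability data and failure-mode analysis, and the index would feed a prognostic model rather than a single threshold.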
Table 2: Benefits of a predictive maintenance strategy

Operator: optimised maintenance scheduling; reductions in maintenance costs; reduced risk of in-service failures; increased aircraft availability.

MRO provider: optimisation of spare parts stockpiling; minimisation of scrap; elimination of bottlenecks in machine usage during MRO operations; reduction in turnaround times.

Landing gear manufacturer: information available from onboard health monitoring sensors can be used as a marketing tool; evaluation of the in-service performance of landing gear systems; extensive knowledge of in-service performance can be incorporated into redesigns; aids in increasing operator confidence in incorporating new replacement technologies.
However, it should be noted that innovative predictive maintenance solutions supported by health monitoring can only provide each of the key players with the necessary benefits if the described commitments are made. A smooth flow of information is required between the operators, maintenance providers and manufacturers. It is also questionable whether operators would really want to commit to a long-term innovative maintenance solution, given the added commitment required on their behalf. They may be hesitant to take up the offer of health monitoring systems if the manufacturers have not listened to the specific requirements for their aircraft, most notably component reliability and minimal effects on weight and complexity. The operators will also be wary of having to handle the vast quantities of extra data and information generated by the health monitoring systems. Support with this should therefore be offered within any innovative maintenance service, or systems that can provide automatic health-related decisions are essential if health monitoring is to be accepted. Operators must also be willing to make a long-term commitment as a support partner and to exchange failure data with the manufacturers in order to increase reliability in future designs.

6 CHALLENGES TO INTEGRATING HEALTH MONITORING
Health monitoring is a disruptive technology, in that large-scale integration will cause disruptive changes within well-defined and established working practices. But once established it can quickly go on to become a fully performance-competitive system. Health monitoring systems are aimed at improving the performance of the aircraft, which will be achieved along the lines of 'evolutionary' changes whilst demonstrating reliability, validated cost benefits and reduced operational risks. The integration of new technologies inevitably faces difficulties, and a number of challenges confront the community of engineers and technical specialists as they seek to utilise health monitoring for aerospace usage [6]. A non-exhaustive list of these difficulties includes:

1. The technology and frameworks are available but under-utilised.
2. Performance characteristics are usually untested, leading to a lack of confidence.
3. There is often a wealth of data available from the end users, but access to this data can be limited and much has yet to be converted into 'meaningful information'.
Health monitoring systems for aerospace applications differ from those for other applications, such as industrial machine monitoring or the monitoring of civil structures, in that there are often hardware restrictions based upon weight and complexity, as well as difficulties associated with certification. In many areas of aerospace health monitoring development, the state-of-the-art monitoring techniques being developed are also restricted by a variety of limitations that affect their use in a real operational situation. For example, many of the sensor-based methods under development for the monitoring of fuselage structures, based upon methods such as acoustics or vibration patterns, require vast sensor arrays. Much of the information gained requires high levels of signal processing, with very subjective results, and consequently such methods may not be applicable to an on-line, real-time aerospace monitoring system, even though the fundamentals of the techniques work well in other applications. This can lead to a situation where the state of the art has difficulty matching the necessary requirements for aerospace integration. This, the author believes, is the reason for the current slow integration of health monitoring on civil aircraft, despite the vast wealth of academic research detailing monitoring methods, industry drive and potential areas for application. Figure 5 illustrates this hypothesis; it shows how the current health monitoring state-of-the-art trend is progressing with respect to the capability requirements for health monitoring in aerospace. The hypothesis indicates that the current state of the art is advanced enough for most industrial uses, offering leaps in performance and capabilities, but is far below what is required for aerospace applications, and will require further innovations in, amongst other things, hardware minimisation, data reduction techniques and the use of fusion to merge multiple techniques, reducing individual limitations and maximising advantages.
(Figure 5 plots capability against time for four curves: the current HM state-of-the-art trend, the desired HM state-of-the-art trend for aerospace applications, the HM system requirements for aerospace applications, and the HM system requirements for an 'enabling' technology.)

Figure 5: Aerospace health monitoring requirements as compared to the state-of-the-art
7 CONCLUSION
With rising operating and maintenance costs, airline operators are being forced to seek new and innovative solutions for the maintenance of their aircraft fleets. Growing competition, and operators demanding reductions in the time aircraft spend in the maintenance hangar, have led aircraft MRO providers to look towards future maintenance management solutions increasingly tailored to their customers' needs. In addition, the incorporation of technologies 'unfamiliar to aerospace', such as electromechanical replacements for hydraulic actuation, requires additional health monitoring systems to ensure their reliability and robustness. The use of aerospace health monitoring must, however, be complemented by a change in, and modernisation of, future maintenance management. The value of incorporating a health monitoring system is most likely to arise from savings in maintenance costs through reductions in aircraft downtime. The use of health monitoring systems for future landing gear electrical retraction mechanisms, or other aircraft systems, will offer a strong competitive advantage in maintenance decision-making, which is crucial for both military and commercial aerospace users. This will help manufacturers retain customers and attract new business, and these aspects mean that monitoring solutions are now becoming a key part of formulating future maintenance strategies. The application of sensors to provide information on actuator health status, which can then be converted into decisions regarding maintenance, safety and flight worthiness for landing gears, is part of a long-term future maintenance strategy. Currently there is no monitoring solution in place on landing gears, but it is envisioned that, as part of this long-term strategy, further new technologies will be incorporated into the landing gear design, supported by health monitoring based maintenance solutions.
This paper has introduced the reasons for introducing electromechanical actuation technology into future aircraft landing gears, and how this, together with the changing requirements for aircraft maintenance, has led to the current research into an actuator health monitoring system, the aim of which is a fully validated diagnostics system.
8
REFERENCES
1 Gramopadhye, A.K. & Drury, C.G. (2000) Human Factors in Aviation Maintenance: How We Got to Where We Are. International Journal of Industrial Ergonomics, 26, 125-131.
2 Jones, R.I. (2002) The More Electric Aircraft - Assessing the Benefits. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 216(5), 259-269.
3 Greenbank, S.J. (1991) Landing Gear - The Aircraft Requirement. Proceedings of the Institution of Mechanical Engineers, 205.
4 DTI (2007) Report on Progress with the National Aerospace Technology Strategy.
5 Phillips, P., Diston, D., Payne, J., Pandya, S. & Starr, A. (2008) The Application of Condition Monitoring Methodologies for Certification of Reliability in Electric Landing Gear Actuators. In The 5th International Conference on Condition Monitoring and Machine Failure Technologies, Edinburgh, UK.
6 Patkai, B., Theodorou, L., McFarlane, D. & Schmidt, K. (2007) Requirements for RFID-based Sensor Integration in Landing Gear Monitoring - A Case Study. Auto-ID Lab, University of Cambridge.
7 Knotts, R.M. (1999) Civil Aircraft Maintenance and Support Fault Diagnosis from a Business Perspective. Journal of Quality in Maintenance Engineering, 5(4), 335-348.
8 Jenson, D. (2008) Europe's Challenges in a Dynamic MRO Market. April 2008 [cited 4 April 2009]; available from: http://www.aviationtoday.com/.
9 Fitzsimons, B. (2007) The Big Picture: Airline MRO in a Global Context. Airline Fleet & Network Management, 52, 46-54.
10 Kothamasu, R., Huang, S.H. & VerDuin, W.H. (2006) System Health Monitoring and Prognostics - A Review of Current Paradigms and Practices. International Journal of Advanced Manufacturing Technology, 1012-1024. Springer-Verlag.
11 Mobley, R.K. (2002) An Introduction to Predictive Maintenance. Elsevier Butterworth-Heinemann.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
RISK-BASED APPROACH FOR MANAGING ROAD SURFACE FRICTION OF ROAD ASSETS
Noppadol Piyatrapoomi (Ph.D.) a and Justin Weligamage (M.Eng.Sc, MBA) a
a Road Asset Management Branch, Queensland Department of Transport and Main Roads, Brisbane, Queensland, Australia.
In Australia, road crash trauma costs the nation approximately A$18 billion annually, whilst the United States estimates an economic impact of around US$230 billion on its network. Worldwide, the economic cost of road crashes is estimated at around US$518 billion each year. It is therefore in both the sociological and economic interests of society to reduce, as far as possible, the level and severity of crashes. Many factors contribute to road crashes on a road network. The complex manner in which human behaviour, environmental factors and vehicle failure can interact in critical driving situations makes the task of identifying and managing road crash risk within a road network quite difficult. While road authorities have limited control over external factors such as driver behaviour and vehicle-related issues, some environmental factors, such as road surface friction (the skid resistance of the road surface), can be managed. A risk-based method for managing road surface friction (i.e. skid resistance) is presented in this paper. The method incorporates 'risk' into the analysis of skid resistance and crash rate by linking the statistical properties of skid resistance to the risk of a crash. By examining the variation of skid resistance values throughout the network, the proposed methodology can establish an optimal 'investigatory level' for a given level of crash risk, along with a set of statistical 'tolerance' bounds within which measured skid resistance values can be expected to fall. The investigatory level is a threshold for triggering a detailed investigation of a road site to identify whether a remedial treatment should be made. A road category of normal demand, spray sealed surface, speed zone greater than 80 km/h and annual average daily traffic less than 5,000 vehicles is used to demonstrate the application of the method.
Key Words: skid resistance, wet crashes, risk-based approach, investigatory level, road surface
1
INTRODUCTION
The skid resistance of a road surface is a condition parameter which quantifies the road's contribution to friction between the surface and a vehicle tyre. Technically speaking, it is the retarding force generated by the interaction between the road surface and the tyre under a locked (non-rotating) wheel. A wheel reaches such a state when the frictional demand exceeds the available friction force at the tyre-road interface. Skid resistance is therefore an important factor during events in which these phenomena are likely to occur, such as the high-demand activities of accelerating, decelerating and cornering. While it is generally accepted that adequate skid resistance levels are maintained in dry conditions, skid resistance decreases substantially in wet driving conditions. US studies have noted that around 20% of road crashes occur in wet driving conditions, a number which is increasing [1]. It is therefore the skid resistance level in wet conditions that is of interest when looking at crash occurrence. A common method applied in practice for managing skid resistance is to set what are known as investigatory levels for the various road categories. Investigatory levels are an intermediate form of surface friction management, in that they do not automatically signal that maintenance work is required. If, through normal roadway friction testing, a particular road site is measured to have a skid resistance value below the relevant investigatory level, a more thorough site investigation and testing are performed to determine whether additional remedial action is needed. The use of an investigatory level allows the roadway to be assessed taking all factors, including its skid resistance, into account. This provides an extra layer of control in the management process by ensuring that only those sites most in need of maintenance are targeted first, optimising both the materials and the budget available to the road authority.
The selection of suitable investigatory levels has been the focus of much research, and is also the subject of this project.
Most investigatory levels adopted by road authorities in Australia were based on UK studies, adjusted to suit Australian conditions with some modifications [2, 3]. However, the risks of road crashes associated with these adopted investigatory levels are unknown. Many research studies have attempted to assess the risk of road crashes in relation to skid resistance using regression or correlation analysis of historical data; however, they reported no conclusive results [1, 4, 5, 6, 7]. A joint research project between the Queensland Department of Transport and Main Roads (QDTMR), the Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM) and the Queensland University of Technology was established to explore other methods of assessing the risk of road crashes in relation to skid resistance, and potentially of establishing investigatory levels explicitly associated with crash risk. The final output of this project is the result of applying the methodology to actual roadway system data. Several informed recommendations have been developed which aim to improve the methodology for managing surface friction based on the risk associated with wet road crashes. The measured skid resistance value of a section of road does not stay constant throughout the life of the road surface. Skid resistance is affected by a variety of factors, with the eventual result usually being a decrease in the available skid resistance of the road over time. External factors such as the speed, volume and type of traffic using a particular section of road, the local climatic conditions, and the various types of product used in the road all contribute to the period of time for which the skid resistance remains suitable for the safe manoeuvring of vehicles.
Selecting and maintaining the skid resistance of the road surface at such levels is an important issue for road authorities, both as a quality issue in the daily driving requirements of the public, and for safety maximisation during high demand incidents such as road crashes. The hypothesis of this research is that the investigatory level for each road category must be linked to an associated risk of road crash for that category, thereby incorporating risk into the management process. Given the wide range of conditions of a roadway, even for sites within the same category, it is more appropriate to derive a range of values within which a measured value may lie. This idea, effectively giving a certain 'tolerance' to the investigatory level, also incorporates one of the fundamental properties of skid resistance across a site, category or indeed the entire network: the variability of the measured skid resistance value. Furthermore, the linkage between skid resistance and crash rate can only ever be described in a probabilistic sense, and as such, defining a range in which the skid resistance can fall is more appropriate for effective decision making.
2
SITE CATEGORIES OR DEMAND CATEGORIES FOR SKID RESISTANCE
The primary aim in managing skid resistance for a road network is to provide sufficient skid resistance for vehicles to manoeuvre safely in any roadway condition. It would seem desirable simply to maintain all parts of a road network to a high level, thus ensuring adequate skid resistance; however, achieving a high skid resistance level for all sections of a road network would in most cases be overly expensive. The purpose of the management process is therefore to equalise crash risk across the road network, rather than simply to provide a high skid resistance level for the whole network. Higher skid resistance is provided at those road sites which require increased levels of friction, such as corners, intersections or roundabouts. The term "site or demand categories" was established to categorise road sites that place different demands on skid resistance for safe manoeuvring. Three levels of demand category have been adopted in Queensland, namely normal, intermediate and high [2], as shown in Table 1, which gives the typical demand categories adopted by the Queensland Department of Transport and Main Roads. Different skid resistance investigatory levels are given for the three demand categories; the investigatory levels in Table 1 are international friction indices (IFI). The table separates the road network across the 'demand' categories as well as across various speed ranges. As can be seen, manoeuvres in high demand areas would be expected to require more friction support than manoeuvres in normal demand areas. As mentioned, the investigatory levels are set as an intermediate form of roadway management, in that they do not automatically signal that maintenance work is required.
If, through normal roadway testing, a particular road site is measured to have a skid resistance value below the relevant investigatory level, a more thorough site investigation and testing are performed to determine whether additional remedial action is needed.
Table 1: Current Queensland Department of Transport and Main Roads investigatory levels for skid resistance (F60, by speed zone)

High demand (curves with radius < 100 m; roundabouts; traffic-light-controlled intersections; pedestrian/school crossings; railway level crossings): 0.3 at 40-50 km/h; 0.35 at 60-80 km/h; N/A at 100-110 km/h.
Intermediate demand (roundabout approaches; curves with radius < 250 m; gradients > 5% and > 50 m long; freeway and highway on/off ramps; intersections; curves with advisory speed > 15 km/h below speed limit): 0.25 at 40-50 km/h; 0.3 at 60-80 km/h; 0.35 at 100-110 km/h.
Normal demand (manoeuvre-free areas): 0.2 at 40-50 km/h; 0.25 at 60-80 km/h; 0.3 at 100-110 km/h.

3
METHODOLOGY
This section presents a proposed method, based on the application of probability theory, for assessing the risk of road crashes associated with skid resistance and for establishing investigatory levels. Further information on the methodology is given in references 6, 7 and 8. The proposed method allows the risk of road crashes to be explicitly incorporated into decision-making when establishing skid resistance investigatory levels. The steps in the analysis are:

1. Categorise the road network. Given an initially large road network to study, it is desirable to separate the network into a large number of smaller roadway sections. This is done not only to make the data more manageable to analyse, but also to allow the roadway to be split up according to a particular set of characteristics. A variety of environmental and structural properties can affect the skid resistance of a roadway, and to test whether these have any relationship with skid resistance and crash rates, the road sections must be grouped according to these characteristics. This process is also important when recommendations must be made for the various 'demand' categories.

2. Obtain historical road data under the particular categorisation. Once the particular category and road surface condition variables are selected, all available data from the road network must be collected. An important point of this analysis is that it is based on historical data, so the information provided by the analysis improves as the amount of data increases. The future intention is for the analysis to be incorporated into the information management system of the QDTMR.

3. Divide the road network sample into small sections suitable for the analysis. Once the data has been separated and collected according to the category of interest, the relevant road sections are divided into small segments of equal length. This allows the variability of skid resistance to be examined over a variety of distances. For example, there may be a particular category for which the variation over small segments is of interest, whereas for another category larger segments may be sufficient. Once each segment is defined, a cumulative probability distribution (Fx) of skid resistance can be formed for each segment. Figure 1 shows the cumulative probability distributions of skid resistance for all segments of a category.
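As a concrete sketch of the segmentation and per-segment distributions described above, the following illustrative Python groups F60 readings into equal-length segments and forms an empirical cumulative distribution for each. The 3 km segment length matches the demonstration later in the paper; the input format (chainages and readings as parallel lists) is an assumption for illustration only.

```python
SEGMENT_LENGTH_KM = 3.0  # equal-length segments, as in the demonstration category

def segment_readings(chainages_km, f60_values):
    """Group skid resistance readings (F60) into equal-length segments."""
    segments = {}
    for x, f60 in zip(chainages_km, f60_values):
        seg_id = int(x // SEGMENT_LENGTH_KM)
        segments.setdefault(seg_id, []).append(f60)
    return segments

def empirical_cdf(values):
    """Empirical cumulative probability distribution (Fx) for one segment:
    returns sorted skid resistance values and their cumulative probabilities."""
    xs = sorted(values)
    n = len(xs)
    fx = [(i + 1) / n for i in range(n)]
    return xs, fx

# Hypothetical readings along a 6 km length of road
segments = segment_readings([0.1, 1.5, 3.2, 4.0], [0.40, 0.50, 0.30, 0.35])
xs, fx = empirical_cdf(segments[0])  # Fx curve for the first 3 km segment
```

Plotting one (xs, fx) pair per segment reproduces the family of curves shown in Figure 1.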
A cumulative probability distribution of recorded skid resistance for a road segment gives the probability that the skid resistance occurring in that segment is at or below a given value.
A comparison of the recorded skid resistance cumulative probability distributions gives information relating to how the different segments compare in terms of variation in skid resistance. Apart from their use in this analysis, these plots can also provide other information that may be of interest to road engineers, e.g. a visual indication of particular road sections that may have a faulty road surface.

[Figure: cumulative probability (Fx) plotted against recorded skid resistance (F60)]
Figure 1: The cumulative probability distributions of skid resistance for all road segments of a certain category

4. Count the number of road crashes and identify the segments where crashes occur.

5. Map the related crash data to these distributions. After all cumulative probability distributions are created for the road segments within the category, those sections on which crashes occurred are identified.

6. Divide and categorise crash rates across all distributions and road sections. This step is fundamental to the methodology in that it involves the selection of suitable envelope distributions of skid resistance (the terms 'envelope' and 'investigatory curve' are used interchangeably when referring to the derived distribution). These distributions split the crash sample in such a way that a certain percentage of crashes occur on segments of road whose skid resistance curves fall below the envelope. Alternatively, the split can be measured in 'risk' related terms. Figure 2 indicates that 15% of crashes occur on road surfaces that have a cumulative probability distribution of skid resistance greater than the boundary F(x2); 70% of crashes occur within the boundaries of the two cumulative probability distributions F(x1) and F(x2); and 85% of crashes occur on road surfaces having a cumulative probability distribution of skid resistance below the boundary F(x2).
[Figure: cumulative probability (Fx) against recorded skid resistance (F60), with boundary curves F(x1) and F(x2); 70% of crashes occur between the two boundary distributions]
Figure 2: Envelope distribution functions are fitted to the data points, such that a certain percentage of crash related sections are found below the particular curve
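One hedged way to realise the envelope-selection step programmatically is a pointwise quantile across the crash-segment CDFs. This is only an approximation to the envelope fitting described in the paper (which may be empirical or parametric), and the function names are illustrative:

```python
def cdf_at(xs, fx, x):
    """Evaluate an empirical CDF (sorted values xs, probabilities fx) at x."""
    p = 0.0
    for xi, fi in zip(xs, fx):
        if xi <= x:
            p = fi
        else:
            break
    return p

def envelope_curve(crash_cdfs, grid, q=0.85):
    """Pointwise q-quantile of cumulative probability across crash-segment
    CDFs: at each grid value, roughly a fraction q of the curves lie at or
    below the returned envelope (cf. the boundary F(x2) in Figure 2)."""
    curve = []
    for x in grid:
        ps = sorted(cdf_at(xs, fx, x) for xs, fx in crash_cdfs)
        k = min(len(ps) - 1, int(q * len(ps)))
        curve.append(ps[k])
    return curve
```

Evaluating `envelope_curve` on a grid of F60 values gives a boundary curve comparable to F(x2), below which approximately 85% of crash-segment distributions fall.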
7. Establish the distributional characteristics for the appropriate crash rates selected as per the management policy. The method by which the distributional properties of the envelope distribution are obtained depends on the way in which the curve is produced. The simplest method is to fit the curves by visual inspection (either empirically or parametrically, e.g. with a normal distribution) or to use a probability-based goodness-of-fit test. This is the method suggested by Piyatrapoomi (2008) as a preliminary means of obtaining curves that provide useful information to decision makers [6].

8. Assess and establish investigatory levels for each variable. Given a particular envelope distribution, the investigatory curve, along with a range or interval, can then be selected. The mean value of the distribution would normally be stated as the base investigatory level, with the interval set at a certain number of standard deviations either side of this mean value.

9. Repeat the analysis for the remaining road condition variables.

10. Develop a management framework that incorporates the relationships established in Step 8. Once a suitable distribution has been selected to guide investigatory decisions, it can be applied by management and practitioners in the maintenance regime of the road network. Once management has selected a suitable 'risk' value which meets departmental policy, the associated investigatory distribution can be evaluated and applied when examining road sections in the network.
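Step 8 can be sketched as follows, assuming a fitted distribution summarised by its mean and standard deviation, so that the base investigatory level is the mean with a tolerance band of k standard deviations. The value of k is a policy choice, not a figure from the paper:

```python
import statistics

def investigatory_interval(envelope_values, k=1.0):
    """Base investigatory level (mean) plus a +/- k standard deviation
    tolerance interval for a fitted envelope distribution (step 8)."""
    mu = statistics.fmean(envelope_values)
    sd = statistics.stdev(envelope_values)
    return mu, (mu - k * sd, mu + k * sd)

# Hypothetical envelope skid resistance values
level, (low, high) = investigatory_interval([0.3, 0.4, 0.5])
```

Measured values inside the (low, high) band would then be treated as consistent with the investigatory level, rather than being compared against a single threshold.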
4
ANALYSIS AND RESULTS
The goals of the research project were based around two specific research problems. The first was to examine the relationships that exist between crash rate, or risk of crash, and skid resistance. The analysis of these relationships was then expected to feed into a wider research problem: producing a methodology which allows management to determine appropriate skid resistance investigatory levels that take the inherent risk of crashes into consideration. The analysis of the skid resistance-crash relationship was also expected to contribute to decisions relating to the current demand category split for investigatory levels, a separation which until now had been based primarily on studies from other countries. The analysis process began with the selection of appropriate analysis categories. Several different categorisations were initially used, and these changed over time as various road condition variables were tested. The two main variables used throughout the analysis were seal type and speed zone. The methodology developed for this project involves the extraction, manipulation and analysis of very large data sets. A calculation tool was developed specifically for the project to allow efficient and timely extraction and analysis. The software has a dual purpose: firstly, it allows the relationships between skid resistance and crash rate to be examined via the method outlined above; secondly, it provides a beta software model on which future implementations may be based. Figure 3 shows the calculation tool used for the analysis.
Figure 3: Analysis tool
For demonstration purposes, a category of normal demand, spray seal surface, speed zone greater than 80 km/h, and annual average daily traffic less than 5,000 vehicles was presented. The skid data and crash data were recorded in 2004; the total road length in this category was approximately 2,667 kilometres, and the number of wet crashes was 27. The total road length was divided into equal segments of 3 kilometres. Figure 4 shows the result of the analysis. Each cumulative probability distribution shown in the figure represents the variability of skid resistance within a 3 kilometre segment; the figure shows only the distributions for the road segments where wet crashes occurred. The figure also shows the boundaries that divide road crashes into different crash risks or crash rates, expressed as the number of crashes per 10 million vehicle kilometres travelled, or as a percentage of crashes. The essence of the method is to select a boundary curve, which becomes the investigatory level or 'investigatory curve', for an acceptable crash risk. For example, decision-makers may accept a risk of 11 wet crashes per 10 million vehicle kilometres travelled, i.e. in this case the 85 per cent boundary. However, this final step involves many more inputs than those produced by the analysis presented here. Before deciding on the investigatory curves, management must combine these results with other economic and logistical information. For example, while the results may suggest a certain high investigatory curve value for a large proportion of the network, it may simply not be economically feasible to maintain the entire section to such a high degree. Alternatively, it may not be logistically possible to obtain the required volume of aggregate needed to achieve such a skid resistance value.
In these situations, management must combine all the information at hand to produce an optimal solution to the maintenance process.
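The crash rate unit used in the analysis, crashes per 10 million vehicle kilometres travelled (VKT), can be computed as in the following sketch. All numerical inputs here are hypothetical, not values from the paper:

```python
def crash_rate_per_10m_vkt(num_crashes, aadt, length_km, days=365):
    """Crashes per 10 million vehicle-kilometres travelled, where
    VKT = annual average daily traffic x road length x number of days."""
    vkt = aadt * length_km * days
    return num_crashes / (vkt / 1e7)

# Hypothetical: 2 wet crashes on a 3 km segment carrying 4,000 vehicles/day
rate = crash_rate_per_10m_vkt(num_crashes=2, aadt=4000, length_km=3.0)
```

Rates computed this way for each boundary curve let decision-makers express the choice of investigatory curve directly in risk terms.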
Figure 4: Investigatory distributions and associated crash risks for the SNa80b5k category with wet crashes only
5
PROPOSED METHOD OF MANAGING SKID RESISTANCE
In the proposed method, the management of skid resistance on the road network is based on monitoring skid resistance and comparing the measurements with an established investigatory curve, as shown in Figure 5, rather than comparing them with a single investigatory level value as given in Table 1. Figure 6 shows an example comparison between an investigatory curve and a cumulative probability distribution of measured skid resistance for a road section. Even though some measured skid resistance values are less than the investigatory curve, a site investigation may not be triggered, since the percentage of measured values falling below the curve is low. In this method, a site investigation is not triggered by every skid resistance value that falls below the investigatory level; in practice, it may not be feasible to investigate every location where this occurs. Figure 7 shows a comparison between the investigatory curve and a cumulative probability distribution of measured skid resistance for a road section that may require site investigation: the percentage of measured skid resistance falling below the curve is considered significant, and the section also exhibits very low skid resistance values. The probability-based method allows asset managers to identify the severity of road sections more clearly, and to prioritise site investigations better, than the current practice of comparing measured skid resistance values with a single investigatory level. However, as mentioned, economic implications, societal expectations, government policies on the tolerable crash rate, and other logistical information such as the availability of local material must, combined with the results of this analysis, form the input for establishing appropriate investigatory curves. Validation of the decision-making must be carried out with crash data collected after the proposed methodology has been implemented. Through this validation process the investigatory curves can be refined and the skid resistance management process improved. The calculation tool developed in this project will facilitate further improvement and development of the recommended skid resistance management process.
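A possible decision rule matching this description is to trigger a site investigation only when the share of measured readings at which the measured CDF rises above the investigatory curve is significant. This is an illustrative sketch: the 10% threshold and the representation of the investigatory curve as a callable are assumptions, not figures from the paper.

```python
def fraction_below_curve(measured, inv_curve):
    """Share of measured readings at which the empirical CDF of measured
    skid resistance lies above the investigatory curve, i.e. more
    probability mass at low skid resistance than the curve allows.
    inv_curve maps a skid resistance value to a cumulative probability."""
    xs = sorted(measured)
    n = len(xs)
    return sum(1 for i, x in enumerate(xs) if (i + 1) / n > inv_curve(x)) / n

def needs_investigation(measured, inv_curve, threshold=0.10):
    """Trigger a detailed site investigation only for a significant exceedance."""
    return fraction_below_curve(measured, inv_curve) > threshold
```

A road section like that in Figure 6 would fall below the threshold and not be investigated, while one like Figure 7, with a large exceedance at low skid resistance values, would trigger an investigation.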
[Figure: investigatory curve, plotted as cumulative probability (Fx) against skid resistance]
Figure 5: Example of an investigatory curve
[Figure: investigatory curve and measured skid resistance distribution, with a shaded percentage area where the measured skid resistance falls below the investigatory curve]
Figure 6: Example of a comparison between an investigatory curve and a cumulative probability distribution of skid resistance (that does not trigger site investigation)
[Figure: investigatory curve and measured skid resistance distribution, with a large shaded percentage area where the measured skid resistance falls below the investigatory curve]
Figure 7: Example of a comparison between an investigatory curve and a cumulative probability distribution of skid resistance (that triggers site investigation)
6
CONCLUSIONS
This paper outlined the need to manage the skid resistance of a road network for road safety, and proposed a methodology for establishing investigatory levels, which are used as an intermediate form of skid resistance management. In managing skid resistance, if a particular road site is measured to have a skid resistance value below the relevant investigatory level, a more thorough site investigation and testing are performed to determine whether additional remedial action is needed. The paper presented a step-by-step, probability-based methodology for assessing crash risk against skid resistance and establishing investigatory levels. The investigatory level suggested in this paper takes the form of a probability curve rather than a single value, and the paper showed how such an investigatory curve would be used for managing skid resistance on the road network. A road category of normal demand, spray sealed surface, speed zone greater than 80 km/h and annual average daily traffic less than 5,000 vehicles was used to demonstrate the application of the method.
7
REFERENCES
1 Kuttesh, J.S. (2004) Quantifying the Relationship between Skid Resistance and Wet Weather Accidents for Virginia Data. Master's Thesis, Virginia Polytechnic Institute and State University, Virginia, USA.
2 Weligamage, J. (2006) Skid Resistance Management Plan. Road Asset Management Branch, Queensland Department of Main Roads, Queensland, Australia.
3 Austroads (2005) Guidelines for the Management of Road Surface Skid Resistance. AP-G83/05, Austroads, Sydney, Australia.
4 Seiler-Scherer, L. (2004) Is the Correlation Between Pavement Skid Resistance and Accident Frequency Significant? Swiss Transport Research Conference (STRC), Switzerland.
5 Viner, H.E., Sinhal, R. & Parry, A.R. (2005) Linking Road Traffic Accidents with Skid Resistance - Recent UK Developments. Proceedings of the International Conference on Surface Friction, Christchurch, New Zealand.
6 Piyatrapoomi, N., Weligamage, J., Bunker, J. & Kumar, A. (2008) Identifying Relationships between Skid Resistance, Road Characteristics and Crashes Using a Probability-based Risk Approach. International Conference on Managing Road & Runway Surfaces to Improve Safety, 11-14 May 2008, Cheltenham, England.
7 Piyatrapoomi, N., Weligamage, J. & Kumar, A. (2008) Probability-based Method for Analysing the Relationship between Skid Resistance and Road Crashes. 10th International Conference on Application of Advanced Technologies in Transportation, 27-31 May 2008, Athens, Greece.
8 Piyatrapoomi, N., Weligamage, J. & Bunker, J. (2007) Establishing a Risk-based Approach for Managing Road Skid Resistance. Australasian Road Safety Research, Policing & Education Conference 2007 'The Way Ahead', 17-19 October 2007, Crown Promenade, Melbourne, Australia.
Acknowledgments The authors wish to acknowledge the Queensland Department of Transport and Main Roads and the Australian Cooperative Research Centre (CRC) for Integrated Engineering Asset Management for their financial support. The authors also wish to thank staff at the Asset Management Branch of the Department of Main Roads, Queensland, Australia for providing technical data and support. The views expressed in this paper are those of the authors and do not represent the views of the organisations.
BUILDING AN ONTOLOGY AND PROCESS ARCHITECTURE FOR ENGINEERING ASSET MANAGEMENT
Vladimir Frolov a, David Mengel b, Wasana Bandara c, Yong Sun a, Lin Ma a
a Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM), Brisbane, Australia
b QR Network, QR Limited, Brisbane Queensland 4000, Australia
c Business Process Management Cluster, Faculty of Information Technology, Queensland University of Technology (QUT), Brisbane Queensland 4000, Australia
Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since then accepted the notion that a much larger array of processes govern the life and use of an asset. With this, asset management's new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement existing outputs of the mentioned organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A byproduct of ontology development is the realisation of a process architecture, of which there is also no evidence in published literature. To develop the ontology and subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, definition and classification of terms and visualisation through an appropriate tool (in this case, the Protégé application was used). The result of this research is the first attempt at developing an asset management ontology and process architecture. Key Words: asset management, ontology development, process architecture, text mining, classification.
1
INTRODUCTION
The proper management of physical assets remains the single largest business improvement opportunity in the 21st century [1]. Organisations from all around the world now collectively spend trillions of dollars in managing their respective portfolios of assets. Historically, asset management (AM) focused primarily on the reliability and maintainability of assets; organisations have since then accepted the notion that a much larger array of processes govern the life and use of an asset, leading to a significant increase in the amount of asset management literature being published (particularly since 2000) [2-5]. This can be attributed to the modern context of asset management - one that encompasses elements of: strategy; economic accountability; risk management; safety and compliance; environment and human resource management; and stakeholder and service level requirements [6-8]. These elements have previously existed as disparate departments (or silos) within an organisation and in many cases continue to do so; asset management’s new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets – the foundation for the overall success of an organisation [9]. Although most relevant articles acknowledge that asset management requires a multi-disciplinary approach, their content continues to mostly focus on individual elements of asset management, thus essentially missing the objective of what an ideal asset management definition strives for. A growing number of organisations, however, have understood the definition and are now developing asset management bodies of knowledge and asset management frameworks, i.e., high-level conceptual building blocks of asset management that bring together several disciplines into one overall process [7, 10]. Examples of such organisations include CIEAM [11], IAM [8, 12], AM Council [13] and IPWEA [14]. 
Such organisations are driving the development of new and extended asset management knowledge, incorporating the idea that asset management must be considered a multi-disciplinary domain, i.e., one that governs and streamlines many different areas of an organisation whilst giving managerial personnel the necessary know-how to successfully implement and sustain asset management initiatives. This research seeks to complement the existing outputs of the mentioned organisations through the development of a fundamental, conceptual asset management ontology. Ontologies are content theories about the sorts of objects, properties of
objects, and relations between objects that are possible in a specified domain of knowledge and provide potential terms for describing our knowledge about the domain of interest [15]. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain [16]. As more and more information is published in the asset management domain, the importance of knowledge-based systems and consistent representation and vocabulary of such information is increased [17], thus supporting the argument for the building of an asset management ontology. To the best of the authors’ knowledge, a holistic asset management ontology, i.e., one that encapsulates the ideal definition of asset management, has not yet been published. By developing an asset management ontology, one can also realise the basic structure of an asset management process architecture. The architecture of the processes of an organisation is defined as the type of processes it contains and supports, as well as the relationships among them [18]. It has already been stated in literature that it is desirable to decompose asset management into a set of processes [19-27]. An asset management process is a set of linked activities and the sequence of these activities that are necessary for collectively realising asset management goals, normally within the context of an organisational structure and resource constraints [28]. No consistent asset management process architecture has yet been published. To develop the ontology and subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, classification of terms and visualisation through an appropriate tool (in this case, the Protégé application was selected). The result of this research is the first attempt at developing a fundamental, conceptual asset management ontology and process architecture. 
The developed ontology can be used to: share and annotate asset management information; identify gaps in current asset management thinking; visualise the holistic nature of asset management; classify asset management knowledge; and develop a relational asset management knowledge-based system. This paper is structured as follows: background information on various topics is presented in Section 2; the methodology followed is presented in Section 3; the asset management ontology and process architecture is detailed in Section 4; analysis of results is shown in Section 5; and conclusions and directions for future research are given in Section 6.
2
BACKGROUND
Several research topics form the context of this research. This section presents a brief introduction to each topic.
2.1 The Evolution and Importance of Asset Management
Engineering asset management is the process of organising, planning and controlling the acquisition, use, care, refurbishment, and/or disposal of physical assets in order to optimise their service delivery potential and to minimise related risks and costs over their entire life. This is achieved through the use of intangible assets such as knowledge-based decision-making applications and business processes [29, 30]. Previously, asset management was often a practice not dissimilar to pure reliability and maintenance, following the simplistic doctrine of cost saving. Now, however, many organisations have shifted their view of asset management. The result is a new appreciation of the processes governing an asset, especially the integration of lifecycle costing into asset decisions. An asset typically progresses through four main life stages: create, establish, exploit and divest [2]. These four stages can be thought of as the value chain of an asset, and all must be optimised to deliver a better return on asset investment. Thus, engineering asset management is more than just a maintenance approach, as it should influence all aspects of an asset’s life [31]. It encompasses a broader range of activities extending beyond reliability and maintenance [32]. The prevalent view today is that properly executed engineering asset management can bring great value to a business [7]. It has been stated that asset management is ultimately accountable to the triple bottom line of a business [2], namely economic, environmental and social. It is also an increasingly important governance issue, as the scope of engineering assets expands. Asset management continues to develop into an integrated discipline for managing a portfolio of assets within an organisation. Much, however, still needs to be achieved before it can become a standard business process [33, 34]. Godau et al. 
[35] sum it up well by saying that asset management needs to deal with a range of complexities born out of the increasing technological, economic, environmental, political, market and human resources challenges facing this generation and future generations. A holistic approach must be undertaken in which all roles involved with the management of assets come together in a practical framework and organisational structure to achieve the desired results and performance. Strategic thinking about the future is critical to ensure that future generations receive adequate levels of service across all industries, disciplines and applications [36].
2.2 Text Mining as a Form of Information Analysis
Text is the predominant medium for information exchange among experts and is also the most natural form of storing information [37-39]. The knowledge stored in text-based media is now recognized as a driver of productivity and economic growth in organisations [40]. With this, text mining is at the forefront of research in knowledge analysis and discovery.
Text mining is a multidisciplinary field that encompasses a variety of research areas: text analysis; information extraction and retrieval; clustering and classification; categorization; visualization; question-answering (QA); database technology; machine learning; and data mining [39, 40]. In almost all cases, text mining initiatives rely on a computer due to the massive text processing required [37, 41]. However, it is difficult for a computer to find the meaning of texts because they often have several possible meanings [42, 43]. Other ambiguities that occur when analysing text are: lexical ambiguities (words belonging to more than one class, e.g. verb and noun); syntactic ambiguities (parsing of sentences); and semantic ambiguities (meaning of a sentence). Humans can generally resolve these ambiguities using contextual or general knowledge about the subject matter, as well as a thorough understanding of the English language. Much research and many methodologies have been developed to increase the efficiency and correctness of text mining applications.
2.3 Using Ontologies to Organise Knowledge
In philosophy, ontology is the study of the kinds of things that exist in the world, including their relationships with other things and their properties [15, 44]. An ontology defines a common vocabulary for researchers and practitioners who need to share information in a domain in a consistent and agreed manner [16]. Although more prominent in Artificial Intelligence (AI) and Information Systems (IS) applications, many disciplines now develop standardized ontologies, e.g. the SNOMED ontology in the field of medicine [45]. As more and more information is published on a particular domain, the need for ontological analysis as a way to structure such knowledge becomes increasingly important. One of the more commonly referenced definitions of an ontology is that of Gruber [46], which states that ontologies are explicit formal specifications of the terms in a domain and the relations among them. 
Noy and McGuinness expand on this by referring to an ontology as a formal explicit description of concepts in a domain of discourse (classes), properties of each concept describing various features and attributes of the concept (slots), and restrictions on slots (facets) [16]. Ontologies are used in many applications mentioned in the literature, for example:
Sharing and annotation of information [15-17, 47, 48]
Reuse of domain knowledge [16, 17, 47]
Facilitating communication [48, 49]
Natural language understanding and knowledge-based systems design [15, 17]
Business process re-engineering [49]
Artificial Intelligence (AI) and Information Systems (IS) [15, 47, 49]
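As a minimal illustration of the class/slot/facet model described above, the following sketch shows how a slot attached to a superclass is inherited by its subclasses. The class and slot names follow the paper's AM RESOURCE ENTITY example, but the code itself is an illustrative assumption, not the Protégé data model:

```python
# Minimal frame-based ontology sketch: classes, slots (properties) and
# facets (here, a simple type restriction on slot values).
# Illustrative only; not the actual Protege implementation.

class Slot:
    def __init__(self, name, allowed_type=str):
        self.name = name
        self.allowed_type = allowed_type  # facet: restricts slot values

class OntologyClass:
    def __init__(self, name, superclass=None):
        self.name = name
        self.superclass = superclass
        self.slots = {}

    def add_slot(self, slot):
        self.slots[slot.name] = slot

    def all_slots(self):
        # Slots are inherited down the class hierarchy.
        inherited = self.superclass.all_slots() if self.superclass else {}
        return {**inherited, **self.slots}

resource = OntologyClass("AM RESOURCE ENTITY")
resource.add_slot(Slot("performance target"))
asset = OntologyClass("Asset", superclass=resource)

print("performance target" in asset.all_slots())  # prints True: the subclass inherits the slot
```

The facet here is reduced to a type restriction for brevity; richer facets (cardinality, value ranges) follow the same pattern.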
Despite their applications, ontology development is still a challenging task [50], and it suffers from two main limitations: use of ontology and construction difference. Use of ontology refers to the notion that an ontology is unlikely to cover all potential uses [15]. Construction difference refers to the notion that building an ontology is more akin to an art than a science, and that there is no single correct methodology for building an ontology [16, 17, 48, 51]. A variety of methodologies do, however, exist in the literature, such as TOVE, Ontolingua and IDEF5 [17, 49, 51].
2.4 Process Architecture
The architecture of the processes of an enterprise is defined as the type of processes it contains and supports, as well as the relationships among them [18]. It can be defined for the whole of an enterprise or for some portion thereof and is generally presented as a high-level diagram [52]. Several whole-of-enterprise process architectures currently exist (e.g. the APQC Process Classification Framework [53] and the Zachman Framework [54]). However, they do not cover the scope of asset management at a sufficient level. A process architecture is a schematic that shows the ways in which the business processes of an enterprise are grouped and inter-linked. Developing a process architecture is generally seen as an important step in any process management initiative, as it lays the framework for existing business processes, including the relationships among them. Interested personnel can then view these business processes at varying levels of detail and scope, depending on their needs. In many cases, developing a process architecture becomes an iterative process as organisations understand more and more about their operations. Nevertheless, it is generally more appropriate to define the process architecture at an early stage of process management. 
Process architectures generally consist of several tiers (or levels) in a hierarchical orientation, with each tier describing more process detail than the tier before it. The first tier generally describes the overall, high-level, abstract activities that an organisation performs. The second tier generally describes the key processes that define an organisation and provide the mechanism for the implementation of the first-tier elements. The third tier (and possible sub-tiers) generally describes individual, well-defined processes that are implemented in order to achieve the goals of an organisation. This tier is highly detailed, with many meta-models available to increase the capability of an organisation in modelling this tier (e.g. ARIS [55]). The fourth tier (and possible sub-tiers) generally describes the individual, segmented activities that an organisation performs. These activities link together to make up processes.
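The tiered decomposition just described can be sketched as a small process tree, where nesting depth corresponds to tier number. The process names below are illustrative placeholders, not taken from any published architecture:

```python
# Sketch of a tiered process architecture as a nested mapping.
# Each key is a process; its value holds its sub-processes (the next tier).
# All process names are illustrative placeholders.

architecture = {
    "Asset Lifecycle Management": {
        "Acquisition Planning": {},
        "Operation and Maintenance": {
            "Preventive Maintenance": {},
            "Corrective Maintenance": {},
        },
    },
}

def list_processes(node, depth=0, out=None):
    """Flatten the tree; indentation mirrors the tier depth."""
    out = [] if out is None else out
    for name, children in node.items():
        out.append("  " * depth + name)
        list_processes(children, depth + 1, out)
    return out

for line in list_processes(architecture):
    print(line)
```

Traversing the mapping recursively reproduces the tier structure, which is all a process architecture diagram encodes at this level of abstraction.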
3
METHODOLOGY
This section details the methodology followed in developing the fundamental and conceptual asset management ontology and process architecture. The overall methodology is shown in Figure 1, followed by the details of each phase.
Figure 1: Flowchart depicting overall methodology for AM ontology and process architecture development
3.1 Document Selection
In order to conduct any text mining initiative, unstructured text (usually in the form of documents) must first be sourced. As the goal of this research was to develop a fundamental engineering asset management ontology, documents describing engineering asset management were first analysed. In total, over 100 articles (including journal articles, conference proceedings, books and practitioner publications) were scanned in order to find a suitable source, so as to establish a solid base for an asset management ontology and process architecture. The article that was ultimately chosen was PAS 55 (Part 2) [12]. In 2004, the Institute of Asset Management (IAM) [8, 12] published, in two parts, PAS 55 – a publicly available specification document. It was developed in response to demand from industry for a standard for carrying out asset management. The first part details the specification, whereas the second part details the guidelines for applying the first part. PAS 55 centres on the core concept that an asset management system consists of five stages/phases: policy and strategy; asset management information, risk assessment and planning; implementation and operation; checking and corrective action; and management review. The specification then details what an organisation should have in its current asset management practice. Currently, the document is being used to certify organisations that prove their effective asset management practices through gap analyses. PAS 55 can be thought of as a checklist of asset management elements that an organisation needs to adopt to improve its management of physical assets. The specification was developed by a large body of agencies, and in some ways is considered to be a quasi-standard (a BSI standard) in asset management. 
The manual is not meant to be prescriptive in the form of direct instructions, thus making it open to interpretation. There are also no individual quality weightings for the elements discussed, so it is not easy to gauge exactly how best to apply PAS 55. It does, however, give a very good high-level (holistic) view of asset management and can be of great benefit to organisations looking to improve their asset management processes. There are several reasons for choosing the PAS 55-2 document. Firstly, the document is itself a summarised snapshot of engineering asset management, describing the essential elements of an effective and suitable asset management system. This means that the text contained within the document is more focused than some of the other texts. Secondly, PAS 55 was developed by a large consortium of asset management practitioners, and has gone through extensive review and update phases. PAS 55 is now being used to benchmark an organisation’s asset management initiatives, to see whether the organisation is implementing the required elements of asset management. PAS 55 has received mostly positive feedback and uptake by industry, and is the first step towards a more rigid standard in engineering asset management.
3.2 Tool Selection for Ontology Development
Protégé is a free, open-source platform that provides a growing user community with a suite of tools to construct domain models and knowledge-based applications with ontologies. At its core, Protégé implements a rich set of knowledge-modeling structures and actions that support the creation, visualization, and manipulation of ontologies in various representation formats. In particular, the Protégé-Frames editor was used, as it enables users to build and populate ontologies that are frame-based, in accordance with the Open Knowledge Base Connectivity (OKBC) protocol. 
In this model, an ontology consists of a set of classes organized in a hierarchy to represent a domain's concepts (in this case asset management), a set of slots associated to classes to describe their properties and relationships, and a set of instances of those classes - individual exemplars of the concepts that hold specific values for their properties. Protégé is one of the most common tools available to build ontologies, and as it was not the goal of this research to evaluate various ontology development tools, Protégé was selected due to its broad support base and ease of use [56].
3.3 Manual Text Mining
The PAS 55-2 document’s main content is 36 pages in length (Sections 4.2-4.6). A manual text mining/analysis approach (as opposed to using a computer text mining application) was utilised due to this relatively small document length. The second reason for choosing a manual approach was so that contextual information could also be captured. As mentioned previously, computers cannot interpret contextual information as well as humans reading the same passage of text. Both contextual information and experience in using the English language are positive arguments for using a manual text mining approach. This, however, only applies if one has a small amount of text to analyse. Most text mining applications use many source documents; in these cases, a manual text mining approach cannot realistically be utilised. When text mining a source document, an analyst essentially scans for three open word classes, namely: nouns, verbs and adjectives. Other word classes can also be utilised, but as the intention was to extract only the key terms of asset management (for the ontology), these three word classes were sufficient. To recall general knowledge: nouns give names to persons, places, things and concepts in general; verbs are the class of words used for denoting actions; and adjectives are words used to modify nouns [57]. Extracted terms were placed into the following format: noun (adjective1…adjectivex). As expected, at the start of the text mining activity, many terms were continually being added to the list (implemented in Excel 2007). As the activity continued, fewer and fewer terms were added, as they already existed in the list; rather, adjectives were added to the list where the text described a particular concept from another contextual point of view. Verbs, in this case, were used purely for realising and supporting the context of any particular passage(s) of text. 
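The bookkeeping described above, where each noun is listed once and adjectives found in later passages accumulate against the existing noun entry in the noun (adjective1…adjectivex) format, can be sketched as follows. The part-of-speech tags are hand-assigned toy data, since the actual analysis in this research was performed manually:

```python
# Sketch of the noun/adjective term-list bookkeeping. Tagged tokens are
# hand-labelled toy data, not the output of a real POS tagger.

def extract_terms(tagged_tokens):
    terms = {}          # noun -> ordered list of distinct adjectives
    pending_adjs = []   # adjectives seen since the last non-adjective
    for word, tag in tagged_tokens:
        if tag == "ADJ":
            pending_adjs.append(word)
        elif tag == "NOUN":
            adjs = terms.setdefault(word, [])
            for a in pending_adjs:
                if a not in adjs:   # a repeated noun only gains NEW adjectives
                    adjs.append(a)
            pending_adjs = []
        else:
            # verbs etc. only carry context; reset the adjective buffer
            pending_adjs = []
    return {n: f"{n} ({', '.join(a)})" if a else n for n, a in terms.items()}

tokens = [("critical", "ADJ"), ("asset", "NOUN"), ("maintain", "VERB"),
          ("physical", "ADJ"), ("asset", "NOUN"), ("risk", "NOUN")]
print(extract_terms(tokens))  # asset gains both adjectives; risk has none
```

The second occurrence of "asset" adds only the new adjective, mirroring the behaviour observed as the manual scan progressed.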
From the 36 pages of text scanned, a total of 1193 individual terms were manually extracted.
3.4 Classification of Terms
The terms extracted in the previous step were then classified into several categories, following the ARIS architecture methodology, i.e., the ARIS house concept [55], and in particular the EPC modelling convention [55]. The EPC (event-driven process chain) notation is a process modelling notation that is composed of the following rudimentary elements:
Event – passive trigger points for a process or function (or activities)
Function – fundamental activity as performed by an agent
Organizational unit – agent performing the activity (e.g. person)
Resource object - physical objects that exist in the world which are utilized by a function and/or an organizational unit
Information system object – information systems-related objects as utilized by a function and/or an organizational unit
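Assigning extracted terms to the EPC element types listed above can be sketched with simple keyword rules. The keyword lists and example terms below are illustrative assumptions, not drawn from PAS 55-2:

```python
# Toy keyword-rule classifier over the five EPC element types.
# Keyword lists are illustrative assumptions; the first matching type
# wins, so keyword order matters and borderline terms still need the
# kind of manual, contextual review described in the text.

TYPE_KEYWORDS = {
    "Event": ["failure", "received", "overdue"],
    "Function": ["inspect", "repair", "review"],
    "Organizational unit": ["manager", "team", "contractor"],
    "Resource object": ["asset", "spare", "tool"],
    "Information system object": ["database", "register", "cmms"],
}

def classify(term):
    term_l = term.lower()
    for element_type, keywords in TYPE_KEYWORDS.items():
        if any(k in term_l for k in keywords):
            return element_type
    return "Unclassified"  # flag for manual review

print(classify("critical asset"))       # Resource object
print(classify("maintenance manager"))  # Organizational unit
```

In practice the classification in this research was done by hand; a rule sketch like this only illustrates why ambiguous terms end up needing human judgement.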
As there was no distinct separation between functions and processes (functions being the constructs of a process), “process” was used as the category to describe a procedure the organization performed. The final categories used were as follows: AM EVENT, AM ORGANIZATIONAL ENTITY, AM RESOURCE ENTITY, AM INFORMATION SYSTEM ENTITY and AM PROCESS. These categories form the uppermost elements of the ontology, which equate to the objects that exist in the asset management domain. The selection of these elements also aids in creating actual process chains in the EPC notation (a commonly adopted notation in the process management domain). In this case, the ontology represents the process architecture as part of its composition.
3.5 Ontology and Process Architecture Development
An ontology is an explicit account or representation of some part of a conceptualisation; in this context, a collection of terms and definitions relevant to business enterprises [58, 59]. Ontologies are generally created for specific applications, and in some cases domains; however, their creation is still generally considered to be an art rather than a science [17]. Several methodologies for ontology development currently exist in the literature, such as in [16, 17, 46, 47, 49, 51, 56, 59-62]. Most of these works propose a generic skeletal methodology for ontology development, as follows:
Identify a purpose for the ontology (determines the level of formality at which the ontology should first be described).
Identify scope (a specification is produced which fully outlines the range of information that the ontology must characterise).
Formalisation (create the code, formal definitions and axioms of terms in the specification).
Formal evaluation (generally includes the checking against purpose or competency questions specific to a particular ontology).
In [59], these generic steps are discussed in further detail. For example, formality refers to an ontology being either: highly informal (expressed loosely in natural language); structured informal (expressed in a restricted and structured form of natural language, greatly increasing clarity by reducing ambiguity); semi-formal (expressed in an artificial, formally defined language); or rigorously formal (meticulously defined terms with formal semantics, theorems and proofs of such properties as soundness and completeness). There are also several purposes for an ontology (mentioned briefly in an earlier section). These are: communication (between people); inter-operability (among systems, achieved by translating between different modelling methods, paradigms, languages and software tools); and systems engineering (including re-usability, knowledge acquisition, reliability and specification). An ontology can also be generic, that is, reusable in a range of different situations. In terms of asset management, as per this application, the ontology developed is unambiguous but informal. This is because the focus of this research is not the inter-operability of information systems, but rather a systematic and consistent approach to developing asset management process patterns. The subject matter is the third element of an ontology. Three widely accepted categories are:
Whole subjects (e.g. medicine, geology, finance)
Subjects of problem solving
Subjects of knowledge representation languages
The first category is generally the most popular and is frequently referred to as a domain ontology. Overlap between these categories is generally encountered due to the difficulty in scoping an ontology perfectly. In this paper, the developed ontology is an asset management domain ontology. The methodology implemented for developing the initial asset management ontology is presented below, followed by more specific details of each step:
Figure 2: Ontology development methodology (using [16])
Defining the domain and scope of the ontology: As mentioned earlier, the ontology developed as part of this research is for the asset management domain, the scope being that implemented by the PAS 55-2 document. There is a perceived lack of clear understanding in the literature of what processes and elements make up the modern context and understanding of asset management.
Selecting important terms in the ontology: The extraction and classification of terms as outlined in Sections 3.3 and 3.4 ensured that the most important terms were selected from the document. As the document itself is a summary of asset management, a high percentage of terms were, as expected, considered important to the ontology.
Defining the class and class hierarchy of the ontology: A combination development process was used to develop the class hierarchy of important terms. The uppermost classes were chosen as: AM EVENT, AM ORGANIZATIONAL ENTITY, AM RESOURCE ENTITY, AM INFORMATION SYSTEM ENTITY and AM PROCESS. A combination process is one where several top-level concepts are first selected, followed by the recursive process of placing both lower-level and middle-level elements into the class ontology. Thus, a combination approach combines a top-down approach (high-level concepts first, then lower-level concepts) and a bottom-up approach (group the most specific elements first, then generalise into more abstract constructs). When developing the class hierarchy, the following rule was applied to ensure consistency among classes: if a class A is a superclass of class B, then every instance of B is also an instance of A.
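The consistency rule for the class hierarchy can be checked mechanically. A minimal sketch, with the hierarchy stored as a child-to-parent map (the class names follow the developed ontology; the check itself is an illustrative assumption):

```python
# Walk the superclass chain to verify: if A is a superclass of B, then
# everything that is an instance of B also counts as an instance of A.

hierarchy = {  # class -> direct superclass (None at the top)
    "Asset": "AM RESOURCE ENTITY",
    "Asset System": "AM RESOURCE ENTITY",
    "AM RESOURCE ENTITY": None,
}

def is_instance_of(declared_class, query_class):
    """True if an instance declared under declared_class is also an
    instance of query_class, by transitivity up the hierarchy."""
    cls = declared_class
    while cls is not None:
        if cls == query_class:
            return True
        cls = hierarchy.get(cls)
    return False

# An instance declared as an Asset is also an AM RESOURCE ENTITY...
print(is_instance_of("Asset", "AM RESOURCE ENTITY"))   # True
# ...but the converse does not hold.
print(is_instance_of("AM RESOURCE ENTITY", "Asset"))   # False
```

The same walk underlies slot inheritance in frame-based tools: a slot defined on a superclass applies to every instance further down the chain.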
Defining the slots of the classes: Slots define the internal structure of the concepts of classes. Thus, slots are the internal properties of individual classes (relations). For this research, although slots were entered into Protégé, they were not given values or ranges; rather, they were described simply as strings/words with no values set.
4
ASSET MANAGEMENT ONTOLOGY AND PROCESS ARCHITECTURE
Due to the size and layout constraints of this paper, full presentation of the ontology and process architecture is not feasible, as hundreds of classes and slots were identified from a single (short) document. Particular extracts of the ontology and process architecture are presented with accompanying details.
Figure 3: Upper-level AM ontology elements (part A)
The diagram in Figure 3 shows part of the upper-level AM ontology (one superclass): AM RESOURCE ENTITY. This class describes a physical object that is used by (as input or output of) an activity/function/process to enable the activity/function/process to complete; in many cases, the object is modified (e.g. an asset is repaired). There are four sub-classes in this case: asset, asset system, asset-related resource (e.g. spares/inventory), and AM document. Each class (both super and sub) can have a slot associated with it. As mentioned previously, a slot describes certain properties of a class. As an example from the above diagram, an asset can have the property “performance target” with an associated value or value range (in this case simplified and limited to a string value; however, quantitative values can also be used).
Figure 4: Upper-level AM ontology elements (part B)
Continuing on from the diagram in Figure 3, Figure 4 shows the remaining superclasses of the developed ontology, namely: AM ORGANIZATIONAL ENTITY, AM EVENT, AM INFORMATION SYSTEM ENTITY and AM PROCESS. It can be seen that the more slots that are developed, the more defined a class can become. By putting exact values into the slots and slot ranges, instances of classes can be created. For example, a specific PERSON in the organization will have a particular set of values for the slots competence, expertise, qualifications and so on. By becoming an instance of a class, the element becomes more succinct and less abstract. The same goes for the ROLE class: a specific role will have its slot properties filled in. A major benefit of a detailed and comprehensive ontology is the ability to make relational statements. For example, if the ROLE class had a slot called required qualification and an instance of the PERSON class had a specific qualification value filled in, the following statement could be made: if the qualification (property) of the instance of the PERSON class is equal to or greater than the required qualification (property), then instance X is suitable for role Y. The figure below shows how a particular instance is represented within an ontology (in this case a specialist asset designer is chosen for illustrative purposes only).
Figure 5: Example of an instance-of-class representation
The arbitrary values of high, management and internal are chosen only to illustrate how an instance is represented within an ontology and its relationship with its parent class. As an ontology is filled in to a more defined (deeper) level, more and more instances would be enacted, with the super/parent classes remaining in abstract form. The diagram in Figure 6 shows an extract of the process architecture (as per the source document). It shows how the levels discussed in Section 2.4 are actually enacted. As this is only a minimal extract, many processes are necessarily missing.
Figure 6: Extract of AM process architecture
Each element describes a particular process, which can then be further subdivided into sub-processes, and so on, thus embodying the very definition of a process architecture. Each process element therefore follows the same principles as those discussed earlier in regard to instances of classes.
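The relational statement discussed in this section (matching an instance of PERSON to an instance of ROLE by comparing qualification slots) can be sketched directly. The ordinal qualification scale and the instance values below are assumptions for illustration:

```python
# Sketch of the PERSON/ROLE relational statement. Qualification levels
# are mapped to an ordinal scale so "equal to or greater than" is well
# defined; the scale and instance values are illustrative assumptions.

QUAL_RANK = {"certificate": 1, "diploma": 2, "degree": 3, "masters": 4}

def suitable_for_role(person, role):
    """Instance X is suitable for role Y if X's qualification slot value
    meets or exceeds Y's required qualification slot value."""
    return QUAL_RANK[person["qualification"]] >= QUAL_RANK[role["required qualification"]]

person = {"name": "X", "qualification": "degree"}           # instance of PERSON
role = {"name": "Y", "required qualification": "diploma"}   # instance of ROLE

print(suitable_for_role(person, role))  # True: degree meets or exceeds diploma
```

Once slot values are typed and populated in this way, such rules can be evaluated automatically across all instances, which is the knowledge-based-system use case the paper anticipates.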
5
ANALYSIS OF RESULTS
No existing ontologies were found that encapsulate the real-world objects existing in the asset management domain. Asset management processes, despite being regarded as important by the asset management community, have received limited focus in research. Asset management is composed of many processes which organisations implement, manage and reuse constantly in real-world asset management operations. It is logical to identify these processes and present them in a systematic and efficient manner that supports reuse in industry. Given the lack of explicit asset management ontologies and process architectures currently in the literature (in many cases processes are implied), a direct comparison with an existing ontology and process architecture was not possible. The ontology literature suggests reusing existing ontologies when possible (or at least modifying them). This paper set out to develop a first-draft, fundamental asset management ontology, and to show that this is in fact possible and beneficial. Using only one source document, however, limits the scope and rigour of the results. Although PAS 55-2 was found to be a solid summary of asset management, there are elements covered in other sources that are absent. Accordingly, more source documents should be chosen so as to enable a broader scope of asset management to be captured. Through the analysis of existing asset management literature (other than the PAS 55-2 document), it is clear that asset management suffers from inconsistent terminology, possibly stemming from its multi-disciplinary origin and general complexity in application. Thus, it is envisioned that a manual text mining methodology with several source documents should be utilised, rather than computer-aided text mining. Contextual information that can be captured in slots and instances of classes may be overlooked when using a computer for the analysis of text. 
An iterative process should also be used to enable the addition and elimination of terms/classes/instances/slots when necessary. In its current form, the developed ontology (and process architecture) builds a solid base for future additions and modifications, including the implementation of feedback from industry. By building a more rigorous ontology, relational statements can be utilised, which will lead to the development of an asset management knowledge system/base.
6
CONCLUSION
This research presents the methodology and development of an initial and fundamental asset management ontology and, subsequently, an asset management process architecture. The results show that an asset management ontology and process architecture can help support an organisation’s asset management initiatives through consistent knowledge representation, knowledge-based systems development, process representation and improvement, process benchmarking and process compliance checking. The developed ontology consists of hundreds of classes and slots, having been extracted and classified from a single article (PAS 55-2). This research illustrates how an ontology can benefit the asset management community through common representation of key terms and their relationship to each other. Future work in this area should see the inclusion of additional terms into the developed asset management ontology so as to build a more comprehensive asset management ontology. REFERENCES 1
H. W. Penrose, Physical asset management for the executive. Old Saybrook, CT, USA: Success by Design Publishing, 2008.
2 J. E. Amadi-Echendu, "Managing physical assets is a paradigm shift from maintenance," presented at 2004 IEEE International Engineering Management Conference, 2004.
3 C. A. Schuman and A. C. Brent, "Asset life cycle management: towards improving physical asset performance in the process industry," International Journal of Operations and Production Management, vol. 25, pp. 566-579, 2005.
4 R. Moore, "Many facets to an effective asset management strategy," Plant Engineering, pp. 35-36, 2006.
5 P. Narman, M. Gammelgard, and L. Nordstrom, "A functional reference model for asset management applications based on IEC 61968-1," Department of Industrial Information and Control Systems, Royal Institute of Technology, KTH 2006.
6 R. Lutchman, Sustainable asset management: linking assets, people, and processes for results: DEStech Publications, Inc., 2006.
7 J. E. Amadi-Echendu, R. Willett, K. Brown, J. Lee, J. Mathew, N. Vyas, and B. S. Yang, "What is engineering asset management?," presented at 2nd World Congress on Engineering Asset Management (EAM) and the 4th International Conference on Condition Monitoring, Harrogate, United Kingdom, 2007.
8 The Institute of Asset Management, PAS 55-1 (Publicly Available Specification - Part 1: specification for the optimized management of physical infrastructure assets), 2004.
9 D. G. Woodward, "Life cycle costing - theory, information acquisition and application," International Journal of Project Management, vol. 15, pp. 335-344, 1997.
10 E. Wittwer, J. Bittner, and A. Switzer, "The fourth national transportation asset management workshop," International Journal of Transport Management, vol. 1, pp. 87-99, 2002.
11 J. Mathew, "Engineering asset management - trends, drivers, challenges and advances," presented at 3rd World Congress on Engineering Asset Management and Intelligent Maintenance Systems (WCEAM-IMS 2008), Beijing, China, 2008.
12 The Institute of Asset Management, PAS 55-2 (Publicly Available Specification - Part 2: guidelines for the application of PAS 55-1), 2004.
13 D. Anderson, P. Kohler, and P. Kennedy, "A certification program for asset management professionals," presented at 3rd World Congress on Engineering Asset Management and Intelligent Maintenance Systems (WCEAM-IMS 2008), Beijing, China, 2008.
14 IPWEA, International Infrastructure Management Manual (Version 3.0): Institute of Public Works Engineers, 2006.
15 B. Chandrasekaran, J. R. Josephson, and V. R. Benjamins, "What are ontologies, and why do we need them?," IEEE Intelligent Systems and their Applications, vol. 14, pp. 20-26, 1999.
16 N. F. Noy and D. L. McGuinness, "Ontology development 101: a guide to creating your first ontology," Stanford KSL Technical Report KSL, 2009.
17 D. M. Jones, T. J. M. Bench-Capon, and P. R. S. Visser, "Methodologies for ontology development," in 15th IFIP World Computer Congress - IT & KNOWS Conference. Budapest: Chapman-Hall, 1998.
18 O. Barros, "Business processes architecture and design," BPTrends, 2007.
19 I. Moorhouse, "Asset management of irrigation infrastructure – the approach of Goulburn-Murray Water, Australia," Irrigation and Drainage Systems, vol. 13, pp. 165-187, 1999.
20 C. Spires, "Asset and maintenance management – becoming a boardroom issue," Managing Service Quality, vol. 6, pp. 13-15, 1996.
21 M. Hodkiewicz, "Education in engineering asset management (Paper 064)," presented at ICOMS Asset Management Conference, Melbourne, Australia, 2007.
22 R. E. Brown and B. G. Humphrey, "Asset management for transmission and distribution," IEEE Power and Energy Magazine, vol. 3, pp. 39-45, 2005.
23 M. Mohseni, "What does asset management mean to you?," presented at 2003 IEEE PES Transmission and Distribution Conference and Exposition, 2003.
24 C. Palombo, "Eight steps to optimize your strategic assets," IEEE Power and Energy Magazine, vol. 3, pp. 46-54, 2005.
25 C. P. Holland, D. R. Shaw, and P. Kawalek, "BP's multi-enterprise asset management system," Information and Software Technology, vol. 47, pp. 999-1007, 2005.
26 Y. Mansour, L. Haffner, V. Vankayala, and E. Vaahedi, "One asset, one view - integrated asset management at British Columbia Transmission Corporation," IEEE Power and Energy Magazine, vol. 3, pp. 55-61, 2005.
27 Y. Sun, L. Ma, and J. Mathew, "Asset management processes: modelling, evaluation and integration," in Second World Congress on Engineering Asset Management. Harrogate, UK, 2007.
28 L. Ma, Y. Sun, and J. Mathew, "Asset management processes and their representation," presented at 2nd World Congress on Engineering Asset Management, Harrogate, UK, 2007.
29 CIEAM, "EAM 2020 roadmap: report of a workshop facilitated by the Institute of Manufacturing, UK for the CRC for Integrated Engineering Asset Management, Australia," 2008.
30 R. F. Stapelberg, Risk based decision making (RBDM) in integrated asset management. Brisbane, Australia: CIEAM, 2006.
31 D. L. Dornan, "Asset management: remedy for addressing the fiscal challenges facing highway infrastructure," International Journal of Transport Management, vol. 1, pp. 41-54, 2002.
32 I. B. Hipkin, "A new look at world class physical asset management strategies," South African Journal of Business Management, vol. 29, pp. 158-163, 1998.
33 G. O'Loghlin, "Asset management - has there been any reform?," Canberra Bulletin of Public Administration, vol. 99, pp. 40-45, 2001.
34 L. A. Newton and J. Christian, "Challenges in asset management - a case study," presented at CIB 2004 Triennial Congress, Toronto, ON, 2004.
35 R. I. Godau, "The changing face of infrastructure management," Systems Engineering, vol. 2, pp. 226-236, 1999.
36 Z. Okonski and E. Parker, "Enterprise transforming initiatives," IEEE Power and Energy Magazine, vol. 1, pp. 32-35, 2003.
37 M. Rajman and R. Besancon, "Text mining: natural language techniques and text mining applications," presented at 7th IFIP Working Conference on Database Semantics (DS-7), 1997.
38 I. Spasic, S. Ananiadou, J. McNaught, and A. Kumar, "Text mining and ontologies in biomedicine: making sense of raw text," Briefings in Bioinformatics, vol. 6, pp. 239-251, 2005.
39 A.-H. Tan, "Text mining: the state of the art and the challenges," presented at PAKDD 1999 Workshop on Knowledge Discovery from Advanced Databases, 1999.
40 R. A.-A. Erhardt, R. Schneider, and C. Blaschke, "Status of text-mining techniques applied to biomedical text," Drug Discovery Today, vol. 11, pp. 315-325, 2006.
41 M. Hearst, "What is text mining?," http://www.jaist.ac.jp/~bao/MOT-Ishikawa/FurtherReadingNo1.pdf, 2003.
42 J. Allen, Natural language understanding, 2nd ed: Benjamin/Cummings Publishing, 1995.
43 S. Russell and P. Norvig, Artificial intelligence: a modern approach: Prentice Hall, 1995.
44 D. W. Embley, D. M. Campbell, and R. D. Smith, "Ontology-based extraction and structuring of information from data-rich unstructured documents," presented at International Conference on Information and Knowledge Management, Bethesda, Maryland, USA, 1998.
45 C. Price and K. Spackman, "SNOMED clinical terms," British Journal of Healthcare Computing & Information Management, vol. 17, pp. 27-31, 2000.
46 T. R. Gruber, "Towards principles for the design of ontologies used for knowledge sharing," International Journal of Human Computer Studies, vol. 43, pp. 907-928, 1993.
47 M. Uschold, "Building ontologies: towards a unified methodology," Technical Report - University of Edinburgh Artificial Intelligence Applications Institute AIAI TR, 1996.
48 M. Cristani and R. Cuel, "A comprehensive guideline for building a domain ontology from scratch," in International Conference on Knowledge Management (I-KNOW'04). Graz, Austria, 2004, pp. 205-212.
49 P. Bertolazzi, C. Krusich, and M. Missikoff, "An approach to the definition of a core enterprise ontology: CEO," in OESSEO 2001 - International Workshop on Open Enterprise Solutions: Systems, Experiences, and Organizations. Rome, 2001.
50 P. Velardi, P. Fabriani, and M. Missikoff, "Using text processing techniques to automatically enrich a domain ontology," presented at 18th International Conference on Formal Ontology in Information Systems, Ogunquit, Maine, USA, 2001.
51 M. Gruninger, K. Atefi, and M. S. Fox, "Ontologies to support process integration in enterprise engineering," Computational and Mathematical Organization Theory, vol. 6, pp. 381-394, 2000.
52 P. Harmon, "Business process architecture and the process-centric company," BPTrends, vol. 1, 2003.
53 APQC, "Process Classification Framework," 2009.
54 J. Zachman, "Concise definition of the enterprise framework," 2009.
55 M. Nuttgens, T. Feld, and V. Zimmermann, "Business process modeling with EPC and UML: transformation or integration?," presented at Proceedings of The Unified Modeling Language - Technical Aspects and Applications, Mannheim, Germany, 1998.
56 O. Corcho, M. Fernandez-Lopez, and A. Gomez-Perez, "Methodologies, tools and languages for building ontologies. Where is their meeting point?," Data and Knowledge Engineering, vol. 46, pp. 41-64, 2003.
57 T. Amble, The understanding computer - natural language understanding in practice, 2008.
58 N. Guarino and P. Giaretta, Ontologies and knowledge bases: towards a terminological clarification. Amsterdam: IOS Press, 1995.
59 M. Uschold, M. King, S. Moralee, and Y. Zorgios, "The enterprise ontology," The Knowledge Engineering Review, vol. 13, pp. 31-89, 1998.
60 N. Guarino, "Understanding, building, and using ontologies," LADSEB-CNR, 1996.
61 M. Uschold and M. King, "Towards a methodology for building ontologies," in International Joint Conference on Artificial Intelligence - Workshop on Basic Ontological Issues in Knowledge Sharing, 1995.
62 M. Uschold and M. Gruninger, "Ontologies: principles, methods and applications," The Knowledge Engineering Review, vol. 11, pp. 93-136, 1996.
Acknowledgments This research was conducted within the CRC for Integrated Engineering Asset Management, established and supported under the Australian Government’s Cooperative Research Centres Programme. This research is also sponsored by QR Limited. The authors are grateful for both the financial support and the opportunity of working with these organisations.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
TOWARDS A MAINTENANCE SEMANTIC ARCHITECTURE Mohamed Hedi Karray, Brigitte Chebel Morello and Noureddine Zerhouni
Automatic Control and Micro-Mechatronic Systems Department, 24, Rue Alain Savary, 25000 Besançon, France
Technological and software progress, together with the evolution of processes within companies, has highlighted the need for maintenance systems to evolve from autonomous systems to cooperative, information-sharing systems based on software platforms. This need has given rise to various maintenance platforms. The first part of this study investigates the different types of existing industrial platforms and characterizes them against two criteria: information exchange and relationship intensity. This identifies the e-maintenance architecture as the most efficient current architecture. Despite its effectiveness, the latter can only guarantee technical interoperability between the various components. Therefore, the second part of this study proposes a semantic, knowledge-based architecture, thereby ensuring a higher level of semantic interoperability. To this end, a specific maintenance ontology has been developed. Key words: maintenance systems, e-maintenance, interoperability, semantic maintenance, ontology. 1
INTRODUCTION
Today’s enterprises must respond to increasing demands in terms of quality and quantity of products and services, responsiveness and cost reduction. To deal with these demands, a company must have a reliable production system, well maintained by an efficient and inexpensive maintenance system. A well-performing and well-organized maintenance service contributes to the consistency of the production system; it extends the life of industrial equipment and thus improves the overall performance of the company. This need for maintenance concerns any type of enterprise, whether industrial or service provider. Since the 1980s, a phase of structuring and standardizing maintenance services has been under way. The evolution of markets, globalization and their emphasis on profit and competitiveness have driven the development of new concepts of production organization as well as of maintenance organization. At the same time, quality has begun to play an important role, as have dependability and, specifically, the maintenance function in companies. New information and communication technologies (ICT) have helped to establish and evolve these roles. Thanks to ICT, the emergence of the Web and the Internet, maintenance and monitoring services can be performed automatically, remotely and through various distributed information systems. Hence the emergence of the concept of services offered through maintenance architectures, ranging from autonomous systems to integrated systems where cooperation and collaboration are vital to any operation. On the other hand, setting up these fundamental aspects is a complex task. We are thus particularly interested in the type of information exchanged and in the complex relationships between the different systems and applications in these architectures. At this level, we are confronted with a classical problem in information systems: interoperability.
The latter means the ability of two or more systems or components to exchange information and to use the information that has been exchanged [1]. In this paper we focus on the semantic interoperability of this exchanged information, on how it can guarantee an understandable exchange, and on how it can evolve existing architectures from static architectures to intelligent, knowledge-based ones. To this end, we build a maintenance ontology that will be shared between the different systems of the architecture. The objective of this paper is twofold: (i) to survey existing maintenance architectures and (ii) to propose a new generation of semantically interoperable maintenance architecture. The rest of the paper is organised as follows. Section 2 presents the characteristics of complex systems and their relations. Sections 3 and 4 trace the history of maintenance systems and review the various existing maintenance architectures. Section 5 presents the semantic interoperability problem and its importance in setting up the s-maintenance architecture. In Section 6 we build a domain ontology of maintenance based on an analysis of maintenance processes. Future work on the use and evaluation of our ontology, and the conclusion, are developed in Sections 7 and 8. 2
COMPLEX SYSTEMS CHARACTERISTICS
In this section we develop two classification criteria for characterizing software architectures from a macroscopic viewpoint, leaving aside details (protocols, etc.) to be studied when improving these architectures, including e-maintenance. 2.1 Information evolution The information used in the different applications of the maintenance field has changed in the light of developments in information technology and with the growing complexity of the industrial environment. In the past, this information was entered manually on paper (drawings, diagrams, manuals) and was exchanged verbally between operators in an informal way. Today the information is different: it has become formalized and structured so that it can be manipulated by information systems. At the same time, the enterprise environment has become increasingly complex and production systems increasingly dynamic, which makes the context of the information's use more variable and unstable. Information is uncertain; it evolves with the changing context. The way to reduce this uncertainty is to place this information in a context with meaning and direction, turning it into knowledge for a given objective. This knowledge then becomes, along with other information and knowledge, a source for acquiring skills. Today's information systems handle this knowledge to provide decision support to their users for problem solving and to improve their skills in this field. 2.2
Relations between systems
Thanks to technological and IT evolution, information systems that were independent and autonomous have begun to cooperate by exchanging and sharing information. More recently, new information and communication technologies (ICT) have enabled the migration of these different systems into an integrated system where cooperation and collaboration are essential to any operation. There are different types of relationships between the systems under review, and these will form the basis for the classification of the different maintenance architectures (see Figure 1).
Figure 1. Relationship intensity between systems. - The autonomy relationship is a regime under which a system has the maximum power of management and is independent of all other systems and components. There is no communication between systems, so each system must be self-sufficient in terms of the information it needs. - The communication relationship is a link between two or more systems that allows transfers or exchanges. The information transmitted is no longer limited to alphanumeric characters and also includes images, sound and video clips. In this context, the term communication is often used as a synonym for telecommunications. - The cooperative relationship describes cooperative work done through a division of labour in which each actor is responsible for part of the resolution of the problem. In our context, it is mainly technological and industrial cooperation: a cooperative agreement between independent systems that are committed to the joint production of maintenance services. - The collaborative relationship is a strategic partnership to achieve excellence through a combination of skills, suppliers or products. Collaboration involves a mutual commitment of the stakeholders, in a coordinated effort to resolve the problem, pooling resources, information and skills so that organizations better adapt to their environment.
3
HISTORY OF MAINTENANCE COMPUTING SYSTEMS
The development of computer systems in the field of industrial maintenance began when maintenance was recognized as a fundamental function of the company and particular stress was laid on the study and development of the procedures of this function. The information used in maintenance has changed in line with the evolution of information technologies and with the growing complexity of the enterprise environment. The information structure has changed so that it can be handled by information systems. Several aspects can be identified in the evolution of maintenance computing [2, 3]: - Computerization of maintenance procedures: The automation of business management made it possible to computerize several maintenance procedures. Computer files of equipment, interventions, stocks, plans, diagrams, etc. were thus created. The integration of these files and the automation of maintenance activities became possible thanks to CMMS (Computerized Maintenance Management System) packages. The daily events of maintenance were handled: breakdowns, execution of preventive maintenance, stock management. - Interfacing with software packages: Thereafter, these packages had to interface with other enterprise software, such as purchasing and accounting, which were already computerized. Large ERP (Enterprise Resource Planning) systems are the next step in streamlining business processes and integrating maintenance with other corporate functions. - Evolution of the technical field: Informatics has also made progress in the technical field of maintenance. Modern techniques of maintenance analysis and control have emerged in parallel with computing: vibration analysis, oil analysis, IR thermography, ultrasound, etc. Among these systems two main groups can be distinguished: analysis systems, and acquisition and control systems. * Analysis systems, sometimes coupled with expert systems, have been developed.
The analysis systems are also intended to provide decision support for equipment operators in diagnosis, prognosis and repair. * Among the acquisition and control systems, we can cite SCADA (supervisory control and data acquisition), equipment command and control, and technical data and documentation management systems. - Integration of intelligent modules in the maintenance architecture: The presence of these various intelligent maintenance modules leads us to make them communicate and collaborate. The construction of intelligent modules or bricks must help provide indicators for making the right strategic decisions and maintenance policy. - Development of ICT: The development of new information and communication technologies, the extension of the Internet into the enterprise, application integration and the emergence of new maintenance policies mark a new stage in the computerization of maintenance, which some call "intelligent maintenance". This leads to cooperative and distributed architectures of maintenance systems communicating with each other over networks. These maintenance architectures can be implemented using maintenance platforms, whose main idea is to offer a maintenance service via the Internet. The maintenance platforms proposed in the Proteus or OSA-CBM projects can serve as examples. 4
DEFINITIONS OF VARIOUS ARCHITECTURES
We propose a terminology characterizing the various computer systems in maintenance and classify them along two axes: the type of information used in the system and the intensity of a possible relationship with other systems (see Figure 2). The more intense the relationship, the more connected and integrated the systems are, and we then speak of common architectures to be implemented across platforms. The volume of automatically managed information is represented by the area of each system's square and increases with the intensity of collaboration and with the complexity of the shared information. We note that there is a parallel between our classification of systems and the classification of enterprises presented in several works [4].
Figure 2. Maintenance architectures classification - The maintenance system comprises a single computer system used on the maintenance site. This system is autonomous, with no data exchange with other systems. In parallel with the classification of companies, this corresponds to the traditional company; we therefore speak of a traditional information system architecture. - The remote maintenance system consists of at least two computer systems, a transmitter and a receiver of data and information exchanged at a distance. According to the AFNOR definition, remote maintenance is "the maintenance of an asset executed without physical access of the staff to the equipment". We speak of a distributed architecture, based on the concept of distance, which can transfer data by radio, telephone line or a local network. - With the spread of the Internet, remote maintenance systems evolve towards the concept of e-maintenance. An e-maintenance system is implemented on a platform integrating various cooperative distributed systems and maintenance applications. This platform relies on the global Internet (hence the term e-maintenance), and Web technology makes it possible to exchange, share and distribute data and information and to create common knowledge. Here the concept of intelligent maintenance can be exploited, and proactive and cooperative maintenance strategies are installed. - Finally, we propose an architecture intended to improve on the e-maintenance architecture at the level of communication and data exchange between systems, one which makes it possible to take account of the semantics of the data processed in the applications: s-maintenance (where "s" means semantic) [5]. We describe in Section 5 this concept, which relies on semantic knowledge via a maintenance ontology.
4.1 Maintenance This is the basic notion, where the system is completely autonomous. AFNOR defines maintenance as the "combination of all technical, administrative and managerial actions during the life cycle of an item intended to retain it in, or restore it to, a state in which it can perform the required function". A maintenance system is represented by an application supporting the various activities of the maintenance function, such as logistics, intervention planning and inventory (managed by the CMMS or ERP), diagnosis and repair (expert systems, databases), and equipment monitoring (SCADA, digital control equipment). The architecture of these systems can vary according to their objectives. We therefore propose to describe these architectures by a generic scheme valid for any enterprise system. This scheme consists of two main parts, namely the physical system and the management system. The latter produces the results or decisions based on information coming from the physical system [6]. The acquisition of information is manual, or rather limited in its automation, and decisions are thus made through an information system. 4.2 Remote maintenance The architecture of remote maintenance consists of two or several systems or subsystems that are physically separate and exchange data with each other. One of the systems can function as a data acquisition system, the issuer of structured data. The second system is the receiver, functioning as a data processing system. The transmitter can send data automatically or in response to a request from the receiving system. The results of data processing (output) are used by human actors or may be returned to the acquisition system to adjust the data acquisition. For data to be exchangeable and acceptable to both systems, they must be structured.
Still keeping the aspect of distance, remote maintenance can be installed on a single production site, or it may be distributed among different production or maintenance sites and/or a maintenance centre.
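The transmitter/receiver exchange described above can be sketched in code. This is a minimal illustration under assumed names: the equipment identifiers, field names and the "sample faster when vibration is high" rule are hypothetical, not part of any cited architecture.

```python
# Minimal sketch (hypothetical message format): an acquisition system emits
# structured records; a processing system consumes them and may send a
# request back to adjust the acquisition, as in the remote maintenance
# architecture described in the text.

import json

class AcquisitionSystem:
    """Transmitter: emits structured measurement records."""
    def __init__(self):
        self.sampling_period_s = 60

    def emit(self):
        # Data must be structured so both systems can accept it
        return json.dumps({"equipment": "pump01",
                           "vibration_mm_s": 4.2,
                           "period_s": self.sampling_period_s})

    def handle_request(self, request):
        # The receiver may ask to rearrange the data acquisition
        if request.get("action") == "set_period":
            self.sampling_period_s = request["period_s"]

class ProcessingSystem:
    """Receiver: processes records and issues requests when needed."""
    def process(self, message):
        record = json.loads(message)
        # Hypothetical rule: sample faster when vibration is high
        if record["vibration_mm_s"] > 4.0:
            return {"action": "set_period", "period_s": 10}
        return None

transmitter = AcquisitionSystem()
receiver = ProcessingSystem()
request = receiver.process(transmitter.emit())
if request:
    transmitter.handle_request(request)
```

The shared JSON structure is what makes the data "exchangeable and acceptable to both systems"; note that it still carries no agreed meaning, which is the gap the semantic layer discussed later addresses.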
Figure 3. Example of remote maintenance architecture An example of a remote maintenance architecture (cf. Figure 3) was created in the TEMIC project (Industrial Cooperative Remote Maintenance), which enabled cooperative remote maintenance: not only can maintenance staff perform work at a distance (remote maintenance), but they can do so in collaboration with other experts (cooperative work). Emphasis was placed on the mobility of the cooperating members at several levels: - Distant level: the remote maintenance actors are reachable wherever they are via the mobile network (GSM/GPRS). - Local level (nomadism): detection of the presence of remote maintenance actors within a preset perimeter (about 100 m) of the company managing the maintenance, in order to reach the most experienced technician for a particular problem. 4.3 E-maintenance The e-maintenance architecture operates via the Internet, which allows the various partner systems of the network to cooperate, exchange, share and distribute information (see Figure 4). The principle consists in integrating all the various maintenance systems into a single information system [7]. Systems offer different formats of information that are not always compatible for sharing; this requires coordination and cooperation between systems to make them interoperable. According to [8], interoperability is "the ability of two communication systems to communicate in an unambiguous way, whether such systems are similar or different. One can say that making interoperable is creating compatibility". The e-maintenance architecture must ensure interoperability between each of these different systems. The MIMOSA project (Machinery Information Management Open Systems Alliance) was the first, in the 1990s in the United States, to develop a complex information system for maintenance management [9].
The project aimed to develop a collaborative maintenance network by providing an open standard EAI (Enterprise Application Integration) protocol. The organization recommends and develops information-integration specifications to allow the management and control of added value through open, integrated and industry-oriented solutions. Information blocks from which an e-maintenance platform can be created were proposed in this project [10].
Figure 4. E-maintenance architecture. A functional architecture, OSA-CBM (Open System Architecture for Condition-Based Maintenance), dedicated to the development of strategies for condition-based or predictive maintenance [11], was developed from the MIMOSA CRIS relational schema. It contains seven flexible modules whose contents (methodology and algorithms) are configurable by the user (cf. Figure 5). It can be simplified and adapted to each industrial requirement by reducing the number of modules.
Figure 5. OSA/CBM project An e-maintenance architecture was presented in the European project Proteus (cf. Figure 6). The project was designed to provide a cooperative distributed e-maintenance platform including the existing systems for data acquisition, control, maintenance management, diagnosis assistance, documentation management, etc. The concept of this platform is defined by a single and coherent description of the installation to be maintained, through a generic architecture based on Web service concepts and by proposing models and technological solutions for integration. These techniques help to guarantee the interoperability of heterogeneous systems and to ensure the exchange and sharing of information, data and knowledge. The aim of the platform is not only to integrate existing tools, but also to anticipate their evolution through the introduction of new services.
Figure 6. Proteus e-maintenance platform [www.proteus-iteaproject.com] Web services were conceived to guarantee interoperability between the various applications of the platform. However, this interface-interconnection protocol does not treat the semantics of the input and output data. XML, used as the basis for data exchange, manages only structure and must be used with the RDF standard to guarantee the links between these entities. This architecture guarantees technical interoperability (the link between IT systems and the services they provide) but does not take account of semantic interoperability, which consists in giving "meaning" (semantics) to exchanged information and making sure that this meaning is shared by all the interconnected systems. Taking this semantics into account makes it possible for these systems to combine the information received with other local information and to process it in a way appropriate to this semantics [12]. The European project PROMISE (Product Lifecycle Management and Information Tracking Using Smart Embedded Systems) [13] proposes a closed-loop design and lifecycle management system. The objective of PROMISE is to allow information flow management to go beyond the customer, to close the PLC (Product Lifecycle) information loops, and to enable the seamless e-transformation of PLC information to knowledge [14]. This project focused on three working areas related to e-maintenance issues [15]: - Area 1: E-maintenance and e-service architecture design tools (design of the e-maintenance architecture as well as its platform for e-service applications). - Area 2: Development of watchdog computing for prognostics (development of advanced hashing algorithms for embedded product behaviour assessment and prognostics). - Area 3: Web-based and tether-free monitoring systems (development of "interface technologies" between the product e-service system platform and Web-enabled e-business software tools).
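The difference between exchanging structure and exchanging meaning can be sketched with RDF-style (subject, predicate, object) statements over a shared vocabulary. The sketch below is a stdlib-only illustration; the vocabulary terms and equipment names are hypothetical, not drawn from MIMOSA or Proteus.

```python
# Minimal sketch (hypothetical vocabulary): RDF-style triples over a shared
# maintenance vocabulary. XML alone would only transmit structure; agreeing
# on the predicates is what lets a receiving system interpret the data.

# Shared ontology vocabulary that every system on the platform agrees on
VOCAB = {"isA", "hasFailureMode", "monitoredBy"}

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        # Reject statements outside the shared vocabulary: without a common
        # vocabulary there is no guarantee the receiver understands them
        if predicate not in VOCAB:
            raise ValueError(f"unknown predicate: {predicate}")
        self.triples.add((subject, predicate, obj))

    def objects(self, subject, predicate):
        return {o for s, p, o in self.triples if s == subject and p == predicate}

# An acquisition-side system publishes statements
store = TripleStore()
store.add("pump01", "isA", "RotatingEquipment")
store.add("pump01", "hasFailureMode", "bearing_wear")
store.add("pump01", "monitoredBy", "vibration_sensor_3")

# A diagnosis-side system can now query by meaning, not by field position
failure_modes = store.objects("pump01", "hasFailureMode")
```

In a real platform the triples would be serialized in RDF and the vocabulary defined by the maintenance ontology; the point of the sketch is only that the predicates, not the syntax, carry the shared meaning.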
DYNAMITE (Dynamic Decisions in Maintenance) is a European project which aims to create an infrastructure for mobile monitoring technology and new devices that will bring major advances in maintenance decision systems incorporating sensors and algorithms [16]. The key features include wireless telemetry, intelligent local history in smart tags, and on-line instrumentation [15]. In [15], Iung et al review most e-maintenance platforms in order to evaluate their capabilities from different points of view: collaboration, process formalization, knowledge management, knowledge capitalization, interoperability, etc. In terms of knowledge capitalization, the major contributions come from the OSA-CBM and PROMISE project platforms; the latter also makes the major contribution in the context of knowledge management. Regarding interoperability, the MIMOSA and OSA-CBM standards make the most relevant contributions. On the other hand, Iung et al do not address semantic interoperability, and to our knowledge existing platforms do not focus on this issue. Hence, in this work we address this problem and present an s-maintenance ("s" for semantic) architecture which guarantees a high level of semantic interoperability between the various systems of the maintenance platform.

5 SEMANTIC INTEROPERABILITY IN MAINTENANCE ARCHITECTURES

We seek to set up an architecture addressing the semantic interoperability of data.
5.1 Semantic interoperability

The IEEE Standard Computer Dictionary defines interoperability as the "ability of two or more systems or components to exchange information and to use the information that has been exchanged" [1]. From this definition it is possible to decompose interoperability into two distinct components: the ability to exchange information, and the ability to use the information once it has been received. The former is denoted 'syntactic interoperability' and the latter 'semantic interoperability'. A small example suffices to demonstrate the importance of solving both problems. Consider two persons who do not share a common language. They can speak to one another and both will recognize that data has been transferred (they can probably also parse out individual words, recognize the beginning and end of message units, etc.). Nevertheless, the meaning of the message will be mostly incomprehensible; they are syntactically but not semantically interoperable. Similarly, consider a person who is blind and one who is deaf, but who both use the same language. They can attempt to exchange information, one by speaking and one by writing, but since each is incapable of receiving the other's messages, they are semantically but not syntactically interoperable [17]. In other words, semantic interoperability ensures that these exchanges make sense: that the requester and the provider have a common understanding of the "meanings" of the requested services and data [18]. Achieving semantic interoperability among different information systems is laborious, tedious and error-prone in a distributed and heterogeneous environment [19]. It is currently the subject of various works, which Park and Ram [20] classify into three broad approaches:
1. Mapping-based interoperability. This approach aims to build mappings between semantically related data or model elements [21].
A set of transformation rules is defined to translate or federate local schemas into a global schema; semantic interoperability is thus studied via transformation [22, 23].
2. Interoperability by interoperable languages. These query languages take both data and metadata into account to solve semantic conflicts when querying several databases [24].
3. Interoperability through intermediate mechanisms such as mediators or agents. These mechanisms must have specific knowledge of the area to coordinate different data sources, generally via ontologies [25, 26] or via middleware such as the Common Object Request Broker Architecture (CORBA), which is based on metadata messaging to facilitate interoperability at each level [27].
Other promising directions are presented by Chen et al in [28], who propose two new approaches to semantic interoperability: a model-driven interoperability architecture and a service-oriented architecture for interoperability. The model-driven interoperability (MDI) architecture is based on MDA and enterprise interoperability concepts; its objective is to automatically transform the models designed at the various abstraction levels of the MDA structure [29]. Service-oriented interoperability is based on service-oriented architectures adopting a federated approach [30], i.e. allowing interoperability of services 'on the fly' through dynamic accommodation and adaptation. Among the approaches classified above, we choose intermediate mechanisms using ontology engineering. Indeed, Heiler, Mao et al, Yang et al and other researchers agree that ontology engineering is the key technology for dealing with the semantic interoperability problem [18, 19, 31].
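The mapping-based approach (approach 1 above) can be sketched in a few lines: transformation rules relate each local schema's field names to a global schema, so that records from heterogeneous sources become comparable. The source names and field names below are invented for illustration.

```python
# Transformation rules map each local schema's field names onto a global
# schema, so records written against different local schemas can be
# translated, compared and federated through the global one.
RULES = {
    "plant_a": {"equip_id": "equipment", "fail_code": "failure", "dt": "date"},
    "plant_b": {"machine": "equipment", "breakdown": "failure", "when": "date"},
}

def to_global(source, record):
    """Translate a local record into the global schema via the rule set."""
    return {RULES[source][field]: value for field, value in record.items()}

rec_a = to_global("plant_a", {"equip_id": "P-101", "fail_code": "F3", "dt": "2009-06-01"})
rec_b = to_global("plant_b", {"machine": "P-101", "breakdown": "F3", "when": "2009-06-01"})
assert rec_a == rec_b == {"equipment": "P-101", "failure": "F3", "date": "2009-06-01"}
```

Real mapping systems must also resolve conflicts of units, granularity and meaning, which is exactly where purely name-based rules stop and ontologies become necessary.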
Ontologies specify the semantics of terminology systems in a well-defined and unambiguous manner [32], by formally and explicitly representing a shared understanding of domain concepts and the relationships between them. In the ontology-based approach, the intended meanings of terminologies and the logical properties of relations are specified through ontological definitions and axioms in a formal language such as OWL (Web Ontology Language) [33] or UML (Unified Modelling Language) [34]. This agreement seems promising to us and supports the knowledge management work which we implemented in a repair and diagnosis module applied to an e-maintenance platform [5]. One of the problems arising from this approach is the definition of a common ontology. In our case, concerned with a business approach to maintenance, an ontology of the equipment was produced during the development of the e-maintenance platform within the framework of the European project PROTEUS. This ontology, oriented towards a business approach to the maintenance of industrial equipment, is a common denominator between the various applications implemented in an e-maintenance platform. However, it was not exploited by all the assistance modules of the platform, but only by our diagnosis and repair assistance module, which did not guarantee the semantic interoperability of the platform. We propose to generalize the use of common ontologies to the other maintenance assistance applications in order to guarantee this semantic interoperability. A prerequisite for such use is a knowledge management approach during the development of the maintenance assistance systems.

5.2 S-maintenance architecture

The architecture of the s-maintenance platform builds on the e-maintenance architecture, where the interoperability of the various systems integrated in the platform is guaranteed by an exchange of knowledge represented by an ontology. So that information sharing in the e-maintenance cooperative network proceeds without difficulty, we must formalize this information in a way that allows the various systems belonging to the network to exploit it. We extend coordination between the network partners and develop an ontological base of the shared information. Systems share the semantics created for the common architecture of the e-maintenance platform (cf. Figure 7). This terminological and ontological base models the whole of the domain knowledge. It will play the role of a memory underpinning a knowledge management and capitalization system, so that experience feedback can be exploited to improve the functioning of the maintenance system. This system will use knowledge engineering as well as knowledge management tools. The software tool must play the role of a service integrator able to connect to the other, company-specific systems. This knowledge system makes it possible to identify, capitalize and restore the knowledge necessary for control, using a support environment [35]. The semantics has three levels, namely the general concepts of maintenance, the concepts of the application domain, and the concepts specific to each company.
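The three-level semantics just described (general maintenance concepts, application-domain concepts, company-specific concepts) can be pictured as layered vocabularies resolved from most specific to most general. All entries below are invented examples, not terms from the actual PROTEUS ontology.

```python
# Three layers of the shared semantics. A term is looked up in the most
# specific layer first, falling back to the more general ones, so every
# system on the platform resolves terms the same way.
GENERAL = {  # general maintenance concepts
    "Equipment": "physical asset subject to maintenance",
    "Intervention": "maintenance action on an equipment",
}
DOMAIN = {  # concepts of the application domain, e.g. process industry
    "Pump": "rotating equipment used to move fluids",
}
COMPANY = {  # concepts specific to one company, e.g. plant identifiers
    "P-101": "feed pump, line 1",
}

def resolve(term):
    """Resolve a term from the most specific layer to the most general."""
    for layer in (COMPANY, DOMAIN, GENERAL):
        if term in layer:
            return layer[term]
    raise KeyError("term not in shared semantics: " + term)

assert resolve("P-101") == "feed pump, line 1"
assert resolve("Equipment") == "physical asset subject to maintenance"
```

In the platform itself the layers would be ontology modules rather than dictionaries, but the fallback order is the same idea.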
Figure 7. S-maintenance architecture

This system builds on the concept of e-maintenance, with information exchanged over Web services but subject to additional constraints based on the "OKC" standard resulting from the semantic Web. The semantics of exchanged information requires the creation of a domain ontology common to the various systems. It allows knowledge and skills to be used and created, which leads to the use of knowledge management techniques and allows acquired knowledge to be capitalized. Systems collaborate, which requires a coordinated effort to solve problems.

6 DOMAIN ONTOLOGY OF MAINTENANCE
Several research works have tried to build a maintenance ontology. In the software maintenance area, Kitchenham et al [36] argue that empirical studies of maintenance are difficult to understand unless the context of the study is fully defined. They developed a preliminary ontology to identify a number of factors that influence maintenance; the purpose of the ontology was to identify factors that would affect the results of empirical studies, and it is presented in the form of a UML model. Ruiz et al [37] developed a semi-formal ontology in which the main concepts of the software maintenance literature are described. This ontology, besides representing static aspects, also represents dynamic issues related to the management of software maintenance projects; REFSENO (A Representation Formalism for Software Engineering Ontologies) [38] was the methodology used in this work. Matsokis and Kiritsis [39] propose an ontology-based approach for product lifecycle management, as an extension of the ontology proposed in the PROMISE project [40]. The latter provides a Semantic Object Model (SOM) for product data and knowledge management; the SOM provides a commonly accepted schema to support interoperability when adopted by different industrial partners. These works can be analysed from two points of view: what does each ontology represent, and what purpose does it serve? On the first point, the first two works try to conceptualize the entire maintenance domain, whereas the last two focus only on the product lifecycle, essentially its middle-of-life phase [41]. On the second point, the last two works aim to ensure interoperability between industrial partners, in contrast to the first two, which especially aim to ensure the best management of software maintenance and reuse activities.
Thus, we develop a general product maintenance ontology which covers the whole of the maintenance domain, with the goal of ensuring semantic interoperability among the different systems in the maintenance platform. We take advantage of the classification made by Rasovska et al in their study of the maintenance process [42] to set up our ontology. As shown in Figure 8, the authors define four fundamental technical and business fields in general maintenance: (i) equipment analysis, which consists of functional analysis and failure analysis; (ii) fault diagnosis and expertise, which aim to help the operator, during his intervention, to diagnose the problem, and the prognostic to anticipate the breakdown and to solve it without recourse to an expert; (iii) resource management, which deals with resource planning for all maintenance interventions; (iv) maintenance strategy management, which represents a decision support concept for maintenance managers.

Figure 8. Maintenance process concepts: equipment analysis; fault diagnosis and expertise; resource management; maintenance strategy (contract) management

Based on the study of the maintenance process, dependability concepts and the practice of maintenance experts, we developed this ontology of maintenance expertise, including a maintained-equipment model associated with the maintenance system components, as a UML class diagram. The choice of UML as the language of our ontology rests on its graphical expressivity and semantic power, recommended in various research works. Cranefield et al [34] focus on the benefits of using UML as an ontology language, and Bézivin [43] stresses that meta-models (e.g. UML), in the sense in which they are used in the OMG (Object Management Group), address the concept of representation and, more specifically, the ontology definition presented in [44]. We have built our own framework so that from its conception it takes into account the different scopes of the maintenance process. This ontology was developed as a tool for sharing semantics between the different actors in the e-maintenance platform. Although established independently of the reasoning methods, the domain ontology has a structure that depends on how the acquired knowledge will be used for reasoning, because experts deliver knowledge adapted to their reasoning. The domain model consists of twelve parts (i.e. packages) corresponding to both the structure of the enterprise memory and the maintenance process (see Figure 9).
These are: the monitoring management system, site management system, equipment expertise management system, resource management system, intervention management system, maintenance strategy management system, maintenances management system, equipment states system, historic management system, document management system, functional management system and dysfunctional management system.

Equipment expertise management system: characterized by the equipment components and sub-components in tree form (Component).

Site management system: a site is defined as a unit characterized by its location. It can be a production site, which contains operating equipment, or a maintenance centre, which is the central location from which equipment is operated and maintained.

Equipment states system: during its operation, equipment may be in one of the following states: normal state, degraded state, failure state, or programmed stop state. We include in programmed stop any stoppage of the equipment carried out by authorized personnel; among scheduled stops, we are interested only in maintenance.

Maintenances management system: this package is related to the programmed stop included in the equipment states system. It manages the different types of maintenance: corrective maintenance, conditional maintenance, and preventive maintenance.

Monitoring management system: consists of sensors (Sensor) installed on the equipment and the various measurements (Measure) coming from these sensors. A data acquisition model (Data acquisition model) manages the acquisition and exploitation of these measures. This model can trigger the intervention request procedure when a measure crosses a threshold, and is therefore connected with the intervention management system.

Intervention management system: focuses on the maintenance intervention. An intervention remedies the equipment failure; it is described by an intervention report and characterized by a maintenance type.
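The equipment states, the maintenance types and the threshold-triggered intervention request described above translate naturally into code. This is an illustrative sketch of those three pieces of the model, not an implementation of the platform; the threshold value is invented.

```python
from enum import Enum

class EquipmentState(Enum):
    """The four equipment states of the equipment states system."""
    NORMAL = "Normal state"
    DEGRADED = "Degraded state"
    FAILURE = "Failure state"
    PROGRAMMED_STOP = "Programmed stop state"

class MaintenanceType(Enum):
    """The maintenance types managed by the maintenances management system."""
    CORRECTIVE = "corrective"
    CONDITIONAL = "conditional"
    PREVENTIVE = "preventive"

# The data acquisition model triggers an intervention request when a sensor
# measure crosses a threshold, linking monitoring to intervention management.
VIBRATION_THRESHOLD = 10.0  # illustrative value

def check_measure(value, threshold=VIBRATION_THRESHOLD):
    """Return an intervention request marker if the measure exceeds the threshold."""
    return "intervention_request" if value > threshold else None

assert check_measure(12.5) == "intervention_request"
assert check_measure(3.0) is None
```

The enums mirror the ontology classes; in the real platform these would be concepts of the shared ontology rather than local Python types.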
Maintenance strategies management system: based on technical indicators (Technical indicator) and financial indicators (Financial indicator) for each piece of equipment in a maintenance contract.

Resources management system: describes the resources used in the maintenance system, namely human, material and document resources, and their subclasses. Operators (Operator), experts (Expert) and managers (Manager) are subclasses of human resource; tools (Tool), consumables (Consumable) and spare parts (Spare part) are subclasses of material resource. The document resource and its subclasses are presented in a separate package.

Document management system: this package presents the documentation resources which are indispensable in maintenance: the equipment plan, which contains the design and model of the equipment and its components; the technical documentation, where all technical information about a piece of equipment and its use guide is defined; the contract, which presents the maintenance contract; and finally the intervention report, which is composed of observations, the work order and technical comments.
Functional equipment management system: the functional analysis and associated model (Functional equipment model) characterize the equipment operation through the MainFunction and SecondFunction classes. They represent the equipment's main functions and the secondary functions that ensure the smooth running of the main function.

Dysfunctional equipment management system: each piece of equipment can suffer breakdowns and failures, described in the Failure class and analysed in the failure analysis (Failure equipment model). A failure is identified by symptoms (Symptom), caused by origins (Origin) and remedied by a remedial action (Action). It also has characteristics (Characteristics) such as criticality, appearance frequency, non-detection and gravity, which are evaluated in the FMECA (Failure Mode, Effects and Criticality Analysis).

Historic management system: contains the life history, which stores the life history of a piece of equipment. It is composed of equipment states, interventions and the different measurements of the monitoring system.
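The FMECA characteristics mentioned above are commonly combined into a single criticality score as the product of the occurrence frequency, gravity (severity) and non-detection ratings. The 1-10 scale used below is a common convention but an assumption here; companies calibrate their own scales.

```python
# FMECA criticality as the product of the three ratings evaluated for a
# failure mode: occurrence frequency, gravity (severity) and non-detection.
# Each rating is assumed here to be on a 1-10 scale.
def criticality(frequency, gravity, non_detection):
    for score in (frequency, gravity, non_detection):
        if not 1 <= score <= 10:
            raise ValueError("FMECA ratings assumed on a 1-10 scale")
    return frequency * gravity * non_detection

# A failure that is rare (2) but severe (9) and hard to detect (7):
assert criticality(2, 9, 7) == 126
```

Ranking failure modes by this score is what lets the maintenance strategy management system prioritize preventive actions.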
Figure 9. Domain ontology of maintenance
7 FUTURE WORK: ONTOLOGY USE AND EVALUATION
Modelling the domain ontology is very beneficial, but the question remains how to exploit this benefit. Presenting the ontology as a UML class diagram is valuable in terms of clarity and comprehensibility, but it allows neither the evaluation of the ontology nor its use in e-maintenance platforms. In other words, we cannot validate the ontology's reasoning, soundness and completeness [45], and we cannot navigate a UML class diagram. The ontology must be translated into an ontology language that allows reasoning and is understandable [46] by the technical components of the platform which use it. We are currently working to evolve this ontology by translating it into a description logic language to allow reasoning on the ontology, and to implement it through an interpreted or compiled language. This will permit us to study the capacity and quality of our ontology and guide us towards further areas of ontology development. On the other hand, we aim to enrich this domain ontology by adding more concepts and more information to cover all domain areas, which can be used to evolve the e-maintenance platform. At the same time, we aim to relate this domain ontology to a task ontology providing the dynamic activities of the maintenance system, such as diagnosis, prognosis, detection, acquisition, etc.
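Translating the UML model into a description logic is what makes queries such as subsumption ("is every Operator a Resource?") mechanically checkable. A minimal stand-in for that kind of inference is the transitive closure of is-a links; the class names below are taken from the resources package of the ontology, while the code itself is only a toy substitute for an OWL DL reasoner.

```python
# Direct is-a links between ontology classes (from the resources package).
IS_A = {
    "Operator": "HumanResource",
    "Expert": "HumanResource",
    "SparePart": "MaterialResource",
    "HumanResource": "Resource",
    "MaterialResource": "Resource",
}

def subsumed_by(concept, ancestor):
    """Follow is-a links upward: is `concept` a (transitive) kind of `ancestor`?"""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == ancestor:
            return True
    return False

assert subsumed_by("Operator", "Resource")        # Operator -> HumanResource -> Resource
assert not subsumed_by("SparePart", "HumanResource")
```

A real DL reasoner additionally checks consistency and infers implicit subsumptions from axioms, which is precisely the evaluation the UML diagram alone cannot provide.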
8 CONCLUSION
To improve system availability and safety as well as product quality, industries are convinced of the important role of the maintenance function, and various works consequently seek to evolve it. Taking advantage of new information technologies, which allow the integration of various assistance systems via platforms, these works make it possible to expand and develop maintenance systems. In this paper we proposed a classification of various existing maintenance architectures in order to infer a support system architecture for maintenance services. This classification is made according to the intensity of the relations between the systems (autonomy, communication, cooperation, collaboration) in a particular architecture. A collaborative or cooperative relation raises a problem at the level of interoperability between the architecture's systems. Semantic interoperability is considered one of the complex interoperability problems, which is why we focus on it in this paper. We thus presented the semantic maintenance (s-maintenance) architecture, which is based on an ontology common to the various systems. Indeed, the developed ontology provides a level of semantic interoperability. This ontology relates to the maintained equipment and is common to the platform, to guarantee interoperability between the integrated systems and applications. It is based on the maintenance process, dependability concepts and the practice of maintenance experts, and includes a maintained-equipment model associated with the maintenance system components as a UML class diagram. The choice of UML as the language of our ontology rests on the graphical expressivity and semantics of this language. To be operational, this class diagram will be translated into an ontology language allowing reasoning, such as OWL DL or LOOM.

9 REFERENCES
1. IEEE (1990) IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries, IEEE Std 610-1990.
2. Francastel J.C. (2003) Externalisation de la maintenance : Stratégies, méthodes et contrats. Paris: Dunod.
3. Boucly F. (1998) Le management de la maintenance : Evolution et mutation. Paris: Afnor Editions.
4. Dedun I. & Seville M. (2005) Les systèmes d'information interorganisationnels comme médiateurs de la construction de la collaboration au sein des chaînes logistiques : Du partage d'information aux processus d'apprentissages collectifs. Proceedings of the 6th International Congress on Industrial Engineering, Besançon.
5. Rasovska I., Chebel-Morello B. & Zerhouni N. (2005) Process of s-maintenance: decision support system for maintenance intervention. Proceedings of the 10th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'05), Italy.
6. Kaffel H. (2001) La maintenance distribuée : concept, évaluation et mise en oeuvre. PhD thesis, Université Laval, Quebec.
7. Muller A. (2005) Contribution à la maintenance prévisionnelle des systèmes de production par la formalisation d'un processus de pronostic. PhD thesis, Université Henri Poincaré, Nancy.
8. Spadoni M. (2004) Système d'information centré sur le modèle CIMOSA dans un contexte d'entreprise étendue. JESA, vol. 38, no. 5, pp. 497-525.
9. Kahn J. (2003) Overview of MIMOSA and the Open System Architecture for Enterprise Application Integration. Proceedings of COMADEM'03, pp. 661-670. Växjö University, Sweden.
10. Mitchell J., Bond T., Bever K. & Manning N. (1998) MIMOSA – Four Years Later. Sound and Vibration, pp. 12-2.
11. Lebold M. & Thurston M. (2001) Open standards for Condition-Based Maintenance and Prognostic Systems. Proceedings of the 5th Annual Maintenance and Reliability Conference (MARCON 2001), Gatlinburg, USA.
12. Wikipedia (2009) Available at: http://fr.wikipedia.org.
13. Lee J. & Ni J. (2004) Infotronics-based intelligent maintenance system and its impacts to closed-loop product life cycle systems. Invited keynote paper, IMS'2004, International Conference on Intelligent Maintenance Systems, Arles, France.
14. Kiritsis D. (2004) Ubiquitous product lifecycle management using product embedded information devices. Invited keynote paper, IMS'2004, International Conference on Intelligent Maintenance Systems, Arles, France.
15. Muller A., Marquez C. & Iung B. (2008) On the concept of e-maintenance: Review and current research. Reliability Engineering and System Safety, vol. 93, pp. 1165-1187. Amsterdam: Elsevier.
16. Holmberg K., Helle A. & Halme J. (2005) Prognostics for industrial machinery availability. POHTO 2005, International Seminar on Maintenance, Condition Monitoring and Diagnostics, Oulu, Finland.
17. Komatsoulis G.A., Warzel D.B., Hartel F.W., Shanbhag K., Chilukuri R., Fragoso G., de Coronado S., Reeves D.M., Hadfield J.B., Ludet C. & Covitz P.A. (2008) caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability. Journal of Biomedical Informatics.
18. Heiler S. (1995) Semantic Interoperability. ACM Computing Surveys (CSUR).
19. Mao M. (2008) Ontology mapping: Towards semantic interoperability in distributed and heterogeneous environments. PhD thesis, University of Pittsburgh.
20. Park J. & Ram S. (2004) Information System Interoperability: What Lies Beneath? ACM Transactions on Information Systems, vol. 22, no. 4.
21. Baïna S., Panetto H. & Benali K. (2006) Apport de l'approche MDA pour une interopérabilité sémantique. Interopérabilité des systèmes d'information d'entreprise, Processus d'entreprise et SI, RSTI-ISI, pp. 11-29.
22. Rahm E. & Bernstein P.A. (2001) A survey of approaches to automatic schema matching. The VLDB Journal, vol. 10, no. 4, pp. 334-350.
23. Halevy A. & Madhavan J. (2003) Composing mappings among data sources. Proceedings of the Conference on Very Large Data Bases, pp. 572-583, Berlin, Germany.
24. Fauvet M.C. & Baina S. (2001) Evaluation coopérative de requêtes sur des données semi-structurées distribuées. Proceedings of Information Systems Engineering.
25. Maedche A. & Staab S. (2000) Semi-automatic engineering of ontologies from texts. Proceedings of the 12th International Conference on Software Engineering and Knowledge Engineering (SEKE 2000), pp. 231-239, USA.
26. Halevy A., Ives Z., Suciu D. & Tatarinov I. (2005) Schema mediation for large-scale semantic data sharing. The VLDB Journal, vol. 14, no. 1.
27. Tannenbaum A. (1994) Repositories: potential to reshape development environment. Application Development Trends.
28. Chen D., Doumeingts G. & Vernadat F. (2008) Architectures for enterprise integration and interoperability: Past, present and future. Computers in Industry, vol. 59, pp. 647-659.
29. Mellor S.J., Scott K., Uhl A. & Weise D. (2002) Lecture Notes in Computer Science.
30. ISO 14258 (1999) Concepts and Rules for Enterprise Models. Industrial Automation Systems, ISO TC184/SC5/WG1.
31. Yang Q.Z. & Zhang Y. (2006) Semantic interoperability in building design: Methods and tools. Computer-Aided Design, vol. 38, pp. 1099-1112. Amsterdam: Elsevier.
32. Guarino N. (1998) Formal ontology and information systems. IOS Press.
33. W3C OWL Web Ontology Language Overview, http://www.w3.org/TR/2003/PR-owl-features-20031215/; 2005 [last accessed 10.05].
34. Cranefield S.J.S. & Purvis M.K. (1999) UML as an ontology modelling language. Proceedings of the Workshop on Intelligent Information Integration, 16th International Joint Conference on Artificial Intelligence (IJCAI-99).
35. Kramer I. (2003) Proteus: Modélisation terminologique. Technical report, INRIA, France.
36. Kitchenham B., Travassos G., Von Mayrhauser A., Niessink F., Schneidewind N., Singer J., Takada S., Vehvilainen R. & Yang H. (1999) Towards an Ontology of Software Maintenance. Journal of Software Maintenance: Research and Practice, vol. 11.
37. Ruiz F., Vizcaino A., Piattini M. & García F. (2004) An ontology for the management of software maintenance projects. International Journal of Software Engineering and Knowledge Engineering.
38. Tautz C. & von Wangenheim C.G. (1998) REFSENO: A Representation Formalism for Software Engineering Ontologies. Report, Fraunhofer Institute for Experimental Software Engineering.
39. Matsokis A. & Kiritsis D. (2009) An Ontology-based Approach for Product Lifecycle Management. Computers in Industry, Special Issue: Semantic Web Computing in Industry. In press.
40. PROMISE (2008) FP6 project. www.promise.no.
41. Kiritsis D., Bufardi A. & Xirouchakis P. (2003) Research issues on product lifecycle management and information tracking using smart embedded systems. Advanced Engineering Informatics, vol. 17, no. 3-4, pp. 189-202.
42. Rasovska I., Chebel-Morello B. & Zerhouni N. (2004) A conceptual model of maintenance process in unified modeling language. Proceedings of the 11th IFAC Symposium on Information Control Problems in Manufacturing (INCOM 2004).
43. Bézivin J. (2000) De la programmation par objets à la modélisation par ontologie. Journal of Ingénierie des connaissances.
44. Charlet J., Bachimont B., Bouaud J. & Zweigenbaum P. (1996) Ontologie et réutilisabilité : expérience et discussion. In Aussenac-Gilles N., Laublet P. & Reynaud C. (Eds.), Acquisition et ingénierie des connaissances : tendances actuelles, chapter 4, pp. 69-87. Cépaduès-Éditions.
45. Uschold M. & Gruninger M. (1996) Ontologies: Principles, methods and applications. Knowledge Engineering Review.
46. Uschold M. (1998) Knowledge Level Modelling: Concepts and Terminology. The Knowledge Engineering Review, vol. 13, no. 1.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
INFORMATION AND OPERATIONAL TECHNOLOGIES NEXUS FOR ASSET LIFECYCLE MANAGEMENT
Andy Koronios a,b, Abrar Haider a,b, Kristian Steenstrup a,c
a CRC for Integrated Engineering Asset Management, Brisbane, Australia
b School of Computer and Information Science, University of South Australia, Mawson Lakes Campus, SA 5095, Australia
c Gartner Inc.
Contemporary enterprises rely on accurate and complete information to make optimal decisions. To do so, they must have the ability to harvest information from every repository that can supply the information needed for good decisions. Asset managing organisations have in recent times moved towards integrating many of their information systems but have, in most cases, focused on the business-process-enabling properties of information technologies, and have tended to overlook the role of these technologies in informing the strategic business orientation. In the same vein, asset managing organisations consider IT a business support area that exists to support business processes or operational technologies to ensure the smooth functioning of an asset. However, even these operational technologies are embedded in IT, and they generate information that is fed to various other operational systems and administrative legacy systems. The intertwined nature of operational and information technologies suggests that this information provides for the control of asset management tasks and also acts as an instrument for informed, quality decision support. This exposes the active and dynamic link between IT and corporate governance. The scope of IT governance should, thus, be extended to include the operational technologies so as to develop a unified view of information and operational technologies. This paper attempts to uncover the peculiarities and variances of the relationship between industry-specific operational technologies used for asset management and organisational use of mainstream IT applications for business execution. It puts forward the proposition that in order to achieve a high degree of data-driven decision making, particularly at the strategic level of the organisation, a confluence of information technology (IT), operational technology (OT) and information management technology (IM) needs to occur.
Key Words: Information technology, governance, operational technologies.

1 INTRODUCTION
During the last two decades, significant change has occurred in the way enterprises manage information to execute business processes, communicate and make better decisions. Indeed, many organisations have used information technologies to transform themselves and to create new business models and new business value. Information technologies (IT) for asset management are required to translate strategic objectives into action; align organisational infrastructure and resources with IT; provide integration of lifecycle processes; and inform asset and business strategy through value-added decision support. However, the fundamental element in achieving these objectives is the quality of alignment of the technological capabilities of IT with the organisational infrastructure, as well as their fit with the operational technologies (OT) used in the lifecycle management of assets. IT and OT are becoming inextricably intertwined: OT facilitate the running of the assets and are used to ensure system integrity and to meet the technical constraints of the system. OT includes control as well as management or supervisory systems, such as SCADA, EMS, or AGC. These systems not only provide the control of asset lifecycle tasks, but also contribute to effective asset management through the critical role that they have in decision making. However, even though OT owe a lot to IT for their smooth functioning, due to their specialised nature these technologies are not considered part of the IT infrastructure. This paper, therefore,
attempts to uncover the relationship between industry-specific OT used for asset management and organisational use of mainstream IT applications for asset lifecycle management. It starts with an analysis of the IT utilised for asset management, which is followed by a discussion of their relationship with OT in asset lifecycle management. The paper, thus, presents a framework for the IT-OT nexus.
2 ASSET MANAGEMENT
2.1 Scope of Asset Management
The scope of asset management activities extends from the establishment of an asset management policy and the identification of service level targets according to the expectations of stakeholders and regulatory/legal requirements, to the daily operation of assets aimed at meeting the defined levels of service. Asset managing organisations, therefore, are required to cope with a wide range of changes in the business environment; continuously reconfigure manufacturing resources so as to perform at accepted levels of service; and be able to adjust to change with modest consequences on time, effort, cost, and performance. Asset management can be classified into three levels, i.e. strategic, tactical, and operational (Figure 1). The strategic level is concerned with understanding the needs of stakeholders and market trends, and with linking the requirements thus generated to the optimum tactical and operational activities. The operational and tactical levels are underpinned by planning, decision support, monitoring, and review of each lifecycle stage to ensure availability, quality, and longevity of the asset’s service provision. The identification, assessment, and control of risk is a key focus at all levels of planning, with the results from this process providing inputs into the asset management strategy, policies, objectives, processes, plans, controls, and resource management.
[Figure 1 depicts the three levels of asset management. The strategic level covers AM goals and policies, strategic AM planning, ownership definition, purchasing, resource planning, human resources, engineering and finance; the tactical level covers contract management, work management, inventory control, maintenance management, reliability management, risk management, condition monitoring, location management, registry management and customer service; the operational level covers operation management. External factors include auditors, regulations, contractors, suppliers, external consultants, marketing, pressure groups, legislation, government agencies, business stakeholders and economic forecasts.]
Figure 1: Scope of Asset Management (Source [1])
2.2 Strategic Asset Management Planning
Asset management has evolved from the humble beginnings of maintaining plant machinery to an approach, executing a host of related functions, that is as important and essential as quality, reliability, and organisational efficiency [2]. Strategic asset planning typically has a 10-25 year horizon for financial planning purposes, although organisations may look well beyond this period in order to fully assess optimum lifecycle strategies [1]. Strategic asset planning translates legal and stakeholder requirements and expectations into service outcomes, thereby allowing for an overall long-term strategy to manage assets. The main constituents of the strategic planning process are:
a. the development of vision, mission and values statements which describe the long-term desired position of the organisation and the manner in which the organisation will conduct itself in achieving the same [3];
b. review of the operating environment, to ensure that all elements that affect the organisation’s activities have been considered. Such elements include corporate, community, environmental, financial, legislative, institutional and regulatory factors [4];
c. identification and evaluation of strategic options to achieve strategic goals arising from the vision and mission statements [5]; and
d. a clear statement of strategic direction, policies, risk management and desired outcomes [6].
Public sector organisations may give more weighting to environmental, social and economic factors in determining strategic goals, whereas private sector asset owners will typically place most emphasis on economic factors. However, the agreement on levels of service in terms of criteria such as quality, quantity, timeliness and cost provides the link between the strategic and tactical plans.
2.3 Tactical Asset Management Planning
Tactical planning involves the application of detailed asset management processes, procedures and standards to develop separate sub-plans that allocate resources (natural, physical, financial, etc.) to achieve strategic goals through meeting defined levels of service. Depending on an organisation’s purpose, tactical plans may have varying priorities; for example, owners of infrastructure assets are usually directly concerned with asset management plans and customer service plans, which then become an input into other tactical plans, such as the resource management plan. The fundamental aim of tactical asset management processes and procedures is to cost-effectively achieve the organisation’s strategic goals in the long term. These processes, procedures, and standards cover asset management activities such as:
a. setting asset management objectives, including technical and customer service levels, and regulatory and financial requirements [7];
b. operational controls, plans, and procedures [8];
c. managing asset management information systems and the information contained in them, such as asset attributes, condition, performance, capacity, lifecycle costs, maintenance history, etc. [9];
d. risk management [10];
e. decision making for optimisation of asset lifecycle management [10]; and
f. asset performance and condition assessments [11].
2.4 Asset Management Operational Planning
Operational plans generally comprise detailed implementation plans and information with a 1-3 year outlook. These plans typically provide organisational direction on an annual or biennial basis and are concerned with practical rather than visionary elements. Operational plans translate the priorities arising from tactical plans into practice in order to deliver cost-effective levels of service. According to IIMM [1], operational plans typically include aspects such as:
a. operational controls to ensure delivery of asset management policy, strategy, legal requirements, objectives and plans;
b. structure, authority and responsibilities for asset management;
c. staffing issues: training, awareness and competence;
d. consultation, communication and documentation to/from stakeholders and employees;
e. information and data control; and
f. emergency preparedness and response.
Asset lifecycle management involves a significant amount of acquisition, processing, and analysis of information that enables planning, design, construction, maintenance, rehabilitation, and disposal/refurbishment or replacement of assets. The complexity and increasingly entwined nature of asset management calls for the integration of cross-functional information. IT in asset management, therefore, has to provide for the control of asset lifecycle management tasks, as well as act as an instrument for decision support, for example in weighing the trade-offs between deferred maintenance and preventive maintenance, and between short-term fixes and long-term solutions. Thus, the most important function of IT in asset management is the bringing together of the
various lifecycle management functions, thereby allowing for an integrated view of the asset lifecycle. However, realisation of an integrated view of the asset lifecycle through IT requires appropriate hardware and software applications; quality, standardised, and interoperable information; an appropriate skill set among employees to process information; and a strategic fit between IT and asset lifecycle management processes.
3 SCOPE OF IT IN ASSET MANAGEMENT
In theory, IT in asset management has three major roles: firstly, IT is utilised in the collection, storage, and analysis of information spanning asset lifecycle processes; secondly, IT provides decision support capabilities through the analytic conclusions arrived at from analysis of data; and thirdly, IT provides an integrated view of asset management through the processing and communication of information, thereby providing the basis for asset management functional integration. According to Haider [12], the minimum requirement for asset management at the operational and tactical levels is to provide functionality that facilitates:
a. knowing what and where the assets are that the organisation owns and is responsible for;
b. knowing the condition of the assets;
c. establishing suitable maintenance, operational and renewal regimes to suit the assets and the level of service required of them by present and future customers;
d. reviewing maintenance practices;
e. implementing job/resources management;
f. improving risk management techniques;
g. identifying the true cost of operations and maintenance; and
h. optimising operational procedures.
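As a concrete illustration of several of the items above (knowing what and where the assets are, their condition, their maintenance regime, and the true cost of operations and maintenance), the following is a minimal sketch of an asset register. The class and field names (`Asset`, `AssetRegister`, `annual_om_cost`) are our own illustrative assumptions, not drawn from any particular asset management product.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    location: str            # what and where the assets are (item a)
    condition: float         # condition score, 0.0 (failed) to 1.0 (new) (item b)
    maintenance_regime: str  # e.g. "preventive" or "run-to-failure" (item c)
    annual_om_cost: float    # operations and maintenance cost (item g)

@dataclass
class AssetRegister:
    assets: list = field(default_factory=list)

    def add(self, asset: Asset) -> None:
        self.assets.append(asset)

    def below_condition(self, threshold: float) -> list:
        # flag assets whose condition poses a risk (items b and f)
        return [a for a in self.assets if a.condition < threshold]

    def total_om_cost(self) -> float:
        # the true cost of operations and maintenance across the base (item g)
        return sum(a.annual_om_cost for a in self.assets)

register = AssetRegister()
register.add(Asset("PUMP-01", "Station A", 0.35, "preventive", 12000.0))
register.add(Asset("PIPE-07", "Sector 3", 0.80, "run-to-failure", 3000.0))
at_risk = register.below_condition(0.5)
```

Even such a simple register shows why a shared, organisation-wide data structure is a precondition for the integration roles discussed above.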
In engineering enterprises, asset management strategy is often built around two principles, i.e., competitive concerns and decision concerns [13]. Competitive concerns set manufacturing/production goals, whereas decision concerns deal with the way these goals are to be met. IT provides for these concerns through support for value-added asset management, in terms of choices such as the selection of assets, their demand management, the support infrastructure to ensure smooth asset service provision, and process efficiency. Furthermore, these choices are also concerned with in-house or outsourcing preferences, so as to draw upon the expertise of third parties. IT not only aids in decision support for outsourcing of lifecycle processes to third parties, but also provides for the integration of extra-organisational processes with intra-organisational processes. Nevertheless, the primary expectation from IT at the strategic level is that of an integrated view of the asset lifecycle, such that informed choices can be made in terms of economic trade-offs and/or alternatives for the asset lifecycle in line with asset management goals, objectives, and the long-term profitability outlook of the organisation. However, according to IIMM [1], the minimum requirements for asset management at the strategic level are to aid senior management in:
a. predicting the future capital investments required to minimise failures by determining replacement costs;
b. assessing the financial viability of the organisation to meet costs through estimated revenue;
c. predicting the future capital investments required to prevent asset failure;
d. predicting the decay, mode of failure or reduction in the level of service of assets or their components, and the necessary rehabilitation/replacement programmes to maintain an acceptable level of service;
e. assessing the ability of the organisation to meet costs (renewal, maintenance, operations, administration and profits) through predicted revenue;
f. modelling what-if scenarios such as:
(i) technology change/obsolescence,
(ii) changing failure rates and the risks these pose to the organisation, and
(iii) alterations to renewal programmes and the likely effect on levels of service;
g. alteration to maintenance programmes and the likely effect on renewal costs; and
h. impacts of environmental (both physical and business) changes.
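The what-if scenarios above can be sketched with a very simple model. The constant-hazard survival assumption, the failure rates, and the unit replacement cost below are all invented for demonstration; an organisation would substitute its own forecasting models and figures.

```python
# Illustrative what-if sketch: how a changed failure rate (e.g. due to
# technology obsolescence) affects expected replacement spending over a
# planning horizon. A constant annual failure rate is assumed.

def expected_replacements(asset_count: int, annual_failure_rate: float,
                          years: int) -> float:
    """Expected cumulative failures over the horizon, constant hazard."""
    surviving = float(asset_count)
    failed = 0.0
    for _ in range(years):
        failures = surviving * annual_failure_rate
        failed += failures
        surviving -= failures
    return failed

def scenario_cost(asset_count: int, failure_rate: float,
                  years: int, unit_cost: float) -> float:
    return expected_replacements(asset_count, failure_rate, years) * unit_cost

# Base case versus an obsolescence scenario with a doubled failure rate.
base = scenario_cost(1000, 0.02, 10, 5000.0)
degraded = scenario_cost(1000, 0.04, 10, 5000.0)
```

Comparing `base` and `degraded` gives senior management a first-order view of the capital investment implications of a changed failure rate, in the spirit of items (a), (c) and (f) above.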
IT for asset management seeks to enhance the outputs of asset management processes through a bottom-up approach. This approach gathers and processes operational data for individual assets at the base level, and at a higher level provides a consolidated view of the entire asset base (Figure 2).
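The bottom-up approach can be sketched as a roll-up of per-asset operational records into a consolidated view. The record fields (`site`, `downtime_h`, `cost`) and the figures are illustrative assumptions only.

```python
from collections import defaultdict

# Base level: operational records gathered for individual assets.
operational_records = [
    {"asset": "PUMP-01", "site": "Station A", "downtime_h": 4.0, "cost": 900.0},
    {"asset": "PUMP-01", "site": "Station A", "downtime_h": 2.5, "cost": 400.0},
    {"asset": "PIPE-07", "site": "Sector 3",  "downtime_h": 12.0, "cost": 2200.0},
]

def consolidate(records):
    """Roll individual asset records up into a per-site consolidated view."""
    view = defaultdict(lambda: {"downtime_h": 0.0, "cost": 0.0})
    for r in records:
        view[r["site"]]["downtime_h"] += r["downtime_h"]
        view[r["site"]]["cost"] += r["cost"]
    return dict(view)

summary = consolidate(operational_records)
```

The same aggregation could be repeated at further levels (site to region, region to enterprise) to yield the executive-level view the paper describes.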
Figure 2 relates IT implementation concerns to desired asset management outputs at each level:
Strategic level. Concern: how must IT be implemented to provide an integrated view of the asset lifecycle? Desired output: an integrated view of asset lifecycle management information to facilitate strategic decision making at the executive level.
Tactical level. Concern: how must IT be implemented to meet the planning and control of asset lifecycle management? Desired output: fulfilling asset lifecycle planning and control requirements aimed at continuous asset availability, through performance analysis based on various dimensions of asset information such as design, operation, maintenance, financial, and risk assessment and management.
Operational level. Concern: how must IT be implemented to meet the operational requirements of assets? Desired output: aiding in and/or ensuring asset design, operation, condition monitoring, failure notifications, maintenance execution and resource allocation, and enabling other activities required for smooth asset operation.
Figure 2: Scope of IT for asset management (source [14])
At the operational and tactical levels, IT systems are required to provide the necessary support for planning and execution of core asset lifecycle processes. For example, at the design stage designers need to capture and process information such as asset configuration; asset and/or site layout design and schematic diagrams/drawings; asset bill of materials; analysis of maintainability and reliability design requirements; and failure modes, effects and criticality identification for each asset. Planning choices at this stage drive future asset behaviour; the minimum requirement laid on IT at this stage is therefore to provide the right information at the right time, such that informed choices can be made to ensure availability, reliability and quality of asset operation. An important aspect of the asset design stage is the supportability design that governs most of the later asset lifecycle stages. The crucial factor in carrying out these analyses is the availability and integration of information, such that the supportability of all facets of asset design and development, operation, maintenance, and retirement is fully recognised and defined. Nevertheless, effective asset management requires the lifecycle decision makers to identify the financial and non-financial risks posed to asset operation, their impact, and ways to mitigate those risks. IT for asset management not only has to provide standardised, quality information but also has to provide for the control of asset lifecycle processes. For example, the design of an asset has a direct impact on its operation. Operation, itself, is concerned with minimising the disturbances relating to production or service provision of an asset.
At this level, it is important that IT systems are capable of providing feedback to maintenance and design functions regarding factors such as asset performance; detection of manufacturing or production process defects; design defects; asset condition; and asset failure notifications. There are numerous IT systems employed at this stage that capture data from sensors and other field devices for diagnostic/prognostic systems, such as Supervisory Control and Data Acquisition (SCADA) systems, Computerised Maintenance Management Systems (CMMS), and Enterprise Asset Management systems. These systems further provide inputs to maintenance planning and execution. However, effective maintenance not only requires effective planning but also the availability of spares, maintenance expertise, work order generation, and other financial and non-financial support. This requires integration of the technical, administrative, and operational information of the asset lifecycle, such that timely, informed, and cost-effective choices can be made about the maintenance of an asset. For example, a typical water pump station in Australia is located away from major infrastructure and has a considerable length of pipeline assets that bring water from the source to the destination. The demand for water supply is continuous, twenty-four hours a day, seven days a week. Although the station may have an early warning system installed, maintenance labour at the water stations and along the pipeline is limited, and spares inventory is generally not held at each station. Therefore, it is important to continuously monitor asset operation (which in this case comprises equipment at the water station as well as the pipeline) in order to sense asset failures as soon as possible, preferably in their development stage. However, early fault detection is not of much use if it is not backed up with
the ready availability of spares and maintenance expertise. The expectations placed on the water station by its stakeholders are not just of continuous availability of operational assets, but also of the efficiency and reliability of support processes. IT systems, therefore, need to enable maintenance workflow execution as well as decision support, by enabling information manipulation on factors such as asset failure and wear patterns; maintenance work plan generation; maintenance scheduling and follow-up actions; asset shutdown scheduling; maintenance simulation; spares acquisition; testing after servicing/repair treatment; identification of asset design weaknesses; and asset operation cost-benefit analysis. An important measure of the effectiveness of IT, therefore, is the level of integration that it provides in bringing together the different functions of asset lifecycle management, as well as stakeholders such as business partners, customers, and regulatory agencies like environmental and government organisations.
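The remote pump-station scenario can be sketched as a simple rule: raise a work order only when a developing fault is sensed, and choose the action based on whether spares and a technician are actually available. The vibration threshold, tag names, and inventory figures below are invented for illustration, not taken from any real SCADA or CMMS configuration.

```python
# Hedged sketch of early fault detection backed by spares/crew information.

VIBRATION_LIMIT_MM_S = 7.1  # assumed alarm threshold for this example

spares_stock = {"PUMP-01": {"seal_kit": 1}}
technicians_on_call = {"Station A": True}

def early_fault(readings: list) -> bool:
    """Flag a developing fault from a short trend of vibration readings."""
    return len(readings) >= 3 and all(
        r > VIBRATION_LIMIT_MM_S for r in readings[-3:]
    )

def raise_work_order(asset: str, site: str, readings: list):
    """Return a work order dict if a fault is developing, else None."""
    if not early_fault(readings):
        return None
    spare_ok = spares_stock.get(asset, {}).get("seal_kit", 0) > 0
    tech_ok = technicians_on_call.get(site, False)
    return {
        "asset": asset,
        "action": "inspect_and_repair" if spare_ok and tech_ok
                  else "expedite_spares_and_crew",
    }

order = raise_work_order("PUMP-01", "Station A", [6.0, 7.4, 7.9, 8.3])
```

The point of the sketch is the integration: the detection logic (OT) is only useful because it is joined to spares and workforce information (IT), exactly the linkage the paragraph above argues for.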
4 ISLANDS OF INFORMATION ARE NO LONGER AN OPTION IN AM ORGANISATIONS
For too long, business units in organisations have been allowed to create pools of data that were at best not easily available to the rest of the organisation, and at worst the organisation was not even aware that such potentially valuable resources existed. This was promoted by many in the organisation as a means of exercising power and control. Operational technologies have been prime candidates for creating pools of data: since their nature is to gather performance data from various technological systems, generally in a real-time setting, the need for these data to be integrated with other business systems has not been evident. Yet such a situation prevents or hinders optimised management of the assets. IT departments have in the past done little to prevent such situations, creating significant barriers between themselves and the rest of the enterprise. Furthermore, IT has been viewed, at least until recently, as an enabler and infrastructure provider to the business function rather than a strategic and indeed transformational resource. Only recently have many organisations considered an enterprise-wide view of IT and its strategic impact. Much of the effort in enterprises is devoted to managing physical assets and human resources. Yet another valuable resource, the data that is captured and stored in organisational repositories, is generally not given the attention that it deserves as a strategic component of the enterprise. In addition to information technology and operational technologies there is, therefore, a third discipline within many organisations with its own language, technologies and folksonomies. This is the area of records, content and knowledge management. Although it would be reasonable to assume that these organisational functions would also integrate seamlessly within IT, this is not often the case.
In the past, such functions were responsible for the management of information in physical form such as paper records, maps and design diagrams, microfilm, and photographs, as well as, more recently, multimedia resources. Our research has shown very little integration between this function and the information and operational technologies. Thus a holistic, enterprise-wide information lifecycle management and governance scheme is not typical in most asset management organisations. Consider the concept of ‘autonomic logistics’ as proposed by Hess [15]. Autonomic logistics applied to airborne vehicles refers to the harmonisation of advanced technology embedded on board a jet fighter with the automatic transmission of information regarding the condition of systems and components to the related logistics suppliers as well as trained technicians, so as to ensure that the necessary parts and technical capability are available to fix the problems as soon as the jet fighter arrives at base, thereby minimising the time for which the asset is on the ground. This is a very natural objective. However, how can it be realised? If the engineering systems do not integrate easily with the logistics information systems and the work management systems, such a vision cannot be realised. Equally, strategic decisions about maximising the life of an asset, minimising its total cost of ownership and extracting the most value from an asset are all difficult, if not impossible, unless all the available information is integrated to maximise the value of the data upon which such decisions can be made. The corporate accounting scandals of Enron, Peregrine Systems, WorldCom, and others provided businesses worldwide with valuable lessons about governance, and government legislation such as the Sarbanes-Oxley Act of 2002 in the United States and Basel II in Europe have ensured that the minds of CEOs, senior managers and the boards of enterprises are focused on the need for good governance and accounting practices.
These and related laws hold office holders personally responsible for the accuracy of information in financial and other reporting. IT governance has in recent years gained significant status as an issue which CEOs and CIOs must address in order to achieve high levels of organisational governance. Apart from regulatory compliance and risk reduction responsibilities, however, good IT governance delivers significant benefits to the enterprise; indeed, Weill & Ross [16] argue that significant IT business value directly results from effective IT governance. It is thus critical for asset management enterprises to take a holistic, enterprise-wide view of data and its capture, storage, processing, and flow within the enterprise. Steenstrup [17] suggests that the separation of IT and OT is still a major issue in engineering asset management organisations and recommends that starting small in “self-contained initial projects” is a good way forward. This is good advice; however, greater guidance by senior management is required to bridge the islands of information that exist not only in enterprise IT and engineering (OT) but also in the content and records management functions, where unstructured data are usually captured, archived and forgotten.
An IT governance model, shown in Figure 3 below, and an architected information lifecycle management strategy must be applied over the top of both enterprise IT and engineering IT to ensure that full integration of information takes place in the asset management organisation.
Figure 3: An IT Governance Framework for EAM Information Integration
5 CONCLUSION
Information can deliver enormous value to the engineering asset management enterprise. For this to occur, however, greater cognisance needs to be given to how information is captured and harvested, and how it is stored and managed, so that the right information finds the right users and is integrated in such a way that business insights and strategic decisions can be made on the basis of all the information available at the operational, tactical and strategic levels of management. Such a vision cannot be realised if there exists a chasm between the enterprise information technologies and the engineering operational technologies. Furthermore, the enterprise content and other unstructured intellectual assets held by their custodians must also be integrated for easy access within the enterprise. This paper suggests that the most important initiative senior management can take towards achieving this vision is to introduce effective IT governance mechanisms throughout the organisation. Such committees, change control boards, budgeting processes and so on must include all the elements of managing the information and knowledge assets in the enterprise: the enterprise information technologies, the engineering operational technologies and the information/content management systems.
6 REFERENCES
1. IIMM 2006, ‘International Infrastructure Management Manual’, Association of Local Government Engineering NZ Inc, National Asset Management Steering Group, Thames, New Zealand, ISBN 0-473-10685-X.
2. Narain, R, Yadav, R, Sarkis, J, & Cordeiro, J 2000, ‘The strategic implications of flexibility in manufacturing systems’, International Journal of Agile Management Systems, Vol. 2, No. 3, pp. 202-213.
3. Alexander, K 2003, ‘A strategy for facilities management’, Facilities, Vol. 21, No. 11/12, pp. 269-274.
4. Inman, RA 2002, ‘Implications of environmental management for operations management’, Production Planning and Control, Vol. 13, No. 1, pp. 47-55.
5. Boyle, TA 2006, ‘Towards best management practices for implementing manufacturing flexibility’, Journal of Manufacturing Technology Management, Vol. 17, No. 1, pp. 6-21.
6. Balch, WF 1994, ‘An Integrated Approach to Property and Facilities Management’, Facilities, Vol. 12, No. 1, pp. 17-22.
7. El Hayek, M, Voorthuysen, EV, & Kelly, DW 2005, ‘Optimizing life cycle cost of complex machinery with rotable modules using simulation’, Journal of Quality in Maintenance Engineering, Vol. 11, No. 4, pp. 333-347.
8. Taskinen, T, & Smeds, R 1999, ‘Measuring change project management in manufacturing’, International Journal of Operations and Production Management, Vol. 19, No. 11, pp. 1168-1187.
9. Gottschalk, P 2006, ‘Information systems in value configurations’, Industrial Management and Data Systems, Vol. 106, No. 7, pp. 1060-1070.
10. Murthy, DNP, Atrens, A, & Eccleston, JA 2002, ‘Strategic maintenance management’, Journal of Quality in Maintenance Engineering, Vol. 8, No. 4, pp. 287-305.
11. Sherwin, D 2000, ‘A review of overall models for maintenance management’, Journal of Quality in Maintenance Engineering, Vol. 6, No. 3, pp. 138-164.
12. Haider, A 2007, Information Systems Based Engineering Asset Management Evaluation: Operational Interpretations, PhD thesis, University of South Australia, Adelaide, Australia.
13. Rudberg, M 2002, Manufacturing Strategy: Linking Competitive Priorities, Decision Categories and Manufacturing Networks, PROFIL 17, Linköping Institute of Technology, Linköping, Sweden.
14. Haider, A 2009, ‘Value Maximisation from Information Technology in Asset Management – A Cultural Study’, 2009 International Conference of Maintenance Societies (ICOMS), 2-4 June, Sydney, Australia.
15. Hess, A 2007, ‘Presentation to the CIEAM CRC Conference’, June 2007, Australia.
16. Weill, P & Ross, JW 2004, IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Harvard Business School Publishing, USA.
17. Steenstrup, K 2008, ‘IT and OT: Intersection & Collaboration’, Gartner Industry Research, ID No. G00161537, USA.
Acknowledgement
The authors acknowledge the support of the CRC for Integrated Engineering Asset Management in conducting this research project.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
AN ADVANCED METHOD FOR TIME TREATMENT IN PRODUCT LIFECYCLE MANAGEMENT MODELS
Matsokis, A a and Kiritsis, D a
a Swiss Federal Institute of Technology in Lausanne (EPFL), STI-IGM-LICP, ME A1 380, Station 9, Lausanne, 1015, Switzerland.
Time is the only fundamental dimension which exists along the entire life of an artefact, and it affects all artefacts and their qualities. Most commonly in PLM models, time is an attribute in parts such as “activities” and “events”, or is a separate part of the model (“four-dimensional models”) to which other parts are associated through relationships. In this work a generic idea has been developed for better treatment of time in PLM models. The concept is that time should not be one part of the model; rather, it should be the basis of the model, and all other elements should be parts of it. Thus, we introduce the “Duration of Time concept”. According to this concept, all aspects and elements of a model are parts of time. A case study demonstrates the applicability and the advantages of the concept in comparison to existing methodologies.
Key Words: Product Lifecycle Management (PLM), Asset Lifecycle Management (ALM), Middle of Life (MOL), Interoperability
1 INTRODUCTION
The aim of this work is to introduce a new methodology for improving today’s ALM and PLM systems in the aspects of data handling (visibility and integration) as well as system interoperability. Visibility of information between the different levels of abstraction in different information and data management systems is not always available, and where it is achieved it requires a lot of effort due to the complexity of the systems (for the sake of simplicity, in this document the term “systems” means “information and data systems”). These systems either are different from each other or sit under the same commercial “ALM” system. In both cases it is very difficult to retrieve and synchronise the data of all phases (Beginning of Life (BOL), Middle of Life (MOL) and End of Life (EOL)) after the product exits its BOL phase (design and production). Furthermore, data is collected only for some pre-defined products and components. However, experience has shown that the requirements for the types of collected data change depending on the use of each part of the model; hence, data are missing and impossible to recover when needed in later stages. This leads to stored data which, when used as input to decision making, is incomplete, and decision support is therefore unsatisfactory. Time is the only fundamental dimension which exists along the entire life of an individual (including materials and physical products), and it affects all individuals and their qualities. Individuals existed in the past and will exist in the future, regardless of whether they only currently exist in our model. Therefore, we introduce a method for system modelling which utilises this unique advantage of time. Time in this context is used with its generic meaning. Time is considered the fourth dimension in several sciences, and Sider in his work “Four Dimensionalism” [1] provides a good description of the 4D paradigm.
Individuals exist in a manifold of four dimensions, three of space and one of time, and therefore they have both temporal parts and spatial parts. The notion of time is not treated with the appropriate level of quality, and it is underestimated in today’s methodologies. This is a key issue which makes systems more complex and causes a significant loss of valuable data/information about products, processes, etc., when attempting to re-use this data/information for supporting decisions in the different phases of the lifecycle. Methods and ideas for loading time data into parts of models have led to solutions such as the “time stamp” and the “time interval”. Time data are stored only in the parts of the model they were designed for. Most commonly, time is an attribute of these parts, such as in “activities” (starting and finishing times) and in “events” (points in time), or is a separate part of the model (“four-dimensional models”) to which other parts are associated through relationships. Thus, time data do not cover the whole system for the whole lifecycle, which leads to many complex problems when it comes to information visibility. This is because there are many different systems which are at a different
level of abstraction regarding the individual target asset. This has led to models which are incomplete, complicated to manage and application specific. Innovative ideas for time treatment are necessary to change the philosophy of the models and simplify them, especially in the ever more competitive global market. The structure of the paper is as follows. Section 2 describes briefly previous works dealing with time in engineering. Section 3 describes the proposed methodology for future models. Section 4 demonstrates a case study on a maintenance department for locomotives.
2 BACKGROUND LITERATURE
The importance of time in the field of engineering has been noted in several works. Part 2 of ISO 15926 [2] uses time as the fourth dimension. It describes actual individuals (including physical objects) which actually exist or have actually existed in the past; possible individuals which may have existed in the past or may possibly exist in the future; and hypothetical individuals which have no existence in the past or future. West [3] describes the need for tracking the state and status of an individual over time (including which physical product the individual belongs to or is part of). The author also describes how this need inspired the development of ISO 15926-2 and recommends, as a solution, the use of international standards combined with ontologies. Batres et al. [4] describe their effort to develop an ontology based on ISO 15926, analyse part 2 and briefly show how time is used to demonstrate the continuity of functionality of the parts. Roddick et al. [5] discuss the significance of time in spatio-temporal data mining systems and describe the future research that needs to be carried out. Zhang et al. [6] suggest a model for the lifecycle of infrastructure systems which facilitates spatio-temporal data. Roddick et al. [7], in their bibliography survey, point out the value of investigating temporal, spatial and spatio-temporal data for future knowledge generation. In the PROMISE [8] semantic object model, the continuity of parts over time is also considered important and is stored in the “part of” class. Jun et al. [9] developed a time-centric ontology model for product lifecycle meta-data supporting the concept of closed-loop PLM. In “four dimensional models”, time attributes are included in a separate part of the model (Date_Time class) to which other parts (not necessarily all parts) are associated through relationships, as shown in Figure 1.
Such systems become complex due to the large number of relationships between Date_Time class and the other parts of the model. Furthermore, time data is not being collected about the whole system for the whole life cycle. The latter occurs either in cases where not all parts are connected to the Date_Time class or in cases where the architecture of the system changes along the life cycle and the relationships to the Date_Time class are altered/affected.
Figure 1. Schematic representation of a four dimensional model.
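As a concrete (and purely illustrative) sketch of the pattern in Figure 1, the snippet below models time as a separate Date_Time class to which every other part of the model must hold an explicit relationship. All class and attribute names here are hypothetical, not taken from any particular standard or tool; the point is that parts left unconnected to the time class have no recoverable time history.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DateTimeNode:          # plays the role of the separate Date_Time class
    start: datetime
    end: datetime

@dataclass
class Activity:
    name: str
    timing: DateTimeNode     # explicit relationship to the time class

@dataclass
class Resource:
    name: str
    timing: Optional[DateTimeNode] = None  # parts left unconnected stay time-blind

activity = Activity("replace brake pads",
                    DateTimeNode(datetime(2009, 9, 28, 8, 0),
                                 datetime(2009, 9, 28, 9, 30)))
resource = Resource("Machine A")  # no link to Date_Time: its history is lost
```

Every new class needs its own relationship to the time class, which is exactly the proliferation of relationships criticised in the text.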
A significant number of models which do not claim to be four dimensional include time attributes in those parts of the model where the model designer considered time necessary. Most commonly, time attributes appear in the parts of the model describing the “process”, the “activity” (with starting time, finishing time and duration) and the “event” (with points in time or time stamps). An example is shown in Figure 2. These types of models face data integration and interoperability issues and are mostly developed to describe specific applications. Moreover, time data do not cover the whole system, which has consequences in later stages, when time elements are required (e.g. feedback from maintenance to design) but were never collected and are therefore unavailable.
Figure 2. Schematic representation of a model with time/date attributes distributed in various classes.
3 PROPOSED METHODOLOGY
The aim of the proposed methodology is to improve today’s ALM and PLM systems by changing the way time is used in them. The importance of time in ALM and PLM was noted in the previous section. Time, however, has qualities which make it special among all attributes. It is the only fundamental dimension which objectively exists along the entire life cycle of all individuals (including materials and physical products). Time exists in our everyday life at different levels: the duration of a task, of a coffee break, of a phone call, of studies, the age of a human, the Roman era, the duration of a trip, the duration of a maintenance activity, the working hours of a machine, and so on. Time also has granularity, which makes it easier for humans to comprehend depending on the application; for instance, it is easier to say that I signed a five-year contract than to say that I signed a 43800-hour contract. In this way time affects all aspects of individuals and their qualities: people get older (their character changes with experience, their health changes, etc.) and objects wear out; all need some type of maintenance. Furthermore, time is simple, comprehensible and objective, and therefore application independent. A duration of 5 years, for instance, is understood by all systems and humans, although it may have a different meaning and importance when referring to the age of a human or of a machine. If a person has been employed by company A for 5 years, it is not really important for him to know that the company has a history of 150 years: from the company’s point of view the individual exists only for a small fraction of its life, whereas for the individual those 5 years are an important part of his 35 years of work. For assets, time carries the meaning of useful life, working hours, maintenance intervals, etc.
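The granularity point above can be made concrete with a small sketch: one canonical duration, rendered at whatever granularity suits the application. The conversion constant is the rounded one implied by the example in the text (1 year = 8760 hours, so 5 years = 43800 hours).

```python
# One canonical representation of a duration, rendered at different
# granularities. The constant is the rounded value used in the text.
HOURS_PER_YEAR = 8760

def as_hours(years: float) -> float:
    """Render a duration given in years at hour granularity."""
    return years * HOURS_PER_YEAR

def as_years(hours: float) -> float:
    """Render a duration given in hours at year granularity."""
    return hours / HOURS_PER_YEAR

contract_hours = as_hours(5)   # the 43800-hour contract from the example
```

Both renderings denote the same objective duration; only the human-facing granularity differs, which is why time can serve as an application-independent common denominator.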
Similarly, a used component of a machine had its time in a previous machine and now has a life in the current machine. Its lifetime history would be the following: the duration of MOL A in machine A (during which it performs task A1, task A2, etc., with durations A1, A2, etc.), the duration of re-manufacture, and the duration of MOL B in machine B (during which it performs task B1, task B2, etc., with durations B1, B2, etc.). The component may of course have an unlimited number of future uses. In this way, time describes the continuity of the component’s functionality. Although time attributes exist in various parts of today’s systems, there are no systems which are based on time. These characteristics of time were the motivation for selecting it as the basis of our methodology for model development, the “Duration of Time concept”. The concept introduces the idea of seeing all aspects and elements of a model as parts of time, and it provides flexibility, application independence and simplicity. Time exists naturally in everything, but we do not always perceive it because our view is too “narrow” to see the big picture: we focus only on the small part which affects us directly and treat time, in its generic meaning, as fixed. This work introduces the “Duration of Time concept” for improving today’s ALM and PLM systems in the domains of data visibility, data integration and system interoperability. The main element of the concept, used for improving system performance, is time. The idea is that time should not be one part of the model; it should be the basis of the model, and all other elements should be parts of it. The “Duration of Time concept” has unique advantages over existing concepts, which stem from the characteristics of time. Time is objective and may be used as a guiding basis for achieving data integration and system interoperability.
Therefore, systems built on this concept take advantage of the characteristics of time and, combined with semantics, provide data visibility, data integration and system interoperability. Time is used as a basis providing a first step towards system-to-system visibility and common understanding. Two different time-based systems will certainly have their time attributes in common and are therefore synchronised, even though they may have been extended and used differently. The method is easy to apply to existing models by making a “duration of time” class a super-class of all classes of the model. This class provides the unified time framework for the entire system. A schema of this model is shown in Figure 3. The concept is protected by a provisional patent application.
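A minimal sketch of the super-class idea, assuming a simple object model rather than a full ontology: every model class inherits from a single DurationOfTime base that supplies the unified time framework. Class names below are illustrative, not the names used in the patented model.

```python
from datetime import datetime, timedelta
from typing import Optional

class DurationOfTime:
    """Unified time framework inherited by every part of the model."""
    def __init__(self, start: datetime, end: Optional[datetime] = None):
        self.start = start
        self.end = end                      # None while the individual still exists

    @property
    def duration(self) -> Optional[timedelta]:
        return None if self.end is None else self.end - self.start

# Every element of the model *is* a duration of time, rather than merely
# pointing at a separate time class.
class PhysicalProduct(DurationOfTime): ...
class MaintenanceActivity(DurationOfTime): ...
class Document(DurationOfTime): ...

job = MaintenanceActivity(datetime(2009, 9, 28, 8, 0),
                          datetime(2009, 9, 28, 8, 45))
```

Two independently extended models built on such a base still share the same inherited time attributes, which is the property that makes synchronising them straightforward.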
Figure 3. Schematic Duration of Time representation example.
4 CASE STUDY
This case study demonstrates an application of the duration of time concept to an ALM/PLM ontology model, highlighting the capabilities of the final model. The model used is based on the Product Data and Knowledge Management Semantic Object Model (SOM) developed by Matsokis et al. [10], in which the duration of time concept has been implemented. The SOM has been made a subclass of the duration of time class and has been extended to facilitate the case study. It describes the maintenance activities of locomotives and also includes some parts of the model, such as documents, which engineers are not used to treating from the time point of view. The case study describes the application of the model by an authorised locomotive maintenance provider (MP). The MP is specialised in one model/type of locomotive. The MP has two maintenance platforms, Platform A and Platform B; each has one machine to aid maintenance, Machine A and Machine B, and one mechanic who performs the maintenance, Mechanic A and Mechanic B; each mechanic uses one tool-box, Tool-Box A and Tool-Box B; and there are 5 documents: Document 1, Document 2, Document 3, Document 4 and Document 5. Document 1 contains the field data from the locomotive and is updated each time the locomotive visits an authorised MP (one per locomotive; for this reason we have Document 1a, Document 1b, etc.). Document 2 contains the maintenance history of the locomotive and is updated each time the locomotive enters maintenance (one document per locomotive, with a, b, c and d, similarly to Document 1). Document 3 contains the manufacturer’s guidelines for performing maintenance according to the working hours of the locomotive or the period of time passed since the last maintenance. Document 4 contains the manufacturer’s instructions, with schemas, for removing and replacing parts. Document 5 contains the information about the stock of spare parts.
To facilitate and better categorise the data for this application, the model was extended accordingly. The development process was:
• The class Duration of Time was made the superclass of the model.
• A time framework for the existing PLM ontology was developed. This framework holds the only “time” properties of the ontology (start_date_time, end_date_time, duration). Thus, all classes and subclasses of the ontology share the same “time” framework.
• A central reference time (CET) was chosen. In this way, misunderstandings concerning time in communication between different agents around the globe are avoided.
• The model was extended to facilitate the case study.
• Instances are stored for every physical product, activity, event, process, resource, etc. as necessary.
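The central-reference-time step above can be sketched as follows: every timestamp entering the model is normalised to a single reference zone before being stored. For simplicity this sketch models CET as a fixed UTC+1 offset; a real system would use a proper IANA time zone with daylight-saving rules. All values are invented for illustration.

```python
from datetime import datetime, timezone, timedelta

# CET modelled as a fixed UTC+1 offset (a simplification; real CET/CEST
# alternates with daylight saving time).
CET = timezone(timedelta(hours=1), name="CET")

def to_reference_time(local: datetime) -> datetime:
    """Convert an aware timestamp to the model's CET reference frame."""
    return local.astimezone(CET)

# An agent in Tokyo (UTC+9) reports a maintenance event at 16:00 local time.
tokyo = datetime(2009, 9, 28, 16, 0, tzinfo=timezone(timedelta(hours=9)))
ref = to_reference_time(tokyo)   # stored as 08:00 CET on the same day
```

Storing only reference-frame timestamps means that two agents comparing durations never need to know each other's local zones.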
The final model is shown in Figure 4. Only three locomotives are involved in the case study: Locomotive No1, No2 and No3.
Figure 4. Ontology model extended with necessary classes
4.1 System Analysis and Functionality
In this scenario locomotives visit the MP by appointment. The duration of time for each resource, activity, etc. is shown in Figure 5. The colours blue, red and green in the rows refer to Locomotives No1, No2 and No3 respectively and show for which locomotive, and for how long, each resource is used. This could also refer to the future (a daily/weekly/monthly schedule according to appointments). The uncoloured cells of a row represent the time during which the corresponding resource is idle. Each column represents 5 minutes; these periods could equally be of any required granularity, such as years, months, days, hours, minutes, seconds or milliseconds. In Figure 5, Locomotive No1 arrives at the service department and Mechanic A is responsible for it. He updates Document 1a with field data from the locomotive’s on-board computer unit and checks Document 2a, which contains its maintenance history. Then, according to the status of the locomotive, he reads the manufacturer’s guidelines for this type of locomotive to see the maintenance activities to be performed and decides to replace some parts. He checks Document 5 to see whether any are in local stock and Document 4 for the replacement instructions. The activities for Locomotives No2 and No3, shown in Figure 5, are similar (for Locomotive No2 there is no need to remove/replace parts, and Locomotive No3 arranges an appointment outside the schedule). If the MP provided multiple maintenance sites, Locomotive No3 would have chosen the closest site with the soonest availability. Documents, like all resources, are seen as duration of time elements which appear in the system when they are used.
Figure 5. MOL Locomotives case study as seen from the “duration of time” Point of view with Queries
Using the duration of time approach provides engineers with all the necessary information about the state of each resource at every moment. Engineers can pose Which-queries such as “Which machines are available in this time slot?”, equivalent to “Who is in standby status at this time?”, which return all the non-active instances for that duration of time, or Availability-queries such as “Is Mechanic A available at a certain time?” or “When and for how long is a certain resource (mechanic, machine or document) available?”, which return instances showing availability. This information is used for the best management of the resources. Moreover, the system also provides the duration of time for which a locomotive uses each resource. Several examples of such queries are shown in Figure 5. First, a Which-query applied to the machines is shown, asking “Which machine(s) is (are) available right now (now = 8:40 AM) and for how long?”; it returns the idle instance(s) of the available resources, or nothing if no resource is available. Second, the query “Is Mechanic B available right now (now = 6:30 AM)?” is shown. This query applies only to a certain resource instance (it could be more generic, such as “Who is available at this time?”) and returns either the idle instance if the resource is available or nothing if it is not. Furthermore, Figure 5 shows an example of the query “When and for how long is Machine A available until 11 AM?”, which applies to all instances of Machine A and returns its idle instances. Finally, an example of the query “When and for how long is Document 3 used?” is shown, returning all the time slots during which Document 3 is being used.
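The Which- and Availability-queries described above can be sketched over the same 5-minute slots as Figure 5. The schedule data below is invented for illustration; in the real model these would be “duration of time” instances rather than plain slot sets.

```python
# Invented schedule: for each resource, the slot indices during which it is
# used by some locomotive. Uncoloured cells in Figure 5 correspond to slots
# absent from these sets.
busy = {
    "Machine A":  {2, 3, 4},
    "Machine B":  {3, 4, 5, 6},
    "Mechanic A": {0, 1, 2, 3, 4},
}
ALL_SLOTS = set(range(12))   # e.g. a one-hour window of 5-minute slots

def which_available(slot):
    """Which-query: the resources idle in the given slot."""
    return sorted(name for name, used in busy.items() if slot not in used)

def idle_slots(name):
    """Availability-query: when (and hence for how long) a resource is idle."""
    return sorted(ALL_SLOTS - busy[name])
```

For example, `which_available(5)` returns Machine A and Mechanic A but not Machine B, and `idle_slots("Machine A")` lists every slot in which Machine A could take new work.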
4.2 Outcome of the Case Study
This case study has demonstrated that the initial model becomes simpler with the implementation of the Duration of Time concept, since the time attributes are unified and, when the model is extended, these attributes are inherited. A number of applications have shown that the system provides complete data visibility and therefore enables inter-OEM/supplier co-operation for better exploitation of resources. From this perspective one can have an overview of all documents, resources, etc. of all systems. Using queries such as those above, engineers are provided with a complete overview of the time slots and are supported in decision making for the optimal management of resources, activities, agents and processes. Moreover, the entire model is now described by the Duration of Time concept while keeping its previous functionalities. Finally, through time it is very simple to track system or data changes and thus keep track of all past states of all parts of the system.
5 CONCLUSION
Time is objective and exists for all living beings and materials. The Duration of Time concept has unique advantages over existing concepts precisely because it exploits these characteristics. According to this concept, time is used as the basis of the model, allowing all parts of the system to be seen from the time point of view. A system which implements the Duration of Time concept provides flexibility, application independence and simplicity, since time is objective and can be used as a guiding basis for achieving data integration and system interoperability. The case study has shown that the concept is easy to apply to existing systems, that it makes systems simpler, that the system is described through durations of time, and that system or data changes are tracked through time. Future work includes research on the extent to which time data on all system parts supports vertical visibility across the different systems at the different levels, and therefore system interoperability and data integration under multi-system circumstances; application of the model to more existing ALM/PLM systems; and further use of the concept in combination with semantics to provide benefits for industry.
6 REFERENCES
1. Sider T. (2001) Four-dimensionalism: An Ontology of Persistence and Time. Oxford University Press.
2. ISO 15926-2:2003 Integration of lifecycle data for process plant including oil and gas production facilities: Part 2 – Data model. http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=29557 (April 2009).
3. West M. (2004) Some industrial experiences in the development and use of ontologies. EKAW 2004 Workshop on Core Ontologies in Ontology Engineering, pp. 1-14.
4. Batres R, West M, Leal D, Price D, Masaki K, Shimada Y, Fuchino T, Naka Y. (2007) An upper ontology based on ISO 15926. Computers and Chemical Engineering, 31 (5-6), pp. 519-534.
5. Roddick JF, Egenhofer MJ, Hoel E, Papadias D, Salzberg B. (2004) Spatial, temporal and spatio-temporal databases: hot issues and directions for PhD research. SIGMOD Record, 33 (2), pp. 126-131.
6. Zhang C, Hammad A. (2005) Spatio-temporal issues in infrastructure lifecycle management systems. Proceedings, 1st Annual Conference, Canadian Society for Civil Engineering, Toronto, pp. FR-131-1-FR-131-10.
7. Roddick JF, Hornsby K, Spiliopoulou M. (2000) An updated bibliography of temporal, spatial, and spatio-temporal data mining research. Temporal, Spatial, and Spatio-temporal Data Mining: First International Workshop, TSDM 2000, Lyon, France, pp. 147-163. Berlin/Heidelberg: Springer.
8. PROMISE Research Deliverable 9.2: http://www.promise.no/index.php?c=77&kat=Research&p=13|, (April 2009).
9. Jun H-B, Kiritsis D, Xirouchakis P. (2007) A primitive ontology model for product lifecycle meta data in the closed-loop PLM. In: Gonçalves RJ, Müller JP, Mertins K, Zelm M, editors. Enterprise Interoperability II: New Challenges and Approaches, pp. 729-740. London: Springer.
10. Matsokis A, Kiritsis D. (2009) An ontology-based approach for Product Lifecycle Management. Computers in Industry, Special Issue: Semantic Web Computing in Industry. In press.
Acknowledgments
This work was carried out in the framework of the SMAC project (Semantic-maintenance and life cycle), supported by the Interreg IV programme between France and Switzerland.
FUNCTION PERFORMANCE EVALUATION AND ITS APPLICATION FOR DESIGN MODIFICATION BASED ON PRODUCT USAGE DATA
Jong-Ho Shin a, Dimitris Kiritsis a, and Paul Xirouchakis a
a Institut de Génie Mécanique, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland.
In recent years, companies have been able to gather more data from their products thanks to new technologies such as product embedded information devices (PEIDs, Kiritsis et al. (2004)), advanced sensors, the internet, wireless telecommunication, and so on. Using them, companies can access working products directly, monitor and handle products remotely, and transfer the generated data back to appropriate company repositories wirelessly. However, the application of the newly gathered data is still primitive, since this kind of data was difficult to obtain before these technologies were developed. The newly gathered data can be applied to product improvement provided that it is transformed into appropriate information and knowledge. To this end, we propose a new method to manage the newly gathered data to complete closed-loop PLM. The usage data gathered in the MOL phase is transferred to the BOL phase for design modification so as to improve the product. To do this, we define new terms regarding function performance, considering the historical change of function performance. The proposed definitions are developed to be used in design modification, so that they help engineers to understand the working status of components/parts during the usage period of a product. Based on the evaluation of the working status of components/parts, the critical components/parts are identified. For these critical components/parts, the working status is examined and correlated with field data consisting of operational and environmental data. The correlation provides engineers with the critical field data which has an important effect on the degraded working status. Hence, the proposed method provides the transformation from usage data gathered in the MOL phase to information for design improvement in the BOL phase. To verify our method, we use a locomotive case study.
Key Words: Performance evaluation, design improvement, multiple linear regression model, degradation
1 INTRODUCTION
Thanks to recently developed technologies such as product embedded information devices (PEID, Kiritsis (2004)), various sensors, wireless telecommunication, the internet, and so on, a company is able to access and monitor its products remotely, to gather product data continuously, and even to handle products directly. Through these new technologies, the information flow between company and products extends over the whole product lifecycle, which is called closed-loop product lifecycle management (PLM). Until now, the main interest of closed-loop PLM has been the beginning of life (BOL) phase, covering areas such as supply chain management and production management. The product usage data generated during the middle of life (MOL) phase has been handled less, since the technological environment to gather it was immature. Nowadays, these limitations are overcome by new technologies, so that companies are able to gather and use various kinds of product usage data in a ubiquitous way. However, even though the technological infrastructure to gather product usage data from the MOL phase is well established, methods to apply this data are still in their infancy. Product usage data can provide a company with more opportunities to understand its products and to improve them. Furthermore, when product usage data is transformed into appropriate information or knowledge, it becomes applicable to various objectives such as design improvement, production optimisation, advanced maintenance, effective recycling, and so on. For example, an accurate understanding of product status helps design engineers to find and modify inappropriate design parameters. The failures of the improved product will be reduced and its reliability increased, which enhances customer satisfaction and helps companies survive in a harsh market environment. Therefore, it is necessary to develop a method to apply product usage data to product improvement.
To this end, in this study, we propose a design improvement support method based on product usage data gathered during the MOL phase. With the proposed method, product usage data is transformed into information to support design improvement. The transformation procedure consists of several steps (see Figure 1). In the first step, we decompose the product functions. Using the decomposed functions, degradation scenarios are defined; these show the degradation relationships among the decomposed functions. From the degradation scenarios, the importance rates of the functions are calculated. Then, the working status of the functions during the usage period is calculated through a function performance measure. The calculated working status, combined with the degradation scenarios, is used to find the critical time instances when functions show poor working status. Using the critical time instances, the field data, which consists of operational and environmental data, is classified as normal or abnormal. The two classes of field data are then compared with each other using a clustering technique. The comparison between normal and abnormal field data makes it possible to find the critical field data, i.e. the field data which causes the poor working status of functions. In the last step, using a relation matrix between the critical field data and the design parameters, the importance rates of the design parameters are calculated. Design parameters with high importance rates should be checked and modified so that the related functions are not affected by harsh operation and environment. The proposed method is described using a locomotive case study to show its validity and to aid understanding. This paper is organized as follows. In section 2, we introduce previous research works on product usage data application, design improvement using historical data, performance, and performance degradation.
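The field data evaluation steps described above (finding critical time instances, splitting field data into normal and abnormal, and comparing the groups) can be sketched as follows. The records, field names and threshold are invented for illustration, and a simple per-field mean comparison stands in for the clustering technique used in the paper.

```python
from statistics import mean

# Invented field data: (time instance, ambient temperature, load factor).
records = [
    (0, 21.0, 0.60), (1, 22.0, 0.62), (2, 35.0, 0.63), (3, 21.5, 0.61),
]
critical_instances = {2}   # time instances where functions showed poor status

FIELD_NAMES = ["ambient_temp", "load"]

def critical_fields(records, critical, rel_gap=0.25):
    """Flag fields whose abnormal-group mean deviates from the normal mean."""
    normal = [r[1:] for r in records if r[0] not in critical]
    abnormal = [r[1:] for r in records if r[0] in critical]
    flagged = []
    for i, name in enumerate(FIELD_NAMES):
        n = mean(r[i] for r in normal)
        a = mean(r[i] for r in abnormal)
        if abs(a - n) / abs(n) > rel_gap:   # field looks out of range when abnormal
            flagged.append(name)
    return flagged
```

On this toy data only the ambient temperature is flagged: it jumps at the critical instance while the load stays near its normal range, so temperature becomes the candidate critical field data fed into the relation matrix.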
In section 3, we explain the overall procedure of our proposed method to transform product usage data into information supporting design improvement, together with a case study. In the last section, we conclude our research and provide some discussion.
2 STATE-OF-THE-ART
There are many ways to explore and exploit product usage data. Among them, maintenance is the most active application field, where product usage data is applied to enhanced maintenance decisions such as predictive maintenance, condition based maintenance, and so on. Xiaoling et al. (2007) proposed a method to use historical time series data in predictive maintenance. In their work, the historical time series data is fitted with an auto-regressive moving average (ARMA) model, which is usually used for the prediction of trends and future behaviour. In the ARMA model, residual series are calculated and used to predict a machine failure; based on this prediction, the maintenance policy is decided. Chang-Ching and Hsien-Yu (2005) proposed a method to estimate machine reliability based on its status, which is monitored through product usage data. In their work, the vibration signal is monitored and the basic information for predictive maintenance, such as the hazards model, reliability, and mean time between failures (MTBF), is calculated using a cerebellar model articulation controller neural network-based machine performance estimation model (CMAC-PEM). Bansal et al. (2005) proposed real-time predictive maintenance based on product characteristics during operation. As the product characteristic, the motion current signature of a DC motor is monitored and the distinct motor loads are classified using a neural network; the real-time prediction responds to product characteristics concurrently. In these applications, the product usage data is used for the prediction of future product status. They do not focus on understanding how the historical data change and affect product status from the viewpoint of design modification. In another field, where product usage data is used for design improvement, Delaney and Phelan (2009) proposed a method to use historical process data for robust design improvement.
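The residual idea behind the ARMA-based work cited above can be illustrated with a toy stand-in: fit a simple AR(2) model x[t] ≈ a1·x[t-1] + a2·x[t-2] to a monitored signal by solving the 2x2 normal equations, then flag time steps whose residual is unusually large. The signal, the injected anomaly and the 4-sigma threshold are all invented; a full ARMA analysis would also include the moving-average part.

```python
import math
import random

random.seed(0)
n = 200
# Invented monitored signal: a slow oscillation plus sensor noise, with an
# anomaly (e.g. an incipient fault) injected at t = 150.
signal = [math.sin(0.04 * t) + 0.05 * random.gauss(0, 1) for t in range(n)]
signal[150] += 2.0

# Least-squares AR(2) fit via the 2x2 normal equations.
s11 = sum(signal[t-1] ** 2 for t in range(2, n))
s22 = sum(signal[t-2] ** 2 for t in range(2, n))
s12 = sum(signal[t-1] * signal[t-2] for t in range(2, n))
b1 = sum(signal[t] * signal[t-1] for t in range(2, n))
b2 = sum(signal[t] * signal[t-2] for t in range(2, n))
det = s11 * s22 - s12 * s12
a1 = (b1 * s22 - b2 * s12) / det
a2 = (s11 * b2 - s12 * b1) / det

# One-step-ahead residuals; large residuals signal anomalous behaviour.
residuals = [signal[t] - a1 * signal[t-1] - a2 * signal[t-2]
             for t in range(2, n)]
sigma = math.sqrt(sum(r * r for r in residuals) / len(residuals))
alarms = [t + 2 for t, r in enumerate(residuals) if abs(r) > 4 * sigma]
```

The injected anomaly produces large residuals at and just after t = 150 (the corrupted sample also appears as a regressor for the next two steps), which is the kind of deviation a residual-based maintenance policy would react to.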
In their work, the historical data obtained during production is used as a reference for new product design. Through the application of the processing data, the performance variation of a new product can be estimated early in the design phase, so that more robust design tolerances can be defined. The function-failure design method (FFDM) (Stone, Tumer et al. 2005) is another method to perform failure analysis in conceptual design. The FFDM offers substantial improvements to the design process since it enhances failure analysis and thereby reduces necessary redesigns. In many research works, failures are well studied to find failure causes and to improve designs based on this analysis: fault tree analysis (Zampino 2001), hazard analysis (Giese and Tichy 2006), failure mode and effect analysis (FMEA) (Chin, Chan et al. 2008), and so on. However, there is still a lack of methods that consider product status for design improvement. To consider product usage data for product improvement, an evaluation method for the historical data is required so that it can be understood and used appropriately. The failure rate, performance, and performance degradation are widely used evaluation measures of product status. Among them, performance and its degradation are a good reference for understanding product status. In general, it is useful to measure product performance during the product usage period in many industrial fields. If companies know the actual status of products in terms of performance, they can provide much more effective services such as predictive maintenance. Furthermore, they can improve the product design to fix the cause of performance degradation. However, it is difficult to define product performance because of the ambiguity of the performance definition. Osteras et al. (2006) reviewed some definitions of performance and suggested their own definition of performance as a vector of performance variables.
In spite of these efforts to explain performance, a verbal definition of performance is still hard to use in practice. To use product performance in a company, it must be converted into a numerical value. Many previous research works have tried to define product performance through measurable values. For example, Lee et al. (2006) showed some examples of performance measures. In the first example of their work, the
vibration signal waveform monitored by an accelerometer was used as the performance measure of a bearing. Another example concerns a controller area network (CAN); here, the overshoot, by which the signal goes beyond the steady-state value, was used as the performance value of the CAN. Based on performance definitions, performance degradation can be formulated depending on the purpose of the application, such as design improvement, maintenance, remanufacturing policy, and so on. There have been several relevant research works with these purposes. For example, Djurdjanovic et al. (2003) suggested various methods for performance assessment, such as the statistical overlap between performance-related signatures, feature map pattern matching, logistic regression, cerebellar model arithmetic computer (CMAC) neural network pattern matching, hidden Markov model based performance assessment, particle filter based assessment, and so on. Bucchianico et al. (2004) used the maximum amplitude of the first peak of the current signal as a performance feature. They applied wavelet analysis to simplify the description of the current signal and then used the result as features in an analysis of variance (ANOVA). Furthermore, Tang et al. (2004) used the intensity of LEDs as the degradation measure. The temperature is changed at time intervals and the degradation data of the light emitting diodes (LEDs) is modelled as a degradation path, from which they proposed an optimal test plan for accelerated degradation testing (ADT). Recently, Jayaram and Girish (2005) proposed a degradation data model called the generalized estimating equation (GEE). Using this model, they predicted the characteristics of a degradation data set whose marginal distribution is Poisson, from which they could estimate reliability. Even though there has been much effort on defining and modelling performance degradation, there are few applications which focus on design improvement.
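In the spirit of the LED degradation-path work cited above, a minimal sketch of the idea is to fit a linear path to a degrading performance measure and extrapolate the time at which it crosses a failure threshold. The intensity values, timescale and threshold below are invented for illustration; real degradation paths are often nonlinear and fitted per unit under accelerated conditions.

```python
def fit_line(ts, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return slope, ybar - slope * tbar

# Invented degradation data: normalised intensity sampled every 1000 hours.
hours     = [0, 1000, 2000, 3000, 4000]
intensity = [1.00, 0.96, 0.93, 0.88, 0.85]

slope, intercept = fit_line(hours, intensity)
FAIL_AT = 0.70                               # illustrative degradation limit
hours_to_failure = (FAIL_AT - intercept) / slope
```

Extrapolating the fitted path to the threshold yields a predicted life of roughly 7900 hours for this unit, which is the kind of per-unit estimate a degradation-based reliability analysis builds on.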
Usually, product degradation is addressed in connection with maintenance or product reliability issues. Moreover, there is a lack of research works dealing with the connection between degradation and field data as a means to improve design parameters.
3 PROCEDURE EXPLANATION WITH CASE STUDY
To apply product usage data for design improvement, we propose the following procedure (see Figure 1). The proposed procedure consists of four parts: 1) function evaluation, 2) function performance degradation evaluation, 3) field data evaluation, and 4) design parameter evaluation. In the first part, the product functions are evaluated from the viewpoint of performance degradation. In the second part, the functional working status of the functions is calculated. In the third part, the field data is evaluated based on the functional working status. In the last part, the design parameters are evaluated.
Figure 1 (flowchart) groups the steps of the procedure as follows:
< function evaluation >: 1. Decompose product functions; 2. Define degradation scenarios; 3. Calculate function importance rate
< function performance degradation evaluation >: 4. Calculate function performance degradation of functions at each time instance
< field data evaluation >: 5. Find the critical time instances; 6. Make field data clusters; 7. Find field data out of range from clusters
< design parameter evaluation >: 8. Build relation matrix between design parameters and field data
Figure 1. Overall procedure.

The proposed procedure is explained in detail using a case study on locomotive components. A locomotive is a complex system consisting of millions of components/parts. Among them, we extract several components and parts related to the locomotive braking system for this case study. Figure 2 shows the general architecture of the locomotive components considered in the case study. Based on the targeted components, the functions are defined, and the failure rate is selected as the performance measure for all functions. For the case study, we simplify the functions and their relationships. We also make some assumptions:
• The component working status is measured by a performance measure.
• Field data affects functional working status.
• Performance degradation caused by operational and environmental effects is related to design parameters.
Figure 2. Selected components for case study (source: http://www.knorr-bremse.co.uk/).
3.1 Function evaluation

In this part, functions are evaluated from the viewpoint of performance degradation. To do this, the product functions are first decomposed. The objective of function decomposition is to show how functions are structured and how they are connected with each other. A product consists of several functions depending on its objective, and these functions are correlated with each other in performing their objectives. Some functions work together to perform another objective, while others accomplish their objectives without any function combination. To clarify these relationships, we decompose the product functions into detailed sub-functions according to their roles and levels. A product can have several functions at the first level; each of these can be decomposed into several sub-functions at the second level, and a sub-function can in turn be divided into sub-functions at a lower level. The depth of function decomposition can be extended depending on the complexity of the functions.

The decomposed functions are usually connected with each other, and these complex relationships among decomposed functions are used to define degradation scenarios. The relationship among functions is defined as the energy/material/information flow, and degradation scenarios follow these flows: an abnormal energy/material/information flow causes abnormal function performance degradation of the related functions. For example, in Figure 3, if the material input of function 'F11' decreases, the output material of 'F11' decreases; the reduced material from 'F11' then makes function 'F12' work improperly. Hence, the material and energy flows are good references for defining degradation scenarios. Considering all possible function relationships, all existing degradation scenarios should be defined in this step.
Figure 3. Degradation scenarios.
Using degradation scenarios, we identify important degraded functions which lead to other consecutive function performance degradations. For example, among consecutively connected functions, the first degraded function can be the initiating function degradation of a degradation scenario. In general, some functions are involved in multiple degradation scenarios; if these functions do not work properly, several degradation scenarios are affected. Hence, such functions are more important than less correlated ones. These multiple correlations between functions and degradation scenarios are described in matrix form as a correlation matrix (see Table 1).

Table 1. Correlation matrix between functions and degradation scenarios. Each cell rates the correlation between a function (F11-F44) and a degradation scenario (D1-D10) as W(1) weak, M(3) medium, or S(5) strong. The function importance rates obtained from the matrix are:

Function:         F11  F12  F13  F21  F22  F23  F24  F31  F32  F33  F34  F41  F42  F43  F44
Importance rate:   19    5    4    1   10    9    6   12    6    3    5    6   13    8    3
The correlation between the functions and the degradation scenarios is rated with a usual rating method such as 1-3-5, 1-5-9, and so on. For example, the function 'F11' in Table 1 is related to the degradation scenarios D1 through D5 as a multiple correlation. After rating the correlations, the function importance rate is calculated as the sum of the correlations. According to Table 1, a function which has strong correlations with several degradation scenarios has a high function importance rate. For example, the functions 'F11', 'F31', and 'F42' are highly correlated with several degradation scenarios, so they have high function importance rates. The function importance rate will be used in the relation matrix between design parameters and field data in the last step.
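As an illustration, the importance-rate calculation above can be sketched as follows. The W(1)/M(3)/S(5) scale follows Table 1, but the matrix excerpt and the dictionary layout are illustrative assumptions, not the full case-study data.

```python
# Numeric values of the usual 1-3-5 rating scale: Weak, Medium, Strong.
RATING = {"W": 1, "M": 3, "S": 5}

# Correlation matrix between functions and degradation scenarios
# (a small excerpt in the spirit of Table 1: scenario -> {function: rating}).
correlations = {
    "D1": {"F11": "S", "F12": "W", "F13": "W", "F21": "W"},
    "D2": {"F11": "S", "F12": "W"},
    "D3": {"F11": "M", "F12": "W", "F13": "M"},
}

def function_importance(corr):
    """Sum the numeric correlation ratings of each function
    over all degradation scenarios in which it appears."""
    importance = {}
    for scenario in corr.values():
        for function, rating in scenario.items():
            importance[function] = importance.get(function, 0) + RATING[rating]
    return importance

print(function_importance(correlations))
# F11 appears as S, S, M -> 5 + 5 + 3 = 13
```

A function touched by many scenarios with strong ratings (like F11 here) accumulates a high importance rate, mirroring how F11, F31, and F42 stand out in Table 1.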
3.2 Function performance degradation evaluation

The function performance degradation can be monitored and calculated through the function performance measure. The function performance measure is determined by the function objective, e.g. voltage, vibration, current, speed, or noise. Using the function performance measure, the performance degradation of each function at each time instance is calculated by equation (1):

Dij = Pie - Pij ;  if Pij > Pie, then Dij = 0        (1)

where
i : index of function
j : index of the monitoring time instance of the function performance measure, j >= 1
Dij : function performance degradation of function i at the j-th time instance
Pie : expected value of the function performance measure of function i according to the design specification
Pij : monitored value of the function performance measure of function i at the j-th time instance
For each decomposed function in this case study, the failure rate is selected as the function performance measure. Hence, 'Pie' and 'Pij' in equation (1) are substituted with the expected failure rate and the monitored failure rate, respectively.
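A minimal sketch of equation (1) in Python; the expected value and the monitored series are illustrative numbers, not taken from the case study.

```python
def degradation(p_expected, p_monitored):
    """Function performance degradation Dij of equation (1): the
    shortfall of the monitored measure Pij against the expected
    measure Pie; zero when the monitored value exceeds expectation."""
    if p_monitored > p_expected:
        return 0.0
    return p_expected - p_monitored

# Expected measure Pie and monitored values Pij for one function
# over four time instances (illustrative figures).
p_e = 0.95
series = [0.97, 0.93, 0.80, 0.96]
print([degradation(p_e, p) for p in series])
```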
3.3 Field data evaluation

The function performance degradation calculated in the previous part represents the working status of the functional ability of each function at each time instance. A high function performance degradation means that a function has seriously lost its working status, i.e. it is much degraded. A time instance with high function performance degradation is defined as a critical time instance. To find critical time instances, we set a threshold value for the function performance degradation. The threshold value is set as the reliable limit of function performance degradation that still assures an acceptable functional ability. This value can be defined through empirical tests in the laboratory, previous data from similar products/components/parts, engineering knowledge, design specifications for reliability, and so on. If the function performance degradation exceeds the threshold value at a certain time instance, this time instance is classified as a critical time instance. The critical time instances of each function are collected in a set tci (see equation (2)). Table 2 shows the critical time instances identified by equation (2) in the case study; according to Table 2, 'F22' shows many critical time instances (78 in total).

tci = { tij | Dij >= Dim , 0 <= tij <= EOL of product }        (2)

where
tci : set of critical time instances of function i
Dim : threshold value of Dij for function i
tij : monitoring time of function i at the j-th time instance

In the function performance degradation analysis, the field data should be considered concurrently, because it can be closely correlated with the behaviour of the function performance degradation. When a high function performance degradation occurs, it is necessary to find the field data causing it. In this step, we use a clustering method to find the causing field data. In general, the field data consists of a large amount of data gathered throughout the usage period. Since the data ranges vary with the usage environment, we use a clustering technique to classify the field data into a few groups. For example, the locomotive used in the case study works in various environments such as cold, hot, high-altitude, or dry regions. In this case, the range of the field data becomes diverse, and the normal range of field data in which the locomotive works properly should be grouped. Since clustering is a useful method to group data in a condensed form, it is used to classify the field data in this part. However, the usual clustering methods do not consider the quality of the data to be clustered; there is no discrimination between good and bad field data. To overcome this limitation, we separate the field data according to the function performance degradation: the field data gathered at the critical time instances (tci) are regarded as abnormal, and the others as normal, since the field data at the critical time instances show high function performance degradation. With the normal field data we create clusters; hence, the field data clusters built from normal field data assure good function performance. Then, we calculate the mean and variance of each cluster.
The calculated means and variances are used as reference values representing the normal field data range, i.e. the range showing a normal functional working status. During the product usage period, various kinds of field data, such as outer temperature, voltage, current, speed, and global position, are recorded by PEIDs. The field data usually covers various ranges, since the product is used in different environments and by different users; for example, a locomotive records different temperatures, speeds, global positions, voltages, and currents depending on the country and the company in which it is used. The clusters created in the previous step therefore show the field data range in which each function performs its objective well; from them we know in which range of the field data the product works without any problem, and these clusters become the reference values of the field data assuring a normal functional ability. Then, we compare the field data gathered at the critical time instances with the reference values obtained from the clusters of normal field data. From this comparison, we can separate the field data that are out of range of the clusters. The identified out-of-range field data are possible causes of the function performance degradation, in case the function performance has been degraded by abnormal field data. Table 3 shows the result of clustering for the function 'F22', which shows many critical time instances; it gives the normal field data range for a good working status of 'F22'.
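The thresholding of equation (2) and the comparison of critical-instance field data against the cluster reference values can be sketched as follows. A single cluster of normal values is used here, and the 3-sigma band and all numbers are illustrative assumptions, not case-study data.

```python
from statistics import mean, pvariance

def critical_time_instances(times, degradations, threshold):
    """Equation (2): time instances whose degradation Dij reaches
    the threshold Dim are classified as critical."""
    return [t for t, d in zip(times, degradations) if d >= threshold]

def cluster_stats(normal_values):
    """Mean and variance of one field-data cluster built from
    normal (non-critical) field data only."""
    return mean(normal_values), pvariance(normal_values)

def out_of_range(value, cluster_mean, cluster_var, k=3.0):
    """Flag field data outside mean +/- k standard deviations."""
    return abs(value - cluster_mean) > k * cluster_var ** 0.5

times = [10, 20, 30, 40, 50]
degr = [0.00, 0.01, 0.08, 0.00, 0.12]
tci = critical_time_instances(times, degr, threshold=0.05)

# One field data stream (e.g. catenary current) split by the critical
# instances: normal values form the cluster, critical ones are checked.
field = {10: 62.5, 20: 63.1, 30: 91.0, 40: 62.8, 50: 95.4}
normal = [field[t] for t in times if t not in tci]
m, v = cluster_stats(normal)
abnormal = [field[t] for t in tci if out_of_range(field[t], m, v)]
print(tci, abnormal)
```

Counting how often each field data stream lands in `abnormal` across all critical instances is what produces the occurrence numbers of Table 4.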
Table 3 shows the result of the clustering; these clusters exclude the field data gathered around the critical time instances of the function 'F22'. The number of created clusters differs per field data. Based on the clusters, the field data that fall outside the normal field data range at the critical time instances are counted in Table 4.

Table 4. Number of abnormal field data

Field data                 Occurrence number of abnormal field data
                           F22 (78 times)    F34 (18 times)
Mileage                           0                  0
Catenary voltage                 45                 45
Catenary current                 67                 67
Filter current                   25                 25
Heating circuit current          77                 77
Locomotive acceleration          18                 18
Brake cylinder pressure          56                 58

According to Table 4, the field data 'Catenary current', 'Heating circuit current', and 'Brake cylinder pressure' show a high number of abnormal occurrences, which means that these field data are highly correlated with the degradation of the functions 'F22' and 'F34'. Hence, we select these three field data as the critical field data having an effect on the functions 'F22' and 'F34'. Then, we build a relation matrix between the field data and the design parameters of the components/parts related to the functions 'F22' and 'F34'.
Table 2. Degradation scenarios and critical time instances. For each degradation scenario (D1-D10), the table lists the functions involved and their sets of critical time instances (tci). For most functions tci is N/A, while 'F22' shows 78 and 'F34' shows 18 critical time instances.
N/A: there is no failure occurrence during the usage period.

Table 3. Field data clustering result for function 'F22'. For each of the seven field data (Mileage, Catenary voltage, Catenary current, Filter current, Heating circuit current, Locomotive acceleration, and Brake cylinder pressure), the table gives the mean and variance of each cluster (up to seven clusters per field data) built from the normal field data.
Note: Although the field data is the same for all functions, the normal field data range depends on each function. Here, we consider that all field data affect all functions; however, with a more detailed description of the sensor allocation scheme of the locomotive components, we could differentiate the field data applicable to each function.
3.4 Design parameter evaluation

In the last step, we combine the critical field data found in the previous step with the design parameters of the components/parts. The abnormal field data found in the previous step are regarded as strongly correlated with the function performance degradation. The design parameters are defined by the component/part design specifications, and the components/parts are classified and connected through the function decomposition, since each function requires components/parts to perform its objectives. To build and evaluate the relationship between design parameters and field data, we use a matrix form (see Table 5) similar to the house of quality (HOQ) used in the quality function deployment (QFD) method. Using the relation matrix, the design parameters and the field data are correlated. The degree of relationship is decided by engineers using a usual rating (as generally used in QFD) such as 1-3-5, 1-5-9, low-medium-high, and so on. Then, using a summation similar to that of QFD, we calculate the importance rate of each design parameter. In Table 5, the importance rate of a design parameter is calculated as the sum, over the field data, of its degree of relationship multiplied by the function importance rate.
Table 5. Relation matrix between design parameters and field data

Function (importance rate) | Component/part | Design parameter  | Catenary current | Heating circuit current | Brake cylinder pressure | Importance rate of design parameter
F22 (10)                   | CPU board      | Power module      | High (5)         |                         |                         | 50
F34 (5)                    | Brake caliper  | Material          |                  |                         |                         |
F34 (5)                    | Brake caliper  | Diameter          |                  |                         |                         |
F34 (5)                    | Brake cylinder | Cylinder diameter |                  |                         | High (5)                | 25
Table 5 is made for the case study. According to Table 5, the power module of the CPU board is highly affected by the field data 'Catenary current'; hence, the power module of the CPU board should be checked and modified. For the function 'F34', the cylinder diameter of the component 'Brake cylinder' is highly related to the field data 'Brake cylinder pressure'; engineers should reconsider the diameter of the cylinder to improve the brake cylinder pressure.
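The QFD-style summation behind Table 5 can be sketched as follows; only the two 'High' relationships shown in Table 5 are encoded, and the data layout is an illustrative assumption.

```python
# Degree-of-relationship scale, as generally used in QFD.
DEGREE = {"Low": 1, "Medium": 3, "High": 5}

# Function importance rates from the correlation matrix (as in Table 5).
function_importance = {"F22": 10, "F34": 5}

# Relation matrix entries: (function, design parameter, field data, degree).
relations = [
    ("F22", "Power module", "Catenary current", "High"),
    ("F34", "Cylinder diameter", "Brake cylinder pressure", "High"),
]

def design_parameter_importance(relations, importance):
    """Importance rate of each design parameter: the sum over all field
    data of the degree of relationship multiplied by the importance
    rate of the function the parameter belongs to."""
    rates = {}
    for function, parameter, _field_data, degree in relations:
        rates[parameter] = rates.get(parameter, 0) + DEGREE[degree] * importance[function]
    return rates

rates = design_parameter_importance(relations, function_importance)
print(rates)
# Power module: 5 * 10 = 50; Cylinder diameter: 5 * 5 = 25, as in Table 5
```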
4 CONCLUSION
In this study, we propose a new design improvement support method based on product usage data. Our method follows several steps to transform product usage data into information that helps a product design engineer with design improvement. The product usage data gathered by PEIDs is used for monitoring the product status and for calculating the working status of components/parts. The function performance degradation evaluated at each time instance is then used to assess the field data as normal or abnormal. The normal field data is clustered so as to serve as reference values assuring a good working status. By comparing the clusters of normal field data with the field data gathered at critical time instances, the critical field data having an effect on the function performance degradation are found. Since these field data are correlated with the function performance degradation, they should be considered in product design improvement. Hence, the design parameters and the critical field data are related through the relation matrix. The relation matrix suggests the importance rates of the design parameters, and design parameters with a high importance rate should be considered in design modification so as to reduce the function performance degradation due to critical field data. To show the feasibility of our method, we have applied the procedure to a case study based on a locomotive provided by a real company. This approach can serve as a reference model for applying product usage data to product design improvement.

In spite of the usefulness of this procedure, our method still needs some improvement. First of all, we consider only one performance measure for the calculation of the functional working status. In case several combined performance measures are considered in the evaluation of the functional working status, a new method to deal with the resulting complex relationships among functions should be developed.
The frequently used relation matrix calculation requires input from engineers, so it depends on their knowledge and experience. Even though we provide some guidelines, some steps of the proposed method are not yet supported by our methodology. Moreover, the procedure is not automated; a sophisticated method that uses product usage data to automatically provide design improvement suggestions is needed.
5 REFERENCES
1. Bansal, D., Evans, D.J., et al. (2005) A real-time predictive maintenance system for machine systems: an alternative to expensive motion sensing technology. Sensors for Industry Conference, 2005.
2. Bucchianico, A., et al. (2004) A multi-scale approach to functional signature analysis for product end-of-life management. Quality and Reliability Engineering International, 20(5), 457-467.
3. Chang-Ching, L. & Hsien-Yu, T. (2005) A neural network application for reliability modelling and condition-based predictive maintenance. International Journal of Advanced Manufacturing Technology, 25(1-2), 174-179.
4. Chin, K.-S., Chan, A., et al. (2008) Development of a fuzzy FMEA based product design system. International Journal of Advanced Manufacturing Technology, 36(7-8), 633-649.
5. Delaney, K.D. & Phelan, P. (2009) Design improvement using process capability data. Journal of Materials Processing Technology, 209(1), 619-624.
6. Djurdjanovic, D., et al. (2003) Watchdog Agent: an infotronics-based prognostics approach for product performance degradation assessment and prediction. Advanced Engineering Informatics, 17(3-4), 109-125.
7. Giese, H. & Tichy, M. (2006) Component-based hazard analysis: optimal designs, product lines, and online reconfiguration. Gdansk, Poland, Springer Verlag.
8. Jayaram, J.S.R. & Girish, T. (2005) Reliability prediction through degradation data modeling using a quasi-likelihood approach. Annual Reliability and Maintainability Symposium, 2005 Proceedings, pp. 193-199.
9. Kiritsis, D. (2004) Ubiquitous product lifecycle management using product embedded information devices. Intelligent Maintenance System (IMS) 2004 International Conference, Arles.
10. Lee, J., et al. (2006) Intelligent prognostics tools and e-maintenance. Computers in Industry, 57(6), 476-489.
11. Osteras, T., et al. (2006) Product performance and specification in new product development. Journal of Engineering Design, 17(2), 177-192.
12. Stone, R., Tumer, I., et al. (2005) Linking product functionality to historic failures to improve failure analysis in design. Research in Engineering Design, 16(1), 96-108.
13. Tang, L.C., et al. (2004) Planning of step-stress accelerated degradation test. Annual Reliability and Maintainability Symposium, 2004 Proceedings, pp. 287-292.
14. Xiaoling, B., Quanzhi, X., et al. (2007) Equipment fault forecasting based on a two-level hierarchical model. Piscataway, NJ, USA, IEEE.
15. Zampino, E.J. (2001) Application of fault-tree analysis to troubleshooting the NASA GRC icing research tunnel. Piscataway, NJ, USA, IEEE.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
A COMPUTERIZED MODEL FOR ASSESSING THE RETURN ON INVESTMENT IN MAINTENANCE: FOLLOWING UP MAINTENANCE CONTRIBUTION IN COMPANY PROFIT

Basim Al-Najjar a

a Terotechnology, School for Technology and Design, Växjö University, Sweden, [email protected]

In order to reduce as much as possible the economic losses generated by a lack of, or inefficient, maintenance, it is necessary to map, analyse, and judge maintenance performance and to act on deviations before it is too late. It is always necessary for a company to act to increase profit and consequently enhance its competitiveness. In this paper, a software model (MainSave) has been developed for mapping, monitoring, analysing, following up, and assessing the cost-effectiveness of maintenance (and maintenance investments). MainSave can be used to assess savings and profit/losses due to maintenance performance, to identify problem areas, and to plan for new beneficial investments in maintenance. The module has been tested at Fiat/CRF in Italy. The major conclusion is that, by applying MainSave, it is possible to identify, assess, and follow up the maintenance contribution to company business.

Keywords: Maintenance Savings, Maintenance Profit, Maintenance Risk Capital Investment, Return on Investment in Maintenance

1 INTRODUCTION
Manufacturing industries realize the importance of monitoring and following up the performance of production and maintenance processes by simultaneously using economic and technical key performance indicators (KPIs). These indicators establish a bridge between the operational level, in terms of e.g. productivity, performance efficiency, quality rate, availability, and production cost, and the strategic level, expressed by company profit and competitiveness. These key indicators are also important for following up the maintenance role in sustainable manufacturing, [1]. In the past, the survival of manufacturing companies was mainly connected to how much a company was able to push into the market. This situation has changed, and today's strategies imply cost minimization, differentiation, and the ability to use available resources cost-effectively with reduced pollution of the surroundings. The focus on customer needs puts great demands on the production and maintenance systems to meet the goals of high product quality, production safety, and delivery on time at a competitive price, [2,3]. Properly identified KPIs are required for following up the work done to achieve the company's strategic objectives and to survive daily competition. Furthermore, integrating the KPIs with the knowledge base and database can provide a manager with the information, knowledge, and ability required to monitor and interpret the performance measures for making cost-effective decisions, [4]. Such KPIs can also be utilised for benchmarking, which is one of the tools for never-ending improvement, [1,5].

2 THEORETICAL BACKGROUND
Traditionally, and incorrectly, maintenance costs are divided into direct and indirect costs. Direct costs, i.e. the costs that can easily be related directly to maintenance, consist of direct maintenance labour, consumable maintenance material, outsourcing of maintenance, and overheads to cover expenses such as tools, instruments, training, administration, and other maintenance-related expenses. Indirect costs, i.e. the costs that can be related indirectly to maintenance inefficiency, cannot all be related to maintenance as easily as the production losses due to machine failures can. For example, the indirect costs/profits related to losing/gaining customers and market share are not so easily related to maintenance inefficiency/efficiency, respectively. Also, it is not easy (and sometimes impossible) to find these costs in current accountancy systems without confusing them with other costs, [6]. In order to assess the economic importance of an investment in maintenance, it is often necessary to find the Life Cycle Income (LCI) of a machine/equipment, which is usually not an easy
task either. It is easier to assess the savings achieved by more efficient maintenance, such as reduced downtime, fewer rejected items, less capital tied up in inventories, and lower operating costs, [4,6]. To be able to monitor, assess, and improve the outcome of different maintenance actions, it is necessary to use a model for identifying and localizing/retrieving both technical and economic data from company databases. In order to make the process of data gathering and analysis even easier and more cost-effective, the model should be computerized, [7,8]. Using the MIMOSA database reduces the technical difficulties and disturbances that may be induced in the current IT systems of a company, [9]. This allows following up maintenance KPIs more frequently and easily, and thereby reacting more quickly to disturbances and avoiding unnecessary costs. It also becomes easier to identify and trace the causes behind deviations. The model should also help in interpreting the measurements of the relevant basic variables and KPIs in order to reach cost-effective decisions in planning and executing maintenance actions, and to identify where an investment in maintenance may have the best financial payoff, [1,6,10]. In order to evaluate the economic importance of maintenance activities, and consequently the Return on Investment in Maintenance (ROIIM), it is necessary to assess the savings achieved by a more efficient maintenance policy. This can be done by analysing the life cycle cost (LCC) and the transactions between maintenance and other disciplines within the plant, such as production/operation, quality, and inventories, using system theory. Analysis and assessment of the transactions between maintenance and other working areas can be used to highlight the real maintenance role in the internal effectiveness of a producing company.
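A minimal sketch of how category-wise savings and a simple ROIIM figure could be assessed along these lines; the savings categories, unit costs, and all figures are illustrative assumptions, not data from the paper or from the Fiat/CRF test.

```python
def maintenance_savings(before, after, unit_costs):
    """Savings achieved by more efficient maintenance, assessed per
    category (downtime hours, rejected items, tied-up capital, ...)
    as the reduction multiplied by the cost per unit of that category."""
    return sum((before[k] - after[k]) * unit_costs[k] for k in before)

# Illustrative yearly figures before and after a maintenance improvement.
before = {"downtime_h": 400, "rejected_items": 1200, "tied_capital": 250000}
after = {"downtime_h": 250, "rejected_items": 800, "tied_capital": 180000}
unit_costs = {"downtime_h": 900, "rejected_items": 35, "tied_capital": 0.08}

savings = maintenance_savings(before, after, unit_costs)
investment = 60000  # risk capital invested in improving maintenance
roiim = savings / investment  # a simple return-on-investment ratio
print(savings, round(roiim, 2))
```

The point of the per-category layout is the one made in the text: savings are assessed where they arise (production, quality, inventories) rather than read off a single maintenance cost account.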
Maintenance savings are usually achieved by reducing downtime, the number of rejected items, operating/production costs, the expenses of different fees/penalties, such as those due to failure-related accidents or failure-related environment violations, and the cost of tied-up capital, i.e. fewer unnecessary components and pieces of equipment in inventories, [2,4]. Assessment of the savings achieved by more efficient maintenance is less influenced by irrelevant factors than assessment based on the LCI, when company profit is considered for the assessment, [4]. In the latter case, several external factors, such as the amount of product sold, currency rates, wars, crises, and product price, are irrelevant to the maintenance role but have an appreciable effect on the assessment of the company's LCI. Discussing solely direct and indirect maintenance costs implies that maintenance is a cost centre. Therefore, during recessions, companies generally reduce the maintenance budget/costs regardless of the benefits that maintenance activities may generate, while investments in maintenance during these periods can be among the best investments in the company, see [6]. The economic benefits that can be gained by more efficient maintenance appear as enhancements in the results of other working areas, such as production, quality, and investments, through reducing the losses of profit that occur due to:

a) losing production time (and production),
b) more tied-up capital and expenses,
c) losses of customers,
d) loss of reputation, and consequently
e) loss of market share.
These losses are usually generated mainly by a lack of (or inefficient) maintenance. In general, the majority of the indirect costs listed above are due to failures and short stoppages resulting from maintenance performance deficiencies, as discussed in [11]. In this paper, the maintenance-related economic factors considered when evaluating the economic role of maintenance are: 1. maintenance direct cost; 2. economic losses (which can be considered potential savings, or maintenance income, when using more efficient maintenance); 3. maintenance savings; 4. risk capital investments in maintenance for enhancing its performance and achieving better accuracy in maintenance decisions; and 5. maintenance results (maintenance profit/losses). Part of the economic losses (potential savings) that a manufacturing company may encounter, namely those due to unavailability and the expenses of delivery delay, can be recovered by implementing a more efficient maintenance policy [4,6,12]. This is why we label the economic losses as potential savings or maintenance income. The latter represents the resource for savings, and consequently for the maintenance profit, that can be generated by more efficient maintenance. 3
MODELING COST-EFFECTIVENESS WITH RESPECT TO MAINTENANCE
A maintenance policy is considered cost-effective only if its return is greater than the capital invested in maintenance. However, the benefits of improvements in maintenance are usually collected in other working areas and hardly
in maintenance itself, as long as its accountancy system shows only costs. For example, identifying and relating the benefits generated by more efficient vibration-based maintenance (VBM) is not an easy task if the mechanisms transferring maintenance impacts, and the technical and economic KPIs, are not well identified [4]. In order to justify investments in maintenance, the cost-effectiveness (Ce) of each investment in improving maintenance performance can be examined using the ratio of the difference between the average cost per high-quality product before and after the improvement to the average cost before it. This means that all the savings (and possible increments) in the expenses of production, tied-up capital, insurance premiums, etc., including the maintenance cost resulting from a more efficient maintenance policy, should be assessed. At the beginning, the cost-effectiveness may be zero or negative due to the extra expenses incurred during the learning period, which can be defined on the basis of the nature of each improvement; beyond this learning period it should be greater than zero. Ce indicates the percentage reduction in the total production cost due to the maintenance impact and can thus be used as a measure of the cost-effectiveness of the improvements, [4]. A model showing the links between maintenance actions and their economic results has been developed at Växjö University, [4]. In order to make the model industrially applicable, we tried to make it transparent by avoiding the black-box idea, i.e. the end user feeding the required data into the software and getting results by pushing particular buttons without knowing anything about what happens inside the software. Changes and improvements in the production conditions and in the production and maintenance processes usually lead to appreciable changes in the performance of the production and/or maintenance processes. 
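As a minimal sketch (not part of the original tool, and with a hypothetical function name), the Ce ratio described above — the reduction in average cost per high-quality product relative to the cost before the improvement — can be computed as:

```python
def cost_effectiveness(avg_cost_before: float, avg_cost_after: float) -> float:
    """Ce: proportional reduction in average cost per high-quality product.

    A positive Ce means the improvement reduced the production cost;
    during the learning period Ce may be zero or negative.
    """
    return (avg_cost_before - avg_cost_after) / avg_cost_before
```

For instance, a drop in the average cost from 100 to 92 units per product gives Ce = 0.08, i.e. an 8 % reduction in total production cost attributable to the maintenance improvement.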
Therefore, the formulas developed in [4] are used for assessing the impact of maintenance on the company economics during two different periods, in order to highlight changes in production and maintenance performance and results, i.e. savings or losses achieved due to better or worse usage of the available maintenance technologies [4]. These formulas can be applied independently of the maintenance technique being used. Denote the five most common sources generating savings/losses by Si, for i = 1, 2, …, 5. These sources are changes in: the number of failures, the number of short stoppages, stoppage time, bad-quality production due to inefficient maintenance, and additional expenses that can be defined by the user. These formulas underlie an inference engine that constitutes a new software tool named MainSave, as shown in Fig. 5. The total savings or losses can then simply be expressed as

Total saving = ∑ Si, for i = 1, 2, …, 5
where S1, S2, …, S5 are assessed using the following formulas:

I. Failures: the saving or loss in the production cost generated by fewer or more failures (S1):
S1 = number of failures avoided × average stoppage time × production rate × profit margin (PM)
S1 = (Y − y) × L1 × Pr × PM
where Y and y are the numbers of failures during the previous and current period, respectively, L1 is the average failure stoppage time and Pr is the production rate.

II. Average stoppage time: the saving or loss generated by shorter or longer stoppages (S2), i.e. longer/shorter production time:
S2 = difference in average failure stoppage time × number of failures × production rate × profit margin
S2 = (L1 − l1) × y × Pr × PM
where L1 and l1 are the average failure stoppage times during the previous and current period, respectively.

III. Short stoppages: the saving or loss in the production cost generated by fewer short stoppages (S3):
S3 = [short stoppages in previous period (B) − short stoppages in current period (b)] × average stoppage time (L2) × production rate × profit margin
S3 = (B − b) × L2 × Pr × PM

IV. Quality production: the saving or loss generated by higher production quality (S4):
S4 = [current period high-quality production per hour − previous period high-quality production per hour] × number of production hours per day (Ph) × number of production days per period (Pd) × profit margin
S4 = (p − P) × Ph × Pd × PM
where P and p are the amounts (in tons, meters, etc.) of high-quality product produced per hour in the previous and current period, respectively.

V. User-defined expenses paid by the company to cover, for instance, personnel compensation due to accidents, environmental damage penalties, insurance premiums, direct maintenance costs (including labor, spare parts and overheads), capital tied up in spare parts and equipment, and penalty expenses of delivery delay. Denote the expenses before and after the improvement, i.e. in the previous and current period, by Eb and Ea, respectively. Then the sum of the reduction or increment in these expenses can be expressed as
S5 = ∑_j (Eb − Ea)_j
where j = 1, 2, …, n denotes the n types of expenses that can be defined by the user. 4
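As an illustrative sketch (variable names follow the paper's formulas; the function name and the numbers in the usage note are assumptions), the five sources and the total saving can be coded as:

```python
def total_saving(Y, y, L1, l1, B, b, L2, P, p, Ph, Pd, Pr, PM,
                 expenses_before=(), expenses_after=()):
    """Sum the five MainSave savings/losses sources S1..S5."""
    S1 = (Y - y) * L1 * Pr * PM          # fewer (or more) failures
    S2 = (L1 - l1) * y * Pr * PM         # shorter (or longer) failure stoppages
    S3 = (B - b) * L2 * Pr * PM          # fewer (or more) short stoppages
    S4 = (p - P) * Ph * Pd * PM          # change in high-quality output per hour
    S5 = sum(eb - ea for eb, ea in zip(expenses_before, expenses_after))
    return S1 + S2 + S3 + S4 + S5
```

A negative result indicates a loss, mirroring the (−) sign convention used later in the MainSave test.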
SOFTWARE PROTOTYPE FOR INDUSTRIAL APPLICATION
One of the major reasons behind the lack of techniques for controlling and assessing the economic impact of maintenance on company profitability and competitiveness is the lack of a clear and robust theory, and of the methods and tools required for performing that task easily and properly [4]. Another reason is the difficulty of finding and processing the data required for mapping, monitoring, controlling, following up, analyzing and assessing the economic impact of maintenance. This is why a software tool may make it possible to perform this task easily and cost-effectively. The software module aims to sum all the economic losses (potential savings, which represent future maintenance income) that are generated by a lack of (or inefficient) maintenance. It also assesses the savings, and consequently the profit, generated by applying more efficient maintenance. Furthermore, KPIs such as total savings, potential savings and profit, and ratios such as maintenance savings to potential savings, savings to investments, investments to potential savings and investments per period, are automatically assessed by the model using the above-mentioned equations. The investment is assessed over the period of time that has passed until the analysis is done, instead of over the whole depreciation period. The same applies to the Overall Equipment Effectiveness (OEE), which is assessed using the traditional equation, i.e.
OEE = Availability × Performance efficiency × Quality rate
The above-mentioned ratios and measures are considered in this study as part of the important KPIs required for mapping, monitoring, analyzing and controlling maintenance performance and its economic impact. The main objective of MainSave is to enable the user to assess and control, easily and on demand, the economic impact of maintenance as well as the potential for further improvements. 
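The traditional OEE equation quoted above is a plain product of three rates; a sketch (the input values in the usage note are hypothetical):

```python
def oee(availability: float, performance_efficiency: float, quality_rate: float) -> float:
    # Traditional Overall Equipment Effectiveness: product of the three rates,
    # each expressed as a fraction between 0 and 1.
    return availability * performance_efficiency * quality_rate
```

For example, oee(0.90, 0.95, 0.99) gives about 0.846, i.e. roughly 85 % overall effectiveness.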
In other words, it can be utilized to assess the current situation, identify problem areas, assess technical and economic losses, and motivate investments in maintenance. The latter is important for providing the objective evidence needed to convince the company's executives of the necessity of these investments for enhancing the productivity and effectiveness of a production process. None of these results can be achieved without high-quality data of relevant coverage [4,6]. Also, the data required for applying MainSave should be easily retrieved by the system. An investment in maintenance often has a relatively short payoff time compared to other investments if it has been made using the right information and knowledge [13]. Usually, it is hard to show the economic advantages of such investments because the savings are spread out over many working areas in a company and cannot easily be found in current accountancy systems. 5
DATA DEFINITION AND GATHERING
The data required for applying and running MainSave can be divided into two major categories: database datasets and non-database datasets. In Fig. 5 it is easy to distinguish non-database datasets (white boxes) from database datasets (grey boxes). The data described by non-database datasets are defined below:
o Profit margin per high-quality item, ton, meter or cubic meter, etc.
o Total investment in maintenance for improving its performance
o Depreciation period, i.e. the period decided as the investment life length
The data described by database datasets are defined below and in Figs. 1-4:
o Data gathered concern one production machine and product.
o The data cover two production periods, i.e. before and after an improvement in the maintenance or production process, or any two well-distinguished periods.
o Numbers of unplanned stoppages, such as failures and short stoppages, and their causes.
o Average time of the stoppages.
o Production rate, production time, and theoretical and actual cycle time.
o Quality rate, i.e. the share of high-quality product out of the total production during the periods of analysis.
The rest of the information shown in Fig. 5, such as savings, investment per period, ratios and OEE, is assessed by MainSave.
Fig.1. Production theoretical cycle time.
Fig.2. Planned production time.
Fig.3. Production follow up.
6
MAINSAVE TEST
The tests were done using industrial data gathered from FIAT/CRF in Italy. Technical and economic data from the production and maintenance processes and the economy department were collected and fed into the MIMOSA database located at IBK, Tallinn. A CNC milling machine at FIAT/CRF, producing engine heads, was considered for the MDSS test. This machine is considered a bottleneck in the production line, which makes it critical for the whole production process. The data were collected over two periods (previous/first and current/second) of 6 months each (8 Jan. 2007 - 7 June 2007 and 7 June 2007 - 8 Jan. 2008). The selected periods are considered long enough to include several events, such as production speed changes, short stoppages, failures, disturbances or any other stoppages generated by the organization, man or environment. In general, the periods can be selected as: a)
Two periods were planned for producing two orders of the same product in the same machine, or
b) Two periods selected from the machine register, representing the time before and after a particular improvement to the maintenance policy. The non-database dataset, i.e. profit margin, investment and depreciation period, is given in the white boxes as 10 units, 30000 units and 4 years, respectively (Fig. 5). The savings (or additional losses) that maintenance has generated, due to its original performance or due to a particular performance improvement, are assessed using different loss categories. The losses are classified in two groups of categories: o predefined categories of losses, and o a user-defined category of losses, see Fig. 5. In general, companies do not necessarily have the same types of losses. Therefore, in order to accommodate MainSave to each particular application, machine and company, any of the predefined losses can be included in or excluded from MainSave. In MainSave, we tried to use the most common categories of losses and left a wide window for user-defined expenses. MainSave converts all technical data, such as the number of failures and downtimes, to a measure on the economic scale, i.e. money. In this test, all the loss categories predefined by MainSave are used and no user-defined expenses were found. In Fig. 5, there is an obvious increase in the losses of production time in the current (second) period compared with the previous (first) period. The major part of this increase was due to more failures and short stoppages. According to the company, this increase happened because spare parts from a different supplier than usual were used. 
The major conclusion that can be drawn from this test is that, using MainSave, it is possible to map, identify, analyze and assess the economic losses and savings in the production process and to identify the causes behind them, which eases the localization of the next investments required for improving maintenance performance to reduce losses. The number of failures increased and the stoppage times were also prolonged. This resulted in more economic losses (15372.9 units) despite the investment of 30000 units. These losses are distributed among three major areas: more failures (-4142.7 units), longer stoppage time (-10723.5 units) and more bad-quality expenses (-506.6 units). The biggest part of the losses is due to the longer stoppage time, which represents about 70% of the total losses. Assessing the losses belonging to the different categories helps to estimate and judge the size of the risk capital that should be invested in solving the problem. Notice that a saving with a (-) sign means a loss. Also, the Overall Equipment Effectiveness (OEE) increased, for unknown reasons.
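The reported breakdown can be checked arithmetically (figures taken from the test results; the total differs from the quoted 15372.9 only by rounding):

```python
# Loss categories reported in the MainSave test (negative = loss, in units).
losses = {
    "more failures": -4142.7,
    "longer stoppage time": -10723.5,
    "bad-quality expenses": -506.6,
}
total_loss = sum(losses.values())                              # about -15372.8
stoppage_share = losses["longer stoppage time"] / total_loss   # about 0.70
```

This confirms that the longer stoppage time accounts for roughly 70 % of the total losses, as stated in the text.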
Fig.4a. Production events.
Fig.4b. Production events.
Fig.5. Test results of MainSave.
7
RESULTS, DISCUSSIONS AND CONCLUSIONS
When the profit margin of a plant decreases, the need for a reliable and efficient maintenance policy becomes more important, because it becomes more important to reduce the economic losses, i.e. to press down the production cost per high-quality item, ton or meter, and consequently to increase the profit. The major result of this study is the development of a new model and software prototype (MainSave) for enhancing maintenance cost-effectiveness, performance and economic impact on the company business. The test of the model has clearly shown the potential and the benefits of its application. Using MainSave, it is possible to monitor, analyze and assess maintenance activities and to act at an early stage at both tactical and strategic levels to fulfil the company's strategic goals of continuous improvement of its profitability and competitiveness, which is hard to achieve using available tools and techniques even if the required data are available. The development of relevant and traceable technical and economic KPIs has made maintenance performance control more feasible. It also makes it possible to handle real-time data gathering, analysis and decision making. Further necessary information about maintenance and other working areas is also provided to the decision maker. MainSave is developed using a new and flexible model that can be accommodated to each production process without great difficulty, enhancing its accuracy. It provides better data coverage and quality, which are essential for improving knowledge and experience in maintenance and thereby aid in increasing the competitiveness and profitability of a company. 8
REFERENCES
1. Al-Najjar, B., Hansson, M-O. and Sunnegårdh, P. (2004) Benchmarking of Maintenance Performance: A Case Study in two manufacturers of furniture. IMA Journal of Management Mathematics, 15, 253-270.
2. Al-Najjar, B. (1997) Condition-based maintenance: Selection and improvement of a cost-effective vibration-based policy in rolling element bearings. Doctoral thesis, ISSN 0280-722X, ISRN LUTMDN/TMIO—1006—SE, ISBN 91-628-2545X, Lund University, Inst. of Industrial Engineering, Sweden.
3. Al-Najjar, B. (1998) Improved Effectiveness of Vibration Monitoring of Rolling Element Bearings in Paper Mills. Journal of Engineering Tribology, Proc. Instn Mech. Engrs, 212, Part J, 111-120.
4. Al-Najjar, B. (2007) The Lack of Maintenance and not Maintenance which Costs: A Model to Describe and Quantify the Impact of Vibration-based Maintenance on Company's Business. International Journal of Production Economics.
5. Pintelon, L. (1997) Maintenance performance reporting systems: some experiences. Journal of Quality in Maintenance Engineering, 3(1), 4-15.
6. Al-Najjar, B., Alsyouf, I., Salgado, E., Khosaba, S. and Faaborg, K. (2001) Economic Importance of Maintenance Planning when using vibration-based maintenance policy. Project report, Växjö University.
7. Al-Najjar, B. and Kans, M. (2006) A Model to Identify Relevant Data for Accurate Problem Tracing and Localisation, and Cost-effective Decisions: A Case Study. The International Journal of Productivity and Performance Management (IJPPM), 55(8).
8. Kans, M. (2008) On the utilisation of information technology for the management of profitable maintenance. PhD thesis, Department of Terotechnology, Växjö University, Sweden.
9. MIMOSA (2006) "Common Relational Information Schema (CRIS) Version 3.1 Specification", http://www.mimosa.org/.
10. Al-Najjar, B., Kans, M., Ingwald, A. and Samadi, R. (2003) Förstudierapport - Implementering av prototyp För Ekonomisk och Teknisk UnderhållsStyrning, BETUS. Växjö University.
11. Al-Najjar, B. (2000) Accuracy, effectiveness and improvement of Vibration-based Maintenance in Paper Mills: Case Studies. Journal of Sound and Vibration, 229(2), 389-410.
12. Al-Najjar, B. (1999) Economic criteria to select a cost-effective maintenance policy. Journal of Quality in Maintenance Engineering, 5(3).
13. Al-Najjar, B. and Alsyouf, I. (2004) Enhancing a Company's Profitability and Competitiveness using Integrated Vibration-based Maintenance: A Case Study. European Journal of Operational Research, 157, 643-657.
Acknowledgements
This paper is part of the work done within DYNAMITE and the author would like to thank the EU for supporting EU-IP DYNAMITE.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
CASE STUDY: WARRANTY COSTS ESTIMATION ACCORDING TO A DEFINED LIFETIME DISTRIBUTION OF DELIVERABLES
Vicente González Díaz a, Juan Francisco Gómez Fernández a and Adolfo Crespo Márquez b
a PhD student of the Industrial Management PhD Program at the School of Engineering, University of Seville, Spain.
b Associate Professor of Industrial Management, School of Engineering, University of Seville, Spain.
This paper describes a real case of warranty assistance, analyzing its management in the framework of a manufacturing company which provides deliverables during a specific period of time, following a scheduled distribution. With the sale of a product, the manufacturer is nowadays contractually obliged to provide warranty assistance to the buyer. Decreasing the incurred costs is clearly not the only goal, since the decision has to be global and strategic inside the company in order to deliver a reliable and robust product while offering an appropriate after-sales service to the user. Therefore, key aspects are presented in this study in order to estimate costs and, consequently, to take proper decisions for leading the company correctly to a successful goal. For that purpose, not only must managers and responsible staff in a well-established and controlled organization take part; it is also important to consider the experience of the technical staff for maintenance and warranty. As a result, this paper shows how, by analyzing past performance, it is possible to foresee and control the future. In our case, it will be possible to observe how the evolution of costs during the lifetime of a warranty assistance program can help to correct and foresee with more accuracy the expected total cost of the activity estimated at the beginning of the program. The paper is based on a usual procedure in special supplies for the public sector (for instance, a fleet of customized vehicles), between companies inside the supply chain or directly to the final user, where this final user is for example a public entity and the budget for the complete warranty assistance is already known from the beginning of the project. Key Words: Warranty management, after sales, cost estimation, e-warranty, spare parts procurement 1
INTRODUCTION
Case studies have normally been used to support and illustrate theoretical subjects in engineering and other research fields. When developing these cases, one usually finds such an amount of information that it can either trivialize the study or complicate it beyond a reasonable level. Therefore, the intention here is to synthesize a practical case which transmits easily how a proper management of warranty assistance helps to reduce costs, enables suitable decisions to be taken, and improves the image of the company in front of the client. The case exposed here starts by mentioning the antecedents related to warranty cost models. This brief state of the art will show how important a warranty cost management system is. Later on, the scenario where the contributions given in the mentioned state of the art will be applied is described. Once the problem is defined, and along the development of this particular case, a procedure is also proposed for the way of working among different sections inside a generic company. This procedure will be exposed succinctly using a workflow chart in the BPMN (Business Process Modelling Notation) standard. Finally, conclusions to this case study are expressed at the end of the paper.
2
ANTECEDENTS
Although many manufacturing companies spend great amounts of money just on their service warranties, in most cases this issue does not receive much attention. In spite of this, it is possible to find studies in the literature related to warranty cost modelling, with very interesting contributions [1]. The authors of these contributions usually try to
identify the processes, actions, stages, tools, methods or necessary support techniques to manage warranty costs properly. Regarding processes, in order to apply effective warranty management, it is critical to collect the proper data and to exchange adequately the different types of information between the modules into which the management system can be divided [2]. In our case study, a warranty management system organized in several modules will be proposed. In the literature review, one can also observe different interactions between warranty and other disciplines, and how they are dealt with by the different models and authors. Summarizing, three important interactions must be considered:
1. Warranty and Maintenance: In many cases, the warranty period is the time when the manufacturer still has a strong control over its product and its behaviour. Additionally, the expected warranty costs depend normally not only on warranty requirements, but also on the associated maintenance schedule of the product [3].
Figure 1. Warranty management system in four modules: engineering, manufacturing, post-sale and marketing, plus decision support (adapted from [2])
2. Warranty and Outsourcing: The warranty service or, in general, the after-sales department of a company, is usually one of the most susceptible to being outsourced, due to its low risk and also to the fact that, among other features, outsourcing provides legal insurance for such assistance services [4].
3. Warranty and Quality: Improving the reliability and quality of the product not only has an advantageous and favourable impact in front of the client; it also greatly reduces the expected warranty cost [5].
Figure 2. The decision of outsourcing (from [1]): services are classified from not externalizable to highly externalizable according to low/high management requirements and interaction requirements.
EAC = Cost to Date + Estimated Cost of Remaining Work
In reference to cost estimations, and apart from warranty issues, there are nowadays several methods to accurately estimate the final cost of a specific acquisition contract. In our case study, the method applied in a simplified way is called "Estimate at Completion" (EAC).
In a few words, EAC is a management technique used in a project for the control of cost progress. The manager foresees the total cost of the project at completion, combining measurements related to the scope of supply, the delivery schedule and the costs, using for that purpose a single integrated system.
Figure 3. EAC formula (adapted from [6]): the chart tracks the cost to date and the estimated final cost (€) over time, from project start to completion.
Finally, taking into account the above-mentioned antecedents, one can see that by reengineering management processes and by applying a correct warranty cost model, it is possible to:
Increase sales of extended warranties and additional related products.
Increase quality by improving the information flow about product defects and their sources.
Improve customer relationships.
Reduce expenses related to warranty claims and processing.
Improve management and control over the warranty costs.
Reduce expenses related to invalid claims and other warranty costs.
Therefore, a well-established warranty management system will help basically to achieve a successful goal in the performance of the company warranty services.
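As a minimal sketch of the EAC technique from the antecedents (the function name and the remaining-work figure are assumptions), the forecast is simply the cost incurred so far plus the estimated cost of the remaining work:

```python
def estimate_at_completion(cost_to_date: float, remaining_work_cost: float) -> float:
    # EAC = Cost to Date + Estimated Cost of Remaining Work
    return cost_to_date + remaining_work_cost
```

For example, with 1,000,000 € incurred and an assumed 1,625,000 € of remaining warranty work, the EAC is 2,625,000 €.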
3
SCENARIO OF STUDY
The case company is a large manufacturer in the metal industry that operates worldwide. The company designs, manufactures and supplies a wide range of industrial vehicles (such as forest machines, hydraulic excavators or track loaders) for industrial customers, as well as other related products like spare parts. In addition to the purchase of standard vehicles, the customization of machines is nowadays also frequent. In our case, the company must supply to a client a specific amount of customized vehicles following a defined schedule. The contract includes warranty assistance for the vehicles of the fleet during a period starting when each vehicle is delivered to the customer. To provide the after-sales service in a satisfactory way, the company must fulfil some conditions:
1. Teams formed by personnel with appropriate training.
2. Tools for maintenance / warranty tasks.
3. Materials and spare parts to carry out the repairs.
The first two conditions are considered fulfilled. Regarding the third condition, the necessary materials for warranty operations are obtained from the same warehouse as the assembly line. In this way, there are two possibilities for returning the material:
When the piece is repairable, a spare part is taken from the warehouse and is later refunded after the repair of the disassembled piece.
When the piece is not repairable, a spare is also taken from the warehouse, but the material must be restored by purchasing.
This situation is possible because the stock for manufacturing allows the loan of material for warranty without putting the needs of the assembly line at risk. The problem in this scenario is defined as follows: because manufacturing and warranty assistance share the same warehouse, there will be a moment when manufacturing is very advanced and, simultaneously, many vehicles are under warranty. From this moment onwards, every decision must be taken prioritizing one of the two activities. Apart from the above-described context, the study takes place during the lifetime distribution of deliverables. This means that historical data regarding costs, failed items, etc. are available for the research. For the failed items, a classification tree with several levels has been used, following a hierarchical structure based mainly on their functionality and reaching a sufficient level of detail in terms of procurement aspects.
Figure 4. Classification tree of components: the customized vehicle (level 0) breaks down into electrical, hydraulic, mechanical and auxiliary systems (level 1), and further into components such as disjunctor, cable, valve, pump, gear, brake, intercom and navigator (down to level n).
In figures, the described scenario and the delivery schedule are as shown in Table 1. Our case study will also be developed considering the following hypotheses:
Every vehicle has the same reliability (they have the same failure probability).
The warranty cost is constant over time.
The warranty time does not stop at any moment.
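The classification tree of Figure 4 can be sketched as nested data (component names are taken from the figure; in the real case the tree reaches deeper levels than shown here):

```python
classification_tree = {
    "Customized Vehicle": {                                # level 0
        "Electrical System": ["Disjunctor", "Cable"],      # level 1 -> leaf parts
        "Hydraulic System": ["Valve", "Pump"],
        "Mechanical System": ["Gear", "Brake"],
        "Auxiliary System": ["Intercom", "Navigator"],
    }
}

# The leaves form the procurement level, where spare parts can be ordered.
leaves = [part for parts in classification_tree["Customized Vehicle"].values()
          for part in parts]
```

Walking the tree down to the leaves gives the list of procurable components to which criticality weights can later be attached.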
Table 1 Data of the described scenario
Total amount of customized vehicles to be delivered: 350 units.
Warranty period for each vehicle: 2 years.
Warranty expiration for last vehicle: March 2015.
Time point of the case study (t1): April 2009 (150 units already delivered).
Date          Accumulated amount of vehicles
March 2006    Roll-out
April 2007    45 units
April 2008    100 units
April 2009    150 units
April 2010    200 units
April 2011    260 units
April 2012    315 units
April 2013    350 units
Regarding the EAC for warranty, it depends on the company policy. Usually, the budget for warranty is determined as a percentage of the project's total cost. In our case study, the manufacturing plus indirect costs for each vehicle are assumed to amount to ca. 375,000.00 €, and the percentage for warranty attendance is 2 % of the total cost budget. That yields around 2,625,000.00 € for the attendance of warranties during the whole project.
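The budget figure can be reproduced from the numbers above (a sketch assuming the per-vehicle cost is exact):

```python
vehicles = 350
cost_per_vehicle = 375_000       # € manufacturing plus indirect costs per vehicle
warranty_percent = 2             # % of the total cost budget reserved for warranty

total_project_cost = vehicles * cost_per_vehicle            # 131,250,000 €
eac_warranty = total_project_cost * warranty_percent // 100 # EACw in €
```

This reproduces the 2,625,000 € EACw quoted for warranty attendance over the whole project.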
4
ANALYSIS, DEVELOPMENT AND RESULTS OF THE CASE STUDY
4.1 Costs analysis of the warranty assistance
As mentioned, the study takes place at a moment when the company has already delivered 150 vehicles. At this time, there are 105 vehicles under warranty. Some preliminary data are shown in Table 2, together with a sample of the amount of vehicles under warranty according to the defined delivery schedule. Some figures have been rounded off in order to simplify their use during the study.
t2
ar -0 se 6 p0 m 6 ar -0 se 7 p0 m 7 ar -0 se 8 p0 m 8 ar -0 se 9 p0 m 9 ar -1 se 0 p1 m 0 ar -1 se 1 p1 m 1 ar -1 se 2 p1 m 2 ar -1 se 3 p1 m 3 ar -1 se 4 p1 m 4 ar -1 5
In this moment, we can observe how close the end of the deliveries is (April 2013). Consequently, much closer (and critical) is therefore the manufacturing of such last vehicles.
400 350 300 250 200 150 100 50 0
Warranty Evolution
t1
m
In September 2011 (t2), the already delivered fleet -285 units- will have a maximum in the amount of vehicles simultaneously under warranty –128 units- (see yellow graphic line).
Table 2 (as noted) is only an extract from the complete delivery schedule. From the complete schedule it can be seen that the warranty of the first vehicles obviously expires in March 2008, and that the most critical moment (t2) will occur in September 2011. The graphic in figure 5 helps to illustrate this.
Figure 5. Warranty evolution graphic, in terms of delivered vehicles
Table 2 Extract of the delivery schedule

Date          Ac. amount of vehicles   Monthly delivery   Vehicles in warranty
March 2006    Roll-out                 5 units            5 units
April 2007    45 units                 2 units            45 units
April 2008    100 units                3 units            91 units
April 2009    150 units                2 units            105 units
April 2010    200 units                7 units            100 units
April 2011    260 units                2 units            110 units
April 2012    315 units                2 units            115 units
April 2013    350 units                0 units            90 units
April 2014    350 units                0 units            35 units

Data at t1:
No. of delivered vehicles in t1: V1 = 150 units
No. of reclamations in t1: R1 = 1.200 reclamations
Warranty cost incurred in t1: C1 = 1.000.000,00 €
EAC for warranty: EACw = 2.625.000,00 €
No. of vehicles to be delivered: V(t) = [according to delivery schedule]
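The warranty-evolution logic behind figure 5 can be sketched as a short computation: how many vehicles are simultaneously under warranty in a given month, given a monthly delivery schedule and a 2-year (24-month) warranty per vehicle. The monthly schedule used below is hypothetical, since the paper only lists accumulated yearly figures (table 2); only the counting logic is illustrated.

```python
# Sketch: vehicles simultaneously under warranty at a given month.
# Assumption: each vehicle carries a 24-month warranty starting at delivery.

WARRANTY_MONTHS = 24

def under_warranty(deliveries, month):
    """A vehicle delivered in month d is under warranty during [d, d + 24).

    deliveries: dict mapping month index -> units delivered that month.
    """
    return sum(units for d, units in deliveries.items()
               if 0 <= month - d < WARRANTY_MONTHS)

# hypothetical schedule: 5 units per month from month 0 (roll-out) onwards
deliveries = {m: 5 for m in range(30)}

print(under_warranty(deliveries, 12))   # months 0..12 contribute: 13 x 5 = 65
```

Scanning `under_warranty` over all months reproduces the shape of the "vehicles in warranty" curve of figure 5, with a plateau and then a decline after deliveries stop.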
At t2, our teams of maintenance / warranty technicians will have to attend a high number of vehicles, which will demand a large amount of spare parts. At the same time, the assembly line operators will be requesting pieces for the production of the last vehicles. The shared warehouse will then hold enough pieces for manufacturing but no more, so the loan of any spare part demanded by the after-sales personnel must be decided taking into consideration the importance of the material, the time to repair the disassembled piece, and / or the time to restore it by purchasing. Every piece at the lowest level of the classification tree (see figure 4), the level at which materials can be procured, has a weight (or criticality) which changes with time: each piece becomes more critical the closer the end of manufacturing is. Therefore, and also taking a cost analysis into account, it is necessary to consider investing in a minimum strategic stock in order not to leave warranty claims unattended.
Considering the above data, it is possible to carry out a simple cost analysis and obtain some average values:

Warranty cost per vehicle: CV = C1 / V1 = 1.000.000,00 / 150 = 6.666,67 €
Warranty cost per reclamation: CR = C1 / R1 = 1.000.000,00 / 1.200 = 833,33 €
Reclamations per vehicle: RV = R1 / V1 = 1.200 / 150 = 8 reclamations
Figure 6. Warranty evolution graphic, in terms of warranty costs

With these values, and to make the analysis more illustrative, figure 6 shows the warranty evolution in terms of costs. One can see that the lines in this graphic follow the same behaviour as those of figure 5.
That is because the total incurred warranty cost of every vehicle has been considered in a conservative way: it is treated as already incurred at the moment each vehicle is delivered to the customer. Therefore, the accumulated warranty cost does not increase after the delivery of the last vehicle. Further studies could also consider several destinations for the vehicles, where local maxima can occur at different moments of the defined lifetime and costs must include the movement of warranty teams to different locations. Comparing the above results with the foreseen costs indicated in the EAC, a graphic such as the one shown in figure 7 is obtained. The EAC is formed by a first, already known part (pink line), which refers to the Incurred to Date (ITD), plus a second, foreseen part (blue straight line), which refers to the Estimate to Completion (ETC).
Figure 7. EAC vs. average cost line

This comparison reveals a budgetary buffer of ca. 290.000,00 €, which can be used to invest in a strategic stock of spare parts. This amount corresponds to the warranty budget of around 43 vehicles or, equivalently, to anticipating the warranty assistance by ca. 15 months before the end of manufacturing. Another interesting average value obtained from this exercise is the estimated total number of warranty claims, which should be around 2.800 reclamations. As the main conclusion of this analysis, the procurement of these strategic spare parts should avoid drawing on the stock shared with the assembly line, thereby offering an appropriate service to the client: warranties can be attended independently of the manufacturing department and, consequently, without affecting the final goal of the project.
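The average values and the budgetary buffer quoted in this section can be checked with simple arithmetic. The inputs below come directly from the case study; the paper rounds the final cost to ca. 2.335.000,00 € and the buffer to ca. 290.000,00 €.

```python
# Arithmetic check of the section 4.1 averages and the budgetary buffer.
C1 = 1_000_000.0     # warranty cost incurred at t1 (EUR)
V1 = 150             # vehicles delivered at t1
R1 = 1_200           # reclamations at t1
V_TOTAL = 350        # total fleet size
EAC_W = 2_625_000.0  # budget (EAC) for warranty (EUR)

CV = C1 / V1         # warranty cost per vehicle       -> ca. 6.666,67 EUR
CR = C1 / R1         # warranty cost per reclamation   -> ca. 833,33 EUR
RV = R1 / V1         # reclamations per vehicle        -> 8

cost_at_end = CV * V_TOTAL      # ca. 2.333.333 EUR (paper: ca. 2.335.000)
buffer = EAC_W - cost_at_end    # ca. 291.667 EUR   (paper: ca. 290.000)
vehicles_equiv = buffer / CV    # ca. 43.75 vehicles (paper: around 43)
total_claims = RV * V_TOTAL     # 2.800 reclamations

print(round(CV, 2), round(buffer), round(total_claims))
```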
4.2 Quantitative analysis of the claims

Data on a huge variety of items could be compiled from the customer's complaints. These items are classified according to their functionality and also divided into components that can be procured (see figure 4: classification tree of components).
Figure 8 shows a sample of the gathered data as an example for this case study. This kind of analysis usually helps not only the Quality department but also Manufacturing, in order to pay much more attention to those components that suffer many incidents during the warranty period.
By improving the manufacturing process or taking care during component assembly, it is possible to reduce the complaints regarding a specific item.
Due to the huge number of components in a system as complex as an industrial customized vehicle, a selection of items is suggested in order to make all the gathered information easy to handle. The criteria for selecting a group of items need not be only the number of failures; the cost of the components, their procurement lead time, etc., are also important.
Figure 8. No. of Complaints per Component
Apart from the EAC line, figure 7 also shows the warranty cost line obtained from the cost average at t1 (green line). From this comparison, one sees that the cost at project end (ca. 2.335.000,00 €) is slightly lower than the budget considered at the beginning of the project (2.625.000,00 €).
In general, it is important to know how critical each component is for the company and for the fulfilment of the production line. All these features are conditions to bear in mind when the time comes to take a decision; in other words, they are turned into factors which give a specific weight to each component. This weight finally helps the manager to take the proper decision. Taking this into account, the data included in the graphic can be transformed in terms of relative frequency. The relative frequency refers to the number (ni) of times that an event (i) takes place (in our case, failures), divided by the total number of events (Σni). Using such statistical concepts (together with other factors), it is then possible to weight, as mentioned, the value of each component in order to prioritize between lending the piece to warranty assistance or keeping it available for manufacturing.

Table 3 Relative frequencies
Component        Claims Nº   fi = ni / Σni
Pump             95          0,1827
Engine           70          0,1346
Battery          62          0,1192
Brake            54          0,1038
Valve            46          0,0885
Alarm            38          0,0731
Gear             30          0,0577
Disjunctor       26          0,0500
Regulator        22          0,0423
Lights           18          0,0346
Intercom         14          0,0269
Antenna          12          0,0231
Seats            10          0,0192
Heater            8          0,0154
Navigator         6          0,0115
Horn              4          0,0077
Steering wheel    3          0,0058
Cable             2          0,0038
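The relative frequencies of table 3 can be reproduced directly from the claim counts of figure 8; the following is a minimal sketch of that computation.

```python
# Sketch of table 3: relative claim frequency fi = ni / sum(ni) per component,
# using the claim counts gathered in the case study.

claims = {
    "Pump": 95, "Engine": 70, "Battery": 62, "Brake": 54, "Valve": 46,
    "Alarm": 38, "Gear": 30, "Disjunctor": 26, "Regulator": 22,
    "Lights": 18, "Intercom": 14, "Antenna": 12, "Seats": 10,
    "Heater": 8, "Navigator": 6, "Horn": 4, "Steering wheel": 3, "Cable": 2,
}

total = sum(claims.values())                # 520 claims over the selected items
freq = {c: n / total for c, n in claims.items()}

# print in descending order of relative frequency, as in table 3
for comp, fi in sorted(freq.items(), key=lambda kv: -kv[1]):
    print(f"{comp:15s} {claims[comp]:3d}  {fi:.4f}")
```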
The rest of the components are basically not considered because:
• they have been affected by a very small number of failures,
• they have been delivered fast enough and mostly on time,
• there is extra stock in the warehouse due to the purchase of minimum quantities higher than the real necessity, or
• they are simply not of interest from the project managers' point of view, for other reasons.
Summarizing, with the tasks explained above for obtaining a set of chosen components (those acknowledged as critical), what we are really composing is a list of strategic spare parts. This means that, should the company approve the use of the budgetary buffer for supporting the warranty service, the purchasing process can be launched quickly. All these actions will finally lead the company to positive returns:
• by reducing the probability of paying penalties due to a global delay in the project delivery, and
• by improving the confidence of the client through the fulfilment of contractual terms such as the warranty assistance.
It must be remarked that every failure referred to here was an incident considered under warranty. For further research in this field, the inclusion of incidents not covered by warranty is also proposed. The analysis of such events must take into account the reasons why these situations happen (poor training of the user? poor maintenance information? clients accustomed to another product family with different behaviour?). In any case, even when the failure is not attributed to the manufacturer, the company should be interested in the possible causes.
4.3 Spare parts management for warranty assistances

The diversion of pieces from the assembly line to warranty assistance has a negative effect on the cost of the whole project. The extra costs associated with the spare parts are due to the price difference between acquiring a piece at the beginning of the project for the whole fleet and acquiring a piece punctually, during the lifetime of the project, for a specific incident. Therefore, the accounts management must apply a compensation between the two values so that the increment does not remain in the total manufacturing costs but is incurred in the total warranty costs. Taking this into consideration, the costs of the loans can be calculated properly. Consequently, the percentage increment in the final acquisition price can also be a factor to bear in mind when estimating the weight of each component.
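The weighting of components discussed above can be sketched as a small scoring function. The factor set (relative failure frequency, unit cost, procurement lead time, late-purchase price increment) follows the text, but the normalization, the weighted-sum formula and all numeric values below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of a component criticality weight combining the
# factors named in the text. Formula and weights are assumptions.

def normalize(values):
    """Scale a dict of positive values so the largest becomes 1.0."""
    hi = max(values.values())
    return {k: v / hi for k, v in values.items()}

def criticality(freq, unit_cost, lead_time_weeks, price_increment,
                w=(0.4, 0.2, 0.2, 0.2)):
    """Weighted sum of normalized factors; a higher score = more critical."""
    f, c, t, p = (normalize(x)
                  for x in (freq, unit_cost, lead_time_weeks, price_increment))
    return {k: w[0]*f[k] + w[1]*c[k] + w[2]*t[k] + w[3]*p[k] for k in freq}

# hypothetical data for three components
freq = {"Pump": 0.183, "Engine": 0.135, "Cable": 0.004}
cost = {"Pump": 1200.0, "Engine": 25000.0, "Cable": 15.0}
lead = {"Pump": 8, "Engine": 26, "Cable": 1}     # weeks to procure
incr = {"Pump": 0.15, "Engine": 0.10, "Cable": 0.05}  # late-purchase premium

scores = criticality(freq, cost, lead, incr)
print(max(scores, key=scores.get))
```

A ranked list of such scores would be one way to decide which pieces may be lent to the warranty service and which must stay reserved for the assembly line.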
Figure 9. Typical feedback of analysis from collected reliability and maintenance data (from [7])

To ensure correct warranty attention, the proposed action is basically to acquire a lot of reserves that allows repairs without delays in vehicle manufacturing and, simultaneously, to supply spares to the warranty service from the assembly line in a reasonable way. According to the considerations mentioned, and together with the collected data, the experience of the warranty / maintenance technicians, the knowledge of the engineering department and, of course, the techniques already developed in maintenance (figure 9), it is possible not only to elaborate a spare parts purchase plan for warranty, but also to improve the decision-making business process and to contribute improvement actions for engineering and manufacturing. This purchase plan is an adequate list of essential pieces to assure the proper assistance of a high number of reclamations. At the end of the warranty period, the remaining spare parts can be negotiated with the client for their use in later maintenance tasks. This requires proper control of all these materials so that, at the conclusion of the project, they are available to be supplied to the customer. At the same time, this action represents an opportunity to recover part of the incurred cost in the future. In general, decision-making is the result of a process focused on a final choice among several alternatives. In our case, in order to lead the company to fast and adequate decision-making, every department should know very clearly what it has to do and what the scope of its responsibility is.
For our company case, we have adapted the idea of a warranty management system divided into modules (see figure 1), proposing furthermore certain interactions among different departments inside the company, which share the information, take suitable decisions according to their responsibilities, and coordinate activities towards a common and profitable goal for the whole company. To illustrate such interactions and activities, a workflow (figures 10 and 11) has been drawn following the BPMN (Business Process Modelling Notation) methodology as a graphical representation of this specific business process, making it easily understandable. The departments considered here (including the client) are:
Logistics Department (LD)
Quality Department (QD)
Manufacturing Department (MD)
Purchasing Department (PD)
Management Board (MB)
Engineering Department (ED)
Aftersales Department (AD)
Customer (C)
The process starts when the customer detects a failure in a vehicle and informs the company accordingly. The communications can be addressed to different sections of the company, but the most appropriate way is to channel them through a single communicator, for example the Management Board. Additionally, the Aftersales Department can also detect failures in the course of its maintenance activities.
Figure 10. Workflow of the proposed warranty management process (part 1 of 2)

Once the information reaches the Aftersales Department, it analyzes the provided information. If the incident is considered not subject to repair under warranty (for example, when the cause of the failure was wrong or bad utilization), it informs the Management Board, which finally decides whether, in spite of this, the incident is repaired as warranty. If the incident is discarded as a warranty repair, the Management Board should inform the customer. The customer can, of course, disagree with this consideration; therefore, a list of interventions (those not initially considered warranty) must be negotiated between the parties. If the incident is considered to fall under warranty conditions, the Aftersales Department must carry out a diagnosis, detecting the problem, analyzing its solution, and determining the resources (staff and materials) and the time necessary for the repair. Regarding the material, the warranty technicians must distinguish between repairable and non-repairable / consumable materials.
Figure 11. Workflow of the proposed warranty management process (part 2 of 2)

The necessities are generally communicated to the Management Board, which addresses the actions to the corresponding department (Logistics, Manufacturing and / or Purchasing) in order, finally, to provide the material to the Aftersales Department. This is when the Management Board must take the most important decisions in terms of costs and manufacturing forecast. Once the Aftersales Department has the material (either by a loan from the warehouse, a loan by cannibalization, or acquisition by purchasing), it communicates its action plan to the Management Board (and afterwards to the client).
The damaged material is sent to the company, where the Quality Department (in some cases together with the Engineering Department) analyzes the failure. If the repair was performed by replacement and the material is identified as repairable, the Quality Department manages the repair, taking into account the appropriate certification. The material, once repaired and certified, is stored again in the warehouse for use in the assembly line. In this process, all data about the incident, damaged material, repair, etc., gathered by the Aftersales, Quality and Engineering Departments, are introduced into a database which is followed up and reviewed by the Quality Department. Once the incident is solved, the Aftersales Department communicates the closure of the assistance to the Management Board, which transmits it to the client. It is important to receive from the customer a document with the approval of the performed tasks and the acceptance of the service closure. The database associated with these incidents and necessary for their follow-up should include not only the incidents considered under warranty, but also data about the preventive and corrective maintenance performed on every vehicle, in order to enable the analysis of, for example, repetitive or systematic failures, among other studies.
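The YES/NO gateways of the workflow in figures 10 and 11 can be condensed, very schematically, into a decision function. Function and parameter names below are assumptions made for illustration; they only summarize the material-sourcing branch of the process.

```python
# Minimal sketch of the decision gateways of figures 10-11 for obtaining
# the material needed for a warranty repair. Names are hypothetical.

def handle_claim(under_warranty, mb_accepts_anyway,
                 spares_in_warehouse, cannibalization_possible,
                 affects_delivery_schedule, mb_lends_piece):
    """Return the chosen material source, or 'rejected' if the claim
    is not attended as warranty."""
    if not under_warranty and not mb_accepts_anyway:
        return "rejected"            # MB informs the client; list to negotiate
    if spares_in_warehouse:
        # If the loan affects the delivery schedule, MB decides, as a
        # last resort, whether the piece is lent.
        if not affects_delivery_schedule or mb_lends_piece:
            return "warehouse loan"
    if cannibalization_possible:
        return "cannibalization"
    return "purchase"                # PD manages it; charges impact on AD

print(handle_claim(True, False, True, False, True, False))
```

In a BPMN tool, each of these branches would be an exclusive gateway; the sketch only shows how few boolean conditions drive the sourcing decision.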
5 CONCLUSIONS
With the help of a case study, this paper summarizes a business process inside a specific framework: warranty management. The analysis shows how information related to warranty and maintenance, gathered during the lifetime of the project, can be used profitably to take decisions, reducing unnecessary expenses and improving the quality, the service and, essentially, the image of the company before the client. The data compilation makes it possible to weight the parameters needed to choose properly among alternatives; such decisions are expressed in the workflow as gateways. Nowadays, computing tools can be helpful not only to make choices automatically, but also to model and simulate business processes in order to detect, for example, their weak points. For our particular case, future studies can consider other conditions, for example:
Final products with different reliability.
Local maximums at different times and places.
Warranty cost depending on time.
When the budgetary buffer is negative.
Diverse degrees of inoperability.
Etc.
In general, e-technologies are nowadays being applied in many different fields. In our case, further research can focus on e-warranty in the same way as e-maintenance. The concept of e-warranty is then defined as the warranty support which includes the resources, services and management needed to enable proactive decisions in the process execution. E-technologies such as e-monitoring or e-diagnosis will consequently be key factors in reaching high levels of quality, reliability, effectiveness and, of course, confidence before the client.
6 REFERENCES
1. V. González Díaz, J.F. Gómez, M. López, A. Crespo, P. Moreu de León (2009) Warranty cost models state-of-art: a practical review to the framework of warranty cost management. ESREL 2009, Prague.
2. K. Lyons, D.N.P. Murthy (2001) Warranty and manufacturing, integrated optimal modelling. Production Planning, Inventory, Quality and Maintenance, Kluwer Academic Publishers, New York, pp. 287-322.
3. Boyan Dimitrov, Stefanka Chukova and Zohel Khalil (2004) Warranty costs: an age-dependent failure/repair model. Wiley InterScience, Wiley Periodicals, Inc.
4. J. Gómez, A. Crespo, P. Moreu, C. Parra, V. González Díaz (2009) Outsourcing maintenance in services providers. Taylor & Francis Group, London, pp. 829-837. ISBN 978-0-415-48513-5.
5. Stefanka Chukova and Yu Hayakawa (2004) Warranty cost analysis: non-renewing warranty with repair time. Applied Stochastic Models in Business and Industry 20, John Wiley & Sons, Ltd, pp. 59-71.
6. D. Christensen (1993) Determining an accurate Estimate At Completion. National Contract Management Journal 25, pp. 17-25.
7. ISO/DIS 14224, ISO TC 67/SC /WG 4 (2004) Petroleum, petrochemical and natural gas industries - Collection and exchange of reliability and maintenance data for equipment. Standards Norway.
Acknowledgments The author would like to thank the reviewers of the paper for their contribution to the quality of this work.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
REMOTE SERVICE CONCEPTS FOR INTELLIGENT TOOL-MACHINE SYSTEMS

Guenther Schuh and Kevin Podratz

Research Institute for Operations Management at RWTH Aachen University (FIR), Pontdriesch 14/16, 52062 Aachen, Germany

In the near future, tooling companies will offer their customers not just maintenance services, but complex remote service packages for their engineering asset management, which is the total management of physical, not financial, assets [1]. The overall goal is to enhance the efficiency of the engineering asset, e.g. to reduce TCO, on the customers' sites by means of value-creating partnerships [14]. These partnerships may be, e.g., the classical output or reliability partnership, but also process-optimizing partnerships or lifecycle partnerships [4, 8]. The process-optimizing partnership offers, e.g., the optimization of the system's performance or the output quality, an optimized ramp-up and restart procedure, or optimization of the production process parameters. The lifecycle partnership, on the other hand, accompanies the intelligent tool-machine system throughout the whole lifecycle, which includes, e.g., the provision of spare parts during the entire usage phase, storing, refurbishment, recycling and even support for the relocation of production facilities. Intelligent remote services have great potential for realizing all these partnerships. To realize such engineering asset-related partnerships, two major tasks have to be accomplished. First, there has to be an intelligent tool-machine system which delivers the information required for these services; furthermore, this information has to be integrated into the maintenance processes, so that it is delivered at the right place and time and in the required form. Second, the activities and processes that are combined into the engineering asset-related partnerships have to be configured out of standardized service and process modules; therefore, a configuration logic is essential.
This paper is based on the results of a research project in which an intelligent tool-machine system is developed and forms the foundation for the development of such asset-related partnerships. The paper describes the intelligent asset and the different types of partnerships, and presents the logic to efficiently configure the required maintenance and business processes.

Key Words: service engineering, service systems, configuration logic, configuration system, remote service, asset-related partnerships, tooling industry

1 CHALLENGES FOR TODAY'S TOOLING COMPANIES
The situation on the European tooling market is still satisfactory, and even the present financial crisis has so far had no serious effects on the respective companies. Nevertheless, the competition from upcoming tooling companies in Eastern Europe and Asia puts pressure on established companies, which therefore have to develop and defend their competitive advantage. Because these emergent tool constructors have a much lower cost structure, the situation becomes more and more difficult for their competitors. A statistical analysis comparing the situation of Chinese and German companies, for example, shows that tool manufacturing in China has about 91% lower personnel costs than in Germany (4.801 € compared to 58.407 € in Germany). Thus established tooling companies have to find new solutions and strategies to stay competitive on the global market [9, 12]. According to Porter, one opportunity for securing one's competitive position on the international market is to use differentiation strategies [2]. They are enabled by product or service developments which offer unique attributes to the customer. The quality of manufactured products is often easily copied, and cheap tool plagiarisms regularly appear on the market. Services, in contrast, are not so easily imitated. To make use of this advantage, companies should stop being mere 'producers' and become 'solution providers' [3, 11] that aim to solve their customers' problems by offering complex service systems [5, 6, 7]. Thus, integrating products, parts, after-sales services and value-added services into solution systems allows a successful differentiation from the worldwide competition [10].
2 INTELLIGENT TOOL-MACHINE SYSTEMS AS ENABLER FOR REMOTE SERVICE-BASED PARTNERSHIPS
Technological innovations like RFID support, transponder and sensor technologies, plus high-capacity communication technologies, are absolutely necessary to build and install so-called 'intelligent tool-machine systems' as enablers for remote services. By means of remote services, which can be characterized as advancements of tele-services, individual service systems leading to lasting customer-provider relationships [13] and the desired resulting partnerships can be effectively realised. They offer the possibility of providing technical services not locally at the machine, but from a distant location using communication networks. This is a way to create and establish entirely new services and, additionally, to make them much more efficient. Intelligent tool-machine systems (figure 1) are equipped with sensors which measure, e.g., pressure or temperature. Depending on the machine itself and what it is used for, various other devices (e.g. counting units) can be installed to gain more valuable information. Using a transponder and an RFID unit, specific software, such as an electronic tool log, can be provided with this information. An electronic tool log, stored on a transponder and connected to the tool, administrates and processes the information from the intelligent tool-machine system. To detect failures and errors early enough to avoid long downtime, the electronic tool log is also able to determine critical situations that may lead to failures and to give warnings. If a repair cannot be averted, it still helps by narrowing the field of possible causes. To be capable of this function, the system has to be supplied with 'failure patterns', meaning known combinations of data and the failures resulting from them.
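The 'failure pattern' idea can be sketched as a rule table matched against current sensor readings. Pattern names, sensor fields and thresholds below are purely hypothetical; only the matching mechanism is illustrated.

```python
# Sketch of failure-pattern matching in an electronic tool log.
# Patterns map a name to a predicate over one sensor reading (a dict).
# All names and thresholds here are hypothetical examples.

FAILURE_PATTERNS = {
    "cooling blockage": lambda r: r["temperature"] > 85 and r["pressure"] > 140,
    "seal wear":        lambda r: r["pressure"] < 60 and r["cycles"] > 100_000,
}

def check(reading):
    """Return the names of all failure patterns matched by a reading."""
    return [name for name, match in FAILURE_PATTERNS.items() if match(reading)]

reading = {"temperature": 92.0, "pressure": 150.0, "cycles": 40_000}
print(check(reading))   # a warning would be raised for the matched pattern
```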
Figure 1. The intelligent tool-machine system (machine with antenna, charge amplifier, transponder, RFID read/write unit and counting unit)
By means of the intelligent tool-machine system and the connected electronic tool log, services like tool-data management, documentation, a service manual as well as engineering drawings and cooling plans can be provided directly where they are required. The processed data from the tool sensor technology supports the process of tool editing and path relocation; thereby, revision and inventory data can be managed automatically. The process data is then used to improve the initial sample inspection processes, to realize fast ramp-up times and to reduce set-up times. Additionally, personnel training can be provided or supported by means of remote connection systems. On the basis of the intelligent tool-machine system, it is also possible to provide maintenance-related remote services like condition monitoring, remote diagnosis and remote repair, as well as process supervision and optimization (figure 2). Besides this, planning, scheduling and dispatching can be supported by automated processes.
Figure 2. Potential remote services for solution systems based on the intelligent tool-machine system [12]
By combining these new services, the following service systems, in the form of partnerships, may be offered in order to support the customer in managing its engineering assets more effectively [4, 8]:
• Process optimization partnership: optimization of the tool-machine systems in the installation and start-up phase and optimization of current processes,
• Lifecycle partnership: assumption of crucial tasks in the tool's life cycle, like continuous maintenance service, guaranteed spare part provision or end-of-use services like storage, refurbishment or even recycling,
• Condition-monitoring partnership: tele-service-based maintenance as well as support of preventive and reactive maintenance activities,
• Availability backup partnership: support concerning machinery breakdown and effective production through knowledge of process parameters,
• Coordination partnership: authorization of the customer to monitor its supplier, and
• Output-guarantee partnership: acceptance of responsibility for a defined output quantity and process quality towards the customer.
With the new technology of an intelligent tool-machine system, remote services and the respective partnerships between producer and customer, it is possible for the producer to establish a differentiation strategy that makes him a service provider as well. Furthermore, the customer can profit from all the advantages that come with remote services.

3 REMOTE SERVICES INTEGRATION
To successfully offer such remote service systems, in addition to the enabling intelligent tool-machine systems and the development of the services themselves, it is necessary to match the company's technological standards and to integrate the service systems into the company's organization. Thus the organizational and technological integration described in figure 3 has to be achieved.
Figure 3. Aspects of realization and integration of service systems [8]
Within the scope of integrating the respective service systems into the company's technological environment, the interfaces to the existing system landscape have to be defined, and it has to be specified which information will be transmitted to and from the relevant system. Thus the following tasks have to be fulfilled:
• Selection and adaptation or development of suitable sensor and transponder technologies (if necessary), taking into account the special requirements of the application surroundings, e.g. for injection moulds (pollution, high pressure, temperature, etc.),
• Identification of the necessary data to be measured, such as temperature and pressure, and of ways to achieve acceptable values,
• Establishment of a connection between the tool's transponder and the machine's control system, as well as integration of these elements into the complete system, and
• Integration of technologies to edit and transmit data for the realization of the remote interface.
The organizational integration deals with the following aspects:
• The presentation to the customer of configured service systems, for example in terms of partnerships, and of the opportunity to customize packages. Besides the presentation, it is necessary to integrate the sales department and to consider the existing product and service portfolio as well as its structure.
• The relevant resources have to be managed. This means the resources have to be described, and regularly updated, by means of their essential characteristics, and it has to be defined in a general way for which services the resources are required.
• The integration of processes for the service delivery into the existing business process structure, and of the relevant information for the asset management, has to be considered.
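The data flow implied by these technological tasks (sensor readings from the intelligent tool, checked against acceptable values, then edited into a payload for the remote interface) can be sketched as follows. All names, the example ranges and the payload format are illustrative assumptions, not part of the actual system described in the paper.

```python
from dataclasses import dataclass

# Hypothetical acceptable ranges for the quantities named in the text
# (temperature, pressure); real limits depend on the injection mould at hand.
ACCEPTABLE_RANGES = {"temperature_C": (20.0, 80.0), "pressure_bar": (0.0, 1500.0)}

@dataclass
class SensorReading:
    tool_id: str   # transponder identity of the intelligent tool
    quantity: str  # e.g. "temperature_C"
    value: float

def out_of_range(reading: SensorReading) -> bool:
    """True if the reading violates its acceptable range and should be flagged."""
    lo, hi = ACCEPTABLE_RANGES[reading.quantity]
    return not (lo <= reading.value <= hi)

def to_remote_payload(reading: SensorReading) -> dict:
    """Edit the reading into the form transmitted over the remote interface."""
    return {
        "tool": reading.tool_id,
        "quantity": reading.quantity,
        "value": reading.value,
        "alarm": out_of_range(reading),
    }
```

A reading outside its range would thus arrive at the remote side already flagged, which is the precondition for the tele-service based maintenance described above.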
CONFIGURATION OF REMOTE SERVICE SYSTEMS
4.1 DEVELOPMENT OF CONFIGURATION LOGIC
For an efficient realization of the organizational integration, a configuration logic (Figure 6) has to be developed according to the process displayed in Figure 4. For this purpose, first of all, the desired offerings, i.e. the service systems and partnerships, have to be identified and described.
(1) Identification and description of desired service systems; (2) definition of service modules; (3) definition of combination rules; (4) definition of processes for delivery of the service modules; (5) definition of the required resources for the processes; (6) definition of the required information in the processes; followed by a revision of the rules.
Figure 4. Development process of configuration logic
Next, these service systems or partnership services should be modularized into standardized service modules (Figure 5), which can be divided into:
• Basic services: These are central services without which the service system would not be possible and without which no other additional services could be offered. These basic services can be specified e.g. by accounting types or contractual agreements, which also modify the respective processes for the service module.
• Additional services: These services are extensions of the basic services which one cannot (or does not want to) offer independently. In combination with a basic service, additional services offer an extra benefit for the customer.
Figure 5. Service modules
After that, all possible combinations for customized packages (service systems) have to be marked out. It has to be determined which service modules have to be linked to achieve a desired function and which service modules can or must not be connected. These rules define which combinations of modules are:
• Technically impossible, because the required processes or resources do not fit together or do not exist in the required version for this particular combination,
• Economically unreasonable, or
• Counterproductive, meaning that the required processes or resources would interfere with the functional capability of others.
This distinction between the service modules within the single classes (basic services and additional services) also has to be made between the classes themselves. This means that combination rules have to be established for the basic services as well as for the additional services. These rules help the user during the actual configuration to create a valid service system by giving guidelines and setting boundaries to the configuration possibilities. However, the service modules alone do not suffice to realize the service systems; in effect, they are only variables, or customer-related terms, for the processes realizing them. One also requires resources and information (concerning machine control, the intelligent tool and management systems) as well as an integration into the existing business processes. For this purpose, the processes should be described and assigned to the respective service modules. The next step is the assignment of the required resources, as well as the relevant information of the machine control, tool-machine system, IT system and management systems, to those processes. This information has to be integrated into these processes to ensure its delivery at the right place and time and in the required form.
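A minimal sketch of such combination rules is given below. The module names, the requirement and exclusion sets, and the rule categories are invented for illustration; a real configurator would load them from the configuration logic developed in Figure 4.

```python
# Illustrative service modules, split into the two classes described above.
BASIC = {"condition_monitoring", "lifecycle_support"}
ADDITIONAL = {"spare_part_guarantee", "availability_backup", "output_guarantee"}

# Additional services cannot stand alone: each requires at least one basic service.
REQUIRES = {"spare_part_guarantee": {"lifecycle_support"},
            "output_guarantee": {"condition_monitoring"}}

# Pairs that are technically impossible, uneconomical or counterproductive.
EXCLUDES = {frozenset({"availability_backup", "output_guarantee"})}

def validate(selection: set[str]) -> list[str]:
    """Return a list of rule violations; an empty list means a valid service system."""
    errors = []
    if not selection & BASIC:
        errors.append("a service system needs at least one basic service")
    for module, needed in REQUIRES.items():
        if module in selection and not needed & selection:
            errors.append(f"{module} requires one of {sorted(needed)}")
    for pair in EXCLUDES:
        if pair <= selection:
            errors.append(f"excluded combination: {sorted(pair)}")
    return errors
```

For example, `validate({"condition_monitoring", "output_guarantee"})` returns no violations, whereas selecting `{"spare_part_guarantee"}` alone is rejected both for lacking a basic service and for its unmet requirement, which is exactly the guidance-and-boundaries role the rules play during configuration.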
Figure 6. The configuration concept [8]
4.2 THE CONFIGURATION PROCESS
Now the actual configuration, the allocation of the service modules and the developed service systems, can be done. For this, it has to be determined which modules are part of the respective service system. The assignment of resources to service modules and the definition of the combination rules have already been done in the development phase. But before starting the first actual configuration, the company's resources have to be inspected and entered into the configuration system in a once-only administrative process. It is necessary to find out which resources (tools, means of transportation, employees and their qualifications) are available in the company, because they are the foundation for the service delivery processes and therefore determine which service modules, and consequently which service systems, can be offered at all (Figure 6). In this inspection, only the general availability of resources with their essential attributes is listed. The reason for dealing only with the general resource availability at first is that the resources are the basis for realizing the service systems: a company can only offer those service systems and service modules which can be delivered with its given resources. A stock check of all available resources at the very beginning makes sure that no service systems or customized packages will be configured that cannot be realized afterwards. Then the actual configuration phase starts with the configuration, or combination, of the service modules, guided by the predefined rules. After that, and the optional further description of the service system, the manual configuration is already finished. The further configuration of the processes and the integration of the information were already structured in the configuration logic.
The actual gathering and processing of the relevant information, as well as the actual execution, is also planned, coordinated and partly even performed by the configuration system itself, in collaboration with the further associated software systems. So the manual configuration actually consists of one single step.
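The once-only stock check described above can be sketched as a simple filter: a module is offerable only if every resource it needs is covered by the inspected inventory. The resource and module names are illustrative assumptions, not taken from the project.

```python
# Result of the once-only inspection of the company's resources (illustrative).
AVAILABLE_RESOURCES = {"service_technician", "tele_service_platform", "van"}

# Resource requirements per service module (illustrative).
MODULE_RESOURCES = {
    "condition_monitoring": {"tele_service_platform"},
    "on_site_maintenance": {"service_technician", "van"},
    "tool_refurbishment": {"service_technician", "grinding_machine"},
}

def offerable_modules(available: set[str]) -> set[str]:
    """Modules whose entire resource requirement is covered by the stock check."""
    return {m for m, needed in MODULE_RESOURCES.items() if needed <= available}
```

Here `tool_refurbishment` would be excluded from the configuration for lack of a grinding machine, ensuring that no service system is configured that cannot be realized afterwards.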
Once-only manual preparation: inspection and introduction of the company's resources. One-step manual configuration: configuration of the service systems. Automatic configuration and execution: configuration and execution of the order fulfillment process.
Figure 7. The one-step configuration
By means of the present configuration logic, the process of configuring (Figure 7) is simplified and the number of possible error sources is reduced. Therefore the configuration of service modules can be done by less specialized persons, such as a sales employee, and even by the customer himself. With the help of the configuration logic and specific case-oriented rules, service modules can be assembled easily and efficiently. Thus the realization of the new service systems, and the integration of information into them, can be assured and the project target can be accomplished.
CONCLUSION
In times of the present financial crisis and rising competition in the tool-construction market, traditional tool producers are forced to restore their competitive advantage. To do so, new solutions and production strategies have to be developed. Thus, expanding the already existing offers with engineering services becomes more and more important. On the one hand, entirely new services can be established, enlarging the company's portfolio. On the other hand, already existing services can be improved and made much more efficient. Of course, these kinds of services should be customized and therefore have to be realized in a configuration system that allows the customer to compile exactly the needed service level. Such engineering asset-related partnerships can be facilitated by achieving organizational and technical integration into the customer's company. Via the configuration logic, information is attributed to processes, and these processes and resources are attributed to service modules. The compilation is simplified in such a way that the customer can realize a complete service system by combining only the service modules.
REFERENCES
1. Amadi-Echendu, J. et al. (2007) What is engineering asset management? In: Gelman, L.; Mathew, J.; Kennedy, J.; Lee, J.; Ni, J. (Editors): Proceedings of the 2nd World Congress on Engineering Asset Management (EAM) and the 4th International Conference on Condition Monitoring. Springer, London, p. 116–129.
2. Belz, C.; Bircher, B.; Büsser, M.; Hillen, H.; Schlegel, H. J.; Willée, C. (1991) Erfolgreiche Leistungssysteme – Anleitungen und Beispiele. Schäffer-Poeschel, Stuttgart.
3. DIHK (Editors) (2002) Industrie- und Dienstleistungsstandort Deutschland: Ergebnisse einer Unternehmensbefragung durch die IHK-Organisation. DIHK, Berlin.
4. Hofmann, G. (2004) Intelligente Spritzgießwerkzeuge erschließen Servicepotenziale im Werkzeugbau. In: Wachstumspotenziale – Integration von Sachgütern und Dienstleistungen. Konferenzband, Esslingen.
5. Ittner, T.; Wüllenweber, J. Tough times for toolmakers. In: The McKinsey Quarterly, Nr. 2.
6. Kersten, W.; Zink, T.; Kern, E. M. (2006) Wertschöpfungsnetzwerke zur Entwicklung und Produktion hybrider Produkte: Ansatzpunkte und Forschungsbedarf. In: Blecker, T.; Gemünden, H. (Editors): Wertschöpfungsnetzwerke. Schmidt, Berlin.
7. Kuster, J. (2004) Systembündelung technischer Dienstleistungen. Shaker, Aachen.
8. Podratz, K. (2009) Ein Ass im Ärmel: Effizientes Handling von Remote-Service-basierten Leistungssystemen im Werkzeugbau. In: UdZ – Unternehmen der Zukunft Nr. 2, p. 53–59.
9. Porter, M. E. (2008) Wettbewerbsstrategie. Campus, Frankfurt.
10. Ramaswami, R. (1996) Design and Management of Service Processes – Keeping Customers for Life. Addison-Wesley, Old Tappan, NJ, USA.
11. Schuh, G.; Friedli, T.; Gebauer, H. (2004) Fit for Service: Industrie als Dienstleister. Hanser, München.
12. Schuh, G. et al. (2008) Technologiebasierte Geschäftsmodelle für Produkt-Service-Systeme im Werkzeugbau. In: Seeliger, A.; Burgwinkel, P. (Editors): Tagungsband zum 7. Aachener Kolloquium für Instandhaltung, Diagnose und Anlagenüberwachung. Verlag Zillekens, Aachen, p. 325–335.
13. Schuh, G.; Gudergan, G. (2009) Service Engineering as an Approach to Designing Industrial Product Service Systems. In: Roy, R.; Shehab, E. (Editors): Industrial Product-Service Systems (IPS2) – Proceedings of the 1st CIRP IPS2 Conference. Cranfield University Press, Cranfield, UK, p. 1–7.
14. Ulepic, S. (2009) Value-Added-Partnership-Model. In: Beschaffung aktuell Nr. 1, p. 50–51.
Acknowledgments
The data for this paper are a result of a research project called 'TecPro'. TecPro was established to work on the research topic "service systems for technology- and production-based services of the tool and mold production". The project is funded by the German Federal Ministry of Education and Research (BMBF) under the project reference 02PG1095, within the research and development program "Forschung für die Produktion von morgen". It is supervised by the department of production and manufacturing technologies of the project management agency Forschungszentrum Karlsruhe (PTKA-PFT). It started in September 2006 and will be finished in February 2010. The project's aims were:
• the development and technical realization of an "intelligent tool",
• the development of business concepts and processes for the intelligent tool, and
• the development of a model for the configuration of service systems and their integration into the business processes.
Figure 8. The TecPro project consortium
Within the project's framework, an intelligent tool system has been developed. It is the foundation for the development of asset-related partnerships. To successfully implement such partnerships, the data of the tool sensors and the machine control have to be interpreted and embedded in the customer's business processes. Consequently, the companies can offer service systems and thereby secure their strategic position on the global market. We would like to thank all project partners (Figure 8) for their cooperation.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
THE STATE OF ASSET MANAGEMENT IN THE NETHERLANDS Y.C. Wijnia a,b, P.M. Herder a,c
a Delft University of Technology, Faculty of Technology, Policy and Management, Jaffalaan 5, 2628BS Delft, The Netherlands
b D-Cision bv, PO box 44, 8000 AA Zwolle, The Netherlands
c Next Generation Infrastructure Foundation, Delft, The Netherlands
The world of infrastructures for energy, transportation, water and communication is constantly changing. Not only is their use ever increasing; recent years have also demonstrated a move from the public domain to a more privatised setting. On the one side, customers become more critical and demand better quality and better service, whereas on the other side a need arises to limit the (public) costs of infrastructures. To cope with those conflicting demands, many infrastructure organisations have introduced asset management, an integrated approach to balancing costs, performance and risks. However, in discussions at the Asset Management Platform (a group of over 30 asset managers from all infrastructures in the Netherlands, established in 2007 by the Next Generation Infrastructures foundation) it appeared that many infrastructure asset managers had great difficulty in getting beyond the first promising step. As a first step in the formulation of a new research programme, an interview round was conducted with a number of infrastructure asset managers to get a better feel for what the precise barriers and challenges were. In this paper the results of the interviews are presented. Key findings were that asset management has acquired a strong foothold, but that it was a bottom-up process which did not reach the strategic level. There were still difficulties in convincing top management of the strategic value of asset management and in aligning organisational goals with technical and operational standards. Furthermore, it was widely recognized that asset management required a change in maintenance paradigms and that asset management should focus on life cycle costing. Nevertheless, asset management was generally considered to be an engineering discipline, even though considerable effort was spent on creating support for the initiative.
Some asset managers explicitly recognized the social and cultural issues and acted on this by training their asset managers in social skills like empathy and persuasiveness; however, this was not common. Neither was the extensive use of asset performance models for decision support, as most asset managers preferred to rely on historical data. At the same time, however, many asset managers reported problems in accessing those very historical data. Based on these results, a research programme was defined in which academics as well as practitioners will participate. The outlines of this programme are introduced in this paper.
Key Words: infrastructures, asset management practice, asset governance, research agenda
INTRODUCTION
Infrastructures have always been a vital part of society. In the past, most cities developed at a crossroads of different modes of transportation, as this was a great opportunity for trade. The first infrastructure planners were the Romans, with their extensive road network and water distribution system. Over the years, the concept of infrastructure has expanded from road and water to incorporate gas (1812), rail (1829), telecommunication (1843), electricity (1879) and airlines (1914)1. It appears society has become more and more dependent on the delivery of goods and services from and to far away, and thus on the infrastructure that supports that delivery. The use of energy in our personal lives is ever increasing, and people have got used to the idea that someone at the other end of the world is not far away. Most infrastructures have seen a continuous development of technology since their establishment. The 8-lane highways of today cannot be compared to the single-track carriageways of the Romans, and the internet is miles away from the first telegraph.
1 Years according to Wikipedia.
However, in the institutional setting development seems less dramatic. Virtually all infrastructures started as a private, commercial enterprise. As government recognized the importance of well functioning infrastructures for society, many of those infrastructures were put under government control, either by institutionalizing them (the infrastructure provider becomes a government agency), acquiring the shares of the private companies or by strict regulation. Recent years have demonstrated a reversal of this movement, as many sectors have been liberalized and deregulated and many infrastructures are (re)privatized. This happened in the Netherlands for instance with respect to rail (1993), electricity (1999), and gas (1998). The general idea behind all those initiatives is that markets would provide better services at lower costs than governmental bodies could. Coinciding with this focus on commercial quality, customers became aware of the non-commercial qualities of the infrastructure. It has become more difficult and more expensive to plan new infrastructures (at least, the visible ones) as people claim a right to the view they always had, and that the landscape should not be disturbed by the new infrastructure. Incidents that happen (for example substantial power outages) are treated as news items, and the availability of information on those incidents is immense2. Infrastructures might therefore appear more risky, and new regulations are put in place to safeguard the public against the perceived “greediness” of the commercial infrastructure operators3. Infrastructures are under pressure, as one could say. This is shown in the figure below.
Figure 1: The pressures on infrastructure systems. Increasing performance requirements, a limited budget, less public acceptance and higher legal requirements all act on the infrastructure system.
It is up to the infrastructure operators to resolve this issue. In this light, it is no surprise that asset management has gained the attention of many infrastructure operators. Asset management, after all, is the profession of balancing cost, performance and risk over the lifecycle of an asset. For example, the PAS 55 process standard defines asset management as: "Systematic and coordinated activities and practices through which an organization optimally manages its physical assets, and their associated performances, risks and expenditures over their lifecycle for the purpose of achieving its organisational strategic plan." However, when getting into the details of the infrastructure system, it becomes clear that the total operation of an infrastructure not only depends on the physical assets, but also on other elements, like information systems, data, standards and procedures, employees, capabilities and culture4. These elements are only to a certain extent independent of each other. Over time the elements might influence each other. It is therefore probably better to speak of elements that are loosely coupled, instead of independent. This is shown in the diagram below, in which the black-box infrastructure system of Figure 1 is replaced by a representation of the constituting elements as masses, connected by springs. The metaphor of a mass-spring system is explicitly chosen, as it provokes the image of objects reverberating and exerting forces on other elements. The metaphor also demonstrates another characteristic many asset managers are familiar with: you can increase the strain on the system to achieve a higher (financial) performance, but it will either come back at you in the future (for example, postponing maintenance and having to replace the asset in a few years because of irreparable damage) or have a (delayed) impact on the performance.
Short term financial gains can be achieved, but it is much more difficult to sustain them.
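The trade-off just described (deferring maintenance saves money now but forces an early replacement later) can be made concrete with a toy life cycle cost comparison. All figures, the 5% discount rate and the two strategies are invented for illustration only.

```python
def npv(cash_flows, rate=0.05):
    """Net present value of a list of (year, cost) pairs at the given discount rate."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows)

# Strategy A: 10 k EUR of maintenance every year for 10 years.
maintain = [(year, 10_000) for year in range(1, 11)]

# Strategy B: no maintenance, but irreparable damage forces a
# 120 k EUR replacement in year 4.
defer = [(4, 120_000)]

cost_maintain = npv(maintain)  # roughly 77 k EUR
cost_defer = npv(defer)        # roughly 99 k EUR
# Deferring looks free in years 1 to 3 but is more expensive over the lifecycle.
```

The point of the sketch is exactly the one made in the text: the deferred strategy outperforms on short-term cash flow while losing on lifecycle cost.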
2 Supported by the telecommunications infrastructure. In the recent Schiphol plane crash (25 Feb 2009, 10:40 am), reports were on the internet no later than 10:42 am (Twitter).
3 In this debate, people seem to miss that the people working in those operators are to a large extent the same as when it was a state-owned enterprise, preserving the culture and values of the old days.
4 This is recognized by PAS 55, but not in the definition.
Figure 2: The mass-spring metaphor of asset management. The bioware (personnel, capability, culture), software (standards, procedures, practices) and hardware (assets, systems, data) elements are coupled masses, subject to the same four external pressures as in Figure 1.
In the Netherlands, many infrastructure managers have embraced asset management. As the Netherlands is a small country, most asset managers regularly bump into each other, and several initiatives to share knowledge on the profession were deployed. In those initial contacts it became clear that knowledge sharing alone was not good enough; some new knowledge had to be developed as well. None of the participants was really certain where the next steps of asset management could lead them. The Next Generation Infrastructure Foundation (NGInfra) took up the challenge of knowledge development and established the Asset Management Platform in spring 2007. In the startup meeting of this platform, corporate (high-level) infrastructure managers were very well represented, as over 90% of the invited people attended. Represented infrastructures included road, rail, electricity, gas, water, sewage, telecom, and airports. However, in the first meeting it became apparent that asset management had very different connotations in the diverse infrastructures. As this was a serious barrier to developing a shared research agenda, the first step of the platform was to establish the state of asset management in the Netherlands. In this paper, the questions, procedures and key findings of this research are presented. Based on those results, a research agenda and programme was formulated and kicked off. The outlines of this programme are introduced in this paper.
THE SETUP OF THE RESEARCH
The research aimed at arriving at a representative list of issues in asset management that would cross-cut all infrastructure sectors. Therefore asset managers in 7 different infrastructure and related sectors were interviewed. These were:
1. Gas and oil
2. Electricity
3. Rail and road
4. Drinking water
5. Waterways and water protection
6. Telecom
7. Asset service providers
The focus of NGInfra lies with the long-lived infrastructures of sectors 1 to 5. To establish whether the technology was a determining factor, the 6th sector was added: in telecom, technology and demand develop much faster than in the others, resulting in short-lived assets. As outsourcing of services is a hot topic in the field of asset management, the 7th sector was added to provide insight into the specific issues related to outsourcing, from both sides. The core objective of the research was establishing the key issues in asset management for which NGInfra could develop knowledge development and dissemination programs. To achieve this, 17 interviews were conducted, which provided a wealth of insights. It was not the purpose of the research to judge asset management in the Netherlands, nor to provide a benchmark. The interviews were conducted by Joost Jongerius, Tom Meijerink and Jan van Miltenburg of Evers and Manders
consult, commissioned by the Next Generation Infrastructures Foundation5. To provide as much room as possible for the interviewees to talk about the issues close to their hearts, open interviews were chosen instead of standardized interviews with structured questions. To facilitate comparing the different interviews, a checklist of topics to be addressed in the interviews was prepared. This checklist is presented in Table 1.
Table 1. Overview of interview protocol.
Main topic: The concept
• How do you define asset management?
• Which assets?
• Which performance criteria?
• How do you determine the criteria?
Main topic: Organisation
• Development of asset management within the organisation
• Asset owner or service provider
• Position of asset management within the organisation
• Reporting line to board of directors
• Number of disciplines/employees involved
• Balancing and prioritizing between disciplines
• Who determines budgets for investment and maintenance?
Main topic: Implementation
• Are there asset registers?
• How is the condition of assets established and monitored?
• Are the data sufficient for analysis?
• Are there methods for risk analysis used?
• In which phases of the lifecycle (design, operation, decommissioning) are those employed?
• Are there maintenance and investment plans?
Main topic: Knowledge acquisition
• How is knowledge acquired and managed?
• Most important knowledge gaps?
• Expected benefit of extra knowledge?
• Use of benchmarks?
Main topic: Room for improvement (What would be a real breakthrough?)
• Organization
• Culture
• Tools/knowledge
As mentioned earlier, the objective was establishing the key issues in asset management across all sectors. However, this does not mean that only the average knowledge gap will be presented. The purpose was both to illuminate where participants of the platform can learn from each other, possibly facilitated by a knowledge dissemination program of NGInfra, and to establish where (university-based) research was needed to advance the profession. To achieve this, the differences between and within sectors provide a much richer insight than the average value.
DIRECT RESULTS
3.1 The concept of Asset Management
3.1.1 Asset management definition
All interviewees agreed that the concept of asset management was about the efficient delivery of the desired performance. But this is an empty statement, as it can apply to many forms of management. It only becomes meaningful if the considerations which fall within the realm of asset management are further specified. With respect to this, three streams could be recognized:
5 Jongerius, J.; Meijerink, T.; Miltenburg, J. van, Added-Value Performance: Infrastructure asset management in the Netherlands, a study in seven sectors. Evers & Manders Consult BV, June 2007.
1. Asset management as the professionalization of maintenance and operation. In many organisations (of asset owners), asset management was limited to the operational part of the asset life cycle. In those organizations, the production function (in cooperation with the maintenance function) was responsible for running the assets. The key objective was availability, supplemented with pressure on operational costs. In the 1990s condition-based maintenance was introduced, together with tools like FMEA and FMECA. Even though life cycle management was often mentioned in that period, the focus was still on the reliability of current assets and not on the total cost of ownership that the concept of life cycle management promotes. Questions on the need for high reliability were often not asked. Asset management was often not involved in decisions on investment, and sometimes not even in determining the maintenance budget. Only recently was risk analysis introduced, and then often limited to individual assets and components. Thinking about the asset base and the organization as a holistic entity is still far in the future. This vision of asset management can be branded as bottom-up asset management.
2. Asset management as part of an organizational performance strategy. A completely different approach is to take the organizational strategy as the starting point for asset management. Asset owners have assets for a reason, and it is up to the asset manager to help the asset owner achieve these objectives as efficiently as possible. This means determining maintenance for current assets, but also determining the right investment strategy to arrive at a better portfolio of assets, or sometimes even challenging the objectives, as they might be very expensive to realize. Some organizations might call this performance or service management; others refer to it as risk-based asset management. The reason for this is that in many infrastructures, no financial gains can be achieved by extra investments, only improvements in performance. All activities and investments are thus basically risk mitigations, hence the name.
3. Asset management as a concept of service delivery. The service providers active in the field of infrastructure assets see asset management as a means of extending their service offering. Traditionally, asset owners commissioned the construction of new assets, either turnkey or only as subcontractor for an internal project leader. The trend over time has been in the direction of a broader service offering, including design, operation and maintenance. These DCOM6 contracts often have long running times. Problems in contracting often regard the over-specification of requirements as a means of risk management by the asset owner. Therefore, service providers would like to take up the role of asset manager, where they can discuss the end performance targets with the asset owner and think of the best option to deliver this value. It is like the transition that can be witnessed in telecom: first, backups needed to be made on the computer or server itself, but because of high-speed internet access, an online backup service is achievable nowadays. The asset (the backup tape unit) is thus replaced by a service.
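The risk-based view in the second stream, where every activity is a risk mitigation competing for a limited budget, can be sketched as a simple prioritization by expected risk reduction per euro spent. All assets, probabilities, consequences and costs below are invented for illustration.

```python
# Candidate mitigations (illustrative figures):
# (name, yearly failure probability before, after, consequence in EUR, cost in EUR)
mitigations = [
    ("replace cable section", 0.10, 0.01, 500_000, 80_000),
    ("extra inspection round", 0.10, 0.08, 500_000, 5_000),
    ("paint bridge", 0.02, 0.015, 2_000_000, 40_000),
]

def ranked_by_efficiency(options):
    """Sort mitigations by expected risk reduction per euro spent, best first."""
    def efficiency(opt):
        name, p_before, p_after, consequence, cost = opt
        # Expected yearly risk reduction (probability drop times consequence),
        # divided by the cost of the mitigation.
        return (p_before - p_after) * consequence / cost
    return sorted(options, key=efficiency, reverse=True)
```

With these invented numbers, the cheap inspection round buys the most risk reduction per euro, even though the cable replacement removes far more risk in absolute terms; that distinction between absolute and budget-efficient risk reduction is what the risk-based framing makes explicit.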
As can be concluded from this impression, asset management is mainly limited to the operational level, but is developing into the tactical domain. Few organizations have moved asset management into the strategic domain.
3.1.2 Performance criteria
A number of organizations have to deal with performance criteria that are imposed by external bodies. Road and rail infrastructure has to comply with certain levels of availability, water has an extensive body of quality standards, and the high-pressure gas transport infrastructure has strict safety risk limits. However, many of those obligations are at a high level and not directly applicable in day-to-day operation. Asset managers therefore have a role in translating the infrastructure performance criteria into asset performance criteria and internal targets. But the alignment between organizational goals and the technical performance criteria is weak. One of the reasons for this is the lack of representation of asset management at the strategic level. The top level sees the added value of asset management, but has a limited view on what asset management can mean (often limited to improving maintenance and operation). Only in a few cases does the top level act as a real asset owner and try to think about what can be achieved with the current assets instead of how the cost of the current operation can be reduced. As a consequence, the asset management professionals try to formulate meaningful objectives for themselves and then try to gain support for them. The resulting paradox is that asset managers, as they are not involved in the strategic debate, can only suggest improvements to the current situation. This is expressed as lower costs of the improved operation, thus reinforcing the (wrong) image of asset management as a tool for cost reduction and thus reducing the probability of being invited to the strategic sessions.
Only in a few organizations does the asset manager present the asset owner with different scenarios for cost and performance development.
6 Design, Construct, Operate and Maintain
3.2 Organization

As has become clear in the previous section, asset management is at different stages of development in different sectors and companies in the Netherlands. This is reflected in the organizational position of asset management. In many cases asset management is part of the production or maintenance function. In some cases asset management is a separate department with direct representation on the board of directors of the infrastructure manager. However, most infrastructure managers are part of larger conglomerates, and in none of the organizations did asset management hold a position at the corporate level. If it had board representation, it was on the board of the business unit.

An issue addressed in the interviews was whether asset management was a line or a staff function. In none of the organizations was asset management a true staff function. The interviewees viewed this as the right arrangement, as they feared that asset management as a staff function would be perceived as top-down and thus provoke resistance. Building the asset management capabilities bottom-up was perceived to be the better option. The result is that many asset managers indicate that a large number of employees are involved in asset management. It also means that asset management is on the map and has acquired a strong foothold.

Another observation is that asset managers tend to have a technical background, even though many recognize the need for a wider view of the world than only the technical. But they also seem to agree "that it is easier to train an engineer in economics than to train an economist in engineering". However, despite the wide recognition of the importance of the social side of asset management, only a few organizations actively encourage their staff to develop themselves in that direction by offering training courses.

3.3 Implementation

3.3.1 Availability of data

"The data exist but are not accessible". This was the central theme of all interviews.
Some claimed this was a legacy problem7, but in other cases the data simply did not exist. Assets had been constructed over multiple decades, and data might not have been recorded at all at construction, or were badly maintained, or lost over time. Some asset owners could not even tell for certain how many assets they owned. This might be hard to imagine for an asset manager working in a production plant, where one sees the assets one owns. But in infrastructures, assets are distributed, so one does not necessarily know where to look. Furthermore, underground infrastructures cannot be seen at all. At best, their existence can be inferred from above-ground connections, but what they are and where they lie is not detectable. Another issue mentioned in many interviews was the poor quality of fault restoration data. Often the precise cause of the failure was not recorded, which is a vital piece of information for the asset manager.

Despite these shortcomings, asset managers tend to be pragmatic. Even with only 80% of the data available they can make the right decisions. Nevertheless, a drive still exists to get more and better data. In this respect online monitoring systems gained much attention. A concern that many asset managers expressed was that much of the data existed as experience in the minds of the employees and not as factual records. Getting the real data out, instead of history colored by rules of thumb, was regarded as a big problem. This was further amplified by the uneven age profile of the organization: employees of asset managers tend to be in the second half of their working life, with many not very far from retirement.

3.3.2 Methods and tools

One of the themes in the interviews was the use of tools and methods for asset management. In this respect, a division could be made between tools and methods for maintenance and operation on one side, and tools and methods for investment decisions on the other.
For the first set, the focus is on registering assets, classifying criticality and optimizing the maintenance strategy. Investment decisions rely more on the prognosis of future behavior and the need for maintenance.
• Tools for maintenance
It was found that a number of (IT) tools are used. Every organization seems to have a tailor-made solution for its case. These are often adaptations of commercially available suites such as SAP, IBM Maximo or D7i, often supplemented with Excel for the registration of assets and data. In itself, the choice of tool is not the interesting point: all tools deliver comparable features. However, the lack of standardization within organizations forms a barrier to the centralization of asset management. What can be observed is that a paradigm shift is happening in the field of maintenance. In the late 80s, preventive maintenance was seen as the way to reduce the costs of installations, as it limited the share of unplanned maintenance (outages) in the total costs. But nowadays, preventive maintenance is only applied when and where it is
7 Data only available on paper or in a singular database
really needed. Maintenance is further diversified into use-based maintenance, condition-based maintenance, and corrective maintenance; the latter means accepting the risk of asset failure.

• Tools for investment decisions
In many cases, the asset managers were not involved in the strategic investment plans. The initiative is often taken by the commercial departments. However, the need for maintenance of the new investment seems to have acquired a foothold in the business case models, and therefore asset managers tend to get involved in the decision-making process at a later phase. Predicting the need for maintenance, however, seems to be a key (and difficult) issue. The history of existing assets can provide some clues for investment decisions, but it is uncertain whether the future asset will behave comparably. In this respect two factions seem to exist. One group of asset managers is building models to predict future behavior, while the other sees such models as too theoretical and tends to focus on gathering historical data.
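The historical-extrapolation approach favoured by the second group can be illustrated with a short sketch. The failure history and the linear trend fit below are hypothetical, not data from the interviews.

```python
# Extrapolate next year's expected failure count for an asset class
# from hypothetical historical failure records (simple linear trend).

def fit_linear_trend(years, failures):
    """Least-squares fit: failures ~ a + b * year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(failures) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, failures)) / \
        sum((x - mean_x) ** 2 for x in years)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical failure history for an ageing pipeline section
years = [2004, 2005, 2006, 2007, 2008]
failures = [3, 4, 4, 6, 7]

a, b = fit_linear_trend(years, failures)
forecast_2009 = a + b * 2009
print(f"Expected failures in 2009: {forecast_2009:.1f}")
```

With these assumed numbers the fitted trend adds one failure per year, so the forecast is simply the trend extended one step; in practice, of course, the quality of such an extrapolation depends entirely on the quality of the historical records, which is exactly what the interviewees questioned.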
One conclusion can be drawn from these interviews: most asset managers used to think in terms of fixed budgets, which were spent according to best knowledge, but recent years have shown that asset managers are forced to build a business case for their cash needs.

3.4 Knowledge acquisition

In the interviews, knowledge gaps and knowledge acquisition were addressed. A clear differentiation between knowledge on assets and knowledge on asset management could be observed.

• Knowledge gaps
The major knowledge gap is on asset behavior, especially the long-term behavior and prediction of the long-term need for maintenance and (re)investment. Those long-term predictions are important not only for the asset owners, but also for the asset service providers. In general, asset managers think that they have good knowledge of asset management itself, and they do not indicate a knowledge gap in this field. They do see a need for more knowledge on contracting, as the trend is towards outsourcing services.

• Knowledge acquisition
Although asset managers have (external) technical courses in their training portfolio, external knowledge acquisition is not important with respect to building knowledge on asset behavior. As mentioned earlier, the knowledge exists within the internal organization, and efforts focus on documenting that internal knowledge. External knowledge acquisition tends to focus on asset management in a broader sense, and especially (asset) risk management. Besides courses, many asset management conferences are also attended. These provide new ideas, but it tends to be very difficult to apply the acquired ideas in the home environment. A final element is sharing knowledge directly with other organizations in the sector, but the extent varies widely across the sectors. Some multi-sector initiatives exist.

3.5 Opportunities

The opportunity for improving the profession of asset management was addressed as a final theme in the interviews.
A number of opinions with regard to the functions and objectives of the Platform are shown below:

Don'ts
• "Discussion group"
• Lobbying
• Combined meetings of asset owners and service providers

Do's
• "Task force"
• Lobbying
• Promoting asset management to top management
• Asset management hub
• Modular courses
• Knowledge centre on tools and methods
• Demonstrating generally applicable methods and tools
• Linking academic knowledge to practice
• Facilitating the standardization of data
Note that lobbying is mentioned both as a do and as a don't. A trend in the opinions is the interest in knowledge dissemination: providing master classes and training courses, and acting as a knowledge hub. A number of interviewees would welcome an independent body that could help them judge the added value of "new and improved" tools and techniques.
4 KEY FINDINGS
1. Asset management is not yet at the strategic level

Even though asset management is still a juvenile "science", significant progress has been achieved. Almost all interviewed organizations implemented their first asset management initiatives around 2002, followed by establishing separate asset management groups or departments around 2004-2006. Most of the time has been spent on acquiring support for asset management, working bottom-up. In many organizations asset management originated in the functions of maintenance and operation, and is still regarded as the professionalization of those functions. However, over the years asset management moved up from purely maintenance decisions to judging investments on their maintainability, and thus moved to the tactical level. Nevertheless, asset management is still regarded as a cost centre, expected to spend its efforts on saving costs. Many investments only result in risk reduction, not in extra profits. Business cases therefore often show a negative result, with the rare exception that reduced maintenance costs justify the investment. Analyses based on total lifecycle cost and organizational risk are better suited to convince top management. Asset management then turns into a profit centre, a necessity for reaching the strategic level. To achieve this, alignment is needed between organizational goals and technical norms and standards, but this is lacking in many organizations. Translating the organizational goals into asset performance criteria could facilitate this alignment, and could lift asset management to the strategic level.

2. Paradigm shift

Where up until a few years ago mainly use- or time-based preventive maintenance was applied, currently condition-based maintenance seems to be the norm, supplemented with corrective maintenance for non-critical assets. Use-based preventive maintenance is only applied if it is really necessary. This is strongly linked to the concept of lifecycle costing.
Trying to minimize maintenance costs for existing assets can be pointless if assets are available on the market with much lower operational costs, for example because they have higher energy efficiencies. In that case the asset should be replaced, not maintained. The concept of lifecycle costing is gaining interest, but it is certainly not widely applied.

3. Performance models and data

A key element of asset management is predicting the asset performance and the maintenance need of the asset. However, only a few asset managers relied on models to predict these, as many regarded the models as too theoretical. Most tended to rely on extrapolations of historical data. Paradoxically, at the same time most asset managers "complain" about the quality and availability of historical data.

4. Soft issues are important

Even though asset management was generally considered to be an engineering discipline, most asset managers recognized the importance of non-technical issues like economics and social elements. Many spent considerable effort on creating support within the organization. Some even developed specific training modules for social skills like empathy and persuasiveness, although this was not common.
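The replace-or-maintain trade-off underlying the lifecycle-costing findings above can be made concrete with a discounted-cost comparison. The purchase price, operating costs, horizon and discount rate in this sketch are hypothetical figures, not data from the interviews.

```python
# Compare the discounted lifecycle cost of keeping an old asset versus
# replacing it with a more energy-efficient alternative.
# All figures are hypothetical illustrations.

def lifecycle_cost(purchase, annual_opex, years, rate=0.05):
    """Net present cost: purchase now plus discounted annual operating cost."""
    return purchase + sum(annual_opex / (1 + rate) ** t
                          for t in range(1, years + 1))

horizon = 10  # remaining service years considered

keep = lifecycle_cost(purchase=0, annual_opex=120_000, years=horizon)        # old, inefficient
replace = lifecycle_cost(purchase=300_000, annual_opex=60_000, years=horizon)  # new, efficient

print(f"Keep:    {keep:,.0f}")
print(f"Replace: {replace:,.0f}")
print("Replace" if replace < keep else "Keep", "has the lower lifecycle cost")
```

With these assumed numbers the halved operating cost recovers the purchase price well within the horizon, so replacement wins even though "maintaining what we have" looks cheaper year by year, which is precisely the point the paradigm-shift finding makes.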
5 RESEARCH AGENDA: TOWARDS ASSET GOVERNANCE
Based on the interview results, a research program was defined in which academics as well as practitioners are participating. The outline of this program is introduced in this section.

Much of the research published in the area of asset management seems to deal with maintenance and operation. This is a good thing, as many of the issues practitioners face today are related to maintenance and operation; asset management without attention for maintenance and operation would be empty. Nevertheless, the interview results suggest that this is not the biggest issue asset managers face. That lies more in the area of showing the added value of asset management to the whole organization, even if the costs of managing the assets would go up. Currently, asset managers tend to be involved in cost cutting in maintenance in order to prove the value of asset management. However, we strongly feel that asset management should be more about discussing what the assets could and should deliver in relation to what the company needs them to do. In some cases, it might be wiser to increase the asset management budget, as it would reduce the need to build new assets. In other cases it might be better to build new assets, as the cost of increasing the output of the existing assets could be much higher. For new assets, proper attention should be given to the operational costs over the full asset life, as these can outweigh the purchase price many times over. As long as asset management is confined to the maintenance and operation function, companies will not get the full benefit of an integrated approach. They might still build or acquire new assets that are hard to manage, or that are not needed if the current assets are managed properly8. We strongly feel that this strategic part of asset management should be addressed in our research program. As a start, we made a rough division of the field of asset management into four groups:
8 At the AIChE meeting, spring 2003, in the Kurhaus (Scheveningen), BP presented the concept of the phantom plant, which was the sum of all production losses. This phantom plant proved to be the biggest of all plants.
- Institutional embedding: This deals with how public values are embedded, how interaction with the regulatory bodies is organized, which strategic goals are allocated to asset management, how they are measured and so on. It is about the mission and values of asset management.
- Internal organization: This deals with the business processes, operating models, authorization, change management, capability development and so on. The key theme is how to guarantee that the organization structure fits the mission and values.
- Operational excellence: This is about the fine-tuning of what you are doing. In our view, many of the maintenance management initiatives fall within this category.
- Contracting: This concerns both the make-or-buy decision (in relation to outsourcing) and the type of contract and the process of contracting in case the decision is to buy. This can be at all levels, from financing (PPS) to the contract itself (only services or DCOM). The theme is how to assure that you get what you want.
Crosscutting these four topics are a number of general themes that need to be researched. These are:
- Maintenance and replacement: Many infrastructures are old and might need replacement or extra maintenance. How to determine what is best, and how to prepare the organization for the significant increase of work if the assets were to be replaced?
- Human factors: In the end, it is people who determine the success of an infrastructure. How to address this properly, and what skills and competences are needed?
- Innovation and transition: The use of many infrastructures is changing over time. How to integrate this into asset management?
- IT systems and support: A key issue addressed was the availability of data. How to make certain the right data gets to the right people at the right time?
Together, these can be grouped into a matrix, with the crosscutting themes (maintenance and replacement; human factors; innovation and transition; IT systems and support) set against the four topics (institutional embedding; internal organisation; operational excellence; contracting). This is a rough structuring of the asset management field. Based on the interviews, we feel that the most challenging and needed topics are institutional embedding and internal organization. One could brand this as strategic asset management, but we feel asset governance is a better name. After all, it is about the structure of asset management, and not about the strategic decisions.
6 CONCLUSION
Asset management as a profession has made significant progress in the Netherlands in the past years, and has acquired a strong foothold with many infrastructure managers. It has moved up from operational decisions regarding maintenance to a more tactical level, regarding the maintainability of future assets, but has not (barring a few exceptions) reached the strategic level. To reach it, alignment between organizational goals and technical asset performance criteria is needed. However, almost no practitioner has really mastered this challenge until now. It is at this point that academia could provide support. Therefore our current research with regard to asset management focuses on linking asset management goals to organizational goals. The authors prefer to brand this asset governance, to differentiate it from the (maintenance- and operation-based) operational and tactical asset management.

Acknowledgment

This research was sponsored by the Next Generation Infrastructures Foundation, www.nginfra.nl.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
A CASE STUDY ON CONDITION ASSESSMENT OF WATER AND SANITATION INFRASTRUCTURE

Joe E. Amadi-Echendu a, Hal Belmonte b, Chris von Holdt b, and Jay Bhagwan c

a Graduate School of Technology Management, University of Pretoria, Republic of South Africa
b Aurecon, Republic of South Africa
c Water Research Commission, Republic of South Africa
The management of physical assets covers a wide scope and range of processes that include the acquisition, control, use, disposal and recycling of built environment structures in a manner that satisfies the constraints imposed by business performance, environmental, ergonomic, and sustainability requirements. Technologies applicable to the management of infrastructure assets for water and sanitation services are advancing rapidly, apparently driven by advances in condition monitoring, information and communication technologies. This paper discusses condition and risk assessment of water and sanitation assets. Although inferences are drawn from available public domain literature and a non-probabilistic survey of representatives of organisations engaged in water and sanitation services, the findings reiterate that the most rapid trends are in technologies for the collection and transfer of data. We also find that the understanding and practice of asset management among water and sanitation service providers is still in its infancy, which calls into question some of the purported benefits of technology applications for such organisations.

Key Words: Engineering Asset Management, Water and Sanitation Infrastructure, Technology Trends.
1 INTRODUCTION
Technologies applicable to the management of physical assets have advanced rapidly, and asset-intensive businesses can take advantage of these developments to increase operational efficiency and to provide improved products and services. Noting the considerable impact of water and sanitation on health, economy, environment, and society at large, a core issue for service providers is to determine the condition of extensive infrastructure that includes buried pipes, dams, pumping stations, reservoirs, reticulation, treatment and transport systems. Technology can, and should, be deployed to monitor the quality of potable water and effluents to ensure compliance with applicable health regulations. In societies with significant socio-economic disparity, there is the added imperative to establish adequate capacity for water and sanitation services, both in terms of new and existing infrastructure. For example, infrastructure planners and operators need to determine the risks and interventions required in the creation, acquisition, maintenance, operation, decommissioning, disposal and/or rehabilitation of water and sanitation assets. Capital investments, operations and maintenance, and rehabilitation of water and sanitation infrastructure have traditionally been in the realm of massive public funding, and this is increasingly placing an unbearable fiscal burden on government departments. The combined challenges of social cohesion, technological advancement and economic growth have provided incentives for increased participation by private sector investors and managers in water and sanitation services. This paper extrapolates from our review of methods, tools and techniques that are available for use in infrastructure condition assessment and risk management.
Based on observed cases of water and sanitation providers in South Africa, we then summarise the extent to which available condition monitoring, information and communication technologies influence asset management activities like condition assessment, risk analysis and predictive modelling.
Challenges
As illustrated in figure 1, for the water and sanitation sector, technology embedded in physical assets, information systems and business processes can be exploited to address wide-ranging socio-economic challenges: satisfying healthy service delivery requirements while concurrently minimizing the environmental footprint of energy consumption, water extraction and effluent discharge, all within highly constrained capital and operational expenditure programmes. Data, information systems and communication technologies provide the means for linking the infrastructure components to the asset management processes, and for resolving the challenges and achieving the business objectives of the owner/operator of the asset base.
[Figure 1 depicts the model: planning links the assets (civil structures, pipe networks, mechanical and electrical equipment, built on materials and sensor technologies) through data, information systems and communications technologies to the asset management processes of acquisition, operations and maintenance, and decommissioning and rehabilitation, supported by condition monitoring, risk analysis and predictive modelling; the objectives are the provision of healthy water and sanitation services and minimized environmental impact of service provision.]

Fig 1. Water and Sanitation Services Asset Management Model highlighting ICT Applications
2 RESEARCH
Effective decision making regarding long-term planning, risk management, maintenance, operations, and other asset management activities depends on the availability of appropriate data and information. Sensors, computerised systems, and communication technologies provide tools for the collection of condition and transactional data against asset records; these data can be processed into useful categories of information which, subsequently, inform decision making. Asset management practices entail the use of information to make value-adding decisions regarding asset condition, performance and risk. A systematic, consistent and relevant technical assessment should provide condition information that enables infrastructure planners and operators to determine the risks and interventions required in the management of water and sanitation assets. The collection of pertinent data is a major task [1] and the assessment should at least:
• provide a rating of the asset condition "as found";
• indicate the risks associated with allowing the asset to remain in the "as found" condition; and
• identify the scope of work that may be necessary to restore the asset to, and/or sustain it at, the desired condition.

Marlow et al (2007) [2] provide a comprehensive breakdown of condition monitoring tools and techniques that can be applied to equipment and structures deployed in water and wastewater services. Their study produced a set of inclusive tables that break down the various inspection tools and techniques, environmental surveys and condition monitoring techniques. Our literature review (cf., for example, Andrews (1998) [3], Randall-Smith et al (1992) [4], Billington et al (1998) [5], Snyder et al (2007) [6], Ferguson et al (2004) [7], and Stone et al (2002) [8]) reveals a myriad of techniques for sensing the desired physical parameters, as well as a number of computational models that can be applied towards the prediction of asset condition and risk profile.
While Watson et al [9], [10] and [11] may be useful references on practice guidelines, a key gap observed in our literature review is the apparent lack of specific sets of condition indices for each category of water and sanitation infrastructure assets.
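The three outputs an assessment should at least provide (the as-found rating, the associated risks, and the restorative scope of work) can be captured in a minimal record structure. The field names and the five-point rating scale in this sketch are illustrative assumptions, not a sector standard.

```python
# Minimal condition-assessment record capturing the three outputs an
# assessment should provide: as-found rating, risks if left unchanged,
# and restorative scope of work. The 1-5 rating scale and the field
# names are illustrative assumptions, not a sector standard.

from dataclasses import dataclass, field

@dataclass
class ConditionAssessment:
    asset_id: str
    asset_class: str            # e.g. "pump station", "water pipeline"
    rating_as_found: int        # 1 = very good .. 5 = very poor (assumed scale)
    risks_if_unchanged: list = field(default_factory=list)
    restorative_scope: list = field(default_factory=list)

    def needs_intervention(self, threshold=4):
        """Flag assets whose as-found rating is at or beyond the threshold."""
        return self.rating_as_found >= threshold

record = ConditionAssessment(
    asset_id="PS-017",
    asset_class="sewage pump station",
    rating_as_found=4,
    risks_if_unchanged=["overflow during peak inflow", "motor burnout"],
    restorative_scope=["replace impeller", "rewind motor"],
)
print(record.needs_intervention())  # True: rating 4 meets the default threshold
```

The point of such a structure is simply that all three assessment outputs travel together against the asset record, which is the precondition for the risk analysis and predictive modelling discussed later.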
Following our literature review of condition and risk monitoring techniques, we focused our study on the application of these technologies by owners/operators of water and sanitation infrastructure. We developed a questionnaire to assist us in our study of how these techniques were applied by water and sanitation services providers in South Africa. We targeted a judgemental sample of people that included representatives of service providers, technology vendors and consultants. The service providers included 145 municipal agencies, some of which are responsible for water distribution, bulk transfer and sanitation, plus 5 companies primarily engaged in the extraction, treatment and bulk transfer of water. The range of infrastructure owned/operated by the respondents' organisations typically included boreholes, dams, reservoirs, pump stations, treatment plants, and pipeline transfer systems.

Despite concerted efforts at persuading representatives of the respective organisations in our geographical delineation, only 23 respondents, almost exclusively representing local municipalities, completed our questionnaire. It is worth noting that the responding municipalities serve less than 16% of households in a geographical population comprising more than 45 million people. The study was also conducted against the background of recent legislation that more or less requires government departments and public agencies to adopt and implement asset management principles and practices.

The bar graphs in figure 2 show the respondents' feedback on how often they carried out condition assessments of the infrastructure assets and what technologies were used. The respondents claim that their respective organisations carry out daily, monthly and yearly inspections of their assets, but more so on pump stations, pipelines and reservoir facilities.
It was revealing that some organisations seldom carried out condition assessment of their facilities, even limited to visual inspections, especially given the wide range of technologies seemingly available. We were also perplexed to observe that some respondents indicated that condition assessments were "outsourced to consultants", giving the impression that those organisations did not really pay attention to what technologies were applied.
[Figure 2a is a bar chart of reported inspection frequencies (daily, weekly, monthly, annually, one to six times per year, seldom) per asset type: dams, reservoirs, boreholes, spring protection, water and sewage pump stations, water and sewage treatment works, water and sewage pipelines, and valves.]

Fig 2a. Frequency of inspections for condition assessment of infrastructure
[Figure 2b is a bar chart of the reported inspection technologies, dominated by visual inspection and including physical inspection, outsourcing to consultants, SCADA, CCTV, vibration analysis, motor current analysis, water meters, pipeline inspection gauges, pipe inspection real-time inspection technique (PITAT), acoustic emissions, sewer scanning and evaluation, inspection of flow in manholes and toe drains, and appointment of a dam safety inspector.]
Fig 2b. Inspection technologies for condition assessment of infrastructure

With regard to risk management, we approached the issue by asking the municipal organisations whether or not they measured reliability, based on the assumption that our respondents understood our definition of reliability as "the chance of a pre-defined failure occurring under given conditions within a stipulated time period". The bar graph in figure 3 suggests that fewer than half of the municipal organisations measured the reliability of the respective assets under their care. Of more concern is that the majority of respondents indicated 'direct assessment' as the method for measuring reliability and 'monetary value' as the
method for risk ranking of assets. Such feedback more or less supported our a priori impression that the majority of respondents did not understand how to measure reliability or risk. In fact, fewer than a third of our respondents indicated that their respective organisations maintained a risk register.
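Under the definition of reliability used in the questionnaire ("the chance of a pre-defined failure occurring under given conditions within a stipulated time period"), a first-order estimate can be computed from failure records by assuming a constant failure rate (an exponential model, a modelling choice of ours rather than of the survey). The counts below are hypothetical.

```python
# First-order reliability estimate from failure records, assuming a
# constant failure rate (exponential model). Counts are hypothetical.

import math

def failure_probability(failures, asset_years, period_years=1.0):
    """Chance of at least one failure within the stipulated period,
    with failure rate estimated as failures per asset-year observed."""
    rate = failures / asset_years
    return 1.0 - math.exp(-rate * period_years)

# Hypothetical: 12 failures observed over 240 pump-years of service
p = failure_probability(failures=12, asset_years=240, period_years=1.0)
print(f"Probability of failure within one year: {p:.1%}")
```

Even a crude estimate of this kind presupposes counted failures against identified assets over a known period, which is exactly the record-keeping that fewer than a third of the respondents (those maintaining a risk register) appear positioned to support.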
[Figure 3 compares, per asset type (dam, borehole, spring protection, reservoir, water and sewage treatment works, water and sewage pipelines, water and sewage pump stations, valves), the number of municipalities responsible for the assets with the number that measure the reliability of those assets; the latter is below half in each case.]
Fig 3. ‘Direct assessment’ of reliability as a measure of risk
3 DISCUSSION
While the respondents' feedback suggests visual inspection as the prevailing common method for condition assessment, visual inspections can encompass a rather broad range of activities, from cursory inspections to highly detailed technical examinations utilising sophisticated instrumentation. The same applies to 'direct assessment' as the measure of reliability and the use of 'monetary value' as the basis for risk ranking. All the municipal organisations in the geographical delineation used for our case study are under pressure to prepare asset registers, especially to demonstrate financial compliance with the relevant legislation. The apparent lack of sector asset management guidelines, over and above vendor equipment standards, makes it harder to conduct condition and risk assessments of water and sanitation infrastructure assets, and hence to value such assets. Although the technology exists and there are examples of the application of some of the methods for condition and risk assessment, the need for an enabling environment is exacerbated by the requirement to develop new skills, and further compounded by weak organisational commitment to the principles and practice of engineering asset management. The overall impression from our non-probabilistic survey is that the understanding of engineering asset management is in its infancy among the water and sanitation service providers that participated in the study. With this in mind, we propose the following data progression structure to facilitate the journey in engineering asset management for such organisations.
Data level     | Data type                  | Key data management needs
Primary data   | Inventory                  | Classification guidelines; basic attribute guidelines; data storage software
Secondary data | Basic condition attributes | Assessment guidelines; reporting guidelines; advanced condition technology
Tertiary data  | Performance data/modelling | Maintenance management software; business processes; predictive modelling methods; optimised decision-making methods; benchmarking

(Most water service providers are currently at the primary data level; the movement in the future is towards the tertiary level.)
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
THE ROLE OF STANDARD INFORMATION MODELS IN ROAD ASSET MANAGEMENT
Daniela L. Nastasie a, Andy Koronios a
a Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM) - Systems Integration and IT, University of South Australia, Mawson Lakes SA 5095, Australia.

Business activities rely on people’s understanding and interpretation of information. Meaning (semantics) is incorporated in the way information is defined and structured. From a semantics point of view, information models range from low level semantics, such as taxonomies and data dictionaries, to high level semantics, such as formal ontologies. Low level semantics information models help humans add meaning to information in a structured way, while high level semantics information models are essential for computer-aided activities and the automation of processes. This paper discusses standard information models of relevance to the Road Asset Management sector, based on topics discussed on the IPWEA Asset Mates Forum and interviews with practitioners in Australian Government agencies. Current taxonomies, guidelines and open information standards with potential use for Road Asset Management were analysed. The findings suggest that information models used in the Road Asset Management industry are mainly at the low end of the semantics scale and vary in consistency across the industry. At this stage there are no XML-based industry standards specifically designed for Road Asset Management. It is recommended that the Road Asset Management sector consider designing an XML-based information standard with terms and concepts specific to this industry. Existing XML standards from other sectors could be used as examples or adapted to this industry’s needs for overlapping areas such as finance or business reporting.
Key Words: Road Asset Management, Standard Information Models, XML Information Standards
1 INTRODUCTION
A 2008 research study conducted by the Commonwealth Grants Commission identified a lack of information and data related to road assets managed by local government and emphasised the need for consistency and accuracy of local government data collections [1]. Consequently, the Australian Local Government Association (ALGA) proposed funding of $20 million over the next four years to develop a national data collection framework ($7 million) and to establish and/or upgrade asset management at local government level ($13 million). One of the three main initiatives required to improve local government services is the clarification of data types and standards, consistent with the national framework developed by ALGA together with the Australian Bureau of Statistics (ABS), the Commonwealth Grants Commission, and Local Government Grants Commissions [2]. These figures demonstrate the importance of road asset management and its reliance on data and information, as well as a special emphasis on local government agencies, which manage more than 80% of the 812,000-kilometre Australian road network. Presently a variety of information systems are used to support the collection and analysis of data for road asset management tasks, yet even though information systems have proliferated in the road asset management arena, the expectation that smart information technologies would solve the problem of information has not been met. A survey conducted in South Australia in 1999-2000 found that an overwhelming majority of councils had invested in better “data systems” as the main measure to improve their asset management activities. At the same time, the report indicates that what council staff lacked most was “better data” [3]. These findings show that even though information technology is essential for managing data and information, it does not provide benefits by itself.
In order to use information technology effectively and achieve the expected benefits, more attention needs to be paid to data and information. Traditionally, data has been seen as secondary to the processing of that data, which led to the famous GIGO (garbage in, garbage out) problem. The importance of data per se started to become clear when computer scientists realised that software applications were entirely dependent on the data they processed, and they recommended a paradigm shift from applications to data [4]. This shift of power has been enabled by the maturity of Web technologies that allow data to become smarter. Daconta, Obrst and Smith envisaged the smart data continuum (Figure 1) as a four stage process, from data proprietary to a single application to data increasingly independent of applications across the Web. The first stage of the continuum is the pre-XML stage, represented by documents and data records stored in formats other than XML. The second stage corresponds to the first level of data independence, in which data related to an individual domain of practice is described using individual vocabularies represented in XML. In the third stage, data is composed and classified in hierarchical taxonomies using mixed XML vocabularies from multiple domains, and in the final stage new data can be inferred from existing data across the Web following logical rules embedded in XML ontologies [4].
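The second stage of this continuum can be illustrated with a short sketch: a road asset record described in a small XML vocabulary and parsed with Python’s standard library. The element names (roadAsset, assetClass, lengthKm, condition) are invented for illustration and do not come from any published Road Asset Management standard.

```python
import xml.etree.ElementTree as ET

# A road asset record in a hypothetical XML vocabulary (stage two of the
# smart data continuum). All element names are invented for illustration.
record = """
<roadAsset id="RA-001">
  <assetClass>Road</assetClass>
  <component>Pavement</component>
  <lengthKm>2.4</lengthKm>
  <condition scale="1-5">3</condition>
</roadAsset>
"""

root = ET.fromstring(record)
# Because the structure is explicit in the markup, any application can
# recover the fields without knowing the system that produced the record.
asset = {child.tag: child.text for child in root}
print(root.get("id"), asset["assetClass"], float(asset["lengthKm"]))
```

The same record stored in a proprietary binary format (stage one) would be readable only by the application that wrote it; expressed in XML it is self-describing and application-independent.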
Figure 1 - The smart data continuum [[4], Figure 1.2, pg. 3]

XML therefore plays a vital role in creating data independent of applications in the Web environment. While XML is the format that allows data to become independent of applications, the content represented in XML is structured according to particular information models. Information models play an important role in attaching meaning (semantics) to data, as seen in Figure 2, which presents information models on a semantics spectrum, from low level semantics information models, such as taxonomies and data dictionaries, to high level semantics information models, such as formal ontologies [5]. For humans, the easiest information models to understand are those with the lowest level of formality, using natural language and located at the weak semantics end of the Ontology Spectrum. Computers need information models at the high end of the Ontology Spectrum in order to perform automated tasks. One way to create information models with more explicit semantics is to develop them from existing low level semantics information models, such as taxonomies and industrial categorisation standards. The main advantages of this approach are that industrial information standards contain a multitude of hierarchically organised concept definitions and reflect a degree of community consensus, which makes their adoption and diffusion easier [6, 7].
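The difference between the two ends of the spectrum can be sketched in a few lines. The weak-semantics end is just a labelled hierarchy; the strong end adds a logical rule (here, transitivity of the subclass-of relation) that lets a machine infer facts the taxonomy never states explicitly. The taxonomy below is invented for illustration, not taken from any published classification.

```python
# A hypothetical road-asset taxonomy: the weak-semantics end of the
# spectrum is just this labelled subclass-of hierarchy.
subclass_of = {
    "Pavement": "Road",
    "Footpath": "Road",
    "Road": "InfrastructureAsset",
    "Bridge": "Structure",
    "Structure": "InfrastructureAsset",
}

def is_a(child: str, ancestor: str) -> bool:
    """Stronger semantics: infer class membership by following
    subclass-of links transitively (a simple logical rule)."""
    while child in subclass_of:
        child = subclass_of[child]
        if child == ancestor:
            return True
    return False

# The taxonomy never states "Pavement is an InfrastructureAsset";
# the transitivity rule lets the machine infer it.
print(is_a("Pavement", "InfrastructureAsset"))  # True
```

A formal ontology generalises this idea: many relation types, each with declared logical properties, over which a reasoner can draw such inferences automatically.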
Figure 2 - Ontology Spectrum [[5], Fig. 1, pg. 367]

The following sections discuss the findings from an exploratory study into Road Asset Management information models and emphasise the role of XML information standards in the new Web-enabled information environment.
2 RESEARCH METHODOLOGY AND FINDINGS
This paper is based on an exploratory study into the information models of relevance to Road Asset Management in Australia. Data triangulation was used in order to enhance the understanding of the issues related to the role of information models in Road Asset Management. Empirical evidence was collected from three different types of sources: the Asset Mates Discussion Forum on the IPWEA (Institute of Public Works Engineering Australia) website, government publications related to information standards of relevance to Road Asset Management, and interviews with 14 Road Asset Management practitioners from local and state level government agencies in South Australia. The interviews were conducted using a semi-structured questionnaire based on the HTE (Human, Technology, Environment) model [8]. A summary of findings follows.
2.1 Findings from the IPWEA NAMS.AU AssetMates Forum
The IPWEA National Asset Management Strategy Committee (NAMS.AU) provides national leadership in community infrastructure management and supplies resources to assist asset management practitioners. AssetMates is an IPWEA NAMS.AU web forum that covers all aspects of asset management, such as asset management planning issues, discussion on the various asset classes, accounting for assets, condition assessment, information systems, levels of service, and a general discussion area. This paper analysed topics related to Asset Management Terminology posted on the IPWEA Asset Mates Forum between July 2004 and March 2009 to discover issues related to data and information in Road Asset Management. Three main topics (Asset Classes, Information Systems and General AM Issues) included 8 threads related to road classification hierarchies, asset data structures, asset data collection and third party data collection, containing a total of 118 messages, as shown in Table 1.

Table 1 IPWEA NAMS.AU Asset Mates forum information related topics as at 14 April 2009 [Source - Authors]

Topic | Thread | First message | Last message | Brief description | Messages
Asset Classes | Road classification hierarchy | 1 May 2008 | 23 March 2009 | Discusses the methodology used for determining road classification hierarchy | 8
Asset Classes | Asset class definitions | 28 October 2008 | 01 April 2009 | Discusses the meaning of ‘asset classes’ | 7
Information Systems | Asset Data collection methods | 13 July 2004 | 23 December 2008 | Discusses data collection hardware and software | 34
Information Systems | Asset Data Structure | 14 September 2006 | 19 December 2006 | Discusses Asset Management data structures in relation to business processes and differences in localisation of fields (Asset Register in Finance as opposed to Asset Register in Asset Management Systems) | 6
Information Systems | Spatial Representation of Assets in the GIS | 29 November 2006 | 30 November 2006 | Discusses linear vs. spatial representation of roads and road assets in connection to GIS | 7
Information Systems | Road Inventory Data Collection | 31 January 2007 | 01 February 2007 | Discusses tenders for the videoing and inventory data collection of road networks and associated infrastructure (signs, footpaths, etc.) | 4
General AM Issues | Definitions and Terminology | 04 December 2008 | 15 December 2008 | Discusses definitions of basic terms related to Asset Management | 35
General AM Issues | As Constructed Asset Information from developers | 28 October 2008 | 28 October 2008 | Discusses issues that Local Governments have in getting as-constructed asset information from developers at appropriate times and how they work around this; ADAC and D-Spec are the recommended information standards | 17
The IPWEA NAMS.AU supports the application of a nationally consistent framework for infrastructure asset management planning and encourages all entities responsible for managing service delivery from infrastructure assets to adopt the structure and framework for asset management planning in accordance with the International Infrastructure Management Manual (IIMM).
2.2 Findings from interviews with Road Asset Management practitioners
Interviews with 14 practitioners in South Australia (10 from local councils and 4 from state road authorities) revealed that practitioners have a strong view on the role of information models in Road Asset Management. As shown in Table 2, the respondents have a good understanding of the benefits provided by standard information models, and at the same time they are aware of some of the issues that information standardisation might bring along. The general view is that standard information models would assist with benchmarking, data management, shared knowledge, transfer of skills and many other areas, as presented in Table 2. On the other hand, issues related to agreement on terminology, resistance to change, organisational politics and lack of expertise are among the concerns that need to be addressed before these standards can be created and implemented at industry level. Another finding is that, due to a plethora of information standards developed at various administrative levels, practitioners find it very difficult to know what information standards are available and which ones are better suited to particular business needs. This has led to the creation of several information models serving the same purpose, such as the various functional road classifications for different states, or the ADAC (As Designed As Constructed) and D-Spec (Developer Specifications for the Delivery of Digital Data to Local Government) standards, both of which specify the supply of digital data from designers to local councils.
Table 2 Drivers and inhibitors for the diffusion of consistent information structures [Source - Authors]

Benefits: access control; accountability; benchmarking; collaboration; data access; data analysis; data collection; data consistency; data control; data independence; data integration; data maintenance; data structure; decision making; efficiency; gaining grants; improved services; information management; more useful data; reduced costs; reporting; shared knowledge; strategic planning; third party information exchange; transfer of skills; transparency.

Issues: agreement on terminology; budget; business changes; configuration issues; data representation; data re-structuring; definition issues; different interpretations; lack of expertise; local politics; perspective; reduced flexibility; resistance to change; technology requirements; wasted knowledge.
2.3 Findings from the Road Asset Management Information Standards
Standards adoption has been considered a sign of industry maturation [9]. Road Asset Management, as part of a holistic view of Asset Management, is a relatively new industry [10], so the adoption of standards is still in its early stages. Austroads, the association of Australian and New Zealand road transport and traffic authorities, first introduced the concept of Total Asset Management to the management of road networks in 1994, when the Austroads Road Asset Management Guidelines were published. This study examined Austroads publications related to Road Asset Management, as well as road classifications such as NAASRA and government initiatives such as the Queensland Road Alliance, the Australian National Transport Data Framework and the Australian Government Information Interoperability Framework. A brief description of the most relevant government initiatives and industry standards is presented in Table 3. The findings suggest that the information models used in the Road Asset Management industry are mainly at the low end of the semantics scale and vary in consistency across the industry. At this stage there are no XML-based industry standards specifically designed for Road Asset Management.
3 DISCUSSION
In the Road Asset Management sector, the Web is currently used for sharing information at industry level from the top down, with Austroads and IPWEA publishing documents online that local and state road authorities can access on a regular basis. Some organisations use an Intranet to share data created in a local knowledge base that can be accessed by local staff. The IPWEA Asset Mates forum also uses the Web for the dynamic exchange of information over the Internet, fostering knowledge sharing at industry level through web forums that respond to users’ needs. The Web is therefore already present in Road Asset Management activities, but it is not currently used for automatic dynamic interaction, such as publishing and sharing real-time information of relevance to internal or third party agencies. One reason for this lack of real-time information sharing is the absence of a consistent information model for collecting and storing this type of information at industry level. The vast majority of road authorities use information systems to store their data, but they all use different information models to do so. Sharing and re-using information over the Internet will be much more difficult to achieve if the same type of information is stored in different information models. An increasing number of information standards, models, frameworks and guidelines have been created in recent years to deal with this issue (Table 3), but this makes it very difficult for practitioners to select the particular one they need, and at present there is no systematic approach to defining these standard information models. From the review of the literature and of other industries’ information models, it can be concluded that XML is the format required for information standards that are to become widely adopted, as this format can be used over the Internet, the largest communication network available.
XML-based industry information standards are therefore the first step in creating smart data independent of applications, providing benefits in terms of data interoperability as well as benefits derived from direct and indirect network effects (direct network effects coming from an increasingly larger communication network, and indirect network effects [14] coming from the low price of Internet and Web hardware). In order to take advantage of the full potential of the Web, industry-specific standards have been developed and implemented in various industries: ACORD in insurance, CIDX for chemicals, ebXML for the exchange of business messages, XBRL in business reporting, PIDX for petroleum and global energy, FIX in financial services (security transactions), FpML for financial derivatives instruments, MISMO for the mortgage industry, and RosettaNet in electronics and high tech are all examples of XML-based industry standards. XML has become the de facto standard for writing industry standards in many areas of practice. An increasing number of information standards originally written in natural language have been translated into XML or are created directly in XML format to increase interoperability at industry level [15]. Information models at industry level require a great deal of effort to create, and one of the most tedious tasks is reaching agreement on the definitions of terms and concepts. This is demonstrated by the findings from the analysis of the IPWEA Asset Mates forum. As presented in Table 1, the threads Definitions and Terminology and Asset Data collection methods attracted the most interest from forum participants. The thread Definitions and Terminology is of particular relevance to this study and was further analysed to get a better understanding of the issues.
One of the main problems is the interpretation of the definition of various terms, such as: Asset Hierarchy, Asset Class, Asset Category, Asset Group, Asset Type, Asset Component, Asset Attribute, Asset Inventory, Asset Register, Road Inventory, Road Register. To exemplify the issues with only two terms, Asset Register and Asset Inventory, there are several views on the data each of these storage devices contains. The International Infrastructure Management Manual (IIMM) defines an Asset Register as "a record of asset information considered worthy of separate identification including inventory, historical, financial, condition, construction, technical, and financial information about each....." [11]. This definition is open to various interpretations, as follows:
Forum Participant 1: ‘…An Asset Inventory would record the number of assets of a given type/capacity etc, owned by the entity, while the Register would record each of these assets separately, and include information specific to that individual asset, such as location, condition and so on’
Forum Participant 2: ‘… The Asset Register is a financial instrument that identifies all of the assets under the control of the organisation.... The detailed data is what you would hold in your asset inventory.’
Forum Participant 3: ‘An asset register contains assets above a threshold while an inventory contains assets (all assets or just those below the threshold) which are worth tracking individually (such as mobile phones) despite their lower value.’
Forum Participant 4: ‘Asset Register is developed using the Asset Inventory data. Asset inventory is recording data and continuously updating by collecting more and more data to increase the quality of the asset register. I also think that asset register has to include all the assets identified by the inventory.’
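A short sketch makes the incompatibility between these readings concrete. Under Forum Participant 3’s interpretation, the register holds only assets above a financial threshold and the inventory holds the rest, so the two collections are disjoint. The asset names and the $5,000 threshold below are invented for illustration.

```python
# Hypothetical asset list; names and values are invented for illustration.
assets = [
    {"name": "Bridge deck", "value": 250_000},
    {"name": "Street sign", "value": 180},
    {"name": "Mobile phone", "value": 400},
    {"name": "Pavement segment", "value": 32_000},
]

THRESHOLD = 5_000  # hypothetical financial recognition threshold

# Participant 3's reading: register above the threshold, inventory below.
register = [a for a in assets if a["value"] >= THRESHOLD]
inventory = [a for a in assets if a["value"] < THRESHOLD]

# Under this reading the register cannot simply be "developed using the
# Asset Inventory data" (Participant 4's view): the two sets are disjoint.
print([a["name"] for a in register])
print([a["name"] for a in inventory])
```

The same four records partitioned under Participant 4’s reading would instead place every asset in the inventory, with the register derived from it, which is why a shared definition of the two terms matters for what data is collected and stored.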
183
Issuing Body and/or Information Source Description
IIMM includes a glossary of 113 definitions related to asset management, from operational and maintenance terms, to management and finance activities. Asset hierarchies are provided as Association of Local Government International Infrastructure examples in appendixes. The terms and relationships between them are clearly defined, but only Engineering NZ Inc (INGENIUM) Management Manual (IIMM)- 3rd as examples and at a very high level of semantics, leaving organizations the freedom to add suband ed., 2006 components according to their needs. Physical assets are grouped by functionality (service area) Institute of Public Works Engineering of [11] and type (components). Road Assets hierarchy consists of roads and structures with service Australia (IPWEA) areas (land carriageway, footpaths, cycleways, etc.) classified under roads and bridge or retaining structure under structures. The components differ between roads and structures. PAS 55 has been designed as a specification providing guidance on good practice in all aspects of managing physical assets including acquiring, owning and disposing of physical assets. PAS Published by BSI British Standards BSI PAS 55 -2nd ed., 2008 55 is not a standard per se, describing mainly the processes and the steps involved in managing and distributed through [12] physical assets. It does not contain much detail about definition of terms or relationship between The Institute of Asset Management them. The first edition was published in 2004 in 2 parts: Part 1: specification for the optimized [13] (IAM) UK management of physical infrastructure assets and Part 2: guidelines for the application of PAS 55-1. 
Australian Infrastructure Financial 8 position papers have been prepared and posted on the IPWEA website for comments, as part of The IPWEA National Asset Management Management Guidelines (work in the background to the development of the new national guidelines for financial management of Strategy Committee (NAMS.AU) progress) infrastructure. NSW road classification divides roads into State Roads (Freeways and Primary Arterials), New South Wales Road New South Wales Roads and Traffic Regional Roads (Secondary or Sub Arterials) and Local Roads (Collector and Local Access Classification Authority (RTA) Roads). A review of road classification in NSW was done in 2004-2005. A total of 2,249 km of roads were considered for reclassification, but the main roads hierarchy remained the same. QLD road classification consists of five main categories: National Highways, State Strategic Queensland Department of Main Roads Roads, Regional Roads, District Roads and Local Government Roads. The first 4 categories are Queensland Road Classification (DMRQ) State Controlled Roads and their management is regulated by the Transport Infrastructure Act 1994. Road classification in Victoria is based on based on the Road Management Act 2004. It consists of Freeways and Arterial Roads managed by State Government (VicRoads) and Municipal Roads Victoria Road Classification VicRoads managed by Local Government agencies The National Association of Australian NAASRA classification separates roads by functionality, replacing the State classifications based State Road Authorities (NAASRA), on legislated definitions. It is used by road management authorities to define the road types NAASRA Classification for Roads currently eligible for Commonwealth Grants Commission (CGC). Variations of the NAASRA Management (used in ACT, NT, SA, The Association of Australian and New classification for road management are currently used in ACT, NT, SA, TAS and WA. 
NAASRA TAS and WA) Zealand Road transport and traffic classification consists of 9 classes separated in 2 groups: Rural Roads (classes 1-5) and Urban authorities (Austroads) Roads (classes 6-9).
Industry Standards
Table 3 Road Asset Management standards and guidelines [Source- Authors]
184
Issuing Body and/or Information Source Description
This framework was recommended by the National Transport Data Working Group (NTD-WG) in 2004 in order to facilitate land transport planning at the national, state, territory and local government level. At the heart of the NTDF will be a website designed as a central portal with an The National Transport Data The Australian Transport Council (ATC) open public interface allowing access to individual data collections according to various access Framework (Australia) restrictions. The data holdings should include Foundation data (Category ‘A’), New structured data (Category ‘B’) and Research data (Category ‘C’). Each Foundation data collection would have at least three layers: a public layer, a subscriber layer, and a private layer. The National Local Roads Data System (NLRDS) was designed by the Australian Local Government Association to aggregate existing sources of local road information in order to Australian Local Government provide a simple, consolidated, national local roads data system. NLRDS uses the following National Local Roads Data System Association (ALGA) performance measures: sealing of gravel roads; state of the asset; expenditure on roads and bridges; expenditure on roads and bridges per km for unsealed roads; lengths of unsealed roads, data used in performance measure; road asset consumption; road asset sustainability; road safety. The Queensland Road Alliance was established in 2002 as a partnership between Queensland's state and local governments to jointly manage about 32,000km of Queensland roads. The Road Queensland Department of Main Roads Alliance promotes sound asset management practices that will require a minimum set of road Qld Road Alliance (DMRQ) data inputs and outputs that are consistent statewide for the primary purpose of providing a and LGAQ relative ranking score for each road segment. The Road Alliance aims to provide a comparison, at a strategic network level, of road conditions across the state. 
These guidelines refer to the development and implementation of an Integrated Asset Management framework for managing road networks. The document regards a road network as a AP-R202/02 Austroads Integrated The Association of Australian and New major asset that need to be managed from 3 perspectives: financial, performance, and Asset Management Guidelines for Zealand Road transport and traffic deterioration. It is suggested that the asset management of a road network should focus on Road Networks authorities (Austroads) formations (cuttings and embankments including the subgrade), drainage, pavements (the road surfacing and structural layers that support the traffic loading), bridges, traffic control equipment such as signals, and roadside ITS installations, etc. The report describes a 1999 benchmarking study regarding the road asset management decision AP-R204/02 Austroads Road The Association of Australian and New processes in 12 international road agencies including Australia. It compares practices in strategic Network Asset Management: Zealand Road transport and traffic planning against the generic business processes presented in the Integrated Asset Management International Benchmarking Study authorities (Austroads) Guidelines for Road Networks. Among the opportunities for improvement: integration of processes and systems and documentation of Integrated Asset Management processes. It discusses topics of vehicle detection and classification. The Austroads 12 bin classification by AP-G84/04 - Best practices in road The Association of Australian and New axle configuration is considered stable and well received by road authorities. The issue of vehicle use data collection, analysis and Zealand Road transport and traffic classification by lengths into 3, 4 or 5 bins is also discussed and current practices are presented reporting authorities (Austroads) but not harmonised. 
Data integration and accessibility of road use data from multiple sources is considered, as many stakeholders are involved.
Industry Standards
185
Issuing Body and/or Information Source Description
AGAM01/06 Guide to Asset Management Part 1: Introduction to Asset Management (Austroads)
This guide focuses on the management of physical road assets, including a range of coordinated activities such as transport planning, design, implementation and operations. The Guide complements the Austroads publication AP-R202/02 Integrated Asset Management Guidelines for Road Networks.
Austroads Guide to Traffic Engineering Practice - Traffic Studies (Austroads)
This guide is a collection of traffic studies related to collecting and analysing traffic data. It includes a classification of parking data types into on-street and off-street. The Guide is a reference for the Glossary of Austroads Terms.
AP-R292/06 A Review of Road Use Data Integration and Management Models (Austroads)
Some of the types of road use data considered for analysis are traffic volume, traffic flow, traffic composition, weigh-in-motion, and adjustment factors. Recommendations: no new data framework is required; there is a need to link road use data with network performance data; road authorities need to remain abreast of developments at the NTDF (National Transport Data Framework) and the NDN (National Data Network); and road authorities should work with the private sector on the development of industry-standard data protocols.
AP-R293/06 A Review of Road Use Data Pricing, Partnerships and Accessibility (Austroads)
It is recommended that road authorities become partners in aggregators established to meet requirements outside the scope of road authorities. Likely partners in the aggregator would include map data providers, telematics service providers, media organisations, motorist organisations, toll road operators, current commercial traffic information providers and telecommunication providers.
AP-C87/08 Glossary of Austroads Terms (Austroads)
This Glossary includes terms and definitions relevant to Austroads members and road and transport industry practitioners, as well as a list of organisational acronyms. The Glossary has 142 pages of entries and is planned to be continually reviewed, with new terms or definitions included as deemed necessary.
Australian Government Information Interoperability Framework (Australian Government Information Management Office, AGIMO)
The main goal of the Australian Government Interoperability Framework is that 'information that is generated and held by government will be valued and managed as a national strategic asset'. The foundations of information interoperability are based on information management principles. The Technical Interoperability Framework specifies a conceptual model and the agreed technical standards that support collaboration between Australian government agencies, such as XML, Unicode, AGLS, RDF, GIF, XML Schema (DTD), and BPEL4WS.
AGLS Metadata Standard, Australian Standard AS 5044 (National Archives of Australia)
The AGLS Metadata Standard is a set of 19 descriptive elements based on the Dublin Core Metadata Element Set (DCMES). The AGLS Metadata Element Set is intended to be used by Australian government departments and agencies to improve the visibility and accessibility of their web services.
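Since the AGLS element set builds on the Dublin Core Metadata Element Set, the kind of record the standard describes can be sketched in code. The element names below (title, creator, date, type, language) are genuine DCMES elements, but the record values and the rendering helper are invented for illustration; they are not taken from the AS 5044 standard itself.

```python
# Illustrative sketch only: rendering a flat, Dublin-Core-style metadata
# record (as used by AGLS) into HTML <meta> tags. The record values are
# invented examples, not drawn from any real AGLS catalogue.

def to_meta_tags(record, scheme="DC"):
    """Render a flat metadata record as HTML meta elements."""
    return "\n".join(
        f'<meta name="{scheme}.{element}" content="{value}" />'
        for element, value in record.items()
    )

record = {
    "title": "Road Network Asset Register",   # hypothetical resource
    "creator": "Example Road Authority",      # hypothetical agency
    "date": "2009-09-28",
    "type": "dataset",
    "language": "en",
}

html = to_meta_tags(record)
print(html)
```

Embedding such tags in an agency's web pages is what makes the resources discoverable in a consistent way, which is the visibility goal the standard states.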
The meaning of these two terms, Asset Register and Asset Inventory, therefore has important consequences for what type of data is collected, how it is stored, and consequently what data is available for use. For instance, if the asset register contains only assets above a [financial] threshold, as proposed by Forum Participant 3, then it cannot be developed from the Asset Inventory data, as proposed by Forum Participant 4, because the two repositories will contain different types of data. As important as information standards are, one thing needs to be remembered: increasing the number of information standards makes it very difficult for practitioners to select the particular one they need. This study concludes that information standards are better created and adopted at industry level rather than at agency or road authority level, as this can reduce both the number of information standards required and the number of mappings required between agencies. With the increase in information sharing over the Internet, it becomes important to create industry information standards for common business tasks and processes, and to limit the number of local information models to unique circumstances that cannot fit the general industry model.
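The register/inventory distinction above can be made concrete with a minimal sketch. The field names, asset records and the $5,000 threshold are all invented; the point is only that a threshold-based register can be derived from an inventory precisely when the inventory records carry the financial attribute the register filters on, which is the data-type dependency the paragraph describes.

```python
# Hedged sketch of the Asset Register vs Asset Inventory distinction.
# All field names, identifiers, and the threshold are invented.

CAPITALISATION_THRESHOLD = 5_000  # hypothetical financial threshold

inventory = [
    # The inventory holds every physical item, with technical attributes;
    # deriving a register from it requires a financial field as well.
    {"id": "SGN-001", "type": "traffic signal", "condition": "good",
     "replacement_cost": 12_000},
    {"id": "SGN-002", "type": "street sign", "condition": "fair",
     "replacement_cost": 300},
]

def derive_register(items, threshold):
    """Keep only assets whose financial value clears the threshold."""
    return [item for item in items
            if item.get("replacement_cost", 0) >= threshold]

register = derive_register(inventory, CAPITALISATION_THRESHOLD)
print([a["id"] for a in register])
```

If the inventory records lacked the `replacement_cost` field (i.e. held a different type of data), the filter would return nothing useful, mirroring the incompatibility the forum participants describe.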
4 CONCLUSION AND RECOMMENDATIONS
Standard information models are expected to play an important role in an information environment based on Web technologies. XML has become the de facto standard for writing industry information standards in many areas of practice. Mature industries such as finance, business and electronics have developed international standards based on XML. Road Asset Management is a relatively young industry; its information models are therefore found mainly at the weak-semantics end of the spectrum, in the form of asset classifications, road hierarchies or industry standards. These information models have been created by various user groups to meet local needs, and so far there is no consistency of information models across the industry at the national level. Different taxonomies serving the same purpose (e.g. road classifications) are built on different concepts and are used by various local or state road authorities. Currently there are no XML-based Road Asset Management information models at industry level. Standard information models are welcomed by practitioners, as many benefits are associated with them, in addition to their usefulness for the Web environment. At the same time, the findings suggest that the industry faces many issues related to creating and implementing such information models at industry level.
The effort of creating information models for this industry requires agreement on definitions and concepts, which is itself considered to be an issue. Recommendations emerging from the findings of this study are: there is a recognised need for standard information models at industry level; preferably, these information standards would be created in XML format, to allow for interoperability at industry and inter-industry level using Web technologies; the information models should define tasks and business processes specific to Road Asset Management; and for overlapping areas such as Finance and Business Reporting, existing XML information standards need to be carefully considered at national level and adapted to the RAM industry's needs. New information standards should be created only for specific industry needs not covered by any existing information standard.
5 REFERENCES
1. Australian Government Productivity Commission (2008) Assessing Local Government Revenue Raising Capacity, Research Report: Canberra.
2. Australian Local Government Association (ALGA) (2009) 2009-2010 Budget Submission: Securing Australia's Economic and Social Future: Canberra.
3. Burns, P., Roorda, J. & Hope, D. (2001) A Wealth of Opportunities: A Report on the Potential from Infrastructure Asset Management in South Australian Local Government, Contents and Glossary. Local Government Infrastructure Management Group.
4. Daconta, M.C., Obrst, L.J. & Smith, K.T. (2003) The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management. 1st ed. Indianapolis, Ind.: Wiley Publishing, Inc.
5. Obrst, L. (2003) Ontologies for semantically interoperable systems, in Proceedings of the Twelfth International Conference on Information and Knowledge Management. New Orleans, LA, USA: ACM.
6. Hepp, M. & de Bruijn, J. (2007) GenTax: A Generic Methodology for Deriving OWL and RDF-S Ontologies from Hierarchical Classifications, Thesauri, and Inconsistent Taxonomies, in The Semantic Web: Research and Applications. p. 129-144.
7. Hepp, M. (2006) Products and Services Ontologies: A Methodology for Deriving OWL Ontologies from Industrial Categorization Standards. International Journal on Semantic Web & Information Systems (IJSWIS) 2, 72-99.
8. Nastasie, D.L., Koronios, A. & Sandhu, K. (2008) Factors Influencing the Diffusion of Ontologies in Road Asset Management: A Preliminary Conceptual Model, in Proceedings of the 3rd World Congress on Engineering Asset Management and Intelligent Maintenance Systems (WCEAM-IMS). Beijing, China: Springer-Verlag London Ltd.
9. Bloomberg, J. & Schmelzer, R. (2006) Service Orient or Be Doomed: How Service Orientation Will Change Your Business. John Wiley & Sons, Inc.
10. Mihai, F., Binning, N. & Dowling, L. (2000) Road Network Asset Management as a Business Process, in REAAA Conference: 4-9 September, Japan.
11. INGENIUM & IPWEA (2006) International Infrastructure Management Manual. 3rd ed. Thames, New Zealand: Association of Local Government Engineering NZ Inc (INGENIUM).
12. Institute of Asset Management (IAM) (2004) PAS 55-1 Asset Management Part 1: Specification for the Optimized Management of Physical Infrastructure Assets. British Standards Institution (BSI): London, UK.
13. Institute of Asset Management (IAM) (2004) PAS 55-2 Asset Management Part 2: Guidelines for the Application of PAS 55-1. British Standards Institution (BSI): London, UK.
14. Katz, M.L. & Shapiro, C. (1985) Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3): p. 424-440.
15. Koronios, A., Nastasie, D., Chanana, V. & Haider, A. (2007) Integration Through Standards: An Overview of International Standards for Engineering Asset Management, in Second World Congress on Engineering Asset Management, 11-14 June 2007: Harrogate, United Kingdom.
Acknowledgements
The authors are very grateful to the Road Asset Management experts who agreed to be interviewed for this study. Gaining insight into the issues of information standardisation from the practitioners who work in this industry has been extremely valuable. This paper was developed within the CRC for Integrated Engineering Asset Management, established and supported under the Australian Government's Cooperative Research Centre Programme.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
THE DIFFUSION OF STANDARD INFORMATION MODELS IN ROAD ASSET MANAGEMENT - A STUDY BASED ON THE HUMAN-TECHNOLOGY-ENVIRONMENT MODEL
Daniela L. Nastasie a, b and Andy Koronios a, b
a Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM), Brisbane, Australia
b Systems Integration and IT, University of South Australia, Mawson Lakes SA 5095, Australia
This paper reports on findings from the first stage of an exploratory study into the factors that influence the diffusion of ontologies in the Road Asset Management sector in Australia. The study investigates issues related to the diffusion of standard information models (taxonomies, road classifications and hierarchies, as well as various information systems conceptual schemas) in Road Asset Management. Individual and group interviews were conducted with 14 industry experts in four South Australian road authorities at state and local government level. The qualitative analysis of the findings is based on the preliminary HTE (human, technology, environment) conceptual model [1]. The findings suggest that the diffusion of standard information models at industry level is a complex process that combines human characteristics with technology and environment characteristics.
Key Words: Road Asset Management, Diffusion of Information Standards, HTE model
1 INTRODUCTION
The information environment of the world has changed rapidly in the last few decades due to the proliferation of the Internet. Forty years after the creation of the ARPANET (the predecessor of the Internet), the Internet has become one of the most common information exchange environments around the globe. Digital content has revolutionised the way information is used, and the need for faster and wider information communication channels has led to an increasing demand for broadband Internet access. The OECD considers broadband to have an impact on economic activity similar to that of electricity and the internal combustion engine [2]. The potential of the new information environment is still to be fully exploited, but one particular application, the Web, has become as widespread as the Internet itself. Among the Web technologies, XML (Extensible Markup Language) enables data to become independent of the applications that create or store it, changing the focus of attention from software to the information itself for the first time in the history of information management [3]. According to Sir Tim Berners-Lee, the creator of the Web, the future information environment will evolve around the semantics (meaning) of data and information and will be based on high-level information models such as ontologies. Standard information models (taxonomies, classifications, hierarchies) and information standards in general are expected to play an important role in the semantic information environment. The standard information models supporting the collection and storage of data are expected to provide the first step towards building ontologies [4], which would support semantic interaction across the Web. A more detailed analysis of the importance of standard information models in the Web environment, discussing XML and information models described in XML, can be found in [1, 5].
Effective Road Asset Management requires large quantities of data, from road inventory and condition data to road usage and financial data. Various information systems are used to support the collection and analysis of data to perform road asset management tasks [6]. In 2008 the Australian Local Government Association proposed funding of $20 million over the next four years to develop a national data collection framework ($7 million) and to establish and/or upgrade asset management at local government level ($13 million) [7]. These figures demonstrate the importance assigned by the Australian government to data and information in relation to asset management. While road and asset related digital information is gathered in increasingly large quantities, these collections of data are in many instances very difficult to analyse. The new Web environment presents opportunities for Road Asset Management in terms of dealing with large quantities of data, information management and automation of tasks. Information standardisation is one way of dealing with large amounts of data. Standard information models adopted at industry level would support the
exchange of information and sharing of knowledge between practitioners inside the Road Asset Management sector and with third-party organisations. At the same time, standard information models adopted at industry level would support the development of more formal logical structures, such as ontologies, that could allow semantic interaction over the Internet. This study investigated the factors that influence the adoption and implementation of standard information models in four government agencies in South Australia.
2 RELATED WORKS
Road Asset Management is heavily dependent on data and information. Information standards are discussed in the Information Systems literature as two separate groups: horizontal IT standards and vertical IS standards [8-10]. Markus and collaborators [8] differentiate them as follows: 'In contrast to horizontal IT standards, which concern the characteristics of IT products and apply to users in many industries, vertical IS standards focus on data structures and definitions, document formats, and business processes and address business problems unique to particular industries.' This paper uses the term standard information model to refer to potential vertical IS information models used by the various Asset Management Information Systems, as well as to information models that support classifications and hierarchies used in Road Asset Management. This approach is necessary as the Road Asset Management industry is too young to have developed vertical IS standards yet. Recent research covering Engineering Asset Management information standards [11] discusses the plethora of information standards that could be developed into vertical IS standards for this industry, with the most promising candidates being the information standards written in XML format. In the new Web-based information environment, standard information models are expected to play an important role. Current Information Systems research analyses the information models used in the Road Asset Management industry in order to get a better understanding of their current status [5]. Once these information models are created, their adoption and implementation become very important, as according to Network Effects theory [12] the benefits they provide increase with the scale of the network that adopts them.
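The progression from weak-semantics classifications towards ontologies that this literature describes (e.g. the GenTax methodology cited above) can be illustrated with a toy sketch. The road classes, namespace URI and helper function below are all invented; the sketch only shows the general idea of serialising a hierarchical classification as SKOS-style broader-than triples, not the GenTax methodology itself.

```python
# Toy sketch: turning a hierarchical road classification (weak semantics)
# into SKOS-style triples (stronger semantics). The classification and
# the URI prefix are invented examples for illustration only.

PREFIX = "http://example.org/road-class#"   # hypothetical namespace

classification = {                          # child -> parent
    "Arterial": "Road",
    "Local": "Road",
    "Sub-arterial": "Arterial",
}

def to_skos_triples(hierarchy, prefix):
    """Emit (subject, predicate, object) triples; each entry in the
    classification becomes a skos:broader link from child to parent."""
    triples = []
    for child, parent in sorted(hierarchy.items()):
        triples.append((prefix + child, "skos:broader", prefix + parent))
    return triples

for s, p, o in to_skos_triples(classification, PREFIX):
    print(f"<{s}> {p} <{o}> .")
```

Once two road authorities publish their classifications in a shared machine-readable form like this, mapping between them becomes a data problem rather than a terminology negotiation, which is the motivation the cited work gives for deriving ontologies from existing classifications.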
Research is underway to study the adoption and implementation of standard information models in order to predict the adoption of ontologies in Road Asset Management [1]. A three-dimensional conceptual model looking at Human, Technology and Environment characteristics (HTE) was designed to support the exploratory study in its first stage. The HTE model is based on a synthesis of constructs from Information Systems theories such as the Diffusion of Innovations (DOI) theory [13]; the Technology-Organization-Environment (TOE) framework [14]; the Technology Acceptance Model (TAM) [15, 16] and related theories; Network Effects (NE) [12]; Collective Action Theory (CAT) [17]; and Public Goods Theory [18]. This model has been used to structure the data collection for the South Australian study.
3 RESEARCH METHODOLOGY AND FINDINGS
This paper is based on findings from an exploratory study conducted in South Australia between October 2008 and March 2009. In order to understand how standard information models (taxonomies, road classifications and hierarchies, information systems conceptual schemas, etc.) are adopted and implemented in the Road Asset Management industry, researchers [1] proposed a three-dimensional conceptual model based on human (H), technology (T) and environment (E) characteristics, the HTE model, as in Figure 1. A semi-structured interview questionnaire containing nine questions derived from the HTE model (with the ninth question open, asking for any other comments) was designed, as presented in Appendix 1. The questionnaire was used in 5 individual and 3 group interviews conducted with 14 industry experts in 4 South Australian government agencies at state and local government level. The transcripts from the interviews were analysed using the NVivo 8 software and the findings were used to revise the HTE model. The findings show that the three stages of the diffusion of consistent information structures (adoption, implementation and diffusion) are supported by HTE drivers and held back by HTE inhibitors, as shown in Figure 2.
3.1 RESEARCH FINDINGS
Findings from this study show that:
• Complex information systems pass the adoption stage raising high expectations, but they create big problems, or even fail to deliver at the implementation and diffusion stages, if not enough resources exist to manage their complexity.
• Organisations tend to be more cautious in adopting new information technologies. Early adopters of Information Systems now prefer to wait longer before introducing new information technologies, because most of their experiences as early adopters were negative. If the implementation is perceived as difficult, the findings show that people are reluctant to become early adopters.
• External influences can be direct (as in government policies or influences from other councils) or indirect (through the people who move from one organisation to another). People with prior experience in the same area in other organisations bring a new way of looking at things and can influence the new environment. While the impact of
people on organisations is obvious through the decisions they make day by day, the environment (organisation) influences people as well, through the work practices in place, which remain imprinted in people's minds long after they leave the organisation. This is an example of prior experience influencing an environment characteristic (a business process).
[Figure 1: HTE conceptual model, relating the Human dimension (individual characteristics, group characteristics, external influences), the Technology dimension (technology characteristics, technological context) and the Environment dimension (organization's characteristics, decision-making characteristics, communication characteristics) to the Adoption, Implementation and Dissemination stages. Source: [1], fig. 4, p. 1173]
[Figure 2: HTE model with drivers and inhibitors. Source: Authors]
• One of the main issues is how to get agreement. Factors that hinder the process of agreement on shared terminologies in RAM include:
o the plethora of Information Systems that have different conceptual schemas behind them, and hence different concepts and terms;
o the historical development of information management in each organisation, with different business processes responding to the same needs.
In relation to ontologies, the issue of agreement has been raised in the past by researchers trying to reach agreement at the design stage of ontologies [19] or investigating the automation of agreements between agents using different ontologies [20]. There have been no studies so far on getting agreement on taxonomies at industry level.
• Agreement on taxonomies. The majority of respondents pointed out that agreement on terminology is very much needed, but very difficult to achieve. Getting agreement on terminology is an essential step in building ontologies at industry level, and more research is needed to understand how agreement can be achieved.
• Interoperability at industry level. Practitioners tend to connect more with peers who work on similar tasks and have similar issues at industry level, rather than with staff from other areas in the same organisation. This leads to the idea that while information interoperability at enterprise level is important from the enterprise point of view, communication at industry level is very important for performing business tasks effectively and efficiently. A number of interviewees reported that using different taxonomies for similar tasks creates real problems with benchmarking and communication at industry level. If one organisation uses Confirm and another uses Hansen as their Asset Management Information System, then the way they collect and store information about their assets differs. While it is not imperative that these two systems 'talk' to each other, it is very important that people discuss their issues based on similar concepts.
• Taxonomies and metadata. Taxonomies provide relevant metadata to be associated with information contained in documents. This allows for a consistent and systematic description of all the information contained in documents, and hence for a reliable exchange of information at industry level. A variety of taxonomies is required to cover the different needs of the industry. Consistent taxonomies at industry level are the basis of formal ontologies.
• Human control over information decreases as the semantics level of the information structures increases. Once people don't understand 'how the system works', they stop using it. This can make the new technology useless, or used at a lower capacity, as people need to have control over the data and information at all times.
• Politics has a big influence on the adoption and implementation of information standards. Besides the technical issues that have been investigated in the Computer Science arena, practitioners report social, economic and legal difficulties, a view supported by recent research in developing relevant ontologies [21].
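The benchmarking problem the interviewees describe is, at bottom, a vocabulary-mapping problem. As a hedged sketch (all term names below are invented and are not taken from the Confirm or Hansen products), two agencies' local terms become comparable only once both are mapped to a shared industry concept:

```python
# Sketch of the benchmarking problem: two agencies' asset systems label
# the same thing differently, so comparison needs a mapping into a
# shared industry vocabulary. All terms are invented examples.

AGENCY_A_TO_SHARED = {"Footway": "footpath", "Carriageway": "pavement"}
AGENCY_B_TO_SHARED = {"Sidewalk": "footpath", "Roadway": "pavement"}

def comparable(term_a, term_b, map_a, map_b):
    """Two local terms are comparable if they both map to the same
    shared industry concept."""
    shared_a = map_a.get(term_a)
    shared_b = map_b.get(term_b)
    return shared_a is not None and shared_a == shared_b

print(comparable("Footway", "Sidewalk",
                 AGENCY_A_TO_SHARED, AGENCY_B_TO_SHARED))
```

With an agreed industry taxonomy, each agency maintains only one mapping (local terms to shared concepts) instead of one mapping per partner agency, which is the reduction in mapping effort argued for earlier in the paper.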
3.2 DATA ANALYSIS
Each of the three stages of the diffusion of standard information models (adoption, implementation and dissemination) is under the influence of two groups of factors: the HTE drivers, which motivate the diffusion of information models, and the HTE inhibitors, which can hinder or stall the diffusion process, as presented in Figure 2. The drivers and inhibitors from the findings of this study were then classified according to the HTE model [1], as shown in Figure 3 (HTE drivers) and Figure 4 (HTE inhibitors). From a visual analysis of the two graphics it can be noticed that the HTE drivers outnumber the HTE inhibitors. Even though the number of factors does not necessarily reflect their importance, it provides an indication of the role that standard information models are considered to play in Road Asset Management.
[Figure 3: HTE drivers. Source: the authors]
Some of these factors influence the diffusion of standard information models throughout all three stages. For example, legislation supporting the diffusion of a particular information model influences all three stages, while the benefits of data access and data maintenance are more important during the implementation and dissemination stages. Similarly, some inhibitors, such as the budget, influence all three stages of the diffusion process, while configuration issues
influence mainly the implementation of the information model. It is therefore necessary to distinguish between the factors according to their role in the process of diffusion of information models. Considering the drivers presented in Figure 3, it can be noticed that some drivers, such as external influences (e.g. the Web environment, which could force an agency into adopting and implementing a particular information model in order to be able to communicate with partners) or legislation (either mandatory or of an incentive nature), are motivators in themselves. These types of drivers will be called motivators. Another class of drivers (such as improved efficiency, transfer of skills, shared knowledge, etc.) can be grouped under the name of expected benefits, where expected benefits are one type of motivator. Similarly, the HTE inhibitors from Figure 4 can be grouped into factors that can block the diffusion process, such as an inadequate budget or lack of agreement on the standard information model, named blockers, and other issues that could impact negatively on the diffusion process if not addressed properly (such as definition of terms, resistance to change, etc.), which can be considered blockers for at least part of the diffusion process. The HTE drivers and inhibitors grouped by their role in the diffusion process are presented in Table 1. Expected benefits are listed under motivators as a group and are detailed in a separate column. One particular group of benefits was related to improved data quality; these were separated in the list to draw attention to the large number of data quality benefits that the practitioners mentioned during the interviews. The HTE inhibitors were also separated into blockers and other issues, where other issues are listed in a separate column. Interviewees were concerned that information standardisation would reduce flexibility in terms of how processes are defined and would lead to a waste of knowledge in organisations that would have to change their business processes to fit the new information model.
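The classification of factors by role and by diffusion stage described above can be made concrete with a small data-structure sketch. The factor names and their roles and stage assignments are taken from the study's findings as reported in the text; the encoding and the helper function are illustrative only, not part of the HTE model itself.

```python
# Illustrative encoding of the factor classification: each factor is
# tagged with its role (motivator / blocker / context-dependent) and
# the diffusion stages it influences, per the findings in the text.

STAGES = ("adoption", "implementation", "dissemination")

factors = {
    "legislation":          {"role": "motivator", "stages": STAGES},
    "external influences":  {"role": "motivator", "stages": STAGES},
    "budget":               {"role": "blocker",   "stages": STAGES},
    "configuration issues": {"role": "blocker",
                             "stages": ("implementation",)},
    "prior experience":     {"role": "context-dependent",
                             "stages": STAGES},
}

def factors_for_stage(stage, role=None):
    """List the factors influencing a given diffusion stage,
    optionally filtered by role."""
    return sorted(name for name, f in factors.items()
                  if stage in f["stages"]
                  and (role is None or f["role"] == role))

print(factors_for_stage("implementation", role="blocker"))
```

Such an encoding would let later stages of the research query, for each diffusion stage, which factors need to be managed, which is the stage-by-stage analysis the paper calls for in its further-work section.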
[Figure 4: HTE inhibitors. Source: the authors]
A third category of factors that influence the diffusion of information standards are factors that can act as either drivers or inhibitors, depending on the context in which they apply. These are factors such as prior experience (which can be a driver where prior experience supports standard information models, or an inhibitor where prior experience with standard information models was negative) or the amount of information required in particular areas of practice (it was noticed during the interviews that respondents working in operational areas considered information standards almost irrelevant for their line of business, while areas such as finance, decision making or systems considered that standardisation of information would greatly help their work). These factors influence personal motivation, for example when staff are required to perform activities such as the collection of data required by certain information models. As this job is important for the quality of the data collected and stored in the system, if personal motivation acts as a blocker it needs to be acknowledged and dealt with appropriately in order for the diffusion of information standards to be successful. For the time being, personal motivation has been included in both the HTE motivators and the HTE blockers. It should be noted that a factor like configuration issues, even though it mainly influences the implementation stage, can have devastating effects on the whole diffusion process, as some of the interviewees reported. If the configuration of the system is not done properly, sometimes due to lack of time, sometimes due to lack of expertise, the system cannot work efficiently and therefore does not provide the expected benefits. This creates the false impression that the system is not appropriate for the job, when in reality it is simply not configured properly.
Therefore the interaction between the factors is very important at any point in time. The IS literature discusses a similar type of interaction in relation to organisational resistance to the implementation of Information Systems [22]. Standard information models are at the core of information systems, so from this point of view the implementation of a new information model would be very similar to the implementation of a new information system. Even though in the HTE model resistance to change is one of the issues that need to be addressed in the process of implementation and dissemination, the findings of this study are in agreement with the interaction theory presented by Markus [22], who considers that people resist change because of the interaction between people characteristics and system characteristics. The HTE model adds to this view the interaction with environment characteristics, including influences external to the organisation, such as legislation, which is an important driver that can change the balance of power in the diffusion of information standards.

Table 1. Drivers and inhibitors by their role in the diffusion of standard information models [Source: Authors]

HTE Drivers
Motivators: external influences (e.g. Web environment, government incentives); legislation (mandatory or incentive); expected benefits; personal motivation.
Expected benefits: assist accountability; assist benchmarking; assist transparency; collaboration; consistent business processes; gaining grants; improved data quality (data access, data analysis, data collection, data consistency, data control, data independence, data integration, data maintenance, data structure, more useful data); improved access control; improved communication; improved services; increased efficiency; information management; legislation; reduced costs; reporting; shared knowledge; strategic planning; support for decision making; third-party information exchange; transfer of skills.

HTE Inhibitors
Blockers: agreement; budget; other issues; personal motivation.
Other issues: business change; configuration issues; data representation; data re-structuring; definition issues; different interpretations; lack of expertise; local politics; perspective; reduced flexibility; resistance to change; size of organisation; technology requirements; wasted knowledge.

4 CONCLUSION
The proliferation of the Internet and the associated Web technologies presents the Road Asset Management industry with an information environment suitable for managing large amounts of data very efficiently. In order to take advantage of the Web information environment, industry-specific information standards need to be created. The standardisation of industry-specific information can enhance human communication through a common terminology and can assist the further automation of tasks. Semantic information retrieval can be achieved by increasing the level of semantics of the standard information models and designing ontologies specific to the Road Asset Management sector. These types of information models require communication at industry level, which has different requirements from communication inside an organisation. This paper is based on an exploratory study in four government agencies in South Australia investigating issues related to the adoption and implementation of standard information models in the Road Asset Management industry. According to the findings of this study, Road Asset Management practitioners can think of many benefits to be derived from adopting and implementing industry-level information standards: improved accountability and transparency, clear benchmarking and reporting, reduced costs and transfer of skills, improved communication at industry level and shared knowledge, etc. The
findings show that the interviewees also consider the issues associated with adopting industry information standards, such as reduced flexibility, lack of expertise or wasted knowledge if business processes have to be changed in order to implement the new information standards. These positive and negative factors have been classified according to the HTE (Human, Technology, Environment) model, and some major motivators as well as blockers of the implementation process have been defined. Among the motivators, external influences (such as the existence of the Web environment or government incentives) and legislation (mandatory or incentive) add to the expected benefits to support the implementation of standard information models. The main blockers, according to the findings of this study, are agreement at industry level on such standard information models and budget constraints, as well as several other issues that do not have the intensity of the blockers but could still have a negative impact on the implementation process. It has also been noted that certain factors, such as prior experience or the amount of information required in particular areas of business, can act as either motivators or blockers. These factors have been classified under the personal motivation category and listed under both headings (motivators and blockers). 5
LIMITATIONS AND FURTHER WORK
This exploratory study is the first in a number of studies planned to be conducted on this topic. The study has been limited in the number of participants (14 experts), location (the Road Asset Management industry in South Australia) and time (October 2008 to March 2009). These restrictions have been imposed by the time limits of this project and the number of volunteers among the industry experts. The findings from this study need to be validated against findings from road authorities in other states in order to gain a deeper understanding of the issues related to the adoption and implementation of standard information models in the Australian Road Asset Management sector. Further analysis is required to establish the factors that have a positive or negative influence on each of the three stages of the diffusion of standard information models (adoption, implementation and dissemination) in order to refine the HTE model. This analysis needs to take into account the interaction between the HTE factors, as the diffusion of standard information models is a dynamic process in which the factors influence each other. Therefore, even though an analysis of each individual stage of the diffusion is necessary to find out which factors have a greater influence on which stage, the fact that the factors influence each other has to be considered as well. A further revision of the HTE model will be carried out once the findings from other states are analysed. Recommendations for a best practice approach to the diffusion of standard information models in Road Asset Management will follow. 6
REFERENCES
1 Nastasie, D.L., A. Koronios, and K. Sandhu, (2008) Factors Influencing the Diffusion of Ontologies in Road Asset Management – A Preliminary Conceptual Model, in Proceedings of the 3rd World Congress on Engineering Asset Management and Intelligent Maintenance Systems (WCEAM-IMS). Beijing, China: Springer-Verlag London Ltd.
2 OECD (Organisation for Economic Co-operation and Development), (2008) OECD Information Technology Outlook 2008: Highlights.
3 Daconta, M.C., L.J. Obrst, and K.T. Smith, (2003) The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management. 1st ed. Indianapolis, Ind.: Wiley Publishing, Inc.
4 Hepp, M. and J. de Bruijn, (2007) GenTax: A Generic Methodology for Deriving OWL and RDF-S Ontologies from Hierarchical Classifications, Thesauri, and Inconsistent Taxonomies, in The Semantic Web: Research and Applications. 129-144.
5 Nastasie, D.L. and A. Koronios, (2009) The Role of Standard Information Models in Road Asset Management, in Fourth World Congress on Engineering Asset Management (WCEAM), Springer-Verlag London Ltd.: Athens, Greece.
6 Austroads, (2009) Asset Management – FAQ. [cited 2009 01 April]; Available from: http://www.austroads.com.au/asset/faq.html.
7 Australian Government Productivity Commission, (2008) Assessing Local Government Revenue Raising Capacity: Research Report. Canberra.
8 Markus, M.L., C.W. Steinfield, and R.T. Wigand, (2003) The Evolution of Vertical IS Standards: Electronic Interchange Standards in the US Home Mortgage Industry, in ICIS 2003 MISQ Special Issue Workshop on Standards.
9 Markus, M.L., et al., (2006) Industry-Wide Information Systems Standardization as Collective Action: The Case of the U.S. Residential Mortgage Industry. MIS Quarterly, 30, 439-465.
10 Wigand, R.T., C.W. Steinfield, and M.L. Markus, (2005) Information Technology Standards Choices and Industry Structure Outcomes: The Case of the U.S. Home Mortgage Industry. Journal of Management Information Systems, 22(2), 165-191.
11 Koronios, A., et al., (2007) Integration Through Standards – An Overview of International Standards for Engineering Asset Management, in Second World Congress on Engineering Asset Management, 11-14 June 2007. Harrogate, United Kingdom.
12 Katz, M.L. and C. Shapiro, (1986) Technology Adoption in the Presence of Network Externalities. Journal of Political Economy, 94(4), 822.
13 Rogers, E.M., (2003) Diffusion of Innovations. 5th ed. New York: Free Press.
14 Tornatzky, L.G. and M. Fleischer, (1990) The Processes of Technological Innovation. Lexington, MA: Lexington Books.
15 Davis, F.D., (1989) Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 318-340.
16 Davis, F.D., R.P. Bagozzi, and P.R. Warshaw, (1989) User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 35(8), 982-1003.
17 Weiss, M. and C. Cargill, (1992) Consortia in the Standards Development Process. Journal of the American Society for Information Science, 43(8), 559-565.
18 Kindleberger, C.P., (1983) Standards as Public, Collective and Private Goods. Kyklos, 36(3), 377.
19 Skuce, D., (1997) How We Might Reach Agreement on Shared Ontologies: A Fundamental Approach, in AAAI Technical Report SS-97-06. University of Ottawa.
20 Laera, L., et al., (2006) Reaching Agreement over Ontology Alignments, in The Semantic Web – ISWC 2006. 371-384.
21 Hepp, M., (2007) Possible Ontologies: How Reality Constrains the Development of Relevant Ontologies. IEEE Internet Computing, 11(1), 90-96.
22 Markus, M.L., (2002) Power, Politics and MIS Implementation, in Qualitative Research in Information Systems: A Reader, M.D. Myers and D. Avison, Editors. SAGE Publications: London. 19-48.
Acknowledgements The authors are very grateful to the Road Asset Management experts who agreed to be interviewed for this study. Gaining practical insight into the issues of information standardisation has been extremely valuable for this study. This paper was developed within the CRC for Integrated Engineering Asset Management, established and supported under the Australian Government's Cooperative Research Centre Programme.
Appendix 1 – Semi-structured interview questionnaire based on the HTE model
1. What is your role in this organization and what is your background?
• professional experience
• qualifications
• How would you describe your organization's environment in regard to adopting information technologies (early or late adopter)? Who is usually involved at the adoption stage vs. the implementation stage? (inter-departmental relations, composition of groups)
2. In regard to the adoption and/or implementation of [specific standard/information system], could you provide some details about the whole process?
3. What can you say about the qualities and shortcomings of the [specific standard/information system]?
4. How did the already existing technologies, or the absence of some technology, influence the process of adoption and implementation of the [specific standard/information system]?
5. What can you say about the human resources expertise and skills involved in the process of adoption and implementation of the [specific standard/information system]?
6. What communication channels were used to implement this [specific standard/information system]?
• role of the intra- and inter-organizational networks
• industry and government networks
• facilitating conditions
• coordination among these networks
7. What external influences do you believe were related to the decision to adopt and implement this [specific standard/information system]?
1. the [specific standard/information system] promotion agent
2. partner organizations had already adopted it
3. it was imposed by an authoritative body
4. other
8. Any other comments?
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
DESIGN AND IMPLEMENTATION OF A REAL-TIME FLEET MANAGEMENT SYSTEM FOR A COURIER OPERATOR
G. Ninikas a, Th. Athanasopoulos a, H. Marentakis b, V. Zeimpekis a, I. Minis a
a Department of Financial Management Engineering, University of the Aegean, 31 Fostini Str., 82 100, Chios, email: {g.ninikas; t.athanasopoulos; vzeimp; i.minis}@fme.aegean.gr
b ELTA Courier S.A., 40 D. Gounari Str., 153 43, Agia Paraskevi, Athens, email: [email protected]
The need for higher customer service and the minimization of operational costs have led many courier operators to seek innovative information systems for the efficient handling of customer requests that occur either during the planning or the execution of daily deliveries. These systems address a series of operational issues that occur in the courier sector, such as fleet and HR management, vehicle routing and monitoring, proof of delivery, track-and-trace services, and so forth. However, the demanding environment of the courier industry generates further operational needs that have not been fully addressed by existing systems. This paper presents the architecture of an innovative fleet management system that has been developed and implemented at a Hellenic courier operator in order to address daily challenges and provide an integrated framework that effectively supports dispatchers during the planning and execution of delivery schedules. The proposed system manages and allocates in real time the dynamic requests that occur during service execution, as well as the bulk deliveries that need to be serviced over a multiple-period (days) time horizon upon their receipt. The system has been evaluated through simulation tests and field experiments so as to ensure the robustness and interoperability of its components and to assess the potential of adopting such a system in the courier industry. Key Words: Intelligent Information Systems, Dynamic Vehicle Routing, Telematic Services, System Design 1
INTRODUCTION
In a courier service environment, efficient delivery is the key issue for customer satisfaction. The use of manual or empirical techniques for allocating customers to delivery vehicles, although necessary, is by no means sufficient to address customer requests that are likely to occur during delivery planning or execution due to randomness and complexity. A non-optimized vehicle allocation plan may have negative effects on delivery performance, thus leading to higher costs and inferior customer service. With the advances in telecommunication and information systems such as Global Positioning Systems (GPS), Geographical Information Systems (GIS), and Intelligent Transportation Systems (ITS), it has become realistic to monitor the execution of routes in real time, to control operational costs and, at the same time, to fulfil certain customer requests in real time, such as proof of delivery (POD). However, requests that arise during delivery execution, and route planning over a multi-day framework, cannot be addressed by current systems. The main aim of this paper is to present an innovative fleet management system that incorporates all the modules necessary to cope with the aforementioned operational needs. The paper is organized as follows. Section 2 presents the characteristics and the main operational challenges of the courier business sector. Section 3 describes the problems/challenges addressed by the proposed system, along with their theoretical background. Section 4 analyses the system architecture together with its basic modules, while Section 5 describes the actions taken (pilot testing) to evaluate the efficiency of the proposed system. Finally, Section 6 presents the main conclusions of the system implementation. 2
THE COURIER SERVICES ENVIRONMENT
The core business of a courier company incorporates the delivery and pick-up of parcels and envelopes in a short period of time. A typical model of such type of delivery involves six major stages: a) the pick-up of the item, b) the initial processing of
the item in the local service point (LSP), c) the further processing in the main Hub, d) the long-haul transportation to the destination area Hub, e) the processing in the destination Hub and f) the delivery of the item from the destination LSP. This process is depicted in Fig. 1. In a typical courier delivery schedule, a large number of requests are known to the dispatcher in advance and concern predefined deliveries to customers or scheduled pick-ups. The nature of courier problems, however, is moderately dynamic [1]: a moderate number of requests typically appear dynamically over time as the delivery plan is executed. As a result, vehicle routing is a challenging task, as the locations to be served vary greatly from day to day. Apart from dealing with their usual work, most courier companies also deal with bulk deliveries. These items can be informational/advertising material, packages ordered through other sales networks such as the Internet, etc. The courier company contracts with a client, based on a service level agreement (SLA), in order to provide distribution services to several customers.
Figure 1: Typical model of courier services distribution procedures

Dynamic requests and bulk deliveries contribute significantly to the revenues of a courier company. On the other hand, the workload that they generate demands intelligent management in order to cope with additional operational problems, such as: (a) the efficient allocation of dynamic requests to routes, and (b) the management of bulk deliveries over a multiple-period (days) time horizon. On top of that, there is a series of dynamic parameters that affect the execution of a delivery and/or pick-up schedule. These are:
• Distribution area (e.g. area size)
• Type and volume capacity of vehicle (scooter, van)
• Pick-up/delivery item (letter, parcel)
• Type of distribution work (pick-up or delivery)
• Distribution times (travel/service/waiting times)
• Available resources (e.g. delivery vehicles and labour)
• Unexpected incidents (e.g. traffic congestion)
Due to the aforementioned dynamic parameters, dispatchers face various difficulties during delivery scheduling and execution. Table 1 shows typical challenges that arise during route planning and execution, along with the main consequences resulting from them.
Table 1 Courier management challenges

Delivery Planning:
• Manual scheduling and planning of the initial routes
• Massive planned deliveries/pick-ups

Delivery Execution:
• Manual assignment of dynamic requests to vehicles
• Late Proof of Delivery (PoD)
• Lack of fleet surveillance
• Deviations from the planned pick-up/delivery time windows

Consequences:
• Increased distribution cost
• Limited fleet control and delivery quality
• Customer complaints
• Inability to handle dynamic incidents (e.g. dynamic requests)
• Increased pick-up/delivery completion time
• Degraded customer service and quality of service
In order to define the main requirements for the design of the proposed intelligent fleet management system, an extensive literature review in the area of express logistics, coupled with in-depth interviews with a leading Greek courier services company, was performed. The resulting main requirements are presented in Table 2.

Table 2 Main requirements of courier services

Requirement                                              Addressed by current systems
Automation in the initial routing and scheduling         yes
Fleet monitoring/surveillance and vehicle performance    yes
Proof of Delivery (PoD)                                  yes
Bulk deliveries allocation in a multi-day framework      partially
Dynamic request handling                                 no
Existing fleet management systems can satisfy only a subset of the requirements of Table 2. In particular, most systems can neither handle in a dynamic manner the requests that arise during delivery execution nor effectively route deliveries that are planned in a multi-day framework. The aforementioned requirements constitute the main problems/challenges to be confronted by the proposed system and are described analytically in the following section.
3
PROBLEM DESCRIPTION AND THEORETICAL BACKGROUND
Several research studies have addressed issues related to those tackled by the proposed system. The aforementioned problems/challenges are described below, together with selected references on similar problem settings. 3.1
Dynamic request handling
While a delivery plan is being executed, a fleet of vehicles is in movement to service customer requests known in advance, while new requests may arise dynamically over time as the working plan unfolds. Many different factors must be considered when a decision about the allocation and scheduling of a new dynamic request is taken, such as the current location of each vehicle, the currently planned routes and schedules, the characteristics of the new request, travel times between the service points, the characteristics of the underlying road network, the service policy of the company and other related constraints. Dynamic request handling is described mainly by the Dynamic Vehicle Routing Problem [2,3,21]. These problems exhibit special features [4]. Extensive research has been carried out on related models, focusing on the Vehicle Routing Problem with Time Windows [5], the Pickup and Delivery Problem with Time Windows [6,7,8] and other variations of them. In the literature, one can find three main approaches to cope with newly occurring requests. Typically, some static algorithm is first applied to the requests that are known at the start of the day to construct an initial set of routes. In the first approach, as the work day unfolds, a fast local update procedure is used to integrate each new request into these routes; insertion-based heuristics derived from local-search approaches are one way to solve these types of routing problems [9,10,11]. The second approach is re-optimization of the total VRP, used to improve the initial static solution every time an event occurs; a typical example is tabu search [12]. More recently, a third approach has been proposed which, instead of reacting to problem changes, uses waiting strategies [13,14], i.e. it anticipates future requests by positioning the vehicles at strategic locations or by exploiting information about future demand. 3.2
Routing of bulk deliveries in a multi-day framework
Bulk delivery requests are given an N-day horizon upon their receipt, within which each customer must be served; customers are thus provided with a specific service level (e.g. delivery within N days). The problem consists of deciding the best allocation of the requests to the schedules of the next N days in order to a) avoid certain bottlenecks that may occur, b) balance the additional workload between periods (days), and c) allow a proportion of the network availability to be allocated to dynamic requests through, for example, time buffers at phantom customers. Related multi-period problems in the literature include the Inventory Routing Problem (IRP) [17, 18] and the Periodic Vehicle Routing Problem (PVRP) [19, 20]. For these problems it is clear that a solution obtained by applying a single-period approach repeatedly will not match the efficiency of a multi-period solution. Thus, approaches used to solve such problems may sacrifice local optimality within a single period (day) in order to obtain global optimality over the entire horizon.
199
Apart from the above problem settings, rolling-horizon problems have been studied in [15, 16]. The basic difference between the current problem and the PVRP and IRP is that here the frequency of customer requests is not known in advance (as it is in the PVRP), and there is no a priori knowledge from which to determine this frequency (as in the IRP, in which the inventory and the consumption rate per customer are known). 4
SYSTEM DESIGN
In order to deal with the aforementioned operational inefficiencies and problems, an integrated system was designed and implemented. The system comprises several modules/components that allow continuous communication and information sharing. Initially, the core system components/modules are presented, followed by the system's architecture, which incorporates all the modules along with the required inputs, outputs and interfaces. Additionally, the necessary user interfaces were designed and implemented in order to support the end users of the system (dispatchers, courier managers). 4.1
System Components and Modules
The core parts of the system include two (2) components which incorporate the algorithmic modules (dynamic request handling, bulk deliveries allocation), as well as two (2) systemic tools (fleet monitoring tool, initial routing module), which support the integrated fleet management system.

Dynamic Request Handling

We consider a fleet of vehicles on their route to customers and a subset of requests that are dynamically revealed over time. The service is realized under various operational constraints, such as time windows, limited vehicle capacity and route length, and, therefore, decisions and changes must be made in a highly dynamic environment. Fig. 2 describes the proposed dynamic request management handler. As can be seen, the Allocate Module assigns these requests to vehicles. The inputs for this module are the initial routes of the day for all vehicles and the feedback given by the fleet surveillance system depicting the current location and availability of vehicles.
Figure 2: Dynamic request handling

A fast local update procedure is used to integrate the new requests into the routes. The routing algorithm that solves this problem should be computationally efficient, since the solutions must be provided in real time. The key feature of the system is that it can allocate a large number of dynamic requests simultaneously and can provide good solutions for moderately large customer sets in small computational times, taking into consideration all the operational constraints (time windows, available capacity of the vehicles, total length of the route, etc.). After solving the allocation of the dynamic requests, the system provides the vehicles with the updated delivery plan via the telematics system. A graphical user interface (GUI) was developed for the dynamic request handling module in order to support the end users of the system, as depicted in Fig. 3.
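As an illustration of such a fast local update, the sketch below inserts a new request at the cheapest feasible position across the current routes, skipping positions that violate a time window, the vehicle's capacity, or an early-arrival wait. This is a hypothetical minimal sketch, not the system's actual algorithm: all names, the Euclidean travel-time model and the exact constraint set are assumptions based on the constraints listed in the text.

```python
from dataclasses import dataclass
from math import hypot
from typing import List, Optional, Tuple

@dataclass
class Stop:
    loc: Tuple[float, float]      # (x, y) position of the customer
    window: Tuple[float, float]   # earliest/latest service start
    service: float                # on-site service time
    demand: float                 # load picked up or delivered

def travel(a, b):
    """Euclidean travel time between two points (illustrative model)."""
    return hypot(a[0] - b[0], a[1] - b[1])

def route_feasible(stops: List[Stop], start, t0, capacity):
    """Check time windows and total demand along a candidate route."""
    if sum(s.demand for s in stops) > capacity:
        return False
    t, here = t0, start
    for s in stops:
        t = max(t + travel(here, s.loc), s.window[0])  # wait if early
        if t > s.window[1]:                            # window violated
            return False
        t += s.service
        here = s.loc
    return True

def cheapest_feasible_insertion(routes, new: Stop, starts,
                                t0=0.0, capacity=10.0):
    """Return (route_index, position, extra_cost) of the best feasible
    insertion of `new`, or None if no position is feasible."""
    best: Optional[Tuple[int, int, float]] = None
    for r, route in enumerate(routes):
        for i in range(len(route) + 1):
            cand = route[:i] + [new] + route[i:]
            if not route_feasible(cand, starts[r], t0, capacity):
                continue
            prev = starts[r] if i == 0 else route[i - 1].loc
            extra = travel(prev, new.loc)
            if i < len(route):  # detour cost if not appended at the end
                extra += travel(new.loc, route[i].loc) - travel(prev, route[i].loc)
            if best is None or extra < best[2]:
                best = (r, i, extra)
    return best
```

Because only insertion positions are evaluated, the update runs in milliseconds even for moderately large customer sets, which is the property the text requires of the real-time handler.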
Figure 3: Dynamic Requests Handling Module

Routing of bulk deliveries in a multi-day framework

The scope of this module is to allocate the bulk deliveries over a multi-day horizon in order to rationalize the extra routing workload. The module consists of three main parts: a) the historical data analysis, b) the routing component, and c) the allocation component. Fig. 4 presents the module's procedures and operational timings, and Fig. 5 presents the user interface designed and implemented for the allocation module.
(Figure 4 diagram: raw historical data feeds a historical data analysis step, which produces the expected customers and parameters; routing these yields the expected routes and their parameters (route length, capacities), forming a model of the working VRP environment. The allocation problem then combines the daily requests input (flexible requests) with the unallocated customers from previous days, producing routed customers and unallocated customers. The analysis runs periodically (e.g. once a month or based on service seasonality); the allocation runs daily.)
Figure 4: Allocation of Requests

Initially, historical data collected over a specified time period (e.g. one month) are analyzed to provide the phantom (expected) customers. These customers represent urban areas with high request density, which are scheduled/serviced on each day of operation. The expected customers are routed in order to create the expected routes, which represent the typical paths that are most likely to be traversed on almost every day and provide the basis for the allocation procedure over a specified time horizon. The allocation procedure is as follows. During each planning day, the following N days are scheduled. The customers considered for allocation include those received during the last time period, as well as customers that have not yet been scheduled. Customers are allocated to the next N periods (days) based on an overall cost minimization procedure over the N-period horizon. Customers that must be serviced on day 1 (i.e. requests for which the N-day horizon expires) are scheduled with priority. In order to maintain a certain customer service level, any candidate request is forced to be scheduled no later than N days after request acceptance. The allocation procedure terminates when all customers have been assigned to one route of the N periods. Customers assigned to the first period's routes are considered for implementation only, while the remaining customers (allocated to periods 2 to N) form the unallocated customers of the next period's allocation procedure. The operation is repeated each day by dropping the first day of the N-day horizon and adding day N+1; thus, the problem is solved in a rolling-horizon framework.
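The rolling-horizon step described above can be sketched as follows. This is a simplified illustration under assumed names: it uses a greedy earliest-feasible-day rule with per-day capacities rather than the system's overall cost minimization, but it preserves the stated rules that deadline-expiring requests are forced into day 1 and that anything unplaceable is carried over to the next planning day.

```python
def allocate_day(pending, today, horizon_n, day_capacity):
    """Assign pending requests (id, due_day, load) to the next
    `horizon_n` days of a rolling horizon.

    Requests whose deadline is today are forced into day 1 of the
    horizon; the rest take the earliest day with spare capacity no
    later than their deadline. Unplaceable requests are carried over.
    """
    remaining = {d: day_capacity for d in range(today, today + horizon_n)}
    schedule = {d: [] for d in remaining}
    carry_over = []
    # Requests that expire first are handled first (priority rule).
    for rid, due, load in sorted(pending, key=lambda r: r[1]):
        if due <= today:                       # horizon expires: must go out today
            schedule[today].append(rid)
            remaining[today] -= load
            continue
        last = min(due, today + horizon_n - 1)
        for d in range(today, last + 1):
            if remaining[d] >= load:
                schedule[d].append(rid)
                remaining[d] -= load
                break
        else:
            carry_over.append((rid, due, load))  # retry on the next planning day
    return schedule, carry_over
```

On the next planning day the caller would advance `today` by one, append the new day's requests and the returned `carry_over` to `pending`, and call the function again, mirroring the drop-day-1/add-day-N+1 operation of the text.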
Figure 5: Bulk Deliveries Allocation Module User Interface (Allocation Component)

Fleet Monitoring Module

The fleet monitoring module provides all the necessary real-time information on the state of the fleet (location, availability). Its scope is to provide (a) a user-friendly environment for fleet surveillance during the execution of the scheduled routes, (b) the interfaces needed for the collection of historical data, and (c) the main interface between the dynamic request handling module and the real-time information on the actually implemented routes. The module comprises an XML client (field information database), several vehicle GPS devices (transmission of vehicle position information) and the fleet surveillance software (VIEWER). Fig. 6 presents the user interface developed and implemented for the fleet monitoring module.
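As a toy illustration of this XML field-information flow, the snippet below parses a vehicle position report of the kind the module's XML client might receive from a GPS-equipped vehicle. The message schema, field names and coordinate values are assumptions; the paper only names the XML client, the GPS devices and the VIEWER software.

```python
import xml.etree.ElementTree as ET

# Hypothetical position report; the real message format is not
# specified in the paper.
SAMPLE = """<position_report>
  <vehicle id="VAN-03" type="van"/>
  <fix lat="38.2466" lon="21.7346" timestamp="2009-06-15T10:42:00"/>
  <status available="true"/>
</position_report>"""

def parse_position_report(xml_text):
    """Extract the fields the surveillance viewer would need."""
    root = ET.fromstring(xml_text)
    fix = root.find("fix")
    return {
        "vehicle": root.find("vehicle").get("id"),
        "lat": float(fix.get("lat")),
        "lon": float(fix.get("lon")),
        "timestamp": fix.get("timestamp"),
        "available": root.find("status").get("available") == "true",
    }
```

Reports parsed this way would feed both the live map display and the historical-data collection mentioned in point (b) above.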
(Figure 6 screenshot annotations: vehicle/route information; analytical route info for a single vehicle; actual vs. plan (vehicle delays); map/route imaging.)
Figure 6: Fleet Monitoring System User Interface

Initial Routing Module

The initial routing module is based on commercial routing software. The main purpose of this module is to provide the initial customer visiting order for each vehicle. The output of the routing procedure is provided to couriers as their initial job assignments, and to the fleet monitoring module in order to monitor the execution of the assigned jobs and to provide the interconnection with the dynamic request handling module. 4.2
System Architecture
The aforementioned modules are interconnected in an integrated way in order to guarantee the interoperability of the proposed system. Fig. 7 presents the logical architecture of the proposed system, including the components/modules (purple boxes), separated into algorithmic and systemic tools, and the main input/output data. Additionally, an illustration of the timing at which each operation occurs is presented.
Figure 7: Integrated System Diagram

Initially, on day i, bulk deliveries are scheduled based on the historical data (expected routes) and the pending bulk requests, and the initial routing module designs the routes to be implemented based on the existing information (including normal courier requests and bulk deliveries). On day i+1, the dynamic request handling module, in collaboration with the fleet monitoring system, allocates the dynamic requests and collects all the needed historical data. The dynamic request handler connects with the fleet monitoring system via a TCP/IP connection to the server in order to receive the necessary data for the rerouting process (pending customers) and to send back the updated routes that include the newly occurred requests. The fleet monitoring module also supports the dispatchers by providing all the necessary real-time vehicle information. The physical architecture of the fleet management system that addresses the aforementioned requirements is categorized into three essential pillars, as described below (see Fig. 8):
1. Back-end sub-system: This sub-system consists of an integrated enterprise resource planning (ERP) communication system (communications server, database server, map server), which communicates bi-directionally with the front-end sub-system.
2. Communications sub-system: It provides the wireless communication between the back-end and front-end systems. It consists of a terrestrial mobile network (GPRS) and satellite positioning technology (GPS).
3. Front-end sub-system: It consists of a GPS receiver and a GPRS modem which communicate (through the wireless network) with the communications server in the back-end sub-system.
Figure 8: Architecture of the fleet management system
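The TCP/IP exchange between the dynamic request handler and the fleet monitoring server could look like the sketch below. The newline-delimited JSON message shapes, the message types, and the host/port are all illustrative assumptions; the paper only states that pending customers are received and updated routes are sent back over a TCP/IP connection.

```python
import json
import socket

def exchange_routes(host, port, updated_routes):
    """Fetch pending customers from the fleet monitoring server, then
    push back the updated routes produced by the rerouting process."""
    with socket.create_connection((host, port), timeout=5) as sock:
        f = sock.makefile("rw", encoding="utf-8")
        # Request the data needed for rerouting (pending customers).
        f.write(json.dumps({"type": "get_pending_customers"}) + "\n")
        f.flush()
        pending = json.loads(f.readline())["customers"]
        # Send back the updated routes including the new requests.
        f.write(json.dumps({"type": "update_routes",
                            "routes": updated_routes}) + "\n")
        f.flush()
        return pending
```

A request/response pattern over a persistent connection keeps the handler decoupled from the monitoring database while still allowing the near-real-time turnaround the system requires.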
5
SYSTEM IMPLEMENTATION
The proposed system was implemented at a Hellenic courier company (Tachimetafores ELTA SA) in order to assess its functionality and efficiency. A medium-scale service area was chosen that serves approximately 500 static and 70 dynamic customers daily with a heterogeneous fleet of 8 vans and 11 scooters. The implementation of the proposed system under real conditions proved to be extremely difficult due to various operational constraints and complexities stemming from the lack of an integrated information system. Due to the risk of causing significant disruption to the user's operations, the system implementation took place in two main phases: a) online testing (field experiments) and b) offline testing (simulation testing). The online testing focused on assessing the robustness of the proposed system modules and the interoperability and functionality of the integrated system. Additionally, this phase evaluated the potential adoption of such a system in the courier company's operations. The offline testing took place in order to assess the efficiency of the routing systems (initial routing and dynamic request handler) compared with the current routing procedures of the user. It is worth mentioning that the bulk deliveries module was not evaluated in this phase due to insufficient customer data (bulk deliveries) during the pilot testing period. The design of the aforementioned evaluation tests, coupled with indicative results, is described below. 5.1
Simulation testing set-up
The simulation testing took place over a three-day period in order to assess and compare the routing results executed by the user with those proposed by the system. The offline tests were designed carefully in order to simulate the real operational conditions: a precise methodology was used to gather and match the customer and route data so as to recreate the conditions addressed by the company. This methodology provided the test data, with approximately 500 customers, 70 dynamic requests and 13 routes per day (5 vans and 8 scooters). The offline tests were implemented in three stages that concern a) the assessment of the initial/planned routing, b) the dynamic request handling and c) the total routing cost. In the first stage, the current planned routes designed by the user were compared with the results of the initial routing module, while the second stage compared the final routes executed by the user with those produced by the dynamic request handler module. Finally, the third stage provides comparative results on the effectiveness of the integrated system concerning both initial routing and dynamic request handling. The dynamic request handling stage was evaluated using as initial routes both those planned by the user and those produced by the initial routing module. The results of the above stages are presented in the following section. 5.2
Results from simulation testing
Table 3 presents the 1st stage results, consisting of the total routing cost (hrs) per day for the currently planned routes and the total routing cost produced by the initial routing module. The efficiency of the initial routing module is evident, as it achieves a 14% improvement in total routing cost (hrs) on average compared to the user's current route planning procedures.

Table 3 Stage 1 results (Offline Testing) [hrs]
                         Day 1    Day 2    Day 3
Current Planned Routes   59.05    60.61    47.79
Initial Routing Module   49.93    51.26    41.36
% Difference             15%      15%      13%
Table 4 presents the 2nd stage results, which depict the excess cost (hrs) incurred by the insertion of the dynamic requests into the current routes as executed by the user and as proposed by the dynamic request handler module. The results demonstrate the efficiency of this module, as it reduces the total excess cost by 40% on average compared to the user's current allocation of dynamic requests.

Table 4 Stage 2 results (Offline Testing) [hrs]
                                 Day 1    Day 2    Day 3
Current Routing Procedures       6.97     6.63     8.12
Dynamic Request Handler Module   4.11     4.45     4.38
% Difference                     41.0%    32.9%    46.1%
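The percentage figures in Tables 3 and 4 follow directly from the reported daily cost values; as a check, the short sketch below (illustrative only, with the published values hard-coded) recomputes the per-day and average reductions.

```python
# Recompute the per-day and average cost reductions of Tables 3 and 4.
# Values are the daily routing costs (hours) reported in the paper.

def reductions(baseline, proposed):
    """Per-day percentage reduction of `proposed` relative to `baseline`."""
    return [(b - p) / b * 100 for b, p in zip(baseline, proposed)]

# Table 3: total routing cost, planned routes vs. initial routing module
planned = [59.05, 60.61, 47.79]
initial = [49.93, 51.26, 41.36]

# Table 4: excess cost of dynamic requests, user vs. dynamic request handler
user_excess    = [6.97, 6.63, 8.12]
handler_excess = [4.11, 4.45, 4.38]

t3 = reductions(planned, initial)
t4 = reductions(user_excess, handler_excess)

print([f"{r:.0f}%" for r in t3])              # per-day reductions: 15%, 15%, 13%
print(f"{sum(t3) / len(t3):.1f}%")            # reported as ~14% on average (rounding)
print([f"{r:.1f}%" for r in t4])              # per-day reductions: 41.0%, 32.9%, 46.1%
print(f"{sum(t4) / len(t4):.1f}%")            # ~40% on average
```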
Table 5 presents the 3rd stage results, which depict the total routing cost (hrs) of the current routes as executed by the user compared to the solution provided by the integrated system (initial routing module and dynamic request handler module). The integrated system reduces the total routing cost by 16.5% on average.

Table 5 Stage 3 results (Offline Testing) [hrs]
                                              Day 1    Day 2    Day 3
Current Routing Procedures                    66.02    67.24    55.91
Initial Routing + Dynamic Handler Modules
(integrated system)                           54.60    56.74    46.71
% Difference                                  17.3%    15.6%    16.5%
6
CONCLUSIONS
The courier business sector is characterized by various operational and technical complexities. Several of these complexities can be addressed by the use of information systems such as the one proposed in this paper. The technical and operational principles underlying the design of the proposed fleet management system have been presented, followed by validation of the technical design through empirical assessment of the benefits enabled by the use of the proposed system. The proposed system proved highly effective in the routing procedures, delivering significant operational and cost reductions. Additionally, one of its basic advantages is that it eliminates the human-factor inefficiencies caused by empirical routing and scheduling procedures, and provides thorough results and robust business operations. The effectiveness of the proposed system with respect to operational cost (time) reduction was validated through: a) the reduction of the initial routing cost and b) the reduction of the dynamic request allocation cost. The overall system achieved a significant reduction of 16.5% on average in the total routing cost, while the standalone operation of the initial routing module achieved a 14% reduction in the initial routing cost. Finally, the dynamic request handler module reduced the excess cost on top of the initial routing cost by 40% on average.
7
REFERENCES
1. Larsen, (2000) The dynamic vehicle routing problem. Ph.D. dissertation, Technical University of Denmark, Lyngby, Denmark.
2. M. Gendreau, F. Guertin, J-Y. Potvin, R. Seguin, (2006) Neighborhood search heuristics for a dynamic vehicle dispatching problem with pick-ups and deliveries. Transportation Research Part C, 157-174.
3. Febri, P. Recht, (2006) On dynamic pickup and delivery vehicle routing with several time windows and waiting times. Transportation Research Part C, 40, 335-350.
4. H.N. Psaraftis, (1995) Dynamic vehicle routing: status and prospects. Annals of Operations Research, 61, 143-164.
5. J.F. Cordeau, G. Desaulniers, J. Desrosiers, M.M. Solomon, F. Soumis, (2002) The VRP with Time Windows. In P. Toth and D. Vigo, The Vehicle Routing Problem, SIAM Monographs on Discrete Mathematics and Applications, Philadelphia, 157-193.
6. G. Desaulniers, J. Desrosiers, A. Erdmann, M.M. Solomon, F. Soumis, (2000) The VRP with Pickup and Delivery. Les Cahiers du GERAD.
7. Y. Dumas, J. Desrosiers, F. Soumis, (1991) The Pickup and Delivery Problem with Time Windows: A Survey. European Journal of Operational Research, 54, 7-22.
8. S. Mitrovic-Minic, (1998) Pickup and Delivery Problem with Time Windows: A Survey. SFU CMPT TR.
9. Q. Lu, M.M. Dessouky, (2006) A new insertion-based construction heuristic for solving the pickup and delivery problem with time windows. European Journal of Operational Research, 175, 672-687.
10. A.M. Campbell, M. Savelsbergh, (2004) Efficient insertion heuristics for vehicle routing and scheduling problems. Transportation Science, 38(3), 369-378.
11. S. Mitrovic-Minic, R. Krishnamurti, G. Laporte, (2004) Double-horizon based heuristics for the dynamic pickup and delivery problem with time windows. Transportation Research Part B, 38, 669-685.
12. M. Gendreau, F. Guertin, J-Y. Potvin, E. Taillard, (1999) Parallel Tabu Search for Real-Time Vehicle Routing and Dispatching. Transportation Science, 33(4), 381-390.
13. S. Ichoua, M. Gendreau, J-Y. Potvin, (2006) Exploiting Knowledge About Future Demands for Real-Time Vehicle Dispatching. Transportation Science, 40(2), 211-225.
14. S. Mitrovic-Minic, G. Laporte, (2004) Waiting strategies for the dynamic pickup and delivery problem with time windows. Transportation Research Part B, 38, 635-655.
15. P. Jaillet, J. Bard, L. Huang, M. Dror, (2002) Delivery Cost Approximations for Inventory Routing Problems in a Rolling Horizon Framework. Transportation Science, 36(3), 292-300.
16. H.N. Psaraftis, (1988) Dynamic vehicle routing problems. In B.L. Golden and A.A. Assad, Vehicle Routing: Methods and Studies, North-Holland, Amsterdam, 223-248.
17. M. Dror, M. Ball, B. Golden, (1985) Computational comparison of algorithms for the inventory routing problem. Annals of Operations Research, 4, 3-23.
18. M. Campbell, M.W.P. Savelsbergh, (2004) A decomposition approach for the Inventory-Routing Problem. Transportation Science, 38(4), 488-502.
19. M. Newman, C.A. Yano, P.M. Kaminsky, (2005) Third Party Logistics planning with routing and inventory costs. In J. Geunes and P.M. Pardalos, Supply Chain Optimization, Kluwer.
20. N. Christofides, J. Beasley, (1984) The Period Routing Problem. Networks, 14, 237-256.
21. V. Zeimpekis, C. Tarantilis, G. Giaglis, I. Minis, (2007) Dynamic Fleet Management: Concepts, Systems, Algorithms & Case Studies. Operations Research/Computer Science Interfaces Series, Springer-Verlag, Vol. 38.
Acknowledgments
This work is partially funded by the “Regional Operational Programme of Attica” (ROP of Attica) [project “Management of Dynamic Requests in Logistics (MaDReL)”] and the “Reinforcement Program of Human Research Manpower” (PENED) [co-financed by National and Community Funds (25% from the Greek Ministry of Development-General Secretariat of Research and Technology and 75% from EU-European Social Fund) – 03ED067 research project].
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
OPEN SOURCE SOFTWARE: LESSONS FOR ASSET LIFECYCLE MANAGEMENT
Dr. Abrar Haider a,b
a CRC for Integrated Engineering Asset Management, Brisbane, Australia
b School of Computer and Information Science, University of South Australia, Mawson Lakes Campus, SA 5095, Australia.
Generally, the software used by individuals and organizations is proprietary. Proprietary software usually comes with hidden source code, is available at a specific cost, and offers limited flexibility in its copyright licenses. Open source software, on the other hand, offers features that compete with proprietary software. The unique aspect of open source software is a development concept that harnesses open development by a wide community and decentralized peer review. This development process is effective in terms of software quality and lowered software production cost. This paper presents a literature review to provide insights into the strengths of the open source software development concept and to develop a case for its application in collaborative research environments to enhance and further develop the technical infrastructure supporting the asset lifecycle. Key Words: Open source, Asset management, Software.
1
INTRODUCTION
Economies worldwide are acknowledging the potential of open source software (OSS), and research into its viability, usability, maintainability, and supportability is gaining momentum. The European Union, for example, has been a forerunner in the adoption, development, and research of OSS in different areas of the economy. However, interest in OSS is not limited to Europe; it has been successfully implemented in many public sector organisations in countries such as Brazil, Italy, Malaysia, Germany, the Netherlands, the United Kingdom, France, the USA, Denmark, Sweden and South Africa. It is not just the economic advantages that are attractive to economies around the globe; OSS also presents itself as a collaborative, reliable, robust, and flexible alternative to proprietary software systems. Proprietary software is tightly controlled and leads to usage dependencies. At the same time, proprietary software is developed from standardised user requirements and thus does not exactly meet all of the demands of any one organisation. In addition, proprietary software carries ongoing support and upgrade constraints, as well as contractual restrictions. OSS, by contrast, allows a participatory forum that engages communities of interest to decrease dependence on commercial software vendors in terms of source code, functionality, and contractual commitments. There are significant hidden costs involved in the support, maintenance, training, re-engineering, and installation of software applications. These costs are generally not considered at the time of software procurement; however, they are significant in the total cost of ownership, and even software such as office suites or packages developed or tailored for a specific organisation can be dauntingly expensive. In addition, the licenses available for proprietary software are not flexible and are sold for a specific number of seats, which makes expansion difficult and expensive for an organisation with existing licenses.
There are also legal costs and risks involved in checking and signing licenses, ensuring that license conditions are adhered to, and ensuring that all relevant licenses have been purchased and are up to date [11]. OSS, on the other hand, adheres to simple, non-cumbersome licensing and free distribution. The concept of OSS has significant relevance for research organisations like the Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM). CIEAM brings together researchers and practitioners from industry verticals such as electricity, gas, water, and transport. These researchers and practitioners have expertise in various areas of asset management such as design, operation, maintenance, lifecycle support, IT support development for lifecycle management, human resource development for asset management, and asset accounting. CIEAM thus resembles an extensive community of interest in asset lifecycle management. The concept of OSS could be applied within CIEAM to bridge the gap between software developers and software users through continuous software audit and requirements refinement. There is
enormous potential for asset management researchers, practitioners, and lifecycle IT support developers to come together and participate in an open source initiative to develop and mature software applications specific to asset lifecycle management. This would not only help them create a robust set of applications but would also reduce their dependence on software vendors, as well as the need to re-engineer applications to customise them to organisational needs. Nevertheless, the success of OSS is contingent upon critical aspects such as whether its implementation delivers technical and economic value, its maintainability, and the adequacy of the support available to sustain its utilisation. It is, therefore, essential to establish the potential and value profile of OSS for engineering enterprises. This paper presents an overview of the strengths and weaknesses of OSS to develop a case for its utilisation in asset lifecycle management. It begins with an overview of the OSS development culture and adoption framework, followed by the OSS governance modus operandi. The paper concludes with a discussion on the relevance of OSS to asset management.
2
THEORETICAL FOUNDATIONS OF OPEN SOURCE SOFTWARE
OSS is developed by the community at large by providing free access to the source code. The advantage of providing source code within an OSS distribution is that it enables end users to learn more about the program [1]. Programmers can thus improve the software and redistribute it to society again. According to Perens [2], open source does not simply mean open access to the source code; the definition must also comply with the several criteria listed below:
a. Free Redistribution: the software license allows everyone to create multiple copies of the software and to sell or give away the program without any fees required.
b. Source Code: the software must include source code within its distribution or provide an accessible website that offers free downloadable source code. OSS thus enables programmers to modify or repair the program.
c. Derived Works: the software can be freely modified, though it must be redistributed under the original licensing scheme.
d. Integrity of the author’s source code: this rule defines the separation of modification code and original code to respect the integrity of the author’s original source code.
e. No discrimination against persons or groups: the software must be available to all persons or groups without any exclusion.
f. No discrimination against fields of endeavour: the software must be equally usable in any field of knowledge without any exclusion.
g. Distribution of license: no first party’s signature is required for the distribution of the license from a second party to a third party.
h. License must not be specific to a product: the open source license must always be attached to derived works and particular parts of program distributions.
i. License must not contaminate other software: the license must not require that other programs distributed along with the open source software also carry an open source license.
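The nine criteria above amount to a checklist that a candidate license must satisfy in full before it qualifies as open source. A minimal sketch of that all-or-nothing test follows; the criterion names paraphrase the list above, and the example license profile is hypothetical.

```python
# The Open Source Definition criteria [2] as a checklist: a license
# qualifies as open source only if it satisfies every criterion.

OSD_CRITERIA = [
    "free redistribution",
    "source code included or freely accessible",
    "derived works permitted",
    "integrity of the author's source code",
    "no discrimination against persons or groups",
    "no discrimination against fields of endeavour",
    "license distributable without extra signatures",
    "license not specific to a product",
    "license does not contaminate other software",
]

def is_open_source(profile: dict) -> bool:
    """True only if the license profile meets all nine criteria."""
    return all(profile.get(criterion, False) for criterion in OSD_CRITERIA)

# Hypothetical profile: a license that forbids commercial use fails
# the "no discrimination against fields of endeavour" criterion.
non_commercial = {c: True for c in OSD_CRITERIA}
non_commercial["no discrimination against fields of endeavour"] = False

print(is_open_source(non_commercial))                    # False
print(is_open_source({c: True for c in OSD_CRITERIA}))   # True
```

The point of the sketch is that the criteria are conjunctive: failing any single one disqualifies the license.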
The meaning of “free” in OSS is not the same as in freeware or shareware applications. According to Hamel [3], the term Free/Libre Open Source Software (FLOSS) is used to distinguish free as in freedom and free speech from free as in free beer. Shareware, by definition, is software that is available to share, though the user needs to buy it if the software is to be used for a longer period of time. Freeware, on the other hand, is available for free download and free use at no charge to end users. The definition of free in OSS, however, does not only mean free of cost (gratis), but also includes freedom and liberty in the development, usage, and distribution of the software. Richard Stallman [4], who devised the copyleft mechanism as an alternative to copyright, captured the definition of free in OSS in his Four Freedoms concept in the GNU (GNU’s Not Unix) Public License:
a. Freedom 0 – “Freedom to run the program for any purpose”.
b. Freedom 1 – “Freedom to study how the program works and adapt it to your needs”.
c. Freedom 2 – “Freedom to redistribute copies so that you can help your neighbour”.
d. Freedom 3 – “Freedom to improve the program and release your improvements to the public for whole community benefits”.
3
OSS DEVELOPMENT CULTURE
A unique quality of OSS is a development culture that harnesses open development by a wide community and decentralized peer review; the development process is thus effective in lowering software production cost and improving software quality [5]. In his book “The cathedral and the bazaar”, Raymond [5] draws an analogy between “the Cathedral” (proprietary software) and “the Bazaar” (OSS), where “Cathedral” development is carefully crafted by individual “wizards” in an isolated workplace and there are no beta releases, since everything is fixed to a single plan, single point of focus, or even single mind [6]. The “Bazaar” style, on the other hand, resembles a common meeting place, where everybody adds something different to the interaction, like-minded members of the community congregate and talk, and important information is disseminated within the development community [5,6].
[Figure 1 (diagram): core developers maintain a stable core implementation with a modular design and distribute it to a community of users and developers, who submit features (source code modifications), bug fixes, and bug reports (details of the bug and steps to replicate), together with feature requests, for incorporation into the main implementation.]
Figure 1: OSS Development Cycle [7]
Figure 1 illustrates the OSS development cycle and highlights developers’ motivations and interactions. In the OSS development process, developers build the core of an application and make its source code available to the general public via the internet. Like-minded individuals use the application or go through the source code and add enhancements to the core to extend its functionality, remove bugs, point out errors, report desired enhancements, and provide software quality assurance through testing. When this feedback is available to the developers, they are in a better position to improve the software. Thus, the software goes through a process of continuous improvement. However, OSS is available for commercial use as soon as the original developer makes it available on the internet. OSS applications have a modular design, which makes it easier for the community of interest to understand the software and apply enhancements. It should be noted that this process is entirely voluntary and is driven by knowledge and challenge rather than economic considerations alone. In proprietary software development, software houses are commonly motivated by money as the incentive for their efforts. In the OSS development concept, however, money is not the incentive that motivates contributors. Lerner & Tirole [8] define two types of motivation: immediate pay-offs and delayed benefits. Immediate pay-offs, such as recognition, innovation, and idea generation, are the most common motivation for software development. Delayed benefits, on the other hand, are the indirect economic benefits that software developers expect to gain in the future.
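The contribution loop of Figure 1 can be sketched as a minimal simulation: core developers merge community submissions into a stable core and redistribute a new version, while bug reports are queued for later fixes. All class and attribute names here are illustrative, not taken from any real project.

```python
# Minimal sketch of the OSS contribution cycle: the community submits
# features, bug fixes, and bug reports; core developers review them and
# merge code contributions, which triggers a new distributed version.
from dataclasses import dataclass, field

@dataclass
class Submission:
    kind: str      # "feature", "bug_fix", or "bug_report"
    detail: str

@dataclass
class CoreProject:
    version: int = 1
    modules: list = field(default_factory=list)
    open_reports: list = field(default_factory=list)

    def review_and_merge(self, sub: Submission) -> None:
        """Core developers incorporate a community submission."""
        if sub.kind == "bug_report":
            self.open_reports.append(sub.detail)   # queued until someone fixes it
        else:
            self.modules.append(sub.detail)        # merged into the modular core
            self.version += 1                      # new version is redistributed

project = CoreProject(modules=["stable core"])
for sub in [Submission("feature", "new scheduler"),
            Submission("bug_report", "crash on empty input"),
            Submission("bug_fix", "guard against empty input")]:
    project.review_and_merge(sub)

print(project.version, project.modules, project.open_reports)
```

The sketch captures the essential asymmetry of the cycle: reports accumulate until a contributor turns them into code, whereas code contributions immediately advance the distributed core.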
According to Woods & Guliani [4], the three most significant strengths of the “Bazaar”-like software development model are: faster development at lower production cost, due to the large number of developers available; flexibility and closeness to user requirements, since software developed by a wide community serves several common needs; and improved developer skills through interactions between developers of varied experience.
OSS projects are developed through communities of interest that evolve a governance structure around the project lifecycle. This governance structure in the open source community starts simply from individual motivations that interact within one social control mechanism [9]. This social control creates conformity to certain moral and cultural rules within the development community [10]. There are thus two types of social control activity within an open source project: direct governance and indirect governance. Direct governance is a social control that ensures the quality of the project through direct inspection or monitoring tasks. Indirect governance, on the other hand, is based on the output of the development. Table 1 further elaborates on the governance structure of OSS projects.

Stage              | Introduction         | Growth                       | Maturity                      | Decline or Revival
Focus              | Idea generation      | Expansion                    | Stability                     | Adaption
Structure          | Completely informal  | More centralized             | Formal, highly decentralized  | Formal, somewhat decentralized
Division of Labor  | Generalists          | Some specialization          | -                             | Less specialized
Coordination       | Informal, one on one | Formal, technology intensive | Formal, technology, but less adherence | Slightly formal but less adherence
Examples           | Dam, HTM Larena plus accessibility | Eclipse, Typo3 | Linux, Apache, Mozilla        | Gnutella
Table 1: Governance details within each lifecycle stage [11]
In open source development, the author of the original code usually bequeaths his or her intellectual property to society without expecting a return from the code. Open source development culture has also been likened to parents’ relationships with their children [6]: by analogy, parents give everything to their children without expecting any return. Researchers [7, 11] attribute various other benefits to OSS, such as:
a. The acquisition cost of OSS is generally lower than that of proprietary software, and may even be completely free, thus eliminating the financial burdens of proprietary licensing schemes.
b. Reduced vendor lock-in, since OSS is freely available for download. Support is generally free, and even when it is provided commercially it is available at a lower cost. Proprietary software is generally bundled with additional update and maintenance costs, so the vendor may charge more for updates within specific software support. Moreover, the closure of a vendor may leave the software unsupported. In open source, the power of the developer is spread across society; even if vendors close down someday, the knowledge is still retained within society.
c. Open source development enables a global effort that maximizes the potential of the software by integrating the programming skills of all contributors, and thus enhances the reliability of the software.
d. The availability of source code enables end users to improve functionality or modify the software to fit their own needs. Personal customization also fosters a sense of ownership and full performance measurement among end users.
e. The wide development culture enables OSS to grow and lets programmers work together to produce secure software. Some contributors act as bug finders and communicate bugs through forums so that other programmers can fix the problems.
f. Most OSS follows open standards, which means open interoperability with other systems. Open standards enable simple interoperability with other systems without the need for additional integration software or system modification.
g. The capability and flexibility of customization enable improvement of IT value within the business. OSS is developed by various people with different expectations and thus produces a sophisticated result for society.
However, these benefits demand certain prerequisites. These include an understanding of the technology requirements of the business and of the capabilities of OSS in meeting these requirements, or the potential for OSS to be improved to incorporate them. Another important factor is the development and maintenance of the skills required to install and configure open source software; since open source may differ from common proprietary software design, it is important to learn specific skills to develop and maintain a given OSS product. It is equally important to be able to evaluate the maturity of open source software. Proprietary software is launched fully tested and ready to use, whereas OSS development produces software over a relatively short development period. It is, therefore, essential to always monitor and measure the maturity of a version before implementation. Finally, although OSS is generally licence free, this is not always the case. There are various licensing schemes available within open source software, so it is important to understand the licensing mechanism and choose the license best suited to the needs of the user or business.
4
OPEN SOURCE SOFTWARE – A CASE FOR ASSET MANAGEMENT
Information technologies utilised in asset management not only have to provide for the decentralized control of asset management tasks but also have to act as instruments for decision support. These technologies are therefore required to provide an integrated view of lifecycle information so that informed choices about the asset lifecycle can be made. An integrated view of asset management, however, requires appropriate hardware and software applications; quality, standardised, and interoperable information; an appropriate employee skill set to process information; a strategic fit between the asset management processes and the chosen information technologies; and a conducive organisational environment. Current information systems in operation within engineering enterprises have outlived their usefulness, as the methodologies employed to design these systems define, acquire, and build systems of the past, not of the future [12]. For example, the development of maintenance software systems, which has attracted considerable attention in research and practice, is far from optimal. While maintenance activities have been carried out ever since the advent of manufacturing, the modelling of an all-inclusive and efficient maintenance system has yet to come to fruition [13, 14]. This is mainly due to continuously changing maintenance requirements and the increasing complexity of asset equipment. In response to increased competitive pressures, maintenance strategies that were once run-to-failure are now fast changing to condition based, thereby necessitating the integration of asset management decision systems and computerized maintenance management systems in order to support maintenance scheduling, maintenance workflow management, inventory management, and purchasing [15].
However, in practice, data is captured both electronically and manually, in a variety of formats; shared among an assortment of off-the-shelf and customized operational and administrative systems; and communicated through a range of sources and to an array of business partners and subcontractors. The consequent inconsistencies in the completeness, timeliness, and accuracy of information lead to an inability to provide quality decision support for asset lifecycle management [16]. In these circumstances, existing asset management information systems are best described as pools of isolated data that are not being put to effective use to create value for the organisation. The major issues in this regard perhaps relate to the way organisations utilise technology; however, certain technological constraints also constrict their improvement potential. These constraints take the form of the variety of disparate commercial off-the-shelf systems used by the industry, which not only have limited functionality but also contribute to issues of interoperability and integration. In these circumstances, OSS, with its features and development culture, appears to be a viable option for asset-managing engineering enterprises. From a technological perspective, OSS provides a global development approach and software quality testing, tailored solutions, better security than proprietary solutions, an open standard architecture, and a degree of independence from vendor control. From a financial perspective, OSS offers lower acquisition costs, avoids vendor lock-in and hidden costs, and entails lower training and software integration costs. Open source development projects involve large numbers of developers on a global scale, and are thus able to harness the competencies and capabilities of the wider developer community.
OSS products do not follow specific project plans or schedules, which means that these projects are emergent in nature with no end in sight, unless the developer community ceases to see any value in the project. Thus, the product being developed is continuously peer reviewed, updated, and enhanced. In addition, since work is not assigned to specific persons, developers contribute on the basis of their expertise. There is no central project structure; the project follows a decentralized governance structure, so developers work without organisational or deadline pressures. Since there is no explicit system-level design in OSS development, projects start on the basis of innovation, need, or opportunity. For example, a practitioner sees an opportunity to resolve an issue and makes an attempt to resolve it. Since this attempt is made available to a wider community of interest, others join in and enhance the original effort. OSS allows access to source code, which enables developers to enhance, customise, and integrate the software according to organisational or personal preferences. The most common misconception about OSS is its reputation for being less secure because of its freely available code [17]. In fact, the global development concept not only enables a wide-ranging development community but also allows global quality testing of OSS. OSS development enables software solutions to be fully customised according to the functional needs of the organization. Proprietary software, on the other hand, is designed according to the vendor’s development planning and follows common designs and needs, which lack the depth and breadth allowed by OSS [4]. In proprietary software, quality testing is limited to a controlled environment and specific scenarios [8].
OSS development, by contrast, involves much more elaborate testing, as OSS solutions are tested in various environments, by programmers of varied skills and experience, and in various geographic locations around the world [18]. As the main financial benefit, the acquisition cost of OSS is generally lower than that of proprietary software, or even free of charge [11]. In addition, more flexible license coverage can be obtained with OSS, enabling redistribution and software modification to comply with the specific needs of the organization [4]. Since OSS is developed in the public domain and is freely available, there is no dearth of skills and knowledge in any market. The cost differences between proprietary software and OSS can be used for better staff training, customization tasks, or enhancement of the existing IT infrastructure [11]. The most significant benefit of OSS is its appeal in terms of software integration and interoperability. Asset-managing engineering organisations produce, store, and manage enormous amounts of data on a daily basis. This information becomes
highly valuable when it is readily accessible to asset lifecycle participants such as planners, maintainers, designers, accountants, and asset operators. Since asset-managing organisations utilise a plethora of software systems, it is highly unlikely that all of these systems conform to a single data format. At the same time, since most of these systems are closed source, it is not possible to re-engineer the software. With OSS, however, the source code is available to the organisation, and the software can thus be re-engineered to make it consistent with the organisation’s information architecture.
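The re-engineering argument above can be illustrated with a small sketch: when source access exists, records from disparate systems can be mapped onto one schema. The two input layouts and all field names below are hypothetical, chosen only to make the interoperability point concrete.

```python
# Illustrative only: normalising asset records from two hypothetical
# legacy layouts (a maintenance system and a SCADA-style system) into
# one common schema, as re-engineering with source access would allow.

def normalise(record: dict) -> dict:
    """Map a record from either hypothetical legacy layout onto one schema."""
    if "asset_id" in record:                     # maintenance-system layout
        return {"id": record["asset_id"],
                "condition": record["condition"],
                "source": "maintenance"}
    return {"id": record["TAG"],                 # SCADA-style layout
            "condition": record["STATE"].lower(),
            "source": "operations"}

records = [
    {"asset_id": "PUMP-7", "condition": "degraded"},
    {"TAG": "PUMP-7", "STATE": "RUNNING"},
]
unified = [normalise(r) for r in records]
print(unified)
```

Once unified, records about the same asset from different lifecycle systems become directly comparable, which is precisely the decision-support value the paragraph describes.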
5
CONCLUSION
Although the financial appeal of OSS appears to be the major reason for its popularity in a time of economic distress, it offers much more than economic benefits. The concept of OSS has significant relevance for research organisations like CIEAM. The open source development concept fosters a sense of belonging and a community-of-practice approach. Open source projects embody a collaborative concept that helps develop the technical infrastructure by drawing on the expertise of different participants. This goes a long way toward the growth and development of the asset management industry. It allows asset-managing organisations to participate in collaborative forums that engage in development efforts to create efficient, quality software solutions for managing the asset lifecycle, thus reducing dependence on commercial software. In doing so, they enhance and mature software applications specific to their asset management operations. Open source software itself, moreover, has an edge over proprietary software in various respects, such as the technical benefits deriving from the global open source development concept and interoperability among software solutions.
6
REFERENCES
1 von Krogh, G, & von Hippel, E (2003) Special issue on open source software development. Research Policy, 32, 1149-1157.
2 Perens, B (1998) Open source definition. http://ldp.dvo.ru/LDP/LGNET/issue26/perens.html
3 Hamel, MP (2007) Open source collaboration in the public sector: the need for leadership and value. National Centre for Digital Government working paper, vol. 7, no. 4. Accessed online 9 May 2009, at http://www.umass.edu/digitalcenter/research/working_papers/07_004HamelOSCollaboration.pdf
4 Woods, D, and Guliani, G (2005) Open source for the enterprise. 1st edition, Sebastopol, CA: O'Reilly Media Inc.
5 Raymond, ES (2001) The cathedral and the bazaar. Revised edition, Sebastopol, CA: O'Reilly Media Inc.
6 Zeitlyn, D (2003) Gift economies in the development of open source software: anthropological reflections. Research Policy, 32, 1287-1291.
7 Alexy, O, and Henkel, J (2006) Promoting the penguin: who is advocating open source software in commercial settings? Working paper, München, Germany: Technische Universität München.
8 Lerner, J, and Tirole, J (2002) Some simple economics of open source. Journal of Industrial Economics, 50(2), 197-234.
9 Vujovic, S, and Ulhoi, JP (2008) Online innovation: the case of open source software development. European Journal of Innovation Management, 11(1), 142-156.
10 Latteman, C, and Stieglitz, S (2005) Framework for governance in open source communities. Paper presented at the 38th Hawaii International Conference on System Sciences, Hawaii.
11 Kovacs, GL, Drozdik, S, Zuliani, P, and Succi, G (2004) Open source software and open data standards in public administration. Paper presented at the Second IEEE International Conference on Computational Cybernetics (ICCC 2004). Accessed online 14 April 2009, at www.ieee.com
12 Haider, A, & Koronios, A (2004) Converging monitoring information for integrated predictive maintenance. In Proceedings of the 3rd International Conference on Vibration Engineering & Technology of Machinery (VETOMAC-3) & 4th Asia-Pacific Conference on System Integrity and Maintenance (ACSIM 2004), New Delhi, India, December 6-9, paper no. 40.
13 Duffuaa, SO, Ben-Daya, M, Al-Sultan, K, & Andijani, A (2001) A generic conceptual simulation model for maintenance systems. Journal of Quality in Maintenance Engineering, 7(3), 207-219.
14 Yamashina, H (2000) Challenge to world-class manufacturing. International Journal of Quality & Reliability Management, 17(2), 132-143.
15 Bever, K (2000) Understanding plant asset management systems. Maintenance Technology, July/August, pp. 20-25. Accessed online 27 May 2009.
16 Haider, A, & Koronios, A (2005) ICT based asset management framework. In Proceedings of the 8th International Conference on Enterprise Information Systems (ICEIS), Paphos, Cyprus, 3, 312-322.
17 Taylor, PW (2004) Open source open government. Centre for Digital Government. Accessed online 11 May 2009, at http://www.govtech.com/gt/91970.
18 Mockus, A, Fielding, RT, and Herbsleb, JD (2002) Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology, 11(3), 309-346.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
ASSESSING MAINTENANCE MANAGEMENT IT ON THE BASIS OF IT MATURITY
Kans, M. a
a School of Technology and Design, Växjö University, Luckligs plats 1, S-351 95 Växjö, Sweden.
Research shows that investments in IT have a positive correlation to company profitability and competitiveness. This is also the case for maintenance management IT (MMIT), i.e. applications used for maintenance management purposes such as computerised maintenance management systems (CMMS) and maintenance management or asset management modules in enterprise resource planning (ERP) systems. However, models and methods for evaluating maintenance IT needs and IT systems are not well developed. This paper shows how the IT maturity of the maintenance organisation could be considered in the IT procurement process. If we are able to define functionality for different levels of IT maturity, the assessment and selection of the relatively best IT application for the maintenance organisation would be supported. A model describing three phases of IT maturity within maintenance (IT beginners, medium IT users and IT mature organisations) forms the theoretical basis. The applicability of the approach is tested by evaluating 24 CMMS and ERP systems. Key Words: Computerised maintenance management, maintenance IT functionality, IT maturity
1
INTRODUCTION
To reach success in the utilisation of information technology (IT) for maintenance management, we must be able to choose the relatively best alternative from a set of possible IT solutions. This requires an ability to understand the maintenance IT needs, as well as ways to assess different alternative IT solutions. Although Computerised Maintenance Management Systems (CMMS) have been in use for several decades, models and methods for evaluating maintenance IT needs are not well developed. This paper will address how the maintenance management information technology (MMIT) procurement process could be supported by taking into account the IT maturity of the maintenance organisation. IT maturity denotes the extent to which an organisation or a human can benefit from the technology [1]. If we are able to define functionality for different levels of IT maturity, the assessment and selection of the relatively best IT application for the maintenance organisation would be supported. A model for determining the IT maturity of maintenance was developed in [2] and validated in [3]. In this paper, this model will be used as a basis for defining IT functionality requirements for the evaluation of different MMIT systems alternatives.
2
METHODS SUPPORTING THE PROCUREMENT OF MMIT
To cover past research on the assessment of MMIT, a literature survey was conducted in the full-text database ELIN (Electronic Library Information Navigator), which integrates a vast number of databases and providers, such as Blackwell, Cambridge Journals, Emerald, IEEE, Science Direct and Wiley. Key words were chosen to cover the area of computerised maintenance management, combined with the terms benefits, needs, requirements, purchasing, procurement, selection and evaluation. In all, 40 hits addressed MMIT, representing 22 unique papers published between 1987 and 2008. Three papers were found that address the problem area of MMIT procurement, especially focusing on identifying and selecting IT systems. The first, [4], presents an evaluation model based on multi-criteria decision-making (MCDM) and the analytic hierarchy process (AHP). The model comprises seven levels, where level two contains different scenario alternatives classifying future users of CMMS. This classification is mainly based on maintenance organisation size, but also on maintenance practices and technology utilisation. Levels three to six represent the criteria to be considered, in all 52 criteria.
Four generic CMMS alternatives are represented in the seventh level. By utilising this model, the needs of the company, identified as belonging to one of the scenarios, can be connected to one generic CMMS alternative, thereby reducing the decision complexity before the selection among software application alternatives is made. This model is similar to the work of [5], which also proposes an MCDM/AHP approach for the evaluation of CMMS. That model consists of five levels, of which four represent the criteria for evaluation, based on the ISO/IEC 9126 classification. The CMMS alternatives are classified according to size characteristics, while the needs are exemplified by judgements from administration, production and maintenance in the paper industry. These two publications provide well-structured and objective methods for evaluating CMMS, and the MCDM process is commonly used in software selection, making it possible to compare different alternatives with respect to a large number of criteria. MCDM is, however, a tedious process that requires much data and the ability to process the data appropriately. Developing criteria for the MCDM thus requires considerable effort, and it is hard to align the criteria with the demands of the organisation. Moreover, the criteria selected for analysis in [4] and [5] are mainly non-functional requirements. In the third paper, [6], a method for determining IT requirements in maintenance was developed, which could be useful in requirements determination and CMMS selection. It does not, however, address any specific methods for the actual selection, such as those in [4] and [5]. It is suggested in [6] that IT maturity plays a role in the IT requirements determination process; determining the IT maturity of the organisation is therefore one step in [6] in defining the current state of the maintenance.
IT maturity could be one way to reduce decision complexity in an MCDM analysis if it is used to classify IT solutions, thereby reducing the number of candidate IT systems to consider in the further analysis.
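As a rough sketch of this idea, maturity classification can act as a filter before a much-simplified weighted-criteria comparison. The system names, criteria, weights and scores below are invented for illustration; they are not taken from the paper and a real MCDM/AHP analysis would be far more elaborate.

```python
# Sketch: use IT maturity to prune candidates before a weighted-criteria
# comparison. All names, scores and weights are illustrative only.
candidates = {
    "CMMS_A": {"phase": "Introduction", "scores": {"usability": 4, "cost": 5, "support": 3}},
    "CMMS_B": {"phase": "Coordination", "scores": {"usability": 3, "cost": 3, "support": 4}},
    "ERP_C":  {"phase": "Integration",  "scores": {"usability": 2, "cost": 2, "support": 5}},
}

weights = {"usability": 0.5, "cost": 0.3, "support": 0.2}

def shortlist(candidates, phase):
    """Step 1: keep only systems matching the organisation's maturity phase."""
    return {name: c for name, c in candidates.items() if c["phase"] == phase}

def weighted_score(scores, weights):
    """Step 2: a simple weighted sum over the remaining criteria."""
    return sum(weights[k] * v for k, v in scores.items())

pruned = shortlist(candidates, "Introduction")
ranked = sorted(pruned, key=lambda n: weighted_score(pruned[n]["scores"], weights),
                reverse=True)
print(ranked)  # only phase-matching systems are scored and ranked
```

The point of the two-step structure is that the expensive pairwise-comparison work of a full MCDM analysis is only spent on systems that already fit the organisation's maturity level.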
3
IT MATURITY OF MAINTENANCE
IT maturity denotes the extent to which an organisation or a human can benefit from the technology. An organisation with a low level of IT maturity uses IT mainly for the automation of daily activities and for data storage, while an IT mature organisation uses IT for collecting and combining vast amounts of data for advanced strategic decision making [7]. This section discusses the term IT maturity with respect to maintenance management.
3.1
The Maintenance IT Maturity Model
A model for determining information technology maturity within maintenance, an application of the IT maturity and growth model by Nolan [7], was developed in [2] for positioning the maintenance organisation relative to its IT maturity. The model was validated in a study presented in [3], where three distinct groups of companies with different levels of IT maturity were found by means of cluster analysis. The groups and their characteristics are accounted for in Table 1 in the next section. In the maintenance IT maturity model, three main phases of maintenance IT maturity have been defined: Introduction, Coordination and Integration. The history of industrial computerisation shows that these phases are natural steps towards an integrated IT solution, see [8]. The author also shows that the development of maintenance management IT in general has followed the development of industrial IT. The demands on MMIT have shifted over the years from being a tool to automate preventive maintenance management, such as task scheduling, plant inventory and stock control or cost and budgeting, to supporting predictive and proactive maintenance by providing real-time data processing, effective communication channels and business function integration [9], [10]. The three phases of IT maturity are described briefly as:
1) Introduction (efficiency reached by using IT): IT is introduced into the maintenance organisation in the form of traditional CMMS, which mainly support reactive and preventive maintenance. The procurement of IT is mainly technology-oriented and IT is used for operational purposes. Goals of the IT use mainly concern efficient management of work orders, spare parts inventory, purchasing and cost control. The result is good control of available resources and cost reduction of carried-out maintenance.
2) Coordination (effectiveness reached by using IT): Maintenance IT systems and other corporate IT systems are coordinated, and the use of IT is stressed more than the technology itself. These more advanced CMMS and ERP systems support mainly preventive and, to some extent, predictive strategies. IT is used for operational and tactical purposes, such as the follow-up of carried-out activities and failure frequencies. Schedules can be optimised. The result is good control of, and the capability to use, resources in the best way, and investment in maintenance will likely give positive returns.
3) Integration (cost-effectiveness reached by using IT): Maintenance is an integrated part of the corporate IT system, enabling predictive and proactive maintenance strategies. Investments in IT are connected to actual needs. IT is used for operational, tactical and strategic purposes. Automatic monitoring of damage development and rapid rescheduling of activities is enabled. Based on failure history, measures can be judged and the best maintenance alternative chosen, giving the highest return on investment.
3.2
IT Systems Functionality Connected to IT Maturity
The IT maturity model could serve as an input for developing appropriate benchmarks for different levels of IT maturity within maintenance. It could also be a tool used by the procurer of MMIT to understand the prerequisites of the organisation before assessing the different MMIT solutions available. As such, knowing what functionality to utilise depending on the level of maturity is important. In [6], IT utilisation within maintenance management was studied with respect to Swedish industry. Table 1 lists the functionality studied and the results of the cluster analysis. Of 71 companies in total, 17 were characterised as belonging to the group Introduction, 35 to the group Coordination, and 19 to the group Integration. The type of functions and the extent to which these are utilised are accounted for in the three rightmost columns. In the Introduction phase, Preventive maintenance planning and scheduling, Work order planning and scheduling, Equipment parts list and Equipment repair history were utilised; these form the core functionality for the first phase. In the Coordination phase the following functionality was added: Inventory control, Spare parts purchasing, Maintenance budgeting and Key performance measures. For the Integration phase, four more functions were included: Equipment failure diagnosis, Manpower planning and scheduling, Condition monitoring parameter analysis and Spare parts requirement planning.
Table 1 Utilisation of IT in Swedish industry, Kans and Ingwald (2008)
Function                              Introduction (17)   Coordination (35)   Integration (19)
WO planning and scheduling                    2                   3                   4
Equipment parts list                          2                   3                   4
Inventory control                             0                   3                   4
PM planning and scheduling                    4                   4                   5
Spare parts requirements planning             1                   3                   4
Equipment failure diagnosis                   1                   1                   3
Equipment repair history                      2                   3                   4
Spare parts purchasing                        0                   4                   4
Manpower planning and scheduling              0                   1                   3
Maintenance budgeting                         1                   2                   3
CM parameter analysis                         1                   1                   3
Key performance measures                      1                   2                   4
0 = Do not have this functionality, 1 = Used minimally, 5 = Used extensively
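The ratings of Table 1 can be inspected programmatically. In the sketch below, a rating of 2 or higher is treated as "utilised"; this threshold is our own reading aid, not a cut-off defined in the paper, although for the Introduction group it reproduces exactly the four core functions named in the text.

```python
# Table 1 re-encoded. Ratings: 0 = not available, 1 = used minimally,
# ..., 5 = used extensively. Tuples are (Introduction, Coordination, Integration).
ratings = {
    "WO planning and scheduling":        (2, 3, 4),
    "Equipment parts list":              (2, 3, 4),
    "Inventory control":                 (0, 3, 4),
    "PM planning and scheduling":        (4, 4, 5),
    "Spare parts requirements planning": (1, 3, 4),
    "Equipment failure diagnosis":       (1, 1, 3),
    "Equipment repair history":          (2, 3, 4),
    "Spare parts purchasing":            (0, 4, 4),
    "Manpower planning and scheduling":  (0, 1, 3),
    "Maintenance budgeting":             (1, 2, 3),
    "CM parameter analysis":             (1, 1, 3),
    "Key performance measures":          (1, 2, 4),
}

groups = ["Introduction", "Coordination", "Integration"]
for i, group in enumerate(groups):
    # Treat a rating of >= 2 as "utilised" (our own threshold, see lead-in).
    utilised = [f for f, r in ratings.items() if r[i] >= 2]
    print(f"{group}: {len(utilised)} functions utilised")
# Introduction: 4, Coordination: 9, Integration: 12
```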
4
SELECTING MMIT BASED ON IT MATURITY
This section will illustrate how maintenance IT maturity could be utilised as a means to compare different MMIT, for instance when selecting which IT systems to investigate further by means of MCDM in an MMIT procurement situation. We will look at two different scenarios depending on the level of IT maturity of the organisation:
1) The procurement of MMIT is made for the first time, i.e. the computerisation of manual routines. This scenario is characterised by a maturity level corresponding to the Introduction phase.
2) The procurement of MMIT is mainly an upgrade from simple computerised support, for instance an obsolete CMMS or a simple spreadsheet solution, to a standard CMMS or ERP system. This scenario is characterised by a maturity level corresponding to the Coordination phase.
A third scenario, not addressed in this paper, would be the procurement of MMIT connected to a major investment in IT support due to highly automated or complex production. This scenario is characterised by a maturity level corresponding to the Integration phase, and in general these kinds of investments involve the procurement of more IT systems than only MMIT.
4.1
Sample of MMIT Systems to be Evaluated in the Scenarios
A study of commercial MMIT systems' functionality conducted by the author is utilised as a real-world illustration of the problem of selecting appropriate MMIT. The study covered seventeen CMMS and seven ERP systems that included a maintenance or asset management module, in total 24 IT systems. The basis for the data collection was information from the developers or vendors of the IT systems in the form of folders, brochures, demonstrations, web pages, demonstration software and telephone contact with vendors. Some results connected to this data set were presented in [1]. The information was collected during 2004-2005. The aim was to study the most commonly used off-the-shelf systems for maintenance management in Swedish industry. The study objects were therefore determined using three sources:
1) A survey about maintenance management including 118 Swedish companies, conducted by the Department of Terotechnology at Växjö University in 2004, see [11]. One question in the questionnaire covered which commercial CMMS the company used for maintenance management. A total of 87 answers were given, and some companies used more than one system. From these, systems that were used by two or more companies were chosen, in all thirteen systems (10 CMMS and 3 ERP systems).
2) An Internet survey of ERP systems used in Sweden, based upon information provided by Data Research DPU AB [12]. Of 100 ERP systems, only seven contained a maintenance or asset management module. These systems were all included in the study (3 of them were already included based on source 1).
3) A list of commonly used maintenance IT systems provided by the Swedish Center for Maintenance Management (Underhållsföretagen), published in [13]. The list contained twenty-one CMMS/ERP systems, from which systems that were pure decision support systems or had fewer than 30 users worldwide were excluded. The list of study objects was complemented with seven additional systems from this source.
This data set is highly suitable as it represents a real-life decision to be made, and the two latter sources of information could be utilised directly for this purpose. The first source represents the actual choices procurers of MMIT in Swedish industry have made.
4.2
Functionality Selected for Comparison
The functionality included in the survey accounted for in Table 1 did not completely match the functionality included in the study described in 4.1. A complete mapping of functionality for each IT maturity phase was therefore not possible, and the functionality Spare parts requirement planning, connected to the Integration phase, is not considered. Table 2 lists the functionality that will be considered for each IT maturity phase.
Table 2 IT functionality needs for different phases of IT maturity
IT maturity phase   IT systems functionality
Introduction        Work order planning and scheduling
                    Preventive maintenance planning and scheduling
                    Equipment parts list
                    Equipment repair history
Coordination        In addition to the functionality connected to phase 1:
                    Inventory control
                    Spare parts purchasing
                    Maintenance budgeting
                    Key performance measures
Integration         In addition to the functionality connected to phase 2:
                    Equipment failure diagnosis
                    Manpower planning and scheduling
                    Condition monitoring parameter analysis
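Since each phase inherits the functionality of the previous one, Table 2 can be expressed as cumulative requirement sets. A minimal sketch of that scheme, using the function names from the table:

```python
# Table 2 as cumulative requirement sets: each phase inherits the
# functionality of the previous phase.
PHASE_ADDITIONS = {
    "Introduction": [
        "Work order planning and scheduling",
        "Preventive maintenance planning and scheduling",
        "Equipment parts list",
        "Equipment repair history",
    ],
    "Coordination": [
        "Inventory control",
        "Spare parts purchasing",
        "Maintenance budgeting",
        "Key performance measures",
    ],
    "Integration": [
        "Equipment failure diagnosis",
        "Manpower planning and scheduling",
        "Condition monitoring parameter analysis",
    ],
}

def required_functions(phase):
    """All functions required at `phase`, including those inherited
    from earlier phases (relies on dict insertion order, Python 3.7+)."""
    required = []
    for p, additions in PHASE_ADDITIONS.items():
        required += additions
        if p == phase:
            break
    return required

print(len(required_functions("Introduction")))  # 4
print(len(required_functions("Coordination")))  # 8
print(len(required_functions("Integration")))   # 11
```

The totals 4, 8 and 11 match the per-phase function counts used in the figure legends of the following section.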
5
ANALYSIS AND RESULTS
In the following, the 24 IT systems will be evaluated with respect to their ability to provide the maintenance organisation with IT support depending on IT maturity level and decision-making scenario. The aim is to reduce the number of possible candidates from 24 to around five before a further in-depth evaluation. The functionality coverage for the system alternatives is found in Figures 1 and 2: Figure 1 describes the functionality coverage of the CMMS, while Figure 2 describes that of the ERP systems. The total number of functions is given for each phase in the legend. For the Introduction phase, all CMMS contain three or four of the four functions in total. For the Coordination phase the number of functions varies between four and eight of eight in total. For the Integration phase the coverage is between four and ten out of eleven functions.
[Bar chart omitted: for each of CMMS_1 to CMMS_17, the number of functions covered (0-11) in the Introduction (4 functions), Coordination (8 functions) and Integration (11 functions) phases.]
Figure 1. Functionality coverage for CMMS.
Figure 2 is read in the same way as Figure 1; the total number of functions connected to a certain phase is given in the legend. The functionality coverage for the Introduction phase varies between two and four out of four in total for the ERP systems. For the Coordination phase the number of functions varies between three and eight out of eight in total. For the Integration phase the coverage is between four and eleven out of eleven functions.
[Bar chart omitted: for each of ERP_1 to ERP_7, the number of functions covered (0-11) in the Introduction (4 functions), Coordination (8 functions) and Integration (11 functions) phases.]
Figure 2. Functionality coverage for ERP systems.
5.1
Computerisation of Manual Routines
The decision is to find the most suitable candidates for a maintenance organisation that today relies on manual work. The four functions connected to the Introduction phase are considered important and should be mandatory in the requirements specification. The four functions belonging to the next phase are desirable, while the functions connected to the Integration phase are seen as undesirable, as they indicate a too complex solution for the level of maturity. From Figure 1 we find nine CMMS meeting the mandatory requirements: CMMS_1, CMMS_9-13 and CMMS_15-17. These are the candidates to select from. To further delimit the number of candidates we compare the functionality coverage for the desirable requirements, and find that four of the nine CMMS provide a high level of coverage. We therefore select CMMS_1, CMMS_12, CMMS_13 and CMMS_14 for further evaluation, noting that CMMS_13 seems to be a highly complex system and might be disregarded for this reason. An ERP system is in general a bit too complex for an IT beginner, but depending on factors such as the existence of an ERP system within the company, this option could be of interest also in this scenario. Four ERP systems contain full coverage of the mandatory functions (ERP_1-3, ERP_6), and these also show high coverage for the desirable functions. All of these systems seem rather advanced though, and we will therefore likely not take them further in the analysis.
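The screening rule of this scenario can be sketched as a mandatory check followed by a ranking on desirable coverage, with undesirable (too complex) functionality as a penalty. The two candidate systems and their function sets below are invented for illustration; they are not the systems of the actual study.

```python
# Sketch of the scenario-1 screening rule. Function names abbreviated;
# candidate systems and their feature sets are invented for illustration.
MANDATORY   = {"WO planning", "PM planning", "Parts list", "Repair history"}
DESIRABLE   = {"Inventory control", "Spare parts purchasing", "Budgeting", "KPIs"}
UNDESIRABLE = {"Failure diagnosis", "Manpower planning", "CM analysis"}

candidates = {
    "CMMS_X": MANDATORY | {"Inventory control", "Budgeting"},
    "CMMS_Y": {"WO planning", "PM planning", "Parts list"},  # misses a mandatory
}

def screen(candidates):
    """Keep systems covering all mandatory functions; rank by desirable
    coverage, penalising undesirable (too complex) functionality."""
    passed = {n: f for n, f in candidates.items() if MANDATORY <= f}
    return sorted(passed,
                  key=lambda n: (len(passed[n] & DESIRABLE),
                                 -len(passed[n] & UNDESIRABLE)),
                  reverse=True)

print(screen(candidates))  # ['CMMS_X'] - CMMS_Y fails the mandatory check
```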
5.2
Upgrading of Existing IT System
This scenario regards the upgrading of an old, incomplete or simple IT solution for maintenance management. It is assumed that the maintenance organisation as well as the maintenance personnel are IT mature to some extent; they are, for instance, used to storing data in digital form and planning activities with computerised support. Functions connected to the first and second maturity phases are therefore seen as mandatory, while the functions connected to the last phase are desirable. Both a CMMS and an ERP solution are of interest. Only one CMMS (CMMS_13) and two ERP systems (ERP_1-2) fulfil the mandatory requirements and will be selected for further analysis. However, the system ERP_3 seems to contain a high number of functions in total and is therefore included in the list of candidates.
6
CONCLUSIONS
This paper proposes taking IT maturity into account in the procurement of IT for maintenance management. IT maturity has been used as a means to compare functionality in different MMIT in order to select the most suitable candidate systems for more detailed analysis. As such, it could be utilised as a first step before a further MCDM analysis. This paper only addresses the functional requirements, whereas the non-functional requirements have to be considered in the detailed analysis. IT maturity is not a static condition. The history of IT has shown how companies gradually move from lower to higher levels of IT maturity, resulting in shifting demands on IT applications to meet more advanced business goals; a similar development has been noted in maintenance. This implies that the assessment of IT applications is not an activity to be carried out only when new software is purchased, but should be made on a regular basis to determine whether the IT supports current maintenance practices to the full extent. For this purpose, the IT maturity model could serve as a simple yet powerful tool. This paper has shown that the different maturity phases could be translated into IT functionality, which enables the assessment of IT support on both an overall and a detailed level.
7
REFERENCES
1 Kans, M. (2008) On the Utilisation of Information Technology for the Management of Profitable Maintenance. PhD thesis, Växjö: Växjö University Press.
2 Kans, M. (2007) The development of computerized maintenance management support. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2007, Hong Kong, 21-23 March, 2113-2118.
3 Kans, M. and Ingwald, A. (2008) Exploring the information technology maturity within maintenance management. In COMADEM 2008, Energy and environmental issues: Proceedings of the 21st International Congress and Exhibition, Prague, Czech Republic, 11-13 June, 243-252.
4 Carnero, M. C. and Novés, J. L. (2006) Selection of computerised maintenance management system by means of multicriteria methods. Production Planning and Control, 17(4), 335-355.
5 Braglia, M., Carmignani, G., Frosolini, M. and Grassi, A. (2006) AHP-based evaluation of CMMS software. Journal of Manufacturing Technology Management, 17(5), 585-602.
6 Kans, M. (2008) An approach for determining the requirements of computerised maintenance management systems. Computers in Industry, 59(1), 32-40.
7 Nolan, R. L. (1979) Managing the crises in data processing. Harvard Business Review, 57(2), 115-126.
8 Kans, M. (2009) The advancement of maintenance information technology: a literature review. Journal of Quality in Maintenance Engineering, 15(1), 5-16.
9 Labib, A. W. (2004) A decision analysis model for maintenance policy selection using a CMMS. Journal of Quality in Maintenance Engineering, 10(3), 191-202.
10 Pintelon, L., Preez, N. D. and Van Puyvelde, F. (1999) Information technology: opportunities for maintenance management. Journal of Quality in Maintenance Engineering, 5(1), 9-24.
11 Alsyouf, I. (2004) Cost Effective Maintenance for Competitive Advantages. PhD thesis, Växjö University, School of Industrial Engineering.
12 Data Research DPU AB, [WWW document] URL http://www.dpu.se/listmeny.html [2004-02-02].
13 Swedish Center for Maintenance Management (2003) U&D's UH-system översikt. Underhåll & Driftsäkerhet, 7-8, 28-29.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
OPEN STANDARDS-BASED SYSTEM INTEGRATION FOR ASSET MANAGEMENT DECISION SUPPORT
Avin Mathew a, Michael Purser a, Lin Ma a, Matthew Barlow b
a CRC for Integrated Engineering Asset Management, Queensland University of Technology, Brisbane, Australia
b Australian Nuclear Science and Technology Organisation, Lucas Heights, Australia
Over the last decade, system integration has grown in popularity as it allows organisations to streamline business processes. Traditionally, system integration has been conducted through point-to-point solutions – as a new integration scenario requirement arises, a custom solution is built between the relevant systems. Bus-based solutions are now preferred, whereby all systems communicate via an intermediary system such as an enterprise service bus, using a common data exchange model. This research investigates the use of a common data exchange model based on open standards, specifically MIMOSA OSA-EAI, for asset management system integration. A case study is conducted that involves the integration of processes between a SCADA, maintenance decision support and work management system. A diverse set of software platforms is employed in developing the final solution, all tied together through MIMOSA OSA-EAI-based XML web services. The lessons learned from the exercise are presented throughout the paper. Key Words: system integration; asset management; enterprise service bus; MIMOSA OSA-EAI; web services; service oriented architecture
1
INTRODUCTION
Over the last decade, system integration has grown in popularity as it allows organisations to streamline business processes. Many companies are now automating their asset management workflows such that stock can be reordered based on RFID-scanned remaining quantities; work notifications can be triggered from condition monitoring prognoses; and work details and asset documents are automatically uploaded to PDAs for maintenance teams before they depart. System integration also supports business intelligence and data mining, where data sets can be combined in non-traditional ways. This leads to scenarios such as visualising scheduled maintenance geographically; seeing failure times overlaid on charts of operation or condition parameters; or predicting future asset capacity based on reliability block diagrams, asset throughput specifications, and predicted availability. Traditionally, system integration has been conducted through point-to-point solutions – as a new integration scenario requirement arises, a custom solution is built between the relevant systems. It is now known that while point-to-point solutions offer good performance and relatively short development time, they are sorely lacking in scalability and ease of management. Bus-based solutions are now preferred, whereby all systems communicate via an intermediary system through adapters. The adapters convert data between the system's native format and the bus's format and back again. The enterprise service bus (ESB) is the result of this paradigm, with many of the larger IT vendors now offering competing products in this space. To streamline the transfer of data between a work management system, process control system, and an asset health management system at ANSTO, a nuclear research facility in Australia, a case study was conducted into developing a service bus approach using open standards.
As opposed to an ETL (extraction, transformation, and loading) process conducted on a batch transfer basis, the service bus approach sends messages in real time once they are collected or computed by the respective systems. The messages internally use the format of the MIMOSA (Machinery Information Management Open Systems Alliance) OSA-EAI (Open Systems Architecture for Enterprise Application Integration) [1] XML Schema, and are then converted to the native system formats through specially designed adapters. As the goal is to move towards a service-oriented architecture (SOA), all developed components (in particular, native data model to OSA-EAI data model mappings) are componentised for reuse.
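The adapter conversion described above can be sketched as follows. The element names (`CanonicalMeasurement`, `AssetKey`, and so on) are simplified placeholders, not the actual MIMOSA OSA-EAI schema, which is far richer; the sketch only shows the native-to-canonical direction of an adapter.

```python
# Minimal sketch of an adapter converting a native condition-monitoring
# reading into a canonical XML message. Element names are placeholders,
# NOT the real MIMOSA OSA-EAI schema.
import xml.etree.ElementTree as ET

def native_to_canonical(reading: dict) -> str:
    """Map a native reading dict onto a canonical XML message."""
    root = ET.Element("CanonicalMeasurement")        # placeholder element name
    ET.SubElement(root, "AssetKey").text = reading["tag"]
    ET.SubElement(root, "Value").text = str(reading["value"])
    ET.SubElement(root, "Timestamp").text = reading["ts"]
    return ET.tostring(root, encoding="unicode")

msg = native_to_canonical({"tag": "PUMP-07", "value": 4.2,
                           "ts": "2009-05-11T10:30:00"})
print(msg)
```

A real adapter would also perform the reverse mapping, so that a canonical message arriving from the bus can be written back into the native system's format.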
2
BACKGROUND INFORMATION
2.1
Enterprise Service Bus
The enterprise service bus sits between information systems and facilitates communication among them. All messages pass through the bus, forming a hub-and-spoke architecture (see Figure 1) as opposed to a point-to-point architecture. The change in architecture reduces the number of potential connections from n(n - 1)/2 to n - 1 (where n includes the ESB), decreasing complexity and increasing scalability.
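The connection-count argument can be checked numerically: n systems connected point-to-point need n(n - 1)/2 links, while routing all traffic through the bus (with the ESB counted in n) needs only n - 1.

```python
# Connection counts: full mesh (point-to-point) vs. hub-and-spoke (ESB).
def point_to_point(n: int) -> int:
    """Links needed to connect n systems pairwise."""
    return n * (n - 1) // 2

def bus_links(n: int) -> int:
    """Links needed when all traffic goes through the bus (n includes the ESB)."""
    return n - 1

for n in (5, 10, 20):
    print(n, point_to_point(n), bus_links(n))
# 5 10 4
# 10 45 9
# 20 190 19
```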
Adapter
Adapter
A bus system also leads to a decoupling between systems whereby systems request data, and receive an answer but not knowing from where it came. It also leads to a greater reliance on contract-based services as systems only know of the ESB and the services or interfaces it makes visible to a particular system. To assist the exchange of messages with consistent semantics, a canonical data model [2] (sometimes known as a canonical message model) is used to provide a common, singular data format. In this case study, the OSA-EAI forms the canonical data model.
Figure 1. Enterprise service bus approach mediating asset management information systems
2.1.1 Adapters
Adapters provide a mediation mechanism for data formats and accessibility between two systems. The adapter translates data models and reference data between the ESB's canonical data model and the native system, typically through primary key mappings. For example, the asset with ID 504493 in the work management system could map to the asset with primary key "0000000100000001","1" in the ESB model. As systems might be set up to use the same primary key, the mapping can be exposed to adapters through an ESB service (also following the SOA paradigm). Mappings can become more complex if entities in the native data model map other than 1:1 with the canonical data model (e.g. a person's full name needing to be split into a first and last name). There are two options for where a mapping can be stored: either in the adapter itself, or in a service outside the adapter. Storing it in the adapter is efficient when the data type is not used in any other integration scenario and there is an n:m mapping between the native and canonical data models. If the data type is used in multiple integration scenarios and maps easily between the native and canonical formats, then storing the mapping in an accessible service is preferred to enhance componentisation. Adapters can also increase the accessibility of systems in that data can be exposed through web services. Thus, the adapter takes a request, transforms the request into a form suitable for a file reader, a database SQL query, or an API call, receives a result, and returns the result to the original requester.
2.2 MIMOSA OSA-EAI
The MIMOSA OSA-EAI¹ provides open data exchange standards for operations and maintenance data and comprises several layers, including a conceptual model, physical model, reference data, and XML schema definitions.
¹ As MIMOSA OSA-EAI is a continually evolving standard, this discussion refers to version 3.2.1 (the latest version at the time of writing).
The OSA-EAI has been expounded in previous work and only the relevant sections are discussed below. These sections are the Application, Service and XML Definitions, and the Reference Data Library. The OSA-EAI supplies three methods of transferring data via XML: the Tech-DOC, Tech-CDE, and Tech-XML specifications. Each of these approaches has its merits, and while at times they can appear to be substitutes, it is important to select the right approach on a case-by-case basis. The Tech-DOC specification is a single XML Schema that represents all parts of the CRIS. Multiple CRIS entities, as well as multiple rows of each entity, can be transmitted in a single XML document. No connection metadata is stored in the file and, as such, it is binding-independent compared to Tech-CDE and Tech-XML. The Tech-CDE specification comprises three XML Schemas: a query schema, a write schema, and a common schema that contains CRIS and supporting structural message elements. The operations, set by parameters in the messages, closely align with the CRUD (Create, Read, Update, and Delete) operations for databases. It covers all parts of the OSA-EAI, and multiple CRIS entities as well as multiple rows can be transmitted in a single XML document. Tech-CDE follows a request and acknowledgement model and contains a SOAP-based specification. The Tech-XML specification comprises numerous XML Schemas split over ten different package areas (including a package for CRIS and supporting structural message elements). The package-based classification results in duplication of certain schemas (e.g. the CreateAsEvent schema exists in seven of the packages); however, these messages incorporate different element types, resulting in non-interchangeable XML documents. The schemas are lightweight and specific to a certain operation and CRIS area. The semantics of the schemas are restricted to queries and inserts; neither edit nor delete schemas exist.
Create messages are usually limited to a single row at a time – multiple rows are created by sending multiple messages. While all sections of the CRIS are covered by Tech-XML, not all entities in the CRIS are covered. As with Tech-CDE, Tech-XML also follows a request and acknowledgement model and contains a SOAP-based specification.
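To make the adapter key-mapping described in Section 2.1.1 concrete, the following sketch shows a minimal in-memory reference-data mapping service. The class and method names are illustrative assumptions, not part of the MIMOSA OSA-EAI specification; only the sample asset keys are taken from the text.

```python
# Hypothetical reference-data mapping service: translates primary keys between
# a native system ID and a canonical (OSA-EAI-style) composite key.
class ReferenceDataMapper:
    def __init__(self):
        self._to_canonical = {}  # (system, native_id) -> canonical key
        self._to_native = {}     # (system, canonical key) -> native_id

    def register(self, system, native_id, canonical_key):
        # Register a bidirectional mapping for one entity in one system.
        self._to_canonical[(system, native_id)] = canonical_key
        self._to_native[(system, canonical_key)] = native_id

    def to_canonical(self, system, native_id):
        return self._to_canonical[(system, native_id)]

    def to_native(self, system, canonical_key):
        return self._to_native[(system, canonical_key)]

# Sample keys reuse the asset example from Section 2.1.1.
mapper = ReferenceDataMapper()
mapper.register("work_mgmt", 504493, ("0000000100000001", "1"))

assert mapper.to_canonical("work_mgmt", 504493) == ("0000000100000001", "1")
assert mapper.to_native("work_mgmt", ("0000000100000001", "1")) == 504493
```

Exposing such a mapper as an ESB service (rather than embedding it in each adapter) matches the componentisation argument made above for data types used in multiple integration scenarios.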
Figure 2. MIMOSA OSA-EAI 3.2.1 layers
With three different options for message formats, this research prioritises Tech-XML, followed by Tech-CDE, and then Tech-DOC. Tech-XML's operation-based schemas with communication metadata lend themselves well to the SOA and ESB approach. Tech-CDE's schemas also contain communication metadata and allow for edits, deletes, and multiple rows to be sent, but are not as specific as Tech-XML. Tech-DOC's schema is a fallback if the other two do not fit technically or semantically.
3 REQUIREMENTS AND DESIGN
Four requirements influenced the design of the integration service. The first three were functional requirements, while the fourth was a technological limitation. These requirements were:
1. Storing monthly-aggregated SCADA measurement data in the work management system for reporting purposes.
2. Using a maintenance decision support system to process SCADA measurement data and work management system failure/maintenance data to predict failures.
3. Triggering potential work notifications from the maintenance decision support system to the work management system.
4. The SCADA system cannot be directly accessed from any network due to a security requirement.
From the requirements, the work management system and maintenance decision support system would require two-way communication, while the SCADA system would only require one-way (output) communication. The SCADA system data would be downloaded as a CSV (comma-separated values) file, which could then be distributed onto the company intranet. Three triggers that would initiate integration processes were identified; these are shown in Table 1. As the maintenance decision support system is primarily condition data-driven, it was decided that once new SCADA measurement data was available (with the CSV collection interval determined by the organisation's business rules), any new maintenance data would be acquired in real-time. Thus, while new SCADA data was triggered by human action (bringing a CSV file to the system), the decision support system would invoke the process to acquire failure/maintenance data. Once a failure prediction and maintenance schedule had been calculated by the maintenance decision support system, human action would be required to send the schedule to the work management system.
Table 1 Integration process triggers

Trigger | Initiator
New SCADA measurement data was received as a CSV file | Human
The decision support system started a failure prediction process which required failure/maintenance data | System
A new maintenance schedule was calculated by the maintenance decision support system | Human
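A minimal dispatch table gives a feel for how the triggers in Table 1 could be routed to integration processes on the bus. The handler and process names below are hypothetical; the actual routing is performed by BPEL orchestrations, as described in Section 4.

```python
# Illustrative trigger routing: each trigger from Table 1 maps to one
# integration process. Handler names and return values are assumptions.
def on_new_scada_csv(path):
    return f"aggregate-and-store:{path}"

def on_prediction_started(asset):
    return f"fetch-failure-history:{asset}"

def on_schedule_calculated(schedule_id):
    return f"create-work-requests:{schedule_id}"

TRIGGERS = {
    "scada_csv_received": on_new_scada_csv,         # human-initiated
    "prediction_started": on_prediction_started,    # system-initiated
    "schedule_calculated": on_schedule_calculated,  # human-initiated
}

def dispatch(trigger, payload):
    return TRIGGERS[trigger](payload)

assert dispatch("prediction_started", "pump-7") == "fetch-failure-history:pump-7"
```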
3.1 Sequence Diagrams
The requirements were translated into sequence diagrams to describe the processes that occur after a trigger has fired. The sequence diagrams are presented in Figures 3, 4, and 5 and show the systems involved in each integration process, the message order, and the message format as mapped to the MIMOSA OSA-EAI. The first sequence diagram, Figure 3, involves the three required systems as well as a Data Aggregation Service, which aggregates the SCADA data (recorded at hourly intervals) to monthly averages. Data is transferred using the Tech-DOC XML Schema² because:
• Tech-CDE's queries and writes did not semantically fit the design; and
• the closest matching Tech-XML schema, CreateScalarDataAndAlarms from the TREND package, is designed for sending a single measurement point and becomes inefficient for larger datasets (increased CPU usage and latency due to the number of messages required – see Section 5 for details).
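The Data Aggregation Service's core roll-up, from hourly readings to monthly averages, can be sketched as follows. The record layout and timestamps are assumed for illustration; the actual service exchanges OSA-EAI XML documents rather than Python tuples.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def monthly_averages(readings):
    """Roll hourly (timestamp, value) readings up to monthly averages."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[(ts.year, ts.month)].append(value)
    return {month: mean(values) for month, values in sorted(buckets.items())}

# Hypothetical hourly SCADA readings spanning two months.
readings = [
    (datetime(2009, 1, 1, 0), 10.0),
    (datetime(2009, 1, 1, 1), 14.0),
    (datetime(2009, 2, 1, 0), 20.0),
]
assert monthly_averages(readings) == {(2009, 1): 12.0, (2009, 2): 20.0}
```

Aggregating before transmission is what allows the monthly figures to be stored in the work management system without shipping every hourly point across the bus.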
The second sequence diagram, Figure 4, shows the decision support system querying failure data. Such data is stored as notification records in the work management system, and the ESB is only used to forward the request and return the result to the decision support system. Data is transferred using the Tech-XML QuerySgCompWork schema from the WORK package, as failure data is stored within the maintenance work order records. All completed work orders for each segment are returned.
² While using OPC's standards would be the ideal means of acquiring data from the SCADA system, the physical network separation meant that this could not occur.
Figure 3. New SCADA CSV data event triggered process
Figure 4. Failure prediction process start event triggered process
Figure 5. Maintenance schedule calculated event triggered process
Figure 6. Overall solution architecture mapped to technologies
The third sequence diagram, Figure 5, shows the integration process started by the calculation of a new maintenance schedule. As with the second sequence diagram, the ESB simply forwards the create request and returns the result. Data is transferred using the Tech-XML CreateAsRFWandWR schema from the WORK package. As the Tech-XML schema only allows one row to be sent per message, multiple messages may need to be marshalled and sent. In contrast to the first sequence diagram, which transfers SCADA data, the number of messages in this scenario is very small and does not warrant the Tech-CDE or Tech-DOC capability of sending multiple rows.
4 IMPLEMENTATION
Both the SOA and ESB paradigms are platform-agnostic, although most software has standardised on SOAP-based web services. While MIMOSA OSA-EAI ultimately uses XML-based schemas, both Tech-CDE and Tech-XML are inclined towards SOAP bindings, and this is thus the only platform restriction. A mixture of platforms was selected in developing the integration scenarios, including Microsoft .NET, Sun Microsystems Java EE, and VisualWorks Smalltalk. Figure 6 shows the platforms, the hosting mechanisms, layers, and specific technologies used in implementing the solution. Due to the relatively small processing resources required, the entire system was deployed on a standard desktop machine. The SCADA CSV File Adapter is the only component of the system visible to users in the organisation, as it contains a user interface; the other components, once installed and set up, run in the background. As per good architectural design, components are designed with clearly delineated layers in mind, promoting reusability and maintainability. As OSA-EAI business objects (automatically generated from XSD or WSDL documents) are used in all business layers, the reference data mapping service³ returns the appropriate OSA-EAI objects for a particular business object. The reference data mapping service forms a pseudo asset registry, as it contains all object and data mappings for all systems that wish to connect to the ESB. OpenESB was selected for the ESB component as it provides a free open-source platform that is also commercially supported. Internally, it uses Java Business Integration (JBI) standards for the implementation of its binding and service components, such that it can easily interoperate with JBI components from other platforms. The three sequence diagrams were translated into BPEL (Business Process Execution Language) orchestrations using the graphical design tools within OpenESB.
Access to the work management system was governed by a web service wrapper written using the Smalltalk Domain Modelling Environment (DoME). While the services provided by this component could have been replicated in the .NET or Java EE environments, the selection of the platform was outside the authority of this case study.
³ Note that the data aggregation service is distinct from the reference data mapping service, but shares the same layered architecture, and is hence combined in the figure.
5 DISCUSSION
The relatively small size of the case study raises the question of why such a complicated design and architecture were used for a somewhat simple problem. A point-to-point solution could have been used that eliminated the ESB and MIMOSA OSA-EAI formats, such that all components would communicate directly with other components and transform data from their format to the target format. While this method would be quicker to develop and would most likely offer better performance, it cannot compete with the illustrated approach in terms of scalability, extensibility, and maintainability. The implementation of the design involves a number of platforms, not because of technical restrictions but for logistical and financial reasons. As two universities were engaged to develop the solution, the .NET platform and the Smalltalk platform were deemed the optimal choices given the different preferences and experience at the universities. Financial restrictions played into the selection of a free ESB platform; OpenESB was selected after a cost-benefit analysis was conducted. Nevertheless, the mixture of platforms highlights the benefit of working with a standard communication layer: all components can interoperate despite using different underlying technologies. For the first integration sequence, a Tech-DOC schema was used for the transfer of SCADA measurement data rather than a Tech-XML one. The reason was performance: Table 2 shows that the Tech-DOC schema required half the time of the Tech-XML one for a particular dataset with 21168 measurement events. The difference is due to the time spent creating new objects in memory, marshalling/unmarshalling parameters into/from XML documents, and sending messages over the network, all else being equal. The difference in the amount of data sent as XML documents was not substantial, with the extra data composed of structural tags required by the XML Schema.
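The reported figures can be turned into back-of-envelope ratios. The sketch below derives the structural-tag overhead and the speed-up purely from the Table 2 numbers; no new measurements are implied.

```python
# Derived ratios from the Table 2 figures (21168 measurement events).
events = 21168

tech_doc = {"messages": 3,      "mb": 54.510, "secs": 22 * 60 + 45.372}
tech_xml = {"messages": 21168,  "mb": 87.953, "secs": 44 * 60 + 51.738}

# Extra structural XML carried by the one-row-per-message approach.
overhead_mb = tech_xml["mb"] - tech_doc["mb"]

# Wall-clock ratio between the two approaches.
speedup = tech_xml["secs"] / tech_doc["secs"]

assert round(overhead_mb, 3) == 33.443
assert round(speedup, 2) == 1.97  # Tech-DOC took roughly half the time
```

The message counts (3 versus 21168) make the cause plain: almost all of the Tech-XML cost is per-message object creation, marshalling, and network round-trips rather than payload size.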
As mentioned in previous literature involving MIMOSA OSA-EAI [3], documentation on the standard and best practices is sparse. While revisions have improved the documentation, with almost all CRIS fields and XML Schema documents now documented, establishing best practices for using the XML Schema documents remains a trial-and-error process. Efforts are being made to create a software development kit, which should alleviate certain implementation issues and provide guidance on how the standard should be interpreted.
Table 2 Transferring SCADA measurement data to the Maintenance Decision Support System

 | Tech-DOC | Tech-XML
Number of measurement events | 21168 | 21168
Number of messages sent | 3 | 21168
Total size of messages | 54.510 MB | 87.953 MB
Elapsed time | 22 mins, 45.372 secs | 44 mins, 51.738 secs

6 CONCLUSION
The ability to automatically transfer data seamlessly between systems leads to a raft of possibilities for business process optimisation. Standards for data exchange in asset management, while not yet fully mature, are making headway in allowing organisations to develop flexible and reusable integration scenarios. MIMOSA OSA-EAI is a contending standard and, despite some minor issues regarding documentation, its support for a large range of asset management data types allows it to be used in numerous asset management integration processes. While the benefits of standards-based interoperability for asset management organisations are clear, they can only be achieved through collaboration amongst software vendors and the standards community.
7 REFERENCES
1 MIMOSA. (2008) Open System Architecture for Enterprise Application Integration V3.2.1, from www.mimosa.org/downloads/44/specifications/index.aspx
2 Hohpe G & Woolf B. (2003) Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Boston, USA: Addison-Wesley Professional.
3 Mathew A, Zhang L, Zhang S & Ma L. (2006) A review of the MIMOSA OSA-EAI database for condition monitoring systems. In Mathew J, Ma L, Tan A & Anderson D (Eds.), World Congress on Engineering Asset Management, Gold Coast, Queensland, Australia. Springer.
Acknowledgements This research was conducted within the CRC for Integrated Engineering Asset Management, established and supported under the Australian Government’s Cooperative Research Centres Programme. The authors would like to acknowledge the assistance of Dr. Georg Grossmann from the University of South Australia and Dr. Ken Bever from Machinery Information Management Open Systems Alliance.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
THE DATA QUALITY IMPLICATIONS OF THE SERVITIZATION – THEORY BUILDING

Joe Peppard a, Andy Koronios b and Jing Gao c

a School of Management, Cranfield University, UK
b School of Computer and Information Science, University of South Australia
c School of Computer and Information Science, University of South Australia
Servitization is now widely recognised as the process of creating value by adding services to products. A cornerstone of any servitization strategy is that ownership of the product or asset does not transfer to the customer. Rather, the customer purchases a service or capability, with the asset being used to deliver that service or capability. The bulk of the research to date seeks to understand how traditional manufacturers might deliver integrated products and services with greater efficiency and effectiveness. One area that has not been addressed is the implications for customers of high-quality data no longer being available to them. This paper explores the data quality issues emerging through the servitization transformation. Implications for customers are highlighted using a proposed framework developed from the data quality literature.
Key Words: Data Quality, Servitization

1 SERVITIZATION
Servitization¹ is now widely recognised as the process of creating value by adding services to products (Vandermerwe & Rada, [23]). Since this term was first coined in the late 1980s it has been studied by scholars seeking to understand the methods and implications of service-led competitive strategies for traditional product manufacturers (e.g. Wise & Baumgartner, [27]; Oliva & Kallenberg, [16]; Slack, [20]). During this same period there has been a growth in research on related topics such as Product-Service Systems (PSS) (Goedkoop, [6]; Mont, [15]; Meijkamp, [14]; Manzini & Verzolli, [13]), services operations, services science (Chesborough & Spohrer, [4]) and engineering asset management (Steenstrup, [21]). A cornerstone of any servitization strategy is that ownership of the product or asset does not transfer to the customer. Rather, the customer purchases a service or capability, with the asset being used to deliver that service or capability. Thus, the proposition essentially represents an integrated product and service offering that delivers value-in-use (Baines et al., [1]). For example, in the aerospace sector, engine manufacturers such as Rolls-Royce, General Electric and Pratt & Whitney all offer some form of performance-based contract to commercial airlines, tied to product availability and the capability it delivers (e.g., hours flown). Rolls-Royce (R-R) has now registered trademarks for both 'Power by the Hour' and the more inclusive 'TotalCare'. Such contracts provide the airline operator with fixed engine maintenance costs over an extended period of time (e.g. ten years). In developing TotalCare, R-R is just one example of a manufacturer that has adopted a product-centric servitization strategy.
Today, many other western companies, especially those in industry sectors with large installed product bases and high value assets (e.g., locomotives, elevators, machine tools, business machines, printing machinery, construction equipment and agricultural machinery), are also following such strategies and inevitably face similar challenges.
¹ Servitization is often referred to as servicizing, particularly in the United States. See White et al (1999) and Rothenberg (2007).
2 DATA QUALITY PERSPECTIVE
Dimensions of data quality typically include accuracy, reliability, importance, consistency, precision, timeliness, fineness, understandability, conciseness, and usefulness (Ballou & Pazer [2]; Wand & Wang 1996 [24]). Although the many dimensions associated with data quality have now been identified, it is still difficult to obtain rigorous definitions for each dimension so that measurements may be taken and compared over time. From the literature, it appears that there is no general agreement on the most suitable definition of data quality or on a parsimonious set of its dimensions (Klein [10]). Some researchers have placed a special focus on criteria specified by users as the basis for high quality information (Strong [22]; English [5]; Salaun & Flores [19]). Orr [17] suggests that the issue of data quality is intertwined with how users actually use the data in the system, since users are the ultimate judges of the quality of the data produced for them. As a result, Wang & Strong's ([22]) widely-accepted definition of data quality, "quality data are data that are fit for use by the data consumer", is adopted in this paper. The bulk of the research on servitization to date seeks to understand how traditional manufacturers might deliver integrated products and services with greater efficiency and effectiveness. One area that has not been addressed is the customer implications emerging as a result of data and knowledge no longer being available to them. Thus, from a commercial perspective, servitization may be an attractive proposition for customers; however, there may be implications for data quality and knowledge accumulation which could have longer-term implications for their competitiveness. At a fundamental level, who owns the data generated in the operation of the asset? Does servitization result in data quality problems, and in knowledge that could potentially be of significant value being lost to the organisation?
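To illustrate how individual dimensions might be operationalised despite the lack of agreed definitions, the sketch below computes two simple metrics, completeness and timeliness, over hypothetical asset records. The field names, freshness rule, and sample data are all assumptions made for the sake of example.

```python
from datetime import date

# Hypothetical asset readings; None marks a missing value.
records = [
    {"asset_id": 504493, "reading": 42.1, "recorded": date(2009, 9, 1)},
    {"asset_id": None,   "reading": 39.8, "recorded": date(2009, 9, 2)},
    {"asset_id": 504494, "reading": None, "recorded": date(2009, 6, 30)},
]

def completeness(records, fields):
    """Fraction of the inspected field values that are present."""
    filled = sum(r[f] is not None for r in records for f in fields)
    return filled / (len(records) * len(fields))

def timeliness(records, as_of, max_age_days=30):
    """Fraction of records no older than max_age_days at the as_of date."""
    fresh = sum((as_of - r["recorded"]).days <= max_age_days for r in records)
    return fresh / len(records)

assert round(completeness(records, ["asset_id", "reading"]), 2) == 0.67
assert timeliness(records, as_of=date(2009, 9, 15)) == 2 / 3
```

Even such crude measures make the point of the section: once defined, a dimension can be measured and compared over time, and hence written into a servitization contract.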
This paper will explore the data quality issues emerging from the Servitization transformation. Data Quality implications for customers will be highlighted using a proposed framework developed from the data quality literature.
3 SERVITIZATION IMPACT ON DATA QUALITY
The study by Lindberg and Nordin ([12]) shows that more and more firms are moving from manufacturing goods to providing services, or to integrating products and services into solutions or functions. This concept of "servitization" suggests that all organisations, markets and societies are fundamentally centred on the exchange of services – specifically, the exchange of intangible resources. Kapletia and Probert [9] point out that industries currently focus only on the core capabilities of servitization solutions, such as systems integration, operational services, and business consulting, while paying less attention to other capabilities such as data, information and knowledge accumulation. Traditionally, manufacturers are responsible for delivering products and supplying manuals (e.g. operational manuals and specifications) to the end-users. They are, however, responsible to a much lesser extent for collecting operational data over a long period of time. In the majority of cases, the manufacturers are not owners of the operational data and may have no right to request these data in sensitive environments. In many organisations, analysis of the accumulated operational data is one key enabler of effective asset maintenance and business process improvement. However, this may become impossible under servitization solutions. For example, instead of selling engines directly, Rolls-Royce provides a total solution (based on flying hours) to airline companies. Rolls-Royce is responsible for monitoring and collecting all operational data during and after each flight. In many cases, a proportion of this data is visible to the airlines. However, sensitive data (e.g. performance and benchmark ratios in comparison to competing products) is unlikely to be disclosed.
In practice, the case study by Johnson and Mena [7] reveals that the supplier (manufacturer) relies on the sensor instruments installed on the product to monitor and collect operational data through real-time transmission. Once the product reaches the end of its life-span, a warning message is issued from the supplier's (manufacturer's) system to the customer to replace the equipment or re-negotiate the contract. Inevitably, this causes a great deal of concern to customers: If analysing data enables the manufacturers to improve the performance of the product, will any savings be passed on to the customer? If, for example, an engine manufacturer detects that fuel consumption could be improved by operating the engine differently, do they pass this knowledge to customers (assuming fuel consumption is the responsibility of the customer)? How does the customer's lack of access to operational data affect them when re-negotiating contracts? Failure or deficiency in supplying and accumulating data for the customer often results in data quality problems and may lead to less-informed decisions. In order to address data quality impacts in servitization and allow customer organisations to better negotiate their servitization contracts, a preliminary theoretical framework consisting of four stages is built from the data quality literature.

1. Understand the Role of Data in Servitization – as a product and as part of the service
Data should be treated as both a product and a service. The data quality literature draws distinctions between the product quality and service quality of data (Zeithaml, Berry & Parasuraman [28]). Product quality of data includes product features that
involve the tangible measures of information quality, such as accuracy, completeness, and freedom from errors. Service quality of data includes dimensions related to the service delivery process, and intangible measures such as ease of manipulation, security, and the added value of the data and information to consumers (Kahn, Strong & Wang [8]). With respect to the data quality definition discussed previously – "fitness to use" – the collected data indeed serves the organisation in its business operations. Thus, in any servitization solution, data must firstly be considered as a product which needs to be delivered to the customer organisation along with the physical products the service is based on. Like the physical product, the customer organisation does not necessarily need to own the generated data; however, full access to the data must be granted. Secondly, data must also be supplied in a form that the customer organisation can analyse within their existing information systems. If, for example, Rolls-Royce chose to supply engine operational data to airlines in Excel spreadsheets instead of feeding it into the airline systems in real time, the data could become meaningless and out-of-date.

2. Develop Data Requirements and Prioritise Quality Dimensions
Modern organisations, both public and private, continually generate large volumes of data. According to Steenstrup from Gartner Research ([21]), each person on the planet generates an average of 250 Mbytes of data per annum, and this volume is doubling each year. At an organisational level, there are incredibly large amounts of data, including structured and unstructured, enduring and temporal content data, and an increasing amount of structural and discovery metadata. Most organisations have far more data than they can possibly use; yet, at the same time, they do not have the quality data they really need (Levitan and Redman, [11]). It is unlikely that a servitization solution provider will supply all data to the customer organisation, and it can also become expensive for a customer organisation to acquire a large amount of data (as reflected in the servitization contract). Thus, it is essential for customer organisations to understand how the operational data were used for business purposes in the past, as the adoption of servitization solutions will reduce the organisation's capability to collect data itself. As discussed previously, data may be considered both a service and a product in servitization; there is therefore a need to list the data quality dimensions in order to ensure that the customer organisation is able to receive quality data and analyse the data for informed and reliable decision-making. The PSP/IQ model developed by Kahn, Strong & Wang [8] can be regarded as a basis for organisations to understand their data quality requirements when negotiating the servitization contract with suppliers. The PSP/IQ model (Table 1) views data as a product, and suggests that the value of a product or service depends on the quality of the data associated with the products and services.
Table 1: Mapping the IQ dimensions into the PSP/IQ model

Product Quality, conforms to essential DQ specifications: Accuracy; Concise Representation; Completeness; Consistent Representation
Product Quality, meets or exceeds consumer expectations (Useful Data): Appropriate Amount; Relevancy; Understandability
Service Quality, conforms to essential DQ specifications: Reliability; Timeliness; Security
Service Quality, meets or exceeds consumer expectations (Usable Information): Believability; Accessibility; Ease of Manipulation; Reputation; Value-Added
Adapted from Kahn, Strong & Wang ([8])

3. Establish a data quality maturity model for comparing solutions and evaluating supplier performance
More and more companies are becoming servitization solution providers. As a result, the customer organisation needs to perform a comprehensive review of the available choices based on a complex range of criteria. Additionally, the customer organisation also needs to evaluate solution providers' performance during each contracting period. When it comes to data quality performance, a data quality maturity model becomes critical. Caballero, Gómez & Piattini ([3]) report on research aiming to optimise the data management process (DMP) in order to assure data quality. For this, they developed a framework based on maturity levels with specific and generic data quality goals, which can be achieved by executing the corresponding activities. Their data quality model provides guidance for evaluating and improving the DMP so that organisations become able to manage all related data processes – such as data acquisition, data product manufacturing and data maintenance – more and more efficiently, by addressing several issues in the main process areas such as process management, project management, support and engineering processes. The work by Caballero, Gómez and Piattini ([3]) addresses the main issues in drawing a DMP (Table 2), identifying all components and established relationships, and highlighting quality aspects for processes and for data governed by data quality policies. Applying their DQ model, organisations can learn and formally model their data quality management, so that major data problems and sources can be identified. Once identified, initiatives for avoiding them or for improving efficiency can be arranged.
Table 2: The DQ model based on maturity levels

Initial – No managed and coordinated efforts are made in order to assure data quality.

Definition – Efforts are made in order to draw the entire process, identifying and defining all components (both active and passive), their relationships and the way in which these are developed according to a previous project. Activities: DMP project management; data requirements management; data quality dimensions and metrics management; data sources and data targets management; database or data warehouse development or acquisition project management. To climb from the Initial level to the Definition level, a plan for developing the DMP must be drawn up and followed.

Integration – Many efforts are made in order to develop and execute according to organisational data quality policies. This implies the existence of several standardised data quality issues. Activities: data quality team management; data quality product verification and validation; risk and poor data quality impact management; data quality standardisation management; organisational processes management.

Quantitative Management – A DMP is integrated into an organisation's data quality culture and many efforts are made in order to take several measures related to the DMP and its components. Activities: DMP measurements management.

Optimising – Quantitative measurements taken at previous levels are used in efforts to detect defect sources or identify ways to optimise entire processes. Activities: causal analysis for defect prevention; organisational development and innovation.
Source: Caballero, Gómez & Piattini [3]
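The maturity ladder in Table 2 can be expressed as a small data structure. The sketch below is illustrative only: the activity subsets are taken from the table, but the "all activities of a level completed" scoring rule is our assumption, not part of the model in [3].

```python
# Illustrative encoding of the five DQ maturity levels of Caballero,
# Gomez & Piattini (Table 2). The scoring rule below is hypothetical.
DQ_MATURITY_LEVELS = [
    ("Initial", set()),  # no coordinated DQ activities required
    ("Definition", {
        "DMP project management",
        "Data requirements management",
        "Data quality dimensions and metrics management",
        "Data sources and data targets management",
        "Database or data warehouse development or acquisition project management",
    }),
    ("Integration", {
        "Data quality team management",
        "Data quality product verification and validation",
        "Risk and poor data quality impact management",
        "Data quality standardisation management",
        "Organisational processes management",
    }),
    ("Quantitative Management", {"DMP measurements management"}),
    ("Optimising", {
        "Causal analysis for defect prevention",
        "Organisational development and innovation",
    }),
]

def assess_maturity(completed_activities):
    """Return the highest level whose activities are all completed."""
    done = set(completed_activities)
    reached = "Initial"
    for level, required in DQ_MATURITY_LEVELS:
        if required <= done:
            reached = level
        else:
            break  # levels must be climbed in order
    return reached
```

An organisation that has completed only the Definition-level activities would thus be assessed at the Definition level, regardless of isolated higher-level activities.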
4. Adopt total data quality management
It is believed that the total data quality management (TDQM) cycle developed by Wang [25] is still highly relevant in servitization. Wang [25] identified four important roles in the data and information supply chain: data producers (suppliers), data custodians (manufacturers), data consumers, and data managers. Data producers are those who create or collect data for producing information products. Data custodians are those who design, develop, or maintain the data and system infrastructure to produce the required information products. Data consumers are those who use the information products in their work. Data managers are those who are responsible for managing the entire manufacturing process throughout the information product life cycle. The TDQM cycle (Figure 1) suggests that total data quality management is an ongoing and iterative process consisting of four phases: define data and information quality requirements, measure quality dimensions, analyze the performance, and improve based on the past. In particular, it is an ongoing process that requires collaboration among the four roles above (data producers, data custodians, data consumers and data managers). In servitization, data producers, like the data custodians, become parties external to the organisation; ensuring information flow between the external and internal parties therefore emerges as a new challenge.
Figure 1: TDQM Cycle - Wang [25]
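As a rough sketch, the define-measure-analyze-improve loop can be written as an iterative procedure. Everything below (the dimension names, integer scores and the fixed improvement step in the usage note) is an illustrative assumption, not from Wang [25].

```python
# Hypothetical sketch of the TDQM cycle: the Define phase fixes the quality
# dimensions and their targets; each pass then Measures current scores,
# Analyzes the gaps against the targets, and Improves before the next pass.
def tdqm_cycle(measure, improve, targets, max_iterations=20):
    """Run measure/analyze/improve passes until every target is met."""
    for iteration in range(1, max_iterations + 1):
        scores = measure()                          # Measure phase
        gaps = {dim: targets[dim] - scores[dim]     # Analyze phase
                for dim in targets if scores[dim] < targets[dim]}
        if not gaps:                                # all requirements satisfied
            return iteration, scores
        improve(gaps)                               # Improve phase
    return max_iterations, measure()
```

For example, with integer scores {"accuracy": 80, "timeliness": 70}, targets of 95 and 90, and an improvement step of +10 per cycle, the loop converges after three passes. The point of the sketch is the iteration itself: quality work is never a one-off project but a repeating cycle involving all four roles.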
4 CONCLUSION
Servitization is now a widely known concept. More and more companies are transforming from supplying products to providing solutions. Central to these solutions, the solution providers are responsible for monitoring and collecting data on behalf of the customer organisation and for using these data to ensure service quality. As addressed in this paper, the long-term effects associated with the issues of data ownership, data transformation, data storage and data analysis are often overlooked in the servitization contract. This often results in data quality problems, which lead to less informed decision-making in customer organisations. Based on the existing data quality literature, this paper explores a theoretical framework consisting of four stages to guide both servitization suppliers and customers in understanding the data quality impacts, and provides a guideline for managing this process effectively. This theoretical framework will be verified and refined in future case studies.
5 REFERENCES
1. Baines, T. et al. (2007) "State-of-the-art in Product-Service Systems", Proceedings of the Institution of Mechanical Engineers, Vol. 221, Part B, Journal of Engineering Manufacture, pp. 1543-1552.
2. Ballou, D.P. & Pazer, H.L. (1995) "Designing Information Systems to Optimize the Accuracy-Timeliness Tradeoff", Information Systems Research, Vol. 6, No. 1, pp. 51-72.
3. Caballero, I., Gómez, Ó. & Piattini, M. (2004) "Getting Better Information Quality by Assessing and Improving Information Quality Management", paper presented at the 9th International Conference on Information Quality, Cambridge, MA, November 2004.
4. Chesbrough, H. and Spohrer, J. (2006) "A Research Manifesto for Services Science", Communications of the ACM, Vol. 49, No. 7, p. 35.
5. English, L.P. (1999) Improving Data Warehouse and Business Information Quality: Methods for Reducing Costs and Increasing Profits, John Wiley & Sons, New York.
6. Goedkoop, M. et al. (1999) "Product Service-Systems, Ecological and Economic Basics", report for the Dutch Ministries of Environment (VROM) and Economic Affairs (EZ).
7. Johnson, M. and Mena, C. (2008) "Supply Chain Management for Servitised Products: A Multi-industry Case Study", International Journal of Production Economics, Vol. 114, pp. 27-39.
8. Kahn, B.K., Strong, D.M. & Wang, R.Y. (2002) "Information Quality Benchmarks: Product and Service Performance", Communications of the ACM, Vol. 45, No. 4, pp. 184-192.
9. Kapletia, D. and Probert, D. (2009) "Migrating from Products to Solutions: An Exploration of System Support in the UK Defense Industry", Industrial Marketing Management, forthcoming.
10. Klein, B.D. (1998) "Data Quality in the Practice of Consumer Product Management: Evidence from the Field", Data Quality, Vol. 4, No. 1, pp. 19-40.
11. Levitin, A.V. & Redman, T.C. (1998) "Data as a Resource: Properties, Implications, and Prescriptions", MIT Sloan Management Review, Vol. 40, No. 1, pp. 89-102.
12. Lindberg, N. and Nordin, F. (2008) "From Products to Services and Back Again: Towards a New Service Procurement Logic", Industrial Marketing Management, Vol. 37, pp. 292-300.
13. Manzini, E. and Vezzoli, C. (2003) "A Strategic Design Approach to Develop Sustainable Product Service Systems: Examples Taken from the 'Environmentally Friendly Innovation' Italian Prize", Journal of Cleaner Production, Vol. 11, pp. 851-857.
14. Meijkamp, R. (2000) "Changing Consumer Behaviour through Eco-efficient Services: An Empirical Study of Car Sharing in the Netherlands", Delft University of Technology.
15. Mont, O. (2000) "Product Service-Systems", final report for IIIEE, Lund University.
16. Oliva, R. and Kallenberg, R. (2003) "Managing the Transition from Products to Services", International Journal of Service Industry Management, Vol. 14, No. 2, pp. 1-10.
17. Orr, K. (1998) "Data Quality and Systems Theory", Communications of the ACM, Vol. 41, No. 2, pp. 66-71.
18. Rothenberg, S. (2007) "Sustainability through Servicizing", MIT Sloan Management Review, Vol. 48, No. 2.
19. Salaun, Y. & Flores, K. (2001) "Information Quality: Meeting the Needs of the Consumer", International Journal of Information Management, Vol. 21, No. 1, pp. 21-37.
20. Slack, N. (2005) "Patterns of 'Servitization': Beyond Products and Services", Institute for Manufacturing, Cambridge, London (CUEA).
21. Steenstrup, K. (2005) "Enterprise Asset Management Thrives on Data Consistency", Gartner Research, research article.
22. Strong, D.M. (1997) "IT Process Designs for Improving Information Quality and Reducing Exception Handling: A Simulation Experiment", Information and Management, Vol. 31, pp. 251-263.
23. Vandermerwe, S. and Rada, J. (1988) "Servitization of Business: Adding Value by Adding Services", European Management Journal, Vol. 6, No. 4.
24. Wand, Y. & Wang, R.Y. (1996) "Anchoring Data Quality Dimensions in Ontological Foundations", Communications of the ACM, Vol. 39, No. 11, pp. 86-95.
25. Wang, R.Y. (1998) "A Product Perspective on Total Data Quality Management", Communications of the ACM, Vol. 41, No. 2, pp. 58-65.
26. White, A. et al. (1999) Servicizing: The Quiet Transition to Extended Product Responsibility, report from Tellus Institute.
27. Wise, R. and Baumgartner, P. (1999) "Go Downstream: The New Profit Imperative in Manufacturing", Harvard Business Review, Sept/Oct, pp. 133-141.
28. Zeithaml, V.A., Berry, L.L. & Parasuraman, A. (1990) Delivering Quality Service: Balancing Customer Perceptions and Expectations, Free Press, New York, NY.
Acknowledgments: Cooperative Research Centre of Integrated Engineering Asset Management (CIEAM), Australia
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
ASSET MANAGEMENT FOR FOSSIL-FIRED POWER PLANTS: METHODOLOGY AND AN EXAMPLE OF APPLICATION
Ludovic Benetrix a, Marie-Agnès Garnero a, Véronique Verrier a
a Électricité de France (EDF), Research and Development Department, 6 quai Watier, 78400 Chatou, France.
The current industrial context (deregulation of the utility market, constant evolution of air emission standards) creates new requirements for electric utilities in terms of asset management and quantitative valuation. This is why EDF has for several years been developing risk-informed asset valuation methodologies and associated decision support tools. One of them, called the “Durability method”, is based on a probabilistic approach and makes it possible to deal with both local (i.e. component-level) and overall (i.e. plant- or fleet-level) issues. This paper presents the principles of this method and describes an example of application to coal-fired power plants.
Key words: asset management, valuation, probabilistic, fossil-fired, power plants.
1 OVERALL INDUSTRIAL CONTEXT
1.1 Role of fossil-fired power plants in the French energy mix
Following the oil price shocks of the early 1970s, France decided to rely on nuclear power plants for electricity generation to protect itself from possibly large oscillations in raw material prices (coal, oil and gas) and thus to reduce the variability of its generation costs. This has led to a particular energy mix which depends mainly on nuclear power: nuclear power plants represent 65% of the installed electricity generation capacity and more than 80% of the effective electricity generation every year [1]. In 2008 fossil-fired generation accounted for 3.3% of total electricity generation, as shown in Figure 1.
Figure 1. Electricity generation distribution in France in 2008
Yet the role of fossil-fired generation is essential in the French energy mix. As fossil-fired power plants have a high degree of flexibility, allowing quick start-up and power modulation, they are used to ensure the balance between generation and consumption in real time. Figure 2 shows that fossil-fired plants are used for semi-base load operation (less than 5000 hours per year), peak load operation (less than 1500 hours per year) and extreme peak load operation (less than 200 hours per year). This is why their reliability and availability are significant issues for EDF.
Figure 2. Use of the various generation means to ensure balance between generation and consumption.
1.2 Deregulation of the electric utility market
A European directive dated 26 June 2003 (refer to [4]) led to the deregulation of the electric utility market. Since 2007 (and since 2004 for non-household customers), electricity generation and commercialization have been open to competition. EDF's status changed from a public entity to a public limited company with public service obligations. The main objective of EDF is now to provide its current and future customers with an efficient public service while participating in a European electricity market whose prices are highly volatile. When fossil-fired power plants are needed to generate electricity (particularly for peak and extreme peak loads), electricity is scarce and consequently expensive on the electricity market. This accounts for a strong need for long-term optimization and relevant asset management of EDF's fossil-fired fleet.
1.3 Air emission standard evolution
Apart from the high volatility of fuel prices, the main drawback of fossil-fired electricity generation is the emission of air pollutants (nitrogen oxides NOx, sulphur dioxide SO2, dust) and greenhouse gases (carbon dioxide CO2). This is why air emission standards are constantly evolving, driven by European directives. In practical terms this consists in defining maximum allowable values for air pollutant emissions. As an example, EDF has to comply with the following specifications from 2008 to 2015 for its 600 and 700 MW fuel- and coal-fired plants (11 plants): 11338 t/year maximum emission for SO2, 13749 t/year maximum emission for NOx, and 1417 t/year maximum emission for dust. These maximum allowable values will probably be modified from 2016. Thus the following issue has to be tackled as far as fossil-fired fleet management is concerned: what kinds of investments are profitable given the current and possible future air emission standards?
1.4 New requirements for electric utilities
The overall industrial context described in the previous sections leads EDF, like other European electric utilities, to handle new requirements.
1. First, the strategy to manage the entire fossil-fired fleet in terms of whole-life cost has to be optimized with respect to the current and future emission standards. As an example, one of the questions EDF may have to answer is: is it better to develop new pollution control systems for its existing power plants or to build new plants that directly comply with the future air emission standards?
2. In the meantime, the long-term reliability and availability of the main components of fossil-fired power plants have to be optimized too. Steam turbines, steam generators, generators, condensers and pollution control systems are all concerned by these issues. Consequently the strategy to manage the ageing of the main components also has to be optimized to ensure a satisfactory level of security and performance for fossil-fired power plants.
These two requirements are strongly correlated: the decision to extend fossil-fired power plants' lifetime (e.g. through the implementation of pollution control systems) is strongly related to the good condition of the plants' components; conversely, the investments to handle component ageing have to be optimized with respect to the lifetime target for the plants under study. In addition, many uncertainties usually make the decision-making process more difficult, e.g. concerning future air emission standard evolutions or the precise lifespan of the major components. In conclusion, the decision-making process related to the new requirements EDF has to face is quite a delicate issue; this is why EDF had to develop new methodologies and associated decision support tools. These methodologies and tools have to be based on the following principles:
1. A multi-level approach that allows both plant/fleet-level and component-level issues to be dealt with.
2. A probabilistic approach to enable the modelling and computation of the various uncertainties related to the fossil-fired fleet.
3. An approach that makes it possible to cope with cross-correlated issues.
2 A SOLUTION FOR THE ELECTRIC UTILITIES' NEW REQUIREMENTS: THE “DURABILITY METHOD”
This section presents the EDF Durability methodology and its associated tools, as also described in [2].
2.1 Analysis at a component level
We focus first on the component level, which is the first phase of the overall analysis. Let us consider a component whose lifecycle management we have to optimize: what kinds of investments have to be made to ensure a satisfactory level of security, availability and performance for the component under study, and when is it preferable to make them? The component-level analysis can be applied to any major “System, Structure or Component” (SSC). For fossil-fired power plants the question can be asked for major components such as the steam turbine, generator, steam generator or condenser.
Figure 3. The key phases for component level management: (1) SSC-file (identification of events and mitigation actions), comprising steps 1.1 SSC experts and existing data identification, 1.2 SSC definition (material and functional breakdown), 1.3 SSC current state evaluation and main risks identification, 1.4 events identification, 1.5 mitigation actions identification, and 1.6 SSC-file technical and strategic validation; (2) SSC-scenarios (probability assessments and selection of events/mitigation actions); (3) SSC-valuation (distribution of probabilistic technical and economic indicators).
Figure 3 sums up the principles of the component level analysis. It breaks up into the following three main phases:
1. SSC-file elaboration: identification of events (main risks) and mitigation actions related to the SSC.
2. SSC-scenario building: quantification of the events' probability distributions and mitigation actions' costs, and scenario elaboration.
3. SSC-valuation: computation of the probabilistic technical and economic indicators to compare scenarios.
The SSC-file elaboration (step 1) aims at retrieving and aggregating the technical knowledge about the SSC being studied. It is a fundamental step of the component level analysis because the quality of the information gathered during this step strongly determines the quality and applicability of the results obtained in the next steps. A compromise between completeness of the analysis and the time constraints of the study has to be found for each SSC. This step consists in identifying the main risks (events) that could occur until the end of the SSC's life and the preventive and/or corrective mitigation actions EDF will be able to carry out to cancel or significantly reduce the impact of the possible occurrence of these events (steps 1.4 and 1.5). The events can relate to many issues: performance, obsolescence, ageing, regulatory evolutions, and so on. As shown in Figure 3, the identification of events and mitigation actions has to be preceded by the identification of experts and existing data (degradation, failure or operation histories) (step 1.1), the precise definition of the SSC in terms of material and functional scope (step 1.2), and the evaluation of the current state and identification of the main risks (step 1.3).

Then scenarios have to be built for the SSC (step 2). The dates of occurrence of the identified events are in general unknown because they rely on many different phenomena that are difficult to predict with precision: ageing of components, future utilization of the fleet, regulatory evolutions. This is why the analysis is based on a probabilistic approach that takes the various sources of uncertainty into account. Thus every event has to be associated with its probability distribution of occurrence over time. The mitigation actions then have to be quantified too: material and labour costs, as well as the impact of applying the mitigation action on the event's probability of occurrence over time.
Finally the scenarios have to be built: the relevant events and the strategies of application for mitigation actions (preventive or corrective, date(s) or strategy of application) are defined. The last step of the component analysis (step 3) is the SSC-valuation. Every scenario defined in step 2 is compared to a reference scenario (also defined in step 2) so as to help the decision maker choose the optimal strategy. The comparison is based on several technical and economic indicators, among which the Net Present Value (NPV) is the most relevant. This indicator is computed as follows:
1. The financial flows of the reference strategy are subtracted from the financial flows of the strategy being valuated over the whole life period. The financial flows are discounted using the EDF official discount rate. A relative NPV is then computed.
2. The financial flows of the different strategies are directly linked to the occurrence of the events of the SSC in question. As the events' dates of occurrence are modelled as random variables (refer to step 2), the financial flows and thus the computed NPV also become random variables.
3. Usually the probability density function of the NPV cannot be easily computed because the models are often complex. This is why EDF has developed a dedicated tool (VME: refer to [2] and [3]) to enable the modelling of the scenarios and the approximate calculation of the NPV distribution. This tool is based on Monte-Carlo simulation.
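The Monte-Carlo computation of a relative NPV distribution can be sketched as follows. This is not the VME tool: the single-event model, the Weibull failure-date distribution and the cost figures in the usage note are purely illustrative assumptions.

```python
# Simplified Monte-Carlo relative NPV sketch: the event's date of occurrence
# is a random variable; each strategy maps that date to financial flows; the
# relative NPV is the discounted difference of flows between the candidate
# (alternative) strategy and the reference strategy.
import random

def relative_npv_samples(n_runs, horizon, rate, failure_year_sampler,
                         ref_cost, alt_upfront, alt_cost, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        t_fail = failure_year_sampler(rng)  # random date of occurrence (years)
        # Reference strategy: pay the fixing cost when the failure occurs.
        ref_flows = {t_fail: -ref_cost} if t_fail <= horizon else {}
        # Candidate strategy: pay a spare-part cost now, a smaller cost at failure.
        alt_flows = {0.0: -alt_upfront}
        if t_fail <= horizon:
            alt_flows[t_fail] = alt_flows.get(t_fail, 0.0) - alt_cost
        # Relative NPV: discounted difference of the two flow streams.
        years = set(ref_flows) | set(alt_flows)
        npv = sum((alt_flows.get(t, 0.0) - ref_flows.get(t, 0.0))
                  / (1 + rate) ** t for t in years)
        samples.append(npv)
    return samples
```

A call such as `relative_npv_samples(10_000, horizon=20, rate=0.08, failure_year_sampler=lambda rng: rng.weibullvariate(8, 2), ref_cost=10.0, alt_upfront=4.0, alt_cost=3.0)` yields a sample whose histogram approximates the relative NPV distribution the decision maker then examines (mean, extreme values, probability of non-profitability).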
At the end of step 3, the decision maker has obtained the relative NPV distribution of each scenario with respect to a reference scenario (the one that would be applied if the SSC analysis were not performed). Examining the main parameters of the NPV distribution (mean NPV, extreme values of the NPV, probability of the scenario not being profitable) allows the optimal decision to be made with respect to the decision maker's goals for the SSC under study.
2.2 Extension to the plant and fleet levels
The method described in the previous section optimizes decision making for a given System, Structure or Component. Yet it does not enable decisions at a plant or fleet level, because many other topics then have to be taken into account: issues related to other SSCs and possible correlations between them, overall assumptions about the regulatory background, and valuation of the overall economic performance of the fleet. As a consequence it is useful for EDF to have an extension of the component level analysis that benefits from any SSC studies already performed. This is the objective of the plant/fleet level analysis described hereafter. Figure 4 shows its main steps.
Figure 4. The key phases for plant & fleet level management
This analysis breaks up into four steps:
1. SSC selection (step 1): this step consists in selecting which SSCs will have to be processed so as to perform a relevant plant- or fleet-wide analysis. The goal is to find a compromise between completeness (which would lead to selecting a large number of SSCs) and the time constraints of the study (which tend to reduce the number of SSCs). Usually only the major components or issues related to a plant are chosen as SSCs (steam turbine, steam generator, generator, condenser, environment).
2. SSC elaboration (step 2): this step aims at elaborating the SSC-file for each SSC selected during step 1. The method is identical to the component level analysis (refer to section 2.1) without the final valuation, because that will be done at the plant or fleet level: identification and quantification of events and mitigation actions, and building of the scenarios to be valuated.
3. Plant/fleet evaluation (step 3): after every SSC-file has been elaborated, the SSCs have to be aggregated to perform the plant or fleet evaluation. This is done by:
a. first identifying and taking into account the cross-correlations between the SSCs;
b. then building plant- and fleet-wide scenarios by aggregating the SSC-level scenarios;
c. finally elaborating a last SSC-file that takes into account the overall economic performance of the plant or fleet (valuation of the electricity generated; consideration of taxes, charges, and common maintenance and operation costs) and also the financial flows related to secondary components or issues not studied as SSCs.
4. Decision making (step 4): at the end of the analysis the decision maker obtains the NPV probability distribution of the plant or fleet under study for each scenario (as defined in step 3). The NPV probability distributions are computed using the dedicated EDF software tool EDEN [2]. Examining the relevant parameters of the NPV distributions (mean value, standard deviation, extreme values, probability of the scenario not being profitable, etc.) enables the decision maker to compare the various scenarios and to make a decision in the light of an objective and quantitative valuation of them.
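The cross-correlation handling in step 3 can be sketched by letting all SSC-level samplers share a common random draw (for example, the plant's remaining lifetime), so that one uncertainty correlates every SSC's NPV. The interface below is our assumption, not the actual EDEN model.

```python
# Sketch of plant-level aggregation with cross-correlated SSCs: each SSC
# contributes a sampler drawing one NPV realisation; correlation is induced
# by passing every sampler the same realisation of a shared uncertainty.
import random

def plant_npv_samples(ssc_samplers, common_sampler, n_runs, seed=0):
    """Return n_runs realisations of the plant-level NPV (sum over SSCs)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        common = common_sampler(rng)  # shared uncertainty correlating SSCs
        totals.append(sum(sampler(rng, common) for sampler in ssc_samplers))
    return totals
```

With, say, a uniformly distributed remaining lifetime as the common draw, an SSC whose NPV grows with lifetime and one with a fixed NPV would together produce a plant-level distribution whose spread reflects the shared uncertainty rather than independent noise.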
3 EXAMPLE OF APPLICATION: QUANTITATIVE VALUATION OF MAINTENANCE AND OPERATION STRATEGIES FOR COAL-FIRED POWER PLANTS' STEAM TURBINES
3.1 Events, mitigation actions and scenarios
This section describes a real example of application of the “Durability method” to coal-fired power plants. We focus on the component level analysis as described in section 2.1. The scope of this example is:
Fleet: the 600 MW coal-fired power plant fleet, which is made up of 3 power plants. This fleet is used for semi-base load operation (around 5000 hours of operation a year) and has a significant role in the balance between production and consumption, especially in Brittany where there is no nuclear power plant.
SSC: the “Steam turbine” was studied first because it is one of the major components of a power plant.
Main assumption: EDF has recently implemented efficient air pollution control systems for these power plants for more than 300 M€. The fleet is therefore assumed to comply with future air emission standards, and the aim is to operate the plants up to around 2030.
The “Durability method” has been used to help EDF decision makers optimize the long-term management of the steam turbine. As an illustrative example, let us consider the following model based on the “Steam turbine” SSC elaboration. The event considered here is: “Failure of component C” (C is a part of the steam turbine, not precisely defined for confidentiality reasons). The cumulative probability over time for one of the 3 studied power plants is summarized in Figure 5. The probability distributions for the other two power plants are slightly different because the operating conditions are not exactly the same, which implies that the probability of failure is not exactly the same for the 3 power plants.
Figure 5. Probability distribution of the event over time (cumulative probability of the event “Failure of component C”, 2008-2020).
The reference strategy for this event consists in fixing the component C failure at each occurrence of the event for each power plant. This operation implies a cost of X M€ (confidential) and a plant unavailability of Y weeks (confidential). The probability distribution over time once the failure has been fixed (probability that the event occurs again after fixing) is described by Figure 6 (with time 0 corresponding to the date when the failure is fixed).
Figure 6. Probability distribution of the event over time after fixing of the failure (cumulative probability of the event “Failure of component C” as a function of time elapsed since the fixing date).
The decision maker wants to assess the relevance of an alternative strategy which would consist in buying a new component C in advance. The cost of this operation is much larger than the fixing cost (4 times larger). However, this cost is expected to be counterbalanced by the fact that it allows failed components to be fixed without plant unavailability, thanks to concurrent operation. The two scenarios to be compared are summarized in Tables 1 and 2.

Table 1. Summary of the reference scenario
Cause: any occurrence of the event “Failure of component C”.
Mitigation action: fixing of the failure on component C (X M€; Y weeks of unavailability).
Probability after action: refer to Figure 6.

Table 2. Summary of the alternative scenario
Date or occurrence: 2010.
Mitigation action: preventive acquisition of a new component C (4X M€; 6Y weeks delay).
Probability after action: not changed.

Date or occurrence: first occurrence of the event “Failure of component C”.
Mitigation action: replacement of component C by the new one (0.7X M€; 0.35Y weeks of unavailability).
Probability after action: 0 until end of life of the plant.

Date or occurrence: first occurrence of the event “Failure of component C”.
Mitigation action: fixing of the failed component C in concurrent operation (X M€; Y weeks delay but no plant unavailability).
Probability after action: not changed.

Date or occurrence: other possible occurrences of the event “Failure of component C” on other plants.
Mitigation action: replacement of component C by the fixed component C (0.7X M€; 0.35Y weeks of unavailability).
Probability after action: refer to Figure 6.
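To make the mapping from the scenario tables to a simulator input concrete, the two scenarios can be encoded as rule lists. The schema below is hypothetical (it is not the actual VME input format); costs are in units of X M€ and durations in units of Y weeks, matching the confidential placeholders of Tables 1 and 2.

```python
# Hypothetical rule-list encoding of the two scenarios. Each rule names its
# trigger, the mitigation action, its cost/duration (in X M EUR and Y weeks),
# and the effect on the event's probability of recurrence.
REFERENCE_SCENARIO = [
    {"trigger": "any_failure", "action": "fix_component",
     "cost_X": 1.0, "unavailability_Y": 1.0, "probability_after": "figure_6"},
]

ALTERNATIVE_SCENARIO = [
    {"trigger": ("year", 2010), "action": "buy_spare",
     "cost_X": 4.0, "delay_Y": 6.0, "probability_after": "unchanged"},
    {"trigger": "first_failure", "action": "replace_with_spare",
     "cost_X": 0.7, "unavailability_Y": 0.35, "probability_after": "zero"},
    {"trigger": "first_failure", "action": "fix_failed_concurrently",
     "cost_X": 1.0, "delay_Y": 1.0, "probability_after": "unchanged"},
    {"trigger": "later_failure_other_plant", "action": "replace_with_fixed",
     "cost_X": 0.7, "unavailability_Y": 0.35, "probability_after": "figure_6"},
]

def total_rule_cost(scenario):
    """Sum of the rules' costs (in X M EUR), ignoring timing and discounting."""
    return sum(rule["cost_X"] for rule in scenario)
```

Such an encoding shows why the valuation must be probabilistic: the undiscounted rule costs favour the reference scenario, and only simulating when each trigger fires (and discounting the resulting flows) reveals the alternative's profitability.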
3.2 Valuation results and decision making
The NPV probability distribution of the alternative strategy compared to the reference strategy is shown in Figure 7. The alternative strategy is always profitable, that is to say the probability of the NPV being greater than 0 equals 1. The mean value of the final NPV is NPVmean (confidential). The NPV distribution ranges from 0.4 x NPVmean to 2.1 x NPVmean.
Figure 7. Relative NPV distribution of the alternative scenario compared to the reference scenario (probability as a function of normalized NPV, NPV / NPVmean).
Figure 8 shows the evolution over time of the mean value of the relative NPV. We can notice that the preventive acquisition of component C is not profitable at the beginning because the new component C is quite expensive. But the investment becomes profitable from 2015 onwards as more and more failures occur in the reference scenario, implying sizeable costs due to fixing operations and unavailability periods. These costs are avoided in the alternative scenario thanks to the earlier acquisition of a spare component C. To conclude, this study shows the long-term profitability of the preventive supply of a spare component, whereas the investment could have been considered too expensive compared to the corrective mitigation actions. Based on this probabilistic study, the EDF decision maker will make a decision in the next few months about the acquisition of component C.
Figure 8. NPV mean value evolution over time (NPVmean / final NPVmean as a function of time, 2005-2035).
4 CONCLUSION AND PERSPECTIVES
The overall European and French industrial context raises new needs for EDF in terms of long-term management of its fossil-fired assets. The “Durability method”, developed by EDF since 2001, is one possible answer to these needs: this fully integrated method provides a probabilistic asset management approach covering the component, plant and fleet levels. The example of long-term management optimization for one of the major components of coal-fired power plants, the steam turbine, showed how the method can be applied in practice at a component level. It particularly underlined the benefit that can be obtained from supplying a spare part of a steam turbine's major component in advance, in spite of its high initial cost. Developments are still going on, mainly in the following directions:
1. Regarding coal-fired power plants: other SSCs are currently under study or will be elaborated in the coming months (steam generator, environmental issues, generator) so as to make the overall long-term analysis and optimization possible.
2. Regarding the “Durability method”: the current method relies on the detailed elaboration of several SSC-files for the approach to be relevant. But processing an SSC-file can be quite a difficult and long task. As a consequence it can be delicate to process all the needed SSC-files and to perform a plant or fleet optimization, whereas this issue is fundamental for EDF's life cycle management. Thus the EDF Research & Development Department is currently working on a simplified approach to deal with plant- or fleet-level issues within time and human resource constraints.
5 REFERENCES
1. EDF group (2008) Document de référence – http://shareholders.edf.com/the-edf-group/shareholders-97231.html
2. ICONE16-48911 – Overview of EDF life cycle management & nuclear asset management methodology and tools – P. HAÏK, K. FESSART, E. REMY, J. LONCHAMPT, EDF Research & Development.
3. Lambda Mu 16 – October 7th to 9th 2008 – Relative valorisation of major maintenance investments taking into account risk – J. LONCHAMPT, K. FESSART, EDF Research & Development.
4. Directive 2003/54/EC of the European Parliament and of the Council of 26 June 2003.
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
CMMS – INVESTMENT OR DISASTER? AVOID THE PITFALLS
Christer Olsson a, Ashraf Labib b, and Cosmas Vamvalis c
a Senior Consultant Maintenance Management, Midroc Engineering AB; Member of Board, the Swedish Maintenance Society, UTEK. E-mail: [email protected]
b Professor and Associate Dean (Research), University of Portsmouth, Portsmouth Business School, Richmond Building, Portland Street, Portsmouth PO1 3DE, United Kingdom. E-mail: [email protected]
c Vice Chairman, Hellenic Maintenance Society; Managing Director, ATLANTIS Engineering Ltd, nt.Tritsi 21 Pilea, Thessaloniki, Greece. E-mail: [email protected]
It has been reported that a high percentage of Computerised Maintenance Management System (CMMS) implementations have not been successful in the past (Labib, 2008). Even when CMMS applications are implemented, low usage of the software options is often observed. This may be attributed to the low organisation level of many maintenance departments. The investment in modern software for maintenance management is an expensive project, especially when one considers the money invested in acquisition, implementation and maintenance licence costs. If we also consider the time and effort that must be invested in training, setting up the structure of the system and collecting data, the investment grows to at least double or even triple the initial cost. One might then think that the return on this investment is closely followed and that use of the new system is heavily supported by management. In real life, however, those systems often do not provide the support the users want or need, resulting in poor trust from maintenance organisations in their own tool. The use of standards to secure proper understanding and data quality is very often not in place. Using standards and proper structures provides a platform for more effective use, but commitment from the people involved is also needed to get the right return on the investment. We argue that this situation can be improved by the use of audits and benchmarking. One example of how this can be done is the Automated Tool of Maintenance Auditing, Benchmarking for Continuous Improvement (AMABI). AMABI consists of three modules, namely auditing, benchmarking and recommendations. The first module is an auditing module which evaluates the CMMS data directly in order to assess the current organisation level and performance of the maintenance department. The second module is benchmarking.
Within the benchmarking module, auditing results from different companies are automatically compared using different groupings (e.g. company sector and/or size). The third module is related to recommendations: points of further exploitation regarding the organisation of the maintenance department are identified, and proposals for improvement are provided in a prioritised manner. The proposed tool assists companies in sustaining a continuous improvement process. The main innovation of the tool is the automatic auditing and benchmarking, which makes it very cost effective, so it can be broadly used. Moreover, it ensures that results are not biased, as human judgment is minimised. Actual results of the tool in 50 Greek and Cypriot companies (e.g. auditing criteria, 2009 benchmarking results and indicative proposals for improvement) will be presented. Finally, the benefits reported by companies using the tool will be discussed.
1 STRATEGY – CULTURE
When one decides to invest in a new CMMS, this is normally part of an overall strategy to improve one's maintenance performance. So the first thing to think about is: how do I measure whether improvement really takes place? How do I use the new tool in the best way, and how do I get the best out of the system? First of all, be open-minded: try to understand the CMMS designers, how they have set up the system and the work processes they have worked out. Secondly, be ready to change one's own processes if this is possible; presumably you have not bought a new system to support the old way of working. Be aware that most of the problems with a new CMMS come from the fact that we try to bend the system to the way we are used to working:
• "I am not prepared to change my way of working because the system says so."
  o "We have always worked like this!"
  o "We have never worked like that!"
As soon as one starts to change part of the system, there is also a risk that some of the features built into the system won't work, because one has broken a logical chain. This is also very important from a cost perspective, as all changes are probably made by consultants, not just once but every time the system is updated or a new feature is introduced. Overcoming cultural rules and routines takes a lot of patience and hard work. Leadership and guidance are important issues, as no system is better than the quality of the data in its database, and data quality is created by motivated people. This means that they must know the quality parameters of the data, what it will be used for and, maybe most important, what is in it for them. The use of industrial standards like CEN or ISO is a good strategy and gives one a safer road to success (EN 13306 Maintenance Terminology etc.).
1.1 Asset Hierarchy – History and Knowledge Base
The backbone of every CMMS is the asset hierarchy. This is one of the crucial features of a CMMS: it is where one lists all the assets, defines the relations between them and connects all the information needed for the work. The asset hierarchy is different from the asset register found in accounting departments, as the latter is usually developed from a financial perspective rather than an engineering one. The hierarchy is preferably set up from a responsibility perspective, so that it also shows which function in your organization is responsible for the care of each entity/equipment in your plant. Take your time before you decide on the hierarchy design, because changing it at a later date is very demanding, as you start to build history from the first moment. If you need guidance, a standard like ISO 14224 may be of help.
That standard was drawn up by the oil and gas industry, but it is especially valuable for those who plan to do benchmarking, as it gives structure and advice on what belongs to a certain type of equipment and the structures needed to perform benchmarking. In other words, it provides guidance on the taxonomy (classification) of different machines and their components.
1.2 Work Process
One of the most important issues you will deal with in your CMMS is the process of generating work/service requests and history records for your assets. Keep the Deming cycle (PDCA) in mind and be sure to set up the process to support all parts of the circle (Plan, Do, Check, Act). To guarantee the quality of work and historical data, you must have the documentation and decision-making process in place and a proper training program for users. A good guide is the EN 13460 standard (Maintenance – Documents for Maintenance), which also proposes which documents should be used in the work process.
1.3 Monitoring Progress
An important task for the CMMS is to provide data for analysing where there are possibilities to improve the performance of your maintenance operations. Whether you want to analyze your data to follow only your own indicators over time, or decide to start benchmarking or comparing key performance indicators (KPIs) with other companies, it is of great importance that you define your indicators very well. It is also important that you can measure the results from many perspectives, in a balanced approach, to find the right answers to your questions when formulating your strategy plan for the coming period.
Again, you will find help in the standards, e.g. EN 15341 (Maintenance – Maintenance Key Performance Indicators). This standard gives you the opportunity to select your set of indicators from 71 KPIs, grouped under three levels in three parts: economic, technical and organizational indicators. More help is to be found in the following products from the EFNMS Benchmarking Committee:
• "A user requirement specification for the calculation and presentation of Maintenance Key Performance Indicators according to EN 15341". This product is free to use for the members of EFNMS (the European Federation of National Maintenance Societies).
• "Global Maintenance and Reliability Indicators", a publication of EFNMS and SMRP (the Society of Maintenance and Reliability Professionals, USA).
• "The EFNMS Benchmarking Workshop", which provides training and practice in calculating and understanding the indicators of EN 15341.
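To make the indicator discussion concrete, here is a minimal sketch of one availability-style indicator. The formula is the generic textbook definition of availability, not quoted from EN 15341, and the function name and figures are invented:

```python
def availability(operating_hours, downtime_hours):
    """Generic availability indicator: the share of required time for
    which the equipment was actually able to operate."""
    total = operating_hours + downtime_hours
    if total == 0:
        raise ValueError("no time recorded")
    return operating_hours / total

# e.g. 8322 operating hours and 438 hours of downtime in a year
print(round(availability(8322, 438) * 100, 2))  # → 95.0
```

The point of defining such an indicator precisely (what counts as downtime, over which required time) is that two companies can only benchmark it if they compute it the same way.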
1.4 Interfaces
Make sure that your CMMS software is compatible with the other software packages involved, or that you plan to involve, in your maintenance and operations. Very important are, of course, the financial, documentation and scheduling software packages such as ERP (Enterprise Resource Planning). They must be integrated when it comes to setting rules for resource structures, rates and calendars, as well as equipment and project breakdown structures. They must be able to supply you with the correct data for your indicators and your strategy planning. There is a very big risk that this will be very expensive if you have to put software consultants to work on solutions, and it will be so every time you update any of your systems. Again, you can use the standards to establish the right understanding between the financial, project and maintenance organizations on what is needed to provide the data quality for your operations. The terminology standard tells you, among other definitions, what is included in maintenance costs; the KPI standard states what special data might be needed from the financial system; and the document standard tells you what needs to be provided by your documentation system.
2 EXAMPLE OF INCREASING CMMS EXPLOITATION
Limited CMMS exploitation can be noticed in most implementations. This observation can be attributed to two fundamental reasons: the first is that only a limited set of the CMMS options is used; the second is that, even for the options being used, it is common that the recorded data are not analysed in order to take decisions. Several efforts have been made to increase CMMS exploitation. One of these efforts is described by the following procedure.
2.1 General description – Main components
This procedure consists of automatic CMMS auditing, benchmarking and recommendations modules. These three modules are presented in the following paragraphs. The recommendations module is presented first, so that the auditing module will be clearer and more transparent later on.
2.1.1 Recommendations
The third module, which is presented first, is the recommendations. In this module, points of further exploitation regarding the organisation of the maintenance department are identified, based on the auditing results, and recommendations are provided in a prioritised manner. These recommendations are related to each company's particularities (e.g. in large maintenance departments 'maintenance task planning' is more important than in smaller departments). CMMS recommendations are made in its four main sections – "Corrective maintenance", "Preventive maintenance", "Spare parts" and "Miscellaneous" – and are presented below.
Figure 1. Recommendations concerning 'Corrective maintenance'
In Figure 1, the recommendations for corrective maintenance [1] (1) are presented in a prioritised way. First, companies should record and issue work orders for the immediate actions [1] (11). The next step is to record and issue work orders for the deferred actions (111). In parallel, immediate-action data should be analysed (112). When data are analysed, specific decisions should be taken (e.g. enhancement of PM schedules) (1121). It is important that the technical department, in addition to doing its job right, also presents this to the company's management (11211). In parallel, KPI targets should be set and their results monitored; the most important KPI concerning corrective maintenance is considered to be equipment availability [2, T1] (1122). It is also important to monitor and analyse maintenance cost (1123). When maintenance costs are known, the 'replace or repair' model can be run, especially for the machines having the highest cost (11231). Finally, technicians' performance can be analysed; this is a sensitive issue and management must be very careful in the way it is handled (1124). One more step is to collect maintenance requests directly from production (into the CMMS), and not orally or in any other way (113).
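The numbering scheme above (1, 11, 111, 1121, ...) encodes a tree: each code extends its parent's code by one digit. A hypothetical sketch of how such a recommendation tree could be held and queried (the dictionary layout and helper names are invented, not part of AMABI):

```python
# Hypothetical encoding of the corrective-maintenance recommendation
# tree of Figure 1: the parent of "1121" is "112", and so on.
RECOMMENDATIONS = {
    "1":     "Corrective maintenance",
    "11":    "Record and issue work orders for immediate actions",
    "111":   "Record and issue work orders for deferred actions",
    "112":   "Analyse immediate-action data",
    "1121":  "Take decisions (e.g. enhance PM schedules)",
    "11211": "Present results to company management",
    "1122":  "Set and monitor the equipment-availability KPI",
    "1123":  "Monitor and analyse maintenance cost",
    "11231": "Run the replace-or-repair model on the costliest machines",
    "1124":  "Analyse technicians' performance (handle with care)",
    "113":   "Collect maintenance requests directly in the CMMS",
}

def parent(code):
    # dropping the last digit gives the parent code; "1" has no parent
    return code[:-1] or None

def children(code):
    return sorted(k for k in RECOMMENDATIONS if parent(k) == code)

assert children("112") == ["1121", "1122", "1123", "1124"]
assert parent("11231") == "1123"
```

Because priority is implicit in the numbering, walking the tree breadth-first yields the prioritised recommendation order described in the text.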
Figure 2. Recommendations concerning 'Preventive maintenance'
In Figure 2, the recommendations for preventive maintenance (PM) (2) are presented. First, PM schedules must be generated (21). Afterwards, PM tasks should be monitored (e.g. weekly or monthly). Next, PM tasks should be executed and recorded in the CMMS (211). After this, it is recommended to monitor KPIs such as the technicians' time spent on PM compared to total maintenance man-hours [2, O16 & O18] (2111). Finally, the optimum inspection time model (based on recorded immediate actions) can be run (2112).
In Figure 3, the recommendations for spare parts (SP) (3) are presented. The most important is to record spare-part consumption; through this, a critical component of maintenance cost is captured and each machine's bill of materials is built up (31). The next step is to record spare-part deliveries (311). When consumption and deliveries are accurately recorded, a physical inventory should be performed so that the stock level is known (3111). The next step is to record the SP location in the stores, specifically the row and bin number (31111). After this, labels can be stuck on the shelves (311111) and RF PDAs can be used to record consumption and returns of the consumed SP (3111111). Also, the optimum stock level model can be run (31112) and KPIs regarding the spare parts can be set, like SP value / asset replacement value [2, E7] (31113). In parallel, orders to suppliers (3112) and offers from the suppliers (31121) can be recorded.
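The paper does not specify its "optimum stock level model", so as a generic illustration only, here is a textbook reorder-point rule for a spare part under roughly steady consumption (function name and all figures invented):

```python
import math

def reorder_point(daily_demand, lead_time_days, demand_std, z=1.65):
    """Reorder when stock falls to the expected lead-time demand plus a
    safety stock sized for roughly a 95% service level (z = 1.65)."""
    safety = z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety

# a part consumed 0.4/day on average, 10-day lead time, std 0.2/day
print(round(reorder_point(0.4, 10, 0.2), 2))  # → 5.04
```

Such a rule only becomes usable once consumption and deliveries are recorded accurately in the CMMS, which is exactly why the recording steps (31, 311, 3111) come before the model (31112) in the recommendation order.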
Figure 3. Recommendations concerning ‘Spare parts’
Figure 4. Recommendations concerning ‘Miscellaneous’
In Figure 4, the miscellaneous recommendations (4) are presented. There are two main miscellaneous issues: one is to incorporate the maintenance procedures into the company's ISO documentation (41); the other is the data bridge with the company's ERP, which is important in order to save human resources from recording the same data twice (42).
Some general comments about the recommendations: a) the order of the recommendations depends on each company's priorities; b) the proposed order refers to an existing installation, so the steps should be small in order to obtain the maximum result with the minimum effort. In a new installation the order of the recommendations could be different, and procedures could be followed in their logical sequence (e.g. first orders and afterwards deliveries).
In the process of increasing CMMS exploitation there are two options: a) use as many features of the CMMS as possible; b) increase the exploitation of the features already being used. The latter can be done in four ways: b1) increase the percentage of data recorded in the CMMS (e.g. increase the percentage of deferred actions recorded, Figure 1, 111); b2) increase the procedure execution percentage (e.g. increase PM task execution, Figure 2, 211); b3) increase the data analysis frequency (e.g. analysis of the immediate-action history, Figure 1, 112); b4) make procedures more effective (e.g. work orders given orally should instead be given through printouts; spare-part orders placed through printout and fax should instead be placed electronically).
2.1.2 Auditing
The first module, which is presented second, is the auditing module, which evaluates CMMS data directly in order to assess the current organisation level and performance of the maintenance department. There are two kinds of auditing criteria. The first is Key Performance Indicators (KPIs).
The common problem with KPIs is the reliability of their results, which sometimes makes their use for decision making difficult. The second kind of auditing criteria is related to whether or not simple procedures are executed. In Table 1, indicative automatically audited results from four companies, on 13 auditing criteria, are presented.
Table 1. Indicative auditing criteria from 4 companies

Criterion                                                 Company1     Company2      Company3     Company4
1. Number of machines                                     496          166           99           474
2. Number of immediate actions (per machine)              1070 (2,1)   113 (0,68)    452 (4,5)    2217 (4,6)
3. Average words per immediate action                     6,0          6,8           2,6          8,5
4. Hours recorded per technician & day                    5,5          5,0           1,6          7,8
5. Number of PM schedules (per machine)                   39 (0,07)    89 (0,53)     60 (0,60)    527 (1,21)
6. PM schedules with at least one execution in the year   35 (89%)     64 (71%)      58 (97%)     245 (46%)
7. Machines availability                                  –            –             95,47%       98,60%
8. Time spent on PM compared to CM                        0,30         0,93          2,42         0,45
9. Suppliers offers                                       5            7             0            15
10. Spare parts location in the stores                    0            451 (27%)     0            229 (52%)
11. Spare parts minimum stock level                       101 (17%)    415 (25%)     27 (1%)      0
12. Machines spare parts (per machine)                    306 (0,62)   1.078 (6,49)  631 (6,37)   430 (0,91)
13. CMMS reports used in company's ISO                    –            45,0%         –            62,5%
From the audited results, the following points can be highlighted:
• On the 1st criterion, the number of machines in each company is presented.
• On the 2nd criterion, the number of immediate actions recorded and, in parentheses, the number of immediate actions per machine are presented. It can be seen that Company 2 has a figure below 1 (0,68), which most probably means that not all immediate actions are recorded.
• On the 3rd criterion, the average number of words per immediate action is presented. It can be seen that Company 3 has only 2,6 words per immediate action, which means it will be very difficult to analyse machine history afterwards.
• On the 4th criterion, the hours recorded per technician and day are presented, as it is important for every company to know how effectively its engineers are using their time. It can be seen that Company 3 has recorded only 1,6 hours per technician and day, which should be further analysed.
• On the 5th criterion, the number of existing PM schedules is presented. It can be noticed that Company 1 has only 39 PM schedules for 496 machines, which should be increased.
• On the 6th criterion, PM executions are presented. It can be noticed that Company 4 has only 46% of its PM schedules executed. After analysing this, it was found that this company has a lot of daily PM schedules whose execution it does not record in the CMMS.
• On the 7th criterion, it can be seen that only Companies 3 and 4 are monitoring their machine availability.
• On the 8th criterion, it can be seen that Company 3 is doing much more PM compared to CM, so it appears to have the best result. Nevertheless, this conclusion is not correct, as Company 3 records only 1,6 h per technician and day (4th criterion).
• On the 9th criterion, it can be noticed that very few offers from suppliers are recorded.
• On the 10th criterion, it can be noticed that only Companies 2 and 4 record spare-part locations in the stores. Company 4 has just started to monitor its spare parts, which is why it has so few.
• On the 11th criterion, it can be seen that, in fact, only Company 2 is monitoring the minimum stock level of its spare parts.
• On the 12th criterion, it can be seen that Company 2 has the most up-to-date machine bills of materials.
• On the 13th criterion, it can be seen that only Companies 2 and 4 use ISO codes for their CMMS printouts.
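The "automatic" character of the auditing means criteria like 2 and 3 are derived straight from raw CMMS records. A minimal sketch of that idea, with invented field names and a tiny invented data extract (not the AMABI implementation):

```python
from statistics import mean

work_orders = [  # tiny illustrative CMMS extract; field names invented
    {"machine": "M1", "description": "bearing noise on drive end"},
    {"machine": "M1", "description": "belt replaced"},
    {"machine": "M2", "description": "leak"},
]
machines = ["M1", "M2", "M3"]

# criterion 2: immediate actions per machine
actions_per_machine = len(work_orders) / len(machines)
# criterion 3: average words per immediate action
avg_words = mean(len(wo["description"].split()) for wo in work_orders)

# a ratio below ~1 action/machine, or very short free-text descriptions,
# would be flagged in the audit report as likely under-recording
print(round(actions_per_machine, 2), round(avg_words, 1))  # → 1.0 2.7
```

Because such figures come from thousands of records rather than an auditor's impressions, they are cheap to recompute and hard to bias, which is the selling point claimed for the tool.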
2.1.3 Benchmarking
The second module, which is presented third, is benchmarking. In this module, the auditing results from several companies are automatically compared, using different groupings, in order to find out how a maintenance department performs compared to other maintenance departments. The grouping typically used is the CMMS sections presented in §2.1.1; other groupings are the company's sector and size. Finally, an interesting comparison is with the same company's results in the previous year.
Table 2. Benchmarking results for Company 2

Section                   Maximum   Average   Company 2   Position
Corrective maintenance    32        16,4      24,4        3
Preventive maintenance    28        15,0      18,2        4
Spare parts               30        12,1      24,9        1
Miscellaneous             10        3,4       7,3         2
Total                     100       46,9      74,8        2
In the example of Table 2 the following are presented: a) the maximum marks in each section (100 in total); b) the average marks of all companies participating (in the specific period); c) Company 2's marks in each section and in total; d) Company 2's position in each section and in total. In Table 2 it can be noticed that the strongest section of Company 2 is "Spare parts" and the section with the most room for improvement is "Preventive maintenance".
2.2 Companies' feedback
After running the process, companies get feedback in three ways: a report sent to each company, a follow-up meeting, and the awards given.
2.2.1 Report
The report sent to each company consists of three parts:
a) Results of the audited criteria and the equivalent marks (depending on the range of the results). Examples of results have been presented in Table 1.
b) Benchmarking, per CMMS section, comparing the maintenance department with other maintenance departments. An example has been presented in Table 2.
c) Recommendations. Examples for the companies audited (§2.1.2, Table 1) could be: based on criterion 5, Company 1 is recommended to increase its PM schedules; based on criterion 7, Companies 1 and 2 are recommended to monitor their availability; based on criterion 11, Companies 3 and 4 are recommended to set and monitor minimum stock levels for their spare parts.
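The per-section positions reported in Table 2 amount to ranking each company's section marks against its peers. A hypothetical sketch (only Company 2's row echoes Table 2; the two peer companies and all their marks are invented, and the real comparison pool is larger):

```python
# invented peer data; only Company2's marks follow Table 2
scores = {
    "Company1": {"Corrective": 20.0, "Preventive": 25.0, "Spares": 10.0},
    "Company2": {"Corrective": 24.4, "Preventive": 18.2, "Spares": 24.9},
    "Company3": {"Corrective": 26.0, "Preventive": 22.0, "Spares": 12.0},
}

def position(company, section):
    """1-based rank of a company in one section (highest mark first)."""
    ordered = sorted(scores, key=lambda c: scores[c][section], reverse=True)
    return ordered.index(company) + 1

assert position("Company2", "Spares") == 1      # strongest section
assert position("Company2", "Preventive") == 3  # most room to improve
```

Grouping the `scores` pool by sector, size, or year before ranking gives the other comparisons mentioned in §2.1.3.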
2.2.2 Review meeting
Usually a review meeting takes place to discuss the report with each company's top management. The main subject of this meeting is to identify the most significant points of potential improvement in the company and to decide on the procedures to be followed to achieve the improvements [1, p.6]. Usually, different points of potential improvement will be identified the following year.
2.2.3 Awards
Having the results on the audited criteria from all companies, and by assigning marks to the ranges of the auditing results, a total mark for each maintenance department's performance is extracted. The top-performing companies are identified and three awards are given, which is an incentive and a reward for the efforts of the maintenance departments.
2.3 Conclusion
2.3.1 Characteristics – Innovation
The common alternative procedure relies on the physical presence of an auditor, who afterwards provides benchmarking and recommendations. The main characteristic and innovation of the present process is that it is automatic. This has two main effects: a) it minimises the cost, as it eliminates the need for the physical presence of an auditor; minimal cost increases the number of participating companies, so the benchmarking results are more accurate and the process can run more frequently; b) it ensures that results are not biased, as human judgment is not involved. A second characteristic is that the auditing information is detailed, as thousands of data records are processed in order to obtain the results.
2.3.2 Benefits
The benefits of the procedure are as follows:
• Noble competition develops among the companies (especially the top performers). This competition motivates maintenance departments to improve.
• Maintenance departments, knowing that they will be audited at the end of the year, continuously do their best throughout the year.
• More active top-management participation, achieved thanks to the report received and the review meeting. Thus, specific targets are set and top management monitors them.
2.3.3 Future
The procedure described is planned to have the following three improvements:
• To further automate the process with the use of the Internet.
• To be adopted by more CMMSs, which will make the benchmarking results more reliable.
• To enhance the algorithm generating the recommendations; here the contribution of universities will be significant.
3 REFERENCES
1. EN 13306 Maintenance terminology.
2. EN 15341 Maintenance Key Performance Indicators.
3. Labib, A.W. (2008) Computerised Maintenance Management Systems, in "Complex Systems Maintenance Handbook", edited by K.A.H. Kobbacy and D.N.P. Murthy, Springer, ISBN 978-1-84800-010-0.
Acknowledgements
The second and third authors would like to acknowledge the British Leonardo for partial funding of this work under the project titled iLearn2main.
CONDITION MONITORING SUPPORTED DECISION PROCESS IN MAINTENANCE
Samo Ulaga a, Mladen Jakovcic b, Drago Frkovic c
a University of Maribor, Mechanical Engineering Department, Smetanova 17, 2000 Maribor, Slovenia.
b Croatian Metrology Society, Berislavićeva 8, 10 000 Zagreb, Croatia.
c INA d.d., Investment Management Sector, Lovinčićeva b.b., 10 000 Zagreb, Croatia.
In today's world of global competition, high reliability of technical systems and low life cycle cost are crucial for business profitability. Systematically raising inherent equipment reliability usually reduces operational cost but strongly increases acquisition cost. Consequently, companies are looking for other, financially more satisfactory solutions. The introduction of measures to improve the reliability of technical systems should be handled thoughtfully, to prevent the situation where the cost of the changes exceeds the savings due to those changes. In accordance with the above, the introduction of any form of industrial diagnostics as part of preventive maintenance activities must also be an integral part of the general asset management strategy. All activities should be carefully planned and systematically performed, and a quality assurance mechanism must be introduced. A proposal for the sequence of activities necessary to achieve the expected goals is presented in this work.
Key Words: maintenance, reliability, condition monitoring, P-F interval
1 INTRODUCTION
Maintenance is often the largest controllable operating cost in many industries. It is also a critical business function that impacts plant output, product quality, production cost, safety and environmental performance. For these reasons, best-practice organisations should regard maintenance not simply as a cost to be avoided but, together with reliability engineering, as a high-leverage business function. A challenge for maintenance departments some years ago was to move out of a mostly reactive maintenance cycle; the goal was to increase the productivity and effectiveness of existing personnel and make better use of the time allocated to maintenance. Today's competitive environment demands an increasing level of equipment reliability in most industries, and industry has made great strides in recent years in improving operational reliability. Rather than only increasing acquisition costs to improve inherent equipment reliability, lowering operational and total life cycle costs is clearly a recommended practice. However, investments in increasing operational reliability must be well considered, to avoid exceeding the optimal point beyond which total life cycle costs (LCC) begin to increase needlessly with further attempts to improve reliability. A systematic approach to integrating different condition monitoring techniques, as part of predictive maintenance activities and as an important source of information for setting a successful asset management policy in enterprises, is studied in this paper. Predictive maintenance aims to identify problems in equipment. By identifying problems in their initial stages, the predictive maintenance system gives notice of impending failure, so downtime can be scheduled for the most convenient and inexpensive time. Predictive maintenance therefore minimizes the probability of unexpected failures, which would result in lost production.
Attempts to apply different condition monitoring techniques as a regular part of maintenance activities are often carried out in an unsystematic and ill-considered way, which commonly results in dissatisfaction and a poor cost-effect ratio. When it is introduced systematically, the proactive approach has many advantages:
- Equipment is only repaired when needed, so the costs of maintaining the machinery are reduced, as resources are only used when needed.
- Potential failures are identified in advance, and their severity can be substantially diminished by reducing or preventing secondary damage.
- Inventory costs are reduced, because substantial warning of impending failures is provided. Parts can be ordered when needed, rather than keeping a large stock.
- Using a predictive maintenance program, machines are only dismantled when necessary, so the probability of 'infant mortality' is reduced.
- Predictive maintenance requires data from the plant to be collected, stored and analysed. Consequently, plant equipment efficiency is observed constantly and weak spots are detected.
The list above presents just some of the advantages provided by the systematic integration of machinery condition monitoring. It can be concluded that the maintenance decision-making process can benefit substantially from the information provided by the predictive maintenance approach.
2 THEORY SURVEY
Corrective maintenance activities are performed when action is taken to restore the functional capabilities of failed or malfunctioning equipment. These actions are triggered by the unscheduled event of an equipment failure. With this kind of maintenance, the maintenance-related costs are usually high, for reasons such as secondary damage and safety hazards inflicted by the failure, the high cost of lost production, and restoring equipment under crisis conditions. Preventive maintenance is the approach developed to avoid this kind of cost. Traditionally it is performed in the form of actions at fixed intervals, irrespective of the actual state of the maintained component.
Starting from new, a properly built and installed part will operate at a particular level of performance. As its operating life progresses, degradation occurs. Regardless of the reasons for the degradation, the item eventually can no longer meet its original service requirements and its level of performance falls. In [1], 30 identical ball bearings were exposed to the same loading conditions; Figure 1 shows how heterogeneous a distribution of achieved lifecycles (in 10^6 revolutions) they exhibited.
Figure 1. Random bearing failure.
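The scatter in Figure 1 can be reproduced qualitatively with a Weibull life model, which is commonly used for rolling bearings. The shape and characteristic-life parameters below are invented for illustration, not fitted to the data of [1]:

```python
import random

rng = random.Random(0)
beta, eta = 1.5, 50.0    # Weibull shape; characteristic life in 1e6 rev
# simulate 30 nominally identical bearings under identical load
lives = sorted(rng.weibullvariate(eta, beta) for _ in range(30))

spread = max(lives) / min(lives)
# the shortest and longest observed lives typically differ by far more
# than any measurement error could explain
assert spread > 3
```

This is exactly the point the paper makes: with such a wide intrinsic spread, a single fixed replacement interval is either wastefully early for most bearings or too late for the weakest ones.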
When analysing the behaviour of modern industrial machinery, it has been shown that the majority of failures are not age-related. According to different studies and recognised textbooks like [2], there are six failure probability patterns, as shown in Figure 2: fewer than 25% of failures follow age-related patterns, while more than 75% follow random patterns.
Figure 2. Failure patterns.
The probability of failure thus does not depend on length of use, and there is no reliable procedure to predict the expected equipment life cycle. Consequently, time-based preventive maintenance is often pointless or can even introduce failures while preventive measures are performed. On the other hand, by detecting the loss-in-condition of an item, advance information that degradation has started can be obtained. If this change in performance level can be detected, it provides a means to forecast a coming failure. The condition-based approach is appropriate when the following conditions apply: the failure cannot be prevented by redesign or change of use; the events leading to failure occur in a random manner; measurable parameters that correlate with the onset of failure have been identified; and the selected condition monitoring method is technically feasible and cost effective.
2.1 Determination of the condition monitoring interval
Introduction of condition monitoring into the daily maintenance routine is always exposed to high expectations and must meet different requirements. The CM analyst usually aims for a higher monitoring frequency or even continuous monitoring; however, an increased number of monitoring tasks directly reduces the cost effectiveness of the applied method. In [2] the author introduces the term P-F interval. The degradation process of equipment is represented by a curve with time on the abscissa and resistance to failure on the ordinate (Figure 3). At least two prerequisites should be fulfilled to apply CM to the machinery under consideration: a clear indication of decreased failure resistance, measurable by some CM method, and a consistent warning period prior to functional failure. The point in time at which the equipment has experienced a measurable decrease in resistance to failure is labelled P, the potential failure. The point of functional failure of the monitored equipment is labelled F. The warning period provided by a particular CM method is known as the P-F interval.
Figure 3. P-F interval (resistance to failure versus time, showing the detectable change at the potential failure P, the inspections, the net P-F interval and the functional failure F).

The constraints to account for when deciding on the condition monitoring methods to be used and the inspection intervals to be prescribed are illustrated in Figure 3. A particular CM method must provide a sufficient net P-F interval for the maintenance organisation to react from the moment a potential failure is detected. In the worst case, the previous inspection (with inspection interval t) may have been performed just before point P, so the remaining warning period can be calculated as:
net P-F = (P-F) - t    (1)
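As a minimal illustration of equation (1), the worst-case warning time left after an inspection can be computed as follows; the numeric values below are invented for illustration and are not taken from the paper:

```python
def net_pf_interval(pf_interval_days: float, inspection_interval_days: float) -> float:
    """Worst-case remaining warning time (equation (1)): the potential
    failure emerges just after an inspection, so one full inspection
    interval t is lost from the P-F interval."""
    return pf_interval_days - inspection_interval_days

# A defect with a 90-day P-F interval, inspected every 30 days,
# still leaves at least 60 days to plan and perform the repair.
print(net_pf_interval(90, 30))
```

The inspection interval must therefore always be chosen so that the remaining net P-F interval covers the time the organisation needs to react.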
and it should still be long enough to prepare and perform the maintenance action before functional failure. Let us consider vibration monitoring of an industrial fan, as shown schematically in Figure 4. It represents critical equipment in the 24-hour production process of a steelworks, and it was decided to introduce a cost-effective condition monitoring programme for it. Vibration monitoring was recognised as a technically feasible method for the purpose.
Figure 4. Industrial fan (measuring points 1-4 marked).

Empirical warning times for an average bearing show that, depending on the load, it takes several months for damage to the outer ring to develop and some weeks to months for damage to the inner ring. The most critical is failure of the rolling elements: it can take as little as a couple of weeks for a bearing with a damaged rolling element to fail. For CM to be effective, the monitoring interval must therefore be set shorter than this. In some cases a statistical approach to determining the condition monitoring task frequency can be applied; it requires precise and comprehensive maintenance records to be available for the equipment under consideration. For example, [3] states that for random failures the optimal CM frequency can be calculated using the following formula:
n = ln[ -(MTBF · Ci) / (T · (Cnpm - Cpf) · ln(1 - S)) ] / ln(1 - S)    (2)

Where:
n = number of inspections during the P-F interval
T = P-F interval of a particular CM method
MTBF = mean time between failures
Ci = cost of one inspection task
Cpf = cost of correcting one potential failure
Cnpm = cost of not doing preventive maintenance, including cost of lost production
S = probability of detecting the failure in one inspection
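The formula above (reconstructed here from a garbled rendering of equation (2)) can be sketched in Python as follows; the function name and the example cost figures are illustrative assumptions, not values from the paper:

```python
import math

def optimal_inspections(mtbf, pf_interval, c_inspection, c_pf, c_npm, s):
    """Number of inspections n during the P-F interval, per the
    reconstructed equation (2). Note ln(1 - s) is negative, so the
    leading minus sign keeps the logarithm's argument positive."""
    inner = -(mtbf * c_inspection) / (pf_interval * (c_npm - c_pf) * math.log(1 - s))
    return math.log(inner) / math.log(1 - s)

# Illustrative figures: MTBF 8760 h, P-F interval 720 h, inspection cost 100,
# potential-failure repair 2000, cost of no PM 20000, 90% detection probability.
n = optimal_inspections(8760, 720, 100, 2000, 20000, 0.9)
print(round(n, 2))
```

In practice the result would be rounded up to an integer number of inspections, from which the inspection interval is T / n.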
To use such a formula it is crucial to know the MTBF of the equipment under consideration. It can be calculated from empirical formulae such as those suggested in [4]: a typical failure rate calculation considers the material properties, operating environment and critical failure modes at the component part level to evaluate the expected failure rate. Failure rate values can also be obtained from commercial reliability databases such as [5]. To make sure that the condition monitoring task frequency reflects the actual needs of a particular piece of equipment, it is advisable to use one's own maintenance records to calculate the MTBF. The following formulae can be used to calculate the failure rate of a particular failure mode:
T = Σ(i=1..n) ti    (3)

λ = x / T,  MTBF = 1 / λ

Where:
n = number of identical machines under consideration
ti = working time of individual machine i
x = number of registered failures for the treated failure mode
λ = failure rate
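The calculation in equations (3) can be sketched as follows; the maintenance-record figures are invented for illustration:

```python
def failure_rate(working_times_hours, failures):
    """Equations (3): T is the summed working time of the identical
    machines, lambda = x / T, and MTBF = 1 / lambda."""
    total_time = sum(working_times_hours)  # T
    lam = failures / total_time            # lambda = x / T
    return lam, 1 / lam                    # (failure rate, MTBF)

# Three identical machines with 8000, 7500 and 8200 operating hours
# and 3 registered failures of the treated failure mode.
lam, mtbf = failure_rate([8000, 7500, 8200], 3)
print(lam, mtbf)
```

Using one's own records in this way ties the monitoring frequency to the actual failure behaviour of the plant rather than to generic database values.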
2.2 Selection of a suitable condition monitoring method
Analysis of similar industrial equipment has shown that in most cases there is no adverse age-reliability relationship, because the ages at failure are distributed in such a way that no consistent time between successive failures can be expected. Imposing an arbitrary preventive task at fixed intervals, regardless of the actual equipment condition, can even increase the average failure rate through 'infant mortality'. A variety of methods are available to assess the condition of machinery and to determine the most effective time to schedule and perform maintenance. These techniques should also be used to assess the quality of newly installed or rebuilt equipment. Overall equipment condition can be determined using intrusive or non-intrusive methods. Process parameters such as temperature, pressure, flow, rotational speed and power consumption can also provide valuable information. Vibration monitoring is widely used to assess the condition of rotating components such as fans, gearboxes, pumps and motors. Lubricant analysis and wear particle analysis are used to identify problems with lubricants and to detect increased component wear. Thermography is used to check electrical installations, electric motors, hydraulic systems and other machinery where failures can be detected by a change in the surface temperature distribution. Ultrasonic leak detection can be used to monitor the condition of the compressed air infrastructure, etc. The type of equipment to be monitored and the failure modes to be detected must be defined in order to select a suitable CM method. It must also be checked whether the equipment to be used is suitable for the actual conditions of application (environmental conditions, accessibility, safety requirements, etc.). A deep understanding of the monitored equipment and of the condition monitoring techniques to be used is required to provide a sound basis for carrying out the monitoring activities safely and correctly. Misunderstanding or overvaluation of the applied method can lead to unexpected failures, disappointment and aimless waste of money. A case of an unsuitable CM technique is presented below. A mixer in the rubber industry is powered by two electric motors (1 MW and 1.6 MW respectively) through a four-stage gearbox (Figure 5). RMS velocity sensors and temperature sensors were installed to monitor the condition of the gearbox.
Figure 5. Four-stage gearbox.

Temperature and vibration readings are presented in Table 1.

Table 1. CM readings
Position (Fig. 5) | RMS_1 [mm/s] | RMS_2 [mm/s] | T_1 [°C] | T_2 [°C]
1 | 0.64 | 0.68 | 38 | 37
2 | 0.79 | 0.65 | 40 | 39
3 | 0.60 | 0.75 | 54 | 47
4 | 0.64 | 0.67 | 51 | 50
5 | 0.55 | 0.66 | 53 | 52
6 | 0.58 | 0.57 | 54 | 52
7 | 0.50 | 0.73 | 45 | 44
8 | 0.39 | 0.38 | 38 | 39
9 | 0.51 | 0.41 | 37 | 37
Detailed vibration measurements revealed a rather advanced failure of the NU2240 bearing (position 4). The acceleration envelope spectra and the state of the bearing are shown in Figure 6.
Figure 6. Damaged bearing, position 4.

Readings in Table 1 denoted with index 1 (RMS_1, T_1) correspond to the time when the bearing failure was detected by detailed vibration measurement; readings denoted with index 2 (RMS_2, T_2) were recorded one week later. Within this week the acceleration increased by a factor of 4.4, while the temperature and velocity RMS showed no sign of impending failure. It is obvious that the chosen CM technique was not adequate for the intended purpose.
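The lesson of this case, that a suitable CM parameter must react to the developing failure, can be illustrated with a minimal trending sketch; the readings and the factor-of-2 alarm threshold below are illustrative assumptions, not the paper's data:

```python
def relative_change(previous, current):
    """Factor by which each monitored parameter changed between
    two consecutive readings."""
    return {key: current[key] / previous[key] for key in previous}

# Illustrative readings for one measuring position: only the envelope
# acceleration reacts to the bearing defect, as in the gearbox case.
week_1 = {"rms_velocity": 0.64, "temperature": 51.0, "envelope_acc": 1.0}
week_2 = {"rms_velocity": 0.67, "temperature": 50.0, "envelope_acc": 4.4}

# Flag parameters that increased by more than the (assumed) factor of 2.
alarms = [k for k, f in relative_change(week_1, week_2).items() if f > 2.0]
print(alarms)
```

A trending rule of this kind would have raised an alarm on the envelope acceleration a week before the detailed measurement, while the installed RMS velocity and temperature sensors stayed silent.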
3 SYSTEMATIC APPROACH TO CONDITION BASED MAINTENANCE
The basic principle of condition-based maintenance is to perform measurements that enable the maintenance department to predict which machinery will need maintenance action and when. The introduction of new condition monitoring techniques into the daily maintenance routine is often underestimated and therefore not a very successful project. To avoid aimless waste of money and time, the introduction and integration of different condition monitoring techniques into the general asset maintenance effort must be undertaken systematically. For successful integration of any condition monitoring method it is very important that the benefits of applying such measurements are well explained to the staff: it has to be accepted as a powerful tool for improving maintenance efficiency, not as an additional workload. Introducing changes into traditional practices is always a demanding and time-consuming task. Staff are often uncooperative and difficult to motivate, so goals, advantages, workloads, tasks and responsibilities must be clearly defined before the process can be initiated. It is an evolutionary process; no step change in attitude should be expected. An activity flowchart for the systematic introduction of condition monitoring techniques is suggested in Figure 7. Based on the experience of the authors, the principal steps of such a process should be as follows:
- Condition-based maintenance is recognised as a vital part of the global plant strategy. Expectations regarding equipment reliability and LCC are clearly defined, as are the financial constraints of the project.
- A thorough analysis of equipment failure criticality (with regard to safety, environmental impact and lost production cost) is performed, and a list of critical equipment to be considered for condition monitoring is defined.
- For each piece of equipment to be monitored and each failure mode to be detected, a suitable CM method is defined. Technical feasibility, effectiveness in detecting the target failure mode, cost effectiveness and the availability of human resources should be the criteria for selecting the appropriate CM technique.
- Production and maintenance staff are acquainted with the purpose and importance of CM.
- A quality assurance system is established. Measurements must be carefully planned, detected irregularities must be reported to production management and to the people responsible for corrective actions, and control measurements must be performed after the repair.
- Transparent plant performance monitoring at different levels is developed.
Figure 7. Systematic CM flowchart.
To enable impartial judgement and to follow the effectiveness of the applied maintenance programme, it is necessary to provide the information required to calculate the corresponding performance indicators. The equipment under consideration has to be catalogued, all maintenance events should be systematically registered, and criteria for availability calculation should be defined. A tool enabling transparent plant performance monitoring at different levels should be developed; an example is shown in Figure 8.
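One common definition of the availability indicator mentioned above (uptime divided by total time) can be sketched as follows; the paper does not prescribe a specific formula, so this definition and the figures are assumptions:

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Availability as the fraction of calendar time the equipment
    was able to produce (one common criterion; other definitions
    exclude planned stops)."""
    return uptime_hours / (uptime_hours + downtime_hours)

# Illustrative month: 700 h producing, 20 h of maintenance-related downtime.
print(availability(700, 20))
```

Whatever definition is chosen, it must be fixed in advance so that the indicator remains comparable across equipment and reporting periods.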
Figure 8. Equipment status list (columns: Nr., Plant, Process line, Location, Position, Item, Condition, Suggested measure, Deadline, Taskholder, Technologist, Responsible).

Such a status list provides valuable information and does not require sophisticated and expensive software tools; it can be realised, for example, in a simple Office environment and handled with in-house human resources. Information such as availability, downtime, failure type and the applied corrective measure should also be regularly monitored and systematically registered.
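Such a status list can indeed be produced with very simple tooling. The sketch below writes a semicolon-separated file with columns mirroring those in Figure 8; the row contents are invented placeholders, not data from the paper:

```python
import csv
import io

# Column headings mirrored from the status list in Figure 8.
COLUMNS = ["Nr.", "Plant", "Location", "Position", "Item",
           "Condition", "Suggested measure", "Deadline",
           "Taskholder", "Responsible"]

# Invented example rows: one item in order, one awaiting a corrective measure.
rows = [
    ["298", "Plant_A", "Line_1", "282800", "Fan bearing",
     "OK", "-", "-", "Name_1", "Name_2"],
    ["299", "Plant_A", "Line_1", "712200", "Gearbox",
     "bad", "replace bearing", "2009-10", "Name_1", "Name_2"],
]

# An in-memory buffer stands in for a real file such as status_list.csv.
buffer = io.StringIO()
writer = csv.writer(buffer, delimiter=";")
writer.writerow(COLUMNS)
writer.writerows(rows)
print(buffer.getvalue())
```

A file like this opens directly in any spreadsheet application, which matches the paper's point that a simple Office environment is sufficient.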
4 CONCLUSION
Predictive maintenance can increase the efficiency of the plant production process and improves safe and continued plant operation. By reducing the likelihood of unexpected equipment breakdown, the safety of employees is improved and possible environmental impacts are reduced. When introducing different condition monitoring techniques into daily maintenance routines, it is of crucial importance to do so systematically, as part of a well-prepared asset maintenance strategy with clearly defined goals, time schedule and task holders. The presented work shows how, in a real industrial environment, different condition monitoring techniques can be successfully implemented and integrated into daily maintenance efforts to assure a continuous and predictable production process. A systematic approach to integrating different condition monitoring techniques, both as part of predictive maintenance activities and as an important source of information for setting a successful asset management policy in enterprises, is studied in the paper. Such an approach has proven to be very efficient: the handling of measurements and the processing of measurement results are transparent, and exact information regarding the status of the equipment under consideration is provided to maintenance and production staff. It is beneficial in establishing a proper image of the importance and usefulness of condition-based maintenance to both production and maintenance.
5 REFERENCES
1 Eschmann P. et al. (1985) Ball and Roller Bearings: Theory, Design & Application. John Wiley & Sons.
2 Moubray J. (1995) Reliability-centred Maintenance. Butterworth-Heinemann Ltd.
3 U.S. Department of Defense. MIL-STD-2173: Reliability Centered Maintenance Requirements for Naval Aircraft, Weapons Systems and Support Equipment.
4 NSWC (1998) Handbook of Reliability Prediction Procedures for Mechanical Equipment. Carderock Division.
5 Reliability Analysis Center (1995) Nonelectronic Parts Reliability Data 1995. RAC.
MICROMECHANICS OF WEAR AND ITS APPLICATION TO PREDICT THE SERVICE LIFE OF PNEUMATIC CONVEYING PIPELINES

A.A. Cenna (a), K. Pang (a), K.C. Williams (b) and M.G. Jones (b)

(a) Mechanical Engineering, Faculty of Engineering and Built Environment, University of Newcastle, Callaghan, NSW 2308, Australia.
(b) Centre for Bulk Solids & Particulate Technologies, Faculty of Engineering and Built Environment, University of Newcastle, Callaghan, NSW 2308, Australia.

Pneumatic conveying involves the transportation of a wide variety of dry powdered and granular solids through pipelines and bends using high-pressure gas. It is a frequently used method of material transport, particularly for in-plant transport over relatively short distances, primarily because of the flexibility it offers in terms of pipeline routing as well as dust minimisation. Approximately 80% of industrial systems are traditionally dilute phase systems, which use relatively large amounts of air to achieve the high particle velocities needed to avoid trouble such as blocking the pipeline. However, for many applications higher velocities lead to excessive levels of particle attrition or wear of pipelines, bends and fittings. To combat these problems, there are systems designed to operate in relatively low velocity regimes; yet wear remains a major issue with these conveying systems. In pneumatic conveying, service life is dictated by wear in critical areas of the pipelines and bends due to the high interaction between the particles and the surface. Depending on the conveying conditions or modes of flow, the wear mechanism can be abrasive or erosive or a combination of both. Recent developments in predictive models of wear showed that, by using the particle energy dissipated to the surface and the surface material properties, it is possible to predict the overall material loss from the surface. The material loss can then be converted to the pipeline thickness loss, which can be used to indicate the service life of the pipeline. In this paper the wear mechanisms in the critical wear areas of a pneumatic conveying pipeline have been analysed. Based on the wear mechanisms, predictive models have been selected from the literature. A number of factors have been incorporated to apply the models to pneumatic conveying processes. Conveying tests were performed in the laboratory to determine the time to failure as well as the gradual thickness loss in the bend. Finally, the experimental results have been compared with the model output, and the variations have been analysed for further improvement of the models.

Key Words: Wear, abrasive, erosive, pneumatic conveying.
1 INTRODUCTION
Pneumatic conveying involves the transportation of a wide variety of dry powdered and granular solids through pipelines and bends using high-pressure gas. It is a frequently used method of material transport, particularly for in-plant transport over relatively short distances, primarily because of the flexibility it offers in terms of pipeline routing as well as dust minimisation. Approximately 80% of industrial systems are traditionally dilute phase systems, which use relatively large amounts of air to achieve the high particle velocities needed to avoid trouble such as blocking the pipeline. However, for many applications higher velocities lead to excessive levels of particle attrition or wear of pipelines, bends and fittings. To combat these problems, there are systems designed to operate in relatively low velocity regimes; yet wear remains a major issue with these conveying systems. Wear is surface damage that generally involves progressive material loss due to relative motion between the surface and the contacting substance or substances. In general the broad classification is based on the primary interaction between the surface and the contacting substance(s): abrasive wear and erosive wear. Based on the interactions between the surface and the erodent, the material removal mechanism can be defined as cutting or deformation [1]. Although material can be removed from the surface by a single impact through cutting, material removal through deformation may involve multiple impacts as well as a secondary process such as fatigue.
1.1 Abrasive and Erosive Wear
Abrasive wear occurs when the abrasive material stays in contact with the surface during the wear event. It is further categorised according to the type of contact as well as the contact environment. The contact can be two-body (Figure 1(a)), where the abrasive slides along the surface and acts like a cutting tool. In the case of three-body abrasion (Figure 1(b)), the abrasives are trapped between two surfaces and are free to roll or slide. In both cases it is possible to remove material through cutting and deformation.
Figure 1: Illustration of basic wear mechanisms: a) two-body abrasive wear, b) three-body abrasive wear and c) erosive wear.

The removal of material from a solid surface by the action of impinging solid or liquid particles is known as erosion. The primary difference between erosion and abrasion is the contact duration of the particles with the surface. In the case of erosion, particles impact the surface with a specific velocity and at a certain angle to the surface. Depending on the particle impact velocity and impact angle, three things can happen: the particle can leave the surface with a residual velocity, the particle can stop while cutting (and be embedded into the surface), or the particle can deform the surface without any material removal. As a result, the material removal mechanism can vary significantly depending on the particle and surface characteristics as well as the impact parameters. As in abrasive wear, material can be removed from the surface through cutting and deformation mechanisms. As mentioned earlier, removal of material occurs mainly by two different mechanisms: (a) the cutting action of the free-moving particles, and (b) deformation due to repeated collisions of particles with the surface, eventually breaking loose a piece of material. In practice these two types of material degradation occur simultaneously. In hard and brittle materials, cutting wear is negligibly small compared to deformation wear, whereas for soft and ductile materials cutting is the primary material removal mechanism. Micro-cracking is the primary mechanism in brittle materials: material is removed by subsurface lateral cracks spreading parallel to the surface, meeting the longitudinal cracks or cracks rising to the surface.
Plastic deformation of the surface due to particle interactions generates a scale-like topography on the surface of ductile materials which is harder than the substrate. Due to fluctuations of pressure on the surface, this harder layer can be delaminated and removed through cracks and crack propagation [2]; micro-fatigue is a major contributor to this mechanism. If the particles are free to roll and slide on the surface, the surface can be subjected to a loading/unloading cycle due to the rolling contact of the particles. This causes micro-fatigue of the material, which can subsequently generate randomly shaped wear areas similar to brittle fracture of the wall material. Material removal by these mechanisms can be considerably greater than by cutting and deformation [3]. In pneumatic conveying, granular materials are transported through pipelines and bends. Particle interactions in the bends and in the re-acceleration zone of the pipeline depend primarily on the solids loading ratio (the ratio of the mass of solids to the mass of air). A detailed study of the flow structures in the bends as well as in the pipeline after the bends can be found in [4]. It was observed that for higher solids loading ratios particles tend to accumulate in the bends and three-body abrasive wear becomes predominant after the impact, whereas for lower solids loading ratios erosive wear is the dominant mechanism. Mills and Mason [5] have conducted numerous investigations of wear behaviour in pneumatic conveying systems and have found that in severe wear situations, where a hole is formed in the pipeline, the majority of the material loss is concentrated in the area where the hole has formed. Mills also found that the highest wear rates, defined by total material loss, did not coincide with the tests in which holing of the pipe occurred. This suggests that in the case of severe wear the flow profile is not uniform across the pipe cross-section.
1.2 Analysis of Wear Mechanisms in an Industrial Pipeline
Understanding the wear mechanisms responsible for material removal in these areas is essential for the development of a predictive model for wear in pneumatic conveying. For a better understanding of the wear mechanisms in an industrial pneumatic conveying pipeline, worn sections have been analysed visually as well as using the Scanning Electron Microscope (SEM). Visual observations of the samples from the pneumatic conveying of alumina showed that there were specific areas where severe wear and holing of the pipe occurred: primarily the bends and the straight sections immediately after the bends. The critical wear areas of a pneumatic conveying pipeline have been discussed in detail in [4]. Critical wear areas after the bends are usually characterised by longitudinal channelling on the surface, consistent with sliding wear. The continuous flow of material over these surfaces clearly produces three-body abrasive wear patterns.
Figure 2. a) Representative wear section from the pneumatic conveying pipeline; a backing plate had been used after the first appearance of a hole, and the image shows the pipeline without the backing plate to reveal the extent of wear. b) Samples for surface analysis of the wear mechanisms using SEM.

The wear section in Figure 2a is representative of the severe wear areas of the pipeline after a bend. This section was chosen for a detailed analysis to determine the dominant wear mechanisms in the pneumatic conveying pipeline. Samples were cut from the pipe sections for further analysis using SEM, as shown in Figure 2b; the backs of the samples were machined flat for mounting purposes. Observations indicated that a continuous flow of concentrated particles created long wear grooves on the pipe wall. These grooves narrowed with increasing material loss from the surface, eventually creating holes in the pipe wall. Surface analysis using SEM showed wear patterns consistent with mechanisms such as cutting and deformation (Figure 3a). Besides cutting and deformation, lateral and longitudinal cracks were also revealed by the SEM surface analysis. Although crack formation and material removal through cracking are the primary mechanisms in brittle materials, cracks formed in these samples because of the severe alteration of the surface characteristics. The formation of cracks and material removal through brittle characteristics has been discussed in detail in [2].
Figure 3: Surface characteristics in the high-wear areas of the pipeline: a) surface ripples, characteristic of deformation wear, b) lateral and longitudinal cracks, presumably due to the pressure fluctuations in the pipeline.
Ripple formation on the wear surface is a well-known characteristic of wear surfaces in ductile materials, especially for spherical erodents. The formation of ripples has been well documented by many researchers [6]. Ripples are formed when the rate of material removal due to cutting is less than the rate of material removal through deformation. Talia et al. [6] showed that angular particles are more efficient in removing material from the surface and, as a result, may not produce ripples as is the case with spherical particles. The formation of ripples with highly angular particles like alumina and ilmenite has been demonstrated by Pang et al. [7]. Figure 3b shows the lateral and longitudinal cracks on the mild steel surface of a pipeline conveying alumina. This is one of the very specific wear characteristics of mild steel pipelines conveying alumina. In the conveying process, fine alumina particles can be embedded into the mild steel surface around the impact areas. Due to the sintering capability of alumina, more alumina particles are sintered onto the surface to generate a thin hard layer, the so-called alumina transfer film. Due to the fluctuation of pressure as well as the fluctuation of the material impinging on the surface, the hard layer becomes delaminated from the substrate [8]. When the traction force of the sliding material becomes larger than the adhesion of the surface layer to the substrate, the layer starts to peel off the surface in segments. Although the hard coating tends to protect the softer substrate from cutting and deformation wear, delamination and subsequent cracking of the surface layer increase the material loss dramatically, by a factor of roughly 2-4 [9].

2 DEVELOPMENT OF PREDICTIVE MODEL FOR WEAR IN PNEUMATIC CONVEYING PIPELINES
From the analysis of the material removal processes it is clear that one material removal mechanism cannot apply to all materials. In ductile materials erosion occurs by a process of plastic deformation in which material is removed by the displacing or cutting action of the eroding particles. In brittle materials, material is removed by the intersection of cracks which radiate out from the point of impact of the eroding particle. Finnie [10] divided the erosion problem into two major parts. The first part involves determining, from the fluid flow conditions, the number, direction and velocity of the particles striking the surface; this is basically a problem of fluid mechanics. The second part is the calculation of the material removed from the surface. During erosion of a ductile material a large number of abrasive particles strike the surface; some of these land on flat faces and do no cutting, while others cut into the surface and remove material. Finnie developed a model for material removal by particles which displace or cut material away from the surface. An idealised picture of the particle interaction with a material surface is presented in Figure 4.
Figure 4: Idealised picture of an abrasive particle striking a surface and removing material. The initial velocity of the particle's centre of gravity makes an angle α with the surface.

Finnie [10] derived and solved the equations of motion of the idealised particle and compared the predicted material loss with experimental results. To solve the equations of motion of the particle the following assumptions were made:
1) the ratio of the vertical and horizontal components of the force has a constant value K, which is reasonable if a geometrically similar configuration is maintained throughout the period of cutting;
2) the ratio of the depth of contact l to the depth of cut has a constant value ψ;
3) the particle cutting face is of uniform width, which is large compared to the cutting depth; and
4) a constant plastic flow stress p is reached immediately upon impact.
Based on these assumptions, the first micro-cutting model was developed from the deformation caused by an individual particle. The volume of material W removed by a single abrasive grain of mass m, velocity V and impact angle α is given by

W = (m V^2 / (p ψ K)) [ sin 2α - (6/K) sin^2 α ]   for tan α ≤ K/6    (1)

W = (m V^2 / (p ψ K)) (K cos^2 α / 6)   for tan α ≥ K/6    (2)
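Finnie's two-regime model above can be sketched as a small function. The values ψ = 2 and K = 2 are commonly quoted defaults and, like the example inputs, are assumptions here rather than values from this paper:

```python
import math

def finnie_volume_loss(m, v, alpha, p, psi=2.0, k=2.0):
    """Volume removed by a single abrasive grain per Finnie's model,
    equations (1)-(2). alpha is in radians, p is the plastic flow
    stress, psi the contact-to-cut depth ratio, K the force ratio."""
    prefactor = m * v**2 / (p * psi * k)
    if math.tan(alpha) <= k / 6:
        # Low-angle regime: the particle leaves the surface still cutting.
        return prefactor * (math.sin(2 * alpha) - (6 / k) * math.sin(alpha)**2)
    # High-angle regime: horizontal motion ceases while still cutting.
    return prefactor * (k * math.cos(alpha)**2 / 6)

# Illustrative grain: 1 mg at 10 m/s on a surface with p = 1 GPa.
print(finnie_volume_loss(1e-6, 10.0, math.radians(15), 1e9))
```

A quick consistency check is that the two branches agree at the crossover angle tan α = K/6, as the model requires.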
These two expressions predict the same weight loss when tan 2a = K/6 and the maximum erosion occurs at slightly lower angle given by tan 2a=K/3. The first equation applies to lower impact angles for which the particle leaves the surface while still cutting. The second equation applies to higher impact angles in which the horizontal component of the particle motion ceases while still cutting. The critical angle ac is the impact angle at which the horizontal velocity component has just become zero when the particle leaves the body; i.e., the impact angle above which the residual tangential speed of the particle equals zero. Based on the understanding of the material removal processes in erosion, Neilson and Gilchrist [11] proposed a simplified model for the erosion of material. They assumed the cutting wear factor f (kinetic energy needed to release unit mass of material from the surface through cutting) and deformation factor e (kinetic energy needed to release unit mass of material from the surface through deformation) and proposed the relationship for erosive wear loss based on the material and process parameters as:
W = (1/2) M (V^2 cos^2 α − v_p^2) / f  +  (1/2) M (V sin α − K)^2 / e    for α < α₀    (3)
            (A)                                   (B)

W = (1/2) M V^2 cos^2 α / f  +  (1/2) M (V sin α − K)^2 / e              for α > α₀    (4)
            (C)                          (B)
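As an illustrative sketch of Eqs. (3)-(4) (not the authors' implementation), the cutting and deformation terms can be evaluated as below; switching between the small-angle branch (A) and large-angle branch (C) on whether the residual parallel velocity v_p is still positive is an assumption made here for the example:

```python
import math

def neilson_gilchrist_erosion(M, V, alpha, f, e, K, v_p):
    """Erosion W per Eqs. (3)-(4): a cutting term plus a deformation term.
    M: particle mass, V: speed, alpha: impact angle (rad), f: cutting wear
    factor, e: deformation factor, K: threshold normal velocity component,
    v_p: residual parallel velocity at small angles of attack."""
    vn = max(V * math.sin(alpha) - K, 0.0)        # no deformation wear below the threshold K
    deformation = 0.5 * M * vn**2 / e             # part (B)
    if v_p > 0.0:
        # Small angles: some parallel kinetic energy leaves with the particle (part A).
        cutting = 0.5 * M * (V**2 * math.cos(alpha)**2 - v_p**2) / f
    else:
        # Large angles: all parallel kinetic energy is consumed in cutting (part C).
        cutting = 0.5 * M * V**2 * math.cos(alpha)**2 / f
    return cutting + deformation
```

With v_p = 0 the two branches coincide, matching the statement that both equations predict the same erosion at α₀.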
where W is the erosion value and M is the mass of particles striking at angle α with velocity V. K is the velocity component normal to the surface below which no erosion takes place in certain materials, and v_p is the residual parallel component of particle velocity at small angles of attack. Part (B) accounts for deformation wear, while parts (A) and (C) account for cutting wear at small and large angles of attack respectively. α₀ is the angle at which v_p is zero, so that at this angle both equations predict the same erosion. Tests on ductile materials at constant particle velocity show that, as the angle of attack is increased from zero, the erosion initially increases rapidly, but at larger angles the rate of increase falls. With this observation the wear equation for a

(6,4). The rising trend in the signal could be found around point no. 72. The trend is presented in Fig. 16.
Fig. 13. ECF on the local plane after 600 data blocks (3600s).
Fig. 14. ECF on the local plane after 650 data blocks (3900s).
Fig. 15. Time history of contacts (3,1)->(6,4) with visible rising trend marking cracked pinion tooth.
5 CONCLUSIONS
The local meshing plane (ni, mj) allows the quality of contact of the single teeth pairs in the gearbox to be determined, thus allowing precise location of gear faults. Representing the base-pitch part of the path of contact as a K-element time vector allows observation of a family of K-dimensional random variables indexed by the numbers of the mating teeth (i, j). For such pre-processed signals different types of analysis can be adopted, including trend analysis of the instantaneous energy density for each teeth pair, analysis of trend changes of the probability density functions for each teeth pair, and methods of chaos analysis.
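The indexing of the teeth-pair family can be sketched as follows; the tooth counts used are hypothetical, and tooth numbering starts at 1 so that labels such as (3,1) or (6,4) have the same form as in the figures:

```python
from math import gcd

def mating_pairs(z1, z2):
    """Sequence of mating teeth pairs (i, j) for a pinion with z1 teeth and a
    gear with z2 teeth.  The same pair recurs only after lcm(z1, z2) meshes,
    which is why vibration signals can be indexed by the pair (i, j)."""
    n_mesh = z1 * z2 // gcd(z1, z2)               # length of the full contact cycle
    return [(n % z1 + 1, n % z2 + 1) for n in range(n_mesh)]

pairs = mating_pairs(6, 4)   # hypothetical 6- and 4-tooth wheels: 12 distinct meshes
```

Each pair then indexes its own K-element time vector, on which per-pair trend analysis can be performed.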
Moreover, the calculations can be done on-line provided that there is enough time between acquiring the data blocks. For the chosen 6 s data blocks the analysis was performed on-line on a typical industrial DAQ PXI computer. The described local-plane method can be a useful complement to currently used machinery quality acceptance procedures. The proposed method is non-invasive and requires relatively simple equipment. It allows an assembled gear to be investigated while working in its natural conditions. It is possible to detect manufacturing and assembly errors (such as gear-shaft misalignment, misalignment of bearing mountings, horizontal misalignment, pitch error, distance error and vertical misalignment) and to follow the growth of their effects during exploitation. Observation on the local plane allows tooth contact to be observed during normal work and the worst pinion-gear tooth contact to be selected in terms of dynamic overload, the factor that is critical for determining the durability of the gear. Additionally, the local plane allows detection of fatigue damage that occurs in the gears during exploitation.
Fig. 16. Rising squared-envelope acceleration trend for point 12 of contact (5,3) for all 3000 matings of this pair.
6 REFERENCES

1 Allianz. (1984) Handbuch der Schadensverhütung. Allianz Versicherungs AG.
2 Radkowski S., Zawisza M. (2004) Use of vibroacoustic signal for evaluation of fatigue-related damage of toothed gears. The 17th International Congress & Exhibition on Condition Monitoring and Diagnostic Engineering Management, COMADEM 2004.
3 Mączak J. (1998) The use of modulation phenomena of vibroacoustic signal in helical gear diagnosis. PhD Dissertation, Warsaw University of Technology (in Polish).
4 Zakrajsek J.J., Townsend D.P., Lewicki D.G., Decker H.J., Handschuh R.F. (1995) Transmission diagnostic research at NASA Lewis Research Center. NASA Technical Memorandum 106901.
5 Decker H.J. (2002) Crack detection for aerospace quality spur gears. NASA TM-2002-211492, ARL-TR-2682.
6 Randall R.B. (1982) A new method of modelling gear faults. Journal of Mechanical Design, 104, 259-267.
7 McFadden P.D. (1988) Determining the location of a fatigue crack in a gear from the phase of the change in the meshing vibration. Mechanical Systems and Signal Processing, 2(4), 403-409.
8 McFadden P.D., Smith J.D. (1985) A signal processing technique for detecting local defects in a gear from the signal average of vibration. Proceedings of the Institution of Mechanical Engineers, 199(C4), 287-292.
9 Loutridis S.J. (2004) A local energy density methodology for monitoring the evolution of gear faults. NDT and E International, 37(6), 447-453.
10 Loutridis S.J. (2006) Instantaneous energy density as a feature for gear fault detection. Mechanical Systems and Signal Processing, 20, 1239-1253.
11 Mączak J., Radkowski S. (2002) Use of envelope contact factor in fatigue crack diagnosis of helical gears. Machine Dynamics Problems, 26, 115-122.
12 Mączak J. (2003) On a certain method of using local measures of fatigue-related damage of teeth in a toothed gear. COMADEM.
13 Mączak J. (2005) A method of detection of local disturbances in dynamic response of diagnosed machine element. Condition Monitoring, Cambridge.
14 Mączak J. (2009) Evolution of the instantaneous distribution of energy density on a local meshing plane as the measure of gear failures. The 8th International Conference on Reliability, Maintainability and Safety, Chengdu, China (in print).
15 Bonnardot F., El Badaoui M., Randall R.B., Daniere J., Guillet F. (2005) Use of the acceleration signal of a gearbox in order to perform angular resampling (with limited speed fluctuation). Mechanical Systems and Signal Processing, 19, 766-785.
Acknowledgments The author gratefully acknowledges the financial support of the Polish Ministry of Science and Higher Education (scientific project for the years 2008-2010).
Proceedings of the 4th World Congress on Engineering Asset Management Athens, Greece 28 - 30 September 2009
SUPPORT VECTOR MACHINE AND DISCRETE WAVELET TRANSFORM METHOD FOR STRIP RUPTURE DETECTION BASED ON TRANSIENT CURRENT SIGNAL
S.W. Yang a, A. Widodo b, W. Caesarendra a, J.S. Oh a, M.C. Shim a, S.J. Kim a, B.S. Yang a,* and W.H. Lee c
a School of Mechanical Engineering, Pukyong National University, San 100, Yongdang-dong, Nam-gu, Busan 608-739, South Korea.
b Mechanical Engineering Department, Diponegoro University, Tembalang, Semarang 50275, Indonesia.
c POSCO Technical Research Laboratories, No. 1, Goidong-dong, Nam-gu, Pohang, Kyeongsangbuk-do.
This paper proposes a fault diagnosis method for a six-high cold rolling mill consisting of five stands, to assess normal and fault conditions. The proposed method addresses strip rupture fault diagnosis based on the transient current signal. Firstly, a signal smoothing technique is applied to highlight the fundamental of the transient signal in the normal and fault conditions. The smoothed signal is then subtracted from the original signal to transform the original data into useful data for further analysis. Next, the discrete wavelet transform (DWT) is applied to obtain the detail signal. Features are then calculated from the detail signal of the DWT and extracted using principal component analysis (PCA) and kernel principal component analysis (KPCA) for dimensionality reduction. Finally, using a support vector machine (SVM) for classification, the results for stand 5 show clearer class separation than those of the other stands. Key Words: Cold rolling mill, Strip rupture, Transient analysis, Wavelet transform, Support vector machine

1 INTRODUCTION
To increase productivity at low maintenance cost in the cold rolling mill industry, the quality of the steel strip is a first priority. The quality of rolling mill products is determined by the uniformity of the movement of the work rolls in contact with the strip. As the roll speed increases, the current signal becomes the preferred object of analysis rather than the steady-state signal; it carries information related to speed, vibration, force and thickness deviation. Several types of damage occur in cold rolling mills owing to the high roll speed at which the steel is flattened to the desired thickness [1]. In this paper, strip rupture is considered as one of the most frequently occurring types of steel strip damage.
Figure 1 Six-high cold rolling mill
The six-high cold rolling mill is shown in Figure 1. A fault diagnosis method for it, using SVM and DWT based on the transient current signal, is presented in this study. Previous work has discussed the use of the wavelet transform and SVM for induction machines based on transient signals [2]; other articles on the application of wavelets to transient signals have also been published [3, 4]. In this work, a signal smoothing technique is first applied to highlight the non-stationary fundamental of the transient signal in the normal and fault conditions. The smoothed signal is then subtracted from the original signal to obtain the useful signal for further analysis. Next, the DWT is applied to obtain the detail signal. Nine features are then calculated using time domain, frequency domain and entropy domain formulas. To reduce the dimensionality and extract the useful features, PCA and kernel PCA are utilized. Finally, using SVM for classification, the results for stand 5 are more clearly classified than those of the other stands.
2 BACKGROUND KNOWLEDGE
2.1 Wavelet transform
The wavelet is known as a good tool for the analysis of non-stationary signals containing transients [5]. The wavelet transform decomposes the signal of interest into a linear combination of time-scale units. The decomposition is performed by translating and scaling the mother wavelet (or wavelet basis function), which changes the scale and shows the transition of each frequency component [6]. There are many basis wavelet functions, each with its own characteristics; the most commonly used for the discrete wavelet transform is the Daubechies family, whose basis function is shown in Figure 2. The basis functions of a wavelet system are the scaling function φ(t) and the wavelet function ψ(t), which can be derived from a single scaling or wavelet function by scaling and translation. The scaled and translated scaling function φ_{j,k}(t) is defined by the following equation:
φ_{j,k}(t) = 2^{j/2} φ(2^j t − k)
(1)
where j is the log2 of the scale and 2^{−j} k represents the translation in time. The wavelet function is then given by
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k)
(2)
Figure 2 Daubechies basic function
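One level of the decomposition in Eqs. (1)-(2) can be sketched with the simplest Daubechies basis, db1 (the Haar wavelet); this is an illustrative sketch, not the paper's implementation:

```python
import math

def haar_dwt(x):
    """One decomposition level of the DWT with the db1 (Haar) basis:
    returns (approximation, detail) coefficient lists.
    An even-length input is assumed for simplicity."""
    s = math.sqrt(2.0)
    approx = [(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)]
    detail = [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)]
    return approx, detail
```

Applying the transform repeatedly to the approximation coefficients yields the successive detail levels (d1, d2, ...) of the kind shown later in Figure 7.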
2.2 Feature calculation and feature extraction
Feature calculation
A feature, in this study, is defined as representative information about the machine condition [7]. For rotating machinery, information about the current condition can be obtained by applying appropriate feature calculation formulas; time domain, frequency domain and entropy domain features are considered here. Some of the feature calculation formulas are listed as follows:

Mean: x̄ = (x_1 + x_2 + ... + x_n)/n = (1/n) Σ_{i=1}^{n} x_i,  Root mean square: x_rms = √( (1/n) Σ_{i=1}^{n} x_i^2 ),
Kurtosis: β_2 = ( (1/n) Σ_{i=1}^{n} x_i^4 ) / x_rms^4,  Crest factor: CF = x_p / x_rms    (3)
where x_i is the i-th time-history data point, n is the number of data points, x_p is the peak value and x_rms the root mean square value.
Frequency center: FC = ( ∫_0^∞ f s(f) df ) / ( ∫_0^∞ s(f) df ),
Root variance frequency: RVF = √( ( ∫_0^∞ (f − FC)^2 s(f) df ) / ( ∫_0^∞ s(f) df ) )    (4)

where s(f) is the signal power spectrum.

Entropy estimation: H(x) = −∫ p(x) ln p(x) dx    (5)
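The time-domain features of Eq. (3) can be sketched directly; note the kurtosis here follows the reconstructed formula (raw fourth moment normalised by the fourth power of the RMS), and other normalisations exist:

```python
import math

def time_features(x):
    """Time-domain features of Eq. (3) for a sample list x:
    returns (mean, rms, kurtosis, crest factor)."""
    n = len(x)
    mean = sum(x) / n                                   # x_bar
    rms = math.sqrt(sum(v * v for v in x) / n)          # x_rms
    kurtosis = (sum(v**4 for v in x) / n) / rms**4      # beta_2
    crest = max(abs(v) for v in x) / rms                # CF = x_p / x_rms
    return mean, rms, kurtosis, crest
```

In the paper these features are computed per stand from the DWT detail signal; here any numeric list will do.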
Feature extraction
The high dimensionality of the feature set can decrease the accuracy of the classification process for fault diagnosis. A feature extraction method is therefore needed to reduce the dimensionality (Figure 3). Feature extraction obtains new features by transformation and combination of the calculated features [8]. In this paper, linear and nonlinear feature extraction are performed using PCA and KPCA, respectively.
Figure 3 Feature extraction
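The linear (PCA) step can be sketched as follows; this minimal power-iteration version finds only the first principal direction and is a teaching sketch, where a library PCA implementation would normally be used:

```python
def first_principal_component(data, iters=200):
    """First principal component of a list of equal-length feature vectors,
    found by power iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix C = X^T X / n of the centred data
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

Projecting each feature vector onto this direction gives the first extracted feature; KPCA replaces the covariance matrix with a kernel matrix.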
2.3 Support vector machine
SVM is a supervised classification method based on statistical learning theory. In SVM, the original input space is mapped onto a high-dimensional dot-product space called the feature space, in which the optimal hyperplane is determined so as to maximize the generalization ability of the classifier. The maximal-margin hyperplane is found by exploiting optimization theory, respecting the insight provided by statistical learning theory. Given input data x_i (i = 1, 2, ..., M), M is the number of samples. The samples are assumed to belong to two classes, a positive class and a negative class, with associated labels y_i = 1 for the positive class and y_i = −1 for the negative class. In the case of linearly separable data, it is possible to determine a hyperplane f(x) = 0 that separates the given data:
respectively. In the case of linearly data, it is possible to determine the hyperplane f (x ) = 0 that separates the given data M
f ( x ) = wT x + b = ∑ wi xi + b = 0,
(6)
i =1
where w is an M-dimensional vector and b is a scalar; together they define the position of the separating hyperplane. The decision is made using sign f(x), so that the separating hyperplane classifies input data into either the positive or the negative class. A distinctly separating hyperplane should satisfy the constraints
f(x_i) ≥ 1    if y_i = 1
f(x_i) ≤ −1   if y_i = −1    (7)

or, in combined form,

y_i f(x_i) = y_i (w^T x_i + b) ≥ 1    for i = 1, 2, ..., M    (8)
A detailed presentation of SVM theory can be found in Ref. [9].
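The decision rule of Eq. (6) can be sketched as below; the weight vector w and bias b would come from training (e.g. solving the margin optimization), and here they are simply assumed given:

```python
def svm_decide(w, b, x):
    """Sign of the linear decision function f(x) = w.x + b (Eq. (6)):
    returns +1 for the positive class, -1 for the negative class."""
    f = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if f >= 0 else -1
```

An RBF-kernel SVM, as used for Tables 2-4, replaces the dot product w.x with kernel evaluations against the support vectors.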
3 RESULTS AND DISCUSSION
3.1 Signal preparation for wavelet analysis
The flowchart of the proposed method is illustrated in Figure 4. Figure 5 shows the original transient current signal; the motor current signal was acquired over a one-minute sampling period. A signal smoothing technique is first applied to the original motor current signal to reduce the effect of the line frequency and pick out the transient signal more easily. To obtain the useful signal, also called the residual signal, required by the wavelet transform, the smoothed signal is subtracted from the original signal (Figure 6). The wavelet transform results for db1~db7 are shown in Figure 7. From the wavelet result it is still difficult to identify the normal and fault conditions, even though details up to level 7 were applied.
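The smoothing-and-subtraction step can be sketched as follows; the moving-average smoother and its window length are illustrative choices, not the paper's exact settings:

```python
def moving_average(x, window):
    """Centred moving-average smoothing; edge samples use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def residual_signal(x, window=5):
    """Residual signal used as the wavelet input: original minus smoothed."""
    return [xi - si for xi, si in zip(x, moving_average(x, window))]
```

The residual suppresses the slowly varying component (including the line-frequency envelope) so that the transient content dominates the wavelet input.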
Figure 4 Flowchart of the proposed method
Figure 5 Original transient current signal: (a) normal condition, (b) faulty condition
Figure 6 Residual current signal: (a) normal condition, (b) faulty condition
Figure 7 Wavelet transform of the residual signal: (a) normal condition, (b) faulty condition
3.2 Feature calculation and extraction
To obtain useful information from the wavelet result, nine features are calculated from the data, as listed in Table 1. The three best features, namely RMS, crest factor and root variance frequency, are selected manually based on the maximum separation distance of each feature. Figure 8 shows the feature calculation result using the three best features; blue circles and red triangles denote normal and fault data, respectively. The total number of data sets is 22, consisting of 11 normal and 11 fault sets. The result shows that stand 5 clusters better than the other stands. As shown in Figure 8, the clustering of the calculated features is not yet satisfactory, because some normal features overlap the fault feature region. Therefore, feature extraction using PCA and KPCA is employed to obtain the best features, called the best principal components. The feature extraction results are plotted in Figure 9; blue circles and red stars denote normal and fault data, respectively. As shown in Figure 9, PCA and KPCA have similar performance, and the normal and fault features are still not clearly separated.

Table 1 Selected features

Domain      Feature
Time        Mean, RMS, Kurtosis, Crest factor
Frequency   Frequency center, Root variance frequency, Root mean square frequency
Entropy     Entropy estimation, Entropy estimation error
3.3 Feature classification
The classification results of the SVM are presented in Tables 2-4. The total number of training data sets is 14 and of testing data sets is 8. The classification test was executed ten times, and the top three runs with the best accuracy are reported. The classification result using SVM with the calculated features is presented in Table 2: the best performance is achieved on stand 5, with an accuracy of 87.5%, which means that stand 5 carries the information needed to classify normal and fault conditions. Table 3 shows the classification result using SVM with PCA feature extraction. The best classification is again achieved on stand 5, with an accuracy of 87.5%, identical to that obtained with SVM and the calculated features. The last method is SVM with KPCA feature extraction; Table 4 shows its results. The best result is again on stand 5, but it appeared only once in ten runs. Considering how often the best accuracy was achieved, this result is worse than the previous two. Compared with SVM and PCA, SVM with KPCA performed worse owing to an improper kernel parameter: in this work a kernel parameter of γ = 1 was used, and in Table 4 the proper parameters for the RBF kernel function are C = 128 and γ = 1.
Figure 8 Feature calculation (Stand 1~5)
Figure 9 Feature extraction using PCA and KPCA

Table 2 Classification results using SVM and feature calculation
(C and γ are the RBF kernel parameters)

No. of stand   Accuracy (%)   Number of SVs   CPU times (s)   C      γ
Stand 1        50.0           12              0.00177         512    2
               50.0           11              0.00151         4096   4
               50.0           11              0.00140         512    2
Stand 2        62.5           12              0.00145         64     0.25
               62.5           12              0.00162         64     0.25
               62.5           12              0.00147         64     2
Stand 3        75.0           10              0.00230         8192   16
               75.0           11              0.00164         64     8
               75.0           12              0.00119         1024   8
Stand 4        62.5           12              0.00173         4096   8
               62.5           12              0.00161         256    128
               50.0           11              0.00164         512    2
Stand 5        87.5           9               0.00156         512    16
               87.5           12              0.00172         256    4
               87.5           12              0.00167         128    8
Table 3 Classification results using SVM and PCA feature extraction
(C and γ are the RBF kernel parameters)

No. of stand   Accuracy (%)   Number of SVs   CPU times (s)   C      γ
Stand 1        50.0           11              0.00148         1024   2
               50.0           11              0.00238         1024   2
               50.0           11              0.00154         4096   8
Stand 2        62.5           12              0.00103         64     1
               62.5           11              0.00131         64     0.25
               62.5           12              0.00142         64     0.25
Stand 3        75.0           10              0.00177         64     8
               75.0           12              0.00155         64     2
               75.0           10              0.00186         64     0.5
Stand 4        62.5           12              0.00153         256    32
               62.5           12              0.00114         512    4
               50.0           12              0.00193         256    64
Stand 5        87.5           10              0.00144         512    16
               87.5           9               0.00150         64     2
               75.0           12              0.00166         64     2
Table 4 Classification results using SVM and KPCA feature extraction
(C and γ are the RBF kernel parameters)

No. of stand   Accuracy (%)   Number of SVs   CPU times (s)   C      γ
Stand 1        50.0           11              0.00161         128    0.5
               50.0           12              0.00177         128    4
               50.0           12              0.00183         1024   8
Stand 2        62.5           12              0.00150         64     2
               62.5           12              0.00264         64     2
               62.5           12              0.00160         256    2
Stand 3        75.0           12              0.00128         128    8
               75.0           12              0.00195         64     0.5
               50.0           11              0.00156         512    2
Stand 4        50.0           12              0.00134         512    2
               50.0           12              0.00164         512    2
               50.0           12              0.00183         256    128
Stand 5        87.5           12              0.00190         128    1
               75.0           12              0.00271         64     4
               62.5           12              0.00135         64     4
4 CONCLUSIONS
A fault diagnosis method for a six-high cold rolling mill consisting of five stands has been studied. Since the transient current signal contains a non-stationary fundamental, this component needs to be removed before classification. In this work, a smoothing process and the DWT are used to highlight the fundamental of the transient signal in the normal and fault conditions. Feature calculation and extraction using component analysis are performed, and the classification of normal and fault conditions is identified more clearly on stand 5 than on the other stands. This means that faults can be detected at an early stage if stand 5 is monitored first. The detailed classification of normal and fault conditions still needs to be improved to achieve acceptable results over varying rolling
conditions. To obtain the best classification between normal and fault conditions, appropriate preprocessing and classification methods for the transient signal are needed.
5 REFERENCES

1 Mackel J. (1999) Condition monitoring and diagnostic engineering for rolling mills. International Congress of COMADEM.
2 Widodo A. and Yang B.S. (2008) Wavelet support vector machine for induction machine fault diagnosis based on transient current signal. Expert Systems with Applications, 35(1-2), 307-316.
3 Douglas H., Pillay P. and Ziarani A. (2004) A new algorithm for transient motor signature analysis using wavelets. IEEE Transactions on Industry Applications, 40(5), 1361-1368.
4 Douglas H., Pillay P. and Ziarani A. (2004) The impact of wavelet selection on transient motor current signature analysis. IEEE International Conference on Electric Machines and Drives, 1361-1368.
5 Burrus C.S., Gopinath R.A. and Guo H. (1998) Introduction to Wavelets and Wavelet Transforms: A Primer. Prentice-Hall, Englewood Cliffs, NJ.
6 Daubechies I. (1992) Ten Lectures on Wavelets. SIAM, Philadelphia, PA.
7 Hwang W.W. (2004) Condition classification and fault diagnosis of rotating machine using support vector machine. Master's thesis, Pukyong National University.
8 Han T. (2005) Development of a feature-based fault diagnostics system and its application to induction motors. PhD thesis, Pukyong National University.
9 Vapnik V.N. (1995) The Nature of Statistical Learning Theory. Springer, New York.
Acknowledgments This work was supported by the Brain Korea 21 Project.