ADVANCES IN ENGINEERING AND TECHNOLOGY
ADVANCES IN ENGINEERING AND TECHNOLOGY

Proceedings of the First International Conference on Advances in Engineering and Technology, 16-19 July 2006, Entebbe, Uganda
Edited by
J. A. Mwakali Department of Civil Engineering, Faculty of Technology, Makerere University, P.O. Box 7062, Kampala, Uganda
G. Taban-Wani Department of Engineering Mathematics, Faculty of Technology, Makerere University, P.O. Box 7062, Kampala, Uganda
2006
Amsterdam · Boston · Heidelberg · London · New York · Oxford · Paris · San Diego · San Francisco · Singapore · Sydney · Tokyo
Elsevier Ltd is an imprint of Elsevier, with offices at:
Linacre House, Jordan Hill, Oxford OX2 8DP, UK
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
84 Theobald's Road, London WC1X 8RR, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, CA 92101-4495, USA

First edition 2006

Copyright © Elsevier Ltd. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.

Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

For information on all biomaterials related publications visit our web site at books.elsevier.com

Printed and bound in Great Britain
06 07 08 09 10    10 9 8 7 6 5 4 3 2 1

ISBN-13: 978-0-08-045312-5
ISBN-10: 0-08-045312-0
Working together to grow libraries in developing countries
www.elsevier.com | www.bookaid.org | www.sabre.org
Elsevier Internet Homepage: http://www.elsevier.com
Consult the Elsevier homepage for full catalogue information on all books, major reference works, journals, electronic products and services. All Elsevier journals are available online via ScienceDirect: www.sciencedirect.com
To contact the Publisher
Elsevier welcomes enquiries concerning publishing proposals: books, journal special issues, conference proceedings, etc. All formats and media can be considered. Should you have a publishing proposal you wish to discuss, please contact, without obligation, the publisher responsible for Elsevier's materials and engineering programme:

Jonathan Agbenyega, Publisher
Elsevier Ltd
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
Phone: +44 1865 843000
Fax: +44 1865 843987
E-mail: j.agbenye@elsevier.com
General enquiries, including placing orders, should be directed to Elsevier's Regional Sales Offices; please access the Elsevier homepage for full contact details (homepage address at the top of this page).
PREFACE

The International Conference on Advances in Engineering and Technology (AET2006) was a monumental event for the engineering and scientific fraternity, not only of the African continent but also of the larger world, both technologically advanced and still developing. The Conference succeeded in bringing together in Uganda, affectionately called "The Pearl of Africa", scores of world-renowned scientists and engineers to share knowledge on recent advances in engineering and technology for the common good of humanity in a world that is no more than a global village.

These Proceedings are a compilation of the quality papers presented at the AET2006 Conference held in Entebbe, Uganda, from 16th to 19th July, 2006. The papers cover a range of fields, representing a diversity of technological advances registered in the last few decades of human civilization and development. The general areas covered range from advances in construction and industrial materials and methods to manufacturing processes; from advances in architectural concepts to energy efficient systems; and from advances in geographical information systems to telecommunications, to mention but a few. The presentations are undoubtedly a pointer to more such advances that will continue to unfold in the coming years and decades to meet the ever-growing demands and challenges of human survival in the face of diminishing natural resources and an ever-increasing population.

The timing of the Conference could not have been more appropriate: it comes at a time when most of Africa is facing an unprecedented energy crisis engendered by a combination of factors, namely drought (resulting in the recession of water reservoir levels), accelerated industrialization that outstrips available power generation, inadequate planning, and weak economies. We think the AET2006 Conference has presented practical ideas for solving this and many other problems that face the peoples of Africa and other continents.

The editors of the Proceedings, on behalf of the AET2006 Conference Organising Committee, extend their thanks to the authors for accepting to share their knowledge in these Proceedings. All the experts who peer-reviewed the papers are sincerely thanked for ensuring that quality material was published. The guidance given by the members of the International Scientific Advisory Board is gratefully acknowledged. The Sponsoring Organisations are most sincerely thanked for making it possible for the Conference and its Proceedings to be realized. The staff of the Faculty of Technology, Makerere University, and particularly the Dean, Dr. Barnabas Nawangwe, are given special thanks for providing an environment conducive to the smooth accomplishment of the editorial work. Finally, the editors thank their families for their cooperation and support.
J. A. Mwakali
G. Taban-Wani
TABLE OF CONTENTS

CHAPTER ONE - KEYNOTE PAPERS

WATER QUALITY MANAGEMENT IN RIVERS AND LAKES
Fontaine, Kenner & Hoyer

IMPROVEMENTS INCORPORATED IN THE NEW HDM-4 VERSION 2 ... 10
Odoki, Stannard & Kerali

CHAPTER TWO - ARCHITECTURE

SPATIAL AND VISUAL CONNOTATION OF FENCES (A CASE OF DAR ES SALAAM, TANZANIA) ... 23
Kalugila

A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 1: DEVELOPMENT ... 31
Goliger & Mahachi

A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 2: PILOT STUDY ... 40
Mahachi & Goliger

USE OF WIND-TUNNEL TECHNOLOGY IN ENHANCING HUMAN HABITAT IN COASTAL CITIES OF SOUTHERN AFRICA ... 49
Goliger & Mahachi

WOMEN PARTICIPATION IN THE CONSTRUCTION INDUSTRY ... 59
Elwidaa & Nawangwe

CHAPTER THREE - CIVIL ENGINEERING

STUDIES ON UGANDAN VOLCANIC ASH AND TUFF ... 75
Ekolu, Hooton & Thomas

COMPARATIVE ANALYSIS OF HOLLOW CLAY BLOCKS AND SOLID REINFORCED CONCRETE SLABS ... 84
Kyakula, Behangana & Pariyo

TECHNOLOGY TRANSFER TO MINIMIZE CONCRETE CONSTRUCTION FAILURES ... 91
Ekolu & Ballim

DEVELOPMENT OF AN INTEGRATED TIMBER FLOOR SYSTEM ... 99
Van Herwijnen & Jorissen

CONSIDERATIONS IN VERTICAL EXTENSION OF REINFORCED CONCRETE STRUCTURES ... 109
Kyakula, Kapasa & Opus

LIMITED STUDY ON A CHANGE FROM PRIVATE PUBLIC TO GOVERNMENT ONE TRANSPORT SYSTEMS ... 117
Ssamula

INFLUENCE OF TRUCK LOAD CHANNELISATION ON MOISTURE DAMAGE IN BITUMINOUS MIXTURES ... 125
Bagampadde & Kiggundu

THE EFFECT OF MEROWE DAM ON THE TRAVEL TIME OF FLOOD WAVE FROM ATBARA TO DONGOLA ... 135
Zaghloul & El-Moattassem

BUILDING MATERIAL ASPECTS IN EARTHQUAKE RESISTANT CONSTRUCTION IN WESTERN UGANDA ... 143
Kahuma, Kiggundu, Mwakali & Taban-Wani

BIOSENSOR TO DETECT HEAVY METALS IN WASTE WATER ... 159
Ntihuga

INTEGRATED ENVIRONMENTAL EDUCATION AND SUSTAINABLE DEVELOPMENT ... 167
Matiasi

MAPPING WATER SUPPLY COVERAGE: A CASE STUDY FROM LAKE KIYANJA, MASINDI DISTRICT, UGANDA ... 176
Quin

PHOSPHORUS SORPTION BEHAVIOURS AND PROPERTIES OF MBEYA-PUMICE ... 185
Mahenge, Mbwette & Njau

PRELIMINARY INVESTIGATION OF LAKE VICTORIA GROUNDWATER SITUATION FROM ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA ... 195
Mangeni & Ngirane-Katashaya

COMPARISON OF TEST RESULTS FROM A COMPACTED FILL ... 203
Twesigye-Omwe

DEALING WITH SPATIAL VARIABILITY UNDER LIMITED HYDROGEOLOGICAL DATA. CASE STUDY: HYDROLOGICAL PARAMETER ESTIMATION IN MPIGI-WAKISO ... 211
Kigobe & Kizza

TOWARDS APPROPRIATE PERFORMANCE INDICATORS FOR THE UGANDA CONSTRUCTION INDUSTRY ... 221
Tindiwensi, Mwakali & Rwelamila

DEVELOPING AN INPUT-OUTPUT CLUSTER MAP FOR THE CONSTRUCTION INDUSTRY IN UGANDA ... 230
Mwesige & Tindiwensi

REGIONAL FLOOD FREQUENCY ANALYSIS FOR NORTHERN UGANDA USING THE L-MOMENT APPROACH ... 238
Kizza, Ntale, Rugumayo & Kigobe

QUALITATIVE ANALYSIS OF MAJOR SWAMPS FOR RICE CULTIVATION IN AKWA-IBOM, NIGERIA ... 251
Akinbile & Oyerinde

EFFICIENCY OF CRAFTSMEN ON BUILDING SITES: STUDIES IN UGANDA ... 260
Alinaitwe, Mwakali & Hansson

BUILDING FIRM INNOVATION ENABLERS AND BARRIERS AFFECTING PRODUCTIVITY ... 268
Alinaitwe, Widen, Mwakali & Hansson

FACTORS AFFECTING PRODUCTIVITY OF BUILDING CRAFTSMEN - A CASE OF UGANDA ... 277
Alinaitwe, Mwakali & Hansson

A REVIEW OF CAUSES AND REMEDIES OF CONSTRUCTION RELATED ACCIDENTS: THE UGANDA EXPERIENCE ... 285
Mwakali

THE RATIONALE FOR USE OF DECISION SUPPORT SYSTEMS FOR WATER RESOURCES MANAGEMENT IN UGANDA ... 300
Ngirane-Katashaya, Kizito & Mugabi

THE NEED FOR EARTHQUAKE LOSS ESTIMATION TO ENHANCE PUBLIC AWARENESS OF EXPOSURE RISK AND STIMULATE MITIGATING ACTIONS: A CASE STUDY OF KAMPALA CIVIC CENTER ... 309
Mujugumbya, Akampuriira & Mwakali

CHAPTER FOUR - CHEMICAL AND PROCESS ENGINEERING

PARTICLE DYNAMICS RESEARCH INITIATIVES AT THE FEDERAL UNIVERSITY OF TECHNOLOGY, AKURE, NIGERIA ... 315
Adewumi, Ogunlowo & Ademosun

MATERIAL CLASSIFICATION IN CROSS FLOW SYSTEMS ... 321
Adewumi, Ogunlowo & Ademosun

APPLICATION OF SOLAR-OPERATED LIQUID DESICCANT EVAPORATIVE COOLING SYSTEM FOR BANANA RIPENING AND COLD STORAGE ... 326
Abdalla, Abdalla, El-awad & Eljack

FRACTIONATION OF CRUDE PYRETHRUM EXTRACT USING SUPERCRITICAL CARBON DIOXIDE ... 339
Kiriamiti, Sarmat & Nzila

MOTOR VEHICLE EMISSION CONTROL VIA FUEL IONIZATION: "FUELMAX" EXPERIENCE ... 347
John, Wilson & Kasembe

MODELLING BAGASSE ELECTRICITY GENERATION: AN APPLICATION TO THE SUGAR INDUSTRY IN ZIMBABWE ... 354
Mbohwa

PROSPECTS OF HIGH TEMPERATURE AIR/STEAM GASIFICATION OF BIOMASS TECHNOLOGY ... 368
John, Mhilu, Alkilaha, Mkumbwa, Lugano & Mwaikondela

DEVELOPING INDIGENOUS MACHINERY FOR CASSAVA PROCESSING AND FRUIT JUICE PRODUCTION IN NIGERIA ... 375
Agbetoye, Ademosun, Ogunlowo, Olukunle, Fapetu & Adesina

CHAPTER FIVE - ELECTRICAL ENGINEERING

FEASIBILITY OF CONSERVING ENERGY THROUGH EDUCATION: THE CASE OF UGANDA AS A DEVELOPING COUNTRY ... 385
Sendegeya, Lugujjo, Da Silva & Amelin

PLASTIC SOLAR CELLS: AN AFFORDABLE ELECTRICITY GENERATION TECHNOLOGY ... 395
Chiguvare

IRON LOSS OPTIMISATION IN THREE PHASE AC INDUCTION SQUIRREL CAGE MOTORS BY USE OF FUZZY LOGIC ... 404
Saanane, Nzali & Chambega

PROPAGATION OF LIGHTNING INDUCED VOLTAGES ON LOW VOLTAGE LINES: CASE STUDY TANZANIA ... 421
Clemence & Manyahi

A CONTROLLER FOR A WIND DRIVEN MICRO-POWER ELECTRIC GENERATOR ... 429
Ali, Dhamadhikar & Mwangi

REINFORCEMENT OF ELECTRICITY DISTRIBUTION NETWORK ON PRASLIN ISLAND ... 437
Vishwakarma

CHAPTER SIX - MECHANICAL ENGINEERING

ELECTROPORCELAINS FROM RAW MATERIALS IN UGANDA: A REVIEW ... 454
Olupot, Jonsson & Byaruhanga

A NOVEL COMBINED HEAT AND POWER (CHP) CYCLE BASED ON GASIFICATION OF BAGASSE ... 465
Okure, Musinguzi, Nabacwa, Babangira, Arineitwe & Okou

ENERGY CONSERVATION AND EFFICIENT USE OF BIOMASS USING THE E.E.S. STOVE ... 473
Kalyesubula

FIELD-BASED ASSESSMENT OF BIOGAS TECHNOLOGY: THE CASE OF UGANDA ... 481
Nabuuma & Okure

MODELLING THE DEVELOPMENT OF ADVANCED MANUFACTURING TECHNOLOGIES (AMT) IN DEVELOPING COUNTRIES ... 488
Okure, Mukasa & Otto

CHAPTER SEVEN - GEOMATICS

SPATIAL MAPPING OF RIPARIAN VEGETATION USING AIRBORNE REMOTE SENSING IN A GIS ENVIRONMENT. CASE STUDY: MIDDLE RIO GRANDE RIVER, NEW MEXICO ... 495
Farag, Akasheh & Neale

CHAPTER EIGHT - ICT AND MATHEMATICAL MODELLING

2-D HYDRODYNAMIC MODEL FOR PREDICTING EDDY FIELDS ... 504
El-Belasy, Saad & Hafez

SUSTAINABILITY IMPLICATIONS OF UBIQUITOUS COMPUTING ENVIRONMENT ... 514
Shrivastava & Ngarambe

A MATHEMATICAL IMPROVEMENT OF THE SELF-ORGANIZING MAP ALGORITHM ... 522
Oyana, Achenie, Cuadros-Vargas, Rivers & Scott

BRIDGING THE DIGITAL DIVIDE IN RURAL COMMUNITY: A CASE STUDY OF EKWUOMA TOMATOES PRODUCERS IN SOUTHERN NIGERIA ... 533
Chiemeke & Daodu

STRATEGIES FOR IMPLEMENTING HYBRID E-LEARNING IN RURAL SECONDARY SCHOOL IN UGANDA ... 538
Lating, Kucel & Trojer

DESIGN AND DEVELOPMENT OF INTERACTIVE MULTIMEDIA CD-ROMs FOR RURAL SECONDARY SCHOOLS IN UGANDA ... 546
Lating, Kucel & Trojer

ON THE LINKS BETWEEN THE POTENTIAL ENERGY DUE TO A UNIT-POINT CHARGE, THE GENERATING FUNCTION AND RODRIGUE'S FORMULA FOR LEGENDRE'S POLYNOMIALS ... 554
Tickodri-Togboa

VIRTUAL SCHOOLS USING LOCOLMS TO ENHANCE LEARNING IN THE LEAST DEVELOPED COUNTRIES ... 562
Phocus, Donart & Shrivastava

SCHEDULING A PRODUCTION PLANT USING CONSTRAINT DIRECTED SEARCH ... 572
Kibira, Kariko-Buhwezi & Musasizi

A NEGOTIATION MODEL FOR LARGE SCALE MULTI-AGENT SYSTEMS ... 580
Wanyama & Taban-Wani

CHAPTER NINE - TELEMATICS AND TELECOMMUNICATIONS

DIGITAL FILTER DESIGN USING AN ADAPTIVE MODELLING APPROACH ... 594
Mwangi

AUGMENTED REALITY ENHANCES THE 4-WAY VIDEO CONFERENCING IN CELL PHONES ... 603
Anand

DESIGN OF SURFACE WAVE FILTERS RESONATOR WITH CMOS LOW NOISE AMPLIFIER ... 612
Ntagwirumugara, Gryba & Lefebvre

THE FADING CHANNEL PROBLEM AND ITS IMPACT ON WIRELESS COMMUNICATION SYSTEMS IN UGANDA ... 621
Kaluuba, Taban-Wani & Waigumbulizi

SOLAR POWERED Wi-Fi WITH WiMAX ENABLES THIRD WORLD PHONES ... 635
Santhi & Kumaran

ICT FOR EARTHQUAKE HAZARD MONITORING AND EARLY WARNING ... 646
Manyele, Aliila, Kabadi & Mwalembe

CHAPTER TEN - LATE PAPERS

NEW BUILDING SERVICES SYSTEMS IN KAMPALA'S BUILT HERITAGE: COMPLEMENTARY OR CONFLICTING INTEGRALS? ... 655
Birabi

FUZZY SETS AND STRUCTURAL ENGINEERING ... 671
Kala & Omishore

A PRE-CAST CONCRETE TECHNOLOGY FOR AFFORDABLE HOUSING IN KENYA ... 680
Shitote, Nyomboi, Muumbo, Wanjala, Khadambi, Orowe, Sakwa, Bamburi, Apollo & Bamburi

ENVIRONMENTAL (HAZARDOUS CHEMICAL) RISK ASSESSMENT - ERA IN THE EUROPEAN UNION ... 696
Musenze & Vandegehuchte

THE IMPACT OF A POTENTIAL DAM BREAK ON THE HYDRO ELECTRIC POWER GENERATION: CASE OF OWEN FALLS DAM BREAK SIMULATION, UGANDA ... 710
Kizza & Mugume

LEAD LEVELS IN THE SOSIANI ... 722
Chibole

DEVELOPING A WEB PORTAL FOR THE UGANDAN CONSTRUCTION INDUSTRY ... 730
Irumba

LOW FLOW ANALYSIS IN LAKE KYOGA BASIN - EASTERN UGANDA ... 739
Rugumayo & Ojeo

SUITABILITY OF AGRICULTURAL RESIDUES AS FEEDSTOCK FOR FIXED BED GASIFIERS ... 756
Okure, Ndemere, Kucel & Kjellstrom

NUMERICAL METHODS IN SIMULATION OF INDUSTRIAL PROCESSES ... 764
Lewis, Postek, Gethin, Yang, Pao & Chao

MOBILE AGENT SYSTEM FOR COMPUTER NETWORK MANAGEMENT ... 796
Akinyokun & Imianvan

GIS MODELLING FOR SOLID WASTE DISPOSAL SITE SELECTION ... 809
Aribo & Looijen

AN ANALYSIS OF FACTORS AFFECTING THE PROJECTION OF AN ELLIPSOID (SPHEROID) ONTO A PLANE ... 813
Mukiibi-Katende

SOLAR BATTERY CHARGING STATIONS FOR RURAL ELECTRIFICATION: THE CASE OF UZI ISLAND IN ZANZIBAR ... 820
Kihedu & Kimambo

SURFACE RAINFALL ESTIMATE OF LAKE VICTORIA FROM ISLANDS STATIONS DATA ... 832
Mangeni & Ngirane-Katashaya

Author Index* ... 841

Keyword Index* ... 843

* Other than late papers (pages 655 to 840)
INTERNATIONAL CONFERENCE ON ADVANCES IN ENGINEERING AND TECHNOLOGY (AET 2006)

Local Organising Committee
Prof. Jackson A. Mwakali (Chairman), Makerere University
Dr. Gyavira Taban-Wani (Secretary), Makerere University
Dr. B. Nawangwe, Makerere University
Prof. E. Lugujjo, Makerere University
Prof. S.S. Tickodri-Togboa, Makerere University
Dr. Mackay E. Okure, Makerere University
Dr. Albert I. Rugumayo, Ministry of Energy and Mineral Development
International Scientific Advisory Board
Prof. Adekunle Olusola Adeyeye, National University of Singapore
Prof. Ampadu, National University of Singapore
Prof. Gerhard Bax, University of Uppsala, Sweden
Prof. Mark Bradford, University of New South Wales, Australia
Prof. Stephanie Burton, University of Cape Town, South Africa
Prof. R.L. Carter, Department of Electrical Engineering, University of Texas at Arlington, USA
Prof. David Dewar, University of Cape Town, South Africa
Prof. P. Dowling, University of Surrey, UK
Prof. Christopher Earls, University of Pittsburgh, USA
Prof. N. El-Shemy, Department of Geomatics Engineering, University of Calgary, Alberta, Canada
Prof. Tore Haavaldsen, NTNU, Norway
Prof. Bengt Hansson, Lund University, Sweden
Prof. H.K. Higenyi, Department of Mechanical Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Prof. Peter B. Idowu, Penn State Harrisburg, Pennsylvania, USA
Prof. N.M. Ijumba, University of Durban-Westville, South Africa
Prof. Ulf Isaacson, Royal Technical University, Stockholm, Sweden
Prof. Geofrey R. John, University of Dar-es-Salaam, Tanzania
Prof. Rolf Johansson, Royal Technical University, Stockholm, Sweden
Prof. Håkan Johnson, Swedish Agricultural University, Uppsala, Sweden
Prof. V.B.A. Kasangaki, Uganda Institute of Communications Technology, Kampala, Uganda
Prof. G. Ngirane-Katashaya, Department of Civil Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Prof. Badru M. Kiggundu, Department of Civil Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Dr. M.M. Kissaka, University of Dar-es-Salaam, Tanzania
Em. Prof. Björn Kjellström, Royal Technical University, Stockholm, Sweden
Prof. Jan-Ming Ko, Faculty of Construction and Land Use, Hong Kong Polytechnic University, Hong Kong, China
Em. Prof. W.B. Krätzig, Ruhr University Bochum, Germany
Prof. R.W. Lewis, University of Wales, Swansea, UK
Prof. Beda Mutagahywa, University of Dar-es-Salaam, Tanzania
Prof. Burton M. L. Mwamilla, University of Dar-es-Salaam, Tanzania
Dr. E. Mwangi, Department of Electrical Engineering, University of Nairobi, Kenya
Dr. Mai Nalubega, World Bank, Kampala, Uganda
Prof. Jo Nero, University of Cape Town, South Africa
Prof. D.A. Nethercot, Imperial College of Science, Technology & Medicine, UK
Dr. Catharina Nord, Royal Institute of Technology, Stockholm, Sweden
Prof. A. Noureldin, Department of Electrical & Computer Engineering, Royal Military College of Canada, Kingston, Ontario, Canada
Prof. Rudolfo Palabazer, University of Trento, Italy
Prof. G.N. Pande, University of Wales, Swansea, UK
Prof. G.A. Parke, University of Surrey, UK
Prof. Petter Pilesjö, University of Lund, Sweden
Dr. Pereira da Silva, Faculty of Technology, Makerere University, Kampala, Uganda
Prof. Nigel John Smith, University of Leeds, UK
Prof. Lennart Söder, Royal Institute of Technology, Stockholm, Sweden
Prof. Örjan Svane, Royal Institute of Technology, Stockholm, Sweden
Prof. Sven Thelandersson, Lund University, Sweden
Prof. Roger Thunvik, Royal Institute of Technology, Stockholm, Sweden
Prof. Lena Trojer, Blekinge Institute of Technology, Sweden
Prof. F.F. Tusubira, Directorate of ICT Support, Makerere University, Kampala, Uganda
Prof. Brian Uy, University of Wollongong, Australia
Prof. Dick Urban Vestbro, Royal Institute of Technology, Stockholm, Sweden
Prof. A. Zingoni, University of Cape Town, South Africa
Sponsoring and Supporting Organisations
Makerere University
NUFU
Sida/SAREC
Ministry of Works, Housing and Communications
Uganda Institution of Professional Engineers
Construction Review
CHAPTER ONE - KEYNOTE PAPERS
WATER QUALITY MANAGEMENT IN RIVERS AND LAKES

T. A. Fontaine, Department of Civil and Environmental Engineering, South Dakota School of Mines and Technology, Rapid City, SD, USA
S. J. Kenner, Department of Civil and Environmental Engineering, South Dakota School of Mines and Technology, Rapid City, SD, USA
D. Hoyer, Water and Natural Resources, RESPEC, Rapid City, SD, USA
ABSTRACT
An approach for national water quality management is illustrated based on the 1972 Clean Water Act in the United States. Beneficial uses are assigned to each stream and lake. Water quality standards are developed to support these beneficial uses. A data collection program is used to make periodic evaluation of the quality of water bodies in each state. A bi-annual listing of all impaired water is required, with a schedule for investigations to determine causes of pollution and to develop plans to restore desired water quality. The approach is illustrated using recent water quality investigations of two rivers in the Great Plains Region of the United States.

Keywords: water quality management, total maximum daily load, pollution.
1.0 INTRODUCTION
Water quality is related to the physical, chemical and biological characteristics of a stream, lake or groundwater system. Once the water quality of a water body is compromised, significant effort and cost are required to remediate the contamination. Protecting and improving the quality of water bodies enhances human health, agricultural production, ecosystem health, and commerce. Maintaining adequate water quality requires coordinated national policy and oversight of state and local water quality management.
A critical component of water quality management in the USA is the 1972 Federal Clean Water Act, which established additional rules, strategies, and funding to protect and improve the water quality of streams and lakes. The US Environmental Protection Agency (EPA) is the federal administrator of the program. Water quality management of specific water bodies (rivers, lakes, and estuaries) is delegated to state and local governments, which are required to meet the federal regulations. Key components of this process include (1) definition of beneficial uses for each water body, (2) assignment of water quality standards that support the beneficial uses, (3) an antidegradation policy, and (4) continual water quality monitoring.

Each state must submit a list of impaired waters to the EPA every 2 years. The most common reasons for these waters to be impaired include pollution related to sediments, pathogens, nutrients, metals, and low dissolved oxygen. For each water body on the list, a plan is required for improving the polluted water resource. A fundamental tool in this plan is the development of a total maximum daily load (TMDL). For a specific river or lake, the TMDL includes data collection and a study of the water quality process, evaluation of current sources of pollution, and a management plan to restore the system to meet the water quality standards.

These aspects of water quality management are described in the remainder of this paper. The concepts of beneficial uses, water quality standards, the antidegradation policy, the listing of impaired water bodies, and the development of a TMDL are discussed. Case studies from recent research in South Dakota are then used to illustrate the development of a TMDL.

2.0 BENEFICIAL USES AND WATER QUALITY STANDARDS
The State of South Dakota has designated 11 beneficial uses for surface waters:
• Domestic water supply
• Coldwater permanent fish life propagation
• Coldwater marginal fish life propagation
• Warmwater permanent fish life propagation
• Warmwater semi-permanent fish life propagation
• Warmwater marginal fish life propagation
• Immersion recreation
• Limited contact recreation
• Fish and wildlife propagation, recreation, and stock watering
• Irrigation
• Commerce and industry

The EPA has developed standards for various beneficial uses. Each state can apply the EPA standards, or establish its own state standards as long as they equal or exceed the EPA standards. Examples of parameters used for standards for general uses include total dissolved solids, pH, water temperature, dissolved oxygen, unionized ammonia, and fecal coliform. Water quality standards for metals and toxic pollutants may be applied in special cases. Waters for fish propagation primarily involve parameters for dissolved oxygen, unionized ammonia, water temperature, pH, and suspended solids. Standards are either "daily maximum" (acute) values or "monthly average" (chronic) values, the latter being an average of at least 3 samples during a 30-day period.

Additional standards for lakes include visible pollutants, taste- and odor-producing materials, and nuisance aquatic life. The trophic status of a lake is assessed with a Trophic State Index (TSI) based on measures of water transparency, Chlorophyll-a, and total phosphorus. Maximum values of the TSI allowed as supporting beneficial uses of lakes range from 45 to 65 across the state. The detailed numeric standards for surface water quality in South Dakota are described in South Dakota Department of Environment and Natural Resources (2004).

3.0 LISTING OF IMPAIRED WATER BODIES
Section 303d of the Federal Clean Water Act requires each state to identify waters failing to meet water quality standards, and to submit to the EPA a list of these waters and a schedule for developing a total maximum daily load (TMDL). A TMDL represents the amount of pollution that a waterbody can receive and still maintain the water quality standards for the associated beneficial use. The list of impaired waters (the "303d list") is required every 2 years. Examples of the most frequent reasons for listing waters across the USA are: (1) nutrients, sediments, low dissolved oxygen, and pH for lakes; and (2) sediments, metals, pathogens, and nutrients for streams. The number of waterbodies on the 303d list for South Dakota has been about 170 for the past 8 years.
The decision to place a waterbody on the 303d list can be based on existing data that document the impaired water quality, or on modeling that indicates failure to meet water quality standards. A waterbody that receives discharges from certain point sources can also be listed when the point source loads could impair the water quality. If existing data are used to evaluate whether or not a water should be listed, the following criteria apply: (1) 20 water quality samples of a specific parameter are required over the last 5 years; (2) over 10% of the samples must exceed the water quality standard for that parameter; and (3) the data must meet certain quality assurance requirements.

4.0 REMEDIATION STRATEGIES
For each water placed on the 303d list, a strategy for improving the water quality so that the standards are met is required. The development and implementation of a TMDL is the most common approach for remediation strategies. A TMDL is calculated as the sum of the individual waste load allocations for point sources, and the load allocations for nonpoint sources and for natural background sources, that are necessary to achieve compliance with applicable surface water quality standards. The units of the TMDL can be mass per day or toxicity per day, for example, but not concentration. The waste load allocation involves point sources, which are regulated by the National Pollution Discharge Elimination System program (NPDES; see South Dakota Department of Environment and Natural Resources (2004)). A point source permit must be renewed every 5 years. Examples of load allocations (nonpoint sources) include agricultural runoff and stormwater runoff from developed areas. Natural background loads involve pollution from non-human sources; examples include high suspended solids in watersheds with severe erosion due to natural soil conditions, high fecal coliform concentrations due to wildlife, and elevated streamwater temperatures due to natural conditions. A margin of safety is included in the TMDL to account for the uncertainty in the link between the daily pollutant load and the resulting water quality in the stream or lake.

The process of developing and implementing a TMDL usually involves a data collection phase, the development of proposed best management practices (BMPs), and an implementation and funding strategy. A water quality monitoring program may be required to generate data to define the watershed hydrologic system, measure water quality parameters, and identify the sources of pollution. A computer simulation model may be used to calculate the TMDL required for the stream or lake to meet the water quality standards for the beneficial uses involved. Once the TMDL is known, various management actions are evaluated for their effectiveness in decreasing the pollutant loads to the point where the water quality standards are met. Point source loads are managed through the NPDES permit system. Management of nonpoint sources requires cooperation among federal, state, and local agencies, business enterprises, and private landowners. Examples of activities by individuals, corporations, and government agencies that generate nonpoint pollution sources include agriculture (livestock and crop production), timber harvesting, construction, and mining. Federal and state funding can be applied for to promote voluntary participation in best management practices (BMPs) to reduce water pollution related to these activities.

Once the implementation phase of the TMDL begins, water quality monitoring continues on a regular basis to measure the impact of the selected BMPs on water quality. The state is allowed 13 years from the time a specific river or lake is placed on the 303d list to develop the TMDL, complete the implementation, and restore the water to the standards required to support the beneficial uses of that water.

A final aspect of the water quality management program is an antidegradation policy. Antidegradation applies to water bodies with water quality that is better than the beneficial use criteria. Reduction of water quality in high quality water bodies requires economic and social justification; in any case, beneficial use criteria must always be met.

Establishing desired beneficial uses for every surface water body, and the associated standards required to support those uses, provides the framework to protect and improve the water quality of a country so that all benefit. The process of routine collection of water quality data provides the information needed to identify impaired waters, place them on the 303d list, and to define a plan to develop and implement a strategy for restoring the desired level of water quality. The following case studies illustrate some of the procedures and issues that are often involved in this process.
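Written out, the allocation arithmetic described in Section 4 takes the standard form used by the US EPA, where WLA denotes the waste load allocations for point sources, LA the load allocations for nonpoint and natural background sources, and MOS the margin of safety:

\[
\text{TMDL} \;=\; \sum_{i} \text{WLA}_i \;+\; \sum_{j} \text{LA}_j \;+\; \text{MOS}
\]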
5.0 SPRING CREEK
Spring Creek is located on the eastern side of the Black Hills of South Dakota. The portion of Spring Creek involved in this project has a drainage area of 327 km² at the outflow gage (latitude 43° N, longitude 103°29'18" W). The annual mean discharge is 0.62 m³/s (1991-2004), the maximum daily mean discharge of record is 14.9 m³/s, and the minimum daily mean discharge is 0.0 m³/s. The average annual precipitation is 56 cm and the land cover is Ponderosa Pine forest. The beneficial uses of this section of Spring Creek are (1) cold-water permanent fish life propagation, (2) immersion recreation, (3) limited-contact recreation, and (4) fish and wildlife propagation, recreation, and stock watering.

Spring Creek was placed on the 303d list and scheduled for TMDL development because the standard for fecal coliform in immersion recreation waters was exceeded. Fecal coliform bacteria are present in the digestive systems of warm-blooded animals, and therefore serve as an indicator that the receiving water has been contaminated by fecal material. Symptoms of exposure in humans include cramps, nausea, diarrhea, and headaches.

The objective of the project was to support the development of the TMDL using a water quality monitoring program and a computer simulation program (Schwickerath et al., 2005). Data from the water quality monitoring program helped identify the sources of fecal coliform and measure the current loads. The simulation model provided insight into the relation of the sources to the loads exceeding the standards for the immersion recreation use, and was used to estimate the reduction of pollution levels resulting from various water quality management activities in the watershed.
5.1 Monitoring Program
Fourteen monitoring sites were selected in the study area: 9 on the main channel of Spring Creek, 2 on Palmer Gulch Tributary, 2 on Newton Fork Tributary, and 1 on Sunday Gulch Tributary. Monthly grab samples were collected for 15 months at all 14 sites, and samples during storm-runoff events were collected at 6 stations. The storm event samples were collected over a 12 to 24 hour period on a flow-weighted basis. Streamflow measurements were taken periodically during the 15 month study to establish stage-discharge ratings at each station. A quality assurance program using field blanks and field replicates every 10 samples was used to measure the reliability of the data.
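Stage-discharge ratings of the kind mentioned above are commonly fitted as power-law curves in log space. A minimal sketch, assuming a fixed cease-to-flow stage and illustrative measurement pairs (not project data):

```python
import numpy as np

# Illustrative stage (m) and measured discharge (m^3/s) pairs; NOT project data.
stage = np.array([0.30, 0.45, 0.60, 0.80, 1.05])
discharge = np.array([0.12, 0.35, 0.70, 1.40, 2.60])

h0 = 0.10  # assumed cease-to-flow stage (m); in practice estimated by trial

# Fit log Q = log a + b log(h - h0), i.e. Q = a * (h - h0)**b
b, log_a = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
a = float(np.exp(log_a))

def rating(h):
    """Estimate discharge (m^3/s) from stage (m) using the fitted curve."""
    return a * (h - h0) ** b

print(f"Q = {a:.2f}(h - {h0})^{b:.2f}; Q(0.90 m) ~ {rating(0.90):.2f} m^3/s")
```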
Samples were analyzed for fecal coliform, total suspended solids, pH, temperature, ammonia, and dissolved oxygen. The criteria for fecal coliform in immersion recreation comprise two standards: (1) the geometric mean of at least 5 samples collected during a 30-day period must not exceed 200 colony-forming units (cfu) per 100 mL; or (2) a maximum of 400 cfu per 100 mL in a single sample. The water is considered impaired if either standard is exceeded by more than 10% of the samples. The water quality standards for the other relevant parameters were: total suspended solids less than 53 mg/L (daily maximum sample), pH between 6.6 and 8.6, water temperature of 18.3°C or less, and at least 6 mg/L dissolved oxygen. The standard for ammonia depends on the temperature and pH at the time of sampling.

The fecal coliform standard was exceeded in 17% of the samples from the main channel of Spring Creek, 30% of samples from Palmer Gulch Tributary, and 13% of samples from Sunday Gulch Tributary. More than 10% of samples from Palmer Gulch Tributary also exceeded standards for total suspended solids (22% exceeded), pH (11% exceeded), and ammonia (11% exceeded). Fourteen percent of samples in Newton Fork Tributary exceeded the temperature standard. These results confirm that a TMDL for fecal coliform bacteria is required for this section of Spring Creek. The results also indicate that Palmer Gulch Tributary should be considered for an independent listing on the 303d list of impaired water, and that additional monitoring is needed to investigate temperature conditions on Newton Fork Tributary.

Additional sampling was used to estimate the distribution of fecal contamination coming from humans and animals. A DNA fingerprinting analysis called ribotyping can indicate the source of fecal coliforms. Results of the initial ribotyping samples suggest that 35% of the fecal coliform in Spring Creek originates from humans, with the other 65% coming from livestock (cattle) and wildlife in the catchment. This information is used to develop remediation options to help Spring Creek meet the water quality standard. For example, potential sources of human coliform include leaking sewer systems, leaking treatment lagoons at Hill City (a town of 780 people in the center of the study area), and failed septic systems.
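The two-part fecal coliform standard and the 10% impairment screen described above can be checked mechanically. A minimal sketch, in which applying the 10% rule to each standard separately is one plausible reading of the rule:

```python
import numpy as np

ACUTE = 400    # single-sample maximum, cfu per 100 mL
CHRONIC = 200  # geometric mean of >= 5 samples in a 30-day period, cfu per 100 mL

def geometric_mean(values):
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.log(values).mean()))

def single_sample_rate(samples):
    """Fraction of grab samples exceeding the 400 cfu/100 mL standard."""
    return float(np.mean(np.asarray(samples, dtype=float) > ACUTE))

def window_fails(window):
    """True if a 30-day set of >= 5 samples fails the geometric-mean standard."""
    return len(window) >= 5 and geometric_mean(window) > CHRONIC

def impaired(samples, windows):
    """Impairment screen: a standard is exceeded by more than 10% of samples
    (or of 30-day periods), applied to each standard in turn."""
    gm_rate = float(np.mean([window_fails(w) for w in windows])) if windows else 0.0
    return single_sample_rate(samples) > 0.10 or gm_rate > 0.10

# e.g. 17% of main-channel samples over the standard, as reported above,
# would flag the reach as impaired under this screen.
```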
5.2 Simulation Modeling Analysis
The Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) and the HSPF simulation models were used to investigate the impact of various remediation activities on the coliform contamination in Spring Creek (US Environmental Protection Agency, 2001; Bicknell et al., 2000). These models provide comprehensive simulation of the hydrology, channel processes, and contaminant processes on a continuous basis. Field data were used to calibrate and validate the model.

The effectiveness of various best management practices (BMPs) for remediating pollution can be simulated with the models. The nonpoint sources of fecal coliform contamination in Spring Creek include humans and urban runoff, runoff from agricultural land and livestock, and wildlife. Human and urban runoff sources include leaks from septic systems of individual homes, sewer pipes, and treatment lagoons, and animal feces. Livestock (primarily cattle in this watershed) generates waste in concentrated compounds near farms during cold months and across widely distributed areas of the catchment during the warmer open range season. Fecal coliforms from livestock are deposited near, and easily washed into, streams in areas where no fences exist along the riparian zones.

Examples of BMPs applied in the modeling analysis included improving failed septic systems, leaking sewer systems, and leaking treatment lagoons, and keeping cattle away from streams. Various combinations of these BMPs are simulated and the TMDL in Spring Creek is calculated for each scenario. Two of these scenarios were successful in reducing the TMDL to the point where the water quality in Spring Creek would be expected to support the beneficial uses. The final phase of the water quality program involves collaboration between the state environmental agency, local residents and landowners, and funding agencies to implement the effective BMPs. Water quality monitoring will continue during this period in order to measure the actual impact on fecal coliform loads, and to document the point when the water quality attains the standards for the beneficial uses of Spring Creek.

6.0 WHITE RIVER
The White River is located in the prairie region of southwestern South Dakota. The drainage area is 26,000 km² at the downstream boundary of the study area (latitude 43° N, longitude 99° W). The annual mean discharge is 16.2 m³/s (1929-2004), the maximum daily mean discharge of record is 1247 m³/s and the minimum daily mean discharge is 0.0 m³/s. Suspended sediment concentrations vary widely, with a maximum daily mean of 72,300 mg/L and a minimum daily mean of 11 mg/L (for the period 1971 to 2004). The climate is semi-arid, with 41 cm of rain per year and 102 cm of lake evaporation per year. Land cover is rangeland and grassland, with areas of Badlands (steep terrain with highly erodible, bare soil).
The river basin has 19 streamflow gaging stations. Spring-fed baseflow provides most of the discharge in the upper portions of the drainage area. Streamflow is a combination of baseflow and storm-event runoff in the lower portions of the basin. An analysis of streamflow data and a physical habitat assessment indicated that the river basin could be divided into three sections (the upper, middle and lower reaches), each reflecting water quality characteristics related to the hydrology, geology, and land use of the section (Foreman et al., 2005).

The beneficial uses for the White River are (1) warm-water semi-permanent fish life propagation; (2) limited contact recreation; (3) fish and wildlife propagation, recreation, and stock waters; and (4) irrigation waters. The White River is listed as impaired for the use of warm-water semi-permanent fish life propagation because of excessive total suspended solids (TSS) and for the use of limited contact recreation because of excessive fecal coliform. The applicable standard for TSS is a daily maximum of 158 mg/L, or a 30-day average of 90 mg/L. The applicable standard for fecal coliform is a single sample with 2000 cfu per 100 mL, or a 30-day average of 1000 cfu per 100 mL.

Water quality standards for the other relevant parameters are: alkalinity less than 1313 mg/L (daily maximum), total residual chlorine less than 0.019 mg/L (acute), conductivity less than 4375 µmhos/cm (daily maximum), hydrogen sulfide less than 0.002 mg/L, nitrates less than 88 mg/L (daily maximum), dissolved oxygen of at least 5.0 mg/L, pH between 6.5 and 9.0, sodium adsorption ratio of 10, total dissolved solids less than 4375 mg/L (daily maximum), temperature less than 32.2°C, total petroleum hydrocarbons less than 10 mg/L, and oil and grease less than 10 mg/L. The standard for ammonia depends on the water temperature and pH at the time of sampling.
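The "required reduction" figures reported in Section 6.1 below can be bounded with a simple screening ratio. A sketch, assuming the reduction acts uniformly on the quantity compared against the standard (the paper's own figures come from the full TMDL analysis, not from this ratio):

```python
def required_reduction(current, standard):
    """Screening estimate of the fractional reduction needed to meet a
    standard. Assumes the reduction applies uniformly to the quantity
    being compared against the standard; the reductions reported in
    Section 6.1 come from the full TMDL analysis."""
    return 0.0 if current <= standard else 1.0 - standard / current

# e.g. a reach with a median TSS of 1118 mg/L against the 158 mg/L standard:
print(f"{required_reduction(1118, 158):.0%}")  # ~86% by this simple screen
```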
6.1 Analysis of Water Quality Data
Water quality data from six stations in the basin were analyzed to evaluate the water quality in the basin and to develop a TMDL summary report. Water is considered impaired for a specific beneficial use if more than 10% of samples exceed the standard for that use.

The median concentration of TSS (mg/L) was 139 in the upper reach, 1118 in the middle reach and 1075 in the lower reach. The percent of samples exceeding the 158 mg/L standard was 47% for the upper reach, 78% for the middle reach, and 79% for the lower reach. All three sections of the White River significantly exceed the maximum daily water quality standard for TSS. The TSS reduction required to meet the standard would be 90% in the upper section, 99% in the middle section and 99% in the lower section.

Most of the TSS in the White River is considered natural background loading because of the amount of drainage area having steep terrain and highly erodible soil types, and the Badlands area. The extensive sediment loads from these sources create large sediment deposits in the channel system of the White River, which are easily suspended and transported as streamflow increases. Therefore, best management practices (BMPs) are not feasible and would not be expected to have a significant impact on TSS loads. If it appeared that BMPs could be effective, examples commonly explored for reducing high TSS include conservation cover, stream bank protection, rotational grazing, and upland wildlife habitat management.

The median concentration of fecal coliform (cfu/100 mL) was 450 in the upper reach, 370 in the middle reach and 2075 in the lower reach. The percent of samples exceeding the 2000 cfu/100 mL standard was 9% for the upper reach, 54% for the middle reach, and 29% for the lower reach. The middle and lower sections of the White River significantly exceed the water quality standard for fecal coliform of 2000 cfu/100 mL. The coliform reduction required to meet the standard would be 88% in the middle section and 66% in the lower section. Best management practices to consider for reducing fecal coliform levels include conservation cover, filter strips, rotational grazing, upland wildlife habitat management, and stream bank protection. Implementing a combination of these land management tools would be expected to lower the coliform levels to meet the water quality standard for limited contact recreation.

7.0 CONCLUSIONS
Water quality management policy and objectives are set at the national level, but a partnership at the federal, state and local levels is critical for effective water quality assessments and implementation of remediation projects. A water quality management program defines beneficial uses for each water body, assigns water quality standards to support those beneficial uses, and maintains a data collection program to identify impaired water and measure recovery. A periodic listing of impaired streams and lakes, along with a schedule of projects to restore water quality for beneficial uses, is also needed. The total maximum daily load (TMDL) is a tool for developing strategies for improving impaired waters. Implementing a TMDL-based solution requires collaboration of federal, state and local governments, plus individual landowners and business owners. The case studies of Spring Creek and White River in South Dakota illustrate these principles of water quality management.

REFERENCES
Bicknell, B.R., Imhoff, J.C., Kittle, J.L. Jr., Jobes, T.H., and Donigian, A.S., Jr., 2000. Hydrological Simulation Program-Fortran User's Manual for Release 12. US Environmental Protection Agency, Washington, DC.
Foreman, C.S., Hoyer, D., and Kenner, S.J., 2005. Physical habitat assessment and historical water quality analysis on the White River, South Dakota. ASCE World Water & Environmental Congress, Anchorage, May 2005, 12 pp.
Schwickerath, P., Fontaine, T.A., and Kenner, S.J., 2005. Analysis of fecal coliform bacteria in Spring Creek above Sheridan Lake in the Black Hills of South Dakota. ASCE World Water & Environmental Congress, Anchorage, May 2005, 12 pp.
South Dakota Department of Environment and Natural Resources, 2004. The 2004 South Dakota Integrated Report for Surface Water Quality Assessment. Pierre, SD, USA.
US Environmental Protection Agency, 2001. Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) User's Manual. Washington, DC, USA: US Environmental Protection Agency, Office of Science and Technology.
IMPROVEMENTS INCORPORATED IN THE NEW HDM-4 VERSION 2

J. B. Odoki, Department of Civil Engineering, University of Birmingham, UK
E. E. Stannard, HDMGlobal, University of Birmingham, UK
H. R. Kerali, World Bank, Washington DC, USA
ABSTRACT
The Highway Design and Maintenance Standards Model (HDM-III), developed by the World Bank, was used for over two decades between 1980 and 2000 to combine technical and economic appraisals of road projects, to prepare road investment programmes and to analyse road network strategies. The International Study of Highway Development and Management (ISOHDM) extended the scope of the World Bank HDM-III model, to provide a harmonised systems approach to road management, with adaptable and user-friendly software tools. The Highway Development and Management Tool (HDM-4 Version 1), which was released in 2000, considerably broadened the scope of traditional project appraisal tools such as HDM-III, to provide a powerful system for the analysis of road management and investment alternatives. Since the release of HDM-4 Version 1, the software has been used in many countries for a diverse range of projects. The experience gained from the project applications, together with the feedback received from the broad user base, identified the need for improvements to the technical models and to the applications implemented within HDM-4. The improvements included in Version 2 of HDM-4 are described in detail in the paper and are categorized as follows: new applications, improved technical models, improved usability and configuration, improved data handling and organization, and improved connectivity.

Keywords: HDM-4; roads; highways; investment appraisal; software tools; sensitivity analysis; budget scenarios; asset valuation; multi-criteria analysis; technical models; database.
1.0 INTRODUCTION
When planning investments in the roads sector, it is necessary to evaluate all costs and benefits associated with the proposed project over the expected life of the road. The purpose of road investment appraisal is to select projects that will maximise benefits to society and stakeholders. The purpose of an economic appraisal of road projects therefore is to determine how much to invest and what economic returns to expect. The size of the investment is determined by the costs of construction and annual road maintenance, and these are usually borne by the agency or authority in charge of the road network. The economic returns are mainly in the form of savings in road user costs resulting from the provision of a better road facility. Road user costs are borne by the community at large in the form of vehicle operating costs (VOC), travel time costs, accident costs and other indirect costs. Road agency costs and road user costs constitute what is commonly referred to as the total (road) transport cost or the whole life cycle cost (Kerali, 2003).

The primary function of a road investment appraisal model is to calculate the individual components of total road transport cost for a specified analysis period. This is accomplished by modelling the interrelationships between the environment, construction standards, maintenance standards, geometric standards and traffic characteristics. The interaction among these factors has a direct effect on the annual trend in road condition, on vehicle speeds, and on the costs of vehicle operation and accident rates on the road. A road investment appraisal model may therefore be used to assist with the selection of appropriate road design and maintenance standards, which minimise the total transport cost or environmental effects.

The Highway Development and Management Tool (HDM-4) is the result of the International Study of Highway Development and Management (ISOHDM) that was carried out to extend the scope of the World Bank HDM-III model. The scope of the new HDM-4 tools has been broadened considerably beyond traditional project appraisals, to provide a powerful system for the analysis of road management and investment alternatives and to provide a harmonised systems approach to road management, within adaptable and user-friendly software tools. The HDM-4 system can be used for assessing technical, economic, social and environmental impacts of road investment for both motorised (MT) and non-motorised (NMT) modes of transport (Kerali, 2000).

HDM-4 Version 1 software, which was released in 2000, has been used in many countries for a diverse range of projects. The experience gained from the project applications, together with the feedback received from the broad user base, identified the need for improvements to the technical models and to the applications implemented within HDM-4. This paper describes in detail the improvements incorporated in Version 2 of HDM-4, which are categorized as follows: new applications, improved technical models, improved usability and configuration, improved data handling and organization, and improved connectivity.

2.0 NEW APPLICATIONS
Improvements in applications that have been incorporated in HDM-4 Version 2 are: sensitivity analysis, budget scenario analysis, road asset valuation, multi-criteria analysis (MCA), and estimation of social benefits.
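The appraisal arithmetic outlined in the introduction, summing agency and road user cost streams and discounting them over the analysis period, can be sketched as follows. The structure and names are illustrative only, and are not HDM-4's data model or API:

```python
from dataclasses import dataclass

@dataclass
class YearCosts:
    agency: float     # construction and maintenance costs
    voc: float        # vehicle operating costs
    time: float       # travel time costs
    accidents: float  # accident costs

def total_transport_cost(y: YearCosts) -> float:
    # agency costs + road user costs = total (road) transport cost
    return y.agency + y.voc + y.time + y.accidents

def discounted_cost(stream, rate=0.12):
    """Whole-life-cycle cost of one alternative over the analysis period."""
    return sum(total_transport_cost(y) / (1.0 + rate) ** t
               for t, y in enumerate(stream))

def npv(base_case, alternative, rate=0.12):
    """Net benefit of an alternative = transport-cost saving vs the base case."""
    return discounted_cost(base_case, rate) - discounted_cost(alternative, rate)
```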
2.1 Sensitivity Analysis
Sensitivity analysis is used to study the effects of changes in one parameter on the overall viability of a road project as measured by various technical and economic indicators. This analysis should indicate which of the parameters examined are likely to have the most significant effect on the feasibility of the project because of the inherent uncertainty (Odoki, 2002).

Scenario analysis is used to determine the broad range of parameters which would affect the viability of the road project. For example, a review of government long-term development plans could yield alternative economic growth rates. Investment projects should be chosen on their ability to deliver a satisfactory level of service across a range of scenarios. In this way, the economic return of a project need not be the sole criterion, since social and political realities can also be taken into account. The key parameters considered for sensitivity analysis in HDM-4 are described below. The choice of which variables to test will depend upon the kind of study being conducted, and it is a matter of judgement on the part of the user.

2.2 Traffic Levels
The economic viability of most road investment projects will depend significantly on the traffic data used. However, it is difficult to obtain reliable estimates of traffic and to forecast future growth rates (TRRL, 1988). Thus sensitivity analysis should be carried out, both of baseline flows and of forecast growth. In HDM-4, traffic is considered in three categories: normal, diverted and generated. Baseline flows are specified separately for motorised transport (MT) and for non-motorised transport (NMT) in terms of the annual average daily traffic (AADT) by vehicle type. Future traffic is expressed in terms of an annual percentage growth rate or an annual increase in AADT for each vehicle type.

2.3 Vehicle Use
In HDM-4, there are several parameters related to vehicle loading and annual utilisation which are difficult to estimate and should therefore be considered as candidate variables for sensitivity analysis. The vehicle use parameters include the average vehicle operating weight, the equivalent standard axle load factor, the baseline annual number of vehicle-kilometres, and the baseline annual number of working hours. The inclusion of these parameters for sensitivity and scenario analysis has enhanced the capability of HDM-4 for carrying out special research studies, for example the determination of road use cost.

3.0 NET BENEFITS STREAMS
The total net benefits stream is considered under three components, namely: net benefits from savings in road agency costs, net benefits from savings in road user costs, and net benefits related to savings in exogenous costs.
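A one-at-a-time scheme is the simplest way to implement the sensitivity analysis described above. A minimal sketch, with illustrative parameter names rather than HDM-4 identifiers:

```python
def one_at_a_time(base_params, evaluate, deltas=(-0.2, -0.1, 0.1, 0.2)):
    """Vary one input at a time and record the relative change in the result.

    base_params: dict of input name -> value (e.g. AADT, growth rate,
                 axle load factor); names are illustrative only.
    evaluate:    function(params) -> economic indicator such as NPV.
    """
    base = evaluate(base_params)
    sensitivity = {}
    for name, value in base_params.items():
        sensitivity[name] = {}
        for d in deltas:
            trial = dict(base_params, **{name: value * (1.0 + d)})
            sensitivity[name][d] = (evaluate(trial) - base) / abs(base)
    return sensitivity

# e.g. one_at_a_time({"aadt": 4500, "growth": 0.05}, my_npv_model)
# shows which inputs the appraisal result is most sensitive to.
```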
3.1 Budget Scenario Analysis
The amount of financial resources available to a road agency determines what road investment works can be afforded. The level of budget is not always constant over time due to a variety of factors, including competing demands from other sectors, changes in a country's macro-economic performance, etc. This variation of budget levels over time affects the functional standards as well as the size of road network that can be sustained. It is therefore important to study the effects of different budget levels or budget scenarios on road network performance. This feature has been implemented in HDM-4, and it permits comparisons to be made between the effects of different budget scenarios and the production of the desired reports.

The most important aspect of budget scenario analysis is the presentation of results. This should be given at two levels, as follows:
• At detail level: to include parameters for each section alternative analysed and the performance indicators.
• In aggregate terms: to present performance indicators for the whole road system over the analysis period for each budget scenario, and the results of comparison between the effects of different budget scenarios.

Figure 1 illustrates the effect of different budget scenarios on the road network condition.
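One simple way to generate such budget scenario comparisons is to screen candidate works greedily by economic return per unit cost and re-run the screen at each budget level. This is a common textbook heuristic, not HDM-4's optimisation algorithm:

```python
def select_works(candidates, budget):
    """Greedy screen of candidate works under a budget constraint, ranked
    by NPV per unit cost (costs assumed positive).

    candidates: iterable of (section_id, cost, npv)
    """
    chosen, spent = [], 0.0
    for section, cost, npv in sorted(candidates, key=lambda c: c[2] / c[1],
                                     reverse=True):
        if spent + cost <= budget:
            chosen.append(section)
            spent += cost
    return chosen, spent

# Re-running at different budget levels gives the scenario comparison:
# for b in (5e6, 10e6, 20e6):
#     print(b, select_works(works, b))
```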
3.2 Road Asset Valuation
The purpose of preparing annual asset valuations for a road network is to provide a means of checking on the success or otherwise of the road authority in preserving the assets it holds on behalf of the nation. All public assets should have associated with them a current capital value. For the implementation of road asset valuation in HDM-4, only the following components are relevant (Odoki, 2003):
• Road formation, drainage channels, and sub-grade (i.e. earthworks)
• Road pavement layers
• Footways, footpaths and cycle-ways
• Bridges and structures
• Traffic facilities, signs and road furniture

Depreciation accounting, which is based on the assumption that depreciation of the network equals the sum of the depreciation of all of the asset components making up the network, can be applied to road asset valuation. The basis of valuation used is as follows (International Infrastructure Management Manual, 2002):
(i) The Optimised Replacement Cost (ORC) of each component of the road asset, which is defined in general terms as the cost of a replacement asset that most efficiently provides the same utility as the existing asset. This can be estimated as equivalent to the initial financial cost of construction, adjusted to current year prices.
(ii) The Optimised Depreciated Replacement Cost (ODRC) of each component; ODRC is the replacement cost of an existing asset after deducting an allowance for wear or consumption to reflect the remaining useful life of the asset.
The relevant basis of valuation and method for the road components considered is given in Table 1. The following ODRC methods are used for valuation of the road components: the straight-line method, production-based method, and condition-based method.
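A minimal sketch of the straight-line and condition-based variants follows (the production-based variant substitutes cumulative usage for age); the linear condition scaling and all names are illustrative assumptions, not HDM-4's implementation:

```python
def odrc_straight_line(orc, age_years, useful_life_years, residual=0.0):
    """Straight-line ODRC: value falls linearly from ORC to a residual
    fraction of ORC over the useful life."""
    age = min(age_years, useful_life_years)
    depreciable = orc * (1.0 - residual)
    return orc - depreciable * age / useful_life_years

def odrc_condition_based(orc, condition_index):
    """Condition-based variant: scale ORC by a 0..1 condition index
    (e.g. derived from measured roughness); linear scaling is an
    illustrative assumption."""
    return orc * max(0.0, min(1.0, condition_index))

def network_value(component_odrcs):
    """Depreciation accounting: network value = sum of component ODRCs."""
    return sum(component_odrcs)
```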
Fig. 1: The effect of different budget scenarios on road condition (annual average roughness for the network, grouped by budget scenario and weighted by length, plotted against analysis year).
[Figure 2 chart: asset value of the road network plotted against year.]
Fig. 2: Road asset valuation
Table 1" Valuation methods of road assets considered in HDM-4 Feature/Component Basis of Depreciation method valuation Road formation and sub-grade ORC Road pavement layers ODRC Production or Conditionbased Footways, footpaths and cycle-ways ODRC Straight Line Bridges and structures ODRC Straight Line Traffic facilities, signs and road furniture ODRC Straight Line The backbone of HDM-4 analysis is the ability to predict the life cycle pavement performance and the resulting user costs under specified road works scenarios. The asset valuation methodology used links the capital value of the asset with its condition, which is predicted annually using the road deterioration and works effects models in HDM-4. Figure 2 illustrates an output from the asset valuation procedures.
3.3 Multi-Criteria Analysis
Multiple criteria analysis (MCA) provides a systematic framework for breaking a problem into its constituent parts in order to understand the problem and consequently arrive at a decision. It provides a means to investigate a number of choices or alternatives in light of conflicting priorities. By structuring a problem within the multiple criteria analysis framework, road investment alternatives may be evaluated according to pre-established preferences in order to achieve defined objectives (Cafiso et al., 2002). The analytical framework of HDM-4 has been extended beyond technical and economic factors to consider explicitly the social, political and environmental aspects of road investments. There are instances where it is important to consider the opinion of others interested in the condition of the road network (e.g. road users, industrialists, environmental groups, and community leaders) when evaluating road investment projects, standards and strategies. Examples include: a low-trafficked rural road that serves a politically or socially sensitive area of the country; the frequency of wearing course maintenance for particular road sections for which the economics are secondary to the minimisation of noise and intrusion from traffic (e.g. adjacent to hospitals); cases where national pride is deemed paramount, for example the road between a main airport and the capital city; and roads of strategic or security importance to the country. Table 2 gives a list of criteria supported in HDM-4 (Odoki, 2003). MCA basically requires the clear definition of possible alternatives, together with the identification of the criteria under which the relative performance of the alternatives in achieving pre-established objectives is to be measured. Thereafter it requires the assignment of preferences (i.e. a measure of relative importance, or weighting) to each of the criteria. The selection of a particular set of investment alternatives will greatly depend on the relative importance (or weights) assigned to each criterion.

Table 2: Criteria supported in HDM-4 multi-criteria analysis
Category                   Criteria/Objectives          Attributes
Economic                   Minimise road user costs     Total road user costs are calculated internally within HDM-4 for each alternative.
Economic                   Maximise net present value   Economic net benefit to society is calculated internally within HDM-4 for each alternative.
Safety                     Reduce accidents             Number and severity of road accidents; these are calculated internally within HDM-4.
Functional service level   Provide comfort              Good riding quality for road users, defined on the basis of average IRI (international roughness index); the average IRI is calculated internally within HDM-4.
Functional service level   Reduce road congestion       Delay and congestion effects; the level of congestion is defined in terms of the volume-capacity ratio (VCR), and VCR values are calculated internally within HDM-4.
Environment                Reduce air pollution         Air pollution is measured in terms of quantities of pollutants from vehicle emissions, which are computed within HDM-4.
Energy                     Maximise energy efficiency   Efficiency in both global and national energy use in the road transport sector; energy use is calculated internally within HDM-4.
Social                     Maximise social benefits     Social benefits include improved access to social services (e.g. schools, health centres, markets); a representative value is externally user-defined for each alternative.
Political                  Consider political issues    Fairness in providing road access, promotion of political stability, strategic importance of roads, etc.; a representative value is externally user-defined for each alternative.
The Analytic Hierarchy Process (AHP) method has been selected for implementation in HDM-4 because it systematically transforms the analysis of competing objectives into a series of simple comparisons between the constituent elements. AHP is based on pairwise comparisons of alternatives for each of the criteria to obtain the ratings (Saaty, 1990). The MCA procedure incorporated in HDM-4 Version 2 produces a matrix of multiple-criteria ranking numbers, or ratings, for each alternative of each road section included in the study. The alternative with the highest value is selected for each section. If the ranking number is the same for two or more mutually exclusive alternatives, then the minimum-cost alternative should be selected.
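As an illustration of AHP weight derivation (Saaty, 1990), the sketch below uses the row geometric-mean method, a common approximation to Saaty's principal-eigenvector method; the comparison-matrix values and criteria are hypothetical, not taken from HDM-4.

```python
# Sketch of AHP weight derivation from a pairwise comparison matrix.
# Geometric-mean approximation to the principal-eigenvector method.
import math

# a[i][j] = how much more important criterion i is than criterion j (1-9 scale)
a = [
    [1.0, 3.0, 5.0],    # economic compared with (economic, safety, social)
    [1/3, 1.0, 3.0],    # safety
    [1/5, 1/3, 1.0],    # social
]

n = len(a)
gm = [math.prod(row) ** (1.0 / n) for row in a]   # row geometric means
total = sum(gm)
weights = [g / total for g in gm]                 # normalised priority weights
print([round(w, 3) for w in weights])             # approx. [0.637, 0.258, 0.105]
```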
3.4 Estimation of Social Benefits
It has often been necessary to include the social benefits of road investments within HDM-4. The simple framework for including social benefits has now been made more transparent by incorporating them within the exogenous costs and benefits user interface.

4.0 IMPROVED TECHNICAL MODELS
4.1 Road Deterioration and Works Effects
The road deterioration (RD) and works effects (WE) models in HDM-4 Version 2 have been updated in accordance with the specification provided by PIARC. For bituminous pavements, the changes include improvements to the pothole progression model, an updated rut depth model,
improved user-calibration of the RD models, and updated WE models for patching and preparatory work effects. For unsealed roads, the most significant changes are the introduction of three different grading types (non-mechanical, light mechanical and heavy mechanical grading) and improved calibration of the unsealed roughness model using section calibration factors and workspace configuration parameters.

4.2 Road User Effects
The Road User Effects (RUE) model in HDM-4 Version 2 has been updated in accordance with the specification provided by PIARC. The changes include a new engine speed model, updated parts consumption modelling, a revised constant service life model that no longer depends upon the percentage of private use, and a major update to the modelling of vehicle emissions.
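A toy incremental deterioration model in the spirit of the RD models of Section 4.1 is sketched below; the real HDM-4 relationships are those specified by PIARC, and the coefficient form and all values here are illustrative assumptions only.

```python
# Toy incremental roughness progression: environmental + traffic terms,
# scaled by a user calibration factor. Not the PIARC-specified equations.

def next_iri(iri, esal_per_year, k_env=0.02, k_traffic=0.08, calib=1.0):
    """One year of roughness progression (all coefficients assumed)."""
    return iri + calib * (k_env * iri + k_traffic * esal_per_year)

iri = 3.0                                   # starting roughness, IRI (m/km)
for year in range(1, 11):
    iri = next_iri(iri, esal_per_year=0.5)  # 0.5 million ESAL/year (assumed)
    print(year, round(iri, 2))
```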
5.0 IMPROVED USABILITY AND CONFIGURATION
5.1 Intervention Triggers for Road Works
The definition of the triggering logic of work items and improvements has been simplified and improved by the introduction of an improved intervention editor. The main areas of improvement are as follows:
- The need to select a scheduled or responsive intervention mode for a work item has been removed.
- The predefined limit parameters associated with the triggering logic are now optionally entered in the intervention editor as part of the main trigger expression.
- The triggering of works has been extended to allow the combination of AND/OR logic operators (a sketch of this logic follows Section 5.2 below).
- Works can now be scheduled to occur in set years rather than just periodically.
- The user is no longer constrained to select a trigger attribute from a pre-defined list; any trigger can be used with any work type.

5.2 User Interface for Defining Investment Alternatives
The user interface for the definition of analysis alternatives has been redesigned to reduce the number of dialogs and buttons involved, to improve navigation through the alternatives in a familiar style, and to give the user an improved overview. The new user interface allows the user to navigate through the alternatives and their assignments using a view similar to the Windows Explorer directory navigation tree, and uses a context-sensitive spreadsheet-type view that facilitates easier assignment of maintenance and improvement standards.
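A minimal sketch of the AND/OR trigger logic described in Section 5.1 follows; the attribute names, thresholds and Python encoding are assumptions, and HDM-4's own expression syntax differs.

```python
# Sketch of AND/OR intervention triggers (Section 5.1). Attribute names
# and thresholds are hypothetical illustrations.

section = {"iri": 6.2, "aadt": 3400, "cracking_pct": 12.0, "year": 2008}

def trigger_overlay(s):
    # Responsive trigger: (IRI > 5 AND AADT > 2000) OR cracking > 20%
    return (s["iri"] > 5.0 and s["aadt"] > 2000) or s["cracking_pct"] > 20.0

def trigger_reseal(s):
    # Scheduled trigger: fires in set years only
    return s["year"] in (2008, 2014)

works = [name for name, fn in [("overlay", trigger_overlay),
                               ("reseal", trigger_reseal)] if fn(section)]
print(works)   # ['overlay', 'reseal']
```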
5.3 The Model Dynamic Link Library Architecture
The model architecture has undergone some revision to improve maintainability and flexibility, and to allow future customisation. Some parts of the analysis framework have been revised
to take advantage of these architectural improvements, although these changes will not be visible to general users.
5.4 Post-Improvement Maintenance Standards
It is now possible to assign a maintenance standard to be applied after a road improvement standard has been applied (i.e. the maintenance standard will only be applied if the associated improvement is triggered). This facility is implemented in the new user interface for defining alternatives.

5.5 Improvement Effects
After-work attributes for some improvement effects can now be defined either in terms of the change in attribute value or in terms of the final value of the attribute (i.e. in either relative or absolute terms); a small sketch follows at the end of this section. This is intended to make improvement standards less section-specific, so that they can be applied to a group of sections.

Temporary Exclusion of Road Sections from Study
When setting up a project analysis it is now possible to select a section for the study, assign the traffic growth set and define its alternatives, but then exclude it from analysis without loss of data (i.e. traffic, alternatives, etc.). Users identified this as a useful function when several sections have been selected in a project analysis and there is a need to focus on defining and refining the assignments of one section at a time, without the overhead of analysing all the other sections each time.
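Returning to the improvement effects of Section 5.5, a minimal sketch of the relative/absolute distinction; the ("delta"/"absolute") encoding here is an assumption for illustration.

```python
# Sketch of relative vs absolute after-work attributes (Section 5.5),
# e.g. post-works roughness. Encoding assumed for illustration.

def after_work_value(current, spec):
    mode, value = spec
    if mode == "delta":
        return current + value        # change relative to the current value
    if mode == "absolute":
        return value                  # final value regardless of current state
    raise ValueError(mode)

print(after_work_value(6.5, ("delta", -3.0)))    # 3.5 -- section-relative
print(after_work_value(6.5, ("absolute", 2.5)))  # 2.5 -- reusable across sections
```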
5.6 Calibration Sets
Calibration sets have been introduced to allow users to define sets of section calibration coefficients (i.e. a calibration item) for the range of pavement types commonly found on their road network. Road sections which have the same characteristics can all use the same calibration. The process of defining a section has therefore been simplified, as a user now only has to select an appropriate calibration item for the section's known characteristics rather than supply values for all the calibration parameters.

5.7 Improved Configuration
A new HDM-4 data type has been provided to allow the user to model accident effects separately from speed flow types. An explanatory graph has been added to the user interface to explain the relationship between the capacity characteristic parameters. To reflect the correlation between road type and capacity characteristics, and to improve consistency, the 'number of lanes' parameter has been moved from the road section to the Speed Flow Type item. A graph is now shown on this dialog to reflect the flow distribution data entered by the user. As the user changes this data, the graph changes accordingly. The graph is intended to improve user feedback and to engender understanding of the effects of the flow distribution data.
6.0 IMPROVED DATA HANDLING AND ORGANIZATION
6.1 Updated Database Technology
HDM-4 uses an object-oriented database to store its local data. HDM-4 Version 2 has been updated to use the latest version of this database, to ensure that the latest developments and enhancements are available and that continued support and backup from the provider remain accessible.
6.2 Redesign of New Section Facilities
A new approach allows new sections to be reused across studies and alternatives by defining new sections within the work standards folder in the workspace, and assigning them to alternatives using the new user interface for alternatives.

6.3 Traffic Redesign
The management and entry of traffic-related data in HDM-4 has undergone a number of changes that affect road networks, sections, vehicle fleets and the three analysis modes. The traffic data for a section is now defined for each section within the road network. To enable this, a road network is associated with a vehicle fleet. A user can enter multiple years of traffic data, which is now defined in terms of absolute AADT values. A traffic growth set defines how the traffic grows over time; it is defined within the vehicle fleet and assigned to a section within an analysis. The user interface for traffic growth sets is similar to that used in Version 1 for the definition of normal traffic. As growth sets may be used to define the traffic growth characteristics of multiple studies, the periods are defined as relative years rather than absolute years. These improvements allow the traffic data for a section to be common to each analysis in which the section is included, and allow typical traffic growths to be reused in each analysis. When creating a new analysis, a user now only selects the road network to be used, as the vehicle fleet is associated with it.
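The following sketch suggests how a growth set defined in relative years might be applied to a section's base-year AADT; the data structure and values are assumed for illustration, not HDM-4's internal representation.

```python
# Sketch of a traffic growth set (Section 6.3). Periods are in relative
# years, so the same set can be reused across analyses with different
# start years. Structure and values assumed.

growth_set = [(0, 5.0), (5, 3.0), (10, 2.0)]   # (from relative year, % growth)

def rate_for(rel_year):
    rate = growth_set[0][1]
    for start, pct in growth_set:
        if rel_year >= start:
            rate = pct                 # latest period that has started applies
    return rate

def grow(base_aadt, years):
    aadt, out = base_aadt, [base_aadt]
    for t in range(years):
        aadt *= 1 + rate_for(t) / 100.0
        out.append(aadt)
    return out

print([round(v) for v in grow(2000, 12)])
```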
6.4 Report Management
Flexible reporting is important for viewing and presenting the results of an analysis. HDM-4 Version 2 supports user-defined reports based on Crystal Reports templates, and the management and organisation of these has been improved.

7.0 IMPROVED CONNECTIVITY
7.1 Run-Data in Microsoft Access Format
The run-data produced by HDM-4 during an analysis is now output to a single file in Microsoft Access format. The main benefit of this change is that the use of the Access format
makes it easier for end-users to access the run-data with widely available software products (such as Microsoft Access and Microsoft Excel) and easier to share with other users. For users who wish to view the run-data but do not have an HDM-4 licence, a free tool, the HDM-4 Version 2 Report Viewer, will also be available.

7.2 Import/Export in Microsoft Access Format
The import/export data produced by HDM-4 is now stored in a single file in Microsoft Access format, replacing the multiple *.dbf and *.hdbf files of HDM-4 Version 1. The main benefit of this change is that the use of the Access format makes it easier for end-users to access the data with widely available software products, and easier to share with other users.

7.3 Import Validation
An import wizard has been introduced that guides the user through the process of importing externally-defined data into HDM-4 Version 2. Previously no validation of the imported data was performed, and values outside the allowable range could produce numerical errors when an analysis was subsequently performed. HDM-4 Version 2 introduces optional validation of vehicle fleet and road network data for incorrect values as the data is being imported.
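A minimal sketch of range validation on import, in the spirit of Section 7.3; the attribute names and limits below are invented and are not HDM-4's actual allowable ranges.

```python
# Sketch of import-time range validation (Section 7.3). Ranges assumed.

RANGES = {
    "aadt":          (0, 500_000),     # vehicles/day
    "carriageway_m": (2.0, 30.0),      # carriageway width, metres
    "iri":           (0.5, 25.0),      # roughness, m/km
}

def validate(record):
    """Return a list of error messages for out-of-range or missing fields."""
    errors = []
    for field, (lo, hi) in RANGES.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            errors.append(f"{field}={value!r} outside [{lo}, {hi}]")
    return errors

print(validate({"aadt": 3400, "carriageway_m": 7.0, "iri": 31.0}))
# ["iri=31.0 outside [0.5, 25.0]"]
```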
8.0 SUPPORT TO EXISTING USERS
The ISOHDM has recognised that the existing data used with HDM-4 Version 1.3 is valuable to an organisation; therefore, as part of HDM-4 Version 2, a tool has been developed to aid the migration of this data into a format that can be used within the improved analysis framework. The transition to HDM-4 Version 2 will require some recalibration of the RD and WE models to ensure that the updated technical models are correctly adapted to local conditions, and existing studies should be reviewed to take full advantage of the new features available.

9.0 CONCLUSION
The paper has presented the major improvements that have been incorporated in HDM-4 Version 2. These improvements include new analysis modules that enhance HDM-4 applications: sensitivity analysis, budget scenario analysis, road asset valuation, multi-criteria analysis, and the estimation of social benefits. In addition, there are several software enhancements, including improved connectivity to other databases, simplified import/export of data to/from HDM-4 with data import validation, updated database technology, a redesigned user interface, and enhanced report management. There have also been significant improvements to the technical models, including revisions to the bituminous Road Deterioration and Works Effects models and several enhancements to the Road User Effects models. Version 2 also introduces the concept of Calibration Sets to allow users to
define calibration coefficients for the range of pavement types commonly found on their road networks. HDM-4 is the de facto international standard tool for analysing road sector investments. It is now used by all of the major international financing institutions, such as the World Bank, the UK Department for International Development, the Asian Development Bank and the African Development Bank, to assess their financing in the roads sector.

REFERENCES
Cafiso, S., Di Graziano, A., Kerali, H.R. and Odoki, J.B. (2002), Multi-criteria evaluation for pavement maintenance management using HDM-4. Journal of the Transportation Research Board, No. 1816, Paper No. 02-3898, pp. 73-84, National Academy of Sciences, Washington, D.C.
Kerali, H.R. (2000), Overview of HDM-4. The Highway Development and Management Series, Volume 1, PIARC World Road Association, Paris, France. ISBN 2-84060-059-5.
Kerali, H.R. (2003), Economic appraisal of road projects in countries with developing and transition economies. Transport Reviews, Vol. 23, No. 3, pp. 249-262.
Odoki, J.B. (2003), Specifications for road asset valuation in HDM-4. International Study of Highway Development and Management, University of Birmingham, UK.
Odoki, J.B. (2003), Implementation of multi-criteria analysis in HDM-4. International Study of Highway Development and Management, University of Birmingham, UK.
Odoki, J.B. (2002), Implementation of sensitivity and scenario analysis in HDM-4. International Study of Highway Development and Management, University of Birmingham, UK.
Saaty, T.L. (1990), The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. RWS Publications, Pittsburgh, PA.
The Institute of Asset Management (2002), International Infrastructure Management Manual, Version 2.0, United Kingdom Edition, UK.
TRRL Overseas Unit (1988), A guide to road project appraisal. Road Note 5, Transport and Road Research Laboratory, Crowthorne, Berkshire, UK.
CHAPTER TWO ARCHITECTURE
SPATIAL AND VISUAL CONNOTATION OF FENCES (A CASE OF DAR ES SALAAM, TANZANIA)
S. Kalugila, Department of Architecture, UCLAS, Tanzania
ABSTRACT
Increased immigration of people into cities, coupled with escalating urban poverty and unemployment, has generated social and economic problems associated with a rise in burglary, theft, mugging and rape. The demand for security has led to an increase in fences, especially among inhabitants of high- and middle-income status. This paper is an attempt to contribute towards addressing the spatial and visual implications of the fences erected around properties. Using interviews and observation, the Sinza area was taken as a case to examine streetscapes as well as outdoor spaces in fenced properties. The discussion is based on the types of fences, the architectural relationships within the built environment, the way people perceive fences, and the role that the legal framework plays in regulating fences. The last part gives suggestions on the way forward towards helping to create a harmonious living environment between fences.
Keywords:
Fences; Built environment; Concrete; Cement; Urbanisation; Neighbourhood; Building permit; Finishes; Legal framework; Architecture.
1.0 INTRODUCTION
Communities in most parts of the world are living in a rapidly urbanising world. The pace of urbanisation is increasing in countries all over the world, Africa included. In urban centres in Tanzania, especially in the city of Dar es Salaam, rapid urban growth has outstripped public capacities to manage urban growth and promote public welfare, including the security and safety of urban inhabitants. Dar es Salaam, being the commercial city of Tanzania, has the biggest urban agglomeration and accommodates most social and economic resources. It accommodates 25% of the total urban population, i.e. 2.5 out of 10 million. It is also an industrial centre with most Tanzanian industries
and the highest level of social services, including educational and health facilities (Lupala, 2002). Owing to increasing insecurity, fences have been increasing in most residential areas, especially those inhabited by middle- and high-income settlers. The erection of a fencing wall (fortification) around a house or property has become a common feature in most housing areas and even in town neighbourhoods which predominantly accommodate offices and commercial functions. Living between and within fences has become a way of life; as a result, one often hardly notices them or takes note of their implications, even though they are a dominant feature in residents' daily lives. Fences seem to be important to the way we think about and value land or property and the protection one enjoys or expects one's land or property to provide. Fences can define, protect, confine and liberate properties. Fences can also tell where residents belong and who one is in relation to others. This is because fences vary in size, quality and complexity, most of which depict the extent of protection desired and the financial and social status of an individual. On the other hand, public and private spaces can be disjoined by a fence. A fence also announces who has, or who is denied, access to a certain property. Fences therefore also shape community and individual identity. At the same time, they protect assets from encroachment by unwanted visitors. "Though fence ranks among the minor matters of a building, it is far from being unimportant. Without it no residence can be properly protected or regarded as complete" (Dreicer, 1996:21). Amongst most people, particularly the affluent, living in a house which does not have a fence is considered both risky and a manifestation of incompletion of the building structure. In most urban areas the demand for, and creation of, fences seems to be increasing with time, an activity which at present remains largely unregulated by the local authorities in most urban areas in Tanzania, including Dar es Salaam. The provision of fencing in most cases seems to be largely an afterthought, which often distorts the quality and visual value of the resulting built environment. The variety of designs, colours, forms and heights creates an inharmonious relationship between fenced buildings and their surroundings. The main reasons for fencing include security, boundary definition, privacy, and portraying status.

2.0 METHODOLOGY
A case study approach was used, in which both qualitative and quantitative data collection methods were applied in Sinza. The quantitative methods provided measurable facts and data, while the qualitative methods answered questions that provided data on people's views and perceptions of fences (Kalugila, 2005).
Sinza is a settlement located in Kinondoni Municipality, about eleven kilometres from the Dar es Salaam city centre. It is a settlement where both middle- and high-income earners live. There is a relationship between fences and income: the more affluent a person is, the more security he or she needs, and the stronger the identity he or she will often employ to be distinguished from lower-income groups. Considering this factor, Sinza is considered an information-rich area.

3.0 DISCUSSION
The discussion is based on the types of fences found in the study area, the resulting architectural relationships, fences from the owner's and observer's perspectives, the effect of fences on expected street functions, the resulting street life, and the role of the legal framework in relation to the existing fences.
3.1 Types of Fences
Variation in fence type has an impact on the visual quality and architecture of a street. Fences appear different mainly because of the materials used and the construction techniques applied. The case of Sinza demonstrated that the types of fences found were dominated by cement products in different designs. Concrete blocks could be used either alone or mixed with perforated blocks or iron bars. Fences made of bricks or stones were rarely found. Finishes were of either rough or smooth texture. Those who could afford plastering used different colours, including mixtures of grey, cream, red, black, blue or brown. Those who could not afford plaster left the walls fair-faced. Vegetation was used, but in most cases it was not properly taken care of. In many cases fences were built as an afterthought; as a result, little relationship existed between buildings and their fences. Figure 1 shows some of the existing types of fences. In relation to this, there was a need to look into the kind of fencing architecture house owners come up with; the following section explains this further.
Fig. 1: Types of fences

3.2 The Resulting Architectural Relationship Between Fences and Houses
In an attempt to investigate the architectural coherence and unity between the fence and the building enclosed, the following were noted from observation: 33% of the visited houses had fences exhibiting a different language from the house in terms of colour, caps, perforations (openings) and materials; 50% had a resemblance only in colour; 37% had similarity between the caps used for the wall and those used for parapet elements on the roof; 20% had a similarity in the design of perforation elements; and only 33% resembled the house in the iron bars used. The beauty of most places was distorted because there was no common element unifying the structure enclosed and the fence. While owners might build without a particular architectural impression in mind, at the end of the day the extensively varying streetscape, as well as the lack of visual harmony between the enclosed house and the fence, generates an unattractive urbanscape. At the same time, for those producing some of the fencing elements, too much variation reduces economies of scale. It also indicates that owners prefer material clashing, or that by the time of constructing the fence the elements used for the house were not available.

3.3 Fences from the Owner's versus Observer's Perspectives
Together with performing the expected functions, house owners felt that fences had their disadvantages. Out of thirty owners, 13% said during interview sessions that they were experiencing discomfort due to limited air circulation (considering the warm-humid climate of Dar es Salaam). This was particularly reported by those with solid wall fences, where the fences acted as walls blocking air movement into or out of the enclosed space. Other adverse effects reported to arise from fences were boundary disputes. These arose when the setting of a fence was not according to one's property boundary, i.e. in cases where fences were used to demarcate two properties of different owners.
Observers were the most affected by the visual link and the resulting street created by fences. Findings from the interview sessions were that, out of ten respondents, 50% said that high and solid fences created a sense of fear. Others said they felt claustrophobic when passing through a street with high fences.
The most terrifying hours in walled paths or streets were reported to be late evenings and nights. This was because most gates were then closed and there were no lights. Life on the street is also affected by the kind of enclosures used in moulding it, as discussed further in the following section.

3.4 The Visual Link and Resulting Street Life
Fences contribute to the degree of visual linkage between a fenced area and adjacent areas. They can also affect the richness of income-generating activities on the streets. What made a street with fences on its sides lively or dead, attractive or unattractive, was the absence or presence of activities on the sides, such as shops and petty traders. This is summarised in Figure 2.
Fig. 2: Degrees of transparency in relation to street life

Not only did the activities generate income, but their presence enhanced security and made the street a lively place to walk, stroll or play, even for children. As Sinza is a part of the city characterised by mostly single-storey buildings, the existence of fences formed
strong edges which were visually pronounced, as opposed to a case where fences enclose high buildings. Besides supporting active street life, streets are expected to perform their main function, which is transportation, including service provision. The existence of fences may hinder this, as discussed in the following section.

3.5 Effect of Fences on the Streets' Expected Functions
Service provision is an important factor in a residential area. Solid fences in the Sinza neighbourhood were found to cause difficulty in the delivery of basic services. This was because services like garbage collection, fire brigade services and sewage collection require big trucks that need wide roads. Such trucks often require sufficient space for turning. This was not always available where every house had a fence, some of them protruding into the public road reserve. In the event of fire outbreaks, the impacts of such problems could be catastrophic. Due to fencing walls, truck drivers could not turn; they had to reverse into the major road in order to turn around (see, for example, Figure 3). These fences were erected the way they were because of the lack of a proper legal framework to guide their construction.
Fig. 3: Dumping site, ghost neighbourhood and service difficulty

3.6 The Legal Framework in Relation to Existing Fences
The Tanzania Building Regulations (1997) do not directly address the construction of permanent fences. In Dar es Salaam, when a building is designed, the Municipal Council is normally supposed to approve drawings which include the detailed design for the house and any other structure to be built on the plot. In this study, a question was designed to elicit responses on this matter. From the interviews, seventeen (57%) out of 30 respondents said they had their house plans (without fence plans) approved by the Municipal Council, even though eighteen (60%) of the total had the fence built after the construction of the building was complete. This implies that even though slightly more than half had their house plans approved, the fences themselves were neither checked nor approved. In cases of inherited buildings, it was difficult to know whether any building permit had been obtained.
The discussion with the local authority suggested that the council might not take action if the fence erected does not disturb peace and harmony. In other words, one may erect a fence and justify it as long as it is not provocative to anyone. Discussions with interviewees suggested that some house builders were ignorant of the need to get plans for fences approved if they were not submitted with the building plans. During a discussion with one of the house owners, it was learnt that some house owners did not see the need to apply for a permit for fencing. They built when and how they wanted because nobody came to inspect them. What is clear, however, is that about half of the buildings were built with building permits. Yet there are those who submit building plans together with fence plans. This implies that overall few fence plans were submitted to or approved by the Municipal Council.

4.0 CONCLUSION AND RECOMMENDATIONS
This study has shown that fences are more than vertical elements in a built environment. They have functions and exist in varieties in Sinza depending on one's socio-economic situation, residential density, the purpose of the fence and exposure to alternatives. The functions of fences uncovered in the study area include privacy, security, exhibiting one's socio-economic status, and boundary definition. The limited awareness and knowledge people have about fences, their impacts and the options available are some of the problems which lead to the erection of fences which are not in consonance with public requirements or in harmony with local environmental conditions. This study has empirically demonstrated that fences shape the built environment even though people knew and cared very little about them. Their implications were many, including environmental degradation, effects on service provision, distortion of the aesthetics of an area, and blocking of the visual continuity of a space.
From the foregoing discussions the following recommendations are made:

4.1 Need for a Clear Legal Framework
As noted, the existing legal framework is somewhat paradoxical about the approval of the design and construction of fences in residential areas. Therefore, the current legislation, namely Cap 101 (Township Rules of 1920), the Tanzania Building Regulations of 1997, and the Town and Country Planning Act of 1956 (revised 1961), should be reviewed so as to make it explicit that fences require an approved plan and a permit issued by the Local Authority. Specifications and regulations for fences also have to be worked out under the revised Cap 101.

4.2 Decentralising Development Control Enforcement to Grass-Roots Level
At present, Local Authorities are responsible for development control through the building inspectors in the Ward. The leaders (Ward Executive Officers) and Mtaa (sub-ward) secretaries
or local residents are not involved in enforcing and monitoring land development, including house and fence construction activities, even though they are the victims of poor construction of fences, especially in cases where public interests have been disregarded. It is therefore recommended that, while the regulations and laws are formulated by Local Authorities and Central Government, enforcement should be a collective activity in which residents take a lead. This also underscores the need for public awareness creation, to make community members aware of the pros and cons of the various fence types and the minimum conditions for erecting them, including respect for public interests.

4.3 Awareness Creation
It was also observed that many home builders were unaware of fence construction regulations, particularly the condition that requires them to submit fence plans for approval by the Local Authority before construction starts. It is important that, once the existing regulations are reviewed, a public awareness campaign is carried out. Builders should also be educated about the adverse effects of fences and the options for reducing them. House builders should be encouraged to submit fence designs when applying for building permits, even though the construction might be done much later, so that the effects are considered by the authorities during approval.

REFERENCES
Dreicer, K. (1996), Between Fences. National Building Museum and Princeton Architectural Press, USA.
Kalugila, S. (2005), Fences and Their Implications in the Built Environment: A Case of Dar es Salaam, Tanzania. Unpublished Masters Thesis, Oslo School of Architecture, Oslo.
Lupala, J. (2002), Urban Types in Rapidly Urbanising Cities: Analysis of Formal and Informal Settlements in Dar es Salaam, Tanzania. Published PhD Thesis, Royal Institute of Technology, Stockholm.
United Republic of Tanzania (1920), Township Ordinance (Cap 101). Government Printer, Dar es Salaam.
United Republic of Tanzania (1956), Town and Country Planning, Cap 378, Revised 1961. Government Printer, Dar es Salaam.
United Republic of Tanzania (1997), The Tanzania Building Regulations. Government Printer, Dar es Salaam.
A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 1: DEVELOPMENT
Adam Goliger, CSIR, P O Box 395, Pretoria 0001, South Africa
Jeffrey Mahachi, NHBRC, P O Box 461, Randburg 2125, South Africa
ABSTRACT
One of the biggest challenges and economic achievements of South African society is the development of adequate housing for a large portion of its population. Despite the large pool of available information on house construction (i.e. the correct application of materials and technologies as well as minimum standard requirements), unacceptable construction quality is apparent throughout the entire spectrum of housing. This issue requires urgent attention and intervention at a national level. The paper presents the development process of a tool for the post-construction quality assessment of houses, referred to as the Building Quality Index for Houses (BQIH).
Keywords: BQIH; housing; quality assessment; quality systems.
1.0 QUALITY OF HOUSING IN SOUTH AFRICA
In South Africa a large pool of technical and legislative information on good house-construction practices is available. The various phases of the development process (i.e. land acquisition, planning, design, etc.) are supported by relevant legislative and technical norms. Nevertheless, inadequate quality is apparent throughout the entire spectrum of housing (i.e. low- to high-income dwellings). Figure 1a is a close-up view of the base of a load-bearing column supporting a second-floor bay-windowed room of an upmarket mansion in Pretoria. At the time the photograph was taken, the columns were already cast with the second floor in place, but almost all bricks underneath the base were loose. Figure 1b demonstrates the unacceptable practice of using loose bricks as infill of a foundation for a low-income housing unit. Despite the huge housing stock in South Africa (estimated at nearly 10 million units, including informal housing), there are no formal mechanisms, methodologies or socially accepted platforms either for proactive and consistent monitoring of its quality or for the development of relevant statistics. A need is therefore apparent for the development and implementation of a comprehensive and straightforward quality-appraisal system to measure housing quality standards.
Since 1994 the issues of the quality of house construction and risk management have been the concern of the National Home Builders Registration Council - NHBRC (Government Gazette, 1998; Mahachi et al, 2004). In 2003 the NHBRC commissioned the development of a system for assessing the quality of houses, and this was undertaken at the Division of Building and Construction Technology, CSIR. The philosophy and principles of the proposed Building Quality Index for Houses (BQIH) were based on an internationally accepted quality control scheme, Conquas 21 (1998), which was developed and implemented by the Construction Industry Development Board of Singapore. However, owing to the pronounced contextual and technological differences between the residential sectors of both countries, the two systems differ significantly.
Fig. 1a: Support of a column
Fig. 1b: In-fill of a foundation
2.0 DEVELOPMENT PROCESS OF THE BUILDING QUALITY INDEX FOR HOUSES (BQIH) The development process of the BQIH system is summarised in the flow chart presented in Figure 2. Various steps of the above process will be presented in the following sections.
Initially, following several interactions with the Singapore Construction Industry Development Board (CIDB), Conquas 21 was analysed (blocks 1 and 2 in Figure 2) in the context of the South African situation (block 3). On the basis of that, the principles of the proposed system applicable to local conditions were identified (block 4). Based on the review of South African practice and standards (block 5), the scoring system (block 6) was developed. A series of initial appraisals was carried out (block 7), and their analysis (block 8) served as the basis of an iterative process of calibrating and improving the scoring system (block 6) and developing scoring sheets (block 10). The information obtained from the analysis (block 8) also formed inputs to developing the user manual (block 9). A pocket-size computer programme for calculating the scores (block 11) was developed. A pilot study (block 12) was undertaken in order to evaluate the applicability and relevance of the proposed system. The IT application system (block 13) was used to develop relevant statistics on the quality of houses (block 14).
The pilot study and its results are presented in a subsequent paper.
Fig. 2: Schematic flowchart of the development process

2.1 Conquas 21
Over the last 50 years or so, the focus and emphasis of the home-building industry worldwide has gradually shifted from quantity to quality in human shelter. Most countries have developed and introduced sets of policies, regulations and documentation relevant to their particular situation and aimed at safeguarding the interests of the consumer. Nevertheless,
relatively few quality-assessment systems are in place to monitor and capture aspects of construction quality in a structured and consistent way. Perhaps the most internationally accepted and established is the Construction Quality Assessment System (Conquas), which was launched in 1989 in Singapore, where until recently nearly two thousand construction projects have been assessed. Within eight years of its implementation the average Conquas score improved steadily from about 68 to 75, which reflects a significant improvement in the quality of construction in Singapore (Ho, 2002). In view of its attractiveness, an analysis of the applicability of Conquas 21 to South African conditions, and in particular this country's house-construction industry, has been carried out. Several contextual differences have been identified, as summarised below.
- Geographical/climatic: Singapore is a fairly small and flat tropical island experiencing uniform climatic conditions, dominated by moist coastal air and cyclonic wind/rain events. South Africa's land surface is significantly larger, with a wide spectrum of altitudes, geological formations and climatic zones.
- Socio-economic: The population of Singapore is largely of an Eastern cultural background renowned for perfectionism, perseverance and attention to detail. The country experiences a high rate of employment, as well as high living and educational standards, and has access to a large pool of skilled/educated labour. Unfortunately, these socio-economic conditions do not prevail in South Africa.
- Spatial: Like elsewhere in Asia, and as a result of the lack of urban space and the lifestyle expectations of the community, most of the development in Singapore is high-rise. In South Africa, apart from the centres of large cities, most housing development is single-storey.
- Developmental: The entire Singaporean development and construction industry is centralised and strictly controlled. This is not the case in South Africa.
- Technical: Technical differences refer to general standards and tolerances, adherence to those requirements, and the general level of technical skills and professional inputs.
2.2 Principles of BQIH
Several aspects of the proposed system applicable to the South African situation and its needs were considered and investigated. These led us to the belief that:
- The system should follow the broad philosophy of Conquas in respect of its aims, its structure (i.e. division into building components) and the principle of relative weights.
- Both structural and architectural aspects of house construction should be considered. However, in line with the NHBRC mandate, the system should focus on assessing aspects of the quality of basic construction that affect the structural performance and safety of housing units.
- Important aims applicable to the South African situation have been identified as:
  - the provision of an objective method for evaluating the performance of building
contractors,
  - the identification of good and bad construction practices, and
  - the identification of the training needs of contractors.
- The system should be inclusive of the entire spectrum of the housing industry, from the low- to the high-income sector.
- The system should be self-contained, straightforward, concise and practicable.
Our research has shown that a large pool of information on required minimum construction standards is available in the relevant codes of practice, building regulations, construction guides and requirements of national/local authorities in South Africa. The problem is that this information is often not implemented, not easily accessible or understandable to less experienced people, and in some cases even confusing.
- The appraisal should be based on visual assessment of the relevant items, assuming access to and verification of relevant technical documentation pertinent to the site. No destructive investigations or testing will be permitted.
- Following the initial research, one of the critical matters identified was the subjectivity of assessment, with the obvious counter-measure being the appropriate training of the inspectors. Another tactic in this regard, which was adopted, was to introduce a relatively high number of items to be scored.

2.3 Benefits
There are several important benefits from implementing the proposed system. These benefits relate to various parts of society and the relevant role-players, as summarised below:
- The contractors will also benefit from the system, which will serve as a tool to identify the problem areas in their business. Good performers can also use their BQIH index for marketing purposes.
- Perhaps the most obvious are the benefits to the consumers, i.e. the house owners.
- For local authorities the most important benefit is the ability to make an independent comparison of the relative performance of the various contractors involved in the construction process, and the introduction of a quality-driven management system for awarding contract work.
- From the perspective of the national authorities, implementation of the system will provide a platform for a comprehensive and consistent assessment of the quality of the housing stock in South Africa. For low-income and subsidy housing, the statistical data obtained can form the basis for risk-assessment studies, as well as for budgeting and the allocation of resources (i.e. investment in new developments vs the maintenance and upgrading of existing stock).

3.0 SCORE SHEETS
The BQIH system contains score sheets, which include building components and items, as well as the User Manual.
3.1 Building Components
Five basic building components were adopted, as shown in Table 1.

Table 1: Building components

Reference   Description             Weighting (%)
1           Foundations             30
2           Floors & stairs         15
3           Walls                   25
4           Roofs                   20
5           Electrical & plumbing   10
3.2 Building Items
For each of the components listed in Table 1, a list of relevant items has been developed. The role of this list is to identify all aspects of a specific building component that influence or determine the overall quality performance of the component (e.g. plaster and brickwork determine the quality of the walls). The process of identifying the relevant items was based on the initial comparative research work carried out in 2000-2002, and supported by input from Boutek's experts in the relevant disciplines. The allocation of relative weightings followed an iterative process based on Boutek's experience in building pathology and trial appraisals of houses.

3.3 Assessment Criteria
The investigation into a suitable and reasonable set of assessment criteria was preceded by a comprehensive review of South African sources of technical data regarding minimum quality requirements in construction. This involved a review of relevant codes of practice, technical guides, specifications and national regulations. Most of the specifications appearing in the various sources were found to be fairly consistent, although some differences are present. Direct comparison is often difficult in view of additional cross-referencing and conditions/stipulations on the applicability of various clauses. This is demonstrated in Table 2, in which a sample comparison of selected issues is presented. (Also included are the corresponding stipulations of Conquas 21.) Our interactions with small building contractors revealed that some information on minimum requirements is not readily accessible, while other information is difficult to interpret. Certain information given in technical specifications is impractical and deliberately ignored (or bypassed) by contractors.
Table 2: Comparison of minimum requirements/allowable deviations

[Tabulated comparison of the NHBRC Home Building Manual, SABS 0100, SABS 0155, SABS 0107, SABS 0400 and Conquas 21 values for three items: minimum strength of concrete in foundations (MPa); minimum concrete cover of reinforcement (mm); and deviations from level in finished floors (mm).]
Notes: (1) application of ceramic tiles; (2) depending on external conditions.
Appraisal
In Conquas 21 each of the components contains a detailed list of questions regarding compliance with specific items, and facilitates only two scoring options, namely 0 for non-compliance and 1.0 for compliance. It was felt that in the South African context the direct application of this approach would be too restrictive and could, in fact, disqualify large portions of housing units. Furthermore, our initial trial tests using Conquas 21 indicated that this type of philosophy is suited to the assessment of individual aspects of finishes, and tends to distort the appraisal of structural elements as well as items of a more generic nature. It was therefore decided, for certain building items (where possible and feasible), to introduce an intermediate rating of 0.50 in addition to the 0 and 1 ratings, which enables more graduated scoring of an item. This rating refers to quality that is generally acceptable, with a few permissible non-compliances which have been noted. The number of non-compliances allowed for each type of item is specified in the User Manual, which is discussed in Section 4. Apart from human resources, the implementation of the present system requires a fairly limited set of basic tools/instruments: a measuring tape, a spirit level, a torch, a ladder and a camera. The appraisal of houses is based on visual assessment of their elements, combined with verification of the relevant documentation. Scoring of a component/unit is carried out once only, without any provision for re-working and subsequent re-scoring of a specific unit. (This is in line with the philosophy of Conquas 21, i.e. to encourage a culture of 'doing things correctly right from the beginning'.)
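A minimal sketch of how item ratings might roll up into an overall index under the component weightings of Table 1 follows; the aggregation by mean item rating and the 0-100 scale are assumptions, as the papers do not give the exact computation.

```python
# Sketch of a weighted quality index in the spirit of the BQIH: item ratings
# of 0 / 0.5 / 1.0 rolled up through the Table 1 component weightings.

WEIGHTS = {"foundations": 0.30, "floors_stairs": 0.15, "walls": 0.25,
           "roofs": 0.20, "electrical_plumbing": 0.10}

ratings = {   # per-component item ratings from the score sheets (hypothetical)
    "foundations":         [1.0, 0.5, 1.0],
    "floors_stairs":       [1.0, 1.0],
    "walls":               [0.5, 0.5, 1.0, 0.0],
    "roofs":               [1.0, 0.5],
    "electrical_plumbing": [1.0],
}

def bqih(ratings):
    score = 0.0
    for component, items in ratings.items():
        component_score = sum(items) / len(items)   # mean item rating in [0, 1]
        score += WEIGHTS[component] * component_score
    return 100.0 * score                            # assumed 0-100 index scale

print(round(bqih(ratings), 1))   # 77.5 for the hypothetical ratings above
```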
4.0 USER MANUAL
A self-contained User Manual has been developed to support the use of score sheets. This was done in such a manner that the headings and paragraph numbers in the manual correspond to those of the respective items in the score sheets. The manual includes a straightforward and practical guide to the compliance of specific items on the score sheets.
5.0 IT APPLICATION
A Microsoft-compatible computer system has been developed to accommodate electronic handling and calculation of the scores, as well as pre-processing of the data for further analysis. The system has been loaded onto a pocket-size computer to enable on-site data capture and central data storage of all captured information. Upon the completion of a project, data from several pocket-size computers can be downloaded and synchronised with the main database. These data can subsequently be analysed.

6.0 CONCLUSIONS
The paper has presented a summary of the principles and process of development of a post-construction appraisal system for houses in South Africa, referred to as the Building Quality Index for Houses. The BQIH system offers a straightforward and concise tool for the quality assessment of houses across the entire spectrum of the housing market in South Africa. A pilot assessment study on the implementation of the BQIH system is presented in a subsequent paper.

7.0 ACKNOWLEDGEMENTS
The development of the system has been made possible by the contributions of a large number of people. We would like to single out (in alphabetical order) the commitment and contribution of: Messrs M Bolton, W Boshoff, X Nxumalo, M Smit, F Wagenaar, T van Wyk and Drs M Kelly, J Kruger and B Lunt.

REFERENCES
Government Gazette (1998), Housing Consumers Protection Measures Act, 1998 (Act No. 95 of 1998). Government Gazette No. 19418, Cape Town, RSA.
Mahachi, J., Goliger, A.M. and Wagenaar, F. (2004), Risk management of structural performance of housing in South Africa. In: A. Zingoni (ed.), Proceedings of the 2nd International Conference on Structural Engineering, Mechanics and Computation, July 2004, Cape Town.
CONQUAS 21 (1998), The CIDB Construction Quality Assessment System, 5th Edition. Singapore.
Ho, K. (2002), Presentation by the Senior Development Officer, Quality Assessment Department, Building and Construction Authority, Singapore.
NHBRC (1999), Home Building Manual. National Home Builders Registration Council, South Africa.
SABS 0400-1990 (1990), The Application of National Building Regulations. Council of the South African Bureau of Standards, Pretoria.
SABS 0100-1992 (1992), Code of Practice for the Structural Use of Concrete, Part 1: Design. Council of the South African Bureau of Standards, Pretoria.
SABS 0155-1980 (1994), Code of Practice for Accuracy in Buildings. Council of the South African Bureau of Standards, Pretoria.
SABS 0107 (1996), The Design and Installation of Ceramic Tiling. Council of the South African Bureau of Standards, Pretoria.
A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 2: PILOT STUDY
Jeffrey Mahachi, NHBRC, P O Box 461, Randburg 2125
Adam Goliger, CSIR, P O Box 395, Pretoria 0001
ABSTRACT
This paper is the second in a series of two. The first paper summarises the development process of the Building Quality Index for Houses (BQIH), and the current one describes the process and selected results of a pilot study in which the BQIH system was used.
Keywords: BQIH; housing; quality assessment; site
1.0 INTRODUCTION
The current paper is the second in a series of two describing the proposed quality assessment system referred to as the Building Quality Index for Houses. The first paper described the development process of the proposed system, and the current paper presents its implementation on the basis of a pilot study. The aim of the pilot study was to test the operation of the BQIH system and assess its applicability and usefulness for a 'post-construction' appraisal of housing stock in South Africa. An assessment of nearly 200 houses was carried out in the course of the project.
2.0 HOUSES AND SITES
About 180 of the houses were 'subsidy' houses, and 20 were 'non-subsidy'. (Subsidy housing refers to developments cross-subsidised by the relevant state authority.) All housing developments were located in the central and most industrialised province of South Africa, Gauteng. (The subsidy houses were located at Lotus Gardens in Pretoria West, Olievenhoutbosch in Midrand, and Johandeo, near Vanderbijlpark. The non-subsidy houses were selected at Cornwall Hill Estate, located to the south of Pretoria, and Portofino Security Village in Pretoria East.) Figure 1a presents a general view of the subsidy-housing scheme at Lotus Gardens, and Figure 1b a typical unit with an area of 32 m2. The Cornwall Hill and, to a lesser extent, Portofino developments represent the other 'end' of the housing spectrum in South Africa. One of the units, with a value of several million rand (i.e. more than US$0.5 million), is presented in Figure 1c. A comparison of Figures 1b and 1c clearly demonstrates the flexibility and inclusiveness of the proposed quality assessment system in respect of its ability to provide unbiased
appraisal of the relative construction quality achieved in seemingly non-comparable types of houses, constituting the extreme ends of the housing market in South Africa.

3.0 ASSESSMENT PROCESS
The assessment project was carried out during May and June 2004, well after the end of the rainy season. Nevertheless, an unexpected intense thunderstorm, which developed over the Midrand-Pretoria area during the inspection process, resulted in significant rainfall over Olievenhoutbosch and enabled us to validate our concerns regarding the problem of water penetration through the roofs and walls of the houses (see Section 4).
Fig. 1a: Lotus Gardens
Initially, a site-training session took place. This included the people involved in the development of the system and the assessors. The training was followed by a set of calibration tests, for which seven housing units at Lotus Gardens were selected; each of them was inspected independently by two assessors. The derived indexes compared well, with typical differences between 2% and 5%.
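As a rough illustration of the calibration check described above, the difference between two assessors' indexes can be expressed relative to their mean. The following Python sketch uses hypothetical index pairs, not values from the study:

def relative_difference(index_a: float, index_b: float) -> float:
    """Percentage difference between two assessors' indexes, relative to their mean."""
    mean = (index_a + index_b) / 2.0
    return abs(index_a - index_b) / mean * 100.0

# Hypothetical pairs of indexes for the same unit, scored independently.
pairs = [(64.0, 66.0), (58.5, 61.0), (70.2, 71.8)]
for a, b in pairs:
    print(f"assessor A: {a:5.1f}  assessor B: {b:5.1f}  difference: {relative_difference(a, b):4.1f}%")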
Fig. 1b: Unit type A
Fig. 1c: A non-subsidy house
4.0 GENERAL PROBLEMS OF SUBSIDY HOUSES
This section gives a summary of common issues and problems affecting the quality of housing units, which were repeatedly evident during the inspections. These issues are raised not necessarily in order of their importance or prevalence.
4.1 Design Shortcomings
A few design shortcomings were observed. These relate to the roof support structure, inadequate overlap of roof sheeting (Figure 2a) and a lack of attention to the problem of heaving soils affecting the water supply and disposal systems. Minor design inconsistencies were also noted.
4.2 Compliance with the Design and Minimum Specifications
Discrepancies between the design and the construction were observed. These relate to the presence and positioning of movement joints, the distribution and heights of internal walls (Figure 2b) and the installation of the sewerage system.
Fig. 2a: Gap between sheets
Fig. 2b: Height of internal walls
4.3 Completeness
At the time of the inspection process, several housing units or their surroundings were incomplete. This was in respect of external landscaping works, internal plumbing installations (wash-basins, toilets or taps) and glazing. (According to the site management, the latter omissions were precautionary measures against theft.) The incompleteness of some of the units offers an interesting insight into the advantageous nature and flexibility of the BQIH assessment system, which, despite these disparities, offers a fair platform for quality comparison of housing units.
4.4 Foundations
In principle, the assessment process at a post-construction stage does not offer adequate opportunities for foundation assessment, and relies heavily on the availability of relevant geotechnical, engineering and concrete-supplier certifications. However, during the process of assessing completed units there was an opportunity to inspect a few neighbouring sites where the construction of foundation slabs was in progress. In some cases the geometry of the slabs did not comply with the design (Figure 2c), and unacceptable fill material and compaction were observed, together with an insufficient depth of the foundations.

4.5 Water Penetration
Several issues observed during the inspection process indicate a fair potential for water penetration into houses (Figure 2d). These relate to the minimum height above the ground level, the water-tightness of walls and the roof cover.
Fig. 2c: Overhang of external walls
Fig. 2d: Water penetration
4.6 Other Problems
Other typical problems, which were observed, refer to:
• Faulty doors and window frames and/or their installation. These problems relate to the inadequate gauge of the sheeting, combined with careless handling and installation of these elements (Figure 2e).
• Lack of tying-in of external and internal walls, structural cracks (Figure 2f) and unacceptable finish (plaster and paint) of internal walls.
• Poor quality of mortar, which typically crumbles between the fingers. (The origin of the cements used for the mortar is unknown, and their composition is questionable.)

5.0 GENERAL PROBLEMS OF NON-SUBSIDY HOUSES
Most of the non-subsidy houses reflect good (if not excellent) construction finishes. However, thorough investigations and discussions with occupants revealed that similar problems to those observed in the subsidy-housing sector occur. Typical problems related to non-compliance
with the design, insufficient compaction of in-fills, roof-leaks and inadequate waterproofing, structural cracks and improper installation of doors and windows. Most of the houses have architecturally pleasing but complicated roof geometries. Unfortunately such designs lead to insufficient or incorrect water flow over the roof surfaces and water penetration problems.
Fig. 2e: Re-installation of a frame
Fig. 2f: Structural crack

6.0 RESULTS OF SURVEY
In total, 179 subsidy and 19 non-subsidy houses were inspected and indexed. All scores obtained from the assessment of individual houses were transferred to a database.

6.1 Overall Index
An average index of nearly 65 (i.e. 64.98) was obtained, and Figure 3a presents the distribution of indexes obtained from the survey. It can be seen that the data follow a fairly well-defined trend, in which most of the indexes lie between 60 and 70. A rapid decrease in the number of houses corresponds to indexes lower than 55 and higher than 75. An average index of 63.2 was obtained for the subsidy houses and 82.4 for the non-subsidy houses. A difference of nearly 20 points clearly indicates the disparity in the quality of the product delivered to these two ends of the housing market.
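For illustration, the average index and a binned distribution of the kind plotted in Figure 3a can be computed as in the Python sketch below; the index values are hypothetical, not the survey data:

import statistics
from collections import Counter

# Hypothetical overall indexes, one per inspected house.
indexes = [52.0, 58.0, 61.5, 63.2, 64.9, 66.0, 67.4, 68.1, 71.0, 74.5, 82.4]

print(f"average index: {statistics.mean(indexes):.2f}")

# Group the indexes into 5-point bins, as in the histogram of Figure 3a.
bins = Counter(5 * int(i // 5) for i in indexes)
for lower in sorted(bins):
    print(f"{lower:3d}-{lower + 5:3d}: {'#' * bins[lower]}")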
Fig 3a. Distribution of quality indexes (all houses) [histogram; x-axis: quality index, 30-100]

6.2 Comparison of Contractors
Table 1 is a summary report on the average index obtained by the five best quality achievers. It can be seen that the best construction quality was achieved by EG Chapman and SJ Delport, both operating within the non-subsidy sector. It can be noted, however, that the average index scored by Mr Ngobeni (subsidy housing) is not much different from that of Mr Delport (non-subsidy). This is encouraging, as it indicates the ability and scope for improvement of small/emerging builders.
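Such a per-builder comparison reduces to grouping individual house indexes by builder and averaging; the Python sketch below illustrates the computation with hypothetical scores, not the survey data:

from collections import defaultdict

# Hypothetical (builder, house index) records.
scores = [
    ("EG Chapman", 88.0), ("EG Chapman", 86.0),
    ("Isaak Ngobeni", 71.0), ("Isaak Ngobeni", 69.0),
    ("J Mbatha", 69.0),
]

by_builder = defaultdict(list)
for builder, index in scores:
    by_builder[builder].append(index)

# Rank builders by their average index, highest first.
ranking = sorted(by_builder.items(), key=lambda kv: -sum(kv[1]) / len(kv[1]))
for position, (builder, vals) in enumerate(ranking, start=1):
    print(f"{position}. {builder:<15} average index {sum(vals) / len(vals):5.1f}")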
Table 1. Top achievers in construction quality
Builder          No. of units   Site               Average index   Position
EG Chapman       9              Cornwall Hill      87              1
S J Delport      10             Portofino          78              2
Isaak Ngobeni    11             Johandeo           70              3
J Mbatha         5              Olievenhoutbosch   69              4
Miriam Mmetle    -              Olievenhoutbosch   68              5
6.3 Evaluation of Building Components
In Table 2, a comparative analysis of the average index values obtained for the various building components defined in the system is presented. It can be seen that, on average, the lowest index (60% of the maximum score) was measured for roof structures, followed by walls (64% of the maximum score). Foundations and floors reflect overall results in the region of 70% of the maximum score and higher.
Table 2. Summary of building components
Component ref. number   Description of component   Average index obtained   Maximum score   % of maximum score achieved
1                       Foundations                20,6                     30              69
2                       Floors & stairs            11,3                     15              75
3                       Walls                      16,1                     25              64
4                       Roofs                      12,0                     20              60
5                       Electrical & plumbing*     9,0                      10              90
* For this comparison only the non-subsidy houses were considered.

The results for electrical and plumbing works do not reflect the true site situation, since only the non-subsidy houses were considered in this summary. This is due to the fact that electricity installation was not provided in all of the subsidy houses and in many cases the plumbing installation was incomplete.
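The percentage figures in the last column of Table 2 follow directly from dividing each component's average index by its maximum score. A minimal Python sketch using the values from Table 2:

# (average index obtained, maximum score) per component, from Table 2.
components = {
    "Foundations":           (20.6, 30),
    "Floors & stairs":       (11.3, 15),
    "Walls":                 (16.1, 25),
    "Roofs":                 (12.0, 20),
    "Electrical & plumbing": (9.0, 10),
}

for name, (avg_index, max_score) in components.items():
    pct = 100.0 * avg_index / max_score
    print(f"{name:<22} {avg_index:5.1f} / {max_score:2d}  ->  {pct:3.0f}% of maximum")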
In Figure 3b the probability distribution of the overall indexes obtained for walls is plotted. It can be seen that the peak in the distribution corresponds to an index of about 16, and that the distribution tails off gradually towards lower indexes. A similar trend was observed in respect of floors.

Fig 3b. Distribution of indexes obtained for walls [histogram; x-axis: overall index for walls, 0-20]

The above trend offers an important insight into the current quality standards relevant to these building components, and also indicates a possible strategy for improvement: future efforts should be directed at improving the lower standards (i.e. shifting the tail of the distribution to the right). A similar shift in the peak of the distribution towards the right would require much more input and effort (i.e. training, site controls, improvements in materials and design).
6.4 Correlation Between Building Components
Figure 3c presents a comparison of the scores obtained for floors and walls. In order to enable a fair comparison, both sets of data were normalised by the respective maximum overall
weights, so that the percentage values obtained represent the relative and comparable accomplishment of quality for both components. The data are plotted in such a way that, for a specific house, the overall normalised score corresponding to floors is projected along the horizontal axis and the score for walls along the vertical axis. Each house is then represented by a single data point. The diagonal line at 45 degrees (referred to as the regression line of unity) represents the situation in which both relative quality scores are the same. It can be seen in Figure 3c that most of the data points are scattered below the regression line, which indicates that for most of the houses more of the quality problems relate to the walls. This finding suggests that more effort (e.g. training) should be concentrated on the construction of walls rather than floors.
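The normalisation described above amounts to expressing each raw component score as a percentage of its maximum weight (15 for floors and 25 for walls, per Table 2), so that each house maps to one point relative to the line of unity. A Python sketch with hypothetical raw scores:

FLOORS_MAX, WALLS_MAX = 15.0, 25.0  # maximum scores from Table 2

# Hypothetical (floors, walls) raw scores for three houses.
houses = [(11.5, 15.2), (12.0, 14.8), (10.4, 17.9)]

for floors_raw, walls_raw in houses:
    floors_pct = 100.0 * floors_raw / FLOORS_MAX
    walls_pct = 100.0 * walls_raw / WALLS_MAX
    side = "below" if walls_pct < floors_pct else "on/above"
    print(f"floors {floors_pct:5.1f}%  walls {walls_pct:5.1f}%  -> {side} the regression line of unity")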
Fig. 3c: Comparison of scores obtained for floors and walls

7.0 CONCLUSIONS AND RECOMMENDATIONS
The results of the pilot study indicate clearly the applicability and usefulness of the proposed BQIH system for the post-construction assessment of houses in South Africa. The system constitutes a fair tool for comparing various sectors of housing in South Africa, from low-income subsidy houses to high-income non-subsidy housing. The results of the study indicate the system's ability to identify statistically the most critical problem areas, to evaluate the performance of various building contractors, and to identify elements of the construction process where additional training of contractors is required. The pilot study also enabled the identification of relevant issues and considerations for future implementation of the system in larger-scale projects. The most important issues were:
• full access to, and analysis of, all relevant documentation,
• adequate, relevant and comprehensive training of the assessors before commencement of a project,
• the timing of the inspection: in the rainy season, water-penetration and structural-crack problems may become more evident.

8.0 ACKNOWLEDGEMENTS
We would like to acknowledge the efforts of the CSIR's Inspection Team, as well as the cooperation and support obtained from the NHBRC's management, its building inspectors, and the municipal inspectors of the City of Tshwane.

REFERENCES
Government Gazette (1998), Housing Consumer Protection Measure Act 1998, Act No. 95, 1998. Government Gazette No. 19418, Cape Town, RSA.
Mahachi, J., Goliger, A.M. and Wagenaar, F. (2004), Risk management of structural performance of housing in South Africa, in A. Zingoni (ed.), Proceedings of the 2nd International Conference on Structural Engineering, Mechanics and Computation, July 2004, Cape Town.
CONQUAS 21 (1998), The CIDB Construction Quality Assessment System, 5th Edition, Singapore.
Ho, K. (2002), Presentation, Senior Development Officer, Quality Assessment Dept., Building and Construction Authority, Singapore.
NHBRC (1999), Home Building Manual. National Home Builders Registration Council, South Africa.
SABS 0400-1990 (1990), The application of National Building Regulations. Council of the South African Bureau of Standards, Pretoria.
SABS 0100-1992 (1992), Part 1, Code of practice for the structural use of concrete: design. Council of the South African Bureau of Standards, Pretoria.
SABS 0155-1980 (1994), Code of practice for accuracy in buildings. Council of the South African Bureau of Standards, Pretoria.
SABS 0107 (1996), The design and installation of ceramic tiling. Council of the South African Bureau of Standards, Pretoria.
USE OF WIND-TUNNEL TECHNOLOGY IN ENHANCING HUMAN HABITAT IN COASTAL CITIES OF SOUTHERN AFRICA
Adam Goliger, CSIR, P.O. Box 395, Pretoria 0001, South Africa
Jeffrey Mahachi, NHBRC, P.O. Box 461, Randburg 2125, South Africa
ABSTRACT
At the southern tip of the African continent, most of the coastal cities are subject to strong and extreme wind conditions. The negative effects of strong wind events can be considered primarily in terms of their direct impact, i.e. wind damage to the built environment as well as wind discomfort and danger to pedestrians utilising the public realm. The paper presents selected examples and statistics of wind damage to structures and discusses the issue of human comfort in coastal cities. Wind-tunnel technology can be used as a tool for anticipating and preventing potential damage to structures and identifying areas affected by dangerous wind conditions, as well as for investigating soil erosion and fire propagation in complex topography.
Keywords: wind-tunnel; climate of coastal cities; wind damage; wind environment; wind erosion
1.0 INTRODUCTION Across the world and throughout history, coastal regions have attracted human settlement and development. This was due to several advantages of the coastal environment, including, amongst others, access to transportation routes as well as marine resources, and more recently, also its recreational benefits. Along the southern tip of the African continent, several large cities have been established - including Cape Town, East London and Port Elizabeth. These cities are subject to strong and extreme wind conditions, many of them originating in southerly trade winds and large frontal systems, occasionally accompanied by convective activities. Negative wind effects in coastal cities can be considered in terms of wind damage, wind discomfort/danger to people, soil erosion as well as wind induced propagation of fire.
2.0 WIND-TUNNEL TECHNOLOGY
Traditionally, boundary-layer wind-tunnel technology has been used as a tool for the prediction of wind loading and structural response, in support of the development of significant wind-sensitive structures (e.g. tall buildings or long bridges) in developed countries. This is largely not applicable to the African continent, where most of the development is low-rise and dynamically insensitive. Furthermore, the largest portion of the built environment receives very little or no engineering input during its design and construction stages. In an African scenario, wind-tunnel technology can be used as a tool for anticipating and preventing potential damage to medium- and low-rise structures, and also for identifying areas in cities which can be affected by negative or dangerous wind conditions. The latter issue has become relevant in recent years, as the use of the space between buildings has been identified as a focal point in developing highly pedestrianised, large-scale retail and leisure amenities.
3.0 NEGATIVE EFFECTS OF STRONG WINDS
There are various negative effects of strong winds on people living in coastal cities. From an engineering point of view, the primary concern is direct wind damage to the built environment due to the wind forces exerted on structures. In recent years, more attention has also been given to wind discomfort and danger to people utilising the public realm in big cities, as well as the danger posed by flying debris (e.g. broken glass or elements of sheeting). Other effects include those in which wind may be perceived as being of secondary importance, which is often not the case. In fact, wind is the most important factor affecting the drying of soil (which has a large impact on the agricultural sector), soil erosion and transportation (important along the western coast of Southern Africa), as well as the spread of uncontrolled fires (a serious problem in the coastal regions of the Western and Eastern Cape Provinces of South Africa).
3.1 Damage to Structures and Design Aspects
A database of wind damage due to strong winds, containing about 1 000 events, has been developed (Goliger, 2000). The monthly distribution of wind-related damage in South Africa is presented in Figure 1, from which it can be seen that most of the devastating events occur in the summer months (October through to February). These are mainly due to the High Intensity Winds which prevail inland, as well as the south-easterly coastal winds along the southern tip of the African continent.
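A monthly distribution of the kind shown in Figure 1 can be derived from such a database by counting events per month. The Python sketch below uses a few hypothetical records, not entries from the actual database:

from collections import Counter

# Hypothetical (month, description) damage records.
events = [
    (1, "roof sheeting lifted"), (10, "crane damage"), (11, "wall collapse"),
    (12, "tornado damage"), (12, "sheeting lost"), (2, "roof damage"),
]

by_month = Counter(month for month, _ in events)
for month in range(1, 13):
    print(f"month {month:2d}: {'#' * by_month[month]}")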
Fig. 1: Distribution of damage in South Africa
Strong wind events can inflict various degrees of damage on buildings and structures. In progressive order, these can vary from minor damage to roof sheeting up to the total collapse of walls. Figure 2 presents the devastation of the second floor of residential flats in Mannenberg, which occurred in August 1999 due to a large tornado that originated off the coast of Cape Town, and Figure 3 the collapse of a large container-terminal crane in Port Elizabeth caused by south-easterly coastal winds.
Fig. 2: Wind damage to Mannenberg
Fig. 3: Collapse of a container crane
Due to their nature, wind loading codes provide design information on the loads on typical geometrical forms of structures and buildings. Wind-tunnel modelling enables one to determine the critical wind loading of specific structures of unusual geometry and size. Figure 4 presents a wind-tunnel model of a container crane. The tests, which were carried out, provided information on the loading of the crane. Furthermore, they enabled an investigation into ways of improving the geometrical form of the crane in order to reduce the wind loading generated over its various components. Figure 5 is a wind-tunnel model of a medium-rise building of complex form. Wind-tunnel modelling provided information on the pressure distribution over the building façade and roof cladding, which was used in the design.
This information was critical for structural integrity of the building and also the safety of the public in its vicinity.
Fig. 4: Model of a crane
Fig. 5: Model of a building
3.2 Effects on People
In many coastal cities throughout the world, unpleasant and sometimes dangerous wind conditions are experienced by pedestrians. Apart from the harsh windy climatic conditions in these cities, extreme pedestrian winds are often introduced or intensified by unsuitable spatial development of urbanised areas, for example tall and/or large buildings surrounded by outsized open spaces envisaged for public use. A trend is evident in which the re-emergence of the 'public realm' (the space between buildings) becomes a focus for city developments. This is accompanied by a growing public awareness of the right to safe and comfortable communal environments. The above trend has led professionals in the built environment to recognise the need to investigate the pedestrian-level wind environment, amongst other aspects that impact on people living and walking about in their own city's public space. A variety of problems are related to human safety and comfort in the context of the urban environment. Under strong wind conditions people are unable to utilise the public spaces (Figure 6). Extreme pedestrian-level winds may lead to the danger of people being blown over and injured or even killed, and of vehicles being blown over (Figure 7). Physical discomfort or danger has an indirect socio-economic impact, in that people avoid uncomfortable public places. This lack of utilisation in turn affects the viability of commercial developments.
Fig. 6: Difficulty in walking
Fig. 7: Passenger vehicle overturned by wind
The use and application of wind-tunnel technology in investigating the wind environmental aspects of developments will be highlighted on the basis of a wind-tunnel testing programme of Cape Town's Foreshore. This area is renowned for its severe windy conditions due to the notorious Cape Southeaster or 'Cape Doctor', where in some places the internationally accepted threshold wind speed of human safety (23 m/s) is, on average, exceeded for a few hundred hours per year. A comprehensive programme of wind-tunnel testing was undertaken in co-operation with the city authorities, urban planners, and the architectural design team. The transportation/road design team was also involved, due to the presence of freeway bridges in the immediate vicinity of the proposed Convention Centre. The quantitative wind-tunnel measurements included wind erosion and directional techniques. Pedestrian wind conditions were found to be fairly variable and locationally sensitive. This is due to a combination of the effects of topography, the upwind city environment, and the bulk distribution and form of the proposed development. In Figure 8 a sample of the directional distribution of localised winds, at various places around the proposed development envisaged for pedestrian use, is presented. This flow pattern results from the approach of south-easterly winds, which are the most critical.
Fig. 8: Directional flow pattern, Cape Town Foreshore development
Figure 9 presents a sample of the summary results of quantitative measurements at one of the locations where unacceptable wind conditions occur. The graph was developed by integrating full-scale wind statistics for Cape Town with wind-tunnel measurements of wind speeds for the entire range of wind directions compatible with the full-scale data. The graph is presented in terms of the wind-speed probability distribution function and includes acceptability criteria for wind conditions. It can be seen that the safety criterion of wind speeds higher than 23 m/s is exceeded, on average, for about 150 hours per year. As a result of the wind-tunnel study, and subsequent data processing and analysis, several unacceptable and/or dangerous wind environments were identified and various ways of improving wind conditions were proposed, including, amongst others, setting up specifications for future developments in the immediate vicinity, limiting pedestrian access, and the addition of various architectural measures.
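As an illustration of how such a figure is obtained, the sketch below converts a wind-speed probability distribution into annual exceedance hours for the 23 m/s safety criterion. A Weibull model with hypothetical parameters stands in for the integrated full-scale and wind-tunnel data used in the actual study; the parameters are chosen only so that the output is of the same order as the reported ~150 hours per year:

import math

HOURS_PER_YEAR = 8760.0

def weibull_exceedance(v: float, scale_c: float, shape_k: float) -> float:
    """P(V > v) for a Weibull wind-speed distribution."""
    return math.exp(-((v / scale_c) ** shape_k))

c, k = 11.0, 1.9    # hypothetical Weibull parameters for one location
threshold = 23.0    # internationally accepted safety threshold (m/s)

hours = HOURS_PER_YEAR * weibull_exceedance(threshold, c, k)
print(f"wind speed > {threshold} m/s for about {hours:.0f} h/year")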
Fig. 9: Wind speed probability distribution, Cape Town Foreshore development
Figure 10 depicts typical situations in which architectural elements were added to building structures. The photograph on the left shows a visitors' disembarking zone, which includes a continuous horizontal canopy and a set of vertical glass shields. The photograph on the right shows canopy structures introduced to protect the loading zones of the Convention Centre from a 'downwash' current generated by a nearby multi-storey hotel building.
Fig. 10: Architectural elements added to obviate wind nuisance

3.3 Soil Erosion
One of the mechanisms of structural damage to the built environment caused by wind action (which is often forgotten or neglected) is the erosion of foundations. Such erosion usually occurs in non-cohesive and non-saturated soils, and in extreme cases it may lead to the undermining of foundations and the collapse of walls. Little information is available on this topic in the international literature. The issue of soil erosion and the consequent undermining of buildings (Fig. 11) and unwanted deposition of sand (Fig. 12) is applicable to several large coastal township developments in South Africa (e.g. Rosendal, Khayalitsha, Blue Downs).
Fig. 11: Undermining of foundations
Fig. 12: Unwanted deposition of sand
The results of an investigation (Van Wyk and Goliger, 1996) demonstrated the applicability of wind-tunnel technology to the investigation of wind-induced erosion. Initial characteristics (patterns) were identified, as well as the possibility of developing a general set of design principles to optimise the spacing (density) of the units, generic layouts and the orientation of the grid with regard to the direction of the prevailing winds. In Figure 13 a sample of the wind erosion pattern obtained for one of the investigated layouts and a specific wind direction is presented.
Fig. 13: Erosion pattern within a mass-housing development
3.4 Propagation of Fire
In dry areas and/or during the dry season, large parts of the African continent are subject to extreme fire hazards. This refers predominantly to bush fires, but is also relevant to agricultural land, forestry and rural developments, e.g. the recent fires in the Cape Town area. Wind is one of the most important factors influencing the propagation of fire and its risk assessment, i.e. the Fire Danger Rating. The fire is influenced significantly by:
• the gustiness of the wind; sudden changes in speed and direction can severely hamper efforts to bring a fire under control, and can affect the personal safety of firefighters,
• the direction, magnitude and duration of the prevailing winds, and
• the dominant topographical features; for example, where strong winds coincide with rising slopes, the convection column of the fire does not rise up but progresses rapidly due to the acceleration of the flow, as presented schematically in Figure 8.
Figure 14 presents an aerial photograph of Cape Town and its surrounding topography, with an average elevation of 1 000 m above sea level. Each year during the dry season, the slopes of the mountain are subject to severe runaway fire events. One of the most difficult aspects of these events is the instantaneous change in the speed and directional characteristics of the spread of the fire. These parameters are determined by the presence and character of the dominant topography in relation to the direction of the approaching wind flow. A wind-tunnel study was undertaken to investigate the wind distribution around Table Mountain in Cape Town. The results of the tests determined the directional flow and wind speed quantities (mean wind speed, peak wind speed, intensity of turbulence) as a function of the direction of the incoming wind flow.
Fig. 14: Cape Town's dominant topography
REFERENCES
Goliger, A.M. (2000), Database of the South African wind damage/disaster. Unpublished, Division of Building and Construction Technology, CSIR.
Van Wyk, T. & Goliger, A.M. (1996), Foundation erosion of houses due to wind: pilot study. Internal report BOU/I41, Division of Building Technology, CSIR.
WOMEN PARTICIPATION IN THE CONSTRUCTION INDUSTRY
E. Elwidaa and B. Nawangwe, Department of Architecture, Faculty of Technology, Makerere University
ABSTRACT
The paper looks at the role women in Uganda have played in the development of the construction industry. It examines the policy framework that has affected women's participation, as well as the role of educational and civic organizations, including NGOs and CBOs. A critical analysis of factors that could hinder the active participation of women in the construction industry at all levels is made. Recommendations are made for the policy changes necessary to encourage more women to participate in the construction industry at all levels. The paper is based on a study that was undertaken by the authors.
Keywords: Role of women, construction industry, policy framework, working hours, gender mainstreaming.
1.0 BACKGROUND
1.1 Introduction
In spite of the worldwide efforts made to bring about equity, regardless of race, sex or age, women are still marginalized almost everywhere around the world. Although women represent half (50%) of the world's adult population and one third (33.4%) of the official labour force, and perform two thirds (66%) of all working hours, they receive only a tenth (10%) of the world's income and own less than 1% of the world's property (NGP, 1997). However, through the hard work of lobbyists and researchers, the importance of the roles played by women in the economic and social development of their countries and communities is increasingly recognized (Malunga, 1998).
Fig. 1. The charts show the ratios of men to women according to population, labour force, working hours and share of income.
Gender issues are highly related to the socio-cultural aspects that dictate what is womanly or manly in a given society; hence gender issues are usually locally defined or, in other words, contextualized. Despite such variations, researchers and activists realize that there are still many issues in common with respect to gender across the world. Awareness of and concern for gender issues have been raised in many ways that include, but are not confined to, the development of theories, workshops, speeches, research, programs and projects, as well as local and international conferences. It is through the international conferences that gender issues are transferred from the local level to the international level, where common issues are identified, addressed and discussed; ideas and thoughts are exchanged; and goals and the direction forward are agreed upon (Kabonessa, 2004).
1.2 Scope of the Study
The study focused on women's participation in the formal employment of the construction sector. The geographical scope of the study encompassed the academic institutions that provide education and training in disciplines related to the construction sector, with Makerere University selected as a case study, being the principal university that supplies the sector with its professional workforce. Architectural, consultancy and contracting firms within Kampala City, Kampala City Council, the MoHWC and the professional registration organizations were all investigated and addressed. The Masese Women Housing Project (MWHP), a housing scheme that targets women as its main beneficiaries and executors, was studied and utilized as an example of the government's initiatives that target the enhancement of women's participation in the construction sector.
1.3 Study limitations The main limitations to this study could be summarized as follows:
The study has a very strong socio-cultural element that could better be understood by tracing the life experiences of women involved in the sector, which could not be done due to time limitations. The lack of statistical records was another obstacle that hindered deeper investigation of the subject and also caused the exclusion of women's participation in the informal workforce of the construction sector, which would have made the study more comprehensive.

2.0 CONCEPTS ABOUT WOMEN PARTICIPATION IN CONSTRUCTION
2.1 Introduction
In this section gender issues in relation to the construction sector are addressed. A definition of the term gender and the rationale for addressing it are provided, together with definitions of other gender-related terms that are used throughout the research. Gender in relation to construction in general, and at the Ugandan level in particular, is also addressed.

2.2 Definition of the Term Gender
Gender is a term that refers to the socially constructed characteristics of what is manly or womanly, feminine or masculine. Hence, gender implies socially constructed expectations of women's and men's roles, relationships, attitudes and behaviour. Thus, being unmanly or unwomanly is to act, think or behave in a manner that contradicts the expectations of the society about men and women. Despite similarities between gender issues all over the world, being socially constructed and hence contextual, the definition of gender varies from one society to the other. What is feminine in one society might not be the same in another (Were, 2003).

2.3 Rationale for Addressing Gender
One might often wonder why gender is investigated in terms of issues related to one's being a man or a woman in particular, as opposed to society, which is actually composed of men and women, in general. This controversy may be explained by considering some of the benefits that addressing gender could bring. By addressing gender issues we are in a better position to understand the needs, capabilities, rights and other issues of both men and women as separate entities that constitute the society across its various classes, races and age groups. In so doing we are able to act accordingly and consider all members of that society, and hence minimize inequality and achieve social balance. The elimination of inequality and the empowerment of society's members without prejudice will increase self-esteem and minimize the psychological ills related to its absence.

2.4 Operational Definitions
For a better understanding of the issues addressed in this study, definitions of some of the terms used in it are provided in the following paragraphs.
The Construction Sector
The construction sector in this study refers to the activities pertaining to the planning, design and erection of buildings, roads and infrastructure, as well as the supervision and monitoring of the execution process to ensure compliance with the original designs and the approval of adjustments if the matter ever arises. The construction process starts with the architect translating the client's functional requirements or needs, empowered with his knowledge, into the best functional, economical, technical and aesthetic spatial form (architectural design).
Fig. 2. Standard Organisational Chart
During the construction process a supervisor or consultant, ideally the architect, is supposed to supervise the execution process and ensure that the building is built according to the initial architectural and technical design. Hence, in this research the construction industry refers to the sector that involves the design and execution of buildings. Much as the role of all technical engineers is acknowledged, this research considers only the architectural and contracting (civil) disciplines.

2.5 Women in the Construction Industry
The construction industry has always been a male-dominated field. This is evident even in countries where gender issues have long been addressed and women have received much of their rights and are treated as members of the society equal to men. Women, perceived as the weaker sex, have been marginalized by the assumption that construction activities need physical efforts that are beyond their ability. However, it has been reported that even in countries where technological development has reduced dependency on physical power, such as the United States (Perreault, 1992), the United Kingdom (Ashmore, 2003) and Sweden, the construction sector is still dominated by men. Gender imbalances in the construction sector are further emphasized in some areas, not only by sex, but also by race and class.

2.6 The National Context
Women in Uganda constitute more than half (52.8%) of the total formal labour force, and it is believed that the percentage is higher in the informal sector, though no statistics are available. The majority of working women occupy jobs related to the agricultural sector (86%), with 12% in the service sector and only 3% in the industrial sector (ILO, 2003).
Fig. 3: Chart showing occupation of Ugandan women by economic sector

3.0 METHODOLOGICAL APPROACH
3.1 The Analytical Framework
To guide the investigation of the gender sensitivity of the construction sector, the study adopted a framework that identifies the key issues which interact together and are determinant of gender mainstreaming in the construction sector. It is assumed that these elements form a network whose parts cooperate and continuously influence one another for the achievement of gender sensitization and mainstreaming in the construction sector. The first and foremost element identified in the framework is the attainment of the policy or decision-making bodies' commitment to gender mainstreaming of the sector. The framework argues that political commitment usually comes as a result of the persistent lobbying, manipulation and efforts of stakeholders, activists or any concerned bodies devoted to the cause, forming gender pressure groups. If granted, political commitment is assumed to result in the allocation of resources, the formulation of policies that are translated into programs or projects, as well as institution building, all targeting gender sensitization and mainstreaming in the sector. The allocation of resources and the formulation of gender-sensitive policies and institutions would significantly assist in dismantling the barriers against women's participation in the sector, in addition to increasing the level of gender sensitization and awareness in the community. Together, these will enhance women's access to training and education opportunities in construction-related disciplines, which will empower them with construction knowledge and skills. Therefore, stemming from the above framework, the main themes or areas of investigation can be stated as follows:
• The support and commitment of decision-making bodies or policy makers to gender issues and concerns.
• The dismantling of barriers against women's participation in the construction sector, together with their empowerment in construction-related fields and skills.
• Women's absorption into the formal construction workforce, referring to employment opportunities and type of employment.
• The level of gender sensitivity and awareness towards women's participation in the construction sector.
• The identification of key actors and their roles with respect to mainstreaming gender in the construction sector, among the professional and any other organizations that can act as pressure groups for the purpose.
The following diagram (Figure 4) further explains how the analytical framework operates for the attainment of gender mainstreaming in the construction sector. [Figure 4 shows a flow diagram: the influence of pressure groups (civil institutions, political constituencies, gender activists, etc.) leads to political commitment; this leads to the allocation of resources, the formulation of policies and institution building; these drive training and education, the dismantling of barriers, and gender awareness and sensitization; which in turn increase women's empowerment and participation and lead to the emergence of gender-sensitive pressure groups.]
Fig. 4. Analytical framework of the study

4.0 ANALYSIS OF DATA AND DISCUSSION
4.1 Political Commitment towards Engendering the Construction Sector
The Government of Uganda recognizes the various imbalances in Ugandan society and has committed itself to resolving them. This is clearly stated in the Local Government Act Amendment, 2001, which calls for: "establishing affirmative action in favour of groups marginalized on the bases of gender, age, disability or any other reason created by history, tradition or custom, for the purpose of addressing imbalances which exist against them" (LGAA, 2001). With special reference to gender, the Government of Uganda went a step further and made gender policies an integral part of the national development process. It advocates for gender concerns to be routinely addressed in the planning, implementation, monitoring and evaluation of program activities across all sectors (NGP, 1997). For this purpose, a gender desk has been placed in almost all ministries to address gender issues in the respective sector and to target gender mainstreaming in their policies, programs and projects. The Ministry of Housing, Works and Communications has not been an exception. Initially, gender mainstreaming had been among the responsibilities of the policy and planning department, which falls under the directorate of transport and communications (Figure 4). Afterwards, however, a section that deals with gender mainstreaming in the ministry was established on a consultancy basis, to run for three years (2003-2006). This office is to act as a focal point that develops policy statements, guidelines, strategies and checklists, and equips the ministry's staff with the necessary tools, building their capacity to implement gender mainstreaming in the ministry's sections and departments. The section is placed within the quality management department, which falls under the directorate of engineering. The quality management department is responsible for quality assurance of the ministry's activities, including material tests and research, together with the protection and development of the environment. The environment is considered both physically and socially, and this is where gender is seen to relate (see the ministry structure chart in Figure 4). It is important to note that although the gender unit is located within the engineering directorate, which is concerned with buildings and construction, most of the activities of the unit have been affiliated more with mainstreaming gender in the construction of roads than of buildings. The reason is that road projects usually receive more donor money, which facilitates the sector's activities and development. However, the unit's activities are expected to influence gender mainstreaming in all sectors of the ministry, including the buildings and construction sector, furnishing a precious opportunity for the purpose. It is also important to note that relating gender issues to the quality assurance of the ministry's performance and activities indicates the gender sensitivity of its policy makers and decision-making bodies, which poses a valuable opportunity for engendering the sector. Investigation of the unit's activities showed no evidence of collaboration with professional civil organizations, like the Uganda Institution of Professional Engineers (UIPE), or professional statutory organizations, like the Engineering Registration Board (ERB), which would have made the activities of the gender desk more comprehensive.
5.0 CASE STUDY: THE MASESE WOMEN HOUSING PROJECT (MWHP)
The Masese Women's Housing Project is located within Jinja Municipality. It started in 1989, funded by the Danish international development organization (DANIDA), implemented by the African Housing Fund (AHF), and monitored by the Ugandan Government through the Ministry of Lands, Housing and Urban Development together with Jinja Municipality. The project played a facilitating role; for example, it assisted in delivering building materials to the site (Figure 5), while women carried out the actual construction work and handled some managerial issues as well. In the beginning the project aimed to assist 700 families in acquiring their own houses. Women were trained by the AHF training team in construction and managerial techniques for the purpose of building the houses as well as managing and monitoring the project. The construction and managerial skills the women were empowered with were to be utilized in income-generating activities for the betterment of their living standards during and after the end of the project. Women were also involved in the management, execution and monitoring of the project. Small loans, which were to be paid back to the African Development Bank (ADB), were also provided. Though valued, and to be repaid, in monetary terms, the loans were given in the form of construction materials to avoid diversion of use. Hence, the benefits of the project were channelled to poor families through women. The group's skills in managing the project improved remarkably over the years. By the end of the project in 1993, three hundred and seventy houses, together with a day-care centre, had been constructed. In addition, jobs had been created for 200 members, and training, skills and income-generating potential were provided to many members. As a result of women's empowerment, the Masese women's group managed to put up a factory that produces building materials, not only to supply the project, but also the market outside. For example, women were trained to manufacture slabs for pit latrines to be used for the benefit of the project and also for marketing elsewhere.
Fig. 6 Latrine slabs produced by women to supply the project and the market
Due to the success of the project, DANIDA showed interest in funding a second phase that targeted the improvement of the infrastructure and social services in Masese, as well as creating employment opportunities and supporting other construction programs. The Masese Women Construction Factory was to supply building materials for the construction of classrooms in five schools within Jinja Municipality. The second phase commenced in 1994, built 12 classrooms in 3 schools and produced some furniture for those schools. Plans were made for the project to improve the roads, together with the establishment of credit schemes to assist members who were not employed by the project in other income-generating activities for the betterment of their lives, so that they would be able to pay back the housing loans. Despite its success, the AHF encountered some problems and withdrew from the country in 1996 without officially handing over the project to the Ugandan Government. In 1999 the government intervened and took over the role of the AHF to ensure the continuation of the project, targeting housing construction, building-material production, women's mobilization, housing-loan recovery and employment generation, and thus maintaining the project's sustainability.

5.1 General Evaluation of the MWHP
The inhabitants of the Masese area are very poor, with a low level of education and minimal opportunities to uplift their living standards. They depend mainly on brewing, commercial sex work and the provision of services and skilled labour to the nearby industries. People live in very poor housing conditions. The project thus managed to utilize and mobilize the available human resources for the betterment of their housing conditions and living standards. The amounts of the loans given to people were proportional to the ability of the beneficiary to pay back, which was very important for the sustainability of the project. The loans, though valued in monetary terms, were given in the form of building materials to avoid diversion of use, which is, again, a very good point in ensuring support to the construction sector. In general, the project had a positive influence with respect to housing provision and the development of the construction sector at Masese, as well as empowering the people who participated with managerial and construction skills that facilitated the upgrading of their living standards.

5.2 Gender Analysis of the MWHP
The gender component of the project posed the greatest challenge, as it was the first time a housing scheme was specifically designed targeting women not only as beneficiaries but also as implementers. In this respect the project has great accomplishments, some of which are illustrated in the following paragraphs.
In spite of Uganda's patriarchal society, it is realized that women are usually more sensitive to housing needs in terms of size, space utilization and design, as they spend more time and do more chores in the house. In some instances, they are even responsible for its maintenance. The project managed to tap this embedded knowledge, which was a key factor in its success. The project empowered women with skills to be utilized in income-generating activities as an alternative to prevailing practices like commercial sex work and brewing. This helped in steering them towards better moral, economic and social standards. Through this project women's self-confidence and self-esteem were restored, and this contributed to a great extent to changing the prevailing attitudes towards women's ability to take charge of their lives and those of their families.
Fig. 7. Women show the satisfaction of achievement during the focus group discussion
By involving women in the construction activities, the project succeeded in demystifying the myth about women's participation in the construction sector and showed how it could be useful to both society and the sector. Members of the project act as role models for others to emulate, taking up construction work as a means of upgrading housing conditions and as an income-generating activity to uplift their socio-economic standards. The project's success serves as a positive experience that can be replicated in other parts of the country. The project could be utilized for purposes of upgrading housing conditions, economic development and women's empowerment, and to facilitate change in the socio-cultural attitudes towards women's involvement in the construction sector, which is a key issue in its gender mainstreaming. The women who had been trained in construction and management skills provided a pool of trainers who could transfer their knowledge to others. Women's efforts in housing maintenance and upkeep usually go unnoticed and unpaid, but the project managed to recognize and highlight these efforts. It valued labour in monetary terms and hence made it possible for the beneficiaries to pay back the loans they had taken. Women's skills were utilised to benefit the sector in the production of building materials for marketing and in increasing the knowledge base. Another problem encountered by the women of the MWHP was the lack of consideration in the evaluation of their performance and productivity, and hence payment, during special times
of pregnancy and breast-feeding, thus causing them a financial drawback in covering their living expenses and repaying the loans acquired. Lastly, one could conclude that in spite of its limited shortcomings, the Masese Women Housing Project was a real success story with respect to women's empowerment and involvement in the construction sector, and hence its enhancement. The project also illustrates the government's genuine concern for engendering the construction sector.

5.3 Women Empowerment in Construction Related Fields
One of the important issues the study considered when investigating the gender sensitivity of the construction sector was to look into the fields of study and training that supply the sector with its professional workforce. The research identified civil engineering and architecture as the major disciplines for this purpose. Investigations were carried out mainly at the Faculty of Technology (FoT), Makerere University (MU), being the principal educational institute providing these disciplines in Kampala. At the technical level, the study considered technical institutions and organizations that provide training in skills related to construction, which includes the vocational and technical institutes in general. However, Makerere University remains the principal case study for this research with reference to academic issues.

5.4 Women in Construction Related Academic Fields
As mentioned, the Faculty of Technology at Makerere University was utilized as the case study for conducting a gender analysis of the educational fields that supply the construction sector with its workforce. Within the faculty, civil engineering and architecture were the principal departments the study looked into.

5.5 Students Enrollment in Construction Related Disciplines by Gender
Student intake at FoT, MU has almost tripled during the past decade, increasing from 78 students in the 1992/93 academic year to 202 in 2003/04. The civil engineering department has always received the highest percentage of student enrollment among the various departments of the faculty, ranging between 34% and 35% of the total number of students during the last decade, while architecture accounts for only 6-14% over the same period. The small number of architecture students could be attributed to the discipline's late introduction (1989), compared to civil engineering, which has been taught since 1970. However, it was noted that the number of architecture students has increased at a higher rate than that of civil engineering over the same period, with a 400% rate of increase for the former and 250% for the latter.
Nevertheless, the architecture and civil engineering students, put together, usually account for almost half the student intake in the faculty, ranging between 47% and 57% during the last decade.
Fig. 8. Percentage of civil and architecture students combined in relation to the total Faculty of Technology student intake

5.6 Female Staff Members as Role Models in Construction Disciplines
The department of architecture has a higher number of female staff members compared to civil engineering, which could be explained by the higher number of female architecture graduates. However, it should be noted that in spite of the higher number of female staff members in the department of architecture, none has ever held a senior position, such as head of department, dean or associate dean. It is only in the civil engineering department that a female has ever been head of a section (note: not a department). The reasons behind this were not very clear, but it could be due to the late launching of the architecture discipline, which requires more time to allow female graduates to acquire the necessary academic and professional qualifications for senior posts.
It was realized that female staff members in the civil engineering and architecture departments pose as role models for younger generations, both in the educational and professional fields of construction, with a more predominant influence in the former, the educational field, than in the latter, the practical field. This was attributed to the embedded socio-cultural perception of the unsuitability of practical construction work for women. Women's greater prevalence in the academic field compared to the practical one could also be due to their academic excellence in their graduating year, which qualifies them for a teaching job immediately. The opportunity is usually eagerly accepted by female graduates, as it saves them from the tedious and wearisome procedure of job searching in professional practice, not to mention the security the teaching post provides. Further investigations show that there are many other factors responsible for female architects' and civil engineers' preference for the academic field of construction over professional practice. A few of them are discussed hereafter. Girls who manage to break through all the socio-cultural myths and join these fields are usually bright and have strong characters, not to mention determination and academic ambition.
Exposure of female architecture and civil engineering students to female role models in the teaching profession during their years of study is higher than in professional practice. The industrial training program, which students have to fulfil during their undergraduate studies, provides an opportunity for exposure to female role models in professional practice, which widens the students' scope and increases their work options, but it is hardly utilized properly for the purpose. Moreover, the number of women in the profession is usually very small, and they usually occupy junior positions. As a result, many are motivated to join the educational line after graduation.

5.7 Women in Construction Professional Practice
This section addresses women's position in professional practice, both as employees in construction or consultancy firms and as employers of others in such firms. Women as employees: the survey reflected that the construction sector was receptive to both civil engineering and architecture graduates, taking less than six months for a graduate in either discipline to get a job irrespective of sex. However, the majority of respondents (75%) admitted that personal connections through relatives or friends were their means of getting employed, while random search using qualifications was the means of getting a job for a lower percentage (23%). Comparing civil engineering and architecture graduates shows that it is easier for the latter to get jobs based solely on qualifications than for the former, whose appointment depends on networking and personal contacts.
With reference to contracting, the problem for civil engineering graduates becomes even more acute due to their large number compared to architecture, resulting in higher competition, especially if we consider the added number of technical school graduates and the informal contractors.

6 CONCLUSION AND RECOMMENDATIONS
6.1 Conclusions
The conclusions have been arranged and presented in the same thematic order as the analysis, for better understanding and comprehension.

• Gender Sensitivity of the Policy Makers
The Ugandan Government recognizes the gender imbalances in society and has committed itself to their elimination. It has made gender policies an integral part of the national development process. To this effect, a gender desk has been incorporated in almost all ministries to ensure gender mainstreaming in their activities, thus posing a great opportunity for the purpose of mainstreaming gender in the construction sector.
At the Ministry of Works, Housing and Communications (MoWHC), the gender desk's main activities are to develop policy statements, guidelines, strategies and checklists, and to equip the ministry's staff with the necessary tools, targeting building their capacity to implement gender mainstreaming in the ministry's sections and departments. However, its actual influence on gender mainstreaming of the construction sector is not yet clearly evident because of its recentness.
• Women Empowerment in Relation to Construction
The research identified tertiary and vocational training as the major formal approaches that empower women to participate in the formal construction workforce. At the educational level, the Faculty of Technology at Makerere University, being the principal academic institution that supplies the construction sector with its formal workforce, was selected as a case study for investigation. It was found that students of the civil engineering and architecture disciplines, the main departments within the faculty that supply the construction sector with professionals, put together account for almost half the total number of students in the faculty. Within the two departments, females comprise a quarter of the total number of students, with a higher percentage of them in architecture than in civil engineering. It was noted that this percentage is reflected proportionally in the workforce. Investigations showed no evidence of biases or discrimination against female students or staff members in the faculty. Incidents of sexual harassment were also not reported. It was observed that female graduates in both civil engineering and architecture, given the chance, prefer working in the academic line to professional practice despite the greater financial returns of the latter. This was attributed to the following:
• Negative social attitudes towards women's involvement in the construction field.
• The highly competitive environment of the construction profession, made harder for women by the preference for men, especially in site work.
• Exposure of female students to role models in teaching more than in professional practice.
• The intellectual environment of teaching, which is more accommodating and gender sensitive for women than professional practice.
• Greater opportunities for promotion and career development in the academic line, as qualifications and competence are valued irrespective of gender, which is not the case in professional practice.
It was also realized that after a few years of teaching, some of the females change course and join professional practice owing to the confidence they accumulate with time. This, coupled with the lower pay of teaching jobs and their increasing family responsibilities, drives them to join professional practice, which offers greater financial returns. Vocational training in construction related skills is provided mainly at technical institutes, where women are very few, mainly due to the conviction that construction is not suitable for women. In very few cases training opportunities are provided as an element of a gender related program.
• Women in the Work Force of the Construction Professional Practice
The research revealed that personal contacts play the principal role in acquiring a job for architects and civil engineers irrespective of gender, though men are always preferred to women in site work. The reason behind this is mainly socio-cultural, as people lack faith in women's ability to handle site work. Moreover, promotion possibilities for women are more available in office than in site work. It was discovered that most women engineers who joined professional practice preferred to work as employees rather than be self-employed, either individually or in partnership, due to the following reasons:
• Lack of self-confidence caused by the negative socio-cultural attitudes against women's involvement in the male-dominated construction sector.
• Lack of capital required for the establishment of a private business, and constrained access to financial loans or credit.
• Lack of the business networking that is essential to the development and success of a construction business.
• Binding family commitments, which put pressure on women's time and activities.
• Assessing Awareness towards Gender Sensitivity of the Construction Sector
In spite of the identified good intentions and concerns towards gender sensitization of the construction sector, there is no evidence of serious actions to demonstrate them. The identified possible ways through which gender sensitization of the construction sector could be achieved are:
(i) Workshops and conferences: Although many of the workshops and conferences that addressed gender sensitization of the construction sector took place in recent years, their influence on raising public awareness towards the issue is limited, due to the insufficient publicity they received and the confined venues where they took place. It was noted that in most of these conferences gender topics are handled superficially and proceedings are not closely followed up. This was attributed mainly to the imposition of the gender topic by donors on research and projects, without genuine interest and concern.
(ii) The media: Although the advantages of the media in raising public awareness towards gender issues are highly acknowledged, gender mainstreaming of the construction sector has never been addressed in the media.
(iii) Role and activities of the professional and gender concerned organizations: The main professional bodies looked at were the Uganda Institution of Professional Engineers (UIPE), the Engineers Registration Board (ERB), the Architects Registration Board (ARB) and the Uganda Society of Architects (USA). It was observed that, much as addressing gender sensitization of the construction sector should have been among their responsibilities and concerns, none had ever addressed the issue in any of its activities. Furthermore, it was noted that although Uganda boasts many gender concerned organizations, generally none had gender mainstreaming in the construction sector as its main concern. It can therefore be concluded that there is no adequate response or action from the responsible construction professional organizations, or from the gender concerned ones, towards gender sensitization of the construction sector.

6.2 Recommendations
In light of the previous conclusions, the following recommendations follow:
• Ugandan government initiatives should target the eradication of gender imbalances in the construction sector.
• Women should be encouraged to study construction related disciplines.
• Increased training opportunities for women in construction related skills, in isolation from developmental or housing schemes, are recommended.
• Further research in gender mainstreaming in construction professions and training generally is needed.
CHAPTER THREE
CIVIL ENGINEERING STUDIES ON UGANDAN VOLCANIC ASH AND TUFF
S.O. Ekolu, School of Civil and Environmental Engineering, University of the Witwatersrand, South Africa
R.D. Hooton, Department of Civil Engineering, University of Toronto, Canada
M.D.A. Thomas, Department of Civil Engineering, University of New Brunswick, Canada
ABSTRACT
This study was conducted to investigate certain characteristics of tuff and volcanic ash quarried from Mt. Elgon and Mt. Rwenzori in Uganda that may render the materials beneficial for use in industrial applications as pozzolans. Both the tuff and volcanic ash materials were ground, blended with Portland cement at varied replacement levels, and tested for several properties. It was found that incorporation of 20 to 25% volcanic ash gave the highest compressive strength and substantially reduced alkali-silica reactivity. The ash met the ASTM requirements for 'Class N' pozzolans. This study suggests that the volcanic ash, when ground to 506 m2/kg Blaine fineness, develops qualities suitable for potential use as a mineral admixture in cement and concrete. Conversely, the use of tuff was found to significantly increase alkali-silica reaction. This reiterates the possible harmful effects of some pozzolans on concrete if used without precaution, discretion or a thorough understanding of their characteristics.
Keywords: Pozzolans; Tuff; Volcanic ash; Compressive strength; Alkali-silica reaction; Fineness; Mineralogy.
1.0 INTRODUCTION
The use of natural pozzolans results in a reduction of CO2 emissions associated with Portland cement production. A 50% Portland cement replacement by a natural pozzolan would mean a reduction of such greenhouse gas emissions in cement production by one half, which could have enormous positive consequences for the environment. Secondly, depending on the grindability (if necessary) and closeness to the construction site, natural
pozzolans can significantly reduce the cost of concrete production, dam construction or production of mass housing units. As found with ancient concrete (Day, 1990; Mehta, 1981), natural pozzolans used in normal proportions typically improve concrete performance and durability. Whereas the benefits of most pozzolans used far outweigh their disadvantages, it is imperative that a thorough study of any particular geological source of natural pozzolan is conducted to understand its performance characteristics. This also helps to define discretionary use of materials where applicable. In this investigation, tuff and volcanic ash quarried from the mountainous regions of Elgon and Rwenzori in Uganda were studied to determine their properties for potential use as pozzolans, appropriate blending proportions for incorporation in cement and concrete, and to evaluate their pozzolanic activities. Earlier extensive studies by Mills and Hooton (1992) and by Tabaaro (2000) found the volcanic ash properties to be satisfactory for use in making lime-pozzolan cements. The pozzolan materials were blended with ordinary Portland cement in proportions ranging from 15 to 30%, and performance related parameters were measured and compared in accordance with ASTM C-311 procedures. The techniques employed include differential thermal analysis (DTA), petrography and scanning electron microscopy (SEM).
2.0 EXPERIMENTAL

2.1 Materials
A low-alkali ASTM Type I Portland cement and two different forms of natural pozzolans of volcanic origin, tuff and volcanic ash, were used in this investigation. Table 1 shows the chemical analyses of the cementitious materials. Both natural pozzolans had low CaO, typical of Class F fly ash (Malvar et al., 2002). Volcanic ash was a typically dark broken rock material of highly irregular shape with networks of large bubble cavities. The tuff consisted of grayish consolidated chunks, most of them over 100 mm in diameter. The pozzolans were air-dried at room temperature and 50% RH for one week and then ground to the required fineness levels. The materials were ground to within the normal range of cement fineness.

Table 1: Chemical analyses of cementitious materials (%).

              SiO2   Al2O3  Fe2O3  CaO    MgO   SO3   K2O   Na2O  Na2Oe  LOI
Cement        20.34  4.94   2.33   63.50  2.45  2.93  0.47  0.17  0.48   2.64
Tuff          42.66  12.74  13.05  10.89  5.56  0.03  1.82  4.59  5.79   5.71
Volcanic ash  46.67  13.96  12.62  9.16   7.15  0.10  3.19  2.85  4.95   0.00
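The equivalent alkali column in Table 1 can be cross-checked from the Na2O and K2O analyses using the standard relation Na2Oe = Na2O + 0.658 K2O; a minimal sketch of the check (values taken from Table 1, not an analysis procedure from the paper):

```python
# Cross-check of the Na2Oe column in Table 1 using the standard
# equivalent-alkali relation: Na2Oe = Na2O + 0.658 * K2O (% by mass).
oxides = {
    "Cement":       {"Na2O": 0.17, "K2O": 0.47},
    "Tuff":         {"Na2O": 4.59, "K2O": 1.82},
    "Volcanic ash": {"Na2O": 2.85, "K2O": 3.19},
}

for material, ox in oxides.items():
    na2o_eq = ox["Na2O"] + 0.658 * ox["K2O"]
    print(f"{material}: Na2Oe = {na2o_eq:.2f} %")
# Prints 0.48, 5.79 and 4.95 %, matching the tabulated values.
```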
2.2 Test Procedures and Specifications
The procedures described in ASTM C-305 were followed in preparation of mortar mixtures. The mixtures used in the study were made in proportions of 15%, 20%, 25% and 30% of pozzolans by mass of cement. ASTM C-311 test procedures were followed. The water content of mortar mixtures was adjusted to ensure a flow of 100 to 115%. Properties of the pozzolans were evaluated in accordance with ASTM C-618 requirements. Thin sections prepared from chunks of the pozzolan materials were examined by optical microscopy equipped with polarized light. Lime-pozzolan pastes were studied by DTA for consumption of the free C-H present in the hydrated specimens at different ages.

3.0 RESULTS AND DISCUSSION

3.1 Density and Fineness
The densities of the pozzolans were 2860 kg/m3 for volcanic ash and 2760 kg/m3 for tuff, as determined by the Le Chatelier flask method (ASTM C-188). The Blaine fineness levels of the raw materials (ASTM C-204), ground for different time periods in a laboratory ball mill, are given in Table 2. Apparently volcanic ash requires a higher energy input for grinding as compared to tuff.

Table 2: Blaine fineness of pozzolan materials.

                           Volcanic ash          Tuff
                           Low       High        Low       High
Grinding period (hours)    3         8           1.5       3.5
Blaine fineness (m2/kg)    259       506         748       1080

Table 3: Compressive strengths of mortars of 0.5 w/cm ratio containing 20 to 30% pozzolan (OPC = ordinary Portland cement; w/cm = water/cementitious ratio).

Cementitious materials                Bulk density at      Compressive strength (MPa)
                                      28 days (kg/m3)      3 days    7 days    28 days
Control (OPC)              100%       2271                 32.5      38.9      54.3
Volcanic ash               20%        2287                 22.8      29.8      42.0
(259 m2/kg Blaine)         25%                             18.4      24.5      34.7
                           30%                             15.2      20.9      30.4
Tuff                       20%        2233                 16.4      23.2      33.5
(748 m2/kg Blaine)         25%                             16.4      23.4      30.2
                           30%                             12.5      17.0      23.8
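The strength activity indices quoted in section 3.2 follow directly from Table 3, as the ratio of blended-mortar strength to control strength at each age; a minimal sketch of the calculation (values from Table 3):

```python
# Strength activity index = 100 * (strength of blended mortar /
# strength of control mortar) at the same age, using Table 3 values.
control = {3: 32.5, 7: 38.9, 28: 54.3}          # OPC control, MPa
mixes_20pct = {
    "20% volcanic ash": {3: 22.8, 7: 29.8, 28: 42.0},
    "20% tuff":         {3: 16.4, 7: 23.2, 28: 33.5},
}

for mix, strengths in mixes_20pct.items():
    indices = {age: round(100 * s / control[age], 1)
               for age, s in strengths.items()}
    print(mix, indices)
# 20% volcanic ash -> {3: 70.2, 7: 76.6, 28: 77.3}
# 20% tuff         -> {3: 50.5, 7: 59.6, 28: 61.7}
# ASTM C-618 requires a minimum of 75% at 7 and 28 days, which the
# volcanic ash satisfies and the tuff does not.
```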
3.2 Compressive Strength
The compressive strength data for ages up to 28 days are shown in Table 3 and plotted in Figs. 1 and 2 for mixtures containing varied proportions of volcanic ash and tuff, respectively. After 3 days, the blended mixtures containing 20% volcanic ash had a strength of 70% of the strength of the control mix. This value increased significantly to 76% at 7 days and 77% at 28 days. Mixtures containing 20% tuff had compressive strengths of 50% of the strength of the control mix at 3 days, 60% at 7 days and 62% at 28 days. The results show that more strength gain took place between 3 days and 7 days than at later ages. However, other findings (Mehta, 1981) have suggested that the pozzolanic reaction taking place within the first 7 days of cement hydration is insignificant or nil. Even at the relatively low fineness of 259 m2/kg, the compressive strength of mortar containing 20% volcanic ash was greater than the minimum requirement of 75% of the strength of the control (ASTM C-618) at both 7 and 28 days.

Fig. 1: Compressive strength of mortars incorporating volcanic ash of 259 m2/kg Blaine fineness.
3.3 Pozzolanic Activity with Lime
Mixtures containing pozzolans of different fineness levels were tested for pozzolanic activity with lime. A low strength of 4.8 MPa was achieved at the low fineness of 259 m2/kg, as compared to 6.3 MPa at 506 m2/kg fineness, for volcanic ash. The results plotted in Fig. 3 show that volcanic ash meets the minimum compressive strength of 5.5 MPa (based on ASTM C 618-89) when ground to high fineness.

Fig. 2: Compressive strength of mortars incorporating tuff of 748 m2/kg Blaine fineness.
3.4 Control of Alkali-Silica Reaction
The 14-day ASR expansions of specimens stored and measured as required in ASTM C-227 have been plotted in Fig. 4. At 14 days, the ASR expansions of all mixtures containing volcanic ash were lower than the expansion of the control mix. A volcanic ash replacement level of 20% reduced the ASR expansion to 0.02%, much less than the required 0.06% (ASTM C-618). However, the opposite was found to be true for tuff. It is likely that tuff released alkalis into the pore solution, increasing ASR expansion regardless of the proportion of tuff incorporated into the mixtures.
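A simple screen of the 14-day expansions against the 0.06% criterion cited above illustrates this comparison; the figures below are those reported in the text and in Table 4, not new measurements:

```python
# Screening 14-day ASR mortar-bar expansions (ASTM C-227 storage)
# against the 0.06% limit cited from ASTM C-618.
LIMIT = 0.06  # % expansion at 14 days

expansions = {
    "20% volcanic ash": 0.02,   # reported in the text
    "25% volcanic ash": 0.018,  # Table 4
    "15% tuff":         0.13,   # Table 4
}

for mix, exp in expansions.items():
    verdict = "passes" if exp <= LIMIT else "fails"
    print(f"{mix}: {exp:.3f}% -> {verdict}")
# The volcanic ash mixtures pass; the tuff mixture roughly doubles the
# allowable expansion, consistent with alkali release into the pore
# solution.
```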
Fig. 3: Lime-pozzolan activity: compressive strengths of 4.8 MPa (volcanic ash, 259 m2/kg), 6.3 MPa (volcanic ash, 506 m2/kg) and 2.8 MPa (tuff, 1080 m2/kg).
Fig. 4: ASR expansion versus the proportion of volcanic ash or tuff replacing Portland cement.

3.5 Evaluation of the Characteristics of Volcanic Ash and Tuff
ASTM C-618 covers the requirements for use of natural pozzolans as mineral admixtures in concretes. In Table 4, results from experimental studies are compared against standard specifications for those tests performed on volcanic ash and tuff. The results summarized
in Table 4 reflect good performance by volcanic ash. Overall the material meets the ASTM C-618 requirements for 'Class N' pozzolans, with test values well within the specified limits. Results of the mixes containing tuff did not measure up to the requirements of the standard.

Table 4: Evaluation of volcanic ash and tuff against some major standard requirements for 'Class N' pozzolans.

Requirement                                              ASTM C618-01   Volcanic ash      Tuff
SiO2+Al2O3+Fe2O3, min (%)                                70.0           73.3              68.5
SO3, max (%)                                             4.0            0.1               0.03
Moisture content, max (%)                                3.0            0.34              2.26
Loss on ignition, max (%)                                10.0           0.00              5.71
Strength activity index at 7 days, min (%)               75             76.6              59.6
Strength activity index at 28 days, min (%)              75             77.3              61.7
Pozzolanic activity index with lime, min (MPa)           5.5*           6.27              2.80
Water demand, max (% of control)                         115            100               107
Expansion of test mixture as a percentage of
  low-alkali cement control at 14 days, max (%)          100†           30#               217
Mortar expansion at 14 days in alkali
  expansion test, max (%)                                0.06*          0.018 (25% ash)   0.13 (15% tuff)

* Based on ASTM C 618-89; † expansion of control made with low-alkali Portland cement; # equivalent to a 70% reduction in ASR expansion.
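The pass/fail summary of Table 4 can be reproduced mechanically by comparing each measured value against the corresponding limit; a minimal sketch over a subset of the criteria (limits and values as tabulated above):

```python
# Pass/fail evaluation against selected ASTM C-618 'Class N' limits
# from Table 4. Each limit is (value, kind), where kind indicates
# whether the requirement is a minimum or a maximum.
limits = {
    "SiO2+Al2O3+Fe2O3 (%)":           (70.0, "min"),
    "SO3 (%)":                        (4.0,  "max"),
    "Loss on ignition (%)":           (10.0, "max"),
    "Strength activity, 7 days (%)":  (75.0, "min"),
    "Strength activity, 28 days (%)": (75.0, "min"),
    "Lime-pozzolan activity (MPa)":   (5.5,  "min"),
}
measured = {
    "Volcanic ash": [73.3, 0.1, 0.00, 76.6, 77.3, 6.27],
    "Tuff":         [68.5, 0.03, 5.71, 59.6, 61.7, 2.80],
}

for material, values in measured.items():
    ok = all(
        (v >= lim if kind == "min" else v <= lim)
        for v, (lim, kind) in zip(values, limits.values())
    )
    print(material, "meets the selected Class N limits:", ok)
# Volcanic ash: True; Tuff: False (it fails the oxide sum, the strength
# activity indices and the lime-pozzolan activity requirement).
```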
3.6 Chemical Constituents, Mineralogy and Microanalysis
Some major differences in the chemical constitution of the pozzolans are evident in Table 1, which shows the results of their chemical analyses. The 5.71% loss on ignition of tuff may be due to bound water and the presence of a large proportion of inorganic or organic materials, in contrast to the practically 0% ignition loss of volcanic ash. Both pozzolans contained 5 to 6% Na2Oe alkali levels; however, the availability of these alkalis for reaction appears to be quite different for each of the pozzolans. It is implied from the ASR control test carried out that there was high availability of alkalis in the tuff, leading to promotion of ASR expansion. For the volcanic ash, the alkalis may be in a bound state, enabling the ash to contribute to the reduction in ASR expansion. To further examine whether the materials being tested were pozzolanic, the consumption of C-H was monitored for volcanic ash, it having shown good results in the physical tests. The ash was mixed with hydrated lime and water in proportions of 1 : 2.25 : 5 lime to ash to water. The mix was shaken in a sealed vial to ensure uniformity and stored at 38°C for up to 3 years. At different ages, the lime-pozzolan pastes were removed and the amount of C-H left in the samples was determined using DTA analysis, as shown in Fig. 5. Most of the C-H in the samples was consumed within 28 days and after 3 years there was no
more of it left in the samples. It is interesting to note that at later ages, the consumption of the C-H was associated with the formation of another phase at around 180°C. The new phase is presumably some form of C-S-H.
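The stated lime : ash : water proportions of 1 : 2.25 : 5 translate into batch masses as follows; the 100 g total batch mass below is an illustrative assumption, not a value from the paper:

```python
# Batch masses for the lime-pozzolan paste mixed in the stated
# proportions of 1 : 2.25 : 5 (lime : volcanic ash : water by mass).
proportions = {"lime": 1.0, "volcanic ash": 2.25, "water": 5.0}
total_mass = 100.0  # grams, assumed for illustration only

scale = total_mass / sum(proportions.values())
for component, part in proportions.items():
    print(f"{component}: {part * scale:.1f} g")
# lime: 12.1 g, volcanic ash: 27.3 g, water: 60.6 g
```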
Fig. 5: Calcium hydroxide consumption in lime-pozzolan pastes of volcanic ash stored at 38°C for up to 3 years. Va represents volcanic ash, C-H is calcium hydroxide, C-S-H is calcium silicate hydrate. (DTA curves: 1, lime; 2 and 3, va-lime at 7 days; 4, va-lime at 28 days; 5, va-lime at 3 years.)
Thin sections prepared from chunks of volcanic ash and tuff were used for petrography. The examination revealed that the volcanic ash was a scoriaceous basalt comprising olivine and clinopyroxene phenocrysts, and a groundmass of olivine, clinopyroxene, feldspar and magnetite. The tuff was made of fragments of basalt-rhyolite volcanic rock in a heavily altered, clay-rich matrix. Figs. 6 and 7 are scanning electron micrographs showing some of the mineralogical features described. Volcanic ash consisted of a mainly glassy structure and large bubble cavities.
Fig. 6: Olivine crystals and typically numerous bubble cavities; scanning electron micrograph of volcanic ash.
It is likely that the heavily clayey matrix of tuff observed from the petrographic analysis contributed significantly to its high loss on ignition. Consequently, the tuff had low to poor strength properties and pozzolanic activity.

Fig. 7: Fragments of volcanic rock and mineral particles embedded in a largely clayey matrix; scanning electron micrograph of tuff.

4.0 CONCLUSIONS
When evaluated for use as a pozzolan in concrete, volcanic ash met the requirements for 'Class N' pozzolans specified in ASTM C-618. The tuff failed to meet these requirements and may be of little use. Volcanic ash was found to be most effective at 20 to 25% replacement levels and 506 m2/kg Blaine fineness. Examination of the mineralogies of the pozzolans revealed the volcanic ash to be scoriaceous basalt with a presence of olivine, clinopyroxene, feldspar and magnetite minerals. The tuff consisted of fragments of basalt-rhyolite volcanic rock in a heavily altered, clay-rich matrix.

ACKNOWLEDGEMENTS
The authors are grateful to Professor Michael Gorton of the Department of Geology and Saraci Mirela of the Civil Engineering Department, both of the University of Toronto, for conducting studies on the mineralogy of the pozzolans. We are also grateful to Eng. Balu Tabaaro of the Department of Survey and Mines, Mineral Dressing Laboratory, Entebbe, Uganda, for providing some samples and literature.

REFERENCES
Day, R.L. (1990), Pozzolans for use in low-cost housing: A state-of-the-art report, International Development Research Centre, Ottawa, Canada, September 1990.
Malvar, L.J., Cline, G.D., Burke, D.F., Rollings, R., Sherman, T.W. and Green, J. (2002), Alkali-silica reaction mitigation: state-of-the-art and recommendations, ACI Materials Journal, vol. 99, no. 5, Sept-Oct 2002, 21 p.
Mehta, P.K. (1981), Studies on blended Portland cements containing Santorin earth, Cement and Concrete Research, vol. 11, no. 4, p. 507-518.
Mills, R.H. and Hooton, R.D. (1992), Final report to the International Development Research Centre (IDRC) of Canada on production of Ugandan lime-pozzolan cement, blended cements, their utilization and economic analysis, prepared by the Department of Geological Survey and Mines, Mineral Dressing Laboratory, Entebbe, Uganda, in conjunction with the Department of Civil Engineering, University of Toronto, Toronto, Canada, November 1992, 72 pages.
Tabaaro, E.W. (2000), Bio-composites for the building and construction industry in Uganda, International Workshop on Development of Natural Polymers and Composites in East Africa, Arusha, Tanzania, September 2000.
COMPARATIVE ANALYSIS OF HOLLOW CLAY BLOCKS AND SOLID REINFORCED CONCRETE SLABS
M. Kyakula, N. Behangana and B. Pariyo, Department of Civil and Building Engineering, Kyambogo University, Uganda
ABSTRACT
Over 99% of multi-storey structures in Uganda are of reinforced concrete framing. Steel and brick structures account for less than 1%. Of the reinforced concrete structures currently under construction, 75% use hollow clay block reinforced concrete slabs. This paper looks at the form of the hollow clay block that contributes to its ease of use and enables it to be held in the slab both by mechanical interlock and friction. It explores the block's limitations and ways in which its form may be improved.
Designs of single-slab-panel, two-storey reinforced concrete structures, with one side kept at a constant 8m while the other dimension was varied from 2m through 3m, 4m, 5m, 6m and 7m up to 8m, were carried out for both solid and hollow clay block slab construction. The design loads, moments, reinforcement, shear stresses and costs for each case of solid and hollow block slabs were compared. It was found that, contrary to common belief, solid slabs are cheaper than hollow clay block slabs. This is because hollow clay blocks need a minimum topping of 50mm and are manufactured in standard sizes of 125mm, 150mm, 175mm, 200mm and 225mm. This implies that for spans of about 2m, solid slabs can be 75mm or 100mm thick, while the minimum thickness of a hollow block slab is 175mm. Also, unlike solid slabs, hollow clay block slabs over 6m long may need shear reinforcement. As the length increases to 8m, the topping for hollow blocks increases to an uneconomic value. However, for large structures of more than two storeys, hollow block slab construction might be cheaper, as the reduced weight leads to smaller columns and foundations. Furthermore, hollow block slabs are easier to detail and construct, and are less prone to errors on site.
Keywords: Hollow clay blocks and solid RC slab; Block shape; Design loads; Shear stress; Moments; Reinforcement; Cost; Ease of design/construction.
1.0 INTRODUCTION
Concrete slabs behave primarily as flexural members and their design is similar to that of beams, except that: the breadth of a solid slab is assumed to be one metre, while hollow block slabs are designed as T-beams with effective width equal to the spacing between ribs; slabs are designed to span smaller distances than beams and consequently have smaller effective depths (50 to 350mm); and the shear stresses in slabs are usually low and compression reinforcement is rarely used. Concrete slabs may be classified according to the nature and type of support (for example, simply supported), the direction of support (for example, one-way spanning), and the type of section (for example, solid).
Until recently, the practice has been to use hollow blocks for lightly loaded floors such as residential flats. But a survey of 70 buildings currently being constructed in different parts of the country has revealed that hollow clay blocks are used in flats, hotels, students' hostels, offices, schools, libraries and shopping arcades (Pariyo, 2005). The basis of design justifies this advance in utilization: the design of hollow clay block slabs depends on the fact that concrete in tension below the neutral axis has cracked. Whereas this cracked concrete contributes to the rigidity of the floor, its only contribution to strength is through the concrete surrounding the tension bars, which holds the bars in the structure and provides bond. Thus any concrete in tension remote from the bars may be eliminated, reducing the weight while maintaining the strength of the slab. In hollow block slab construction, the hollow blocks are laid in a line with the hollow sides end to end, and the last block has its ends sealed to prevent entry of concrete into the holes. The slab is thus constrained to act as one-way spanning between supports. The slab acts, and is designed, as a T-beam with the flange width equal to the distance between ribs, but is made solid at about 0.5m to 1.0m from the support to increase the shear strength. A weld mesh is laid in the topping to distribute any imposed load. Thus hollow block slabs can take most loadings. Hollow clay block slab construction is the most widespread form of slab construction; 60 of the 70 sites surveyed throughout the nation were using it (Pariyo, 2005). The widespread usage and acceptability of this material necessitates that it be thoroughly investigated. This paper is an attempt in this direction.

1.1 Hollow Blocks
A sketch of a typical hollow clay block is shown in Figure 1; its surface has small grooves, which help introduce friction forces, and a key for mechanical interlock; these hold the block in the concrete. The dimensions given in Figure 1 were measured from actual hollow clay blocks on the market. The four hollow block sizes available on the Uganda market (from catalogues) are shown in Table 1. The limited number of sizes means that the least depth of hollow block slabs is 175mm; this is because the least height of hollow blocks is 125mm and the minimum topping allowed is 50mm. This implies that even for small spans such as 1m to 2m, which could require a slab thickness of 50mm to 100mm, one still has to use 175mm. However, as the span increases to 5m, the thicknesses of the solid floor slab and the hollow block slab are about equal.
Table 1. Hollow block types on the Ugandan market

S/No   Length (mm)   Width (mm)   Height (mm)   Weight (kg)
1      400           300          125           7.3
2      400           300          150           8.4
3      400           300          175           11.73
4      400           300          225           13.58
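Since a hollow block slab's depth is the block height plus the topping, the four block heights in Table 1 combined with the 50mm minimum topping fix the smallest achievable slab depths; a minimal sketch of this arithmetic:

```python
# Minimum hollow-block slab depths implied by Table 1: block height
# plus the minimum 50 mm structural topping.
block_heights = [125, 150, 175, 225]  # mm, from Table 1
MIN_TOPPING = 50                      # mm

depths = [h + MIN_TOPPING for h in block_heights]
print(depths)  # [175, 200, 225, 275] mm
# Hence even a short 1-2 m span, which a 50-100 mm solid slab could
# serve, requires at least a 175 mm hollow-block slab.
```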
1.2 Implications of the Shape
A reasonable arrangement of blocks leaves a minimum width of 75mm, which allows for a 50mm diameter poker vibrator and 12.5mm clearance on either side. Thus the minimum rib width at the bottom is 75 + 2x40 = 155mm. This is greater than 125mm, the minimum rib width required for fire resistance as given in Figure 3.2 of BS8110. The applied shear stress v for a ribbed beam is given by v = V/(bv d), where V is the applied shear force, d is the effective depth and bv is the average rib width. The ribs created between the hollow blocks are 75mm wide at the top and 155mm at the bottom, as shown in Figure 2, for the case of a 175mm thick slab with hollow blocks of 125mm depth, a topping of 50mm and 25mm cover to the tension bars. It would be more conservative to use the smaller value of bv = 75mm in shear design calculations; however, in practice the larger value of bv = 155mm is used. Moreover, it may be difficult to justify using the average rib width if the rib width is not tapering. One alternative is to modify the hollow blocks such that the key is recessed into the blocks rather than projecting out, as illustrated in Figure 3. This could reduce the required rib width from 155mm to the minimum allowed of 125mm, thus saving on concrete and making the calculation of the concrete shear stress easier, while at the same time providing the key for holding the hollow blocks safely in the slab.

2.0 COMPARATIVE ANALYSIS
Two sets of slabs were designed, one set using hollow blocks while the other used solid construction. For each set, one side of the slab was kept at 8m while the other was varied from 2m, 3m, 4m, 5m, 6m and 7m up to 8m. The imposed and partition loads were assumed to be 2.5 kN/m2 and 1.0 kN/m2 respectively. The floor finish and the underside plaster were each assumed to be 25mm thick and of unit weight 24.0 kN/m3, giving a dead load from partitions and finishes of DL(P&F) = 1.0 + 0.05 x 24 = 2.2 kN/m2. The dead load of the hollow block slab is given by DL(slab) = 24(h - Nb Vb) + Nb Wb, where h is the overall slab depth in metres, Nb is the number of blocks per m2, Vb is the volume in m3 of a hollow block and Wb is the weight of a block in kN. The slab was assumed to be an interior panel in a building with over 3 panels in either direction. The corresponding beams, columns and pad footings were designed. Comparative analyses of the design loading, moments, reinforcement, shear forces and costs of construction were carried out, and a few of these are given below.

2.1 Design Loads per Square Metre
As the span, and thus the loading, increases, the design load in kN/m2 increases for both solid and hollow block slabs. Figure 4 shows a comparison of design loads for hollow block and solid slabs. For hollow block slabs of less than 4m span, the design load is constant because the slab thickness used is dictated by topping requirements and the depth of the available blocks; for this depth and span (175mm and less than 4m), deflection is not critical. On the other hand, the design depth increases with span in solid slabs, because slab thickness varies as per allowable deflection requirements.
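The dead-load and applied-shear-stress formulas above can be exercised as follows; all numerical inputs in this sketch are illustrative assumptions, not values computed in the paper:

```python
# Design dead load of a hollow-block slab and applied shear stress,
# per the formulas in the text:
#   DL = 24*(h - Nb*Vb) + Nb*Wb        [kN/m^2]
#   v  = V / (bv * d)                  [N/mm^2]
def dead_load(h, n_blocks, vol_block, wt_block):
    """h: slab depth (m); n_blocks: blocks per m^2;
    vol_block: volume displaced by one block (m^3);
    wt_block: weight of one block (kN)."""
    return 24.0 * (h - n_blocks * vol_block) + n_blocks * wt_block

def shear_stress(V, bv, d):
    """V: shear force (N); bv: rib width (mm); d: eff. depth (mm)."""
    return V / (bv * d)

# 175 mm slab with the 125 mm block of Table 1; 6 blocks/m^2 assumed.
dl = dead_load(h=0.175, n_blocks=6.0, vol_block=0.4 * 0.3 * 0.125,
               wt_block=7.3 * 9.81 / 1000)
print(f"slab self-weight ~ {dl:.2f} kN/m^2")

# Same assumed shear force, two candidate rib widths (see Figure 2):
for bv in (75.0, 155.0):
    print(f"bv = {bv:.0f} mm: v = {shear_stress(30e3, bv, 145.0):.2f} N/mm^2")
# The narrower rib width roughly doubles the computed shear stress,
# which is why the choice of bv matters in the shear check.
```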
Figure 4: Variation of design loads for solid and hollow block floor slabs (design load, kN/m2, versus span length, m).
2.2 Moments and Reinforcement
From Figure 5 it is seen that, despite the fact that the solid slab has a greater load and thus a greater applied moment, it has a greater reserve capacity: its ratio of applied to ultimate moments is less than that of the hollow block slab for all spans greater than 3m. Also, its area of reinforcement in mm2 per m width of slab is less than that of the hollow block slab for all spans. This is because for spans lower than 4m, even where the required area of reinforcement is small, one must provide the minimum allowed: the hollow block slab is treated as a T-beam, and one is required to provide a minimum area of steel given by 100As/(bw h) = 0.26 for fy = 460 N/mm2, for flanged beams with the flange in compression, as per Table 3.25 of BS8110. On the other hand, solid slabs are provided with a minimum of 100As/(b h) = 0.13 in both directions. Also, for hollow slabs it is preferable to provide one bar per rib; thus the next bar size has to be provided even where the required area of steel exceeds the previous bar size by only a small value.
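These minimum-steel percentages convert directly into areas; a minimal sketch, with the 155mm rib and 175mm slab depth from the discussion above used as illustrative section sizes:

```python
# Minimum reinforcement areas implied by the percentages quoted above
# (BS8110, fy = 460 N/mm^2). Section sizes are illustrative.
def min_steel(percent, b, h):
    """Return minimum As (mm^2) for 100*As/(b*h) = percent."""
    return percent / 100.0 * b * h

# Hollow-block rib treated as a T-beam: bw = 155 mm, h = 175 mm
as_rib = min_steel(0.26, b=155, h=175)
# Solid slab, per metre width: b = 1000 mm, h = 175 mm
as_solid = min_steel(0.13, b=1000, h=175)

print(f"rib minimum As   = {as_rib:.0f} mm^2 per rib")   # ~71 mm^2
print(f"solid minimum As = {as_solid:.0f} mm^2 per m")   # ~228 mm^2
# With one bar per rib, the next larger standard bar size must be
# provided even when the computed requirement only slightly exceeds
# the previous size, as noted in the text.
```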
Figure 5: Variation of the ratio of applied moment to moment of resistance for solid and hollow block slabs.

2.3 Shear Stresses
With the rib width taken as bv = 155mm, the design concrete shear stress exceeds the applied shear stress for all spans (vc > v). However, if the keys are not ignored and bv = 75mm, then as shown in Figure 6, for spans greater than 3m, vc < v, necessitating shear reinforcement or the use of a solid slab up to a length at which the applied shear stress is no longer critical. On the other hand, the design concrete shear stress for the solid slab was greater than the applied shear stress for all span lengths.
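For reference, the design concrete shear stress vc used in such comparisons can be estimated from the BS8110 Table 3.8 expression; the sketch below assumes grade-25 concrete and illustrative section values (one T12 bar per rib), none of which are taken from the paper:

```python
# Design concrete shear stress per BS8110 Table 3.8 (gamma_m = 1.25):
#   vc = 0.79 * (100As/(bv*d))^(1/3) * (400/d)^(1/4) / 1.25
# with 100As/(bv*d) taken as at most 3 and 400/d as at least 1; for
# fcu > 25 N/mm^2 the result is scaled by (fcu/25)^(1/3), fcu <= 40.
def vc_bs8110(As, bv, d, fcu=25.0):
    rho = min(100.0 * As / (bv * d), 3.0)
    depth_factor = max(400.0 / d, 1.0) ** 0.25
    vc = 0.79 * rho ** (1.0 / 3.0) * depth_factor / 1.25
    return vc * (min(fcu, 40.0) / 25.0) ** (1.0 / 3.0)

# One T12 bar (113 mm^2) in a rib, assumed effective depth 145 mm:
for bv in (75.0, 155.0):
    print(f"bv = {bv:.0f} mm: vc = {vc_bs8110(113.0, bv, 145.0):.2f} N/mm^2")
# The narrower assumed rib gives a higher vc (higher steel ratio), but
# the applied stress v rises faster still, so shear can govern for the
# longer spans, as Figure 6 indicates.
```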
Figure 6: Comparison of applied shear and concrete shear stresses for the hollow block slab (bv = 75mm).
2.4 Cost Comparisons
The costs of the various structural elements were derived and compared for both solid and hollow block slabs. The cost of each element designed using a solid slab was divided by that of the hollow block slab, and this ratio was plotted against span. Figure 7 shows the variation of the cost of solid and hollow block slabs with span length. It is seen that for slabs less than 4m and greater than 5m, the cost of hollow block slabs is higher than that of solid slabs. This is due to the fact that for spans less than 4m, solid slabs allow smaller depths, as per deflection requirements, while hollow block slabs have their thickness dictated by the depth of the available blocks and the topping. Thus for spans of 2m and 3m, hollow block slabs have a bigger depth than solid slabs, with corresponding material requirements. At 4m and 5m, the hollow block slab becomes cheaper. Above 5m, the minimum topping (50mm) cannot be used, because the available hollow blocks offer few standard depths, and in order to meet deflection requirements as the span increases, the only option is to increase the topping. Thus for an 8m span, deflection requirements dictate an overall depth of
340mm, yet the maximum depth of the available hollow blocks is 225mm, giving an uneconomical topping of 115mm. The comparison of the cost of beams revealed that for spans less than 4m and greater than 5m, beams supporting solid slabs were slightly cheaper. This is because the current practice of using beams of the same size, even when the hollow blocks have constrained the slab to act as one-way spanning, maintains the rigidity of the structure and reduces the effective height of the columns, but offers no reduction in the materials used in the beams. The costs of columns were found to be the same for both cases because the case considered carried little weight and the reinforcement areas were dictated by minimum requirements rather than loading conditions. This implies that for structures supporting many floors, the columns for hollow block slabs will be cheaper, because they will carry less load and the bending may be assumed to act about only one axis for all the columns. On the other hand, the foundations for a structure supporting hollow block slabs were found to be cheaper by an average of 10%. This is because the hollow block slabs ensured a reduced weight.
2.5 Design and Construction
Use of hollow blocks constrains the slab to act as one-way spanning. Such slabs are simple to analyse and design. The structural drawings are easy to detail and understand. During construction it is easier to lay the reinforcement, thus minimizing mistakes on site. The weld mesh included in the topping ensures distribution of the imposed loading to the whole slab. This ease of construction has contributed to the growing popularity of hollow block slabs, such that they now occupy 75% of the market share.
Figure 7" Variation of the cost of solid and hollow blocks slab with span 1.1
1 A
E -c
09
c ,_1
c
0.8
0.. t~
#
V a r i a t i o n of c o s t y ratio
0.7
0.6 2
3
4
5
6
7
8
Cost of Solid/hollow blocks slab
89
3.0 CONCLUSION
The current shape of the hollow clay block has keys and grooves that provide mechanical interlock and friction resistance to hold the block firmly in the concrete. However, this shape could also decrease the shear resistance of the slab. A shape has been proposed that has all the advantages of the one currently used, while at the same time increasing the shear resistance of the slab and providing a saving in the concrete used. The limited range of hollow blocks available on the market makes hollow block slabs more expensive than solid slabs for spans less than 4m or greater than 5m. For spans less than 4m, the minimum slab depth is 175mm, because the minimum available block depth is 125mm and the minimum topping required is 50mm; yet for solid slabs the depth can vary from 50mm to 150mm for spans varying from 1m to 3m, depending on loading and deflection requirements. For spans greater than 5m, deflection requirements dictate increasing depth with span, yet the maximum depth of the available blocks is 225mm, leading to an uneconomical depth of topping. Using beams of the same size, even when the hollow blocks have constrained the slab to act as one-way spanning, maintains the rigidity of the structure and reduces the effective height of the columns, but offers no reduction in the materials used in the beams. The reduced weight due to the use of hollow block slabs results in reduced costs of columns and foundations. Moreover, since the use of hollow blocks constrains the slab to be designed as, and to act as, one-way spanning, the loading, and thus the moments, from one set of beams framing into a column is negligible compared to the other. Thus the columns experience uniaxial moments, which results in a saving in reinforcement.
REFERENCES
Balu Tabaaro, W. (2004), Sustainable development and application of indigenous building materials in Uganda, Journal of Construction Exhibition, Issue 1, p. 4-5.
BS8110-1 (1985, 1997), Structural Use of Concrete, Part 1: Code of practice for design and construction.
Mosley, W.H. and Bungey, J.H. (1989), Reinforced Concrete Design, 5th Edition, Macmillan, London.
Pariyo, B. (2005), Comparative cost analysis of solid reinforced concrete slab and hollow clay blocks slab construction, Final year undergraduate project, Kyambogo University.
Seeley, I.H. (1993), Civil Engineering Quantities, 5th Edition, Macmillan, London.
Uganda Clays, Kajjansi catalogue (2004), Price lists and weights of suspended floor units.
TECHNOLOGY TRANSFER TO MINIMIZE CONCRETE CONSTRUCTION FAILURES
S.O. Ekolu and Y. Ballim, School of Civil and Environmental Engineering, University of the Witwatersrand, South Africa
ABSTRACT
The use of concrete in developing countries is growing rapidly. There is, however, a strong possibility that its increasing application as a construction material will be accompanied by an increase in incidents of construction failures. Such problems have been experienced by many countries during the infancy of their concrete industries. Concrete construction is resource intensive, and construction failures come with significant economic costs, loss of resources and, sometimes, fatalities. For sustainable development in Africa, countries cannot afford the waste of resources and the enormous expenses from failures that occur, especially in avoidable circumstances. Although research in concrete technology is growing rapidly and faces many challenges associated with skills and technological expertise, an important contributor to failure is that much existing knowledge is not adequately applied. The reason for this redundant knowledge base is inadequate technology transfer to all levels of the workforce, from design engineers to the concrete work team at the construction site. This paper explores some of the barriers to effective technology transfer and considers ways of dealing with this problem in developing countries. Also presented is a case study of a recent fatal collapse of a new reinforced concrete building under construction in Uganda.
Keywords: Concrete; Construction failures; Technology transfer; Education; Skills development
1.0 INTRODUCTION
It is anticipated that developing countries are on the path to experiencing the largest growth in the world in the utilization of concrete in construction and the consumption of cementitious materials. The great existing need for infrastructure and general construction in these countries is a necessary ingredient for growth in lockstep with industrialization efforts. As an example, the recent trend of industrial growth in China, one of the large developing nations, has triggered significant use of concrete and cementitious materials, consuming about one-half of the world's cement production (Weizu, 2004). This is not to suggest that other developing countries will experience similar growth trends, but the need for physical infrastructure in Africa is being driven by pressures associated with population growth and increasing urbanization, as shown in Fig. 1, as well as by ongoing industrial development and globalization trends that are likely to propel increases in cement consumption and the concrete construction industry.
Fig. 1: Forecast urban growth, 1990 to 2020, for developing and industrial countries (Source: United Nations, 1998).

But the concrete industry in Africa is relatively young and could potentially experience disproportionately high construction failures. This is not to be pessimistic, but rather to highlight the need for caution, so that the major past mistakes leading to the failures experienced in the infancy of the concrete industry in North America and Europe over 100 years ago are not repeated in developing countries. In the early years of concrete construction, the concepts of concrete durability and sustainable development were either not known or not fully appreciated. In the present era, much knowledge has been accumulated on these issues and they can no longer be ignored in any credible concrete design and construction, more so for developing economies.
1.1 Early Precedents of Failures in Concrete Construction
At the inception of concrete as a new construction material, records indicate that rampant and sometimes spectacular construction failures occurred. Based on past experience, it can be shown that there are very few new causes of construction failures today, other than variations of the same problems associated with the broad categories of design deficiencies, poor concrete materials and workmanship, formwork and scaffold problems during the construction process, foundations, and hazards (Feld, 1964; Mckaig, 1962). In an assessment of 484 building failures in Algeria, Kenai et al. (1999) found poor workmanship and poor concrete quality to be the main causes of building failures, in addition to soil movement. The lessons learnt from early experiences have been built into rules and procedures to act as safeguards to minimize re-occurrences of failures. These rules have been standardized into required building codes, construction material specifications, systematic selection procedures for engineers and contractors, professional registration requirements, and the exposure of professionals to legal reprisals. Modern theories of technical risk management have been employed with the support of computer technology and analysis software. While these developments are most effective in defending against construction failures due to technical errors, their inappropriate use is often a problem of technology transfer. This manifests itself as ignorance of the existence of such codes and design guides, lack of understanding of the theoretical underpinnings of the code recommendations and specifications, inadequate application of such guides and specifications, and the absence of a quality assurance procedure
to ensure compliance. Human error adds a further dimension to the problem and cannot be easily predicted, quantified or eliminated. The human error factor is a complicated subject that might not be fully handled technically, but its danger can be reduced with proper preparation, care and special attention to critical aspects of concrete science and technology in construction.

1.2 Construction Failures and Sustainable Development
Construction failures inhibit efficient and sustainable development and should be appropriately addressed. Although concrete is a relatively new construction material in developing countries, construction failures are not expected to be as frequent as they could be, nor are there any records to suggest so. Instead, most specifications and design codes governing construction practices are already in existence or have been adapted, or directly imported, from more developed countries. There are often problems associated with the direct importing of these standards (Ballim, 1999). Nevertheless, most of these procedures are often undermined in circumstances of compromised relationships between owners, designers and contractors, political and social uncertainties, and marginalization of local expertise due to foreign-influenced financing policies. Engineers and construction professionals in developing countries face a unique set of challenges. In many African countries, infrastructure construction projects have in the past been largely contracted to foreign firms or expatriates, citing incompetence and/or lack of local capacity. But the real challenge for professionals from developing countries is in translating the existing knowledge base from design to construction site, from theory to practice, while upholding the principles of effective and sustainable engineering and development within the local environment. This can only be successfully achieved through the development of appropriate and relevant specifications, education and training at all levels of staff in the design and construction process, and systems that assure quality and compliance. This paper presents some views on these issues and explores potential ways of minimising concrete construction failures within the context of effective resource utilisation.

2.0 THE TECHNOLOGY TRANSFER BOTTLENECK
Any construction project is a system of operations on and off site. The role of technology transfer is to bring together the main components of the system, suggested as construction systems and equipment, supplies and materials of construction, and human knowledge and skills. For an effective construction process, the independent operations of each component must be integrated to perform simultaneously in response to the other components, placing restrictions in accordance with output requirements. Human knowledge and skills play the pivotal role of planning, organising and executing works within the system towards optimal or efficient output. Many technical and non-technical errors are often made during the integration and interaction of the system components, and deficiencies here often lead to construction failures. This segment forms the 'constriction in construction' shown in Fig. 2 of the simplified system model described.
Fig. 2: Technology transfer bottleneck. The diagram shows technology and the existing knowledge base (construction systems, tools and equipment; materials of construction; human knowledge and skills) passing through the technology transfer constriction to the construction job site.
Concrete construction is by and large an execution of its material science on the job site. This is where the major problem arises. Engineers design concrete structures using structural analysis concepts, but the structures have to be built through the execution of concrete material science fundamentals on the job site. A construction site is also a concrete manufacturing factory. While the engineers, trades persons, artisans and labourers need proper and appropriate skills in the fundamentals of concrete as a structural material to produce good construction, the designer, who may also be the supervisor, should be more focussed on the implications of concrete processing methods for design and analysis concepts. This is the stage where knowledge-based skills transfer becomes critical. Often these impediments are manifested as incompetence and ignorance on the part of trades persons, deficiency in supervision, or outright negligence or lapses on the part of an engineer who is otherwise a competent and careful professional. These deficiencies translate into poor concrete materials lacking durability, poor workmanship, and problems in the loading and removal of formwork and scaffolding, which often constitute major causes of concrete construction failures.
3.0 TECHNOLOGY TRANSFER IN CONCRETE CONSTRUCTION

3.1 Concrete Technology, Skills Transfer and Education
There are likely to be many non-catastrophic construction failures in developing countries that are not reported or documented. Fear of legal reprisals and professional sanction discourages openness and record keeping of construction failures. However, the danger is that future engineers could repeat similar mistakes and further reinforce the perception that engineering competence is lacking in developing countries and needs to be provided from the developed countries. On a positive note, construction failures provide an opportunity for the betterment of skills and techniques through lessons learnt, and a chance to add value by including elements of
service-life extension into the repairs. Experience, formal and informal education, and appropriate training are required to improve existing technology and minimize construction failures. Concrete technology itself is changing fast, but concrete research and innovation is rarely developed or applied in developing countries. In most cases, engineering educational institutions emphasize design analysis while minimizing the fundamental concepts of the material science of concrete that are key to the process of effective concrete construction. It is often assumed that understanding of these important issues can be acquired through practice or continued professional development, which in most developing countries is not readily available to engineers, except through serendipitous experience for the fortunate few. However, the concrete construction industry can benefit greatly from special courses and programs provided by civil engineering institutions through their curricula. Current industry concerns such as construction failures, fundamentals of concrete making, ethics and many other topics can easily be accommodated as short courses or as units within major academic/educational programs.
3.2 Concrete Market and Industry in Developing Countries
Except for a few countries such as South Africa, the concrete market in most developing economies is highly fragmented. The concrete industry has multiple players, including producers and suppliers of construction materials, contractors, engineers and architects, unions of trades persons and artisans, and formal institutions of research and education. None of these stakeholders benefits from construction failures, and it is important that they make their individual contributions through a representative structure that coordinates training and development to the benefit of the entire sector. Here lies an important challenge to all players in the concrete construction sector in developing countries, as is most of Africa: they have to form a mutually supporting coordination structure which focuses on technology transfer through appropriate education, training and human resources development at all levels of the industry. This must be achieved if such countries are to grow positive and respectable indigenous concrete construction sectors.
3.3 Engineering for Sustainable Development
There are principles and procedures governing approval of construction projects and designs for physical infrastructure. Project cost and duration have traditionally been held as the main considerations, while evaluation is based on completion time, actual project cost, owner satisfaction and other factors. Recent advances have included environmental requirements in some construction project designs. But the concept of sustainable development has not been entrenched in construction from the engineering perspective. There is a need to develop quantitative techniques that broadly measure the contribution of construction projects towards sustainable development. Such systems could then be built into the requirements for approval and evaluation of construction projects.
4.0 CASE STUDY: COLLAPSE OF J & M AIRPORT ROAD HOTEL, ENTEBBE, UGANDA
The collapse of a three-story concrete structure during construction of the J & M Airport Road Hotel on September 1, 2004, causing the death of 11 persons and injuring 27 others, was perhaps one of the most publicised recent incidents of a construction failure in the East African region. The section that collapsed was adjacent to a large section of an already erected six-story
reinforced concrete frame with brickwork wall infill. This brief overview is based on available reports, and is given only for the purpose of illustrating important issues concerning concrete in construction failures. The building was a standard reinforced concrete, column-and-beam type construction, with concrete floor slabs, brick wall partitions and cladding. On the date of collapse, construction of the section had reached the third floor. At around 10 am, when the structure collapsed, reports indicate that the workers had been removing scaffolding in preparation for erection of partitions (Bogere and Senkabinva, 2004). The whole section of the structure fell vertically, with the beams shearing off from the adjacent erected six-story section of the same hotel building. The results of a site survey and construction materials testing conducted by the Uganda National Bureau of Standards (UNBS, 2004) showed that concrete strength for columns was low and highly variable, ranging from 7 MPa to 20 MPa, well below its expected grade of 25 MPa. The report showed evidence of segregated and severely honeycombed concrete with loose aggregates that could easily be hand-picked, particularly at the joints of columns and beams or floors. No hazards were involved and foundation problems were unlikely. Even before considering the possibility of a design deficiency, a myriad of faults could be assembled. Poor workmanship and poor concrete quality were apparent. The removal of scaffolding and supports at the lower floor could have been the trigger for collapse, given that columns of such low strength concrete could easily fail to support the upper two floors. Indeed, columns of the existing six-story adjacent section had shown signs of buckling, and additional columns and props had to be provided at the ground floor level for further support. Clearly there was a deficiency in ensuring that the fundamentals of concrete making, involving mixing, placing and curing, were not compromised. In this case, potential errors could have been related to some or all of the following: inadequate cement content, dirty or inferior quality aggregates, inappropriate mix design, incorrect batching of concrete mixture components, segregation during placing and compaction, poor curing, absence of supervision and quality control testing, premature removal of scaffolding/formwork, etc. These are all skills-related issues of specific concern for concrete materials in construction.
5.0 PROPOSALS
Generally, it has been recognized that concrete is a complex material and its market so diverse that coordinating structures in the form of non-profit organizations are necessary to bring together all stakeholders, who then consider the issues that potentially affect the sector. The key role of such a structure is to advance the local concrete market and technology. In addition, a non-profit organization for the concrete market in a developing economy would be expected to promote requirements for sustainable development in concrete construction and technology: alternative materials such as pozzolans, industrial waste utilisation, recycling and re-use, appropriate technologies for concrete products, training on fundamentals and advances in concrete technology, and research and innovations to meet local needs.
Institutions such as the Concrete Society of Southern Africa, the American Concrete Institute and the Cement Manufacturers' Association (India) are examples of coordinating structures that provide essential education on concrete technology and its advancement, fund research and innovation, improve technology and skills transfer, facilitate information dissemination
and grow the concrete market in their regions. In East Africa and many other developing regions such frameworks are non-existent. As such, the concrete industry is fragmented and not well protected against construction failures and concrete technologies that are simply transplanted from more developed countries. A second and equally important weakness is the dearth of locally appropriate design codes and specifications for durable concrete construction. These documents must be developed by the local concrete community and must be accompanied by the parallel development of systems and procedures for quality assurance to ensure compliance. The authors are also of the view that, while failures must be avoided in the first instance, when they do occur, more can be achieved by evaluating the impact of the failure on sustainability in addition to identifying the cause(s) of the failure. During repair or new construction after the failure, parameters for sustainable concrete construction can then be built into the project work in order to add value which compensates for the cost of the failure. In this way a construction failure can be converted into a channel for technology transfer while achieving the benefits of learning from it and promoting sustainable development. A simple technique has been proposed that can be developed and used to evaluate the impact of construction failures on sustainable construction engineering and development. It consists of four broad requirements already identified by the Africa Engineers Forum (AEF, 2004) as: (1) affordability, (2) sustainability, (3) appropriate technology, and (4) indigenous capacity and skills transfer. A scoring system can be used for each requirement based on qualifying indicators. For each of the requirements, the impact value can be calculated as:

SCEIV = \sum_{j=1}^{4} w_j \, SCR_j, \qquad SCR_j = \frac{\sum \text{RQI scores for requirement } j}{N_{rqi}}

where:
SCEIV = sustainable construction engineering impact value for a given project;
SCR_j = sustainable construction requirement j;
RQI scores = qualifying indicator scores for a specific requirement;
N_rqi = number of qualifying indicators;
w_j = weight assigned to requirement j.
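As an illustration of how this scoring system could work in practice, the following minimal sketch computes SCEIV for a hypothetical project. The equal weights and the indicator scores are invented inputs for illustration only; the paper does not prescribe them.

```python
# Illustrative sketch of the SCEIV scoring scheme described above.
# Weights and indicator scores are hypothetical inputs chosen by the assessor.

def sceiv(requirements):
    """Compute the sustainable construction engineering impact value.

    `requirements` maps each of the four AEF requirements to a
    (weight, indicator_scores) pair; SCR_j is the mean indicator score.
    """
    total = 0.0
    for weight, scores in requirements.values():
        scr = sum(scores) / len(scores)  # SCR_j = sum of RQI scores / N_rqi
        total += weight * scr
    return total

# Example with made-up indicator scores on a 1-5 scale and equal weights:
project = {
    "affordability":          (0.25, [4, 3, 5]),
    "sustainability":         (0.25, [3, 4]),
    "appropriate technology": (0.25, [5, 4, 4]),
    "indigenous capacity":    (0.25, [2, 3, 3]),
}
print(f"SCEIV = {sceiv(project):.2f}")
```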
6.0 CONCLUSIONS
It has been seen that some of the common causes of concrete construction failures are attributable to technical and human errors that arise when the construction labour force and professional teams do not give proper attention to basic concepts and advances in concrete technology. Poor workmanship, poor concrete quality and unsafe removal of scaffolding contributed to the collapse of the new reinforced concrete building discussed in the case study. It is proposed that an important reason for such failure is the lack of technology transfer to all sectors of the construction industry. This can best be addressed through the development of locally appropriate design codes and specifications, the establishment of local and regional coordinating structures which represent the development interests of the concrete sector, and the alignment of the curricula of education and training institutions to attend to the learning needs of employees in the sector. Furthermore, civil engineering institutions of higher learning are better placed to provide special course programs and from time to time adjust their curricula to include
relevant topics, especially construction failures, concrete materials technology and the understanding of design concepts. In addition to identifying failure causes, evaluation of the impact of construction failures on sustainable development needs to be considered from an engineering perspective. Repairs or new construction following failures could be conducted with value-adding components that promote sustainable development and perhaps recover some of the long-term cost of the failure. The use of an algorithm that can be developed to analyze the impact of construction failures on sustainable construction has been suggested. Through these approaches, some mistakes made at the inception of concrete construction in more developed countries could be avoided or improved upon by developing countries.
REFERENCES
AEF (2004), Africa Engineers Forum - protocol of understanding, second edition, SAICE, Private Bag X200, Halfway House, 1685, South Africa.
Ballim, Y. (1999), Localising international concrete models - the case of creep and shrinkage prediction. Proceedings of the 5th International Conference on Concrete Technology for Developing Countries, New Delhi, November 1999. National Council for Cement and Building Materials, India, pp. III-36 to III-45.
Bogere, H. and Senkabinva, M. (2004), Collapsing building buries 25, The Monitor, news article, 2 September 2004, Monitor Publications Limited, P.O. Box 12141, Kampala, Uganda.
CERF, The future of the design and construction industry: where will you be in 10 years?, CERF monograph, 2131 K Street NW, Suite 700, Washington DC 20037.
Feld, J. (1964), Lessons from failures of concrete structures, American Concrete Institute, Monograph No. 1, Detroit, MI, USA.
Kenai, S., Laribi, A. and Berroubi, A. (1999), Building failures and concrete construction problems in Algeria - statistical review, Proceedings of the International Conference on Infrastructure Regeneration and Rehabilitation, University of Sheffield, ed. R.N. Swamy, 28 June - 2 July 1999, p. 1147.
McKaig, T.H. (1962), Building failures: case studies in construction and design, McGraw Hill, New York, 261 pp.
UNBS (2004), Preliminary report on the collapse of the building for J & M Airport Road Hotel apartment and leisure centre on Bwebajja Hill, Entebbe Road, Uganda National Bureau of Standards, Plot M217 Nakawa Industrial Area, P.O. Box 6329, Kampala, Uganda.
Wiezu, Q. (2004), What role could concrete technology play for sustainability in China?, Proceedings of the International Workshop on Sustainable Development and Concrete Technology, ed. K. Wang, 20-21 May 2004, p. 35.
DEVELOPMENT OF AN INTEGRATED TIMBER FLOOR SYSTEM
F. van Herwijnen, Department of Structural Design, University of Technology Eindhoven, Netherlands; ABT Consulting Engineers, Delft/Velp, Netherlands.
A. J. M. Jorissen, Department of Structural Design, University of Technology Eindhoven, Netherlands; SHR Timber Research, Wageningen, Netherlands.
ABSTRACT
The requirements of building structures are likely to change during their functional working life. Therefore, designers of building structures should strive for the best possible match between technical service life (TSL) and functional working life (FWL). Industrial, Flexible and Dismountable (IFD) building is defined and presented as a design approach for building structures to deal with these aspects. The IFD concept combines sustainability with functionality and results in a higher quality level of buildings. The IFD design approach leads, among other things, to integration and independence of disciplines. This will be shown in the development of a new lightweight integrated timber floor system. This timber floor system makes use of the neutral zone of the floor to accommodate technical installations. The paper describes the composition of the integrated timber floor system and focuses on the dynamic behavior (sound insulation and vibration) and fire safety of this lightweight floor system.
Keywords: Functional working life; IFD building; Integration; Floor system; Timber structures; Vibrations; Fire safety.
1.0 INTRODUCTION
The design process of structures should consider the whole period of development, construction, use and demolition or disassembly. The requirements of building structures change during their lifetime. The following terms regarding the lifetime of structures can be defined: (i) technical service life (TSL): the period for which a structure can actually be used for its intended structural purpose (possibly with necessary maintenance but without major repair); (ii) functional working life (FWL): the period for which a structure can still meet the demands of its (possibly changing) users (possibly with repairs and/or adaptations).
Because of the large expenses often involved in adapting building structures, it can be advantageous to strive for a functional working life equal to the technical service life. The IFD concept as described hereafter makes this possible. On the other hand, there is a tendency to organize the horizontal distribution of installations in combination with the floor system. To save height, the installations are accommodated inside the floor. To fulfil the changing demands of users, installations should be reachable for adaptations and repair during their technical lifetime. Also, because of the different technical lifetimes of the floor structure and the installations, the latter should be reachable inside the floor for replacement. To facilitate this, integrated floor systems have been developed, both as concrete and as composite structures. To save weight, lightweight integrated steel floor systems have also been introduced, however with uncomfortable vibration behavior. For this reason, the possibility of developing an integrated timber floor system with comfortable vibration behavior was investigated.
2.0 IFD CONCEPT
From the important notion of striving for sustainable building rose the concept of IFD building: Industrial, Flexible and Dismountable building. Industrialized and flexible building in itself is not new; the combination with dismountable building, however, is. The three elements of IFD building can be defined as follows. Industrial building in this context is the industrial manufacture of building products. Flexibility is the quality of a building or building component which allows adjustments according to the demands and wishes of the users. Flexibility may relate to two stages:
- Design stage: variability in the composition and the use of material;
- User stage: flexibility to adjust the composition and the applied building components to the changing demands of the same or varying users while in use.
Dismountable building is the construction of a building in such a way that a building component may be removed and possibly re-used or recycled, soiled as little as possible by other materials, and without damaging the surrounding building components. (In recycling we do not re-use the complete product, but only its raw material.) Dismountable building is also a means for the realization of flexibility, because building components may be easily detached and replaced by other (industrial) building components. The IFD concept combines sustainability with functionality and results in a higher quality level of the building (Van Herwijnen, 2000). Industrial building increases the quality of the components, reduces the amount of energy for production and construction and reduces the amount of waste on the building site: less waste and less energy. Flexibility by adaptation of the building structure increases the functional working life: long life. Dismountable building makes re-use of elements/components or restructuring possible: loose fit and less waste.
3.0 INTEGRATED FLOOR DESIGN
The IFD philosophy leads, among other things, to integration and independence of disciplines. Integration concerns the design of components taking other components into consideration; independence relates to the independent replaceability of components. This can be shown in three existing integrated floor systems: the composite Infra+ floor, the steel IDES floor and the concrete Wing floor, described and discussed in Van Herwijnen (2004). The goal of this research was to develop an integrated timber floor system that fulfills modern comfort criteria regarding vibrations, acoustics and fire safety.
4.0 STARTING POINTS FOR THE INTEGRATED TIMBER FLOOR SYSTEM
As stated before, the new timber floor system should be IFD-based: industrial in its way of fabrication, i.e. prefabricated, flexible and dismountable. Besides that, the floor has to:
- accommodate technical installations inside the floor;
- be suitable for both office and residential buildings, resulting in a live load of 3 kN/m2 and a dead load of 0.5 kN/m2 for lightweight separation walls;
- have a free span of maximum 7.2 meters;
- have a width based on a multiple of a modular measure of 300 mm, with a maximum of 2.4 meters due to transport restrictions;
- transfer wind loads from the facades to diaphragm walls every 14.4 meters;
- have a fire resistance against failure of 90 minutes (top floor level <= 13 meters above ground level) or 120 minutes (top floor level > 13 meters above ground level);
- have a comfortable vibration behavior.
5.0 FLOOR DESIGN
5.1 Technical Installations
The dimension of the installation zone inside the floor is determined by the dimensions of the air ducts, their connections, the air grates and the space to be conditioned. The choice of the best installation system from a list of alternative solutions was made using a multi-criteria method. This resulted in an installation with: a balanced ventilation system; all air ducts inside the floor system, always reachable from above; a climate window (downstream type) in the facade; and air exhaust in sanitary rooms and climate facades. For a space to be conditioned of 7.2 x 3.6 meters, the installation zone inside the floor was determined to be at least 780 x 260 mm for rectangular air ducts, see Fig. 1.
5.2 Layout of the Ground Plan
Dutch office buildings usually have a ground plan with two bays of 7.2 meters and a central corridor of 2.4 meters. The central corridor may have a suspended ceiling to create space for technical installations.
For residential buildings this ground plan also fits: two zones next to the facades for living and a central zone for installation shafts, vertical transport, bathrooms, kitchens and washrooms. This results in a typical cross section as shown in Fig. 2.
5.3 Typology of the Floor Section
To integrate the technical installations inside the floor thickness, a hollow floor section is needed. To reach the installation components from above for maintenance and repair, the top floor should be removable. This means that the top floor cannot be a structural part. The structural components should be a combination of a floor plate (as a physical separation between two stories) stiffened by beams. Fig. 3 shows possible typologies of the floor sections. Typology c, with a width of 2.4 meters, was selected. The floor plate is a sound and fire barrier, and should not be penetrated. Adaptations to the installations can be done from above, without approval of neighbors below. No suspended ceiling is necessary.
[Figure: required installation zones for a rectangular cross-section air distribution system (150 x 400 mm) and a circular cross-section air distribution system (diameter 250 mm), with vertical cross-section and top-view dimensions.]
Fig. 1 Required installation zone inside the floor.
Fig. 2 Typical cross section over the building, with two bays of 7.2 meters and a central corridor of 2.4 meters.
Fig. 3 Typologies of the floor sections: a = U-shape, b = T-shape and c = UU-shape.
6.0 COMPOSITION OF THE TIMBER FLOOR SYSTEM
6.1 Floor Spanning 7.2 Meters in the Facade Area (see Fig. 4)
For the floor plate a laminated veneer lumber was chosen: Kerto-Q, 33 mm thick. Plywood was not an option, because it is not available in a length of 7.2 meters. Moreover, Kerto-Q has a higher flexural stiffness than plywood in the main direction (parallel to the floor beams). The floor beams, with dimensions of 110 x 350 mm, are made of laminated timber, class GL28h, because of the needed length of 7.2 meters. Plate and beams are glued together and act as a composite T-structure. On top of the floor plate an acoustic and fire-protecting insulation of rock wool is applied, with a thickness of minimum 70 mm to maximum 100 mm.
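As a rough illustration of how the glued plate and beams act together, the bending stiffness of one T-unit can be estimated with the transformed-section method sketched below. This is a minimal sketch only: the 600 mm flange width and the moduli of elasticity (taken here as 10,500 N/mm2 for Kerto-Q and 12,600 N/mm2 for GL28h) are indicative assumptions, not values given in the paper.

```python
# Hedged sketch: bending stiffness of the glued plate-and-beam T-section.
# Geometry follows the paper (33 mm Kerto-Q plate, 110 x 350 mm GL28h beam);
# the beam spacing (flange width) and E-moduli are indicative assumptions.

def composite_ei(parts):
    """parts: list of (E [N/mm2], b [mm], h [mm], y_centroid [mm]) rectangles."""
    ea = sum(e * b * h for e, b, h, _ in parts)
    eay = sum(e * b * h * y for e, b, h, y in parts)
    y_na = eay / ea  # neutral axis position from the reference datum
    ei = sum(e * (b * h**3 / 12 + b * h * (y - y_na)**2) for e, b, h, y in parts)
    return y_na, ei

# Datum at the top of the plate, y measured downwards.
plate = (10500.0, 600.0, 33.0, 33.0 / 2)           # E, width, depth, centroid
beam = (12600.0, 110.0, 350.0, 33.0 + 350.0 / 2)
y_na, ei = composite_ei([plate, beam])
print(f"neutral axis {y_na:.0f} mm below the top, EI = {ei/1e12:.1f} x 1e12 N mm^2")
```

With these assumed values, the neutral axis falls roughly 150 mm below the top, i.e. inside the beam, which is consistent with the paper's idea of a neutral zone available for installations.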
One sample has a very high electrical conductivity (EC) of about 26 mS/cm; it is put in the very high salinity and very high sodium hazard class (C4-S4) and is hence not suitable for irrigation. However, all the other samples analyzed have low electrical conductivity and low salinity hazards. This therefore puts them in the low salinity and low sodium hazard class (C1-S1) using the USDA classification system. These samples have EC < 0.25 millisiemens/cm and a low sodium adsorption ratio (SAR), whereas the rejected sample has EC and SAR values of 26 mS/cm and 40 respectively, which are above the WHO standard, as well as high contents of basic cations such as Na, K, Mg and Ca, which contribute to hardness and scale formation in water.
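For reference, the USDA salinity-hazard classes used above can be expressed as simple EC thresholds. The sketch below uses the standard USDA breakpoints (0.25, 0.75 and 2.25 mS/cm); the sodium hazard (S) classes additionally depend on SAR and are omitted here, and the sample values are illustrative rather than the study's data.

```python
# Hedged sketch of the USDA salinity-hazard classes (C1-C4) referred to above.
# Breakpoints are the standard USDA EC limits in mS/cm (equivalently dS/m);
# the example values are invented, not the paper's measurements.

def salinity_class(ec_ms_per_cm):
    """Return the USDA salinity hazard class for a water sample."""
    if ec_ms_per_cm < 0.25:
        return "C1 (low)"
    if ec_ms_per_cm < 0.75:
        return "C2 (medium)"
    if ec_ms_per_cm < 2.25:
        return "C3 (high)"
    return "C4 (very high)"

for ec in (0.12, 26.0):  # a typical accepted sample vs. the rejected one
    print(f"EC = {ec} mS/cm -> {salinity_class(ec)}")
```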
As for the soils, all the samples analyzed have the same combination of land suitabilities for the several crops considered, which is based on the poorly drained clays. The land is very suitable for both rice and maize cultivation with fertility limitations, but only marginally suitable for cassava and vegetables, with texture limitations for the former, fertility limitations for the latter and wetness limitations for both.
5.0 RESEARCH PRIORITIES
The soil conditions and water quality contribute towards a favourable environment for rice production. Having considered the quality of all the samples good enough for irrigation, further research should be carried out regarding the exact quantity of water required by rice for the entire growing season. Estimating irrigation requirements for rice is difficult since its actual evapotranspiration is about 110 percent of that of grass (Seckler et al, 1998). Rice fields are kept flooded primarily for weed control. This creates high percolation 'losses' from the fields. The belief that either running water through rice fields or simply holding stagnant water increases yield (and perhaps taste) is erroneous. There is no scientific evidence for this belief, except that during very hot days running water may beneficially cool the plant. On the other hand, this practice flushes out the fertilizer and contributes to water pollution. This aspect can only be evaluated after a more precise identification of development options at all the sample sites. If necessary, this will be studied where irrigation schemes with their main supply from the river have been identified.
6.0 ACKNOWLEDGEMENTS
The authors wish to acknowledge the immense contributions of Engr. Richard Ekpe, Director, Engineering Services, Akwa-Ibom State Ministry of Agriculture and Natural Resources, Uyo, for his assistance in site assessment and for the use of some of the project's equipment in data processing. The efforts of Anietie Bassey and Peter Offong in data collection are also appreciated. The grant for the project was sourced from the ADB (African Development Bank) by the government of Akwa-Ibom State.
REFERENCES
Akinyosoye, V.O. (1985) Senior Tropical Agriculture, Macmillan Publishers Limited, London, pp. 88-90.
Association of Official Analytical Chemists (1990) Official Methods of Analysis, 15th Edition, AOAC, Arlington, Virginia.
FAO (1998) Food Production and Consumption Database, Food Production and Import Data for South Africa, Rome, FAO.
International Food Policy Research Institute (IFPRI) (2002) Green Revolution, Curse or Blessing, Washington DC.
National Population Census, Ministry of Internal Affairs, Abuja, 1992.
Ngambeki, D.S. and Idachaba, F.S. (1985) Supply Responses of Upland Rice in Ogun State of Nigeria: A Producer Panel Approach, Journal of Agricultural Economics, Vol. 35, No. 2, pp. 239-249.
Seckler, D., Amarasinghe, U., Molden, D., de Silva, T. and Barker, R. (1998) World Water Demand and Supply, 1990 to 2025: Scenarios and Issues, Research Report 19, Colombo, Sri Lanka, International Water Management Institute (IWMI).
Stern, P.H. (1994) Small Scale Irrigation, Intermediate Technology Publications, London, pp. 13-18.
This Day (2004) A Report Titled 'FAO Organizes Global Contest to Boost Rice Production' on the Agric Business Page of This Day Newspaper, by Crusoe Osagie, Vol. 10, No. 3291, p. 40, Tuesday, April 27.
Twort, A.C., Law, F.M. and Crowley, F.W. (1987) Water Supply, 3rd Edition, Edward Arnold Publishers, London, pp. 47-56, 200-230.
WHO (1996) Guidelines for Drinking Water Quality: Health Criteria and Supporting Information, 2nd Edition, Geneva, 271 pp.
EFFICIENCY OF CRAFTSMEN ON BUILDING SITES: STUDIES IN UGANDA
H. Alinaitwe, Department of Civil Engineering, Makerere University, Uganda
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
B. Hansson, Division of Construction Management, Lund University, Sweden
ABSTRACT
Many researchers have recently been concerned about low and, in some cases, declining labour productivity in the construction industry. Some have gone as far as to question whether labour is efficiently utilized. This paper reports on a survey carried out on building craftsmen to find out how efficiently they utilize the time available to them for building activities on site. It involved taking observations of 150 craftsmen from 52 building sites and 45 contractors. In all, 37,500 usable observations were made. The study found that the craftsmen use about 20 percent of the available time for making the buildings grow. The statistics obtained are comparable with what has been found in other countries. It was also found that building construction craftsmen spend about 33 percent of their time on non-value adding activities. Research should be directed at reducing the non-value adding activities on sites.
Keywords: Activity Sampling, Craftsmen, Building Sites, Efficiency, Productivity
INTRODUCTION
Many researchers have recently been concerned about low and in some instances declining labour productivity in the construction industry. Allmon et al. (2000), for example, show that labour productivity in the United States has been declining over the years. In the United Kingdom, the Egan report (1998) was produced following public outcry. A lot of research is currently going on to improve the performance of construction industries following various reports. According to Buchan et al. (1993) and Gilleard (1992), labour cost is somewhere between 20% and 50% of the total project cost. Hence, how efficiently labour is utilised on construction sites is very important for the improvement of the performance of the industry. The construction industry is characterised by repeated delays and cost overruns, more so in developing countries (Mansfield et al., 1994). The time and cost overruns have been so severe in some cases that questions as to the efficiency of human factors in the construction process have emerged (Imbert, 1990). Research has been going on in developed countries into how efficiently labour is utilised. According to Jenkins and Orth (2004), the time used by workers on a daily basis on productive work averages about 29% of the total time available for construction work.
Oglesby et al. (1989) argue that direct work activities on construction sites take 40 to 60% of the available time. Agbulos and AbouRizk (2003) made activity-sampling studies on plumbers, with each activity observed over a full-time shift. They concluded that 32% of the time was value adding whereas 68% was classified as non-value adding. Motwani et al. (1995) noted that improving communication skills, preplanning and stricter management could help to raise the individual productivity rate from an average of 32 percent productive time per hour to almost 60 percent per hour. Motwani et al. further argue that concentrating productivity improvement on the larger portions of non-productive employee time would be more effective in improving productivity in construction, where there are a variety of uncontrollable productivity influence factors. Strandberg and Josephsson (2005) found that building workers in Sweden spend less than 20% of their time on direct work activities. However, there has not been a similar study in the developing countries, where the level of mechanisation is low and construction activities are largely labour intensive. The studies by Orth and Jenkins (2003) and Strandberg and Josephsson (2005) used activity sampling but concentrated on single sites and on a few craftsmen. To have a better picture, one needs a wider view from a number of sites and different tradesmen. The objective of this research was to find out how efficiently labour is utilised in Uganda and to compare the results with those from other countries. The other objective was to find out how the time is distributed, so that efforts in future research can focus on reducing wasted time and non-value adding activities. Activity sampling was used to find out the way construction workers utilize the time available to them.
2.0 METHODS
2.1 Research Method
There are five research styles: experiment, survey, action research, ethnographic research and case study (Fellows and Liu, 2003). Research in construction is usually carried out through experiments, surveys or case studies. Because of the several factors that affect construction productivity, experiments would not be easily performed; they would require a lot of time and the cost would also be high. Case studies would also not be appropriate because they provide a limited area of study. Activity sampling, motion analysis, time study, and the method productivity delay model have been used before for evaluating the influence of factors on, and the efficiency of, worker productivity (Adrian and Adrian, 1995). In this case, activity sampling was used. It is a simple, quick and inexpensive technique which involves a series of instantaneous observations of work in progress taken at random times over a period of time (Jenkins and Orth, 2003). Observations, also known as samples, which can be taken from a large number of workers, are compiled together at the end of the study to show the percentage of the day spent by workers performing various activities. By using the information obtained from the study, one can evaluate which components are detrimental to productivity and where improvement is needed. It provides information on the amount of time workers spend on productive and non-productive work and identifies site-specific factors that have either a positive or an adverse effect on productivity. It is less obtrusive to workers since the samples are random. Workers are more apt to cooperate with activity sampling since the
results focus on the work performed by workers as a whole, rather than singling out individual performance (Jenkins and Orth, 2003). Winch and Carr (2001) describe activity sampling as one of the best ways of obtaining detailed knowledge of the performance of any production process. Results from work sampling can be used as a benchmark for future studies. Activity sampling is less controversial than other approaches such as output per unit time. The argument made against activity sampling is that the direct work time does not necessarily correlate with unit rate productivity: the level of labour skill and the standard of equipment can influence productivity without necessarily influencing the percentage of time used on direct work activities. However, there is a high correlation between the time used efficiently and the level of productivity. The purpose would therefore be to identify the causes of inefficiencies and find ways of reducing or eliminating them altogether.
2.2 Observation Form Design
A form was designed for taking observations of worker activity. It was based on the breakdown of time used by Winch and Carr (2001). The time when workers are on site can be utilized as productive time; statutory ancillary time; support ancillary time; and non-value adding wasted time. Productive time comprises time for making the building grow (F); preparation of materials (P); handling materials at the workplace (H2); cleaning up (C); and unloading (U). Statutory ancillary time comprises official break (BK); safety related (HS); and inclement weather (RO). Support ancillary time comprises supervision (SU); material distribution support (H3); setting out (T3); and testing (T2). Non-value adding wasted time comprises absent (A); materials transfer (H1); not working (I); making good (RT); walking around (W); and waiting (N).
2.3 Pilot Studies
The observation form was shown to two researchers, whose input was taken into account by modifying the layout of the form. Pilot studies were then carried out to ensure the clarity and relevance of the form to the research assistants. Five different research assistants tried it on one bricklayer, one roof joiner, one painter, one concretor and one plasterer so that they could learn how to use it and become conversant with the codes. A second phase of the pilot study was conducted on another set of five craftsmen. Based on the feedback received, it was calculated that the percentage of time used on making the building grow was 19%.
2.4 Sample Size
Use was made of the formula in equation (1) given by Harris and McCaffer (2001):
N = \frac{Z^2 \, P \, (1 - P)}{L^2} \qquad (1)

where N is the number of observations required; P is the activity rate observed from the pilot study, in this case 19%; L is the limit of accuracy required, in this case 5%; and Z is the standard normal variable depending on the level of confidence, in this case 1.96 at the 95% level of confidence.
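Equation (1) is straightforward to evaluate; the short sketch below simply codes the formula with the values quoted in the text and reproduces the figure used in the next paragraph.

```python
# Sketch of equation (1) with the values quoted in the text: 19% activity
# rate from the pilot study, 5% limit of accuracy, z = 1.96 at 95% confidence.

def observations_required(p, limit, z=1.96):
    """Number of activity-sampling observations needed (eq. 1)."""
    return z**2 * p * (1 - p) / limit**2

n = observations_required(p=0.19, limit=0.05)
print(f"N = {n:.0f}")  # ~236; the study rounded this up to 250 per craftsman
```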
Substituting the values above in equation (1) gives N = 236, and so 250 samples per craftsman were adopted. O'Brian (2001) regards 5000 samples as very accurate, to around plus or minus 2 percent. As a result, activity sampling targeted 250 observations on each craftsman. A decision was made to make observations on 30 tradesmen in each of the 5 categories in order to have a big enough sample of 7500 observations per category. The observations were made at intervals of 5 minutes, such that for each person they were spread over three days.
2.5 Sample Selection
The selection of the subjects for study was carried out through quota sampling as follows. Contractors registered with the contractors' association, the Uganda National Association of Building and Civil Engineering Contractors (UNABCEC), and doing formal building contracts were targeted. From the survey, data was gathered from as many different contractors and as broad a geographic area within Uganda as possible. For the purposes of this survey, the mailing list of building contractors during the year 2005 was used. There were 167 contractors on the list, but for various reasons 30 could not be reached, so we reduced our target to 137 contractors. They were requested by telephone to identify sites where observations could be made on the following activities:
- bricklayers laying a 230 mm brick wall of height not more than 3 m;
- concrete layers laying oversite concrete 100 mm thick on hardcore;
- painters applying paint on a freshly plastered wall;
- joiners assembling timber for a roof with timber trusses and rafters;
- plasterers applying 1:4 cement-sand plaster on a brick wall not more than 3 m high.
The contractors would inform the research team when they would be carrying out work on any of those items. These are typical activities on new building work and they come from sections that form big cost centres. Observations were made on one craftsman in each of the activities per contractor until 30 craftsmen in each category of work had been observed.
2.6 Surveys
Meetings were held with the site foremen to inform them about the purpose and nature of the study. They also confirmed the availability of work in the desired categories and identified the craftsmen to use in the study. The exact location of the work areas and site-specific issues such as times for start, break, lunch and end of day were made known to the survey team beforehand. The observations made on the first day were not utilized. This was to enable the observer to settle in on site and also to make sure that the observations made did not affect workers on site. The supervisor would then explain to the workers that the purpose of the study was research, and not to evaluate the work performance of each individual employee. Only tradesmen with not less than 2 years' experience were used.
Officially, work would normally start at 08:00 and close at 16:30 hours, with a 45-minute lunch break and a 15-minute break, but the specifics depended on the sites. 52 different building sites and 45 contractors were involved from various parts of the country. Sites
were chosen based on the availability of work in the required categories for at least four working days continuously, and where workers were paid on daily performance rather than on a subcontract basis. Each observer would observe not more than three craftsmen on one site, and they would be of different trades. Observations were made on only one craftsman from each trade per site. Three research assistants carried out the observations over a period of three months, from August to October 2005. It was the observer's responsibility to record what the individual worker was doing at the very first instant the observation took place. The observer would record according to the categories indicated on the observation sheet. The observations were made at random times, but only one observation would be made per person within any interval of 5 minutes.
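Once the coded observations are collected, the analysis reduces to tallying the codes into the four time classes defined on the observation form. A minimal sketch, with an invented observation list, is given below; the code-to-class grouping follows the form described in section 2.2.

```python
from collections import Counter

# Hedged sketch of the roll-up from sampled codes to the four time classes.
# The codes follow the observation form; the observation list is invented.

CLASSES = {
    "productive": {"F", "P", "H2", "C", "U"},
    "statutory ancillary": {"BK", "HS", "RO"},
    "support ancillary": {"SU", "H3", "T3", "T2"},
    "non-value adding": {"A", "H1", "I", "RT", "W", "N"},
}

def time_distribution(observations):
    """Return the percentage of observations falling in each time class."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {cls: 100 * sum(counts[c] for c in codes) / total
            for cls, codes in CLASSES.items()}

sample = ["F", "F", "H2", "BK", "SU", "W", "N", "F", "I", "P"]  # invented data
for cls, pct in time_distribution(sample).items():
    print(f"{cls}: {pct:.0f}%")
```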
3.0 RESULTS AND DISCUSSION
The percentages of time that the various categories of craftsmen spent performing different work items were calculated and are presented in Table 1. The results show that on average the productive time, which comprises time spent on making the building grow, preparation of materials, handling materials at the workplace, cleaning up and unloading, takes about 39.6% of the total time available on site. Statutory ancillary time, comprising official break, safety related time and inclement weather, takes about 13.7%. Support ancillary time, comprising supervision, material distribution support, setting out and testing, takes about 13.5%. Non-value adding wasted time, which comprises time when the workers are absent, materials transfer, not working, making good, walking around and waiting, consumes about 33.3% of the total time. The percentage of time spent on making the building grow for the various building activities averages 19.34%. Bricklayers perform worst on this aspect because they wait for someone else to prepare the mortar and then deliver the bricks to the work site, thereby losing a lot of time. Painters scored highest, possibly because their tasks do not involve a lot of transferring of materials. The painters spend the biggest percentage of time (6.55%) cleaning up. This could be partly because they do not provide ample cover of the areas where paint could easily spill; one way for them to improve this is to cover all affected surfaces before work starts. The concretors spend the most time unloading (5.04%). This follows from the very nature of the concreting process: concrete is unloaded from chutes and wheelbarrows on the sites where observations were made. Bricklayers spend the least time unloading; there were porters to unload the materials at the work place, and the bricklayers only stepped in to unload when there was a shortage at the work areas. The average percentage of time spent on official breaks was 5.42%. It was observed, however, that the official break time was not strictly adhered to. Some people were taking longer than granted, and in some cases some were taking a shorter time when there was a need to complete given tasks. This had an effect in that the gangs were not balanced when some of the workers were not there. It is important that all observe break times and resume work together in order to minimize unbalanced gangs. Regarding safety related issues, the workers spend very little time on briefing on safety related matters. Most of the instances recorded were attending to injuries and the preparation of supports and barricades to safeguard the workers. Inclement weather on average takes a big percentage of time. Uganda, being in the tropics, experiences heavy torrential rains; these usually come in the afternoon, last about 30 minutes, and tend to disrupt work on many sites. The sun is at times hot around midday and makes working uncomfortable when the workers are not protected from it. Construction practice in Uganda is such that most of the work is done in situ and the roof is provided last, after the walls have been erected, as a matter of construction procedure. There is a need to provide cover on some occasions by changing the construction procedure so that the roof is provided earlier, in order to avoid inclement weather. Setting out takes about 2.69% and testing about 0.60% of the time on average. For a big part of the time (4.4%), the workers were absent. They were generally missing
in action. The worst performers in this area are the bricklayers, at 6.51% of the time. This may show laxity in supervision and enforcement of work. It was observed that a lot of time is spent on materials transfer, i.e. 5.31% of the total time. The worst culprits are the plasterers, who spend 7.2% of their time on this work item. It is therefore necessary to examine how the work gangs are formed; transferring materials is mainly the work of labourers on many sites. About 6.6% of the time on average is spent not working, with the workers largely idle and talking amongst themselves. The survey indicates that for 5% of the time workers were walking around, while for another 5% they were waiting for materials. This reflects laxity in inspection and inadequate work targets. For 5.59% of the time, the craftsmen were seen reworking. Rework not only consumes time but also leads to a lot of wastage of materials. This research shows that the efficiency of building craftsmen in Uganda is low. However, what is interesting to note is that the availability of workers for productive work is comparable to that in developed countries, where workers have easy access to tools, equipment and materials. The availability of time for making the building grow of 19.3% is comparable with what Strandberg and Josephsson (2005) obtained after similar studies in Sweden. Yet in Sweden, project managers assume that craftsmen are available for productive work 50% of the total time on site. The average time spent by workers on productive activities, 39.6%, seems to be less than, or at most on the lower boundary of, the range of 40-60% observed in the United States. This could partly be because most of the materials handling in Uganda is manual; some of the materials, like fresh concrete, are mixed on site and this takes the craftsmen's time. The skills of the workers and the level of supervision are poorer in comparison and are possible causes of the lower availability for productive work. All the workers studied were men. On average, the percentage of total time used by workers not working, walking around, waiting and absent, excluding the official break, amounts to 22.4%. This is much greater than the 14% relaxation allowance recommended by the International Labour Organisation (ILO) (1978), assuming that they are men working under awkward (bending) conditions and carrying 4 kg loads.
4.0 CONCLUSIONS
For various reasons, the time that the workers spend on productive activities, 39.6 percent, seems to be low. However, it is comparable with what was observed in Sweden and the United States. The proportion of time spent on non-value adding activities, 33.3 percent, is significant. The time used by craftsmen on productive activities in the building industry is lower than that recommended by the ILO. Comparison with studies from developed countries shows that there is not much difference in how efficiently the craftsmen are employed. Those in developed countries have machinery and equipment at their disposal, and that is why their productivity can be higher in spite of the low efficiency.
REFERENCES
Agbulos, A. and AbouRizk, S. M. (2003) An application of lean concepts and simulation for drainage operations maintenance crews. In Chick, S., Sanchez, P. J., Ferrin, D. and Morrice, D. J. (eds), Proceedings of the 2003 Winter Simulation Conference, pp. 1534-1540.
Adrian, J. J. and Adrian, D. J. (1995) Total Productivity and Quality Management of Construction. Stipes Publishing, Champaign, IL.
Allmon, E., Haas, C., Borcherding, J. and Goodrum, P. (2000) U.S. construction labour productivity trends, 1970-1998. Journal of Construction Engineering and Management, 126(2), 97-104.
Buchan, R. D., Fleming, F. W. and Kelly, J. R. (1993) Estimating for Builders and Quantity Surveyors. Butterworth-Heinemann, Oxford.
Egan, J. (1998) Rethinking Construction. HMSO Department of Trade and Industry, London.
Fellows, R. and Liu, A. (2003) Research Methods for Construction. Second edition, Blackwell Science, Oxford.
Gilleard, J. D. (1992) The creation and use of labour productivity standards among specialist subcontractors in the construction industry. Cost Engineering, 34(4), 11-16.
Harris, F. and McCaffer, R. (2001) Modern Construction Management. Fifth edition, Blackwell Publishing, London.
Imbert, I. D. C. (1990) Human issues affecting construction projects in developing countries. Construction Management and Economics, 8(2), 219-228.
International Labour Organisation (1978) Introduction to Work Study Methods. Third edition, ILO, Geneva.
Jenkins, J. L. and Orth, D. L. (2004) Productivity improvement through work sampling. Cost Engineering, 46(3), 27-32.
Mansfield, N., Ugwu, O. and Doran, T. (1994) Causes of delays and cost overruns in Nigerian construction projects. International Journal of Project Management, 12(4), 254-260.
Motwani, J., Kumar, A. and Novakoski, M. (1995) Measuring construction productivity: a practical approach. Work Study, 44(8), 18-20.
O'Brian, K. E. (2001) Improvement of On-site Productivity. K. E. O'Brian & Associates, Inc., Toronto, Canada.
Oglesby, C., Parker, H. and Howell, G. (1989) Productivity Improvement in Construction. McGraw-Hill, New York.
Strandberg, J. and Josephsson, P. (2005) What do construction workers do? Direct observations in housing projects. In A. S. Kazi (ed.), Systemic Innovation in the Management of Construction Processes, 184-193.
Winch, G. and Carr, B. (2001) Benchmarking on-site productivity in France and the UK: a CALIBRE approach. Construction Management and Economics, 19(6), 577-590.
BUILDING FIRM INNOVATION ENABLERS AND BARRIERS AFFECTING PRODUCTIVITY
H. Alinaitwe, Department of Civil Engineering, Makerere University, Uganda
K. Widen, Division of Construction Management, Lund University, Sweden
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda B. Hansson, Division of Construction Management, Lund University, Sweden
ABSTRACT
Efforts to promote innovation are at the heart of current research in building, in an endeavour to make it more productive and competitive. At the same time, there is a lack of statistical data on innovation. A review of the major barriers and enablers to innovation at the business level has been carried out, which formed the basis for making propositions about the factors that affect innovation. A questionnaire was then designed and a survey carried out on building contractors in Uganda. The identified enablers and barriers to innovation were then ranked and correlated. Having an educated, technically qualified workforce and having an experienced, diverse workforce are regarded as the greatest enablers to innovation for building firms that will drive productivity forward. The effect of design on construction and the level of tax regimes are regarded as the greatest barriers to innovation in construction firms.
Keywords: Construction Firm, Productivity, Innovation, Barriers, Enablers.
INTRODUCTION
Efforts to promote innovation are at the heart of current research in construction, in the endeavour to make it more productive and competitive. In comparison to other sectors, construction is usually regarded as a traditional or low-technology sector with low levels of expenditure on activities associated with innovation, such as research and development (R&D) (OECD, 2000; Seaden and Manseau, 2001). It appears that many construction firms are in a vicious cycle of low performance, anaemic levels of profitability, limited investment and poor organisational capabilities (OECD, 2000). In the UK, the Egan report outlines a new vision for the system of design, production and operation of the built environment, involving considerable investment in new technologies, management practices and techniques of production (Egan, 1998). Its recommendations have been taken up by researchers in the construction industry worldwide because of their general relevance. The new vision implies change and therefore a need for more innovativeness. In response to the challenges, changes are taking place in the delivery of construction goods and services around the world (Seaden et al, 2003).
There have been a number of case studies of how successful firms have been able to make a range of different organisational, managerial or technological innovations to overcome the limits of their environment (Slaughter, 1993; Sexton and Barrett, 2003). Innovation is common in all sectors; the potential for innovation for an individual firm is shaped by its own activities and the environment it operates within (Reichstein et al, 2005). A recent survey in Canada explores the importance of innovation and the use of advanced practices in shaping the performance of construction firms (Seaden et al, 2003). It focuses on the sources of success and failure in innovation among construction firms. However, the barriers and enablers of innovation faced by construction firms have not been sufficiently studied and quantified. The objectives of the present research were to identify and rank the main enablers and barriers to innovation in the building industry in Uganda. The main barriers and enablers of innovation at the firm level were identified through a literature search.
2.0 FIRM LEVEL INNOVATION ENABLERS AND BARRIERS
This section reviews the literature on barriers and enablers to innovation at the business/firm level. The identified factors are formulated into propositions and numbered FE1 ..., FB1 ..., etc. The factors are drawn from industry in general, but the intention was to capture those that affect productivity in construction firms.
2.1 Firm Level Innovation Enablers
The principal drivers for innovation are often created at the firm level, within a stimulating macro-economic context (Seaden & Manseau, 2001). In most industrialised countries, a few construction companies have achieved a superior market position through the use of innovative practices. Developing countries like Uganda need to create stimulating macro-economic conditions that will encourage innovation. There is great need for empirical studies to determine the key success factors of innovative firms to allow others to emulate their examples.
Research and development (R&D) is a key enabler, as the winning of new knowledge is the basis of human civilisation. There is strong statistical evidence of a positive relationship between R&D activities at the firm level and the adoption of innovations (Dodgson, 2000).
FE1: Innovation at the firm level is positively affected by research and development.
The implementation of quality control procedures is associated with innovation. Innovative firms integrate process improvement with quality control (Rothwell, 1992; Chiesa et al, 1996).
FE2: Innovation at the firm level is positively affected by quality control procedures.
A highly educated and technically qualified workforce is more receptive to innovations. The extent of training is associated with innovation (Reichstein et al, 2005).
269
International Conference on Advances in Engineering and Technology
FE3: Innovation at the firm level is positively associated with an educated, technically qualified workforce.
The proportion of staff who are scientists, engineers or managers and who have relevant experience in another company stimulates innovation. The use of technocrats increases the production of innovative ideas. It is also argued that organisations whose staff are from diverse backgrounds and experiences will be more receptive to innovation, as such staff will generate a wide range of innovative suggestions.
FE4: Innovation at the firm level is positively associated with an experienced, diverse workforce.
Communication, both internal and external, is vital for the implementation of innovations. Internal communication enables the circulation of new ideas to all employees. External communication enables interaction with suppliers and customers and therefore feedback from other stakeholders. Circulation of new ideas keeps the personnel aware of the firm's direction and enhances innovation (Chiesa, 1996; Souitaris, 2002).
FE5: Innovation at the firm level is positively associated with communication, both internal and external.
Strength in marketing enables innovation. Strong marketing programmes feature strong user linkages and a significant effort towards identifying user requirements (Cooper, 1984; Rothwell, 1992).
FE6: Innovation at the firm level is positively associated with strong marketing programmes.
The presence of a project champion is an enabler to innovation (Schon, 1975). The project champion is an individual who enthusiastically supports an innovation project and who is personally committed to it. He is particularly effective in maintaining impetus and support when the project encounters major difficulties.
FE7: Innovation at the firm level is positively associated with the presence of a project champion.
Teamwork is regarded as an issue of major interest in innovation and a number of authors have highlighted its importance. Teamwork, linkages and clusters for horizontal and upward cooperation are associated with innovation (Chiesa et al, 1996).
FE8: Innovation at the firm level is positively associated with teamwork, employment conditions and linkages.
2.2 Firm Level Barriers to Innovation
According to Pihkala (2002), the financing cost of invention and diffusion is a key barrier. The cost may not be easily affordable to many organisations.
FB1: Innovation at the firm level is negatively associated with the high financing cost of invention and diffusion.
Lack of risk propensity is a major barrier in many firms (Pihkala, 2002). Firms that are risk averse are less likely to be innovative. Adapting to change is part of innovation, and this involves taking some risk.
FB2: Innovation at the firm level is negatively associated with a lack of propensity to take risk.
A fragmented industry, with many small companies that are not in a position to meet the cost of R&D, hinders innovation (Seaden & Manseau, 2001).
FB3: Innovation at the firm level is negatively associated with fragmentation of the building industry.
High tax regimes also discourage innovation. Uncertainty of occupation and insecurity of employment stifle innovation. A lack of flexibility and of empowerment of workers in the employment policy, so as to encourage the creation and diffusion of knowledge, is a hindrance (Dodgson, 2000; Pihkala, 2002).
FB4: Innovation at the firm level is negatively associated with high tax regimes.
FB5: Innovation at the firm level is negatively associated with uncertainty of occupation of the workers.
FB6: Innovation at the firm level is negatively associated with lack of flexibility on the part of workers.
FB7: Innovation at the firm level is associated with the effect of design on construction.
3.0 METHODS
Surveys are one of the most frequently used methods of data gathering in social research. The survey protocol of random sampling procedures allows a relatively small number of people to represent a much larger population (Ferber, 1980). The opinions and characteristics of a population can be explained through the use of a representative sample. Surveys are an effective means to gain a lot of data on attitudes, issues and causal relationships, and they are inexpensive to administer. The study aimed at using a representative sample rather than anecdotal or case study evidence based on a few select firms. However, surveys can only show the strength of statistical association between variables. Cross-sectional surveys like the one used here do not explain changes in attitudes and views over time. Surveys also provide no basis to expect that the questions are correctly interpreted by the respondents.
3.1 Questionnaire Design
The main barriers and enablers to innovation that affect productivity were identified through a literature search, summarized in the literature review section. The search identified 8 main enablers and 7 barriers at the firm level. Respondents were asked to
rate each of the listed factors as either enabling or acting as a barrier to innovation, in terms of achieving greater productivity in the building industry. A five-point Likert scale (Kothari, 2003) was used, where 1 denoted 'no effect', 3 a 'fairly significant effect', and 5 a 'very big effect'.
3.2 Pilot Studies
Pilot studies were carried out to ensure the clarity and relevance of the questionnaire to the contractors. The questionnaire was first shown to two researchers. Based on their feedback, amendments were made to the questionnaire, and the second phase of the pilot study was conducted on four building contractors in Uganda. Based on the feedback received, minor amendments were again made to the questionnaire to remove any ambiguities and discrepancies. This pilot study was conducted to validate and improve the questionnaire in terms of its format and layout, the wording of statements and the overall content. The draft questionnaire was revised to include the suggestions of these participants. In short, the questionnaire was validated through this process, which provided the research with improvement opportunities before launching the main survey.

3.3 Sample Selection
The survey gathered data from chief executives of building contractors from as broad a geographic area within Uganda as possible. For this purpose, it was determined that the largest contractors registered with the contractors' association (UNABCEC) would be targeted. This follows the argument of Schumpeter (1976) that companies need to be large and in a dominant position in order to innovate. One of the aims of UNABCEC at the moment is to increase productivity (UNABCEC, 2004). It was decided that all those in categories A and B would be the source of potential participants. At the national level, one recognized way of categorizing construction companies is by UNABCEC class. The classification from A to E takes into account financial strength, size and ability to carry out jobs. Those in class A are the biggest, undertake works of the biggest magnitude, and include some international companies. For the purposes of this survey, the 2005 mailing list of contractors was reduced to those in classes A and B that deal in building construction. Owing to the relatively small number of firms within the two categories, all 57 building contractors in categories A and B were targeted. A total of 54 questionnaires were sent out; three companies did not participate for various reasons.

3.4 Survey Response
As a result of mailing, telephone and physical follow-up, a total of 44 questionnaires were completed out of the 54 that were sent to contractors, making the total response rate 82 percent, as summarized in Table 1. The survey package comprised a covering letter, the questionnaire and a pre-stamped self-addressed envelope.
Table 1: Response on questionnaire from the contractors

UNABCEC class   No. of questionnaires sent   No. of responses   Percentage response
A                          38                       34                  89
B                          16                       10                  63
All                        54                       44                  82
A review of the responses indicated no measurable differences in the respondents' answers to the questions, and because group B contained fewer than 30 responses, the two groups were combined for the analysis of this survey.

4.0 RESULTS AND DISCUSSION
This section contains a summary of the statistical analysis and gives the results of the survey and the discussion ensuing from them. The mean ratings, standard deviations and correlation coefficients were determined for enablers and barriers at the firm level as perceived by the contractors. Statistical analysis of the Likert scale ratings given through the questionnaires was conducted using the Statistical Package for Social Sciences (SPSS 10) software. The ranking of the factors according to the mean rating of the enablers and barriers to innovation in the building industry, both at the firm level and the national level, as perceived by building contractors is summarized in tables 2 and 3. Table 2 gives the ranking for enablers at firm level, starting with the highest rated; Table 3 gives the ranking for barriers at firm level.

The ranking of enablers at firm level in table 2 indicates that having an educated, technically qualified workforce (FE3) is the highest rated enabler to innovation for increasing labour productivity in the building industry, with a mean rating of 4.32. Having an experienced, diverse workforce (FE4) is rated the second highest enabler to innovation at the firm level. The high rating of a technically qualified workforce is in agreement with research which indicates that lack of skills is one of the major factors that negatively impact on productivity in the building industries (Reichstein et al, 2005; Sha and Jiang, 2003). The implication is that construction companies should invest more in training, or should find other ways of training their craftsmen, if there is to be an increase in innovation. At the moment, many firms are not directly involved in formally training their workforce.
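For readers who want to reproduce the descriptive statistics behind tables 2 and 3 outside SPSS, a minimal sketch follows. It is an illustration only, not the authors' analysis script; the factor labels and the ratings in it are hypothetical stand-ins for the questionnaire data.

import statistics

# Hypothetical Likert ratings (1-5) per factor; the real input would be the
# 44 contractor questionnaires described above.
ratings = {
    "Educated, technically qualified workforce (FE3)": [5, 4, 4, 5, 4],
    "Experienced, diverse workforce (FE4)":            [4, 5, 4, 4, 4],
    "Level of involvement in R&D":                     [3, 4, 3, 3, 4],
}

# Mean rating and sample standard deviation for each factor.
summary = {name: (statistics.mean(r), statistics.stdev(r))
           for name, r in ratings.items()}

# Rank factors from highest to lowest mean rating, as in tables 2 and 3.
ranked = sorted(summary.items(), key=lambda kv: kv[1][0], reverse=True)
for rank, (name, (mean, sd)) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: mean = {mean:.2f}, s.d. = {sd:.2f}")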
Table 3: Ranking of barriers to innovation at the firm level

Rank   Factor                                                     Mean   Standard deviation
1      Effect of design on construction (FB7)                     3.77   0.91
2      Level of tax regimes (FB4)                                 3.73   0.95
3      Level of uncertainty of occupation of the workers (FB5)    3.59   1.13
4      Level of fragmentation of the building industry (FB3)      3.48   0.85
5      Level of cost for invention and diffusion (FB1)            3.48   1.23
6      Degree of propensity to take risks (FB2)                   3.34   1.01
7      Level of flexibility on the part of workers (FB6)          3.07   0.95
The effect of design on construction (FB7) is rated the worst barrier as regards innovation at the firm level. This result seems to be in agreement with previous research suggesting that fragmentation of the industry and low levels of design-and-build type procurement are some of the biggest hindrances to innovation (Reichstein et al, 2005; Dulaimi et al, 2002). This might be used to support the argument that design should be more integrated with construction for contractors to be more innovative, and it suggests that the called-for shift away from the traditional form of procurement, where design is separate from construction, would make the building industry more innovative.

All the identified enablers and barriers have mean ratings of more than 3.0, which implies that all are taken as having at least a fairly significant effect on innovation in the building industry. Among the enablers, having an educated and technically qualified workforce and having an experienced, diverse workforce have average ratings above 4.0 at the firm level; these two factors are thus perceived as having the biggest effect on innovation. The contractors regard the level of involvement in R&D as the weakest of the listed enablers of innovation for improving productivity in the building industry, with a mean rating of 3.36. It may be that firms do not like spending on R&D and would prefer more government involvement in R&D, as was found in Singapore (Dulaimi et al, 2002). From Table 3, the standard deviations of the factors that are ranked highest are generally the smallest, which suggests closer agreement among the contractors in rating those factors with high mean ratings.

This survey was carried out with building contractors in focus because they are the ones who carry out the building work. The survey did not include the informal contractors, who also carry out a significant amount of construction work; the authors believed they could not easily obtain representative samples of informal contractors before gathering ample data on their activities and addresses. The survey could as well have included consultants, clients and other stakeholders in the construction industry, but each of these categories requires a different set of questions relevant to them. It is also important to note that barriers and enablers to innovation are related: lack of an enabler can be regarded as a barrier, and the converse is true. The search tried
to identify the major enablers and barriers at the industry level but it is possible that some of the factors were not included.
5.0 CONCLUSIONS
The enablers and barriers to innovation at the firm level, from the point of view of the building contractors, have been identified. Having a technically qualified workforce and having an experienced, diverse workforce are seen as the greatest enablers to innovation in building construction that will drive productivity forward. Construction firms and policy makers should therefore focus on how to strengthen the identified enablers. The effect of design on construction and the level of tax regimes are the worst barriers to innovation leading to low productivity in the building industry in Uganda. Parties to the construction process, especially designers and clients, should address the separation of design from construction. Policy makers in government should also address the level of taxation and how it affects innovation in the construction industry.
REFERENCES
Chiesa, V, Coughlan, P and Voss, C A (1996) Development of a technical innovation audit. Journal of Product Innovation Management, 13(2), 105-136.
Cooper, R. G. (1984) The strategy-performance link in product innovation. R & D Management, 14(4), 247-259.
Dodgson, M (2000) The Management of Technological Innovation. Oxford: Oxford University Press.
Dulaimi, M F, Ling, F Y, Ofori, G and De Silva, N (2002) Enhancing integration and innovation in construction. Building Research and Information, 30(4), 237-247.
Egan, J (1998) Rethinking Construction. London: HMSO Department of Trade and Industry.
Ferber, R. (1980) Readings in the Analysis of Survey Data. New York: American Marketing Association.
Kothari, C R (2003) Research Methodology, Methods and Techniques. New Delhi: Wisha Prakashan.
OECD (2000) Technical Policy: An International Comparison of Innovation in Major Capital Projects. Paris: OECD.
OECD/Eurostat (1997) Proposed Guidelines for Collecting and Interpreting Technological Innovation Data - Oslo Manual. Paris: OECD.
Pihkala, T, Ylinenpaa, H and Vesalainen, J (2002) Innovation barriers amongst clusters of European SMEs. International Journal of Entrepreneurship and Innovation Management, 2(6), 520-536.
Reichstein, T., Salter, A. and Gann, D (2005) Last among equals: a comparison of innovation in construction, services and manufacturing in the UK. Construction Management and Economics, 23(6), 631-644.
Rothwell, R (1992) Successful industrial innovation: critical factors for the 1990s. R & D Management, 22(3), 221-239.
Schumpeter, J. A. (1976) Capitalism, Socialism and Democracy. London: Routledge.
Seaden, G and Manseau, A (2001) Public policy and construction innovation. Building Research and Information, 29(3), 182-196.
Seaden, G., Guolla, M and Doutriaux, J (2003) Strategic decisions and innovation in construction firms. Construction Management and Economics, 21(6), 603-612.
Sexton, M and Barrett, P (2003) Appropriate innovation in small construction firms. Construction Management and Economics, 21(6), 623-633.
Sha, K and Jiang, Z (2003) Improving rural labourers' status in China's construction industry. Building Research and Information, 31(6), 464-473.
Schon, D (1975) Deutero-learning in organisations: learning for increased effectiveness. Organisational Dynamics, (3), 2-16.
Slaughter, E S (1993) Builders as sources of construction innovation. Journal of Construction Engineering and Management, 119(3), 532-549.
Souitaris, V (2002) Technological trajectories as moderators of firm-level determinants of innovation. Research Policy, 31(6), 877-898.
UNABCEC (2004) Improving Uganda's construction industry. Construction Review, 15(10), 18-19.
FACTORS AFFECTING PRODUCTIVITY OF BUILDING CRAFTSMEN - A CASE OF UGANDA
H. Alinaitwe, Department of Civil Engineering, Makerere University, Uganda
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
B. Hansson, Division of Construction Management, Lund University, Sweden
ABSTRACT
Poor productivity of construction workers is one of the causes of cost and time overruns on construction projects. The productivity of labour is particularly important in developing countries, where most building construction work is still done on a manual basis. This paper reports on a survey of project managers of building projects in Uganda. The managers were asked to rate, from their experience, the way 36 factors affect productivity with respect to time, cost and quality. The survey was carried out through a questionnaire, with responses received over a period of three months. The ten most significant problems affecting labour productivity were identified as: incompetent supervisors; lack of skills; rework; lack of tools/equipment; poor construction methods; poor communication, e.g. inaccurate instructions and drawings; stoppages because of work being rejected by consultants; political insecurity; tools/equipment breakdown; and harsh weather conditions.
Keywords: Labour, productivity, factors, ranking, building craftsmen.
1.0 INTRODUCTION
Construction industries in many countries are greatly concerned about the low level of productivity (Egan, 1998; Lim and Alum, 1995). Poor productivity of craftsmen is one of the most daunting problems that construction industries, especially those in developing countries, face (Kaming et al, 1997). Although some research has been carried out (Imbert, 1990; Olomolaiye et al, 1987; Kaming et al, 1997; Rahaman et al, 1990), research on construction craftsmen productivity is generally in its infancy in developing countries. The construction industry in Uganda constitutes over 7% of the Gross Domestic Product (Uganda Bureau of Statistics, 2005) and has witnessed steady growth for the last 20 years. It is assumed that any effort directed at improving productivity will greatly enhance the country's chances of realizing her development goals. The construction industry in Uganda suffers from cost and time over-runs (Mubiru, 2001). Over-runs in the construction industry are indicators of productivity problems, and improving construction productivity will go a long way towards eliminating time and cost overruns. Identifying and evaluating the factors that influence productivity are critical issues faced by construction managers (Motwani et al, 1995). Research on factors that affect productivity in
developed countries has been extensive (Yates and Guhathakurta, 1993; Borcherding, 1976). Strategies for performance improvement have been identified and implemented mainly based on the identified key factors. It is therefore important that the factors affecting productivity in an industry are well identified so that efforts can be made to improve the situation. However, the results from some of the earlier research were based on perceptions with regard to time only; for example, the Importance Index used by Lim and Alum (1995) is based on the frequency of encountering the factors. It is important that time, cost and quality aspects are all included in assessing productivity. The three common indicators of performance in construction projects are cost, schedule and quality (McKim et al, 2000). The objective of this study is to identify and rank the major factors that affect the productivity of craftsmen in Uganda. The goal is to find an appropriate strategy for improving the productivity of craftsmen in Uganda.
2.0 PRODUCTIVITY PROBLEMS
Although factors influencing productivity are widely researched, they are not yet fully explored, even in developed countries (Lema, 1996). There is therefore a need to study further the factors that affect labour productivity. To improve productivity, the impact of each of the factors can be assessed using statistical methods and attention given to those particular parameters that adversely affect productivity. What follows is a review of earlier studies.

Lack of materials, incomplete drawings, incompetent supervisors, lack of tools and equipment, absenteeism, poor communication, instruction time, poor site layout, inspection delay and rework were found to be the ten most significant problems affecting construction productivity in Thailand (Makulsawatudom and Emsley, 2003). Kaming et al (1997) found that lack of materials, rework, worker interference, worker absenteeism and lack of equipment were the most significant problems affecting workers in Indonesia. Lema (1996), through a survey of contractors in Tanzania, found that the major factors that influence productivity are leadership, level of skill, wages, level of mechanization and monetary incentives. Lack of materials, weather and physical site conditions, lack of proper tools and equipment, design, drawing and change orders, inspection delays, absenteeism, safety, improper plan of work, repeating work, changing crew size and labour turnover were found through a survey to be the most important factors in Iran (Zakeri et al, 1996); that ranking was based on project managers' perceptions of influence and potential for improvement. Motwani et al (1995) found through a survey in the United States of America that the five major problems that impede productivity are adverse site conditions; poor sequencing of works; drawing conflicts/lack of information; searching for tools and materials; and weather. Olomolaiye et al (1987) found that the most significant factors in Nigeria are lack of materials, rework, lack of equipment, supervision delays, absenteeism and interference. Lim and Alum (1995) found that the major problems with labour productivity in Singapore are recruitment of supervisors, recruitment of workers, high rate of labour turnover, absenteeism at the workplace, communication with foreign workers, and inclement weather.

From the literature cited above, material shortage was usually the most significant problem found in the various studies; according to Lim and Alum (1995), however, material shortage was ranked eighth. It is important to note that the questionnaires
and ranking were based on the time aspect of frequency of occurrence. However, quality and cost are equally important in assessing the factors that affect productivity. Craftsmen can deliver varying quantities of work, but the quality and cost should be acceptable to their supervisors. Rosefielde and Mills (1979) argued that any measure of construction productivity that does not account for changes in design and quality will lead to low, if not negative, measures of construction productivity.
3.0 METHODS
3.1 Research Method
Fellows and Liu (2003) highlight five research styles: experiment, survey, action research, ethnographic research and case study. Research in construction is usually carried out through experiments, surveys or case studies. Experiments on barriers and enablers in the construction industry would take a long time to yield results and would at the same time be expensive. Case studies would not provide results that are easy to generalise, as different companies face different problems. Surveys through questionnaires were found appropriate because of the relative ease of obtaining standard data appropriate for achieving the objectives of this study. Surveys are one of the most frequently used methods of data gathering in social research. The survey protocol of random sampling procedures allows a relatively small number of people to represent a much larger population (Ferber, 1980). The opinions and characteristics of a population can be explained through the use of a representative sample. Surveys are an effective means of gaining a lot of data on attitudes, issues and causal relationships, and they are inexpensive to administer. However, they can only show the strength of statistical association between variables. Cross-sectional surveys do not explain changes in attitudes and views over time. They also provide no basis for assuming that the questions are correctly interpreted by the respondents.
3.2 Questionnaire Design
Factors that affect the productivity of craftsmen were identified through the literature, based on previous research (Makulsawatudom and Emsley, 2003; Kaming et al, 1997; Zakeri et al, 1996; Lim and Alum, 1995; Motwani et al, 1995; Lema, 1996; Sanders and Thomas, 1991; Olomolaiye et al, 1987; Borcherding, 1976). A total of 36 factors were identified. The project managers were required to rate the factors according to the way they affect productivity in relation to time, cost and quality, from their own experience on building sites. The questionnaire required the respondents to rank their answers on a Likert scale (Kothari, 2003). The survey package comprised a covering letter, the questionnaire and a pre-stamped self-addressed envelope.

3.3 Pilot Studies
Pilot studies were carried out to ensure the clarity and relevance of the questionnaire to the contractors. The questionnaire was shown to two researchers. Based on their feedback, amendments were made to the questionnaire, and the second phase of the pilot study was conducted on four building contractors in Uganda, among those who were not going to
participate in the final survey. Based on the feedback received, minor amendments were again made to the questionnaire to remove any ambiguities and discrepancies.

3.4 Sample Selection
The survey gathered data from project managers of building contractors from as broad a geographic area within Uganda as possible. For this purpose, it was determined that all contractors registered with the contractors' association would be asked to participate. The target population was 167 contractors, those registered with the Uganda National Building and Civil Engineering Contractors' Association (UNABCEC) and engaged in formal building work. At the national level, one recognized way of categorizing construction companies is by UNABCEC grade. The classification from A to E takes into account financial strength, size and ability to carry out jobs. Those in class A are the biggest, undertake works of the biggest magnitude, and include some multinational companies. At the time of the survey, UNABCEC had a membership of 189, including civil engineering contractors. For the purposes of this survey, the mailing list of all those who dealt in building construction during the year 2005 was used. A total of 159 questionnaires were sent out; 22 of the contractors did not participate for various reasons, so the sample was reduced to 137. The survey was carried out within a period of three months, from mid-July to October 2005.

3.5 Survey Response
As a result of mailing and follow-up, a total of 73 usable questionnaires were completed and returned. The distribution across the various grades of the 137 who were contacted and the 73 who responded is given in table 1.
A review of the responses from the national surveys indicated no measurable differences in the respondents' answers to the questions. All the questionnaires were therefore combined for the analysis of this survey.

4.0 RESULTS AND DISCUSSION
4.1 Data Analysis and Results
The average rankings were calculated based on four different criteria: mean ratings for effect on time, effect on cost and effect on quality, and a combined importance index. The means for time, cost and quality were calculated using the formula
R_x = (1/Z) * sum(R_x,i, i = 1..Z)

where x = time, cost or quality; R_x is the mean rating with respect to time, cost or quality from the Z raters; and R_x,i is the rating given by respondent i. The mean combined importance index was then calculated from the rankings, where R_t is the rating based on time, R_c is the rating on cost and R_q is the rating on quality. Table 2 is a summary of the calculated mean values for the different factors and also their ranking within the groups.
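The combined-index formula itself is not legible in the source. Purely as an illustration, the sketch below computes the three criterion means for one factor and combines them with a simple arithmetic mean, an assumed rule that is not necessarily the one used by the authors; all ratings shown are hypothetical.

# Hypothetical ratings for one factor from Z respondents, per criterion.
time_ratings    = [5, 4, 4, 5]   # effect on time
cost_ratings    = [4, 4, 3, 4]   # effect on cost
quality_ratings = [3, 4, 4, 3]   # effect on quality

Z = len(time_ratings)

# Mean rating per criterion: R_x = (1/Z) * sum of R_x,i over the Z raters.
R_t = sum(time_ratings) / Z
R_c = sum(cost_ratings) / Z
R_q = sum(quality_ratings) / Z

# Combined importance index: ASSUMED here to be the arithmetic mean of the
# three criterion means; the paper's exact combination rule is not recoverable.
importance_index = (R_t + R_c + R_q) / 3

print(f"R_t = {R_t:.2f}, R_c = {R_c:.2f}, R_q = {R_q:.2f}")
print(f"Combined importance index = {importance_index:.2f}")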
4.2 Discussion
This section contains the results from the ratings as given in table 2 and a discussion of the factors. There follows a section on checking the reliability of the ratings obtained. We shall discuss the ten highest ranked factors within the overall ranking, and the five highest ranked in terms of time, cost and quality where they are not yet discussed.

The highest ranked according to the Overall Importance Index are: incompetent supervisors; lack of skills among the workers, e.g. inexperienced, poorly trained workers; rework, e.g. from poor work done; lack of tools/equipment; poor construction methods, e.g. poor sequencing of work items; poor communication, e.g. inaccurate instructions and inaccurate drawings; stoppages because of work being rejected by consultants; political insecurity, e.g. insurgency, wars and risk; tools/equipment breakdown; and harsh weather conditions.

Material shortages and delays are ranked first in terms of time. This is similar to what was found in other countries (Makulsawatudom and Emsley, 2003; Kaming et al, 1997; Olomolaiye et al, 1987). However, on the overall Importance Index, this factor is ranked seventeenth. Material shortages consume a lot of the contractors' time, but the effect on cost and quality is relatively lower; the main cost incurred due to shortages is for the idle time that craftsmen spend waiting for materials.

The factor of incompetent supervisors is rated highest on the overall Importance Index. This could be partly because supervisors do not attend refresher courses; most supervisors are trained, but their formal training stops when they leave school. The way knowledge is managed is important, and there is therefore a need for continuous training of supervisors. Lack of skills is a major problem and seriously affects the time, cost and quality achieved. The hope is that, since the government of Uganda is promising to introduce technical schools in all sub-counties, the right skills will be developed in future, but this will take at least three years to
have an impact on the industry. There is a need to carry out a needs assessment and identify the key trades, and the right numbers to train in them, in order to change the scenario.

Table 2: Ranking of factors according to time, cost, quality and combined importance index
Rework is rated third overall on the Importance Index. It is ranked second, first and seventh against time, cost and quality respectively. It is mainly caused by failure to follow specifications. Specifications should be made clear and explained to the executing team to avoid rework. Daily repetition of instructions, with visual management aids, could make it easier for the foremen and workers to refer to them; at the moment, the specifications are usually kept in the office and relayed only when they are needed.

Lack of tools and equipment is ranked fourth overall. Tools are mainly provided to the craftsmen engaged on a full-time basis; casual workers are expected to bring their own tools, partly because casual workers end up taking the very tools they are provided with. Some equipment is not readily available in some places, even for hire.

Poor construction methods is ranked fifth on the overall importance index. Poor construction methods are mainly due to poor planning of the work, which is partly due to the incompetence of the supervisors. The other problem is that of designs that are not easily buildable. Lack of buildability is due to designs that do not take into account the available resources for construction and an inadequate appreciation of construction techniques.

Poor communication, due for instance to inaccurate instructions and inaccurate drawings, is ranked sixth, and stoppages because of work being rejected by consultants are rated seventh overall. Political insecurity, e.g. insurgency and wars, is rated eighth on the overall importance index. The factor of risk and insecurity has not been rated high before. This might have come up because Uganda has not been at peace for a long time; currently a big portion of the total area of the country faces risk of insecurity from armed rebels, and this affects the execution of building contracts.

Tools/equipment breakdown is ranked ninth according to the overall Importance Index. This relates to the breakdown of equipment like vibrators, water pumps, powered machinery, etc. These break down due to poor maintenance and lack of regular service; many are also not in the best condition as they lack spares. There is a need for good garages and workshops to take care of repairs and maintenance, and for contractors to understand that there is an optimal age for replacing such tools and equipment.

Harsh weather conditions are ranked tenth on the overall importance index. Uganda, being on the equator, experiences wet and dry conditions. The rains are heavy but in many cases last for short periods; they cause damage to unprotected building components under construction, which are mainly cast in situ. The afternoons are generally hot, with average maxima of about 28-33 °C when there is no cloud cover.
5.0 CONCLUSION
From the survey, the five highest ranked factors that affect the productivity of labour, taking into account the effect on time, cost and quality, are incompetent supervisors; lack of skills among the workers; rework; lack of tools/equipment; and poor construction methods. The competency of supervisors and the level of skills of construction workers should be improved. The contractors too should focus on improving these areas by giving refresher courses, rewarding on the basis of skill and output, and participating in structured training of workers in the construction industry. Research geared at improving productivity should focus on the identified factors, preferably those at the top of the list by importance index.
REFERENCES
Borcherding, J. D. (1976) Improving productivity in industrial construction. Journal of the Construction Division, Proceedings of the ASCE, 102(CO4), 599-614.
Egan, J. (1998) Rethinking Construction. DETR, HMSO, London.
Fellows, R. and Liu, A. (2003) Research Methods for Construction. Second edition, Blackwell Science, Oxford.
Ferber, R. (1980) Readings in the Analysis of Survey Data. American Marketing Association, New York.
Imbert, I. D. C. (1990) Human issues affecting construction projects in developing countries. Construction Management and Economics, 8(2), 219-228.
Kaming, P. F., Olomolaiye, P. O., Holt, G. and Harris, F. (1997) Factors influencing craftsmen productivity in Indonesia. International Journal of Project Management, 15(1), 21-30.
Kothari, C. R. (2003) Research Methodology, Methods and Techniques. Wisha Prakashan, New Delhi.
Lema, M. N. (1996) Construction labour productivity analysis and benchmarking - the case of Tanzania. PhD Thesis, Loughborough University.
Lim, E. C. and Alum, J. (1995) Construction productivity: issues encountered by contractors in Singapore. International Journal of Project Management, 13(1), 51-58.
Makulsawatudom, A. and Emsley, M. (2003) Critical factors influencing construction productivity in Thailand. Construction Innovation and Global Competitiveness, CIB 10th International Symposium, Cincinnati.
McKim, Hegazy and Attalla (2000) Project performance control in reconstruction projects. Journal of Construction Engineering and Management, 126(2), 137-141.
Motwani, J., Kumar, A. and Novakoski, M. (1995) Measuring construction productivity: a practical approach. Work Study, 44(8), 18-20.
Mubiru, F. (2001) Comparative analysis of bidding strategies of contractors in Uganda. Master of Engineering Dissertation, Makerere University, Kampala.
Olomolaiye, P., Wahab, K. and Price, A. (1987) Problems influencing craftsmen productivity in Nigeria. Building and Environment, 22(4), 317-323.
Rosefielde, S. and Quinn Mills, D. (1979) Is construction technologically stagnant? In: Lange, J. and Mills, D. (eds), The Construction Industry: Balance Wheel of the Economy. Lexington Books, Lexington.
Sanders and Thomas (1991) Analysing construction company profitability. Cost Engineering, 33(2), 7-15.
Uganda Bureau of Statistics (2005) Statistical Abstract. UBOS, Entebbe.
Yates, J. K. and Guhathakurta, S. (1993) International labour productivity. Cost Engineering, 35(1), 15-26.
Zakeri, M., Olomolaiye, P., Holt, G. and Harris, F. (1996) A survey of constraints on Iranian construction operatives' productivity. Construction Management and Economics, 14(5), 417-426.
A REVIEW OF CAUSES AND REMEDIES OF CONSTRUCTION-RELATED ACCIDENTS: THE UGANDA EXPERIENCE
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
ABSTRACT
With robust economic conditions fuelling a construction boom in Uganda, the frequency and severity of construction site accidents is bound to increase in the future. In recent years many such accidents have been reported, the most severe ones still fresh in the minds of many Ugandans being the Bwebajja building accident of 2004, in which a multi-storeyed hotel building under construction collapsed, and the recent collapse in March 2006 of a church structure under construction at Kalerwe. There is therefore a need to increase or strengthen health and safety activities in the construction sector. The paper, based on studies by various researchers, presents the most common causes of construction-related accidents in Uganda and beyond, and suggests remedies to reduce them.

Keywords: Accident; Building; Bwebajja; Civil; Collapse; Construction; Formal construction; Hazard; Health; Informal construction; Infrastructure; Injury; Labour; Legislation; Occupational; OSH; Regulation; Safety; Workers.
1.0 INTRODUCTION
One lexical definition of infrastructure is "the system or structures which are necessary for the operation of a country or an organization" (Longmans dictionary). Civil infrastructure refers to public and private works, namely buildings, roads, bridges, and water and sewerage facilities. Civil engineers are responsible for their planning, design, construction, operation and maintenance. A nation's infrastructure system provides for the delivery of essential services and a sustained standard of living. An efficient system of infrastructure is fundamental to the well-being of a country and indispensable for the promotion of productive activities and social development. Civil constructions are normally medium to large-scale (both physically and financially), involve many trades and products which have to be selected with care, and are for use by many people. Public safety is therefore paramount. Such safety can only be guaranteed by experts in construction, in particular civil engineers. Mistakes in civil constructions are usually very costly.

Due to the nature of the construction trade, individuals employed on construction sites find themselves confronted with dangerous, life-threatening work conditions on a daily basis. Serious accidents resulting in personal injury occur with alarming frequency at construction sites throughout the world. For example, in the European Union more than 1,300 people are killed in construction accidents every year. In many
countries, construction is the sector most at risk of accidents. Worldwide, construction workers are three times more likely to be killed, and twice as likely to be injured, as workers in other occupations. The costs of these accidents are immense to the individual, to the employer and to society, and they can amount to an appreciable proportion of the contract price. Most construction firms tend to be Small and Medium Enterprises (SMEs), which are the most affected by construction accidents; see European Agency for Safety and Health at Work (ESAW, 2001).

In Uganda, as robust economic conditions have been fuelling a construction boom, the frequency and severity of construction site accidents is bound to increase in the future. The most severe construction accidents still fresh in the minds of many Ugandans are probably the Bwebajja building accident of 1st September 2004, in which 11 people died and 26 were injured when a multi-storeyed hotel building under construction suddenly collapsed on the Kampala-Entebbe highway (Mwakali, 2004); and the church building collapse at Kalerwe, a poor suburb to the north of Kampala City, in which dozens of worshippers died and hundreds were injured on the night of 8th March 2006 (The New Vision, 2006). Figs. 1 and 2 below show the extent of the Bwebajja and Kalerwe church building collapses, respectively. The situation of the construction industry therefore underpins the need to increase or strengthen health and safety activities.
Fig. 1. A section of the collapsed Bwebajja building. (From Mwakali, 2004)
Fig. 2: Kalerwe church collapse of 8th March 2006. (From http://news.bbc.co.uk/2/hi/in__pictures/4788872.stm. Accessed on 12th March 2006)
2.0 ACCIDENT STATISTICS FROM EUROPE
According to European Statistics on Accidents at Work (ESAW, 2001):
- About 4.8 million accidents at work resulted in more than 3 days' absence from work in the 15 Member States of the EU.
- The estimated total number of accidents at work in the EU 15 is about 7.4 million.
- In 2000, there were 5,200 fatal accidents at work.
- The fatal accident incidence rate decreased between 1994 and 2000.

There are variations in the accident pattern throughout the workforce:
- Men have more accidents than women.
- Young workers (18-24 yrs) have a much higher accident incidence rate than other age groups, but older workers (55-64 yrs) have more fatal accidents.
- The accident incidence rates in industry sectors vary widely.
- In the wood industry, every year, 10% of workers have an accident.
- The rate of accidents is higher in small companies than in large enterprises.
- Accidents occurring at night tend to be more often fatal than ones occurring at other times.
- The "upper extremities" (arms, etc.) are the parts of the body most often injured by accidents at work.
- Wounds and superficial injuries are the most common type of injury.

According to the European Survey on Working Conditions 2000 (EWSC):
- 17% of illness absences from work are due to accidents at work.
- This adds up to about 210 million working days lost due to accidents at work.
The Eurostat Labour Force Survey reveals:
- Workers who have under 5 years' seniority in an enterprise are more likely to suffer an accidental injury at work.
- Workers usually or sometimes doing shift work have a higher accident incidence rate than those never doing shift work.
- 2.3 million Europeans consider themselves to have a longstanding disability due to an accident at work.

Every year, about 5,500 people are killed in the workplace across the European Union, with another 4.5 million accidents resulting in more than 3 days' absence from work (amounting to around 146 million working days lost). These accidents are estimated to cost the EU about 20 billion Euro. The problem affects all sectors of the economy and is particularly acute in enterprises with fewer than 50 workers. Due to accidents at work, around 5% of people were forced to change their job or place of work or reduce their working hours; 0.2% stopped working permanently. Between 1998 and 1999, it is estimated that work-related accidents cost the EU 150 million working days per year. A further 350 million days were lost through work-related health problems. Together, the total 'bill' was 500 million days per year.

3.0 ACCIDENT STATISTICS FROM UGANDA
Uganda is now experiencing significant economic growth at an average rate of 6.3% of GDP p.a., making it one of the fastest growing economies in the world. The high economic growth rate has had a very positive impact on the construction sector, with growth going up from 15% in 1990 to 40% in 1999, making it the second largest employer after agriculture. The informal construction sector accounts for up to 70%, and the formal construction sector is dominated by small and medium size contractors and a few big ones, often international, subcontracting a number of local contractors. Generally, this rapid growth in the industry has brought about an increased threat to occupational safety and health (OSH), and as such there have been a number of injuries and fatal accidents in the recent past. See Senyonjo (undated).
Fig. 3 gives the total number of reported accidents and dangerous occurrences in Uganda for the period 1984 to 1994, while Fig. 4 gives a breakdown of the numbers by industry. Fig. 5 gives the numbers for construction-related accidents in the same period. For the same period, Fig. 6 shows the percentage distribution of accidents by type, while Fig. 7 shows the percentage of those that were fatal. However, surveys commissioned by the British Health and Safety Executive (HSE, 2003) indicate a reporting rate by employers for other reportable injuries of less than 40%; published statistics are thus the tip of the iceberg.
Fig. 3. Total number of reported accidents and dangerous occurrences in Uganda, 1984-1994.
3.0 EVAPORATIVE AIR COOLER MODEL
3.1 Cooler Finite Difference Model Equations
Assuming that the water temperature remains unchanged, Figure 4 can be used to derive the basic differential equations for modeling the performance of the re-circulated counter-flow spray type evaporative air cooler considered in this paper. The differential equations are derived assuming that the rate at which make-up water is added to the sump is negligibly small compared to the rate of water flow through the distribution system, that heat transfer through the cooler walls from the ambient may be ignored, and that the small addition of energy to the water by the pump has a negligible effect upon the water temperature. A steady-state balance on the cooler control volume yields:

dm_w = m_a dW = h_m a A (W_s,Tw - W) dL    (14)
m_a di_a = m_a (c_p,a dT_a + i_g dW)    (15)
Assuming m_a, h_m and T_w to be constant, integrating equation 14 over the chamber height L yields:

(W_s,Tw - W_out) / (W_s,Tw - W_in) = exp(-h_m a A L / m_a)    (16)
Taking the air specific heat at constant pressure c_p,a as constant, the evaporative cooler efficiency follows as:

η = (T_a,in - T_a,out) / (T_a,in - T_w) = 1 - exp(-h_c a_h A L / (m_a c_p,a))    (17)
4.0 THE SOLAR HEATING SYSTEM MODEL
The performance of all solar heating systems depends on the weather: both the energy collected and the energy demanded (heating load) are functions of solar radiation, the ambient temperature and other meteorological variables. The weather, best described by irregular functions of time on both short (hourly) and long (seasonal) time scales, may be viewed as a set of neither completely random nor fully deterministic time-dependent forcing functions. Solar energy systems analysis often requires examining performance over long periods, and it is difficult to vary parameters to see their effect on system performance, so physical experiments are very expensive and time-consuming. Computer simulations, supplied with meteorological data and mathematical models, can provide the same thermal performance information as physical experiments with significantly less time and expense; they can be formulated to simulate the transient performance of solar energy systems and used directly as a design tool by repeatedly simulating the system performance with different values of the design parameters. A mathematical model of a heating system, whether numerical or analytical, is a powerful means to analyze different possible configurations and component sizes so as to arrive at an optimal system design; it represents the thermal behavior of a component or a system. Sizing a solar liquid heater involves determining the total collector area and the storage volume required to provide the necessary amount of hot fluid.
Figure 7: Schematic of a forced-circulation solar liquid heater

The solar heating system consists basically of a collector for heating the working fluid, a working fluid storage tank, and a heat exchanger in which the working fluid exchanges heat with the load (Figure 7). For material compatibility, economics and safety reasons, a heat exchanger may sometimes be provided between the solar collector and the load, to isolate the collector's working fluid from the load and to prevent freezing of the working fluid. Depending upon the overall objective of the model, the complexity of the system can be increased to reflect actual conditions by including pipe losses, heat exchanger effectiveness, etc. Assuming that all collector components have negligible heat capacity, that the glass cover is thin and of negligible solar absorptivity, that the collector plate solar absorptivity is close to unity and independent of the angle of incidence, and that the collector plate fins and back side have highly reflective surfaces so that radiation heat transfer from these surfaces to the inside surface of the insulation is negligible, the instantaneous total useful energy delivered by the flat plate collector is given by:

Q_u = A_c F_R [Hα - U_L (T_fi - T_a)]
Solving for the temperature of the thermal fluid in the collector, T_fi, and subtracting T_a from both sides, yields:

T_fi - T_a = (1/U_L) [Hα - Q_u / (A_c F_R)]
Assuming the storage tank is fully mixed, the temperature rise of the thermal fluid in the storage tank can be written, following the simplified mathematical model described by Beckman et al, as:

C_st dT_s/dt = Q_d - Q_L - Q_e    (22)
Assume the rate of energy delivered to the storage tank, Q_d, to be equal to the useful energy delivered by the collector, Q_u = A_c F_R [Hα - U_L (T_fi - T_a)]. The load Q_L can be written in terms of the thermal fluid mass-specific heat capacity product C_f, the temperature of the thermal fluid leaving the collector T_fo (assumed to equal the storage tank temperature T_s) and the thermal fluid return temperature T_r as Q_L = C_f (T_s - T_r), and the loss from the storage tank Q_e in terms of the storage tank loss coefficient-area product (UA)_s, tank temperature T_s and ambient temperature T_a as Q_e = (UA)_s (T_s - T_a). Equation 22 can then be numerically integrated to obtain the new storage tank temperature, which is the collector inlet temperature for the next hour; thus the entire day's useful energy delivered can be obtained, as well as the storage tank temperature profile.
T_s+ = T_s + (Δt / C_st) [A_c F_R (Hα - U_L (T_s - T_a)) - (UA)_s (T_s - T_a) - C_f (T_s - T_r)]    (23)
5.0 RESULTS OF THE NUMERICAL SIMULATION
A computer code (the code and its results are not included in this paper), developed from unit subroutines containing the governing equations of the system's components, was employed in this study. In this code, the components are linked together by a main program that calls the unit subroutines according to the user's specification to form the complete cycle; a mathematical solver routine is employed to solve all the established cycle equations simultaneously. Property subroutines contained in the program provide the thermodynamic properties of the different working fluids. The property subroutine for LiCl-water, the particular working fluid employed in this study, contains correlations derived from the work of Conde (2004). The computer simulation yields the temperature and humidity ratio of the air at the evaporative air and water cooler outlets, as well as the heat duties of the various system components, as functions of the specified inlet conditions and other operating parameters. In conducting the simulation, a reference case (ambient air condition of 43 °C dry-bulb and 23.4 °C wet-bulb temperatures, and indoor conditions of 23 °C dry-bulb and 90% relative humidity) was selected, and the values of the relevant parameters were varied around it. Only one parameter (cooling water flow rate; air flow rates through the absorber, air cooler, water cooler and regenerator; salt water solution flow rate and concentration) was varied at a time, with all others fixed at their design values.

6.0 CONCLUSION
The above description reveals a number of advantages of solar-driven desiccant evaporative cooling for banana ripening and cold storage over conventional air-conditioning cycles:
1. The liquid desiccant evaporative cooling system seems to be the most cost-appropriate banana ripening and cold storage technology option for future applications, not only because it is environmentally friendly and requires little high-grade energy input, but also because it improves banana ripening and cold storage substantially in a most energy-efficient manner.
2. Pressure-sealed units are avoided, as the whole system operates at atmospheric pressure.
3. Greater flexibility, as the water evaporation process in the regenerator is independent from dehumidification in the absorber.
4. Efficient utilization of very low heat source temperatures is possible.
5. In contrast to conventional air-conditioning systems, moisture control in liquid desiccant systems adds no cooling load to the system; moisture control in conventional air-conditioning systems adds a significant cooling load, as the moisture added must be removed using refrigeration.
6. Compared to conventional air-conditioning systems, the product (banana) is exposed to high air volume rates (good air circulation) and lower temperature differentials; this minimizes chilling disorders bananas may encounter after storage.

NOMENCLATURE
A_c    Area of the collector plate, m²
a      Mass transfer area per unit volume of chamber, m²/m³
a_h    Heat transfer area per unit volume of evaporative chamber, m²/m³
c_p,a  Specific heat of moist air at constant pressure, kW/kg·°C
C      Mass-specific heat product, kW/K
c_p,w  Specific heat of water at constant pressure, kW/kg·°C
F_R    Collector heat removal factor
Hα     Solar radiation absorbed by the collector
h_c    Convection heat transfer coefficient, kW/m²·°C
h_m    Convection mass transfer coefficient, kg/s·m²
i_fg   Latent heat of vaporization of water, kJ/kg
i_f    Specific enthalpy of saturated liquid water, kJ/kg
i_g    Specific enthalpy of saturated water vapor, kJ/kg
L      Chamber total height, m
T_a    Ambient temperature, °C
U_L    Overall heat loss coefficient, kW/m²·°C
W      Humidity ratio, kg_v/kg_dry air
REFERENCES
S. A. Abdalla, Non-adiabatic Evaporative Cooling for Banana Ripening, M.Sc Thesis, Faculty of Engineering & Architecture, University of Khartoum, Sudan, 1985.
Andrew Lowenstein, A Solar Liquid-desiccant Air-conditioner, AIL Research Inc, Princeton, NJ 08543.
ASHRAE Handbook, Fundamentals Volume, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., 1997.
J. L. Threlkeld, Thermal Environmental Engineering, Prentice-Hall International, London.
P. Stabat, S. Ginestet & D. Marchio, Limits of Feasibility & Energy Consumption of Desiccant Evaporative Cooling in Temperate Climates, Ecole des Mines de Paris Center of Energy Studies, 60 boulevard Saint Michel, 75272 Paris, France.
Sanjeev Jain, Desiccant Augmented Evaporative Cooling: An Emerging Air-conditioning Alternative, Department of Mechanical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India.
Esam Elsarraj & Salah Ahmed Abdalla, Banana Ripening and Cold Storage in Sudan using Solar Operated Desiccant Evaporative Cooling System, Proceedings of WREC2005, Aberdeen, UK.
Conde, Manuel R. (2004) Properties of aqueous solutions of lithium and calcium chlorides: formulations for use in air conditioning equipment design, International Journal of Thermal Sciences.
Michael Wetter, Air-to-Air Plate Heat Exchanger: Simulation Model, Simulation Research Group, Building Technologies Department, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720.
FRACTIONATION OF CRUDE PYRETHRUM EXTRACT USING SUPERCRITICAL CARBON DIOXIDE
H. Kiriamiti, Department of Chemical and Process Engineering, Moi University, Kenya
S. Sarmat, Department of Chemical and Process Engineering, Moi University, Kenya
C. Nzila, Department of Textile Engineering, Moi University, Kenya
ABSTRACT
Fractionation of pyrethrum extract (crude extract) using supercritical carbon dioxide shows that fixed oils and pyrethrin can be separated in a supercritical extractor with two separators in series. In the first separator, more of the oil, which is less volatile, is obtained, and in the second separator more of the pyrethrin is obtained. Fractionation of extract from ground pyrethrum flowers gives 24% pyrethrin in the first separator and 34% in the second separator. In the case of fractionation of crude hexane extract (oleoresin), the percentage of pyrethrin in the second separator is twice that in the first separator. In all cases, the product obtained is solid because of the waxes, which are fractionated in both separators.
Keywords: pyrethrin, pyrethrum, extraction, fractionation, supercritical fluid, oleoresin
1.0 INTRODUCTION
Today there is a high demand for natural insecticides due to the increase in biological farming in the western world. Among the well-known natural insecticides are pyrethrin, nicotine, rotenone, limonene, azadirachtin from neem oil, camphor, turpentine, etc. (Salgado, 1997). Except for pyrethrin and rotenone, most of the natural insecticides are expensive to exploit. Pyrethrin is one of the most widely used natural domestic insecticides and is extracted from pyrethrum flowers. Pyrethrin is a mixture of 6 active ingredients, which are classified as Pyrethrins I and Pyrethrins II. Pyrethrins I are composed of pyrethrin I, jasmolin I and cinerin I, while Pyrethrins II are composed of pyrethrin II, jasmolin II and cinerin II. Pyrethrin is non-toxic to warm-blooded animals, and it decomposes very fast in the presence of light. In the conventional commercial process, extraction with organic solvents such as hexane is carried out to obtain an oleoresin concentrate. The oleoresin is purified in several steps to eliminate waxes, chlorophyll pigments and fixed oils to obtain a final product referred to in the industry as "pyrethrin pale". In one of the earliest works, Stahl (1980) observed that between 20°C and 40°C no decomposition of pyrethrin occurred in either liquid or supercritical carbon dioxide (CO2).
Marr (1984) developed a method for the identification of the six active ingredients in pyrethrum extract using High Performance Liquid Chromatography (HPLC). Sims (1981) described and patented a process for the extraction of crude extract from pyrethrum flowers using liquid CO2. Wynn (1995) described a preparative supercritical CO2 extraction process for crude extract from pyrethrum flowers at 40°C and 80 bar. Otterbach (1999) compared crude extract obtained by ultrasonic extraction, Soxhlet extraction using hexane, and supercritical CO2 extraction, and observed that the supercritical CO2 process yielded a better quality product in terms of colour and pyrethrin content. Della Porta (2002) extracted pyrethrin from the ground powder of pyrethrum flowers with simultaneous successive extractive fractionation and post-extractive fractionation. In our previous work (Kiriamiti, 2003a, b), we showed the effect of pressure, temperature, particle size and pre-treatment on the amount of crude extract and its pyrethrin content, and also developed a method for the purification of crude hexane extract (CHE) using carbon dioxide. In this paper, we study the fractionation of pyrethrin and fixed oil in a post-extractive fractionation of crude extract obtained directly from pyrethrum flowers using CO2, and of CHE.

2.0 MATERIALS AND METHODS
Pyrethrum flowers were bought from local farms in Kenya. Batch extraction of pyrethrin from ground pyrethrum flowers with hexane was conducted in an agitated mixing vessel at ambient temperature. The batch process was repeated several times, until the colour of the solvent in the mixing vessel was clear. CHE was obtained by evaporation of hexane from the cumulative extracts of all batches. A CHE with a pyrethrin content of 0.16 g/g CHE was obtained.
The CO2 extraction was performed with a pilot plant from Separex Chimie Fine, France (series 3417 type SF 200), having an extraction capacity of 200 ml and 3 separators in series of capacity 15 ml each, with a maximum CO2 flow rate of 5 kg/h. The schematic diagram of the pilot plant is shown in our previous work, Kiriamiti (2003a, b). The extractor and separators are jacketed to maintain a constant temperature. The ground flowers or the CHE slurry were put in the extractor's cylinder and filter mesh screens were placed at both ends of the cylinder. The cylinder is then introduced into the temperature-controlled extractor. Care is taken to ensure that the air is purged before the extraction process is started. The CO2 is pumped at constant flow rate and directed into the bottom of the extractor. The fluid phase from the extractor is passed through valves, where the pressure is throttled, via the three separators in series. The CO2 is then cooled and recycled into the system. The extracts are collected only in the first and the second separator, at regular intervals. Samples are weighed and analysed. In all experiments, the CO2 flow rate was kept constant at 0.403 kg/h. Analyses of the extracts were performed using a high-performance liquid chromatograph (HPLC), a Hewlett Packard series 1050 chromatograph, equipped with a 250 mm x 4.6 mm LiChrosorb Si60 5 µm column, as proposed by Marr (1984). Elution was conducted with a mixture of ethyl acetate and hexane in a ratio of 1:10 at a constant flow rate of 1.5 ml per minute, leading to a 15-minute analysis. The UV detector was set at a wavelength of 242 nm, in series with a Light Scattering Detector (LSD). A refined pyrethrin sample whose pyrethrin content was 21.1% (by weight) was bought from RdH Laborchemikalien GmbH & Co. KG (Germany) for standardisation of the analytical method.
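The paper does not spell out the quantification arithmetic; as an illustration only, a one-point response-factor calibration against the 21.1% standard could look as follows in Python (all peak areas and sample masses are invented for the example):

    # Hypothetical one-point HPLC calibration against the 21.1 wt%
    # refined pyrethrin standard (areas and masses are illustrative only).
    STD_CONTENT = 0.211  # mass fraction of pyrethrin in the standard

    def response_factor(area_std, mass_std):
        """Detector area per gram of pyrethrin, from the standard injection."""
        return area_std / (mass_std * STD_CONTENT)

    def pyrethrin_fraction(area_sample, mass_sample, rf):
        """Mass fraction of pyrethrin in an extract sample."""
        return area_sample / (rf * mass_sample)

    rf = response_factor(area_std=5.2e6, mass_std=0.010)   # assumed numbers
    print(pyrethrin_fraction(area_sample=4.1e6, mass_sample=0.012, rf=rf))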
3.0 RESULTS AND DISCUSSION
3.1 Fractionation of crude CO2 extract
Experiments were carried out in order to compare supercritical CO2 and liquid CO2 extraction when fractionation of the crude CO2 extract from the extractor is implemented by using a "cascade of depressurisations" through the various separators. The operating conditions chosen, as well as the results obtained at the end of the extraction process, are presented in Tables 1 and 2. The quantity of pyrethrum flowers used was 45 g.
Table 1: Operating conditions for fractionation of the crude CO2 extract.

                                          Liquid CO2    Supercritical CO2
Extractor        Pressure (bar)           120.00        250.00
                 Temperature (°C)         19.00         40.00
                 Density CO2 (kg/m3)      890.375       798.45
Separator 1      Pressure (bar)           80.00         80.00
                 Temperature (°C)         35.00         37.00
                 Density CO2 (kg/m3)      429.349       338.80
Separators 2     Pressure (bar)           50.00         50.00
and 3            Temperature (°C)         28.00         30.00
                 Density CO2 (kg/m3)      127.76        122.91
Table 2: Mass fraction of pyrethrin, oil and impurities in the crude CO2 extract.

                      Mass fraction of     Mass fraction of     Mass fraction of impurities
                      pyrethrin            oil                  (mainly waxes)
                      Sep. 1    Sep. 2     Sep. 1    Sep. 2     Sep. 1    Sep. 2
Liquid CO2            0.1548    0.3437     0.1227    0.1251     0.7225    0.5312
Supercritical CO2     0.1182    0.2489     0.323     0.1579     0.5588    0.5932
It clearly appears that the mass fraction of pyrethrin obtained in separator 2 is higher than that in separator 1 for both supercritical and liquid CO2. The distribution of oil between the two separators is about the same for liquid CO2, but with supercritical CO2 more oil is deposited in separator 1. The quantity of impurities (mainly waxes) obtained is lower in separator 2 with liquid CO2, which improves the quality of that partial extract, while with supercritical CO2 the two separators are almost the same. Fractionation thus makes it possible to obtain a product more concentrated in pyrethrin in separator 2, with a partial extract containing
34.37% of pyrethrin when extracted with liquid CO2 and 24.89% in the case of extraction with supercritical CO2. Figure 1 shows the evolution of the cumulative pyrethrin mass recovered in separator 2 with respect to time. It was observed that in the two cases (liquid and supercritical CO2) the quantity of pyrethrin recovered is very similar. On the other hand, the higher quantity of oil recovered in the case of supercritical CO2 caused a drop in the pyrethrin quality of the extract. Figure 2 shows the mass fractions of pyrethrin and oils extracted, indicating that a higher mass fraction of pyrethrin is obtained with an extraction using liquid CO2.
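As a quick consistency check on Table 2, the separator-2 enrichment of pyrethrin relative to separator 1 can be computed directly from the tabulated mass fractions (a short Python sketch using only the values quoted above):

    # Pyrethrin mass fractions from Table 2 (separator 1, separator 2).
    table2 = {
        "liquid CO2 (120 bar, 19 C)":        (0.1548, 0.3437),
        "supercritical CO2 (250 bar, 40 C)": (0.1182, 0.2489),
    }

    for case, (sep1, sep2) in table2.items():
        print(f"{case}: enrichment factor sep2/sep1 = {sep2 / sep1:.2f}")
    # Liquid CO2 gives 34.37% pyrethrin in separator 2 (factor about 2.2);
    # supercritical CO2 gives 24.89% (factor about 2.1).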
Figure 1: Cumulative mass of a) pyrethrin, b) oil in the second separator; liquid CO2 (120 bar, 19°C); supercritical CO2 (250 bar, 40°C)
Figure 2: Mass fraction of a) pyrethrin, b) oil; ■ liquid CO2 (120 bar, 19°C); ▲ supercritical CO2 (250 bar, 40°C)

In general, the quantity of impurities extracted (and thus recovered) is lower in the case of extraction using liquid CO2. This is explained by the fact that the solubility of waxes is lower at low temperatures. One can deduce from these experiments that the most satisfactory product can be obtained by an extraction using liquid CO2 followed by an on-line fractionation. Working at pressures and temperatures below the critical point thus seems to improve the quality of the pyrethrin extracted. However, in all cases the end product obtained is a yellow solid, meaning that it still contains a large quantity of waxes. This result confirms those obtained by Stahl (1980), who noticed that fractionation with two separators can improve the quality of the extraction. To eliminate the waxes more effectively, Della Porta (2002) imposed a very low temperature (-15°C) in the first separator at a pressure of 90 bar. Neither work mentioned the physical state of the extracted product.
3.2 Fractionation of Crude Hexane Extract (CHE)
In this study, CHE was re-extracted with supercritical CO2 at 250 bar and 40°C, followed by on-line fractionation in the separators. To maintain stable conditions and avoid carry-over of products from one separator to the next, a low CO2 flow rate was used, fixed at 0.403 kg/h. The conditions in the first separator were fixed at 100 bar and 40°C, and in the second and third separators at 50 bar and 40°C. Under these conditions, extract was present only in the first and the second separator. Figure 3 shows the cumulative mass of pyrethrin in the two separators and Figure 4 shows the cumulative mass of oil. It is observed that the quantity of extracted pyrethrin, as well as its mass fraction, is much higher in separator 2 than in separator 1. The results obtained from the extraction fractionation of CHE resemble those obtained from the crude CO2 extract.
Figure 3: Quantity of pyrethrin recovered in the first and second separators, extracted at 250 bar and 40°C with a flow rate of 0.403 kg/h; ♦ separator 1; ■ separator 2.
Figure 4: Quantity of oil recovered in separator 1 and separator 2, extracted at 250 bar and 40°C with a flow rate of 0.403 kg/h; ♦ separator 1; ■ separator 2.
Figure 5 shows the yield of pyrethrin extracted. Because of the very low CO2 flow rate (0.403 kg/h), the extraction lasted relatively long. It was thus observed that after 410 minutes only 42% of the pyrethrin initially present in the extractor had been extracted. Figure 6 shows the mass fraction of pyrethrin recovered in the two separators, and Figure 7 that of oil. In separator 2, a product more concentrated in pyrethrin was obtained. At the end of the extraction process, a product containing more than 63% of pyrethrin by mass was obtained. In separator 1 a very low mass of pyrethrin, with a concentration of 39% by mass, was extracted. This result is satisfactory, but the product obtained is solid at ambient temperatures, which poses a problem for the final product formulation.
Figure 5: Total pyrethrin yield, extracted at 250 bar and 40°C with a flow rate of 0.403 kg/h, followed by fractionation of CHE.
Figure 6: Mass fraction of extracted pyrethrin recovered in separator 1 and separator 2 at 250 bar and 40°C with a flow rate of 0.403 kg/h; ♦ separator 1; ■ separator 2.
Figure 7: Mass fraction of extracted oil recovered in separator 1 and separator 2 at 250 bar and 40°C with a flow rate of 0.403 kg/h; ♦ separator 1; ■ separator 2.
The initial pyrethrin I/pyrethrin II ratio of the CHE in the extractor was 1.95. A value of 1.68 was obtained in the first separator, while a value of 2.56 was obtained in the second separator. The mass fraction of oil is much higher in the second separator than in the first at the beginning of the extraction process, but at the end they are identical. In the first separator, the majority of the compounds are undesirable. Through this post-extractive fractionation, we had hoped that the less soluble oils would be recovered in the first separator and that an extract more concentrated in pyrethrin would be recovered in the second. The extraction from CHE at 250 bar and 40°C normally dissolves many compounds, due to the increased CO2 density.

4.0 CONCLUSION
The experimental results nevertheless showed the presence of a considerable quantity of pyrethrin in the first separator, as well as a considerable quantity of oil in the second separator. This fractionation therefore does not reach the expected ideal, because the separators are too small, the residence times are very short, and a thermodynamic model to support the phase equilibria of these mixtures is lacking. In particular, the existence of specific interactions between wax, oil and pyrethrin in the presence of CO2 contributes to the mutual solubility of oil and pyrethrin, which affects the effectiveness of the fractionation separation. Fractionation of crude extract in two separators gives a better quality product than a single-step extraction process, as observed in our previous work, Kiriamiti (2003a). The product is solid at normal temperatures, a property which is undesirable in the formulation of insecticides. This process of fractionation of crude extract can be used to concentrate dewaxed extract and also to obtain products of different pyrethrins I/pyrethrins II ratios.
5.0 ACKNOWLEDGMENT
We would like to acknowledge the Laboratoire de Génie Chimique (LGC), Toulouse, for enabling the use of their facilities to carry out the experimental work. We are grateful to Professor Jean-Stéphane Condoret for his advice and for providing the use of the SFC equipment.

REFERENCES
Della Porta, G., Reverchon, E. (2002), Pyrethrin extraction, 4th International Symposium on High Pressure Technology and Chemical Engineering, Venice, Italy.
Kiriamiti, H. K., Camy, S., Gourdon, C., Condoret, J-S. (2003a), Pyrethrins extraction from pyrethrum flowers using carbon dioxide, J. Super. Fluids, 26(3), p. 193-200.
Kiriamiti, H., Camy, S., Gourdon, C., Condoret, J. S. (2003b), Supercritical carbon dioxide processing of pyrethrum oleoresin and pale, J. Agric. Food Chem., 51(4), p. 880-884.
Marr, R., Lack, E., Bunzenberger, G. (1984), CO2-extraction: comparison of supercritical and subcritical extraction conditions, Ger. Chem. Eng., 7, p. 25-31.
Otterbach, A., Wenclawiak, B. W. (1999), Supercritical fluid extraction kinetics of pyrethrins from flowers and allethrin from paper strips, J. Anal. Chem., 365(8), p. 472-474.
Salgado, V. L. (1997), The modes of action of spinosad and other insect control products, Down to Earth, Dow AgroSciences, Midland, MI, 52(2), p. 35-43.
Sims, M. (1981), Liquid carbon dioxide extraction of pyrethrins, US Pat. 4,281,171.
Stahl, E., Schutz, E. (1980), Extraction of natural compounds with supercritical gases, J. of Medicinal Plant Research, 40, p. 12-21.
Wynn, H. T. P., Cheng-Chin, C., Tien-Tsu, S., Frong, L., Ming-Ren, S. F. (1995), Preparative supercritical fluid extraction of pyrethrin I and II from pyrethrum flower, Talanta, 42, p. 1745-1749.
MOTOR VEHICLE EMISSION CONTROL VIA FUEL IONIZATION: "FUELMAX" EXPERIENCE
G. R. John, L. Wilson and E. Kasembe, Energy Engineering Department, Faculty of Mechanical and Chemical Engineering, University of Dar-es-Salaam, P.O. Box 35131, Dar-es-Salaam, Tanzania
ABSTRACT World energy supply is dominated by fossil fuels, which are associated with uncertainties of supply reliability. The world energy crises of 1973/74 and 1978/79, followed by the recent supply fluctuations resulting from regional conflicts, call for their rational use. Further to the supply fluctuations, oil reserves are being fast depleted: it is estimated that the existing reserves will last only about another 40 years, while natural gas reserves are estimated to last for about 60 years. The use of fossil fuels for motor vehicle propulsion is the major cause of environmental pollution and the associated greenhouse gas effect. Air pollutants from motor vehicles include ozone (O3), particulate matter (PM), nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), sulphur dioxide (SO2) and general hydrocarbons (HCs). While curtailment and alternative energy sources are effective measures in reducing the effects of supply and environmental problems, increasing the efficiency of existing motor vehicles has an immediate effect. One of the methods of achieving this is fuel ionization. Fuel ionization has been shown to enhance fuel combustion, thereby improving engine performance at reduced emissions. This paper discusses findings of a study that was done on a diesel engine in laboratory conditions. Fuel ionization was achieved by utilizing a magnetic frequency resonator (type FuelMax), which was fitted to the pressurized fuel supply line feeding the engine from the fuel tank. Fuel consumption with and without FuelMax is compared. Keywords: Fuel ionization; Vehicle emission control; Fuel conversion efficiency; Specific fuel consumption; Brake mean effective pressure
1.0 INTRODUCTION
World energy supply is dominated by petroleum fuels, accounting for 37.3% of total energy supply, and the majority of this fuel is consumed by the transport sector (BP, 2004). Due to the important role it plays in economies, worldwide average annual energy consumption growth in the period 2001 to 2025 is estimated at 2.1 percent (Energy Information Administration, 2004). The overdependence on petroleum fuels is a major concern for greenhouse gas emissions and poses a risk of depletion of world resources. Road transport alone releases 20-25 percent of the greenhouse gases, particularly carbon dioxide (SAIC, 2002). On the other hand, oil reserves are being fast depleted and it is estimated that the existing reserves will last only about another 40 years (Almgren, 2004). Various measures are deployed in minimizing environmental pollution from motor vehicles. These include demand curtailment, use of efficient engine designs, cleaner fuels,
alternative fuels and the application of exhaust gas after-treatment devices. Measures based on efficiency improvement and fuel substitution are said to have more impact on greenhouse gas mitigation than measures addressing travel demand (Patterson 1999, Yoshida et al 2000). The application of exhaust gas after-treatment devices such as three-way catalytic converters is capable of reducing tail-pipe emissions of CO, HC and NOx. Further to these techniques, increasing the environmental performance of existing motor vehicles can also be achieved by fuel ionization. Fuel ionization has been shown to enhance fuel combustion and thereby reduce the combustion emissions. Fuel ionization can be deployed by retrofits to the engine. The simplest of these retrofits is the clamp-on ionization type similar to the one known as FuelMAX, as deployed by International Research and Development (IRD), a company based in the U.S.A. FuelMAX consists of two halves of a strong magnetic material made from neodymium (NdFeB37), which is clamped on the fuel line near the carburettor or injection system. When the fuel passes through the strong magnetic resonator, the magnetic moment rearranges its electrons on a molecular and atomic level. Because it is a fluid, the now positively charged fuel attracts air for better oxidation, resulting in a more complete burn. The existing FuelMAX is reported to be capable of reducing fuel consumption in the range 20% - 27%, while the respective emission savings of CO and HC are reported to be in the range of 40% - 50% (RAFEL Ltd. 2005, Sigma Automotive 2005). Consequently, the use of FuelMAX is claimed to improve engine horsepower by up to 20% (Fuel Saver Pro, 2005). This paper presents findings of a study that was done in order to quantify fuel savings and the respective reduction in pollution by the application of a fuel ionizer type FuelMAX. The study was carried out in laboratory conditions using a diesel engine.

2.0 EXPERIMENTAL AND METHODOLOGY
A single-cylinder diesel engine, Hatz Diesel type E 108U No. 331082015601, was utilized for the laboratory testing. Fuel ionization was achieved by utilizing a magnetic frequency resonator type Super FuelMax, Fig. 1. The resonator was fitted to the fuel line that supplies diesel to the engine from the fuel tank.
Fig. 1. Schematic representation of fuel ionization

The engine was serviced (which included replacing the air cleaner, changing the lubricant and checking for proper nozzle setting) prior to performing the test. One set of data was obtained before fitting the resonator and the other was recorded after making the retrofit.
Upon fitting the resonator, the engine was run for 30 minutes at idling speed before collecting the experimental data. This ensured the removal of carbon and varnish deposits from the engine. The engine speeds set during experimentation were 1500 rpm, 1700 rpm, 1800 rpm, 2000 rpm, 2100 rpm and 2200 rpm. Initially, one complete set of data was obtained without loading. Later on, three loads (14.32 Nm, 21.48 Nm and 28.65 Nm) were applied at each test speed. The loading was achieved by a Froude hydraulic dynamometer size DPX2 No. BX31942. The engine's fuel consumption was obtained by measuring the time elapsed to consume 50 cc of diesel fuel. A single data point consisted of 4 readings, which were averaged for analysis.

3.0 FUEL IONIZATION ANALYSIS
Diesel engines utilize either an open chamber (termed direct injection, DI) or a divided chamber (termed indirect injection, IDI). Here the mixing of fuel and air depends mostly on the spray characteristics (Graves, 1979). Consequently, the engines are sensitive to spray characteristics, which must be carefully worked out to secure rapid mixing. Air-fuel mixing is assisted by swirl and squish (Henein, 1979). Swirl is a rotary motion induced by directing the inlet air tangentially, which also results in a radial velocity component known as squish. Fuel ionization by FuelMAX similarly enhances fuel-air mixing, resulting in optimized combustion.

4.0 RESULTS AND DISCUSSION
4.1 Results
Table 1 shows a summarized result for the performance testing of FuelMAX. Average fuel consumption under no load and all test load conditions (14.32 Nm, 21.48 Nm, 28.65 Nm) showed that fuel consumption without FuelMAX was 1.72 litres per hour, while with FuelMAX fitted it was 1.69 litres per hour. Consequently, the overall fuel saving accrued from the use of FuelMAX was 1.61%.

Table 1. Summary of FuelMAX performance.

LOAD (Nm)    FUEL CONSUMPTION RATE (l/hr)    SAVINGS, %
             WITHOUT        WITH
No load      1.071          1.075            (0.35)
14.32        1.621          1.534            5.37
21.48        1.930          1.915            0.77
28.65        2.268          2.254            0.64
AVERAGE      1.722          1.694            1.61
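Since consumption was measured as the time to use 50 cc of fuel, the tabulated rates and savings are straightforward to reproduce; the Python sketch below shows the conversion and recomputes the per-load savings, which average to the quoted 1.61% within rounding:

    def rate_l_per_h(time_s, volume_cc=50.0):
        """Convert 'seconds to consume 50 cc' into litres per hour."""
        return (volume_cc / 1000.0) / (time_s / 3600.0)

    # Tabulated rates (l/hr) without and with FuelMAX at each load.
    rates = {"no load": (1.071, 1.075), 14.32: (1.621, 1.534),
             21.48: (1.930, 1.915), 28.65: (2.268, 2.254)}

    savings = {load: 100.0 * (without - with_) / without
               for load, (without, with_) in rates.items()}
    for load, s in savings.items():
        print(f"{load}: {s:+.2f} % saving")
    # Mean of the per-load savings; agrees with Table 1 within rounding.
    print(f"mean saving: {sum(savings.values()) / len(savings):.2f} %")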
At no-load conditions FuelMAX showed no significant reduction in fuel consumption for the test engine, Fig. 2.
Fig. 2. FuelMAX performance under no-load condition.

Typical FuelMAX performance under loaded conditions at low speeds below 1700 rpm and at higher speeds over 2000 rpm was investigated. Good performance was experienced at part-load conditions close to 14.32 Nm and for mid-range speeds of 1700 - 1800 rpm, Fig. 3. Performance at other conditions is depicted in Figs. 4 and 5.

4.2 Discussion
While the details differ from one engine to another, the performance maps of engines are similar. Maximum brake mean effective pressure (bmep) contours occur in the mid-speed range, and the minimum brake specific fuel consumption (bsfc) island is located at a slightly lower speed and at part load, see Fig. 6. At very low speeds, fuel-air mixture combustion quality deteriorates and dilution with exhaust gas becomes significant. On the other hand, very high speeds increase the sfc of motor vehicles: the already good fuel conversion efficiency is outweighed by friction losses, which increase almost linearly with increasing speed. Other contributing factors are the variation in volumetric efficiency (ηv) and the marginal increase in indicated fuel conversion efficiency (ηf). Indicated fuel conversion efficiency increases slowly due to the decreasing importance of heat transfer per cycle with increasing speed (Slezek and Vossmeyer 1981).
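For orientation, bmep and bsfc follow directly from the dynamometer torque, speed and fuel-rate data; a Python sketch (the engine displacement and fuel density below are assumed values, not taken from the paper):

    import math

    V_D = 0.000450    # assumed displacement of the test engine, m^3
    RHO_FUEL = 0.84   # assumed diesel density, kg/l

    def bmep_pa(torque_nm):
        """Brake mean effective pressure for a four-stroke engine."""
        return 4.0 * math.pi * torque_nm / V_D

    def bsfc_g_per_kwh(fuel_l_per_h, torque_nm, speed_rpm):
        """Brake specific fuel consumption from measured rate and load."""
        power_kw = 2.0 * math.pi * speed_rpm / 60.0 * torque_nm / 1000.0
        return fuel_l_per_h * RHO_FUEL * 1000.0 / power_kw

    print(f"bmep at 14.32 Nm: {bmep_pa(14.32) / 1e5:.2f} bar")
    print(f"bsfc: {bsfc_g_per_kwh(1.534, 14.32, 1800):.0f} g/kWh")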
[Fig. 3: FuelMAX performance at LOAD = 14.32 Nm]
Fig. 7: Temperature-dependent open circuit voltage of a typical ITO/PEDOT:PSS/P3HT:PCBM/Al solar cell at different incident light intensities (0.1 - 100 mW/cm2).
5.0 CONCLUSIONS
A configuration of an ideal donor/acceptor heterojunction solar cell, consisting of an interpenetrating network of donor and acceptor as the absorber layer, has been fabricated and characterised by means of temperature and illumination dependent current density-
voltage characteristics. We stress, however, the need for a homogeneous mixture of donor and acceptor to ensure sufficient electronic overlap between molecules of the D-A blend, and propose an optimum mixture ratio of 1:1 by mass. Junction formation procedures that eliminate any possibility of contact with oxygen or other contaminants are another possible way of improving the efficiency of solar cells based on P3HT.

6.0 ACKNOWLEDGEMENTS
I acknowledge the contributions of the following: V. Dyakonov, J. Parisi, and the PV research group of the University of Oldenburg in Germany, where all the experiments were carried out. Acknowledgements also go to the GTZ and DAAD - Germany for funding the research.

REFERENCES
Antoniadis, H. et al, Phys. Rev. B 50, 14911 (1994).
Assadi, A., Appl. Phys. Lett. 53, (1988).
Barth, S. and Bassler, H., Phys. Rev. Lett. 79, 4445 (1997).
Barth, S., Bassler, H., Rost, H. and Horhold, H. H., Phys. Rev. B 56, 3844 (1997).
Brabec, C. J., Dyakonov, V., Parisi, J. and Sariciftci, N. S., Organic Photovoltaics: Concepts and Realization, Springer Series in Material Science, 60, Springer Verlag, 2003.
Brabec, C. J., Zerza, G., Sariciftci, N. S., Cerullo, G., De Silvestri, S., Luzzati, S., Hummelen, J. C., Brutting, W., Berleb, S. and Muckl, A. G., Organic Electronics (accepted) 2000.
Chiguvare, Z., Electrical and optical characterisation of bulk heterojunction polymer-fullerene solar cells, PhD Thesis, University of Oldenburg, Germany, (2005).
Sariciftci, N. S., Prog. Quant. Electr., 19, 131 (1995).
Shaheen, S. E., Brabec, C. J., Padinger, F., Fromherz, T., Hummelen, J. C. and Sariciftci, N. S., Appl. Phys. Lett. 78 (2001) 841.
IRON LOSS OPTIMISATION IN THREE PHASE AC INDUCTION SQUIRREL CAGE MOTORS BY USE OF FUZZY LOGIC
B. B. Saanane, A. H. Nzali and D. J. Chambega, Department of Electrical Power, University of Dar es Salaam, Tanzania
ABSTRACT Until now, the computation of iron (core) losses in induction motors has not been possible through exact analytical methods, but depends mainly on empirical formulae and the experience of motor designers and manufacturers. This paper proposes a new approach through the use of fuzzy logic, with the aim of optimizing the iron loss and hence the total machine loss in order to improve efficiency. A multi-objective optimization algorithm through fuzzy logic is therefore used to tackle the optimization problem between the objective parameters (core losses and magnetic flux density) and the airgap diameter, which define the machine geometry (e.g. slot and tooth dimensions, airgap thickness, core length, etc.). The fuzzy logic toolbox, based on the graphical user interface (GUI), is employed in the MATLAB 6.5 environment. The optimal points of airgap diameter, airgap magnetic flux density and iron loss are then used to reconfigure a new motor geometry with an optimized total loss. The new motor design is simulated with 2D-FEM to analyse the new motor response. Experimental results, which agree with the design results, show an improvement of motor efficiency. Keywords: Fuzzy logic model, optimisation, analysis, motor efficiency.
1.0 INTRODUCTION
Fuzzy logic deals with degrees of truth and provides a conceptual framework for approximate rather than exact reasoning. Fuzzy logic has therefore come to mean any mathematical or computer system that reasons with fuzzy sets. It is based on rules of the form "if ... then" that convert inputs to outputs, one fuzzy set into another (Canova et al, 1998). The rules of a fuzzy system define a set of overlapping patches that relate a full range of inputs to a full range of outputs. In that sense, the fuzzy system approximates some mathematical function or equation of cause and effect. Fuzzy set theory and fuzzy logic provide a mathematical basis for representing and reasoning with knowledge in uncertain and imprecise problem domains. Unlike Boolean set theory, where an element is either a member of the set or it is not, the underlying principle in fuzzy
set theory is that partial set membership is permitted (Canova et al, 1998; Jung-Hsien & Pei-Yi, 2004). In this paper, a multi-objective optimization algorithm through fuzzy logic is employed to tackle the optimization problem between the objective parameters (core losses and magnetic flux density) and the airgap diameter, which defines the machine geometry (e.g. slot and tooth dimensions, airgap thickness, core length, etc.). The fuzzy logic toolbox employed is based on the graphical user interface (GUI) in the MATLAB environment.

2.0 PROPOSED NEW APPROACH
The multi-objective optimisation was performed through an algorithm linked to outputs of the developed iron loss optimization model. The fuzzy logic model was therefore represented by a set of objective values
$y_i(X)$, which also defined the value of the fuzzy global objective function, as in Canova et al (1998):

$$\Phi(X) = \min_{i=1,\dots,n} \mu_i\big(y_i(X)\big) \qquad (1)$$

where: $n$ = the number of objective functions; $X$ = the vector of machine design parameters, such as the airgap diameter D and the airgap magnetic flux density B; $\mu_i$ = the membership function of the i-th objective, normalized between 0 and 1; and $y_i$ = the corresponding objective value. Through this approach, the optimisation problem became scalar and consisted in the determination of the vector $X^*$ such that:

$$\Phi(X^*) = \max_{X \in \mathcal{X}} \Phi(X) = \max_{X \in \mathcal{X}} \; \min_{i=1,\dots,n} \mu_i\big(y_i(X)\big) \qquad (2)$$
The multi-objective optimisation was then accomplished through an algorithm linked to outputs of the developed iron loss optimization model, as shown in Figure 1.
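A minimal Python sketch of the max-min search in equations (1) and (2); the band-type membership functions and the candidate design values below are illustrative assumptions, not the authors' data:

    # Fuzzy global objective: Phi(X) = min_i mu_i(y_i(X)), maximised
    # over candidate designs X. All numbers are illustrative.
    def mu_band(y, lo, hi):
        """Degree of satisfaction: 1 inside [lo, hi], falling linearly
        to 0 within a 10% margin outside the band."""
        margin = 0.1 * (hi - lo)
        if lo <= y <= hi:
            return 1.0
        if y < lo:
            return max(0.0, 1.0 - (lo - y) / margin)
        return max(0.0, 1.0 - (y - hi) / margin)

    # Candidate designs: (D [m], B [T], predicted iron loss Pfe [W]).
    candidates = [(0.13, 0.78, 410.0), (0.14, 0.80, 385.0),
                  (0.15, 0.83, 430.0)]

    def phi(design):
        d, b, pfe = design
        return min(mu_band(d, 0.13, 0.15),    # airgap diameter band
                   mu_band(b, 0.75, 0.85),    # airgap induction band
                   mu_band(pfe, 0.0, 400.0))  # iron loss: lower is better

    best = max(candidates, key=phi)
    print(best, phi(best))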
Fig. 1: Block diagram of the Proposed Fuzzy Approach

Following fuzzy theory, a fuzzy set, characterized by a membership function (MF), was associated with each of the chosen objective parameters D and B, as shown in Figure 1. The objective parameters D, B and Pfe were converted to membership functions (μi) within set limits, corresponding to degrees of satisfaction normalized between 0 (unacceptable value) and 1 (total fulfilment of the requirement). Such an approach made it easy to optimise the chosen parameters, which were defined within a band of acceptable values. The membership functions for D and B were passed through the fuzzy logic model to obtain a single global quality index, representing the overall balancing, by means of the minimum operator. In this research the single global quality index was also represented as a membership function, namely the minimized iron loss value Pfe for the motor frame investigated.
The Fuzzy Inference System (FIS) which explains the specific methods was used and implemented on the Fuzzy Logic Toolbox based on the matlab platform with the Graphical User Interface (GUI) tools. This was a process of formulating the mapping from the two inputs D and B to the iron loss
Pie
as an output using fuzzy logic. The actual process of the
fuzzy inference involved the formulation of the membership function, fuzzy logic operators, and if-then rules. The Mamdani's methodology, (Canova et al, (1998)) was applied for the fuzzy inference system as an algorithm for the decision processes. So, this Mamdani-type inference as defined for the Fuzzy Logic Toolbox, the output membership functions were also fuzzy sets. After this aggregation process, there was then a fuzzy set for each output variable that needed de-fuzzification, that is resolving to a single number in this case the optimised value for the iron loss
Pfe
for each motor frame under consideration.
Although it was possible to use the Fuzzy Logic Toolbox strictly from the command line, it was much easier to build the system graphically. There are five primary GUI tools for building, editing and observing the fuzzy inference system in the Fuzzy Logic Toolbox. These GUI tools are dynamically linked, such that changes made to the FIS using one of them can affect what is seen on any of the other open GUI tools, and any or all of them can be open for any given system. The GUI tools which made it possible to implement the fuzzy inference process are listed below:
• The membership functions;
• AND methods;
• OR methods;
• Implication methods;
• Aggregation methods; and
• De-fuzzification methods.
Figure 2 shows the block diagram for the employed FIS.
Fig. 2: Fuzzy Logic block diagram for computation of optimized parameters.

Mamdani's methodology was applied for the fuzzy inference system as the algorithm for the decision processes, utilizing the following rule:
If (Airgap diameter D [m] is mf1) and (Airgap induction B [T] is mf2) then (Iron loss Pfe [W] is mf3).

3.1 The Fuzzy Inference System
The concept of fuzzy inference is a method that interprets the values in the input vector and, based on some set of rules, assigns values to the output vector. Fuzzy Inference Systems (FIS) thus define the specific methods of fuzzy inference used in the Fuzzy Logic Toolbox. In this research the FIS was implemented on the Fuzzy Logic Toolbox based on the MATLAB 6.5 platform with the Graphical User Interface (GUI) tools. This was a process of formulating the mapping from the two inputs D and B to the iron loss
Pfe
as an output using fuzzy logic. The actual process of the fuzzy inference involved the
formulation of the membership function, fuzzy logic operators, and if-then rules.
The Process Used in the Fuzzy Inference System (FIS)
Below is a description of the process used for organizing the major blocks in the FIS. The description is provided for one frame size, M3AP 160 L-4. The process for the major blocks of the FIS is as follows:
The membership functions: The Gaussian distribution curve was used to build the membership functions for the fuzzy sets D, B and Pfe. The fuzzy sets D and B, shown in Figures 3 and 4, were simultaneously varied in order to optimize the fuzzy set Pfe shown in Figure 5, such that the fuzzy operator in all the antecedents was AND. That is,

$$D:\ \min(\mu_D(x)) \qquad (3)$$

$$B:\ \max(\mu_B(x)) \qquad (4)$$
AND, implication and aggregation methods: The fuzzy AND aggregates the two membership functions into an output having a value at a given input x (D or B). The result of the fuzzy AND served as a weight showing the influence of this rule on the fuzzy set in the consequent (Jung-Hsien & Pei-Yi, 2004; Qilian & Jerry, 2000). The aggregated membership of the antecedent was then used as a weight factor to modify the size and shape of the membership function of the output fuzzy set Pfe by truncation, as in Xu and co-researchers in the Textile Research Journal, Vol. 72, No. 6. The truncation was done by chopping off the Gaussian output function. Denoting the membership function of the output fuzzy set as $\mu_{P_{fe}}(x)$ and the weight generated from the antecedent as w, the truncated function has the form:
$$\mu'_{P_{fe}}(x) = \min\{\mu_{P_{fe}}(x),\ w\} \qquad (5)$$
So, Figure 5 represents the weighted membership functions of the output fuzzy sets of Pfe for frame size M3AP 160 L-4.
De-fuzzification method: After all the fuzzy rules and evaluations were done, the FIS needed to output a crisp number to represent the classification result Pfe for the input data of D and B. This step is called defuzzification. The most popular method for defuzzification, the centroid calculation, was employed; it gives a grade weighted by the area under the aggregated output function. Let $a_1, a_2, \dots, a_n$ be the areas of the truncated regions under the aggregated function, and $c_1, c_2, \dots, c_n$ be the coordinates of their centers on the axis. The centroid of the aggregated area is then given, again following Xu and co-researchers, by:
$$G = \frac{\sum_{i=1}^{n} a_i c_i}{\sum_{i=1}^{n} a_i} \qquad (6)$$
Therefore the location of the centroid indicated the value of the optimized iron loss Pfe for the input D and B, as shown in Figure 6. The solution to the optimization problem was represented as a three-dimensional surface equivalent to the mapping Pfe(D, B), as shown in Figure 7.
3.2 Implementation of the Fuzzy Logic Model
Implementation of the model for the parameters D, B and Pfe was based on the use of the GUI tools of the Fuzzy Logic Toolbox (Lehtla, 1996; Papliski, 2004; Qilian & Jerry, 2000), as shown in Figure 2, with the parameters D and B as the two inputs and Pfe as one
output parameter. Mamdani's methodology was applied for the fuzzy inference system as the algorithm for the decision processes, utilizing the following rule, as in Papliski (2004):

If (Airgap diameter D [m] is mf1) and (Airgap induction B [T] is mf2) then (Iron loss Pfe [W] is mf3).
Pie
Fuzzy Logic Model Inputs
410
for the frame size M3AP 160 L-4. are provided.
Saanane, Nzali & Chambega
Table 1: Inputs to the Fuzzy Logic Model Type of moAirgap diameAirgap induction, B, ter, D, lml ITI tor M3AP 160 L4
0.14
............... ........... o
,.,/...................
.:,,
~',
i'
..... ...........
s
W
J........... ""
................
During the course of interconnection, considering the geographical locations and the prevailing environmental constraints, the 11 kV inter-connector between Cote D'or and Vallee De Mai feeders shall have to cross hilly regions and pass along coast areas. The complete HV interconnection would require under noted 4 stages: 1. The existing 2 phase, 11 kV line for a route length of 0.75 km shall have to be upgraded to 3 phase, 11 KV up to the end of Cote D'or feeder at Anse Lazio. 2. 3 phase High Voltage ABC cable shall have to be additionally strung for route length of 3 km to connect with the end of Vallee De Mai feeder at Mt. Plaiser. 3. Installation of 11 KV Air Break Isolators at the start and end of inter-connector. 4. Installation of Surge Diverters at both ends of High Voltage ABC cable. The suggested 11 kV interconnection would bring under noted improvement in network:
452
Vishwakarma
1. 2.
Availability of flexible supply arrangements between Vallee De Mai and Cote D'or feeders since either of them can be extended further during supply outage. Reduction in off-supply durations during faults or routine maintenance works.
9.0 CONCLUSIONS Upon analyzing the different technical parameters of the 11 kV feeders of the existing distribution network at Praslin, it has been observed that they are within allowable limits and their present performance is quite acceptable. However considering the growth of the load as per the prevailing rate, one of the feeders shall be critical by the year 2011. Using the modern engineering principles, several short term and long term solutions have been suggested to reinforce the distribution network after carrying out associated calculations and predicting their end results to meet the future demand. REFERENCES Annual Reports of PUC - Seychelles (1998 - 2004) AS Pabla, Electric Power Distribution, Tata McGraw-Hill Publishing Company Limited, New Delhi, 5th edition. Log Sheets of Electricity Generating Stations of P U C - Seychelles Manuals and Technical Literature supplied by various Manufacturers. Suresh Vishwakarma (2005), High Voltage Network Interconnection and Reinforcement on
Praslin Island, Seychelles, E C - UK
453
International Conference on Advances in Engineering and Technology
C H A P T E R SIX MECHANICAL ENGINEERING
E L E C T R O P O R C E L A I N S F R O M R A W M A T E R I A L S IN UGANDA: A REVIEW P.W. Olupot, Department of Mechanical Engineering, Makerere UniversitY, Uganda. S. Jonsson, Department of Material Science and Engineering, Royal Institute of Technology, Sweden. J.K. Byaruhanga, Department of Mechanical Engineering, Makerere University, Uganda.
ABSTRACT Porcelains are vitrified and fine grained ceramic whitewares, used either glazed or unglazed. Electrical porcelains are widely used as insulators in electrical power transmission systems due to the high stability of their electrical, mechanical and thermal properties in the presence of harsh environments. They are primarily composed of clay, feldspar and a filler material, usually quartz or alumina. These materials are widely available in Uganda, but little research has been carried out on them in relation to technical porcelains let alone their use for the same. Based on the abundance of the requisite materials and the corresponding demand for insulation materials, this paper reviews the current and traditional methods of manufacturing electrical porcelains. The major objective is to review the processes for production of electric porcelains from the basic raw materials including material characterisation methods. Keywords: Porcelain, materials, properties, characterisation, Uganda.
1.0 INTRODUCTION Porcelains are polycrystalline ceramic bodies containing typically more than about 10 volume percent of the vitreous phase (Cho & Yoon, 2001). The vitreous phase controls densification, porosity and phase distribution within the porcelain and to a large extent its mechanical and dielectric properties. Porcelains are widely used as insulators in both low and high voltage applications, mainly due to the high stability of their electrical, mechanical and thermal properties in the presence of harsh environments (Kingery, 1967; Bribiesca et al, 1999). They are classified as triaxial, steatite/non feldspathic types depending on the composition and amount of vitreous phase present. Non feldspathic porcelains are found in the system MgO-AI203-SiO2 (Buchanan, 1991). The raw materials used are talc (3MgO'4SiOz'2H20), kaolinite clays, and alkaline earth fluxes such as
454
Olupot, Jonsson & Byaruhanga
BaCO3 and CaCO3. These porcelains are typically of higher purity than the triaxial porcelains with superior dielectric properties but are more difficult to produce due to a narrower sintering range. Triaxial porcelain forms a large base of the commonly used porcelain insulators for both low and high tension insulation. It is considered to be one of the most complex ceramic materials and most widely studied ceramic system (Dana et al, 2004), yet there still remains significant challenges in understanding it in relation to raw materials, processing science, phase and microstructure evolution (Carty & Senapati, 1998).They are made from a mixture of the minerals clay-flint-feldspar. The clay [A12Si2Os(OH)4], give plasticity to the ceramic mixture; flint or quartz (SiO2), maintains the shape of the formed article during firing; and feldspar [KxNal_x(A1Si3)Os], serves as flux. These three place triaxial porcelain in the phase system [(K,Na)20-A1203-SiO2)] in terms of oxide constituents (Buchanan 1991). The fired product contains mullite (A16Si2013) and undissolved quartz (SiO2) crystals embedded in a continuous glassy phase, originating from feldspar and other low melting impurities in the raw materials. By varying the proportions of the three main ingredients, it is possible to emphasize certain properties, as illustrated in Figure 1 (Thurnauer, 1954). The above mentioned materials are widely available in Uganda, but there is no evidence of the successful use of the same for porcelains in the country. This paper reviews the current and traditional methods of producing electrical porcelains. The objective is to give an understanding of the process for production of triaxial electric porcelains from raw materials. 50000
M t '[
45000 M Microcline ordered, KAISi~O s "
40000
A Albite, NaAISi30 s
35000
i
Q Quartz
I Lunya Feldspar
30000
i
25O00 20000
M
M
M M MM
A M M t' , .Q!!':i:
MMM
M
15000 : Mutaka Feldspar
10000 5000 MM
0
,
2
i
14
M
M
M M M M M M M~ .... , ,~,
MM M M
,
16
18
20
22
24
26
28
30
32
34
2 Theta I~
Figure 1" XRD of some of Uganda's Feldspar Deposits (Olupot et al, In Press)
Figure 2:
Porcelain of various properties in the Triaxial diagram (Thumauer. 1954)
2.0 RAW MATERIALS FOR TRIAXIAL PORCELAIN 2.1 Kaolin Kaolin is a clay material that consists primarily of the clay mineral kaolinite, A12Si2Os(OH)4. It is white when dry and white when fired. In terms of origin, kaolins exist as residual and sedimentary kaolins (Norton, 1970; Dombrowski, 1998; Murray, 1998). A pure kaolinite crystal has the composition of A1203"2SiO2"2H20, giving it a theoretical composition of 39.8 wt % AlzO3 46.3 wt % SiO2, and 13.9 wt % H20. Used
455
International Conference on Advances in Engineering and Technology
alone, kaolin is difficult to shape into objects because of its poor plasticity. Its refractoriness also makes it difficult to mature by firing to a hard, dense object. In practice, other materials are added to it to increase its workability and to lower the kiln temperature necessary to produce a hard dense product. Because of its coarse grain structure, kaolin has little dry strength and a low firing shrinkage.
2.2 Ball Clay Ball clay consists, largely of the mineral kaolinite, but with a smaller crystal size than that of other clays. The reasons for inclusion of ball clays in whiteware bodies include; increased workability of the body in the plastic state (Singer & Singer, 1979), development of increased green strength, increased fluidity imparted to casting slips and the fluxing ability of some ball clays. The amount of ball clays in the whiteware, however, has to be controlled because in most cases they contain substantial amounts of iron oxide and titania, which impair the whiteness of the fired bodies and reduce the translucency of vitreous ware. If whiteness is desired, not more than about 15% of ball clay can be added to a clay body (Powell, 1996). In addition, the large amounts of water that must be added to develop high plasticity, results in large shrinkages during drying. Therefore ball clays cannot completely replace kaolins in a ceramic body without causing cracking and warpage of the ware. 2.3 Feldspar Feldspars of importance to ceramics are aluminosilicates of sodium, potassium, and calcium (Jones & Berard, 1993). They are used as fluxes to form a glassy phase in bodies, thus promoting vitrification and translucency. They also serve as a source of alkalis and alumina in glazes. The pure spars are albite (NaA1Si308), orthoclase or microcline (KA1Si308) and anorthite (CaA12Si208). The soda spars are used in glasses and glazes, and the high potash spars in whiteware bodies. Potassium feldspars (KA1Si308) enable the broadest firing range and the best stability of the body against deformation during firing. Sodium feldspar (NaA1Si308) exhibits a lower viscosity than potassium feldspar when melted at a given temperature. This enables vitrification at lower temperatures but carries the risk of increased deformation. Calcium feldspar (CaA12Si208) has an extremely narrow firing range. Figure 2 shows the mineralogy of some of Uganda's feldspar deposits. 2.4 Quartz Silica, SiO2 occurs in nature as a dense rock quartzite and as silica sand. Sand is the preferred raw material for ceramics as it does not need the energy-consuming crushing process. Quartzites however dissolve more rapidly than sand in the molten phase as indicated by the transformation rate to cristobalite during heating (Schuller, 1997). Quartz is added to ceramic bodies as filler. Fillers are minerals of high melting temperature that are chemically resistant at commercial firing temperatures ( ,.,..,~ .... , ~ f
........ o, =~, ,o, ~,, :o~ ,
Fig 1" Map of Uganda showing districts covered during the survey
EFFLUENT OUTLET PIPE AS EXIT PIPE
pjFEED lb)LET PIPE
~
Fig 2: Fixed dome digester
483
International Conference on Advances in Engineering and Technology
GUIDE MIXINGER PIPE
~.
. ~METAHLIC GASHOLDER
EFFLUENT
HOLDIN?TAN
/
_
pi/IpLET
ET
"~'///'////////////"////~[
Fig.3" Floating cover digester
FEEDINLET
t
BIOGAS
~
]~
DIGESTER CONTENTS
LEVELLED SURFACE
Fig.4: Tubular digester
4.0 T E C H N O L O G Y P E R F O R M A N C E Generally 48% of the plants were not functioning. Of these, more than 80% failed in less than six years after construction. While this percentage is general, all the nonfunctioning tubular digester systems failed in less than four years. On the other hand, more than 70% of the functioning systems have been in operation for less than six years. With the current status quo, we can not be sure that the currently functioning systems will stay in operation for more than six years especially when only 17% had lasted more than 8 years at the time of the survey. Specific gas production which is a measure of the biogas produced per day in m 3 per m 3 of digester and pressure developed were used as Indicators of system performance 3
3
The highest specific biogas production registered during the study was 0.23m/m/day and the lowest 0.05 m 3/ m /3d a y both of which were for fixed dome digester systems. Comparing these actual figures with the expected results as obtained from on-station
484
Nabuuma & Okure
tests carried out at AEATRI with consistent daily feeding of plants reveals a very big deviation. While an average of 0.23, 0.25, 0.17 m3/m3/day was expected for floating cover, fixed dome and tubular digester respectively (Odogola et al, 2001), those from the field indicated an average of 0.11, 0.14 m3/m3/day for floating cover and fixed dome respectively. Specific gas production for tubular digester systems could not be established because of inconsistent feeding as well as technical faults which rendered the plants non-functioning. Some of the plants did not develop enough pressure to allow for flow of gas from the digester storage chambers to the gas consumption points so they could only be used once in a while when the pressure had increased to sufficient levels. Possible explanations for dismal performance were investigated along two lines, design and construction as well as operation and maintenance. The findings are outlined as follows. 4.1 Design and Construction Problems Most of the blockages in the pipes were due to lack of or poor location of water drain valves. This hindered effective condensate removal and eventually led to intermittent flow. The tubular digesters were placed in positions unprotected from moving animals and objects and were therefore susceptible to damage. As a result, most of these bags were pierced or developed holes and could no longer hold gas. In addition to that, most of the bag reservoirs were placed on tree branches. The effect of the sun on the continually expanding and contracting bag resulted into wear and tear and finally the bags failed. With the failure of the biogas reserve bags, the systems could no longer develop enough pressure. Some of the supply lines were placed under concrete making maintenance activities such as emptying of blocked pipes a tedious exercise since it necessitates breaking of concrete before accessing the lines. Some gas outlet pipes were placed below the highest slurry levels in the digester and were therefore blocked by the digester contents hindering gas flow to the appliances. 4.2 Operational and Maintenance Problems Besides the technical faults on the biogas plants, problems such as insufficient gas production and low pressure in most of the plants was found to arise from inconsistent feeding, incorrect dilution ratios and insufficient loading of digesters. Although dilution ratios should be 1:1 and 1:2 for fresh and dry manure respectively results show biogas system operators, whose source of feed was paddocks and therefore using dry manure, operating with ratios such as 1:0.9, 1:1.2 and 1: 3.6. In some cases at the time of the survey they no longer had animals from which to get digester feed or the animal waste was too little to be of any use. Other plants were abandoned simply because of the burden of transporting effluent to gardens especially for homes whose gardens were far away from the biogas effluent chambers.
485
International Conference on Advances in Engineering and Technology
These coupled with lack of awareness on the labour intensive operation and maintenance requirements for biogas systems as well as the availability of alternative sources of energy such as wood and charcoal led to the abandonment of the biogas systems. A number of non-functioning floating cover and fixed dome systems could have been saved from further deterioration either by carrying out simple, minimal cost maintenance activities such as water condensate draining and cleaning of burners. In some cases all that was required was replacement of low cost system components such as drain valves or hosepipes but this too was neglected. Although this is possible with floating cover and fixed dome systems, replacement of faulty parts for tubular digester systems is rather costly. According to information provided by Agricultural Engineering and Appropriate Technology Institute, the cost of installing a 4m 3 tubular digester system would require about 240,000 Uganda shillings (140 dollars) in materials and 100,000 Uganda shilling (60 dollars)labour. In most cases the damaged components of this system were found to be the bag digester and gas reservoir which cost about 75,000 Uganda shilling (45 dollars). This means replacement of the bags would require more than 30% of the initial installation cost for the entire system. Therefore, although considered affordable because of the low initial cost, tubular digester systems are not sustainable unless an element of protecting the bags both from moving objects and the effect of sun is incorporated into the construction costs an idea which would compromise the initial objective of providing low cost biogas digester systems. One of the greatest hindrances to the sustainability of biogas technology in Uganda is lack of awareness of even the minimum requirements for good performance of a biogas plant, possible failures, causes and remedies. There is a very big knowledge gap between the promoters of the technology and the users. While the promoters are aware of all the benefits as well as the necessary inputs to acquire them, most of the operators or users of the technology are unaware of the benefit package as well as the operation and maintenance requirements. Based on the information availed by the users in the field, only close to 20% of the users admitted to having received both written and oral instructions while the other percentage only received oral instructions. None of those who received written instructions had a copy of the same. Most of the information that the operators had was verbally handed down to them from the owners or had obtained it from the previous operator. The possibility of losing some of the information in the transfer is quite high since the contents are not constant. Further analysis revealed that 60% of the users were aware of the technical problems on their systems and 48 % of these knew the causes, only 32 % knew how to solve the problem and only one user managed to solve the problem. Repair was not done either because of lack of technical know how or the high cost of replacing component parts. 5.0 SUMMARY AND RECOMMENDATIONS The failure of most of the systems cannot be entirely blamed on the technology. These problems seem to have originated right from conception of the dissemination programmes, through the system design, to the construction and operation and maintenance process. The survey revealed that the technical aspect of biogas technology cannot be totally isolated from the social and economic aspects. 
From the study, it seems the only
486
Nabuuma & Okure
reason for acceptance of the technology was the convenience associated with using the energy obtained from a biogas plant. This, compared to the cost of operation, was not strong enough reason to favour continual commitment to the technology especially where users perceived that the other sources are available and convenient. It seems the benefits of the technology were over emphasized while neglecting the inputs required for generating them. This is evidenced by the enthusiastic early adoption of the technology which quickly wanes with eventual abandonment of the technology even when the infrastructure is still in good condition. Since the majority of Uganda's population still depends on agriculture for its livelihood and the energy demand in still on the increase, biogas technology still has potential in Uganda. As a way forward, the following recommendations may be made: (1) In order to achieve the benefits of the biogas technology, the owners/users should be given all the necessary information regarding the inputs required for good performance of a biogas system in a way that can easily be passed on from one user to another with minimal knowledge loss. Biogas is a labour intensive technology and emphasis should be placed on specifying labour, substrate, operation and maintenance requirements Information regarding potential hazards to gas inhalation and fire risks as well as emergency medical care should be availed to the users. (2) There is need to develop technical and maintenance skills. The first step in this is to encourage the owners to participate in the construction of the entire biogas system while highlighting possible problems, causes and remedies. (3) For future projects, dissemination should be based on end-user approach instead of the supply-side approach should be used in order to facilitate sustainability of the technology. Dissemination program teams should endeavour to highlight how each of the benefits of the technology can transform the lives of the users as a way of fostering commitment to the technology REFERENCES
MODELLING THE DEVELOPMENT OF ADVANCED MANUFACTURING TECHNOLOGIES (AMT) IN DEVELOPING COUNTRIES
M.A.E. Okure, Department of Mechanical Engineering, Makerere University, Uganda
N. Mukasa, Department of Mechanical Engineering, Makerere University, Uganda
Bjorn Otto Elvenes, Norwegian University of Science and Technology (NTNU), Norway
ABSTRACT
This paper presents models that can collectively be used to analyse the manufacturing industry in developing countries. The models take into account the existing environment and highlight the effect that production strategy, as a moderating factor, has on decisions to adopt Advanced Manufacturing Technologies (AMT) as well as on upgrading technical skills to the levels necessary for absorption of these technologies. A method of quantifying proponents, skill levels and production strategies as main effects on the degree of automation is presented. Finally, the type of activity a firm is engaged in is explored as an influential factor in AMT adoption.
Keywords: models, manufacturing industry, existing environment, advanced manufacturing technology, skill levels, firm activity.
1.0 INTRODUCTION
The manufacturing industry in developing countries is generally characterised by low growth, low volume and capacity, and low responsiveness, and consequently cannot survive in highly competitive markets. Small-medium batch sizes and non-flow-line production technologies are typical of industries in this sector. The vitality and speed with which other sectors, such as telecommunications, have adopted advances in technology has not been observed in the manufacturing industry, particularly in developing countries. Small manufacturing firms are the norm rather than the exception, employing the majority of manufacturing employees, and can contribute enormously to the vitality of these economies. Past research on AMT adoption and implementation has mainly focussed on large firms, since it is assumed that small firms do not have the resources to make extensive use of these technologies. The inventiveness found among smaller firms is apparently absent among larger manufacturers. The general trend worldwide to achieve the efficiency and utilisation levels of mass production, while retaining the flexibility that job shops have in batch production, through Flexible Manufacturing Systems (FMS), has not taken root in developing countries. Dearth of appropriate government policies, lack of awareness, poor industrial strategy, no external markets to complement the domestic one, high tool investment costs, lack of
suitable financing, unreliable electricity supply and low returns can be cited as bottlenecks to the growth of the industry. This article explores the modelling of the effect that education levels and technical skills have on the technological development of this industry, the main incentives for AMT adoption in existence, and the types and degree of automation appropriate to the various categories of establishments in developing countries.
2.0 MODELS
Several models can be used to try to unravel the status and prospects of the manufacturing industry in a developing country, especially with respect to adoption and utilization of AMTs.
2.1 Education Levels Model
This model fits the relationship between education levels and the degree of automation. It takes the form of a multiple regression model
$$AMT_i = \beta_0 + \beta_1(CE)_i + \beta_2(SEC)_i + \beta_3(MGR)_i + \beta_4(ENG)_i + \beta_5(BCW)_i + \varepsilon_i \qquad (1)$$
where AMT_i = breadth of AMT adoption of firm i; CE_i = percentage of clerical employees that use computer-based technologies on a daily basis in firm i; SEC_i = percentage of secretaries that use computer-based technologies on a daily basis in firm i; MGR_i = percentage of managers that use computer-based technologies on a daily basis in firm i; ENG_i = percentage of engineers that use computer-based technologies on a daily basis in firm i; BCW_i = percentage of blue-collar workers that use computer-based technologies on a daily basis in firm i; and ε_i is the error term.
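As a minimal sketch of how a regression of the form of Equation (1) could be fitted by ordinary least squares (all firm data below are invented for illustration, not survey results):

```python
import numpy as np

# Hypothetical survey data: one row per firm; columns are the
# percentages of clerical staff (CE), secretaries (SEC), managers
# (MGR), engineers (ENG) and blue-collar workers (BCW) using
# computer-based technologies daily.
X = np.array([
    [80, 90, 60, 70, 10], [20, 40, 30, 10,  5],
    [50, 60, 55, 45, 15], [10, 30, 20,  5,  0],
    [70, 85, 65, 60, 20], [30, 45, 35, 20,  5],
    [60, 70, 50, 55, 25], [15, 35, 25, 10,  5],
], dtype=float)
y = np.array([12, 3, 8, 1, 11, 4, 9, 2], dtype=float)  # AMT breadth (0-21)

# Prepend an intercept column and solve for (beta_0, ..., beta_5).
A = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated coefficients:", np.round(beta, 3))
```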
2.2 Proponents Model
This model fits proponents of AMTs and degree of automation, also in a multiple regression model of the form
$$AMT_i = \beta_0 + \beta_1(TAX)_i + \beta_2(ENV)_i + \beta_3(CUST)_i + \beta_4(MD)_i + \beta_5(ENG)_i + \beta_6(MRT)_i + \varepsilon_i \qquad (2)$$
where AMT_i = breadth of AMT adoption of firm i; TAX_i = firm i's response to tax incentives and/or favourable financing; ENV_i = firm i's response to environment, safety or health; CUST_i = firm i's response to customers; MD_i = firm i's response to the Managing Director or Chief Executive Officer; ENG_i = firm i's response to the Engineering/Production departments; MRT_i = firm i's response to the Marketing/Sales department; and ε_i is the error term.
2.3 Activity Model
This model tests the effect of the manufacturing activity of firms on AMT adoption. The following analysis of variance (ANOVA) model is used:
$$Y_{ij} = \mu_j + \alpha_{ij} + e_{ij} \qquad (3)$$
$$H_0: \mu_1 = \mu_2 = \dots = \mu_c$$
$$H_{01}: \alpha_1 = \alpha_2 = \dots = \alpha_c = 0$$
where Y_ij is the i-th firm's breadth of AMT in the j-th category of activity and α_ij is the effect due to the i-th firm.
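A minimal sketch of the one-way ANOVA test behind model (3), with invented AMT-breadth samples for three activity categories:

```python
from scipy.stats import f_oneway

# Hypothetical AMT-breadth observations grouped by activity category.
food      = [2, 3, 1, 4, 2, 3]
steel     = [5, 7, 6, 4, 8, 5]
furniture = [1, 0, 2, 1, 1, 2]

# One-way ANOVA: tests the hypothesis that mean AMT breadth is
# equal across activity categories, as in model (3).
f_stat, p_value = f_oneway(food, steel, furniture)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value would suggest that the type of activity a firm
# is engaged in affects its breadth of AMT adoption.
```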
2.4 Production Strategy Model
This model checks the dependence of the degree of automation on production strategies and is made up of the following multiple regression model:
$$AMT_i = \beta_0 + \beta_1(PRDCT)_i + \beta_2(LBCT)_i + \beta_3(PRD)_i + \beta_4(PRDQT)_i + \beta_5(CUSTSQ)_i + \beta_6(DMRT)_i + \beta_7(FMRT)_i + \beta_8(CPADV)_i + \beta_9(FLX)_i + \varepsilon_i \qquad (4)$$
where again AMT_i = breadth of AMT adoption of firm i; PRDCT_i = firm i's response to reduction in cost of finished goods; LBCT_i = firm i's response to reduction in labour costs; PRD_i = firm i's response to increase in overall productivity; PRDQT_i = firm i's response to increased quality of product(s); CUSTSQ_i = firm i's response to increased quality of customer services; DMRT_i = firm i's response to increased domestic market share; FMRT_i = firm i's response to increased foreign market share; CPADV_i = firm i's response to superior firm image; FLX_i = firm i's response to increase in the flexibility of the manufacturing process; and ε_i is the error term.
2.5 Interaction Model 1
This model measures the moderating effect production strategy has on technological skills as determinants of the degree of automation. The following multivariate regression model is used:
$$AMT_{ijk} = \beta_0 + \beta_1(PS)_{ij} + \beta_2(TS)_{ik} + \beta_3(PS)_{ij}\times(TS)_{ik} + \varepsilon_i \qquad (7)$$
Here, (PS)_ij = effect of firm i with dimension j of production strategy, (TS)_ik = effect of firm i with technical capability k, and
(PS)_ij × (TS)_ik = interaction effects between strategic motivations and technical skills.
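A sketch of how such an interaction model can be estimated: the moderating effect enters the design matrix as the product of the two main-effect columns (all scores below are invented for illustration):

```python
import numpy as np

# Hypothetical firm scores: PS = production-strategy rating (1-5),
# TS = technical-skill measure (% of staff using computers daily).
PS  = np.array([5, 2, 4, 1, 3, 5, 2, 4], dtype=float)
TS  = np.array([70, 20, 55, 10, 40, 80, 25, 60], dtype=float)
AMT = np.array([14, 2, 9, 1, 6, 16, 3, 11], dtype=float)

# Design matrix: intercept, the two main effects, and the PS x TS
# product carrying the moderating (interaction) effect of Eq. (7).
A = np.column_stack([np.ones_like(PS), PS, TS, PS * TS])
beta, *_ = np.linalg.lstsq(A, AMT, rcond=None)
print("beta0..beta3:", np.round(beta, 4))  # beta3: interaction term
```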
2.6 Interaction Model 2
This model measures the extent to which production strategies modify the form of the relationship between the degree of automation and the influence of proponents. The following multivariate regression model is used:
$$AMT_{ijk} = \beta_0 + \beta_1(PS)_{ij} + \beta_2(IP)_{ik} + \beta_3(PS)_{ij}\times(IP)_{ik} + \varepsilon_i \qquad (8)$$
where (PS)_ij = effect of firm i with dimension j of production strategy, (IP)_ik = effect of firm i with influence of proponents k, and (PS)_ij × (IP)_ik = interaction effects between strategic motivations and the influence of proponents.
3.0 RESEARCH METHODOLOGY
To test the models and later validate them, the following methodology is used.
3.1 Sampling
The research adopts the firm as its unit of analysis. The population is manufacturing establishments in Uganda employing more than 5 people that make use of and own machine tools. The sampling frame to be used is the Uganda Bureau of Statistics 2003 Business Register. Within the register, 1939 firms were identified as meeting the criteria for inclusion in the research population.
In order to ensure that all manufacturing activities are represented, stratified random sampling methods will be used; a sketch of the resulting sample allocation follows the list below. In compiling the sampling frame, the population is divided into twelve groups based on the kind of activity the establishments are engaged in. These form the strata. On this basis, the following numbers were found in each group:
- Food processing, which includes processing of meat, fish and dairy products; grain milling; bakeries; sugar and jaggery; coffee roasting and processing; tea processing; other food processing and animal feeds (584)
- Tobacco and beverages (45)
- Textile, clothing, leather and footwear (124)
- Timber, paper and printing (165)
- Chemicals, paint and soap (53)
- Plastics and rubber (22)
- Bricks and cement (88)
- Steel and steel products (256)
- Foam products (5)
- Furniture (449)
- Civil works (141)
- Miscellaneous (7)
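A minimal sketch of proportional allocation across these strata is given below; the overall sample size of 300 is an assumed figure for illustration, not one reported by the study.

```python
# Stratum counts from the Uganda Bureau of Statistics 2003 Business
# Register, as listed above (they sum to the 1939 eligible firms).
strata = {
    "Food processing": 584, "Tobacco and beverages": 45,
    "Textile, clothing, leather and footwear": 124,
    "Timber, paper and printing": 165, "Chemicals, paint and soap": 53,
    "Plastics and rubber": 22, "Bricks and cement": 88,
    "Steel and steel products": 256, "Foam products": 5,
    "Furniture": 449, "Civil works": 141, "Miscellaneous": 7,
}
n_total = sum(strata.values())   # 1939 firms in the sampling frame
n_sample = 300                   # assumed overall sample size

# Proportional allocation: each stratum contributes in proportion
# to its share of the population (at least one firm per stratum).
for name, count in strata.items():
    n_h = max(1, round(n_sample * count / n_total))
    print(f"{name}: {n_h}")
```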
Data in this study will be collected by interviewing leaders of the selected establishments. In addition, a survey instrument is filled in for each establishment.
3.2 Operationalisation of Variables
In the research, the dependent variable is AMT. The survey instrument asks firms to identify the type and number of AMT they had adopted. A firm's breadth of adoption (AMT) was the number of different types of advanced manufacturing technologies used by each firm. The survey instrument identified 21 possible such technologies. Thus, a firm's AMT can range from "0" (a firm which has no AMT) to "21" (a firm which has adopted all 21). The various independent variables are outlined below, based on the different models already presented above.
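As a small illustration (the technology names are placeholders, since the 21 AMT types are not listed here), a firm's breadth of adoption is a simple count over its survey responses:

```python
# Placeholder responses: 1 if the firm has adopted the technology,
# 0 otherwise, for each of the 21 AMT types in the instrument
# (only three illustrative names shown here).
responses = {"CAD": 1, "CNC machining": 1, "robotics": 0}  # ...21 entries
amt_breadth = sum(responses.values())  # ranges from 0 to 21
print(amt_breadth)
```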
3.3 Education Levels Model
In the education levels model, the technical capabilities of the different categories of employees are measured as the actual percentage of employees within each category who use computer-based technologies on a daily basis. In their study, Lefebvre et al. (1996) noted that the level and type of educational background and the extent of functional experience were poor proxies for the level of technical skills: for example, some extremely skilled machinists operating computerized numerically controlled (CNC) machines had only two or three years of experience and no post-secondary diploma. They also noticed that, in the more sophisticated firms, an extensive use of computer-based technologies by the non-productive employees was almost invariably associated with a higher AMT adoption rate. As a result of this, the survey instrument assesses the extent of use of computer-based applications for all types of employees.
3.4 Proponents Model
The variables in this model are measured using a five-point Likert scale. Firms are asked to respond to the following question: "On a scale of 1-5, indicate how the following groups, individuals or factors influence decisions to adopt AMTs in your firm." The influences are then categorized into two groups:
- External influences: tax incentives/favourable financing; environment, safety/health; customers
- Internal influences: Managing Director/Chief Executive Officer; Engineering/Production Department; Marketing/Sales Department
3.5 Production Strategy Model
The variables here are also measured on a five-point Likert scale, and firms are asked to respond to a question of the form: "On a scale of 1-5, indicate how the following strategic motivations would influence or influenced your decision to adopt AMTs." In all, nine production strategies are listed in the instrument.
4.0 IMPLICATIONS OF THE PROPOSED MODELS
The models are so far presented as a broad framework for the explanation of educational levels, proponents of AMTs, production strategies and manufacturing activity phenomena in the context of AMT adoption in developing countries. The following are the implications of the models:
1. The models can be used as a useful means to measure the strengths and weaknesses of manufacturing firms in developing countries in their efforts to adopt more efficient technologies and therefore compete favourably in the global market.
2. The analysis of resulting data may reveal that certain key variables predominate in determining the efficacy of the interacting variables. This effort may pinpoint key factors which determine success in AMT adoption and therefore provide a basis for policy intervention to accelerate the movement to industrialisation.
3. Consequently, the models can explain the slow pace at which firms in developing countries are introducing advanced technologies. The resulting propositions can then be used to suggest coping strategies that can alleviate these problems.
4. The models can act as a basis for establishing more credible and valid models which can be used to explain interaction factors in the context of AMT adoption in developing countries.
5.0 CONCLUSIONS
The establishment of production facilities in developing countries is limited in part by the increasing scales that emerged with industrialization. The small size of their domestic markets reinforces the argument advanced by Alcorta (2001), who points out that exports could provide a way out of the scale problem but immediately cautions that a minimum of efficiency is often necessary prior to entering foreign markets. Understanding why these firms are not adopting agile manufacturing and time-based technologies can be critical to their competitiveness. The paper has presented general models that can be used as a launching pad for further work. Production strategy has been used as the main moderating variable, but both the influence of proponents and educational levels may be taken on separately as interacting variables. Models that measure the interacting effects of impeding factors, flexibility types, openness to innovation and technical collaboration have not been presented. This study took manufacturing establishments that make use of and own machine tools as its population. Future work can be in the area of analyzing tooling facilities in the establishments to identify any moderating role played in the context of AMT adoption.
REFERENCES
Acs, Z.J. & Audretsch, D. B. (1998). Innovation and firm size in manufacturing. Technovation, 7, 197-210.
Ahluwalia, I.J. (1991). Productivity and Growth in Indian Manufacturing. Delhi; Oxford University Press.
Alcorta, Ludovico (2001). Technical and Organization Change and Economies of Scale and Scope in Developing Countries. Oxford Development Studies, 29, 77-101.
Bennet, D., Vaidya, K. & Zhao, H. (1999). Valuing Transferred Machine Tool Technology. International Journal of Operations & Production Management, 19, 491-515.
Bonaccorsi, A. (1992). On the relationship between firm size and export intensity. Journal of International Business Studies, 3(4), 605-635.
Carlsson, B. & Stankiewicz, R. (1991). On the nature, function and composition of technological systems. Journal of Evolutionary Economics, 1(2), 93-118.
Hobday, M. (1995). Innovation in East Asia. Edward Elgar, London.
Katrak, H. (2000). Economic Liberalization and the Vintages of Machinery Imports in Developing Countries: An Empirical Test for India's Imports from the United Kingdom. Oxford Development Studies, 28, 309-323.
Lall, S. (1990). Building Industrial Competitiveness in Developing Countries. Paris; OECD Development Centre Studies.
Lefebvre, L.A., Lefebvre, E. & Harvey, J. (1996). Intangible assets as determinants of advanced manufacturing technology adoption in SMEs: Toward an evolutionary model. IEEE Transactions on Engineering Management, 43(3), 307-322.
Merideth, J. (1987b). The strategic advantages of new manufacturing technologies for small firms. Strategic Management Journal, 8, 249-258.
Naik, B. & Chakravarty, A.K. (1992). Strategic acquisition of new manufacturing technology: A review and research framework. International Journal of Production Research, 30(7), 1575-1601.
Primrose, P.L. & Leonard, R. (1985). Evaluating the "intangible" benefits of flexible manufacturing systems by use of discounted algorithms within a comprehensive computer program. Proceedings of the Institution of Mechanical Engineers, 199, 23-28.
Rajagopalan, N., Rasheed, A.M. & Datta, D.K. (1993). Strategic decision processes: An integrative framework and future directions. Oxford, U.K.; Blackwell.
Rodrik, D. (1992). The limits of trade policy reform in developing countries. Journal of Economic Perspectives, 6, 87-105.
Samuels, J., Greenfield, S. & Mpoku, H. (1992). Exporting and the small firm. International Small Business Journal, 10(2), 24-36.
Soete, L. (1985). International diffusion of technology, industrial development and technological leapfrogging. World Development, 13(3), 409-432.
Sung, T.K. & Carlsson, B. (2003). The evolution of a technological system: the case of CNC machine tools in Korea. Journal of Evolutionary Economics, 13, 435-460.
Tsuji, M. (2003). Technological innovation and the formation of Japanese technology: the case of the machine tool industry. AI & Society, 17, 291-306.
CHAPTER SEVEN
GEOMATICS
SPATIAL MAPPING OF RIPARIAN VEGETATION USING AIRBORNE REMOTE SENSING IN A GIS ENVIRONMENT. CASE STUDY: MIDDLE RIO GRANDE RIVER, NEW MEXICO
F. Farag, Strategic Research Unit, National Water Research Center, Cairo, Egypt
O. Akasheh, Biological and Irrigation Engineering Department, Utah State University, USA
C. Neale, Biological and Irrigation Engineering Department, Utah State University, USA
ABSTRACT
This paper demonstrates a procedure to classify riparian vegetation in the middle Rio Grande River, New Mexico, USA, using high resolution airborne remote sensing in a GIS environment. Airborne multispectral digital images with a spatial resolution of 0.5-meter pixels were acquired over the riparian corridor of the middle Rio Grande River, New Mexico, in July of 2001, using the new Utah State University (USU) digital imaging system, covering approximately 175 miles (282 km). The images were corrected for vignetting effects and geometric lens distortions, rectified to 1:24000 USGS digital orthophoto quads as a base map, mosaicked and classified. Areas of the vegetation classes and in-stream features were extracted and presented. The surface water area within the river, along with meso-scale hydraulic features such as riffles, runs and pools, was classified. The water surface area parameters were presented not as an indication of water flow volume in the river, though they could be related, but as a means of showing how changes occur moving downstream. Analyzing the river images shows that water diversions have a big effect on the water surface of the river. Records of river flows on that date confirm these classification results. Riparian vegetation mapping using high resolution remote sensing gives a broad and comprehensive idea of the riparian zone health and condition along the river. In the case of the Middle Rio Grande, the vegetation classification image maps will help decision makers to study and identify problems that affect the river system. This map will also provide a base from which to monitor the riparian vegetation in the future and provide the basis for change detection resulting from any management plan applied to the river corridor with the aim of protecting and restoring the river ecosystem.
Keywords: riparian vegetation; remote sensing; GIS; meso-scale hydraulic features.
INTRODUCTION
Riparian vegetation systems are important for maintaining the water quality and habitat diversity of rivers. Traditional methods of mapping riparian systems include aerial photography, as well as extensive ground-based mapping using well-established surveying and measurement techniques (Neale, 1997). Global Positioning Systems (GPS) have also been used as aids in ground-based mapping. Furthermore, airborne multispectral videography systems gained acceptance over the last several years as applications were developed to assist in mapping such riparian systems. More recently, the airborne multispectral digital system has been gaining acceptance as new applications have been developed and proven viable. In addition, improved digital camera systems have become available commercially, providing better quality imagery in digital format.

Airborne multispectral digital imagery provides some advantages over traditional aerial photography: whereas the processing of aerial photograph film is expensive, airborne digital imagery can provide a quick turnaround of multi-band imagery in digital form ready for computer processing. Aerial photographs must be scanned for computer use in digital form, or features of interest interpreted and digitized from the photographs. Calibrated multispectral digital images lend themselves well to computer image classification and the automated extraction of features such as vegetation types, soils, vegetation density and cover, standing water, wetland areas, in-stream hydraulic features, exposed banks, and other features of interest in a riparian zone.

As municipal and irrigation water demands increase due to a growing world population, rivers and streams are exposed to extensive pumping and diversion of water. In the past, there was no consideration for riparian and wetland vegetation water requirements. Encroachment of agriculture and reduced river flows have led to the reduction in riparian vegetation areas. Estimation of riparian vegetation water requirements is considered one step toward conserving this resource. This will set limits for water diversion from rivers and streams for irrigation and municipal purposes. The estimation of riparian vegetation evapotranspiration is not sufficient to quantify the riparian vegetation water requirement unless the riparian vegetation is mapped and the area of the main species is estimated precisely. Furthermore, classification of surface water areas within the rivers, along with the meso-scale hydraulic features, will assist in water resources planning and management.

High resolution airborne remote sensing is a powerful technique for riparian vegetation mapping and monitoring. This paper demonstrates the processing steps and procedures required to use this type of multispectral digital imagery for riparian vegetation classification and mapping over the riparian corridors of the middle Rio Grande River, New Mexico. Furthermore, the study presents classification of the surface water areas within the river strip along with the meso-scale hydraulic features such as riffles, runs and pools.
METHODS
2.1 Image Acquisition and Processing
High-resolution airborne multispectral images were acquired using the Utah State University (USU) airborne digital system at a nominal spatial resolution of 0.5-meter pixels. The middle Rio Grande was covered from Cochiti Dam down to Elephant Butte reservoir, approximately 175 river miles. The image acquisition flights occurred on the 24th, 25th and 26th of July 2001 under mostly clear sky conditions. Figure (1) shows the flown riparian buffer over the river, covering approximately 175 miles (282 km). The USU airborne multispectral digital system acquires spectral images centered in the Green (0.55 μm), Red (0.67 μm) and near-infrared (0.80 μm) portions of the electromagnetic spectrum (Neale, 2001).
Fig. 1: The flown riparian buffer over the middle Rio Grande River, NM, USA in 2001
The images were acquired at a nominal overlap of 60% along the flight lines in one swath centered over the river. For the most part, the 1 km swath width was enough to cover the riparian zone on both sides of the river up to the drains that run parallel to the river on both sides. The individual spectral band images were geometrically corrected for radial distortions, radiometrically adjusted for lens vignetting effects and registered into 3-band images using the technique developed by Neale and Crowther (1994) and Sundararaman et al. (1997). The 3-band images were then rectified to 1:24000 USGS digital orthophoto quads using common control points visible in both sets of imagery. The rectified images were mosaicked into larger image strips along the flight lines, representing reaches of the river. The mosaicked strips were calibrated to a reflectance standard using the USU system calibration and measurements of incoming radiation developed by Crosby et al. (1999).
2.2 Image Classification
Supervised classification was conducted using ground truthing information obtained during a field campaign using the technique developed by Neale (1997). Prints of selected multispectral images along with a Global Positioning System were used to locate and identify different vegetation types and spectral signatures visible in the images. In addition, a second ground truthing data set for the study area, provided by the US Bureau of Reclamation, Denver office, Colorado, was used. Part of the ground truthing data set was used to extract vegetation signatures and train the computer software (ERDAS IMAGINE) to recognize different surface types. The remainder of the ground truthing data was used to verify the accuracy of the classification of the major riparian vegetation classes. Spectral signatures were extracted from the riparian zone, the surrounding areas and from within the river using the Seed Property Tools in IMAGINE. Several signatures were extracted visually and iteratively from the image to cover most features and surfaces that appeared in the images. The spectral statistical separability of the classes was studied using the Transformed Divergence method within the ERDAS IMAGINE software (ERDAS Field Guide, 2001). The final classification was then conducted using the Maximum Likelihood scheme, and all pixels in the images were assigned to a specific class. Figure (2) shows the 3-band image and the corresponding classified image, as well as the final list of classes for a section of the river showing the areas for each class.
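As a schematic sketch of the maximum likelihood rule (this is not the ERDAS IMAGINE implementation; band values and training samples below are invented), each pixel is assigned to the class whose multivariate Gaussian, fitted to the training signatures, gives the highest likelihood:

```python
import numpy as np

def fit_class_stats(samples):
    """Mean vector and covariance of one class's training pixels
    (rows = pixels, columns = Green, Red, NIR band values)."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(x, mu, cov):
    """Log of the multivariate Gaussian density at pixel x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.inv(cov) @ d)

# Invented 3-band training signatures for two classes.
cottonwood = np.random.default_rng(0).normal([60, 40, 160], 5, (50, 3))
tamarisk   = np.random.default_rng(1).normal([70, 55, 120], 5, (50, 3))
stats = {name: fit_class_stats(s)
         for name, s in [("Cottonwood", cottonwood), ("Tamarisk", tamarisk)]}

pixel = np.array([62.0, 42.0, 155.0])
label = max(stats, key=lambda c: log_likelihood(pixel, *stats[c]))
print(label)  # class with the highest likelihood for this pixel
```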
[Figure 2 legend, classes: Sand, Bare Soil, Cottonwood, Dense Tamarisk, Sparse Tamarisk, Dead Tamarisk, Gooding Willow, Acacia/Bushes, Wet Soil/Wet Sand, Grasses, Dirt Road, Asphalt Road, Shadow, Drain Water, Submerged Sand, Backwater, Riffle, Railway, Coyote Willow, Russian Olives, Elm Tree, Burnt Cottonwood, Lilies, Rock and Crops, with the corresponding class areas.]
Fig. 2: 3-band image and the corresponding classified image for a section of the river.
2.3 Accuracy Assessment
An accuracy assessment was conducted on the classified images using the ground truthing data that was not used in the classification process. The accuracy assessment was conducted on the major vegetation classes, which were: Cottonwood, Tamarisk, Russian olives and Coyote willow. Ground truthing data that was not used in the signature set training process was compared to the classified results for that area. The matching and mismatching events were
recorded. In the case of mismatching events, the mismatching class was noted under the corresponding class in the classification column of the confusion table. From this table four measures were calculated: user's accuracy, producer's accuracy, overall accuracy, and the omission and commission errors. User's accuracy is calculated by dividing the number of samples correctly classified by the total number of samples classified into that class. Producer's accuracy is a measure of how well a certain area is classified, and is calculated as the number of samples of the ground truth class that were correctly classified divided by the total number of samples of that ground truth class. Commission and omission errors are the result of 100% accuracy minus the user's and producer's accuracy respectively.
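The computation of these accuracy measures can be sketched as follows (the confusion-matrix counts are invented for illustration, not the study's results):

```python
import numpy as np

# Hypothetical confusion matrix for the four major classes
# (rows = ground truth, columns = classified result).
classes = ["Cottonwood", "Tamarisk", "Russian olive", "Coyote willow"]
cm = np.array([
    [45,  3,  1,  1],
    [ 2, 43,  4,  1],
    [ 1,  5, 41,  3],
    [ 1,  1,  2, 46],
])

producers = cm.diagonal() / cm.sum(axis=1)  # per ground-truth class
users     = cm.diagonal() / cm.sum(axis=0)  # per classified class
overall   = cm.diagonal().sum() / cm.sum()

for c, p, u in zip(classes, producers, users):
    # Omission/commission errors are 100% minus these accuracies.
    print(f"{c}: producer's {p:.0%}, user's {u:.0%}")
print(f"overall accuracy: {overall:.0%}")
```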
3.0 RESULTS AND ANALYSIS
The following paragraphs summarize the findings of the investigation.
3.1 Vegetation Distribution Analysis
Areas of different vegetation classes were extracted from the final classified images for every corresponding quadrangle base map, using the ERDAS IMAGINE software. The area statistics were extracted from the riparian zone, digitized as an area-of-interest (AOI) polygon over the 3-band image mosaics, and essentially corresponded to the region between the two drains that run parallel to the river in most sections of the river. Figure (3) shows a full-resolution view of the 3-band multispectral imagery and the corresponding classified image for part of the river. Figure (4) shows the results of the vegetation class areas per quadrangle sheet, listed from north to south along the x-axis. The most important observation is that the Cottonwood tree areas decreased from north to south while the Tamarisk tree areas increased within the riparian zone. This distribution may indicate problems in the river ecosystem. Tamarisk results in the deterioration of the soil's chemical and physical properties and prevents new cottonwood seedlings and other native species from emerging. Irrigation water diversions and the resulting decrease of in-stream flows as the river flows from north to south might be affecting the balance and the capacity of the native species to compete with Tamarisk. The vegetation distribution chart also indicates areas where Tamarisk control activities such as burning result in the dead Tamarisk class, mostly on the lower sections of the middle Rio Grande. Figure (5) shows a section of the river with dead Tamarisk in the 3-band image and the corresponding classified image. The value of the near-infrared band reflectance drops significantly over the dead Tamarisk compared with the healthy one.
Fig. 3: Full resolution of the 3-band multispectral imagery and the corresponding classified image.
Fig. 4: Surface class statistics of the riparian area along the middle Rio Grande resulting from image classification (classes plotted include Cottonwood, Tamarisk, Dead Tamarisk, Gooding Willow, Acacia/Bushes, Grasses, Coyote Willow, Burnt Cottonwood, Lilies, Elm Tree, Russian Olives and Crops).
Fig. 5: Area with dead Tamarisk resulting from a fire
3.2 In-Stream Surface Water Distribution Analysis
Figure (6) shows the surface water area within the river along with the meso-scale hydraulic features such as riffles, runs and pools. The water surface area parameter is presented not as an indication of water flow volume in the river, though they could be related, but as a means of showing how changes occur as we move downstream. The figure shows the typical change in vegetation pattern and water surface area in the river from north (Albuquerque) to south (San Antonio). More Cottonwood and less Tamarisk appear in the northern section of the imagery, while more Tamarisk and less water surface area are present in the downstream section south of San Antonio. Analyzing the river images from upstream to downstream, it was clear that water diversions have a big effect on the water surface area in the river, a strong indication that the water flows were affected as well. Records of river flows on that date confirm these classification results. Figure (7) shows mean daily stream flow measurements for the month of July 2001, when the flight took place. These measurements are for four gauging stations going from north (below Cochiti) to south (San Marcial). There is a significant change in stream flow
going from north to south. The peaks might be attributed to the effects of precipitation events in the area or in the watershed. These peaks disappear going south due to the extensive water use upstream. Other statistics to note are the decrease in the water surface area as diversions occur, and the subsequent increase of wet soil/sand and submerged sand downstream resulting from these diversions. Rapid variations in river flows affect in-stream habitats and riparian vegetation. The river corridor downstream from the diversion dam has a much higher area in dry and wet sand and a lower water surface area.
Fig. 6: Section of the river in the Albuquerque area (left), where Cottonwood is the predominant class (dark green color), and classified image from San Antonio south (right), where Tamarisk is the major vegetation class in the area (light green)
Fig. 7: Mean daily stream flow in July 2001 at four locations along the Middle Rio Grande (below Cochiti Dam, Albuquerque, San Acacia and San Marcial)
Figure (8) shows the in-stream class area distribution along the river going from north to south. The total water surface area decreased as the river flowed from north to south. These statistics should aid policy makers in setting diversions and preserving in-stream flows to support the native riparian vegetation and river habitats.
Fig. 8: In-stream class area distribution obtained from the classified airborne images for the Middle Rio Grande River (classes: Run, Pool, Sand, Wet Soil/Wetland, Submerged Sand, Backwater and Riffle)
3.3 Accuracy Assessment Analysis
The contingency table for the four major vegetation classes shows that Coyote Willow had the highest classification producer's accuracy, at 92%, while Cottonwood had the highest user's accuracy, at 89%. The classification methodology identified Tamarisk and Russian Olives equally well, at 86% and 82% user's accuracy and 86% and 82% producer's accuracy, respectively. The reason for the high accuracy for Coyote Willow and Cottonwood is that they are easier to distinguish from other trees: cottonwoods are larger trees with larger shadows, while coyote willow is small in size, similar to a shrub, with almost no shadow. The ground truthing that was used to create this table was not used in the signature extraction process. The overall accuracy was calculated to be 88%. In summary, the classification accuracy was considered to be good and comparable to other studies using airborne multispectral imaging with spectral classification.
4.0 CONCLUSIONS AND RECOMMENDATIONS
Vegetation mapping using high resolution remote sensing gives a broad and comprehensive idea of the riparian zone health and condition along the river. In the case of the Middle Rio Grande, the vegetation classification image maps will help decision makers to study and identify problems that affect the river system. This map will also provide a base from which to monitor the riparian vegetation in the future and provide the basis for detection of change resulting from any management plan applied to the river corridor with the aim of protecting and restoring the river ecosystem. The paper presents some examples of the use of airborne multispectral digital images for riparian system mapping as well as the meso-scale hydraulic features along the middle Rio
Grande River, located in New Mexico, USA. Digital images from these systems are well suited for image processing and spectral classification of vegetation types and densities. The images can be easily incorporated and analyzed within a GIS environment. Georeferenced images can also be used for geo-morphological studies and physical measurements within the riparian zone. The image resolution (0.5-meter pixels) is selected according to the size of the riparian system of interest. Future monitoring of the river using high resolution airborne remote sensing will aid in the detection of escalating problems of water quantity, quality and availability for riparian vegetation and other habitats. High resolution remote sensing can aid in detecting changes in vegetation due to natural causes or to control practices on introduced vegetation such as Tamarisk. The effectiveness of new Tamarisk control methods using a beetle imported from Asia could be assessed with a future flight of the river.
REFERENCES
Bartz, K., Kershner, J. L., Ramsey, R. D. and Neale, C. M. U. (1994). Delineating riparian cover types using multispectral, airborne videography. Pages 58-67 in C. M. U. Neale, editor, Proceedings of the 14th Biennial Workshop on Color Aerial Photography and Videography for Resources Monitoring, May 1993, Logan, Utah. American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland.
Crosby, G. S., Neale, C. M. U. and Seyfried, M. (1999). Vegetation parameter scaling on a semiarid watershed. Proc. 17th Biennial Workshop on Color Photography and Videography in Resource Assessment, May 5-7, 1999, Reno, Nevada. American Society of Photogrammetry and Remote Sensing, and Department of Environmental and Resource Sciences, University of Nevada, Reno, Nevada, pp. 218-222. Edited by Paul T. Tueller.
Neale, C. M. U. & Crowther, B. G. (1994). An airborne multispectral video/radiometer remote sensing system: development and calibration. Remote Sensing of Environment, 49.
Neale, C. M. U. (1997). Classification and mapping of riparian systems using airborne multispectral videography. Journal of Restoration Ecology, Vol. 5, No. 45, pp. 103-112.
Neale, C. M. U. (1991). An airborne multispectral video/radiometer remote sensing system for agriculture and environmental monitoring. ASAE Symposium on Automated Agriculture for the 21st Century, December 16-17, 1991, Chicago, Illinois.
Redd, T., Neale, C. M. U. and Hardy, T. B. (1994). Classification and delineation of riparian vegetation on two western river systems using airborne multispectral video imagery. Pages 202-211 in C. M. U. Neale, editor, Proceedings of the 14th Biennial Workshop on Color Aerial Photography and Videography for Resources Monitoring, May 1993, Logan, Utah. American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland.
Sundararaman, S. and Neale, C. M. U. (1997). Geometric calibration of the USU videography system. Proceedings of the 16th Biennial Workshop on Videography and Color Photography in Resource Assessment, April 29 - May 1, 1997, Weslaco, Texas. American Society of Photogrammetry and Remote Sensing, USDA Subtropical Agricultural Research Laboratory.
CHAPTER EIGHT
ICT AND MATHEMATICAL MODELLING
2-D HYDRODYNAMIC MODEL FOR PREDICTING EDDY FIELDS
A. M. EL-Belasy, HRI, National Water Research Center, Delta Barrage, Egypt
M. B. Saad, First Under Secretary of the Ministry of Water Resources & Irrigation, Egypt
Y. I. Hafez, NRI, National Water Research Center, Delta Barrage, Egypt
ABSTRACT
A 2-D mathematical model was modified at the Hydraulic Research Institute to predict eddy formation and to determine eddy dimensions and back velocity. The model includes a module for varying turbulent viscosity to obtain a better representation of the turbulence phenomena occurring downstream of hydropower structures. Irregular boundaries can be represented very well by the modified model. The modified model was applied to the experiments of Jain et al (1988) for a hydropower structure with a navigation installation. In this phase the predicted flow velocity gave good agreement with the measured data in terms of matching the eddy length and back velocity. As well, the model was tested by comparing the results with experimental runs carried out in the Hydraulic Research Institute (MWRI) for the Esna and Naga Hammadi Barrages. The results simulated well the formation of large eddies and back flow.
Keywords: 2-D mathematical models; flow field; eddies; back velocity; turbulent viscosity.
1.0 INTRODUCTION
Most hydropower structures on a river consist of a powerhouse, a sluiceway and a navigational lock. The existence of these components, along with the operational scheme of the structure, induces a flow field with high turbulence intensity and complicated eddies, which cause problems for navigation and may cause erosion and sedimentation that can threaten the stability of structures. Mathematical models are needed to study how to minimize the negative effects of such flow fields. The high turbulence is introduced when most of the river flow is diverted through the powerhouse, thus creating a jet with very high average velocities, which might be of the order of 2-4 m/s. It is customary to locate the
powerhouse on the opposite side to the navigational lock in order to avoid the reverse flow coming from the powerhouse jet into the navigational lock. When the river flow is released through the powerhouse near the bank side, the skewness of the jet discharge creates a large eddy with reverse flow just downstream of the lock, Fig. 1. This reverse flow creates problems for the navigational units coming out of the lock. To cope with this back flow, the guide walls of the lock are usually constructed in a way such that they reduce the reverse flow or its effect as much as possible. Investigators have resorted to physical and/or mathematical models in order to investigate the nature of the eddy structure downstream of hydropower structures and to find solutions for reducing its effect. The present study aims to modify a 2-D mathematical model for predicting eddy formation and dimensions, in addition to back velocity, and to introduce turbulent viscosity as a function of space. The present study also aims to compute the 2-D flow pattern around the hydraulic structures. The
study phases thus proceed by:
- Reviewing the available mathematical models
- Modifying the hydrodynamic model of Molinas & Hafez (2000)
- Applying the modified model
Fig. 1: Formation of a large eddy just downstream of a lock (after Jain et al, 1988). The sketch shows the lock, the navigation channel, the spillway, the powerhouse, the non-overflow dam and the eddy with reverse flow, with a scale of 0-100 m.
2.0 AVAILABLE MATHEMATICAL MODELS
Determination of the flow distribution around hydraulic structures is an important aspect of their protection and safety. In general, flow around hydraulic structures can be numerically simulated through the use of 2-dimensional or 3-dimensional models. 2-dimensional models have already been used for a long time for tidal flow in seas and estuaries. They are also used for quasi-steady flows in river flow computations. As for harbors and bays, several models are available. Among them is the model of Kuipers & Vreugdenhil (1973). This model uses a 2-dimensional depth-averaged model for un-
steady free surface flow to predict steady recirculating flows. The model neglects the turbulent transport terms in the equations. However, owing to a smoothing procedure introduced to obtain numerical stability in their centred difference scheme, terms which exert a diffusive action are effectively introduced. This is physically unreasonable, since the diffusion present in the numerical solution depends only on the smoothing coefficient used. On the other hand, many models are available for predicting eddy dimensions and the flow field in curved channels, such as: McGuirk & Rodi (1978), Booij (1989), Yu & Zhang (1989), Chapman & Kuo (1985), Yee-Chung & Peter (1993), Bravo & Holly (1996), Lien, Hsieh & Yang (1999), Ouillon & Dartus (1997) and Molinas & Hafez (2000). These models underestimated the predicted size of the recirculation zone. From the above-mentioned survey of 2-D models, it can be concluded that most of the 2-D models are depth-averaged ones. As for jet flows, the jet induces a velocity field that causes a large surface eddy with high reverse flow. This causes a problem for the navigational units. These surface currents are required to be predicted, and they cannot be predicted by depth-averaged models. Therefore, a surface model needed to be modified in order to simulate jet and curved flows (circulation). The model of Molinas & Hafez (2000) was thus chosen to be modified.
3.0 THE HYDRODYNAMIC MODEL
The differential governing equations (Molinas & Hafez, 2000) are written in Cartesian X-Y coordinates, where the X-direction is in the main flow direction and the Y-direction is in the lateral direction, as shown in Fig. (2).
Fig. 2: Cartesian X-Y coordinate directions
The complete equations of motion for a viscous fluid are known as the Reynolds-averaged equations. It is assumed that the fluid is incompressible and obeys the Newtonian shear stress law, whereby viscous force is linearly related to the rate of strain. For two-dimensional steady incompressible flows, the governing flow hydrodynamics equations are the equation for conservation of mass and the equations for conservation of momentum. The conservation of mass equation takes the form of the continuity equation, while Newton's equations of motion in two dimensions express the conservation of momentum. The continuity equation is
$$\frac{\partial U}{\partial X} + \frac{\partial V}{\partial Y} = 0 \qquad (1)$$
The momentum equation in the longitudinal (X) direction is given by
$$U\frac{\partial U}{\partial X} + V\frac{\partial U}{\partial Y} = -\frac{1}{\rho}\frac{\partial P}{\partial X} + \frac{\partial}{\partial X}\left(2\nu_e\frac{\partial U}{\partial X}\right) + \frac{\partial}{\partial Y}\left[\nu_e\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)\right] + F_x - \frac{\tau_{fx}}{\rho} \qquad (2)$$
The momentum equation in the lateral (Y) direction is given as:
$$U\frac{\partial V}{\partial X} + V\frac{\partial V}{\partial Y} = -\frac{1}{\rho}\frac{\partial P}{\partial Y} + \frac{\partial}{\partial X}\left[\nu_e\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)\right] + \frac{\partial}{\partial Y}\left(2\nu_e\frac{\partial V}{\partial Y}\right) + F_y - \frac{\tau_{fy}}{\rho} \qquad (3)$$
where U = longitudinal surface velocity; V = lateral surface velocity; P = mean pressure; ν_e = kinematic eddy viscosity; F_x = body force in the X direction = g sin θ; F_y = body force in the Y direction = 0.0; g = gravitational acceleration; θ = average water surface slope; ρ = fluid density; τ_fx = turbulent frictional stress in the X-direction; and τ_fy = turbulent frictional stress in the Y-direction. The governing equations of the mathematical model were modified to meet the objective of the study by adding a new module for the kinematic eddy viscosity. The kinematic eddy viscosity is assumed to be a function of the velocity gradients, or more precisely of the shear and normal turbulent stresses, as in equation (5). Equation (5) is therefore important for modelling the turbulence, compared with constant turbulent viscosity models as in Molinas & Hafez (2000) and Bravo & Holly (1996). The kinematic eddy viscosity is calculated according to Smagorinsky (1963) as:
$$\nu_e = C\,dx\,dy\left[\left(\frac{\partial U}{\partial X}\right)^2 + \left(\frac{\partial V}{\partial Y}\right)^2 + \frac{1}{2}\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)^2\right]^{1/2} \qquad (4)$$
Equation (4) is modified in the model to become
$$\nu_e = C\,dx\,dy\left[\left(\frac{\partial U}{\partial X}\right)^2 + \left(\frac{\partial V}{\partial Y}\right)^2 + \frac{1}{2}\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)^2\right]^{1/2} + \nu_{BG} \qquad (5)$$
The parameter C is equal to 0.1 (Smagorinsky, 1963), dx dy is the area of the element, and ν_BG is a background turbulent viscosity that accounts for the turbulence generated by the bed and transported by the mean flow vertical velocity gradient. The two frictional stress terms (τ_fx, τ_fy) were evaluated at the water surface by Molinas & Hafez (2000) as shown below:
$$\frac{\tau_{fx}}{\rho} = \frac{k\,U\,|V|^{1+m}}{H} \qquad (6)$$
$$\frac{\tau_{fy}}{\rho} = \frac{k\,V\,|V|^{1+m}}{H} \qquad (7)$$
where H is the flow depth and k and m are friction coefficients.
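The varying-viscosity module of equation (5) can be sketched as follows on a uniform grid (the grid size, velocity field and the background viscosity value are illustrative assumptions; the actual model evaluates these terms on the finite element mesh):

```python
import numpy as np

def eddy_viscosity(U, V, dx, dy, C=0.1, nu_bg=0.5):
    """Varying turbulent viscosity of Eq. (5) on a uniform grid:
    a Smagorinsky-type term plus a background viscosity nu_bg
    (the nu_bg value used here is an assumed placeholder)."""
    dUdY, dUdX = np.gradient(U, dy, dx)   # axis 0 = Y, axis 1 = X
    dVdY, dVdX = np.gradient(V, dy, dx)
    strain = np.sqrt(dUdX**2 + dVdY**2 + 0.5 * (dUdY + dVdX)**2)
    return C * dx * dy * strain + nu_bg

# Example: a simple lateral shear field on a 50 x 50 grid, 10 m cells.
y = np.linspace(0.0, 490.0, 50)
U = np.tile(0.004 * y, (50, 1)).T   # U increases across the channel
V = np.zeros_like(U)
nu = eddy_viscosity(U, V, dx=10.0, dy=10.0)
print(float(nu.min()), float(nu.max()))
```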
In this study, the numerical technique used to solve the governing equations is based on the Galerkin finite element method (FEM). Application of the finite element method begins by dividing the water body being modelled into elements. Quadrilateral elements are used, since this shape can be easily arranged to fit complex boundaries. The elements are defined by a series of node points at the element vertices. Following the Galerkin method, values of the dependent variables are approximated within each element using the nodal values and a set of interpolation functions. Substituting the approximations of the dependent variables into the governing equations results in a set of integral equations. These equations are integrated over each element using four-point Gaussian quadrature. The contributions of all element integrations are added together to obtain a global matrix, the solution of which represents the finite element approximation of the boundary value problem. Due to the presence of the inertia terms, the governing equations are nonlinear; therefore the global matrix representing these terms is also nonlinear. Because of this nonlinearity, the numerical solution is obtained by assuming initial values for the variables and iterating. The initial values assumed for the two velocity components and the pressure term were zero at all interior nodes. Gauss forward elimination and back substitution techniques are used to solve the systems of equations. After each iteration the solution vector is updated using:
$$U^{n+1} = U^{n} + \theta\,[\Delta U]^{n+1} \qquad (8)$$
where θ is the relaxation coefficient, and the superscripts (n) and (n+1) refer to the iteration counter. The iteration process is continued until the maximum difference between two successive iterations across all the nodes of the mesh is less than a specified tolerance. The iterative penalty concept is used to enforce the constraint of incompressibility. In this approach, the non-hydrostatic pressure is considered as an implicit variable adjusting itself to enforce the incompressibility constraint. In the iterative penalty concept, the pressure is expressed as (Zienkiewicz, 1989):
$$P^{n+1} = P^{n} - k\left(\frac{\partial U}{\partial X} + \frac{\partial V}{\partial Y}\right) \qquad (9)$$
where k is the penalty parameter and n is the iteration counter. Substituting the pressure expression of Eq. (9) into Eqs. (2) and (3) indirectly enforces the conservation of mass condition. The continuity equation (1) may then be omitted from the set of governing equations, reducing the number of simultaneous equations from three to two, and therefore improving the computational efficiency. After solving the system of equations and obtaining the velocities U and V, the pressure value is updated according to Eq. (9).
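As a toy illustration of the relaxed update of Eq. (8): a scalar fixed-point problem stands in here for the linearized finite element solve; it is not the model's actual system.

```python
import numpy as np

# Relaxed fixed-point iteration in the spirit of Eq. (8):
# g(u) stands in for one linearized "solve" of a nonlinear problem.
def g(u):
    return np.cos(u)   # toy problem: find u with u = cos(u)

theta, tol = 0.5, 1e-10   # relaxation coefficient and tolerance
u = 0.0                   # zero initial value, as in the model
for n in range(200):
    du = g(u) - u             # correction from the "solve"
    u += theta * du           # U^(n+1) = U^n + theta * [dU]^(n+1)
    if abs(du) < tol:
        break
print(n, u)   # converges to the fixed point u ~ 0.739
```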
As for the boundary conditions, uniform longitudinal and lateral velocities (U and V) are prescribed at the upstream boundary. Fully developed flow conditions are applied for the longitudinal velocity at the downstream boundary. For the lateral velocity equation, the lateral boundary shear is set to zero. For the channel walls, the no-slip boundary conditions are applied (U = 0.0, V = 0.0).
4.0 MODIFIED MODEL APPLICATION TO JAIN ET AL (1988) EXPERIMENTS
Jain et al (1988) investigated experimentally the flow fields and navigation conditions induced by hydropower releases, see Fig. 1. They used undistorted models to identify the size of eddies and the back velocity in a typical lock and dam installation. With release from the powerhouse and no flow over the spillway, a large eddy is formed, as sketched in Fig. 1. The size of the eddy was taken herein as the distance between its downstream and upstream ends, identified by the letters C and D in Fig. 1. The upstream velocity of the eddy U was specified by the average velocity over the distance CD. The measured values of the eddy size and the back velocity were L = 165 m and U = 0.8 m/s.
It was found that the experiments of Jain et al (1988) provide valuable data sets. In addition, the application of the Bravo & Holly (1996) model provides a valuable asset for comparison with the suggested-model results. The domain of the experiments of Jain et al (1988) is divided into a finite element mesh as shown in Fig. 3. The total number of elements was 3940, which produced 4086 nodes using the four-node element. This number of nodes is nearly twice the number of grid points used by Bravo & Holly (1996). The complex nature of the eddy structure dictates that as fine a resolution as possible is needed. In the context of constructing the finite element mesh, the domain of the study was initially divided into trapezoidal loops or macro-elements, where a total of 34 loops were used. These loops are divided into elements. The source model of Molinas & Hafez (2000), before modification, was applied to the experiments of Jain et al (1988); in that model the eddy viscosity is constant. Values of 10 m²/s, 5 m²/s and 2.5 m²/s for the global eddy viscosity were used in this study. Usseglio-Polatera & Schwartz-Benezeth (1987) suggested using a value ν = 10 m²/s for flows with high vorticity zones. Eddy viscosity in natural open channels can be related to the bed shear velocity and depth (Rodi, 1982) by
$$\nu_e = \nu_o + c_1 U_* H \qquad (10)$$
where ν_o = base kinematic eddy viscosity, U_* = bed shear velocity, H = flow depth, and c_1 = dimensionless coefficient, approximately equal to 0.6 in natural channels. A constant eddy viscosity is assigned by specifying c_1 = 0.0 and ν_o > 0.0. Following equation (10), the second and third values (5 m²/s and 2.5 m²/s) were selected. The discharge given to the model was 745 m³/s from the hydropower structure, and the average water depth was taken as 4.74 m. The Darcy-Weisbach friction factor was taken equal to 0.05.
Fig. 3: Finite element mesh of the experiments of Jain et al (1988)
The results showed that the calculated recirculation in the main backflow eddy is underpredicted with a value ν = 10 m²/s: the computed eddy length was L = 95 m (approximately 58% of the measured value) and the averaged back velocity (Fig. 1) was U = 0.1 m/s (approximately 12.5% of the measured value). The results obtained using the values ν = 5 m²/s and 2.5 m²/s are shown in Table 1.
Table 1: Comparison between the measured averaged back velocity and eddy length of Jain et al (1988) and those computed by Bravo & Holly (1996), Molinas & Hafez (2000) and the modified model
Parameter | Jain et al (1988) experiment | Bravo & Holly (1996) | Molinas & Hafez (2000), ν = 10 m²/s | ν = 5 m²/s | ν = 2.5 m²/s | Modified model
Averaged back velocity (m/s) | 0.80 | 0.5 | 0.1 | 0.28 | 0.35 | 0.65
Eddy length (m) | 165 | 150 | 95 | 195 | 225 | 160
The modified model was applied to the experiments of Jain et al (1988) using the same finite element mesh as was used in the application of the source model of Molinas & Hafez (2000). Fig. 4 shows a vector plot of the two-dimensional velocity field obtained from the modified model. In this figure the jet flow generates two eddies with different sizes and strengths. The weaker eddy, with smaller size and strength, lies below the jet stream and is confined by the channel walls. It has been generated by the shearing action of the jet stream on the nearly stagnant water in this region. The modified model succeeded in picking up the details of this eddy, which did not appear in the experimental data. The eddy length simulated by the modified model was 160 m, while the predicted averaged back velocity (Fig. 1) was 0.65 m/s. These values are to be compared with the measured eddy length of 165 m and back flow velocity of 0.8 m/s, while the corresponding Bravo & Holly values are 150 m and 0.5 m/s. Therefore, the modified model's prediction of the general eddy parameters (length and back flow velocity) can be considered reasonable based on the experimental data. Velocity profiles at three cross-sections were predicted and compared to the experimental data of Jain et al (1988) and the predictions by Bravo & Holly (1996). The locations of the measured cross-sections 1, 2 and 3
are shown in Fig. 5, and the comparisons between velocity profiles are shown in Fig. 6. It is clear from Fig. 6 that the velocity profiles computed by the modified model are close to both the experimental data and the Bravo & Holly model predictions. The model was also run to compare the results with experimental runs carried out in the Hydraulic Research Institute for the Esna and Naga Hammadi Barrages. The results simulated well the formation of large eddies and back flow.
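The eddy parameters reported above can be extracted from a computed velocity profile along the line CD of Fig. 1 in the following manner (the profile here is synthetic, for illustration only):

```python
import numpy as np

# Synthetic longitudinal velocity along the line CD of Fig. 1;
# negative u marks the reverse flow inside the eddy.
x = np.linspace(0.0, 400.0, 801)          # distance along CD (m)
u = np.full_like(x, 0.1)                  # weak forward flow elsewhere
mask = (x >= 40.0) & (x <= 200.0)
u[mask] = -0.8 * np.sin(np.pi * (x[mask] - 40.0) / 160.0)

reverse = u < 0.0                         # reverse-flow region C-D
idx = np.where(reverse)[0]
eddy_length = x[idx[-1]] - x[idx[0]]      # eddy size L
back_velocity = np.abs(u[reverse]).mean() # averaged back velocity U
print(f"L = {eddy_length:.0f} m, U = {back_velocity:.2f} m/s")
```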
Fig. 4: Velocity field obtained from the modified model. The vector plot shows the lock, the spillway and the non-overflow dam over the 800.0 m reach, together with the measured cross-sections CS No. 1, CS No. 2 and CS No. 3.
[Figure residue removed: velocity-distribution plots at CS No. 1 and CS No. 2, plotting velocity against distance (m) for the experiments, Bravo and Holly (1996), and the developed model]

Fig. 5: Locations of measured cross-sections

Fig. 6: The model predictions along with the experimental data of Jain et al. (1988) and predictions by Bravo & Holly (1996)
3.0 SUMMARY AND CONCLUSIONS
A two-dimensional mathematical model was modified to include a module for varying turbulent viscosity, which can be used to achieve better identification and prediction of the location and size of eddies and of the back velocity around hydraulic structures. This enables the designer to solve navigation difficulties and to improve the design of hydraulic structures. The modified two-dimensional mathematical model was tested against the experiments of Jain et al. (1988) for a hydropower structure with a navigation installation. The predicted flow velocity gave good agreement with the measured data in terms of matching the eddy length and back velocity. The application of the Bravo & Holly (1996) model to the experiments of Jain et al. (1988) provides a valuable benchmark for comparison with the suggested-model results. The conclusions that can be deduced from the present study are:
1. Models with a constant eddy viscosity value give underpredicted values for eddy length and back velocity downstream of hydropower structures.
2. Introducing a module for varying turbulent viscosity gives reasonable results for the circulating eddies and back flow downstream of hydropower structures.
REFERENCES
Bravo, H. R. & Holly, F. M. (1996), Turbulence model for depth-averaged flow in navigation installations, Journal of Hydraulic Engineering, ASCE, 122(12).
Booij, R. (1989), Depth-averaged k-ε modelling, 23rd IAHR Congress, Ottawa, Canada, pp. A-199-A-206.
Chapman, R. S. & Kuo, C. Y. (1985), Application of the two-equation k-ε turbulence model to a two-dimensional, steady, free surface flow problem with separation, International Journal for Numerical Methods in Fluids, Vol. 5, pp. 257-268.
Lien, H. C., Hsieh, T. Y. & Yang, J. C. (1999), Bend flow simulation using 2-D depth-averaged model, Journal of Hydraulic Engineering, ASCE, Vol. 125, No. 10.
Jain, S. C., Bravo, H. R. & Kennedy, J. F. (1988), Evaluation and minimization of effect of hydroplant release on navigation, IIHR Rep., Iowa Inst. of Hydraulic Res., Univ. of Iowa, Iowa City, Iowa.
Kuipers, J. & Vreugdenhil, C. B. (1973), Calculations of two-dimensional horizontal flow, Rep. No. S 163, Part 1, Delft Hydr. Lab., Delft, The Netherlands.
Lien, H. C., Hsieh, Y. Y. & Yang, J. C. (1999), Use of two-step split-operator approach for 2-D shallow water flow computation, Int. J. Numer. Methods in Fluids, in press.
McGuirk, J. J. & Rodi, W. (1978), A depth-averaged mathematical model for the near field of side discharge into open channel flow, J. Fluid Mech., Vol. 86, Part 4, pp. 761-781.
Molinas, A. & Hafez, Y. I. (2000), Finite element surface model for flow around vertical wall abutments, Journal of Fluids and Structures, Vol. 14, pp. 711-733.
Ouillon, S. & Dartus, D. (1997), Three-dimensional computation of flow around groins, Journal of Hydraulic Engineering, ASCE, 109(11).
Rodi, W. (1982), Hydraulics computations with the k-ε turbulence model, in Smith, P. E. (Ed.), Proceedings of the Conference of the Hydraulics Division of the American Society of Civil Engineers, Jackson, Miss., 1982.
Smagorinsky, J. (1963), General circulation experiments with the primitive equations. I: The basic experiment, Monthly Weather Rev., 91, 99-164.
Usseglio-Polatera, J. M. & Schwartz-Benezeth, S. (1987), "CYTHERE program. User guide", Sogreah, Grenoble, France.
Jin, Y.-C. & Steffler, P. M. (1993), Predicting flow in curved open channels by depth-averaged method, Journal of Hydraulic Engineering, ASCE, 119, No. 1.
Yu, L. R. & Zhang, S. N. (1989), A new depth-averaged two-equation (k-ε) turbulence closure model, Proc. 3rd International Symposium on Refined Flow Modelling and Turbulence Measurements, Tokyo, Japan, July 1988, pp. 549-555.
Zienkiewicz, O. C. (1989), The Finite Element Method, Fourth Edition, Vol. 1, New York: McGraw-Hill.
SUSTAINABILITY IMPLICATIONS OF UBIQUITOUS COMPUTING ENVIRONMENT
Manish Shrivastava and Donart A. Ngarambe, Department of CELT, Kigali Institute of
Science, Technology and Management, Kigali, Rwanda
ABSTRACT
In a ubiquitous computing environment, a person might interact with hundreds of computers at a time, each invisibly embedded in the environment and wirelessly communicating with the others. The vision of ubiquitous computing is to make computers available in everyday objects; it is a new kind of relationship between people and computers. As ubiquitous computing advances, it will bring significant opportunities and threats for social and environmental sustainability. Many sustainability questions arise, such as: How will ubiquitously available computing systems affect the ecological balance? What happens to society when there are hundreds of invisible microcomputers communicating with each other? What are the implications for social sustainability? This paper explores theoretical issues of the social, environmental and ethical implications of the ubiquitous computing environment for sustainable development.
Keywords: Sustainable Development, Ubiquitous Computing, Ubiquitous Society
1.0 INTRODUCTION
Ubiquitous computing refers to a new vision of applying information and communication technologies to our daily lives. It involves the miniaturization and embedding of hundreds of invisible computers in everyday objects that communicate with each other using wireless networking, thus making computers ubiquitous in the world around us. Owing to the ubiquitous computing environment, the world is moving towards a ubiquitous society where people can access and operate their computing devices anywhere, anytime. The key devices involved in building a ubiquitous computing environment are mobile and smart phones, PDAs (Personal Digital Assistants) and hand-held devices, sensors, wearable computers, etc. The history of computing has been associated with paradigm shifts in the relationship between humans and computers. As Patrick McKeown wrote: "More than 25 years ago, the first personal computer changed the way people thought about computing. One negative aspect of the use of personal computers relates to the location of the resulting data and information on a single computer located in a home or office, because people often want data and information on computers other than their personal computers. Instead of having a personal computer, people want to have personal information available to them on any kind of machine, no matter where they are working" McKeown (2003).
As advances in ubiquitous computing increase, the field moves away from being purely technology-driven and towards a more human-centric perspective. It is a new kind of relationship between people and computers, in which computers will be invisibly embedded in everyday objects and will support people in their daily lives. If our lives are to be surrounded and supported by these miniature computing devices, designers of ubiquitous computing systems must take into account the potential social, ecological and ethical impact of their systems. One must ask whether these technologies might not have undesirable side effects on human health. Will sustainable development be supported or not? How will ubiquitously available computing systems affect the ecological balance? What about social sustainability if consumers' privacy and freedom of choice are threatened? The more ubiquitous computers become in society, the larger these concerns will become. This paper explores such issues of the ubiquitous computing environment for sustainable development.
2.0 THE PARADIGM OF UBIQUITOUS COMPUTING
Ubiquitous computing is referred to as the "third paradigm" of computing. First were mainframes, where one computer was shared by many people. Second is the personal computing era, where one computer is used by a single person. Next comes ubiquitous computing, where one person can use many computers. As Weiser and Brown wrote: "The third wave of computing is that of ubiquitous computing, whose cross-over point with personal computing will be around 2005-2020" Weiser et al. (1996). This emerging paradigm is a result of the rapid advancements in the ongoing miniaturization of electronic circuits and the corresponding exponential increase in embedded computational power. The increasing miniaturization of computer technology will make it possible to integrate small processors and tiny sensors into more and more everyday objects, leading to the disappearance of traditional PC input and output media such as keyboards, mice and screens. Instead, we will communicate directly with our clothes, watches, pens and furniture, and these objects will communicate with each other and with other people's objects. This era was once described by former IBM chairman Lou Gerstner as "A billion people interacting with a million e-businesses through a trillion interconnected intelligent devices" Gerstner (2000).
2.1 What is Ubiquitous Computing?
In Latin, the word 'ubiquitous' means "God exists everywhere simultaneously". Webster defines ubiquitous as "existing or being everywhere at the same time". Ubiquitous computing has roots in many aspects of computing. In its current form, it was first articulated by Mark Weiser at the Computer Science Lab at Xerox PARC. He described it as follows: "Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user" Weiser (1993). Weiser suggests, "Computing technology will become so specialised and well integrated into our physical world that we will no longer be aware of it in itself, just as we would
now not particularly think of the pen or pencil technology that we use when writing some notes on a sheet of paper" Weiser (1991). Rick Belluzo, general manager of Hewlett-Packard, compared ubiquitous computing to electricity, calling it "the stage when we take computing for granted. We only notice its absence, rather than its presence" Amor (2001).
2.2 The Technology Trends
Ubiquitous computing comprises a broad and dynamic spectrum of technologies. Two of the most common placeholders for these devices are personal technologies and smart environments.
Personal Area Network (PAN): An interconnection of personal technology devices that communicate over a short distance (less than 33 feet or 10 meters, i.e. within the range of an individual person), typically using some form of wireless technology. Some of these technologies are:
• Bluetooth technology: The idea behind Bluetooth is to embed a low-cost transceiver chip in each device, making it possible for wireless devices to be totally synchronized without the user having to initiate any operation. The chips communicate over a previously unused radio frequency at up to 2 Mbps. The overall goal of Bluetooth might be stated as enabling ubiquitous connectivity between personal technology devices without the use of cabling, as written in McKeown (2003a).
• High-rate W-PANs: As per standard IEEE 802.15 TG3, launched in 2003, these technologies use higher-power devices (8 dBm) than regular Bluetooth equipment (0 dBm) to transmit data at a rate of up to 55 Mbps and over a range of up to 55 m, Ailisto et al. (2003).
• Low-power W-PANs: As per standard IEEE 802.15 TG4, these technologies are particularly useful for handheld devices since energy consumption for data transmission, and costs, are extremely low. The range of operation of up to 75 m is higher than for current Bluetooth applications, but the data transfer rate is low (250 Kbps), Ailisto et al. (2003).
Body Area Network (BAN): Wireless body area networks interlink various wearable computers and can connect them to outside networks, exchanging digital information using the electrical conductivity of the human body as a data network. Advantages of BANs over PANs are the short range and the resulting lower risk of tapping and interference, as well as low-frequency operation, which leads to lower system complexity. Technologies used for wireless BANs include magnetic, capacitive, low-power far-field and infrared connections, Raisinghani et al. (2004).
Sensors and Actuators: Sensors are essential in capturing physical information from the real world, and different types of sensors are needed for different phenomena. These devices collect data about the real world and pass it on to the computing infrastructure to enable decision-making. They can detect and measure mechanical phenomena of the user such as movements, tilt angle, acceleration and direction. Actuators provide the output
direction from the digital world to the real world. These devices allow a computing environment to effect changes in the real world.
Smart Tags: Smart tags contain microchips and wireless antennas that transmit data to any nearby receiver acting as a reader. Beyond just computing a price, smart tags will enable companies to track a product all the way. New tags can recognize more than 268 million manufacturers, each with more than 1 million products. They use a radio frequency identification (RFID) system, which encompasses wireless identification through radio transmission.
3.0 SUSTAINABLE DEVELOPMENT
The most widely cited definition of sustainable development was given by the World Commission on Environment and Development in 1987: in order to be considered sustainable, a pattern of development has to ensure "that it meets the needs of the present without compromising the ability of future generations to meet their own needs" (WCED, 1987). The world summits on environment and development in Rio de Janeiro in 1992 and in Johannesburg in 2002 have shown that the goal of attaining sustainable development has become a predominant issue in international environmental and development policy. According to the UN statement following the 'Rio+5' event in 1997: "Economic development, social development, and environmental protection are interdependent and mutually reinforcing components of sustainable development. Sustained economic growth is essential to the economic and social development of all countries, in particular developing countries" (retrieved from http://www.ecouncil.ac.cr/rio/susdevelopment.htm). It is widely accepted that sustainability has an environmental, a social and an economic dimension. These sustainability dimensions are (Ducatel et al., 2005):
• Personal physical and psychological sustainability: Can it reduce (mental) health risks from information stress, virtual identities and information overload? What precautionary evaluation is needed to avoid new health impacts of these pervasive electronic radiations?
• Socio-economic sustainability: Digital divides emerging from unequal development of, and access to, the ubiquitous infrastructure could be related to income, education and skills, age and work.
• Environmental sustainability: There are pressures created by new growth and the material wealth associated with these technologies. The embedding of computers implies a considerable extension of recycling and reclamation of electronic waste.
4.0 SOCIAL IMPLICATIONS
The omnipresence of computing power and its widespread use has begun to affect our everyday lives in many ways we do not even notice. The sustainability-related opportunities and risks of ubiquitous computing for society are illustrated here.
4.1 Social Opportunities
Personal Empowerment
There are two main motives for personal augmentation: the first is to overcome physical disabilities, and the second is to augment the capabilities of a normal healthy person. Using wearable computers and sensors, individual sensory and physical capabilities can be significantly enhanced. Access to information and knowledge will work more efficiently under ubiquitous computing: access will be possible everywhere and anytime (pervasiveness), and will depend upon one's location and local environment (context sensitivity). Improvements in both the physical and mental performance of a human being can be achieved by providing a smart working environment, and several concepts are in development that could increase an individual's work efficiency and personal productivity.
Protection
As RFID is intended to be used for unique identification of real-world objects (e.g., items sold in supermarkets), using RFID transponders in the form of "smart labels" will probably become the first and most widespread example of ubiquitous computing. With "smart labels" it will be much easier to protect goods from theft or imitation.
4.2 Social Risks
Consumer Freedom of Choice
As more and more objects and environments are equipped with ubiquitous technology, the degree of our dependence on the correct, reliable functioning of the deployed devices and microcomputers, including their software infrastructures, increases accordingly. Today, in most cases, we are still able to decide for ourselves whether we want to use devices equipped with modern computer technology or not. But in a largely computerized future it might not be possible to escape this sort of technologically induced dependence, which leads to a number of fundamental social challenges, Bohn et al. (2004). Moreover, a loss of competition among service providers may occur if proprietary de facto standards continue to play a significant role in the computer economy. As a result, the consumer may lose the power to decide which ICT products or ICT services he uses and what price he pays, Koehler and Som (2005).
Knowledge Sustainability
Most information in our everyday life today remains valid for an extended period of time, e.g. food prices in our favourite supermarket, or prices for public transport. Using acquired knowledge and prior experience, individuals manage future situations and tasks. In a highly dynamic world, an experience that was valid and useful one minute could become obsolete and unusable the next. For example, using mobile phones, individuals do not remember most phone numbers, and the numbers change very frequently; moreover, if a mobile phone is not working, it becomes very difficult to contact anyone. Such a loss of knowledge sustainability could, in the long term, contribute to increased uncertainty and a lack of direction for people in society.
Impact on Privacy
One major characteristic of ubiquitous computing technology is that it has a significant impact on the social environments in which it is used. Although data protection/privacy is not a new problem, ubiquitous computing introduces a new privacy risk because timely and accurate location data for an individual (both real-time and historical) become available. Because location management is part of such an environment, it can also be used to intrude on people's privacy. Some users may be uncomfortable with the ability of a ubiquitous computing system to obtain their locations at any time. For example, assume your apartment is outfitted with all kinds of sensors feeding a ubiquitous computing system that could help you manage life-threatening situations, such as fires. A big concern, however, would be how the collected data is used: would you want the neighbourhood police station to be able to monitor which room you are currently in and how much alcohol you are consuming? Gathering data of any kind irrevocably leads to privacy concerns.
Psychological Stress
Apart from the privacy implications, the ubiquity of sensors may also lead to psychological unease on the part of users. The constant feeling of observability, as generated by the perpetual presence of certain sensors, can lead to undesirable psychological feelings and unease about the sensor-laden environment. The old sayings that 'the walls have ears' and 'if these walls could talk' would become a disturbing reality. Ubiquitous computing can also indirectly influence the user's behaviour and the social context encountered, through poor usability, disturbance and distraction, the feeling of being under surveillance, the possible misuse of technology for criminal purposes, and increased demands on individuals' productivity. Stress has many side effects on health.
5.0 ENVIRONMENTAL IMPACTS
Generally, most ubiquitous pervasive computing devices will have one significant environmental advantage over traditional computers: they are physically smaller and inherently consume less material. On the other hand, they have many other disadvantages for our environment in terms of raw material consumption, energy consumption and disposal. Their low cost will encourage rapid replacement, and in addition their small size, weight, embedding in other materials and overall design for ubiquity will disperse them widely. Ecological sustainability will be influenced in the following ways:
Resource Consumption
Intel expects that semiconductor technology will develop continuously towards a design geometry of 22 nanometers within the coming ten years, without a general change in material composition. However, due to the increasing number of components in use, the total material and energy consumption caused by the production of electronic goods is still expected to accelerate global resource depletion. Furthermore, the trend toward throwaway electronics caused by price reductions will shorten the average service life of electronic devices and components in general. For these reasons, a
reduction of the total demand for raw materials by the ICT sector can be anticipated only in the moderate scenario, Koehler and Som (2005). Due to the use of low-power batteries there is great potential for power savings; on the other hand, these batteries often contain heavy metals and are an environmental hazard in themselves, since such devices run on batteries rather than fixed AC power.
End-of-Life Treatment
Another environmental risk of ubiquitous computing is the release of pollutants caused by the disposal of the resulting waste. Service life is an essential parameter of the waste generated by ICT products: halving service life means doubling the resource use for production and doubling the amount of waste disposed of per service unit. Disposable versions of some devices, such as disposable cell phones, will soon emerge. Through this effect, ubiquitous computing could indirectly contribute to an increasing demand for raw materials and an increasing amount of waste. End-of-life treatment of ubiquitous computing will have to deal with large numbers of small electronic components embedded in other products. More and more microelectronic throwaway products, including rechargeable batteries, will be found in waste streams outside that of electronic waste (packaging, textiles). As a consequence, the risk of uncontrolled disposal of toxic substances as part of household waste could counteract the goals of environmental sustainability. If no adequate solution is found for the end-of-life treatment of the electronic waste generated by millions of very small components, precious raw materials will be lost and noxious pollutants emitted to the environment, Koehler and Som (2005).
Indirect Effects
The ubiquitous use of miniaturized and embedded microelectronic components interconnected in wireless networks could influence human health due to the additional exposure to non-ionizing radiation. Non-ionizing radiation (NIR) is emitted during wireless data transfer, one of the basic technologies of ubiquitous computing; as a consequence, a great part of the emitted radiation will be absorbed by the human body. Even sources of low transmitting power may cause high exposure to radiation if they are very close to body tissue. Due to the wide range of substances used in microelectronics, the risk of allergic reactions or chronic poisoning increases, although the level of risk depends on the substances contained and the kind of encapsulation. In the future, new types of microelectronics will emerge that may release new, potentially harmful substances.
6.0 CONCLUSION
Ubiquitous computing is a socio-technical phenomenon in which computers are integrated into people's lives and the world at large. This paper discussed some of the issues concerning the possible consequences of this technology from social and environmental perspectives. To make ubiquitous computing sustainable, precautionary measures have to be initiated quickly.
REFERENCES
Ailisto, H., Kotila, A. and Strömmer, E. (2003), Ubicom applications and technologies, presentation slides from ITEA, http://www.vtt.fi/ict/publications/ailisto_et_al_030821.pdf
Amor, Daniel (2001), Pervasive Computing: The Next Chapter on the Internet, http://www.informit.com/articles/article.asp?p=165227&rl=1
Koehler, Andreas and Som, Claudia (2005), Effects of Pervasive Computing on Sustainable Development, http://www.patmedia.net/tbookman/techsoc/Koehler.htm
Ducatel, K., Bogdanowicz, M., Scapolo, F. and Leijten, J. (2005), That's what friends are for. Ambient Intelligence (AmI) and the IS in 2010, http://www.itas.fzk.de/esociety/preprints/esociety/Ducatel%20et%20al.pdf
Gerstner, L. V. (2000), IBM, http://www5.ibm.com/de/entwicklung/produkte/pervasive.html
Bohn, Jürgen, Coroamă, Vlad, Langheinrich, Marc, Mattern, Friedemann and Rohs, Michael (2004), Living in a World of Smart Everyday Objects: Social, Economic, and Ethical Implications, Human and Ecological Risk Assessment, Vol. 10, No. 5, October 2004.
McKeown, Patrick (2003), Information Technology and the Networked Economy, second edition, p. 164, Thomson Course Technologies, USA.
McKeown, Patrick (2003a), Information Technology and the Networked Economy, second edition, p. 84, Thomson Course Technologies, USA.
McKeown, Patrick (2003b), Information Technology and the Networked Economy, second edition, pp. 438-439, Thomson Course Technologies, USA.
Raisinghani, Mahesh S., Benoit, Ally, Ding, Jianchun, Gomez, Maria, Gupta, Kanak, Gusila, Victor, Power, Daniel and Schmedding, Oliver (2004), Ambient Intelligence: Changing Forms of Human-Computer Interaction and their Social Implications, Journal of Digital Information, 5(4), Article No. 271, 2004-08-24, http://jodi.tamu.edu/Articles/v05/i04/Raisinghani/?printable=1
Weiser, M. (1991), The Computer for the 21st Century, Scientific American, September 1991, pp. 94-104, http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html
Weiser, Mark (1993), Some Computer Science Issues in Ubiquitous Computing, Commun. ACM 36(7), pp. 74-84, http://www.informatik.uni-trier.de/~ley/db/journals/cacm/cacm36.html
Weiser, Mark and Brown, John S. (1996), Designing Calm Technology, PowerGrid Journal, v1.01, http://powergrid.electriciti.com/1.01
WCED - World Commission on Environment and Development (1987), Our Common Future, Oxford: Oxford University Press.
A MATHEMATICAL IMPROVEMENT OF THE SELF-ORGANIZING MAP ALGORITHM
Tonny J. Oyana, Department of Geography and Environmental Resources, Southern Illinois University, USA.
Luke E. K. Achenie, Department of Chemical Engineering, University of Connecticut, USA.
Ernesto Cuadros-Vargas, School of Computer Science, Universidad Catolica San Pablo, Peru.
Patrick A. Rivers, Health Management Program, Southern Illinois University, USA.
Kara E. Scott, Department of Geography and Environmental Resources, Southern Illinois University, USA.
ABSTRACT
The objective of this paper is to report a mathematical improvement of the self-organizing map (SOM) algorithm, implemented using real georeferenced biomedical and disease informatics data. The SOM algorithm is a very powerful unsupervised neural network with both competitive and cooperative learning abilities. It provides a foundation for knowledge discovery in large spatial databases and has successfully been applied to recognize patterns in several problem domains. Although significant progress has been achieved in using SOM to visualize multidimensional data or utilizing SOM for data mining purposes, certain limitations related to its performance still exist. In this paper, we propose a mathematical improvement as a result of discovering these limitations while using SOM-trained data for biomedical applications. The paper also introduces a new SOM-based model, the mathematically improved learning SOM (MIL-SOM*).
Keywords:
SOM; MIL-SOM*; GIS; Clustering; Geography; Algorithms; Spatial Data Mining; Visualization; Biomedical Applications.
INTRODUCTION
The self-organizing map (SOM) is a special type of artificial neural network (ANN) that clusters high-dimensional data vectors according to a similarity measure (Kohonen, 1982). The SOM is not only used for clustering in high-dimensional spaces; it is also designed to self-organize similar data which have not yet been classified. In the SOM, neurons compete with each other to represent the input data. As a result, data in the multidimensional attribute space can be abstracted to a much smaller number of latent dimensions, organized on the basis of a
predefined geometry in a space of lower dimensionality, usually a regular two-dimensional array of neurons. The SOM clusters the data in a manner similar to other clustering algorithms, but has the additional benefit of ordering the clusters and enabling the visualization of large numbers of clusters. Although a number of SOM applications have been developed for the biomedical sciences (Manduca, 1994; Tamminen et al., 2000; Sugiyama and Kotani, 2002), none have integrated SOM-trained data with geographic information systems (GIS) data models. Similarly, to the best of our knowledge, no one has attempted to integrate non-SOM-like biomedical data with GIS. The increasing demand for spatial databases for biomedical applications also provides an incredible opportunity for the development of more sophisticated geospatial tools. In the context of spatial databases, for example, the integration of SOM-trained data with GIS data models could assist in physical database design and thus lend further support to the development of more reliable spatial access methods (SAM) (Gaede and Guenther, 1997).
2.1 The Basic Structure of Kohonen's SOM Algorithm
The basic structure of Kohonen's SOM algorithm consists of five major steps:
1. Begin with an input vector X = [X_{k=1}, ..., X_{k=n}] with d dimensions, represented on an input layer of units w_ij arranged in an (m x n) grid with coordinates (i, j).
2. Define the SOM training parameters (size, training rate, map grid, and neighborhood size). Equally important is the main principle in selecting the size, because it is contingent upon the number of clusters and the pattern or structure of the SOM; however, defining an initial network may no longer be necessary, as illustrated by the growing neural gas example (Fritzke, 1995).
3. Compute and select the winning neuron or Best Matching Unit (BMU) based on a distance measure and a neighborhood function, illustrated in Equations 1 and 2, respectively. In most cases the metric space is Euclidean, while the neighborhood function is Gaussian.

\| X_k - w_{bmu} \| = \arg\min_i \{ \| X_k - w_{ij} \| \} \qquad (1)

In Equation 1, \| \cdot \| is the absolute distance, w_bmu is the winning neuron, and w_ij corresponds to the coordinates on the grid of units.

h_{ci}(t) = \exp\left( - \frac{d_{ci}^2}{2 \sigma(t)^2} \right) \qquad (2)

In Equation 2, h_ci(t) is the neighborhood function, σ(t) is the neighborhood radius at time t, and d_ci is the distance between neurons c and i on the SOM (m x n) grid.
4. Update the attributes of the winning neuron using the update rule in Equation 3.
w_{ij}(t+1) = w_{ij}(t) + \alpha(t) \, h_{ci}(t) \, [ X_k(t) - w_{ij}(t) ] \quad \text{for } i \in N_c(t)
w_{ij}(t+1) = w_{ij}(t) \quad \text{for } i \notin N_c(t) \qquad (3)

In Equation 3, X_k(t) is a sample vector taken at random from the input vectors, w_ij(t) is the output vector for the coordinates on the (m x n) grid with coordinates i and j within the neighborhood N_c(t), and α(t) and h_ci(t) are the learning rate function and the neighborhood kernel function, respectively. Since N_c(t) specifies the topological neighborhood of the neurons surrounding the winning neuron, its size reduces slowly as a function of time, i.e. it starts with fairly large neighborhoods and ends with small ones. The training rate function can be linear, exponential, or inversely proportional to time t. The training length is divided into two periods: t_1 is the initial period and t_2 is the fine-tuning period with neighboring units h_ci(t).
5. Repeat steps 3 and 4 until complete convergence is realized for the SOM network.
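To make steps 3-5 concrete, the following is a minimal NumPy sketch of the standard SOM loop of Equations (1)-(3). The linear decay schedules and all parameter values are illustrative assumptions (the text allows linear, exponential, or inversely proportional training-rate functions), and this is not the authors' implementation.

    import numpy as np

    def som_train(X, m, n, n_iter=1000, alpha0=0.5, sigma0=3.0, seed=0):
        # Minimal sketch of Kohonen's SOM: BMU search (Eq. 1), Gaussian
        # neighbourhood (Eq. 2), and weight update (Eq. 3) with decaying rates.
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.random((m, n, d))                     # grid of units w_ij
        # grid coordinates used for the neighbourhood distance d_ci
        coords = np.stack(np.meshgrid(np.arange(m), np.arange(n),
                                      indexing="ij"), axis=-1).astype(float)
        for t in range(n_iter):
            frac = t / n_iter
            alpha = alpha0 * (1.0 - frac)             # linear learning-rate decay (assumed)
            sigma = sigma0 * (1.0 - frac) + 1e-3      # shrinking neighbourhood radius
            x = X[rng.integers(len(X))]               # random sample X_k
            # Eq. (1): BMU = unit minimising ||X_k - w_ij||
            dist = np.linalg.norm(W - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), (m, n))
            # Eq. (2): Gaussian neighbourhood h_ci(t) around the BMU
            d2 = np.sum((coords - np.array(bmu, float)) ** 2, axis=2)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            # Eq. (3): pull units towards the sample, weighted by h_ci(t)
            W += alpha * h[..., None] * (x - W)
        return W

    # Mirrors the six-dimensional vectors and 20 x 20 grid used later in the paper.
    W = som_train(np.random.rand(500, 6), m=20, n=20)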
2.2 Mathematical Improvements to Kohonen's SOM Model
In the new mathematically improved learning SOM (MIL-SOM*) model, we propose a better updating procedure than the one in Step 4 of Kohonen's SOM model. Figure 1 gives the pseudo code for the MIL-SOM* algorithm. This pseudo code specifies an augmented learning procedure that improves, in particular, the updating method outlined in Equation 3. The new learning method was suggested to address a number of efficiency and convergence issues associated with Kohonen's SOM model. The algorithm offers a solution to four issues: (1) speed and quality of clustering, (2) stabilizing the number of clusters, (3) the updating procedure for the winning neurons, and (4) the learning rate in the SOM model. Cuadros-Vargas and Romero (2005) also investigated some of these issues. In future studies, we plan to compare and analyze the performance of their two constructive algorithms (SAM-SOM* and MAM-SOM*) against this MIL-SOM* model.
3.0 EXPERIMENTAL DESIGN
In this study, we compared standard SOM and MIL-SOM* learning procedures, together with GIS methods, to explore disease data. We built a topological structure representing the original surface by encoding the disease map via a 3-D spherical mesh output. The neurons were positioned based on their weight vectors, and the BMU was the neuron nearest to the sampling point in the Euclidean distance measure. We performed three experiments using two disease datasets encoded with a vector data structure (point and polygon data structures) and a randomly generated dataset. In addition to encoded disease data, each map also contained unorganized sample points.
The two published datasets (Oyana and Lwebuga-Mukasa 2004; Oyana et al. 2004; Oyana and Rivers 2005; Oyana et al. 2005) contained geographically referenced data points of adult (n = 4,910) and children (n = 10,289) patients diagnosed with asthma between 1996
and 2000. The biomedical datasets were obtained from Kaleida Health Systems, a major healthcare provider in western New York. Vector patterns consisted of six dimensions (X, Y, case_control/code, IN500, IN1000, and PM1000). The MIL-SOM* algorithm for training a two-dimensional map is defined as follows:

Let X be the set of n training patterns X_{k=1}, X_{k=2}, ..., X_{k=n}
    W be an m x n grid of units w_ij, where i and j are their coordinates on that grid
    Jsom be the best cluster after iterations, where p indexes the distances between all possible pairs of neural nodes and data points
    alpha be the original learning rate, assuming values in [0,1], initialized to a given initial learning rate
    alpha1 = alpha*a1 be the first improved learning rate
    alpha2 = alpha*a2 be the second improved learning rate
    a1 be the first non-negative parameter of alpha1; when set to zero it yields the original SOM update
    a2 be the second non-negative parameter of alpha2; when set to zero it also yields the original SOM update
    diff(Xk - wij) be the differentiation of (Xk - wij)
    int((Xk - wij), 0, n-1) be the integral term for (Xk - wij) with intervals 0 to n-1 (1 to n)
    radius be the radius of the neighborhood function H(wij, wbmu, radius), initialized to a given radius
Repeat
    for k = 1 to n
        for all wij in W, calculate the absolute distance dij = Xk - wij
        for p = 1 up to number_iteration
            Calculate the sum of the distances Jsom between all possible pairs of neural nodes and data points
        Assign the unit that minimizes dij as the winning neuron wbmu
        Iterate to minimize the quantization and topological errors and select the best SOM clusters with minimum Jsom
        -- Standard SOM update of each unit wij in W:
        --   wij = wij + alpha * H(wbmu, wij, radius) * [Xk - wij]
        Define Xk, wij as symbolic variables (syms Xk wij)
        Apply the improved procedure to update each unit wij in W:
            wij = wij + H * (alpha*(Xk - wij) + alpha1*(diff(Xk - wij)) + alpha2*(int((Xk - wij), 0, n-1)))
        -- Note: d/dt(Xk - wij) will tend towards zero as learning improves
    Decrease the values of alpha, alpha1, alpha2, and radius
Until alpha, alpha1, and alpha2 reach 0
-- Visualize the output of MIL-SOM* using the distance matrix, e.g., the U-matrix

Figure 1: Pseudo code for the MIL-SOM* (Mathematically Improved Learning SOM) Algorithm
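Read literally, the improved update in Figure 1 augments the proportional SOM term alpha*(Xk - wij) with a derivative term and an integral term of the error, much like a PID controller. The sketch below is our interpretation only: it discretises diff(Xk - wij) as a backward difference across learning steps and int(Xk - wij) as a running sum, and the values of a1 and a2 are hypothetical placeholders, not taken from the paper.

    import numpy as np

    def mil_som_update(W, x, h, err_prev, err_sum, alpha, a1=0.1, a2=0.01):
        # One MIL-SOM*-style update as read from the Figure 1 pseudo code.
        # The backward-difference derivative and running-sum integral are our
        # interpretation, not the authors' code.
        err = x - W                                   # current error X_k - w_ij
        d_err = err - err_prev                        # discrete d/dt(X_k - w_ij)
        err_sum = err_sum + err                       # discrete integral of (X_k - w_ij)
        W = W + h[..., None] * (alpha * err           # proportional SOM term
                                + alpha * a1 * d_err  # alpha1 = alpha*a1 (derivative term)
                                + alpha * a2 * err_sum)  # alpha2 = alpha*a2 (integral term)
        return W, err, err_sum

Setting a1 = a2 = 0 recovers the standard SOM update, as the pseudo code notes; h is the neighborhood weight array produced by H(wbmu, wij, radius).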
In this case, X and Y represent the coordinates of the patients, case_control/code indicates whether the patient has asthma (case) or gastroenteritis (control), IN500 indicates whether the patient is within 500 m of the highway, IN1000 indicates whether the patient is within 1000 m of a pollution source, and PM1000 indicates whether the patient is within 1000 m of the sampling site of measured particulate matter concentrations. The plan for conducting these experiments was to train three datasets at different epochs. The first epoch consisted of training the two published datasets; these were clinically acquired geospatial biomedical datasets, which identified geographic patterns of childhood and adult asthma. The second epoch consisted of training a randomly generated dataset. The training was designed this way in order to compare the performance of the traditional SOM (Kohonen, 1982) with the new MIL-SOM* model, using two real datasets and a random computer-generated dataset. We randomly selected either 1000 or 2000 data points from the entire dataset, then continued adding the same number of data points (e.g., 1000, 2000, 3000, etc., or 2000, 4000, 6000, etc.) until the completion of training. We implemented different data ranges for the distinct datasets due to their different sizes, and trained these three datasets using the standard SOM and MIL-SOM* algorithms. We conducted our experiments in two phases using the standard SOM and MIL-SOM* algorithms, performed separately. The first phase (rough training), immediately following initialization of both algorithms, consisted of taking relatively large initial learning rates and neighborhood radii. The second phase (fine-tuning) concentrated on much smaller rates using the same criteria. A 20 x 20 map size was used, and the initial neighborhood radii for rough training and fine-tuning were 5 and 1.25, respectively. We initialized the weight vector for each of the neurons (vector quantization) in a linear fashion. We engaged the standard SOM and MIL-SOM* algorithms to train six-dimensional data vectors. Batch and sequential training algorithms were used. The training procedure for both algorithms corresponded to approximately the same space as the input data and the fine-tuned maps. Table 1 shows the training parameters of the standard SOM and MIL-SOM*.
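Before the table, the training set-up described above can be summarised in a configuration block like the following sketch; the key names are ours, not from the authors' toolbox scripts, and string values stand in for toolbox-specific settings.

    # Hypothetical parameter block mirroring the experimental design in the text.
    training_setup = {
        "dimensions": ["X", "Y", "case_control", "IN500", "IN1000", "PM1000"],
        "map_size": (20, 20),            # 20 x 20 grid of neurons
        "init": "linear",                # linear initialisation of weight vectors
        "phases": {
            "rough":     {"radius_initial": 5.0,  "learning_rate": "large"},
            "fine_tune": {"radius_initial": 1.25, "radius_final": 1.0,
                          "learning_rate": "small"},
        },
        "algorithms": ["batch", "sequential"],
        "sample_increments": {"childhood_asthma": 2000, "adult_asthma": 1000,
                              "random": 2000},
    }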
Table 1: Training parameters of the standard SOM and MIL-SOM* algorithms*

Data Points    Elapsed Time (s)             Standard SOM          MIL-SOM*
               Standard SOM    MIL-SOM*     Qe         Te         Qe         Te

Childhood Asthma data
2000           5.406           1.016        2081       0.052      1064       0.019
4000           8.016           1.625        1780       0.032      1103       0.035
6000           10.75           2.094        1637       0.028      961        0.036
8000           12.75           2.797        1502       0.029      887        0.031
10000          18.547          3.922        1309       0.031      828        0.037

Adult Asthma data
1000           3.25            0.922        813        0.034      571        0.021
2000           5.61            1.094        635        0.007      467        0.039
3000           7.109           1.297        612        0.018      434        0.04
4000           7.875           1.563        537        0.015      416        0.031
4910           7.5             1.641        507        0.019      403        0.043

Randomly Generated Numbers
2000           5.625           1.141        0.379      0.065      0.345      0.107
4000           7.813           1.516        0.374      0.059      0.338      0.127
6000           10.344          2.016        0.365      0.057      0.336      0.147
8000           12.672          2.828        0.347      0.065      0.322      0.141
10000          19.765          3.703        0.336      0.062      0.318      0.136
* Note: There was a general improvement in map quality (quantization error (Qe) and topological error (Te)) in the last two columns when we integrated the new method (MIL-SOM*) for initialization and training of the standard SOM. We used initial neighborhood radii for the rough training phase and the fine-tuning phase of max(msize)/4 = 5 and (max(msize)/4)/4 = 1.25, respectively, until the fine-tuning radius reached 1, where max is the maximum value of the map size matrix. For all the datasets the map size was [20 20], so max is 20. We could also easily adjust and specify a different map size, a different radius, and a different training length to obtain better results with MIL-SOM*.

4.0 EXPERIMENTAL RESULTS AND DISCUSSIONS
We have successfully implemented a mathematical improvement to resolve efficiency and convergence concerns associated with the standard SOM. The implementation introduces a new family of constructive MIL-SOM* algorithms, which focuses on three significant opportunities: (1) evaluation at the metrics level, (2) finding optimal clustering solutions, and (3) augmenting the learning rate of the standard SOM. As noted, the performance of this constructive MIL-SOM* was tested using three datasets.
In order to understand the performance of MIL-SOM*, we trained the same datasets using the standard SOM. In all three experiments, the newly modified (MIL-SOM*) algorithm showed a dramatic improvement in performance during training and in map quality when compared to the standard SOM. Figures 2 through 4 illustrate the experimental results for the standard SOM and MIL-SOM* by plotting quantization error against the number of data points. The figures clearly indicate that MIL-SOM* definitively outperforms the standard SOM, revealing an overall decrease in quantization error. For the childhood asthma data, the quantization errors before training were relatively the same under either algorithm. We observed that the quantization errors for both the standard SOM and MIL-SOM* dropped tremendously after training; after training, however, the quantization error using MIL-SOM* showed much greater improvement, with a steady decline relative to that of the standard SOM. According to Table 1, the elapsed time dropped to roughly one-sixth of the standard SOM's when using MIL-SOM*. For the adult asthma data, we observed that before training the quantization error was roughly 1400 under either algorithm, reflecting minimal change as the number of data points increased. This error showed a continuous decrease following training using both the standard SOM and MIL-SOM* algorithms; the results, however, reveal an even greater and continuous decline with MIL-SOM*. The elapsed time for the adult asthma data dropped to roughly one-fifth. For the randomly generated dataset, the quantization error before training, using either the standard SOM or MIL-SOM*, was approximately 0.58. Although both algorithms revealed a downward trend after training, the error following training with MIL-SOM* showed a greater improvement, with a larger decrease in quantization error. Another benefit of MIL-SOM*, as revealed in Table 1, was an improvement in elapsed time when compared to that of the standard SOM. A key property of the MIL-SOM* algorithm is the minimal additional computation per learning step, which makes it conveniently easy to implement. Learning with MIL-SOM* is also accomplished using the same methods as for the standard SOM. Since only a small fraction of the MIL-SOM* has to be modified during each training step, adaptation is fast and the elapsed time is low, even if a large number of iterations is necessary or the dataset is unusually large. The MIL-SOM* algorithm has other desirable properties as well: it is very stable, has increased performance, and maximizes the time available for processing data. Thus it is scalable and is independent of input and insertion sequence.
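For reference, the map-quality measures compared above can be computed as in the sketch below. The paper does not spell out its exact definitions, so this assumes the usual ones: Qe as the mean distance from each sample to its best-matching unit, and Te as the share of samples whose two closest units are not grid neighbours (4-connectivity assumed here).

    import numpy as np

    def quantization_error(X, W):
        # Qe: mean distance from each sample to its best-matching unit.
        flat = W.reshape(-1, W.shape[-1])
        d = np.linalg.norm(X[:, None, :] - flat[None, :, :], axis=2)
        return d.min(axis=1).mean()

    def topographic_error(X, W):
        # Te: fraction of samples whose first and second BMUs are not
        # adjacent on the grid (one common definition, assumed here).
        m, n, _ = W.shape
        flat = W.reshape(-1, W.shape[-1])
        d = np.linalg.norm(X[:, None, :] - flat[None, :, :], axis=2)
        order = np.argsort(d, axis=1)[:, :2]          # two closest units
        r1, c1 = np.divmod(order[:, 0], n)            # grid coords of 1st BMU
        r2, c2 = np.divmod(order[:, 1], n)            # grid coords of 2nd BMU
        adjacent = (np.abs(r1 - r2) + np.abs(c1 - c2)) == 1
        return 1.0 - adjacent.mean()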
[Figure residue removed: quantization error versus number of data points, before and after training, for the standard SOM and MIL-SOM*]

Figure 2: Standard SOM and MIL-SOM* applied to a published childhood asthma dataset
[Figure residue removed: quantization error versus number of data points, before and after training, for the standard SOM and MIL-SOM*]

Figure 3: Standard SOM and MIL-SOM* applied to a published adult asthma dataset
[Figure residue removed: quantization error versus number of data points, before and after training, for the standard SOM and MIL-SOM*]
Figure 4: Standard SOM and MIL-SOM* applied to a randomly generated dataset

5.0 CONCLUSIONS AND RECOMMENDATIONS
This study has introduced a new family of MIL-SOM* algorithms developed from suggested mathematical improvements to the original SOM (Kohonen, 1982). We are confident that the properties of this new family will be useful for the classification, visualization, and mining of georeferenced data. These improvements can be used for georeferenced biomedical applications, and to address problems associated with the integration of SOM-trained data with GIS data models in physical database design efforts. GIS data models are computer encodings of abstracted forms or constructs of geographic space based on a simple graph, topology, geometry, or an array of pixels. MIL-SOM* can also be used for similarity information retrieval, as suggested by Cuadros-Vargas and Romero (2005), as well as for building and exploring homogeneous spatial data. While we did not develop a specific plan in this experiment, we recognize that assigning a confidence level to the SOM results is very important. Therefore, in future studies we will explore the use of hypothesis testing and the Bayes' inference method to assess the probability of obtaining correct clustering results (i.e., class confidence) from MIL-SOM*.

6.0 ACKNOWLEDGEMENT
Supported in part by an SIUC Faculty Start-up Fund and an SIUC ORDA Faculty Seed Grant. Mr. Dharani Babu Shanmugam, a computer programmer, was responsible for implementing the
mathematical improvements in SOM for these experiments. Dr. Jamson S. Lwebuga-Mukasa, Department of Internal Medicine, UB School of Medicine and Biomedical Sciences, Kaleida Health Systems Buffalo General Division, provided the datasets for this study.

7.0 PROTECTION OF HUMAN SUBJECTS
All research was approved by the Southern Illinois University Carbondale Human Investigation Review Board in accordance with national and institutional guidelines for the protection of human subjects.

REFERENCES
Cuadros-Vargas, E. and Romero, R. A. F. (2005), Introduction to the SAM-SOM* and MAM-SOM* Families, International Joint Conference on Neural Networks (IJCNN) 2005, Montreal.
Gaede, V. and Guenther, O. (1997), Multidimensional Access Methods, ACM Computing Surveys, 30(2): 123-169.
Kohonen, T. (1982), Self-organized formation of topologically correct feature maps, Biological Cybernetics 43: 59-69.
Manduca, A. (1994), Multiparameter medical image visualization with self-organizing maps, IEEE World Congress on Computational Intelligence, IEEE International Conference on Neural Networks (1994), 6(27): 3990-3995.
Oyana, T. J., Boppidi, D., Yan, J., and Lwebuga-Mukasa, J. S. (2005), Exploration of geographic information systems-based medical databases with self-organizing maps: A case study of adult asthma, in Proceedings of the 8th International Conference on GeoComputation, 1st-3rd August 2005, Ann Arbor, University of Michigan.
Oyana, T. J., and Lwebuga-Mukasa, J. S. (2004), Spatial relationships among asthma prevalence, healthcare utilization, and pollution sources in Buffalo neighborhoods, New York State, Journal of Environmental Health 66(8): 25-38.
Oyana, T. J., Rogerson, P., and Lwebuga-Mukasa, J. S. (2004), Geographic clustering of adult asthma hospitalization and residential exposure to pollution sites in Buffalo neighborhoods at a U.S.-Canada border crossing point, American Journal of Public Health 94(7): 1250-1257.
Oyana, T. J., and Rivers, P. A. (2005), Geographic variations of childhood asthma hospitalization and outpatient visits and proximity to ambient pollution sources at a U.S.-Canada border crossing, International Journal of Health Geographics 4(1): 14.
Sugiyama, A. and Kotani, M. (2002), Analysis of gene expression data by using self-organizing maps and k-means clustering, IJCNN 2002, Proceedings of the 2002 International Joint Conference on Neural Networks, 12-17 May 2002, 2: 1342-1345.
Tamminen, S., Pirttikangas, S. and Röning, J. (2000), Self-organizing maps in adaptive health monitoring, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN), 24-27 July 2000, 4: 259-264.
BRIDGING THE DIGITAL DIVIDE IN RURAL COMMUNITY: A CASE STUDY OF EKWUOMA TOMATOES PRODUCERS IN SOUTHERN NIGERIA
S. C. Chiemeke and S. S. Daodu, Department of Computer Science, University of Benin,
Nigeria
ABSTRACT
Information and Communication Technology (ICT) has become a potent force for transforming social, economic and political life globally. In an attempt to bridge the "digital divide" (between those who have access to information resources and those who do not), the three tiers of government of Nigeria (Federal, State and Local Government) have laid emphasis on rural electrification, thereby promoting informative education through radio and television as well as the licensing of GSM (Global System for Mobile Communication) operation. In Delta State of Nigeria, the wife of the Governor, Chief (Mrs.) N. Ibori, initiated women's centres. She also introduced an interactive 30-minute weekly television programme for Ibo tomatoes producers. The Ibo communities are known to produce 70% of the pluvial tomatoes (available from June to October) in two of the six geographical zones of Nigeria. Recently, there has been an attempt to improve the production, distribution, preservation and sale of pluvial tomatoes in these communities through ICT. The introduction of ICT is intended to replace the older forms of IT (Information Technology), that is, the town crier, smoke piper, drumming, etc.
This paper looks into the lifestyle of the Ibo community that promotes community-based ICT. It also x-rays the access to and effectiveness of ICT, as well as social and economic activities conducted through ICT, in the Ekwuoma community of Delta State of Nigeria.
Keywords: Digital Divide, Information Technology, Rural Concept, Pluvial Tomatoes.
1.0 INTRODUCTION
Worldwide, Information and Communication Technology (ICT) provides a great development opportunity. ICT contributes to information dissemination and provides an array of communication capabilities, thereby increasing access to technology and knowledge, among other benefits. ICT is a key weapon in the war against world poverty; when used effectively, it offers huge potential to empower people in developing countries and disadvantaged communities (Peter, 2003). ICT overcomes obstacles, addresses social problems, strengthens communities' democratic institutions, offers a free press, and boosts local economies. With the growing popularity of ICT (such as the internet, software packages and cellular telephones), people are incorporating technology into their daily routines. These technologies are
making people's lives easier. The people most able to utilize these advancements, however, are the ones who have access to physical resources and knowledge of the newest changes. On the other hand, there are people who, for economic, social, cultural or educational reasons, do not have access to computers and the internet; these people will not be able to utilize the information provided by such technologies. This leads to a gap between those who have access and those who do not. This gap is a global problem and is referred to as the "digital divide" (Novak & Hoffman, 2000). In Nigeria, four aspects of the digital divide are worthy of consideration:
(i) Regional aspects, which manifest within the country in the distribution of amenities due to the influx of young people from rural to urban areas seeking greener pastures. Many of these youths discover, to their displeasure, the hard and unfriendly community in which they find themselves; thus many spend most of their time in cyber cafés hoping to better their lives through information. The unequal access to ICT facilities between urban and rural people is probably the worst in Nigeria.
(ii) Socio-economic aspects, given the skewed distribution of income in the country, where only members of the elite can afford access to the technical equipment required for joining the ICT order. This means that only a small and minute proportion of Nigerian society has access to electronic information media, particularly e-mail.
(iii) Gender disparities, that is, the disparity between male and female access to ICTs. In Nigeria, one of the most frequently cited solutions for targeting the digital divide is to construct cyber cafés. A cyber café is a public location with Internet-connected computers available for a per-minute or package-minute fee. Cyber cafés serve as "youth centers" (i.e. for ages between 15 and 24) and enable people who do not own a computer at home to utilize Internet technology; in this way it is thought that these cafés help to bridge the digital divide. However, in Nigeria cyber cafés are not targeted at women, due to socio-cultural factors. The cyber café can serve as a medium that perpetuates immoral acts: the centers are convergence points for lousy hangouts, especially for adolescents involved in watching pornographic films, and there are constant robbery raids. The violence, coupled with the lousy environment, makes it difficult for females to want to browse. All these reasons leave Nigerian females disadvantaged and discouraged. The culture frowns on females hanging out with males, and these age groups are very inquisitive and exploring. Nigerian males are minimally involved in home keeping and thus can spend time in a cyber café, unlike Nigerian females, who are culturally assigned the responsibilities of home management and have less time to spend on the internet. The direct effect is that many Nigerian females are technology-shy.
(iv) Disparity between the rich and the poor. There is a real risk that poor people will continue to be marginalized and that the existing educational divide will be compounded by a growing digital divide.
534
Chiemeke & Daodu
The contributing factor for these digital divide are inadequate provision of electricity (Never expect power at all - NEPA), Television and Radio broadcasting. Broadcasting has traditionally provided the basic information infrastructure for Africa's entry into the information society. Access to radio and television is by far greater, per capita, than access to newspaper, telephones or even computers. According to UNICEF, there were 226 radio sets and 66 television sets per 1000 population in Nigeria in 1977 (Idowu et al., 2003). With the emergence of many new and independent radio and TV stations, their impact has been quite far reaching. Although lack of universal access to electricity limits the coverage of local television. 1.0 RURAL CONCEPT OF INFORMATION T E C H N O L O G Y IN NIGERIA Information Technology has existed in Nigeria in its primitive form from time immemorial when jingle drums, cave drawings, smoke signals etc., were used for communication between villages, communities and hamlets (Ajayi & Ighoroje, 1996). Other basic forms used then were town criers who conveyed information and announcements within a community. Communal living makes it possible to pass information from one household to another. Some of these older and primitive forms of IT are still relevant in some communities, especially the rural communities (like Ekwuoma village in Delta state of Nigeria), due to their overall slow pace of development. The impact of modem technology is therefore hardly felt in these communities because in extreme cases these technologies are not accepted. Due to high poverty and illiteracy levels, it is sometimes very difficult to effect dramatic changes in the philosophy and attitudes of rural dwellers towards technological growth. There is overdependence on governmental interventions, which are grossly inadequate or even nonexistence in some cases. Such interventions could be educational support at all levels, provision of the necessary infrastructure like constant power supply and a reliable telecommunication network. All these reasons enumerated above are the direct effects of low patronage of ICT in rural areas. Furthermore, the non-availability and independability of the supply of electricity is one of Nigeria's Achilles heels. In spite of the abundance sources of energy in the country - thermal, solar and oil, Nigeria is perpetually short of electricity. These perennial shortages and the epileptic nature of the inadequate supply constitute a major constraint to the realization of the technology-driven globalization in rural communities of Nigeria. Rural communities occupy the deepest part of this digital divide. 3.0 B R I D G I N G THE DIGITAL EKWUOMA COMMUNITY
DIVIDE
IN NIGERIA:
A CASE
STUDY
OF
In bridging the digital divide, the three-tier levels of government (Federal, State and Local government) are involved. At Federal level, the President, Olusegun Obasanjo initiated the rural electrification at the inception of his administration in May 29, 1999. The Federal government is currently executing 1,141 rural electrification projects across the country (Government in action, 2003). The project was distributed on a zone-by-zone basis and 241 rural
535
International Conference on Advances in Engineering and Technology
electrification projects were distributed to the south- south of the six geopolitical zone. At the state level, the Delta state governor's wife (Chief (Mrs) N. Ibori) initiated women centres in rural communities, while at the local government level; facilitators were provided to empower the rural dwellers on various social, economic and political lives peculiar to their communities. Ekwuoma is an Ibo community in Delta State located in the south-south geographical location of Nigeria. The Ibos are known to live in a close-knitted community, which made their older primitive forms of IT (town crier, smoke pipers) to be made relevant. They are mostly peasant farmers. They are known to produce 70% of pluvial tomatoes (available from June to October) in two out of six geographical zones of Nigeria (http://www.bj.refer.org). Inspite of these activities, they are poor and neglected. They are neglected because of access to global world. They are highly skilled and willing to adopt any method that will enhance productivity and sales of their farm products especially tomatoes. The life style of Ibo community made it possible for the wife of the Delta Sate governor Chief (Mrs.) N. Ibori initiated agricultural centre in Ekwuoma community. Chief (Mrs.) Ibori also introduced an interactive 30 minutes weekly program on television (for Ibo tomatoes producers). This centre serves as means of boosting agricultural products through transfer of technology. The agricultural centre is located in the central part of the village (1.5km radius from an individual home). Accessibility is made easy by earthen constructed footpath. The centre opens once in two weeks on Sundays between 4.30pm to 6.00pm. Notice of meeting are circulated through town criers and reinforced by GSM (Global System for Mobile Communication) calls where necessary. Agricultural extension officers serve as facilitators. These facilitators are involved in educating the farmers on how to boost production and sales of agricultural products especially pluvial tomatoes. The information are disseminated in local dialects. It is an interactive session, which is later related as a weekly television programmes. Undocumented reports suggest that these broadcasting cum agricultural centres are welcome developments in many rural areas, especially Ekwuoma. ICT is brought nearer home and the effectiveness is greatly enhanced due to self-driven methods. There is a better social interaction amongst the community tomatoes producers. This has led to the formation of vibrant cooperative societies, which fix prices, and distribution of pluvial tomatoes in Ekwuoma village. The GDP of the inhabitants of Ekwuoma village has improved due to access and effectiveness of ICT. Within the last two (2) years, access to telecommunication in the rural communities has increased by over 50% (Ajayi, 2003). For example, in Ekwuoma village, all tomatoes producers possess GSM, Television and Radio through cooperative societies. This has enhanced interaction among the pluvial tomatoes producers and sellers. Transportation of harvested tomatoes to the neighboring villages or town where they are disposed off has improved from the usual carriage by head or bicycle to the use of autobyke or motorcycle. This level of farming has also boost the social life of the tomatoes farmers
536
Chiemeke & Daodu
by forming different clubs apart from cooperative societies where they socialize and exchange ideas on the improvement of land acquisition, improve tomatoes varieties, method of farming through the use of tractors and planters as the case may be. This is usually enhanced by the local agricultural extension officers who help the farmers with these advance implements. Part of the ways to encourage these local farmers by the government is awarding of prices to the best farmers of the year through yearly exhibitions and assessments by the government officials. In exceptional cases, scholarships are also awarded to the children of the best farmers. 4.0 CONCLUSION The partnership between the three tier levels of government and the tomatoes producers is a welcome development in Nigeria. We feel that in the next few years, the digital divide between the rural and urban communities will be reduced to a barest minimum. REFERENCES
Ajayi, G. O. (2003). The Nigerian National Information Technology Policy, http://www.jidaw.com/policy.htm Ajayi, O. B. & Ighoroje, A.D.A. (1996). Female Enrolment for Information Technology in Nigeria. In Achieving the Four E's Edited by Prof. R. K. Banerijee, GASAT 8, Amenbad, India 1, 41-51 Government in Action (2003) FG Executes Rural Electrification Projects. Available online: http ://www.nigeriafirst.org/article_977. shtml http://www, bj .re fer. org/b enin_ct/e c o/lare s/thema/thema 7/English/po int2. htm Idowu, B, Ogunbedede, E. & Idowu, B (2003). Information and Communication Technology in Nigeria. Journal of Information Technology Impact, Vol. 3, No. 2, pp.69-76 Novak, T. P. & Hoffman, D. L. (2000). Bridging the Digital Divide: The Impact of Race on Computer Access and Internet Use. Vanderbilt University. Peters, T. (2003). Bridging the Digital Divide. The Evolving Internet Edited by William Peters, Washington D. C., U.S. Department of States.
537
International Conference on Advances in Engineering and Technology
STRATEGIES FOR IMPLEMENTING HYBRID E-LEARNING IN RURAL SECONDARY SCHOOL IN UGANDA P. O. Lating, Sub-Department of Engineering Mathematics, Makerere University, Uganda S.B.Kucel, Department of Mechanical Engineering, Makerere University, Uganda L.Trojer, Division of Technoscience Studies, Blekinge Institute of Technology, Sweden
ABSTRACT
This paper discusses the strategy that should be used while introducing e-learning in rural girls' secondary schools in Uganda for the benefit of female students of advanced level Physics and Mathematics. The strategy was formulated after numerous field visits to Arua, one of the poorest districts in Uganda. Urban secondary schools where Uconnect and SchoolNet projects are being implemented were also visited. Some literatures were reviewed from the Web on the subject. The results show that a limited form of e-learning, the Hybrid E-learning, can be introduced in rural secondary schools and the main delivery platform is the CD-ROM. To implement the hybrid e-learning, multistakeholder participatory approach, VSAT internet connectivity, and use of open source software are recommended. The implementation of this strategy will result in reducing the digital divide and achievement of one of the Millennium Development Goals of empowering women at reduced costs. Keywords: ICT; E-learning; Hybrid; Rural; Poverty; Secondary School; Female Students; Gender.
1.0 INTRODUCTION Rural secondary schools in Uganda are poor and have inadequate infrastructure, facilities and qualified teachers for Physics and Mathematics subjects. These are the essential technology and engineering subjects that are required for entry for degree courses in the Faculty of Technology, Makerere University, the most dominant tertiary institution in Uganda with a sound research base. Students perform poorly in Physics and Mathematics, especially female students from rural schools. The result is the low participation of female students from rural secondary schools in the engineering and technology profession. This disparity is distinctly evident in the graduation patterns of students from Faculty of Technology, Makerere University, see table 1.
538
Lating, Kucel & Trojer
From table l, it can be seen that in 4 years (from 2000 to 2003) Makerere University produced 417 Engineers out of which 85 were female, giving a 20.4%graduation ratio of female engineers as compared to the total number that graduated in a period of four years. Table 1. Graduations of Undergraduate Students by Gender from Faculty of Technology, Makerere University, March 2000 to March 2003 Course CivilEngineering Electrical Engineering Mechanical Engineering Total Male 154 102 76 332 Female 35 34 16 85 Total 189 136 92 417 Source: Academic Registrar's Office, Makerere University It was observed that in the 2005/2006 admissions into the Faculty of Technology, all the female students admitted were from only four urban, educationally elite Districts of Kampala (the capital city of Uganda) and its surrounding Districts of Mukono, Wakiso and Mpigi. There are 69 Districts in Uganda currently. Therefore, 65 rural Districts failed to produce female students who could perform well in Physics, Chemistry and Mathematics so as to qualify for admission into Makerere University for engineering training. This is a reflection of gender inequality in the education sector: rural female students do not participate in the engineering profession. Such inequalities should be addressed. The main causes of the poor performance of rural secondary schools in national examinations are: 9 Absence of senior laboratories for advanced level experiments. Those that have the laboratories cannot equip them with chemicals and necessary facilities for practical work. 9 Such schools lack libraries. Those with libraries cannot stock them with recommended text books. 9 Shortage of qualified and committed teachers. Good teachers go to urban and periurban schools where they are better remunerated and have attractive fringe benefits. The schools are too poor to invest in laboratories and libraries. Nor can they attract and remunerate qualified teachers. These are poverty related problems that must be solved by application of ICT in education. The paper starts by looking at some key international documents that support ICT and Gender research. The relevant policies of the Ugandan government that support this research are reviewed by the researcher. Problems of science education training in rural secondary schools are highlighted. There have been some attempts to introduce e-learning in Ugandan secondary schools under a number of projects with the aim of solving some of these problems. These projects are analyzed to see if they are the appropriate approach to introducing
539
International Conference on Advances in Engineering and Technology
e-learning in schools. At the end of the paper is a strategy for implementing e-learning in rural secondary school science education of female students in Arua district. The research is in progress. 2.0 WHY ICT AND GENDER RESEARCH 2.1 United Nations Millennium Development Goals
In September 2000, 189 world leaders under the auspices of the United Nations, (UN), agreed and set eight Millennium Development Goals (MDGs) to guide development of its member countries in the 21 st century (UN Publications). By the year 2015, all the 191 UN Member States have pledged to meet these goals. The UN MDG No. 3 specifically deals with empowerment of women. As an indicator for the achievement of this specific goal, gender disparity in primary and secondary education must be eliminated preferably by 2005 and at all levels by 2015. 2.2 The World Summit on the Information Society
In 2003, the World Summit on the Information Society (WSIS) set objectives and targets necessary for UN member countries to achieve the MDGs mainly through the application of Information and Communications Technologies (ICT) in every sector of human endeavour (UN Publications). WSIS operates under the patronage of the UN Secretary General, Mr. Kofi Arian. 2.3 The World Summit on the Information Society Gender Caucus
The WSIS Gender Caucus identified six Plans of Action. The sixth plan recommends strongly the need for Research Analysis and Evaluation to guide actions by UN member countries (UN Publications). It says: "Governments and other stakeholders must apply creative research and evaluation techniques to measure and monitor impacts- intended or unintended- on women generally and subgroups of women. At minimum, Governments and others should collect information disaggregated by sex, income, age, location and other relevant factors. On the basis of these data, and applying a gender perspective, we should intervene and be proactive in ensuring that the impacts of lCTs are beneficial to all". This particular Plan of Action calls for a more proactive involvement in ICT and Gender Research. 3.0 NATIONAL ICT AND GENDER EQUALITY POLICIES IN UGANDA 3.1 National ICT and Rural Communications Policies in Uganda
Uganda Government has identified ICT as one of the eight strategic intervention areas. The Government approved the National Draft ICT Policy (December 2003) for the country. The growth of the ICT use in Uganda was boosted by the Government's decision to exempt all ICT equipment from custom taxes. This helps in making the equipment such as computers more affordable to people.
540
Lating, Kucel & Trojer
In 1998, Uganda Communications Commission (UCC) was set up according to the Uganda Communications Act of 1997 as an independent communication regulator in the country. UCC adopted a Rural Communication Development Policy (July 2001). According to this policy, the three National Telecommunications Operators have been required, directly through the license rollout obligations, to attend to rural communication development. The three National Operators in the country are charged 1% of their annual gross turnover as contribution for the Rural Communication Development Fund (RCDF). UCC set up and manages this Fund (RCDF). The fund, while limited, is being used to leverage investment in rural communications through competitive private sector bidding. 3.2 Gender Policy of the Ugandan Government There are a number of gender related policies that Uganda government is implementing but the National Gender Policy (2003) is most relevant. At all levels of leadership in Uganda, Gender Mainstreaming is being emphasized. Women have specified number of seats in Parliament and in Local Government Councils. Gender is a component in the composition of Boards of Public Institutions and Corporations. In Education, female advanced level senior secondary school students get additional 1.5 points when they are being considered for entry into public Universities or Tertiary Institutions. Rwendeire (1998) defended educating women in Uganda by identifying the relevant social benefits involved. 3.3 Limitations of the ICT and Gender Policies in Uganda Unfortunately, both the ICT policy and the National Gender Policy are being implemented without the necessary laws that have been enacted by parliament to guide their implementation.
Access tariffs for Internet, however, remain quite high because Uganda's international access is only through European or American satellites that are expensive compared to our level of development. Minges & Kelly (1997) found that for Dial-up access to Internet for 30 hours a month (i.e. one hour a day), the monthly tariff was almost the same as the annual Gross National Income (GN1) per capita was only 240 US$ in 2003. The tariffs consist of fixed ISP and telephone subscription charges and variable telephone usage charges. u c c established the local Internet Exchange Point for local Internet use. However, services are still affected by lack of appropriate high capacity backbone infrastructure resulting in high local connection costs and bandwidth constraints. There has been a move by UCC to improve this by waiving license fees on Internet Access Service Licenses and use of 2.4GHz spectrum from July 2004. By the end of 2003, Internet bandwidth in Uganda had grown to about 25 Mbps (for up link and l0 Mbps (for down link) from only about 1 Mbps (for both up and down links) in 1998.
541
International Conference on Advances in Engineering and Technology
UN annually measures the level of ICT penetration and diffusion as one of the Human Development Index parameters. In its Human Development Report (2003), Uganda's number of Internet users (per 1,000 populations) was only 2.5. This is so low if you compare with a country like Sweden that had 516.3 in the same report. 4.0 INTERNET CONNECTIVITY IN RURAL SECONDARY SCHOOLS IN UGANDA There have been some attempts aimed at introducing Internet for learning and teaching in some selected secondary schools in Uganda. Most notably were the SchoolNet Uganda Project and the Uconnect Project.
4.1 The SchoolNet Project SchoolNet connected Internet to some schools at a capital cost of 30,340 USD. Generally VSAT connectivity methods are used with some schools connected using the Broad Spectrum technology. Dial-up and other wired Internet connectivity methods like ISDN, DSL, Leased Lines and Fiber Optic are not suitable for rural areas. They are narrow band and teledensity is low in rural schools. The project is mainly funded by donors especially the World Bank. However, such schools have problems in sustaining the project and cannot meet the recurrent monthly expenditure of 1,680 USD. And Internet in those schools is not being used for e-learning. Furthermore, the schools that SchoolNet chose are the best urban schools in the country with relatively good science laboratories, libraries, infrastructures and qualified teachers. SchoolNet intends to introduce a commercial, proprietary e-learning platform, the Blackboard. This is a very expensive platform to acquire (the cheapest version id at 12,000USD) and maintain. No rural secondary schools can afford this. Makerere University has also thrown it out. 4.2 The Ueonneet Project Uconnect is an NGO that sells refurbished computers to schools at 175 USD per set, networks them and helps the schools with training of students and teachers in ICT. In some schools they arrange for Internet connectivity by contracting private businesses. Uconnect supplies their clients with the School Axxess server, the LIBRARIAN search engine. Students use it to find web pages of interest to them from among the thousands stored in its web-cache which includes many multimedia sites. There are also full motion interactive training videos that come with the server. This web-caching technology is very relevant for rural schools. Uconnect also encourages the use of open source platform especially the SchoolWeb for e-learning. But none of the schools have any ideas about e-learning.
542
Lating, Kucel & Trojer
5.0 S T R A T E G I E S F O R I M P L E M E N T I N G E - L E A R N I N G IN R U R A L S E C O N D A R Y S C H O O L S IN U G A N D A
5.1 What does "rural" mean in the Ugandan context? In the Ugandan context, the word "rural" means "poor" and it is not a classification based on whether an area is sparsely populated or not as is the case in Europe. Therefore, a rural secondary school in Uganda is another name for a poor secondary school. When implementing e-learning in such poor schools, their unique situations must be borne in mind. And one of the crucial decisions to make is the choice of the delivery platform(s) that will be used. 5.2 Hybrid Type of E-Learning Platforms most Suitable for Rural Secondary Schools
in Uganda The following types of platforms are used for electronic/distance learning purposes: 9 C D - R O M s are stand-alone instructional or informational programs not connected to Internet or other communication processes. 9 Web-sites are linked wed pages on an Internet or Intranet. They can be compared to a reference manual or reading a book. They provide passive information. 9 Asynchronous Internet Communication (AIC) is a listserver forum using communication tools, such as e-mail or bulletin boards, on an Internet or Intranet and is usually accompanied with an archive or database accessible by participants and the instructor. The users log on and write to each other at different times. A listserver is a program that automatically sends e-mail to a list of subscribers. It is the mechanism that is used to keep newsgroups informed. 9 Synchronous Internet Communication (SIC) is a form of communication like chat, video conferencing via the Internet, and voice chat. The chat function is when the individuals are simultaneously connected to a common site where typed messages are displayed for everyone to see. Each person can type his or her own message. Bulletin boards can be used the same way. 9 Web-based training is an on-line learning platform containing communication and course management tools on an Internet or Intranet, and can combine the above features. 9 Hybrids are any combination of the above with classical classroom training or coaching or group facilitation. In the circumstances of rural, poor secondary schools in Uganda, hybrid platforms are most suitable. And the main course delivery platform should be the CD-ROMs.
543
International Conference on Advances in Engineering and Technology
5.3 Multistakeholder Best Practices to be used when Implementing Hybrid E-learning in Rural Secondary Schools The implementation of Hybrid E-learning Project should be done using the Multistakeholder participatory approach. Local Government, Businesses, the participating schools, and NGOs, etc. should join hands with Makerere University, Faculty of Technology, in implementing the Research Project. Poverty-related problems cannot be solved single-handedly. 5.4 lnternet Connectivity In circumstances where teledensity is low and private businesses are reluctant to operate in rural areas, VSAT Internet connectivity is only viable method of introducing Internet to schools. The Broad Spectrum technology can be used to connect Internet to schools that are within a radius of 30 kms from a hub, which can be one of the schools itself. Refurbished computers with multimedia capability can be used in rural schools to reduce costs. Four monitors may also be connected to one CPU to further reduce costs. To reduce bandwidth requirement, web-caching will be used. The schools can operate as telecenters and allow the communities to access Internet. This will help to reduce the digital divide. 5.5 Course Management System For managing the learning environment, a Course Management System will be required and a Website for the Research Project created. This must be an open source software, not proprietary. The Mambo is a good product to try and is hosted on an open source server by bone.net. The author has some experience in using the Mambo. 6.0 CONCLUSIONS In Uganda, as one of the least developed countries, people in the rural areas face a situation of being among the poorest of the poor in the country. Support to the education system in the rural areas is highly needed. The scarcity of the schools includes teachers (whether skilled or not), textbooks and other learning materials, laboratories, infrastructure like electricity and, in the context of this paper, Internet connection. Uganda has adopted one of the most radical gender equality policies in the world. When this policy is linked to the policies of development and poverty reduction, the emphasis on well educated women and men in Uganda is inevitable. As elsewhere there is still a long way to go having gender balance especially in higher education. The paper is considering this issue for science education in rural secondary schools. Thus the Faculty of Technology at Makerere University is expected to benefit from the increased admission of female students from the target secondary schools. This is one way to bridge the gender gap existing currently in the Technology and Engineering training, where only about 18 - 20% of the students are female.
544
Lating, Kucel & Trojer
Going from policy to practice implies a number of challenges on fundamental levels. Issues such as general technology, ICT, multistakeholder collaboration, open source software, hybrid e-learning platforms, open archive resources as well as web caching for developing digital libraries constitute ways forward. Conclusions have been drawn that e-learning using hybrid delivery platforms can be put into practice in Advanced-level rural secondary science education of female students. This is expected to result in improved performance of female students in Physics and Mathematics. Still another impact of the hybrid e-learning project is the parallel development of using the targeted schools as telecentres for the surrounding society. If successful, it is expected that Internet use in the rural community will increase. This may have some impact on the digital divide. REFERENCES
2003 UN Human Development Report. http://www.undp.org~dr2003/indicator/cW f UGA.html Minges & Kelly (1997). Uganda Internet: Case Study. http://www.itu.int/africaintemet2000/Documents/ppt/Uganda%20Case%20 Study.ppt# 1 Ministry of Gender, Labour and Social Development: National Guidelines in Uganda (1997, 1999, 2003). http://www.ilo.org/public/english/employment/gems/eeo/guide/uganda/mglsd.htm Ministry of Works, Housing and Communications: National Information and Communication Technology Policy Framework (Draft). (May 2002). http://www.logosnet.net/ilo/150_base/en/init/uga_l .htm Rural Communications Development Policy for Uganda. (2001). http://www.ucc.co.ug/rcdf/rcdfPolicy.pdf Rwendeire A. 1998. Presentation at the World Conference on Higher Education, 5-9 October, 1998, Parish. http://unesdoc.unesco.org/images/0011/001173/117374e.pdf SchoolNet Uganda VSAT pilot project, http://www.schoolnetuganda.sc.ug/hompage.php?opti on=v satproj ect Uconnect Schools Project. http://www.uconnect.org/projectextract.html UN Millennium Development Goals. (n.d). http://www.un.org/millenniumgoals/ World Summit on the Information Society Gender Caucus" Recommendations for Action. (n.d) .http ://www.genderwsis.org/fileadmin/resources/Recommendations_ForAction_Dec_2003_Engl.pdf World Summit on the Information Society: Objectives, Goals and Targets. (n.d). All the websites were retrieved on March 10 th, 2006.
545
International Conference on Advances in Engineering and Technology
DESIGN AND D E V E L O P M E N T OF INTERACTIVE M U L T I M E D I A CD-ROMs FOR RURAL S E C O N D A R Y SCHOOLS IN UGANDA P. O. Lating, Sub-Department of Engineering Mathematics, Makerere University, Uganda S.B.Kucel, Department of Mechanical Engineering, Makerere University, Uganda L.Trojer, Division of Technoscience Studies, Blekinge Institute of Technology, Sweden
ABSTRACT The paper discusses the design and development of interactive multimedia CD-ROMs for advanced-level secondary school Physics and Mathematics for use by the disadvantaged rural female students in the rural district of Arua. Multimedia content of the CD-ROMs was developed at a Workshop of advanced level secondary school Physics and Mathematics teachers from the district in September, 2005. The Interface design and production of the two CD-ROMs (one for each subject) were made after some 'trade offs' and are being tested in the two girls' schools in Arua: Muni and Ediofe. It is expected that their use by the female students will result in improved performance in national examinations. This is the first successful case of advanced level course content being delivered to students using CD-ROMs that were locally developed based on the Ugandan curriculum. It is also a successful application of ICT in women empowerment.
Keywords" Interactive. Multimedia. CD-ROMs. Physics. Mathematics. Rural. Secondary School. Uganda. Poverty
1.0 INTRODUCTION Rural advanced level senior secondary schools in Uganda lack senior laboratories for core science subjects. Those with laboratories cannot afford to equip them and buy chemicals for practical work or experiments. Libraries are not well stocked. Science and Mathematics teachers are few and poorly remunerated and most of them teach in more than one school in order to get more pay. This makes them not committed and in many instances the syllabus is not completed. Government financial assistance to such schools is negligible and schools rely on the meager contribution of the poor parents whose annual income per capita is under 300 US$. The situation has led to students, especially female ones, dropping science subjects especially Physics and Mathematics, key engineering subjects. A strategy for implementing hydrid e-learning in such rural secondary schools to support disadvantaged female students in advanced level Physics and Mathematics was developed
546
Lating, Kucel & Trojer
by Lating, Kucel &Trojer, (unpublished). The Hybrid E-learning research project is currently being implemented in the Ugandan rural district of Arua in the two girls' advanced level secondary schools, Muni and Ediofe. In this project, the main course delivery platforms are the interactive self-study multimedia CD-ROMs which were designed and are being developed based on the local Ugandan curriculum. The project is being financed by Sida/SAREC as part of its support for research activities of Makerere University, Faculty of Technology. The paper starts by reviewing literatures on the advantages of CD-ROMs before describing the content and interface designs of the CD-ROMs that have been developed for advanced- level secondary school Physics and Mathematics based on the local Ugandan Syllabus. The paper ends by giving the hardware and software used in the production process. The CD-ROMs are being tested in Muni and Ediofe before mass production for use in other Ugandan secondary schools.
2.0 ADVANTAGES OF INTERACTIVE MULTIMEDIA CD ROMS There are no interactive multimedia CD ROMs based on the local advanced level curriculum that are being used in Ugandan secondary schools at the moment. Therefore, the delivery of content using interactive multimedia CD ROMs for advanced level secondary schools is a new phenomenon in Uganda. But in many developed countries, training CD-ROMs are used quite extensively. The main advantages of CD-ROMs are big memory capacity, multimedia applications and popularity due to its standardization. 2.1 Big Memory Capacity Advantages of interactive multimedia CD ROMs have been stated by Tapia et al, (2002). As storage medium CD ROMs have high capacity (650 to 700 megabytes) and are relatively cheaper compared to media like floppy disks (1.44 MB). For example, Woolbury (n.d) notes that a CD-ROM disc with a memory capacity of 550 megabytes of data is equivalent to about 250,000 pages of text. And most common CD-ROMs these days have memory capacities of up to 700 floppy discs, enough memory to store 300,000 text pages. Therefore, CD ROMs are very suitable for presenting rich graphic information, videos and animations that would be difficult to download from a website. Beheshti, Large & Moukdad, (2001), while supporting the use of CD ROMs further clarified that limited bandwidth has hindered the efficient transmission of large quantities of information over the Internet. A related problem in accessing online information is the need for complex networking technologies. Even today, many countries including Uganda, lack the necessary telecommunications infrastructure to effectively and reliably use the Web, especially in rural areas. But such countries can afford to buy inexpensive computers with CDROM drives. CD ROMs can be designed and implemented for large class teaching with very modest investments in the equipment.
547
International Conference on Advances in Engineering and Technology
Interactive CD ROMs have very fast data transfer rates compared to the Internet. The transfer rate for a CD ROM is typically between 300 to 1,200 Kbytes/sec as compared to the Internet technology that Ranky (1997) describes as a relatively slow technology with an equally slow transfer rates from 1.8 Kbytes/sec (for 14.4 modem) to 16 Kbytes/sec (for ISDN). This means that the CD ROM is more capable of supporting real-time learning needs than the Internet. 2.2 Interactive Multimedia Applications The basic components of an interactive multimedia CD ROM include the interface, texts, graphics, sound effects, animation, narration and video clips. 2.3 Popularity and Standardization of Interactive Multimedia CD ROMs For a digital delivery medium like CD ROMs, standards are necessary so that software manufacturers have an established unit to run their ever changing software. This gives the consumers confidence that the digital hardware they are purchasing will not be obsolete soon after purchase.
The basic digital hardware for CD ROM is already a standard agreed by two very influential companies: Sonny and Philips. Having set the standard for CD-Audio, the two companies have continued the trend with CD-ROMs, and most recently CD-I, a multimedia offshoot of CD-ROM. One beneficial effect of standardization of the compact disc is that the runaway popularity of CD-Audio has helped to set the compact disc in the public mind. Since CD-ROM is a close relative, its acceptance is made easier. That is why there are now many drives that will play CD-ROM and CD-Audio. This gives better assurance to those interested in CDROMs that it is a delivery medium that is here to stay. 3.0 DSIGN AND DEVELOPMENT PROCEDURE FOR THE CD-ROMs 3.1 Multimedia Local Content Design A workshop of advanced-level Physics and Mathematics teachers was held from 4 th to 1 lth September, 2005 in Arua district headquarters. Thirty three advanced level Physics and Mathematics teachers attended the Workshop and created hand-written interactive local content in both subjects based on the current local examination syllabus. The local content created was in English, the official language in all schools in Uganda. It was hand-written since the teachers only 4 teachers out of the 33 teachers had basic computer literacy skills. The facilitators of the Workshop were officials from the National Curriculum Development Center (NCDC), an Institution under the Ugandan Ministry of Education and Sports.
548
Lating, Kucel & Trojer
The teachers were guided by the facilitators how they were supposed to structure the content logically. Each topic in Physics or Mathematics was divided into sub-topics, lessons and a number of teaching units. Content was created for all the teaching units under a particular topic. A teaching unit would have a title and a text of 100-300 words and the subject teacher was to indicate where an accompanying illustration, graphic, activity by the student or animation should appear. This would help the programmer/producer to include the interactivity in the CD-ROM. Common interactivities that the teachers were told to use include animations, text entries, multiple choices, drag and drop; match the correct answer, true or false statements, comparing answers and zooming. The content was written in such a way that it encourages higher-order thinking skills (based on problem-solving approach). Activities must encourage the learner to think and play a role. The student discovers the concept/ideas herself. She should discover her own answer. And every teaching unit would have three activities to introduce the concept. The first one is to be done by the teacher with little student involvement. The second activity was to ensure that the student is involved 50% (i.e. half way). The third activity was to be exclusively done by the student to discover the answer. The learner would have the impression that she is being taught but the teacher is not there.
3.2 Interface Design Interface was created to establish a seamless navigation among the multimedia content. The design of the interface was based on the following principles: 9 Audience." Advanced level female secondary school students. 9 Interface consistency." As much as possible the screens have identical layout, terminology, prompts, menus, colours, and fonts. Navigational aids are situated in the same locations on each screen so that the students will not have to search for them. Colour schemes and type size and fonts are consistent for all screens. 9 Ease o f use and learning." The students should have minimum training, if any, in the use of the interface. 9 Efficiency." The students should navigate through the materials relatively efficiently. 9 Colourful and meaningful navigational tools." All the navigational aids and buttons are clearly and vividly marked to be distinguished from surrounding objects. 9 Information f e e d b a c k . For every student action, the interface provides a feedback. For example, the buttons are activated when the mouse pointer is moved over them, or they are clicked, and the student is provided with immediate feedback. 9 Error prevention. the system is error proof. Objects are only activated on the student's command. 9 'Previous" a n d 'Next' buttons make it easier to browse through the entire lesson. Other buttons on the screen are Chapter for proceeding to the introductory screen of a particular chapter, Contents for proceeding to the contents screen, etc.
549
International Conference on Advances in Engineering and Technology
4.0 PRODUCTION OF THE INTERACTIVE MULTIMEDIA CD ROMS 4.1 Hardware Requirements For the production of the CD-ROMs DELL computer with multimedia capability was purchased. It has 512 MB of RAM. 80 GB hard disk, Pentium| 4 CPU 2.60 GHz and Integrated Audio and Video cards. The monitor is 17". It has a CD/DVD/RW/R unit with a writing speed of 32X.
4.2 Software Adobe Photoshop and Corel PhotoPaint are used for resizing images and creating graphical interface elements. Macromedia Flash MX 2004, and Macromedia Flash Player 7 and Macromedia Flash HomeSuite + were installed for purposes of developing animations. Roxio Easy CD Creator 5 and Burn CD&DVDs with Roxio are some software applications that we use for recording CDs. Macromedia Dreamweaver 4 is used for multimedia asset development like graphics, animations, video and sound. 4.3 'Trade offs' Multimedia assets consume a huge amount of computer storage space and their inclusion in the CD-ROMs were restricted to only the essential parts. The hardware and software requirements for them are prohibitive. For example, a video clip requires 3 MB per minute of disc space. Video clips were restricted. And topics that are relatively straightforward will be delivered in the conventional way just like those that do not have any significant multimedia input. There is no point having a textbook on a CD ROM. Iskander et al (1996) call this a "tradeoff" when developing CD ROMs. Focus is placed on visualization of abstract and highly mathematical topics, interactive participation in laboratory experiments and on presentation of practical applications.
Secondly, multimedia authoring is enormously time consuming according to Hinchliffe's (2002) experience in production of training CD ROMs. He notes that a 30 second animation, which might occupy the user's attention for two or three minutes only, might take an hour or two to put together. Animations are good interactive methods but they also require a lot of bandwidth. All these considerations were made during development of the interactive multimedia CD-ROMs for A-level Physics and Mathematics. 4.4 Production of the CD-ROMs
Digitalization of the Local Content Created The stages in the production of the CD-ROMs are shown in table 1. The main stages of the CD-ROM production process included digitalization, authoring, multimedia integration, production of test copies, producing master copies and mass production.
550
Lating, Kucel & Trojer
Arrangements were made with some secretaries and the manual hand-written copies of the local content created were converted into electronic copies. The teachers could not do it because of very low computer skills. The scanner we have does not have the capability of recognizing hand written letters and numbers. It is a digital flatbed scanner with photo-quality results of 2400dpi and 48-bit colour. This made the exercise of digitalization very tedious. Table l: Timeline for the production of the CD-ROMs Activity Responsibility
Expected Completion Date
Status
Subject teachers in a Workshop Researcher, Secretaries
19.9.2005
Done
2.11.2005
Done
Researcher
On-going 15.1.2006 10.2.2006
Done
30.1.2006
Done
February, 2006
Done
Testing
Researcher, Multimedia Programmer Researcher, Multimedia Programmer Researcher, Multimedia Programmer Researcher, Multimedia Programmer $6 Students of Muni, Ediofe
Ongoing Done
March, 2006
CD covers/insert design Production (500 copies) Marketing in other schools
Researcher Researcher Researcher
March, 2006 April, 2006 From April, 2006
In progress Done
Local content creation Digitization of content
Copyright permission for some resources Interface Design Programming Narration, Music Writing/Editing
For the completed CD ROMs there will be small pamphlets or inserts included in their jewel boxes. The pamphlets will contain information about the CDs and how to use them. They will also have copyright statements and hardware requirements. Finally, master CDs will be burnt along with the other documents for mass manufacture and possible use in other rural secondary schools. 5.0 CONCLUSIONS In Hybrid E-Learning for poor rural secondary schools, the course delivery platform to be used is the CD-ROM. The main advantages of CD-ROMs are big memory capacity, multimedia applications and popularity due to its standardization. Web-based (Internet) delivery options are inherently not suitable for this purpose and context of rural communities.
551
International Conference on Advances in Engineering and Technology
In designing and developing interactive multimedia CD-ROMs for advanced -level Physics and Mathematics subjects the main objective was to demonstrate that the technology could be viable for use in poor rural schools that cannot afford to build science laboratories, libraries while at the same time cannot attract committed and qualified teachers. The CD-ROMs are being tested in the two girls' schools in the rural District of Arua, 500 kms from Kampala, the capital city of Uganda. The two girls' schools are Muni and Ediofe. The aim is to encourage more female students to pursue engineering career later after improving their performances at the secondary level of education. This will help to narrow the gender gap currently exists in the engineering profession in Uganda. In a wider context, the CD-ROMs will contribute towards the achievement of Millennium Development Goal No. 3 which specifically deals with women empowerment. In the design and development of the CDs, multistakeholder participatory approach was used. The content was created based on the local curriculum by the subject teachers in the District of Arua and was facilitated by experts from the National Curriculum Development Center, an Institution in the Ugandan Ministry of Education and Sports. Headteachers of a secondary schools in Arua and Koboko Districts willingly allowed their teachers to attend the content creation Workshop. Hardware and software for the production was procured by Makerere University, Faculty of Technology with financial support by Sida/SAREC. This type of participatory approach is good when handling community, poverty-related issues. The community takes ownership of the process. While designing the multimedia content, the context of the schools where the female students are studying had to be considered. They have earlier versions of refurbished PCs. The multimedia capabilities of such computers may not be very good. They do not have a lot of memory space and the hard disc capacity is also limited. So some 'tradeoffs' were done. Video clips that require a lot of bandwidth were restricted to the remarks of the Dean, Faculty of Technology, Makerere University. Other multimedia assets like graphics, simulations, etc. were also limited to an absolute minimum. Digitalization was the lengthiest process since the content created by teachers was handwritten, thanks to their low computer skills level. The scanner that could have been used did not have the character-recognizing capability. This caused a lot of delays when creating digital copies of the content. The final products are the CD-ROMs for advanced-level Physics and Mathematics. It is the first time locally produced training CDs have been produced for that level of education in Uganda.
552
Lating, Kucel & Trojer
REFERENCES Beheshti, J., Large, A., & Moukdad, H. (2001). Designing and Developing Multimedia CD-
ROMs: Lessons from the Treasures of Islam.http://www.ingentaconnect.com/search/expand?pub=infobike ://mcb/264/2001/00000025/00000004/art0000 !&unc = Hinchliffe, P. (2002). Developing an Interactive Multimedia CD-ROM to support AS Physics. http ://ferl.becta.org.uk/disp!ay.cfm?resID=2937. Iskander, M.F., Catten, J.C., Jameson, R.M., &Rodriguez, A. (n.d). Interactive Multimedia CD-ROMs for Education. www.ieeexplore.ieee.org/ie 13/4276/12342/0573085.pdf Peter Okidi-Lating, SamuelBaker Kucel, & Lena Trojer. Unpublished. Strategiesfor implementinghybrid elearningin rural secondaryschoolin uganda Ranky, P.G., & Ranky, M. F. (1997). Engineering Multimedia CD-ROMs with Internet Support for Educating the Next Generation of Engineers for Emerging Technologies. (A Presentation with and Interactive Multimedia CD-ROM Demonstration). h.ttp://ieeexplore,.ieee.org/xpl/abs_free.j sp? arNumber=616247 Tapia, C., Sapag-Hager, J., Muller, M., Valenzuela, F., & Basualto, C. (2002). Development of an Interactive CD-ROM for Teaching Unit Operations to Pharmacy Students. www.ajpe.org/legacy/pdfs/aj 660312.pdf Woodbury, V. (n.d). CD-ROM: Potential and Practicalities. http://calico,org/joumalarticles/Volume6/vol6-1/Woodbury.pdf Note" All references were retrieved on 28 th November from the respective Websites.
553
International Conference on Advances in Engineering and Technology
ON THE LINKS B E T W E E N THE POTENTIAL E N E R G Y DUE TO A UNIT-POINT CHARGE, THE G E N E R A T I N G FUNCTION AND RODRIGUE'S F O R M U L A FOR L E G E N D R E ' S POLYNOMIALS Sandy S. Tickodri-Togboa, Department of Engineering Mathematics, Makerere University
ABSTRACT Open any textbook of advanced engineering mathematics that has chapters on special functions. Turn to that chapter on Legendre's functions. Two items that you will never fail to find are the generating function and Rodrigue's formula for Legendre polynomials. Invariably, the generating function will be reported as a very effective device that facilitates proofs of numerous interrelationships between Legendre polynomials of various orders and/or their derivatives. Rodrigue's formula, on the other hand, will invariably be introduced as a formula that enables you to generate Legendre polynomials of any order without recourse to the method of separation of variables. Rarely, except perhaps for electrical engineers, will you be told that the two items have their roots traceable to the potential energy of a unit point-charge, situated at some distance from the origin along the z - axis of the Cartesian coordinate system! This paper aims to trace this linkage and demonstrate that this linkage leads to the definition of Legendre polynomials without actually having to solve the traditional Legendre differential equation. It simply happens that the polynomials so defined turn out to satisfy a special case of Legendre's differential equation.
Keywords: Cartesian coordinates, Generating function, Legendre polynomials, Potential energy, Unit-point charge, Rodrigue's formula, Taylor series expansion, Spherical polar coordinates
1.0 INTRODUCTION Polynomials are popularly defined as functions obtained by means of the basic arithmetic operations of addition and multiplication 1 on the constant and the linear functions. However, there is at least one set of polynomials, the Legendre polynomials, that can be obtained by other means, namely, by means of the calculus operation of differentiation. These polynomials feature very prominently in scientific and engineering applications, particularly in expansions of functions in series, extrapolations and in connection with solutions of boundary-value problems in spherical domains by the method of separation of variables. They also feature in fields treated by finite element methods. 1Here we regard subtraction simplyas the inverse of addition, and division,where this admissible, as the inverse of multiplication.
554
Tickodri-Togboa
In most treatments of Legendre polynomials, two items that never escape our attention are the generating function and Rodrigue's formula for Legendre polynomials. Invariably, the generating function is almost always introduced as a very effective device that facilitates proofs of numerous interrelationships between Legendre polynomials of various orders and/or their derivatives. Rodrigue's formula, on the other hand, is introduced as a formula that enables one to generate Legendre polynomials of any order without recourse to series solutions of one of the equations that results from application of the method of separation of variables to Laplace's equation in spherical coordinates, hence avoiding the need to specify some arbitrary coefficients in special circumstances. Rarely, except perhaps for electrical engineers, is it ever stated that the two items have their roots traceable to the potential energy of a unit point-charge, displaced from the origin along the z - axis of the Cartesian coordinate system! This paper endeavours to trace the links between the potential energy of a unit point-charge situated on the z - a x i s and displaced a distance h from the origin, the generating function of Legendre's polynomials and Rodrigues's formula for Legendre polynomials 2. 2.0 P O T E N T I A L E N E R G Y OF A P O I N T - C H A R G E D I S P L A C E D F R O M ORIGIN Consider the potential energy of a point charge situated on the z - axis at a distance h from the origin, as depicted in the Figure 1 below. Let this point charge initially be q Coulombs. Its location in terms of Cartesian coordinate system variables is then of course (0,0,h).
We now wish to examine the potential energy of this point charge at the point P(x, y , z ) , whose location in terms of the spherical polar coordinate variables is (r, q~,0). These spherical coordinate system variables are related to the rectangular Cartesian coordinate system variables through the expressions x = r cos (,osin O, y = r sin (p sin O, z = r cos O.
(1)
The reverse relationships of the Cartesian coordinate system variables to the spherical polar coordinates are of course provided by the expressions 2
r -x
2
2
"~
+ y + z ~,
(p-arctan y-, X
O-
Z
.(2)
~/X 2 + y2 + Z:
According to Coulomb's law, the potential energy at the point P(x, y, z ) , due to the point charge, while it is located at the origin is given by q U (x, y, z) - x/x2 + y2 + z 2
(3)
2 Note that in the context of gravitational potential, analogous relationships can equally be established by supplanting point-charges with point-masses.
555
International Conference on Advances in Engineering and Technology
With the point charge displaced to a distance h from the origin in the positive direction o f z - axis, the new expression for the corresponding potential energy is then given by
U(x,y,z,h) -
q . x/x 2 + y2 + ( z - h) 2
(4)
f Z P(x, y, z) = P(r, (p, O)
(0,0,h)
i
h Y
Figure 1: Location of a point charge Considering the case of q = 1 Coulomb, the potential energy at the point P(x, y,z) due to a unit point-change located at the point (0, 0, h) is then specified by the expression 1 U ( x , y, z, h) = x/'x~ + y~ + (z - t7) ~
(5)
We would, however, like to express this potential energy in terms of the potential energy of a unit point-charge positioned at the origin and in terms of any changes that the potential energy may have undergone as a consequence of the displacement. For this purpose, we invoke Taylor series expansion to write
556
Tickodri-Togboa
hu]
U(x, y, ~, h) - u(x, y, ~ + 7,. - ~
~=o
h3 + 2! 0h 2 h=O
h4 I + I +L 3! ~h 3 Ih=0 4! ~h 4 Ih=O
(6)
The differential coefficients in this expansion can then of course be obtained through successive differentiations of the potential energy expression (5) with respect to the displacement h and evaluations of the limits as h --~ 0 (as we approach the origin). As a result, we find that the derivatives are given by the expressions OU l,=o --
Oh
0 2Uj
z
Ex 2 + y 2 _+_Z2 13/2 '
__ 2_72 _ (X2 + y2 ) --
/2
7
03U h=O -- 6_73 - 9(x2 + y2)z
Ex2
2+ +y2+)z'
04U~h-7t,_-o - 24z4 - 7 2 ( x 2 I x 2 y z z i99(x 22
+ y2 )2
and so on. Recalling from expressions (1) and (2) that x z + yZ +_72 _ r z ' so that X2 -nt- y2 _ r 2 - z 2 , and z - r cos 0, these derivatives may be recast in the alternative forms z cos ~ 0 Oh h=0 F2
0U
020h 2Uh=o -- 3 COS3r30 - 1 03U~h3h=0 ----3 (5 COS3 F40-3 COS 0)
04Uh= 0 0 4h _- 35cos 4 0 - z30 5 r cos 0 + 3 and so on.
557
International Conference on Advances in Engineering and Technology
1
By multiplying the n - th derivative by -n-)-, for n - 1, 2, 3, 4 , K , we find that the coefficients in the potential energy expansion (6) are given by cosO
1 c~U
r2
1! Oh h--o 1 ~2UI 2V c3h2
9
h=0
____13COS3 O-- 1 -2 r3 '
1 c33UI __15cos 3 0 - 3 c o s O 3! c~h 3 h : o - 2 r7 ' 1 0 4 U h:0 -- 1 3 5 cos4 O - 30 cos 20 + 3 ,K 4! c3h 4 8 r and so on. On plugging these results in expression (6), we have h
u(~, o, o, h~: U(x, y, z~ + [cos ~ +
3cos 2 0 - 1
E
h2
2
E35 c O S 4 0 3 h0 C O4S 2 08- 3 1 - 7 +
5COS3 0 - 3 c o s O
-7- +
2
[63c~176176h 5
as the desired expression for the potential energy at the point
h3
J 8
7 +
(7) ~ +L
P(r,~b,O)due to the unit
point-charge at the location (0, 0, h). Can it be expressed differently? This is the question we wish to address next 9 3.0 R O D R I G U E ' S F O R M U L A A N D T H E G E N E R A T I N G F U N C T I O N
First, we note that the terms in the square brackets appear to submit to some pattern of consistency, in that the powers of the cosine terms in a given pair of square brackets are either all even or all odd! Furthermore, we can easily verify that the first few of them can be expressed alternatively as
\cos\theta = \frac{1}{2}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)\left(\cos^2\theta - 1\right),

\frac{3\cos^2\theta - 1}{2} = \frac{1}{8}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^2\left(\cos^2\theta - 1\right)^2,

\frac{5\cos^3\theta - 3\cos\theta}{2} = \frac{1}{48}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^3\left(\cos^2\theta - 1\right)^3,

\frac{35\cos^4\theta - 30\cos^2\theta + 3}{8} = \frac{1}{384}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^4\left(\cos^2\theta - 1\right)^4,

and so on. Moreover, it is also easily verified that the bracketed coefficients are actually captured within the general differentiation formula

\frac{1}{2^n n!}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^n\left(\cos^2\theta - 1\right)^n.   (8)

Accordingly, therefore, the potential energy expression (6) may be captured in the forms
U(r, \varphi, \theta, h) = \sum_{n=0}^{\infty}\left[\frac{1}{2^n n!}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^n\left(\cos^2\theta - 1\right)^n\right]\frac{h^n}{r^{n+1}}   (9)

or

U(r, \varphi, \theta, h) = \sum_{n=0}^{\infty}\left[\frac{(-1)^n}{2^n n!}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^n \sin^{2n}\theta\right]\frac{h^n}{r^{n+1}}.   (10)

What is even more interesting is that if we set

\cos\theta = \xi,   (11)

so that

-\sin\theta\, d\theta = d\xi,   (12)

both potential energy expressions (9) and (10) may be collapsed into the single expression

U(r, \varphi, \theta, h) = \sum_{n=0}^{\infty}\left[\frac{1}{2^n n!}\frac{d^n}{d\xi^n}\left(\xi^2 - 1\right)^n\right]\frac{h^n}{r^{n+1}}.   (13)
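The operator form used in expressions (8) through (13) can also be verified directly. The sympy sketch below is ours; it checks, for the first few n, that applying the operator ((-1/sin theta) d/dtheta) n times to (cos^2 theta - 1)^n, with the 1/(2^n n!) prefactor, reproduces the classical Legendre polynomials.

```python
# Check that the differentiation-operator form matches P_n(cos theta).
import sympy as sp

theta = sp.symbols('theta')

def operator_form(n):
    """(1/(2^n n!)) * ((-1/sin theta) d/dtheta)^n applied to (cos^2 theta - 1)^n."""
    expr = (sp.cos(theta)**2 - 1)**n
    for _ in range(n):
        expr = sp.diff(expr, theta) / (-sp.sin(theta))
    return sp.simplify(expr / (2**n * sp.factorial(n)))

for n in range(5):
    lhs = operator_form(n)
    rhs = sp.legendre(n, sp.cos(theta))   # built-in Legendre polynomial
    assert sp.simplify(lhs - rhs) == 0
print("operator form matches P_n(cos theta) for n = 0..4")
```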
An alternative path to expressing the potential energy at the point P(r, \varphi, \theta) due to the point-charge at the location (0, 0, h), in terms of the potential energy of the unit point-charge located at the origin and of any changes that the potential energy may have undergone as a consequence of the displacement, is to revert to equation (5) and subject its right-hand side to the usual binomial series expansion. First, though, we need to note that by expanding the square term (z - h)^2, the equation may be re-written as

U(x, y, z, h) = \frac{1}{\sqrt{x^2 + y^2 + z^2 - 2zh + h^2}}.

On migrating to the spherical polar coordinates (r, \varphi, \theta), this expression becomes

U(r, \varphi, \theta, h) = \frac{1}{\sqrt{r^2 - 2rh\cos\theta + h^2}}.   (14)
By extracting the factor r outside the square root sign in the denominator, it may be recast as

U(r, \varphi, \theta, h) = \frac{1}{r}\,\frac{1}{\sqrt{1 - 2(h/r)\cos\theta + (h/r)^2}}.   (15)

By setting \frac{h}{r} = u and recalling that \cos\theta = \xi, we may indeed re-write it in the simpler-looking form

U(r, \varphi, \theta, h) = \frac{1}{r\sqrt{1 - 2u\xi + u^2}}.   (16)

On re-writing this expression in the form

U(r, \varphi, \theta, h) = \frac{1}{r}\left[1 - \left(2u\xi - u^2\right)\right]^{-1/2},   (17)

and invoking the standard binomial expansion theorem to expand the component \left[1 - \left(2u\xi - u^2\right)\right]^{-1/2} in powers of \left(2u\xi - u^2\right), we get

\left[1 - \left(2u\xi - u^2\right)\right]^{-1/2} = 1 + \frac{1}{2}\left(2u\xi - u^2\right) + \frac{3}{8}\left(2u\xi - u^2\right)^2 + \frac{5}{16}\left(2u\xi - u^2\right)^3 + \frac{35}{128}\left(2u\xi - u^2\right)^4 + \cdots   (18)
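A quick numerical check of expansion (18) can be reassuring. The snippet below is ours and is not from the paper; it compares the truncated binomial series against the closed form at a sample point with |u| < 1.

```python
# Numeric check of expansion (18) at a sample point (u, xi).
from math import isclose

u, xi = 0.2, 0.6
t = 2*u*xi - u**2

closed = (1 - t)**(-0.5)
# Binomial coefficients of (1 - t)^(-1/2): 1, 1/2, 3/8, 5/16, 35/128, ...
series = 1 + t/2 + 3*t**2/8 + 5*t**3/16 + 35*t**4/128

print(closed, series)              # agree to about 1e-4 for these values
assert isclose(closed, series, rel_tol=1e-3)
```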
After multiplying out the powers of \left(2u\xi - u^2\right) higher than unity and reorganising the terms in this expansion through collection of the coefficients of the various powers of u, we find that

\left[1 - \left(2u\xi - u^2\right)\right]^{-1/2} = \sum_{n=0}^{\infty} P_n(\xi)\, u^n,   (19)

where P_n(\xi), n = 0, 1, 2, 3, \ldots, turn out to be Legendre polynomials. In other words, the expansion of the function \left[1 - \left(2u\xi - u^2\right)\right]^{-1/2} in binomial series thus "generates" Legendre polynomials and provides the basis for calling \left[1 - 2u\xi + u^2\right]^{-1/2} the generating function of Legendre polynomials. With result (19), the potential energy expression (17) thus becomes
U(r, \varphi, \theta, h) = \frac{1}{r}\sum_{n=0}^{\infty} P_n(\xi)\, u^n = \sum_{n=0}^{\infty} P_n(\xi)\, \frac{h^n}{r^{n+1}}.   (20)

Upon comparing statements (13) and (20), it is not difficult to confirm the definition

P_n(\xi) = \frac{1}{2^n n!}\frac{d^n}{d\xi^n}\left(\xi^2 - 1\right)^n, \qquad n = 0, 1, 2, 3, \ldots,   (21)
of Legendre's polynomials, known as Rodrigues' formula. It is thus evident that Legendre polynomials can be obtained through operations other than the "pure" arithmetic operations of addition and multiplication. Consideration of the potential energy of a unit point-charge in a spherical domain setting seems to provide the link.

4.0 CONCLUSION

We have thus traced the links between the potential energy at the point P(r, \varphi, \theta) due to a unit point-charge at the location (0, 0, h), the definition of Legendre polynomials by means of Rodrigues' formula, and the generating function for the same polynomials. Hopefully, we have dispelled the mystery about having to obtain Legendre's polynomials through the series solution of the corresponding Legendre differential equation.

REFERENCES
Jackson, D. (1941), Fourier Series and Orthogonal Polynomials, Dover Publications, Inc., New York.
Davis, H. F. (1963), Fourier Series and Orthogonal Functions, Dover Publications, Inc., New York.
Kellogg, O. D. (1954), Foundations of Potential Theory, Dover Publications, Inc., New York.
VIRTUAL SCHOOLS USING LOCOLMS TO ENHANCE LEARNING IN THE LEAST DEVELOPED COUNTRIES

Ngarambe Donart, Kigali Institute of Science and Technology, Department of CEIT, Kigali, Rwanda
Ntayombya Phocus, UNICEF, Kigali, Rwanda
Shrivastava Manish, Kigali Institute of Science and Technology, Department of CEIT, Kigali, Rwanda
ABSTRACT

The traditional classroom mode of education has long struggled to cope with low educational budgets, especially among the Least Developed Countries (LDCs), in the face of disproportionate, exponential population growth, which demands increases both in classroom space and in the training of educators. With the inception of ICT technologies, both constraints are being overcome because it has now become possible that: (1) a few educators can serve a large geographical area, or even an entire global area, thus saving on the budgets for the training of educators; (2) learners can learn at any time and from anywhere, thus making classroom space a less compulsory learning factor. The paper discusses the Local College Learning Management System (LoColms) for the cheap delivery of rich full-motion video contents, usually of prohibitive cost, to Virtual Schools in poor communities. We have developed LoColms, a learning management system that can support a scenario of Virtual Schools whereby learners can watch educators' demonstrations, which is a much better learning environment. This solution also addresses the cost issues that come with video resource delivery, which is achieved by employing Proxy Cache Servers, Multimedia Stream Storage Servers and Point-to-Point Protocol (PPP) technologies over the ubiquitous Public Switched Telephone Network (PSTN), which in most of the least developed countries is now fully digital and has sufficient bandwidth. With this approach LDC governments can save in two major areas: 1) on the budgets involved in training and paying many teachers, and on procurement of scholastic materials such as textbooks, chalk, etc.; for example, in some cases only one computer would be required per virtual school classroom; 2) fewer classroom spaces would be required, as classes can run 24 hours every day, since no educator would be physically required and the contents would be asynchronously accessed from the Proxy Cache server. It
provides a sustainable solution, as it aims at empowering local institutions to prepare and disseminate contents that are of local relevance.

Key words: Virtual School, LoColms, Proxy Cache Server, PPP
1.0 INTRODUCTION

Several regional and international declarations have emphasized that education should be available to all. The Universal Declaration of Human Rights (UDHR, 1948) recognized the freedom of education for all among the human rights that need to be protected: "Everyone has the right to education". At the World Conference on Education For All (WCEFA, 1990), held in Jomtien, Thailand, the four conveners of the conference (UNESCO, UNICEF, UNDP, and the World Bank) and the WCEFA participants (155 governments, 33 intergovernmental bodies, and 125 NGOs) adopted an initiative intended to stimulate international commitment to a new and broader vision of basic education, to "meet the basic learning needs of all, to equip people with the knowledge, skills, values, attitudes they need to live in dignity, to continue learning, to improve their own lives, and to contribute to the development of their communities and nations". In the Beirut Declaration on Higher Education in the Arab States (BDOHEIAS, 1998), it was also acknowledged that education is a useful tool for economic growth in the face of globalisation and for regional peaceful coexistence. The LDCs represent the poorest and weakest segment of the international community, comprising about 49 of the poorest countries with a combined population of about 1.2 billion people. These countries are characterized by their exposure to a series of vulnerabilities and constraints such as limited human, institutional and productive capacity; acute susceptibility to external economic shocks, natural and man-made disasters and communicable diseases; limited access to education, health and other social services and to natural resources; poor infrastructure; and lack of access to information and communication technologies (UNCOLDC, 2001). The emergence of distance education provides an important way to address these concerns (Gundani et al, 1997). The basic barriers to distance education in these countries are the lack of: 1) resources needed for meaningful development and sustenance of technology-based learning; 2) infrastructures (including information and communication hardware systems) to support modern technologies in least developed and/or low-technology countries; and 3) recurrent funding necessary to acquire or develop appropriate software and courseware on a continuous basis, and to maintain, service and replace the equipment. In an attempt to find a solution that is relevant to the situation in the LDCs, we have developed a web-based distance educational system, the Local Colleges Learning Management
System (LoColms), whose primary objective is to empower the local educational institutions to improve their educational capacities in qualitative and quantitative terms. The rationale is to take advantage of what already exists in these countries, like the PSTN's well-established infrastructure, which, presumably, has already been digitally upgraded for ease of data communication, and the local educational institutions. The key technologies supporting these LoColms-supported Virtual Schools are the PSTN (a high Quality of Service (QoS) infrastructure), the Point-to-Point Protocol (PPP), and Proxy Caches (ProCa). The choice of the PSTN is to eliminate the duplication of communication networks (such as the Internet, a packet-switched network that has been the main infrastructure of the Web), and the choice to utilize the local educational institutions is in order that these can be empowered by owning the process. The PPP is used to provide a direct TCP/IP link between the local educational institutions and the Virtual Schools, while the Proxy Caches are to minimize communication traffic and costs. The choice of optical fiber cable or Digital Subscriber Line (xDSL) is to provide a broadband environment over the ordinary telephone subscriber lines, to deliver very high quality shareable multimedia study materials (video contents). The LoColms seeks to address financial and administrative problems. Its aim is to integrate distance education into the mainstream educational system of individual colleges, to avoid the quagmire of many players, and to achieve both cost-effectiveness ('cheapness' of educational provision, usually expressed in terms of per-student costs) and cost-efficiency (the optimal balance between cost, student numbers, and educational quality) for each college on the system. In considering sound educational investment, it is essential to distinguish effectiveness from efficiency. The main advantage of the LoColms is that it addresses the financial considerations, in that no major government funding is required. Probably the governments would only come in with policies to ensure the smooth operation of the educational system over commercially run Virtual Schools, as well as providing preferential treatment to the operators of the Virtual Schools. Also, since the remote students would not require any of the university's facilities for accommodation, feeding, healthcare, classrooms, library, etc., the cost of tuition should necessarily be reasonably low. The costs over the telephone system are almost eliminated by the use of the Proxy Caches, which are normally used to reduce latency and traffic of the study contents over communication networks. This paper provides a scenario of a Virtual Schools solution to improve the per capita children educational deficit in the low income countries. In this way the pressure of training great numbers of trainers on the side of these governments is eased, the necessity of providing specified locations and times for teaching/learning becomes less mandatory, and learners gain access to the more learning-friendly mode (full-motion video), either from Virtual School centers established in existing schools and/or in established study centers. The architecture of such services will consist of Multimedia Storage
Servers that are connected to client sites via high-speed networks. However, in this paper we opt for asynchronous delivery of media quanta (a database-based resource delivery) in order to cut costs even further.

2.0 CHOICE OF TECHNOLOGY FOR THE LOCOLMS-BASED VIRTUAL SCHOOLS
Studies reveal that education in the LDCs is still far from being a right and continues to be a privilege for most people, due to a couple of constraints:

• Budgetary constraint: According to UNESCO's "Education and Training in the Least Developed Countries" (UETLDC, 1995), the majority of educational systems in the developing countries, especially the LDCs, are confronted with major setbacks, mainly due to the use of inadequate, inappropriate, often inefficient and almost always costly educational strategies. The links between cost-benefits, cost-efficiency, and cost-effectiveness remain weak in most of these countries because of the high costs of educational materials and services, burdensome procedures and mechanisms for educational spending, and the use of inappropriate technologies and educational methods.

• Unsustainable supporting technologies constraint: According to Hilary Perraton et al (2000), there cannot be a practical substitute for primary schools, as children need to learn within a social environment. However, that paper observes that technologies may play a part in meeting the needs of children or adults who cannot get to school or a conventional class, and it makes sense to look at the technologies together, from print to broadcasting to computers. Even though the international community is showing great concern and doing everything possible to improve the educational situation in the LDCs, by for example the World Bank supporting Computer for Linking Schools projects (Hilary Perraton et al, 2000), USAID supporting Interactive Radio Instruction projects, UNESCO supporting Computer for Teacher Training projects, and many other funding agencies and NGOs supporting various other educational projects, these noble efforts can obviously only offer a temporary solution. This can be deduced from the example of an ambitious attempt to use technology to raise the quality of basic education and widen access through the Television Broadcast Project in Côte d'Ivoire (Hilary Perraton et al, 2000), a program launched in 1971 with the intention of reaching 21 000 1st-grade children in the first year, with the other 5 grades added year by year; by 1975 it was reaching 235 000 children, but in 1981 the government of Côte d'Ivoire closed it down. We suppose the reason for the closure was probably that the government realized it would not be able to take over the task in the event that the funding agencies withdrew their role.

Since the inception of the digitization of the PSTN system, data communication has been possible over the traditional voice communication network, and its traditional bandwidth limitations no longer exist. For instance, with the use of data compression techniques,
the most modest bandwidths required are as follows: 64 Kbps for video conferencing applications, 2 Mbps for full-motion broadcast video applications, and 19 Mbps for high-definition television applications. The data rates might continue to scale down substantially in future with further improvements in compression. To date, inter-switching-station data rates in gigabits per second are possible over the PSTN using SONET/SDH technologies; and about 2 Mbps and STS-1 are achievable by ISDN (H12) and xDSL, respectively, over the telephone subscriber loop twisted-pair copper cables. Over the PSTN, we utilize the Point-to-Point Protocol (PPP), the protocol that supports TCP/IP over serial communication lines where routers and gateways are not used, and the ProCa, which regulate communication traffic and communication costs over the PSTN. PPP provides a standard method for transporting multi-protocol datagrams over point-to-point links. The PPP encapsulation is suitable over SONET/SDH, DSL, and ISDN links, since by definition these are point-to-point circuits. The system employs ProCa to provide temporary storage of web objects (HTML pages, images and files) for later retrieval. The basic Internet client-server model (where clients connect directly to servers) wastes resources, especially for the often-repeated transfer of highly popular information. With a proxy cache, by contrast, popular objects are requested from the origin server only once and served to a large number of clients. Because ProCa usually have a large number of users behind them, they are very good at reducing latency and traffic: 1) they reduce latency, because a request satisfied from the cache (which is closer to the client) takes less time to complete and display; 2) they reduce traffic, because each object is fetched from the server only once, reducing the amount of bandwidth used by a client. This saves money if the client is paying by traffic, and keeps bandwidth requirements lower and more manageable. Freshness and validation of the contents are controlled (using the common Last-Modified and If-Modified-Since validators) to avoid having to download an entire object when a copy is already held locally but it is not certain whether it is still fresh.
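The freshness check just described is ordinary HTTP conditional revalidation. The following sketch is ours, using only the Python standard library; the URL and date in the usage comment are hypothetical.

```python
# Revalidate a cached copy with an If-Modified-Since conditional request,
# instead of re-downloading the whole object.
import urllib.request
import urllib.error

def is_still_fresh(url: str, cached_last_modified: str) -> bool:
    """Return True if the server answers 304 Not Modified for the cached copy."""
    req = urllib.request.Request(url)
    req.add_header("If-Modified-Since", cached_last_modified)
    try:
        with urllib.request.urlopen(req):
            # 200 with a body: the object changed; the cache should be refreshed.
            return False
    except urllib.error.HTTPError as err:
        # urlopen raises HTTPError for 304; the cached copy can be served as is.
        return err.code == 304

# Example (hypothetical URL and date):
# fresh = is_still_fresh("http://college.example/course/unit1.mpg",
#                        "Mon, 22 Aug 2005 08:00:00 GMT")
```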
3.0 THE LOCOLMS-BASED VIRTUAL SCHOOLS

A Virtual School is based on the concept of the networked learning environment and the technical possibilities offered by new information and communication technology, and is able to deal with all the tasks of a school without the need for a physical school building. A virtual school thus does not exist, according to an ontological analysis, as a concrete building with classrooms, office rooms, teachers, other staff, or pupils. Virtual school is a logical extension of the use of computers in teaching (Tella, 1995). We can thus regard virtual school, defined narrowly (Illich, 1972), as a school without a building but still connected to society. Independence of time and place and historical neutrality are central to the concept of virtual school. Virtual school can work as a virtual extension of ordinary school or classroom activity.
If we regard virtual school as a symbiotic extension of ordinary school, part of the activities of the physical school may be moved to the virtual school and carried out there with the aid of information and communication technologies. The LoColms system provides virtuality in the educational systems while sticking to the local colleges' educational standards. The system allows for customization for the sake of protecting the autonomy of the educational culture of individual nations. It primarily encourages full involvement of the local resources, such as lecturers and their teaching materials. As has been indicated earlier, the emergence of distance education provides an important way of addressing the precarious situation of education in the LDCs, but for reasons ranging from economic disadvantages to the level of sophistication of Information Technology (IT) systems, technology-supported education might be slow to have an impact on these developing nations. This system is not merely aiming at raising enrollment figures, even though that apparently is the ultimate objective, but also at improving educational conditions in a given country. The objective is to help local educational institutions to increase schools' enrollment capacities without requiring any major government or foreign donor funding, because it is potentially a commercially viable system for the local educational institutions as well as the other parties involved in its implementation. The size of colleges will virtually increase by remote enrollment, and the limited resources, such as qualified teachers and study materials, can be tremendously stretched and made accessible to all, all the time. The arrangement of LoColms-based Virtual Study Centers (VSCs) is shown in Fig. 1.
[Fig. 1: VSCs linked to Local Educational Colleges over the PSTN by PPP]

LoColms supports asynchronous multimedia (mainly video-recorded) instruction transferred over the PSTN. The system utilizes the proxy caches to store the downloaded study contents at the learners' end, yet keeps the education providers constantly informed
of the online learners' progress. Basically, the system addresses two asynchronous-instruction pedagogical concerns: 1) how to keep track of learners' attendance; and 2) how to provide an online support mechanism for online evaluation. The purpose of organizing learners on the LoColms-supported virtual schools into virtual school study centers is to enable the use of broadband media connections and easy invigilation of the online examinations. On making a PPP dial-up connection to the respective educational institution, the system verifies the password for authentication. This also allows the server to be kept informed each time an online learner reports for a lesson. After the logon onto the system, when the PPP dial-up connection is established, the client side guides the remote scholar through the login process, and when the LoColms client side has gathered all the information about the student's intentions, such as the college, the year of study, the program (or major), and the subject of study, the LoColms server is contacted for: 1) password and payment verification; and 2) provision of the required service. These are handled by the client side of the LoColms application, so that the request to the LoColms server is made only once. After the study resources from the college database servers have been downloaded by individual learners, they are temporarily stored in the VSC's ProCa for subsequent learners, until they are replaced by successive course packages through automatic prefetch according to the Course Sequencing Prerequisites and Completion Requirements. It is highly hoped that many courses would be shared, although different remote learners may be studying different Assignment Units (AUs) of the same course, according to the Course Structure Format (CSF) of each course (SCORM 1.0, 2000). Each time a learner is issued with the current Assignment Unit (AU) according to the curricular taxonomy, it should include an exercise and frequently asked questions (FAQ) for that AU, and each time the student has finished the AU's exercise, the completed work is uploaded to the system server. If the student's intention is to study, the LoColms will first check to make sure these contents are not already saved in the ProCa. If the course contents are not in the ProCa, they will be downloaded from the university LoColms server, and then the system will disconnect the PSTN transmission channel. The learner(s) carry on learning from the ProCa of the VSC's LAN. In other words, the telephone line is only in use during login, for administrative procedures, and for downloading only when the content does not already exist in the ProCa; a sketch of this cache-first retrieval rule is given below. The system mainly uses video materials of the recorded class sessions saved on the LoColms servers of the university, so broadband links (optical fiber or DSL) between the VSCs' LANs, the local educational institutions' LANs, and the telephone central office are a must; this is a much better method of online learning than the textual or audio modes. The Course Structure Format (CSF) and course "packaging" in the LoColms are emulated in the ProCa, although the ProCa requests contents by block units of a topic of the subject being studied at a time, because the contents are big video-recorded class-session files. The elements block, Assignment Unit au, and objective will satisfy the prerequisites and
completionReq of the CSF. The learners at the study centers will be served with topic assignment units (TAUs) according to the prerequisites and completionReq procedure (TAU1 & TAU2 & ... & TAUx), studied sequentially in the order in which the units were taught at the physical school, with an after-TAU exercise to mark the completion of the TAU if attempted after a period of 60 minutes; a finished TAU is recorded in the LoColms server against the topic of the given subject. The application consists of two servers, the ProCa server and the LoColms server, as in Fig. 2; Fig. 3 shows a screen shot of the study procedure.
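The cache-first retrieval rule described above can be sketched as follows. This is our illustration, not the authors' implementation; the directory name, function names and the caller-supplied download callback are all hypothetical.

```python
# Serve course contents from the VSC's ProCa when present; dial out over the
# PSTN only on a cache miss, then disconnect after the transfer.
import os

PROCA_DIR = "/var/proca/courses"        # hypothetical local cache directory

def fetch_assignment_unit(course_id: str, au_id: str, download) -> str:
    """Return a local path to the AU's package, downloading only on a miss."""
    local_path = os.path.join(PROCA_DIR, course_id, au_id + ".pkg")
    if os.path.exists(local_path):
        return local_path               # cache hit: no telephone-line traffic
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    download(course_id, au_id, local_path)   # PPP dial-up used only here
    # ... the PSTN channel would be disconnected once the transfer completes ...
    return local_path
```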
Fig. 2" system architecture *'~ ~:~:: ~!~ :~::::~:~:.i:~.; ...... ................. .~:.~ .: :~: :;;:~..~: e : ~ : : . ~ : . N~:.;..; ~::~:.NN;N:::,~:::.~ ~. . . . . . . . . . . . . . . . . . .
~
..
: ~: :::;!111::~,~ ~ ~ :::: ;;;~-r
:: . . . . . . . . ~ : ~ .~ :~ ~
~i~
~::::~i~
~
!~ ~!:,
: ;
i:::: :::: : . . . : ..... : : : : ::: ... ~:-. ::.. ::::
Fig.3. Student study procedure screen shot
4.0 CONCLUSION

In this paper we discussed the Local College Learning Management System (LoColms) based Virtual School application, whose objective is to provide a sustainable and economical solution suitable for the educational situation in the LDCs. The application is a web-based system, and aims at improving the traditional form of education by empowering the local educational institutions. Its economy comes from the fact that it is supported by a traditional communication technology, the public switched telephone network (PSTN), formerly regarded as a voice communication system, which already exists in all of the LDC countries; this avoids the costs of deploying packet-switched networks or dedicated private virtual networks (PVN) usually required in similar situations, costs that would otherwise become an intimidating factor to the decision makers in these countries. The work discussed is an innovation whereby different technologies are combined to make web-based education a realizable dream in the least developed countries in the soonest possible time. By this approach a lot can be achieved: 1) the WWW infrastructure would be established economically and with ease; 2) individual colleges' enrollment would, virtually, rise exponentially; 3) the local resources would be helped to develop; 4) the web-based educational system would be sustainable. We hope that this work will stimulate further research into appropriate technologies, especially web-based ones, that are applicable to the LDCs' situations, in the interest of education in the LDCs in particular, and of bridging the digital divide in general, relying on the locally available resources with an aim of strengthening them. Although the mastery of IT-related technologies should become a priority, it should not be a precondition for these countries to engage in technology-based education systems, especially if there already exists a minimum technological capacity with which to start. In our view, what is required is a formula for which of these technologies to use and in what combinations.

REFERENCES

BDOHEIAS (1998), Beirut Declaration on Higher Education, Arab Regional Conference on Higher Education, Beirut, Lebanon, 2-5.
Gundani & Govinda Shrestha (1997), "Distance Education in Developing Countries", http://www.undp.org/info21/public/distance/pb-dis.html#up.
Hilary Perraton & Charlotte Creed (2000), "Applying New Technologies and Cost-Effective Delivery Systems in Basic Education" (Thematic Review), International Research Foundation for Open Learning, for the Department for International Development on behalf of a multi-agency review.
Illich, I. (1972), "Towards a School-Free Society", Otava, Helsinki.
SCORM 1.0 (2000), http://www.adlnet.org.
Tella, S. (1995), "Virtual School in a Networking Learning Environment", University of Helsinki, Lahti Research and Training Centre, 146-176.
UDHR (1948), Universal Declaration of Human Rights (Article 26.1), General Assembly of the United Nations, http://www.historyoftheuniverse.com/udhr.html.
UETLDC (1995), UNESCO, "Education and Training in the Least Developed Countries", Mid-term Review, Paris.
UNCOLDC (2001), The Third United Nations Conference on the Least Developed Countries, Brussels.
WCEFA (1990), Final Report, World Conference on Education for All, Jomtien, Thailand, 5-9.
SCHEDULING A PRODUCTION PLANT USING CONSTRAINT DIRECTED SEARCH

D. Kibira, Department of Mechanical Engineering, Makerere University, Uganda
B. Kariko-Buhwezi, Department of Mechanical Engineering, Makerere University, Uganda
P. I. Musasizi, Department of Engineering Mathematics, Makerere University, Uganda
ABSTRACT
The paper presents the application of constraint directed search to production scheduling at Uganda Clays Limited, to increase productivity and timely order delivery. In the past, an experienced human has performed the scheduling task for the 69 clay products made on the same facility, which has been a serious challenge. The production process consists of five major sections, i.e. the silo, the green production section, the dryers, the kilns, and the stockyard. A scheduling system based on the Multiple Perspective Scheduling technique has been developed for the green production section. The system uses products and resources data and provides for the selection of scheduling policies. The schedules developed are similar to those of an experienced human. Therefore, the developed system would go a long way in addressing the need for an automated Production Scheduling Decision Support System at Uganda Clays. It can also be adapted to similar production environments.

Keywords: Production Scheduling; Uganda Clays Limited; Green Production Section; Constraint-Directed Search; Multiple Perspective Scheduling; Decision Support System; Resource Utilization; Order Due-Date Compliancy.
1.0 INTRODUCTION

Uganda Clays Ltd (UCL) is the leading manufacturer of baked-clay building materials and other clay products in Uganda, with a market share of about 65%. It manufactures a variety of products under the following classes: roofing tiles, walling and partitioning blocks, decorative grilles, suspended floor units, ventilators and other products. Each of these product classes has a number of variants, giving a range of over 69 different product items. Scheduling such batch-produced items on the same production facility to satisfy customer due dates has been a serious challenge. The production process consists of four processing sections in series: the silo for milling and blending the clay, the green production section for moulding the milled and blended clay into different products, the dryers, and the kilns for firing the dried products. There is also a stockyard for sorting, grading and storing before delivery. Each of the above sections
performs different scheduling activities, with the corporate goal being the optimum utilization of resources and order due-date compliancy. In this paper, we present the development of an improved production scheduling system for the green production section.

1.1 Production System Description

The Green Production Section has three production lines, each fitted with an extruder and a cutting table. The extruders are the Bongioanni 15-MA, Morando MVA-400 and Synthesis SYN-550. There are three tile presses: the Bongioanni Automatic Tile Press (ATP), the Morando Automatic Tile Press (ATPNL), and the Manual Tile Press (MTP).
[Fig. 1: Schematic Representation of the Production System: the SILO feeds the Green Production Section, whose extruders (15-MA, MVA-400, SYN-550) and presses (ATP, ATPNL, MTP) feed the DRYERS, then the KILNS, and finally the STOCKYARD]

The production system is schematically represented as shown in Figure 1 above. The Green Production Section has two types of machines and processes: the extrusion process and the pressing process. Some products are just extruded, while others are extruded and pressed. The extruders and presses can be used to produce different products by changing the mould. Currently, certain products can only be produced on particular machines, because moulds are designed for a specific machine and the moulds available do not cover all products on all machines.

1.2 Current Scheduling Practice

Currently, production scheduling is done manually on a weekly basis. Due to the inherent complexity associated with scheduling in this environment, developing a satisfactory production schedule without the support of an information system is a difficult and protracted procedure. Some objectives are often conflicting, for instance satisfying high-priority orders and meeting due dates. Additionally, orders arrive in a stochastic manner with varying dispatch priority and other order characteristics, such as the ordered product array and quantities.
Every year some products are flagged "prime products" based on the demand history and must be catered for while making the weekly production schedule. Therefore, UCL gives priority to the production of prime products and caters for the others based on weekly orders made; these latter are called "special orders". The prime products for the year 2004 were Mangalore Tiles, Portuguese Tiles, Ridges, Bricks, Half Bricks, Maxpan 5" and Maxpan 6". Production scheduling of prime products is based on monitoring their stock levels in the stockyard. The so-called special orders are made on a make-to-order strategy. Product items are produced in batches per shift, whose sizes are limited to the budgeted capacity (or planned production level) of a particular machine. The plant runs two eight-hour shifts a day, and there are five working days in a week. In this arrangement, a weekly green production schedule is produced. The actual execution of the schedule is of course affected by factors such as machine breakdowns, bad weather, and intermittent electricity supply.

1.3 Data Collection
The data collected includes products' specifications, prime products, green production equipment specifications, and classification of products by machine. Such information provided the basis for the development of, and input to, the Decision Support System (DSS); the selected scheduling policies utilize this data to generate the Weekly Green Production Schedule. Interviews with production personnel were carried out in the form of discussions. Detailed information on product and equipment characteristics was collected using data collection and recording sheets. The information collected includes products data, special orders data, and resource utilization data. The exhibits of collected data are shown below. (i) Budgeted green production, January to December 2005. (ii) The prime products were Mangalore, Marseille and Portuguese Tiles, Half-Bricks (GR), Ridges, Maxpan 5" and Maxpan 6". The budget was based on a day of two shifts, five working days a week and 4 weeks a month (22 working days).

Table 1: Green Production Budget for Prime Products for the year 2005

Product            Machine   Pieces/day   Number of days   Pieces/month
Mangalore Tiles    ATP       16,200       22               356,400
                   ATPNL     13,000       20               260,000
                   MTP       12,000       15               180,000
Marseille Tiles    MTP       12,000       3                36,000
Portuguese Tiles   ATPNL     12,000       2                24,000
Half-Brick (GR)    15-MA     26,958       22               593,076
Ridges             MTP       6,500        4                26,000
Maxpan 5"          SYN-550   7,700        5                38,500
Maxpan 6"          SYN-550   8,000        9                72,000
Other information collected:
(i) Products data, i.e. product name, description, specifications (weight and physical dimensions), production process and lead time.
(ii) Green production equipment data, i.e. equipment name, description, design capacity (tonnes per shift), budgeted capacity, whether available for both shifts, whether it can be powered by the standby generator, and whether it has been reserved for any product.
(iii) Products by machine, i.e. whether a particular machine can make a given product.
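These data items map naturally onto simple records. The sketch below is ours, with illustrative names; the authors' actual implementation is in Visual Basic 6.0, so this Python layout is only an assumption for exposition.

```python
# One possible organization of the collected data as DSS input.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Machine:
    name: str
    design_capacity_tonnes: float     # design capacity, tonnes per shift
    budgeted_capacity: int            # budgeted capacity, pieces per shift
    available_both_shifts: bool
    generator_backed: bool            # can be powered by the standby generator
    reserved_for: Optional[str] = None

@dataclass
class Product:
    name: str
    process: str                      # "extruded" or "extruded and pressed"
    lead_time_days: int

# Products-by-machine capability: True where the mould exists for that machine.
capability = {
    ("Mangalore Tiles", "ATP"): True,
    ("Maxpan 6\"", "SYN-550"): True,
    ("Mangalore Tiles", "SYN-550"): False,
    # ... one entry per (product, machine) pair
}
```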
1.4 Scheduling Technique Scheduling can be defined as the allocation of resources over time in order to perform a collection of tasks (Baker, 1974) and many useful models have been devised over time for solution. In many cases however, scheduling problems manifest themselves as sequencing problems. The scheduling problem at hand requires an algorithm for determining a good sequence for producing a given set of orders over a week's period given the constraints imposed by due dates, order priority, machine capacity and capability and pre-budgeted production levels for prime products. Given that even simple models of scheduling (e.g. job-shop scheduling) are N-P hard (Gary and Johnson, 1979), the search process typically depends on heuristic commitments, propagation of the effects of commitments, and the retraction of commitments. In more complex scheduling models the goal is not simply meeting due dates but also satisfying many complex (and interacting) constraints from disparate sources within the organization as a whole (Fox, 1983 and Fox, 1990). Therefore, the solution technique selected under the circumstances is the knowledge-based constraint directed search, which requires a representation of a scheduling problem and the search for a solution by focusing upon the constraints in the problem There are different solution approaches that exist in constraint directed scheduling (Fox, 1990) but the best suitable in this environment is called Multiple Perspective Scheduling, which together with the Preferred Customer Order dispatch rule was used to model the scheduling process. This technique is applicable to environments, just like at UCL, with high resource contention. It is both order and resource centered, with the aim of optimizing resource allocation. It involves identification of the island bottleneck, then guided by constraints and a priority rule through Order Selection, Capacity Analysis, Resource Analysis, and Resource Assignment to establish the associated resource reservations. Limited production capacity is the most critical (island) bottleneck at UCL. The scheduling challenge is to assemble and utilize all the quantifiable data drawn from the UCL operations domain, and develop a weekly schedule for the Green Production Section that will maximize machine utilization and Order due date compliancy subject to the constraints manifested by the production system. The following constraints were identified as the major determinants of the Weekly Green Production Schedule:
575
International Conference on Advances in Engineering and Technology
1.5 Budgeted Green Stock Levels This requires that the sum of units of a product to be produced must be at least equal to the "Budgeted Green Stock Level" for that product for the week being scheduled. During scheduling, the product with the highest "Budgeted Green Stock Level" takes precedence over the rest i.e. it is the Preferred Customer Order. 1.6 Order Characteristics Since production of a particular product is restricted to go on till the end of the shift, it was noted that when a special order is received there could be stock (Baked Stock), which can be used to service part or all of the order. Each order also specifies the quantity of each product item being ordered. This implies that reservation for production of an item on special order should only cater for the deficit between the unit in baked stock and units on order. However, due to losses in the dryers and kilns not all green stock is output as baked stock. The losses are a result of breakages during handling, and over firing in the kilns. Management considers a 30% factor on all green stock to become baked stock (i.e. only 70% of the green stock is expected to become good baked stock).
Additionally since all orders have varying priority, the following classification was adopted to indicate the order priority inline with the PCO dispatch rule: Order Classification
Basis For Classification
Hot
The order must be served before normal time
Normal
The order is to be served on normal time
Cold
The order can be served after normal time
The products in Hot orders are hence given first priority; those in Normal and Cold orders follow, in that sequence. Within the order classes, product items are scheduled by giving first priority to the product with the highest green stock level.

1.7 Machine Capabilities

In case a machine is reserved, it can only be scheduled to produce, up to the green stock level, the end product for which it is reserved (this excludes reservation for Slabs), and it is not scheduled for any other product thereafter. Otherwise, a product is scheduled on a particular machine if that machine has the mould for that product.

1.8 Resource Availability

The factors that affect resource availability (emergency breakdowns, power failures, etc.) are of a stochastic nature and cannot be represented by deterministic quantities. Since these are probabilistic, they would call for rescheduling at each unique instantiation. The developed system
does not cover reactive scheduling; it is limited to consideration of production system parameters that can be represented by deterministic quantities. To provide for comparison among several simulated scenarios ("what-if" simulation) resulting from the application of different scheduling policies derived from the UCL production environment, an event-driven model was designed, giving the DSS user the opportunity to define the scheduling policies by selecting any desired basis for the DSS to build a schedule. The scheduling policies modeled were categorized as:

• Prime Products Only. This is the default scheduling option; the user does not select to turn it on or off. In case no other option is selected, a schedule based on considering only prime products is developed by the system.

• Prime Products and Machine Capabilities. This option gives the user the opportunity to see the effect of machine capabilities on the schedule if only prime products are considered.

• Prime Products and Special Orders. This option gives the user the opportunity to see the effect on the schedule of assuming that all presses can produce any extruded-and-pressed product and all extruders can produce any extruded product, when prime products and special orders are considered.

• Prime Products, Special Orders and Machine Capabilities. This option is a combination of the Prime Products and Special Orders option and the Prime Products and Machine Capabilities option. It shows the effect of all the considered constraints and rules on the schedule.

Within each policy, each production order is assigned a priority index and each machine is also assigned a sequence index.

2.0 THE DEVELOPED SYSTEM

The system model was implemented as a computer program using the Microsoft Visual Basic 6.0 Integrated Development Environment. It is supported by a dynamic graphical user interface through which the scheduler can easily access and manipulate information about products and resources. The interface provides for the definition of the desired scheduling policies, the update of production data, the display of the generated schedule, and manipulation of the schedule report through the export method provided, to enable interfacing with other text-processing software.

2.1 Sample Test Run

A set of data was input to the system with the objective of testing the performance of the developed system.
Using the policy of prime products only, a production schedule is generated as indicated in the exhibit below. A different schedule is generated for each of the policies.

UGANDA CLAYS LIMITED: Weekly Green Production Schedule

Date                 Shift   Bongioanni 15MA   Synthesis 550   Morando MVA 400   Bongioanni Press   Manual Tile Press   Morando Press
Mon 22 August 2005   I       Slabs             Maxpan 6"       Slabs             Mangalore Tiles    Portuguese Tiles    Mangalore Tiles

[The remaining rows of the exhibit, covering shifts I and II from Mon 22 to Fri 26 August 2005, assign Slabs, Maxpan 5" and Maxpan 6" to the extruders, and Mangalore, Marseille, Andalusia and Portuguese Tiles, Halfbricks (GR) and Ridges to the presses.]
The program output is consistent with what would be achieved if the schedules were otherwise developed manually based on the same rationale.

3.0 CONCLUSION
A scheduling system has been developed to aid the scheduler at UCL in achieving the following:

• Finding feasible Weekly Green Production Schedules instantaneously, based on the data derived from the system requirements and the scheduling policies defined by the scheduler.
• Simplifying the regular scheduling exercises by requiring only general knowledge of the UCL production system's characteristics from the user.
• Carrying out sufficient what-if analysis of the scheduling decisions.
• Achieving higher productivity using the same resources.
• Maximizing due-date compliance and hence enhancing competitiveness.
Enhanced predictability in the production process at UCL can result in enhanced reliability and in improvements in the associated economic performance, the degree of utilization of production equipment, due-date compliance and overall customer satisfaction.

REFERENCES
Adams, J., Balas, E. and Zawack, D. (1988). The Shifting Bottleneck Procedure for Job Shop Scheduling. Management Science, Vol. 34, pp. 391-401.
Baker, K. R. (1974). Introduction to Sequencing and Scheduling. Wiley and Sons.
Beck, C. J. and Fox, M. S. (2000). Constraint-Directed Techniques for Scheduling Alternative Activities. Artificial Intelligence, Vol. 121, pp. 211-250, Elsevier Science.
Fox, M. S. (1990). Constraint-Guided Scheduling: A Short History of Research at Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A. Computers in Industry, Vol. 14, pp. 79-88, Elsevier Science.
Fox, M. S. and Smith, S. F. (1984). ISIS: A Knowledge-Based System for Factory Scheduling. Expert Systems, Vol. 1, pp. 25-49.
Froeschl, K. A. (1993). Two Paradigms of Combinatorial Production Scheduling: Operations Research and Artificial Intelligence. In: Scheduling of Production Processes, Chapter 1, Dorn, J. and Froeschl, K., eds., Ellis Horwood.
Garey, M. R. and Johnson, D. S. (1979). A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York.
A NEGOTIATION MODEL FOR LARGE SCALE MULTI-AGENT SYSTEMS

T. Wanyama, Department of Electrical Engineering, Makerere University, Uganda
G. T. Wani, Department of Engineering Mathematics, Makerere University, Uganda
ABSTRACT

Modeling agent negotiation is of key importance in building multi-agent systems, because negotiation is one of the most important types of agent interaction. Negotiation provides the basis for managing the expectations of the individual negotiating agents, and it enables selecting solutions that satisfy all the agents as much as possible. Thus far, most negotiation models have serious limitations and weaknesses when employed in large-scale multi-agent systems. Yet large-scale multi-agent systems find use in major domains of human development, such as space exploration, military technology, disaster response systems, and health technology. This paper presents a negotiation model for large-scale multi-agent systems that is based on Qualitative Reasoning and Game Theory, and on similarity criteria. In the model, each agent classifies its negotiation opponents according to the similarity of their preference models. The agents use the Qualitative Reasoning components of the negotiation model to estimate the preference models of their negotiation opponents, and to determine the "amount" of tradeoff associated with the various solution options. Moreover, they use the Game Theory component of the negotiation model to determine the social acceptance of each of the solution options. The output of the Qualitative Reasoning and Game Theory components of the negotiation model is used to determine the rationale for accepting or rejecting offers made by the negotiation opponents of the agents.

Keywords: Centralized, Decentralized, Decision-Making, Game-Theory, Group-Choice, Large-Scale, Multi-Agent, Negotiation, Opponent, Preferences, Reasoning
1.0 INTRODUCTION

When solving Group-Choice problems where agents have to select solution options from sets of alternatives, each agent normally has its own preference model, which is made up of a set of decision variables (criteria for evaluating solution options) and preference value functions (criteria weights). Furthermore, each agent applies its preference model independently to the features of the solution options, using a Multi-Criteria Decision Making (MCDM) technique. This results in a ranking of the solution options for each agent. The solution that is ranked highest by all the agents is the dominant solution, and it should be selected as the agreement solution. However, Group-Choice problems normally do not have
dominant solutions, due to the differences in the preference models of the agents. In this case, an agreement (best-fit) solution option is identified through a negotiation process. Negotiation is a form of agent interaction that aims at identifying agreement solution options through an iterative process of making proposals (offers). The attributes of these proposals depend heavily on the preference models of the concerned agents, and on the knowledge that the agents have about the preference models of their negotiation opponents. Consequently, appropriate negotiation models should be able to assist agents to collect preference information about their negotiation opponents, and to integrate this information with their own preference models in order to identify and make proposals that are most likely to be accepted as the agreement solution options. Although many negotiation models have been reported in the literature, they all fall under two distinctive categories, namely Analytic Based Models (Barbuceanu & Lo, 2000; Kraus, 1997) and Knowledge Based Models (Raiffa, 1982). In the context of Group-Choice problems in Large Scale Multi-Agent Systems (LSMAS), negotiation models in the literature have the following shortfalls:

• Both categories of negotiation models are developed with an implicit assumption that agents are always available during the entire negotiation period. This is not realistic in large-scale distributed multi-agent systems, because in such systems agents are terminated or crash without warning.

• Most analytic based agent negotiation models employ techniques that are naturally centralized; this is against the principle of decentralization, which is fundamental to the concept of Multi-Agent Systems (MAS). Moreover, analytic based models require the central processor to have complete information about the preferences of all the negotiating agents. This is impractical for LSMAS.

• Most knowledge based negotiation models result in random behavior (lack of a mechanism to track the negotiation process) of the negotiating agents. This behavior results in unnecessary deadlocks in LSMAS. The few knowledge based negotiation models that track negotiation processes are invariably feasible only for negotiations involving two agents, such as in the buyer-seller negotiation problem.

This paper presents a negotiation model for solving group-choice problems in LSMAS. The model is based on categorizing the negotiation opponents of agents according to the similarity of their preferences. Since the agents focus on making proposals that are acceptable to classes of opponents, instead of dealing with each of the opponents individually, the model presented in this paper enables the agents to address issues associated with many negotiation opponents. This makes the model practical for both small and large scale MAS. Furthermore, the model allows the agents to seamlessly join or leave the negotiation process, which addresses the issue of agents crashing, or being started and terminated without warning.
2.0 RELATED WORK
Negotiation is a very extensive subject that spans from pre-negotiation to post-negotiation analysis, both at the local and the social level. Consequently, a considerable amount of work on negotiation is available in the literature from different domains, such as operational research, economics, and decision theory (Jennings et al, 2001; Faratin et al, undated). In this section, we present the work that is directly related to our negotiation model. The analytic based agent negotiation models utilize analytic techniques such as Game Theory to determine the solutions that maximize the social welfare of the negotiating agents (Kraus, 1997). In most of these models, each agent evaluates the solution options according to the preference model of its clients, a process that results in performance scores of each solution option for every agent. These scores are sent to a central processor that determines the 'best-fit' solution option and/or a ranking of the solution options with respect to the combined preferences of the negotiating agents. Analytic based agent negotiation models minimize communication among the negotiating agents; however, besides the drawbacks associated with LSMAS presented in Section 1, these models invariably have the following general shortfalls:

• The agents have no control over the tradeoffs made during the negotiation process; the models consider only the quantity of the tradeoffs, disregarding their quality. That is, analytic based models are used with an implicit assumption that the negotiating agents accept any tradeoffs so long as they are associated with the smallest total tradeoff quantity. However, this is not always true, since agents may sometimes be more willing to give larger concessions on some decision variables than to give a small concession on others.

• The analytic based models do not follow the natural process of negotiation, where in between offers and counter-offers multiple negotiation decision variables are traded off against one another, in order to identify the solution that maximizes the social welfare.
To circumvent the shortfalls of the analytic models, as well as the shortfalls of the Kraus (2001) model, Faratin et al, (not dated) have proposed an agent negotiation model that depends on utility, similar to the analytic models. Moreover, the model enables the agents to tradeoff during negotiation, like the knowledge-based models. The negotiating agents can utilize the model proposed by Faratin et al even if they have partial information about the solution, thus the model has the potential of enabling the agents to search a larger solution space. However, in the context of LSMAS, the model has the following shortfall: It is viable for only two negotiating agents such as in buyer-seller negotiation problems. Therefore, the approach of Faratin et al may not be applicable to LSMAS in its current form. Ray & Triantaphyllou (1998) [9] propose a negotiation model that is based on the possible number of agreements and conflicts on the relative importance of the decision variables. However, having different preference functions does not necessarily mean preferring different solution options. Therefore, this model is too inefficient to be utilized in LSMAS. The other shortfalls of this model are the assumptions that the clients of the agents have the same concerns, hence the same set of decision variables, and that the preference models of negotiating agents is public information. In practice, agent clients normally have different concerns, which lead to having different sets of decision variables, as well as preference value functions, and this information is private. This paper presents an agent negotiation model that is based on Qualitative Reasoning (QR), and Game Theory (GT). We call it the Universal Agent NEgotiation Model (UANEM), because it is applicable to both small and large scale MAS. Moreover, the model can be used in a variety of negotiation problems such as Group-Choice negotiation, Seller-Buyer negotiation, and Auction problems. It should be noted that this paper focuses on the use of the model in the Group-Choice negotiation problems. The QR component of the model assists the agents to estimate the preference models of their negotiation opponents, and to determine the similarities between preference models. On the other hand, each of the agents utilizes the Game Theory component to determine how acceptable each of the solution options is, to all the negotiating agents. UANEM is similar to the negotiation model of Faratin [6]; except, our model utilizes a Game Theory component to support negotiation among nagents. 2.0 NEGOTIATION MODEL FOR LSMAS
Increasing the number of agents in a MAS introduces complexity in the modeling of agent negotiation processes. As a matter of fact, there are agent negotiation models that are highly efficient for negotiations between two agents, but whose efficiency drops considerably when the number of agents in the MAS is increased by just one agent. The model proposed in Faratin et al. (undated) is a good example of such models. In the following sub-sections, we describe how we developed an automatic agent negotiation model for Group-Choice problems, and how we modified it to become applicable to LSMAS.
3.1 The First Version of Our Group-Choice Negotiation Model
We developed our Group-Choice Negotiation Model (GCNM) for use in a Decision Support System (DSS) for the selection of Commercial-Off-The-Shelf (COTS) products, which we were working on (Wanyama & Far, 2004). The main objective of that project was to develop a DSS which allows both the group and the individual stakeholder processes to be carried out concurrently. Therefore, our main concern was the provision of appropriate user agents for the various stakeholders of the COTS selection process, and the integration of the user information to automatically identify the 'best-fit' COTS products. The automatic negotiation was not meant to replace the human decision makers, but to assist the stakeholders to carry out simulation-based analysis and ask 'what if' questions, both at the individual and group levels. At this time we did not mind whether the resulting MAS was centralized or decentralized. Moreover, the COTS selection problem normally involves few (3-10) stakeholders, thus our agent negotiation model did not have to satisfy the requirements imposed by LSMAS. Figure 1 shows the first version of our GCNM; to facilitate this model, each user agent has a negotiation engine with three components:
• The first component has a Multi-Criteria Decision Making (MCDM) algorithm that enables the agents to evaluate and rank the solution options according to their performance scores against the preference models of their clients.
• The second component of the negotiation engine has a simple comparison algorithm that allows the agents to compare the individual ranking of the solution options to the group ranking.
• The third component of the engine contains a Qualitative Reasoning algorithm that enables the agents to adjust automatically the preference models of their clients.
[Figure 1 sketch: each user agent evaluates the features of the solutions against its agent preference model and sends the scores of the solution options to the arbitrator agent; the arbitrator returns a group ranking of the solution options; if the individual and group rankings have different best solutions, the agent adjusts its preferences, whereas if the same solution is best for both rankings, the agent sends a 'no change' message to the arbitrator agent.]
Figure 1: First version of our GCNM

The negotiation model shown in Figure 1 works as follows:
(i) Each user agent j determines the score \pi_j(i) of every solution option i, and sends its scores of all the solution options to the arbitrator agent.
(ii) The arbitrator agent determines the optimal solution option for the negotiating agents using a Game Theory model.
(iii) The arbitrator agent ranks the solution options according to how close they are to the optimal solution. We refer to the closeness of a solution option to the optimal solution as the degree of fitness of the solution option in meeting the combined concerns of all stakeholders. The degree of fitness of the solution options is represented by their Social Fitness Factors (G_f).
(iv) The arbitrator agent sends to all negotiating agents the Social Fitness Factors of the solution options.
(v) If the 'best' Social Fitness Factor corresponds to the most preferred solution option for all agents, the negotiation ends. However, if any of the agents prefers another option, it adjusts its preference model in such a way as to improve the score (payoff) of the option with the best G_f. The agent targets the solution option with the best Social Fitness Factor because it is aware that it has to maximize its payoff subject to the satisfaction of the group. After adjusting the preferences, the agent evaluates all solution options using the new preference model and then sends the new scores of the solution options to the arbitrator agent. This amounts to calling for another round of negotiation.
The above five steps continue until all agents prefer the alternative with the 'best' G_f, or all agents acknowledge that there is nothing they can change to improve their negotiated payoffs without depreciating the G_f of the best-fit alternative considerably. The negotiation model in Figure 1 turned out to be very unreliable: whenever the arbitrator agent was unavailable, it was not possible to carry out any group processes. This was very frustrating, since we had designed our negotiation model in such a way as to support asynchronous decision making, where agents that are not available at some stage of the negotiation process can catch up with the others at a later stage without being at an advantage or a disadvantage. Moreover, the model assumes environments where only the grand coalition maximizes the utility of the agents. Yet in practice, forming a grand coalition does not guarantee maximum utility for the involved agents. Finally, the negotiation model in Figure 1 does not follow the natural process of negotiation, where agents trade offers and counter-offers. Instead, the model relies on the arbitrator to resolve the differences between the agents.
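For concreteness, one arbitration round of this first version can be sketched in a few lines of Python. The Nash-product rule used below to pick the 'optimal' option is our illustrative stand-in, since the paper does not specify the Game Theory model; all names and the closeness measure are assumptions.

```python
import numpy as np

def arbitration_round(score_matrix):
    """One round of the first-version GCNM (illustrative sketch).

    score_matrix[j, i] is agent j's score for solution option i.
    A Nash-product rule stands in for the unspecified Game Theory model.
    Returns the Social Fitness Factor Gf of every option (higher is fitter).
    """
    # Stand-in 'optimal' option: the one maximizing the product of payoffs.
    nash = np.prod(score_matrix, axis=0)
    best = np.argmax(nash)
    # Fitness = closeness of each option's score vector to the optimum's.
    dist = np.linalg.norm(score_matrix - score_matrix[:, [best]], axis=0)
    return 1.0 / (1.0 + dist)        # Gf in (0, 1], 1 for the optimum itself

# Example: 3 agents, 4 options.
scores = np.array([[0.9, 0.4, 0.7, 0.2],
                   [0.3, 0.8, 0.6, 0.5],
                   [0.5, 0.6, 0.8, 0.4]])
gf = arbitration_round(scores)
print(gf, "best option:", gf.argmax())
```

Each agent would then compare its own best option against the option with the highest Gf and, if they differ, adjust its preference model before the next round.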
3.2 The Second Version of Our Group-Choice Negotiation Model
We addressed the above-mentioned shortfalls of the negotiation model in Figure 1 by modifying the agent negotiation engines as follows:
• The Qualitative Reasoning algorithm was modified to be able to estimate the preference models of the negotiation opponents of agents based on their offers. This enables the agents to estimate the scores of the solution options with respect to the preference models of their various negotiation opponents. Furthermore, the Qualitative Reasoning algorithm was modified to determine the 'amount' of tradeoff (Tradeoff Factors) associated with the various solution options. This helps the agents to know in advance what they gain and/or lose if a particular solution is selected.
• A coalition formation component was added to the negotiation engine. The component has a coalition formation algorithm that assists the agents to join coalitions that maximize their utilities according to the negotiation strategies of their clients. These strategies determine the basis for joining coalitions and the level of commitment that the agents have to their coalitions.
• The arbitrator agent was removed from the MAS and a social welfare component was added to the negotiation engine of the user agents. This component has a Game Theory model, which is used to determine the Social Fitness Factors of the solution options. The inputs to the Game Theory model are the estimated scores of the solution options for the coalition mates of the concerned agent, as well as the actual solution scores for the concerned agent.
• An acceptance component was inserted in the negotiation engine of the user agents. This component has an algorithm for combining the Social Fitness Factors, the Tradeoff Factors, and the parameters of the agent strategies, to determine the Acceptance Factors of the solution options.
The decision-making algorithm was changed from making decisions based on whether the solution with the 'best' Social Fitness Factor is the one preferred by the concerned agent, to selecting the offers to be made to the opponents of the agent based on the ranking of the solution options according to the preferences of the agent. In addition, the algorithm was modified to make it capable of deciding how to respond to offers made by the opponents of the agent, based on the Acceptance Factors of the offers. The above modifications of the agent negotiation engine resulted in the second version of our Group-Choice agent negotiation model. The model enabled decentralizing the MAS such that if any of the agents was not available for some reason, the others could go ahead with the negotiation process. This increased the reliability of the MAS. Moreover, the modifications resulted in a negotiation model that follows the natural process of negotiation, where agents trade offers and counter-offers after evaluating the solution options. Figure 2 shows the negotiation model associated with the modified agent negotiation engine. It should be noted that the solution option with the highest score is the offer that the concerned agent makes.
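As a rough illustration of how the decision component might combine these quantities, the sketch below trades the Social Fitness Factor off against the Tradeoff Factor using a single strategy parameter; the combination rule and the 'selfishness' parameter are our assumptions, not details given in the paper.

```python
def acceptance_factor(social_fitness, tradeoff, selfishness=0.5):
    """Combine the Social Fitness Factor and the Tradeoff Factor of an
    offer into an Acceptance Factor (sketch; both the combination rule
    and the 'selfishness' strategy parameter are illustrative).

    A selfish agent (selfishness -> 1) penalizes offers that demand large
    tradeoffs; a cooperative one (selfishness -> 0) follows group fitness.
    """
    return (1.0 - selfishness) * social_fitness - selfishness * tradeoff

def respond(offer_af, own_best_af, threshold=0.0):
    """Accept the offer if it is acceptable enough relative to the agent's
    own best option; otherwise counter-offer with that option."""
    return "accept" if offer_af >= max(threshold, own_best_af) else "counter"
```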
[Figure 2 sketch of a single negotiation round: the Acceptance Factors are used to decide on the 1st offer; updated Acceptance Factors are used to decide on the 2nd offer; and the final updated Acceptance Factors for the negotiation round are used to decide on the Nth offer, after which either the negotiation ends or the preference model is updated.]
Figure 2" Second version of our GCNM Therefore, Figure 2 illustrates that on receiving an offer, the agent checks it to determine its type. This results in the following scenarios: (i) The offer is the same as the solution option that the agent prefers. In which case, the offer is accepted.
(ii) The offer is not the preferred solution option of the agent, and it is made by an agent that is not a member of the agent's coalition. Such an offer is sent to the decision component of the negotiation engine to determine whether it satisfies the acceptance criteria before accepting or rejecting it.
(iii) The offer is not the preferred solution option of the agent, and it is made by a member of the agent's coalition. The offer is sent to the Reasoning Component of the negotiation engine to finally estimate the Acceptance Factors of the solution options. The Acceptance Factors are thereafter sent to the Decision Component of the engine to determine whether the offer satisfies the acceptance criteria.
Figure 2 illustrates how the Acceptance Factors of the solution options are updated as more coalition members make offers. It should be noted that the figure depicts only a single negotiation round. Moreover, Figure 2 shows that if an agreement is not reached by the end of a negotiation round, the final Acceptance Factors of the solution options are used in the negotiation engine to modify the preference model of the concerned agent in preparation for the next negotiation round. The agent modifies its preference model by adjusting the preference values of some decision variables in such a way as to increase the score of the solution option with the 'best' Acceptance Factor, if that solution is not the agent's most preferred; the modified preference model is then used to evaluate the solution options at the beginning of the next negotiation round.
When we employed the second version of our negotiation model in Group-Choice problems that involve many (more than 15) stakeholders, the model proved to be inefficient. For example, an agent running on a Personal Computer (PC) with the following specifications: AMD Duron(tm) processor, 1.10 GHz, 256 MB of RAM, would cause the PC to freeze for up to 5 seconds whenever the agent received the last offer in a negotiation round involving 20 negotiation opponents. Since we designed our agents to run on general purpose PCs and/or servers, this level of resource utilization was unacceptable, because it interfered with other processes running on these machines. Moreover, such time delays would definitely affect the applicability of the negotiation model to time-constrained Group-Choice problems such as resource management in wireless networks. We modified the agent negotiation engine to reduce the computational resources, as well as the time, required by agents to respond to offers. The negotiation model that resulted is applicable to both small scale and large scale MAS, and it can be modified to become applicable to other negotiation problems such as buyer-seller negotiation and auction problems. We therefore refer to this model as the Universal Agent NEgotiation Model (UANEM).
3.3 The UANEM
To make our agent negotiation model applicable to LSMAS, we reduced the amount of offer processing by enabling the agents to classify their negotiation opponents according to the similarity of their preference models. This was achieved by adding a capability to the
Qualitative Reasoning algorithm to compare offers, as well as the estimated preference models of the negotiation opponents of agents. The resulting agent negotiation model, which we refer to as UANEM, is similar to the model shown in Figure 2; but instead of the input to the Game Theory model being the estimated scores of the solution options with respect to all the negotiation opponents of the concerned agent, together with the actual scores of the solution options for the concerned agent, it is a set of the scores of the solution options associated with the various classes of the negotiating agents, and the number of agents in each class. This compresses the input data to the Game Theory model, resulting in a reduction of the computational resources and time required by the agents to respond to offers. The UANEM can be viewed as a version of the model in Figure 2 that has a memory of previous offers, and that has the ability to classify the negotiation opponents of agents according to the similarities of their offers. On receiving an offer, agents in a negotiation process that is based on UANEM are required to check the offer to determine whether the same offer has previously been received in the current negotiation round. This results in two scenarios:
• The offer has previously been received; in this case the agent proposing the offer is added to the class of agents associated with its offer, and the number of agents in each class, as well as the scores of the solution options corresponding to every agent class, are sent to the Social Welfare Component of the negotiation engine of the concerned agent.
• The offer has not previously been received; in this case, the preference model of the proposing agent is estimated and then compared with the representative preference models of the existing agent classes. If it is found to be similar to one or more of the class representative preference models, the agent is marked as a member of the class whose preference model is most similar to its own. However, if the preference model of the proposing agent is not similar to any of the representative preference models of the existing agent classes, the proposing agent is marked as the first member of a new agent class, and its preference model is labeled the representative preference model of the new agent class.
It should be noted that the level of similarity (ω) between two preference models can be set anywhere between the extremes of 0% and 100%, where the 100% setting means that for two preference models to be similar, they must have the same decision variables and the same preference value functions; in other words, the two preference models must be identical. On the other hand, the 0% setting of ω implies that the preference models being compared do not have to have anything in common to be treated as similar. In fact, with a 0% setting there is no need to go through the process of memorizing previous offers or comparing preference models. The 0% setting reduces the UANEM to the model proposed by Kraus (2001). In that model, agents do not process the offers of their opponents, and adjust their preference models randomly at the end of every negotiation round.
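The classification step can be sketched as follows, assuming that estimated preference models are represented as weight vectors and that cosine similarity stands in for the unspecified similarity measure; ω is the similarity level discussed above.

```python
import numpy as np

def classify_opponent(pref, classes, omega=0.5):
    """Assign an opponent's estimated preference model to an agent class.

    pref    : estimated preference weight vector of the proposing agent
    classes : list of [representative_pref, member_count] entries
    omega   : similarity level in [0, 1]; 0 treats everything as similar
              (reducing UANEM to Kraus-style behaviour), while 1 demands
              identical models.  Cosine similarity is an assumption.
    """
    pref = np.asarray(pref, dtype=float)
    best, best_sim = None, -1.0
    for c in classes:
        rep = c[0]
        sim = pref @ rep / (np.linalg.norm(pref) * np.linalg.norm(rep))
        if sim >= omega and sim > best_sim:
            best, best_sim = c, sim
    if best is None:                       # open a new class
        classes.append([pref, 1])
    else:                                  # join the most similar class
        best[1] += 1
    return classes
```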
4.0 SIMULATION EXPERIMENTS
In these experiments, agents were required to select a Commercial-Off-The-Shelf (COTS) product to be used in the development of a web-shop, from a set of eight solution options. The agents evaluated the solution options based on preference models made up of twelve predefined decision variables, and the initial preference value functions of the agents were generated using a truncated random number generator. Three types of agent negotiation models were tested in the experiments: the model proposed in Kraus (2001), the second version of our agent negotiation model, and the UANEM. In all experiments, we kept the number of solution options constant (eight solution options), and the number of negotiating agents was increased from 2 to 50 in steps of 1. For each number of agents, we ran the simulation one hundred times, noting the negotiation rounds and the time taken by one of the two agents with which the simulation started (Agent a) to process the last offer in every negotiation round. The last offers in the rounds are targeted because they involve processing the preference information of all the negotiating agents, thus resulting in maximum offer processing time. For the UANEM, we carried out simulations with the value of ω set to 0%, 50% and 100%. Moreover, for simplicity we made the following assumptions with regard to the second version of our agent negotiation model: all negotiating agents subscribe to the grand coalition, and every agent is totally committed to maximizing the utility of the grand coalition. The simulation measurements were carried out on a computer with the following specifications: AMD Duron(tm) processor, 1.10 GHz, 256 MB of RAM. The MAS that we tested in the simulations was developed using Java, and it ran on Windows XP machines with the Java Runtime Environment (JRE 1.4.2).
5.0 RESULTS
Figure 3 shows the variation of the maximum number of negotiation rounds with the number of agents involved in the negotiation process, and Figure 4 shows the variation of the average of the maximum offer processing time with the number of negotiating agents.
[Plot: maximum number of negotiation rounds versus the number of agents. Legend: Kraus model & UANEM with ω = 0%; UANEM with ω = 50%; UANEM with ω = 100%; 2nd version.]
Figure 3: Variation of maximum number of negotiation rounds with the number of negotiating agents
[Plot: average maximum offer processing time (s) versus the number of agents. Legend: UANEM with ω = 100%; 2nd version.]
Figure 4: Variation of maximum offer processing time with the number of negotiating agents
6.0 DISCUSSION OF RESULTS
Figure 3 reveals that the negotiation model proposed in Kraus (2001) is equivalent to the UANEM with the similarity level (ω) set to 0%. Moreover, the figure shows that the performance (in terms of negotiation rounds) of the UANEM with ω set to 100% is comparable to that of the second version of our agent negotiation model. Any other setting of ω results in a negotiation-rounds performance that lies between that of the UANEM with ω = 0% (Kraus's model) and that of the UANEM with ω = 100% (see the performance of the UANEM with ω = 50%). For Kraus's negotiation model (Kraus, 2001), the number of negotiation rounds increases sharply with the number of agents involved in the negotiation (see Figure 3). This makes it inappropriate for LSMAS. Kraus's agent negotiation model does not require the agents to carry out any processing of the offers that they receive. This saves processing time, but it results in random behavior of the agents, leading to poor or no control of the dynamics of the negotiation process. On the other hand, the second version of our agent negotiation model requires agents to process the offers that they receive in order to identify appropriate counter-offers. This controls the dynamics of the negotiation process. However, processing of offers results in an offer processing time that increases sharply with the number of agents involved in the negotiation process (see Figure 4). This makes the second version of our agent negotiation model inappropriate for LSMAS. Furthermore, Figure 4 shows that the UANEM (ω set to 100%) results in an offer processing time that does not vary significantly with the number of agents involved in the negotiation process, implying that the model is applicable to LSMAS.
7.0 CONCLUSION AND FUTURE WORK
This paper presents an agent negotiation model for GCDM in LSMAS. Moreover, the paper describes how the negotiation model for LSMAS was derived from a simple centralized negotiation model. The simulation results presented in this paper show that the negotiation model proposed in Kraus (2001) and the second version of our agent negotiation model represent the two extremes of knowledge based negotiation models. That is, in the context of LSMAS, Kraus's model is associated with minimum (zero) offer processing time and a maximum number of negotiation rounds. On the other hand, the second version of our agent negotiation model is associated with maximum offer processing time and a minimum number of negotiation rounds. Furthermore, the simulations reveal that the UANEM is associated with low offer processing time (close to Kraus's model) and few negotiation rounds (close to the second version of our agent negotiation model), making it suitable for LSMAS. From Figures 3 and 4, it is noticed that the offer processing time and the number of negotiation rounds vary in opposite directions as the similarity level (ω) varies. Therefore, in the future we would like to establish the optimal similarity levels associated with different agent negotiation situations.
REFERENCES
Barbuceanu, M. and Lo, W. (2000), A Multi-attribute Utility Theoretic Negotiation Architecture for Electronic Commerce, Proceedings of the Fourth International Conference on Autonomous Agents, Barcelona, Spain.
Faratin, P., Sierra, C. and Jennings, N. R. (undated), Using Similarity Criteria to Make Issue Trade-offs in Automated Negotiations, Artificial Intelligence, Vol. 142.
Jennings, N. R., Faratin, P., Johnson, M. J., O'Brien, P. and Wiegand, M. E. (1996), Using Intelligent Agents to Manage Business Processes, Proceedings of the Practical Application of Intelligent Agents and Multi-Agent Technology Conference, London, United Kingdom.
Jennings, N. R., Faratin, P., Lomuscio, A. R., Parsons, S., Sierra, C. and Wooldridge, M. (2001), Automated Negotiation: Prospects, Methods, and Challenges, International Journal of Group Decision and Negotiation, Vol. 10, No. 2.
Kraus, S. (1997), Negotiation and Cooperation in Multi-agent Environments, Artificial Intelligence, Vol. 9, Nos. 1-2.
Kraus, S. (2001), Strategic Negotiation in Multiagent Environments, Cambridge: Massachusetts Institute of Technology Press.
Raiffa, H. (1982), The Art and Science of Negotiation, Cambridge: Harvard University Press, USA.
Ray, T. G. and Triantaphyllou, E. (1998), Theory and Methodology: Evaluation of Rankings with Regard to the Possible Number of Agreements and Conflicts, European Journal of Operational Research.
Wanyama, T. and Far, B. H. (2004), Multi-Agent System for Group-Choice Negotiation and Decision Support, Proceedings of the 3rd Workshop on Agent Oriented Information Systems, New York, USA.
Yokoo, M. and Hirayama, K. (1998), Distributed Constraint Satisfaction Algorithm for Complex Local Problems, Proceedings of the Third International Conference on Multi-Agent Systems, IEEE Computer Society Press.
CHAPTER NINE
TELEMATICS AND TELECOMMUNICATIONS
DIGITAL FILTER DESIGN USING AN ADAPTIVE MODELLING APPROACH
Elijah Mwangi, Department of Electrical & Electronic Engineering, University of Nairobi, P.O. Box 30197, Nairobi 00100, Kenya.
ABSTRACT
The design of an FIR filter using the Wiener approach is presented and compared to an LMS design. The Wiener filter is synthesized by computing the optimum weights from the signal characteristics. For the LMS filter, the optimum weights are obtained iteratively by minimising the MSE of an error signal that is the difference between the filter output and the output of an ideal filter that meets the design specifications exactly. Results from MATLAB computer simulations show that both methods give filters that meet the design specifications in terms of cut-off frequency and linear phase response. The presentation gives an alternative design methodology for FIR filters and is also suitable for illustrating the properties of the LMS algorithm as an approximation to the Wiener approach. Keywords: FIR Filters, LMS algorithm, Wiener filtering, Adaptive filters.
1.0 INTRODUCTION
The aim of this paper is to illustrate adaptive signal processing concepts through the design of an FIR filter. Since the design methodologies for FIR filters are well established and easily understood, the design of such filters using an adaptive process forms a good basis for introducing adaptive signal processing techniques. In the proposed method, the ideal magnitude and phase characteristics of the FIR filter are specified. The design problem can be stated as follows: given the magnitude and phase response of a discrete linear time-invariant system, an FIR filter is to be synthesized by using an adaptive solution that gives a good minimum square error fit to the magnitude specifications. The filter phase response should also exhibit linear phase characteristics. Two synthesis methods are investigated: the Wiener approach and the LMS algorithm approach. The Wiener method computes an optimum set of filter weights that gives a response that best fits the design
specifications. The LMS algorithm method provides an iterative solution that converges to a set of filter weights that approximates the Wiener solution. The adaptive process of the LMS algorithm is demonstrated by a reduction in the Mean Square Error (MSE) at each iteration. In this paper an FIR low pass filter with a specified number of coefficients is designed using both the Wiener approach and the LMS algorithm. Results obtained by MATLAB simulation show that the LMS algorithm gives magnitude and phase responses similar to those obtained with the Wiener approach.
2.0 THE PROBLEM STATEMENT
The design process can be modelled as a system that consists of two parts: an ideal filter that meets the design specifications exactly, and an adaptive filter that gives an approximation to the specifications. The difference between the ideal filter output and the adaptive filter output is an error signal that can be used to adjust the filter weights. The process is illustrated in Figure 1. Let the input signal x(n) be a sum of K sinusoids of unit amplitude, each sampled at a frequency f_s:

x(n) = \sum_{k=1}^{K} \sin(2\pi (f_k / f_s) n)   (1)
The output of the ideal filter is also a sum of sinusoids that exhibits a phase difference from the input. For the ideal filter, some of the output sinusoids will be attenuated and others will pass through the filter as per the design specifications. Thus,

d(n) = \sum_{k=1}^{K} A_k \sin(2\pi (f_k / f_s) n + \theta_k)   (2)

where A_k is the magnitude specification at frequency f_k, and \theta_k is the corresponding phase shift.
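For concreteness, equations (1) and (2) can be generated in a few lines of NumPy; the values of K, f_k, A_k and \theta_k below are illustrative only.

```python
import numpy as np

fs = 8000.0                               # sampling frequency (design example value)
fk = np.array([500.0, 2000.0, 3700.0])    # example sinusoid frequencies (K = 3)
Ak = np.array([1.0, 1.0, 0.0])            # pass the first two, stop the third
thetak = np.array([-0.4, -1.6, -2.9])     # example phase shifts

n = np.arange(256)
x = np.sin(2 * np.pi * np.outer(n, fk / fs)).sum(axis=1)                   # eq. (1)
d = (Ak * np.sin(2 * np.pi * np.outer(n, fk / fs) + thetak)).sum(axis=1)   # eq. (2)
```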
[Fig 1 block diagram: the input x(n) feeds both the ideal filter, which produces d(n), and the adaptive filter, which produces y(n); the error e(n) = d(n) − y(n) is fed back to adjust the adaptive filter weights.]
Fig 1. The adaptive process model.

The adaptive filter is an FIR filter with the transfer function:

H(z) = w(0) + w(1) z^{-1} + w(2) z^{-2} + \cdots + w(M-1) z^{-(M-1)}   (3)

where the filter coefficients, or weights, w(i), i = 0, 1, 2, ..., (M-1), are adjustable. The output y(n) of the adaptive filter is:

y(n) = \sum_{m=0}^{M-1} w(m) x(n - m)   (4)
3.0 THE WIENER SOLUTION
The error signal e(n) is the difference between the desired output d(n) and the adaptive filter output y(n), i.e.

e(n) = d(n) - y(n) = d(n) - \sum_{m=0}^{M-1} w(m) x(n - m)   (5)
As per Wiener filter theory (Widrow & Stearns, 1985), the optimum set of filter coefficients W_opt is given by:

W_opt = R^{-1} P   (6)
where the autocorrelation matrix R is a Toeplitz matrix with elements as given in equation (7):

r_{xx}(m) = \frac{1}{2} \sum_{k=1}^{K} \cos(2\pi (f_k / f_s) m)   (7)
P is the cross-correlation vector of the input signal samples with the desired signal samples, and is computed as shown in equation (8):

P = \frac{1}{2} \left[ \sum_{k=1}^{K} A_k \cos\theta_k, \; \sum_{k=1}^{K} A_k \cos(2\pi (f_k / f_s) + \theta_k), \; \ldots, \; \sum_{k=1}^{K} A_k \cos(2(M-1)\pi (f_k / f_s) + \theta_k) \right]^T   (8)
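Equations (6)-(8) translate directly into a few lines of NumPy. The example values in the sketch below (17 taps, 8 kHz sampling, dc-3.4 kHz passband, 30 frequency points) anticipate the design example of Section 5; the exact frequency grid is an illustrative assumption.

```python
import numpy as np

def wiener_fir(fk, fs, Ak, thetak, M):
    """Optimum FIR weights W_opt = R^{-1} P from equations (6)-(8)."""
    fk, Ak, thetak = (np.asarray(v, dtype=float) for v in (fk, Ak, thetak))
    m = np.arange(M)
    # Equation (7): autocorrelation sequence, then its Toeplitz matrix.
    r = 0.5 * np.cos(2 * np.pi * np.outer(m, fk / fs)).sum(axis=1)
    R = r[np.abs(m[:, None] - m[None, :])]
    # Equation (8): cross-correlation of the input with the desired output.
    P = 0.5 * (Ak * np.cos(2 * np.pi * np.outer(m, fk / fs) + thetak)).sum(axis=1)
    return np.linalg.solve(R, P)       # equation (6) without explicit inversion

fs, M = 8000.0, 17
fk = np.linspace(100.0, 3900.0, 30)    # 30 sampling points (illustrative grid)
Ak = (fk <= 3400.0).astype(float)      # ideal brick-wall magnitude
thetak = -((M - 1) / 2) * 2 * np.pi * fk / fs   # linear phase, theta = -alpha*omega
w_opt = wiener_fir(fk, fs, Ak, thetak, M)
```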
4.0 THE LMS SOLUTION
In the LMS algorithm, the computation of the optimum weight vector is done iteratively by minimizing the MSE. Thus the LMS is a steepest descent algorithm in which the weight vector is updated for every input sample as follows (Ifeachor & Jervis, 1993):

W_{j+1} = W_j - \mu \nabla_j   (9)
where W_{j+1} is the updated weight vector, W_j is the current weight vector, and \nabla_j is a gradient vector. The parameter \mu controls the convergence rate of the algorithm and also regulates adaptation stability. The value of \mu is restricted to the range 0 to [1/tr(R)], where tr(R) is the trace of the autocorrelation matrix (Widrow & Stearns, 1985). If P is the cross-correlation of the input and desired samples and R is the autocorrelation of the input samples, then the gradient vector at the jth sampling instant is:

\nabla_j = -2P_j + 2R_j W_j = -2X_j d(n) + 2X_j X_j^T W_j = -2X_j [d(n) - X_j^T W_j]   (10)
The quantity X_j^T W_j is the filtered output of an FIR filter with weight vector W_j and input signal vector X_j. Therefore the error signal is:

\varepsilon_j(n) = d(n) - X_j^T W_j   (11)
Substituting equation (11) into equation (10) gives:

\nabla_j = -2 \varepsilon_j X_j   (12)
Thus, the weight update in equation (9) becomes:

W_{j+1} = W_j + 2\mu \varepsilon_j X_j   (13)
It can be noted from the above derivation that the LMS algorithm gives an estimate of W_{j+1} without the need for direct computation of signal statistics. The need for matrix inversion, which can be computationally expensive, is also avoided. The computation procedure for the LMS algorithm is summarized below.
Step (i): Initially, the filter weights are set to an arbitrary fixed value, say w(i) = 0.0 for i = 0, 1, ..., (M-1).

Step (ii): The adaptive filter output is computed:

y(n) = \sum_{m=0}^{M-1} w(m) x(n - m)   (14)
Step (iii): The error estimate \varepsilon(n) is then obtained as the difference between the desired output and the adaptive filter output:

\varepsilon(n) = d(n) - y(n)   (15)

Step (iv): The adaptive filter weights are then updated so that there is symmetry about the centre weight. This ensures that the filter will exhibit linear phase characteristics:

W_{j+1}(i) = W_j(i) + 2\mu \varepsilon_j X_j(k - m)   (16)
Step (v): For each subsequent sampling instant, steps (ii) to (iv) are repeated. The process is stopped either when the change in the weight vector is insignificant as per some preset criterion, or after a given number of iterations. The comparison of the LMS algorithm to Wiener filtering is best illustrated by the computation of the MSE at each iteration stage. This is given by (Widrow & Stearns, 1985):

\xi = \xi_{min} + (W - W_{opt})^T R (W - W_{opt})   (17)

where \xi_{min} is the Wiener MSE.
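A direct transcription of steps (i)-(v) in NumPy is sketched below, using the sinusoidal training signals of equations (1) and (2). Enforcing the centre symmetry of step (iv) by averaging mirrored weight pairs is one simple realization of that requirement, not necessarily the one used in the original simulations.

```python
import numpy as np

def lms_design(fk, fs, Ak, thetak, M=17, mu=0.001, iterations=400):
    """FIR design by the LMS algorithm, steps (i)-(v); an illustrative sketch."""
    w = np.zeros(M)                                  # step (i): zero weights
    x_buf = np.zeros(M)                              # input signal vector X_j
    mse = np.empty(iterations)
    for n in range(iterations):
        phase = 2 * np.pi * (fk / fs) * n
        x_buf = np.roll(x_buf, 1)                    # shift the delay line
        x_buf[0] = np.sin(phase).sum()               # equation (1)
        d = (Ak * np.sin(phase + thetak)).sum()      # equation (2)
        y = w @ x_buf                                # step (ii), equation (14)
        e = d - y                                    # step (iii), equation (15)
        w = w + 2 * mu * e * x_buf                   # equation (13)
        w = 0.5 * (w + w[::-1])                      # step (iv): centre symmetry
        mse[n] = e ** 2                              # step (v): track the MSE
    return w, mse

fs = 8000.0
fk = np.linspace(100.0, 3900.0, 30)                  # same grid as the Wiener sketch
Ak = (fk <= 3400.0).astype(float)
thetak = -8.0 * 2 * np.pi * fk / fs                  # alpha = (17 - 1) / 2 = 8
w_lms, mse = lms_design(fk, fs, Ak, thetak)
```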
5.0 A DESIGN EXAMPLE
A digital FIR low pass filter with the following specifications is to be designed. Passband: dc to 3.4 kHz; Phase: linear; Sampling frequency: 8 kHz.

5.1 Wiener Approach
A pseudo filter with an ideal magnitude response is used. The passband has a magnitude of unity while the magnitude is zero in the stopband. An ideal brick-wall transition from the passband to the stopband is used. The phase is made to vary linearly with frequency in the passband. For a filter of length N, a good approximation of the phase response is given by (Rabiner & Gold, 1975):

\theta(\omega) = -\alpha \omega   (18)

where \alpha = (N-1)/2. The magnitude and the phase response of the simulated filter are illustrated in Figure 2.

5.2 The LMS Approach
The same ideal filter that is used in the Wiener approach simulation is also employed in the filter simulation using the LMS algorithm. The adaptive filter length is also kept at N = 17. The magnitude and phase response are shown in Figure 3. These characteristics are obtained after 400 iterations and with a value of \mu = 0.001. In order to monitor the progress of the LMS algorithm, the MSE was computed at each iteration stage. The results are illustrated in Figure 4.

6.0 DISCUSSION
From the results displayed in Figure 2 and Figure 3, it can be noted that the Wiener filter and the LMS filter have near-identical characteristics that closely match the design specifications. A summary of the filter parameters is given in Table 1. The figure given for the attenuation is the maximum side lobe attenuation in the stop-band.
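The cut-off frequency and stop-band attenuation reported in Table 1 can be read off the designed filters' frequency responses. A rough check, reusing w_opt from the earlier Wiener sketch, might look as follows; the -6 dB criterion and the 3.6 kHz stop-band boundary are illustrative choices.

```python
import numpy as np

def magnitude_db(w, fs, n_points=512):
    """Magnitude response |H(f)| in dB of FIR weights w, from equation (3)."""
    f = np.linspace(0.0, fs / 2, n_points)
    z = np.exp(-2j * np.pi * np.outer(f / fs, np.arange(len(w))))
    return f, 20 * np.log10(np.maximum(np.abs(z @ w), 1e-12))

f, mag = magnitude_db(w_opt, 8000.0)          # w_opt from the Wiener sketch
cutoff = f[np.argmax(mag < -6.0)]             # rough -6 dB cut-off estimate
stopband_peak = mag[f > 3600.0].max()         # rough side-lobe attenuation
```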
Fig 2. The Magnitude and Phase response of the Wiener filter.
Fig 3. The Magnitude and Phase response of the LMS filter.
[Plot: MSE versus the number of iterations, 0-400.]
Fig 4. Learning curve of the LMS adaptation process with \mu = 0.001.
Table 1. Filter parameters
Filter   | Cut-off frequency | Attenuation   | Phase
Ideal    | 3.4 kHz           | Not specified | Linear
Wiener   | 3.3 kHz           | -22 dB        | Linear
LMS      | 3.3 kHz           | -18 dB        | Linear
It can be noted that the Wiener filter does not offer any significant improvement over the LMS filter in terms of cut-off frequency accuracy. However, the Wiener filter exhibits deeper stop-band attenuation. A further observation is that both filters satisfy the linear phase requirement. The level of MSE obtained at each iteration stage, as illustrated in Figure 4, serves to indicate that the LMS algorithm is simply an approximation of the Wiener process. After the 50th iteration, the filter coefficients quickly converge to near-Wiener coefficients. Table 2 gives the coefficients of the Wiener filter and those of the LMS filter at convergence. It can be noted that all the 17 coefficients are very close. The above results can be improved by increasing the number of sampling points on the magnitude characteristics of the ideal filter to more than the 30 points that have been used in the
simulation. Further improvement in stop-band attenuation and in sharper pass-band transition may also be obtained by increasing the filter length.

Table 2. Coefficients of the Wiener and the LMS filter.
COEFFICIENT  | WIENER  | LMS
w(1)=w(17)   |  0.0292 |  0.0263
w(2)=w(16)   | -0.0143 | -0.0108
w(3)=w(15)   | -0.0107 | -0.0119
w(4)=w(14)   |  0.0399 |  0.0433
w(5)=w(13)   | -0.0736 | -0.0749
w(6)=w(12)   |  0.1041 |  0.1082
w(7)=w(11)   | -0.1311 | -0.1335
w(8)=w(10)   |  0.1473 |  0.1532
w(9)         |  0.8457 |  0.8427
7.0 CONCLUSION
In this paper, we have presented both the Wiener method and the LMS algorithm method for the design of an FIR filter from ideal filter specifications. The computer results show that both methods give filters with magnitude and phase characteristics that meet the design criteria. The application of adaptive signal processing algorithms in FIR filter design is hence illustrated.

REFERENCES
Fisher, M., Mandic, D., Bangham, J. and Harvey, R. (2000), Visualising Error Surfaces for Adaptive Filters and Other Purposes, IEEE International Conference on Acoustics, Speech & Signal Processing, pp. 3522-3525.
Ifeachor, E. C. and Jervis, B. W. (1993), Digital Signal Processing: A Practical Approach, Addison-Wesley Longman Ltd, Essex, UK.
Rabiner, L. R. and Gold, B. (1975), Theory and Applications of Digital Signal Processing, Prentice Hall International, New Jersey, USA.
Widrow, B. and Stearns, S. D. (1985), Adaptive Signal Processing, Prentice Hall International, New Jersey, USA.
AUGMENTED REALITY ENHANCES THE 4-WAY VIDEO CONFERENCING IN CELL PHONES
P. M. Rubesh Anand, Department of Electronics and Telecommunication Engineering, Kigali Institute of Science and Technology, Rwanda.
ABSTRACT
Third generation (3G) mobile networks are currently being deployed, and user demand for multimedia applications is ever increasing. Four-way video conferencing is one such application, which could also be possible through cell phones. This paper deals with the difficulties faced in maintaining video quality in cell phones during video conferencing between more than two persons, and analyses possible ways to overcome those difficulties. The end user's satisfaction determines the Quality of Service (QoS), and this satisfaction obviously depends on the realism of the image the user is watching. Due to the small screen size and lower bandwidth, the quality of the image in cell phones can never be made perfect from the transmitter side; the quality of the image has to be improved at the receiver side. Augmented Reality (AR) is one promising approach for achieving high video quality. This paper proposes an idea of using AR in cell phones for enhancing the video quality during video conferencing.
Keywords: Four-way video conferencing; cell phone; Augmented Reality; 3G; QoS.
1.0 INTRODUCTION
A live voice can be somewhat reassuring, but there is nothing like a live picture to bring a sense of relief or satisfaction. In 1964, AT&T demonstrated the picture phone at the New York World's Fair, which was not a success. Several reasons are frequently cited for its failure, but all the studies highlight that the failure was due to the non-realistic nature of the video that was transmitted. Though today's mobile phones have the capability of transmitting high quality videos, the question of the realistic nature of those videos still arises. It is obvious that new technology and new media bring new problems. Quality of service support in future mobile multimedia systems is one of the most significant and challenging issues faced by researchers in the field of telecommunications. Providing it on an end-to-end basis, necessarily with appropriate harmonization among heterogeneous wired and wireless networks, is still under research. It has been estimated that by the end of the year 2006 approximately 60% of all cell-phones will be equipped with digital cameras. Consequently, using Augmented Reality in cell-phones has a lot of end user applications. Compared to high-end Personal Digital Assistants (PDAs) and Head-Mounted Displays (HMDs) together with personal computers, the implementation of Augmented Reality (AR) on the 3G cell phone is a challenging task: ultra-low video stream resolutions,
little graphics and memory capability, as well as slow processors, set technological limitations. The main motivation for this research is the demand for better availability of services and applications, the rapid increase in the number of wireless subscribers who want to make use of the same handheld terminal while roaming, and the support needed for bandwidth intensive applications such as videoconferencing. The basic idea of AR is to enhance our perception of the real world with computer generated information. It is obvious that AR technology has a high potential to generate enormous benefit for a large number of possible applications. AR superimposes computer-generated graphics onto the user's view of the real world, allowing virtual and real objects to coexist within the same space. Most AR application scenarios proposed up to now belong to engineering; in particular, maintenance applications are very popular. In contrast to that trend, this paper set out to find an AR application that attracts the mass market. The target user should be not a specialized person like a maintenance engineer, but a general user. Successful mass applications in most cases result in great demand for devices and appropriate application services.

2.0 CURRENT ISSUES IN BANDWIDTH
High bit rates are required to provide end users with the necessary service quality for multimedia communications. With separation of call and connection/bearer control, media such as speech, video and data can be associated with one single call, and these can be handed over separately. 3G-324M is the 3GPP standard for 3G mobile phone conferencing. 3G networks are mostly based on Wideband Code Division Multiple Access (W-CDMA) technology, or on CDMA2000 technology, to transfer data over their networks. W-CDMA sends data in a digital format over a range of frequencies, which makes the data move faster, but also uses more bandwidth than digital voice services. UMTS, a wireless standard that works with GSM technology, also offers high data rates of up to 2 Mbps. In the case of multimedia applications such as videoconferencing, it is also necessary to maintain synchronization of the different media streams. Failure to provide a low enough transfer delay will result in an unacceptable lack of quality. For videoconferencing, the targets are similar, except that the play-out delay has to be much less, so that the end-to-end delay does not exceed 400 ms. The degree of jitter that must be compensated is up to 200 ms. Throughput must range from 32 Kbps upwards, including the specific rates of 384 and 128 Kbps for packet and circuit switching, respectively. Video compression techniques should be used to reduce the bandwidth required. The H.264/MPEG-4 Part 10 video coding standard was recently developed by the JVT (Joint Video Team). The basic technique of motion prediction works by sending a full frame followed by a sequence of frames that only contain the parts of the image that have changed. Full frames are also known as 'key frames' or 'I-frames', and the predicted frames are known as 'P-frames'. Since a lost or dropped frame can cause the sequence of frames sent after it to be illegible, new 'I-frames' are sent after a predetermined number of 'P-frames'. This compression standard saves bandwidth during video conferencing.
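The I-frame/P-frame pattern described above is easy to illustrate; the group-of-pictures length of 12 used here is an arbitrary example value, not one prescribed by the standard.

```python
def frame_types(n_frames, gop=12):
    """Label a video sequence with 'I' and 'P' frames: a full key frame
    starts each group of pictures, predicted frames fill the rest."""
    return ['I' if i % gop == 0 else 'P' for i in range(n_frames)]

print(''.join(frame_types(30)))   # IPPPPPPPPPPPIPPPPPPPPPPPIPPPPP
```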
3.0 AUGMENTED REALITY
Augmented Reality (AR) is a growing area in virtual reality research. As this field is still young, no standard methodology or product seems to be recognized yet. As a consequence, the development of applications is slowed, because the learning process can only be done through sparse conferences and literature reading. AR is a very interesting field because it requires multidisciplinary expertise to be achieved correctly. An augmented reality is a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments the scene with additional information. The application domains reveal that the augmentation can take on a number of different forms. The ultimate goal is to create a system such that the user cannot tell the difference between the real world and the virtual augmentation of it; to the user of such an ultimate system it would appear that he is looking at a single real scene. Most AR research focuses on 'see-through' devices, usually worn on the head, that overlay graphics and text on the user's view. Virtual information can also be in other sensory forms, such as sound or touch, but this paper concentrates only on visual enhancements. AR systems in HMDs track the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world. Through this process, known as registration, graphics software can place a three-dimensional image over it. AR systems employ some of the same hardware technologies used in virtual-reality research, but there is a crucial difference: whereas virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it.

4.0 PREVIOUS APPROACHES WITH AUGMENTED REALITY SYSTEMS
The fields of computer vision, computer graphics and user interfaces are actively contributing to advances in augmented reality systems. Previous approaches used an AR system consisting of an AR client and an AR server. In the first step, the built-in camera of the AR client is pointed at a certain object in the real world. The images from the camera (a video stream or single image) are sent to the remote AR server via wireless data transmission. In addition to the image data, interaction data are also sent from the AR client to the AR server; interaction data control the AR application. The AR server then analyses the image data received from the AR client, and the real world object is recognized. For successful recognition, certain information on this object must be stored in a database on the AR server. After successful recognition, additional graphical information is generated by the AR server and mixed with the real world images. The rendered, i.e. computer-generated, information is a kind of overlay on the real world images. There are two kinds of computer generated information: 3D data and 2D data. Handling of 3D data is especially challenging, since it needs to be rendered spatially correctly, i.e. in the correct position and orientation. All additional information has to be pre-defined by an AR authoring system and stored in the database of the AR server. The computer-enhanced images are encoded and sent back to the AR client via wireless data transmission, and the client decodes and displays them on its display.
5.0 DRAWBACKS IN THE EXISTING AR SYSTEMS
The existing AR systems are used only to display text and information about an object over the real scene, which makes them look like information systems; video enhancement has not been demonstrated in them. There is a requirement to run real-time video through the existing AR system, which in this case means that the system must be capable of displaying real-time frame rates (i.e. at least 20-25 frames per second) on the AR client. Real-time update rates are necessary because the position and orientation of an object will constantly vary in the real world image if the user moves around it. Since it is almost impossible to avoid delays in the system, lower update rates (1-2 frames per second) have to be accepted at this time. In contrast to the majority of AR applications available today, augmentation is not executed on the mobile device, because object recognition and augmentation require a considerable amount of computing power that is not available on mobile devices like cell phones. Furthermore, the AR server contains a database to store the information for object recognition and augmentation, whereas cell phones do not have the memory capacity of a server. So clients like cell phones can only display the video data and serve as interaction devices for the user.
6.0 PROPOSED AUGMENTED REALITY SYSTEM FOR CELL PHONES
In recent 3G mobiles, the capabilities of real-time video image processing, computer graphics systems and new display technologies have converged to make possible the display of a virtual graphical image correctly registered with a view of the 3D environment surrounding the user. By using all the advantages of recent 3G cell phones, the paper proposes an AR system which can enhance the video quality in cell phones during four-way video conferencing. During four-way video conferencing, the cell phone screen is split into four windows and the video is displayed independently in each. Fig. 1 shows the continuous presence screen layout in the cell phone display. The diagram shows the full-screen mode and the 2-way and 4-way video conferencing modes. During four-way video conferencing, the screen can also be used to display one large video sequence and three small video sequences, along with the control information in a small window.
[Screen layouts: Full Screen; 2-way; 4-way; 1 large + 4 small.]
Fig. 1: Continuous Presence Screen Layout in Cell phone Display
The proposed block diagram of the AR system (Fig. 2) differs from the traditional AR system, which has a client and a server: in this system the functions of the server, like object recognition and augmentation, are performed at the client side itself. The AR system processes the three incoming videos independently and simultaneously (the fourth one comes from the user's own camera and need not be processed). This AR system and the display system receive the images at the same time; the AR system identifies them separately while the display system projects them. The three identified individual images are aligned with their corresponding graphics image generators. Then each image is checked for its intensity and chromaticity levels by its respective graphics image generator. Image recognition is separated into two sub-modules: Low-Level-Vision (LLV) and High-Level-Vision (HLV). LLV uses image recognition algorithms to detect certain distinct characteristics of the image; for this purpose, dominant edges of the image are used in the current application. Based on the results of LLV, HLV then recognizes information about intensity and colour levels by comparison with recognition data stored in the database of the AR system (generally, human faces are stored in the database, as the paper deals only with face-to-face video conferencing). According to the recognized edges of the face, the intensity and/or colour of the image is enhanced for better viewing quality. The amounts of intensity and colour to be added to the original image are generated by the graphics image generators with the help of the database (as reference) and displayed immediately on the cell phone display system, over the original image which is already at its background. AR works in the following manner: all points in the system are represented in an affine reference frame. This reference frame is defined by four non-coplanar points, p0 ... p3; the origin is taken as p0 and the affine basis points are defined by p1, p2 and p3. Having defined the affine frame, the next step is to place the virtual objects into the image. The camera, real scene and virtual scene are defined in the same affine reference frame. Thus the original image, along with the corrected graphics image projected over it, forms the augmented image. Now the user can enjoy high quality video in all the screens. In this system, all the blocks shown in Fig. 2 are embedded inside the cell phone itself; the cell phone now acts both as server and client. The AR system requires only the video from the transmitting section, rather than the cell phone camera's intrinsic (focal length and lens distortion) and extrinsic (position and pose) parameters. Hence the integration between different types of cell phones will be easy, and future models can also be joined to the proposed AR system without any modification in hardware or software.
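In outline, the per-window pipeline (Low-Level-Vision edge detection, High-Level-Vision comparison against stored face statistics, and overlay generation) might look as follows; every concrete choice here (the gradient-based edge detector, the mean-RGB database entry, the additive correction) is a placeholder for components the paper only names.

```python
import numpy as np

def enhance_window(frame, face_db, gain_limit=0.3):
    """One pass of the proposed per-window enhancement (illustrative outline).

    frame   : H x W x 3 RGB array for one conferencing window (0-255 range)
    face_db : dict with reference face statistics, e.g. {"mean_rgb": ...};
              a stand-in for the AR system's recognition database
    """
    frame = frame.astype(float)
    # Low-Level Vision: a gradient-magnitude map stands in for the paper's
    # unspecified dominant-edge detector, locating the face region.
    gray = frame.mean(axis=2)
    gy, gx = np.gradient(gray)
    strength = np.hypot(gx, gy)
    edges = strength > np.percentile(strength, 90)

    # High-Level Vision: compare measured intensity/chromaticity levels
    # around the detected edges with the stored reference.
    measured = frame[edges].mean(axis=0) if edges.any() else frame.mean((0, 1))
    correction = np.clip(face_db["mean_rgb"] - measured,
                         -gain_limit * 255, gain_limit * 255)

    # Graphics image generator: an additive overlay registered on the
    # original image, clipped back to displayable range.
    return np.clip(frame + correction, 0, 255).astype(np.uint8)
```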
[Fig. 2 block diagram: the real scene from the 3G-324M mobile video stream is split into per-window images; graphics image generators 1, 2 and 3 are aligned to real images 1, 2 and 3 and generate the corresponding augmented images; the image coordinates and graphics image coordinates are registered; the augmented images are combined, aligned with the global affine plane, and projected on the cell phone screen as the augmented video.]
Fig. 2: Block Diagram of the Proposed Augmented Reality System for Cell phone
7.0 PERFORMANCE ISSUES IN THE PROPOSED AR SYSTEM
Augmented reality systems are expected to run in real time, in such a way that the user will be able to see a properly rendered augmented image all the time. This places two performance criteria on the system:
• the update rate for generating the augmenting image, and
• the accuracy of the registration of the real and virtual image.
Visually, the real-time constraint is manifested in the user viewing an augmented image in which the virtual parts are rendered without any visible jumps. To appear without any jumps, a standard rule is that the graphics system must be able to render the virtual scene at
least 10 times per second. This is well within the capabilities of current graphics systems in computers, but it is questionable in cell phones even for simple to moderately complex graphics scenes. For the virtual objects to appear realistic, more photorealistic graphics rendering is required; current graphics technology does not support fully lit, shaded and ray-traced images of complex scenes. Fortunately, there are many applications of augmented reality in which the virtual part is either not very complex or does not require a high level of photorealism. Failure on the second performance criterion has two possible causes. One is a misregistration of the real and virtual scene because of noise in the system. As mentioned previously, our human visual system is very sensitive to visual errors, which in this case would be the perception that the virtual object is not stationary in the real scene or is incorrectly positioned. Misregistrations of even a pixel can be detected under the right conditions. The second cause of misregistration is time delays in the system. As mentioned previously, a minimum cycle time of 0.1 seconds is needed for acceptable real-time performance. If there are delays in calculating the camera position or the correct alignment of the graphics camera, then the augmented objects will tend to lag behind motions in the real scene. The system design should minimize the delays to keep the overall system delay within the requirements for real-time performance. The combination of real and virtual images into a single image presents new technical challenges for designers of augmented reality systems. The AR system relies on tracking features in the scene and using those features to create an affine coordinate system in which the virtual objects are represented. Due to the nature of the merging of the virtual scene with the live video scene, a virtual object drawn at a particular pixel location will always occlude the live video at that pixel location. By defining real objects in the affine coordinate system, real objects that are closer to the viewer in 3D space can correctly occlude a virtual object. The computer generated virtual objects must be accurately registered with the real world in all dimensions. Errors in this registration will prevent the user from seeing the fused real and virtual images as one; discrepancies or changes in the apparent registration will range from distracting, which makes working with the augmented view more difficult, to physically disturbing for the user, making the system completely unusable. The phenomenon of visual capture gives the vision system a stronger influence in our perception, which allows a user to accept or adjust to a visual stimulus, overriding discrepancies with input from other sensory systems. In contrast, errors of misregistration in an augmented reality system are between two visual stimuli which we are trying to fuse to see as one scene. In cell phone video conferencing, faces are the usual image under consideration, so the problem of merging a virtual scene with a real scene is reduced to:
• tracking a set of points defining the affine basis, which may be undergoing a rigid transformation,
• computing the affine representation of any virtual scene, and
• calculating the projection of the virtual objects for a new scene view as linear combinations of the projections of the affine basis points in that new view.
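The affine re-projection idea in the list above can be sketched directly: a virtual point is expressed once as an affine combination of the basis points p0 ... p3, and its image in any new view is the same combination of the basis points' tracked image projections. The numeric values below are illustrative.

```python
import numpy as np

def affine_coords(point, p0, p1, p2, p3):
    """Coordinates (a, b, c) of a 3D point in the affine frame defined by
    origin p0 and basis points p1, p2, p3."""
    B = np.column_stack([p1 - p0, p2 - p0, p3 - p0])
    return np.linalg.solve(B, point - p0)

def reproject(coords, q0, q1, q2, q3):
    """Project the virtual point into a new view: under an affine camera,
    its image is the same affine combination of the basis points' images."""
    a, b, c = coords
    return q0 + a * (q1 - q0) + b * (q2 - q0) + c * (q3 - q0)

# Track the four basis points in each frame, then place virtual points:
p = [np.array(v, float) for v in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])]
coords = affine_coords(np.array([0.2, 0.5, 0.1]), *p)
q = [np.array(v, float) for v in ([10, 20], [30, 22], [12, 50], [14, 24])]
print(reproject(coords, *q))   # 2D image position of the virtual point
```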
Achieving a consistent lighting situation between real and virtual environments is important for convincing augmented reality applications. A rich palette of algorithms and techniques has to be developed to match illumination for video-based augmented reality and provide an acceptable level of realism and interactivity. Methods have to be developed which create consistent illumination between real and virtual components. Extracting diffuse real images and re-illuminating them under new synthetic lighting conditions is very difficult to achieve unless efficient image-processing software is available. Latency is just as much of a problem in augmented reality systems. Rather than just acquiring faster equipment, predictive methods which help to mitigate latency effects should be developed. Models of the human operator and of the position measurements should be considered in the algorithms that predict forward in time, in order to produce a stable AR video.

8.0 CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS
Augmented reality will truly change the way we view the world. The majority of AR achievements have so far found few real-world applications. As in many other technological domains, AR needs to provide sufficient robustness, functionality and flexibility to find acceptance and to support its seamless integration into our well-established living environments. This paper has taken a look at this future technology, its components and how it will be used. Many existing mobile devices (3G mobile phones) fulfill the basic requirements for augmenting real-world pictures with computer-generated content, but bandwidth and processing capability are the main obstacles to their success. Though the bandwidth requirements may be met by 3G standards and upcoming 4G standards, the processing efficiency depends solely on the hardware of the cell phones, which has to be improved. The paper concludes with a proposed AR system for cell phones, whose success relies mainly on the processing capability of the cell phone. The paper does not deal with cell-phone video conferencing and augmented reality under mobility. In UMTS, the maximum speed envisaged for the high-mobility category is 500 km/h using terrestrial services, to cover all high-speed train services, and 1000 km/h using satellite links for aircraft. The data rate is restricted during mobility to 144 kbps instead of the guaranteed rate of 2 Mbps. The mobility problem can be taken up in future research by improving the technology such that the virtual elements in the scene become less distinguishable from the real ones.
DESIGN OF SURFACE WAVE FILTER RESONATORS WITH CMOS LOW NOISE AMPLIFIER
Et. Ntagwirumugara, Department of Electronics and Telecommunication, Kigali Institute of Science and Technology, Rwanda
T. Gryba, IEMN, UMR CNRS 8520, Department OAE, Valenciennes University, BP 311, 59313 Valenciennes Cedex, France
J. E. Lefebvre, IEMN, UMR CNRS 8520, Department OAE, Valenciennes University, BP 311, 59313 Valenciennes Cedex, France
ABSTRACT
In this communication we present the analysis of a ladder-type filter with a CMOS low noise amplifier (LNA) in the frequency band of 925-960 MHz. The filter is developed on a structure with three layers: a ZnO film and aluminium (Al) electrodes on a silicon (Si) substrate, with Ti/Au for metallization. The filter is composed of six resonators on the same port. We then added a 943-MHz fully integrated CMOS low noise amplifier (LNA), intended for use in a Global System for Mobile communications (GSM) receiver, implemented in a standard 0.35 µm CMOS process. The design procedure and simulation results are presented in this paper. The amplifier provides a forward gain of 10.6 dB with a noise figure of only 1 dB while drawing 8.4 mA from a 2.5 V power supply.
Keywords: SAW filter; Resonator; Coupling-of-modes (COM); IDT; ZnO/Si/Al; CMOS; Low-noise amplifier; Low power; Low voltage; Noise figure.
1.0 INTRODUCTION
The expansion of small-size and dual-band mobile phones strongly requires the development of compact devices. Because of their small size, low height and light weight, SAW filters and LNAs are used as key components in GSM and GPS communication equipment. Recently, a new and exciting RF circuit capability came to light as radio front-end integrated circuits were developed in silicon or GaAs technologies. As the first block of a radio-frequency receiver following the antenna, the filter and low noise amplifier play a significant role. RF design for applications below 2 GHz has moved from the printed circuit board and discrete components to large-scale integration. For these reasons, co-integration of SAW filters with RF systems on the same substrate (Si or GaAs) appears to be a key solution. In the present work, SAW filters on the ZnO/Si/Al structure are studied by employing design approaches based on multiple-resonator techniques (Thorvaldsson, 1989; Wright, 1986).
2.0 SAW FILTER DESIGN
The COM Modelling
The transducer generates forward and backward propagating surface waves with amplitudes R(x) and S(x) that are coupled together (Fig. 1).
Fig. 1: Geometry of a SAW transducer (electrode period p)
The general COM equations describing both the multiple reflections and the SAW excitation of an IDT are given by (Chen & Herman, 1985; Suzuki et al, 1975):
$$\frac{dR(x)}{dx} = -j k_{11} R(x) - j k_{12} e^{2j\delta x} S(x) + j \alpha e^{j\delta x} V \qquad (1a)$$

$$\frac{dS(x)}{dx} = j k_{12} e^{-2j\delta x} R(x) + j k_{11} S(x) - j \alpha^{*} e^{-j\delta x} V \qquad (1b)$$

$$\frac{dI(x)}{dx} = -2j \alpha^{*} e^{-j\delta x} R(x) - 2j \alpha e^{j\delta x} S(x) + j \omega C_{s} V \qquad (1c)$$

where R(x) and S(x) are the slowly varying amplitudes of the forward and backward waves respectively; $k_{11}$ and $k_{12}$ are coupling coefficients; V and I are the applied voltage and the current drawn by the IDT; $\delta = (\omega - \omega_{0})/v_{f} = k_{f} - k_{0}$ is the wave mismatch, $k_{f} = \omega / v_{f}$ is the free wave vector and $k_{0} = 2\pi/\lambda_{0}$ ($\lambda_{0}$ is shown in Fig. 1); and $\omega$, $C_{s}$, W and $\alpha$ are respectively the radian frequency, the static capacitance per unit length, the width of the transducer and the transduction coefficient.

The COM Parameters $k_{11}$ and $k_{12}$
The parameters $k_{11}$ and $k_{12}$ are closely related to the more commonly used parameters of average velocity shift $\Delta v/v_{f}$ and acoustic mismatch $\Delta Z/Z_{0}$ respectively (Chen & Herman, 1985). These relationships were derived in (Suzuki et al, 1976; Thorvaldsson, 1986).
$$k_{11}/k_{f} = -(\Delta v / v_{f}) \qquad (2a)$$

$$k_{12}/k_{f} = -\tfrac{1}{2}(\Delta Z / Z_{0}) \qquad (2b)$$

$$\Delta v / v_{f} = \tfrac{k^{2}}{2} D_{k} + \tfrac{H_{m}}{\lambda} D_{m} \qquad (2c)$$

$$\Delta Z / Z_{0} = \tfrac{k^{2}}{2} R_{k} + \tfrac{H_{m}}{\lambda} R_{m} \qquad (2d)$$

where $k^{2}$ is the electromechanical coupling coefficient, $H_{m}$ is the metal film thickness and $\lambda$ is the acoustic wavelength. The first terms on the right-hand sides of (2c) and (2d) represent the piezoelectric loading effect, and the rightmost terms represent the mechanical loading effect.

The Electrical and Mechanical Perturbation Terms
The electrical perturbation terms $D_{k}$ and $R_{k}$ are given by (3a) and (3b) as ratios of Legendre functions $P_{\nu}(-\cos \pi\eta)$ and $P_{\nu}(\cos \pi\eta)$, where $P_{\nu}(x)$ is the Legendre function of order $\nu$ and $\eta$ is the metallization ratio (see B.11 and B.14 in Chen & Herman (1985)).
The mechanical perturbation terms $D_{m}$ and $R_{m}$ are given by (4a) and (4b) (Chen & Herman, 1985; Wright, 1986) in terms of the mass density $\rho$, the constants characterizing the overlay material, and the magnitudes $U_{i}$ and phase $\varphi$ of the free-surface mechanical displacements and electrical potential (Chen & Herman, 1985).
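As a numerical illustration of this model, the sketch below integrates the COM equations (1a)-(1c) across a uniform IDT and extracts an input admittance of the kind plotted in Figs. 5-8. The boundary conditions R(0) = 0 and S(L) = 0 are imposed by superposing a forced and a homogeneous initial-value run. Apart from v_f (Table 1 below), every parameter value is an illustrative assumption, not the paper's design data:

```python
import numpy as np
from scipy.integrate import solve_ivp

v_f = 5083.83        # free surface-wave velocity [m/s] (Table 1 below)
f0 = 942.5e6         # centre frequency [Hz] (assumed mid-band value)
lam = v_f / f0       # acoustic wavelength [m]
k0 = 2 * np.pi / lam
L = 51 * lam / 2     # transducer length [m] (assumed: 51 fingers, lam/2 pitch)
k11, k12 = -50.0, 40.0   # coupling coefficients [1/m] (assumed)
alpha = 20.0             # transduction coefficient (assumed)
Cs = 0.5e-10             # static capacitance per unit length [F/m] (assumed)

def com_rhs(x, y, w, V):
    """Right-hand sides of the COM equations (1a)-(1c)."""
    R, S, I = y
    delta = w / v_f - k0            # wave-number mismatch
    e = np.exp(1j * delta * x)
    dR = -1j * k11 * R - 1j * k12 * e**2 * S + 1j * alpha * e * V
    dS = 1j * k12 * R / e**2 + 1j * k11 * S - 1j * np.conj(alpha) * V / e
    dI = -2j * np.conj(alpha) * R / e - 2j * alpha * e * S + 1j * w * Cs * V
    return [dR, dS, dI]

def input_admittance(f):
    w = 2 * np.pi * f
    # Forced run (V = 1, no incident waves) and homogeneous run (V = 0, unit S):
    a = solve_ivp(com_rhs, (0, L), [0j, 0j, 0j], args=(w, 1.0), rtol=1e-8).y[:, -1]
    b = solve_ivp(com_rhs, (0, L), [0j, 1 + 0j, 0j], args=(w, 0.0), rtol=1e-8).y[:, -1]
    c = -a[1] / b[1]                # superposition weight enforcing S(L) = 0
    return a[2] + c * b[2]          # Y(f) = I(L) / V with V = 1

Y = input_admittance(f0)
print(f"G = {Y.real * 1e3:.3f} mS, B = {Y.imag * 1e3:.3f} mS")
```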
Simulation Results
The foregoing model is applied to calculate and optimize the performance of the ladder-type filter. The simulation results allow the choice of the optimal geometry and of the position of the IDT fingers, either on the surface or between the ZnO and the metallization. The optimization of the filter is carried out using a MATLAB program, acting on several parameters: the aperture, the number of fingers of the transducer and reflectors, the thickness of the IDT electrodes, and the acoustic wavelength. The structure of the ladder-type SAW filter, realized on a silicon substrate for the frequency band 925-960 MHz with R1 = R3 = R5 and R2 = R4 = R6, is as follows:
Fig. 2: The structure of the ladder-type SAW filter
Fig. 3: Fundamental structure of a ladder-type SAW filter
Fig. 4: SAW fabrication
A summary of the parameters for ZnO/Si/Al obtained with the theory described above is given in Tables 1 and 2.
Table 1: SAW parameters
v_f [m/s]: 5083.83
k²: 0.84
C_s [10⁻¹⁰ F/m]: -0.3706
D_m: 0.0575
R_m: 2.4718
D_k: -0.7285
R_k: -0.7178
Table 2: Design parameters
Acoustic wavelength λ₁ / λ₂ [µm]: 5.32 / 5.48
Number of fingers in each IDT: Nt = 51
Number of fingers in each reflector: Nr = 80
Aperture: W₁ = 80λ, W₂ = 160λ
Metal film thickness [Å]: 700
Metallization ratio: 0.5
Fig. 5: Conductance and Susceptance of Resonator 1
Fig. 6: Conductance and Susceptance of Transducer 1
Fig. 7: Conductance and Susceptance of Resonator 2
Fig. 8: Conductance and Susceptance of Transducer 2
Fig. 9: Insertion Loss
3.0 LNA CMOS DESIGN
For our study of a low noise amplifier in CMOS technology, we have used the tuned-cascode LNA topology shown below (Nguyen et al, 2004; Mitrea & Glesner, 2004; Shaeffer & Lee, 1997):
Fig. 10: Schematic of a cascode LNA topology
3.1 Equations Used for Simulation
For the cascode LNA with inductive source degeneration shown in Fig. 10, the input impedance is (Shaeffer & Lee, 1997):

$$Z_{in} = j\omega(L_{g} + L_{s}) + \frac{1}{j\omega(C_{gs} + C_{gd})} + \frac{g_{m} L_{s}}{C_{gs}} + R_{g} + R_{s}$$

and the transconductance, accounting for velocity saturation, is

$$g_{m} = \frac{\mu_{eff} C_{ox} W v_{sat}\, \rho(2 + \rho)}{2(1 + \rho)^{2}}, \qquad \rho = \frac{V_{od}}{L E_{sat}}$$
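A minimal sizing sketch based on the expression for Z_in above (neglecting C_gd): at resonance the real part reduces to R_g + R_s + g_m L_s/C_gs, so L_s can be chosen for a 50 Ω match and L_g to resonate out C_gs. All device values are assumptions for illustration, not the values used in the paper:

```python
import numpy as np

f0 = 943e6               # LNA centre frequency [Hz] (from the abstract)
w0 = 2 * np.pi * f0
Cgs = 0.4e-12            # gate-source capacitance [F] (assumed)
gm = 25e-3               # device transconductance [S] (assumed)
Rg, Rs = 2.0, 1.5        # gate and source parasitic resistances [ohm] (assumed)

Ls = (50.0 - Rg - Rs) * Cgs / gm     # source inductor for Re{Zin} = 50 ohm
Lg = 1.0 / (w0**2 * Cgs) - Ls        # gate inductor resonating out Cgs at w0

Zin = 1j * w0 * (Lg + Ls) + 1 / (1j * w0 * Cgs) + gm * Ls / Cgs + Rg + Rs
print(f"Ls = {Ls * 1e9:.2f} nH, Lg = {Lg * 1e9:.2f} nH, Zin = {Zin:.2f} ohm")
```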
A channel is said to be frequency-selective if the maximum excess delay exceeds the symbol duration (Tm > Ts). This condition occurs whenever the received multipath components of a symbol extend beyond the symbol's time duration, thus causing channel-induced inter-symbol interference (ISI) [7].

3.6 Nonselective or Flat Fading Channel
Viewed in the time domain, a channel is said to be frequency non-selective, or to exhibit flat fading, if Tm < Ts. In this case, all of the received multipath components of a symbol arrive within the symbol time duration; hence, the components are not resolvable. Here there is no channel-induced ISI distortion, since the signal time-spreading does not result in significant overlap among neighbouring received symbols. In general, for a wireless digital communication system, the significance of channel delay spread depends on the relationship between the rms delay spread of the channel and the symbol period of the digital modulation [5]. If the rms delay spread is much less than the symbol period, then delay spread has little impact on the performance of the communication system, and the shape of the power-delay profile is immaterial to the error performance. This condition is called "flat fading". On the other hand, if the rms delay spread is a significant fraction of, or greater than, the symbol period, the channel delay spread will significantly impair the performance of the communication system, and the error performance will furthermore depend on the shape of the power-delay profile. This condition is often referred to as "time-dispersive fading" or "frequency-selective fading". Since the power-delay profile is an empirical quantity that depends on the operating environment, for computer simulation purposes we can only postulate functional forms of the profile, and vary the parameters of these functional forms in order to obtain results applicable to a broad spectrum of wireless environments.
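A minimal sketch of this classification rule, taking "much less" to mean one tenth of the symbol period; the threshold and the example delay spread are assumptions for illustration:

```python
def classify_channel(rms_delay_spread_s: float, symbol_rate_sps: float) -> str:
    """Flat vs. frequency-selective, via the sigma_tau < Ts/10 rule of thumb."""
    Ts = 1.0 / symbol_rate_sps
    if rms_delay_spread_s < 0.1 * Ts:
        return "flat fading"
    return "frequency-selective fading"

# An assumed 1 us rms delay spread against two symbol rates:
print(classify_channel(1e-6, 25e3))   # 25 ksym/s -> flat fading
print(classify_channel(1e-6, 2e6))    # 2 Msym/s -> frequency-selective fading
```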
3.7 Doppler Spread
When a single-frequency sinusoid is transmitted in a free-space propagation environment where there is no multipath propagation, the relative motion between the transmitter and
receiver results in an apparent change in the frequency of the received signal. This apparent frequency change is called the Doppler shift (see Fig. 3).
Fig. 3: Illustration of Doppler shift in the free-space propagation environment.
The receiver moves at a constant velocity v along a direction that forms an angle α with the incident wave. The difference in path lengths traveled by the wave from the transmitter to the mobile receiver at points X and Y is given by

$$\Delta l = d \cos\alpha = v\,\Delta t \cos\alpha \qquad (1)$$

where
Δt is the time required for the mobile to travel from X to Y. The phase change in the received signal due to the difference in path lengths is therefore

$$\Delta\varphi = \frac{2\pi \Delta l}{\lambda} = \frac{2\pi v \Delta t}{\lambda} \cos\alpha \qquad (2)$$
where λ is the wavelength of the carrier signal. Hence the apparent change in the received frequency, or Doppler shift, is given by

$$f_{d} = \frac{1}{2\pi} \frac{\Delta\varphi}{\Delta t} \qquad (3)$$

$$= \frac{v}{\lambda} \cos\alpha \qquad (4)$$

$$= \frac{v}{c} f_{c} \cos\alpha \qquad (5)$$
In the last equation, c is the speed of light and f_c is the frequency of the transmitted sinusoid (the carrier). Note that c = f_c λ. Equation (5) shows that the Doppler shift is a function of, among other parameters, the angle of arrival of the transmitted signal. In a multipath propagation environment, in which multiple signal copies propagate to the receiver with different angles of arrival, the Doppler shift will be different for different propagation paths. The resulting signal at the receiver antenna is the sum of the multipath components. Consequently, the frequency spectrum of the received signal will in general be broader or wider than that of the transmitted signal, i.e., it contains more frequency components than were transmitted. This phenomenon is referred to as Doppler spread. Since a multipath propagation channel is time-varying, when there is relative motion the amount of Doppler spread characterizes the rate of channel variations [5].
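Equation (5) translates directly into code; the carrier frequency and vehicle speed below are illustrative:

```python
import math

def doppler_shift_hz(v_mps: float, fc_hz: float, angle_rad: float) -> float:
    """Eq. (5): f_d = (v / c) * f_c * cos(alpha)."""
    c = 3.0e8  # speed of light [m/s]
    return (v_mps / c) * fc_hz * math.cos(angle_rad)

# A 900 MHz carrier received in a vehicle at 100 km/h, wave arriving head-on:
v = 100 / 3.6
print(f"maximum Doppler shift: {doppler_shift_hz(v, 900e6, 0.0):.1f} Hz")  # ~83 Hz
```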
Doppler spread can be quantitatively characterized by the Doppler spectrum:

$$S(f) = \frac{K}{\sqrt{1 - (f/f_{max})^{2}}}, \qquad |f| < f_{max} \qquad (6)$$
The Doppler spectrum is the power spectral density of the received signal when a single-frequency sinusoid is transmitted over a multipath propagation environment. The bandwidth of the Doppler spectrum, or equivalently the maximum Doppler shift f_max, is a measure of the rate of channel variations. When the Doppler bandwidth is small compared to the bandwidth of the signal, the channel variations are slow relative to the signal variations. This is often referred to as "slow fading". On the other hand, when the Doppler bandwidth is comparable to or greater than the bandwidth of the signal, the channel variations are as fast as or faster than the signal variations. This is often called "fast fading".

4.0 PARAMETERS MEASURED TO DETERMINE FADING LEVELS
Fade Margin: Fade margin refers to the difference between the normal unfaded signal level and the receiver threshold, defined as the received signal level required to cause the worst 3 kHz slot of the receiver baseband to have a 30 dB S/N. It is defined as

Fade Margin (dB) = System Gain (dB) - Net Path Loss (dB)    (7)
Pathloss: Refers to the difference between transmitted and received power, or

Pathloss (dB) = Tx_Power - Rx_Power    (8)
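Both definitions are one-line computations; the link numbers below are assumed for illustration:

```python
def path_loss_db(tx_power_dbm: float, rx_power_dbm: float) -> float:
    """Eq. (8): difference between transmitted and received power."""
    return tx_power_dbm - rx_power_dbm

def fade_margin_db(system_gain_db: float, net_path_loss_db: float) -> float:
    """Eq. (7): margin between system gain and net path loss."""
    return system_gain_db - net_path_loss_db

pl = path_loss_db(tx_power_dbm=30.0, rx_power_dbm=-95.0)  # 125 dB
print(f"path loss = {pl:.0f} dB, fade margin = {fade_margin_db(140.0, pl):.0f} dB")
```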
Threshold Crossing Rate: This is the average number of times per second that a fading signal crosses a certain threshold level. Fade Duration: This is the average period of time for which the received signal is below a required or desired level. Received Signal Strength Indication (RSSI): This is the strength of the received signal in dB/dBm.
Bit Error Rate (BER): This is the number of erroneous bits in a transmitted message carried on a particular link. Since these links or channels are digital communication channels, the BER is used to evaluate the level of erroneous bits in the message.

5.0 FADING MITIGATION TECHNIQUES
Fading mitigation techniques can be divided into three categories:
• power control,
• adaptive waveform, and
• diversity [11].
Power control and adaptive waveform fade mitigation techniques are characterized by the sharing of some unused, in-excess resource of the system, whereas diversity fade mitigation techniques imply adopting a re-routing strategy. The former aim at directly compensating fading occurring on a particular link in order to maintain or improve the link performance, whereas diversity techniques make it possible to avoid a propagation impairment by changing the frequency band or the geometry.

5.1 Power Control
In power control techniques, the transmitter power or the antenna beam shape is modified in order to adapt the signal to the propagation conditions. Several implementations are possible depending on the location of the control technique.

6.0 ADAPTIVE WAVEFORM OR SIGNAL PROCESSING TECHNIQUES
Three types of method can be identified; they translate into reductions of the power required to compensate for additional attenuation on the link, and lead to modifications in the use of the system resource by acting on the bandwidth or on the data rate.
6.1 Adaptive Coding
When a link is experiencing fading, the introduction of additional redundant bits to the information bits to improve error correction capabilities (FEC) makes it possible to maintain the nominal BER while reducing the required energy per information bit. Adaptive coding consists in implementing a variable coding rate in order to match the impairments due to propagation conditions. A gain varying from 2 to 10 dB can be achieved, depending on the coding rate. The limitations of this fade mitigation technique are linked to additional bandwidth requirements for FDMA and larger bursts in the same frame for TDMA. Adaptive coding at constant information data rate therefore translates into a reduction of the total system throughput when various links experience fading simultaneously.
6.2 Adaptive Modulation
Under clear-sky conditions, high system capacity for a specified bandwidth can be achieved by using modulation schemes with high spectral efficiency, such as coded modulation or combined amplitude and phase modulation [5, 6]. In case of fading, the modulation schemes can be changed to implement more robust modulations requiring less symbol energy. As for adaptive coding, the aim of the adaptive modulation technique is to decrease the required bit energy per noise power spectral density ratio (E_b/N_0) corresponding to a given BER, by using a lower-level modulation scheme at the expense of a reduction in spectral efficiency.
6.3 Data Rate Reduction
With data rate reduction, the information data rate is decreased when the link experiences fading; this translates into a decrease by the same amount of the required carrier power to noise power spectral density ratio (C/N_0) if the required bit energy per noise power spectral density ratio (E_b/N_0) is kept constant (no change in the coding gains and constant BER). The transmitted bit rate is reduced accordingly and turns into a similar reduction of the occupied carrier bandwidth. Operation at constant resource, by keeping a constant transmitted data rate, is also possible by adjusting the coding rate accordingly; in that case, the coding gain adds to the reduction of the information data rate. This fade mitigation technique requires services that can tolerate a reduction of the information rate, such as video or voice transmission (assuming a change of the source coding at the expense of a reduction of the perceived quality) and data transmission (assuming an increase of transfer duration, or a reduced throughput for Internet access). Moreover, extra delay and/or complexity may be required due to the exchange of signaling between the transmitter and the receiver [11].
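A minimal sketch of how a transmitter might combine Sections 6.1-6.3, stepping down to more robust, lower-rate modes as the estimated fade deepens; the mode table and the thresholds are assumptions, not values from the text:

```python
def select_mode(fade_depth_db: float) -> tuple[str, float]:
    """Return an assumed (modulation/coding, relative information rate) pair."""
    if fade_depth_db < 3:
        return ("16-QAM, rate 3/4 code", 1.00)  # clear sky: high spectral efficiency
    if fade_depth_db < 8:
        return ("QPSK, rate 1/2 code", 0.33)    # moderate fade: more robust mode
    return ("BPSK, rate 1/3 code", 0.11)        # deep fade: lowest Eb/N0 requirement

for fade_db in (1, 5, 12):
    mode, rate = select_mode(fade_db)
    print(f"{fade_db:>2} dB fade -> {mode} (x{rate:.2f} information rate)")
```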
6.4 Diversity Techniques [7, 12]
Diversity fade mitigation techniques involve setting up a new link when the primary link is experiencing fading. The new link can be implemented at a different frequency (Frequency
Diversity), with a different geometry (Site or Station Diversity), or in a different period of time (Time Diversity).
6.5 Frequency Diversity
Provided that two different frequency bands are available, with frequency diversity the information is transmitted on a carrier using the frequency band least affected by the meteorological situation (typically the lowest frequency) when a fade is occurring. It requires a pair of terminals for each frequency at both link terminations, and suffers from inefficient use of the radio resource.
6.6 Site Diversity
With site diversity, the selection at one end of a terminal at a different location and in a different angular direction modifies the geometry of the link and prevents the path from going through an atmospheric perturbation which would produce a fade. Site diversity is based on the fact that the convective rain cells which produce deep fades are a few kilometers in size, and that the probability of simultaneous occurrence on two geometrically separated links is low. This technique requires re-routing the connection in the network.
6.7 Time Diversity
Time diversity aims at re-sending the information when the state of the propagation channel allows it to get through. This assumes that there are no or few time constraints on transmission of the data (e.g. push services), or that a variable delay (minutes to tens of minutes) is acceptable between data packets (non-real-time services).

7.0 FADE MITIGATION IN THE UGANDA CELLULAR ENVIRONMENT
Several methods have been adopted in the Uganda cellular communication networks for mitigating the effects of fading phenomena. These include antenna systems, multipath equalization techniques, proper frequency planning, frequency hopping, and discontinuous transmission and reception techniques.

8.0 CONCLUSION
Due to the presence of reflectors and scatterers in the environment, the signal transmitted through a wireless channel propagates to the receiver antenna via many different paths. The output of the receiver antenna is, therefore, a sum of many distorted copies of the transmitted signal. These copies generally have different amplitudes, time delays, phase shifts, and angles of arrival. This phenomenon is referred to as multipath propagation. The effects of multipath propagation can be classified into large-scale and small-scale variations. Small-scale variations include signal fading, delay spread, and Doppler spread.
Signal fading refers to the rapid change in received signal strength over a small travel distance or time interval. It occurs because of the constructive and destructive interference between the signal copies. Delay spread refers to the smearing or widening of a short pulse transmitted through a multipath propagation channel; it happens because different propagation paths have different time delays. Doppler spread refers to the widening of the spectrum of a narrow-band signal transmitted through a multipath propagation channel; it is due to the different Doppler shift frequencies associated with the multiple propagation paths when there is relative motion between the transmitter and the receiver. These small-scale effects can be quantitatively characterized using the signal amplitude distribution, power-delay profile, rms delay spread and Doppler spectrum. All these characterizations are empirical statistics that must be obtained using extensive field measurements. However, field measurement is expensive and difficult, and cannot be generalized for all situations. Because of the stochastic nature of the environment in which wireless systems operate, and because of the complexity of modern wireless systems, the use of simulation enables the design engineer to predict estimates of the degrading effects of fading, interference, power requirements and hand-off in a proposed system, before installation of the actual system. During the simulation of a multipath signal propagation environment, the power-delay profile and Doppler spectrum of the channel model can be investigated by properly specifying the distributions of model parameters such as the path delays, Doppler shifts and path phases. A special blend of advanced techniques and technologies is required to overcome fading and other interference problems in non-line-of-sight wireless communication.
REFERENCES
[1] A. Lapidoth and P. Narayan, "Reliable Communication Under Channel Uncertainty", 1998 IEEE International Symposium on Information Theory, Cambridge, MA, August 17-21, 1998.
[2] Bernard Sklar, "Rayleigh Fading Channels in Mobile Digital Communication Systems", IEEE Communications Magazine, July 1997, Part I: p. 90-100, Part II: p. 102-109.
[3] E. Biglieri, J. Proakis and S. Shamai, "Fading Channels: Information-Theoretic and Communications Aspects", IEEE Transactions on Information Theory, Vol. 44, No. 6, October 1998, p. 2619-2692.
[4] Mohamed-Slim Alouini and Andrea J. Goldsmith, "Capacity of Rayleigh Fading Channels Under Different Adaptive Transmission and Diversity-Combining Techniques", IEEE Transactions on Vehicular Technology, Vol. 48, No. 4, July 1999, p. 1165-1181.
[5] Yumin Lee, "Adaptive Equalization and Receiver Diversity for Indoor Wireless Data Communications", PhD Thesis, Stanford University, 1997.
[6] Andrea J. Goldsmith and Soon-Ghee Chua, "Adaptive Coded Modulation for Fading Channels", IEEE Transactions on Communications, Vol. 46, No. 5, May 1998, p. 595-602.
[7] Bernard Sklar, "Mitigating the Degradation Effects of Fading Channels", http://www.informit.com/content/images/art_sklar6_mitigating/
[8] Elena Simona Lohan, Ridha Hamila, Abdelmonaem Lakhzouri and Markku Renfors, "Highly Efficient Techniques for Mitigating the Effects of Multipath Propagation in DS-CDMA Delay Estimation", IEEE Transactions on Wireless Communications, Vol. 4, No. 1, January 2005, p. 149-162.
[9] Oghenekome Oteri and Arogyaswami Paulraj, "Fading and Interference Mitigation Using a Greedy Approach", Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA 94305.
[10] Ana Aguiar and James Gross, "Wireless Channel Models", Technical University Berlin, Telecommunication Networks Group (TKN), Berlin, April 2003, TKN Technical Reports Series.
[11] Ana Bolea Alamanac and Michel Bousquet, "Millimetre-Wave Radio Systems: Guidelines on Propagation and Impairment Mitigation Techniques Research Needs", COST Action 280, PM308, 1st International Workshop, July 2002.
[12] Andrea Goldsmith, "Wireless Communications", Cambridge University Press, 2005.
SOLAR POWERED Wi-Fi WITH WiMAX ENABLES THIRD WORLD PHONES
K. R. Santhi and G. Senthil Kumaran, Department of CELT, Kigali Institute of Science and Technology (KIST), Rwanda
ABSTRACT
The lack of access to reliable energy remains a significant barrier to sustainable socio-economic development in the world's poorest countries. The majority of their population is largely concentrated in rural areas, and access to power is often sporadic or altogether lacking. Without power, the traditional telecom infrastructure is impossible, so these lower-income people have lived for years without electricity or telephones, relying on occasional visitors and a sluggish postal system for news of the outside world. If electricity is playing havoc, there is a need to devise low-tech solutions to help bridge not only the digital divide but also the electrical divide. One such solution is a solar- and pedal-powered remote ICT system by Inveneo, a non-profit organization, which combines the power of the computer with a clever application of the increasingly popular Wi-Fi wireless technology powered by solar energy. With this system the rural villagers pedal onto the hand-built, bicycle-powered PC in the village, which sends signals, via an IEEE 802.11b connection, to a solar-powered mountaintop relay station. The signal then bounces to a server in the nearest town with phone service and electricity, and from there to the Internet and the world. This paper describes a prototype of how the wireless broadband WiMAX technology can be integrated into the existing system to gain global possibilities. With the newly suggested prototype, each village connects to one WiMAX station through a Wi-Fi Access Point (AP) powered by solar means. The WiMAX tower then sends the radio signal to a fixed fiber backbone that connects the villages to the Internet and enables VoIP communications.
Keywords: Wi-Fi, WiMAX, Solar powered ICT, Pedal powered PC
1.0 INTRODUCTION
The rural villages are frequently "off-grid", away from wired power grids or energy infrastructure of any kind. For the people in these remote locations, telecommunications facilities are very important, specifically the capability to make local calls and calls overseas. An innovative, low-cost, pedal-powered wireless network can provide communication through the Internet to off-grid villages. For them, telephony is the top priority, not the Internet. With this system, villagers will jump on stationary bikes to pedal their way onto the Information Superhighway and be able to make phone calls using Internet-based voice technologies. A complete system will provide simple computing, email, voice and Internet
capabilities for remote villages using pedal-powered PC, solar-powered Wi-Fi, WiMAX, VoIP and Linux technologies. While these might not exactly sound like big technology breakthroughs, simple solutions like these could take computing power, and in turn communication facilities, to electricity-starved areas, helping to bridge not only the digital divide but also the electrical divide. Section 3 describes the need for alternate energy sources for implementing telecommunication facilities in rural villages.

2.0 MOTIVATION OF RESEARCH
Rwanda is still one of the poorest nations in the world, heavily reliant on subsistence farming and international help. Disparities between rural and urban areas are widespread: over 94% of the Rwandan population without access to electrical power is located in rural areas; in fact only 6% of the population lives in urban areas. Energy consumption in Rwanda is greatly inferior to that needed for industrialization. The required minimum is generally thought to be 0.6 tep per person per year, whereas at the moment the available energy is of the order of 0.16 tep per person per year. Today 80% of the electricity consumed is used by the capital city, Kigali, where only 5% of the population lives. In the present context, the lack or unreliability of power and phone lines, as well as the high cost of access to existing infrastructure, severely limit Rwanda's development. For example, these isolated communities depend on intermediaries for information, often leading to weak bargaining positions, which results in undervaluing the prices of their crops or paying too-high prices for the materials they require. So an innovative power management system that takes pedal-powered personal computers, combined with solar-powered wireless technologies, to the power-starved villages is a necessity to improve the living conditions of the population. One such path-breaking initiative is the ICT prototype described in this paper.

3.0 NEED FOR ALTERNATE ENERGY POWERED ICT
Today, in most places, especially in rural areas, infrastructure and services are a key problem in the absence of proper communication facilities. Alternate-energy-powered ICT is a necessity for rural markets and an option for urban markets for the following reasons:
(i) Villagers in remote locations have lived for years without electricity or telephones, relying on occasional visitors and a sluggish postal system for news of the outside world. They have families scattered around the globe but no way to call relatives living abroad, or even in the next town.
(ii) There is a clear requirement for power back-ups that can enable the delivery of services to citizens in various e-governance projects.
(iii) Whether in villages, small towns or even metropolitan cities, long power cuts, lack of electricity and voltage fluctuations are part of every human's life in a poor country.
(iv) In countries like Rwanda, power and telephone service are absent in many places, and cellular phones struggle to get a signal in the hilly terrain.
(v) Cellular has sometimes proved to be effective as a platform where there is no electricity in some third-world countries. But laptops, PDAs, and cell and satellite phones all have batteries, and can operate on their internal batteries only for short periods of time. In a truly off-grid situation, recharging is still a problem.

4.0 ALTERNATE METHODS OF POWER MANAGEMENT
There are a number of ways to power small-scale ICT installations in locations that are not served by the electricity grid. When grid extension is not an option, a standalone or distributed power system can be installed to generate electricity at a location close to the site where the electricity is needed. Examples of small-scale, standalone power systems include generator sets powered by diesel, solar PV systems, small wind systems, and micro-hydro systems. As illustrated in Table 1 below, the cost of providing power in off-grid locations is influenced by the technology, the size or capacity of the system, and the ongoing operating costs of fuel and maintenance. Renewable energy, such as solar power and pedal power, is considered as a power solution. Some of the technical equipment used includes a wind generator, solar panels, a bank of deep-cycle batteries, etc.

Table 1: Cost of providing power by various methods

                        Capital Costs               Operating Costs (per kWh)
Grid extension          $4,000 to 10,000 per km     $80 to 120
Solar PV                $12,000 to 20,000 per kW    $5
Small Wind              $2,000 to 8,000 per kW      $10
Micro-Hydro             $1,000 to 4,000 per kW      $20
Diesel/Gas generator    $250 to 1,000 per kW        n/a
4.1 Generator
Using a generator or continuously running a vehicle engine is impractical because it provides far more power than most electronic communication devices need. At the same time, recharging many electronic devices can take hours, so charging them from a vehicle battery is not always advisable. Most car/truck batteries are designed to maintain a relatively high charge, and deep, frequent discharges will dramatically shorten the life of the battery and/or diminish its performance.
4.2 Wind Power
An Air 403 wind generator can be mounted on a pole. This wind generator is capable of providing 400 Watts in strong wind and features an internal regulator and protection against excessive wind speeds. The wind generator requires guy-wire stays fixed in four directions for safety.
4.3 Solar Power
Photovoltaic power is an interesting option worth considering for many remote ICT facilities. Small-scale PV systems turn sunlight directly into electricity for use by communications devices, computers and other kinds of equipment. An array of twelve 115-Watt solar panels effectively provides just over 1300 Watts (1.3 kW) of power during full sunlight, as described in Psand (2004). This amount of power at 12 volts needs very careful handling and regulation: 1.3 kW at 12 volts equates to a current of just over 100 amperes. The following are the advantages of a solar power system, as stated in Humaninet ICT (2005):
(i) Resource: Broad availability of the solar resource, sunlight, often makes PV the most technically and economically feasible power generation option for small installations in remote areas.
(ii) Maintenance: Since there are typically no moving parts in PV systems, they require minimal maintenance, so the technology is well suited to isolated locations and rural applications where assistance may be infrequently available.
(iii) Operation: The operation required for typical PV systems is the periodic addition of distilled water to the batteries when flooded batteries are used. More expensive systems, using sealed batteries, can run for extended periods without user intervention.
(iv) Environmental impacts: A solar system produces negligible pollutants during normal operation.
(v) Costs: Costs per installed Watt depend on system size, the installation site and component quality. Smaller systems (less than 1 kW) tend to be at the higher end of the cost range.
(vi) Viability: Unlike generator sets, PV systems are quiet and do not generate pollution. With proper design, installation and maintenance practices, PV systems can be more reliable and longer lasting.

4.4 Pedal Power
Compared to options such as solar panels and generators, using pedal power for small appliances like PCs and printers could reduce the cost of the project. Power is supplied by a car battery charged by a person pedaling a stationary bike nearby. One minute of pedaling generates about five minutes of power: 12 V at 4 to 10 A, depending on the pedaler's effort (see the sketch after the list below). The main focus must be how to apply the technology of pedal power to a laptop computer; there is a necessity for appropriate technology to be compatible with high technology. This task is a little difficult because computers are very sensitive to power surges: if you tried to plug your computer straight into the generator, you would more than likely crash your computer. To avoid this outcome, the generator is plugged into a battery that can then safely power your laptop computer. The battery delivers consistent power to the laptop, whereas power straight from the generator would be inconsistent due to the nature of pedaling. This setup can be used to power many other appliances, for example lights, televisions, radios and other battery-powered appliances. The following are the advantages of a pedal power system:
(i) The efficiency and variable speed of the output are two features that can be exploited. Basically, any device that was hand-cranked, foot-powered, or powered by a fractional-horsepower electric motor could potentially be converted to pedal power.
(ii) It requires no fuel and is not affected by time of day or weather, so it would make an excellent emergency generator.
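A minimal sketch of the arithmetic quoted above (12 V at 4 to 10 A while pedaling); the PC load figure is an assumption:

```python
def pedal_energy_wh(minutes: float, volts: float = 12.0, amps: float = 7.0) -> float:
    """Watt-hours delivered to the battery by one pedaling session."""
    return volts * amps * minutes / 60.0

pc_load_w = 15.0                      # assumed draw of a low-power village PC [W]
stored = pedal_energy_wh(minutes=10)  # ten minutes of moderate pedaling
print(f"{stored:.1f} Wh stored -> about {stored / pc_load_w * 60:.0f} min of PC use")
```

At these assumed figures, one minute of pedaling (about 84 W) buys roughly five minutes of use for a 15-20 W load, matching the rule of thumb above.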
(i) The efficiency and variable speed of the output are two features that can be exploited. Basically, any device that was hand cranked, foot-powered, or powered by a fractional horsepower electric motor could potentially be converted to pedal power. (ii) It requires no fuel, and is not affected by time-of-day or weather, so it would make an excellent emergency generator. 5.0 DESCRIPTION OF THE EXISTING INVENEO REMOTE ICT SYSTEM The Inveneo Remote ICT system provides access to essential information and communication tools in this region where there is limited and/or unaffordable electricity and telephone service. This low cost solar and bicycle powered computer provides basic computing, voice calling, and Internet access for villages without access to electricity or telecommunications and uses standard off-the-shelf PC, VoIP and Wi-Fi and open source software technologies (including Asterisk) that have been designed for low power consumption and integrated, ruggedized and adapted for the local environment and language.
The computer is powered by electricity stored in a car battery charged by foot cranks. These are essentially bicycle wheels and pedals hooked to a small generator; the generator is connected to a car battery, and the car battery is connected to the computer. Each computer connects to the others by a radio local area network (LAN). The rural villagers pedal onto the Internet via the bicycle-powered computer in the village, which sends signals to a repeater station, powered by solar means, on the ridge near the river valley. That station then sends the radio signal to the microwave tower nearby and eventually to a server in the town that connects the villages to the Internet.
639
Fig. 1: Inveneo Remote ICT System (village communication stations in a remote area, a solar-powered relay station, and a regional gateway to the PSTN and Internet)
6.0 THE SUGGESTED NEW REMOTE ICT SYSTEM WITH WIMAX
In the new system, the pedal-powered village PCs interconnect among themselves on a wireless LAN (local area network), and each PC in turn connects to an "access point" which serves to relay message packets between different destinations. The access point is connected to the WiMAX relay station. With WiMAX, point-to-point broadband links of about 50 km can be achieved. The WiMAX tower in turn connects the villages to the fiber backbone through a server in the nearest town, and from there to the Internet and the world. The access point is a solar-powered IEEE 802.11b (Wi-Fi) connection.
6.1 About WiMAX
WiMAX (Worldwide Interoperability for Microwave Access) has rapidly emerged as the next-generation broadband wireless technology, based on the 802.16x standards. The technology, officially known as 802.16, not only transfers data at up to 75 Mbps, but also goes through walls, has a maximum range of 30 miles, and provides an Internet connection up to 25 times faster than today's broadband. Access to the Internet is of prime importance, as it has turned into a fully converged network delivering voice, audio, image and video in addition to data. WiMAX extends the reach of IP broadband metropolitan fiber networks well beyond the relatively small LAN coverage of Wi-Fi in offices, homes or public-access hot spots, out to rural areas. WiMAX is expected to provide a flexible, cost-effective, standards-based means of filling existing gaps in broadband coverage, and of creating new forms of broadband services not envisioned in a "wired" world.
6.2 Description of the New System
The system is based upon low-power embedded PCs running the GNU/Linux operating system. The PC also sports two PCMCIA slots to accommodate an IEEE 802.11b wireless local-area network (WLAN) card supporting Wi-Fi wireless communications and a voice-over-IP card (H.323) supporting voice communications, as described in Craig Liddell (2002). The phone card (a DSP/phone interface card) can use a standard analog phone as well as a headset/microphone combination. All PCs in a village cluster use Wi-Fi to send data wirelessly to a central WiMAX tower. A single WiMAX tower can serve many clusters, as shown in Figure 2.
Fig. 2: Remote ICT system with WiMAX. Wi-Fi with WiMAX serving the solar/pedal-powered VoIP communication system of rural villages
The system uses pedaling to charge a car battery attached to a special cycle battery unit called RP2, which, in turn, is connected to the PC to run it in locations that do not have power. RP2 is a power management system that switches the computer to a backup
battery when the power phases out. The RP2 system can provide continuous power for about eight hours; moreover, one minute of pedaling yields five minutes of power. HCL Infosystems (Moumita, 2005) has designed the prototype of an external gadget that can be charged through pedaling and connects to a personal computer to run it under the most difficult power situations, and this can easily be used. The existence of open-source software supporting such wireless communications was central to this design decision. The relay point therefore has a router (the "relay PC") serving the access-point function for the villages and providing the link, or "backhaul", to the phone lines of the remote village. Though the bikes will power much of the system, the access point and the routers are solar powered and highly resistant to environmental factors. The system consists of four distinctive parts:
(i) Main server: This system is placed in a location where phone lines, Internet access (dial-up or any kind of broadband) and electricity are available. The server incorporates a modem (V.34) and a PSTN interface card capable of emulating a telephone and converting the voice signals to and from digital form. The main server acts as follows:
• it acts as the gateway to the local phone network (PSTN, analog or digital);
• it maintains the connection to the Internet (Internet access gateway, dial-up or broadband);
• it handles the voicemail system with mailboxes for individuals;
• it acts as the intranet web server for local content, file sharing, network monitoring, etc.
(ii) Solar-powered relay system: This system consists of a WiMAX tower and a router acting as a repeater, extending the range of the signal from the main server to the access point and further towards the village PCs. It also allows the range of the wireless network to be extended by relaying the signal from the village PCs to the server PCs. Other features include:
• it extends the range of village PCs up to 50 km away from the main server;
• it enables point-to-point or point-to-multipoint connections;
• multiple relay stations can be connected to the central server to cover large areas.
(iii) Access point: 802.11 wireless network links act as the access point, with a range of 2 to 6 km. The PC is wired to a regular telephone set and a directional Wi-Fi antenna, which transmits the Internet signal to the access point, from where it is routed via the router to the WiMAX tower.
(iv) The village PC: This system provides the users with access to a phone line, email, web browsing and basic computing applications. The village PCs are interconnected using wireless networking and have a telephone interface, so telephony is carried out using the standard telephone "human interface". Calls between villages and village clusters are routed by the router and cost nothing, like dialing another room from a hotel PBX; a hypothetical sketch of this routing follows.
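A toy sketch of that routing decision; every extension, address and gateway name below is hypothetical, not Inveneo's actual dial plan:

```python
# Assumed mapping of village extensions to SIP addresses on the wireless network.
EXTENSIONS = {
    "2001": "sip:village-a-pc@10.0.1.10",
    "2002": "sip:village-b-pc@10.0.2.10",
}

def route_call(dialled: str) -> str:
    """Keep village-to-village calls on the WLAN; hand the rest to the PSTN."""
    if dialled in EXTENSIONS:
        return EXTENSIONS[dialled]                 # free inter-village call
    return f"sip:{dialled}@pstn-gateway.example"   # via the town's main server

print(route_call("2002"))        # stays on the wireless network
print(route_call("0414123456"))  # handed off to the PSTN gateway
```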
7.0 ADVANTAGES OF THE REMOTE ICT SYSTEM WITH WIMAX
The use of the WiMAX technology in the ICT system contributes the following major advantages, among others:
• Practical limitations prevent cable and DSL technologies from reaching many potential broadband customers. The absence of a line-of-sight requirement, the high bandwidth, and the inherent flexibility and low cost of WiMAX make it useful in delivering broadband services to rural areas where it is cost-prohibitive to install landline infrastructure.
• WiMAX also provides backhaul connections to the Internet for Wi-Fi hotspots in remote locations.
• The network is designed and built in such a way that it will cost very little, around $25 a month, to operate.
• Even though the pedal device can also be powered by a solar or gas generator, the idea is that young people will earn money or computer time by pedaling the device.
• In addition to fulfilling the desire for telephone service, basic computer functionality is available for the preparation of documents and spreadsheet functions.
• Because much of the project can be built around nonproprietary, or "open source", software, villagers can essentially own the system.
8.0 CHALLENGES
The main challenges are the following:
• There is a need for separate rural PCs that take into account factors including power, serviceability and local conditions such as heat, cold and dust. Everything has to be designed for a high-humidity environment. The physical security of the devices also matters a lot.
• The success of the pedal-powered PC hinges on crucial issues such as the time taken to charge the battery via pedaling, the number of hours the PC can be used thereafter, and the price. It is estimated that it will take about one hour of labor to recharge the battery for 4 hours of computer/printer/LCD screen use.
• Commercially available access-point hardware is not programmable to the extent required for monitoring purposes. So it is necessary to use a PC in the relay station to know the state of charge of the battery, given the monsoon season, and any other information (regarding tampering, for instance) that may prove useful in assessing the state of the installation.
• Although English Web sites will remain in English, villagers will be able to send and receive messages only in their native language. So software featuring menus translated into the local language must be developed.
9.0 SUGGESTIONS
• Students can be trained to use the system and teach older villagers.
• Working with computer science and engineering students and teachers of nearby universities, a local-language version of the Linux-based graphical desktop can be developed.
• As a telecommunication system, long service life is obviously important, and the network design must accommodate it.
• The system has to be made as automatic as possible, and simple enough to be operated by villagers, in order to reduce operating costs.
10.0 USES OF THE REMOTE ICT SYSTEM
(i) Family communication: The global population shift from rural to urban communities inevitably breaks up families. These remote ICT networks allow distant family members to remain in contact, with positive results for community stability.
(ii) Health care: Health clinics can communicate in real time with doctors and nurses in hospitals, provide AIDS awareness and prevention information (N-Ten, 2005), and address complex medical treatment needs and emergencies.
(iii) Human rights: Communities get access to information allowing them to take part in shaping their own destiny. They share information on human rights, women's rights and land issues, improving farming techniques, etc.
(iv) Education: The integration of ICT in teaching curriculums increases the availability of literacy and other training, and provides youth the opportunity to acquire computer skills, as said in N-Ten (2005).
(v) Economic empowerment: Beyond the support for traditional economic practices, the introduction of information, communication and energy technologies allows for the development of useful trade skills related to those technologies, from solar technicians to software programming.
(vi) Disaster relief: Rapid deployment of phone and data networks after disasters.
(vii) Income generation (Inveneo, 2002): Through improved communication, farmers access market data to maximize crop value by taking it to the highest-paying nearby markets. Co-ops are formed between villages to improve buying power and share resources. This results in substantial income increases.
(viii) Aid distribution: Access to databases in real time provides resource information on grants and funding from government agencies and NGO entities.
(ix) Communication and transportation (N-Ten, 2005): Improves local communication using phone and email, eliminating the time and expense of the full-day journey between villages.

11.0 CONCLUSION
The suggested VoIP system with WiMAX can help send two-way voice signals with computers, mimicking the traditional phone system, and can make a big difference to people in rural areas. So each third-world country should adopt this system, which is cost-effective and can improve the living conditions of the people, in turn leading a path to economic empowerment. Many companies, like HCL Infosystems of India (Moumita, 2005), manufacture the new affordable model charged by pedal power that can be adopted. But governments and other aid agencies must develop policies to im
plement the communication infrastructure with WiMAX so that rural areas are easily connected. It is hoped that this system will soon become ubiquitous in the poor parts of the world and transform the third world.
REFERENCES
Alastair Otter (2004), Pedal power Linux gets Ugandans talking, Tectonic: Africa's source for open source news.
Andreas Rudin (2005), Solar and pedal power ICT, [Linuxola] e-magazine.
Ashton Applewhite (2005), IT takes a village, IEEE Spectrum Careers, www.spectrum.ieee.org/careers/careerstemplate.jsp?ArticleId=p090303.
Craig Liddell (2002), Pedal powered: Look Ma no Wires, http://www.australia.internet.com.
N-TEN (2005), Inveneo: Solar/Pedal powered ICT, Tech Success Story, http://www.nten.org/tehsucess-inveneo.
Inveneo (2002), Pedal and Solar powered PC and communications system, 2005, http://www.inveneo.org.
Michael (2003), Green wireless Networking, Slashdot: News for Nerds.
Lee Felsenstein (2003), Pedal powered Networking Update: Technical information for the Jhai PC and communications system, http://www.jhai.org.
Lee Thorn (2002), Jhai PC: A computer for most of the world, TEN Technology.
Roger Weeks (2002), Pedal powered Wi-Fi, Viridian note 00335.
Steve Okay (2003), Technical information for the Jhai PC and Communication System: Software, http://www.jhai.org.
Lee Thorn et al. (2003), Remote IT village project, Laos, The Communication Initiative.
David Butcher, Pedal powered generator, http://www.los-gatos-ca.us/davidbu/pedgen.html.
Michael G. Richard (2005), Inveneo: Solar and pedal powered phones for Uganda, Treehugger, http://treehugger.com/files/2005/09/inveneo-solar-a.php.
Digital Divide Network (2005), Generation gaps in Technology and Digital divide, www.digitaldivide.net/blog.
Moumita Bakshi Chatterjee (2005), Bridging Digital Divide: New pedal power to run your computers, Business Line, http://thehindubusinessline.com/2005/07/29/stories.
Cyrus Farivar (2005), VoIP phones give Villagers a Buzz, Wired News, http://www.wired.com/news/technology/168796-0.html.
Humaninet ICT (2005), Humaninet ICT Features, http://www.humaninet.org/ICTfeatureslist.html.
Vinutha V (2005), A PC in every home, http://www.expresscomputeronline.com/20051024/market03.shtml.
Pragati Verma (2005), PCs that can bridge the Electrical Divide, OWL Institute Learning Service Provider, http://owli.org/node/355.
Buzz Webmaster (2005), Closing the Digital Divide, http://www.politicsonline.com/blog/archives/2005/07.
Psand Limited (2004), iTrike: The world's First Solar Powered Wireless Internet Rickshaw.
ICT FOR EARTHQUAKE HAZARD MONITORING AND EARLY WARNING
A. Manyele1, K. Aliila, M.D. Kabadi and S. Mwalembe, Department of Electronics and Telecommunications, Dar es Salaam Institute of Technology, P.O. Box 2859, Dar es Salaam, Tanzania
1 Corresponding author: Tel: 255-0-744-586727; E-mail: [email protected]
ABSTRACT
Tanzania lies within the two branches of the East African rift valley system and has experienced earthquakes with magnitudes of up to 6.0. Six broadband seismic stations (Mbeya, Arusha, Tabora, Dodoma, Morogoro and Songea) currently record seismic activity in Tanzania independently of each other. Data recorded on magnetic tapes at each station are collected and delivered to the University of Dar es Salaam (UDSM) seismic processing center on a monthly basis. There is no real-time monitoring or reporting of earthquakes in Tanzania, which puts the population living in the rift valley at high risk from earthquakes and related hazards. With the emerging development of information and communication technology (ICT), this paper proposes a new and potentially revolutionary option for real-time monitoring of earthquake data and warning through the Short Message Service (SMS) and the Internet. The remote Tanzanian seismic recording stations will be connected to the UDSM center for real-time data collection using mobile phone networks. With this system, earthquake data will be sent through mobile phones as simple SMS messages to the computer at the UDSM data processing center. Using the Internet, the analyzed information can then be sent to other emergency information centers for real-time dissemination of earthquake hazard and early warning, as opposed to the current monthly reporting.
Keywords: Seismicity; East African rift valley system; Earthquakes; ICT; SMS; R-scale; Geological survey; Early warning; Earthquake monitoring; Real-time; Tsunami; Hazards; GPS; Radio transmitter.
INTRODUCTION
1.1 Seismicity of Tanzania
Tanzania lies between the two branches of the East African Rift valley system, which is seismically very active, and has experienced earthquakes from small to large magnitudes, as shown in Figure 1 below. Figure 1(a) shows the position of the two branches of the rift valley with respect to Tanzania; Figure 1(b) is the seismicity of Tanzania for the period 1990 to
2000, and Figure 1(c) is the seismicity of Tanzania with respect to seismic activities in the Indian Ocean. From the figure it can be observed that areas along Lake Tanganyika, Lake Rukwa and Lake Nyasa have experienced numerous earthquakes of magnitudes up to 6.6 on the Richter scale. Figure 1(d) shows the locations of the seismic stations that monitor the seismic activities of different parts of Tanzania. These seismic stations record the seismic activity of their particular areas of Tanzania independently of one another, and their data are collected in person, at intervals of one month, for delivery to the central recording site. Among the historical earthquakes that have caused damage to communities is the earthquake near the Mbozi area on 18/8/1994. In 2005 Tanzania was also among the countries affected by the tsunami: effects were felt countrywide, accompanied by loss of property as well as human lives. The information that Tanzania had been hit by the tsunami was obtained from international monitoring agencies, and nothing was announced to the public locally to alert people to the possible after-shocks of the tsunami. The Tanzanian seismic stations recorded the event, but its analysis had to wait for one month as per the collection calendar of the geological survey agency.
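As a rough illustration of the real-time reporting pipeline proposed in the abstract, the sketch below shows how a station reading might be packed into a short text message and decoded at the UDSM processing center. The fixed-field message format (station code; timestamp; magnitude; coordinates) and the alert threshold are assumptions made for illustration, not a format defined by the authors.

from datetime import datetime

def encode_sms(station, time, magnitude, lat, lon):
    # Pack a station reading into a compact SMS payload (hypothetical format).
    return "%s;%s;M%.1f;%.2f;%.2f" % (station, time.isoformat(), magnitude, lat, lon)

def decode_sms(text):
    # Parse an incoming SMS at the processing center.
    station, ts, mag, lat, lon = text.split(";")
    return station, datetime.fromisoformat(ts), float(mag.lstrip("M")), float(lat), float(lon)

msg = encode_sms("MBEYA", datetime(2006, 7, 16, 8, 45, 12), 4.7, -8.91, 33.46)
station, when, magnitude, lat, lon = decode_sms(msg)
if magnitude >= 5.0:  # hypothetical alert threshold
    print("ALERT from", station, "at", when)
else:
    print("Logged event:", msg)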
(a) East African Rift valley system
(b) Seismicity of Tanzania for the period 1990 - 2000
Md). The pressure on the concrete:
Ab = 0.39 x d x b = 0.39 x 84 x 100 = 3276 mm²
Ms = 5.92 kNm
Fs = 5.92 / 0.071 = 83.4 kN
σb = Fs / Ab = 83,400 / 3276 ≈ 25 N/mm²
The quality of the concrete is C25, so the concrete is able to withstand this load.
(iv) Shear Force Capacity of the Column
There is no shear reinforcement in the columns, so only the concrete contributes to the shear capacity. Again the shear capacity is determined according to Eurocode 2 (www.kenya.tudelft.nl, 29.03.06). Because the roof can be made of corrugated steel sheets, no normal force is taken into account.
Vcd = {τRd x k x (1.2 + 40 x ρl) + 0.15 x σcp} x bw x d
τRd = 0.25 x fctk,0.05 / γc = 0.26
k = 1.6 - d = 1.6 - 0.085 = 1.5
ρl = As / (bw x d) = 312 / (110 x 180) = 0.016
σcp = Nsd / Ac = 0
Vcd = {0.26 x 1.5 x (1.2 + 40 x 0.016)} x 110 x 180 = 14.2 kN
As can be seen, the maximum shear load is 4.3 kN, so the capacity exceeds the load (Vcd > Vmax). The above designed elements were produced using specially designed and manufactured moulds.
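The two capacity checks above can be reproduced with a few lines of code. The sketch below simply re-evaluates the paper's numbers; it is not a general Eurocode 2 implementation, and the variable names are ours.

def column_checks():
    # Concrete bearing check (values from the bending calculation above)
    Ms = 5.92e6              # design moment, N·mm
    z = 71.0                 # internal lever arm, mm (0.071 m)
    Fs = Ms / z              # steel force, N (about 83,400 N = 83.4 kN)
    Ab = 0.39 * 84 * 100     # bearing area on the concrete, mm^2 (= 3276)
    sigma_b = Fs / Ab        # about 25 N/mm^2, within grade C25

    # Shear capacity (Eurocode 2, no shear links, no axial force)
    tau_Rd, k, rho_l, sigma_cp = 0.26, 1.5, 0.016, 0.0
    bw, d = 110.0, 180.0     # section width and effective depth, mm
    Vcd = (tau_Rd * k * (1.2 + 40 * rho_l) + 0.15 * sigma_cp) * bw * d
    return sigma_b, Vcd / 1e3  # (about 25.5 N/mm^2, about 14.2 kN > 4.3 kN applied)

print(column_checks())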
3.0 PILOT HOUSE
3.1 General Aspects
The pilot house is located at the Bamburi Special Products factory site at Athi River, about 30 km south-east of Nairobi. It is a 7.5 m by 6.0 m two-bedroom self-contained house, shown in Figure 3 above, made of pre-cast steel fibre reinforced concrete elements as described. The reinforced pre-cast columns designed above secure the house. The roofing consists of timber trusses and iron sheets.
3.2 Construction
The overall sequence of the construction process of the pilot house is illustrated in Fig. 5 below.
Fig. 5 Sequence of Construction
(i) Foundation and Columns
The foundation consists of 300mm of hardcore overlaid with a 150mm reinforced concrete slab. In the foundation, provisions were made to allow for clamping of the pre-cast columns during erection, as shown in Figure 6 below.
Fig. 6: Foundation and column erection
(ii) Placing the Elements between Columns
The columns were designed with provisions for element fixing in the form of reduced thickness at designated heights. Placing of the elements between the columns was done through these sections by a rotating movement, which allowed horizontal insertion into one column; once aligned with the next column, the element was fitted to it and finally slid down. Figure 7 shows the rotation and placement process.
Fig. 7 Element fixing Process
(iii) Roofing
The roof consists of timber trusses and iron sheets. The trusses were fabricated on the ground and then mounted on a timber wall plate running along the top ends of the columns. The wall plate was fastened onto the columns by stud bolts, which were anchored in the columns using epoxy resin. Figure 8 below shows the roofing process.
Fig. 8: Roofing
3.3 Summary Method Statement of Construction of the Model House
The method used in the construction of the model house, using the elements designed in section 2 above, is outlined below.
• Foundation
(i) Set out the layout of the building and include an extra 500mm all around;
(ii) Excavate to reduced levels (500mm deep) and remove to tip;
(iii) Set out the slab according to the dimensions of the design;
(iv) Excavate the area of the slab and include 500mm extra all around;
(v) Lay a 300mm hardcore bed;
(vi) Excavate into the hardcore for the beams under the walls, external and internal, as shown on the design drawings;
(vii) Fix appropriate shuttering around the perimeter of the slab;
(viii) Place polythene sheeting (dpm) on top of the hardcore with minimum overlaps on all joints;
(ix) Position the 'column pocket provision' formwork appropriately;
(x) Concrete the slab.
• Columns and elements delivery on site
(xi) Receive the pre-cast columns and elements.
• Erection of columns & elements
(xii) Position the columns (vertical & horizontal control);
(xiii) Concrete the column pockets to anchor the columns;
(xiv) Place the walling elements.
• Installation of the roof
(xv) Fix the wall plate onto the columns;
(xvi) Fix the trusses onto the wall plate;
(xvii) Fix the purlins onto the trusses;
(xviii) Fix the roof covering onto the purlins.
3.4 Costs of the Pilot House
The cost of the pilot house is Kshs 650,000 (US$ 9,200). The detailed costs are shown in Table 3.1 below.
Table 3.1: Schedule of costs
Item                           KShs
Excavation and Earthworks      71,232.25
Floor Slab                     56,900.25
Walling Elements               153,255.60
Roof trusses                   142,000.00
Roof covering                  30,117.50
Finishes                       30,000.00
Labour                         145,051.68
Contingencies                  21,442.72
TOTAL (Kshs)                   650,000.00
TOTAL (US dollars)             9,200.00
4.0 EXPERIENCES
In the process of implementing the project a number of challenges were encountered. Most of the problems arose from deficiencies in the mould geometry during casting of the elements (particularly the wall elements). Some wall elements could not fit properly into the columns and were either too loose or too tight. This was caused by reductions and/or increases in the dimensions of the grooves at the wall element ends as a result of bulging and caving of the moulds during casting. The problem with the moulds was clearly due to inadequate thickness. Another geometric design problem was encountered on the vertical wall elements at the jambs (door and window openings). In the process of installing the jamb elements, it was realized that they fitted loosely, since they were attached to the columns on one side only, and the hinge formed by two elements at the mid-heights of the jambs (they are vertically aligned) caused an outward instability. A temporary solution to the problem was to bolt the elements onto the columns; however, a redesign of special elements for this purpose is being undertaken. The geometric design of the end gable elements did not work, as it was established that their anchorage was inadequate and a redesign was required; in this pilot project the gable ends were instead finished with timber boarding. In the foundation, provision of the column pockets for clamping the columns proved tricky: it was established that during compaction of the foundation layers the walls of the pockets caved in, and this necessitated planking and strutting. After column erection, wall plate anchorage on the
latter required that the columns be drilled at the top and a stud bolt fastened into them with epoxy resin. The drilling of the columns should not have been necessary; provision of the holes, or in-situ casting of the bolts, should have taken place during casting of the columns. Local availability of the steel fibres was yet another challenge, given that the project was targeted to be affordable. In this pilot project the fibres used were imported from Belgium, while the element core material, styrofoam, was rather expensive. However, research into alternative solutions for these materials is ongoing, and success will lead to a considerable reduction in the overall cost of the house.
5.0 CONCLUSIONS AND RECOMMENDATIONS
Pre-cast concrete technology for the development of affordable housing is feasible in Kenya and in any other country whose housing needs are acute. Based on the pilot house, it was possible to come up with a relatively affordable two-bedroomed house (Ksh 650,000) whose construction is customised and labour-intensive and which can be erected within a short period of time. It is clear that with the replacement of the expensive materials and/or mass production of pre-cast housing units of the same type, a lower cost per unit will be realised, and this will allow for the provision of affordable shelter. Furthermore, success in such technology will spur industrial growth in materials and construction, since most of the elements would be mass produced independently and construction of the houses separately undertaken for individuals, firms and/or schemes. However, a number of improvements and redesigns are still needed in order to finally arrive at a sound and less costly pre-cast house. Based on the experiences of this pilot research, it is recommended that, in order to improve the element geometry for robust anchorage, a redesign of the jamb elements and a change in mould thickness be undertaken. Alternative solutions for the expensive and/or imported materials should be sought so as to allow for the development of a cheaper pre-cast unit. Furthermore, new designs with reduced internal geometric measurements and lighter mix proportions should be investigated for the production of much lighter elements than those developed for the pilot house.
REFERENCES
Republic of Kenya, National Housing Development Programme 2003-2007, Ministry of Roads and Public Works, P.O. Box 30260, Nairobi, 2003, pp 1.
Republic of Kenya, Sessional Paper No. 3 on National Housing Policy for Kenya, Ministry of Lands and Housing, P.O. Box 30450, Nairobi, July 2004, pp 7.
CIVIS, Shelter Finance for the Poor Series, Cities Alliance, April 2003, issue IV, pp 5.
CP 3, Chapter V, Part 2: 1972, Wind Loads.
www.kenya.tudelft.nl, Low-cost housing in Kenya: Pre-cast low-cost housing with steel fibre reinforced concrete, as accessed on 29.03.06.
ENVIRONMENTAL (HAZARDOUS CHEMICAL) RISK ASSESSMENT (ERA) IN THE EUROPEAN UNION
Musenze Ronald S. and Michiel Vandegehuchte; Centre for Environmental Sanitation, Department of Biosciences Engineering, Ghent University, J. Plateaustraat 22, B-9000 Gent, Belgium.
ABSTRACT
The use of chemical substances causes complex environmental problems characterised by scientific uncertainty and controversy. Comprehensive risk assessments are now required by law, but they are still subject to debate, not least concerning how to interpret uncertainties.
When a chemical is discharged into the environment, it is transported and may undergo transformation. Knowledge of the environmental compartments in which the chemical substance will be present, and of the form in which it will exist, is paramount in the assessment of the possible impacts of the chemical on the environment. In the European Union (EU), risk assessment is often carried out and interpreted in accordance with the principles of sustainable development, as chemical substances can cause adverse effects in both short- and long-term exposure scenarios. According to the technical guidelines, ERA is completed in four steps: hazard identification, exposure assessment, effects assessment, and risk characterisation. Attention is drawn to the negative effects that chemicals may cause to the environment. The procedure is discussed in detail, with emphasis on exposure and effects assessment.
Key words: Environmental risk assessment, environmental compartment, exposure assessment, hazardous chemicals, sustainable development, hazard identification, risk characterisation.
1.0 INTRODUCTION
The European Union (EU) directive 93/67, regulation 1488/94 and directive 98/8 require that an environmental risk assessment be carried out on notified new substances, on priority existing substances, and on active substances and substances of concern in a biocidal product. The experience following more than half a century of use of man-made chemicals is divided. On the one hand, the benefits for society at large have been enormous. Production and use of chemicals play an indispensable role in e.g. agriculture, medicine and industry, and in the daily welfare of citizens. On the other hand, the use of many chemicals has caused severe, adverse and complex problems characterised by scientific uncertainty and
controversies due to their toxic or otherwise 'hazardous' properties, such as persistence and liability to bioaccumulate (Karlsson, 2005). Adverse effects of already regulated hazardous substances prevail. For example, the levels of chlorinated hydrocarbons in oceans and marine biota are still high enough for authorities in the European Union to recommend or issue food restrictions for pregnant women (SNFA, 2004) and for the population at large (EC, 2001a). In the USA, PCB levels in fishes in the Great Lakes are high enough to cause adverse health effects, such as impaired memory, among consumers with high fish consumption (Schantz et al., 2001). Among other examples of hazardous chemical environmental impacts, PCBs are also known for the 1968 Yusho cooking oil mass poisoning in Japan (Tsukamoto et al., 1969; Yang et al., 2005), while methylmercury (Masazumi et al., 1995, 1999; Timbrell, 1989), dioxins, and the medicine thalidomide are remembered for the 1956 Minamata disaster, the Agent Orange effects in Vietnam (Arnold et al., 1992), and the 1960s Softenon scandal respectively. Remediation costs for hazardous chemical substances are often high: the total remediation and waste management costs for PCBs in the EU for the period 1971-2018 have been estimated at 15-75 billion euro (NCM, 2004), health and environmental costs uncounted. In addition to this, new problems are being recognised. This is partly due to re-evaluations of earlier risk assessments, such as when the US National Research Council in 2001 considered arsenic in water to be ten times as carcinogenic as earlier thought (NRC, 2001), but also follows from completely new assessments of existing and new chemicals. In the European Union, for example, the use and the effects of single phthalates (such as DEHP) and brominated flame retardants (such as deca-BDE) are under scrutiny, and new regulations are being developed or imposed (ECB, 2004). Nevertheless, most substances in the European Union have not been assessed at all for their health and environmental risks (Allanou et al., 1999). As a result, a proposal for a new regulatory framework for the registration, evaluation, authorisation and restriction of chemicals (REACH) has been presented in the European Union (European Commission, 2003). This proposal is at present the most disputed element of EU chemicals policy and has given rise to a heated debate in more or less all relevant political fora (European Commission, 2001a, 2002, 2003; Scott and Franz, 2002; US House of Representatives, 2004). In the EU, a system has also been developed to aid ERA. The European Union System for the Evaluation of Substances (EUSES) is now widely used for initial and refined risk assessments rather than for comprehensive assessments (http://ecb.jrc.it/new-chemicals/). It is an approved decision-support instrument that enables government authorities, research institutes and chemical companies to carry out rapid and efficient assessments of the general risks posed by chemical substances to man and the environment.
1.1 RA and the Principles of Sustainable Development
The concept of sustainable development is a cornerstone of ERA. In the European Union, sustainable development is stated in primary law as an objective for the Union (European Commission, 1997), and a strategy for achieving the objective has been elaborated (European Commission, 2001b). The concept is often interpreted with reference to the World Commission on Environment and Development, meaning that 'the needs of the present' should be met 'without compromising the ability of future generations to meet their own needs' from environmental, economic and social perspectives (WCED, 1987). This implies a moral duty to develop society with a much stronger emphasis on improving the state of the environment, as well as the socioeconomic and environmental living conditions of present and future generations. Against this background, even chemicals which pose no adverse effects to the current generation are assessed for their inherent ability to do so in the long term. Due to the uncertainty and controversy surrounding the usage of chemicals, risk management aiming at sustainable development always faces three important questions. How should the uncertainty be interpreted and managed? Who should do the interpretation and decide on management strategies? How should the responsibility for the management be distributed? (Karlsson, 2005). The answers are offered by three commonly accepted principles in environmental policy: the precautionary principle, the principle of public participation, and the polluter pays principle, all adopted by the international community as well as in the European Union (EC, 1997; UNCED, 1993). A good Hazardous Chemical Risk Assessment (HCRA) should thus recognize and take into account risk uncertainty, and identify the polluter and the magnitude of the pollution expected or caused. Environmental risk assessment (ERA) is the link between environmental science and risk management. Its ultimate aim is to provide sufficient information for decision-making with the purpose of protecting the environment from unwanted effects of chemicals. ERA is normally based on a strategy of comparing estimates of effect and exposure concentrations. According to the European Commission (2003a), it is completed in four steps: hazard identification, exposure assessment, effects assessment, and risk characterization (Fig 1). Risk assessment (RA) is carried out for the three inland environmental compartments, i.e. the aquatic environment, the terrestrial environment and air, and for the marine environment.
Fig 1. Basic steps in Environmental Risk Assessment (van Leeuwen, Hermens, 1995).
In addition, effects relevant to the food chain (secondary poisoning) and to the microbiological activity of sewage treatment systems are considered. The latter is evaluated because the proper functioning of sewage treatment plants (STPs) is important for the protection of the aquatic environment. The main goal of RA strategies is to compare estimates of effect and exposure concentrations. In the EU, the procedure for calculating Predicted Environmental Concentrations (PECs) and Predicted No-Effect Concentrations (PNECs) is well laid out. Where this is not possible, the technical guidelines direct (1) how to make qualitative estimates of environmental concentrations and effect/no-effect concentrations (NOECs); (2) how to conduct a PBT (Persistence, Bioaccumulation and Toxicity) assessment; (3) how to decide on the testing strategy, if further tests need to be carried out; and (4) how the results of such tests can be used to revise the PEC and/or the PNEC.
1.3 Types of Emissions and Sources (TGD 2ed II, 2003)
Emission patterns vary widely, from well-defined point sources (single or multiple) to diffuse releases from large numbers of small point sources (like households) or line sources (like a motorway with traffic emissions). Releases may also be continuous or intermittent. Besides releases from point sources, diffuse emissions from articles during their service life may contribute to the total exposure to a substance. For substances used in long-life
materials this may be a major source of emissions, both during use and as waste remaining in the environment. Distribution processes may be relevant for the different environmental compartments. Transport and transformation ("fate") describe the distribution of a substance in the environment, or in organisms, and its changes with time (in concentration, chemical form, etc.), thus including both biotic and abiotic transformation processes. For each compartment, specific fate and distribution models are applied to determine the environmental concentrations of the chemical during exposure assessment.
2.0 EXPOSURE ASSESSMENT (EA)
Environmental exposure assessment is based on representative measured data and/or model calculations. One of the major objectives of predicting environmental concentrations is to estimate human exposure to chemicals; this is an important step in assessing environmental risk (Katsuya, Kyong, 2005). If appropriate, available information on substances with analogous use and exposure patterns or analogous properties is taken into account. EA is more realistic when detailed information on the use patterns, release into the environment and elimination, including information on the downstream uses of the substance, is available. Though the general rule is that the best and most realistic information available should be given preference, it is often useful to initially conduct an exposure assessment based on worst-case assumptions, using default values when model calculations are applied. Such an approach is also used in the absence of sufficiently detailed data and if the outcome is that a substance is "of concern". The assessment is then, if possible, refined using a more realistic exposure prediction. Because exposure estimates vary with topographical and climatological variability, generic exposure scenarios, which assume that substances are emitted into a non-existing model environment with predefined, agreed environmental characteristics, are always used (TGD 2ed II, 2003). The environment may be exposed to chemical substances during all stages of their life-cycle, from production to disposal or recovery. For each environmental compartment (air, soil, water, sediment) potentially exposed, the exposure concentrations should be derived. In principle, the assessment procedure considers production, transport and storage, formulation (blending and mixing of substances in preparations), industrial/professional use (large-scale use including processing (industry) and/or small-scale use (trade)), private or consumer use, service life of articles, and waste disposal (including waste treatment, landfill and recovery) as the stages in the life-cycle of a chemical substance. Exposure may also occur from sources not directly related to the life-cycle of the chemical substance being assessed. Due to the cumulative effect that gives rise to a "background concentration" in the environment, previous releases are always considered during the EA of existing chemicals. Consideration is also given to the degradability of the chemical substance under assessment and to the properties of the products that might arise.
2.1 Measured / Calculated Environmental Concentrations (TGD 2ed II, 2003)
Concentrations of new substances are always estimated by modelling, while data for a number of existing substances in the various environmental compartments have already been gathered. It may seem that measurements always give more reliable results than model estimations. However, measured concentrations can have a considerable uncertainty associated with them, due to temporal and spatial variations. The two approaches complement each other in the complex interpretation and integration of the data. Therefore, the availability of adequate measured data does not imply that PEC calculations are unnecessary. Where different models are available to describe an exposure situation, the best model for the particular substance and scenario is used and the choice explained.
When PECs have been derived from both measured data and calculation, they are compared; if they are not of the same order of magnitude, analysis and critical discussion of the divergences are important steps in developing an ERA of existing substances.
2.2 Model Calculations (TGD 2ed II, 2003)
Calculation of the PEC value begins with the evaluation of the primary data; the estimation of the substance's release rate, based upon its use pattern, follows. All potential emission sources are analysed, and the releases and the receiving environmental compartment(s) are identified. The fate of the substance in the environment is then considered by assessing the likely routes of exposure and the biotic and abiotic transformation processes. Furthermore, secondary data (e.g. partition coefficients) are derived from primary data. The quantification of the distribution and degradation of the substance (as a function of time and space) leads to estimates of PEClocal and PECregional. PEC calculation is not restricted to the primary compartments (surface water, soil and air) but also includes secondary compartments such as sediments and groundwater. Transport of the chemical substances between the compartments is always, where possible, taken into account. As the complexity (and relevance) of the model increases, the reliability usually decreases, since the large number of interacting parameters increases the rate of random errors and makes the tests less reproducible. Exposure to chemical substances can only be minimised after identification of the emission sources. Multimedia mathematical models (Cowan et al., 1995; Mackay, 2004) are extensively used at the screening stage of risk assessment (Katsuya, Kyong, 2005).
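The multimedia fate models cited above are far richer than can be shown here, but the core idea of turning an emission rate into a predicted concentration can be illustrated with a deliberately simple steady-state, single-compartment water box. All numerical values below, including the dilution factor, are invented for illustration and do not come from the TGD.

def pec_local_water(emission_g_per_day, stp_removal_fraction,
                    effluent_m3_per_day, dilution_factor):
    # Crude steady-state local PEC in surface water (mg/L):
    # the load surviving treatment, diluted into the receiving river.
    load = emission_g_per_day * (1.0 - stp_removal_fraction)  # g/day to water
    conc_effluent = load / effluent_m3_per_day                # g/m3 = mg/L
    return conc_effluent / dilution_factor

# Example: 500 g/day emitted, 87% removed in the STP,
# 2000 m3/day effluent, dilution factor of 10 in the river.
print(pec_local_water(500.0, 0.87, 2000.0, 10.0))  # -> 0.00325 mg/L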
3.0 EFFECTS ASSESSMENT (TGD 2ed II, 2003)
The effects assessment comprises hazard identification and the concentration-response (effects) assessment. Hazard identification is always the first step in ERA. It is basically the visualisation of what can go wrong as a result of accidental or deliberate exposure to the chemical substance(s). It also involves the identification of emissions and their respective sources, and its main aim is to identify the effects of concern. For existing substances and biocidal active substances and substances of concern in biocidal products, the aim is also to review the classification of the substance, while for new substances a proposal on classification is made. Dose-response (effect) assessment is a study of the effects of varying concentrations of a chemical to which organisms are exposed, in relation to time. It is a quantification step whose ultimate purpose is to determine the Predicted No-Effect Concentration (PNEC), where possible. For both steps of the effects assessment, data are evaluated with regard to their adequacy and completeness. Evaluation is of particular importance for existing substances, as tests will often be available with non-standard organisms and/or non-standardised methods. Evaluation of adequacy addresses the quality and relevance of data. Indeed, the effects assessment process is suitably started with the evaluation of the available ecotoxicological data. Individual tests are described in terms of their (i) cost, (ii) ecological relevance (validity), (iii) reliability (reproducibility), and (iv) sensitivity. In this context, the term cost can refer to the monetary price of the execution of a test. Alternatively, it can also be used to denote the total social loss or detriment associated with a test; in the latter sense, the sacrifice of animal welfare is part of the costs of the tests. By validity is meant that the test measures what it is intended to measure. Ecological relevance is the type of validity that is aimed at in ecotoxicology, namely that the test is appropriate for measuring potential hazards in the environment. By reliability is meant that repeated performance of the test will yield concordant results, and sensitivity means that the test has sufficient statistical power to reveal an effect even if it is relatively small. The notion of sensitivity can be operationalized in terms of the detection level (Hansson, 1995). With a sufficiently large number of tests fulfilling the above four criteria, the scientific uncertainties inherent in testing and risk assessment could be substantially reduced. But in reality every test is a trade-off between these aspects, and the combination of characteristics of the test is more or less optimized. Therefore different tests are combined into test systems, and the combinations are made so that the characteristics of the individual tests supplement each other. Just as for single tests, the design of a test system is aimed at optimizing the four factors. Most test systems are thus tiered (e.g., van Leeuwen, Hermens, 1995), which means that initial tests are used to determine the need for further testing, often in several stages. Different substances will take different paths through the test system, depending on the outcomes obtained in the tests to which they are successively subjected. Usually low cost is prioritized at lower tiers (to enable testing of many compounds), whereas reliability and ecological relevance increase at higher tiers (to enable well-founded risk management decisions).
Ecotoxicity tests may be acute or chronic as regards their duration, and either mortality or sub-lethal effects such as growth, reproduction and morphological deformation may be used as the test criteria. The exposure systems are static, recirculating, renewal or flow-through. For simplicity, single-species tests are used, but to face the challenges of ecological reality and complexity, multi-species tests are best suited. Two important assumptions that are usually made concerning the aquatic environment are: (1) ecosystem sensitivity depends on the most sensitive species, and (2) protecting ecosystem structure protects community function. These assumptions allow, however uncertain, an extrapolation to be made from single-species short-term toxicity data to ecosystem effects. Assessment factors, as proposed by the US EPA and OECD (1992d), are then applied to predict a concentration below which an unacceptable effect will most likely not occur. Four major challenges, however, still remain: (1) intra- and inter-laboratory variation of toxicity data; (2) intra- and inter-species variation (biological variance); (3) short-term to long-term toxicity extrapolation (acute vs chronic); and (4) laboratory data to field impact extrapolation (additive, synergistic and antagonistic effects from the presence of other substances may also play a role here). The approach of statistical extrapolation is still under debate and needs further validation. The advantage of these methods is that they use the whole sensitivity distribution of species in an ecosystem to derive a PNEC instead of always taking the lowest long-term No Observed Effect Concentration (NOEC).
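The assessment-factor route from toxicity data to a PNEC can be written down in a few lines. The factors used in this sketch (1000 on the lowest of the short-term L(E)C50 values, 10 on the lowest of the long-term NOECs) follow the commonly quoted TGD scheme for the aquatic compartment, but the factor applied in a real assessment depends on the data set, so treat this only as an illustration.

def pnec_aquatic(acute_lc50s=None, chronic_noecs=None):
    # Derive an aquatic PNEC (same units as the input data) by dividing
    # the most sensitive endpoint by an assessment factor.
    if chronic_noecs:                 # long-term NOECs, three trophic levels
        return min(chronic_noecs) / 10.0
    if acute_lc50s:                   # only short-term data: larger factor
        return min(acute_lc50s) / 1000.0
    raise ValueError("no ecotoxicity data supplied")

# Example with invented data (mg/L) for algae, daphnia and fish:
print(pnec_aquatic(acute_lc50s=[1.2, 0.8, 3.5]))       # -> 0.0008 mg/L
print(pnec_aquatic(chronic_noecs=[0.10, 0.05, 0.20]))  # -> 0.005 mg/L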
3.1 Risk Characterisation
The risk decision process is traditionally divided into two stages, risk assessment and risk management. Risk assessment is the major bridge linking science to policy (Fig. 2). In risk assessment, scientific data on toxicological and ecotoxicological effects are used to determine possible adverse effects and the exposure levels at which these effects may be expected. Risk is thus characterised as the ratio of estimated ordinary (or worst-case) exposure levels to levels estimated to be harmful. An assessment thus compares "predicted environmental concentrations" with the "predicted no-effect concentration", as well as the "no observed effect level" or the "lowest observed effect level" with ordinary exposure levels (European Commission, 1993, 1994). Depending on whether the risk characterisation is performed for a new substance, for an existing substance or for a biocidal active substance, different conclusions can be drawn on the basis of the PEC/PNEC ratio for the different endpoints, and different strategies can be followed when PEC/PNEC ratios greater than one are observed. Therefore, the descriptions of the risk characterisation approaches are given separately for new substances, for existing substances and for biocides. In general, the risk characterisation phase is an iterative process that involves determination of the PEC/PNEC ratios for the different environmental compartments, whereby further information or testing may lead to redefinition of the risk quotient until a final conclusion regarding the risk can be reached.
For the aquatic and terrestrial ecosystems, including secondary poisoning, a direct comparison of the PEC and PNEC values is carried out, presuming that the relevant data are available. If the PEC/PNEC ratio is greater than one, the substance is "of concern" and further action has to be taken. For the air compartment, usually only a qualitative assessment of abiotic effects is carried out. If there are indications that one or more of these effects occur for a given substance, expert knowledge is consulted or the substance is handed over to the relevant international group, e.g. to the responsible body in the United Nations Environment Programme (UNEP) for ozone-depleting substances. In some cases an assessment of the biotic effects on plants can also be carried out (TGD 2ed II, 2003). For top predators, if the ratio of PECoral/PNECoral is greater than one and a refinement of the PECoral or the PNECoral is not possible or reasonable, risk reduction measures are considered. For microorganisms in sewage treatment systems, if the ratio of PECstp to PNECmicroorganisms is greater than one, the substance may have a detrimental effect on the function of the STP and is therefore "of concern" (TGD 2ed II, 2003). In all cases where PEC/PNEC ratios greater than one have been calculated, the competent authority consults the industry concerned about the possibility of obtaining additional data on exposure and/or ecotoxicity so as to refine the assessment. The decision to request additional data should be transparent and justified, and should be based on the principles of lowest cost and effort, highest gain of information, and the avoidance of unnecessary testing on animals. Risk characterization is used as part of the basis for risk management decisions on appropriate measures to handle the risk. Such decisions range from taking no action at all, via limited measures to reduce the highest exposures, to extensive regulations aiming at completely eliminating the risk, for instance by prohibiting activities leading to exposure. In the risk management decision, factors other than the scientific assessment of the risk are taken into account, such as social and economic impacts, technical feasibility, and general social practicability. According to the European Commission Technical Guidance Document for risk assessment (European Commission, 2003), "the risk assessment process relies heavily on expert judgment".
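A minimal sketch of the characterisation logic just described: compute the PEC/PNEC ratio per compartment and flag anything above one as "of concern". The compartment names and values are invented, and the iterative refinement of the real procedure is collapsed into a single pass.

def characterise(pec, pnec):
    # Return a conclusion per compartment from PEC/PNEC ratios.
    conclusions = {}
    for compartment in pec:
        ratio = pec[compartment] / pnec[compartment]
        if ratio > 1.0:
            conclusions[compartment] = "of concern: refine data or reduce risk"
        else:
            conclusions[compartment] = "no further action needed at present"
    return conclusions

pec = {"surface water": 0.016, "soil": 0.002, "STP": 0.30}   # mg/L or mg/kg
pnec = {"surface water": 0.005, "soil": 0.010, "STP": 0.20}
print(characterise(pec, pnec))
# Surface water and the STP exceed their PNECs and would trigger refinement.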
3.2 Classification and Labelling
Once the substances have been satisfactorily assessed, the scientific information is put into a form consumable by the masses, in a category/class format, by labelling. The classification and labelling system is particularly interesting since, according to current regulations, certain aspects of the classification of a substance should depend only on the information that is summarized and evaluated in the risk assessment process (Hansson and Rud'en, 2005). According to the Institute for Health and Consumer Protection of the EC Joint Research Centre and the European Chemicals Bureau (ECB), classification and labelling involve evaluation of the hazard
of a substance or preparation in accordance with Directive 67/548/EEC (substances) and 1999/45/EC (preparations) and communication of that hazard via the label. According to the EU regulations (Commission Directive 2001/59/EC) and the TGD, substances and preparations are classified according to their inherent toxicological and ecotoxicological properties into the danger classes summarized in Table 1. Substances and preparations belonging to these classes have to be provided with a warning label, as well as standardized risk and safety phrases that are assigned in a strictly rule-based way. The labelling is the first and often the only information on the hazards of a chemical that reaches the user, who could be a consumer or a worker. The classification rules are all inflexible in the sense that if one of the rules puts a substance into one of these classes, then additional information cannot lower the classification of that substance but can lead to a stricter classification (Hansson and Rud'en, 2003).
(Figure 2 here: a flow diagram linking Research, Risk Assessment (hazard identification, dose-response assessment, exposure assessment, risk characterization; supported by extrapolation methods) and Risk Management (agency decisions and actions).)
Fig 2. The risk decision process as it is usually conceived (National Research Council, 1994).
The European Chemical Substances Information System (ESIS), a subsidiary of the ECB, provides a link to the European INventory of Existing Commercial chemical Substances (EINECS). This online service readily disseminates useful information to the public about the risk assessment status (with draft RA reports) of a number of chemical substances that have already been assessed (http://ecb.jrc.it/ESIS/).
Table 1. The classes used in the European classification and labelling system (Hansson and Rud'en, 2005):
Very toxic (T+)
Toxic (T)
Corrosive (C)
Harmful (Xn)
Irritant (Xi)
Sensitizing (Xn or Xi)
Carcinogenic (T or Xn)
Mutagenic (T or Xn)
Toxic to reproduction (T or Xn)
Dangerous to the environment (N)
This has raised awareness regarding the different chemical substances that are being manufactured and/or used in the EU.
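The "stricter only" behaviour of the classification rules described above can be mimicked in code: each triggered rule proposes a class, and the final classification keeps the most severe proposal. The severity ordering below is invented for the sketch; the real system assigns classes per hazard category rather than on a single scale.

# Higher number = stricter class (illustrative ordering only)
SEVERITY = {"Xi": 1, "Xn": 2, "C": 3, "T": 4, "T+": 5}

def classify(triggered_classes):
    # Additional findings can only raise the classification, never lower it.
    strictest = None
    for cls in triggered_classes:
        if strictest is None or SEVERITY[cls] > SEVERITY[strictest]:
            strictest = cls
    return strictest

print(classify(["Xn", "Xi", "T"]))  # -> "T": new data made the label stricter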
3.3 Challenges for Improved Ecotoxicological Testing
New regulations, in particular the new European chemicals legislation (REACH), will increase the demands on environmental risk assessment (ERA). This legislation will result in a number of changes to the ERA process, the most significant being that a large number of previously untested substances will have to undergo testing and risk assessment (EC, 2003b). An interesting aspect of REACH is that the burden of proof that a chemical is safe now rests with industry and no longer with governments. Development of the effective, simple and sensitive tools needed to fulfil the objectives of environmental policies, as required by REACH, demands an improved understanding of ecotoxicological structures and their interrelationships (Breitholtz et al., 2005). In the EU today, the requirements on efficient ecotoxicological testing systems are well known and can be summarised as 10 major issues (challenges) for the improvement of ERA practices:
(1) the choice of representative test species;
(2) the development of test systems that are relevant for ecosystems in different parts of the world;
(3) the inclusion of sensitive life stages in test systems;
(4) the inclusion of endpoints on genetic variation in populations;
(5) using mechanistic understanding of toxic effects to develop more informative and efficient test systems;
(6) studying disruption of invertebrate endocrine mechanisms, which may differ radically from those we know from vertebrates;
(7) developing standardized methodologies for the testing of poorly water-soluble substances;
(8) taking ethical considerations into account, in particular by reducing the use of vertebrates in ecotoxicological tests;
(9) using a systematic (statistical) approach in combination with mechanistic knowledge to combine tests efficiently into testing systems; and
(10) developing ERA so that it provides the information needed for precautionary decision-making.
Since most of the research necessary for the safety evaluation of chemicals requires the killing of laboratory animals, toxicologists are now faced with an ethical conflict between their professional duties and the interests of the animals. In the past, the protection of consumers
against chemical injury was considered to be of the greatest importance, and society approved of all efforts to detect even the slightest hazards from chemicals. But recently, toxicologists have become aware of their ethical responsibilities not only for the safety of the human population but also for the welfare of the animals (Zbinden, 1985, www.ncbi.nlm.nih.gov). Consequently, many resources are now being invested to observe the 'three Rs' (Replacement, Reduction, Refinement) concerning the use of laboratory animals in toxicological testing (Otto, 2002). The trend is shifting towards the development and use of alternative methods that permit the investigation of toxicological responses in unicellular organisms and cell cultures (Zbinden, 1985), and of molecular methods.
4.0 CONCLUDING REMARKS
It is a common misunderstanding that, since the precautionary principle is a risk management principle, it does not influence risk assessment. According to one interpretation, it consists of taking at least some risk-reducing measures at certainty levels that are lower than those required for considering an effect scientifically proven (Hansson and Rud'en, 2004). It should thus be clear that it is the task of risk assessors to provide risk managers with the information they need to take their decisions according to the criteria they choose.
Even in the developed world, where the manufacture and use of chemical substances is unequalled, the process of ERA still faces hardships. Though there is no rigidity in most assessment procedures as provided by the EU technical guidelines, expert judgement and professional knowledge are fundamental for each case at hand. The challenges highlighted above are just a tentative checklist that is now used in optimising the ecological relevance of any ERA activity. The list is non-exhaustive, and the factors can be weighted differently from one assessment to another depending on the underlying objective. The combined effect of scientific uncertainty and a high degree of flexibility and reliance on individual experts makes it practically impossible to achieve a risk assessment process that is fully consistent and systematic in all respects (Hansson & Rud'en, 2005). It is therefore essential to scrutinize and evaluate the risk assessment process, in order to learn more about (1) how scientific information is reflected in risk assessment, (2) how and to what degree risk assessment influences risk management, and (3) to what degree the risk decision process as a whole satisfies general quality criteria such as efficiency, consistency and transparency. Additional knowledge is furthermore needed about the application of general risk assessment and risk management principles in different regulatory settings, in particular the substitution principle (i.e. the principle that a chemical substance should be substituted when a safer alternative is available) and the precautionary principle (i.e. that actions to reduce unwanted effects from chemicals should be taken even if the scientific indications of the existence of that effect do not amount to full scientific proof).
It is also imperative to acknowledge that there is no single perfect test for any chemical during RA. However, combining tests with different strengths and weaknesses into scientifically well-founded and resource-efficient test systems is possible, though challenging. Note should also be taken that, because of the influence of both biotic and abiotic factors on the fate of chemicals, the exposure models used in the EU cannot always be used in other regions without modification and validation if conclusive results are to be achieved. The biggest challenge now ahead of the EU is to monitor the implementation of REACH, and to evaluate the actual working of this system, once it comes into force, compared with the system it is replacing.
REFERENCES
European Commission, 2003. Technical guidance document in support of the Commission directive 93/67/EEC on risk assessment for new notified substances and the Commission regulation (EC) 1488/94 on risk assessment for existing substances. (Available online at: www.ecb.it).
Institute for Health and Consumer Protection, European Chemicals Bureau, European Commission Joint Research Centre, 2003. Technical Guidance on Risk Assessment (TGD), 2nd ed., Part II. IHCP/JRC, Ispra, Italy.
ECB (European Chemicals Bureau), 2003b. European chemical substance information system (ESIS). Ispra, Italy.
Schantz, S.L., Gasior, D.M., Polverejan, E., McCaffrey, R.J., Sweeney, A.M., Humphrey, H.E.B., Gardiner, J.C., 2001. Impairments of memory and learning in older adults exposed to polychlorinated biphenyls via consumption of Great Lakes fish. Environ. Health Persp. 109, 605-611.
Mikael Karlsson, 2005. Science and norms in policies for sustainable development: Assessing and managing risks of chemical substances and genetically modified organisms in the European Union. Regulatory Toxicology and Pharmacology 44 (2006) 49-56.
NRC, 2001. Arsenic in Drinking Water: 2001 Update. Subcommittee to Update the 1999 Arsenic in Drinking Water Report, Committee on Toxicology, Board on Environmental Studies and Toxicology, National Research Council (NRC). National Academy Press, Washington.
Breitholtz M., Rud'en C., Hansson S.O., Bengtsson B., 2005. Ten challenges for improved ecotoxicological testing in environmental risk assessment. Ecotoxicology and Environmental Safety 2005. Article in press.
Frederik A.M. Verdonck, Geert Boeije, Veronique Vandenberghe, Mike Comber, Watze de Wolf, Tom Feijtel, Martin Holt, Volker Koch, Andre Lecloux, Angela Siebel-Sauer, Peter A. Vanrolleghem, 2004. A rule-based screening environmental risk assessment tool derived from EUSES. Chemosphere 58 (2005) 1169-1176.
Masazumi Harada, Hirokatsu Akagi, Toshihide Tsuda, Takako Kizaki, Hideki Ohno, 1999. Methylmercury level in umbilical cords from patients with congenital Minamata disease. The Science of the Total Environment 234 (1999) 59-62.
Harada M., 1995. Minamata disease: Methylmercury poisoning in Japan caused by environmental pollution. Crit Rev Toxicol 1995;25:1-24.
Sven Ove Hansson, Christina Rud'en, 2005. Evaluating the risk decision process. Toxicology 218 (2006) 100-111.
Hansson, S.O., Rud'en, C., 2003. Improving the incentives for toxicity testing. J. Risk Res. 6, 3-21.
Rud'en, C., Hansson, S.O., 2003. How accurate are the European Union's classifications of chemical substances? Toxicol. Lett. 144 (2), 159-173.
Otto Meyer, 2002. Testing and assessment strategies, including alternative and new approaches. Toxicology Letters 140-141 (2003) 21-30.
Katsuya Kawamoto, Kyong A Park, 2005. Calculation of environmental concentration and comparison of output for existing chemicals using regional multimedia modelling. Chemosphere xxx (2005) xxx-xxx.
Michael Fryer, Chris D. Collins, Helen Ferrier, Roy N. Colvile, Mark J. Nieuwenhuijsen, 2006. Human exposure modelling for chemical risk assessment: a review of current approaches and research and policy implications. Available at www.sciencedirect.com.
Chiu-Yueh Yang, Mei-Lin Yu, How-Ran Guo, Te-Jen Lai, Chen-Chin Hsu, George Lambert, Yueliang Leon Guo, 2005. The endocrine and reproductive function of the female Yucheng adolescents prenatally exposed to PCBs/PCDFs. Chemosphere 61 (2005) 355-360.
C.J. van Leeuwen and J.L.M. Hermens (eds.), Risk Assessment of Chemicals: An Introduction. Kluwer Academic, Dordrecht. Reviewed in Aquatic Toxicology 38 (1997) 199-201.
Steve K. Teo, David I. Stirling and Jerome B. Zeldis, 2005. Thalidomide as a novel therapeutic agent: new uses for an old product. DDT, Volume 10, No. 2, January 2005. Available at www.sciencedirect.com/science/journal.
J.A. Timbrell, 1989. Introduction to Toxicology. Taylor & Francis, Basingstoke. Reviewed in Environmental Pollution, Volume 61, Issue 2, 1989, pages 171-172.
Zbinden, 1985. Ethical considerations in toxicology. Food Chem Toxicol. 1985 Feb;23(2): 137-138.
THE IMPACT OF A POTENTIAL DAM BREAK ON THE HYDRO ELECTRIC POWER GENERATION: CASE OF OWEN FALLS DAM BREAK SIMULATION, UGANDA
Michael Kizza; Department of Civil Engineering, Makerere University, P.O. Box 7062, Kampala, Uganda,
[email protected]
Seith Mugume; Department of Civil Engineering, Makerere University, P.O. Box 7062, Kampala, Uganda,
[email protected]
ABSTRACT
Dams play a vital role in the economy of a country by providing essential benefits like irrigation, hydropower, flood control, drinking water and recreation. However, in the unlikely and rare event of failure, dams may cause catastrophic flooding in the downstream area, which may result in huge loss of human life and of property worth billions of dollars. The loss of life would vary with the extent of the inundation area, the size of the population at risk, and the amount of warning time available. A severe energy crisis would also befall a nation whose energy supply is heavily dependent on hydroelectric power. This would in the long run hamper industrial progress and the economic development of the nation.
Keywords: Dam Break Simulation, Flood control, recreation, Hydro Electric Power, energy crisis, Catastrophic flooding, downstream, installed capacity.
1.0 INTRODUCTION
Uganda is a developing country which is heavily dependent on hydroelectric power to feed the national grid. Uganda's installed capacity is 380 MW after the extension of the Owen Falls (Nalubaale) Dam Complex in Jinja, Uganda. The Dam was formally opened on Thursday 29th April 1954 by Her Majesty Queen Elizabeth of England as one single investment that would lay the foundation for industrial development in Uganda. It is a reinforced concrete gravity dam with a design life of 50 years and is located on the Victoria Nile River in southeast Uganda near Jinja. The old Owen Falls (Nalubaale) Dam has a capacity of 180 MW of hydroelectricity. An additional 200 MW of installed capacity was realised after the completion of the Owen Falls Extension Project (Kiira Dam). No structure is permanent, however advanced the construction and material technologies employed (Anderson et al, 2002; Fleming, 2001). According to the US Association of Dam Safety Officials, the average life expectancy of an un-maintained dam is approximately 50 years (Donnelly et al., 2001). The Dam has therefore outlived its design life of 50 years, and some serviceability failures are already showing in the form of cracks and leakages within
the Nalubaale Powerhouse structure, raising concerns about the safety of the Dam in its current state. Though major dams are designed for a very low risk of failure, it is important to note that the risk becomes more significant with age. Therefore, as we ponder the current energy crisis, it is important to keep in mind the risk of failure and the associated threats.
Figure 1: Owen Falls Dam Complex, Jinja, Uganda (Source: Eskom (U) Ltd)

2.0 POTENTIAL FAILURE INCIDENTS
From previous research, the types of potential incidents that could occur to the dam complex are presented below. The characteristics of the catchment and the configuration and arrangement of the Owen Falls complex are unusual and heavily influence the type and nature of the incidents that could occur.
2.1 Earthquake Damage
The dam lies in a relatively inactive earthquake zone between the two more active zones of the rift valley and the Rwenzori Mountains. The Owen Falls dam complex was designed to withstand with no damage an earthquake acceleration of 0.06g (Operating Basis Earthquake, OBE) and to withstand with no failure an earthquake acceleration of 0.17g (Maximum Design Earthquake, MDE). These earthquakes have probabilities of recurrence of 1,000 and 10,000 years respectively and are applied as horizontal forces (or 2/3 of the vertical force). However, this does not eliminate the extremely remote possibility of a large event. If a large event occurred, it could cause instability of the intake dams or the main concrete dam, stability failure of the wing wall embankments adjacent to the Kiira power station or of the cutting/embankment forming the west bank of the canal, or damage leading to later failure.

2.2 Terrorist Attack
The Owen Falls dam complex is strategically located as a gateway linking Uganda to the coast at Mombasa. Given its strategic location and the damage that could be inflicted by deliberate action, the Owen Falls complex must be regarded as a terrorist target.
2.3 Sliding or Overturning of Concrete Gravity Sections
The failure of gravity sections, should it occur, would be by (i) overturning (toppling), (ii) sliding, or (iii) a combination of the two. The gravity sections were designed to international dam safety standards. The configuration and height of the structures also naturally limit the discharge that would result from a credible failure. Stability failure is therefore unlikely, but a worst credible event can be derived for the purpose of generating a flooding event. This can be assumed to be the collapse of the Owen Falls Nalubaale intake block at the level of the machine hall, of the main Owen Falls dam complex, or of the Kiira power station.

2.4 Embankment Instability
Embankment instability would take the form of settlement and/or slip circle failure. Any failure of this sort is likely to be progressive and therefore gives some measure of warning. The embankments, like those adjacent to the Kiira power station or the cutting/embankment forming the west bank of the canal, were designed and assessed for stability. These assessments, including settlement and slip circle failure analysis, were performed to modern safety standards. Hence failure resulting from embankment instability is highly unlikely.

2.5 Embankment Seepage Failure
Embankment seepage failure would take the form of seepage through the structure or foundation which increases and removes material as the flow builds up, leading to the development of a pipe and ultimately to failure. Depending on the location of the seepage outlet, some measure of warning may be expected. The embankments, like those adjacent to the Kiira power station or the embankment forming the west bank of the canal, have been designed and assessed for seepage failure. These assessments were performed to modern safety standards, and hence seepage failure is unlikely. In addition, as with the other forms of embankment failure, the configuration and width of the canal would limit the discharge resulting from a seepage failure.

3.0 MODELLING AND SIMULATION
A model is a simplified representation of a complex system. Modelling refers to the work of making a simple description of a system or process that can be used to explain it. Simulation refers to the development and use of a computer model for evaluating actual or postulated dynamic systems (McGraw-Hill, 1987c). During simulation, a particular set of conditions is created artificially in order to study or experience a system that could exist in reality. Engineering models can be used for planning, design, operation and research.

3.1 Dam Break Modelling
Dam break modelling consists of:
(i) prediction of the outflow hydrograph due to the dam breach, and
(ii) routing of the hydrograph through the downstream valley to obtain the maximum water level and discharge, along with the time of travel, at different locations of the river downstream of the dam.
Dam break studies can be carried out using either (i) scaled physical hydraulic models or (ii) mathematical simulation using computers. A modern tool for dam break analysis is the mathematical model, which is most cost effective and approximately solves the governing flow equations of continuity and momentum by computer simulation. Computer models such as SMPDBK, DAMBRK and MIKE11 have been developed in recent years; however, these computer models depend on certain inputs regarding the geometrical and temporal characteristics of the breach. The state of the art in estimating these breach characteristics is not as advanced as that of the computer programs and, therefore, these are limiting factors in dam break analysis.

3.2 The Simplified Dam Break Model
The SMPDBK model was developed by Wetmore and Fread (1984) at the National Weather Service (NWS) of the USA. This model produces the information needed for determining the areas threatened by dam-break flood waters while substantially reducing the amount of time, data, computer facilities and technical expertise required by more sophisticated unsteady flow routing models such as the DAMBRK model. The NWS SMPDBK model computes the dam break outflow from a simplified equation and routes the outflow based on curves generated with the NWS DAMBRK model. Flow depths are computed from Manning's equation.
3.2.1 Data Requirements
(i) Breaching parameters (final height and width of breach)
(ii) Breach formation time
(iii) Non-dam-break flow (spillway/turbine/sluice gate/overtopping flow)
(iv) Volume of reservoir
(v) Surface area of reservoir
(vi) Manning's roughness coefficient for the river channel downstream
(vii) Elevation vs. width data for up to five downstream river cross sections
In producing the dam break flood forecast, the SMPDBK model first computes the peak outflow at the dam, based on the reservoir size and the temporal and geometrical description of the breach. The computed flood wave and the channel properties are used in conjunction with routing curves to determine how the peak flow will be diminished as it moves downstream. Based on this predicted flood wave reduction, the model computes the peak flows at specified downstream points with an average error of less than 10%. The model then computes the depth reached by the peak flow based on the channel geometry, slope and roughness at these downstream points. The model also computes the time required for the peak to
reach each forecast point and, if the user entered a flood depth for the point, the time at which that depth is reached as well as when the flood wave recedes below that depth, thus providing the user with a time frame for evacuation and fortification on which the preparedness plan may be based.
3.2.2 Peak Outflow Computation

$$Q_{b\max} = Q_o + 3.1\,B_r\left[\frac{C}{t_f/60 + C/\sqrt{h_d}}\right]^{3} \qquad (1)$$

where $C = 23.4\,S_a/B_r$ and:
$Q_o$ = spillway/turbine/overtopping flow (cfs)
$B_r$ = breach width (ft)
$t_f$ = time of failure (min)
$h_d$ = height of dam (ft)
$S_a$ = reservoir surface area (acres)
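For orientation, Eq. (1) can be evaluated directly once the breach parameters are chosen. The Python sketch below is our own rendering of the reconstructed formula, not code from the study; the function name and the sample numbers are illustrative only (the inputs loosely mirror the Nalubaale scenario, converted to the US customary units used in the equation).

    import math

    def smpdbk_peak_outflow(q_o, b_r, h_d, t_f, s_a):
        """Peak breach outflow, Eq. (1) as reconstructed above.

        q_o : spillway/turbine/overtopping flow (cfs)
        b_r : final breach width (ft)
        h_d : height of dam (ft)
        t_f : breach formation time (min)
        s_a : reservoir surface area (acres)
        """
        c = 23.4 * s_a / b_r
        return q_o + 3.1 * b_r * (c / (t_f / 60.0 + c / math.sqrt(h_d))) ** 3

    # Illustrative values only, roughly a 160 m wide, 17 m high breach:
    print(smpdbk_peak_outflow(q_o=84750.0, b_r=525.0, h_d=56.0, t_f=10.0, s_a=88.4))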
3.2.3 Flow Depth Computation
The model computes depths from Manning's equation for a known discharge:

$$Q = \frac{1.49}{n}\,A\,(A/B)^{2/3}\sqrt{S} \qquad (2)$$

$$S_c = \frac{77000\,n^{2}}{D^{1/3}} \qquad (3)$$

where $n$ = Manning's roughness coefficient, $S$ = slope of the channel, $S_c$ = critical slope, $A$ = flow area, $B$ = top width and $D$ = hydraulic depth.
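Since Eq. (2) is implicit in depth for a known discharge, a root-finding step is required in practice. The sketch below solves for depth by bisection in a rectangular section, which is a simplifying assumption of ours (the model itself works with surveyed cross sections); units are US customary to match the 1.49 factor, and the example numbers are illustrative.

    def manning_q(depth, width, n, slope):
        """Discharge from Eq. (2) for a rectangular section (US units)."""
        area = width * depth
        hyd_depth = area / width          # A/B for a rectangle is the depth
        return (1.49 / n) * area * hyd_depth ** (2.0 / 3.0) * slope ** 0.5

    def depth_for_flow(q_target, width, n, slope, lo=0.01, hi=100.0, tol=1e-6):
        """Bisection on depth until the Manning discharge matches q_target."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if manning_q(mid, width, n, slope) < q_target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Illustrative: ~17366 m3/s is ~613,000 cfs; assume a 500 ft wide channel
    print(depth_for_flow(613000.0, width=500.0, n=0.05, slope=0.001))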
3.2.4 Routed Flow
Routing of the peak discharge is done using empirically derived relationships (as used in the DAMBRK model), represented via the dimensionless parameters $Q_p/Q_{b\max}$, $X/X_c$, $V^{*}$ and $F_r$. Routing curves are then used. These curves were derived from numerous executions of the NWS DAMBRK model and are grouped into families based on the Froude number associated with the flood wave peak (Fread and Wetmore, 1981). To determine the correct family and member curve that most accurately predicts the attenuation of the flood, the user must define the routing parameters listed above. This requires the user to first describe the river channel downstream from the dam to the first routing point as a prism.
$$X_c = \frac{6\,VOL}{A\left(1 + 4(0.5)^{m+1}\right)} \qquad (4)$$

$$V^{*} = \frac{VOL}{A_c X_c} \qquad (5)$$

$$X^{*} = \frac{X}{X_c} \qquad (6)$$

where $VOL$ = volume in reservoir (ft3), $X_c$ = distance parameter (ft) and $m$ = average channel geometry fitting coefficient.

$$F_r = \frac{V}{\sqrt{g A_c / B}} \qquad (7)$$

$$Q^{*} = \frac{Q_p}{Q_{b\max}} \qquad (8)$$

where $F_r$ = Froude number, $g$ = acceleration due to gravity, $A_c$ = average channel cross-sectional area, $B$ = channel top width and $V$ = average flow velocity.
Fig. 2: Simplified dam break routing curves ($Q_p/Q_{b\max}$ plotted against $X/X_c$; family F = 0.25, member curves $V^{*}$ = 1.0 to 5.0)
Computation of Time to Peak ($T_p$)
The simplified dam break model computes the time to peak using the following equations:

$$T_p = t_f + \frac{X}{C} \qquad (9)$$

$$C = 0.682\,V_{x_i}\left[\frac{5 + 2m}{3m + 1}\right] \qquad (10)$$

$$V_{x_i} = \frac{1.49}{n}\,\sqrt{S}\;D_{x_i}^{2/3} \qquad (11)$$

$$D_{x_i} = \frac{h_{ref}}{m + 1} \qquad (12)$$

$$h_{ref} = f\!\left(Q^{*}, n, \sqrt{S}, A_{x_i}, D_{x_i}\right) \qquad (13)$$

From Manning's equation we have:

$$Q = \frac{1.49}{n}\,A\,D^{2/3}\sqrt{S} \qquad (14)$$

$$Q^{*} = \frac{Q_p}{2}\left(0.3 + \frac{Q_o}{Q_p}\right) \qquad (15)$$
Computation of Time to Flooding ($T_{fld}$) and De-flooding ($T_{dfld}$) of Elevation $H_f$
Treating the rising and falling limbs of the flood wave linearly between the failure time and the time to peak:

$$Q_f = a\,(h_f)^{b} \qquad (16)$$

$$t_{fld} = t_p - (t_p - t_f)\,\frac{Q_p - Q_f}{Q_p - Q_o} \qquad (17)$$

$$t_{dfld} = t_p + (t_p - t_f)\,\frac{Q_p - Q_f}{Q_p - Q_o} \qquad (18)$$
4.0 MODEL SETUP
4.1 Reservoir Data
Due to the hydraulic constraint at the former Rippon Falls, the outflow from Lake Victoria will not significantly empty the lake in the short term, apart from the 2.8 km reach from Rippon Falls to Owen Falls. Therefore, this reach will act as the effective reservoir in the short term (immediately after the breach):
(i) Length of reach = 2.8 km (1.75 mi)
(ii) Surface area = 0.884 km2 (88.4 acres)
(iii) Average depth within reach = 12 m (39.37 ft)
(iv) Total live storage volume = 10,600,000 m3 (3480.315 acre-ft)

4.2 Flow Data
Nalubaale power station:
(i) Turbine flow (10 turbines) = 1200 m3/s
(ii) Sluice gate flow (6 sluices) = 1200 m3/s
Kiira power station:
(i) Turbine flow = 1100 m3/s
(ii) Three-bay spillway = 1740 m3/s
Victoria Nile:
(i) Average flow rate in the Victoria Nile = 1154.175 m3/s (Source: ADCP flow data from WRMD)
4.3 Victoria Nile Channel Geometry and Model Setup
The Victoria Nile reach that has been modelled is 51 km long and comprises 14 cross sections obtained from ADCP measurements carried out along the Victoria Nile and Napoleon Gulf in 2004 by the WRMD of the DWD. Missing values of altitude along the Nile were obtained by interpolation between known ones, and missing cross-sectional data were modelled as average values of the known cross sections. The average depth of water in the River Nile model has been taken as 5 m. In addition, 1:50,000 topographic maps were obtained from WRMD, i.e. Jinja (sheet 71/1), Kagoma (sheet 62/3), Kamuli (sheet 62/1), Kayonza (sheet 61/2) and Bale (sheet 51/4). These were used to obtain distances along the Nile from Owen Falls dam to the particular points of interest.

5.0 DAM BREAK ANALYSES
Failure of the dam will most likely result from earthquake damage to one of the concrete sections or from a terrorist attack (sabotage). Therefore, breaching of the dam can be considered under three scenarios:
(i) breaching of the Nalubaale power station intake block;
(ii) breaching of the Kiira power station intake block;
(iii) breaching of the main Owen Falls dam complex.
The Owen Falls Nalubaale intake block can be considered as a concrete gravity dam which will tend to fail by partial breaching as soon as one or more of the monolithic concrete sections formed during construction are removed by the escaping water. The time required for breach formation in this situation is in the range of a few minutes.

5.1 Breaching Parameters
Fig. 3: Breaching parameters illustrated
Table 1: Breaching parameters for the three scenarios

Section of Dam                 Breaching Width (m)   Breaching Height (m)   Breaching Time (min)   Manning's n Value
Nalubaale Power Intake Block   160                   17                     -                      0.050
Kiira Power Station            56                    24                     -                      0.050
Main Owen Falls Dam            190                   30                     -                      0.050
Table 4: Nalubaale dam break results

Chainage [km]   Max Flow [m3/s]   Max Elevation [mASL]   Max Depth [m]   Time [hr] to Max Depth
0.00            17365.94          1113.83                3.53            0.03
3.46            15144.01          1113.13                3.58            0.12
4.42            12409.82          1112.97                3.63            0.21
6.00            10128.76          1112.97                3.97            0.29
8.00            7776.00           1108.33                3.33            0.52
11.01           7199.47           1100.62                3.62            0.63
13.01           7131.96           1089.96                2.96            0.66
19.01           5990.76           1079.56                3.56            0.99
26.00           5190.61           1065.25                3.25            1.38
28.00           5012.93           1061.79                3.59            1.50
36.00           4483.80           1046.25                3.25            1.96
38.00           4334.34           1045.85                3.65            2.11
41.01           4138.93           1045.85                4.86            2.37
51.01           3561.75           1042.65                3.65            3.30
7.0 ANALYSIS OF RESULTS
7.1 Nalubaale Dam Break Results

Outflow hydrograph for the Nalubaale dam breach (discharge [m3/s] vs. time [hrs])

Discharge [m3/s] vs. chainage [km] along the Nile
7.2 Kiira Dam Break Results

Chainage [km]   Max Flow [m3/s]   Max Elevation [mASL]   Max Depth [m]   Time [hr] to Max Depth
0.00            11579.80          1113.80                3.50            0.03
3.46            10382.45          1113.15                3.60            0.17
Outflow hydrograph for the Kiira dam breach (discharge [m3/s] vs. time [hrs])

Discharge [m3/s] vs. chainage [km] along the Nile
Maximum flow:
(i) Nalubaale dam break = 17365.94 m3/s
(ii) Kiira dam break = 11579.80 m3/s
Compare these with the maximum flow in the River Nile in 50 years of 1396 m3/s!
8.0 CONCLUSIONS
(i) Any breach at Owen Falls would in the short term result in a sudden flood of up to 17365.94 m3/s and a subsequent increased steady flow in the Nile of up to 5000 m3/s, controlled by the Rippon Falls.
(ii) The 2.8 km stretch of the Napoleon Gulf would be emptied in minutes and the extent of flooding would impact the reach of the Victoria Nile between Jinja and Lake Kyoga. Attenuation of the flows would occur at Lakes Kyoga and Albert, hence limiting the flooding effects downstream.
(iii) The water level in Lake Victoria would reduce significantly, affecting water supply, water transport and fishing activities in the lake.
(iv) The following infrastructure would potentially be at risk:
(v) the road bridge across the Owen Falls dam;
(vi) the road bridge across the new canal;
(vii) the Njeru town main water supply crossing the Nalubaale power station intake, the Owen Falls dam and the new canal bridge;
(viii) the telecommunications landline from Kampala to Jinja;
(ix) the fibre optic control line which connects the two control rooms in the Nalubaale and Kiira power stations; this route crosses the Owen Falls dam and runs along the left bank of the canal;
(x) the minor electric power line along the roadway;
(xi) the power and control cables to the sluices on the Owen Falls dam;
(xii) the transmission lines at Jinja connecting the Kiira power station to the switchyard at the Nalubaale power station;
(xiii) the new MTN fibre optic line installed in June 2003.
(xiv) A serious energy crisis would result in the nation.
9.0 RECOMMENDATIONS
(i) There is a need to carry out an immediate dam safety analysis.
(ii) There is a need to carry out an inventory/structural appraisal of the dam to ascertain its remaining life span.
(iii) Plans should be made either to decommission the dam or to carry out immediate renovation works.
(iv) The resulting flood should be taken into account in the design of the flood control structures at Bujagali dam.

REFERENCES
Fread, D.L. (1998) Dam Breach Modelling and Flood Routing: A Perspective on Present Capabilities and Future Directions.
Jacobs, J.E. (2003) Emergency Preparedness Plan (ECP), Volume 1.
Masood, S.H. and Nitya, N.R., One Dimensional Dam Break Flood Analysis for Kameng Hydro Electric Project, India. Source: www.ymparisto.fi/default.asp?contentid, accessed on 18th Nov 2005.
Wahl, T.L. (1998) Uncertainty of Predictions of Embankment Dam Breach Parameters. ASCE Paper.
Wetmore, J.N. and Fread, D.L. (1980) The New Simplified Dam Break Flood Forecasting Model for Desktops and Hand Micro-Computers. Hydrologic Research Laboratory.
LEAD LEVELS IN THE SOSIANI
O.K. Chibole, School of Environmental Studies, Moi University, Kenya; [email protected]
ABSTRACT
River Sosiani is a tributary of R. Nzoia, one of the major rivers draining the eastern water catchment of Lake Victoria, the largest fresh water lake in Africa. River Sosiani also bisects Eldoret town, the biggest town in the northern region of Kenya. Although in Kenya there is provision for unleaded fuel, leaded fuel is still popular among motorists. The widespread habit of washing cars, including petroleum tankers, along the banks of the Sosiani, traffic congestion on the bridges over the Sosiani during peak hours, and the dumping of solid waste, including used motor vehicle batteries, next to the river course are all causes for concern. The River Sosiani drainage course was divided into three zones: (1) the forested zone (Fz), the upper reach of the river in the forest; (2) the agricultural zone (Az), the middle reach; and (3) the urban zone (Uz), the lower reach in Eldoret town. There were two sampling sites (Fz1, Fz2) in Fz, used as reference, and four sampling sites each in Az (Az1, Az2, Az3, Az4) and Uz (Uz1, Uz2, Uz3, Uz4). Water samples and sediment samples, where feasible, were collected from each sampling site once a month for a period of two years and analysed using AAS (Varian SpectrAA 100/200 model) at the Moi University School of Environmental Studies laboratory. Results show very low lead levels (...

... for D > 10 days (11a)

Similarly:

$$Q_{75}(D) = Q_{75}(10) - 16654\,\{Q_{75}(10)\}^{\ldots}\,(10 - D)^{\ldots} \quad \text{for } D < 10 \text{ days} \qquad (13a)$$

Similarly,

$$MAM(D) = MAM(10) - 9409.4\,\{MAM(10)\}^{-2.9\ldots}\,(D - 10)^{\ldots} \quad \text{for } D > 10 \text{ days}$$
Fig. 4. Standardised storage yield curves
Curves are plotted on the same graph to compare the standardised curves of the different rivers for the same yield Q40. The curves obtained are as shown in Fig. 5 (log-curve fits for the Namatala, Mpologoma, Manafwa, Simu and Malaba catchments, plotted on a log storage scale against the return period of an event requiring storage > V).
Fig. 5: Standardised curves for Q40 for 5 catchments of some rivers in Eastern Uganda

Judging from the closeness of the curves, it is sufficient to conclude that a series of standardised curves drawn for different yields of one river is appropriate as a reference for the other rivers, and so will give a fair approximation for the storage-yield analysis of all the other rivers in the analysis. Graphs of storage requirement against yield are then plotted for each catchment and the line of best fit drawn through the plotted points. The storage-yield curves for two of the catchments, R. Simu and R. Sipi, are shown in Fig. 6. Typical hydrographs are shown in Fig. 7 and indicate bimodal rainfall characteristics, with peaks in May to June and August to October. The recession periods are October to February and June to July, with the least flow occurring in March to April.

Fig. 6: Typical storage-yield curves (storage against yield in cumecs for R. Simu and R. Sipi)
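The paper does not spell out how the storage requirement for a given yield was computed from the monthly flow series; one standard procedure that reproduces such storage-yield points is the sequent-peak algorithm, sketched below in Python with made-up monthly flows, so this is an assumption about the method rather than the study's own code.

    def sequent_peak_storage(flows, yield_demand):
        """Required storage for a constant draft (same volume units as flows)."""
        storage, worst = 0.0, 0.0
        for q in flows:
            # Deficit accumulates when demand exceeds inflow; surplus drains it
            storage = max(0.0, storage + yield_demand - q)
            worst = max(worst, storage)
        return worst

    # Illustrative monthly flows (treated as monthly volumes) and a Q40-type draft
    flows = [3.2, 4.1, 0.9, 0.4, 2.5, 6.0, 5.2, 1.1, 0.7, 2.0, 4.4, 3.8]
    print(sequent_peak_storage(flows, yield_demand=2.4))

Repeating this over a range of drafts traces out one storage-yield curve of the kind plotted in Fig. 6.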
Fig. 7: Typical hydrographs (monthly flows for R. Manafwa and R. Kapiri)

Fig. 8: A plot of the base flow index (total hydrograph and base flow, R. Malaba at Jinja-Tororo Road, station 82218, 1956-1974)
Fig. 8 indicates a typical plot of the base flow index. It was also noted that the base flow indices of rivers flowing through wetlands were very high, giving an indication of the magnitude of the base flow contribution to the total flow of those rivers. The effect of wetlands on river flow is a result of their ability to retain water; furthermore, large parts of the runoff from upstream catchments evaporate within the wetlands. The low flow indices derived and the catchment characteristics are shown in Tables 2 and 3 respectively, while Table 4 displays the monthly low flow parameters.
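The exact base flow separation procedure is not stated in the paper; the sketch below implements a simplified smoothed-minima separation in the spirit of the Institute of Hydrology method (5-day block minima, 0.9 turning-point factor) and should be read as an assumption, not the study's code.

    import numpy as np

    def base_flow_index(daily_flows, block=5, factor=0.9):
        """Simplified smoothed-minima base flow separation -> BFI."""
        q = np.asarray(daily_flows, dtype=float)
        n = (len(q) // block) * block
        blocks = q[:n].reshape(-1, block)
        mins = blocks.min(axis=1)                      # 5-day block minima
        pos = blocks.argmin(axis=1) + np.arange(len(mins)) * block
        # Turning points: minima that stay below their scaled neighbours
        turn = [i for i in range(1, len(mins) - 1)
                if factor * mins[i] <= mins[i - 1]
                and factor * mins[i] <= mins[i + 1]]
        if len(turn) < 2:
            return float("nan")
        base = np.interp(np.arange(len(q)), pos[turn], mins[turn])
        base = np.minimum(base, q)                     # base flow <= total flow
        lo, hi = pos[turn][0], pos[turn][-1]
        return float(base[lo:hi + 1].sum() / q[lo:hi + 1].sum())

    # Illustrative year of synthetic daily flows (cumecs)
    rng = np.random.default_rng(0)
    flows = 5 + 3 * np.sin(np.linspace(0, 4 * np.pi, 365)) + rng.random(365)
    print(round(base_flow_index(flows), 3))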
Table 2: Low flow indices

Catchment    ADF (m3/s)   Q75(10) (%ADF)   Q95(10) (%ADF)   MAM(10)   BFI      KREC
Mpologoma    19.545       11               0.007            9.838     0.8747   0.5434
Manafwa      6.872        35               17               19.214    0.6287   0.6855
Namalu       0.316        22               15               19.296    0.5832   0.7144
Kapiri       14.644       10               0.0001           17.710    1.0000   0.6732
Malaba       14.181       32               9                14.591    0.7194   0.6111
Sipi         2.976        2.5              0.004            11.069    0.7823   0.5167
Simu         3.477        36               16               16.19     0.7870   0.6511
Namatala     2.614        40               13               25.005    0.6024   0.6648
Mean         13.258       23.6             8.75             16.614    0.7472   0.6324
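For a gauged catchment, the indices of Table 2 reduce to percentile arithmetic on the flow series. The sketch below is our own minimal rendering; treating the "(10)" indices as percentiles of 10-day moving means, and reporting MAM(10) relative to ADF, are assumptions made to match the table's apparent conventions.

    import numpy as np

    def low_flow_indices(daily_q):
        """ADF plus Q75(10), Q95(10) and MAM(10), the latter three as % of ADF.
        Computing on 10-day moving means and normalising MAM(10) by ADF are
        assumptions, not the paper's stated procedure."""
        q = np.asarray(daily_q, dtype=float)
        adf = q.mean()
        ten = np.convolve(q, np.ones(10) / 10.0, mode="valid")  # 10-day means
        q75 = np.percentile(ten, 25)    # flow exceeded 75% of the time
        q95 = np.percentile(ten, 5)     # flow exceeded 95% of the time
        years = len(ten) // 365
        mam10 = np.mean([ten[y * 365:(y + 1) * 365].min() for y in range(years)])
        return adf, 100 * q75 / adf, 100 * q95 / adf, 100 * mam10 / adf

    # Illustrative two-year synthetic record (cumecs)
    rng = np.random.default_rng(1)
    q = 4 + 2 * np.sin(np.linspace(0, 4 * np.pi, 730)) + rng.random(730)
    print(low_flow_indices(q))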
Table 3: Catchment characteristics

Catchment    Area (km2)   S1085   STFRQ   MSL (km)   MAR (mm)   P.E. (mm)
Mpologoma    2479.6       17.34   0.113   128.9      1436.0     2005.3
Manafwa      477.6        19.07   0.60    61.8       1459.2     1694.7
Namalu       34.0         30.0    0.21    12.0       1180.0     1791.2
Kapiri       23946.9      2.77    0.011   160.2      999.8      2007.0
Malaba       1603.8       13.57   0.025   73.9       1432.3     1911.0
Sipi         92.0         76.3    0.34    32.5       1731.9     1942.2
Simu         165.0        77.69   0.37    34.0       1816.2     1653.8
Namatala     123.6        43.8    0.60    27.9       1344.9     1652.7
Mean         -            -       -       -          1425.0     1832.2
Table 4: Monthly low flow parameters

Catchment    Driest month   Average driest-month flow (m3/s)   Min flow recorded (m3/s)   1.5-year low (m3/s)
Manafwa      February       2.99                               0.66                       0.618
Namatala     February       1.38                               0                          0.322
Mpologoma    March          3.82                               0                          3.616
Malaba       February       3.99                               0.98                       1.232
Kapiri       April          4.13                               0                          1.64
Namalu       March          0.07                               0                          0.070
Simu         March          1.2                                0.35                       0.356
Sipi         March          1.01                               0                          0.086
Table 5 provides the results of model verification, obtained by comparing the derived and predicted values for a selection of the indices for the different catchments.
Table 5: Derived and predicted indices

Catchment    Index     Derived   Predicted   Error %
Mpologoma    Q75       11        11.6        5.45
Manafwa      Q95       17        16          5.88
Namalu       MAM(10)   19.296    20.6        6.76
Kapiri       KREC      0.6732    0.6727      0
Malaba       BFI       0.7194    0.7198      0
Sipi         ADF       2.976     2.914       2.08
Table 6: Models generated for ungauged catchments (coefficients of the independent variables, each followed by its significance ranking in parentheses; R is the multiple regression coefficient)

Dependent   Constant   MAR(x10-3)   AREA(x10-6)   S1085(x10-3)   STRFQ(x10-2)   MSL(x10-3)   PE(x10-4)     KREC(x10-5)   R
Q75(10)     232.631    2038(6)      146.9(4)      314000(3)      -2250.7(1)     3.195(5)     -1210000(2)   0             0.961
Q95(10)     103.586    1130(6)      193.6(5)      -191000(2)     -1042.2(1)     -82.32(4)    -526.8(3)     0             0.989
KREC        1.660      -4.465(5)    5.206(6)      -1.202(2)      -9.819(1)      -0.722(3)    -4.719(4)     0             0.978
MAM(10)     73.807     -1139(1)     170.6(5)      -0.7332(4)     435.6(6)       -22.16(3)    -225.5(2)     0             0.974
ADF         2.485      387(5)       -456.8(4)     -50.99(2)      -743.1(1)      152000(6)    -24.48(3)     0             0.992
BFI         -179       4.917(3)     -2.545(2)     3.667(6)       10700(1)       2.821(5)     354(7)        1.844(4)      1
Table 7: The sums of ranking coefficients and the catchment characteristics

                      MAR   AREA   S1085   STFRQ   MSL   PE   KREC
Sum of Ranking        26    18     27      11      19    26   -
Exclude BFI           14    22     10      13      24    23   -
Exclude BFI, KREC     18    10     19      11      16    -    -
Table 6 shows the models developed for determining low flow indices at ungauged sites, together with the corresponding multiple regression coefficients. In all cases there is a very high degree of correlation.

5.0 DISCUSSION
The data used are of good quality, as was evidenced by the double mass plots. This was further confirmed by their use in the development of models that can predict these indices. The flow duration curve is like a signature of a catchment and can be used for general hydrological description, the licensing of abstractions or effluents, and hydropower schemes. The 90% exceedance flow value can be used as one measure of the groundwater contribution to stream flow, and the ratio Q90/Q50 can be used to represent the proportion of stream flow from groundwater sources.
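The Table 6 models are multiple linear regressions of each index on the catchment characteristics. The sketch below fits such a model with ordinary least squares using the Table 2 and Table 3 values (columns ordered MAR, AREA, S1085, STFRQ, MSL, PE) and then predicts at Namatala as a mock "ungauged" site. Note that with seven catchments and seven fitted parameters the fit is exact, so the hold-out prediction is indicative only; this is an illustration of the approach, not a reproduction of the published coefficients.

    import numpy as np

    # Rows: gauged catchments (Mpologoma..Simu); columns: MAR, AREA, S1085,
    # STFRQ, MSL, PE, taken from Table 3
    X = np.array([
        [1436.0,  2479.6, 17.34, 0.113, 128.9, 2005.3],
        [1459.2,   477.6, 19.07, 0.600,  61.8, 1694.7],
        [1180.0,    34.0, 30.00, 0.210,  12.0, 1791.2],
        [ 999.8, 23946.9,  2.77, 0.011, 160.2, 2007.0],
        [1432.3,  1603.8, 13.57, 0.025,  73.9, 1911.0],
        [1731.9,    92.0, 76.30, 0.340,  32.5, 1942.2],
        [1816.2,   165.0, 77.69, 0.370,  34.0, 1653.8],
    ])
    y = np.array([11.0, 35.0, 22.0, 10.0, 32.0, 2.5, 36.0])  # Q75(10), %ADF

    A = np.column_stack([np.ones(len(X)), X])                # constant term
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Predict at Namatala, described only by its characteristics (observed: 40)
    site = np.array([1.0, 1344.9, 123.6, 43.8, 0.60, 27.9, 1652.7])
    print(site @ coeffs)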
The low flow frequency curve (LFFC) can be used to obtain the return period of a drought, in the design of water schemes, and in water quality studies. The Malaba, Manafwa, Namalu and Namatala catchments, underlain by the Basement Complex rocks, have lower base flow indices than the other catchments of Mpologoma, Simu, Sipi and Kapiri, which are underlain by Nyanzian granites, Tertiary volcanics and Pleistocene sediments/alluvium, respectively. The slope of the LFFC may also be considered as a low flow index, represented by the difference between two flow values (normalised by the catchment area), one from the high and another from the low probability domain. The similarity in the values of the average recession constant KREC may imply that the rocks are comparable in their storativity and that the catchment climates are related; this is consistent with the fact that they are in the same climatic area. Other indices may be obtained from the LFFC where it exhibits a break in the curve near the modal value. Though not a general feature, this break is regarded by some researchers as the point where a change in drought characteristics occurs. It means that higher frequency values are no longer drought flows but have a tendency towards normal conditions. It may also indicate conditions at which a river starts getting water exclusively from deep subsurface storage. The storage-yield diagram can be used for the preliminary design of reservoir sites, including their screening, and to estimate yields at certain levels of reliability. By plotting the storage requirements of other catchments on the same graph, as shown in Fig. 5, the hypothesis that all flows corresponding to a particular percentile on the flow duration curve would have the same storage-yield curve is tested. The curves are close, and the hypothesis is therefore valid. One curve can thus be used to approximate the storage-return period relationship of other catchments. The above analysis is based on the assumption that the flow duration curve is an appropriate index of the storage-yield characteristics of catchments.
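The average recession constant can be estimated from dry-season recession limbs, over which the flow decays approximately as Q_t = Q_0 k^t. The sketch below fits k by least squares on the log flows of a single limb; the limb-selection rules of the full method are omitted, and the data are placeholders.

    import numpy as np

    def recession_constant(recession_flows):
        """Fit Q_t = Q0 * k**t on one recession limb; returns daily k."""
        q = np.asarray(recession_flows, dtype=float)
        t = np.arange(len(q))
        slope, _ = np.polyfit(t, np.log(q), 1)   # log Q = log Q0 + t log k
        return float(np.exp(slope))

    # Illustrative recession limb (cumecs), decaying towards base flow
    print(round(recession_constant([4.0, 3.4, 2.95, 2.55, 2.2, 1.95, 1.7]), 3))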
In comparison with the Malawi study [9], the values of ADF, Q75(10) and MAM(10) here are higher, implying lower river flow values in Malawi. The average recession constants (KREC) are much lower here, implying a less variable rainfall distribution than in Malawi. In comparison with South African rivers [14], the flows here are higher and there is less variability of rainfall than in South Africa. Furthermore, the models developed here for estimating low flow parameters in ungauged catchments are linear, in contrast to the UK studies [10]. The subscript after each value in Table 6 ranks the coefficients of the catchment characteristics according to their effect on the dependent low flow parameter. If the rankings of all the coefficients of a particular catchment characteristic are added, the value obtained gives an indication of the impact of that individual catchment characteristic on the low flow indices. In Table 7, the 1st row gives a summation of the rankings of the coefficients and shows that, when all low flow indices are considered, the mean annual rainfall (MAR), the area and the main stream length (MSL) are the most significant independent variables. The next factors are the slope and potential evaporation, respectively, and lastly the stream frequency and the recession constant. The stream frequency has the most effect on the MAM(10). The BFI is mostly dependent on the recession constant and the slope. In the 2nd row, after excluding BFI, the significant independent variables still remain MAR, AREA and MSL. In the 3rd row, when both KREC and BFI are excluded from the dependent variables, the significant factors remain the same. The effect of potential evaporation appears as a negative coefficient in all the indices except BFI. These observations compare with the UK low flow studies [10], where area and rainfall were significant factors, while the potential evaporation had a negative coefficient.

6.0 CONCLUSIONS
(i) The development of a database of low flow indices has been initiated, taking account of the eastern Uganda catchments that have sufficient stream flow data; the records available ranged from 17 to 28 years. The indices provide data for licensing abstractions, hydropower assessment, hydrological description, return periods of drought, reservoir design, short term forecasting and hydrogeology.
(ii) The models developed for estimating low flow indices at ungauged sites, based on multiple linear regression, provide very good estimates of the indices. These linear models can be used for design purposes at ungauged catchments.
(iii) The applicability and accuracy of these models is a function of the quality and length of record of the streamflow data, together with the accuracy of measurement of the catchment characteristics within the region.
(iv) The results show that the methodology applied here can be used for other relatively homogeneous climatic regions with fairly uniform soil and geologic conditions.
(v) The results show that the dominant catchment characteristics that determine the values of the low flow indices are the mean annual rainfall, the area, the main stream length, the slope and the potential evaporation, in that order.

REFERENCES
1. State of Environment Report 1996, National Environment Management Authority, Ministry of Water, Lands and Environment, 1997, Kampala, Uganda.
2. National Biomass Study, Technical Report, Forestry Department, Ministry of Water, Lands and Environment, 2002, Kampala, Uganda.
3. State of Environment Report 2000/2001, National Environment Management Authority, Ministry of Water, Lands and Environment, 2002, Kampala, Uganda.
4. State of Environment Report 1998, National Environment Authority, Ministry of Water, Lands and Environment, 1999, Kampala, Uganda.
5. Barifaijo, E. (ed.), Geology of Uganda, Geology Department, Makerere University, Kampala, Uganda, 2002.
6. Database, Water Resources Management Department, Directorate of Water Development, Ministry of Water, Lands and Environment, 2003, Entebbe, Uganda.
7. Ayoade, J.O., Tropical Hydrology and Water Resources, Macmillan, 1988, London, UK.
8. Institute of Hydrology, Low Flow Report, Institute of Hydrology, 1980, Wallingford, UK.
9. Drayton, A.R.S., Kidd, C.H.R., Mandeville, A.N. and Miller, J.B.A., Regional Analysis of River Floods and Low Flows in Malawi, 1980, Institute of Hydrology, Wallingford, UK.
10. Gustard, A., Bullock, A. and Dixon, J.M., Low Flow Estimation in the United Kingdom, Institute of Hydrology, 1992, Wallingford, UK.
11. Ruks, D.A., Owen, W.G. and Hanna, L.W., Potential Evaporation in Uganda, Water Development Department, Ministry of Mineral and Water Resources, 1970, Entebbe, Uganda.
12. Haan, C.T., Statistical Methods in Hydrology, Iowa State University Press, 1982, Iowa, USA.
13. Ojeo, J., A Low Flow Study of Eastern Catchments, Unpublished Report, Department of Civil Engineering, Makerere University, Kampala, Uganda.
14. Smakhtin, V.Y. and Watkins, D.A., Low Flow Estimation in South Africa, 1997, Water Research Commission Report No. 494/1/97, Pretoria, South Africa.
15. Velz, C.J. and Gannon, J.J., Low flow characteristics of streams, Ohio State University Studies Engineering Survey XXII: 138-157, 1953, Ohio, USA.
SUITABILITY OF AGRICULTURAL RESIDUES AS FEEDSTOCK FOR FIXED BED GASIFIERS
M. Okure, J.A. Ndemere, S.B. Kucel; Department of Mechanical Engineering, Faculty of Technology, Makerere University, P.O. Box 7062, Kampala, Uganda. Tel. +256-41 541173, Fax +256-41 530686
B.O. Kjellstrom; Professor Emeritus, Division of Heat and Power Technology, The Royal Institute of Technology, SE-100 44 Stockholm, Sweden
ABSTRACT
The use of agricultural residues as feedstocks for fixed bed gasifiers could help Uganda and other developing countries to break their over-dependence on expensive fossil fuels for heat and power generation. Uganda produces residues, such as bagasse, rice husks, maize cobs, coffee husks and groundnut shells, that are usually wasted or poorly and inefficiently used. This paper presents the results of an investigation into using the different agricultural residues in the same gasifier units, where the only major difficulty is the fuel feeding system. The results of the physical and thermo-chemical tests carried out showed that gasification of these residues is a promising technology, with relatively high expected gas yields and heating values.
Key words: Agricultural residues, gasification, particle size, heating values.
1.0 INTRODUCTION
In the world today, the leading sources of energy are fossil fuels, mainly coal, oil and natural gas. The ever-growing demand for heat and power for cooking, district heating and other heating processes, construction, manufacturing, communications, transportation, lighting and other utility needs has led to a great reduction of these energy sources and to subsequent price increases over the years. This high demand is attributed to growth in economies, especially of the developing countries. This has called for a reduction of the dependency on these depletable energy sources and has advocated the utilisation of the more abundant and renewable energy sources such as biomass, hydropower, solar energy, wind energy, geothermal and tidal energy, as well as the use of more efficient energy conversion technologies aimed at replacing inefficient, high energy consumption technologies. Gasification, the process under study here, is a thermo-chemical process for converting a solid fuel into combustible gases by a partial combustion process (Gabra Mohamed, 2000). The gas generated can be combusted in a stove to produce heat, or in an internal combustion engine or a gas
turbine to produce electricity through an electric generator. This technology changes the solid fuels into a form that can be used easily and with improved efficiency. The types of solid fuels include coal, peat and biomass. Among these, biomass is the most environmentally friendly source of energy and is an important renewable energy source with a large potential in many parts of the world. Biomass includes wood and forest residues, agro-wastes, industrial wastes such as sawdust, and human/animal wastes. The biomass potential in Uganda is enormous (Ministry of Energy and Mineral Development, 2001). At present, wood and forest residues and sawdust are mainly used in thermal processes such as cooking or heating, while animal wastes are utilised to some extent by anaerobic digestion for the production of biogas. In both rural and urban areas of Uganda, the use of biomass fuels for heat generation is widely practiced. This applies not only to areas not connected to the national electric grid but also to a big percentage of people or settings that use hydroelectricity, mainly because biomass fuels are cheaper than electricity for heating and cooking. Wood fuels are widely used in rural areas while charcoal is mainly used in urban areas. Due to the increasingly high demand for wood and forest residues, forests have been cleared for energy needs, settlement and farming, which has led to increases in the prices of biomass fuels and to further environmental degradation. Large amounts of agricultural residues are burnt in the fields as a means of disposal, leaving only a little for improving or maintaining soil fertility. This leaves a great deal of energy potential unutilised and wasted. A technology that can efficiently convert these agricultural residues into heat and power would lessen the pressure being put on the forests and also reduce the over-dependency on wood and forest residues. Gasification in a fixed bed appears to be a technology suitable for the applications of relatively limited capacity that would be appropriate in Uganda. Not much research had been done on these residues as far as physical and thermo-chemical properties are concerned, and therefore it was not easy to ascertain which of the various agricultural residues are best suited to the gasification technology. It is imperative to note that some physical and thermo-chemical properties of the different categories of biomass vary from place to place. There was therefore a need to carry out a thorough study to determine the suitability of agricultural residues available in Uganda as feedstocks for fixed bed gasifiers.

2.0 AGRICULTURAL RESIDUES IN UGANDA
Biomass energy constitutes 92-94% of Uganda's total energy consumption (Okure, 2005). The traditional bio-energy forms include firewood (88.6%), charcoal (5.9%) and agricultural residues (5.5%). Modern biomass includes biogas, briquettes, pellets, liquid bio-fuels and producer gas. The use of biogas is limited to a few households (Sebbit, 2005).
Biomass can be classified as woody biomass and non-woody biomass (Okure, 2005). Woody biomass includes both hard and soft wood. Non-woody biomass includes agricultural residues, grasses, sawdust and cow dung (Skreiberg, 2005). Agricultural residues are the leftovers after crops are harvested or processed. Currently most of these residues are left unused or burnt in the fields; on a small scale they are used for space heating in rural areas as well as commercially in a few thermal industrial applications (Kuteesakwe, 2005). The use of these agricultural residues for industrial purposes is a much more environmentally friendly practice than many residue disposal methods currently in use. Agricultural residues are an excellent alternative to woody biomass for many reasons: aside from their abundance and renewability, using agricultural residues will benefit farmers, industry, human health and the environment (Meghan, 1997).

3.0 EXPERIMENTAL PROCEDURE AND RESULTS
Various tests were carried out on the agricultural residues, covering fuel physical properties, thermo-chemical properties and fuel feeding characteristics.
The determination of fuel moisture content was based on a wet basis (%wt.b) (Sluiter, 2004a). Coffee husks were found to have the highest moisture content of 14.073% and rice husks the lowest of 10.038%. Figure 1 shows the results of the moisture content tests for the various agricultural residues used in the study. The lower and upper quartiles of the data are represented by q1 and q3 respectively.
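Moisture content on a wet basis, as used here, is the mass of water driven off on oven drying divided by the as-received mass; a one-line sketch (the sample masses are illustrative, not measured values from the study):

    def moisture_wet_basis(mass_wet, mass_dry):
        """% moisture, wet basis: water lost on drying over as-received mass."""
        return 100.0 * (mass_wet - mass_dry) / mass_wet

    print(moisture_wet_basis(50.0, 42.96))  # ~14.1%, the order found for coffee husks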
Figure 1: Moisture content for agricultural residues (box plots showing min, q1, median, q3, max and standard deviation)
The next experiment was determining the bulk density, ρb (kg/m3), using the method described in Albrecht Kaupp (1984). The results showed that coffee husks have the highest bulk density in comparison to the other agricultural residues. The details can be seen in Figure 2.
Figure 2: Bulk density for agricultural residues

Particle size was also determined. For fuels with relatively big particles, the particle sizes were determined by measuring their length, width and height with a metre rule. For small particle sizes, the particles were spread on a sheet of paper and a metre rule was used to measure the size with the help of a magnifying glass. The results showed that maize cobs had the biggest particle size, followed by bagasse, groundnut shells, coffee husks and rice husks, in that order. The heat contents of the agricultural residues were determined based on the lower heating value, LHV, using a bomb calorimeter (Cussons Technology, undated). The results are shown in Table 1. Also included are the lower heating values for the dry fuels as well as the dry ash-free fuels, which were calculated from Dulong's formula using data from the ultimate analysis. Bagasse had the highest heating value of 17.84 MJ/kg while rice husks had the lowest of 13.37 MJ/kg. The tests for ash content involved the complete burning of a dry fuel sample of known weight in an oven at 550°C and weighing the remains (Sluiter, 2004b). Rice husks were found to have the highest ash content of 21.29% while maize cobs had the lowest of 2.07%. Figure 3 shows the detailed results.
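The constants in Dulong's formula vary slightly between texts, and the exact form used in the study is not given; the sketch below uses one common statement of it, together with the usual latent-heat correction from HHV to LHV, on an illustrative biomass composition. Treat both the coefficients and the composition as assumptions.

    def dulong_hhv(c, h, o, s=0.0):
        """Higher heating value (MJ/kg) from ultimate analysis in wt% (dry).
        One common form of Dulong's formula; coefficients vary between texts."""
        return 0.3383 * c + 1.443 * (h - o / 8.0) + 0.0942 * s

    def lhv_from_hhv(hhv, h):
        """Subtract the latent heat of the water formed from the fuel hydrogen."""
        return hhv - 2.442 * 8.936 * h / 100.0

    # Illustrative biomass-like composition (wt%, dry basis): C, H, O
    hhv = dulong_hhv(c=48.0, h=6.0, o=43.0)
    print(round(hhv, 2), round(lhv_from_hhv(hhv, h=6.0), 2))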
Table 1: Fuel heating values for the agricultural residues

Fuel               Fuel ID   LHV for dry fuel (MJ/kg)   LHV for dry ash-free fuel (MJ/kg)   LHV (MJ/kg) +/- 2% (measured)
Rice husks         F1        11.92                      16.31                               13.37
Groundnut shells   F2        17.89                      20.70                               17.27
Coffee husks       F3        16.08                      17.54                               17.08
Bagasse            F4        16.53                      17.34                               17.84
Maize cobs         F5        16.25                      16.28                               17.54
760
Figure 4: Gas heating values for producer gas from the agricultural residues

The determination of the bulk flow characteristics of the various agricultural residues was also considered important. The results are shown in Table 2.

Table 2: Flow characteristics of the agricultural residues

Fuel sample        Average angle of repose (°)   Hopper angle (°)
Rice husks         32.6                          57.4
Groundnut shells   30.4                          59.6
Coffee husks       25.8                          64.2
Bagasse            30                            60
Maize cobs         27                            63
4.0 DISCUSSION
Low moisture content has a positive impact on the fuel heating value as well as the gas heating value, because fuels with low moisture content burn more efficiently. Efficient burning or combustion causes the reduction and oxidation processes of gasification to take place at higher temperatures, hence yielding producer gas with a high gas heating value. Fuels with high moisture contents therefore have low fuel heating values and produce a gas with a low gas heating value. On this basis, rice husks should have the highest fuel and gas heating values; instead they have the lowest fuel heating value and the second lowest gas heating value. This is because rice husks have a relatively high ash content (>20%).
The handling and flow of the fuel into the gasifier depend on several factors including bulk density, particle size and angle of repose. Due to the difference in bulk densities, the feeding system should be able to handle the various agro-residues for a multi-fuel system. Bulk density depends not only on the moisture content and the size of the fuel particle but also on the manner in which the fuel is packed in the container; this certainly varies from fuel to fuel and from one individual to another. Fuels with small particles, such as rice husks, are likely to cause flow problems in the gasifier. There is also a possibility of a high pressure drop in the reduction zone, leading to low temperatures and tar production. Large particles like maize cobs may lead to start-up problems and poor gas quality, i.e. a low gas heating value due to the low fuel reactivity. The fuel heating value is generally a representation of the carbon and hydrogen contents of the fuel, which in effect influence the gas heating value. It should be noted that the fuel ash content also affects the fuel heating value. The higher the fuel moisture content, the lower the fuel heating value; however, fuels for gasification should not be completely dry, because some moisture is necessary for the water gasification of char. Ash content impacts greatly on the running and operation of fixed bed gasifier units. Ashes can cause various problems, including the formation of slag, which leads to excessive tar formation and blocking of the gasifier unit. Gasification of fuels with ash contents of less than 5% does not lead to the formation of slag, while severe slag formation is encountered with fuels having ash contents of 12% or more. The heating values of producer gas from fuels with high bulk densities are generally higher than those from low bulk density fuels, meaning that the heating values increase with increasing bulk density. However, for coffee husks the gas heating value falls slightly outside the general trend, because their high moisture content reduces the thermal efficiency of the gasification process, hence the abnormally low gas heating value of coffee husks compared to the other agricultural residues. It should also be noted that many of the characteristics investigated in this study could change for various reasons. The fuel moisture content could vary with changes in weather as well as with location and storage. Particle size could vary depending on the harvesting and shelling methods and technologies. Bulk density could vary depending on the level of packing, which changes from person to person. These physical properties could in turn affect the thermo-chemical properties.
5.0 CONCLUSIONS
This study showed that agricultural residues such as maize cobs, bagasse, coffee husks and groundnut shells, as well as, to a smaller extent, rice husks, can be used as feedstocks for fixed bed gasifiers. The availability of large amounts of agricultural residues such as coffee husks, bagasse, maize cobs and rice husks all year round presents Uganda with a sustainable energy source that could contribute to solving the country's current energy problems. This in turn could impact greatly on the country's economy, bringing about growth and development and hence improving people's quality of life. Gasification of agricultural residues has great potential in Uganda, and it could help to reduce the unsustainable exploitation of woody biomass for purposes of cooking and lighting, hence preserving nature as well as maintaining a clean environment.
Gabra, Mohamed (2000), "Study of Possibilities and Some Problems of Using Cane Residues as Fuel in a Gas Turbine for Power Generation in the Sugar Industry", Doctoral Thesis, Lulea University of Technology, Sweden.
Ministry of Energy and Mineral Development (2001), "National Biomass Energy Demand Strategy 2001-2010", Draft Document.
Okure, Mackay (2005), "Biomass Resources in Uganda", presented at the Norwegian University of Science and Technology and Makerere University joint summer course, Energy Systems for Developing Countries.
Sebbit, Adam (2005), "Traditional Use of Biomass in Uganda", presented at the Norwegian University of Science and Technology and Makerere University joint summer course, Energy Systems for Developing Countries.
Skreiberg, Oyvind (2005), "An Introduction to Heating Values, Energy Quality, Efficiency, Fuel and Ash Analysis and Environmental Aspects", presented at the Norwegian University of Science and Technology and Makerere University joint summer course, Energy Systems for Developing Countries.
Kuteesakwe, John (2005), "Biomass Commercial Utilization", presented at the Norwegian University of Science and Technology and Makerere University joint summer course, Energy Systems for Developing Countries.
Hayes, Meghan (1997), "Agricultural Residues: A Promising Alternative to Virgin Wood Fiber", Resource Conservation Alliance, Washington DC.
Sluiter, Amie (2004a), "Determination of Total Solids in Biomass", Laboratory Analytical Procedure.
Kaupp, Albrecht (1984), "Gasification of Rice Hulls: Theory and Praxis", Gate/Friedr. Vieweg & Sohn, Braunschweig/Wiesbaden.
Cussons Technology, "The P6310 Bomb Calorimeter Set", Instruction Manual, 102 Great Clowes Street, Manchester M7 1RH, England.
Sluiter, Amie (2004b), "Determination of Ash in Biomass", Laboratory Analytical Procedure.
NUMERICAL METHODS IN SIMULATION OF INDUSTRIAL PROCESSES
Roland W. Lewis, Eligiusz W. Postek, David T. Gethin, Xin-She Yang, William K.S. Pao, Lin Chao; [email protected]; Department of Mechanical Engineering, University of Wales Swansea, Singleton Park, SA2 8PP Swansea, Wales
ABSTRACT
The paper provides an overview of some industrial applications leading to formulations for advanced numerical techniques. The applications comprise squeeze casting processes, forming of tablets and petroleum reservoir modelling. All of the problems lead to solutions of highly nonlinear, coupled sets of multiphysics equations.
Keywords: squeeze forming, powder compaction, oil fields, coupled problems, thermomechanics, porous media, fluid flow, nonlinear solid mechanics, phase transformations, microstructural solidification models, numerical methods, contact problems, discrete elements, finite elements.
1.0 INTRODUCTION
Contemporary technology requires increasingly sophisticated numerical techniques. The complexity of most industrial processes and natural phenomena usually leads to highly nonlinear, coupled problems. The nonlinearities are embedded in the behaviour of the materials, in body interactions and in the interaction of the tensor fields. Further complexities, which are also sources of nonlinearities, are the existence of widely understood imperfections, i.e. geometrical and material ones. All of these require the elaboration of new numerical algorithms embracing such effects and the effective solution of the arising multiphysics systems of nonlinear differential equations. These problems require a solution in order to improve design and the quality of products, and in consequence the quality of life. A few applications of such algorithms, describing the manufacturing processes of everyday products and natural large scale phenomena, are presented herein.

2.0 SQUEEZE FORMING PROCESSES
2.1 General Problem Statement
The analysis of squeeze forming processes is currently divided into two parts, namely mould filling and the analysis of thermal stresses. During mould filling, special attention is paid to
the metal displacement, and during the stress analysis to the pressure effect on the cooling rate and to second order effects. The metal displacement during the die closure in squeeze casting is an important process, because many defects such as air entrapment, slag inclusion, cold shuts and cold laps may arise during this process. Modelling the metal displacement is an efficient approach to optimising an existing process or guiding a new design. As a typical numerical approach, the finite element method has been used successfully in modelling the mould filling process of conventional castings [1-4]. However, little work has been done on modelling the metal displacement in the squeeze casting process except for the early work by Gethin et al. [5], in which an approximate method was employed to incorporate the effect of the metal displacement in the solidification simulation. The analysis of stresses during the squeeze casting process leads to highly nonlinear coupled thermomechanical problems including phase transformations. The effective and accurate analysis of the stresses is important and should lead to an evaluation of the residual stresses in the workpieces and of the stress cycles in the die. The accurate estimation of the stress levels in the die should allow the prediction of the lifetime of the die from a fatigue aspect.

2.2 Mould Filling
In this paper, a quasi-static Eulerian finite element method is presented for modelling the metal displacement in the squeeze casting process. The dynamic metal displacement process is divided into a series of static processes, referred to as subcycles, in each of which the dieset configuration is considered as being in a static state; thus the metal displacement is modelled by solving the Navier-Stokes equations on a fixed mesh. For each subcycle, an individual mesh is created to accommodate the changed dieset configuration due to the motion of the punch. Mesh-to-mesh data mapping is carried out regularly between two adjacent subcycles. The metal front is tracked with the pseudo-concentration method, in which a first order pure convection equation is solved using the Taylor-Galerkin method. An aluminium alloy casting is simulated and the numerical results are discussed to assess the effectiveness of the numerical approach. The associated thermal and solidification problems are described in the thermal stress analysis section, since both analyses exploit the same mathematical formulation.
2.2 Mould Filling In this paper, a quasi-static Eulerian finite element method is presented for modelling the metal displacement in the squeeze casting process. The dynamic metal displacement process is divided into a series of static processes, referred to as subcycles, in each of which the dieset configuration is considered as being in a static state, thus the metal displacement is modelled by solving the Navier-Stokes equation on a fixed mesh. For each subcycle, an individual mesh is created to accommodate the changed dieset configuration due to the motion of the punch. Mesh-to-mesh data mapping is carried out regularly between two adjacent subcycles. The metal front is tracked with the pseudo-concentration method in which a first order pure convection equation is solved by using the Taylor-Galerkin method. An aluminum alloy casting is simulated and the numerical results are discussed to assess the effectiveness of the numerical approach. The associated thermal and solidification problems are described in the thermal stress analysis section, since both analyses exploit the same mathematical formulation. Fuid Flow and Free Surface Tracking The flow of liquid metal may be assumed to be Newtonian and incompressible. The governing Navier-Stokes equations, which represent the conservation of mass and momentum, are given below in terms of the primitive flow variables, i.e. the velocity vector u and pressure p: V.u =0 (1) p
+ (u. V)u
- v.
(Vu)
,
765
International Conference on Advances in Engineering and Technology
where ρ is the density, p is the pressure, μ is the dynamic viscosity and g is the gravitational acceleration vector. The free surface movement is governed by the following first order pure advection equation:
∂F/∂t + (u·∇)F = 0, (3)
where F is the pseudo-concentration function, defined as a continuous function varying between -1 and 1 across the elements lying on the free surface. Details of the finite element formulation and numerical algorithm can be found in Lewis (2000).
Modelling of Metal Displacement
The metal displacement in the die closure process of squeeze casting is a dynamic process in which the liquid metal is driven by the continuous downward punch movement. As a result of the fluid flow, the metal front rises in the die cavity and, in some cases where the die has secondary cavities, overspill may take place as well. During this process the molten metal is forced to relocate repeatedly within the varying die cavity until the process is finished. Obviously, the metal displacement in the squeeze casting process differs from the mould filling of conventional casting processes. As mentioned earlier, an Eulerian type approach is employed in the present study, which implies that the fluid flow and free surface are computed on a fixed finite element mesh placed over the entire domain of the filled and unfilled regions. To accommodate the variation of the die cavity, more than one mesh, generally a set of meshes corresponding to different punch positions, has to be generated to cover the whole process of the die closure. Accordingly, the dynamic process of metal displacement is divided into a series of static processes, in each of which a fixed dieset configuration and its corresponding finite element mesh are employed to model the fluid flow and free surface movement. The combination of all the static processes approximates the dynamic process, which is why the present method is termed a "quasi-static approach". Each of the static processes is referred to as a subcycle, and any two adjacent subcycles are linked by appropriate interpolation of the velocity, pressure and pseudo-concentration function from the previous mesh to the following one, also called data mapping in this paper. In addition, the total volume of the molten metal should be constant provided any volume change caused by cooling and solidification is negligible. Therefore, global volume or mass conservation must be ensured in the simulation; a driver loop for this strategy is sketched below.
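The subcycle strategy can be summarised by the following sketch (Python; the routines passed in as arguments are hypothetical placeholders, not the authors' actual implementation):

def run_die_closure(punch_positions, make_mesh, map_fields, init_fields, solve_subcycle):
    # one subcycle per punch position: fixed mesh, fixed dieset configuration
    fields, mesh = None, None
    for punch_pos in punch_positions:
        new_mesh = make_mesh(punch_pos)  # mesh for the current dieset configuration
        if fields is None:
            fields = init_fields(new_mesh)
        else:
            fields = map_fields(fields, mesh, new_mesh)  # mesh-to-mesh data mapping
        mesh = new_mesh
        fields = solve_subcycle(mesh, fields)  # Navier-Stokes + front advection
    return fields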
Punch Movement Simulation The downward punch movement has two direct impacts. One of them is to change the shape and size of the whole die cavity which can be accommodated by generating a series of finite element meshes as mentioned earlier. The other impact is to force the molten metal to flow into the die cavity.
Fig. 1. Schematic illustration of modelling the metal flow in the squeeze casting process.
In the present work, a velocity boundary condition is imposed at the interface between the punch and the liquid metal to simulate the effect of the punch action, as shown in Fig. 1. This manifests itself as a prescription of the inlet velocity boundary condition in conventional mould filling simulations. However, there are some differences with respect to the normal "inlet" condition. In the conventional mould filling process, the position and size of the inlet do not change. In contrast, in the squeeze casting process the punch/metal interface may vary with the movement of the metal front. This implies that the punch/metal interface, where the velocity boundary condition is to be prescribed, depends upon the profile of the metal front, which is itself an unknown. Therefore, an iterative solution procedure is employed, in which the status of each node on the punch/metal boundaries is switched "on" or "off" dynamically by referring to its pseudo-concentration function value. Whether the boundary velocity is prescribed depends on the node status, as sketched after Fig. 2 below.
Fig.2. The initial and final dieset configurations for the casting without metal overspill.
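The dynamic switching of the punch/metal boundary nodes can be expressed as follows (a hypothetical sketch; taking F > 0 to mark the metal side of the front is an assumption of this example):

def update_punch_boundary(punch_nodes, F, punch_velocity):
    # switch each punch-surface node "on" (velocity prescribed) or "off"
    # according to its pseudo-concentration function value
    bc = {}
    for node in punch_nodes:
        if F[node] > 0.0:  # node wetted by metal: prescribe the punch velocity
            bc[node] = punch_velocity
        # otherwise the node faces the unfilled region and is left free
    return bc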
Mesh-To-Mesh Data Mapping
The mesh-to-mesh data mapping from a previous subcycle to the following one is implemented on a mesh of three-node triangular elements generated by splitting the six-node flow elements. As mentioned earlier, the values of the velocity and the pseudo-concentration function are assigned to all of the nodes, but the pressure values are solved only for the corner nodes of the six-node triangular elements. To enable the three-node elements to be used for the data mapping, the pressure values for the mid-side nodes of the flow element are calculated by linear interpolation. In the data mapping process, a node-locating procedure, in which all of the new-mesh nodes are located in the old mesh, is followed by a simple linear interpolation based on the three-node triangular elements.
Global Mass Conservation
The global mass conservation of the molten metal must be guaranteed in the modelling. Following the above description, the metal mass in the die cavity immediately after the data mapping is less than that at the initial moment. The initial metal mass can therefore be used as a criterion to judge when to finish an ongoing subcycle and commence a new one. In detail, the total mass of the metal at the initial moment is calculated and denoted by M0. In the computation for each subcycle, the metal mass in the die cavity is monitored after each iterative loop. Once it reaches M0, the ongoing subcycle is ended immediately and a new subcycle commences, as in the sketch below.
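A sketch of this termination criterion (hypothetical names; the solver and mass integral are passed in as callables, and M0 is the metal mass computed at the initial moment):

def run_subcycle(solve_step, metal_mass, fields, M0, tol=1e-3):
    # iterate the flow solution until the cavity again holds the full metal mass M0
    while True:
        fields = solve_step(fields)
        if metal_mass(fields) >= (1.0 - tol) * M0:  # mass of the region where F > 0
            return fields  # end this subcycle; the next one starts on a new mesh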
Fig. 3. The evolution of the metal profile in the die cavity at punch displacements of (a) 0 mm, (b) 3 mm, (c) 6 mm, (d) 9 mm, (e) 12 mm, (f) 15 mm, (g) 18 mm, (h) 21 mm, (i) 24 mm, (j) 27 mm and (k) 30 mm.
Numerical Simulation and Results
Numerical simulation is carried out for an aluminium alloy casting. The computer code employed in the simulation was developed based on the mould filling part of the integrated finite element package MERLIN (Lewis, 1996), which has been tested with benchmark problems of fluid flow (Lewis, 2000). The initial and final dieset configurations for the casting are shown in Fig. 2. As the casting has an axisymmetric geometry, only half of the vertical section of the casting and dieset configuration is considered in the numerical simulation. The outer diameter of the casting is 320 mm, the height 80 mm, and the wall thickness 10 mm. The total displacement of the punch, from its first contact with the metal surface to the end of the die closure process, is 30 mm and is divided into 10 equal displacement increments in the simulation. The speed of the punch is 5.0 mm/s and the whole metal displacement process lasts 6.0 s. Fig. 3 shows the evolution of the metal profile in the die cavity. The simulation results clearly expose the process in which the liquid metal is displaced in the continuously changing die cavity as a result of the punch action.
2.3 Thermal Stress Analysis
With respect to the stress analysis the following assumptions are made: the mechanical constitutive model is elasto-visco-plastic, the problem is transient and a staggered solution scheme is employed. The influence of the air gap on the interfacial heat transfer coefficient is also included. These issues are illustrated by 3D numerical examples. An extensive literature exists concerning the solution of thermomechanical problems, for example, Lewis (1996) and Zienkiewicz & Taylor (2000).
Governing Equations The FE discretized thermal equation is of the form
K_T T + C_T dT/dt = F_T, (5)
where K_T and C_T are the conductivity and heat capacity matrices and F_T is the thermal loading vector. Eqn (5) can be solved using implicit or explicit time marching schemes; in our case the implicit scheme is chosen. For the case of a nonlinear static problem, with the assumption of large displacements, the mechanical problem is of the form

(K_e-vp + K_σ) Δq = ΔF + ΔG + ΔF_c, (6)
where K_e-vp is the elasto-viscoplastic matrix, K_σ is the 'initial stresses' matrix, ΔF is the increment of the external load vector, ΔG is the increment of the body forces vector, ΔF_c is the increment of the contact forces and Δq is the displacement increment. In the case of a dynamic problem the equation governing the mechanics is of the form

M d²q/dt² + C dq/dt + (K_e-vp + K_σ) q = F + F_c, (7)
where M and C are the mass and damping matrices, and F and F_c are the external and contact forces, respectively. The increment of stresses includes the thermal and viscoplastic effects, assuming Perzyna's model (Perzyna, 1971), and reads

Δσ = D (Δε − Δε_vp − Δε_T), (8)

dε_vp/dt = γ ⟨Φ(F)⟩ ∂Q/∂σ,  with ⟨Φ(F)⟩ = Φ(F) for F > 0 and ⟨Φ(F)⟩ = 0 for F ≤ 0,

where F and Q are the yield and plastic potential functions (F = Q is assumed, i.e. associative plasticity) and γ is the fluidity parameter. Additionally, in the case of phase transformation, the existence of shrinkage strains is assumed when the temperature passes the liquidus threshold in the cast material. The yield limit is a function of temperature.
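An explicit evaluation of this viscoplastic strain rate might look as follows (a sketch assuming a von Mises yield function F = q − σ_y(T) and Φ(F) = F/σ_y; these are common choices, not ones stated in the paper):

import numpy as np

def viscoplastic_strain_rate(dev_stress, sigma_y, gamma):
    # dev_stress: 3x3 deviatoric stress tensor; q: von Mises equivalent stress
    q = np.sqrt(1.5 * np.sum(dev_stress ** 2))
    F = q - sigma_y                        # assumed yield function value
    if F <= 0.0:
        return np.zeros_like(dev_stress)   # <phi(F)> = 0 inside the yield surface
    phi = F / sigma_y                      # assumed form of phi(F)
    flow_direction = 1.5 * dev_stress / q  # dQ/dsigma for associative von Mises flow
    return gamma * phi * flow_direction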
Outline of the Solution Algorithms
A staggered scheme is adopted for the two-field problem (thermal and mechanical), Felippa & Park (1980), Vaz & Owen (1996). The general scheme for this type of problem is presented in Fig. 4 (a). The solution is obtained by the sequential execution of two modules (thermal and mechanical).
Fig. 4. Illustration of the solution methods: staggered solution, i.e. exchange of information between the thermal and mechanical modules (a), and enthalpy method (b).
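One time step of the staggered scheme of Fig. 4 (a) can be sketched as follows (Python; the module interfaces are hypothetical, since the paper does not list its routine signatures):

def staggered_step(T, q, gap, dt, thermal_module, mechanical_module, interface_coefficient):
    h = interface_coefficient(gap)        # thermal contact depends on the gap width
    T = thermal_module(T, h, dt)          # thermal pass: Eqn (5)/(9)
    q, gap = mechanical_module(q, T, dt)  # mechanical pass: Eqn (6), T-dependent yield
    return T, q, gap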
The sources of coupling are as follows: the thermomechanical contact relations (dependence of the interfacial heat transfer coefficient on the width of the gap between the cast and mould), the dependence of the yield limit on temperature, and the existence of shrinkage strains during phase transformation. In the case of phase transformation, due to the existence of a strong discontinuity in the heat capacity with respect to temperature (Fig. 4 (b)), the enthalpy method is applied, as shown by Lewis et al. (1978, 1996). The essence of the enthalpy method is the introduction of a new variable (the enthalpy). This formulation enables us to circumvent the problems involved with the sharp change in heat capacity due to latent heat release during the phase transformation and leads to faster convergence. Introducing the new variable, H, and employing the finite element approximation, the thermal equation takes the form
ρc_p = dH/dT,  K_T T + (dH/dT) dT/dt = F_T. (9)
The definition of the enthalpy variable for pure metals and alloys is given by Eqn (10):

H = ∫₀ᵀ ρc dT. (10)
The finite difference approximation (Lewis et al., 1978) is used for the estimation of the enthalpy variable. The same solution scheme is used in the case of the mould filling analysis.
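For illustration, a tabulated enthalpy and the smooth effective capacity recovered from it could be computed as follows (a sketch; lumping the latent heat uniformly over the freezing range is an assumption of this example):

import numpy as np

def enthalpy_table(T, rho_c, latent, T_sol, T_liq):
    # effective capacity: rho*c plus the latent heat released over [T_sol, T_liq]
    c_eff = rho_c + np.where((T >= T_sol) & (T <= T_liq),
                             latent / (T_liq - T_sol), 0.0)
    # H(T) by trapezoidal integration; its slope dH/dT stays bounded at the front
    dH = 0.5 * (c_eff[1:] + c_eff[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(dH)))

# np.gradient(H, T) then replaces rho*c_p in Eqn (9)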
Mechanical Contact
The basic assumption is that the whole cast part is in perfect contact with the mould at the beginning of the thermal stress analysis. The assumption is justified by the fact that the thermal stress analysis starts after the commencement of solidification. Because of the assumption of small deformations, we may consider so-called "node to node" contact. A penalty formulation is used, which is briefly described here. The potential energy of the mechanical system is augmented with a system of constraints represented by the penalty stiffness κ,

Π = (1/2) qᵀ K q − qᵀ F + (1/2) gᵀ κ g,

and after minimization the resulting equations of equilibrium are of the form

K′ q = F′, (11)
where K′ and F′ are the augmented stiffness matrix and the equivalent force vector, respectively, and g is the vector of penetrations of the contacting nodes into the contact surface. In the case of non-existence of contact, the distance between the nodes is calculated and
in consequence this value is transferred to the thermal module, where the interfacial heat transfer coefficient is calculated.
Thermal Contact
As mentioned above, the interfacial heat transfer coefficient is used for establishing the thermal properties of the interface layer between the mould and the cast part. The coefficient depends on the air conductivity (k_air), the thermal properties of the interface materials and the magnitude of the gap (g). The formula from Lewis (2000) and Lewis & Ransing (1998) is adopted: h = k_air / (g + k_air / h0). The value h0, an initial heat transfer coefficient, should be obtained from experiments and reflects the influence of the type of interface materials, where coatings may be applied. Additionally, from a numerical point of view, it allows us to monitor the dependence of the resulting interfacial coefficient on the gap magnitude. The formula is monotone in the gap width, as the sketch below shows.
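A direct transcription of the correlation (the numerical values are illustrative only, not taken from the paper):

def interface_h(gap, k_air=0.03, h0=1000.0):
    # h = k_air / (g + k_air / h0); gap in m, k_air in W/(m K), h0 in W/(m^2 K)
    return k_air / (gap + k_air / h0)

print(interface_h(0.0))    # gap = 0 recovers h0 (perfect contact)
print(interface_h(1e-4))   # h decreases as the air gap opens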
Microstructural Model
The main assumptions of a microstructural solidification model are presented herein (Thevoz, Desbiolles & Rappaz, 1989; Celentano, 2002). The partition laws are given below. They state that the sum of the solid fraction, f_s, and the liquid fraction, f_l, is 1. The solid fraction consists of the sum of the dendritic, f_d, and eutectic, f_e, fractions. A further assumption, valid for the equiaxed dendritic solid fraction, is that it consists of the dendritic grain, f_g, internal, f_i, and intergranular eutectic, f_g^e, volumetric fractions, respectively:
f_s + f_l = 1,  f_s = f_d + f_e,  f_s = f_g f_i + f_g^e. (12)
The internal fraction is split into the sum of the dendritic, f_i^d, and eutectic, f_i^e, internal volumetric fractions, which leads to the final formulae for the dendritic and eutectic fractions, i.e.,
f_i = f_i^d + f_i^e,  f_d = f_g f_i^d,  f_e = f_g f_i^e + f_g^e, (13)

with the assumption that the intergranular eutectic phase does not appear in the alloy, f_g^e = 0, and that the growth is spherical:

f_g^d = (4/3) π N_d R_d³,  f_i^e = (4/3) π N_e R_e³, (14)

where N_d, N_e and R_d, R_e are the grain densities and grain sizes described by the nucleation and growth laws.
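Eqn (14) transcribes directly into code (illustrative values only; in the model the grain densities and radii follow from the nucleation and growth laws):

import math

def spherical_fraction(N, R):
    # volumetric fraction of spherical grains: (4/3) * pi * N * R^3
    return (4.0 / 3.0) * math.pi * N * R ** 3

print(spherical_fraction(3.0e9, 50e-6))  # e.g. 3.0e9 grains/m^3 of radius 50 um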
Illustrative Examples
Cylinder
To demonstrate the effect of the applied pressure, the geometry of a cylindrical sample is adopted. The diameter of the mould is 0.084 m, the diameter of the cast is 0.034 m, the height of the cast is 0.075 m and the height of the mould is 0.095 m.
Fig. 5. Discretized mould and cast (a); temperature variation close to the bottom of the cast for squeezed and free casting (b).
The sample was discretized with 9140 isoparametric linear bricks and 10024 nodes. The finite element mesh for half of the cylinder (even though the whole cylinder was analyzed) is presented in Fig. 5. The following thermal boundary and initial conditions were assumed: a constant temperature of 20°C on the outer surface of the mould and 200°C on the top of the cast; initial temperatures of 700°C for the cast and 200°C for the mould. The mould is fixed rigidly to the foundation. The die is made of steel H13 with the properties: Young's modulus 0.25E+12 N/m², Poisson's ratio 0.3, density 7721 kg/m³, yield stress 0.55E+10 N/m², thermal expansion coefficient 0.12E-5. The material properties of the cast (aluminium alloy LM25) are: Young's modulus 0.71E+11 N/m², Poisson's ratio 0.3, density 2520 kg/m³, yield stress 0.15E+9 N/m², fluidity parameter 0.1E-2, thermal expansion coefficient 0.21E-4, contraction 0.3E-12, T_liq = 612°C, T_sol = 532°C.
Fig. 6. Solidification (a, b) and displacement (c, d) patterns (squeeze casting: left; no external pressure: right).
The effect of the pressure applied to the top of the cast is demonstrated in Fig. 6. Comparing the displacement patterns for both cases, the displacements of the squeezed workpiece are smallest at the bottom, where the gap is closed. This implies a higher cooling rate and, in consequence, faster solidification; the solidified region is larger for the squeezed part. The temperature close to the bottom is lower in the squeezed part than in the one without external pressure (Fig. 5, right).
Aluminium Part - Influence of Large Displacements and Initial Stresses
The analysed aluminium part has overall dimensions of 0.47 m x 0.175 m x 0.11 m. The finite element discretizations of the cast and mould are presented in Fig. 7. The parts are discretized with 2520 linear bricks and 3132 nodes.
Fig. 7. Finite element mesh of the mould (a) and the cast (b).
Fig. 8. Solidification patterns: small displacements, no external pressure (a); squeeze casting (b); squeeze casting, large displacements (c).
Thermal boundary and initial conditions are assumed as in the previous case. The mould is fixed rigidly to the foundation and the pressure is applied to the top of the cast. The material data are set as for the previous example. The process is simulated over the first 30 sec of the cooling cycle. Results concerning three cases are given in Fig. 8. We focus our attention on the solidification patterns. Assuming small displacements, it can be seen that the effect of pressure is significant, namely, the solidification is further advanced when applying pressure than for the case of a free casting, Fig. 8 (a) and (b). For the case of nonlinear geometry, Fig. 8 (c), the solidification appears to be less advanced than when this effect is neglected, Fig. 8 (b). However, the solidification is still more advanced in the case of squeeze forming than without applying the external pressure.
Coupling the Mould Filling and Thermal Stress Analyses
In this case we follow the general assumption that the process is sequential, which implies that the thermal stress analysis is performed after the mould has been filled with metal and the punch has reached its final position. The latter implies that the final shape has been achieved. In
this process the temperature field obtained at the end of the mould filling process represents the initial condition for the transient thermal stress analysis. An example of an industrial squeeze forming process is described herein. Fig. 9 presents the coolant channel system of the punch and die. The problem is considered as axisymmetric, and the part being formed is a wheel. The material properties are the same as presented in the previous examples. The diameter of the wheel is 0.5 m, the diameter of the die-punch-ring system is 0.9 m, the height of the punch is 0.23 m and the thickness of the part is 0.015 m. The initial temperatures of the particular parts of the system were as follows: cast 650°C, die and ring 280°C, and punch 300°C.
Fig. 9. Coolant channel system: die (a), punch (b).
The sequence of the punch positions and the corresponding advancement of the filling of the cavity is shown in Fig. 10. The maximum punch travel is 49 mm. The temperature distribution after completion of the filling process is given in Fig. 11 (a). The next figure, Fig. 11 (b), shows the temperature distribution after 16 sec of the cooling phase. The corresponding solidification pattern is given in Fig. 11 (c) and the von Mises stress distribution is presented in Fig. 11 (d). The highest von Mises stress, 325 MPa, occurs in the punch close to the top of the cast part.
Fig. 10. Sequence of the punch positions (0 mm, 10 mm, 35 mm, 40 mm, 45 mm and the final position of 49 mm) and the advancement of the metal front (pseudo-concentration function distribution).
Fig. 11. Temperature distribution after the end of the filling process (a), temperature distribution after 16 sec of the cooling phase (b), solidification pattern after 16 sec (c), von Mises stress distribution after 16 sec (d).
Example of a Microstructural Model
The geometry of the cylindrical sample presented above is adopted. The mechanical properties are taken from the previous example.
Fig. 12. Growth laws: dendritic (left), eutectic (right).
The material is an aluminium alloy with 7% silicon. The solidification properties of the dendritic and eutectic parts are: average undercoolings 4.0 deg and 3.5 deg, standard deviations 0.8, maximum grain densities 3.0E+09, latent heat 398000 J/kg, liquidus diffusivity 0.11. The temperature at the top of the sample is kept constant at 600°C. The growth evolution laws are presented in Fig. 12, and the distributions of the internal variables at time 2.7 sec, i.e. the liquid and the solid dendritic and eutectic fractions, are given in Fig. 13 (a, b, c). The conductivity distribution is also given in Fig. 13 (d).
Fig. 13. Distributions: liquid fraction (a), solid dendritic fraction (b), solid eutectic fraction (c), conductivity (d).
Thermal Stress Analysis, Microstructural Solidification, Industrial Example
An example of the solidification and thermal stress analysis of a wheel is presented. The die and cast are discretized with 25108 isoparametric bricks and 22909 nodes. The discretization scheme for the cast and mould is given in Fig. 14. The material data are assumed as above.
Fig. 14. Discretization scheme: mould (a) and cast (b).
Fig. 29. Total oil production and water cut (a) and comparison of the present study with field measurement (b).
Fig. 30. Stress arch formation in the overburden layer.
The reservoir is composed mainly of heterogeneous chalk with an average porosity of 35%. The depth of the reservoir is approximately 2.7-2.8 km subsea. In order to minimise the run time, a relatively coarse mesh was used for the representation. A total of 166 months of history were simulated. Fig. 29 (a) shows the total oil production and the water cut ratio over the simulation period, and Fig. 29 (b) shows the comparison of the present study with field measurement. The analysis showed that the seabed has sunk approximately 0.11 m below the subsea level. The maximum vertical displacement occurs at the crestal region of the field, where most of the active production wells are located. The magnitude of the maximum seabed subsidence is approximately 3.3 m. The subsidence caused by the production is very extensive, covering an area of approximately 112 sq. km. Fig. 30 shows the stress arch formation in the top of the overburden layer. Due to the load redistribution, the flanks of the reservoir experience overpressuring. The vertical downward movement of the reservoir forces the reserves into the flank region. This also explains the relatively low overall average decline of the reservoir pressure, apart from the replenishment of reservoir energy due to compaction. 4.4 Conclusions
In this paper, we have presented an analysis of the coupled reservoir equations and performed a critical analysis of the pore compressibility term. It is shown that the traditional approach of using a single-valued parameter to characterise the pore compressibility is insufficient. In addition, field observation has repeatedly invalidated the fundamental assumption of constant total stress in the uncoupled formulation. Finally, we have presented a field case study of a real-life reservoir in the North Sea. The analysis showed that formation compaction replenishes the reservoir energy and extends the life of the production period. During active operation, the heterogeneous weak chalk formation experiences compaction in the range of subsidence/compaction ratios (S/C) of 0.7-0.75. 5.0 CLOSURE We have presented a few representative successful applications of the developed algorithms and programs. The applications comprise manufacturing and natural phenomena connected with the exploitation of natural resources. Further research will be connected with the development of algorithms and programs allowing deeper insight into the nature of the mentioned processes, namely, structural interactions, fluid flow-structure interactions, fluid flow-temperature interactions and fluid flow-structure-temperature interactions, along with the investigation of the influence of different types of imperfections by means of extensive parametric studies and design sensitivity analysis. 6.0 ACKNOWLEDGMENT
The support of the Engineering and Physical Sciences Research Council, UK, GKN Squeezeform, AstraZeneca, BP Amoco and Elf is gratefully acknowledged.
REFERENCES
Usmani A.S., Cross J.T., Lewis R.W., A finite element model for the simulations of mould filling in metal casting and the associated heat transfer, Int. J. Numer. Methods Eng.
Usmani A.S., Cross J.T., Lewis R.W., The analysis of mould filling in castings using the finite element method, J. of Mat. Proc. Tech., 38 (1993), pp. 291-302.
Lewis R.W., Usmani A.S., Cross J.T., Efficient mould filling simulation in castings by an explicit finite element method, Int. J. Numer. Methods Fluids, 20 (1995), pp. 493-506.
Lewis R.W., Navti S.E., Taylor C., A mixed Lagrangian-Eulerian approach to modelling fluid flow during mould filling, Int. J. Numer. Methods Fluids, 25 (1997), pp. 931-952.
Gethin D.T., Lewis R.W., Tadayon M.R., A finite element approach for modelling metal flow and pressurised solidification in the squeeze casting process, Int. J. Numer. Methods Eng., 35 (1992), pp. 939-950.
Ravindran K., Lewis R.W., Finite element modelling of solidification effects in mould filling, Finite Elements in Analysis and Design, 31 (1998), pp. 99-116.
Lewis R.W., Ravindran K., Finite element simulation of metal casting, Int. J. Numer. Methods Eng., 47 (2000).
Lewis R.W., MERLIN - An integrated finite element package for casting simulation, University of Wales Swansea, 1996.
Zienkiewicz O.C., Taylor R.L., The Finite Element Method, fifth ed., Butterworth-Heinemann, Oxford, 2000.
Sluzalec A., Introduction to Nonlinear Thermomechanics, Springer Verlag, 1992.
Kleiber M., Computational coupled non-associative thermo-plasticity, Comp. Meth. Appl. Mech. Eng., 90 (1991), pp. 943-967.
Perzyna P., Thermodynamic theory of viscoplasticity, in Advances in Applied Mechanics, Academic Press, New York, 11 (1971).
Felippa C.A., Park K.C., Staggered transient analysis procedures for coupled dynamic
Vaz M., Owen D.R.J., Thermo-mechanical coupling: Models, strategies and application, CR/945/96, University of Wales Swansea, 1996.
Lewis R.W., Morgan K., Zienkiewicz O.C., An improved algorithm for heat conduction problems with phase change, Int. J. Numer. Methods Eng., 12 (1978), pp. 1191-1195.
Lewis R.W., Morgan K., Thomas H.R., Seetharamu K.N., The Finite Element Method in Heat Transfer Analysis, Wiley, 1996.
Lewis R.W., Ransing R.S., The optimal design of interfacial heat transfer coefficients via a thermal stress model, Finite Elements in Analysis and Design, 34 (2000), pp. 193-209.
Lewis R.W., Ransing R.S., A correlation to describe interfacial heat transfer during solidification simulation and its use in the optimal feeding design of castings, Metall. Mater. Trans. B, 29 (1998), pp. 437-448.
Thevoz Ph., Desbiolles J., Rappaz M., Modelling of equiaxed formation in casting, Metall. Trans. A, 20A (1989), p. 311.
Celentano D.J., A thermomechanical model with microstructure evolution for aluminium alloy casting processes, Int. J. of Plasticity, 18 (2002), pp. 1291-1335.
Cundall P.A., Strack O.D.L., A discrete element model for granular assemblies, Geotechnique, 29 (1979), pp. 47-65.
Kibbe A.H., Pharmaceutical Excipients, APA and Pharmaceutical Press, 2000.
Lewis R.W., Schrefler B.A., The Finite Element Method in the Static and Dynamic Deformation and Consolidation of Porous Media, 2nd ed., John Wiley & Sons, England, 1998.
Gethin D.T., Ransing R.S., Lewis R.W., Dutko M., Crook A.J.L., Numerical comparison of a deformable discrete element model and an equivalent continuum analysis for the compaction of ductile porous material, Computers & Structures, 79 (2001), pp. 1287-1294.
Goodman R.E., Taylor R.L., Brekke T., A model for the mechanics of jointed rock, J. Soil Mech. Found., ASCE, 1968.
Ransing R.S., Gethin D.T., Khoei A.R., Mosbah P., Lewis R.W., Powder compaction modelling via the discrete and finite element method, Materials & Design, 21 (2000), pp. 263-269.
Rowe R.C., Roberts R.J., Mechanical properties, in: Pharmaceutical Powder Compaction Technology, eds Alderborn G. and Nystrom C., Marcel Dekker Inc., New York, 1996, pp. 283-322.
Macropac reference manual, Oxmat, 2001.
Dong L.L., Lewis R.W., Gethin D.T., Postek E., Simulation of the deformation of ductile pharmaceutical particles with the finite element method, ACME conference, April 2004, Cardiff, UK.
Zienkiewicz O.C., Zhu J., A simple error estimator and adaptive procedure for practical engineering analysis, International Journal for Numerical Methods in Engineering, 24 (1987), pp. 337-357.
Lewis R.W., Makurat A., Pao W.K.S., Fully coupled modelling of seabed subsidence and reservoir compaction of North Sea oil fields, J. of Hydrogeology, (27), 2000.
Johnson J.P., Rhett D.W., Siemers W.T., Rock mechanics of the Ekofisk reservoir in the evaluation of subsidence, JPT, July (1989), pp. 717-722.
Settari A., Kry P.R., Yee C.T., Coupling of fluid flow and soil behaviour to model injection into uncemented oil sands, JCPT, 28 (1989), pp. 81-92.
Finol A., Farouq-Ali S.M., Numerical simulation of oil production with simultaneous ground subsidence, SPEJ, (1975), pp. 411-424.
Gutierrez M., Lewis R.W., Masters I., Petroleum reservoir simulation coupling fluid flow and geomechanics, SPE Reser. Eval. & Engrg., June (2001), pp. 164-172.
MOBILE AGENT SYSTEM FOR COMPUTER NETWORK MANAGEMENT
O. C. Akinyokun, Bells University of Technology, Ota, Ogun State, Nigeria
A. A. Imianvan, Department of Computer Science, University of Benin, Benin, Nigeria
ABSTRACT
Conventionally, the management of computer networks involves the physical movement of the Network Administrator from one computer location to another. The mathematical modelling and simulation of mobile agent systems for managing the performance of computer networks, with emphasis on quantitative decision variables, has been reported in the literature. The prototype of an expert system for the administration of computer network resources, with emphasis on both quantitative and qualitative decision variables, using mobile agent technology is presented in this paper. The architecture of the system is characterized by a relational database of the computer network resources, and the process of management is modelled using the Unified Modeling Language (UML) and Z-Notation. The implementation of the mobile agent is driven by an intelligent mechanism for de-assembling, serializing, queuing and the Divide-and-Conquer Novelty Relay Strategy of its component parts (subagents), and a mechanism for assembling the output reports of the subagents. The ultimate goal is to provide an intelligent autonomous system that can police the economic use of the computer network resources and generate desirable statistics for policy formulation and decision making.
Keywords: Mobile, Agent, Management, Launch, Migrate, Queue, En-Queue, Serialize, Assemble, Relay
1.0 INTRODUCTION
A computer network is a group of computers connected together and separated by physical distance. Searching for resources in a network conventionally involves the physical movement of the Network Administrator from one computer location to another. This approach is not only stressful but also introduces delays in monitoring events on the network. Besides, events on the network are not monitored as they arise, and the Network Administrator is often burdened with the issue of which computer to monitor next. Mobile agents are autonomous, intelligent software entities that are capable of moving through a network, searching for and interacting with resources on behalf of the Network Administrator. Mobile agent technology has been applied to electronic commerce transactions (Jonathan et al., 1999; Olga, 2003; Yun et al., 2002; Harry et al., 2000; Li et al., 2003;
Youyong et al., 2003; Dipanjan & Hui, 2004), distributed information retrieval (Lalana et al., 2001; Harry & Tim, 2002; Jeffrey & Anupam, 2003; Meas, 1994), and network management (Cassel et al., 1989; Krishnan & Zimmer, 1991; Allan & Karen, 1993; Marshall, 1994; German & Thomas, 1998). In Akinyokun (1997), an attempt is made to develop a utility program using Pascal to police the economic utilization of the RAM and hard disk of microcomputers. The utility program is capable of being activated and run on a stand-alone computer to:
(a) Keep track of the users' data and program files in a computer hard disk.
(b) Give notice of each of the users' files that: (i) is more than one month old; (ii) has not been accessed and updated within a month; (iii) occupies more than a specified memory space.
(c) Raise an alarm and recommend that the offending files be backed up by the Operations Manager.
(d) Automatically delete the offending users' files at the end of the third alarm.
In Lai & Baker (2000) and Saito & Shusho (2002), attempts were made to develop mobile agents capable of managing the bandwidth of a computer network environment. A mathematical modelling and simulation of the performance management of computer network throughput, utilization and availability is proposed in Aderounmu (2001). This research attempts to study the features of a computer network environment and develop an expert system using mobile agent technology to intelligently and practically manage computer network bandwidth, response time, memory (primary and secondary), files (data and program) and input-output devices. Each of the resources of a computer network environment has unique features; hence the management of each resource is supported by a unique model using the Unified Modeling Language (UML) (Bruce, 1998; Meilir, 2000) and Z Notation (Bowen et al., 1997; Fraser et al., 1994; Spivey, 1998). The details of the UML and Z Notation of each subagent model have been presented in Imianvan (2006). To minimize the size of this paper, the UML for managing the bandwidth of a computer network is presented as a typical example, while the Z Schemas of all the subagents are presented. The ultimate goal is to provide an intelligent autonomous system that can police the economic use of the computer network resources and generate desirable statistics for policy formulation and decision making.
2.0 DESIGN OF THE MOBILE AGENT
The computer network resources that are of interest are as follows:
(a) Bandwidth.
(b) Primary memory.
(c) Secondary memory.
(d) Response time.
(e) Data files.
(f) Program files.
(g) Input device.
(h) Output device.
A modular architecture is proposed whereby each resource is administered by a subagent. The platforms for the take-off of the mobile agent at the source computer and for its landing at the target computer are the source host operating system and the target host operating system, respectively. An interface is developed to aid the launching of the mobile agent, while it migrates with another interface which causes it to be broken into its component parts (subagents) at the target. Each subagent interacts with the host operating system of the target and its appendages or utility programs for network monitoring and the cyber clock for the purpose of assessing and evaluating the resources that are available. At the source, the mobile agent is decomposed into its constituent parts (subagents). The results obtained by all the subagents after a successful visit to a set of target workstations are assembled for the purpose of reporting them for external analysis, interpretation, policy formulation and decision making by the Network Administrator, as sketched below.
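A schematic sketch of this architecture (Python; all class and function names are hypothetical, since the actual system is specified with UML and Z schemas in the following sections):

def probe(host, resource):
    # placeholder for the interaction with the target host operating system
    # and its network monitor / cyber clock utilities
    return None

RESOURCES = ["bandwidth", "primary memory", "secondary memory", "response time",
             "data files", "program files", "input device", "output device"]

class Subagent:
    def __init__(self, resource):
        self.resource = resource
    def visit(self, host):
        return {"host": host, "resource": self.resource,
                "reading": probe(host, self.resource)}

def run_agent(hosts):
    subagents = [Subagent(r) for r in RESOURCES]  # the de-assembled mobile agent
    reports = [s.visit(h) for h in hosts for s in subagents]
    return reports  # assembled for the Network Administrator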
2.1 Bandwidth Management
The used bandwidth in the computer network environment, denoted by B, is evaluated by

B = Σ_{i=1..n} Σ_{j=1..m} b_ij / t_j,

where b_ij represents the bandwidth used in transmitting the jth packet in the ith workstation and t_j is the transmission time of the jth packet. The percentage of the used bandwidth (B_u) relative to the bandwidth subscribed to (B_s) by the computer network environment, denoted by P, is evaluated as

P = 100 B_u / B_s.
The Unified Modeling Language (UML) specification of the logic of bandwidth management is presented in Fig. 1, while its Z Schema is presented as follows:
2.2 Z Schema of Bandwidth Management
NumberofPackets?, NumberofWorkstations? : ℕ
PacketSize, TargetPacketSize? : ℝ
TransmissionTime, TargetTime? : ℝ
BandwidthUsed, BandwidthUsed' : ℝ
i, j, m, n : ℕ
BandwidthSubscribed, PercentageOfUsedBandwidth : ℝ
BandwidthUsed ← 0
n ← NumberofWorkstations?
m ← NumberofPackets?
∀i, 1 ≤ i ≤ n /* loop over workstations or targets */
Begin
∀j, 1 ≤ j ≤ m /* loop over packets transmitted in the ith workstation */
Begin
PacketSize ← TargetPacketSize?
TransmissionTime ← TargetTime?
BandwidthUsed' ← BandwidthUsed + (PacketSize / TransmissionTime)
DisplayData (PacketSize, TransmissionTime, BandwidthUsed)
End
End
BandwidthPercentage ← (BandwidthUsed' / BandwidthSubscribed) * 100
DisplayData (BandwidthUsed', BandwidthSubscribed, BandwidthPercentage)
Fig. 1 - UML diagram for bandwidth management: a bandwidth request with the number of targets and target identities feeds a transmission time module and a bandwidth module; the bandwidth evaluator displays the packets transmitted, packet size, transmission time, bandwidth used, bandwidth subscribed and percentage of used bandwidth, and a report is generated.
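The computation formalised by the schema can be exercised directly, for example (Python; the figures are illustrative only):

def bandwidth_used(stations):
    # stations: per-workstation lists of (packet_size_bits, transmission_time_s)
    return sum(size / t for packets in stations for (size, t) in packets)

def percentage_used(b_used, b_subscribed):
    return 100.0 * b_used / b_subscribed

stations = [[(1500 * 8, 0.01), (500 * 8, 0.004)],  # workstation 1: two packets
            [(9000 * 8, 0.05)]]                    # workstation 2: one packet
print(percentage_used(bandwidth_used(stations), 10e6))  # vs a 10 Mbit/s subscription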
2.3 Primary and Secondary Memory Management
The primary or secondary memory of a target computer used over a period of time t, denoted by R, is evaluated by

R = Σ_{i=1..n} Σ_{j=1..m} r_ij,

where r_ij represents the primary or secondary memory space used by the jth packet in the ith workstation. The percentage of the used memory, denoted by P_r, is evaluated as

P_r = 100 R_u / R_w,

where R_w represents the memory size of the target computer and R_u represents the memory size used by the packets. The Z Schema of memory management is presented as follows:
2.4 Z Schema of Memory Management
NumberofPackets?, NumberofWorkstations? : ℕ
PacketSize, TargetPacketSize? : ℝ
TimeFrame, TargetPeriodofMemoryusage? : ℝ
Memoryused, Memoryused' : ℝ
i, j, m, n : ℕ
SubscribedMemory, PercentageMemoryused : ℝ
TotalMemoryused ← 0
n ← NumberofWorkstations?
m ← NumberofPackets?
∀i, 1 ≤ i ≤ n /* loop over workstations or targets */
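The corresponding computation mirrors the bandwidth case (an illustrative sketch; the names are not taken from the schema above):

def memory_used(stations):
    # stations: per-workstation lists of the memory occupied by each packet (bytes)
    return sum(r for packets in stations for r in packets)

def percentage_memory(r_used, r_workstation):
    return 100.0 * r_used / r_workstation

print(percentage_memory(memory_used([[2048, 4096], [1024]]), 512 * 1024 ** 2))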