21st EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING
COMPUTER-AIDED CHEMICAL ENGINEERING
Advisory Editors: R. Gani and E.N. Pistikopoulos
Volume 1: Distillation Design in Practice (L.M. Rose)
Volume 2: The Art of Chemical Process Design (G.L. Wells and L.M. Rose)
Volume 3: Computer Programming Examples for Chemical Engineers (G. Ross)
Volume 4: Analysis and Synthesis of Chemical Process Systems (K. Hartmann and K. Kaplick)
Volume 5: Studies in Computer-Aided Modelling, Design and Operation
Part A: Unit Operations (I. Pallai and Z. Fonyó, Editors)
Part B: Systems (I. Pallai and G.E. Veress, Editors)
Volume 6: Neural Networks for Chemical Engineers (A.B. Bulsari, Editor)
Volume 7: Material and Energy Balancing in the Process Industries - From Microscopic Balances to Large Plants (V.V. Veverka and F. Madron)
Volume 8: European Symposium on Computer Aided Process Engineering-10 (S. Pierucci, Editor)
Volume 9: European Symposium on Computer Aided Process Engineering-11 (R. Gani and S.B. Jørgensen, Editors)
Volume 10: European Symposium on Computer Aided Process Engineering-12 (J. Grievink and J. van Schijndel, Editors)
Volume 11: Software Architectures and Tools for Computer Aided Process Engineering (B. Braunschweig and R. Gani, Editors)
Volume 12: Computer Aided Molecular Design: Theory and Practice (L.E.K. Achenie, R. Gani and V. Venkatasubramanian, Editors)
Volume 13: Integrated Design and Simulation of Chemical Processes (A.C. Dimian)
Volume 14: European Symposium on Computer Aided Process Engineering-13 (A. Kraslawski and I. Turunen, Editors)
Volume 15: Process Systems Engineering 2003 (Bingzhen Chen and A.W. Westerberg, Editors)
Volume 16: Dynamic Model Development: Methods, Theory and Applications (S.P. Asprey and S. Macchietto, Editors)
Volume 17: The Integration of Process Design and Control (P. Seferlis and M.C. Georgiadis, Editors)
Volume 18: European Symposium on Computer-Aided Process Engineering-14 (A. Barbosa-Póvoa and H. Matos, Editors)
Volume 19: Computer Aided Property Estimation for Process and Product Design (M. Kontogeorgis and R. Gani, Editors)
Volume 20: European Symposium on Computer-Aided Process Engineering-15 (L. Puigjaner and A. Espuña, Editors)
Volume 21: 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering (W. Marquardt and C. Pantelides)
Volume 22: Multiscale Modelling of Polymer Properties (M. Laso and E.A. Perpète)
Volume 23: Chemical Product Design: Towards a Perspective through Case Studies (K.M. Ng, R. Gani and K. Dam-Johansen, Editors)
Volume 24: 17th European Symposium on Computer Aided Process Engineering (V. Plesu and P.S. Agachi, Editors)
Volume 25: 18th European Symposium on Computer Aided Process Engineering (B. Braunschweig and X. Joulia, Editors)
Volume 26: 19th European Symposium on Computer Aided Process Engineering (Jacek Jeżowski and Jan Thullie, Editors)
Volume 27: 10th International Symposium on Process Systems Engineering (Rita Maria de Brito Alves, Claudio Augusto Oller do Nascimento and Evaristo Chalbaud Biscaia, Editors)
Volume 28: 20th European Symposium on Computer Aided Process Engineering (S. Pierucci and G. Buzzi Ferraris, Editors)
COMPUTER-AIDED CHEMICAL ENGINEERING, 29
21st EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING, PART A
Edited by
E.N. Pistikopoulos Imperial College London, UK
M.C. Georgiadis Aristotle University of Thessaloniki, Greece
A.C. Kokossis National Technical University of Athens, Greece
Amsterdam – Boston – Heidelberg – London – New York – Oxford Paris – San Diego – San Francisco – Singapore – Sydney – Tokyo
Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK

First edition 2011
Copyright © 2011 Elsevier B.V. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions and selecting "Obtaining permission to use Elsevier material".
Notice No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.
ISBN (Part A): 978-0-444-53711-9
ISBN (Set): 978-0-444-53895-6
For information on all Elsevier publications visit our web site at elsevierdirect.com
Printed and bound in Great Britain 11 12 10 9 8 7 6 5 4 3 2 1
Contents

Preface
xxxiii
Members of the International Scientific Committee
xxxv
Multiscale Modeling

Detailed Mathematical Modelling of Liquid-Liquid Extraction Columns Moutasem Jaradat, Menwer Attarakih and Hans-Jörg Bart
1
Multi-Scale modelling of a membrane reforming power cycle with CO2 capture Øivind Wilhelmsen, Rahul Anantharaman, David Berstad and Kristin Jordal
6
Modeling the liquid back mixing characteristics for a kinetically controlled reactive distillation process Mayank Shah, Edwin Zondervan, Anton A. Kiss, Andre B. de Haan
11
Application of computer-aided multi-scale modelling framework – Aerosol case study Martina Heitzig, Christopher Gregson, Gürkan Sin, Rafiqul Gani
16
Sensitivity of shrinkage and collapse functions involved in pore formation during drying Seddik Khalloufi, Cristhian Almeida-Rivera, Jo Jansen, Marcel Van-Der-Vaart, and Peter Bongers
21
A reduced-order approach of Distributed parameter models using Proper orthogonal decomposition M. Valbuena, D. Sarabia, C. de Prada
26
A Process Unit Modeling Framework within a Heterogeneous Simulation Environment Ingo Thomas
31
Mathematical description of mass transfer in supercritical-carbon-dioxide-drying processes Cristhian Almeida-Rivera, Seddik Khalloufi, Jo Jansen and Peter Bongers
36
Three-moments conserving sectional techniques for the solution of coagulation and breakage population balances Margaritis Kostoglou, Michalis C. Georgiadis
41
Modelling and Simulation of Forced Convection Drying of Electric Insulators Cristea Vasile-Mircea, Goga Firuta, Mogos Liviu Mihai
46
Comprehensive Mathematical Modeling of Controlled Radical Copolymerization in Tubular Reactors Mariano Asteasuain, Daniel Covan, Claudia Sarmoria, Adriana Brandolin, Carolina Leite de Araujo, José Carlos Pinto
51
An Efficient High Resolution FEM for PDE Systems Duc Hoang Minh, Harvey Arellano-Garcia, Lorenz T. Biegler
56
Simulation of Reactive Absorption: Model Validation for CO2-MEA system Chinmay Kale, Inga Tönnies, Hans Hasse, Andrzej Górak
61
A CFD-Population Balance Model for the Simulation of Kühni Extraction Column Mark W. Hlawitschka, Moutasem Jaradat, Fang Chen, Menwer M. Attarakih, Jörg Kuhnert, Hans-Jörg Bart
66
vi
Contents
CFD Study on the Application of Rotary Kiln in Pyrolysis Ka-Leung Lam, Adetoyese O. Oyedun, Chi-Wai Hui
71
Towards a Generic Simulation Environment for Multiscale Modelling based on Tool Integration Yang Zhao, Cheng Jiang, Aidong Yang
76
Integral Formulation of the Population Balance Equation using the Cumulative QMOM Menwer Attarakih, M. Jaradat, M. Hlawitschka, H.-J. Bart, J. Kuhnert
81
Integration of Generic Multi-dimensional Model and Operational Policies for Batch Cooling Crystallization Noor Asma Fazli Abdul Samad, Ravendra Singh, Gürkan Sin, Krist V. Gernaey, Rafiqul Gani
86
A Multi-scale Systems Approach to Granulation Process Design Rohit Ramachandran
91
Multi-scale modeling of activated sludge floc structure formation in wastewater bioreactors Irina D. Ofiţeru, Micol Bellucci, Vasile Lavric, Cristian Picioreanu, Thomas P. Curtis
96
A Multi-layered Ontology for Physical-Chemical-Biological Processes Heinz A Preisig
101
Modeling and simulation of a gas cleaning section in a Cu/Ni metallurgical plant Mirnes Alic, Tor Anders Hauge, Bernt Lie
106
A novel approach to the biomass pyrolysis step and product lumping Daniele Bernocco, Paolo Greppi, Elisabetta Arato
111
Towards a rigorous model of electrodialysis processes Matthias Johannink, Adel Mhamdi, Wolfgang Marquardt
116
Stochastic Monte Carlo Simulations as an Efficient Multi-Scale Modeling Tool for the Prediction of Multi-Variate Distributions Dimitrios Meimaroglou, Costas Kiparissides
121
Modeling of a batch emulsion copolymerization reactor in the presence of a chain transfer agent: estimability analysis, parameters identification and experimental validation B. Benyahia, M. A. Latifi, C. Fonteix, F. Pla
126
Multiscale modeling of chemical vapor deposition of silicon Nikolaos Cheimarios, Sokratis Garnelis, George Kokkoris, Andreas G. Boudouvis
131
3D Cellular automata for modeling of spray freeze drying process S. Ivanov, A. Troyankin, P. Gurikov, A. Kolnoochenko, N. Menshutina
136
Spatially 3D simulation of a catalytic monolith by coupling of 1D channel model with CFD Jan Štěpánek, Petr Kočí, Milan Kubíček, František Plát, Miloš Marek
141
A generic framework for stochastic dynamic simulation of chemical engineering systems using free/opensource software Carl Sandrock and Philip de Vaal
146
Modelling of micro- and nano-patterned electrodes for the study and control of spillover processes in catalysis I. Bonis, S. Valiño-Pazos, I.S. Fragkopoulos, C. Theodoropoulos
151
Multiscale Modeling of a Silicon Solar Wafer Manufacturing Process Ruochen Liu, German Oliveros, Seetharaman Sridhar, B. Erik Ydstie
156
Process modelling and model reduction for chemical engineering applications Bogdan Dorneanu, Johan Grievink, Costin S. Bildea
161
General-purpose graphics processing units application for diffusion simulation using cellular automata A. Kolnoochenko, P. Gurikov, N. Menshutina
166
Mercury Transformation Modelling with Bromine Addition in Coal Derived Flue Gases Kevin J. Hughes, Lin Ma, Richard T.J. Porter and Mohamed Pourkashanian
171
Synthesis and Design

Optimal design of multiple dividing wall columns based on genetic programming Fernando I. Gómez-Castro, Mario A. Rodríguez-Ángeles, Juan G. Segovia-Hernández, Claudia Gutiérrez-Antonio, Abel Briones-Ramírez
176
Retrofit design of a pharmaceutical batch process considering green chemistry and engineering principles Alireza Banimostafa, Stavros Papadokonstantakis, Konrad Hungerbühler
181
Design and control of an energy integrated biodiesel process Anton A. Kiss, Costin Sorin Bildea
186
A systematic approach towards applicability of reactive distillation Anton A. Kiss, Prachi Singh, Cornald J. G. van Strien
191
Strategies for the Robust Simulation of Thermally Coupled Distillation Sequences Miguel A. Navarro, José A. Caballero, Ignacio E. Grossmann
196
Spatiotemporal pattern formation in an electrochemical membrane reactor during deep CO removal from reformate gas Richard Hanke-Rauschenbach, Sebastian Kirsch and Kai Sundmacher
201
Optimization of Design and Operation of Reverse Osmosis Based Desalination Process Using MINLP Approach Incorporating Fouling Effect Kamal M. Sassi, Iqbal. M. Mujtaba
206
Logic-Sequential Approach to the Synthesis of Complex Thermally Coupled Distillation Systems. José A. Caballero, Ignacio E. Grossmann
211
Computer Aided Design and Analysis of Continuous Pharmaceutical Manufacturing Processes Fani Boukouvala, Rohit Ramachandran, Aditya Vanarase, Fernando J. Muzzio, Marianthi G. Ierapetritou
216
Phenomena-based Process Synthesis and Design to achieve Process Intensification Philip Lutze, Rafiqul Gani, John M. Woodley
221
A Novel Process Design for the Hydroformylation of Higher Alkenes Michael Müller, Victor Alejandro Merchan, Harvey Arellano-Garcia, Reinhard Schomäcker, Günter Wozny
226
Flowsheet Optimization by Memetic Algorithms Maren Urselmann, Sebastian Engell
231
Biomass to chemicals: Design of an extractive reaction process for the production of 5-hydroxymethylfurfural Ana I. Torres, Prodromos Daoutidis, Michael Tsapatsis
236
A strategy to extend reactive distillation column performance under catalyst deactivation Rui M. Filipe, Henrique A. Matos, Augusto Q. Novais
241
Separation Circuits Analysis and Design, Using Sensitivity Analysis Freddy Lucay, Mario E. Mellado, Luis A. Cisternas, Edelmira D. Gálvez
246
Feasibility of reactive pressure swing batch distillation in a double column configuration Gabor Modla
251
Lipid Processing Technology: Building a Multilevel Modeling Network Carlos A. Diaz-Tovar, Azizul A. Mustaffa, Amol Hukkerikar, Alberto Quaglia, Gürkan Sin, Georgios Kontogeorgis, Bent Sarup, Rafiqul Gani
256
Enhancement of Productivity of Distillate Fractions by Crude Oil Hydrotreatment: Development of Kinetic Model for the Hydrotreating Process Aysar T. Jarullah, Iqbal M. Mujtaba, and Alastair S. Wood
261
Modeling and design of reacting systems with phase transfer catalysis Chiara Piccolo, George Hodges, Patrick M. Piccione, Rafiqul Gani
266
A systematic methodology for the design of continuous active pharmaceutical ingredient production processes Albert E. Cervera, Rafiqul Gani, Søren Kiil, Tommy Skovby, Krist V. Gernaey
271
Synthesis tool for separation processes in the pharmaceutical industry Ana I. C. Morão, Edwin Zondervan, Gerard Krooshof, Rob Geertman, André B. de Haan
276
New algorithm for the determination of product sequences in azeotropic batch distillation Laszlo Hegely, Peter Lang
281
Designing multi-product biopharmaceutical facilities using evolutionary algorithms Ana S. Simaria, Ying Gao, Richard Turner and Suzanne S. Farid
286
Ravendra Singh, Raquel Rozada-Sanchez, Tim Wrate, Frans Muller, Krist V. Gernaey, Rafiqul Gani, John M. Woodley
291
Modified Case Based Reasoning cycle for Expert Knowledge Acquisition during Process design. Eduardo Roldán, Stéphane Negny, Jean Marc Le Lann, Guillermo Cortés
296
Integrating process simulation and MINLP methods for the optimal design of absorption cooling systems Juan A. Reyes-Labarta, Robert Brunet, José A. Caballero, Dieter Boer, Laureano Jiménez
301
A method for the design and planning operations of heap leaching circuits Jorcy Y. Trujillo, Mario E. Mellado, Edelmira D. Gálvez, Luis A. Cisternas
306
A data mining approach for efficient systems optimization under uncertainty using stochastic search methods Garyfallos Giannakoudis, Athanasios I. Papadopoulos, Panos Seferlis, Spyros Voutetakis
311
Integrated Design of a Reactor and a Gas-Expanded Solvent Eirini Siougkrou, Amparo Galindo and Claire S. Adjiman
316
Computer Aided Flowsheet Design using Group Contribution Methods Susilpa Bommareddy, Mario R. Eden, Rafiqul Gani
321
A Business Process Model for Process Design that Incorporates Independent Protection Layer Considerations Tetsuo Fuchino, Yukiyasu Shimada, Teiji Kitajima, Kazuhiro Takeda, Rafael Batres, Yuji Naka
326
Conceptual design of glycerol etherification processes Elena Vlad, Costin Sorin Bildea, Elena Zaharia, Grigore Bozga
331
Dynamic Conceptual Design under Market Uncertainty and Price Volatility Davide Manca, Andrea Fini, Mirko Oliosi
336
Analysis of separation possibilities of multicomponent mixtures Laszlo Szabo, Sandor Nemeth, Ferenc Szeifert
341
A Computer tool for the development of poly(lactic acid) synthesis process from renewable feedstock for biomanufacturing Guillermo A. R. Martinez, Astrid J. R. Lasprilla, Betânia H. Lunelli, André L. Jardini, Rubens Maciel Filho
346
Robust optimisation methodology for the process synthesis of continuous technologies Mayank P. Patel, Nilay Shah, Robert Ashe
351
A Shortcut Design for Kaibel Columns Based on Minimum Energy Diagrams Maryam Ghadrdan, Ivar J. Halvorsen, Sigurd Skogestad
356
A superstructure optimization approach for optimal refinery water network systems synthesis with membrane-based regenerators Cheng Seong Khor, Nilay Shah
361
New generalised double-column system for batch heteroazeotropic distillation Ferenc Denes, Peter Lang, Xavier Joulia
366
Design of an Optimal Biorefinery Mehboob Nawaz, Edwin Zondervan, John Woodley and Rafiqul Gani
371
A Novel Design Concept for the Oxidative Coupling of Methane Using Hybrid Reactors Stanislav Jašo, Harvey Arellano-Garcia, Günter Wozny
377
Comparison of Extractive and Pressure-Swing Batch Distillation for Acetone-Methanol Separation Gabor Modla and Peter Lang
382
Constructive nonlinear dynamics for reactor network synthesis with guaranteed robust stability Xiao Zhao, Wolfgang Marquardt
387
Systems Analysis of Benign Hydrogen Peroxide Synthesis in Supercritical CO2 Deborah B. Bacik, Wei Yuan, Christopher B. Roberts, Mario R. Eden
392
Design of pervaporation modules based on computational process modelling Patrick Schiffmann, Jens-Uwe Repke
397
Surrogate-based VSA Process Optimization for Post-Combustion CO2 Capture M. M. Faruque Hasan, I. A. Karimi, S. Farooq, A. Rajendran, M. Amanullah
402
Design of flexible process flow sheets with a large number of uncertain parameters Mihael Kasaš, Zdravko Kravanja, Zorka Novak Pintariþ
407
A Design methodology for Internally Heat-Integrated Distillation Columns (IHIDiC) with side condensers and side reboilers (SCSR) Sankari Maddu, Ranjan K Malik
412
Development of a synthesis tool for Gas-To-Liquid complexes. Jan van Schijndel, Nort Thijssen, Govert Baak, Abhijeet Avhale, Jerome Ellepola, Johan Grievink
417
Pareto-Navigation in Chemical Engineering Norbert Asprion, Sergej Blagov, Oliver Ryll, Richard Welke, Anton Winterfeld, Agnes Dittel, Michael Bortz, Karl-Heinz Küfer, Jakob Burger, Andreas Scheithauer, Hans Hasse
422
Optimization and Control

Integration of ontology and knowledge-based optimization in process synthesis applications Franjo Cecelja, Antonis Kokossis, Du Du
427
Feasibility analysis of black-box processes using an adaptive sampling kriging based method Fani Boukouvala, Fernando J. Muzzio, Marianthi G. Ierapetritou
432
Multiobjective Optimization for Plastic Sheet Production M. Rivera-Toledo, G. Meneses-Castellanos, and A. Flores-Tlacuahuac
437
Systematic identification and robust control design for uncertain time delay processes Jakob K. Huusom, Niels K. Poulsen, Sten B. Jørgensen, John B. Jørgensen
442
Control and dynamic optimization of a BTX dividing-wall column Anton A. Kiss, Rohit R. Rewagad
447
Process Dynamic Optimization Using ROMeo Flavio Manenti, Guido Buzzi-Ferraris, Sauro Pierucci, Maurizio Rovaglio, Harpreet Gulati
452
Model based optimisation of a cyclic reactor for the production of hydrogen Filip Logist, Joost Lauwers, Benoît Trigaux, Jan F. Van Impe
457
Multi-objective optimisation approach to optimal experiment design in dynamic bioprocesses using ACADO toolkit Filip Logist, Dries Telen, Eva Van Derlinden, Jan F. Van Impe
462
A disturbance estimation approach for online model-based redesign of experiments in the presence of systematic errors F. Galvanin, M. Barolo, G. Pannocchia and F. Bezzo
467
A Semidefinite Programming Approach to Portfolio Optimization Raquel J. Fonseca, Wolfram Wiesemann, Berç Rustem
472
Increase the catalytic cracking process efficiency by implementing an optimal control structure. Case study Cristina Popa, Cristian Pătrăşcioiu
477
Experimental Evaluation of a Robust NMPC Strategy for an Unstable Nonlinear Process Udo Schubert, Andreas Lange, Harvey Arellano-Garcia, Günter Wozny
482
Economic Plantwide Control of C4 Isomerization Process Rahul Jagtap, Sonam Goenka, Nitin Kaistha
487
Application of Graphic Processing Unit in Model Predictive Control Arash Sadrieh, Parisa A. Bahri
492
Statistical Process Control of Multivariate Systems with Autocorrelation Tiago J. Rato, Marco S. Reis
497
Implementation of model predictive controller in a pharmaceutical development plant Stéphane Hattou, Marie-Véronique Le Lann, Karlheinz Preuss, Boris Roussel, Michel Cabassud
502
A Hybrid Branch-and-Cut Approach for the Capacitated Vehicle Routing Problem Chrysanthos E. Gounaris, Panagiotis P. Repoussis, Christos D. Tarantilis, and Christodoulos A. Floudas
507
Design of robust PID controller for processes with stochastic uncertainties Pham L. T. Duong, Moonyong Lee
512
MPC vs. PID. The advanced control solution for an industrial heat integrated fluid catalytic cracking plant Mihaela Iancu, Mircea V. Cristea, Paul S. Agachi
517
Plantwide Control of a Cumene Manufacture Process Vivek Gera, Nitin Kaistha, Mehdi Panahi, Sigurd Skogestad
522
A robust optimization based approach to the general solution of mp-MILP problems Martina Wittmann-Hohlbein, Efstratios N. Pistikopoulos
527
A deterministic optimization approach for the unit commitment problem Marian G. Marcovecchio, Augusto Q. Novais, Ignacio E. Grossmann
532
Tight Convex and Concave Relaxations via Taylor Models for Global Dynamic Optimization Ali M. Sahlodin and Benoît Chachuat
537
Simulation-based dynamic optimization of discretely controlled continuous processes Mariano De Paula, Ernesto Martínez
542
Evaluation of Steady State Multiplicity for the Anaerobic Degradation of Solid Organic Waste Mihaela Sbarciog, Andres Donoso-Bravo, Alain Vande Wouwer
547
Towards global optimization of combined distillation-crystallization processes for the separation of closely boiling mixtures Martin Ballerstein, Achim Kienle, Christian Kunde, Dennis Michaels, Robert Weismantel
552
Time Optimal Control of Particle Size Distribution in Emulsion Polymerization Ahmad Mansour, Ala Eldin Bouaswaig, Sebastian Engell
557
Multi-objective optimization of three-phase batch extractive distillation Alien Arias Barreto, Ivonne Rodriguez Donis, V. Gerbaud, X. Joulia
562
Integrating Graph-based Representation and Genetic Algorithm for Large-Scale Optimization: Refinery Crude Oil Scheduling Manojkumar Ramteke, Rajagopalan Srinivasan
567
Self-adaptive Differential Evolution with Taboo List for Constrained Optimization Problems and Its Application to Pooling Problems Haibo Zhang and G. P. Rangaiah
572
Disturbance Estimation via Moving Horizon Estimation for In-flight Model-based Wind Estimation Anna Voelker, Konstantinos Kouramas, Christos Panos, Efstratios N. Pistikopoulos
577
Deterministic global optimization of kinetic models of metabolic networks: outer approximation vs. spatial branch and bound Carlos Pozo, Gonzalo Guillén-Gosálbez, Albert Sorribas, Laureano Jiménez
582
Optimal Grade Transitions in an Industrial Slurry-Phase Catalytic Olefin Polymerization Loop-Reactor Series Vassileios Touloupides, Vassileios Kanellopoulos, Christos Chatzidoukas, and Costas Kiparissides
587
Nonlinear State Estimation with Delayed Measurements. Application to Polymer Processes Ruben Galdeano, Mariano Asteasuain, Mabel C. Sanchez
592
Optimal controlled variable selection using a nonlinear simulation-optimization framework Mahdi Sharifzadeh, Nina F. Thornhill
597
Branch-and-Sandwich: An Algorithm for Optimistic Bi-Level Programming Problems Polyxeni M. Kleniati, Claire S. Adjiman
602
Comparison of Gradient Estimation Methods for Real-time Optimization Bala Srinivasan, Grégory François and Dominique Bonvin
607
Multiobjective optimization of the pulp/water storage towers in design of paper production systems Aino Ropponen, Miika Rajala, Risto Ritala
612
Combined nonlinear model reduction and multiparametric nonlinear programming for nonlinear model predictive control Pedro Rivotti, Romain S.C. Lambert, Luis Dominguez, Efstratios N. Pistikopoulos
617
Multi-Model MPC for Nonlinear Systems: Case Study of a Complex pH Neutralization Process Weiting Tang, M. Nazmul Karim
622
Integrated Design and Control of Pressure Swing Adsorption Systems Harish Khajuria, Efstratios N. Pistikopoulos
628
A robust MILP-based approach to vehicle routing problems with uncertain demands A. Aguirre, M. Coccola, M. Zamarripa, C. Méndez and A. Espuña
633
An Improved Formulation for the Process Control Structure Selection based on Economics Problem Andreas Psaltis, Ioannis K. Kookos, Costas Kravaris
638
Software application for intelligent control of a bioprocess. Case study Cristina Tănase, Mihai Caramihai, Camelia Ungureanu, Gheorghe Sârbu, Ana Aurelia Chirvase, Ovidiu Muntean
643
Integration of a multilevel control system in an ontological information environment Edrisi Muñoz, Antonio Espuña, Luis Puigjaner
648
Control Structure Selection with Regard to Stationary and Dynamic Performance with Application to a Ternary Distillation Column Le Chi Pham, Sebastian Engell
653
The Coulomb Glass – Modeling and Computational Experience with a Large Scale 0-1 QP Problem Ray Pörn, Otto Nissfolk, Fredrik Jansson and Tapio Westerlund
658
Reliable optimal control of a fed-batch fermentation process using ant colony optimisation and bootstrap aggregated neural network models Jie Zhang, Yiting Feng, Mahmood Hilal Al-Mahrouqi
663
Integrated process and control design by the normal vector approach: Application to the Tennessee-Eastman process Diego A. Muñoz, Johannes Gerhard, Ralf Hannemann, Wolfgang Marquardt
668
Calibration of a polyethylene plant model for grade change optimisations Niklas Andersson, Per-Ola Larsson, Johan Åkesson, Staffan Haugwitz, Bernt Nilsson
673
Membrane process optimization for hydrogen peroxide ultrapurification Ricardo Abejón, Aurora Garea, Angel Irabien
678
Dynamic optimization of porous media combustor using a greybox neural model and NMPC technique Luis Henríquez-Vargas, Valeri Bubnovich and Francisco Cubillos
683
Monte Carlo Assessment of the Arrival Cost Evaluation Method in Moving Horizon Estimation for Chemical Processes F.D. Rincón Cuellar, W.H. Hirota, R. Giudici, G.A.C. Le Roux
688
Adaptive Advanced Control of a Copolymerization System Nádson M. N. Lima, Lamia Zuñiga Liñan, Flavio Manenti, Rubens Maciel Filho, Marcelo Embiruçu, Maria R. Wolf Maciel
693
Control of processes with multiple steady states using MPC and RBF neural networks Alex Alexandridis, Haralambos Sarimveis
698
Simulation Optimization of Cost, Safety and Displacements in a Construction Design Eleftherios-Stamatios Telis, George Besseris, Constantinos Stergiou
703
Methodologies for input-output data exchange between LabVIEW® and MATLAB®/Simulink® software for Real Time Control of a Pilot Scale Distillation Process Alexandre J. S. Chambel, Carla I.C. Pinheiro, José Borges, João M. Silva
708
Plant-wide optimisation and control of a multi-scale pharmaceutical process Mayank P. Patel, Nilay Shah, Robert Ashe
713
Optimization of Hybrid Reactive Distillation-Pervaporation System Vinay Amte
718
Dynamic Modeling and Optimization of Flash Separators for Highly-Viscous Polymerization Processes Prokopis Pladis, Vassileios Kanellopoulos, Apostolos Baltsas and Costas Kiparissides
723
Role of MPC in Building Climate Control Samuel Prívara, Zdeněk Váňa, Jiří Cigler, Frauke Oldewurtel and Josef Komárek
728
Efficient Computation of First- and Second-Order Sensitivities Using an Internal Forward Differentiation Scheme T. Barz, L. Zhu, G. Wozny, H. Arellano-Garcia
733
A novel approximation technique for online and multi-parametric model predictive control Romain S.C. Lambert, Pedro Rivotti, E.N. Pistikopoulos
738
Multi-Parametric Model Predictive Control of an Automated Integrated Fuel Cell Testing Unit Chrysovalantou Ziogou, Christos Panos, Konstantinos I. Kouramas, Simira Papadopoulou, Michael C. Georgiadis, Spyros Voutetakis, Efstratios N. Pistikopoulos
743
Use of commercial structured databases as innovative solution for FEED projects Fabio Ferrari, Lorenzo Selmi
748
Controlled Variables from Optimal Operation Data Johannes Jäschke, Sigurd Skogestad
753
Optimization of IMC-PID Tuning Parameters for Adaptive Control: Part 1 Chih-Wei Chua, B. Erik Ydstie, Nikolaos V. Sahinidis
758
System identification using wavelet analysis Zdeněk Váňa, Samuel Prívara, Jiří Cigler and Heinz A. Preisig
763
Robust Reallocation and Upgrade of Sensor Networks for Fault Diagnosis Suryanarayana Kolluri and Mani Bhushan
768
Explicit/Multi-Parametric Model Predictive Control of a Solid Oxide Fuel Cell Kostas Kouramas, Petar S. Varbanov, Michael C. Georgiadis, Jiří J. Klemeš, Efstratios N. Pistikopoulos
773
A Reformulation Scheme for Parameter Estimation of Hybrid Systems Ines Mynttinen and Pu Li
778
Dynamic optimization of bioreactors using probabilistic tendency models and Bayesian active learning Ernesto Martínez, Mariano Cristaldi, Ricardo Grau, Joao Lopes
783
Plantwide Control Design of a Postcombustion CO2 Capture Process Marc-Oliver Schach, Rüdiger Schneider, Henning Schramm, Jens-Uwe Repke
788
A theoretically rigorous approach to soft sensor development using Principal Components Analysis C.K. Naveen Karthik, Shankar Narasimhan
793
Approximate Multi-Parametric Programming based B&B Algorithm for MINLPs Taoufiq Gueddar and Vivek Dua
798
Experimental Comparison of Type-1 and Type-2 Fuzzy Logic Controllers for the Control of Level and Temperature in a Vessel B. Cosenza, M. Galluzzo
803
Simulation-based Dynamic Optimization under Uncertainty of an Industrial Biological Process Guillermo A. Durand, Aníbal M. Blanco, Fernando D. Mele, J. Alberto Bandoni
808
Parallel Solution of Large-Scale Dynamic Optimization Problems Carl D. Laird, Angelica V. Wong, Johan Åkesson
813
Optimization of simulated moving bed chromatography with fractionation and feedback incorporating an enrichment step Suzhou Li, Yoshiaki Kawajiri, Jörg Raisch, Andreas Seidel-Morgenstern
818
Tuning a Distillation Column Simulator Kurt E. Häggblom and Ramkrishna K. Ghosh
823
A Comparative Study of MPC-Based Control Configurations of an Industrial Bioreactor to Produce Ethanol Aarón Romo-Hernández, Salvador Hernández, Arturo Sánchez, Héctor Hernández-Escoto
828
Control of an azeotropic distillation process to acetonitrile production Andrea Ruiz Ruiz, Nelson Borda Beltrán, Alexander Leguizamón R., Javier R. Guevara L., Ivan D. Gil C.
833
Optimal Temperature Tracking of a Solid State Fermentation Reactor C. González-Figueredo, O.R. Ayala, S. Aguilar, O. Aroche, A. Loukianov, A. Sánchez
839
Receding Nonlinear Kalman (RNK) Filter for Nonlinear Constrained State Estimation Raghunathan Rengaswamy, Shankar Narasimhan, Vidyashankar Kuppuraj
844
Free Radicals Copolymerization Optimization, System: Acrylonitrile-Vinyl Acetate in CSTR S.V. Vallecillo-Gómez, J.C. Tapia-Picazo, A. Bonilla-Petriciolet, G.G. DeAlba-Pérez-de-Gracia
849
Convex optimization for shape manipulation of multidimensional crystal particles Naim Bajcinca, Ricardo Perl, Kai Sundmacher
855
A Worst-Case Observer for Impurities in Enantioseparation by Preferential Crystallization Steffen Hofmann, Matthias Eicke, Martin Peter Elsner, Andreas Seidel-Morgenstern, Jörg Raisch
860
Production Operations

A Simulated Annealing Approach for the Bi-Objective Design and Scheduling of Multipurpose Batch Plants Nelson Chibeles-Martins, Tânia Pinto-Varela, Ana Paula Barbosa-Póvoa, A. Q. Novais
865
Robust Logistics Network Modeling and Design against Uncertainties Yoshiaki Shimizu, Hideaki Fushimi, Takeshi Wada
870
Operating Procedure Synthesis Subject to Restricted State Transition Using Differential Evolution Yoshiaki Shimizu
875
MILP Formulation for Resource-Constrained Project Scheduling Problems Thomas S. Kyriakidis, Georgios M. Kopanos, Michael C. Georgiadis
880
Self-learning of fault diagnosis identification José Luis de la Mata, Manuel Rodríguez
885
Complex Network Optimization in FMCG Ali Mehdizadeh, Nilay Shah, Peter M.M. Bongers, Cristhian Almeida-Rivera
890
Freshwater Production by MSF Desalination Process: Coping with Variable Demand by Flexible Design and Operation Ebrahim A. Hawaidi and Iqbal M. Mujtaba
895
Optimal run length in Factory operations to reduce overall costs Peter Bongers, Cristhian Almeida-Rivera
900
Batch sizing in multi-stage, multi-product batch production systems Norbert Trautmann, Philipp Baumann, Nadine Saner, Tobias Schäfer
905
Decision Support System for Multiproduct Pipeline and Inventory Management Systems Susana Relvas, Ana Paula F.D. Barbosa-Póvoa, Henrique A. Matos, Pedro Pinto
910
Ice Cream Scheduling: Modeling the Intermediate Storage Martijn A.H. van Elzakker, Edwin Zondervan, Cristhian Almeida-Rivera, Ignacio E. Grossmann, Peter M.M. Bongers
915
Production Optimization and Scheduling across a Steel Plant Iiro Harjunkoski, Sleman Saliba, Matteo Biondi
920
Simultaneous Optimization of Planning and Scheduling in an Oil Refinery Edwin Zondervan, Tijn P.J. van Boekel, Jan C. Fransoo, André B. de Haan
925
Efficient Scheduling of Batch Plants Using Reachability Tree Search for Timed Automata with Lower Bound Computations Subanatarajan Subbiah, Christian Schoppmeyer, Sebastian Engell
930
Robust Market Launch Planning for a Multi-Echelon Pharmaceutical Supply Chain Klaus Reinholdt Nyhuus Hansen, Martin Grunow, Rafiqul Gani
935
A new Coordination Heuristic for Plant-wide Planning and Scheduling Chaojun Xu, Christian Staud, Guido Sand, Sebastian Engell
940
Optimization of Closed-Loop Supply Chains under Uncertain Quality of Returns M. Isabel Gomes, Luis J. Zeballos, Ana P. Barbosa-Povoa, Augusto Q. Novais
945
Integrated Refinery Planning under Product Demand Uncertainty Edith Ejikeme-Ugwu, Songsong Liu and Meihong Wang
950
Modelling and dynamic optimisation for optimal operation of industrial tubular reactor for propane cracking Mehdi Berreni and Meihong Wang
955
An Efficient Mathematical Framework for Detailed Production Scheduling in Food Industries: The Ice-cream Production Line Georgios M. Kopanos, Luis Puigjaner, Michael C. Georgiadis, Peter M. M. Bongers
960
Corporate Production Planning for Industrial Gas Supply Chains under Low-Demand Conditions Matteo D’Isanto, Flavio Manenti, Nadson M. N. Lima, Lamia Zuniga Linan
965
Standards for Continual Scheduling of Batch Operations Charles Siletti, Demetri Petrides, Dimitri Vardalis
970
New Scheduling Approach for Shared Resources and Mixed Storage Policies Pedro M. Castro, Luis J. Zeballos, Carlos A. Méndez
975
Optimal Scheduling of Multi-Level Tree-Structure Pipeline Networks Diego C. Cafaro, Jaime Cerdá
980
New Tools for the Detailed Scheduling of Refined Products Pipelines Vanina G. Cafaro, Diego C. Cafaro, Carlos A. Méndez, Jaime Cerdá
985
A rigorous mathematical formulation to Automated Wet-Etch Station scheduling with multiple material-handling robots in Semiconductor Manufacturing Systems Adrián M. Aguirre, Carlos A. Méndez, Pedro M. Castro
990
A MILP Planning Model for a Real-world Multiproduct Pipeline Network Suelen N. Boschetto, Leandro Magatão, Flávio Neves-Jr, Ana P.F.D. Barbosa-Póvoa
995
Improving supply chain management in a competitive environment M. Zamarripa, A. M. Aguirre, C. A. Méndez and A. Espuña
1000
Optimal Scheduling of Biodiesel Plants through Property-based Integration with Oil Refineries Vasiliki Kazantzi, Stella Bezergianni, René Elms, Fadwa Eljack, and Mahmoud M. El-Halwagi
1005
Integration of financial statement analysis in the optimal design and operation of supply chain networks Pantelis Longinidis, Michael C. Georgiadis, Panagiotis Tsiakis
1010
Integrated production planning and scheduling optimization of multi-site, multi-product process industry Nikisha K. Shah, Marianthi G. Ierapetritou
1015
Simulation-based reactive scheduling in tomato processing plant with raw material uncertainty Alexandros Koulouris, Ioanna Kotelida
1020
Scenario-Based Strategic Supply Chain Design and Analysis for the Forest Biorefinery Behrang Mansoornejad, Efstratios N. Pistikopoulos, Paul Stuart
1025
The Role of Supply Chain Analysis in Market-Driven Product Portfolio Selection for the Forest Biorefinery Virginie Chambost, Behrang Mansoornejad and Paul Stuart
1030
Real-time Process Management in Particulate and Pharmaceutical Systems Arun Giridhar, Intan Hamdan, Girish Joglekar, Venkat Venkatasubramanian, Gintaras V. Reklaitis
1035
Modeling Next Generation Feedstock Development for Chemical Process Industry Selen Cremaschi
1040
Prediction of the Permeability and Filtration Performance of Packed Beds Mishal Islam, Xiaodong Jia, Michael Fairweather, Richard Williams
1045
Study of Closed Operation Modes of Batch Distillation Columns Laszlo Hegely, Peter Lang
1050
Dynamic failure assessment of incidents reported in the Greek Petrochemical Industry Eftychia C. Marcoulaki, Myrto Konstandinidou, Ioannis A. Papazoglou
1055
A continuous-time MILP to compute schedules with minimum changeover times for a make-and-pack production Philipp Baumann, Norbert Trautmann
1060
An Evaluation Method for Plant Alarm System Based on a Two-Layer Cause-Effect Model Naoki Kimura, Kazuhiro Takeda, Masaru Noda, Takashi Hamaguchi
1065
Generating cause-implication graphs for process systems via blended hazard identification methods Erzsébet Németh, Benjamin J. Seligmann, Kim Hockings, Jim Oakley, Con O'Brien, Katalin M. Hangos, Ian T. Cameron
1070
Integrated Supply Chain Planning for Multinational Pharmaceutical Enterprises Naresh Susarla, I. A. Karimi
1075
Data Mining and Decision Making Tool Development for an Industrial Dual Sequential Batch Reactor Soledad Gutiérrez, Adrián Ferrari, Alejandra Benítez
1080
A Novel CP Approach for Scheduling an Automated Wet-Etch Station Juan M. Novas, Gabriela P. Henning
1085
Agent-based coordination framework for disruption management in a chemical supply chain Behzad Behdani, Zofia Lukszo, Arief Adhitya, Rajagopalan Srinivasan
1090
Recipe-driven dynamic hybrid simulation of batch processes: a combined optimization/simulation approach Gilles Hétreux, Anthony Ramaroson, Jean-Marc Le Lann
1095
Recipe-based Batch Process Engineering Tool for Development Workflow Jae Hyun Cho, Junghwan Kim, Il Moon
1100
Superstructure Approach to Batch Process Scheduling by S-graph Representation B. Bertok, R. Adonyi, F. Friedler, L.T. Fan
1105
Training & Education The TriLab and iLough-Lab portal - Systematic evaluation of the use of remote and virtual laboratories in engineering education Mahmoud Abdulwahed, Zoltan K Nagy
1110
Long Distance Operator Training Yiannis Bessiris, Dionyssia Kyriakopoulou, Fadi Ghajar, Curtis Steuckrath
1115
Modularization within the framework of the course Computer-Aided Plant Design Łukasz Hady, Günter Wozny
1120
Academic performance and success rate: A challenge problem for the PSE community Moisès Graells and Antonio Espuña
1125
Is it possible to improve creativity? If yes, how do we do it? Seungnam Kim, Woorim Moon, Woosik Kim, Seonjoo Park and Il Moon
1130
Use of Advanced Educational Technologies in a Process Simulation Course Mordechai Shacham
1135
MOSAIC, an environment for web-based modeling in the documentation level Stefan Kuntsche, Harvey Arellano-Garcia, Günter Wozny
1140
Addressing interdisciplinary process engineering design, construction and operations through 4D virtual environments Ian Cameron, Caroline Crosthwaite, David Shallcross, Roger Hadgraft, Jo Dalvean, Nicoleta Maynard, Moses Tade, John Kavanagh, Grant Lukey
1145
Integrating Alternate Reality Games and Social Media in Engineering Education Sonia Zheleva, Toshko Zhelev
1150
Environmental Systems Engineering Supply Chain Design and Planning with Environmental Impacts: An RTN approach Tânia Pinto-Varela, Ana Paula F. D. Barbosa-Póvoa and Augusto Q. Novais
1155
Modelling the Natural Gas Pipeline Internal Corrosion Rate Resulting from Hydrate Formation E.O. Obanijesu, M.K. Akindeju, P. Vishnu, and M.O. Tade
1160
Multilevel strategies for the retrofit of a large industrial water system Hella Tokos, Zorka Novak Pintarič, Yongrong Yang, Zdravko Kravanja
1165
Synthesis of water integration networks in eco-industrial parks Eusiel Rubio-Castro, José María Ponce-Ortega, Mahmoud M. El-Halwagi, Medardo Serna-González, and Arturo Jiménez-Gutiérrez
1170
Eco Industrial Parks for Water and Heat Management Marianne Boix, Ludovic Montastruc, Luc Pibouleau, Catherine Azzaro-Pantel, Serge Domenech
1175
Effect of Demister Separation Efficiency on the Freshwater Purity in MSF Desalination Process Ebrahim A. Hawaidi and Iqbal M. Mujtaba
1180
Evaluation of CO2 absorption-desorption cycle by dynamic modeling and simulation Ana-Maria Cormos, Jozsef Gaspar, Paul-Serban Agachi
1185
CO2 Sustainable Recovery Network Cluster for Carbon Capture and Sequestration J. Duque, A.P.F.D. Barbosa-Póvoa, A.Q. Novais
1190
Minimization of the life cycle impact of chemical supply chain networks under demand uncertainty Rubén Ruiz-Femenia, José A. Caballero and Laureano Jiménez
1195
Design of an electric and electronic equipment recovery network in Portugal – Costs vs. Sustainability Pedro Furtado, Maria Isabel Gomes, Ana Paula Barbosa-Povoa
1200
Multiscale whole-systems design and analysis of CO2 capture and transport networks Niall Mac Dowell, Ahmed Alhajaj, Murthy Konda and Nilay Shah
1205
On the model based optimization of secreting mammalian cell cultures via minimal glucose provision Alexandros Kiparissides, Efstratios N. Pistikopoulos, Athanasios Mantalaris
1210
A systematic methodology for the synthesis of unit process chains using Life Cycle Assessment and Industrial Ecology Principles Léda Gerber, Jérôme Mayer, François Maréchal
1215
Integrating Economic, Environmental and Social Indicators for Sustainable Supply Chains Peng Cheng Wang, Iskandar Halim, Arief Adhitya, Rajagopalan Srinivasan
1220
Evaluating the reactivity of limestone utilized in Flue Gas Desulfurization. An application of the Danckwerts theory for particles reacting in acidic environments and agitated vessels with Archimedes number less than 40 Cataldo De Blasio, Claudio Carletti, Lauri Järvinen, Tapio Westerlund
1225
Sustainability in Chemical Processes: Application of different environmental methodologies to evaluate process alternatives Acácio Nobre Mendes, Ana Carvalho, Henrique A. Matos
1230
Design and Simulation of Eco-Efficient Biodiesel Manufacture Sandra Couto, Teresa M. Mata, António A. Martins, Bruna Moura, Joana Magalhães, Nidia S. Caetano
1235
New Environmentally-Conscious Design Approach and Evaluation Tool for Chemical Processes Carmen M. Torres, Mamdouh Gadalla, Josep M. Mateo, Laureano Jiménez
1241
Optimal Reactor Design for the Hydroformylation of Long Chain Alkenes in Biphasic Liquid Systems Andreas Peschel, Benjamin Hentschel, Hannsjörg Freund, Kai Sundmacher
1246
Optimal design of real world industrial wastewater treatment networks B. Galán, I.E. Grossmann
1251
A Mixed-Integer Programming Model for Pollution Trading Vicente Rico-Ramirez, Francisco Lopez-Villarreal, Salvador Hernandez-Castro and Urmila M. Diwekar
1256
Modelling and process integration of carbon dioxide capture using membrane contactors J. Albo, J. Cristóbal and A. Irabien
1261
Increasing the Understanding of the BP Texas City Refinery Accident Davide Manca, Sara Brambilla, Alessandro Villa
1266
Integrating process simulation, multi-objective optimization and LCA for the development of sustainable processes: application to biotechnological plants Robert Brunet, Kartik S. Kumar, Gonzalo Guillén-Gosálbez, Laureano Jiménez
1271
Multi-objective optimization of integrated bioethanol-sugar supply chains considering different LCA metrics simultaneously Andrei Kostin, Fernando D. Mele, Gonzalo Guillén-Gosálbez
1276
Determination of biorestoration strategies in eutrophic water bodies through the formulation of an optimal control problem based on a 3D ecological model Vanina Estrada, Sabrina Belén Rodriguez Reartes, M. Soledad Diaz
1281
Integration of Carbon Footprint Minimization into the Process Design of SWRO Desalination Pre-treatment Matan Beery, Günter Wozny, Jens-Uwe Repke
1286
Optimization of a Sequencing Batch Reactor process for waste water treatment using a two step nitrification model M. N. Cruz Bournazou, K. Hooshiar, H. Arellano-Garcia, G. Lyberatos, C. Kravaris, G. Wozny
1291
Optimization of solar assisted reverse osmosis plants considering economic and environmental concerns Raquel Salcedo-Díaz, Gonzalo Guillén-Gosálbez, Laureano Jiménez, Ekaterina Antipova
1296
Bioprocess Systems Engineering Dynamic modelling of the margarine production process Peter Bongers, Cristhian Almeida-Rivera
1301
Microbial Strain Design for Biochemical Production Using Mixed-integer Programming Techniques Joonhoon Kim, Jennifer L. Reed, and Christos T. Maravelias
1306
A Comprehensive Multi-Scale Modeling of Heterogeneities in Mammalian Cell Culture Processes Srinivas Karra, Brian Sager and M. Nazmul Karim
1311
Population balance modelling of homogeneous and heterogeneous cellulose hydrolysis Philip Engel, Benjamin Bonhage, Douglas Pernik, Roberto Rinaldi, Patrick Schmidt, Helene Wulfhorst, Antje C. Spiess
1316
Predicting microbial growth kinetics with the use of genetic circuit models Michalis Koutinas, Alexandros Kiparissides, Victor de Lorenzo, Vitor A.P. Martins dos Santos, Efstratios N. Pistikopoulos, Athanasios Mantalaris
1321
A combined growth kinetics, metabolism and gene expression model for 3D ESC bioprocesses David Yeo, Alexandros Kiparissides, Efstratios Pistikopoulos, and Athanasios Mantalaris
1326
Toward Online Control of Glycosylation in MAbs Melissa M. St. Amand, Anne S. Robinson, Babatunde A. Ogunnaike
1331
Population balance modelling of influenza virus replication during vaccine production – Influence of apoptosis Thomas Müller, Robert Dürr, Britta Isken, Josef Schulze-Horsel, Udo Reichl, Achim Kienle
1336
Assessment of Jatropha Curcas bioprocess for fuel production using LCA and CAPE Sayed Gillani, Caroline Sablayrolles, Jean-Pierre Belaud, Mireille Montrejaud-Vignoles, Jean Marc Le Lann
1341
Methodological Approach for Modeling of Multi-enzyme in-pot Processes Paloma A. Santacoloma, Alicia Roman-Martinez, Gürkan Sin, Krist V. Gernaey, and John M. Woodley
1346
Systematic Data and Knowledge Utilization to Speed up Bioprocess Design Jun Zhang, Anthony Hunter, Yuhong Zhou
1351
Integration of stochastic simulation with advanced multivariate and visualisation analyses for rapid prediction of facility fit issues in biopharmaceutical processes Adam Stonier, Dave Pain, Ashley Westlake, Nicholas Hutchinson, Nina F Thornhill, Suzanne S. Farid
1356
Standards for Continual Scheduling of Batch Operations Charles Siletti, Demetri Petrides, Dimitri Vardalis
1361
Optimizing cyanobacteria metabolic network for ethanol production Cecilia Paulo, Jimena Di Maggio, Vanina Estrada, M. Soledad Diaz
1366
Dynamic process monitoring and fault detection in a batch fermentation process: comparative performance assessment between MPCA and BDPCA Isaac Monroy, Kris Villez, Moisès Graells, Venkat Venkatasubramanian
1371
Techno-Economic Assessment and Risk Analysis of Biorefinery Processes Eemeli Hytönen, Paul Stuart
1376
BIOCORE – A systems integration paradigm in the real-life development of a lignocellulosic biorefinery Aikaterini D. Mountraki, Athanassios Nikolakopoulos, Bouchra Benjelloun Mlayah, Antonis C. Kokossis
1381
Prediction of activation of metabolic pathways via dynamic optimization Gundian M. De Hijas-Liste, Eva Balsa-Canto, Julio R. Banga
1386
Nonlinear identification of Spirulina maxima growth and characteristics Márcia P. Vega, José W. Silva and Maria A.C.L. Oliveira
1391
Real-time optimization for lactic acid production from sucrose fermentation by Lactobacillus plantarum Betânia H. Lunelli, Delba N. C. Melo, Edvaldo R. de Morais, Igor R. S. Victorino, Eduardo C. Vasco de Toledo, Maria Regina Wolf Maciel, Rubens Maciel Filho
1396
Model-based Dynamic Optimisation of Microbial Processes for the High-Yield Production of Biopolymers with Tailor-made Molecular Properties Giannis Penloglou, Christos Chatzidoukas, Avraam Roussos, Costas Kiparissides
1401
Systematic Procedure for Integrated Process Operation: Reverse Electro-Enhanced Dialysis (REED) during Lactic Acid Fermentation Oscar Andrés Prado-Rubio, Sten Bay Jørgensen and Gunnar Jonsson
1406
Bioprocessing of exopolysaccharides (EPS): CFD optimization of bioreactor conditions Serafim Vlaev, Konstantza Tonova, Kostantsa Pavlova, Mohammed Elqotbi
1411
Simultaneous design and scheduling of a plant for producing ethanol and derivatives Yanina Fumero, Gabriela Corsano, Jorge M. Montagna
1416
Glycerol metabolic conversion to succinic acid using Actinobacillus succinogenes: a metabolic network-based analysis Michael Binns, Anestis Vlysidis, Colin Webb, Constantinos Theodoropoulos, Pedro de Atauri, Marta Cascante
1421
Design and Operation of a Continuous Reactor for Acid Pretreatment of Lignocellulosic Biomass Mauricio Sales-Cruz, Edgar Ramírez-Jiménez, Teresa López-Arenas
1426
Viscosity Prediction of Compounds Derived from Castor Oil: Parameter Optimization Teresa López-Arenas, Gloria Aca-Aca, Oscar Sánchez-Daza, Mauricio Sales-Cruz
1431
Global sensitivity analysis in bioreactor networks Maria Paz Ochoa, Patricia M. Hoch
1436
Graph Theory Augmented Recursive MILP Approach for Identifying Multiple Minimal Reaction Sets in Metabolic Networks Sudhakar Jonnalagadda and Rajagopalan Srinivasan
1441
Model-driven design based on sensitivity analysis for a synthetic biology application Nikolaos Anesiadis, William R. Cluett, Radhakrishnan Mahadevan
1446
Simulations of hydrodynamic stress in stirred-tank bioreactors using CFD technology Y. Verkholaz, P. Lavrov, E. Guseva, N. Menshutina, J. Boudrant
1451
A framework for model-based optimization of bioprocesses under uncertainty: Identifying critical parameters and operating variables. Ricardo Morales-Rodriguez, Anne S. Meyer, Krist V. Gernaey, Gürkan Sin
1455
Robust optimal control of a biochemical reactor with multiple objectives Filip Logist, Boris Houska, Moritz Diehl, Jan F. Van Impe
1460
System inversion of multidimensional population balance systems Henrique Menarin and Naim Bajcinca
1465
Implementation and initial evaluation of a decision support platform for selecting production routes of biomass-derived chemicals Marinella Tsakalova, Ta-Chen Lin, Aidong Yang, Antonis C. Kokossis
1470
Biomedical Systems Engineering Multi-Scale Modeling of PLGA Microparticle Drug Delivery Systems Ashlee N. Ford, Daniel W. Pack, Richard D. Braatz
1475
Computational Molecular Design of Drug Delivery Vehicles for Anti-HIV Microbicides Taylor Wilson, Amber Markey, Kyle V. Camarda, Sarah Kieweg
1480
Towards in silico models of decomplexification in human endotoxemia Jeremy D. Scheff, Pantelis Mavroudis, Steve E. Calvano, Stephen F. Lowry, Ioannis P. Androulakis
1485
Physiologically Based Pharmacokinetic Modeling and Predictive Control: An integrated approach for optimal drug administration Pantelis Sopasakis, Panagiotis Patrinos, Stefania Giannikou, Haralambos Sarimveis
1490
A Novel Physiologically Based Compartmental Model for Volatile Anaesthesia Alexandra Krieger, Nicki Panoskaltsis, Athanasios Mantalaris, Michael C. Georgiadis, Efstratios N. Pistikopoulos
1495
Modelling of the Insulin Delivery System for patients with Type 1 Diabetes Mellitus Stamatina Zavitsanou, Nicki Panoskaltsis, Athanasios Mantalaris, Michael C. Georgiadis, Efstratios N. Pistikopoulos
1500
Towards a high-fidelity model for model based optimisation of drug delivery systems in acute myeloid leukemia Eleni Pefani, Nicki Panoskaltsis, Athanasios Mantalaris, Michael C. Georgiadis, Efstratios N. Pistikopoulos
1505
From Chemical Process Diagnosis to Cancer Prognosis: An Integrated Approach for Diagnosis and Sensor/Marker Selection Lyamine Hedjazi, Marie-Véronique Le Lann, Tatiana Kempowsky-Hamon, Joseph Aguilar-Martin, Florence Dalenc, Gilles Favre, Laurène Despenes, Sébastien Elgue
1510
Computational Investigation of Vascular Surgical Interventions on Popliteal Artery Aneurysms D. Papadimitriou, A.H. Alexopoulos, T. Gerasimidis and C. Kiparissides
1515
A Minimal Exercise Extension for Models of the Glucoregulatory System Alain Bock, Grégory François, Thierry Prud'homme, Denis Gillet
1520
Three Dimensional Simulation and Experimental Investigation of Intrathecal Drug Delivery in the Spinal Canal and the Brain Ying Hsu, Timothy J. Harris Jr, H.D.M. Hettiarachchi, Richard Penn, Andreas A. Linninger
1525
A Computational Model of Cerebral Vasculature, Brain Tissue, and Cerebrospinal Fluid Nicholas M. Vaičaitis, Brian J. Sweetman, Andreas A. Linninger
1530
Systems engineers’ role in biomedical research Andreas A. Linninger
1535
Physiologically-Based Pharmacokinetic Modeling: Parameter Estimation for Cyclosporin A Eric Lueshen, Cierra Hall, Andrej Mošať and Andreas Linninger
1543
Disease Classification through Integer Optimisation Chrysanthi Ainali, Frank Nestle, Lazaros G. Papageorgiou, Sophia Tsoka
1548
Optimal design of chitosan-based scaffolds for controlled drug release using dynamic optimization Belmiro P.M. Duarte, Nuno M.C. Oliveira, Maria J.C. Moura
1553
Insulin Administration for People with Type 1 diabetes Dimitri Boiroux, Daniel Aaron Finan, Niels Kjølstad Poulsen, Henrik Madsen and John Bagterp Jørgensen
1558
A Variational Bayesian Approach for Dosage Regimen Individualization J. M. Laínez, L. Mockus, G. Blau, S. Orçun, and G.V. Reklaitis
1563
Development of a fuzzy expert system for the control of glycemia in type 1 diabetic patients Leonardo Nobile, Bartolomeo Cosenza, Marco Amato, Valentina Guarnotta, Carla Giordano, Aldo Galluzzo, Mosè Galluzzo
1568
Materials & Molecular Systems Engineering Controlling Particle Size in a Novel Spinning Disc Continuous Stir Tank and Settler Reactor for the Continuous Synthesis of Titania M.K. Akindeju and P.H. Ong
1573
Simultaneous Design of Ionic Liquids and Azeotropic Separation Processes Brock C. Roughton, John White, Kyle V. Camarda, and Rafiqul Gani
1578
GPU-Based Parallel Calculation Method for Molecular Weight Distribution of Batch Free Radical Polymerization Zhiqiang Chen, Xi Chen, Zhen Yao, Zhijiang Shao
1583
Chemicals-Based Formulation Design: Virtual Experimentations Elisa Conte, Rafiqul Gani
1588
Simultaneous prediction of phase behaviour and second derivative properties with a group contribution approach (SAFT-γ Mie) Vasileios Papaioannou, Thomas Lafitte, Claire S. Adjiman, Amparo Galindo and George Jackson
1593
A Lattice Boltzmann Method for Non Ideal Gases Based on the Gradient Theory of Interfaces E.S. Kikkinides, M.E. Kainourgiakis, A.G. Yiotis and A.K. Stubos
1598
Towards robust fabrication of non-periodic nanoscale systems via directed self assembly Richard Lakerveld, George Stephanopoulos, Paul I. Barton
1603
Model-driven conception of a Computer Aided Mixture Design tool Juliette Heintz, Vincent Gerbaud, Jean-Pierre Belaud
1608
Iterative learning control of a reactive polymer composite moulding process using batch-wise updated linearised models Jie Zhang, Nikos G. Pantelelis
1613
CFD Modelling of the Demister in the Multi Stage Flash Desalination plant Hala Al-Fulaij, Andrea Cipollina, Giorgio Micale, David Bogle, Hisham Ettouney
1618
Predicting a Variety of Constant Pure Compound Properties by the Targeted QSPR Method Mordechai Shacham, Neima Brauner
1623
PSE in Pharmaceutical Process Development Krist V. Gernaey, Albert E. Cervera and John M. Woodley
1628
Molecular Design of Biofuel Additives for Optimization of Fuel Characteristics Subin Hada, Charles C. Solvason, Mario R. Eden
1633
Online estimation of crystal size distribution (CSD) within industrial gibbsite precipitation plants Jan K. Hurst, Parisa A. Bahri, Ali Nooraii
1638
Energy Systems Engineering Simulation of Water Gas Shift Membrane Reactors by a Two-dimensional Model M. De Falco, V. Piemonte, A. Basile
1643
Potential Impacts and Modelling of the Heat Loss due to Copper Chelation in Natural Gas Processing and Transport D.J. Hunt, M.K. Akindeju, E.O. Obanijesu, V.K. Pareek and M.O. Tade
1648
Optimal biorefinery planning considering simultaneously economic and environmental objectives José Ezequiel Santibáñez-Aguilar, J. Betzabe González-Campos, José María Ponce-Ortega, Medardo Serna-González
1653
Reduce Costs and Energy Consumption of Deethanizing and Depropanizing Fractionation Steps in NGL Recovery Process Nguyen Van Duc Long and Moonyong Lee
1658
Ethanol from corn: screening options and power supply improvement to ethanol plant in Italy Marco Soldà, Franjo Cecelja, Aidong Yang, Piyalap Manakit
1663
A Mixed-Integer Programming Approach to Infrastructure Planning for Chemical Centres: A Case Study in the UK Pei Liu, Alan Whitaker, Efstratios N. Pistikopoulos, Zheng Li, Yong Chen
1668
A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes Helen Becker, Giulia Spinato, François Maréchal
1673
Techno-economical and environmental evaluations of IGCC power generation process with carbon capture and storage (CCS) Calin-Cristian Cormos, Ana-Maria Cormos, Paul Serban Agachi
1678
Reynolds Number Effects on Particle Dispersion and Deposition in Turbulent Square Duct Flows J.F.W. Adams, J. Yao and M. Fairweather
1683
Multiscale Modeling of Biorefineries Seyed Ali Hosseini, Nilay Shah
1688
Towards Second Generation Bioethanol: Supply Chain Design and Capacity Planning Andrea Zamboni, Sara Giarola, Fabrizio Bezzo
1693
Optimization of lignocellulosic based diesel Mariano Martín, Ignacio E. Grossmann
1698
Using Low-Grade Heat for Solvent Extraction based Efficient Water Desalination Kary Thanapalan and Vivek Dua
1703
Impact of hydrogen injection in natural gas infrastructures Guillermo Hernández-Rodríguez, Luc Pibouleau, Catherine Azzaro-Pantel, Serge Domenech
1708
Optimal Design and Operation of Distributed Energy Systems E. D. Mehleri, H. Sarimveis, N. C. Markatos, L. G. Papageorgiou
1713
Process Synthesis with Heat and Power Integration of Thermochemical Coal, Biomass, and Natural Gas Hybrid Energy Processes Richard C. Baliban, Josephine A. Elia, Christodoulos A. Floudas
1718
A Novel Catalytic Strategy for the Production of Liquid Fuels from Ligno-cellulosic Biomass Carlos A. Henao, Drew J. Braden, Christos T. Maravelias, James A. Dumesic
1723
Optimizing the Lignocellulosic Biomass-to-Ethanol Supply Chain: A Case Study for the Midwestern United States W. Alex Marvin, Lanny D. Schmidt, Saif Benjaafar, Douglas G. Tiffany, Prodromos Daoutidis
1728
Modeling and Simulation of the Production of Lead and Elementary Sulphur from Lead Sulphide Concentrates Giulia Bozzano, Mario Dente, Sauro Pierucci, Massimo Maccagni
1733
Strategic Planning of Petroleum Supply Chains Leão José Fernandes, Susana Relvas, Ana Paula Barbosa-Póvoa
1738
Network generation and analysis of complex biomass conversion systems Srinivas Rangarajan, Ted Kaminski, Eric Van Wyk, Aditya Bhan, Prodromos Daoutidis
1743
Fractional-order transfer functions applied to the modeling of hydrogen PEM fuel cells Vitor V. Lopes, Carmen M. Rangel, Augusto Q. Novais
1748
An Integrated Approach to Optimal Pipeline Routing, Design, Operation and Maintenance Eftychia C. Marcoulaki, Ioannis A. Papazoglou, Nathalie Pixopoulou
1753
General Methodology for Exergy Balance in a Process Simulator Ali Ghannadzadeh, Raphaële Thery-Hetreux, Olivier Baudouin, Philippe Baudet, Pascal Floquet, Xavier Joulia
1758
Process Modelling of Entrained Flow Gasification Ruwaida A. Rasid, Peter J. Heggs, Kevin J. Hughes and Mohamed Pourkashanian
1763
Modeling post-combustion CO2 capture with amine solvents Grégoire Léonard, Georges Heyen
1768
Modelling biomass and biofuels supply chains Christiana Papapostolou, Emilia Kondili, John K. Kaldellis
1773
Design and performance optimization of hybrid energy systems E. Kondili, J. K. Kaldellis
1778
Recurrent neural network prediction of steam production in a Kraft recovery boiler Matthieu Sainlez, Georges Heyen
1784
Improved Wind Power Forecasting with ARIMA Models Bri-Mathias Hodge, Austin Zeiler, Duncan Brooks, Gary Blau, Joseph Pekny, Gintaras Reklaitis
1789
Power reduction in air separation units for oxy-combustion processes based on exergy analysis Chao Fu, Truls Gundersen
1794
An MILP Model for the Strategic Design of the UK Bioethanol Supply Chain Ozlem Akgul, Nilay Shah, Lazaros G. Papageorgiou
1799
Long-Term Planning of Wind Farm Siting in the Electricity Grid Jingjie Xiao, Bri-Mathias S. Hodge, Andrew L. Liu, Joseph F. Pekny, Gintaras V. Reklaitis
1804
Optimal location of gasification plants for electricity production in rural areas Mar Pérez-Fortes, Pol Arranz-Piera, José Miguel Laínez, Enric Velo and Luis Puigjaner
1809
Multi-objective optimization of the electricity production from coal burning Jorge Cristóbal, Gonzalo Guillén-Gosálbez, Laureano Jiménez, Angel Irabien
1814
Detailed Operation Scheduling and Control for Renewable Energy Powered Microgrids Miguel Zamarripa, Juan C. Vasquez, Josep M. Guerrero, Moisès Graells
1819
Optimization of mixed-refrigerant system in LNG liquefaction process Kyungjae Tak, Wonsub Lim, Kwangho Choi, Daeho Ko, Il Moon
1824
BOG Handling Method for Energy Saving in LNG Receiving Terminal Chansaem Park, Youngsub Lim, Sangho Lee, Chonghun Han
1829
Oil Well Drilling Process - Simulation and Experimental Multi-Objective Studies Márcia Peixoto Vega, Marcela Galdino de Freitas, Claudia Miriam Scheid and André Leibsohn Martins
1834
Economic MPC for Power Management in the SmartGrid Tobias Gybel Hovgaard, Kristian Edlund, John Bagterp Jørgensen
1839
Fisher information based time-series segmentation of streaming process data for monitoring and supporting on-line parameter estimation in energy systems László Dobos, János Abonyi
1844
NMPC for Oil Reservoir Production Optimization Carsten Völcker, John Bagterp Jørgensen, Per Grove Thomsen, Erling Halfdan Stenby
1849
Optimization of LNG plants – challenges and strategies Magnus G. Jacobsen, Sigurd Skogestad
1854
Site-wide process integration for low grade heat recovery Ankur Kapil, Igor Bulatov, Robin Smith, Jin-Kuk Kim
1859
Novel optimization method for retrofitting heat exchanger networks with intensified heat transfer Ming Pan, Igor Bulatov, Robin Smith, Jin-Kuk Kim
1864
A CFD-process model of steam generation in a power plant by a thermosyphon system Penelope J. Edge, Peter J. Heggs, Mohamed Pourkashanian, Alan Williams
1869
Techno-Economic Analysis for Ethylene and Methanol Production from the Oxidative Coupling of Methane Process Daniel Salerno, Harvey Arellano-Garcia, Günter Wozny
1874
The Effects of Electricity Storage on Large Scale Wind Integration Shisheng Huang, Bri-Mathias S. Hodge, Jingjie Xiao, Gintaras V. Reklaitis, Joseph F. Pekny
1879
Co-production of ethanol, hydrogen and biogas using agro-wastes. Conceptual plant design and NPV analysis for mid-size agricultural sectors Arturo Sanchez, Victor Sevilla-Guitron, Gabriela Magaña, Paulina Melgoza, Hector Hernandez
1884
Energy Systems Analysis for a Renewable Transportation Sector Dharik S. Mallapragada, Navneet R. Singh, Rakesh Agrawal
1889
Prediction of Conversion of a Packed Bed of Fuel Particles on a Forward Acting Grate by the Discrete Particle Method (DPM) Bernhard Peters, Algis Dziugys
1894
Monitor and diagnosis of LNG plant fractionation process using k-means clustering and principal component analysis Hahyung Pyun, Daeyoun Kim, Kyungjin Kim, Chonghun Han
1899
Design of Integrated Gasification Combined Cycle plant with Carbon Capture and Storage based on co-gasification of coal and biomass Victoria Maxim, Calin-Cristian Cormos, Paul Serban Agachi
1904
Low Temperature Process Design: Challenges and Approaches for using Exergy Efficiencies Danahe Marmolejo-Correa, Truls Gundersen
1909
Optimization of sustainable energy planning with consideration of uncertainties in learning rates and external cost factors Seunghyok Kim, Jamin Koo, En Sup Yoon
1914
Analysis of Integrated Gasification Combined Cycle (IGCC) Power Plant Based on Climate Change Scenarios with Respect to CO2 Capture Ratio Kyungtae Park, Kyusang Han and En Sup Yoon
1919
SynFlex: A Computational Framework for Synthesis of Flexible Heat Exchanger Networks M. Escobar, J.O. Trierweiler, and I.E. Grossmann
1924
Modeling and Optimization of Supercritical Phase Fischer-Tropsch Synthesis Wei Yuan, Gregory C. Vaughan, Christopher B. Roberts, Mario R. Eden
1929
Optimization of pipeline unloading operations in an LPG terminal S.Arun Srikanth, Sridharakumar Narasimhan, Shankar Narasimhan
1934
Assessment for Carbon Capture and Storage Opportunities: Greek Case Study Christos Ioakimidis, Nikolaos Koukouzas, Anna Chatzimichali, Sergio Casimiro, Grigorios Itskos
1939
Methodology for Maximising the Use of Renewables with Variable Availability Andreja Nemet, Jiří J. Klemeš, Petar S. Varbanov
1944
Exergy-based methods for computer-aided design of energy conversion systems George Tsatsaronis, Tatiana Morosuk
1949
Computational support as efficient sophisticated approach in waste-to-energy systems Petr Stehlík
1954
Regional Optimizer (RegiOpt) – Sustainable energy technology network solutions for regions K.H. Kettl, N. Niemetz, N. Sandor, M. Eder, I. Heckl, M. Narodoslawsky
1959
Improving Energy Efficiency of a Dyes Intermediates Synthesis Plant. A Developing Country Specific Case Study Zsófia Fodor, Paul Krajnik, Petar Sabev Varbanov, Jiří Jaromír Klemeš
1964
Evaluation of Design Issues and Automation Infrastructure in a Solar-Hydrogen Production Unit at CERTH in Thessaloniki Chrysovalantou Ziogou, Dimitris Ipsakis, Fotis Stergiopoulos, Simira Papadopoulou, Stella Bezergianni, Spyros Voutetakis
1969
Optimal Operation of a Concentrated Solar Thermal Cogeneration Plant Amin Ghobeity, Alexander Mitsos
1974
The role of energy consumption in batch process scheduling Mate Hegyhati, Ferenc Friedler
1979
Modeling Fluid Flow of Vipertex Enhanced Heat Transfer Tubes David J. Kukulka and Rick Smith
1984
Energy targeting in heat integrated water networks with isothermal mixing Santanu Bandyopadhyay, Gopal Chandra Sahu
1989
Design of renewable energy systems incorporating uncertainties through pinch analysis Santanu Bandyopadhyay
1994
Sustainable LCA-based MIP Synthesis of Biogas Processes Lidija Čuček, Rozalija Drobež, Bojan Pahor, Zdravko Kravanja
1999
Energy, Water and Process Technologies Integration for the Simultaneous Production of Ethanol and Food from the entire Corn Plant Lidija Čuček, Mariano Martín, Ignacio E. Grossmann, Zdravko Kravanja
2004
Ontology-Driven Design of an Energy Management System Karel Macek, Karel Mařík, Petr Stluka
2009
Synthesis of Flexible Palm Oil-Based Regional Energy Supply Chain Dominic C. Y. Foo, Raymond R. Tan, Hon Loong Lam, Mustafa Kamal, Jiří J. Klemeš
2014
Index
2019
ESCAPE-21 - PREFACE

This book includes papers presented at the 21st European Symposium on Computer-Aided Process Engineering (ESCAPE-21) held at Porto Carras Resort, Chalkidiki, Greece, from 29 May to 1 June 2011. The ESCAPE series constitutes the major European annual event which serves as a global forum for engineers, scientists, researchers, managers and students to present and discuss progress being made in the area of Process Systems Engineering. Previous events took place in Lyon, France, 2008 (ESCAPE-18), Cracow, Poland, 2009 (ESCAPE-19) and Ischia, Italy, 2010 (ESCAPE-20). European industries are bringing innovations into our lives, whether in the form of new technologies to address environmental problems, new products to make our homes more comfortable and energy efficient, or new therapies to improve the health and well-being of European citizens. The technical theme of ESCAPE-21 hence recognizes the continuous and increasingly expanding importance of, and need for, a systems approach in tackling such industrial and societal grand challenges, featuring the following strands:

Core Process Systems Engineering
- Multi-scale Modeling
- Synthesis and Design
- Optimization and Control
- Production Operations
- Training and Education

Grand Challenges – Domain-driven PSE
- Environmental Systems Engineering
- Bioprocess Systems Engineering
- Biomedical Systems Engineering
- Materials and Molecular Systems Engineering
- Energy Systems Engineering
More than 670 abstracts from almost 60 countries were originally submitted to the symposium. Of these, 399 were finally selected for oral and poster presentation and included in this book. All papers have been peer reviewed; we are indeed grateful to the members of the international scientific committee for their evaluations,
comments and recommendations. We are also extremely grateful to the authors for their outstanding contributions. We hope that this book will serve as a valuable reference document for the scientific and industrial community and that it will contribute to the progress of process systems engineering.

Efstratios N. Pistikopoulos
Michael C. Georgiadis
Antonis Kokossis
ESCAPE-21 co-chairmen
Members of the International Scientific Committee

Ali Abbas, University of Sydney, Australia
Iftekhar Karimi, National University of Singapore, Singapore
Claire Adjiman, Imperial College London, UK
Jiri Klemes, University of Pannonia, Hungary
Rakesh Agrawal, Purdue University, USA
Andrzej Kraslawski, University of Lappeenranta, Finland
Yiannis Androulakis, Rutgers University, USA
Zdravko Kravanja, University of Maribor, Slovenia
Adisa Azapagic, University of Manchester, UK
Jay Lee, KAIST, Republic of Korea
Miguel Bagajewicz, University of Oklahoma, USA
Andreas Linninger, University of Illinois at Chicago, USA
Julio Banga, CSIC, Spain
Sandro Macchietto, Imperial College London, UK
Ana Barbosa, IST - Technical Uni of Lisbon, Portugal
Sakis Mantalaris, Imperial College London, UK
David Bogle, University College London, UK
Costas Maranas, Pennsylvania State University, USA
Peter Bongers, Unilever / DTU, The Netherlands
Christos Maravelias, University of Wisconsin, USA
Ian Cameron, The University of Sydney, Australia
Francois Marechal, EPFL, Switzerland
Benoit Chachuat, Imperial College London, UK
Wolfgang Marquardt, RWTH-Aachen, Germany
Panagiotis Christofides, University of California LA, USA
Il Moon, Yonsei University, Korea
Prodromos Daoutidis, University of Minnesota, USA
Iqbal Mujtaba, University of Bradford, UK
Mario Eden, Auburn University, USA
Costas Pantelides, PSE Ltd, UK
Sebastian Engell, University of Dortmund, Germany
Lazaros Papageorgiou, University College London, UK
Antonio Espuna, UPC, Spain
Sauro Pierucci, University Polytechnic of Milano, Italy
Panagiota Foteinou, UCSB-MIT-Caltech/ARO, USA
Valentin Plesu, Technical Uni of Bucharest, Romania
Ferenc Friedler, University of Pannonia, Hungary
Luis Puigjaner, UPC, Spain
Rafiqul Gani, DTU, Denmark
Rex Reklaitis, Purdue University, USA
Mahmoud El Halwagi, Texas A&M University, USA
Jose Romagnoli, Louisiana State University, USA
Chonghun Han, Seoul National University, South Korea
Nick Sahinidis, Carnegie Mellon University, USA
Georges Heyen, Université de Liège, Belgium
Nilay Shah, Imperial College London, UK
Marianthi Ierapetritou, Rutgers University, USA
Sigurd Skogestad, NTNU, Norway
George Jackson, Imperial College London, UK
Raja Srinivasan, National University of Singapore, Singapore
Christian Jallut, Université Claude Bernard, Lyon, France
Paul Stuart, Ecole Polytechnique Montreal, Canada
Sten Jorgensen, DTU, Denmark
Doros Theodorou, National Technical University of Athens
Emilia Kondili, Technology and Education Institute of Peiraia
Harris Sarimveis, National Technical University of Athens
Xavier Joulia, INPT-ENSIACET, Toulouse, France
Tapio Westerlund, Abo Akademi, Finland
Costas Kravaris, University of Patras, Greece
Panos Seferlis, Aristotle University of Thessaloniki & Chemical Process Engineering Research Institute, Greece
Yiannis Kookos, University of Patras, Greece
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Detailed Mathematical Modelling of Liquid-Liquid Extraction Columns

Moutasem JARADAT1,3, Menwer ATTARAKIH2,3 and Hans-Jörg BART1,3

1 Chair of Separation Science and Technology, TU Kaiserslautern, POB 3049, 67653 Kaiserslautern, Germany
2 Faculty of Eng. Tech., Chem. Eng. Dept., Al-Balqa Applied University, POB 15008, 11134 Amman, Jordan
3 Centre of Mathematical and Computational Modelling, TU Kaiserslautern, Germany
Abstract
A comprehensive bivariate population balance model for the dynamic and steady state simulation of extraction columns is developed. The model is programmed using Visual Digital FORTRAN and then integrated into the whole LLECMOD program [23]. As a case study, the simulation tool LLECMOD is used to simulate the steady state performance of pulsed packed and sieve plate columns. Two chemical test systems recommended by the EFCE are used in the simulation. Model predictions are successfully validated against steady state and dynamic experimental data, with good agreement achieved.
Keywords: LLECMOD, Extraction Columns, Population Balance, Simulation.
1. Introduction
Liquid-liquid extraction is an important separation process encountered in many chemical process industries [1]. Different kinds of liquid-liquid extraction columns are used in industry; these can be classified into two main categories: agitated and non-agitated columns. Non-agitated (packed and sieve plate) columns are frequently used in liquid-liquid extraction operations due to their high throughput, high separation efficiency and insensitivity towards contamination of the interface, which has led to their wide applicability, particularly in the extraction of radioactive materials. These columns use the density difference between the two phases to bring them into contact, and thus do not require external energy. Van Dijck [2] devised the use of external energy in the form of pulsing in sieve plate columns, which has found wide application in nuclear fuel reprocessing. These columns have a clear advantage over other mechanical contactors when processing corrosive or radioactive solutions: the absence of moving mechanical parts obviates the need for frequent repair and servicing. The internals (packing or perforated plates) reduce axial mixing, increase drop coalescence and breakage rates (resulting in increased mass transfer rates), and affect the mean residence time of the dispersed phase. The performance of these columns can be enhanced by mechanical pulsation of the continuous phase: the increased shear forces reduce the size of the dispersed droplets, so that the interfacial area, and hence the mass transfer rate, is increased [3]. To shed more light on the extraction behaviour in pulsed packed and sieve plate columns, the hydrodynamics as well as the mass transfer characteristics must be well understood. Our present knowledge of the design and performance of extraction columns is still far from satisfactory.
This is mainly due to the complex behaviour of the hydrodynamics and mass transfer [4]. It is obvious that the changes in
the characteristics (holdup, Sauter diameter, etc.) of the drop population along the column have to be considered in order to conveniently describe the behaviour of the column. The dispersed phase in liquid-liquid extraction undergoes changes and continuously loses its identity as the drops break and coalesce. Accordingly, detailed modelling on a discrete level is needed, using the population balance equation as a mathematical framework. Multivariate non-equilibrium population balance models have emerged as an effective tool for the study of the complex coupled hydrodynamics and mass transfer in liquid-liquid extraction columns. The development of computational tools to model industrial processes has increased in the last decades. However, to the best of the authors' knowledge, there are no comprehensive non-equilibrium population balance models that describe in sufficient detail the behaviour of extraction columns. The main objective of this work is to develop a model that is capable of describing the dynamic and steady state behaviour of pulsed packed and sieve tray extraction columns. The models of both columns are integrated into the existing program LLECMOD [Reference], which can also simulate agitated extraction columns (RDC and Kühni). LLECMOD can simulate the steady state and dynamic behaviour of extraction columns taking into account the effect of the dispersed phase inlet (light or heavy phase dispersed) and the direction of mass transfer (from continuous to dispersed phase and vice versa) [5]. Therefore, scale-up and simulation of agitated and non-agitated extraction columns based on population balance modelling can now be carried out successfully.
2. Mathematical model
Mathematical modelling of pulsed extraction columns has been considered by many researchers [6-13]. An empirical model for predicting the hydrodynamics in pulsed sieve plate columns was proposed by Kumar and Hartland [7]. A stagewise model for the transient behaviour of a sieve-plate extraction column, taking into account back flow and assuming constant hold-up, was developed by Blass and Zimmerman [8]. Hufnagl et al. [9] evaluated a differential model of a Kühni column. Steiner et al. [10] modelled a packed column using a differential contact model without axial mixing. Weinstein et al. [11] evaluated the differential model of a Kühni column. An improved dynamic model considering the influence of the drop size distribution was developed by Xiaojin et al. [12]. Several population balance models have been proposed by various authors: Garg and Pratt [13] developed a population balance model for a pulsed sieve-plate extraction column, taking into account experimentally determined values for drop breakage and coalescence. Casamatta et al. [14] proposed a population balance model as described by Gourdon et al. [15]. Al Khani et al. [16] and Milot et al. [17] applied this model to dynamic and steady state simulations of a pulsed sieve-plate extraction column. Recently, extensive work has been done on the population balance modelling of extraction columns by many researchers [15, 18-23].

2.1. The population balance model
The general spatially distributed population balance model describing the coupled hydrodynamics and mass transfer can be written as [21]:

$$\frac{\partial f_{d,c_y}(\psi)}{\partial t} + \frac{\partial \left[ u_y\, f_{d,c_y}(\psi) \right]}{\partial z} + \sum_{i=1}^{2} \frac{\partial \left[ \dot{\zeta}_i\, f_{d,c_y}(\psi) \right]}{\partial \zeta_i} = \frac{\partial}{\partial z}\!\left[ D_y\, \frac{\partial f_{d,c_y}(\psi)}{\partial z} \right] + \frac{Q_y^{in}}{A_c\, v_y^{in}}\, f_y^{in}(d, c_y; t)\, \delta(z - z_y) + b\{\psi\} \tag{1}$$
In this equation the components of the vector ψ = [d, c_y, z, t] are the droplet internal coordinates (diameter and solute concentration), the external coordinate z and the time t. The velocity vector along the internal coordinates is given by ζ̇ = [ḋ, ċ_y]. The source term b{ψ}∂ζ represents the net number of droplets produced by breakage and coalescence per unit volume and unit time in the coordinate range [ζ, ζ + ∂ζ]. The droplet axial dispersion is characterized by the dispersion coefficient D_y. The second term on the right-hand side is the rate at which droplets enter the LLEC with volumetric flow rate Q_y,in, perpendicular to the column cross-sectional area A_c, at location z_y with an inlet number density f_y^in. The dispersed phase velocity u_y is relative to the walls of the column [23].

2.2. Model parameters
Equation (1) is general for any type of extraction column. What makes the equation specific is the internal geometry of the column, as reflected by the required correlations for hydrodynamics and mass transfer. Experimental correlations are used to estimate the turbulent energy dissipation and the slip velocities of the moving droplets, along with the interaction frequencies of breakage and coalescence. In this work, correlations for packed and sieve plate columns concerning droplet velocity, coalescence and mass transfer are taken from the work of Henschke [24]. The slowing factor and the droplet breakage frequency are taken from the work of Garthe [25].

2.3. Numerical solution
The resulting model is composed of a system of integro-partial differential and algebraic equations that is dominated by convection and hence calls for a specialized discretization approach. The model is solved using optimized and efficient numerical algorithms developed by Attarakih et al. [19, 21, 22].
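The convection-dominated character of Eq. (1) is the reason a specialized discretization is required. LLECMOD itself is written in FORTRAN; purely as an illustrative sketch (Python, with hypothetical numbers, and not the authors' actual algorithm), a first-order upwind finite-volume step for the pure convection part of the droplet transport could look like this:

```python
import numpy as np

def advect_upwind(f, u, dz, dt, f_in=0.0):
    """One explicit first-order upwind step of df/dt + d(u f)/dz = 0,
    i.e. only the convection part of a droplet transport equation.
    A full PBE solver also treats growth, breakage/coalescence sources
    and axial dispersion. Assumes u > 0 (flow in the +z direction)."""
    flux = u * f                               # convective flux at cell centres
    f_new = f.copy()
    f_new[0] -= dt / dz * (flux[0] - u * f_in) # inlet boundary cell
    f_new[1:] -= dt / dz * (flux[1:] - flux[:-1])
    return f_new

# Usage: transport a droplet-number pulse along a 4.4 m column section.
nz, H = 110, 4.4
dz = H / nz
z = (np.arange(nz) + 0.5) * dz
f = np.exp(-((z - 1.0) / 0.2) ** 2)  # initial number-density pulse at z = 1 m
u = 0.05                             # m/s, hypothetical dispersed-phase velocity
dt = 0.5 * dz / u                    # CFL number 0.5 for stability
mass0 = f.sum() * dz
for _ in range(50):                  # integrate to t = 20 s
    f = advect_upwind(f, u, dz, dt)
```

After 50 steps the pulse has moved u·t = 1 m downstream; the upwind scheme keeps the density non-negative and conserves droplet number away from the boundaries, at the cost of some numerical smearing.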
3. LLECMOD program
The aforementioned mathematical models, in particular those for pulsed extraction columns, are programmed in LLECMOD using Visual Digital FORTRAN. Recent correlations for fluid dynamics and mass transfer are now available and are extensively validated against experimental data collected from pilot and industrial columns. The graphical interface of the LLECMOD program contains the main input window and subwindows for parameter and correlation inputs. The main window contains all correlations and operating conditions, which can be selected using drop-down menus. The basic feature of this program is to provide an easy tool for the simulation of coupled hydrodynamics and mass transfer in liquid-liquid extraction columns based on the population balance approach for both transient and steady-state conditions. Details about LLECMOD can be found in [23].
4. Results and discussion
To completely specify the model, the following geometry is used for a pilot plant scale LLEC (packed pulsed column): column height (H) = 4.4 m, inlet of the dispersed phase (zy) = 0.85 m, inlet of the continuous phase (zx) = 3.8 m, column diameter (d) = 0.08 m; the inlet feed is normally distributed with mean equal to 3.2 mm and standard deviation of 0.5 mm. The two EFCE test systems (toluene-acetone-water and butyl acetate-acetone-water) are used. The direction of mass transfer is from the continuous to the dispersed phase. The inlet solute concentrations in the continuous and dispersed phases
are taken for toluene-acetone-water as 5.73 and 0 % and for the second system (butyl acetate-acetone-water) as 5.22 and 0 %, respectively. The pulsation intensity (a·f) = 1 cm/s, and the total flow rates are Qc = 40 l/h for the continuous phase and Qd = 48 l/h for the dispersed phase.
Fig. 1: Simulated mean droplet diameter along the column height compared to the experimental data [25]. Left panel: toluene-acetone-water; right panel: butyl acetate-acetone-water.
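The mean diameter plotted in Fig. 1 is conventionally reported as the Sauter mean, d32 = Σ n_i d_i³ / Σ n_i d_i². As a small illustrative sketch (Python, not part of LLECMOD), applying this to the normally distributed inlet feed of this section (mean 3.2 mm, standard deviation 0.5 mm):

```python
import numpy as np

def sauter_mean(d, n):
    """Sauter mean diameter d32 = sum(n d^3) / sum(n d^2)
    for a discretized number density n over diameter classes d."""
    return np.sum(n * d**3) / np.sum(n * d**2)

# Inlet feed: normal number distribution, mean 3.2 mm, std 0.5 mm,
# discretized on a grid spanning +/- 4.4 standard deviations.
d = np.linspace(1.0, 5.4, 221)             # diameter classes, mm
n = np.exp(-0.5 * ((d - 3.2) / 0.5) ** 2)  # unnormalised number density
d32 = sauter_mean(d, n)                    # about 3.35 mm
```

Because d32 weights the large drops, it exceeds the 3.2 mm number-mean; for a normal distribution it equals (mu^3 + 3 mu sigma^2) / (mu^2 + sigma^2), about 3.35 mm here.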
Fig. (1) shows the variation of the mean droplet diameter along the column height compared to the experimental data for both chemical systems. A fairly good agreement between the experimental and simulated profiles is achieved for both systems. A comparison between the simulated holdup profiles along the column height and the experimental data [25] is shown in Fig. (2). Again, a very good agreement is achieved for both test systems.
Fig. 2: Simulated holdup profiles along the column height compared to the experimental data [25]. Left panel: toluene-acetone-water; right panel: butyl acetate-acetone-water.
Fig. (3) shows the simulated and experimental solute concentration profiles in both phases as a function of column height. The agreement between simulation and experiment is excellent for both test systems.
Fig. 3: Simulated solute concentration profiles in both phases along the column height compared to the experimental data [25]. Left panel: toluene-acetone-water; right panel: butyl acetate-acetone-water.
LLECMOD also provides dynamic simulations to describe the transient behaviour of extraction columns. Using the LLECMOD program, the transient column behaviour can be investigated numerically. To analyse the dynamic behaviour of the column, step and exponential changes can be applied to the inlet variables to obtain the dynamic step response of the model. In the transient module, step and exponential changes can be applied to the inlet solute concentration in the dispersed phase (Cy,in) and to the inlet solute concentration in the continuous phase (Cx,in). The dynamic evolution of the solute concentration in the extract, along with the experimental data, will be discussed in a separate publication. LLECMOD is able to capture the dynamic behaviour of the extraction column with good accuracy.
5. Conclusions
The present nonequilibrium bivariate population balance model can be considered an effective tool to describe the steady state and dynamic behaviour of hydrodynamics and mass transfer in extraction columns. In this work, pulsed packed and sieve plate extraction columns are considered. The transient and steady state performance of a pulsed packed extraction column is studied using the present model as an alternative to the commonly used models (backmixing and dispersion models). The simulation results from the present model are found to be in good agreement with the available experimental data.
References
[1] T.C. Lo et al. (Eds.), 1983, Handbook of Solvent Extraction, J. Wiley & Sons, New York.
[2] W.J.D. Van Dijck, 1935, U.S. Patent 2,011,186.
[3] H.R.C. Pratt and G.W. Stevens, 1992, in: J.D. Thornton (Ed.), Science and Practice in Liquid-Liquid Extraction, Oxford University Press, New York, 491-589.
[4] G. Luo et al., 1998, Chem. Eng. Technol., 21, 10, 823-827.
[5] M. Jaradat et al., 2010, Chem. Eng. J., 165, 2, 379-387.
[6] S. Mohanty, 2000, Rev. Chem. Eng., 16, 3, 199-248.
[7] A. Kumar and S. Hartland, 1995, Ind. Eng. Chem. Res., 34, 11, 3925-3932.
[8] E. Blass and H. Zimmerman, 1982, Verfahrenstechnik, 16, 9, 682-690.
[9] H. Hufnagl et al., 1991, Chem. Eng. Technol., 14, 301-306.
[10] L. Steiner et al., 1995, Chem. Eng. Res. Des., 73, 5, 542-550.
[11] O. Weinstein et al., 1998, Chem. Eng. Sci., 53, 2, 325-339.
[12] T. Xiaojin et al., 2005, Chem. Eng. Sci., 60, 4409-4421.
[13] M.O. Garg and H.R.C. Pratt, 1984, AIChE J., 30, 3, 432-441.
[14] G. Casamatta and A. Vogelpohl, 1985, Ger. Chem. Eng., 8, 96-103.
[15] C. Gourdon et al., 1994, in: J.C. Godfrey and M.J. Slater (Eds.), Liquid-Liquid Extraction Equipment, Wiley, Chichester, 137-226.
[16] S.D. Al Khani et al., 1989, Chem. Eng. Sci., 44, 6, 1295-1305.
[17] J.F. Milot et al., 1990, Chem. Eng. J., 45, 2, 111-122.
[18] T. Kronberger et al., 1995, Comput. Chem. Eng., 19, 639-644.
[19] M. Attarakih et al., 2004, Chem. Eng. Sci., 59, 2567-2592.
[20] M. Attarakih et al., 2004b, Chem. Eng. Sci., 59, 2547-2565.
[21] M. Attarakih et al., 2006, Chem. Eng. Sci., 61, 113-123.
[22] M. Attarakih et al., 2006b, Chem. Eng. Tech., 29, 435-441.
[23] M. Attarakih et al., 2008, Open Chem. Eng. J., 2, 10-34.
[24] M. Henschke, 2004, Auslegung pulsierter Siebboden-Extraktionskolonnen, Shaker Verlag, Aachen.
[25] G. Garthe, 2006, Dissertation, TU München, Germany.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Multi-Scale modelling of a membrane reforming power cycle with CO2 capture Øivind Wilhelmsen, Rahul Anantharaman, David Berstad and Kristin Jordal SINTEF Energy Research, Sem Sælands vei 11, 7034 Trondheim, Norway
Abstract
This work presents the initial investigations of an Integrated Reforming Combined Cycle (IRCC) process with CO2 capture using a membrane reformer. A geometrically generic 1-dimensional model of a membrane reformer has been implemented in Matlab 7.9. This model includes detailed balance equations for energy, momentum and mass in all three sections of the membrane reformer. Widely accepted empirical relations have been used to take into account the mass and energy transport across the membrane as functions of the conditions inside the chemical reactor. The reactor model has been integrated into an overall steady state IRCC process simulation model developed in HYSYS and GTPro. The work shows that multi-scale modelling is necessary to capture the behaviour of the process. The overall cycle efficiency of the process was 46.83% with 85% CO2 capture.
Keywords: Carbon Capture and Storage, Integrated Reforming Combined Cycle, Hydrogen Membrane Reactor, Multi-scale Modelling
1. Introduction
CO2 capture in power plants has been identified as an important technology to mitigate climate change. A key drawback of power plants with CO2 capture is the relatively large energy penalty associated with capture and the subsequent efficiency drop (10-15% points for natural gas based cycles). Integrated Reforming Combined Cycles (IRCC) for pre-combustion capture of CO2 using aMDEA typically have an energy penalty of 13% points, of which the reformers and shift reactors contribute 6% points and the CO2 capture unit 2% points. Incorporating the reforming, shift and CO2 separation units into a single unit using a Hydrogen Membrane Reactor (HMR) can potentially improve the overall efficiency of an IRCC plant with CO2 capture, since intermediate heating and cooling steps are eliminated. Due to the complexity of the membrane reformer, a sufficiently detailed unit model is necessary to provide realistic cycle studies. In this work, we thus investigate the potential of an IRCC cycle with an integrated HMR (Figure 1) employing a steady-state one-dimensional membrane reformer unit model. Since the reforming is endothermic, hot exhaust gas from a gas turbine is used as the heating utility, flowing co-currently with the feed. Nitrogen is used as the sweep gas; both hydrogen and heat are transferred across the membrane in the membrane reactor.
2. Multi-scale modelling and numerical approach
The HMR illustrated in Figure 2 was implemented in Matlab 7.9 and then linked to HYSYS and GTPro using Excel. This section describes the balance equations of the HMR unit model. The model is generic, meaning that it can be applied both to the flat-plate membrane reactor to the left in Figure 2 and to the tubular membrane reactor displayed to the right. Heat is transferred from the hot exhaust gas (1) to the reacting
feed gas mixture (2). The section containing the exhaust gas is insulated and no heat is assumed to be lost to the ambient. The following reactions are assumed to take place near the surface of the catalyst pellets:

CH4 + H2O ⇌ CO + 3 H2   (Eq. 1)
CO + H2O ⇌ CO2 + H2   (Eq. 2)
CH4 + 2 H2O ⇌ CO2 + 4 H2   (Eq. 3)

The overall production of hydrogen is endothermic, and the reactions need energy from the hot exhaust gas to produce hydrogen. The reaction kinetics are modelled by the equations proposed by Xu and Froment [1]. Both heat and hydrogen are assumed to be transferred through the membrane.
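The endothermic character of the overall reforming can be checked from standard-state formation enthalpies (298 K gas-phase values from common thermochemical tables). The sketch below is purely illustrative and independent of the Xu-Froment kinetics used in the paper; note that Eq. 3 is exactly the sum of Eqs. 1 and 2:

```python
# Standard enthalpies of formation at 298 K, kJ/mol (gas phase, tabulated values)
dHf = {"CH4": -74.8, "H2O": -241.8, "CO": -110.5, "CO2": -393.5, "H2": 0.0}

def dH_rxn(reactants, products):
    """Reaction enthalpy from formation enthalpies.
    reactants/products map species name -> stoichiometric coefficient."""
    return (sum(c * dHf[s] for s, c in products.items())
            - sum(c * dHf[s] for s, c in reactants.items()))

dH1 = dH_rxn({"CH4": 1, "H2O": 1}, {"CO": 1, "H2": 3})   # steam reforming, Eq. 1
dH2 = dH_rxn({"CO": 1, "H2O": 1}, {"CO2": 1, "H2": 1})   # water-gas shift, Eq. 2
dH3 = dH_rxn({"CH4": 1, "H2O": 2}, {"CO2": 1, "H2": 4})  # overall, Eq. 3
```

This gives roughly +206 kJ/mol for Eq. 1 and -41 kJ/mol for Eq. 2, so the overall Eq. 3 absorbs about +165 kJ/mol, which is the heat that must be supplied by the exhaust gas.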
Figure 1: Illustration of the power-cycle process.

The main assumption of the one-dimensional model is plug flow in all sections. The parameters taking into account the geometry of the reactor are displayed in Table 1. The tubular reactor is assumed to have a length L and radii R1, R2 and R3 for sections 1, 2 and 3, respectively (Figure 2). The flat-plate reactor is assumed to have a width W and heights H1, H2 and H3. The number of components is Nc and the number of reactions Nr. The energy balance of the exhaust gas section is:

$$\frac{dT_1}{dz} = \frac{-\gamma_1\, J_{q,1\to 2}}{\sum_{i=1}^{N_{c,1}} F_{1,i}\, C_{p,i}} \qquad \text{(Eq. 4)}$$
Here, the subscript i denotes each component, the subscript j each reaction, and the subscripts 1, 2, 3 the different sections. T is the temperature, Jq the heat flux, Fi the flow rate of component i and Cp the heat capacity.
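The sign of Eq. 4 says the exhaust cools as heat flows to the feed section. A single forward-Euler step makes this concrete (Python sketch; every number below is hypothetical and not taken from the paper):

```python
def dT1_dz(gamma1, Jq_12, F1, Cp1):
    """Right-hand side of Eq. 4: exhaust-gas temperature gradient.
    gamma1 [m], Jq_12 [W/m^2], F1 [mol/s] and Cp1 [J/(mol K)] per component."""
    denom = sum(F * Cp for F, Cp in zip(F1, Cp1))
    return -gamma1 * Jq_12 / denom

# Hypothetical numbers, for illustration only:
gamma1 = 0.3        # m, heat-transfer perimeter (2*pi*R3 in the tubular case)
Jq_12 = 5.0e4       # W/m^2, heat flux from exhaust to feed section
F1 = [10.0, 2.0]    # mol/s per exhaust component
Cp1 = [30.0, 35.0]  # J/(mol K)

T1 = 873.15         # K, roughly the 600 C exhaust inlet of Section 3
dz = 0.1            # m, axial step
T1_next = T1 + dz * dT1_dz(gamma1, Jq_12, F1, Cp1)
```

With these values the gradient is about -40 K/m, so the exhaust temperature falls by roughly 4 K over the 0.1 m step.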
Figure 2: Illustration of a membrane reactor. 1: The hot exhaust gas. 2: The feed gas mixture. 3: The permeate. A flat-plate membrane reactor configuration (left) and a tubular membrane reactor configuration (right).
Table 1: Geometrical parameters

Parameter   Tubular          Flat-plate
γ1          2πR3             W
γ2          2πR2             W
γ3          πR3² − πR2²      W·H2
The momentum balance of the exhaust gas section was omitted due to an insignificant contribution to the simulations within the relative accuracy of 5E-6. This was validated by including the momentum balance from [2]. The energy balance of the feed gas section is:

$$\frac{dT_2}{dz} = \frac{\gamma_1 J_{q,1\to 2} - \gamma_2 J_{q,2\to 3} + \gamma_3\, \rho_B \sum_{j=1}^{N_r} \eta_j\, r_j \left(-\Delta H_j\right) - \max\!\left(0, -J_{H_2}\right)\gamma_2\left(h_{2,H_2} - h_{3,H_2}\right)}{\sum_{i=1}^{N_{c,2}} F_{2,i}\, C_{p,i}} \qquad \text{(Eq. 5)}$$

Here, ρB is the density of the catalyst, η_j the effectiveness factor of reaction j, r_j the reaction rate of reaction j and ΔH_j the enthalpy of reaction j. h_H2 denotes the intensive enthalpy of hydrogen and J_H2 is the flux of hydrogen through the membrane. The momentum balance is taken into account by the Hicks equation, which is described in [2], and the mole balances for the feed gas section are:

$$\frac{dF_{2,i}}{dz} = \gamma_3\, \rho_B \sum_{j=1}^{N_r} \eta_j\, r_j\, \nu_{i,j} - \delta_{i,H_2}\, \gamma_2\, J_{H_2} \qquad \text{(Eq. 6)}$$

Here, δ_{i,H2} denotes the Kronecker delta. Only hydrogen is assumed to permeate through the membrane. The mole balances at the permeate side are:

$$\frac{dF_{3,i}}{dz} = \delta_{i,H_2}\, \gamma_2\, J_{H_2} \qquad \text{(Eq. 7)}$$

The momentum balance for the permeate is neglected for the same reasons as in the exhaust gas section. This assumption has been assessed by including the same momentum balance for the permeate as in [3]. The ideal gas law is used as the equation of state, giving an expression for the velocities needed for the momentum balance in the feed section and also in the correlations for heat transfer across the tube walls. Finally, the energy balance of the permeate is:

$$\frac{dT_3}{dz} = \frac{\gamma_2 J_{q,2\to 3} + \max\!\left(0, J_{H_2}\right)\gamma_2\left(h_{2,H_2} - h_{3,H_2}\right)}{\sum_{i=1}^{N_{c,3}} F_{3,i}\, C_{p,i}} \qquad \text{(Eq. 8)}$$
Representative values for the effectiveness factors are found by including the mass balances for the catalyst pellets, which are assumed to be spherical. The detailed balance equations are described more closely in [4]. The heat transfer coefficients and thermophysical models were modelled using semi-empirical expressions, also found in [4]. The heat flux from the exhaust to the feed gas section was taken into account by a constant overall heat transfer coefficient. The hydrogen flux model used in this work is identical to that of [5].
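To see how the mole balances couple reaction and permeation, here is a deliberately simplified plug-flow sketch in Python: a single lumped reaction (Eq. 3) with a made-up first-order rate and a Sieverts-type flux law stand in for the Xu-Froment kinetics and the flux model of [5] actually used in the paper, and every number is hypothetical:

```python
import math

def step(F2, F3_H2, dz, k, Q, gamma2, gamma3, P2, P3):
    """One explicit Euler step of the Eq. 6-7 mole balances for a toy
    lumped reaction CH4 + 2 H2O -> CO2 + 4 H2 (made-up rate constant k)
    with Sieverts-law H2 permeation (made-up permeance Q).
    F2 = [F_CH4, F_H2O, F_CO2, F_H2] in mol/s; F3_H2 is permeate H2."""
    FCH4, FH2O, FCO2, FH2 = F2
    Ftot = FCH4 + FH2O + FCO2 + FH2
    r = k * FCH4 / Ftot                    # made-up rate, mol/(m^3 s)
    pH2_2 = P2 * FH2 / Ftot                # H2 partial pressure, feed side [Pa]
    JH2 = Q * (math.sqrt(pH2_2) - math.sqrt(P3))  # fixed permeate-side pH2 = P3
    JH2 = max(JH2, 0.0)                    # no back-permeation in this toy model
    dFCH4 = -gamma3 * r * dz
    dFH2O = -2 * gamma3 * r * dz
    dFCO2 = gamma3 * r * dz
    dFH2 = (4 * gamma3 * r - gamma2 * JH2) * dz
    return ([FCH4 + dFCH4, FH2O + dFH2O, FCO2 + dFCO2, FH2 + dFH2],
            F3_H2 + gamma2 * JH2 * dz)

F2 = [1.0, 3.0, 0.0, 0.0]   # mol/s: CH4, H2O, CO2, H2 (steam/carbon = 3)
F3_H2 = 0.0
for _ in range(1000):       # 5 m reactor in 5 mm steps
    F2, F3_H2 = step(F2, F3_H2, dz=0.005, k=80.0, Q=0.02,
                     gamma2=0.13, gamma3=0.007, P2=20e5, P3=2e5)
conversion = 1.0 - F2[0]
```

Even this crude surrogate reproduces the qualitative coupling: hydrogen only starts to permeate once its feed-side partial pressure exceeds the permeate pressure, and carbon and hydrogen atoms are conserved between the feed and permeate sides.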
3. Results and discussion
The overall process outlined in Figure 1 was modelled in HYSYS and GTPro. The Matlab HMR model was linked to HYSYS using Excel. This allowed the membrane reactor to be solved at the scale of the balance equations, while the surrounding process was solved at a larger scale in HYSYS and GTPro. Exhaust gas from the hydrogen-fired gas turbine (around 600 °C) was used as the heating medium. Nitrogen from a cryogenic ASU was used as the sweep gas. The nitrogen also acts as the necessary gas turbine fuel diluent for hydrogen-rich combustion.
Figure 3: Temperature profiles and conversion of methane in the reactor (left and right panels).

Figure 3 shows the temperatures and the methane conversion in the membrane reactor. In accordance with De Falco et al. [5], the methane conversion is far from unity at these conditions. With 30-50% methane conversion, integration of the process unit in a power generation process involves additional processing, such as an auto-thermal reformer or an oxy-combustion power island downstream of the membrane reformer. In the process modelled in this work, the retentate from the membrane reformer is sent to an oxygen-blown auto-thermal reformer (ATR) and a two-stage shift reactor to convert unconverted methane and CO to H2 and CO2. An aMDEA-based capture unit is designed to capture 95% of the CO2. An advantage of integrating the HMR with the ATR in this case is the relatively higher partial pressure of CO2 in the syngas stream and hence the lower efficiency penalty for CO2 capture. The H2-rich syngas is then mixed with the retentate and fed to a H2-fired gas turbine. The temperature of the exhaust gas from the turbine after heat exchange in the reformer is around 520 °C and is increased to 580 °C using duct burning. The overall performance of the process is presented in Table 2. Note that the overall CO2 capture ratio is 85% due to duct burning and a significant amount of unconverted CO after the shift reactors.
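The efficiency figures reported in Table 2 follow directly from the power balance; a quick arithmetic check (values copied from Table 2):

```python
# Power figures from Table 2 of this paper:
fuel_LHV = 868.85        # [A] thermal energy of fuel, LHV basis, MWth
gross = 469.96           # [B] gross electric power output, MWe
ancillary = 63.08        # [C] total ancillary power consumption, MWe
net = gross - ancillary  # [D] net electric power output, MWe

gross_eff = gross / fuel_LHV * 100  # [B/A*100], about 54.09 %
net_eff = net / fuel_LHV * 100      # [D/A*100], about 46.83 %
```

The recomputed net efficiency matches the 46.83% quoted in the abstract (Table 2 rounds it to 46.8%).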
4. Conclusion A one-dimensional model of a HMR has been developed aiming to provide a generic model that can be adapted to different scenarios. The membrane reformer model is integrated in a power cycle with CO2 capture modelled in HYSYS to evaluate its potential. By comparing the multi-scale power cycle modelling to previous work where
Ø. Wilhelmsen et al.
modelling was applied only at the process scale [6], it is evident that the detailed membrane reformer model is vital to reveal the limits in methane conversion and the additional process steps needed in potential integration schemes. The process scheme designed in this work has an overall process efficiency of 46.8%. For comparison, an IRCC with an oxygen-blown reformer under a similar set of assumptions has an overall plant efficiency of 46%, while an NGCC with amine-based post-combustion capture has an efficiency of 49.5%. However, given the multitude of parameters that could be manipulated in such an integration, a sensitivity analysis is needed to identify the optimal process parameters and integration. This will be the subject of future work. Table 2: Overall performance of the HMR-integrated power plant with CO2 capture
NG flow                                     t/h     62.52
Thermal energy of fuel - LHV basis [A]      MWth    868.85
Gas turbine output                          MWe     319.09
Steam turbine output                        MWe     150.87
Air expander                                MWe     3.59
Gross electric power output [B]             MWe     469.96
Total ancillary power consumption [C]       MWe     63.08
Net electric power output [D]               MWe     406.89
Gross electrical efficiency [B/A*100]       %       54.09
Net electrical efficiency [D/A*100]         %       46.8
Carbon capture ratio                        %       85.00
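The efficiency entries in Table 2 follow directly from the tabulated energy flows; a short cross-check (values copied from the table) reproduces them:

```python
# Cross-check of the efficiency entries in Table 2
# (values copied from the table above).
A = 868.85   # thermal energy of fuel, LHV basis [MWth]
B = 469.96   # gross electric power output [MWe]
C = 63.08    # total ancillary power consumption [MWe]
D = B - C    # net electric power output [MWe]

gross_eff = B / A * 100.0   # gross electrical efficiency [%]
net_eff = D / A * 100.0     # net electrical efficiency [%]
```

Rounding to the table's precision reproduces 54.09% gross and 46.8% net electrical efficiency.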
References
[1] J. Xu and G.F. Froment, 1989, "Methane Steam Reforming, Methanation and Water-Gas Shift: I. Intrinsic Kinetics", AIChE J., 35, 1, 97-103
[2] M.H. Wesenberg, 2006, "Gas Heated Steam Reformer Modelling", PhD thesis, The Norwegian University of Science and Technology
[3] E. Johannessen and K. Jordal, 2005, "Study of a H2 separating membrane reactor for methane steam reforming at conditions relevant for power processes with CO2 capture", Energy Conv. and Manag., 46, 1059-1071
[4] Ø. Wilhelmsen, 2010, "The state of minimum entropy production in reactor design", MSc thesis, The Norwegian University of Science and Technology
[5] M. De Falco, L. Di Paola, L. Marrelli and P. Nardella, 2007, "Simulation of large-scale membrane reformers by a two-dimensional model", Chem. Eng. J., 128, 115-125
[6] K. Jordal, R. Bredesen, H.M. Kvamsdal, O. Bolland, 2004, "Integration of H2-separating membrane technology in gas turbine processes for CO2 capture", Energy, 29, 1269-1278
5. Acknowledgements This publication has been produced with support from the BIGCCS Centre, performed under the Norwegian research program Centres for Environment-friendly Energy Research (FME). The authors acknowledge the following partners for their contributions: Aker Solutions, ConocoPhillips, Det Norske Veritas AS, Gassco AS, Hydro Aluminium AS, Shell Technology Norway AS, Statkraft Development AS, Statoil Petroleum AS, TOTAL E&P Norge AS, GDF SUEZ E&P Norge AS and the Research Council of Norway (193816/S60).
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Modeling the liquid back mixing characteristics for a kinetically controlled reactive distillation process Mayank Shah,a Edwin Zondervan,a Anton A. Kiss,b Andre B. de Haana a
Process Systems Engineering, Department of Chemical Engineering and Chemistry, Eindhoven University of Technology, 5600 MB, The Netherlands b AkzoNobel – Research, Development & Innovation, Process Technology ECG, Velperweg 76, 6824 BM Arnhem, The Netherlands. E-mail:
[email protected] Abstract The state-of-the-art equilibrium and rate-based models for reactive distillation (RD) are well known and have been used for a couple of decades. However, these models are not sufficient to represent a slow reaction process that is kinetically controlled. This shortcoming is due to neglecting the effect of liquid back mixing on the whole process. This work starts by reviewing the modeling approaches for RD and then discusses the applicability of the various models. The main focus is on the extension of the dynamic rate-based model to take liquid back mixing into account. We also show how the axial dispersion is introduced into the RD model without adopting the axial dispersion model. The results of the rate-based model were compared with and without the axial dispersion. Remarkably, the extended model predicts the kinetically controlled process more accurately than the conventional rate-based model. Keywords: Reactive distillation modeling, liquid back mixing, Aspen Custom Modeler
1. Introduction Studying multi-component multistage separation processes such as distillation, gas absorption and reactive distillation by computer-aided design and simulation is an important aspect of modern chemical engineering. Such studies are currently based on either equilibrium (EQ) modeling or rate-based modeling. In equilibrium modeling, the vapor and liquid are assumed to be in equilibrium. However, this does not reflect actual operation, since a column rarely operates at equilibrium. The degree of separation depends on the mass and energy transfer between the phases being contacted on a tray or within a packed section of the column. In practice, the theoretical number of stages obtained from the equilibrium model calculations is converted to the required number of real stages, either through the overall efficiency of a tray or through the height equivalent of a theoretical plate (HETP) for packed columns. This is a useful approach for simulating a binary system or an existing column. However, it is not reliable for simulating a multi-component system or an existing column under different operating conditions [1]. Compared to equilibrium modeling, rate-based modeling offers accuracy in column design as it accounts for: 1) vapor-liquid equilibrium only at the interface between the bulk liquid and vapor phases, 2) a transport-based approach to predict the flux of mass and energy across the interface, and 3) the real hydrodynamic situation of either a tray or a packed column. For these reasons, over-design and under-design are avoided, there is no need for efficiencies and HETPs, and the column is designed more realistically than with EQ modeling, thereby reducing capital and operating costs. Although the state-of-the-art rate-based model predicts better than the EQ model, its reliable predictions are limited to mass transfer limited processes.
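The stage-conversion arithmetic mentioned above (overall tray efficiency for trayed columns, HETP for packed columns) can be sketched in a few lines; all numbers here are illustrative assumptions, not values from any column in this paper:

```python
import math

# Converting equilibrium-stage results into real column size via an
# overall tray efficiency (tray column) or an HETP (packed column).
# All numbers are illustrative assumptions.
n_theoretical = 20       # theoretical stages from the EQ model [-]
tray_efficiency = 0.7    # overall tray efficiency [-]
hetp = 0.5               # height equivalent of a theoretical plate [m]

real_trays = math.ceil(n_theoretical / tray_efficiency)  # rounded up to whole trays
packed_height = n_theoretical * hetp                     # required packing height [m]
```

For these assumed values, 20 theoretical stages translate into 29 real trays or 10 m of packing.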
The rate-based model is not sufficient to represent a slow reaction process that is kinetically controlled. This shortcoming is due to not taking the axial dispersion into account in the model, which results in neglecting the effect of liquid back mixing on the whole process. The liquid back mixing is very important for accurately predicting the end product composition in a slow reaction process, as the end product composition strongly influences the physical and chemical properties of the product. Kinetically controlled processes are often encountered in the specialty chemicals sector, and they are often operated batch-wise due to the small-scale production requirement. The best examples of kinetically controlled processes encountered in the specialty chemicals sector are fatty acid, fatty acid nitrile and polyester synthesis. In order to apply reactive distillation (RD) technology in the specialty chemicals sector, the reactive distillation column must be designed in such a way that several products can be produced in the same column, and so that undesired product formation is avoided or minimized while switching from one product to another. The undesired product formation is minimized by reducing the back mixing in the system. This clearly suggests the necessity of incorporating the liquid back mixing in the model in order to investigate a multi-product RD process. In this work, the dynamic rate-based model is extended to account for the liquid back mixing. We also show how the axial dispersion is introduced into the RD model without adopting the axial dispersion model. The extended model is simulated in steady state mode to predict the process characteristics, and in dynamic mode to predict the influence of back mixing on the product change-over. We also compared the results of the rate-based model with and without the axial dispersion.
2. Model development The dynamic rate-based model is extended to account for the liquid back mixing by considering each stage as a stirred tank reactor. The extended model accounts for convection, mass transfer, reaction and axial dispersion. The liquid phase balance is discussed in detail in order to show explicitly how the axial dispersion is introduced into the RD model. The description of the vapor phase, the mass transfer between the phases and the reactions in the liquid phase remains the same as in rate-based modeling. The model consists of n stages in series, and each stage is considered a stirred tank reactor, as shown in Figure 1. Since a sufficiently large number of stirred tank reactors in series approaches plug flow behavior, the liquid phase balance can be represented by a plug flow reactor (PFR) model composed of a partial differential equation (PDE):
Figure 1. Schematic view of a RD column and the corresponding stage balance
\[ \frac{\partial C_i}{\partial t} = -v\,\frac{\partial C_i}{\partial z} + D_{ax}\,\frac{\partial^2 C_i}{\partial z^2} + R_i + \dot{M}_i \qquad (1) \]
where C_i is the concentration of component i (mol/kg), t is the time (s), v is the linear flow velocity (m/s), z is the position coordinate down the length of the column (m), D_ax is the axial dispersion coefficient (m2/s), R_i is the reaction rate of component i (mol/kg/s) and the flux term \( \dot{M}_i \) is the mass transfer flux (mol/kg/s). By discretization of the spatial derivatives of eq. (1), the liquid phase balance is represented as an ordinary differential equation (ODE):
\[ \frac{dC_{i,j}}{dt} = -v\,\frac{C_{i,j} - C_{i,j-1}}{\Delta z} + D_{ax}\,\frac{C_{i,j+1} - 2C_{i,j} + C_{i,j-1}}{\Delta z^2} + R_{i,j} + \dot{M}_{i,j} \qquad (2) \]
where C_{i,j-1}, C_{i,j} and C_{i,j+1} are the concentrations of component i at stages j-1, j and j+1, respectively, and Δz is the height of a stage. In order to compare eq. (2) to the traditional liquid phase balance of the rate-based model, eq. (2) is reformulated by substituting C_i = n_i/M and Δz = h on the left and right sides, respectively:
\[ \frac{dn^L_{i,j}}{dt} = L_{j-1}\,C_{i,j-1} - L_j\,C_{i,j} + \frac{D_{ax}\,M_j}{h^2}\left(C_{i,j+1} - 2C_{i,j} + C_{i,j-1}\right) + M_j\,R_{i,j} + N^L_{i,j} + F^L_j\,C_{f i,j} \qquad (3) \]
where n^L_{i,j} is the number of moles of component i on stage j, L_{j-1} and L_j are the liquid flow rates (kg/s) on stages j-1 and j, respectively, M_j is the hold-up on stage j (kg), N^L_{i,j} is the mass transfer rate (mol/s), F^L_j is the liquid feed flow rate (kg/hr) and C_{f i,j} is the liquid feed concentration (mol/kg). The Dirichlet and Neumann boundary conditions [2] are applied to solve eq. (3) for the top stage (j=1) and for the bottom stage (j=J), respectively. Taylor et al. [3] introduced a side draw from each stage in the rate-based model for a coupled RD and side-reactor process. We used this concept to introduce the concentration of component i, C_{i,j+1}, from stage j+1 into stage j by using the side draw (S_{j+1}) from the bottom stage (j+1) to the subsequent top stage (j). Since eq. (3) accounts for convection, dispersion, reaction and mass transfer, it represents the complete liquid phase component material balance for kinetically controlled processes. The total material balance for the liquid phase is given by:
\[ \frac{dn^L_j}{dt} = L_{j-1} - L_j + S_{j+1} + M_j \sum_{i=1}^{n} R_{i,j} + \sum_{i=1}^{n} N^L_{i,j} + F^L_j \qquad (4) \]
The component and total material balances for vapor phase are described by eq. (5) and (6), respectively:
\[ \frac{dn^V_{i,j}}{dt} = V_{j+1}\,y_{i,j+1} - V_j\,y_{i,j} - N^V_{i,j} + F^V_j\,y_{f i,j} \qquad (5) \]
\[ \frac{dn^V_j}{dt} = V_{j+1} - V_j - \sum_{i=1}^{n} N^V_{i,j} + F^V_j \qquad (6) \]
The vapor-liquid equilibrium at the interface is represented by:
\[ y_{i,j}\,P_j = x_{i,j}\,\gamma_{i,j}(x_{i,j}, T_j)\,P^{sat}_{i,j} \qquad (7) \]
where P_j is the total pressure (bar), γ_{i,j} is the activity coefficient of component i as a function of x_{i,j} and T_j (K), and P^{sat}_{i,j} is the vapor pressure of pure component i. The mass transfer rates at the interface are represented by eqs. (8), (9) and (10):
\[ N^V_{i,j} = N^L_{i,j} \qquad (8) \]
\[ N^L_{i,j} = k_l\,a\,(C^*_{i,j} - C_{i,j}) \qquad (9) \]
\[ N^V_{i,j} = k_g\,a\,(y_{i,j} - y^*_{i,j}) \qquad (10) \]
where k_l and k_g are the liquid- and vapor-side mass transfer coefficients (kg/s/m2), respectively, a is the interfacial area (m2), and C^*_{i,j} and y^*_{i,j} are the liquid- and vapor-side equilibrium concentrations of component i, respectively.
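The balances above translate directly into a method-of-lines implementation. The sketch below assembles the liquid-phase right-hand side of eq. (2), with the mass-transfer term taken from the liquid-film expression of eq. (9); all parameter values are illustrative assumptions, and the reaction rates are left as a user-supplied placeholder:

```python
import numpy as np

def liquid_rhs(C, C_eq, v=0.05, Dax=1e-4, dz=0.5, kl_a=0.01, R=None):
    """Method-of-lines right-hand side of eq. (2) for the liquid phase:
    upwind convection, central-difference axial dispersion, reaction R,
    and a mass-transfer term kl*a*(C* - C) as in eq. (9).
    Default parameter values are illustrative, not fitted data."""
    n = len(C)
    R = np.zeros(n) if R is None else R
    dCdt = np.zeros(n)
    for j in range(n):
        up = C[j - 1] if j > 0 else C[0]       # top stage (j=1): Dirichlet-type
        dn = C[j + 1] if j < n - 1 else C[j]   # bottom stage (j=J): zero gradient
        dCdt[j] = (-v * (C[j] - up) / dz
                   + Dax * (dn - 2.0 * C[j] + up) / dz**2
                   + R[j]
                   + kl_a * (C_eq[j] - C[j]))
    return dCdt

# A uniform profile that is already at interfacial equilibrium is stationary:
C = np.full(10, 2.0)
rates = liquid_rhs(C, C_eq=C.copy())
```

The stationarity check mirrors the physical expectation: with no concentration gradients, no reaction and no interfacial driving force, every stage balance evaluates to zero.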
3. Results and discussion A reactive distillation model is developed to describe a kinetically controlled reactive distillation process; this model can also be used to simulate a multi-product reactive distillation column. In this work, Aspen Custom Modeler is used as a powerful CAPE tool to extend the traditional rate-based model so that the complete model accounts for the effect of axial dispersion on the process. The simulations of a kinetically controlled process show that the axial dispersion has a significant influence on the conversion (as illustrated by Figure 2). When the axial dispersion is neglected, a conversion of 96% is predicted. However, the conversion drops to 90% as the axial dispersion coefficient increases. This clearly shows that, due to the low conversion of the reactant in a highly back-mixed system, the end product composition is also significantly altered. In order to achieve a 96% conversion in a highly dispersed system, more stages are required than in a low-dispersed system, as shown in Table 1. This important difference shows that axial dispersion should be included in the RD model in order to analyze kinetically controlled processes reliably. As the column internals significantly influence the axial dispersion, it is also necessary to select internals with low axial dispersion in order to properly design a kinetically controlled process. Undesired products are often formed during product change-over in a multi-product continuous RD column. The dynamic simulation of product change-over in a multi-product continuous RD column is shown in Figure 3 for various axial dispersions.
Table 1: Number of stages required in series to get a conversion of 96%
Dax [m2/s]    Stages [-]
0             20
0.0002        25
0.014         33
0.028         44
Figure 2. Influence of axial dispersion on conversion
Figure 3. Dynamic profile of product change-over
Because different conversions are achieved with varying axial dispersion coefficients (as illustrated in Figure 2), the steady-state composition of product P2 differs between the cases. Notably, during the product change-over, the transition time from the steady-state composition of product P1 to that of product P2 is significantly longer in a highly dispersed system than in a low-dispersed system. This results in more undesired product formation in the highly dispersed system. The dynamic simulations show that the formation of such undesired product during product transitions in a highly dispersed system is at least 1.5 times higher than in a low-dispersed system. This demonstrates that an RD model without axial dispersion is not sufficient to represent the reality of the process. The extended rate-based model improves the predictive capability for slow (kinetically controlled) reaction processes compared to conventional RD models, and it also provides a better platform to analyze multi-product RD processes.
4. Conclusions The extended RD model proposed in this work represents a kinetically controlled RD process more accurately than the conventional rate-based model. The extended model can also be used to describe a multi-product RD column realistically. The simulation results demonstrate that axial dispersion significantly influences the kinetically controlled process and must not be neglected. The extended model predicts that the conversion is reduced in a highly dispersed system compared to a low-dispersed system, and that at least 1.5 times more undesired product is formed in the highly dispersed system.
Acknowledgment The financial support from the Dutch Separation Technology Institute (project SC-00005) and the industrial partners (DSM, AkzoNobel) is gratefully acknowledged.
References
1. R. Bauer; R. Taylor; R. Krishna, 2000, Chemical Engineering Journal, 76, 33-47
2. H. Fogler, Elements of Chemical Reaction Engineering, 2nd ed., Prentice Hall (1992)
3. R. Taylor; R. Krishna, 2000, Chemical Engineering Science, 55, 5183-5229
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Application of computer-aided multi-scale modelling framework – Aerosol case study Martina Heitziga, Christopher Gregsonb, Gürkan Sina, Rafiqul Gania a
CAPEC, Department of Chemical & Biochemical Engineering, Technical University of Denmark, Søltofts Plads, Bld. 227, 2800 Kgs. Lyngby, Denmark b Firmenich Inc., 250 Plainsboro Road, Plainsboro, NJ, 08536, USA
Abstract A computer-aided modelling tool for efficient multi-scale modelling has been developed and is applied to solve a multi-scale modelling problem related to design and evaluation of fragrance aerosol products. The developed modelling scenario spans three length scales and describes how droplets of different sizes are formed when a liquid fragrance product is sprayed from a pressurized can and how these droplets evaporate while they settle down due to sedimentation and convective mixing. Keywords: multi-scale modelling, modelling framework, aerosols
1. Introduction A computer-aided multi-scale modelling framework has been developed with the goal of increasing the efficiency of the work-flows involved in model development and application [1]. This is achieved by designing the structure of the framework such that it can handle the work-flows and data-flows associated with different modelling tasks, combining state-of-the-art modelling techniques for the different work-flow steps, as well as supporting model documentation and model reuse. In this paper, a case study related to a multi-scale modelling problem (four different length scales) of an aerosol system is presented in order to highlight the main features of the modelling framework. The modelling scenario describes the spraying of a liquid fragrance product from a pressurized can, which causes droplet formation. Furthermore, the fate of these droplets due to evaporation and transport has been considered. The work-flow to solve this and similar multi-scale problems has been developed and incorporated in the modelling framework. The developed models have been implemented in the model library of the framework so that they are available for application in other modelling projects. Section 2 introduces the developed work-flow for multi-scale model development, while Section 3 describes a case study used to highlight the work-flow and the application of the modelling framework.
2. Work-flow in multi-scale modelling problems The work-flow of the framework supporting the development of multi-scale models, such as the spraying of an aerosol, is summarized as: 1) Modelling objective and system description; 2) Identification of required models and model types (with respect to the modelling objective); 3) Development, analysis, identification and validation of models; 4) Linking of the models involved and solution strategy; 5) Evaluation of model performance and results. In step 1 the modelling objective is defined and available information on the system is collected. In step 2 the different elements of the system are identified, and for each element it is investigated how it can be modelled. The work-flow
identifies the need for multiple time and/or length scales for the models based on the model assumptions, the phenomena considered and the desired model outputs. Once the models have been developed (step 3), it is evaluated how they should be linked and whether they are to be solved sequentially or simultaneously (step 4).
3. Case study 3.1. Modelling objective and system description The modelling objective is to describe the spraying process of a fragrance product (for example fine fragrance, air freshener) so that the product qualities can be evaluated. The system under investigation is depicted in Figure 1. A pressurized liquid mixture of active ingredients, solvents, additives and propellants is released from a can to the surrounding atmosphere. The compounds are limonene (fragrance) and ethanol (carrier). During the release process a part of the liquid evaporates while the remaining liquid forms droplets of different sizes which account for the fragrance delivery. The generated droplets move downwards due to sedimentation and convective mixing as fragrance chemicals evaporate. Consequently, the modelling objectives of the system can be divided into two parts (see Figure 1): Part I: Describe the spraying of the compressed liquid from a can to the atmosphere and predict the ratio of released vapour to liquid as well as the size distribution of the formed droplets and their temperature. Part II: Model the fate of the droplets as they settle down and evaporate.
Figure 1. Spraying process (Vsed: sedimentation velocity; \( \dot{m}_i^{evap}(t,h) \): evaporation mass flow of compound i)
3.2. Identification of required models and model types Part I: The ratio of liquid to vapour released, as well as the temperature and composition of the droplets, is predicted by an adiabatic flash model. Based on these results the droplet size distribution is determined by an experimentally regressed correlation (alternatively, a normal distribution may also be assumed). Part II: In order to describe the fate of the droplets, models are needed for the transport process as well as the evaporation, together with appropriate constitutive models for the different properties. 3.3. Development, analysis, identification and validation of models Because of page limits, only the modelling of Part II is presented in this paper. 3.3.1. Transport of droplets: The transport model of the droplets in the atmosphere (W. Koch, SprayExpo Program Description, Toxikologie und Experimentelle Medizin, Fraunhofer Institut, Hannover, Germany) has been adopted here. It considers the
transport due to sedimentation as well as convective mixing by eddy diffusion. The sedimentation is modelled based on Stokes' friction law. Coalescence between droplets and transport in horizontal directions are neglected. The droplets are assumed to be spherical. The corresponding model equations are given below:
\[ A = \frac{\rho_{mix}\,g}{18\,\eta_{air}} \qquad (1) \]
\[ \frac{\partial N_{Dr}}{\partial t} = K_{eddy}\,\frac{\partial^2 N_{Dr}}{\partial h^2} - A\,D_{Dr}^2\,\frac{\partial N_{Dr}}{\partial h} \qquad (2) \]
3.3.2. Droplet evaporation: For the droplet evaporation the model from [2] has been modified by adding a dynamic energy balance. Important model assumptions are: the droplet is spherical, ideally mixed and consists only of a binary mixture; VLE is established at the droplet surface; convection and thermal diffusion are neglected; the gas phase is ideal; the temperature profile around the droplet is given by a zeroth order approximation; the temperature of the surrounding gas phase T_air is constant; and the concentration of the droplet compounds y_{air,i} far away from the droplet is zero. The corresponding model equations are:
\[ T_s = \frac{\left(m_{mix}\,c_{p,mix}\,T_s\right)}{\sum_{i=1}^{Ncomp} m_i\,c_{p,i}} \qquad (3) \]
where the numerator is the energy state variable integrated in eq. (14),
\[ n_i = \frac{m_i}{MW_i}, \qquad x_i = \frac{n_i}{\sum_{j=1}^{Ncomp} n_j}, \qquad i = 1, \ldots, Ncomp \qquad (4),\,(5) \]
\[ V_{Dr} = \sum_{i=1}^{Ncomp} \frac{m_i}{\rho_i}, \qquad D_{Dr} = \left(\frac{6\,V_{Dr}}{\pi}\right)^{1/3} \qquad (6),\,(7) \]
\[ m_{mix} = \sum_{i=1}^{Ncomp} m_i, \qquad w_i = \frac{m_i}{m_{mix}}, \qquad \rho_{mix} = \left(\sum_{i=1}^{Ncomp} \frac{w_i}{\rho_i}\right)^{-1} \qquad (8),\,(9),\,(10) \]
\[ R_{v,i} = \exp\left\{\frac{4\,MW_i\,\sigma_{mix}}{\rho_{mix}\,R\,T_s\,D_{Dr}}\right\}, \qquad P_{s,i} = P^{sat}_i\,x_i\,\gamma_i\,R_{v,i} \qquad (11),\,(12) \]
\[ \frac{dm_i}{dt} = \frac{2\pi\,D_{Dr}\,MW_i\,D_i\,P}{R\,T_{air}}\,\ln\left\{\frac{1 - P_{s,i}/P}{1 - y_{air,i}}\right\} \qquad (13) \]
\[ \frac{d\left(m_{mix}\,c_{p,mix}\,T_s\right)}{dt} = 2\pi\,D_{Dr}\,K_{air}\,(T_{air} - T_s) + \pi\,D_{Dr}^2\,\Gamma\,(T_{air}^4 - T_s^4) + \sum_{i=1}^{Ncomp} L_i\,\frac{dm_i}{dt} \qquad (14) \]
Here, T_s is the droplet temperature, D_Dr the droplet diameter and m_i the mass of compound i inside the droplet. The model has been successfully validated using data by [3] for evaporating water droplets. 3.3.3. Constitutive models: Correlations for the pure component properties with respect to changing temperature are taken from the ICAS database, which is linked to the modelling framework. The required properties are: liquid heat capacity c_{p,i}, liquid density ρ_i, vapor diffusion coefficient in air D_i, surface tension σ_i, heat of vaporization L_i and vapour pressure P_i^sat for all system compounds, and the thermal conductivity of air K_air. For the liquid phase activity coefficients γ_i the UNIQUAC model has been applied. The UNIQUAC parameters have been regressed from experimental data for the ethanol-limonene system by [4].
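A stripped-down version of the evaporation flux of eq. (13) already reproduces the expected short droplet lifetimes. The sketch below integrates a single-component droplet at a fixed temperature, with the Kelvin and activity corrections dropped and y far away set to zero; all property values are illustrative (a water-like droplet, not the paper's ethanol/limonene system):

```python
import math

# Illustrative constants (water-like droplet near 293 K; assumed values)
R = 8.314        # gas constant [J/mol/K]
MW = 0.018       # molar mass [kg/mol]
D_air = 2.4e-5   # vapor diffusion coefficient in air [m2/s]
P = 101325.0     # total pressure [Pa]
P_s = 2300.0     # vapor pressure at the droplet surface [Pa]
rho = 1000.0     # liquid density [kg/m3]
T_air = 293.0    # surrounding gas temperature [K]

def dm_dt(D_dr):
    """Evaporation mass flow of the single compound, in the spirit of
    eq. (13) (no Kelvin/activity corrections, y far away = 0)."""
    return (2.0 * math.pi * D_dr * MW * D_air * P / (R * T_air)
            * math.log(1.0 - P_s / P))   # negative: mass leaves the droplet

# Explicit Euler integration of the droplet mass until it has evaporated
m = rho * math.pi / 6.0 * (10e-6) ** 3   # initial mass of a 10 um droplet [kg]
t, dt = 0.0, 1e-4
while m > 0.0:
    D_dr = (6.0 * m / (math.pi * rho)) ** (1.0 / 3.0)  # diameter from mass, cf. eqs. (6)-(7)
    m += dm_dt(D_dr) * dt
    t += dt
```

For these assumed values the computed lifetime t comes out on the order of a few hundredths of a second, and the classical d-squared evaporation behavior (diameter squared decreasing linearly in time) emerges from the flux being proportional to the diameter.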
3.4. Linking of models involved and solution strategy: The developed models for the spraying process span four different length scales. The models of Part I, that is the adiabatic flash model and the droplet size distribution model, are at the macro scale. In order to describe the fate of the droplets (Part II), two size scales have been employed. On the meso scale (Eqs. 1-2) the transport of one droplet size fraction is considered, while the micro scale (Eqs. 3-14) describes the evaporation of a single droplet and the required properties are calculated on the nano scale with respect to temperature and composition. Figure 2 shows the linking scheme of the different models together with the data-flow and sketches of the modelled system on the different scales. The macro
scale models are solved sequentially. Due to the data-flow requirements between the size scales, the remaining lower scale models must be solved simultaneously. The lower scale models need to be solved for each discrete droplet size fraction j in the macro scale (Ndis times). The droplet temperature Ts, the compound masses in the droplet mi(j) and the number of droplets NDr(j) are communicated from the macro scale to the lower scale models, where they are used as initial values. In order to solve the meso scale model, the partial differential equation (eq. 2) is discretized in the vertical direction h. This is done automatically by the modelling framework based on user specifications (method of lines, hmin = 0.4 m, hmax = 2.7 m, 184 discretization points). The height at which the droplets are generated is 1.6 m. The results communicated back to the macro scale are the number of droplets at each discrete height, NDr(j)(h,t), the mass of compound i inside the droplet, mi(j)(t), and the mass flow evaporating from the droplet, mi,evaporating(t), all with respect to time. After solving the lower scale models for each discrete droplet size fraction j, the macro scale aggregates the results. Figure 2 also shows the output variables of each model in the linking scheme.
Figure 2. Linking scheme for spraying and fate of aerosol
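The meso-scale height discretization just described can be sketched as follows; the grid (reading hmin/hmax as the vertical domain limits), K_eddy and the material properties behind the sedimentation parameter are assumptions for illustration only:

```python
import numpy as np

# Meso-scale droplet transport (eddy diffusion + Stokes sedimentation),
# discretized over height by the method of lines. All numbers below are
# illustrative assumptions, not the paper's calibrated values.
rho_mix = 1000.0          # droplet density [kg/m3] (assumed)
g = 9.81                  # gravitational acceleration [m/s2]
eta_air = 1.8e-5          # air viscosity [Pa*s] (assumed)
A = rho_mix * g / (18.0 * eta_air)   # sedimentation parameter, cf. eq. (1)
D_dr = 34e-6              # droplet diameter [m]
v_sed = A * D_dr ** 2     # Stokes settling velocity [m/s]

K_eddy = 5e-3             # eddy diffusion coefficient [m2/s] (assumed)
h = np.linspace(0.4, 2.7, 184)       # vertical grid [m], 184 points
dh = h[1] - h[0]

def transport_rhs(N):
    """Method-of-lines RHS of eq. (2): central-difference diffusion plus
    upwind sedimentation, with h measured along the settling direction
    (an assumption). Boundary nodes are left untreated in this sketch."""
    dN = np.zeros_like(N)
    dN[1:-1] = (K_eddy * (N[2:] - 2.0 * N[1:-1] + N[:-2]) / dh**2
                - v_sed * (N[1:-1] - N[:-2]) / dh)
    return dN

# Initial condition: droplets released at 1.6 m
N0 = np.zeros_like(h)
N0[np.argmin(np.abs(h - 1.6))] = 1.0
```

For the assumed properties a 34 um droplet settles at roughly 3.5 cm/s, and a spatially uniform droplet-number profile gives a zero right-hand side at all interior nodes, as it should.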
3.5. Evaluation of results Simulations have been conducted for a total number of 1.02×10^10 droplets having a droplet size distribution of 22 discrete diameters between 1.3 and 34 µm. Initially, all droplets had a composition of 5 vol% limonene and 95 vol% ethanol. The micro, meso and macro scale results are highlighted in Figures 3, 4 and 5.
4. Conclusions The computer-aided modelling framework provides important features for the solution of the presented case study and similar multi-scale problems. It is structured based on the work-flows the modeller needs to follow, not only for model development, but also for model analysis, identification, validation and application for simulation and optimization. At each step of the work-flows the modelling framework supports the modeller by providing expertise as well as the required computer-aided tools, library and database connections. In that way, the process of model development is
systematized and becomes more efficient. As regards the case study, the presented aerosol model allows the evaluation of product attributes such as how much vapour and of which composition is released at which height and at what time; how fast droplets settle down and when they disappear due to evaporation. This allows the product designer to simulate different scenarios (starting composition, spray-can pressure, release devices, etc.) and design the appropriate product with the corresponding delivery device. The developed models are also included in the modelling framework library and are ready for reuse for a different application context. If during application the available model needs to be further extended, the modelling framework provides strategies for this. The aerosol model can certainly be extended.
Figure 3. Micro scale results. Left: droplet composition during evaporation (34 µm droplet); Right: lifetimes of droplets for all 22 discrete diameters.
Figure 4. Meso scale results. Left: location of droplets at droplet lifetime for different discrete diameters; Right: number of 34 µm droplets in different height fractions vs. time.
Figure 5. Macro scale results. Left: Total mass of limonene released (by all droplets) vs. time, Right: Total mass flow of ethanol and limonene at different heights vs. time.
References
[1] M. Heitzig, G. Sin, R. Gani, 2010, A Computer-Aided Framework for Modelling and Identification, submitted to Computers & Chemical Engineering
[2] J. Kukkonen, T. Vesala, M. Kulmala, 1989, J. Aerosol Sci., 20, 7, 749-763
[3] W.E. Ranz, W.R. Marshall Jr., 1952, Chem. Eng. Prog., 48, 4, 173-180
[4] A. Cháfer, R. Muñoz, M.C. Burguet, A. Berna, 2004, Fluid Phase Equilib., 224, 251-256
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Sensitivity of shrinkage and collapse functions involved in pore formation during drying Seddik Khalloufia, Cristhian Almeida-Riveraa, Jo Jansena, Marcel Van-Der-Vaarta, and Peter Bongersa,b a
Unilever R&D Vlaardingen, Structured Materials and Process Science Department, 3130 AT Vlaardingen. The Netherlands. Tel. +31 10 460 8501, Fax. +31 10 460 5025,
[email protected] b
Chemical Engineering and Chemistry, Eindhoven University of Technology, Eindhoven, The Netherlands
Abstract The pore formation during drying is controlled by two mechanisms, which are represented by two functions. These functions are expected to be universal, i.e. recurrent and applicable to the relevant physical properties of products during drying. This contribution aims at studying the sensitivity of the shrinkage and collapse functions in predicting the porosity as a function of moisture content. A set of experimental data on air-drying of carrot, from an independent research group, was used to evaluate the sensitivity of these two functions. The results of this analysis showed that (i) at high moisture content, the porosity is not sensitive to the shrinkage function, whatever the value of the collapse function; (ii) in the case of air drying and at low moisture content, the porosity can be strongly affected by the shrinkage function; and (iii) the collapse function has a strong effect on porosity in products with a high initial volume of air. These findings are reported here for the first time, and the approach used in this contribution could be very relevant for assessing other parameters involved in drying processes, such as bulk density and the shrinkage coefficient. Keywords: Porosity, drying, sensitivity, theoretical model, shrinkage, collapse.
1. Introduction Drying is one of the major food processing technologies used to preserve and to increase the shelf-life of food products. Indeed, dried products are characterized by a low water activity, which inhibits microbial growth and undesirable enzymatic reactions (Mayor et al., 2004). In addition, drying facilitates handling, storage and transport of products without involving expensive cooling systems. During drying, food products undergo deformations that can be characterized by changes in volume, shape, porosity, density, shrinkage and/or collapse phenomena. These modifications are of extreme importance in terms of product quality and
characterization of mass and heat transfer phenomena. Optimizing these phenomena while taking into account the quality of the output product and the cost of processing is a requirement for the development and continuity of drying technologies. Several mathematical expressions have been suggested to predict the porosity as a function of moisture content. Recently, a new theoretical model was proposed (Khalloufi et al., 2009). One of the advantages of this model is its ability to capture, with high accuracy, the porosity profiles regardless of the products and/or the technology used. Furthermore, the model involves two physical phenomena, namely shrinkage and collapse, that can be used to understand the mechanisms behind pore formation. However, no sensitivity study of the shrinkage and/or collapse functions in predicting the porosity has been reported so far, and this is the main aim of this contribution.
2. Theoretical background of the model The main steps which were taken to build this new theoretical model are illustrated in Table 1 (Khalloufi et al., 2009). We proposed a mechanism-driven description of the porosity changes during drying in terms of shrinkage and collapse phenomena. The first phenomenon is related to the amount of water removed and replaced by air, which is represented by the shrinkage function. The second phenomenon refers to the variation of the air initially existing within the product, which is represented by the collapse function.

Table 1: Main steps for building the new theoretical model (Khalloufi et al., 2009)

Definition — Mathematical expression
Porosity: $\varepsilon(X) = \dfrac{V_a(X)}{V_a(X) + V_w(X) + V_s}$
Volume of initial air: $V_{a_0} = m_s \dfrac{\varepsilon_0}{1-\varepsilon_0}\left[\dfrac{X_0}{\rho_w} + \dfrac{1}{\rho_s}\right]$
Variation of the initial air: $V_{a/a}(X) = \delta(X)\, m_s \dfrac{\varepsilon_0}{1-\varepsilon_0}\left[\dfrac{X_0}{\rho_w} + \dfrac{1}{\rho_s}\right]$
Volume of air replacing water removed: $V_{w/a}(X) = \phi(X)\, m_s \dfrac{X_0 - X}{\rho_w}$
Total volume of air: $V_a(X) = \phi(X)\, m_s \dfrac{X_0 - X}{\rho_w} + \delta(X)\, m_s \dfrac{\varepsilon_0}{1-\varepsilon_0}\left[\dfrac{X_0}{\rho_w} + \dfrac{1}{\rho_s}\right]$
Volume of the remaining water: $V_w = m_s \dfrac{X}{\rho_w}$
Volume of the solid fraction: $V_s = \dfrac{m_s}{\rho_s}$
Final expression of porosity: $\varepsilon(X) = \dfrac{\delta(X)\,\varepsilon_0\,[1+\beta X_0] + \phi(X)\,\beta\,[1-\varepsilon_0]\,[X_0 - X]}{\delta(X)\,\varepsilon_0\,[1+\beta X_0] + \phi(X)\,\beta\,[1-\varepsilon_0]\,[X_0 - X] + [1-\varepsilon_0]\,[1+\beta X]}$
Shrinkage function: $\phi(X) = r_1 + r_2 X + r_3 X^2$
Collapse function: $\delta(X) = 1 - 0.5\left[1 - \tanh\left[p\,(X - X_c)\right]\right]$
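As a minimal sketch, the three expressions at the bottom of Table 1 translate directly into code (NumPy assumed; the coefficient values used in the usage line below are illustrative, not the fitted values from the paper):

```python
import numpy as np

def shrinkage(X, r1, r2, r3):
    """Shrinkage function phi(X) = r1 + r2*X + r3*X**2 (Table 1)."""
    return r1 + r2 * X + r3 * X**2

def collapse(X, p, Xc):
    """Collapse function delta(X) = 1 - 0.5*(1 - tanh(p*(X - Xc))) (Table 1)."""
    return 1.0 - 0.5 * (1.0 - np.tanh(p * (X - Xc)))

def porosity(X, X0, eps0, beta, phi, delta):
    """Final porosity expression of Table 1, with beta = rho_s / rho_w."""
    num = delta * eps0 * (1.0 + beta * X0) + phi * beta * (1.0 - eps0) * (X0 - X)
    return num / (num + (1.0 - eps0) * (1.0 + beta * X))

# Illustrative evaluation at one moisture content (all parameter values assumed):
eps = porosity(X=2.0, X0=9.0, eps0=0.12, beta=1.5,
               phi=shrinkage(2.0, 0.02, 0.01, 0.0),
               delta=collapse(2.0, 5.0, 0.5))
```

A quick sanity check of the final expression: at X = X0 with δ = 1 it reduces to ε0, i.e. the porosity at the initial moisture content equals the initial porosity.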
3. Sensitivity study approach To perform the sensitivity study of the shrinkage and collapse functions in predicting the porosity, two maximum boundaries of ±10% and ±50% around both the nominal shrinkage and collapse functions were studied. In order to preserve the physical meaning of these functions, the following constraints were always respected:
If $\delta(X) < 0$ then $\delta(X) = 0$; if $\delta(X) > 1$ then $\delta(X) = 1$.
If $\phi(X) < 0$ then $\phi(X) = 0$; if $\phi(X) > 1$ then $\phi(X) = 1$.
At each moisture content, 20 (10 upper and 10 lower) random values of the nominal shrinkage and collapse functions were calculated within each boundary. The porosity was then obtained at each moisture content by a random combination of the values of the shrinkage and collapse functions. Thus, a cloud of points around the nominal value of the porosity was generated. A variation of ±10% from the nominal values of the porosity was used to discuss the sensitivity of the shrinkage and collapse functions. To explore the accuracy and level of prediction of the proposed model, we formulated it as a constrained optimization problem (Khalloufi et al., 2009). The model was implemented in Matlab (R2007b, The MathWorks Inc., USA) using the fmincon function of the Optimization Toolbox.
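The sampling procedure can be sketched as follows (a simplified Monte-Carlo variant: uniform perturbations rather than the paper's 10 upper/10 lower values; the nominal values and parameters in the usage line are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def clip01(f):
    # Constraints of Section 3: keep both functions within [0, 1].
    return np.clip(f, 0.0, 1.0)

def porosity(X, X0, eps0, beta, phi, delta):
    # Final porosity expression of Table 1, beta = rho_s / rho_w.
    num = delta * eps0 * (1 + beta * X0) + phi * beta * (1 - eps0) * (X0 - X)
    return num / (num + (1 - eps0) * (1 + beta * X))

def sensitivity_cloud(X, phi_nom, delta_nom, X0, eps0, beta, bound=0.10, n=20):
    """At one moisture content, perturb the nominal shrinkage and collapse
    values by up to +/-bound, clip them to [0, 1], and return the cloud of
    resulting porosity values."""
    phi = clip01(phi_nom * (1 + rng.uniform(-bound, bound, n)))
    delta = clip01(delta_nom * (1 + rng.uniform(-bound, bound, n)))
    return porosity(X, X0, eps0, beta, phi, delta)

# Illustrative call (nominal phi/delta and physical parameters assumed):
cloud = sensitivity_cloud(X=2.0, phi_nom=0.08, delta_nom=0.9,
                          X0=9.0, eps0=0.12, beta=1.5)
```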
4. Results and discussion A set of experimental data on air-dried carrot reported by Lozano et al. (1980) was used to perform this sensitivity study. This choice aimed at covering a special porosity profile characterised by an inversion point. Figure 1 depicts the porosity as a function of moisture content; as already demonstrated previously (Khalloufi et al., 2009), the theoretical model shows very good agreement with the experimental data.
Figure 1. Porosity as a function of moisture content: ε(X). Empty circles are the experimental data (published by Lozano et al. (1980)). Lines are the results of the simulations obtained with the present model. Dashed lines represent the limits of ±10% of the porosity from the experimental data. Cloudy data (symbols) are the result of the variation of the shrinkage and/or collapse functions (left ±10%, right ±50% from the nominal values).
Figures 2 and 3 show the shrinkage and collapse functions, respectively, within the two different levels of variation from the nominal values. The variation of the shrinkage and collapse functions by ±10% or ±50% from their nominal values resulted in the clouds of points around the experimental porosity data (Figure 1).
Figure 2: Shrinkage as a function of moisture content: φ(X). Lines are the nominal values. Cloudy data result from variation in the shrinkage function (left ±10%, right ±50% from the nominal values). Experimental data published by Lozano et al. (1980).
Figure 3: Collapse as a function of moisture content: δ(X). Lines are the nominal values. Cloudy data result from variation in the collapse function (left ±10%, right ±50% from the nominal values). Experimental data published by Lozano et al. (1980).
The variation of ±10% of the shrinkage and/or collapse functions (Figures 2 and 3) results in predictions within ±10% of the experimental values of porosity (Figure 1). This result was confirmed with two other sets of experimental data reported by Krokida et al. (1997) for carrot dried with hot air or freeze-drying (data not shown). However, at a high variation level (±50%) of the shrinkage and/or collapse functions, the errors in the porosity predictions were no longer within ±10% of the experimental values (Figure 1). The significant deviation from the experimental data at the ±50% variation level could be explained by: (i) the high variation of the shrinkage function (between 2% and 18%) in the middle of the drying process (0.05 ≤ X/X0 ≤ 0.95) (Figure 2),
and (ii) the relatively high value of the initial porosity (ε0 ≈ 12%) and thus the significant effect of the collapse function, especially at the beginning of the drying process (Figure 3). According to Figure 1, at a high variation level (±50%) of the shrinkage and/or collapse functions, the porosity can only be underestimated at high moisture content (X/X0 ≥ 0.75). This interesting observation can be explained by: (i) the collapse function (Figure 3) cannot be higher than 1 according to the constraints given above, and (ii) at high moisture content the effect of the shrinkage function on the porosity tends towards zero because of the (X0 − X) term multiplying the shrinkage function.
5. Conclusions The results of this sensitivity study showed that (i) at high moisture content, the porosity is not sensitive to the shrinkage function; (ii) at low moisture content, the porosity can be strongly affected by the shrinkage function, although this effect depends on the initial porosity; and (iii) the collapse function has a strong effect on the porosity for products with a high volume of initial air. These findings are reported here for the first time, and the approach used in this contribution could be very relevant for assessing other parameters involved in drying processes, such as bulk density, shrinkage coefficient and/or thermal conductivity.
Nomenclature
ms: mass of solid (kg)
p: coefficient involved in collapse function
r1, r2, r3: coefficients involved in shrinkage function
X: water content (kg of water per kg of dry material)
X0: initial water content (kg of water per kg of dry material)
Xc: coefficient involved in collapse function
δ(X): collapse function as a function of moisture content (dimensionless)
β: density ratio (ρs/ρw) (dimensionless)
ε0: initial porosity (dimensionless)
ρs: solid density (kg/m3)
ρw: water density (kg/m3)
φ(X): shrinkage function as a function of moisture content (dimensionless)
References
J. Lozano, E. Rotstein, and M. Urbicain (1980). Total porosity and open-pore porosity in the drying of fruits. Journal of Food Science, 45 (5), 1403-1407.
L. Mayor and A. Sereno (2004). Modelling shrinkage during convective drying of food materials: a review. Journal of Food Engineering, 61 (3), 373-386.
M. Krokida and Z. Maroulis (1997). Effect of drying method on shrinkage and porosity. Drying Technology, 15 (10), 2441-2458.
S. Khalloufi, C. Almeida-Rivera, and P. Bongers (2009). A theoretical model and its experimental validation to predict the porosity as a function of shrinkage and collapse phenomena during drying. Food Research International, 42, 1122-1130.
A REDUCED-ORDER APPROACH OF DISTRIBUTED PARAMETER MODELS USING PROPER ORTHOGONAL DECOMPOSITION M. Valbuenaa, D. Sarabiaa, C. de Pradaa
a
Department of Systems Engineering and Automatic Control, University of Valladolid, C/ Real de Burgos s/n, Valladolid, Spain
Abstract This paper presents a reduced model of a pipeline of a hydrogen network in a petrol refinery, obtained using the Proper Orthogonal Decomposition (POD) method. The original first-principles model is a distributed-parameter one composed of several PDEs. The reduced model provides a good approximation of the main operations performed in the pipeline at a smaller computational cost, allowing its use for advanced controller design. Keywords: Proper orthogonal decomposition, model reduction, partial differential equations, hydrogen network.
1. Introduction Model reduction methods are used to obtain simplified models (in terms of number of equations, variables, etc.) of dynamic systems, while maintaining the main characteristics of the original complex model. These kinds of methods are used for processes described by partial differential equations (PDEs), because spatial discretization greatly increases the complexity and the integration time required to solve them, making it impossible to use the model in real time, for example in model-based predictive control or in on-line optimization. A more detailed description of different numerical approaches for integrating such models is given in [Schilders]. The POD method (Proper Orthogonal Decomposition) has been widely used to compute efficient bases for dynamic systems and to derive low-order models of dynamical systems. It was introduced in the context of the simulation of turbulence by Lumley and is also known as the Karhunen–Loève decomposition and principal component analysis [Kunisch].
2. Proper Orthogonal Decomposition The POD method is based on patterns generated from simulation data or experiments. Suppose the values of a variable $\theta$ along the domain at every time step can be expressed as a linear combination of $K$ patterns:

$\theta(x,t) = a_1(t)\,\varphi_1(x) + a_2(t)\,\varphi_2(x) + \dots + a_K(t)\,\varphi_K(x)$  (1)

where $\theta(x,t)$ is the vector of the variable over the whole spatial domain at time step $t$. This vector contains $K$ elements when the spatial domain is divided into $K$ grid cells. In mathematical terminology, the patterns are denoted by $\{\varphi_i(x)\}_{i=1}^{K}$ and are called the basis functions or the modes. The patterns or basis functions are independent of each other, i.e. they are mutually orthogonal. Suppose the number of patterns can be reduced to only $n$ patterns, such that $\theta(x,t)$ can be expressed as a linear combination of $n$ patterns:

$\theta(x,t) \approx a_1(t)\,\varphi_1(x) + a_2(t)\,\varphi_2(x) + \dots + a_n(t)\,\varphi_n(x)$  (2)

where $n$ is substantially smaller than $K$ in (1). If the process variables can be expressed as a linear combination of very few patterns, then an approximate model of the process variable can be derived by building a model for the first $n$ time-varying coefficients. The coefficients $a_i(t)$ in equation (2) are determined by the inner product

$a_i(t) = \left(\theta(x,t),\, \varphi_i(x)\right)$  (6)

$a_i(t)^2$ being the amount of energy of $\theta(x,t)$ in the direction of $\varphi_i(x)$.
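Numerically, the modes, their energies, and the coefficients of Eqs. (1)–(2) and (6) are commonly obtained from a singular value decomposition of a snapshot matrix. A self-contained NumPy sketch on synthetic rank-two data (not the pipeline model of the next section):

```python
import numpy as np

# Snapshot matrix: columns are theta(x, t_j) over K grid cells (synthetic data).
K, nt = 100, 200
x = np.linspace(0.0, 1.0, K)
t = np.linspace(0.0, 10.0, nt)
snapshots = (np.sin(np.pi * x)[:, None] * np.exp(-0.3 * t)[None, :]
             + 0.2 * np.sin(3 * np.pi * x)[:, None] * np.cos(t)[None, :])

# POD modes are the left singular vectors; squared singular values measure energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n = int(np.searchsorted(energy, 0.9999)) + 1   # modes capturing 99.99 % energy

# Coefficients a_i(t) are inner products of snapshots with the modes, Eq. (6);
# theta is then approximated by the truncated expansion of Eq. (2).
Phi = U[:, :n]
a = Phi.T @ snapshots
theta_approx = Phi @ a
```

Because the synthetic data are an exact sum of two space–time products, two modes capture essentially all the energy and the truncated expansion reconstructs the snapshots to machine precision.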
3. Example 3.1. Pipeline Hydrogen is one of the main products used in modern petrol refineries. It is distributed through a network comprising many pipelines between production centres and consumer plants. Figure 1 represents one of these pipelines. Overall control of the flows in the network is a difficult problem because of the interactions between flows and pressures, so a model-based strategy is required for this purpose. Using a first-principles approach, it is possible to obtain a realistic model involving the main variables in a pipeline [Valbuena].
Figure 1. Pipeline
The example considers the system of Figure 1. At the beginning of the pipeline there is a production unit and at the end there is a valve. The boundary conditions at the production unit are the pressure, temperature and composition; the boundary condition at the valve is the flow. To describe the dynamics of the variables as a function of both time and the longitudinal coordinate of the pipe, we use the global and individual mass balances, the momentum balance and the energy balance, based on a macroscopic description. In addition, density, flow, pressure and temperature are assumed not to vary in the radial direction. The equations of the distributed model of the pipeline are the following ones [Ames]. Equation (7) describes the global mass balance when transport is due only to convection:

$\dfrac{\partial m}{\partial t} + \dfrac{\partial (m v)}{\partial x} = 0$  (7)
Being ݉, ݒ, ݔand ݐthe mass, velocity, longitudinal coordinate and time, respectively. Equations (8) and (9) show the individual mass balance being ܥ the composition of the component ݇ (hydrogen and impurities): ߲ሺ݉ܥ ሻ ߲ሺ݉ܥݒ ሻ ൌͲ ߲ݐ ߲ݔ ܥ ൌ ͳ
(8) (9)
Equation (10) gives the momentum balance, where $f$ and $d$ denote the friction factor and the diameter, respectively:

$\dfrac{\partial (m v)}{\partial t} + \dfrac{\partial (m v^2)}{\partial x} = -\dfrac{f}{2d}\, m\, v\, \lVert v \rVert$  (10)
Equation (11) shows the corresponding energy balance:

$\dfrac{\partial (m T)}{\partial t} + \dfrac{\partial (m T v)}{\partial x} = -U\,(T - T_{ext})$  (11)

where $T$ and $U$ are the temperature and the global heat-transmission coefficient, respectively. Finally, the ideal-gas equation can be used, because the operating pressure and temperature are not high:

$P V = \dfrac{m}{PM}\, R\, T$  (12)

where $V$ is the volume, $PM$ the molecular weight and $R$ the ideal-gas constant. The spatial domain is discretized using finite differences with 100 nodes. Figures 2–4 show the steps applied to the original model (pressure, temperature and composition of hydrogen at the inlet of the pipe) and Figures 5–10 show the results obtained (only some of the 100 nodes dividing the spatial domain are shown).
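A minimal method-of-lines sketch of how a convective balance such as Eq. (7) can be semi-discretized with finite differences (first-order upwind; the constant velocity and the numerical values are purely illustrative, since in the full model the velocity is itself a state variable):

```python
import numpy as np

# Semi-discretization of dm/dt + d(m v)/dx = 0 on N grid cells,
# first-order upwind for v > 0, with an inlet boundary condition.
N, L = 100, 1000.0          # number of nodes, pipe length [m] (assumed values)
dx = L / N
v = 5.0                     # gas velocity [m/s] (assumed constant here)

def dmdt(m, m_in):
    flux = v * np.concatenate(([m_in], m))   # inlet value acts as ghost cell
    return -(flux[1:] - flux[:-1]) / dx

# Explicit Euler time stepping with a CFL-limited step.
dt = 0.5 * dx / v
m = np.ones(N)
for _ in range(1000):
    m = m + dt * dmdt(m, m_in=1.0)
```

With the inlet held at the uniform initial value, the discrete scheme keeps the profile constant, which is a simple consistency check of the discretization.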
Figure 2. Step in the Composition of hydrogen
Figure 3. Step in the Pressure
Figure 4. Step in the Temperature
Figure 5. Evolution of the Pressure
Figure 6. Evolution of the Temperature
Figure 7. Evolution of the Composition of H2
Figure 8. Evolution of the density
Figure 9. Evolution of the velocity
Figure 10. Evolution of the mass flow
Figure 11 shows the energy captured by the first empirical eigenfunctions. The energy captured is calculated as follows, where $\lambda_i$ are the elements of the diagonal of the correlation matrix:

$P_n = \dfrac{\sum_{i=1}^{n} \lambda_i}{\sum_{i=1}^{N} \lambda_i}, \quad n = 1, \dots, N$  (13)
Figure 11. Energy captured by the first empirical eigenfunctions
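Eq. (13) is a cumulative sum over the eigenvalues of the correlation matrix; a small helper (the eigenvalues in the usage line are illustrative, not those of the pipeline model):

```python
import numpy as np

def energy_captured(lams):
    """P_n = sum_{i<=n} lambda_i / sum_i lambda_i, Eq. (13), for n = 1..N,
    with lambda_i the eigenvalues of the snapshot correlation matrix,
    sorted in decreasing order."""
    lams = np.sort(np.asarray(lams, dtype=float))[::-1]
    return np.cumsum(lams) / np.sum(lams)

# Illustrative eigenvalues: the first mode alone captures 80 % of the energy.
P = energy_captured([8.0, 1.0, 0.5, 0.5])   # -> [0.8, 0.9, 0.95, 1.0]
```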
As the figure shows, the first 7 modes are sufficient to represent the process variables. An approximate model can therefore be derived by building a model for the first 7 time-varying coefficients. Table 1 compares the number of nodes, equations and variables and the CPU time of the original and reduced models. The CPU time of the reduced model is substantially smaller than that of the original model.

Table 1. Parameters of comparison

                 Number of nodes | Number of equations | Number of variables | CPU time (s)
Original model        100               2003                  2031               24.041
Reduced model           7                143                   171                0.768
The relative error is calculated as a percentage using the same set of simulation data (Figures 12–15). The largest deviations are found at the moments of the steps. As Figures 2–4 show, the first step is a change in the composition, then a change in the pressure, and finally a change in the temperature. The errors obtained are within an acceptable range.
Figure 12. Relative error (%) for the pressure
Figure 13. Relative error (%) for the temperature
Figure 14. Relative error (%) for the composition of H2
Figure 15. Relative error (%) for the density
4. Conclusions The POD methodology has been studied and applied to distributed-parameter models, maintaining the main features of the original model. The POD method has been applied to a complex example, the hydrogen pipeline of a refinery. The response of the reduced models is similar to that of the original models, as evidenced by the graphs of the relative errors. As can be seen in Table 1, using the POD method greatly reduces the number of variables, the number of equations and the CPU time of the calculation.
References
W. F. Ames, 1977. Numerical Methods for Partial Differential Equations. Academic Press, Inc., Second Edition.
K. Kunisch, S. Volkwein, 1999. Control of the Burgers Equation by a Reduced-Order Approach Using Proper Orthogonal Decomposition. Journal of Optimization Theory and Applications, Vol. 102, No. 2, pp. 345-371.
W. H. A. Schilders et al., 2000. Model Order Reduction: Theory, Research Aspects and Applications. The European Consortium for Mathematics in Industry. Springer.
M. Valbuena et al., October 13-15, 2010. Dynamical Simulation of a Collector of H2 Using Methods of Numerical Integration and a Graphical Library in EcosimPro®. 22nd European Modeling & Simulation Symposium (Simulation in Industry), Fes, Morocco.
A Process Unit Modeling Framework within a Heterogeneous Simulation Environment Ingo Thomasa
a Linde AG, Linde Engineering Division, Dr.-Carl-von-Linde-Str. 6–14, 82049 Pullach
Abstract Rigorous dynamic simulation is becoming increasingly important at Linde Engineering, and therefore more detailed process unit models are required. In fact, the refinement of existing process unit models as well as the development of new ones is often part of dynamic simulation projects, especially when it comes to innovative processes. In order to facilitate model development within the heterogeneous simulation environment of Linde Engineering, a new Process Unit Modeling Framework has been developed. Models developed within this framework may be used in Linde's in-house simulator OPTISIM®, as well as in the commercial process simulator UNISIM®, which are both widely used at Linde Engineering. Keywords: Hierarchical Modeling, Multiscale Modeling
1. Introduction As a leading international engineering and contracting company, the Linde Engineering Division of The Linde Group designs and builds turnkey process plants for a wide variety of industrial users and applications: chemical industries, air separation, manufacturers of hydrogen and synthesis gases, natural gas treatment, and more. Linde's in-house process simulation and optimization tool OPTISIM® [ESLBK97] has been employed and enhanced for decades. As an efficient equation-oriented system, it is successfully applied and widely accepted by a large number of engineers as a steady-state process design tool. OPTISIM®'s dynamic simulation features have also been extended according to users' needs [KSBK01]. More recent applications of OPTISIM® cover rigorous dynamic equipment simulation. An important application is the systematic survey of heat exchanger temperature differences during startup or shutdown. Another application is the propagation of pressure waves through pipelines due to cavitation or valve shutdowns. Compared to a stand-alone simulation tool, the major advantage of studying these effects within a process simulation environment is that feedback of the process can be taken into account.
These applications require more detailed models of the equipment. For instance, the simulation of pressure waves in pipelines requires a momentum balance in addition to material and energy balance equations. For the practical development of this new generation of process models, it is beneficial to combine a descriptive model definition, as applied in modern simulation environments for rapid development of models (as discussed, for instance, in [KFGE97]), with the undisputed strengths of OPTISIM® regarding process simulation and optimization. Besides OPTISIM®, UNISIM® is also increasingly being used for dynamic simulations at Linde, which brings up similar requirements regarding model enhancements. Hence, it is obviously beneficial to have the opportunity to easily transfer specific proprietary unit models from one simulator to the other. Therefore, the modeling environment is implemented so as to allow a straightforward integration within OPTISIM® as a generic unit model framework as well as in UNISIM® as a unit extension.
2. A new lightweight modeling environment The new declarative modeling environment is implemented as a lightweight C++ library which provides a small C/C++/FORTRAN API. The library basically consists of a virtual machine (VM) and a compiler. Mathematical expressions are compiled into "virtual machine code" or bytecode, similar to Java, Perl or Visual Basic. There are two ways of using the library:
1. The model is part of a larger system. In that case, the virtual machine returns the residual of the expression, optionally together with its derivatives. This is how the library is used within the equation-oriented simulator OPTISIM®.
2. The model is solved in a standalone manner. The virtual machine has interfaces to a number of numerical algorithms (Newton solvers, optimization codes, and DAE integrators); these solvers may be used to obtain the numerical solutions. This is how the library is used within UNISIM®.
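The compile-to-bytecode idea can be illustrated with a toy stack-based VM that propagates (value, derivative) pairs, in the spirit of the forward-mode differentiation used by such libraries (a sketch only: the actual library is a C++ implementation, and its bytecode format and API are not public):

```python
# Toy stack VM: a residual expression is "compiled" to postfix bytecode once,
# and the VM evaluates it many times, carrying (value, d/dx) pairs so that the
# residual and its derivative come out together (forward-mode differentiation).
import math

def run(bytecode, x):
    stack = []
    for op, arg in bytecode:
        if op == "const":
            stack.append((arg, 0.0))          # constants have zero derivative
        elif op == "var":
            stack.append((x, 1.0))            # d(x)/dx = 1
        elif op == "add":
            (b, db), (a, da) = stack.pop(), stack.pop()
            stack.append((a + b, da + db))
        elif op == "mul":
            (b, db), (a, da) = stack.pop(), stack.pop()
            stack.append((a * b, da * b + a * db))   # product rule
        elif op == "tanh":
            a, da = stack.pop()
            t = math.tanh(a)
            stack.append((t, (1.0 - t * t) * da))    # chain rule
    return stack.pop()                        # (residual, derivative)

# Residual r(x) = x*x + 3*x, in postfix: x x mul 3 x mul add
prog = [("var", None), ("var", None), ("mul", None),
        ("const", 3.0), ("var", None), ("mul", None), ("add", None)]
value, deriv = run(prog, 2.0)                 # r(2) = 10, r'(2) = 7
```

The same compiled program can then be handed either to an enclosing equation system (which consumes residual and Jacobian entry) or to a standalone Newton iteration, mirroring the two usage modes listed above.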
"
!
The virtual machine optionally computes derivatives of the model equations. The structure of the Jacobian is computed during compilation. During evaluation, the derivatives are computed similarly to the forward mode of automatic differentiation codes like ADOL-C [GJU96]. The virtual machine provides means for callbacks to other APIs, either via COM, DLL interfaces or by static linkage. For instance, UNISIM®'s physical property system as well as Linde's in-house physical property package GMPS (General Multi-Phase Property System) is made available by callback. The modeling environment provides means for modeling discrete-continuous processes. There are if-then-else structures, max/min statements and so on. These features are implemented along the lines of the "HSML" framework (see [Tay99]), i.e. each discontinuous function comes with a switching function whose sign changes indicate discrete state changes. Based on that, OPTISIM® provides sophisticated means for re-factoring Jacobian matrices, consistent reinitialization of higher-index DAEs and so on (see [KMG92, KSBK01]). New features compared to other dynamical integrators are the following:
• Functionals. A "functional" is an abstraction of a quantitative relationship between state variables. Functionals may be compared to function pointers in C++ or FORTRAN. One application is the specification of a heat flux density through a tube wall by a functional (describing insulation loss or heating). Another application is the description of chemical reactions by conversion-rate functionals as (optional) right-hand sides of the component balance.
• Tensors. Multi-dimensional arrays (which may have any number of dimensions) are treated as "tensors", that is, they come with multiplication and addition operations using differential-algebraic conventions. In particular, linear-algebra notation, e.g. dot(x) + A x = b (dot(x) being the vectorial time derivative of x), is valid.
This feature is welcomed in particular by feedback control experts.
3. Modeling Requirements in a Heterogeneous Simulation Environment: An Example In general, customer or project requirements enforce the use of specific tools. In-house, dynamic machine simulations are often executed using OPTISIM®. However, on customer request, we transferred such a simulation to UNISIM®. We observed that the UNISIM® valve model does not conform to Linde-specific DIN standard requirements. Using the new modeling environment, a valve model which agrees with the DIN standard and matches the OPTISIM® model could easily be generated. Though it may be theoretically possible to do this via CAPE OPEN interfaces (using gPROMS® for the valve model and coupling it to UNISIM®), this approach is not practical due to risk and cost issues. Extending UNISIM® by a Unit Extension, on the other hand, requires too much time and effort for a single ongoing project.
4. Workflow and Strategical Benefits The modeling environment provides a software-technical abstraction layer, which separates the thermodynamic or physical model from the software engineering details of the process simulation environment. The practical and strategical consequences are surveyed below.
• Process unit model development is no longer overloaded with software engineering details; the unit model developer does not need to bother with compilers, linkers, memory management and so on. The software engineering details to be taken care of in complex systems such as OPTISIM® or UNISIM® are not to be underestimated. Software engineering perils (e.g. memory management details) have been significant obstacles in a number of development projects.
• Process unit models may be developed independently of the process simulation environment in which the model is to be used. By now, models may be used in OPTISIM® and UNISIM®; later on, a CAPE OPEN ESO implementation may be discussed. This increases the safety of investment of the expensive development of detailed models.
• Let's face it: model development practice is mostly debugging. A declarative modeling environment reduces debugging time in several ways:
– If derivatives have to be coded explicitly, they are a major source of subtle errors, which deteriorate convergence speed and model reliability. Using automatic differentiation eliminates this source of errors.
– Another common source of errors in discrete-continuous systems are bookkeeping errors regarding switching-function states. This source of errors is eliminated as well.
5. Experiences The evolving modeling environment has been part of OPTISIM® for some years now. Hence, there already are some experiences worth noting.
• Thanks to its new modeling capabilities, OPTISIM® is being used for in-depth modeling of heat exchangers as well as for machine simulations.
• The modeling platform simplifies modeling of innovative processes, as "gaps" in terms of models for new apparatuses are closed more easily.
• The new capabilities opened up new vistas for studying feedback control strategies [See10]. The development of advanced control strategies is boosted by the simplicity of their implementation [Hel10].
• Especially conditional statements (if-then-else) simplify the implementation of flexible set-ups of process flowsheets. Hence, process models may be set up more generically, which boosts the development of standardized flowsheets.
6. Conclusion We presented some aspects of a Linde in-house development of a declarative modeling environment. Though a number of comparable systems are on the market, there are technical and strategical drawbacks to integrating them into a process simulation environment. Hence, an in-house development turned out to be a viable alternative.
References
[ESLBK97] E. Eich-Söllner, P. Lory, P. Burr, and A. Kröner, Stationary and dynamic flowsheeting in the chemical engineering industry, Surveys on Mathematics for Industry 7 (1997), 1–28.
[GJU96] A. Griewank, D. Juedes, and J. Utke, Algorithm 755: ADOL-C: a package for the automatic differentiation of algorithms written in C/C++, ACM Trans. Math. Softw. 22 (1996), no. 2, 131–167.
[Hel10] S. Heldt, Dealing with structural constraints in self-optimizing control engineering, Journal of Process Control 20 (2010), no. 9, 1049–1058.
[KFGE97] L.U. Kreul, G. Fernholz, A. Gorak, and S. Engell, Erfahrungen mit den dynamischen Simulatoren DIVA, gPROMS und ABACUSS, Chemie Ingenieur Technik (CIT) 69 (1997), 650–653.
[KMG92] A. Kröner, W. Marquardt, and E.D. Gilles, Computing consistent initial conditions for differential-algebraic equations, Computers & Chemical Engineering 16 (1992), Supplement 1 (European Symposium on Computer Aided Process Engineering–1), S131–S138.
[KSBK01] Th. Kronseder, O. v. Stryk, R. Bulirsch, and A. Kröner, Towards Nonlinear Model-Based Predictive Optimal Control of Large-Scale Process Models with Application to Air Separation Plants, in: Online Optimization of Large Scale Systems (M. Grötschel, S. O. Krumke, and J. Rambau, eds.), Springer, Berlin, 2001, pp. 385–410.
[See10] Th. Seel, Modeling, Order Reduction and Multivariable Control Designs for Cryogenic Separation & Liquefaction Plants, Diplomarbeit, Otto-von-Guericke-Universität Magdeburg, Germany, 2010.
[Tay99] J. H. Taylor, Rigorous hybrid systems simulation with continuous-time discontinuities and discrete-time agents, in: Software and Hardware Engineering for the 21st Century, ch. 60, pp. 383–388, World Scientific and Engineering Society Press, NY, 1999.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Mathematical description of mass transfer in supercritical-carbon-dioxide-drying processes

Cristhian Almeida-Rivera a, Seddik Khalloufi a, Jo Jansen a and Peter Bongers a,b

a Unilever R&D Vlaardingen, Olivier van Noortlaan 120, 3130 AC, Vlaardingen, The Netherlands. E-mail: [email protected]
b Hoogwerff chair in Product-Driven Process Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB, Eindhoven, The Netherlands
Abstract
For thermo-sensitive food products, the supercritical-carbon-dioxide (SC-CO2) drying process could be a promising technology. The process takes place in three steps: (i) removal of water from the food matrices, (ii) adsorption of the removed water in the adsorber bed, and (iii) regeneration of the adsorber with hot air. In this investigation, a mathematical model is derived to describe the changes of the water concentration in the SC-CO2, in the solid food matrix and in the adsorber bed during the entire drying process. The mass balance equations of the model involve several parameters, such as the geometry of the autoclave and the adsorber bed, mass transfer coefficients, diffusion coefficients, equilibrium constants between the solids and the fluids, the specific interfacial area of the solid matrices, the porosities of the packed beds, the SC-CO2 flowrate and the particle size. Preliminary results obtained with the model suggest that each parameter may contribute differently to the drying kinetics. This finding allows the identification of the bottlenecks encountered in drying processes and offers leads and strategies to overcome them. The present model could eventually be used as a tool for optimizing the operating conditions and process scale-up in SC-CO2 drying.

Keywords: mathematical simulation, supercritical carbon dioxide, drying, food, packed beds
1. Introduction
Extending the shelf life of food products by reducing their water activity has been one of the challenges faced by the food sector globally. One such preservation approach involves the removal of water from the food matrices by dedicated technologies. Among them, freeze drying is regarded as the gold-standard technology due to the remarkable quality of the final product. On the other hand, freeze drying requires considerable operational and capital costs, which makes this processing route unaffordable for low-added-value products [1]. Recently, an alternative drying technology assisted by supercritical carbon dioxide (SC-CO2) was studied [2, 3], highlighting the successful execution of extraction technology operated at such supercritical conditions. In the supercritical region, there is a continuous transition
between the liquid and the gas phases, with no possible distinction between these two phases (Fig. 1). Beyond this point, the special combination of gas-like and liquid-like properties makes the supercritical fluid an excellent solvent for the extraction industry [4]. Thus, fluids at supercritical conditions exhibit a solvent power close to that of liquids, and viscosity and diffusivity comparable to those of gases [5]. Among the fluids used at supercritical conditions in extraction applications, the most widely used in food and medicine is currently carbon dioxide [6]. In this contribution, a dynamic model is
Figure 1. Schematic phase diagram of CO2 around its critical point
presented for the SC-CO2-assisted dehydration of a solid matrix coupled with the dehumidification of the SC-CO2 stream in a regeneration unit.
2. System description
2.1. Physical description of the system
The SC-CO2-assisted drying unit is composed of three key elements: (i) the drying chamber, where the material to be dried gets in contact with a continuous stream of SC-CO2; (ii) a regeneration unit, usually containing zeolite, where the water in the SC-CO2 stream gets adsorbed; and (iii) a recirculation pump to maintain the CO2 stream at supercritical conditions. In this configuration, water is extracted from the food material by a concentration gradient and carried by the SC-CO2 stream to the zeolite material, where it is adsorbed.
Figure 2. Schematic representation of a SC-CO2 drying unit.
3. Model Development and Implementation
In our previous publications a detailed model derivation of the drying chamber was presented [2, 3]. The governing expressions accounted for a realistic description of the SC-CO2-assisted drying process, albeit some simplifying assumptions were introduced. In one such assumption we considered the incoming SC-CO2 stream to be water-free, i.e. it was implied that the amount of zeolite was sufficient to adsorb instantaneously the water extracted from the food matrix. Although the predictions of the model were remarkably accurate (see Fig. 3 in [3]), we acknowledge that in real practice such an assumption would imply an infinitely large (and thus non-realistic) zeolite reactor and a 100% efficient zeolite. In this contribution, we relax the water-free assumption for the incoming SC-CO2 stream and provide a simplified modelling approach for the integrated description of the system.

3.1. Modelling of the drying chamber
As described in detail in [2, 3], the following set of governing expressions can be derived for the dehydration of a solid matrix (subscript s) by the flow of an SC-CO2 stream (subscript f):

$$\frac{dC_f}{dt} = D \frac{d^2 C_f}{dz^2} - U \frac{dC_f}{dz} - \left(\frac{1-\varepsilon}{\varepsilon}\,\frac{\rho_s}{\rho_f}\right)\frac{dC_s}{dt}$$

$$\frac{dC_s}{dt} = -K a \left[ C_s - K_{eq}\, C_f \right]$$

The following initial and boundary conditions are considered for the water concentration per unit mass C, where the flux entering a boundary must be equal to that passing through the boundary.
At t = 0 and 0 ≤ z ≤ L:

$$C_s(z) = C_{s0}, \qquad C_f(z) = 0$$

For t > 0, at z = L and z = 0:

$$\left.\frac{\partial C_f}{\partial z}\right|_{z=L} = 0, \qquad \left.\frac{\partial C_f}{\partial z}\right|_{z=0} = \frac{U}{D}\left(C_f - C_{f,in}\right)$$
3.2. Modelling of the zeolite regeneration chamber
The water adsorption in the zeolite regeneration chamber was modelled using first-order kinetics. Under the assumption of incompressibility and negligible volumetric change due to water adsorption, the dynamic behaviour of the water concentration leaving the zeolite chamber is given by the expression

$$C_{f,in} = \frac{1}{\eta_{zeo}}\, C_f(t, L)\left(1 - e^{-kt}\right)$$
The kinetic constant k is the inverse of the residence time of the SC-CO2 inside the zeolite regeneration chamber and is directly related to the size of the zeolite unit and the flowrate of the SC-CO2 stream. An additional parameter (ηzeo), accounting for the zeolite efficiency, has been introduced. The set of differential equations was discretised using the finite-difference method, and the complete set of equations (differential and algebraic) was implemented and solved in Matlab/Simulink using an implicit Runge-Kutta solver.
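To illustrate this solution strategy, a minimal method-of-lines sketch is given below, here in Python with SciPy's implicit Runge-Kutta (Radau) solver rather than Matlab/Simulink. All parameter values (D, U, Ka, Keq, ε, ρs/ρf, k, ηzeo, Cs0) are illustrative assumptions, not the fitted values of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameters -- assumptions for this sketch
N, L = 50, 0.1              # grid points, bed length [m]
D, U = 1e-6, 1e-3           # dispersion [m^2/s], superficial velocity [m/s]
Ka, Keq = 1e-3, 0.2         # mass-transfer group K*a [1/s], equilibrium constant
eps, rho_ratio = 0.4, 2.0   # bed porosity, rho_s / rho_f
k_zeo, eta_zeo = 5e-3, 1.0  # zeolite kinetic constant [1/s], efficiency
Cs0 = 3.0                   # initial solid moisture (dry basis)

z = np.linspace(0.0, L, N)
dz = z[1] - z[0]

def rhs(t, y):
    Cf, Cs = y[:N], y[N:]
    dCs = -Ka * (Cs - Keq * Cf)                      # solid-phase kinetics
    # recirculated inlet concentration from the zeolite chamber
    Cf_in = Cf[-1] * (1.0 - np.exp(-k_zeo * t)) / eta_zeo
    # ghost nodes enforcing dCf/dz|_L = 0 and dCf/dz|_0 = (U/D)(Cf - Cf_in)
    Cf_m1 = Cf[1] - 2.0 * dz * (U / D) * (Cf[0] - Cf_in)
    Cf_Np = Cf[-2]
    Cfx = np.concatenate(([Cf_m1], Cf, [Cf_Np]))
    d2 = (Cfx[2:] - 2.0 * Cfx[1:-1] + Cfx[:-2]) / dz**2
    d1 = (Cfx[2:] - Cfx[:-2]) / (2.0 * dz)
    dCf = D * d2 - U * d1 - ((1.0 - eps) / eps) * rho_ratio * dCs
    return np.concatenate([dCf, dCs])

y0 = np.concatenate([np.zeros(N), np.full(N, Cs0)])
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="Radau")  # implicit Runge-Kutta
mean_Cs = sol.y[N:, -1].mean()  # average residual solid moisture after 1 h
```

The implicit Radau method is chosen because the coupled fluid/solid balances become stiff when the mass-transfer group is fast compared with convection; the same sweep over k_zeo and eta_zeo reproduces the type of parametric study reported in the Results section.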
4. Results
Taking as reference the case in which the incoming SC-CO2 stream is water-free (Cf,in = 0) and the zeolite is 100% efficient (i.e. ηzeo = 1; all water is adsorbed instantaneously in the zeolite matrix), a parametric study was performed. In this study the kinetic constant and the zeolite efficiency were varied within sensible ranges and the overall performance of the process was assessed. This performance indicator was estimated in terms of the moisture content in the solid matrix at the end of the drying processing time. An increase in k implies smaller zeolite reactors (at a given flowrate) or an increased flowrate (at a given chamber size). A decrease in ηzeo implies a slower, partial adsorption of water in the zeolite. As can be seen in Fig. 3 (left), the kinetic constant k plays a key role in the system performance. As the kinetic constant increases (i.e. decreasing the chamber volume or increasing the SC-CO2 volumetric flow), the drying time to achieve a given moisture content target increases dramatically, reaching asymptotic behaviour. The effect of the zeolite efficiency is depicted in Fig. 3 (right). As expected, the moisture profiles when the efficiency decreases are comparable to those obtained when the kinetic parameter increases. Hence, the desired level of moisture might not be attainable, as an increasing amount of water is recirculated to the drying chamber.
Figure 3. Simulation results for the moisture content in the solid matrix at various adsorption kinetic constants (left) and at various zeolite efficiency values for a given kinetic constant (right).
5. Conclusions and Future Work
In this contribution a mathematical model was derived for the description of the combined dehydration of solid matrices by SC-CO2-assisted drying and the continuous dehumidification of the SC-CO2 stream in a zeolite regeneration unit. Two parameters affecting the SC-CO2 dehumidification step were considered and related to the size of the chamber and to the effectiveness of the zeolite. The simulation results showed that the performance of the drying process might be strongly diminished by an insufficient amount of zeolite in the chamber and by poorly efficient zeolite. These results can be used as preliminary design principles for the optimization and scale-up of drying processes assisted by SC-CO2 extraction.
References
[1] Ratti, C. Hot air and freeze-drying of high-value foods: A review. Journal of Food Engineering 2001, 49 (4), 311–319
[2] Khalloufi, S.; Almeida-Rivera, C.P.; Bongers, P. Supercritical-CO2 drying of foodstuffs in packed beds: Experimental validation of a mathematical model and sensitivity analysis. Journal of Food Engineering 2010, 96, 141–150
[3] Almeida-Rivera, C.P.; Khalloufi, S.; Bongers, P. Prediction of supercritical carbon dioxide drying of food products in packed beds. Drying Technology 2010, 28, 1157–1163
[4] Nalawade, S.P.; Picchioni, F.; Janssen, L.P.B.M. Supercritical carbon dioxide as a green solvent for processing polymer melts: Processing aspects and applications. Progress in Polymer Science 2006, 31 (1), 19–43
[5] Barbosa-Cánovas, G.V.; Tapia, M.S.; Cano, M.P. Novel Food Processing Technologies; CRC Press: Boca Raton, FL, 2005
Three-moments conserving sectional techniques for the solution of coagulation and breakage population balances

Margaritis Kostoglou a, Michalis C. Georgiadis b

a Department of Chemistry, Aristotle University, Univ. Box 116, 54124 Thessaloniki, Greece. E-mail: [email protected]
b Department of Chemical Engineering, Aristotle University of Thessaloniki, 54124, Greece. E-mail: [email protected]

Abstract
Sectional (zero-order) methods constitute a very important class of methods for the solution of the population balance equation, offering distinct advantages over their competitors, namely higher-order and moment methods. For the last ten years a particular sectional method, the so-called fixed pivot technique (Kumar and Ramkrishna, 1996), has been the one most extensively used in the scientific community for the solution of the coagulation and breakage equations, because it offers arbitrary grid choice and conservation of two moments of the particle size distribution. More recently, a new method (called the cell average technique; Kumar et al., 2006; Kostoglou, 2007) has been developed which gives more accurate results than the fixed pivot technique. In the present work, the extension of this new method to conserve three moments is attempted. A stable algorithm for the solution of the coagulation and breakage equation is developed. The new method allows improved computation of moments of practical interest.

Keywords: Population balances, sectional methods, coagulation, breakage, moment conservation
1. Main Text
Coagulation and breakage (alternatively called fragmentation) are of paramount importance in several processes of technological and/or fundamental scientific interest. These phenomena concern several scientific disciplines. For example, in polymer technology the mechanism of polymer degradation can be considered to be breakage, whereas in catalytic processes breakage influences efficiency through catalyst attrition. In industrial aerosol processes, coagulation is an important step for nanoparticle production. In atmospheric sciences, coagulation and breakage are related to rain formation, and in astrophysics to the size distribution of asteroids and to planet formation. In biotechnology, the cell division process can be described as a spontaneous breakage process. Other processes where breakage is the essential mechanism are those related to size reduction of solids (e.g. crushing, milling, grinding), whereas coagulation is of paramount importance in crystallization, precipitation,
pelletization and granulation. The bubble size distribution in bubble columns is largely determined by coagulation/breakage, which in turn determines the characteristics of the flow field in the column. Furthermore, coagulation and breakage are very important in emulsion technology, determining the droplet size distribution and the emulsion stability. The dynamics of a particle population undergoing coagulation and breakage is described by the coagulation-breakage equation, which belongs to the more general class of population balance equations. This equation is a non-linear partial integro-differential equation and its numerical solution is by no means trivial. This is the primary reason for the development of so many methods for its solution, originating from various scientific disciplines.
2. Problem Formulation
The coagulation-breakage population balance is the following non-linear partial integro-differential equation:
$$\frac{\partial f(x,t)}{\partial t} = \frac{1}{2}\int_0^x K(y, x-y)\, f(y,t)\, f(x-y,t)\, dy \;-\; f(x,t)\int_0^\infty K(x,y)\, f(y,t)\, dy$$
$$\qquad\qquad +\; \int_x^\infty p(x,y)\, b(y)\, f(y,t)\, dy \;-\; b(x)\, f(x,t) \qquad (1)$$
where t is the time, x is the particle volume, f(x,t) is the number concentration density function, K(x,y) the coagulation frequency between two particles with sizes x and y respectively, b(x) the breakage frequency, and p(x,y) the probability distribution of particles of volume x resulting from the breakup of a particle of volume y. The first term of the right-hand side of (1) represents the rate of generation of particles of volume x by coagulation, the second the loss by coagulation, the third the gain by breakage, and the last the loss by breakage. The above equation must be solved for the evolution of the particle size distribution (PSD) having as initial condition a given PSD f(x,0) = fo(x). There are several approaches to the solution of equation (1). At one limit, the so-called higher-order methods ensure high accuracy but require a large computational effort; at the other limit, the moment methods require a small computational effort but sometimes lead to questionable accuracy. The so-called sectional methods constitute the best compromise between the two approaches, bridging their accuracy and computational requirements. The large range of particle sizes considered in practical problems suggests a non-uniform discretization of equation (1) in the size domain. The main problem of the older sectional methods is that they can conserve only one moment of the PSD in the case of a non-uniform discretization. This problem was overcome 15 years ago by the so-called Fixed Pivot Technique (FPT), which allowed the simultaneous conservation of two moments. Despite the conservation of the zeroth and first moments, FPT exhibits large errors in the computation of the second moment. A significant improvement with respect to this problem was achieved by the Cell Average Technique (CAT) five years ago. Here a new approach based on CAT, but requiring conservation of three moments of the PSD, is introduced under the name Extended Cell Average Technique (ECAT).
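Before turning to the sectional methods, the structure of Eq. (1) can be exercised numerically on a special case. The sketch below (Python; the kernel value, truncation size and final time are illustrative assumptions) integrates the discrete, pure-coagulation form of Eq. (1) for a constant kernel K, for which the total number obeys dN/dt = −K N²/2 exactly, giving N(t) = N0/(1 + K N0 t/2), while the total mass (first moment) is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0   # constant coagulation kernel (illustrative)
M = 60    # number of discrete sizes tracked (truncation, illustrative)

def rhs(t, n):
    # discrete Smoluchowski form of the coagulation terms of Eq. (1):
    # dn_k/dt = (1/2) sum_{i+j=k} K n_i n_j  -  n_k sum_j K n_j
    gain = np.zeros(M)
    for k in range(1, M):                       # gain[k] is for size k+1
        gain[k] = 0.5 * K * np.dot(n[:k], n[k - 1::-1])
    loss = K * n * n.sum()
    return gain - loss

n0 = np.zeros(M)
n0[0] = 1.0                                     # monodisperse initial PSD
sol = solve_ivp(rhs, (0.0, 2.0), n0, rtol=1e-8, atol=1e-12)
n = sol.y[:, -1]

N_num = n.sum()                                 # zeroth moment (total number)
N_exact = 1.0 / (1.0 + 0.5 * K * 1.0 * 2.0)     # N0/(1 + K N0 t/2)
mass = np.dot(np.arange(1, M + 1), n)           # first moment (total mass)
```

Such a check is useful because any sectional discretization of Eq. (1) can be validated against the same two invariants (number decay and mass conservation) before non-uniform grids are introduced.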
Figure 1. Handling of the incoming particles in the section i by the sectional techniques: (a) Fixed Pivot Technique (coagulation and breakage) (b,c) Cell Average Technique (coagulation and breakage) for average incoming particle size smaller (case b) or larger (case c) than the pivot size xi (d) Extended Cell Average Technique (breakage) (e) Extended Cell Average Technique (coagulation)
3. Solution techniques
A typical discretization scheme transforms the coagulation-breakage equation into a system of ordinary differential equations having time as the independent variable and, as dependent variables, the number of particles contained in each section i (i.e. particles of sizes between vi and vi+1). The symbols v1, v2, v3, … stand for the finite-volume (sectional) discretization of the particle volume domain. In the case of a uniform grid and a discrete initial PSD (all particles consisting of monomers having a specific size xm), the discretized equation (1) degenerates to the discrete coagulation-breakage equation by the direct substitution x = i·xm. The uniform grid has the property of moment conservation, i.e. the moments of the new particles resulting from a breakage event are the same as the moments of the parent particles (both new and parent particles must be assigned to some grid point). But whereas for some applications (e.g. bubble or droplet size distributions) the competition between coalescence and breakage, or the steep reduction of the breakage frequency with decreasing bubble size, leads to a narrow PSD which can be modeled using a uniform grid, in other cases (e.g. size reduction of solids, fundamental studies of breakage) the particle volume (the independent variable) may extend over many orders of magnitude, rendering necessary the use of a non-uniform (usually geometric) grid. The fact that for a non-uniform grid the moment conservation property is not satisfied led to the development of several sectional techniques based on the requirement of moment conservation. All the particles of the PSD are assigned to specific particle sizes called pivots (x1, x2, x3, …) which define the grid of the discretization technique. The section boundaries vi are related to the pivots as vi = (xi-1 + xi)/2. The main issue concerning sectional techniques is how to distribute a fresh particle, produced by a coagulation or breakage event, among the classes.
The straightforward discretization corresponds to assigning the new particle to the size class to which it belongs. Although it seems to be a tautology, this approach leads to the conservation of only one moment. The main idea in FPT is the distribution of each new particle between two classes depending on the size of the new particle (classes i and i+1 for new particles larger than xi, and classes i-1 and i for new particles smaller than xi). The distribution of the new particle between the two classes is such as to ensure the conservation of two moments of the PSD. The difference in CAT is that each new particle is not distributed directly; instead, an average particle entering class i is first constructed, and this average particle is distributed among the classes following the rules of FPT. This procedure still conserves two moments, but the numerical results for the other, non-conserved moments are better than those of FPT. The concept of the average particle is also employed by ECAT, but in this case it is distributed among three classes in order to conserve three moments of the PSD. The only combination of classes over which the average new particle entering class i can be distributed that ensures the stability of the numerical algorithm is classes i-1, i, i+1 for coagulation-produced particles and classes i-2, i-1, i for breakage-produced particles. The approaches described above for handling the new particles entering class i are presented graphically in Figure 1.
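These splitting rules can be written down explicitly. In the sketch below (Python; the pivot values and the new-particle volume are illustrative assumptions), the FPT fractions follow from the 2x2 number/mass balance, and a three-moment assignment in the spirit of ECAT over three pivots solves the corresponding 3x3 Vandermonde system; note that, as is typical of multi-moment sectional schemes, some of the three weights may be negative:

```python
import numpy as np

def fpt_split(v, xi, xj):
    """FPT: fractions (a, b) on pivots (xi, xj) conserving number and mass.

    Number balance: a + b = 1; mass balance: a*xi + b*xj = v.
    """
    b = (v - xi) / (xj - xi)
    return 1.0 - b, b

def three_moment_split(v, pivots):
    """ECAT-style: fractions on three pivots conserving moments 0, 1 and 2."""
    # rows of A are [x^0 ...], [x^1 ...], [x^2 ...] for the three pivots
    A = np.vander(np.asarray(pivots, dtype=float), 3, increasing=True).T
    return np.linalg.solve(A, np.array([1.0, v, v * v]))

# illustrative geometric pivots and a new particle from a coagulation event
x = np.array([1.0, 2.0, 4.0])
v = 2.7
a, b = fpt_split(v, x[1], x[2])   # two-moment split over (x_i, x_{i+1})
w = three_moment_split(v, x)      # three-moment split over three pivots
```

By construction the FPT fractions reproduce the particle's number and mass, while the three weights reproduce its number, mass and second moment; the stability considerations discussed above (which neighbouring classes to use) are not reproduced in this fragment.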
4. Results
Until now, only separate tests of the new method for the coagulation and breakage equations have been performed. In the case of coagulation, ECAT leads to improved computation of the moments of the PSD, but its performance with respect to the entire PSD depends on the coagulation kernel. In the case of a uniform coagulation kernel ECAT leads to improved PSDs, but in the case of the sum kernel ECAT cannot improve the results of CAT, implying that there is no one-to-one correspondence between moment computation and entire-PSD computation. This statement is confirmed by the results for the breakage equation. In all cases of breakage models tested, ECAT leads to better estimation of the moments of the PSD, but the entire PSD is better computed by the FPT method. In Figure 2 the ratio of approximate to exact PSD moments of order i is shown, under typical breakage conditions, as computed by several sectional techniques. Details on the kernels and the initial distribution can be found in Kostoglou and Karabelas (2009). The zeroth and first moments are conserved by FPT and CAT, and in addition the r-th moment by ECAT. The superiority of ECAT regarding moment computation is evident.
Figure 2: Ratio of approximate to exact moments of order i computed by several sectional techniques under typical breakage conditions.
References
S. Kumar, D. Ramkrishna, 1996, Chem. Engng Sci. 51, 1311.
J. Kumar, M. Peglow, G. Warnecke, S. Heinrich, L. Mörl, 2006, Chem. Engng Sci. 61, 3327.
M. Kostoglou, 2007, J. Colloid Interface Sci. 306, 72.
M. Kostoglou, A.J. Karabelas, 2009, Comp. Chem. Engng 33, 112.
Modelling and Simulation of Forced Convection Drying of Electric Insulators

Cristea Vasile-Mircea, Goga Firuta, Mogos Liviu Mihai

Babes-Bolyai University, 11 Arany Janos Street, 400028 Cluj-Napoca, Romania,
[email protected]

Abstract
The aim of the present work is to develop a model, implemented in a drying simulator, describing the heat and mass transfer processes taking place inside the electric insulator body and the hot air surrounding it. Drying system optimization and control rely on the capability of modelling these phenomena, as they directly guide the way the process manipulated variables have to be changed in time. The complex time evolution and spatial distribution of the moisture content and temperature of the drying product, together with the temperature, velocity and humidity of the drying medium, are predicted by the proposed model. The 3D model has been developed for dynamic simulation conditions. Experimental data, together with literature records, have been used to fit the parameters of the developed model, which will be further used in the industrial unit for operation optimization and control purposes.

Keywords: 3D model, CFD, drying, electric insulator
1. Introduction
Traditional high-voltage electric insulator production requires a first batch drying step intended to reduce the moisture content of the drying product from 25-30% moisture (on a dry basis) to about 2-5%. This is performed in special gas-heated drying chambers. The second drying step is carried out in high-temperature ovens, in order to achieve the desired moisture content of the final product. In the first step of the drying process the air temperature is controlled according to a special program, mainly designed on the basis of experimental tests. One of the most difficult problems to be solved during convective drying of porous materials consists in avoiding cracking phenomena. If the drying rate is not properly established, deformation and material defects may result. Setting the appropriate drying rate reduces the drying time and leads to the desired quality of the dried products. This may be performed on the basis of a complex model involving simultaneous heat and mass transfer both inside the drying body and between the body and the surrounding heating air. The whole process consists of several periods described by different drying mechanisms. The mathematical modelling of the drying process enables appropriate equipment design, optimization and efficient control. The wet clay-kaolin body devoted to high-voltage electric insulator production is subject to convective drying. The paper presents the development of a 3D CFD-based dynamic simulator for drying the electric insulator body, having typical geometry, in a hot air stream.
2. Model description
The theoretical background and main considerations for the model used in the paper are presented by Kowalski [1-3] and Kowalski and Strumillo [4]. Most of the literature reports divide the drying process into several steps: preheating, constant-rate and one or two falling-rate periods [5]. During the preheating period, which is usually very short, the clay body is heated from the ambient temperature to the wet-bulb temperature. Afterwards, during the constant-rate period, the liquid from the interior of the body migrates towards the liquid film situated at its surface. The liquid movement is driven by capillary forces. The temperature at the surface remains constant at the wet-bulb temperature value, as equilibrium is attained between the amount of liquid evaporated at the surface and the vapours diffused into the surrounding heating air. Phase transitions inside the dried material are ignored and all evaporation of the moisture is assumed to take place on the boundary of the dried material. Shrinkage stresses caused by the non-uniform distribution of the moisture content begin to develop in this period. After the critical point, the liquid water starts to withdraw from the body surface towards its interior, opening the first falling-rate period. The liquid water moves from the body interior through the continuous liquid medium within the pores. But vapours may also be formed, and they too migrate to the body surface. The temperature of the dried body exceeds the wet-bulb temperature. During the second falling-rate period the liquid water predominantly evaporates inside the drying body. Non-continuous liquid and vapour regions inside the body are formed and water is transported by the evaporation and condensation mechanism. The temperature inside the drying body rises and reaches almost the surrounding hot air temperature, while the drying rate diminishes.
The equation describing the moisture content in the dried body is developed on the basis of the mass balance for moisture, relating the moisture flux with the gradient of the moisture potential μ. It is associated with the heat transport equation. The rate equations consider the equations for: moisture transport (capillary, diffusion, and thermodiffusion), phase transition of liquid into vapours, heat transport including the heat convection by moving moisture, and heat exchange between body components. The model describes the drying process as a whole by including or excluding the individual heat and mass transfer mechanisms in the several stages of drying. The system of equations describing the change of the electric insulator body moisture and temperature in time and space is [1]:
1 is recommended. Inerts are present in many processes, and in some commercial RD systems the lighter reactant is fed together with an inert. The presence of inerts reduces the concentration of reactants and results in lower reaction rates. Hence, KEQ is reduced, which increases the loss of reactant. However, a certain amount of inerts can be beneficial for optimum conversion – e.g. in MTBE production, n-butene serves as a coolant for the reactive zone, thereby keeping the temperature of the reactive zone at a level where the equilibrium is favorable for MTBE conversion.11 In RD processes, the specific reaction rate for the main reaction cannot be too low, as low rates require large liquid holdups, large amounts of catalyst on each reactive tray and eventually a larger column.5 Therefore, the reaction rate for the main reaction should be higher than 10^-5 kmol/(kgcat·s) (e.g. methyl acetate hydrolysis).9 In RD processes, the desired column temperature should be lower than that at which side reactions occur. For instance, in the methyl tert-butyl ether (MTBE) process, two consecutive side reactions are present: the irreversible dimerization of isobutene to di-isobutene (Tb = 101.45°C) and the reversible dehydration of methanol to dimethyl
A systematic approach towards applicability of reactive distillation
ether (DME, Tb = –24.83°C). Therefore, the purity of the desired products, isobutene and methanol, could be reduced due to the by-products formed.12 The heat of reaction should be lower than the heat of vaporization of the key components. A higher heat of reaction will result in drying out of the trays and reduce the conversion. The last criterion checks the production rate – if it is above 0.5-1 kton/yr then an RD process is feasible. For lower production rates it is important to evaluate the gross profit of the process. A gross profit higher than 15% makes small-scale production suitable for RD process application (e.g. pharmaceutical industry). Ultimately, if all conditions are fulfilled then an RD process is also attractive (e.g. fatty acids esterification to FAME).13-15
Figure 1. Flow diagram of the RD process application feasibility analysis.
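For illustration, the screening logic of Figure 1 can be sketched as a simple checklist. The class, function and field names below are assumptions for the sketch; the thresholds are the ones quoted in the text (KEQ > 1, reaction rate > 10^-5 kmol/(kgcat·s), production above 0.5-1 kton/yr, or gross profit above 15% for small scale):

```python
from dataclasses import dataclass

@dataclass
class RDCandidate:
    # Candidate process data; field names are illustrative assumptions.
    keq: float                    # chemical equilibrium constant
    rate: float                   # main reaction rate [kmol/(kg_cat*s)]
    t_reaction_ok: bool           # reaction T compatible with VL separation window
    t_below_side_reactions: bool  # column T below side-reaction onset
    dh_rxn_below_dh_vap: bool     # heat of reaction < heat of vaporization
    production: float             # [kton/yr]
    gross_profit: float           # [%], relevant for small-scale production

def rd_feasible(c: RDCandidate) -> bool:
    """Rough screening following the criteria discussed in the text."""
    if c.keq <= 1.0:                       # KEQ > 1 recommended
        return False
    if c.rate < 1e-5:                      # avoid impractically large holdups
        return False
    if not (c.t_reaction_ok and c.t_below_side_reactions):
        return False
    if not c.dh_rxn_below_dh_vap:          # avoid drying out of trays
        return False
    # economics: sufficient scale, or high-margin small-scale production
    return c.production >= 0.5 or c.gross_profit > 15.0

# e.g. an esterification-like candidate passes, while a candidate with a
# reaction/separation temperature mismatch (as in HDA) fails the screen
esterification = RDCandidate(keq=16.0, rate=1e-3, t_reaction_ok=True,
                             t_below_side_reactions=True,
                             dh_rxn_below_dh_vap=True,
                             production=100.0, gross_profit=20.0)
hda = RDCandidate(keq=50.0, rate=1e-3, t_reaction_ok=False,
                  t_below_side_reactions=True, dh_rxn_below_dh_vap=True,
                  production=150.0, gross_profit=10.0)
```

This fragment only encodes the go/no-go branches of the flow diagram; the economic attractiveness assessment of the case studies below still requires the cost comparison discussed in the text.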
A. A. Kiss et al.
3. Case studies
The following examples were selected as case studies to evaluate RD process feasibility:
Methyl acetate hydrolysis. In the production of acetic acid and methanol by hydrolysis of methyl acetate, the ester reactant is the lightest, hence it is very difficult to keep it in the reactive zone. Figure 2 (left) shows the residue curve map (RCM) of methyl acetate / methanol / acetic acid. Methanol and its ester form a binary azeotrope. Hence, it is not likely that high-purity methanol can be obtained under ‘neat’ RD operation. Due to some technical constraints a conventional RD process is not technically feasible,9 although RD is suitable for the reverse reaction. However, non-conventional RD processes such as the reactive dividing-wall column (RDWC) could be a viable alternative.10
Figure 2. Residue curve map (RCM) of methyl-acetate / methanol / acetic acid (left). Cost comparison for a toluene hydro-dealkylation plant (right)

HDA process. In the conventional process of hydro-dealkylation (HDA) of toluene to benzene, the reaction requires 20-25 bar and 400°C, whereas the RD column is operated at 30 bar and about 280°C in the reactive zone. As the optimal reaction vs. separation conditions are significantly different and the pressure required by the RD column is higher, the drawbacks cancel out the overall advantages of the RD process. Figure 2 (right) gives a summary of the key capital cost elements for both the RD and conventional processes, for a design basis of 150 kton/yr xylenes and a costing basis in thousands of pounds (kGBP).4 The net effect is that the estimated capital saving is only of the order of 4%, well below the 25–50% improvement typically required to drive a new technology development.4 Therefore, in this case the RD application is not economically attractive in spite of being technically feasible and applicable.

Biodiesel production. Fatty acid methyl esters – the main components of biodiesel – can be directly produced by esterification of free fatty acids (FFA) with methanol or bioethanol.13-15 Conventionally, biodiesel is produced batch-wise using homogeneous catalysts that have many associated problems (neutralization, separation, salt waste streams). The RD process powered by solid catalysts offers unique advantages, such as: higher productivity, efficient use of raw materials and equipment, no catalyst-related issues, elimination of the alcohol excess and recycle, and lower capital and operating costs. Therefore, in this case the RD process proves to be technologically feasible (Figure 3) and at the same time economically attractive, using only ~109 kW·h/ton biodiesel.14-16
A systematic approach towards applicability of reactive distillation
[Figure 3 appears here: left, flowsheet of the RD process for fatty acid esterification (fatty acid and alcohol feeds, water and biodiesel/FAME products, alcohol recycle); right, molar fraction, temperature (60-220 °C) and reaction rate (0-2.5 kmol/hr) profiles versus stage number (stages 0-15).]
Figure 3. RD process for fatty acids esterification (left). Composition, temperature and reaction rate profiles along the RD column (right)
The three industrially relevant case studies briefly presented here illustrate the potential applicability range of the proposed methodology. Based on previous experience, we are also confident that virtually any potential RD application can be quickly and reliably evaluated using this approach, which checks whether a process is not only technically feasible but also economically attractive, a very important criterion at industrial scale.
4. Conclusions
The novel systematic approach proposed in this study evaluates RD process feasibility based on minimal knowledge of kinetics, thermodynamics and economics. Its main advantage is that all important parameters influencing the design of an RD process are taken into account. A major requirement is that the process conditions suitable for the chemical reaction must be in line with the conditions required for vapour-liquid separation. The industrially relevant case studies analyzed here validate the proposed approach for RD process feasibility analysis and make it clear when an RD process is technically feasible as well as economically attractive.
References
1. Kenig E., Jakobsson K., Banik P., Aittamaa J., Górak A., Koskinen M., Wettmann P., 1999, Chem. Eng. Sci., 54, 1347-1352
2. Aittamaa J., Kenig E., Jakobsson K., Banik P., Schembecker G., Górak A. et al., 1996, Brite-Euram Project Reactive Distillation BE95-1335
3. Kaymak D. B. and Luyben W. L., 2004, Comp. & Chem. Eng., 32, 1456-1470
4. Stitt E. H., 2002, Chem. Eng. Sci., 57, 1537-1543
5. Kaymak D. B., Luyben W. L., 2004, Ind. Eng. Chem. Res., 43, 3666-3671
6. Cao F., Fang D., Liu D. and Ying W., 2002, Fuel Chem. Div. Preprints, 74 (1), 295-297
7. Luyben W. L. and Yu C., 2008, Reactive Distillation Design and Control, ISBN 978-0-470-22612-4
8. Perry's Chemical Engineers' Handbook, 8th Edition, 2008, McGraw-Hill
9. Lin Y., Chen J., Cheng J., Huang H., Yu C., 2008, Chem. Eng. Sci., 63, 1668-1682
10. Sander S., Flisch C., Geissler E., Schoenmakers H., Ryll O. and Hasse H., 2007, Chem. Eng. Res. and Des., 85 (A1), 149-154
11. Higler A. P., Taylor R., Krishna R., 1999, Chem. Eng. Sci., 54, 1389-1395
12. Qi Z., Kienle A., Stein E., Mohl K., Tuchlenski A. and Sundmacher K., 2004, Chem. Eng. Res. & Des., 82 (A2), 185-191
13. Dimian A. C., Bildea C. S., Omota F., Kiss A. A., 2009, Comp. & Chem. Eng., 33, 743-750
14. Kiss A. A., Dimian A. C., Rothenberg G., 2008, Energy & Fuels, 22, 598-604
15. Kiss A. A., 2010, Comp. & Chem. Eng., 34, 812-820
16. Kiss A. A., 2011, Fuel Proc. Technol., article in press, DOI: 10.1016/j.fuproc.2011.02.003
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Strategies for the Robust Simulation of Thermally Coupled Distillation Sequences
Miguel A. Navarroa, José A. Caballeroa, Ignacio E. Grossmannb
a Department of Chemical Engineering, University of Alicante, Ap. Correos 99, 03080 Alicante, Spain
b Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA
Abstract
This paper presents a new strategy for the simulation of thermally coupled distillation sequences using process simulators. First, we show that the two side-stream connections that produce a 'thermal couple' can be accurately substituted by a combination of a material stream and a heat flow. In this way, a sequence of thermally coupled distillation columns can be simulated without recycle streams, like any conventional simulation of zeotropic distillation sequences, and is therefore no more difficult to converge than any other distillation system without recycles. In most situations this approach introduces negligible errors, and in any case it provides excellent initial points for the rigorous simulations with recycle streams. Different examples are presented, including mixtures of hydrocarbons (C4's - C5's - C6's), aromatics (BTX), alcohols, non-ideal azeotropic systems (acetone, benzene, chloroform) and systems involving 4 or 5 components. Different thermodynamically equivalent configurations, corresponding to different alternatives for implementing this approach, are also described.
Keywords: Distillation; Simulation; Thermally Coupled Distillation.
1. Introduction
Sustainable development of process systems motivates the pursuit of design solutions that use energy efficiently. Distillation consumes about 3% of the energy worldwide1. Thermally coupled distillation (TCD) systems have attracted renewed interest in recent years because of their potential savings in energy and total cost, in some cases over 30-40% compared to systems of conventional columns. Furthermore, TCD has a richer space of alternative designs than conventional separation systems2. Most chemical process simulators include models for side columns or even Petlyuk-type configurations. But for thermally coupled systems involving more than two columns (and in some cases even two columns), simulation is difficult because the two side flows connecting the columns produce systems with a large number of 'recycle' streams (in a modular simulator these recycles are converged through tear streams). Whatever method is used to converge the cyclic structure of the flowsheet (direct substitution, Newton or quasi-Newton methods), good initial values close to the final solution are mandatory to converge the system while maintaining the product specifications. A large number of tear streams slows down the simulation and makes the problem difficult to converge.
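The tear-stream convergence just described can be illustrated with a toy direct-substitution loop. The two-unit recycle below is hypothetical, not one of the paper's flowsheets: each pass re-evaluates the flowsheet with the current tear-stream guess until the guess stops changing.

```python
def converge_tear(g, x0, damping=1.0, tol=1e-8, max_iter=500):
    """Direct-substitution convergence of a single tear stream.

    g       : one flowsheet pass, maps a tear-stream guess to its recomputed value
    x0      : initial guess (good initial values are critical, as noted above)
    damping : fraction of the new value accepted each pass (1.0 = plain substitution)
    """
    x = x0
    for it in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, it + 1
        x = x + damping * (x_new - x)
    raise RuntimeError("tear stream did not converge")

# Hypothetical linear recycle: the recomputed flow is 40 + 0.6 * (guessed flow),
# so the fixed point is 40 / (1 - 0.6) = 100 kmol/h.
flow, iters = converge_tear(lambda x: 40.0 + 0.6 * x, x0=0.0)
```

With a contraction factor of 0.6 the iteration converges geometrically; a poor initial guess or a near-unity factor is what makes real multi-tear flowsheets slow and fragile.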
2. Application of the new strategies of simulation: "acyclic system simulation"
The basic idea of this paper is to avoid the recycle structures that TCD systems create in modular process simulators. It is based on the work of Carlberg and Westerberg3,4, who proved, in the context of the Underwood shortcut method, that the two side streams connecting the rectifying section of the first column (see Figure 1a) with column 2 are equivalent to a superheated vapor stream whose flow is the net flow (the difference between the vapor exiting the column and the liquid entering it), Figure 1b. If the two side streams connect the stripping section of the first column with the second column, they are equivalent to a single subcooled liquid stream whose flow is the net flow (in this case liquid minus vapor flow), see Figures 1c,d.
Figure 1. a,b,e equivalent configurations; c,d,f equivalent configurations
However, this approach cannot in general be implemented in modular process simulators, because the degree of superheating and/or subcooling could be so large that it produces results without physical meaning and therefore leads to convergence failure in the simulator. Fortunately, this problem can be solved by substituting the superheated or subcooled stream by a combination of a material and an energy stream. In the rectifying section, the material stream is vapor at its dew point and the energy stream equals the energy that would be removed by a partial condenser providing reflux to the first column (see Figure 1e). In the stripping section, the material stream is liquid at its bubble point and the energy stream equals the energy that would be added by a reboiler providing vapor to the first column (see Figure 1f). Although this strategy is only an artificial representation used to simulate the behavior of thermally coupled systems without recycles, the results are good if the streams
introduced into/withdrawn from column 2 are in equilibrium with the liquid and vapor flowing through this column (V1C1 with L2C2), see Figure 2.
Figure 2. Details of the connection between columns, "Cyclic system simulation"
Figure 3. Details of the connection between columns, "Acyclic system simulation"
Unfortunately, the equilibrium assumption is not entirely true. The Carlberg & Westerberg approximation assumes that there is no mass exchange between the vapor and liquid streams. In the rigorous simulation, the energy stream is used to simulate the removal of the liquid that is withdrawn from column 2 to column 1: it vaporizes part of the liquid stream, an amount equivalent to the liquid removed (see Figure 3). This is the main source of error. But if the vapor and liquid streams are introduced into/withdrawn from the same tray, the error introduced is small and can usually be neglected. In the worst case, the values obtained with this technique still provide excellent initial points to converge the rigorous simulations of the original system.
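As a numerical illustration of the substitution, a small helper can return the single material stream and the accompanying energy stream for a thermal couple. The flows and the constant latent heat below are hypothetical; a simulator would of course evaluate rigorous stream enthalpies.

```python
def decouple(V, L, latent_heat):
    """Replace the two side streams of a thermal couple by one material
    stream plus one energy stream (flows in kmol/h, latent heat in MJ/kmol).

    Rectifying-section couple (V > L): the net flow F = V - L leaves as vapor
    at its dew point, and the energy stream is the duty of the hypothetical
    partial condenser that generates the reflux L (negative = heat removed).
    Stripping-section couple (L > V): F = L - V leaves as liquid at its bubble
    point, and the energy stream is the duty of the hypothetical reboiler
    that generates the boil-up V (positive = heat added).
    """
    if V > L:
        return {"F": V - L, "phase": "saturated vapor", "Q": -L * latent_heat}
    return {"F": L - V, "phase": "saturated liquid", "Q": V * latent_heat}

# Rectifying couple: 300 kmol/h vapor up, 200 kmol/h liquid back.
# -> net stream of 100 kmol/h saturated vapor plus 6000 MJ/h of heat removed.
s = decouple(V=300.0, L=200.0, latent_heat=30.0)
```

The material stream and the energy stream together carry the same net mass and energy as the original pair of side streams, which is why the acyclic flowsheet reproduces the coupled one so closely.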
3. Examples and results
Different examples are presented, including mixtures of hydrocarbons (C4's - C5's - C6's), aromatics (BTX), alcohols, non-ideal azeotropic systems (acetone, benzene, chloroform) and systems involving 4 or 5 components. Different thermodynamically equivalent configurations, corresponding to different alternatives for implementing this approach, are also described. All simulations were performed using Aspen HYSYS. The parameters used to compare the cyclic and acyclic simulations are the reboiler and condenser duties and the internal vapor/liquid flows. First, three-component systems were studied: a mixture of aromatics (benzene, toluene, p-xylene), alcohols (methanol, ethanol, butanol) and hydrocarbons (n-hexane, n-heptane, n-octane). In all cases, the cyclic and acyclic simulations were solved independently, although the results of the acyclic simulation were used as initial points for the actual system (with cyclic structure), see Figure 4.
Figure 4: Simulations of acyclic and cyclic system configurations.
Difficult separations involving three-component systems were also studied: a) similar volatilities (i-butane, n-butane, cyclobutane); b) azeotropic systems (benzene - acetone - chloroform); c) a multi-component mixture that must be separated into C3's, C4's and C5's groups. The separation of 4 components (butane - pentane - hexane - heptane) was also studied, in a sequence with 16 thermodynamically equivalent configurations; in this case, 3 configurations were examined using the same methodology (see Figure 5). Finally, the separation of a mixture of 5 components is also presented.
Figure 5: Thermodynamically equivalent configurations
The results obtained for these systems are shown in Table 1:

System | Internal flow: max. error | avg. error | st. deviation | Energy: max. error
3 components
i-Butane - n-butane - cyclobutane | 3.42% | 0.74% | 0.87% | 0.04%
Hexane - heptane - octane | 3.47% | 0.96% | 0.96% | 0.08%
Methanol - ethanol - butanol | 3.94% | 0.86% | 0.92% | 0.08%
Benzene - toluene - xylene | 4.55% | 1.46% | 1.40% | 0.19%
Azeotropic distillation | 4.04% | 1.09% | 0.98% | 0.36%
C4's - C5's - C6's | 6.72% | 1.43% | 1.72% | 0.05%
Acetone - acetic acid - acetic anhydride | 18.40% | 3.91% | 4.61% | 0.91%
4 components (butane - pentane - hexane - heptane)
Configuration 1 | 21.16% | 4.88% | 6.00% | 0.41%
Configuration 2 | 22.42% | 5.03% | 6.16% | 0.12%
Configuration 3 | 22.42% | 4.86% | 6.07% | 0.41%
5 components (butane - pentane - hexane - heptane - octane)
Configuration 1 | 79.27% | 23.61% | 31.46% | 0.19%

Table 1: Results obtained for all systems
The internal flows of the worst case studied (Acetone - Acetic Acid - Acetic Anhydride) can be seen in the following figures:
Figure 12: Comparison of the liquid and vapor flow profiles along the trays of columns 1 and 2, cyclic vs. acyclic simulation, for the acetone - acetic acid - acetic anhydride (AAA) separation
4. Conclusions
The application of the new strategy for simulating thermally coupled distillation sequences in process simulators to several case studies has shown that the results obtained with the acyclic-sequence technique are very close to those obtained with the recycle calculation, with average errors below 2% for 3-component mixtures. The average error increases slightly with the number of components, due to error propagation through the larger number of thermally coupled columns in the system. However, in all cases the "acyclic simulation" produces excellent results, comparable with those of the actual system. Furthermore, the new strategy provides very good starting points for converging the rigorous simulations of these systems. In conclusion, this technique works very well for quickly and easily studying thermally coupled distillation systems for the separation of 3-, 4- or 5-component mixtures.
Acknowledgements
The authors gratefully acknowledge the financial support from the "Ministerio de Ciencia e Innovación" of Spain under project CTQ2009-14420-C02-02.
References
1. Soave G., Feliu J. A., 2002, Saving energy in distillation towers by feed splitting, Applied Thermal Engineering, 22 (8), 889.
2. Giridhar A., Agrawal R., 2010, Synthesis of distillation configurations. II: A search formulation for basic configurations, Computers & Chemical Engineering, 34 (1), 84.
3. Carlberg N. A., Westerberg A. W., 1989, Temperature-heat diagrams for complex columns. 2. Underwood's method for side strippers and enrichers, Industrial & Engineering Chemistry Research, 28 (9), 1379-1386.
4. Carlberg N. A., Westerberg A. W., 1989, Temperature-heat diagrams for complex columns. 3. Underwood's method for the Petlyuk configuration, Industrial & Engineering Chemistry Research, 28 (9), 1386-1397.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Spatiotemporal pattern formation in an electrochemical membrane reactor during deep CO removal from reformate gas
Richard Hanke-Rauschenbacha,*, Sebastian Kirscha and Kai Sundmachera,b
a Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany
b Process Systems Engineering, Otto-von-Guericke University, Universitätsplatz 2, 39106 Magdeburg, Germany
*E-mail address: [email protected]
Abstract
The preferential oxidation of CO from reformate gas in a spatially distributed electrochemical membrane reactor has been investigated. The reactor shows oscillations of the electric potential in space and time when operated in galvanostatic mode. The operating behavior is complex and not straightforward to predict, which hampers the application of classical methods for the design of such a reactor. In the present work, a model-based approach is discussed to characterize the oscillations and their influence on the performance of the reactor.
Keywords: H2 production, electrochemical membrane reactor, nonlinear dynamics.
1. Introduction
One of the key issues limiting the application of proton exchange membrane fuel cells is their susceptibility to traces of carbon monoxide within the hydrogen used as fuel. CO is produced in substantial amounts during the conversion of hydrocarbons to hydrogen-rich syngas. Typically, the fuel processor is coupled to or followed by a water-gas shift system, which reduces the CO content to a level of 1-3 vol%. In a subsequent removal step the CO content has to be decreased to tolerable levels of 10-30 ppm. Regarding this final purification step, the preferential oxidation (PrOx) of CO currently seems to be the most promising option for fuel cell systems with on-site hydrogen production. Recently, Zhang and Datta [1] suggested a novel approach involving the electrochemical preferential oxidation (ECPrOx) of CO, which might have the potential to replace the PrOx step in the above scheme. The main advantage, in comparison to the PrOx concept, is that non-selectively oxidized hydrogen is converted into electrical energy instead of being burned.

Anode reactions:
CO + H2O ↔ CO2 + 2H⁺ + 2e⁻  (desired)
H2 ↔ 2H⁺ + 2e⁻  (undesired)

Cathode reaction:
O2 + 4H⁺ + 4e⁻ → 2H2O
The design of such an ECPrOx reactor is similar to a PEM fuel cell, except that a platinum-ruthenium alloy is used as the anode catalyst instead of platinum. When operated in galvanostatic mode the reactor exhibits oscillations of the cell voltage, which allow for a selective electro-oxidation of CO at relatively low overpotentials [1-3]. The reason for this behavior is a cyclic interplay of the CO surface coverage θCO and the anode overpotential ηA (Fig. 1).
In our previous contributions [4-6], the gradient-less system was investigated. It has been shown that the selectivity of O2 towards CO2 decreases with increasing O2 conversion (i.e. with increasing cell current). This suggests either the use of a reactor cascade or the use of a spatially distributed reactor. For the cascade, two different electrical configurations exist: (i) an electrical series connection and (ii) an electrical parallel connection of the reactors. While the former leads to a significant increase in the selectivity, the latter does not (Fig. 2). The reason identified for this behavior is the rigid electrical coupling between the reactors introduced by the electrical parallel connection. This coupling causes a synchronization of the oscillation frequencies of the single reactors, which leads to an enslavement of all the upstream reactors by the last reactor downstream in the cascade [5].

Figure 1. Two-phase mechanism explaining the autonomous potential oscillations of the system: (a) graphic representation of the interplay between the key variables; (b) and (c) time evolution of the CO surface coverage θCO and anode overvoltage ηA.

In the present contribution, the behavior of the spatially distributed reactor is analyzed. The above-mentioned unfavorable electrical coupling is an intrinsic property of this system, introduced by the lateral electrolytic resistance. The system shows complex patterns in space and time [7,8]. Parameters influencing the patterns, and thereby the reactor performance, are the residence time, the CO mole fraction at the reactor inlet, the electrolyte conductivity and its geometry, as well as the applied current. The prediction of the operating behavior is not straightforward, which hampers the design of such a reactor by means of classical methods. Therefore, a model-based approach has been chosen here, in order to gain insight into the underlying phenomena and to prepare a validating experiment. The model is briefly introduced in the following section. Subsequently, the influence of selected design and operating parameters on the type of pattern and the reactor performance is discussed.

Figure 2: Hydrogen recovery degree εH2 as a function of CO conversion XCO for two coupled ECPrOx reactors (the current density is the parameter of the curves): a) qualitative prediction [5] and b) experimental proof [6].
2. Model
To capture the essential dynamics of the system, a transient, isothermal, spatially one-dimensional distributed reactor model has been employed [7]. It primarily considers changes of the state variables in the direction along the channel (z-coordinate); for the membrane, changes in the through-plane direction (y-coordinate) are additionally accounted for. A couple of simplifications were made during model development (see [7] for details); here only the governing equations are briefly collected. The unknown profiles of the CO mole fraction xCO(z,t) and the gas velocity v(z,t) within the anode channel are determined by the following material balances:

$$\frac{p}{RT}\frac{\partial x_{\mathrm{CO}}}{\partial t} = -\frac{p}{RT}\frac{\partial (x_{\mathrm{CO}}\, v)}{\partial z} - \sigma_{\mathrm{CO}}, \qquad \text{BC: } x_{\mathrm{CO}}(t, z=0) = x_{\mathrm{CO,in}}$$

$$0 = -\frac{p}{RT}\frac{\partial v}{\partial z} - \sum_{\alpha} \sigma_{\alpha}, \quad \alpha = \{\mathrm{H_2, CO, H_2O}\}, \qquad \text{BC: } v(t, z=0) = v_{\mathrm{in}}$$

The balances for the species at the catalyst surface yield the local profiles of the coverages θCO(z,t), θH(z,t) and θOH(z,t) and the fraction of free adsorption sites θ0(z,t):

$$\frac{\gamma_C}{t^{*}}\frac{\partial \theta_{\beta}}{\partial t} = \sigma_{\beta}, \quad \beta = \{\mathrm{CO, H, OH}\}, \qquad \theta_0 = 1 - \theta_{\mathrm{CO}} - \theta_{\mathrm{H}} - \theta_{\mathrm{OH}}$$

The material balances are completed by a set of nonlinear expressions describing the kinetics of the sorption processes and the electrochemical electrode reactions. They are given in [7] and enter the model through the source terms σ. The rate of the electrochemical reactions depends on the electrode potential, which needs to be determined from the charge balances for the electrolyte and the anodic and cathodic electrochemical double layers:

$$0 = -\kappa \frac{\partial^2 \varphi_m}{\partial z^2} - \kappa \frac{\partial^2 \varphi_m}{\partial y^2}, \qquad \text{BCs: } \left.\frac{\partial \varphi_m}{\partial z}\right|_{z=0} = 0, \quad \left.\frac{\partial \varphi_m}{\partial z}\right|_{z=L} = 0 \quad \forall y, t$$

$$\varphi_m(z, y=0, t) = \varphi_{ma}(z,t), \qquad \varphi_m(z, y=d, t) = \varphi_{mc}(z,t)$$

$$c_{dla}\frac{\partial (\varphi_a - \varphi_{ma})}{\partial t} = -\kappa \left.\frac{\partial \varphi_m}{\partial y}\right|_{y=0} + \sigma_{e^-}^{a}, \qquad c_{dlc}\frac{\partial (\varphi_c - \varphi_{mc})}{\partial t} = \kappa \left.\frac{\partial \varphi_m}{\partial y}\right|_{y=d} + \sigma_{e^-}^{c}$$

$$\varphi_a: \quad 0 = \frac{I}{A} - \int_{z=0}^{L} \left(-\kappa \left.\frac{\partial \varphi_m}{\partial y}\right|_{y=0}\right) dz, \qquad \varphi_c: \quad \varphi_c = 0 \;\; \forall z, t \;\; \text{(grounded cathode)}$$

The final model consists of eight nonlinear, coupled partial differential equations and one implicit algebraic relation. Due to the high numerical effort for the solution of the model (up to four days on an up-to-date desktop computer), the set of equations has been further simplified [8]. For this purpose a couple of quasi-stationarity assumptions have been incorporated, which allow for the analytical solution of some parts of the model.
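To give a feel for how a distributed model like this is marched in time, the CO channel balance alone can be discretized along z with a first-order upwind scheme and integrated by explicit Euler. This is a deliberately stripped-down sketch: a hypothetical constant first-order sink replaces the real source term σCO, and the velocity is held constant, whereas the full model couples eight PDEs.

```python
def integrate_xco(x_in, v, k, length, n_cells, dt, n_steps):
    """March d(xCO)/dt = -v * d(xCO)/dz - k * xCO on a uniform grid.

    x_in : inlet mole fraction (boundary condition at z = 0)
    v    : constant gas velocity in m/s (hypothetical simplification)
    k    : first-order consumption rate constant in 1/s (stand-in for sigma_CO)
    """
    dz = length / n_cells
    x = [0.0] * n_cells                       # initial profile: CO-free channel
    for _ in range(n_steps):
        new = x[:]
        for i in range(n_cells):
            upstream = x_in if i == 0 else x[i - 1]   # upwind: use the upstream cell
            new[i] = x[i] + dt * (-v * (x[i] - upstream) / dz - k * x[i])
        x = new
    return x

# The steady state of v dx/dz = -k x is x(z) = x_in * exp(-k z / v), so with
# k L / v = 0.5 the outlet fraction should approach x_in * exp(-0.5).
profile = integrate_xco(x_in=0.01, v=0.1, k=0.5, length=0.1,
                        n_cells=50, dt=1e-3, n_steps=20000)
```

The time step respects the CFL condition (v·dt/dz = 0.05), and 20 s of simulated time is long relative to the 1 s residence time, so the profile has reached its steady state.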
3. Results and Discussion
To elucidate the impact of the intrinsic electric couplings on the reactor performance, four qualitatively different scenarios are considered (Tab. 1). In scenario "A" the model is studied in the limit of zero conductivity, representing a situation without electric coupling. Scenarios "B" and "C" represent situations with increased conductivity (B: 1 S/m and C: 10 S/m) to study the influence of migration-coupling and mean-field-coupling, respectively. In scenario "D" complete backmixing is considered, representing the spatially lumped system as a reference scenario.

Table 1: Definition of the different scenarios used to study the influence of the electric couplings.
Scenario | Reactor type | Electric coupling
A | PFTR | none
B | PFTR | migration coupling
C | PFTR | mean-field coupling
D | CSTR | none

The performance of the reactor in the respective scenarios is compared by means of the time-averaged hydrogen recovery degree εH2 and carbon monoxide conversion XCO, which are defined as:
$$\varepsilon_{\mathrm{H_2}} = \lim_{T \to \infty} \frac{1}{T} \int_{t}^{t+T} \frac{G_{\mathrm{H_2,out}}(\tau)}{G_{\mathrm{H_2,in}}}\, d\tau, \qquad X_{\mathrm{CO}} = \lim_{T \to \infty} \frac{1}{T} \int_{t}^{t+T} \frac{G_{\mathrm{CO,in}} - G_{\mathrm{CO,out}}(\tau)}{G_{\mathrm{CO,in}}}\, d\tau$$
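In practice these averages are evaluated from sampled outlet flow rates over an integer number of oscillation periods. A minimal stdlib-only sketch with hypothetical sampled signals:

```python
import math

def time_average(t, y):
    """Trapezoidal time average of a sampled signal y(t) over [t[0], t[-1]]."""
    area = sum((t[i + 1] - t[i]) * (y[i] + y[i + 1]) / 2.0
               for i in range(len(t) - 1))
    return area / (t[-1] - t[0])

# Hypothetical periodic outlet flows, sampled over exactly two oscillation periods.
t = [k * 0.01 for k in range(201)]                                  # 0 .. 2 s, period 1 s
g_h2_out = [0.9 + 0.05 * math.sin(2 * math.pi * tk) for tk in t]    # mol/s
g_co_out = [0.02 + 0.01 * math.sin(2 * math.pi * tk) for tk in t]   # mol/s
G_H2_in, G_CO_in = 1.0, 0.10                                        # inlet flows, mol/s

eps_h2 = time_average(t, g_h2_out) / G_H2_in          # hydrogen recovery degree
x_co = 1.0 - time_average(t, g_co_out) / G_CO_in      # CO conversion
```

Because the averaging window spans whole periods, the oscillatory parts cancel and the averages reduce to the mean flow ratios (here 0.90 and 0.80).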
Both quantities take values between zero and one. The most desirable operating point would be XCO = 1 and εH2 = 1, meaning that all CO is oxidized and all hydrogen recovered (corresponding to a selectivity of O2 towards CO2 of SCO2,O2 = 1). The symbols GH2,in and GCO,in stand for the molar flow rates of hydrogen and carbon monoxide at the inlet of the reactor. GH2,out and GCO,out are the periodically changing H2 and CO molar outlet flow rates, which can easily be measured in an experiment or calculated from the model above.
Fig. 3 compares the different scenarios for various applied currents. In general, with increasing current XCO increases, while εH2 decreases due to increasing hydrogen consumption. However, significant losses in performance are seen for scenarios B-D, meaning that electric coupling as well as stirring have a negative impact.

Figure 3: Hydrogen recovery degree εH2 as a function of CO conversion XCO for the different scenarios.

To relate the performance degradation to pattern formation, space-time plots of the anode overpotential Δφa and derived amplitude spectra are compared at a given current density (0.27 A/cm2) in Fig. 4. The amplitude spectra show the amplitudes of the frequencies contributing to the local oscillations (in logarithmic grey scale) along the spatial coordinate [7].

Figure 4: Spatio-temporal profiles of Δφa (top row) and the respective amplitude spectra (bottom).

From Fig. 4 A (left column) it can be seen that in scenario A each reaction site oscillates with an intrinsic frequency (the lowest curve in the amplitude spectra; approx. 3 Hz at the reactor inlet and 0.6 Hz at the outlet). Other contributions to the signal mark the higher harmonics. The reason for the observed behavior is the oxidation of carbon monoxide and the resulting decrease of the CO content along the channel. Due to the missing electric interaction the anode overpotential lacks any spatial order, and the mean anode overpotential Δφa (the most easily accessible variable in the experiment) shows a steady signal because the local oscillations cancel out on average. In Fig. 4 B (center column) the impact of migration-coupling can be studied. For the given parameters migration-coupling is the dominant electric interaction and leads to spatiotemporally chaotic behavior (e.g. see the broad distribution of frequencies). However, its impact on the performance (Fig. 3) is limited because the oscillation frequency of each reaction site remains approximately its intrinsic frequency. Finally, in scenario C (Fig. 4, right column) the performance loss is more dramatic. At the given parameters mean-field-coupling is the dominant coupling process, which leads to a strict entanglement of the oscillations. As the downstream part of the reactor effectively forces the upstream part to slow down its oscillations (compare amplitude spectra A and C), the inlet region is CO-saturated most of the time, to such a degree that even during the short oxidation phase (Fig. 1, "Phase 2") no CO can be oxidised, as no free sites for H2O dissociation are left. The upstream part is dead, in accordance with the results for two ECPrOx reactors connected in parallel (see introduction). To summarize, the impact of the membrane conductivity on pattern formation was studied, which in turn influences the reactor performance as seen above. However, the conductivity also influences the ohmic losses in the reactor; therefore, from a principal point of view, the conductivity should be maximal. The partial saturation of the 1D reactor, or the enslavement of upstream reactors (if many are connected in parallel) due to intrinsic mean-field-coupling, is therefore a problem of high relevance for the design of a future ECPrOx system.
References
[1] J.X. Zhang and R. Datta, J. Electrochem. Soc. 152, A1180 (2005)
[2] J.X. Zhang and R. Datta, J. Electrochem. Soc. 149, A1423 (2002)
[3] J.X. Zhang, J.D. Fehribach and R. Datta, J. Electrochem. Soc. 151, A689 (2004)
[4] H. Lu, L. Rihko-Struckmann, R. Hanke-Rauschenbach et al., Top. Catal. 51, 89 (2008)
[5] R. Hanke-Rauschenbach, C. Weinzierl, M. Krasnik et al., J. Electrochem. Soc. 156, B1267
[6] H. Lu, L. Rihko, R. Hanke-Rauschenbach et al., Electrochim. Acta 54, 1184 (2009)
[7] R. Hanke-Rauschenbach, S. Kirsch, R. Kelling et al., J. Electrochem. Soc. 157, B1521 (2010)
[8] S. Kirsch, R. Hanke-Rauschenbach and K. Sundmacher, J. Electrochem. Soc. (2011), accepted
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Optimization of Design and Operation of Reverse Osmosis Based Desalination Process Using MINLP Approach Incorporating Fouling Effect
Kamal M. Sassia, Iqbal M. Mujtabaa
a School of Engineering Design and Technology, University of Bradford, Bradford, West Yorkshire, BD7 1DP, UK, Email: [email protected]
Abstract
The synthesis of reverse osmosis (RO) networks for water desalination is investigated here by a state-space approach via a superstructure problem. The proposed superstructure considers every possible connection between the process units. The effect of fouling is described by an exponential function representing the decline in the water permeability coefficient. Optimal designs of the RO layout are obtained with an MINLP technique for brackish water desalination using spiral-wound membrane elements. In this work, a variable fouling profile along the membrane stages is considered. The total annualized cost of the RO network is minimized in order to find the optimal operation and configuration of the RO system. It was found that the optimal design and operation of the RO process are sensitive to the fouling distribution between stages even when the overall fouling remains constant.
Keywords: Reverse osmosis, optimum design, fouling
1. Introduction
One of the most pervasive problems troubling people throughout the world is the lack of fresh water. Recently, seawater desalination by RO has become the main source of drinking water supply in many regions of the world. RO membranes used in desalination are capable of producing good water quality by removing most of the salts and some other contaminants from water sources. The most critical obstacle restricting further growth and wider application of membrane separation processes is fouling. Fouling generally results in decreased permeate flux, decreased product quality and increased feed pressure to maintain the fresh water demand. Usually fouling increases the energy and chemicals consumption, due to frequent membrane cleaning to remove foulants, and consequently results in a higher treatment cost (Seidel and Elimelech, 2002). Several researchers have optimized RO network design using an MINLP approach (El-Halwagi, 1992; Zhu et al., 1997; Lu et al., 2007). Uppaluri et al. (2004) used a stochastic optimization technique based on simulated annealing for the design of membrane networks. In this work, an RO network design problem based on a superstructure is formulated as an MINLP problem, which is solved using the outer-approximation algorithm within the gPROMS software. Most previous studies treated the permeability decline rate due to fouling as an average, equal for all RO stages, for design and optimization purposes. Here, different cases with varying fouling percentages in each stage are considered in order to predict their impact on the design and cost of the RO process.
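The fouling description mentioned above reduces to a time-dependent water permeability. A minimal sketch of the exponential decline (the rate constant and clean-membrane value below are purely illustrative, not fitted values from the study):

```python
import math

def permeability(A0, alpha, t_days):
    """Water permeability after t_days of operation, A(t) = A0 * exp(-alpha * t).

    A0    : clean-membrane water permeability coefficient
    alpha : fouling decline rate constant in 1/day (illustrative value)
    """
    return A0 * math.exp(-alpha * t_days)

A0 = 9.0e-7            # m / (s bar), illustrative order of magnitude
alpha = 0.002          # hypothetical decline constant, 1/day
A_end = permeability(A0, alpha, 180)   # permeability at the end of the 180-day horizon
loss = 1.0 - A_end / A0                # fractional permeability loss over the horizon
```

A stage-dependent alpha is what gives the "variable fouling profile along membrane stages" studied in the paper: the same overall loss can be distributed unevenly between stages.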
2. RO Network Superstructure for the RO process configuration is shown in Fig. 1. Some connections are excluded from the general superstructure for simplification, for example, brine recycle stream to the same stage or mixing permeate stream with the brine are not shown. Each RO stage is assumed to contain parallel RO modules that accommodate the same type of membrane element and operate under the same operation conditions. Note that the mixing of streams is only allowable for the streams with equal pressure.
Fig. 1 Superstructure for a two-stage RO process
3. Optimization Problem Formulation
The optimization problem is described as follows.
Given: a fixed water demand and salt concentration, the feed specification and the design specifications of each membrane element.
Optimize: the number of stages, the number of pressure vessels (PV) in each stage, the feed pressure and flow, the brine recycle stream from the last stage to the feed inlet, the existence of a turbine and of a brine bypass stream at the outlet of the first stage, and the existence of an inter-stage booster pump and its outlet pressure.
Minimize: the total annualized cost.
Subject to: equality constraints, such as the process model and the product demand and quality; and inequality constraints, such as linear bounds on the optimization variables.
Mathematically, the problem is to minimize TAC subject to the model equations, the product demand and quality specifications, and bounds on the decision variables, where TAC is the objective function representing the total annualized cost of the candidate configuration satisfying the operating and design restrictions. The most important cost components affecting the produced water price are included and given in Fig. 2 (Lu et al., 2007). The continuous decision variables represent the feed pressure, feed flow, brine bypass fraction and brine recycle fraction, respectively. S, N, d and a fourth integer parameter represent the number of stages, the number of PVs in
each stage, the number of pumps and the number of turbines. The mathematical model equations for the RO module used in this work are given in Sassi and Mujtaba (2010). The cost terms defined in Fig. 2 comprise the total annualized cost ($ y-1), the feed pre-treatment cost ($), the pump and turbine capital costs ($), the membrane module cost ($), the net pumping cost ($ y-1), the chemical treatment cost ($ y-1), the membrane replacement cost ($ y-1) and the annual spares cost ($ y-1).
Fig. 2 Cost equations
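The individual cost equations of Fig. 2 are not reproduced in this text. As a purely illustrative sketch of how a total annualized cost of this form can be assembled, the snippet below annualizes capital items with a capital recovery factor and adds annual operating items; the annualization factor, the formulas and all numbers are assumptions, not the authors' cost model:

```python
# Hypothetical sketch of assembling a total annualized cost (TAC) for an RO
# network. The cost components mirror the labels of Fig. 2, but the formulas,
# the annualization factor and the numbers are illustrative assumptions only.

def annualize(capital_cost, interest=0.08, years=20):
    """Spread a one-off capital cost over the plant life using a
    capital recovery factor (assumed interest rate and lifetime)."""
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return capital_cost * crf

def total_annualized_cost(capital_items, operating_items):
    """TAC ($/y) = annualized capital ($/y) + annual operating cost ($/y)."""
    return annualize(sum(capital_items.values())) + sum(operating_items.values())

capital = {                      # one-off costs ($), cf. Fig. 2 labels
    "feed_pretreatment": 50_000,
    "pumps_and_turbines": 40_000,
    "membrane_modules": 11 * 900,    # e.g. 11 modules at 900 $ each (Table 1)
}
operating = {                    # annual costs ($/y)
    "net_pumping": 13_000,
    "chemical_treatment": 5_000,
    "membrane_replacement": 2_000,
    "spares": 1_000,
}
tac = total_annualized_cost(capital, operating)
```

In the actual MINLP the capital terms depend on the integer decisions (number of PVs, pumps and turbines), so TAC becomes a function of both the continuous and the discrete variables.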
4. Case Study
The MINLP problem was solved to optimize the configuration and operating parameters of the RO process at a given demand. The characteristics of the spiral-wound membrane used here are presented in Abbas (2005), and the parameters used in the optimization calculations are given in Table 1. For 180 days of operation, several cases were solved in which the fouling percentage in each stage varies while the total production is maintained at about 40 m3 h-1 with a maximum salt concentration of 100 ppm.

Table 1 Input parameters
Parameter                                Value    Ref.
Membrane module cost ($)                 900      Lu et al. (2007)
PV cost ($)                              1000     Lu et al. (2007)
Feed temperature (°C)                    25       Abbas (2005)
Maximum operating pressure (bar)         41       Abbas (2005)
Maximum flow rate per module (m3 h-1)    19       Lu et al. (2007)
Turbine efficiency (%)                   80       Assumed
Pump efficiency (%)                      75       Lu et al. (2007)
Electricity cost ($ kWh-1)               0.08     Lu et al. (2007)
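As a rough consistency check of the magnitudes involved, the gross hydraulic power of the high-pressure feed pump can be estimated from W = Q·ΔP/η. The sketch below uses the feed conditions reported later for the Xf1 = 50 case (Table 2) together with the efficiency and electricity price of Table 1; it is an illustrative back-of-the-envelope calculation, not part of the authors' model:

```python
# Gross hydraulic power of the high-pressure feed pump, W = Q * dP / eta.
# Feed flow and pressure are taken from the Xf1 = 50 case (Table 2); pump
# efficiency and electricity price from Table 1. Suction pressure is assumed
# to be roughly atmospheric, and turbine energy recovery is ignored here.

feed_flow_m3h = 45.5        # m3/h
feed_pressure_bar = 22.6    # bar
pump_efficiency = 0.75
electricity_cost = 0.08     # $/kWh

q = feed_flow_m3h / 3600.0              # m3/s
dp = feed_pressure_bar * 1e5            # Pa
power_kw = q * dp / pump_efficiency / 1000.0
print(f"pump power ~ {power_kw:.1f} kW")

# Electricity cost over the 180-day operating period of Section 4:
energy_cost = power_kw * 24 * 180 * electricity_cost
print(f"pumping electricity over 180 days ~ ${energy_cost:,.0f}")
```

The resulting power of a few tens of kW is consistent with pumping energy being a significant share of the operating cost, which is why the feed pressure trajectories of Fig. 4 matter for the economics.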
4.1. Membrane Fouling
Most previous models of reverse osmosis do not take into account the effect of fouling (Oh et al., 2009). In this work, the exponential function proposed by Al-Bastaki (2004) was used to represent the decline in the water permeability coefficient: the permeability is approximated as the product of its initial value and a fouling factor F, which decays exponentially with operating time and represents the loss of permeability caused by fouling and scaling (Al-Bastaki, 2004).
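A minimal sketch of this permeability-decline model is given below, assuming a simple exponential fouling factor F(t) = exp(−k·t); the exact expression and decay constant of Al-Bastaki (2004) are not reproduced in this text, so k is an illustrative assumption:

```python
import math

def water_permeability(kw0, t_days, k=0.005):
    """Water permeability Kw(t) = Kw0 * F(t), with an ASSUMED exponential
    fouling factor F(t) = exp(-k * t) representing the permeability decay
    caused by fouling and scaling. k (1/day) is illustrative only."""
    return kw0 * math.exp(-k * t_days)

def stage_decay_constants(k_total, xf1_percent):
    """Split an overall decay constant between two stages according to the
    first stage's fouling share Xf1 (%), e.g. Xf1 = 60 means the first
    stage carries 60% of the total fouling."""
    k1 = k_total * xf1_percent / 100.0
    k2 = k_total * (100.0 - xf1_percent) / 100.0
    return k1, k2
```

With Xf1 = 50 both stages share the same decay constant, recovering the "average fouling" assumption of earlier studies.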
In the past, for simulation and optimization purposes, the effect of fouling on an RO unit with more than one stage has been assumed to be uniform, i.e. the decrease in water permeability with time has the same rate in all stages, and the water flux through the membrane surface, which is a function of fouling, has been predicted using a fixed permeability decline rate for all stages. However, many researchers have shown that the extent of fouling varies between the stages of an RO process and depends on the stage location in the process layout (Zhu et al., 1997; Huiting et al., 2001; Vrouwenvelder et al., 2009).
In this work, the fouling effect is incorporated within the MINLP optimization formulation. For a two-stage RO process, different fouling percentages in the membrane stages are assumed: the permeability coefficients take different values depending on the stage position in the processing array. Xf1 represents the percentage of the total fouling occurring in the first stage; if the fouling extent is assumed equal in all stages (the average case), the permeability coefficient of the first stage equals that of the second and Xf1 = 50.
4.2. MINLP Optimization Results
Table 2 shows the optimization results obtained for different fouling distributions. The solutions converge to a region of the search space where installing an inter-stage booster pump is not favoured in any fouling scenario, because its added capital cost outweighs the gain from the extra permeate produced, as shown in Fig. 3. Bypassing part of the brine is also undesirable, because the bypass increases the operating cost without a considerable improvement in product quantity. Since mixing of streams at different pressures was excluded when constructing the superstructure, the brine can only be recycled after passing through the turbine, which makes brine recycle not always an attractive option. In an attempt to minimize pretreatment and chemical costs, about 4.6% of the brine is recycled at Xf1 = 50 (Fig. 3a).

Table 2 Summary of MINLP optimization results
Xf1                                 50 (avg.)   60        80
Optimum process layout              Fig. 3a     Fig. 3b   Fig. 3b
Number of PV in stage 1             8           7         3
Number of PV in stage 2             3           5         9
Permeate concentration (ppm)        64          73        79
Feed flow (m3 h-1)                  45.5        44.4      44.3
Feed pressure (bar)                 22.6        21.1      17.9
Overall water recovery (%)          88.8        89.9      90.0
Outlet brine recycle (%)            4.6         0         0
Total annualized cost ($ y-1)       88301       87254     83087
Product cost ($ m-3 of permeate)    0.252       0.249     0.237
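The reported product costs are consistent with the TAC divided by the annual permeate volume, assuming the 40 m3 h-1 demand is produced continuously over 8760 h per year (an assumption inferred from the numbers, not stated explicitly in the text):

```python
# Consistency check on Table 2: product cost ($/m3) = TAC ($/y) divided by
# the annual permeate volume, assuming continuous production of the 40 m3/h
# demand over a full 8760-hour year.
demand_m3h = 40.0
annual_permeate = demand_m3h * 8760        # 350,400 m3/y

cases = {50: 88301, 60: 87254, 80: 83087}  # Xf1 (%) -> TAC ($/y), Table 2
for xf1, tac in cases.items():
    print(f"Xf1={xf1}: {tac / annual_permeate:.3f} $/m3")
# prints 0.252, 0.249 and 0.237, matching the last row of Table 2
```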
Fig. 3 Optimum process arrangements

The identified process layout changes as the fouling level in the first stage varies. As the feed salinity is relatively low, a two-stage configuration was selected in all cases, whereas a single-stage layout is appropriate for processes with higher feed concentrations (Abbas, 2005). Table 2 shows that the number of PVs in the second stage increases to compensate for the flux reduction caused by the rising fouling percentage in the first stage, while the number of PVs in the first stage decreases.
K. M. Sassi and I. M. Mujtaba
Fig. 4 shows the optimal feed pressure trajectory for different fouling distributions between the stages. It can be seen that for a lower fouling percentage in the first stage, more feed pressure is needed to meet the water demand; the maximum feed pressure is reached at Xf1 = 50. The variations in the initial feed pressures are caused by the different process configurations adopted for each fouling level. The feed pressure clearly decreases with decreasing fouling in the second stage, i.e. with an increasing fouling share in the first stage. Fig. 5 shows the annual operating cost profiles for the different fouling scenarios as a function of time at a fixed demand. The operating cost decreases as the fouling level in the first stage increases, which may be attributed to the decreasing feed pressure (Fig. 4).
Fig. 4 Feed pressure at different fouling conditions
Fig. 5 Annual operating cost profiles at different fouling conditions
5. Conclusion
The optimum RO design with spiral-wound membranes for a seawater desalination process is studied here for different fouling levels in the RO stages. For each fouling level, the optimal operating parameters are also determined. A continuous/discrete simultaneous superstructure for the RO process, containing all possible alternatives of a potential RO network, is presented. The total annualized cost used as the objective function contains the most important cost contributions of the reverse osmosis process, including capital and operating costs. The optimal RO layouts are obtained using an MINLP approach while simultaneously optimizing the operating parameters. The study shows that the optimal design and operation of the RO process are sensitive to the fouling distribution between stages, even though the overall fouling remains constant.
References
A. Seidel, M. Elimelech, 2002, Journal of Membrane Science, 203, 1-2, 245-255.
M.M. El-Halwagi, 1992, AIChE J., 38, 1185-1198.
M.J. Zhu, M.M. El-Halwagi, M. Al-Ahmad, 1997, Journal of Membrane Science, 129, 161-174.
Y.Y. Lu, Y.D. Hu, X.L. Zhang, L.Y. Wu, Q.Z. Liu, 2007, Journal of Membrane Science, 287, 219-229.
R.V.S. Uppaluri, P. Linke, A.C. Kokossis, 2004, Ind. Eng. Chem. Res., 43, 4305-4322.
K.M. Sassi, I.M. Mujtaba, 2010, Computer Aided Chemical Engineering, Elsevier, 28, 895-900.
A. Abbas, 2005, Chemical Engineering and Processing, 44, 999-1004.
H.J. Oh, T.M. Hwang, S. Lee, 2009, Desalination, 238, 128-139.
N. Al-Bastaki, A. Abbas, 2004, Chemical Engineering and Processing, 43, 555-558.
H. Huiting, J. Kappelhof, T.G.J. Bosklopper, 2001, Desalination, 139, 183-189.
J.S. Vrouwenvelder, J.A.M. van Paassen, J.C. Kruithof, M.C.M. van Loosdrecht, 2009, Journal of Membrane Science, 338, 92-99.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Logic-Sequential Approach to the Synthesis of Complex Thermally Coupled Distillation Systems. José A. Caballeroa, Ignacio E. Grossmannb. a
Department of Chemical Engineering, University of Alicante., Ap Correos 99, 03080, Alicante, Spain b Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Av. 15213. Pittsburgh, PAUSA.
Abstract
In this work a methodology is presented for the rigorous optimization of complex thermally coupled distillation systems using a sequential logic-mathematical programming approach. In order to explicitly include in the search space the possibility of divided wall columns (which are thermodynamically equivalent to three fully thermally coupled separation tasks), we use a hybrid logical representation of the system that takes into account the separation tasks, the states (mixtures) produced by the tasks, and the possibilities of aggregating tasks (states) to generate divided wall columns (DWC). Once the sequence of tasks, and the DWCs these tasks produce, is determined, it is possible to synthesize the sequence of actual columns by searching the space of thermodynamically equivalent configurations. An example with a five-component mixture illustrates the procedure.
Keywords: Distillation; Thermally Coupled Distillation; Process Synthesis; Disjunctive Programming; Divided Wall Columns.
1. Introduction
Distillation is the most common separation and purification technique. Around 90-95% of all separations and purifications in the chemical industry are based on distillation, and this situation is not likely to change in the near future1. However, distillation is also one of the more energy-inefficient unit operations. In recent years Thermally Coupled Distillation (TCD) has acquired renewed interest because, compared to conventional systems, it can achieve energy reductions of over 30%. Besides, some thermally coupled configurations are 'thermodynamically equivalent' to divided wall columns (DWC), which produce important savings in investment. One of the major difficulties in synthesis involving TCD is that the number of alternatives grows much faster than when only conventional columns are considered; e.g., for a 5-component mixture there are 203 basic configurations2,3 (a basic configuration is a sequence of separation tasks that does not take into account the thermal state of the streams connecting the separation tasks). If we also consider the internal structure of the heat exchangers, there are around 10^4 alternatives, and if we consider the thermodynamically equivalent configurations the number of alternatives is greater than 2·10^5 (refs. 2,4).
In view of the huge number of alternatives, it is not practical (and may not be possible) to generate a single sequence of columns by considering all the alternatives directly. Giridhar & Agrawal3,4 and Caballero & Grossmann2 proposed an initial search considering only the basic configurations, followed by a refinement that optimizes the heat exchanger structure. However, when DWCs are considered, the total cost of a DWC cannot be approximated by the sum of the costs of the individual tasks, and although this approach generally yields good solutions, important improvements can usually be obtained by explicitly introducing the DWCs in the initial search.
2. Logic approach to TCD sequences with DWCs
The problem we are dealing with can be stated as follows: given an M-component mixture without azeotropes, generate a sequence of distillation columns that separates the mixture into N (N < M) fractions, where each fraction must not contain components of the other fractions (sharp splits between key components), considering all the alternatives from conventional to fully thermally coupled configurations and explicitly including DWCs. Without loss of generality, and for the sake of simplicity, we consider the sharp separation of N components. Caballero & Grossmann2,5 presented a set of logical relations between separation tasks that ensures feasible sequences; the first step is therefore to extend those logical equations to take into account the possibility of including DWCs. It is important to remark that there exists a one-to-one relationship between the sequence of separation tasks and the states formed by those tasks. Therefore, we can take advantage of this duality and express the logical relations that include DWCs in terms of states (instead of tasks). Although from a theoretical point of view it is possible to generate a multi-wall column, which in the extreme case separates all the components in a single column, to date only columns with a single wall have been built and operated, so we restrict configurations to at most one wall per column. A divided wall column is formed by the union of three separation tasks (or by four states: a feed state, the two states produced by the first separation task, and the intermediate product state). For example, Figure 1 shows 2 out of the 4 alternatives that produce C as the intermediate product (state).
Figure 1. Two alternatives to generate a divided wall column with C as intermediate product.

To generate the logical relations that ensure all alternative DWCs are taken into account, it is first necessary to identify which combinations of states (or tasks) can generate a DWC. The following conditions ensure that all DWCs are taken into account:
1. All intermediate states (those that do not include a component with extreme volatility) can form part of a DWC.
2. The intermediate product state in the DWC must be produced by two different states.
3. The two states that produce the intermediate product must be generated by a single contribution.
4. The states that generate the intermediate state in the DWC must be produced by the same state.
These logical equations can be added to the set of logical relationships previously presented by Caballero & Grossmann2,5 and included in an MINLP model. For example, in a 5-component mixture it is possible to identify 15 different DWCs (3 with B as intermediate product, 4 with C, 3 with D, 2 with BC, 2 with CD and 1 with BCD). As commented above, due to the huge number of alternatives it is not practical to try to search the full space of alternatives (more than 2·10^5); instead we propose the following sequential approach:
1. First, instead of searching only in the space of basic configurations, a simultaneous search that also includes the internal structure of the heat exchangers is performed. The model is posed in terms of 'separation sections' and 'pseudo-columns' using the STN formalism6, and formulated and solved as a disjunctive programming problem. The cost is evaluated for each individual section. An extra set of logical relationships allows determining the potential existence of a DWC; if a DWC exists, the sections that form it are grouped and the cost is evaluated taking this into account. In this way a tight lower bound on the actual cost of the system is obtained.
In previous works5 this problem was solved sequentially: first the basic configurations, and then, with the sequence of tasks fixed, the internal heat exchanger structure. This approach guarantees a 'good' solution, but when DWCs are considered its quality tends to worsen, because DWCs modify not only the heat exchanger structure but also the column configurations.
2. Once the sequence of separation tasks is established, the sequence of actual columns must be generated. Basically, here we must consider all the thermodynamically equivalent configurations that allow the distribution of sections in the minimum number of actual columns. At this stage, operability considerations can be included (e.g., considering only configurations in which the vapour always flows from higher to lower pressure, which are easier to control7,8), as well as construction constraints, e.g., designing columns with a single diameter or with similar diameters. This provides an upper bound on the total cost.
3. A rigorous simulation of the configuration is performed in order to validate the model, check the assumptions and simplifications and correct, if necessary, any parameter or data.
4. Add a binary cut to avoid repeated solutions and go back to stage 1 until the lower and upper bounds cross each other.
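The count of 15 candidate DWCs quoted above for a five-component mixture can be reproduced by simple enumeration: a DWC is characterized by an intermediate product spanning components i..j (containing neither the lightest nor the heaviest component) and a feed state l..u that strictly contains it. The sketch below illustrates this combinatorial argument; it is a counting aid, not the authors' logic model:

```python
from collections import Counter

components = "ABCDE"
n = len(components)

# A candidate DWC is identified by an intermediate product components[i:j+1]
# (excluding the extreme-volatility components A and E) and a feed state
# components[l:u+1] strictly containing it: the first separation task splits
# the feed into the two overlapping states l..j and i..u, both of which then
# produce the intermediate product i..j.
dwc_counts = Counter()
for i in range(1, n - 1):              # start of intermediate product
    for j in range(i, n - 1):          # end of intermediate product
        product = components[i:j + 1]
        for l in range(0, i):          # feed contains a lighter component
            for u in range(j + 1, n):  # ... and a heavier component
                dwc_counts[product] += 1

print(dict(dwc_counts))
# -> {'B': 3, 'BC': 2, 'BCD': 1, 'C': 4, 'CD': 2, 'D': 3}
print(sum(dwc_counts.values()))  # -> 15
```

The four feed/split combinations counted for product C correspond to the four alternatives mentioned in connection with Figure 1.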
3. Example
In order to illustrate the procedure, consider a five-component mixture (data are shown in Table 1). Cost data are obtained from Turton et al.9 Physical properties of the compounds were obtained from open databases. The optimal solution was obtained using GAMS in around 20 minutes of CPU time under Windows 7 (2.4 GHz, 8 GB of RAM). The optimal solution (see Figure 2) includes a divided wall column together with two other columns with complex thermal coupling. Note that although configurations with two DWCs are possible, the total cost of this option is larger because it would produce a huge column. On the other hand, the presence of DWCs reduces the number of thermodynamically equivalent alternatives, because most of the degrees of freedom introduced by the thermal couples are used to generate the DWC; in this example there are only 2 thermodynamically equivalent configurations.

Table 1. Basic data for the example.
Component          Feed molar fraction
Benzene            0.3
Toluene            0.2
Ethylbenzene       0.1
Styrene            0.2
α-methylstyrene    0.2
Total feed = 200 kmol/h. Key-component recovery = 0.98.
[Figure 2, not reproduced: the optimal arrangement has column diameters of 2.59 m, 1.11 m and 1.16 m, heat duties of 833, 777, 3083, 4472 and 1250 kW, and a feed of 200 kmol/h.]
Figure 2. Optimal solution of the example

Acknowledgements
The authors gratefully acknowledge the financial support of the Spanish "Ministerio de Ciencia e Innovación" under project CTQ2009-14420-C02.
References
1. Soave, G.; Feliu, J. A., Saving energy in distillation towers by feed splitting. Applied Thermal Engineering 2002, 22 (8), 889.
2. Caballero, J. A.; Grossmann, I. E., Structural considerations and modeling in the synthesis of heat-integrated-thermally coupled distillation sequences. Industrial & Engineering Chemistry Research 2006, 45 (25), 8454-8474.
3. Giridhar, A.; Agrawal, R., Synthesis of distillation configurations: I. Characteristics of a good search space. Computers & Chemical Engineering 2010, 34 (1), 73.
4. Giridhar, A.; Agrawal, R., Synthesis of distillation configurations. II: A search formulation for basic configurations. Computers & Chemical Engineering 2010, 34 (1), 84.
5. Caballero, J. A.; Grossmann, I. E., Design of distillation sequences: from conventional to fully thermally coupled distillation systems. Computers & Chemical Engineering 2004, 28 (11), 2307-2329.
6. Yeomans, H.; Grossmann, I. E., A systematic modeling framework of superstructure optimization in process synthesis. Computers & Chemical Engineering 1999, 23 (6), 709-731.
7. Agrawal, R., More operable fully thermally coupled distillation column configurations for multicomponent distillation. Chemical Engineering Research & Design 1999, 77 (A6), 543-553.
8. Caballero, J. A.; Grossmann, I. E., Thermodynamically equivalent configurations for thermally coupled distillation. AIChE Journal 2003, 49 (11), 2864-2884.
9. Turton, R.; Bailie, R. C.; Whiting, W. B.; Shaeiwitz, J. A., Analysis, Synthesis and Design of Chemical Processes. 2003, Prentice Hall PTR, New Jersey.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Computer Aided Design and Analysis of Continuous Pharmaceutical Manufacturing Processes Fani Boukouvala,a Rohit Ramachandran, a Aditya Vanarase, a Fernando J. Muzzio,a Marianthi G. Ierapetritou,a a
Dept. Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ, 08854
Abstract
Dynamic flowsheet modeling and simulation is a pre-requisite for the design, analysis, control and optimization of an integrated process. Whilst several integrated modeling and simulation tools (commercial and non-commercial) have proven to be effective for fluid processes, their use has been fairly limited for solids processes. The objective of this study is to build a dynamic flowsheet simulation of an integrated continuous downstream pharmaceutical process, using a combination of fundamental and empirical models. Using two cases, the results elucidate (i) the evolution of key particle properties during the transient state (start-up and shut-down), (ii) the effect of changes in process parameters and/or material properties, which can typically vary during continuous manufacturing, and (iii) the dynamic response and recycle dynamics of an integrated blender and recirculation tank. The simulation results lend credence to developing a dynamic flowsheet simulation of a fully integrated downstream pharmaceutical process, which can be further extended to the general class of solids processes.
Keywords: continuous manufacturing, pharmaceutical, dynamic flowsheet modeling.
1. Introduction
The pharmaceutical industry is a tightly regulated industry in which all production must comply with good manufacturing practices (GMP) and quality requirements must be strictly satisfied. Historically, manufacturing in the pharmaceutical industry has been carried out in batch mode, which potentially results in expensive, inefficient and poorly controlled processes [1-2]. Recently, both the pharmaceutical industry and regulatory authorities have recognized that continuous manufacturing has significant potential to improve product quality [3]. Moreover, environmental, health and safety issues are driving the industry towards more efficient and more predictive manufacturing. Therefore, a great opportunity arises for developing a generic continuous manufacturing platform that benefits from state-of-the-art strategies, modelling tools and enabling technologies to implement this transition. In this work we focus on the manufacturing of oral solid dosage drugs, which account for approximately 85% of the entire pharmaceutical production. A typical manufacturing process for a powder-based product (e.g., tablets and most capsules) involves multiple processing steps, of which the most common are powder feeding, blending, granulation, and tableting or capsule-filling. The integrated design of such a continuous system requires the detailed characterization of the unit operations involved, with the purpose of resolving the flow and stress fields within the equipment and quantifying the
functional relationships of key quality attributes with process parameters and material properties. Computer-aided process design and simulation tools have been successfully used in a plethora of chemical industries to expedite development and to optimize the design and operation of integrated processes [4]. For pharmaceutical manufacturing specifically, however, there is a lack of simulation tools that can handle particulate processes and that can be used for evaluating process design alternatives, scheduling, control, optimization and debottlenecking of an integrated production line.
2. Motivation and Objectives
The overall philosophy of this study is to simulate an integrated continuous pharmaceutical downstream process whose models are based on the underlying physics and chemistry of the process and have been experimentally validated for various formulations and operating conditions. Challenges that need to be overcome for the implementation of this initiative, arising from the complexity and variability that particulate processes introduce to the overall system, include: (1) the characterization of all unit operations, identification of their important kinetic and thermodynamic parameters and development of models that describe their mechanisms; (2) experimental studies and data acquisition of the multi-dimensional evolutions and distributions of key particle properties; (3) identification and elimination of the primary bottlenecks of the integrated system to maximize throughput; (4) identification of all the possible manipulated and controlled variables and their interactions; (5) accounting for recycle dynamics and their impact on control structure selection, to achieve a robust, controllable and controlled process that is maintained within the desired design space; and (6) integration of process design and control to identify globally valid operating conditions. As a first attempt to model and dynamically simulate an integrated downstream pharmaceutical process, this study focuses on developing an integrated model, in a dynamic simulation environment, of a feeder-blender-granulator system. The specific objectives are to track the evolution of key particle properties during (i) the transient state, (ii) the transition from one formulation and/or operating condition to another (i.e., the effect of material properties and/or process parameters), and (iii) operation with a recycle loop implemented around the blending process.
3. Dynamic model development and integration
To illustrate the dynamics of an integrated flowsheet model, two case studies are considered.
Figure 1. Proposed integrated flowsheet model
3.1. Case 1: Integration of Feeder, Blender and Granulator
The feeder-blender-granulator system (see Figure 1a) consists of two feeders (API + excipient) that feed into a blender, where the API and excipient are mixed by convective/diffusive forces. The API/excipient mixture is then continuously transported into a granulator, where, through the addition of a liquid binder, the particles are formed into larger granules to improve their flow and dissolution properties.
Each continuous feeder operates under closed-loop proportional-integral (PI) control, whereby the feedrate is specified as the set-point and the feeder RPM is manipulated to ensure that the set-point is met. To model each feeder, set-point changes were made to the feedrate and the dynamic response was observed to follow a first-order profile; therefore, a first-order plus time delay (FOPDT) model was used to fit the data. A population balance model (PBM) was used to describe the dynamics of the blending and granulation processes:

∂F(z,r,t)/∂t + ∂/∂z[F(z,r,t)·dz/dt] + ∂/∂r[F(z,r,t)·dr/dt] = ℜ_formation(z,r,t) − ℜ_depletion(z,r,t)   (1)

Here r is the vector of internal variables used to characterize the distribution and z is the vector of external coordinates used to depict spatial position. F(z,r,t) is the population distribution function (a.k.a. the number density function). The term ∂/∂r[F(z,r,t)·dr/dt] accounts for the rate at which the distribution evolves with respect to the internal coordinates (e.g., due to the rate of consolidation), and the term ∂/∂z[F(z,r,t)·dz/dt] accounts for the evolution of the distribution of the particle population with respect to spatial position. The functions ℜ_formation(z,r,t) and ℜ_depletion(z,r,t) account for the formation and depletion of particles, respectively, due to discrete aggregation and breakage phenomena. In the blender model, aggregation and breakage are neglected; the PBM therefore reduces to a two-dimensional model with respect to the vector z, where z denotes the axial and radial directions. In the granulation model, the granulator is assumed to be well-mixed; the PBM is therefore a four-dimensional model with respect to r, where r denotes the volume fractions of the API, excipient, liquid and gas. Details of the blending and granulation models can be found in [5-6].
Figure 2. Dynamic integrated simulation results: (a) total mass flowrate, (b) average particle diameter, (c) particle bulk density, (d) API concentration
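The FOPDT feeder model mentioned above has the standard step-response form; a minimal sketch is given below, with an illustrative gain, time constant and delay (the fitted feeder values are not reported in the text):

```python
import math

def fopdt_step_response(t, K=1.0, tau=10.0, theta=2.0, step=1.0):
    """First-order-plus-time-delay response (in deviation form) to a step
    of magnitude `step` applied at t = 0:
        y(t) = K * step * (1 - exp(-(t - theta)/tau))  for t >= theta,
        y(t) = 0                                        for t <  theta.
    K, tau (s) and theta (s) are illustrative values, not the fitted
    feeder parameters."""
    if t < theta:
        return 0.0
    return K * step * (1.0 - math.exp(-(t - theta) / tau))
```

At t = theta + tau the response has covered about 63.2% of its final value K·step, which is the usual graphical handle for fitting tau from step-test data.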
Figure 2a depicts the total mass flowrate of powder that exits the blender and enters the granulator. It can be seen that steady state is reached by t = 150 s, at which point a step change is introduced to the blender rpm (the rpm is doubled). This results in a sharp increase in the mass flowrate, which gradually returns to the original value. Figure 2b depicts the average granule diameter of particles exiting the granulator. In the first few seconds of operation no powder enters the granulator, as it is still being processed in the blender. Once powder enters the granulator there is a sharp increase in granule diameter as granules undergo aggregation, and a steady state is attained by t = 150 s, whereupon the step change in rpm causes a slight immediate decrease in granule diameter (due to the sudden influx of more fine powder into the granulator). Eventually more fine powder is aggregated, and the granule diameter increases until a new steady state is achieved. Similar transient profiles are obtained for the granule bulk density and granule API concentration (see Figures 2c and 2d).
3.2. Case 2: Effect of a Recirculation Tank
In a separate simulation, the integration of a blender with a recirculation tank was considered in order to ascertain the effect of recycle dynamics on a unit operation. A system such as the one shown in Figure 1 (with the second blender acting as the recirculation tank) could be advantageous in any continuous powder processing line. It offers several advantages, including recirculation of excess or out-of-specification material produced by the continuous mixer, providing material to subsequent units operating below the desired capacity, and minimizing the effect of feed-rate overshoot (which occurs during refill operations) on blend uniformity.
In order to obtain the dynamic response of the integrated system, mass balance equations were solved for the system of the two blenders, while the RTD model E_i(θ) was assumed to be a 1-D axial dispersion model:

E_i(θ) = 1 / (2·√(π(θ − τ_0i)/(τ_i·Pe_i))) · exp[ −(1 − (θ − τ_0i)/τ_i)² / (4(θ − τ_0i)/(τ_i·Pe_i)) ],  i = 1, 2   (2)

The RTD model parameters (residence time τ_i, dead time τ_0i and Peclet number Pe_i) for both blenders were obtained by fitting the experimental impulse-response data.
Figure 3. (a) Dynamic response of the integrated system undergoing feeder refills; (b) sluggish response of the system under excessive recycle flow rates
The dynamic response of the integrated system was computed for a particular input feed-rate dataset (feeder undergoing refills). As shown in Figure 3a, the overshoot in the feed rate at the outlet of the mixer decreases as the recirculation flow rate increases. For
excessive recycle flow rates (10 or 50 times the input flow rate), the system response became sluggish; a shift in the baseline can be observed in Figure 3b. In conclusion, recycling was found to improve the overall performance only up to a certain extent.
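The qualitative behavior described here can be reproduced with a toy two-tank recycle model: a mixer and a recirculation tank, each treated as an ideally mixed first-order lag. All time constants and the recycle ratios below are invented for illustration and are not fitted to the experimental system.

```python
import numpy as np

def simulate_recycle(feed, tau_mix=30.0, tau_rec=60.0, R=1.0, dt=1.0):
    """Two ideally mixed tanks in a recycle loop (hypothetical parameters).
    R is the recycle-to-feed flow ratio; a larger R damps feed overshoots
    at the mixer outlet but slows the overall loop response."""
    n = feed.size
    w_mix = np.zeros(n)  # scaled mixer-outlet signal
    w_rec = np.zeros(n)  # state of the recirculation tank
    for k in range(1, n):
        inlet = (feed[k - 1] + R * w_rec[k - 1]) / (1.0 + R)  # feed blended with recycle
        w_mix[k] = w_mix[k - 1] + dt / tau_mix * (inlet - w_mix[k - 1])
        w_rec[k] = w_rec[k - 1] + dt / tau_rec * (w_mix[k - 1] - w_rec[k - 1])
    return w_mix

t = np.arange(0, 2000)
feed = np.ones_like(t, dtype=float)
feed[100:130] = 2.0  # refill-induced overshoot in the feed rate

out_low = simulate_recycle(feed, R=0.5)
out_high = simulate_recycle(feed, R=10.0)
print(out_low.max(), out_high.max())  # the high-recycle case damps the overshoot peak
```

The simulated peaks mirror Figure 3: a moderate recycle attenuates the refill overshoot, while an excessive recycle ratio makes the loop response sluggish.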
4. Conclusions and Current Work
This work aims to attain the knowledge required to design and optimize continuous manufacturing systems for a variety of pharmaceutical powder-based products. The recently developed gPROMS-SOLIDS [7] simulation package and MATLAB will be used to perform dynamic flowsheet simulation of a multi-component integrated tablet manufacturing system. One of the major difficulties when dealing with solids processes is the lack of knowledge of the properties that characterize powder mixtures and of how these properties affect the final product properties. However, recent powder characterization studies have provided significant insight into how the material properties of active ingredients and excipients, and their compositions in a mixture, affect the behavior of powders in different apparatus. Thus, a great opportunity arises for merging all the knowledge, experience, and experimental and modeling work available into a detailed flowsheet model for tablet production. Furthermore, the unification in a single flowsheet modeling platform of all the available modeling techniques, which range from empirical, population-balance and first-principle models to Discrete Element Method (DEM) models describing different unit operations, is another area that requires significant effort. Lastly, a detailed simulation will facilitate the following: (1) Quality by Design (QbD), through the use of intrinsic process knowledge to establish the functional relationships between key quality attributes, process parameters and material properties; a hybrid superstructure model will also be used to define the design space of the system; and (2) Process Analytical Technology (PAT), through the development of online sensors, process control, multivariate analysis, statistical analysis and real-time quality control.
References
1. Gorsek, A. and P. Glavic, Design of Batch Versus Continuous Processes: Part I: Single-Purpose Equipment. Chemical Engineering Research and Design, 1997. 75(7): p. 709-717.
2. Leuenberger, H., New trends in the production of pharmaceutical granules: batch versus continuous processing. European Journal of Pharmaceutics and Biopharmaceutics, 2001. 52(3): p. 289-296.
3. Plumb, K., Continuous Processing in the Pharmaceutical Industry: Changing the Mind Set. Chemical Engineering Research and Design, 2005. 83(6): p. 730-738.
4. Biegler, L.T., Grossmann, I.E. and Westerberg, A.W., Systematic Methods of Chemical Process Design. International Series in the Physical and Chemical Engineering Sciences. 1997, New Jersey: Prentice Hall.
5. Boukouvala, F., et al., Computational approaches for studying the granular dynamics of continuous blending processes II: Population balance and data-based methods. Manuscript under review, 2010.
6. Poon, J.M.H., et al., Experimental validation studies on a multi-dimensional and multi-scale population balance model of batch granulation. Chemical Engineering Science, 2009. 64(4): p. 775-786.
7. Process Systems Enterprise, gPROMS Advanced User Guide. 2003: London, UK.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Phenomena-based Process Synthesis and Design to achieve Process Intensification
Philip Lutze(a), Rafiqul Gani(b), John M. Woodley(a,b)
(a) PROCESS, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Soltofts Plads, DK-2800 Lyngby, Denmark
(b) CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Soltofts Plads, DK-2800 Lyngby, Denmark
Abstract
In order to improve processes by incorporating process intensification, and to allow them to go beyond pre-defined unit operations, the process has to be viewed at a lower level of aggregation, namely the phenomena scale. In this contribution, an approach for aggregating processes through phenomena building blocks within a systematic methodology is presented. First, all potential phenomena are identified and then synthesized into phenomena-based flowsheets, which are screened against pre-defined constraints before the most promising options are identified, optimized and verified at the unit-operation level. This phenomena-based synthesis/design methodology is tested through a case study.
Keywords: Process Synthesis, Phenomena, Process Intensification.
1. Introduction
Process Intensification (PI) has attracted considerable interest as a potential means of process improvement and of meeting the increasing demand for sustainable production. PI aims to improve processes without sacrificing product quality by increasing efficiency, reducing energy consumption, costs, volume and waste, as well as by improving safety. In previous work [1], we reported the development of a general computer-aided systematic synthesis and design methodology incorporating PI. Even though process improvements were achieved, this methodology is limited to pre-defined PI unit operations retrieved from a knowledge base. In order to invent new unit operations, going beyond those currently in existence and achieving potentially even higher improvements, the process should be viewed at a lower level of aggregation [2, 3]. The similarity between the structures of flowsheets and molecules has been reported before [4], comparing molecules to processes and groups in molecules to unit operations, respectively. This analogy can be extended to phenomena, since they can be compared to atoms. That is, different combinations of phenomena lead to different characteristics/performance and therefore to different physical unit operations/flowsheets, just as different combinations of atoms lead to different molecules with
different characteristics/performance. Hence, in order to extend the search space for process improvement, process synthesis and design incorporating PI needs to be investigated at the phenomenological level.
2. General Phenomena-based Synthesis Framework
The developed phenomena-based synthesis and design approach is based on two contributions: a) the use of phenomena building blocks together with connection equations to represent a process; b) the use of a methodology for the identification, generation and screening of phenomena-based flowsheets that systematically reduces the search space for the optimal solution.
2.1. Concept of phenomena-based aggregation
Phenomena building blocks consist of mass, component, energy and momentum balances as well as constraint equations describing the phenomenon and the inlet and outlet stream conditions. In general, phenomena building blocks can be classified by the number of distinct phases involved and are further sub-classified into mixing, stream dividing, phase creation, phase transition, phase separation, reaction and energy transfer phenomena. Mixing phenomena have a minimum of two inlet streams and one outlet stream, while dividing phenomena have one inlet and a minimum of two outlet streams. Phase creation implies the appearance of a new phase; it has a single-phase inlet and a two-phase outlet. Phase transition blocks are defined to have one inlet and one outlet stream, each of mixed phases. Two-phase separation phenomena have one two-phase inlet stream and two outlets of pure-phase streams. Energy transfer phenomena are defined to have either one inlet and one outlet (for example, for simplified heating/cooling by an external source) or two inlet and two outlet streams (as in convective or conductive heat transfer between two streams). Phenomena blocks can be connected, through the use of suitable connection rules, by streams or by linking streams (for the simultaneous occurrence of phenomena). Streams connecting phenomena building blocks may contain one or more phases.
For example, a dividing block can be connected to any building block, while a phase transition or phase separation cannot, because they require an inlet stream with a two-phase mixture (see Table 1). Another example of a connection rule is that an L-L phase split should be linked before a phase transition (V-L equilibrium) block, since the vapor is in equilibrium with each of the two liquid phases and not with the total liquid mixture.

Table 1. Examples of direct connectivity between phenomena building blocks.

First phenomena block      | Second (following) phenomena block
                           | Mixing: ideal (2-phase) | Phase transition: V-L (EQ) | Phase creation: L-L split
Mixing: ideal              | Yes                     | Yes                        | No
Phase transition: V-L (EQ) | Yes                     | No                         | No
Phase creation: L-L split  | Yes                     | Yes                        | No

2.2. Methodology
The developed methodology follows a stepwise hierarchical decomposition in which the
lower-level steps employ simple and fast calculations, while the higher-level steps employ increasingly rigorous and detailed calculations (see Figure 1). First, the scenario, the goal and the constraints of the synthesis/design problem, together with performance metrics, are defined. Second, the system is analyzed with respect to pure component, mixture and reaction properties to identify a set of phenomena building blocks that may be used in the processing steps of a flowsheet. These are retrieved from a phenomena building block library. Third, the identified phenomena building blocks are connected using the general connectivity rules (see Table 1), resulting in a superstructure. The superstructure may represent a large number of alternatives, from which redundant options are removed through structural constraints. Fourth, the remaining alternatives are screened through pre-defined operational constraints and benchmarked through performance metrics. Fifth, for each of the remaining phenomena-based flowsheet alternatives, currently available or novel unit operations are identified, assisted by algorithms or a library of pre-defined units. Linking the phenomena to the actual physical unit is important, since additional constraints related to physical units, such as wall boundary conditions, need to be introduced. The performance criteria may be revised in this step before final optimization of the most promising alternatives and verification by rigorous simulation and experimentation in the last step.
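The superstructure-generation-plus-screening step (pairwise connectivity rules, as in Table 1) can be sketched in a few lines. The block names and the rule table below are a simplified, illustrative subset of the methodology, not its full rule set.

```python
from itertools import product

# Direct-connectivity rules modeled on Table 1 (illustrative subset):
# (first_block, second_block) -> may they be connected directly?
ALLOWED = {
    ("mixing", "mixing"): True,
    ("mixing", "phase_transition"): True,
    ("mixing", "phase_creation"): False,
    ("phase_transition", "mixing"): True,
    ("phase_transition", "phase_transition"): False,
    ("phase_transition", "phase_creation"): False,
    ("phase_creation", "mixing"): True,
    ("phase_creation", "phase_transition"): True,
    ("phase_creation", "phase_creation"): False,
}

def feasible_sequences(blocks, length):
    """Enumerate block sequences of a given length that satisfy the pairwise
    connectivity rules: a toy stand-in for superstructure generation
    followed by structural screening."""
    out = []
    for seq in product(blocks, repeat=length):
        if all(ALLOWED[(a, b)] for a, b in zip(seq, seq[1:])):
            out.append(seq)
    return out

blocks = ["mixing", "phase_transition", "phase_creation"]
seqs = feasible_sequences(blocks, 3)
print(len(seqs), "feasible 3-block sequences out of", len(blocks) ** 3)
```

Even this tiny rule set prunes the raw combinatorial space substantially, which is the point of applying structural constraints before the more expensive operational screening.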
Figure 1. Workflow for the phenomena-based synthesis and design to achieve PI.
3. Case Study
The key steps of the phenomena-based synthesis and design methodology are highlighted through a case study involving the continuous production of isopropyl acetate from isopropanol and acetic acid. The liquid-phase reaction is catalyzed by Amberlyst 15 and follows the stoichiometry:

$$\mathrm{CH_3COOH + C_3H_7OH \rightleftharpoons C_5H_{10}O_2 + H_2O} \tag{1}$$

In step 1, since the reaction is limited by an unfavorable equilibrium, the objective is to increase the product yield of the reaction; therefore, the product purity is not defined. Additionally, the number of units is selected for the screening of options:

$$F_{Obj} = \mathrm{yield} = n_{Product} / n_{Reactant} \tag{2}$$

In step 2, pure component and mixture analysis is performed using ICAS [5]. Several binary as well as ternary azeotropes have been found, as well as an L-L phase split between water and isopropyl acetate. The operational window of the liquid-phase
reaction lies between the highest melting point, namely the melting-point temperature (289.8 K) of acetic acid at P = 1 atm, and the lowest boiling point, namely the temperature (347.34 K) of the ternary azeotrope of isopropanol, isopropyl acetate and water at P = 1 atm. The reaction analysis, based on kinetics from Sanz and Gmehling (2006) [6], confirmed the exothermic nature and the equilibrium limitation of the reaction (K > 1). Since Amberlyst 15 was used as the catalyst, the maximum allowable temperature to avoid catalyst degradation was set at 403 K. Based on this, the following phenomena were identified: mixing (ideal), dividing, heating/cooling (countercurrent, co-current, conductive), heterogeneous reaction and phase split. Additionally, from the analysis of ratios of pure-component properties (Table 2), promising phase separations of products from reactants are identified to be based on vapor-liquid separation (boiling points) or pervaporation (radius of gyration). Both are represented by a phase creation followed by a phase separation phenomenon. The phase creation necessary for pervaporation is described by a flux equation [6]; the heat of vaporization is introduced into the energy balance, and an additional constraint equation is necessary to ensure that the liquid outlet stream does not freeze. The V-L separation phenomenon is ideal.

Table 2. Ratios of pure-component properties between products and reactants.

Property           | Water/Acetic Acid | Water/Isopropanol | Acetic Acid/Isopropyl acetate | Isopropanol/Isopropyl acetate
Boiling point      | 1.05              | 1.05              | 1.08                          | 1.02
Radius of gyration | 4.24              | 4.56              | 1.41                          | 1.31

For purposes of illustration, the phase split phenomenon is not taken into consideration, and the heterogeneous reaction is simplified to a pseudo-homogeneous liquid-phase reaction.
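This property-ratio screening lends itself to a minimal sketch using the Table 2 values; the 1.2 flagging threshold is an illustrative assumption, not a rule from the methodology.

```python
# Pure-component property ratios between products and reactants (Table 2).
# A ratio well away from 1 flags a driving force for a separation technique:
# here the boiling-point ratios are all near 1, while the radius-of-gyration
# ratios for water are large, pointing at pervaporation.
properties = {
    #                                       (boiling point, radius of gyration)
    ("water", "acetic acid"):               (1.05, 4.24),
    ("water", "isopropanol"):               (1.05, 4.56),
    ("acetic acid", "isopropyl acetate"):   (1.08, 1.41),
    ("isopropanol", "isopropyl acetate"):   (1.02, 1.31),
}

def screen(pairs, threshold=1.2):
    """Flag product/reactant pairs whose property ratio exceeds the
    (illustrative) threshold, i.e. candidate separation driving forces."""
    flagged = {}
    for pair, (bp_ratio, rg_ratio) in pairs.items():
        techniques = []
        if bp_ratio > threshold:
            techniques.append("vapor-liquid (boiling point)")
        if rg_ratio > threshold:
            techniques.append("pervaporation (radius of gyration)")
        if techniques:
            flagged[pair] = techniques
    return flagged

result = screen(properties)
print(result)
```

With this threshold, every pair is flagged via the radius-of-gyration ratio while no boiling-point ratio qualifies, consistent with pervaporation being the more promising candidate here.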
In step 3, the identified phenomena were connected to form phenomena-based flowsheets using the connectivity rules (see Table 1) and were screened by additional logical and structural constraints (such as the presence of at least one, and at most four, reaction phenomena in one flowsheet).
Figure 2. Examples of identified units from generated phenomena-based flowsheets: A: single phase CSTR; B: CSTR with integrated heating jacket and membrane; C: Isothermal Reactive Flash; D: integrated membrane, thermal controlled tubular reactor. Phenomena: Ideal mixing M; Reaction R; Phase creation: pervaporation P, evaporation E; phase separation PS, Heating H, Cooling C and Dividing D. Utility streams for energy supply/ removal are not shown.
Examples of four generated phenomena-based flowsheets from step 3, and the corresponding physical units identified in step 4, are illustrated in Figure 2. A uniform single-phase CSTR (A) was identified through a series of mixing and reaction phenomena, while a single-phase CSTR divided into two compartments (B) was identified in which neighboring compartments are linked (the outlet stream of a dividing phenomenon becomes the inlet stream of a mixing phenomenon). An isothermal reactive flash (C) was identified through the simultaneous occurrence of evaporation and reaction phenomena. A tubular reactor (D) was identified, consisting of at least three ideal mixing and reaction phenomena in series. In step 4, the performance of the phenomena-based flowsheet options was compared against the objective function and the number of unit operations (Table 3). Ideally, assuming indefinite volumes and an equimolar feed, options B and D were found to be equally good. The isothermal reactive flash (C) can be extended by connecting several such units in a network. This would represent a reactive distillation in which the product is purified (which is not the objective here, which is to increase the yield) until the low-boiling ternary azeotrope of isopropanol, isopropyl acetate and water is reached as the top product. Hence, this would also result in an unavoidable loss of reactant, which would limit the yield. In step 5, the most promising alternatives were optimized with respect to the objective function and verified through simulation.

Table 3. Benchmarking results of the four options (assumption: equimolar feed).

Option | Yield (kmol product / kmol of one reactant) | Number of units
A      | 0.65                                        | 1
B      | 0.99                                        | 1
C      | 0.8                                         | 1
D      | 0.99                                        | 1
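The gap between the simple CSTR's yield and the near-complete conversion of the integrated options can be rationalized with a back-of-the-envelope equilibrium calculation. The sketch below assumes an ideal, mole-fraction-based A + B ⇌ C + D system with an equimolar feed; the equilibrium constant K = 4 is an assumed order of magnitude only (not the fitted value from Sanz and Gmehling [6]), and the in-situ water-removal term is a crude simplification that neglects the change in total moles.

```python
import math

def equilibrium_yield(K):
    """Equilibrium conversion of A + B <-> C + D for an equimolar feed
    (ideal, mole-fraction based): K = x^2 / (1 - x)^2, so x = sqrt(K)/(1 + sqrt(K))."""
    return math.sqrt(K) / (1.0 + math.sqrt(K))

def yield_with_water_removal(K, f):
    """Conversion when a fraction f of the water formed is withdrawn in situ
    (e.g. by pervaporation); solving K = x * x*(1-f) / (1-x)^2 gives
    x = sqrt(K) / (sqrt(K) + sqrt(1 - f))."""
    return math.sqrt(K) / (math.sqrt(K) + math.sqrt(1.0 - f))

K = 4.0  # assumed order of magnitude only
print(equilibrium_yield(K))               # equilibrium-limited ceiling of a simple CSTR
print(yield_with_water_removal(K, 0.99))  # in-situ removal pushes the yield toward 1
```

Even this crude model reproduces the qualitative picture of Table 3: an equilibrium-limited reactor stalls well below full conversion, while integrated product removal lifts the yield close to unity.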
4. Conclusions
A methodology for phenomena-based synthesis and design to achieve PI has been developed and tested through a conceptual case study. The advantage of this approach is that it generates potentially novel process options (truly predictive models lead to reliable predictive solutions), together with the simultaneous development of the necessary process models. The results are promising, and further development of this approach, together with the necessary tools, is the subject of current and future work.
References
[1] P. Lutze, R. Gani, J.M. Woodley, 2010, Chem Eng Process, 49, 6, 547-558.
[2] K.P. Papalexandri, E.N. Pistikopoulos, 1996, AIChE J, 42, 4, 1010-1032.
[3] H. Freund, K. Sundmacher, 2008, Chem Eng Process, 47, 12, 2051-2060.
[4] L. d'Anterroches, R. Gani, 2005, Fluid Phase Equilib, 228-229, 141-146.
[5] C.A. Jaksland, R. Gani, K.M. Lien, 1995, Chem Eng Sci, 50, 511-530.
[6] M.T. Sanz, J. Gmehling, 2006, Chem Eng J, 123, 9-14.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
A Novel Process Design for the Hydroformylation of Higher Alkenes
Michael Müller(a,c), Victor Alejandro Merchan(a,c), Harvey Arellano-Garcia(a,c), Reinhard Schomäcker(b,c), Günter Wozny(a,c)
(a) Chair of Process Dynamics and Operation, Sekr. KWT-9
(b) Dept. of Chemistry, Sekr. TC3
(c) Technische Universität Berlin, Str. des 17. Juni 135, 10623 Berlin, Germany
Abstract
Hydroformylation is one of the most important industrial applications of homogeneous catalysis. Because the solubility of alkenes in the aqueous phase decreases with increasing carbon-chain length, the hydroformylation of higher alkenes is not carried out in a biphasic system but in a single homogeneous organic phase using a less reactive cobalt catalyst. However, this requires expensive reaction conditions such as high pressures and temperatures. Alternatively, the hydroformylation of higher alkenes with rhodium catalysts has been investigated in batch operation by several groups (Haumann et al., 2002; Miyagawa et al., 2005). Their research showed that this process can be carried out in a homogeneous environment by the use of micellar systems (e.g. microemulsions). Due to the high cost of catalysts containing phosphine ligands and rare metals, their retention in continuous processes is extremely valuable. In this work, we propose a novel process design for the continuous hydroformylation of higher alkenes, combining the rhodium-catalysed hydroformylation of higher alkenes in micellar systems with catalyst recycling. Because of the high cost of rhodium, the catalyst has to be separated completely from the reaction products so as to ensure high process profitability. The recycle is accomplished in two steps, comprising a decanter and an ultrafiltration step. Moreover, in order to show the feasibility of the proposed process concept, simulation studies and sensitivity analyses are carried out over a broad operating range. The corresponding modeling work has been executed using the web-based modeling environment MOSAIC (Kuntsche et al., 2010).
Keywords: Process design, hydroformylation, micellar catalysis, higher alkenes, MOSAIC
1. Introduction
The hydroformylation of short alkenes is a standard industrial process. However, although it is of great interest, there are only a few industrial implementations for higher alkenes. In the open literature, there is no report on the development of a continuous process for the hydroformylation of higher olefins with rhodium catalysts in multiphase systems, with or without added micelles. The feasibility of higher-olefin hydroformylation in micellar solutions has been confirmed by different authors (e.g. Fell et al., 1995; Gimenez-Pedros et al., 2003; Paetzold et al., 2003). In particular, the work of Haumann et al. (2002) and Miyagawa et al. (2005) demonstrates the
feasibility of hydroformylation with high reaction rates at low surfactant concentrations near the critical micelle concentration (cmc) or in two-phase micellar systems. Since the products are dissolved in the organic phase and the catalyst in the aqueous phase (two-phase reaction), products and catalyst can be separated more easily than in a single-phase procedure (Bode et al., 2000). An approach similar to that of Bode et al. (2000), which implements a combination of reaction and catalyst separation, and thus the built-in recycle considered here, has not been pursued for this system. In previous studies and processes, the catalyst was recycled at significantly altered process conditions; for the cobalt catalyst, for example, this causes a chemical change that strongly affects its solubility. In this work, the results of the conceptual process design for a continuous mini-plant are shown. A simulation model for the process has been developed, and preliminary kinetic data from the literature were implemented in a first step. In order to obtain an accurate description of the reactor and to enable the use of non-standard rigorous models in the future, the reactor model and the reaction kinetics were first implemented via MOSAIC. Based on this simulation work, and considering safety aspects, a mini-plant was designed (see Fig. 2). With the help of the developed mini-plant system, the effects of changing operating conditions on the catalyst and its return will be analyzed and compared with a mode in which the catalyst is not exposed to changing conditions. Thus, an integrated catalyst recycling is pursued.
2. Process concept
Figure 1 shows the integrated process concept. First, the reactants, synthesis gas (CO and H2) and dodecene (the higher olefin), as well as the rhodium catalyst dissolved in an aqueous phase, are fed to the reactor. Due to the presence of a surfactant, a micellar system forms that enables the hydroformylation within a homogeneous environment. After cooling downstream of the reactor, the two liquid phases (organic/aqueous) are separated in a decanter. Most of the hydrophilic catalyst remains in the aqueous phase, which is recycled to the reactor. The organic phase, which consists almost entirely of the product (aldehyde), is treated in an ultrafiltration step, after which the olefins as well as the residual catalyst are likewise recycled to the reactor. Thus, an almost complete recycle of the valuable catalyst can be achieved. Furthermore, the remaining olefins can also be recycled, which lowers the reactant consumption of the overall process.
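To make the "almost complete recycle" quantitative, the two recovery steps can be combined into a single-pass catalyst-retention figure. The split fractions below are hypothetical illustrations, not measured values from this plant.

```python
def overall_catalyst_retention(decanter_split, uf_retention):
    """Single-pass fraction of catalyst returned to the reactor:
    decanter_split goes back directly with the aqueous recycle, and the
    ultrafiltration step retains uf_retention of the catalyst that leaks
    into the organic phase."""
    leaked = 1.0 - decanter_split
    return decanter_split + leaked * uf_retention

# Hypothetical split fractions, chosen only to illustrate the combined effect:
# 98 % of the catalyst stays in the aqueous phase, and the membrane retains
# 99 % of the remainder.
r = overall_catalyst_retention(0.98, 0.99)
print(r)  # about 0.9998, i.e. roughly 99.98 % recovered per pass
```

The point of the two-step arrangement is visible here: neither step alone reaches the required retention, but their losses multiply, so the combined loss per pass becomes very small.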
Figure 1: Integrated process concept: Hydroformylation in a micellar system
228
M. Müller et al.
3. Process description
At the Chair of Process Dynamics and Operation at TU Berlin, a new mini-plant is currently being built in order to analyze the whole process (Fig. 2). The process concept can be divided into three sections: reaction, filtration, and product separation.
Figure 2: P & I flow chart of the designed mini-plant, TU Berlin
3.1. Reaction Section
The hydroformylation takes place in a high-pressure reactor. This will be realized in two steps. In the first step, a mixer-settler system is established, in which the two phases are separated in a decanter after the reaction. The decanter can be operated independently from the reactor, so the separation of the two phases can be studied in detail. In the second step, reaction and phase separation are combined in one device. The reactants will be pumped out of the storage tanks into one of the reaction units. The syngas will be dosed from gas cylinders.
3.2. Filtration Section
After the phase separation, the organic phase will be expanded in a flash unit. The gaseous phase is composed of unreacted alkenes, water and syngas; it is condensed and recycled to the reactor. The liquid phase, composed of alkanals, by-products, surfactants and the catalyst, is delivered to an ultrafiltration membrane. The micelles, including the catalyst and the surfactants, are retained by the membrane, whereas all reaction products and unreacted alkenes permeate.
3.3. Product Separation Section
The separation of the products can be realized using hybrid processes, which are of special importance for close-boiling substances. Unit operations including distillation, melt crystallization or organophilic nanofiltration are conceivable. In a distillation column, the lower-boiling alkenes are separated from the aldols and alkanes; the alkenes are then recycled to the reaction. The mixture of aldols and alkanes will be separated by an
organophilic nanofiltration. The high-boiling aldols remain in the retentate; linear and branched alkanes permeate. The permeate can then be separated by melt crystallization.
3.4. Plant-Design Parameters
Tables 1 and 2 give an overview of the inlet and outlet streams and of the operating parameters, respectively. They are the basis for the plant design.

Table 1: Inlet and outlet streams, calculated for the plant design

Stream  | Dodecene | H2  | CO  | H2O | Surfactant | Tridecanal
[mol/h] | 1.0      | 1.0 | 1.0 | 1.3 | 0.1        | 0.52

Table 2: Parameters used for the simulation studies

Temperature | Pressure | Reactor volume | Residence time | Reactant ratio
80 °C       | 60 bar   | 3.5 L          | 14.1 h         | 1.0
4. Modeling and Simulation Studies
Because of the importance of the reactor for the whole process, it is necessary to have an accurate model that reproduces the phenomena taking place in the gas-liquid-liquid reaction. Since this requires a good description of phenomena such as gas solubility or mass transfer in the liquid phase, the reactor model should provide detailed information on the assumptions and thermodynamic relations considered. Unfortunately, the user does not always get enough information. Another important aspect when modeling chemical reactions is the possibility of directly implementing reaction kinetics given in the literature. Most simulation tools allow the entry of reaction kinetics that fit a standard scheme, but demand the creation of user subroutines, and consequently knowledge of a high-level programming language, in order to implement other kinetics. A good way to overcome these drawbacks is to define new reactor models and kinetics from mathematical models formulated as symbolic expressions (e.g. as LaTeX expressions). The modeling environment MOSAIC (Kuntsche et al., 2010) enables this approach. In MOSAIC, models are saved as XML files that can be used to generate language-specific code for different simulation environments. The generated models can be exported and embedded in other simulation tools. Mass transfer between the gas and liquid phases was modeled with film theory, and the phase equilibrium at the interface was described with Henry coefficients. The new reactor model from MOSAIC was implemented in CHEMCAD by generating the C++ code of a user-added model (UAM), which is offered by default. This code requires the library BzzMath (Buzzi-Ferraris) to solve the model equations and has to be compiled before the UAM is used.
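The film-theory/Henry's-law description mentioned here can be sketched in a few lines. The numerical values below (Henry coefficient, liquid-film coefficient, interfacial area) are illustrative assumptions, not parameters of the MOSAIC model.

```python
def gas_liquid_flux(p_gas, c_liquid, henry, k_l, area):
    """Film-theory absorption rate per unit liquid volume:
    N = kL * a * (c_star - c), with the interfacial concentration
    c_star = p / H from Henry's law (H in Pa*m^3/mol)."""
    c_star = p_gas / henry
    return k_l * area * (c_star - c_liquid)

# Hypothetical values for a syngas component at the 60 bar operating pressure
H = 1.0e5    # Pa*m^3/mol, assumed Henry coefficient
kL = 1.0e-4  # m/s, liquid-film mass-transfer coefficient
a = 200.0    # m^2/m^3, specific gas-liquid interfacial area
rate = gas_liquid_flux(60.0e5, 40.0, H, kL, a)
print(rate)  # absorption rate in mol/(m^3 s), feeding the liquid-phase reaction
```

In the full reactor model this flux term is coupled with the reaction kinetics in the component balances, which is exactly what the symbolic MOSAIC formulation makes explicit.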
The implemented reactor model was connected with the other existing CHEMCAD units. The reaction kinetics were taken from Zhang et al. (2002). Different simulation studies were carried out to determine not only the mini-plant design but also possible operating points. Moreover, the main parameters for the reaction were identified through sensitivity analysis (Tab. 3).
Table 3: Results of the sensitivity analysis for the hydroformylation (conversion change in %)

Parameter      | -5%    | +5%
Pressure       | -10.04 | +8.12
Residence time | -7.62  | +2.20
Reactant ratio | -7.25  | +5.63
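The ±5% one-at-a-time scheme behind Table 3 can be sketched generically. The model below is a toy conversion expression standing in for the CHEMCAD flowsheet, so the resulting numbers only illustrate the procedure, not the values in the table.

```python
import math

def one_at_a_time(model, base, rel_step=0.05):
    """+/- rel_step one-at-a-time sensitivities of a scalar model output,
    reported as percent change relative to the base case."""
    y0 = model(**base)
    table = {}
    for name, value in base.items():
        row = []
        for sign in (-1.0, +1.0):
            perturbed = dict(base, **{name: value * (1.0 + sign * rel_step)})
            row.append(100.0 * (model(**perturbed) - y0) / y0)
        table[name] = tuple(row)
    return table

# Toy conversion model: monotonically increasing in pressure and residence time
def toy_conversion(pressure, residence_time):
    return 1.0 - math.exp(-1.0e-3 * pressure * residence_time)

sens = one_at_a_time(toy_conversion, {"pressure": 60.0, "residence_time": 14.1})
print(sens)  # {parameter: (% change at -5%, % change at +5%)}
```

Each dictionary entry corresponds to one row of a Table-3-style summary, with the base case at 60 bar and 14.1 h taken from the plant-design parameters.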
5. Conclusions
A novel process concept for the hydroformylation of higher alkenes has been designed for a mini-plant system. A first simulation model for the process has been developed. In order to attain an accurate description of the reactor and to allow the use of non-standard rigorous models in the future, the reactor model and the reaction kinetics were first implemented via MOSAIC. With the help of the developed mini-plant system, the effects of changing operating conditions on unit performance, as well as on the catalyst and its recycle, will be analyzed.
Acknowledgement
The authors acknowledge the support from the Collaborative Research Centre SFB/TR 63 "InPROMPT - Integrated Chemical Processes in Liquid Multiphase Systems" coordinated by the Berlin Institute of Technology - Technische Universität Berlin and funded by the German Research Foundation.
References
1. B. Fell, C. Schobben, G. Papadogianakis (1995): Hydroformylierung homologer [omega]-Alkencarbonsäureester mit wasserlöslichen Rhodiumcarbonyl/tert. Phosphan-Komplexkatalysatorsystemen, Journal of Molecular Catalysis A: Chemical, Volume 101, Issue 3, pp. 179-186.
2. M. Gimenez-Pedros, A. Aghmiz, C. Claver, A. M. Masdeu-Bulto, D. Sinou (2003): Micellar effect in hydroformylation of high olefin catalysed by water-soluble rhodium complexes associated with sulfonated diphosphines, Journal of Molecular Catalysis A: Chemical, Volume 200, Issues 1-2, pp. 157-163.
3. E. Paetzold, G. Oehme, C. Fischer, M. Frank (2003): Phosphinoethylsulfonatoalkylthioethers and diphenyl-[omega]-sulfonatoalkyl-phosphines as ligands and polyoxyethylene-polyoxypropylene-polyoxyethylene triblock co-polymers as promoters in the rhodium-catalyzed hydroformylation of 1-dodecene in aqueous two-phase systems, Journal of Molecular Catalysis A: Chemical, Volume 200, Issues 1-2, pp. 95-103.
4. Y. Zhang, Z.S. Mao, J. Chen (2002): Macro-kinetics of biphasic hydroformylation of 1-dodecene catalyzed by water-soluble rhodium complex, Catalysis Today, Volume 74, Issues 1-2, pp. 23-35.
5. M. Haumann, H. Koch, P. Hugo, R. Schomäcker (2002): Hydroformylation of 1-dodecene using Rh-TPPTS in a microemulsion, Applied Catalysis A: General, Volume 225, Issues 1-2, pp. 239-249.
6. C.C. Miyagawa, J. Kupka, A. Schumpe (2005): Rhodium-catalyzed hydroformylation of 1-octene in micro-emulsions and micellar media, Journal of Molecular Catalysis A: Chemical, Volume 234, Issues 1-2, pp. 9-17.
7. G. Bode, M. Lade, R. Schomäcker (2000): The Kinetics of an Interfacial Reaction in Micro-emulsions with Excess Phases, Chem. Eng. Technol., 23, pp. 405-409.
8. S. Kuntsche, H. Arellano-Garcia, G. Wozny (2010): A new Modeling Environment Based on Internet-Standards XML and MathML, Comp. Aided Chem. Eng., Vol. 28, pp. 673-678.
9. G. Buzzi-Ferraris, BzzMath: Numerical libraries in C++, Politecnico di Milano, www.chem.polimi.it/homes/gbuzzi
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Flowsheet Optimization by Memetic Algorithms
Maren Urselmann(a), Sebastian Engell(a)
(a) Process Dynamics and Operations Group, TU Dortmund, Emil-Figge-Str. 70, 44227 Dortmund, Germany
Abstract
In this contribution, the memetic algorithm (MA) proposed in [1] is extended to optimize a flowsheet problem that comprises a reactive distillation column with optional amounts of catalyst on the stages and an (optional) external reactor, so that different degrees of integration can be considered. The MA consists of an evolution strategy and a mathematical programming solver. The focus of this paper is on the influence of structural decisions, which are represented as discrete variables in the optimization problem, on the computational efficiency of the solution method. The introduction of discrete variables may result in an exponential increase in the computational effort needed for the solution by MINLP techniques. The results of the MA are compared to those obtained using commercially available MINLP solvers.
Keywords: flowsheet optimization, reactive distillation, memetic algorithm.
1. Introduction
Flowsheet optimization problems of chemical processes are characterized by the presence of a large number of discrete variables (representing e.g. the choice and the connections of equipment), continuous decision variables (e.g. equipment sizes and operating parameters), complex nonlinear models that restrict the search space, nonlinear cost functions, and the presence of many local optima. The classical approach to solve such problems is to use MINLP solvers that work on a superstructure formulation which explicitly represents all flowsheet alternatives [2]. This solution procedure is usually based on a decomposition of the MINLP problem into an IP master problem (optimization of the choice and the structure of equipment, and their interconnections) and NLP sub-problems (optimization of the continuous variables for fixed discrete variables). The structural decisions lead to a large number of discrete variables and to a significant increase in the computational effort needed for the solution by such methods. The mathematical programming (MP) methods which are employed to solve the continuous sub-problems that arise by fixing the discrete variables provide only one local optimum, which depends strongly on the initialization. Thus standard methods may not find the global optimum despite long computation times. Recently, we introduced a memetic algorithm (MA) for the global solution of mixed-integer design problems for a single unit operation, e.g. a reactive distillation column (RDC) [1]. The MA integrates an evolutionary algorithm (EA) and a mathematical NLP solver. The EA generates initial points for the local solver. It works in the space of the design variables, whereas the state variables of the designs are computed by the same solver that performs the local optimization.
The approach was applied to the example of the heterogeneously catalyzed and kinetically controlled synthesis of methyl tert-butyl ether (MTBE) from isobutene and methanol in the presence of n-butane [3]. By the use of the MA, the computational effort needed for a global solution of the
continuous sub-problems could be reduced by 75% in comparison to the reference algorithm (OQNLP/CONOPT). The introduction of structural decisions and additional constraints led only to a moderate increase in the computational effort, which demonstrates the potential of the MA. In this contribution, the MA is extended to a flowsheet optimization problem that comprises a reactive distillation column with optional amounts of catalyst on the stages and an (optional) external reactor, such that different degrees of integration (from a totally integrated reactive distillation column without an external reactor to a distillation column with pure separation functionality with a pre- or a side reactor) can be considered. The results of the flowsheet optimization by the MA are compared to the results of MINLP techniques that were used in previous work [4] for the optimization of such a reactor-separator configuration.
2. The memetic algorithm
Memetic algorithms are hybrid evolutionary algorithms coupled with local refinement strategies. In this work, an evolution strategy (ES), which is a special variant of an EA, is used.
2.1. Structure of the MA
The structure of the MA used here is shown in Figure 1. The optimization procedure starts with a feasible random initialization of the first population. In order to evaluate the μ individuals of the population, the corresponding model variables are computed by CONOPT by solving a simulation model. The resulting point in the space of all variables represents a possible design which is used as a starting point for the local optimization in the space of all continuous variables. This local search is also performed by CONOPT. According to the evolutionary model of Lamarck, the genes of the individuals are replaced by the values of the design variables of the corresponding local optimum. As long as no feasible design with N column stages, without or in combination with an external reactor, has been found, all model variables within the simulation model are initialized with the value 1. After a first solution has been found, the values of the model variables of the nearest feasible point found so far (measured by the Euclidean norm) are used as initial values. The generation cycle of the ES starts with a random selection of λ individuals for reproduction.

Figure 1. Structure of the memetic algorithm (initialization; simulation and evaluation of the first generation; selection for reproduction; recombination/mutation; simulation, local search by CONOPT and evaluation of the offspring generation; selection for the new generation; repeat until the termination criterion is fulfilled).
These individuals are recombined and mutated by problem-specific operators and become offspring individuals that are evaluated in the same manner as the individuals of the initial population. Then the population for the next generation cycle is selected by choosing the μ best individuals that do not exceed a maximal 'life-span' of κ generations out of the set of offspring and parent individuals. The generation cycle stops when a predefined termination criterion is fulfilled, e.g. a time limit or a generation limit.
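The generation cycle described in Section 2.1 can be sketched in Python. This is an illustrative outline only, not the authors' implementation: the fitness function, the `local_search` stand-in for the CONOPT-based NLP refinement, and all operator details and parameter values below are placeholder assumptions.

```python
import random

# Sketch of one (mu + lambda) memetic generation cycle with Lamarckian
# learning: the locally refined design replaces the genotype. `local_search`
# and `evaluate` are placeholders for the CONOPT refinement and the
# simulation-based profit evaluation (both assumed here).

def local_search(design):
    # Placeholder: would refine `design` to the nearest local optimum.
    return design

def evaluate(design):
    # Placeholder: would simulate the flowsheet and return the annual profit.
    return sum(design)

def generation_cycle(population, mu, lam, max_age):
    offspring = []
    for _ in range(lam):
        p1, p2 = random.sample(population, 2)                 # random mating selection
        child = [random.choice(g) for g in zip(p1["genes"], p2["genes"])]  # recombination
        child = [g + random.gauss(0.0, 0.1) for g in child]   # mutation
        child = local_search(child)                           # local refinement
        # Lamarckian learning: store the refined design as the genotype
        offspring.append({"genes": child, "fitness": evaluate(child), "age": 0})
    for ind in population:
        ind["age"] += 1
    # (mu + lambda) selection restricted to individuals within the age limit
    pool = [ind for ind in population + offspring if ind["age"] <= max_age]
    pool.sort(key=lambda ind: ind["fitness"], reverse=True)
    return pool[:mu]

pop = [{"genes": [random.random() for _ in range(5)], "fitness": 0.0, "age": 0}
       for _ in range(4)]
for ind in pop:
    ind["fitness"] = evaluate(ind["genes"])
pop = generation_cycle(pop, mu=4, lam=8, max_age=3)
```

The age limit implements the 'life-span' of κ generations: parents surviving too many cycles are excluded from the selection pool even if their fitness is high.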
2.2. Variants of the MA
Two variants of the MA were tested: the basic algorithm (MAbase), where no restrictions on either the number of feeds or the number of exchange streams are considered, and an extended version of the basic formulation where the number of feeds and the number of
exchange streams are restricted to a maximal number of three (MACF1). A detailed description of these variants for the optimization of an RDC without an external reactor can be found in [1]. Here, the exchange streams between the reactor and the column are handled in the same fashion as the feed streams.
3. The case study
As a case study, the optimization-based design of a reactive distillation column with an optional external reactor for the production of MTBE from isobutene and methanol (IB + MeOH ⇌ MTBE) in the presence of n-butane at a pressure of 8 bar is considered. The reaction is kinetically controlled, equilibrium limited and heterogeneously catalyzed. The substance system exhibits three binary azeotropes. The desired purity of the product is 99 mole-%. The total amount of the feed streams is fixed (F1,tot = 6.375 mole/s MeOH, F2,tot = 8.625 mole/s IB/n-butane). Structural and operational parameters, e.g. the number of stages and the reflux ratio, have to be determined such that the annual profit of the column is maximized. The superstructure of the process comprises N = 60 stages, of which only a subset may be included in the optimal solution. The reboiler and the condenser are modelled as stages without reaction. The model of the reactor is similar to the model of the stages, extended by external heat exchange and without the presence of a vapour phase. While the column stages can be purely separating or can possess an integrated functionality, the reactor provides a hold-up which is purely reactive. It has two possible feed streams (F1,cstr, F2,cstr) and possible liquid exchange streams with each stage of the column (from reactor to stage (Rin) and from stage to reactor (Rout)). The objective is to maximize the annual profit, which is calculated as the annual revenues for the products minus the annualized investment cost, the annual operating cost and the annual cost for raw materials.
The set of design variables comprises the amounts of both feeds i = 1, 2 on the stages k = 1, ..., N of the column, denoted by Fi(k), and in the reactor cstr, denoted by Fi,cstr; the amounts of catalyst on the stages k = 2, ..., N−1, denoted by Ecat(k); two variables αtop and αbottom ∈ (0, 1) for the reflux ratio at the top and the ratio of the evaporation rate to the product removal at the bottom of the column; the binary activation variables Mk for the stages k = 2, ..., N−1; the volume (Vcstr) of and the temperature (Tcstr) in the reactor; and variables ERin(k) ∈ [0, 0.9] and ERout(k) ∈ [0, 1.0] for the ratio of the liquid stream from stage k to the reactor to the total liquid stream that leaves stage k, and the ratio of the liquid stream that flows from the reactor to stage k to the total liquid stream that leaves the reactor. The models consist of a large number of algebraic equations that were formulated in the modelling language GAMS. Different models are used by the different algorithms. The reference algorithms SBB/CONOPT and SBB/OQNLP/CONOPT use the superstructure models MTBE_CSTRMINLP and MTBE_CSTRMINLP-CF. In the basic formulation MTBE_CSTRMINLP, it is assumed that fractions of both feed streams can enter the column on each stage, including the reboiler and the condenser. It is also assumed that fractions of the liquid stream from the reactor can enter each stage of the column except the reboiler and the condenser, and that fractions of the liquid streams can leave each of these stages to enter the reactor. MTBE_CSTRMINLP-CF is an extension of the basic formulation by a restriction on the number of feed streams of each feed and on the number of exchange streams in each direction to a maximum of three each. These restrictions introduce a large number of binary variables to represent the existence of the streams as well as additional inequality constraints into the model.
The MA introduced in this contribution uses the model formulations MTBE_CSTRNLP and
the simulation model MTBE_CSTRSim. MTBE_CSTRNLP is the model of the continuous sub-problems which arise by fixing all discrete variables of MTBE_CSTRMINLP. The maximal number of stages N is fixed to a value between 10 and 60 and all of these stages are active. MTBE_CSTRSim is the model used to determine the values of the model variables that correspond to a certain column design. It comprises a subset of the equations and of the inequalities of the optimization model MTBE_CSTRNLP. The design variables here are removed from the set of free variables, and the equations and the inequalities that restrict the feasible values of the design variables are removed from the set of constraints as well.
4. Extension of the MA to Flowsheet Optimization
The components of the MA developed for the optimization of an RDC only (see [1]) are extended by an optional external reactor. Due to the limited space, only the most important extensions are described here.
4.1. Representation
The individuals of the MA are represented by the design variables described in Section 3. Instead of the superstructure representation, a variable-length representation is used here, so that individuals of different lengths can be members of the same population. The number of stages is represented by a single integer variable that defines the number of the remaining variables. The external reactor is represented by a binary variable Mcstr that indicates the existence of the CSTR and by the continuous design variables Fi,cstr, Vcstr, Tcstr, ERin(k) and ERout(k). In the case of the formulations with restrictions on the number of streams (MACF1), this vector is extended by two integer variables nRin and nRout that represent the number of exchange streams in each direction and by two vectors iRin and iRout that represent the locations of the exchange streams.
4.2. Initialization & Variation
All operators for the initialization, recombination and mutation are applied in a hierarchical fashion. In the first step, the number of stages of the column and the existence of the reactor are determined by these operators. The basic concepts of the operators developed for the case study without an external reactor [1] are adapted to handle the variables of the reactor in the same fashion as the variables of the column. The initialization is done randomly with a uniform distribution within the feasible range of the variables. The variation operators may cause a change in the number of stages of the column. In this case, a mapping of the stage indices of the individuals involved to the stages of the offspring is performed [1]. According to this mapping, the variables that correspond to the same stages are varied in groups. Missing values, e.g. in the case of an increase of the number of stages caused by the mutation, are initialized randomly within their feasible domains. In the case of no external reactor, i.e. Mcstr = 0, all variables that correspond to the reactor are set to zero. After the application of the different operators, repair procedures are applied to restore feasibility with respect to the constraints defined on the design variables. The extension of the approach to flowsheet optimization puts the focus on the mutation of the variables that define the existence of optional operation units (here: the external reactor). If the variable Mcstr is mutated with a high probability
Figure 2. Best known solution: a column with 36 stages and MeOH and IB/n-butane feeds (height 23.4 m, diameter 42.8 cm, pressure 8.0 bar, MTBE product) coupled to an external reactor of 0.302 m³ at 327 K; reflux ratio 2.54, boil-up ratio 1.61; profit 1,018,830 €/a.
pmut(Mcstr), the collected information about promising values for the variables of the reactor is lost. Therefore pmut(Mcstr) is defined as a parameter to configure the MA.
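The variable-length stage handling and the guarded mutation of Mcstr described above can be sketched as follows. The operator names, bounds and probabilities are illustrative assumptions, not the authors' exact implementation; only the overall mechanics (random initialization of new stages, zeroing of reactor variables, configurable pmut(Mcstr)) follow the text.

```python
import random

# Sketch of a variable-length column representation: per-stage variables are
# held in a list whose length equals the number of stages. When variation
# changes the stage count, existing stages are inherited and missing values
# are initialised randomly within their (assumed) bounds, as in the text.

def remap_stages(stage_vars, new_n, low=0.0, high=1.0):
    new_vars = stage_vars[:new_n]                    # shrink: truncate
    while len(new_vars) < new_n:                     # grow: new stages random
        new_vars.append(random.uniform(low, high))
    return new_vars

def mutate_reactor_flag(individual, p_mut):
    # Mutating M_cstr discards the collected information about promising
    # reactor variables, so it is applied only with probability p_mut.
    if random.random() < p_mut:
        individual["M_cstr"] = 1 - individual["M_cstr"]
        if individual["M_cstr"] == 0:
            individual["V_cstr"] = 0.0               # no reactor: zero its variables
    return individual

catalyst = [0.2, 0.4, 0.6, 0.8]      # catalyst amounts on a 4-stage column
longer = remap_stages(catalyst, 6)   # grow the column to 6 stages
shorter = remap_stages(catalyst, 2)  # shrink the column to 2 stages
```

Keeping pmut(Mcstr) small preserves the learned reactor variables across generations, which is exactly the trade-off the tuning of this parameter in Section 5 addresses.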
5. Results
All algorithms were tested on a PC with a 3.06 GHz processor and 2 GB RAM. Algorithms with stochastic influences were tested 10 times and the median performances of these runs are compared with the deterministic runs of SBB/CONOPT. For the case study without an external reactor, a parameter tuning was done for all algorithms [1]. These parameters were also used for the extended case study here. The termination criterion is a time limit of 4 hours (MA) and a node limit of 10,000 nodes (SBB). To find a good parameter value for pmut(Mcstr), the MA was tested ten times with pmut(Mcstr) = 0.05, 0.1, 0.2, …, 0.5. The values 0.4 in the case of MAbase and 0.2 in the case of MACF1 led to the best results. In Figure 2, the best known solution for both the formulations with and without restrictions on the number of streams, which was found by the MA (but not by the other algorithms), is shown. MAbase found this solution in six test runs, after 194 min and 40 sec in the median case. The progress curves of the different algorithms are shown in Figure 3.
Figure 3. Progress curves of the different algorithms, a) MTBE_CSTRMINLP, b) MTBE_CSTRMINLP-CF
In both formulations, the MA provided better solutions than the reference algorithms. Without restrictions on the number of streams, SBB/CONOPT is the fastest algorithm; it found one solution close (in profit) to the best known solution after 12 min and 6 sec. In the case of the formulation MTBE_CSTRMINLP-CF, SBB/CONOPT only reached a solution quality of 951,295 €/a, while the quality of the best solution found by MACF1 is 1,018,660 €/a in the median case. The median time needed to find this solution was 236 min and 33 sec. The best known solution was found by MACF1 in one of the ten test runs. SBB/OQNLP with 3 CONOPT calls could not find good solutions within several days of computation.
6. Conclusions
The concept of the MA presented in [1] was successfully extended to a flowsheet optimization problem. The introduction of a large number of discrete variables and additional constraints led only to a moderate increase in the computational effort needed for the solution, while the solution quality remained consistently high. This shows the potential of the algorithm for larger flowsheet optimization problems which are characterized by a large number of structural decisions that are difficult to handle by MINLP techniques.
References
[1] M. Urselmann, S. Engell, Computers & Chemical Engineering (2011), in press.
[2] I. E. Grossmann, J. A. Caballero, H. Yeomans, LAAR 30 (2000), 263-284.
[3] S. Barkmann, G. Sand, S. Engell, CIT 80 (2008), 107-117.
Biomass to chemicals: Design of an extractive reaction process for the production of 5-hydroxymethylfurfural
Ana I. Torres,a Prodromos Daoutidis,a Michael Tsapatsisa
a
Department of Chemical Engineering and Materials Science, University of Minnesota, Minneapolis, MN, 55454, USA
Abstract
Furanic compounds such as 5-hydroxymethylfurfural (HMF) can be obtained from sugars and have the potential to serve as substitutes for petroleum-derived building blocks in the production of fuels and chemicals. In this work, we propose a process for the production of HMF from fructose based on extractive reaction and formulate an optimization problem in order to find the operating conditions that minimize its cost of production.
Keywords: Biorefinery, HMF, process design
1. Introduction
5-hydroxymethylfurfural (HMF) has been widely recognized as a key intermediate in the production of biomass-derived fuels and polymers. Its synthesis is based on the acid-catalyzed dehydration of sugars, mainly hexoses, which is highly non-selective when taking place in aqueous media. In order to improve the HMF yield, several laboratory-scale reaction-separation methods have been reported (Kuster 1990, Lewkowski 2000, Roman-Leshkov et al. 2006, Van Dam et al. 1986); yet, studies of the feasibility of these processes from an economic and energy perspective are still scarce. In our previous work (Torres et al. 2010) we considered the concept proposed by Dumesic and coworkers (Roman-Leshkov et al. 2006) to develop and evaluate a continuous process for the production of HMF from fructose, the hexose that provides the highest HMF yield in acid aqueous media. As shown in Fig. 1 (a), this process consists of a biphasic reactor in which the HMF produced in the aqueous phase is selectively extracted by the organic phase, thus minimizing its degradation. A liquid-liquid extractor improves HMF recovery and an evaporator is used for its purification. In that case, we found that HMF costs comparable to those of its oil-derived analogue were difficult to obtain and concluded that alternative processes, together with lower fructose prices and more selective kinetics, were needed in order to reduce the cost of HMF. In this work, we focus on extractive-reaction processes as an alternative for the production of HMF. These processes, which consist of a single unit combining reaction and extraction, have been broadly used in the production of metals, and the possibility of their application in other chemical industries has motivated research in this area (see for example Minotti et al. 1998). A possible scheme for the production of HMF using this approach is presented in Fig. 1 (b).
Here, the dehydration of fructose and the extraction of HMF take place in a tubular biphasic reactor-extractor where the aqueous solution containing fructose and catalyst and the stream containing the organic solvent are fed countercurrently. As in the previous process, HMF is separated from the organic solvent by evaporation, and both the evaporated solvent and the aqueous solution
Figure 1: (a) CSTR-based process studied in Torres et al. 2010. (b) Extractive reaction process. Blue streams represent aqueous phase flows; green streams, organic phase flows. vk denotes the molar flow rate of stream k; FJk the molar flow of component J in stream k; J = A: fructose, BPA: byproducts from fructose, B: HMF, C: levulinic acid, D: formic acid, BPB: other decomposition products from HMF, W: water, S: solvent.
containing unreacted fructose are recycled back to the reactor. The goal of this paper is to find the operating conditions that minimize the cost of production of HMF using this extractive-reaction process. The effect of different solvents, fructose prices and kinetics is also addressed.
2. Modeling and optimization of the process
The reactor-extractor was envisioned as an RTL contactor which, as shown in Fig. 2 (a), consists of a series of circular baffles that rotate on a horizontal axis in a cylindrical stator. The two phases flow countercurrently through the gap defined by the baffles and the cylinder. Open buckets placed between the baffles mix the phases by collecting and releasing portions of one phase into the other (Lo et al. 1983). For modeling purposes the compartment thus defined is assumed to behave as an ideal CSTR. All chemical reactions are assumed to occur only in the aqueous phase and, as shown in Fig. 3, the simplified first-order model proposed by Kuster and Temmink 1977 is considered. Transfer of HMF from the aqueous to the organic phase is modeled using the correlations for the mass transfer coefficient (K) from Godfrey and Slater 1994 and the average mass transfer area (a) reported in Alper 1988. With the additional assumption of an isothermal process, the following equations describe the steady state of the process presented in Fig. 1 (b) and represent the constraints of the optimization problem:

FJ0 + FJ2·(1−z) − FJ1 = 0
v0 + v2·(1−z) − v1 = 0
Ri−1 − Ri + ΣJ Σm γJm·rmi − NBi = 0
FRJ,i−1 − FRJ,i + Σm γJm·rmi = 0
Ei+1 − Ei + NBi = 0
FEB,i+1 − FEB,i + NBi = 0
v4 − v5 − v6 = 0
FB4 − FB6 = 0
vS0 + v5 − v3 = 0
NBi = K·a·(ρaq·R·yBi − ρorg·wBi)·Acomp·Δl

Here, vi, FJi, Ri, Ei, NBi, FRJi and FEBi are molar flowrates of species J in stream i (as defined in Figs.
1 (b) and 2 (b)); z is the fraction of the purge; Acomp and Δl are the cross-sectional area and length of each compartment of the contactor; ρaq and ρorg are the molar densities of the aqueous and organic phases, respectively; R is the partition coefficient of HMF between the organic and the aqueous phases; γJm is the stoichiometric coefficient of component J in reaction m, and rmi the reaction rate in compartment i, with rmi defined as follows:

r3,4,i = k3,4·yBi·MAi
r1,2,i = k1,2·yAi·MAi
MAi = ρaq·(1−X)·Acomp·Δl
xJi = FJi/vi
yJi = FRJi/Ri
wBi = FEBi/Ei
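The compartment balances for HMF can be sketched as residual functions of the kind an equation-oriented solver would assemble. The transfer law follows the NBi equation above; the molar densities and compartment dimensions below are assumed placeholder values, not the paper's data, and the transfer term is included explicitly in the aqueous HMF balance for consistency with the total aqueous balance.

```python
# Sketch of the HMF (component B) balances for one interior compartment i of
# the counter-current contactor, written as residuals that a solver would
# drive to zero. Parameter values marked "assumed" are illustrative only.

K = 4.85e-2        # mass transfer coefficient, m/s (Table 1)
a = 117.0          # mass transfer area, m2 per m3 of compartment (Table 1)
R_part = 1.65      # partition coefficient of HMF (Table 1)
rho_aq, rho_org = 5.0e4, 7.0e3   # molar densities, mol/m3 (assumed)
A_comp, dl = 0.05, 0.1           # compartment cross-section (m2) and length (m), assumed

def transfer_rate(y_B, w_B):
    """Molar rate of HMF transfer from the aqueous to the organic phase."""
    return K * a * (rho_aq * R_part * y_B - rho_org * w_B) * A_comp * dl

def hmf_residuals(FR_prev, FR_i, FE_next, FE_i, y_B, w_B, r_net):
    """Aqueous and organic HMF balances for compartment i."""
    N_B = transfer_rate(y_B, w_B)
    res_aq = FR_prev - FR_i + r_net - N_B   # aqueous: in - out + reaction - transfer
    res_org = FE_next - FE_i + N_B          # organic: in - out + transfer
    return res_aq, res_org

# At phase equilibrium (rho_aq*R*y_B = rho_org*w_B) no HMF is transferred:
y_eq = 1.0e-3
w_eq = rho_aq * R_part * y_eq / rho_org
```

When the aqueous-phase HMF fraction exceeds its equilibrium value the driving force is positive and HMF moves into the organic phase, which is the mechanism that protects it from degradation.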
Figure 2: Schematic of an RTL contactor. (a) Side and front views. (b) Flows in compartment i. Ri and Ei indicate the aqueous and organic phase global molar flows, FRJi and FEJi the aqueous and organic phase molar flows of species J (J as defined in Fig. 1), NBi the amount of HMF transferred from the aqueous to the organic phase, and MAi the molar holdup.
In addition, the following bounds were added to account for design recommendations or values reported in the literature for the operation of RTL contactors:
- Ratio between the aqueous and organic volumetric flows: 1/6 ≤ (v3/ρorg)/(v1/ρaq) ≤ 6 (Lo et al. 1983)
- Residence time in each compartment: 20 s ≤ τaq, τorg ≤ 35 s (Alper 1988)
- Dimensions of the contactor: 1 m ≤ L ≤ 8 m, 0.1 m ≤ D ≤ 2 m (Lo et al. 1983); minimum ratio between the length and the diameter of the reactor: L/D ≥ 2 (Jarudilokkul et al. 2000)

The objective function to be minimized is the cost of HMF, which balances raw material costs (fructose and solvent), the energy cost due to evaporation of the solvent and capital costs (RTL contactor, evaporator and condenser):

min f = ($fructose·FA0 + $solvent·FS0 + $fuel·λ·v5/η + CCF·TCI) / FB6

In this equation, TCI represents the total capital investment and CCF is the annualization factor. The procedure given by Seider et al. 2009 and the correlations for equipment costs from Seider et al. 2009 (condenser), Couper et al. 2010 (evaporator) and Lo et al. 1983 (contactor) were used to estimate the total capital investment. The cost function and constraints described above define an NLP optimization problem in which all the flowrates (except v6 and FB6, which define the production rate and purity), the fraction of the purge (z), the holdups and all the other variables needed to size the equipment (and thus to compute capital costs) are the optimization variables. The flowrates were allowed to vary freely, z was bounded between 0 and 1 and the sizes of the equipment were constrained to the ranges for which the cost correlations in Seider et al. 2009, Couper et al. 2010 or Lo et al. 1983 are valid. The values of the partition coefficient (R), the price of fructose ($fructose) and the kinetic constants (km) were used to generate different case studies and thus varied between optimization runs; all the other parameters were kept constant.
The ranges of variation for R, $fructose and km, as well as the values of the parameters used in the simulations, are given and justified in the following section. Finally, the optimization problem was solved using GAMS/SNOPT.
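The cost objective can be written out as a single expression per mole of HMF. The sketch below evaluates it for purely illustrative placeholder values: every number (prices, flows, latent heat, efficiency, TCI, CCF) is an assumption, not a figure from the paper.

```python
# Sketch of the HMF unit-cost objective,
#   f = ($fructose*F_A0 + $solvent*F_S0 + $fuel*lambda*v5/eta + CCF*TCI) / F_B6,
# i.e. raw materials + evaporation energy + annualised capital, per mole of
# HMF product. All numerical values below are illustrative placeholders.

def hmf_unit_cost(F_A0, F_S0, v5, F_B6, TCI,
                  price_fructose, price_solvent, price_fuel,
                  latent_heat, boiler_eff, CCF):
    raw_materials = price_fructose * F_A0 + price_solvent * F_S0   # feed costs
    evaporation = price_fuel * latent_heat * v5 / boiler_eff       # solvent evaporation duty
    capital = CCF * TCI                                            # annualised investment
    return (raw_materials + evaporation + capital) / F_B6          # per mole of HMF

cost = hmf_unit_cost(F_A0=1.0, F_S0=0.05, v5=4.0, F_B6=0.45, TCI=1.2e6,
                     price_fructose=0.07, price_solvent=0.40, price_fuel=2.0e-8,
                     latent_heat=3.5e4, boiler_eff=0.85, CCF=1.0e-7)
```

Since FB6 appears only in the denominator, raising the HMF yield at fixed feed directly reduces the unit cost, which is why the fructose term dominates the optimization results in Section 3.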
3. Results
The base case optimal result was obtained considering an inlet aqueous stream containing 30 % w/w fructose and 0.25 M hydrochloric acid and an organic inlet
Table 1: Parameters used in the base case simulation
Partition coefficient: R = 1.65
Kinetic constants (T = 453 K): k1 = 1.536·10⁻² s⁻¹; k2 = 1.536·10⁻² s⁻¹; k3 = 9.136·10⁻⁴ s⁻¹; k4 = 1.163·10⁻³ s⁻¹
Mass transfer coefficient: K = 4.85·10⁻² m·s⁻¹
Mass transfer area: a = 117 m²/m³ of compartment

Figure 3: Simplified reaction model used in the simulations. F: fructose, BPA: byproducts from fructose, LA: levulinic acid, FA: formic acid, BPB: other products from HMF.
stream composed of a 7:3 w/w mixture of methyl isobutyl ketone (MIBK) and 2-butanol. The process was assumed to operate at 453 K; the value for the partition coefficient was taken from Roman-Leshkov and Dumesic 2009 and the kinetic parameters were estimated from Kuster and Temmink 1977 (see Table 1). The cost of fructose was assumed to be 25 ¢/lb, and the desired HMF purity and the production level were respectively fixed to 95% (molar basis) and 7000 tons/yr (between 2% and 5% of the ethanol production of a current biorefinery). Under these assumptions the minimum HMF cost, 0.24 $/mol, is achieved with a 1.96 m³ contactor with 12 compartments and represents a 22% reduction when compared to the minimum cost of 0.306 $/mol reported for the CSTR-based process (Torres et al. 2010). As shown in Fig. 4, three facts explain this reduction in cost. First, as expected, this process operates at higher fructose yields due to an increase in both conversion and selectivity (~100% and 46% vs 91% and 43%, respectively), thus reducing the main cost, fructose, which accounted for 83% of the previous cost. Second, the evaporation costs are reduced to a third, which is consistent with the fact that we expect this process to provide a more efficient utilization of the solvent, consequently reducing the amount of heat needed to reach the same HMF purity. Finally, capital costs are also largely reduced, mainly due to the absence of an expensive liquid-liquid extractor. However, as capital costs accounted for less than 5% of the total cost, their reduction has a lower impact on the HMF cost. As reported in Roman-Leshkov and Dumesic 2009, higher selectivities towards HMF can be obtained if extracting solvents providing better partition coefficients than 7:3 MIBK-2-butanol are used. Simulations using tetrahydrofuran (THF), the solvent reported to have the largest partition coefficient (R = 7.1), were performed, finding that an extra 6% reduction in cost is possible. As seen in Fig.
4 (b), for both solvents fructose dominates the cost, accounting for almost 90% of it. Motivated by this, a set of simulations using the lowest possible price for fructose, i.e. the glucose price of 15 ¢/lb, was performed. The results showed that for the base case kinetics (Table 1) and extracting solvent (7:3 MIBK:2-butanol), the minimum achievable cost is 0.15 $/mol. Finally, as more recent experimental data published in Roman-Leshkov et al. 2006 reported HMF yields higher than those predicted by the kinetic constants presented in Table 1, the sensitivity of the HMF cost to these constants was also considered. An estimate of kinetic constants that reproduce the yields in that paper can be obtained by fitting the reported conversion and selectivity to the first-order model presented in Fig. 3 (more details on this can be found in our previous work, Torres et al. 2010). Simulations considering the kinetics that correspond to the 68% conversion and 70% selectivity published for run 4 in Roman-Leshkov et al. 2006 resulted in HMF costs
between 0.21 $/mol and 0.23 $/mol, representing at most a 10% improvement over the base case. Larger improvements are only obtained when considering more selective, hypothetical kinetics.
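As a rough consistency check of the reported selectivities, the first-order network of Fig. 3 bounds the aqueous-phase HMF selectivity at k1/(k1+k2), and HMF degradation (k3, k4) can only lower it below this ceiling. The sketch below evaluates the ceiling with the Table 1 constants and applies an assumed degradation penalty; the mean residence time t is a made-up illustrative value, not a quantity from the paper.

```python
import math

# Back-of-the-envelope selectivity check using the first-order constants of
# Table 1. With parallel first-order consumption of fructose, the fraction
# routed to HMF is k1/(k1 + k2) regardless of conversion; with k1 = k2 the
# selectivity therefore cannot exceed 50%. Fast extraction shortens the time
# HMF spends in the aqueous phase and pushes selectivity toward this ceiling.

k1 = 1.536e-2   # fructose -> HMF, 1/s
k2 = 1.536e-2   # fructose -> byproducts BPA, 1/s
k3 = 9.136e-4   # HMF -> levulinic acid + formic acid, 1/s
k4 = 1.163e-3   # HMF -> byproducts BPB, 1/s

ceiling = k1 / (k1 + k2)                  # selectivity upper bound

t = 60.0                                  # assumed mean aqueous residence of HMF, s
survival = math.exp(-(k3 + k4) * t)       # first-order survival factor
selectivity = ceiling * survival
```

The ~46% selectivity reported for the extractive reactor sits close to the 50% ceiling, consistent with the claim that extraction nearly eliminates HMF degradation; the hypothetical "more selective kinetics" of the last paragraph would correspond to raising the ceiling itself.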
Figure 4: Optimal results under different scenarios. CSTR+LLE: results from our previous study (Torres et al. 2010); ER: BC: extractive-reactor base case; ER: THF: extractive-reactor simulation using THF as the extracting solvent; ER: LP: extractive-reactor simulation using the lowest possible price of fructose (15 ¢/lb). (a) Minimum HMF cost ($/mol). (b) Cost distribution ($/mol). $F: fructose cost; $Qev: evaporation cost; $S: solvent cost; $Cap: capital cost. (c) Conversion (C), selectivity (S) and yield (Y).
4. Conclusion
The extractive-reaction process studied in this paper represents an improvement over the one published in Torres et al. 2010, achieving HMF costs between 0.21 $/mol and 0.24 $/mol. These results correspond to simulations where either the kinetics reported by Kuster and Temmink 1977 or those estimated from Roman-Leshkov et al. 2006 were considered. Further reduction of the HMF cost could come from alternative reaction pathways or from lower fructose prices.
Acknowledgments
The authors would like to thank the National Science Foundation for financial support (grant CBET-0855863).
References
E. Alper, 1988, Chemical Engineering Research & Design, 66, 147-151
J. Couper, W. Penney, J. Fair, S. Walas, 2010, Chemical Process Equipment - Selection and Design
J. Godfrey, M. Slater, 1994, Liquid-Liquid Extraction Equipment
S. Jarudilokkul, E. Paulsen, D. Stuckey, 2000, Biotechnology Progress, 16, 1071-1078
B. Kuster, H. Temmink, 1977, Carbohydrate Research, 54, 185-191
B. Kuster, 1990, Starch, 42, 314
J. Lewkowski, 2000, ARKIVOC, i, 17
T. Lo, M. Baird, C. Hanson, 1983, Handbook of Solvent Extraction
M. Minotti, M. Doherty, M. Malone, 1998, Industrial & Engineering Chemistry Research, 37, 4748-4755
Y. Roman-Leshkov, J. Chedda, J. Dumesic, 2006, Science, 312, 1933-1937
Y. Roman-Leshkov, J. Dumesic, 2009, Topics in Catalysis, 52, 297
W. Seider, J. Seader, D. Lewin, S. Widagdo, 2009, Product and Process Design Principles: Synthesis, Analysis and Evaluation
A. Torres, P. Daoutidis, M. Tsapatsis, 2010, Energy and Environmental Science, 3, 1560-1572
H. Van Dam, A. Kieboom, H. Van Bekkum, 1986, Starch, 38, 95-101
A strategy to extend reactive distillation column performance under catalyst deactivation
Rui M. Filipe,a,b Henrique A. Matos,b,c Augusto Q. Novaisd
a
Área Departamental de Engenharia Química, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emídio Navarro, 1, 1959-007 Lisboa, Portugal b Centro de Processos Químicos, Av. Rovisco Pais, 1049-001 Lisboa, Portugal c Departamento de Engenharia Química e Biológica, Instituto Superior Técnico, Av. Rovisco Pais, 1049-001 Lisboa, Portugal d Unidade de Modelação e Optimização de Sistemas e Energia, Laboratório Nacional de Energia e Geologia, Est. do Paço do Lumiar, 1649-038 Lisboa, Portugal
Abstract This work addresses the effects of catalyst deactivation and investigates methods to reduce their impact on reactive distillation column performance. The use of variable feed quality and reboil ratio is investigated using a rigorous dynamic model developed in gPROMS and applied to an illustrative example, the olefin metathesis system, wherein 2-pentene reacts to form 2-butene and 3-hexene. Three designs and different strategies for the column energy supply to tackle catalyst deactivation are investigated and the results compared. Keywords: reactive distillation, catalyst deactivation, feed quality, modeling, simulation.
1. Introduction Reactive distillation (RD) is a successful case of process intensification. It combines reaction and separation in the same physical vessel, with economic and environmental gains (Taylor and Krishna, 2000), leading to systems with significantly greener engineering attributes (Malone et al., 2003). In previous work, the authors developed a framework combining feasible regions and optimization techniques for the design and multi-objective optimization of complex reactive distillation columns (RDC) (Filipe et al., 2008). This led to the consideration of RDC with distributed feeds, involving the combination of superheated and subcooled feeds that provide a source or a sink of heat at specified trays of the column, which favors reaction while reducing the total reactive holdup requirements. It was also found that higher conversions could be obtained with the same reactive holdup by using these feed qualities outside the traditional range, which led to the consideration of using this technique to overcome catalyst deactivation during column operation. Catalyst deactivation represents both an operational and a design problem. The reaction conversion achieved at each tray is reduced, which may limit column performance and product specifications. However, if catalyst deactivation is addressed at the design stage, an early assessment is possible and an operational strategy can be set in place to deal with the catalyst life-cycle. Little attention has been paid to catalyst deactivation in
RDC by the research community. Wang et al. (2003) address the control of RDC when the production rate changes or the catalyst deactivates, and propose a control scheme able to maintain high purity and high conversion under such conditions. This work addresses the effects of catalyst deactivation and investigates methods to reduce their impact on RDC performance. In previous work (Filipe et al., 2009) the use of variable feed quality and reboil ratio was investigated, and their positive effect in dealing with catalyst deactivation assessed. This work further extends the previous analysis by adding two new designs and different strategies to tackle catalyst deactivation. A rigorous dynamic model developed in gPROMS and applied to an illustrative example, the olefin metathesis system, wherein 2-pentene reacts to form 2-butene and 3-hexene, is used to investigate how feed quality and reboil ratio changes can maintain product purity while the catalyst deactivates. A comparison of the results is also provided.
2. Dynamic model The rigorous dynamic model, developed using gPROMS, was built modularly and allows for different numbers of trays and feeds, as well as different feed qualities. Mass and energy balances are used at each element of the column. Pressure drop over the column is considered and calculated from the vapor flow speed and liquid height at each tray. Deviations from phase equilibrium can be accounted for through the built-in Murphree stage efficiency equation, although they were neglected in this work. Physical properties are estimated using the included package IPPFO for ideal systems. The reaction is considered to occur only in the liquid phase at specified trays of the column, and the reactive holdup, rather than the catalyst amount, is specified. Three different designs taken from the Pareto front built using the previously reported design and optimization framework (Filipe et al., 2008) are used: case A represents the typical solution with one feed and low reactive holdup, while cases B and C represent designs with the same number of trays but with different reactive holdup and feed configurations (Table 1). The reactive holdup is equally distributed between the reactive trays. Catalyst deactivation is simulated through the inclusion of a negative exponential decay factor in the reaction rate constant. Although different decay laws could be used, the results shown here can be expected to be representative of a typical system behavior.
Table 1. Design specifications
Case | Stages | Feed trays | Reboil ratio | Reflux ratio | Feed rate (mol/s) | Feed temperatures (K) | Condenser duty (kW) | Reboiler duty (kW) | Purity (mol %) | Reactive trays | Total reactive holdup (kmol)
A | 14 | 8 | 3.2 | 6.02 | 5.56 | 401 | -460.68 | 256.58 | 97.77 | 6-10 | 24.7
B | 23 | 8, 17 | 1.85 | 4.14 | 2.86, 2.70 | 298, 560 | -337.58 | 148.38 | 96.82 | 7-11 | 22.55
C | 23 | 9, 14 | 1.23 | 2.48 | 2.18, 3.38 | 298, 416 | -229.32 | 99.60 | 96.24 | 9-19 | 60.28
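The deactivation law described above (a negative exponential decay factor applied to the reaction rate constant) can be sketched as follows; the half-life used is an assumed illustration, not a parameter from the paper.

```python
import math

def rate_constant(k0, decay, t):
    """Rate constant under first-order (exponential) catalyst deactivation.

    k0 is the fresh-catalyst rate constant, decay the deactivation constant
    (1/time) and t the time on stream.
    """
    return k0 * math.exp(-decay * t)

def activity(decay, t):
    """Relative catalyst activity in percent."""
    return 100.0 * math.exp(-decay * t)

# Assumed illustration only: activity halves every 70 days (not a paper value)
decay = math.log(2) / 70.0
for day in (0, 70, 140):
    print(day, round(activity(decay, day), 1))
```

With this decay factor the tray reaction rate simply scales with the remaining activity, which is how the dynamic model degrades conversion over the catalyst life-cycle.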
Figure 1 depicts the variation of product purity with catalyst activity for the three scenarios considered. The lower decrease rate observed for case C is justified by its larger reactive holdup.
3. Manipulation of the feed quality and reboil ratio The use of feed qualities outside the traditional range (q < 0 and q > 1) has proved to be instrumental in reducing the total reactive holdup (Hoffmaster and Hauan, 2006; Filipe et al., 2008). It is therefore expected that the feed quality can also be effective in dealing with catalyst deactivation. Feed quality is related to the energy content of the feed and dictates how the feed is distributed between the liquid and vapor streams. Figure 2 depicts the variation of the feed temperature with the feed quality for case B. The explicit specification of the feed quality is not possible in the model developed, as it would involve the specification of the feed enthalpy, a dependent variable. Alternatively, the feed specification is made indirectly by assigning values to temperature, pressure and composition, which for a single-component stream limits the conditions to below (q > 1) and above (q < 0) the boiling point. In order to obtain a mixed feed with quality between 0 and 1, different combinations of two feeds with temperatures above and below the boiling point can be used, thus achieving different energy contributions to the total flow. Streams with constant pressure, fixed composition (pure 2-pentene) and variable temperature were used to assess the effect of the feed quality outside that range. Figure 3 depicts the variation of product purity with the feed temperature. Note that in cases B and C, only the vaporized "hot" feed was subject to changes in the temperature. Case A displays low sensitivity to feed temperature. In this design the small reactive holdup, together with the small size of the column, contributes to this limitation, since it restricts the scope to handle any further decreases in the already low catalyst availability. A slightly more flexible behavior is found in larger columns, such as in cases B and C, where a more marked influence of the feed temperature is found.
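Since the feed quality q is linear in the feed enthalpy, the overall quality of a mixed feed follows from an enthalpy balance as a flow-weighted average. The sketch below combines the two case B feed flows from Table 1; the individual feed qualities are assumed values for illustration only.

```python
def mixed_feed_quality(f1, q1, f2, q2):
    """Overall quality of two combined feeds (flow-weighted average).

    Valid because q is linear in the feed enthalpy:
    q = (H_vap - H_feed) / (H_vap - H_liq).
    """
    return (f1 * q1 + f2 * q2) / (f1 + f2)

# Case B feed flows from Table 1 (2.86 and 2.70 mol/s); the two feed
# qualities are assumed values: a subcooled stream (q > 1) mixed with a
# superheated stream (q < 0) gives a partially vaporised overall feed.
print(round(mixed_feed_quality(2.86, 1.4, 2.70, -0.6), 3))  # 0.429
```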
Comparing cases B and C, which have the same number of trays, it can be noted that case C, while exhibiting the lowest reboil ratio of 1.23, has a reactive holdup almost three times higher than B, displaying greater flexibility to deal with catalyst deactivation.
Figure 1. Variation of product purity (%) with catalyst activity (%).
Figure 2. Variation of the feed temperature (K) with the feed quality for case B.

[...] with the optimal solution is shown. The information includes the thermodynamic properties of each state point of the cycle, the flow rates, the design and operating variables of the cooling system, and the value of the objective function. As observed, the total annualized cost of the system is reduced compared to the base case. This is accomplished by reducing the number of stages (Nstages) in the distillation column and feeding the column with a feed tray (Fstage) closer to the reboiler. Also, the reflux ratio (RR) and the distillate-to-feed ratio (D/F) are reduced. These changes lead to a smaller column, which is adequate for the relatively high evaporator temperatures of our case study. The temperature (T) and pressures (PHigh, PLow) of the system adopt the same values as in the base case. The mass flow of the absorbent-refrigerant solution (m) is significantly increased, while the streams of the external fluids (mw) remain very similar to the base case. With the resulting set of decision variables, the total area of the heat exchangers is significantly increased. The coefficient of performance (COP) is also increased.
Table. Comparison between the decision variables in the base case and the optimal solution.
Rows: Nstages, Fstage, D/F, RR, Vfrac, T [°C], m [kg/s], ω, PHigh [bar], PLow [bar], mw [kg/s] (three streams), ΔTSHX [°C], ΔTSC [°C]. Columns: Base case, Optimal case.
Table. Results of the optimal absorption cooling system obtained from the presented approach. Rows: Total annualized cost [M€/yr], Operational cost [M€/yr], Fixed cost [M€/yr], AT [m2], Steam [kg/h], COP. Columns: Base case, Optimal case.
Integrating process simulators and MINLP methods for the optimal design of absorption cooling systems
Table. Thermodynamic properties and mass flow rates of the cycle (base case, optimal case). Columns: State point, P [bar], T [°C], x, m [kg/s].
Conclusions
This work introduces a systematic strategy for the optimal design of absorption cooling systems. The presented method relies on an MINLP algorithm that integrates commercial process simulators and optimization tools. The proposed algorithm iterates between two types of subproblems: a nonlinear programming (NLP) subproblem and a specially tailored master mixed-integer linear programming (MILP) problem. From numerical results we concluded that it is possible to significantly improve the economic performance of cooling systems, with a reduction of the total annualized cost and an increase of the coefficient of performance. Particularly, the larger profitability of this work is attained by properly adjusting the operating conditions of all the units and streams embedded in the flowsheet.
A method for the design and planning operations of heap leaching circuits Jorcy Y. Trujillo,a Mario E. Mellado,b Edelmira D. Gálvez,b,c Luis A. Cisternas,a,b a
Departamento de Ingeniería Química, Universidad de Antofagasta, Chile Centro de Investigación Científico Tecnológico para la Minería, CICITEM, Chile c Departamento de Ingeniería Metalúrgica, Universidad Católica del Norte, Chile b
Abstract Heap leaching is a widely used extraction method for low-grade minerals, including copper, gold, silver, and uranium. This method has new applications in non-metallic minerals such as saltpeter and in soil remediation. Although a number of experimental and modeling studies have been carried out, which allow a better understanding of the phenomena and the operation, few studies have been carried out with the objective of optimizing the process. Most of the studies that consider optimization, either experimentally or through the use of models, have been done from a technical perspective. The aim of this work is to develop a methodology for the design and planning of heap leaching circuits. A superstructure which includes a number of alternative circuits is proposed. Then a mathematical model is developed that represents the constraints of the system and maximizes the profits. An example is considered to validate the proposed methodology. The results show that the current mode of operation of these systems can be improved using this methodology. Keywords: Heap leaching, optimization, hydrometallurgy, process design, planning.
1. Introduction Heap leaching (HL) is a process to extract low-grade minerals from ore, including copper, precious metals, nickel and uranium. This method has new applications including non-metallic minerals such as saltpeter (Valencia et al., 2008) and soil remediation (Hanson et al., 1993). In HL, mined ore is crushed into small chunks, or ground and agglomerated into quasi-uniform particles, and heaped on an impermeable plastic and/or clay-lined leach pad, where it is irrigated with a leach solution which contains an appropriate leaching agent. The solution percolates through the heap and leaches out the valuable metal. The leach solution containing the dissolved metals is then collected in a storage pond. Later, the leached solution is sent to a solution purification process (e.g. solvent extraction) and a metal recovery process (e.g. electrowinning). Although a number of experimental, modeling and optimization studies have been carried out, which certainly allow a better understanding of the phenomena and the operation, only a few studies have been done with the goal of optimizing the process concerning its design and planning. Most of the studies which have considered optimization, either experimentally or through the use of models, have been done from a technical perspective. For example, Mellado et al. (2010a) studied the optimization of the irrigation flow rate. Padilla et al. (2008) analyzed the economics of the heap, which represents a balance between the recovery and plant capacity. The results of the study have shown that the design (height of the heap) and planning of the
operation (operational time) were interactive factors, and that maximum recovery was not necessarily the best measure of operational efficiency based on economic considerations. The aim of the present work is to develop a methodology for the design and planning of the heap leaching circuit.
2. Mathematical Model In this section, a MINLP model to optimize the planning and design of the heap leaching system is presented. The problem consists of the design (usually the heap height) and planning (leaching time) of a heap leaching system for the maximization of economic benefits. The solution strategy consists of using mathematical programming based on a superstructure of the heap leaching system. The heap system consists of heap leach units and solution purification/metal extraction (SPME) units. The superstructure of the heap leaching system is built by including a mixer at the input and a divider at the output of each heap leach unit and each SPME unit, as shown in Fig. 1, and allowing the transfer of solutions between all the units or those that the designer wants to consider. Then, mass balances for each component were developed for each process unit.
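The mixer and divider balances that connect the superstructure nodes can be sketched as below; `mixer` and `splitter` are hypothetical helpers, not code from the paper.

```python
def mixer(streams):
    """Mass balance for a superstructure mixer node.

    streams: list of (q, x) pairs, where q is the volumetric flow per cycle
    and x a dict of component concentrations. Hypothetical helper, not
    code from the paper.
    """
    q_out = sum(q for q, _ in streams)
    comps = {c for _, x in streams for c in x}
    x_out = {c: sum(q * x.get(c, 0.0) for q, x in streams) / q_out for c in comps}
    return q_out, x_out

def splitter(q, x, fractions):
    """Divide a stream into fixed fractions; composition is unchanged."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return [(q * f, dict(x)) for f in fractions]

q_mix, x_mix = mixer([(10.0, {"Cu": 2.0}), (5.0, {"Cu": 0.5})])
print(q_mix, round(x_mix["Cu"], 3))  # 15.0 1.5
```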
Figure 1. Superstructure for heap leach and SPME units (inlet and outlet streams L with concentrations x and flow rates q for each unit j).
The mass balance in each heap leaching unit j is given by

L_{j,k}^out = L_{j,k}^in + M_{j,k} (R_{j,k} - R_{j-1,k})    (1)

where L_{j,k} = x_{j,k} q_j is the mass flow rate per cycle, x_{j,k} is the concentration of k, and q_j the volumetric flow rate per cycle in heap j. Now, M_{j,k} is the mass of metal of the valuable species k in each heap j, and is given by M_{j,k} = Z_j A_j ρ g_k. Here Z_j, A_j, ρ,
g_k are the heap height, heap area, mineral density and mineral grade, respectively. Also, R_{j,k} is the recovery of metal k in heap j, and it can be calculated using the analytical model (Mellado et al., 2010b)

R_j = (β / Z_j^γ) [ 1 - λ e^{-k_τ ((u_s/(ε_b Z_j)) t - ω)} - Λ e^{-k_ω ((D_Ae/(ε_0 r²)) t - (ε_b Z_j/u_s) ω)} ]^α    (2)
where α, β, γ, k_τ, k_ω, λ and Λ are constants to be computed, u_s is the superficial bulk flow velocity, ε_b is the bulk solution volume fraction, t is time, D_Ae is the effective pore diffusivity of the reagent, ε_0 is the ore porosity, and r is the particle radius. Note that eq. (2) assumes that the heap leaching units are operating in series. From the solution viewpoint the heap can operate in countercurrent, concurrent and/or cascade flow. In the event that the process unit corresponds to a solution purification/metal extraction (SPME) unit, the material balance is given by
L_{j,k}^out = L_{j,k}^in - P_{j,k}    (3)
where P_{j,k} is the production of metal k in the SPME unit j. The cycle time, t, is given by the following equation, where N is the number of cycles in the time horizon H:

H = N t    (4)

The cycle time corresponds to t = max_j (t_j - t_{j-1}), where t_j corresponds to the leaching time of the heap j. The objective function corresponds to the maximization of the following function, where I is the income from sales and C the design and operation costs; w_I and w_C represent the weights of the revenue and cost functions:

max U = w_I I - w_C C    (5)
The model is completed with the mass balances in the mixers and splitters, variable upper and lower bounds, assignments of concentrations and flow rates at the outlet of the SPME units, cost and income functions, and McCormick relaxations for the bilinear expressions. The resulting model corresponds to an MINLP problem.
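The McCormick relaxation mentioned above replaces each bilinear term w = x·y (e.g. flow times concentration) by four linear inequalities over the variable bounds. A minimal feasibility check of that envelope:

```python
def mccormick_envelope(x, y, w, xl, xu, yl, yu, tol=1e-9):
    """Check whether (x, y, w) satisfies the McCormick relaxation of
    w = x*y over the box [xl, xu] x [yl, yu]: two underestimators and
    two overestimators, all linear in x, y and w."""
    under1 = xl * y + x * yl - xl * yl   # w >= under1
    under2 = xu * y + x * yu - xu * yu   # w >= under2
    over1 = xu * y + x * yl - xu * yl    # w <= over1
    over2 = xl * y + x * yu - xl * yu    # w <= over2
    return (w >= under1 - tol and w >= under2 - tol
            and w <= over1 + tol and w <= over2 + tol)

# The exact bilinear product always satisfies its own envelope ...
print(mccormick_envelope(2.0, 3.0, 6.0, 0.0, 4.0, 0.0, 5.0))   # True
# ... while a point far from x*y is cut off by the relaxation
print(mccormick_envelope(2.0, 3.0, 20.0, 0.0, 4.0, 0.0, 5.0))  # False
```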
3. Case Study This section presents the application of the model to an example which corresponds to a heap leaching system with one heap and one SPME unit. The heap leaching unit corresponds to a copper ore heap of 456,000 m2, with a grade of 1.5% of copper. Also, the time horizon corresponds to 360 days and the copper price was considered to change from 2,000 to 12,000 $/ton. The recovery function is represented by the following disjunction:

y_i
[ R_j = a_i + b_i t
  t_i^LO <= t <= t_i^UP
  Z_j = Z_i ]    (6)
A total of eight disjunctions were considered, including two levels of heap heights (6 and 9 m) and four time ranges (from 6 to 180 days) (see Figure 2). The problem was solved using GAMS-BARON on an Intel Core i7 CPU at 2.67 GHz in 0.16 s. The results are shown in Figure 3. Although the profit increases as the selling price of copper increases, as expected, the leaching cycle times (and therefore the number of cycles in the time horizon) and the heights of the heaps vary in a non-regular way.
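One standard way to hand a disjunction such as eq. (6) to an MINLP solver is a big-M reformulation, in which a binary y_i enforces or relaxes the constraints of disjunct i. The sketch below is illustrative only; it is not necessarily the formulation used by the authors, and the coefficient values in the example calls are invented.

```python
def big_m_constraints(y, R, t, Z, a, b, tlo, tup, Zi, M=1e4, tol=1e-9):
    """Big-M form of one disjunct of eq. (6): when the binary y = 1 the
    constraints R = a + b*t, tlo <= t <= tup and Z = Zi are enforced;
    when y = 0 they are relaxed by the large constant M."""
    slack = M * (1 - y) + tol
    return (abs(R - (a + b * t)) <= slack
            and tlo - slack <= t <= tup + slack
            and abs(Z - Zi) <= slack)

# Active disjunct (y = 1) must match its linear recovery model exactly:
print(big_m_constraints(1, 0.7, 100.0, 9.0,
                        a=0.5, b=0.002, tlo=50.0, tup=180.0, Zi=9.0))
# Inactive disjunct (y = 0) imposes nothing:
print(big_m_constraints(0, 0.2, 5.0, 6.0,
                        a=0.5, b=0.002, tlo=50.0, tup=180.0, Zi=9.0))
```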
Moreover, copper recoveries were between 83% and 55%. If the copper grade in the ore is reduced to 1%, the heap height was 9 m for all the copper prices, but the leaching times decreased (and also the recoveries) as the metal price increased. This simple example shows that there are clear ways to improve the design and planning, in order to maximize profits in heap leaching systems.
Figure 2. Copper recovery disjunctions of equation (6): recovery versus time (days) for Z = 6 m and Z = 9 m.
Figure 3. Results of the case study: a) heap height, b) cycle numbers, c) cycle time and d) profit.
4. Conclusions A MINLP model has been developed for the process design and planning of heap leaching systems. The results show that heap leaching design and planning are sensitive to the metal price. They also show that the design (height of the heap) and the operation (leaching time) are coupled from the economic perspective, and thus the search for economically optimal conditions must consider both, together with other variables. This coupling is due to the fact that these variables affect both the recovery and the capacity of the heap leaching operation. It can be observed that the economic optimum does not necessarily correspond to the maximum recovery. The application to more complex systems, such as systems with several heaps and SPME units and the production of various metals, is left for future work. Moreover, the disjunctions derived from a nonlinear recovery equation, as used in this work, can introduce some error in the computation of the profits. Estimating the error involved in this approximation is mainly of theoretical interest, because other operational variables can also introduce errors into the computations. Finally, as the price of copper is not completely predictable, and as large mining operations are constantly building new heap systems, this approach can be used online with the economic variables that affect the profits. Acknowledgments The authors wish to thank CONICYT for its support through Fondecyt Project 1090406.
References
A. Hanson, B. Dwyer, Z. Samani, D. York, 1993, Remediation of chromium-containing soils by heap leaching: column study, Journal of Environmental Engineering, 119(5), 825-841.
M. Mellado, E. Gálvez, L. Cisternas, 2010a, On the optimization of flow rates on copper heap leaching operations, International Journal of Mineral Processing, submitted.
M. Mellado, M. Casanova, L. Cisternas, E. Gálvez, 2010b, On scalable analytical models for heap leaching, Computers and Chemical Engineering, in press.
G. Padilla, L. Cisternas, J. Cueto, 2008, On the optimization of heap leaching, Minerals Engineering, 21, 673-678.
W. Pennstrom, J. Arnold, 1999, Optimizing heap leach solution balances for enhanced performance, Minerals and Metallurgical Processing, 16(1), 12-17.
J. Valencia, D. Mendez, J. Cueto, L. Cisternas, 2008, Saltpeter extraction and modelling of caliche mineral heap leaching, Hydrometallurgy, 90, 103-114.
A data mining approach for efficient systems optimization under uncertainty using stochastic search methods
Garyfallos Giannakoudis,a Athanasios I. Papadopoulos,a Panos Seferlis,a,b Spyros Voutetakisa
a Chemical Process Engineering Research Institute, Centre for Research and Technology-Hellas, 6th km Harilaou-Thermi Road, 57001, Thessaloniki, Greece
b Department of Mechanical Engineering, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece
Abstract This work presents a novel approach for efficient systems design under uncertainty that uses data mining and model fitting methods during optimization to significantly reduce the associated computational effort. The proposed approach is implemented as part of a modified stochastic annealing algorithm, but remains independent of the employed optimization method. A numerical example and a case study on a stand-alone system for power generation from renewable energy sources are used to illustrate the merits of the developments. The obtained results indicate robustness and efficiency in terms of solution quality and computational performance, respectively. Keywords: Systems optimization, uncertainty, data mining, Stochastic Annealing, renewable energy.
1. Introduction Stochastic search methods such as Stochastic Annealing (StA) and Stochastic Genetic Algorithms (SGA) [1-4] have been proposed in recent years to address the optimization of process systems under uncertainty. The underlying algorithmic philosophy employed to treat uncertainty involves the use of probability distributions to generate samples which are introduced individually into the simulation of system models. This enables the emulation of effects caused by the uncertain parameters in the addressed optimization problem. Clearly, the number of utilized samples is of crucial importance. Large numbers of samples are required to maintain a realistic representation of the uncertain parameter distribution, but at the expense of reduced computational efficiency. This is due to the increased computational effort required to simulate the effects of each sample through the employed system model during optimization. This major issue has been previously addressed [2, 3] by employing efficient sampling techniques and strategies that allow a variable sampling schedule throughout the optimization procedure. Fewer samples are allowed at initial optimization iterations, which are then increased significantly as the algorithm gradually proceeds to
termination. However, their utilization often requires significant computational effort, as the random selection of a large number of samples is not prevented, even at initial optimization iterations. Furthermore, high numbers of samples towards termination still result in an increased computational burden for large-scale problems involving detailed system models and combinatorial complexities.
2. Proposed method This work proposes the combined use of data mining and model fitting in the course of optimization to enable efficient management of the sampling procedure employed to treat the considered uncertain parameters. Figure 1 illustrates the proposed approach as an extensive modification of the StA algorithm [1-3]. The novel algorithmic sequence is highlighted within the dashed frame. The implemented modifications are independent of the employed optimization algorithm, as they do not intervene with decision-making operations that are distinctive of particular algorithms. While Hammersley sampling is employed in this work, any other sampling technique can also be utilized.
Figure 1. Proposed data mining method as part of a modified Stochastic Annealing algorithm.
2.1. Description Initially, a clustering method is used to generate k=1,Nclust coherent groups (clusters) of similar points out of the entire set of the selected sampling points i=1,Nsamp, used for the representation of the uncertain parameter vector (u). Statistical cluster centers (uk) are then calculated for each group, which lie in close proximity to the entire data contained in each cluster. As a result, each cluster center can be considered a valid representative for all the data (sampling points) contained in the cluster. Subsequently all cluster centers, instead of all available sampling points, are introduced to simulations using a system model to calculate the objective function value OF(xm,uk) that corresponds to
each center (uk) (where xm represents the vector of decision variables). In this respect, the available cluster center points (independent parameters) are then used in conjunction with their corresponding objective function values (dependent parameters) to calculate the regression coefficients of a continuous model. This model represents the employed OF(xm,uk) as a mathematical function of the cluster centers [OF(xm,uk)=f(uk)]. The objective function values OF(xm,ui) that correspond to the remaining sampling points (ui), contained in each cluster, can now be calculated using the developed predictive model, hence avoiding the time-consuming simulations based on the system model. 2.2. Implementation details The proposed approach enables the use of constantly large numbers of sampling points regardless of the size of the optimization problem addressed or the stage of the performed optimization search. The number of generated clusters is an important parameter that affects the performance of the method. A large number of clusters results in fewer points within each cluster. This enables an improved representation by the derived center of all the cluster points and results in accurate predictions from the regression model. However, increasing the number of clusters also results in further time-consuming simulations. An automated statistical method is used [5] to maintain the number of clusters considerably lower compared to the sampled set of uncertain parameter values, while facilitating accurate model predictions. The fitted model provides objective function value predictions that are either identical or lie within very close proximity to the values calculated through simulations. This is verified by use of the R2 coefficient of multiple determination, which is calculated in three steps. Firstly, predictions are obtained through the regression model for OF(xm,ui) values.
Subsequently, the predictions are used to replace their corresponding sampling points (ui) that exist within each one of the original clusters. Finally, a new cluster center is derived for each cluster based on the objective function values (and not the sampled points as previously). This center represents the predicted objective function values that lie within each cluster. If it is similar to the objective function values obtained through model simulations for each corresponding cluster center, then the regression model provides accurate predictions. This similarity is measured through R2. The number of regression terms employed in the model is derived through statistical F-tests for model adequacy, also used to evaluate the correctness of R2. 2.3. Numerical example The proposed method is illustrated through a numerical example that employs the following cost model (details available in [1]):

OF1(y1, y2, y3, u1, u2) = Σ_{i=1..3} [ (y1 - 3)² + (u1 + y2,i - 3)² + (u2 + y3,i - 3)² ]    (1)
Terms y1, y2, y3 represent the decision variables. The uncertain parameters u1 and u2 follow the probability distributions shown in Table 1, which also shows the clustering ranges considered and the employed regression model. The regression coefficients ai (i=1,...,6) are recalculated in each algorithmic iteration. In all cases the performance of the StACMF algorithm (StA with clustering and model fitting) is compared with an
adaptation of StA developed in this work. Their comparative performance is measured based on the ratio of the number of simulations performed by the two algorithms (NStACMF/NStA) to achieve optimality. The number of allowed samples is constantly 150 for StACMF, while sampling for StA is allowed to vary in the range [20, 150].
Table 1: Data and optimization-computational performance results for the numerical example
Case | u1 | u2 | Clustering range | Optimum values for (y1),(y2),(y3) | R2 | Performance ratio
1 | N(0,2) | N(0,2) | 25-35 | (3),(3,3,3),(3,3,3) | >0.999 | 0.39
2 | N(0,2) | N(0,2) | 15-25 | (3),(3,3,3),(3,3,3) | >0.999 | 0.26
3 | N(0,2) | U(1.5,3) | 20-30 | (3),(3,3,3),(1,1,1) | >0.996 | 0.28
Regression model: OF(u1,u2) = a1 + a2 u1 + a3 u2 + a4 u1 u2 + a5 u2² + a6 u1 u2²
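The StACMF inner loop (cluster the samples, simulate only the cluster centers, fit a regression model, predict the rest) can be sketched end-to-end on this cost model. Several simplifying assumptions are made here: the clustering is a minimal k-means rather than the automated method of [5], the basis is the Table 1 regression model augmented with a u1² term so that this particular cost function can be fitted exactly, and the decision variables are fixed at the Case 1 optimum.

```python
import math
import random

random.seed(0)

# Decision variables fixed at the Case 1 optimum: (3),(3,3,3),(3,3,3)
y1, y2, y3 = 3.0, (3.0, 3.0, 3.0), (3.0, 3.0, 3.0)

def of1(u1, u2):
    """Eq. (1) at fixed decisions; stands in for an expensive simulation."""
    return sum((y1 - 3) ** 2 + (u1 + y2[i] - 3) ** 2 + (u2 + y3[i] - 3) ** 2
               for i in range(3))

def kmeans(pts, k, iters=25):
    """Minimal k-means clustering (stdlib only)."""
    centers = random.sample(pts, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            groups[min(range(k), key=lambda j: math.dist(p, centers[j]))].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[j] for j, g in enumerate(groups)]
    return centers

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (normal equations)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def basis(u1, u2):
    """Table 1 regression terms plus u1**2 (an added assumption)."""
    return [1.0, u1, u2, u1 * u2, u2 ** 2, u1 * u2 ** 2, u1 ** 2]

samples = [(random.gauss(0, 2), random.gauss(0, 2)) for _ in range(150)]  # Nsamp
centers = kmeans(samples, 30)               # Nclust = 30, within the 25-35 range
y_obs = [of1(*c) for c in centers]          # only 30 "simulations" instead of 150

X = [basis(*c) for c in centers]            # least squares via normal equations
n = len(X[0])
XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
Xty = [sum(r[i] * yv for r, yv in zip(X, y_obs)) for i in range(n)]
coef = solve(XtX, Xty)

def predict(u1, u2):
    """Cheap surrogate for OF(xm, ui) at any sampling point."""
    return sum(a * b for a, b in zip(coef, basis(u1, u2)))
```

The surrogate then replaces the system model for the 120 sampling points that were never simulated, which is exactly where the reported 0.26-0.39 simulation ratios come from.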
In all three cases the two algorithms found the same optimum solution. The obtained results indicate that StACMF is significantly faster, as the number of required simulations is only a small fraction of those required by StA. The value of R2 is very high in all cases, indicating that the employed model provides accurate predictions. The minor inaccuracies in the predictions (R2 [...]

[...] > 1.27 | A/C, C/E, D/E
Liquid Membrane | Radius of Gyration > 1.03, Molar Volume > 1.08, Solubility Parameter > 1.28 | A/B, A/D, A/E, B/C, B/D, C/D, C/E, D/E
Pervaporation | Molar Volume > 1.08 | A/B, A/D, A/E, B/C, B/D, C/D, C/E, D/E
Distillation | Vapor Pressure > 15 | A/C, B/C
Flash | Vapor Pressure > 15 | A/C, B/C
Table 2. Initialized process groups.
Unit Operation | Process Group
Kinetic Model Based Reactor | rAE/pABCDE, rpervAE/pB/ACDE
Crystallization | crsE/DBCA, crsE/DCA, crsDBC/A, crsE/DC, crsDC/A
Liquid Membrane | lmemCDEA/B, lmemCDE/A, lmemCDA/B, lmemCD/E, lmemCD/A, lmemCD/B, lmemC/D, lmemA/B
Pervaporation | pervCDEA/B, pervCDE/A, pervCDA/B, pervCD/E, pervCD/A, pervCD/B, pervC/D, pervA/B
Distillation | AB/CDE, AB/CD, A/CDE, A/CD, B/CD
Flash | fAB/CDE, fAB/CD, fA/CDE, fA/CD, fB/CD
Computer Aided Flowsheet Design using Group Contribution Methods
325
The mixture analysis also reveals the existence of two binary azeotropes (water/diethyl succinate and water/ethanol). Therefore, azeotropic distillation, extractive distillation, and liquid-liquid extraction might also be potential separation techniques to be considered in the synthesis problem. However, pervaporation and the liquid membrane selectively remove water from the mixture, thus alleviating the need for azeotropic separation. A pervaporation-assisted reactor is found to be an efficient configuration for esterification reactions. After the initial analysis, 103 process groups were initialized and, by applying the rules to remove structurally and practically infeasible flowsheets, the number of PGs was reduced to the 33 shown in Table 2. A total of 176 feasible flowsheets were identified from the candidate process groups and represented by the corresponding SFILES notation. The energy index flowsheet property was calculated for all candidate configurations, and the two SFILES strings with the lowest value of the energy index (0.051) are shown below. The first configuration consists of a reactor and four separation units: pervaporation, distillation, crystallization and liquid membrane. The second configuration involves a pervaporation reactor and three separation tasks: distillation, crystallization and liquid membrane.

1. (iAE)(rAE/ABCDE)(pervCDEA/B)[(A/CDE)[(crsE/DC)[(oE)](lmemC/D)[(oC)] (oD)](oA)](oB)
2. (iAE)(rAE/pB/ACDE)[(A/CDE)[(crsE/DC)[(oE)](lmemC/D)[(oC)] (oD)](oA)](oB)

It is assumed that the membranes exhibit very high selectivity, thus leading to a near-perfect separation and recovery. The reverse simulation of the distillation column using the driving force approach yielded a design operating at a maximum driving force of 0.85, corresponding to a column with 15 stages (feed location 13.5) and a reflux ratio of 0.552 (minimum 0.368).
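The reverse simulation relies on locating the maximum of the binary driving-force curve; for a constant relative volatility this can be sketched as follows (the value of alpha is an arbitrary illustration, not a property of the actual mixture):

```python
import numpy as np

def driving_force(x, alpha):
    """Binary driving force FDi = y - x for constant relative volatility."""
    y = alpha * x / (1.0 + (alpha - 1.0) * x)
    return y - x

alpha = 2.5                      # assumed relative volatility (illustrative)
x = np.linspace(0.0, 1.0, 1001)  # light-key liquid mole fraction
df = driving_force(x, alpha)
i = int(np.argmax(df))
# For constant alpha the analytical maximum lies at x = 1/(1 + sqrt(alpha)).
print(f"max driving force {df[i]:.3f} at x = {x[i]:.3f}")
```

The feed stage and reflux ratio of the reverse-designed column follow from the composition of maximum driving force (Bek-Pedersen and Gani [3]).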
For the two feasible flowsheets selected for final verification, one has already been verified by Alvarado-Morales et al. (2010) [4], while the other, involving the pervaporation-assisted reactor, is currently being verified.
4. Conclusions In this paper a novel systematic framework for synthesis of flowsheets based on a process group contribution method has been presented. Representing each process configuration using these unique process groups significantly reduces the computational load as no detailed calculations are required during the synthesis step. Once identified, the candidate flowsheets are ranked using overall performance indicators (flowsheet properties) like energy consumption, cost/profit etc. Reverse simulation techniques are then employed to identify the design parameters of the optimal flowsheets, which are used as initial estimates for rigorous simulation of the design.
References
[1] L. d'Anterroches, R. Gani (2005), Fluid Phase Equilibria, 228-229, 141-148.
[2] C.A. Jaksland, R. Gani, K.M. Lien (1995), Chemical Engineering Science, 50(3), 511-530.
[3] E. Bek-Pedersen, R. Gani (2004), Chemical Engineering and Processing, 43, 251-262.
[4] M. Alvarado-Morales, M.K.A. Hamid, G. Sin, K.V. Gernaey, J.M. Woodley, R. Gani (2010), Computers and Chemical Engineering, 34, 2043-2061.
[5] D. Glasser, C.M. Crowe, D. Hildebrandt (1987), Ind. Eng. Chem. Res., 26, 1803-1810.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Business Process Model for Process Design that Incorporates Independent Protection Layer Considerations
Tetsuo Fuchino(a), Yukiyasu Shimada(b), Teiji Kitajima(c), Kazuhiro Takeda(d), Rafael Batres(e), Yuji Naka(f)
(a) Chemical Engineering Department, Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan
(b) Chemical Safety Research Group, National Institute of Occupational Safety and Health, 1-4-6, Umezono, Kiyose, Tokyo, 204-0024, Japan
(c) Institute of Technology, Tokyo University of Agriculture and Technology, 2-24-16, Naka-cho, Koganei, Tokyo, 184-8588, Japan
(d) Department of Materials Science and Chemical Engineering, Shizuoka University, 3-5-1, Johoku, Naka-ku, Hamamatsu, 432-8011, Japan
(e) Industrial Systems Engineering Group, Toyohashi University of Technology, Hibarigaoka 1-1, Tempaku-cho, Toyohashi, 441-8580, Japan
(f) Chemical Resources Laboratory, Tokyo Institute of Technology, 4259, Nagatsuda, Midori-ku, Yokohama, 226-8503, Japan
Abstract
The purpose of independent protection layers (IPLs) is to prevent the occurrence of hazardous events by designing protective systems against any failure sequences that might lead to a significant hazard. Therefore, it becomes necessary to make sure that the process hazard analysis identifies all potential failure sequences, so that the IPLs can be designed in a robust way consistent with the identified failure sequences. However, engineers are not always conscious of the whole workflow of process and plant design. Furthermore, the analysis that precedes the IPL design and the IPL design itself are not incorporated in a systematic way into the process and plant design. This is also related to the lack of design rationale in the design of safety systems, resulting in alarm floods and even more serious problems. This paper presents a business process model for process design that incorporates the notion of independent protection layer design.
Keywords: independent protection layer, process hazard analysis, business process model, process safety design, IDEF0
Business Process Model for Process Design being Conscious of Independent Protection Layer   327
1. Introduction
Chemical processes have potential hazards and are designed to prevent a given potential hazard from evolving into an incident. In general, the potential hazard is controlled within the safe region of the normal operating modes. However, some initiating events which exceed the control capabilities of the normal operating limits cause abnormal process deviations and hazardous events that can lead to human and physical damage. An independent protection layer (CCPS, 2001) is aimed at preventing incidents by protecting against a particular type of hazardous event. The most commonly encountered independent protection layers that should be considered during process and plant design are: (1) inherently safer process design; (2) basic control system, process alarms and operator supervision; (3) critical alarms, operator supervision, and manual intervention; (4) automatic Safety Interlock System (SIS); (5) physical protection (relief devices); (6) physical protection (containment dikes); (7) facility emergency response; (8) community emergency response. In a typical plant engineering project, the analysis that precedes the IPL design and the IPL design itself are not incorporated in a systematic way into the process and plant design. This is also related to the lack of design rationale in the design of safety systems, resulting in alarm floods (ISA, 2007), which occur when alarm rates exceed 10 alarms in 10 minutes (often reaching the hundreds) and in which important alarms are likely to be missed. Basically, alarm floods can be avoided if the process hazard analysis identifies all potential hazard scenarios and the independent protection layers are designed in a systematic way that is consistent with the identified hazard scenarios. A variety of operations take place during the life-cycle of the plant. These include initial startup, partial startup, restart, startup after turnaround, normal shutdown, emergency shutdown, and partial shutdown.
During process and plant design, engineers must address the requirements imposed by these operations in an integrated way. Otherwise, design decisions and specifications that work well for one kind of operation may negatively affect the efficacy and efficiency of another. Similarly, the design of an alarm that neglects the limits at which the SIS operates risks defeating the original intention of the alarm, which is to "attract the attention of the plant operator to significant changes that require an assessment or action" (EEMUA, 2007). On the other hand, if an alarm is required by the plant but no automatic SIS is available, then the design would have to be modified in order to give the operator more time to respond. However, it is often the case that the process designer designs the basic control system, process alarms, critical alarms, SIS and relief devices without an integrated approach to the design and specification of the independent protection layers. Furthermore, when a process hazard analysis performed after completion of the process design requires additional process alarms, the lack of rationalization and documentation of the independent protection layers can result in operational problems such as duplicated alarms or inconsistent alarm settings that potentially lead to alarm flood situations. In this paper, a business process model for process design being conscious of the independent protection layer is developed. IDEF0 (Integration Definition for Function Modeling) (NIST, 1993) is adopted for the business process modeling. In this model, the process hazard analysis and the independent protection layer design are explicitly integrated, so that the IPLs can be designed in a robust way consistent with the identified hazard scenarios. Consequently, documentation and rationalization of the protection layer design become possible, resulting in better alarm management.
328
Fuchino et al.
2. Previous IDEF0 Model for Process and Plant Design
There are several reports of business process models representing process design (Fuchino et al., 2004; Sugiyama et al., 2006). However, these efforts focus on the development phase of process design, and the design of safety protection systems is not considered. PIEBASE (Process Industry Executive for achieving Business Advantage using Standards for data Exchange) was an international consortium formed to achieve a common strategy and vision for the delivery and use of internationally accepted standards for information sharing and exchange (ISO-STEP); it developed a business process model to represent the core business activity of the chemical process industry (PIEBASE, 1998). The PIEBASE activity model uses a template approach across all principal activities. This template consists of three steps: (1) manage, (2) do and (3) provide resources. Fig. 1 (Node Tree of PIEBASE Activity Model) shows the part of the node tree concerning safety design and process hazard analysis, extracted from the PIEBASE activity model. In this model, "A-0: Conduct Core Business" is developed into "A1: Manage the Business", "A2: Acquire Input", "A3: Create Product", "A4: Sell Output" and "Provide Supporting Resources". The process plant is one of the physical assets to be provided for the "A3: Create Product" activity. Designing and engineering the process plant is divided into two phases, "in concept" and "in detail", and safety design and engineering are performed through these two phases in "A554224: Produce Conceptual Safety Engineering Designs" and "A5542326: Design Infrastructural and Safety, Health, and Environmental Protection Systems". The information on the designed and constructed process plant becomes the mechanism information for the A3 activity, and the result of safety design and engineering is evaluated in the "A315: Assess Safety, Health, and Environmental Protection for Performing Production" activity.
The purpose of the PIEBASE activity model is to provide a common understanding of engineering and information requirements during the different activities that occur during the life-cycle of a plant. However, the activities in the model were defined so as to reflect current practice. Therefore, the activity model fails to address the integration between the IPL design and the different process hazard analyses; process hazard analysis is applied only as a crosscheck of the resulting design.
3. Improved IDEF0 Model
This is an IDEF0 activity model for process design that incorporates independent protection layer design. Similar to the PIEBASE activity model, the improved IDEF0 model is based on a template. However, the template has been extended to five types of sub-activities, i.e. "Manage", "Plan", "Do", "Evaluate" and "Provide Resources" (Fuchino et al., 2010). The proposed template (Fig. 2: Node Tree from the A-0 Activity) is applied across all principal activities, and the lifecycle engineering viewpoint is adopted. Fig. 2 shows a part of the node tree from "A-0: Perform LCE" as the top activity. This is a lifecycle model that follows the systems engineering organization of definition, development, and deployment. The process design activity consists of three phases (conceptual, preliminary and final), and the plant design is composed of two phases (preliminary and final). The conceptual process design phase (activity A33) corresponds to the inherently safer process design in IPL, including hazard elimination and substitution, inventory considerations, and plant location; the preliminary process design phase is related to the design of IPLs (2) to (6). In the "A34: Develop Preliminary Process Design" activity, the process is designed according to the operational requirements of normal, abnormal and emergency operations. In designing the process for normal steady-state operation (A343), basic process control is designed, so the safe operating ranges should be assessed in A3432 before activity A3433. "Develop Preliminary Process Design for Startup and Shutdown" (A344) evaluates the current plant design to verify that all the equipment necessary to perform startup and shutdown is available. As a result, preliminary operating procedures are obtained along with information on operating limits and time-related data, which can be used to configure state-based alarm algorithms that detect when the plant changes operating state and dynamically modify the alarm settings to conform to the proper settings for each state (Hollifield, 2007). The synthesis of startup and shutdown operations takes place in activity A3442. Fig. 3 (Development of Node A3442) shows the further development of activity A3442. To specify initial conditions and safety constraints in A34423, the hazardous conditions should be assessed in A34422. Fig. 4 shows the further development of "A345: Develop Preliminary Process Design for Abnormal Situation". In order to determine the operation category (fallback, partial shutdown or total shutdown), process hazard analysis is necessary (A34522). This
is because hazard analysis is used to identify possible hazard scenarios and the resulting recommendations for additional sensors, alarms or other IPLs, some of which are addressed in activity A34523. Furthermore, because hazard scenarios contain information about causes, consequences, and corrective actions, they can also be used to justify the design rationale for a given alarm. In addition, operational responsibility should be estimated in A34523, and the operation category is to be decided in A34524. The activities to perform process hazard analysis are depicted in Figs. 2, 3 and 4. It becomes clear that process hazard analysis is necessary for the protection layer design.
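The "assess before design" precedence relations stated above can be encoded and checked mechanically; the sketch below is a hypothetical encoding using the node numbers from the text, not part of the IDEF0 standard itself:

```python
# Hypothetical encoding of the precedence constraints named in the text:
# assessment activities must complete before the design activities that
# consume their results. Node IDs follow the paper's IDEF0 numbering.
PRECEDENCE = {
    "A3433": ["A3432"],              # assess safe operating ranges first
    "A34423": ["A34422"],            # assess hazardous conditions before
                                     # fixing initial conditions/constraints
    "A34524": ["A34522", "A34523"],  # hazard analysis and operational
                                     # responsibility before deciding the
                                     # operation category
}

def valid_order(order, precedence=PRECEDENCE):
    """True if every activity appears after all of its prerequisites."""
    pos = {a: i for i, a in enumerate(order)}
    return all(pos[p] < pos[a]
               for a, pre in precedence.items() if a in pos
               for p in pre if p in pos)

print(valid_order(["A3432", "A3433", "A34422", "A34423",
                   "A34522", "A34523", "A34524"]))  # True
```

A workflow that, for example, decided the operation category (A34524) before the hazard analysis (A34522) would fail this check, which is precisely the integration gap the model is meant to close.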
4. Conclusion
This paper describes an IDEF0 activity model for process design that takes the independent protection layers into account. It is clear that process hazard analyses and independent protection layer design should be performed concurrently to generate a rationalized process safety design, including the design of process and critical alarms together with operator responsibilities. For example, the proposed model ensures that every alarm or other protection layer is consistent with the required operator actions and response times. The model can also be used to provide the specific information (trip points, causes, consequences, corrective actions) that justifies a given alarm.
References
CCPS, 2001, "Layer of Protection Analysis," New York: American Institute of Chemical Engineers, Center for Chemical Process Safety.
EEMUA, 2007, "Alarm Systems - A Guide to Design, Management and Procurement," Engineering Equipment and Materials Users Association.
Fuchino, T., T. Wada and M. Hirao, 2004, "Acquisition of Engineering Knowledge on Design of Industrial Cleaning System through IDEF0 Activity Model," Proceedings of Knowledge-Based Intelligent Information and Engineering Systems, pp 418-424.
Fuchino, T., Y. Shimada, T. Kitajima and Y. Naka, 2010, "Management of Engineering Standards for Plant Maintenance based on Business Process Model," Proceedings of 20th European Symposium on Computer Aided Process Engineering, pp 1363-1368.
Hollifield, B. R., E. Habibi, 2007, Alarm Management - Seven Effective Methods for Optimum Performance, ISA.
ISA, 2007, "Alarm Management: Seven Effective Methods for Optimum Performance," North Carolina: Instrumentation, Systems and Automation Society.
NIST, 1993, "Integration Definition for Function Modeling," Federal Information Processing Standards Publication 183, http://www.itl.nist.gov/fipspubs/idef02.doc, National Institute of Standards and Technology.
PIEBASE, 1998, "PIEBASE Activity Model", http://www.posc.org/piebase/, Process Industries Executive for Achieving Business Advantage using Standards for Data Exchange.
Sugiyama, H., M. Hirao, R. Mendivil, U. Fischer and K. Hungerbuhler, 2006, "A Hierarchical Activity Model of Chemical Process Design based on Life Cycle Assessment", Trans IChemE, Part B, Proc. Saf. Environ. Prot., 84(B1), pp 63-74.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Conceptual design of glycerol etherification processes Elena Vlad, Costin Sorin Bildea, Elena Zaharia, Grigore Bozga University Politehnica Bucharest, Department of Chemical Engineering, Polizu 1-7, 011061-Bucharest, Romania.
Abstract The feasibility of an industrial-scale, acid-catalyzed process for etherification of glycerol with i-butene is analyzed. A simplified mass balance of the process is derived using a kinetic model for the reactor and black-box model for the separation section. Sensitivity analysis of the steady state model shows that the system exhibits both state multiplicity and regions where no solution exists. The nominal operating point is chosen to avoid high sensitivity to disturbances and to guarantee feasibility when operation and design parameters are uncertain. The stability and robustness in operation are checked by rigorous dynamic simulation. Keywords: Glycerol etherification, design, control, dynamic simulation.
1. Introduction
Glycerol is obtained as a by-product of biodiesel production, in amounts equivalent to approximately 10 wt.% of the total product. Di- and tri-ethers of glycerol are compounds soluble in diesel and biodiesel, improving the quality of the fuel and therefore being interesting alternatives to commercial oxygenate additives. Reaction of i-butene with glycerol in the presence of homogeneous [1] or heterogeneous [2,3] acid catalysts yields a mixture of mono-, di-, and tri-tert-butyl glycerol ethers. Conceptual processes which could perform this reaction are described in references [1,4,5,6]. Here, we present the design of a glycerol etherification plant processing a nominal flow rate of 2 kmol/h of glycerol, assumed to be the by-product of a 15,000 tonne/year biodiesel plant. We focus on operating conditions leading to high selectivity towards the di-ether, 0.9 being a typical value. The robustness in operation is also considered.
2. Conceptual design
2.1. Reactor-separation-recycle model
A simplified model of the plant is used to choose a nominal operating point together with the plantwide control structure. Fig. 1 (left) presents the Reactor–Separation–Recycle structure of the plant [7, 8].

Fig. 1 – Reactor-Separation-Recycle structure of the glycerol etherification plant (left) and the principle of two different plantwide control structures
332
E. Vlad et al.
To investigate the steady-state behaviour of the etherification plant, two control structures are considered (Fig. 1). In both control structures the flow rate of fresh glycerol is set to the value FG,0. Control structures CS1 and CS2 differ by the second flow specification: fresh i-butene (FI,0) and the ratio r0 = FI,0/FG,0, respectively. For each control structure, the mathematical model of the Reactor-Separation-Recycle system is solved. The conversions of glycerol and i-butene (XG and XI, respectively) are plotted versus the flow rate of fresh glycerol (FG,0). For a fixed value of the fresh i-butene flow rate FI,0 (CS1) the model has a feasible solution (positive flow rates) only for a certain range of the fresh glycerol flow rate FG,0. Fig. 2 displays the results obtained for an etherification plant employing a reactor of 1 m3. When the flow rate FG,0 is set to the upper limit, only di-ether is obtained as product. However, both glycerol and i-butene conversions approach zero and the recycle rates become infinite. This limit is independent of the reactor volume.

Fig. 2 – Control structure CS1: glycerol and i-butene conversions vs. fresh glycerol flow rate, for different values of the fresh i-butene flow rate.
Fig. 2 shows that, for given values of the model parameters, either zero or two steady states are possible. When they exist, the two states are characterized by similar values of the glycerol conversion, but very different values of the i-butene conversion. Moreover, a very high sensitivity of the i-butene conversion with respect to the glycerol flow rate is observed when high selectivity towards the di-ether is required. This implies that small disturbances will lead to large changes of the i-butene recycle rate, known as the "snowball effect". In conclusion, control structure CS1 offers the advantage of easily setting the ratio between the di- and tri-ethers by manipulating the reactant flow rates, but the operating points of high di-ether selectivity exhibit high sensitivity to disturbances and are dangerously close to the feasibility limit. Fig. 3 presents the conversions versus the fresh glycerol flow rate, for different reactor volumes, when control structure CS2 is used. The system shows multiple steady states and a region of infeasibility. On the two solution branches, the values of the glycerol conversion are very close to each other.

Fig. 3 – Control structure CS2: glycerol and i-butene conversions vs. fresh glycerol flow rate, for different values of the reactor volume.
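The snowball effect and the feasibility limit described above can be reproduced with the simplest possible recycle model: a first-order reaction A -> P in a CSTR with full recycle of unconverted A (a schematic stand-in for the etherification kinetics; see the general analysis of recycle systems in [8]). The plant balance F·X(F) = F0, with per-pass conversion X = kV/(F + kV), gives F = F0·kV/(kV - F0), feasible only for F0 < kV:

```python
def recycle_flow(f0, k, v):
    """Recycle flow for A -> P (first order, rate constant k) in a CSTR of
    volume v with full recycle of unconverted A; f0 is the fresh feed.
    Returns None beyond the feasibility limit f0 >= k*v."""
    kv = k * v
    if f0 >= kv:
        return None              # no steady state: recycle grows without bound
    f = f0 * kv / (kv - f0)      # reactor feed from the plant balance
    return f - f0                # recycle = reactor feed - fresh feed

k, v = 1.0, 3.0                  # assumed rate constant and reactor volume
for f0 in (2.0, 2.2, 3.0):
    print(f0, recycle_flow(f0, k, v))
# A 10% feed increase (2.0 -> 2.2) raises the recycle from 4.0 to about
# 6.05, i.e. by roughly 50%: the snowball effect. At f0 = 3.0 = k*v the
# plant balance has no solution.
```

The toy model thus shows both behaviours observed for CS1: extreme sensitivity of the recycle to the fresh feed near the limit, and loss of feasibility beyond it.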
Conceptual design of glycerol etherification processes
333
Compared to control structure CS1, much larger glycerol flow rates can be processed. The same selectivity towards the di-ether, namely VD/G = 3 - FI,0/FG,0 = 0.9, is obtained at all operating points depicted in Fig. 3. It appears that the 1 m3 reactor allows increasing the amount of processed glycerol by 50% from the nominal value, up to 3 kmol/h.
2.2. Separation section
Depending on the temperature and composition, a mixture of glycerol, i-butene and glycerol ethers can exist in a single liquid phase or in two different liquid phases. Fig. 4 analyzes the liquid-liquid (L-L) equilibrium at 25 °C. Assuming that i-butene can be easily separated, the composition of the reactor-outlet stream (Fig. 4a, point M+D+G) falls in the single-phase region. This happens because the large amount of mono-ether increases the miscibility of glycerol and di-ether. However, the immiscibility can be exploited for separating the reactants from the products by mixing, in the L-L separator, the fresh glycerol with the reactor outlet. The mixture (point L) separates into a glycerol-rich phase (L1) and a DTBG-rich phase (L2). However, the DTBG-rich phase L2 contains significant amounts of MTBG.

Fig. 4 – Liquid-liquid equilibrium. (a) Liquid-liquid immiscibility occurs when the column bottom stream is mixed with fresh glycerol. (b) Addition of i-butene improves the separation of glycerol.
Fig. 4(b) shows glycerol distribution between the two liquid phases, starting from an equimolar glycerol - DTBG mixture with various amounts of MTBG and i-butene. It can be observed that i-butene has a favorable effect on the L-L equilibrium because it decreases the solubility of glycerol in the DTBG-rich phase. In conclusion, the separation of i-butene from the reaction mixture should be done after the L-L split.
3. Detailed design
This section presents details of the glycerol etherification process (Fig. 5). The flowsheeting software AspenPlus was used as a CAPE tool. The physical properties of glycerol, i-butene and water are available in the AspenPlus databank. The properties of the ethers were calculated using group contribution methods. The behaviour of the liquid phase was described by the NRTL activity model. The interaction parameters of pairs involving the ethers and glycerol or i-butene were taken from [1]; the remaining unknown interaction parameters were estimated using UNIFAC. Ideal mixing was assumed.
Glycerol etherification. The etherification of glycerol with i-butene takes place in a CSTR of 1 m3. The reaction temperature and pressure are set to 90 °C and 14 bar, respectively, at which the reaction mixture is liquid. When the same reactor-inlet flow rates were specified to the simplified (Reactor-Separation-Recycle) and Aspen models, identical results for the reactor-outlet stream were obtained.
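The binary form of the NRTL model used for the liquid phase can be sketched as follows (the tau values below are illustrative placeholders, not the fitted interaction parameters taken from [1]):

```python
from math import exp

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) from the binary NRTL model."""
    x2 = 1.0 - x1
    g12, g21 = exp(-alpha * tau12), exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return exp(ln_g1), exp(ln_g2)

# Large infinite-dilution activity coefficients (here gamma1 as x1 -> 0)
# are the signature of the partial miscibility exploited in the L-L split.
g1_inf, _ = nrtl_gamma(1e-12, tau12=2.5, tau21=3.0)
print(round(g1_inf, 1))
```

In the actual simulation the flowsheeting package evaluates these expressions internally; the sketch only illustrates why strongly non-ideal glycerol/ether pairs can split into two liquid phases.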
Fig. 5 – Flowsheet of the glycerol etherification plant
G-L-L separation. The composition of the reactor-outlet stream falls in the single-phase region, but two liquid phases are formed when fresh glycerol is added, as previously discussed. Therefore, G-L-L separation is possible. The temperature is reduced to 50 °C and the pressure is set to 1 bar. The cooling duty is 27.8 kW. 10% of the i-butene is found in the vapour stream "I3a" and recycled. The stream "G1+M1a" contains glycerol and mono-ether and is recycled. The liquid stream "2a" contains i-butene and ethers. Column C1 separates the i-butene (stream "I3b"). It has 9 theoretical stages with the feed on stage 3. The column has a partial condenser and is operated at atmospheric pressure. The column diameter is 0.2 m. The reflux ratio is set to 2. The reboiler duty is 79.6 kW and the condenser duty is 25 kW. Column C2 separates the di- and tri-ethers from the mono-ether. It has 50 theoretical stages with the feed on stage 25. The column has a total condenser and is operated under vacuum to avoid high temperatures in the bottom of the column. The column diameter is 0.8 m. The reflux ratio is set to 4. The reboiler duty is 191 kW and the condenser duty is 209 kW. Stream "M1b" contains 97% (mole fraction) mono-ether and some di-ether. Stream "4" contains 83% di-ether and 10.2% tri-ether (mole fractions).
4. Dynamics and control
From the viewpoint of steady-state behaviour, the design performed in the previous sections, together with control structure CS2, allows processing the nominal flow rate of glycerol and tolerates rather large disturbances. However, the analysis showed two coexisting steady states, which cannot both be stable. Moreover, the simplified model used in the previous section assumed perfect separation of the products from the unconsumed reactants, which is certainly not the case. Therefore, the dynamics of the plant must be considered in order to prove the stability of the operating point and the resiliency with respect to disturbances. To reach this goal, a dynamic model of the plant was built in AspenDynamics. Besides the control loops of CS2, standard control of the G-L-L separator and distillation columns was used. The controllers were tuned by a simple version of the direct synthesis method. Fig. 6 presents a sample of results obtained by dynamic simulation. Starting from the steady state, at time = 1 h, the glycerol flow rate was changed from 2 kmol/h to 2.2 kmol/h and 1.8 kmol/h, respectively.
Fig. 6 – Dynamic simulation results.
It can be seen that the nominal operating point is stable, and the plant achieves stable operation when disturbances are introduced.
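The direct synthesis tuning mentioned above has a simple closed form when the loop is approximated by a first-order-plus-dead-time process K·exp(-theta·s)/(tau·s + 1): a PI controller with Kc = tau/(K·(tau_c + theta)) and integral time tau_I = tau, where tau_c is the desired closed-loop time constant (a textbook form of the method; the numbers below are placeholders, not the identified dynamics of this plant):

```python
def direct_synthesis_pi(k, tau, theta, tau_c):
    """PI settings from the direct synthesis rule for a first-order plus
    dead-time process k*exp(-theta*s)/(tau*s + 1), with desired
    closed-loop time constant tau_c."""
    kc = tau / (k * (tau_c + theta))   # controller gain
    ti = tau                           # integral (reset) time
    return kc, ti

# Placeholder process parameters, for illustration only.
kc, ti = direct_synthesis_pi(k=2.0, tau=10.0, theta=1.0, tau_c=5.0)
print(f"Kc = {kc:.3f}, tauI = {ti:.1f}")
```

Choosing a larger tau_c detunes the loop (smaller Kc), trading disturbance rejection for robustness, which is the single knob that makes the method "simple" in practice.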
5. Conclusions
Production of glycerol ethers by etherification of glycerol with i-butene catalyzed by homogeneous acid catalysts is feasible. For a typical glycerol flow rate of 2 kmol/h, the reaction can be carried out in a CSTR of 1 m3. The reactant conversions are high and only small recycles are needed. The separation of the products from the unconsumed reactants can be achieved by a combination of a three-phase flash and two distillation columns. When one of the control structures considered in this work is applied, multiple steady states are possible and the flow rate of glycerol that can be processed is limited. For this reason, the behaviour of the plant was investigated by steady-state sensitivity analysis, which allowed selecting a robust control structure.
Acknowledgement The work has been funded by the Sectoral Operational Programme Human Resources Development 2007-2013 of the Romanian Ministry of Labour, Family and Social Protection through the Financial Agreement POSDRU/88/1.5/S/61178 and by CNCSIS – UEFISCSU, projects IDEI 1545/2008 – “Advanced modeling and simulation of catalytic distillation for biodiesel synthesis and glycerol transformation” and IDEI 1543/2008 – “A nonlinear approach to conceptual design and safe operation of chemical processes”.
References
1. Behr, A. and L. Obendorf, Development of a process for the acid-catalyzed etherification of glycerine and isobutene forming glycerine tertiary butyl ethers, Eng. Life Sci. Comm., 2, 185, 2003.
2. Klepáčová, K., D. Mravec, M. Bajus, tert-Butylation of glycerol catalysed by ion-exchange resins, Appl. Catal. A: General, 294, 141, 2005.
3. Klepáčová, K., D. Mravec, A. Kaszonyi, M. Bajus, Etherification of glycerol and ethylene glycol by isobutylene, Appl. Catal. A: General, 328, 1, 2007.
4. Versteeg, W.N., O. Ijben, W.N. Wernink, K. Klepacova, S. Van Loo, Method of preparing GTBE, WO 2009/147541 A1, 2009.
5. Gupta, V.P., Glycerine ditertiary butyl ether preparation, US 5476971, 1995.
6. Noureddini, H., Process for producing biodiesel fuel with reduced viscosity and a cloud point below 32 degrees Fahrenheit, US 6015440, 2000.
7. A. C. Dimian, C. S. Bildea, Chemical Process Design: Computer-Aided Case Studies, Wiley-VCH, 2008.
8. Bildea, C.S. and A.C. Dimian, Fixing flow rates in recycle systems: Luyben's rule revisited, Ind. Eng. Chem. Res., 42, 4578, 2003.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic Conceptual Design under Market Uncertainty and Price Volatility Davide Manca, Andrea Fini, Mirko Oliosi CMIC Department, Politecnico di Milano, 20133 MILANO, ITALY,
[email protected]
Abstract
The paper proposes and discusses a methodology to quantify the economic potential of a plant subject to market variability. Price fluctuations and market uncertainty are analyzed and modeled. The manuscript assesses the variability of raw material and product prices (i.e. hydrocarbons in the HDA process) with respect to a reference indicator (i.e. crude oil). Afterwards, it proposes a methodology to forecast a series of economic scenarios to quantify the feasibility of installing and running the HDA process (according to a dynamic interpretation of the economic potentials proposed by Douglas, 1988). Finally, the paper evaluates the statistical distribution of a dynamic economic potential to quantify the financial risk of investment in such a plant.
Keywords: Conceptual design; Price fluctuations; Market uncertainty; Econometric model.
1. Introduction
Douglas (1988) proposed, formalized and discussed the conceptual design of chemical plants based on a hierarchical methodology for the sequential optimization of economic potentials. Such potentials depend on investment and operating costs as well as on revenues from selling the main product(s) and possible byproduct(s). Nevertheless, the work of Douglas did not take into account the price variability of raw materials, products, and utilities that are subject to market demand. The conventional approach to conceptual design usually finds either a suboptimal or even an unrealistic solution because it neglects the dependency of the economic terms on the time-varying market oscillations. Ullmann's encyclopedia (2002) reports that running a hydrodealkylation (HDA) plant (Douglas, 1988), which produces benzene from toluene, can be either economically profitable or unprofitable according to the price variability of those compounds. Milmo (2004) reports that the mean running time of an HDA plant is about 40% of the theoretical operating time due to frequent periods when the toluene price is higher than the benzene price. Nonetheless, HDA plants are run all over the world.
2. Modeling the functional dependence of hydrocarbon prices
The commodity market dictates the price oscillations of both raw materials and products according to the law of supply and demand. With reference to hydrocarbons (e.g., toluene and benzene in the HDA process), it is reasonable to define crude oil as the reference indicator to model their price dynamics (i.e. market demand and economic fluctuations; see also Figure 1). A covariance analysis allows assessing the high correlation between the crude oil price and the benzene (toluene) price (i.e. the absence of significant time delay). In addition, a time series analysis of commodity prices shows the absence
of any seasonal nature, whilst the correlograms provide further evidence of the lack of time delay between the benzene (toluene) price and the crude oil quotation.
Figure 1 - Monthly economic fluctuations of benzene, toluene and crude oil prices in the 2005-2010 period.
In addition, the autocorrelograms are monotonically decreasing, which shows the manifest self-dependency of both benzene and toluene prices on the corresponding previous quotations (on a monthly basis). These bits of information allow proposing an autoregressive model with exogenous input (ARX), also known as an "autoregressive with distributed lag" model in econometrics terminology (Stock and Watson, 2003), whose structure is:

P_{x,i} = a_x + b_x P_{co,i} + c_x P_{x,i-1}    (1)
Figure 2 - ARX model of the benzene price and comparison with the real quotation. The dashed vertical line divides the left portion of data used to identify the econometric model from the right portion used for the cross-validation. A quite similar trend is also shown by toluene.
where P_{x,i} is the price of hydrocarbon x at time i (in our model, the i-th month) and P_{co,i} is the price of crude oil. By minimizing the sum of the squared differences between the real quotation and the model price of equation (1) over a given time interval, it is possible to evaluate the model parameters (a_x, b_x, c_x) of the HDA process for both benzene and toluene (see also Figure 2).
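The identification step reduces to an ordinary least-squares regression. A minimal sketch follows; the price series below are synthetic placeholders, not the quotations used in the paper:

```python
import numpy as np

def fit_arx(p_x, p_co):
    """Fit P_x[i] = a + b*P_co[i] + c*P_x[i-1] by ordinary least squares.

    p_x  : monthly hydrocarbon price series (e.g. benzene), length n
    p_co : monthly crude oil price series, length n
    Returns the parameters (a_x, b_x, c_x) of equation (1).
    """
    p_x, p_co = np.asarray(p_x, float), np.asarray(p_co, float)
    # Regressors: intercept, current crude price, lagged own price
    A = np.column_stack([np.ones(len(p_x) - 1), p_co[1:], p_x[:-1]])
    coef, *_ = np.linalg.lstsq(A, p_x[1:], rcond=None)
    return coef

# Toy usage with synthetic data (NOT the paper's quotations)
rng = np.random.default_rng(0)
p_co = 60 + np.cumsum(rng.normal(0, 2, 64))         # fake crude oil [$/bbl]
p_benzene = 10 + 0.8 * p_co + rng.normal(0, 1, 64)  # fake benzene [$/kmol]
a, b, c = fit_arx(p_benzene, p_co)
```

The same fit, applied separately to the benzene and toluene series, yields the two parameter sets used in the scenario generation below.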
3. Evolutionary scenarios of hydrocarbon prices
Once the dependency of hydrocarbon prices on the crude oil indicator has been established, it is time to propose and discuss a model to forecast possible and consistent scenarios of future hydrocarbon quotations. Figure 3 shows the weekly relative variations of the market quotations of crude oil in the five-year period 2005-2010.
Figure 3 - Weekly relative variations of crude oil price in the 2005-2010 period.
A statistical analysis of the relative variations shown in Figure 3 allows determining the stochastic nature of the phenomenon, represented by a Gaussian distribution with a slightly positive mean value, which indicates a bullish trend of the crude oil quotations. The limited values of the autocorrelogram of the variations of the crude oil price prove the absence of periodic phenomena and show the stochastic nature of the weekly fluctuations. These remarks allow defining the following dynamic model of the crude oil price:

P_{co,i} = P_{co,i-1} (1 + RANDN σ_co + μ_co)    (2)

where RANDN is a random number drawn from a normal distribution with mean zero and standard deviation one, and μ_co and σ_co are the mean and standard deviation of the weekly relative variations. Equation (2) describes a typical Markov process, i.e. a stochastic discrete-time process where the new state of the process depends only on the previous one (Häggström, 2002). Once the scenarios of the future prices of crude oil have been modeled, it is possible to determine from equation (1) the future scenarios of the hydrocarbon derivatives (e.g., benzene and toluene for the HDA process). The difference (i.e. error) between the ARX model and the real data (see also Figure 2) of hydrocarbon prices can be ascribed to the stochastic fluctuations of market quotations. This is supported by the Gaussian distribution of the errors, which has zero mean. Most of the absolute errors are below the 20% threshold (with the exception of the time period corresponding to the world financial crisis at the end of 2008, where the error is as high as 120% and, consequently, is rejected as an outlier). Given these considerations, the future prices of crude oil derivatives can be modeled by means of the following equation:

P_{x,i} = (a_x + b_x P_{co,i} + c_x P_{x,i-1}) (1 + RANDN σ_x)    (3)
where the stochasticity of future market quotations is accounted for by the RANDN σ_x term. Given a crude-oil scenario of future market quotations determined by equation
(2), it is possible to determine the corresponding scenarios of toluene and benzene prices from equation (3). Figure 4 shows one of the possible future scenarios for crude oil, toluene and benzene.
Figure 4 - One of the possible future scenarios of commodity market quotations in the five-year period 2010-2014.
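A joint scenario of this kind can be sketched as a simple Monte Carlo recursion over equations (2) and (3); all numeric parameter values below are illustrative assumptions, not the identified ones:

```python
import numpy as np

def simulate_scenario(p_co0, p_x0, a, b, c, mu_co, sigma_co, sigma_x, n, rng):
    """Generate one joint future scenario of crude-oil and derivative prices.

    Crude oil follows the Markov process of eq. (2):
        P_co[i] = P_co[i-1] * (1 + RANDN*sigma_co + mu_co)
    and the derivative follows the stochastic ARX model of eq. (3):
        P_x[i] = (a + b*P_co[i] + c*P_x[i-1]) * (1 + RANDN*sigma_x)
    """
    p_co = np.empty(n + 1)
    p_x = np.empty(n + 1)
    p_co[0], p_x[0] = p_co0, p_x0
    for i in range(1, n + 1):
        p_co[i] = p_co[i - 1] * (1 + rng.standard_normal() * sigma_co + mu_co)
        p_x[i] = (a + b * p_co[i] + c * p_x[i - 1]) * (1 + rng.standard_normal() * sigma_x)
    return p_co, p_x

# Illustrative parameters: weekly steps over five years (~260 weeks)
rng = np.random.default_rng(1)
p_co, p_benzene = simulate_scenario(
    p_co0=80.0, p_x0=75.0, a=5.0, b=0.6, c=0.3,
    mu_co=0.001, sigma_co=0.04, sigma_x=0.05, n=260, rng=rng)
```

Repeating the call with independent random draws produces the set of scenarios whose statistics are analyzed in the next section.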
4. Dynamic economic potentials
The opportunity of retrofitting the HDA process, by installing an energy production section that can be run alternately with the feed-effluent heat exchanger to maximize a dynamic economic potential, was discussed by Manca and Grana (2010) as a function of hourly fluctuations of the electric energy price within a given day. This paper focuses on an extension of the classic economic potentials (as theorized by Douglas, 1988) to take into account the variability of the market prices of raw materials, utilities, products, and byproducts in terms of a suitable distribution of future scenarios. It is then possible to define a new set of dynamic economic potentials, DEP2, DEP3, DEP4, in accordance with Douglas' notation EP2, EP3, EP4:

Rev4_{i,k} [$/h] = max{0, Σ_{p=1..NP} C_{p,i,k} F_p − Σ_{r=1..NR} C_{r,i,k} F_r − C_{el,i,k} W_el − C_{st} F_{st} − C_{H2O} F_{H2O} − C_{fuel} F_{fuel}}    (4)

DEP4_k [$/y] = [Σ_{i=1..nMonths} Rev4_{i,k} nHoursMonth] / (nMonths/12) − [Σ_{e=1..nEquip} IC_e] / (nMonths/12)

where i is the month index; k the scenario index over a period of nMonths months; C are costs; F flow rates; W electric power; the subscripts p, r, el, st, H2O, fuel refer respectively to products, reactants, electric energy (pumps and compressor), steam and water (condensers and reboilers), and fuel (furnace); nHoursMonth is the number of operating hours in a month; IC are the investment costs of the nEquip process units. The
max function allows accounting only for positive Rev4 revenues (i.e. when the toluene and benzene prices make the process economically viable); otherwise the plant is switched off (for the i-th period) and only the negative term of the equipment investment is charged in the computation of DEP4. By simulating a series of possible scenarios for the future market prices of commodities and utilities, it is possible to get a forecast of the distribution of economic potentials over a given period (e.g., a five-year period, as shown in the left panel of Figure 5).
Figure 5 – Left panel: distribution of a set of 3,000 possible future scenarios for the DEP4 indicator (five-year period). Right panel: cumulative distribution of the DEP4 scenarios.
The cumulative distribution curve of the dynamic economic potentials (see also the right panel of Figure 5) allows comparing the HDA investment with alternative investments (e.g., financial assets), and also quantifying the corresponding risk of investment under uncertainty. Finally, it is possible to compare the economic potential distributions of different plant layouts and the profitability of alternative solutions in the field of energy allocation and exploitation (Manca and Grana, 2010).
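A sketch of how the DEP4 distribution and the resulting risk figure can be produced from simulated scenarios is given below. The flow rates, costs and price ranges are hypothetical, and the C_el, C_st, C_H2O and C_fuel terms of equation (4) are lumped into a single hourly figure for brevity:

```python
import numpy as np

def dep4_distribution(prices_tol, prices_ben, F_tol, F_ben, fixed_cost_h,
                      IC_total, n_hours_month=720):
    """Dynamic economic potential DEP4 [$/y] over a set of price scenarios.

    prices_tol, prices_ben : arrays (n_scenarios, n_months), monthly prices [$/kmol]
    F_tol, F_ben           : toluene feed / benzene product flow rates [kmol/h]
    fixed_cost_h           : lumped utility cost [$/h] (illustrative aggregation)
    IC_total               : total investment cost of the process units [$]
    """
    n_months = prices_tol.shape[1]
    # Hourly revenue, clipped at zero: the plant is switched off when unprofitable
    rev4 = np.maximum(0.0, prices_ben * F_ben - prices_tol * F_tol - fixed_cost_h)
    years = n_months / 12.0
    return (rev4.sum(axis=1) * n_hours_month - IC_total) / years

# 3,000 illustrative five-year scenarios (uniform draws, NOT the ARX model output)
rng = np.random.default_rng(2)
tol = rng.uniform(60, 100, size=(3000, 60))
ben = rng.uniform(55, 110, size=(3000, 60))
dep4 = dep4_distribution(tol, ben, F_tol=120.0, F_ben=110.0,
                         fixed_cost_h=500.0, IC_total=2.0e7)
# The empirical cumulative distribution then quantifies the investment risk,
# e.g. the probability of a negative dynamic economic potential:
risk_of_loss = np.mean(dep4 < 0.0)
```

In practice the price matrices would come from the ARX-based scenario generator of Section 3 rather than from the uniform draws used here.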
References
Milmo S., "Benzene prices in Europe escalate due to tight supply-demand", Chemical Market Reporter, 5, 1-2, (2004)
Douglas J. M., Conceptual Design of Chemical Processes, McGraw-Hill, New York, (1988)
Manca D., R. Grana, "Dynamic Conceptual Design of Industrial Processes", Computers and Chemical Engineering, 34, 5, 656-667, (2010)
Häggström O., Finite Markov Chains and Algorithmic Applications, Cambridge University Press, Cambridge, (2002)
Stock J.H., M.W. Watson, Introduction to Econometrics, Pearson Education, London, (2003)
Ullmann's Encyclopedia of Industrial Chemistry, Vol. 4, 6th edition, Wiley-VCH, (2002)
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Analysis of separation possibilities of multicomponent mixtures Laszlo Szabo, Sandor Nemeth, Ferenc Szeifert, University of Pannonia, Department of Process Engineering, Egyetem Str. 10, H-8200 Veszprém, Hungary
Abstract
Rectification is the most widely used fluid separation technology in the chemical industry. Improving distillation equipment and processes is important because of their prevalence and their huge energy needs. Recently, the systematic synthesis of separation sequences has developed significantly. This paper presents a case study of the separation of a multicomponent cracked gas. Those rules of separation sequence synthesis are emphasized that follow from the actual example and can be generalized. The generalized rules, and the separation structure developed by applying them, depend unambiguously on the thermodynamic properties of the multicomponent mixture and the specification of the products.
Keywords: separation of multicomponent mixtures, boiling point order (BPO), difference of boiling points (DBP), cracked gas
1. Introduction
Conventional distillation columns divide the feed stream into two products. Complex column configurations (side product, side stripper, etc.) can be decomposed into a sequence of conventional columns. Therefore the separation of a multicomponent mixture can be realized as a sequence or sequences of separation steps with two products. A separation with two products is unambiguously determined by the thermodynamic properties of the feed stream. Several methods are known for the determination of the best separation sequence [3, 7], for example:
• algorithmic approaches involving established optimization principles,
• heuristic methods based on rules of thumb,
• evolutionary strategies wherein improvements are systematically made to an initially created separation sequence, and
• thermodynamic methods involving applications of heat cascade principles.
In some cases two or more methods are combined in the process synthesis of a distillation system. A disadvantage of the algorithmic and evolutionary methods is that their application requires a special mathematical background and computational skill from the user. Although heuristic rules can be applied easily to determine the order of the separation sequence, unfortunately several heuristic rules contradict each other [8]. Table 1 shows the evolution of the heuristic rules [1, 2, 4]. In the application of heuristic methods it is often a problem to define the easiest and heaviest split. Through the analysis of binary mixtures we collected those important thermodynamic parameters and models which describe the separation best and can be applied in the case of multicomponent mixtures. The separability of binary mixtures is determined by the vapor-liquid equilibria (VLE), the bubble and dew point curves (TXY) and the difference of boiling points (DBP) of the pure components. The VLE shows the thermodynamic
limits of the separation. The DBP of the pure components also determines the difficulty of the separation. The relative volatility depends on the concentration of the mixture, therefore it cannot be used as a characteristic parameter for the description of the separation. Special coefficients are used to describe the difficulty of separation in heuristic rule systems. The most commonly used parameters are the coefficient of difficulty of separation (CDS) and the coefficient of ease of separation (CES) [5, 7]. The structure of this paper is the following: the applied rules are illustrated by the design of the separation structure of a cracked gas from an olefin plant, and then we discuss the heuristic rules which can be generalized. Aspen PlusTM software was applied for the calculations.
Table 1. Historical overview of the development of main heuristic rules
• Perform the easiest separation first (Harbert, 1957; Douglas, 1988)
• Perform equimolar splits (50/50) (Harbert, 1957; Heaven, 1969; King, 1971; Douglas, 1988)
• The heaviest separation last (Rudd, 1973; Douglas, 1988)
• First remove the most plentiful component (Nishimura in Hiraizumi, 1971; King, 1971; Rudd, 1971; Douglas, 1988)
• The cheapest separation first (Harbert, 1957; Rudd, 1973; Douglas, 1988)
• Perform separation with the lowest CDS (Nath in Mothard, 1981)
• Perform separation with the highest CES (Nadgir in Liu, 1983)
• Perform separation with the lowest energy index (Lien, 1983)
• Perform direct sequence (King, 1971)
• Perform sequence without non-key components (King, 1971; Gomez in Seader, 1976)
2. Separation of the cracked gas
A cracked gas from an olefin plant was chosen for the investigation of the separation system of multicomponent mixtures. The cleaned and cooled cracked gas consists of 36 components; the main compounds are methane (No. 3), ethylene (No. 4), propylene (No. 6) and n-octane (No. 12) [6].
Figure 1: Properties of the cracked gas

In order to determine the structure of the separation system, in the first step the components of the cracked gas were ordered according to the boiling points of the pure components. This order is called the boiling point order (BPO). In this case the BPO does not change with pressure. The components are numbered; the first compound is the lightest (hydrogen), while the last compound is the heaviest (1,3-butadiene). In the
second step, the concentrations and the DBP of the neighboring components were plotted (Fig. 1). In the third step, the products were defined step by step using Figure 1. A product can be one component or a mixture of neighboring compounds in the BPO. Economic and thermodynamic considerations have to be taken into account during the definition of the products; for example, the DBP of the components to be separated should be high enough and the products should be valuable from a market viewpoint. In our example both pure and mixture products were defined. The pure products are ethylene (No. 4), ethane (No. 5) and propylene (No. 6). The mixture products are the light gas fraction (hydrogen No. 1, CO No. 2, methane No. 3), the light C3 fraction (No. 6-8), the heavy C3 fraction (No. 6-9), the C4 fraction (No. 10-16), the C5 fraction (No. 17-29) and the heavy aromatic fraction (No. 30-36). These products are recovered in high purity (99%). The light and heavy C3 fractions carry the rest of the light and heavy components. The ethane and the light and heavy C3 products are not pure, therefore these mixtures can be fed back into the steam cracker. The fourth step of this method is the determination of the place of the first split and the specification of the separation. The components were divided into two groups by the place of the split (head and bottom products). Based on Figure 2, the first separation step was defined between the A = [1, 8] (light product) and B = [9, 36] (heavy product) groups, because:
• The components with high concentration are in product A. After the separation, a large product stream (which consists of a few components) and a small product stream (which consists of many components) arise.
• The feed stream is in the gas or vapor phase, from which it is necessary to condense product B. Product B is present in the smaller quantity in the feed stream.
• At the split the DBP is high enough, therefore the split is easy.
• There are components with low concentration at the border of the split (the overlap is expected to be small).
• The number of products will be comparable on the two sides (4 and 5).
The separation was performed by a shortcut method (Aspen Plus DSTWU unit), hence the key components had to be defined. The key component of product "A" was propylene (6), while the key of product "B" was 1,3-butadiene (12), and 99% recovery of both key components was specified, because:
• These components are close to the place of the split.
• The concentrations of these components are high enough.
• The DBP of these components is significant.
In the next separation step the head product of the first separation was separated further. No. 3 and No. 4 were defined as key components (Figure 3), because:
• This is a limit of the product specifications.
• The DBP of these components is significant.
• The concentrations of these components are high enough.
This separation was also performed by the shortcut method (Aspen Plus DSTWU unit), defining 99% recovery of the key components. In the other separation steps we applied the same principles (Figures 4-9). The place of the split was defined at small concentration and small DBP only when it was at the limit of the product specification (5th separation step). The recovery of the key components was 99% in all separation steps. Figure 11 shows the structure of the separation system, while Figure 10 shows the purity of the products.
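The first steps of the procedure (BPO ordering, neighbour DBP, split placement) can be sketched as follows. The five-component mixture and the weighting of the concentration penalty are illustrative assumptions, not the paper's 36-component cracked gas:

```python
import numpy as np

def choose_split(names, t_boil, mass_frac, min_dbp=10.0):
    """Heuristic split selection for a multicomponent mixture.

    (i) order components by boiling point (BPO), (ii) compute the difference
    of boiling points (DBP) between neighbours, (iii) place the split where
    the DBP is large enough and the concentrations of the components
    adjacent to the split are small.
    """
    order = np.argsort(t_boil)                   # boiling point order (BPO)
    tb = np.asarray(t_boil, float)[order]
    z = np.asarray(mass_frac, float)[order]
    dbp = np.diff(tb)                            # DBP of neighbouring components
    # Require a minimum DBP; penalize splits between abundant neighbours
    # (the factor 50.0 is an arbitrary illustrative weighting)
    score = np.where(dbp >= min_dbp, dbp - 50.0 * (z[:-1] + z[1:]), -np.inf)
    k = int(np.argmax(score))                    # split after position k in the BPO
    light = [names[i] for i in order[:k + 1]]
    heavy = [names[i] for i in order[k + 1:]]
    return light, heavy

# Illustrative 5-component mixture (not the cracked-gas composition)
names = ["methane", "ethylene", "ethane", "propylene", "n-butane"]
t_boil = [-161.5, -103.7, -88.6, -47.6, -0.5]    # normal boiling points [deg C]
mass_frac = [0.30, 0.35, 0.05, 0.20, 0.10]
light, heavy = choose_split(names, t_boil, mass_frac)
```

Applied recursively to each resulting product group, this mirrors the sequence of separation steps shown in Figures 2-9.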
Figure 2: 1st separation step
Figure 3: 2nd separation step
Figure 4: 3rd separation step
Figure 5: 4th separation step
Figure 6: 5th separation step
Figure 7: 6th separation step
Figure 8: 7th separation step
Figure 9: 8th separation step
Figure 10: Purity of the products
Figure 11: Structure of the separation system
3. Rules for the separation of multicomponent mixtures
The experience gained in the course of developing this case study can be generalized into the following rules:
• Besides applying the different indicators that measure the difficulty of separation, it is expedient to order the components of the mixture to be separated according to boiling point and to plot the concentration and the DBP of the components (before every separation step).
• With the help of this figure, taking market viewpoints into consideration, N product classes can be defined.
• With the help of this figure the place of the split and the key components can be determined.
• The place of the split is at the limit of a product, where the DBP is large enough and the concentrations of the components adjacent to the split are small.
• The key components are close to the place of the split and their concentrations are relatively large.
It is necessary to allow compromises during the design of the separation system of a multicomponent mixture, since mutually contradicting requirements are frequent. The fewer compromises needed, the better the resulting separation system.
4. Acknowledgment
László Szabó is grateful for the support of the PhD Fellowship of MOL Plc. The financial support from the TAMOP-4.2.2-08/1/2008-0018 (Livable environment and healthier people – Bioinnovation and Green Technology research at the University of Pannonia) project is gratefully acknowledged.
References
1. A. K. Modi, A. W. Westerberg, 1992, Distillation column sequencing using marginal price, Ind. Eng. Chem. Res., 31 (3), pp 839-848
2. J. M. Douglas, 1988, Conceptual Design of Chemical Processes, ISBN: 0-07-017762-7
3. M. K. Kattan, P. L. Douglas, 1986, A New Approach to Thermal Integration of Distillation Sequences, Can. J. of Chem. Eng., 64/February, pp 162-170
4. N. Nishida, G. Stephanopoulos, A. W. Westerberg, 1981, A Review of Process Synthesis, AIChE J., 27/3, pp 321-351
5. R. Nath, R. L. Motard, 1981, Evolutionary Synthesis of Separation Processes, AIChE Journal, vol. 27, no. 4, July, pp 578-587
6. T. Gál, B. G. Lakatos, 2006, Pirolizáló kemence matematikai modellezése és számítógépes szimulációja (Modelling and simulation of a cracking furnace), PhD Thesis, University of Pannonia, Veszprém, Hungary
7. V. M. Nadgir, Y. A. Liu, 1983, Studies in Process Design and Synthesis: Part V: A Simple Heuristic Method for Systematic Synthesis of Initial Sequences for Multicomponent Separations, AIChE J., 29, pp 926-934
8. Zs. Fonyó, Gy. Fábry, 2004, Vegyipari művelettani alapismeretek (Unit operations), ISBN: 963-19-5315-7
A computer tool for the development of poly(lactic acid) … feedstock for biomanufacturing
Laboratory of Optimization, Design and Advanced Control (LOPCA), School of Chemical Engineering, State University of Campinas (UNICAMP), Campinas (SP), Brazil; Institute of Biofabrication, State University of Campinas (UNICAMP), Campinas (SP), Brazil
A shortcut design for Kaibel columns based on minimum energy diagrams (M. Ghadrdan et al.)

… P'AB > PAB, P'CD > PCD and trivially: P'AB > PBC and P'CD > PBC.
Figure 1. (a) Vmin diagrams for equimolar feed of the first 4 simple alcohols, α = [6.616, 4.343, 2.256, 1]; (b) Schematic of the column
In the case of unequal peaks in the Petlyuk configuration, there is an optimality region, which is a line from the preferred split point to the point where the two peaks become equal (Halvorsen 2001). Here the optimality region will be like a square below the B/C peak (as shown in Figure 1), which is the impurity allowance in the prefractionator. We assume that the recoveries of c1 and c4 in the top of the prefractionator are 1 and 0 respectively (r_{c1,T} = 1, r_{c2,T} = β_1, r_{c3,T} = β_2, r_{c4,T} = 0). The net flow rates which enter the main column at the top and bottom are calculated from Σ_i z_i F β_i and Σ_i z_i F (1 − β_i) respectively. The common Underwood roots in the prefractionator are calculated from equation (1); the solution obeys α_1 ≥ θ_1 ≥ α_2 ≥ θ_2 ≥ … ≥ α_N.

1 − q = Σ_i α_i z_i / (α_i − θ)    (1)

V_{min,p} = Σ_i [α_i z_i F / (α_i − θ)] β_i    (2)
M. Ghadrdan et al.
The vapour flow rate which corresponds to θ_2 will be the minimum requirement for the prefractionator, because it characterizes the B/C split.
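The common Underwood roots of equation (1) can be found numerically by exploiting the interlacing property: each root is bracketed between two successive relative volatilities. A sketch, using the volatilities of the Figure 1 example (the bisection iteration count is an arbitrary choice):

```python
import numpy as np

def underwood_roots(alpha, z, q):
    """Common Underwood roots of eq. (1): 1 - q = sum_i alpha_i z_i / (alpha_i - theta).

    There are N-1 roots interlacing the relative volatilities,
    alpha_1 > theta_1 > alpha_2 > ... > theta_{N-1} > alpha_N;
    each root is bracketed between two volatilities and found by bisection.
    """
    alpha = np.asarray(alpha, float)
    z = np.asarray(z, float)
    f = lambda th: np.sum(alpha * z / (alpha - th)) - (1.0 - q)
    roots = []
    for hi, lo in zip(alpha[:-1], alpha[1:]):    # bracket (alpha_{k+1}, alpha_k)
        a, b = lo + 1e-9, hi - 1e-9              # f -> -inf at a, +inf at b
        for _ in range(200):                     # plain bisection
            m = 0.5 * (a + b)
            if f(a) * f(m) <= 0.0:
                b = m
            else:
                a = m
        roots.append(0.5 * (a + b))
    return np.array(roots)

# Equimolar feed of the first four simple alcohols (volatilities from Figure 1)
alpha = np.array([6.616, 4.343, 2.256, 1.0])
z = np.array([0.25, 0.25, 0.25, 0.25])
theta = underwood_roots(alpha, z, q=1.0)         # saturated liquid feed
```

The root θ_2 obtained this way is the one inserted into equation (2) to get the minimum prefractionator vapour flow.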
3. Selection of product purities
The selection of product purities is based on economic analysis and customer needs. Note that the minimum vapour flow for the Kaibel column is the same as the maximum of the minimum energies required for the product splits, and the highest peak shows the most difficult split. It is clear that we can think of extra energy in one section and then talk about either increasing the product recovery or designing with a lower number of trays. It has been shown that overfractionating one of the products makes it possible to bypass some of the feed and mix it into the product while retaining the constraints on the products (Alstad, Halvorsen et al. 2004). In addition, the impurities in the products can be guessed from the Vmin diagram. For example, the highest peak in the Vmin diagram determines the component that may appear as impurity in the side stream during optimal operation, so care should be taken in specifying the product impurities. Figure 2 shows the trends of the side stream impurity ratios as functions of the splits and the impurities coming from the prefractionator for the example studied in this paper. This confirms which impurity flows go to the side streams and also helps to put feasible values into the mass balance equations. By writing the total and component mass balances for the whole column to get the minimum allowable flows inside each section, we have 8 equations (component balances) and 20 unknowns, which means that 12 variables should be set in order to solve the mass balance equations:

F z_{ci} = D x_{ci,D} + S1 x_{ci,S1} + S2 x_{ci,S2} + B x_{ci,B} and Σ_i x_{i,Strj} = 1

where x_{m,N} denotes the mole fraction of component m in product N. We assume that the composition of a component is nearly zero in the two sections away from the one in which it is the main product, e.g. the compositions of the lightest component in side stream 2 and in the bottom stream.
By doing so, and by also specifying the composition of the main product in each product stream, two degrees of freedom remain to be specified. It has been shown that specifying two compositions in one product stream may lead to problems (Wolff and Skogestad 1995). This means that the impurity cannot be chosen arbitrarily. Figure 3 shows the contours of the ratios of impurities in the side streams around the optimum as functions of the vapour and liquid splits. It can be read from the figures that specifying the two ratios arbitrarily may be infeasible. So, one important issue is which variables can be set for the product impurities so that the mass balance equations lead to a feasible solution.
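As an illustration of the balance equations, once the product compositions have been fully specified the four product flows follow from a small linear system. The 4-component composition matrix below is a hypothetical specification (97% main product, impurities placed in the neighbouring streams), not the paper's values:

```python
import numpy as np

# Solve the overall component balances
#   F*z_i = D*x_{i,D} + S1*x_{i,S1} + S2*x_{i,S2} + B*x_{i,B}
# for the four product flows, given fully specified product compositions.
F = 100.0                                         # feed flow [kmol/h]
z = np.array([0.25, 0.25, 0.25, 0.25])            # feed composition (A, B, C, D)
X = np.array([                                    # columns: D, S1, S2, B
    [0.97, 0.02, 0.00, 0.00],                     # component A
    [0.03, 0.97, 0.02, 0.00],                     # component B
    [0.00, 0.01, 0.97, 0.03],                     # component C
    [0.00, 0.00, 0.01, 0.97],                     # component D
])
flows = np.linalg.solve(X, F * z)                 # [D, S1, S2, B]
```

Each column of X sums to one (Σ_i x_{i,Strj} = 1), so the recovered flows automatically sum to the feed flow F; an infeasible impurity specification shows up as a negative entry in the solved flow vector.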
4. Minimum allowable and actual internal flows
The other internal flow rates for the prefractionator section and the main column are easily calculated from balances around the different junctions. The common roots of the prefractionator section become the active roots in the main section. The minimum vapour flow rate for each section of the main column can be calculated from equation (2) by substituting the proper feed flow, feed composition and recovery values for each section, for example

z_i,2 = (F/D1) β_i z_i,F,   q2 = −L_min,P/D1,   β_i(sec 2) = D z_i,D / (D1 z_i,D1)

for the top section of the main column.
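The per-section Vmin calculation above follows the standard Underwood procedure: find the roots φ of the feed equation, which interlace the relative volatilities, then evaluate Vmin from the active root and the component recoveries. A minimal sketch follows; the ternary mixture, the volatilities and the sharp-split recoveries are illustrative assumptions, not data from the paper.

```python
def underwood_lhs(phi, alpha, z, q):
    # Feed equation: sum_i alpha_i z_i / (alpha_i - phi) = 1 - q
    return sum(a * zi / (a - phi) for a, zi in zip(alpha, z)) - (1.0 - q)

def underwood_roots(alpha, z, q=1.0, eps=1e-7, iters=200):
    # One root between each adjacent pair of (descending) volatilities, by bisection.
    roots = []
    for hi, lo in zip(alpha, alpha[1:]):
        a, b = lo + eps, hi - eps        # lhs(a) -> -inf, lhs(b) -> +inf
        fa = underwood_lhs(a, alpha, z, q)
        for _ in range(iters):
            m = 0.5 * (a + b)
            fm = underwood_lhs(m, alpha, z, q)
            if (fm < 0) == (fa < 0):
                a, fa = m, fm
            else:
                b = m
        roots.append(0.5 * (a + b))
    return roots

def vmin_over_F(alpha, z, recovery, phi):
    # Vmin/F = sum_i alpha_i r_i z_i / (alpha_i - phi) at the active root phi
    return sum(a * r * zi / (a - phi) for a, r, zi in zip(alpha, recovery, z))

alpha = [4.0, 2.0, 1.0]        # assumed relative volatilities
z = [1 / 3, 1 / 3, 1 / 3]      # assumed equimolar liquid feed (q = 1)
phi = underwood_roots(alpha, z)
v_AB = vmin_over_F(alpha, z, [1.0, 0.0, 0.0], phi[0])  # sharp A/BC split
```

With three components there are two roots; the one between the volatilities of the split key components is the active root for that split.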
A shortcut design for Kaibel Columns Based on Minimum Energy Diagrams
Now we can continue by assuming the actual vapour flow needed for the whole column to be somewhat higher than the minimum value (we assume 10%) and then calculating the actual internal flows.
Figure 2. Objective value and side-stream impurities as functions of the impurities of C2 and C3 from the bottom and top of the prefractionator, respectively
Figure 3. Contours of the impurity ratios in side streams as functions of liquid and vapour split
M. Ghadrdan et al.
The liquid and vapour splits are defined as the ratios of the streams going to the prefractionator to the amounts coming to the junction: rL = L1/L2 and rV = V1/V3. The other internal flows on the two sides of the wall are calculated from the splits. Since the internal flows must be greater than the minimum flows, some constraints must be met; otherwise, the equations will not have proper roots related to the relative volatilities.
rL > max( Lmin,1/L2 , (Lmin,1 − qF)/L2 ),    rL < (L2 − Lmin,2)/L2
rV > max( V1,min/V3 , (V1,min − (1 − q)F)/V3 ),    rV < (V3 − V3,min)/V3        (3)
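The bounds in (3) translate directly into an interval test on the two splits. The helper below illustrates this; the function and variable names are ours, and the numbers in the usage line are made up for illustration only.

```python
def split_bounds(L2, V3, Lmin1, Lmin2, V1min, V3min, F, q):
    # Feasible intervals for rL = L1/L2 and rV = V1/V3 from the minimum-flow
    # constraints (3); q is the feed liquid fraction.
    rL_lo = max(Lmin1 / L2, (Lmin1 - q * F) / L2)
    rL_hi = (L2 - Lmin2) / L2
    rV_lo = max(V1min / V3, (V1min - (1.0 - q) * F) / V3)
    rV_hi = (V3 - V3min) / V3
    return (rL_lo, rL_hi), (rV_lo, rV_hi)

def splits_feasible(rL, rV, bounds):
    (rL_lo, rL_hi), (rV_lo, rV_hi) = bounds
    return rL_lo < rL < rL_hi and rV_lo < rV < rV_hi

# Illustrative numbers only:
b = split_bounds(L2=2.0, V3=2.2, Lmin1=0.8, Lmin2=0.9,
                 V1min=1.0, V3min=1.1, F=1.0, q=1.0)
```

A candidate pair of splits can then be screened before any rigorous simulation is attempted.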
Section four is the section between the two side streams; it is considered to operate at total reflux, so the number of trays could be calculated directly from the Fenske equation. However, since the Fenske equation is based on assuming equal compositions of the liquid and vapour streams at the top and bottom of the prefractionator (which is not the case for the DWC), we derive the minimum number of trays from the Underwood equation instead. A few iterations are done to reach the desired values for the number of trays and the energy requirement. The equation below is used for calculating the number of trays in each section; x_i,L is the composition of the stream entering the prefractionator, which is calculated from the pinch-point equations (Halvorsen 2001).

N = log[ ( Σ_i α_i x_i,D/(α_i − φ2) / Σ_i α_i x_i,D/(α_i − φ1) ) / ( Σ_i α_i x_i,L/(α_i − φ2) / Σ_i α_i x_i,L/(α_i − φ1) ) ] / log(φ2/φ1)        (4)
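Equation (4) is straightforward to evaluate once the two Underwood roots bracketing the section are known. A sketch follows; all numerical values below, including the compositions and the roots, are illustrative assumptions rather than data from the paper.

```python
import math

def tray_sum(alpha, x, phi):
    # S(x, phi) = sum_i alpha_i x_i / (alpha_i - phi)
    return sum(a * xi / (a - phi) for a, xi in zip(alpha, x))

def n_trays(alpha, xD, xL, phi1, phi2):
    # Equation (4):
    # N = log[ (S(xD, phi2)/S(xD, phi1)) / (S(xL, phi2)/S(xL, phi1)) ] / log(phi2/phi1)
    ratio_top = tray_sum(alpha, xD, phi2) / tray_sum(alpha, xD, phi1)
    ratio_bot = tray_sum(alpha, xL, phi2) / tray_sum(alpha, xL, phi1)
    return math.log(ratio_top / ratio_bot) / math.log(phi2 / phi1)

alpha = [4.0, 2.0, 1.0]          # assumed volatilities
xD = [0.95, 0.05, 0.0]           # assumed composition at the top of the section
xL = [0.40, 0.35, 0.25]          # assumed composition entering the section
N = n_trays(alpha, xD, xL, phi1=2.755, phi2=1.35)
```

The result is a continuous estimate of the minimum tray count, which would then be rounded and refined in the iterations the text describes.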
5. Conclusion
Designing complex columns is not as straightforward as designing conventional columns. In this paper we have presented a method for the shortcut design of the Kaibel column based on the Vmin diagram. By plotting the contours of the objective value as a function of the two operational DOFs, we can obtain more information about the behaviour of the column close to the optimum and perform the optimal design based on the rigorous model.
References
Alstad, V., I.J. Halvorsen, et al. (2004). "Optimal operation of Petlyuk Distillation Column: Energy Savings by Over-fractionating." Computer Aided Chemical Engineering 18: 547-552.
Halvorsen, I.J. (2001). Minimum Energy Requirements in Complex Distillation Arrangements. PhD thesis, Norwegian University of Science and Technology, Department of Chemical Engineering (available from the home page of S. Skogestad).
Halvorsen, I.J. and S. Skogestad (2006). "Minimum Energy for the four-product Kaibel-column." AIChE Annual Meeting 2006, San Francisco, 216d.
Sotudeh, N. and B. Hashemi Shahraki (2007). "A Method for the Design of Divided Wall Columns." Chem. Eng. Technol. 30(9): 1-9.
Triantafyllou, C. and R. Smith (1992). "The Design and Optimisation of Fully Thermally Coupled Distillation Columns." Trans. Inst. Chem. 70: 118-132.
Wolff, E.A. and S. Skogestad (1995). "Operation of integrated three-product (Petlyuk) distillation columns." Ind. Eng. Chem. Res. 34: 2094-2103.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
A superstructure optimization approach for optimal refinery water network systems synthesis with membrane-based regenerators
Cheng Seong Khor, Nilay Shah
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
Abstract
Water is a key element in the operation of petroleum refineries. As such, there is great interest in incorporating water reuse, regeneration treatment and recycle (WR) approaches in the design of refinery water network systems with the aim of minimizing freshwater consumption and wastewater generation. Hence this work concerns the optimization of refinery water network systems synthesis comprising water-producing streams (sources), water-using units (sinks or demands), and water-treatment technologies (regenerators). We develop a source–interceptor–sink superstructure representation that embeds as many feasible alternatives as possible for implementing WR while preserving an attractive convexity property and being amenable to a tighter model formulation. A mixed-integer nonlinear program (MINLP) is formulated based on the superstructure to determine the optimal retrofit of a water network structure in terms of the continuous variables of total stream flowrates and contaminant concentrations and the 0–1 variables of stream piping connections. The superstructure and the MINLP explicitly model partitioning regenerators, particularly the membrane-based treatment technologies of ultrafiltration and reverse osmosis, with the objective of minimizing the fixed capital costs of installing piping connections and the variable cost of operating all stream connections while reducing the pollutant levels to within regulatory limits. The proposed modeling approach is implemented on an industrial case study using the GAMS/BARON platform to obtain a globally optimal water network topology.
Keywords: Optimization, water reuse/recycle, synthesis, superstructure, mixed-integer nonlinear programming (MINLP)
1. Introduction
In this work we investigate the application of the mathematical optimization approach of mixed-integer nonlinear programming (MINLP) to the retrofit of an oil refinery water network system. The seminal paper applying an optimization approach to such problems is by Takama et al., which addresses the optimal water allocation in a refinery, while more recent work applying various optimization techniques to tackle this class of pooling problems can be found in Gounaris et al., Misener et al., Wicaksono and Karimi, Karuppiah and Grossmann, and Meyer and Floudas. We are motivated by two reasons to undertake this work. First, high demand for water in the future may result in a refinery becoming vulnerable to water supply interruptions. Second, the work is in support of sustainable development, as exemplified by its objectives of minimizing freshwater use and wastewater generation in a refinery.
2. Problem Statement and Research Objectives
The aim of this work is to determine an optimal refinery water network systems structure, comprising sets of water-producing streams of process sources with known water flowrates and contaminant concentrations, water-using operations of process sinks with known water requirements and maximum allowable inlet contaminant concentrations, and water-treatment technologies, that meets the criteria of minimum freshwater use and minimum wastewater generation with contaminant concentrations within the allowable operating limits. This is achieved through the formulation and solution of an optimization model based on a superstructure of possibly all feasible alternative configurations of such integrated refinery water network systems with the incorporation of water reuse, regeneration and recycle (WR) strategies.
3. Superstructure Representation
We develop a superstructure that explicitly models the material balances for membrane-based partitioning regenerators, particularly the treatment technologies of ultrafiltration (UF) and reverse osmosis (RO), as shown in Figure 1. The permeate and reject streams are modeled as imaginary standalone individual regenerators.
4. Optimization Model Formulation
Based on the superstructure, we formulate a mixed-integer nonlinear program (MINLP) that is largely based on the models of Meyer and Floudas and of Gabriel and El-Halwagi, as presented in the following.

Water flow balances for the sources:
F(i) = Σ_{k∈K} Fd(i,k) + Σ_{j∈J} Fa(i,j),   ∀ i ∈ I

Water flow balances for the general (non-membrane-based) interceptors:
Σ_{i∈I} Fd(i,kG) + Σ_{kG'∈KG, kG'≠kG} FccG(kG',kG) + Σ_{kP∈KP} FccG(kP,kG) + Σ_{kR∈KR} FccG(kR,kG)
  = Σ_{j∈J} FbG(kG,j) + Σ_{kG'∈KG, kG'≠kG} FcG(kG,kG') + Σ_{kP∈KP} FcG(kG,kP) + Σ_{kR∈KR} FcG(kG,kR),   ∀ kG ∈ KG

Contaminant concentration balances for the general (non-membrane-based) interceptors:
(1 − RR(q,kG)) [ Σ_{i∈I} Fd(i,kG) CSO(q,i) + Σ_{kG'≠kG} FccG(kG',kG) CG(q,kG') + Σ_{kP∈KP} FccG(kP,kG) CP(q,kP) + Σ_{kR∈KR} FccG(kR,kG) CR(q,kR) ]
  = CG(q,kG) [ Σ_{j∈J} FbG(kG,j) + Σ_{kG'≠kG} FcG(kG,kG') + Σ_{kP∈KP} FcG(kG,kP) + Σ_{kR∈KR} FcG(kG,kR) ],   ∀ kG ∈ KG, ∀ q ∈ Q

Water flow balances for the permeate stream of a membrane-based interceptor:
Σ_{i∈I} Fd(i,kP) + Σ_{kG∈KG} FccP(kG,kP) + Σ_{kP'≠kP} FccP(kP',kP) + Σ_{kR∈KR} FccP(kR,kP)
  = Σ_{j∈J} FbP(kP,j) + Σ_{kG∈KG} FcP(kP,kG) + Σ_{kP'≠kP} FcP(kP,kP') + Σ_{kR∈KR} FcP(kP,kR),   ∀ kP ∈ KP
Figure 1. Simplified superstructure representation for the refinery water network synthesis problem
Contaminant concentration balances for the permeate stream of a membrane-based interceptor:
(1 − RR(q,kP)) [ Σ_{i∈I} Fd(i,kP) CSO(q,i) + Σ_{kG∈KG} FccP(kG,kP) CG(q,kG) + Σ_{kP'≠kP} FccP(kP',kP) CP(q,kP') + Σ_{kR∈KR} FccP(kR,kP) CR(q,kR) ]
  = CP(q,kP) [ Σ_{j∈J} FbP(kP,j) + Σ_{kG∈KG} FcP(kP,kG) + Σ_{kP'≠kP} FcP(kP,kP') + Σ_{kR∈KR} FcP(kP,kR) ],   ∀ kP ∈ KP, ∀ q ∈ Q

Split ratio on flow based on liquid-phase recovery (α) for the permeate stream:
α [ Σ_{i∈I} Fd(i,kP) + Σ_{kG∈KG} FccP(kG,kP) + Σ_{kP'≠kP} FccP(kP',kP) + Σ_{kR'∈KR} FccP(kR',kP)
    + Σ_{i∈I} Fd(i,kR) + Σ_{kG∈KG} FccR(kG,kR) + Σ_{kP'∈KP} FccR(kP',kR) + Σ_{kR'≠kR} FccR(kR',kR) ]
  = Σ_{j∈J} FbP(kP,j) + Σ_{kG∈KG} FcP(kP,kG) + Σ_{kP'≠kP} FcP(kP,kP') + Σ_{kR'∈KR} FcP(kP,kR'),   ∀ (kP,kR) ∈ KP × KR

Water flow balances for the reject stream of a membrane-based interceptor:
Σ_{i∈I} Fd(i,kR) + Σ_{kG∈KG} FccR(kG,kR) + Σ_{kP∈KP} FccR(kP,kR) + Σ_{kR'≠kR} FccR(kR',kR)
  = Σ_{j∈J} FbR(kR,j) + Σ_{kG∈KG} FcR(kR,kG) + Σ_{kP∈KP} FcR(kR,kP) + Σ_{kR'≠kR} FcR(kR,kR'),   ∀ kR ∈ KR
Contaminant concentration balances for the reject stream of a membrane-based interceptor:
RR(q,kR) [ Σ_{i∈I} Fd(i,kR) CSO(q,i) + Σ_{kG∈KG} FccR(kG,kR) CG(q,kG) + Σ_{kP∈KP} FccR(kP,kR) CP(q,kP) + Σ_{kR'≠kR} FccR(kR',kR) CR(q,kR') ]
  = CR(q,kR) [ Σ_{j∈J} FbR(kR,j) + Σ_{kG∈KG} FcR(kR,kG) + Σ_{kP∈KP} FcR(kR,kP) + Σ_{kR'≠kR} FcR(kR,kR') ],   ∀ kR ∈ KR, ∀ q ∈ Q

Water flow balances for the sinks:
Σ_{i∈I} Fa(i,j) + Σ_{kG∈KG} FbG(kG,j) + Σ_{kP∈KP} FbP(kP,j) + Σ_{kR∈KR} FbR(kR,j) = F(j),   ∀ j ∈ J

Contaminant concentration balances for the sinks:
Σ_{i∈I} Fa(i,j) CSO(q,i) + Σ_{kG∈KG} FbG(kG,j) CG(q,kG) + Σ_{kP∈KP} FbP(kP,j) CP(q,kP) + Σ_{kR∈KR} FbR(kR,j) CR(q,kR) ≤ F(j) Cmax(q,j),   ∀ j ∈ J, ∀ q ∈ Q

Big-M logical constraints (an example is illustrated as follows for Fa):
Fa(i,j) ≤ FaU(i,j) ya(i,j)
Fa(i,j) ≥ FaL(i,j) ya(i,j)

Forbidden mixing of the permeate and reject streams of an interceptor in a sink, in another interceptor, and from another interceptor:
FbP(kP,j) FbR(kR,j) = 0,   ∀ j ∈ J
FbP(kP,k') FbR(kR,k') = 0,   k ≠ k', k ∈ K
FbP(k,kP) FbR(k,kR) = 0,   k ∈ K
5. Computational Results
We apply the proposed MINLP formulation to an industrial-scale case study of a refinery water network structure problem involving sources, potential treatment technologies, and sinks, with the contaminant oil and grease. The optimization is executed using the global optimization solver GAMS/BARON with specified absolute and relative optimality tolerances. The optimal water network structure computed is shown in Figure 2 and registers a marked reduction in freshwater use.
6. Concluding Remarks
This work proposes a superstructure and an MINLP formulation that explicitly model membrane-based treatment technologies by treating the permeate and reject streams as individual interceptors. The numerical examples demonstrate the capability of the proposed approach to evaluate WR alternatives to determine an optimal refinery water network system with reduced freshwater consumption.
Acknowledgments
The main author is grateful to Dominic Foo for initial discussions on the problem and Ngai

200) needed for HDMR feasibility analysis [5].

min u
s.t.  −2θ1 + θ2 − 15 ≤ u
      θ1²/2 + 4θ1 − θ2 − 5 ≤ u
      −(θ1 − 4)²/5 − θ2²/0.5 + 10 ≤ u        (3)
θ1 ∈ [−10, 5], θ1^N = −2.5;   θ2 ∈ [−15, 15], θ2^N = 0
4. Conclusions
In this work, a Kriging-based methodology is introduced for accurately mapping the feasible region of operation of black-box processes as a function of the range of uncertain parameters. Several key aspects are identified in order to develop a systematic and reliable method for identifying feasible operation while simultaneously minimizing the required sampling cost.
F. Boukouvala et al.
Figure 3. Cross-validation diagnostic plot of the initial experimental design of 81 points
Figure 4. Predicted vs. Real feasible region of Problem 3
First, it is important to ensure that the initial experimental design is representative of the entire feasible region and this is assessed by cross-validation diagnostic plots and the average Kriging error estimate. Once the initial experimental design samples are obtained, the sampling set is adaptively refined only in critical regions that are likely to provide maximum information to the model.
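The adaptive refinement step described above can be illustrated with a bare-bones Kriging predictor: fit a Gaussian-kernel model to the current samples and place the next sample where the predictive variance is largest. This sketch (one-dimensional inputs, fixed length scale, variance-only criterion) is a simplification of the paper's method, which additionally targets the feasibility boundary; all names and numbers are ours.

```python
import numpy as np

def kernel(x1, x2, ls=1.0):
    # Squared-exponential correlation for 1-D inputs
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ls) ** 2)

def kriging_predict(Xs, ys, Xq, ls=1.0, nugget=1e-8):
    K = kernel(Xs, Xs, ls) + nugget * np.eye(len(Xs))
    k = kernel(Xq, Xs, ls)
    mu = k @ np.linalg.solve(K, ys)
    # Predictive variance 1 - k K^{-1} k^T, evaluated row by row
    var = 1.0 - np.sum(k * np.linalg.solve(K, k.T).T, axis=1)
    return mu, np.maximum(var, 0.0)

def next_sample(Xs, ys, Xcand, ls=1.0):
    # Refine where the Kriging error estimate is largest
    _, var = kriging_predict(Xs, ys, Xcand, ls)
    return Xcand[int(np.argmax(var))]

Xs = np.array([0.0, 1.0])
ys = np.array([0.0, 1.0])
cand = np.array([0.1, 0.5, 0.9, 3.0])
x_new = next_sample(Xs, ys, cand)   # the candidate far from all samples wins
```

In the full method, the variance criterion would be combined with proximity to the predicted feasibility boundary before committing to an expensive new sample.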
Acknowledgments The authors acknowledge the support provided by the ERC (NSF-0504497, NSF-ECC 0540855).
References
1. Swaney, R.E. and I.E. Grossmann. An index for operational flexibility in chemical process design. Part I: Formulation and theory. 1985, p. 621-630.
2. Floudas, C.A. and Z.H. Gumus. Global Optimization in Design under Uncertainty: Feasibility Test and Flexibility Index Problems. Industrial & Engineering Chemistry Research, 2001, 40(20): p. 4267-4282.
3. Grossmann, I.E. and C.A. Floudas (1987). Active Constraint Strategy for Flexibility Analysis in Chemical Processes. Computers & Chemical Engineering 11, 675-693.
4. Ierapetritou, M.G. New approach for quantifying process feasibility: Convex and 1-D quasi-convex regions. 2001, p. 1407-1417.
5. Banerjee, I. and M.G. Ierapetritou. Design Optimization under Parameter Uncertainty for General Black-Box Models. Industrial & Engineering Chemistry Research, 2002, 41(26): p. 6687-6697.
6. Rasmussen, C.E. and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
7. Jones, D.R., M. Schonlau, and W.J. Welch. Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization, 1998, 13(4): p. 455-492.
8. Davis, E. and M. Ierapetritou. A kriging based method for the solution of mixed-integer nonlinear programs containing black-box functions. Journal of Global Optimization, 2009, 43(2): p. 191-205.
9. Davis, E. and M. Ierapetritou. A centroid-based sampling strategy for kriging global modeling and optimization. AIChE Journal, 56(1): p. 220-240.
10. Browne, M.W. Cross-Validation Methods. Journal of Mathematical Psychology, 2000, 44(1): p. 108-132.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21
E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors)
© 2011 Elsevier B.V. All rights reserved.
Multiobjective Optimization for Plastic Sheet Production M. Rivera-Toledo, G. Meneses-Castellanos, and A. Flores-Tlacuahuac∗ Depto. Ing. y Ciencias Químicas, Universidad Iberoamericana, México D.F., México
Abstract
In this work, we formulate a multiobjective optimization problem using conflicting performance objectives in polymerization systems, such as maximizing monomer conversion and minimizing molecular weight distribution. The problem is subject to a mathematical model comprising highly nonlinear partial differential equations, which describe the dynamic response of the poly(methyl methacrylate) cell-cast process. A full discretization approach was used for solving the associated nonlinear programming problems. We analyze the effect of different process constraints on the Pareto curve and select suitable operating policies in an open-loop environment.
Keywords: multiobjective, sheet reactor, PDEs
1 Introduction
One of the main issues addressed in the optimization of chemical processes so far has been optimization for one objective at a time. However, many practical applications involve several objectives to be considered simultaneously. The appropriate objectives for a particular application are often conflicting, which means that achieving the optimum for one objective requires some compromise on one or more of the others. Multiobjective optimization (MO), particularly outside engineering, refers to finding values of decision variables which correspond to and provide the optimum of more than one objective. MO can be applied to handle conflicting performance objectives. The goal is to obtain a set of equally good solutions, known as Pareto optimal solutions. In a Pareto set, no solution can be considered better than any other solution with respect to all objective functions. When one moves from one Pareto solution to another, at least one objective function improves while at least one other gets worse. Hence, MO involves special methods for considering more than one objective and analyzing the results obtained. The approaches most often exploited to generate this Pareto set are the weighting method and the ε-constraint method [see Rangaiah (2009)], although more recent approaches are available [see Das and Dennis (1998)]. In this work we address the MO of an experimentally validated model of an industrial polymerization reactor whose dynamic model is described in terms of a set of highly nonlinear partial differential and algebraic equations. As conflicting objectives we have selected the maximization of both monomer conversion and molecular weight distribution which, for some polymerization systems, are commonly in conflict.
∗ [email protected]
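The Pareto-set definition given in the introduction (no solution dominated in every objective) can be made concrete with a small dominance filter. This toy sketch assumes all objectives are to be minimized; the points are invented for illustration.

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated points
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

Moving along the surviving points, improving one objective always worsens the other, which is exactly the trade-off a Pareto set expresses.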
M. Rivera-Toledo et al.
2 Cell-Cast Process
In the typical casting of acrylic sheet, molds formed by two glass plates, separated by a peripheral gasket sealer and clamped together, are filled with casting syrup through a gap left in the gasket. The casting syrup is made up of partially polymerized monomer (20%) which, once placed in the mold, is inserted into a furnace heated by circulating warm air (see Figure 1). It is extremely important to control the progress of the polymerization throughout the procedure and to create suitably mild thermal conditions, which in turn requires speedy and effective dissipation of excess heat. Due to the low heat capacity of air, effective control of the thermal conditions during the operation is very important, since the heating is effected by the circulating air, as has been shown by M. Rivera and Vílchis (2006). Besides the conventional chemical kinetics, physical phenomena related to the diffusion of the various reactive chemical species are very important in free-radical polymerization reactions. Several models have been published dealing with the mathematical description of diffusion-controlled kinetic rate constants in free-radical polymerization [see Dubé et al. (1997)]. The reaction mechanism adopted here is a simple approximation of the well-known free-radical polymerization kinetics, featuring straightforward initiation, propagation, and termination reactions as described by Achilias and Kiparissides (1992). The following assumptions are made: (1) the diffusion effect is negligible, since we are interested in the thermal process behavior, so the polymer processing is controlled by the chemical kinetics; (2) only the mass balances for the monomer conversion, the initiator concentration and the growing radical concentration are considered. The sheet reactor model is considered for PMMA plastic sheet production.
For lack of space, we indicate only briefly in this paper some assumptions on which the mathematical model rests; in previous work, M. Rivera and Vílchis (2006), we discuss in detail the limitations and scope of the model. The model was derived assuming that the heat source resulting from the polymerization reaction is a function of the local temperature. It was also assumed that polymer properties such as density, heat capacity and thermal conductivity are constant. To get the one-dimensional dynamic energy balance, the total heat entering and leaving at the z coordinate was modeled by the Fourier law, and the rate of change of energy in the control volume was obtained by applying the shell energy balance method. Dynamic mass and energy balances coupled through the polymerization kinetics describe the time evolution of the monomer conversion, the initiator concentration, the growing radical concentration, and Mw. Air is circulated through the forced convection mechanism inside the oven to provide the required energy to raise
Figure 1. Cell-cast process for PMMA plastic sheet manufacture
the plastic sheet temperature to the point where significant polymerization rates take place. Inside the monomer, the dominant heat transfer mechanism is conduction. The dimensionless modeling equations for the sheet reactor consist of the following energy and mass balances:
∂θ/∂τ = a² ∂²θ/∂ζ² + Bi(θa − θ) + A9 (1 − X)/(1 + εX) exp(λ̄0 − A2/θ)    (1)

dX/dτ = A1 (1 − X) exp(λ̄0 − A2/θ)    (2)

dλ̄0/dτ = −A4 (1 − X)/(1 + εX) exp(λ̄0 − A2/θ) + A6 exp(−λ̄0 − A5/θ) − A7 exp(λ̄0 − A8/θ)    (3)

dĪ/dτ = −A3 exp(−A5/θ)    (4)

and the initial (IC) and boundary (BC) conditions are given by

IC: τ = 0, ∀ζ ∈ [0, 1]: θ = θ0, X = X0, λ̄0 = ln(λ0), Ī = 1
BC1: ∀τ > 0, ζ = 0: ∂θ/∂ζ = Bi(θa − θ)
BC2: ∀τ > 0, ζ = 1: ∂θ/∂ζ = 0    (5)
Here, the dimensionless variables for the polymer temperature, air temperature, position, time, growing radical concentration and initiator concentration are defined as follows: θ = T/T0, θa = Ta/T0, ζ = z/L, τ = αt/H², λ̄0 = ln(λ0), Ī = I/I0, respectively; and Bi = hH/k is the Biot number, T is the polymer temperature, T0 is the initial monomer temperature, Ta is the surrounding temperature, t is the polymerization time, z is the axis along the sheet length, X is the monomer conversion, λ0 is the growing radical concentration, I is the initiator concentration, L is the sheet length, H is the sheet thickness, ε is the volume expansion factor, α is the thermal diffusivity, and h is the heat transfer coefficient.
3 Multiobjective Optimization Problem
In the present work, we formulate a MO problem using conflicting performance objectives in polymerization systems, namely the molecular weight distribution objective, fMw, and the monomer conversion objective, fX, as follows:
min fMw = ∫₀^τ (1 − Mw/Mw^d)² dτ   and   max fX = ∫₀^τ (1 − θ/θ^d)² X dτ    (6)

then we use the ε-constraint method for generating the Pareto frontier, in which one of the solutions will be an "ideal" solution. The MO problem (equation 6) is subject to the partial differential and algebraic equations (PDAE) and the initial and boundary conditions (equations 1-5). In the above equation, θ^d and Mw^d stand for the desired values of the plastic sheet temperature and the molecular weight distribution, respectively. The ratio between the rate of propagation and the rates of propagation and termination of polymerization was used to obtain Mw. Using the simultaneous approach [Biegler et al. (2003)] for solving the dynamic optimization problems, these are converted into a nonlinear programming (NLP) problem by approximating the state (θ, X, Ī, λ̄0, Mw) and control (θa) variables through the application of the method of lines for the spatial coordinate and orthogonal collocation on finite elements for handling the time coordinate.
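The ε-constraint step can be illustrated independently of the PDAE model: fix a minimum acceptable value of one objective, minimize the other, and sweep ε to trace the frontier. A toy sketch over a scalar decision grid follows; both objective functions below are invented purely for illustration.

```python
def eps_constraint_front(f_min, f_other, grid, eps_values):
    # For each eps, solve: min f_min(x)  subject to  f_other(x) >= eps
    front = []
    for eps in eps_values:
        feasible = [x for x in grid if f_other(x) >= eps]
        if feasible:
            x_star = min(feasible, key=f_min)
            front.append((f_min(x_star), f_other(x_star)))
    return front

grid = [i / 100 for i in range(101)]
front = eps_constraint_front(lambda x: x * x, lambda x: x, grid, [0.2, 0.5, 0.8])
```

Each sweep value of ε yields one point of the trade-off curve; in the paper the inner minimization is the full NLP rather than a grid search.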
4 Results
In Figure 2 we present a comparison between the Pareto optimal set and the ideal solutions when (a) Mw^d = 1x10^4 and (b) Mw^d = 1x10^6, for (ii) the dynamic temperature, (iii) Mw and the growing radical concentration, and (iv) the monomer conversion and initiator concentration profiles for a 3 mm plastic sheet thickness. Solutions are labelled for the edge (z = 0) and the right extreme (z = L) along the longitudinal z-coordinate. The numerical results were obtained using the following dimensionless parameter values for the PDAE: A1 = 6.1539x10^8, A2 = 6.7786, A3 = 1.3184x10^18, A4 = −1.4626x10^8, A5 = 47.6514, A6 = 1.3812x10^18, A7 = 1.2266x10^11, A8 = 1.0916, A9 = 4.4036x10^8, a = 5.555x10^-4, and Bi = 5.3366. When the ε-constraint method was used, the MO problem was written as min fMw subject to fX ≥ ε and the PDAE, with ε lying in the interval [0.75, 0.99]. The NLP problem was solved using the CONOPT NLP solver embedded in the GAMS algebraic modelling system. An ideal solution was obtained by determining the Pareto solution closest to the utopia point, according to the approach suggested in Grossmann and Jain (1982). As can be seen from the results displayed in Figure 2, acceptable monomer conversion and molecular weight distribution are obtained by setting a trade-off between the conflicting objectives. Despite the complexity of the underlying dynamic system, the Pareto frontier was computed in relatively modest computational time.
5 Conclusions
In this work the multiobjective optimization of a highly nonlinear polymerization reactor was addressed. Because conflicting objectives normally emerge in polymerization systems, it makes sense to compute the set of optimal solutions that reflects the trade-off among those objectives and lets the designer pick the solution that he/she considers to meet the operating objectives in the best possible manner. Although the set of optimal solutions is presently computed off-line, we are extending this work to compute and implement on-line optimal dynamic solutions by using nonlinear model predictive control strategies.
References
Achilias, D., Kiparissides, C., 1992. Development of a General Mathematical Framework for Modeling Diffusion-Controlled Free Radical Polymerization Reactions. Macromolecules 25, 3739–3750.
Biegler, L., Ghattas, O., Heinkenschloss, M., van Bloemen Waanders, B., 2003. Large-Scale PDE-Constrained Optimization. Springer, Berlin.
Das, I., Dennis, J. E., 1998. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J. Optim. 8, 631.
Dubé, M. A., Soares, J. B. P., Penlidis, A., Hamielec, A. E., 1997. Mathematical Modeling of Multicomponent Chain-Growth Polymerizations in Batch, Semibatch, and Continuous Reactors: A Review. Ind. Eng. Chem. Res. 36, 966–1015.
Grossmann, I. E., D. R., Jain, R. K., 1982. Incorporating toxicology in the synthesis of industrial chemical complexes. Chem. Eng. Commun. 17, 151.
M. Rivera, L. E. García, A. F., Vílchis, L., 2006. Dynamic modeling and experimental validation of the MMA cell-cast process for plastic sheet production. Ind. Eng. Chem. Res. 45 (25), 8539–8553.
Rangaiah, G. P., 2009. Multi-objective optimization: techniques and applications in chemical engineering. World Scientific, Singapore.
Figure 2. Plot of (i) Pareto optimal set and ideal solutions when (a) Mw^d = 1x10^4 and (b) Mw^d = 1x10^6, for dynamic (ii) monomer and air temperatures, (iii) Mw and growing radical concentration, (iv) monomer conversion and initiator concentration profiles for 3 mm plastic sheet thickness. Solutions are labelled for the edge, z = 0 (subscript 1), and right extreme, z = L (subscript Nz), along the longitudinal z-coordinate.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Systematic identification and robust control design for uncertain time delay processes
Jakob K. Huusom (a), Niels K. Poulsen (b), Sten B. Jørgensen (a), John B. Jørgensen (b)
(a) CAPEC, Department of Chemical and Biochemical Engineering
(b) Department of Informatics and Mathematical Modelling
Technical University of Denmark, DK-2800 Lyngby, Denmark
Abstract
A systematic procedure is proposed to handle the standard process control problem. The standard problem considered involves infrequent step disturbances to processes with large delays and measurement noise. The process is modeled as an ARX model and extended with a suitable noise model in order to reject unmeasured step disturbances and unavoidable model errors. The controller is shown to perform well for both set point tracking and disturbance rejection on a SISO process example of a furnace which has a time delay significantly longer than the dominating time constant.
Keywords: Model Predictive Control, Autoregressive models, Time delay systems.
1. Introduction
Many chemical engineering processes contain time delays and a degree of measurement noise. Often unmeasured step disturbances appear, e.g. when a feed source changes, which may give rise to offset in the controlled variable. This rather common control problem deserves the development of a systematic methodology. In the present contribution, a model based methodology which combines model identification with unmeasured disturbance estimation and robust offset-free control design is developed. The methodology is illustrated on a furnace control problem. Model Predictive Control (MPC) is a state of the art control technology which utilizes a model of the system to predict the process output over some future horizon and solves a quadratic optimization problem with the control signal as decision variables. Early achievements and industrial implementations of Model Predictive Control include IDCOM and Dynamic Matrix Control [1,2]. These algorithms were based on step or impulse response models. More general linear input-output model structures were used in Generalized Predictive Control [3], but interest in MPC implementations based on state space models was created by the seminal paper [4]. The state space approach provides a unified framework for discussion of the various predictive control algorithms and is well suited for stability analysis [5]. Therefore, MPC based on state space models is useful as an implementation paradigm when other linear model classes are identified.
2. Model Predictive Control based on Autoregressive models
It is proposed to base MPC on autoregressive models with exogenous inputs (ARX):

A(q⁻¹) y(t) = B(q⁻¹) u(t) + ε(t),   ε(t) ∈ N(0, σ²)

ARX models can be reliably identified from data using convex optimization since they are linear in the system parameters. This feature presents an advantage in embedded applications for robust and automatic system identification. Furthermore, MIMO systems can be identified as easily as SISO systems, which is not the case for the
Model Predictive Control with dead-band for uncertain time delay systems
443
class of ARMAX models. The ARX model can be converted to a state space model in the observer canonical, innovation form, hence the special correlation between process and measurement noise can be exploited. The innovation, ݁ ൌ ݕ െ ݕොȁିଵ , and future predictions of the states and the process output are calculated based on the Kalman filter. The optimal control input sequence is based on the minimization on the following quadratic objective, subject to constraints on the input, u, and the control move, ǻX. ଶ ଶ ଵ ߶ ൌ σேିଵ ොାଵାȁ െ ݎାଵା ฮ ฮȟݑାȁ ฮ ୀ ฮݕ ଶ
ொ
ௌೠ
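Because the ARX structure is linear in its parameters, identification reduces to an ordinary least squares problem. The sketch below illustrates this for a SISO second order ARX model with an input delay; the orders, the delay and all numeric values are illustrative choices, not the paper's identified model.

```python
def solve_least_squares(rows, rhs):
    """Solve min ||rows*x - rhs|| via the normal equations and
    Gaussian elimination with partial pivoting (small dense problems)."""
    n = len(rows[0])
    M = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        v[c], v[p] = v[p], v[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            for j in range(c, n):
                M[k][j] -= f * M[c][j]
            v[k] -= f * v[c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (v[k] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def identify_arx(y, u, delay):
    """Estimate theta = [a1, a2, b0, b1] in the ARX model
    y(t) + a1*y(t-1) + a2*y(t-2) = b0*u(t-d) + b1*u(t-d-1) + e(t)."""
    rows, rhs = [], []
    for t in range(delay + 1, len(y)):
        rows.append([-y[t - 1], -y[t - 2], u[t - delay], u[t - delay - 1]])
        rhs.append(y[t])
    return solve_least_squares(rows, rhs)
```

With noise-free data generated by the same structure the estimates reproduce the true coefficients essentially exactly; with noise the estimates remain well behaved because the problem stays convex.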
where $Q$ and $S_u$ are weight matrices. The constrained optimal control problem can be converted into a standard convex quadratic program [6].

2.1. Soft output constraint Model Predictive Control

The above objective function has the disadvantage that the presence of noise gives rise to an active controller even though the process control operates with zero error on average. Therefore a new performance objective which includes soft constraints is introduced:

$$\phi_{soft} = \tfrac{1}{2}\sum_{j=0}^{N-1}\left( \left\| \hat{y}_{k+1+j|k} - r_{k+1+j} \right\|_Q^2 + \left\| \Delta u_{k+j|k} \right\|_{S_u}^2 + \left\| \eta_{k+1+j} \right\|_{S_\eta}^2 + 2\, s_\eta^T \eta_{k+1+j} \right)$$

where $Q$, $S_u$ and $S_\eta$ are weight matrices and $\eta$ is a vector of auxiliary variables. The MPC controller with soft output constraints solves the quadratic programming problem

$$\min_{\{u_{k+j},\,\eta_{k+1+j}\}} \phi_{soft}, \qquad j = 0, 1, 2, \ldots, N-1$$

such that the input, $u$, and the control move, $\Delta u$, are constrained to an interval. The soft output constraints are imposed by demanding that the auxiliary variable, $\eta$, is positive and

$$y_{\min,k+1+j} - \eta_{k+1+j} \;\le\; \hat{y}_{k+1+j|k} \;\le\; y_{\max,k+1+j} + \eta_{k+1+j}$$

Fig. 1 shows the penalty function on the tracking error for nominal and soft output constrained MPC. It is seen that inclusion of the soft constraints detunes the controller within the limits on the tracking error. An in-depth discussion of the implementation of soft output constrained MPC is given in [8], which also shows improved robustness compared to the nominal MPC.
Figure 1. Penalty function for the tracking error for nominal and soft MPC.
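The dead-band idea of Fig. 1 can be sketched as a scalar penalty: zero inside the soft-constraint band and quadratic plus linear in the violation outside it. The band width and weights below are placeholders, not the paper's tuning.

```python
def nominal_penalty(e, q=1.0):
    """Quadratic penalty on the tracking error used by the nominal MPC."""
    return 0.5 * q * e * e

def soft_penalty(e, band=5.0, s_eta=1.0, s_lin=0.0):
    """Dead-band penalty: zero while |e| <= band, quadratic plus linear
    in the violation eta = |e| - band outside the band."""
    eta = max(0.0, abs(e) - band)
    return 0.5 * s_eta * eta * eta + s_lin * eta
```

Inside the band the soft controller sees no tracking penalty at all, which is exactly why measurement noise no longer keeps it permanently active.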
2.2. Offset free performance

Since one objective of the control is to ensure offset-free tracking, it is proposed to identify an ARX model from a set of input/output plant data and base the control implementation on the following model [7]

$$A(q^{-1})\,y(t) = B(q^{-1})\,u(t) + \eta(t), \qquad \eta(t) = \frac{1 - \alpha q^{-1}}{1 - q^{-1}}\, e(t)$$

Due to the inclusion of the integrator in the noise model, the effect of a sustained non-zero disturbance can be eliminated. The first order moving average part of the noise model balances the speed of estimating an unknown disturbance versus the noise sensitivity of the disturbance estimate. The effect that $\alpha$ has on this trade-off is depicted in Fig. 2. It is seen that $\alpha$ offers a good trade-off independent of the system. The variance of the disturbance estimate is less than 20% of the process noise and 95% of an unmeasured step is estimated in approximately 10 samples. The system description with the linear noise model can be realized as an ARMAX model, which means that it can be converted to the state space description used by the controller

$$\bar{A}(q^{-1})\,y(t) = \bar{B}(q^{-1})\,u(t) + C(q^{-1})\,e(t)$$

with $\bar{A}(q^{-1}) = (1 - q^{-1})A(q^{-1})$, $\bar{B}(q^{-1}) = (1 - q^{-1})B(q^{-1})$ and $C(q^{-1}) = 1 - \alpha q^{-1}$.

J. K. Huusom et al.
Figure 2. The variance of the disturbance estimate and the response time of the disturbance estimator given a step, as functions of the free parameter α in the noise model.
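A simplified way to reproduce the trade-off that Fig. 2 describes is a first-order estimator with pole alpha: a small alpha tracks a step disturbance quickly but passes more noise, a large alpha does the opposite. This scalar analogue, and the value alpha = 0.74 used below, are illustrative assumptions, not the paper's exact Kalman-filter-based estimator.

```python
def step_response(alpha, n):
    """Fraction of a unit step disturbance recovered after n samples by a
    first-order estimator est_k = alpha*est_{k-1} + (1 - alpha)*meas_k."""
    est = 0.0
    for _ in range(n):
        est = alpha * est + (1.0 - alpha) * 1.0
    return est
```

With alpha = 0.74 roughly 95% of a unit step is estimated after 10 samples, matching the order of magnitude quoted in the text; the closed form is 1 - alpha**n.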
3. Process example – The Gas-Oil Furnace

This problem deals with a process where a liquid oil stream is heated and evaporated in a furnace. This example is inspired by a set of papers by Rivera and co-workers [9]. The goal when operating this plant is to maintain a constant gas temperature in the product stream by manipulating the fuel flow rate to the furnace such that oil feed flow rate disturbances are rejected. The disturbance is random, while its mean value may change stepwise in order to change the unit throughput. The process and the signals are depicted in Fig. 3. The temperature dynamics of the process can be described by the following second order plus delay transfer functions with real valued poles:

$$\frac{y(s)}{u(s)} = G_p(s) = \frac{20}{(40s + 1)(4s + 1)}\, e^{-50s}$$

$$\frac{y(s)}{d(s)} = G_d(s) = \frac{-5}{(5s + 1)(5s + 1)}\, e^{-10s}$$

The process output is measured at discrete time instants every 2 minutes for the purpose of feedback control. This measurement is noisy and assumed to be Gaussian distributed:

$$y_t = \bar{y}_t + e_t, \qquad e_t \in \mathcal{N}(0, 0.5^2)$$

The disturbance signal used for simulation of this process is assumed to behave as

$$d_t = d_{det} + d_{ran}, \qquad d_{ran} \in \mathcal{N}(0, 0.25^2)$$

where $d_{det}$ is the desired rate of production and $d_{ran}$ is the stochastic element of the disturbance when the system is discretized with a sample time of 2 minutes.
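A minimal open-loop simulation of the process response to a unit fuel step can be sketched as two cascaded first-order lags plus a transport delay. It assumes the model G_p(s) = 20 e^{-50s}/((40s+1)(4s+1)) discussed above; the 50 min delay is a reconstruction consistent with the 26-sample delay identified in Section 3.1 and should be treated as such.

```python
def simulate_step(t_end=500.0, dt=0.1, delay=50.0):
    """Euler simulation of 20/((40s+1)(4s+1)) with an input delay,
    for a unit step in u at t = 0. Time unit: minutes."""
    n = int(t_end / dt)
    x1 = y = 0.0
    out = []
    for _ in range(n):
        u = 1.0                               # unit step input
        x1 += dt * (20.0 * u - x1) / 40.0     # first lag, static gain 20
        y += dt * (x1 - y) / 4.0              # second lag
        out.append(y)
    shift = int(delay / dt)                   # apply the transport delay
    return [0.0] * shift + out[: n - shift]
```

The response stays at zero for the first 50 minutes and then settles towards the static gain of 20, illustrating why the delay dominates the achievable closed-loop speed here.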
Figure 3. The Gas-Oil Furnace process.
3.1. System Identification

In order to implement an MPC for the furnace temperature, a discrete time linear model of the dynamics between the input and the process output is required. This dynamic relation is estimated as an ARX model based on data generated from the true plant model. The structure of the noise model used in the MPC is exploited in the system identification: the model is identified from a data set which is the set of plant data, {y, u}, filtered through the inverse of the noise model. A PRBS signal with 360 samples, corresponding to a 12 h experiment, has been designed as the probing signal. In order to avoid too rapid changes the signal has been designed with banded frequency content: it exhibits changes between its extreme values every 10 samples or slower. The most promising model among a large range of low order ARX models with a fixed time delay is found. Models with delays ranging from 24 to 27 samples have been investigated. The model parameters, with the 99% confidence interval for the most promising model, are reported in Table 1. This model performs well when comparing estimated and observed step responses.

Table 1. Estimated ARX model parameters of the Gas-Oil Furnace process, with 99% confidence limits. The delay was estimated to be 26 samples. The sample time is 2 minutes.

A(q^-1):  a1 = -0.5683 (±0.1547),  a2 = -0.3782 (±0.1555)
B(q^-1):  b0 = 0.6892 (±0.2485),  b1 = 0.4783 (±0.2819)
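The band-limited probing signal described above (level changes every 10 samples or slower) can be sketched as follows; the hold-time distribution and the seed are arbitrary illustrative choices, not the actual experiment design.

```python
import random

def banded_prbs(n_samples, min_hold=10, seed=1):
    """Binary probing signal in {-1, +1} whose level is held for at
    least min_hold samples before each switch."""
    rng = random.Random(seed)
    signal, level = [], 1.0
    while len(signal) < n_samples:
        hold = min_hold + rng.randrange(0, min_hold)  # hold 10..19 samples
        signal.extend([level] * hold)
        level = -level
    return signal[:n_samples]
```

Holding each level for several samples concentrates the excitation at low frequencies, which suits the slow furnace dynamics.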
3.2. Closed loop performance

The identified model is used in an ARX-MPC implementation with a prediction and control horizon of 60 samples. The actuator is constrained between ±1 and the control move is constrained to ±0.5. For the soft output constraint, a band of 5 degrees around the reference temperature is chosen. The closed loop system responses to set point changes at 3.5 and 7 h and to an unmeasured step disturbance at 1 h are shown in Fig. 4.
Figure 4. Responses to a set point change and an unmeasured step disturbance with nominal (dotted line) and soft (full line) ARX-MPC. The dashed lines indicate constraints.
Satisfactory performance is found using a tuning with {Q = 10^-3, S_u = 10^3, S_η = 1} for the soft MPC, which corresponds to {Q = 1, S_u = 10^3, S_η = 0} for the nominal MPC. It is seen that for both implementations the closed loop system tracks the set points and rejects an unmeasured disturbance satisfactorily despite the long time delay in the system. This long delay is the reason for the very high value of S_u, which detunes the controller. For lower values the response is faster but not sufficiently damped, while a higher value
gives too slow a response. In general the nominal MPC keeps the output closer to the reference at the price of a more aggressive input signal.
4. Conclusions

The almost standard control problem in the process industries, where processes with large delays and noise are exposed to infrequent step disturbances, is proposed to be solved using a selected set of systems engineering methods. The paper shows that combining an ARX model based MPC design with soft output constraints and a system-independently tuned filtered noise model provides a sound basis for a systematic method for this standard control problem. The proposed filtered ARX-MPC control strategy with soft constraints provides a systematic approach to obtaining offset-free performance and reducing sensitivity to noise. The proposed strategy is especially advantageous for processes with delays longer than the dominant time constants. Since the identification method has guaranteed convergence and the control design is robust towards noise and model uncertainties, the combined methodology is expected to be robust.
5. Acknowledgements

The first author gratefully acknowledges the Danish Council for Independent Research, Technology and Production Sciences (FTP) for funding through grant no. 274-08-0059.
References

1. J. Richalet, A. Rault, J. L. Testud, and J. Papon. 1978. Model predictive heuristic control: Application to industrial processes. Automatica, 14(5):413-428.
2. C. Cutler and B. Ramaker. 1980. Dynamic matrix control - A computer control algorithm. In Proceedings of the Joint Automatic Control Conference.
3. D. W. Clarke, C. Mohtadi, and P. S. Tuffs. 1987. Generalized predictive control - Part 1. The basic algorithm. Automatica, 23(2):137-148.
4. K. R. Muske and J. B. Rawlings. 1993. Model predictive control with linear models. AIChE Journal, 39(2):262-287.
5. D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. 2000. Constrained model predictive control: Stability and optimality. Automatica, 36(6):789-814.
6. J. K. Huusom, N. K. Poulsen, S. B. Jørgensen, and J. B. Jørgensen. 2010. Tuning of methods for offset free MPC based on ARX model representations. In Proceedings of the American Control Conference, pages 2355-2360.
7. J. K. Huusom, N. K. Poulsen, S. B. Jørgensen, and J. B. Jørgensen. Noise Modelling and MPC Tuning for Systems with Infrequent Step Disturbances. Submitted for the IFAC World Congress 2011.
8. G. Prasath and J. B. Jørgensen. 2009. Soft Constraints for Robust MPC of Uncertain Systems. In Proceedings of the International Symposium on Advanced Control of Chemical Processes.
9. D. E. Rivera, K. S. Jun, V. E. Sater and M. K. Shetty. 1996. Teaching process dynamics and control using an industrial-scale real-time computing environment. Computer Applications in Engineering Education, 4(3):191-205.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Control and dynamic optimization of a BTX dividing-wall column

Anton A. Kiss,a Rohit R. Rewagadb

a AkzoNobel – Research, Development and Innovation, Velperweg 76, 6824 BM, Arnhem, The Netherlands. E-mail: [email protected]
b University of Twente, Faculty of Science and Technology, Enschede, The Netherlands. E-mail: [email protected]

Abstract

This work presents simulation results of the control and dynamic optimization of a dividing-wall column (DWC) used for the separation of a benzene-toluene-xylene (BTX) ternary mixture. Rigorous simulations were carried out in Aspen Plus and Aspen Dynamics. Several conventional control structures based on PID control loops (DB/LSV, DV/LSB, LB/DSV, LV/DSB) were used as a control basis. These control structures were enhanced by adding an extra loop controlling the heavy component composition in the top of the prefractionator, using the liquid split as an additional manipulated variable, thus implicitly achieving minimization of the energy requirements. The results of the dynamic simulations show short settling times and low overshooting, especially for the DB/LSV and LB/DSV control structures.

Keywords: dynamic optimization, PID control, energy efficiency, dividing-wall column
1. Introduction

Industrial application of dividing-wall columns (DWC) for ternary separations is nowadays considered proven technology, with over 100 DWCs reported in operation worldwide (Dejanovic et al., 2010). Remarkably, DWC is not limited to ternary separations: it can also be used in extractive distillation (Bravo-Bravo et al., 2010), azeotropic separations and reactive distillation (Kiss et al., 2009). DWC is considered a major breakthrough in distillation, as it brings significant reductions in CapEx and OpEx – up to 30-40%. Remarkably, DWC is the only known large scale process intensification example where both capital and operating costs can be vastly reduced, with the additional benefit of reducing the required installation space by up to 40%. Basically, DWC is a practical implementation of the well-known Petlyuk configuration, which features a prefractionator and a main column with interconnected vapor and liquid streams (Figure 1, where A is the lightest, B the mid boiler and C the heaviest component). In spite of being a technology already implemented at industrial scale, the dynamic control and optimization of DWC has been explored in only a few papers (Diggelen et al., 2010; Ling and Luyben, 2009, 2010). Compared to a conventional separation sequence, the control of DWC is more difficult due to the increased interaction among the controlled and manipulated variables. This paper proposes several multi-loop PID control structures (DB/LSV, DV/LSB, LB/DSV, LV/DSB) that keep the product purities under control while implicitly minimizing the energy requirements. This is achieved by manipulating the liquid split (rL) in order to control the composition of the heaviest component (C) in the top of the prefractionator side of the DWC.
Figure 1. Schematics of Petlyuk configuration (left) and dividing-wall column (right).
2. Problem statement

The integration of two columns into one shell leads to more interactions among the controlled and manipulated variables, and ultimately affects the controllability of the system. Although much of the literature focuses on the control of binary distillation columns, there are only a few studies on the controllability and dynamic optimization of DWC (Halvorsen et al., 1997; Adrian, 2004; Ling, 2009, 2010; Diggelen et al., 2010). The problem is that different DWC separation systems were used, hence no fair comparison of controllers is possible. To solve this problem, we explore the DWC control issues on one system (BTX) and compare various multi-loop PID control strategies enhanced with implicit dynamic optimization – minimization of the energy requirements.
3. Steady-state and dynamic models

Aspen Plus and Aspen Dynamics were used as powerful CAPE tools in order to build the rigorous steady-state and dynamic simulations. Figure 2 (left) illustrates the schematics of the modeled DWC, consisting of 6 sections of 8 stages each. The feed stream consisting of benzene-toluene-xylene (noted as ABC for convenience) is fed into the prefractionator side, between sections 1 and 2. Benzene is obtained as top distillate, xylene as bottom product, and toluene is withdrawn as side stream of the main column (between sections 4 and 5). The ternary diagram showing the composition profile along the column is illustrated in Figure 2 (right). In this work, the steady-state purity of all product streams is considered to be 97% in order to allow comparison to previous work. The converged Aspen Plus simulation was exported to Aspen Dynamics, where several PID loops within a multi-loop framework were applied. Note that PID controllers remain the most used controllers in the chemical industry, for several practical reasons:
• Simplicity of the control structure.
• Robustness with respect to model uncertainties and disturbances.
• Quite easy manual stabilization of the process when an actuator or sensor fails.
In the case of a DWC, two loops are needed to stabilize the column and another three to maintain the set points specifying the product purities. From a practical viewpoint, there are only a few configurations that make sense. The level of the reflux drum and the reboiler can be controlled by the variables L (liquid reflux), D (distillate), V (vapor boil-up) or B (bottoms), respectively. Consequently, there are four inventory control options to stabilize the column and to control the level in the reflux tank and the level in the reboiler, namely the combinations: D/B, L/V, L/B and V/D (Diggelen et al., 2010).
Figure 2. Schematics of the simulated DWC: 6 sections of 8 stages each (left). Ternary diagram showing the composition profile along the dividing-wall column (right).

Figure 3 shows the multi-loop PID control structures considered in this work: DB/LSV, DV/LSB, LB/DSV, and LV/DSB. The part for the control of product purities is often called regulatory control. One actuator is left (rL) that can be used for optimization purposes such as minimizing the energy requirements. Note that the control loops were tuned by the direct synthesis method (Luyben and Luyben, 1997). All these control structures are based on PID loops within a multi-loop framework, with an additional optimization loop that manipulates the liquid split in order to control the heavy component composition in the top of the prefractionator, implicitly achieving minimization of the energy requirements. Ling and Luyben (2009) have already shown that implicit optimization of the energy usage is achieved by controlling the heavy impurity at the top of the prefractionator. Note that any heavy component (C) going out the top of the wall will also appear in the liquid flowing down the main column, strongly affecting the purity of the sidestream (S). Since the sidestream is collected as a liquid product, small amounts of light impurity in the vapour phase will not significantly affect its composition. However, even tiny amounts of heavy impurity in the liquid phase will greatly affect the composition of the side stream.
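The direct synthesis tuning mentioned above can be sketched for a first order plus dead time loop model K e^{-theta*s}/(tau*s + 1) with a desired closed-loop time constant lam. The rule below is the standard direct-synthesis PI result for that model class; the numbers in the test are placeholders rather than the paper's identified loop dynamics.

```python
def direct_synthesis_pi(K, tau, theta, lam):
    """Direct-synthesis PI settings for K*exp(-theta*s)/(tau*s + 1),
    desired closed-loop time constant lam (same time units as tau)."""
    Kc = tau / (K * (lam + theta))   # controller gain
    tau_I = tau                      # integral time cancels the process lag
    return Kc, tau_I
```

Choosing a larger lam detunes the loop, which is how interaction between the five DWC loops can be traded against speed.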
Figure 3. Control structures based on PID loops within a multi-loop framework (four panels: DB/LSV, DV/LSB, LB/DSV and LV/DSB).
4. Results and discussion

Sensitivity analysis was used to determine the optimal parameters corresponding to the minimum energy requirements. The diagrams shown in Figure 4 illustrate the optimal liquid split ratio (rL) – as well as the heavy component mole fraction in the top of the prefractionator (YC,PF1) – corresponding to the minimum reboiler duty (Qreb).
Figure 4. Reboiler duty vs. liquid split ratio (left) and mole fraction of the heavy component on the first stage of the prefractionator (right), at the base case and ±10% F.
For the dynamic simulations performed in this study, the purity set points (SP) are 97% for all product specifications, while persistent disturbances of +10% in the feed flow rate (F) and +10% in the feed composition (xA) were used for the dynamic scenarios. Although the reported disturbances are not exerted at the same time, no serious problems – such as instability or lack of capability to reach the set points – were observed in the case of simultaneous disturbances. Due to space limitations, we present here the dynamic responses only for the best two control structures: DB/LSV and LB/DSV.
Figure 5. Dynamic response of DB/LSV control structure, at a persistent disturbance of +10% in the feed flow rate (left) and +10% xA in the feed composition (right).
Figure 6. Dynamic response of LB/DSV control structure, at a persistent disturbance of +10% in the feed flow rate (left) and +10% xA in the feed composition (right).
The mole fractions of component A in the top distillate (xA), B in the side stream (xB) and C in the bottom product (xC) return to their set points (SP) within reasonably short settling times. The dynamic response of the DB/LSV control structure is shown in Figure 5, characterized by low overshooting and short settling times. Figure 6 illustrates the case of the LB/DSV control structure, which shows similar performance. The overall results of the dynamic simulations demonstrate that these control structures cope well with persistent disturbances in the feed flowrate and in the feed composition. Moreover, the DV/LSB control structure has a dynamic response similar to DB/LSV, while the LV/DSB control structure is similar to LB/DSV. However, the LV/DSB control structure leads to oscillations and longer settling times – which is in line with previous reports (van Diggelen, 2010). Basically, using the reboiler duty – instead of the bottoms flowrate – to control the liquid level, and the reflux to control the level in the reflux drum (when L>>D), leads to oscillations in the dynamic response.
5. Conclusions

The DWC control structures proposed in this paper – based on PID controllers in a multi-loop framework – are able to simultaneously control the product compositions and minimize the energy requirements in a very practical way. The dynamic optimization is based on a simple strategy, namely controlling the heavy component composition at the top of the prefractionator side of the DWC by manipulating the liquid split ratio. Remarkably, this control condition is implicitly sufficient: the steady-state relationships show that maintaining or minimizing this composition leads to energy requirements that are near or at the minimum values as the feed composition changes. The results of the dynamic simulations illustrate the feasibility of the control structures. The DB/LSV and LB/DSV control structures are the best in terms of low overshooting and short settling times. Based on the successful application to other relevant mixtures and the excellent performance, similar to MPC (Kiss and Bildea, 2011), we consider that these control structures are well applicable to other ternary separations in DWC.
Acknowledgements We thank Costin S. Bildea (‘Politehnica’ University of Bucharest, RO), Zarco Olujic (TU Delft, NL), Igor Dejanovic (University of Zagreb, HR), Ivar J. Halvorsen (SINTEF, NO) and Sigurd Skogestad (Norwegian University of Science and Technology) for the very helpful discussions. The financial support given by AkzoNobel to Rohit Rewagad (University of Twente, NL) during his MSc internship is also gratefully acknowledged.
References

1. R. Adrian, H. Schoenmakers, M. Boll, 2004, Chem. Eng. & Proc., 43, 347-355.
2. C. Bravo-Bravo, J. G. Segovia-Hernandez, C. Gutierrez-Antonio, A. L. Duran, A. Bonilla-Petriciolet, A. Briones-Ramirez, 2010, Ind. Eng. Chem. Res., 49, 3672-3688.
3. I. Dejanovic, Lj. Matijasevic, Z. Olujic, 2010, Chem. Eng. Proc., 49, 559-580.
4. I. J. Halvorsen, S. Skogestad, 1997, Comput. Chem. Eng., 21, 249-254.
5. R. C. van Diggelen, A. A. Kiss, A. W. Heemink, 2010, Ind. Eng. Chem. Res., 49, 288-307.
6. A. A. Kiss, J. J. Pragt, C. J. G. van Strien, 2009, Chem. Eng. Comm., 196, 1366-1374.
7. A. A. Kiss, C. S. Bildea, 2011, Chem. Eng. Proc., in press, DOI: 10.1016/j.cep.2011.01.011.
8. H. Ling, W. L. Luyben, 2009, Ind. Eng. Chem. Res., 48, 6034-6049.
9. H. Ling, W. L. Luyben, 2010, Ind. Eng. Chem. Res., 49, 189-203.
10. W. L. Luyben, M. L. Luyben, 1997, Essentials of Process Control. New York: McGraw-Hill.
Process Dynamic Optimization Using ROMeo

Flavio Manenti,a,1 Guido Buzzi-Ferraris,a Sauro Pierucci,a Maurizio Rovaglio,b Harpreet Gulatib

a Politecnico di Milano, Dipartimento di Chimica, Materiali e Ingegneria Chimica “Giulio Natta”, Piazza Leonardo da Vinci 32, 20133 Milano, ITALY
b Invensys Operations Management, 26561 Rancho Parkway South, Lake Forest, 92630, California, USA
1 Corresponding author. Phone: +39 02 2399 3273; E-mail: [email protected]

Abstract

The present research activity is aimed at demonstrating the feasibility of dynamic real-time optimization (D-RTO) on the industrial scale. Some well-established and field-proven tools, such as ROMeo™ for real-time optimization (RTO) and DynSim™ for dynamic simulation, are combined with high-performance solvers for differential systems (BzzMath library) and specific methods (multiple shooting) to obtain a fully integrated solution for D-RTO. A steam-cracking furnace is selected as validation case: the SPYRO®-based dynamic simulation is developed using FORTRAN, C++ and DynSim™, and it is integrated in ROMeo™ to perform the D-RTO. A quantitative comparison between the traditional RTO and the D-RTO is also provided.

Keywords: Dynamic Optimization; ROMeo; DynSim; BzzMath; SPYRO.
1. Introduction

Process dynamic optimization is a challenging issue for many research groups of the computer-aided process engineering community (Kadam et al., 2002; Tosukhowong et al., 2004; Lang and Biegler, 2007; Manenti and Rovaglio, 2008; Dones et al., 2010), with the need to find solutions that are at the same time efficient and robust, and to ensure their on-line feasibility for the large-scale systems typical of the process industry. In addition, no well-established, field-proven solutions are nowadays able to overcome the traditionally strong inertia of the process industries in implementing novel control and optimization methodologies, despite their relevant effectiveness and economic benefits. From this perspective, it is not surprising that process dynamic optimization is still perceived as an academic concept rather than an industrial one, and it seems to be far from massive application in the field. The background above summarizes the main reasons pushing us to exploit the best architecture and the potential evolution of an existing, well-established, reliable and widespread package like ROMeo™. This real-time optimizer is a commercial tool, field-proven in many industrial applications such as oil refineries, gas plants and petrochemicals. The idea of starting from ROMeo™ relies on the concept that assembling and evolving commercial, reliable tools will create an easier and more suitable transition to dynamic real-time optimization (D-RTO) applications in the process industries.
2. Essentials of dynamic real-time optimization (D-RTO)

The D-RTO is similar in its mathematical formulation and time-scale to nonlinear model predictive control (NMPC), another optimization level of the so-called process
control hierarchy (Busch et al., 2007). Both are based on the moving horizon methodology (Rawlings, 2000) and lead to multidimensional, constrained, nonlinear programming (NLP) problems based on convolution models, often requiring specific optimizers and differential solvers (Manenti et al., 2009; Buzzi-Ferraris and Manenti, 2010a). Differences between D-RTO and NMPC problems and the corresponding solution strategies are summarized in many papers (Biegler and Grossmann, 2004). High-performance solvers and the use of parallel computing are also explained elsewhere (Manenti et al., 2009; Buzzi-Ferraris and Manenti, 2010a). The multiple shooting technique, belonging to the family of simultaneous methods, is adopted in the present research activity.
3. Software integration

The kernel of the present research activity is to combine three worlds to achieve an integrated and reliable tool for D-RTO:
• DynSim™ (Invensys): a powerful dynamic simulator for a wide set of processes.
• ROMeo™ (Invensys): a package for RTO. ROMeo provides a complex and well-established architecture for process optimization.
• BzzMath library (Politecnico di Milano): a comprehensive numerical library to significantly speed up calculations, especially to integrate large-scale differential-algebraic systems (Buzzi-Ferraris, 2010).
To these it is necessary to add a fourth tool, in order to set up the selected study, check the industrial feasibility of the D-RTO and validate the newborn integrated tool:
• SPYRO® (Pyrotec-Technip): a well-established tool to simulate the coil of the radiant section of the steam cracking furnaces of olefins plants.
Figure 1. Integration path to an effective solution for industrial D-RTO.
This is possible by exploiting the features of object-oriented programming and MS Visual C++. Actually, as qualitatively reported in Figure 1, the differential and differential-algebraic solvers of the BzzMath library can be fully integrated and synchronized in DynSim™ by replacing the default solvers, so as to speed up computations and to ensure the online feasibility of D-RTO. Next, it is possible to develop complex dynamic models in the DynSim™ environment and solve them using BzzMath solvers with superior performance. At last, rather than using the traditional steady-state models implemented in ROMeo, a drag & drop technology has been developed to move the dynamic models developed in DynSim™ (together with the BzzMath solvers) into ROMeo™ and to use them as convolution models of a multiple shooting structure to solve the D-RTO problem.
4. Validation case: steam cracking furnace

There are different methodologies to crack heavy hydrocarbons to obtain light-ends (e.g. fluid catalytic cracking, thermal cracking, hydrocracking). The validation case we selected focuses on steam cracking, which produces ethylene and, in general, olefins from a feed of saturated hydrocarbons diluted with steam and then heated in a furnace. Before entering the radiant region, the feed flowrate is preheated in a series of heat exchangers placed in the convection region (see the qualitative scheme of Figure 2). In the thermal furnace, the temperature is considerably high (>800°C) and the residence time is in the order of some milliseconds (Dente et al., 1992). Here, one of the most important parameters for controlling and monitoring the process performance is the coil outlet temperature (COT), which is measured before exiting the thermal furnace and is strictly related to the wall temperature. Therefore, the hot gas is quickly quenched in the transfer line exchanger (TLE) in order to stop the reaction and to produce high-pressure steam (about 100 bar) as well. Assuming a fixed residence time, the outlet flowrate composition depends on the feed composition, the hydrocarbon to steam ratio, and the COT. The outlet flowrate is sent to the main fractionator and to the separation section (Pierucci et al., 1996). Specifically, since the main goal of this paper is to check the industrial feasibility of the D-RTO, a reduced portion of the furnace and of the control scheme is considered for the sake of simplicity. In addition, the reactor efficiency degradation is not considered in this work. Nonetheless, it is worth remarking that a steam cracking furnace can usually run for only a few months at a time between decoking operations. The selected control system related to the radiant section is reported in Figure 2.
It consists of a direct-acting temperature controller in which the wall temperature of the radiant section, and hence the COT, is the controlled variable and the fuel fed to the burners is the manipulated variable: a higher fuel flowrate corresponds to a higher COT. A ratio controller manages the air flowrate blown into the radiant section in order to maintain the desired stoichiometric ratio: the higher the fuel flowrate, the higher the air flowrate. The optimal setpoint is assigned by the D-RTO.
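A minimal sketch of this scheme is given below: a direct-acting PI controller drives the COT by manipulating the fuel flowrate, while a ratio controller slaves the air flow to the fuel flow. All gains, the stoichiometric ratio, and the first-order furnace response are illustrative assumptions, not plant data.

```python
class PIController:
    """Direct-acting PI controller with output clamping."""
    def __init__(self, kp, ki, out_min, out_max):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, pv, dt):
        error = setpoint - pv              # direct action: more fuel -> higher COT
        self.integral += self.ki * error * dt
        out = self.kp * error + self.integral
        return min(max(out, self.out_min), self.out_max)

tc = PIController(kp=15.0, ki=0.5, out_min=2000.0, out_max=7000.0)
tc.integral = 6000.0                       # bumpless start from the current fuel flow
STOICH_RATIO = 17.2                        # assumed air-to-fuel mass ratio

cot, cot_sp, dt = 820.0, 830.0, 1.0        # degC; setpoint assigned by the D-RTO
for _ in range(600):
    fuel = tc.step(cot_sp, cot, dt)        # fuel flowrate, kg/h
    air = STOICH_RATIO * fuel              # ratio controller on the air flow
    # toy first-order furnace response: COT relaxes toward a fuel-driven value
    cot += dt / 120.0 * (700.0 + 0.02 * fuel - cot)
```

The clamp on the controller output mirrors the physical upper bound on the fuel flowrate discussed in the numerical results below.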
Figure 2. Half-plan slice of a thermal cracking furnace (left-hand side); radiant section and related control scheme considered for the D-RTO, where PV stands for process variable, OUT for controller output, and SP for setpoint (right-hand side).
Process Dynamic Optimization Using ROMeo
5. Numerical results An all-in-one tool for SPYRO-based smart dynamic simulation and optimization of olefin plants was developed to check the D-RTO feasibility on a steam-cracking furnace; it required a complex programming activity. A mixed-language approach (Buzzi-Ferraris and Manenti, 2010b) was adopted to embed the SPYRO® coil model, completely written in FORTRAN, into the C++ dynamic model developed to simulate the radiant section of the thermal furnace. The high-performance differential-algebraic solver from the BzzMath library was adopted to obtain an efficient and stable (in the following, "smart") solution of the SPYRO-based dynamic simulation. The smart solution was implemented in DynSim™ to take advantage of the user-friendly interface and of the component, property, and thermodynamic databases of that commercial dynamic simulation suite. Finally, the smart dynamic simulation was fully integrated and synchronized into the ROMeo™ environment by means of the multiple shooting method. This step was possible thanks to the structure of the OPERA® solver currently included in ROMeo™ and to the peculiarities of the BzzDae solver from the BzzMath library. A short selection of numerical results is reported in Figure 3. A severity change is imposed by a higher propylene price (the current worldwide market situation). The traditional RTO approach shows a marked instability in driving the furnace from the initial condition to the optimized one, and its convergence towards the new optimal point is significantly slower than along the D-RTO optimal path. Moreover, the variations in the fuel flowrate supplied to the furnace during the RTO severity change are so large that they exceed the physical upper bound of 7000 kg/h. Consequently, the RTO must unavoidably perform a two-step severity change (Figures 3e-3f), significantly prolonging the process transient.
Figure 3. CH4/C3H6 (a) and C3H6/C2H4 (b) severity changes; convergence comparison between the RTO and the D-RTO for CH4/C3H6 (c) and C3H6/C2H4 (d) severity changes; comparison between the fuel flowrate supplied using the RTO and the D-RTO (e, f).
6. Conclusions The present activity showed the industrial feasibility of dynamic real-time optimization (D-RTO). The main benefits of D-RTO over the traditional RTO have been discussed and quantified, showing, for example, that D-RTO practically halves the off-spec production during process transients. The computational efforts required to solve the RTO and D-RTO problems are practically comparable, which makes D-RTO feasible even on the industrial scale. Moreover, considering the traditional inertia of process industries and oil refineries, no visible changes were introduced for ROMeo™ users, so as to preserve the current ROMeo™ interface and to provide an easy-to-use tool for the fast industrial application of D-RTO.
7. Disclaimer ROMEO and DynSim are trademarks of Invensys Operations Management. SPYRO is a registered product of Technip-Pyrotec, originally developed by Politecnico di Milano.
References
Biegler, L.T., & Grossmann, I.E., 2004, Retrospective on optimization. Computers & Chemical Engineering 28(8), 1169-1192.
Busch, J., Oldenburg, J., Santos, M., Cruse, A., & Marquardt, W., 2007, Dynamic Predictive Scheduling of Operational Strategies for Continuous Processes Using Mixed-logic Dynamic Optimization. Computers & Chemical Engineering 31, 574-587.
Buzzi-Ferraris, G., & Manenti, F., 2010a, A Combination of Parallel Computing and Object-Oriented Programming to Improve Optimizer Robustness and Efficiency. Computer Aided Chemical Engineering 28, 337-342.
Buzzi-Ferraris, G., 2010, BzzMath: Numerical library in C++. Politecnico di Milano, http://chem.polimi.it/homes/gbuzzi.
Buzzi-Ferraris, G., & Manenti, F., 2010b, Fundamentals and Linear Algebra for the Chemical Engineer: Solving Numerical Problems. Wiley-VCH, Weinheim, Germany.
Dente, M., Pierucci, S., Ranzi, E., & Bussani, G., 1992, New Improvements in Modeling Kinetic Schemes for Hydrocarbon Pyrolysis Reactors. Chemical Engineering Science 47, 2629-2634.
Dones, I., Manenti, F., Preisig, H.A., & Buzzi-Ferraris, G., 2010, Nonlinear Model Predictive Control: a Self-Adaptive Approach. Industrial & Engineering Chemistry Research 49(10), 4782-4791.
Kadam, J.V., Schlegel, M., Marquardt, W., Tousain, R.L., van Hessem, D.H., van der Berg, J., et al., 2002, A Two-level Strategy of Integrated Dynamic Optimization and Control of Industrial Processes - a Case Study. ESCAPE-12, The Hague, The Netherlands, 511-516.
Lang, Y.D., & Biegler, L.T., 2007, A Software Environment for Simultaneous Dynamic Optimization. Computers & Chemical Engineering 31, 931-942.
Manenti, F., & Rovaglio, M., 2008, Integrated multilevel optimization in large-scale poly(ethylene terephthalate) plants. Industrial & Engineering Chemistry Research 47(1), 92-104.
Manenti, F., Dones, I., Buzzi-Ferraris, G., & Preisig, H.A., 2009, Efficient Numerical Solver for Partially Structured Differential and Algebraic Equation Systems. Industrial & Engineering Chemistry Research 48(22), 9979-9984.
Pierucci, S., Brandani, P., Ranzi, E., & Sogaro, A., 1996, An industrial application of an on-line data reconciliation and optimization problem. Computers & Chemical Engineering 20, S1539-S1544.
Rawlings, J.B., 2000, Tutorial Overview of Model Predictive Control. IEEE Control Systems Magazine 20(3), 38-52.
Tosukhowong, T., Lee, J.M., Lee, J.H., & Lu, J., 2004, An Introduction to a Dynamic Plant-wide Optimization Strategy for an Integrated Plant. Computers & Chemical Engineering 29(1), 199-208.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Model based optimisation of a cyclic reactor for the production of hydrogen
Filip Logist, Joost Lauwers, Benoît Trigaux, Jan F. Van Impe
BioTeC & OPTEC - Chemical Engineering Dept., Katholieke Universiteit Leuven, W. de Croylaan 46, B-3001 Leuven, Belgium.
Abstract This paper studies the model-based optimisation of a cyclically operated tubular reactor, i.e. the Cyclic Water Gas Shift Reactor, for the production of hydrogen. The most important degrees of freedom are first identified through a sensitivity analysis and are afterwards optimised. The optimisation results show that an optimum exists: in general, short switching times and quasi-symmetric operation are preferred, and deviations from the symmetric operation regime give rise to a drastic decrease in productivity. Keywords: cyclic operation, hydrogen production, dynamic optimisation, cyclic water gas shift reactor.
1. Introduction Operating reactors in a periodic way often leads to enhanced performance and process intensification [1]. In the current study, a model-based optimisation is performed for a Cyclic Water Gas Shift Reactor (CWGSR). This type of reactor is based on the repeated reduction of a fixed bed using a mixture of hydrogen and carbon monoxide, and its subsequent oxidation with steam to produce pure hydrogen. The reactor has been identified as a promising alternative for upgrading hydrogen streams containing carbon oxides from, e.g., reforming processes to the high-purity hydrogen required for, e.g., fuel cells [2, 3]. However, the rigorous model-based optimisation of the design and operation of this reactor is challenging due to its distributed nature and its time-periodic operation, which give rise to a system of coupled nonlinear partial differential equations (PDEs) with time-periodic boundary conditions. A method-of-lines approach is employed to reformulate the set of PDEs into a large-scale system of differential and algebraic equations (DAEs). Before starting the optimisation, a preliminary sensitivity analysis has been performed in order to identify the most important degrees of freedom, which are subsequently optimised. For the optimisation, a sequential strategy is employed in order to allow the use of standard numerical tools and to avoid the implementation of tailored schemes.
2. Cyclic Water Gas Shift Reactor
2.1. Operation The principle of the cyclic water gas shift reactor (also called the sponge iron process) is not new (see [2, 3] and the references therein). It involves a redox reaction, in which the carbon within the carbon monoxide is oxidised and the hydrogen within the water is reduced. Due to the cyclic operation and the presence of iron oxide inside the bed, which captures and releases oxygen, the two parts of the reaction are separated in time (see Fig. 1). When a stream containing carbon monoxide and hydrogen is fed into the reactor, the iron oxide is reduced, producing carbon dioxide and water:

CO + 1/x FeOx → CO2 + 1/x Fe
H2 + 1/x FeOx → H2O + 1/x Fe

When the bed is sufficiently reduced, the feed is switched to steam. The iron is then oxidised back to iron oxide and pure hydrogen, free of carbon monoxide, is produced:

H2O + 1/x Fe → H2 + 1/x FeOx

Fig 1. Scheme of the CWGSR [3].

2.2. Mathematical model As mathematical model, the conceptual 1D pseudo-homogeneous model from [3] is adopted. The model variables are the components CO, CO2, H2 and H2O as well as the degree of reduction of the bed. The reactions considered are the following:

r1: H2O + Fe → H2 + FeOx        r2: H2 + FeOx → H2O + Fe
r3: CO2 + Fe → CO + FeOx        r4: CO + FeOx → CO2 + Fe

This yields the following balances for the individual components, the total mass, the degree of reduction and the energy:

∂xi/∂τ + ω ∂xi/∂ζ = Σj νi,j Daj Rj   (1)
∂ω/∂ζ − (1/ϑ) ∂ϑ/∂τ − (ω/ϑ) ∂ϑ/∂ζ = 0   (2)
Θ ∂α/∂τ = Σj νFe,j Daj Rj   (3)
Ψ ∂ϑ/∂τ + ω ∂ϑ/∂ζ = (1/Pe) ∂²ϑ/∂ζ² + Σj Δϑad,j Daj Rj − St (ϑ − ϑe)   (4)

with τ and ζ the dimensionless time and length, xi the dimensionless component gas concentrations, ω the dimensionless gas velocity, ϑ the dimensionless temperature, and α the degree of reduction. νi,j is the coefficient of component i in reaction j, Rj is the rate of this reaction, and Δϑad,j represents the heat of reaction. Da, Pe and St indicate the Damköhler, Peclet and Stanton numbers, respectively.
The substantial fixed-bed capacity Θ and the thermal capacity Ψ are defined as the ratio of the oxygen capacity of the fixed bed to the gas hold-up, and the ratio of the heat capacity of the fixed bed to the heat capacity of the gas phase, respectively. (For the exact parameter values, see [3].)
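The method-of-lines reformulation mentioned in the introduction can be illustrated on a convective balance of the form of Eq. (1). In the sketch below the spatial derivative is approximated by first-order upwind differences and the resulting ODE system is handed to a stiff integrator; the rate law and all parameter values are illustrative placeholders, not the settings of [3].

```python
import numpy as np
from scipy.integrate import solve_ivp

N, OMEGA, DA, NU = 50, 1.0, 0.5, -1.0       # illustrative parameter values
zeta = np.linspace(0.0, 1.0, N)
dz = zeta[1] - zeta[0]

def rate(x):
    return x                                 # toy first-order consumption rate

def mol_rhs(tau, x):
    # first-order upwind approximation of the convective term
    dxdz = np.empty_like(x)
    dxdz[0] = (x[0] - 1.0) / dz              # inlet boundary: x = 1 at zeta = 0
    dxdz[1:] = (x[1:] - x[:-1]) / dz
    return -OMEGA * dxdz + NU * DA * rate(x)

sol = solve_ivp(mol_rhs, (0.0, 5.0), np.zeros(N), method="BDF")
profile = sol.y[:, -1]                       # quasi-steady axial profile
```

For this linear test case the steady axial profile approaches exp(-DA·ζ/OMEGA), which gives a quick consistency check of the discretisation.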
subject to:

0 = f(ẋ(t), x(t), u(t), t),  t ∈ [0, tf]   (9)
0 = bc(x(0))   (10)
0 ≥ cp(x(t), u(t), t)   (11)
0 ≥ ct(x(tf), u(tf), tf)

Here, x are the states, while u denote the controls. The vector f represents the dynamic system equations (on the interval t ∈ [0, tf]) with initial conditions bc. The vectors cp and ct indicate path and terminal inequality constraints, respectively. Each individual cost function can consist of Mayer and Lagrange terms:

Ji = hi(x(tf), tf) + ∫0^tf gi(x(t), u(t), t) dt   (12)
In multi-objective optimisation, typically no single optimal solution exists, but a set of Pareto optimal solutions. Broadly speaking, a solution is called Pareto optimal if there exists no other feasible solution that improves at least one objective function without worsening another. (For a formal definition, see, e.g., [4,5].)
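For a finite set of candidate solutions, the definition above translates into a simple dominance filter (an illustrative helper, not part of any toolkit mentioned here):

```python
# Keep only the Pareto-optimal points of a finite candidate set,
# assuming both objectives are to be minimised.
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

candidates = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = pareto_front(candidates)   # (3.0, 4.0) and (5.0, 5.0) are dominated
```

A point is dropped exactly when some other candidate is at least as good in every objective and strictly better in at least one, matching the definition in the text.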
3.2. Numerical solution Scalarisation methods convert the multi-objective optimal control problem into a series of single-objective optimal control problems that are functions of scalarisation parameters or weights. This series is solved by direct optimal control methods such as single and multiple shooting. To tackle the multi-objective aspect, several scalarisation techniques (i.e., Weighted Sum (WS), Normal Boundary Intersection (NBI) and Normalised Normal Constraint (NNC)) have been implemented in ACADO Multi-Objective [5], the multi-objective extension of the freeware tool ACADO [7] (www.acadotoolkit.org). This ensures an efficient solution of the multi-objective optimal control problems. In addition, ACADO can also be used for estimating parameters in dynamic processes.
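The simplest of these scalarisations, the weighted sum, can be sketched on a static toy problem; ACADO applies the same idea to optimal control problems, and the two objectives below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2    # illustrative objective 1

def f2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2    # illustrative objective 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):         # sweep the scalarisation weight
    res = minimize(lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x), [0.0, 0.0])
    pareto_points.append((f1(res.x), f2(res.x)))
```

The weighted sum typically yields an uneven spread of points and cannot reach non-convex parts of the front, which is why the NBI and NNC variants are preferred in ACADO Multi-Objective.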
4. Results To compute the Pareto sets, NBI with 41 points and multiple shooting with a 30-piece piecewise-constant control discretisation have been used. To integrate the ODE system of bioreactor equations, the sensitivity equations and the Fisher matrix elements, a Runge-Kutta 7/8 integrator is employed with an integration tolerance of 10^-6. The discretised optimal control problem is solved by an SQP method with a KKT tolerance of 10^-5. Despite the strong nonlinearities, the largely differing scales and the presence of singular arcs, the total CPU times were mostly under four minutes. Fig. 1 displays the corresponding Pareto set, in which a clear trade-off is visible. However, the trade-off plot also shows that the condition number can easily be decreased at the expense of almost no productivity loss. Nevertheless, when
Multi-objective optimisation approach to optimal experiment design in dynamic bioprocesses using ACADO toolkit
really pushing the condition number towards the lowest possible value, a large productivity loss is evidenced. Hence, a natural choice would be a point in the knee of the curve. According to [8], this is the point with the largest distance along the quasi-normal to the convex hull of individual minima (CHIM). Note that despite the largely different magnitudes, a nice spread on the Pareto curve is obtained, which is impossible with, e.g., the WS.
Fig 1. Pareto set for production vs. Mod E criterion.
Fig 2. Optimal states, controls and sensitivities for both extremes and the knee point of the curve.
Fig. 2 illustrates, for both extremes and the knee point, the corresponding optimal control, state and sensitivity evolutions. When focussing on productivity, all substrate is, as expected, fed at the beginning in order to achieve an as high as possible substrate concentration and to stimulate growth as much as possible. This yields a highly accurate estimation of μmax but little information on Ks, as the sensitivities and also the Fisher matrix elements are of largely different orders of magnitude. However, when the information content comes into play, not all substrate is fed at the beginning, but a singular feeding rate is observed. Clearly, the production of biomass at the end is lower, and the sensitivities and Fisher elements are smaller but of a similar order of magnitude, which also allows a more accurate estimate of Ks. As expected, the knee point exhibits intermediate behaviour. Finally, the optimal profiles for the individual objectives are tested at simulation level. To mimic biological variability and experimental uncertainty, uncorrelated Gaussian white noise has been added to all measurements. Fig. 3 depicts the contours of the corresponding SSE cost surfaces. As can be seen, when production is focussed on, a strong correlation between the parameters is present, as indicated by the elongated contours,
Fig 3. SSE contours: production (left), knee (middle) and information (right).
which do not even close in the current plot. Alternatively, when information content is focussed on, the contours become more circular and closed, indicating stronger de-correlation. Hence, to be able to estimate Ks too, not the intuitive solution for maximum biomass production is needed, but a more subtle feeding throughout the batch. The price to be paid, however, is a performance decrease. Nevertheless, the Pareto set allows the trade-offs involved to be assessed.
5. Conclusion The current paper illustrated the features of the toolkit ACADO Multi-Objective for studying the trade-offs in dynamic biochemical production processes between production objectives and the optimal design of experiments in view of parameter estimation. Based on recent deterministic multi-objective optimal control approaches, the set of trade-off or Pareto-optimal solutions was produced efficiently and accurately. A clear and sharp trade-off was observed, which was also reflected when testing the different solutions at simulation level and estimating the parameters. Acknowledgements Work supported in part by Projects OT/09/025/TBA, OT/10/035, OPTEC (Center-of-Excellence Optimization in Engineering) PFV/10/002 and SCORES4CHEM KP/09/005 of the K.U. Leuven, and by the Belgian Program on Interuniversity Poles of Attraction, initiated by the Belgian Federal Science Policy Office. D. Telen has a Ph.D. grant of the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen). E. Van Derlinden is supported by grant PDKM/10/122 of the K.U. Leuven research fund. J.F. Van Impe holds the chair Safety Engineering sponsored by the Belgian chemistry and life sciences federation essenscia.
References
[1] J.R. Banga, K.J. Versyck and J.F. Van Impe. Computation of optimal identification experiments for nonlinear dynamic process models. Industrial & Engineering Chemistry Research, 41:2425-2430, 2002.
[2] G. Franceschini and S. Macchietto. Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science, 63:4846-4872, 2008.
[3] K. Versyck and J. Van Impe. Feed rate optimization for fed-batch bioreactors: from optimal process performance to optimal parameter estimation. Chemical Engineering Communications, 172:107-124, 1999.
[4] F. Logist, B. Houska, M. Diehl and J. Van Impe. Fast Pareto set generation for nonlinear optimal control problems with multiple objectives. Structural and Multidisciplinary Optimization, 42:591-603, 2010.
[5] F. Logist, P.M.M. Van Erdeghem and J.F. Van Impe. Efficient deterministic multiple objective optimal control of (bio)chemical processes. Chemical Engineering Science, 64:2527-2538, 2009.
[6] E. Walter and L. Pronzato. Identification of Parametric Models from Experimental Data. Springer, 1997.
[7] B. Houska, H. Ferreau and M. Diehl. ACADO Toolkit - An Open-Source Framework for Automatic Control and Dynamic Optimization. Optimal Control Applications & Methods (in press, doi:10.1002/oca.939).
[8] I. Das. On characterizing the "knee" of the Pareto curve based on Normal-Boundary Intersection. Structural Optimization, 18:107-115, 1999.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A disturbance estimation approach for online model-based redesign of experiments in the presence of systematic errors
F. Galvanin¹,*, M. Barolo¹, G. Pannocchia² and F. Bezzo¹
¹ CAPE-Lab, Dipartimento di Principi e Impianti di Ingegneria Chimica, Università di Padova, via Marzolo 9, 35131 Padova, Italy
² DICCISM - Dipartimento di Ingegneria Chimica, Università di Pisa, via Diotisalvi 2, 56122 Pisa, Italy
* E-mail: [email protected]
Abstract Online Model-Based Redesign of Experiment (OMBRE) strategies represent a valuable support to the development of dynamic deterministic models, allowing for the dynamic update of the experimental conditions so as to yield the most informative data for the parameter identification task. However, the effectiveness of OMBRE strategies may be severely affected by the presence of systematic modelling errors. In this paper, a disturbance estimation approach is exploited within an OMBRE framework (DE-OMBRE) in order to achieve a statistically satisfactory estimation of the model parameters, thus avoiding (or reducing) constraint violations even in the presence of systematic modelling errors. A case study illustrates the benefits of the new approach. Keywords: model-based experiment design, model updating, disturbance estimation
1. Introduction A wide class of physical systems can be described by dynamic deterministic models expressed as systems of differential and algebraic equations. Once a dynamic model structure is found adequate to represent a physical system, a set of identification experiments needs to be carried out to estimate the parameters of the model in the most precise and accurate way. Model-based design of experiments (MBDoE) techniques [1] represent a valuable tool for the rapid assessment and development of dynamic deterministic models, allowing for the maximisation of the information content of the experiments in order to support and improve the parameter identification task. Conventional MBDoE techniques for parameter identification usually involve a sequential procedure: 1) the design of the experiment (based on the current knowledge of the model structure and parameters); 2) the execution of the designed experiment, where new data are collected; 3) the estimation and statistical assessment of the model parameters. Iterating steps 1 to 3 generally provides a new information flux coming from the planned experiments, leading to a progressive reduction of the uncertainty region of the model parameters (as demonstrated in a wide range of applications [2]). However, each experiment design step is performed at the initial values of the model parameters, and uncertainty in these values, as well as inadequacy of the given model structure, can deeply affect the efficiency of the design procedure [3]. Recently, Online Model-Based Redesign of Experiment (OMBRE) strategies [4] have been proposed to exploit the information as soon as it is generated by the running experiment. In OMBRE, the manipulated input profiles of the running experiment are updated
performing one or more intermediate experiment designs (i.e., redesigns), each adopting the current value of the parameter set, i.e. the value of the model parameters estimated up to that moment. OMBRE mitigates the effect of parametric uncertainty on the design effectiveness, but the technique is still particularly sensitive to the presence of systematic modelling errors, which may affect the effectiveness of the entire identification procedure. Following an analogy with offset-free model predictive control strategies [5], a novel experiment design approach (DE-OMBRE) is presented in this paper, where a model updating policy including disturbance estimation (DE) is embedded within an OMBRE strategy. An augmented model, lumping the effect of systematic errors, is considered here to estimate both the states and the system outputs in a given time frame, updating the constraint conditions in a consistent way as soon as the effect of bias disturbances propagates through the system. The purpose is to achieve a statistically satisfactory estimation of the model parameters while avoiding (or reducing) constraint violations even in the presence of systematic errors. The benefits of the proposed strategy are illustrated and discussed through a simulated case study, where the effectiveness of the design is assessed by comparison with conventional MBDoE and OMBRE techniques.
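The redesign loop described above can be sketched as follows; the one-parameter "plant", the design rule, and the estimator below are deliberately simplistic stand-ins for the MBDoE machinery, introduced only to show the structure of the iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, theta_hat = 2.0, 1.0          # "plant" parameter vs. initial guess

def model(u, theta):
    return theta * u                      # toy single-parameter response

def design_input(theta):
    # crude "design": probe where the response (and its sensitivity) is large
    return 1.0 + theta

def estimate(us, ys):
    us, ys = np.asarray(us), np.asarray(ys)
    return float(us @ ys / (us @ us))     # least squares for y = theta * u

inputs, outputs = [], []
for redesign in range(4):                 # four sub-experiments
    u = design_input(theta_hat)           # redesign with the current estimate
    y = model(u, theta_true) + 0.01 * rng.standard_normal()
    inputs.append(u)
    outputs.append(y)
    theta_hat = estimate(inputs, outputs) # intermediate parameter estimation
```

Each pass re-estimates the parameter from all data collected so far and uses that estimate to design the next sub-experiment, which is the essence of the OMBRE scheme.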
2. The methodology Conventional MBDoE procedures aim at decreasing the model parameter uncertainty region predicted a priori by the model by acting on the nφ-dimensional experiment design vector φ and solving the following set of equations:

φopt = arg minφ ψ[Vθ(θ̂, φ)] = arg minφ ψ[Hθ⁻¹(θ̂, φ)]   (1)

subject to

f(ẋ, x, u, w, θ, t) = 0,  ŷ = h(x̂)   (2)
F x̂ − G(t) ≤ 0   (3)
H ŷ − D(t) ≤ 0   (4)
with the set of initial conditions x(0) = x0. In (1), Vθ and Hθ are the variance-covariance matrix of the model parameters and the dynamic information matrix, respectively; x(t) is the Nx-dimensional vector of time-dependent state variables, u(t) and w are the time-dependent and time-invariant manipulated inputs, θ is the Nθ-dimensional set of unknown model parameters to be estimated, and t is the time. The symbol ˆ is used to indicate the estimate of a variable (or of a set of variables). Among the constraint conditions (3-4), a distinction is made between constraints involving unmeasurable states (3) and estimated outputs (4). These are expressed through sets of (possibly time-varying) constraints G(t) and D(t) on the state variables, while F and H are two sets of selection functions that choose the variables actually being constrained. In its most general form, the design vector φ may contain an Ny-dimensional set of initial conditions of the measured variables (y0), the manipulated input variables (u and w), the duration of the single experiment (τ), and the Nsp-dimensional set of time instants tsp at which the output variables are sampled. The function ψ in (1) is an assigned measure function of the variance-covariance matrix of model parameters Vθ, and represents the chosen design criterion [1]. When an OMBRE approach is exploited [4], intermediate parameter estimations are carried out while the experiment is still running and, by exploiting the information
obtained, the experiment is partially redesigned before its termination. The experiment is thus divided into a number of sub-experiments over which the design variables are distributed. Each redesign is carried out by solving the optimisation problem (1-4) in the corresponding time interval.
2.1. Online model-based redesign of the experiment including disturbance estimation (DE-OMBRE) The presence of a systematic error between the model and the real system (bias) is not explicitly handled by OMBRE. Disturbance models have been proposed in model predictive control (MPC) [5] to ensure offset-free performance [6] when disturbances as well as plant-model mismatch are present. Let us consider an "augmented model" in the form

f(ẋ, x, u, w, θ, t) = 0,  ŷ = h(x̂) + d,  with ḋ = 0   (5)

where d is an Ny-dimensional set of lumped disturbances on the outputs. For the augmented model, the constraint equations (4) concerning the outputs take the form

H (h(x̂) + d) − D(t) ≤ 0   (6)

where each element of d at the sampling time k can be estimated through a two-step procedure:
1. prediction: simulation of the augmented model (5) with d = dk|k-1;
2. filtering: given the measurement yk, the prediction error is

ek = yk − h(x̂k) − dk|k-1   (7)

and the lumped disturbance dk|k can be evaluated as

dk|k = dk|k-1 + Ld ek   (8)

where Ld is a tuning parameter (based on the actual measurement confidence). In DE-OMBRE, these prediction and filtering steps are repeated within each redesign time interval until a suitable value of d is evaluated; this value is then used in the following redesign to update the model predictions.
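The prediction/filtering update (7)-(8) can be sketched directly: with Ld = 1 the lumped disturbance locks onto a constant output bias in a single step, while smaller Ld values filter noisy measurements more gently. The plant values below are illustrative; the bias echoes the 20 mg/dL systematic error used later in the case study.

```python
def update_disturbance(d_prev, y_meas, y_model, L_d=1.0):
    e = y_meas - (y_model + d_prev)     # prediction error, Eq. (7)
    return d_prev + L_d * e             # filtered disturbance, Eq. (8)

BIAS = 20.0                             # mg/dL systematic sensor error (assumed)
d = 0.0
for k in range(5):
    y_model = 100.0                     # model-predicted output (placeholder)
    y_meas = y_model + BIAS             # biased reading from the "plant"
    d = update_disturbance(d, y_meas, y_model, L_d=1.0)
```

Once d has converged, substituting it into constraint (6) shifts the predicted outputs by the estimated bias, which is how the constraint conditions are kept consistent with the real system.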
3. Case study The case study considered is a model of glucose homeostasis for the simulation of type 1 diabetes subjects in the form proposed by Lehmann and Deutsch [7], particularly suitable for the simulation of multiple daily injections and recently adopted in nonlinear model predictive control studies [8]:

dI/dt = s (t − t0)^(s−1) T50^s D / { VI [T50^s + (t − t0)^s]² } − ke I
dIa/dt = k1 I − k2 Ia
dG/dt = [kabs Ggut + GNHB(t) − Gout(t) − Gren(t)] / VG
dGgut/dt = Gempt(t) − kabs Ggut   (9)
In (9), I and Ia are the plasmatic and active insulin concentrations, respectively; G and Ggut are the plasmatic and gut glucose concentrations, respectively. Gren is the renal excretion, Gempt is a trapezoidal function managing the carbohydrate uptake, GNHB is the net hepatic glucose balance, and Gout represents the peripheral glucose utilization. The purpose of the study is to design a single-day test (starting at 0:00 AM) in order to identify the set of parameters θ = [k1 k2 ke]T in a statistically satisfactory way. The variables being optimised by design are: the insulin injections (the times t0 and the amounts D of the Nb = 4 fast-acting Lispro boluses) and the amounts of carbohydrates of the Nm = 4 meals (scheduled at 8:00 AM, 12:00 PM, 4:00 PM and 8:00 PM). The constraints in the form of (4) acting on this system are related to attaining normoglycaemia, and are the upper (D1 = 180 mg/dL) and lower (D2 = 60 mg/dL) thresholds on G, which is the only state variable being constrained and the only measured state variable (i.e. x1 = y = G). Three distinct design configurations have been compared: 1. STDE: conventional E-optimal design; 2. OMBRE: E-optimal redesign (the redesign is scheduled every Δtup = 6 h); 3. DE-OMBRE: E-optimal redesign including disturbance estimation (Δtup = 6 h). A single insulin injection is optimised during each sub-experiment performed in the redesign strategies. In the DE-OMBRE configuration, the tuning parameter Ld appearing in (8) is kept constant at 1 and d(0) = 0. The simulated glucose measurements are available with a constant relative deviation on the readings of 0.10 and a sampling time of Δt = 60 minutes. Additionally, a constant systematic error b is supposed to affect the readings (b = 20 mg/dL). The initial guess on the model parameters is θ0 = [1.000 1.000 1.000]T, while the true set of parameters defining the diabetic subject is θ = [0.025 1.250 5.400]T.
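A sketch of Eq. (9) as an ODE right-hand side is given below; every numerical value and the shapes of GNHB, Gout, Gren and Gempt are placeholder assumptions introduced for illustration, not the published Lehmann-Deutsch parameterisation.

```python
from scipy.integrate import solve_ivp

K1, K2, KE, KABS = 0.025, 1.25, 5.4, 1.0   # placeholder parameter values
VI, VG = 12.0, 12.0
S, T50, D, T0 = 2.0, 1.0, 10.0, 0.0        # single bolus injected at t = T0

def insulin_absorption(t):
    # bolus absorption term of Eq. (9) (placeholder shape and units)
    if t <= T0:
        return 0.0
    dt = t - T0
    return S * dt ** (S - 1) * T50 ** S * D / (VI * (T50 ** S + dt ** S) ** 2)

def rhs(t, x):
    I, Ia, G, Ggut = x
    G_NHB, G_out, G_ren = 0.5, 0.4 * Ia, 0.0       # placeholder sub-models
    dI = insulin_absorption(t) - KE * I            # plasmatic insulin
    dIa = K1 * I - K2 * Ia                         # active insulin
    dG = (KABS * Ggut + G_NHB - G_out - G_ren) / VG
    dGgut = 0.0 - KABS * Ggut                      # G_empt = 0: no meal here
    return [dI, dIa, dG, dGgut]

sol = solve_ivp(rhs, (0.0, 6.0), [0.0, 0.0, 90.0, 5.0], max_step=0.1)
```

In the actual design problem, the bolus times and amounts and the meal carbohydrates enter this model as the decision variables, while the glucose trajectory G(t) is checked against the normoglycaemia thresholds.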
A dedicated program has been developed in Octave/C++, and an SQP optimizer has been used to handle the nonlinear programming problem.
3.1. Results and comments Parameter estimation results in terms of estimates and a posteriori statistics (including t-values and the weighted sum of squared residuals, WSSR) are given in Table 1.

Table 1. Comparison of different experiment design configurations. Superscript * indicates t-values failing the t-test (the reference value is tref = 1.721).

Design     | Parameter estimate    | Conf. interval (95%)     | t-values              | WSSR
STDE       | [0.006 1.466 3.664]T  | [±0.408 ±2.001 ±0.728]   | [0.015* 0.73* 5.027]  | 25.4
OMBRE      | [0.008 0.950 5.547]T  | [±0.086 ±0.121 ±0.083]   | [0.09* 7.85 66.83]    | 14.7
DE-OMBRE   | [0.025 1.003 4.090]T  | [±0.014 ±0.017 ±0.038]   | [1.78 59.20 107.63]   | 8.2
Note how a conventional design approach would provide a test in which the subject is driven to a severely hyperglycaemic condition (Figure 1a). Additionally, a statistically poor estimation of the model parameters is obtained (Table 1). OMBRE provides a better fit and also improves the quality of the parameter estimation, but even in that case (not shown here for the sake of conciseness) slightly hyperglycaemic conditions are reached. Conversely, the newly proposed DE-OMBRE technique (Figure 1b) is able to preserve both the optimality and the feasibility of the planned test. The fitting of the experimental data is
greatly improved, with the estimate of k1 (the most critical parameter to estimate) closer to the true value defining the subject affected by diabetes. A further advantage of DE-OMBRE compared with STDE is the lower computational cost (5.7 min against 13.2 min for STDE on a Pentium® D 3 GHz processor): as in OMBRE, the whole optimization problem is split into a number of more accessible subproblems, and this implicitly improves the robustness of the whole design procedure.
Figure 1. Predicted glucose profiles (before and after identification) and manipulated inputs (insulin doses and meal uptakes) as provided by (a) STDE and (b) DE-OMBRE. The subject's actual response is indicated by diamonds.
Conclusions
A disturbance estimation approach for online model-based redesign of experiments (DE-OMBRE) has been presented in this paper. The technique allows for the detection of systematic errors between reality and model, and for a systematic update of the constraints thanks to the information coming from the running experiment. Preliminary results clearly show the benefits of the novel design approach, which is able to preserve feasibility as well as optimality of the planned experiment even in the presence of model mismatch. Future work will assess the applicability of DE-OMBRE to larger systems, and extend the features of the novel design approach.
Acknowledgements The authors gratefully acknowledge the financial support granted to this work by the University of Padova under Project CPDR095313-2009 on “Towards the development of an artificial pancreas for diabetes mellitus care: optimal model-based design of experiments for parameter identification of physiological models”.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Semidefinite Programming Approach to Portfolio Optimization Raquel J. Fonseca, Wolfram Wiesemann, Berç Rustem Department of Computing, Imperial College, London, United Kingdom
Abstract The application of robust optimization techniques to an international portfolio allocation problem introduces non-linearities in the model. These stem from the triangulation requirement of the foreign exchange rates and the product of the local asset and the currency returns. We show that, by making appropriate assumptions regarding the formulation of the uncertainty sets, the proposed model has a semidefinite programming formulation and can be solved efficiently. Keywords: semidefinite programming, robust optimization, international portfolio optimization, risk management.
1. Introduction Markowitz’s seminal work on portfolio optimization initiated great interest and further academic research in the area of risk management (Markowitz 1952). The same interest was extended to international portfolios, as, due to the low correlation between foreign and domestic assets, there could be a positive impact on the overall variance of the portfolio. Changes in the currency value, however, give rise to a new source of risk, and therefore research on international portfolios is closely related to the issue of hedging and the use of forwards and other financial instruments. A survey of the topic may be found in (Shawky et al. 1997). The paradigm of robust optimization gained the attention of the academic community after the simultaneous works of El-Ghaoui, Ben-Tal and their collaborators (El-Ghaoui & Lebret 1997), (Ben-Tal & Nemirovski 1998). In this framework, uncertainty is directly incorporated in the model by considering the problem parameters as random variables. Robust optimization was first applied to an international portfolio model in (Rustem & Howe 2002) and later to a currency only portfolio in (Fonseca et al. 2011). We expand on the work in (Rustem & Howe 2002) by reformulating the problem in a convex tractable framework, and by subsequently evaluating the model using historical market data. By using a semidefinite programming formulation, we are able to maintain the bilinear nature of the asset and currency returns and solve the model in an efficient manner.
2. Robust International Portfolio Optimization
Our starting point is a US investor who wishes to invest in foreign assets. We assume there are n available assets in the market, denominated in m foreign currencies. The current and the future price of the ith asset in its local currency are denoted by P_i^0 and P_i, respectively. The local return of asset i is r_i^a = P_i / P_i^0. We denote by E_j and E_j^0 the future and the current spot exchange rate of the jth currency, respectively. Both quantities are expressed in terms of the base currency per unit of the foreign currency j.
The return on a specific currency j is described by r_j^e = E_j / E_j^0. The total return on any asset i is equal to the product of the local return r_i^a with the respective currency return r_j^e. Additionally, we define an auxiliary matrix O that assigns to each asset exactly one currency. If o_ij is the ijth element of O, we have:
o_ij = 1 if the ith asset is traded in the jth currency, and 0 otherwise.
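A small numerical sketch of the ownership matrix O and the resulting total returns (the numbers are illustrative, not market data):

```python
import numpy as np

# three assets in two foreign currencies (illustrative numbers)
r_a = np.array([1.02, 1.05, 0.99])   # local asset returns P_i / P_i^0
r_e = np.array([1.00, 0.97])         # currency returns E_j / E_j^0
O = np.array([[1.0, 0.0],            # asset 1 traded in currency 1
              [0.0, 1.0],            # asset 2 traded in currency 2
              [0.0, 1.0]])           # asset 3 traded in currency 2

# each row of O selects exactly one currency
assert np.all(O.sum(axis=1) == 1.0)

# total return of each asset: product of its local and currency return
total = np.diag(r_a) @ O @ r_e

w = np.array([0.5, 0.3, 0.2])        # portfolio weights summing to 1
R = float(total @ w)                 # portfolio return R(w)
```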
The portfolio return R(w) can be written as [diag(r^a) O r^e]' w, where the variable w denotes the vector of asset weights in the portfolio. In the Markowitz framework we would want to minimize the portfolio variance Var[R(w)], while guaranteeing a minimum expected return. As the estimates of the expected asset returns are taken as given, the Markowitz model lacks robustness: even small deviations of the materialized returns from their estimates could pull the solution previously obtained away from the optimum or render it infeasible. In view of this, we would like to incorporate in the model the uncertainty inherent to the estimation of the asset and currency returns by using robust optimization techniques.
2.1. The Robust Model of International Portfolio Optimization
In a robust framework, uncertain parameters are assumed to be random variables. The investor has some information about their distribution, such as the first two moments, and can therefore construct a set in which these parameters are expected to materialize. This region, commonly designated as the uncertainty set, may reflect some probabilistic measure, such as a confidence interval. We would like to obtain a solution to our problem that satisfies all the constraints for all possible values of the returns within that uncertainty set. Hence, we are interested in the worst-case value of the returns for which the solution is still feasible. We define our robust international portfolio optimization model as:

max_w min_{(r^a, r^e) ∈ Θ} [diag(r^a) O r^e]' w      (1)
s.t. 1' w = 1, w ≥ 0,

where the uncertainty set is the intersection of an ellipsoid around the return estimates and a system of linear inequalities:

Θ = { (r^a, r^e) ≥ 0 : (r − r̄)' Σ⁻¹ (r − r̄) ≤ δ², A r^e ≤ 0 },  with r = (r^a; r^e) and r̄ = (r̄^a; r̄^e).

The risk associated with the asset and the currency returns is expressed by a joint confidence region forming an ellipsoid, in which deviations of the returns from their estimates are weighted by the covariance matrix Σ. Note that Σ does not only refer to the relationship between assets, but also between assets and currencies, and between currencies. The system of linear inequalities A r^e ≤ 0 reflects the triangular relationship between the foreign exchange rates, which must be respected at all times in an arbitrage-free market. If we define two exchange rates E_j and E_k relative to a base currency, a cross exchange rate X_jk = E_k / E_j is automatically defined between those two rates. We must then ensure that the cross exchange rate returns x_jk are also within adequate bounds and respect the triangulation constraint. For simplicity we assume that x_jk is materialized
between a lower bound L and an upper bound U, which allows us to reformulate the bounding constraints as L ≤ x_jk ≤ U, i.e. L r_j^e ≤ r_k^e ≤ U r_j^e. We are now faced with the problem of optimizing the product of two random variables. A common approximation is to consider the total asset returns as the sum of the local asset and the currency returns. In the remainder, we propose an alternative semidefinite programming approach. A semidefinite program maximizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite (Vandenberghe & Boyd 1996).
2.2. Semidefinite Programming Approximation
We start by rewriting our robust problem (1) in the epigraph form:

max_{w, γ} γ      (2)
s.t. [diag(r^a) O r^e]' w ≥ γ  for all (r^a, r^e) ∈ Θ,
1' w = 1, w ≥ 0.

We also rewrite the constraints that define the support of our uncertain returns in the form Ξ = { ξ ∈ R^k : e_1' ξ = 1, ξ' W_l ξ ≥ 0, l = 1, …, t }, where e_1 is a basis vector in R^k
whose first element is 1 and all the others 0. This construction guarantees that the first component of the lifted vector ξ = (1, r^a, r^e)' is equal to 1. Starting from the ellipsoidal region, we define an equivalent constraint of the form ξ' W_1 ξ ≥ 0, where

W_1 = [ δ² − r̄' Σ⁻¹ r̄    r̄' Σ⁻¹
        Σ⁻¹ r̄            −Σ⁻¹ ],  with r̄ = (r̄^a; r̄^e).
The linear system of inequalities representing the triangulation requirement may be constructed following a similar procedure, with a matrix W_l for each constraint. We then replace the semi-infinite inequality constraint by a linear matrix inequality, using the following result (Ben-Tal et al. 2004):
Approximate S-lemma: Consider t+1 symmetric matrices S and W_l with l = 1, …, t, and the following propositions:
(i) there exists λ ∈ R^t with λ ≥ 0 and S − sum_{l=1}^t λ_l W_l ⪰ 0;
(ii) ξ' S ξ ≥ 0 for all ξ ∈ Ξ = { ξ ∈ R^k : e_1' ξ = 1, ξ' W_l ξ ≥ 0, l = 1, …, t }.
Then, (i) implies (ii). Our final model formulation is then:

max_{w, γ, λ} γ      (3)
s.t. S − sum_{l=1}^t λ_l W_l ⪰ 0,
1' w = 1, w ≥ 0, λ ≥ 0,
where

S = [ −γ    0'                  0'
      0     0                   (1/2) diag(w) O
      0     (1/2) O' diag(w)    0 ],

so that ξ' S ξ = [diag(r^a) O r^e]' w − γ. The reformulated problem (3) in the decision variables w, γ and λ provides a lower bound on the optimal objective function value of the original problem. The advantage of this formulation is its tractability: because both the objective function and the constraints are convex, we are now able to solve the problem efficiently with a standard semidefinite programming solver.
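The role of S can be verified numerically: with the lifted vector ξ = (1, r^a, r^e), the quadratic form ξ'Sξ reproduces the epigraph residual [diag(r^a) O r^e]' w − γ. A sketch with arbitrary data, assuming S carries −γ in its leading entry and the bilinear coupling block (1/2) diag(w) O:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
r_a = 1.0 + 0.1 * rng.standard_normal(n)   # local asset returns
r_e = 1.0 + 0.1 * rng.standard_normal(m)   # currency returns
O = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
w = np.array([0.5, 0.3, 0.2])
gamma = 0.9

# lifted vector and the matrix S: -gamma in the leading entry,
# (1/2) diag(w) O as the coupling block between r_a and r_e
xi = np.concatenate(([1.0], r_a, r_e))
S = np.zeros((1 + n + m, 1 + n + m))
S[0, 0] = -gamma
S[1:1 + n, 1 + n:] = 0.5 * np.diag(w) @ O
S[1 + n:, 1:1 + n] = 0.5 * O.T @ np.diag(w)

lhs = float(xi @ S @ xi)
rhs = float((np.diag(r_a) @ O @ r_e) @ w - gamma)  # epigraph residual
```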
3. Numerical Results
We would like to assess the performance of the theoretical model developed in the previous section with historical market data. Our US investor wishes to invest not only in domestic assets, such as the S&P500 and the NASDAQ, but also in foreign assets. We consider three international indices: the German DAX and the French CAC40 denominated in EUR, and the Swiss SMI in CHF. Each month we calculate the optimal asset allocation taking the expected asset and currency returns as the mean of the historical returns from the previous twelve months. The upper and lower bounds on the cross-exchange rates were calculated from the currencies' mean returns for the period considered plus the standard deviation for the same period multiplied by a factor of ±1.5. These bounds and the covariance matrix are assumed to remain constant throughout this period. At the end of each month, the actual portfolio return is computed based on the materialized returns. This procedure is repeated every month, and the accumulated wealth is calculated. We compare our robust model (3), designated as the SDP model, with other strategies to compute international portfolios: the EG approach (Elton et al. 2007), which does not consider the multiplicative term in the total asset returns, and the Base Currency approach, where all foreign returns are converted to the base currency of the investor. Additionally, the original non-convex model (1) is solved to local optimality with a semi-infinite algorithm. We consider an uncertainty set of unit size. The left chart in Figure 1 depicts the accumulated wealth over the period from October 1998 to September 2008 for the different approaches. For this particular data set, the SDP model appears to outperform the other strategies, yielding an average annual portfolio return of 6%, against 2.7%, 1.8% and 1.2% obtained by the Base Currency, EG and Local Optima approaches, respectively.
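The triangulation requirement enters the uncertainty set as pairs of linear inequalities in the currency returns; a sketch of how the two rows for one cross rate could be assembled (the helper and the numbers are hypothetical):

```python
import numpy as np

def triangulation_rows(j, k, L, U, m):
    """Two rows of a linear system A r^e <= 0 enforcing
    L <= x_jk <= U for the cross-rate return x_jk = r_k^e / r_j^e,
    i.e. L * r_j^e <= r_k^e <= U * r_j^e (hypothetical helper)."""
    lower = np.zeros(m)
    upper = np.zeros(m)
    lower[j], lower[k] = L, -1.0     # L * r_j - r_k <= 0
    upper[j], upper[k] = -U, 1.0     # r_k - U * r_j <= 0
    return np.vstack([lower, upper])

A = triangulation_rows(0, 1, L=0.95, U=1.05, m=2)
r_e = np.array([1.00, 0.97])                 # currency returns
feasible = bool(np.all(A @ r_e <= 0.0))      # 0.97 lies within [0.95, 1.05]
```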
These results lead us to conclude that accounting for the correlation between the local asset and currency returns, as well as for their multiplicative effect, is important. The right chart in Figure 1 compares our robust model with the Markowitz approach of risk minimization. Again, the robust model appears to outperform the risk minimization model, with average annual returns of 6% and 2.84%, respectively. The guarantee provided by the robust model is clearly seen in the period from January 2002 to September 2003: because we are optimizing for the worst-case scenario, our accumulated portfolio return never falls short of that given by the Markowitz model.
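The backtesting procedure (twelve-month rolling mean as the return estimate, monthly rebalancing, compounding of realized returns) can be sketched as follows; the data are synthetic and the equal-weight allocation is a placeholder for the SDP model:

```python
import numpy as np

def backtest(returns, allocate, window=12):
    """Rolling monthly backtest: estimate expected returns as the mean of
    the previous `window` months, obtain weights from `allocate`, and
    compound the realized portfolio return into accumulated wealth.
    `returns` holds gross returns, one row per month, one column per asset."""
    wealth = [1.0]
    for t in range(window, len(returns)):
        r_hat = returns[t - window:t].mean(axis=0)  # 12-month estimate
        w = allocate(r_hat)
        wealth.append(wealth[-1] * float(returns[t] @ w))
    return np.array(wealth)

rng = np.random.default_rng(1)
rets = 1.0 + 0.01 * rng.standard_normal((36, 3))    # 3 assets, 36 months

def equal_weight(r_hat):                             # placeholder allocation
    return np.full(len(r_hat), 1.0 / len(r_hat))

wealth = backtest(rets, equal_weight)
```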
Figure 1: Accumulated wealth over the period from Oct98 to Sep08 (left: SDP model, Local optimality, Base currency and EG approaches; right: minimum-risk model vs. SDP model).
4. Conclusion We presented a robust optimization approach to the portfolio allocation problem when foreign assets are available for investment. We showed that the bilinear relationship between local asset and currency returns could be expressed by a tractable convex formulation by using the approximate S-Lemma and rewriting our model as a semidefinite programming problem. The backtesting experiments seem to point towards the better performance of this approach when compared to the Markowitz risk minimization model and to other international portfolio optimization models.
Acknowledgments Financial support from the EU Commission through MRTN-CT-2006-034270 COMISEF and Fundação Calouste Gulbenkian - 113392 is gratefully acknowledged.
References
Ben-Tal, A. et al., 2004. Adjustable Robust Solutions of Uncertain Linear Programs. Mathematical Programming Ser. A, 99, pp. 351-376.
Ben-Tal, A. & Nemirovski, A., 1998. Robust Convex Optimization. Mathematics of Operations Research, 23, pp. 769-805.
El-Ghaoui, L. & Lebret, H., 1997. Robust Solutions to Least-Squares Problems with Uncertain Data. SIAM Journal on Matrix Analysis & Applications, 18, pp. 1035-1064.
Elton, E.J. et al., 2007. Modern Portfolio Theory and Investment Analysis, 7th Edition, Wiley.
Fonseca, R.J. et al., 2011. Robust Optimization of Currency Portfolios. Journal of Computational Finance, forthcoming.
Markowitz, H., 1952. Portfolio Selection. Journal of Finance, 7, pp. 77-91.
Rustem, B. & Howe, M., 2002. Algorithms for Worst-Case Design and Applications to Risk Management, Princeton University Press.
Shawky, H.A. et al., 1997. International Portfolio Diversification: a Synthesis and an Update. Journal of International Financial Markets, Institutions & Money, 7, pp. 303-327.
Vandenberghe, L. & Boyd, S., 1996. Semidefinite Programming. SIAM Review, 38, pp. 49-95.
Increasing the catalytic cracking process efficiency by implementing an optimal control structure. Case study
Cristina Popa, Cristian Pătrășcioiu
Control Engineering and Computers Department, Petroleum-Gas University of Ploiesti, Bucuresti Blvd. 39, 1006800, Ploiesti, Romania
Abstract
The paper presents an original optimal control system developed by the authors for a catalytic cracking plant. The efficiency improvement of a Romanian catalytic cracking plant using the proposed optimal control system is provided as a case study. The paper is structured in three parts. The first part describes the hierarchical control structure for the fluid catalytic cracking unit. The suggested control structure is the result of an extensive analysis of the control structure design strategies used for chemical processes. The second part is dedicated to the study of the objective function of the optimal control system and to the development of the optimal control system. The last part contains a case study of the Romanian catalytic cracking process. The authors have elaborated a specific process model and an optimal controller. Using an adequate simulation program, the authors have demonstrated the efficiency of the optimal control system.
Keywords: catalytic cracking, control, simulation, optimization.
1. Introduction
The fluid catalytic cracking unit (FCCU) plays an important role in the petroleum industry. The main goal of this plant is to obtain the maximum benefit while assuring safety and stability. The efficiency of catalytic cracking processes can be increased by various means: the mechanical design of the reactor and regenerator, the construction and performance of the cracked-gas compressors and the air blower, the physical and chemical properties of the feedstock, the kinetic characteristics of the catalyst, and the implementation of an optimal hierarchical control system. The control problem of the FCCU has been treated under various aspects in several works. Some works deal with conventional process control [1, 2], and another category of papers deals with aspects of advanced control [3, 4]. However, the optimal control systems applied to the FCCU are insufficiently treated. Under these conditions, the authors have focused their research on the development of an optimal hierarchical control system for increasing the catalytic cracking process
efficiency. The research has been completed with a case study of the optimal control system for a Romanian FCCU.
2. The Hierarchical Control Structure
To develop the control system structure, the authors have studied the structure of the fluid catalytic cracking process. The process has been decomposed into four sub-processes: the interfusion node, the riser (adiabatic tubular reactor), the stripper and the regenerator (the coke-burning reactor) [5, 6]. For each sub-process, the authors have developed a mathematical model in steady-state and dynamic regimes [5]. To build the hierarchical control structure, the authors have used the hierarchical organization concepts of complex systems [7] and plantwide control design strategies [8]. The control structure proposed by the authors is organized in three hierarchical levels: the conventional control level, the advanced control level and the optimal control level (figure 1).
Figure 1. Optimal hierarchical control structure for the FCCU.
The conventional control level contains 10 mono-variable control loops based on standard PID controllers. The advanced control level contains a multi-variable predictive controller developed by the authors. The set points of the predictive controller are the riser outlet temperature (used to control the cracking conversion) and the regenerator temperature (used to control the catalyst regeneration). The predictive controller performance has been tested using a dynamic simulator elaborated by the authors [6]. The optimal control level contains an optimal controller whose objective is to calculate the set points of the second level (the optimum riser outlet temperature and the optimum regenerator temperature).
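The mono-variable loops of the conventional level are standard PID controllers; a minimal discrete (positional) PID sketch on a first-order toy process, with illustrative gains, could look as follows:

```python
class PID:
    """Discrete positional PID controller (illustrative gains below)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# toy first-order process driven toward a 520 degC riser outlet set point
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=1.0)
temp = 500.0
for _ in range(200):
    u = pid.update(520.0, temp)
    temp += (-0.1 * (temp - 500.0) + 0.1 * u) * 1.0   # toy process model
```

The integral action removes the steady-state offset, so the loop settles at the set point.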
3. The optimal controller design
The main process variables that affect the gasoline yield are the regenerated catalyst temperature Treg and the catalyst/feedstock contact ratio a. The goal of the optimal
controller is to generate optimal set points for the predictive controller that maximize the gasoline yield. The optimal controller developed by the authors contains three components: an objective function, an optimization algorithm and the steady-state process model (see figure 2).
Figure 2. The structure of the optimal controller.
The steady-state process model developed by the authors has been reduced to the form

[Y_G, T_R] = Model(T_reg, a),      (1)
where Y_G represents the gasoline yield, T_R the riser temperature, a the catalyst/feedstock contact ratio, and T_reg the regenerated catalyst temperature. The objective function proposed by the authors is the gasoline yield of the catalytic cracking process:

F(T_reg, a) = Y_G.      (2)
The process variable a, the catalyst/feedstock contact ratio, cannot be used directly as a manipulated variable; industrial practice recommends the riser outlet temperature as the manipulated variable. Under this condition, the authors have proposed a correlation between the catalyst/feedstock contact ratio and the riser outlet temperature. The optimization module calculates the optimal value of the catalyst/feedstock contact ratio, and the controller algorithm then determines the optimal riser outlet temperature using the steady-state process model. The optimization algorithm used by the authors is a multidimensional exploration algorithm based on the Hessian matrix with simple restrictions. The objective function restrictions are simple bounds:
700 ≤ T_reg ≤ 750 [°C]
3 ≤ a ≤ 6      (3)
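The bounded maximization of the gasoline yield can be sketched with a generic bounded optimizer on a toy stand-in for the steady-state model. The quadratic surface below (interior peak ≈ 0.46, matching the magnitude reported in the case study) is an assumption, not the authors' model, and L-BFGS-B replaces their Hessian-based exploration algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def gasoline_yield(x):
    """Toy stand-in for the steady-state model (1): a smooth surface with a
    single interior maximum of about 0.46. Illustrative, not the authors' model."""
    T_reg, a = x
    return 0.46 - 1e-4 * (T_reg - 725.0) ** 2 - 5e-3 * (a - 4.5) ** 2

bounds = [(700.0, 750.0), (3.0, 6.0)]    # restrictions (3)
res = minimize(lambda x: -gasoline_yield(x), x0=np.array([710.0, 3.5]),
               method="L-BFGS-B", bounds=bounds)
T_opt, a_opt = res.x
Y_opt = -res.fun
```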
4. Case study: increasing the catalytic cracking process efficiency in a Romanian refinery
The authors have studied a Romanian catalytic cracking plant and have collected industrial data (figure 3). The steady-state and dynamic process models have been adapted using these industrial data [9].
Figure 3. Industrial data: a) feedstock and gasoline flow rate; b) feedstock, reactor and regenerator temperature.
Using the steady-state process model and the optimization tool of Matlab, the authors have studied the objective function (2) associated with the optimal control. The 3D graphic has confirmed that the function has an optimal region (figure 4a), and the contour graphic of the objective function has indicated that the gasoline yield has a maximum value of approximately Y_G = 0.46 (figure 4b). The authors have implemented in Matlab a special program for dynamic simulation of the catalytic cracking optimal control system. The numerical results have confirmed an increase of the catalytic cracking process efficiency of 3%, i.e. about 10 mil. euro/year (figure 5).
Figure 4. The objective function (2): a) the 3D graphic; b) the contour graphic.
Figure 5. Comparison between industrial data and optimal control system data of the gasoline yield.
5. Conclusion
This paper presents approaches for improving the efficiency of catalytic cracking processes by implementing an optimal hierarchical control system. The main contributions brought by the authors within this paper are:
- development of a hierarchical control structure;
- development of an optimal controller;
- simulation of the optimal control system and assessment of the economic benefits obtained by using it.
References
[1] R. Aguilar, A. Poznyak, R. Martínez-Guerra, R. Maya-Yescas, 2002, Temperature control in catalytic cracking reactors via a robust PID controller, Journal of Process Control, 12, 6, p. 695.
[2] M. Cristea, P. Agachi, 2007, Comparison between different control approaches of the UOP fluid catalytic cracking unit, Computer Aided Chemical Engineering, 24, p. 847.
[3] A.A. Alaradi, S. Rohani, 2002, Identification and Control of a Riser-Type FCC Unit Using Neural Networks, Computers and Chemical Engineering, 26, p. 401.
[4] C. Jia, S. Rohani, A. Jutan, 2003, FCC Unit Modeling, Identification and Model Predictive Control, a Simulation Study, Chemical Engineering and Processing, 42, p. 311.
[5] C. Popa, C. Pătrășcioiu, 2010, The Model Predictive Control System for the Fluid Catalytic Cracking Unit, Advances in Dynamic Systems and Control, 6th WSEAS International Conference on Dynamical Systems and Control, Tunisia, p. 95.
[6] C. Popa, C. Pătrășcioiu, 2010, New Approach in Modeling, Simulation and Hierarchical Control of Fluid Catalytic Cracking Process. I - Process Modelling, Revista de Chimie, Bucharest, Romania, 60, 4, p. 419.
[7] M.D. Mesarović, D. Macko, Y. Takahara, 1970, Theory of Hierarchical Multilevel Systems, Academic Press, New York.
[8] W. Luyben, B.D. Tyréus, M.L. Luyben, 1998, Plantwide Process Control, McGraw-Hill, New York, USA.
[9] C. Pătrășcioiu, C. Popa, 2007, Kinetic Model Adaptation of Catalytic Cracking Unit, Chemical Bulletin of "Politehnica" University of Timișoara, 52(66), 1-2, p. 34.
Experimental Evaluation of a Robust NMPC Strategy for an Unstable Nonlinear Process Udo Schubert, Andreas Lange, Harvey Arellano-Garcia, Günter Wozny Chair of Process Dynamics and Operation; Berlin Institute of Technology, Sekr. KWT-9, Straße d. 17. Juni 135, D-10623 Berlin, Germany
Abstract In this work, we introduce a nonlinear model predictive controller (NMPC) for the safe operation of an open-loop unstable process with nonlinear dynamics and grade transitions. The nominal stability of the reactor is achieved through inclusion of an optimal terminal state penalty. Moreover, a path constraint and a set-point trajectory have been investigated to minimize the chances for reactor runaway during transitions between unstable operation points. In this contribution, the derivation of the proposed control algorithm and the solution strategy of the optimal control problem will be presented in detail. Moreover, experimental results will be used to validate the approach and provide a comparison. Furthermore, aspects of infeasibility and performance of the solution of the optimal control problem are also discussed. Keywords: Nonlinear Model Predictive Control, Multiplicity, Reactor Runaway
1. Introduction The considered case study is a continuous reactor with a strong irreversible exothermic first order chemical reaction.
Figure 1. Process flowsheet (left) and stationary heat-gain/heat-loss diagram (right).
The heat is transported from the reactor holdup to a constant coolant recycle and finally removed from the system using a heat exchanger with external coolant makeup, according to the flowsheet in Fig. 1 (left). Following the static heat-gain/heat-loss diagram in Fig. 1 (right), the reactor exhibits multiplicity behavior with respect to the reactor temperature, which also includes the occurrence of an unstable steady state. However, in some cases,
the economically desirable operation point is represented by this unstable state B, whereas the stable steady states A and C are related to reduced conversion because of the low temperature level, or to decomposition reactions because of the high temperature level. Despite being economically desirable, the unstable steady state shows pronounced effects of the nonlinearities of the chemical reaction and is prone to ignition/extinction behavior. These phenomena may lead to critical system states causing long process transitions or even a plant shutdown. Therefore, control schemes are required to stabilize the process at an unstable operation point, whilst being robust against disturbances and complying with safety margins so as to avoid both reactor runaway and reaction extinction. For the validation of the proposed NMPC scheme, an experimental setup has been implemented, which makes use of a mixed-reality approach to provide an economically feasible and realistic framework for comprehensive control algorithm testing. Whereas previous work approximated the real process behavior by replacing the chemical reaction with controlled steam injection into a reactor dummy and a water feed [1, 2], the mixed-reality approach replaces the reactor completely with a simulation layer and an interface to the jacket and the cooling system [3].
2. NMPC Approach
The nominal stability of the system is achieved by formulating the optimal control problem as a quasi-infinite horizon (QIH-NMPC) scheme [4], following (1)-(5). The objective function (6) consists of the quadratic stage cost (7) and is extended using a terminal cost function (8) that penalizes a deviation of the state from the desired steady state at the end of the prediction horizon. Thereby, the computationally infeasible infinite-horizon control scheme is approximated.

min_u J(x, u, Tc)      (1)
s.t. f(ẋ, x, u, t) = 0,  x(t0) = x0      (2)
g(x, u, t) = 0      (3)
h(x, u, t) ≤ 0      (4)
u_min ≤ u ≤ u_max      (5)
However, for large deviations from the setpoint (e.g. because of large disturbances or a setpoint change), the terminal region may become infeasible within the finite prediction horizon. In this case, the optimal solution may result in unstable closed-loop behavior and lead to reactor runaway or reaction extinction. In order to avoid feasibility problems of the optimization routine, the terminal state constraint included in [4] is removed, whilst the control horizon is extended to recover the stability properties in stationary operation [5]. Therefore, special attention has to be paid to the problem of providing stable transitions between unstable operation points. In order to limit the feasible trajectories to a space of safe dynamic operation that prevents reactor runaway and reaction extinction, the utilization of (i) a dynamic path constraint and (ii) a safe setpoint trajectory for the reactor temperature has been investigated. The solution of the open-loop optimal control problem is obtained using a sequential approach in order to parametrize the control vector over the prediction horizon. It is well known that such a feasible-path approach has single-shooting properties and may be sensitive to instability of the controlled system [6]. This stems from strong nonlinearities of the objective function for longer prediction
horizons and the dependency on the initial guess for the control vector [7, 8].

J(x, u, Tc) = ∫_t^{t+Tc} F(x(τ), u(τ)) dτ + E(x(t + Tc))      (6)

with the stage cost F and the terminal cost E given by

F(x(τ), u(τ)) = (x(τ) − xs)^T Q (x(τ) − xs) + (u(τ) − us)^T R (u(τ) − us)      (7)
E(x(t + Tc)) = (x(t + Tc) − xs)^T P (x(t + Tc) − xs)      (8)

The computation of a suitable terminal region

Ω = { x ∈ R^n : x^T P x ≤ α }      (9)

and the corresponding terminal penalty matrix P is usually not a trivial task. In this work, the approach presented in [9] has been utilized, by linearizing the system around the unstable steady state (B). Then, the cost of the infinite horizon can be approximated from the solution of the optimization problem (10). Therein, the linear feedback law K = Y X^{−1} and the terminal penalty matrix P = α X^{−1} are designed to (i) stabilize the system and (ii) maximize the volume of the resulting terminal region for α ∈ R^+, using matrices X ∈ R^{n×n} and Y ∈ R^{m×n}:

max_{α, X, Y} det(α P^{−1})      (10)
s.t. 0 ≺ X = X^T
[ −AX − XA^T − BY − Y^T B^T    X Q^{1/2}    Y^T R^{1/2}
  Q^{1/2} X                    αI           0
  R^{1/2} Y                    0            αI ] ⪰ 0
The solution of problem (10) has been obtained using the solver SDPT3 [10] and the corresponding MATLAB® interface YALMIP [11].
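The quadratic stage and terminal costs in (6)-(8) are straightforward to evaluate on a sampled trajectory. The sketch below is illustrative only: the weights Q, R, P and the steady state x_s, u_s are hypothetical placeholders, and a rectangle rule stands in for the integral in (6).

```python
import numpy as np

# Hypothetical weights and steady state -- not taken from the paper.
Q = np.diag([1.0, 0.1])       # state weight in eq. (7)
R = np.array([[0.01]])        # input weight in eq. (7)
P = np.diag([5.0, 0.5])       # terminal penalty in eq. (8)
x_s = np.array([350.0, 2.0])  # steady state (temperature, concentration)
u_s = np.array([0.5])

def stage_cost(x, u):
    # F(x, u) of eq. (7): quadratic penalty on state and input deviations.
    dx, du = x - x_s, u - u_s
    return dx @ Q @ dx + du @ R @ du

def terminal_cost(x_end):
    # E(x(t + Tc)) of eq. (8): quadratic terminal penalty.
    dx = x_end - x_s
    return dx @ P @ dx

def objective(xs, us, dt):
    # Rectangle-rule approximation of the integral in eq. (6).
    J = sum(stage_cost(x, u) for x, u in zip(xs, us)) * dt
    return J + terminal_cost(xs[-1])
```

At the setpoint the objective is exactly zero; any deviation contributes through both the integrated stage cost and the terminal penalty.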
3. Simulation Results

In a first step, the controller has been designed using a dynamic model of the process depicted in Fig. 1. A path constraint, taken from [12] and described in (11), has been included in the optimal control problem (1). A safe operating point requires the divergence to be negative, whereas for unstable processes the divergence may be slightly greater than zero. To allow a transition towards higher temperatures, a certain degree of runaway behavior is required and therefore a limit div_max \in R^+ is defined. However, it turns out that this constraint is too restrictive and results in inappropriately long process transitions.

div(g(x(t))) = \partial g_1 / \partial x_1 + \partial g_2 / \partial x_2 + \cdots + \partial g_n / \partial x_n < div_max    (11)
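The divergence criterion (11) can be checked numerically with central differences. A minimal sketch, using a toy planar vector field in place of the reactor right-hand side g and a hypothetical limit div_max:

```python
import numpy as np

def divergence(g, x, eps=1e-6):
    """Central-difference estimate of div(g) = sum_i dg_i/dx_i, as in eq. (11)."""
    n = len(x)
    div = 0.0
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        div += (g(x + dx)[i] - g(x - dx)[i]) / (2.0 * eps)
    return div

# Toy linear vector field (not the reactor model): g(x) = (a*x0, -b*x1),
# so div(g) = a - b everywhere.
a, b = 2.0, 0.5
g = lambda x: np.array([a * x[0], -b * x[1]])

div_max = 2.0                      # hypothetical runaway limit
x = np.array([1.0, 1.0])
safe = divergence(g, x) < div_max  # div = a - b = 1.5 < 2.0
```

In an NMPC setting this check would be evaluated along the predicted trajectory and imposed as the path constraint (11).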
Therefore, using an offline optimization of process transitions with a fixed control horizon of 5400 sec, an empirical setpoint trajectory has been determined. A constant gradient of 7 K/h was found to provide safe transitions, since it delivers feasible intermediate terminal regions. Accordingly, the length of the control and prediction horizon has been set to Tc = 600 sec with a step size of dt = 60 sec, giving a control vector with 10 elements. Fig. 2 illustrates the simulation results obtained for up- and downward setpoint changes with stochastic input disturbances on the feed temperature, composition and flow, as well as on the coolant temperature. The setpoint trajectory
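The safe setpoint trajectory described above is simply a constant-gradient ramp sampled at the controller step size and clipped at the target. A sketch (the function name and the 14 K transition below are illustrative assumptions; the 7 K/h gradient and 60 sec step are from the text):

```python
import numpy as np

def ramp_setpoint(T_start, T_end, grad_K_per_h=7.0, dt_s=60.0):
    """Constant-gradient setpoint ramp between two operating points,
    clipped at the target temperature."""
    direction = np.sign(T_end - T_start)
    step = direction * grad_K_per_h * dt_s / 3600.0   # K per control interval
    n = int(np.ceil(abs(T_end - T_start) / abs(step))) + 1
    traj = T_start + step * np.arange(n)
    return np.clip(traj, min(T_start, T_end), max(T_start, T_end))

traj = ramp_setpoint(333.0, 347.0)   # hypothetical 14 K upward transition (~2 h)
```

Each NMPC iteration would then track the next Tc-window of this ramp rather than the final setpoint directly.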
Experimental Evaluation of a Robust NMPC Strategy for an Unstable Nonlinear Process
has been disabled for the downward step in Fig. 2 (left) to illustrate the fact that a stable transition in this direction can be directly obtained without any constraints. However, the difficult transition to the unstable steady state on the elevated temperature level can be accomplished smoothly with the setpoint trajectory.
[Figure: panels show reactor, setpoint, jacket and jacket-return temperatures (K), concentration (kmol/m3) and coolant valve position (%) versus time (h) for both scenarios.]
Figure 2. Simulation results for a setpoint change down- (left) and upwards (right).
4. Experimental Results

In this section, the results obtained using the simulation in the previous section are validated through online experiments on the Mixed-Reality CSTR. As seen from Fig. 3 (left), the step response for a downward setpoint change can be replicated quite well, also without the setpoint trajectory.
[Figure: panels show reactor, setpoint, jacket and jacket-return temperatures (K), concentration (kmol/m3) and coolant valve position (%) versus time (h) for both experimental scenarios.]
Figure 3. Experimental results of the Mixed-Reality reactor for a setpoint change down- (left) and upwards (right).

Except for some oscillations in the coolant valve position when the new steady state is being approached, the dynamic behavior is very similar. While the coolant valve shows permanent oscillations in the upward setpoint change scenario in Fig. 3 (right), the reactor temperature tracks the trajectory quite well and remains stable. The oscillations are the result of minor model inaccuracy concerning the predicted heat exchanger outlet temperature and have also been reported in [2].
5. Conclusions

The NMPC control scheme adopted from the QIH-NMPC approach has been designed for closed-loop stability and safe setpoint transitions using dynamic simulation. For its validation, it has been implemented on a Mixed-Reality CSTR. In order to achieve closed-loop stability during setpoint transitions with short control horizons, a safe setpoint trajectory has been calculated. However, the introduced path constraint is being further exploited so as to incorporate the dynamic properties of the heat transport, in order to achieve optimal transitions without the performance limitation imposed by a suboptimal setpoint gradient. This work has been financially supported by the German Research Foundation. The authors would also like to thank Prof. Allgöwer and Christoph Böhm for their support with the terminal region calculations.
References

[1] L. Kershenbaum. Experimental testing of advanced algorithms for process control: When is it worth the effort? Chemical Engineering Research and Design, 78(4):509-521, 2000.
[2] Lino O. Santos, Paulo A. F. N. A. Afonso, Jose A. A. M. Castro, Nuno M. C. Oliveira, and Lorenz T. Biegler. On-line implementation of nonlinear MPC: an experimental case study. Control Engineering Practice, 9(8):847-857, 2001.
[3] U. Schubert, H. Arellano-Garcia, and G. Wozny. Development and experimental verification of model-based process control using mixed-reality environments. In Computer Aided Chemical Engineering, volume 26 of 19th European Symposium on Computer Aided Process Engineering, pages 333-337. Elsevier, 2009.
[4] Rolf Findeisen and Frank Allgöwer. The quasi-infinite horizon approach to nonlinear model predictive control. pages 89-108, 2003.
[5] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789-814, 2000.
[6] T. Binder, C. Blank, H. Georg Bock, R. Bulirsch, W. Dahmen, M. Diehl, T. Kronseder, W. Marquardt, Johannes P. Schlöder, and O. v. Stryk. Introduction to model based optimization of chemical processes on moving horizons. In M. Grötschel, S. O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems: State of the Art, pages 295-340. Springer-Verlag Berlin, Heidelberg, 2001.
[7] Moritz Diehl, H. Georg Bock, Johannes P. Schlöder, Rolf Findeisen, Zoltan Nagy, and Frank Allgöwer. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. Journal of Process Control, 12(4):577-585, 2002.
[8] Victor Zavala, Carl Laird, and Lorenz Biegler. Fast implementations and rigorous models: Can both be accommodated in NMPC? International Journal of Robust and Nonlinear Control, 18(8):800-815, 2008.
[9] C. Böhm, R. Findeisen, and F. Allgöwer. Robust control of constrained sector bounded Lur'e systems with applications to nonlinear model predictive control. Dynamics of Continuous, Discrete and Impulsive Systems, 17(6):24, 2010.
[10] R. H. Tütüncü, K. C. Toh, and M. J. Todd. Solving semidefinite-quadratic-linear programs using SDPT3. Mathematical Programming, 95(2):189-217, 2003.
[11] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, 2004.
[12] J. M. Zaldívar, J. Cano, M. A. Alós, J. Sempere, R. Nomen, D. Lister, G. Maschio, T. Obertopp, E. D. Gilles, J. Bosch, and F. Strozzi. A general criterion to define runaway limits in chemical reactors. Journal of Loss Prevention in the Process Industries, 16(3):187-200, 2003. doi: 10.1016/S0950-4230(03)00003-2.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Economic Plantwide Control of C4 Isomerization Process

Rahul Jagtap, Sonam Goenka, Nitin Kaistha

Chemical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India
Abstract

Plantwide control system design for economically optimum operation of a C4 isomerization process is studied. The steady-state degrees of freedom of a base case design are optimized for a given C4 fresh feed processing rate (Mode I) and for maximum production (Mode II). At maximum production, the number of active constraints equals the steady-state degrees of freedom (dof), exhausting all the available dof. From the set of active constraints, regulatory plantwide control structures CS1 and CS2, which minimize the back-off from the economically dominant active constraints, are synthesized along with a simple supervisory optimizing scheme to drive the process operation as close as possible to the active constraints. Quantitative results for the back-off necessary to avoid constraint limit violation during transients due to a ±10% feed composition change are reported. Comparison with a conventional plantwide control structure, CS3, where the fresh feed is flow controlled, shows that the maximum achievable throughput (profit) for CS2 is higher by ~2% (> $1x10^6 per yr).

Keywords: Plantwide control, optimal process operation, control structure design
1. Introduction

In refinery operations, iso-butane (i-C4) is a more valuable feedstock than n-butane (n-C4) as it is used in the production of high-octane gasoline blending components, propylene oxide and tertiary butyl alcohol. The isomerization process is commonly used to convert n-C4 to the more valuable i-C4. As depicted in Figure 1, it consists of a de-isobutanizer (DIB) column that takes in the fresh C4 stream with small amounts of C3 and C5 impurities to recover i-C4 as the distillate (along with the C3 impurity). The n-C4 leaves from the bottoms with some i-C4 (light key) impurity and all the C5 in the fresh feed. This bottoms stream is further fractionated in the purge column, which recovers the heavy C5 as the bottoms with an n-C4-rich distillate. This distillate is preheated using the hot reactor effluent in a feed effluent heat exchanger (FEHE), vaporized and further heated to the reaction temperature in a furnace. The hot C4 stream enters an adiabatic packed bed reactor where n-C4 isomerizes irreversibly to i-C4. The hot reactor effluent, after losing heat in the FEHE, is cooled and condensed in a flooded condenser. The i-C4-rich condensed stream is fed to the DIB above the fresh feed for recovering the i-C4. For smooth operation of this industrially important process, Luyben et al.1 have designed a regulatory plantwide control structure using their heuristic bottom-up design procedure2. Of the several reasonable control structure possibilities, this procedure gives a structure for smooth transients in the overall plantwide response to principal disturbances such as a throughput change. Economic considerations are, however, ignored in the design of the plantwide control structure. In today's fiercely competitive market environment, processes must be operated for optimal economic profitability (e.g. to maximize throughput / operating profit or to minimize energy consumption).
The optimum steady state usually is at the intersection of multiple process constraints. Economic operation then requires driving process
R. Jagtap et al.
operation as close as possible to these active constraints. The implemented regulatory control system determines the severity of the transients in the active constraint variables and consequently the degree of closeness of operation to the constraint limits and economic profitability3-5. To the best of our knowledge, there are no literature reports that consider plantwide control system design for the industrially relevant C4 isomerization process from the perspective of economically optimal operation. This work presents the systematic design of such a plantwide control system for the process.

[Figure: process flow diagram annotated with base-case data, including fresh feed FC4 = 263.1 kmol/h (0.02 C3, 0.24 iC4, 0.69 nC4, 0.05 iC5), recycle D2 = 190.1 kmol/h, reactor inlet at 198.9 °C and 45 bar, Qfur = 863 kW, column pressures, duties and product compositions.]
Figure 1. C4 isomerization process schematic with base case conditions
2. Optimal Process Operation

A base case design of the C4 isomerization process for processing 263.1 kmol/h of fresh C4 feed (2% C3, 24% i-C4, 69% n-C4 and 5% i-C5) to produce an i-C4 product stream with 2% n-C4 impurity has been reported by Luyben et al.1. We take this existing design (see Figure 1 for salient design / operating parameters) and optimize the steady-state operating degrees of freedom for (a) a given fresh feed processing rate of 263.1 kmol/h (Mode I) and (b) the maximum fresh feed processing rate (Mode II). There are a total of seven steady-state degrees of freedom (dof): one for the fresh feed, four for the two columns (two per column), one for the furnace (duty or reactor inlet temperature) and one for the flooded condenser (duty or outlet temperature). It is assumed that the reactor is operated at the highest possible pressure (45 bar) for maximum reaction conversion, and the reactor pressure is not counted as a degree of freedom. The seven independent variables chosen to fully specify the process flowsheet are the fresh feed rate (FC4), the heavy key and light key impurity mol fractions in, respectively, the distillate and bottoms streams of the DIB column ([xDnC4]DIB and [xBiC4]DIB) and the purge column ([xDiC5]Purge and [xBnC4]Purge), the reactor inlet temperature (Trxr) and the cooler outlet temperature (Tcool). All material and energy stream flows (except the furnace duty) are constrained to between 0 and twice the base-case steady-state values. The maximum furnace duty is constrained at 1.5 times the base-case value to reflect the limited overdesign of expensive equipment. Similarly, the maximum DIB column boilup, denoting the onset of flooding, is taken as 1.3 times its base-case value. The corresponding factor for the purge column is 1.5. The maximum reactor temperature
and pressure limits are 200 °C and 45 atm, respectively. Finally, the n-C4 impurity in the product stream should be below 2%. To minimize the quality give-away, the impurity in the product stream must be at its constraint value (i.e. 2%). Also, the reactor inlet temperature should be at its maximum to maximize reaction conversion for minimum recycle cost. There exists an energy consumption versus production rate trade-off with respect to the loss of n-C4 in the C5 purge stream ([xBnC4]Purge). However, since the flow rate of the purge stream is small, we simply set [xBnC4]Purge to a small value (1%) so that the n-C4 loss is small. Lastly, Tcool is fixed at a reasonable value of 53 °C. This leaves three steady-state degrees of freedom to be optimized. The constrained minimization of the total energy cost is performed using fmincon in Matlab with Hysys as the background solver for the two modes of operation. The optimization problem and its results for Mode I and Mode II are briefly summarized in Table 1. The Mode I energy cost is $1.716x10^6 yr^-1, while the Mode II maximum throughput for the given fresh feed composition is 334.5 kmol/h. In Mode I, there are four active constraints, leaving three unconstrained dof. In Mode II, three additional constraints, namely the maximum furnace duty (QfurMAX), the maximum DIB boilup (Vreb1MAX) and the maximum purge column boilup (Vreb2MAX), are active, so that all dof are exhausted with seven active constraints. The result is typical of chemical processes, with the process being driven to its maximum throughput limit by exhausting all the dof to drive as many constraints as possible to their respective limits.

Table 1. Process optimization results' summary
Objective function (J) - Mode I: minimum energy cost*; Mode II: maximum throughput (FC4)

Case                      Mode I               Mode II
FC4                       263.1 kmol/h&        334.5 kmol/h#
Trxr                      200 °C (max)         200 °C (max)
Tcool                     53 °C (fixed)        53 °C (fixed)
[xDnC4]DIB                0.02 (max)           0.02 (max)
[xBiC4]DIB                0.0517               0.0125
[xDiC5]Purge              0.0202               0.00011
[xBnC4]Purge              0.01 (fixed)         0.01 (fixed)
Optimum J                 $1.716x10^6 yr^-1    334.5 kmol/h
Additional constraints    -                    QfurMAX, Vreb1MAX, Vreb2MAX

*: Furnace duty $9.83 GJ^-1; steam $4.83 GJ^-1; cooling water $0.16 GJ^-1
&: FC4 is specified
#: FC4 is optimized for maximum throughput
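The character of the optimization, an optimum pinned at constraint intersections, can be reproduced in miniature. The sketch below uses a hypothetical two-variable surrogate of the energy cost (not the Hysys flowsheet model) and scipy's SLSQP in place of fmincon; it shows the optimizer driving both decision variables to their bounds, i.e. to active constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surrogate: energy cost falls as the DIB bottoms impurity spec
# is relaxed and as the reactor inlet temperature rises (better conversion,
# less recycle). All coefficients are illustrative.
def energy_cost(v):
    Trxr, xB_iC4 = v
    return 4.0 / xB_iC4 + 0.05 * (220.0 - Trxr)

bounds = [(150.0, 200.0),   # Trxr <= 200 C (maximum reactor temperature)
          (0.005, 0.05)]    # allowable impurity window

res = minimize(energy_cost, x0=[180.0, 0.02], bounds=bounds, method="SLSQP")
# Both variables are driven to their upper bounds -> active constraints,
# mirroring how the Mode I/II optima sit at constraint intersections.
```

With monotone cost dependence, the minimizer necessarily lands on the bound of each variable, which is exactly the "all dof exhausted by active constraints" situation described in the text.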
3. Plantwide Control System Design and Economic Performance

3.1. Plantwide Control System Design

To design a regulatory control structure that minimizes the economic loss due to the need for a back-off from the active constraint limits during transients, consider the active constraints in Mode I and Mode II. Table 2 reports the percentage loss in the objective function per unit back-off in an active hard constraint. In Mode I, Trxr is the economically dominant active constraint variable. In Mode II, the throughput is affected most by Trxr and Qfur.

Table 2. Percent change in objective function J per percent back-off in active constraint*

Mode I:   Trxr# 0.926
Mode II:  Trxr# 0.658   Qfur 0.360   Vreb1 0.086   Vreb2 0.002

*: Only hard constraints considered. #: 50 °C span

To eliminate a back-off in Qfur, we may flow control the furnace fuel valve and not use it as a manipulated variable (e.g. to maintain Trxr). Alternatively, since both the Qfur and Trxr constraint variables are located in the reaction section, flow controlling the feed to the reactor and not using it as a manipulated variable would eliminate the flow variability in the reactor feed and hence
mitigate the transients (and consequently, the back-off) in both Qfur and Trxr. These two options result in the plantwide regulatory control structures CS1 and CS2. The inventory control system for CS1 is built around the flow-controlled (fixed) furnace duty with loop pairings as in Table 3. The unavailability of Qfur for manipulation forces Trxr control using the recycle flow rate. The purge column reflux drum level is then controlled using the column feed. The sump level is controlled using the reboiler duty, as the bottoms stream is very small, making it inappropriate for level control. The impurities [xBnC4]Purge and [xDiC5]Purge are maintained using the bottoms and reflux rate, respectively. In the DIB column, the sump level is controlled using the fresh feed. The reflux drum level is controlled using the reflux rate, as the reflux ratio is large (>5) with a relatively small distillate. The key component impurities [xDnC4]DIB and [xBiC4]DIB are controlled using, respectively, the distillate and the reboiler duty. With the basic regulatory control system in place, supervisory loops for Mode II operation are implemented, where the setpoints [xBiC4]DIB SP and [xDiC5]Purge SP are adjusted to maintain the boilups Vreb1 and Vreb2 near their maximum. In Mode I (given throughput), QfurSP is slowly adjusted for the desired fresh feed processing rate; QfurSP thus is the throughput manipulator (TPM).

Table 3. Plantwide control structures

Regulatory control loops (Mode I)
CV \ MV:        CS1       CS2       CS3
TPM             QfurSP    D2SP      FC4SP
Trxr            D2        Qfur      Qfur
Lvl top1        L1        L1        L1
Lvl bot1        FC4       FC4       B1
Lvl top2        B1        B1        D2
Lvl bot2        Vreb2     Vreb2     Vreb2
[xDnC4]DIB      [L/D]2    [L/D]2    [L/D]2
[xBiC4]DIB      Vreb1     Vreb1     Vreb1
[xDiC5]Purge    L2        L2        L2
[xBnC4]Purge    B2        B2        B2

Mode II supervisory control loops
Trxr            Maximum          Maximum          Maximum
Qfur            Maximum          D2SP             FC4SP
Vreb1           [xBiC4]DIB SP    [xBiC4]DIB SP    [xBiC4]DIB SP
Vreb2           [xDiC5]Purge SP  [xDiC5]Purge SP  [xDiC5]Purge SP

In CS2, the regulatory loops are built around the flow-controlled recycle stream (Mode I TPM). The purge column reflux drum level is controlled using the column feed and the DIB sump level is controlled using the fresh feed. Trxr is controlled using Qfur. The remainder of the regulatory control structure and the Mode II supervisory loops are similar to CS1. For comparison purposes, Table 3 also reports a conventional control structure, CS3, where the fresh feed is flow controlled and acts as the Mode I TPM. Here, in addition to the CS1 Mode II supervisory loops, FC4 is adjusted to maintain Qfur.

3.2. Quantitative Back-off Results

The process cannot be operated at the limit of the active constraints, as ever-present disturbances would cause transient hard constraint violation, which is unacceptable (the TrxrMAX, QfurMAX, Vreb1MAX and Vreb2MAX constraints are considered hard). The constraint variable control loop setpoints must be appropriately backed off from their limits for the worst-case disturbance. A ±10% step change in the fresh feed n-C4 composition, with a complementary change in the i-C4 mol fraction, is considered the worst-case disturbance. Table 4 reports the back-off from the active constraints along with the economic objective function using the three control structures for Mode I and Mode II. In both modes, there is no back-off in Trxr for CS1 and CS2, as the flow variability in the reactor feed is negligible, while a small back-off of 0.1 °C occurs in CS3 for Mode II. In Mode I, CS3 fails for a +10% n-C4 feed mol fraction change, with the purge column reflux drum filling up in about 10 hours. This is due to accumulation of the unreacted n-C4 in the recycle loop (snowball effect). In Mode II, Qfur shows a significant (7%) back-off
Economic Plantwide Control of C4 Isomerization Process
491
for CS3, while no back-off is required in CS1 and CS2. Some back-off is also necessary in the column boilups in all the structures. This back-off causes an almost negligible throughput loss of 0.1% in CS1 and CS2. The throughput loss for CS3 is however much larger at 1.9% due to the back-off in Qfur. Assuming a $20 per kmol product - raw material (including energy expense) price differential, this corresponds to a yearly revenue loss of about $1.068x10^6 in CS3 compared to CS1 and CS2, which is significant. The result shows that the implemented plantwide control structure significantly affects the profitability of the process.

Table 4. Back-off in active constraints and economic loss for CS1, CS2 and CS3*

Mode I
          Trxr^a    FC4^b    J^c
Optimum   200       263.1    1.726
CS1       200       263.1    1.726
CS2       200       263.1    1.726
CS3       Fails     Fails    -

Mode II
          Trxr^a    Qfur^d       Vreb1^e       Vreb2^e        J^f
Optimum   200.0     1294 (max)   2522 (max)    851.8 (max)    334.5
CS1       200.0     1294 (0%)    2486 (1.4%)   845.5 (0.7%)   334.1 (0.1%)
CS2       200.0     1294 (0%)    2486 (1.4%)   845.5 (0.7%)   334.1 (0.1%)
CS3       199.9     1203 (7%)    2496 (1%)     820.0 (3.7%)   328.0 (1.9%)

*: Values in parentheses denote the % back-off. a: °C; b: kmol/h; c: x10^6 $/yr; d: MW; e: kmol/h; f: FC4 kmol/h
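The quoted CS3 revenue loss follows directly from the Table 4 throughputs. A quick check, assuming continuous operation at 8760 h/yr (an assumption, but one that reproduces the $1.068x10^6 figure in the text):

```python
# Back-off economics from Table 4 (Mode II) and the $20/kmol price
# differential quoted in the text.
F_cs1 = 334.1        # kmol/h achieved with CS1/CS2
F_cs3 = 328.0        # kmol/h achieved with CS3
price_diff = 20.0    # $/kmol, product minus raw material (incl. energy)
hours_per_yr = 8760.0  # assumed continuous operation

loss_per_yr = (F_cs1 - F_cs3) * price_diff * hours_per_yr
# (334.1 - 328.0) * 20 * 8760 = 1,068,720 $/yr, i.e. ~ $1.068x10^6
```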
The TPMs for CS1, CS2 and CS3 are, respectively, QfurSP, D2SP and FC4SP. Since TrxrMAX and QfurMAX are the economically dominant Mode II active constraints and as these are located in the reaction section, minimizing the severity of transients (back-off) in both constraint variables requires minimizing the transient variability into the reaction section. This is accomplished in CS1 and CS2 with the TPM located in the reaction section, eliminating reactor feed flow variability. The TPM location thus plays a crucial role in the economic operation of a process. It should be located close to, and where possible at, the economically dominant active constraint(s).
4. Conclusion This plantwide control study of a C4 isomerization process shows that the regulatory control structure can significantly affect process economic performance by determining the severity of the transients in the economically dominant active constraints. To minimize the back-off from the constraint limit and hence the economic loss, the regulatory layer TPM should be located as close as possible to the dominant constraint(s). A top-down bottom-up approach, where the TPM is first chosen based on the economically dominant active constraint(s) (top-down part) followed by the synthesis of regulatory control loops (bottom-up part) appears the most appropriate systematic methodology for plantwide control system design.
References

1. W.L. Luyben, B.D. Tyreus, M.L. Luyben, 1999, Isomerization process, Plantwide Process Control, McGraw Hill: New York, 273-293.
2. M.L. Luyben, B.D. Tyreus, W.L. Luyben, 1997, Plantwide Control Design Procedure, AIChE J., 43, 12, 3161-3174.
3. R. Kanodia, N. Kaistha, 2010, Plantwide control for throughput maximization: A case study, Ind. Eng. Chem. Res., 49, 210-221.
4. R. Jagtap, N. Kaistha, S. Skogestad, 2010, Plantwide control for economic operation of a recycle process, Computer Aided Chemical Engineering, 28, C, 499-504.
5. P.A. Bahri, J.A. Bandoni, G.W. Barton, J.A. Romagnoli, 1995, Back-off calculations in optimizing control: A dynamic approach, Comp. Chem. Engg., 19, S699-S708.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Application of Graphic Processing Unit in Model Predictive Control

Arash Sadrieh, Parisa A. Bahri

School of Engineering and Energy, Murdoch University, Western Australia, 6150
Abstract

This study seeks to pave the way for the implementation of a model predictive control method using a Graphic Processing Unit (GPU). The GPU has been adapted and used as a real-time co-processor for Nonlinear Model Predictive Control (NMPC) algorithms, providing a means to improve the computational performance of the MPC algorithm in an economic manner. In this approach, a parallel version of the Nelder-Mead simplex algorithm was used to solve the MPC optimization problem. In order to show the effectiveness of the proposed approach, the implementation was applied to model predictive control of a crystallizer unit operation. The results show a considerable improvement in computational performance compared to a standard CPU-based implementation.

Keywords: Nonlinear Model Predictive Control, GPGPU, Nelder-Mead.
1. Introduction

Model Predictive Control (MPC) is a set of computer control algorithms which use a process model to predict the future response of a process. MPC algorithms are widely used in different applications such as the chemical, food processing, automotive, aerospace and metallurgical industries. A demanding feature of most MPC algorithms is that an optimization problem must be solved online (Cannon, 2004). In process systems MPC applications, the effect of computational delay is a major obstacle to using these algorithms in many industrial processes, specifically for processes with nonlinear mathematical models containing many equations and variables (Diehl et al., 2002). GPUs are relatively cheap components found in most new PCs and are traditionally employed for 3D graphics rendering algorithms. The hardware architecture of the GPU chip is designed to facilitate the execution of a very high number of threads in parallel. This means that a similar piece of software is executed independently on a very large amount of data; 3D rendering algorithms exploit this feature to obtain real-time computational performance. Similarly, a Nonlinear Model Predictive Control (NMPC) algorithm has the potential to be implemented as a data-parallel algorithm, since it requires a single algorithm (the optimization algorithm) to be executed on a very large amount of data (time steps). In this paper, the power of GPU chips is harnessed to address the specific computational requirements of NMPC problems in process systems. The structure of the paper is organized as follows: firstly, general background information on GPU architecture is provided. The optimization problem arising from NMPC is then explained and the parallel Nelder-Mead (NM) algorithm is described as an optimization algorithm that can be implemented on the GPU. Subsequently, the implementation details of the algorithm are explained and, finally, the performance results achieved are compared against a standard CPU-based implementation.
2. Background

The idea of using GPUs for general computation has only recently gained attention with the introduction of fully programmable graphical chips with high memory bandwidth and high computational horsepower (Owens et al., 2007). It has been demonstrated that when GPUs are applied to general-purpose computation, they provide speedups of orders of magnitude compared to optimized CPU implementations (Owens et al., 2008). To put the speed of GPUs into context, take the recently introduced NVIDIA Fermi architecture (Wasson, 2009). This processor has 512 computing cores and can sustain a peak rate of more than 500 giga floating-point operations per second (GFLOPS), compared to the fastest CPU at the time, which operates at a peak rate of less than 20 GFLOPS. Data-parallel algorithms are defined as a set of algorithms where a similar algorithm is executed independently on a very large amount of data. The GPU has a unique hardware architecture that is suitable for running data-parallel algorithms. A GPU chip principally consists of a set of Single Instruction, Multiple Data (SIMD) multiprocessors and a memory unit that is accessible from all the multiprocessors. Inside a multiprocessor, there are several processors and a shared memory used internally between the processors. Every multiprocessor has a single instruction unit and therefore a single program code is executed on each multiprocessor. A processor has a set of registers, which are accessed locally by a processor's active thread to store/retrieve data; registers are not shared between different processors. A constant cache and a texture cache are provided as components to reduce time-expensive memory-access operations. To implement data-parallel algorithms on the GPU, the chip can be regarded as a coprocessor that cooperates with a main processor (i.e. the CPU). The data-parallel algorithm is expressed in a specific function form, called a kernel, and the device (i.e. the GPU) simultaneously executes a batch of kernel instances (threads), organized in a hierarchical grid structure. A grid contains a batch of similar thread blocks, where each thread block describes a group of threads that can cooperate efficiently through the fast shared memories available on the multiprocessors. Kernels are implemented using different programming environments such as the NVIDIA Compute Unified Device Architecture (CUDA) or OpenCL. In this study, the CUDA platform was used to implement the GPU-based NMPC. The CUDA platform is a parallel programming architecture that extends the C high-level programming language and is applied to implement data-parallel algorithms on the GPU (Nvidia, 2007).
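The grid/block/thread execution model can be emulated sequentially to make it concrete. In the pure-Python sketch below (launcher and kernel names are illustrative, not CUDA API), every (block, thread) pair runs the same kernel body and derives a global index from its coordinates, exactly as a CUDA kernel would; a SAXPY body stands in for any data-parallel computation.

```python
import numpy as np

def launch(kernel, grid_dim, block_dim, *args):
    """Sequential stand-in for a CUDA kernel launch: every (block, thread)
    pair executes the same kernel body over the grid of thread blocks."""
    for block in range(grid_dim):
        for thread in range(block_dim):
            kernel(block, thread, block_dim, *args)

def saxpy_kernel(block, thread, block_dim, a, x, y, out):
    i = block * block_dim + thread   # global thread index
    if i < len(x):                   # bounds guard, as in real CUDA kernels
        out[i] = a * x[i] + y[i]

n = 10
x, y = np.arange(n, dtype=float), np.ones(n)
out = np.zeros(n)
launch(saxpy_kernel, 3, 4, 2.0, x, y, out)   # 3 blocks x 4 threads covers n=10
```

On an actual GPU the two loops in `launch` disappear: all thread instances run concurrently across the SIMD multiprocessors.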
3. Problem Formulation and Optimizer Algorithm

A discrete-time system is assumed to be described by the following state equations:

x_{k+1} = f(x_k, u_k)    (1)

y_k = h(x_k)    (2)

where x_k, u_k and y_k denote the state vector, input vector and system output vector at stage k, respectively, and f and h are nonlinear functions. The goal of NMPC is defined by minimizing a cost function J, expressed by (Henson, 1998):

J = \phi(y_{k+P|k}) + \sum_{j=0}^{P-1} L(y_{k+j|k}, u_{k+j|k})    (3)

over the control move sequence

A = [u_{k|k}, u_{k+1|k}, \ldots, u_{k+M-1|k}]    (4)

subject to:

u_{min} < u_{k+j|k} < u_{max}, 0 \le j \le M-1, and y_{min} < y_{k+i|k} < y_{max}, 1 \le i \le P    (5)

where y_{k+j|k} is the predicted output y_{k+j} based on the information available at time k and, similarly, u_{k+j|k} is the control input vector u_{k+j} at time k. The constants M and P denote the control and prediction horizons, respectively, and \phi, L are nonlinear functions. While numerous numerical approaches can be applied to solve optimization problem (5), in this paper the NM algorithm was selected due to the fact that it can be implemented on parallel architectures. In this algorithm, constraints are handled by introducing penalty functions. Consequently, the constraint violation penalty functions P_1 and P_2 are added to the objective function:

J* = \phi(y_{k+P|k}) + \sum_{j=0}^{P-1} L(y_{k+j|k}, u_{k+j|k}) + \sum_{i=1}^{P} P_1(y_{k+i|k}) + \sum_{j=0}^{M-1} P_2(u_{k+j|k})    (6)

A simplified parallel version of the algorithm (Lee and Wiswall, 2007) is provided below:

1. Evaluate the objective function at the I+1 initial points A_0, A_1, ..., A_I.
2. Reorder A_0, A_1, ..., A_I so that J*(A_0) < J*(A_1) < ... < J*(A_I).
3. Set M̄ = (1/(I-N+1)) \sum_{i=0}^{I-N} A_i.
4. For every i where (I-N+1) \le i \le I, run steps 4.1-4.4 in parallel:
   4.1. Set A_i^R = M̄ + \alpha (M̄ - A_i).
   4.2. If J*(A_i^R) < J*(A_0), then run steps 4.2.1-4.2.2:
        4.2.1. Set A_i^E = A_i^R + \gamma (A_i^R - M̄).
        4.2.2. If J*(A_i^E) < J*(A_0), then set A_i = A_i^E; otherwise set A_i = A_i^R.
   4.3. If (J*(A_i^R) \ge J*(A_0)) and (J*(A_i^R) < J*(A_{i-1})), then set A_i = A_i^R.
   4.4. If J*(A_i^R) \ge J*(A_{i-1}), then run steps 4.4.1-4.4.3:
        4.4.1. If J*(A_i^R) < J*(A_i), then set Ã_i = A_i^R; otherwise set Ã_i = A_i.
        4.4.2. Set A_i^C = \beta (M̄ + Ã_i).
        4.4.3. If J*(A_i^C) < J*(A_i), then set A_i = A_i^C.
5. If the solution has converged, then terminate; otherwise go to step 2.

The algorithm starts by evaluating J* at the I+1 initial points and then reordering the points so that the first point (i.e., A_0) has the least objective value (i.e., J*(A_0)). Afterwards, the N worst points of the sequence created in step 2 are selected and each point is assigned to a parallel processor, which is responsible for finding a new point with a lower objective value. The algorithm continues iterating until the solution converges. The constant parameters \alpha, \beta and \gamma are predefined in the algorithm and I is specified based on the optimization problem.
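The simplified parallel NM algorithm above can be sketched directly. In this illustration the loop over the N worst points stands in for the parallel processors, the objective is a toy quadratic rather than J* of (6), and the parameter defaults and convergence test are assumptions; step numbers in the comments refer to the listing above.

```python
import numpy as np

def parallel_nelder_mead(f, pts, n_worst=2, alpha=1.0, beta=0.5, gamma=1.0,
                         tol=1e-10, max_iter=1000):
    """Sketch of the simplified parallel Nelder-Mead listing; the inner loop
    over the n_worst points would run on parallel processors."""
    pts = [np.asarray(p, dtype=float) for p in pts]
    for _ in range(max_iter):
        pts.sort(key=f)                                  # step 2
        if f(pts[-1]) - f(pts[0]) < tol:                 # step 5: converged
            break
        centroid = np.mean(pts[:-n_worst], axis=0)       # step 3
        for i in range(len(pts) - n_worst, len(pts)):    # step 4 (parallelizable)
            A_i = pts[i]
            A_r = centroid + alpha * (centroid - A_i)    # 4.1 reflection
            if f(A_r) < f(pts[0]):                       # 4.2 expansion
                A_e = A_r + gamma * (A_r - centroid)
                pts[i] = A_e if f(A_e) < f(pts[0]) else A_r
            elif f(A_r) < f(pts[i - 1]):                 # 4.3 accept reflection
                pts[i] = A_r
            else:                                        # 4.4 contraction
                A_t = A_r if f(A_r) < f(A_i) else A_i
                A_c = beta * (centroid + A_t)            # 4.4.2 (beta = 0.5)
                if f(A_c) < f(A_i):
                    pts[i] = A_c
    return min(pts, key=f)

# Toy quadratic standing in for J* of eq. (6), minimum at (1, 1):
f = lambda v: float(np.sum((v - 1.0) ** 2))
rng = np.random.default_rng(0)
init = rng.normal(size=(5, 2)) + 3.0
best = parallel_nelder_mead(f, init)
```

Note that with beta = 0.5 the contraction A_c = beta*(M̄ + Ã) is the midpoint between the centroid and the candidate, the classical NM inside contraction.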
4. Implementation
To implement the GPU-based NMPC controller, the process model equations are developed in the Aspen Custom Modeler (ACM) equation-oriented tool. The NMPC objective function is expressed as equations in the ACM model, and the control handles and the control and prediction horizons are specified in ACM. At each time step, the current plant measurements are assigned to the associated state variables in the model. The NMPC optimization is then performed by running a dynamic optimization of the objective function over the horizons, and the optimization results for the next control stage are subsequently sent back to the plant. This process is repeated at every time step. To perform the NMPC optimization, a special GPU-based optimizer was developed and
integrated with ACM. In the GPU optimizer, two levels of parallelization are considered: firstly, the equations in the model are evaluated simultaneously during each model evaluation; secondly, the model is evaluated concurrently for multiple evaluation points (as stated in step 4 of the optimization algorithm). To realize the first level of parallelization (i.e., parallel equation evaluation within a model), a software utility was developed that receives the process model in the ACM format and generates equivalent CUDA kernels for the model equations. Consequently, whenever a model evaluation is required, the kernels representing the model equations are executed concurrently on the GPU. The code-generator utility can also find equations with similar structure and group them into a single kernel. This improves GPU throughput by reducing the number of kernels. Such equation grouping is particularly effective in process models, which often contain multiple equations with similar structure, for example equations resulting from spatial discretization or from the presence of several similar unit operations in a flowsheet. The generated GPU-based model evaluator can also be applied in any other optimisation algorithm. The second level of parallelization (i.e., concurrent model evaluation at multiple points) is implemented by launching N instances of each equation kernel, where N is the number of evaluation points specified in the parallel NM algorithm.
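The equation-grouping idea can be illustrated with a toy signature function. A real ACM-to-CUDA generator would compare expression trees, but masking identifiers and numeric literals in the equation strings captures the same notion of "similar structure"; every name below is illustrative:

```python
import re
from collections import defaultdict

def signature(expr):
    """Structural signature of an equation string: variable names and
    numeric literals are masked, so equations differing only in the
    symbols and constants they reference compare equal."""
    expr = re.sub(r"[A-Za-z_]\w*", "<id>", expr)    # mask identifiers
    expr = re.sub(r"\d+(\.\d+)?", "<num>", expr)    # mask numeric literals
    return expr.replace(" ", "")

def group_equations(equations):
    """Group equations by signature; each group would be fused into a
    single kernel launched once over all members."""
    groups = defaultdict(list)
    for eq in equations:
        groups[signature(eq)].append(eq)
    return list(groups.values())
```

The two heat-balance-like equations in the check below fall into one group and would therefore be compiled into a single kernel covering both instances.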
5. Numerical results and discussion
A crystallizer model describing a Continuous Mixed Suspension Mixed Product Removal (CMSMPR) unit operation was selected (Pantelides and Oh, 1996). In this unit operation, potassium sulphate crystals are produced from aqueous solutions; crystal breakage was assumed to be negligible. The first-principles dynamic model for this unit operation was taken from the ACM examples. This unit operation was selected because it contains a relatively high number of equations of different types, including integral, partial differential and algebraic equations. The controller is designed to keep the Mean value of Crystal Numbers (MCN) as close as possible to a reference trajectory by adjusting the crystallizer temperature (10 ≤ T(°C) ≤ 80) with a sampling time of 30 minutes. In this controller, the control and prediction horizons are 6 and 10 steps, respectively. To benchmark the algorithm, a PC with an Intel 2.98 GHz Core(TM) 2 E7500 CPU and an NVIDIA Fermi GTX 480 GPU was used. To confirm the correctness of the approach, the controller was tested in the presence of a 10% disturbance in the feed concentration at time t = 0. As shown in Figure 1, the controller brings the process output (i.e., the MCN) close to the reference trajectory. The scalability of the approach was assessed by changing the number of equations in the model through the discretization spacing. Three accuracy levels were thus defined based on the number of equations. The NMPC computation time was measured for each accuracy level, and the results are presented in Table 1. The GPU-based approach consistently outperforms the standard NM implementation, particularly as the number of equations increases.

Table 1: Computational time comparison of the GPU-based approach and the standard approach

Accuracy level               Algorithm   Time (m:s)
Level 0 (5014 equations)     GPU         14:29
                             Standard    30:08
Level 1 (9917 equations)     GPU         18:47
                             Standard    47:04
Level 2 (19817 equations)    GPU         24:59
                             Standard    68:05
Figure 1: Dynamic Simulation Results: The controller at accuracy level 0 is tested with 10% disturbance in feed concentration where temperature was used to control the MCN close to reference trajectory. The control and prediction horizons are 6 and 10 steps.
For example, for the model containing 19,817 equations, our approach runs 2.8 times faster than the standard approach, which satisfies the response-time constraint of 30 minutes. In these tests, the results were obtained with a single GPU card. Given the good scalability of the approach, the current computational time can be improved further by adding more GPUs to the architecture. The performance of typical sequential CPU-based algorithms, however, is limited by the maximum speed of a single CPU chip, and adding more CPUs will not affect the overall performance.
6. Conclusion
A new hardware platform was proposed for the implementation of model predictive control of nonlinear process systems. Numerical results show a considerable improvement in computational performance. Considering these results, together with the trends in the computational power, price and availability of GPU chips, these chips can be regarded as very attractive co-processors for industrial NMPC applications. Future work includes applying the GPU-based model evaluator in other optimisation algorithms and benchmarking the current approach against the sequential versions of the aforementioned approaches.
References
Cannon, M. (2004) Efficient nonlinear model predictive control algorithms. Annual Reviews in Control, 28, 229-237.
Diehl, M., Bock, H.G., Schlöder, J.P., Findeisen, R., Nagy, Z. and Allgöwer, F. (2002) Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. Journal of Process Control, 12, 577-585.
Henson, M.A. (1998) Nonlinear model predictive control: current status and future directions. Computers and Chemical Engineering, 23, 187-202.
Lee, D. and Wiswall, M. (2007) A parallel implementation of the simplex function minimization routine. Computational Economics, 30, 171-187.
NVIDIA (2007) Compute Unified Device Architecture Programming Guide. NVIDIA, Santa Clara, CA.
Owens, J.D., Houston, M., Luebke, D., Green, S., Stone, J.E. and Phillips, J.C. (2008) GPU computing. Proceedings of the IEEE, 96, 879-899.
Owens, J.D., Luebke, D., Govindaraju, N., Harris, M., Krüger, J., Lefohn, A.E. and Purcell, T.J. (2007) A Survey of General Purpose Computation on Graphics Hardware. Wiley Online Library.
Pantelides, C.C. and Oh, M. (1996) Process modelling tools and their application to particulate processes. Powder Technology, 87, 13-20.
Wasson, S. (2009) Nvidia's 'Fermi' GPU architecture revealed.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Statistical Process Control of Multivariate Systems with Autocorrelation
Tiago J. Rato, Marco S. Reis
CIEPQPF, Department of Chemical Engineering, University of Coimbra, Rua Sílvio Lima, 3030-790, Coimbra, Portugal
Abstract
Current industrial processes encompass a large number of interdependent variables, which very often exhibit autocorrelated behavior, owing to the dynamic nature of the phenomena involved combined with the high sampling rates of modern data acquisition systems. Multivariate statistical process control charts, such as Hotelling's T2, MEWMA and MCUSUM, have been developed to handle the cross-correlation issue, but they are not able to properly handle the presence of autocorrelation in the data. In order to address both problems simultaneously, alternative procedures were developed, namely by adapting the control limits, using residuals from time-series modeling and applying data transformation techniques; some of these will be addressed in this paper, along with others we now propose. The proposed monitoring methods use a combination of Dynamic PCA (DPCA), ARMA models and missing data estimation methods, allowing for the simultaneous reduction of data dimensionality while capturing its dynamic behavior, therefore also handling the autocorrelation effects. The results obtained show that the proposed methodologies based upon missing data estimation tend to present better performance, constituting good alternatives to the methodologies currently in use.
Keywords: Dynamic multivariate statistical process control; Principal component analysis; Autoregressive moving-average models; Missing data.
1. Introduction
The objective of statistical process control (SPC) is to monitor the stability and performance of a process over time, in order to verify whether it remains within a state of "statistical control" (Kourti and MacGregor, 1995). To accomplish this goal, traditional SPC charts (Shewhart, CUSUM and EWMA) are often used for monitoring key product quality variables in a univariate way (Montgomery, 2005). However, with the development of processes and instrumentation, the need to properly monitor many correlated variables led to the development of multivariate control charts such as Hotelling's T2 (Hotelling, 1931), MEWMA (Lowry et al., 1992a) and MCUSUM (Lowry et al., 1992b). For larger systems, even these statistics present problems. For instance, the inversion of the covariance matrix in the Hotelling's T2 statistic may run into numerical instability problems for highly correlated sets of variables, or may even be impossible if the matrix becomes rank deficient; therefore, new methodologies based on latent variables techniques, such as Principal Components Analysis (PCA) (Jackson, 1991, Jolliffe, 2002), were developed to address these limitations. The statistics used in the latent variables frameworks, namely PCA or partial least squares, PLS (Geladi and Kowalski, 1986, Martens and Naes, 1989, Wold et al., 2001),
are typically based on the model scores, to which a Hotelling's T2 statistic is applied. This statistic is usually complemented with a residual statistic, Q (also known as the squared prediction error, SPE). However, all these methods assume that variables are independent along time, a hypothesis that is often not met in practice, especially with the high sampling rates currently achieved with modern instrumentation. To address this issue, Ku et al. proposed an SPC procedure based on dynamic principal component analysis (DPCA), an extended version of PCA that includes time-lagged variables in order to accommodate, and tacitly model, the dynamic behavior of variables within the same PCA model (Ku et al., 1995). Unfortunately, one can easily verify that the direct implementation of such a method still leads to autocorrelated statistics, which hampers its proper implementation. Therefore, to better handle this issue, alternative approaches must be adopted, such as time-series modeling (Harris and Ross, 1991, Montgomery and Mastrangelo, 1991), control limit adjustment/correction (Vermaat et al., 2008), variable transformation (Bakshi, 1998, Reis et al., 2008) and the use of non-overlapping moving windows. To address all these issues simultaneously, we present a set of candidate methodologies, some of which are new. The newly proposed methodologies use a combination of DPCA, ARMA models and missing data estimation methods, allowing for the simultaneous reduction of the data dimensionality (correlation structure) while capturing its dynamic behavior, therefore also handling the autocorrelation effects. The rest of this paper is organized as follows. In the following two sections, we briefly present the techniques studied and show the results obtained for the systems tested. Finally, we conclude with a summary of the contributions of this paper.
2. Methods
In this paper we analyze the performance of several multivariate SPC methods. These methods are based on the Hotelling's T2 and Q statistics and may make use of time series (TS) models and missing data (MD) estimation methods. As the total number of statistics under test is large, we only present their general form in this paper, according to Table 2. All these statistics are applied to the scores obtained through the use of PCA, DPCA or PLS models (in the case of PLS, the X-scores). Regarding approaches based on DPCA models, two different methods were tested to determine the number of lags (l) to be used for the variables, as indicated in Table 1. The LS1 method is the one proposed by Ku et al. (1995) and is based on the number of linear relations needed to describe the system. The LS2 method, on the other hand, estimates the number of lags for each variable based on a succession of singular value decompositions and parallel analyses of an optimization function based on the smallest singular values of each decomposition.
Table 1. Definitions of the lag selection methods.
Designation    Lag selection method
LS1            Proposed by Ku et al. (1995).
LS2            New proposed method.
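The lag-augmented data matrix that either selection method feeds into DPCA can be sketched as follows; `lagged_matrix` is a hypothetical helper name, with per-variable lags corresponding to LS2 and a common lag for all variables corresponding to LS1:

```python
import numpy as np

def lagged_matrix(X, lags):
    """Augment a data matrix with time-lagged copies of each variable,
    as in DPCA (Ku et al., 1995).

    X    : (n, m) data matrix, one column per variable.
    lags : list of m per-variable lag counts l_j.
    Returns a matrix whose row at time k stacks
    x_j(k), x_j(k-1), ..., x_j(k-l_j) for every variable j.
    """
    n, m = X.shape
    lmax = max(lags)
    cols = []
    for j in range(m):
        for l in range(lags[j] + 1):
            # rows lmax .. n-1 are the usable time points
            cols.append(X[lmax - l: n - l, j])
    return np.column_stack(cols)
```

Ordinary PCA applied to this augmented matrix then yields the DPCA model, with the lagged columns tacitly encoding the process dynamics.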
As an example of one of the statistics analyzed, consider the DPCA-LS2-MD-S3 statistic. This statistic has the form of S3 (as it makes use of the scores estimated by missing data, MD) and is based on a DPCA model where the number of lags was estimated by the LS2 method.
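One common missing-data estimator, assumed here purely for illustration, is the conditional-mean (regression-type) estimator built from the partitioned score covariance; the paper compares several MD methods and does not necessarily use this exact one:

```python
import numpy as np

def estimate_scores_md(t_obs, S, idx_obs, idx_mis):
    """Estimate unobserved scores by conditional expectation, one
    standard missing-data (MD) device of the kind behind the
    S3/R2-type statistics.

    Given the joint score covariance S and the observed part t_obs,
    the missing part is estimated (zero-mean scores assumed) as
        t_mis_hat = S_mo @ inv(S_oo) @ t_obs
    where S_oo and S_mo are the observed/observed and
    missing/observed blocks of S.
    """
    S_oo = S[np.ix_(idx_obs, idx_obs)]
    S_mo = S[np.ix_(idx_mis, idx_obs)]
    return S_mo @ np.linalg.solve(S_oo, t_obs)
```

The S3 statistic then monitors the discrepancy t − t̂ between the observed scores and this prediction.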
Table 2. Definition of the statistics used according to the complementary subspaces they are relative to.

Subspace: Original space (residual statistics)
R1: Squared prediction error (Q) for the reconstructed data with the observed scores:
    R1 = (x − Pt)^T (x − Pt)
R2: Hotelling's T2 for the reconstructed data obtained with the estimated scores:
    R2 = (x − Pt̂)^T S_{x−Pt̂}^{−1} (x − Pt̂)

Subspace: PCA subspace
S1: Hotelling's T2 for the observed scores:
    S1 = t^T S_t^{−1} t
S2: Hotelling's T2 for the observed and estimated scores:
    S2 = [t; t̂]^T S_{t,t̂}^{−1} [t; t̂]
S3: Hotelling's T2 for the residual between observed and estimated scores:
    S3 = (t − t̂)^T S_{t−t̂}^{−1} (t − t̂)
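For concreteness, the two simplest entries of Table 2 (S1 and R1) can be computed for a single mean-centred observation as follows; the function name and argument layout are illustrative:

```python
import numpy as np

def spc_statistics(x, P, t_cov):
    """Compute the S1 and R1 statistics of Table 2 for one observation.

    x     : mean-centred observation vector (length m),
    P     : (m, a) loading matrix with orthonormal columns,
    t_cov : (a, a) covariance of the retained scores.
    S1 is Hotelling's T2 on the observed scores t = P^T x;
    R1 is the squared prediction error of the reconstruction P t.
    """
    t = P.T @ x                                  # observed scores
    S1 = float(t @ np.linalg.solve(t_cov, t))    # T2 in the score space
    r = x - P @ t                                # residual in original space
    R1 = float(r @ r)                            # squared prediction error (Q)
    return S1, R1
```

The remaining statistics follow the same pattern, with the estimated scores t̂ substituted or combined as indicated in the table.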
3. Results
In this section we present a summary of the results obtained by applying the monitoring statistics of Table 2 to a case study consisting of the monitoring of a simulated distillation column.
3.1. Case study: Wood & Berry distillation column
Wood and Berry (Wood and Berry, 1973) presented a linear model approximation for the dynamics of a binary distillation column separating methanol from water, in which the distillate (xD) and residual (xB) methanol weight fractions are expressed as functions of the reflux flow rate (FR) and the reboiler steam flow rate (FS). The compositions of the top and bottom products, expressed in weight % of methanol, are the output variables. The reflux and reboiler steam flow rates are the inputs (expressed in lb/min); time units are in minutes. For the simulations, FR and FS are considered normally distributed random variables with zero mean (variables are expressed in deviation terms) and unit variance; xD and xB are computed according to Eq. (1), with the addition of noise (signal-to-noise ratio of about 10 dB) through the transfer functions related to the feed flow rate and feed composition given by (Lakshminarayanan et al., 1997):
[ xD(s) ]   [ 12.8 e^(−s) / (16.7s + 1)    −18.9 e^(−3s) / (21s + 1)   ] [ FR(s) ]
[ xB(s) ] = [ 6.6 e^(−7s) / (10.9s + 1)    −19.4 e^(−3s) / (14.4s + 1) ] [ FS(s) ]        (1)
The observation measurements vector was defined as x = [ xD xB FR FS ]T. In order to construct the latent variables models, 3000 observations were collected under normal operation conditions (Xref). The data matrix Xref was then used to estimate the number of lags (needed for the construction of the DPCA models). From this analysis the number
T. Rato et al.
500
of lags obtained through the LS1 approach was 2 for all variables. With the LS2 method one gets l = [2 2 9 4], that is, 2 lags for xD and xB, 9 lags for FR and 4 lags for FS. The simulation model was run for a set of perturbations in the sensor measurements, and the corresponding Average Run Length (ARL) was determined for each perturbation. The upper control limits (UCL) for all statistics were set by trial and error so that the in-control average run length (ARL0) was 370. For each perturbation, 3000 datasets were generated, leading to 3000 run lengths, from which the ARL values were computed for each statistic. The ARL values, along with the associated 95% confidence intervals (obtained through bootstrap), for a step perturbation in the mean of the first sensor with magnitude k standard deviations, are presented in Fig. 1. These results show no significant difference between traditional static PCA and dynamic PCA in this case (Fig. 1(a)), even when the LS2 method is used to estimate the number of lags. In fact, DPCA-LS1-0-R1 (which uses the Ku et al. approach) gives better results than DPCA-LS2-0-R1. Furthermore, even the statistics that incorporate a dynamic modeling component (such as DPCA) still present some autocorrelation. This issue is mitigated by the use of an implicit prediction methodology, namely an MD approach to estimate future values. The results show that such an approach not only reduces the autocorrelation of the statistics but also improves the control chart performance (Fig. 1). In this analysis, the missing-data-based statistics (especially DPCA-LS2-MD-R2) present the best performance and the weakest final autocorrelations.
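The ARL protocol (thousands of simulated datasets per perturbation, with bootstrap 95% intervals) can be sketched generically; the exponential test stream in the check below is only a self-verification device, not the paper's monitoring statistic:

```python
import numpy as np

def run_length(stat_stream, ucl):
    """Run length: 1-based index of the first statistic above the UCL
    (censored at the stream length if no alarm is raised)."""
    above = np.nonzero(stat_stream > ucl)[0]
    return int(above[0]) + 1 if above.size else len(stat_stream)

def estimate_arl(simulate, ucl, n_datasets=3000, seed=0):
    """Monte-Carlo ARL with a bootstrap 95% confidence interval,
    mirroring the protocol of the paper.  `simulate(rng)` must return
    one stream of monitoring statistics."""
    rng = np.random.default_rng(seed)
    rls = np.array([run_length(simulate(rng), ucl)
                    for _ in range(n_datasets)])
    boot = rng.choice(rls, size=(1000, rls.size)).mean(axis=1)
    return rls.mean(), (np.percentile(boot, 2.5), np.percentile(boot, 97.5))
```

In practice the UCL would first be tuned (by trial and error, as in the paper) so that the in-control ARL equals 370, and the same machinery would then be reused for each out-of-control perturbation.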
Figure 1 ARL for the tested methodologies. 1) static PCA model (PCA-0-S1 and PCA-0-R1); 2) DPCA model of (Ku et al., 1995) (DPCA-LS1-0-S1 and DPCA-LS1-0-R1); 3) application of missing data to the DPCA model of (Ku et al., 1995) (DPCA-LS1-MD-S3 and DPCA-LS1-MD-R2); 4) application of missing data to the DPCA model with the new lag selection method (DPCA-LS2-MD-S3 and DPCA-LS2-MD-R2).
4. Discussion and Conclusions
This study was also performed for the multivariate AR(l) process presented by Ku et al. (1995) and for a Continuous Stirred-Tank Reactor (CSTR) system with a heating jacket. In all these studies, the DPCA-LS2-MD-R2 statistic consistently presented superior performance. This statistic is new, and belongs to a class of statistics that make use of missing data methods to predict the future scores of a DPCA model. This class of statistics has proven to be a good alternative to traditional
methodologies, as they present better performance and lower autocorrelation. However, we would like to point out that such statistics do require a suitable method to estimate the number of lags needed to construct the DPCA model, such as the one we have also proposed. We believe that the new statistics, based on MD estimation, are eligible for future applications as alternatives to the current ones based strictly on PCA and DPCA.
5. Acknowledgements
Tiago J. Rato acknowledges the Portuguese Foundation for Science and Technology for his PhD grant (grant SFRH/BD/65794/2009). Marco S. Reis also acknowledges financial support through project PTDC/EQU-ESI/108374/2008, co-financed by the Portuguese FCT and the European Union's FEDER through "Eixo I do Programa Operacional Factores de Competitividade (POFC)" of QREN (with ref. FCOMP-010124-FEDER-010397).
References
B. R. Bakshi, 1998, Multiscale PCA with Application to Multivariate Statistical Process Control, AIChE Journal, 44, 7, 1596-1610.
P. Geladi and B. R. Kowalski, 1986, Partial Least-Squares Regression: a Tutorial, Analytica Chimica Acta, 185, 1-17.
T. J. Harris and W. H. Ross, 1991, Statistical Process Control Procedures for Correlated Observations, The Canadian Journal of Chemical Engineering, 69, 48-57.
H. Hotelling, 1931, The Generalization of Student's Ratio, The Annals of Mathematical Statistics, 2, 3, 360-378.
J. E. Jackson, 1991, A User's Guide to Principal Components, Wiley, New York.
I. T. Jolliffe, 2002, Principal Component Analysis, Springer, New York.
T. Kourti and J. F. MacGregor, 1995, Process analysis, monitoring and diagnosis, using multivariate projection methods, Chemometrics and Intelligent Laboratory Systems, 28, 3-21.
W. Ku, R. H. Storer and C. Georgakis, 1995, Disturbance detection and isolation by dynamic principal component analysis, Chemometrics and Intelligent Laboratory Systems, 30, 179-196.
S. Lakshminarayanan, S. L. Shah and K. Nandakumar, 1997, Modeling and Control of Multivariable Processes: Dynamic PLS Approach, AIChE Journal, 43, 9, 2307-2322.
C. A. Lowry, W. H. Woodal, C. W. Champ and C. E. Rigdon, 1992a, A Multivariate Exponentially Weighted Moving Average Control Chart, Technometrics, 34, 46-53.
C. A. Lowry, W. H. Woodall, C. W. Champ and S. E. Rigdon, 1992b, A Multivariate Exponentially Weighted Moving Average Control Chart, Technometrics, 34, 1, 46-53.
H. Martens and T. Naes, 1989, Multivariate Calibration, Wiley, Chichester.
D. C. Montgomery, 2005, Introduction to Statistical Quality Control, Wiley.
D. C. Montgomery and C. M. Mastrangelo, 1991, Some Statistical Process Control Methods for Autocorrelated Data, Journal of Quality Technology, 23, 3, 179-193.
M. S. Reis, B. R. Bakshi and P. M. Saraiva, 2008, Multiscale statistical process control using wavelet packets, AIChE Journal, 54, 9, 2366-2378.
M. B. Vermaat, R. J. M. M. Does and S. Bisgaard, 2008, EWMA Control Chart Limits for First- and Second-Order Autoregressive Processes, Quality and Reliability Engineering International, 24, 573-584.
S. Wold, M. Sjöström and L. Eriksson, 2001, PLS-Regression: A Basic Tool of Chemometrics, Chemometrics and Intelligent Laboratory Systems, 58, 109-130.
R. K. Wood and M. W. Berry, 1973, Terminal composition control of a binary distillation column, Chemical Engineering Science, 28, 9, 1707-1717.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Implementation of model predictive controller in a pharmaceutical development plant
Stéphane Hattou,a Marie-Véronique Le Lann,b,c Karlheinz Preuss,d Boris Roussel,a Michel Cabassud,e,c
a SANOFI-AVENTIS, 371, rue du Pr Joseph Blayac, 34184 Montpellier cedex 4, France
b CNRS; LAAS; 7, avenue du Colonel Roche; F-31077 Toulouse, France
c Université de Toulouse; UPS, INSA, INP, ISAE; LAAS, LGC; F-31077 Toulouse, France
d engineo GmbH, Ginsheimer Str. 1, 65462 Gustavsburg (Mainz), Germany
e LGC, BP 84234, Campus INP-ENSIACET, 4 allée Emile Monso, 31030 Toulouse cedex 4, France
Abstract
Predictive control has spread to various domains, such as the refining, chemical and metallurgical industries. In the pharmaceutical industry, however, it remains relatively rare, owing to two particularities of this domain: the use of batch processes and the necessity to satisfy strict validation procedures. In this context, a predictive controller, the Model Gradient Predictive Controller (MGPC), has been developed and tested in real-time application on various chemical reactors in the chemical development plant (PILOT and KILOLAB, Sanofi-Aventis, Montpellier, France). This plant is devoted to investigating new reactions before passing them to industrial day-to-day production. In such a context, the same apparatus is used for carrying out different operations, such as chemical reactions (changing several times a week) and crystallizations with highly nonlinear temperature set-point profiles (such as cubic profiles).
Keywords: Model Predictive Controller, Batch reactor, Pharmaceutical Industry.
1. Introduction
The heart of a drug manufacturing process is the batch or fed-batch reactor, which is still widely used in the fine chemicals and pharmaceutical industries. It is often characterized as flexible, multipurpose equipment: the same apparatus is used to carry out different reactions and operations under various operating conditions, involving the chaining of sequences. At the same time, such apparatus are well known to be a major potential cause of poor operating reproducibility and of severe damage, up to runaway problems. To minimize these risks it is necessary to implement efficient automation and supervisory control, but any strategy developed must also accommodate the flexible, multipurpose character of the equipment. Given the lack of on-line sensors for concentration measurement, the control of a batch reactor remains essentially a temperature control problem. Furthermore, since most batch operations are successions of sequences (pre-heating, reactions, cooling followed by a crystallization), the batch reactor has to be fitted with a flexible heating-cooling system.
Based on previous work, the concept of thermal flux control has been adopted (Cabassud et al., 1996; Louleh et al., 1999). It consists in choosing the thermal flux transferred from the reactor jacket to the reaction mixture as the manipulated variable, and the temperature of the reaction mixture as the controlled variable. In this context, a predictive controller has been developed, the Model Gradient Predictive Controller, denoted MGPC. The main difference between this controller and a classical model predictive controller is the minimized objective function, which is expressed as a function of the temperature gradient. The computed thermal flux is then used in a cascade control scheme to address the correct heating or cooling source and to determine whether any change of configuration of the heating-cooling system is needed. To accommodate the multipurpose character of the process, a procedure for on-line parameter identification has been added. This paper gives real-time application results of such a controller on various chemical reactors used in the chemical development plant (PILOT and KILOLAB, Sanofi-Aventis, Montpellier, France) to investigate new reactions before passing them to industrial day-to-day production. This plant gathers about ten reactors of different sizes (from 10 to 1600 liters) and different materials (stainless steel, glass-lined). Moreover, these reactors are used to produce the first quantities of drugs needed to perform clinical tests and consequently are subject to drastic drug manufacturing regulations.
2. The model predictive controller
As said previously, the Model Gradient Predictive Controller (MGPC) takes its principal originality, compared to a classical model predictive controller, from the conjunction of two principles: the minimization of an objective criterion expressed as a function of the gradients of the process output and of the reference trajectory, and the choice of future decision variable (the manipulated or control variable), which in the present case is the thermal flux to be exchanged between the reaction mixture and the utility fluid. Using the thermal flux as the control variable makes it possible to automatically address the correct heating or cooling source and to determine whether any change of configuration of the heating-cooling system is needed (Cabassud et al., 1996; Louleh et al., 1999). In a classical model predictive control scheme, the resulting amount of computation depends on the number of values of the manipulated variable in the control horizon. As only the manipulated variable for the next sample time is applied to the process, most of the computation is spent determining values of the manipulated variable that will never be applied. Therefore, much computation can be saved if only the value of the manipulated variable for the next sample time is computed. The predictive character of the control algorithm is maintained by considering the set point at the end of the prediction horizon. The value of the set point is used to calculate the corresponding values of the reference trajectory, which fixes the closed-loop dynamics of the system. With these considerations, a new formulation of the minimized criterion has been adopted (Preuss et al., 2003): J° = || d/dt Tref(k+Hp°) − d/dt Tr(k+Hp°) ||
(1)
where Tref(k+Hp°) and Tr (k+Hp°) represent respectively the reference trajectory and the reactor temperature computed at time (k+Hp°) Δt (Δt being the sampling time), Hp° is the output horizon. As said previously, the process model used to compute the future values of the process output gives a relation between the thermal flux q transferred from the reactor jacket to the reaction mixture (manipulated variable) and the temperature of the reaction mixture Tr. The simple model consists of one differential equation:
d/dt Tr = b * q with: b = 1 / (Mr * Cpr)
(2)
Mr and Cpr are respectively the mass and the heat capacity of the reaction mixture. The details of the calculations for obtaining the optimal control variable, together with the underlying assumptions, can be found in (Preuss et al., 2003). This value is given by: q(k+1) = [Tref(k+Hp°) − Tr(k)]/[Δt * Hp°] * Mr * Cpr
(3)
where Tr is the inner reactor temperature measured on-line. In the case of a chemical reaction with reactant feeding, heat losses, etc., the thermal flux can also be expressed as

q(k+1) = UA(Tj(k) − Tr(k)) + K(Text(k) − Tr(k)) + fc Cpc (Tc(k) − Tr(k)) − Σ_{j=1}^{nr} rj ΔHrj V        (4)

Equalling the two expressions, the value of Tj(k) can be determined and sent, in a cascade control scheme, to the low-level controllers (Fig. 1):

Tde(k) = τ [ (Tref(k+Hp°) − Tr(k))/(Δt Hp°) − K(Text(k) − Tr(k))/(Mr Cpr) − PT/(Mr Cpr) ] + Tr(k)        (5)

with PT = fc Cpc (Tc(k) − Tr(k)) − Σ_{j=1}^{nr} rj ΔHrj V and τ = Mr Cpr / (UA)
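Equations (3)-(5) combine into a few lines of code. The numbers in the check below are arbitrary, the function names are illustrative, and the reaction/heat-loss terms are lumped into PT as in Eq. (5):

```python
def mgpc_flux(Tr, Tref_end, Mr, Cpr, dt, Hp):
    """MGPC thermal-flux law of Eq. (3): the flux that drives the
    reactor temperature to the reference at the end of the prediction
    horizon, under the one-equation model dTr/dt = q / (Mr * Cpr)."""
    return (Tref_end - Tr) / (dt * Hp) * Mr * Cpr

def jacket_setpoint(q, Tr, UA, K=0.0, Text=25.0, PT=0.0):
    """Invert Eq. (4) for the jacket temperature set-point sent to the
    slave controllers: q = UA*(Tj - Tr) + K*(Text - Tr) + PT, so
    Tj = Tr + (q - K*(Text - Tr) - PT) / UA, which matches Eq. (5)
    once tau = Mr*Cpr/UA is substituted."""
    return Tr + (q - K * (Text - Tr) - PT) / UA
```

Setting K = 0 and PT = 0 recovers the pure jacket heat-transfer case; a feedforward term for an endothermic reaction enters through PT, as tested in Section 3.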
[Figure 1 diagram: the MGPC master controller receives the set point and the inner reactor temperature Tr; its output q is converted into an external set-point Tdesp for the inlet jacket temperature Tde, handled by a "heating" P slave controller and a "cold" P slave controller. The hot utility acts through one control valve (or electrical heating, from A to 100%); the cold utilities act through a split-range pair of valves (small valve from B to C%, big valve from C to 100%), with dead zones between the operating ranges.]
Figure 1: Implementation structure of the cascade control scheme
3. Experimental results
The implementation of such a controller has been performed on a 100-liter glass-lined reactor in the KILOLAB unit at Sanofi-Aventis, Montpellier, France (Fig. 2). The control algorithm has been implemented in the RS3 Fisher-Rosemount SCADA system (Fig. 3).
Figure 2 : Semi-batch reactor
Figure 3 : The glass-lined reactor SCADA system
Different experiments have been performed to test the robustness of the predictive controller (Fig. 5), and in particular the ability of the inner reactor temperature to track highly nonlinear, 3rd-order set-point profiles (Figs. 6a and 6b). These atypical profiles are needed during crystallization steps to optimize the quality of the produced chemical compounds (Mullin's cubic curves). A comparison with the Predictive Functional Controller (Richalet, 1993) has been performed. With PFC, the resolution of the equations depends on the type of set-point profile, through the number of coincidence points and the basis functions to be chosen, which is not the case with MGPC. With a two-coincidence-point PFC (Fig. 7), the set-point tracking is not perfect. Results similar to MGPC were obtained with three coincidence points, which significantly increases the complexity of the algorithm and therefore of its implementation in the SCADA system. Endothermic reaction experiments have also been performed to test how a feedforward compensation of the reaction heat consumption (Eq. 5, via the variable PT) can improve the performance (Fig. 8).
4. Conclusions
The implementation of the MGPC controller has been successfully performed in a multipurpose pharmaceutical pilot unit. In particular, it has been shown that it is possible to track highly nonlinear set point profiles, which is crucial for the production of a key pharmaceutical product. These implementations are currently being extended to a total of 10 reactors from 100 to 1600 liters of different types, stainless steel or glass-lined, fitted out with different heating-cooling systems (mono- or multi-fluid).
References
M. Cabassud, A. Chamayou, L. Pollini, Z. Louleh, M.V. Le Lann and G. Casamatta, 1996, Procédé de contrôle thermique d'un réacteur discontinu polyvalent à partir d'une pluralité de sources de fluides thermiques, et dispositif de mise en œuvre de ce procédé, Patent No. 95.03753, Bulletin officiel de la Propriété Industrielle, 39, 27/09/96, International Patent PCT/FR96/00426.
Z. Louleh, M. Cabassud, M.V. Le Lann, 1999, A new strategy for temperature control of batch reactors: experimental application, Chemical Engineering Journal, 75, pp. 11-20.
S. Hattou et al.
K. Preuss, M.V. Le Lann, M. Cabassud, G. Anne-Archard, 2003, Implementation procedure of an advanced supervisory and control strategy in the pharmaceutical industry, Control Engineering Practice, Vol. 11, No. 12, pp. 1449-1458.
J. Richalet, 1993, Pratique de la commande prédictive, Paris, Hermès.
Figure 4: Comparison between computed and measured inner reactor temperatures
Figure 5: Experiment with a succession of heating-cooling steps
Figure 6a: Experiment with a 3rd-order set point cooling temperature profile
Figure 6b : SCADA screen copy during a crystallization
[Figure 7 plot: "MGPC and PFC performances" — temperature (°C, -10 to 90) versus time (hours, 0.00 to 3.00); curves: MGPC reactor temperature, temperature set point, MGPC jacket temperature, PFC reactor temperature, PFC jacket temperature.]
Figure 7: Comparison between MGPC and a two-coincidence-point PFC
Figure 8: Endothermic reaction with compensation
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Hybrid Branch-and-Cut Approach for the Capacitated Vehicle Routing Problem
Chrysanthos E. Gounaris (a), Panagiotis P. Repoussis (b), Christos D. Tarantilis (b), and Christodoulos A. Floudas (a)
(a) Computer-Aided Systems Laboratory, Department of Chemical and Biological Engineering, Princeton University, NJ 08544, USA
(b) Center for Operations Research & Decision Systems, Department of Management Science & Technology, Athens University of Economics & Business, Athens 11362, GR
Abstract This paper presents a hybrid optimization approach that combines deterministic and metaheuristic algorithms for the Capacitated Vehicle Routing Problem (CVRP). The approach combines a new branch-and-cut framework, that utilizes a two-commodity flow representation and novel heuristic-based procedures to separate various classes of cuts, with a subordinate Adaptive Memory Programming metaheuristic algorithm for the identification of high quality solutions. New local-scope cuts are suggested to exclude infeasible or suboptimal solutions, break problem symmetries, and tighten constraints. Computational experiments illustrate the potential of the new approach. Keywords: Vehicle Routing, Distribution Logistics, Branch-and-Cut
1. Introduction
The Vehicle Routing Problem (VRP) deals with the optimal assignment and service sequence of a set of customers to a fleet of vehicles and is one of the most studied combinatorial optimization problems in the operations research literature (Laporte, 2009). However, unlike the Traveling Salesman Problem, where 1000-customer instances can be solved to optimality on a routine basis, instances of VRP with more than one hundred customers can be hard to solve (Baldacci et al., 2010). In this paper, we address the Capacitated Vehicle Routing Problem (CVRP). Given a homogeneous fleet of capacitated vehicles, the objective is to design a set of least-cost round-trip routes to serve a set of customers with known demand. Previous methods for solving the CVRP include branch-and-cut (Lysgaard et al., 2004), branch-and-cut-and-price (Fukasawa et al., 2006), and set partitioning approaches (Baldacci et al., 2008). Heuristic methods, such as iterative improvement, evolutionary algorithms, and hybrid metaheuristic schemes, have also made significant contributions; however, most of them fail to provide a good compromise between solution quality and computational speed. Our goal is to develop a novel hybrid optimization method that combines – in a cooperative fashion – algorithms that provide a theoretical guarantee of reaching optimal solutions with metaheuristic algorithms, which typically exhibit superior performance with regard to the speed of obtaining good quality solutions. In particular, we aim at exploiting synergies between an Adaptive Memory Programming (AMP) metaheuristic algorithm and a Branch-and-Cut (BC) solution framework. The former generates and continuously updates (via information from the relaxation solutions at each node of the BC tree) a reference set of high quality diversified solutions. This pool of elite solutions is then used for updating the incumbent and for guiding the BC tree search.
2. Two-Commodity Network Flow Formulation
Let V_0 = {0, 1, …, N, N+1} be a node set and A = {(i,j) : 0 ≤ i < j ≤ N+1} be the resulting undirected arc set. The set V = V_0 \ {0, N+1} represents the N customers, while nodes 0 and N+1 represent duplicate instances of the single depot (for departure and arrival of vehicles, respectively). A cost c_ij ≥ 0 is associated with each arc (i,j) ∈ A.
Furthermore, there exists a homogeneous fleet of K vehicles with maximum carrying capacity Q. Each customer i ∈ V requires q_i units of product (0 < q_i ≤ Q). The solution of the CVRP calls for the determination of a set of vehicle routes with a minimum total cost, such that each customer is visited only once by exactly one vehicle, all available vehicles are used, each vehicle route starts and ends at the depot, and the cumulative customer demand satisfied by each route does not exceed the capacity of the vehicle. Baldacci et al. (2004) were the first to describe the CVRP with a two-commodity network flow formulation. We use their formulation in a slightly sparser form. For each of the undirected arcs (i,j) ∈ A, a binary variable ξ_ij indicates whether the arc is traversed (in either direction), while two flow variables, x_ij and x_ji, represent the vehicle's load and residual capacity (empty space). Eqs. (1-7) express the CVRP:

min_{ξ,x} Σ_i Σ_{j>i} c_ij ξ_ij   (1)

s.t.
Σ_{j<i} ξ_ji + Σ_{j>i} ξ_ij = 2, ∀i ∈ V  &  Σ_j ξ_0j = Σ_i ξ_{i(N+1)} = K   (2 & 3)
x_ij + x_ji = Q ξ_ij, ∀(i,j) ∈ A   (4)
Σ_{j∈V_0} (x_ji − x_ij) = 2 q_i, ∀i ∈ V   (5)
Σ_j x_0j = Σ_i q_i  &  Σ_i x_{i(N+1)} = 0   (6 & 7)
3. Strengthening Inequalities & Separation Algorithms
3.1. Commodity Flow Inequalities
Constraints that strengthen the bounds of the flow variables are appended to the formulation from the onset; they are similar to the flow inequalities suggested by Baldacci et al. (2004) and can be expressed as follows:

x_0j ≥ [Σ_{i∈V} q_i − (K−1) Q] ξ_0j, ∀j ∈ V   (8)
x_ij ≥ q_j ξ_ij  &  x_ji ≥ q_i ξ_ij, ∀(i,j) ∈ A   (9)

3.2. Local Scope Inequalities
After examining the structure of the relaxation solution at each node, it is possible to infer that certain solution segments (i.e., collections of arcs P) lead to infeasible, suboptimal, or otherwise undesirable solutions. Such segments can be disallowed through the addition, locally at the node level, of an appropriate cut of the type:

Σ_{(i,j)∈P} ξ_ij ≤ |P| − 1   (10)
We focus on solution segments that correspond to fully formed paths, that is, collections of consecutively joined arcs P = {(i,j) : ξ_ij = 1}, augmented by adjacent fractional arcs,
0 < ξ_ij < 1. We search for undesirable paths through a structured approach that grows
such paths iteratively, and we disallow the formation of augmented paths for five reasons: (a) subtour elimination – cyclical routes that do not include the depot; (b) capacity restrictions – routes that exceed the vehicle capacity; (c) path dominance – suboptimal routes for which there exists a lower-cost ordering of the customers; (d) symmetry breaking – non-nominal routes, that is, routes that begin by visiting a customer that is lexicographically higher than the last customer to be visited by the route; and (e) demand restrictions – routes that are about to terminate before they have satisfied a minimum amount of demand.
A separate class of local scope cuts results from inferring that a coefficient of a variable in a constraint can be suitably increased or decreased so as to tighten the constraint. In particular, we attempt to lift Eqs. (9) by replacing the coefficient of the binary variable (right-hand side) with the cumulative load q_{P_j} of the fully formed path P_j connected to node i through – and including – node j, under the condition that this path remains intact, and vice versa:

x_ij ≥ q_{P_j} (ξ_ij − |P_j| + Σ_{(n,m)∈P_j} ξ_nm)  &  x_ji ≥ q_{P_i} (ξ_ij − |P_i| + Σ_{(n,m)∈P_i} ξ_nm)   (11 & 12)
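As a minimal illustration of these path-based cuts (the data below are hypothetical; the actual implementation grows paths inside the BC tree), the sketch checks a fully formed path against the vehicle capacity and, on violation, returns the cut of type (10):

```python
# Check a fully formed path (arcs with xi_ij = 1) against capacity Q and,
# if its cumulative demand exceeds Q, return the coefficients and right-hand
# side of the corresponding cut: sum over (i,j) in P of xi_ij <= |P| - 1.
def capacity_cut(path_arcs, demand, Q):
    nodes = {n for arc in path_arcs for n in arc}
    load = sum(demand.get(n, 0) for n in nodes)   # depot nodes carry no demand
    if load > Q:
        return {arc: 1 for arc in path_arcs}, len(path_arcs) - 1
    return None

demand = {1: 4, 2: 5, 3: 6}                       # hypothetical customer demands
cut = capacity_cut([(0, 1), (1, 2), (2, 3)], demand, Q=10)
# load 4 + 5 + 6 = 15 > 10, so the three-arc path is cut off with rhs |P| - 1 = 2
```

A shorter path whose load fits within Q yields no cut, which mirrors the iterative growth of candidate paths described above.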
3.3. Global Scope Inequalities
There also exist five classes of cutting planes that are globally valid for the CVRP. These are the so-called Rounded Capacity (RC), Homogeneous Multistar (HM), Framed Capacity (FC), Strengthened Comb (SC) and Hypotour (HI) inequalities (see Naddef and Rinaldi, 2002, for a detailed explanation). Due to their vast number, only those that are violated at a given node relaxation solution are taken into consideration. To this end, we have developed metaheuristic-based algorithms for their efficient separation (i.e., identifying which instances are in fact violated). Emphasis is given to the RC and HM inequalities, whose separation is performed concurrently through a new Tabu Search (TS) algorithm that improves upon the search framework presented in Augerat et al. (1999). For FC inequalities, a novel multi-restart TS algorithm, combined with a partition generation mechanism and edge-exchange neighborhood search, is used. The SC separation procedure proposed by Lysgaard et al. (2004) is used, enhanced with a TS procedure for expanding the teeth. Finally, the HI separation procedure proposed by Lysgaard et al. (2004) is adopted. The details of the above separation algorithms are omitted for conciseness.
4. Hybrid Branch-and-Cut Approach 4.1. Adaptive Memory Programming (AMP) Metaheuristic Initially, a reference set (pool) of high quality solutions is generated via an AMP algorithm. This is achieved via the repeated construction of provisional solutions out of promising building blocks identified during the search, while updating these adaptive memory components based on the progress and experience gained (Tarantilis, 2005; Repoussis et al., 2009). For this purpose, a knowledge extraction mechanism is utilized, coupled with a probabilistic construction heuristic and a TS algorithm. Overall, the proposed learning mechanism considers two properties: solution quality and appearance frequency for each pair of customers visited consecutively during a route. During branch-and-cut, at the end of each node’s processing (right before branching), the algorithm utilizes information from the LP relaxation in an effort to
provide a new incumbent. Initially, a Path-Relinking algorithm generates an integer feasible solution ξ^int that is as "close" as possible to the node's fractional solution ξ^f, in terms of the Hamming distance

d_H = Σ_{(i,j): ξ^int_ij = 1} (1 − ξ^f_ij) + Σ_{(i,j): ξ^int_ij = 0} ξ^f_ij.

Next, a new provisional solution is generated by reconstructing part of ξ^int using frequently observed components from the AMP pool. This solution is further improved via TS and, if particular criteria are met, the reference set and memory structures are updated.
4.2. Branch & Cut Framework
Given that high quality initial upper bounds are provided through our metaheuristic framework, the priority of the branch-and-cut implementation is on improving the lower bound and on minimizing the number of subproblems (nodes) to be considered until the gap is closed. To this end, we adopt a best-bound-first node selection strategy. After obtaining the standard LP relaxation at each node, the cutting plane phase proceeds as follows: we first search for any augmented paths that need to be disallowed due to suboptimality, infeasibility, or non-nominality of the solution. Next, we check for potential to lift any flow variables and, lastly, we check for global cut violations (with emphasis on RC/HM). If at least one cut is identified at any of these three stages, we reoptimize the LP and repeat the process without continuing with the next stage(s). If no cuts are identified whatsoever, we proceed with branching the node. Let S ⊂ V and let δ(S) be the sum of ξ_ij over all arcs in the corresponding cut-set. As branching rule, we use the disjunction {δ(S) = 2} ∨ {δ(S) ≥ 4}. Among candidate sets for which δ(S) ≈ 3, we select the one with the largest total demand.
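The Path-Relinking distance and the branching-set selection described above can be sketched as follows; all arc values, demands and the tolerance are hypothetical illustration data:

```python
# Hamming-type distance d_H between an integer solution and a fractional LP
# solution, plus branching-set selection based on the cut-set value delta(S).

def hamming_distance(xi_int, xi_frac):
    # Arcs fixed to 1 contribute 1 - xi^f_ij; arcs fixed to 0 contribute xi^f_ij.
    return sum((1.0 - xi_frac.get(a, 0.0)) if v == 1 else xi_frac.get(a, 0.0)
               for a, v in xi_int.items())

def delta(S, xi_frac):
    # Sum of xi_ij over all arcs with exactly one endpoint in S (the cut-set).
    return sum(v for (i, j), v in xi_frac.items() if (i in S) != (j in S))

def pick_branching_set(candidates, xi_frac, demand, tol=0.6):
    # Among candidate sets with delta(S) close to 3, pick the largest-demand
    # one; branching then imposes {delta(S) = 2} or {delta(S) >= 4}.
    near3 = [S for S in candidates if abs(delta(S, xi_frac) - 3.0) <= tol]
    return max(near3, key=lambda S: sum(demand[i] for i in S), default=None)

xi_frac = {(0, 1): 1.0, (1, 2): 0.5, (2, 3): 0.5, (1, 3): 1.0, (3, 4): 1.0}
d = hamming_distance({(0, 1): 1, (1, 2): 1, (2, 3): 0},
                     {(0, 1): 1.0, (1, 2): 0.4, (2, 3): 0.3})  # 0.0 + 0.6 + 0.3
S = pick_branching_set([{1, 2}, {2, 3}], xi_frac, {1: 3, 2: 7, 3: 4, 4: 2})
```

Here both candidate sets have δ(S) = 2.5, so the tie is broken by total demand, as the branching rule prescribes.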
5. Computational Studies
We applied our framework to the standard benchmark data sets that were also used by Lysgaard et al. (2004), where a BC framework based on a different representation of the CVRP – called the vehicle flow formulation – is presented. Table 1 exhibits the root node gap for the 10 hardest problems we attempted, including two 100-customer instances. On average, our commodity flow-based method performs as well as the vehicle flow-based method. The average root node gap can be improved if we enable the CPLEX options for adding generic MIP cuts (e.g., Gomory cuts); however, we have observed that doing so deteriorates the overall performance of the algorithm at later nodes. Therefore, for runs to full optimality, we disable the cuts identified by CPLEX. Table 2 presents the time necessary to fully close the gap and the number of tree nodes that had to be explored for a set of medium-difficulty problems. The largest instance solved to guaranteed optimality was P-76-6, which involves 75 customers. Two instances were solved at the root node by both methods. Our framework required fewer BC nodes for 9 of the remaining 13 instances. The improvement was very substantial for problems A-45-7 and B-50-8, two tight instances that are known to be hard to solve.
6. Conclusions This paper presents a hybrid BC framework for the exact solution of the CVRP that is based on the two-commodity flow formulation, systematic use of local scope cuts, new metaheuristic-based separation techniques for known classes of cuts, as well as an AMP metaheuristic algorithm for identification of high quality integer feasible solutions and acceleration of the search. Computational experiments on benchmark data sets illustrate the potential of the proposed approach.
Table 1. Root node performance (% gap)
Benchmark    LP relaxation   This paper   This paper+   Lysgaard et al., 2004
A-69-9       9.04            3.87         3.30          3.85
A-80-10      8.42            3.02         2.39          3.03
B-68-9       10.64           1.14         1.14          1.10
E-76-8       6.06            2.34         2.17          2.33
E-76-10      7.02            3.62         3.33          3.63
E-101-8      5.65            1.57         1.55          1.52
E-101-14     6.82            3.75         3.20          3.75
P-60-15      6.56            3.91         3.48          3.95
P-65-10      6.37            3.12         2.48          3.14
P-76-5       4.03            1.56         1.55          1.51
Average      7.06            2.79         2.46          2.78
+ CPLEX v11.0 generic MIP cuts enabled
Table 2. Runs to full optimality
Benchmark    This paper              Lysgaard et al., 2004
Problems     t (sec)    # nodes      t (sec)    # nodes
A-44-6       86         125          620        211
A-45-7       2,835      2,084        19,414     4,170
A-48-7       5,147      203          372        113
A-55-9       145        156          468        152
B-43-6       39         100          125        63
B-44-7       3          1            8          1
B-45-6       152        276          299        159
B-50-7       2          1            11         1
B-50-8       4,523      1,503        31,026     5,694
B-52-7       3          3            25         15
B-57-7       99         49           441        168
B-64-9       21         7            42         13
E-51-5       13         8            59         17
P-50-7       78         131          805        263
P-76-4       143        105          535        141
Note: Optimum solution not provided as input
References
P. Augerat, J.M. Belenguer, E. Benavent, A. Corberán, D. Naddef, 1999, Separating capacity inequalities in the CVRP using tabu search, European Journal of Operational Research, 106, 546-557.
R. Baldacci, E. Hadjiconstantinou, A. Mingozzi, 2004, An exact algorithm for the capacitated vehicle routing problem based on a two-commodity network flow formulation, Operations Research, 52, 5, 723-738.
R. Baldacci, N. Christofides, A. Mingozzi, 2008, An exact algorithm for the vehicle routing problem based on the set partitioning formulation with additional cuts, Mathematical Programming Ser. A, 115, 351-385.
R. Baldacci, P. Toth, D. Vigo, 2010, Exact algorithms for routing problems under vehicle capacity constraints, Annals of Operations Research, 175, 1, 213-245.
R. Fukasawa, H. Longo, J. Lysgaard, M.P. de Aragao, M. Reis, E. Uchoa, R.F. Werneck, 2006, Robust branch-and-cut-and-price for the capacitated vehicle routing problem, Mathematical Programming Ser. A, 106, 491-511.
G. Laporte, 2009, Fifty years of vehicle routing, Transportation Science, 43, 4, 408-416.
J. Lysgaard, A.N. Letchford, R.W. Eglese, 2004, A new branch-and-cut algorithm for the capacitated vehicle routing problem, Mathematical Programming Ser. A, 100, 423-445.
D. Naddef, G. Rinaldi, 2002, Branch-and-cut algorithms for the capacitated VRP, in: P. Toth and D. Vigo (Eds.), The Vehicle Routing Problem, SIAM Monographs on Discrete Mathematics and Applications, SIAM, Philadelphia, 53-81.
P.P. Repoussis, C.D. Tarantilis, G. Ioannou, 2009, Arc-guided evolutionary algorithm for the vehicle routing problem with time windows, IEEE Transactions on Evolutionary Computation, 13, 3, 624-647.
C.D. Tarantilis, 2005, Solving the vehicle routing problem with adaptive memory programming methodology, Computers and Operations Research, 32, 9, 2309-2327.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Design of robust PID controller for processes with stochastic uncertainties
Pham L. T. Duong, Moonyong Lee
Yeungnam University, Gyeongsan 712-749, Rep. Korea
Abstract
Stability and performance of a system can be inferred from the evolution of the statistical characteristics of its states. The polynomial chaos of Wiener provides an efficient framework for the statistical analysis of dynamic systems, at a computational cost far lower than that of Monte Carlo simulations. In this work, we design a robust PID controller for systems with stochastic uncertainties by using a generalized polynomial chaos.
Keywords: Polynomial chaos, PID controller design, Statistical analysis, Stochastic process, Smith predictor.
1. Introduction
Stochastic uncertainty may arise in systems when the physics governing the system is known but the system parameters are either not known precisely or expected to vary over the operational lifetime. Such uncertainty also occurs when system models are built from experimental data using system identification techniques, where a plant is represented by its transfer function with unknown parameters. As a result, the values of the parameters in the transfer function have a range of uncertainty. In order to include this uncertainty in the mathematical model, various probabilistic methods have been developed. Traditional probabilistic approaches to uncertainty quantification (UQ) include the Monte Carlo method [6,7] and its variants – for example, Latin hypercube sampling [9] – which generate ensembles of random realizations for the prescribed random inputs and use repetitive deterministic solvers for each realization. Although such methods are straightforward to apply, their convergence rates can be relatively slow; for example, the variance typically converges as 1/K, where K is the number of realizations. The need for a large number of samples for accurate results leads to an excessive computational burden. The recently developed stochastic generalized polynomial chaos (gPC) methods can exhibit faster convergence for problems with relatively large random uncertainties. With the gPC, stochastic solutions are expressed as orthogonal polynomials of the input random uncertainties. The PC method originates from the homogeneous chaos concept defined by Wiener [10]. Ghanem and Spanos [12] showed that the PC is an effective computational tool for engineering studies. Karniadakis and Xiu [4] generalized and expanded the concept by using orthogonal polynomials from the Askey scheme as the expansion basis. Puvkov et al. [8] showed that if the Wiener-Askey polynomial chaos expansion is chosen according to the probability distribution of the random input, then the chaos expansion makes it possible to construct simple algorithms for the statistical analysis of dynamic systems.
When a controller is designed, it is also desirable to understand the distribution of the response in terms of the uncertainties. Once the distributions of the uncertainties are known, a statistical analysis problem is a prediction problem of how a specific distribution in the
plant parameters maps to the range of responses. Hence, in this work, the gPC approach is used to account for the influence of random uncertainties in the parameters of a control system on the statistical characteristics of its output. Robust PID controllers for systems with stochastic uncertainties are designed by examining the distribution of the output response.
2. Statistical analysis of control system with generalized polynomial chaos theory [3,8]
2.1. Governing equations for system dynamics
Let us consider a control system governed by differential algebraic equations (DAEs) as in [4]:

F(t, y, y', …, y^(l), ξ) = 0,
g(t_0, y(t_0), …, y^(l)(t_0), ξ) = 0,   (1)

where ξ = (ξ_1, ξ_2, …, ξ_N) is a random vector of mutually independent random components with probability density functions (pdf) ρ_i(ξ_i), and y is a state variable.
2.2. Polynomial chaos theory
In the gPC method, one seeks an approximation of the response function f(y(t, ξ)) via an orthonormal polynomial expansion in the random variables:

f(y(t, ξ)) ≈ Σ_{m=1}^{M} f_m(t) Φ_m(ξ),   M = (N+P)! / (N! P!),   (2)

where P is the order of the polynomial chaos and f_m is the coefficient of the gPC expansion, which satisfies

f_m = E[Φ_m f(y)] = ∫_Γ f(y) Φ_m(ξ) ρ(ξ) dξ   (3)
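The growth of the expansion size M in Eq. (2) can be checked with a one-line count (the parameter values below are illustrative):

```python
from math import comb

# Number of basis terms in an N-variate, order-P gPC expansion:
# M = (N + P)! / (N! P!) = C(N + P, N).
def gpc_terms(N, P):
    return comb(N + P, N)

gpc_terms(3, 4)   # 3 uncertain parameters expanded to order 4
```

The combinatorial growth of M with N is what motivates the sparse grids discussed later in the paper.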
where E[·] denotes the expectation.
2.3. Stochastic collocation
The stochastic collocation approach can deal with complex response functions easily; its algorithm is as follows:
• Choose a collocation set {ξ^(m), w^(m)}, m = 1, …, Q, for the random vector ξ, where ξ^(m) = (ξ_1^(m), …, ξ_N^(m)) is the m-th node and w^(m) is the corresponding weight.
• For each node, solve Eq. (1) to obtain its solution y^(m) = y(t, ξ^(m)) and evaluate the response function f^(m).
• Calculate the approximation of the gPC coefficients via a discrete integration rule for Eq. (3):

f_j ≈ Q[f(y, ξ) Φ_j(ξ)] = Σ_{m=1}^{Q} f^(m) Φ_j(ξ^(m)) w^(m),   j = 1, …, M   (4)

• Construct the N-variate, P-th order gPC approximation of the response function:

f_N^P(ξ) = Σ_{j=1}^{M} f_j Φ_j(ξ)   (5)

The collocation set {ξ^(m), w^(m)}, m = 1, …, Q, should be chosen such that an accurate integration rule is obtained, i.e., for a smooth function g(ξ):

Q[g] = Σ_{m=1}^{Q} g(ξ^(m)) w^(m) ≈ ∫_Γ g(ξ) ρ(ξ) dξ   (6)

In the classical spectral method [8], Gaussian quadrature [11] is chosen as the one-dimensional (1D) numerical integration rule. A multi-dimensional numerical integration rule can be constructed by tensorization of 1D quadrature rules Q_{q_i}^(1):

Q[g] = (Q_{q_1}^(1) ⊗ … ⊗ Q_{q_N}^(1)) g   (7)

where the subscript q_i in Q_{q_i}^(1) denotes the number of nodes of the 1D quadrature rule.
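The tensorized rule of Eqs. (6)-(7) can be sketched for uniform variables on [-1, 1] with Gauss-Legendre nodes; the integrand below is a toy test function, not a plant model:

```python
import numpy as np

def tensor_quadrature(q, N):
    # 1D Gauss-Legendre nodes/weights on [-1, 1]; rescale the weights so they
    # integrate against the uniform pdf rho = 1/2 (i.e. they sum to 1).
    x, w = np.polynomial.legendre.leggauss(q)
    w = w / 2.0
    # Tensor product over N dimensions (Eq. 7): q**N nodes in total.
    grids = np.meshgrid(*([x] * N), indexing="ij")
    nodes = np.stack([g.ravel() for g in grids], axis=1)
    wgrids = np.meshgrid(*([w] * N), indexing="ij")
    weights = np.prod(np.stack([g.ravel() for g in wgrids], axis=1), axis=1)
    return nodes, weights

nodes, weights = tensor_quadrature(q=4, N=2)
# Eq. (6): Q[g] = sum_m g(xi^(m)) w^(m) ~ E[g].  For g = xi1^2 * xi2^2 and
# xi_i ~ U[-1, 1], the exact expectation is (1/3) * (1/3) = 1/9.
approx = np.sum(nodes[:, 0] ** 2 * nodes[:, 1] ** 2 * weights)
```

With q = 4 nodes per dimension the rule is exact for this polynomial integrand, while the node count q^N already shows the growth addressed by sparse grids in Section 2.5.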
2.4. Statistical analysis of control system
When all the gPC coefficients have been evaluated by a numerical method, a post-processing procedure can be carried out to obtain the statistics. The mean value is the first expansion coefficient:

μ_f = E[f_N^P] = ∫_Γ f_N^P(ξ) ρ(ξ) dξ = ∫_Γ [Σ_{j=1}^{M} f_j Φ_j(ξ)] ρ(ξ) dξ = f_1   (8)

The variance of the response function f(y) is evaluated as:

σ_f² = D_f = E[(f − μ_f)²] = ∫_Γ (Σ_{j=1}^{M} f_j Φ_j(ξ) − f_1)² ρ(ξ) dξ = Σ_{j=2}^{M} f_j²   (9)

Eqs. (8) and (9) employ the properties that the polynomial set starts with Φ_1(ξ) = 1 and that the weight function of the polynomials is the probability density function. If the response function is chosen as f(y) = y, the mean and variance of the system state are approximately given by Eqs. (8) and (9).
The set {Φ_i} is the family of orthonormal polynomials of ξ_i with weight function ρ_i(ξ_i), which is the probability density function of the random variable ξ_i. This establishes a correspondence between the distribution of the random variable ξ_i and the type of orthonormal polynomial of its gPC basis. In this paper, we consider only uniform stochastic uncertainties and the corresponding generalized Legendre polynomial chaos. For details on other types of stochastic uncertainties and their corresponding gPC bases, see [8, 3] and the references therein.
2.5. Sparse grids
From Eq. (7), the total number of collocation points is ∏_{i=1}^{N} q_i, or q^N if the number of points in each dimension is q. Thus, the total number of points grows very fast for large
dimension N. For this reason, the full tensor product approach is mostly used for low-dimensional problems only. In [2], Smolyak cubature was found to be very useful for solving random differential equations by the stochastic collocation approach with a high-dimensional random space. Starting from one-dimensional integration formulas, the Smolyak algorithm is given by

Q[g] = Σ_{J−N+1 ≤ |i| ≤ J} (−1)^(J−|i|) C(N−1, J−|i|) (Q_{i_1}^(1) ⊗ … ⊗ Q_{i_N}^(1)) g   (10)

where |i| = i_1 + i_2 + … + i_N and J ≥ N denotes the level of the construction. The one-dimensional node sets should be nested to minimize the number of nodes in Eq. (10). The Kronrod-Patterson rules have nested sets of nodes, which makes them more efficient for the construction of sparse grids. The nodes and weights for Smolyak cubature based on the Kronrod-Patterson rule can be readily obtained from [5]. In this paper, the Smolyak cubature based on the Kronrod-Patterson rule is used in designing the PID controller for stochastic systems with a high-dimensional random space.
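Eqs. (8) and (9) reduce the statistics entirely to the expansion coefficients. A one-variable sketch with the orthonormal Legendre basis for ξ ~ U[-1, 1] illustrates this; the response function is a toy example, not the closed-loop system:

```python
import numpy as np

# Orthonormal Legendre basis on [-1, 1] w.r.t. rho = 1/2:
# Phi_1 = 1, Phi_2 = sqrt(3) x, Phi_3 = (sqrt(5)/2)(3x^2 - 1).
basis = [lambda x: np.ones_like(x),
         lambda x: np.sqrt(3.0) * x,
         lambda x: (np.sqrt(5.0) / 2.0) * (3.0 * x ** 2 - 1.0)]

def gpc_coefficients(f, basis, q=8):
    x, w = np.polynomial.legendre.leggauss(q)
    w = w / 2.0                                   # quadrature for rho = 1/2
    return np.array([np.sum(f(x) * phi(x) * w) for phi in basis])  # Eq. (3)

f = lambda x: 2.0 + 0.5 * x                       # toy response function
fj = gpc_coefficients(f, basis)
mean = fj[0]                                      # Eq. (8): mu_f = f_1
variance = np.sum(fj[1:] ** 2)                    # Eq. (9): sum_{j>=2} f_j^2
# Analytic check: E[f] = 2 and Var[f] = 0.25 * Var(xi) = 0.25 / 3.
```

Once the coefficients are in hand, no further sampling is needed to evaluate mean or variance, which is the computational advantage exploited in the controller design below.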
3. Example
The PC method was applied to design a PID controller for the FOPDT system with a Smith predictor:

G(s) = K e^(−Ls) / (T s + 1),   K, T, L ~ U[0.9, 1.1]   (11)

Optimum PID parameters are obtained by minimizing the objective function

min_{Kp, Ki, Kd} J = min_{Kp, Ki, Kd} ∫_0^T | M[e(t)] | dt   (12)

subject to

max_{0 ≤ t ≤ T} D_Y(t) ≤ 0.02   (13)

where M[e(t)] denotes the mean of the control error e(t) and D_Y(t) the variance of the output.
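A heavily simplified sketch of evaluating the objective (12) by collocation is given below. The dead time L and the Smith predictor are omitted, only K and T are treated as random, and the PI gains, horizon and step size are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def closed_loop_error(K, T, Kp=1.8, Ki=1.9, dt=0.01, t_end=8.0):
    # Explicit-Euler simulation of a PI-controlled first-order plant
    # T y' = -y + K u, tracking a unit set point (no dead time here).
    n = int(t_end / dt)
    y, integ = 0.0, 0.0
    err = np.empty(n)
    for k in range(n):
        e = 1.0 - y
        integ += e * dt
        u = Kp * e + Ki * integ
        y += dt * (-y + K * u) / T
        err[k] = e
    return err

# 3x3 Gauss-Legendre collocation grid mapped from [-1, 1] to U[0.9, 1.1].
x, w = np.polynomial.legendre.leggauss(3)
nodes, w = 1.0 + 0.1 * x, w / 2.0
mean_e = sum(wi * wj * closed_loop_error(Kn, Tn)
             for wi, Kn in zip(w, nodes) for wj, Tn in zip(w, nodes))
J = np.sum(np.abs(mean_e)) * 0.01     # J = int |M[e(t)]| dt, Eq. (12)
```

An outer nonlinear optimizer would then adjust (Kp, Ki, Kd) to minimize J while checking the variance constraint (13) at every collocation evaluation.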
Figure 1 shows the means and variances of the system output for the resulting PID setting Kp = 1.858, Ki = 1.896, Kd = 0.144. In Fig. 2, 1000 possible responses of the uncertain FOPDT system are plotted. The calculations were made using the DEMM toolbox [8].
[Figure 1 plots: mean My(t) and variance Dy(t) of the system output versus time t (0 to 9).]
Figure 1. Predicted mean and variance for system output with random parameters.
Figure 2. 1000 possible responses of the uncertain system.
4. Conclusions
In this work, a statistical analysis of systems with uniform stochastic parameters was carried out with the help of the gPC. From the statistical evolution of the system output, we can infer stability and design a controller that is robust with respect to uniform stochastic uncertainties. Optimum PID parameters can be computed by means of a nonlinear optimization approach. A simulation example has shown that the proposed design method gives robust results for systems with stochastic uncertainties. It should be noted that the variance is only a weak measure [1] of the variability of a random process. In [1], D. Xiu also provided alternative measures of variability, and these will be incorporated into future work on PID design for processes with stochastic uncertainties.
Acknowledgments This research was supported by KOSEF research grants in 2009.
References
1. D. Xiu, 2010, Fast numerical method for robust optimal design, Engineering Optimization, 40(6), 489-504.
2. D. Xiu, 2007, Efficient collocation approach for parametric uncertainty analysis, Communications in Computational Physics, 2(2), 293-309.
3. D. Xiu, 2010, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press.
4. D. Xiu, G.E. Karniadakis, 2002, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24(2), 619-644.
5. F. Heiss, V. Winschel, 2008, Likelihood approximation by numerical integration on sparse grids, Journal of Econometrics, 144, 62-80. http://www.sparse-grids.de/
6. J.S. Liu, 2001, Monte Carlo Strategies in Scientific Computing, Springer-Verlag.
7. K.A. Puvkov, N.D. Egupov, 2003, Classical and Modern Theory of Control Systems, BMGTU Press, Vol. 2.
8. K.A. Puvkov, N.D. Egupov, A.M. Makarenkov, 2003, Theory and Numerical Methods for Studying Stochastic Systems, Fizmatlits, Moscow.
9. M. Stein, 1987, Large sample properties of simulations using Latin hypercube sampling, Technometrics, 29(2), 143-151.
10. N. Wiener, 1938, The homogeneous chaos, Amer. J. Math., 60, 897-936.
11. P.K. Kythe, M.R. Schaferkotter, 2005, Handbook of Computational Methods for Integration, CRC Press.
12. R.G. Ghanem, P.D. Spanos, 1991, Stochastic Finite Elements: A Spectral Approach, Dover Publications.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
MPC vs. PID: The advanced control solution for an industrial heat integrated fluid catalytic cracking plant
Mihaela Iancu, Mircea V. Cristea, Paul S. Agachi
Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, Chemical Engineering Department, Arany Janos St., No. 11, 400028, Cluj-Napoca, Romania,
[email protected]
Abstract
Modern process plants are continuously improved for flexible production and for maximization of energy and material savings. These plants are becoming more complex, with strong interactions between the process units. Consequently, the failure of one unit might have a negative effect on the overall productivity. This situation raises important control problems. Another problem is that the traditional techniques developed so far can hardly handle all the control problems that appear in modern plants. However, the appearance and continuous development of advanced control techniques provide better solutions for plant control at any level of process complexity. In this study, a complex heat integrated fluid catalytic cracking (FCC) plant was used for comparing a model predictive control (MPC) strategy with the classical PID control strategy already implemented in the real plant. The results revealed that the MPC controller was capable of maintaining the controlled variables much closer to their set points than the classical PID controllers. The present work shows that it is possible to save equipment and energy costs. Moreover, it is well known that with an MPC strategy the plant can be exploited at its maximum capacity.
Keywords: fluid catalytic cracking, heat integration, dynamic behavior, PID control, model predictive control.
1. Introduction
It is a fact that the chemical industry is still dominated by the use of distributed control systems implementing simple PID controllers. In the literature, many publications can be found that discuss the control efficiency of refinery processes ([1]–[4]). However, few publications [5] compare the control efficiency of an advanced control technique such as model predictive control with that of a well designed conventional PID control system. Therefore, in this study a complex heat integrated fluid catalytic cracking (FCC) plant was used for the identification of the advantages and disadvantages of a previously developed model predictive control (MPC) strategy [6], [7], compared to the
classical PID control strategy of the industrial plant. The developed MPC strategy focused on the response of the heat integrated process in terms of operation, product quality and cost reduction. The FCC heat integrated process was simulated in Aspen HYSYS; the simulation includes the reactor-regenerator section, the main fractionator and the retrofitted heat exchanger network (HEN) used for preheating the feedstock before it enters the riser. The goal of this work was to demonstrate the necessity and efficiency of advanced control techniques in refinery processes, and to emphasize that energy and operation costs can be saved by using an MPC control scheme for the heat integrated FCC plant.
2. Description of the heat integrated FCC plant dynamic model
The FCC model was built in Aspen HYSYS, which is specialized for simulating refining processes, in order to capture both its steady state and its dynamic behavior. The model of the heat integrated FCC plant is structured as a main flowsheet and two sub-flowsheets. The main flowsheet contains the FCC reaction block with the riser and the regenerator, a simplified scheme of the FCC column and the preheating train for the raw material. One sub-flowsheet contains the reaction block and the other represents the FCC main fractionation column, developed separately on the basis of the industrial FCC column design and geometry. The fractionator contains 38 trays, 2 side-strippers (one stripping the heavy gasoline fraction, HCN, the other stripping the light diesel oil fraction, LCO), 3 pump-arounds and one condenser. Before entering the condenser, the top column product is cooled in 2 heat exchangers. Because the case study represents a real industrial plant that already has a PID control scheme implemented, the control scheme was developed on the basis of the real one. It had, however, to be adjusted to cope with the new instabilities introduced into the system by the heat integration. The controller tuning parameters were obtained using the Ziegler-Nichols method.
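The closed-loop Ziegler-Nichols correlations used for such tuning can be sketched as follows; the ultimate gain and period in the usage line are hypothetical placeholders, not values identified on the plant:

```python
# Hedged sketch: classical closed-loop Ziegler-Nichols tuning rules.
# Ku (ultimate gain) and Pu (ultimate period) come from a sustained
# oscillation test on the loop in question.

def ziegler_nichols(Ku, Pu, mode="PI"):
    """Return controller settings from the closed-loop ZN correlations."""
    if mode == "P":
        return {"Kc": 0.5 * Ku}
    if mode == "PI":
        return {"Kc": 0.45 * Ku, "Ti": Pu / 1.2}
    if mode == "PID":
        return {"Kc": 0.6 * Ku, "Ti": Pu / 2.0, "Td": Pu / 8.0}
    raise ValueError("mode must be 'P', 'PI' or 'PID'")

# Illustrative numbers only (hypothetical Ku = 4.0, Pu = 10 min):
settings = ziegler_nichols(4.0, 10.0, mode="PI")
```

In practice the ZN settings are a starting point; the values in Table 1 reflect subsequent refinement on the plant model.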
3. MPC vs. PID
The control design of the FCC fractionator is important mainly from the point of view of the products [8], but also from the point of view of the HEN [9]: it was observed that if the dynamic behavior of the FCC fractionator is properly controlled, the HEN behaves appropriately. Consequently, the analysis below considers the PID controllers of the FCC column control scheme. These controllers are related to: the temperature control for the column top product stream TIC-100, the liquid level control for the condenser LIC-100, the flow control of the bottom product of the HCN side-stripper FIC-101, the flow control of the bottom product of the LCO side-stripper FIC-102 and the temperature
control for the slurry stream TIC-101. The parameters of the selected PID controllers are presented in Table 1.

Table 1. The characteristics of the main controllers of the FCC column

Controller   Controlled variable min.   Max.         Set point      Parameters
FIC-101      0 [m3/h]                   50 [m3/h]    13.52 [m3/h]   Kc = 0.1; Ti = 0.2
FIC-102      0 [m3/h]                   60 [m3/h]    32.72 [m3/h]   Kc = 0.1; Ti = 0.2
LIC-100      0 [%]                      100 [%]      60 [%]         Kc = 1.8; Ti = 181
TIC-100      105 [°C]                   112 [°C]     108 [°C]       Kc = 3; Ti = 12
TIC-101      350 [°C]                   363 [°C]     356 [°C]       Kc = 2; Ti = 25
Using the manipulated and controlled variables of the PID control structure, a 5x5 MPC controller was therefore developed and implemented in the dynamic model of the heat integrated FCC plant. The prediction model of the MPC controller was set up using the step response matrix of the controlled variables, and a dynamic optimization problem is solved at each sampling instant.
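A single-loop sketch of this kind of step-response-based predictive controller may help; the first-order plant, horizon and numbers below are illustrative assumptions, not the plant's 5x5 model:

```python
# Minimal single-loop sketch of MPC with a step-response prediction model
# (a 5x5 controller works analogously, with a matrix of step responses).
# All numbers are illustrative.

def step_coeffs(K, a, P):
    # Step-response coefficients of the plant y(k+1) = a*y(k) + (1-a)*K*u(k)
    return [K * (1.0 - a ** j) for j in range(1, P + 1)]

def mpc_move(s, y_free, sp):
    """Unconstrained MPC move (control horizon 1): minimize
    sum_j (sp - y_free_j - s_j*du)^2 over the prediction horizon."""
    num = sum(sj * (sp - yf) for sj, yf in zip(s, y_free))
    den = sum(sj * sj for sj in s)
    return num / den

K, a, P = 2.0, 0.8, 20          # gain, discrete pole, prediction horizon
s = step_coeffs(K, a, P)
y, u, sp = 0.0, 0.0, 1.0        # start at steady state, set point step to 1
for _ in range(200):
    # Free response: what the plant would do if u were held constant.
    y_free = [a ** j * y + (1.0 - a ** j) * K * u for j in range(1, P + 1)]
    u += mpc_move(s, y_free, sp)
    y = a * y + (1.0 - a) * K * u   # plant update (perfect model assumed)
```

With a perfect model the loop settles offset-free at the set point; constraint handling, move suppression and the multivariable coupling of the real controller are omitted here.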
The results of the two control strategies, PID and MPC, are presented in Figures 1.a and 1.b. The simulations were run for 130 minutes. In the figures, the red line represents the set point, the blue line the manipulated variable and the green line the controlled variable.
Figure 1.a: I. PID and MPC control of the column top product temperature [°C]. II. PID and MPC control of the condenser percent liquid level [%].
Figure 1.b: I. PID and MPC control of the bottom product flow of the HCN side-stripper [m3/h]. II. PID and MPC control of the bottom product flow of the LCO side-stripper [m3/h]. III. PID and MPC control of the temperature of the slurry stream [°C].

As can be seen in Figures 1.a and 1.b, the MPC controller maintains the controlled variables much closer to their set points than the PID controllers, which proves the developed MPC controller efficient for control of the heat integrated FCC plant. An important difference can be observed in the control of the bottom product flow of the HCN side-stripper (Figure 1.b, I) and of the LCO side-stripper (Figure 1.b, II). For a heat integrated FCC plant, one of the advantages of using MPC instead of classical PID control is therefore that the plant can be operated at its maximum capacity; further incentives are reduced energy and operation costs.
MPC vs. PID. The advanced control solution for an industrial heat integrated fluid catalytic cracking plant
4. Conclusions
Refinery units operate continuously and process large amounts of feedstock. Any change in the process (oil feedstock, Euro 4 fuel, Euro 5 fuel) therefore requires adjustment of the PID controller parameters to cope with the new operating conditions. A classical PID control scheme cannot handle all of the important changes in FCC plant operation, so manual intervention by the operators is needed. While these adjustments are made, the FCC plant operates in a non-optimal way as it is driven towards the operating parameters required by the new feedstock. This task can be accomplished successfully by the MPC controller. This study demonstrated that the best solution is to use advanced control techniques, especially controllers based on a model of the process: the process behavior is then predicted at every moment and the multivariable controller can act promptly.
5. Acknowledgements
This work was possible with the financial support of the Sectoral Operational Programme for Human Resources Development 2007-2013, co-financed by the European Social Fund, under the project number POSDRU 89/1.5/S/60189 with the title "Postdoctoral Programs for Sustainable Development in a Knowledge Based Society".
References
[1] Cristea, M. V., Agachi, S. P., & Marinoiu, M. V. (2003). Simulation and model predictive control of a UOP fluid catalytic cracking unit. Chemical Engineering and Processing, 42, 67.
[2] Williamson, C. J., & Young, B. R. (2003). Advanced Control of a Refinery Naphtha Train. IEEE Industry Applications Society Advanced Process Control Applications for Industry Workshop, Vancouver, BC.
[3] Tellez, R., Young, B. R., & Castillo, F. J. L. (2008). Model Predictive Control of a Heat-Integrated Plant, A Case Study on the Reaction Section of the HDA Process. AIChE Spring National Meeting, New Orleans, LA.
[4] Roman, R., Nagy, Z. K., Cristea, M. V., & Agachi, S. P. (2009). Dynamic modelling and nonlinear model predictive control of a Fluid Catalytic Cracking Unit. Computers and Chemical Engineering, 33, 605.
[5] Huang, H., & Riggs, J. B. (2002). Comparison of PI and MPC for control of a gas recovery unit. Journal of Process Control, 12, 163-173.
[6] Morar, M., & Agachi, P. S. (2009). The development of a MPC controller for a heat integrated fluid catalytic cracking plant. Studia Universitatis Babes-Bolyai Chemia, LIV(4), 43.
[7] Iancu, M., & Agachi, P. S. (2010). Optimal Process Control and Operation of an Industrial Heat Integrated Fluid Catalytic Cracking Plant Using Model Predictive Control. Computer Aided Chemical Engineering, 28, 505-510.
[8] Lundstrom, P., & Skogestad, S. (1995). Opportunities and difficulties with 5x5 distillation column. Journal of Process Control, 5(4), 249-261.
[9] Morud, J., & Skogestad, S. (1996). Dynamic behaviour of integrated plants. Journal of Process Control, 2/3, 145-156.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Plantwide Control of a Cumene Manufacture Process
Vivek Gera a, Nitin Kaistha a, Mehdi Panahi b, Sigurd Skogestad b
a Chemical Engineering, Indian Institute of Technology Kanpur, 208016, Kanpur, India
b Chemical Engineering Department, NTNU, 7491, Trondheim, Norway
Abstract This work describes the application of the plantwide control design procedure of Skogestad (Skogestad, 2004) to the cumene production process. A steady state “top down” analysis is used to select the set of “self-optimizing” primary controlled variables which when kept constant lead to acceptable economic loss without the need to reoptimize the process when disturbances occur. Two modes of operation are considered: (I) given feed rate and (II) optimized throughput. Keywords: cumene production, control structure design, self-optimizing control
1. Introduction
Cumene is an important industrial intermediate in the manufacture of phenolic and polycarbonate resins, nylon and epoxy, and is conventionally produced by the Friedel-Crafts alkylation of benzene with propylene (concentration unit: kmol/m3):

Main reaction: C6H6 + C3H6 → C9H12 (cumene) (k = 2.8E7, E = 104174 kJ/kmol)
Side reaction: C9H12 + C3H6 → C12H18 (DIPB) (k = 2.32E9, E = 146742 kJ/kmol)

Various aspects of the operation, design and control of a cumene production plant have been discussed over the past few years [1, 2], but none of these works address the issue of control structure design in a systematic manner. In this work we address this by applying a part of the plantwide procedure of Skogestad (2004). The main steps of this procedure are:
- Degree of freedom analysis.
- Definition of optimal operation (cost and constraints).
- Identification of important disturbances.
- Identification of candidate controlled variables c.
- Evaluation of loss for alternative combinations of controlled variables.
- Final evaluation and selection (including controllability analysis).
Two modes of operation are considered for the process. Mode 1: given throughput. Mode 2: optimized/maximum throughput (the feed rate is also a degree of freedom).
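The quoted kinetics can be explored numerically; the sketch below only assumes the rate data given above and R = 8.314 kJ/(kmol K):

```python
# Hedged numeric illustration of the rate data quoted above
# (E in kJ/kmol, so R = 8.314 kJ/(kmol K); temperatures in K).
import math

R = 8.314  # kJ/(kmol K)

def arrhenius(k0, E, T):
    """Rate constant k0 * exp(-E / (R*T)) at temperature T [K]."""
    return k0 * math.exp(-E / (R * T))

def k_main(T):   # benzene + propylene -> cumene
    return arrhenius(2.8e7, 104174.0, T)

def k_side(T):   # cumene + propylene -> DIPB
    return arrhenius(2.32e9, 146742.0, T)

# The side reaction has the higher activation energy, so the selectivity
# ratio k_main/k_side improves as the reaction temperature is lowered:
ratio_631K = k_main(631.0) / k_side(631.0)   # about 358 degC
ratio_620K = k_main(620.0) / k_side(620.0)   # about 347 degC
```

This temperature sensitivity of the cumene/DIPB selectivity is why the reactor inlet temperature is a key optimization variable below.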
2. Base Case Design
The base case design parameters, kinetics data and cost correlations were taken from Luyben (2010). Figure 1 provides a schematic of the conventional process. The fresh benzene and fresh C3 (95% propylene and 5% n-propane) streams are mixed with the recycle benzene, vaporized in a vaporizer and preheated in a feed effluent heat exchanger (FEHE) using the hot reactor effluent, before being heated to the reaction temperature in a furnace. The heated stream is fed to a cooled packed bed reactor. The hot reactor effluent loses sensible heat in the FEHE and is further cooled using cooling water. The cooled stream is sent to a light-out-first distillation train. The inert n-propane and small amounts of unreacted propylene are recovered as vapour distillate from column 1. The bottom stream is further distilled in the recycle column to recover and recycle unreacted benzene as the distillate. The recycle column bottom stream is sent to the product column to recover 99.9% cumene as the distillate and the heavy DIPB as the bottoms.
2.1. Determination of column 1 pressure
The flash tank in the Luyben design has been replaced with a distillation column (column 1) to reduce the loss of benzene and hence increase the plant operating profit. A column operating pressure of 5 bar, with a benzene loss of 0.12 kmol/h, was found to be near optimal. Table 1 provides an economic comparison of the base case design with the original Luyben design (with a flash tank instead of column 1) for the same operating conditions. The yearly operating profit of the base-case design is noticeably higher than that of the Luyben design, due to the reduction in the loss of precious benzene in the fuel gas stream. For completeness, economic and operating condition details of the Mode I and Mode II optimum solutions, where the plant operating profit (defined later) is optimized, are also provided in Table 1.
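A one-line check puts the residual benzene slip in perspective (assuming 8760 operating hours per year, consistent with the $/year entries of Table 1, and the $68.6/kmol benzene price quoted there):

```python
# Back-of-envelope worth of the residual benzene slip at the chosen
# 5 bar column pressure. The 8760 h/year basis is an assumption.
slip_kmol_h = 0.12
benzene_usd_per_kmol = 68.6
loss_usd_per_year = slip_kmol_h * benzene_usd_per_kmol * 8760
```

The remaining slip is worth on the order of $0.07 million/year, small against the roughly $1.2 million/year profit improvement over the flash-tank design reported in Table 1.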
Figure 1: Base-case cumene process flowsheet
3. Economic optimization of the base case design
3.1. Definition of objective function (J) and constraints
The total operational profit per year (365 days) was chosen as the objective function J to be maximized, with
J = product revenue - reactant cost + DIPB credit + vent gas credit + reactor steam credit - preheater electricity cost - steam cost in reboilers and vaporizer.
Since the plant is already built, it has certain physical limitations associated with the unit operation equipment. Moreover, it is always optimal to have the most valuable product at its constraint to avoid product give-away. The steady state degrees of freedom available to maximize the Mode I / Mode II operating profit are noted in Table 2. Since J does not depend strongly on the cooler outlet temperature, the latter is fixed at 100 °C.
Table 1. Economic comparison of base-case design with original Luyben design

                              Unit         Luyben   Base case   Mode I   Mode II
Reactor inlet temp            °C           358      358         361      346.99
Total benzene flow            kmol/h       207      207         245      269.7
Hot spot temp                 °C           430      421.60      417.50   411.3
Benzene recycle               kmol/h       207      207         245      269.70
Vent                          kmol/h       9.98     6.47        6.02     19.04
Heavy bottom                  kmol/h       1.55     1.59        1.20     2.99
Fresh propene                 kmol/h       101.93   101.93      101.93   175.02
Fresh benzene                 kmol/h       98.78    95.09       95.00    153.87
Product                       kmol/h       92.86    92.94       93.67    150.47
Total capital cost            $10^6        4.11     4.26        4.26     4.26
Total energy cost             $10^6/year   2.23     2.35        2.68     3.43
Benzene cost                  $10^6/year   59.36    57.14       57.09    92.47
Propylene cost                $10^6/year   30.63    30.63       30.62    52.59
Reactor steam credit          $10^6/year   0.40     0.54        0.53     0.86
Vent (B1) credit              $10^6/year   1.59     0.70        0.59     1.84
Heavy (B2) credit             $10^6/year   0.71     0.48        0.38     0.95
Product revenue               $10^6/year   107.74   107.87      108.72   174.64
Total operational cost        $10^6/year   89.52    88.40       88.89    144.88
Total operational profit (J)  $10^6/year   18.23    19.47       19.83    29.76

Price data: HP steam $9.83/GJ, steam generated $6.67/GJ, electricity cost $16.8/GJ, benzene price $68.6/kmol, propylene price $34.3/kmol, cumene price $132.49/kmol.
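Two of the Table 1 entries can be reproduced directly from the quoted price data, assuming 8760 operating hours per year (consistent with the $10^6/year entries):

```python
# Cross-check of Table 1's Mode I product revenue and benzene cost from
# the quoted stream flows and prices. 8760 h/year is an assumption
# implied by the table's annualized entries.
HOURS_PER_YEAR = 24 * 365

def annual_musd(flow_kmol_h, price_usd_per_kmol):
    """Annualized cash flow of a stream in $10^6/year."""
    return flow_kmol_h * price_usd_per_kmol * HOURS_PER_YEAR / 1e6

revenue_mode1 = annual_musd(93.67, 132.49)      # Table 1 reports 108.72
benzene_cost_mode1 = annual_musd(95.00, 68.6)   # Table 1 reports 57.09
```

Both values agree with the table to within rounding, which confirms the annualization basis of the economics.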
Table 2. Steady state degrees of freedom

Process variable                  Specification    DOF
Fresh propene flow rate           101.93 kmol/h#   0/1*
Total benzene flow rate           Variable         1
Furnace outlet temperature        Variable         1
Reactor cooler temperature        Fixed            0
Column 1: condenser temperature   32.25 °C         0
Column 1: xC3,B                   Variable         1
Column 2: xC9,D, xC6,B            Variable         2
Column 3: xC9,D                   0.999            0
Column 3: xC12,B                  Variable         1
#: Fixed for Mode I. *: Degree of freedom for Mode II
3.2. Optimization results
Ideally, all degrees of freedom in Table 2 should be optimized simultaneously. However, to overcome convergence issues in UniSim, the separation section is optimized first, followed by the rest of the plant (see e.g. Araujo et al., 2007). The optimization results obtained are summarized in Table 3. For Mode I operation none of the constraints are active, while in Mode II operation (optimal throughput) the maximum furnace duty and product column boilup constraints are active. From an economic point of view, it is optimal to increase the Mode I feed rate as long as no plant constraint is violated. As the propylene feed rate is
increased, the profit increases due to higher production. The first constraint to become active is the maximum furnace heating. This is, however, not the real bottleneck, as the feed rate can be increased further by lowering the reactor inlet temperature and/or the recycle benzene flow, further increasing the profit. As the throughput is increased further, the maximum product column boilup constraint becomes active for a fixed DIPB mol fraction in the product column bottoms. This mol fraction may be decreased to further increase the throughput and profit with the boilup constraint active. The DIPB mol fraction can however not be decreased too much, as the profit then decreases due to cumene product loss in the heavy fuel stream. The column 3 xC12,B value reported in Table 3 corresponds to this limit of maximum operating profit.

Table 3. Summary of Mode I and Mode II Optimization Results

                          Mode I                        Mode II
Process variable          Type       Value              Type                Value
Fresh propene             Fixed      101.93 kmol/h      Variable            175.02 kmol/h
Total benzene             Variable   245 kmol/h         Variable            269.7 kmol/h
Rxr inlet temperature     Variable   361 °C             Max furnace duty*   346.99 °C
Cooler temperature        Fixed      100 °C             Fixed               100 °C
Top T                     Fixed      32.25 °C           Fixed               32.25 °C
Column 1 xC3,B            Variable   0.01               Variable            0.01
Column 2 xC9,D            Variable   5.5x10-3           Variable            0.0012
Column 2 xC6,B            Variable   2.7x10-4           Variable            3.5x10-4
Column 3 xC9,D            Fixed      0.999              Fixed               0.999
Column 3 xC12,B           Variable   0.9542             Max boilup*         0.9628
*: Variable is fixed by this constraint
4. Self-optimizing Controlled Variables
Skogestad (2004) states that self-optimizing control is achieved when one obtains an acceptable economic loss with constant setpoints for appropriately chosen or designed controlled variables, without the need to re-optimize for disturbances. In this work four disturbances are considered, as listed in Table 4.

Table 4. Set of disturbances considered

SN   Disturbance variable                       Nominal value    Change
d1   Propylene flow rate                        101.93 kmol/h    -10 kmol/h
d2   Column 1 condenser temperature             32.25 °C         +3 °C
d3   Inert composition in the propylene feed    5% propane       +3%
d4   Propylene flow rate                        101.93 kmol/h    +10 kmol/h
4.1. Mode I Self-Optimizing Controlled Variables
For each of the four disturbances, the plant is sequentially reoptimized over all 6 unconstrained degrees of freedom (see Table 2). We also reoptimize the process keeping the distillation column mole recoveries constant (i.e. using 6 - 4 = 2 degrees of freedom). The difference in the objective function between the two cases was very small for all disturbances (< 0.07%). We therefore choose the distillation column mole recoveries as controlled variables, for two reasons: first, the resulting loss values are very small; second, it reduces the number of self-optimizing variables to be determined and greatly simplifies the further analysis, as we are left with only 2 input variables instead of 6. To choose the remaining two self-optimizing controlled variables, we use the "exact local method" (Halvorsen et al., 2003), which minimizes the worst case loss due to
suboptimal operation under the self-optimizing control policy. The branch and bound algorithm of Kariwala (2007) is used for the evaluation of the loss. Seven candidate controlled variables are evaluated, namely: reactor inlet temperature, preheater duty, fresh benzene flow rate, total benzene flow rate, reactor feed benzene to propylene ratio, reactor feed benzene mol fraction and vaporizer outlet temperature. The best set of two self-optimizing variables for Mode I operation is found to be the reactor inlet temperature and the reactor feed benzene to propylene ratio.
4.2. Mode II Self-Optimizing Controlled Variables
The maximum furnace duty and the maximum product column boilup are the two active constraints in Mode II. This leaves 5 (7 dof - 2 active constraints) unconstrained degrees of freedom, for which we need to find 5 self-optimizing controlled variables. As in Mode I, the column purity specifications, namely column 1 xC3,B, column 2 xC9,D and xC6,B, when kept at their nominal values optimized for no disturbance, result in negligible loss for the set of disturbances considered (note that column 3 xC12,B is fixed by its maximum boilup constraint). Again the exact local method is used to select the best self-optimizing variables for the remaining two unconstrained degrees of freedom. The best set is found to be the fresh benzene flow rate and the reactor inlet propylene mol fraction. The economic loss for the next best set, the total benzene flow and the reactor feed benzene to propylene ratio, is only slightly higher. Since the latter variable is a self-optimizing variable also in Mode I, we select this set as our choice of controlled variables in Mode II to simplify the transition from Mode I to Mode II: the transition then only requires replacing the reactor inlet temperature controller with the total benzene flow controller.
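The idea of ranking candidate controlled variables by worst-case loss can be illustrated with a deliberately tiny, hypothetical scalar example (brute-force enumeration over a disturbance grid, not the exact local method itself; the economics J, the candidate combination and all numbers are invented for illustration):

```python
# Toy illustration of self-optimizing CV selection: compare the worst-case
# loss of holding the input constant against holding a measurement
# combination constant. Hypothetical scalar economics, not the plant's.

def J(u, d):
    return (u - d) ** 2 + 0.1 * u ** 2   # economic objective (to minimize)

def u_opt(d):
    return d / 1.1                       # from dJ/du = 0

def worst_case_loss(policy, d_grid):
    return max(J(policy(d), d) - J(u_opt(d), d) for d in d_grid)

d0 = 1.0                                  # nominal disturbance
d_grid = [0.5 + 0.05 * i for i in range(21)]   # d in [0.5, 1.5]

# Candidate 1: hold the input u at its nominal optimum (constant setpoint).
loss_const_u = worst_case_loss(lambda d: u_opt(d0), d_grid)

# Candidate 2: hold the combination c = u - 0.9*d at its nominal value.
c0 = u_opt(d0) - 0.9 * d0
loss_combo = worst_case_loss(lambda d: c0 + 0.9 * d, d_grid)
```

The well-chosen combination tracks the moving optimum almost exactly, so its worst-case loss is orders of magnitude below that of simply holding the input constant; the exact local method and the branch and bound search automate this comparison for many candidates.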
5. Conclusion and future work In this work, a cumene production plant has been systematically analyzed for economically optimal operation at given throughput (Mode I) and optimum throughput (Mode II). Results show that in Mode I operation, the optimized unconstrained column product purities are self optimizing along with the reactor inlet temperature and the reactor feed benzene to propylene ratio. In Mode II, the maximum furnace duty and product column boilup constraints are active. The self-optimizing variables are again the unconstrained column product purities along with the total benzene flow to the reactor and the reactor feed benzene to propylene ratio. Further work would focus on developing a plantwide control structure for the process and its dynamic validation.
References
1. Luyben, W. L. (2010). Design and Control of the Cumene Process. Ind. Eng. Chem. Res., 49(2), 719.
2. Skogestad, S. (2000). Plantwide control: the search for the self-optimizing control structure. J. Proc. Cont., 10, 487.
3. Skogestad, S. (2004). Control structure design for complete chemical plants. Comp. Chem. Engg., 28, 219-234.
4. Halvorsen, I. J., Skogestad, S., Morud, J. C., & Alstad, V. (2003). Optimal selection of controlled variables. Ind. Eng. Chem. Res., 42, 3273.
5. Kariwala, V. (2007). Optimal measurement combination for local self-optimizing control. Ind. Eng. Chem. Res., 46, 3629.
6. Araujo, A., Govatsmark, M., & Skogestad, S. (2007). Application of plantwide control to the HDA process. I - Steady state optimization and self-optimizing control. Cont. Engg. Practice, 15, 1222.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A robust optimization based approach to the general solution of mp-MILP problems Martina Wittmann-Hohlbein, Efstratios N. Pistikopoulos Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, U.K.
Abstract In this work, we focus on the approximate solution of multi-parametric mixed integer linear programming (mp-MILP) problems involving objective function (OFC), left-hand side (LHS) and right-hand side (RHS) uncertainty. A two-step algorithmic procedure is proposed. In the first step a partial immunization against the uncertainty is performed leading to a robust RIM-mp-MILP problem, whereas in the second step explicit optimal solutions of the robust model are derived by applying a decomposition algorithm. Computational studies are presented, demonstrating that (i) the robust RIM-mp-MILP counterpart is less conservative than the conventional robust MILP model, and (ii) the combined robust/multi-parametric procedure is computationally efficient, providing a tight upper bound to the overall global solution of the general mp-MILP problem. Keywords: multi-parametric programming, robust optimization, mixed-integer linear programming.
1. Introduction
We consider the multi-parametric mixed integer optimization problem (P):

z(\theta) := \min_{x,y} \; (c + H\theta)^T x + (d + L\theta)^T y
s.t. \; A(\theta) x + E(\theta) y \le b + F\theta                                  (P)
x \in R^n, \; y \in \{0,1\}^p
\theta \in \Theta := \{\theta \in R^q \mid \theta_l^{min} \le \theta_l \le \theta_l^{max}, \; l = 1, \dots, q\},

where \theta denotes the vector of parameters and A(\theta) := A_N + \sum_{l=1}^{q} \theta_l A_l, analogously for E(\theta). We assume that all matrices and vectors have appropriate dimensions. In the following we denote by the lower case letter with subscript i, for instance [a_i], the column vector of entries related to the i-th row of the corresponding matrix.
The presence of uncertainty in mixed integer linear programming models, employed in widespread application fields including planning/scheduling, hybrid control and process synthesis, significantly increases the complexity and computational effort of retrieving explicit optimal solutions. Our aim is to find solutions of (P) that (i) are good approximations of the optimal solution and (ii) can be obtained efficiently. In this work we apply suitable robust optimization techniques to derive solutions of (P). Our approach, denoted as a two-stage method for the solution of general mp-MILP problems, differs from existing methods in that we are foremost interested in an immunization against LHS-uncertainty. We formulate a robust counterpart of type RIM-mp-MILP, with only OFC- and RHS-uncertainty in the model, that closely resembles the parametric nature of the original mp-MILP problem:
problems of this type can be efficiently solved using the algorithm proposed by Faísca et al. (2009). The method is described next.
2. A two-stage method for the solution of general mp-MILP problems
2.1. The worst-case oriented partially robust counterpart of (P)
The pair (\bar{x}, \bar{y})^T is called a LHS-robust feasible solution of (P) if

\forall \gamma \in \Theta: \; A_N \bar{x} + E_N \bar{y} + \sum_{l=1}^{q} \gamma_l (A_l \bar{x} + E_l \bar{y}) \le b + F\theta        (1)

for any \theta \in \Theta. Incorporating (1) into (P) and introducing q auxiliary variables and additionally 2q linear constraints for each constraint leads to the formulation of the robust counterpart of the general mp-MILP problem. The partially robust counterpart (RC) associated to (P) is given by

\bar{z}(\theta) := \min_{x,y,u} \; (c + H\theta)^T x + (d + L\theta)^T y
s.t. \; [a_i^N]^T x + [e_i^N]^T y + \sum_{l=1}^{q} \left( \theta_l^N ([a_i^l]^T x + [e_i^l]^T y) + r_l u_{il} \right) \le b_i + [f_i]^T \theta, \; i = 1, \dots, m        (RC)
-u_{il} \le [a_i^l]^T x + [e_i^l]^T y \le u_{il}, \; l = 1, \dots, q, \; i = 1, \dots, m
x \in R^n, \; y \in \{0,1\}^p, \; u_i \in R^q, \; i = 1, \dots, m
\theta \in \Theta := \{\theta \in R^q \mid \theta_l^{min} \le \theta_l \le \theta_l^{max}, \; l = 1, \dots, q\},

where r_l := (\theta_l^{max} - \theta_l^{min})/2 denotes the range and \theta_l^N = \theta_l^{max} - r_l the nominal value of \theta_l. The robust model (RC) is a RIM-mp-MILP problem. Every feasible solution of (RC) is a LHS-robust feasible solution of (P). Note that the conventional robust counterpart (cvRC) of (P) corresponds to a fully deterministic MILP problem (Lin et al. (2004)). The solutions of (cvRC) are immune against all data variations in (P). Clearly, every feasible solution of (cvRC) is also feasible for (RC), and consequently for (P).
2.2. A decomposition algorithm for (RC)
We outline the steps of the algorithm presented in Faísca et al. (2009). The master problem (M) is derived from the RIM-mp-MILP problem (RC) by treating the parameter \theta as an optimization variable. Due to the bilinear terms in the objective function it corresponds to a nonlinear and non-convex optimization problem. The optimal integer node y^{int} of (M) is input to (RC), which then results in an mp-LP subproblem (S). The critical regions of (S), each a subset of \Theta in which a particular basis remains optimal, are uniquely defined by the LP optimality conditions (Gal (1979)). Between every master and sub-problem iteration the MINLP master problem is updated. A new MINLP master problem is solved for each one of the current critical regions. Integer cuts are introduced into the formulation of (M) in order to exclude previously visited integer solutions. Parametric cuts ensure that only integer nodes that are optimal for (RC) for a certain realization of the parameters are considered. The cuts are given by

\sum_{j \in J_k} y_j - \sum_{j \in L_k} y_j \le |J_k| - 1, \; k = 1, \dots, K,

where K denotes the number of previously identified integer solutions in this region which have been marked optimal, J_k := \{j \mid y_j^k = 1\} and L_k := \{j \mid y_j^k = 0\} respectively, and |\cdot| corresponds to the cardinality, and
(c + H\theta)^T x + (d + L\theta)^T y \le \bar{z}_k(\theta), \; k = 1, \dots, K,

where \bar{z}_k(\theta) is the optimal objective value of (RC) at the integer node related to index k. The algorithm terminates in a region where the master problem is infeasible. In order to keep the number of non-convex optimization problems to a minimum, further comparison procedures are omitted. Instead, we retain an envelope of parametric profiles (Dua et al. (2002)) and collect all integer nodes and corresponding continuous solutions that have been identified to be optimal for certain points within a critical region. Function evaluation of the objective values for the parametric profiles stored in the envelope determines the optimal solution of (RC) at any parameter point. We observe the following properties when the decomposition algorithm is applied to (RC): the critical regions are polyhedral and convex, and the solutions stored in the envelope of parametric profiles of (RC) are piecewise affine functions.
2.3. Explicit solution of the general mp-MILP problem (P)
The decomposition algorithm outlined in the previous section can be readily extended to address problem (P). If the coefficients of the constraint matrices in (P) are uncertain, the critical regions identified need not be convex, and the solutions stored in the envelope of parametric profiles of (P) are piecewise fractional polynomial functions. The solution of mp-LP sub-problems with LHS-uncertainty is the bottleneck in solving the general mp-MILP problem (P): it either involves enumeration of the parameter space to retrieve the exact solution (Li et al. (2007)), or an approximation of the solution via global optimization procedures (Dua et al. (2004)). This difficulty is the driving motivation to find a suitable reduction of the model (P) in order to reduce the computational complexity of the decomposition algorithm and to obtain a competitive, close to optimal solution of (P).
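The envelope bookkeeping described above can be sketched as follows; the two integer nodes and their affine profiles are invented illustrative data:

```python
# Sketch of the envelope of parametric profiles: within a critical region,
# several (integer node, affine objective profile) pairs may be stored, and
# the solution at a parameter point is picked by function evaluation.

def evaluate_envelope(envelope, theta):
    """Return (best_value, best_node) of the stored profiles at theta.
    Each profile is affine: value = coeffs . theta + const."""
    best = None
    for node, coeffs, const in envelope:
        val = sum(c * t for c, t in zip(coeffs, theta)) + const
        if best is None or val < best[0]:
            best = (val, node)
    return best

# Two hypothetical integer nodes with affine objective profiles in theta:
envelope = [
    ("y1", (1.0, 0.0), 2.0),   # z = theta_1 + 2
    ("y2", (0.0, 1.0), 1.0),   # z = theta_2 + 1
]
val, node = evaluate_envelope(envelope, (0.5, 3.0))
```

For the general problem (P) the stored profiles become fractional in theta, but the same point-wise minimization applies.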
The proposed two-stage method consists of recasting (P) as the partially robust RIM-mp-MILP model (RC) as described in Section 2.1, before applying the decomposition algorithm outlined in Section 2.2. An upper bound on the optimal objective value of (P) is thereby obtained. Note that a lower bound on the optimal objective value of (P), to serve as a reference value, can be obtained by solving to global optimality the deterministic MINLP problem derived from (P) in which \theta is treated as an additional optimization variable.
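The worst-case test in (1) reduces, row by row, to a sign check on the deviation terms; a minimal numeric sketch, under the assumption that the binaries are folded into the x vector and with purely illustrative data:

```python
# Hedged sketch of the per-row worst-case feasibility test behind (1):
# for gamma_l in [lo_l, hi_l], the worst case of gamma_l * (a_l . x) is
# hi_l * v if v > 0 else lo_l * v, where v = a_l . x. Binary variables
# are folded into x here; all numbers are illustrative.

def row_robust_feasible(x, a_N, a_dev, bounds, b_i):
    """Check a_N.x + sum_l gamma_l*(a_l.x) <= b_i for all gamma in the box."""
    dot = lambda a: sum(ai * xi for ai, xi in zip(a, x))
    worst = dot(a_N)
    for a_l, (lo, hi) in zip(a_dev, bounds):
        v = dot(a_l)
        worst += hi * v if v > 0 else lo * v   # maximize gamma_l * v
    return worst <= b_i

# One deviation row a_1 = (0, 1) with gamma_1 in [-5, 5]:
ok = row_robust_feasible((1.0, 2.0), (1.0, 1.0), [(0.0, 1.0)], [(-5.0, 5.0)], 15.0)
```

Replacing each absolute-value term by the auxiliary variable u_{il} and the two linear constraints of (RC) makes the same test expressible within a MILP.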
3. Applications of the two-stage method
Example 1. Consider the problem (P1) and its partially robust counterpart (RC1):

(P1):  z(\theta) = \min_{x,y} \; \theta_1 x_1 + x_2 + y_1
       s.t. \; -x_1 + x_2 + x_3 = \theta_2 + 2 y_1
            x_1 - \theta_3 x_2 + x_4 = 1 + \theta_1 y_2
            y_2 - y_1 \le 0, \; x \ge 0
            y \in \{0,1\}^2, \; -5 \le \theta \le 5

(RC1): \bar{z}(\theta) = \min_{x,y} \; \theta_1 x_1 + x_2 + y_1
       s.t. \; -x_1 + x_2 + x_3 = \theta_2 + 2 y_1
            x_1 + 5 x_2 + x_4 \le 1 - 5 y_2
            -x_1 + 5 x_2 - x_4 \le -1 - 5 y_2
            y_2 - y_1 \le 0, \; x \ge 0
            y \in \{0,1\}^2, \; -5 \le \theta \le 5.

The application of the proposed two-stage method to (P1) required the solution of 9 MINLP and 2 mp-LP problems, returning 6 convex critical regions in which an optimal solution exists, as depicted in Figure 1.a. In contrast, the decomposition algorithm applied to (P1) required the solution of 15 MINLP and 4 mp-LP problems, computing a total of 9 convex and non-convex critical regions (Figure 1.b). CR_{-\infty} marks a region where (P1) is unbounded. The parametric profiles obtained by the two-stage method provide an upper bound on the exact optimal objective value of (P1), i.e. on the value
related to the best solution among the profiles stored in the envelope, with respect to the exact solution at any parameter point. As an example, consider \theta \in (\bar{CR}_1 \cup \bar{CR}_2) \cap CR_{-\infty}, for which z(\theta) = -\infty but -\infty < \bar{z}(\theta) < \infty holds true. The conventional robust counterpart of (P1), whose solutions are immunized against LHS-, OFC- and RHS-uncertainty, is infeasible for every parameter realization.
Figure 1: Critical regions of (P1) with (a) two-stage method and (b) decomposition algorithm
Example 2. We consider a sequential scheduling problem with uncertain processing and set-up times (Ryu et al. (2007)). The process consists of two stages with one unit per stage. Three products A, B and C are processed. The production time of B, denoted by θ1, is unknown but bounded. After two products have been processed at the final stage, this stage may become unavailable for as long as half of the completion time needed for the first two products. The latter variability is modeled as LHS-uncertainty. The objective is to minimize the make-span. The application of the proposed two-stage method required the solution of 3 MINLP problems and 2 mp-LP problems, whereas the decomposition algorithm executed 5 MINLP and 2 mp-LP problems. The parametric profiles derived with both methods are given in Table 1 and Table 2, and the corresponding critical regions are depicted in Figure 2. The optimal make-span obtained with the two-stage method is independent of θ2, the parameter associated with the uncertain set-up time. It yields an overall tighter approximation of the optimal make-span of the original scheduling problem than the optimal make-span of the conventional robust counterpart (Figure 2.b).
Critical Region                  | Optimal Make-span | Optimal Sequence
{3 ≤ θ1 ≤ 6, 0 ≤ θ2 ≤ 0.5}       | 1.5 θ1 + 15       | A-B-C
{6 ≤ θ1 ≤ 8, 0 ≤ θ2 ≤ 0.5}       | θ1 + 18           | A-C-B
Table 1: Parametric profiles of Example 2 with the two-stage method

[Rows of Table 2: critical regions with fractional boundaries in θ1 and θ2, piecewise optimal make-spans, and optimal sequences A-B-C and A-C-B.]
Table 2: Parametric profiles of Example 2 with the decomposition algorithm
A robust optimization based approach to the general solution of mp-MILP problems
Figure 2: (a) Critical regions with decomposition algorithm, and (b) optimal make-span of Example 2 with two-stage method and of its conventional robust counterpart
4. Conclusions
In order to obtain close-to-optimal solutions of the general mp-MILP problem (P), we propose a novel multi-parametric partially robust counterpart of RIM-mp-MILP type which, compared to the original problem, is computationally less expensive to solve with the decomposition algorithm, in terms of fewer iterations and the avoidance of either discretization of the parameter space or additional global optimization procedures. The second advantage of the proposed two-stage method is the generation of convex critical regions, which significantly simplifies the characterization of the parameter space. A further benefit of the two-stage approach is the low degree of conservatism of the new robust model compared to the conventional deterministic worst-case robust counterpart. We therefore believe the combined robust/parametric optimization approach for general mp-MILP problems to be an attractive alternative both to the expensive explicit solution of the original problem and to the overly pessimistic results obtained by conventional robust programming.
Acknowledgements Financial support from EPSRC (EP/G059071/1, EP/I014640) and from the European Research Council (MOBILE, ERC Advanced Grant, No: 226462) is gratefully acknowledged.
References
Li, Z., Ierapetritou, M.G. (2007). A new methodology for the general multiparametric mixed-integer linear programming (MILP) problems. Ind. Eng. Chem. Res. 46(15), 5141-5151.
Lin, X., Janak, S.L., Floudas, C.A. (2004). A new robust optimization approach for scheduling under uncertainty: I. Bounded uncertainty. Comput. Chem. Eng. 28(6-7), 1069-1085.
Faísca, N.P., Kosmidis, V.D., Rustem, B., Pistikopoulos, E.N. (2009). Global optimisation of multiparametric MILP problems. J. Glob. Optim. 45(1), 131-151.
Gal, T. (1979). Postoptimal Analyses, Parametric Programming and Related Topics. McGraw-Hill, US.
Dua, V., Bozinis, N.A., Pistikopoulos, E.N. (2002). A multiparametric programming approach for mixed-integer quadratic engineering problems. Comput. Chem. Eng. 26(4-5), 715-733.
Dua, V., Papalexandri, K.P., Pistikopoulos, E.N. (2004). Global optimization issues in multiparametric continuous and mixed-integer optimization problems. J. Glob. Optim. 30(1), 59-89.
Ryu, J., Dua, V., Pistikopoulos, E.N. (2007). Proactive scheduling under uncertainty: A parametric optimization approach. Ind. Eng. Chem. Res. 46(24), 8044-8049.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A deterministic optimization approach for the unit commitment problem
Marian G. Marcovecchio a,b, Augusto Q. Novais a, Ignacio E. Grossmann c

a UMOSE/LNEG, Estrada do Paço do Lumiar 22, 1649-038, Lisbon, Portugal
b INGAR/CONICET, Instituto de Desarrollo y Diseño, and UNL, Universidad Nacional del Litoral, Santa Fe, Argentina
c Department of Chemical Engineering, Carnegie Mellon University, USA
Abstract
Reliable power production is critical to the profitability of electricity utilities. This concern, together with the need for less dependence on fossil fuel consumption and for CO2 mitigation, is leading to the prospective use of combined conventional and alternative forms of energy generation as the most promising means to meet an increasing demand for electric power. Unit commitment (UC) arises in this context as a most critical decision process, involving a large number of interacting factors and therefore constituting a complex optimization problem. As such, the UC problem has been receiving a good deal of attention in the literature, with heuristic approaches being dominant. As an alternative, a deterministic optimization approach is proposed in this paper and applied to the thermal UC problem. The model developed is a mixed-integer quadratic programming (MIQP) problem with the objective of minimizing the fuel consumption (calculated by a quadratic function) and start-up costs; the strategy proposed for its solution exploits the characteristics of the UC problem. It consists of valid integer cutting planes and a Branch and Bound (B&B) search, which are developed and combined into a Branch and Cut (B&C) algorithm particular to the UC problem. The approach is described and implemented to solve a reference case study. Although the UC problem is NP-hard, the results show that the proposed technique is capable of providing the optimal solution for real-world sized instances.
Keywords: Energy optimization, Unit Commitment problem, Deterministic optimization, Branch and Cut algorithm
1. Introduction
Regulated and deregulated industry organizations correspond to two different UC problems: Security-Constrained (SCUC) and Price-Based (PBUC). In SCUC, the on/off states and production levels of given power generators are determined to meet a time-varying demand for electricity over a given time horizon, while satisfying constraints such as start-up and shut-down times, ramp-up and ramp-down limits, minimum up and down times and spinning reserve, among others. In the PBUC the decisions are taken
according to financial risks, under no obligation to satisfy the expected demand. A large number of papers address the UC problem, since a good solution can bring about considerable economic savings. However, for real-world instances the underlying optimization problem turns out to be highly combinatorial and hence NP-hard, whence the need for efficient methods able to reduce computational times. In the literature these methods fall into two categories: deterministic and heuristic [1]. This work addresses the SCUC problem with thermal generating units. A mathematical programming model is formulated, considering all its inherent constraints and a single set of binary variables (the on/off status of each generator at each period of time); the result is an MIQP, hard to solve through deterministic approaches for high-dimensional instances. A solution methodology based on valid integer cutting planes is developed and implemented to reach the global optimal solution, with a B&B search defined so as to capitalize on the proposed cuts. Finally, the model and technique are implemented and applied to an example with varying dimensions.
2. Mathematical problem formulation
The SCUC problem for thermal generating units can be described as follows: given I power units and a specified time-varying demand over T time periods, determine, for each unit, the start-up and shut-down schedules and the power production level that minimize the operational costs while meeting the demand. In what follows the sub-indices i and t denote units, i = 1,...,I, and time periods, t = 1,...,T. The set of binary variables u_{i,t} represents the on/off status of unit i at period t; the sets of continuous variables p_{i,t}, cu_{i,t} and cd_{i,t} denote the power produced, the start-up cost and the shut-down cost of unit i at period t, respectively. The mathematical programming model therefore has I·T binary and 3·I·T continuous variables. The following formulation states the SCUC problem as an MIQP model, where the objective function to be minimized is the operating cost, which includes fuel consumption, start-up and shut-down costs:

  min cost = Σ_{i=1..I} Σ_{t=1..T} [ a_i u_{i,t} + b_i p_{i,t} + c_i p_{i,t}^2 + cu_{i,t} + cd_{i,t} ]   (1)

where a_i, b_i and c_i are the coefficients of the fuel cost function a_i + b_i p_{i,t} + c_i p_{i,t}^2. The constraints to be satisfied are given by equations (2) to (16); the principal ones are:

  Σ_{i=1..I} p_{i,t} ≥ D_t,                          t = 1,...,T                               (2)
  Σ_{i=1..I} Pmax_i u_{i,t} ≥ D_t + R_t,             t = 1,...,T                               (3)
  Pmin_i u_{i,t} ≤ p_{i,t} ≤ Pmax_i u_{i,t},         i = 1,...,I; t = 1,...,T                  (4)
  u_{i,t} − u_{i,t−1} ≤ u_{i,t+j},                   i = 1,...,I; t = 1,...,T; j = 1,...,(TU_i − 1)   (5)
  u_{i,t+j} ≤ u_{i,t} − u_{i,t−1} + 1,               i = 1,...,I; t = 1,...,T; j = 1,...,(TD_i − 1)   (6)
  p_{i,t−1} − RD_i ≤ p_{i,t} ≤ p_{i,t−1} + UR_i,     i = 1,...,I; t = 1,...,T                  (7)

Constraints (2) and (3) enforce demand satisfaction and the spinning reserve R_t, (4) the generation limits of committed units, (5) and (6) the minimum up time TU_i and minimum down time TD_i, and (7) the ramp-down and ramp-up limits RD_i and UR_i. The initial status Tini_i of each unit fixes u_{i,t} = 0 or u_{i,t} = 1 over the first periods of the horizon, and the start-up cost is activated by

  cu_{i,t} ≥ HSc_i (u_{i,t} − u_{i,t−1}),            i = 1,...,I; t = 2,...,T                  (8)

with HSc_i the start-up cost of unit i; the remaining constraints define the shut-down costs and the related switching logic analogously.
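The logic of the minimum up- and down-time constraints (5)-(6) can be checked for a candidate on/off schedule as follows; this is an illustrative sketch of the constraint logic, not the authors' B&C code:

```python
def satisfies_min_up_down(u, TU, TD):
    """Check constraints (5)-(6) for one unit's on/off schedule.
    u[0] is the status before the horizon and u[1..T] the schedule:
    after a start-up the unit must stay on for TU periods, and after a
    shut-down it must stay off for TD periods (within the horizon)."""
    T = len(u) - 1
    for t in range(1, T + 1):
        started = u[t] - u[t - 1] == 1   # start-up at period t
        stopped = u[t - 1] - u[t] == 1   # shut-down at period t
        for j in range(1, (TU if started else 0)):
            if t + j <= T and u[t + j] != 1:
                return False             # violates minimum up time (5)
        for j in range(1, (TD if stopped else 0)):
            if t + j <= T and u[t + j] != 0:
                return False             # violates minimum down time (6)
    return True
```

Such a check is what the binary cutting planes of the B&C algorithm enforce implicitly on every node of the search tree.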
Deterministic global optimization of kinetic models of metabolic networks: outer approximation vs. spatial branch and bound

In this work we address the global optimization of kinetic models of metabolic networks that are modelled via the GMA formalism (Voit and Savageau; Alves et al.). We present a novel customized spatial branch and bound (sBB) algorithm and compare its performance with that of an outer approximation method previously introduced by the authors, as well as with the commercial global optimizer BARON.

2. Mathematical formulation
The complete mathematical formulation of the metabolic network can be found in the work by Polisetty et al. Here, due to space limitations, we only provide a brief outline of it. We address the optimization of metabolic networks under steady-state conditions; that is, we assume that the concentration X of the n metabolites of the metabolic network does not vary with time t. Hence, the net balance of the p processes r contributing to the production and the depletion of a metabolite i equals

  dX_i/dt = Σ_{r=1..p} μ_ir v_r = 0   (1)

Here μ_ir represents the stoichiometric coefficient of process r in the mass balance of metabolite i. The rate at which process r occurs, denoted by v_r, can be determined from a kinetic equation of choice, for instance the so-called power-law formalism (Eq. 2):

  v_r = γ_r Π_{j=1..n+m} X_j^f_rj   (2)

The basal-state activity of the enzyme governing process r is represented by γ_r, whereas f_rj denotes the kinetic order of metabolite j in process r. If Eq. (2) is introduced in Eq. (1) we obtain a Generalized Mass Action (GMA) model as follows:

  dX_i/dt = Σ_{r=1..p} ( μ_ir γ_r Π_{j=1..n+m} X_j^f_rj ) = 0   (3)

The optimal enzymatic activity can be expressed as a fold-change continuous variable K_r over the basal-state activity parameter γ_r, as illustrated in Eq. (4):

  dX_i/dt = Σ_{r=1..p} ( μ_ir K_r γ_r Π_{j=1..n+m} X_j^f_rj ) = 0   (4)

The value of variable K_r indicates whether the gene coding a given enzyme must be overexpressed (K_r > 1), inhibited (K_r < 1) or left unmodified (K_r = 1) in order to maximize the synthesis rate of the desired product. While it is clear that modifying as many enzymes as there are in the network will lead to the best performance possible, it is also obvious that such a number of changes may be prohibitive. Hence, a limit must be imposed on the number of enzymes allowed for modification (Eqs. 5 and 6).
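The GMA relations (Eqs. 2-4) can be evaluated directly; the function and variable names below are ours, and the numerical values in the usage example are made up for illustration:

```python
import math

def gma_rates(X, gamma, f, K=None):
    """Rates of the p processes of a GMA model (Eqs. 2 and 4):
    v_r = K_r * gamma_r * prod_j X_j ** f_rj, where K_r = 1 means the
    enzyme of process r is left at its basal activity."""
    p = len(gamma)
    K = K or [1.0] * p
    return [K[r] * gamma[r] * math.prod(Xj ** f[r][j] for j, Xj in enumerate(X))
            for r in range(p)]

def steady_state_residual(mu, v):
    """Net balance dX_i/dt = sum_r mu_ir * v_r (Eq. 3); zero at steady state."""
    return [sum(mu[i][r] * v[r] for r in range(len(v))) for i in range(len(mu))]
```

The bilinear products K_r * (power terms in X) are exactly the non-convexities that the sBB and outer approximation schemes must bound.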
The second and the third reduced-order models approximate the behavior of the oxygen and hydrogen excess ratios. Both SS models have one state, one disturbance (I), one manipulated variable, the mass flow rate of air (mair) or hydrogen (mH2), and one output/controlled variable, the oxygen (λO2) or hydrogen (λH2) excess ratio, respectively. The system matrices are given as follows:
A2 = [-0.014276], B2 = [0.060283], C2 = [-0.0088574], D2 = [233.68]
A3 = [-0.055887], B3 = [0.0086835], C3 = [-0.0055123], D3 = [238.05]
The fourth SS model represents the behavior of the temperature, which is maintained at the desired set point through a heat-up and a cooling system. The system matrices are presented below:
A4 = [1], B4 = [0.0052417  -0.0004687], C4 = [3.3348e-009], D4 = [1]
This model has one state, one output/control variable (Tfc), one known disturbance, the ambient temperature (Tamb), and two inputs, the power to the resistance for the heat-up (WR) and the power of the fans (Wcl), both expressed as a percentage of full equipment operation. As a measure of the fitness of the reduced models we calculated the mean square-root difference from the detailed model: it was 0.028 W for the power model, 0.0407 and 0.0371 for the air and hydrogen excess ratios, and 0.0042 K for the temperature model. This metric was calculated using 2400 samples over a period of 600 s, and the results are a clear indication that the SS models have the required accuracy to describe the behavior under consideration.
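Assuming the reported matrices define discrete-time models of the form x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k) (the sampling convention is not stated in the text, so this is an assumption of the sketch), a single-state response can be simulated as:

```python
def simulate_siso(A, B, C, D, u_seq, x0=0.0):
    """Step the scalar state-space model x(k+1) = A*x + B*u, y = C*x + D*u
    over an input sequence and return the output sequence."""
    x, y = x0, []
    for u in u_seq:
        y.append(C * x + D * u)  # output before the state update
        x = A * x + B * u        # state update
    return y
```

For instance, `simulate_siso(-0.014276, 0.060283, -0.0088574, 233.68, u_seq)` would trace the oxygen excess ratio model under an input sequence `u_seq`.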
4. Multi-Parametric Model Predictive Control (mpMPC) Framework
The next step involves the design of multi-parametric model predictive controllers for the PEMFC system. A major drawback which often limits the applicability of the traditional MPC framework is the increased online computational load of solving the constrained optimization problems. To overcome this drawback, explicit or multi-parametric model predictive control (mp-MPC) was developed (Pistikopoulos et al., 2007), which avoids the need for repetitive online optimization. In mp-MPC the online optimization problem is solved off-line with multi-parametric programming techniques, to obtain the objective function and the control actions as functions of the measured states/outputs (the parameters of the process), together with the regions of the state/output space where these functions are valid, i.e. a complete map of the parameters. The control is then applied by means of simple function evaluations instead of typically demanding online optimization computations. The following MPC formulation is considered for the PEM fuel cell control system:

  min_{x,u,y} J = Σ_{i=1..Ny} (y_i − y_sp,i)^T Q (y_i − y_sp,i) + Σ_{j=0..Nu−1} (u_j − u_sp,j)^T R (u_j − u_sp,j)
  s.t.  x(t+1) = A x(t) + B u(t) + C v(t),                                      (2)
        y(t) = D x(t),
        y_min ≤ y(t) ≤ y_max,  u_min ≤ u(t) ≤ u_max,  v_min ≤ v(t) ≤ v_max
where u are the manipulated variables, y are the controlled variables, Nu is the control horizon and Ny the prediction horizon. The objective function is set to minimize the
C. Ziogou et al.
quadratic norm of the error between the output variables and the reference points. Moreover, the system is subject to physical constraints which must be satisfied during operation:

  0.1 A ≤ I ≤ 12 A,  500 cc/min ≤ mair ≤ 3000 cc/min,  0 ≤ WR ≤ 55.8 W,  313 K ≤ Tfc ≤ 353 K,
  0.1 W ≤ P ≤ 7 W,  200 cc/min ≤ mH2 ≤ 1000 cc/min,  0 ≤ Wcl ≤ 25.8 W,  288 K ≤ Tamb ≤ 313 K.

The aforementioned optimization problem (2) is a multi-parametric Quadratic Programming (mp-QP) problem and can be solved with standard multi-parametric techniques (Pistikopoulos et al., 2007). In our study the explicit parametric controllers were derived with the Parametric Optimization (POP) software. The control horizon in each problem is 2, so there are two optimization variables (u_t+0, u_t+1).
Table 1. Optimization problem parameters and settings
Objective | Parameters (θ)               | Optimization variables (u)           | Pred. hor. (Ny) | Weight (Q) | Weight (R) | CR
P         | θ1 = [x1 x2 ΔI ΔP P Psp]     | I(t+0), I(t+1)                       | 10              | 3          | 0.01       | 67
λO2       | θ2 = [x1 I λO2 λO2,sp]       | mO2(t+0), mO2(t+1)                   | 20              | 1          | 0.1        | 13
λH2       | θ3 = [x1 I λH2 λH2,sp]       | mH2(t+0), mH2(t+1)                   | 40              | 100        | 0.1        | 13
Tfc       | θ4 = [x1 Tamb Tfc Tfc,sp]    | WR(t+0), WR(t+1), Wcl(t+0), Wcl(t+1) | 100             | 1000       | 0.001      | 17

The corresponding parameters of each problem are shown in Table 1, together with the number of critical regions (CR) of each explicit/multi-parametric MPC controller, while Figure 3 presents the control design for the PEMFC system, including the input/output variables of each controller and the interactions between them.
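The online function-evaluation step of mp-MPC can be sketched as a lookup over the stored critical regions; the region data in the usage example are placeholders, not the actual controllers of Table 1:

```python
def mpmpc_control(regions, theta):
    """Online part of mp-MPC: instead of solving the QP (2) at every sample,
    locate the critical region {theta : A theta <= b} containing the current
    parameter vector theta (states, measured disturbances, set points) and
    evaluate the affine law u = F theta + g stored for that region."""
    for A, b, F, g in regions:
        if all(sum(a * t for a, t in zip(row, theta)) <= bi + 1e-9
               for row, bi in zip(A, b)):
            return [sum(fi * t for fi, t in zip(Frow, theta)) + gi
                    for Frow, gi in zip(F, g)]
    raise ValueError("theta outside the feasible parameter space")
```

This is why the online burden reduces to a region search plus a matrix-vector product, regardless of the prediction horizon used off-line.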
Figure 3: Control structure (power controller acting on I, oxygen and hydrogen excess ratio controllers acting on mair and mH2, temperature controller acting on WR and Wcl, with the current I and ambient temperature Tamb as measured disturbances)

Figure 4: Temperature control and cooling/heat-up
5. Simulation Results
Figures 4, 5 and 6 depict the simulation results of the mp-MPC implementation for different operating conditions (set points). During the simulation we assumed that the ambient temperature was kept constant at 298 K. The performance of the temperature controller is presented in Figure 4, where simulations were performed with three temperature set-point changes (333 K, 338 K, 343 K) while the power controller's set point was kept at a constant level (5 W); it is observed that the controller rapidly follows the set-point changes in temperature without offset. Due to the small size of the PEMFC, the system needs to be heated during steady-state operation in order to follow the set point (the resistance works at 1-4%).
Multi-Parametric Model Predictive Control of an Automated Integrated Fuel Cell Testing Unit

In Figures 5 and 6 the performance of the power and oxygen excess ratio controllers is presented. During the experiment the hydrogen excess ratio is kept constant by its controller (λH2 = 1.5) and the temperature controller has a fixed set point at 338 K. The mass flow rates (Figure 6) are properly adjusted to fulfill the starvation-avoidance constraint by keeping the excess ratio at a constant level. The power controller showed excellent response to load changes, and the excess ratio controller demonstrated a fast settling time (less than 2 s) after current disturbances.
Figure 5: Control of power and λO2
Figure 6 Current and mass flow rates
Overall, the mp-MPC controller design is able to track the desired reference points regardless of the fluctuations of the interacting variables. Finally, the system response remained within the feasible area of operation, since the output of the controllers was bounded by the operating constraints, and stability was guaranteed.
6. Conclusions
In this work an explicit/multi-parametric MPC controller design has been developed and validated offline on the simulation model. Four controllers have been derived in order to fulfill the power demand while avoiding starvation, minimizing the excess of hydrogen supply and maintaining the fuel cell temperature at the desired set point. The results have shown excellent control performance. Current work focuses on the implementation of the derived controllers in the experimental PEMFC system.
Acknowledgements
Financial support from the DECADE IAPP Project of FP7 (Contract number PIAP-GA-2008-230659) is gratefully acknowledged.
References
Arce, A., Ramirez, D.R., del Real, A.J., Bordons, C. (2007). Proc. of the 46th IEEE Conf. on Decision and Control, New Orleans, LA, USA.
Pukrushpan, J.T., Stefanopoulou, A.G., Peng, H. (2004). Control of Fuel Cell Power Systems: Principles, Modelling, Analysis and Feedback Design. Advances in Industrial Control, Springer.
Pistikopoulos, E.N., Georgiadis, M., Dua, V. (2007). Multi-parametric Model-based Control: Theory and Applications. Weinheim: Wiley-VCH.
del Real, A.J., Arce, A., Bordons, C. (2007). Development and experimental validation of a PEM fuel cell dynamic model. Journal of Power Sources, 173(1), 310-324.
Ziogou, C., Voutetakis, S., Papadopoulou, S., Georgiadis, M.C. (2010). Modeling and Validation of a PEM Fuel Cell System. Computer Aided Chemical Engineering, 28, 721-726.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Use of commercial structured databases as innovative solution for FEED projects
Fabio Ferrari a, Lorenzo Selmi a

a Foster Wheeler Italiana, via Caboto 1, Corsico 20094, Italy
Abstract
Foster Wheeler Italiana (FWI) decided to test and evaluate the impact of an integrated project database on its workflow. To this end, FWI developed an integrated project database by means of commercial database software, following the usual working procedures of Process Design. Process Flow Diagrams (PFDs) were used to define the database structure, and the connections relevant to process data were implemented using the Process Datasheets as the main references between the design parameters of unit equipment. The development activities carried out internally have shown that, by implementing a good database configuration, it is possible to offer Process Engineers a documentation structure that reflects the Process Flow Diagram topology. The data management makes it possible to differentiate information depending on the responsibility of each company department, avoiding errors during subsequent data transfers. Moreover, documents normally shared among company sections (e.g. Process Flow Diagrams for material of construction) can be kept consistent and up to date at all times, reducing the time lost during drafting and revision phases. With this approach, FWI experienced an increased quality of the documentation issued, without affecting man-hours expenditure, and achieved a more streamlined project management.
Keywords: Structured Database, Document Management, Workflow, Work Process
1. Introduction
In the process industry, the growing demand to improve the design cycles of engineering companies is leading to an extensive use of structured database software; however, the databases available on the market are normally built to contain only a reduced set of standard information, and the commercial application tools are usually not integrated into the process design environment used for project development. Such structured database software is often used only as an archive for issued documents or, at best, as a simple document management system; this approach often results in database structures that do not contain all the logical connections between equipment and data, and that require laborious a-posteriori work to define the relationships among the documents. An initial improvement can be achieved through a better integration of application tools into the design environment (Bessling et al., 1997), in particular if the heterogeneous data generated during the process are adequately managed and stored inside the database (Bayer et al., 2003). The interoperation of different tools has often been achieved only through the standardization of interfaces at a syntactical level for common technical platforms (Wasserman, 1990), leading to difficulties in implementing all the necessary correlations between the different application outputs.
A better solution can be obtained with the standardization and connection of the fine-grained dependencies between the different documents, through a semantic, document-oriented integration that gives a meaningful context to data exchange (Marquardt et al., 2004). Process Flow Diagrams can be considered the adequate context to manage the data transfer among different application tools, and specifically can be the central documents guiding the integration of the other deliverables (Bayer et al., 2001). Following this approach, FWI has experienced advantages in the project document creation phase, through enhanced management of process simulation results inside a structured database (Selmi et al., 2010). The complexity of larger databases may, however, make it difficult to find a specific document, and critical issues can arise from revisions, especially if many entities (i.e. Company Departments) are involved in the project workflow. To better understand the complexity of the problem, it is useful to analyze the typical workflow followed by documents (Figure 1.1).
Figure 1.1
In summary, after the characterization of the operating conditions, the process engineer prepares the equipment datasheets, including the design conditions and the main sizing information. These documents are then used to define the additional equipment details (e.g. materials and mechanical requirements) and to finalize the relevant Material Requisitions. Finally, the Material Requisitions are sent to Vendors/Suppliers, who develop their offers in compliance with the requisition. During Front End Engineering Design (FEED) projects, several difficulties can be experienced in proceeding with a fast and efficient follow-up on the information initially provided. In particular, most of the problems arise when process specifications are passed to the other disciplines and the data have to be transferred into other documents created by the relevant Engineering Departments. During this phase an adequate manual check on the data is required to guarantee the correctness and coherence of the issued documents, and specifically to avoid uncontrolled error propagation. In addition, consistency checks become particularly important during revision and follow-up activities, especially after Vendor/Supplier feedback, when the modified data have to be carefully monitored before the subsequent implementation and issue. All these additional checks are time consuming and may become difficult to manage and control if the integration and connection among the documents is not adequately defined. In order to solve these issues, the idea is to modify the standard approach to document management and to start creating project databases using the deliverables generated by process engineers as a basis for the definition of structures and for the identification of document correlations. This innovative approach is discussed in this paper.
2. FEED Projects database customization
2.1. Database customization
The off-the-shelf database software offered a very limited basket of options and was completely inadequate to cover all the features necessary for a FEED project. The software was completely open, but its default structure was lacking and, specifically, not compliant with the needs of Engineering Companies. Significant customization has therefore been carried out, successfully, in order to include all the standard documents necessary to meet FWI quality requirements. In this phase the variables were created disregarding the interdependencies between the different documents, following only Company internal procedures as the general guideline for data definition.
2.2. The importance of the Process Flow Diagram creation phase
In parallel to document customization, the information workflow was analyzed in depth, in order to evaluate the possible advantages of a structured database. This study highlighted that the default structures embedded in the software were not adequate and would require a huge amount of time to be made compliant with project requirements and easily accessible for the users. The reason is that the procedures normally followed in commercial software are mainly focused on the Engineering Department's approach to the problem, and the database is viewed only as an archive for issued deliverables. A better approach is instead to start defining the database structure from the Process Flow Diagram (PFD) creation. As a matter of fact, the PFD can represent and model all the connections among the different equipment items, and can create the relevant dependencies through the use of object connectors, which represent the links among the database elements and can be managed directly during the PFD drafting phase (Bayer et al., 2003).
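The idea of letting PFD drafting define the database structure, with stream connectors propagating operating data into preliminary equipment datasheets, can be sketched as follows; the class and key names are ours, for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Stream:
    """Process stream: the principal object holding operating conditions."""
    name: str
    data: dict = field(default_factory=dict)  # e.g. {"T": 120.0, "P": 5.0}

@dataclass
class Equipment:
    """Equipment item; connectors inherit design data from attached streams."""
    tag: str
    inlets: list = field(default_factory=list)
    outlets: list = field(default_factory=list)

    def datasheet(self) -> dict:
        """Preliminary process datasheet: data propagated automatically from
        the connected streams via the connector mechanism described above."""
        sheet = {}
        for s in self.inlets + self.outlets:
            for key, value in s.data.items():
                sheet.setdefault(key, []).append((s.name, value))
        return sheet
```

Drafting the PFD then amounts to instantiating Stream and Equipment objects and wiring them, so the database structure emerges from the flowsheet topology rather than being defined a posteriori.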
Moreover, this solution respects the sequence of definition commonly followed during FEED projects, developed in the Process Department from the very beginning. Through this operation the process engineers automatically configure the database and implement a bulk document structure that is already consistent with the Process Flow Diagrams, increasing the simplicity of access to project deliverables.
2.3. Data management through the use of connectors
After the database structure has been created through the PFD drafting phase, the subsequent step is to identify where to store the main data. The solution has been to use process streams as the principal objects for the identification of operating conditions, and to take advantage of the database equipment connectors to transfer the data throughout the whole database; with this solution, the definition of the heat and material balances directly defines the data assignments from the flowsheets, so that it is not necessary to integrate the unit operations defined inside the simulation topology. Effectively, through this approach the database objects inherit the minimum data necessary for the design, and the information is automatically archived in the right place, below the correct connected equipment (Selmi et al., 2010).
2.4. Data and documents sharing
The approach described above allows the data transfer from the heat and material balances and creates preliminary datasheets; however, finalizing a document requires the intervention of the designer to define the correct equipment sizing parameters. This operation requires the integration of different application tools, homogenizing their heterogeneous results to evaluate the performance of the specific equipment in all the
possible operating cases. In addition, many engineering disciplines have to be involved to finalize the activities and complete the equipment definition. The idea has been to select the Process Datasheets as the main documents for equipment design, and to identify the process variables defined inside these documents as the source of all the subsequent dependencies of database objects. The Process Datasheets thus constitute bulk documents shared among all project entities and identify the main variables that shall be kept as the principal source for equipment design. All other equipment engineering datasheets (e.g. Material Requisitions) constitute dependent database documents, linked to the main structure and defined through the relevant Process Datasheet. During Front End Engineering Design projects the Engineering Departments can create their deliverables starting from a shared common database, where data are automatically represented in the different documents and redundant information has already been eliminated through the Process Datasheet variable definition. With this approach the documents can be issued as required by Company Procedures, but with the consistency guarantees offered by the database structure. All additional information is then included in the database, located in specific areas dedicated to the department in charge of its definition. In addition, since the base document structure is always the same, the subsequent datasheets are kept consistent throughout project development, especially during the revision of shared data, so that all departments are assured they are working on the most up-to-date information (Figure 2.1). The best example of the advantages of this approach is given by the Material of Construction (MOC) diagrams, which are typical deliverables in FEED projects.
The standard workflow foresees the process engineer creating the process flowsheets and defining the unit design conditions/contaminants. The material experts are responsible for the material definition for equipment and lines, and show this information on a different drawing type (the MOC diagram). All of those phases normally generate delays and can cause many difficulties during data revision events. The innovative approach described here uses the PFD as the main document and thereby guarantees document consistency, because the process scheme is kept intrinsically consistent, whilst the other data are placed only as additional information in the right database position and shared among all the disciplines involved.
Figure 2.1
2.5. Rights management An additional key point in the definition of the database structure is the management of access rights, in particular to prevent disciplines not responsible for a particular datum from modifying it without the approval of the Department that has accountability for that datum. The effectiveness of the database use has been guaranteed by the rigid definition of the information workflow, organized following the Company Procedures that define the responsibility of each entity involved in the project. Through this path the deliverables
development in FEED projects allows the sharing of many documents and information among the Departments, while correctly defining the boundary of intervention of each discipline responsible for each specific design variable.
3. Conclusions The possibility to reduce or, in some cases, completely avoid manual data input has greatly improved the consistency of the issued deliverables during the information transfer between the Departments involved in FEED projects. In addition, the approach reduces the effort required to search and check the documents and their revisions, which are kept coherent through the database structure. The possibility to share the documents among the Departments has been improved and the quality has been enhanced, without affecting the manhours spent; it has to be highlighted that the information is automatically loaded into the correct positions and the rights definition avoids interference and modifications from disciplines that are not responsible. This innovative approach guarantees efficiency in data management and aligns with the normal flow of development during all project phases. Furthermore, the database structure automatically archives all the project deliverables in the most efficient position, as demonstrated by the experienced reduction in the time required to find each single document, because the connections among the equipment items are intrinsically built. All those features and tools have shown interesting perspectives for collaborations with external companies, where sharing the same database enhances the information exchange. Thanks to the robustness of the database structure, data transfers and modifications during the entire FEED can be easily followed and checked. Finally, the possibility to define template structures for repetitive design configurations (Ferrari et al., 2010) can offer major manhour savings in the re-use of Basic Design Packages developed with a structured database, foreseeing their integration into subsequent FEED development projects.
References B. Bayer, K. Weidenhaupt, M. Jarke & W. Marquardt, 2001, A flowsheet centered architecture for conceptual design, Computer Aided Chemical Engineering, 9, Elsevier, 345-350. B. Bayer, S. Becker, M. Nagl, 2003, Integration Tools for Supporting Incremental Modifications within Design Processes in Chemical Engineering, Computer Aided Chemical Engineering, 15, Elsevier, 1256-1261. B. Bessling, B. Lohe, H. Schoenmaker et al., 1997, Cape in process design - potential and limitations, Computers & Chemical Engineering, 21-Suppl.1, Elsevier, S17-S21. F. Ferrari, L. Selmi, E. Capossela, 2010, Advantages in the production of Process Design Packages (Hydrotreating Units as study case), Computer Aided Chemical Engineering, 28, Elsevier, 1949-1953. W. Marquardt, M. Nagl, 2004, Workflow and information centered support of design processes the IMPROVE perspective, Computers & Chemical Engineering, 29, Elsevier, 65-82 L. Selmi, E. Capossela, F. Ferrari, 2010, Enhanced Data management to create Project Documents starting from Process Simulations, Computer Aided Chemical Engineering, 28, Elsevier, 1663-1666. A. Wasserman, 1990, Tool integration in software engineering environments, Lecture Notes in Computer Science, 467, Springer, 137-149.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) c 2011 Elsevier B.V. All rights reserved.
Controlled Variables from Optimal Operation Data Johannes Jäschke, Sigurd Skogestad∗ Department of Chemical Engineering; NTNU; Trondheim, Norway
Abstract In this paper we show how optimal operation data and concepts of self-optimizing control can be used for finding controlled variables which give optimal operation for the disturbances included in the data set. The method extracts the operation strategy which is hidden in the optimal data and may help to analyze and improve operation in the common case where it is difficult or very expensive to obtain a good model. Keywords: Controlled variable selection, data based methods, self-optimizing control
1. Introduction For many processes, obtaining a good mathematical process model is important for successful operation. However, obtaining a good model is often inhibited by several factors, such as a tight budget and limited knowledge or time. Thus, obtaining a good process model and keeping the model up to date is one of the major bottlenecks for the application of advanced process control in industry [1]. It is therefore desirable to minimize the modeling effort, while still achieving good process performance. In this work we present a method based on logged process data, which is readily available for many processes in industry. This data is used to find self-optimizing controlled variables whose optimal setpoint does not change with varying disturbances. Previously, self-optimizing control structure design has been based on a process model. The contribution of this paper is to show how past process data can be analyzed to determine good controlled variables.
2. Motivation and problem formulation An example of a system which is hard to model is a marathon runner. However, it is easy to collect data from runners, such as heart rate, stride frequency, temperature, blood oxygen content and breathing frequency. The data from the best runs of the runners, subject to expected disturbances such as hilly terrain and wind, is collected in an optimal data matrix Y. This data is used to determine a linear combination of measurements which is (almost) constant for all the best runs. By running so as to keep this linear combination of variables at its optimal value, an optimal running strategy can be implemented. Similarly, in a process plant, some operators may be able to operate the process more profitably than others. Analyzing the "optimal operation data" of these operators can reveal linear combinations of variables which other operators can use as guidance when operating the plant. Alternatively, these variables can be used for feedback control. We assume that optimal operation corresponds to minimizing a cost J, and that the optimization problem can be approximated in deviation variables around the optimal point
[email protected] 754
Johannes Jä schke et al.
as

\[
\min_u \; J = \begin{bmatrix} u^T & d^T \end{bmatrix}
\begin{bmatrix} J_{uu} & J_{ud} \\ J_{du} & J_{dd} \end{bmatrix}
\begin{bmatrix} u \\ d \end{bmatrix} \qquad (1)
\]
where u ∈ R^{n_u} and d ∈ R^{n_d} are the inputs and the disturbances, respectively. For the minimum to be unique, we require that J_uu is positive definite. For each degree of freedom u we search for a controlled variable c which is a linear combination of measurements, c = Hy. If the variables give acceptable performance when controlled at constant setpoints, they are called self-optimizing, as defined in [2]: Self-optimizing control is when we can achieve an acceptable loss with constant setpoint values for the controlled variables (without the need to reoptimize when disturbances occur). The loss is defined as L = J(u, d) − J(u_opt, d), where u is the input generated by the current operating policy, for example adjusting u such that c = Hy is kept constant.
3. Data method The new method for finding these measurement combinations is directly inspired by the null-space method [3], which we present in the following. 3.1. Null space method This method is based on the quadratic approximation of the cost function (1). In addition it is assumed that a linear noise-free measurement model is available, y = G^y u + G^y_d d. Here, y ∈ R^{n_y} is the vector of linearly independent measurements and G^y, G^y_d are the gain matrices of the system. Theorem 1 (Null space method). Given a sufficient number of noise-free linearly independent measurements, n_y ≥ n_u + n_d, select H such that HF = 0, where F = ∂y_opt/∂d is the optimal sensitivity matrix. Then controlling c = Hy to zero gives optimal operation with zero loss. Proof: Close to d_nom, by definition of F we have y_opt(d) − y_opt(d_nom) = F(d − d_nom). The optimal change in the controlled variables is: c_opt(d) − c_opt(d_nom) = HF(d − d_nom). Since H is selected such that HF = 0, the optimal variation c_opt − c_opt,nom is zero, too. Hence, controlling c = Hy to zero leads to optimal operation. The optimal sensitivity matrix F is usually obtained numerically, by optimizing a model, or by linearizing at the nominal point and evaluating F = G^y_d − G^y J_uu^{-1} J_ud [3]. 3.2. Using optimal operation data In the case where we do not have an explicit model, we will not know the optimal sensitivity F = dy_opt/dd. Now let us assume that we have "optimal" data for y for various disturbances, collected in the data matrix Y. If we have sufficient data then Y will contain the same information as F, because all disturbances have been rejected optimally. In particular, all columns of Y are linear combinations of the columns of F. For example, if we write the optimal sensitivity matrix F = ∂y_opt/∂d = [f_1, f_2, ..., f_{n_d}], we have that if the matrix is augmented by any (combination) of its columns, e.g. if Y = [F, αf_1 + βf_2], the left null space remains unchanged.
This proves the following result: Theorem 2 (Optimal data method - no noise). Given sufficient measurements, n_y ≥ n_u + n_d, and optimal measurement data Y, where for each distinct disturbance d there is
at least one column in Y. Then the optimal measurement combination can be determined by selecting H such that HY = 0. The H matrix may also give valuable insight into the operation policy. After scaling and centering of the data, the elements in the left singular vectors of Y can be used to analyze the operation strategy. We will demonstrate this in an example from economics below. In practice, the data matrix Y will not be consistent such that a null space with HY = 0 exists, either because of too many disturbances or, more likely, because of measurement noise. One approach to handle this is to do a singular value decomposition Y = UΣV^T, and select as H the transpose of the n_u columns of U which correspond to the smallest singular values in Σ. This is equivalent to approximating Y by the closest matrix with rank n_y − n_u. More generally, the minimum loss method (exact local method) of [4] may be used to handle cases with measurement noise, but this requires that we also have some "nonoptimal" data: Theorem 3 (Optimal data method with noise [4]). Given noisy optimal measurement data Y, and given "nonoptimal" data for the effect of the inputs (degrees of freedom) u on the measurements, Δy = G^y Δu, the optimal measurement combination can be determined by finding the H which minimizes ||(HG^y)^{-1} HY||_F. Note that we want HG^y to be large, that is, we want to use "sensitive" measurements. When the sensitivities are small and there is little measurement noise, the contribution from the term HG^y is small, and then Theorem 2 is sufficient.
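As an illustration of Theorem 2, the following sketch (with hypothetical gain matrices and disturbances, not from the paper) generates noise-free optimal measurement data for a toy quadratic problem and recovers H from the left singular vectors of Y associated with the smallest singular values; the recovered H then satisfies HF ≈ 0:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: n_u = 1, n_d = 2, n_y = 3, so n_y >= n_u + n_d (all numbers illustrative)
Juu = np.array([[2.0]])
Jud = np.array([[1.0, -0.5]])
Gy = rng.standard_normal((3, 1))    # measurement gains w.r.t. the input u
Gyd = rng.standard_normal((3, 2))   # measurement gains w.r.t. the disturbances d

# Optimal sensitivity F = dy_opt/dd = Gyd - Gy Juu^{-1} Jud
F = Gyd - Gy @ np.linalg.solve(Juu, Jud)

# "Optimal operation data": one column y_opt = F d per disturbance realization
D = rng.standard_normal((2, 50))
Y = F @ D

# Select H as the transpose of the n_u left singular vectors
# corresponding to the smallest singular values of Y
U, s, Vt = np.linalg.svd(Y)
n_u = 1
H = U[:, -n_u:].T

print(np.abs(H @ F).max())   # ~ 0: the data-based H lies in the left null space of F
```

Because the columns of Y span the column space of F, the weakest left singular direction of the data coincides with the left null space of the (unknown) sensitivity matrix.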
4. Case studies 4.1. Optimal operation of a chemical reactor (use of Theorem 2) We consider a CSTR with a reaction A → B, Fig. 1. The feed contains mainly component A, and the objective is to maximize the profit, calculated as the difference between the income from selling the product B and the cost of cooling: P = p_B C_B − p_cool T_i^2. T_i is the cooling temperature, which can be manipulated to optimize performance. The feed concentrations are the main disturbances, and the concentrations and reactor temperature are measured, so y = [C_A, C_B, T]. The optimal operation data is obtained by applying the NCO tracking procedure described in [5] in combination with finite-difference gradient estimates, where the input is perturbed to obtain a gradient estimate and, based on this estimate, adjusted to iteratively force the gradient to zero. The optimal data is collected into the data matrix Y, and a singular value decomposition Y = UΣV^T gives (σ1, σ2, σ3) = (86.5, 4.8, 0.28). Since there is one input, T_i, we select the column of U corresponding to σ3 = 0.28. This gives a controlled variable c = Hy with H = [−0.77 0.63 0.005]. In Fig. 2 the simulated disturbance scenario is given and Fig. 3 shows the input usage when applying NCO tracking (to generate the optimal data) and when using a PI controller to control c = Hy = −0.77 C_A + 0.63 C_B + 0.005 T to zero. Due to the continuous feedback control, controlling c = Hy gives much smoother input action than in the "optimal" data. Comparing the final profit in Fig. 4 shows that controlling the obtained invariant gives practically the same performance. 4.2. Economy example (use of Theorem 2) We consider economic indicators from 1991 to 2006 for France, Germany, Italy, Norway, the UK and the USA. The data is taken from [6]. The "measurements" y for each country are the interest
Figure 1. CSTR
Figure 2. Disturbances C_A,in, C_B,in
Figure 3. Inputs SOC and NCO tracking
Figure 4. Profit comparison (NCO profit vs. SOC profit)
rate (y1), unemployment (y2), the industrial production index (IPI, y3), the consumer price index (CPI, y4), tax revenue (% of GDP, y5) and the exchange rate to SDR (special drawing rights, a "lumped" currency derived from the Yen, US Dollar, British Pound and Euro, y6). The GDP growth, Fig. 5, is the criterion for optimality. The measurements of the year prior to the three years with the highest GDP growth are used for Y. This results in H = [−0.67 −0.02 0.22 0.62 0.32 0.10], Fig. 6. The most influential factors are the interest rate (−0.67) and the inflation rate (0.62). This is not unexpected, because the interest rate is used as a handle to control inflation. Of course the economy of a country is too complex to be described accurately by our selected variables, but we have shown that applying our method to economic data can reveal some of the operation strategy behind the data.
5. Discussion and conclusion The proposed "null space data method" picks out the weak directions in the data Y, whereas other "chemometric" regression methods concentrate on the strong directions in the data. An important reason for this is that we assume that the data is optimal, and we look for hidden combinations in this data that characterize the optimum. In regression methods, on the other hand, one looks for relationships between variables X and Y. To show that the methods are different, assume our data contains two data sets, Y = [Y1 X]^T, and we want to find the relationship between Y1 and X. We assume that dim(Y1) = dim(u) = n_u. With our method, the problem becomes min_H ||H[Y1 X]^T||_F. Here, H is not unique, so if H is an optimal solution, so is DH, where D
Figure 5. Annual GDP growth
Figure 6. Magnitude of elements in H
is an invertible matrix [4]. This degree of freedom may be used to set H = [I H_x], and we optimize the problem min_{H_x} ||Y1 + H_x X||_F, which has the least-squares solution H_x = −Y1 X†. Thus our method is equivalent to the normal regression methods for problems where the norm ||HY||_F is small, such that the contribution from the term J_uu(HG^y)^{-1} can be neglected, that is, for the noise-free case. However, a significant difference from standard regression methods, when we simply minimize ||HY||_F, is that we do not distinguish between Y1 and X data and try to find a relationship between them, but instead focus on finding invariant variable combinations c = Hy = H_y y1 + H_x x = 0. Our method has the advantage that it only uses data and does not rely on a model. Thus it is applicable to systems where it is very expensive or impossible to obtain an accurate model. Not even the cost function has to be known, as long as the data is optimal. However, it is important that the data is consistent in the sense that it gives the correct optimal sensitivity F = dy_opt/dd and contains little measurement noise. The main drawback is that we rely on optimal data, and performance cannot be improved beyond the learning data. However, one could obtain the optimal data using some expensive method, and then analyze it to find a cheap method which gives similar performance, as is done in the CSTR example above. Other applications could be to find the "secret" of good operators or the "control strategy" of a marathon runner or of some economy.
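To make this equivalence concrete, here is a small numerical check with illustrative data (not from the paper): in the noise-free case the least-squares solution H_x = −Y1 X† and the H obtained from the weakest left singular direction of [Y1; X], rescaled to the form [I H_x], coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noise-free data: Y1 (1 x K) is an exact linear function of X (2 x K)
X = rng.standard_normal((2, 40))
Y1 = np.array([[1.5, -0.7]]) @ X

# Regression route: minimize ||Y1 + Hx X||_F  ->  Hx = -Y1 X^+
Hx = -Y1 @ np.linalg.pinv(X)

# Null-space route: weakest left singular direction of [Y1; X],
# rescaled so that H takes the form [1, Hx]
U, s, Vt = np.linalg.svd(np.vstack([Y1, X]))
h = U[:, -1]                       # singular value ~ 0 in the noise-free case
Hx2 = (h[1:] / h[0]).reshape(1, -1)

print(np.allclose(Hx, Hx2))        # True: both routes recover the same invariant
```

The rescaling by h[0] removes the sign/scale ambiguity of the singular vector, which is exactly the non-uniqueness DH mentioned above.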
6. References
[1] D. Dochain, W. Marquardt, S. C. Won, O. Malik, M. Kinnaert and J. Lunze. Monitoring and control of process and power systems: adapting to environmental challenges, increasing competitivity and changing customer and consumer demands. Status report prepared by the IFAC Coordinating Committee on Process and Power Systems. Proceedings of the 17th IFAC World Congress, Seoul, Korea, July 6-11, 2008.
[2] S. Skogestad. Plantwide control: The search for the self-optimizing control structure. Journal of Process Control, 10:487-507, 2000.
[3] V. Alstad and S. Skogestad. Null space method for selecting optimal measurement combinations as controlled variables. Ind. Eng. Chem. Res., 46:846-853, 2007.
[4] V. Alstad, S. Skogestad and E. Hori. Optimal measurement combinations as controlled variables. Journal of Process Control, 19(1):138-148, 2009.
[5] G. François, B. Srinivasan and D. Bonvin. Use of measurements for enforcing the necessary conditions of optimality in the presence of constraints and uncertainty. J. of Proc. Contr., 15(6), 2005.
Optimization of IMC-PID Tuning Parameters for Adaptive Control: Part 1
Chih-Wei Chu, B. Erik Ydstie, Nikolaos V. Sahinidis
Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA
Abstract This paper describes Part 1 of a two-part strategy for robust certainty equivalence adaptive PID control. In Part 1 we develop the strategy for simple PID controller tuning which maximizes the bandwidth subject to gain and phase margin constraints. An implementation of the non-adaptive strategy in a real-time environment, using model estimation based on non-convex optimization, is described. The test shows the potential of the tuning method. In the next part, which due to space limitations could not be included here, we describe the adaptive implementation. Keywords: IMC-PID controller, robustness, optimization, adaptive control.
1. Introduction Surveys on PID control [1] show that the majority of PIDs are left on factory settings. This observation shows that the PID has inherent robustness properties when applied to typical chemical processes. However, one might suspect that significant gains could be achieved if the controllers were optimized, since the accumulated effect of millions of poorly tuned PIDs may be large. Many methods have been proposed for on- and off-line PID tuning. Most of these are not suitable for adaptive control since they do not tune performance subject to robustness constraints. For example, classical methods for PID tuning taught in undergraduate classes on process control (e.g. [2]) do not include any tuning knobs. In this respect the Internal Model Control (IMC) tuning procedure by Rivera et al. [3] is better suited, since it includes the filter parameter τ_c which tunes closed-loop performance [4,5] to achieve robust stability. In this paper we develop a tuning procedure for IMC which minimizes the filter parameter to maximize bandwidth subject to pre-specified gain and phase margins [6-9]. An analytical solution is developed for the first-order dead time process. In the next section we show that the approach meshes with certainty equivalence adaptive control.
2. Robustness of Certainty Equivalence Adaptive Control A compelling paradigm for adaptive control was developed in the 1970s under the banner of Certainty Equivalence Adaptive Control. In this approach the parameters of a transfer function model G_p(s) are estimated in real time by matching the model to process data. The resulting model is used to update the controller as shown in Figure 1A. The figure does not highlight that it is critical to update the controller tuning to achieve robust performance. This property is better illustrated in the equivalent diagram, Figure 1B, where the adaptive system is viewed as two composite systems. The first system
shows the controller in feedback with the model. The second system shows how the model adapts to the plant. Robust performance is achieved when the nominal feedback system on the left does not generate high frequency inputs.
Figure 1. Figure 1A (left) shows the classical representation of the certainty equivalence approach to adaptive control. Figure 1B (right) shows the structure used in the stability analysis.
The controller tuning only needs to be robust with respect to unstructured (additive) uncertainty. Closed-loop stability is ensured if |G_cl(s)Δ(s)| < 1, where Δ(s) is the model uncertainty and G_cl the closed-loop transfer function. It follows that the PID controller should be tuned so that it has pre-specified gain and phase margins to compensate for the given unstructured uncertainty. In this sense adaptive control achieves better performance and robustness than robust control theory alone, since we do not need to tune for parametric uncertainty.
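To make this small-gain condition concrete, a brief numerical sweep (all numbers hypothetical, chosen only for illustration) checks sup over ω of |G_cl(jω)Δ(jω)| < 1 for a simple first-order closed-loop approximation and a constant uncertainty bound:

```python
import numpy as np

# Hypothetical closed loop G_cl(s) = (1 - 0.5*theta*s)/(tau_c*s + 1)
# with a constant uncertainty bound |Delta(jw)| <= 0.4 (illustrative numbers).
theta, tau_c, delta = 3.0, 2.4, 0.4

w = np.logspace(-3, 3, 2000)                 # frequency grid [rad/s]
Gcl = (1.0 - 0.5j * theta * w) / (1.0j * tau_c * w + 1.0)

# Small-gain check: sup_w |G_cl(jw)| * |Delta(jw)| < 1  ->  robustly stable
print(np.max(np.abs(Gcl)) * delta < 1.0)     # True for these numbers
```

For this loop shape the magnitude peaks at low frequency near 1, so the stability margin is set directly by the uncertainty bound delta.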
3. PID Control with Pre-specified Gain and Phase Margins The IMC design achieves optimal performance under robustness constraints by minimizing τ_c subject to gain margin and phase margin constraints. Thus we want to solve the problem

\[
\min \tau_c \quad \text{s.t.} \quad A_m \ge A_m^*, \quad \Phi_m \ge \Phi_m^*, \quad \tau_c \ge 0
\]

where A_m^* is the desired gain margin (typically 1.7) and Φ_m^* is the desired phase margin (typically π/3). Denoting the process and controller transfer functions by G_p(s) and G_c(s), we get

\[
A_m = \frac{1}{|G_p(j\omega_p)G_c(j\omega_p)|} \qquad (1)
\]
\[
\arg\left[G_p(j\omega_p)G_c(j\omega_p)\right] = -\pi \qquad (2)
\]
\[
\Phi_m = \arg\left[G_p(j\omega_g)G_c(j\omega_g)\right] + \pi \qquad (3)
\]
\[
\left|G_p(j\omega_g)G_c(j\omega_g)\right| = 1 \qquad (4)
\]

where ω_p and ω_g are the phase and gain crossover frequencies. Below we simplify this problem.
4. Tuning algorithm A first-order-plus-time-delay (FOPTD) plant model is used to model the process control systems in this paper. The first-order Padé approximation gives

\[
G_p(s) = \frac{K_p}{\tau s + 1}\, e^{-\theta s} \approx \frac{K_p}{\tau s + 1}\,\frac{1 - \tfrac{1}{2}\theta s}{1 + \tfrac{1}{2}\theta s} \qquad (5)
\]
The IMC-PID formula for a FOPTD process is given as [9]:

\[
G_c(s) = \frac{\left(1 + \tfrac{\theta}{2}s\right)(\tau s + 1)}{K_p\left(\tau_c + \tfrac{\theta}{2}\right)s} \qquad (6)
\]
\[
K_c = \frac{1}{K_p}\,\frac{2(\tau/\theta) + 1}{2(\tau_c/\theta) + 1} \qquad (7)
\]
\[
T_i = \tau + \frac{\theta}{2} \qquad (8)
\]
\[
T_d = \frac{\tau}{2(\tau/\theta) + 1} \qquad (9)
\]

The open-loop and closed-loop transfer functions are given by

\[
G_{OL}(s) = G_c(s)G_p(s) = \frac{1 + \tfrac{\theta}{2}s}{\left(\tau_c + \tfrac{\theta}{2}\right)s}\, e^{-\theta s} \qquad (10)
\]
\[
G_{cl}(s) = \frac{G_{OL}(s)}{1 + G_{OL}(s)} \approx \frac{1 - \tfrac{1}{2}\theta s}{\tau_c s + 1} \qquad (11)
\]
Substituting Eq. (10) into (1)-(4) results in

\[
A_m = \frac{\left(\tau_c + \tfrac{\theta}{2}\right)\omega_p}{\sqrt{\left(\tfrac{\omega_p\theta}{2}\right)^2 + 1}} \qquad (12)
\]
\[
\arctan\left(\frac{\omega_p\theta}{2}\right) - \omega_p\theta = -\frac{\pi}{2} \qquad (13)
\]
\[
\Phi_m = \arctan\left(\frac{\omega_g\theta}{2}\right) - \omega_g\theta + \frac{\pi}{2} \qquad (14)
\]
\[
\omega_g = \frac{1}{\sqrt{\tau_c^2 + \tau_c\theta}} \qquad (15)
\]
Thus the optimization problem becomes

\[
\begin{aligned}
\min\ & \tau_c \\
\text{s.t.}\ & \frac{\left(\tau_c + \tfrac{\theta}{2}\right)\omega_p}{\sqrt{\left(\tfrac{\omega_p\theta}{2}\right)^2 + 1}} \ge A_m^* \\
& \arctan\left(\frac{\omega_p\theta}{2}\right) - \omega_p\theta = -\frac{\pi}{2} \\
& \arctan\left(\frac{\omega_g\theta}{2}\right) - \omega_g\theta + \frac{\pi}{2} \ge \Phi_m^* \\
& \omega_g = \frac{1}{\sqrt{\tau_c^2 + \tau_c\theta}} \\
& \tau_c,\ \omega_p,\ \omega_g \ge 0
\end{aligned}
\]

This problem has 3 variables (τ_c, ω_p, ω_g) and 3 parameters (θ, A_m^*, Φ_m^*). The problem needs only the time delay information, and an explicit analytical relation between the tuning parameter τ_c, the gain margin A_m and the phase margin Φ_m can be found from Eqs. (12)-(15). Solving Eq. (13) gives a constant, for convenience denoted α:

\[
\omega_p\theta = \alpha = 2.458 \qquad (16)
\]
By substituting Eqs. (12), (15) and (16) into (14), and using Eqs. (12) and (16), we express Φ_m and τ_c as functions of A_m, so that

\[
\Phi_m = \frac{\pi}{2} - \frac{2}{\sqrt{A_m^2\left(1 + \frac{4}{\alpha^2}\right) - 1}} + \arctan\frac{1}{\sqrt{A_m^2\left(1 + \frac{4}{\alpha^2}\right) - 1}} \qquad (17)
\]
\[
\tau_c = \frac{\theta}{2}\left(A_m\sqrt{1 + \frac{4}{\alpha^2}} - 1\right) \qquad (18)
\]
The plot in Fig. 2 shows that, for a given process, the gain and phase margins are coupled, so that only one of the two constraints will be active.

Figure 2. A_m vs. Φ_m with respect to τ_c
Figure 3. Closed-loop step responses (gain margin specifications A_m* = 2, 2.5 and 3)
The bandwidth ω_BW is defined as the frequency at which

\[
AR(j\omega_{BW}) = |G_{cl}(j\omega_{BW})| = \frac{1}{\sqrt{2}} \qquad (19)
\]

Substituting Eqs. (11) and (18) into (19) gives

\[
\omega_{BW} = \frac{2}{\theta\sqrt{A_m^2\left(1 + \frac{4}{\alpha^2}\right) - 2A_m\sqrt{1 + \frac{4}{\alpha^2}} - 1}} \qquad (20)
\]

The relation between ω_BW and A_m provides an estimate of the closed-loop performance. Now we can propose a tuning method based on a gain margin specification. According to Eq. (12), A_m is proportional to τ_c, so for a given A_m^* the minimal τ_c is obtained when A_m equals its minimal value, A_m^*. Then the PID controller parameters, the corresponding phase margin and the bandwidth can be calculated from Eqs. (7)-(9), (17) and (20). Fig. 3 shows the simulated closed-loop step responses of controllers designed for different gain margin specifications. As A_m^* gets larger, the controller becomes more conservative. Setting the term under the square root in (20) to zero gives the limiting value

\[
A_m^* = \frac{\sqrt{2} + 1}{\sqrt{1 + \frac{4}{\alpha^2}}} \approx 1.87 \qquad (21)
\]
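Putting the pieces together, here is a hedged sketch of the gain-margin based tuning rule: given a FOPTD model and a specification A_m*, compute τ_c from Eq. (18), the PID settings from Eqs. (7)-(9), and the resulting phase margin from Eq. (17). The model numbers used below are those of the heat-exchanger model identified in Section 5, taken as illustrative inputs.

```python
import math

ALPHA = 2.458                                  # from Eq. (16)
Q = math.sqrt(1.0 + 4.0 / ALPHA**2)

def imc_pid_from_gain_margin(Kp, tau, theta, Am):
    """IMC-PID settings for a FOPTD model from a gain-margin spec Am
    (sketch of Eqs. (7)-(9), (17) and (18))."""
    tau_c = 0.5 * theta * (Am * Q - 1.0)                                     # Eq. (18)
    Kc = (2.0 * (tau / theta) + 1.0) / (Kp * (2.0 * (tau_c / theta) + 1.0))  # Eq. (7)
    Ti = tau + 0.5 * theta                                                   # Eq. (8)
    Td = tau / (2.0 * (tau / theta) + 1.0)                                   # Eq. (9)
    r = math.sqrt(Am**2 * (1.0 + 4.0 / ALPHA**2) - 1.0)
    phi_m = 0.5 * math.pi - 2.0 / r + math.atan(1.0 / r)                     # Eq. (17)
    return Kc, Ti, Td, tau_c, phi_m

# Heat-exchanger FOPTD model (Kp = -0.35, tau = 10.3 s, theta = 3 s) with Am* = 2
Kc, Ti, Td, tau_c, phi_m = imc_pid_from_gain_margin(-0.35, 10.3, 3.0, 2.0)
print(round(Ti, 2), round(math.degrees(phi_m), 1))  # Ti = 11.8 s and the phase margin in degrees
```

Note that A_m* = 2 exceeds the limiting value 1.87 from Eq. (21), so the resulting bandwidth in Eq. (20) is finite.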
5. Real-time experiments The experimental setup comprises a countercurrent shell-and-tube heat exchanger. Hot water flows through the shell side and cold water flows through the tube side. Temperatures and flow rates are recorded at a sampling time of 0.1 seconds. The FOPTD model
\[
G_p(s) = \frac{-0.35}{10.3\, s + 1}\, e^{-3s} \qquad (22)
\]
was identified using global optimization as shown in [10]. Fig. 3 shows the response of the system output for a set-point change followed by a load disturbance (a change in the hot water flow rate). The controller gives a quick set-point response and good disturbance rejection.

Figure 3. Real-time experimental result (temperature in °F and cold/hot water flow rates in gpm vs. time). The precision is limited by the 8-bit A-to-D conversion.
6. Conclusions An optimization problem for the IMC-PID controller suitable for adaptive control is developed. The analytical solution for the optimization of bandwidth subject to gain and phase margin constraints is derived. We show that the gain and phase margins are coupled. The real-time experiment gives satisfactory set-point response and disturbance rejection. The proposed approach is ideally suited for application to adaptive control since the tuning criteria (gain margin and phase margin) are based on closed-loop rather than open-loop performance.
References [1] K.J. Astrom, T. Hagglund, PID Controllers: Theory, Design, and Tuning (2nd ed.), Instrument Society of America, Research Triangle Park, NC, 1995. [2] J.G. Ziegler, N.B. Nichols, Optimum settings for automatic controllers, Trans. A.S.M.E., 64 (1942) 759–768. [3] D.E. Rivera, M. Morari, S. Skogestad, Internal model control. 4. PID controller design, Ind. Eng. Chem. Res., 25 (1) (1986) 252–265. [4] S. Skogestad, Simple analytic rules for model reduction and PID controller tuning, J. Process Control, 13 (2003) 291–309. [5] I.L. Chien, P.S. Fruehauf, Consider IMC tuning to improve controller performance, Chemical Engineering Progress (1990) 33–41. [6] W.K. Ho, C.C. Hang, L.S. Cao, Tuning of PID controllers based on gain and phase margin specifications, Automatica, 31 (3) (1995) 497–502. [7] Q.G. Wang, H.W. Fung, Y. Zhang, PID tuning with exact gain and phase margins, ISA Transactions, 38 (1999) 243–249. [8] W.K. Ho, T.H. Lee, H.P. Han, Y. Hong, Self-tuning IMC-PID control with interval gain and phase margins assignment, IEEE Transactions on Control Systems Technology, 9 (3) (2001). [9] D.E. Seborg, T.F. Edgar, D.A. Mellichamp, Process Dynamics and Control (2nd ed.), Wiley, New York, 2003. [10] G.H. Staus, L.T. Biegler, B.E. Ydstie, Global optimization for identification, Proceedings of the 36th Conference on Decision and Control, (1997) 3010–3015.
System identification using wavelet analysis
Zdeněk Váňa¹, Samuel Prívara¹, Jiří Cigler¹ and Heinz A. Preisig²
¹ Department of Control Engineering, CTU in Prague, Prague, Czech Republic
² Department of Chemical Engineering, NTNU, Trondheim, Norway
Abstract System identification (SID) plays a central role in any activity associated with process operations. With control being done on different levels, different models are required for the same plant, each for a different range of dynamics. Besides the fact that most identification methods apply only to linear models, they also do not allow for selecting a frequency range. Wavelet methods have the ability to select time and frequency windows and are also applicable to nonlinear processes. The paper presents an approach in which the wavelet transform is used for SID, enabling selection of a particular frequency range. Even though the wavelet transform has been known for a long time and has a number of desirable properties, it is not frequently used in applications. Keywords: wavelet transform, system identification, singular perturbation
1 Introduction
The wavelet transform as a mathematical tool serves mainly for data analysis in both the time and frequency domains. The interconnection between wavelet and identification theories was partly shown in e.g. Ghanem and Romeo (2001). Wavelets are used mostly for nonlinear SID with a particular structure, where the unknown time-varying coefficients are expressed as a linear combination of basis (wavelet) functions (Tsatsanis and Giannakis (2002); Wei and Billings (2002)). This is improved in Staszewski (1998). Yet another option comes from the character of the wavelets, searching for the system's natural frequencies and damping (Ruzzene et al. (1997); Kijewski and Kareem (2003)). Apart from simple wavelet analysis, bi-orthogonal wavelets (Ho and Blunt (2003)), wavelet frames (Sureshbabu and Farrell (2002)) or even wavelet networks (Shi et al. (2005)) can be used for SID. Preisig and Rippin (1993a,b,c) deal with systems of a particular input-output structure, where the parameters are identified via spline wavelets and their derivatives. Carrier and Stephanopoulos (1998) showed that least squares can be extended to the wavelet transform. We will show the use of some wavelet filters with superior selectivity (significant magnitude on a specific frequency range) in the frequency domain and compact support in the time domain, which in turn allows an accurate implementation. This provides us with the possibility of analyzing measured data in the frequency domain without loss of information. Selection of a proper filter allows us to identify a system on a desired frequency range, or to identify a number of systems for distinct frequency ranges. This is especially convenient for systems with several dominant modes. We demonstrate yet another facet of wavelet-based identification, thus continuing earlier expositions (Preisig (2010)). The paper is organized as follows.
In Section 2 the properties of the discrete wavelet transform (DWT) are introduced. Section 3 introduces linear SID theory and interconnects it with the wavelet transform. A case study is provided in Section 4. The last section is a summary.
2. Discrete wavelet transform - the basic principle
An introduction to the DWT can be found in Frazier (1999) and Kolzow (1994). The main idea of the discrete wavelet transform is described by a multiresolution analysis of the Hilbert space (Frazier (1999)):
∗1 [email protected]; 2 [email protected]
Zdeněk Váňa et al.
V_j and W_j denote the space of approximations at the jth level and the space of details, respectively. It holds that ℓ2(Z_N) = V_p ⊕ W_p ⊕ W_{p−1} ⊕ · · · ⊕ W_1. The spaces V_j, W_j have dimension N/2^j, which is the reason why N has to be divisible by 2^p, where p is the maximum level of the analysis. Let N = 2M, φ, ψ ∈ ℓ2(Z_N), and let B = {R_{2k}φ}_{k=0}^{M−1} ∪ {R_{2k}ψ}_{k=0}^{M−1} be the wavelet basis at the 1st level; the vectors φ, ψ are its generators. The vector φ is called the father wavelet (or filter) and the vector ψ is called the mother wavelet (or wavelet). The coefficients of a signal z ∈ ℓ2(Z_N) in the basis B can be expressed as the inner products of z with the basis vectors. First we analyze the vector z ∈ ℓ2(Z_N) by the filters φ_1, ψ_1, obtaining x_1, y_1 ∈ ℓ2(Z_{N/2}). Since ℓ2(Z_{N/2}) is also a Hilbert space, we can analyze x_1 by the filters φ_2, ψ_2. In a similar manner x_2, y_2 are obtained, and this procedure is repeated up to the pth level. We consider a real signal z sampled with T_s = 1/f_s, which has a full, symmetric spectrum. To comply with the Shannon-Kotelnikov theorem we count only the single-sided spectrum; then, to retain the full signal energy, we have to multiply it by 2. If we express the Fourier transform of the wavelet analysis of the vector z, we obtain F{[z]_B} = ( (1/4) ẑ(m/2) φ̂(m/2), (1/4) ẑ(m/2) ψ̂(m/2) ). This can be applied repeatedly up to the pth level as well. Since the spaces V_j, W_j have the same dimension, the frequency range is divided into halves. An analogous operation is performed at the next levels.
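The analysis step above can be sketched numerically. The following is a minimal numpy illustration of the multilevel decomposition using the Haar filter pair (the Haar choice is an assumption for illustration; the paper does not fix a wavelet family here). Because the even shifts of the Haar pair form an orthonormal basis of ℓ2(Z_N), the signal energy is preserved across the decomposition:

```python
import numpy as np

def haar_analysis(z):
    """One level of DWT on l2(Z_N): inner products of z with the even
    circular shifts of the Haar father (phi) and mother (psi) filters."""
    phi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # father wavelet (low-pass)
    psi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # mother wavelet (high-pass)
    N = len(z)
    x = np.array([z[2 * k] * phi[0] + z[(2 * k + 1) % N] * phi[1]
                  for k in range(N // 2)])      # approximations in V_1
    y = np.array([z[2 * k] * psi[0] + z[(2 * k + 1) % N] * psi[1]
                  for k in range(N // 2)])      # details in W_1
    return x, y

def multilevel(z, p):
    """Repeat the analysis on the approximations up to level p."""
    details = []
    x = np.asarray(z, dtype=float)
    for _ in range(p):
        x, y = haar_analysis(x)
        details.append(y)
    return x, details

rng = np.random.default_rng(0)
z = rng.standard_normal(16)   # N = 16 = 2^4, so p can be at most 4
xp, det = multilevel(z, 3)
# orthonormal basis => the signal energy is preserved across levels
energy = np.sum(xp ** 2) + sum(np.sum(d ** 2) for d in det)
```

The lengths of the detail vectors halve at each level (8, 4, 2 for N = 16), mirroring the halving of the frequency range described in the text.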
3. System identification and wavelet transform
We consider a discrete-time linear time-invariant (LTI) system y(t) = G(q)u(t) + H(q)e(t), with q being the shift operator, e(t) zero-mean white noise with variance σ_e^2, and u(t), y(t) the system input-output signals (Ljung (2002)). Let us choose the linear predictor ŷ(t|t−1, θ) = −Σ_{k=1}^{n_a} a_k q^{−k} y(t) + Σ_{k=0}^{n_b} b_k q^{−k} u(t) = z^T θ, where z is the measured data and θ are the unknown parameters. This can be written as Y = ZΘ. For a filtered prediction error it holds that ε_f(t, θ) = L(q)ε(t, θ) = L(q)[y(t) − ŷ(t|t−1, θ)] (Ljung (2002)). If the predictor is time-invariant and linear in the parameters and u(t), y(t) are scalars (i.e., a single-input single-output (SISO) system is considered), then the result of filtering ε is the same as filtering the input-output data first and then applying the predictor. Recall that the wavelet coefficients are evaluated as inner products of the time signal with even shifts of the wavelet filters. On ℓ2(Z_N), the inner product can be written as a vector multiplication. Then the equation Y = ZΘ can be extended by multiplying with the wavelet matrix T and a user-defined weighting matrix W as WTY = WTZΘ. It is important to realize that each wavelet coefficient bears information about a time interval of the same length as the wavelet filter; thus it is impossible to construct the matrices Y, Z from filtered data, because this would not keep the required time structure of the model. There are two limitations when implementing the wavelet matrix T: the length of the analyzed data (it is convenient but impractical to have data of length 2^p) and the data periodicity (a periodic extension of the vector is used for analysis of the whole length of data; however, because it e.g. adds high frequencies and disables recursive identification, it is not convenient to periodically extend the data).
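The extended equation WTY = WTZΘ can be sketched in a few lines of numpy. The example below uses a one-level Haar wavelet matrix and W = I; both choices, and the first-order test system, are illustrative assumptions, not the paper's setup. Since the Haar matrix is orthogonal and the data are noiseless, the filtered least-squares solution recovers the true parameters:

```python
import numpy as np

def haar_matrix(N):
    """One-level N x N wavelet matrix T: rows are the even circular shifts
    of the Haar father (low-pass) and mother (high-pass) filters."""
    T = np.zeros((N, N))
    c = 1.0 / np.sqrt(2.0)
    for k in range(N // 2):
        T[k, 2 * k] = c
        T[k, (2 * k + 1) % N] = c            # approximation rows
        T[N // 2 + k, 2 * k] = c
        T[N // 2 + k, (2 * k + 1) % N] = -c  # detail rows
    return T

# Simulate a noiseless first-order ARX system: y(t) = -a1*y(t-1) + b1*u(t-1)
a1, b1 = -0.8, 2.0
rng = np.random.default_rng(1)
u = rng.standard_normal(65)
y = np.zeros(65)
for t in range(1, 65):
    y[t] = -a1 * y[t - 1] + b1 * u[t - 1]

# Regression form Y = Z @ theta with theta = (a1, b1)
Y = y[1:]                                   # length 64 (even)
Z = np.column_stack([-y[:-1], u[:-1]])

# Wavelet-filtered least squares: solve W T Y = W T Z theta with W = I
T = haar_matrix(len(Y))
theta, *_ = np.linalg.lstsq(T @ Z, T @ Y, rcond=None)
```

With a non-trivial W, the same solve weights the different frequency bands of the data, which is the mechanism the paper exploits for band-selective identification.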
There are two possible points of view on the basic principle of wavelet analysis: a) both the approximations and the details of the analyzed data are kept at length N (upsampling operator) and scaled wavelet filters are used; b) the lengths of both approximations and details are decreased at each level and the analysis is always performed with the same basic filters φ, ψ. Because in practice the analyzed data need not be of length 2^p for any p ∈ N, the latter approach is more accurate. Let L be the length of the wavelet filters and S their basic shift. Then the analyzed data length N_j at the jth level can be written recursively as N_{j+1} = 1 + ⌊(N_j − L)/S⌋ as long as the data are long enough for analysis at the next level, where ⌊o⌋ denotes the integer part of o. The number of iterations is then the maximum level p of the wavelet analysis. With the knowledge of p and the individual lengths N_j, j = 1, …, p, the wavelet matrix T can be computed successively. At the 1st level, the analysis is done by the matrix T_1, which contains the even shifts of the wavelet filters in its rows. Because the wavelet filter of length L can be shorter than the data length N_1, the wavelet filters have to be padded with zeros. The higher levels of the analysis are characterized by the matrix multiplication T_j = [ I 0 ; 0 T̃_j ], with T̃_j = [ T_{j,D} ; T_{j,A} ] = [ {R_{2k}ψ} ; {R_{2k}φ} ], k = 0, …, N_{j+1} − 1, where T_{j,D}, T_{j,A} are the submatrices of the mother and father wavelets, respectively. The matrix T_1 is defined as T̃_j for j = 1. Let us now briefly introduce some specific frequency properties of the wavelet filters. In Section 2 the halving of the frequency range was mentioned. The frequency characteristics of both types of wavelet filters cover the full range of frequencies; however, only the half with the major influence may be considered. Let us call the "main interval" of a filter the frequency range where the filter has the major influence. An overview of the main intervals at distinct levels is given in the following table.

Table 1. Table of wavelet filters and their main intervals at distinct levels.

Analysis                  | Main interval [f_min, f_max]  | Frequency-domain filter
jth level details         | [f_s/2^{j+1}, f_s/2^j]        | ψ̂(m/2^j) · ∏_{i=1}^{j−1} φ̂(m/2^i)
jth level approximations  | [0, f_s/2^{j+1}]              | ∏_{i=1}^{j} φ̂(m/2^i)
The weighting matrix W is a user-defined diagonal matrix. Because of the overlapping of the wavelet filters in the frequency domain, the filters from Table 1 do not have unit gain at any frequency; this can be compensated by the weighting matrix W_1 computed from the vector of weights V_1 = ( 1/max_m ŵ_1(m), …, 1/max_m ŵ_{p+1}(m) ), where ŵ_i, i = 1, …, p+1 are the filters from Table 1. Moreover, the user-defined weights of the filters should be chosen such that no filter outweighs any other. In consequence, there is a bound on the weights for each particular wavelet filter family (Frazier (1999); Kolzow (1994)).
3.1 Asymptotic properties: Convergence and consistency
Let Θ* denote the model parameter vector which is the best theoretical solution of the identification problem as defined in Ljung (2002), and let Θ̂_N denote the solution computed from measured data. Then the following holds for the convergence and consistency of the ARX model: a) Convergence: lim_{N→∞} θ̂_N = θ* + lim_{N→∞} E{(Z^T Z)^{−1} Z^T e}. This limit holds for wavelet filters as well. b) Consistency: In the open-loop case, the variance of the frequency-function estimate at a certain frequency ω can be written as Var G(ω, θ̂_N) ≈ (n/N) Φ_v(ω)/Φ_u(ω), with v(t) = H(q, θ̂_N)e(t) being filtered white noise (see Zhu (2001)). Then, after straightforward modifications, Var G(m, θ̂_N) ≈ (n/N) (Φ_v(m)/Φ_u(m)) Σ_{j=1}^{p+1} |1/(V(j) ŵ_j(m))|^2, where V(j) is the normalized weight for the jth-level analysis, i.e. the jth element of the vector of weights from which the matrix W is constructed. In the case of no additional weighting, this sum equals 1 for all frequencies and the results of SID with and without wavelet filtering are similar (Figure 1).
4. Case study
The proposed algorithm was implemented and tested on a system with the transfer function G = (s+100)(s+1) / [100(s+10)(s+0.1)], discretized with T_s = 0.1 s. The frequency characteristic divides well into slow and fast parts. The results of parameter estimation without any additional filter weights are depicted in Figure 1/left. The maximum possible analysis level was used. We can see that the ARX and the wavelet-filtered ARX models (WAV model) provide similar results. With a suitable selection of the filter weights, we can choose the frequencies of interest and identify the system in this frequency range. To get credible results it is suitable to compare ARX with wavelet filtering against ARX with prefiltering by a filter F. This has been chosen as F_low = 16/(s+2)^4 and F_high = s^4/(s+2)^4 for low and high frequencies; both discretized with T_s. The results are depicted in Figure 1/middle and right for the slow and fast subsystems. Both results are similar.

Figure 1. Results from identification procedure. (Three columns: all frequencies, low frequencies, high frequencies. Rows show the output response vs. discrete time, error histograms for the ARX and WAV models, and magnitude (dB) vs. frequency for the original system, the prefiltered ARX model, the WAV model and the prefilter for ARX.)
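The case-study prefilters F_low = 16/(s+2)^4 and F_high = s^4/(s+2)^4 can be checked numerically. A small numpy sketch evaluating their magnitude responses on the imaginary axis (the frequency grid is an arbitrary choice):

```python
import numpy as np

w = np.logspace(-3, 3, 400)       # frequency grid in rad/s
s = 1j * w
Flow = 16 / (s + 2) ** 4          # low-pass prefilter from the case study
Fhigh = s ** 4 / (s + 2) ** 4     # high-pass prefilter

# F_low has unit gain at DC, F_high has unit gain at high frequency,
# and the two magnitude responses cross at |s| = 2 rad/s, between the
# slow (0.1 rad/s) and fast (10 rad/s) poles of G.
```

This confirms that the pair splits the spectrum between the slow and fast subsystems of G, which is what the comparison in Figure 1 relies on.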
Now the question arises why one should prefilter data by wavelets. Wavelet filters have advantages over filters designed in the classical way: firstly, they have a simple structure in the frequency domain, and secondly, they complement each other in the frequency domain. This is a big advantage in problems where the frequency characteristics of the system are not known in advance; satisfactory results can then be acquired by tuning the weights only.
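The complementarity property can be verified for, e.g., the Haar filter pair (an illustrative choice; the paper does not commit to a family here): the squared magnitude responses of the father and mother filters sum to a constant at every frequency, so nothing in the spectrum is lost between the two bands.

```python
import numpy as np

N = 64
phi = np.zeros(N)
phi[0] = phi[1] = 1 / np.sqrt(2)          # Haar father filter (low-pass)
psi = np.zeros(N)
psi[0], psi[1] = 1 / np.sqrt(2), -1 / np.sqrt(2)  # mother filter (high-pass)

Phi, Psi = np.fft.fft(phi), np.fft.fft(psi)
# |Phi(m)|^2 = 1 + cos(2*pi*m/N), |Psi(m)|^2 = 1 - cos(2*pi*m/N),
# so the total power is exactly 2 at every frequency bin:
power = np.abs(Phi) ** 2 + np.abs(Psi) ** 2
```

Designing a pair of classical filters with this exact complementarity by hand is considerably harder, which is the point being made in the text.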
5. Conclusion
5.1 Possible extensions
1. Multi-input single-output (MISO) system identification: The ARX model structure (matrices Y, Z) can simply be expanded for MISO systems. This requires no change in the matrices T, W. The only problem is identifiability due to collinearity in the data.
2. Thresholding of the wavelet analysis: This means nullifying the wavelet coefficients lower than some threshold ε_t ∈ R. The threshold is the lower limit for the considered portion of the particular frequency range in the original signal. Globally, it can lead to more accurate numerical results. The price is a loss of some input-output data information.
3. Keeping the wavelet analysis coefficients at lower levels: To increase the number of equations, the approximations at each level can be doubled; one half is kept and the other is used for the analysis at the next level. This expands the number of equations, i.e. improves the result (at the price of higher computational demands).
4. Recursive identification: There is one inherent difference from the recursive LS solution (Ljung and Ljung (1985); Engel et al. (2004)), namely that the minimum length of the newly measured data has to be greater than the length of the shift of the wavelet filter at the lowest used level. The predictor can then be extended as [Z; T_new Z_new] Θ = [Y; T_new Y_new], where T_new is the matrix of all new possible shifts of the wavelet filters.
5.2 Concluding remarks
The proposed algorithm presents a way to use the wavelet transform as a tool for data (pre)filtering in the field of model identification. The method enables us to identify the slow or fast subsystems of a singularly perturbed system, as well as to perform reduced-order model identification for control. However, the quality of identification is sensitive to the selection of the wavelet family and the analysis level.
6. Acknowledgements
This work has been supported by the state budget of the Czech Republic, through the Ministry of Industry and Commerce, within the scope of grant No. FR-TI1/517, "Control systems for energy consumption optimization in low-energy and passive houses".
References
Carrier, J., Stephanopoulos, G., 1998. Wavelet-based modulation in control-relevant process identification. AIChE Journal, report from MIT.
Engel, Y., Mannor, S., Meir, R., 2004. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing 52 (8), 2275-2285.
Frazier, M., 1999. An Introduction to Wavelets Through Linear Algebra. Springer Verlag.
Ghanem, R., Romeo, F., 2001. A wavelet-based approach for model and parameter identification of non-linear systems. International Journal of Non-Linear Mechanics 36 (5), 835-859.
Ho, K., Blunt, S., 2003. Adaptive sparse system identification using wavelets. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 49 (10), 656-667.
Kijewski, T., Kareem, A., 2003. Wavelet transform for system identification in civil engineering. Report from the Department of Civil Engineering and Geological Sciences, University of Notre Dame.
Kolzow, D., 1994. Wavelets. A tutorial and a bibliography. Rendiconti dell'Istituto di Matematica dell'Università di Trieste, 49.
Ljung, L., 2002. Prediction error estimation methods. Circuits, Systems, and Signal Processing 21 (1), 11-21.
Ljung, S., Ljung, L., 1985. Error propagation properties of recursive least-squares adaptation algorithms. Automatica 21 (2), 157-167.
Preisig, H., 2010. Parameter estimation using multi-wavelets. Computer Aided Chemical Engineering 28, 367-372.
Preisig, H., Rippin, D., 1993a. Theory and application of the modulating function method - I. Review and theory of the method and theory of the spline-type modulating functions method. Computers & Chemical Engineering 17, 1-16.
Preisig, H., Rippin, D., 1993b. Theory and application of the modulating function method - II. Algebraic representation of Maletinsky's spline-type modulating functions. Computers & Chemical Engineering 17 (1), 17-28.
Preisig, H., Rippin, D., 1993c. Theory and application of the modulating function method - III. Application to industrial process, a well-stirred tank reactor. Computers & Chemical Engineering 17 (1), 29-39.
Ruzzene, M., Fasana, A., Garibaldi, L., Piombo, B., 1997. Natural frequencies and dampings identification using wavelet transform: application to real data. Mechanical Systems and Signal Processing 11 (2), 207-218.
Shi, H., Cai, Y., Qiu, Z., 2005. Improved system identification approach using wavelet networks. Journal of Shanghai University (English Edition) 9 (2), 159-163.
Staszewski, W., 1998. Identification of non-linear systems using multi-scale ridges and skeletons of the wavelet transform. Journal of Sound and Vibration 214 (4), 639-658.
Sureshbabu, N., Farrell, J., 2002. Wavelet-based system identification for nonlinear control. IEEE Transactions on Automatic Control 44 (2), 412-417.
Tsatsanis, M., Giannakis, G., 2002. Time-varying system identification and model validation using wavelets. IEEE Transactions on Signal Processing 41 (12), 3512-3523.
Wei, H., Billings, S., 2002. Identification of time-varying systems using multiresolution wavelet models. International Journal of Systems Science 33 (15), 1217-1228.
Zhu, Y., 2001. Multivariable System Identification for Process Control. Elsevier.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Robust Reallocation and Upgrade of Sensor Networks for Fault Diagnosis Suryanarayana Kolluri and Mani Bhushan Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai-400076, India
Abstract We propose a reallocation and upgrade strategy for improving an existing sensor network design from a fault-diagnostic perspective. Following the work of Bhushan et al. (2008) for base case robust design, we perform reallocation and upgrade of an existing network such that, to the extent possible, the resulting sensor network is robust to variations in the uncertain fault occurrence and sensor failure probabilities. Robustness to modeling errors is also incorporated by considering distributed networks. The resulting formulations are applied to the Tennessee Eastman (TE) problem. Keywords: Reallocation, Upgrade, Robust Design, Fault Diagnosis, Optimization.
1. Introduction Several approaches have been presented in literature for designing sensor networks (choosing variables to be measured along with the number of sensors on each variable) that can ensure reliable fault diagnosis. Bhushan et al. (2008) considered the unreliability of fault detection as a criterion for choosing optimal networks. They required an underlying cause-effect model to predict the effect of different faults on process variables and information about fault occurrence and sensor failure probabilities, to compute the unreliability of detection of any fault. The resulting problems typically had multiple optimal solutions. Bhushan et al. (2008) exploited this multiple solutions feature, to further select networks (amongst these multiple solutions) which were robust to uncertainties in fault occurrence and sensor failure probability data as well as uncertainties in the underlying cause-effect information. Theirs and most other approaches in literature focus on base case design scenarios where the problem is to design a sensor network from scratch. Since the operating point in a process plant changes over time and most processes already have (non-optimal) sensor networks, it is necessary to have comprehensive reallocation and upgrade strategies for improving an existing sensor network design. The aim in this work is to propose such strategies for obtaining robust sensor networks which are optimal from a fault diagnosis perspective.
2. Related Previous Work
The proposed formulations are based on the concept of the unreliability of detection of a fault, which is defined as the probability of the fault occurring and remaining undetected due to simultaneous failure of all sensors affected by that fault. For the ith fault, it is given as (Bhushan et al., 2008):

U_i = f_i ∏_{j=1}^{n} s_j^{B_ij x_j}    (1)
The overall system unreliability U is defined to be the maximum across all faults
(Bhushan et al., 2008). However, as seen in equation (1), the unreliability depends on the fault occurrence probabilities (f_i) and sensor failure probabilities (s_j) as well as the fault-variable bipartite matrix B (obtained from a cause-effect process model). In case some of the probability data is only approximately known, a sensor network design which is optimal with respect to the nominal values may not give the optimal system unreliability when some of the approximately known values increase. To alleviate this problem, Bhushan et al. (2008) utilized the fact that there are typically several sensor networks which give optimal system unreliability when nominal values are used. They then considered a second objective function (in a lexicographic sense) which corresponded to minimizing the unreliabilities of detection of faults whose unreliability values depend on approximately known data. This way, even if some of the approximately known values were to change, the overall system unreliability is not affected (or is less affected). Robustness to uncertainties in the process model was considered by incorporating network distribution (the number of variables measured) as another objective, with the idea that the more variables are measured, the smaller the chances of missing a fault due to inaccurate fault-effect modeling. Bhushan et al. (2008) presented their approach for the base case design formulation, while Bhushan et al. (2003) considered upgrade and reallocation but did not consider robustness issues. Bagajewicz and Sanchez (2000) have also presented upgrade and reallocation formulations but considered optimal variable estimation related objectives.
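Equation (1) is cheap to evaluate in log form, which is also how it enters the linear formulations below. A small numpy sketch on hypothetical numbers (the fault and sensor data here are invented for illustration, not taken from the TE case study):

```python
import numpy as np

def log_unreliability(f, s, B, x):
    """log10 U_i = log10 f_i + sum_j B_ij * x_j * log10 s_j (eq. 1 in
    log form); the system unreliability is the maximum over all faults."""
    logU = np.log10(f) + B @ (x * np.log10(s))
    return logU, logU.max()

# hypothetical 2-fault, 3-variable example
f = np.array([1e-2, 1e-3])          # fault occurrence probabilities
s = np.array([1e-1, 1e-2, 1e-1])    # sensor failure probabilities
B = np.array([[1, 1, 0],            # fault 1 affects variables 1, 2
              [0, 1, 1]])           # fault 2 affects variables 2, 3
x = np.array([1, 1, 0])             # one sensor on variables 1 and 2

logU, U = log_unreliability(f, s, B, x)
```

Working in log10 turns the product of probabilities into a sum, so the "maximum across all faults" objective becomes linear in the sensor counts x_j.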
3. Robust Reallocation and Upgrade of Sensor Networks
In the current work, we propose a reallocation and upgrade strategy to modify an existing sensor network so as to minimize the overall system unreliability while incorporating robustness to uncertain probability data and underlying cause-effect models. Accordingly, we consider three situations based on: (i) uncertainties only in some fault occurrence probabilities, (ii) uncertainties only in some sensor failure probabilities, and (iii) either of scenarios (i) or (ii) along with uncertainties in the underlying cause-effect models. In the formulations below, only the salient features of the robustness aspects are explained; further details can be obtained from Bhushan et al. (2008), where the base case design formulations are presented. All the resulting formulations are mixed integer linear programs.
3.1 Robustness to available probability data
It is assumed that for some faults the occurrence probabilities are exactly known, while for certain other faults only approximate values are available. The proposed reallocation and upgrade formulation is:
Formulation I: Robustness to Inaccurate Fault Occurrence Probability Data

min_{x_j} [α_1 U − α_2 Φ_f − x_s]    (2)

Subject to:
Σ_{j=1}^{n} c_j q_j + Σ_{t∈M_t} Σ_{r∈M_r} h_{t,r} u_{t,r} + x_s = C*    (3)
U ≥ log(U_i), i ∈ I \ I_f    (4)
U ≥ log(U_i) + Φ_{f_i}, i ∈ I_f    (5)
Φ_f ≤ Φ_f*    (6)
Φ_f ≤ Φ_{f_i} + Φ_f* y_i;  P y_i ≥ Φ_{f_i} − Φ_{f_i}*;  P(y_i − 1) ≤ Φ_{f_i} − Φ_{f_i}*;  i ∈ I_f    (7)
x_j = q_j + x_j* − Σ_{r∈M_r} u_{j,r},  j ∈ M_t \ M_r    (8)
x_j = q_j + x_j* + Σ_{t∈M_t} u_{t,j},  j ∈ M_r \ M_t    (9)
x_j = q_j + x_j* + Σ_{t∈M_t} u_{t,j} − Σ_{r∈M_r} u_{j,r},  j ∈ M_r ∩ M_t    (10)
x_j = q_j + x_j*,  j ∉ M_r ∪ M_t    (11)
Σ_{r∈M_r} u_{j,r} ≤ x_j*,  j ∈ M_t    (12)
x_j, q_j, u_{t,r} ∈ Z_+;  U, U_i ∈ R;  x_s, Φ_f, Φ_{f_i} ∈ R_+;  y_i ∈ {0,1}    (13)
In formulation I, the primary objective U is the system unreliability based on nominal values. The secondary objective Φ_f indicates the robustness of the primary objective to uncertain fault occurrence probabilities; it is defined as the minimum of the robustness values in the unreliabilities of the individual uncertain faults (constraints 7). The third objective x_s is the cost saved while performing upgrade and reallocation, and is required since networks of different cost may yield the same U and Φ_f values. Further details can be obtained from Bhushan et al. (2008). Constraints 8-11 are the reallocation and upgrade constraints which keep track of the number of sensors (x_j) on the jth variable: the number of sensors is a combination of the existing sensors x_j* (if any), plus new sensors q_j (upgrade, if any), plus (minus) the additional sensors gained (lost) due to reallocation. Constraint 12 ensures that the number of sensors reallocated from a variable to other variables cannot be more than the number of existing sensors on that variable. The cost constraint (3) takes into consideration the cost of upgrade and reallocation.
3.2 Robustness to Inaccurate Sensor Failure Probability Data
We now assume that some sensor failure probabilities are approximately known, while all fault occurrence and remaining sensor failure probabilities are exactly known. The formulation is similar to the earlier scenario, the only difference being in the calculation of the maximum meaningful robustness required for each fault.
Formulation II: Robustness to Inaccurate Sensor Failure Probability Data

min_{x_j} [λ_1 U − λ_2 Φ_s − x_s]    (14)

Subject to:
U ≥ log(U_i), i ∈ I \ I_s    (15)
U ≥ log(U_i) + Φ_{s_i};  Φ_{s_i}* = −Σ_{j∈J_s} B_ij (log s_j) x_j;  i ∈ I_s    (16)

along with constraints (3) and (6)-(13) of formulation I with Φ_f replaced by Φ_s. In formulation II, constraint 16 is written for faults affecting variables to be measured by inaccurate sensors, and Φ_{s_i}* is the maximum meaningful value of the slack for the individual fault unreliability-of-detection values, which depends on the chosen sensors.
3.3 Robustness to Modeling Errors
It is now assumed that, apart from some uncertain fault occurrence or sensor failure probabilities, there are uncertainties in the fault-variable bipartite matrix B. This is due to errors present in the underlying models used to predict the effect of faults on different variables. In order to incorporate robustness to these errors, network distribution is considered as an additional objective.
Formulation III: Network Distribution and Uncertain Probability Data

min_{x_j} [β_1 U − β_2 Φ − β_3 N − x_s]    (17)

Subject to the constraints of formulations I or II, depending on the case, and

N = Σ_{k=1}^{n} n_k;  n_j ≤ x_j, n_j ∈ {0,1}, j = 1, 2, …, n    (18)
Constraints 18 coupled with maximization of N in the objective function ensure that N is the number of different variables measured in the process (irrespective of hardware redundancy).
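The selection logic behind these formulations can be illustrated on a toy instance by brute force. The sketch below is a hypothetical example (upgrade only, no reallocation and no robustness objectives, all data invented), minimizing the worst-case log10 unreliability under a budget, i.e. the min-max core of formulation I:

```python
from itertools import product
import math

# hypothetical toy data: 2 faults, 3 measurable variables
f = [1e-2, 1e-3]              # fault occurrence probabilities
s = [1e-1, 1e-2, 1e-1]        # sensor failure probabilities
B = [[1, 1, 0], [0, 1, 1]]    # fault-variable bipartite matrix
c = [10, 20, 15]              # cost of one sensor per variable
Cstar = 40                    # available budget

def system_unreliability(x):
    """max_i log10 U_i for sensor allocation x (eq. 1 in log form)."""
    return max(math.log10(fi) + sum(Bij * xj * math.log10(sj)
                                    for Bij, sj, xj in zip(Bi, s, x))
               for fi, Bi in zip(f, B))

# enumerate all allocations with 0..2 sensors per variable under the budget
best = min((system_unreliability(x), x)
           for x in product(range(3), repeat=3)
           if sum(ci * xi for ci, xi in zip(c, x)) <= Cstar)
```

On a real instance the search space is far too large for enumeration, which is why the paper casts the problem as a MILP and solves it with CPLEX.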
4. Case Study: Tennessee Eastman Process
The proposed formulations are applied to the TE process (Downs and Vogel, 1993). In this process 50 measurable variables and 33 faults are considered (Bhushan et al., 2008). The data corresponding to fault occurrence and sensor failure probabilities, sensor costs and the matrix B are taken from Bhushan et al. (2008). The existing sensors are assumed to be located on variables [1-10, 22-29, 35, 36, 45, 47, 49]. The cost for reallocation of sensors as assumed in the present work is: 10 units for ([1,2], [1,3], [2,1], [2,3], [3,1], [3,2]), 60 units for [4,13] and 160 units for ([5,22], [6,23], [7,24], [8,25], [9,26], [10,27], [22,5], [23,6], [24,7], [25,8], [26,9], [27,10]). In this notation, a pair [i,j] means that transferring a sensor from variable i to variable j is allowed at the specified cost. For the existing sensors the objective function values are U = -2, Φ_f = 2, Φ_s = 0, N = 23, with cost used = 13650. The formulations are solved using CPLEX and the results are as follows. Cases I and III: Uncertainties in the occurrence probabilities of faults 1 and 9 are considered, with (case III) and without (case I) N in the objective function. The results are presented in Table 1, where for the various cases and C* values the objective function values (column 3) and the decision variables, i.e. sensor reallocations (column 4, pairs indicating the from and to variables) and upgrades (new sensors, column 5), are reported. The term i(j) in column 5 means that j new sensors are selected on variable i (hardware redundancy). For C* = 3000, the same objectives are obtained for both cases I and III. However, compared to the existing sensors, it is found that by placing additional sensors and reallocating some sensors at a small cost (2940 units), the system unreliability can be significantly improved (-6 compared to -2 earlier). For C* = 5000, U can be decreased even more, as expected. Further, case III has a higher N than case I, thereby leading to more robustness to modeling errors.
Cases II and IV: Uncertainties in failure probabilities of sensors 3 and 4 are considered with (case IV) and without (case II) N in the objective function: The results are presented in Table 2 and are similar to results listed in Table 1. Once again significant improvement in the existing values of U and s by incurring minor additional cost can be noticed.
5. Conclusions In this work, optimization formulations have been proposed for upgrading and reallocating an existing sensor network to ensure reliable fault detection and diagnosis in the presence of uncertainties in some of the underlying probability data as well as fault-variable models. The formulations have been applied to the TE process and result in significant improvement of the existing sensor network.
Table 1. Results for Uncertainty in Faults

Case | C*   | [U, Φf, N, xs]          | Reallocated                           | Upgraded Sensors
I    | 3000 | [-6, 2, 25, 60]         | [1,3], [22,5], [25,8], [26,9]         | 4, 13(2), 42(2), 43(2), 45, 46, 47, 48, 49, 50
III  | 3000 | Same as Case I, C*=3000 |                                       |
I    | 5000 | [-8, 2, 24, 1000]       | [1,3], [2,3], [22,5], [25,8], [26,9]  | 4(2), 13(2), 42(2), 43(2), 45(2), 46(2), 47(2), 48(2), 49(2), 50(2)
III  | 5000 | [-8, 2, 27, 320]        | [22,5], [25,8], [26,9]                | 3(2), 4(2), 13(2), 42(2), 43(2), 44, 45(2), 46(2), 47(2), 48(2), 49(2), 50(2)
Table 2. Results for Uncertainty in Sensors Case
II IV
C* 3000 3000
[U, s, N, xs] Reallocated Same as Case I, C*=3000 (Table 1) Same as Case I, C*=3000 (Table 1)
II
5000
[-8, 9, 24, 200]
[1,3], [2,3], [22,5], [25,8], [26,9]
IV
5000
[-8, 9, 26,20]
[22,5], [25,8],[26,9]
Upgraded Sensors
3(2),4(4),13(2),42(2), 43(2),45(2),46(2),47(2), 48(2),49(2),50(2) 3(4),4(4),13(2),42(2),43(2), 45(2),46(2),47(2),48(2), 49(2),50(2)
Notation
c_j, C*          : cost of putting a new sensor on variable j; total available cost
h_{t,r}          : cost of reallocating a sensor from variable t to r
I                : set of fault indices considered in the formulations
I_f, I_s         : sets of fault indices which affect inaccurate faults and sensors, respectively
J_s              : set of indices of inaccurate sensors
M_t              : set of variables whose sensors may be reallocated (to other variables)
M_r              : set of variables to which sensors can be reallocated (from other variables)
n                : number of measurable variables
n_j              : binary variable which is 1 if variable j is measured and 0 otherwise
u_{t,r}          : number of sensors moved from variable t to r
x_j              : number of sensors measuring variable j after upgrade and reallocation
α, β, λ          : constants used for lexicographic optimization
Φ, Φ*            : overall robustness and its maximum meaningful value
Φ_{f_i}, Φ_{s_i} : robustness for inaccurate fault i for formulations I and II
References
M. Bagajewicz and M. Sanchez, 2000, Reallocation and Upgrade of Instrumentation in Process Plants, Computers and Chemical Engineering, 24(8), 1945-1959.
M. Bhushan, S. Narasimhan and R. Rengaswamy, 2003, Sensor Network Reallocation and Upgrade for Efficient Fault Diagnosis, Proceedings of FOCAPO 2003, Florida, USA, 443-446.
M. Bhushan, S. Narasimhan and R. Rengaswamy, 2008, Robust Sensor Network Design for Fault Diagnosis, Computers and Chemical Engineering, 32(4-5), 1067-1084.
J.J. Downs and E.F. Vogel, 1993, A Plant Wide Industrial Process Control Problem, Computers and Chemical Engineering, 17(3), 245-255.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Explicit/Multi-Parametric Model Predictive Control of a Solid Oxide Fuel Cell
Kostas Kouramas,a Petar S. Varbanov,b Michael C. Georgiadis,c Jiří J. Klemeš,b Efstratios N. Pistikopoulos a
a Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK
b Centre for Process Integration and Intensification CPI2, Research Institute for Chemical and Process Engineering, Faculty of Information Technology, University of Pannonia, Egyetem u. 10, 8200 Veszprém, Hungary
c Department of Engineering Informatics and Telecommunications, University of Western Macedonia, Kozani 50100, Greece
Abstract In this work we present a general framework for the design and validation of explicit/multi-parametric MPC controllers for Solid Oxide Fuel Cells (SOFC). The framework features four key steps comprising the development of a dynamic mathematical model of the process at hand, the development of a reduced order model of the process and the design of an explicit/multi-parametric MPC for online control. The framework is illustrated on a SOFC system. Keywords: Solid Oxide Fuel Cell, Explicit Model Predictive Control, Multi-Parametric Programming
1. Introduction Fuel Cell power systems have a high potential for serving many power generation applications, both stationary and mobile, due to their outstanding energy (electric) efficiency, fuel versatility and minimal environmental impact characteristics [2,7]. In particular, Solid Oxide Fuel Cell (SOFC) systems have emerged as one of the most commercially widespread technologies, together with Proton Exchange Membrane Fuel Cells (PEMFC), offering high energy efficiency and robust performance, including also hybrid CHP arrangements increasing further the overall fuel utilisation [8]. The SOFC operation is based on an exothermic electrochemical reaction taking place at elevated temperatures [2,3]. The complex physical, chemical and electrical operation of SOFCs gives rise to important modelling and control challenges [2,3]. The efficient and stable operation of SOFCs depends on the efficient control/regulation of the generated voltage/power in the presence of varying operating conditions and disturbances [2]. These disturbances are associated with the fluctuations of the electric load (current), which mainly correspond to current demand changes/fluctuations or failures in the network [3]. Recent research on the control of fuel cells (mainly PEM FC) [1,3,4,7] showed that modern advanced model-based control methods such as Linear Quadratic Regulation [7], Model Predictive Control (MPC) [1,4] and Explicit/Multi-Parametric MPC (mp-MPC) [1,3,4] are suitable for the voltage and temperature regulation of FC in the presence of disturbances and constraints. Based on our previous work on multiparametric programming and control of FC [3,5], we present a unified framework for the off-line design and validation of explicit/multi-parametric MPC controllers for FC,
K. Kouramas et al.
which we then apply to an SOFC system. A dynamic mathematical model of the SOFC system is presented and used for performing dynamic simulation studies, and a reduced-order model of the SOFC, suitable for the design of model-based controllers, is developed. An mp-MPC controller is then designed and validated off-line by implementing it directly on the dynamic model of the SOFC. The proposed framework is presented in detail in the following section.
2. Framework for explicit/multi-parametric MPC of fuel cells
The proposed framework for the design and validation of explicit/multi-parametric MPC controllers [5,6] is illustrated in Figure 1. The framework consists of four key steps: (i) development of a dynamic mathematical model of the fuel cell, used for detailed simulation and (design and operational) optimization studies; (ii) development of a reduced-order/approximating model suitable for control design; (iii) design of the explicit/multi-parametric MPC controller using off-line multi-parametric programming and control methods; and (iv) off-line validation/testing of the controller. All steps of this framework are performed off-line before any real implementation on the system takes place. Hence the controller can be fully tested and validated off-line in step (iv), thereby reducing the cost and time of testing as well as the risk of failure at the online implementation [5]. The four steps of this framework are discussed in detail in the following sections for the explicit/multi-parametric control design of an SOFC system.
2.1. SOFC mathematical model
The SOFC system under consideration is shown in Figure 2. The SOFC operates via the exothermic oxidation (see Figure 2) of the fuel (assumed to be hydrogen), with oxygen being the oxidant. The cell consists of a solid electrolyte (yttria-stabilized zirconia, Y2O3·ZrO2), an anode (where the fuel is supplied and the oxidation takes place) and a cathode (where the oxygen is supplied and the reduction to O2- takes place). The SOFC stack consists of 384 unit cells (Figure 2) of a rectangular configuration, connected in series; each unit cell is connected either to another unit cell or to the wiring of the load. The dynamic model of the SOFC was developed in [3] and its most important equations are shown in Table 1. The model is implemented in Matlab/Simulink (The MathWorks Inc., 2007) and was used to perform a set of open-loop dynamic simulation studies.
In these simulations the voltage and temperature of the SOFC are obtained for a set of varying operating conditions in which electric current step disturbances ΔI = 20, 40, 60 A are applied to the system at time 50 s. The results of these simulations are given in Figures 5 and 6.
2.2. Reduced-order model for the SOFC
Current model-based control methods, such as mp-MPC, cannot use detailed dynamic models such as the SOFC model in Table 1 and usually rely on linear input-output or state-space models [5,6]. Therefore, in the second step of the framework (Figure 1) a reduced-order state-space (SS) model of the SOFC process is obtained by performing system identification based on the input/output data of the open-loop simulations. Note that the SOFC voltage V and temperature T are the system outputs, $y = [V, T]^T$, and the H2 and O2 mass flowrates are the system inputs, $u = [F_{H_2}, F_{O_2}]^T$. The derived SS model is given by Eq. (1).
Explicit/multi-parametric MPC of a Solid Oxide Fuel Cell
Figure 1. Framework for mp-MPC.
Figure 2. SOFC system and unit cell configuration (anode, electrolyte and cathode with fuel and oxidant channels; anode reaction: H2 + O2- → H2O + 2e-; cathode reaction: O2 + 4e- → 2O2-).
Table 1. SOFC mathematical model (key equations).

Constitutive equations: $P_{H_2} V_{an} = n_{H_2} R T$, $\; P_{H_2O} V_{an} = n_{H_2O} R T$

Valve molar constants: $Q_{H_2} = K_{H_2} P_{H_2}$, $\; Q_{H_2O} = K_{H_2O} P_{H_2O}$, $\; Q_{O_2} = K_{O_2} P_{O_2}$

Dynamic molar balances: $\dfrac{d P_{H_2}}{dt} = \dfrac{RT}{V_{an}}\left(Q_{H_2}^{in} - Q_{H_2}^{out} - 2 K_r I\right)$, $\; \dfrac{d P_{H_2O}}{dt} = \dfrac{RT}{V_{an}}\left(Q_{H_2O}^{in} - Q_{H_2O}^{out} + Q_{H_2O}^{rxn}\right)$, $\; Q_{H_2O}^{rxn} = 2 K_r I$

Partial pressure: $P_{H_2}(s) = \dfrac{1/K_{H_2}}{1 + \tau_{H_2} s}\, Q_{H_2}^{in}$, $\; \tau_{H_2} = \dfrac{V_{an}}{K_{H_2} R T}$

Dimensionless heat balance: $\dfrac{\partial \Theta}{\partial \tau} = \dfrac{\lambda_{eff,x}}{\lambda_s} \dfrac{\partial^2 \Theta}{\partial \xi^2} + \dfrac{\lambda_{eff,y}}{\lambda_s} \dfrac{\partial^2 \Theta}{\partial \eta^2} + \dfrac{\lambda_{eff,z}}{\lambda_s} \dfrac{\partial^2 \Theta}{\partial \zeta^2} + F_0$, with dimensionless heat generation $F_0 = 0.72\, S_0^{1.1}$

Open circuit potential (Nernst): $V_{OCP} = N_0 \left( E_0 + \dfrac{RT}{2F} \ln \dfrac{P_{H_2}\, P_{O_2}^{0.5}}{P_{H_2O}} \right)$

Concentration and activation losses: $V_{con} = -\dfrac{RT}{n_a F} \ln\!\left(1 - \dfrac{i}{i_L}\right)$, $\; V_{act} = \dfrac{RT}{\alpha\, n_a F} \log \dfrac{i}{i_0}$, together with the Ohmic loss $V_{Ohmic}$

Closed circuit SOFC voltage: $V_{DC} = V_{OCP} - V_{con} - V_{act} - V_{Ohmic}$

The reduced-order SS model has the form

$x_{t+1} = A x_t + B u_t + d_t, \quad y_t = C x_t$   (1)

with the identified matrices

A = [  0.99839  -0.00506  -0.00054  -3.6269e-5
      -0.00512   0.98427  -0.00208   0.00026
      -0.00097   0.00298   0.99374   0.00553
       0.00162   0.00063   0.00396   0.99569 ]

B = [ -0.00106    0.00107
       0.00331    0.00331
       0.00072    0.00072
       8.6062e-5  8.6062e-5 ]

C = [  4953   1593.5  118.82  1.66
      18322   5891.4  47.346  0.66 ]

where d is the model mismatch. Note that the input-output data, and hence the SS model, correspond to a sampling time of 0.01 s. The Matlab system identification toolbox
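A state-space model of this form can be exercised with a simple simulation loop. The sketch below propagates x[t+1] = A x[t] + B u[t] and records y[t] = C x[t] over a 0.01 s sampling grid; the matrices are small illustrative values, not the identified SOFC matrices, which would be substituted in practice.

```python
import numpy as np

def simulate_ss(A, B, C, u_seq, x0):
    """Simulate x[t+1] = A x[t] + B u[t], y[t] = C x[t] for a given input sequence."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for u in u_seq:
        ys.append(C @ x)          # record the output before the state update
        x = A @ x + B @ np.asarray(u, dtype=float)
    return np.array(ys)

# Illustrative stable 2-state, 2-input, 2-output system (hypothetical values).
A = np.array([[0.95, 0.01],
              [0.00, 0.90]])
B = np.array([[0.1, 0.0],
              [0.0, 0.2]])
C = np.eye(2)

# Constant inlet flows held over 50 sampling intervals of 0.01 s each.
u_seq = [np.array([1.0, 0.5])] * 50
y = simulate_ss(A, B, C, u_seq, x0=[0.0, 0.0])
```

Replaying the recorded flowrate inputs through such a loop and comparing the outputs with the detailed model is exactly the mismatch check shown in Figure 3.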
was used for performing the system identification calculations. The modelling error between the dynamic mathematical model and the reduced-order model of the SOFC is shown in Figure 3.
2.3. Explicit/multi-parametric MPC design
An explicit/multi-parametric MPC controller is designed in the third step of the proposed framework. First, the following MPC formulation is considered:

$V_N^*(x) = \min_{u_t} \sum_{i=0}^{N} (r_{t+i} - y_{t+i})^T Q\, (r_{t+i} - y_{t+i}) + \sum_{i=0}^{M} u_{t+i}^T R\, u_{t+i}$

s.t. $x_{t+1} = A x_t + B u_t + d_t$, $\; y_t = C x_t$, $\; t = 0, 1, \ldots, N-1$   (2)

$\begin{bmatrix} 0 \\ 0 \end{bmatrix} \text{mol/s} \le u_t \le \begin{bmatrix} 6 \\ 6 \end{bmatrix} \text{mol/s}, \quad \begin{bmatrix} 200\ \text{Volt} \\ 1000\ \text{K} \end{bmatrix} \le y_t \le \begin{bmatrix} 350\ \text{Volt} \\ 1300\ \text{K} \end{bmatrix}, \quad t = 0, 1, \ldots, N$

where y is the output vector, u is the input vector, d is the vector of the model mismatch, N is the output horizon and M is the control horizon. The objective function minimizes the difference between the actual output values and the set point r, as well as the control effort, at all times. The input-output constraints correspond to the physical limitations of the inlet hydrogen and oxygen flowrates and the operational limits on the voltage and temperature (for example, voltages below 200 V are not useful for everyday operation). We assume N = 5, M = 2, Q = 100·I2 and R = 0.1·I2. The MPC optimization problem (2) is a multi-parametric Quadratic Program (mp-QP) with four optimization variables (u_t, u_{t+1}) and eight parameters $x = [x_t, V_t, T_t, V_{sp}, T_{sp}]^T$. The Parametric Optimization (POP) software (ParOS, 2007) was used to solve the mp-QP problem (2) and derive an explicit controller which consists of 154 critical regions and corresponding control laws, shown in Figure 4. The mathematical description of the controller is given by
$u_t = K_i x + c_i \quad \text{if} \quad x \in CR_i = \{ x \mid A_i x \le b_i \}, \quad i = 1, \ldots, 51$   (3)
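Online evaluation of a piecewise-affine law of this kind reduces to a point-location search: find the critical region whose inequalities contain the current parameter vector and apply the stored affine gain. A minimal sketch with two hypothetical one-dimensional regions (purely illustrative, not the actual SOFC controller):

```python
import numpy as np

def pwa_control(x, regions):
    """Explicit MPC evaluation: return u = K_i x + c_i for the first critical region
    CR_i = {x : A_i x <= b_i} that contains x, or None if no region matches."""
    for A_i, b_i, K_i, c_i in regions:
        if np.all(A_i @ x <= b_i + 1e-9):   # small tolerance on the region facets
            return K_i @ x + c_i
    return None

# Two toy regions: CR_1 = {x <= 0} with u = -x, and CR_2 = {x >= 0} with u = -2x + 1.
regions = [
    (np.array([[1.0]]),  np.array([0.0]), np.array([[-1.0]]), np.array([0.0])),
    (np.array([[-1.0]]), np.array([0.0]), np.array([[-2.0]]), np.array([1.0])),
]
u_neg = pwa_control(np.array([-2.0]), regions)   # lies in CR_1
u_pos = pwa_control(np.array([3.0]), regions)    # lies in CR_2
```

Because all optimization is done off-line, this lookup is the only computation left for the online controller, which is what makes mp-MPC attractive for fast sampling rates.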
2.4. Controller validation
The controller is implemented directly on the dynamic model (Table 1) of the SOFC and a set of closed-loop simulations is performed for varying load conditions. Step changes of magnitude ΔI = 20, 40, 60 A are considered for the current load. The results of these simulations are shown in Figures 5 and 6 together with the results of the open-loop dynamic simulations (Section 2.1). Comparing the open-loop and closed-loop simulations, we notice that although the open-loop voltage drops significantly for an increasing current load, in the closed-loop simulations the controller regulates and maintains the voltage at its desired value despite the varying disturbance. In addition, the temperature increase observed in the open-loop simulations is smaller in the closed-loop simulations. This is an important feature of the controller, since no cooling system is considered in this study; clearly, the inclusion of a cooling/heat-exchange system would further reduce the temperature excursions caused by the disturbances.
2.5. Concluding remarks
In this work we presented a framework for the design and validation of explicit/multi-parametric controllers for SOFC. The resulting controller regulates the SOFC voltage at its desired value despite the disturbances, while at the same time satisfying the system constraints.
2.6. Acknowledgements The financial support of the following projects is acknowledged: EPSRC (EP/E047017/1, EP/G059071/1), EU (DECADE IAPP project, PIAP-GA-2008-230659) and European Research Council (MOBILE ERC Advanced Grant No: 226462).
Figure 3. Model mismatch between SOFC dynamic and approximating models.
Figure 4. Critical regions of the explicit controller.
Figure 5. Open-loop and Closed-loop simulation of the SOFC voltage V.
Figure 6. Open-loop and Closed-loop simulations of the SOFC temperature T.
References
1. A. Arce, D.R. Ramirez, A.J. del Real, C. Bordons, 2007, Constrained Explicit Predictive Control Strategies for PEM Fuel Cell Systems, Proc. of the 46th IEEE Conf. Dec. Con., New Orleans, USA, 6088-6093.
2. M. Bavarian, M. Soroush, I.G. Kevrekidis and J.B. Benziger, 2010, Mathematical Modelling, Steady-State and Dynamic Behavior and Control of Fuel Cells: A Review, Industrial & Engineering Chemistry Research, 49, 17, 7922-7950.
3. D.I. Gerogiorgis, K.I. Kouramas, N. Bozinis and E.N. Pistikopoulos, 2006, AIChE Annual Meeting, San Francisco, California, USA, Session 455.
4. C. Panos, K. Kouramas, M.C. Georgiadis and E.N. Pistikopoulos, 2010, Modelling and Explicit MPC of PEM Fuel Cell Systems, Computer Aided Chemical Engineering, 28, 517-522.
5. E.N. Pistikopoulos, 2009, Perspectives in Multiparametric Programming and Explicit Model Predictive Control, AIChE Journal, 55, 8.
6. E.N. Pistikopoulos, M. Georgiadis, V. Dua (eds.), 2007, Multi-parametric Model-Based Control: Theory and Applications, Wiley-VCH, Weinheim, Germany.
7. J.T. Pukrushpan, A.G. Stefanopoulou and H. Peng, 2004, Control of Fuel Cell Power Systems, Advances in Industrial Control series, Springer, London, UK.
8. P. Varbanov, J. Klemeš, 2008, Analysis and Integration of Fuel Cell Combined Cycles for Development of Low-Carbon Energy Technologies, Energy, 33(10), 1508-1517.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
A Reformulation Scheme for Parameter Estimation of Hybrid Systems Ines Mynttinen and Pu Li Simulation and Optimal Processes Group, Institute of Automation and Systems Engineering, Technische Universität Ilmenau, 98693 Ilmenau, Germany
Abstract
We present a new reformulation scheme for parameter estimation of hybrid systems. A key step in this method is the introduction of a continuous switching variable which approximates the strict complementarity condition by means of a smoothed step function (SSF). The reformulation is implemented using discretization with collocation on finite elements, and the resulting optimization problem is solved by an NLP solver. The effectiveness of the proposed reformulation scheme is demonstrated for the parameter estimation of a three-tank model by comparison with the results of a penalization approach and a heuristic particle swarm optimization.
Keywords: parameter estimation, hybrid systems, reformulation methods
1. Introduction
Parameter estimation of dynamic models is an important issue in many fields of industrial research [1], since simulation and optimization are usually based on differential algebraic equations (DAE) involving a multitude of parameters. To calibrate these dynamic nonlinear models, the parameters have to be estimated based on measured data [1, 2]. Until now, continuous dynamic systems have been considered in most parameter estimation studies. However, in many fields of application such as chemical processes, power plants and transport vehicles, continuous and discrete state dynamics are strongly coupled. Such systems with mixed continuous and discrete dynamics are called hybrid systems. In simulation studies on hybrid systems, discrete transitions are handled through embedded logical statements [3]. For optimization tasks one can apply a heuristic (gradient-free) search algorithm, such as particle swarm optimization (PSO), which has recently been used for parameter estimation and optimization of a hybrid system [4, 5]. Another approach is to solve a high-dimensional optimization problem subject to the DAE system as constraints via a gradient-based method, which is much more efficient than a heuristic search method. However, NLP-based parameter estimation for hybrid systems is an extremely challenging task because, due to instantaneous switches of the system dynamics, the objective function, constraints and gradients can be non-smooth, divergent or discontinuous. As a consequence, constraint qualifications will be violated [6]. To overcome this difficulty, mixed-integer approaches [7, 8] and reformulation strategies have been proposed. Since the former can be computationally expensive for complex systems [8], in this study we focus on reformulation methods. Reformulation methods introduce additional variables to remove the non-smoothness from the problem while retaining the desired system features.
Reformulation strategies can be classified as relaxation [6] or penalization of incomplete switching (PICS) [6, 9]. In this study, we propose a new relaxation method making use of a smooth step function. Applied to the parameter estimation of a three-tank model, the performance, accuracy and robustness
of this reformulation method are studied and compared with those of a PICS approach and the PSO method.
2. Reformulation strategies for parameter estimation of hybrid systems
Parameter estimation aims at extracting the best values of the parameters determining the dynamics of the system under consideration, based on a series of measurements $x_{j\ell}^{(m)}$ of several state variables $x_j$, j = 1,...,M at different time points $t_\ell$, ℓ = 1,...,N. Due to measurement error, the estimated parameters are subject to some uncertainty. Assuming that the measurement error is uncorrelated and normally distributed with variance $\sigma_j^2$, the model parameters can be estimated by minimizing the weighted least-squares function

$J(p) = \sum_{j=1}^{M} \sum_{\ell=1}^{N} \frac{\left( x_j(p, t_\ell) - x_{j\ell}^{(m)} \right)^2}{\sigma_j^2}$   (1)
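The weighted least-squares objective of Eq. (1) is straightforward to evaluate once model and measured trajectories are available; the sketch below uses illustrative numbers for two states and three sampling times.

```python
import numpy as np

def weighted_lsq(x_model, x_meas, sigma):
    """Weighted least-squares objective of Eq. (1):
    J = sum_j sum_l (x_j(p, t_l) - x_jl_meas)^2 / sigma_j^2,
    with x_model and x_meas of shape (M, N) and sigma a length-M vector of stddevs."""
    resid_sq = (np.asarray(x_model) - np.asarray(x_meas)) ** 2
    return float(np.sum(resid_sq / np.asarray(sigma)[:, None] ** 2))

# Illustrative numbers: M = 2 states, N = 3 sampling times (hypothetical data).
x_model = np.array([[1.0, 2.0, 3.0],
                    [0.5, 0.6, 0.7]])
x_meas  = np.array([[1.1, 1.9, 3.0],
                    [0.5, 0.8, 0.7]])
J = weighted_lsq(x_model, x_meas, sigma=np.array([0.1, 0.2]))
```

In the estimation loop, an optimizer varies p, re-simulates x_model(p) and re-evaluates this objective.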
subject to the DAE system as equality constraints and variable bounds as inequality constraints. In this study we consider parameter estimation problems for hybrid systems, i.e. a mode transition takes place when the state variables meet certain switching conditions. This leads to a hybrid nonlinear dynamic optimization problem. For a convenient description, we consider a general binary hybrid system with autonomous transitions controlled by the transition condition s(x,p), and the optimization problem is formulated as

$\min_p J(x, p)$
s.t. (mode 1): $\dot{x} = f^{(1)}(x, p)$, $\; s(x, p) \ge 0$   (2)
(mode 2): $\dot{x} = f^{(2)}(x, p)$, $\; s(x, p) < 0$

with states x = x(t) and parameters p. Solving this problem directly, the instantaneous transition between the modes leads to the violation of the constraint qualifications mentioned above and possibly to a failure of the NLP solver. Thus, we reformulate the equality constraints by introducing a time-dependent switching variable $0 \le \sigma(t) \le 1$, which connects both operating modes, resulting in the mixed but continuous dynamics

$\dot{x} = \sigma f^{(1)}(x, p) + (1 - \sigma) f^{(2)}(x, p)$.   (3)

To describe the transition behavior, the switching variable needs to be forced to meaningful values, i.e. σ = 1 for mode 1 and σ = 0 for mode 2. For this purpose, we propose to use a smooth step function (SSF) to represent this variable. To be specific, we employ the Fermi-Dirac function

$\sigma(s) = \frac{1}{1 + \exp(-\tau s)}$   (4)

where a non-negative relaxation parameter τ is introduced to regulate the steepness of the smooth step. In this way the hybrid optimization problem is relaxed to a continuous optimization problem which can be solved by available dynamic optimization approaches. As in other relaxation methods, the complementarity is not strictly fulfilled. Instead, a sequence of relaxed problems with increasing τ can be solved to approach the solution of the original problem as $\tau_n \to \infty$. For comparison, we also consider the penalization of incomplete switching (PICS) approach, where the complementarity is ensured by an inner optimization combined with penalization of the constraint violation [9]. The inner optimization problem is defined as

$\min_{0 \le \sigma \le 1} \; (-s\, \sigma)$   (5)

which is converted into the corresponding Karush-Kuhn-Tucker conditions $-s - \lambda_0 + \lambda_1 = 0$ with complementarity constraints $\lambda_0 \sigma = 0$, $\lambda_1 (1 - \sigma) = 0$ and non-negative Lagrange multipliers $\lambda_0, \lambda_1 \ge 0$. The violation of the complementarity constraints is penalized in the outer optimization with the objective function
$\min_p \; J(x, p) + \rho \int_{t_0}^{t_f} \big( \lambda_0 \sigma + \lambda_1 (1 - \sigma) \big)\, dt$   (6)

where the penalty is weighted by a time-independent parameter ρ. If ρ is greater than a certain value, the solution of this optimization problem is exact, since all stationary points of the original problem are local minimizers of the penalized version [6]. After a time discretization (e.g. collocation on finite elements), both SSF and PICS lead to a smooth NLP problem which can be solved by any of the available NLP solvers. Solving problem (2) by PSO, the objective function Eq. (1) is evaluated repeatedly for the so-called particles of a swarm, where each complete parameter set p represents a particle position. This position is adapted using the particle velocity, which is based on information from previous simulation cycles. For suitable tuning parameters, PSO provides a proper balance between parameter-space exploration at the early stage of the search and good precision of the final results [5].
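The SSF relaxation of Eqs. (3)-(4) can be sketched numerically: the Fermi-Dirac function blends the two mode right-hand sides into one continuous vector field, which an ordinary integrator can then handle. The two-mode dynamics and parameter values below are hypothetical stand-ins, not the three-tank model.

```python
import math

def sigma(s, tau):
    """Fermi-Dirac smoothed step of Eq. (4): -> 1 for s >> 0 (mode 1), -> 0 for s << 0."""
    return 1.0 / (1.0 + math.exp(-tau * s))

def blended_rhs(x, p, f1, f2, s, tau):
    """Relaxed hybrid dynamics of Eq. (3): x' = sigma*f1 + (1 - sigma)*f2."""
    w = sigma(s(x, p), tau)
    return [w * a + (1.0 - w) * b for a, b in zip(f1(x, p), f2(x, p))]

# Toy two-level example: the flow reverses sign when x[0] - x[1] crosses zero,
# and the smoothed model blends the two flow directions continuously.
f1 = lambda x, p: [-p, +p]     # mode 1: level 1 above level 2, flow 1 -> 2
f2 = lambda x, p: [+p, -p]     # mode 2: level 1 below level 2, reversed flow
s  = lambda x, p: x[0] - x[1]  # transition condition

x = [1.0, 0.0]
for _ in range(2000):          # explicit Euler with dt = 0.01
    dx = blended_rhs(x, 0.5, f1, f2, s, tau=50.0)
    x = [xi + 0.01 * di for xi, di in zip(x, dx)]
```

Because the blended field is zero exactly where the levels are equal, the trajectory settles at the equalized state without the non-smooth sign switch ever appearing to the integrator; increasing tau sharpens the switch toward the original hybrid behavior.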
3. Parameter estimation for a three-tank system
In order to examine the performance of our reformulation method, we consider a tank system similar to that used in [8, 10], since it is simple enough to understand its behavior intuitively but exhibits non-trivial hybrid properties. The system consists of three tanks in a row connected to each other (Fig. 1). There are inflows $Q_{zi}$, i = 1, 3 to the left

$Q_{ij} = A_{ij}\, \mathrm{sign}(s_{ij}) \sqrt{2 g\, |s_{ij}|}, \quad s_{ij} = h_i - h_j, \quad (i, j) \in \{(1, 2), (2, 3)\}$

                 exact   SSF    PICS   PSO
A12 [10^-5 m^2]   6.0    6.51   6.06   6.28
A23 [10^-5 m^2]   2.0    2.29   2.98   2.30

Fig. 1. Three-tank system, Torricelli's law and optimal parameter values.
and the right tank. The dynamics of the tank levels $h_i$, i = 1, 2, 3 is given by the mass balances of the tanks. The outflows $Q_{Li}$, $Q_3$ and the flows between the tanks are modeled by Torricelli's law (see Fig. 1). The sign function switches the direction of the flow between two tanks abruptly from +1 to -1 or vice versa when the condition $s_{ij} = h_i - h_j = 0$ is crossed. Note that the gradient of the flow diverges to infinity at this point. Our aim is to estimate, via minimization of the objective function (Eq. (1)), the flow parameters $A_{ij}$ based on (simulated) measurement data $h_\ell^{(m)}$, ℓ = 1,...,10 of the tank levels taken equidistantly within the time horizon $(t_0, t_f) = (0, 20)$ s. The data are generated via simulation of the original model with added Gaussian noise. The equality constraints representing the system dynamics are reformulated using the proposed relaxation method. Then the dynamic optimization problem is discretized by the collocation method, and finally the discretized NLP problem is solved using IPOPT [11]. In order to evaluate the performance of the proposed method, we compare it with that of PICS and PSO. First, we study the accuracy, i.e. the capability of the algorithms to reproduce the correct switching behavior and the optimal parameter values. The optimal state trajectories found by SSF and PICS are shown in Fig. 2a). They agree quite well with each other. In particular, the crossing points of the levels h1 and h2 and of the levels h2 and h3 nearly coincide. This reflects the fact that the correct switching behavior is obtained in both cases, as shown in Fig. 2b). It can be seen that using SSF the switch is smooth and rather slow, which apparently has almost no impact on the trajectories. In contrast, using PICS the switch takes place almost instantaneously. Obviously, PSO also provides the correct switching behavior if a simulation tool able to treat hybrid systems is used.
Fig. 2. Trajectories of (a) the states at the solution for SSF (solid) and PICS (dashed) as well as the measurements of h1 (diamonds) and h2 (triangles), and (b) the corresponding switching variables σ12 (blue) and σ23 (green).
The table in Fig. 1 compares the optimal parameter values estimated by the three methods. With PICS we obtain the best value for the parameter A12, but the deviation of A23 from the exact value is considerable. SSF and PSO result in moderate deviations for both parameters. In summary, all three approaches provide reasonably accurate results. The robustness of the algorithms is evaluated with respect to their sensitivity to a change of the reformulation parameter as well as their capability to handle measurement error. The dependence of the objective function value obtained by SSF and PICS on the respective reformulation parameter is shown in Fig. 3a) and b). For SSF, J(τ) is monotonic and thus the parameter τ can simply be increased as long as the problem stays sufficiently smooth for the NLP solver to find a solution. In contrast, the non-monotonic behavior of J(ρ) with the PICS approach demonstrates that finding the proper balance between the least-squares term and the penalty is not a trivial task. We also studied the influence of a finite measurement error on the estimated parameter values. Parameter estimation was carried out for 50 series of h2 for different values of σ_M, so that the mean parameter values Ā12 as well as their standard deviation σ_p can be evaluated. As expected, Ā12 is constant over a wide range of random error, whereas σ_p increases linearly with increasing σ_M (not shown). It turns out that PICS can handle only small random errors, whereas the results of SSF and PSO are reasonable for σ_M ≤ 0.02 m. The CPU time needed by PICS is higher than that needed by SSF, since the number of iterations with SSF is generally lower, in particular in the case of a fine temporal grid. In the example of the three-tank system, the computation time required by PSO is not higher than that required by the reformulation strategies.
With a small number of particles (n_ν = 25) and iterations (n_i = 15), a small value of the objective function and proper parameter estimates can be achieved, see Figs. 1 and 3c). It should be noted that the good performance of PSO is due to the low dimensionality of the parameter space. Reformulation strategies are expected to outperform PSO considerably for problems involving a large number of parameters or time-dependent control variables.
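A minimal PSO iteration of the kind used here can be sketched as follows; the swarm size and iteration count match the values quoted above, the inertia and attraction constants are common illustrative choices, and a toy quadratic stands in for the least-squares fit of Eq. (1).

```python
import random

def pso(objective, bounds, n_particles=25, n_iter=15, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: each particle is a complete parameter set;
    velocities blend inertia with attraction to personal and global best positions."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp the updated position to the variable bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy quadratic objective standing in for the simulated least-squares fit.
best, val = pso(lambda p: (p[0] - 6.0) ** 2 + (p[1] - 2.0) ** 2,
                bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the actual study, each objective evaluation is one hybrid-system simulation, so the total cost is roughly n_ν × n_i simulations per estimation run.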
Fig. 3. Dependence of the objective function on the reformulation parameters, i.e. (a) the smoothing parameter τ and (b) the penalty ρ in the case of SSF and PICS, respectively. (c) Objective function as a function of the number of iterations for PSO.
4. Conclusions
Reformulation strategies for nonlinear optimization problems with complementarity constraints, such as smoothing of the step function (SSF) and penalization of incomplete switching (PICS), can be used to solve hybrid dynamic parameter estimation problems. They provide a computationally attractive alternative to heuristic optimization methods like particle swarm optimization (PSO) in conjunction with a simulation tool for the underlying DAEs. The SSF method is more robust against variation of the reformulation parameter than PICS. SSF and PSO proved to be quite robust against measurement error. In the next step, we plan to apply these methods to large-scale systems and test their viability for industrial purposes.
5. Acknowledgement We would like to thank SIEMENS AG for financial support. We are grateful to Erich Runge for valuable help regarding the interpretation of results and methods.
References
[1] K. Schittkowski (2002), Numerical Data Fitting, Kluwer Academic Press.
[2] C. Michalik, B. Chachuat, W. Marquardt (2009), Incremental global parameter estimation in dynamical systems, Ind. Eng. Chem. Res., vol. 48, pp. 5489–5497.
[3] R. Goebel, R.G. Sanfelice, A.R. Teel (2009), Hybrid dynamical systems, IEEE Control Syst. Mag., pp. 28–93.
[4] V.S. Pappala, I. Erlich (2008), A new approach for solving the unit commitment by adaptive particle swarm optimization, in IEEE PES General Meeting, pp. 1–6.
[5] M. Schwaab, E.C. Biscaia, J.L. Monteiro, J.C. Pinto (2008), Nonlinear parameter estimation through particle swarm optimization, Chem. Eng. Sci., vol. 63, pp. 1542–1552.
[6] B.T. Baumrucker, J.G. Renfro, L.T. Biegler (2008), MPEC problem formulations and solution strategies with chemical engineering applications, Comp. Chem. Eng., vol. 32, pp. 2903–2913.
[7] P.I. Barton, J.R. Banga, S. Galan (2000), Optimization of hybrid discrete/continuous dynamic systems, Comp. Chem. Eng., vol. 24, pp. 2171–2182.
[8] J. Till, S. Engell, S. Panek, O. Stursberg (2004), Applied hybrid system optimization: An empirical investigation of complexity, Control Eng. Pract., vol. 12, pp. 1291–1303.
[9] S. Sager (2009), Reformulations and algorithms for the optimization of switching decisions in nonlinear optimal control, J. Process Contr., vol. 19, pp. 1238–1247.
[10] B.T. Baumrucker, L.T. Biegler (2009), MPEC strategies for optimization of a class of hybrid dynamic systems, J. Process Contr., vol. 19, pp. 1248–1256.
[11] A. Wächter, Ipopt, https://projects.coin-or.org/Ipopt, Jul. 2010.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Dynamic optimization of bioreactors using probabilistic tendency models and Bayesian active learning
Ernesto Martínez (a), Mariano Cristaldi (b), Ricardo Grau (b), Joao Lopes (c)
(a) INGAR (Conicet-UTN), Avellaneda 3657, Santa Fe, S3002 GJC, Argentina
(b) INTEC (Conicet-UNL), Güemes 3450, Santa Fe, 3000, Argentina
(c) Porto University, Chemistry Dept., R. Aníbal Cunha 164, Porto 4099-030, Portugal
Abstract First-principles models of fermentation processes typically have built-in errors in the form of structural mismatch and parametric uncertainty. A model-based optimization approach for run-to-run improvement under uncertainty of fed-batch bioreactors by integrating probabilistic tendency models with Bayesian inference is proposed. Probabilistic models grounded on first principles are used in the design of dynamic experiments to bias data gathering towards the subspace of most promising operating conditions. Results obtained in the fed-batch fermentation of penicillin G are presented. Keywords: Bayesian inference, Bioprocesses, Model-based experimental design, Modeling for optimization, Uncertainty.
1. Motivation Most optimization techniques are model-based, and since accurate dynamic models are rarely available, guaranteeing the performance of an operating policy under uncertainty is crucial for a successful scale-up (Terwiesch et al., 1994; Schenker and Agarwal, 1995; Bonvin, 1998; Kadam et al., 2007). The best use of an imperfect first-principles model through proper handling of its inherent uncertainty is a challenging problem for fast productivity improvement of innovative fed-batch fermentations using a handful of production runs (Walsh, 2007). The main problem in bioreactor modeling for optimization is that biological activity occurs in alternative metabolic pathways with switches which are triggered in response to changes in environmental conditions (Visser et al., 2000; Riascos and Pinto, 2004; Martínez et al., 2009). Due to the complexity of metabolic regulation and sparse measurements, first-principles models of bioreactor dynamics can only capture the qualitative tendency of sampled state variables such as biomass and protein concentrations. Migration from the bench scale to production runs is thus made with high levels of uncertainty about the maximum level of productivity that can actually be achieved. As a result, a sub-optimal policy is typically used to compensate for the inherent uncertainty in a model-optimized policy (Bonvin, 1998).
2. Methodology
2.1. Probabilistic tendency model
In order for a tendency model to reflect the observed bioreactor dynamics as accurately as possible, it must faithfully represent its own fidelity statistically. A probabilistic model quantifies this uncertainty by integrating first-principles knowledge with data bias to capture all plausible dynamics in a distribution over model predictions for state transitions between samples. To this aim, let us assume that the bioreactor dynamics is modeled using a number of state variables x(t) that can be measured, and that the vector y(t) represents the measured values of the outputs at a given sampling time t. Also, it is assumed that the tendency model can be described by a dynamic stochastic model of the form

$f(\dot{x}, x, u(t), w, \theta, t) = 0, \quad y = g(x(t))$   (1)

with the set of initial conditions x(0) = x0; u(t) and w are, respectively, the time-dependent and time-invariant control variables (manipulated inputs), θ is the vector of i.i.d. model parameters with given a priori distributions $p(\theta_i)$, i = 1,...,k, and t is time. Run-to-run optimization of a bioreactor aims at increasingly improving a performance index (e.g., productivity) J at the end of each run by purposefully setting the parameters of the operating policy φ1 and the sampling strategy φ2, defined as follows:

$\varphi_1 = [y_0, \beta, w, t_f]; \quad \varphi_2 = [t_1, \ldots, t_n]$   (2)

where y0 is the set of initial conditions of the measured variables and tf is the duration of an experiment. The idea is to exploit current knowledge about the prior distribution p(θ) to define a model-optimized policy φ1 and then explore over an evaluation run by sampling data to revise the parameter distributions so that an improved policy is found. Control vector parameterization is used to discretize the control input profiles u = ξ(t; β). To make predictions at an arbitrary sampling time, we take the uncertainty about the tendency model parameters into account by averaging over predicted state transitions with respect to their probability distributions. Thus, a predictive state distribution $p(x_{t_i})$ is obtained for each sampling time ti, which sheds light not only on the expected value of the state x, but also on the uncertainty of this estimation. Within the Bayesian framework, probability distributions over the model parameters θi, i = 1,...,k, capture the a priori parametric uncertainty in a tendency model. Using samples, these distributions can be conveniently modified on a run-to-run basis so as to reflect the modeling bias introduced by new sampled data. In this work parameter distributions are represented by histograms obtained by bootstrapping. The bootstrap method is a simulation method for statistical inference using re-sampling with replacement. The method has been successfully applied to quantifying confidence intervals of uncertain kinetic parameters in metabolic networks (Joshi et al., 2006).
2.2. Model-based policy iteration
A high-level description of the model-based policy iteration framework is given in Fig. 1. It is important to highlight that the activity called policy evaluation corresponds to the actual running of a designed experimental run, whereas other activities such as policy optimization, experimental design and sensitivity analysis are entirely based on model
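The bootstrap construction of parameter histograms mentioned above can be sketched for a deliberately simple one-parameter model (a slope through the origin) with hypothetical data; re-sampling the data pairs with replacement and re-fitting yields an empirical distribution of the parameter rather than a single point estimate.

```python
import random

def fit_slope(xs, ys):
    """Least-squares slope through the origin: theta = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def bootstrap_slope(xs, ys, n_boot=500, seed=0):
    """Bootstrap: re-sample the (x, y) pairs with replacement and re-fit n_boot times,
    giving an empirical distribution (histogram) of the parameter estimate."""
    rng = random.Random(seed)
    n = len(xs)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        estimates.append(fit_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return estimates

# Hypothetical sampled data generated from y = 2x plus small perturbations.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
dist = bootstrap_slope(xs, ys)
mean = sum(dist) / len(dist)
```

In the tendency-model setting, the fitted quantity would be a kinetic parameter vector and the spread of the bootstrap histogram is what the policy iteration seeks to shrink around the optimum.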
Dynamic optimization of bioreactors using probabilistic tendency models
simulations. The operating policy π1 is first initialized by resorting to expert judgement and a priori knowledge from lab scale so as to avoid undesirable physiological states. Samples are taken along this experiment so as to make a rough estimation of the probability distributions, or histograms, of the parameters in the tendency model. Equipped with a probabilistic model which explicitly addresses its own uncertainty, the policy iteration loop can be entered. First, the “most probable” model parameterization is used to find a model-optimized operating policy. Using this policy, an optimally informative experiment is designed to define an optimal sampling strategy π2 along the next evaluation run. The policy is then evaluated experimentally and new data are gathered. To use incoming data more efficiently, a sensitivity analysis is made to pinpoint the subset of parameters that explains most of the variance of the performance index J. Finally, using the new data, the probabilistic tendency model is updated by bootstrapping the distributions of the sensitive parameters, and a new policy improvement round begins.
2.3. Experimental design
Optimal sampling times must be calculated so as to bring in new information that selectively reduces the parametric uncertainty which most significantly affects the optimality of π1 with regard to the performance index J. For effective sampling, the best criterion is D-optimality, which maximizes the determinant of the Gram matrix M (Martinez et al., 2009):
π2* = arg max_{π2} det(M);   M = Q^T Q
subject to:  t_i^L ≤ t_i ≤ t_i^U,  i = 1, ..., n    (3)
where each entry of the matrix Q, S_ij, measures the sensitivity of the performance index J at the i-th sampling time with respect to the j-th parameter of the operating policy π1.

1: Policy evaluation (exploratory run)
2: Model initialization (define priors p(θ) for parameter distributions)
3: Loop
4: Model-based optimization (policy improvement)
5: Experimental design (optimal sampling strategy)
6: Policy evaluation (collect observations)
7: Sensitivity analysis
8: Probabilistic model update (bootstrapping; introduce modeling bias)
9: End loop
Fig. 1. High-level description of model-based policy iteration.
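The D-optimal design of Eq. (3) can be sketched numerically as follows. The two sensitivity functions, the candidate time grid and the number of samples are invented for illustration; they stand in for the model-based sensitivities S_ij of the actual tendency model.

```python
import numpy as np
from itertools import combinations

# Hypothetical sensitivity functions s_j(t) of the performance index J with
# respect to two policy parameters (stand-ins for model-based sensitivities)
def sensitivities(t):
    return np.column_stack([np.exp(-0.1 * t), t * np.exp(-0.1 * t)])

# Candidate sampling grid within the bounds t_L <= t_i <= t_U of Eq. (3)
candidates = np.linspace(1.0, 24.0, 24)
n = 3  # number of samples in the strategy pi_2

# Exhaustive search: pick the n times maximizing det(M), with M = Q^T Q
best_det, best_times = -np.inf, None
for times in combinations(candidates, n):
    Q = sensitivities(np.array(times))
    d = np.linalg.det(Q.T @ Q)
    if d > best_det:
        best_det, best_times = d, times

print("D-optimal sampling times:", best_times)
```

For a handful of samples over a coarse grid, exhaustive enumeration is affordable; for finer grids a gradient-based or exchange algorithm over the continuous bounds of Eq. (3) would be used instead.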
3. Example
3.1. Fed-batch fermentation of penicillin G
Penicillin production is an established benchmark in fermentation processes for testing new approaches in the modeling, optimization and control of novel bioprocesses. Run-to-run optimization aims at maximizing the final amount of penicillin obtained. First-principles equations and parametric uncertainty for an unstructured tendency model of a
E. C. Martínez et al.
fed-batch bioreactor are detailed in Menezes et al. (1994). An alternative tendency model for the penicillin bioreactor has been provided by Riascos and Pinto (2004).
3.2. Results
Table 1 shows run-to-run policy optimization results obtained when the sampled data are generated using the model proposed by Riascos and Pinto (2004) as the in silico bioreactor with realistic added measurement noise, whereas for policy iteration the probabilistic model is based on the structure of the tendency model proposed by Menezes et al. (1994). The operating policy includes important parameters such as the concentration of substrate in the feed and the initial volume, along with a feed rate profile modeled using inverse polynomials (see Martinez et al., 2009) with parameters A, B and C. As can be seen, model-based policy iteration achieves a significant improvement in penicillin production in just three evaluation runs. Table 2 highlights the selective uncertainty reduction, whereas Fig. 2 shows the evolution of the feed rate profiles.
Table 1. Model-based policy iteration under structural errors and parametric uncertainty
Policy parameter              | Initial | 1st run | 2nd run | 3rd run | Optimum
A [L h-2]                     | 0.6882  | 0.8707  | 0.9697  | 1.3494  | 1.2755
B [h-1]                       | 0.1431  | 0.1     | 0.1022  | 0.1015  | 0.1018
C [h-2]                       | 0.0002  | 2e-4    | 3e-4    | 9e-4    | 0.0012
tfeed [h]                     | 0       | 24      | 23.6    | 24      | 22.37
tfinal [h]                    | 240     | 300     | 300     | 294.8   | 300
Substrate feed conc. [g L-1]  | 240     | 500     | 500     | 500     | 500
First discharge [h]           | 24      | 24      | 24.17   | 24.02   | 24
Discharge volume [L]          | 60      | 80      | 80      | 80      | 79.57
Discharge frequency [h]       | 24      | 24      | 24      | 24.62   | 35.28
Initial volume [L]            | 600     | 500     | 500     | 500     | 500
Penicillin obtained, J [kg]   | 16.12   | 35.47   | 40.49   | 57.51   | 63.24
Performance mismatch, std(J)  | 3.1     | 2.1     | 2.5     | 1.2755  | -
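The bootstrap construction of parameter histograms and predictive state distributions described above can be sketched as follows. The one-parameter exponential tendency model, the noise level and all numerical values are illustrative assumptions, not the penicillin model of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter tendency model: dx/dt = -theta * x
def simulate(theta, t, x0=1.0):
    return x0 * np.exp(-theta * t)

# Noisy observations from a "true" run (stand-in for sampled bioreactor data)
t_obs = np.linspace(0.2, 2.0, 10)
theta_true = 0.8
y_obs = simulate(theta_true, t_obs) + rng.normal(0.0, 0.02, t_obs.size)

def fit_theta(t, y):
    # Least-squares fit on the log-linear form; clip keeps the log defined
    return -np.polyfit(t, np.log(np.clip(y, 1e-6, None)), 1)[0]

# Bootstrap: resample residuals with replacement, refit, build a histogram
theta_hat = fit_theta(t_obs, y_obs)
residuals = y_obs - simulate(theta_hat, t_obs)
boot = np.array([
    fit_theta(t_obs, simulate(theta_hat, t_obs)
              + rng.choice(residuals, residuals.size, replace=True))
    for _ in range(500)
])

# Predictive state distribution p(x_ti): average the model over the
# bootstrapped parameter histogram instead of using a point estimate
x_pred = simulate(boot, 1.5)
print(f"theta: {boot.mean():.3f} +/- {boot.std():.3f}")
print(f"x(1.5): {x_pred.mean():.3f} +/- {x_pred.std():.3f}")
```

The spread of `x_pred` is the run-to-run analogue of the predictive distributions used to design informative sampling times: where it is widest, a sample is most valuable.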
Table 2. Run-to-run uncertainty reduction using sensitivity analysis

Param | Prior interval | Exploratory run  | Later runs (sensitive parameters only)
Pmáx  | 0.12 - 0.17    | 0.1444 - 0.17    |
Ks    | 0.006 - 0.4    | (6.0 - 6.89)e-2  | (6.0 - 6.89)e-2
Kh    | (2 - 10)e-3    | (2.0 - 2.04)e-3  | (2.0 - 2.04)e-3
Kx    | (5 - 10)e-3    | (5 - 5.0004)e-3  |
Kd    | (0.01 - 8)e-3  | 1e-6 - 6.65e-3   |
klis  | 0.40 - 0.58    | 0.40 - 0.58      |
Yxs   | 0.4 - 1        | 0.4 - 0.5178     | 0.40 - 0.4857; 0.40 - 0.4757
Yps   | (3 - 15)e-3    | (7.2 - 9.3)e-3   | (7.2 - 8.3)e-3
Smáx  | (1 - 20)e-5    | (1 - 1.09)e-4    |
Kp    | 0.014 - 0.029  | (2.48 - 2.9)e-2  | (2.48 - 2.9)e-2
]máx  | (1 - 20)e-5    | (1 - 1.14)e-5    |
Fig. 2. Run-to-run optimization of the substrate feed rate.
4. Final remarks
A novel run-to-run optimization strategy for fast productivity improvement under uncertainty in fed-batch fermentation units, integrating probabilistic tendency models with sensitivity analysis, has been proposed. Bayesian inference and probabilistic models are particularly valuable for bioprocess scale-up, since production data are very sparse.
References
D. Bonvin, 1998, Optimal operation of batch reactors, J. Proc. Control, 355-368.
M. Joshi, A. Seidel-Morgenstern, A. Kremling, 2006, Exploiting the bootstrap method for quantifying parameter confidence intervals in dynamical systems, Metab. Eng., 8, 447-455.
J. Kadam, M. Schlegel, B. Srinivasan, D. Bonvin, W. Marquardt, 2007, Dynamic optimization in the presence of uncertainty: from off-line nominal solution to measurement-based implementation, J. Proc. Control, 389-398.
E. Martínez, M. Cristaldi, R. Grau, 2009, Design of dynamic experiments in modeling for optimization of batch processes, Ind. Eng. Chem. Res., 48, 7, 3453-3465.
J. Menezes, S. Alves, J. Lemos, 1994, Mathematical modelling of industrial pilot-plant penicillin-G fed-batch fermentations, J. Chem. Tech. Biotechnol., 123-138.
C. Riascos, J. Pinto, 2004, Optimal control of bioreactors: a simultaneous approach for complex systems, Chemical Engineering J., 99, 23-34.
B. Schenker, M. Agarwal, 1995, Prediction of infrequently measurable quantities in poorly modeled processes, J. Proc. Control, 329-339.
P. Terwiesch, M. Agarwal, D.W.T. Rippin, 1994, Batch unit optimization with imperfect modeling: a survey, J. Proc. Control, 238-258.
D. Visser, R. van der Heijden, K. Mauch, M. Reuss, S. Heijnen, 2000, Tendency modeling: a new approach to obtain simplified kinetic models of metabolism applied to Saccharomyces cerevisiae, Metabolic Engineering, 2(3), 252-275.
G. Walsh, 2007, Pharmaceutical biotechnology: concepts and applications, John Wiley & Sons Ltd, Chichester, England.
21st European Symposium on Computer Aided Process Engineering – ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A.C. Kokossis (Editors) © 2011 Elsevier B.V. All rights reserved.
Plantwide Control Design of a Postcombustion CO2 Capture Process
Marc-Oliver Schach a, Rüdiger Schneider b, Henning Schramm b, Jens-Uwe Repke a
a Institute of Thermal, Environmental and Natural Products Process Engineering, TU Bergakademie Freiberg, Leipziger Straße 28, 09596 Freiberg, Germany
b Siemens AG Energy Sector, Fossil Power Generation, Industriepark Höchst, 65926 Frankfurt am Main
Abstract
Coal-fired power plants are operated flexibly over a large operating range, so processes for postcombustion CO2 capture have to follow the power plant load and separate the carbon dioxide at every operating point with minimal energy demand. In this work, control structures for these processes were designed using self-optimizing control. The derived control structures allow the separation of 90% of the CO2 over an operating range of 40 - 100% load with acceptable energy requirements.
Keywords: CO2 capture, postcombustion, control structures, self-optimizing control
1. Introduction
Carbon capture and storage from coal-fired power plants has become an important field of research in the last decade. Different concepts have been analyzed and evaluated. For retrofitting existing power plants, postcombustion processes should be applied. Chemical absorption processes have shown very good performance in this field. Pilot plants of these processes have already been constructed on some power plant sites to gain more insight into the operation. Extensive research work has been conducted in the fields of process configuration, solvent design and main equipment. The next step in the development of the process is the design of a control and operation strategy. Contributions in this field are scarce in the literature. Lawal et al. (2010) published results of a dynamic simulation of an absorption process for CO2 capture, but they concentrated on the analysis of different cases, e.g., a change of the flue gas mass flow and a change of the mass flow of the heating steam. The same control structure was used for all the discussed cases. Ziaii et al. (2009) showed the dynamic performance of a desorber when the heating steam changes. The motivation was economic operation at times when the price for electricity is high; then it can be advantageous to use the heating steam of the power plant for the production of electricity instead of using it for solvent regeneration. Kvamsdal et al. (2009) performed simulation studies with a dynamic absorber model and analyzed the effects of load changes on the absorber performance. Since only the absorber was modeled, the control structure used did not take the whole process into account. Panahi et al. (2010) designed a control structure for the complete capture plant using the “self-optimizing control” concept of Skogestad (2000). The control structure was designed for one operating point with respect to disturbances.
In this work, control structures for an operating range of 40 to 100% load of the power plant were designed. Due to the increased usage of renewable energy sources, coal-fired power plants are operated more dynamically. Therefore, a control structure for a carbon capture plant has to be designed for a large operating range.
2. Process Simulation
As shown in Schach et al. (2010), the standard configuration of the chemical absorption process with absorber intercooler has a very good performance in terms of cost of CO2 avoided in comparison with other configurations. The flowsheet of the process is shown in Figure 1. The flue gas enters the absorber after passing a blower and a water cooler. The CO2 reacts with the solvent, a 30 wt-% monoethanolamine (MEA) solution, and 90% of the CO2 is separated. The solvent is intercooled in the absorber column, leading to a higher loading of the solvent, which results in a more efficient regeneration. The loaded solvent is pumped through a cross heat exchanger to the top of the stripper. In this column the solvent is regenerated by providing heat in the form of heating steam of the power plant. The vapours are condensed in a partial condenser at 40°C. As gaseous product CO2 is obtained, which is liquefied in a multistage compressor by pressurizing the CO2 up to 110bar. During the compression water condenses and the resulting liquefied CO2 has a purity of >99.5 mol-%. The lean solvent is routed back to the absorber. For this process configuration a control structure was designed in this work. The process was simulated using Aspen Plus 2006.5 with the amine package MEAREA, which provided the reaction model considering both kinetically controlled and equilibrium reactions. For the absorber and stripper columns the RadFrac model with rate-sep calculation was used. The control structure was designed for an operating range of the power plant of 40 to 100% load. Four different operating points were considered: 40, 60, 80 and 100% load. The mass flow and composition of the flue gas for the 100% case are shown in Table 1. At lower loads the mass flow decreases and the composition changes. Since in part load the coal is burnt with excess air, the concentrations of CO2 and water decrease whereas the concentrations of oxygen and nitrogen increase.

Table 1: Flue gas data (100% load)
Mass flow:    750 kg/s
Temperature:  49°C
xCO2:         13.5
xN2:          71.5
xO2:          3.5
xH2O:         11.5

Figure 1: Flowsheet of the analyzed process
3. Control Structure Design
The self-optimizing control concept of Skogestad (2000) was applied for the control structure design. The objective of this concept is to find the control structure which realizes an acceptable loss with constant setpoint values for the controlled variables even if disturbances occur. For the analyzed process this means that the control structure has to maintain an economic separation of 90% of the CO2 over the whole regarded operating range. In the following, the steps and results of the procedure are described.
3.1. Objective, Constraints and Disturbances
The objective is to minimize the equivalent energy demand of the capture process while maintaining a CO2 separation degree of 90%. The energy demand is composed of the required energy for the blower, pumps and compressor and the equivalent work for the solvent regeneration.
The process is subject to 8 constraints: the CO2 separation degree is 90%, the flue gas is cooled down to 40°C before entering the absorber, the pressure in the absorber is 1bar, the temperature in the condenser is 40°C, the concentration of MEA has to be maintained with MEA and water makeup streams, the temperature of the pumparound in the feed cooler is constant, and the CO2 is compressed up to 110bar. The capture process at 100% flue gas load was regarded as the nominal process and the other loads of the power plant were considered as disturbances.
3.2. Degrees of Freedom
The number of degrees of freedom was determined using the concept of restraining numbers from Konda et al. (2006), resulting in 18 degrees of freedom. All 8 constraints and the 5 levels have to be controlled. In order to have the solvent mass flow as a free manipulated variable, the level in the absorber is controlled with the temperature of the lean solvent: with this temperature the water leaving the column at the top can be manipulated, and in this way the level can be controlled. Since the feed is predetermined by the power plant load, the number of degrees of freedom is reduced by one. Taking this and the above mentioned constraints into account, 4 degrees of freedom remain: the pressure in the stripper column, the temperature and mass flow of the solvent at the intercooling stage, and the mass flow of the solvent.
3.3. Optimization
Parameters left for optimization are the pressure in the stripper (1.1 - 1.8bar), the temperature of the intercooling stage (30 - 50°C) and the mass flow of the solvent. Since the whole holdup was always intercooled, the intercooler mass flow was not an optimization parameter. The temperature approach in the cross heat exchanger for the integration of the heat of the lean solvent has a strong influence on the performance of the process. It is determined by the exchanger area.
For the optimization of the process at 100% load this temperature approach was also an optimization parameter, as was the stage of the intercooler in the absorber. The molecular-inspired parallel tempering algorithm of Ochoa et al. (2009) was used for the optimization. For the optimization of the process at 100% load, the objective function was the cost of CO2 avoided, calculated according to the cost model of Schach et al. (2010). Since for this load the size of the main apparatuses, except for the columns, was not determined, the cost of CO2 avoided is a more meaningful value to evaluate the overall performance. Since no parameter was at its bound after the optimization, there are still 4 degrees of freedom minus 1 for the mass flow in the intercooler. The pressure in the stripper has to be controlled for safety reasons. Therefore two degrees of freedom are left: the solvent mass flow and the mass flow of cooling water in the intercooling stage.
3.4. Identification of Candidate Controlled Variables
Two degrees of freedom are left for which controlled variables have to be identified. The energy for solvent regeneration, i.e. the mass flow of heating steam, is decisive for the overall performance. Therefore it is crucial to find for this manipulated variable a controlled variable which leads to an economic separation. To get the mass flow of heating steam as an additional degree of freedom, the constraint that 90% of the CO2 emissions have to be separated was deleted. Now controlled variables had to be selected for three manipulated variables. The minimum value of the objective function, Jmin(u,d) = J(uopt(d),d), is reached when for every disturbance the manipulated variables move the controlled variables to new optimal values. In chemical plants one mostly finds feedback control, where the controlled variables have constant setpoints and the objective function is J(u,d). The difference between those two definitions is the loss
L = J(u,d) - Jmin(u,d) = (1/2) z^T z    (1)

with z = Juu^(1/2)(u - uopt) = Juu^(1/2) G^(-1)(c - copt), where Juu is the Hessian of the objective function and G is the steady-state gain matrix. The objective is to find those controlled variables which minimize the loss. If each controlled variable ci is scaled such that ||ec||2 = ||c - copt||2 ≤ 1, the worst-case loss can be expressed according to Halvorsen et al. (2003) as

Lmax = 1 / (2 [σmin(S G Juu^(-1/2))]^2)    (2)

where S is the matrix of scalings for the controlled variables ci, S = diag{1/span(ci)}, with span(ci) = Δci,opt(d) + ni, where Δci,opt(d) is the variation of ci due to the disturbances and ni is the implementation error of ci. Those sets of controlled variables with the highest minimum singular value minimize the maximal loss and should be chosen for the control structure. Out of 29 different controlled variables, those were selected which minimize the maximal loss. With three manipulated variables there are 3654 possible sets. To avoid screening all possible sets, the branch and bound algorithm of Cao et al. (2008) was used. Since this screening method evaluates the sets in terms of minimizing the objective function, a further analysis had to be employed to assess the applicability of the proposed sets. This was done using the relative gain array (RGA) and the performance relative gain array (PRGA). Table 2 shows a selection of the best sets of controlled variables which maintain an economic separation of CO2 over the whole operating range and which are not coupled.

Table 2: Selection of the best sets of controlled variables

Set | C1 (MRichSolvent)        | C2 (MHeatingSteam)         | C3 (MCoolingWater)
I   | MCO2,Feed / MRichSolvent | MHeatingSteam / MCO2,Feed  | T18,Absorber
II  | MCO2,Feed / MRichSolvent | xCO2,Sweetgas              | T18,Absorber
III | MCO2,Feed / MRichSolvent | loading of the lean solvent | T18,Absorber
IV  | MCO2,Feed / MRichSolvent | MHeatingSteam / MCO2,Feed  | MCoolingWater / MFlueGas
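The minimum-singular-value screening behind Table 2 can be sketched as follows. The gain matrix, Hessian and spans are invented stand-ins for the process model, and the branch and bound of Cao et al. (2008) is replaced by exhaustive enumeration, which is affordable at this toy size.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Illustrative steady-state data: 8 candidate measurements x 3 inputs
# (stand-in numbers; the paper screens 29 candidates with branch and bound)
G_full = rng.normal(size=(8, 3))   # steady-state gain matrix G
J_uu = np.diag([2.0, 1.0, 0.5])    # Hessian of the objective (assumed)
spans = 1.0 + rng.random(8)        # span(c_i) for each candidate variable

J_uu_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(J_uu)))

def worst_case_loss(rows):
    """Eq. (2): L_max = 1 / (2 [sigma_min(S G Juu^-1/2)]^2)."""
    idx = list(rows)
    S = np.diag(1.0 / spans[idx])
    sigma = np.linalg.svd(S @ G_full[idx, :] @ J_uu_inv_sqrt,
                          compute_uv=False)
    return np.inf if sigma[-1] < 1e-12 else 1.0 / (2.0 * sigma[-1] ** 2)

# Screen all 3-measurement sets; the best set maximizes the minimum
# singular value, i.e. minimizes the worst-case loss
best = min(combinations(range(8), 3), key=worst_case_loss)
print("best set:", best, "worst-case loss:", worst_case_loss(best))
```

In practice the ranking produced this way is only a pre-screening; as in the paper, the surviving sets still need an RGA/PRGA check for undesirable coupling.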
Table 3 shows, for the four sets, the loss (Loss = J - Jopt) in terms of equivalent power in MWel and the separation degree of CO2 for the three part-load cases with constant setpoints. With set II a separation degree of 90% cannot be reached; because of this, the energy requirements are lower than for the optimized cases. With the other sets 90% of the CO2 can be separated with an acceptable loss of energy. This was achieved although the separation degree was not defined as a controlled variable. Since the loading of the lean solvent is difficult to measure online and set II does not separate 90% of the CO2, set I and set IV can be recommended as control structures. The process with the control structure according to set I is shown in Figure 2.

Table 3: Separation degree of CO2 and loss in MWel at part load with constant setpoints

Set | Loss in MWel (80% / 60% / 40%) | Separation degree of CO2 in % (80% / 60% / 40%)
I   | 0.96 / -0.52 / 0.84            | 90.65 / 89.74 / 90.49
II  | -2.47 / -5.08 / -5.2           | 89.01 / 87.09 / 85.16
III | -0.48 / 1.97 / -3.76           | 90.03 / 90.83 / 88.11
IV  | 1.12 / -0.5 / 0.89             | 90.51 / 89.82 / 90.72
Figure 2: Flowsheet with control structure according to set I
4. Conclusion
Control structures for a postcombustion CO2 capture process were designed using self-optimizing control based only on steady-state analysis. Three of the four presented structures allow the separation of 90% of the CO2 with acceptable losses in terms of equivalent energy demand, although the separation degree was not a controlled variable.
References
Cao, Y., Kariwala, V., 2008, Bidirectional branch and bound for controlled variable selection. Part I: Principles and minimum singular value criterion, Computers and Chemical Engineering, 32, 2306-2319.
Halvorsen, I., Skogestad, S., Morud, J., Alstad, V., 2003, Optimal selection of controlled variables, Ind. Eng. Chem. Res., 42, 3273-3284.
Konda, M., Rangaiah, G., Krishnaswamy, P., 2006, A simple and effective procedure for control degrees of freedom, Chemical Engineering Science, 61, 1184-1194.
Kvamsdal, H., Jakobsen, J., Hoff, K., 2009, Dynamic modeling and simulation of a CO2 absorber column for post-combustion CO2 capture, Chemical Engineering and Processing, 48, 135-144.
Lawal, A., Wang, M., Stephenson, P., Koumpouras, G., Yeung, H., 2010, Dynamic modelling and analysis of post-combustion CO2 chemical absorption process for coal-fired power plants, Fuel, 89, 2791-2801.
Ochoa, S., Repke, J.-U., Wozny, G., 2009, A new parallel tempering algorithm for global optimization: applications to bioprocess optimization, Computer Aided Chemical Engineering, 26, 513-518.
Panahi, M., Karimi, M., Skogestad, S., Hillestad, M., Svendsen, H., 2010, Self-optimizing and control structure design for a CO2 capturing plant, Proceedings of the 2nd Annual Gas Processing Symposium.
Schach, M.-O., Schneider, R., Schramm, H., Repke, J.-U., 2010, Exergoeconomic analysis of post-combustion CO2 capture processes, Computer Aided Chemical Engineering, 28, 997-1002.
Schach, M.-O., Schneider, R., Schramm, H., Repke, J.-U., 2010, Techno-economic analysis of postcombustion processes for the capture of carbon dioxide from power plant flue gas, Ind. Eng. Chem. Res., 49, 2363-2370.
Skogestad, S., 2000, Plantwide control: the search for the self-optimizing control structure, Journal of Process Control, 10, 487-507.
Ziaii, S., Rochelle, G., Edgar, T., 2009, Dynamic modeling to minimize energy use for CO2 capture in power plants by aqueous monoethanolamine, Ind. Eng. Chem. Res., 48, 6105-6111.