26 European Symposium on Computer Aided Process Engineering [1st Edition] 9780444634443, 9780444634283

26th European Symposium on Computer Aided Process Engineering contains the papers presented at the 26th European Symposium on Computer Aided Process Engineering (ESCAPE 26), held in Portorož, Slovenia, 12-15 June 2016.


Table of contents:
Copyright (Page iv)
Preface (Page xxv): Zdravko Kravanja, Miloš Bogataj
International Scientific Committee (Pages xxvii-xxx)
Modelling and iterative Real-time Optimization of a homogeneously catalyzed hydroformylation process (Pages 1-6): Reinaldo Hernández, Sebastian Engell
Integrated management of hierarchical levels: towards a CAPE tool (Pages 7-12): Canan Dombayci, Sergio Medina, Moisès Graells, Antonio Espuña
Mathematical Optimization of Real-time Waste Incineration Scheduling in the Industry (Pages 13-18): Matteo L. Abaecherli, Daniel Santos González, Elisabet Capón-García, Konrad Hungerbühler
Modelling SER Biomass Gasification Using Dynamic Neural Networks (Pages 19-24): Alia Salah, Lutz Hanel, Marcel Beirow, Günter Scheffknecht
PERKS: Software for Parameter Estimation in Reaction Kinetic Systems (Pages 25-30): Aditya Tulsyan, Paul I. Barton
Optimization of a Sour Water Stripping Plant Using Surrogate Models (Pages 31-36): Natalia Quirante, Jose A. Caballero
Security Analysis of the Electronic Management System for a Total Site Utility System (Pages 37-42): Oscar Manso
Enhanced production of methanol from switchgrass: CO2 to methanol (Pages 43-48): Mariano Martín, Ignacio E. Grossmann
Techno-economic analysis of the production of epichlorhydrin from glycerol (Pages 49-54): Alberto Almena, Mariano Martín
Kriging based Fault Detection and Diagnosis Approach for Nonlinear Noisy Dynamic Processes (Pages 55-60): Ahmed Shokry, Mohammadhamed Ardakani, Gerard Escudero, Moisès Graells, Antonio Espuña
Kinetics of Phycocyanobilin Cleavage from C-Phycocyanin by Methanolysis (Pages 61-66): Chandrakant R. Malwade, Maria C. Roda-Serrat, Knud V. Christensen, Xavier Fretté, Lars P. Christensen
Simulation of Turbulent Particulate Flows for Nuclear Waste Management: Agglomeration in Vertical Flows (Pages 67-72): Derrick O. Njobuenwu, Michael Fairweather
A feasibility-based algorithm for Computer Aided Molecular and Process Design of solvent-based separation systems (Pages 73-78): Smitha Gopinath, Amparo Galindo, George Jackson, Claire S. Adjiman
Multiobjective Retrofitting Methodology for Manufacturing Processes of Active Pharmaceutical Ingredients Considering Environment, Health and Safety Risks (Pages 79-84): Hirokazu Sugiyama, Akiko Segawa, Satoshi Yoshikawa, Takayuki Toyama, Hayao Nakanishi, Emi Kikuchi-Uehara, Masahiko Hirao
Performance Evaluation of Unsteady Processes with Dynamic and Temporal Flexibility Indices (Pages 85-90): Ruei-Shing Wu, Yi-Chung Kuo, Chuei-Tin Chang
Solution of Chance-Constrained Mixed-Integer Nonlinear Programming Problems (Pages 91-96): Erik Esche, David Müller, Sebastian Werk, Ignacio E. Grossmann, Günter Wozny
Hydrogen Sulphide Scavenging Simulation in HYSYS (Pages 97-102): André Young, Fernando Pessoa, Eduardo Queiroz
Dynamic Real-Time Optimization of Microgrids with Day-Ahead Commitments for External Power Exchange (Pages 103-108): Michael Zachar, Prodromos Daoutidis
Methodology for preselecting heat-integrated mass allocation networks likely to be associated with cost efficient HEN (Pages 109-114): Sami Ghazouani, Assaad Zoughaib, Solène Le Bourdiec
Computer-Aided Molecular Design by Combining Genetic Algorithms and COSMO-RS (Pages 115-120): Jan Scheffczyk, Lorenz Fleitmann, Annett Schwarz, André Bardow, Kai Leonhard
Energy consumption maps for quaternary distillation sequences (Pages 121-126): F.I. Gómez-Castro, N.E. Ramírez-Vallejo, J.G. Segovia-Hernández, C. Gutiérrez-Antonio, M. Errico, A. Briones-Ramírez, J. Sánchez-Aguilar
Energy integration of a hydrotreating process for the production of biojet fuel (Pages 127-132): C. Gutiérrez-Antonio, F.I. Gómez-Castro, A.G. Romero-Izquierdo, S. Hernández
Mathematical Optimization of the Production of Fuel Pellets from Residual Biomass (Pages 133-138): M.A. Méndez-Vázquez, F.I. Gómez-Castro, J.M. Ponce-Ortega, A.H. Serafín-Muñoz, J.E. Santibañez-Aguilar, M.M. El-Halwagi
On Optimal Assignment of Cold Standby Components for Multi-Channel Safety Interlocks (Pages 139-144): Yeremia Yehuda Lepar, Yu-Chih Wang, Chuei-Tin Chang
Optimising biomass-based energy supply chains for sustainable mobility (Pages 145-150): Federico d'Amore, Fabrizio Bezzo
Estimation of membrane fouling parameters for concentrating lactose using nanofiltration (Pages 151-156): A. Sharma, M. Jelemenský, R. Paulen, M. Fikar
Multi-Scale Modeling of Solid Electrolyte Interface Formation in Lithium-Ion Batteries (Pages 157-162): Fridolin Röder, Richard D. Braatz, Ulrike Krewer
Automatic Model Reduction of Population Balance Models by Proper Orthogonal Decomposition (Pages 163-168): Dmytro Khlopov, Michael Mangold
Parallel computing in multi-parametric programming (Pages 169-174): Richard Oberdieck, Efstratios N. Pistikopoulos
Modelling of a Reactive Distillation in the production process of high purity Cyclohexanone to produce caprolactam (Pages 176-181): David Lorenzo, Carlos Perez-Galvan, Cristian Triana, Aurora Santos, Arturo Romero, I. David L. Bogle
Multimodal Green Food Supply Chain Design and Planning under Uncertainty (Pages 181-186): Ana Carla Gomes, Tânia Pinto-Varela, Ana Paula Barbosa-Póvoa
Derivative-Free Chemical Process Synthesis by Memetic Algorithms Coupled to Aspen Plus Process Models (Pages 187-192): Maren Urselmann, Tim Janus, Christophe Foussette, Stephen Tlatlik, Axel Gottschalk, Michael T.M. Emmerich, Thomas Bäck, S. Engell
Price Adjustment in Price-based Coordination Using Quadratic Approximation (Pages 193-198): S. Wenzel, R. Paulen, S. Krämer, B. Beisheim, S. Engell
Is Wood Waste Only for Burning? A Methodology for Best Pathway Identification of Waste Recovery (Pages 199-204): Gisèle Abi Chahla, Assaad Zoughaib, Cong-Toan Tran, Romain Farel
A new scheme for process simulation by optimization: distillation as an example (Pages 205-210): Anna Hoffmann, Michael Bortz, Jakob Burger, Hans Hasse, Karl-Heinz Küfer
Renewable electricity integration at a regional level: Cantabria case study (Pages 211-216): Antonio Dominguez-Ramos, Charalampos Triantafyllidis, Sheila Samsatli, Nilay Shah, Angel Irabien
Multi-Stage Adaptive Robust Optimization over Bioconversion Product and Process Networks with Uncertain Feedstock Price and Biofuel Demand (Pages 217-222): Daniel J. Garcia, Jian Gong, Fengqi You
Cost-based analysis about a newly designed two-staged reverse osmosis process with draw solute (Pages 223-228): Kiho Park, Do Yeon Kim, Dae Ryook Yang
Chemical Looping Reforming: Impact on the Performances Due to Carbon Fouling on Catalyst (Pages 229-234): Giuseppe Diglio, Piero Bareschino, Erasmo Mancusi, Francesco Pepe
Targeting the hydrogen network and optimal feed using rigorous simulation (Pages 235-240): Wang Dai, Guilian Liu, Jingjing Liang
Incorporating life cycle assessment indicators into optimal electric vehicle charging strategies: An integrated modelling approach (Pages 241-246): Gonzalo Bustos, Miao Guo, Koen H. van Dam, Salvador Acha, Nilay Shah
Neural Networks Modeling of Dearomatization of Distillate Cuts with Furfural to Produce Lubricants (Pages 247-252): Kese P.F. Alberton, Anie D. Lima, Wlamir S. Nogueira, Luís C. Gomes, Príamo A. Melo, Argimiro R. Secchi, Maurício B. de Souza Jr.
Lignocellulosic supply chain MILP model: a Hungarian case study (Pages 253-258): Anna Panteli, Sara Giarola, Nilay Shah
A framework for modelling investment decisions in gas infrastructures (Pages 259-264): Sara Giarola, Daniel J.G. Crow, Adam Hawkes
Process Design and Integration of Various Carbon Capture Approaches into the Energy Sector and Other Energy-intensive Industrial Applications (Pages 265-270): Calin-Cristian Cormos, Letitia Petrescu, Ana-Maria Cormos, Serban Agachi
Assessment of chemical looping combustion process by dynamic simulation (Pages 271-276): Ana-Maria Cormos, Dora-Andreea Chisalita
Comparison of Different Heat Exchanger Tube Designs used in Latent Heat Thermal Energy Storage Systems - a Numerical Study (Pages 277-282): Martin Koller, Anton Beck, Heimo Walter, Michael Hameter
Global sensitivity analysis of computer-aided molecular design problem for the development of novel working fluids for power cycles (Pages 283-288): Jérôme Frutiger, Jens Abildskov, Gürkan Sin
Minimizing the complexity of surrogate models for optimization (Pages 289-294): Julian Straus, Sigurd Skogestad
Optimization Methods for Catalyst Design (Pages 295-300): Jeffrey A. Herron, Manos Mavrikakis, Christos T. Maravelias
Model-based framework for enhanced and controlled operation of a fed-batch bioreactor: xylitol production (Pages 301-306): Héctor Hernández-Escoto, Oscar Andrés Prado-Rubio, Ricardo Morales-Rodriguez
Dynamic behavior of a multi-tasking reactive distillation column for production of silane, dichlorosilane and monochlorosilane (Pages 307-312): C. Ramírez-Márquez, J.G. Segovia-Hernández, N. Ramírez-Corona, J.A. Cervantes-Jáuregui, Arturo Jiménez-Gutiérrez
Use of predictor corrector methods for multi-objective optimization of dynamic systems (Pages 313-318): Tobias Keßler, Filip Logist, Michael Mangold
Economic risk-based analysis: Effect of technical and market price uncertainties on the production of glycerol-based isobutanol (Pages 319-324): Carina L. Gargalo, Krist V. Gernaey, Gürkan Sin
Synthesis and Optimization of Microalgae Biorefineries (Pages 325-330): Melina Psycha, Antonis C. Kokossis
Testing the Non-Diagonal Quadratic Convex Reformulation Technique (Pages 331-336): Otto Nissfolk, Ray Pörn, Tapio Westerlund
Development of soft sensor with neural network and nonlinear variable selection for crude distillation unit process (Pages 337-342): Kai Sun, Shao-hsuan Huang, Shi-Shang Jang, David Shan-Hill Wong
Modelling of organophilic pervaporation to compete with distillation (Pages 343-348): Andras Jozsef Toth, Anita Andre, Eniko Haaz, Peter Mizsey
Optimal regional resource utilisation – the St. Margarethen case (Pages 349-354): Stephan Maier, Raphaela H. Maier, Christina M. Pfeifer, Michael Narodoslawsky
Mathematical Programming Approach for the Design of Intensified Thermally Coupled Distillation Sequences (Pages 355-360): José A. Caballero, Juan A. Reyes-Labarta
Discrete-Time MIP Methods for Production Scheduling in Multistage Facilities (Pages 362-367): Andres F. Merchan, Hojae Lee, Christos T. Maravelias
Economic, Environmental and Dynamic Optimization Applied to Hybrid Processes for the Purification of Biobutanol (Pages 367-372): E. Sanchez-Ramírez, J.J. Quiroz-Ramírez, S. Hernandez, J.G. Segovia-Hernández
Purification of Bioethanol from a Fermentation Process: Alternatives for Dehydration (Pages 373-378): Carolina Conde-Mejía, Arturo Jiménez-Gutiérrez, Fernando Israel Gómez-Castro
Optimization of OSN Membrane Cascades for Separating Organic Mixtures (Pages 379-384): Vincentius S.K. Adi, Marcus Cook, Ludmila G. Peeva, Andrew G. Livingston, Benoît Chachuat
On a systems approach to support novelty in bioreactor design (Pages 385-390): Georgios P. Panayiotou, Aikaterini D. Mountraki, Antonis C. Kokossis
Comparison of Continuous-Time Models for Adjustable Robust Optimization in Process Scheduling under Uncertainty (Pages 391-396): Nikos H. Lappas, Chrysanthos E. Gounaris
Developing a Soft Sensor with Online Variable Selection for Industrial Multi-mode Processes (Pages 398-403): Jialin Liu, Shi-Shang Jang, David Shan-Hill Wong
CFD Modelling of Bubbly Gas Flow using Coupled OPOSPM-Two-Fluid Model (Pages 403-408): Menwer Attarakih, Abdelmalek Hasseine, Hans-Jörg Bart
Design and control of azeotropic dividing wall column for separating furfural-water mixture (Pages 409-414): Xing Qian, Shengkun Jia, Sigurd Skogestad, Xigang Yuan
Multiobjective Optimization of Membrane Networks for Fractionation of Protein Hydrolysate from Fish By-Products (Pages 415-420): Ricardo Abejón, Azucena Abejón, Marie P. Belleville, José Sánchez-Marcano, Aurora Garea, Ángel Irabien
Optimization and Monte Carlo Simulation for Product Launch Planning under Uncertainty (Pages 421-426): Catarina M. Marques, Samuel Moniz, Jorge Pinho de Sousa, Ana Paula Barbosa-Póvoa
Population Balance Modelling of Liquid Extraction Columns using the Sectional Quadrature Method of Moments (SQMOM) (Pages 427-432): Samer Alzyod, Menwer Attarakih, Abdelmalek Hasseine, Hans-Jörg Bart
Robust multi-objective optimal control of dynamic biological networks (Pages 433-438): Philippe Nimmegeers, Dries Telen, Mickey Beetens, Filip Logist, Jan Van Impe
Influence of the Mushy Zone Constant on the Numerical Simulation of the Melting and Solidification Process of Phase Change Materials (Pages 439-444): Michael Hameter, Heimo Walter
CFD Simulation of Boiling Flows for Nuclear Reactor Thermal Hydraulic Applications (Pages 445-450): Marco Colombo, Michael Fairweather
Group-Interaction Contribution Approach for Prediction of Electrochemical Properties of Ionic Liquids (Pages 451-456): Khadra Mokadem, Mourad Korichi
Whole-Systems Modelling of Alternatives for Future Domestic Transport (Pages 457-462): Sheila Samsatli, Alfredo Ramos, Mark Matchett, Nigel P. Brandon, Nilay Shah, Nouri J. Samsatli
Multi-scale modelling software framework facilitating simulation of interconnected scales using surrogate-models (Pages 463-468): Sigve Karolius, Heinz A. Preisig, Henrik Rusche
Reliable Energy Systems Design for Continuous Processes incorporating Renewables Generation (Pages 469-474): Oluwamayowa O. Amusat, Paul R. Shearing, Eric S. Fraga
Dynamic impact milling model with a particle-scale breakage kernel (Pages 475-480): Qinglin Su, Serena Schiano, Chuan-Yu Wu, Zoltan K. Nagy, Chris D. Rielly
Projection-based Reformulation and Decomposition Algorithm for A Class of Mixed-Integer Bilevel Linear Programs (Pages 481-486): Dajun Yue, Fengqi You
Mass and energy integration for the supercritical process for biodiesel production and a bioethanol dehydration train (Pages 487-492): Fernando I. Gómez-Castro, Claudia Gutiérrez-Antonio, Araceli Guadalupe Romero-Izquierdo, Ricardo Morales-Rodríguez, Juan Gabriel Segovia-Hernández
Convex Model for Global Optimization of Water Distribution System (Pages 493-498): Yingzong Liang, Ergys Pahija, Chi Wai Hui
Multi-level Modelling of Sustainable Chemical Production: from CFD to LCA (Pages 499-504): Pratham Arora, Andrew Hoadley, Sanjay Mahajani, Anuradda Ganesh
Modeling and simulation of phase change and non-ideality in multistream heat exchangers (Pages 505-510): Harry A.J. Watson, Donghoi Kim, Truls Gundersen, Paul I. Barton
Using Transient Measurements in Iterative Steady-State Optimizing Control (Pages 511-516): Weihua Gao, Sebastian Engell
Computationally Efficient Steady-State Process Simulation by Applying a Simultaneous Dynamic Method (Pages 517-522): Alexander Zinser, Liisa Rihko-Struckmann, Kai Sundmacher
Spatial branch and bound algorithm for the global optimization of MIQCPs (Pages 523-528): Pedro M. Castro
Risk Management of Shale Gas Supply Chain under Estimated Ultimate Recovery Uncertainty (Pages 529-534): Jiyao Gao, Fengqi You
Optimal Design and Synthesis of Shale Gas Processing and NGL Recovery Processes (Pages 535-540): Jian Gong, Fengqi You
A Leader-Follower Game-Based Life Cycle Optimization Framework and Application (Pages 541-546): Jiyao Gao, Fengqi You
Adjustable Robust Optimization for Scheduling of Batch Processes under Uncertainty (Pages 547-552): Hanyu Shi, Fengqi You
Effective Sampling, Modeling and Optimization of Constrained Black-box Problems (Pages 553-558): Ishan Bajaj, M.M. Faruque Hasan
Integration of Energy-cost Optimization and Production Scheduling Using Multiparametric Programming (Pages 559-564): Hubert Hadera, Rachid Labrik, Juha Mäntysaari, Guido Sand, Iiro Harjunkoski, Sebastian Engell
Simulation and optimization of the molecular weight distribution in a non-ideal polymerization reactor (Pages 565-570): Chi Zhang, Xi Chen, Zhijiang Shao, Chaozhong Xu, Jiajun Wang
Efficient optimization-based design of energetically intensified distillation processes (Pages 571-576): Thomas Waltermann, Mirko Skiborowski
Anaerobic Bio-reactor Modeling (Pages 577-582): Cansu Birgen, Heinz A. Preisig, Alexander Wentzel, Sidsel Markussen, Bernd Wittgens, Ujjaini Sarkar, Anindita Ganguly, Sudeshna Saha, Sibashish Baksi


Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, USA

Copyright © 2016 Elsevier B.V. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress ISBN (Part A): 978-0-444-63873-1 ISBN (Set): 978-0-444-63428-3 ISSN: 1570-7946 For information on all Elsevier publications visit our website at https://www.elsevier.com/

Publisher: John Fedor Acquisition Editor: Kostas Marinakis Editorial Project Manager: Sarah J. Watson Production Project Manager: Mohanapriyan Rajendran Designer: Greg Harris Typeset by SPi Global, India

Preface

These proceedings contain contributions presented at the 26th European Symposium on Computer Aided Process Engineering (ESCAPE 26), held in Portorož, Slovenia, from the 12th to the 15th of June 2016. ESCAPE 26 is the 26th edition of the ESCAPE series. Previous editions were held in Denmark (1992, 2001, 2015), France (1992, 2008), Austria (1993), Ireland (1994), Slovenia (1995), Greece (1996, 2011), Norway (1997), Belgium (1998), Hungary (1999, 2014), Italy (2000, 2010), The Netherlands (2002), Finland (2003, 2013), Portugal (2004), Spain (2005), Germany (2006), Romania (2007), Poland (2009) and the United Kingdom (2012).

The ESCAPE series is a prominent yearly European event in the field of Computer Aided Process Engineering (CAPE), bringing together people from around the world dedicated to Process Systems Engineering. Scientists, industrial experts, academics and students exchange and share the most recent knowledge about CAPE concepts, methods, tools and applications, regarding either the continuous development of existing technologies or the innovative development of new ones based on new discoveries and inventions.

ESCAPE 26 focused on the following themes: Process-product Synthesis, Design and Integration; Modelling, Numerical Analysis, Simulation and Optimization; Process Operations and Control; Green Bioprocess Engineering and Advances in Biomedical Engineering; CAPE/PSE in Environmental Engineering; CAPE/PSE in Sustainable Energy Applications; CAPE Applications; and Education in CAPE/PSE. Out of 406 contributions prepared by authors from 47 countries (Europe, the Americas, Africa, Asia and Australia), the ESCAPE 26 International Scientific Committee selected 176 oral presentations, among them 7 plenary lectures and 18 keynote lectures, and 230 poster presentations.

We believe that the contributions contained in these proceedings will serve as a reference and motivate new ideas, developments and collaborations in the field of computer aided process engineering and beyond.

Yours sincerely,
Zdravko Kravanja
Conference Chair

Miloš Bogataj
Conference Secretary

International Scientific Committee

Conference Chairman
Zdravko Kravanja, University of Maribor, Slovenia

Themes Coordinators

Process-product Synthesis, Design and Integration: Mariano Martin, University of Salamanca, Spain
Modelling, Numerical Analysis, Simulation and Optimization: Stratos Pistikopoulos, Texas A&M University, USA
Process Operations and Control: Sebastian Engell, TU Dortmund University, Germany
Green Bioprocess Engineering and Advances in Biomedical Engineering: David Bogle, University College London, UK
CAPE/PSE in Environmental Engineering: André Bardow, RWTH Aachen University, Germany
CAPE/PSE in Sustainable Energy Applications: Petar Varbanov, Pázmány Péter Catholic University, Hungary
CAPE Applications: Flavio Manenti, Politecnico di Milano, Italy
Education in CAPE/PSE: Antonio Espuña, Universitat Politecnica de Catalunya, Spain

Members
Mariano Martin, University of Salamanca, Spain
Thomas Adams, McMaster University, Canada
Mario Eden, Auburn University, USA
Fengqi You, Northwestern University, USA
Michael Fairweather, University of Leeds, UK
Antonis Kokossis, National Technical University of Athens, Greece
Rafiqul Gani, Technical University of Denmark, Denmark
Andrzej Kraslawski, Lappeenranta University of Technology, Finland
Jose Caballero, University of Alicante, Spain
Gonzalo Guillén-Gosalbez, The University of Manchester, UK
Stratos Pistikopoulos, Texas A&M University, USA
Marianthi Ierapetritou, Rutgers University, USA
Andreja Nemet, University of Maribor, Slovenia


Ruth Misener, Imperial College London, UK
Alexander Mitsos, RWTH Aachen University, Germany
Michael Georgiadis, Aristotle University of Thessaloniki, Greece
Lazaros Papageorgiou, University College London, UK
Vivek Dua, University College London, UK
Chrysanthos Gounaris, Carnegie Mellon University, USA
Pei Liu, Tsinghua University, China
Benoit Chachuat, Imperial College London, UK
Carl Laird, Purdue University, USA
Nikolaos Sahinidis, Carnegie Mellon University, USA
Hermann Feise, BASF, Germany
Davide Manca, Politecnico di Milano, Italy
Moisès Graells, Universitat Politècnica de Catalunya, Spain
Sebastian Engell, TU Dortmund University, Germany
Ana Barbosa-Póvoa, Técnico Lisboa, Portugal
Luis Puigjaner, Universitat Politecnica de Catalunya, Spain
Gintaras Reklaitis, Purdue University, USA
Sigurd Skogestad, Norwegian University of Science and Technology, Norway
David Bogle, University College London, UK
Paul Agachi, Kazakh British Technical University, Kazakhstan
Andreas Linninger, University of Illinois at Chicago, USA
Ioannis Androulakis, Rutgers University, USA
Costas Kiparissides, Aristotle University of Thessaloniki, Greece
Francois Marechal, École polytechnique fédérale de Lausanne, Switzerland
Tony Kiss, AkzoNobel, The Netherlands
Filip Logist, University of Leuven, Belgium
Michael Narodoslawsky, Graz University of Technology, Austria
André Bardow, RWTH Aachen University, Germany
Jiří Klemeš, Pázmány Péter Catholic University, Hungary
Peter Mizsey, Budapest University of Technology and Economics, Hungary
Fabrizio Bezzo, University of Padova, Italy
Ferenc Friedler, University of Pannonia, Hungary
Niall Mac Dowell, Imperial College London, UK
Petar Varbanov, Pázmány Péter Catholic University, Hungary
Hon Loong Lam, The University of Nottingham Malaysia Campus, Malaysia
Zainuddin A. Manan, University of Technology, Malaysia
Sharifah Rafidah Wan Alwi, University of Technology, Malaysia
Raymond Girard R. Tan, De la Salle University, Philippines
Petro Kapustenko, Kharkiv Polytechnic Institute, Ukraine
Igor Bulatov, The University of Manchester, UK
Panos Seferlis, Aristotle University of Thessaloniki, Greece
Sakis Papadopoulos, Centre for Research and Technology Hellas, Greece
Tomislav Novosel, University of Zagreb, Croatia
Franjo Cecelja, University of Surrey, UK
Hella Tokos, University of Surrey, UK


Niyi Isafiade, University of Cape Town, South Africa
Michael Walmsley, The University of Waikato, New Zealand
Flavio Manenti, Politecnico di Milano, Italy
Tilman Barz, Austrian Institute of Technology, Austria
Mattia Vallerio, University of Leuven, Belgium
Guido Buzzi-Ferraris, Politecnico di Milano, Italy
Massimiliano Barolo, The University of Padova, Italy
Soledad Diaz, Planta Piloto de Ingeneria Quimica, Argentina
Christos Maravelias, University of Wisconsin, USA
Alessio Frassoldati, Politecnico di Milano, Italy
Chi Wai Hui, The Hong Kong University of Science and Technology, China
Rubens Maciel Filho, University of Campinas, Brazil
Antonio Espuña, Universitat Politecnica de Catalunya, Spain
Il Moon, Yonsei University, South Korea
Henrique Matos, Técnico Lisboa, Portugal
Zorka Novak Pintarič, University of Maribor, Slovenia
Emilia Kondili, Technological Education Institute of Piraeus, Greece
Valentin Plesu, Polytechnic University of Bucharest, Romania
Iqbal Mujtaba, University of Bradford, UK
Elvis Ahmetović, University of Tuzla, Bosnia and Herzegovina
Dimitrios Gerogiorgis, The University of Edinburgh, UK
Lidija Čuček, University of Maribor, Slovenia
Alexandra Elena Bonet-Ruiz, Polytechnic University of Bucharest, Romania
Jordi Bonet-Ruiz, University of Barcelona, Spain
Ignacio Grossmann, Carnegie Mellon University, USA
Iiro Harjunkoski, ABB Corporate Research, Germany
Lorenz Biegler, Carnegie Mellon University, USA
Pedro Castro, University of Lisbon, Portugal
Carlos Mendez, National University of Litoral, Argentina
Cesar de Prada, University of Valladolid, Spain
Ton Backx, Eindhoven University of Technology, Netherlands
Claudio Scali, University of Pisa, Italy
Rajagopalan Srinivasan, Indian Institute of Technology Gandhinagar, India
Nilay Shah, Imperial College London, UK
Peter Singstad, Cybernetica AS, Norway
Stefan Kraemer, TU Dortmund University, Germany
Igor Plazl, University of Ljubljana, Slovenia
Xavier Joulia, ENSIACET, France
Jean-Marc Le Lann, ENSIACET, France
Jan Thullie, Silesian University of Technology, Poland
Luis Cisternas, University of Antofagasta, Chile
Sauro Pierucci, Politecnico di Milano, Italy
Vladimir Mahalec, McMaster University, Canada


Local Organising Committee

Chairman

Zdravko Kravanja, University of Maribor, Slovenia

Conference Secretary

Miloš Bogataj, University of Maribor, Slovenia

Members

Igor Plazl, University of Ljubljana, Slovenia
Neven Duić, University of Zagreb, Croatia
Zorka Novak Pintarič, University of Maribor, Slovenia
Mojca Slemnik, University of Maribor, Slovenia
Andreja Nemet, University of Maribor, Slovenia
Lidija Čuček, University of Maribor, Slovenia

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50005-9

Modelling and iterative Real-time Optimization of a homogeneously catalyzed hydroformylation process

Reinaldo Hernández and Sebastian Engell

Chair of Process Dynamics and Operations, Technische Universität Dortmund, Emil-Figge-Strasse 70, Dortmund 44227, Germany
{reinaldo.hernandez, sebastian.engell}@bci.tu-dortmund.de

Abstract

In this contribution, the real-time optimization (RTO) of a novel multiphase homogeneously catalyzed process is investigated. Specifically, the conversion of long-chain olefins is studied for the example of the hydroformylation of 1-dodecene. A first-principles model was built and validated with experimental data available from a miniplant. An iterative optimization scheme is proposed in order to ensure convergence to the plant optimum in the presence of model uncertainties. Local quadratic approximation combined with derivative-free optimization is used for the estimation of the plant gradients in the presence of noisy measurements. The simulation results show significant improvements in the economic performance of the process despite model uncertainties concerning the gas solubility and the amount of active catalyst in solution.

Keywords: real-time optimization, homogeneous catalysis, modifier adaptation.

1. Introduction

The high selectivity and activity at mild conditions that can be achieved by means of homogeneous catalysis have motivated significant research, as well as the development of important industrial applications, of which the hydroformylation of propene in the Ruhrchemie/Rhone-Poulenc (RCH-RP) process is the best-known example (Frey, 2014). The processing of long-chain olefins is particularly challenging due to their low solubility in the polar catalyst phase. Novel process concepts have been developed in recent years to overcome these limitations, including the use of complex solvent systems which are able to perform the reaction homogeneously at high temperatures, while at low temperature the mixture separates into two phases and efficient catalyst recovery is possible; such a system is called a Thermomorphic Multicomponent Solvent (TMS) system (Brunsch and Behr, 2013). For the economic success of this technology, minimizing catalyst leaching while achieving high conversion and selectivity of the process is crucial. Process optimization can be applied based on kinetic models and thermodynamic descriptions of the phase equilibrium (Hentschel et al., 2015; Schäfer et al., 2012).

In this contribution, economically optimal operation of a hydroformylation process is addressed by means of Real-time Optimization (RTO). In this approach the problem is decomposed hierarchically: an upper layer uses a rigorous static model to determine the optimal set-points, which are tracked by a lower layer operating at a higher sampling frequency (Darby et al., 2015). However, due to model inaccuracies, this alone will not lead to an optimal operation of the real plant. We propose the use of an iterative Real-time Optimization scheme (Gao and Engell, 2005) to optimize the dosing of the catalyst and the reaction temperature in the presence of model-plant mismatch. The algorithm employed here makes use of a local quadratic approximation for the estimation of the plant gradients, which leads to reduced sensitivity to measurement noise (Gao et al., 2015). In addition, a trust-region framework is applied to avoid oscillations.

This contribution is structured as follows: Section 2 describes the integrated process for the hydroformylation of 1-dodecene in a Thermomorphic Multicomponent Solvent (TMS) system, and in Section 3 the model is presented. Section 4 gives a general overview of the proposed RTO scheme. Finally, in Sections 5 and 6 the results and the main conclusions are presented.

Acknowledgment: This work was supported as part of the Collaborative Research Center "Integrated Chemical Processes in Liquid Multiphase Systems" (SFB/Transregio 63 InPROMPT) by the Deutsche Forschungsgemeinschaft (DFG).

Figure 1: Left: Thermomorphic Multicomponent Solvent (TMS) system. Right: Reaction network of the hydroformylation of 1-dodecene.

2. Process Description

Figure 1 shows the diagram of a TMS system. As stated in the introduction, the idea is to use a predefined mixture of a nonpolar solvent (solvent 1) and a polar solvent (solvent 2) such that at high temperatures the system is homogeneous; therefore the mass-transfer limitations of the reactant to the catalyst phase are avoided. At low temperatures, phase separation takes place, and the expensive catalyst in the polar phase can be recovered and recirculated to the reactor, while the product in the nonpolar phase goes to further purification steps (Brunsch and Behr, 2013). The TMS concept has already been applied to different homogeneously catalyzed reactions, including the hydroformylation of long-chain alkenes. The process concept has been proven in a continuously operated miniplant at TU Dortmund (Zagajewski et al., 2015). Besides the main reaction, the production of the linear aldehyde (tridecanal), isomerization to iso-dodecene, hydrogenation to dodecane and formation of the branched aldehyde take place, as shown in the reaction network (Figure 1). High selectivity at mild conditions (90 °C and a syngas pressure of 20 bar) has been achieved by using a catalyst-ligand complex consisting of the precatalyst Rh(acac)(CO)2 and the ligand biphephos. As solvent system, a mixture of the polar solvent dimethylformamide (DMF) and the nonpolar solvent decane was used. The cost of rhodium and ligand makes catalyst recovery crucial for the success of the process concept.

3. Model Description

A first-principles model for the integrated operation of the reactor and decanter is presented in this section.

3.1. Reactor model

The reactor is modelled as an ideal CSTR, and two-film theory is used for the description of the mass transfer between the gas and the liquid phases. According to the material balance, the concentration of the different liquid components $C_i$ ($i$ = 1-dodecene, tridecanal, dodecane, etc.) is given by equation (1), where $M_{cat}$ is the mass of active catalyst in the reactor and $V_R$ is the reactor volume. The inflow and outflow volumetric flow rates are given by $\dot{V}_{in}$ and $\dot{V}_{out}$, $\nu_{i,l}$ are the coefficients of the stoichiometric matrix, and $r_l$ is the rate of reaction $l$. The interested reader is referred to the literature for the kinetic model (Hentschel et al., 2015).

$$V_R \frac{dC_i}{dt} = \dot{V}_{in}\,C_{i,in} - \dot{V}_{out}\,C_{i,out} + M_{cat} \sum_{l=1}^{N_{react}} \nu_{i,l}\, r_l \qquad (1)$$

For the $j$ gas components ($j$ = CO and H$_2$), the concentration in the liquid phase depends on the mass transfer coefficient $k_{eff}$ and the equilibrium concentration at the interface $C_j^{eq}$:

$$\frac{dC_j}{dt} = -k_{eff}\,(C_j - C_j^{eq}) + \dot{V}_{in}\,C_{j,in} - \dot{V}_{out}\,C_{j,out} + M_{cat} \sum_{l=1}^{N_{react}} \nu_{j,l}\, r_l. \qquad (2)$$

The gas-liquid equilibrium is modelled by means of Henry's law (3), which relates the liquid concentrations to the partial pressures $P_j$. The dependence of the Henry coefficient $H_j$ on the reaction temperature $T$ is modelled by an Arrhenius expression (4).

$$P_j = C_j^{eq}\, H_j \qquad (3)$$

$$H_j = H_{j,0}\, \exp\!\left(\frac{-E_j}{RT}\right) \qquad (4)$$

As stated in equation (1), the reaction rate is proportional to the mass of active catalyst and therefore proportional to its concentration $C_{cat}$. At high carbon monoxide concentrations, catalyst deactivation takes place by formation of inactive Rh dimers. This phenomenon has been approximately quantified according to equation (5), where the active catalyst concentration is expressed as a function of the concentration of catalyst precursor $C_{Rh,precursor}$ and the equilibrium constants $K_{cat,1}$ and $K_{cat,2}$:

$$C_{cat} = \frac{C_{Rh,precursor}}{1 + K_{cat,1}\,C_{CO} + K_{cat,2}\,C_{CO}/C_{H_2}}. \qquad (5)$$

3.2. Decanter model

LLE is assumed between the phases in the decanter. Simple expressions of the equilibrium constant $K_i$ as a function of the decanter temperature $T_{decanter}$ are used:

$$K_i = \exp\!\left(A_{i,0} + \frac{A_{i,1}}{T_{decanter}} + A_{i,2}\,T_{decanter}\right), \qquad (6)$$

where $A_{i,0}$, $A_{i,1}$ and $A_{i,2}$ were obtained by regression of experimental data (Schäfer et al., 2012). The split factor $\zeta_i$ and the molar flows of the components in the product stream ($n_{i,product}$) and the catalyst recycle ($n_{i,catalyst}$) as a function of the inlet flow to the decanter ($n_{i,decanter}$) are defined according to (7)-(9):

$$\zeta_i = \frac{K_i}{1 + K_i}, \qquad (7)$$

$$n_{i,product} = \zeta_i\, n_{i,decanter}, \qquad (8)$$

$$n_{i,catalyst} = (1 - \zeta_i)\, n_{i,decanter}. \qquad (9)$$

Based on experimental data, expressions similar to (7)-(9) are used for the description of the catalyst leaching.
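To make the model structure concrete, the following is a minimal Python sketch of equations (1)-(9). The function and parameter names are ours, and all numerical inputs (kinetics, Henry constants, LLE coefficients) would have to come from the fitted values cited above, which are not reproduced here.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def henry_coefficient(T, H0, E):
    # Eq. (4): Arrhenius-type temperature dependence of the Henry coefficient
    return H0 * np.exp(-E / (R * T))

def equilibrium_concentration(P, H):
    # Eq. (3): Henry's law, C_eq = P / H
    return P / H

def active_catalyst(c_rh_precursor, c_co, c_h2, K1, K2):
    # Eq. (5): active catalyst remaining after Rh-dimer formation at high CO
    return c_rh_precursor / (1.0 + K1 * c_co + K2 * c_co / c_h2)

def liquid_balance(c, c_in, Vdot_in, Vdot_out, V_R, M_cat, nu, r):
    # Eq. (1): dC_i/dt of the liquid components in the ideal CSTR;
    # nu is the stoichiometric matrix (species x reactions), r the rates r_l
    return (Vdot_in * c_in - Vdot_out * c + M_cat * (nu @ r)) / V_R

def gas_balance(c, c_eq, c_in, Vdot_in, Vdot_out, k_eff, M_cat, nu, r):
    # Eq. (2): dC_j/dt of dissolved CO and H2 with two-film mass transfer
    return -k_eff * (c - c_eq) + Vdot_in * c_in - Vdot_out * c + M_cat * (nu @ r)

def decanter_split(n_decanter, A0, A1, A2, T_decanter):
    # Eqs. (6)-(9): LLE split of the decanter feed into product and recycle
    K = np.exp(A0 + A1 / T_decanter + A2 * T_decanter)  # Eq. (6)
    zeta = K / (1.0 + K)                                # Eq. (7)
    return zeta * n_decanter, (1.0 - zeta) * n_decanter # Eqs. (8)-(9)
```

Stacking liquid_balance and gas_balance yields a right-hand side that a stiff ODE integrator (e.g. scipy.integrate.solve_ivp) can drive to the steady state used by the RTO layer.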


4. Iterative Real-time Optimization

4.1. Problem formulation

In RTO, the solution of the static optimization problem (10) is addressed:

$$\min_u\ \phi(u) \qquad (10a)$$

$$\text{s.t.}\ G(u) \le 0, \qquad (10b)$$

where $u$ stands for the set of inputs/degrees of freedom which minimize the cost function $\phi(u)$, and $G(u)$ is the vector of constraint functions, which includes the model equations and the process constraints that must be satisfied. Due to modelling inaccuracies and disturbances, the values of the cost function and constraints for the actual process ($\phi_p(u)$ and $G_p(u)$, respectively) will differ from those of the model. As a consequence, the solution of problem (10) can lead to a suboptimal or infeasible operating point. Iterative optimization addresses the model-plant mismatch by a modification of problem (10) with correction terms that represent the actual values of the objective function and constraints as well as their gradients. In each iteration $k$, problem (11) is solved and a new input $u_{k+1}$ is computed (Gao and Engell, 2005):

$$\min_{u_{k+1}}\ \phi(u_{k+1}) + \left[\phi_p(u_k) - \phi(u_k)\right] + \left[\nabla\phi_p(u_k) - \nabla\phi(u_k)\right]^T (u_{k+1} - u_k) \qquad (11a)$$

$$\text{s.t.}\ G(u_{k+1}) + \left[G_p(u_k) - G(u_k)\right] + \left[\nabla G_p(u_k) - \nabla G(u_k)\right]^T (u_{k+1} - u_k) \le 0. \qquad (11b)$$

The algorithm ensures convergence to the optimal point under model adequacy conditions (Chachuat et al., 2009).

4.2. Estimation of the gradients and quadratic approximation

The most important issue in the application of iterative optimization is the estimation of the gradients under noisy conditions. Different approaches can be found in the literature, including, among others, nested modifier adaptation (Navia et al., 2013) and dual modifier adaptation (Marchetti, 2015). In this work, local quadratic approximation of the plant, as proposed by Gao et al. (2015), in combination with a trust-region framework (Biegler et al., 2015), is applied. The quadratic approximation can be considered as a reduced model (RM) of the true plant, while the original detailed model (ODM) plays the role of the plant itself. Given the number of inputs $n_u$, the idea of quadratic approximation is to construct a local quadratic model of the cost function and the constraints of the plant based on at least $n_r = (n_u + 1)(n_u + 2)/2 - 1$ regression points $U = \{u_k, u_{k-1}, \ldots, u_{k-n_r+1}\}$. The problem is reduced to finding the values of the set of parameters $\theta := \{a_{i,j}, \ldots, b_i, \ldots, c\}$ which minimize the squared estimation errors, according to problem (12):

$$\min_\theta\ \sum_{i=1}^{n_r} \left[\phi_p(u_{k-i+1}) - \phi_q(u_{k-i+1}, \theta)\right]^2 \qquad (12a)$$

$$\text{s.t.}\ \phi_q(u, \theta) = \sum_{i=1}^{n_u} \sum_{j=1}^{n_u} a_{i,j}\, u_i u_j + \sum_{i=1}^{n_u} b_i\, u_i + c. \qquad (12b)$$

After problem (12) has been solved, the gradients are estimated using the quadratic approximation of the objective function ($\phi_q \approx \phi_p$) and the optimization problem (11) is solved. An additional constraint is introduced into problem (11) which bounds the inputs such that they stay within the trust region. The trust region is updated in each step and the algorithm is repeated until convergence.
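To illustrate how (11) and (12) fit together, here is a minimal Python sketch of one iteration for the cost modifiers only; the constraint modifiers in (11b) are handled analogously. The function names are ours, phi_model, grad_model and the history of plant cost evaluations are assumed to be supplied by the caller, and the trust region is simplified to a box around the current input.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(u):
    # Regressors of the quadratic surrogate in Eq. (12b): cross terms u_i*u_j,
    # linear terms and a constant -> (nu+1)(nu+2)/2 parameters in total
    u = np.asarray(u, dtype=float)
    cross = np.outer(u, u)[np.triu_indices(u.size)]
    return np.concatenate([cross, u, [1.0]])

def fit_quadratic(U, phi_plant):
    # Eq. (12a): least-squares fit of the surrogate to past plant cost values
    X = np.vstack([quad_features(u) for u in U])
    theta, *_ = np.linalg.lstsq(X, np.asarray(phi_plant), rcond=None)
    return theta

def surrogate_gradient(theta, u, eps=1e-6):
    # Central differences on the smooth fitted surrogate; this is the plant
    # gradient estimate (the regression filters out the measurement noise)
    f = lambda v: quad_features(v) @ theta
    return np.array([(f(u + eps * e) - f(u - eps * e)) / (2 * eps)
                     for e in np.eye(len(u))])

def modifier_adaptation_step(phi_model, grad_model, U, phi_plant,
                             u_k, radius, bounds):
    # One iteration of Eq. (11a): bias and gradient modifiers computed from
    # the surrogate, minimized inside a box trust region around u_k
    u_k = np.asarray(u_k, dtype=float)
    theta = fit_quadratic(U, phi_plant)
    eps0 = quad_features(u_k) @ theta - phi_model(u_k)
    lam = surrogate_gradient(theta, u_k) - grad_model(u_k)
    corrected = lambda u: phi_model(u) + eps0 + lam @ (u - u_k)
    trust = [(max(lo, x - radius), min(hi, x + radius))
             for x, (lo, hi) in zip(u_k, bounds)]
    return minimize(corrected, u_k, bounds=trust).x
```

In a full implementation the trust-region radius would be expanded or shrunk depending on how well the corrected model predicted the actual cost improvement, as in Biegler et al. (2015).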


Figure 2: Model validation for the hydroformylation of 1-dodecene in the TMS system DMF-decane. T = 90 °C, Psyngas = 20 bar, Ligand/Rh = 5/1, catalyst dosing = 2 ppm, Tdecanter = 5 °C.

5. Results

5.1. Model validation and nominal optimal operation

In order to validate the model, different experiments were performed in the miniplant under carefully controlled conditions. Figure 2 compares the experimental values of the product composition and the main reaction parameters to the model predictions; there is good agreement with the experiments. The optimization problem (13) is formulated as the minimization of the raw-material and operating cost per unit of n-tridecanal produced. Given the price of the raw material $Pr_{1-dodecene}$, the catalyst price $Pr_{Rh}$ and the operating costs for cooling and heating ($C_{cooling}$, $C_{heating}$), the cost can be computed from the reconciled data for the flows $F$ of 1-dodecene and tridecanal:

$$\min_u\ \frac{Pr_{1-dodecene}\, F_{1-dodecene} + Pr_{Rh}\, F_{Rh} + C_{cooling} + C_{heating}}{F_{tridecanal}} \qquad (13a)$$

$$\text{s.t.}\ G(u) \le 0. \qquad (13b)$$

The inequality G(u) represents the model equations and the bounds on the inputs. The degrees of freedom of the problem are the decanter temperature ($T_{decanter}$), the catalyst dosage (expressed as ppm in the alkene feed), the total pressure ($P$) and the reactor temperature ($T$). The computed optimal operating conditions were T = 89 °C, a catalyst dosing of 0.8 ppm, Tdecanter = 5 °C at its lower bound and P = 30 bar at its upper bound. A sensitivity analysis was performed on the different model parameters; the gas solubility and the equilibrium constants for the catalyst species were found to have the largest influence on the cost function.

5.2. Handling mismatch: iterative optimization

In order to test the iterative optimization algorithm, different simulation studies were carried out. A mismatch of 30% in the Henry coefficient (3) is considered, in addition to a mismatch in the equilibrium equation ($K_{cat,2} = 0$) for the active catalyst species (5). The model (1)-(9) is considered as the "real plant", while the model with the mismatch is iteratively optimized. Figure 3 shows the cost surfaces of the "real plant" and of the model. It can be seen that the optimal point predicted by the model differs from the actual optimal point (temperature 89 °C vs. 95 °C, catalyst 0.8 ppm vs. 0.65 ppm). Several runs of the iterative optimization algorithm were performed, in noise-free scenarios as well as with measurement noise. The plant gradients were calculated based on the quadratic approximation (12). Despite the model-plant mismatch, the algorithm generates a series of inputs that converge to the actual optimum of the plant (Figure 4). In the optimal solution, the reaction severity is adjusted by increasing the temperature while the fresh catalyst dosing is reduced, giving as a consequence an improved economic performance of the process.
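As a quick illustration of how the cost criterion (13a) is evaluated from reconciled flow data, a minimal sketch follows; the argument names are ours, and the prices and flows are placeholders rather than plant data.

```python
def production_cost(F_dodecene, F_rh, F_tridecanal,
                    pr_dodecene, pr_rh, c_cooling, c_heating):
    # Eq. (13a): raw material + catalyst + utility cost per unit of
    # n-tridecanal produced, computed from reconciled flow measurements
    return (pr_dodecene * F_dodecene + pr_rh * F_rh
            + c_cooling + c_heating) / F_tridecanal
```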

Figure 3: Cost surface of the plant and of the model.

Figure 4: Left: Set-point trajectories to the actual optimal point. Right: Cost function over the iterations.

6. Conclusions

In this work, the model-based optimization of the hydroformylation of 1-dodecene in a Thermomorphic Multicomponent Solvent (TMS) system was discussed. A Real-time Optimization (RTO) scheme was proposed in order to handle significant model uncertainties. The results are very promising: despite model-plant mismatch concerning the gas solubility and the amount of active catalyst, it was possible to converge to the optimal point of the plant.

References

L. Biegler, Y.-D. Lang, W. Lin, 2015. Multi-scale optimization for process systems engineering. Computers and Chemical Engineering 60, 17–30.
Y. Brunsch, A. Behr, 2013. Temperature-controlled catalyst recycling in homogeneous transition-metal catalysis: minimization of catalyst leaching. Angew. Chem. Int. Ed. 52, 1586–1589.
B. Chachuat, B. Srinivasan, D. Bonvin, 2009. Adaptation strategies for real-time optimization. Computers and Chemical Engineering 33, 1557–1567.
M. Darby, M. Nikolaou, J. Jones, D. Nicholson, 2015. RTO: An overview and assessment of current practice. Journal of Process Control 21, 874–884.
G. Frey, 2014. 75 years of oxo synthesis - the success story of a discovery at the OXEA site Ruhrchemie. Journal of Organometallic Chemistry 754, 5–7.
W. Gao, S. Engell, 2005. Iterative set-point optimization of batch chromatography. Computers and Chemical Engineering 29, 1401–1409.
W. Gao, S. Wenzel, S. Engell, 2015. Modifier adaptation with quadratic approximation in iterative optimizing control. Proceedings of the 2015 European Control Conference (ECC), 2533–2537.
B. Hentschel, G. Kiedorf, C. Hamel, A. Seidel-Morgenstern, H. Freund, K. Sundmacher, 2015. Model-based identification and experimental validation of the optimal reaction route for hydroformylation of 1-dodecene. Industrial and Engineering Chemistry Research 54, 1755–1765.
A. Marchetti, 2015. A new dual modifier-adaptation approach for iterative process optimization with inaccurate models. Computers and Chemical Engineering 59, 89–100.
D. Navia, G. Gutierrez, C. De Prada, 2013. Nested modifier adaptation for RTO in the Otto Williams reactor. International Symposium on Dynamics and Control of Process Systems, 123–128.
E. Schäfer, Y. Brunsch, G. Sadowski, A. Behr, 2012. Hydroformylation of 1-dodecene in the thermomorphic solvent system dimethylformamide/decane: phase behaviour, reaction performance, catalyst recycling. Ind. Eng. Chem. Res. 51, 10298–10306.
M. Zagajewski, A. Behr, P. Sasse, J. Wittman, 2015. Continuously operated miniplant for the rhodium catalyzed hydroformylation of 1-dodecene in a thermomorphic multicomponent solvent system (TMS). Chemical Engineering Science 115, 88–94.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50006-0

Integrated management of hierarchical levels: towards a CAPE tool

Canan Dombayci (a), Sergio Medina (a), Moisès Graells (b), Antonio Espuña (a,*)

(a) Chemical Engineering Department, ETSEIB, Universitat Politecnica de Catalunya, 647 Diagonal Avenue, Barcelona 08028, Spain
(b) Chemical Engineering Department, EUETIB, Universitat Politecnica de Catalunya, Comte d'Urgell 187, Barcelona 08028, Spain
(*) [email protected]

Abstract

The integration of decision-making procedures usually assigned to different hierarchical production levels requires the use of complex mathematical models and high computational effort, in addition to extensive management of data and knowledge within the production systems. This work addresses this integration problem and proposes a comprehensive solution approach, as well as guidelines for Computer Aided Process Engineering (CAPE) tools managing the corresponding cyberinfrastructure. The study presents a methodology based on a domain ontology which is used as the connector between the introduced data, the different available formulations developed to solve the decision-making problem, and the information necessary to build the finally required problem instance. The methodology has demonstrated its capability to help exploit different available decision-making problem formulations in complex cases, leading to new applications and/or extensions of these formulations in a robust and flexible way.

Keywords: Scheduling, Batch Control, Mathematical Modelling, Knowledge and Information Management, Ontologies.

1. Introduction

Much research has addressed the integrated management of production systems. However, it is generally accepted that academia and industry have not reached a common ground for production systems integration, since the mapping between real, complex systems and the case studies is hard to accomplish. On the other hand, a considerable number of investigations have recently started to establish a link between the hierarchical levels (Vegetti and Henning, 2015; Muñoz et al., 2015). An essential element in linking planning and scheduling activities is provided by the ANSI/ISA standards (ISA88, ISA95), which deal with the procedural, physical, and process models and with data exchange in hierarchical systems. Supporting systems built on these standards, such as BatchML (BatchML, 2015), are also available. An extensive study of the general problem of vertical and hierarchical integration is required; therefore, there is a need to solve this integration problem using advanced tools for supporting the decisions to be made. The vertical and horizontal integration strategies require extensive formulations. In recent research, three main integration strategies are reported for solving the problem: (i) hierarchical, (ii) iterative, and (iii) full-space methods. All of them require computationally effective formulations, communication between the models and


complementary strategies (Maravelias and Sung, 2009). Recently, Muñoz et al. (2015) used a full-space method to solve the integration problem, with an ontological approach for the communication between master and slave formulations in a Lagrangian decomposition approach. Also, Fedorova et al. (2015) used generic model templates constructed with an ontology, providing several layers in order to create models from existing templates, in a domain different from that of this work. In addition to these integration techniques and generic approaches, this work introduces a further step towards a new solution to the integration of the planning and scheduling levels, since the decision-support strategy for the integrated management system still remains unclear. This paper addresses the integration problem using ontologies and proposes a general solution strategy for making decisions in integrated management systems. The starting point is an ontology for planning and scheduling in batch production systems: a domain ontology derived from the recognized Batch Control Standard ISA88 (Dombayci et al., 2015) is used and enhanced with normative rules. These rules, expressed in description logics, are added to the ontology in order to maintain the connection between the flows of data from the interface to the formulations. At this stage, the main idea of the solution approach is to use the same mathematical programming model and to build new models from this structure, supported by the ontology. The details of the developed procedure are explained in the methodology section using separate modules. Additionally, the findings of this study have a number of crucial implications for future practice in the decision-making procedures of the integration of hierarchical levels in batch production systems.

2. Methodology

The proposed methodology organizes the applied methods systematically by categorizing them into modules. An overview of this procedure and its sections is shown in Figure 1 and explained in Sections 2.1, 2.2, 2.3, and 2.4. The methodology is implemented using markup languages such as OWL (web ontology language) and XML (extensible markup language). Additionally, specific tools are used in the implementation of the methodology: (i) a user interface is created using the Matlab GUI (graphical user interface), (ii) structures to build XML files are constructed using Matlab, (iii) the connection between XML and OWL is achieved with a Java-based ontology management tool, and (iv) the mathematical formulations are implemented in GAMS.

Figure 1. Methodology overview:
- Module 1 (Data flow): interface for users outside the computational environment (Module 1.1); structures can be combined (Module 1.2); mapping from one XML file to Module 2, or mapping from a database (Module 1.3).
- Module 2 (XML files): construction of XML files according to the OWL file; this module should be constructed in parallel with Modules 1 and 3 in order to match the data; each element in the XML files only contains concepts, instances, object properties and data properties.
- Module 3 (OWL file): loading the ontology with the data from Module 2; this module contains the OWL file with normative rules; it has different cases for each Module 4 element, representing each formulation.
- Module 4 (Formulations): the elements in Module 4 contain the formulations; the main future direction for this module is to build an environment able to manage different formulations. Module 4.1: short-term planning from Kondili et al. (1993); Module 4.2: planning module; Module 4.3: integrated formulation from Modules 4.1 and 4.2.


2.1. Module 1: Data flow

This module is constructed in order to maintain the data flows from/to the different system interfaces. It collects the data based on the ISA88 standard and is developed to sustain the functionality of the work, considering compatibility with the other modules. Additionally, this module is where the decision-making problem is selected. The current implementation is done with a Matlab GUI and a drawing tool for the STN (state task network): an XML representation of the STN is created from the drawing tool yEd, which exports the network as a txt file (Figure 2). Possible implementations of Module 1 are:
- an interface for users to build physical, procedural, and process models and/or to load them from already created XML files,
- structure creation without using the interface, for module developers, and
- mapping from previously structured elements of other XML-based data, such as models, parameters, and data from databases.

2.2. Module 2: XML file(s)

Module 2 contains the XML file(s) that maintain communication between Module 1 and Module 3. Each node of an XML file contains concept names, instance names, object properties, and data properties to be loaded into the OWL file in Module 3. This module is significant for connecting separately structured data and needs to be developed in coordination with the others. The XML file(s) are loaded into the OWL file using a Java-based instance loader manager in Module 3. The template of the XML file is illustrated in Figure 3.

Figure 2. STN of the case study (states: FeedA, FeedB, FeedC, HotA, IntAB, IntBC, ImpureE, Product1, Product2; tasks: Heating, Reaction1, Reaction2, Reaction3, Separation)

Figure 3. XML template: each node carries a concept name, an instance name, object properties (e.g. hasObjectProperty -> OtherInstance) and data properties (e.g. dataProperty = someValue)
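As an illustration of how a Module 2 node following the Figure 3 template could be generated, here is a minimal Python sketch; the tag and property names (name, hasInput, maxBatchSize) are illustrative stand-ins, not the tool's actual schema.

```python
import xml.etree.ElementTree as ET

def make_instance_node(concept, instance, object_props=None, data_props=None):
    # One node of the Figure 3 template: a concept name, an instance name,
    # and the object/data properties to be loaded into the OWL file
    node = ET.Element(concept, name=instance)
    for prop, target in (object_props or {}).items():
        ET.SubElement(node, prop).text = target      # links to other instances
    for prop, value in (data_props or {}).items():
        ET.SubElement(node, prop).text = str(value)  # literal data values
    return node

# Example: one unit procedure of the STN in Figure 2
node = make_instance_node(
    "UnitProcedure", "Heating",
    object_props={"hasInput": "FeedA", "hasOutput": "HotA"},
    data_props={"maxBatchSize": 100},
)
print(ET.tostring(node, encoding="unicode"))
```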

2.3. Module 3: OWL file

Module 3 contains the OWL file(s), which carry normative rules modelled according to each formulation in Module 4. This module receives the organized instances from Module 2 and loads them into the statically structured OWL file. The static structure means that the concepts and their properties remain the same, while the instances and their properties in the instance layer are changed by the structured data from the XML file. A substantial part of this module focuses on data management to support the decision-making procedure. For instance, while the connections between the process cell and the units are not required to solve a single scheduling problem, they must be introduced to solve a multi-process-cell scheduling problem and its planning procedure. While a process input refers to material that is stored or used in a planning formulation, it has a different connection to each unit procedure (or unit) in a scheduling formulation. These connections are maintained by the normative rules, leading to a robust connection between the structured data in the ontology and the input structure of the formulations. Table 1 gives the data introduced through the user interface. The data connection between unit procedures and process inputs (STN) is introduced using a


drawing tool. After this, Module 3 concludes with the information in Table 2, which shows examples of the interaction between the data for each formulation using the normative rules.

Table 1. Introduced data

Description           | Entered values
Process Input         | A, B, C
Process Output        | P
Intermediate Material | AB
Unit Procedure        | UP1, UP2

Table 2. Examples of normative rules and conclusions

Normative rule in the developed ontology | Printed data                | Meaning in the formulation
Instances of ProcessInput concept        | A, B, C                     | Raw material set in planning
Instances of UnitProcedure.hasInput      | UP1.A, UP1.B, UP2.C, UP2.AB | Input materials of unit procedures, used to create the mass balance in the scheduling formulation
Instances of UnitProcedure.hasOutput     | UP1.AB, UP2.P               | Output materials for the mass balance
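To illustrate how the conclusions in Table 2 could be read back from the populated ontology, here is a hedged sketch using the owlready2 library; the ontology file name is a placeholder, and the class and property names simply mirror Table 2 rather than the tool's actual vocabulary.

```python
from owlready2 import get_ontology

# Load the populated domain ontology (file name is a placeholder)
onto = get_ontology("file://isa88_domain.owl").load()

# Table 2, row 1: instances of the ProcessInput concept give the
# raw-material set for the planning formulation
raw_materials = [inst.name for inst in onto.ProcessInput.instances()]
print("Raw materials:", raw_materials)  # expected: A, B, C

# Table 2, rows 2-3: hasInput / hasOutput per unit procedure provide the
# mass-balance connectivity for the scheduling formulation
for up in onto.UnitProcedure.instances():
    print(up.name, "inputs:", up.hasInput, "outputs:", up.hasOutput)
```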

2.4. Module 4: Formulations

A short-term scheduling formulation (Kondili et al., 1993) is used for the scenarios reported in Section 3. The idea is to use the same formulation and produce different solutions according to the data flow coming from the other modules and the requirements of the decision-making procedure. A future development for this module is a general formulation able to respond to the data introduced through this methodology and to solve the resulting optimization problems.

3. Results
The methodology is implemented as a CAPE tool and different scenarios are used to show its adaptability. The data for solving the scenarios are introduced through the user interface module; the introduction of master recipe instances is shown in Figure 4.
Figure 4. Implemented interface view
The scenarios are planned for a multi-process cell area in a site. The decision-making problem is constructed from an area manager's point of view, and production planning is supported by finding different scheduling results. In these results, demand is divided over the cells and solved separately (scenarios 1 and 2), demand is added up for all the cells for a total solution (scenario 3a), and demand is divided over the cells for a single solution (scenario 3b). Details of the scenarios are given as follows:
3.1. Scenario 1
The first scenario is the original problem from Kondili et al. (1993), and is called Process Cell 1 (PC1).
3.2. Scenario 2
The second scenario is similar to the original problem in scenario 1, but the maximum batch sizes are increased and decreased by 10 % in Process Cell 2 (PC2) and Process Cell 3 (PC3),


respectively. Part of the structured data in the XML files from scenario 1 (process input, STN) is used directly for these new cells. These additional process cells are solved separately.
3.3. Scenario 3
Scenario 3 is the integrated solution of the three process cells, and the formulation receives all the input data for the solution. In scenario 3a demand is multiplied by 3 and no specific process cell is addressed for production. In contrast, scenario 3b maps the amount of demand and process outputs to each process cell according to the process cell capacity changes. Furthermore, the intermediate storage amount is tripled for these scenarios.

4. Discussions
Table 3 summarizes the results for an area manager. Each scenario is designed for a different decision-making procedure and is intended to show the capability of the methodology for complex cases. For instance, if the question is to assign process cells to specific production orders, the manager chooses between scenario 1 and scenario 2 by inspecting the optimal scheduling solutions and checking the make-span times in the table. Scenario 3a contains the optimal scheduling data when demand is not assigned to specific process cells, while scenario 3b considers the demand assigned to each process cell for a more specific situation. Scenario 3a has the highest profit since the problem is constructed monolithically and the constraints on demand assignment are removed compared to scenario 3b. Furthermore, computational times are shown in the table. The required computational effort is lower when the maximum batch sizes increase (scenarios 1 and 2). The required effort is also reduced when the constraints on product demand are removed (scenario 3a).
Table 3. Results of the scenarios

Scenario | Process Cells | Product | Demand | Make-span | Profit | CPU [s]
1 | PC1 | 1 / 2 | 500 / 400 units | 40 / 31 hours | 13432 units | 0.593
2a | PC2 (10 % more) | 1 / 2 | 500 / 400 units | 40 / 21 hours | 13535 units | 0.453
2b | PC3 (10 % less) | 1 / 2 | 500 / 400 units | 40 / 21 hours | 13393 units | 0.889
3a | PC1, PC2, PC3 | 1 / 2 | 3x500 / 3x400 units | 40 / 22 hours | 40705 units | 0.125
3b | PC1 | 1 / 2 | 500 / 400 units | 40 / 25 hours | 40602 units | 0.562
   | PC2 | 1 / 2 | 550 / 440 units | 39 / 33 hours | |
   | PC3 | 1 / 2 | 450 / 360 units | 32 / 30 hours | |

The proposed data management approach recognizes each decision-making procedure in a similar way but handles the data according to the problem to be solved. In this work an area manager is considered, but this actor can be replaced by a planning formulation, and the same data would be shared by two different formulations through module 3. The proposed procedure removes the limitations between the hierarchical levels in production systems and allows flexible solutions considering the interactions and the decision variables. However, it is necessary to develop and implement different formulations in module 4 in order to solve and improve decision-making problems. In addition, the short-term scheduling formulation is used here for purposes different from those originally reported, in order to demonstrate the capability of the methodology.


Regarding related methods, this work follows the same research line as Muñoz (2011) in terms of using the functionalities of ontological models, and the knowledge introduction and collection methods are managed in a general way. Additionally, compared with Vegetti and Henning (2015), this work addresses not only the interfacing of different elements in production systems but also the solution of mathematical programmes. Modelling in the ontology is the main issue in this methodology, together with connecting this model with programming frameworks. In general, problems occur in these connections, and more difficulties are expected when solving large-scale problems.

5. Conclusions
This paper proposes a methodology for the integrated management of production systems. It presents a modular approach and introduces a flexible way of managing production in different process cells while incorporating the planning requirements. The data needed to solve the different optimization problems in different production scenarios are introduced to a general class of problem formulation through a single interface, and the ontology determines the problem instance to be solved. The methodology showed robustness and flexibility for developing more complex cases and may be adapted to use different auxiliary tools (like sophisticated drawing tools to efficiently feed data to the ontology). Future work in this line involves developing a more general formulation to address other classes of problems in hierarchical systems; extended formulations should be implemented and the capacity of the methodology tested accordingly. Additionally, database applications to connect module 1 with the other modules and further data exchange applications will be investigated.

Acknowledgments
Financial support received from the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (research project SIGERA, DPI2012-37154-C02-01), the 'Agència de Gestió d'Ajuts Universitaris i de Recerca - AGAUR' (2014 FI00305), the Mexican National Council for Science and Technology (CONACyT) and the Research Group CEPEiMA (2014SGR1092), is fully appreciated.

References
BatchML. Batch Markup Language. Retrieved September 2015 from: http://www.mesa.org/en/BatchML.asp.
C. Dombayci, J. Farreres, H. Rodríguez, E. Muñoz, E. Capón-García, A. Espuña, M. Graells, 2015. On the process of building a process systems engineering ontology using a semi-automatic construction approach. Comput-Aided Chem. Eng., 37, 941-946.
M. Fedorova, G. Sin, R. Gani, 2015. Computer-aided modelling template: Concept and application. Comput Chem Eng, 85, 232-247.
E. Kondili, C.C. Pantelides, R.W.H. Sargent, 1993. A general algorithm for short-term scheduling of batch operations. Comput Chem Eng, 17, 211-227.
C.T. Maravelias, C. Sung, 2009. Integration of production planning and scheduling: Overview, challenges and opportunities. Comput Chem Eng, 33, 1919-1930.
E. Muñoz, E. Capón-García, J.M. Laínez-Aguirre, A. Espuña, L. Puigjaner, 2015. Supply chain planning and scheduling integration using Lagrangian decomposition in a knowledge management environment. Comput Chem Eng, 72, 52-67.
E. Muñoz, 2011. Knowledge management technology for integrated decision support systems in process industries. PhD thesis, UPC.
M. Vegetti, G. Henning, 2015. An ontological approach to integration of planning and scheduling activities in batch process industries. Comput-Aided Chem Eng, 37, 995-1000.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50007-2

Mathematical Optimization of Real-time Waste Incineration Scheduling in the Industry
Matteo L. Abaecherli,* Daniel Santos González, Elisabet Capón-García, Konrad Hungerbühler

ETH Zürich, HCI G138, Vladimir-Prelog-Weg 1, 8093 Zürich, Switzerland

[email protected]

Abstract
This work proposes a novel approach for handling transfer activities together with storage and processing tasks in the field of waste incineration. Given a set of waste streams, a detailed plant topology, short- or medium-term strategy specifications, and optimization targets, the developed model optimizes the schedule for the incineration of liquid waste along a given time horizon, from a waste transfer as well as an economic and sustainability point of view. Additionally, detailed waste transfer, storage, mixing and incineration assignments along with their start and processing times are provided. Tests in industry show that the obtained optimized schedules can efficiently support real-time decisions, leading to a more cost-efficient and sustainable waste incineration.
Keywords: Scheduling, Mathematical Optimization, Waste Management, Incineration, Energy efficiency

1. Introduction
Increasingly stringent environmental regulations, the price volatility of fossil fuels and carbon emission regulations force industry to increase the efficiency of its operations (Harjunkoski et al., 2014). This also includes downstream processes such as waste management, which is quite a challenging task in all sectors of the chemical industry (Poliakoff et al., 2002). Large chemical sites often have their own in-house waste treatment facilities, which ensure independent and adequate waste handling while allowing for material and energy recovery. A well-thought-out and smart schedule is required in order to achieve smooth and efficient operation, avoiding hold-ups in production as well as constraint violations of both technical and regulatory nature (Wassick, 2009). In previous works (Wassick, 2009; Abaecherli et al., 2014; Abaecherli et al., 2015) it has been shown that mathematical optimization tools can deliver improved storage, mixing and treatment strategies for liquid waste incineration in industry. By doing so, natural gas consumption can be reduced by up to 60 %; thereby both the costs and the environmental burden of the incineration process can be significantly decreased. Nevertheless, most decision-making processes in this field are still based purely on empirics. Likewise, existing optimization models only generate short- and medium-term schedules with low temporal resolution and are consequently not reliable for supporting daily routines in big industrial sites with strongly fluctuating waste production. The goal of this work is to deliver a high-resolution, real-time optimization tool for facilitating industrial waste incineration scheduling, thereby reducing the overall process costs and environmental impact.


Figure 1: Overview of the waste treatment system. Black arrows indicate waste transfer via pipes while dotted ones represent transfer of waste by tank wagons.

2. System Overview
The system considered in this work represents a typical waste incineration facility for industrial liquid waste treatment. Waste can be delivered to the incineration site either by tank wagon or by piping. Once unloaded, the waste can either be directly burned over incineration lances or stored intermediately and in some cases mixed with other waste streams before treatment. Figure 1 gives a schematic overview over the incineration system considered.

3. Methodology
The model is formulated as a mixed-integer linear program (MILP). A discrete-time formulation with a single uniform grid of one hour was used. Similar to previous models dealing with waste incineration (Abaecherli et al., 2014), there are two key mathematical formulations. On the logistics side, there are i) mass balances, providing the amount of waste w stored in a tank i at time t as well as the in- and outflows of the tank, as in Eq.(1), and ii) mixing constraints, assuring that the composition of the outflow is equal to that of the waste stored in the tank at any time t, as seen in Eq.(2). Waste mixing is crucial for improving the incineration schedules. The non-linearity in Eq.(2) leads to a severe increase in the model complexity. Therefore, a volume discretization approach (Capón-García et al., 2014) has been adopted in order to linearize the mixing problem.

\[ M^{\mathrm{stored}}_{w,i,t} = M^{\mathrm{stored}}_{w,i,t-1} + M^{\mathrm{in}}_{w,i,t} - M^{\mathrm{out}}_{w,i,t} \quad \forall\, w, i, t \tag{1} \]

\[ \frac{M^{\mathrm{stored}}_{w,i,t}}{\sum_{w'} M^{\mathrm{stored}}_{w',i,t}} = \frac{M^{\mathrm{out}}_{w,i,t}}{\sum_{w'} M^{\mathrm{out}}_{w',i,t}} \quad \forall\, w, i, t \tag{2} \]
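A minimal sketch of how the mass balance in Eq.(1) could be written with the PuLP modelling library (an assumption; the paper's implementation uses GAMS and CPLEX). The sets, horizon and initial hold-ups are toy values for illustration only.

```python
from pulp import LpProblem, LpVariable, LpMinimize

wastes, tanks, periods = ["w1", "w2"], ["i1"], range(4)
model = LpProblem("waste_storage", LpMinimize)
M_st  = LpVariable.dicts("M_stored", (wastes, tanks, periods), lowBound=0)
M_in  = LpVariable.dicts("M_in",     (wastes, tanks, periods), lowBound=0)
M_out = LpVariable.dicts("M_out",    (wastes, tanks, periods), lowBound=0)

initial = {("w1", "i1"): 5.0, ("w2", "i1"): 0.0}   # assumed initial hold-ups
for w in wastes:
    for i in tanks:
        for t in periods:
            prev = initial[w, i] if t == 0 else M_st[w][i][t - 1]
            # Eq.(1): stored(t) = stored(t-1) + inflow(t) - outflow(t)
            model += M_st[w][i][t] == prev + M_in[w][i][t] - M_out[w][i][t]
# Eq.(2)'s outflow-composition ratio is nonlinear and is not written here;
# the paper linearizes it with a volume discretization approach.
```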

On the process side, the energy conservation of the combustion process has to be assured while respecting temperature limitations. This is achieved by introducing energy balances, such as in Eq.(3), with M^W_{w,l,t} being the amount of waste w incinerated over lance l at time t and M^G_t the amount of natural gas needed to overcome energy shortages at time t. ΔH_w and ΔH_G are the heats of combustion of waste and natural gas, respectively. On the right-hand side of the energy balance stands the cooling process of the incineration products y occurring at time t from the oven temperature T^{oven} to the initial temperature T^0, with a heat capacity Cp_y for each incineration product y.

\[ \sum_{w}\sum_{l} M^{W}_{w,l,t}\,\Delta H_{w} + M^{G}_{t}\,\Delta H_{G} = \sum_{y} M^{\mathrm{Fumes}}_{y,t} \int_{T^{0}}^{T^{\mathrm{oven}}} Cp_{y}\, dT \quad \forall\, t \tag{3} \]
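For intuition on Eq.(3), the following back-of-the-envelope computation reads the energy balance as: the natural gas make-up covers whatever part of the fume heating demand the heat of combustion of the waste mix cannot supply. All numbers below are invented for illustration only.

```python
# Toy reading of Eq.(3); every figure here is an assumed placeholder.
dH_waste = {"w1": 12.0, "w2": 4.0}   # MJ/kg, assumed heats of combustion
dH_gas = 45.0                         # MJ/kg natural gas (assumed)
burned = {"w1": 100.0, "w2": 200.0}   # kg of each waste burned in period t
fume_heat_demand = 2500.0             # MJ, sum_y M_y * integral of Cp_y dT (assumed)

heat_from_waste = sum(dH_waste[w] * m for w, m in burned.items())
gas_needed = max(0.0, (fume_heat_demand - heat_from_waste) / dH_gas)
print(f"natural gas make-up: {gas_needed:.1f} kg")
```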

3.1. Waste unloading
This work introduces a novel approach for continuous waste unloading, in order to handle both the high time resolution of the decision-making and the accurate representation of logistic tasks in the plant. Eq.(4) prevents incomplete unloading of the waste. The equation works by not allowing inactive periods between active periods for the binary variables B_{w,u,t} that activate the unloading process of waste w at pump u at period t. This is achieved by not allowing inactivity between two active time periods t' and t''' with a gap of at least two time periods between them, as shown in Figure 2.

\[ \sum_{u} B_{w,u,t'} + \sum_{u} B_{w,u,t'''} - 1 \le \sum_{u} B_{w,u,t} \quad \forall\, t : t' < t < t''',\ \forall\, t', t''' : t' \le t''' - 2,\ \forall\, w \tag{4} \]
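A small pure-Python helper illustrating the requirement that Eq.(4) enforces (see Figure 2): once unloading starts, the 0/1 activity profile may not contain a gap between two active periods. This is a checker for intuition only, not part of the MILP.

```python
def is_continuous(profile):
    """True if all 1s in the 0/1 profile form one contiguous block."""
    first = profile.index(1) if 1 in profile else None
    if first is None:
        return True
    last = len(profile) - 1 - profile[::-1].index(1)
    return all(profile[t] == 1 for t in range(first, last + 1))

print(is_continuous([0, 1, 1, 1, 0]))  # True  (continuous, right side of Figure 2)
print(is_continuous([0, 1, 0, 1, 0]))  # False (intermittent, must be prevented)
```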

Problems arise when the unloading process is not finished before the last period. In this case, a binary switch variable that indicates whether the tank is being unloaded at the last period is activated. This switch is then used to deactivate the completion constraint in Eq.(5) if the tank is still being unloaded in the last period. In Eq.(5), M is a sufficiently large number constituting an upper bound on the amount of waste WU_{w,u,t} that is unloaded at pump u at time t.

\[ \sum_{u}\sum_{t} WU_{w,u,t} \ge M_{w} - M\,\big(1 - B^{U\mathrm{end}}_{w}\big) \quad \forall\, w \tag{5} \]

Figure 2. Unloading process overview. The situation on the left with intermittent unloading has to be prevented, while continuous unloading as seen on the right side is desired. 1 indicates unloading while 0 stands for no unloading at a specific time t.


4. Industrial testing
The model has been validated in collaboration with an industrial partner. The incineration facility considered consists of two furnaces, 19 incineration lances, 17 intermediate storage tanks, 12 unloading pumps and a complex network of pipes. On average, more than 30 different waste streams from different production sites, with a great variety of heating values and compositions, occur every day. The mathematical formulation introduced has been implemented in GAMS and solved using CPLEX 12.5 for MILP problems. All computational studies were performed on a DELL Precision T3500 computer with an Intel 3.33 GHz CPU and 18 GB RAM. Stopping criteria were an optimality gap of 0.1 % and a computational time of 1800 s.
4.1. Validation
The validation of the optimization tool has been achieved by proving its ability to generate feasible schedules for a given set of waste streams. To this end, two days of operation in industry were monitored in detail. The goal of the validation was to be able to process the waste streams over the same horizon as in reality. The objective of the optimization was to maximize the amount of waste unloaded at the incineration site. It could be shown that the introduced optimization tool can cope with typical problems encountered in industrial waste incineration. The results are summarized in Table 1. Keeping in mind the discrete nature of the optimization problem, which invariably overestimates the time needed to perform certain tasks (e.g. a 40-minute task has to be rounded up to 1 hour in the model), the obtained schedule is quite close to reality.
4.2. Case Study 1
Due to the large combinatorial nature of the problem, there are often many optimal or near-optimal solutions. Since not only feasible but also economically advantageous and sustainable schedules are of interest, a two-step optimization approach was followed (sketched below): the first step with the goal of maximizing the amount of waste unloaded at the incineration site, and the second one aiming to minimize the overall natural gas consumption, which is the main cost driver of the incineration process. Natural gas is used to provide continuous and steady combustion in case the treated waste does not provide enough energy to the system. Therefore, in order to decrease the overall costs, the second optimization delivers a waste storage, transfer, mixing and incineration strategy that minimizes the use of natural gas. In so doing, the consumption of natural gas could be reduced by 70 % on the first day and 60 % on the second one. This is interesting both from an economic point of view, as can be seen in Table 1, and an environmental one, as the natural gas consumption per amount of waste treated could be reduced. This leads, among other things, to a reduction of the overall CO2 emissions of about 4-5 %.
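A sketch of the two-step (lexicographic) pattern described above, again assuming PuLP rather than the GAMS implementation; the single coupling constraint and all bounds are toy placeholders, and only the solve-fix-resolve pattern is the point.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpMinimize, value

waste = LpVariable("unloaded_waste", 0, 200)   # m3 over the horizon (toy bound)
gas = LpVariable("natural_gas", 0, 50)         # t over the horizon (toy bound)

step1 = LpProblem("max_unloaded_waste", LpMaximize)
step1 += waste                                  # objective 1: throughput
step1 += waste <= 150 + 2 * gas                 # toy energy/logistics coupling
step1.solve()
best_waste = value(waste)

step2 = LpProblem("min_natural_gas", LpMinimize)
step2 += gas                                    # objective 2: gas consumption
step2 += waste <= 150 + 2 * gas
step2 += waste >= best_waste                    # keep the step-1 throughput
step2.solve()
print(best_waste, value(gas))
```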


Table 1. On the left side of the table the results of the validation are shown, with a run maximizing the amount of unloaded waste. The actual amount of waste unloaded in industry was 194 m3 for the first day and 186 m3 for the second. On the right side of the table, the results of the second optimization of the first case study are shown. A constraint was added in order to keep the amount of waste unloaded equal to the results from the first run.

                          | Validation       | Case Study 1
                          | Day 1  | Day 2   | Day 1  | Day 2
Equations [10^3]          | 2,860  | 3,065   | 2,860  | 3,065
Variables [10^3]          | 1,894  | 2,003   | 1,894  | 2,003
Discrete Variables [10^3] | 32     | 36      | 32     | 36
Optimality gap [%]        | 3.41   | 0.00    | 0.99   | 1.41
Unloaded waste [m^3]      | 193    | 182     | 193    | 182
Cost [MU]                 | -1.8   | 1.3     | -8.9   | -10.1
Natural Gas [t]           | 21.3   | 20.1    | 6.8    | 8.3
CPU [s]                   | 1,800  | 1,777   | 276    | 1,800

4.3. Case Study 2
So far, all calculations have been done considering a horizon length of 24 h. In some cases, the optimality gap of 1 % is then not reached within the defined computational time. Therefore, the horizon was divided into two 12 h blocks which were solved consecutively. It could be shown that solving two consecutive smaller problems largely reduces the computational time compared to solving the whole problem over a longer horizon, without significantly affecting the efficiency of the solution with respect to the amount of unloaded waste and the natural gas consumption, and consequently the economic performance, as shown in Table 2.
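Conceptually, the horizon splitting of Case Study 2 can be sketched as a simple rolling loop in which the end-of-block tank levels seed the next block. Here solve_block is a hypothetical stand-in for building and solving the 12 h MILP, and the initial levels are assumed values.

```python
def solve_block(initial_levels, hours=12):
    """Hypothetical stand-in: build and solve the MILP for one 12 h block."""
    ...  # model construction and solve would go here
    final_levels = initial_levels  # placeholder for the optimized end state
    return final_levels

levels = {"tank1": 5.0, "tank2": 0.0}  # assumed state at midnight
for block in range(2):                  # blocks 0-12 h and 12-24 h
    levels = solve_block(levels)
```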

Table 2. Results of a consecutive two-step optimization solved with a reduced horizon length of 12 h each, starting either at midnight or at midday. The results for both optimization runs are shown for each day and for each of the two horizons.

                          | Day 1                            | Day 2
                          | Opt. run 1    | Opt. run 2       | Opt. run 1    | Opt. run 2
Time [h]                  | 0-12 | 12-24  | 0-12  | 12-24    | 0-12 | 12-24  | 0-12  | 12-24
Equations [10^3]          | 666  | 785    | 666   | 785      | 972  | 581    | 972   | 581
Variables [10^3]          | 458  | 513    | 458   | 513      | 648  | 377    | 648   | 377
Discrete Variables [10^3] | 9    | 10     | 9     | 10       | 13   | 8      | 13    | 8
Optimality gap [%]        | 0.10 | 0.00   | 0.85  | 0.52     | 0.00 | 0.00   | 5.00  | 0.00
Unloaded waste [m^3]      | 117  | 79     | 117   | 79       | 119  | 58     | 119   | 58
Cost [MU]                 | 1.7  | -2.6   | -2.9  | -5.8     | 3.3  | -1.4   | -4.9  | -4.6
Natural Gas [t]           | 8.9  | 11.3   | 3.3   | 6.6      | 10.0 | 11.0   | 1.1   | 5.8
CPU [s]                   | 133  | 53     | 3     | 5        | 6    | 246    | 1,800 | 3


5. Conclusions
A model considering logistic and operational issues for the incineration of liquid industrial waste has been created and validated in a real plant. It could be shown that the model can handle daily challenges encountered in industrial practice and generate attainable schedules for upcoming operation periods within reasonable computational time. Due to the complex nature of actual waste incineration problems, the combinatorial possibilities for waste transfer, storage, mixing and treatment are nearly infinite. Therefore, to find the most suitable solution for the industry, a two-step optimization has been performed in order to generate a schedule that is efficient in terms of waste transfer as well as economically profitable and sustainable. Operating costs are strongly linked to the usage of natural gas to overcome energy deficits in the incineration process. Thus, the resulting schedules provide optimal mixing, storage, transfer and treatment strategies in order to exploit as efficiently as possible the energy contained in the different waste streams, hence reducing the overall natural gas consumption. As a result, the environmental impact of the waste incineration is also reduced, since lower natural gas usage cuts the overall CO2 emissions and the fume cleaning requirements per ton of waste treated.

References
I. Harjunkoski, C.T. Maravelias, P. Bongers, P.M. Castro, S. Engell, I.E. Grossmann, J. Hooker, C. Méndez, G. Sand, J. Wassick, 2014, Scope for industrial applications of production scheduling models and solution methods, Computers & Chemical Engineering, 62, 161-193.
M. Poliakoff, J.M. Fitzpatrick, T.R. Farren, P.T. Anastas, 2002, Green Chemistry: Science and Politics of Change, Science, 297, 807-810.
J.M. Wassick, 2009, Enterprise-wide optimization in an integrated chemical complex, Computers & Chemical Engineering, 33, 1950-1963.
M.L. Abaecherli, E. Capón-García, K. Hungerbühler, 2015, Reduction of the Environmental Impact While Improving the Economic Benefits, 4th ICOSSE, Balatonfüred.
M.L. Abaecherli, E. Capón-García, S. Papadokonstantakis, K. Hungerbühler, 2014, Natural Gas Reduction in Industrial Waste Incineration By Optimized Short-Term Scheduling, AIChE Annual Meeting, Atlanta.
E. Capón-García, I.E. Grossmann, K. Hungerbühler, 2014, Effective Discretization Strategy for Multi-Component Networks, AIChE Annual Meeting, Atlanta.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50008-4

Modelling SER Biomass Gasification Using Dynamic Neural Networks
Alia Salah,* Lutz Hanel, Marcel Beirow, Günter Scheffknecht

University of Stuttgart, Pfaffenwaldring 23, Stuttgart 70569, Germany

[email protected]

Abstract
Sorption Enhanced Reforming (SER) is a promising technology for biomass gasification. This process enables in-situ carbon dioxide (CO2) capture in the gasifier and results in a higher hydrogen concentration in the product gas. The resulting high-calorific gas can be further used for electricity generation. Biomass gasification consists of highly coupled sequential thermo-chemical processes which are difficult to model in closed-form equations. The purpose of this paper is to introduce a dynamic neural network model for the biomass steam gasification process. The dynamic neural networks are designed and trained using measurement data from a 200 kWth pilot plant at the IFK, University of Stuttgart. The designed model shows good performance in estimating the overall behavior of the biomass gasification process, which is characterized mainly by the volume flow rates of the product and exhaust gases. The model also successfully estimates output variables based on new measurement test data. A sensitivity analysis is carried out to show the relative importance of each input variable within the model.
Keywords: Biomass, modelling, neural networks, Sorption Enhanced Reforming.

1. Introduction
Biomass gasification is gaining increased attention as it results in a significant reduction of CO2 emissions compared to fossil fuels and also produces high-calorific gas from a renewable energy resource. Nevertheless, experiments are time consuming and limited to safe operation; therefore, mathematical models are needed in order to explain the complex nature of the gasification process and to highlight the effect of each variable on the overall system performance. However, the biomass gasification process is difficult to model in closed-form equations as it consists of sequential thermo-chemical processes that are highly coupled. Several mathematical models have been developed based on idealized assumptions; these are based either on thermodynamic equilibrium or on simplified reaction kinetics of the gasification process. A detailed review of biomass gasification modelling can be found in (Puig-Arnavat et al., 2010). Though these models show relatively good results, they are either limited to stationary operating conditions or computationally expensive. From this arises the need for other modelling approaches which are more flexible, provide better accuracy and can be used independently from the plant. Neural networks show great potential in this field as they


have been implemented in several studies to estimate outputs of biomass gasification process models. The modelling focuses on product gas composition, gas yields and other parameters, as in (Mikulandrić et al., 2014) and (Puig-Arnavat et al., 2013). However, the implementation of neural networks in this field has so far been limited to "function approximators", which can be interpreted as stationary models. The main aim of this paper is to provide a model for the Sorption Enhanced Reforming (SER) biomass steam gasification process based on measurement data from a 200 kWth pilot plant. The model is based on dynamic neural networks (DNN) which can be applied offline and independently from the process. This modelling method better fits the dynamic nature of the process in question. These DNN are also utilized to estimate outputs which are not included within their training data.

2. Biomass steam gasification model
This section provides an overview of the Sorption Enhanced Reforming process and of the pilot plant from which the measurement data are obtained and the biomass gasification is modelled.
2.1. Sorption Enhanced Reforming (SER) biomass steam gasification
SER is a promising technology for biomass gasification. Compared to other gasification technologies it enables in-situ carbon dioxide (CO2) capture in the gasification reactor, resulting in higher hydrogen (H2) concentrations in the product gas. This is done by using calcium oxide (CaO) as a sorbent for CO2 according to the exothermal reaction CaO(s) + CO2(g) ⇔ CaCO3(s) + heat, which shifts the water-gas-shift reaction (WGSR) CO + H2O ⇔ CO2 + H2 towards producing H2, thus resulting in increased hydrogen levels. The SER process is presented in Figure 1 and is often named AER (adsorption enhanced reforming) in the literature. More details about this process can be found in (Hawthorne et al., 2012).
2.2. SER 200 kWth pilot plant at IFK, University of Stuttgart
In this work, the biomass steam gasification process within a 200 kWth pilot plant is modelled. The plant consists of two coupled fluidized bed reactors: the gasifier (R1), which is a bubbling fluidized bed reactor, and the regenerator (R3), which is a circulating fluidized bed reactor. The process of the plant is described as follows: steam is fed to the gasifier and fluidizes the bubbling bed. CO2 from gasification is then adsorbed and enhanced hydrogen generation takes place. The product gas leaves the bubbling bed and entrained char particles are separated and returned to the gasifier bed. The product gas is then fully oxidized. The gasifier bed materials, consisting of char and partially carbonated CaO, are transferred to the regenerator where the CaO is heated up to approx. 900 °C and fully calcined. The bed material is entrained and separated so that part of the solid material is diverted to the gasifier and the rest is circulated back to the regenerator, ensuring a homogeneous temperature distribution and promoting good combustion.

3. NARX dynamic neural network model
The SER biomass steam gasification process is described by modelling the volume flow rates of the product and exhaust gases as a preliminary step for further analysis of gas compositions. In order to ensure safe and sufficient operation, reactor temperatures and pressures are also observed and modelled.


Figure 1. Sorption Enhanced Reforming (SER) Process (Hawthorne et al., 2012)

3.1. Model definition
The biomass steam gasification process is modelled by the volume flow rates of the gases at both reactors. The CO2/CaO equilibrium strongly depends on the process temperature at atmospheric conditions, and carbonation occurs at appreciable rates at temperatures between 600 and 725 °C (Hawthorne et al., 2012). It is therefore important to observe the main temperatures and pressures of each reactor and to include them in the model. Accordingly, the model of biomass steam gasification comprises six outputs (n = 6), namely the volume flow rates of the gases and their temperatures and pressures at both the gasifier and the regenerator. Each of these outputs is considered as a sub-model and is identified as a nonlinear function of all inputs, their previous values and the previous values of the output. Model inputs and outputs are listed in Table 1. Each of these dynamic sub-models can be represented as a nonlinear time series according to the following:

\[ y(t) = f\big( y(t-1), \dots, y(t-d_y),\; u(t-1), \dots, u(t-d_u) \big) + e(t), \tag{1} \]

where

\[ y(t) = [\,y_1(t) \cdots y_n(t)\,]^T, \quad u(t) = [\,u_1(t) \cdots u_r(t)\,]^T, \quad e(t) = [\,e_1(t) \cdots e_n(t)\,]^T \tag{2} \]

are the system output, input and noise vectors, respectively, d_y and d_u are the corresponding numbers of lags for the output and inputs, and f(·) is a nonlinear function (Chen et al., 1992).
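As an illustration of Eq.(1), the sketch below assembles a NARX training set from measured series: each target y(t) is paired with d_y lagged outputs and d_u lagged inputs. The numpy code and the toy series are illustrative assumptions, not the plant data.

```python
import numpy as np

def narx_dataset(y, u, d_y, d_u):
    """y: (T, n) outputs, u: (T, r) inputs -> regressor matrix X and targets Y."""
    T = len(y)
    start = max(d_y, d_u)
    X = [np.concatenate([y[t - d_y:t].ravel(), u[t - d_u:t].ravel()])
         for t in range(start, T)]
    return np.asarray(X), y[start:]

y = np.random.rand(100, 2)   # toy output series (2 outputs)
u = np.random.rand(100, 3)   # toy input series (3 inputs)
X, Y = narx_dataset(y, u, d_y=4, d_u=4)
print(X.shape, Y.shape)       # (96, 20) (96, 2)
```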

Table 1. Model output and input variables

        | Gasifier R1                                   | Regenerator R3
Outputs | y1: product gas volume flow rate in m3/h STP  | y4: exhaust gas volume flow rate in m3/h STP
        | y2: temperature in °C                         | y5: temperature in °C
        | y3: pressure in mbarg                         | y6: pressure in mbarg
Inputs  | u1: fuel mass flow in kg/h                    | u3: fuel mass flow in kg/h
        | u2: steam mass flow in kg/h                   | u4: air and O2 volume flow rate in m3/h STP
        |                                               | u5: cont. mass flow rate from R3 in kg/h
        |                                               | u6: disc. mass flow rate from R3 in kg/h

22

A. Salah et al.

With this representation of the process, it is assumed that the various outputs at each instant of time depend on their previous values and the previous values of inputs. Based on this the biomass steam gasification process is modelled as a nonlinear autoregressive system with exogenous inputs (NARX). This choice of representation is justified based on the existence of external inputs which play a major role in the process and can be regulated, as well as, the highly coupled sequential thermo-chemical processes which explains the nonlinear representation. So the aim now is to approximate and to identify this nonlinear autoregressive dynamic system with exogenous inputs. 3.2. Modelling using NARX dynamic neural networks Neural network is considered a method of system identification and has a great potential in this field due to its robustness and capability to handle and learn nonlinear and complicated relationships. Neural networks during training adjust their parameters according to a variety of optimization algorithms in order to have their outputs similar to outputs of the identified systems. In this work a special type of neural networks called dynamic neural networks (DNN) is used to model the biomass steam gasification process. In these networks outputs depend not only on current inputs but also on their previous values. In addition, DNN have feedback loops and/or recurrent connections. This implies that the outputs are also a function of their previous values which can be interpreted as memory capability of the network.

Figure 2 Dynamic Neural Network Training Concept

Training of the DNN is accomplished in two steps according to Figure 2:
• Open-loop NARX neural network, where a feedforward multilayer perceptron neural network is trained using backpropagation algorithms. The open-loop training is a preliminary step to define the main structure of the neural network, to validate its performance and to provide outputs based on previous measurement data. This type of model can be applied online along with the process.
• Closed-loop NARX neural network: a second training step, implemented to estimate outputs based on current and previous inputs and the previously estimated outputs, thereby generating a feedback loop. With this closed-loop structure the NARX neural network can estimate its future outputs independently from previous plant measurements and can thus be used to simulate each of the aforementioned biomass steam gasification output variables offline, as sketched below.
Simulation results of the DNN are presented in the following section.
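The two training steps can be sketched as follows, using scikit-learn's MLPRegressor as a stand-in for the feedforward core of the DNN (an assumption; the paper does not name its toolbox). Training is open loop on measured regressors (teacher forcing); simulation then runs closed loop on the network's own predictions. The first-order plant is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
u = rng.random(300)
y = np.zeros(300)
for t in range(1, 300):                      # toy first-order plant
    y[t] = 0.8 * y[t - 1] + 0.2 * u[t - 1]

# Open loop: measured y(t-1), u(t-1) as regressors for y(t).
X = np.column_stack([y[:-1], u[:-1]])
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y[1:])

# Closed loop: feed the model's own previous prediction back.
y_hat = [y[0]]
for t in range(1, 300):
    y_hat.append(net.predict([[y_hat[-1], u[t - 1]]])[0])
```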


4. Results and discussion
Measurement data from the pilot plant were collected over 3.5 hours of the gasification process at a sampling time of 5 seconds. During this process, the fuel and steam mass flows into the gasifier were increased from about 30 to 36 kg/h. This increase resulted in an increase in the volume flow rate of the product gas from about 15 to 20 m3/h STP, accompanied by an increase in the gasifier main pressure from about 25 to 30 mbarg. The main temperatures of the gasifier and the regenerator were kept within 620-680 °C and 900-935 °C, respectively.
4.1. Simulation results of dynamic neural networks
The DNN are trained using the part of the measurement data that covers the operating range of all process variables, as shown in the shaded areas of Figure 3. They are then used to estimate the corresponding output variables for the remaining part of the measurement data, according to the magnified part of the same figure. This implementation provides a method of utilizing the measurement data to practically validate and test the performance of the designed DNN as they are subjected to new inputs.

Figure 3. Gasifier R1 measured and estimated outputs (panels (a)-(f): product gas flow rate in Nm3/h, temperature in °C and pressure in mbarg over time in h; measurements versus DNN estimates, with the training data marked by shaded areas)

Figures 3 (a), (c) and (e) show outputs of the DNN compared to measurement data from the pilot plant; only the gasifier reactor R1 is shown due to space limitations. It can be clearly seen that the neural networks successfully model the biomass steam gasification process in terms of capturing the overall behavior of each output variable with relatively small estimation errors. The maximum estimation errors for the product gas flow rate, gasifier temperature and pressure are 12 %, 4 % and 5.5 %, respectively. These small error values indicate the good design and performance of the DNN.
On the other hand, the DNN are able to estimate their outputs using new measurement data which are not included in the training. In Figures 3 (b) and (f) the performance is good in terms of capturing the behavior, with minimal errors of 7 % and 6 %, respectively. The gasifier temperature model in Figure 3 (d) also shows good performance in terms of capturing the temperature increase, with an estimation error of 0.03 %; the deviation can be related to the need for more training data.

Figure 4. Relative Importance of Input Variables in Product Gas Model (model sensitivity in % for the input lags t-1 to t-4 of u1-u6 and the feedback lags of y1)

4.2. Sensitivity analysis
A sensitivity analysis is carried out to justify the structure of the designed neural networks. This analysis indicates the relative importance of each input variable and its lags with respect to the designed models. The analysis is based on partitioning the connection weights of the designed neural network, as proposed by (Garson, 1998) and implemented by (Puig-Arnavat et al., 2013). The designed dynamic neural network model for the product gas flow rate (y1) has four input lags and four feedback lags. Their relative importance with respect to the model is shown in Figure 4. It can be noticed that all the feedback lags y1(t-1) ... y1(t-4) have a significant contribution within the model, which further justifies the choice of DNN to model the process. In addition to the feedback lags, the inputs u1(t-1) & u1(t-3), u2(t-1) & u2(t-3) and u3(t-1) have a good contribution. On the other hand, some input lags such as u4(t-3) and u6(t-3) have a small contribution and could be excluded in an optimized DNN model.
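A compact sketch of Garson's connection-weight partitioning for a single-hidden-layer, single-output network, which is the idea behind the reported sensitivities; the weight matrices here are random placeholders, not the trained DNN.

```python
import numpy as np

def garson(W_ih, w_ho):
    """W_ih: (n_inputs, n_hidden), w_ho: (n_hidden,) -> importance in %."""
    contrib = np.abs(W_ih) * np.abs(w_ho)          # per input-hidden-node pair
    contrib /= contrib.sum(axis=0, keepdims=True)  # share within each hidden node
    importance = contrib.sum(axis=1)               # accumulate over hidden nodes
    return 100 * importance / importance.sum()

W_ih = np.random.rand(6, 4)   # e.g. 6 lagged input variables, 4 hidden neurons
w_ho = np.random.rand(4)
print(garson(W_ih, w_ho))     # one percentage per input variable
```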

Conclusion
Dynamic neural networks are implemented to model the SER biomass steam gasification process in a 200 kWth pilot plant. The designed models successfully capture and estimate the overall behavior of the product and exhaust gases, in addition to the reactors' main temperatures and pressures during gasification. They also capture the behavior during load changes instantaneously, with a maximum estimation error of 15 %. With this implementation, the dynamic modelling and simulation of the biomass steam gasification process is accomplished using DNN.

References
S. Chen, S.A. Billings, Neural networks for nonlinear dynamic system modelling and identification, International Journal of Control, 1992, 56, 2, 319-346.
D. Garson, Neural Networks: An Introductory Guide for Social Scientists, 1998, SAGE.
M. Hagan, H. Demuth, O. De Jesús, An introduction to the use of neural networks in control systems, 2002, International Journal of Robust and Nonlinear Control, 12, 959-985.
C. Hawthorne, N. Poboß, H. Dieter, A. Gredinger, M. Zieba, G. Scheffknecht, 2012, Operation and results of a 200-kWth dual fluidized bed pilot plant gasifier with adsorption enhanced reforming, Biomass Conversion and Biorefinery, 2, 3, 217-227.
R. Mikulandrić, D. Lončar, D. Böhning, R. Böhme, M. Beckmann, Artificial neural network modelling approach for a biomass gasification process in fixed bed gasifiers, 2014, Energy Conversion and Management, 87, 1210-1223.
M. Puig-Arnavat, J. Alfredo Hernández, J. Carles Bruno, A. Coronas, Artificial neural network models for biomass gasification in fluidized bed gasifiers, 2013, Biomass and Bioenergy, 49, 279-289.
M. Puig-Arnavat, J. Alfredo Hernández, J. Carles Bruno, A. Coronas, Review and analysis of biomass gasification models, 2010, Renewable and Sustainable Energy Reviews, 14, 2841-2851.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50009-6

PERKS: Software for Parameter Estimation in Reaction Kinetic Systems
Aditya Tulsyan and Paul I. Barton
Process Systems Engineering Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Rooms 66-270 and 66-470b, Cambridge 02129, Massachusetts, United States of America

Abstract
To bring the recent developments in global dynamic optimization methods to practitioners, we present a software implementation, PERKS: Parameter Estimation in Reaction Kinetic Systems. PERKS performs parameter estimation in complex reaction-network models using a state-of-the-art branch-and-bound global dynamic optimization method. The built-in global optimization method in PERKS guarantees the best possible least-squares fit of the experimental data by the kinetic model. PERKS is equipped with a graphical user interface (GUI) to streamline user interaction and reduce training time on the software. Further, PERKS is designed keeping in mind the diverse backgrounds of our possible end-users. Example problems are included with the software to get practitioners started with minimal effort and limited background knowledge in optimization methods. In this paper, we introduce PERKS and discuss some of its key features.
Keywords: Global optimization, parameter estimation, software, reaction network

1. Introduction
Kineticists and chemists building predictive models for complex chemical reaction networks with n_r reactions (also referred to as the dimension of the network) often end up with a set of non-linear first-order ordinary differential equations (ODEs) of the form

\[ \frac{dx}{dt}(t, p) = S\, r(q, p, x(t, p)), \qquad x(t_0) = x_0, \qquad \text{(System equation)} \tag{1} \]

\[ y(t, p) = C x(t, p) + \varepsilon(t), \qquad \text{(Measurement equation)} \tag{2} \]

where x(t, p) ∈ R^{n_x} is a vector of state variables, describing the concentration or mass or mole fraction of chemical species as a function of time, denoted as t. Other time-varying variables, such as reaction temperature, pressure or density, if present, are also represented in x. The vector p ∈ R^{n_p} represents the unknown or uncertain kinetic model parameters, which are often poorly known a priori or difficult to measure directly in experiments. Typical examples of kinetic parameters are rate constants and activation energies, but they could also include other unknown quantities, such as molecular properties of the species. While the vector p only includes the unknown and uncertain parameters, other variables, which are precisely known (in a statistical sense) and can be safely held fixed, are represented by the vector q ∈ R^{n_q}. The quantities S ∈ R^{n_x × n_r} and r(q, p, x(t, p)) ∈ R^{n_r} represent the stoichiometric matrix and rate-law vector of appropriate dimensions, respectively, such that Sr describes the rate of change of the species concentrations at time t for a given p and


q. Note that (1) is a general model representation and can be used to describe different reaction networks. Many popular reaction mechanisms, such as power-law, Mars-Van Krevelen, Langmuir-Hinshelwood, Eley-Rideal, Michaelis-Menten and Hougen-Watson, can be represented by (1). In (1), x(t_0) ∈ R^{n_x} represents the vector of initial concentrations of the species at time t = t_0. In (2), y(t, p) ∈ R^{n_y} represents the concentrations of species sampled in an experiment. Here we assume that the concentration of at least one species is measured (i.e., n_y ≥ 1) such that C ∈ R^{n_y × n_x} has at least one non-zero entry in its principal diagonal. Finally, ε(t) ∈ R^{n_y} represents random measurement error or sensor noise.

2. Parameter Estimation Problem
Parameter estimation is an important first step in building a reliable predictive model for a kinetic system (G. Franceschini and S. Macchietto, 2008; Y. Bard, 1974). Given some kinetic data ȳ ∈ R^{n_y} of length N_data sampled (uniform or multi-rate sampling) in an experiment, the objective of a parameter estimation problem is to find p ∈ R^{n_p} under which Model (1) explains the data well. For example, the predictive capabilities of a model can be measured using the square-loss (or least-squares) function

\[ L(p) = \sum_{i=1}^{N_{\mathrm{data}}} \left[ \bar{y}(t_i) - y(t_i, p) \right]^T \left[ \bar{y}(t_i) - y(t_i, p) \right], \tag{3} \]

where L is a square-loss function. In the classical frequentist setting, the parameter estimation problem is cast as an optimization problem, wherein the difference between the predicted and experimental values, as described by the loss function L in (3), for example, is minimized over the parameter space. The goal is then to find a parameter value p̂ ∈ R^{n_p} that minimizes the loss function in (3) such that

\[ \hat{p} \in \arg\min_{p \in P} L(p), \tag{4} \]

where P ⊂ R^{n_p} is the parameter space. Solving the optimization problem (4) with an ODE system described in (1) is referred to as a global dynamic optimization problem. Despite the neat problem formulation, the global dynamic optimization in (4) does not lend itself to any easy analytical or closed-form solution, barring a few simple linear models. This is because for most reaction networks of interest, represented by Model (1), the loss function in (4) is non-convex and multimodal.
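For intuition, evaluating the loss (3) for one candidate p means integrating the ODE (1) and summing the squared residuals, as in the sketch below. This is for illustration only: a local minimizer applied to this L(p) can stall in the non-convex landscape, which is exactly why PERKS relies on branch-and-bound global dynamic optimization instead. The toy network and data are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def loss(p, S, rate, x0, t_meas, y_meas, C):
    rhs = lambda t, x: S @ rate(p, x)                 # dx/dt = S r(p, x)
    sol = solve_ivp(rhs, (t_meas[0], t_meas[-1]), x0,
                    t_eval=t_meas, rtol=1e-8)
    resid = y_meas - (C @ sol.y).T                    # y(t, p) = C x(t, p)
    return float(np.sum(resid * resid))

# toy 1-species, 1-reaction example: A -> B with rate p * xA
S = np.array([[-1.0]])
rate = lambda p, x: np.array([p[0] * x[0]])
t = np.linspace(0.0, 1.0, 6)
y = np.exp(-2.0 * t).reshape(-1, 1)                   # synthetic data, p_true = 2
print(loss([2.0], S, rate, [1.0], t, y, np.eye(1)))   # ~0 at the true parameter
```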

3. Global Dynamic Optimization Methods
Recent advancements in computing technologies and over a decade of academic research in global dynamic optimization have culminated in the development of advanced methods that can guarantee finding globally optimal solutions of parameter estimation problems for ODE systems (B. Chachuat, A.B. Singer and P.I. Barton, 2006; A.B. Singer and P.I. Barton, 2006; A.B. Singer, J.W. Taylor, P.I. Barton and W.H. Green, 2006; J.K. Scott and P.I. Barton, 2010, 2013a,b; Y. Lin and M.A. Stadtherr, 2007a,b; J.K. Scott, B. Chachuat and P.I. Barton, 2013; A.M. Sahlodin and B. Chachuat, 2011). In the academic setting, there is a host of software available to help solve (4) for dynamical systems. Some of the popular global dynamic optimization software packages are MC++ (A.M. Sahlodin, 2013), dGOPT (S.D. Schaber, 2014) and GDOC (A.B. Singer, J.W. Taylor, P.I. Barton and W.H. Green, 2006). Despite the popularity of these software packages in the academic setting, the complexity of these advanced methods coupled with a limited understanding of optimization theory often precludes practitioners


from actively using them. The disconnect between the global dynamic optimization community and practitioners is further widened by the lack of common mathematical and computational platforms understood and appreciated by all. This often drives kineticists and chemists – interested only in estimating the parameters of their reaction networks – to rely on off-the-shelf optimization software. Many popular commercial software packages such as COMSOL Multiphysics and MATLAB have GUI-enabled modules (for example, the Chemical Reaction Engineering Module in COMSOL Multiphysics (Comsol, 2013) and the Optimization Toolbox in MATLAB (T. Coleman, M.A. Branch and A. Grace, 1999)) that solve (4). Although these commercial software packages provide ease of use, they are mostly based on incomplete optimization methods, such as local optimization, evolutionary algorithms, simulated annealing and direct search methods. As a result, these software packages can often yield a sub-optimal solution to the estimation problem, which leads practitioners to misinterpret results and develop unreliable models for their reaction networks (A.B. Singer, J.W. Taylor, P.I. Barton and W.H. Green, 2006).

Figure 1: A screenshot of the main PERKS GUI. The sequence of commands or steps required to estimate the parameters of a reaction network model in PERKS is marked A through F.

4. PERKS Software
To bring some of the recent developments in solving global dynamic optimization problems to users and practitioners, we have developed the software PERKS: Parameter Estimation in Reaction Kinetic Systems. PERKS performs parameter estimation in complex reaction networks of the form (1) using the state-of-the-art branch-and-bound global dynamic optimization method developed in the Process Systems Engineering Laboratory (PSEL) at the Massachusetts Institute of Technology (B. Chachuat, A.B. Singer and P.I. Barton, 2006; A.B. Singer and P.I. Barton, 2006; A.B. Singer, J.W. Taylor, P.I. Barton and W.H. Green, 2006; J.K. Scott and P.I. Barton, 2010, 2013a,b; J.K. Scott, B. Chachuat and P.I. Barton, 2013). Unlike dGOPT (S.D. Schaber, 2014) and GDOC (A.B. Singer, J.W. Taylor, P.I. Barton and W.H. Green, 2006) – also developed in PSEL – the unique feature of PERKS is that it includes a graphical user interface (GUI) to streamline user interaction and reduce training time on the software. PERKS is a powerful global dynamic optimization interface based on dGOPT that allows users to enter model and kinetic data, perform parameter estimation and analyze estimation results. PERKS is designed keeping in mind the diverse backgrounds of our possible end-users, who may have a limited understanding of the computer programming languages (e.g., C, C++, FORTRAN) and mathematical methods (e.g., branch-and-bound algorithms, local optimization, numerical integration, relaxation and bounding methods) commonly used by the global dynamic optimization community. Although PERKS is designed to perform only global parameter estimation in reaction kinetic systems, it is customizable both in features and applicability.


The GUI for PERKS is developed in Microsoft Excel using VBA programming and includes easy-to-use clickable modules. A screenshot of the main PERKS GUI is shown in Figure 1. In Figure 1, the encircled objects, marked A through F, correspond to the sequence of commands the user needs to enter to estimate the parameters of a reaction network model. Each of the objects marked A through F is a clickable module and serves the following function: A is the "Model Entry" module, where the user enters, through a series of dialog boxes, the dimension of the reaction network (n_r), the number of state variables (n_x), the unknown parameters (n_p), the stoichiometric matrix S ∈ R^{n_x × n_r} and the rate function r for the reaction network. Once the reaction network model is defined, the user clicks on Module B and inputs the initial concentrations for the n_x species. Next, in Module C, the user specifies the number of species for which concentration measurements are available. For regularly sampled kinetic data, the user can also provide the sampling rate. Once the experimental data are entered in Module C, the user moves to Module D, where the user enters a priori information for the states and parameters. After saving the data in modules A through D, the user clicks on Module E to run the global dynamic optimization solver. Finally, the parameter estimates can be viewed by clicking Module F.

5. Case Study: Continuous Pharmaceutical Manufacturing Unit
This example is taken from a pharmaceutical plant designed to manufacture tablets in continuous mode. The plant-wide model for the process is based on an abridged version of an end-to-end pilot plant designed and constructed at the Massachusetts Institute of Technology for the Novartis-MIT Center for Continuous Manufacturing. The upstream process consists of several reactors and separation units for the synthesis and purification of intermediates and the active pharmaceutical ingredient (API); this is followed by a sequence of downstream solids-processing units, such as drying and extrusion, that produce the final pharmaceutical product in tablet form. Two reactants (C1, C2) are mixed at the beginning of the process in a static micromixer with a catalyst and recycled material. The mixture is then fed to a tubular reactor operated at elevated temperature, where an intermediate product C3 is formed. For parameter estimation we consider batch experiments with a postulated reaction network. The following reactions take place in the reactor of interest:

\[ \mathrm{C_1 + C_2 \;\underset{p_2}{\overset{Cat_1,\,p_1}{\rightleftharpoons}}\; C_3; \qquad C_1 \xrightarrow{p_3} C_4; \qquad C_3 \xrightarrow{p_4} C_5; \qquad 0.5\,C_1 + 1.5\,C_2 + 0.5\,C_3 \xrightarrow{p_5} C_6 + C_7,} \]

where C4 through C7 are the impurities produced by side reactions. The objective is to estimate the rate constants in the given reaction network. The experiment is run starting at t = 0. Table 1 gives the concentration profiles for all the species measured over the experiment time t ∈ [0, 5/3] at a rate of one sample per 10 minutes. We use PERKS to estimate the parameters of the reaction network. This is done by entering the following information in PERKS:
1. Module A: We provide the following information in Module A.

\[ x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{bmatrix}; \quad p = \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \end{bmatrix}; \quad r = \begin{bmatrix} p_1 x_1 \\ p_2 x_3 \\ p_3 x_1 \\ p_4 x_3 \\ p_5 \end{bmatrix}; \quad S = \begin{bmatrix} -1 & 1 & -1 & 0 & -0.5 \\ -1 & 1 & 0 & 0 & -1.5 \\ 1 & -1 & 0 & -1 & -0.5 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}. \]

PERKS provides an option to define custom labels for the states and parameters, and is also equipped with an on-screen keyboard to reduce typographical errors in defining S and r. The keyboard automatically populates and mounts the variables and numbers of interest to avoid any bad or invalid data entry. (A programmatic sketch of assembling S from the reaction list above is given after this list of modules.)


2. Module B: We provide the following initial conditions for the states:

\[ x(t_0) = [\,0.7923,\ 3.9421,\ 0,\ 0,\ 0,\ 0,\ 0\,]^T. \]

In Module B we need to provide initial concentrations for all the states, irrespective of whether the states are subsequently measured in an experiment or not. This is not restrictive, as in most experiments the initial concentrations of the species are known.
3. Module C: We first enter the number of states for which measurements are available, 7 in this example, and then the number of samples available. Based on the states measured, PERKS automatically creates a form of fixed size to enter the kinetic data. There is also an option to import kinetic data from other files. We enter the kinetic data as given in Table 1.
4. Module D: We provide the following a priori interval bounds for the states and parameters:

\[ \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \le \begin{bmatrix} x_1(t,p) \\ x_2(t,p) \\ x_3(t,p) \\ x_4(t,p) \\ x_5(t,p) \\ x_6(t,p) \\ x_7(t,p) \end{bmatrix} \le \begin{bmatrix} 1 \\ 4 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \le \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \end{bmatrix} \le \begin{bmatrix} 3 \\ 2 \\ 1 \\ 1 \\ 1 \end{bmatrix}. \]

PERKS requires a priori bounds on the states and parameters. Tight a priori bounds certainly reduce the computational time required by the optimization routine; nevertheless, in most cases even crude bound information suffices.
5. Module E: Clicking on this module starts the optimization routine with the default settings. There is also an option in this module to change the tolerances for both the optimization and integration subroutines.

6. Conclusions The objective of PERKS is to enable chemists and kineticists to quickly, yet efficiently estimate the parameters in complex chemical reaction networks. Apart from solving the parameter estimation problem, PERKS can also be customized to solve the reaction network discrimination problem. PERKS is packed with the state-of-the-art global dynamic optimization methods developed in the Process Systems Engineering Laboratory at Massachusetts Institute of Technology. The unique feature of PERKS is that it includes a graphical-user-interface (GUI) to streamline user interaction and reduce training-time on the software. Finally, PERKS is designed keeping in mind the diverse background of our possible end-users, with limited understanding of computer programming and mathematical methods commonly used by the global dynamic optimization community.

7. Availability
PERKS is available in an alpha version for educational and non-profit research purposes only. PERKS runs on the Linux platform and requires several third-party software packages, such as CPLEX, SNOPT, and CVODES. The installation and maintenance guide is being prepared and will be available shortly.


Table 1: Experimental data for the pharmaceutical reaction model. All concentrations are in mol/L.

t (hours) | x1(t)    | x2(t)    | x3(t)    | x4(t)    | x5(t)    | x6(t)    | x7(t)
0         | 0.792302 | 3.942141 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000
1/6       | 0.594226 | 3.751241 | 0.186349 | 0.003565 | 0.000657 | 0.000000 | 0.000657
2/6       | 0.294261 | 3.458084 | 0.466507 | 0.011251 | 0.004199 | 0.006259 | 0.004199
3/6       | 0.181754 | 3.345181 | 0.570457 | 0.015450 | 0.007368 | 0.005625 | 0.007368
4/6       | 0.134612 | 3.298436 | 0.611340 | 0.017510 | 0.009904 | 0.005467 | 0.009904
5/6       | 0.115835 | 3.279183 | 0.626711 | 0.018540 | 0.011805 | 0.004675 | 0.011805
1         | 0.108070 | 3.270745 | 0.630672 | 0.019570 | 0.014103 | 0.004595 | 0.014103
7/6       | 0.104663 | 3.268447 | 0.628295 | 0.020204 | 0.016797 | 0.005150 | 0.016797
8/6       | 0.103950 | 3.267100 | 0.625522 | 0.020758 | 0.019174 | 0.005150 | 0.019174
9/6       | 0.104188 | 3.268685 | 0.620214 | 0.021154 | 0.021392 | 0.005467 | 0.021392
10/6      | 0.103792 | 3.266347 | 0.617758 | 0.021788 | 0.024165 | 0.005705 | 0.024165

8. Acknowledgments
We gratefully acknowledge Spencer D. Schaber for helping us with PERKS. The authors would also like to thank the Novartis-MIT Center for Continuous Manufacturing for funding this project.

References
A.B. Singer, P.I. Barton, 2006. Global optimization with nonlinear ordinary differential equations. Journal of Global Optimization 34 (2), 159-190.
A.B. Singer, J.W. Taylor, P.I. Barton, W.H. Green, 2006. Global dynamic optimization for parameter estimation in chemical kinetics. The Journal of Physical Chemistry A 110 (3), 971-976.
A.M. Sahlodin, 2013. Global optimization of dynamic process systems using complete search methods. Ph.D. thesis, McMaster University.
A.M. Sahlodin, B. Chachuat, 2011. Convex/concave relaxations of parametric ODEs using Taylor models. Computers & Chemical Engineering 35 (5), 844-857.
B. Chachuat, A.B. Singer, P.I. Barton, 2006. Global methods for dynamic optimization and mixed-integer dynamic optimization. Industrial & Engineering Chemistry Research 45 (25), 8373-8392.
Comsol, 2013. Comsol Multiphysics - Chemical Reaction Engineering Module. User's Guide.
G. Franceschini, S. Macchietto, 2008. Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science 63 (19), 4846-4872.
J.K. Scott, P.I. Barton, 2010. Tight, efficient bounds on the solutions of chemical kinetics models. Computers & Chemical Engineering 34 (5), 717-731.
J.K. Scott, P.I. Barton, 2013a. Bounds on the reachable sets of nonlinear control systems. Automatica 49 (1), 93-100.
J.K. Scott, P.I. Barton, 2013b. Improved relaxations for the parametric solutions of ODEs using differential inequalities. Journal of Global Optimization 57 (1), 143-176.
J.K. Scott, B. Chachuat, P.I. Barton, 2013. Nonlinear convex and concave relaxations for the solutions of parametric ODEs. Optimal Control Applications and Methods 34 (2), 145-163.
S.D. Schaber, 2014. Tools for dynamic model development. Ph.D. thesis, Massachusetts Institute of Technology.
T. Coleman, M.A. Branch, A. Grace, 1999. Optimization Toolbox for Use with MATLAB: User's Guide, Version 2. The MathWorks, Inc.
Y. Bard, 1974. Nonlinear Parameter Estimation. Academic Press, New York.
Y. Lin, M.A. Stadtherr, 2007a. Deterministic global optimization of nonlinear dynamic systems. AIChE Journal 53 (4), 866-875.
Y. Lin, M.A. Stadtherr, 2007b. Validated solutions of initial value problems for parametric ODEs. Applied Numerical Mathematics 57 (10), 1145-1162.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50010-2

Optimization of a Sour Water Stripping Plant Using Surrogate Models
Natalia Quirante,* Jose A. Caballero

Institute of Chemical Processes Engineering. University of Alicante, PO 99, E-03080 Alicante, Spain. [email protected]

Abstract In this work we present a methodology for the large scale optimization of complex chemical processes. In the proposal, processes have been simulated using modular simulators, where units are classified depending on their numerical noise and the CPU time needed to converge. If the unit does not generate numerical noise, then the unit is kept in the simulator. However, if we have noisy and/or CPU time consuming units, these units are replaced by a surrogate model. In addition, some of these units can be aggregated to decrease the complexity of the model. And finally, some unit operations and constraints are included as explicit equations. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, surrogate models, and explicit equations. As a case study, we perform the multiobjective optimization of a sour water stripping plant, where stripping columns are replaced by Kriging metamodels, because they have proven to be accurate and reliable, and they allow a fast interpolation. For this purpose, we simultaneously consider economic aspects, heat integration and environmental impact. Our optimization strategies ensure the convergence to a local optimum for numerical noise-free models, and a solution around the optimum inside the tolerance of the numerical noise. Keywords: Process simulation, process optimization, Kriging interpolation, surrogate models.

1. Introduction
Modular simulators are commonly used due to their advantages: a) data can be easily checked for completeness and consistency, b) modules can be developed and tested independently, c) new modules can be easily added, and d) solution methods can be adapted to each model. However, when we want to optimize a process, the modular design loses some of its advantages: access to the explicit equations of the modules is limited, because they behave as 'grey box' models; some models generate numerical


noise; numerical noise can be amplified by recycles, which act as noise amplifiers; the CPU time to converge may be high; and if a module does not converge, the entire optimization crashes. To overcome these disadvantages, process models with a modular structure have been solved using surrogate models, because they are computationally efficient and they guarantee an acceptable degree of accuracy (Chung et al., 2011). A surrogate model approximates the input-output relationships of the simulation through a combination of mathematical functions. The surrogate model does not generate noise and is rather easy and quick to solve, while the original simulation model can be noisy, difficult to solve, and time consuming (Palmer & Realff, 2002). In this work, we follow a disaggregated approach, where only the simulation modules that introduce numerical noise or are difficult to converge are replaced by surrogate models. The rest of the units, such as mixers, splitters, pumps, heaters, coolers, etc., remain in the original flowsheet. Therefore, we deal with a hybrid system composed of simulator modules, surrogate models, and explicit equations (these can be unit operations added in equation form or constraints added by the designer). To approximate models, in this work we focus on Kriging interpolation (Krige, 1951), because Kriging models can be built with relatively little sampling data and are computationally efficient. On the other hand, energy efficiency is a fundamental aspect of chemical processes. The most effective method to reduce costs consists of thermal integration between process streams through heat exchangers and the cooling and heating systems. Additionally, reducing energy consumption also minimizes environmental impacts (Morar & Agachi, 2010).

2. Methodology
In this work we study the replacement of complex systems by surrogate models based on Kriging interpolation. The fit consists of two terms, a polynomial expression and a deviation from that polynomial, as shown in Eq. (1).

y(x)  =  f(x)  +  Z(x)

(1)

where Z(x) is a stochastic Gaussian process that models the deviation of y(x) from the polynomial term f(x). Details of Kriging interpolation can be found in the references (Jones et al., 1998; Quirante et al., 2015). The hybrid model we are dealing with can be written as shown in Eq. (2). The system is composed of units in the original simulation, surrogate Kriging models, third-party modules connected to the simulation, and explicit equations.
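To illustrate Eq. (1), the sketch below implements ordinary Kriging in one dimension: a constant trend plus a Gaussian-correlated deviation fitted to a few samples of a stand-in "expensive" function. The fixed correlation parameter theta and the toy response are assumptions made for brevity; practical tools, such as the MATLAB toolboxes used later in this work, estimate theta by maximum likelihood.

```python
# Ordinary Kriging in one dimension: y(x) = mu + Z(x), Gaussian correlation.
# theta is fixed for brevity; real toolboxes estimate it by maximum likelihood.
import numpy as np

def corr(a, b, theta):
    # Gaussian correlation matrix R_ij = exp(-theta * (a_i - b_j)^2)
    return np.exp(-theta * (a[:, None] - b[None, :])**2)

def fit(xs, ys, theta=10.0, nugget=1e-10):
    R = corr(xs, xs, theta) + nugget * np.eye(len(xs))
    Ri = np.linalg.inv(R)
    one = np.ones(len(xs))
    mu = (one @ Ri @ ys) / (one @ Ri @ one)   # generalized least-squares mean
    w = Ri @ (ys - mu)                        # weights for the deviation term
    return mu, w

def predict(x, xs, mu, w, theta=10.0):
    return mu + corr(np.atleast_1d(x), xs, theta)[0] @ w

# Example: surrogate of an "expensive" black-box response.
xs = np.linspace(0.0, 1.0, 8)
ys = np.sin(6 * xs) + 0.5 * xs                # stand-in for simulator output
mu, w = fit(xs, ys)
print(predict(0.37, xs, mu, w), np.sin(6 * 0.37) + 0.5 * 0.37)
```

Note that the predictor interpolates the sample points exactly (up to the small nugget), which is why well-separated, noise-insensitive samples matter, as discussed in the algorithm below.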


min  f(x)
s.t.  x_{i,j}^Out = G_{i,j}^S(x_i^In, u_i)      i ∈ Kriging units; j ∈ Kriging model i
      x_k^Out = G(x_k^In, u_k)                  k ∈ units in the process simulator
      x_j^Out = x_i^In                          connectivity                          (2)
      ||x_tear^cal − x_tear^sup|| ≤ eps
      h(x) = 0
      g(x) ≤ 0

where f(·) denotes the objective function. The first constraint represents the input-output structure of the Kriging interpolators. The second is the input-output structure of the units kept in the simulator or of the third-party modules. The third enforces connectivity, implicitly propagating the information through the flowsheet. The fourth constraint transfers the recycle (tear stream) structure of the flowsheet to the NLP solver. Finally, the last two constraints are explicit equations added to the model. The algorithm to solve the model is as follows (a compact sketch of the loop is given after the list):

1. N sampling points are generated and then sampled in the simulator. These points have to be separated enough to ensure that the noise has no effect on the simulation.

2. With the simulation data, the Kriging metamodels are built.

3. The accuracy of the metamodels is validated.

4. The simulation models are replaced by the Kriging surrogates.

5. The NLP optimization is performed. We can finish the optimization if the accuracy of the model is good; if not, the Kriging metamodels have to be refitted and the problem re-optimized.

6. If the current trust region does not cover the complete domain, we must resample around the optimal point until we can guarantee that the error in the gradient is below a given tolerance.
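A compact illustration of steps 2-6 follows; here a SciPy RBF interpolator stands in for the Kriging metamodels and a smooth quadratic plays the role of the noisy simulator, both assumptions made only to keep the sketch self-contained and runnable.

```python
# Surrogate-based optimization loop (steps 2-6): fit a surrogate to the
# sampled "simulator", optimize it, validate at the optimum, and resample
# around the optimum until surrogate and simulator agree.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def simulator(x):                      # toy stand-in for a flowsheet unit
    return (x[..., 0] - 0.3)**2 + (x[..., 1] - 0.7)**2

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (20, 2))         # step 1: initial sampling plan
y = simulator(X)

for it in range(10):
    surrogate = RBFInterpolator(X, y)  # step 2: build the metamodel
    res = minimize(lambda x: surrogate(x[None, :])[0], x0=np.full(2, 0.5),
                   bounds=[(0, 1), (0, 1)])          # step 5: NLP on surrogate
    x_opt = res.x
    err = abs(simulator(x_opt) - surrogate(x_opt[None, :])[0])
    if err < 1e-6:                     # steps 3/5: accept if accurate enough
        break
    # step 6: resample around the current optimum and refit
    X = np.clip(np.vstack([X, x_opt + 0.05 * rng.standard_normal((5, 2))]), 0, 1)
    y = simulator(X)

print("optimum:", x_opt, "value:", simulator(x_opt))
```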

In this work, the sampling points are distributed throughout the search area using a max-min approach (maximizing the minimum distance between two points). For this, we fix the bounds of all the independent variables and then distribute the rest of the points through the max-min approach.
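The paper does not detail how the max-min placement is computed, so the sketch below uses a common greedy heuristic over a random candidate pool: the box corners fix the variable bounds, and each remaining point is the candidate farthest from the design built so far.

```python
# Greedy maximin sampling: start from the box corners (bounds of the
# independent variables) and repeatedly add the candidate point whose
# distance to the current design is largest.
import numpy as np

def maximin_design(lb, ub, n_points, n_candidates=2000, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    corners = np.array(np.meshgrid(*zip(lb, ub))).T.reshape(-1, dim)
    design = list(corners)                       # fix the variable bounds first
    cand = rng.uniform(lb, ub, size=(n_candidates, dim))
    while len(design) < n_points:
        d = np.min(np.linalg.norm(cand[:, None, :] - np.array(design)[None, :, :],
                                  axis=2), axis=1)
        design.append(cand[np.argmax(d)])        # farthest-from-design candidate
    return np.array(design)

# 10 sampling points in, e.g., [300, 400] K x [1, 10] bar (illustrative ranges)
print(maximin_design(lb=[300.0, 1.0], ub=[400.0, 10.0], n_points=10))
```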

3. Case study: sour water stripping plant
The case study consists of the multiobjective optimization of a Sour Water Stripping (SWS) plant located in Germany. The objective is to remove sulphides and ammonia from water by stripping with steam.


The sour water comes from different sources: vacuum distillation units, FCC fractionators, crude distillation units, and washing from a petrochemical complex. The water from the petrochemical complex contains ethanol and ethyl tert-butyl ether (ETBE). This stream is sent to a stripper where an ethanol-rich gas is recovered. Water from the other sources is sent to flash drums, where hydrocarbons are removed. After the flash drums, the sour water is sent to a first set of strippers (strippers E1, E2 and E3 in Figure 1), where ammonia and H2S are removed. Then, the overhead streams are mixed and sent to a second set of strippers, where high-purity hydrogen sulphide is recovered overhead (units E4, E5 and E6 in Figure 1). The bottoms are sent to the ammonia strippers (units E7 and E8), where an ammonia-rich gas is recovered. Water free of contaminants is reused in other processes. Part of this water is sent to the flare (unit E9). The simulation is carried out in Aspen HYSYS v8.4. Two different property packages are used: the NRTL model for streams and units where ethanol and ETBE are present, and the Sour PR model for the rest of the flowsheet. The aim of this work is to optimize the operating conditions of the SWS plant, including heat integration and an LCA analysis. The operating costs include the costs of cooling water, steam and coal, and the investment costs associated with the new HEN.

[Figure 1. Complete flowsheet of the sour water stripping plant: sour water sources, sour water stripping (E1-E3), H2S denitrification (E4-E6), NH3 concentration (E7, E8), sour gas washing, and flare (E9).]


All the stripping columns have been replaced by surrogate models. In particular, units E4 and E5 form a highly integrated system with a thermal couple (the liquid stream from E4 goes to E5 and the vapour stream from E5 goes to E4) and a recycle stream, and units E7 and E8 also form an integrated system. Therefore, columns E4 and E5 are merged into a single surrogate, and a similar situation applies to columns E7 and E8. Thereby, the surrogates are numerically more efficient. In this work we have studied the stripper efficiency, modifying the steam flow rate, feed composition and temperature, assuming a fixed structure. All the models were simulated in Aspen HYSYS v8.4. The Kriging surrogate models were calibrated using MATLAB. As NLP solver we use CONOPT, available through TOMLAB-MATLAB. The objective function, explicit constraints, and the rest of the model are written in a proprietary modelling language (Caballero et al., 2014) interfaced with TOMLAB. We studied the process before and after designing the heat exchanger network. To evaluate the process from an environmental point of view, we use the ReCiPe indicator, available in the Ecoinvent Database v3, whose metric is based on the principles of Life Cycle Assessment (LCA). Figure 2 shows the three main impact categories. As can be seen, relative to the initial plant, the impact is reduced by about 15 % after the economic optimization and by about 49.5 % after heat integration.

[Figure 2. Main environmental impacts of the sour water stripping plant: ReCiPe points for ecosystem quality, human health and resources depletion, comparing the initial plant, the economic optimum, and the heat-integrated design.]


4. Conclusions
In this paper we have proposed a hybrid approach to optimize a large-scale flowsheet, in which some units are kept in the process simulator, others are replaced by surrogate models, and constraints are added in equation form. We used a disaggregated approach in which a single large surrogate model is replaced by a set of smaller surrogates. According to the degrees of freedom, some columns can be merged into a single surrogate. In this optimization, the stripping columns (implicit black-box functions of the simulator) were replaced by Kriging metamodels, owing to their numerical noise and lack of convergence. The Kriging metamodels have proven to be accurate and reliable, and they also allow fast interpolation. The economic optimization yields around 46.3 % savings relative to the initial plant, and it reduces the total impact of the stripping plant by around 15 %. The HEN provides energy savings (around 39 % in heating and 25 % in cooling) relative to a plant without heat integration, and it also reduces the total impact of the stripping plant by around 49.5 %.

Acknowledgements
The authors wish to acknowledge the financial support from the Ministry of Economy and Competitiveness of Spain, under project CTQ2012-37039-C02-02.

References
J.A. Caballero, M.A. Navarro, R. Ruiz-Femenia, I.E. Grossmann, 2014. Integration of different models in the design of chemical processes: Application to the design of a power plant. Appl. Energy, 124, 256-273.
P.S. Chung, M.S. Jhon, L.T. Biegler, 2011. The holistic strategy in multi-scale modeling. In: G.B. Marin (editor), Advances in Chemical Engineering, Vol. 40, Academic Press, p. 59-118.
D.R. Jones, M. Schonlau, W.J. Welch, 1998. Efficient global optimization of expensive black-box functions. J. Glob. Optim., 13, 455-492.
D.G. Krige, 1951. A statistical approach to some mine valuation and allied problems on the Witwatersrand. Master's thesis, University of Witwatersrand, South Africa.
M. Morar, P.S. Agachi, 2010. Review: Important contributions in development and improvement of the heat integration techniques. Comput. Chem. Eng., 34, 1171-1179.
K. Palmer, M. Realff, 2002. Metamodeling approach to optimization of steady-state flowsheet simulations. Chem. Eng. Res. Des., 80, 760-772.
N. Quirante, J. Javaloyes, J.A. Caballero, 2015. Rigorous design of distillation columns using surrogate models based on Kriging interpolation. AIChE J., 61, 2169-2187.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50011-4

Security Analysis of the Electronic Management System for a Total Site Utility System
Oscar Manso*

Alexandra Institute A/S, Åbogade 34, 8200 Århus, Denmark

[email protected]

Abstract
This paper presents the security analysis of the Electronic Management System (EMS) of a Total Site Utility System as proposed under the scope of the Efenis project. The Efenis project has been funded by the European Commission via the Seventh Framework Programme (EC FP7) with the aim of improving process integration by applying novel methods of total site targeting. 16 different international partners, including 10 academic institutions and 6 industrial companies, have participated in this project. A model of the EMS is presented with the aim of integrating it into a Total Site Utility System in order to achieve a reliable implementation capable of accomplishing the mission of the Efenis project. The results also provide the design, implementation and deployment of a modular security system that helps to provide the security controls recommended during the analysis.
Keywords: Total Site analysis, security, risk analysis, electronic management systems.

1. Introduction
Results from the Efenis project have shown the enormous potential of energy savings that can be achieved by applying innovative energy management systems based on integration across industries and processes outside the boundaries of a single plant. On the other hand, the secure operation and management of this innovative integration model present some important challenges that require further analysis. This paper presents the results of the security analysis undertaken in the context of Efenis, first presenting the main security requirements of a Total Site Utility System. Then, a model for the Electronic Management System (EMS) of a Total Site Utility System is defined, taking into account the main challenges presented by such a system. Next, we present the results of a risk analysis undertaken on the proposed EMS model, introducing the recommendations to mitigate the main risks detected. We then introduce the design of a modular security system providing sufficient security controls. Finally, the resulting security model is integrated into the Efenis toolbox, which presents the tools developed by the partners of the Efenis project.


2. Overview of Industrial Control Systems
Industrial Control System (ICS) is a general term that covers several types of control systems used in industrial production, including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) and other smaller control system configurations such as Programmable Logic Controllers (PLC) often found in the industrial sector. To support further discussion, this section presents the definitions of key ICS components used in control and networking (Krutz 2006).

• Programmable Logic Controller (PLC). A small industrial computer originally designed to perform the logic functions executed by electrical hardware such as relays, switches or sensors.

• Remote Terminal Unit (RTU). These units basically differ from PLCs in that RTUs are more suitable for wide geographical telemetry, often using wireless communications, while PLCs are more suitable for local area control.

• Master Terminal Unit (MTU). The device that acts as the Master in an ICS system, controlling the operation of the Slave subsystems (PLCs and RTUs).

• Human-Machine Interface (HMI). The HMI is the software and hardware that allows human operators to monitor the state of a process under control, modify control settings and manually override automatic control operations in the event of an emergency.

3. Electronic Management System Model for Efenis
The major novelty of the Efenis project has been the creation of the foundation required for a comprehensive, high-impact industrial deployment of energy systems based on the Total Site Integration approach in the target industries, and their subsequent commercial exploitation. Figure 1 represents an example of a Total Site Utility System consisting of two main sites (Sites A and B), where the Electronic Management System (EMS) of the Total Site is placed on a third site (Site C) named the Primary Control Centre.
3.1. Considerations about the Primary Control Centre
Given that the different sites that make up the Total Site Utility System may belong to different companies with different interests, the Primary Control Centre for the Total Site Utility System should mainly behave as a monitoring system for the whole site, used to supervise and validate the energy efficiencies gained from the integration of the different sites. Indeed, privacy and security considerations recommend that the Primary Control Centre should not hold a full representation of all the components at each site, but just a simplified view of each of them, sufficient to understand and control the whole picture of the global site.


Figure 1 Example of an ICS for a Total Site Utility System

The operators of the Primary Control Centre should have the ability to provide recommended suggestions about the parameters to be set at each site in order to improve the efficiency of the whole system. But in any case, they should never be able to take control of any site. It should be up to the operators of each particular site to control the operations of their own site.
3.2. EMS Model
Taking these considerations into account, the model shown in Figure 1 presents the EMS modules (EMS Master and Slave) as entities that are completely separated from the HMI stations. Given that these EMS modules are interconnected via a WAN network, they are placed inside the demilitarized zone (DMZ) in order to minimize the risk that any plausible attack on these modules could affect any of the other components of a Control Centre. The main components of the proposed EMS system are represented in Figure 2.

• Station Interface I1. This interface represents the connection between the HMI and its corresponding EMS Slave. The HMI Station should send information to the EMS Slave but, for security reasons, this information flow should never go in the opposite direction. This interface supports the creation, maintenance and presentation of a simplified view of the components of the HMI station to be published on the Total Site.


Figure 2 Main components of the proposed EMS System

• Module M1. This module is in charge of automatically feeding the corresponding data from the HMI Station onto interface I1. It is to be implemented and adapted to each HMI technology available in the Local Control Centres.

• Interface I2. This interface is used by the Local Administrator to configure I1 by defining the simplified view of the HMI Station that s/he would like to publish in the EMS System.

• Interface I3. This interface is used by the local operators of the system to monitor the status of their local system and to show the suggestions for control operations received from the global system.

• Interface I4. Interface I4 defines the interconnection between the EMS Master and Slave systems. It should allow the EMS Master to retrieve the models from each Slave and exchange information with each site.

• Interface I5. This interface is used by the Global Administrators to configure the global model for the Total Site Utility System.

• Interface I6. This interface should be accessed securely and remotely by operators at each Local Centre in order to monitor and survey the status of the global EMS System. Each Local Centre should define a group of operators that take the role of Global Operators, acting on behalf of that Local Centre.

4. Risk Analysis of the EMS Model for Efenis
The risk analysis of the EMS Model for Efenis was performed following a formal and rigorous analysis methodology based on the OCTAVE Allegro approach (Octave 2015). The analysis was based on the parameters considered essential to accomplish the mission of the Efenis project, which defined its main objective as: "To facilitate and accelerate a move to low carbon manufacturing processes and site management by deployment and demonstration of innovative energy management systems and enabling efficiency technologies, which extend the scope of energy management outside the boundaries of a single plant to total site and then beyond the total site to district heating/cooling systems".


Figure 3 Proposed PKI infrastructure used to secure main components of the EMS System

The result of the analysis showed that the EMS model can be deployed on a Total Site Utility System and achieve the mission of the Efenis project with an acceptable risk score if it conforms to the specifications presented and is deployed with all the recommended security controls. Special emphasis should be placed on securing interface I4, which is used to connect the EMS nodes via a WAN network.

5. Proposed Public Key Infrastructure used to secure the EMS System
In order to fulfil the security recommendations derived from the previous security analysis, a modular Public Key Infrastructure (PKI) was put in place, in charge of issuing and managing the life cycle of the digital certificates used by the different entities interacting with these interfaces. The Efenis PKI infrastructure introduced the following components (see Figure 3):

• Efenis CA: Certification Authority used to generate the digital certificates required to secure the EMS System.

• Efenis Site RA: Registration Authority used to manage the digital certificates to be installed on the EMS Slave system at each Local Site. A digital certificate is also generated for the EMS Master system.

• Efenis Personal RA: Registration Authority used to manage two types of personal certificates: Global Operators and Global Administrators of the EMS Total Site. The members of the Management Board are responsible for approving the list of people belonging to each of these two groups.

• Efenis VA: the Efenis Validation Authority is the entity in charge of keeping and reporting the validity status of each Efenis credential.


Figure 4 Efenis Software Toolbox, https://demo.efenis.alexandra.dk/public-portal/

The resulting infrastructure has been implemented and deployed as a RESTful (Fielding 2000) web interface based on JSON (JSON 1999) data exchange, so that it can be easily integrated in different settings. As a proof of concept, the security infrastructure has been integrated into the Efenis software toolbox, a web portal used to present and secure the access to the software tools developed by the partners of the Efenis project (Figure 4).
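As an illustration of how a client might consume such an interface, the snippet below performs a mutually authenticated TLS request returning JSON; the endpoint URL and the certificate file names are hypothetical placeholders, not the actual Efenis deployment details.

```python
# Hypothetical client call to a RESTful JSON interface secured with
# mutual TLS: the client authenticates with an Efenis-issued certificate
# and verifies the server against the Efenis CA certificate.
import requests

response = requests.get(
    "https://ems-master.example.org/api/site-model",   # hypothetical endpoint
    cert=("operator_cert.pem", "operator_key.pem"),    # client cert + key
    verify="efenis_ca.pem",                            # trust anchor (CA cert)
    timeout=10,
)
response.raise_for_status()
print(response.json())                                 # JSON payload
```

Pinning verification to the project CA rather than the system trust store matches the modular PKI design above, where all credentials descend from the Efenis CA.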

6. Conclusions
This paper analysed the main security challenges that an Electronic Management System of a Total Site Utility System is confronted with and showed how to overcome them.

Acknowledgement
The author acknowledges the financial support from the EC FP7 project ENER/FP7/296003 'Efficient Energy Integrated Solutions for Manufacturing Industries - EFENIS'.

References
R.T. Fielding, 2000. Architectural Styles and the Design of Network-based Software Architectures. PhD Dissertation, University of California, Irvine.
JSON, 1999. http://www.json.org/. Last accessed November 4, 2015.
R.L. Krutz, 2006. Securing SCADA Systems. Wiley Publishing Inc. ISBN: 0764597876.
Octave, 2015. https://www.cert.org/resilience/products-services/octave/. Last accessed November 3, 2015.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50012-6

Enhanced production of methanol from switchgrass: CO2 to methanol
Mariano Martín, Ignacio E. Grossmann
Department of Chemical Engineering, University of Salamanca, Plz. Caídos 1-5, 37008 Salamanca, Spain
Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213

Abstract
In this work, an integrated facility that produces methanol from switchgrass and reuses the captured CO2 to enhance its production capacity by 50 % is proposed. The process consists of two sections: biomass processing to syngas and its conversion to methanol, and the electrolytic section, where hydrogen is produced to hydrogenate the CO2 captured during syngas cleaning. The integrated facility produces 207 Mgal/yr of methanol and 318 kt/yr of oxygen, but requires a large amount of electricity to generate the hydrogen. As a result, it can only be used in regions where the wind velocity is above 8 m/s and the solar radiation is above 5 kWh/m2/day, such as the US Midwest, certain regions in China, or the south of Europe. The investment is high, around 1000 M€, and location dependent, but the production cost of methanol is a promising 0.25-0.35 €/kg at a high production capacity.
Keywords: Solar Energy, Biomass, Wind power, Synthetic methane, Hydrogen

1. Introduction
Methanol is one of the most important chemical products. It can be used as a solvent, as an intermediate for the production of fuels, or as a raw material for a number of higher-value products. Typically, it has been produced from the reforming of natural gas or coal. The use of fossil-based sources is the main weakness of methanol from the sustainability point of view when it is used within the biofuels industry, for instance in biodiesel production. Methanol is the alcohol of choice due to the fast reaction times and high conversions but, above all, its low price. However, it is possible to produce methanol from a number of wastes or renewable sources such as glycerol, biomass or CO2. In particular, in the production of most biofuels from biomass, a fraction of the biomass is lost as CO2; in fact, at least 0.675 kg of CO2 is produced per kg of methanol obtained via switchgrass gasification. While CO and CO2 compete in the hydrogenation process, and raise several issues if processed together over the same catalyst, we can hydrogenate them separately. In this work we integrate a facility for the production of methanol from switchgrass, via syngas production and methanol synthesis, with an electrolytic section that uses PV solar, wind and/or biogas as energy sources for the production of hydrogen. This hydrogen is used to hydrogenate the CO2 captured in the biomass section of the process. Therefore, the facility consists of two sections. The switchgrass-based section includes biomass gasification, gas reforming and cleanup, CO2 capture and methanol synthesis. The processing capacity of biomass is given by typical results in the literature, 1,700 t/d.


The electrolytic section involves power production, oxygen and hydrogen purification and methanol synthesis.

2. Process description
The process consists of four sections: switchgrass processing, water electrolysis, and methanol synthesis from syngas and from CO2. Figure 1 shows the scheme of the integrated facility.

Figure 1.- Integration of switchgrass and renewables for zero CO2 emissions in the production of methanol

2.1. Biomass processing
The biomass is washed and milled before gasification. Two alternatives are considered. The Renugas gasifier (R) operates at medium pressure using oxygen and produces a gas rich in CO2. The light hydrocarbons generated can be further reformed to hydrogen and CO. High-pressure gasification allows a large throughput per reactor volume and reduces the need for downstream pressurization, thereby reducing the overall power needed. However, the efficiency of the gasifier is lower, and a large amount of steam and pure oxygen is needed to avoid syngas dilution and reduce the size of the gasifier (Eggeman, 2005). The low-pressure gasifier, Battelle Columbus (Ferco, F), is indirectly heated. The system consists of two chambers, a gasifier and a combustor. Sand (olivine) provides the energy for gasification by being heated up in the combustor with the energy released in the combustion of char. This type of gasifier produces a gas with a low CO2 content, but it contains heavier hydrocarbons. The reactor is fast fluidized, allowing throughputs equal to those of the bubbling fluidized Renugas gasifier despite the nearly atmospheric operation. Working at a lower pressure decreases the operating cost (Phillips, 2007). Subsequently, the syngas is reformed to remove the hydrocarbons. Steam reforming (S) is an endothermic reaction but provides a higher concentration of hydrogen in the syngas. Partial oxidation (O) is exothermic, but its yield to hydrogen is lower. Finally, the raw syngas is cleaned in two steps. The first is cold cleaning by means of a scrubber for low-pressure gasification, or a ceramic filter operating at high temperature for high-pressure gasification. The second step consists of a multibed PSA system used to remove the last traces of hydrocarbons, H2S and CO2, in that order. Once the syngas is purified, we use it for traditional


methanol synthesis. The typical process used so far in industry for the production of methanol from syngas is carried out:

CO + 2 H2 ↔ CH3OH
CO2 + H2 ↔ CO + H2O     (1)

2.2. Water electrolysis
The second section of the facility corresponds to water electrolysis (NEL, 2012). It uses energy from solar, wind or a power plant based on biogas. On the one hand, we have the oxygen line, which carries water vapor and traces of hydrogen. This water is condensed, and the resulting stream is dehydrated using a zeolite adsorber before compressing and storing the oxygen. This oxygen can be used in the direct gasifier and/or the partial oxidation section of the biomass processing. On the other hand, we have the hydrogen stream, containing traces of oxygen and water vapor, most of which is separated by condensation. The oxygen negatively affects the catalyst, so it is eliminated using a deoxygenation reactor, where water is produced. Next, a zeolite is used to dehydrate the stream. At this point we mix the hydrogen with the CO2 from the sour gas capture section in the biomass-based part of the flowsheet. The gas phase is adjusted to the optimal operating conditions using a compressor and a heat exchanger. Methanol is produced through a series of equilibria over a catalyst, typically Cu/ZnO/Al2O3. The optimal operating conditions (ratio of H2 to CO2 and operating temperature and pressure at the reactor) are determined. Unreacted gases are separated from the methanol using a flash and a distillation column and recycled back to the reactor.

CO2 + 3 H2 ↔ CH3OH + H2O
CO2 + H2 ↔ CO + H2O     (2)
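A quick stoichiometric check of the first reaction in Eq. (2) recovers the figures used in the solution procedure below: roughly 1.37 kg of CO2 and 0.19 kg of H2 per kg of methanol at full conversion, consistent with the 1.375 kgCO2/kg value reported in Section 3.2 (side reactions are neglected in this back-of-the-envelope sketch).

```python
# Stoichiometry of CO2 + 3 H2 -> CH3OH + H2O (molar masses in kg/kmol).
M_CO2, M_H2, M_MeOH = 44.01, 2.016, 32.04
kg_CO2_per_kg_MeOH = M_CO2 / M_MeOH        # ~1.374, matching the 1.375 in the text
kg_H2_per_kg_MeOH = 3 * M_H2 / M_MeOH      # ~0.189
print(kg_CO2_per_kg_MeOH, kg_H2_per_kg_MeOH)
```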

3. Solution procedure
3.1. Modelling approach
The different units are modelled using first principles (mass and energy balances), as in the case of heat exchangers and compressors; first principles combined with experimental data for conversions, i.e. gas reforming, or species equilibrium for the WGSR and the methanol synthesis reactor (Cherednichenko, 1953; Bissett, 1977); experimental correlations that define the gas composition produced by the gasifiers (Eggeman, 2005; Phillips et al., 2007); and rules of thumb for the adsorbent beds or small parameter estimation problems for the performance of the wind turbines. We formulate the model for the superstructure as an MINLP problem in GAMS.
3.2. Solution procedure
The large production of CO2 and the limited size of current solar fields, wind farms and biogas facilities will typically lead to infeasible operating solutions. Therefore, we used the following procedure to optimize the operation of the integrated facility. First, we optimize the production of methanol from CO2 and hydrogen (P2), without accounting for the power consumption in the biomass processing section, W(Switchgrass section) = 0, to determine the energy needs for processing the captured CO2. We obtain a power ratio of 25,150 kJ/kgCO2 and a requirement of 1.375 kg of CO2 per kg of methanol produced. Next, we optimize the production of methanol from biomass (P1) to determine the optimal topology of the switchgrass processing path and the CO2 produced. Bear in mind that the more CO2 produced, the more energy we require to process it, in spite of


the additional methanol produced, and thus this process may lead to infeasible operation depending on the availability of renewable energy for hydrogen production. Finally, for the optimal topology of switchgrass processing, we re-evaluate the operation of the electrolytic section, bearing in mind that the power required by the compressors in the switchgrass processing section must also be provided by renewable sources. The first problem (P1) optimizes the objective function given by Eq. (3), where methanol is produced from biomass, subject to the model given in Section 2.1. Power_ratio is the energy required to process a kilogram of CO2. The cost of oxygen is assumed to be 0.021 €/kg, the steam price 0.019 €/kg and the cost of electricity 0.06 €/kWh. We fix the processing capacity of the biomass section at 20 kg/s of switchgrass, so that a further comparison with the use of CO2 for algae growing can be made (Martín & Grossmann, 2015a).

Z = fc(MetOH)_Bio + fc(MetOH)_CO2 − C_E · Σ_{i ∈ compressors} W(Compress_i) − C_S · m_steam − C_O2 · fc(O2) − C_E · Power_ratio · fc(CO2)     (3)

The model is formulated as an MINLP that has four topology options, namely two gasifiers and two reforming modes. We solve four NLPs of 2,400 equations and 2,700 variables each. The second problem (P2) uses the CO2 captured during syngas production as a source for the production of methanol via its hydrogenation. The link between the biomass section and this one is given by the fact that the CO2 produced there is processed here, and by the need to provide the electrical power for the biomass preprocessing stage and the compressors. Thus, the total power to be produced is given by Eq. (4):

W(Electrolyzer) + Σ_{i ∈ {compressors, H2, O2}} W(Compress_i) + W(Switchgrass section) = Power     (4)

The energy can be produced using wind turbines, solar panels or biomass, so that Power ≤ Power_generated, where

Power_generated = n_turbines · P_nominal / (1 + exp(−(v − a)/m)) + n_panels · Solar_inc · A_panel · 3600 / (days_month · 24 · 3600) + Energy_Biogas     (5)

The objective function for this section is given by Eq. (10), subject to the model given in Section 2.2, where the different terms are given by Eqs. (6)-(9) (IRENA, 2012; Hahn, 2012):

C_Wind = (1/3) · (Invest_Wind · P_nominal · n_wind_turbines / time) + C_Operation,Wind · P_nominal · n_wind_turbines     (6)

C_Solar = (1/(3 · time)) · n_panel · (P_panel · c_panel + A_panel · c_area)     (7)

Biogas_Investment (€/kW) = 19352 · Energy_Biogas(kW)^(−0.27)     (8)

C_Biogas = (1/3) · (Energy_Biogas / time) · Biogas_Investment + C_Operation,Biogas · Energy_Biogas     (9)

Z = fc(MeOH) − C_Wind − C_Solar − C_Biogas + C_O2 · fc(O2)     (10)

The model comprises 540 equations and 647 variables. We may assume that the number of turbines, solar panels and electrolyzers is continuous, since costs are given per kW and a unit can operate at a fraction of its design specifications. Next, an economic evaluation is carried out to estimate the investment and production costs.
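As a numerical illustration of the availability constraint in Eq. (5), the sketch below evaluates the generated power for one candidate location; every parameter value (the logistic power-curve constants a and m, the panel data and efficiency term, the biogas output) is an illustrative placeholder rather than a value from the paper.

```python
# Illustrative evaluation of Eq. (5): wind (logistic power curve), PV and
# biogas contributions to the available power. All numbers are placeholders.
import math

def power_generated(n_turb, p_nominal_kw, v, a, m,
                    n_panel, solar_inc_kwh_m2_month, area_m2, eff,
                    days_month, energy_biogas_kw):
    wind_kw = n_turb * p_nominal_kw / (1.0 + math.exp(-(v - a) / m))
    # Monthly incident energy converted to an average power, as in Eq. (5);
    # the explicit panel efficiency factor is an assumption of this sketch.
    solar_kw = n_panel * eff * solar_inc_kwh_m2_month * area_m2 / (days_month * 24.0)
    return wind_kw + solar_kw + energy_biogas_kw

# 40 x 2 MW turbines at v = 8.5 m/s, 200,000 panels of 1.6 m2, 30 MW of biogas.
print(power_generated(n_turb=40, p_nominal_kw=2000.0, v=8.5, a=8.0, m=1.3,
                      n_panel=200_000, solar_inc_kwh_m2_month=150.0,
                      area_m2=1.6, eff=0.16, days_month=30,
                      energy_biogas_kw=30_000.0))
```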


4. Results
4.1. Plant topology
A flow of 1,700 t/d of biomass is processed, following the procedure described in Section 3, for a plant located in Cádiz, Spain, chosen for its large solar incidence and high wind speed. Figure 2 shows the relative objective function for the different options. It turns out that the optimal process is the one that uses indirect gasification followed by steam reforming. Actually, the option using direct gasification is quite close, and it is discarded only because of its high production of CO2, whose hydrogenation eventually becomes infeasible due to the large energy consumption.
4.2. Feasible plant locations
The operation of the facility is limited by the current sizes of wind farms (up to 125 MW), solar fields (up to 250 MW) and biogas power plants (up to 50 MW). Figure 3 describes the feasible region for the operation of the plant. As expected, higher wind velocities are needed for the plant to operate when the solar irradiation is lower. Based on this result, and assuming that the facility must be located in a single spot rather than producing the energy in several locations, there are only a few feasible locations worldwide. Thus, in Figure 4 we show the feasible locations of such a plant based on the needs for solar, wind and biogas energy.


Figure 2.- Topology selection for the biomass based section. F: Ferco. R: Renugas. S: Steam reforming; O: Partial oxidation

Figure 3.-Feasible region of operation

Figure 4.- Possible allocations for such a plant.

Regions where wind velocity is above 8 m/s and solar radiation is above 5 kWh/m2 /day include the US Midwest, certain regions in China or the South of Europe for instance.


The investment is high, around 1000 M€, and location dependent, but the production cost is a promising 0.25-0.35 €/kg of methanol at a high production capacity. The comparison between algae and the present technology for capturing and transforming CO2 into fuels favors algae, not only because of a reduced investment cost, 180 M€ vs. 1000 M€, but also because the production cost is almost halved when using algae. Bear in mind that the algae route relies on novel harvesting technologies, and thus the oil production cost is drastically reduced compared to conventional methods (Martín & Grossmann, 2015).

5. Conclusions
In this work we have designed an integrated facility for the enhanced production of methanol from switchgrass by capturing and hydrogenating the CO2 that is released in the production of the syngas. The facility consists of biomass processing to syngas, gas cleanup and composition adjustment, followed by methanol synthesis. Water is split using electricity produced from biogas, solar PV or wind turbines to hydrogenate the CO2 to methanol. The integrated facility enhances the production of methanol by 50 %, reaching a production capacity of 207 Mgal/yr of methanol and 318 kt/yr of oxygen, but it requires a large amount of electricity to generate the hydrogen. As a result, it can only be used in regions where the wind velocity is above 8 m/s and the solar radiation is above 5 kWh/m2/day, such as the US Midwest, certain regions in China or the south of Europe. The investment is high, around 1000 M€, and location dependent, but the production cost is a promising 0.25-0.35 €/kg of methanol at a high production capacity.

References
L. Bissett, 1977. Chemical Engineering, 84 (21), 155.
V.M. Cherednichenko, 1953. Dissertation, Karpova Physico Chemical Institute, Moscow, U.S.S.R.
T. Eggeman, 2005. Updated Correlations for GTI Gasifier - WDYLD8. Technical memorandum for Pam Spath, National Renewable Energy Laboratory, Golden, Colorado, June 27, 2005.
H. Hahn, 2012. Guideline for financing agricultural biogas projects - Training material for financing bodies. IEE/09/848/SI2.558364, D.3.5_IWES_EN.
IRENA, 2012. Renewable Energy Technologies: Cost Analysis Series, Vol. 1, Power Sector, Wind Power.
NEL Hydrogen, 2012. Technical Data. http://www.nel-hydrogen.com/home/?pid=75
S. Phillips, A. Aden, J. Jechura, D. Dayton, T. Eggeman, 2007. Thermochemical ethanol via indirect gasification and mixed alcohol synthesis of lignocellulosic biomass. Technical Report NREL/TP-510-41168, April 2007.
Z. Yuan, B. Chen, 2012. Process Synthesis for Addressing the Sustainable Energy Systems and Environmental Issues. AIChE J., 58 (11), 3370-3389.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50013-8

Techno-economic analysis of the production of epichlorhydrin from glycerol
Alberto Almena, Mariano Martín
Department of Chemical Engineering, University of Salamanca, Plz. Caídos 1-5, 37008 Salamanca, Spain

Abstract
In this work we evaluate the production of epichlorohydrin from glycerol, the main byproduct of the biodiesel industry. A process flowsheet is synthesized consisting of glycerol purification, reaction with HCl in hexanoic acid, and final reaction with NaOH to produce epichlorohydrin. Both reactions are complex equilibria. The reaction medium, hexanoic acid, must be recovered. Finally, the products require purification. The process is selected based on the use of cheap and available secondary raw materials, namely HCl and NaOH, compared to other technical alternatives. We couple MATLAB with CHEMCAD to simulate the mass and energy balances. Finally, the economic analysis shows that the process is profitable due to the high selling price of epichlorohydrin. For the production of 27 kt/yr of epichlorohydrin at 99.9 %, 41.5 kt/yr of glycerol, 29 kt/yr of HCl and 15.9 kt/yr of NaOH are needed. The investment adds up to 63.7 M€, with a production cost of 1.28 €/kg.
Keywords: Glycerol, Process synthesis, Epichlorhydrin

1. Introduction
Biodiesel production has increased over the last decades in order to reduce the dependence on fossil fuels. As a result, its main byproduct, glycerol, has saturated the market, and novel uses are being developed to provide further value. Fuels such as hydrogen, methanol, ethanol or glycerol ethers can be produced with a limited profit. Alternatively, high added-value products such as polymers and intermediates can also be obtained from glycerol. So far, epichlorohydrin has been produced from crude-oil sources, such as propylene, with a low yield in the use of chlorine (Weissermel and Arpe, 1997). Glycerol is an alternative raw material that allows a more sustainable production with a higher chlorine yield. In this paper we present the conceptual design of the production process to transform glycerol into epichlorohydrin using HCl and NaOH as reagents. We use a simulation-based approach, integrating MATLAB and CHEMCAD for the rigorous simulation of the process, to design the process and evaluate its economics.

2. Process description
The process consists of four stages: feedstock preparation, involving glycerol purification and reactant preparation; glycerol hydrochlorination and product recovery; epichlorohydrin production; and product purification up to commercial composition. Figure 1 shows the flowsheet. The crude glycerol is fed to the process at standard conditions, 293 K and 100 kPa. It has to be heated up to the boiling point before it is fed to the distillation column as a saturated liquid. In this column, T-01, the glycerol is separated from the impurities,


namely water and methanol. A packed distillation column is used, with an upper bound for the bottoms temperature of 423 K, to avoid glycerol decomposition. Thus, the packed column operates under vacuum, at a pressure up to 10 kPa. Basically, all the water and methanol will be separated as distillate from the glycerol, which is fed to the next stage.

Figure 1 – Process flowsheet

The bottom stream containing the main reactant, glycerol, is mixed with the recirculation, comprising the recycled catalyst and the heavy intermediate products from the bottom of column T-02. The mixture is cooled down to the reaction temperature (383 K) and loaded into the semibatch reactor, R-01. Hydrogen chloride is bubbled continuously into R-01 using a 10 % excess by weight. A semibatch reactor allows the continuous addition of hydrogen chloride while keeping its concentration low, which minimizes the production of trichloropropane from glycerol. The reaction that occurs in the semibatch reactor is the hydrochlorination of glycerol. The hydrochlorination is performed isothermally at 383 K under a hydrogen chloride pressure of 760 kPa (Bell et al., 2008), in the presence of hexanoic acid (5 % mol) as catalyst. To bring the gas to the reaction conditions, a two-step compression system with intercooling is used. Under this pressure, the liquid-vapor contact is improved and the equilibrium limit is overcome (Bell et al., 2008), resulting in the efficient production of DCH. The reaction product is a mixture that comprises dichlorohydrins, in a mole ratio of 30-50 to 1 of 1,3-DCH to 2,3-DCH, monochlorohydrins (MCH), water and catalyst. This ratio is at least an order of magnitude larger than the one achieved by the current industrial process, typically 3 to 1. This improved 1,3-regioselectivity matters because 1,3-DCH undergoes cyclization with a base to form epichlorohydrin about 300 times faster than 2,3-DCH, which translates into smaller equipment in the second reaction stage. The hydrogen chloride not dissolved in the reaction mixture, together with the steam formed by the evaporation of the water in the mixture, is evacuated from the isothermal semibatch reactor when the internal pressure exceeds the upper bound. After 3 h of reaction, the mixture containing the dichlorohydrins is cooled and expanded to the pressure conditions of the product recovery column. At this stage, dichlorohydrins, hydrochloric acid and the water formed in the reaction step are separated from the heavy components, namely monochlorohydrins, catalyst and unreacted


glycerol. The recirculation comprises the heavy components, obtained at the bottom of the column. This stream is returned to the reaction step. To maintain a low pressure drop across the unit, a packed column is used, where an almost sharp separation of the substances can be achieved by operating at 5 kPa (Dow Global Tec. 2013). The distillate is sent to a liquid-liquid separator where the two phases, organic and aqueous, are separated. The purpose of this arrangement is to make use of the amount of water formed in the first reaction stage to prepare the sodium hydroxide solution, a reactant of the second reaction stage. The organic phase, comprising mostly dichlorohydrins and a small amount of dissolved water, is heated up to the reaction temperature and sent to reactor R-02. On the other hand, the aqueous phase, with most of the water, saturated with dichlorohydrins and containing other species such as hydrochloric acid and traces of catalyst, is mixed with a concentrated solution of sodium hydroxide in mixer M-03 to be neutralized and later diluted to the appropriate concentration for the reaction: a 20 w% aqueous solution. The original solution of NaOH is prepared in a stirred tank, M-02. The sodium hydroxide fed to the process represents an excess of 5 mol % over the dichlorohydrins fed to reactor R-02, plus the amount necessary to neutralize the hydrochloric acid. Both stirred tanks operate adiabatically, reducing the energy consumption in the subsequent heating of the solution to the reaction temperature. The reactant streams are only mixed in the reactor, to prevent the formation of epichlorohydrin and its decomposition before the reaction stage. The reactants are fed to the top of the reactor tower R-02, a packed column where the dehydrochlorination of the dichlorohydrins occurs. It operates isothermally at 363 K and under vacuum conditions, 30 kPa. The advantage of a reactive distillation column is the short contact time between the water and the epichlorohydrin, minimizing the hydrolysis of the produced epichlorohydrin to glycerol, which represents the major source of yield losses. Steam is injected from the bottom of the column as a countercurrent stream, stripping the undissolved epichlorohydrin produced by the reaction and evaporated by the reaction heat. The amount of steam used for the stripping of epichlorohydrin is such that the composition at the top has a water/epichlorohydrin ratio by weight of about 1 to 2.5 (DOW, 2012). The bottoms of the column are mainly waste water, with all the substances dissolved in it, such as epichlorohydrin, sodium chloride, unreacted dichlorohydrins and the sodium hydroxide excess. This stream is cooled to make use of its energy and disposed of as a waste product. The top stream, comprising stripping steam and epichlorohydrin, is cooled down to saturation and condensed. A heterogeneous azeotrope is found, which can be separated in a decanter, removing the aqueous phase. Finally, we dehydrate the epichlorohydrin to reach a commercial purity over 99.9 w% using zeolite molecular sieves. The resulting epichlorohydrin is cooled down and condensed for storage.

3. Solution procedure
3.1. Modelling approach
Table 1 shows the modelling features of the main units of the flowsheet. The two distillation columns were modelled in CHEMCAD. For the R-01 reactor, the kinetics available in the literature was obtained at low pressure (Tesser et al., 2007). However, operating at high pressure allows a higher yield, since the equilibrium limit is overcome. The simplified mechanism is shown in Table 1. Therefore, for fitting


the model to the experimental results found in the literature for high-pressure operation, a parameter estimation problem is proposed with θp as the parameter. The reactive distillation tower, R-02, was sized using the kinetics found in Ma et al. (2007). The reaction takes place in the liquid phase. The tower is modelled as a plug flow reactor. The liquid volume is computed considering the liquid phase remaining in the void space of the catalyst, given by the liquid holdup and the porosity of the bed. Furthermore, the epichlorohydrin in the vapor and that dissolved in the liquid are computed to determine the amounts exiting at the top and from the bottoms, considering the solubility limit of epichlorohydrin in water.

Table 1.- Main modelling features of major units.
Unit | Main features | Main products | Modelling
T-01 | UNIFAC thermodynamics; 10 kPa, 15 stages | Glycerol 99.99 % | CHEMCAD
T-02 | NRTL thermodynamics; 5 kPa, 46 stages | Dichlorohydrins 99.99 %; unreacted glycerol and catalyst | CHEMCAD
R-01 | Semibatch reactor: 383 K, 760 kPa, 10 % excess of HCl | 1,3-DCH | MATLAB
R-02 | Packed reactive distillation column: 363 K, 30 kPa, 20 % NaOH; C3H6Cl2O + NaOH → C3H5ClO + NaCl + H2O; C3H5ClO + NaOH + H2O → C3H8O3 + NaCl | Epichlorohydrin | MATLAB

3.2. Solution procedure
We couple MATLAB, for modelling the reactors, with CHEMCAD to rigorously simulate the mass and energy balances. Next, an economic evaluation followed by a sensitivity analysis is performed.
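Before the results, a minimal sketch of how such a reactor model can be integrated numerically: the hydrochlorination is reduced to two consecutive pseudo-first-order substitutions (glycerol → MCH → DCH) at constant dissolved HCl, and the rate constants below are placeholders chosen only so that the MCH peak falls near the 5,500 s reported in Section 4.1; they are not the values fitted from Tesser et al. (2007).

```python
# Semibatch R-01 sketch: consecutive substitutions Gly -> MCH -> DCH under
# a constant dissolved-HCl concentration (continuous HCl bubbling).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 3e-4, 1e-4   # 1/s, placeholders: MCH peaks at ln(k1/k2)/(k1-k2) ~ 5,493 s

def rhs(t, c):
    gly, mch, dch = c
    r1, r2 = k1 * gly, k2 * mch
    return [-r1, r1 - r2, r2]          # mol/(L s) for Gly, MCH, DCH

c0 = [1.0, 0.0, 0.0]                   # initial concentrations, mol/L
sol = solve_ivp(rhs, (0.0, 3 * 3600.0), c0, t_eval=[0.0, 5500.0, 3 * 3600.0])
for ti, ci in zip(sol.t, sol.y.T):
    print(f"t = {ti:7.0f} s  Gly = {ci[0]:.3f}  MCH = {ci[1]:.3f}  DCH = {ci[2]:.3f}")
```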

4. Results
4.1. Process operation
Figure 2 shows the performance of reactor R-01. θp turned out to be (8.5·10^-4)^-1. The formation of 3-MCH peaks at 5,500 s. Next, the second hydroxyl substitution begins to be important and the 1,3-DCH is formed. A large amount of water is also obtained as a byproduct, while the glycerol is totally converted and the other species, namely 2,3-DCH, 2-MCH and dissolved HCl, are obtained at low concentrations. After the recovery of the unreacted products and catalyst in T-02, and the addition of the NaOH solution, Figure 3 shows the evolution of the components along R-02. Epichlorohydrin dissolves in the aqueous phase until the solubility limit is reached. Then, the organic phase is formed and evaporated, becoming part of the countercurrent stripping gas stream. Thus, the epichlorohydrin-water azeotrope is obtained at the top of the column.


The other substances, such as NaOH, NaCl, glycerol and the DCHs, do not reach their solubility limits along the reactor, so they remain in the aqueous phase and constitute the waste stream at the bottoms. A height of around 15 m is obtained for optimal results. The process shows a yield 10% higher than the current industrial processes while using a sustainable raw material. For the production of 27 kt/yr of epichlorohydrin at 99.9 w%, 41.5 kt/yr of glycerol, 29 kt/yr of HCl and 15.9 kt/yr of NaOH are needed.

Figure 2.- Performance of R-01

Figure 3.- Performance of R-02

4.2. Economic evaluation
The total invested capital is also computed, comprising the investment necessary for the construction of the plant and its start-up, including equipment purchase and installation, land acquisition and preparation, and all costs related to project development. The total invested capital includes tied-up capital and working capital. The tied-up capital comprises the direct capital, building expenses, research, start-up and other expense items. The direct capital is computed as a function of the equipment cost, which has been estimated using Matche (2014) or, alternatively, with Peters, Timmerhaus and West's tool (2014). Figure 4 shows the breakdown of the invested capital, which adds up to 63.7 M€. The epichlorohydrin production cost comprises the manufacturing and management costs, involving raw materials, direct and indirect labor, utilities, maintenance, insurance, taxes and management (Silla, 2003), resulting in a unit production cost for epichlorohydrin of 1.28 €/kg, while the market price is set at 1,976.22 €/t. Figure 5 represents the percentage of each element comprising the production cost. The economic analysis shows that the process is profitable due to the high selling price of epichlorohydrin, resulting in a payout time of 5 years.

Figure 4.- Invested capital breakdown

Figure 5.- Production cost breakdown


4.3. Sensitivity analysis
The prices of the raw materials, HCl, NaOH and glycerol, are volatile. For raw material prices within 0.75-1.25 times the base price, a linear relationship between the production cost and the raw material prices is found. Furthermore, the epichlorohydrin production cost is more sensitive to the prices of HCl and NaOH than to that of glycerol, resulting in unit costs for epichlorohydrin from 1.2 €/kg to 1.4 €/kg.

5. Conclusions
In this work a novel process for the production of epichlorohydrin from glycerol has been proposed, providing further value and using a sustainable raw material. The process consists of four stages: glycerol purification; its reaction to mono- and dichlorohydrins and their separation; epichlorohydrin production; and final product purification. We use a hybrid modular/equation-based modelling approach to simulate the operation of the process, integrating MATLAB and CHEMCAD. It was found that the proposed process has a yield 10% higher than the current industrial process (80% vs. 73% relative to the mass of pure glycerol fed to the plant) while using a sustainable raw material. The process consumes 41.5 kt/yr of glycerol, 29 kt/yr of HCl and 15.9 kt/yr of NaOH to produce 27 kt/yr of epichlorohydrin at 99.9 w%. The investment cost adds up to 63.7 M€ with a production cost of 1.28 €/kg. As a result, the payout time is 5 years, while the plant is projected to operate continuously during an average life of 20 years.

References
B.M. Bell, J.R. Briggs, R.M. Campbell, S.M. Chambers, P.D. Gaarenstroom, J.G. Hippler, B.D. Hook, K. Kearns, J.M. Kenney, W.J. Kruper, D.J. Schreck, C.N. Theriault, C.P. Wolfe, 2008, Glycerin as a Renewable Feedstock for Epichlorohydrin Production: The GTE Process, Clean, 36 (8), 657-661.
Dow Global Technologies, 2012, Process and apparatus for producing and purifying epichlorohydrins, European Patent Application EP 2 537 837 A1, 26/12/2012. Inventors: C.D. Kneupper, P.S. Basile, W.W. Fan, S. Noorman.
L. Ma, J.W. Zhu, X.Q. Yuan, Q. Yue, 2007, Synthesis of Epichlorohydrin from Dichloropropanols: Kinetic Aspects of the Process, Chemical Engineering Research and Design, 85 (A12), 1580-1585.
Matche, 2014, Index of process equipment, http://www.matche.com/equipcost/EquipmentIndex.html, last accessed 18/12/2014.
M.S. Peters, K.D. Timmerhaus, R.E. West, 2014, Plant Design and Economics for Chemical Engineers, http://www.mhhe.com/engcs/chemical/peters/data/, last accessed 18/12/2014.
H. Silla, 2003, Chemical Process Engineering: Design and Economics, Marcel Dekker, New York, ISBN 0-8247-4274-5.
R. Tesser, E. Santacesaria, M. Di Serio, G. Di Nuzzi, V. Fiandra, 2007, Kinetics of Glycerol Chlorination with Hydrochloric Acid: A New Route to α,γ-Dichlorohydrin, Ind. Eng. Chem. Res., 46, 6456-6465.
K. Weissermel, H.J. Arpe, 1997, Industrial Organic Chemistry, 3rd Edition, Wiley-VCH, Weinheim.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50014-X

Kriging based Fault Detection and Diagnosis Approach for Nonlinear Noisy Dynamic Processes
Ahmed Shokry(a,1), Mohammadhamed Ardakani(a,1), Gerard Escudero(b,2), Moisès Graells(a,2), Antonio Espuña(a,1,*)

(a) Department of Chemical Engineering, Universitat Politècnica de Catalunya
(b) Department of Computer Science, Universitat Politècnica de Catalunya
(1) ETSEIB, Av. Diagonal 647, 08028 Barcelona, Spain
(2) EUETIB, Comte d'Urgell 187, 08036 Barcelona, Spain
[email protected]

Abstract
This paper presents a hybrid approach to enhance the performance of the data-based Pattern Classification Techniques (PCTs) used for Fault Detection and Diagnosis (FDD) of nonlinear, dynamic, noisy processes. The method combines kriging metamodels with PCTs (e.g. Support Vector Machines). The metamodels are used in two different ways: first, as Multivariate Dynamic Krigings (MDKs), which estimate the process dynamic behavior/outputs; second, as classical static models, which are used for smoothing noise and imputing missing values in the actual process output measurements. During process operation, the estimated and the smoothed actual outputs are compared, and residual/error signals are generated that are used by the classifier to detect and diagnose possible process faults. The method is applied to a benchmark case study, showing a strong enhancement of such PCTs due to the introduction of process dynamics information via the MDKs, and to the smoothing of noise and imputation of missing measurements using the static kriging.
Keywords: Hybrid Fault Detection and Diagnosis, Pattern Classification Techniques, Multivariate Dynamic Kriging, Dynamic Modelling.

1. Introduction
Early Fault Detection and Diagnosis (FDD) plays an essential role in the safety and reliability of industrial process operations. Basic mathematical model-based FDD techniques rely on monitoring the extent of the matching between the actual process and the prediction of an analytical model. However, their reliability is conditioned by the quality of the analytical model; in addition to the difficulty of finding accurate analytical models for complex nonlinear processes, they are sensitive to modelling errors, parameter variations and uncontrolled disturbances. Alternatively, data-based PCTs (e.g. Support Vector Machines (SVM), Gaussian Naive Bayes (GNB), Decision Trees (DT), Artificial Neural Networks (ANN)) have shown great flexibility and robustness for the FDD of nonlinear chemical processes, in addition to their independence from process mathematical models. They are trained, based on pattern recognition principles, on the process historical data, including normal and different faulty situations, and can then be used for process supervision and for detecting and diagnosing possible faults from the process output measurements. However, these PCTs suffer from serious limitations. First, the classification of faults is based only on the current process measurements, and any knowledge about the system


dynamics is disregarded. As a result, they are mostly used for the FDD of steady-state processes, where the process is operated under constant operating conditions (manipulated inputs). Consequently, the process state/output variables also show a constant, specific behavior (set point). This constant or steady-state behavior represents a definite pattern that is easily recognized by these PCTs, under normal as well as faulty conditions. In many situations, however, the process must be operated under changing operating conditions (inputs), e.g. transitions between different process set points, or changes in the manipulated inputs to overcome external disturbances. In these cases, PCTs can easily produce false alarms by diagnosing the change in the process state/outputs as a possible fault. This is due to the lack of information about the dynamics governing the relation between the process inputs and outputs. Second, errors that very often contaminate the measurements can create false diagnoses and spurious alarms. These usual errors may be random (e.g. sensor white noise) or non-random (outliers/biases due to instrument malfunctioning, miscalibration or poor sampling). Additionally, missing measurements due to mechanical/electronic failures of sensors can lead to the same misleading FDD results. One robust and efficient way to take the information about the process dynamics into account is to use a data-based dynamic observer, which allows the calculation of error signals between the observer-estimated outputs and the measured process outputs. These residuals are then used to detect and isolate faults using a threshold value for the error, or some statistical analysis (Caccavale et al., 2010; Honggui et al., 2014; Banua and Umab, 2011). Few works outside the chemical engineering area (Patton et al., 1994) have combined such data-based observers (and the generated residuals) with PCTs in order to automate the FDD. However, they have assumed perfect process output data, without any noisy, missing and/or outlier measurements. Additionally, the classification performance of the PCTs under changeable operating conditions (manipulated inputs) is not clearly illustrated, and the robustness of the approach has not been assessed with different classifiers. Finally, most of them have used ANNs as dynamic observers, but ANNs frequently show drawbacks such as the curse of dimensionality and the difficulty of specifying the net structure. This paper presents a hybrid approach (Figure 1) whose objective is to enhance the performance of those data-based PCTs used for the FDD of nonlinear, dynamic, noisy processes running under different operating conditions and suffering from missing measurements. This enhancement is based on the use of static kriging metamodels to smooth the noise and impute the missing values of the actual process output measurements, and on the introduction of the process dynamics information to the PCTs through the use of MDKs (Shokry and Espuña, 2014, 2015), which estimate the process future output variables. Thus, error signals can be calculated between the estimated and the actual smoothed outputs, and used to train and test the PCTs. The proposed approach is applied to the FDD of a benchmark problem, showing a strong enhancement in the performance of the PCTs. Additionally, different PCTs have been tested (namely SVM, GNB, DT and ANN (Askarian et al., 2016)) to illustrate the robustness of the approach.

2. Methodology
Assume a process under study having inputs U(t) ∈ R^ku and outputs X(t) ∈ R^kx. The first step is the construction/training of a PCT using residual/error signals e(t) ∈ R^kx. These error signals are calculated as the difference between the available measurements of the process outputs corresponding to normal situations (Xnr(t)) and to faulty situations (Xft(t)), but under the same input scenarios U(t). Second, MDKs are trained


with the available process data under normal (fault-free) conditions, in order to be able to estimate the process future outputs as a function of the current outputs and inputs: X̂(t+1) = f(X(t), U(t)). During process operation under a certain time profile of the inputs U(t), the MDKs observer is used to estimate the process behavior corresponding to this profile, and the static kriging metamodels are used to smooth the noise and impute the missing data in the actual process output measurements. The error signals are calculated as e(t) = X̂(t) − Xsm(t), where Xsm(t) is the actual smoothed output. These signals are fed to the previously trained PCT to detect and diagnose possible faults under different manipulated inputs. The error signals provide the PCT with information about the process dynamics, along with possible faults, so the PCT can discern whether a change in the outputs is due to the inputs or due to some fault.

Figure 1. Proposed FDD framework.
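To make the residual-based classification step concrete, the following sketch (Python with scikit-learn; the fault offsets, noise level and SVM settings are invented for illustration) trains a classifier on synthetic residual vectors e(t) and scores it on held-out data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic residuals e(t) (kx = 3): near zero under normal operation,
# shifted in a class-specific direction under each fault (offsets invented).
n = 250
offsets = {0: [0.00, 0.00, 0.00],    # normal
           1: [0.04, 0.01, 0.02],    # fault 1
           2: [-0.01, 0.05, 0.02],   # fault 2
           3: [0.01, 0.02, -0.05]}   # fault 3
E = np.vstack([rng.normal(0.0, 0.01, (n, 3)) + offsets[c] for c in offsets])
labels = np.repeat(list(offsets), n)

E_tr, E_val, y_tr, y_val = train_test_split(E, labels, test_size=0.5,
                                            random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(E_tr, y_tr)
print(f"validation accuracy: {clf.score(E_val, y_val):.3f}")
```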

2.1. Ordinary Kriging (OK) and Multivariate Dynamic Kriging(s) (MDKs)
Given training data [xi, yi], i = 1, 2, ..., n, xi ∈ R^d, yi ∈ R, OK assumes a predictor ŷ(x) = μ + Z(x), where Z(x) is a deviation from the constant mean value μ, expressed as a stochastic Gaussian process. The OK final predictor is given in Eq. (1), where [r]n×1 contains the correlations between the point to be predicted, xnew, and the training data, R is the correlation matrix of the training data and 1 is a vector of ones (Shokry and Espuña, 2014):

ŷ(xnew) = μ̂ + rᵀR⁻¹(y − 1μ̂)        (1)

OK has shown high capabilities for modelling complex static systems (Shokry and Espuña, 2014). Recently, MDKs (Shokry and Espuña, 2014, 2015) have been proposed for the modelling of multivariate dynamic systems, showing capabilities that outperform other state-of-the-art techniques (e.g. ANN). Given a random training signal of process inputs U(t) ∈ R^ku and outputs X(t) ∈ R^kx, where the measurements are recorded over constant time steps, MDKs are based on training kx kriging models (Eq. (2)) to predict each output as a function of the previous inputs and outputs over a specific time lag L. Then, these MDKs are used through recursive interpolation to predict the system outputs over larger time domains (multistep-ahead prediction):

X̂i(t+1) = fi(X(t), ..., X(t−L), U(t), ..., U(t−L)),  i = 1, 2, ..., kx        (2)
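A minimal sketch of Eqs. (1)-(2) is given below, under several simplifying assumptions: a Gaussian correlation function with a fixed parameter θ (in practice θ is estimated by maximum likelihood), a single output, lag L = 1, and an invented first-order test system.

```python
import numpy as np

def fit_ok(X, y, theta=5.0, nugget=1e-6):
    """Ordinary kriging with a Gaussian correlation; theta is fixed here
    for brevity (normally estimated by maximising the likelihood)."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Rinv = np.linalg.inv(np.exp(-theta * d2) + nugget * np.eye(n))
    ones = np.ones(n)
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)
    return {"X": X, "theta": theta, "Rinv": Rinv, "mu": mu, "res": y - mu}

def predict_ok(m, xnew):                  # Eq. (1)
    r = np.exp(-m["theta"] * ((m["X"] - xnew) ** 2).sum(-1))
    return m["mu"] + r @ m["Rinv"] @ m["res"]

# Invented first-order system x(t+1) = 0.9 x(t) + 0.2 u(t); an MDK-style
# one-step model is trained on a recorded signal and then applied
# recursively for multistep-ahead prediction.
t = np.arange(120)
u = np.sin(0.05 * t)
x = np.zeros(120)
for k in range(119):
    x[k + 1] = 0.9 * x[k] + 0.2 * u[k]

mdl = fit_ok(np.column_stack([x[:-1], u[:-1]]), x[1:])

x_hat = [x[0]]
for k in range(119):                      # recursive interpolation
    x_hat.append(predict_ok(mdl, np.array([x_hat[-1], u[k]])))
print(f"max abs multistep error: {np.max(np.abs(np.array(x_hat) - x)):.2e}")
```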

3. Application and discussions
The three-tank system in Figure 2 (Patton et al., 1994) has been widely used as a benchmark case study in monitoring, control and FDD studies.

Figure 2. Three-tank benchmark system.

The plant consists of three identical cylindrical tanks of cross-section area A = 0.0154 m², serially interconnected by three cylindrical pipes of cross-section area s13 = s23 = s0 = 0.005 m², with flow coefficients a13 = 0.6836, a23 = 0.4819 and a0 = 0.4819. Two pumps fill the


system with flowrates Q1 and Q2 (limited to 0.003 m³/s). Gaussian noise N(0, 0.015) is added to the process outputs h1, h2, h3. Outliers drawn from N(0, 0.03) are also added to 3% of the measurements, considering a sampling time of one second. The process is subjected to three possible faults: a leak in tank 1 (Qf1 = -0.0007 m³/s), plugging in tank 2 (Qf2 = +0.0007 m³/s), and a leak in tank 3 (Qf3 = -0.0007 m³/s). The objective is to design a data-driven FDD system capable of fault detection and diagnosis under any change in the inputs (Q1, Q2), and with noisy and missing measurements.
3.1. Classifier construction
The PCT is constructed using historical process measurements under changeable manipulated inputs (as varied as possible, in order to collect process information under different operating scenarios). Figure 3(a)-top shows the profiles of the inputs Qj(t). Figure 3(a)-bottom shows the fault scenario, in which the first 250 seconds are operated under normal conditions; the faults then start to influence the process sequentially (fault 1, fault 2, fault 3), each for 250 seconds. Figure 3(b)-top illustrates the process behavior for normal/fault-free (hi-nr(t)) and faulty (hi-ft(t)) conditions, and Figure 3(b)-bottom shows the residual signals between them (ei(t) = hi-nr(t) − hi-ft(t)). The total set of 1000 data points (Figure 3(a,b)-bottom) is randomly separated into a training group (500 data points) used to train the PCT, and a validation group (500 data points) used to assess the trained PCT. The PCT is trained and validated (with the residuals ei(t)) twice: first, using ideal (noise-free) data, and second, using noisy but smoothed (using the static OK) data.

Figure 3. Process inputs (a, top), faults (a, bottom), outputs (b, top) and residuals (b, bottom).
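For orientation, the sketch below generates such fault-free training data from the benchmark, assuming the standard Torricelli-type inter-tank flow model with the areas and flow coefficients quoted above; fault terms would enter as additional volumetric flows, as with Qf1-Qf3.

```python
import numpy as np

A, G, S = 0.0154, 9.81, 0.005            # tank area, gravity, pipe area
A13, A23, A0 = 0.6836, 0.4819, 0.4819    # flow coefficients

def q(a, h_up, h_dn):
    """Inter-tank flow, standard Torricelli form (assumed model)."""
    return a * S * np.sign(h_up - h_dn) * np.sqrt(2 * G * abs(h_up - h_dn))

def step(h, q1, q2, dt, leak=(0.0, 0.0, 0.0)):
    h1, h2, h3 = h
    q13 = q(A13, h1, h3)                            # tank 1 -> tank 3
    q32 = q(A23, h3, h2)                            # tank 3 -> tank 2
    q20 = A0 * S * np.sqrt(2 * G * max(h2, 0.0))    # outflow of tank 2
    dh = np.array([(q1 - q13 + leak[0]) / A,        # leak[i] < 0: leak,
                   (q2 + q32 - q20 + leak[1]) / A,  # leak[i] > 0: plugging
                   (q13 - q32 + leak[2]) / A])
    return np.maximum(h + dt * dh, 0.0)

h, dt = np.array([0.2, 0.1, 0.15]), 0.05
for _ in range(int(1000 / dt)):          # 1000 s of fault-free operation
    h = step(h, q1=0.002, q2=0.0015, dt=dt)
print(np.round(h, 4))                    # steady levels with h1 > h3 > h2
# measured samples would then be h plus N(0, 0.015) sensor noise
```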

Table 1 shows the high accuracy of the PCTs, and how the residuals are able to isolate the effect of the inputs from the effect of the faults. It is also worth mentioning that, when the PCTs were trained in the classical way using only the process variables (Q1, Q2, h1, h2, h3), their accuracy was very low (around 60%) even with ideal data.

Table 1. Accuracy (F1-score) of the SVM, GNB, DT and ANN classifier validations.

| Data                | SVM  | GNB  | DT    | ANN  |
| Ideal               | 96.8 | 97.7 | 98.5  | 99.2 |
| Noisy, but smoothed | 82.6 | 81.1 | 90.07 | 92.0 |

3.2. Observer construction
To train the MDKs, a random signal of the process variable measurements is used; Figure 4 shows a random input signal (a) and the corresponding system outputs (b, green dotted line). Using these data, the MDKs are fitted/trained (Eq. (3)). They are then validated via the prediction of the process behavior using the same training scenario of inlets (Figure 4(a)) and, additionally, a different validation scenario of inlets (Figure 4(c)). The MDK predictions (Figure 4(b,d), red dotted line) are shown to accurately describe the real behavior (Figure 4(b,d), blue solid line), despite the noise and the outliers in the training data.


Figure 4. Training (a,b) and validation (c,d) of the MDKs: measured behavior, underlying behavior and MDK estimations of h1, h2 and h3.

ĥi(t+1) = fi(h1(t), h2(t), h3(t), Q1(t), Q2(t)),  i = 1, 2, 3        (3)

3.3. Applications
After the construction of the PCT and the MDKs observer, they can be used to supervise the process as described in Figure 1, considering that the process is operated under the different inlet profiles shown in Figure 5 (left) and subjected to the fault scenario shown in Figure 5 (right). The process output measurements are illustrated in Figure 6 (left) as solid blue lines, and the MDKs are used to estimate the process normal behavior (Figure 6 (left), red dotted lines) under the given inlet profiles. The MDKs were able to estimate the process normal behavior with very high accuracy, and their estimations were very close to the real values of the process normal behavior (Figure 6 (left), solid black lines).

Figure 5. Inlet and fault scenarios.

Figure 6. Left: measured and smoothed outputs, and the MDK estimations; right: residuals.


Additionally, static kriging models are used to smooth the measured outputs and to impute their missing values (Figure 6 (left), cyan dotted lines). The procedure is repeated three times, each time using a different amount of missing data (10%, 20% and 30%). Figure 6 (right) shows the residuals (10% missing data) between the actual and the estimated outputs (top), and between the actual smoothed and the estimated outputs (bottom). Table 2 shows the FDD accuracy of the PCTs using both the noisy and the smoothed residual signals, revealing a strong enhancement in PCT performance when used in the proposed framework.

Table 2. Accuracy (F1-score) of the proposed FDD approach based on SVM, GNB, DT and ANN classifiers.

|            | Noisy and missing | Smoothed and imputed |
| Classifier | 10%   20%   30%   | 10%   20%   30%      |
| SVM        | 79.0  79.0  69.0  | 84.7  84.5  84.5     |
| GNB        | 81.7  80.0  80.0  | 67.7  67.7  54.3     |
| DT         | 71.5  71.5  65.8  | 81.9  81.2  81.2     |
| ANN        | 74.5  70.0  62.0  | 86.3  87.8  84.0     |
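A minimal sketch of the smoothing/imputation step is given below, assuming a one-dimensional ordinary-kriging model over time whose nugget term acts as the noise filter (the signal, noise level and kriging parameters are illustrative):

```python
import numpy as np

def krige_1d(t_obs, y_obs, t_query, theta=0.02, nugget=1e-2):
    """Ordinary kriging of a scalar signal over time: the nugget term
    provides the smoothing, and prediction at unobserved instants
    performs the imputation."""
    d2 = (t_obs[:, None] - t_obs[None, :]) ** 2
    Rinv = np.linalg.inv(np.exp(-theta * d2) + nugget * np.eye(len(t_obs)))
    ones = np.ones(len(t_obs))
    mu = (ones @ Rinv @ y_obs) / (ones @ Rinv @ ones)
    w = Rinv @ (y_obs - mu)
    r = np.exp(-theta * (t_query[:, None] - t_obs[None, :]) ** 2)
    return mu + r @ w

rng = np.random.default_rng(1)
t = np.arange(400.0)
h_true = 0.3 + 0.05 * np.sin(0.02 * t)            # illustrative level signal
h_meas = h_true + rng.normal(0.0, 0.015, t.size)  # sensor noise
keep = rng.random(t.size) > 0.2                   # ~20 % missing at random

h_sm = krige_1d(t[keep], h_meas[keep], t)         # smoothed and imputed
print(f"rms error, raw: {np.sqrt(np.mean((h_meas - h_true)**2)):.4f}, "
      f"smoothed+imputed: {np.sqrt(np.mean((h_sm - h_true)**2)):.4f}")
```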

4. Conclusions
This work proposes a hybrid approach for the FDD of nonlinear noisy dynamic processes subject to changeable manipulated inputs and missing measurements. The method combines MDKs, static kriging and PCTs. The MDKs show very high accuracy in approximating the process nonlinear dynamic behavior, which enables the calculation of error signals that introduce the process dynamics information to the PCTs and allow the PCTs to distinguish the effects of faults from the effects of the manipulated inputs on the outputs. The methodology has been applied to the FDD of a benchmark three-tank problem, and has shown high accuracy, robustness, and enhancement of the classification performance of several PCTs when used in the proposed framework. Direct future work is the online application of the proposed method.
Acknowledgements: Financial support from the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (SIGERA, DPI2012-37154-C0201), and from the Generalitat de Catalunya (2014-SGR-1092-CEPEiMA), is fully appreciated.

References
A. Shokry, A. Espuña, 2014, Applying Metamodels and Sequential Sampling for Constrained Optimization of Process Operations, Lecture Notes in Computer Science, 8468, 396-407.
A. Shokry, A. Espuña, 2015, Multistep Ahead Prediction Using Ordinary Kriging Applied to Modeling and Simulation of Complex Nonlinear Dynamic Processes, ECCE10, Nice, France.
A. Shokry, A. Espuña, 2014, Sequential Dynamic Optimization of Complex Nonlinear Processes based on Kriging Surrogate Models, Procedia Technology, 15, 376-387.
F. Caccavale, P. Digiulio, M. Iamarino, S. Masi, F. Pierri, 2010, A neural network approach for on-line fault detection of nitrogen sensors in alternated active sludge treatment plants, Water Science & Technology, 62, 2760-2768.
H. Honggui, L. Ying, Q. Junfei, 2014, A fuzzy neural network approach for online fault detection in waste water treatment process, Computers and Electrical Engineering, 40, 2216-2226.
M. Askarian, G. Escudero, M. Graells, R. Zarghami, F.J. Farahani, N. Mostoufi, 2016, Fault diagnosis of chemical processes with incomplete observations: A comparative study, Computers & Chemical Engineering, 84, 104-116.
R.J. Patton, J. Chen, T.M. Siew, 1994, Fault diagnosis in nonlinear dynamic systems via neural networks, Proc. of IEE Int. Conf. on Control, 1346-1351.
U.S. Banua, G. Umab, 2011, ANFIS based sensor fault detection for continuous stirred tank reactor, Applied Soft Computing, 11, 2618-2624.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50015-1

Kinetics of Phycocyanobilin Cleavage from C-Phycocyanin by Methanolysis
Chandrakant R. Malwade*, Maria C. Roda-Serrat, Knud V. Christensen, Xavier Fretté, Lars P. Christensen
Department of Chemical Engineering, Biotechnology and Environmental Technology, University of Southern Denmark, Campusvej 55, DK-5000 Odense, Denmark
[email protected]

Abstract
Phycocyanobilin (PCB) is a linear tetrapyrrolic molecule of importance to the food as well as the pharmaceutical industry. It is obtained from blue-green algae, where it is covalently attached to the phycobiliproteins (C-PC and APC) present in the light-harvesting complexes. In this work, the cleavage of PCB from the phycobiliproteins present in the extract of Arthrospira platensis by methanolysis is investigated. Different initial concentrations of proteins (25 mg/mL, 10 mg/mL and 5 mg/mL) are used in order to investigate the effect of protein aggregation on the process yield. A kinetic model is developed by fitting the experimental methanolysis data. The results show that the cleavage follows pseudo-first-order kinetics and is unaffected by the initial concentration of phycobiliproteins. Moreover, the yield of PCB in the cleavage process is found to be proportional to the initial concentration of phycobiliproteins.
Keywords: Methanolysis, reaction kinetics, food color, phycocyanobilin, Arthrospira platensis.

1. Introduction
Phycocyanobilin (PCB) is a linear tetrapyrrole chromophore covalently attached to the protein subunits of the phycobiliproteins C-Phycocyanin (C-PC) and Allophycocyanin (APC), present in the light-harvesting complexes of the blue-green alga Arthrospira platensis (formerly known as Spirulina platensis). PCB is attached to the protein subunits through a cysteine residue via a thioether linkage, as shown in Figure 1. PCB absorbs light in the red region of the electromagnetic spectrum, thereby exhibiting a vivid blue color. Recently, it has gained significant interest in the food industry due to its potential to complete the natural food color palette, which lacks a blue color. The extract of Arthrospira platensis has already been approved by the US Food & Drug Administration for use in certain food products as a blue color additive. Other alternatives for a natural blue color have been reported, such as combinations of red cabbage liquid, sodium bicarbonate and aluminium sulfate; extracts of viola and pansies combined with aluminium or stannous chloride; and fruit juice of Genipa americana mixed with watermelon juice (Lauro, 2007; Rice, 2010; Wu et al., 2013). However, the Arthrospira extract has stability problems, and the other alternatives do not offer shades as vivid as synthetic blue colors. Therefore, efforts are underway to develop a stable natural blue food color from the chromophore PCB as an alternative to the synthetic


food colors. In addition, PCB is known to possess antiproliferative, antioxidant and atheroprotective activities, and to alleviate the toxic effects of alcohol consumption (Hirata et al., 2000; Koníčková et al., 2014; Strasky et al., 2013). Hence, the PCB molecule is of great interest to the food as well as the pharmaceutical industry.

Figure 1. PCB attached to alpha (left) and beta (right) subunits of C-PC via thioether linkage (Padyana et al., 2001).

PCB can be obtained by cleaving the molecule from the phycobiliproteins present in the extract of Arthrospira platensis. Three methods are mainly mentioned in the literature for the cleavage of PCB from blue-green algae: enzymatic hydrolysis, acid hydrolysis and methanolysis. Enzymatic hydrolysis involved the cleavage of PCB from C-PC (purified from Phormidium luridum) by incubating the solution of C-PC in 0.25 M potassium phosphate buffer (pH 7) with 100 mg of Nagarse enzyme at 37 °C for 16 h (Siegelman et al., 1967). PCB cleavage by acid hydrolysis consisted of dispersing denatured C-PC in concentrated hydrochloric acid at 25 °C for 30 min, followed by precipitation of the free PCB from the mixture by dilution with water (O'heocha, 1958). In methanolysis, denatured C-PC is boiled in methanol at 60 °C for 16 h, and the free PCB is purified from the resultant mixture by extraction with chloroform (Carra and O'heocha, 1966; Chapman et al., 1968). However, the methods mentioned in the literature reported very poor yields and focused mainly on obtaining PCB for analytical purposes; hence, they were not concerned with the optimization of the process parameters. Among the alternatives for the cleavage of PCB, methanolysis is considered the most efficient process with regard to yield and cleavage products. The exact mechanism of methanolysis is not yet known. However, the yield of cleavage by methanolysis is assumed to depend on several factors, such as the aggregation state and denaturation of the phycobiliproteins, their initial concentration and the duration of the process. Thus, the development of an efficient process for the cleavage of PCB must involve optimization of the underlying process parameters. Investigation of the kinetics can be a first step towards process optimization, as it gives fundamental process insights through an understanding of reaction mechanisms and rate constants. Fitting kinetic models to experimental measurements also enables the forecasting of reaction endpoints and the prediction of reaction yields, and can be used to control and optimize industrial processes. In the present work, the kinetics of the cleavage of PCB from phycobiliproteins by methanolysis is investigated. Prior to the cleavage process, the phycobiliproteins are


denatured with methanol, which is expected to increase the accessibility of PCB during the cleavage process. Based on the experimental data, a reaction kinetic model is developed. The effect of the initial concentration of phycobiliproteins on the kinetics of methanolysis and the PCB yield is also investigated.

2. Experimental
2.1 Materials
A commercial extract of Arthrospira platensis, Linablue G1, obtained from DIC Europe GmbH, Düsseldorf, Germany, was used in this work as the source of phycobiliproteins. Linablue G1 contains 55 wt% D-trehalose and 5 wt% sodium tri-citrate as preservatives, and 40 wt% spirulina color (Arthrospira extract). Further analysis of Linablue G1 using the 2-wavelength absorption method reported earlier (Yoshikawa and Belay, 2008) revealed the composition of the spirulina color to be 21.5 wt% C-PC and 6.2 wt% APC. HPLC-grade solvents, methanol (Chromasolv, purity > 99.9 %) and acetonitrile (Chromasolv, purity > 99.9 %), obtained from Sigma-Aldrich Chemie GmbH, Germany, and ultrapure water from a PURELAB Chorus water purifier were used in this work.
2.2 Preprocessing of Linablue G1
The additives, D-trehalose and sodium tri-citrate, were removed from Linablue G1 before it was used in the cleavage process. D-trehalose and sodium tri-citrate have high solubility in methanol; therefore, Linablue G1 was washed with methanol to remove the additives selectively. The procedure included suspension of 50 g of Linablue G1 in 400 mL of fresh methanol under magnetic stirring for 30 min. The procedure was repeated 5 times with recovered methanol. Removal of the additives was confirmed by High Pressure Liquid Chromatography - Refractive Index (HPLC-RI) analysis of the filtrate. The wet cake was dried under the fume hood in the dark for 24 h to obtain 20 g of additive-free Linablue G1. In addition to the removal of the additives, unfolding, i.e. denaturation, of the phycobiliproteins was achieved by the methanol washing.
2.3 Cleavage of Phycocyanobilin
Cleavage of PCB from the phycobiliproteins was carried out according to the method reported earlier (Carra and O'heocha, 1966). The procedure included boiling additive-free Linablue G1 in 100 mL of methanol at 60 °C for 16 h under reflux. Three different initial concentrations of proteins (25 mg/mL, 10 mg/mL and 5 mg/mL) were used. The kinetics of the methanolysis process was followed by taking samples at regular time intervals, which were analyzed by HPLC to determine the concentration of PCB. A sample chromatogram of the methanolysis mixture is shown in Figure 2, along with the calibration curve. All cleavage experiments were performed in triplicate.

Figure 2. a) Sample chromatogram of the methanolysis mixture (PCB peak at 6.2 min); b) HPLC calibration curve for PCB (Y = 0.0019·X + 0.0013, R² = 0.9981).


3. Kinetic model
The cleavage of the thioether bond by methanolysis most likely follows an SN1 or SN2 reaction mechanism. As the methanolysis in this study is carried out with methanol in high molar excess, the influence of the methanol concentration cannot be determined. From a modelling point of view, it then becomes immaterial whether the reaction follows an SN1 or SN2 mechanism: the reaction will behave as a pseudo-first-order reaction. Under the relatively harsh reaction conditions, the reaction will furthermore become irreversible as the protein is denatured. One of the PCBs attached to the beta subunit (type II) is expected to be less accessible than the remaining PCBs attached to the beta or alpha subunits (type I) (Bishop et al., 1986). It is therefore to be expected that the cleavage of PCB can be described by two first-order irreversible reactions, either in parallel (Scheme 1) or in series (Scheme 2).

P --k1--> P1 + ν1·PCB(I)
P --k2--> P2 + ν2·PCB(II)

Scheme 1: Two parallel reactions

P --k1--> P1 + ν1·PCB(I)
P1 --k2--> P2 + ν2·PCB(II)

Scheme 2: Two reactions in series

In a batch reactor the reactions-in-parallel mechanism will appear as a single first-order reaction, while the consecutive reaction mechanism will be more complex:

dC_P/dt = −k1·C_P
dC_P1/dt = k1·C_P − k2·C_P1        (1)

The analytical solutions to the set of equations (1) are

C_P(t) = C_P,0·e^(−k1·t)
C_P1(t) = [k1·C_P,0/(k2 − k1)]·(e^(−k1·t) − e^(−k2·t)) + C_P1,0·e^(−k2·t)        (2)

Based on the reaction stoichiometry, the concentration of PCB can then be expressed as

C_PCB(t) = ν1·C_P,0·(1 − e^(−k1·t)) + ν2·(C_P,0 + C_P1,0) − [ν2·k2·C_P,0/(k2 − k1)]·e^(−k1·t) + [ν2·k1·C_P,0/(k2 − k1) − ν2·C_P1,0]·e^(−k2·t)        (3)
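As an illustration of how the parameters of Eq. (3) could be fitted, the sketch below uses SciPy's curve_fit with data generated from the model itself; the parameter grouping (a1 = ν1·C_P,0, a2 = ν2·C_P,0, r0 = C_P1,0/C_P,0) follows the expression above, and all numerical values are placeholders for the measured HPLC concentrations.

```python
import numpy as np
from scipy.optimize import curve_fit

def c_pcb(t, a1, a2, k1, k2, r0):
    """Eq. (3) with a1 = nu1*C_P0, a2 = nu2*C_P0, r0 = C_P1,0/C_P0."""
    type1 = a1 * (1.0 - np.exp(-k1 * t))
    type2 = a2 * (1.0 + r0
                  - (k2 / (k2 - k1)) * np.exp(-k1 * t)
                  + (k1 / (k2 - k1) - r0) * np.exp(-k2 * t))
    return type1 + type2

# Illustrative data generated from the model itself; real use would
# supply the measured PCB concentrations at the sampling times here.
t_exp = np.array([0.0, 0.02, 0.05, 0.1, 0.25, 0.5, 1, 2, 4, 8, 16])  # h
c_exp = c_pcb(t_exp, 12e-3, 0.288, 33.0, 0.29, 1.0e-4)

popt, _ = curve_fit(c_pcb, t_exp, c_exp,
                    p0=[1e-2, 0.2, 10.0, 0.1, 1e-4],
                    bounds=([0, 0, 0, 0, 0], [1, 1, 100, 10, 1]))
print(dict(zip(["nu1*CP0", "nu2*CP0", "k1 (1/h)", "k2 (1/h)", "CP10/CP0"],
               np.round(popt, 4))))
```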


4. Results and Discussion
The fitted model parameters are shown in Table 1, while the model fit to the experimental data can be seen in Figure 3. As seen in Figure 3, the model describes the data adequately. The ratio between ν2 and ν1, though, is too large compared to what could be expected based on prior findings, where a ratio between 0.2 and 0.3 would seem more likely if all PCB had been cleaved. This could indicate that, though the model explains the kinetic observations well, the two-step model might be an oversimplification. This is quite plausible, as it is known that at least three different thioether bonds between PCB and the proteins exist (Bishop et al., 1986). The amount of data available, though, is not sufficient to support a three-step mechanism. The ratio between the total attached PCB of type I, ν1·C_P,0, and the initial protein is, as it should be, quite consistent at around 0.5·10⁻³, independent of the initial protein concentration. This is also expected, based on the large difference in molecular weight between the protein and PCB. Further, the large difference between the rate constants k1 and k2 is consistent with earlier, more qualitative reports that some PCBs are appreciably more difficult to cleave.

Table 1. Model parameters fitted to the experimental data.

| Initial protein concentration (mg/mL) | ν1·C_P,0 (mg/mL) |
| 5  | 2.7·10⁻³ |
| 10 | 4.7·10⁻³ |
| 25 | 12·10⁻³  |

Shared fitted parameters: ν2/ν1 = 24; C_P1,0/C_P,0 = 1.0·10⁻⁴; k1 = 33 h⁻¹; k2 = 0.29 h⁻¹.

Figure 3. Cleavage of PCB as a function of time for initial protein concentrations of 5, 10 and 25 mg/mL. Error bars indicate the standard deviation; the fully drawn lines are calculated using the model (3) with the parameters from Table 1.


5. Conclusions
The developed consecutive pseudo-first-order kinetic model fits the experimental kinetic data for the methanolytic cleavage of PCB from the phycobiliproteins well. A more elaborate model, however, will have to be developed in order to bring consistency between the kinetic model and the more in-depth knowledge available about the linkage between PCB and the protein.

Acknowledgement
This paper is part of the research within the Innovation Consortium Natural Blue Food Color, financially supported by the Danish Agency for Science, Technology and Innovation, to whom the authors are indebted.

References
Bishop, J. E., Lagariass, J. C., Schoenleberll, R. W., Rapoport, H., Klotz, A. V., Glazer, A. N., 1986. Phycobiliprotein-Bilin Linkage Diversity. J. Biol. Chem. 261, 6790-6796.
Carra, P., O'heocha, C., 1966. Bilins released from algae and biliproteins by methanolic extraction. Phytochemistry 5, 993-997.
Chapman, D. J., Cole, W. J., Siegelman, H. W., 1968. Cleavage of phycocyanobilin from C-phycocyanin. Biochim. Biophys. Acta 53, 692-698.
Hirata, T., Tanaka, M., Ooike, M., Tsunomura, T., Sakaguchi, M., 2000. Antioxidant activities of phycocyanobilin prepared from Spirulina platensis. J. Appl. Phycol. 12, 435-439.
Koníčková, R., Vaňková, K., Vaníková, J., Váňová, K., Muchová, L., Subhanová, I., Zadinová, M., Zelenka, J., Dvořák, A., Kolář, M., Strnad, H., Rimpelová, S., Ruml, T., Wong, R.J., Vítek, L., 2014. Anti-cancer effects of blue-green alga Spirulina platensis, a natural source of bilirubin-like tetrapyrrolic compounds. Ann. Hepatol. 13, 273-283.
Lauro, G., 2007. Stabilized natural blue and green colorants. US Pat. 7,279,189. US 2006/0003060 A1.
O'heocha, C., 1958. Comparative biochemical studies of the phycobilins. Arch. Biochem. Biophys. 73, 207-219.
Padyana, A. K., Bhat, V. B., Madyastha, K. M., Rajashankar, K. R., Ramakumar, S., 2001. Crystal structure of a light-harvesting protein C-phycocyanin from Spirulina platensis. Biochem. Biophys. Res. Commun. 282, 893-898.
Rice, R., 2010. Natural Blue Flavorants and Colorants. US Pat. App. 12/753,278. US 2010/0255177 A1.
Siegelman, H. W., Chapman, D. J., Cole, W. J., 1967. Enzymatic cleavage of phycocyanobilin. Arch. Biochem. Biophys. 3455, 261.
Strasky, Z., Zemankova, L., Nemeckova, I., Rathouska, J., Wong, R.J., Muchova, L., Subhanova, I., Vanikova, J., Vanova, K., Vitek, L., Nachtigal, P., 2013. Spirulina platensis and phycocyanobilin activate atheroprotective heme oxygenase-1: a possible implication for atherogenesis. Food Funct. 4, 1586-1594.
Wu, S., Ford, C., Horn, G., 2013. Stable natural color process, products and use thereof. US Pat. 8,557,319. US 2009/0246343 A1.
Yoshikawa, N., Belay, A., 2008. Single-laboratory validation of a method for the determination of c-phycocyanin and allophycocyanin in spirulina (Arthrospira) supplements and raw materials by spectrophotometry. J. AOAC Int. 91, 524-529.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50016-3

Simulation of Turbulent Particulate Flows for Nuclear Waste Management: Agglomeration in Vertical Flows
Derrick O. Njobuenwu*, Michael Fairweather
School of Chemical and Process Engineering, University of Leeds, Leeds, LS2 9JT, United Kingdom
*[email protected]

Abstract
Results from large eddy simulations of relevance to the transport of nuclear waste flows are analysed to elucidate the interaction between the turbulent flow and calcite particles (a nuclear waste test material) in vertical flows. Agglomeration of particles is modelled based on the hard-sphere collision approach coupled to van der Waals' interactions in a Lagrangian framework. The particle collision frequency, collision efficiency and agglomeration rate reveal the importance of the gravity, lift and buoyancy forces in the simulations. The number of collisions increases in the order of upward, no-gravity and downward flows, whereas the reverse holds for the number of agglomerations. The upward flow shows a higher agglomeration rate than the other flows, due to the depletion of particles in the near-wall region under the action of the lift force.
Keywords: LES, particles, agglomeration, sedimentation.

1. Introduction
Management of the UK's legacy nuclear waste is a major remediation programme involving a community of partners, including academics and government agencies. The total volume of the waste is O(10⁶) m³, arising from the nuclear and defence industries as well as hospitals and research, and it has been stored across the UK since 1959. This waste has different radioactivity levels (low, intermediate and high), and includes a substantial volume of insoluble sludge consisting of colloidal particles which must be cleaned up to ensure that storage sites can be returned to a re-usable state. The stages of the decommissioning process comprise waste retrieval, transport, separation, solidification and permanent storage. In a typical waste sludge, the particles are in contact with highly alkaline, high ionic strength salt solutions, where the electrical double layer associated with charged sites on particle surfaces collapses, and the electrostatic repulsions that can disperse particles of like charge are inhibited. Hence, it is assumed that the particles are attracted to each other (with van der Waals' attraction dominating), promoting particle agglomeration. Agglomeration can occur at most of the stages of the decommissioning process, resulting in unwanted effects such as the formation of deposits and the plugging of transfer and process pipes. Agglomerates also affect the viscosity, solid settling rate and other waste characteristics that affect pumping. The rates of agglomeration and of de-agglomeration in pumps and in pipe


flow are important characteristics that feed into transport prediction models. It has been shown that primary particle size, agglomerate diameter and the fractal dimension of the agglomerate have a significant effect on two important sedimentation parameters, namely the settling rate and the gel point. Previous studies (e.g. Njobuenwu and Fairweather, 2015) have considered the effect of particle size, flow turbulence intensity and particle restitution coefficient on collision frequency and efficiency at zero gravity. In a dynamic situation, these parameters must all be accounted for to predict the transport, agglomeration and sedimentation behaviour of the waste. Most numerical simulations exclude the gravitational force in order to study only turbulence-induced agglomeration. However, gravitational acceleration is inevitably present in reality, and hence must be included as one of the external forces that induce the particle-particle interactions leading to agglomeration and sedimentation. The inclusion of gravitational acceleration does alter the particle behaviour in a system, as shown by Marchioli et al. (2007), Molin et al. (2012) and Njobuenwu and Fairweather (2015). The overall objective of our work is to develop a framework for the modelling of solid-liquid separation using computational fluid dynamic techniques, enabling the prediction of particle settling efficiency. The specific aim of the work described herein is to investigate the effects of gravity on particle agglomeration, extending earlier studies (Njobuenwu and Fairweather, 2015) in which the gravity force was neglected and particle-particle interaction was induced only by turbulence. Gravitational and buoyancy forces are now included in Newton's law of particle motion to study the effect of gravity in upward and downward flows.

2. Numerical methodology
In a dense suspension with a high mass loading and particle volume fraction Φv = 10⁻³, a four-way coupling technique is required, which includes modelling particle-fluid and fluid-particle interactions as well as particle-particle interactions. Details of the four-way coupling techniques employed in this paper can be found in the literature (Rani et al., 2004; Bini and Jones, 2008; Alletto and Breuer, 2012; Molin et al., 2012). For the liquid phase, the flow is predicted using large eddy simulation (LES). LES resolves the large-scale flow structures by computation and models the sub-grid scale (SGS) motions as:

∂ūi/∂t + ūj ∂ūi/∂xj = −(1/ρ) ∂p̄/∂xi − ∂(σ̄ij + τij)/∂xj + ΔP/(ρLz) + fpi/ρ        (1)

where ūi and p̄ are the filtered fluid velocity and pressure, σ̄ij represents the viscous stress, and τij = (ui uj)‾ − ūi ūj represents the effect of the SGS motion on the resolved motion. fp accounts for the action of the particles on the fluid, given by the sum of all hydrodynamic forces (less body forces) in the momentum equation of all particles in a fluid computational cell, fp = −(1/Vcell) Σj=1..Ncell ffj. ΔP/Lz = −ρuτ²/h + Φv g(ρ − ρp) is the mean pressure constant that drives the flow (Molin et al., 2012), and τw = ρuτ² is the mean wall shear stress. The fluid is water (ρ = 10³ kg m⁻³, ν = 10⁻⁶ m² s⁻¹), with a shear Reynolds number Reτ = uτh/ν. The BOFFIN-LES code (Bini and Jones, 2008) was used to solve the LES equations.
The dispersed phase is represented using a Lagrangian particle tracker, enhanced with deterministic hard-sphere collisions and van der Waals' interactions, where the frequency and efficiency of binary collisions are monitored in a channel flow. Particle


trajectories are obtained from Newton’s equation of motion considering hydrodynamic forces (drag, lift, added mass and pressure gradient), gravity and buoyancy forces, and the SGS contribution:

dv = FH dt + (1 − ρ/ρp) g dt + (C0 ksgs/tp)^0.5 dWt

FH = (fD/tp)(u − v) + CL (ρ/ρp) [(u − v) × ω] + (ρ/ρp) Du/Dt + (ρ/2ρp) (du/dt − dv/dt)        (2)
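As an illustration of how Eq. (2) can be advanced in time, the sketch below performs Euler-Maruyama steps of a reduced form of the equation, retaining only Stokes drag, gravity/buoyancy and the stochastic SGS term (the lift, added-mass and pressure-gradient contributions are omitted for brevity, and all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def advance_particle(v, u, dt, t_p, rho_ratio, k_sgs, c0=1.0,
                     g=np.array([0.0, 0.0, -9.81])):
    """One Euler-Maruyama step of a reduced form of Eq. (2): Stokes drag
    (f_D ~ 1 assumed), gravity/buoyancy and the stochastic SGS term only."""
    drag = (u - v) / t_p
    grav = (1.0 - rho_ratio) * g              # rho_ratio = rho / rho_p
    noise = np.sqrt(c0 * k_sgs / t_p) * rng.normal(size=3) * np.sqrt(dt)
    return v + (drag + grav) * dt + noise

v = np.zeros(3)
for _ in range(1000):                         # settling in still fluid
    v = advance_particle(v, u=np.zeros(3), dt=1e-4, t_p=5e-4,
                         rho_ratio=1 / 2.71, k_sgs=0.0)
print(v)   # approaches terminal velocity (1 - rho/rho_p) * g * t_p
```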

A pro-active approach to collision detection was adopted for modelling collision frequency. This method ensures that a possible collision is detected at the beginning of a time step and all particles are advanced using small collision time intervals (Sundaram and Collins, 1996; Rani et al., 2004), with the particles subsequently moved based on the equation of motion without collisions. The agglomeration kinetic model is such that agglomeration occurs if the elastic energy after the compression period of the collision is less than the work required to overcome the van der Waals’ forces (Alletto, 2014):

(v2⁻ − v1⁻)² − [(v2⁻ − v1⁻)·nc]² (1 − en²) ≤ [H* |(v2⁻ − v1⁻)·nc| / (6 d0*²)] · [6 (1 − en²) (dp,1*³ + dp,2*³) / (π² ρp* σ* dp,1*² dp,2*² (dp,1* + dp,2*))]^(1/2)        (3)

where quantities with the superscript * are dimensionless and defined as: the particle density ρp* = ρp/ρ, the particle diameter dp* = dp/(2h), the Hamaker constant H* = H/(ρ ub² (2h)³) and the yield pressure σ* = σ/(ρ ub²); ub is the bulk velocity and 2h is the channel height. The superscript (−) denotes quantities before the collision and (+) those after the collision, with subscripts 1 and 2 denoting particles one and two. An agglomerate is treated as a spherical particle with an equivalent diameter, while its velocity and position vectors are determined from the pre- and post-collision velocities of the parent and child particles. For the non-agglomerating case, the post-collision properties of the colliding particles are changed according to the hard-sphere collision model. The channel was laden with spherical calcite, a test material for legacy waste sludge, with a particle-to-fluid density ratio of Φρ = 2.71, Hamaker constant H = 3.8×10⁻²⁰ J, mean yield stress σ = 3.0×10⁸ Pa, minimal contact distance δ0 = 2.0×10⁻¹⁰ m and normal restitution coefficient en = 0.4 (Tomas, 2007; Ho and Sommerfeld, 2002). The particles were initially randomly distributed, with their velocity vectors set equal to that of the fluid at the particle positions. Periodic boundary conditions were imposed in the streamwise and spanwise directions, with elastic collisions applied at the walls.
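The criterion of Eq. (3) can be evaluated as a simple predicate inside the collision loop, as sketched below (inputs dimensionless as defined above; the numerical values in the example are illustrative, not those of the simulations):

```python
import numpy as np

def agglomerates(v1, v2, n_c, dp1, dp2, en=0.4,
                 h_star=1.0e-9, d0_star=1.0e-6, rho_star=2.71,
                 sigma_star=1.0e5):
    """Energy-based agglomeration test of Eq. (3); all quantities are
    dimensionless as defined in the text, default values illustrative."""
    v_rel = v2 - v1
    v_n = float(np.dot(v_rel, n_c))               # normal relative velocity
    lhs = np.dot(v_rel, v_rel) - (v_n ** 2) * (1.0 - en ** 2)
    rhs = (h_star * abs(v_n) / (6.0 * d0_star ** 2)) * np.sqrt(
        6.0 * (1.0 - en ** 2) * (dp1 ** 3 + dp2 ** 3)
        / (np.pi ** 2 * rho_star * sigma_star * dp1 ** 2 * dp2 ** 2
           * (dp1 + dp2)))
    return lhs <= rhs

# Slow head-on approach of equal particles: prints True (they stick).
print(agglomerates(np.array([0.0, 0.0, 0.0]), np.array([-1e-4, 0.0, 0.0]),
                   np.array([1.0, 0.0, 0.0]), dp1=2e-4, dp2=2e-4))
```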

3. Results and Discussion
First, the effect of the momentum exchange between the two phases, the shear-induced lift force and the direction of gravity on preferential concentration and wall accumulation is considered. It is important to understand particle segregation under these influences, as they affect the particle collision frequency and efficiency at different locations in the wall-normal direction. Figure 1 shows the integral of the instantaneous concentration profiles, C/C0, in the wall-normal direction for the viscous sub-layer. The trends observed for these particles (ρp/ρ > 1) are qualitatively consistent with those observed by Marchioli et al. (2007) for gas-solid flows (ρp/ρ > 1), but are opposite in direction to the observations of Molin et al. (2012) for liquid-gas bubbly flows (ρp/ρ < 1). Marchioli et al. (2007) showed that the lift force affects the particle concentration at the wall, leading to a reduction of the concentration at the wall for an upward channel flow and an increase for a downward flow. The reverse was the case in Molin et al. (2012). Figure 2 shows the evolution of the number of collisions, Ncol (Figure 2(a)), and the number of collisions which satisfy the agglomeration criterion in Eq. (3), Nagg (Figure 2(b)), for the three flows noted above, all normalised by the initial total number of primary particles, Ntp ≈ 10⁷. The collision efficiency, also known as the agglomeration rate β, defined here as the ratio of the number of collisions Nagg which satisfy Eq. (3) to the total number of collisions Ncol, i.e. β = Nagg/Ncol, is shown in Figure 3(a). The probability density function, PDF, of the wall-normal location, x⁺, where binary collisions occur (open symbols) and agglomeration occurs (solid symbols) for the three flow cases is also shown in Figure 3(b). On the occurrence of agglomeration, the


daughter particle is treated as a volume-equivalent sphere, thus implying a continuously increasing population of multiples and a depletion of single particles throughout the simulation with time, in the absence of break-up.

Figure 2. Evolution of the number of particle-particle collisions (a) and the number of such collisions leading to agglomeration (b) for downward (DF), no gravity (NG) and upward (UF) flows.

In Figure 2, the number of inter-particle collisions with time is highest for the downward flow, followed by the no-gravity and upward flows. The lift- and gravity-induced segregation of particles towards the near-wall region in the downward case enhances the inter-particle collisions, while the depletion of particles from this region in the upward case means that there are fewer particles in the near-wall region, where collisions have been shown to occur frequently, as seen in Figure 3. Note that the scatter in the curves for the filled triangles and squares is a result of lower runtimes. Also in Figure 2, the inter-particle collisions leading to agglomeration are shown to be higher for the upward than for the downward flow. Hence, the low number of inter-particle collisions for the upward flow, together with their higher effectiveness in leading to agglomeration, means that the upward flow has a higher agglomeration rate than the downward and no-gravity cases, as is evident in the results of Figure 3. The insert in Figure 3(a) shows that, in all three flow cases, the agglomeration rate is highest at the start of the simulation, when the initial bias in the particle properties is still present; it then gradually reduces until the rate becomes almost constant with time, once the particles have fully interacted with the fluid and all the external forces acting on them. Although the large number of inter-particle collisions for the downward flow case seen in Figure 2 is an advantage for agglomeration processes, most of the collisions in this case exhibit a high relative kinetic energy, which inhibits the majority of these collisions from leading to agglomeration. Hence, the downward flow has a lower agglomeration rate than the upward flow, as shown in Figure 3(a). In all three cases, the PDF of the location of inter-particle collisions, which is always symmetric, shows high occurrences in the near-wall region, with the upward, then the no-gravity and finally the downward flow having the largest number of collisions. In terms of the location where agglomeration occurs, the upward flow shows less bias, as agglomeration takes place away from the near-wall region. In contrast, for the downward and no-gravity cases, the PDF(x⁺) where agglomeration occurs peaks in the near-wall region. Overall, it has therefore been demonstrated that the direction of the mean flow relative to gravitational acceleration has a significant effect on the frequency of inter-particle collisions and on the number of such collisions that lead to agglomeration.


Figure 3. Evolution of the agglomeration rate (a) and PDF of the location along the wall-normal axis where interactions occur (b) for downward (DF), no gravity (NG) and upward (UF) flows. In (b), binary collisions (open symbols) and agglomeration (solid symbols).

4. Conclusions
The effect of the direction, and of the absence, of gravitational acceleration on particle agglomeration in vertical channel flows has been investigated. Particles in downward and zero-gravity flows are transported towards the wall, where they accumulate, while particles in upward flow migrate away from the wall, although a uniform spatial distribution is maintained in the core of the flow in all cases. This segregation pattern affects the evolution of the inter-particle collisions and the collision efficiency, as well as the locations where these events take place. The downward flow has the highest number of collisions but the lowest number of agglomerations and the lowest agglomeration rate, whereas the upward flow has the lowest number of inter-particle collisions but the greatest number of agglomerations, and hence the highest agglomeration rate.

References
M. Alletto, 2014, Ph.D. Thesis, Helmut-Schmidt University, Hamburg, Germany.
M. Alletto and M. Breuer, 2012, One-way, two-way and four-way coupled LES predictions of a particle-laden turbulent flow at high mass loading downstream of a confined bluff body, Int. J. Multiphase Flow, 45, 70-90.
M. Bini and W.P. Jones, 2008, Large-eddy simulation of particle-laden turbulent flows, J. Fluid Mech., 614, 207-252.
C.A. Ho and M. Sommerfeld, 2002, Modelling of micro-particle agglomeration in turbulent flows, Chem. Eng. Sci., 57, 3073-3084.
C. Marchioli, M. Picciotto and A. Soldati, 2007, Influence of gravity and lift on particle velocity statistics and transfer rates in turbulent vertical channel flow, Int. J. Multiphase Flow, 33, 227-251.
D. Molin, C. Marchioli and A. Soldati, 2012, Turbulence modulation and microbubble dynamics in vertical channel flow, Int. J. Multiphase Flow, 42, 80-95.
D.O. Njobuenwu and M. Fairweather, 2015, Deterministic modelling of particle agglomeration in turbulent flow, In: K. Hanjalic, T. Miyauchi, D. Borello, M. Hadziabdic and P. Venturini, Eds., Proc. 8th International Symposium on Turbulence, Heat and Mass Transfer, NY, 587-590.
S.L. Rani, C.M. Winkler and S.P. Vanka, 2004, A new algorithm for computing binary collisions in dispersed two-phase flows, Numer. Heat Tr. B-Fund., 45, 99-107.
S. Sundaram and L.R. Collins, 1996, Numerical considerations in simulating a turbulent suspension of finite-volume particles, J. Comput. Phys., 124, 337-350.
J. Tomas, 2007, Adhesion of ultrafine particles - A micromechanical approach, Chem. Eng. Sci., 62, 1997-2010.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50017-5

A feasibility-based algorithm for Computer Aided Molecular and Process Design of solvent-based separation systems
Smitha Gopinath, Amparo Galindo, George Jackson and Claire S. Adjiman*
Department of Chemical Engineering, Centre for Process Systems Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
[email protected]

Abstract
Computer-aided molecular and product design (CAMPD) can in principle be used to find simultaneously the optimal conditions in separation processes and the structure of the optimal solvents. In many cases, however, the solution of CAMPD problems is challenging. In this paper, we propose a solution approach for the CAMPD of solvent-based separation systems in which implicit constraints on phase behaviour in process models are used to test the feasibility of the process and solvent domains. The tests not only eliminate infeasible molecules from the search space but also infeasible combinations of solvent molecules and process conditions. The tests also provide bounds for the optimization of the process model (primal problem) for each solvent, facilitating numerical solution. This is demonstrated on a prototypical natural gas purification process.
Keywords: CAMPD, MINLP, solvent, CO2 capture

1. Introduction The computer-aided molecular and process design (CAMPD) problem is defined as the simultaneous optimization of a process and the molecules used in the process. CAMPD problems are large mixed-integer nonlinear programming (MINLP) problems that are challenging to solve for a number of reasons: the need to solve a large number of nonlinear equality constraints in process and property prediction models; the discontinuous nature of the feasible space, where optimization starting from an infeasible point may lead to numerical failure; and convergence to poor local solutions due to the non-convexity of the problem. Several approaches have been proposed to overcome these challenges. One approach is to reformulate the problem into a nonlinear continuous optimization problem (Pereira et al., 2011; Lampe et al., 2015). Hierarchical optimization has been used to generate initial guesses for the solution of the full CAMPD problem (Burger et al., 2015). Screening-based methods that reduce the molecular search space using a generate-and-test strategy with physical property targets have been used in several studies (Hostrup et al., 1999; Karunanithi et al., 2005; Eljack et al., 2008). The determination of targets by using property "clusters" has also been investigated (Eljack et al., 2008). Buxton et al. (1999) proposed a screening approach embedded in an MINLP optimizer and applied it to the design of a solvent for a separation unit with fixed operating conditions. In their approach, tests based on property targets, the ability to initialize model subproblems and the feasibility of mass transfer were used prior to solving the primal problem. We adopt the idea of embedding tests in an optimizer and propose novel tests that do not require the user to specify arbitrary property targets for the solvent or mixtures in the process. The tests are based on thermodynamic insights that exploit implicit constraints on phase behaviour. Examples of such constraints include the presence of two phases at vapour-liquid equilibrium or the stability of the liquid solvent at relevant process conditions. Implicit constraints are often discontinuous and their violation can lead to numerical failure. The tests developed in our study lead to a reduction in the domain of (unknown) process and molecular variables, thereby providing bounds for the optimization of the process model that lie within the feasible region. While the tests developed are general and may be adapted to different separation systems, they are introduced here with a focus on the design of an absorption system.

2. Proposed tests
The CAMPD problem is given by

$$\min_{u,\,n} \; f(u, n) \quad \text{s.t.} \quad g_p(u, n) = 0, \quad g(u, n) \le 0, \quad Cn \le e, \quad u^L \le u \le u^U \tag{P}$$

where $u \in X \subset \mathbb{R}^c$ is a vector of continuous variables and $n \in N \subset (\mathbb{N} \cup \{0\})^q$ is a vector of integer variables that represent the number of groups of each type in the molecule. $f: X \times N \to \mathbb{R}$ is the process objective, $g_p$ is a set of process and property constraints, $g$ represents process constraints, and $Cn \le e$ represents molecular feasibility constraints and bounds on the vector $n$. $u^L$ and $u^U$ are lower and upper bounds on the continuous variables, respectively.
2.1. Test 0
An implied constraint in an absorption model is that the feed remains in the vapour phase. Test 0 identifies the conditions of temperature and pressure under which the feed is in the desired phase. First, an iterative approach is applied to find the cricondentherm $T_F^{L0}$, the highest two-phase coexistence temperature for the feed of composition $y_F$. The dew pressure is found iteratively at every temperature $T$ ($T$ is increased by a user-specified step size $h$ at the end of each iteration) until no two-phase coexistence pressure is found, yielding $T_F^{L0}$. Once $T_F^{L0}$ has been computed, a lower bound on the pressure domain is determined. The pressure is assumed to be regulated using an isenthalpic valve. For gases with a positive Joule-Thomson coefficient, such as CO2 and methane, a reduction in pressure results in a reduction in temperature. Thus, one may find the lowest pressure $P^{L0}$ that is achievable by isenthalpic expansion of the gas without the temperature dropping below $T_F^{L0}$.
2.2. Test 1
An implied constraint on an absorption process is that the solvent is a liquid at relevant process conditions. Test 1 is used to identify whether the solvent (defined by vector $n^k$) is a liquid in the range of user-specified storage and handling temperatures $[T_{s1}, T_{s2}]$ and pressures, and whether it is safe to handle and feasible to transport. $T_{melt}$, $T_{boil}$, $T_{flash}$ and $\nu$ are the melting point, boiling point, flash point and viscosity of the proposed solvent, respectively, and $\nu^b$ is a user-specified upper bound on the viscosity. Similar to the property-based screening of Harper et al. (1999), test 1 consists of the following property tests:

$$T_{s2} - T_{boil} \le 0, \quad T_{melt} - T_{s1} \le 0, \quad T_{s2} - T_{flash} \le 0, \quad \nu(T_{s1}) - \nu^b \le 0 \tag{P1}$$
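A minimal sketch of the Test 0 iteration is given below. The dew_pressure function is a hypothetical stand-in for the phase-equilibrium calculation (performed with a thermodynamic model in this work), and the starting temperature and step size h are illustrative only.

```python
# Sketch of Test 0's cricondentherm search: raise T in steps of h until no
# two-phase (dew) pressure exists; the last coexisting T approximates T_F^L0.
def dew_pressure(T, y_feed):
    """Toy stand-in for the dew-point calculation: pretend the feed shows
    two-phase coexistence only below 288 K."""
    return 5.0 - 0.05 * (T - 250.0) if T < 288.0 else None

def cricondentherm(y_feed, T_start=250.0, h=0.5):
    T = T_start
    while dew_pressure(T + h, y_feed) is not None:
        T += h
    return T  # highest temperature (within step h) with two-phase coexistence

print(cricondentherm({"CO2": 0.8, "CH4": 0.2}))  # -> 287.5 with the toy model
```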


Figure 1: A phase diagram for CO2-methane-solvent (propyl methyl ether) at $T_N$ = 304.4 K, $P_{abs}$ = 9.897 MPa

2.3. Test 2
An implicit constraint in an absorber model is that the solvent and feed can mix to form two phases. Test 2 is designed to identify whether the solvent can form a two-phase mixture with the feed and to determine the range of operating conditions where it may do so. The test is based on the existence of a feasible operating line for the separation. The operating line is calculated using the concept of the difference point, $\Delta = V_{i+1} - L_i$, introduced by Hunter and Nash (1934) and Henley et al. (2011) for the design of extraction systems. Here, the vapour and liquid streams leaving any stage $i$ are represented by flowrates $V_i$ and $L_i$ and compositions $y_i$ and $x_i$, respectively. The composition at the difference point is represented by $o$ in Figure 1. The gas entering the absorber has a flowrate $V_{N+1}$ and composition $y_{N+1}$. The solvent entering the absorber (assumed to be pure solvent in this feasibility test) has a flowrate $L_0$ and composition $s_0$. Since $\Delta = V_{N+1} - L_N$, an operating line must pass through the feed $y_{N+1}$, $x_N$ (a point on the two-phase boundary) and the difference point $o$. From $\Delta = V_1 - L_0$, the locus of difference points lies on the line through $y_1$ and $s_0$, excluding the segment $y_1 s_0$ (as both $V_1$ and $L_0$ are nonnegative). This test may be applied to systems for which the solubility of the solute in the solvent increases with increasing pressure. At a pressure where the mixture exhibits a critical point, such that the solute and solvent are completely miscible, a further increase in pressure reduces the maximum concentration of the solute on the two-phase boundary. When the highest concentration of solute on the liquid-phase boundary is lower than the concentration of the solute in the feed, a difference point that lies on the ray $\overrightarrow{y_1 d'}$ is infeasible, because an operating line $y_{N+1} o''$ cannot intersect the two-phase region if $o''$ is a point on $\overrightarrow{y_1 d'}$. Hence, the difference point can lie only on the ray $\overrightarrow{s_0 d}$. By visualization, if there exists an operating line with any $o$ on the ray $\overrightarrow{s_0 d}$ that intersects the two-phase region, then the operating line with $o = s_0$ also cuts the two-phase region. If the line segment $y_{N+1} s_0$ does not intersect the two-phase region, then no other operating line is feasible. Figure 1 represents the operating line and difference points at the maximum value of the feasible pressure $P^{U1}$, where $y_{N+1} s_0$ is the only feasible operating line.
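To make the geometric idea concrete, the sketch below checks whether the candidate operating line from the feed composition to the pure-solvent point crosses a two-phase boundary. The boundary polyline and the compositions are invented for illustration; in the actual test the boundary comes from the thermodynamic model.

```python
# Toy sketch of Test 2's geometric feasibility check: does the candidate
# operating line from y_{N+1} to s_0 cross the two-phase boundary polyline?
def segments_intersect(p, q, a, b):
    """Proper intersection test for segments pq and ab via orientation signs."""
    cross = lambda o, u, v: (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

feed, solvent = (0.8, 0.2), (0.0, 0.0)      # illustrative composition points
boundary = [(0.05, 0.0), (0.3, 0.15), (0.5, 0.1), (0.6, 0.0)]  # invented envelope
feasible = any(segments_intersect(feed, solvent, a, b)
               for a, b in zip(boundary, boundary[1:]))
print(feasible)   # True: the operating line cuts the two-phase region
```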


This is formalised in problem (P2), in which the existence of a feasible operating line is checked by testing whether the difference line $y_{N+1} s_0$ intersects any point $x_N$ that is in equilibrium with $y_N$. An upper bound $P^{U1}$ on the pressure of the separation unit, below which the condition of feasibility is met, is identified:

$$\begin{aligned}
P^{U1} = \max_{P_{abs},\,T_N}\;\; & P_{abs} \\
\text{s.t.}\;\; & \frac{y_{N+1}(1) - x_N(1)}{y_{N+1}(2) - x_N(2)} = \frac{y_{N+1}(1) - s_0(1)}{y_{N+1}(2) - s_0(2)} \\
& \mu_i(y_N, T_N, P_{abs}) = \mu_i(x_N, T_N, P_{abs}) \quad \forall\, i \in NC \\
& \textstyle\sum_{i=1}^{NC} x_i = 1, \qquad \textstyle\sum_{i=1}^{NC} y_i = 1 \\
& \| y_N - x_N \|_2 > 0 \\
& P^{L0} \le P_{abs} \le P^{U0}, \qquad T^{L0} \le T_N \le T^{U0} \\
& T_{melt} + 10 \le T_N \le \min(T_{flash} - 10,\; T_{boil} - 10)
\end{aligned} \tag{P2}$$

3. Proposed algorithm
The tests are embedded within an outer approximation framework (Duran and Grossmann, 1986; Fletcher and Leyffer, 1994). To account for linearizations that are not strict underestimators of the nonconvex feasible space, global convexity tests (Kravanja and Grossmann, 1994) have been implemented. The main algorithmic steps are as follows:
I. Set $T_F$, $y_F$, $P^{U0}$, $T^{U0}$, $n^1$ (initial solvent structure), and $\varepsilon$ (convergence tolerance).
II. Evaluate Test 0 to find $T_F^{L0}$ and $P^{L0}$.
III. Set $f^{UBD} = \infty$ and $f^{LBD} = 0$. Set the iteration counter $k = 1$, $F^0 = \emptyset$, $A^0 = \emptyset$, $IC^0 = \emptyset$ and $y^0 = \emptyset$.
IV. While ($f^{UBD} - f^{LBD} > \varepsilon$):
1. Evaluate Test 1. If the test is passed, go to step IV.2; else go to step IV.5.
2. Evaluate Test 2. If problem (P2) is infeasible, go to step IV.5; else go to step IV.3.
3. Set $P_{abs}^U = P^{U1}$. Evaluate the primal problem (P3) (Table 1).
4. If problem (P3) is feasible, set $f^k$ equal to its objective value and, if $f^k < f^{UBD}$, set $f^{UBD} = f^k$ and $n^* = n^k$. If (P3) is feasible, set $F^k = F^{k-1} \cup \{k\}$ and go to step IV.6; else go to step IV.5.
5. Add an integer cut to (P4) (Table 1) to eliminate $n^k$. Set $IC^k = IC^{k-1} \cup \{k\}$. Compute $a^k = \sum_t (b^{t-1} n_t^k)$, where the base $b$ is set to exceed the maximum value that any $n_i$ can assume.
6. Solve (P4) (Table 1). If it is feasible with objective $\eta^k$, set $f^{LBD} = \eta^k$ and go to step IV.7. Else, stop.
7. Set $k = k + 1$ and go to step IV.1.
The set $A^k$ of the master problem (P4) contains linearizations of the active constraints in problems (P3) and (P1), as well as the constraints violated in (P1). Hence, an integer cut is not required to avoid cycling of solvents that fail test 1.
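A small sketch of the integer-cut bookkeeping in step IV.5 follows: the base-b encoding maps each group-count vector to a unique scalar $a^k$ (provided b exceeds the largest allowed group count), so a single linear disjunction per cut can exclude exactly one solvent. The group vectors and base below are invented for the example.

```python
# Sketch of the integer-cut encoding in step IV.5: a^k = sum_t b**(t-1) * n_t^k
# (t counted from 1) maps each group-count vector to a unique scalar as long as
# the base b exceeds every allowed n_i.
def encode(n_vec, b):
    """Base-b encoding of a group-count vector (requires b > max allowed n_i)."""
    return sum(b ** t * n_t for t, n_t in enumerate(n_vec))  # b**t == b**(t'-1)

b = 5                                            # assumes group counts of at most 4
excluded = {encode(n, b) for n in [(3, 1, 0, 2), (0, 2, 1, 1)]}  # invented cuts

candidate = (3, 1, 0, 2)
print(encode(candidate, b) in excluded)          # True -> this solvent is cut off
```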


Table 1: Primal and Master Problems

Primal problem (P3):

$$\min_{u}\; f(u, n^k) \quad \text{s.t.} \quad g_p(u, n^k) = 0, \quad g(u, n^k) \le 0, \quad u^{Lk} \le u \le u^{Uk}$$

Master problem (P4):

$$\begin{aligned}
\min_{u,\,n,\,\eta}\;\; & \eta \\
\text{s.t.}\;\; & f^i + \nabla_n f^i [n - n^i]^T + \nabla_u f^i [u - u^i]^T < \eta && \forall\, i \in F^k \\
& g^j + \nabla_n g^j [n - n^j]^T + \nabla_u g^j [u - u^j]^T \le 0 && \forall\, j \in A^k \\
& L(1 - y_r) + \varepsilon \le a - a^r \le U y_r - \varepsilon && \forall\, r \in IC^k \\
& a = \textstyle\sum_t (b^{t-1} n_t) \\
& u^L \le u \le u^U, \quad Cn \le d, \quad y_r \in \{0, 1\} && \forall\, r \in IC^k
\end{aligned}$$

Table 2: Performance of tests 1 and 2, with $y_{F,CO_2}$ = 0.8, $T_F$ = 320 K, $P^{U0}$ = 100 MPa, $T^{U0}$ = 340 K

                                           Test 1    Test 2
Number of molecules tested                 1109      449
Number of molecules eliminated by test     660       0
Number of molecules with updated bounds    N/A       392
Average updated upper bound on pressure    N/A       30.039 MPa

4. Case study and Results
The separation of carbon dioxide from methane is studied here. A process with a 10-stage absorption column operating at a variable pressure $P_{abs}$ is considered. The solvent entering the absorber has a flowrate $F_{solvent}$. The spent solvent is regenerated in an adiabatic flash drum at 1 bar. The property prediction model employed is the SAFT-γ Mie equation of state (Papaioannou et al., 2014). More details on the process and molecular models may be found in Burger et al. (2015). The objective function is the net present value (NPV) of the process computed over a 10-year period. The design variables are $P_{abs}$, $F_{solvent}$ and $n$, the structure of the optimal solvent. The treated gas exiting the absorber must be of at least 97 (mole) % methane purity, and the temperature in the flash drum must be at least ten degrees above the melting point of the solvent. The area and height of the absorber are constrained to be less than 300 m² and 50 m, respectively.
The effectiveness of each of the tests is first investigated. In test 0, for a feed of composition $y_{F,CO_2}$ = 0.8, $P_F$ = 7.961 MPa, $T_F$ = 320 K and user-specified pressure bounds of 0.1 MPa ≤ $P_{abs}$ ≤ 7.5 MPa, the updated bounds on pressure and temperature obtained are $P^{L0}$ = 4.961 MPa and $T_F^{L0}$ = 288 K, respectively. This results in a 52 % reduction of the process domain. The reduction in the domain obtained from tests 1 and 2, when applied to a search space comprising the groups CH3, CH2, eO (an oxygen group between a CH3 and a CH2 group), cO (an oxygen group between two CH2 groups), COO, −CH, COOH and CH2OH, is shown in Table 2. A molecule has an updated pressure bound if test 2 identifies a $P^{U1}$ such that $P^{U1} < P^{U0}$. 60 % of the solvent design space is eliminated by test 1. The remaining molecules are evaluated in test 2, which updates the upper bound on the absorber pressure for 87 % of these molecules. The results of applying the feasibility-based algorithm to the case study are shown in Table 3. The average number of iterations of the algorithm (over ten runs with different values of $n^1$) is 16, whereas the average number of evaluations of the primal is 14. Importantly, the problem converges to the solution in each of the ten runs. The use of the algorithm has enhanced convergence as compared to previous work (Burger et al., 2015), indicating that the tests help to mitigate the numerical difficulties that arise when solving such highly nonlinear problems.


Table 3: Results of the algorithm, with $y_{F,CO_2}$ = 0.2, $P_F$ = 7.961 MPa, $T_F$ = 301.4 K, 0.1 MPa ≤ $P_{abs}$ ≤ 7.5 MPa, 0 < $F_{solvent}$ ≤ 50 kmol s⁻¹

Solvent                              $P_{abs}$/MPa   $F_{solvent}$/kmol s⁻¹   NPV/Billion USD
penta(oxymethylene)dimethyl ether    3.83            0.84                     1.72

5. Conclusions A feasibility-based algorithm for CAMPD has been proposed, with novel tests that reduce both the process and molecular domains. As demonstrated in a case study, the discrete space is reduced by test 1, while implicit discontinuities in process models are addressed by tests 0 and 2. Numerical difficulties are avoided by optimizing within a feasible search space and by initializing the process model, making the solution of challenging CAMPD problems tractable. Acknowledgements The authors gratefully acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC) grants EP/E016340, EP/J014958/1 and EP/J003840/1.

References
J. Burger, V. Papaioannou, S. Gopinath, G. Jackson, A. Galindo, C. S. Adjiman, 2015. A hierarchical method to integrated solvent and process design of physical CO2 absorption using the SAFT-γ Mie approach. AIChE Journal 61, 3249–3269.
A. Buxton, A. G. Livingston, E. N. Pistikopoulos, 1999. Optimal design of solvent blends for environmental impact minimization. AIChE Journal 45, 817–843.
M. A. Duran, I. E. Grossmann, 1986. An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Mathematical Programming 36, 307–339.
F. T. Eljack, C. C. Solvason, N. Chemmangattuvalappil, M. R. Eden, 2008. A property based approach for simultaneous process and molecular design. Chinese Journal of Chemical Engineering 16, 424–434.
R. Fletcher, S. Leyffer, 1994. Solving mixed integer nonlinear programs by outer approximation. Mathematical Programming 66, 327–349.
Gurobi Optimization, Inc., 2015. Gurobi optimizer reference manual 6.5.
P. M. Harper, R. Gani, P. Kolar, T. Ishikawa, 1999. Computer-aided molecular design with combined molecular modeling and group contribution. Fluid Phase Equilibria 158, 337–347.
E. J. Henley, J. D. Seader, D. K. Roper, 2011. Separation process principles. Wiley.
M. Hostrup, P. M. Harper, R. Gani, 1999. Design of environmentally benign processes: integration of solvent design and separation process synthesis. Computers & Chemical Engineering 23, 1395–1414.
T. G. Hunter, A. W. Nash, 1934. The application of physico-chemical principles to the design of liquid-liquid contact equipment. Part II. Application of phase-rule graphical methods. Journal of the Society of Chemical Industry, 95–102.
A. T. Karunanithi, L. E. K. Achenie, R. Gani, 2005. A new decomposition-based computer-aided molecular/mixture design methodology for the design of optimal solvents and solvent mixtures. Industrial & Engineering Chemistry Research 44, 4785–4797.
Z. Kravanja, I. E. Grossmann, 1994. New developments and capabilities in PROSYN, an automated topology and parameter process synthesizer. Computers & Chemical Engineering 18, 1097–1114.
M. Lampe, M. Stavrou, J. Schilling, E. Sauer, J. Gross, A. Bardow, 2015. Computer-aided molecular design in the continuous-molecular targeting framework using group-contribution PC-SAFT. Computers & Chemical Engineering 81, 278–287.
V. Papaioannou, T. Lafitte, C. Avendaño, C. S. Adjiman, G. Jackson, E. A. Müller, A. Galindo, 2014. Group contribution methodology based on the statistical associating fluid theory for heteronuclear molecules formed from Mie segments. The Journal of Chemical Physics 140 (5).
F. E. Pereira, E. Keskes, A. Galindo, G. Jackson, C. S. Adjiman, 2011. Integrated solvent and process design using a SAFT-VR thermodynamic description: High-pressure separation of carbon dioxide and methane. Computers & Chemical Engineering 35, 474–491.
Process Systems Enterprise, 1997-2015. gPROMS Model Builder 4.1.0.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50018-7

Multiobjective Retrofitting Methodology for Manufacturing Processes of Active Pharmaceutical Ingredients Considering Environment, Health and Safety Risks
Hirokazu Sugiyama,a,* Akiko Segawa,a Satoshi Yoshikawa,b Takayuki Toyama,b Hayao Nakanishi,c Emi Kikuchi-Uehara,a Masahiko Hiraoa
a Department of Chemical System Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 113-8656 Tokyo, Japan
b Production Technology Department, Shionogi & Co., Ltd, 7 Moriyama, Nishine, Kanegasaki-cho, Isawa-gun, Iwate, 029-4503, Japan
c Kanegasaki Plant, Shionogi & Co., Ltd, 7 Moriyama, Nishine, Kanegasaki-cho, Isawa-gun, Iwate, 029-4503, Japan
[email protected]

Abstract In the manufacturing processes of active pharmaceutical ingredients (APIs), large amounts of chemicals are used per unit amount of product, and the consequent risks need to be considered. This work presents a method for retrofitting API manufacturing processes considering risks to environment, health and safety (EHS) as well as economic and technical aspects. The method consists of four steps: (I) process mass analysis, (II) EHS risk evaluation, (III) generation of improvement options, and (IV) multiobjective evaluation and interpretation. A case study was performed on an industrial manufacturing plant of an API, which includes reaction, extraction, crystallization and centrifugation and involves a variety of process chemicals. By following the method step-by-step, a promising option could be identified which can effectively reduce EHS risks with reasonable effort in the economic and regulatory aspects. Keywords: integrated process design, multiobjective decision-making, pharmaceutical manufacturing, process retrofitting, process safety

1. Introduction
The pharmaceutical industry, where inherently hazardous materials are used, recognizes the importance of risks related to environment, health and safety (EHS). Various contributions have been made so far, such as the determination of acceptable daily exposure limits (Sargent et al., 2013), solvent selection methods (Henderson et al., 2011), and EHS performance indicators within organizations (e.g., Dodsworth et al., 2007). To support the EHS-conscious design of chemical and pharmaceutical production processes, Koller et al. (2000) proposed an EHS hazard assessment method for substances. Sugiyama et al. (2008) adapted this practical index-based method to a multiobjective design framework for chemical processes with a focus on the early design phases. Adu et al. (2008) presented a comprehensive comparison of various EHS methods in terms of the issues covered (such as explosion), the parameters considered (such as flash point), and the calculation procedures. More recently, Manipura et al. (2013) presented a design method for the pharmaceutical and fine chemical industries which considers EHS risks as the core evaluation criteria. However, methodological support for the retrofit of existing plants, where accidents could actually occur, is still in its infancy.
In this work, we present an EHS-conscious retrofitting methodology for the manufacturing processes of active pharmaceutical ingredients (APIs). APIs are the key components of drug products, and their manufacturing processes typically involve large amounts of solvents, which need to be investigated from the EHS perspective. The method consists of four steps: (I) process mass analysis, (II) EHS risk evaluation, (III) generation of improvement options, and (IV) multiobjective evaluation and interpretation. A case study was performed on an industrial production process of an API by following the four-step methodology in order to demonstrate the effectiveness of the proposed method.

2. Method
Figure 1 shows the schematic of the proposed methodology, which is explained step-by-step in the following sections.
2.1. Step I: Process mass analysis
In the first step, the entire API production process is investigated, which typically consists of reaction, extraction and crystallization. From the manufacturing recipes as well as production records, the mass $m_i(t)$ of substance $i$ at time $t$ [s] is obtained, which serves as the basis of the EHS evaluation.
2.2. Step II: EHS risk evaluation
The aim of this step is to quantify the EHS risk of the process. Here, we propose a novel EHS evaluation indicator, which calculates the risk score $R$ [kg] of category $j$ as shown in Eq. (1):

$$R_j(t) = S_j(t) \cdot O(t) \cdot D(t) \cdot T_j \tag{1}$$

where $S$ [kg], $O$ [-], $D$ [-] and $T$ [-] represent the scores of severity, occurrence, detectability and technical countermeasures, respectively. The larger the value of $R$, the riskier the process. This way of multiplying severity, occurrence and detectability stems from failure mode and effect analysis (FMEA), a well-established risk management method in the field of quality engineering. The term $T$ enables the incorporation of local devices, such as the earthing of the equipment or the hand gloves of the operators during the operation, into the evaluation score. As to the suffix $j$, the following categories, which were covered by Koller et al. (2000), are considered in this method: fire/explosion, reaction/decomposition and acute toxicity for safety; chronic toxicity and irritation for health; and water- and air-mediated effects, solids, degradation and accumulation for environment.
2.2.1. Severity
The severity score $S$ is calculated using Eq. (2):

$$S_j(t) = \sum_i m_i(t) \cdot I_{i,j} \tag{2}$$
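A minimal numerical sketch of Eqs. (1)-(2) is given below; the mass profiles, hazard indices and O/D/T scores are random or invented placeholders rather than case-study data.

```python
# Numerical sketch of Eqs. (1)-(2): R_j(t) = S_j(t) * O(t) * D(t) * T_j with
# severity S_j(t) = sum_i m_i(t) * I_{i,j}. All values below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_t, n_sub, n_cat = 100, 3, 2            # time points, substances, EHS categories
m = rng.random((n_t, n_sub)) * 100.0     # m[t, i]: mass of substance i at t [kg]
I = rng.random((n_sub, n_cat))           # I[i, j]: hazard index of i in category j
O = np.full(n_t, 0.1); O[40:55] = 0.6    # occurrence: a port opened mid-operation
D = np.full(n_t, 0.1); D[40:55] = 1.0    # detectability: no sensor while port open
T = np.array([0.4, 0.8])                 # T[j]: technical countermeasures per category

S = m @ I                                # S[t, j] = sum_i m[t, i] * I[i, j]  (Eq. 2)
R = S * (O * D)[:, None] * T[None, :]    # R[t, j]                            (Eq. 1)
print("peak risk score per category [kg]:", R.max(axis=0))
```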

Figure 1 Schematic overview of the proposed methodology: Step 1, process mass analysis (collection of process information); Step 2, EHS risk evaluation (calculation of the S, O, D and T scores and of the R score); Step 3, generation of improvement options (focus on the peaks of the R score); Step 4, multiobjective evaluation and interpretation (calculation of evaluation indicators; identification of the set of promising options).

Table 1 Scoring table for occurrence O.

Criteria of openness                 Value
Open                                 1.0
Open with a temporary lid            0.8
Charging port (diameter ≥ 10 cm)     0.6
Charging port (diameter < 10 cm)     0.4
Glove box                            0.2
Closed                               0.1

where $I_{i,j}$ [-] represents the hazard index value of substance $i$ in category $j$, as proposed by Koller et al. (2000). This method translates the physical property values of chemicals into non-dimensional index values in the range between 0 (i.e., least hazardous) and 1 (i.e., most hazardous).
2.2.2. Occurrence
The score $O$ [-] represents the possibility of an unwanted situation occurring at time $t$, with a value between 0.1 (i.e., least likely to occur) and 1 (i.e., most likely to occur). The basis for calculating this semi-qualitative score is the openness of the equipment, e.g., vessels for reaction, extraction or crystallization, which could be opened for different purposes, e.g., the charging of solvents. We defined the scoring table shown in Table 1. According to the table, when the operation is carried out in closed equipment, a score of 0.1 is given, whereas the highest score of 1 is given when the equipment is open. Intermediate scores are given for operations using glove boxes, ports and lids, according to the degree of openness. The score $O$ is time-dependent and can capture subtle operations that could eventually lead to an accident, e.g., the opening of a port for charging solvents during extraction.
2.2.3. Detectability
The score $D$ [-] accounts for the ease of detecting an unwanted situation from the EHS perspective, based on the presence of measurement devices and the state of the materials in the operation. Similarly to the score $O$, we defined a scoring table, with values ranging from 0.1 (i.e., most capable of detection) to 1 (i.e., least capable of detection). The best score of 0.1 is given when all states such as temperature, pressure or pH are measured in real time. The worst score of 1 is given when gas is present in the operation and no measurement device is installed. Intermediate scores are also defined, e.g., for processes with visual observation of potentially dangerous situations.


2.2.4. Technical factors
Companies install various technical devices locally in the production area to prevent an accident from occurring and/or to reduce the magnitude of an accident. Earthing devices of vessels, masks, hand gloves and local ventilators are examples, and often multiple devices are used in a combined manner. The score $T_j$ [-] indicates the degree of installation of such countermeasures, with values from 0.1 (i.e., most preferable) to 1 (i.e., least preferable). Here again, we defined a scoring table similar to Table 1. The suffix $j$ is present in order to differentiate the purpose as well as the characteristics of the technical devices installed.
2.3. Step III: Generation of improvement options
The purpose of Step III is to generate different options for process improvement, i.e., options that could reduce the risk score $R$ obtained in the previous step.
2.4. Step IV: Multiobjective evaluation and interpretation
In this last step, the improvement options generated in Step III are compared in a multiobjective manner. In this method, the following five axes are adopted for the evaluation: cost, effort in change management, and reduction potentials in safety, health and environmental risks. The cost can be either investment or operating cost, or a combination of both in the form of net present value. The second axis is unique to the pharmaceutical industry, where any change of a manufacturing process needs to follow a defined procedure under good manufacturing practice (GMP). For the reduction potential of EHS risks, the difference in the score $R$ before and after the improvement can be applied directly. The score $R$ in each category can be summed within each aspect of safety, health and environment, i.e., in the multiobjective evaluation, three $R$ scores are compared before and after the improvement. As a result, a set of Pareto-optimal options can be obtained as promising options, as sketched below.
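As a small illustration of this screening step, the snippet below filters Pareto-optimal options when all five axes are expressed as higher-is-better scores; the option names and scores are invented for illustration, not the case-study data.

```python
# Sketch of the Step IV screening: retain Pareto-optimal options when all five
# axes (cost, change management, safety, health, environment) are scaled so
# that higher is better.
options = {
    "current": [1.000, 1.0, 0.0, 0.0, 0.0],
    "-3.3%":   [0.990, 1.0, 0.2, 0.2, 0.2],
    "-17%":    [0.995, 0.5, 0.6, 0.6, 0.6],
    "-33%":    [0.300, 0.5, 1.0, 1.0, 1.0],
}

def dominates(v, w):
    """True if w is at least as good as v everywhere and strictly better once."""
    return all(b >= a for a, b in zip(v, w)) and any(b > a for a, b in zip(v, w))

pareto = [name for name, v in options.items()
          if not any(dominates(v, w) for other, w in options.items() if other != name)]
print(pareto)   # all four survive: strictly speaking, every option is Pareto optimal
```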

3. Case Study
A case study was performed on an industrial production process of an API. The process consists mainly of reaction, extraction and crystallization operations, and involves different types of organic solvents.
3.1. Step I: Process mass analysis
The process information and data were collected in order to calculate $m_i(t)$, the mass profile of substance $i$ over the entire operation. The peak of the overall mass profile was observed in the extraction process, where an organic solvent is charged into the vessel as the extraction solvent.
3.2. Step II: EHS risk evaluation
Index values $I_{i,j}$ were calculated for the substances in the process for the categories listed in section 2.2, and the severity score $S_j(t)$ was obtained using Eq. (2). The scores of $O$, $D$ and $T$ were also calculated, using scoring tables like Table 1 together with the operation and facility information. Figure 2(a) shows the calculated result for the score $O$. It takes values between 0.1 and 1, and the peaks are found in and around the reaction, extraction and crystallization operations. Figure 2(b) shows the profile of the score $R$ for the category of fire/explosion. The reaction and extraction operations, where large quantities of solvents are used, showed generally high scores together with a couple of peaks.


Figure 2 Calculation result of (a) occurrence score O [-] and (b) risk score R [kg] for fire/explosion, plotted against time t over the reaction, filtration, extraction, crystallization, crystal form conversion and centrifugation operations.

Figure 3 Multiobjective evaluation result of process options (current, -3.3 %, -17 %, -33 %) considering reduction of the solvent amount used in the extraction operation, on the axes of cost, change management, safety, health and environment.

3.3. Step III: Generation of improvement options
We focused on the extraction operation, which showed two peaks of the score $R$ as shown in Figure 2(b), and considered the reduction of an organic solvent as an improvement idea. The current status was compared with three reduction options, i.e., minus 3.3 %, 17 % and 33 %. With the reduction, the $R$ scores would decrease; however, the extraction performance, and thus the process yield, would also become lower than the current status. In order to quantify this balance, we performed experiments to measure the extraction performance at different solvent amounts.
3.4. Step IV: Multiobjective evaluation and interpretation
Figure 3 shows the multiobjective evaluation of the process options considered. On the axes of cost and EHS risks, the evaluation results are normalized, with the best option given the maximum and the worst the minimum value. On the axis of change management, minimum, intermediate and maximum values are given according to the required workload of the regulatory work. As to the results, the cost aspect, which considers mainly operating cost, is strongly affected by the extraction performance, and the cost increases considerably for the minus 33 % option. The maximum score was given to the minus 17 % option, but practically there is little difference among the current, minus 3.3 % and minus 17 % options. In change management, the minus 17 % and 33 % options require updates of the


regulatory documents, and thus intermediate scores are given. As expected, the EHS risk scores are reduced along with the decrease of the solvent amount. Strictly speaking, all options are Pareto optimal; in the practical sense, however, the minus 3.3 % option appears attractive because it can reduce the EHS risk scores without damaging the cost or requiring substantial change-management work.

4. Conclusions and outlook
We presented a novel method for retrofitting API manufacturing processes considering EHS risks as well as economic and technical aspects. The method consists of four steps: process mass analysis, EHS risk evaluation, generation of improvement options, and multiobjective evaluation and interpretation. A novel and pragmatic risk indicator is proposed, which incorporates the degrees of severity, occurrence, detectability and technical countermeasures into the assessment. The method was applied in an industrial case study, and its effectiveness was demonstrated. As future work, an alternative indicator that is less sensitive to the process mass profile than the current version needs to be developed. Also, more examples of multiobjective evaluation need to be illustrated in order to increase the applicability of the methodology.

Acknowledgements Financial support by Grant-in-Aid for Young Scientists (B) No. 26820343 from the Japan Society for the Promotion of Science as well as Research Grant 2014 from Nagai Foundation Tokyo are gratefully acknowledged.

References
I. K. Adu, H. Sugiyama, U. Fischer, K. Hungerbühler, 2008, Comparison of methods for assessing environmental, health and safety (EHS) hazards in early phases of chemical process design, Process Saf. Environ. Protec., 86, 77–93.
M. Dodsworth, K. E. Connelly, C. J. Ellett, P. Sharratt, 2007, Organizational climate metrics as safety, health and environment performance indicators and an aid to relative risk ranking within industry, Process Saf. Environ. Protec., 85, 59–69.
R. K. Henderson, C. Jiménez-González, D. J. C. Constable, S. R. Alston, G. G. A. Inglis, G. Fisher, J. Sherwood, S. P. Binks, A. D. Curzons, 2011, Expanding GSK's solvent selection guide – embedding sustainability into solvent selection starting at medicinal chemistry, Green Chem., 13, 854–862.
G. Koller, U. Fischer, K. Hungerbühler, 2000, Assessing safety, health, and environmental impact early during process development, Ind. Eng. Chem. Res., 39, 960–972.
A. Manipura, E. B. Martin, G. A. Montague, P. N. Sharratt, I. Houson, 2013, Risk-based decision making in early chemical process development of pharmaceutical and fine chemical industries, Comput. Chem. Eng., 55, 71–82.
E. V. Sargent, E. Faria, T. Pfister, R. G. Sussman, 2013, Guidance on the establishment of acceptable daily exposure limits (ADE) to support risk-based manufacture of pharmaceutical products, Regul. Toxicol. Pharmacol., 65, 242–250.
H. Sugiyama, U. Fischer, M. Hirao, K. Hungerbühler, 2008, Decision-making framework for chemical process design including different stages of environmental, health and safety (EHS) assessment, AIChE Journal, 54, 1037–1053.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50019-9

Performance Evaluation of Unsteady Processes with Dynamic and Temporal Flexibility Indices
Ruei-Shing Wu, Yi-Chung Kuo, Chuei-Tin Chang*
Department of Chemical Engineering, National Cheng Kung University, 1 University Road, Tainan 70101, Taiwan
[email protected]

Abstract Two quantitative performance measures, i.e., the dynamic and temporal flexibility indices, have been devised in the past to characterize unsteady operations quantitatively. However, their significance has not been accurately interpreted, their roles have not been clarified, and the theoretical foundation of their computation strategies has not been established rigorously. In this paper, the necessary conditions of a critical dynamic programming model are derived first, and the trapezoidal integration formula is then applied to discretize the resulting constraints for computing the flexibility measures. Two case studies are reported to demonstrate the feasibility of this approach. Keywords: Flexibility index; Trapezoidal rule; Active set method; KKT conditions.

1. Introduction Dealing with uncertainties is one of the practical issues in chemical process design. The ability of a process to maintain feasible operation despite uncertain deviations from the nominal states is often referred to as its operational flexibility. Various approaches to facilitate quantitative analysis have already been proposed. The original steady-state flexibility index was defined by Swaney and Grossmann (1985) for use as a gauge of the feasible region in the parameter space. Since the steady-state material-and-energy balances were used as the model constraints, this metric can be viewed as a performance indicator of continuous processes. Dimitriadis and Pistikopoulos (1995) later noted that the operational flexibility of a dynamic system must be evaluated differently. Specifically, a so-called dynamic flexibility index should be computed on the basis of a system of differential algebraic equations (DAEs). On the other hand, while an ill-designed system may become inoperable due to instantaneous disturbances, the cumulative effects of temporary parameter variations may also result in serious consequences. To address this important issue, a mathematical programming model was proposed by Adi and Chang (2013) for evaluating the corresponding temporal flexibility index. Since the dynamic and temporal flexibility indices complement one another in fully characterizing a given unsteady process, it is desirable to consider both in a design. However, the computation and application strategies of these indices have not yet been well developed: their significance has not been accurately interpreted, their roles have not been clarified, and their numerical values cannot be determined efficiently with the existing algorithms. Further studies are clearly needed to address these implementation issues.


2. Process model
All equality constraints in a dynamic model can be expressed in a general form as

$$h_i\big(d, z(t), x(t), \dot{x}(t), \theta(t)\big) = 0, \qquad x(0) = x_0 \tag{1}$$

where $i$ is a numerical constraint label; $d$ represents a constant vector in which all design specifications are stored; $z$ denotes the vector of all adjustable control variables; $x$ is the vector of all state variables; and $\theta$ denotes the vector of all uncertain parameters. Notice that $h_i$ is essentially a functional of various functions of time, and it is usually established to model the dynamic behaviour of an unsteady process over a given time horizon, i.e., $0 < t \le H$. Similarly, the inequality constraints can be written as

$$g_j\big(d, z(t), x(t), \theta(t)\big) \le 0 \tag{2}$$

where $j$ is the numerical label of a specific inequality constraint and $g_j$ is also a functional defined over the aforementioned horizon. Note that equation (2) is often adopted to reflect actual physical and/or chemical limits. The upper and lower bounds of the uncertain parameters should also be incorporated in the present model as

$$\theta^N(t) - \Delta\theta^-(t) \le \theta(t) \le \theta^N(t) + \Delta\theta^+(t) \tag{3}$$

These bounds may be extracted directly from historical operation records. Let us consider rainfall data as a specific example for illustration convenience. By setting $H$ to 24 hours and $t$ to the hour of the day, $\theta^N(t)$, $\Delta\theta^-(t)$ and $\Delta\theta^+(t)$ may be established according to the largest range of hourly precipitations collected every day over months. If the cumulated quantities of the above parameters over time are also recorded, the following extra inequalities may also be adopted to better characterize the uncertainties:

$$-\Delta\Theta^- \le \int_0^H \big[\theta(\tau) - \theta^N(\tau)\big]\, d\tau \le +\Delta\Theta^+ \tag{4}$$

Again for illustration purposes, let us consider the climate statistics. The constant values of $\Delta\Theta^+$ and $\Delta\Theta^-$ in this case can be estimated according to the daily rainfall totals, which are usually also available. Since the uncertain parameters usually do not stay at the upper (or lower) limits throughout the entire horizon, one would expect $\Delta\Theta^- \le \int_0^H \Delta\theta^-(\tau)\, d\tau$ and $\Delta\Theta^+ \le \int_0^H \Delta\theta^+(\tau)\, d\tau$.
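To make the rainfall illustration concrete, the following sketch derives the hourly envelopes of equation (3) and the cumulative bounds of equation (4) from a (randomly generated) record matrix; it also verifies the stated inequality between the cumulative and hourly bounds.

```python
# Sketch of extracting the bounds in Eqs. (3)-(4) from records: per-hour
# envelopes give theta^N +/- dtheta, the spread of daily totals gives the
# cumulative bounds. The data here is random filler, not real rainfall.
import numpy as np

records = np.random.gamma(2.0, 1.5, size=(90, 24))   # 90 days x 24 hourly values

theta_hi = records.max(axis=0)          # theta^N + dtheta^+ per hour
theta_lo = records.min(axis=0)          # theta^N - dtheta^- per hour
theta_nom = records.mean(axis=0)        # nominal hourly profile theta^N

daily = records.sum(axis=1)             # daily totals, the data behind Eq. (4)
nominal_total = theta_nom.sum()
dTheta_plus = daily.max() - nominal_total
dTheta_minus = nominal_total - daily.min()

# Consistent with the text: the cumulative bounds are tighter than the
# integrated hourly envelopes, since extremes rarely persist all day.
assert dTheta_plus <= (theta_hi - theta_nom).sum() + 1e-9
assert dTheta_minus <= (theta_nom - theta_lo).sum() + 1e-9
```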

3. Dynamic and temporal flexibility indices
Let us first consider a feasibility functional whose value can be determined as follows:

$$\psi\big(d, \theta(t)\big) = \min_{x(t),\,z(t)} \max_{j,\,t}\; g_j\big(d, z(t), x(t), \theta(t)\big) \tag{5}$$

subject to the equality constraints given in equation (1). Note that the given system should always be operable if $\psi \le 0$. To facilitate evaluation of the dynamic flexibility index, let us introduce a scalar variable $\delta$ to adjust the ranges of the transient variations in equation (3), i.e.


$$\theta^N(t) - \delta\,\Delta\theta^-(t) \le \theta(t) \le \theta^N(t) + \delta\,\Delta\theta^+(t) \tag{6}$$

The corresponding performance measure, i.e., the dynamic flexibility index (Dimitriadis and Pistikopoulos, 1995), can be computed accordingly with the following model:

$$FI_d = \max \delta \tag{7}$$

subject to equation (6) and

$$\max_{\theta(t)} \psi\big(d, \theta(t)\big) \le 0 \tag{8}$$

To take into account the accumulated effects of uncertain parameters in non-steady operations, the aforementioned scalar variable $\delta$ can be incorporated into equation (4) instead, i.e.

$$-\delta\,\Delta\Theta^- \le \int_0^H \big[\theta(\tau) - \theta^N(\tau)\big]\, d\tau \le +\delta\,\Delta\Theta^+ \tag{9}$$

Thus, the corresponding temporal flexibility index (Adi and Chang, 2013) is:

$$FI_t = \max \delta \tag{10}$$

subject to equations (3), (8) and (9).

4. Karush–Kuhn–Tucker conditions
The mathematical program for computing the aforementioned feasibility functional can be posed alternatively by introducing an extra scalar variable $u(t)$, i.e.

$$\psi\big(d, \theta(t)\big) = \min_{x(t),\,z(t),\,u(t)} u(t)\Big|_{t=H} \tag{11}$$

subject to the equality constraints in equation (1), and also

$$\dot{u}(t) = 0 \tag{12}$$

$$g_j\big(d, z(t), x(t), \theta(t)\big) \le u(t) \tag{13}$$

To facilitate derivation of the Karush–Kuhn–Tucker (KKT) conditions for this functional optimization problem, let us rewrite equation (1) alternatively as

$$\varphi\big(d, z(t), x(t), \theta(t)\big) - \dot{x}(t) = 0 \tag{14}$$

An aggregated objective functional can then be constructed by introducing Lagrange multipliers to incorporate all constraints, i.e.

$$L = u(H) + \int_0^H \Big\{ \mu_1(t)\,[0 - \dot{u}] + \mu_2^T(t)\,[\varphi - \dot{x}] + \lambda^T(t)\,[g - u\mathbf{1}] \Big\}\, dt \tag{15}$$

where the multipliers for all equality constraints are real while those for the inequalities should be nonnegative. By taking the first variation of $L$ and then setting it to zero, one can produce the following four groups of necessary conditions:
(i) $\mu_1(0) = 0$, $\mu_1(H) = 1$, $x(0) = x_0$, $\mu_2(H) = 0$;
(ii) $\dot{\mu}_2^T = -\mu_2^T \left(\dfrac{\partial \varphi}{\partial x}\right) - \lambda^T \left(\dfrac{\partial g}{\partial x}\right)$, $\quad \dot{\mu}_1 = \lambda^T \mathbf{1}$;
(iii) $\mu_2^T \left(\dfrac{\partial \varphi}{\partial z}\right) + \lambda^T \left(\dfrac{\partial g}{\partial z}\right) = \mathbf{0}^T$;
(iv) $\dot{x} = \varphi$, $\dot{u} = 0$, $\lambda^T (g - u\mathbf{1}) = 0$, $\lambda \ge 0$.
Since at least one of the inequality constraints must be active at some time instant when the extremum is reached, it is necessary to force $u(t)$ to be zero over $[0, H]$. Thus,


the conditions in (iv) can be modified as follows:
(v) $\dot{x} = \varphi$, $\dot{u} = 0$, $\lambda^T g = 0$, $\lambda \ge 0$, $g \le 0$.
Therefore, the dynamic and temporal flexibility indices can be determined, respectively, by minimizing $\delta$ subject to the common conditions specified in (i)–(iii) and (v), and also the aforementioned two separate sets of constraints imposed upon the uncertain parameters, i.e., equation (6) for computing the dynamic flexibility index and equations (3) and (9) for the temporal flexibility index.

5. Trapezoidal integration rule
To estimate the integral of the function $\varphi$, let us divide the horizon $[0, H]$ into $M$ equal intervals and label their end points sequentially as $p = 1, 2, 3, \ldots, M$. Thus, the length of each time interval is $H/M$ (denoted as $\Delta t$). Applying the trapezoidal rule results in

$$x(t_p) = x(t_{p-1}) + \frac{\Delta t}{2}\Big[ \varphi\big(d, x(t_{p-1}), z(t_{p-1}), \theta(t_{p-1})\big) + \varphi\big(d, x(t_p), z(t_p), \theta(t_p)\big) \Big] \tag{16}$$

where $x(t_0) = x(0) = x_0$. Note that the inequality constraints in equation (2) can also be discretized at the aforementioned points.
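A minimal sketch of the discretization in Eq. (16) for a scalar test equation is shown below; the implicit trapezoidal step is resolved here by fixed-point iteration, while in the flexibility models the discretized equations instead enter the optimization problem as equality constraints.

```python
# Sketch of Eq. (16): implicit trapezoidal stepping of a scalar ODE x' = phi(x).
import math

def trapezoid_march(phi, x0, H, M, n_fp=50):
    dt = H / M
    xs = [x0]
    for _ in range(M):
        x_prev = xs[-1]
        x_new = x_prev                        # initial guess for the implicit step
        for _ in range(n_fp):                 # fixed-point iteration on Eq. (16)
            x_new = x_prev + 0.5 * dt * (phi(x_prev) + phi(x_new))
        xs.append(x_new)
    return xs

xs = trapezoid_march(lambda x: -0.1 * x, x0=5.0, H=50.0, M=100)
print(xs[-1], 5.0 * math.exp(-0.1 * 50.0))    # trapezoidal result vs exact decay
```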

6. Case studies Let us consider the buffer tank in Figure 1. The corresponding dynamic model can be written as dh (17) A = θ (t ) − k h dt

where $h$ denotes the height of the liquid level (m); $A$ (= 5 m²) is the cross-sectional area of the tank; $k$ (= $\sqrt{5}/10$ m$^{5/2}$ min$^{-1}$) is a proportionality constant; and $\theta$ denotes the feed flow rate (m³ min⁻¹), which is treated as the only uncertain parameter in the present example. To fix ideas, the following upper and lower limits are adopted in the flexibility analysis: (a) the height of the tank is 10 m, i.e., $h \le 10$; (b) due to the operational requirement of the downstream unit(s), the outlet flow rate of the buffer tank must be kept above $\sqrt{5}/10$ m³ min⁻¹, so the minimum allowable height of the liquid level is 1 m, i.e., $1 \le h$; (c) the time horizon covers a period of 800 minutes, i.e., $0 \le t \le 800$.
6.1. Continuous operation
Let us assume that, in the continuous operation under consideration, the nominal steady-state value of the feed rate is $\theta^N(t) = 0.5$ m³ min⁻¹ and the anticipated positive and negative deviations are set at $\Delta\theta^+(t) = \Delta\theta^-(t) = 0.5$ m³ min⁻¹. Therefore, the range of the uncertain parameter is $0 \le \theta(t) \le 1$ and the nominal height of the liquid level at steady state is 5 m.

Figure 1. A buffer vessel.
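The reported dynamic index can be cross-checked with a simple simulation-based sketch: because the tank dynamics are monotone in the feed rate, the worst-case disturbances for the dynamic index are the constant extreme feeds, so a bisection on δ suffices for this example. This is only a check on the example, not the KKT-based computation proposed in the paper.

```python
# Check of the reported FI_d for the continuous case: the tank dynamics
# A*dh/dt = theta - k*sqrt(h) are monotone in theta, so the binding worst
# cases are the constant extreme feeds 0.5 +/- 0.5*delta; bisect on delta.
import math

A, k, H = 5.0, math.sqrt(5.0) / 10.0, 800.0  # area [m2], valve constant, horizon [min]

def level_ok(theta, h0=5.0, dt=0.1):
    """Forward-Euler integration of the tank; True if 1 <= h <= 10 throughout."""
    h = h0
    for _ in range(int(H / dt)):
        h += dt * (theta - k * math.sqrt(h)) / A
        if not 1.0 <= h <= 10.0:
            return False
    return True

def feasible(delta):
    return level_ok(0.5 + 0.5 * delta) and level_ok(0.5 - 0.5 * delta)

lo, hi = 0.0, 1.0
for _ in range(30):                          # bisection on delta
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(f"FI_d ~ {lo:.3f}")                    # close to the reported 0.415
```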


The proposed computation procedure yields a dynamic flexibility index of 0.415. If it is not feasible to improve the control quality of the upstream feed stream, the desired operational target, i.e., a dynamic flexibility index ($FI_d$) of 1, can be realized by increasing the buffer capacity. In particular, a larger storage tank with a cross-sectional area of 61 m² can be adopted to withstand all possible disturbances allowed by equation (3). Note that, since a relatively large tank is needed to achieve a dynamic flexibility index of 1, the corresponding investment may not be justifiable. To facilitate computation of the temporal flexibility index, let us assume that the accumulated positive and negative deviations in liquid volume are $\Delta\Theta^+ = \Delta\Theta^- = 62.5$ m³. By solving the proposed model, it can be found that $FI_t = 0.444$, which implies that the given system can only withstand the most severe disturbance for a shorter-than-targeted period of time. If it can be predicted on the basis of operating experience that the largest disturbances rarely last for the entire horizon, then a less stringent design criterion can be established according to the temporal flexibility index. Notice that the cross-sectional area for $FI_t = 1$ is only 11.3 m² in this case.
6.2. Periodic operation
Let us next assume that, over a single period of 800 minutes in the cyclic operation under consideration, the anticipated positive and negative deviations are $\Delta\theta^+(t) = \Delta\theta^-(t) = 0.1$ m³ min⁻¹ throughout, and the nominal feed rate $\theta^N(t)$ (m³ min⁻¹) is given piecewise by

$$\theta^N(t) = \begin{cases} 0.5 & 0 \le t \le 100 \text{ min} \\ 0.6 & 100 \le t \le 200 \text{ min} \\ 0.7 & 200 \le t \le 250 \text{ min} \\ 0.8 & 250 \le t \le 300 \text{ min} \\ 0.6 & 300 \le t \le 350 \text{ min} \\ 0.5 & 350 \le t \le 450 \text{ min} \\ 0.4 & 450 \le t \le 500 \text{ min} \\ 0.2 & 500 \le t \le 600 \text{ min} \\ 0.6 & 600 \le t \le 700 \text{ min} \\ 0.5 & 700 \le t \le 800 \text{ min} \end{cases} \tag{18}$$

To facilitate computation of the temporal flexibility index, let us assign the accumulated positive and negative deviations in liquid volume to be $\Delta\Theta^+ = \Delta\Theta^- = 20.0$ m³. Based on an initial height of 5 m, the corresponding dynamic flexibility index was found to be 0.368 and the temporal flexibility index 0.185. However, to achieve an index value of 1, it is not necessary to raise the cross-sectional area of the buffer tank proportionally: the required areas for these two criteria are 8.25 and 6.95 m², respectively. Note also that the former index ($FI_d = 0.368$) indicates that the buffer operation can be made feasible if the parameter range can be narrowed to $\theta^N(t) - 0.368\,\Delta\theta^-(t) \le \theta(t) \le \theta^N(t) + 0.368\,\Delta\theta^+(t)$; Figure 2 shows the simulation results of the worst-case scenarios. One can clearly observe that, if the feed rate is maintained at the upper and lower limits of the narrowed range, respectively, the liquid level stays within the allowed range at all times throughout the given horizon. On the other hand, the time interval associated with the latter index ($FI_t = 0.185$) can be determined to be between 562 and 599 minutes, and Figure 3 shows the simulation results of the corresponding worst-case scenarios.


Figure 2. Simulation results of the worst-case scenarios in interval [0, 800] for periodic operation under the constraint of the narrowed parameter range (water level in m versus time in min; upper-limit, lower-limit and nominal trajectories).


Figure 3. Simulation results of the worst-case scenarios in interval [562, 599] for periodic operation under the constraint of original parameter range.

7. Conclusions By rigorously deriving the KKT conditions of a dynamic programming model, a systematic methodology has been developed in this work to compute the dynamic and temporal flexibility indices. One can also see that, depending upon the availability of historical data and the intrinsic nature of uncertainty in the particular application, either FId or FIt (or both) can be used to represent the operational flexibility of an unsteady process.

References
V. S. K. Adi and C. T. Chang, 2013, A Mathematical Programming Formulation for Temporal Flexibility Analysis, Comput. Chem. Eng., 57, 151.
V. D. Dimitriadis and E. N. Pistikopoulos, 1995, Flexibility Analysis of Dynamic Systems, Ind. Eng. Chem. Res., 34, 4451.
R. E. Swaney and I. E. Grossmann, 1985, An Index for Operational Flexibility in Chemical Process Design. Part I: Formulation and Theory, AIChE J., 31, 621.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50020-5

Solution of Chance-Constrained Mixed-Integer Nonlinear Programming Problems
Erik Esche,a,* David Müller,b Sebastian Werk,c Ignacio E. Grossmannd and Günter Woznya
a Process Dynamics and Operations Group, Technische Universität Berlin, Sekr. KWT-9, Str. des 17. Juni 135, D-10623 Berlin, Germany
b Evonik Technology & Infrastructure GmbH, Process Technology & Engineering, CAPE & Automation, Paul-Baumann-Str. 1, D-45772 Marl, Germany
c Complevo GmbH, Bismarkstr. 10-12, D-10625 Berlin, Germany
d Carnegie Mellon University, Dept. of Chemical Engineering, Doherty Hall, 5000 Forbes Avenue, 15213 Pittsburgh, PA, United States of America
[email protected]

Abstract In this contribution, a framework for the solution of chance-constrained MINLP problems is described and tested on process synthesis problems with strongly nonlinear and non-convex subsystems. The framework can handle the appearance of non-monotonic relationships between uncertain inputs and chance-constrained outputs as well as the appearance of multiple roots in the chance constraint evaluation, and it performs extensive result recycling to ensure robust performance despite the structural changes implemented by the MINLP optimization solver. The framework can be interfaced with optimization and simulation solvers programmed in C++, Fortran, and Python. As a first application, the process synthesis of the oxidative coupling of methane, with a focus on the removal of carbon dioxide from the product stream, is investigated. Keywords: Chance Constraint, MINLP, Optimization, Oxidative Coupling of Methane

1. Introduction and Motivation The investigation of new process concepts is a time-consuming and challenging task, especially if new chemical compounds and novel process units are introduced. Usually, many different options exist to perform a single reaction or separation task, which leads to numerous alternative flowsheets to be evaluated. On top of that, the lack of knowledge on novel process equipment with new compounds seldom allows for a rigorous description of these flowsheets with low uncertainty. An example is the process synthesis for the oxidative coupling of methane (OCM), which directly converts methane (CH4) to ethene (C2H4) (Keller and Bhasin, 1982). Depending on catalyst type, reactor design, and operation conditions the outlet concentrations of the reaction step can vary heavily (Godini et al., 2013). Consequently, the design of the product separation needs to be adjusted, in turn influencing recycles to the reaction section. The main byproduct of the OCM reaction is carbon dioxide (CO2). Apart from the classical path to remove CO2 via amine-based absorption, adsorption and gas permeation can also be applied. Of course, different scrubbing liquids, adsorbents, and membrane materials further complicate this synthesis problem. Whilst rigorous models can be derived for all structural options in this process concept, the parameters therein imply a certain amount of uncertainty given that insufficient experimental data is available.


1.1. Process Synthesis under Uncertainty
The synthesis problem described above for the OCM process concept is representative of process synthesis under uncertainty in general. Within the scope of this work it is considered as a class of optimization problems with many nonlinear equations, sometimes differential equations in time or space, and a comparatively lower number of structural decisions in the form of binary or integer variables. The uncertainty in these optimization problems typically lies in the parameters of the underlying nonlinear systems, the feed streams, and market conditions. In mathematical programming these optimization problems are formulated as mixed-integer nonlinear programming (MINLP) problems under uncertainty, wherein the complexity lies in the evaluation of the nonlinear subsystems and the incorporation of the uncertainty. In general terms, three different methods exist to formulate and solve MINLP problems under uncertainty: stochastic programming (Birge and Louveaux, 2011) with two or more stages, robust optimization (Ben-Tal et al., 2009), and probabilistic programming (Grossmann and Guillén-Gosálbez, 2010).
1.2. Chance Constraints
Within probabilistic programming, chance constraints are a well-known option to formulate and solve optimization problems under uncertainty, as introduced by Charnes and Cooper (1959). They consider the uncertainty of a model, in the form of parameter values and inputs, through probabilities of constraint satisfaction. An inequality constraint $h$ of the original optimization problem under uncertainty, with state variables $x$, continuous decisions $u$, structural decisions $y$, fixed parameters $p$, and uncertain parameters $\xi$, is replaced by a constraint calculating the probability of the inequality being adhered to and enforcing a lower bound $\alpha$ thereon:

$$h(x, u, y, p, \xi) \ge 0 \quad \Longrightarrow \quad \Pr\{h(x, u, y, p, \xi) \ge 0\} \ge \alpha \tag{1}$$

Joint chance constraints will be disregarded at this point. As a further simplification, this contribution is limited to uncertainty which adheres to a multivariate normal distribution with expected values µ and covariance matrix Cov: ξ ∼ N (µ, Cov).

2. Evaluation of Chance Constraints
The evaluation of chance constraints constitutes a multivariate integration over a probability density function $\varphi$. The actual complexity of the evaluation of these probabilities lies in the calculation of the upper bound of the innermost integral ($\xi_0^U$). In order to facilitate it, a number of assumptions are commonly made in existing implementations, e.g. (Werk et al., 2012b), which are challenged at this point.
1. $h$ is continuous for $\xi$ in $\mathbb{R}^{n+1}$: Given that the uncertainty space of $\xi$ is typically tight, this assumption might hold true in practice. However, it is unlikely that the root-finding problem is feasible for all combinations of $u$ and $y$. Consequently, $h$ might appear numerically discontinuous and appropriate measures need to be taken during the evaluation.
2. $h$ is monotonically decreasing in $\xi_0$: In previous publications, e.g. (Wendt et al., 2002), it is assumed that one of the uncertain parameters will show this relation with regard to $h$. Nevertheless, even if such an element of $\xi$ is initially determined, it could still be that for a different combination of $u$ and especially $y$ this relation reverses.
3. For any choice of values of $\xi_1, \ldots, \xi_n$ only a single, distinct value for $\xi_0^U$ can be found: Similar to the monotonic relation, multiple roots might appear intermittently, albeit infrequently. In any case, the evaluation should be able to identify cases with multiple roots and calculate the probability accordingly.

In the following, the evaluation of chance constraints is explained in detail, focusing on the measures implemented to alleviate the issues noted above. To facilitate the multivariate integration, an n-dimensional sparse grid is created for the uncertainty space of ξ1, ..., ξn. For every point of the sparse grid, the value of ξ0 that equates h to 0 is calculated. Details may be found in Werk et al. (2012a,b). The method described therein has been extended as part of this contribution to allow for the appearance of multiple roots, violations of the monotonicity, and numerical infeasibilities.

Relaxing Strict Monotonicity: The extended framework has two algorithms to deal with the relaxation of strict monotonicity. The first is a permutation of the uncertain parameter vector ξ until an element is found for ξ0 which shows strict monotonicity with regard to h. This is estimated by investigating all sparse grid positions at the initial point of u and y and comparing the values of h at ξ0 set to µ0, µ0 + 3σ0, and µ0 − 3σ0 in turn. The second operates at runtime of the optimization. Whenever the chance constraint evaluation is requested, every single sparse grid position for a fixed permutation of ξ is investigated. First, h is evaluated at the three positions of ξ0 mentioned above. Based thereon, a decision is made whether h is greater than zero within this interval, is less than zero, has a root within the 3σ interval, or is non-convex. In the first two cases, the probability for this particular sparse grid point is set to 1 or 0, respectively. In the third case, Newton's method is applied to determine the root; in the case of non-convexity, the interval is bisected and each half is re-evaluated with the same algorithm. This part returns a vector of root positions within the 3σ interval and the respective integration directions (ascending or descending relation to h). The resulting calculation of the probability is summed up in Eq. (2), wherein SG_j is the j-th sparse grid position, ID the vector of integration directions for the respective vector of roots ξ_{0,i}^j, and ω the weight for each sparse grid position.

Pr{h(x, u, y, p, ξ) ≥ 0} = Σ_{j=0}^{n_SG} ω(SG_j) · Σ_{i=1}^{n_ID − 1} ID_i · ∫_{−∞}^{ξ_{0,i}^j} ϕ(ξ0) dξ0,   where the ξ_{0,i}^j solve h(x, u, p, ξ = SG_j) = 0    (2)
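The per-grid-point root classification described above can be sketched as follows (an illustrative reconstruction, not the framework's code; for robustness the bracketed roots are located here with SciPy's brentq instead of the Newton iteration used in the framework):

```python
# Probe h at the interval ends and midpoint; classify the 3-sigma interval as
# feasible, infeasible, containing a bracketed root, or requiring bisection.
from scipy.optimize import brentq

def find_roots(h, lo, hi, depth=0, max_depth=8):
    """Return the roots of h in [lo, hi], allowing for multiple roots."""
    mid = 0.5 * (lo + hi)
    f_lo, f_mid, f_hi = h(lo), h(mid), h(hi)
    if f_lo > 0 and f_mid > 0 and f_hi > 0:
        return []                    # h > 0 at all probes: interval treated as feasible
    if f_lo < 0 and f_mid < 0 and f_hi < 0:
        return []                    # h < 0 at all probes: interval treated as infeasible
    if depth >= max_depth:
        return []
    roots = []
    for a, b, fa, fb in ((lo, mid, f_lo, f_mid), (mid, hi, f_mid, f_hi)):
        if fa * fb < 0:
            roots.append(brentq(h, a, b))   # single sign change: bracketed root
        else:
            roots.extend(find_roots(h, a, b, depth + 1, max_depth))  # bisect further
    return roots

# Example: two roots of h(xi0) inside mu0 +/- 3*sigma0
mu0, sigma0 = 0.0, 1.0
h = lambda x: x**2 - 1.0
print(find_roots(h, mu0 - 3 * sigma0, mu0 + 3 * sigma0))   # roots near -1.0 and 1.0
```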

Initialization and Line Search Methods: As a consequence of the structural decisions made during the solution of MINLP problems, abrupt changes in the decision and state variables may occur, which complicate the evaluation of chance constraints, as good initial values are always required. To compensate for this, additional algorithms have been added to the original chance constraint evaluation. First, the framework saves the past twenty successful solution points, i.e. sets of u, y, ξ, and x. Based on these, an initial guess for the solution of the system at the next point defined by u, y, and ξ is generated by one of three options: the first selects the nearest neighbor from the stored results, the second creates a convex hull of them. If those two fail, a line search is started from the nearest neighbor, taking a number of intermediate steps to try to evaluate the new point. Given the aforementioned infeasibility of some combinations of u and y, there is a limit on the number of steps the line search is allowed to take. On failure, the point is flagged as infeasible and the respective probability of the chance constraint is set to zero.
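A minimal sketch of this warm-start strategy is given below (a hypothetical interface, not the framework's API; the convex-hull option is omitted and the simulation solver is abstracted as a callable that raises on failure):

```python
# Store the last converged points; warm-start from the nearest neighbour and
# fall back to a bounded line search of intermediate simulations on failure.
import numpy as np

class WarmStartCache:
    def __init__(self, solve, max_points=20, max_steps=5):
        self.solve = solve          # solve(point, x_guess) -> x, raises RuntimeError on failure
        self.store = []             # (point, x) pairs of the last converged solves
        self.max_points = max_points
        self.max_steps = max_steps  # bounded budget: some points are simply infeasible

    def _nearest(self, point):
        return min(self.store, key=lambda px: np.linalg.norm(px[0] - point))

    def evaluate(self, point, x_default):
        guess = self._nearest(point)[1] if self.store else x_default
        try:
            x = self.solve(point, guess)
        except RuntimeError:
            if not self.store:
                raise               # nothing to line-search from: point stays infeasible
            start, x = self._nearest(point)
            # walk from the nearest converged point towards the new one
            for lam in np.linspace(1.0 / self.max_steps, 1.0, self.max_steps):
                x = self.solve(start + lam * (point - start), x)  # may raise again
        self.store.append((point, x))
        del self.store[:-self.max_points]
        return x
```

If the line search itself fails, the caller flags the point as infeasible and, as stated above, sets the respective probability contribution to zero.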

3. Framework for Chance-Constrained MINLP Problems

The methods described in the previous section are implemented in a software framework coded in Python and detailed in Fig. 1. The framework consists of four layers: the MINLP solver level, the relaxation level, the chance constraint evaluation level, and the simulation level.

Figure 1: Framework for the solution of chance-constrained MINLP problems (Esche, 2015).

The MINLP Solver Level: Within the MINLP solver level, the objective function is defined, the lower and upper bounds for the decisions u and y are set, the chance constraint evaluations are called, and the bounds on the probabilities (αi) are enforced. All inequality constraints untouched by uncertainty are also stated here. The framework can interface with numerous optimization solvers implemented in C/C++, FORTRAN, or Python. Currently, interfaces to IPOPT and NLPQLP for NLP problems and to MISQP and MIDACO for MINLP problems are maintained. The gradient and Hessian of the objective function and the derivatives of the inequalities with regard to the decision variables are supplied to these solvers. The MINLP solver level performs a parallelized call of all chance constraint evaluations via the relaxation level and calls upon the simulation level itself for the evaluation of the objective function and the additional inequalities.

The Relaxation Level: The relaxation level's main purpose is the avoidance of unnecessary chance constraint evaluations. Within large superstructures, some chance constraints will always be limited to a particular section of the whole structure. If this sub-structure is currently inactive, the chance constraints therein should be deactivated. A straightforward solution is to set the probabilities to 1 and the gradients with regard to the decision variables to 0.

The Chance Constraint Evaluation Level: The chance constraint evaluation level implements all measures introduced in Section 2 to guarantee a stable performance of the evaluation even for highly nonlinear and non-convex MINLP problems. The determination of the roots for the sparse grid is again parallelized, and the simulation level is called upon independently for each root.

The Simulation Level: The simulation level is simply a set of Python interfaces to solvers in C/C++ and FORTRAN, which need to be able to return sensitivities with regard to the decision variables u. Since the simulation is the time-sensitive step during the chance-constrained MINLP optimization, the actual equations should always be implemented in the same language as the simulation solver. For the purpose of this work, a C++ frontend to the NLEQ1s solver (Nowak and Weimann, 1991) has been implemented, which uses the Eigen3 template library (Guennebaud et al., 2010) to compute the required sensitivities in a sparse way. Hence, the model equations also need to be supplied in C++.
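The relaxation level's short-circuit can be sketched as follows (a hypothetical constraint interface, not the framework's code):

```python
# Skip expensive evaluations for chance constraints in inactive sub-structures:
# their probability is fixed to 1 and their gradient to 0, so the MINLP solver
# sees them as trivially satisfied.
import numpy as np

def evaluate_chance_constraints(constraints, y, u):
    """constraints: objects with .section (index of the binary in y that
    activates their sub-structure) and .evaluate(u) -> (probability, gradient)."""
    probs, grads = [], []
    for cc in constraints:
        if y[cc.section] == 0:                 # sub-structure switched off
            probs.append(1.0)
            grads.append(np.zeros_like(u, dtype=float))
        else:
            p, g = cc.evaluate(u)              # expensive sparse-grid evaluation
            probs.append(p)
            grads.append(g)
    return np.array(probs), np.array(grads)
```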

4. Case Study: Process Synthesis under Uncertainty

The process synthesis problem of the OCM reaction described above is now used as a case study to highlight the performance of the framework. Six operating scenarios for the reactor are considered, with either nitrogen (N2) or CO2 dilution and variations in selectivity and conversion. The product gas can then be separated in a combined pressure and temperature swing adsorption, a membrane network to remove CO2, and an absorption

system for the same purpose. Each separation step may be bypassed. Within the membrane network, up to six membranes may be used in a combination of stripping and rectification cascades. For each membrane, a choice between polyimide and polyethylene oxide membranes is available. In the absorption section, monoethanolamine or piperazine-activated methyldiethanolamine may be chosen as the scrubbing liquid. For the membranes, the models introduced in Esche et al. (2015) are employed; for the absorption, refer to Esche et al. (2014). The adsorption is modeled rigorously based on a parabolic differential equation system incorporating dual-site Langmuir isotherms, the linear driving force approach for the mass transfer in the pores, and the Ergun equation for the pressure drop along the adsorbent bed. Further details on the model formulation and implementation can be found in Esche (2015). All models are implemented manually in MOSAIC (Kraus et al., 2014) and exported automatically to C++ for the implementation in the chance-constrained MINLP framework.

The full superstructure model consists of roughly 23,100 equations with 33 continuous and 18 binary decision variables and four uncertain parameters. These uncertainties are the CO2 to C2H4 ratio in the reactor outlet, the CO2 permeability of the membranes, the heat of absorption, and the heat loss in the absorption section. The minimization of the specific energy required for the removal of CO2 is selected as the objective function. At the same time, two chance constraints are enforced: the first requires a purity of the C2H4-rich stream of 95% with a probability of 90%, and the second limits the loss of C2H4 throughout the entire product purification to 5% with a probability of 98%. Each chance-constrained MINLP optimization is initialized by first performing a deterministic optimization and then restarting the chance-constrained problem at the deterministic solution. Thanks to the extensive parallelization in the framework, the chance-constrained case did not take significantly longer to solve than the deterministic one. Each took a couple of hours, in some cases up to several days. There are larger differences between the deterministic and chance-constrained results for the six investigated reactor scenarios, varying both in the number and choice of membranes and especially in the membrane sizes and the operating conditions in the absorption section. Compared to a base case using only amine-based absorption for the required CO2 separation task, energy savings of more than 60% can be observed. A detailed discussion of the optimization results goes beyond the scope of this paper and is left to future publications. At this point, the chance-constrained and deterministic results are only evaluated with regard to their satisfaction of the required bounds on the purity of the outlet stream and the loss of C2H4. This is shown in Fig. 2.

Figure 2: Comparison of the probabilities of chance-constrained (plus) and deterministic (cross) results to satisfy constraints on CO2 removal and C2H4 loss (Esche, 2015).

For different scenarios with changing feed conditions, the application of the adsorption section, and different requirements for the maximum number of membranes, the probabilities of constraint satisfaction are calculated for both the deterministic and chance-constrained cases. The latter satisfy the probability requirements as expected, whereas the deterministic results violate the purity of the outlet stream in 30 to 40% of the uncertainty scenarios. Performing process synthesis with these deterministic results would be dangerous for the whole process concept, as the violation could render the subsequent purification of C2H4 infeasible with a probability of up to 40%. This is an important result, as it shows the merit of implementing chance constraints to obtain robust superstructures, which satisfy process requirements even for new and little-investigated systems.

5. Conclusions & Outlook

This contribution introduces the first framework for the solution of chance-constrained MINLP problems. The framework has been tested on a strongly nonlinear and non-convex optimization problem, namely the process synthesis of the oxidative coupling of methane process concept. The framework can deal with non-monotonic relationships between uncertain inputs and chance-constrained outputs and performs an extensive reinitialization at runtime. In future work, the framework shall be extended to include more state-of-the-art optimization and simulation solvers as well as additional probability distributions.

6. Acknowledgements

The support of this work by the Cluster of Excellence UniCat "Unifying Concepts in Catalysis" (DFG EXC 314), funded by the German Research Foundation, is gratefully acknowledged.

References
A. Ben-Tal, L. El Ghaoui, A. Nemirovski, 2009. Robust optimization. Princeton University Press.
J. R. Birge, F. Louveaux, 2011. Introduction to stochastic programming. Springer Science & Business Media.
A. Charnes, W. W. Cooper, 1959. Chance-constrained programming. Management Science 6 (1), 73–79.
E. Esche, 2015. MINLP optimization under uncertainty of a mini-plant for the oxidative coupling of methane. Ph.D. thesis, Technische Universität Berlin, Berlin, Germany.
E. Esche, D. Müller, R. Kraus, G. Wozny, 2014. Systematic approaches for model derivation for optimization purposes. Chemical Engineering Science 115, 215–224.
E. Esche, D. Müller, S. Song, G. Wozny, 2015. Optimization during the process synthesis: enabling the oxidative coupling of methane by minimizing the energy required for the carbon dioxide removal. Journal of Cleaner Production 91, 100–108.
H. Godini, H. Trivedi, A. G. de Villasante, O. Görke, S. Jašo, U. Simon, A. Berthold, W. Witt, G. Wozny, 2013. Design and demonstration of an experimental membrane reactor set-up for oxidative coupling of methane. Chemical Engineering Research and Design 91 (12), 2671–2681.
I. E. Grossmann, G. Guillén-Gosálbez, 2010. Scope for the application of mathematical programming techniques in the synthesis and planning of sustainable processes. Computers & Chemical Engineering 34 (9), 1365–1376.
G. Guennebaud, B. Jacob, et al., 2010. Eigen v3. http://eigen.tuxfamily.org.
G. E. Keller, M. Bhasin, 1982. Synthesis of ethylene via oxidative coupling of methane: I. Determination of active catalysts. Journal of Catalysis 73 (1), 9–19.
R. Kraus, S. Fillinger, G. Tolksdorf, D. Hoang-Minh, V. A. Merchan-Restrepo, G. Wozny, July 2014. Improving model and data integration using MOSAIC as central data management platform. Chemie Ingenieur Technik 86 (7), 1130–1136.
U. Nowak, L. Weimann, 1991. A family of Newton codes for systems of highly nonlinear equations. Konrad-Zuse-Zentrum für Informationstechnik Berlin.
M. Wendt, P. Li, G. Wozny, 2002. Nonlinear chance-constrained process optimization under uncertainty. Industrial & Engineering Chemistry Research 41 (15), 3621–3629.
S. Werk, T. Barz, H. Arellano-Garcia, G. Wozny, 2012a. Performance analysis of shooting algorithms in chance-constrained optimization. In: I. A. Karimi, R. Srinivasan (Eds.), 11th International Symposium on Process Systems Engineering. Vol. 31 of Computer Aided Chemical Engineering. Elsevier, pp. 1512–1516.
S. Werk, T. Barz, G. Wozny, H. Arellano-Garcia, 2012b. An approach to process monitoring under probabilistic constraints. In: I. D. L. Bogle, M. Fairweather (Eds.), 22nd European Symposium on Computer Aided Process Engineering. Vol. 30 of Computer Aided Chemical Engineering. Elsevier, pp. 1252–1256.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26
June 12th-15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/B978-0-444-63428-3.50021-7

Hydrogen Sulphide Scavenging Simulation in HYSYS
André Young,a,* Fernando Pessoa,a Eduardo Queiroza
a Universidade Federal do Rio de Janeiro, Escola de Química, 149 Athos da Silveira Ramos ave, 21941-909, Rio de Janeiro, Brazil
[email protected]

Abstract

In this work, H2S scavenging in oil production was simulated, and the effects of gas lift and of increased water production on the concentration profile of H2S along the production line and on the scavenging efficiency were evaluated. Data from a real production field in Brazil were used. The scavenging kinetics was modelled according to the literature. The production line was simulated with the software Aspen HYSYS, from Aspen Technology, Inc. It was observed that lifting gas injection has two important effects upon the system. The first is the anticipation of the meeting point between the system's bubble pressure and the line pressure, which means that from the lifting gas injection point onwards there is a gas phase carrying hydrogen sulphide and hydrocarbons out of the liquid phase and interfering in the reaction kinetics. The second is the dilution effect of the gas lift. When water production was simulated, at different water-to-oil ratios, different H2S partitioning between the gas and liquid phases was observed, related to the lower solubility of H2S in water. This phenomenon can also affect the scavenging efficiency, because the reaction occurs only in the liquid phase. The authors believe that these observations are very important for the correct modelling of hydrogen sulphide scavenging in oil lines and could be used to support the development of new optimization tools.

Keywords: Hydrogen Sulphide, Scavenger, H2S

1. Introduction

Despite the recent fall in oil prices, the Pre-salt is still a huge investment opportunity and source of income for Brazil. The immense reserve off the Brazilian coast can finally establish the country as a major producer and exporter of oil. Long-term government planning naturally relies on the expected profits from the exploitation of this reserve (Formigli, 2007). However, the Pre-salt brings new challenges to the oil industry. In comparison with traditional wells, Pre-salt wells can contain a large amount of acid gases, such as CO2 and H2S (Arinelli et al., 2015). New engineering solutions are needed and are being developed to overcome these and other problems (Lima et al., 2015). To protect the production lines, some oil companies have invested in recent years in hydrogen sulphide scavengers (Amosa et al., 2010). The scavenger, injected into the production line, reacts with H2S in the liquid phase, protecting the walls of the riser and flexible lines from corrosion. The optimization of the amount of H2S scavenger and of the injection point is of great importance to the oil companies. This optimization depends on the precision with which the concentrations of hydrogen sulphide in the liquid and gas phases, as well as equilibrium constants and volume flows, are estimated (Rajagopal et al., 2009).

In a preliminary literature review, no work was found regarding two common factors in oil and gas production that should also affect the scavenging process: gas lift application and water content. The gas lift operation is widely employed in petroleum production to promote the raising of the reservoir fluid and consists in the injection of a pressurized gas into the production line, which can considerably change the fluid properties (Brown, 1980). On the other hand, as the well depletes, the water content in the reservoir is expected to vary, which could also interfere with the production plans. Therefore, the objective of this work is to simulate H2S scavenging and evaluate the effects of gas lift and of increased water production on the concentration profile of H2S along the production line and on the scavenging efficiency.

2. Simulation

Simulations were carried out in the software Aspen HYSYS, from Aspen Technology, Inc. Figure 1 shows the process flowsheet used to simulate the oil production line. Oil, water and H2S feed the line at the reservoir. The line consists of two pipe segments, represented by plug flow reactors to allow the H2S consumption along the way. Between the two pipe segments, there is an injection valve, where the lifting gas and the scavenger can be introduced into the line. The production line has a total extension of 10,770 m from the reservoir to the platform, and the injection valve is situated 710 m above the bottom. This configuration represents a real production field operated off the Brazilian coast. The reservoir fluid was composed of hydrocarbons from C1 to C19, carbon dioxide and nitrogen. The heavier compounds were represented by a hypothetical component (C20+), with a molar mass of 449 g/mol and a density of 0.9557 g/cm³. The reservoir fluid composition can be seen in Table 1. The scavenger, as well as its reaction products, was also simulated as a hypothetical component. These hypothetical components were built by assuming their UNIFAC structures, and their properties were estimated according to standard methods available in HYSYS. The thermodynamic method chosen for the simulation was the Peng-Robinson equation of state, because of its completeness and wide application in the oil and gas industry (Privat and Jaubert, 2012).

Table 1. Reservoir fluid composition (mol%)

Component   Conc.     Component   Conc.     Component   Conc.
Methane     31.21 %   n-Heptane   2.72 %    C15         1.65 %
Ethane      8.10 %    n-Octane    3.32 %    C16         1.26 %
Propane     6.42 %    n-Nonane    2.88 %    C17         1.11 %
i-Butane    1.13 %    n-Decane    2.54 %    C18         1.14 %
n-Butane    2.70 %    C11         2.18 %    C19         0.93 %
i-Pentane   1.03 %    C12         2.00 %    C20+        19.53 %
n-Pentane   1.47 %    C13         1.99 %    CO2         2.06 %
n-Hexane    1.67 %    C14         0.62 %    N2          0.36 %

Petroleum flow rate in the reservoir was calculated based on experimental data for the oil formation volume factor (Bo), which can be seen as the volume of reservoir fluid required to produce one barrel of oil at standard conditions (15.6 °C and 1 bar). Temperature and pressure profiles along the line were introduced in the process simulator through the Case Study tool. The line was divided into 40 points (10 points before the injection valve and 30 points after it). Each point was simulated with the corresponding temperature and pressure, and the H2S concentration in the gas and liquid phases could be observed as a dependent variable. The H2S scavenger used in the simulations was (ethylenedioxy)dimethanol (CAS: 3586-55-8), at a commercial concentration of 70 vol%. The reaction mechanism is believed to be similar to that reported for the reaction between H2S and triazinanes, so the scavenging process was modelled as a sequence of three first-order irreversible reactions (Bakke and Buhaug, 2004). Kinetic constants were manually adjusted to provide a reduction of 99 % in the final concentration of hydrogen sulphide in the gas phase in the base scenario, with an injection of 45 L/h of commercial scavenger at the gas lift valve. This resulted in kinetic constants of 0.0035 kmol/m³-s, 0.00035 kmol/m³-s and 0.000035 kmol/m³-s for the three consecutive reactions, respectively. The scavenging reaction was assumed to occur only in the liquid phase.

Six different scenarios were simulated. In every scenario, the reservoir conditions were around 267 bar and 90 °C and the platform conditions were around 15 bar and 60 °C. In the base scenario, no water is present in the reservoir and no gas lift is used. To produce 3,000 m³/d of liquid oil at standard conditions, 152.52 m³/h were necessary from the reservoir. The H2S flow rate was adjusted to provide 500 ppmv in the vapour product at platform conditions. In the other five scenarios, the total liquid phase production was maintained at 3,000 m³/d, but the water content was varied to 30 % and 60 % and the use of gas lift was tested. The lifting gas flow rate was adjusted to obtain a gas-to-liquid ratio of 130 at platform conditions, and its composition was equal to the gas product composition at standard conditions, except for the hydrogen sulphide content. The H2S flow rate was adjusted proportionally to the petroleum flow rate from the reservoir.
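As an illustration of the kinetic scheme only (not the HYSYS implementation), the following sketch integrates three consecutive irreversible reactions, here read as pseudo-first-order in the H2S concentration for a single liquid phase at constant volume; the paper's fitted constants are reused as 1/s values and the initial concentrations are invented for the example:

```python
# Consecutive scavenging steps S -> P1 -> P2 -> P3, each consuming one H2S.
from scipy.integrate import solve_ivp

k1, k2, k3 = 3.5e-3, 3.5e-4, 3.5e-5     # fitted constants from the paper, used as 1/s

def rhs(t, c):
    s, p1, p2, p3, h2s = c
    # pseudo-first-order rates in H2S for the three consecutive steps
    r1, r2, r3 = k1 * h2s, k2 * h2s, k3 * h2s
    return [-r1, r1 - r2, r2 - r3, r3, -(r1 + r2 + r3)]

c0 = [1.0e-2, 0.0, 0.0, 0.0, 5.0e-4]    # illustrative concentrations, kmol/m3
sol = solve_ivp(rhs, (0.0, 3600.0), c0, rtol=1e-8)
print(f"H2S after 1 h: {sol.y[4, -1]:.2e} kmol/m3")
```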

Figure 1. Production line representation in HYSYS

3. Results and Discussion

The H2S concentration profiles in the liquid and gas phases, with and without the use of scavenger (SC), for each of the six scenarios can be seen in Figures 2a-f. In the reservoir, the entire system is liquid. As the fluid travels along the production line, the pressure decreases. When the system's pressure becomes lower than its bubble pressure, gas liberation starts. From this point, a slight reduction in the liquid flow and also in the H2S concentration in the liquid phase can be noted, because the hydrogen sulphide is carried away by the arising gas. The production line has a long horizontal segment, between the touchdown point and the Christmas tree, where the pressure drop is small. Between the Christmas tree and the platform, the pressure drop increases abruptly, and so do both the gas flow and the H2S concentration in the gas phase.

Figure 2. H2S profiles along the line (a) 0 % of water, without GL (b) 0 % of water, with GL (c) 30 % of water, without GL (d) 30 % of water, with GL (e) 60 % of water, without GL (f) 60 % of water, with GL

When gas lift was applied, two interesting results can be observed. First of all, there is an anticipation of the meeting point between the system's bubble pressure and the line's pressure. This occurs because, for the proposed configuration, the base system's bubble point is reached only after the gas lift valve. When the lifting gas is added, the system is modified and a gas phase exists from the injection point onwards. This fact is important because the lifting gas contributes to the removal of a considerable portion of hydrocarbons and H2S from the liquid phase. Depending on the relationship between the escape of hydrocarbons and of H2S from the liquid phase to the gas phase, the concentration of H2S in the liquid phase may be smaller or larger at that point. If larger, a more efficient scavenging reaction is expected; this would occur for lighter oils. If smaller, the scavenging reaction would be less efficient; this would occur for heavier oils. In addition, the dilution of the hydrogen sulphide in the gas phase is observed.

Hydrogen sulphide is less soluble in water than in hydrocarbons. Therefore, two contrary effects are observed as water production increases. Assuming that the flow rate of H2S decreases in proportion to the oil flow while the well is depleted, in addition to the lower solubility of H2S in water, the concentration of H2S in the liquid phase is expected to decrease. However, due to the smaller produced gas flow rate and to the assumed maintenance of the total liquid flow, a larger fraction of hydrogen sulphide remains in the liquid phase, making the concentration in the vapour phase decrease. Besides that, water also exerts a dilution effect on the scavenger, because water is less compressible than hydrocarbons and is not volatile. This results in a significant decrease in the reaction rate.

In the base case, the scavenging reaction was adjusted to bring the final H2S concentration in the vapour phase from 500 to 5 ppmv. Consequently, the final concentration in the liquid phase changed from 35 ppmv to almost zero. When the gas lift was installed, the final concentrations in the gas and liquid phases, respectively, were 291 and 22 ppmv without scavenger and 3 ppmv and zero with the scavenger. These results show the dilution effect of the lifting gas, and indicate that the content of light gases carried by the gas lift was probably superior to the content of hydrogen sulphide, leaving a larger portion of the contaminant available for reaction in the liquid phase. This is a positive result regarding process safety. With 30 % of water, the H2S concentration at the platform fell to 467 ppmv in the vapour phase and 26 ppmv in the liquid phase without scavenger, and reached 62 ppmv in the vapour phase and 3 ppmv in the liquid phase with scavenger, as a result of the dilution effect caused by the water, which reduced the reaction rate. When the gas lift was added, the final H2S concentrations fell to 204 ppmv in the vapour phase and 12 ppmv in the liquid phase without scavenger, and to 30 ppmv in the vapour phase and 2 ppmv in the liquid phase with scavenger. Therefore, despite the lower content of H2S, 45 L/h of commercial scavenger were not able to reduce its concentration to the expected values, which is a negative result in terms of process safety. With 60 % of water, the H2S concentration at the platform fell to 402 ppmv in the vapour phase and 18 ppmv in the liquid phase without scavenger. However, because of the much lower content of H2S leaving the reservoir, its concentration with scavenger reached zero in both gas and liquid phases after about 7,000 m, despite the reduced reaction rate. When the gas lift was added, the final H2S concentrations fell to 115 ppmv in the vapour phase and 6 ppmv in the liquid phase without scavenger, and again reached zero after about 7,000 m with scavenger. This is a positive result in terms of process safety, but it could mean a waste of scavenger, which is economically not interesting.

4. Conclusions

Petroleum production was simulated at a total liquid flow rate of 3,000 m³/d and the H2S scavenging process was investigated over six scenarios, involving the gas lift application and water contents of 0 %, 30 % and 60 %. The results showed that both the lifting gas injection and the increasing water content have significant effects on the hydrogen sulphide partition among the phases, as well as on the scavenging efficiency. These effects can be positive or negative. When the gas lift was applied to a water-free system, a positive result was provided by the dilution of the contaminant in the gas phase and its removal from the liquid phase; in this situation, less scavenger could be used. When 30 % of water was present, negative results were obtained with and without gas lift, due to the dilution effect of the water on the scavenger, which reduced the reaction rates; more scavenger would then be necessary to meet the specifications. Finally, when 60 % of water was present in the final liquid stream, a positive result was obtained because of the low content of H2S in the system, which could be promptly consumed by the scavenger. These results show, therefore, that these two important factors must be taken into account when optimizing the scavenger application, aiming at lower costs and safer operations.

References
M. Amosa, I. Mohammed, S. Yaro, 2010, Sulphide Scavengers in Oil and Gas Industry - A Review, NAFTA, 61 (2), p. 85-92.
L. Arinelli, O. Araújo, J. Medeiros, 2015, Dynamic Simulation and Analysis of Slug Flow Impact on Offshore Natural Gas Processing: TEG Dehydration, Joule-Thomson Expansion and Membrane Separation, Computer-Aided Chemical Engineering, 37, p. 1775-1780.
J. Bakke, J. Buhaug, 2004, Hydrogen Sulfide Scavenging by 1,3,5-Triazinanes. Comparison of the Rates of Reaction, Industrial & Engineering Chemistry Research, 43 (9), p. 1962-1965.
K. Brown, 1980, The Technology of Artificial Lift Methods, PennWell Books, 2, Tulsa, United States.
J. Formigli, 2007, Pre-salt reservoirs offshore Brazil: perspectives and challenges, Miami, United States.
B. Lima, O. Araújo, J. Medeiros, C. Morgado, 2015, Viability of Technologies for CO2 Capture and Reuse in a FPSO: Technical, Economic and Environmental Analysis, Computer-Aided Chemical Engineering, 37, p. 1385-1390.
R. Privat, J. Jaubert, 2012, Thermodynamic Models for the Prediction of Petroleum-Fluid Phase Behaviour, Crude Oil Emulsions - Composition Stability and Characterization, InTech, p. 71-106.
K. Rajagopal, R. Lacerda, I. Slobodcicov, E. Campagnolo, 2009, Modeling and Simulation of Hydrogen Sulphide Removal from Petroleum Production Lines by Chemical Scavengers, Chemical Engineering Communications, 196 (10), p. 1237-1248.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50022-9

Dynamic Real-Time Optimization of Microgrids with Day-Ahead Commitments for External Power Exchange
Michael Zachara and Prodromos Daoutidisa
a Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Ave SE, Minneapolis, MN, 55455, USA
*[email protected]

Abstract

This paper explores the stochastic scheduling of microgrids where energy exchange with the macrogrid is constrained by day-ahead commitments. These constraints are included to reduce the added cost to distribution system operators associated with the integration and operation of behind-the-meter generation. Chance-constrained optimization is used to minimize operational cost and meet the energy exchange commitments. The problem is transformed into a mixed-integer linear program, and is used to analyze the impact of risk aversion on microgrid operating cost and renewables curtailment.

Keywords: Microgrid, Stochastic Scheduling, Chance Constrained Optimization, Dynamic Real-Time Optimization

1. Introduction

A microgrid is an autonomous power system which employs distributed generation and storage units to serve a local load. These power systems are of interest since they facilitate the integration of distributed renewables, allow for multi-generation systems (e.g. combined heat, cooling, and power), reduce transmission losses, and increase the ability to shape the net load and regulate local power quality (Lasseter et al. (2002); Pepermans et al. (2005)). Microgrids could enable beneficial practices like peak shaving, voltage support, and frequency regulation (Pepermans et al. (2005)). However, current market structures, in particular net metering and feed-in tariffs, do little to encourage microgrids to participate in these activities. Several publications have established potential problems with integrating distributed generation in current market structures. Mehleri et al. (2012) show that model predictive control of a microgrid may result in strongly peaked export of power around noon. In Zhang et al. (2015), electricity sales to the macrogrid are not allowed, but the power imported to the microgrid still exhibits undesirable traits, such as intermittency and large spikes in demand. Serving such loads would require utility plants to ramp quickly and back off significantly during midday, resulting in increased cost and control complexity for the macrogrid. Balcombe et al. (2015) show that microgrids can be designed and controlled to reduce the impact of distributed generation on the net load by actively pursuing self-sufficiency. However, this may require microgrids to forgo economic benefits, as the authors showed that these solutions are more expensive under current market structures. Thus, alternative market structures which inherently promote responsible interaction with the macrogrid are needed, such as those discussed in Eid et al. (2014) and Picciariello et al. (2015). In this work, a novel market structure is proposed in which microgrid operators must supply distribution system operators (DSOs) with a day-ahead estimate of their net external energy exchange

over each 1-hour period. This gives utilities adequate time to incorporate this knowledge into their scheduling and dispatch decisions for power plants. In addition, the energy exchange commitments are only allowed to change by some maximum step size from hour to hour to prevent extreme ramping requirements for power plants. Under this proposed market structure, a stochastic microgrid scheduling optimization problem is formulated and solved. This scheduling optimization is used to probe the relationship between the level of risk aversion employed by microgrid operators and the frequency and magnitude of commitment violations, operating cost, load shedding, and renewables utilization.

2. Problem Formulation A generic microgrid is considered consisting of photovoltaics (PV), gas-fired microturbines, a battery, and a bi-directional connection to the macrogrid. The energy flow diagram is shown in Figure 1. This paper focuses on decisions at the scheduling level. The microgrid schedule governs the number of microturbines turned on, the charge/discharge state of the battery, and the commitments for energy exchange with the macrogrid. This schedule is optimized hourly to minimize the Figure 1: Energy flow diagram of the operational cost, minimize load shedding, and prevent microgrid system considered. violations of energy exchange commitments. This problem is formulated as a dynamic real-time optimization (D-RTO) where the objective function is to minimize the expected cost over a 48-hour receding horizon. Chance-constraints are used to ensure that load and renewables stochasticity do not increase the risk of commitment violations or load shedding. In order to distinguish between predicted (or forecasted) values and realized values of variables, the following nomenclature will be used: x(t) will refer to the realized value of the variable x at time t, and x(t,τ) will refer to the expected value of x(t) at time τ. In the following sections, the models used for each unit in the D-RTO are introduced. Then, the chance-constrained D-RTO formulation is presented and transformed into a standard mixed integer linear program. Finally, the results of implementing this scheduling approach over a 1-year period at various levels of risk aversion are shown and analyzed. 2.1. Unit Models Microturbines serve as a local source of dispatchable power. In particular, 4 identical microturbines rated at 30 kW are available in the system. Fuel consumption is considered to be an affine function of the output power. The microturbines must be operated at 10 % of their rated power when on in order to maintain stable operation. Finally, a fixed cost is incurred when starting up microturbines based on the extra wear and fuel usage during these events. A 600 kWh battery bank is used to store and dispatch power as needed. The maximum depth of discharge is taken to be 80 % since deeper discharges significantly shorten battery lifespan. In addition, the rate of charge/discharge is constrained to be less than 10 % of the maximum capacity per hour. The charging and discharging efficiency are taken to be constant within these operational limits. Finally, a fixed cost is incurred when switching from charging to discharging based on an assumed lifespan of 1,500 cycles and replacement cost of 150 $/kWh. Renewable power from rooftop PV is available at no marginal cost. The true amount of available PV power is not known in advance, but a forecast is used during scheduling. PV power can be curtailed as needed in order to prevent excessive export of energy to the macrogrid.

Up to 20 % of the load can be shed at some effective convenience rate. A bi-directional connection with the macrogrid is used to import and export power from the microgrid. No hard limit is placed on the power exchange since thermal line limits are not expected to be significant. Microgrid operators are billed or remunerated based on the net energy exchanged in each hour.

2.2. Scheduling Optimization

The scheduling problem D-RTO is then formulated as:

Minimize:
Σ_{t=τ}^{τ+47} [ζm(t)·F(t,τ) + ζb(t)·Pb(t,τ) − ζe(t)·Pe(t,τ) + φm·ym(t,τ) + θ·ψ(t,τ) + ζs·Ps(t,τ) + ζu·Pu(t,τ)] − W(τ+47)·L(τ+47,τ)    (1)

Subject to:
Pm(t,τ) + Pd(t,τ) − Pc(t,τ) + Pb(t,τ) − Pe(t,τ) + Ps(t,τ) = Pℓ(t,τ) − Pv(t,τ)    (2)
Prob{Pm(t) + Pd(t) − Pc(t) + Pb(t) − Pe(t) + Ps(t) = Pℓ(t) − Pv(t), Ps(t) ≤ Ps(t,τ), Pu(t) ≤ Pu(t,τ)} ≥ α1    (3)
Prob{Pm(t) + Pd(t) − Pc(t) + Pb(t) − Pe(t) = 0.8·Pℓ(t) − Pv(t), Pu(t) ≤ Pu(t,τ)} ≥ α2    (4)
0 ≤ Pv(t,τ) ≤ Pv^max(t,τ)    (5)
0.1·xm(t,τ)·Pm^max ≤ Pm(t,τ) ≤ xm(t,τ)·Pm^max    (6)
0 ≤ Pc(t,τ) ≤ Pc^max(t,τ)    (7)
0 ≤ Pd(t,τ) ≤ Pd^max(t,τ)    (8)
Pc^max(t,τ) ≤ 0.1·L^max·(1 − χ(t,τ))    (9)
Pd^max(t,τ) ≤ 0.1·L^max·χ(t,τ)    (10)
Pc^max(t,τ) ≤ (L^max − L(t−1,τ))/0.9    (11)
Pd^max(t,τ) ≤ (L(t−1,τ) − 0.2·L^max)·0.95    (12)
Ps(t,τ) ≤ 0.2·Pℓ(t,τ)    (13)
Pg(t) − γ − Pu(t,τ) ≤ Pb(t,τ) − Pe(t,τ) ≤ Pg(t) + γ + Pu(t,τ)    ∀t ∈ [τ, τ+23]    (14)
Pb(t,τ) − Pe(t,τ) ≥ Pb(t−1,τ) − Pe(t−1,τ) − δ    ∀t ∈ [τ+24, τ+47]    (15)
Pb(t,τ) − Pe(t,τ) ≤ Pb(t−1,τ) − Pe(t−1,τ) + δ    ∀t ∈ [τ+24, τ+47]    (16)
Pu^abs(t,τ) ≥ Pu(t,τ)    (17)
Pu^abs(t,τ) ≥ −Pu(t,τ)    (18)
F(t,τ) = κ1·xm(t,τ) + κ2·Pm(t,τ)    (19)
L(t,τ) = L(t−1,τ) + 0.9·Pc(t,τ) − Pd(t,τ)/0.95    (20)
ym(t,τ) ≥ xm(t,τ) − xm(t,τ−1)    (21)
ψ(t,τ) ≥ χ(t,τ) − χ(t,τ−1)    (22)
χ, ψ ∈ {0, 1}    (23)
xm, ym ∈ {0, 1, 2, 3, 4}    (24)
F, Pb, Pe, Ps, Pu, L, Pm, Pc, Pd, Pc^max, Pd^max ≥ 0    (25)

where the nomenclature is described in Table 1. The binary, integer, and continuous decision variables can be seen in Equations (23), (24), and (25), respectively. The chance constraint in Equation (3) specifies that there should be at least an α1 confidence level that the load shed does not exceed the expected amount, Ps(t,τ). Similarly, Equation (4) specifies that there should be at least an α2 confidence level that the commitment violation does not exceed the expected amount, Pu(t,τ). In these equations, the variables to the left of the equality are the realized values of dispatch decisions, which will be made after the true values of the stochastic variables, Pℓ(t) and Pv^max(t), are revealed. However, the feasible range for the realized values is constrained by decisions made here-and-now. As a note, Ps(t,τ) and Pu(t,τ) are generally zero in the optimal solution since they are penalized at a very high rate in the objective function.

Equations (3) and (4) cannot be directly used in a standard solver. However, each chance constraint can be transformed into a set of linear inequalities. Truncated normal distributions are used to describe the forecasting errors. The untruncated distributions for the errors are given by:

εℓ(t,τ) ∼ N(0, [Pℓ(t)·(0.13 + 0.015·(t−τ)/47)]²)    (26)
εv(t,τ) ∼ N(0, [min(Pv^max(t)/3, Pv^rated)·(0.17 + 0.05·(t−τ)/47)]²)    (27)

The load forecast is truncated to the range [0 kW, 450 kW], and the available PV power is truncated to the range [0 kW, 400 kW]. Based on the forecasted values Pℓ(t,τ) and Pv^max(t,τ), the microgrid operator estimates the probability distribution of the stochastic parameters using these equations and makes here-and-now decisions such that Equations (3) and (4) are satisfied. In particular, for t ∈ [τ, τ+23], Equation (3) is replaced by:

xm(t,τ)·Pm^max + Pd^max(t,τ) + Pg(t) + γ + Pu(t,τ) + Ps(t,τ) ≥ C⁻¹(Pℓ(t,τ) − Pv^max(t,τ), (1+α1)/2)    (28)
0.1·xm(t,τ)·Pm^max − Pc^max(t,τ) + Pg(t) − γ − Pu(t,τ) ≤ C⁻¹(Pℓ(t,τ), (1−α1)/2)    (29)

where C⁻¹(x, p) is the inverse cumulative distribution function of the uncertain parameter x evaluated at the probability p. Equation (28) ensures that the probability of under-generation (i.e. the power balance cannot be satisfied without increasing load shedding or penalized power import beyond the expected values) is less than (1 − α1)/2. Equation (29) ensures that the probability of over-generation (i.e. the power balance cannot be satisfied without increasing penalized power export beyond the expected values) is less than (1 − α1)/2. Together, these two inequalities ensure that the realized power balance can be satisfied without increasing load shedding or penalized power exchange with a probability of at least α1. Similarly, for t ∈ [τ, τ+23], Equation (4) can be reformulated as:

xm(t,τ)·Pm^max + Pd^max(t,τ) + Pg(t) + γ + Pu(t,τ) ≥ C⁻¹(0.8·Pℓ(t,τ) − Pv^max(t,τ), (1+α2)/2)    (30)
0.1·xm(t,τ)·Pm^max − Pc^max(t,τ) + Pg(t) − γ − Pu(t,τ) ≤ C⁻¹(0.8·Pℓ(t,τ), (1−α2)/2)    (31)

It is worth noting that if α1 ≤ α2, Equation (31) is necessarily more restrictive than Equation (29), so (29) does not need to be included in the final formulation. Transformed constraints can also be developed for t ∈ [τ+24, τ+47] by replacing the Pg(t) term with Pb(t,τ) − Pe(t,τ).
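As an illustration, the inverse CDF C⁻¹ entering Eqs. (28)-(31) can be evaluated with SciPy's truncated normal; the sketch below handles the load-only term of Eq. (29), and uses the forecast in place of the (unknown) realized load in the standard deviation of Eq. (26), which is an assumption of this sketch:

```python
# Quantile of the realized load P_l(t) = forecast + eps_l, truncated to [0, 450] kW.
from scipy.stats import truncnorm

def load_quantile(p_load_forecast, h, p, lo=0.0, hi=450.0):
    """C^{-1}(P_l, p) for a forecast made h = t - tau hours ahead."""
    sigma = p_load_forecast * (0.13 + 0.015 * h / 47.0)   # Eq. (26), forecast as proxy
    a = (lo - p_load_forecast) / sigma                    # standardized truncation bounds
    b = (hi - p_load_forecast) / sigma
    return truncnorm.ppf(p, a, b, loc=p_load_forecast, scale=sigma)

alpha1 = 0.99
# left-tail quantile entering the over-generation constraint, Eq. (29)
print(load_quantile(250.0, h=12, p=(1 - alpha1) / 2))
```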

Table 1: Optimization formulation nomenclature.

Parameters/Variables: α confidence level; γ schedule elasticity; δ schedule adaptability; ζ cost coefficient; η battery efficiency; θ cycle cost; κ fuel rate coefficient; φ startup cost; χ is discharging; ψ is cycled; x number on; y number started; F fuel rate; L battery level; P power; W storage incentive.

Subscripts: b import from macrogrid; c charging; d discharging; e export to macrogrid; g exchange commitment; ℓ load/demand; m microturbine; s load shedding; u penalized exchange; v solar PV.

3. Case Study

The scheduling performance is examined for different α1 and α2 levels by solving the D-RTO iteratively over a 1-year period, as shown in Figure 2. Aggregate load and PV power data of 100 stand-alone houses in Austin, TX, from the Pecan Street Inc. Dataport database are used for Pℓ(t) and Pv^max(t). The market parameters are taken to be δ = 40 kW and γ = 10 kW.

Figure 2: Simulation flow.

Figures 3 and 4 show the frequency and magnitude of load shedding and commitment violations as different confidence levels are used. Load shedding becomes less frequent and severe as α1 is increased. Similarly, commitment violations become less frequent and severe as α2 is increased. In addition, there is a weak cross-coupling between these two phenomena. Figure 5 shows the realized cost under different confidence levels. As α2 is increased, the D-RTO solution becomes more conservative and the fuel utilization increases, since microturbines are committed more often to satisfy the chance constraints. Similarly, the battery is cycled more often as α2 increases. These phenomena result in the positive correlation between realized cost and α2. In contrast, no strong correlation is seen between cost and α1.
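A minimal sketch (hypothetical function names, not the authors' code) of the closed-loop procedure in Figure 2 follows; exactly how one additional hour of exchange becomes binding 24 h ahead is an assumption of this sketch:

```python
# Hourly receding-horizon loop: re-solve the D-RTO, update the day-ahead
# commitments, and implement only the first hour against realized data.
def simulate_year(forecast, solve_drto, dispatch, hours=8760):
    """forecast(tau) -> 48 h load/PV forecasts; solve_drto(fc, committed) -> plan,
    a dict with the planned net exchange per horizon hour; dispatch(plan, tau) ->
    realized cost and violations for hour tau."""
    commitments = [0.0] * (hours + 48)          # committed net exchange, kW
    results = []
    for tau in range(hours):
        plan = solve_drto(forecast(tau), commitments[tau:tau + 24])
        commitments[tau + 24] = plan["exchange"][24]    # assumed commitment update
        results.append(dispatch(plan, tau))             # implement the first hour
    return results
```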


Figure 5 also shows the curtailment of PV power under different confidence levels. There is a strong positive correlation between α2 and the amount of PV power curtailed. For example, α2 < 0.99 is required to achieve a PV utilization of over 82.5%. This implies that there is an inherent trade-off between the utilization of renewables and the frequency and magnitude of market violations. There is little correlation between the α1 confidence level and PV curtailment.


Figure 3: Empirical complementary cumulative distribution functions of load shed and commitment violations for α2 = 0.999. Input load and PV data obtained from Pecan Street Inc. Dataport.



Figure 4: Empirical complementary cumulative distribution functions of load shed and commitment violations for α1 = 0.9. Input load and PV data obtained from Pecan Street Inc. Dataport.


Figure 5: Cost (left) and curtailment of available PV power (right) versus the confidence levels. Input load and PV data obtained from Pecan Street Inc. Dataport.

4. Conclusions

In the proposed market structure, D-RTO can be used for microgrid scheduling to minimize cost while ensuring that commitments for energy exchange with the macrogrid are met with a high degree of satisfaction. In addition, the level of risk aversion (i.e. the α1 and α2 parameters in the D-RTO) can be tuned to achieve a desired level of commitment fulfillment and load shedding. The effect of these parameters on other performance metrics, in particular cost and renewables utilization, is also quantified. Finally, this analysis can be used to determine whether a particular set of microgrid control objectives (e.g. operational cost, renewable curtailment, load shedding, etc.) is achievable, and how conservative the scheduling must be to achieve these goals.

5. Acknowledgments

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 00039202. Partial financial support from the University of Minnesota Initiative for Renewable Energy and the Environment project RL-0010-13 is also acknowledged.

References
P. Balcombe, D. Rigby, A. Azapagic, 2015. Energy self-sufficiency, grid demand variability and consumer costs: Integrating solar PV, Stirling engine CHP and battery storage. Applied Energy 155, 393–408.
C. Eid, J. Reneses Guillén, P. Frías Marín, R. Hakvoort, 2014. The economic effect of electricity net-metering with solar PV: Consequences for network cost recovery, cross subsidies and policy objectives. Energy Policy 75, 244–254.
R. Lasseter, A. Akhil, C. Marnay, J. Stephens, J. Dagle, R. Guttromson, A. Meliopoulous, R. Yinger, J. Eto, 2002. The CERTS microgrid concept. White paper for Transmission Reliability Program, US Department of Energy.
E. Mehleri, L. Papageorgiou, H. Sarimveis, N. Markatos, 2012. A model predictive control framework for residential microgrids. Computer Aided Chemical Engineering 30, 327–331.
G. Pepermans, J. Driesen, D. Haeseldonckx, R. Belmans, W. D'haeseleer, 2005. Distributed generation: definition, benefits and issues. Energy Policy 33 (6), 787–798.
A. Picciariello, J. Reneses, P. Frias, L. Söder, 2015. Distributed generation and distribution pricing: Why do we need new tariff design methodologies? Electric Power Systems Research 119, 370–376.
D. Zhang, S. Evangelisti, P. Lettieri, L. Papageorgiou, 2015. Energy consumption scheduling of smart homes with microgrid under multi-objective optimisation. Computer Aided Chemical Engineering 37, 2441–2446.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50023-0

Methodology for preselecting heat-integrated mass allocation networks likely to be associated with cost efficient HEN
Sami Ghazouani,a,b,* Assaad Zoughaiba and Solène Le Bourdiecb
a Centre Efficacité Energétique des Systèmes (CES), Mines ParisTech, PSL Research University, Palaiseau, 91120, France
b EDF R&D Division, EDF Lab Les Renardières, Moret Sur Loing, 77818, France
*[email protected]

Abstract

Instead of being discarded to the environment with costly treatments, process effluents can be reused and valued for their mass and heat content. This may result in substantial operating cost reductions but may require significant capital costs. Based on the models developed by Ghazouani et al. (2015a,b), a new methodology is proposed to select mass allocation network designs that can potentially lead to an economically interesting heat exchanger network (HEN) without having to design the latter. In the new objective function, an estimation of the associated HEN capital costs is added to the operating costs (fresh sources and utilities consumption). The HEN costs take into account the number of mass streams requiring heating or cooling and a rough estimation of the necessary heat exchange area. Moreover, mixer units are introduced into the initial superstructure to further reduce the number of mass streams participating in the HEN. The selection of the most cost-effective units among infinite possibilities is made easier by the changes made to the aforementioned model. Besides, the model's solution speed allows many possibilities to be evaluated fairly quickly and the most promising ones to be selected. The methodology can be very helpful for sequential or simultaneous models designing mass allocation and heat exchanger networks. In this article, the HEN design is done sequentially, using an established linear model (Barbaro and Bagajewicz (2005)). The relevance of the proposed methodology is assessed through a case study detailed in a previous work (Ghazouani et al. (2015a)).

Keywords: Mass integration, Heat integration, MILP, HEN, Mixer units

1. Introduction

Resources recovery is becoming an increasingly important matter for industry, because environmental and quality regulations are getting more restrictive, and raw material prices have shown in the recent past that they can change rapidly and drastically. Therefore, resources recovery represents a good solution to reduce raw material requirements, waste generation and overall operating costs. It also helps to comply with current or future regulations. However, in actual processes, this re-allocation of matter inevitably involves heating and cooling to be practically feasible. Thus, the mass allocation and heat exchanger networks must be optimized together to find an economically optimal solution. In the last decade, several design methodologies have been proposed (sequential or simultaneous) using linear or non-linear models (Boix et al. (2012), Ahmetović and Kravanja (2013), Jiménez-Gutiérrez et al. (2014), Tan et al. (2014)). These methodologies can be very complex and may require a relatively long calculation time, if they converge at all.

In this new model, based on previously presented works (Ghazouani et al., 2015a,b), the objective function is the sum of the operating costs (fresh sources, waste discharge, heating/cooling requirements) and an estimation of the annualized HEN capital costs. The purpose is to evaluate as accurately and rapidly as possible the order of magnitude of the HEN costs compared to the operating costs, without designing the HEN (which is a very complex problem). The minimum number of heat exchangers and a rough estimation of the heat exchange area are optimized. Moreover, it is possible to reduce the number of streams in the HEN by mixing mass streams before using them. The initial model superstructure is therefore modified by adding mixer units prior to the HEN. They are characterized by a fixed temperature and a link to a specific sink. Indeed, since a linear model is used, the temperature has to be a parameter, not a variable. The selection problem of which sinks should be associated with a mixer unit, and at what temperature, is formulated as a new MILP. Many possibilities can thus be tested fairly quickly to choose one or several economically interesting mixer units. With the aim of designing the associated HEN, this new approach can be used to greatly reduce the search space of a linear model and the calculation time, or to identify a good initial guess for non-linear models (either sequential or simultaneous). In this paper, the methodology is first presented and then followed by a case study where the HEN structure cost evaluation is done sequentially, using the linear HEN model developed by Barbaro and Bagajewicz (2005) and improved by Cong-Tran et al. (2015) in the CERES software.

2. Initial MILP model

The basic problem to be dealt with is how to reuse generated effluents (process sources) within process units (process sinks), in order to minimize the global fresh resource and energy requirements. Process sources are characterized by a certain mass flow rate, temperature, impurities composition and properties. Process sinks have requirements concerning mass flow rate and temperature, and limitations on acceptable composition and properties. Fresh sources are used to feed process sinks when the available process sources cannot comply with the sink requirements, but they have a cost. Unused parts of sources are sent to waste sinks. Mass allocation is realized by taking into account mass balances and limitation constraints based on the characteristics of the sources (fresh and process) and the requirements of the sinks (waste and process). The consumption of the available fresh resources is optimized depending on their cost.
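As an illustration of this basic allocation problem (illustrative data and a single impurity, not the paper's full model, which also handles temperatures and properties), a linear program of this type can be written with PuLP:

```python
# Allocate process and fresh sources to a sink at minimum fresh-source cost,
# subject to flow demand and an inlet-impurity limit.
import pulp

sources = {"process": {"flow": 50.0, "conc": 80.0, "cost": 0.0},   # ppm impurity
           "fresh":   {"flow": 1e6,  "conc": 0.0,  "cost": 1.0}}   # $/t
sinks = {"sink1": {"demand": 60.0, "conc_max": 50.0}}

prob = pulp.LpProblem("mass_allocation", pulp.LpMinimize)
m = pulp.LpVariable.dicts("m", (sources, sinks), lowBound=0.0)     # t/h allocated

prob += pulp.lpSum(sources[s]["cost"] * m[s][k] for s in sources for k in sinks)
for s in sources:                                  # availability of each source
    prob += pulp.lpSum(m[s][k] for k in sinks) <= sources[s]["flow"]
for k in sinks:                                    # flow demand and impurity load limit
    prob += pulp.lpSum(m[s][k] for s in sources) == sinks[k]["demand"]
    prob += pulp.lpSum(sources[s]["conc"] * m[s][k] for s in sources) \
            <= sinks[k]["conc_max"] * sinks[k]["demand"]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({(s, k): m[s][k].value() for s in sources for k in sinks})
```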

Figure 1: Coupled heat/mass integration superstructure

Mass streams generated between sources and sinks can require heating or cooling to reach the sink temperature. Thus, to reduce heat requirements, heat integration is done based on the classic transhipment model. To further economize energy, non-isothermal mixing can be done by sending part of a mass stream directly to its sink at a temperature different from the sink temperature. To avoid non-linear equations related to this possibility, a temperature scale defines the levels at which mass splits can occur. This scale can be refined depending on the desired accuracy level.

The difference between two consecutive temperatures would be smaller than a given value ∆T_max^step. For each stream, heat requirements are calculated within each temperature interval of the scale for the remaining part of the stream going through it (Fig. 1). The heat balance includes external utilities, with variable heat output, at a given cost. The optimal mass allocation network is designed by minimizing the annual operating cost (Cop), including fresh resources and utilities consumption costs. A more detailed description of this model, with all necessary equations and references, can be found in Ghazouani et al. (2015a,b).

3. Modified MILP model: HEN cost evaluation and reduction

3.1. Mixer unit model and superstructure modifications

Mass streams allocated to the same sink may require heating (or cooling) and can be heated up (or cooled down) by the same heat source (or heat sink). Hence, mixing them together can avoid the need for many heat exchangers. To this end, mixer units are used to reduce the number of streams in the HEN (Fig. 2). They are characterized by a fixed temperature and associated with a specific sink (process or waste), which can have any number of mixer units. Mixing together all the streams provided by all sources must result in a stream at the mixer unit temperature. The mixer unit outlet is subjected to the same constraints as any regular source in the model (cf. §2), only with a single sink to supply. The stream going from a mixer unit to a sink interacts with the HEN in the same way as a stream in Fig. 1. The new superstructure, including the mixer units, is presented in Fig. 3.

Figure 2: Mixer units model

3.2. Objective function modifications

Figure 3: New heat-integrated mass allocation network superstructure

To find an economically relevant mass allocation network, it is necessary to evaluate the associated HEN costs. The cost of a heat exchanger is divided into a fixed part related to its existence (c_he^fixed) and a variable part related to its area (c_he^area). Usually, the minimum number of heat exchangers required can be estimated using the formula proposed by Linnhoff et al. (1979) when the heat streams are known. However, since the streams to be heat-integrated are unknown here, this number (n_he^min) is assumed (even if it is underestimated) to be the maximum between the numbers of hot and cold streams created that require cooling or heating (n_hot and n_cold, respectively). These numbers can be quantified, for each mass stream, using binary variables that count mass extractions at temperature levels different from the corresponding source (or mixer unit) temperature (Fig. 1). Utilities are also counted as one if their output is not null. To comply with linear modelling and avoid using the maximum function, integer extra variables (n_hot^extra, n_cold^extra) are introduced as complements of n_hot and n_cold to calculate n_he^min:

$$ n_{he}^{min} = n_{hot} + n_{hot}^{extra} = n_{cold} + n_{cold}^{extra} = \max(n_{hot}, n_{cold}) \qquad (1) $$
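The linearization in Eq. (1) can be reproduced with any MILP modelling tool. Below is a minimal sketch in Python/PuLP, with n_hot and n_cold fixed to illustrative values; in the actual model they are integer expressions built from the binary counting variables.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, value

n_hot, n_cold = 3, 2  # illustrative; variables in the full MILP

prob = LpProblem("min_heat_exchangers", LpMinimize)
n_he = LpVariable("n_he_min", lowBound=0, cat=LpInteger)
e_hot = LpVariable("n_extra_hot", lowBound=0, cat=LpInteger)
e_cold = LpVariable("n_extra_cold", lowBound=0, cat=LpInteger)

# Eq. (1): n_he = n_hot + e_hot = n_cold + e_cold
prob += n_he == n_hot + e_hot
prob += n_he == n_cold + e_cold
# Minimizing n_he drives the extra variables to their smallest feasible
# values, so n_he equals max(n_hot, n_cold) without a max() function.
prob += n_he

prob.solve()
print(value(n_he))  # -> 3.0
```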


The total fixed part of the HEN cost, C_HEN^fixed, is:

$$ C_{HEN}^{fixed} = c_{he}^{fixed}\, n_{he}^{min} + k^{fixed}\, c_{he}^{fixed}\, (n_{hot} + n_{cold}) \qquad (2) $$

The second term in Eq. (2) is added to guarantee obtaining the minimum number of actual streams in the HEN (i.e. n_hot and n_cold) while remaining marginal (k^fixed is selected accordingly). The variable area cost (C_HEN^variable) is calculated by estimating what the heat exchange area could be for each stream (mass stream ms or utility u) associated with the heat flux (q_ms,n or q_u,n) within each temperature interval n where it exists (Fig. 1). Exchange areas cannot be precisely calculated at this point, since the matching between hot and cold streams has not been done yet. Therefore, an average temperature difference k^area ΔT_pinch is arbitrarily chosen to calculate an exchange area estimate (S_HEN^esti) that could be close to the real one, and to evaluate its influence on the mass allocation network design. It is based on the minimum allowed temperature difference (ΔT_pinch) and can be increased (with k^area ≥ 1) to better approximate the real heat exchange area. Also, each stream (ms or u) is characterized by a heat transfer coefficient (htc). The variable part of the HEN costs is:

$$ C_{HEN}^{variable} = S_{HEN}^{esti}\, c_{he}^{area} = \left[ \sum_{n} \left( \sum_{ms} \frac{q_{ms,n}}{htc_{ms}\, k^{area}\, \Delta T_{pinch}} + \sum_{u} \frac{q_{u,n}}{htc_{u}\, k^{area}\, \Delta T_{pinch}} \right) \right] c_{he}^{area} \qquad (3) $$

In addition to the operating costs, the new objective function of the modified MILP model includes the total annualized HEN costs over the number of years (N_op) that the network is assumed to be operated:

$$ C_{total}^{esti} = C_{op} + \frac{C_{HEN}^{fixed} + C_{HEN}^{variable}}{N_{op}} \qquad (4) $$
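As a quick plausibility check of Eqs. (2)-(4), the modified-model estimate reported later in Table 3 can be recomputed from the Table 1 economic data; the marginal k^fixed term is omitted here, which explains the small rounding gap.

```python
# Economic data from Table 1 of the case study.
c_he_fixed = 5292.0   # EUR per exchanger (fixed part)
c_he_area = 77.8      # EUR per m2 (area part)
N_op = 1              # years of operation

# Modified-model results from Table 3.
n_he_min = 3          # Eq. (1)
S_esti = 23.6         # m2, result of the Eq. (3) summation
C_op = 44.9e3         # EUR/y, operating cost

C_fixed = c_he_fixed * n_he_min                  # Eq. (2), first term only
C_variable = c_he_area * S_esti                  # Eq. (3)
C_total = C_op + (C_fixed + C_variable) / N_op   # Eq. (4)
print(round(C_total / 1e3, 1))  # -> 62.6 kEUR, vs. 62.5 kEUR in Table 3
```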

4. Case study

The case study focuses on the use of water within a phenol production process. It is presented by Ghazouani et al. (2015a), where all the process data can be found. The following tables summarize the economic data and the selected model parameters:

Table 1: Economic data

c_he^fixed (€) | c_he^area (€/m²) | h_op (h) | N_op (y) | c_fresh1 (€/t) | c_fresh2 (€/t) | c_hot (€/MWh) | c_cold (€/MWh)
5,292          | 77.8             | 8,000    | 1        | 3              | 1              | 100           | 25

Table 2: Parameters

ΔL_fresh^max (%) | cp_j (kJ/kg/K) | ΔT_pinch (°C) | ΔT_step (°C) | N_split^max | htc_ms (W/m²/K) | htc_u (W/m²/K) | k^fixed | k^area
900              | 4.2            | 10            | 10           | 3           | 1,000           | 1,000          | 0.001   | 1

Hot utility temperature = 100 °C; cold utility temperature = 10 °C

This case study aims at demonstrating the relevance of the new objective function for designing mass allocation networks that ultimately lead to more cost-effective HEN designs in a sequential approach. For each model, the optimal HEN is designed using the heat streams extracted from the mass allocation results. First, the overall costs obtained with the initial formulation (§2) and the modified one (§3.2) are compared. Then, a sensitivity study is carried out to select mixer units using the new formulation (§3.1), in order to try to further reduce the HEN costs.

4.1. Initial MILP vs. modified MILP

Table 3 presents the results of each model, where the HEN costs are either merely evaluated (initial) or actually part of the objective function (modified).


It shows that the new mass allocation operating cost (C_op) increases (+2.8 k€), but the estimated HEN and overall costs (C_total^esti) are reduced compared to the initial solution (-8.2 k€), as the estimate of n_he^min is lowered (3 instead of 5). Table 4 presents the overall actual cost (C_total) after designing the HEN based on the mass allocation previously obtained. Indeed, the modified model leads to a more economical HEN design, dividing its cost by more than two while also improving the operating costs. The cost estimation is quite accurate, with a difference between C_total and C_total^esti that is relatively low (+6 %), contrary to the initial one (+25 %). The number of heat exchangers is close to its estimate (4 instead of 3). However, the overall exchange area estimate (S_HEN^esti) is far from the final result (S_HEN); it can be improved by adjusting k^area. Note that the overall calculation takes 1-2 s.

Table 3: Mass allocation network results prior to HEN design for the initial and modified MILP models (HEN cost evaluation)

Model    | C_total^esti (k€) | C_op (k€) | L_fresh1 (kg/h) | L_fresh2 (kg/h) | MER_h (kW) | MER_c (kW) | n_hot | n_cold | n_he^min | S_HEN^esti (m²)
Initial  | 70.7              | 42.1      | 804             | 318             | 0          | 102        | 5     | 5      | 5        | 27.6
Modified | 62.5              | 44.9      | 610             | 1,102           | 0          | 107        | 3     | 2      | 3        | 23.6

Table 4: Overall cost results after HEN design based on the previous mass allocation network results

Model    | C_total (k€) | C_HEN (k€) | C_op (k€) | Q_h (kW) | Q_c (kW) | n_he | S_HEN (m²)
Initial  | 95.2         | 48.2       | 47.0      | 5        | 106      | 9    | 8.0
Modified | 66.6         | 21.7       | 44.9      | 0        | 107      | 4    | 7.0

4.2. Reducing HEN costs: mixer unit selection

HEN costs can potentially be reduced using mixer units. The selection is done on a trial-and-error basis. A first test series, using a few mixer units with every sink, shows that only the ones associated with the waste sink have a significant influence on the results. Note that the sink temperature is 30 °C, while the source temperatures range from 25 to 85 °C. To determine the optimal mixing temperature, a preliminary test series is done over a range from 45 to 80 °C with a 5 °C step; the results show an optimum around 60 °C. To refine the results around this possible optimum, a second test series is done over a range from 55 to 65 °C with a 1 °C step. The results are shown in Fig. 4. For mixer units between 55 and 70 °C, n_he^min decreases to 2. The difference between these solutions is mostly due to the cooling requirements and the heat exchange area (S_HEN^esti), because of slightly different mass allocation structures (Table 5). The HEN design and total cost evaluation are done for interesting points around the possible optimum (56-61 °C). Table 6 shows that not all mixer units performed well, but some improved the solution by reducing n_he (for T_mixer = 56 and 59 °C, n_he reaches 3). The reason is that the optimization is done sequentially; in the previous step, the mass streams are not perfectly designed for an optimal heat exchange. Simultaneous optimization would give better results. Still, with a fast calculation time (1-2 s), quasi-optimal mixer units could be selected and the solution's costs reduced, as sketched below.

Figure 4: Mixer units' selection
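A minimal sketch of that trial-and-error sweep follows; solve_modified_milp() stands in for the modified MILP of Section 3 and is replaced here by a toy cost curve so the snippet runs. It is not the real model.

```python
def solve_modified_milp(T_mixer_C):
    """Stand-in for the modified MILP with one mixer unit at T_mixer_C on the
    waste sink; returns C_total_esti in kEUR. The toy quadratic below only
    mimics an optimum near 60 degC so the sweep is executable."""
    return 57.0 + 0.02 * (T_mixer_C - 60) ** 2  # NOT the real model

def sweep(temps):
    costs = {T: solve_modified_milp(T) for T in temps}
    return min(costs, key=costs.get), costs

coarse_best, _ = sweep(range(45, 85, 5))                        # 5 degC pass
fine_best, _ = sweep(range(coarse_best - 5, coarse_best + 6))   # 1 degC pass
print(coarse_best, fine_best)  # -> 60 60
```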


Table 5: Modified MILP results with mixer units

T_mixer (°C) | C_total^esti (k€) | C_op (k€) | L_fresh1 (kg/h) | L_fresh2 (kg/h) | MER_h (kW) | MER_c (kW) | n_hot | n_cold | S_HEN^esti (m²)
Without      | 62.5              | 44.9      | 610             | 1,102           | 0          | 107.5      | 3     | 2      | 23.6
56           | 57.4              | 44.9      | 605             | 1,122           | 0          | 107.1      | 2     | 2      | 23.9
57-60        | 57.1              | 44.8      | 616             | 1,068           | 0          | 107.1      | 2     | 2      | 23.4
61           | 58.1              | 45.6      | 654             | 1,068           | 0          | 106.9      | 2     | 2      | 24.2

Table 6: Cost evaluation after HEN design with mixer units

T_mixer (°C) | C_total (k€) | C_HEN (k€) | C_op (k€) | Q_h (kW) | Q_c (kW) | n_he | S_HEN (m²)
Without      | 66.6         | 21.7       | 44.9      | 0        | 107.5    | 4    | 7.0
59           | 61.2         | 16.5       | 44.7      | 0        | 107.1    | 3    | 8.0
56           | 61.3         | 16.4       | 44.9      | 0        | 107.5    | 3    | 7.3
57, 58, 60   | 71.0         | 16.4       | 54.6      | 9.9      | 107.1    | 3    | 7.3
61           | 72.7         | 21.8       | 50.9      | 5.3      | 112.0    | 4    | 7.9

5. Conclusion

The proposed methodology allows a quick evaluation of the most economical mass allocation and heat exchanger networks. Its steps and performance have been demonstrated on a case study. It is useful for sequential methodologies, because it helps reduce the size of the HEN design problem and leads towards a quasi-optimal solution. It would, however, give better results with simultaneous optimization models, as these retain the ability to arrange mass streams during the HEN design. It can also be useful for non-linear approaches to this kind of problem, because it can provide several good initial guesses and increase the chances of converging towards an optimal solution. Currently, we are working on a simultaneous and linear approach to designing mass allocation and heat exchanger networks based on the proposed model.

References
E. Ahmetović, Z. Kravanja, 2013. Simultaneous synthesis of process water and heat exchanger networks. Energy 57, 236-250.
A. Barbaro, M. J. Bagajewicz, 2005. New rigorous one-step MILP formulation for heat exchanger network synthesis. Computers and Chemical Engineering 29, 1945-1976.
M. Boix, L. Pibouleau, L. Montastruc, C. Azzaro-Pantel, S. Domenech, 2012. Minimizing water and energy consumptions in water and heat exchange networks. Appl. Therm. Eng. 36, 442-455.
T. Cong-Tran, F. Thibault, H. Thieriot, A. Zoughaib, S. Pelloux-Prayer, 2015. New features to Barbaro's heat exchanger network algorithm: heat exchanger technologies and waste heat flow representation. In: Proceedings of the 28th International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems, 29 June-3 July 2015, Pau, France. ISBN: 978-2-9555539-0-9. Lecture 51041.
S. Ghazouani, A. Zoughaib, S. Pelloux-Prayer, 2015a. Coupled heat and resource allocation network design considering multi-contaminants, properties and non-isothermal mixing. In: Proceedings of the 28th International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems, 29 June-3 July 2015, Pau, France. ISBN: 978-2-9555539-0-9. Lecture 50064.
S. Ghazouani, A. Zoughaib, S. Pelloux-Prayer, 2015b. Simultaneous heat integrated resource allocation network targeting for total annual cost considering non-isothermal mixing. Chemical Engineering Science 34, 385-398.
A. Jiménez-Gutiérrez, J. Lona-Ramírez, J. M. Ponce-Ortega, M. El-Halwagi, 2014. An MINLP model for the simultaneous integration of energy, mass and properties in water networks. Comp. Chem. Eng. 71, 52-66.
B. Linnhoff, D. R. Mason, I. Wardle, 1979. Understanding heat exchanger networks. Computers and Chemical Engineering 3, 295-302.
Y. Tan, D. Ng, M. El-Halwagi, D. Foo, Y. Samyudia, 2014. Heat integrated resource conservation networks without mixing prior to heat exchanger networks. Journal of Cleaner Production 71, 128-138.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50024-2

Computer-Aided Molecular Design by Combining Genetic Algorithms and COSMO-RS

Jan Scheffczyk, Lorenz Fleitmann, Annett Schwarz, André Bardow*, Kai Leonhard

RWTH Aachen University, Chair of Technical Thermodynamics, Schinkelstrasse 8, 52062 Aachen, Germany; *[email protected]

Abstract

Increasing demand for tailor-made chemicals gives rise to challenging molecular design tasks. Previous molecular design approaches have relied on simplified thermodynamic models to be computationally tractable. In contrast, quantum mechanics offers the most comprehensive molecular picture, but its direct integration into computer-aided molecular design (CAMD) is challenging. In this work, we therefore aim at integrating quantum-level information into molecular design while still allowing for efficient computations. For this purpose, a framework for optimization-based molecular design is introduced, based on property predictions by COSMO-RS and a genetic algorithm for molecular design. The resulting framework is applied to a case study for solvent design in liquid-liquid extraction.

Keywords: COSMO-RS, CAMD, Extraction.

1. Introduction

A key for economic and ecologic performance in process and product design is the selection of suitable solvents (Hostrup et al., 1999). However, the large design space of potential solvent molecules leads to combinatorial explosion and requires systematic search strategies. For this purpose, computer-aided molecular design (CAMD) methods have been developed. In CAMD, chemical product design is combined with property prediction methods (Ng et al., 2015). The success of CAMD methods strongly relies on the underlying property method (Adjiman et al., 2014). Commonly, CAMD is based on first-order group contribution (GC) methods, which simplify the underlying thermodynamic picture by assuming additivity of molecular groups. GC methods further require primary experiments for parametrization, which is especially problematic for new compounds (Delidovich et al., 2014). The limitations of GC methods can be overcome by ab initio quantum mechanical (QM) calculations, which provide very accurate predictions but can be computationally demanding. A well-balanced compromise between accuracy and computational efficiency has been achieved by the thermodynamic model COSMO-RS (Klamt et al., 2001): computationally expensive QM COSMO calculations are performed once per pure component to calculate screening charge densities (so-called σ-surfaces); the σ-surfaces of the pure components are stored in a database and can be combined with statistical thermodynamics for fast computations of mixture properties. Thereby, COSMO-RS provides accurate and computationally efficient calculations while not requiring any experimental data.


Recently, COSMO-RS has been successfully applied to solvent selection. Frequently, large databases of known σ-surfaces are screened for solubility prediction (e.g., Blumenthal et al., 2015). However, database screenings are limited to known molecular structures. Molecular modifications (e.g., functional group addition or chain length variation) have therefore been explored using heuristics with COSMO-RS-assisted experiments (Burghoff et al., 2008). Zhou et al. (2015) use COSMO-RS-based semi-empirical descriptors combined with a GC method to design new compounds. In this work, the COSMO-CAMD framework is introduced to combine optimization-based molecular design with COSMO-RS property prediction. The framework is applied to a case study for solvent design in liquid-liquid extraction. This work has been extended and is presented in full detail in Scheffczyk et al. (2016).

2. COSMO-CAMD framework

Here, we consider a CAMD problem (Hostrup et al., 1999) of the following form:

$$ \begin{aligned} \max_{y} \;& F(y) && (1) \\ \text{s.t.}\;& h(y) = 0 && (2) \\ & g(y) \le 0 && (3) \\ & y \in Y && (4) \end{aligned} $$

The objective function F(y) is based on thermodynamic properties calculated by COSMO-RS depending on the molecular structure y. The molecular structure y is part of the design space Y of all molecular structures that can be generated from a set of defined building blocks consisting of molecular fragments. h(y) are equality constraints on the solvent design (e.g., structural feasibility or liquid-phase behavior), while g(y) are inequality constraints on molecular properties (e.g., size). In this work, a genetic algorithm (GA) is employed to optimize molecular structures based on property prediction with COSMO-RS. The molecular structure y is expressed in a fragment-based notation (Douguet et al., 2005) containing the full topological information of a molecule (e.g., isomers are distinguished). The fragment-based representation forms the direct input to the genetic algorithm (LEA3D), which also incorporates the constraints on structural feasibility, Eq. (2) (Douguet et al., 2005). The molecular structures are used in COSMO calculations. In COSMO, a trade-off exists between accuracy and computational time depending on the basic QM level employed (Klamt et al., 2001). To find a suitable trade-off, two QM levels are combined in this work: In a first step, the Design Phase, COSMO calculations are performed at the BP-TZVP-MF level (in the following TZVP-MF). This level provides intermediate accuracy and computational cost and is used to generate and evaluate novel molecular structures. In a second step, the Refinement Phase, the most promising candidates are characterized using BP-TZVPD-FINE (in the following TZVPD-FINE), the most accurate QM level available. While COSMO-RS provides case-specific information on a solvent-solute system, σ-surfaces from COSMO calculations are properties of pure components. Therefore, a database of σ-surfaces of novel molecular structures is generated during the optimization. Thereby, time-consuming COSMO calculations are performed only once for each molecular structure. Additionally, the σ-surface database can be re-used for different case studies, further increasing computational efficiency.


Even for the first run, in which the σ-surface database is generated, the combination of database generation with a Design Phase at an intermediate-accuracy QM level already reduces the computational effort by a factor of ~25. This provides the means to evaluate ~4,500 molecules in 5 weeks on a 3.2 GHz desktop PC. Since genetic algorithms allow for easy parallelization by computing the individuals on separate cores, molecular design by COSMO-CAMD could be performed within a day on an appropriate cluster.

The general procedure of COSMO-CAMD consists of the following steps (Figure 1):

- To start, the fragment library is specified with the set of molecule fragments that can be combined by the genetic algorithm. Constraints are specified for molecular properties (e.g., molecule size). Optimization parameters are specified (number of generations, number of molecules per generation, probabilities of crossover / mutation).

- In the Design Phase, the GA generates new molecular structures from the current set of molecular structures using genetic operations (crossover and mutation); the first population is randomly initialized from library fragments. Each molecular structure is evaluated with COSMO-RS. If the structure exists in the σ-surface database, its σ-surfaces are used from the database; if not, σ-surfaces are calculated at the TZVP-MF level (see the sketch after Figure 1). From the σ-surfaces, COSMO-RS calculates the thermodynamic properties of the molecular structures in the case-specific mixture. These properties allow evaluating the objective function F(y), Eq. (1), and thus the performance of the molecular structure. The current generation is updated until the genetic algorithm terminates (e.g., after a specified number of generations). The Design Phase yields a preliminary ranked list of solvent candidates based on F(y).

- In the Refinement Phase, the top solvent candidates from the Design Phase are selected for a refinement step. The molecules are re-calculated at the TZVPD-FINE level to obtain results with the highest possible accuracy, and the performance F(y) is re-estimated accordingly. The final result is a ranked list of solvent candidates for further investigation (e.g., experiments).


Figure 1: Schematic procedure of the COSMO-CAMD framework.
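The caching logic referred to in the Design Phase step can be sketched as follows. The arguments qm_calc (the COSMO calculation at TZVP-MF level), crossover/mutate (the LEA3D genetic operators) and objective (F(y) evaluated from COSMO-RS properties) are hypothetical stand-ins, not the published implementation of Scheffczyk et al. (2016).

```python
import random

def design_phase(initial_pop, n_generations, qm_calc, crossover, mutate,
                 objective, sigma_db=None):
    sigma_db = {} if sigma_db is None else sigma_db  # molecule -> sigma-surface

    def sigma(molecule):
        # The expensive QM step runs only once per distinct structure;
        # the database can be re-used across case studies.
        if molecule not in sigma_db:
            sigma_db[molecule] = qm_calc(molecule)
        return sigma_db[molecule]

    pop = list(initial_pop)
    for _ in range(n_generations):
        ranked = sorted(pop, key=lambda m: objective(sigma(m)), reverse=True)
        parents = ranked[: max(2, len(ranked) // 2)]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children
    # Preliminary ranked list; top candidates go to the Refinement Phase.
    return sorted(pop, key=lambda m: objective(sigma(m)), reverse=True)
```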


3. Case study: Solvent design for extraction

A common extraction problem is the removal of phenol from industrial wastewater. Gmehling and Schedemann (2014) have screened an experimental database (Dortmund Data Bank, DDB) for extracting agents. In this work, novel solvents for phenol extraction are designed using the COSMO-CAMD framework introduced above.

3.1. Problem specification

An objective function F1(y) from Gmehling and Schedemann (2014) is used to evaluate the extraction agent performance based on selectivity, capacity and solvent loss:

$$ F_1(y) = \log\Bigg( \underbrace{\bigg[\frac{D_{phenol}}{D_{water}}\bigg]^{3.0}}_{\text{Selectivity}} \cdot \underbrace{\big[D_{phenol}\big]^{1.0}}_{\text{Capacity}} \cdot \underbrace{\big(1 - x_{solvent,R}\big)^{1.5}}_{\text{Solvent loss}} \Bigg) \qquad (5) $$

where $D_i = \gamma_{i,R} / \gamma_{i,E}$ is the distribution coefficient of compound i between the two liquid equilibrium phases R and E. The activity coefficients γ_i are calculated with COSMO-RS at the liquid-liquid equilibrium (LLE) concentrations x_i,E and x_i,R of a binary water-solvent mixture with phenol at infinite dilution and T = 25 °C. In LEA3D, the constraints h(y), Eq. (2), and g(y), Eq. (3), need to be incorporated into the objective function F(y):

$$ h(y): \begin{cases} \text{an LLE must exist, else } F_1(y) = 0 \\ \text{feasible molecular structures, ensured by LEA3D} \end{cases} \qquad (6) $$

Large molecules are naturally more likely to phase-separate from water, which positively affects F1(y). However, larger components have high melting points. To restrict the number of heavy atoms n in a molecule, LEA3D has a built-in penalty function F2(y) (Douguet et al., 2005):

$$ g(y): \quad F_2(y) = \begin{cases} 1, & \text{if } n_{min} < n < n_{max} \\[4pt] \dfrac{(0.9\,n_X)^2 - (n - n_X)^2}{(0.9\,n_X)^2}, & \text{if } n \le n_{min} \,\lor\, n \ge n_{max} \end{cases} \qquad (7) $$

$$ \text{with } n_X = n_{min} \text{ if } n \le n_{min} \text{ and } n_X = n_{max} \text{ if } n \ge n_{max}. \qquad (8) $$

Boundaries for the molecular size are here set to [n_min = 5, n_max = 12]. The final objective function is:

$$ F(y) = \frac{F_1(y) + F_2(y)}{2} \qquad (9) $$
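A direct transcription of Eqs. (5) and (7)-(9) as reconstructed above (base-10 logarithm assumed); the distribution coefficients and the solvent LLE concentration would come from COSMO-RS, but here they are plain inputs.

```python
import math

def F1(D_phenol, D_water, x_solvent_R):
    # Eq. (5); per Eq. (6), F1 is set to 0 when no LLE exists.
    selectivity = (D_phenol / D_water) ** 3.0
    capacity = D_phenol ** 1.0
    solvent_loss = (1.0 - x_solvent_R) ** 1.5
    return math.log10(selectivity * capacity * solvent_loss)

def F2(n, n_min=5, n_max=12):
    # Eqs. (7)-(8): flat inside the size window, smooth penalty outside.
    if n_min < n < n_max:
        return 1.0
    n_x = n_min if n <= n_min else n_max
    return ((0.9 * n_x) ** 2 - (n - n_x) ** 2) / (0.9 * n_x) ** 2

def F(D_phenol, D_water, x_solvent_R, n_heavy_atoms):
    return (F1(D_phenol, D_water, x_solvent_R) + F2(n_heavy_atoms)) / 2.0
```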

The initial fragment library consists of methyl, ethyl, propyl, butyl, phenyl, carbonyl, carboxyl, ether, hydroxy and cyclohexyl group fragments. Each hydrogen atom in the fragments can be replaced by another fragment from the library, except for the hydrogen in hydroxyl groups. The number of generations is set to 20, with 40 molecules per generation, based on pre-tests showing good performance. Parameters for mutation and crossover are set according to Douguet et al. (2005). The top 25 molecules from the Design Phase are evaluated in the Refinement Phase.


3.2. Results

In the database solvent screening by Gmehling and Schedemann (2014), the most promising molecule identified is methyl isobutyl ketone (MIBK). MIBK results in an objective function F(y) of 4.24 at the TZVPD-FINE level. A comparison with the solvents designed by COSMO-CAMD in Table 1 shows that the newly designed solvents achieve a higher F(y) than MIBK, with a maximum value of 4.76 (+12 %). This increase is larger than the observed deviation of TZVPD-FINE from experimental data (root mean square, RMS = 0.40) for the 10 solvents published in Gmehling and Schedemann (2014). Overall, TZVPD-FINE values are in good agreement with TZVP-MF (Figure 2). Here, we compare objective function F(y) values at both QM levels for all designed molecules that exist in the TZVPD-FINE database or were calculated in the Refinement Phase. The calculated RMS deviation of TZVP-MF from TZVPD-FINE values is 0.27. Due to the known accuracy of COSMO-RS, the candidate list should serve as an input for experiments to verify the predicted performance. The candidate list contains novel solvents designed by COSMO-CAMD: 18 of the 25 most promising molecules designed by COSMO-CAMD are not contained in the COSMO-RS database (version COSMObase-1501-TZVPD-FINE). Thus, COSMO-CAMD successfully goes beyond the established screening methods based on database search.

Table 1: Ranked list of solvents designed by COSMO-CAMD

# | Component                            | F(y) TZVP-MF | F(y) TZVPD-FINE
1 | 2-methyl-4-(propan-2-yl)benzaldehyde | 4.31         | 4.76
2 | 2-methyl-5-propylbenzaldehyde        | 4.27         | 4.75
3 | 4-methyl-2-propylbenzaldehyde        | 4.29         | 4.74
4 | 2-methyl-5-(propan-2-yl)benzaldehyde | 4.26         | 4.72


Figure 2: Molecules generated in COSMO-CAMD using TZVP-MF and TZVPD-FINE level. Dashed line indicates best solvent from database screening (Gmehling and Schedemann 2014).


4. Conclusions

In this work, COSMO-CAMD is presented, an optimization-based design method for novel solvents. The molecular structures are generated by direct manipulation of the chemical structure with the genetic algorithm LEA3D, which allows exploring a large search space by optimization. The presented framework combines a computationally efficient Design Phase using a lower-accuracy QM method (TZVP-MF) with accurate property prediction for promising molecules using TZVPD-FINE. The case study for phenol extraction shows that COSMO-CAMD designs promising novel solvents. The designed molecules are predicted to perform better than the molecules contained in a comprehensive database. 18 of the 25 most promising designed molecules are novel solvents that were not in the COSMO-RS TZVPD-FINE database. Overall, COSMO-CAMD designs promising novel molecules and thus successfully overcomes the limitations of both experimental and COSMO-RS-based database screenings.

Acknowledgments

This work was performed as part of the Cluster of Excellence "Tailor-Made Fuels from Biomass", which is funded by the Excellence Initiative by the German federal and state governments to promote science and research at German universities.

References
C. S. Adjiman, A. Galindo, G. Jackson, 2014, Molecules Matter: The Expanding Envelope of Process Design, Proceedings of the 8th International Conference on Foundations of Computer-Aided Process Design, 34, 55-64.
L. C. Blumenthal, C. M. Jens, J. Ulbrich, F. Schwering, V. Langrehr, T. Turek, U. Kunz, K. Leonhard, R. Palkovits, 2015, Systematic identification of solvents optimal for the extraction of 5-hydroxymethylfurfural from aqueous reactive solutions, in submission.
B. Burghoff, E. L. V. Goetheer, A. B. de Haan, 2008, COSMO-RS-Based Extractant Screening for Phenol Extraction As Model System, Ind. Eng. Chem. Res., 47, 12, 4263-4269.
I. Delidovich, K. Leonhard, R. Palkovits, 2014, Cellulose and hemicellulose valorisation: an integrated challenge of catalysis and reaction engineering, Energy Environ. Sci., 7, 9, 2803-2830.
D. Douguet, H. Munier-Lehmann, G. Labesse, S. Pochet, 2005, LEA3D: a computer-aided ligand design for structure-based drug design, J. Med. Chem., 48, 7, 2457-2468.
J. Gmehling, A. Schedemann, 2014, Selection of Solvents or Solvent Mixtures for Liquid-Liquid Extraction Using Predictive Thermodynamic Models or Access to the Dortmund Data Bank, Ind. Eng. Chem. Res., 53, 45, 17794-17805.
M. Hostrup, P. Harper, R. Gani, 1999, Design of environmentally benign processes: integration of solvent design and separation process synthesis, Comput. Chem. Eng., 23, 10, 1395-1414.
A. Klamt, F. Eckert, M. Hornig, 2001, COSMO-RS: A novel view to physiological solvation and partition questions, J. Comput.-Aided Mol. Des., 15, 4, 355-365.
L. Y. Ng, F. K. Chong, N. G. Chemmangattuvalappil, 2015, Challenges and opportunities in computer-aided molecular design, Comput. Chem. Eng., 81, 115-129.
J. Scheffczyk, L. Fleitmann, A. Schwarz, M. Lampe, A. Bardow, K. Leonhard, 2016, COSMO-CAMD: A Framework for Optimization-Based Computer-Aided Molecular Design using COSMO-RS, in submission.
T. Zhou, K. McBride, X. Zhang, Z. Qi, K. Sundmacher, 2015, Integrated solvent and process design exemplified for a Diels-Alder reaction, AIChE J., 61, 1, 147-158.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50025-4

Energy consumption maps for quaternary distillation sequences

F.I. Gómez-Castro, N.E. Ramírez-Vallejo, J.G. Segovia-Hernández, C. Gutiérrez-Antonio*, M. Errico, A. Briones-Ramírez, J. Sánchez-Aguilar

Departamento de Ingeniería Química, División de Ciencias Naturales y Exactas, Campus Guanajuato, Universidad de Guanajuato, Noria Alta s/n, Guanajuato, Gto., México, 36050; Facultad de Química, Universidad Autónoma de Querétaro, Av. Cerro de las Campanas s/n, Col. Las Campanas, Querétaro, Qro., México, 76010, [email protected]; University of Southern Denmark, Department of Chemical Engineering, Biotechnology and Environmental Technology, Campusvej 55, 5230 Odense M, Denmark; Exxerpro Solutions, Av. del Sol 1B Interior 4B, Plaza Comercial El Sol, Querétaro, Qro., México, 76134; Posgrado e Investigación, Universidad Autónoma de Querétaro, Cerro de las Campanas S/N, Querétaro, Qro., México, 76010

Abstract

Thermally coupled distillation columns represent a very interesting option for the intensification of distillation systems, reducing the energy consumption and, as a consequence, the environmental impact of the separation process. Several thermally coupled distillation schemes can be generated for the separation of multicomponent mixtures. This fact is an advantage, since a wide portfolio of alternatives can be used to separate a specific mixture; however, it is also a disadvantage, since many alternatives must be explored in order to find the optimal one. The optimal configuration, for a given mixture, depends on the nature of the mixture, usually quantified for ternary mixtures through the ease of separation index (ESI), and also on the feed composition. As can be noticed, the size of the design and optimization problem increases when these variables are considered in the generation of the solution space. For the separation of ternary mixtures, Tedder and Rudd (1978) presented a composition map indicating where thermally coupled systems allow energy savings. The scenario is different for quaternary mixtures, however, since no similar information is available. Therefore, in this work, energy consumption data for five feed compositions of a near-ideal mixture are presented. The quaternary sequences studied are: conventional direct (three columns), conventional indirect (three columns), thermally coupled direct (main column and two side rectifiers), and thermally coupled indirect (main column and two side strippers). The design and optimization of the distillation sequences is performed through a multiobjective genetic algorithm with constraint handling, coupled to the commercial process simulator Aspen Plus and enhanced through the use of neural networks.

Keywords: quaternary mixtures, energy consumption, thermally coupled distillation, stochastic optimization.


1. Introduction

It is well known that distillation is one of the most used separation processes for industrial mixtures. Its versatility relies on the fact that, with an adequate design, almost any purity can be reached. However, one of its main disadvantages is its elevated energy consumption, which is a consequence of a low thermodynamic efficiency. As the required purity increases, more energy must be provided to the column's reboiler. In order to maintain the advantages of distillation while reducing its energy requirements, several alternatives have been proposed. One of these options is the thermally coupled distillation columns, obtained by substituting one or more auxiliary heat exchangers, associated with non-product streams, with a vapour/liquid interconnecting stream. These systems avoid the phenomenon known as remixing by taking advantage of the composition profiles (Triantafyllou and Smith, 1992); this allows decreasing the energy requirements and also the capital costs. Several thermally coupled distillation schemes have been proposed for the separation of multicomponent mixtures. This fact is an advantage, since a wide set of alternatives can be used to separate a specific mixture; however, it is also a disadvantage, since all the alternatives must be explored in order to identify the best one. The optimal sequence, for a given mixture, depends mainly on the nature of the mixture, usually quantified for ternary mixtures through the ease of separation index (ESI), and also on the feed composition. The ease of separation index was defined by Tedder and Rudd (1978), and it relates the distribution constants of the components of the mixture. As can be noticed, the size of the design and optimization problem increases when these variables are also considered in the generation of the solution space. Some configurations have been proposed for the separation of quaternary mixtures, and it has been found that, for the cases reported, quaternary thermally coupled distillation sequences may represent important reductions in energy consumption, while also showing good control properties when compared to similar conventional sequences (Hernández et al., 2005; Errico et al., 2008; Vázquez-Castillo et al., 2009). Nevertheless, Shah and Agrawal (2011) evidenced that the use of thermal coupling can reduce the energy consumption only in some cases. This result is a generalization of the work of Tedder and Rudd (1978), in which it was concluded that, for ideal ternary mixtures, energy savings can be obtained by using thermally coupled sequences only if the composition of the middle-boiling component is between 0.4 and 0.8. In the same work, Tedder and Rudd presented composition maps indicating where thermally coupled systems allow energy savings. In their study, eight distillation systems were analyzed for the separation of ternary mixtures, and energy consumption maps were generated as a function of the feed composition and the nature of the mixture. This information was very useful, considering the large size of the solution space when conventional and thermally coupled distillation columns are considered. However, the scenario is different for quaternary mixtures since, to the knowledge of the authors, there is no information about the composition space for which quaternary thermally coupled systems show lower energy requirements than conventional sequences. Therefore, in this work, energy consumption data for five feed compositions of a near-ideal mixture are presented.
The quaternary sequences studied are: conventional direct (three columns), conventional indirect (three columns), thermally coupled direct (main column and two side rectifiers), and thermally coupled indirect (main column and two side strippers). The set of compositions to be analyzed is obtained through an experimental design approach; in this work we present a subset of 5 of the 11 resulting compositions. The optimal design of the distillation sequences is performed through a multiobjective genetic algorithm coupled to Aspen


Plus and sped up by the use of neural networks (Gutiérrez-Antonio and Briones-Ramírez, 2015).

2. Case study

The mixture 2-methyl-2-butene (A) / cyclopentane (B) / benzene (C) / toluene (D) has been taken as the case study. The mixture consists of branched and cyclic hydrocarbons; thus, the Chao-Seader correlation is used to model the vapour-liquid equilibrium. The feed flow rate for all cases is set to 100 kmol/h, with desired recoveries of 99 % for all components. The ease of separation index for this mixture, defined for the quaternary case as ESI_ABCD = ESI_ABC / ESI_BCD, is 1.04, which indicates near-ideal behaviour. In order to analyse the effect of the feed composition on the energy requirements of the quaternary sequences, a set of feed compositions has been obtained through a simplex lattice experimental design approach. In this work, a sub-set of these feed compositions is studied (Table 1). The configurations considered in the present study are reported in Figure 1.

Table 1. Feed compositions used for the study.

Case | A    | B    | C    | D
M1F1 | 0.7  | 0.1  | 0.1  | 0.1
M1F2 | 0.1  | 0.7  | 0.1  | 0.1
M1F3 | 0.1  | 0.1  | 0.7  | 0.1
M1F4 | 0.1  | 0.1  | 0.1  | 0.7
M1F5 | 0.25 | 0.25 | 0.25 | 0.25
3. Design and optimization methodology

In order to generate the energy consumption data for the designs of conventional and thermally coupled quaternary sequences, a multiobjective genetic algorithm with constraint handling is used. This method was chosen since it has proven to be a robust tool for the optimization of chemical processes. The genetic algorithm is coupled to the process simulator Aspen Plus; thus, the complete rigorous model for the distillation columns is used. A main problem with the use of the process simulator is the long time required to evaluate the objective and constraint functions. Thus, the speed of the strategy is improved through the use of neural networks, which serve as surrogate models for the evaluation of the objective and constraint functions. The code is implemented in Matlab, which is linked to Aspen Plus using ActiveX technology; for more details, the reader is referred to the original contribution (Gutiérrez-Antonio and Briones-Ramírez, 2015). For each system, a tuning process is performed to determine the required number of generations and number of individuals. For the configurations analysed in this work, 1,000 individuals are required per generation; the numbers of generations were 150 and 250 for the conventional and thermally coupled distillation sequences, respectively. The main decision variables for the conventional configurations are the number of stages, the reflux ratio and the location of the feed stages. In the case of the thermally coupled sequences, the flow rates and locations of the interlinking streams are also important design variables. The objective function involves the simultaneous minimization of the number of stages and the heat duty of each sequence, while the constraints are the purities and recoveries established for all cases.
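A conceptual Python sketch of how a surrogate can cut down the number of rigorous simulator calls inside such a loop; surrogate and rigorous_sim are placeholders for the trained neural network and the Aspen Plus evaluation (the actual implementation is in Matlab over ActiveX, as noted above, and its screening logic may differ).

```python
def evaluate_population(designs, surrogate, rigorous_sim, keep_fraction=0.2):
    # "designs" are hashable tuples of decision variables (stages, reflux,
    # feed locations, ...). The surrogate returns a cheap scalar score.
    ranked = sorted(designs, key=surrogate)          # cheap screening pass
    n_keep = max(1, int(keep_fraction * len(ranked)))
    # Only the surrogate's best candidates pay for a rigorous simulation.
    return {d: rigorous_sim(d) for d in ranked[:n_keep]}
```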

Figure 1. Studied distillation sequences: (a) Conventional direct sequence, CDS; (b) conventional indirect sequence, CIS; (c) quaternary thermally coupled direct sequence, QTCDS; (d) quaternary thermally coupled indirect sequence, QTCIS.

4. Results

In this section, the results obtained in the optimization step are discussed in terms of energy consumption and number of stages; the objectives have been grouped in order to facilitate the presentation of the results. The Pareto front is composed of a set of optimal designs including the minimum energy consumption, the minimum number of stages, and all the designs in between these two extremes. Figure 2 shows the Pareto fronts for case M1F1, where the objectives have been grouped. From Figure 2, it is clear that, for the same number of total stages, the minimum heat duty is observed for the QTCDS, whose optimal designs overlap with the CDS designs in the region of fewer total stages. The extremes of the Pareto front, which represent the minimum energy requirement and the minimum number of stages, respectively, are not of interest from a practical point of view. Considering this, designs with a similar number of stages have been selected in order to observe the changes in the heat duties. Table 2 presents the heat duties of all analysed sequences for the five feed compositions shown in Table 1, when a similar number of stages (around 150) is considered. For the case M1F1, where component A is in major proportion, the best option for the separation of the four components is the QTCDS, followed by the CIS; for this feed composition it is clear that neither the QTCIS nor the CDS are optimal options to perform the separation.


Figure 2. Pareto fronts of the studied distillation sequences for case M1F1.

Table 2. Heat duties, QT (kW), of all sequences for a similar number of total stages, NT.

M1F1  | NT  | QT (kW)
CDS   | 150 | 46,283.96
CIS   | 150 | 5,813.74
QTCDS | 150 | 3,595.17
QTCIS | 150 | 10,753.43

M1F2  | NT  | QT (kW)
CDS   | 150 | 2,979.81
CIS   | 150 | 3,997.27
QTCDS | 154 | 2,916.57
QTCIS | 220 | 81,685.44

M1F3  | NT  | QT (kW)
CDS   | 150 | 2,333.86
CIS   | 150 | 2,648.94
QTCDS | 170 | 2,696.43
QTCIS | 152 | 10,395.08

M1F4  | NT  | QT (kW)
CDS   | 152 | 2,446.36
CIS   | 150 | 1,949.53
QTCDS | 151 | 2,090.75
QTCIS | 151 | 2,165.72

M1F5  | NT  | QT (kW)
CDS   | 150 | 3,044.81
CIS   | 149 | 5,040.83
QTCDS | 150 | 3,171.28
QTCIS | 149 | 38,049.10
126

F.I. Gómez-Castro et al.

In the case M1F2, where component B is in major proportion, the best option for the separation of the four components is the QTCDS, followed by the CDS and the CIS; for this feed composition, the QTCIS is clearly the worst option to perform the separation. On the other hand, for the case M1F3, where component C is in major proportion, the best option for the separation is the CDS, followed by the CIS and the QTCDS; here again, the QTCIS is the worst option. For the case M1F4, where component D is in major proportion, the best option is the CIS, followed by the QTCDS and the QTCIS (see Table 2); for this feed composition, the CDS shows the highest heat duty. Finally, for case M1F5 the best option to perform the separation is the CDS, followed very closely by the QTCDS; in this case, the QTCIS is the worst option when a similar number of stages is considered.

5. Conclusions

In this work, an analysis of the effect of the feed composition on the energy requirements of quaternary thermally coupled distillation sequences has been presented. A stochastic optimization methodology has been used for the design of the distillation sequences. It has been found that, for a mixture close to ideality, the quaternary thermally coupled distillation sequences show energy savings only for mixtures with a high feed composition of the light components. The conventional indirect sequence is preferred when the heavy component has a high composition, because it is separated in the first column. For an equimolar mixture, the conventional direct sequence is the best alternative, but the difference with respect to the thermally coupled direct sequence is small. When the third component has the highest feed composition, the conventional direct sequence is to be preferred. Nevertheless, further studies are required to analyse a wider range of compositions.

References
M. Errico, B.-G. Rong, G. Tola, I. Turunen, 2008, Process intensification for the retrofit of a multicomponent distillation plant - an industrial case study, Ind. Eng. Chem. Res., 47, 1975-1980.
C. Gutiérrez-Antonio, A. Briones-Ramírez, 2015, Multiobjective Stochastic Optimization of Dividing-wall Distillation Columns Using a Surrogate Model Based on Neural Networks, Chem. Biochem. Eng. Q., 29(4), 491-504.
S. Hernández, I. R. Gudiño-Mares, J. C. Cárdenas, J. G. Segovia-Hernández, V. Rico-Ramírez, 2005, Analysis of control properties of thermally coupled distillation sequences for four-component mixtures, Ind. Eng. Chem. Res., 44(2), 391-399.
V. H. Shah, R. Agrawal, 2011, Are all thermal coupling links between multicomponent distillation columns useful from an energy perspective?, Ind. Eng. Chem. Res., 50(3), 1770-1777.
D. W. Tedder, D. F. Rudd, 1978, Parametric studies in industrial distillation: Part I. Design comparisons, AIChE J., 24(2), 303-315.
C. Triantafyllou, R. Smith, 1992, The design and optimization of fully thermally coupled distillation columns, Trans. Inst. Chem. Eng., 70, Part A, 118-132.
J. A. Vázquez-Castillo, J. A. Venegas-Sánchez, J. G. Segovia-Hernández, H. Hernández-Escoto, S. Hernández, C. Gutiérrez-Antonio, A. Briones-Ramírez, 2009, Design and optimization, using genetic algorithms, of intensified distillation systems for a class of quaternary mixtures, Comput. Chem. Eng., 33(11), 1841-1850.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50026-6

Energy integration of a hydrotreating process for the production of biojet fuel

C. Gutiérrez-Antonio, F. I. Gómez-Castro, A. G. Romero-Izquierdo, S. Hernández*

Facultad de Química, Universidad Autónoma de Querétaro, Av. Cerro de las Campanas s/n, Col. Las Campanas, 76010, Querétaro, Querétaro, México; Departamento de Ingeniería Química, División de Ciencias Naturales y Exactas, Campus Guanajuato, Universidad de Guanajuato, Noria Alta S/N, Col. Noria Alta, 38010, Guanajuato, Guanajuato, México; *[email protected]

Abstract

Biojet fuel has been identified as the most promising alternative to reduce CO2 emissions in the aviation sector, which contributes about 2 % of the total emissions of carbon dioxide. There are several processes available for the production of biojet fuel; nevertheless, the hydrotreating process is one of the most promising, since it can be adapted to the existing refinery infrastructure and is also certified by ASTM. Biojet fuel, nevertheless, is still not economically competitive with conventional, petroleum-based jet fuel. Therefore, in this work we propose the energy integration of the hydrotreating process, considering Jatropha curcas as the renewable raw material. We present a kinetic model for the reactive section in order to estimate the energy released by the process, which is used to partially satisfy the energy requirements of the purification section. Finally, the effect of the energy integration on the price of biojet fuel is analyzed. Results show that the cost per liter of biojet fuel is very close to the cost of the fossil one when the energy generated in the process is used. Thus, through proper energy integration of the production process, the cost of biojet fuel can become competitive with that of fossil jet fuel.

Keywords: Biojet fuel, hydrotreating process, energy integration, process intensification.

1. Introduction

In the transport sector, the aviation industry is responsible for 2 % of the total annual CO2 emissions, approximately 623 million tons in 2009 (IATA, 2009). This percentage is small; however, estimations indicate that in 2020 the global emissions of international aviation will be 700 % greater than in 2005, even if the efficiency of the engines increases by 2 % per year (European Commission, 2015). Therefore, IATA and the International Civil Aviation Organization have established a four-pillar strategy to fight the climate change problem (ASA, 2014), which includes technological and operational improvements, market-based measures, and alternative fuels. Among these alternatives, IATA has identified alternative fuels as the most promising way to significantly reduce CO2 emissions (IATA, 2009). Aviation fuel produced from renewable raw materials, known as biojet fuel or synthetic paraffinic kerosene (SPK), is very similar to fossil jet fuel in energy density and physical properties (Bertelli, 2010). The SPK can be


produced from a) biomass, which is gasified and later fed to a Fischer-Tropsch process; or b) vegetable oils and animal fats, which are chemically modified with hydrogen. In both cases, the effluent of the reactive zone must be purified. Among the processes that use vegetable oils and animal fats, the hydrotreating process is one of the most promising (McCall et al., 2009), since it can be adapted to the existing refinery infrastructure and is also certified by the ASTM standard (ASTM D7566, 2015). Light components, naphthas, biojet fuel and green diesel are obtained from the hydrotreating process, which consists of two consecutive reactors and conventional distillation. Nevertheless, the production process must be optimized in order to reduce the energy consumption and, in this way, make the price of biojet fuel competitive in comparison with fossil jet fuel. One alternative to reduce the energy consumption of the process is energy integration, which is proposed in this work. Jatropha curcas is selected as the renewable raw material, since it has a high productive potential in Mexico (INIFAP, 2008).

2. Modelling of the hydrotreating process

The hydrotreating process considers the transformation of vegetable oil through deoxygenation, generating long-chain hydrocarbons in the first reactor; afterwards, these hydrocarbons are isomerized and hydrocracked in a second reactor to generate hydrocarbons in the boiling point range of jet fuel. Later, the hydrocarbon stream is separated to obtain light components, naphthas, biojet fuel and green diesel. The feed stream was 100 kg/h of Jatropha curcas oil, whose main compounds are the triglycerides of oleic acid (42 wt%), linoleic acid (44 wt%), palmitic acid (11 wt%) and stearic acid (3 wt%). The first reactor is modelled with the lumped kinetic model reported by Sharma et al. (2012), Eqs. (1)-(5), where C_tg are triglycerides, C_L are lighter compounds (C5-C8), C_j are middle compounds (C9-C14), C_h are heavy compounds (C15-C18), and C_p are oligomerized compounds (>C18); k1, k2, k3, k4 and k5 are the kinetic constants. In the first reactor, hydrogen is fed in a ratio of 1,500 ml H2/ml oil in order to avoid coke formation (McCall et al., 2009). The operating conditions of the first reactor are 320 °C and 80 bar; thus, the reactants must be conditioned before entering the reactor. In the second reactor, the hydrocracking reactions are represented with the model proposed by Shayegh et al. (2012) for VGO catalytic cracking, which considers the formation of coke; however, this reaction path is not considered here, since the hydrogen/oil ratio utilized avoids coke formation. The lumped kinetic model is given by Eqs. (6)-(8),


where C_VGO are the C19-C21 paraffins, C_gasoline are the C5-C11 paraffins with their isomers, and C_gas are the C1-C4 paraffins with their isomers; k12, k13 and k23 are the kinetic constants. In the second reactor there is no additional feed of hydrogen, since the amount introduced in the first reactor is enough for the whole reactive section. The operating conditions of the second reactor are 480 °C and 80 bar. The isomerization is modelled with the kinetic model of Calemma et al. (2000), Eq. (9), where C_isopar and C_paraffins are the concentrations of isoparaffins and paraffins, respectively; k2 is the kinetic constant. The renewable hydrocarbon stream leaves the reactive section at 480 °C and 80 bar; therefore, we propose a turbine to condition the stream for its feeding to the distillation train, also generating energy. The hydrocarbon stream is then fed to a distillation train where four products are obtained: light gases (C1-C4), naphthas (C5-C7), biojet fuel (C8-C16) and green diesel (C17-C21). We consider both direct and direct-indirect distillation sequences (Figure 1), since information about the specific type of distillation used is not available (McCall et al., 2009). It is worth mentioning that the first distillation column in both sequences is fixed, in order to separate the light gases first and therefore use refrigerant as cooling service only in that column.

3. Energy integration of the hydrotreating process

The hydrotreating processes have been simulated in the process simulator Aspen Plus. From the simulation results, we perform a pinch analysis in order to determine how to use the energy released by the first reactor. The calculations showed that this energy is enough to satisfy the energy requirements of: 1) the conditioning of the reactants; 2) the second reactor (endothermic); and 3) partially, the energy consumption of the distillation train. Figure 2 shows the integrated process with the direct sequence.

Figure 1. Distillation sequences considered: a) direct, b) direct-indirect.


Figure 2. Hydrotreating process with direct sequence and energy integration.

All the heating/cooling requirements that cannot be fulfilled by the process streams are supplied by steam or cooling water. Heat exchangers are then introduced to perform the integration between the streams. Finally, equipment and utility costs are computed for both the non-integrated and the integrated processes.

4. Results

The simulation of the reactive zone shows a global conversion to hydrocarbons of 85 %, while the specific conversion to biojet fuel is around 20 %, which is consistent with the maximum conversion reported (McCall et al., 2009). In the first reactor, the released energy is 46.05 kW, while the second reactor requires 13.03 kW for its operation; it is worth mentioning that the small amount of energy available for integration is due to the small flow of processing oil. Also, the conditioning of the effluent of the reactive section allows generating 3.366 kW. The separation of the renewable hydrocarbons can be achieved in the direct sequence with 20.19 kW, while 24.88 kW are required in the direct-indirect scheme. Therefore, the best hydrotreating process without energy integration is the one with the direct scheme. Once the pinch analysis had been realized and the energy integration between the streams performed, we found interesting results (Table 1). From here, we consider three cases: 1) the hydrotreating process with direct sequence and without energy integration (HD); 2) the hydrotreating process with direct sequence and energy integration (HD-EI); 3) the hydrotreating process with direct-indirect sequence and energy integration (HI-EI). For the three cases, the raw material cost is $4,470,116.9 USD/year. From Table 1 we can observe that in HD-EI the cooling and heating costs decrease by 30 % in comparison with the HD case, thanks to the energy integration. Nevertheless, the investment costs are 7 % greater in HD-EI, due to the additional equipment required to perform the energy transfer between the streams. Also, the electricity costs in the HD-EI scenario are about 37 % greater (cf. Table 1), due to the use of compressors to complete the conditioning of high-pressure steam for the distillation train. On the other hand, the HI-EI scenario allows a reduction of 18 % in the cooling and heating costs, with investment costs similar to HD, and electricity costs just 17 % greater. This balance of costs makes the HI-EI case the best in terms of total annual costs, which include annualized investments. However, the total annual costs of the other scenarios are very similar.
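The reported duties can be combined in a simple bookkeeping script to see why the integration covers the second reactor but only part of the separation duty; all values are in kW and taken from the text above, while the reactant-conditioning duty, which also draws on the reactor heat, is not reported and is therefore left out.

```python
q_released_r1 = 46.05        # heat released by the first (exothermic) reactor
q_required_r2 = 13.03        # heat required by the second (endothermic) reactor
w_turbine = 3.366            # work recovered while conditioning the effluent
q_train_direct = 20.19       # separation duty, direct sequence
q_train_direct_indirect = 24.88  # separation duty, direct-indirect sequence

heat_surplus = q_released_r1 - q_required_r2   # 33.02 kW before conditioning
print(f"heat surplus after reactor 2: {heat_surplus:.2f} kW")
print(f"turbine work available:       {w_turbine:.3f} kW")
# The (unreported) reactant-conditioning duty consumes part of this surplus,
# which is why the distillation duties (20.19 or 24.88 kW) are only
# partially covered by integration, as stated in Section 3.
```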


Table 1. Summary of costs ($USD/year) for the three scenarios.

Case   | Cooling and heating services costs | Investments costs | Electricity cost | Total annual cost
HD     | $45,520.96                         | $234,017.41       | $2,087.39        | $4,751,742.59
HD-EI  | $32,091.69                         | $250,199.31       | $2,870.46        | $4,755,278.29
HI-EI  | $36,949.11                         | $235,632.98       | $2,457.80        | $4,745,156.73

Based on the total annual costs we estimated the price per liter of biojet fuel, as well as the total CO2 emissions for each scenario (Table 2). From Table 2 we can observe the great impact of energy integration on CO2 emissions, even when conventional distillation is used. The lowest environmental impact is observed in the HD-EI case, where energy is used to generate high pressure steam, followed by HI-EI; the emissions of the HD case are an order of magnitude higher. This is because of the higher steam temperatures required in the HD case and its higher steam requirement. On the other hand, the price per liter is almost the same for all cases, since the total annual costs of the three cases are very similar. The lowest price is observed in the HI-EI case, whose value is 1.55 times the price of fossil jet fuel in Mexico (0.5125 USD/liter). It is important to mention that the biojet fuel price and total annual cost estimations consider a very small flow of oil; the specific values can change when an industrial scale oil flow is considered.

Table 2. Biojet fuel price (USD/liter) and total CO2 emissions (kg CO2/year).

Case   | CO2 emissions due to steam utility | CO2 emissions due to electricity | Total CO2 emissions | Price of biojet fuel
HD     | 286,458,714.13                     | 8,692.48                         | 286,467,406.61      | 0.7944
HD-EI  | 39,516,609.68                      | 4,869.7                          | 39,521,479.44       | 0.7950
HI-EI  | 88,912,371.77                      | 6,953.31                         | 88,919,325.08       | 0.7938

5. Conclusions
Energy integration for the hydrotreating process for biojet fuel production has been presented. Conventional and thermally integrated hydrotreating processes have been simulated and compared in terms of total annual costs. When integrating the processes, savings in utilities costs (mainly steam costs) are obtained. Nevertheless, since additional equipment is required, the cost due to the acquisition of equipment increases. However, the environmental impact of the integrated process is considerably smaller than that of the conventional hydrotreating process. In order to decrease the total annual costs, the investment costs must be reduced. In any case, the reduction in energy requirements will lead to a more sustainable process, with slightly lower production costs of biojet fuel.

6. Acknowledgments
The authors acknowledge the financial support provided by CONACYT, through project 239765.

References
Aeropuertos y Servicios Auxiliares, “Estrategia de los cuatro pilares”, biocombustibles.asa.gob.mx. Consulted 19/01/2016.
ASTM D7566-15c, Standard Specification for Aviation Turbine Fuel Containing Synthesized Hydrocarbons, ASTM International, 2015, www.astm.org. Consulted 19/01/2016.
Bertelli, C., “Bioturbosina”, Plan de Vuelo, 2010. biocombustibles.asa.gob.mx. Consulted 19/01/2016.
Calemma, V., Peratello, S., Perego, C., Applied Catalysis, 190(1-2), 207-218, 2000.
European Commission, “Reducing emissions from the aviation sector”, ec.europa.eu. Consulted 19/01/2016.
INIFAP, Jatropha (Jatropha Curcas L.) Bajo Condiciones de Temporal en México, 2008, http://www.agromapas.inifap.gob.mx. Consulted 19/01/2016.
International Air Transport Association, “A global approach to reducing aviation emissions – First stop: carbon neutral growth from 2020”, www.iata.org. Consulted 19/01/2016.
McCall, M.J., Kocal, J.A., Bhattacharyya, A., Kalnes, T.N., Brandvold, T.A., “Production of aviation fuel from renewable feedstocks”, Patent No. US 8,039,682 B2, 2009.
Sharma, R.K., Ananda, M., Rana, B.S., Kumar, R., Farooqui, S.A., Sibi, M.G., Sinha, A.K., Catalysis Today, 198(1), 314-320, 2012.
Shayegh, F., Farshi, A., Dehgan, A., Petroleum Science and Technology, 30(9), 945-957, 2012.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50027-8

Mathematical Optimization of the Production of Fuel Pellets from Residual Biomass M. A. Méndez-Vázqueza, F. I. Gómez-Castroa*, J. M. Ponce-Ortegab, A. H. Serafín-Muñozc, J. E. Santibañez-Aguilarb, M.M. El-Halwagid. a

Departamento de Ingeniería Química, División de Ciencias Naturales y Exactas, Campus Guanajuato, Universidad de Guanajuato, Noria Alta S/N, Col. Noria Alta, 38010, Guanajuato, Guanajuato, México, [email protected] b Departamento de Ingeniería Química, Facultad de Ingeniería Química, Universidad Michoacana de San Nicolás de Hidalgo, Francisco J. Mugica S/N, Morelia, Mich. 58060, México. c Departamento de Ingeniería Civil, División de Ingenierías, Campus Guanajuato, Universidad de Guanajuato, Av. Juárez No. 77, Guanajuato, Gto. 36000, México. d The Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, Texas, 77843-3122, United States

Abstract
One of the main concerns of mankind in recent years is the availability of energy sources, which should have the lowest possible environmental impact. In the state of Guanajuato, Mexico, the clay industry burns each year about 15.000 m3 of fuel oil and residual oils, and 96.000 tons of non-sustainable wood derivatives. In particular, wood derivatives are required in high quantities due to their low density and high humidity, with costs of about 3,4 USD/GJ. Two start-up companies, GEMAR & TODO PELLET, produce solid fuel pellets from residual vegetable material. This raw material is obtained from agribusiness and from the nearby cities. The solid biofuel pellet has high density, low humidity content and a homogeneous shape. The pellet has an energy density of about 12 GJ/m3, which is 2.3 times that of commonly used wood. To make the production of solid biofuels economically feasible, special care must be taken with the location of the production plants and hubs, to avoid excessive distances between the sources of the biomass and the facilities. Furthermore, to have a truly environmentally friendly fuel, the supply chain and the production process must minimize the global environmental impact. Thus, in this work a mathematical programming model is proposed to determine the optimal location of the production plants and the collection centers, together with the optimal distribution logistics. The multi-objective optimization problem involves the minimization of the total annual costs and of the equivalent carbon dioxide emissions for the whole supply chain. The problem is modeled following a general disjunctive programming approach, and then relaxed into an MINLP problem using the convex hull strategy. The problem is then solved using the GAMS software. The results show that, for the different scenarios analyzed, the optimal solution was always the same. The locations of the main plant and the secondary plants have been determined, and it has been found that most of the raw material (40%) must consist of stover and straw, while the rest of the material requirement is satisfied by residues from pruning and from the agave industry. With this solution, reductions of about 18,600 tons of CO2 are expected for the main plant.
Keywords: Pellets, clay industry, supply chain optimization.


1. Introduction
Biomass has historically been used to satisfy the energy needs of humankind. Even in this modern age, biomass continues to play a fundamental role, and it is widely used in the form of wood, vegetable charcoal and biofuels. It has also been extensively considered as a way to reduce greenhouse gas (GHG) emissions when it is consumed at a sustainable rate, in which biomass metabolizes atmospheric CO2 through photosynthesis and the carbon fixation in the ground is higher than the carbon released when it is burned. In this work, the two start-up companies involved manage a set of low-market-value biomass residues, which are susceptible to rotting or intentional burning and include: pruning residues, road and wasteland management residues, bagasse from the beverage industry, stover, and straw, among others. These residues are produced each year in the state of Guanajuato, Mexico, reaching 4.2 million tons for the agribusiness residues and 92 thousand tons for the urban biomass residues (Méndez-Vázquez et al., 2015; White and Hanselka, 2000). These raw materials are transformed into pellets in a pre-commercial pilot plant of 20 tons per month. Then, the pellets are burned in the local clay industry that produces handmade bricks for construction needs. Guanajuato is nowadays the third greatest national brick producer with almost 2.300 brick-ovens, 40% of which are located in three cities: Abasolo, León and Silao (IEE, 2007). The pellets substitute the common unsustainable wood derivatives such as painted pallets, urban trees, young forest trees and solid municipal waste, whose yearly consumption reaches 96 thousand tons, and also substitute the 15 thousand cubic meters of waste motor oil (IEE, 2007). In each brick-oven an electric fan pellet feeder is used, which supplies an excess of air and allows reducing the total GHG emissions per brick produced. Because the biomass is widely dispersed, the aim of this work is to determine the pellet facility locations and logistics, in order to make the production of pellets economically feasible, avoiding excessive distances between the facilities and the sources of the biomass, as well as the client locations. Simultaneously, the approach aims to minimize the environmental impact by minimizing the GHG emissions of the supply chain and the production process. A multi-objective mathematical programming model allows determining the optimal locations together with the optimal distribution logistics.

2. Modelling the multi-objective mathematical scenarios
The problem is modeled following a general disjunctive programming approach, and then relaxed into an MINLP problem using the convex hull strategy, following the methodology of Bowling et al. (2011) and Santibañez-Aguilar et al. (2011); a small illustration of this relaxation is sketched after Figure 1. The problem is then solved using the GAMS software. The superstructure of the scenario is presented in Figure 1.

Figure 1. Superstructure for the mathematical approach. [Figure: biomass sources W (unsustainable wood derivatives for brick-ovens), A (agribusiness residues) and P (pruning and wasteland management residues) are routed through densification technology D, collector hubs H, secondary facilities S and the main facility M to the client markets C.]
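To make the convex hull step named above concrete, here is a generic textbook-style sketch (not the paper's actual constraint set): the decision to install a facility of capacity $x$ with an economy-of-scale cost can be written as the disjunction

$$
\begin{bmatrix} Y \\ x^{L} \le x \le x^{U} \\ c = \alpha x^{n} \end{bmatrix}
\;\vee\;
\begin{bmatrix} \neg Y \\ x = 0,\; c = 0 \end{bmatrix}
$$

whose hull reformulation, with a binary $y$ associated with $Y$, becomes

$$
y\,x^{L} \le x \le y\,x^{U}, \qquad c = y\,\alpha\!\left(\frac{x}{y+\epsilon}\right)^{\!n}, \qquad y \in \{0,1\},
$$

where the small $\epsilon > 0$ is the usual safeguard against division by zero. Setting $y = 0$ forces $x = c = 0$, while $y = 1$ recovers the original cost law; because the scaling exponent satisfies $n < 1$, the cost term remains non-linear, which is why the relaxed problem is an MINLP rather than an MILP.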

Mathematical Optimization of the Production of Fuel Pellets from Residual Biomass

135

The economic and the environmental objective functions are represented by eq. (1) and eq. (4), respectively. The net profit and the GHG CO2-equivalent emissions are the dependent variables, which are functions of the supplier, hub, secondary and main facility, and client allocations. The main economic parameters included were road distances (obtained through Google Maps), local salaries, biomass availability, proximate supplier allocation, real raw material costs, truck capacity, facility capacities, operation costs, technology yield, total process energy demand, biomass physical, thermal and chemical properties, and market demand. The environmental parameters were the burning efficiencies and the unitary CO2 emissions from transportation, forest management, forest land use change from wood production, residual biomass management, controlled burning practices in agriculture, and pellet and traditional fuel burning in the brick-ovens. The model included 833 equations and 1.252 variables.

$NetProfit = Product_{\text{sales}} - RawMat_{\text{cost}} - Transportation_{\text{cost}} - Infrastructure_{\text{cost}} - Operation_{\text{cost}}$  (1)

$CO_{2,\text{base}} = -Absorbed_{\text{wood}} + LUC_{\text{wood}} + Transportation_{\text{wood}} + Use_{\text{wood}} + Disposal_{\text{wood}}$  (2)

$CO_{2,\text{pellets}} = -Absorbed_{\text{wood}} + Transportation_{\text{residues and pellets}} + Process_{\text{densification}} + Use_{\text{pellets}}$  (3)

$CO_{2,\text{net}} = CO_{2,\text{base}} - CO_{2,\text{pellets}}$  (4)

Due to the lack of real pellet production costs obtained from a commercial plant within Mexico, the fixed (capital) and operating costs were estimated with bibliographic data, assuming that the components follow the economy of scale, and then corrected with the available pilot plant information and recent data. The main reason to use an MINLP model for the case of interest is the non-linear behaviour of the economy of scale for the total estimated capital cost of the facilities, $C_C$ ($/y), which shows exponential expressions as follows (Mani et al., 2006):

$C_C = e\,C_{eq}$  (5)

$e = \dfrac{i(1+i)^N}{(1+i)^N - 1}$  (6)

$C_{eq} = \alpha_{eq} P^{\,n_{eq}}$  (7)

$C_{eq,2} = C_{eq,1}\left(\dfrac{C_2}{C_1}\right)^{0.6}$  (8)

$C_T = C_C + C_{OP}$  (9)

$C_P = \dfrac{C_T}{t_{OP}\,G_P}$  (10)

Where e is the capital recovery factor, Ceq is the cost of the equipment ($), i is the interest rate (decimal) and N is the lifetime of the equipment (years). The equipment cost, Ceq, has a general relationship based on many equipment manufacturers and literature data, where αeq is the unit cost of the equipment ($), neq is the scaling factor of the equipment, and P is the characteristic parameter of the equipment (Mani et al., 2006). Following these relationships, the cost of a specific equipment for a capacity not directly available is estimated with eq. (8), where C1 and C2 are the capacities of equipment 1 and 2. The total cost, CT, was calculated with eq. (9), where COP is the operating cost ($/y). The production cost, CP ($/kg), for any product was estimated from eq. (10), where tOP is the total operating hours of the plant per year (h/y) and GP is the production rate (kg/h). The CO2, CO and NO2 emissions were measured in situ with a portable industrial gas analyser (Model E85000 from E-Instruments) and corrected through the mass balances and energy efficiency calculations in the brick-ovens (Méndez-Vázquez et al., 2015).
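To make eqs. (5)-(10) concrete, the following minimal Python sketch chains the capital recovery factor, the equipment scaling law and the production cost. It is illustrative only: every numeric input is a placeholder assumption, not a value from the paper.

```python
# Illustrative implementation of eqs. (5)-(10); all numeric inputs are
# placeholder assumptions, not data from the paper.

def capital_recovery_factor(i: float, N: int) -> float:
    """Eq. (6): annualization factor for interest rate i and lifetime N years."""
    return i * (1 + i) ** N / ((1 + i) ** N - 1)

def equipment_cost(alpha_eq: float, P: float, n_eq: float) -> float:
    """Eq. (7): equipment cost from unit cost, characteristic parameter, scaling factor."""
    return alpha_eq * P ** n_eq

def scaled_equipment_cost(C_eq1: float, C1: float, C2: float, exp: float = 0.6) -> float:
    """Eq. (8): cost of a unit of capacity C2 from a known unit of capacity C1."""
    return C_eq1 * (C2 / C1) ** exp

def production_cost(C_eq: float, C_op: float, i: float, N: int,
                    t_op: float, G_p: float) -> float:
    """Eqs. (5), (9), (10): production cost in $/kg."""
    C_c = capital_recovery_factor(i, N) * C_eq   # eq. (5): annualized capital, $/y
    C_t = C_c + C_op                             # eq. (9): total cost, $/y
    return C_t / (t_op * G_p)                    # eq. (10): $/kg

# Example with made-up numbers: a pellet mill rated at 2,000 kg/h.
C_eq = scaled_equipment_cost(C_eq1=150_000.0, C1=1_000.0, C2=2_000.0)
print(production_cost(C_eq, C_op=80_000.0, i=0.08, N=10, t_op=4_000.0, G_p=2_000.0))
```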


The emissions baseline was established with the actual wood demand and its energy properties. This implies that for each 35 thousand brick units, 3.63 tons of wood with 30% moisture content are consumed, equivalent to 43.7 GJ of thermal energy. This energy requirement is meant to be satisfied with our pellets, which have 17 GJ/ton and 12 GJ/m3 (2.3 times the wood properties), resulting in a net consumption of 2.56 tons for the same amount of bricks (a short worked check follows this paragraph). Simultaneous experimental pellet burning tests were run in order to support the calculations. Every other contribution to emissions, including transportation, land use change, and forest, urban biomass and wasteland management (including prescribed burning practices), was taken from the literature. A set of six possible allocations for the main facility, secondary facilities and hubs was proposed based on their historical relationship with biomass generation and with brick producers. The cities of León, Pénjamo, Abasolo, Salamanca, Irapuato, San Francisco del Rincón and Guanajuato were chosen. To pursue a plausible representation of the logistics resulting from the mathematical modelling, the effect of different allocations for the same supplier within a 20 km radius of its original location with respect to the facilities was taken into account.
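A quick check of the substitution arithmetic above, using only the figures quoted in this section:

```python
# Worked check of the wood-to-pellet substitution per 35,000 bricks,
# using only the figures quoted in this section.
energy_per_batch_GJ = 43.7        # thermal energy of 3.63 t of wood (30% moisture)
pellet_LHV_GJ_per_t = 17.0        # energy content of the pellets

pellets_t = energy_per_batch_GJ / pellet_LHV_GJ_per_t
print(f"pellets needed: {pellets_t:.2f} t")   # ~2.57 t, matching the ~2.56 t quoted
print(f"wet wood energy density: {energy_per_batch_GJ / 3.63:.1f} GJ/t")  # ~12 GJ/t
```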

3. Analysis of the Optimal Model Scenarios
Despite the fact that the supplier allocation was not exact, and that in reality a supplier could be several kilometers away from the assumed point (though within the same city), affecting the transportation costs and emissions, the optimal solution was always the same; it is represented graphically in Fig. 2.

Figure 2. Graphical representation of the optimal design for residual biomass pellets for the brick industry in Guanajuato.


The optimal solution showed that, of the six different allocation options, two of them (Guanajuato and San Francisco del Rincón) should not exist, because their biomass availability or market demand for pellets is too low. Meanwhile, four allocations (León, Pénjamo, Abasolo and Salamanca) were feasible for a secondary facility, either with or without a hub, with capacities from 1.900 to 6.000 tons/year depending on whether they run one or three shifts. Finally, there was just one feasible main facility (Irapuato), whose capacity was up to 12.000 tons per year. The optimized indicators of each facility are shown in Table 1.

Table 1. Economic and technical indicators of the optimal solution.

Optimized indicator         | Secondary facility with hub                                | Main facility
Allocation                  | León, Pénjamo, Abasolo and Salamanca (60 km radius)        | Irapuato (100 km radius)
Raw material                | 20% straw and stover, 40% pruning and 40% bagasse          | 40% straw and stover, 30% pruning and 30% bagasse
Fixed capacity              | 1.800 tons per year                                        | 5.421 tons per year
Capital cost investment     | $36.500 USD (including rent of the building)               | $400.000 USD (including land and building acquisition)
Unitary cost and sale price | $45,8 USD/t and $90,4 USD/t                                | $13,1 USD/t and $63,8 USD/t
Economic indicators         | ROI 19.32%, NPV $8.100 USD, break-even at 92.36% of sales  | ROI 22.8%, NPV $600.600 USD, break-even at 46.7% of sales
Generated employment        | 4 direct and 6 indirect                                    | 12 direct and 36 indirect
CO2eq avoided               | 2.250 t/year                                               | 18.600 t/year

The experimental data from the emissions analysis during the biomass burning in the brick-oven showed that the traditional way, without an electric fan (solid fuel feeder), generates more emissions: 80% more CO and 90% more SO2. Nevertheless, the NOX emissions did not change significantly with or without the fan. Meanwhile, the tests run with pellets vs. wood using the electric fan showed that when pellets are used as the sole fuel, there is a reduction of 27% for CO, 15% for NO, and 87% for SO2 compared with wood. The temperature of the flue gas was about 62 °C, and the temperature of the interior of the brick-oven reached up to 870 °C. These tests are not definitive for the GHG emission reduction potential of the pellet for the brick industry, mainly because the pellet feeding rate was not homogeneous during the burning test, and because the burning technique is still being developed. Finally, due to the lack of a pellet market in Mexico, the pellet price has not been established. Therefore, it was estimated using the actual relationship between the energy consumption in the brick industry and the prices currently being paid. The actual cost being paid by brick producers is $3.4 USD/GJ, which is equivalent to $40.9 USD/m3 and $58.0 USD/ton, while the optimum sales price at the main facility was determined to be $63.8 USD/ton. This pricing difference implies that pellet fuel is 9.2% more expensive than the usual fuel expense. The main reason for this is the high price of the agribusiness residues that might be used as raw material, which can cost up to $32 USD per ton (49% of the sales price), whereas the urban residues are as cheap as $1.6 USD per ton, because only management costs are required, but they are scarcely available at the main plant location.

4. Conclusions
The proposed mathematical model was able to determine the optimal supply chain, in economic and environmental terms, of the residual biomass facilities to produce pellets for the brick industry in the state of Guanajuato, Mexico. This study offers a background that will support economic decisions on short- and middle-term investment. More precise results can be achieved in future work through the integration of Geographical Information Systems (GIS), which can be used to determine more precisely the locations of the biomass suppliers. Further energy demands can also be added to the mathematical model, such as those of ceramics, pottery, tiles and pots, as well as cooking, residential and rural heating during the winter, and water and process heating, although these will require more technological innovation in burner systems. This optimization model can be used for similar energy, biofuel or residual biomass problems.

References
I.M. Bowling, J.M. Ponce-Ortega, M. El-Halwagi, 2011, Facility location and supply chain optimization for a biorefinery, Industrial & Engineering Chemistry Research, 50, 10, 6276-6286.
B. Cárdenas-González, C. Márquez-Estrada, 2011, Determinación de Emisiones de Gases de Efecto Invernadero en Base a Factores de Emisión y Monitoreos de Eficiencia Energética en la Comunidad Ladrillera El Refugio, León, Guanajuato, Programa de Eficiencia Energética en Ladrilleras Artesanales de América Latina (ELLA) contra el Cambio Climático.
C. García-Ubaque, M. Vaca-Bohórquez, G. Talero-Rojas, 2013, Aprovechamiento de Biomasa Peletizada en Sector Ladrillero en Bogotá-Colombia: Análisis Energético y Ambiental, Información Tecnológica, 24, 3, 115-120.
Instituto de Ecología del Estado (IEE), 2007, Inventario de Emisiones Guanajuato 2006, 1ª ed., Guanajuato, México.
S. Mani, S. Sokhansanj, X. Bi, A. Turhollow, 2006, Economics of Producing Fuel Pellets from Biomass, Applied Engineering in Agriculture, 22, 3, 421-426.
M.A. Méndez-Vázquez, A.H. Serafín-Muñoz, A.G. Romero-Izquierdo, F.I. Gómez-Castro, B. Cabrera-Alvarado, 2015, Modelo de aprovechamiento sustentable de biomasa residual para la producción de pellet como combustible renovable y eficiente para enverdecer la industria de arcillas, GEMAR-UGTO, Project FINNOVATEG from Consejo de Ciencia y Tecnología del Estado de Guanajuato (CONCYTEG).
J.E. Santibañez-Aguilar, J.B. González-Campos, J.M. Ponce-Ortega, M. Serna-González, M.M. El-Halwagi, 2011, Optimal planning of a biomass conversion system considering economic and environmental aspects, Industrial & Engineering Chemistry Research, 50, 14, 8558-8570.
Sistema de Información Agroalimentaria y Pecuaria Nacional, http://www.siap.gob.mx/cierre-de-la-produccion-agricola-por-estado/. Last consulted in November 2015.
L. White, C. Hanselka, 2000, Quemas prescritas de pastizales en Texas, Extensión Cooperativa de Texas, E-37S.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50028-X

On Optimal Assignment of Cold Standby Components for Multi-Channel Safety Interlocks
Yeremia Yehuda Lepar, Yu-Chih Wang, Chuei-Tin Chang*
Department of Chemical Engineering, National Cheng Kung University, 1 University Road, Tainan 70101, Taiwan
[email protected]

Abstract
Since hardware failures are inevitable but random events, the reliabilities of safety interlocks can be closely tied to their structures and maintenance programs. Our primary objectives in this work are to devise corrective and preventive maintenance policies under the support of cold standbys and to derive the general formulations for computing the availabilities of alarm and shutdown subsystems based on their Markov diagrams. An interlock superstructure is also developed to facilitate the search for the best interlock designs and maintenance plans with the genetic algorithm (GA). One example is provided to demonstrate the feasibility and effectiveness of the proposed optimization approach.
Keywords: Interlock; Maintenance policy; Cold standby; Genetic algorithm.

1. Introduction
In order to mitigate the detrimental outcomes of accidents in a chemical plant, it is a common practice to install safety interlocks on the inherently hazardous units. Since hardware failures are random events, it is imperative to keep these protective mechanisms operable at all times. To this end, each interlock should be properly configured and maintained so as to achieve a high level of reliability (or availability). Since any equipment may fail either safely (FS) or dangerously (FD), the corresponding probabilities can usually be lowered by introducing redundancy at the component level. The unsafe process state can be detected according to one or more critical variables (each of which is referred to as a “channel” in this work), while each of them may be monitored independently with redundant sensors. Similarly, there may be more than one way to end a continuous operation and bring the system to a safe state and, for the purpose of ensuring successful shutdown, it is also necessary to install redundant actuators. Ideally, the spare-supported corrective maintenance policy can be utilized to upkeep the monitoring device(s) in the alarm subsystem of an interlock. For each channel, there may be multiple components installed online and also spares stored offline. A failed online sensor is supposed to be immediately replaced with a functional spare and then quickly repaired offline. On the other hand, since the actuators are not used during normal operations, any FD failure can only be discovered upon demand. To suppress these unobservable malfunctions, a comprehensive preventive maintenance strategy, i.e., periodic inspections and replacements with cold standbys, is adopted in this work. Although several mathematical models have already been proposed for the systematic generation of interlock designs (Liao and Chang, 2010), their implementations are still quite laborious and their solutions incomplete. Specifically, not only must a complex model be reformulated for each new application, it must also be coded on a case-by-case basis with


commercial software (such as GAMS). To ensure conciseness, portability and maintainability of the computer program for any scenario in practice, it is desirable to develop a generic and modularized MATLAB (or C) code according to an evolutionary algorithm. It should also be noted that the design options concerning cold standbys were essentially ignored in the existing models. By constructing a comprehensive superstructure in which all possible interlock structures are embedded, these options can be easily included in the proposed code.

2. Superstructure
Let us consider the interlock superstructure presented in Fig. 1. The binary variable ξ is adopted here to denote the process condition, i.e., its value is 1 if the process is in an unsafe state and 0 otherwise. Since this state may be revealed in several different process variables, e.g., the temperature, pressure, flow rate, etc., their values are characterized with a binary vector x; that is, an element of this vector equals 1 if the corresponding condition exceeds the allowed limit and 0 otherwise. For the purpose of monitoring a critical variable online, it is assumed that one or more identical sensors can be configured in a measurement channel. All channel outputs also form a binary vector y, and each element is usually determined with a voting gate to reflect whether or not (i.e., 1 or 0) the unsafe condition can be confirmed. Finally, an alarm logic, which can be expressed as a binary function f(y), is applied according to these channel outputs. A function value of 1 denotes the decision to set off an alarm, while 0 means otherwise. In response to a positive alarm decision, one or more shutdown operations may have to be performed, and each is represented with a binary variable uj. Its value (1 or 0) is used to signify whether or not the corresponding operation channel is activated. Every shutdown operation may be facilitated with more than one identical actuator, and the outcome is represented with yet another binary variable vj. Finally, a second binary function h(v) can be defined to characterize the final results of all actions implemented by the shutdown units. A function value of 1 corresponds to the scenario of safe shutdown, while 0 denotes otherwise. A compact code sketch of the alarm-side signal flow is given below.
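The alarm-side signal flow just described condenses into a few lines; the sketch below is an illustration only, with a 2-out-of-3 voting gate and an OR alarm logic chosen arbitrarily, evaluating the channel outputs y from redundant sensor readings and the alarm decision f(y):

```python
from typing import List

def k_out_of_n(readings: List[int], k: int) -> int:
    """Voting gate: the channel output is 1 if at least k of the n redundant
    sensors report the unsafe condition."""
    return int(sum(readings) >= k)

def alarm_logic_or(y: List[int]) -> int:
    """Alarm logic f(y): here simply OR over all channel outputs."""
    return int(any(y))

# Three measurement channels with redundant sensor readings (made-up values):
channels = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]
y = [k_out_of_n(r, k=2) for r in channels]   # 2oo3 voting in each channel
print(y, alarm_logic_or(y))                  # -> [1, 0, 1] 1  (alarm is set off)
```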

3. Maintenance policies
The corrective and preventive maintenance programs are introduced to raise the availabilities of the alarm and shutdown subsystems, respectively. Their general frameworks are outlined below.

3.1. Corrective maintenance program
It is assumed that the online sensors in the alarm subsystem may fail and that these failures are observable. A comprehensive spare-supported corrective maintenance program can be tailored for these revealed failures in a channel (Liang and Chang, 2008). To be specific, let us assume that there are m sensors purchased for a particular channel, and n of them are placed online while the rest are stored offline as spares. If an online sensor fails and at least one spare is functional, then the former is replaced with the latter immediately. The failed sensor is taken offline and then placed in a queue for repair. The offline repair process is in effect only when all online sensor(s) are working, while the repair of online sensor(s) can take place only if none of the offline spares are working. It is also assumed that the failed sensors are repaired one at a time in sequence.


Figure 1. Superstructure.

The corresponding Markov diagram can be found in Fig. 2. Notice that every node in this diagram represents a distinct channel state. There are three types of transition rates between states, i.e., the failure rate (λ), the repair rate (μ), and the replacement rate (ε), and all are constant model parameters. By assuming that the steady state is reached quickly, the asymptotic state probabilities can be calculated according to the following equations:

$P_{j+2} = \frac{j\lambda}{\mu} P_1; \quad j = 1, 2, \ldots, n$  (1)

$P_{(j+1)[m-(i-2)]+2} = \frac{\mu + j\lambda}{\mu} P_{(j+1)[m-(i-1)]+1} - \frac{\varepsilon}{\mu} P_{(j+1)(m-i)+2}; \quad j = 1, 2, \ldots, n; \; i = j+2, j+3, \ldots, m$  (2)

$P_{(j+1)(m-j)+2} = \frac{\mu + \lambda}{\mu} P_{(j+1)(m-j)+1} - \frac{\varepsilon}{\mu} P_{(j+1)[m-(j+1)]+2}; \quad j = 1, 2, \ldots, n$  (3)

$P_{i+1} = P_1 \lambda^{i} \prod_{k=1}^{i} \frac{j+1-k}{\varepsilon + (j-k)\lambda}; \quad j = 1, 2, \ldots, n; \; i = 1, 2, \ldots, j-1$  (4)

$P_{(j+1)[m-(i-1)]+j} = \frac{2\lambda}{\varepsilon + \lambda} P_{(j+1)[m-(i-1)]+j-1} - \frac{\varepsilon}{\varepsilon + \lambda} P_{(j+1)(m-i)+j+1}; \quad j = 1, 2, \ldots, n; \; i = j+2, j+3, \ldots, m$  (5)

$P_{(j+1)(m-j)+j+1} = \frac{2\lambda}{\mu} P_{(j+1)(m-j)+j} - \frac{\mu + \lambda}{\mu} P_{(j+1)(m-j)+j-1} - \frac{\varepsilon}{\mu} P_{(j+1)[m-(j+1)]+j+1}; \quad j = 1, 2, \ldots, n$  (6)

$P_{(j+1)(m-i)+n+1} = \frac{\lambda}{\varepsilon} P_{(j+1)(m-i)+j}; \quad j = 1, 2, \ldots, n; \; i = j+1, j+2, \ldots, m$  (7)

Figure 2. Markov diagram of the spare-supported corrective maintenance policy.

Note that the sum of all state probabilities equals unity. If a k-out-of-n voting gate is incorporated in the measurement channel, the average availability of this channel can be determined by summing the state probabilities associated with the nodes in the first n+1−k rows of the Markov diagram in Fig. 2.

3.2. Preventive maintenance program
As mentioned before, the actuator malfunctions can be regarded as hidden failures. Thus, the desired availability of a shutdown channel can usually be achieved by introducing a preventive maintenance program with cold standbys. Let us again assume that there are m actuators purchased for a particular shutdown operation (or channel), and n of them are placed online while the rest are stored offline for use as cold standbys. After plant start-up, the online units are inspected regularly at constant intervals. The failed ones identified at each time interval should be immediately replaced with the cold standbys, and all broken units are repaired offline. The corresponding state transition diagram between two consecutive inspections is presented in Fig. 3. Since the state probabilities under the above maintenance scheme are renewed after each inspection/replacement, they must be described with a dynamic model:

$\dot{P}_1 = \mu P_{n+2} - n\lambda P_1$  (8)

$\dot{P}_{(m-n)(n+1)+1} = -(\mu + n\lambda) P_{(m-n)(n+1)+1}$  (9)

$\dot{P}_{(m-n+1)(n+1)} = \lambda P_{(m-n+1)(n+1)-1} - \mu P_{(m-n+1)(n+1)}$  (10)

$\dot{P}_{j+1} = (n+1-j)\lambda P_j - (n-j)\lambda P_{j+1} + \mu P_{j+n+2}; \quad j = 1, 2, \ldots, (n-1)$  (11)

$\dot{P}_{(m-n)(n+1)+(j+1)} = (n+1-j)\lambda P_{(m-n)(n+1)+j} - [\mu + (n-j)\lambda] P_{(m-n)(n+1)+(j+1)}; \quad j = 1, 2, \ldots, (n-1)$  (12)

$\dot{P}_{j(n+1)+1} = -(\mu + n\lambda) P_{j(n+1)+1} + \mu P_{(j+1)(n+1)+1}; \quad j = 1, 2, \ldots, (m-n-1)$  (13)

$\dot{P}_{(j+1)(n+1)} = \lambda P_{(j+1)(n+1)-1} - \mu P_{(j+1)(n+1)} + \mu P_{(j+2)(n+1)}; \quad j = 1, 2, \ldots, (m-n-1)$  (14)

$\dot{P}_{n+2+i+(n+1)(j-1)} = -[\mu + (n-i)\lambda] P_{n+2+i+(n+1)(j-1)} + (n-i+1)\lambda P_{n+1+i+(n+1)(j-1)} + \mu P_{n+2+i+(n+1)j}; \quad i = 1, 2, \ldots, (n-1); \; j = 1, 2, \ldots, (m-n-1)$  (15)

Figure 3. Markov diagram of the preventive maintenance policy aided by cold standbys.

Since the sum of all state probabilities at any time also equals unity, the probability of state n+1 can be calculated by subtracting the other probabilities from 1. Under the assumption that at least k online units are required to execute the designated shutdown operation, the average availability can be determined by numerically computing the time-averaged sum of the state probabilities associated with the nodes in the first n+1-k rows of Fig. 3.
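The asymptotic availability computation of Section 3.1 reduces to linear algebra once the Markov diagram is written as a generator matrix. The sketch below uses a deliberately tiny, illustrative 3-state chain with made-up rates (not the paper's indexed state numbering): it solves πQ = 0 with Σπ = 1 and sums the probabilities of the functioning states.

```python
import numpy as np

# Toy 3-state channel: 0 = fully up, 1 = degraded (spare in service),
# 2 = down. The rates are illustrative stand-ins for the lambda/mu/epsilon
# parameters of the text, not values from the paper.
lam, mu, eps = 0.01, 0.5, 2.0          # failure, repair, replacement (1/h)

Q = np.array([
    [-lam,           lam,  0.0],       # up -> degraded on a failure
    [ eps, -(eps + lam),   lam],       # replacement restores, or a second failure
    [ 0.0,            mu,  -mu],       # repair returns the channel to degraded
])

# Steady state: solve pi Q = 0 together with sum(pi) = 1
# (one redundant balance equation is replaced by the normalization row).
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0] + pi[1]           # channel works unless fully down
print(pi, availability)
```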

4. Encoding scheme
A systematic encoding scheme has been developed in this work to facilitate the automatic implementation of the genetic algorithm. For illustration clarity, let us consider a simple case by setting M=3 and K=2 in the superstructure (see Fig. 1). Since the maximum numbers of online and spare units in every measurement channel are both selected to be 7 in this case, each number is represented with 3 binary digits. Since there are 3 channels, the corresponding alarm logic can be expressed with 6 (i.e., 2^3 − 2) digits. Note that the shutdown configuration is encoded in the same way. The online units of the two shutdown channels are expressed with a total of 6 digits, and the standbys with another 6 digits. Finally, the inspection interval of each channel is given in 4 digits. Based on the above specifications, a 53-digit binary string (chromosome) can be constructed to characterize the interlock. To illustrate the encoding scheme more precisely, let us consider the arbitrarily selected chromosome given below:
001010101|001010100|001001010|111111|001001|101001|00110101
The structural features of the interlock can be extracted section-by-section sequentially from this chromosome, i.e., number of online sensors: 1, 2, 5; number of spare sensors: 1, 2, 4; voting gates: 1oo1, 1oo2, 2oo5; alarm logic: OR; number of online actuators: 1, 1; number of standby actuators: 5, 1; inspection intervals: 3, 5 (a decoding sketch follows). Finally, notice that the fitness measure of the GA is the corresponding expected life-cycle expenditure, which can be computed on the basis of existing mathematical models (Liao and Chang, 2010).
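The decoding step can be made concrete with a short script. The field layout below is inferred from the worked example above, with the third 9-digit field read as the per-channel voting thresholds:

```python
def decode(chromosome: str) -> dict:
    """Decode the 53-digit interlock chromosome described above.
    Field widths follow the worked example: 3x9 digits for the measurement
    channels (online, spares, voting k), 6 for the alarm logic, 6+6 for the
    shutdown channels, and 2x4 for the inspection intervals."""
    bits = chromosome.replace("|", "")
    assert len(bits) == 53
    triplet = lambda s: [int(s[i:i + 3], 2) for i in range(0, len(s), 3)]
    return {
        "online_sensors":       triplet(bits[0:9]),
        "spare_sensors":        triplet(bits[9:18]),
        "voting_k":             triplet(bits[18:27]),
        "alarm_logic":          bits[27:33],
        "online_actuators":     triplet(bits[33:39]),
        "standby_actuators":    triplet(bits[39:45]),
        "inspection_intervals": [int(bits[45:49], 2), int(bits[49:53], 2)],
    }

print(decode("001010101|001010100|001001010|111111|001001|101001|00110101"))
# -> online sensors [1, 2, 5], spares [1, 2, 4], voting k [1, 1, 2],
#    alarm logic '111111' (OR), actuators [1, 1] with standbys [5, 1],
#    inspection intervals [3, 5], matching the decoding given in the text.
```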

5. Case studies
Let us consider the refrigeration unit shown in Fig. 4 (Liptak, 1987). An interlock in this system stops the compressor if any of the following six conditions occurs while the compressor is running: (1) refrigerated water flow is low (FSL-3); (2) compressor discharge pressure is high (PSH-4); (3) evaporator temperature has dropped near the freezing point (TSL-7); (4) refrigerated water temperature is dangerously low (TSL-6);


(5) temperature of motor bearing or winding is high (TSH-5); (6) lubricating oil pressure is low (not shown in Fig. 4). The proposed design and maintenance strategies were applied with assumed parameters. Due to space limitation, these parameters are not presented here. Extensive optimization runs were performed to generate the optimal design specifications and maintenance programs under different budget constraints. By imposing the upper limits on the purchase and maintenance costs to be 10000 and 12000 respectively, the search converged to a total life-cycle expenditure of 26,513.8, which includes the total expected life-cycle costs of sensors and actuators (that is, 4178.7 and 5527.9 respectively), and also the total expected loss due to FS and FD failures (16802.2). The optimal interlock design can be extracted from the corresponding chromosome. Specifically, the voting gates and the numbers of spares in the measurement channels are: (1) 4oo4/2, (2) 7oo7/2, (3) 1oo1/4, (4) 1oo1/2, (5) 1oo1/6, and (6) 2oo2/2; the online actuators and the numbers of standbys in the shutdown channels are: (1) 2/4 and (2) 2/7; the inspection intervals of the shutdown channels are 7 and 8 months respectively.

6. Conclusions
As mentioned before, the aims of this study are to develop an improved mathematical model in which all viable design options of the interlocks (including those for spares and cold standbys) can be incorporated and, also, to construct a generic and modularized code so as to generate the design specifications efficiently for a wide variety of applications. From the results of extensive case studies, it can be observed that the proposed enhancements are quite effective and, furthermore, the GA-based MATLAB code is significantly more transparent, portable and maintainable.

References
K.H. Liang, C.T. Chang, 2008, A Simultaneous Optimization Approach to Generate Design Specifications and Maintenance Policies for the Multi-Layer Protective Systems in Chemical Processes, Ind. Eng. Chem. Res., 47, 5543-5555.
Y.C. Liao, C.T. Chang, 2010, Design and Maintenance of Multichannel Protective Systems, Ind. Eng. Chem. Res., 49, 11421-11433.
B.G. Liptak, 1987, Optimization of Unit Operations, pp. 156-161, Chilton Book Co., Radnor, PA.

Figure 4. Piping and instrumentation diagram (P&ID) of a typical refrigeration unit.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50029-1

Optimising biomass-based energy supply chains for sustainable mobility Federico d’Amore and Fabrizio Bezzo* CAPE-Lab – Computer-Aided Process Engineering Laboratory, Department of Industrial Engineering, University of Padova, via Marzolo 9, 35131 Padova, Italy. *Corresponding author: [email protected]

Abstract
The identification of alternative and sustainable energy sources has been one of the fundamental research goals of the last two decades, and electric cars or biofuel-fed vehicles may contribute to tackling this formidable issue. According to this perspective, a multi-echelon supply chain is here investigated, considering biomass cultivation, transport, conversion into bioethanol or bioelectricity, distribution and final usage in alternative bifuel (ethanol and petrol) and electric vehicles. Multiperiod and spatially explicit features are introduced in a Mixed Integer Linear Programming modelling framework where economic and environmental objectives are simultaneously taken into account while assessing multiple technologies. Results will show the efficacy of the methodology at providing stakeholders with a quantitative tool to optimise the economic and environmental performance of different supply chain configurations.
Keywords: alternative fuel vehicle, bioethanol and bioelectricity supply chain, first and second generation, indirect land use change, multi-objective optimisation.

1. Introduction
The global energy consumption by transport has grown by 2% per year since 2000 and accounted for 28% of the overall energy consumption in 2012 (IEA, 2015). Considering that road transport relies almost totally on oil-derived fuels, diminishing the dependency of mobility on fossil fuels may represent not only a strategic decision, but also an environmental necessity. One possibility to reach that goal is the establishment of the production of biofuels and bioelectricity for alternative fuel vehicles (AFVs). Many Process Systems Engineering (PSE) approaches focusing on Supply Chain (SC) optimisation through mathematical programming have been recently proposed (You et al., 2014). Contributions have dealt either with the maximisation of the economic performance (Dunnett et al., 2008) or with the minimisation of the environmental impact (You et al., 2011). Other contributions have mainly focused on electric vehicles (EVs), demonstrating the potential benefits in terms of GHG emissions reduction, although the effective impact is shown to be highly dependent on the technology used for electricity production (Tessum et al., 2014). In general, the available studies have focused on the optimisation of upstream SCs for ethanol or electricity production, or have analysed the environmental effects of the technologies. However, to our knowledge no study has been presented where both the production SC and vehicle utilisation are simultaneously taken into account and optimised for a strategic assessment of biomass exploitation. This work aims at bridging this gap by introducing a modelling framework where the whole production SC for ethanol and/or


electricity and the final customer needs are optimised according to both economic and environmental objectives. Both corn grain and stover will be considered as biomass choices and several technological options will be taken into account to produce either ethanol or electricity. Site location and scale, logistic infrastructure definition and end user demand evolution for AFVs (bifuel or EVs) will be simultaneously incorporated within the optimisation model according to the spatially explicit Northern Italy framework already presented by Giarola et al. (2011). The economic performance of the entire network will be assessed in terms of the SC Net Present Value (NPV) and of the end user potential savings in purchasing and driving an AFV instead of a traditional one. The environmental performance of the system will be evaluated in terms of GHG emissions, by considering the impact of each single life cycle stage.

2. Problem Description and Mathematical Features
This paper deals with the strategic design and planning of an industrial SC for the production of bioethanol and bioelectricity in Northern Italy over a 15-year time horizon. The design process is conceived as a multi-objective optimisation problem aiming at: (i) the maximisation of the financial performance of the business (in terms of global NPV), and (ii) the minimisation of the impact on global warming (in terms of overall GHG emissions). The entire network can be divided into two main substructures: (i) the upstream network, dealing with biomass growth, biomass pre-treatment and transport to the conversion facilities, and (ii) the downstream network, dealing with product production, distribution and final usage by the end user. This study integrates the multi-objective MILP modelling framework proposed by Giarola et al. (2011), representing the dynamic evolution of a bioethanol SC localised in North Italy, with biopower production and the implementation of end user-related stages (Figure 1).

Figure 1. Global SC network.
On the one hand, we retained the mathematical formulation presented by Giarola et al. (2011) for the general definition of the objective functions, the bioethanol SC economics, cost linearization, logical constraints, mass balances and environmental burdens. On the

Optimising biomass-based energy supply chains for sustainable mobility

147

other hand, the following new features were implemented: biopower SC economics, end-user AFV-related economics, and the environmental issues related to the biopower SC and to AFV utilisation. The first objective function is the maximisation of the NPV [€] of the entire business, which is here expressed as the minimisation of its opposite value:

objeco = − NPV

(1)

One important difference with respect to Giarola et al. (2011) is that here the NPV is calculated by summing the industrial SC profit (NPVchain [€]) and the end user savings or costs (NPVcar [€]) in driving EVs instead of traditional vehicles: NPV = NPVchain + NPVcar

(2)

In other words, we separate the economics of the production SC from the economic interest of the final customer. NPVchain of Eq.(2) is calculated as in Giarola et al. (2011), while the impact on the final customer (NPVcar) is represented as a cost difference with respect to a traditional car. Note that NPVcar represents an economic advantage or burden when moving from traditional cars to EVs and can be interpreted as a market-assessing metric to forecast AFV penetration. It is calculated by summing up the potential savings in driving EVs instead of traditional ones (RISP [€]) minus the essential extra costs incurred to buy EVs (exCO [€]):

NPVcar = RISP − exCO

(3)

where RISP represents the global potential savings by the end user in driving EVs and is calculated by multiplying the global average distance covered by EVs (powerKMt [km/time period]) by the differential travelling cost with respect to a traditional vehicle (ΔKMcost [€/km]):

$RISP = \sum_{t} powerKM_t \cdot \Delta KMcost \cdot CFdfCAR_t$

(4)

where CFdfCARt represents the discount factor for each time period t. The term exCO of Eq.(3) represents the additional investment incurred to buy an EV with respect to a traditional one, discounted through the same CFdfCARt factors utilized in Eq.(4):

$exCO = \sum_{t} newCARS_t \cdot (charg + inc \cdot \Theta_t) \cdot CFdfCAR_t$

(5)

In Eq.(5) newCARSt [new EVs/time period] represents the global amount of new EVs purchased in each time period t, depending on the evolution of the electricity market demand. The constant charg represents the average cost of a domestic electric charger, while the constant inc [€/new EV] evaluates the differential purchasing cost of an EV with respect to a traditional one, which decreases in t through the parameter Θt. The second objective function objenv aims at minimising the total GHG impact which results from the operation of the bioethanol and biopower SCs over the 15-year time horizon. Accordingly:

$TGHG = \sum_{t} TI_t$  (6)

Details about emissions related to biomass growth (bg), biomass pre-treatment (bpt) and bioethanol production (fp) can be found in Giarola et al. (2011), while emissions


resulting from the bioelectricity generation (epow) are estimated by considering the emission factors for power production fppi,k [kg of CO2-eq/time period], which depend on the biomass i and technology k selected. The impact of AFVs is calculated by considering both EV battery production (ebat) and bifuel vehicle-related emissions (ebifuel).
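Eqs. (3)-(5) amount to a discounted cash-flow comparison from the driver's perspective. The minimal sketch below evaluates them for an assumed adoption path; every numeric value is a placeholder, not data from the paper:

```python
# Illustrative evaluation of eqs. (3)-(5); all numeric values below are
# placeholder assumptions, not data from the paper.
T = 15                                                 # planning horizon, years
i0 = 0.05                                              # yearly interest rate
CFdfCAR = [(1 + i0) ** -t for t in range(1, T + 1)]    # discount factors

powerKM = [2.0e8 * (t + 1) for t in range(T)]   # km driven by EVs per period
dKMcost = 0.05                                  # saving per km vs petrol, EUR/km
newCARS = [10_000] * T                          # new EVs per period
charg = 1_000.0                                 # domestic charger cost, EUR/EV
inc = [2_500.0 * 0.8 ** t for t in range(T)]    # decreasing EV price premium (inc*Theta_t)

RISP = sum(km * dKMcost * df for km, df in zip(powerKM, CFdfCAR))            # eq. (4)
exCO = sum(n * (charg + p) * df for n, p, df in zip(newCARS, inc, CFdfCAR))  # eq. (5)
NPVcar = RISP - exCO                                                         # eq. (3)
print(f"NPVcar = {NPVcar / 1e6:.1f} MEUR")
```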

3. Case Study
Northern Italy was discretised according to the grid approach described by Zamboni et al. (2009), consisting of 59 homogeneous squares of equal size (50 km of length). One additional cell (g=60) allows biomass importation from foreign suppliers. Two main Scenarios were formulated: in Scenario A both the bioethanol and the bioelectricity demand variations are set, whereas in Scenario B only the global demand is set, allowing the solver to reach the optimum quota agreement over the 15-year horizon. Three main processing technologies were identified for ethanol production: (i) the Dry Grind Process (DGP, k=1), where corn is converted into ethanol through a biological process; (ii) the Ligno-Cellulosic Ethanol Process (LCEP, k=4), where stover only is converted into ethanol; and (iii) the Integrated Grain-Stover Process (IGSP, k=2, 3), where both corn grain and stover are processed to obtain ethanol. On the other hand, three technological options were considered for electricity generation: (i) biomass direct combustion for a Rankine steam cycle (C+R, k=11); (ii) biomass gasification for a Turbo Gas cycle (G+TG, k=22); and (iii) biomass gasification for an Internal Combustion Engine (G+MCI, k=33).

3.1. Results and discussion
The design variables were optimized by using the CPLEX solver in the GAMS® modelling tool on a 3.40 GHz processor (overall computational time: about 60 hours). As expected, the Pareto set of sub-optimal solutions resulting from the bi-objective problem reveals the conflict between the environmental and economic performances. In Scenario A, the optimal configuration in terms of economic performance (case A1 as reported in Figure 2) entails a marginal NPV of 2.08 €/GJoutput against a global environmental impact of 94.8 kg of CO2-eq/GJoutput, which in fact is equivalent to a GHG increase of about 10% compared to gasoline. The GHG emission factor for gasoline was assumed equal to 85.8 kg of CO2-eq/GJ, according to HGCA (2005). In such a configuration the usage of biomass (at least, the biomass considered in this study) does not lead to any environmental advantage. The environmental impact is almost identical in Scenario B, where the optimal configuration in terms of economic performance (case B1 as reported in Figure 2) entails a marginal NPV of 5.55 €/GJoutput (significantly higher than in Scenario A) and a global environmental impact of 95.2 kg of CO2-eq/GJoutput, equivalent to a GHG increase of about 11% compared to gasoline. In both Scenario A and Scenario B, the system design would involve the establishment of standard DGP biorefineries (k=1) and a significant corn importation from abroad. With regard to the electricity production, G+TG (k=22) and G+MCI (k=33) facilities are established and the feedstock for the electricity conversion (stover) is completely produced in Northern Italy. With regard to NPVchain, both Scenarios perform similarly (440 M€ vs. 407 M€). The good SC economic performance is mainly related to technology choices and biomass supply costs. On the other hand, NPVcar is quite different in the two cases. In Scenario A it is worth 198 M€, while it reaches 1294 M€ in Scenario B. This is due to the larger EV penetration in Scenario B (about 12% by t=5). In fact, that design configuration can produce an NPVcar of about 1.3 G€ in 15 years, against about


200 M€ in Scenario A. Thus, it would seem that EVs would acquire a higher-than-expected market share in an unconstrained condition. However, although the final NPV is largely positive, Scenario B requires that the final users accept a period of negative economic performance, which is needed to pay back the investment for establishing the production facilities. In other words, due to the initially high EV cost, early EV buyers would not get any return from their investment and would simply be instrumental in setting the production SC in motion. This is unlikely to occur unless some incentives are promoted through dedicated policies. In view of the above, Scenario A, despite a lower value of NPVcar, exhibits a payback time of 7 years and probably represents a more sensible design option.

Figure 2. Pareto curves under bi-objective optimisation for A and B Scenarios.

The best configurations in terms of global warming mitigation potential are achieved through the establishment of different conversion technologies for ethanol production. In Scenario A, the optimal configuration in terms of environmental performance (case A2 as reported in Figure 2) entails a global environmental impact of 34.6 kg of CO2-eq/GJoutput, corresponding to a GHG decrease of about 60% compared to gasoline (compliant with long-term EU environmental targets). The environmental optimum (case B2 as reported in Figure 2) is even better in Scenario B, in which the global environmental impact is 16.0 kg of CO2-eq/GJoutput, corresponding to a GHG decrease of about 81% compared to gasoline. In Scenario A NPVchain is worth -1377 M€ against -1799 M€ in Scenario B, whereas in Scenario A the NPVcar is worth 198 M€ against -15 M€ in Scenario B. The negative value of NPVcar in Scenario B is related to the fact that second generation ethanol technologies are preferred and the payback time is not reached within the time horizon considered in this study. Regarding EV competitiveness, the potential savings for the customer (NPVcar) depend first of all on the value of the yearly interest rate i0 and on the value of the differential purchasing cost variation in t (which is fixed through the parameters inc and Θt of Eq.(5)). There is no convenience for the end user in purchasing an EV if: (i) the yearly interest rate i0 rises above 9%, or (ii) the differential purchasing cost is still


of 2500 €/EV by t=5. Nevertheless, those values seem quite pessimistic with respect to those implemented in the formulation according to the literature (interest rate i0 of 5% and differential cost of 625 €/EV by t=5). The economic results were also tested by varying the electricity market price MPpower. As expected, increasing MPpower decreases the convenience for the end user of purchasing an EV instead of a traditional one (due to a decrease in the value of the parameter ΔKMcost), and therefore NPVcar drops. Conversely, high electricity prices produce larger incomes for the power generation facilities and consequently NPVchain grows. On the other hand, in Scenario B the EV penetration proved to be almost identical over a vast range of MPpower prices, even though it was postponed until t=4.

4. Final Remarks
A multi-objective MILP modelling framework for the strategic optimisation of multi-echelon bioethanol and biopower supply chains for the Northern Italy market has been presented and discussed. All simulation studies show that both bifuel and electric vehicles are needed to comply with the market demand. From the economic standpoint of the supply chain infrastructure, the results show that a correct mix of first generation biorefineries for ethanol production and of gasification facilities for electricity generation represents a viable industrial option and permits a feasible penetration of alternative fuel vehicles into the traditional market. Nevertheless, this network configuration does not represent a suitable answer to the EU targets on global warming mitigation, due to the high environmental impact resulting from the first generation technologies. Second generation biorefineries, although still rather expensive, coupled with gasification technologies for power generation, represent the best option for a significant reduction of GHG emissions.

References
A.J. Dunnett, C.S. Adjiman, N. Shah, 2008, A Spatially Explicit Whole-System Model for the Lignocellulosic Bioethanol Supply Chain: an Assessment of Decentralised Processing Potential, Biotechnol. Biofuels, 1, 13.
S. Giarola, A. Zamboni, F. Bezzo, 2011, Spatially Explicit Multi-Objective Optimisation for Design and Planning of Hybrid First and Second Generation Biorefineries, Comput. Chem. Eng., 35, 1782-1797.
HGCA, 2005, Bioethanol Greenhouse Gas Calculator: Users' guide, London, UK.
IEA, 2015, Mobilising Innovation to Accelerate Climate Action, Energy technology perspective, Paris, France.
C.W. Tessum, J.D. Hill, J.D. Marshall, 2014, Life Cycle Air Quality Impacts of Conventional and Alternative Light-Duty Transportation in the United States, Proc. Natl. Acad. Sci., 111, 18490-18495.
F. You, B. Wang, 2011, Life Cycle Optimization of Biomass-to-Liquid Supply Chains with Distributed-Centralized Processing Networks, Ind. Eng. Chem. Res., 50, 10102-10127.
D. Yue, F. You, S.W. Snyder, 2014, Biomass-to-Bioenergy and Biofuel Supply Chain Optimization: Overview, Key Issues and Challenges, Comput. Chem. Eng., 66, 36-56.
A. Zamboni, F. Bezzo, N. Shah, 2009, Supply chain optimization for bioethanol production system in Northern Italy: Environmentally conscious strategic design, In: Computer-Aided Chemical Engineering 27 (R.M. Brito-Alves, C.A. Oller do Nascimento, E.C. Biscaia Jr., Eds.), Elsevier, Amsterdam (The Netherlands), 2037-2042.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50030-8

Estimation of membrane fouling parameters for concentrating lactose using nanofiltration A. Sharmaa, M. Jelemenskýa, R. Paulena,b and M. Fikara a Faculty

of Chemical and Food Technology, Slovak University of Technology in Bratislava, Slovakia b Process Dynamics and Operations Group, Technische Universität Dortmund, Germany {ayush.sharma, martin.jelemensky, miroslav.fikar}@stuba.sk, [email protected]

Abstract
The paper deals with the parameter estimation of a permeate flux model with fouling for the nanofiltration process. We propose a new technique for fouling estimation, with the fouling model being an explicit function of concentration. The objective is to experimentally concentrate lactose in a lactose-salt solution at constant temperature and pressure using cross-flow nanofiltration. The experimental results show a decrease in the permeate flux over time, as the concentration of lactose increases. The limiting flux model is used to model the experimental permeate flux data without fouling. The limiting flux model parameters and the fouling parameters are then estimated via the least-squares method using the experimental flux data.
Keywords: Nanofiltration, Lactose, Parameter estimation, Limiting flux, Membrane fouling

1. Introduction
Nanofiltration is a liquid-phase, membrane-based separation process for removing dissolved solids. It covers a molecular cut-off range from 100 to 1,000 Da. The separation takes place mainly by diffusion of solvent molecules through the mass of the membrane material, driven mainly by a high trans-membrane pressure. Nanofiltration has found application in the removal of chemicals, colorants and total organic carbon (TOC) from water, and in the simultaneous removal of sodium chloride and concentration of organics in the food and pharmaceutical industries (Eriksson, 1988). Nanofiltration enhances the edibility and nutritional value of whey products by partial demineralization, resulting in concentrated lactose solutions that form cheap beverages with pre-biotic properties, as studied in Verasztó et al. (2013). As with any other membrane process, nanofiltration is susceptible to a reduction in the permeate flux as the concentration of molecules increases. One of the simplest models defining this relation is the limiting flux model (Blatt et al., 1970; Cheryan, 1998; Balannec et al., 2005; Tang and Leckie, 2007). The model defines the permeate flux solely as a function of the macro-solute concentration, i.e. the solute with the higher rejection coefficient. The model defines a limiting macro-solute concentration, c_lim, i.e. the maximum concentration beyond which the membrane flux cannot be sustained. Aimar and Field (1992) analyzed the increase in concentration leading to the limiting flux and deduced values of the limiting flux parameters (k and c_lim). The other reason for permeate flux decline, besides increasing concentration, is membrane fouling. Membrane fouling is defined as the reversible or irreversible deposition of solutes on the surface of the membrane or inside its pores. This deposition results in a loss of available membrane area for separation.

Figure 1: Nanofiltration process scheme (feed tank, pump, nanofiltration membrane, retentate recycle and permeate tank; instrumentation: flow (FT), conductivity (CT), temperature (TT) and pressure (PT) transmitters, level sensor (LT), temperature controller (TC) on the coolant, and pressure controller (PC) on the retentate side).

The phenomenon of membrane fouling was introduced and modeled by Hermia (1982), who categorized the fouling behavior into four models depending on the form of deposition of solutes over a membrane. In Charfi et al. (2012), an estimation of the fouling parameters was conducted for micro- and ultrafiltration using experimental data reported in the literature. Recently, Jelemenský et al. (2015) developed a method for the optimal control of a diafiltration process under fouling conditions. The diafiltration process, in which membrane filtration is employed, is used for fractionation of the species in a solution. The developed optimizing control law depends strongly on the parameters of the flux and fouling models. Thus, the parameters must be known in order to run the diafiltration process optimally, which motivates the present study. The aim of this paper is to investigate the limiting flux and fouling behaviour of nanofiltration membranes for solutions containing lactose and salts. We assume that the flux model is composed of two parts. The lower-level part defines the unfouled membrane properties, and we estimate it using the limiting flux model. The upper-level part takes the model from the lower level and enhances it with a fouling mechanism. Separating the model into these two parts makes it more flexible and suitable for further analysis and optimal operation.

2. Process Description
In general, a membrane process consists of a feed tank and a membrane unit. The solution, consisting of a solvent and solutes, is brought from the feed tank to the membrane unit by means of mechanical energy (a pump), as depicted in Figure 1. The membrane is designed to retain the macro-solute and to allow the passage of the micro-solute. The part of the filtered solution rejected by the membrane (retentate) returns to the feed tank. The permeate stream leaves the system at a flowrate q = AJ, where A is the membrane area and J is the permeate flux per unit membrane area.


The nanofiltration experiments were conducted in cross-flow mode and controlled at a constant transmembrane pressure (TMP) defined as

\mathrm{TMP} = \frac{P_\mathrm{feed} + P_\mathrm{retentate}}{2} - P_\mathrm{permeate} = 25\ \mathrm{bar}, \qquad (1)

and the control was attained using a proportional feedback controller (PC) manipulating the retentate valve (the permeate pressure is atmospheric and hence constant). Nanofiltration is generally operated in the pressure range 10–40 bar, hence the nominal value of 25 bar was chosen for these experiments. The temperature of the solution was maintained at a constant value of 25 °C using a heat exchanger and an on-off controller (TC) for the cooling water circulation. Besides these, the plant is equipped with conductivity sensors (CT) on both the permeate and retentate sides. The volume of the solution in the feed tank can be recorded at any time by a level sensor (LT).

2.1. Materials
Lactose monohydrate (M = 360.31 g/mol) and sodium chloride (M = 58.44 g/mol) manufactured by Centralchem (Slovakia) were used as solutes, and reverse osmosis water was used as the solvent to prepare the experimental solution. The plant holds an NFW-1812F nanofiltration membrane manufactured by Synder Filtration, U.S.A., with a cut-off range of 300–500 Da and a membrane area of A = 0.465 m². Lactose was concentrated from 40 g/L up to a concentration factor of 6.25; the volume of the initial solution was 30 L.

2.2. Process model
The mathematical model of the process is given by the material balances of the solutes and the overall material balance:

\frac{dc_i}{dt} = \frac{c_i}{V} A J R_i, \quad c_i(0) = c_{i,0}, \quad i = 1, 2, \qquad (2a)

\frac{dV}{dt} = -A J, \quad V(0) = V_0, \qquad (2b)

where J is the permeate flux, c_i represents the concentration of the i-th solute, and V is the volume of the processed solution. R_i represents the rejection coefficient of the i-th solute, defined as R_i = 1 - c_{p,i}/c_i, where c_{p,i} is the concentration of the i-th component in the permeate. In our case, the solution to be separated consisted of lactose as the macro-solute (of concentration c_1) and sodium chloride as the micro-solute (of concentration c_2). Complete rejection was considered for lactose, i.e. R_1 = 1 (according to the membrane manufacturer it is R_1 = 0.97), and complete passage for sodium chloride, i.e. R_2 = 0. The experiments were entirely run in concentration mode, meaning no inflow of feed or diluant. As the rejection of lactose is complete, it does not leave the system; thus, at any time

c_1(t)\, V(t) = c_{1,0}\, V_0. \qquad (3)

Several experiments with different concentrations of lactose and salt revealed that the flux does not depend on the amount of salt. Therefore, the flux J_0 of the unfouled membrane is formulated as a function of the macro-solute concentration c_1 using the limiting flux model

J_0(c_1) = k \ln\frac{c_\mathrm{lim}}{c_1}, \qquad (4)

where k is the mass transfer coefficient and clim is the limiting concentration of macro-solute. The membrane flux under fouling conditions can be, according to Hermia (1982), categorized into four divisions. This division is on the basis of how solutes deposit in, or over the membrane,


i.e. the cake filtration model (n = 0), the intermediate blocking model (n = 1), the internal/standard blocking model (n = 1.5) and the complete pore blocking model (n = 2). The first three flux models can be described by the following equation:

J = J_0 \left[ 1 + K (2-n) (A J_0)^{2-n}\, t \right]^{1/(n-2)}, \qquad (5)

while the complete pore blocking model can be expressed as:

J = J_0\, e^{-K t}. \qquad (6)
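For illustration, the fouling models of Eqs. (5)–(6) can be evaluated in a few lines of code. The following is a minimal sketch in Python, not part of the original work; the function names and the NumPy dependency are our assumptions:

```python
import numpy as np

def limiting_flux(c1, k, c_lim):
    """Unfouled permeate flux J0 from the limiting flux model, Eq. (4)."""
    return k * np.log(c_lim / c1)

def fouled_flux(J0, K, n, t, A=0.465):
    """Permeate flux under fouling, Eqs. (5)-(6):
    n = 0 cake filtration, n = 1 intermediate blocking,
    n = 1.5 internal/standard blocking, n = 2 complete pore blocking."""
    if n == 2:
        return J0 * np.exp(-K * t)  # Eq. (6)
    # Eq. (5) for the remaining three models
    return J0 * (1.0 + K * (2.0 - n) * (A * J0) ** (2.0 - n) * t) ** (1.0 / (n - 2.0))

# example: flux decline over 2.5 h for the intermediate blocking model
t = np.linspace(0.0, 2.5, 6)
J0 = limiting_flux(c1=40.0, k=0.0076, c_lim=880.89)
print(fouled_flux(J0, K=33.47, n=1, t=t))
```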

3. Parameter Estimation
In this section, the parameters of the limiting flux model and of the four fouling models described above are estimated. Several experiments were performed; one of the experimentally obtained permeate flow rate datasets, recorded with respect to increasing lactose concentration and time as depicted in Figure 2, is used here to perform the estimation. The minimization of the sum of squared differences between the experimental flux data (J_exp) and the estimated flux model (J_est) can be formulated as:

\min_{K,\,k,\,c_\mathrm{lim}} \; \sum_{j=1}^{m} (J_{j,\mathrm{exp}} - J_{j,\mathrm{est}})^2 \qquad (7a)

\text{s.t.} \qquad (7b)

\frac{dc_1}{dt} = \frac{c_1^2}{c_{1,0} V_0} A J, \quad c_1(0) = 40\ \mathrm{g/L}, \qquad (7c)

J_0 = k \ln\frac{c_\mathrm{lim}}{c_1}, \quad J = J(J_0, K, n, t), \quad J_{j,\mathrm{est}} = J(J_0, K, n, t_j), \qquad (7d)

where m is the number of data points and J is the permeate flux defined either by (5) or by (6). Eq. (7c) is derived from (2a) by replacing V(t) using (3). The volume of the processed solution at the beginning of the operation is 0.03 m³. Based on technological considerations, the three estimated parameters (K, k and c_lim) are expected to lie within the intervals K ∈ [0, 1000] (in the units of the respective model), k ∈ [0, 10] m/h, and c_lim ∈ [0, 1500] g/L.

The experimental measurements show that the permeate flow rate decreases with time, because of the gel-polarization layer formed on the membrane surface and due to membrane fouling. Non-linear least-squares estimation was performed to identify the values of the parameters k and c_lim of the limiting flux model (4) and the fouling rate constant K for each of the four fouling models. The linear least-squares method (Foley, 2013) was also used to estimate the limiting flux model parameters k and c_lim assuming no fouling. A non-linear estimation of the parameters (k, c_lim) of the limiting flux model without fouling was also performed for comparison; they were estimated to be k = 0.0066 m/h and c_lim = 880.97 g/L. As seen in Table 1, these values agree closely with the linearly estimated limiting flux model. All the optimization problems were solved in MATLAB using the SQP solver implemented in the function fmincon; the MATLAB function ode45 was used for the numerical solution of the initial value problem (7c)–(7d).

Figure 2 shows the comparison between the experimental data, the limiting flux model, and the four fouling models. The performance of the limiting flux model is the worst, as it does not account for fouling. On the other hand, all four fouling models fit the data reasonably well. The cake filtration model (n = 0) is estimated to be almost linear with respect to time, as seen from the figure, and hence does not fit the experimental data with high precision. The other three fouling models are non-linear in nature and fit the experimental data with satisfactory precision.
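An equivalent of this MATLAB-based estimation can be sketched with SciPy. The snippet below is illustrative only: the synthetic data stand in for the unavailable measurements, and the intermediate blocking model (n = 1) is hard-coded:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

A, c10, V0 = 0.465, 40.0, 0.03   # membrane area [m2], c1(0) [g/L], V0 [m3]

def flux(c1, t, K, k, c_lim):
    """For n = 1, Eq. (5) reduces to J = J0 / (1 + K*A*J0*t)."""
    J0 = k * np.log(c_lim / c1)
    return J0 / (1.0 + K * A * J0 * t)

def predicted_flux(params, t_grid):
    K, k, c_lim = params
    rhs = lambda t, y: [y[0] ** 2 / (c10 * V0) * A * flux(y[0], t, K, k, c_lim)]  # Eq. (7c)
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), [c10], t_eval=t_grid)
    return flux(sol.y[0], t_grid, K, k, c_lim)

# synthetic stand-in for the measured flux data (time in h, flux in m/h)
t_exp = np.linspace(0.0, 2.5, 40)
J_exp = predicted_flux([33.47, 0.0076, 880.89], t_exp)

objective = lambda p: np.sum((J_exp - predicted_flux(p, t_exp)) ** 2)  # Eq. (7a)
fit = minimize(objective, x0=[10.0, 0.01, 500.0], method="SLSQP",
               bounds=[(0.0, 1000.0), (1e-4, 10.0), (40.0, 1500.0)])
print(fit.x)   # estimates of (K, k, c_lim)
```

The lower bound on c_lim is kept above c1(0) here purely for numerical safety of the logarithm; the paper's nominal interval is [0, 1500] g/L.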

Figure 2: Comparison of the four estimated fouling models, the limiting flux model (with no fouling), and the experimental data (permeate flux J × 10⁻² [m/h] versus time [h]).

Table 1: Comparison of the estimated values of K, k and c_lim for the different fouling models and the limiting flux model, along with the value of the least-squares criterion f = \sum_{j=1}^{m} (J_{j,exp} - J_{j,est})^2.

model (n) | K | k × 10⁻² [m/h] | c_lim [g/L] | f × 10⁻⁵ [(m/h)²]
cake filtration (0) | 494.14 [s/m²] | 1.30 | 210.43 | 3.43
intermediate blocking (1) | 33.47 [1/m] | 0.76 | 880.89 | 1.36
internal blocking (1.5) | 2.59 [1/√s] | 0.74 | 880.98 | 1.91
complete blocking (2) | 0.19 [1/s] | 0.72 | 880.97 | 2.55
limiting flux (–) | – | 0.66 | 880.97 | 5.98

This similarity of the models suggests that the fouling behavior could be attributed to nanofiltration being a higher-pressure separation process. It is a well-known phenomenon that fouling in the form of pore blocking increases with increasing pressure for membrane processes operated in cross-flow mode, and higher pressures tend to foul the membrane internally rather than externally on the surface, owing to the higher sweep-off rate of the cross-flow. The cake filtration fouling model, on the other hand, describes fouling on the surface of the membrane by the formation of a layer of solutes, which is prominent in dead-end membrane separation rather than in cross-flow filtration. The other three fouling models account for the blocking of membrane pores by solutes, and hence fit the experimental data more precisely.

Table 1 provides the estimated values of all parameters. The value of the objective function qualifies the intermediate blocking model (Figure 2) as the best fit for the experimental case studied here. The study on nanofiltration of water in Chang et al. (2011) suggested the same model as defining the fouling behavior. Note also how the values of the parameters k and c_lim of the limiting flux equation obtained with the cake filtration fouling model differ from those obtained with the other three models; this, too, points to the inappropriateness of the cake filtration model for this case. On the other hand, the limiting flux


parameters estimated for the other three fouling models are in very close proximity to those of the linearly and non-linearly estimated limiting flux model without fouling.

4. Conclusions
We studied the parameter estimation of membrane flux models with fouling, using experimentally obtained permeate flux data for concentrating lactose by nanofiltration. The estimation was conducted by the non-linear least-squares method. The results of the parameter estimation of the limiting flux model showed that the mass transfer coefficient (k) and the limiting concentration (c_lim) for this experiment were quite high, so lactose could be concentrated to an even higher factor. The estimation of the fouling parameters identified the internal/standard blocking model (n = 1.5), the intermediate blocking model (n = 1) and the complete blocking model (n = 2) as the better fits, with the intermediate blocking model fitting best. The obtained model will be used in the future for the experimental evaluation of the optimal control theory for membrane processes developed in Jelemenský et al. (2015). An interesting direction for further studies would be the design of an experiment that allows better discrimination among the different fouling models.

5. Acknowledgment
The authors gratefully acknowledge the contribution of the Scientific Grant Agency of the Slovak Republic under grant 1/0053/13, the Slovak Research and Development Agency under project APVV-0551-11, and an internal grant of the Slovak University of Technology in Bratislava. This publication is also a partial result of the project University Scientific Park STU in Bratislava, ITMS 26240220084, supported by the Research & Development Operational Programme funded by the ERDF.

References
P. Aimar, R. Field, 1992. Limiting flux in membrane separations: A model based on the viscosity dependency of the mass transfer coefficient. Chemical Engineering Science 47 (3), 579–586.
B. Balannec, M. Vourch, M. Rabiller-Baudry, B. Chaufer, 2005. Comparative study of different nanofiltration and reverse osmosis membranes for dairy effluent treatment by dead-end filtration. Separation and Purification Technology 42 (2), 195–200.
W. Blatt, A. Dravid, A. Michaels, L. Nelsen, 1970. Solute polarization and cake formation in membrane ultrafiltration: Causes, consequences, and control techniques. In: J. Flinn (Ed.), Membrane Science and Technology. Springer US, pp. 47–97.
E.-E. Chang, S.-Y. Yang, C.-P. Huang, C.-H. Liang, P.-C. Chiang, 2011. Assessing the fouling mechanisms of high-pressure nanofiltration membrane using the modified Hermia model and the resistance-in-series model. Separation and Purification Technology 79 (3), 329–336.
A. Charfi, N. B. Amar, J. Harmand, 2012. Analysis of fouling mechanisms in anaerobic membrane bioreactors. Water Research 46 (8), 2637–2650.
M. Cheryan, 1998. Ultrafiltration and Microfiltration Handbook. CRC Press, Florida, USA.
P. Eriksson, 1988. Nanofiltration extends the range of membrane filtration. Environmental Progress 7 (1), 58–62.
G. Foley, 2013. Membrane Filtration. Cambridge University Press.
J. Hermia, 1982. Constant pressure blocking filtration laws: application to power-law non-Newtonian fluids. Trans. IChemE 60 (183).
M. Jelemenský, A. Sharma, R. Paulen, M. Fikar, 2015. Time-optimal operation of diafiltration processes in the presence of fouling. In: K. V. Gernaey, J. K. Huusom, R. Gani (Eds.), 12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering. Elsevier B.V., Copenhagen, Denmark, pp. 1577–1582.
C. Y. Tang, J. O. Leckie, 2007. Membrane independent limiting flux for RO and NF membranes fouled by humic acid. Environmental Science & Technology 41 (13), 4767–4773.
B. Verasztó, A. Sharma, Q. D. Nguyen, G. Vatai, P. Czermak, Z. Kovács, 2013. Membrane filtration technology for processing whey and converting whey-based lactose into galactooligosaccharides. In: J. Krzysztoforski, M. Szwast (Eds.), Conference Proceedings of the 6th Membrane Conference of Visegrad Countries. Polish Membrane Society, Warsaw, Poland, p. E5.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50031-X

Multi-Scale Modeling of Solid Electrolyte Interface Formation in Lithium-Ion Batteries

Fridolin Röder a, Richard D. Braatz b and Ulrike Krewer a,*

a TU Braunschweig, Institute of Energy and Process Systems Engineering, Franz-Liszt-Str. 35, D-38106 Braunschweig, Germany
b Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
*[email protected]

Abstract
This article presents a multi-scale model to simulate and analyze the formation of the Solid Electrolyte Interface (SEI). A macroscopic model of the battery and the SEI is coupled with an atomistic kinetic Monte Carlo model for the surface reactions. The developed multi-scale algorithm allows the numerical simulation of key degradation mechanisms over long time and length scales. As a practical example, a full discharge process is simulated. The results show the data exchange between the stochastic and deterministic models as well as exemplary effects of the reaction mechanisms on layer structure and battery performance. This novel approach enables simulations and detailed studies of complex SEI layer formation processes.
Keywords: Multi-Scale Modeling, Lithium-Ion Battery, Degradation

1. Introduction
Lithium-ion batteries supply electrical energy via an electrochemical redox reaction at the anode and cathode. The anodes are used at electrical potentials outside the electrochemical stability window of the electrolyte. Continuous decomposition of the electrolyte is prevented by a thin layer at the active particle surface. This layer consists of various by-products of electrolyte decomposition reactions and acts as a Solid Electrolyte Interface (SEI). The interphase needs to be a good lithium-ion conductor but insulating for electrons in order to provide good performance and long lifetime of the battery. Research has been carried out on modeling layer growth and its impact on battery performance using macroscopic models (Colclasure et al., 2011; Ploehn et al., 2004), while atomistic models based on Molecular Dynamics and Transition State Theory studies have been used to determine the most important reaction mechanisms (Ganesh et al., 2012; Leung and Budzien, 2010; Wang et al., 2001; Leung, 2013; Agubra and Fergus, 2014). According to Arora et al. (1998), atomistic models are limited in time and length scales, and thus cannot by themselves be used to understand SEI formation and predict its composition. In order to bridge this gap, this article introduces a multi-scale modeling approach using the kinetic Monte Carlo (kMC) method in combination with a macroscopic model, which provides new insights into the layer formation mechanism during a long time-scale discharge process.

2. Mathematical Model
The developed multi-scale model combines a macroscopic (continuum) model with an atomistic (stochastic) model. The models and the coupling are explained in detail within this section.


2.1. Macroscopic Model
The macroscopic model incorporates a battery model and an SEI model. In the battery model, each electrode i is represented as a single particle, with a solid diffusion process of lithium taking place in the particle:

\frac{\partial c_i^\mathrm{Li}(r_i)}{\partial t} = \frac{1}{r_i^2} \nabla \left( D_i^\mathrm{s} r_i^2 \nabla c_i^\mathrm{Li}(r_i) \right) \qquad (1)

with the boundary conditions -D_i^\mathrm{s} \nabla c_i^\mathrm{Li}(r_i = 0) = 0 and -D_i^\mathrm{s} \nabla c_i^\mathrm{Li}(r_i = R_i) = J_i^\mathrm{Li}/F, the Faraday constant F, the lithium reaction current density J_i^\mathrm{Li} = j_i^\mathrm{Li}/a_i^\mathrm{s}, the specific surface area a_i^\mathrm{s} = 3\varepsilon_i^\mathrm{s}/R_i, the active material fraction \varepsilon_i^\mathrm{s}, and the particle radius R_i. Furthermore, the electrochemical double layer at every phase interface k is modeled by

C_k^\mathrm{DL} \frac{\partial \Delta\Phi_k}{\partial t} = -J_k^\mathrm{total} - F \sum_j q_j \qquad (2)

with the double layer capacitance C_k^\mathrm{DL}, the electrical potential difference \Delta\Phi_k, and the total current density J_k^\mathrm{total} at the interface. The implemented interfaces are anode/SEI, SEI/electrolyte, and electrolyte/cathode. The electrochemical reactions q_j include the main intercalation reaction as well as the undesired side reactions. Since only very low charging rates are applied, the electrolyte and active material charge resistances as well as diffusion in the electrolyte phase are neglected.

The model is further extended by a one-dimensional SEI model based on the work of Colclasure et al. (2011). The model includes transport of a charged species pair c_\mathrm{SEI} consisting of lithium and an electron:

\frac{\partial c_\mathrm{SEI}(\xi)}{\partial t} = \frac{1}{\delta^2} \frac{\partial}{\partial \xi} \left( \frac{2 D_{\mathrm{Li}^+} D_{\mathrm{e}^-}}{D_{\mathrm{Li}^+} + D_{\mathrm{e}^-}} \frac{\partial c_\mathrm{SEI}(\xi)}{\partial \xi} \right) + \frac{\xi}{\delta} \frac{\partial \delta}{\partial t} \frac{\partial c_\mathrm{SEI}(\xi)}{\partial \xi} \qquad (3)

with the dimensionless coordinate \xi in the SEI, the diffusion coefficient of lithium ions D_{\mathrm{Li}^+}, the diffusion coefficient of electrons D_{\mathrm{e}^-}, and the SEI layer thickness \delta. The potential within the SEI is calculated from

0 = \frac{\partial J_\mathrm{SEI}}{\partial \xi} = \frac{\partial}{\partial \xi} \left( (D_{\mathrm{Li}^+} - D_{\mathrm{e}^-})\, c_\mathrm{SEI} F \frac{\partial \ln(c_\mathrm{SEI}(\xi))}{\partial \xi} + (D_{\mathrm{Li}^+} + D_{\mathrm{e}^-}) \frac{F^2 c_\mathrm{SEI}}{RT} \frac{\partial \Phi_\mathrm{SEI}}{\partial \xi} \right) \qquad (4)

with the current through the SEI J_\mathrm{SEI} = J_\mathrm{anode} = j_\mathrm{anode}/a^\mathrm{s}, the ideal gas constant R, and the temperature T. The total potential drop (at the negative electrode) is

\Delta\Phi_\mathrm{a} = \Delta\Phi_\mathrm{SEI} + \Delta\Phi_\mathrm{graphite/SEI} + \Delta\Phi_\mathrm{SEI/electrolyte} \qquad (5)

with the ohmic drop in the SEI phase \Delta\Phi_\mathrm{SEI} = \Phi_\mathrm{SEI}(\xi = 0) - \Phi_\mathrm{SEI}(\xi = 1) and the potential drops at the interfaces \Delta\Phi_\mathrm{graphite/SEI} and \Delta\Phi_\mathrm{SEI/electrolyte}. The total cell potential E_\mathrm{cell} is

E_\mathrm{cell} = \Delta\Phi_\mathrm{c} - \Delta\Phi_\mathrm{a} \qquad (6)

2.2. Kinetic Monte Carlo Model of Surface Reaction
The kinetic Monte Carlo model is based on the solid-on-solid approach (Drews et al., 2004; Pal and Landau, 1994; Weeks and Gilmer, 2007). In contrast to a full 3D approach, such a 2+1D assumption allows adsorption only on top of surface sites. A cubic lattice of lattice size L is used, while diagonal and horizontal diffusion are allowed. These and further assumptions are required because of the complexity and the long time-scale character of the Solid Electrolyte Interface growth problem.


Implemented processes i, such as surface diffusion, adsorption, desorption, and reactions, and the corresponding rates are calculated based on Arrhenius kinetics. The general rate equation is

r_i = k_i\, c_j^\mathrm{e} \exp\left( \frac{-E_i^\mathrm{A}}{RT} \right) \exp\left( \frac{\alpha_i\, \Delta\Phi_\mathrm{SEI/electrolyte}}{RT} \right) \qquad (7)

with the process speed k_i, the activation energy E_i^\mathrm{A}, the potential difference at the interface \Delta\Phi_\mathrm{SEI/electrolyte}, and the symmetry factor \alpha_i. The process rate may depend on the concentration c_j^\mathrm{e} of species j in the electrolyte in the case of an adsorption process. Furthermore, the activation energy can depend on the binding energy to neighboring solid sites, E_i^\mathrm{A} = \sum_j n_j J_j, where n_j is the number of neighbors and J_j is the bonding energy to a neighbor of type j (Pal and Landau, 1994). The diffusion speed depends on whether the diffusion is horizontal, k = w_\mathrm{h} = D/(2\Delta L^2), or diagonal, k = w_\mathrm{d} = D/(4\Delta L^2), as shown by Drews et al. (2004). Charge-transferring processes through interfaces depend on the potential difference \Delta\Phi_\mathrm{SEI/electrolyte} at the interface.

The considered electrolyte mainly consists of the solvent ethylene carbonate (C3H4O3) and a lithium salt (Li+ + PF6−). These components have concentrations several orders of magnitude higher than every other component in the electrolyte, while maintaining an almost constant surface fraction. Consequently, almost every system state transition is an adsorption, desorption, or surface diffusion step of the major species. Accurate consideration of those processes would limit the feasible time horizon of the model to a few seconds. To investigate longer time scales, only the crucial processes related to the rare intermediate species are explicitly simulated, while quasi-steady-state assumptions (QSSA) for the major species are applied (see the sketch at the end of this subsection). The theoretical background for the application of the QSSA in stochastic systems is given by Rao and Arkin (2003). The major species concentration on the surface is determined in the macroscopic model, while the probability of finding a species on a surface site is approximated by a conditional expectation based on its surface fraction in the macroscopic model.

Side reactions lead to a growing solid layer, which consists of LiF (passive), Li2CO3 (active), and lithium alkyl carbonate (CH2OCO2Li)2 (organic). These species are reported as the main SEI layer components (Chen et al., 2011; Ganesh et al., 2012; Aurbach et al., 1999; Wang et al., 2001; Verma et al., 2010). Reaction mechanisms have been reported for the solvent reduction (Ganesh et al., 2012; Leung and Budzien, 2010; Wang et al., 2001; Leung, 2013) and the salt reduction (Ganesh et al., 2012; Agubra and Fergus, 2014). A subset of the proposed mechanisms is used in this work.
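To make the rate expression and the stochastic simulation step concrete, the following minimal Python sketch (our illustration, not the authors' code; rate values are arbitrary) evaluates Eq. (7) and performs one event selection of a standard kinetic Monte Carlo step:

```python
import numpy as np

def process_rate(k_i, E_A, dphi, alpha=0.5, c_e=1.0, T=298.15, R=8.314):
    """Rate of process i following Eq. (7); c_e enters only for adsorption
    processes that depend on an electrolyte species concentration."""
    return k_i * c_e * np.exp(-E_A / (R * T)) * np.exp(alpha * dphi / (R * T))

def kmc_step(rates, rng):
    """One kMC step: pick an event with probability proportional to its
    rate and draw an exponentially distributed time increment."""
    total = rates.sum()
    event = rng.choice(rates.size, p=rates / total)
    return event, rng.exponential(1.0 / total)

rng = np.random.default_rng(0)
rates = np.array([process_rate(1e3, 4.0e4, 0.0),     # e.g. surface diffusion
                  process_rate(1e2, 5.0e4, 1.0e3)])  # e.g. charge transfer
print(kmc_step(rates, rng))
```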

2.3. Multi-Scale Algorithm

Figure 1: Flow chart of the multi-scale algorithm.


Both presented models are coupled in a multi-scale model. This model is capable of simulating layer growth and battery performance for long-time processes (e.g., a full discharge process). The algorithm is presented in Figure 1. After every time step \Delta t_j, data are exchanged between the macroscopic and atomistic models. The macroscopic model updates system states for the atomistic model (e.g., the electron concentration at the surface or the surface fractions of the major species at the surface), while the kMC results are used to determine parameters of the macroscopic model (e.g., average layer thickness, electron consumption rate). The parameters are provided as time-dependent polynomial functions to avoid fluctuations in the solution of the macroscopic model; a sketch of one coupling interval follows.
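The data exchange of Figure 1 can be summarized in a few lines of pseudo-Python. Here macro_model and kmc_model are hypothetical objects standing in for the actual implementations; only the structure of the coupling is meant to be faithful:

```python
import numpy as np

def multiscale_interval(macro_model, kmc_model, t, dt_j, deg=3):
    """One coupling interval Delta t_j of the multi-scale algorithm (Figure 1)."""
    # 1) macroscopic model provides surface states to the kMC model
    c_surface, dphi = macro_model.surface_state(t)
    # 2) atomistic simulation over [t, t + dt_j]
    times, thickness, e_consumption = kmc_model.run(t, dt_j, c_surface, dphi)
    # 3) kMC output is returned as time-dependent polynomial parameters
    delta_fit = np.polyfit(times, thickness, deg)
    k_fit = np.polyfit(times, e_consumption, deg)
    # 4) macroscopic integration with smooth, fluctuation-free parameters
    macro_model.integrate(t, t + dt_j, delta_fit, k_fit)
    return t + dt_j
```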

3. Results and Discussion

Figure 2: Battery potential during a 0.5 C discharge process and kMC configurations at four discharge capacities.

The battery potential and the kMC configuration at four different discharge capacities for a discharge rate of 0.5 C are shown in Figure 2. The increase in the layer thickness during the discharge process is observed in Figures 2 and 4. The electron-driven layer growth decreases with increasing layer thickness. The layer consists mainly of the organic species (CH2OCO2Li)2 and some LiF components, which is in good agreement with experimental results (Nie et al., 2013). At the beginning of discharge, some LiF components are created, as seen in Figure 5. Through the fast ongoing deposition of the organic solid components, the LiF components are covered before they can continue growing. With further discharge, the layer thickness increases; the layer thus becomes more and more insulating and limits electron transport through it. The electron-driven side reactions slow down considerably and consequently lead to a decreasing production of organic solids. At the end of discharge, LiF components are not covered immediately and thus start to grow as nanoparticles on the surface. This possible mechanism and other detailed features of the growth process, as well as the detailed simulation of the SEI composition and structure, can only be captured by considering atomistic effects in a kMC simulation as presented here. On the other hand, the leakage of electrons through the interface as well as the surface fraction of the major species can be determined with macroscopic approaches. Furthermore, macroscopic models can be used to determine the layer resistance and battery performance based on the atomistic simulation output.

Figure 3: Effective electron consumption rate at the SEI surface: kMC solution and polynomial fit for the macroscopic model (rate constant k [1/s] versus time [s]).

Figure 4: SEI layer thickness: kMC solution and polynomial fit for the macroscopic model (thickness δ [m] versus time [s]).

Within the presented simulation, two parameters are determined using the kMC model. The first parameter is the reaction rate constant of the electron consumption at the surface. The second parameter is the layer thickness. The kMC output as well as the polynomial fits of the parameters at the last simulation step are shown in Figures 3 and 4. The layer thickness shows almost no fluctuation within the kMC simulation, while the electron consumption rate fluctuates strongly. The reason for this fluctuation is that an electron consumption event is very rare on this small surface area during one simulation step \Delta t_j, which leads to large differences between simulation steps. In contrast, the layer thickness changes only slightly during one simulation step. Nevertheless, in both cases a polynomial approximation is useful to transfer the kMC output to a time-dependent parameter in the macroscopic model. The polynomial allows the reduction of fluctuations or an accurate fitting of the kMC output for the rate constant and the layer thickness, respectively.
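As an illustration of this smoothing step, a least-squares polynomial fit over one simulation interval might look as follows (synthetic data; the numbers are chosen only to resemble the scales in Figure 3 and are not the simulation output):

```python
import numpy as np

rng = np.random.default_rng(1)
t_kmc = np.linspace(0.0, 1.4e4, 200)                 # time / s
k_kmc = 1.1e16 + 8e15 * np.exp(-t_kmc / 4e3) \
        + 1e15 * rng.standard_normal(t_kmc.size)     # fluctuating kMC rates

coeff = np.polyfit(t_kmc, k_kmc, deg=3)              # polynomial parameter
k_smooth = np.polyval(coeff, t_kmc)                  # smooth input for the
                                                     # macroscopic model
```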

4. Conclusions
This article presents a novel multi-scale modeling approach to simulate SEI layer formation in lithium-ion batteries. A practical approach is provided for the time-scale problem of atomistic simulations of very long formation processes and for transferring the fluctuating output of stochastic models to macroscopic, time-dependent battery models. This approach enables the simulation of the complex formation process of the SEI while involving fundamental atomistic surface mechanisms (e.g., surface diffusion, reaction, crystalline growth). By application of this approach, future work can address understanding the effects of different atomistic reaction mechanisms, charge-rate dependencies for formation and cycling, as well as optimal control issues.


Figure 5: kMC configuration at the beginning of discharge (left) and at the end of discharge (right).

References
V. A. Agubra, J. W. Fergus, 2014. The Formation and Stability of the Solid Electrolyte Interface on the Graphite Anode. Journal of Power Sources 268, 153–162.
P. Arora, R. E. White, M. Doyle, 1998. Capacity Fade Mechanisms and Side Reactions in Lithium-Ion Batteries. Journal of The Electrochemical Society 145 (10), 3647–3667.
D. Aurbach, B. Markovsky, I. Weissman, E. Levi, Y. Ein-Eli, 1999. On the Correlation between Surface Chemistry and Performance of Graphite Negative Electrodes for Li-Ion Batteries. Electrochimica Acta 45, 67–86.
Y. C. Chen, C. Y. Ouyang, L. J. Song, Z. L. Sun, 2011. Electrical and Lithium Ion Dynamics in Three Main Components of Solid Electrolyte Interphase from Density Functional Theory Study. The Journal of Physical Chemistry C 115, 7044–7049.
A. M. Colclasure, K. A. Smith, R. J. Kee, 2011. Modeling Detailed Chemistry and Transport for Solid-Electrolyte-Interface (SEI) Films in Li-Ion Batteries. Electrochimica Acta 58, 33–43.
T. O. Drews, R. D. Braatz, R. C. Alkire, 2004. Coarse-Grained Kinetic Monte Carlo Simulation of Copper Electrodeposition with Additives. International Journal for Multiscale Computational Engineering 2 (2), 313–327.
P. Ganesh, P. R. C. Kent, D. E. Jiang, 2012. Solid-Electrolyte Interphase Formation and Electrolyte Reduction at Li-Ion Battery Graphite Anodes: Insights from First-Principles Molecular Dynamics. Journal of Physical Chemistry C 116, 24476–24481.
K. Leung, 2013. Electronic Structure Modeling of Electrochemical Reactions at Electrode/Electrolyte Interfaces in Lithium Ion Batteries. Journal of Physical Chemistry 117, 1539–1547.
K. Leung, J. L. Budzien, 2010. Ab Initio Molecular Dynamics Simulations of the Initial Stages of Solid-Electrolyte Interphase Formation on Lithium Ion Battery Graphitic Anodes. Physical Chemistry Chemical Physics 12, 6583–6586.
M. Nie, D. Chalasani, D. P. Abraham, Y. Chen, A. Bose, B. L. Lucht, 2013. Lithium Ion Battery Graphite Solid Electrolyte Interphase Revealed by Microscopy and Spectroscopy. Journal of Physical Chemistry 117, 1257–1267.
S. Pal, D. Landau, 1994. Monte Carlo Simulation and Dynamic Scaling of Surfaces in MBE Growth. Physical Review B 49 (15), 10597–10606.
H. J. Ploehn, P. Ramadass, R. E. White, 2004. Solvent Diffusion Model for Aging of Lithium-Ion Battery Cells. Journal of The Electrochemical Society 151, 456–462.
C. V. Rao, A. P. Arkin, 2003. Stochastic Chemical Kinetics and the Quasi-Steady-State Assumption: Application to the Gillespie Algorithm. Journal of Chemical Physics 118 (11), 4999–5010.
P. Verma, P. Maire, P. Novák, 2010. A Review of the Features and Analyses of the Solid Electrolyte Interphase in Li-Ion Batteries. Electrochimica Acta 55 (22), 6332–6341.
Y. Wang, S. Nakamura, M. Ue, P. B. Balbuena, 2001. Theoretical Studies to Understand Surface Chemistry on Carbon Anodes for Lithium-Ion Batteries: Reduction Mechanisms of Ethylene Carbonate. Journal of the American Chemical Society 123, 11708–11718.
J. D. Weeks, G. H. Gilmer, 2007. Dynamics of Crystal Growth. Advances in Chemical Physics 40, 157–228.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50032-1

Automatic Model Reduction of Population Balance Models by Proper Orthogonal Decomposition

Dmytro Khlopov a and Michael Mangold a

a Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, 39106 Magdeburg, Germany
[email protected], [email protected]

Abstract
This paper shows the use of Proper Orthogonal Decomposition (POD) for the model reduction of particulate processes in fluid flow described by Population Balance Equations (PBEs). As such systems may have several internal and external coordinates, the corresponding detailed models tend to be very complicated, with a high number of degrees of freedom. POD is an attractive way to obtain nonlinear reduced models of low order. This paper reports on the automation of this reduction method. An automatic procedure for the reduction of PBE models is presented, which is implemented in the modeling and simulation environment ProMoT/Diana.
Keywords: nonlinear model reduction, population balances, proper orthogonal decomposition, empirical interpolation, computer aided modeling

1. Introduction
Particulate processes are of great importance for the chemical and pharmaceutical industries, because the majority of products are particles, and because particulate processes like crystallization can be used for the separation of mixtures and the purification of substances. The quality of particles often depends on properties like size, shape and composition. A widely used approach to the mathematical modeling of particulate systems is population balance equations, where an infinite number of particles is assumed and particle properties are treated as internal coordinates in addition to the external (geometrical) coordinates (Hulburt and Katz, 1964; Ramkrishna, 2000). Particulate processes typically consist of the actual dispersed phase and a fluid phase, which is in exchange with the dispersed phase. Essential processes within the dispersed phase are the formation of particles, their breakage into smaller particles, the aggregation of particles into larger units, and the growth and motion of particles. The resulting population balance is a partial integro-differential equation that depends on time, internal, and external coordinates. For the numerical solution of systems with particle populations in fluid flow, the original system of infinite order is reduced to one of finite order by discretizing the internal and external coordinates. But the resulting number of degrees of freedom makes the detailed model inconvenient for controller design and process optimization. Thus there is a need for reduced models. The reduced models should have a much lower system order than the reference models. Their numerical solution should be much easier and faster. The reduced models should also be able to reproduce the system behavior with sufficient accuracy in the relevant window of operating conditions and in a relevant range of system parameters.


In this paper, POD is used to develop an automatic procedure for model reduction, which is applied to the model of a fluidized bed crystallizer. The method has been used successfully during the last years for the nonlinear model reduction of population balance systems like crystallizers (Krasnyk and Mangold, 2010; Mangold et al., 2015) and granulators (Mangold, 2012). In contrast to the widely used method of moments, POD preserves the full information on the property distributions. In other words, model reduction by POD is a proven approach, and it is not the intention of this contribution to report on a new model reduction technique. However, applying model reduction by POD manually to complex chemical engineering models is a tedious task. The idea of this work is to provide an environment that performs the model reduction automatically with minimal additional input from the user. The model reduction tool under development is embedded in a dynamic flowsheet simulator and applicable to a large class of systems. The model reduction method is discussed in Section 2. Section 3 describes technical details of the developed software tool for automatic model reduction, which is based on the modeling and simulation environment ProMoT/Diana. As an example, the reference model of a fluidized bed crystallizer and the corresponding reduced model are presented in Section 4.

2. Mathematical method
2.1. Reference model representation
In order to apply a reduction procedure to the reference model, it has to be transformed into a spatially discretized form. After discretization, the reference model has a general structure, which is a set of differential algebraic equations and may be written as

B \frac{dx}{dt}(t) = A_x x(t) + g(x(t)), \qquad (1)

where x(t) is the discretized state vector, B and A_x are the system matrices, where B may be singular, and g(x(t)) comprises the nonlinearities of the system.

2.2. POD method
The Proper Orthogonal Decomposition method (Kunisch and Volkwein, 2003; Park and Cho, 1996; Sirovich, 1987) can be subdivided roughly into two steps. The first step, the offline phase, includes preliminary calculations, like the solution of the reference model and the generation of the reduced model equations. Depending on the complexity of the reference model, the offline phase can be extremely computationally intensive. But this investment pays off in the second, fast and cheap step, the online phase. In the online phase, only a differential algebraic system of very low order has to be solved.

As a starting point, the detailed reference model has to be solved numerically. Snapshots of each state of the model x(t_1), x(t_2), ... are collected and arranged as a matrix X = (x(t_1), x(t_2), ...). A reduced basis for the snapshot vectors is constructed from the singular value decomposition X = U \Sigma V^T. The new basis vectors are taken as \Psi_i^x = U_i, i = 1, ..., N^x, where U_i denotes the i-th column of U, and N^x is the dimension of the reduced basis and correspondingly the order of the resulting reduced model. The state vector x(t) is approximated by the following expression:

x(t) \approx \Psi^x \phi^x(t), \qquad (2)

where \Psi^x = (\Psi_1^x, ..., \Psi_{N^x}^x) and \phi^x(t) is the coefficient vector of the reduced basis.

Automatic Model Reduction by Proper Orthogonal Decomposition

165

basis vanish, Galerkin’s method of weighted residuals is applied, which leads to dφ x xT Ψ BΨ}x (t) = ΨxT Ax Ψx φ x (t) + ΨxT g(Ψx φ x (t)) | {z | {z } dt =:Bred

(3)

=:Ared

The matrix Bred and the matrix Ared from (3) have to be evaluated only once for a fixed reduced basis, because they do not depend on the reduced state vector φ x (t). 2.3. Empirical Interpolation The handling of the nonlinear term on the right-hand side of (3) requires some additional efforts. According to the formulation of (3), its evaluation would require the reconstruction of the state vector x(t) from the reduced state vector φ x (t), the computation of the complete nonlinearity g, and the projection of the nonlinearities on the reduced basis. This procedure would have to be carried out during the runtime of the reduced model, but is prohibitively expensive. Clearly, more efficient approaches are needed. There are several methods in literature on how to handle the nonlinear terms in the context of POD model reduction, whose basic idea is to approximate also the nonlinearities by basis vectors constructed from snapshots (Grepl et al., 2007; Nguyen et al., 2008). The Empirical Interpolation Method (Grepl et al., 2007) is used. The basis vectors Ψgi , i = 1, ..., N g for the available snapshots g(x(t1 )), g(x(t2 )), ... are constructed by the iterative procedure in Grepl et al. (2007). During runtime of the reduced model, the nonlinearity is approximated as a linear combination of time independent basis functions Ψg = Ψg1 , ..., ΨgN g weighted by time dependent coefficients φ g (t), which follow from the linear equation system  g    Ψ1,k1 · · · ΨgN g ,k1 gk1 (Ψx φ x (t))  ..   ..  g .. .. (4)  .  . .  φ (t) =  . Ψg1,k

Ng

···

ΨgN g ,k

Ng

gkN g (Ψx φ x (t))

In summary, the resulting reduced model consists of the differential equations (3) in combination with the linear algebraic equations (4). The main task of the automatic model reduction tool is to construct the reduced model equations (3) and (4) in symbolic form from an arbitrarily structured reference model.

3. Automatic model reduction An automatic procedure for the model reduction is implemented in the modeling and simulation environment ProMoT/Diana. ProMoT is a modeling tool written in Common Lisp (Ginkel et al., 2003). ProMoT supports the structured implementation of dynamic models described by systems of nonlinear implicit differential algebraic equations. It is a purely symbolic modeling tool, which keeps the model formulation separate from numerical requirements. The numerical counterpart to ProMoT is Diana (Krasnyk, 2009), which is a simulation tool for the solution and nonlinear analysis of differential algebraic systems. The numerical core of Diana is written in C++ in order to ensure fast and efficient numerical solutions. The model implementation in C++ is done automatically by ProMoT. For the numerical analysis, the modeler accesses Diana via scripts written in Python. The developed software tool for model reduction is a part of the ProMoT project and hence is written in Common Lisp. The main parts of the tool are the snapshots generator, the symbolic transformator, the generator of numerical data and the builder of reduced model. The structure of the automatic tool for model reduction is sketched in Fig. 1.

166

D.Khlopov and M. Mangold

3.1. Snapshots generator ProMoT provides a general text based modeling language MDL. In order to start the automatic model reduction, the user has to provide the detailed reference model written in this language. Also, the user has to provide the name of a Python script which contains all information about simulation conditions like definition of model parameter values, a time range, and an output time interval for collecting snapshots. After reading of all necessary information, the snapshots generator translates the provided reference model into the corresponding C++ code for Diana. Then snapshots generator runs Diana to yield snapshots and writes them into an output file. 3.2. Symbolic transformator When the reference model is transformed into the correct form (1), all the steps of reduction technique described in Section 2 could be easily implemented using just Matlab. But the reference model provided by user has an arbitrary structure, which is a set of ordinary differential equations with algebraic Figure 1: Structure of automatic tool for model equations and can be written as B dx dt = f (x). reduction The symbolic transformation of the reference model into the appropriate form is one of the most difficult tasks in the present work. For accomplishment of this intention, ProMoT allows to treat all the modeling entities like model equations and variables in symbolic form. To transform the reference model into the needed form (1), automatic splitting of the right-hand sides of equations into linear and nonlinear terms has to be done, in particular deriving of the system matrix Ax . One possibility is to use the Jacobian matrix i for Ax , whose elements can be expressed as Axi, j = ∂ ODE ∂ s j , where Axi, j is an element of the system matrix at i th row and at j th column. ODEi is the right-hand side of the i th ordinary differential equation, s j is an symbolic name of j th state variable. The above operation has to be performed for expressions in symbolic form. For this purpose the computer algebra system Maxima is used, which is embedded into the ProMoT core. For convenience of use of the computer algebra system a program interface between ProMoT and Maxima has been developed. The interface allows to convert internal data structures of ProMoT into corresponding Maxima respresentation and vice versa. After composing the matrix Ax , the nonlinearity g(x) in symbolic form can be easily constructed as g(x) = f (x) − Ax x. 3.3. Generator of numerical data In order to generate the equations of reduced model, it is necessary to calculate the reduced bases of the model. A complicated mathematical apparatus is needed to accomplish this, in particular to calculate singular value decomposition and perform empirical interpolation of snapshots data. For these purposes it was decided to use a specialized software as an external tool. GNU Octave is a high-level interpreted language intended for numerical computations. To use this tool externally

167

Automatic Model Reduction by Proper Orthogonal Decomposition

a program interface between ProMoT and Octave has been developed. The interface allows to run Octave in interactive mode, send and receive data, and to apply all the build-in mathematical functions in Octave. 3.4. Builder of reduced model After completion of the above parts it is possible to calculate all the numerical matrices and symbolic expressions needed for the reduced model in the form of Eq. (3) and (4). The builder of the reduced model creates a new modeling file into which it writes the following system of equations  Nx Nx Ng dφ xj g  ∑ Bred i, j dt (t) = ∑ Ared i, j φ jx (t) + ∑ Cred i, j φ j (t)   j=1 j=1 j=1 (5) g x N N  g   ∑ Dred k, j φ j (t) = f (x(t)) − ∑ Ered k, j φ jx (t) j=1

j=1

where N x is the number of ordinary differential equations of the reduced model and N g denotes the number of algebraic equations for handling of the nonlinearities. For the reconstruction of the states of the detailed model x(t1 ), x(t2 ), ... one has to evaluate equation (2).

4. Example: fluidized bed crystallizer 4.1. Reference model In the following, the model of the fluidized bed crystallizer sketched in Fig. 2 will be considered. The crystallizer has the shape of a cylinder whose diameter narrows towards the crystallizer’s bottom. An input volume flow of the fluid comes from outside the model and enters the bottom of the crystallizer. The fluid flow goes from bottom to top, which drags small particles upwards. Larger particles sink to the bottom due to gravity. The reference model for this process is a variant of the model described in Palis et al. (2013), Mangold et al. (2015). The main model assumption is that the number of particles is sufficiently high that the particle phase may be described by a particle population with a number size density n(x, L,t) denoting the number of particles with size L per volume at a point x in space. The population balance equation of the system reads: ∂n ∂ ∂ ∂n A(x) = − (A(x)vP (x, L,t)n(x, L,t)) + D (A(x) ) (6) ∂t ∂x ∂x ∂x x,L,t

x,L,t

The first term on the right-hand side of (6) is the advective transport of particles with velocity vP ; A(x) denotes the cross-sectional area of the crystallizer. The particle velocity vP can be expressed by the following equation s 4 Lg ρP − ρF vP = vF − 3 cW (vF , n) ρF

(7)

The fluid flow velocity vF is calculated from Z ∞ V˙ π 3 vF = , with AF = A(1 − nL dL), (8) AF 0 6 where V˙ is the volume flow of the fluid; and AF is the part of the cross-sectional area not occupied by particles. The second term on the right-hand side of (6) stands for particle transport by dispersion.

Figure 2: Scheme of fluidized bed crystallizer

D.Khlopov and M. Mangold

168

4.2. Reduced model Numerical solution of the reference model has been performed to obtain snapshots for model reduction. The following discretization grid has been applied: 120 points in the direction of external coordinate (position along the crystallizer) and 80 points in the direction of internal coordinate (size of particles). In total, the reference model consists of 9600 ordinary differential equations. The snapshots are collected solutions of the detailed model on an equidistant time grid for t = 0..2000 seconds with interval of 1 second. −4

relative total error

x 10

The reduced model consists of 28 ordinary differential equations and 13 algebraic equations, compared to 9600 ordinary differential equations of the reference model. The reduced model and the reference model agree very well, as is illustrated by Fig. 3. It shows the relative total error ||n(t) − n(t)||/||n(t)||, ˆ where n(t) ˆ is the approximation of the reduced model.

2

1

0 0

500

1000 t/s

1500

2000

Figure 3: Relative total error of approximation

5. Outlook The automatic tool for model reduction has been developed by using proper orthogonal decomposition combined with empirical interpolation. Although the basis functions from snapshots of the reference model give some hints on the accuracy to be expected from the reduced model, depending on many factors the approximation error during runtime of the reduced model can leave the desired range. For this purpose an efficient a-posteriori error estimator has to be implemented.

References M. Ginkel, A. Kremling, T. Nutsch, R. Rehner, E. D. Gilles, Jun. 2003. Modular modeling of cellular systems with ProMoT/diva. Bioinformatics 19 (9), 1169–1176. M. A. Grepl, Y. Maday, N. C. Nguyen, A. T. Patera, May 2007. Efficient reduced-basis treatment of nonaffine and nonlinear partial differential equations. ESAIM: Mathematical Modelling and Numerical Analysis 41 (3), 575– 605. H. Hulburt, S. Katz, 1964. Some problems in particle technology: A statistical mechanical formulation. Chemical Engineering Science 19, 555–574. M. Krasnyk, 2009. Ph.D. Thesis. Otto-von-Guericke-University Magdeburg. M. Krasnyk, M. Mangold, 2010. Reduction of a urea crystallizer model by proper orthogonal decomposition and best point interpolation. Industrial & Engineering Chemistry Research 49, 9887–9898. K. Kunisch, S. Volkwein, Jan. 2003. Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM Journal on Numerical Analysis 40 (2), 492–515. M. Mangold, 2012. Model reduction of a batch drum granulator by proper orthogonal decomposition. In: 8th IFAC Symposium on Advanced Control of Chemical Processes, 10 - 13 July 2012, Singapore. pp. 856–861. M. Mangold, D. Khlopov, S. Palis, L. Feng, P. Benner, D. Binev, A. Seidel-Morgenstern, 2015. Nonlinear model reduction of a continuous fluidized bed crystallizer. Journal of Computational and Applied Mathematics 289, 253–266. N. C. Nguyen, A. T. Patera, J. Peraire, Jan. 2008. A best points interpolation method for efficient approximation of parametrized functions. International Journal for Numerical Methods in Engineering 73 (4), 521–543. S. Palis, D. Binev, H. Lorenz, A. Seidel-Morgenstern, A. Kienle, 2013. Population balance modeling of crystallization in a fluidized bed. BIWIC 2013 20th International Workshop on Industrial Crystallization, 252–259. H. M. Park, D. H. Cho, 1996. The use of the karhunen-love decomposition for the modeling of distributed parameter systems. Chemical Engineering Science, 81–98. D. Ramkrishna, 2000. Population balances: theory and application to particulate systems in engineering. Academic Press. L. Sirovich, 1987. Turbulence and the dynamics of coherent structures part i: coherent structures. Quarterly of Applied Mathematics, 257–267.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50033-3

Parallel computing in multi-parametric programming Richard Oberdiecka,b and Efstratios N. Pistikopoulosb a Department

of Chemical Engineering, Centre for Process Systems Engineering, Imperial College London, London, United Kingdom. b Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station TX, United States. [email protected]

Abstract In multi-parametric programming, an optimization problem is solved as a function of certain bounded parameters. Hence it requires the exploration of the corresponding parameter space, a procedure which inherently leads to independent subproblems to be solved for each part of the parameter space. This characteristic is used to develop a parallelization strategy for many classes of multi-parametric programming algorithms. The trade-off between information overhead and independence of each machine is addressed explicitly through the introduction of a user-defined parameter. This novel approach is applied to a geometrical multi-parametric quadratic programming algorithm; a computational study as well as the application to a combined heat and power heat recovery subsystem show the benefits of the developed approach. Keywords: Multi-parametric Programming, Parallelization

1. Introduction As opposed to other branches of optimization, in multi-parametric programming (mp-P) an optimization problem is solved for a range and as a function of bounded parameters. This characteristic has proven to be a powerful methodology, as it allows the transfer of an online optimization burden offline (see Pistikopoulos et al. (2015) and references therein). However, one drawback of mp-P is the considerable offline computational burden associated with it, especially for larger scale problems. Consequently, several publications have considered the approximate solution of mp-P problems in order to lessen the computational effort (Filippi, 2004). However, to the authors knowledge no publication to date has considered the use of parallel computing in mp-P algorithms, and the only contribution related to mp-P considers the point location problem, a problem closely related to multi-parametric model predictive control (Zanarini et al., 2013)1 . This is surprising, as mp-P algorithms consider a partitioned, multi-dimensional parameter space which inherently requires the solution of independent subproblems at each iteration (Table 1 presents a list of independent elements identified for several classes of multiparametric programming). Such a structure naturally lends itself to parallel computing, 1 As it is open-source, the authors are aware that the Multiparametric Toolbox (MPT) (Herceg et al., 2013) explicitly considers parallel programming. However, neither is the exact strategy clear nor has MPT documented their procedure.

R. Oberdieck and E.N. Pistikopoulos

170

which bases its computational benefits on the distribution of independent elements or tasks onto different machines and the parallel execution of the required operations on these machines. Thus, in this paper we demonstrate how this inherently iterative and independent nature of mp-P problems can be exploited using a suitable parallelization approach, applicable to all cases presented in Table 1. Using the example of a widely used geometrical multi-parametric quadratic programming (mp-QP) algorithm, the computational benefits are highlighted and the insights gained are discussed in detail.

Table 1: The problem class and corresponding independent elements of several classes of multi-parametric programming algorithms

Problem class | Independent elements
Multi-parametric linear and quadratic programming – geometrical approach (Dua et al., 2002; Baotic, 2002) | Each facet/critical region
Multi-parametric linear and quadratic programming – combinatorial approach (Gupta et al., 2011) | Each combination of active sets
Direct multi-parametric dynamic programming (Borrelli et al., 2005) | Each critical region of the previous stage
Multi-parametric mixed-integer programming – global optimization (Dua et al., 2002; Oberdieck and Pistikopoulos, 2015) | Each critical region
Multi-parametric mixed-integer programming – branch and bound and exhaustive enumeration (Borrelli, 2003; Oberdieck et al., 2014) | Each node/integer combination

2. Multi-parametric quadratic programming – A case study
A very commonly encountered class of mp-P problems are mp-QP problems, which naturally arise in multi-parametric model predictive control (MPC) problems (see Section 3.3). In general, they are described in the following form:

\[
\begin{aligned}
\underset{x \in \mathbb{R}^{n}}{\text{minimize}} \quad & (Qx + H\theta + c)^{T} x \\
\text{subject to} \quad & Ax \le b + F\theta \\
& \theta \in \Theta := \{\theta \in \mathbb{R}^{q} \,|\, CR_{A}\,\theta \le CR_{b}\},
\end{aligned}
\tag{1}
\]

where the matrices have appropriate dimensions. The solution to problem (1) is thereby given by a set of disjoint polytopes², called critical regions, and associated with it an affine function of the form

\[
x(\theta) = K_{i}\,\theta + r_{i}, \quad \forall \theta \in CR_{i},
\tag{2}
\]

where K_i and r_i have appropriate dimensions and CR_i denotes the i-th critical region. A widely used solution approach for problem (1) is the geometrical mp-QP algorithm presented in Baotic (2002) (see Figure 1), and we will use it to elucidate the application of parallel computing techniques. The parallelization thereby takes place over the elements of N, i.e. the facets of the critical regions constituting the solution.

² Two polytopes A and B are called disjoint if int(A) ∩ int(B) = ∅, where int denotes the interior of a polytope.

1: N = ∅, S = ∅, θ₀ ∈ Θ
2: Solve first iteration, get S ← (CR₀, x₀(θ), λ₀(θ)) and N = H(CR)
3: while N ≠ ∅ do
4:   Pop halfspace ω(θ) ≤ 0 from N
5:   Using variable step-size, find θ₀ ∈ Ω := {θ₀ ∈ Θ | ω(θ₀) > 0}
6:   Fix θ₀ in problem (1), and solve the resulting QP; identify x₀, λ₀
7:   if feasible then
8:     Obtain x(θ) and λ(θ) from the Basic Sensitivity Theorem
9:     Obtain CR = {θ ∈ Θ | Ax(θ) ≤ b + Fθ, λ(θ) ≥ 0} and remove redundant constraints
10:    S ← (CR, x(θ), λ(θ))
11:    N ← H(CR)
12:  end if
13: end while

Figure 1: The solution approach for problem (1) presented in Baotic (2002). Note that H(CR) denotes the half-spaces defining critical region CR, and that the part highlighted in gray is executed in parallel.

2.1. Application of parallel computing
Parallelization inherently exploits independent aspects of an algorithm and distributes them onto different machines, where these independent subproblems are computed in parallel. The disjoint nature of the critical regions thereby naturally generates independent subproblems which can be solved in parallel. Additionally, as the solution of a subproblem might generate new subproblems due to the exploration of the parameter space, we define the concept of the limiting iteration number ρlimit:

Definition 1: The limiting iteration number ρlimit is the maximum number of iterations performed on a single machine before the result is returned to the main algorithm.

Hence it is possible to choose between continuing the current computation locally and returning the results to the main algorithm for a re-distribution of the problems. The resulting trade-off is between the increased overhead caused by the information transfer between the machines and the risk of computing suboptimal or unnecessary solutions locally, since a re-distribution always ensures that the algorithm operates on up-to-date information.

Remark 1: Since at the end of the algorithm all results are combined, the final solution is always optimal.

Consequently, the parallelization strategy proposed in this paper can be summarized as follows:

Step 1: Formulation of the sequential solution algorithm
Step 2a: Identification of the most external iterative procedure
Step 2b: Identification of the independent elements computed at each iteration
Step 2c: Definition of ρlimit
Step 3: Connection to different machines and equal distribution of elements
Step 4: Execution of the current computation locally until (i) the pre-defined termination criteria are met or (ii) the number of iterations has reached ρlimit

Figure 2: The key problem statistics for the randomly generated test set: (a) the number of variables, (b) the number of parameters and (c) the number of constraints for each test problem.
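As an illustration of Steps 3 and 4, the sketch below shows a master–worker loop with the limiting iteration number ρlimit, using Python's multiprocessing. The function solve_subproblem is a hypothetical stand-in for lines 4-11 of the algorithm in Figure 1 (here a dummy so that the sketch runs), and the scheduling details are an assumption, not the authors' implementation.

```python
from multiprocessing import Pool

def solve_subproblem(element):
    """Hypothetical stand-in for lines 4-11 of Figure 1: process one facet
    and return (solution or None, newly generated facets)."""
    return element, []  # dummy body so the sketch is runnable end-to-end

def worker(args):
    """Step 4: iterate locally for at most rho_limit iterations; unfinished
    work is handed back to the main algorithm for re-distribution."""
    element, rho_limit = args
    results, pending = [], [element]
    for _ in range(rho_limit):
        if not pending:
            break
        solution, children = solve_subproblem(pending.pop())
        if solution is not None:
            results.append(solution)
        pending.extend(children)
    return results, pending

def parallel_exploration(initial_facets, n_cores=4, rho_limit=1):
    solutions, queue = [], list(initial_facets)
    with Pool(n_cores) as pool:
        while queue:  # Step 3: distribute the independent elements
            chunks = [(el, rho_limit) for el in queue]
            queue = []
            for results, pending in pool.map(worker, chunks):
                solutions.extend(results)
                queue.extend(pending)  # re-distributed in the next round
    return solutions
```

A small ρlimit keeps every machine working on globally up-to-date information at the price of more communication rounds; a large ρlimit reduces the overhead but risks locally computed, unnecessary solutions, which is exactly the trade-off behind Definition 1.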

3. Results and Discussion
The computations for the numerical examples were carried out on a 4-core machine with an Intel Core i7-4790 CPU at 3.60 GHz and 16 GB of RAM. Furthermore, MATLAB R2015a, IBM ILOG CPLEX Optimization Studio 12.6.2 and NAG MB24 were used for the computations.

3.1. The test set
The proposed parallelization algorithm was tested on a randomly generated test set of 52 mp-QP problems; key problem statistics are reported in Figure 2. Note that the test set has been ordered in ascending order with respect to the time needed to solve each problem sequentially.

3.2. Parallelization results
Definition 2: The speedup factor Ψ is defined as

\[
\Psi = \frac{t_{\text{Sequential}}}{t_{\text{Parallel}}},
\tag{3}
\]

where t_Parallel and t_Sequential are the times needed to run the parallelized and the sequential algorithm, respectively.

In Figure 3(a) the average speedup factor is reported as a function of the number of cores with ρlimit = 1, while in Figure 3(b) the average speedup factor is shown as a function of ρlimit, with the number of cores set to 4.


Figure 3: The numerical results for the speedup of the computation by using parallel computing. In (a), the computational benefits as a function of the number of cores are shown, while in (b) the dependence on the number of iterations performed on a single thread is investigated. In (c) we show the computational benefits obtained when using parallel computing for the multi-parametric model predictive control of a combined heat and power system.

3.3. Application to a combined heat and power subsystem MPC controller
In order to highlight the impact of parallel computing on the computational efficiency, we consider the development of an MPC controller for a residential combined heat and power (CHP) system (Diangelakis et al., 2015; Pistikopoulos et al., 2015). The reduced-order model of the heat recovery subsystem used here is given as

\[
x_{k+1} =
\begin{pmatrix}
0.9712 & -0.0207 & -0.0529 \\
0.0012 & 0.8169 & -0.0524 \\
-0.0099 & -0.0302 & 0.9551
\end{pmatrix} x_{k}
+
\begin{pmatrix}
-0.0245 & -0.0079 \\
-0.1009 & 0.0593 \\
-0.02457 & 0.0125
\end{pmatrix} u_{k},
\tag{4}
\]

where x_k and u_k are the states and inputs of the system at time k, respectively. The corresponding MPC problem³ is then given as

\[
\begin{aligned}
\underset{x}{\text{minimize}} \quad & x_{N}^{T} P x_{N} + \sum_{k=0}^{N-1} x_{k}^{T} Q x_{k} + u_{k}^{T} R u_{k} \\
\text{subject to} \quad & \text{Eq. (4)} \\
& x_{k} \in [-5, 5]^{2}, \; \forall k = 0, \ldots, N \\
& u_{k} \in [-2, 2]^{2}, \; \forall k = 0, \ldots, N-1,
\end{aligned}
\tag{5}
\]

where the matrices have appropriate dimensions. The computational time as a function of the horizon length N is reported in Figure 3(c), which clearly shows the computational gains possible from parallel computing.

³ For brevity and conciseness, the problem formulation presented here is intentionally simplistic in the sense that it does not consider elements such as outputs, disturbances or possible differences between control and output horizons.
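For reference, the model data of Eq. (4) can be exercised directly; the sketch below propagates the reduced-order model under an arbitrary constant input within the bounds of problem (5) (the input sequence and horizon are illustrative, not taken from the paper).

```python
import numpy as np

# State-space matrices of the heat recovery subsystem, Eq. (4)
A = np.array([[ 0.9712, -0.0207, -0.0529],
              [ 0.0012,  0.8169, -0.0524],
              [-0.0099, -0.0302,  0.9551]])
B = np.array([[-0.0245,  -0.0079],
              [-0.1009,   0.0593],
              [-0.02457,  0.0125]])

def simulate(x0, inputs):
    """Propagate x_{k+1} = A x_k + B u_k over a given input sequence."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x]
    for u in inputs:
        x = A @ x + B @ u
        trajectory.append(x)
    return np.array(trajectory)

# Illustrative horizon of 10 steps with a constant admissible input
traj = simulate([1.0, 0.0, -1.0], [np.array([0.5, -0.5])] * 10)
```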


4. Conclusion
In this paper, we have discussed the application of parallel computing to multi-parametric programming. In particular, we have provided a straightforward approach that allows for the parallelization of large classes of multi-parametric programming algorithms. In order to cope with the trade-off between communication overhead and the independence of each machine, we have introduced the limiting iteration number ρlimit. Using a widely considered geometrical mp-QP algorithm (Baotic, 2002), the computational benefits have been highlighted and the influence of ρlimit was investigated. Future work will encompass the application of this strategy to other multi-parametric programming algorithms, the study of the influence of ρlimit for different algorithms, and an analysis of how the parallelization efficiency scales with a higher number of cores.

5. Acknowledgements Financial support from EPSRC (EP/M027856/1) and from Texas A&M University is gratefully acknowledged.

References
M. Baotic, 2002. An Efficient Algorithm for Multiparametric Quadratic Programming.
F. Borrelli, 2003. Constrained optimal control of linear and hybrid systems. Vol. 290 of Lecture Notes in Control and Information Sciences. Springer, Berlin and New York.
F. Borrelli, M. Baotić, A. Bemporad, M. Morari, 2005. Dynamic programming for constrained optimal control of discrete-time linear hybrid systems. Automatica 41 (10), 1709–1721.
N. A. Diangelakis, S. Avraamidou, E. N. Pistikopoulos, 2015. Decentralized multiparametric model predictive control for domestic combined heat and power systems. Industrial & Engineering Chemistry Research, in print.
V. Dua, N. A. Bozinis, E. N. Pistikopoulos, 2002. A multiparametric programming approach for mixed-integer quadratic engineering problems. Computers & Chemical Engineering 26 (4–5), 715–733.
C. Filippi, 2004. An Algorithm for Approximate Multiparametric Linear Programming. Journal of Optimization Theory and Applications 120 (1), 73–95.
A. Gupta, S. Bhartiya, P. Nataraj, 2011. A novel approach to multiparametric quadratic programming. Automatica 47 (9), 2112–2117.
M. Herceg, M. Kvasnica, C. N. Jones, M. Morari, 2013. Multi-Parametric Toolbox 3.0. In: Control Conference (ECC), 2013 European. pp. 502–510.
R. Oberdieck, E. N. Pistikopoulos, 2015. Explicit hybrid model-predictive control: The exact solution. Automatica 58, 152–159.
R. Oberdieck, M. Wittmann-Hohlbein, E. N. Pistikopoulos, 2014. A branch and bound method for the solution of multiparametric mixed integer linear programming problems. Journal of Global Optimization 59 (2–3), 527–543.
E. N. Pistikopoulos, N. A. Diangelakis, R. Oberdieck, M. M. Papathanasiou, I. Nascu, M. Sun, 2015. PAROC – an Integrated Framework and Software Platform for the Optimization and Advanced Model-Based Control of Process Systems. Chemical Engineering Science 136, 115–138.
A. Zanarini, M. Jafargholi, H. Peyrl, 2013. Exploiting parallelization in explicit model predictive control. In: Information, Communication and Automation Technologies (ICAT), 2013 XXIV International Symposium on. pp. 1–7.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50034-5

Modelling of a Reactive Distillation in the production process of high purity Cyclohexanone to produce caprolactam

David Lorenzo a*, Carlos Perez-Galvan b, Cristian Triana b, Aurora Santos a, Arturo Romero a, I. David L. Bogle b

a Universidad Complutense de Madrid, Departamento de Ingeniería Química, Facultad de Ciencias Químicas, Avda. Complutense S/N, 28040 Madrid, Spain.
b University College London, Department of Chemical Engineering, Torrington Place, WC1E 7JE London, United Kingdom.
[email protected]

Abstract
Cyclohexanone (ONE) is an important raw material for the production of caprolactam (CPL), which is used as a monomer in the nylon industry. In the ONE production process, this component needs to be purified from a mixture of cyclohexanol (OL) and impurities such as 2-cyclohexen-1-one (CXENONE). The concentration of these impurities must be reduced in order to improve the quality of the nylon fibres produced from CPL. The main scope of this work is to model a reactive distillation column where cyclohexanone (ONE) is separated from a mixture of cyclohexanol (OL) and 2-cyclohexen-1-one (CXENONE). The model is developed in gPROMS, where the kinetic model is implemented alongside the NRTL equation, whose binary interaction parameters are also obtained experimentally. The model is validated in a packed distillation column operating at continuous conditions; this experimental setup is used to prove the capacity of the model to reproduce the real operating conditions. The mathematical model has shown favourable predictions of the temperature profiles, concentration profiles and conversions of all the compounds in each stage of the column. Finally, the MINLP problem formulation for optimising real operation is suggested, along with planned sensitivity studies which will be tackled next.

Keywords: Process integration, Reactive Distillation, Cyclohexanone, Modelling, 2-cyclohexen-1-one.

1. Introduction
One of the most commonly used processes for caprolactam production starts with the oxidation of cyclohexane in the presence of catalytic metal salts to obtain a reaction mixture known as KA-oil, which contains ONE, OL and other impurities such as CXENONE (Burlone, Sink et al. 2006). ONE, which is purified from KA-oil, reacts with hydroxylamine in oleum media, producing CPL by the Beckmann rearrangement of cyclohexanone oxime (Romero, Santos et al. 2004). The quality of nylon fibres, which are produced from the polymerisation of CPL, is highly affected by the amount and type of organic by-products present in pure ONE.


The main impurities are aldehydes and unsaturated compounds, CXENONE being the most studied impurity (Lorenzo, Santos et al. 2013a). This compound presents a reactive behaviour similar to that of cyclohexanone, and simple distillation has proven to be ineffective for CXENONE removal. In order to reduce the amount of this impurity, reactive distillation has been proposed as a viable alternative (Wyatt, Benneker et al. 2004). A chemical reaction using a basic catalyst is proposed to remove the CXENONE, as shown in reaction r1: the addition of NaOH promotes the reaction between CXENONE and ONE, which react by Michael addition, generating two isomers that are lumped as DIONE in this work:

r1: ONE + CXENONE → DIONE
r2: 2 ONE ⇌ D + H₂O

DIONE has a higher boiling point than ONE and can be easily separated by distillation in further purification steps (Lorenzo, Santos et al. 2013b). Nevertheless, the presence of alkali within the reaction medium also promotes the self-condensation of ONE (reaction r2), producing a mixture of heavier compounds which can be easily removed by distillation. However, this is an undesirable reaction, as it represents a loss of ONE in the system: NaOH promotes an aldol condensation of ONE to form 2-(1-cyclohexenyl)cyclohexanone (D1) and 2-cyclohexylidenecyclohexanone (D2) in the liquid phase, also regarded as dimers (D). The production of higher molecular weight compounds by consecutive condensations is neglected at the operating conditions used (Lorenzo, Santos et al. 2013a). It is thus desirable to minimise the amount of these dimers in the caprolactam production, since they imply a raw material loss.

In this work, a reactive distillation column is modelled and the model validated in order to select the operating conditions that guarantee the removal of CXENONE with minimum losses of ONE. The MINLP problem dealing with the real conditions will be addressed in a future paper.

2. Experimental
The reactive distillation model proposed in this work is validated in a packed distillation column operating at continuous conditions. The experimental setup comprises a distillation column whose shell is made of stainless steel, with an internal diameter of 30 mm and a height of 500 mm. The packing is made of PTFE Raschig rings of 6 mm, in order to avoid the corrosion caused by NaOH. The feed mixture and NaOH streams are supplied by two HPLC pumps (Agilent 1100 series). Both streams can be fed at three different positions, in order to study the effect of the relative position of the NaOH feed. Another quaternary HPLC pump (Agilent 1100 series) is used to remove the distillate and bottom streams. The vapour stream, which flows through the column, is generated in a kettle reboiler. This experimental setup was already used in a previous work, where the height equivalent to a theoretical plate (HETP) was determined (Lorenzo, Santos et al. 2015). Under the operating conditions, the column is capable of developing four equilibrium stages. The reflux rate is measured after finishing each experiment and is


obtained by measuring the volumetric flow rate of the condensate collected in the cooler under steady state conditions.

3. Distillation column model
In this work, the modelling of the distillation column is based on the equilibrium-stage model with reaction kinetics (EQ-Kin), where the experimental reaction rates are included in both the mass and the energy balances. To represent the assumption that equilibrium is achieved in each stage, equilibrium relations and summation equations have been taken into account (Taylor and Krishna 2000, Keller and Górak 2013).

The kinetic models for reaction r1 and reaction r2 are shown in Eq. (1) and Eq. (2), respectively. The kinetic models are functions of the catalyst concentration, the temperature and the concentration of each species in the liquid phase. Furthermore, since the self-condensation of ONE is a reversible reaction, the reverse reaction is also included. These reaction rates were obtained experimentally and the results are presented in (Lorenzo, Santos et al. 2013a, Lorenzo, Santos et al. 2013b):

\[
r_{1} = 1.23 \cdot 10^{\ldots}\, \exp\!\left(-\frac{2914}{T}\right) \cdot \ldots
\tag{1}
\]
\[
r_{2} = 1.56 \cdot 10^{\ldots}\, \exp\!\left(-\frac{15944}{T}\right) \cdot \ldots \;-\; 1.50 \cdot 10^{\ldots}\, \exp\!\left(-\frac{7118}{T}\right) \cdot \ldots
\tag{2}
\]

where r1 is expressed in mmol of ONE/kg·min, T in K, C_NaOH in mmol/kg and C_j in mmol/kg.

The thermodynamic behaviour of the system can be explained by using the vapour–liquid equilibrium (VLE) data of the binary mixtures (i.e. OL + ONE, ONE + CXENONE and OL + CXENONE) presented in (Lorenzo, Santos et al. 2015). For this paper, the thermodynamic data and the kinetics obtained experimentally have been used to model a single equilibrium stage; this generic stage model was then used within EQ-Kin to model the complete column. The reaction system considered in this work had not been studied before, and therefore its thermodynamic properties are not available in simulation packages. gPROMS is used in this work for the simulation and optimisation of the reactive distillation column; given its equation-oriented language, the experimental values can be used directly to model the column behaviour.
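The Arrhenius structure of Eqs. (1)-(2) translates into code generically. In the sketch below the activation temperatures are those quoted above, while the pre-exponential magnitudes (powers of ten omitted) and the first-order concentration dependencies are placeholders that must be replaced by the published values from Lorenzo, Santos et al. (2013a, 2013b); this is a shape illustration, not the published rate law.

```python
import math

def k_arrhenius(k0, Ea_over_R, T):
    """Arrhenius rate constant; T and Ea_over_R in K."""
    return k0 * math.exp(-Ea_over_R / T)

def r1(T, C_NaOH, C_ONE, C_CXENONE, k0=1.23):
    """Michael addition rate, shape of Eq. (1); k0 (power of ten omitted)
    and the first-order concentration dependencies are placeholders."""
    return k_arrhenius(k0, 2914.0, T) * C_NaOH * C_ONE * C_CXENONE

def r2(T, C_NaOH, C_ONE, C_D, C_W, k0f=1.56, k0r=1.50):
    """Reversible self-condensation rate, shape of Eq. (2): a forward minus
    a reverse term, with placeholder orders and pre-exponential factors."""
    forward = k_arrhenius(k0f, 15944.0, T) * C_NaOH * C_ONE**2
    reverse = k_arrhenius(k0r, 7118.0, T) * C_NaOH * C_D * C_W
    return forward - reverse
```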

4. Results and discussion
4.1. Compound distribution in the distillation column
The component distribution between the bottom and the top of the distillation column is studied by analysing samples by GC/MS. In Figure 1, the result for a sample taken at 360 min, at a reflux ratio of 4.2, C_NaOH = 50 mg·kg⁻¹, Q_a = 1.5 mL·min⁻¹, ONE₀ = 60 %, OL₀ = 40 % and C_CXENONE,0 = 3500 mg·kg⁻¹, is plotted. ISTD depicts the peak of the internal standard, which is used to quantify the impurities; D1, D2 and D3 (dimers of the self-condensation of ONE) are the compounds lumped as dimers (D), and DIONE 1 and DIONE 2 (dimers of the ONE and CXENONE condensation) are the compounds lumped as DIONE.


As can be seen in Figure 1, the heavier compounds, DIONE and D, are not present in the top stream. This means that these compounds leave the column only through the bottom stream, a behaviour that can be explained by assuming that they are present only in the liquid phase. It can be modelled by assuming that the vapour–liquid partition coefficients of these components are negligible.

Figure 1. Mass chromatograms at 360 min of operation in continuous mode of a) the top stream and b) the bottom stream leaving the column.

4.2. Validation of the model
Different experimental conditions are tested in the experimental setup at a reflux ratio of 4.2, C_NaOH = 500 mg·kg⁻¹, Q_a = 1.5 mL·min⁻¹, ONE₀ = 60 %, OL₀ = 40 % and C_CXENONE,0 = 3500 mg·kg⁻¹, in order to find how the different variables, such as the reflux ratio, affect the conversion of ONE and CXENONE. The experimental results are plotted against the predictions of the distillation model with four equilibrium stages. As can be seen in Figure 2, the model fits the experimental results very well, which supports the use of the model in the optimisation step.

Figure 2: Composition of compound j in a) the top stream (Q_D = 0.45 mL·min⁻¹) and b) the bottom stream (Q_B = 1.05 mL·min⁻¹). Symbols depict the experimental data obtained; solid lines depict the composition values predicted by the EQ-Kin model.

5. MINLP optimisation
The optimisation of the operating conditions and the design of the distillation column under real settings of flow rates, pressure and temperature will be addressed in a future work. The main scope is to minimise the total annual cost, in which the operating costs, the total capital investment and the cost related to the loss of ONE due to self-condensation are taken into account. The objective function is expressed in Eq. (3):

\[
\min \; TAC = \ldots
\tag{3}
\]

This objective will be achieved with the most suitable values of the reflux ratio (RR), the concentration of NaOH fed into the reaction section and the pressure.

In order to calculate the total number of equilibrium trays and the number of reactive trays, an MINLP approach is proposed. A binary variable for each tray indicates whether that tray is present in the column or not. In this work, this concept is applied to determine the optimal locations of the reflux, the boilup and the tray in which the catalyst is fed into the column, so that the number of reactive trays can also be determined (Viswanathan and Grossmann 1993). In order to ensure that the solution provides only one suitable structure, the sum of each set of binary variables has to be equal to one. The restriction equations are as follows:

\[
\sum_{i=1,2,\ldots} y_{i}^{\mathrm{reflux}} = 1
\tag{4}
\]
\[
\sum_{i=1,2,\ldots} y_{i}^{\mathrm{boilup}} = 1
\tag{5}
\]
\[
\sum_{i=1,2,\ldots} y_{i}^{\mathrm{catalyst}} = 1
\tag{6}
\]
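A minimal sketch of constraints (4)-(6) in Pyomo, for illustration only (the authors work in gPROMS; the tray count and variable names here are assumptions):

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.trays = pyo.RangeSet(1, 20)                     # illustrative tray count
m.y_reflux = pyo.Var(m.trays, domain=pyo.Binary)  # reflux entry tray
m.y_boilup = pyo.Var(m.trays, domain=pyo.Binary)  # boilup entry tray
m.y_cat = pyo.Var(m.trays, domain=pyo.Binary)     # catalyst (NaOH) feed tray

# Eqs. (4)-(6): exactly one tray is selected for each role
m.eq4 = pyo.Constraint(expr=sum(m.y_reflux[i] for i in m.trays) == 1)
m.eq5 = pyo.Constraint(expr=sum(m.y_boilup[i] for i in m.trays) == 1)
m.eq6 = pyo.Constraint(expr=sum(m.y_cat[i] for i in m.trays) == 1)
```

The trays lying between the selected catalyst feed and the boilup location then define the reactive section, which is how the number of reactive trays is determined.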

The high influence of the CXENONE concentration on the quality of nylon fibres limits the operating conditions of the distillation column; in addition, a high purity of ONE is desired and is used as a constraint in the optimisation step (>99 %).

6. Conclusions
The model developed in this work can be used to predict the behaviour of a reactive distillation column in the purification of cyclohexanone and cyclohexanol, aimed at reducing the amount of 2-cyclohexen-1-one, which can affect the quality of nylon fibres. The NRTL parameters obtained experimentally are included in the model. The model is validated against experimentally obtained compositions in an experimental setup, in order to guarantee that both the trend and the order of magnitude of the concentration of each compound can be explained and predicted by the model. A methodology for the optimisation of the system is proposed as a way to find the most suitable operating conditions and design of the distillation column; using gPROMS, the system can be optimised under real operating conditions.

References
D. A. Burlone, R. M. Sink, J. Lanza and K. Gust (2006). A Method of Minimizing Aldehyde-Based Impurities in a Process Stream. WO/2006/087347.
T. Keller and A. Górak (2013). Modelling of homogeneously catalysed reactive distillation processes in packed columns: Experimental model validation. Computers & Chemical Engineering 48: 74-88.
D. Lorenzo, A. Santos and A. Romero (2015). Vapor–Liquid Equilibria of Cyclohexanone + 2-Cyclohexen-1-one and Cyclohexanol + 2-Cyclohexen-1-one, Validated in a Packed Column Distillation. Journal of Chemical & Engineering Data.
D. Lorenzo, A. Santos, E. Simon and A. Romero (2013a). Kinetics of Alkali-Catalyzed Self-Condensation of Cyclohexanone. Ind. Eng. Chem. Res. 52: 2257-2265.
D. Lorenzo, A. Santos, E. Simon and A. Romero (2013b). Kinetics of Alkali-Catalyzed Condensation of Impurities in the Cyclohexanone Purification Process. Ind. Eng. Chem. Res. 52,45: 15780-15788.
A. Romero, A. Santos and P. Yustos (2004). Effect of Methyl-δ-valerolactams on the Quality of ε-Caprolactam. Ind. Eng. Chem. Res. 43: 1557-1560.
R. Taylor and R. Krishna (2000). Modelling reactive distillation. Chemical Engineering Science 55,22: 5183-5229.
J. Viswanathan and I. E. Grossmann (1993). Optimal Feed Locations and Number of Trays for Distillation Columns with Multiple Feeds. Industrial & Engineering Chemistry Research 32,11: 2942-2949.
L. Wyatt, A. H. Benneker and A. P. H. Schouteten (2004). Process for reducing the aldehyde concentration in a mixture comprising cyclohexanone and one or more aldehydes. Koninklijke DSM N.V., Neth. 8 pp.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50035-7

Multimodal Green Food Supply Chain Design and Planning under Uncertainty

Ana Carla Gomes a, Tânia Pinto-Varela a*, Ana Paula Barbosa-Póvoa a

a Center for Management Studies, CEG-IST, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-101 Lisboa, Portugal
*[email protected]

Abstract
Food supply chains (FSC) have received increased attention in recent years. The inherent characteristics of the products, combined with the associated logistics activities, require an approach beyond the traditional one: specific food supply chain features need to be considered, namely the customer perception of freshness, an intangible value with a high impact on sales, as well as the environmental aspects associated with these systems. This work proposes a comprehensive approach for the design and planning of food supply chains, where supply chain lead times are explored and integrated with transportation mode selection, environmental impacts and demand uncertainty. A multi-objective approach is developed considering the maximization of an economic indicator and the minimization of the environmental impacts as well as of the FSC lead time. An example is used to show the application of the methodology.

Keywords: Food Supply Chain; Design Planning; Multimodal; Environmental impacts; Shelf-life.

1. Introduction
Food Supply Chains (FSC) are complex systems involving several entities that manage perishable products from suppliers to final customers. The applicability of traditional methodologies to these systems has proven insufficient to accommodate their specificities: the traditional assumptions of unlimited product shelf-life and deterministic demand are no longer suitable. Product freshness is nowadays a main customer decision factor, resulting in the need to optimize supply chain lead times, especially concerning the logistics aspects. Additionally, FSC currently drive the development of their sustainability by exploring the minimization of food waste and environmental impacts within an uncertain market demand. Such concerns lead to the need for a comprehensive approach where supply chain lead times, environmental impacts and economic benefits are accounted for in an integrated form. In the related literature, Soysal et al. (2014) suggested a multi-objective model to minimize the total logistics costs and the total amount of greenhouse gas emissions caused by transportation; later on, the same authors extended this work to deal with demand uncertainty (Soysal et al. 2015). Amorim and Almada-Lobo (2014) proposed a model that considers product freshness when dealing with logistics transportation, and Etemadnia et al. (2015) presented a Mixed-Integer Linear Programming (MILP) model that integrates the location of a food hub with bimodal transportation decisions.


In the proposed work, a multi-objective approach is also explored, using the ε-constraint method. A mixed-integer linear programming (MILP) formulation is developed based on the generic Resource-Task Network (RTN) representation, where the environmental impacts associated with the transportation modes and with the FSC energy consumption are quantified through the Eco-indicator 99. Market uncertainty is dealt with through a two-stage stochastic approach. A relevant aspect to be explored in food supply chains is the supply chain lead time, which nowadays has an important impact on the consumer perception of product freshness; the FSC lead time is defined in this context as the total time required to deliver all the products to the final consumers. The objectives considered explore, within multimodal distribution scenarios, the relation between the FSC lead time, its environmental impact and the economic indicator (profit) under an uncertain market environment. Summarizing, Table 1 presents a brief comparison between the recent state-of-the-art research and the proposed work, highlighting the gaps in the subject and what the proposed work aims to contribute.

Table 1 – Comparison between recent state-of-the-art literature and the proposed model

                              Objective functions                 Transport   Uncertainty
                              Economic   Environment   Food       options
Soysal et al. (2014)          ✓          ✓
Amorim & Almada-Lobo (2014)   ✓                        ✓
Etemadnia et al. (2015)       ✓                                   ✓
Soysal et al. (2015)          ✓          ✓                                    ✓
Proposed model                ✓          ✓              ✓         ✓           ✓

2. Modelling framework
As mentioned before, the present work combines several well-established methodologies/techniques: the RTN representation, the Eco-indicator 99, a two-stage stochastic approach and the ε-constraint method.

2.1. The Resource-Task Network
The RTN representation is a general and conceptually simple methodology constituted by tasks and resources that form a bipartite directed graph (Pantelides 1994). The resources are defined uniformly, with no differentiation between them, and are classified into renewable and non-renewable. The non-renewable resources characterize all sorts of materials, utilities, etc. On the other hand, the renewable resources represent all existing technological resources in the SC network, such as warehouses, distribution centers, food hub facilities, transportation resources and production facilities. A task represents operations that consume and/or produce a specific set of resources, such as the set of operations that constitutes the production process of a product. Finally, the interaction between tasks and resources constitutes the RTN.

2.2. The Eco-indicator 99
The Eco-indicator 99 characterizes three damage categories: human health, ecosystem quality and resources. The approach involves three main steps: an inventory of all relevant emissions, resource extractions and land-use of all processes within the life cycle; the calculation of the damages that these emissions cause to Human Health, Ecosystem Quality and


Resources; and, finally, the weighting of these three damage categories. For the inventory, an LCA approach is used. In this paper, the damage to human health is calculated, considering all processes and activities over the FSC network that consume fuel and electricity.

2.3. The two-stage stochastic model
The two-stage stochastic approach is used to account for demand uncertainty. The resulting model is an extension of the deterministic model proposed by Pinto-Varela et al. (2011) and incorporates demand uncertainty through the definition of three possible scenarios – optimistic, expected and pessimistic. This approach divides the decision variables into two types: first-stage and second-stage variables. The first-stage variables are decisions that cannot be revised once the uncertainty is revealed. In this work, the first-stage variables are related to the strategic design decisions, namely the binary variables defined for the choice of the location of each entity (factories, warehouses, food hub, distribution centers) and the binary variables related to the selection of the physical distribution structure, since the distribution activities play a significant role in food SC. The second stage is defined by the continuous variables that can be revised after the scenario occurrence; these variables are related to the design of the FSC entities, production planning, distribution planning and inventory.

2.4. The ε-constraint method
The ε-constraint method is used to solve the multi-objective model developed. A Pareto-efficient front is generated where the different objective solutions are considered. This allows the decision maker to select any solution of the Pareto frontier, depending on the relative importance given to each objective function. The procedure to formulate the ε-constraint method is as follows:

- Solve the maximization and minimization of the objective functions F2 and F3, to set their upper (b2max and b3max) and lower (b2min and b3min) bounds;
- Define the number of steps, nsteps;
- Define the ε interval:

\[
\Delta\varepsilon_{i} = \frac{b_{i}^{max} - b_{i}^{min}}{nsteps}, \quad i = 2, 3
\tag{1}
\]

- Solve the model as:

\[
\begin{aligned}
\text{Maximize} \quad & F_{1} \\
\text{s.t.} \quad & F_{2} \le b_{2}^{max} - w\,\Delta\varepsilon_{1}, \quad w = 0, \ldots, nsteps \\
& F_{3} \le b_{3}^{max} - w\,\Delta\varepsilon_{2}, \quad w = 0, \ldots, nsteps
\end{aligned}
\tag{2}
\]

- Define the Pareto frontier through the characterization of the efficient solution set.
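The procedure maps directly onto a small driver loop. The sketch below assumes a hypothetical routine solve_model(eps2, eps3) that maximizes F1 subject to F2 ≤ eps2 and F3 ≤ eps3 and returns the objective triple, or None if infeasible; a full grid over both ε values is swept here, whereas the notation above uses a single counter w.

```python
def epsilon_constraint_front(solve_model, b2_min, b2_max, b3_min, b3_max, nsteps):
    """Enumerate candidate Pareto points following Eqs. (1)-(2)."""
    d_eps2 = (b2_max - b2_min) / nsteps   # Eq. (1), i = 2
    d_eps3 = (b3_max - b3_min) / nsteps   # Eq. (1), i = 3
    front = []
    for w2 in range(nsteps + 1):
        for w3 in range(nsteps + 1):
            eps2 = b2_max - w2 * d_eps2   # tightening bound on F2, Eq. (2)
            eps3 = b3_max - w3 * d_eps3   # tightening bound on F3, Eq. (2)
            point = solve_model(eps2, eps3)  # hypothetical MILP call
            if point is not None and point not in front:
                front.append(point)
    return front  # dominated points can be filtered out afterwards
```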

3. Design problem
The problem under study can be summarized as follows.

Given:
FSC design and planning aspects: a superstructure with all the possible locations of each FSC entity, i.e. all technological resources such as food plants,


warehouses, distribution centers and food hubs, the latter focused on cross-docking activities; a set of food products; the RTN representation of the FSC network; the maximum and minimum capacity of each technological resource; a set of markets where products are made available to final customers under uncertain market demand; a fixed planning horizon, where one time unit in the timeline corresponds to 5 days.
Economic aspects: fixed and variable costs for each plant, warehouse, food hub and distribution center; raw material costs; product selling prices in each market; transportation costs.
Environmental aspects: environmental specifications and parameters associated with each logistic activity within the FSC and with each transportation mode; electricity and fuel consumption of the FSC entities' activities and of the transportation modes.

Determine:
The FSC network, characterized by: the network design – how many plants, warehouses, food hubs and distribution centers are to be opened, their locations and capacities; the quantity of products sold per market/scenario; the transportation path of food along the SC.
The distribution, characterized by: the transportation mode selection, defined in terms of the number and type of transportation resources; its environmental impacts; and the associated lead time.

In order to:
Define the Pareto frontier for the FSC design and planning, exploring multimodal transportation under uncertain market demand and considering three objectives: maximization of the economic indicator, and minimization of the FSC lead time and of its environmental impacts.

4. Example
The example characterizes a food SC producing and distributing a family of products to two main retailers. Two potential locations for industrial facilities are considered: Barcelona and Madrid. At each location there is a set of technological resources available to install. The production processes require raw materials supplied from Minas Gerais, Brazil. The equipment to be installed at the Barcelona plant transforms the raw materials into final products in one period. In turn, the Madrid plant, using the same process, takes two periods, since the technology potentially installed in Madrid is less innovative, but it has the benefit of 50 % lower fixed costs. The two retailers' locations are fixed in Lisbon and Oporto. The demand uncertainty is defined through three scenarios: expected, optimistic and pessimistic (described in Table 2), with associated probability values of 0.4, 0.3 and 0.3, respectively.

Table 2 – Market demand per uncertainty scenario, min:max in t

Market   Pessimistic   Expected   Optimistic
Lisbon   5:60          50:200     150:350
Oporto   2:75          60:300     255:450

There is also the possibility to install a hub facility in Azambuja, a logistics center near Lisbon, where cross-docking activities are performed. Each connection between the SC entities may use more than one transportation mode option, leading to a multimodal SC network, as shown in Table 3. Regarding the distribution centers, the fixed and variable costs are similar. The General Algebraic Modeling System (GAMS) 23.7.3 with CPLEX 12.3 was used to implement the developed model.

Table 3 – Transportation options for the flows between SC entities, per origin (rows) and destination (columns)

Origin \ Destination   Madrid                       Barcelona                    Azambuja   Oporto                        Lisbon
Brazil                 Road and Sea; Road and Air   Road and Sea; Road and Air   –          –                             –
Madrid                 –                            –                            Road       Road and Air; Road and Rail   Road; Road and Air
Barcelona              –                            –                            –          Road                          Road
Azambuja               –                            –                            –          Road                          Road

For simplicity, only the Pareto frontier between profit and SC lead-time is presented (Figure 1); the environmental values are shown in Table 5. In point A, the SC topology is characterized by the installation of both plants and the hub. The facilities receive raw materials from Minas Gerais through airfreight. The Madrid plant supplies both markets, but the Barcelona plant supplies only the Oporto market. Both plants use road transportation. The food hub located in Azambuja supplies the Lisbon market by truck. In this case, the Lisbon market receives 156 t and the Oporto market 264 t. This topology presents the shortest SC lead-time with expensive transportation modes, leading to a non-profitable structure. Points B and C are also characterized by the installation of both plant facilities. In point B, both plants are supplied through airfreight and Madrid also receives by sea, leading to a higher lead-time but a more profitable structure. In point C, the plants are supplied by sea, and the remaining transportation is by truck. In point B, the SC network supplies the Oporto and Lisbon markets with 444 t and 156 t, respectively, while in point C it supplies 444 t and 348 t to Oporto and Lisbon, respectively. Finally, the SC network of point D involves a single plant facility, located in Madrid. The raw materials are transported by sea freight to the plant, which supplies both markets through road transportation with 348 t and 444 t for Lisbon and Oporto, respectively. The designs of the retailers' warehouses are shown in Table 4.

Figure 1 – Trade-off between profit and SC lead-time. m.u. – monetary unit; t.u. – time unit (1 t.u. = 5 days). The markers identify the supplier, the plants, the food hub and the retailers.

Table 4 – Retailers' warehouses, in t

         A     B     C     D
Lisbon   156   156   348   348
Oporto   264   444   444   444


The most profitable food SC, structure D, is obtained at the expense of the highest lead time. It requires 11, 41 and 66 trucks to transport raw materials from the port to Madrid for the pessimistic, expected and optimistic scenarios, respectively. From Madrid to Lisbon, 5, 16 and 29 trucks are needed, and from Madrid to Oporto, 6, 25 and 37 trucks. The results regarding annual profit and environmental impacts are summarized in Table 5. Points with a lower SC lead-time present a lower annual profit, as faster transportation modes are chosen, representing the options with the highest costs and the highest environmental impact (higher Eco-indicator 99 values). As the lead-time increases, more ecological and less expensive transportation modes are selected, decreasing the environmental impacts while increasing the profit.

Table 5 – Main results in terms of annual profit and environmental impacts

                       A          B         C           D
Annual profit (m.u.)   -494,502   728,822   1,412,756   1,425,648
ECO 99 (mPt)           369,428    263,578   58,338      52,107
Lead time (week)       13         16        18          21

5. Conclusions
Food supply chain network design and planning is explored through multi-objective optimization. The developed model identifies alternative optimal food supply chain networks, characterized by the number and type of transportation modes, their environmental impacts and the supply chain lead time, the latter reflecting the customers' perception of product freshness. Uncertainty in market demand is also modeled, here with a two-stage formulation in which only three scenarios are considered. The present work is a starting point for further work on FSC, in which the presence of uncertainty, the lead time quantification and the environmental impact calculation should be considered in more detail. Furthermore, more representative instances should be solved, aiming to deal with real FSCs.

References
Amorim, P. and B. Almada-Lobo (2014). "The impact of food perishability issues in the vehicle routing problem." Computers and Industrial Engineering 67: 223–233.
Etemadnia, H., S. J. Goetz, P. Canning and M. S. Tavallali (2015). "Optimal wholesale facilities location within the fruit and vegetables supply chain with bimodal transportation options: An LP-MIP heuristic approach." European Journal of Operational Research 244(2): 648-661.
Pantelides, C. C. (1994). Unified frameworks for optimal process planning and scheduling. Proc. Second Conf. on Foundations of Computer Aided Operations, CACHE Publications.
Pinto-Varela, T., A. P. F. D. Barbosa-Povoa and A. Q. Novais (2011). "Bi-objective optimization approach to the design and planning of supply chains: Economic versus environmental performances." Computers & Chemical Engineering 35(8): 1454-1468.
Soysal, M., J. M. Bloemhof-Ruwaard and J. G. A. J. van der Vorst (2014). "Modelling food logistics networks with emission considerations: The case of an international beef supply chain." International Journal of Production Economics 152: 57-70.
Soysal, M., J. M. Bloemhof-Ruwaard, R. Haijema and J. G. A. J. van der Vorst (2015). "Modeling an Inventory Routing Problem for perishable products with environmental considerations and demand uncertainty." International Journal of Production Economics 164: 118-133.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50036-9

Derivative-Free Chemical Process Synthesis by Memetic Algorithms Coupled to Aspen Plus Process Models

Maren Urselmann a*, Tim Janus a, Christophe Foussette b, Stephen Tlatlik c, Axel Gottschalk c,d, Michael T. M. Emmerich e, Thomas Bäck b,e and S. Engell a,f

a Zentrum für Beratungssysteme in der Technik, Dortmund e.V., Joseph-von-Fraunhofer-Str. 20, 44227 Dortmund, Germany
b divis intelligent solutions GmbH, Joseph-von-Fraunhofer-Str. 20, 44227 Dortmund, Germany
c SUPREN GmbH, Joseph-von-Fraunhofer-Str. 20, 44227 Dortmund, Germany
d Bremerhaven University of Applied Sciences, An der Karlstadt 8, 27568 Bremerhaven, Germany
e LIACS, Leiden University, Niels Bohrweg 1, 2333CA Leiden, The Netherlands
f TU Dortmund University, Emil-Figge-Str. 70, 44227 Dortmund, Germany
*[email protected]

Abstract Design optimization problems of chemical processes are characterized by a large number of discrete and continuous design decisions, highly non-linear models and multi-modal continuous subspaces. In our previous work, we introduced a derivative-free memetic algorithm (MA) for design optimization which is a combination of an evolutionary algorithm (EA) and a derivative-free optimization (DFO) method. The EA addresses the global optimization of all design variables, whereas the DFO method locally optimizes the continuous sub-problems that arise by fixing the discrete variables with respect to design specifications. The MA calls the simulation software Aspen Plus to simulate the design alternatives. In this contribution, the MA is extended to consider two objectives. Therefore, the selection procedure of the MA is replaced by a multi-objective selection and the continuous optimization problem which is addressed by the DFO method is reformulated. Keywords: chemical process synthesis, Aspen Plus process models, multi-objective design optimization, memetic algorithms, derivative-free optimization

1. Introduction
Chemical process synthesis in industrial practice is usually done by an interdisciplinary team in an iterative process of testing ideas and comparing the performances of alternative process configurations. This task comprises the generation of flowsheets, which is usually guided by expert knowledge and heuristics, simulation studies, techno-economic and ecological evaluation, etc. Often multiple criteria are considered, such as an economic criterion and the energy efficiency of the process. Optimization of the process design parameters is conducted only for few alternatives, if at all. Such design optimization problems are characterized by a large number of continuous and discrete design parameters and highly non-linear models. Even if only one objective is considered, the existence of many local optima is a challenge. A heuristics-guided search will usually not lead to optimal or Pareto-optimal designs, and some promising structures may remain undiscovered. To intensify and to shorten this time-consuming and therefore cost-intensive iterative process, systematic methods for the optimization of the design of chemical processes have been developed in


research for decades (Yeomans and Grossmann, 1999). Methods that are based on mathematical programming require an explicit equation-based formulation, i.e. a time-consuming generation of mathematical models of all suitable design alternatives, of the process constraints and of the process models is needed. If realistic unit operation models are used, computation times may become large and even convergence just to local optima may not be achieved if the initial values are not chosen well. In Urselmann and Engell (2015), a memetic algorithm (MA) for the efficient solution of design optimization problems was presented that explores a large number of local optima and exhibits a robust behavior in terms of computation time and solution quality. This MA couples an evolutionary algorithm (EA) which performs a global search in the space of the discrete and continuous design decisions with local rigorous optimization of the continuous parameters to improve the design that is proposed by the EA. In industrial practice, an explicit equation-based formulation of the process models which is required by the approaches described above is not available because commercial and well-proven process simulation software (flowsheet simulators) are applied. Therefore in Urselmann et al. (2015) the MA was connected with the commercial software Aspen Plus. As derivatives are not provided by the simulator, gradient-based local optimization was replaced by a derivative free optimization (DFO) method. The DFO method optimizes the continuous design decisions with respect to the production cost such that constraints on the process, e.g., a minimal purity of the product are satisfied. In this contribution, the derivative-free MA that calls the simulation by a flowsheet simulator is extended to consider two conflicting objective functions: the specific investment cost and the specific consumption of primary energy. We demonstrate the MA for the design of the dehydration of an ethanol and water mixture using benzene as an entrainer. The performance of the novel algorithm and the Pareto-optimal results are presented and discussed.

2. Design Optimization of Chemical Processes
The design of a chemical production process is defined as the choice of the chemical production route, the choice of the pieces of equipment and their connection by flows of materials, and the selection of the operating conditions of the different elements. The goal of the design optimization of chemical processes is to find the optimal design among all suitable alternatives with respect to one or more objective functions. Mathematically, a single-objective design optimization problem can be formulated as a mixed-integer nonlinear problem (MINLP):

\[
\begin{aligned}
\min \; & f(x, y, z) & (1) \\
\text{s.t.} \; & g_{i}(x, y, z) \le 0, \quad i = 0, \ldots, n_{i} \in \mathbb{N}_{0} & (2) \\
& h_{j}(x, y, z) = 0, \quad j = 0, \ldots, n_{j} \in \mathbb{N}_{0} & (3) \\
& x \in \mathbb{R}^{n_{x}}, \; y \in \mathbb{R}^{n_{y}}, \; z \in \mathbb{N}_{0}^{n_{z}}, \quad n_{i}, n_{j}, n_{x}, n_{y}, n_{z} \in \mathbb{N} & (4)
\end{aligned}
\]

where f is the objective function, the continuous variables x denote the so-called state variables of the process, e.g. temperatures or flows inside the pieces of equipment, whereas the continuous design variables and operating conditions, such as the feed flows that enter a piece of equipment, are denoted by y. The discrete variables z correspond to structural design choices, e.g. the number of elements of a piece of equipment. Constraints for feasible designs or specifications on the product are described by the inequalities (2), while a system of nonlinear equations, the process model (3), describes the behavior of the processing system (solving the mass and heat balances for a given geometry).

2.1. Derivative-Free Design Optimization
If a process simulation software is used, the process model (3) is defined by the simulator by parameterizing standard model blocks and connecting them as specified by the discrete variables.


The simulator solves the equation system (3) (in the sequel called simulation) with fixed discrete and continuous design decisions z and y and determines the state variables x. Inequality constraints (2) which comprise state variables cannot be evaluated before the simulation and therefore become implicit. If the simulation does not converge, the design alternative cannot be evaluated. In the context of derivative-free optimization, implicit constraints are called derivative-free, and simulation failures are denoted as unrelaxable constraints.

2.2. Multi-Objective Design Optimization
If more than one criterion is considered for evaluating the design of a process, there is usually no single alternative that is optimal with respect to all criteria if the objectives are in conflict with each other. The goal of the design optimization in the multi-objective case is therefore to determine a set of design alternatives that yields the best compromise solutions for all specified objective functions. The optimization of the different objectives is done simultaneously, and the mathematical formulation (1) changes to min f_k(x, y, z), k = 1, ..., n ∈ ℕ, where n is the number of objectives. In this case, a design alternative is evaluated by an n-dimensional vector of objective function values. Two alternatives s and s′ cannot be ordered if each of them is better than the other with respect to one of the objectives. If one of the designs, s, is better with respect to at least one of the objectives and not worse with respect to the others, s dominates s′. In a set of solutions, an alternative s is called non-dominated if none of the other solutions of the set dominates s. The optimal compromise solutions are those design alternatives which are non-dominated with respect to the whole feasible search space, i.e. with respect to all feasible design alternatives. These solutions are called Pareto-optimal. The set of the Pareto-optimal solutions is called the Pareto-set, and the set of their corresponding n-dimensional objective function vectors is called the Pareto-front. The goal of a multi-objective algorithm for design optimization is therefore to find non-dominated design alternatives that approximate the Pareto-front as accurately as possible. The choice of the 'best' design alternative among the Pareto-set depends on the preferences of the design engineer.
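The dominance relations just defined translate directly into a non-dominated filter; the following minimal sketch (for minimization, with objective vectors stored as tuples) is illustrative only:

```python
def dominates(fa, fb):
    """True if fa dominates fb (minimization): no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example with bi-objective vectors: (1, 3) and (2, 1) form the front
print(non_dominated([(1, 3), (2, 1), (2, 3), (3, 2)]))  # [(1, 3), (2, 1)]
```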

3. A Multi-Objective Memetic Algorithm for Design Optimization
The term Memetic Algorithm was first introduced by Moscato (1989) to describe population-based hybrid evolutionary algorithms (EA) which are coupled with local refinement strategies. The MA presented in this contribution is based on a (μ + λ)-evolution strategy (ES) (Bäck, 1996), a special form of an EA which includes a control mechanism for the mutation strength, called self-adaptation. The local search is done by the solver NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search), based on the generalized pattern search framework introduced by Audet and Dennis (2006).

3.1. The Single-Objective Memetic Algorithm
In Figure 1 the structure of the MA is shown.

Figure 1: Structure of the MA

Individuals of the MA are represented by the design variables of the process. The initialization of the individuals of the first population is done with a uniform random distribution of the variables within their feasible domain. To evaluate the individuals, the design proposed by the ES is given as a starting point to the DFO method, which performs a local search in the space of the continuous design variables while the discrete design decisions are fixed. In each iteration of the DFO method, the simulator Aspen Plus is called to determine the state variables



x of the design. After the simulation, the objective function value(s) of the design and the violation of the constraints are computed. The result of the local optimization, which is a local optimizer in the best case and a locally improved individual otherwise, is returned to the ES. The genes of the original individual are replaced by the genes of the improved solution. From the population, individuals are then selected randomly to become parents. Problem-specific operators (see Urselmann et al. (2015)) are applied to the parents to generate the offspring generation, which is evaluated in the same fashion as the individuals of the first generation. In the single-objective case, the population of the next generation is selected by the so-called μ-selection, i.e. the best μ individuals of all λ offspring and of those individuals that do not exceed a maximal age of κ generations survive. In the multi-objective case, the selection is replaced as described in the next section.

Constraints are handled differently on the different layers of the MA. To incorporate constraints into the ES, an extended selection order is used, since EA do not rely on fitness values but rather on their ranking. Feasible solutions are always preferred to infeasible solutions. In the single-objective case, feasible solutions are sorted by their fitness and infeasible solutions are ranked based on their distance to feasibility, if available. If a simulation does not converge for a given design, the corresponding individual is not considered for selection. The local solver NOMAD handles the derivative-free constraints, i.e. the design specifications, by the Progressive Barrier approach. Solutions for which the simulation does not converge are ignored. As NOMAD is not able to handle starting points that produce simulation failures, a Latin hypercube sampling (LHS) is applied in this case. The first solution found by the LHS that fulfils the unrelaxable constraints is given to NOMAD as the starting point instead.

3.2. Multi-Objective Extension of the MA
To extend the MA to consider multiple objective functions, the fitness-based selection procedure of the ES has to be replaced by a procedure that forces the search towards the Pareto-front. In addition, a strategy for handling multiple criteria within the DFO method has to be developed.

The Multi-Objective Selection Procedure: Selection is now based on non-dominated sorting combined with the so-called hypervolume measure or S metric, which was introduced by Beume et al. (2006). Thereby the ES becomes a so-called SMS-EMOA (S metric selection evolutionary multi-objective optimization algorithm), which is one of the state-of-the-art methods in multi-objective evolutionary optimization. The hypervolume measure is frequently applied as a quality measure for comparing the results of EMOA, i.e. different approximations of a Pareto-front can be compared by comparing their dominated hypervolumes, where higher values are better than lower values. The basic idea of the SMS-EMOA is to select individuals for the next generation that maximize the dominated hypervolume of the population, such that the search is guided towards good approximations of the Pareto-front. In the case of the MA, the primary selection criterion is the non-dominated sorting, i.e. non-dominated individuals are always preferred to dominated individuals. The secondary criterion is the number of dominating points, i.e.
the number of individuals in the actual population that dominate an individual, where individuals with a lower number of dominating points are preferred. Only if the first two criteria are not sufficient to select the required number of points uniquely, the dominated hypervolume is determined and the individuals with the highest contribution to the dominated hypervolume are selected, i.e. the S metric serves as the tertiary selection criterion. If the population does not comprise a sufficient number of feasible individuals, infeasible individuals are selected based on the violation of the constraints.

Single-Objective Local Search with the ε-Constraint Method: The second extension needed to consider multiple objectives is the new definition of the continuous optimization problem addressed by the DFO method. In contrast to other memetic algorithms, the local search of the MA for design optimization is performed on a subset of the optimization variables of the MA. The DFO method solves continuous sub-problems of the original problem that depend on the discrete design variables which are proposed by the EA, i.e. one local search explores the continuous


search space of only one design alternative, although the configuration may not have the potential to yield optimal performance with respect to the objectives. Therefore, the effort spent within the local search should be kept as small as possible. Because of the multi-modality of the continuous sub-problems, good local solutions may be missed by following this guideline. But the MA is able to generate the same configuration more than once such that these solutions are not excluded from the search. To keep the effort small, the DFO method continues to address a single-objective problem. For the transformation of the original bi-objective problem formulation, the ε-constraint method introduced by Haimes et al. (1971) is applied, because it can handle convex and concave Pareto-fronts. One of the objective functions is defined as the primary objective which is minimized, the other objectives are expressed as inequality constraints of the form min_y f_1(y), s.t. f_i(y) ≤ ε_i, i = 2, ..., m+1. In the bi-objective case, two decisions have to be made: One of the two objectives has to be chosen as the primary objective f_1, which is done randomly within the MA. For the second objective f_2 a value for ε_2 has to be defined. One possibility is to choose the function value of the starting point y_start of the secondary objective, i.e. ε_2 = f_2(y_start). But in the case of the MA for design optimization, the starting point usually does not fulfill the design specifications (2) and is not feasible. Therefore, two phases are defined for the local search: The first phase comprises the search for a feasible solution y^f that fulfills the unrelaxable constraints and the design specifications. If the point generated by the EA does not fulfill the unrelaxable constraints, the LHS is called to generate a first solution for which the simulation converges. This point is given as a starting point to the DFO method, which solves the problem: min_y Σ_i max[0, g_i(x, y, z)]. In the second phase the primary objective is minimized with respect to the design specifications and the new inequality constraint: min_y f_1 s.t. f_2 ≤ f_2(y^f), g_i(x, y, z) ≤ 0. The new constraint is also derivative-free and is handled by the Progressive Barrier approach.
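To make the two-phase local search concrete, the following Python sketch outlines how the two sub-problem objectives could be assembled. The names `f1`, `f2` and `g` are hypothetical stand-ins for the Aspen Plus based evaluation of a fixed column configuration (they are not the authors' implementation), and the penalty folding only mimics the constraint handling that NOMAD performs internally with the Progressive Barrier.

```python
# Hedged sketch of the two-phase epsilon-constraint local search described
# above. f1, f2 and g are hypothetical placeholders for the simulator-based
# objective and constraint evaluations.

def phase1_objective(y, g):
    """Feasibility phase: min_y sum_i max(0, g_i(y)), i.e. the total
    violation of the design specifications."""
    return sum(max(0.0, gi) for gi in g(y))

def phase2_objective(y, f1, f2, g, eps2, penalty=1e6):
    """epsilon-constraint phase: minimize the primary objective f1 subject
    to f2(y) <= eps2 and g_i(y) <= 0. NOMAD treats these derivative-free
    constraints with the Progressive Barrier; the penalty term here only
    illustrates that behavior."""
    violation = max(0.0, f2(y) - eps2) + sum(max(0.0, gi) for gi in g(y))
    return f1(y) + penalty * violation

# Usage: once phase 1 returns a feasible point y_f, the bound on the
# secondary objective is fixed to eps2 = f2(y_f) and phase 2 is started
# from y_f, with the primary objective chosen randomly as in the MA.
```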

4. The Case Study

In this contribution, the design of an extractive distillation column for the separation of a water and ethanol mixture with benzene as an entrainer is considered. The extractive distillation column is the first column of the flowsheet of the whole dehydration process illustrated in Figure 2. The bottom product is ethanol. The distillate of the column is a two-phase ternary mixture that is fed to a decanter to separate a phase rich in benzene, which is recycled to the extractive distillation, and a phase rich in water, which is fed to the second column where the entrainer is regenerated. The bottom product of this column is water and the top product comprises a ternary mixture that is recycled to the extractive distillation column. In this contribution, the case study is reduced to the design of the extractive distillation column. Therefore the recycle streams were fixed to values taken from a convergent simulation of the whole process and were defined as feed Ḟ3 (recycle from the second column) and Ḟ4 (recycle from the decanter). A minimal ethanol purity of 99 w-% of the bottom product has to be fulfilled. The two objectives considered here are the specific investment cost (SIC) in [€/t] and the specific primary energy consumption (SPE) in [GJ/t], which are related to the mass of the ethanol stream. Four different feed streams Ḟ1–Ḟ4 are fed to the column. The amounts and compositions of the feed streams are: Ḟ1: 7325.34 kg/h (7.7 w-% water, 92.3 w-% ethanol), Ḟ2: 61.88 kg/h (100% benzene), Ḟ3: 6617.97 kg/h (5.1 w-% water, 60.9 w-% ethanol, 34.0 w-% benzene), Ḟ4: 19207.4 kg/h (0.9 w-% water, 14.9 w-% ethanol, 84.3 w-% benzene). The distillation column is operated at atmospheric pressure. The design variables are the number of stages N, the locations of the four different feeds n1–n4, the reflux ratio rr and the distillate-to-feed ratio rd2f. The feasible domain of


the latter is restricted due to mass balances. The lower and upper bounds for the design variables are: N_min = 4, N_max = 60, n_min = 2, n_max = N − 1, rr_min = 0.01, rr_max = 10.0, rd2f_min = 0.6889 and rd2f_max = 0.99.
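Written out as data, the design space above might be encoded as follows; this is an illustrative sketch only, and the variable names are placeholders rather than the authors' code:

```python
# Illustrative encoding of the design variables and their bounds as listed
# above; the feed-stage upper bound depends on the current number of stages N.
design_bounds = {
    "N":    (4, 60),          # number of stages (discrete)
    "rr":   (0.01, 10.0),     # reflux ratio (continuous)
    "rd2f": (0.6889, 0.99),   # distillate-to-feed ratio (continuous)
}

def feed_stage_bounds(N):
    """Bounds for each feed location n1..n4 in a column with N stages."""
    return (2, N - 1)
```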

5. Results

Five test runs of the multi-objective MA were performed for the design optimization of the case study on a Windows PC with an Intel Core i7 CPU with 3.40 GHz and 16 GB RAM. The parameters of the MA were set to μ = λ = 15. The initial mesh size of NOMAD was set to 25% of the feasible domain for each coordinate. The local search was stopped after 120 iterations or if a minimal mesh size of 10^−6 was reached. The MA was terminated after 80 generations. In Figure 3, the approximations of the Pareto-front of the test runs of the MA are illustrated.

Figure 3: Results of the MA (SPE [GJ/t] plotted against SIC [EUR/t] for test runs 1-5)

The results of the five test runs are close to each other. The difference in the normalized dominated hypervolumes of the worst and the best test run is 0.03%. In Figure 4, the design alternatives that lead to the two non-dominated extremal points are shown.

Figure 4: Design alternatives that correspond to the extremal points

In both cases, the optimizer reduced the reflux to the minimum allowed and the recycle from the decanter is fed on the first stage under the condenser. The heating and the cooling duty of both columns are nearly identical, but the specific energy consumptions differ because of the different amounts of product. Good compromise solutions were found with low SIC and only 1% difference to the optimal SPE value.

6. Conclusions In this contribution, the derivative-free MA introduced by Urselmann et al. (2015) that was developed for single-objective design optimization was extended to consider multiple objectives. The new MA was applied to optimize the design of an extractive distillation column. It shows a robust behavior in different test runs and finds structurally different design alternatives that realize a compromise between the specific investment cost and the specific primary energy consumption.

References

C. Audet, J. E. Dennis Jr., 2006. Mesh adaptive direct search algorithms for constrained optimization. SIAM Journal on Optimization 17 (1), 2–11.
T. Bäck, 1996. Evolutionary Algorithms in Theory and Practice. Oxford University Press, NY.
N. Beume, B. Naujoks, M. T. M. Emmerich, 2006. SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research 181, 1653–1669.
Y. Haimes, L. Lasdon, D. Wismer, 1971. On a bicriterion formulation of the problems of integrated system identification and system optimization. IEEE Transactions on Systems, Man, and Cybernetics 1, 296–297.
P. A. Moscato, 1989. On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Tech. rep., Caltech, Pasadena, California.
M. Urselmann, S. Engell, 2015. Design of memetic algorithms for the efficient optimization of chemical process synthesis problems with structural restrictions. Computers & Chemical Engineering 72, 87–108.
M. Urselmann, C. Foussette, T. Janus, S. Tlatlik, A. Gottschalk, M. T. M. Emmerich, T. Bäck, S. Engell, 2015. Derivative-Free Design Optimization of Chemical Processes by a Memetic Algorithm. In: Proceedings of the UK Workshop on Computational Intelligence.
H. Yeomans, I. E. Grossmann, 1999. A systematic modelling framework of superstructure optimization in process synthesis. Computers & Chemical Engineering 23, 709–731.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50037-0

Price Adjustment in Price-based Coordination Using Quadratic Approximation

S. Wenzel¹, R. Paulen¹, S. Krämer², B. Beisheim² and S. Engell¹

¹ Process Dynamics and Operations Group, Department of Biochemical and Chemical Engineering, TU Dortmund University, Emil-Figge-Straße 70, 44227 Dortmund, Germany
² INEOS Köln GmbH, Alte Str. 201, 50769 Köln, Germany
E-mail: {Simon.Wenzel, Radoslav.Paulen, Sebastian.Engell}@bci.tu-dortmund.de, {Stefan.Kraemer, Benedikt.Beisheim}@ineos.com

Abstract

Site-wide allocation of shared resources among the units of (partially) autonomous plants can be realized by price-based coordination. In an iterative procedure, a central site-coordinator adjusts the transfer prices for shared resources based on the network (im)balance until the equilibrium price λ* is found which assures the achievement of network balance. This process is known as the tâtonnement of the exchange (Uzawa, 1960) and has the benefit of keeping local cost information of the plants confidential. One of the drawbacks of the tâtonnement process is its slow rate of convergence. In addition, it is very sensitive to the choice of the step size parameter in the price update. In this contribution, we propose a novel price update based on a quadratic approximation of the network balance residual for a network of production plants with strictly convex cost functions. We perform simulation studies for an academic example and an industrial case study of balancing steam networks in an integrated petrochemical production site to show the much improved rate of convergence in comparison to classical subgradient price updates.

Keywords: Dual decomposition; Hierarchical management; Decentralized systems; Price-based coordination; Derivative-free optimization

1. Introduction

Site-wide coordination of shared resources among semi-autonomous plants can be realized by applying price-based algorithms, where the (local) objectives of the plants are modified to penalize the overconsumption and reward the production of the shared resources by assigning prices to them. In an iterative procedure a central coordinator then adjusts the prices of the shared resources and evaluates the residual of the global constraint, i.e., the network balance, until the equilibrium price vector λ* is found. This process is referred to as the tâtonnement process (Uzawa, 1960). One advantage is that information about the cost functions of the local plants is kept confidential, since only the shared resource utilization has to be shared with the coordinator (Cheng et al., 2007). In the process industry this situation can commonly be found, e.g., in large production sites or at industrial parks, where different business units share a site-wide utility network.

Acknowledgement: The authors acknowledge the support of the European Commission under the grant agreement number 611281 (FP7 ICT project DYMASOS). The authors would like to thank Shaghayegh Nazari and Lukas Maxeiner from the Process Dynamics and Operations Group (DYN) at TU Dortmund University for stimulating discussions on the material presented in this contribution.


The drawback of the classical price-based coordination is the slow rate of convergence of the tâtonnement process, which strongly depends on the update step of the price vector, which usually is based on the subgradient method (Kozma et al., 2015). In this contribution, we propose a new approach that combines classical price-based coordination with elements of derivative-free optimization to improve the rate of convergence. The local plants are probed with different price vectors for the shared resources and respond with their optimal utilization of the shared resources. The coordinator collects the responses and approximates the residuals of the network balances as a quadratic function of the prices. It can be shown that if the local objectives can locally be represented as strictly convex quadratic functions around the site-wide optimum, which is a reasonable assumption for industrial problems when tracking of inputs or production targets is included with a quadratic term, then the approximation is exact. The optimum of the approximated quadratic function yields the equilibrium price vector λ* and thus the site-wide optimum. For non-strictly convex functions the alternating direction method of multipliers (ADMM, Boyd et al. (2011)) can be applied to convexify the local subsystems, as for example shown for the dynamic coordination of a petrochemical production site by Stojanovski et al. (2015). We perform simulation studies based on two examples, of which one is a steam network balancing problem in the integrated petrochemical production site of INEOS in Köln, to compare the performance of the proposed scheme with classical subgradient price updates.

2. Problem Formulation

We consider a cooperative shared resource allocation problem that can be formulated as a separable mathematical program, where we assume that the local objective function of the i-th agent (plant, subsystem) can be expressed as a quadratic function ½ x_i^T P_i x_i + q_i^T x_i around the agent-wide optimum, with symmetric and positive definite matrices P_i ≻ 0 ∈ R^{n_i×n_i}. The overall coordination problem can thus be stated as

  min_{x_1, x_2, ..., x_n}  Σ_{i=1}^{n} ( ½ x_i^T P_i x_i + q_i^T x_i )    (1a)
  s.t.  Σ_{i=1}^{n} A_i x_i = 0,    (1b)

where x_i is a decision vector of the i-th agent and the matrices A_i ∈ R^{m×n_i} represent mappings of local decisions into consumption or production of m shared resources. The Lagrangian of problem (1) incorporates the relaxed global constraint and reads as

  L(x, λ) = Σ_{i=1}^{n} ( ½ x_i^T P_i x_i + q_i^T x_i + λ^T A_i x_i ),    (2)

where λ ∈ R^m stands for the vector of Lagrange multipliers. The optimality conditions of problem (1) are inferred by differentiating the Lagrangian (2) and can be stated as

  ∇_{x_i} L = P_i x_i + q_i + A_i^T λ = 0,    (3a)
  ∇_λ L = Σ_{i=1}^{n} A_i x_i = 0.    (3b)

Upon reaching these conditions, we obtain the optimal decisions of the plants subject to the availability of the shared resources, and the convergence of the Lagrange multipliers to their optimal values λ*, which are unique because of the aforementioned assumptions and can be interpreted as optimal prices of the shared resources.


2.1. Price-based coordination

In the classical price-based coordination the prices for the shared resources are iteratively adjusted until convergence is achieved. For a given price λ^k, where the superscript k denotes the iteration index, the i-th plant optimizes the local objective function of the form (2) to find the locally optimal decision vector x_i*(λ^k):

  x_i*(λ^k) = arg min_{x_i} { ½ x_i^T P_i x_i + q_i^T x_i + λ^{k,T} A_i x_i } = −P_i^{−1} ( q_i + A_i^T λ^k ).    (4)

By iteratively updating the prices λ^k based on the knowledge of the constraint residuals A_i x_i^k = A_i x_i*(λ^k) only, the centralized optimum can be found. The classical subgradient price update rule of the price-based coordination computes a new price λ^{k+1} via the following step

  λ^{k+1} = λ^k + α^k Σ_{i=1}^{n} A_i x_i^k,    (5)

where α^k is a (fixed) step size parameter. With the aforementioned assumptions on convexity and a suitable α^k (see, e.g., Bertsekas, 2015) the scheme converges to an optimal price vector λ*, since the residual of the constraint vanishes, i.e., the shared resources are balanced.

2.2. Linking prices of shared resources and their balance

In the following we derive a relation between the prices λ and the residual of the network balance. The target is to exploit this relation in the updating procedure of the prices to enhance the rate of convergence. Taking the squared 2-norm of the residual of (1b) and introducing the optimal price-dependent local decisions of (4), we arrive at the closed-form expression of a function f_r(λ), whose minimizer is λ*:

  f_r(λ) = ‖ Σ_{i=1}^{n} A_i x_i ‖₂² = λ^T Q λ + h^T λ + v,    (6)

with

  Q = ( Σ_{i=1}^{n} A_i P_i^{−1} A_i^T ) ( Σ_{j=1}^{n} A_j P_j^{−1} A_j^T ),   h^T = 2 ( Σ_{i=1}^{n} q_i^T P_i^{−1} A_i^T ) ( Σ_{j=1}^{n} A_j P_j^{−1} A_j^T ),    (7)
  v = ( Σ_{i=1}^{n} q_i^T P_i^{−1} A_i^T ) ( Σ_{j=1}^{n} A_j P_j^{−1} q_j ).    (8)
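The relation (6)-(8) can be checked numerically. The following sketch, with randomly generated toy data (not the case-study data), verifies that the minimizer of f_r coincides with the optimal multipliers λ* of problem (1):

```python
import numpy as np

# Toy check of Eqs. (6)-(8): the minimizer of f_r equals the optimal
# multipliers lambda* of problem (1). All data are random and illustrative.
rng = np.random.default_rng(1)
n, m = 5, 2
P = [np.diag(rng.uniform(1.0, 4.0, m)) for _ in range(n)]   # SPD P_i
q = [rng.normal(size=m) for _ in range(n)]
A = [rng.normal(size=(m, m)) for _ in range(n)]             # here n_i = m

M = sum(Ai @ np.linalg.inv(Pi) @ Ai.T for Ai, Pi in zip(A, P))
b = sum(Ai @ np.linalg.solve(Pi, qi) for Ai, Pi, qi in zip(A, P, q))

# KKT conditions (3): sum_i A_i x_i*(lambda) = -(b + M lambda) = 0
lam_kkt = -np.linalg.solve(M, b)

# Eqs. (7)-(8) with M symmetric: Q = M M, h = 2 M b; the stationary point
# of f_r(lambda) = lambda^T Q lambda + h^T lambda + v solves 2 Q lambda = -h.
Q = M @ M
h = 2.0 * (M @ b)
lam_fr = np.linalg.solve(2.0 * Q, -h)

assert np.allclose(lam_kkt, lam_fr)
```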

3. Price Adjustment Using Quadratic Approximation

The idea of the proposed strategy is to exploit the structure of (6) in the case when detailed information about the plants, i.e., A_i, P_i, q_i, ∀i, is not available to the coordinator. Because of the missing information, the use of the analytic solution of (6) is not possible. Instead, we use quadratic approximations of the observed residuals as a function of the transfer prices in order to extract the required information from probing the local plants with different price vectors. In the field of derivative-free optimization, quadratic functions are used to build surrogate models of not explicitly available cost functions. The optimization is then performed based on the quadratic function, which is iteratively updated until convergence is achieved (Conn et al., 2009). A general quadratic function q(λ, ξ) can be expressed as

  q(λ, ξ) = Σ_{i=1}^{m} ( Σ_{j=1}^{m} a_{i,j} λ_i λ_j ) + 2 Σ_{i=1}^{m} b_i λ_i + c,   a_{i,j} = a_{j,i},    (9)


where λ ∈ R^m is a vector of independent variables and ξ ∈ R^{n_ξ} is a vector of parameters. To fully determine a quadratic model based on empirical data, n_ξ = (m+1)(m+2)/2 points are required. If the available number of points is greater than n_ξ, then a quadratic function can be fitted by regression. Regression over more points than required can be beneficial for noisy or inaccurate data or non-quadratic mappings (Gao et al., 2015). The proposed Algorithm 1 collects τ_0 ≥ n_ξ data points by probing the plants with different price vectors λ^k. We refer to the resulting residuals of the balance as measurements. In the simplest case, classical subgradient price updates are used in the initial phase (see (5)). Another possibility would be sampling in a meaningful range of prices. Once τ_0 points have been collected, a quadratic fit is constructed based on the last τ ≤ k points to empirically find an approximation of (6) (Gao et al., 2015). The next price vector λ^{k+1} is found by minimizing f_r^k(λ). If the approximation is exact, the algorithm converges after one price update. In practice however, numerical errors or a not well-poised set of points can lead to a non-exact quadratic approximation (Conn et al., 2009). In this case we propose to successively update the quadratic function with new points, checking the network balance until convergence is achieved. Since the quadratic approximation f_r^k(λ^k) is updated at each iteration, we call the approach recursive quadratic approximation (RQA).

Algorithm 1: Recursive quadratic approximation (RQA). k ≤ k_max is the iteration index, ε_r is the termination tolerance, λ^0 denotes the initial prices, α^k is the step size parameter, τ_0 and τ define the start of the algorithm and the considered points.

 1: Initialize: k = 0, k_max, ε_r, λ^0, α, τ_0, τ
 2: while ¬ Convergence do
 3:   for all agents i = 1 : n do
 4:     x_i^k = arg min_{x_i} L_i(x_i, λ^k)    ▷ Agents (subsystems, plants) optimize based on λ^k.
 5:   end for
 6:   r(k) = Σ_{i=1}^{n} A_i x_i^k    ▷ Evaluate residual.
 7:   if (k == k_max) ∨ (‖r(k)‖₂² < ε_r) then    ▷ Convergence check.
 8:     Convergence = True
 9:   end if
10:   if k < τ_0 − 1 then
11:     λ^{k+1} = λ^k + α r(k)    ▷ Subgradient price update.
12:   else
13:     Fit f_r^k(λ) based on the data points (‖r(j)‖₂², λ(j)), for j = max(0, k − τ), ..., k    ▷ Quadratic fit (see (6)).
14:     λ^{k+1} = arg min_λ f_r^k(λ)    ▷ Proposed price update.
15:   end if
16: end while
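A compact Python transcription of Algorithm 1 is sketched below, assuming quadratic agents that respond according to Eq. (4). The problem data are randomly generated stand-ins rather than the case-study data, and the quadratic fit here uses only the last n_ξ points:

```python
import numpy as np
from itertools import combinations_with_replacement

# Hedged sketch of Algorithm 1 (RQA) for agents with quadratic objectives
# as in (1), whose price response follows Eq. (4). Illustrative data only.
rng = np.random.default_rng(0)
n, m = 10, 3
P = [np.diag(rng.uniform(1.0, 5.0, m)) for _ in range(n)]
q = [rng.normal(size=m) for _ in range(n)]
A = [np.eye(m) for _ in range(n)]

def residual(lam):
    """r(lam) = sum_i A_i x_i*(lam), the network imbalance at prices lam."""
    return sum(-A[i] @ np.linalg.solve(P[i], q[i] + A[i].T @ lam)
               for i in range(n))

def fit_and_minimize(lams, fvals):
    """Least-squares fit of f_r(lam) = lam^T Q lam + h^T lam + v (Eq. (6))
    and return the stationary point of the fitted quadratic."""
    idx = list(combinations_with_replacement(range(m), 2))
    Phi = [[lam[i] * lam[j] for i, j in idx] + list(lam) + [1.0]
           for lam in lams]
    coef, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(fvals), rcond=None)
    Q = np.zeros((m, m))
    for c, (i, j) in zip(coef[:len(idx)], idx):
        Q[i, j] = Q[j, i] = c if i == j else c / 2.0
    h = coef[len(idx):len(idx) + m]
    # gradient 2 Q lam + h = 0 (lstsq guards against a poorly poised fit)
    return np.linalg.lstsq(2.0 * Q, -h, rcond=None)[0]

n_xi = (m + 1) * (m + 2) // 2          # points needed for a full quadratic
lam = rng.uniform(0.0, 1.0, m)         # initial prices lambda^0
alpha, eps_r, k_max = 1e-2, 1e-8, 200
lams, fvals = [], []
for k in range(k_max):
    r = residual(lam)
    lams.append(lam.copy()); fvals.append(float(r @ r))
    if fvals[-1] < eps_r:                          # convergence check
        break
    if k < n_xi - 1:                               # initial phase (tau_0 = n_xi)
        lam = lam + alpha * r                      # subgradient update, Eq. (5)
    else:                                          # RQA step
        lam = fit_and_minimize(lams[-n_xi:], fvals[-n_xi:])
print(f"finished after {k + 1} iterations, residual {fvals[-1]:.2e}")
```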

4. Case Studies

We consider two examples of different nature. The first example is an academic coordination problem of 100 plants with five shared resources and the second example represents a resource allocation problem of the integrated petrochemical production site of INEOS in Köln. Here the local plants can exhibit non-quadratic responses to the prices of the shared resources caused by the activation of local constraints.

Example 1. (100 plants with five shared variables) In this example 100 plants with strictly convex quadratic cost functions according to (1) are investigated. The initial prices λ^0 are chosen randomly from a uniform distribution in [0, 1). The step size α is fixed at 7.5·10^−4 (larger values lead to diverging behavior), ε_r is set to 10^−1, the constraint matrices A_i are identity matrices I ∈ R^{5×5}, and τ_0 = n_ξ, τ = k_max = 200.

Figure 1: Simulation results for 100 systems with five shared resources (SG: subgradient updates, 1st QA: results after the first quadratic approximation, RQA: recursive quadratic approximation). (a) Residual f_r = ‖Σ_i A_i x_i‖₂² vs. iterations; (b) prices λ_i vs. iterations.

Example 2. (Coordination problem in an integrated petrochemical production site) In this example, a resource allocation problem is considered for the management of steam in the integrated petrochemical production site of INEOS in Köln. We apply price-based coordination to balance the 5 and 30 bar steam networks to which a power plant, a cracker and an ammonia plant are connected (Stojanovski et al., 2015). The aim is to steer the plants to an operating point where the produced amount of steam matches the consumed amount. The cost functions of the plants contain linear economic contributions and quadratic production demand tracking. The price-based coordination is initialized with prices λ^0 = [1.0, −0.5], the step size α is set to 5·10^−3 (best found tuning without diverging behavior) and the termination tolerance is ε_r = 10^−1. The parameters for the start and the considered points are τ_0 = n_ξ and τ = k_max = 200.

5. Simulation Results

In Fig. 1 the evolution of the residual f_r and the prices λ_i with the number of iterations is shown for Example 1. During the first 21 iterations (n_ξ = 21), the price updates are conducted according to (5). Based on the information collected in the initial phase, the optimal price vector λ* is afterwards found by quadratic approximation. It can be seen that the first quadratic approximation (1st QA) finds prices that cause an even larger imbalance than the last one found by the subgradient update. This is due to a not well-poised set of regression points, created by the subgradient update in the initial phase. However, if the quadratic approximation iteratively makes use of the points collected within the next few iterations (RQA), the scheme finds an optimal price vector that satisfies the constraint after less than 40 iterations. In contrast, the subgradient method (SG) does not converge within the first 200 iterations. It shows a typical trajectory where initially the imbalance is reduced fast, but reaching a high precision requires a large number of iterations due to slow convergence. The simulation results for Example 2 are shown in Fig. 2. The classical subgradient price update and the RQA show a similar behavior as for the first example. In order to apply subgradient price updates, a small step size is required, because the plants are locally constrained and react sensitively to small changes in the prices for the shared resources. Again it can be seen that the first quadratic approximation is inexact and does not lead to convergence to the optimum. The proposed strategy needs four additional iterations to converge to an optimal price vector λ*, while the classical subgradient update approaches the optimal price only asymptotically with a very slow rate of convergence.


Figure 2: Simulation results for the industrial case study of the integrated petrochemical production plant of INEOS in Köln (SG: subgradient updates, 1st QA: results after the first quadratic approximation, RQA: recursive quadratic approximation). (a) Residuals f_r vs. iterations; (b) prices λ_i vs. iterations.

The results for the recursive quadratic approximation in Example 2 show that the relation (6) holds. Hence, it can be assumed that in the range of the probed prices the local plants do not operate at active local constraints.

6. Conclusions

In this contribution, a novel strategy for updating the prices in price-based coordination schemes is proposed that exploits the link between the prices and the residual of the shared resource balances and performs a quadratic approximation of this relationship. It is demonstrated, on a medium-size test problem and for an industrial case study, that the proposed strategy can significantly reduce the number of iterations needed to find the optimal price vector λ*. Future research will focus on improving the choice of the regression points for the quadratic approximation as well as on addressing the generalization to non-strictly convex functions. Further, it will be investigated how the proposed strategy performs in the case of active local constraints, and whether it must be modified in such situations.

References

D. Bertsekas, 2015. Convex Optimization Algorithms, 2nd Edition. Athena Scientific.
S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, 2011. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning 3 (1), 1–122.
R. Cheng, J. Forbes, W. Yip, 2007. Price-driven coordination method for solving plant-wide MPC problems. Journal of Process Control 17 (5), 429–438.
A. R. Conn, K. Scheinberg, L. N. Vicente, 2009. Introduction to Derivative-Free Optimization. Society for Industrial and Applied Mathematics.
W. Gao, S. Wenzel, S. Engell, 2015. Modifier Adaptation with Quadratic Approximation in Iterative Optimizing Control. In: Proceedings of 2015 European Control Conference (ECC). Linz, Austria, pp. 2532–2537.
A. Kozma, C. Conte, M. Diehl, 2015. Benchmarking large-scale distributed convex quadratic programming algorithms. Optimization Methods and Software 30 (1), 191–214.
G. Stojanovski, L. Maxeiner, S. Krämer, S. Engell, 2015. Real-time Shared Resource Allocation by Price Coordination in an Integrated Petrochemical Site. In: Proceedings of 2015 European Control Conference (ECC). Linz, Austria, pp. 1492–1497.
H. Uzawa, 1960. Market Mechanisms and Mathematical Programming. Econometrica 28 (4), 872–881.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50038-2

Is Wood Waste Only for Burning? A Methodology for Best Pathway Identification of Waste Recovery

Gisèle Abi Chahla a,*, Assaad Zoughaib a, Cong-Toan Tran a and Romain Farel b

a Centre for Energy efficiency of Systems (CES), Mines ParisTech, PSL Research University, Palaiseau, 91120, France
b Paris-Saclay Efficacité Energétique (PS2E), Research and Education Institute, Les Loges-en-Josas, 78354, France
[email protected]

Abstract

Industry has always sought to maximize on-site synergies using energy and mass integration methods, but largely independently of each other. However, considering a component for valorisation only in its original form implies missing the opportunities to reuse the component in another form. Conversion brings the possibility of turning non-usable waste into another usable energy or material through chemical processes, and allows its reinsertion into the system. Hence the inclusion of these processes enables exploring new paths for the recovery of waste streams and bridging the gap between the two integration methods. This paper introduces a methodology which couples energy and mass integration techniques through conversion processes, with the aim of finding the best valorisation pathway of waste streams in a local context. In this methodology, the valorisation pathways are driven by the local demand, leading to the maximization of synergies. Indeed, modelling the local demand profile indicates the feasible pathways by identifying the needs. The best pathway is then determined through a detailed economic evaluation. The proposed methodology is demonstrated on a case study considering a large industrial site where waste wood valorisation is assessed. Since waste wood has multiple valorisation pathways through its conversion to energy or to another high added-value material, the proposed methodology serves as a tool for the identification of the best economic valorisation solution. Each of these conversion pathways is modelled and validated with literature results. In this case study, waste wood valorisation through heat and power generation, hydrogen generation or methane generation is challenged using economic criteria. For each possible waste wood conversion system, the obtained superstructure is analyzed through energy and mass integration methods for each set of objectives.

Keywords: Industrial ecology, Energy integration, Mass integration, Process design.

1. Introduction

In recent years, the implementation of industrial ecology strategies has never ceased to increase due to the growth of environmental concerns, including the depletion of natural resources and water, air and soil pollution. Industrial ecology is focused on the identification of all the possible energy and material exchanges which enable the mitigation of the used resources. Many studies have been conducted to assess the feasibility of heating networks based on industrial heat recovery (Farhat et al., 2015) and similarly


for material reuse and recycling (Ghazouani et al., 2015). However, considering a component for valorisation only in its original form implies missing the opportunities to reuse the component in another form. Conversion brings the possibility of turning non-usable waste into another usable energy or material through chemical processes, to achieve higher efficiency and a more rational reuse of resources. The scope of the previously developed methodologies in this context is limited. They either do not take into account mass integration and consider only the optimization of one predefined pathway on a local scale (Gassner and Maréchal, 2009), or do not consider the heat transportation networks (Gerber et al., 2013). This paper surpasses these limitations by introducing a methodology which couples energy and mass integration techniques through conversion processes on a territorial scale, with the aim of finding the best valorisation pathway of waste streams while considering the local demand.

2. Methodology description

The objective of this methodology is to maximize the circular economy of a territory by searching for new valorisation opportunities for streams that are non-usable in their original form, and thus to convert them into streams which can be recovered. This is made possible by the conversion processes. First the actors of the studied territory are defined and an exhaustive study of their energy and material flows is conducted. These data serve as input for the energy integration and the mass integration problems, which reveal the synergies between the different actors. The streams of material or energy which are not reused are the waste streams of the territory; by seeking to transform them into usable products that answer the local demand, more economy can be achieved since the cost of the utility of such products is avoided. For the identified waste streams, an inventory of their conversion processes is constructed. The systems converting the wastes into recoverable streams, based on the local demand of the park, are modelled with the aim of extracting the required input of the integration problems. First a local energy integration of each of the conversion processes is conducted to find internal synergies. Then the possible conversion pathways of non-usable streams are established before generating the superstructure scenarios which form the inputs of the energy and material integration of the territory. This step allows the determination of the resource demand of the studied territory after the synergies are carried out. The final step serves to evaluate the investment and operating costs of each pathway with the aim of choosing the best conversion system in terms of the defined objectives. It is noteworthy that the energy integration is performed with the methodology developed by Farhat et al. (2015), which designs heat transfer networks between multiple plants by minimizing exergy consumption and total cost, and that of Ghazouani et al. (2015) is used for the material integration.

3. Case Study

The proposed methodology is demonstrated on a case study evaluating an industrial park formed by three plants, denoted as Site 1, Site 2 and Site 3 in this paper. Even after searching for synergies between them in order to reduce the utility demand and thus the economic expenses, 12 t/h of wood waste are discharged without any valorisation opportunity, since none of the sites has a demand for it. Conversion brings the possibility of turning the non-usable wood waste into another usable energy or material through chemical processes, and allows its reinsertion into the system, which is otherwise unattainable. In this paper, waste wood valorisation through heat and power


generation, hydrogen generation or methane generation is challenged, since the studied park has hydrogen and heat demands and assuming that the electricity and the methane can be sold to the network. Each of these four possible waste wood conversion systems is modelled and validated with literature results, and for each the obtained superstructure is analyzed through energy and mass integration methods; then the best economic valorisation solution is identified.

3.1. Conversion Systems Modelling

First the block flow diagrams are drawn for the four conversion systems, and then these blocks are modelled using the Modelica language in the Dymola® software by their physical equations of mass and energy balance, while taking into account the chemical reaction taking place inside each of the reactors. The extracted simulation results of the developed models served to construct the flow sheets of the conversion systems. Figure 1 shows the flow sheet of the conversion of wood waste (biomass) to hydrogen and carbon dioxide. As illustrated, the 12 t/h of biomass is gasified (Ji et al., 2009) with steam and converted to a gaseous mixture containing tar, which is removed in a steam methane reformer (SMR) (Jess, 1996) before entering the water-gas shift (WGS) membrane reactor (Feng et al., 2007), where 1.15 t/h of hydrogen is drawn through the selective H2 membrane by a vacuum pump. The remaining gas mixture is burned in a combustion chamber with pure oxygen, and then the output gas goes through a separation step to obtain 16.39 t/h of carbon dioxide.

Figure 1: Case 1 - Flow sheet of wood waste to hydrogen and carbon dioxide conversion system

The second possible valorisation path of the 12 t/h of wood waste is shown in Figure 2, where it is converted to methane by sending the output of the gasifier to a methanation unit (Er-rbib and Bouallou, 2014) before condensing the gas and separating it from carbon dioxide. Through this conversion system 2.82 t/h of methane is produced and 7.5 t/h of carbon dioxide is recovered.

Figure 2: Case 2 - Flow sheet of wood waste to methane and carbon dioxide conversion system

The third studied system is the conversion of wood waste into heat through its combustion with air. The flue gases are then cooled and allow the recuperation of 53.28 MW of heat. In the last conversion option the combustion chamber is followed by a steam cycle which enables the production of 2.88 MW of electricity and 50 MW of heat.

Figure 3: Case 3 - Flow sheet of wood waste to heat conversion system

Figure 4: Case 4 - Flow sheet of wood waste to electricity and heat conversion system (Cogeneration)

3.2. Energy and Material Integration Results

The competition for the best valorisation pathway of the 12 t/h of waste wood discharged by the studied industrial park has four candidates, which are the conversion systems described earlier. With the aim of conducting energy and material integration, first the superstructures of both problems are constructed. The energy superstructure is formed by the grand composite curves of each site of the territory and that of the considered conversion system in each case. As for the material superstructure, it is generated by the type and flow rate of the produced material.

Figure 5: Heat and Material network results - Case 1: wood to hydrogen

The territorial energy integration, which is based on the pinch method, finds the potential synergies between the actors of the problem and thus the minimization of the utility demand. As an example of the generated networks, Figure 5 illustrates that of the first case, where wood is converted to hydrogen. As shown, each site transmits its heat excess to four steam networks at different temperatures. These then supply the energy demand of the territory using the recovered heat and, if this is not enough, utilities are added either on the network or directly on the site when the heat demand exceeds the steam networks' temperatures. For the material demand, Site 1 of the studied territory presents a 1.5 t/h demand of hydrogen. For Case 1, shown in Figure 5, 1.15 t/h of


this demand is recovered from the conversion site and the rest is bought and considered as a utility, unlike the other cases where all the hydrogen demand is supplied by a utility since there is no local production of it. Heat integration is conducted locally, before generating it on the territorial level, to find possible synergies on the system scale. All the studied systems present a heat excess except that of wood waste to methane (Case 2), which has a 10.31 MW heat deficit. Consequently the total heat demand of the territory will be the same except in Case 2. The steam networks found by applying a territorial integration enable the recovery of the heat surplus of each site, including the conversion system, to send it back where it is required. Table 1 lists the networks' heat recovery from the conversion system and between sites, the networks' utility demand for each case, as well as the utility need above the networks' temperatures which is delivered directly on site. The Base Case represents the energy integration of the park formed by the three sites without any valorisation of wood waste via a conversion system. Apparently the wood to heat path (Case 3) shows the highest recovery, thus the lowest utility requirement, followed by the cogeneration pathway (Case 4). Notably the high temperature heat deficit in Case 2 induces a utility demand on site. Thereby the four conversion systems of wood showed a better heat integration than the Base Case when added to the studied park, since they raise the recovered energy while maintaining a lower utility demand, which has a linear cost function between 150 and 300 °C (utility cost = (0.2/1500) × Temperature − 0.01 €/kWh) and above 300 °C the utility costs 0.03 €/kWh.

Table 1: Recovery and utility demand found by the territorial energy integration (unit: MW)

                             Base Case   Case 1          Case 2                               Case 3   Case 4
Utility on site              0           2.97 (Site 1)   2.97 (Site 1) + 10.31 (Conv. Sys)    0        0
Utility on network           72.84       53.77           54.14                                18.47    28.04
Recovery from conv. system   0           14.97           13.59                                53.28    50.47
Recovery between sites       18.14       19.26           20.27                                19.22    12.47
Total                        90.98       90.98           90.98 + 10.31                        90.98    90.98
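For reference, the stated utility cost model can be written as a small function; this is a direct transcription of the cost expression above, with the reasonable assumption that it is expressed per kWh:

```python
def utility_cost_per_kwh(T_celsius):
    """Utility cost in EUR/kWh as stated in the text: linear between 150
    and 300 degC, constant 0.03 EUR/kWh above 300 degC."""
    if 150.0 <= T_celsius <= 300.0:
        return (0.2 / 1500.0) * T_celsius - 0.01
    if T_celsius > 300.0:
        return 0.03
    raise ValueError("cost model stated only for T >= 150 degC")

# utility_cost_per_kwh(150) == 0.01 and utility_cost_per_kwh(300) == 0.03,
# so the linear part joins the constant part continuously at 300 degC.
```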

3.3. Economic Evaluation Results

Even though, in terms of resources, adding a conversion unit for the wood waste seems attractive, an economic evaluation is essential to find the cost-effective solutions. Therefore a cost evaluation of each path is performed by calculating first the capital cost of the conversion system itself (estimated by literature-based correlations for each operation), of its local heat exchangers (found by a local heat integration) and then that of the steam networks (pipes and heat exchangers) required to implement the identified synergies of the territory. Afterwards the operating costs are evaluated while considering the conversion system's resources (water and oxygen), the heat utilities of the park and its material demand, which is hydrogen in this studied park. Table 3 shows the economic results of the four studied cases and that of the Base Case, found by using the costs of the resources listed in Table 2. The waste wood is assumed to be available for free. The revenue is gained by selling the products of the conversion system of wood waste, which are electricity, carbon dioxide and methane. By removing this revenue from the operating cost, the total variable cost is deduced.

Table 2: Cost of resources

Component        Cost    Unit
Water            3.09    €/m³
Oxygen           20      €/t
Hydrogen         2,200   €/t
Methane          40      €/MWh
Electricity      55.3    €/MWh
Carbon dioxide   50      €/t


Case 2 represents the largest capital cost compared to the other cases. Having the second lowest total variable cost after Case 1, Case 2 needs more than ten years to win over the others, but can never be a better solution than Case 1. By comparing Case 3 and Case 4, the latter can be eliminated since it presents 2.23 M€ more in capital cost together with a higher variable cost, due to the low assumed electricity price. Although in terms of capital cost the Base Case is the winner, it has the highest total variable cost, while Case 1 has the lowest. Hence Case 3 tops the Base Case economically after only one year of operation, while beyond four years Case 1 wins over the others.

Table 3: Economic evaluation of the four studied cases compared to the Base Case over 12 months (unit: M€)

                          Base Case   Case 1   Case 2   Case 3   Case 4
Capex conversion system   0           61.70    70.91    3.37     4.60
Capex heating network     9.79        9.94     10.09    8.39     9.39
Total capital cost        9.79        71.64    81.00    11.76    13.99
Annual operating cost*    25.49       9.38     25.67    20.66    22.13
Annual revenue            0           4.92     12.58    0        0.93
Total variable cost       25.49       4.46     13.09    20.66    21.20

*The annual operating time is taken as 6000 h/y.
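The break-even statements above can be reproduced from Table 3 with a simple cumulative-cost model (capital plus years times total variable cost); a short sketch:

```python
# Cumulative cost over t years = total capital cost + t * total variable
# cost, with the Table 3 figures in M EUR.
capital  = {"Base": 9.79, "Case 1": 71.64, "Case 2": 81.00,
            "Case 3": 11.76, "Case 4": 13.99}
variable = {"Base": 25.49, "Case 1": 4.46, "Case 2": 13.09,
            "Case 3": 20.66, "Case 4": 21.20}

def cumulative_cost(case, years):
    return capital[case] + years * variable[case]

for t in (1, 4, 10):
    best = min(capital, key=lambda c: cumulative_cost(c, t))
    print(f"after {t} year(s): cheapest option is {best}")
# -> Case 3 already beats the Base Case after one year, and Case 1
#    becomes the cheapest option from about four years onwards.
```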

4. Conclusion and Outlook

The proposed methodology allows the exploration of new valorisation pathways which are not attainable without the conversion of the non-usable streams of an industrial park. It is applied on a case study of wood waste valorisation in an industrial park formed of three sites with a need for heat and hydrogen. This study showed that such systems can considerably decrease the operating cost of the territory through the synergies that are made possible by the conversion of the discharged waste streams. The methodology's scope could be expanded to include the definition of the optimal location of the conversion system in the studied territory while dealing with more than one waste stream.

References

H. Er-rbib, C. Bouallou, 2014. Methanation catalytic reactor. Comptes Rendus Chimie, 17(7-8), 701-706.
A. Farhat, A. Zoughaib, K. El Khoury, 2015. A new methodology combining total site analysis with exergy analysis. Computers & Chemical Engineering, 82, 216-227.
W. Feng, P. Ji, T. Tan, 2007. Efficiency penalty analysis for pure H2 production processes with CO2 capture. AIChE Journal, 53(1), 249-261.
M. Gassner, F. Maréchal, 2009. Methodology for the optimal thermo-economic, multi-objective design of thermochemical fuel production from biomass. Computers & Chemical Engineering, 33(3), 769-781.
L. Gerber, S. Fazlollahi, F. Maréchal, 2013. A systematic methodology for the environomic design and synthesis of energy systems combining process integration, Life Cycle Assessment and industrial ecology. Computers & Chemical Engineering, 59(0), 2-16.
S. Ghazouani, A. Zoughaib, S. Pelloux-Prayer, 2015. Simultaneous heat integrated resource allocation network targeting for total annual cost considering non-isothermal mixing. Chemical Engineering Science, 134, 385-398.
A. Jess, 1996. Catalytic upgrading of tarry fuel gases: A kinetic study with model components. Chemical Engineering and Processing: Process Intensification, 35(6), 487-494.
P. Ji, W. Feng, B. Chen, 2009. Comprehensive Simulation of an Intensified Process for H2 Production from Steam Gasification of Biomass. Industrial & Engineering Chemistry Research, 48(8), 3909-3920.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50039-4

A new scheme for process simulation by optimization: distillation as an example

Anna Hoffmann a,*, Michael Bortz a, Jakob Burger b, Hans Hasse b and Karl-Heinz Küfer a

a Fraunhofer Institute for Industrial Mathematics ITWM, Optimization, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany
b University of Kaiserslautern, Laboratory of Engineering Thermodynamics, Erwin-Schrödinger-Straße 44, 67663 Kaiserslautern, Germany
[email protected]

Abstract Commercial flowsheet simulators suffer from limitations in problem formulation flexibility and numerical instabilities which potentially turn process simulation into a tedious task. In this work, we address these problems and present a new scheme for integrated process simulation and optimization. The new method is developed here for distillation but can be extended to other unit operations. It utilizes numerically stable distillation calculations based on the solution of a sequence of fixed-point problems. Using the ideas of the shooting method the process simulation is embedded into an optimization problem and hence feasible design parameters are found by applying suitable nonlinear optimization algorithms. Systematic studies of typical examples of distillation processes are presented including also flowsheets with several columns. The new method is shown to be flexible in terms of problem formulation and more robust than those presently used in process simulators. Keywords: Process simulation, optimization, fixed-point iteration, shooting method, distillation

1. Introduction

The underlying models for flowsheet simulation consist of a large number of nonlinear equations and unknowns with only a few degrees of freedom. In commercial flowsheet simulators, the user is typically required to specify a precise value for each degree of freedom. In general, this approach does, however, not reflect the user's demands. Product purities, for example, call for inequality rather than equality constraints. Also other process quantities, such as recycle compositions or operating conditions, have to be specified albeit they may actually be optimization parameters. The fact that certain values have to be fixed impedes a full exploration of process limitations and restricts the user to a small subset of the solution space. Freeing the optimization variables and changing equality to inequality constraints to extend the feasible solution space is typically done only in a subsequent optimization step which requires repeated calls of the simulation with different parameters. This work aims at a more direct way toward optimization in flowsheet simulations. The intriguing feature of the approach lies in the embedding of the flowsheet problem in an optimization problem with a small number of optimization variables and constraints. Hence, large-scale optimization solvers are not needed. The embedding is possible due to a suitable decomposition of the entire


system of equations into the unit operations, as well as tailored decomposition strategies for the different unit operations. The present work focuses on decomposition strategies for distillation processes which enable numerically stable stage-to-stage calculations using the equilibrium stage model. Stage-to-stage calculations of column profiles are frequently used in the literature, especially in combination with the simplifying assumption of constant molal overflow (CMO). One of the earliest papers in this field is the work of Lewis and Matheson (1932). For newer developments, see, e.g., Levy et al. (1985), Van Dongen and Doherty (1985), Levy and Doherty (1986), Julka and Doherty (1990), Zhang and Linninger (2004), Lucia et al. (2006), Lucia et al. (2008), and Petlyuk et al. (2015). In the present work, stage-to-stage calculations are based on rigorous mass, equilibrium, summation and heat (MESH) equations on each stage and among stages. The transition from one stage to the next is provided by solving a fixed-point problem. Combining stage-to-stage calculations with the ideas of the shooting method, which is typically used in order to solve boundary value problems for ordinary differential equations, it is possible to embed distillation processes into an optimization problem. Within this new framework, the user only needs to specify the actual demands on the process. These specifications can be incorporated by adding equality or inequality constraints to the optimization problem or suitable objective functions which attempt to fulfill certain demands, instead of being restricted to a fixed number of specified equalities when using commercial systems.

2. Methodology

2.1. Stage-to-stage calculation of distillation columns based on fixed-point iterations

A new approach is presented in which the full MESH system is decoupled, allowing for a stage-wise column calculation. For simplicity a distillation column with a total of N_S stages, one single feed on stage N_F and without any side draws is considered. For the downwards calculation from stage n to stage n−1 an equivalent fixed-point problem is derived based on the MESH equations for a control volume including the condenser and the upper part of the distillation column down to a certain stage n. For the following considerations, values for the independent design variables Q̇_C < 0, p, D, and x_D are assumed to be given. Such values can be obtained using the guidelines in Section 2.4. From the composition of the vapor phase y^n on stage n, which was calculated in a previous step, the composition of the corresponding liquid phase x^n is calculated using the extended Raoult's law. For the considered control volume the following component mass balance holds:

  F_down^n x_F + V^{n−1} y^{n−1} = L^n x^n + D x_D,   where F_down^n := { 0 for n > N_F;  F for n ≤ N_F }.    (1)

(1)

In Eq. (1) the variables V n−1 , yn−1 , and Ln are unknowns which need to be determined. Considering the same control volume the following enthalpy balance holds: n Q˙ C + Fdown hl (xF ) +V n−1 hv (yn−1 ) = Ln hl (xn ) + Dhl (xD ).

(2)

For a feasible column design, Eq. (1) and Eq. (2) have to be fulfilled. In particular, these equations n are fulfilled for a fixed-point of the function ϕdown which is defined in the following way n ϕdown (s) :=

n hl (xF ) + (D − F n )hv (yn−1 (s)) − Dhl (xD ) Q˙ C + Fdown down , hl (xn ) − hv (yn−1 (s))

where yn−1 (s) =

n Fdown s D xn + xD − xF . n n n s + D − Fdown s + D − Fdown s + D − Fdown

(3) (4)


In a fixed-point s* of φ_down^n it holds that s* = L^n. Analogously, a sequence of fixed-point problems for the calculation of a distillation column from the reboiler upwards can be derived. For the sake of brevity, the case of the upward calculation is not discussed further here.
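The downward transition, Eqs. (3)-(4), amounts to a scalar fixed-point iteration for L^n on each stage. The following Python sketch illustrates this with toy linear enthalpy models; all numbers are illustrative placeholders, not data from the paper:

```python
import numpy as np

def y_prev(s, x_n, x_D, x_F, D, F_down):
    """Vapor composition y^{n-1}(s) from the component mass balance, Eq. (4)."""
    return (s * x_n + D * x_D - F_down * x_F) / (s + D - F_down)

def phi_down(s, x_n, x_D, x_F, D, F_down, Q_C, h_l, h_v):
    """Fixed-point map for the liquid flow L^n, Eq. (3)."""
    y = y_prev(s, x_n, x_D, x_F, D, F_down)
    num = Q_C + F_down * h_l(x_F) + (D - F_down) * h_v(y) - D * h_l(x_D)
    return num / (h_l(x_n) - h_v(y))

# Toy composition-weighted enthalpies (illustrative only, no physical meaning).
h_l = lambda x: float(x @ np.array([5.0, 10.0, 8.0]))
h_v = lambda x: float(x @ np.array([45.0, 48.0, 40.0]))

x_n = np.array([0.3, 0.5, 0.2])    # liquid composition on stage n
x_D = np.array([0.1, 0.6, 0.3])    # distillate composition
x_F = np.array([0.7, 0.2, 0.1])    # feed composition
D, F_down, Q_C = 1.6, 0.0, -100.0  # above the feed stage: F_down^n = 0

s = 2.0                            # initial guess for L^n
for _ in range(200):
    s_new = phi_down(s, x_n, x_D, x_F, D, F_down, Q_C, h_l, h_v)
    if abs(s_new - s) < 1e-10:
        break
    s = s_new
print("fixed point s* = L^n ~", s)
```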

2.2. Embedding process simulation in an optimization problem

Based on the calculation scheme in Section 2.1 the unknown process variables of a distillation column can be determined with given Q̇_C, p, D, and x_D by solving a fixed-point problem on each stage. When calculating a distillation column starting from the condenser downwards, we proceed until the fixed-point problem for stage 2 is solved. This gives the concentration vector y^1, from which the composition of the corresponding liquid phase x^1 = x_B in equilibrium can be calculated. We write x_B(Q̇_C, p, D, x_D) in order to stress the dependence of this composition vector on the choice of parameters. However, not all of the MESH equations have been used so far. One possibility of including the unused equations is to use the component mass balance for the entire distillation column and to insert x_B(Q̇_C, p, D, x_D) into this equation:

  F x_F = D x_D + (F − D) x_B(Q̇_C, p, D, x_D).    (5)

Depending on the choice of Q̇_C, p, D, and x_D, the component mass balance equation will not necessarily be fulfilled, which means that x_B(Q̇_C, p, D, x_D) can also be evaluated for infeasible parameters. However, in order to obtain a feasible column design, Eq. (5) should hold. In the context of ordinary differential equations this problem can be regarded as a boundary value problem. The parameters which are imposed at the top of the column correspond to the initial values of an ODE. Furthermore, we solve a difference equation by calculating the distillation column upwards which is strongly related to the solution of an ODE with fixed step size. The aspired boundary value corresponds to the result for x_B which can be calculated a priori using the overall component mass balance. A method for solving such boundary value problems is the so-called (single) shooting method (Stoer and Bulirsch, 2005, chap. 7.3.1). One way of transferring the ideas of this method to our problem of finding a feasible column design is to formulate the problem as an optimization problem of the following form:

  min_{Q̇_C, p, D, x_D}  0
  s.t.  F x_F = D x_D + (F − D) x_B(Q̇_C, p, D, x_D).    (6)

This optimization problem is of size (N_C + 3) in terms of optimization parameters, where N_C is the number of components. The number of constraints is at least N_C. Those MESH equations that are not incorporated explicitly as constraints in the optimization problem are already enforced during the upwards or downwards calculation of the distillation column, which is needed in any iteration step of the optimization algorithm in order to determine all process variables. Due to the embedding in an optimization problem we can use suitable nonlinear optimization algorithms to find feasible optimization variables in a targeted manner, instead of iteratively guessing feasible specification values when using commercial simulators.

2.3. Simultaneous process simulation and optimization

In general, one is not only interested in an arbitrary feasible column design but in a column design which is optimal in some sense. In order to perform process optimization it is only necessary to replace the zero objective function in Eq. (6) by an objective function of interest. A feasible column design is not needed as a starting point and hence it is not necessary to perform process simulation in advance.
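As a rough illustration of the embedding (6), the sketch below wraps a toy stand-in for the stage-to-stage calculation into a constrained optimization with SciPy. Here `bottom_composition` is a hypothetical placeholder with no physical meaning, and, as a simplification for this toy example, the balance residual is driven to zero in a least-squares sense rather than imposed exactly:

```python
import numpy as np
from scipy.optimize import minimize

F, x_F = 4.0, np.array([0.7, 0.2, 0.1])   # feed flow (kmol/h) and composition

def bottom_composition(Q_C, D, x_D):
    """Toy stand-in for the stage-to-stage calculation of Section 2.1:
    a smooth map onto the composition simplex, purely illustrative."""
    w = np.exp(-np.array([0.5, 1.0, 1.5]) * D / F - 1e-3 * abs(Q_C))
    x = x_F * w
    return x / x.sum()

def balance_residual(v):
    """Residual of Eq. (6): F x_F - D x_D - (F - D) x_B(Q_C, D, x_D)."""
    Q_C, D, x_D = v[0], v[1], v[2:]
    return F * x_F - D * x_D - (F - D) * bottom_composition(Q_C, D, x_D)

objective = lambda v: float(balance_residual(v) @ balance_residual(v))
constraints = [{"type": "eq", "fun": lambda v: np.sum(v[2:]) - 1.0}]
bounds = [(-200.0, -1.0), (0.1, F - 0.1)] + [(0.0, 1.0)] * 3
v0 = np.array([-53.2, 1.30, 0.076, 0.616, 0.308])  # illustrative start values
res = minimize(objective, v0, method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x, objective(res.x))
```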


2.4. Starting values

Starting values for the optimization variables are needed for the considered optimization problems. The pressure p will usually be fixed at 1 bar. For distillation-based flowsheets, the limiting case of columns with infinite reflux and an infinite number of stages (∞/∞-analysis, Ryll et al., 2012, 2013) is of great interest. In this limiting case, the system of equations which has to be solved in order to determine a feasible column design is much easier to solve. The resulting solution provides starting values for D and xD. Starting values for Q̇C are obtained from conservative heuristics or sophisticated mathematical analysis. Alternatively, other distillation short-cut methods could be employed.

3. Numerical results

3.1. Single distillation column

Consider a simple distillation column with a completely defined ternary feed containing water (WAT), ethanol (ET), and tetrahydrofuran (THF). The distillation column is operated at 1 bar and consists of 30 stages; the feed stage is 15, counted from the bottom. The desired specifications on feed and product streams are summarized in Tab. 1. Using the method introduced in Section 2, an optimization problem with five optimization parameters, Q̇C, D, and xD, can be formulated, and the desired water purity in the bottom stream is included as an inequality constraint. In order to obtain the remaining process variables, the distillation column is computed stage-wise starting from the top. In this example, a feasible distillation column that fulfills the specifications in Tab. 1 is calculated in 4 iterations. The initial values as well as the final values are summarized in Tab. 2.

Table 1: Specifications for the exemplary simple distillation column.

Stream         Amount      Composition
Feed stream    4 kmol/h    0.7 mol/mol WAT, 0.2 mol/mol ET, 0.1 mol/mol THF
Bottom stream  —           ≥ 0.99 mol/mol WAT

Table 2: Initial and final values for the optimization parameters.

Parameter      Initial Value                                      Final Value
Q̇C in kW       −53.2                                              −60.1
D in kmol/h    1.30                                               1.62
xD in mol/mol  7.57 × 10⁻² WAT, 6.16 × 10⁻¹ ET, 3.08 × 10⁻¹ THF   2.58 × 10⁻¹ WAT, 4.95 × 10⁻¹ ET, 2.47 × 10⁻¹ THF

The presented approach is robust with respect to poor initial values. In an exemplary case, a composition for the top stream is chosen that lies in a different distillation region than the desired bottom product. Nevertheless, a feasible solution is found within 6 iterations. The composition profiles of the liquid phase in the column for iterations 1, 4, and 6 are depicted in Fig. 1. Iterations 2 and 3 are similar to iteration 1, and iteration 5 is similar to iteration 6; they are therefore not shown.

3.2. Separation of water and ethanol using the light-boiling entrainer tetrahydrofuran

Fig. 2 shows a distillation process to separate water from ethanol with the light-boiling entrainer tetrahydrofuran. The top stream of column C1 is used as feed stream for column C2, and the top


stream of column C2 is used as recycle stream and mixed with stream 1. The specifications used here are summarized in Tab. 3. The distillation columns C1 and C2 have 40 stages each, with the feed on stage 20. Furthermore, both columns operate at 1 bar.

Figure 1: Composition profiles of the liquid phase in the column for iterations 1, 4, and 6 when using a poor initial value for xD (ternary diagram over xWAT, xET, and xTHF in mol/mol).

Figure 2: Flowsheet for separating water, ethanol, and tetrahydrofuran (mixer M and columns C1 and C2; streams 1 (WAT+ET), 2, 3, 4 (WAT), 5, 6 (ET), and 7 (THF)).

Table 3: Specifications for the separation of water and ethanol using the light-boiling entrainer tetrahydrofuran.

Stream  Amount          Composition
1       3.6 kmol/h      0.75 mol/mol WAT, 0.25 mol/mol ET
7       ≤ 0.01 kmol/h   1 mol/mol THF
4       —               ≥ 0.99 mol/mol WAT
6       —               ≥ 0.99 mol/mol ET

Using the method introduced in Section 2 results in an optimization problem with eleven optimization parameters. Column C1 is calculated downwards and column C2 upwards. A feasible flowsheet that fulfills the specifications in Tab. 3 is obtained in 38 iterations. The initial values as well as the final values are summarized in Tab. 4. Using ASPEN PLUS, the flowsheet can hardly be initialized without prior knowledge about the solution. Hence, the presented approach makes initialization more robust and therefore much easier, especially for inexperienced users.


Table 4: Initial and final values for the flowsheet optimization parameters.

Parameter           Initial Value                                      Final Value
Stream 7 in kmol/h  1.0 × 10⁻⁴                                         7.46 × 10⁻⁴
Q̇C,1 in kW          −523.3                                             −10366.7
Stream 3 in kmol/h  1.42                                               24.84
x3 in mol/mol       7.26 × 10⁻² WAT, 6.15 × 10⁻¹ ET, 3.13 × 10⁻¹ THF   1.69 × 10⁻¹ WAT, 3.63 × 10⁻² ET, 7.95 × 10⁻¹ THF
Q̇C,2 in kW          −173.0                                             −79.8
Stream 6 in kmol/h  0.88                                               1.55
x6 in mol/mol       1.0 × 10⁻² WAT, 9.90 × 10⁻¹ ET, 1.13 × 10⁻⁴ THF    9.18 × 10⁻³ WAT, 9.90 × 10⁻¹ ET, 8.20 × 10⁻⁴ THF

4. Conclusion

An important advantage of the new method presented here is the tremendous increase in flexibility in process simulation. When applying the classical Newton approach, it is mandatory to specify exactly as many variables as there are degrees of freedom: neither fewer nor more specifications on the process are possible, nor is it possible to impose inequality constraints on process variables. Our formulation of the task of process simulation as an optimization problem makes it possible to include an arbitrary number of equalities and inequalities. This gain in flexibility allows full exploitation of process limitations. The stage-by-stage calculation of a distillation column based on fixed-point iterations is numerically stable and does not require the computation of derivatives. In combination with the shooting method, the process simulation is embedded into an optimization problem of low dimensionality, which operates not only on the set of feasible parameter combinations but also on infeasible ones and allows for the purposeful calculation of a feasible solution. This enables robust process simulation, which was illustrated using the example of a simple distillation column and a distillation-based flowsheet, and allows for simultaneous process simulation and optimization.

References

V. Julka, M. F. Doherty, 1990. Geometric behavior and minimum flows for nonideal multicomponent distillation. Chem. Eng. Sci. 45 (7), 1801–1822.
S. G. Levy, M. F. Doherty, 1986. A simple exact method for calculating tangent pinch points in multicomponent nonideal mixtures by bifurcation theory. Chem. Eng. Sci. 41 (12), 3155–3160.
S. G. Levy, D. B. Van Dongen, M. F. Doherty, 1985. Design and synthesis of homogeneous azeotropic distillations. 2. Minimum reflux calculations for nonideal and azeotropic columns. Ind. Eng. Chem. Fundam. 24 (4), 463–474.
W. K. Lewis, G. L. Matheson, 1932. Studies in distillation. Ind. Eng. Chem. 24 (5), 494–498.
A. Lucia, A. Amale, R. Taylor, 2006. Energy efficient hybrid separation processes. Ind. Eng. Chem. Res. 45 (25), 8319–8328.
A. Lucia, A. Amale, R. Taylor, 2008. Distillation pinch points and more. Comput. Chem. Eng. 32 (6), 1342–1364.
F. Petlyuk, R. Danilov, J. Burger, 2015. A novel method for the search and identification of feasible splits of extractive distillations in ternary mixtures. Chem. Eng. Res. Des. 99, 132–148.
O. Ryll, S. Blagov, H. Hasse, 2012. ∞/∞-analysis of homogeneous distillation processes. Chem. Eng. Sci. 84, 315–332.
O. Ryll, S. Blagov, H. Hasse, 2013. ∞/∞-analysis of heterogeneous distillation processes. Chem. Eng. Sci. 104, 374–388.
J. Stoer, R. Bulirsch, 2005. Numerische Mathematik 2, 5th Edition. Springer-Verlag, Berlin, Heidelberg.
D. B. Van Dongen, M. F. Doherty, 1985. Design and synthesis of homogeneous azeotropic distillations. 1. Problem formulation for a single column. Ind. Eng. Chem. Fundam. 24 (4), 454–463.
L. Zhang, A. A. Linninger, 2004. Temperature collocation algorithm for fast and robust distillation design. Ind. Eng. Chem. Res. 43 (12), 3163–3182.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50040-0

Renewable electricity integration at a regional level: Cantabria case study

Antonio Dominguez-Ramos(a), Charalampos Triantafyllidis(b), Sheila Samsatli(b), Nilay Shah(b), Angel Irabien(a)

(a) Universidad de Cantabria, Chemical and Biomolecular Engineering, ETSIIyT, Avda. Los Castros s.n., 39005 Santander, Spain
(b) Imperial College London, Centre for Process Systems Engineering, South Kensington, London SW7 2AZ, United Kingdom

Abstract

Sustainability Energy Programs (SEPs) determine the way in which the different energy vectors must be provided to final industrial, domestic or transport customers, as target demands for each final form of energy. In order to reduce the current large regional electricity deficit of the Cantabria region (northern Spain), which is covered by imports from neighboring regions, the SEP of Cantabria envisages a restructured, integrated system for 2020 based mainly on on-shore wind power. In this work, a Resource-Task Network model of the electricity grid has been developed as a preliminary step to feed the STeMES model, which offers a finer temporal resolution and can consider energy storage. A MILP optimization problem is solved for the minimization of the total cost of the electricity network capable of supplying the electricity demand in Cantabria for the 2020 horizon, using the 2014 generation, transformation and distribution structure as a starting point.

Keywords: electricity, grid, RTN, Cantabria, renewable energy

1. Introduction and context

In Spain, most of the executive power is in the hands of the autonomous communities, and only a few competences remain with the central government. In this work, the Autonomous Community of Cantabria, located on the northern coast of Spain, is selected as a case study. The Cantabria regional government is responsible for strategic energy planning. The document entitled "Sustainability Energy Programme for Cantabria 2014-2020" –SPEC– (Cantabria Regional Government 2014) shows that the region has a clear net electricity importer profile. The SPEC foresees the need to increase the installed power based on the contribution of different renewable energy sources, especially on-shore wind power. The predicted installed power is around 700 MW by 2020. Consequently, electricity imports are expected to be reduced from 3,272 GWh in 2014 to 1,565 GWh in 2020, a reduction of around 52%. On the other hand, the SPEC makes few mentions of energy storage technologies, options that can clearly play a vital role in promoting the sustainability of the energy system (Liu, Li et al. 2010). The stated problem is the current cost per unit of stored kWh. While pumped-storage power stations can help to diversify the power portfolio, the availability of suitable sites has become a serious issue in Spain.


Increasing the installed power of renewable energies such as wind or solar implies that many more power peaks will appear. The possibility of storing those power peaks in different chemical products and fuels obtained from the reduction of CO2 is well known (Centi, Quadrelli et al. 2013). Therefore, it is necessary to develop a mathematical model capable of dealing with the planning of energy infrastructures that takes into account the future needs for electricity and fuels. Because of the interest in the ER technology, the electricity network is the focus of this work. One of the main problems that arise in the planning of energy infrastructures is the geographical domain that is intended to be covered. Such models range from national to urban levels. However, as happens in the SPEC, the real actions regarding energy infrastructures are taken on a regional basis. Even if regional programs are coherent with national targets, which in turn are coherent with European ones, intrinsic regional considerations may not be included. On the other hand, the temporal domain is also critical. Most of the reviewed models lack the appropriate temporal resolution, which is critical when storage must be accounted for. Only the STeMES model is able to consider hourly operation over an entire year and simultaneously optimize the design of the network of conversion, storage and transport technologies (Samsatli, Samsatli 2015a). However, the initialization of this model requires care and good-quality data sets; thus a model that simultaneously takes into account the regional features of the geographical domain and the principal infrastructures at a yearly resolution is necessary, in accordance with the usual planning horizon mentioned above. The novelty of this work is the description of a model for an electricity grid network which can help with the initialization of a more complex model. Moreover, because of its simplicity, the described model can easily be managed by technical staff in order to support regional energy programming without the need for advanced skills. The aim of this work is the development of a mathematical model with yearly temporal resolution to predict the electricity infrastructures (generation, transformation and transmission) required to fulfill the targets of the SPEC. The obtained results can be fed to the hourly temporal resolution model STeMES (Samsatli, Samsatli 2015a). The model follows a Resource-Task Network approach (Samsatli, Samsatli 2015b). It has been completed for two scenarios: a) no restrictions on the imports; b) a yearly 10% decrease in the imports from 2014.
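To make the flavor of the cost-minimization concrete, a one-node, yearly-resolution dispatch problem can be written as a small LP. The sketch below is purely illustrative: the technology set, costs, capacities and demand are invented placeholders (not SPEC or 2014 grid data), and the actual RTN model additionally covers transformation and transmission tasks.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: yearly energy per source [GWh]: wind, hydro, thermal, imports.
cost = np.array([45.0, 30.0, 70.0, 60.0])        # kEUR/GWh, invented values
cap = np.array([1800.0, 900.0, 2500.0, 3272.0])  # GWh/yr, invented values
demand = 4800.0                                  # GWh/yr, invented value

cap_b = cap.copy()
cap_b[3] *= 0.9                                  # scenario (b): imports reduced by 10 %

res = linprog(cost,
              A_ub=-np.ones((1, 4)), b_ub=[-demand],  # total supply >= demand
              bounds=list(zip(np.zeros(4), cap_b)))
print(res.x, res.fun)
```

Extending this with integer build decisions per technology gives the MILP character of the full model.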

2. Development of the electricity grid network model in Cantabria

Figure 1 depicts the Resource-Task Network (RTN) diagram for this work. Tasks are related to technologies for the generation and/or transportation of electricity at different voltage levels. Boxes are used to represent technologies, while circles represent the 3 existing resources. On the left side of Figure 1, the 4 selected generation technologies produce the resource "electricity at 220 kV". This resource can be transformed into "electricity at 400 kV" by means of the "substation HV/MV". The resource "electricity at 220 kV" is transformed into the resource "electricity

µ > 0, for w > w∗.   (4)

The examination of the sign of µ yields that, though there exist solutions for all w, the only relevant solutions for s = 0 are those where µ > 0 and, hence, w > w∗ (active inequality constraints). For inactive inequality constraints (w < w∗) we get:

s² = −g(x).   (5)

Figure 1: Solution branches for s and µ (plotted over w, with the critical point w∗ separating the active and inactive regions). The solid black line shows the valid solution branch, the gray line marked with circles shows the invalid solution branch for w < w∗, and the dashed line marked with stars shows the valid solution branch for w > w∗.

Figure 1 illustrates the feasible solutions of the optimization problem. In summary, the inequality constraints require a switching strategy between solution branches at points where an inequality constraint becomes active or inactive. In the following we explain the switching strategy used to switch branches at the critical points. Because the constraint is inactive and g(x) < 0 holds, there exist two real solutions, namely:

s = ±√(−g(x)).   (6)

2.3 Branch switching strategy

There are two scenarios in which a branch has to be switched. The first one is switching from the branch s = 0, µ ≠ 0 to the branch s ≠ 0, µ = 0, which corresponds to an inequality constraint switching from active to inactive. The second one is switching from the branch µ = 0, s ≠ 0 to the branch µ ≠ 0, s = 0, which corresponds to an inequality constraint switching from inactive to active. For each of the two cases we implemented a different branch switching approach. Firstly, we describe the approach for inequality constraints switching from active to inactive.


We detect the critical point by monitoring the sign of µ. If a change in the sign is detected, we set w = w∗ + ε1, x = x∗, µ = 0 and s = ε2 as a guess for the starting point on the new branch. The parameters ε1/2 are used to push the new starting point far enough into the new branch so that the numerical solver does not return to the old branch. When an inequality constraint switches from inactive to active, we use an approach which has some parallels to the arc-length continuation introduced by Chan and Keller (1982). In order to find a new starting point for the numerical continuation we calculate the tangent to the new solution branch:

Fx̃ · ∂x̃/∂z + Fw · ∂w/∂z = 0,   (7)

where F = (∂L/∂x, ∂L/∂s, ∂L/∂µ), x̃ = (x, s, µ), and z is the arc-length, which specifies how far we move on the curve. From (7), one immediately obtains the correlation ∆x̃ = −Fx̃⁻¹ · Fw · ∆w, which is used to calculate a guess for the new starting point [w = w∗ + ∆w, x = x∗ + ∆x, s = 0, µ = ∆µ]. The precise solution on the new branch is calculated by Newton's method, using the predicted solution as starting point.

Table 1: Parameter values of the constrained PFR equations.

Parameter  Meaning                       Value
cf         Feed concentration            0.02 mol/l
K          Scaling factor                250 000
L          Length of reactor             1 m
N          Number of grid points         50
Tf         Feed temperature              340 K
Tmax       Maximum reactor temperature   400 K
Tmin       Minimum reactor temperature   280 K
Tw,max     Maximum jacket temperature    400 K
Tw,min     Minimum jacket temperature    280 K
v          Flow velocity                 0.1 m/s
α          Reaction kinetic constant     0.0582 1/s
β          Reaction kinetic constant     0.2 1/s
γ          Reaction kinetic constant     16.659
δ          Reaction kinetic constant     0.25
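A compact sketch of this predictor-corrector step is given below, under the assumption that callables for F and its Jacobians with respect to x̃ and w are available; the function and variable names are ours, not the authors'.

```python
import numpy as np

def branch_switch_step(F, jac_x, jac_w, x, w, dw, tol=1e-10, max_iter=50):
    """Tangent predictor of Eq. (7) plus a Newton corrector for F(x, w) = 0,
    where x collects the KKT variables (x, s, mu).  jac_x and jac_w return
    dF/dx and dF/dw, respectively."""
    dx = -np.linalg.solve(jac_x(x, w), jac_w(x, w) * dw)  # tangent predictor
    x_new, w_new = x + dx, w + dw
    for _ in range(max_iter):                             # Newton corrector
        step = np.linalg.solve(jac_x(x_new, w_new), F(x_new, w_new))
        x_new = x_new - step
        if np.linalg.norm(step, ord=np.inf) < tol:
            return x_new, w_new
    raise RuntimeError("corrector did not converge")
```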

3. Tubular reactor example

The scalarization technique from section 2 is now applied to a Plug Flow Reactor (PFR) from Logist et al. (2009). The model equations are:

∂x1/∂t = −∂x1/∂z + (α/v) · (1 − x1) · exp(γ · x2/(1 + x2)),   (8a)
∂x2/∂t = −∂x2/∂z + (α · δ/v) · (1 − x1) · exp(γ · x2/(1 + x2)) + (β/v) · ((Tw − Tf)/Tf − x2),   (8b)

with the boundary conditions x1(0) = 0, x2(0) = 0 at the inlet. The parameters used in equations (8) can be found in Table 1. We only consider the steady state of the system; therefore the right-hand sides of the system equations equal zero. The states of the system are the dimensionless reactant concentration, x1 = (cf − c)/cf, and the dimensionless reactor temperature, x2 = (T − Tf)/Tf. The system input and spatially distributed optimization variable is the jacket temperature Tw. In order to be able to solve the equations numerically with a continuation algorithm, they have to be discretized. To do so, we use the Finite Volume method with N = 50 grid points, replacing

the 2 system equations with 100 new equations,

0 = (x1,i−1 − x1,i)/∆z + (α/v) · (1 − x1,i) · exp(γ · x2,i/(1 + x2,i)),  for i = 1, . . . , N,   (9a)

0 = (x2,i−1 − x2,i)/∆z + (α · δ/v) · (1 − x1,i) · exp(γ · x2,i/(1 + x2,i)) + (β/v) · ((Tw,i − Tf)/Tf − x2,i),  for i = 1, . . . , N,   (9b)

where the lower index i stands for the i-th grid point zi. These grid points are equally spaced by ∆z over the definition interval of z. The first grid point, i = 0, is the inlet, where the boundary conditions have to be fulfilled. The optimization problem is defined as:

min_{Tw,i}  J = (1 − w) · J1 + w · J2,
s.t.  discretized system equations (9),
      x2,min ≤ x2,i ≤ x2,max,
      Tw,min ≤ Tw,i ≤ Tw,max,   (10)

with the objective functions J1 and J2 defined as:

J1 = cf · (1 − x1(L)) = cf · (1 − x1,N),   (11a)
J2 = Tf² · x2²(L)/K = Tf² · x2,N²/K.   (11b)

Figure 2: Results of the continuation: state profiles of the reactor temperature T, concentration c (with a close-up of the end part of the reactor), and jacket temperature Tw, each for various values of w (w = 0, 0.0106, 0.125, 1; abscissa z in m). The Pareto front (J1 vs. J2) is shown in the lower right corner.

The first objective is to minimize the reactant concentration at the outlet of the reactor in order to maximize the product yield; the second objective is to hold the reactor temperature at the outlet close to the feed temperature in order to minimize heat losses. Because this is a constrained problem, we have to apply the KKT conditions to calculate optimal points. In order to generate a starting point for the


continuation, the optimization problem was solved for w = 0 analytically by solving equations (8). The results are shown in Figure 2. In the left section, the reactor is heated up as fast as possible to the highest permitted temperature, Tmax, in order to obtain high conversion. After that, the cooling temperature has to be reduced to compensate for the heat of reaction and to prevent the reactor temperature from exceeding Tmax. Towards the right end of the reactor, the coolant temperature has to be increased stepwise, because the more reactant is converted, the less heat is set free by the reaction. For w = 0 only objective J1 is active; therefore the temperature is held constant at Tmax up to the reactor outlet to maximize the conversion. For w = 1 the only objective is J2; therefore the reactor has to be cooled from a specific point onwards to reach the inlet temperature at the outlet. In between these two extremes there are many more calculated profiles, some of which are shown in the figure. The numerical results agree well with those obtained by Logist et al. (2009) using the normal boundary intersection and normalized normal constraint methods. The Pareto front, which is depicted in the lower right corner of Figure 2, is clearly superior to the results of the weighted sum method also shown by Logist et al. (2009). The complexity of the example, with 200 inequality constraints, 100 equality constraints and 50 optimization variables, underlines the suitability of numerical continuation methods for large-scale multi-objective optimal control problems. In order to generate the results, the aforementioned branch switching had to take place 36 times. The computation time on an ordinary desktop PC is a matter of a few minutes.
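For reference, the discretized steady-state residuals (9) and the scalarized objective (10)-(11) can be written down compactly. The sketch below uses the parameter values of Table 1 and is meant only to make the problem structure explicit; it does not reproduce the continuation solver itself.

```python
import numpy as np

# Parameter values from Table 1.
alpha, beta, gamma, delta = 0.0582, 0.2, 16.659, 0.25
v, Tf, cf, K, L, N = 0.1, 340.0, 0.02, 250_000.0, 1.0, 50
dz = L / N

def residual(x, Tw):
    """Discretized steady-state equations (9a)-(9b); x stacks [x1, x2],
    Tw is the jacket-temperature profile (one value per volume)."""
    x1, x2 = x[:N], x[N:]
    x1m = np.concatenate(([0.0], x1[:-1]))   # inlet boundary condition x1(0) = 0
    x2m = np.concatenate(([0.0], x2[:-1]))   # inlet boundary condition x2(0) = 0
    r = (1.0 - x1) * np.exp(gamma * x2 / (1.0 + x2))
    res1 = (x1m - x1) / dz + alpha / v * r
    res2 = (x2m - x2) / dz + alpha * delta / v * r \
           + beta / v * ((Tw - Tf) / Tf - x2)
    return np.concatenate((res1, res2))

def objective(x, w):
    """Scalarized cost J = (1 - w) * J1 + w * J2 of Eqs. (10)-(11)."""
    J1 = cf * (1.0 - x[N - 1])
    J2 = Tf**2 * x[-1]**2 / K
    return (1.0 - w) * J1 + w * J2
```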

4. Conclusions

Solving multi-criterion optimization problems is a field with significant practical relevance. In this paper we showed that numerical parameter continuation algorithms known from bifurcation analysis can, thanks to the optimality conditions, be used to solve such problems, even for large problem sizes originating from discretized dynamic optimization or optimal control problems. Because the example shown here is a constrained problem, we had to make use of a branch switching technique. The scalarization presented here is similar to the weighted sum method, but it is able to overcome some of that method's major drawbacks, namely its inability to compute non-convex Pareto fronts and its loss of accuracy due to heavy clustering of solutions.

References

M. Bortz, J. Burger, N. Asprion, S. Blagov, R. Böttcher, U. Nowak, A. Scheithauer, R. Welke, K.-H. Küfer, H. Hasse, 2014. Multi-criteria optimization in chemical process design and decision support by navigation of Pareto sets. Computers & Chemical Engineering 60, 354–363.
T. F. C. Chan, H. B. Keller, 1982. Arc-length continuation and multi-grid techniques for nonlinear elliptic eigenvalue problems. SIAM Journal on Scientific and Statistical Computing 3, 173–194.
Y. Collette, P. Siarry, 2003. Multiobjective Optimization: Principles and Case Studies. Springer.
I. Das, J. Dennis, 1997. A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Structural Optimization 14, 63–69.
K. Deb, 2002. Multi-objective optimization using evolutionary algorithms. Wiley, Chichester.
J. Gudat, F. Vazquez, D. Nowak, J. Rückmann, 2007. Pathfollowing methods for nonlinear multiobjective optimization problems. International Journal of Management Science and Engineering Management 2, 163–177.
C. Hillermeier, 2001. Generalized homotopy approach to multiobjective optimization. Journal of Optimization Theory and Applications 110, 557–583.
M. Krasnyk, M. Ginkel, M. Mangold, A. Kienle, 2007. Numerical analysis of higher order singularities in chemical process models. Computers & Chemical Engineering 31, 1100–1110.
F. Logist, P. Van Erdeghem, J. Van Impe, 2009. Efficient deterministic multiple objective optimal control of (bio)chemical processes. Chemical Engineering Science 64, 2527–2538.
B. Lundberg, A. Poore, 1993. Numerical continuation and singularity detection methods for parametric nonlinear programming. SIAM Journal on Optimization 3, 134–154.
R. T. Marler, J. S. Arora, 2004. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization 26, 369–395.
K. Miettinen, 1999. Nonlinear Multiobjective Optimization. Kluwer Academic Publishers.
A. Potschka, F. Logist, J. V. Impe, H. Bock, 2011. Tracing the Pareto frontier in bi-objective optimization problems by ODE techniques. Numerical Algorithms 57, 217–233.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50058-8

Economic risk-based analysis: Effect of technical and market price uncertainties on the production of glycerol-based isobutanol

Carina L. Gargalo(a), Krist V. Gernaey(a), Gürkan Sin(a)

(a) CAPEC-PROCESS Research Centre, Department of Chemical and Biochemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark

Abstract

In this study, the production of glycerol-based isobutanol is critically assessed in terms of its techno-economic performance through the estimation of the economic indicators net present value (NPV) and minimum selling price (MSP). The Monte Carlo method with Latin Hypercube Sampling (LHS) is used to propagate the market price and technical uncertainties to the economic indicator calculations and to quantify the respective economic risk. The results clearly indicate that, under the given market price uncertainties, the probability of obtaining a negative NPV is 0.95. This is a very high probability of failure, which corresponds to an economic risk of 20 MM$ as a potential loss. In order to decrease the economic risk, the integrated production of isobutanol as a module added to a biodiesel plant was tested as an alternative scenario. The probability of a negative NPV decreased from 0.95 to 0.2, and the corresponding economic risk was reduced from 20 to 0.64 MM$. Using the integrated concept of utilising the waste glycerol stream in biodiesel plants contributes to the diversification of the product portfolio for vegetable-oil-based biorefineries, and in turn improves cost-competitiveness and robustness against market price fluctuations.

Keywords: Net Present Value, uncertainty analysis, risk, isobutanol.

1. Introduction

Concerns about the environment, dependency on fossil fuels and the persistent insecurity of their supply and price have motivated research into chemicals and fuels based on renewable resources. As a bioalcohol, isobutanol has the potential to be used as a gasoline blendstock, contributing to the progressive substitution of petro-based chemicals and also leading to lower emissions of environmentally damaging chemical components. The increased biodiesel production has created a significant amount of surplus crude glycerol, since supply has exceeded the demand for purified glycerol. This motivates research into exploring glycerol as a feedstock for producing higher-value-added chemicals. One of the challenges in such process concept studies is, however, that real plant data is often not available. Therefore, early-stage design of biorefineries is marked by assumptions, hypotheses and simplifications that need to be made in order to proceed with process analysis. Consequently, uncertainty in the techno-economic parameters/criteria is expected, and needs to be appropriately dealt with (Turton, 2009; Cheali et al., 2014). Therefore, in this study the techno-economic feasibility of producing isobutanol from glycerol is analyzed with particular emphasis on a comprehensive assessment of uncertainties, including the product and crude glycerol prices, the required fixed capital investment and the sales volume. For the economic assessment, the economic model used is the discounted cash-flow rate of return


(DCFR). For the uncertainty analysis, the Monte Carlo technique is used, which comprises the following steps: (i) characterization of the input uncertainties through appropriate statistical distribution functions; (ii) generation of N samples from the input uncertainty domain using Latin Hypercube Sampling (LHS) with correlation control; (iii) mapping of the economic indicator values vs. their estimated likelihood; and (iv) risk quantification based on the probability of failure to achieve a certain target.

2. Materials and methods

2.1. Case study

The isobutanol production process is based on the thermochemical conversion of glycerol and methanol, following the work of Bauer & Hulteberg (2014). The total feedstock rate (crude glycerol) and the isobutanol production capacity are 4200 ton/year and 3440 ton/year, respectively. It is noteworthy that the methanol needed for the thermochemical conversion of glycerol is entirely obtained by recycling the methanol separated during the glycerol purification process.

2.2. Techno-economic analysis: methods and assumptions

The economic model used in this study is the discounted cash-flow rate of return (DCFR). This model is based on the calculation of the net present value (NPV), which can also be used to calculate the minimum product selling price (MSP) and the discount rate at which the project breaks even. It is assumed that the plant will be 40% equity financed, where the loan is taken for ten years with 8% interest. The plant is built within three years (one for engineering planning and two for construction), where the principal investment is paid in stages over these three years together with the respective interest. It is also assumed that 8% of the total capital investment (TCI) is spent in year −2, 60% before year −1 and 32% before year 0. A 30-year plant lifetime is also assumed, and the Modified Accelerated Cost Recovery System (MACRS) is used as depreciation method (for a 5-year period). The income tax rate and discount rate are assumed to be 35% and 10%, respectively. The continuous processing plant is operated by three shifts of workers, each working 330 days/year at a rate of 32.7 $/h. The assumptions used for the DCFR model are retrieved from Peters et al. (2003) and Humbird et al. (2011). The model can be expressed by Eqs. (1)–(4), with the input model variables described in Eq. (5):

Y_i = (y_1, y_2, . . . , y_n) = f(X) = (NPV, MSP, gross profit),   (1)

NPV = Σ_{t=1}^{T} C_t/(1 + r)^t − C_0 = annual present worth − Σ_{t=1}^{T} annual fixed investment cost,   (2)

with t = 1, . . . , T, where C_t is the net cash inflow during the lifetime of the plant and C_0 is the total initial investment cost,

y = f(x) = NPV ≅ x_p · P_p − (Σ_i x_rm,i · P_rm,i + Σ_l x_ut,l · P_ut,l + Σ_m x_w,m · P_w,m + labor + maintenance + property insurance and tax) − Σ_{t=1}^{T} MACRS(FCI)
          ≅ x_p · P_p − (Σ_i x_rm,i · P_rm,i + Σ_l x_ut,l · P_ut,l + Σ_m x_w,m · P_w,m + f(FCI)) − Σ_{t=1}^{T} MACRS(FCI),   (3)

gross profit ≅ x_p · P_p − (Σ_i x_rm,i · P_rm,i + Σ_l x_ut,l · P_ut,l + Σ_m x_w,m · P_w,m + f(FCI)),   (4)

input model variables → X_i = {x_p, P_p, x_rm,i, P_rm,i, x_ut,l, P_ut,l, x_w,m, P_w,m, FCI},  i = 1, . . . , I,  l = 1, . . . , L,  m = 1, . . . , M,   (5)

θ ∈ X_i → θ = {x_p, P_p, P_rm, FCI},   (6)

where x_p, P_p, x_rm,i, P_rm,i, x_ut,l, P_ut,l, x_w,m, P_w,m and FCI correspond to sales volume, product price, raw material inflow(s), raw material price(s), utilities needed, utility price(s), waste outflow(s), waste treatment price(s), and fixed capital investment, respectively. Eq. (6) indicates the subset of inputs identified as uncertain.
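As a minimal illustration of the discounting in Eq. (2), the NPV itself takes only a few lines; the cash-flow numbers below are hypothetical placeholders, not the study's results.

```python
import numpy as np

def npv(cash_flows, rate=0.10):
    """Discounted sum of Eq. (2); cash_flows[t] is the net inflow in year t
    (negative entries model the staged capital investment)."""
    t = np.arange(len(cash_flows))
    return float(np.sum(np.asarray(cash_flows) / (1.0 + rate) ** t))

# Hypothetical cash-flow profile (MM$): staged investment, then level inflows.
print(npv([-7.7, -4.1, -1.1] + [1.5] * 30))
```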

3. Deterministic economic assessment

The capital investment and the duty estimates for the glycerol purification section are based on the work of Zhang et al. (2003, 2004); for the glycerol conversion to isobutanol and the respective separation and purification, the estimates were taken from Bauer and Hulteberg (2014). The utility prices were also retrieved from Bauer and Hulteberg (2014). For the deterministic analysis and model initialization, the nominal values for the prices of crude glycerol and isobutanol are 0.68 and 1.21 $/kg, respectively. The fixed capital investment is estimated based on the above-mentioned economic conditions and by using the percentage-of-delivered-equipment method, whereas the total annual production cost is based on the factorial methodology (Peters et al., 2003). Based on the assumed economic conditions, the fixed capital investment is 12.9 MM$, the total annual product revenue is 4.2 MM$ and the total annual production cost is 3.8 MM$. Consequently, the gross profit is 1.2 MM$/y and the NPV is −13.4 MM$, which corresponds to a break-even minimum selling price of 1.67 $/kg of isobutanol.

4. Stochastic economic evaluation

In this step, the uncertainty associated with key inputs is defined and then propagated to investigate its cumulative impact on the economic evaluation of the project, such as


NPV and MSP, used as economic indicators for decision-making. The identified sources of uncertainty, as shown in Eq. (6), are the raw material price (P_rm), the product price (P_p), the fixed capital investment (FCI) and the sales volume (x_p). The sources of uncertainty are characterized using appropriate statistical distribution functions. To this end, as presented in Table 1, the scenario used for further analysis is built on (i) historical data on glycerol and isobutanol prices in Europe (as shown in Figure 1); (ii) variability of the fixed capital investment over a typical range of variation; and (iii) variation of the sales volume. Through LHS with correlation control (Cheali et al. 2014), 500 scenarios were generated for each of the input parameters. The isobutanol production process was then modelled and simulated deterministically for each of the 500 samples, generating 500 sets of possible economic indicators. In Figure 2(a) the parallel coordinate plot of the system under uncertainty is shown, allowing a quick visualization of the results obtained from the Monte Carlo simulations. It displays high-dimensional datasets, where each y-axis ('column') represents one variable; there are 500 values for each y, corresponding to the 500 LH samples. These values are then joined, creating multiple polylines that represent the scenarios across variables.

Figure 1: Input data uncertainty for product and feedstock: crude glycerol (grey line) and isobutanol (blue line) prices in Europe, from 2010 to 2014.

Table 1: Identified sources of uncertainty for the economic analysis.

Source                                    Distribution
Glycerol historical prices (2010-2014)    Normal distribution (0.093 std.) [this study, fitted to data]
Isobutanol historical prices (2010-2014)  Normal distribution (0.27 std.) [this study]
Fixed capital investment                  Uniform distribution (−20% / +30% around nominal value) (Turton, 2009)
Sales volume                              Normal distribution (0.25 std.) (Anderson, 2010)
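A sketch of steps (i)-(iv) of the Monte Carlo procedure follows, with a hand-rolled Latin Hypercube sampler (without correlation control). The `economic_model` below is a crude linear NPV proxy of our own making, standing in for the full DCFR model of Section 2.2; the standard deviations come from Table 1, while the nominal means are taken from Section 3.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    """One uniform sample per stratum and dimension, independently permuted."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def economic_model(p_gly, p_iso, fci, sales):
    """Crude linear NPV proxy (our assumption): margin times a 30-year
    present-worth factor at 10 %, minus FCI.  All in MM$."""
    revenue = sales * p_iso / 1000.0            # ton/yr * $/kg -> MM$/yr
    feed = 4200.0 * p_gly / 1000.0              # MM$/yr
    pw = (1.0 - 1.1 ** -30) / 0.1               # present-worth factor
    return (revenue - feed - 0.95) * pw - fci   # 0.95 MM$/yr other opex, assumed

u = latin_hypercube(500, 4)
p_gly = norm.ppf(u[:, 0], loc=0.68, scale=0.093)            # $/kg
p_iso = norm.ppf(u[:, 1], loc=1.21, scale=0.27)             # $/kg
fci = 12.9 * (0.8 + 0.5 * u[:, 2])                          # uniform, -20 %/+30 %
sales = norm.ppf(u[:, 3], loc=3440.0, scale=0.25 * 3440.0)  # ton/yr

npv = economic_model(p_gly, p_iso, fci, sales)
p_fail = np.mean(npv < 0.0)
risk = p_fail * abs(np.mean(npv[npv < 0.0]))   # probability x consequence, MM$
print(f"P(NPV<0) = {p_fail:.2f}, risk = {risk:.1f} MM$")
```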

Risk quantification & mitigation

Considering a system under uncertainty, if one wishes to determine the risk that the objective function will reach a value lower than or equal to a given threshold, this can be expressed as the probability of failure to achieve the target multiplied by the consequence of failing to achieve that same target. In this work, the risk is given by the probability of the project not being profitable (NPV < 0) multiplied by the magnitude of the associated potential loss.

1, which helps mitigate some of the usual sensitivity to problem scaling exhibited by the method.

4. PRICO process simulation

Figure 1 shows the PRICO process for producing liquefied natural gas (Maher and Sudduth, 1975), which is a single-stage mixed refrigerant (MR) process. The MR is supplied at two pressure levels in the process by means of throttling and compression, and therefore serves both as a hot stream, the high-pressure refrigerant (HPR), and as a cold stream, the low-pressure refrigerant (LPR). Table 1 gives the data for the streams involved in the MHEX for this instance of the PRICO process. In this simulation, the Peng-Robinson CEOS with parameter values taken from the Aspen Plus v8.4 database (Aspen Technology Inc., 2014) is used to calculate the thermodynamic properties of both the liquid and the vapor phase, and each phase region is discretized into three affine segments. Let u1 ≡ T_HPR^OUT, u2 ≡ t_LPR^OUT, and u3 ≡ UA be the unknown variables afforded by the base MHEX model consisting of Eqs. (1)–(3). The inlet cold temperature, u4 ≡ t_LPR^IN, is given by an adiabatic flash calculation at the throttle valve outlet. The remaining equations are those needed to solve for the roots of the CEOS (Eq. (14)) to calculate enthalpy values or fugacity coefficients, and flash models which include Eq. (13) to calculate component distributions in the two-phase region. In total, the process requires 283 equations and variables to model, a reduction of over 90% compared to the formulation given in Kamath et al. (2012). An initial guess for the simulation is given by first solving the model with the ideal gas CEOS and Raoult's law to find temperatures and component phase distributions. Initial estimates for the compressibility factors and associated slack variables are then generated by solving the system given by Eq. (14) (and its liquid root analog) repeatedly at the ideal solution conditions wherever an enthalpy or fugacity calculation will be needed in the CEOS model. Using the CPLEX callable library v12.5 (IBM, 2015) as the LP solver, the simulation converges to the solution with u1∗ = 114.71 K, u2∗ = 266.77 K, u3∗ = 8.89 MW/K, and u4∗ = 108.95 K with ||f(u∗)||∞ < 10⁻⁹ after 53 iterations (2.36 seconds) when ∆Tmin = 1.2 K.
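The flash calculations referred to above rest, in their classical form, on solving the Rachford-Rice equation (Rachford and Rice, 1952, cited in the references) for the vapor fraction. The sketch below solves it for fixed K-values with a bracketing root finder; it is a textbook illustration, not the paper's nonsmooth CEOS-based formulation, and assumes a genuinely two-phase mixture.

```python
import numpy as np
from scipy.optimize import brentq

def rachford_rice(z, K):
    """Solve sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0 for the vapor
    fraction V, assuming max(K) > 1 > min(K) (two-phase mixture)."""
    z, K = np.asarray(z, float), np.asarray(K, float)
    g = lambda V: np.sum(z * (K - 1.0) / (1.0 + V * (K - 1.0)))
    lo = 1.0 / (1.0 - K.max()) + 1e-10   # asymptote bounding V from below
    hi = 1.0 / (1.0 - K.min()) - 1e-10   # asymptote bounding V from above
    V = brentq(g, lo, hi)
    x = z / (1.0 + V * (K - 1.0))        # liquid composition
    y = K * x                            # vapor composition
    return V, x, y

# Illustrative feed and K-values (invented):
V, x, y = rachford_rice(z=[0.3, 0.4, 0.3], K=[2.5, 1.1, 0.3])
print(V, x, y)
```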


Figure 2 shows the composite curves at the solution. The calculated isentropic compression work is 24.4 MW. The robustness of the algorithm with respect to the initial guess is very comparable to the favorable behavior observed in the study by Watson and Barton (2016), which used the same MHEX model with ideal thermodynamics and Raoult’s Law. An Aspen Plus (Aspen Technology Inc., 2014) simulation initialized with the results of this simulation predicts a value for the compression work that differs by only 1.2% from that obtained here, which is attributed to minor differences in the physical property and CEOS calculations performed by Aspen Plus.

Figure 2: Composite curves for the simulated MHEX in the PRICO process (temperature in K vs. enthalpy in MW).

5. Conclusions

A method for simulating MHEXs with phase changes under nonideal thermodynamic conditions has been presented. The formulation is a compact nonsmooth model which is solved entirely through equation-solving methods, in contrast to those which require solving a hard optimization problem involving binary variables or complementarity constraints, or a DAE system. Accordingly, the model is less complex and allows for realistic simulation of process flowsheets involving multiphase MHEXs outside of an optimization framework. Future work will focus on simulating more complex and industrially relevant liquefaction processes using this methodology.

6. Acknowledgments

The authors are grateful to Statoil for providing financial support for this research.

References

Aspen Technology Inc., 2014. Aspen Plus v8.4. Aspen Technology Inc., Cambridge, MA.
M. A. Duran, I. E. Grossmann, 1986. Simultaneous optimization and heat integration of chemical processes. AIChE Journal 32 (1), 123–138.
F. Facchinei, A. Fischer, M. Herrich, 2014. An LP-Newton method: nonsmooth equations, KKT systems, and nonisolated solutions. Mathematical Programming 146, 1–36.
M. M. F. Hasan, I. A. Karimi, H. E. Alfadala, H. Grootjans, 2009. Operational modeling of multistream heat exchangers with phase changes. AIChE Journal 55 (1), 150–171.
IBM, 2015. IBM ILOG CPLEX v12.5. http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer.
R. S. Kamath, L. T. Biegler, I. E. Grossmann, 2010. An equation-oriented approach for handling thermodynamics based on cubic equation of state in process optimization. Computers & Chemical Engineering 34 (12), 2085–2096.
R. S. Kamath, L. T. Biegler, I. E. Grossmann, 2012. Modeling multistream heat exchangers with and without phase changes for simultaneous optimization and heat integration. AIChE Journal 58 (1), 190–204.
K. A. Khan, P. I. Barton, 2015. A vector forward mode of automatic differentiation for generalized derivative evaluation. Optimization Methods & Software 30 (6), 1185–1212.
J. Maher, J. Sudduth, 1975. Method and apparatus for liquefying gases. US Patent 3,914,949.
R. C. Pattison, M. Baldea, 2015. Multistream heat exchangers: Equation-oriented modeling and flowsheet optimization. AIChE Journal 61 (6), 1856–1866.
H. H. Rachford Jr., J. D. Rice, 1952. Procedure for use of electronic digital computers in calculating flash vaporization hydrocarbon equilibrium. Journal of Petroleum Technology 4 (10), 327–328.
H. A. J. Watson, P. I. Barton, 2016. Modeling phase changes in multistream heat exchangers. Submitted.
H. A. J. Watson, K. A. Khan, P. I. Barton, 2015. Multistream heat exchanger modeling and design. AIChE Journal 61 (10), 3390–3403.
T. F. Yee, I. E. Grossmann, 1990. Simultaneous optimization models for heat integration–II. Heat exchanger network synthesis. Computers & Chemical Engineering 14 (10), 1165–1184.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50090-4

Using Transient Measurements in Iterative Steady-State Optimizing Control

Weihua Gao* and Sebastian Engell

Biochemical and Chemical Engineering Department, TU Dortmund, Emil-Figge-Str. 70, 44221 Dortmund, Germany
*[email protected]

Abstract

This paper deals with the use of transient measurements in the steady-state optimization of slow dynamic processes. The key idea is to estimate steady states from the measurements during the transients of a process. The predicted steady states are then employed in the modifier adaptation with quadratic approximation scheme (Gao et al., 2015) to drive the operation of the process to its optimum. The performance of the strategy is illustrated by simulation studies of the optimization of a continuous stirred-tank reactor.

Keywords: Real-time optimization, model uncertainty, iterative methods

1. Introduction

In model-based real-time optimization (RTO), plant-model mismatch can be handled by applying steady-state bias- and gradient-adaptations to the cost and constraint functions in an iterative optimization procedure (Gao and Engell, 2005). One of the difficulties in applications of this approach is that for each new set-point one has to wait for the plant to reach a steady state after the set-point change, which makes the convergence slow. In order to accelerate real-time steady-state optimization for slow dynamic processes, several researchers have pursued the idea of using transient measurements in steady-state optimization. Bamberger and Isermann (1978) proposed to extract the gradient of the steady state with respect to the degrees of freedom from a dynamic model which is identified from a PRBS perturbation of the plant and is recursively adapted by the information gained from the set-point changes. Golden and Ydstie (1989) extended this scheme by using the estimated plant gradients from dynamic identification to modify the approximate process model, which is considered as an equality constraint in the steady-state optimization problem. Zhang and Roberts (1990) combined the Integrated System Optimization and Parameter Estimation (in short ISOPE) scheme with a linear dynamic model identification procedure for the steady-state optimization of nonlinear constrained processes with slow dynamics. François and Bonvin (2014) proposed an approach for the optimization of the steady-state performance that uses the transient measurements to calculate the gradients by the neighboring-extremal (NE) method, which relies on the accuracy of the linearization that results from a variational analysis of the nominal model. Unfortunately, significant problems are associated with the identification process and the accuracy of the resulting steady-state gradients, as analyzed by Brdyś and Tatjewski (2005). Firstly, an adequate structure of the dynamic model must be found. Secondly, to get sufficiently accurate identification results, the plant must be appropriately excited. A passive identification experiment based on the measurement records that were collected during the transient process between the last


set-points will not always deliver sufficiently rich data records, so active identification experiments around the actual set-point must be planned. This implies additional dynamic changes of the set-points around the current set-point for identification purposes only, which is not welcomed by plant operators and managers. Recently, Gao et al. (2015) proposed a new RTO scheme which combines the modifier adaptation approach with quadratic approximation (called MAWQA). Similar to derivative-free optimization, the MAWQA scheme employs only the observed steady states of the plant, instead of using gradients explicitly. We propose to estimate only the next steady state from the measurements during the transients of the plant and thus to speed up the convergence to the optimum steady state. The proposed estimation procedure is based on a linear ARX system model, the structure of which can be chosen according to the nominal rigorous process model. The parameters of the ARX system are estimated by a linear regression of the transient measurements which are collected during the transients to the new set-point. To prevent large estimation errors, additional conditions, e.g. satisfaction of the mass balance at the predicted steady state, are used to validate the ARX system model. The rest of this paper starts with an introduction of the MAWQA scheme. Then, the estimation of steady states from transient measurements is presented. The performance of the approach is illustrated by simulation studies for the steady-state optimization of a continuous stirred-tank reactor described in (François and Bonvin, 2014).

2. Modifier adaptation with quadratic approximation

The general model-based set-point optimization problem can be stated as

min_u Jm(u)   s.t.   Cm(u) ≤ 0,   (1)

where u is a vector of manipulated variables to be optimized, and Jm(u) and Cm(u) are the objective and the vector of constraint functions of the model, assumed to be twice differentiable with respect to u ∈ R^nu. The plant objective and constraints are represented by Jp(u) and Cp(u), and their values are only available via plant evaluations. To handle plant-model mismatch, the optimization problem (1) is iteratively adapted in the modifier-adaptation approach via

min_u  Jad(u) = Jm(u) + (Jp^(k) − Jm^(k)) + (∇Jp^(k) − ∇Jm^(k))^T (u − u^(k))
s.t.   Cad(u) = Cm(u) + (Cp^(k) − Cm^(k)) + (∇Cp^(k) − ∇Cm^(k))^T (u − u^(k)) ≤ 0,   (2)

where the superscript k represents the iteration index. Note that the bias adaptation of the objective function does not influence the optimum; it just corrects the objective value at u^(k). The use of gradient adaptations in the modifier-adaptation approach requires the computation of the process gradients at the current set-point. In MAWQA, the data collected at the previous set-points are used to estimate the gradients by a quadratic-approximation approach. The quadratic approximation of the cost function is defined by

Jφ(u, P) = Σ_{i=1}^{nu} Σ_{j=1}^{i} a_{i,j} u_i u_j + Σ_{i=1}^{nu} b_i u_i + c   (3)

with the coefficient set P = {a1,1 , . . . , anu ,nu , b1 , . . . , bnu , c}. The quadratic approximations of the constraint functions Cφ (u) are constructed similarly. The process gradients are determined by


evaluating ∇Jφ and ∇Cφ at u^(k). The coefficients in P are determined by solving

min_P  Σ_{i=1}^{nr} ( Jp(u^(ri)) − Jφ(u^(ri), P) )²   (4)

with u^(ri) ∈ U^(k). Here nr is the size of the regression set and ri is the index of the set-point move. Let U be the set of all collected data; the regression set is U^(k) = Unb ∪ Udist, where Unb = {u : ||u − u^(k)|| ≤ ∆u, u ∈ U}. Udist is determined by

min_{Udist}  Σ_{u ∈ Udist} ||u − u^(k)|| / ϕ(Udist)
s.t.  size(Udist) ≥ (nu + 1)(nu + 2)/2 − 1,
      Udist ⊂ U \ Unb.   (5)

Here ∆u is a parameter of the screening algorithm. ϕ(Udist ) is the minimal angle between all possible vectors that are defined by u − u(k) , where u ∈ Udist . A detailed description of the screening algorithm is presented in (Gao et al., 2015). In MAWQA, the covariance matrix of the regression set U (k) is used to define a constrained search space B (k) for the next set-point move. B (k) is a nu -axial ellipsoid centered at u(k) . The axes of the ellipsoid are aligned with the eigenvectors of the covariance matrix. The semi-axis lengths of the ellipsoid are related to the eigenvalues of the covariance matrix by a scaling parameter γ. The MAWQA scheme employs only steady states of the plant. Next, the approach to calculate steady states from the measurements that are obtained during the transients of the plant is presented.
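A sketch of the quadratic-approximation step follows: fit the coefficient set P of Eq. (3) via the least-squares problem (4) and read off the gradient at the current set-point. Function names are ours, and the screening of the regression set (Eq. (5)) is assumed to have been done already; the fit needs at least (nu + 1)(nu + 2)/2 points, which the screening rule guarantees.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_quadratic(U, J):
    """Least-squares fit of Eq. (3)/(4): J_phi(u) = sum a_ij u_i u_j
    + sum b_i u_i + c, from set-points U (nr x nu) and plant values J (nr,)."""
    nr, nu = U.shape
    quad_idx = list(combinations_with_replacement(range(nu), 2))
    cols = [U[:, i] * U[:, j] for i, j in quad_idx]
    cols += [U[:, i] for i in range(nu)] + [np.ones(nr)]
    theta, *_ = np.linalg.lstsq(np.column_stack(cols), J, rcond=None)
    return theta, quad_idx

def gradient(theta, quad_idx, u):
    """Gradient of the fitted quadratic at the current set-point u."""
    nu = len(u)
    g = theta[len(quad_idx):len(quad_idx) + nu].copy()  # linear part b
    for a, (i, j) in zip(theta, quad_idx):
        g[i] += a * u[j]
        g[j] += a * u[i]   # for i == j this correctly yields 2*a*u_i
    return g
```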

3. Steady-state estimation from transients

The estimation of steady states is based on a linear ARX system model

y(k) = A(q⁻¹) y(k − 1) + B(q⁻¹) u(k − 1) + D,   (6)

where y ∈ R^ny is the output vector and u ∈ R^nu is the control vector. A(q⁻¹) ∈ R^(ny×ny) and B(q⁻¹) ∈ R^(ny×nu) are the matrices of polynomials in the backward shift operator q⁻¹ with orders my and mu. The orders are chosen according to the nominal rigorous process model. D ∈ R^ny is a constant vector. The coefficients of the polynomials in A(q⁻¹) are determined by regressing the measurements that are obtained during the transient process after the current set-point move. In the absence of measurement errors, the length of the regression window is at least (ny·my + my + 1) for a unique solution, as follows:

Σ_{j=1}^{ny} a_{i,j}^T · δy_{j,t} = y_i(k) − y_i(k − t),  i = 1, . . . , ny,  t = 1, . . . , ny·my,   (7)

where a_{i,j} = [a_{i,j,1}, . . . , a_{i,j,my}]^T and δy_{j,t} = [(y_j(k − 1) − y_j(k − 1 − t)), . . . , (y_j(k − my) − y_j(k − my − t))]^T. In the presence of measurement errors, (7) is replaced by a linear least-squares regression procedure

min_{a_{i,j}}  Σ_{t=1}^{nr} Σ_{i=1}^{ny} ( Σ_{j=1}^{ny} a_{i,j}^T · δy_{j,t} − y_i(k) + y_i(k − t) )²,   (8)


where nr ≥ ny·my represents the number of regression samples. The steady state is calculated as

ys = [I − A(1)]⁻¹ ( y(k) − A(q⁻¹) y(k − 1) ),   (9)

where I is the ny × ny identity matrix and

A(1) = [ Σ_{i=1}^{my} a_{1,1,i}   . . .  Σ_{i=1}^{my} a_{1,ny,i}  ]
       [        ...               ...           ...              ]
       [ Σ_{i=1}^{my} a_{ny,1,i}  . . .  Σ_{i=1}^{my} a_{ny,ny,i} ].

Since all the regression samples are obtained at the current set-point, B(q⁻¹)u(k − 1) + D remains constant; B(q⁻¹) and D do not appear in the estimation procedure. Therefore, step responses are sufficient for the estimation of steady states. The accuracy of the predicted steady state depends on how well the calculated ARX system model approximates the process at the current set-point. We propose to check conservation conditions to evaluate the predicted steady state. Such conditions can be mass balances or energy balances at the steady state. If the steady-state conservation conditions are not satisfied, a new estimation is started at the next measurement. As the dynamics of the process converge, the process will eventually stay within a linear interval around the steady state and can be approximated by the ARX system model.
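A sketch of the estimation procedure for the special case my = 1 (as in the example below) is given here. Since the input is constant between set-point moves, the differenced regression (7)-(8) eliminates B(q⁻¹)u + D, and Eq. (9) then yields the steady state. Variable names are ours, and the sketch omits the conservation-condition check.

```python
import numpy as np

def predict_steady_state(Y):
    """ARX-based steady-state prediction (Eqs. (7)-(9)) for my = 1.
    Y is an (m x ny) array of consecutive measurements at the current
    set-point, newest row last; needs m >= ny + 2 samples."""
    Y = np.asarray(Y, float)
    m, ny = Y.shape
    yk, ykm1 = Y[-1], Y[-2]
    # Differenced regression of Eq. (8): for each lag t,
    #   sum_j A[i, j] * (y_j(k-1) - y_j(k-1-t)) = y_i(k) - y_i(k-t).
    Phi = np.array([ykm1 - Y[-2 - t] for t in range(1, m - 1)])  # (m-2, ny)
    R = np.array([yk - Y[-1 - t] for t in range(1, m - 1)])      # (m-2, ny)
    A = np.linalg.lstsq(Phi, R, rcond=None)[0].T                 # A(1), ny x ny
    # Steady state, Eq. (9) (for my = 1, A(q^-1) y(k-1) = A y(k-1)):
    return np.linalg.solve(np.eye(ny) - A, yk - A @ ykm1)
```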

4. Simulation study

The steady-state optimization of a continuous stirred-tank reactor (François and Bonvin, 2014) is considered here. The process consists of two irreversible reactions: A + B →(k1) C and 2B →(k2) D, with the nominal reaction constants k1 = 0.75 L/(mol·min) and k2 = 1.5 L/(mol·min). The objective is to optimize the steady-state performance of the reactor by determining the optimal feed rates uA and uB:

2 cC,s (uA + uB )2 /(uA cA,in ) − 0.004 (u2A + u2B )

s.t.

V k1 cA,s cB,s (−∆Hr,1 ) +V k2 c2B,s (−∆Hr,2 ) − 110 ≤ 0

uA ,uB

(10)

cD,s / (cA,s + cB,s + cC,s + cD,s ) − 0.1 ≤ 0, where c·,s represents the steady-state concentration, cA,in = 2.0 mol/L is the feed concentration of component A, V = 500 L is the reactor volume, (−∆Hr,1 ) = 3.5 kcal/mol and (−∆Hr,2 ) = 1.5 kcal/mol are the reaction enthalpies. The dynamic model c˙A (t)= − k1 cA (t)cB (t) + uA (t)cA,in /V − (uA (t) + uB (t)) cA (t)/V

(11a)

c˙B (t)= − k1 cA (t)cB (t) − 2k2 c2B (t) + uB (t)cB,in /V c˙C (t)=k1 cA (t)cB (t) − (uA (t) + uB (t)) cC (t)/V

(11b) (11c)

c˙D (t)=k2 c2B (t) − (uA (t) + uB (t)) cD (t)/V

(11d)

− (uA (t) + uB (t)) cB (t)/V

is obtained from mass balances, where cB,in = 1.5 mol/L is the feed concentration of component B. The settling time of the process is more than 60 minutes. The sampling interval is 1 minute as in (Franc¸ois and Bonvin, 2014). Table 1 lists three cases of plant-model mismatch which are considered in the simulation study. From the nominal model, an ARX system model results as     cA (k − 1) cA (k) aA,A aA,B  cB (k)   aB,A aB,B   cB (k − 1)      cC (k)  =  aC,A aC,B aC,C   cC (k − 1) cD (k − 1) aD,B aD,D cD (k)

   + B(q−1 )u(k − 1) + D. (12) 

Using Transient Measurements in Iterative Steady-State Optimizing Control

Nominal model Case I Case II Case III

Table 1: Cases of study Parameters k1 : 0.75, k2 : 1.5, CA,in : 2.0, CB,in : 1.5 k1 : 1.4, k2 : 0.4 k1 : 1.4, k2 : 0.4, CA,in : 2.5 k1 : 1.4, k2 : 0.4, CA,in : 2.5, CB,in : 2.0

515

Optimum u∗A : 14.5, u∗A : 21.7, u∗A : 17.2, u∗A : 15.7,

u∗B : 14.9 u∗B : 31.8 u∗B : 30.3 u∗B : 22.7

Here ny = 4, nu = 2, my = 1 and mu = 1. From successive ny my + my + 1 = 6 sampling points, the coefficients a(·),(·) can be determined. Then, the steady states are calculated by (9). To ensure the accuracy of the estimated steady states, the following condition of mass balance is required to be satisfied (cA,s + cB,s + 2cC,s + 2cD,s ) (uA + uB ) / (CA,in uA +CB,in uB ) ∈ 1 ± 0.002.

(13)

In the simulation study, the feed rates uA and uB are normalized in (0 − 40 L/min). The parameters used in MAWQA are: ∆u = 4 and γ = 3. The starting set-point is chosen at the nominal optimum (uA : 14.5 L/min, uB : 14.9 L/min). Two additional perturbations with step-size h = 4 are applied to initialize the modifier adaptation scheme. Note that the values of ∆u, γ and h are chosen as in (Gao et al., 2015) and they influence only the number of set-point moves to the optimum. The results are illustrated by set-point evolutions and trajectories on the performance contours in Fig. 1. In all three cases, around two settling periods (120 mins) are taken to reach the real optimum. It is less than 1/4 of the time which is needed for the process to arrive at steady states in each iteration. The irregular time periods are related to the nonlinearity of the process dynamics between different states. As analyzed before, it takes at most a complete settling period to estimate the steady states.

35

0

40

0

30

15 uA(t)

10

20

40

60 80 Time (min)

100

0 0

120

10

20 uA (L/min)

(a)

40

40

0

30 30

15 10

uA(t)

5

uB(t) 20

40

60 80 Time (min)

(c)

100

120

140

0 20 10 0 0

0

0

20

0

25 uB (L/min)

Feed rate (L/min)

30

(b)

35

0 0

0

0

0 0

20 10

uB(t)

5

0 0

20

0

25 uB (L/min)

Feed rate (L/min)

30

10

20 uA (L/min)

(d)

30

40

W. Gao and S. Engell

516

0

40

35

0 30

20

10

u (t)

5

u (t)

0 0

20

0

B

15

0

u (L/min)

25

0

Feed rate (L/min)

30

10

A

0

0

B

20

40

60 80 Time (min)

(e)

100

120

0 0

10

20 u (L/min)

30

40

A

(f)

Figure 1: Illustration of set-point evolutions (left) and trajectories on the performance contours (right). The dash-dot lines in figure (a),(c) and (e) mark the optimal levels of feed rate. The thick contour lines in figure (b), (d) and (f) mark the constraints. real optimums, initial perturbations, regression points

5. Conclusions
This paper explores a novel approach to using transient measurements in iterative steady-state optimization. It combines the MAWQA scheme with the estimation of steady states from transients. Step responses of the process are shown to be sufficient for the calculation of the steady states using the ARX system model. The next set-point is computed only once the predicted steady state satisfies the conservation condition. The advantage over the dynamic model-identification approach is that no PRBS excitation of the process is needed at each set-point. Future research will focus on improving the robustness of the steady-state estimation to noisy transient data.

References
W. Bamberger, R. Isermann, 1978. Adaptive on-line steady-state optimization of slow dynamic processes. Automatica 14 (3), 223-230.
M. Brdyś, P. Tatjewski, 2005. Iterative Algorithms for Multilayer Optimizing Control. Imperial College Press, London, UK.
G. François, D. Bonvin, 2014. Use of transient measurements for the optimization of steady-state performance via modifier adaptation. Industrial & Engineering Chemistry Research 53 (13), 5148-5159.
W. Gao, S. Engell, 2005. Iterative set-point optimization of batch chromatography. Computers & Chemical Engineering 29 (6), 1401-1409.
W. Gao, S. Wenzel, S. Engell, 2015. Modifier adaptation with quadratic approximation in iterative optimizing control. In: Proceedings of the 14th European Control Conference. Linz, Austria, pp. 2532-2537.
M. P. Golden, B. E. Ydstie, 1989. Adaptive extremum control using approximate process models. AIChE Journal 35 (7), 1157-1169.
H. Zhang, P. Roberts, 1990. On-line steady-state optimisation of nonlinear constrained processes with slow dynamics. Transactions of the Institute of Measurement and Control 12 (5), 251-261.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th-15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50091-6

Computationally Efficient Steady-State Process Simulation by Applying a Simultaneous Dynamic Method
Alexander Zinser a, Liisa Rihko-Struckmann a and Kai Sundmacher a,b,*

a Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, 39106 Magdeburg, Germany
b Otto-von-Guericke University, Universitätsplatz 2, 39106 Magdeburg, Germany
[email protected]

Abstract
In process simulation, different types of thermodynamic equilibria have to be solved. In a previous work (Zinser et al., 2015), we introduced a dynamic method for solving general thermodynamic equilibrium problems, based on the solution of a set of ordinary differential equations (ODE). In this contribution, we extend our approach to the simultaneous solution of an overall process flowsheet in one iteration step. This is done by coupling the ODE systems of the single units according to the flowsheet connectivity. It is shown that this leads to a significant reduction in computational cost compared to conventional methods. A systematic comparison with established algorithms is performed with respect to convergence, initialization and computational costs.
Keywords: Process Simulation, Chemical Equilibria, Phase Equilibria, Methanol Synthesis.

1. Introduction A large variety of complex chemical as well as phase equilibria between two or more phases have to be solved in process simulation. For different types of equilibrium calculations, different mathematical approaches are available. For chemical equilibria, the most common approach is the Gibbs energy minimization (Lwin, 2000). In the case of phase equilibrium calculations, algorithms for the direct solution of the necessary equilibrium conditions are available (Walas, 1985). In a recent work (Zinser et al., 2015), we presented a general approach for solving chemical as well as phase equilibrium problems in a generalized manner by solving a set of ordinary differential equations. In this contribution, we apply our Dynamic Method to a process flowsheet and solve the thermodynamic equilibria in this process simultaneously. This approach leads to a significant improvement of the computational efficiency as well as convergence properties compared to conventional methods. In the next sections, a brief description of the Dynamic Method is given and the extension to the Simultaneous Dynamic Method is introduced. After that, the methanol synthesis process is simulated using conventional tearing methods as well as using the Simultaneous Dynamic Method.

2. Dynamic Method for single units In a recent work, we presented a dynamic method for solving thermodynamic equilibria (Zinser et al., 2015). This method is based on the solution of a set of ordinary differential equations (ODE)


which describes the evolution of the molar composition:

$$\frac{dn}{d\tau} = A\,r, \qquad n(0) = n_0, \qquad (1)$$

where A is the stoichiometric matrix and r is a vector of rate expressions describing the fluxes between different phases or the fluxes due to chemical reactions. The rate expressions are formulated in a thermodynamically consistent way and can be written as

$$r_{\sigma}^{\pi,\pi'} = k_{\sigma}^{\pi,\pi'}\left( f_{\sigma}^{\pi} - f_{\sigma}^{\pi'} \right) \qquad (2)$$

for the phase transition of species σ between phases π and π′. In the case of a chemical reaction equilibrium, the rate expression for reaction ρ occurring in phase π may be written as

$$r_{\rho}^{\pi} = k_{\rho}^{\pi}\left[ \prod_{\sigma\in S:\,\nu_{\sigma\rho}^{\pi}<0} \left(\frac{f_{\sigma}^{\pi}}{f_{\sigma}^{\circ\pi}}\right)^{|\nu_{\sigma\rho}^{\pi}|} - \frac{1}{K_{eq,\rho}^{\pi}} \prod_{\sigma\in S:\,\nu_{\sigma\rho}^{\pi}>0} \left(\frac{f_{\sigma}^{\pi}}{f_{\sigma}^{\circ\pi}}\right)^{\nu_{\sigma\rho}^{\pi}} \right], \qquad (3)$$

where the parameters k_σ^{π,π′} and k_ρ^π can be chosen arbitrarily to adjust the numerics; e.g., a set of chemical reactions can be “normalized” using k_ρ^π = K_eq,ρ^π. The fugacities f_σ^π may be expressed in terms of the fugacity coefficients φ_σ^π or in terms of the activity coefficients γ_σ^π, applying

$$f_{\sigma}^{\pi} = x_{\sigma}^{\pi}\,\varphi_{\sigma}^{\pi}\,P, \qquad (4a) \qquad \text{or} \qquad f_{\sigma}^{\pi} = x_{\sigma}^{\pi}\,\gamma_{\sigma}^{\pi}\,P^{\circ}, \qquad (4b)$$

respectively. In this work, the fugacities are obtained from the fugacity coefficients applying the predictive Soave-Redlich-Kwong (PSRK) Equation of State (EoS); see also Soave (1972); Holderbaum and Gmehling (1991); Fischer and Gmehling (1996). If the initial composition n0 is defined, this method leads for τ → ∞ to the corresponding equilibrium composition n_eq = n(τ → ∞) of a reaction or separation unit.
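As an illustration of the single-unit Dynamic Method, the sketch below integrates Eq. (1) for a hypothetical ideal reaction A ⇌ B, with activities taken as mole fractions and an invented Keq; the paper evaluates PSRK fugacities instead.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-phase reaction A <-> B with ideal behaviour (activities are mole
# fractions); Keq and the initial charge are invented for the demo.
Keq = 4.0
A_st = np.array([[-1.0], [1.0]])           # stoichiometric matrix, Eq. (1)

def ode(tau, n):
    x = n / n.sum()                        # mole fractions
    r = Keq * (x[0] - x[1] / Keq)          # Eq. (3) with "normalized" k = Keq
    return (A_st * r).ravel()              # dn/dtau = A r

sol = solve_ivp(ode, (0.0, 50.0), [1.0, 0.0], method="BDF")
n_eq = sol.y[:, -1]
print(n_eq, n_eq[1] / n_eq[0])             # ratio approaches Keq = 4
```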

3. Simultaneous Dynamic Method
In order to apply the Dynamic Method to an overall flowsheet simulation, the evolution equations for each single unit u ∈ U are formulated and extended by source and sink terms for each stream that is connected to the unit:

$$\frac{dn^{(u)}}{d\tau} = \dot n_{in} - \dot n_{out} + A^{(u)}\, r^{(u)} \qquad (5)$$

The composition of the outflow is always equal to the composition in the unit. In multiphase units, the outlet is connected to one of the phases present in the unit, e.g.

$$\dot n_{out} = \frac{1}{\theta^{(u)}}\, n^{\pi,(u)}, \qquad (6)$$

where θ^(u) refers to the residence time of the unit. The feed streams of multiphase units may be assigned to an arbitrary phase or may be distributed among the phases. This may have a small impact on the computational performance, but not on the steady-state solution. Additionally, one has to make sure that the fluxes due to chemical reactions and phase transitions are much faster than the fluxes between the units that result from the flowsheet connectivity. The reason is that the thermodynamic equilibrium assumes either infinite reaction volume or infinite residence time; both would lead to a cancellation of the sink and source terms introduced in this section.
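A minimal sketch of the coupling idea of Eqs. (5)-(6): the unit ODEs are stacked into one system with inter-unit source and sink terms, and the reaction flux is deliberately made much faster than the stream fluxes, as required above. The two-unit flowsheet, recycle split and all numbers are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two coupled units: a reactor (A <-> B, Keq = 4) feeding a holdup whose
# outflow is half recycled and half leaves the process. Both unit ODEs are
# stacked and solved in a single integration, as in Eq. (5).
Keq, theta = 4.0, 1.0
feed = np.array([1.0, 0.0])                # fresh feed: pure A, mol/h
k_rxn = 1e3                                # reaction flux >> inter-unit fluxes

def ode(tau, y):
    n1, n2 = y[:2], y[2:]                  # holdups of units 1 and 2
    x1 = n1 / n1.sum()
    r = k_rxn * (x1[0] - x1[1] / Keq)      # reaction rate in unit 1
    out1, out2 = n1 / theta, n2 / theta    # outflows, Eq. (6)
    dn1 = feed + 0.5 * out2 - out1 + np.array([-r, r])
    dn2 = out1 - out2                      # unit 2: pass-through placeholder
    return np.concatenate([dn1, dn2])

sol = solve_ivp(ode, (0.0, 100.0), [1.0, 0.0, 1.0, 0.0], method="BDF")
print(sol.y[:, -1])                        # approx. [0.4, 1.6, 0.4, 1.6]
```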


4. Conventional methods for process simulation
In general process simulations, the overall mass balances of a process are solved using some kind of tearing method. Assuming a simple process that includes a recycle stream, the tearing methods work as follows:
[1] assume an initial guess n_rec^(0) for the recycle stream;
[2] solve the process model straightforwardly and obtain a new value n_rec^new for the recycle stream;
[3] update the value of the recycle stream, e.g. via

$$n_{rec}^{(k+1)} = (1-\lambda)\,n_{rec}^{(k)} + \lambda\,n_{rec}^{new}; \qquad (7)$$

[4] continue with [2] while the mass balance is violated.
The algorithm above gives rise to different types of tearing methods, e.g. the direct substitution method for λ = 1, the under-relaxation method for 0 < λ < 1, or the over-relaxation method for λ > 1. In the linear case, it can be proven that these methods converge for λ ∈ (0, 2). As the stop criterion of these algorithms, the following a posteriori estimation

$$\sum_{\sigma\in S} \left| n_{rec,\sigma}^{(k+1)} - n_{rec,\sigma}^{(k)} \right| < M \qquad (8)$$

of the error in the mass balance was implemented with a threshold of M = 10⁻⁶.
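Steps [1]-[4] can be sketched as follows; the "process" is replaced by an invented linear recycle model so that the example is self-contained, and both the direct substitution (λ = 1) and over-relaxation (λ = 1.8) variants are run.

```python
import numpy as np

# Toy recycle: one flowsheet pass maps the guessed recycle stream to a new
# one; a fixed linear model stands in for the actual unit calculations.
G = np.array([[0.6, 0.1], [0.2, 0.5]])
b = np.array([0.3, 0.4])
process = lambda n_rec: G @ n_rec + b      # n_rec_new = f(n_rec)

def tear(lmbda, M=1e-6, itmax=200):
    n_rec = np.zeros(2)                    # step [1]: initial guess
    for it in range(1, itmax + 1):
        n_new = process(n_rec)             # step [2]: flowsheet pass
        n_next = (1 - lmbda) * n_rec + lmbda * n_new   # step [3], Eq. (7)
        if np.abs(n_next - n_rec).sum() < M:           # stop test, Eq. (8)
            return n_next, it
        n_rec = n_next
    return n_rec, itmax

for lam in (1.0, 1.8):                     # direct substitution vs over-relaxation
    x, k = tear(lam)
    print(lam, k, x)                       # over-relaxation converges faster
```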

5. Methanol synthesis process
In this work, we apply the Simultaneous Dynamic Method to the methanol synthesis process (Rihko-Struckmann et al., 2010) and compare the computational costs against conventional tearing methods. An overview of the process flowsheet is given in Fig. 1. Note that this flowsheet shows only the units which change the composition of a stream; for the sake of clarity, heat exchangers and compressors are not displayed.

Figure 1: Simplified flowsheet of the methanol synthesis process (feed → REACT → FLASH → DIST, with recycle and purge streams; products MeOH and water) and calculation procedures of the sequential approach with equilibrium methods for each unit (green) and the Simultaneous Dynamic Method (blue).


Figure 2: Total number of iterations as a function of the relaxation parameter λ (left), and impact of the recycle ratio (1 − ξ) on the computational performance (right).

For the calculations, we assumed a temperature of Tr = 450 K and a pressure of Pr = 5 MPa in the reactor. In the flash separation, a temperature of Tf = 300 K and a pressure of Pf = 0.5 MPa were assumed. The amount of purge gas was set to ξ = 0.1 and the feed was assumed to be of stoichiometric composition CO2/H2 = 1/3.

6. Results
6.1. Conventional methods
The methanol synthesis process was solved using the conventional tearing method as described in Section 4. The thermodynamic equilibria in the reactor as well as in the separation unit were computed using the Dynamic Method for single units as described in Section 2. In the initialization step of this approach, an initial guess for the composition of the recycle stream has to be supplied, which was set to zero, n_rec = 0. In a first study, the method parameter λ was varied in the range 1/2 ≤ λ ≤ 2. The total number of iterations of the method over the relaxation parameter λ is shown in Fig. 2 (left). Each iteration corresponds to computational costs of ≈ 400 ms of CPU time. For the direct substitution method (λ = 1) we obtained 85 iterations. The best tearing method was an over-relaxation method with λ ≈ 1.8, which solved the mass balances in 48 iterations. In a second study, we investigated the impact of the purge ratio on the conventional method, see Fig. 2 (right). In this case, the method parameter was fixed to λ = 1.8 while the purge ratio was varied over the range ξ ∈ [0.05, 0.5]. It can be seen that small purge ratios ξ lead to slower convergence or, in the worst case, to numerical problems.

6.2. Simultaneous Dynamic Method
By applying the Simultaneous Dynamic Method to the methanol synthesis process, we obtain a system of 15 ODEs describing the evolution of the streams in the overall process cycle.

Figure 3: Structural Jacobian of the ODE system of the Simultaneous Dynamic Method (nz = 135).


Figure 4: Evolution of the composition in the reactor with different initial conditions.

The first five equations refer to the evolution equations of the five species in the reactor. The ten remaining equations describe the evolution of the set of species in the two phases of the vapor-liquid separation unit. The structural Jacobian of the system is given in Fig. 3. The strong coupling of the evolution equations within a process unit can be seen in the filled square submatrices, while the identity-like subblocks of the Jacobian reflect the flowsheet connectivity. It is assumed that the reactor outlet is connected to the vapor phase of the separation unit and that the vapor-phase outlet of the separation unit is fed back to the reactor via the recycle stream. The simultaneous model was solved using the MATLAB ode15s solver (Shampine and Reichelt, 1997) with two different sets of initial conditions. The first set of initial conditions uses the knowledge of the process that the extent of reaction in equilibrium is ≈ 50%; additionally, it is assumed that water and methanol occur only in the liquid phase of the vapor-liquid separation and the remaining gases only in the vapor phase. The second set of initial conditions is a set of normally distributed random values; in this case, no physical knowledge was involved in the setup of the method. Both sets of initial conditions lead to the equilibrium composition at steady state. Fig. 4 (left) shows the evolution of the composition in the reactor for the first set of initial conditions; the evolution for the random initial setup is shown on the right-hand side of Fig. 4. The evolution of the streams of the vapor-liquid separation is shown in Fig. 5. It can be seen that, in both cases, the fluxes due to chemical reactions and phase transitions equilibrate on a time scale of 10⁻¹⁰ < τ < 10⁻⁶. On a second time scale of 10⁻¹ < τ < 10³, the mass balances equilibrate according to the flowsheet connectivity. The computational costs are approximately 400 ms for the first set of initial conditions and about 1000 ms for the random set of initial conditions.

Table 1: Computational performance of the two tearing methods compared to the Simultaneous Dynamic Method.

Method           Direct Substitution   Over-Relaxation   Dynamic Method
λ                1                     1.8               —
Iterations       85                    48                1
Time/Iteration   ≈ 400 ms              ≈ 400 ms          ≈ 400 ms
CPU-time         34 s                  19 s              0.4 s



Figure 5: Evolution of the vapor streams (upper plots) and the liquid streams (lower plots) of the separation for the physically motivated initial conditions (left) as well as the random initial conditions (right).

7. Conclusion
The methanol process was simulated using conventional tearing methods as well as our proposed Simultaneous Dynamic Method. A summary of the computational expenses is given in Table 1. It was shown that the dynamic simulation approach is more efficient in terms of computational cost. Additionally, the proposed method fulfills the mass balances exactly and is insensitive to variations of the initial conditions and of the size of the purge ratio.

References K. Fischer, J. Gmehling, 1996. Status and Results of the PSRK Method for the Prediction of Vapor-Liquid Equilibria and Gas Solubilities. Fluid Phase Equilibria 121, 185–206. T. Holderbaum, J. Gmehling, 1991. PSRK: A Group Contribution Equation of State Based on UNIFAC. Fluid Phase Equilibria 70, 251–265. Y. Lwin, 2000. Chemical Equilibrium by Gibbs Energy Minimization on Spreadsheets. International Journal of Engineering Education 16 (4), 335–339. L. Rihko-Struckmann, A. Peschel, R. Hanke-Rauschenbach, K. Sundmacher, 2010. Assessment of Methanol Synthesis Utilizing Exhaust CO2 for Chemical Storage of Electrical Energy. Industrial & Engineering Chemistry Research 49 (21), 11073–11078. L. F. Shampine, M. W. Reichelt, 1997. The MATLAB ODE Suite. SIAM Journal on Scientific Computing 18 (1), 1–22. G. Soave, 1972. Equilibrium Constants from a Modified Redlich-Kwong Equation of State. Chemical Engineering Science 27 (6), 1197–1203. S. M. Walas, 1985. Phase Equilibria in Chemical Engineering. Butterworth Publishers. A. Zinser, K. Ye, L. Rihko-Struckmann, K. Sundmacher, 2015. A Dynamic Method for Computing Thermodynamic Equilibria in Process Simulation. Computer Aided Chemical Engineering 37, 299–304.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50092-8

Spatial branch and bound algorithm for the global optimization of MIQCPs
Pedro M. Castro
Centro de Matemática, Aplicações Fundamentais e Investigação Operacional, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal

Abstract
We propose a new global optimization algorithm for a class of non-convex mixed-integer nonlinear problems. It considers tight relaxations at every node of the spatial branch-and-bound tree, performing optimality-based bound tightening upon branching and relying on normalized multiparametric disaggregation. Through the solution of benchmark problems from the literature, we show that reductions in optimality gap of orders of magnitude can be achieved compared to the commercial solvers BARON and GloMIQO.
Keywords: MINLP; NLP; Process networks; Blending problems.

1. Introduction
Non-convex mixed-integer quadratically constrained problems (MIQCPs) are frequently encountered in process systems engineering. Examples can be found in design and operational problems dealing with water networks (Teles et al., 2012), blending of crude oil (Castro and Grossmann, 2014a) and refined petroleum products (Kolodziej et al., 2013b), and hydroelectric power plants (Castro and Grossmann, 2014b). It is often very difficult to find a feasible solution to such problems, let alone the global optimum. Commercial global optimization solvers like BARON and GloMIQO (Sahinidis, 1996; Misener and Floudas, 2013) have improved dramatically over the years and can effectively tackle small to medium size problems. They rely on spatial branch and bound (B&B) to iteratively reduce the variables' domains, one by one, and improve the quality of the linear (LP) or mixed-integer linear (MILP) relaxation of the MIQCP. MILP relaxations, e.g. multiparametric disaggregation (MDT) (Kolodziej et al., 2013a), work by simultaneously partitioning the domains of multiple variables and can be made as tight as desired so as to prove global optimality without the need for spatial B&B. In fact, an MDT-based algorithm has already been found more efficient than BARON and GloMIQO (Castro and Grossmann, 2014a; Castro, 2015). This work is about the integration of MDT with spatial B&B so that larger, more challenging MIQCPs can be solved to global optimality.

2. Problem definition
We consider the following non-convex MIQCP:

$$\min \; \sum_{(i,j)\in BL} Q_{ij}^{0}\, x_i x_j + (a^{0})^{T} x + (b^{0})^{T} y$$
$$\text{subject to} \quad \sum_{(i,j)\in BL} Q_{ij}^{q}\, x_i x_j + (a^{q})^{T} x + (b^{q})^{T} y \le c_q \quad \forall q \in Q\setminus\{0\} \qquad (P)$$
$$x^{L} \le x \le x^{U}, \quad x \ge 0, \quad y \in \{0,1\}$$

where x is a vector of non-negative continuous variables and y are binary variables. BL is an (i,j)-index set that defines the bilinear terms comprising continuous variables x_i and x_j present in the problem (binary variables appear linearly), while x^L/x^U are the variables' lower/upper bounds. Set Q includes the objective function and all constraints; the c_q are scalars, whereas the Q^q are matrices and the a^q and b^q coefficient vectors. In the absence of binary variables, (P) becomes a quadratically constrained problem (QCP), which is also considered in this work.

3. MILP relaxation from multiparametric disaggregation (MDT)
In the normalized MDT relaxation (Castro, 2015), the bilinear terms in (P) are first replaced by new sets of variables w_ij = x_i x_j. The user then selects one variable in every (i,j) pair to be discretized: x_j. In fact, it is actually the range between the variable's lower and upper bounds that is being discretized, through normalized auxiliary variables λ_j ∈ [0,1]. The discrete representation considers a finite number of positions l ∈ {−1, …, −p} in the decimal representation system, for which exactly one digit k ∈ {0, 1, …, 9} needs to be chosen through binary variables z_jkl. Parameter p controls the quality of the relaxation. The continuous domain is then recovered by adding slack variables 0 ≤ Δλ_j ≤ 10⁻ᵖ. The constraints of the MILP relaxation of (P), reconstructed here following Castro (2015), take the form

$$x_j = x_j^{L} + (x_j^{U}-x_j^{L})\,\lambda_j, \qquad \lambda_j = \sum_{l=-p}^{-1}\sum_{k=0}^{9} 10^{l}\,k\,z_{jkl} + \Delta\lambda_j, \qquad \sum_{k=0}^{9} z_{jkl} = 1 \;\;\forall l,$$
$$w_{ij} = x_j^{L}\,x_i + (x_j^{U}-x_j^{L})\,v_{ij}, \qquad v_{ij} = \sum_{l=-p}^{-1}\sum_{k=0}^{9} 10^{l}\,k\,\hat{x}_{ijkl} + \Delta v_{ij},$$
$$x_i = \sum_{k=0}^{9}\hat{x}_{ijkl}\;\;\forall l, \qquad x_i^{L}\,z_{jkl} \le \hat{x}_{ijkl} \le x_i^{U}\,z_{jkl}, \qquad \text{(PR-MDT)}$$
$$x_i^{L}\,\Delta\lambda_j \le \Delta v_{ij} \le x_i^{U}\,\Delta\lambda_j, \qquad \left(\Delta\lambda_j - 10^{-p}\right)x_i^{U} + 10^{-p} x_i \le \Delta v_{ij} \le \left(\Delta\lambda_j - 10^{-p}\right)x_i^{L} + 10^{-p} x_i,$$

together with the original linear part of (P) written in terms of the w_ij. Two important properties are worth highlighting for (PR-MDT): (i) it is feasible for values of (x, w) that do not satisfy w_ij = x_i x_j and hence are infeasible in (P); (ii) its solution yields a lower bound for (P).

Due to the tight relaxation, the optimal solution (x̄, ȳ) of (PR-MDT) is typically close to the feasible region of (P). It thus provides a good starting point for solving (P) with a fast local solver, after fixing the binary variables to ȳ so as to reduce it to an NLP. Upper bounds are generated this way. Note that while it is possible that ȳ makes (P) infeasible, this is less likely to occur for lower values of p; in such a case, the upper bound is +∞.
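A small numeric illustration of the normalized discretization (assuming the reconstruction above): a variable is normalized to λ ∈ [0,1], its first p decimal digits are extracted, and the remainder plays the role of the slack Δλ, bounded by 10⁻ᵖ. The specific bounds and value below are invented.

```python
import numpy as np

def digits(lam, p):
    """Decimal digits of lam in positions 10^-1 ... 10^-p plus slack,
    mimicking how (PR-MDT) represents the normalized variable lambda_j."""
    z, rest = [], lam
    for l in range(1, p + 1):
        k = int(rest // 10**-l)
        z.append(k)
        rest -= k * 10**-l
    return z, rest                         # rest plays the role of Delta-lambda

xL, xU, p = 2.0, 8.0, 3
x = 5.4321
lam = (x - xL) / (xU - xL)                 # normalization to [0, 1]
z, slack = digits(lam, p)
lam_back = sum(k * 10**-(l + 1) for l, k in enumerate(z)) + slack
print(z, slack <= 10**-p,                  # slack bounded by 10^-p
      np.isclose(xL + (xU - xL) * lam_back, x))
```

Increasing p shrinks the admissible slack by a factor of 10 per position, which is exactly how the relaxation is tightened.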

4. Optimality based bound tightening (OBBT)
Besides p, the quality of the (PR-MDT) relaxation is a function of the bounds x^L and x^U, which can be estimated using system knowledge. Stronger bounds require solving multiple optimization problems. While optimality-based bound tightening can be very time consuming, the benefit from a tighter relaxation might pay off in the long run. Current spatial branch-and-bound algorithms perform OBBT only at the root node or up to a limited depth (Belotti et al., 2009), whereas in the proposed algorithm, OBBT is executed in every node of the search tree. We rely on the standard McCormick relaxation for OBBT, which, unlike (PR-MDT), involves the same number of binary variables as (P) and hence can be solved much faster. All variables appearing in a bilinear term are subject to bound tightening. More specifically, we solve one minimization and one maximization problem of type (BC) in sequence:

$$\min x_j \;\;(\text{or}\;\max x_j)$$
$$\text{subject to} \quad \sum_{(i,j)\in BL} Q_{ij}^{q}\, w_{ij} + (a^{q})^{T} x + (b^{q})^{T} y \le c_q \quad \forall q \in Q\setminus\{0\},$$
$$w_{ij} \ge x_j^{L}x_i + x_i^{L}x_j - x_i^{L}x_j^{L}, \qquad w_{ij} \ge x_j^{U}x_i + x_i^{U}x_j - x_i^{U}x_j^{U}, \qquad \text{(BC)}$$
$$w_{ij} \le x_j^{L}x_i + x_i^{U}x_j - x_i^{U}x_j^{L}, \qquad w_{ij} \le x_j^{U}x_i + x_i^{L}x_j - x_i^{L}x_j^{U},$$
$$x^{L} \le x \le x^{U}, \quad y \in [0,1]$$

Note that the domain can be reduced even if no feasible solution (upper bound = +∞) is yet known for (P). Note also that the integrality constraints have been relaxed to y ∈ [0,1] so as to reduce the computational time by solving LPs instead of MILPs; the resulting bounds will thus be weaker than for y ∈ {0,1}.
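A sketch of one (BC)-style solve on an invented toy instance: the bilinear constraint x1·x2 ≥ 4 is relaxed with McCormick envelopes and the resulting LPs (min and max of x1) are solved with scipy; the bounds on x1 tighten from [0, 10] to [0.4, 5.6].

```python
import numpy as np
from scipy.optimize import linprog

# LP variables: [x1, x2, w], with w standing for the bilinear term x1*x2.
# Original (toy) constraints: x1 + x2 <= 6 and x1*x2 >= 4, x in [0, 10]^2.
A_ub = np.array([
    [ 1,   1,  0],    # x1 + x2 <= 6
    [ 0,   0, -1],    # w >= 4  (relaxed bilinear constraint)
    [-10,  0,  1],    # w <= xU2*x1 + xL1*x2 - xL1*xU2 = 10*x1
    [ 0, -10,  1],    # w <= xL2*x1 + xU1*x2 - xU1*xL2 = 10*x2
    [10,  10, -1],    # w >= xU2*x1 + xU1*x2 - xU1*xU2 (under-estimator)
])
b_ub = np.array([6, -4, 0, 0, 100])
bounds = [(0, 10), (0, 10), (0, 100)]     # w >= 0 covers the other envelope

for sense, c in (("min", [1, 0, 0]), ("max", [-1, 0, 0])):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(sense, "x1 =", round(res.x[0], 4))   # 0.4 and 5.6
```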

5. Branching strategy
Given the optimal solution (x̄, w̄) from the relaxation problem, the next step is to select the branching variable. Since the x_j variables are already being partitioned as part of the relaxation, we select the x_j (index j*) contributing to the largest discrepancy between the exact and relaxed representations of the bilinear terms:

$$j^{*} = \arg\max_{j} \; \max_{i} \left| \bar{w}_{ij} - \bar{x}_i \bar{x}_j \right|$$

Each node in the B&B tree holds information about the lower bound and the branching variable. The node selected for branching (n*) features the smallest lower bound amongst all open nodes, n* = arg min_n LB_n. We then bisect [x_{j*}^L, x_{j*}^U], closing node n* and opening two new nodes.
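The branching rule can be sketched in a few lines; the function name and the tiny example data are ours.

```python
import numpy as np

# Branch on the discretized variable x_j whose bilinear terms show the
# largest gap between w_ij and x_i*x_j at the relaxation optimum.
def select_branching_variable(x, w, pairs):
    """pairs: list of (i, j) bilinear index pairs; w: dict over pairs."""
    gaps = {}
    for i, j in pairs:
        gaps[j] = max(gaps.get(j, 0.0), abs(w[i, j] - x[i] * x[j]))
    return max(gaps, key=gaps.get)

x = np.array([2.0, 3.0, 1.5])
w = {(0, 1): 7.1, (0, 2): 3.0, (1, 2): 4.9}
print(select_branching_variable(x, w, list(w)))   # -> 1 (gap 1.1 on pair (0, 1))
```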


6. Global optimization algorithm
Now that all main components have been presented, we can briefly describe how the new spatial B&B global optimization algorithm works. Given the value of the accuracy parameter p and the relative optimality tolerance ε, we start by solving a MILP derived from the standard McCormick relaxation, leading to an initial lower bound. The solution is then used as a starting point for solving (P) and generating our first upper bound. If the gap is already within the tolerance, the search can be terminated. Otherwise, we find tighter bounds for the bilinear variables by solving multiple (BC) problems. We then solve MILP relaxation problem (PR-MDT) to improve the lower bound, followed by (P), to see if the upper bound can be decreased from the potentially different starting point. Still at the root node, we check the termination criterion again. If it is not met, we bisect the domain of one of the variables through branching. For each branch (node), we apply OBBT followed by (PR-MDT) to find the node's lower bound, and then by (P). Notice that the (BC) and (PR-MDT) problems may become infeasible after branching; in such a case, the node can be fathomed. The same is true if the node's lower bound comes within the relative tolerance of the incumbent. The search proceeds by selecting the node with the minimum lower bound until no more nodes are left open or the time limit is reached.
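For illustration, here is a self-contained toy version of such a spatial B&B loop. It keeps the overall structure described above (relaxation-based lower bounds, best-bound node selection, bisection), but substitutes plain McCormick LPs for OBBT plus (PR-MDT), evaluates the objective at the relaxation point instead of running a local NLP solver, and branches on the widest domain rather than on the largest bilinear discrepancy; the problem instance is invented.

```python
import heapq, itertools
import numpy as np
from scipy.optimize import linprog

# Toy: min -x1*x2  s.t.  x1 + x2 <= 4,  x in [0, 4]^2  (optimum -4 at (2, 2)).
def mccormick_lb(b1, b2):
    (l1, u1), (l2, u2) = b1, b2            # LP vars: [x1, x2, w]
    A = [[1, 1, 0],                        # x1 + x2 <= 4
         [-u2, -l1, 1],                    # w <= u2*x1 + l1*x2 - l1*u2
         [-l2, -u1, 1]]                    # w <= l2*x1 + u1*x2 - u1*l2
    b = [4, -l1 * u2, -u1 * l2]
    res = linprog([0, 0, -1], A_ub=A, b_ub=b, bounds=[b1, b2, (None, None)])
    return (res.fun, res.x) if res.success else (np.inf, None)

def bnb(eps=1e-4):
    UB, best, tie = np.inf, None, itertools.count()
    lb0, x0 = mccormick_lb((0, 4), (0, 4))
    heap = [(lb0, next(tie), ((0, 4), (0, 4)), x0)]
    while heap:
        lb, _, bounds, x = heapq.heappop(heap)
        if lb >= UB - eps:                 # best-bound node already converged
            return UB, best
        f = -x[0] * x[1]                   # feasible point -> upper bound
        if f < UB:
            UB, best = f, x[:2]
        j = int(np.argmax([b[1] - b[0] for b in bounds]))   # widest domain
        mid = sum(bounds[j]) / 2
        for half in ((bounds[j][0], mid), (mid, bounds[j][1])):
            nb = list(bounds); nb[j] = half
            nlb, nx = mccormick_lb(*nb)
            if nlb < UB - eps:             # keep only promising children
                heapq.heappush(heap, (nlb, next(tie), tuple(nb), nx))
    return UB, best

print(bnb())                               # (-4.0, (2.0, 2.0))
```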

7. Numerical experiments
The performance of the new algorithm is evaluated through the solution of 43 benchmark problems from the literature: (i) 18 water-using and (ii) 15 wastewater-treatment design problems (Teles et al., 2012); (iii) 7 multiperiod blending problems (Kolodziej et al., 2013b); (iv) 3 hydroelectric scheduling problems (Castro and Grossmann, 2014b). Problems (i)-(ii) are of the NLP type, meaning that (BC) will be an LP instead of an MILP. The choice of discretized variables can be found in Castro (2015), while p = 1. The algorithm was implemented in GAMS 24.3 and run on an Intel i7-4790 processor with 8 GB of RAM, an SSD and Windows 7. Problems (PR-MDT) and (BC) were solved by CPLEX 12.6 using up to 8 threads. The termination criteria were a relative optimality tolerance of 10⁻⁶ or a total computation time of 3600 CPUs for (i)-(iii) and 18,000 CPUs (3600 CPUs maximum per MILP) for (iv). Within the proposed algorithm, the NLPs from (P) were solved by CONOPT 3.16C; the comparison involves commercial solvers BARON 14.0.3 and GloMIQO 2.3.
7.1. Results analysis
Table 1 gives some key performance indicators comparing the computational performance of the proposed spatial B&B algorithm with the two commercial solvers. All three algorithms could find the global optimal solutions except for two hydro scheduling problems. Proving global optimality is, however, not as easy. Nevertheless, the new algorithm has the highest success rate (77%), solving 15 problems of type (i), 15 of type (ii) and 3 of type (iii) to 0.0001% gap.

Table 1. Key performance indicators (43 problems solved)

Algorithm         Optimal solutions found   Proven optimal   Best performer
New (MDT-based)   41                        33               20
GloMIQO           41                        25               20
BARON             41                        17               3


Figure 1. Performance profile for optimality gap (cumulative distribution function for the MDT algorithm, BARON and GloMIQO).

Figure 2. Performance profile for computational time (cumulative distribution function for the MDT algorithm, BARON and GloMIQO).

BARON excels in the multiperiod blending problems (MINLPs), proving optimality for all of them in less than 30 minutes, but has difficulties in the water network problems (NLP type). GloMIQO has the advantage of typically being the fastest at proving optimality, which is the main reason why it is the best performer in 20 problems (the first criterion is the lowest optimality gap; if it is the same, the second criterion is the minimum computational time). While the chosen key indicators are useful, the performance profiles of Dolan and Moré (2002) are far more powerful. We choose to consider the optimality gap (computed using the best known solution and not necessarily the upper bound found by the algorithm) in Figure 1 and the total computational time in Figure 2, since they give complementary information. The cumulative distributions are plotted as a function of a parameter τ, which tells us that the performance ratio with respect to the best solver is below 2^τ. Given that all algorithms find the same number of optimal solutions, and since a consistently lower gap increases the likelihood that the returned solutions are indeed global optima, one may argue that the optimality gap at termination is more important than the total time to prove global optimality. The results in Figure 1 show that the new algorithm is the best in 88% of the problems (check the value for τ = 0), returning 95% of the time a gap that is at most three orders of magnitude higher than the gap of the best performer. For τ = 10, GloMIQO is still at 74%, because it is unable to bring the


gap down to 0.1% for a few problems (maximum = 9.8%) that can be solved to 0.0001% gap by the new algorithm. BARON is particularly inefficient in the wastewater treatment problems, returning gaps above 10% in six cases (maximum = 72.3%). Overall, there is a clear ranking, with the new algorithm in first place, GloMIQO in second and BARON in third. If, on the other hand, the focus is on the computational time to prove global optimality, GloMIQO is the winner, being the fastest in 68% of the cases, compared to the new algorithm's 46% (the sum exceeds 100% because ties count for both algorithms). Still, the new algorithm can prove optimality more often, so it is no surprise that it goes above GloMIQO at some point, around τ = 5, meaning that it is more likely to be at most 32 times slower than the fastest solver.

8. Conclusions
This paper has presented a spatial branch-and-bound algorithm for the solution of mixed-integer quadratically constrained problems. Its main differentiating feature is the use, on every node of the search tree, of a relaxation derived from normalized multiparametric disaggregation, preceded by optimality-based bound tightening of all variables appearing in bilinear terms. The computational results have shown that the new algorithm is better at proving global optimality than the commercial solvers BARON and GloMIQO, while the latter is typically the fastest. Performance is especially good for the NLP problems; the worst results, obtained for the multiperiod blending problems (compared to BARON), suggest changing the branching strategy when dealing with binary variables.

References
P. Belotti, J. Lee, L. Liberti, F. Margot, A. Wächter, 2009, Branching and Bounds Tightening Techniques for Non-convex MINLP. Opt. Meth. Soft., 24 (4-5), 597-634.
P.M. Castro, 2015, Normalized Multiparametric Disaggregation: An Efficient Relaxation for Mixed-Integer Bilinear Problems, J. Glob. Optim., doi:10.1007/s10898-015-0342-z.
P.M. Castro, I.E. Grossmann, 2014a, Global Optimal Scheduling of Crude Oil Blending Operations with RTN Continuous-time and Multiparametric Disaggregation, Ind. Eng. Chem. Res., 53, 15127-15145.
P.M. Castro, I.E. Grossmann, 2014b, Optimality-based Bound Contraction with Multiparametric Disaggregation for the Global Optimization of Mixed-Integer Bilinear Problems, J. Glob. Optim., 59, 277-306.
E.D. Dolan, J.J. Moré, 2002, Benchmarking Optimization Software with Performance Profiles. Math. Program. Ser. A, 91, 201-213.
S.P. Kolodziej, P.M. Castro, I.E. Grossmann, 2013a, Global Optimization of Bilinear Programs with a Multiparametric Disaggregation Technique, J. Glob. Optim., 57, 1039-1063.
S.P. Kolodziej, I.E. Grossmann, K.C. Furman, N.W. Sawaya, 2013b, A Discretization-based Approach for the Optimization of the Multiperiod Blend Scheduling Problem, Comput. Chem. Eng., 53, 122-142.
R. Misener, C.A. Floudas, 2013, GloMIQO: Global Mixed-Integer Quadratic Optimizer, J. Glob. Optim., 57, 3-50.
N. Sahinidis, 1996, BARON: A General Purpose Global Optimization Software Package, J. Glob. Optim., 8, 201-205.
J.P. Teles, P.M. Castro, H.A. Matos, 2012, Global Optimization of Water Networks Design using Multiparametric Disaggregation, Comput. Chem. Eng., 40, 132-147.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50093-X

Risk Management of Shale Gas Supply Chain under Estimated Ultimate Recovery Uncertainty
Jiyao Gao, Fengqi You*
Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208, USA

Abstract This paper addresses the risk management for optimal design and operations of shale gas supply chains under uncertainty of estimated ultimate recovery (EUR). A multiobjective two-stage stochastic mixed-integer linear programming model is proposed to optimize the expected total cost and the financial risk. The latter criterion is measured by conditional value-at-risk (CVaR) and downside risk. In this model, both design and planning decisions are considered with respect to shale well drilling, shale gas production, processing, multiple end-uses, and transportation. In order to solve this computationally challenging problem, we integrate both the sample average approximation method and the L-shaped method. The proposed model and solution methods are illustrated through a case study based on the Marcellus shale play. According to the optimization results, the stochastic model provides a feasible design for all the scenarios with the lowest expected total cost. Moreover, after risk management, total expected cost increases but the risk of high-cost scenarios is reduced effectively, and the CVaR management shows its advantage over downside risk management in this specific case study. Keywords: shale gas, estimated ultimate recovery, uncertainty, risk management

1. Introduction
In recent years, the widespread application of horizontal drilling and hydraulic fracturing has led to a “shale revolution”. Despite the optimistic forecast of shale gas production given by the EIA (EIA, 2015), the actual future of shale gas may not be as bright as the EIA suggests (Hughes, 2014). From a well-by-well calculation of shale gas production throughout the U.S. by the Post Carbon Institute, the conclusion is that the actual profitability of a shale well can be significantly affected by the uncertainty in the estimated ultimate recovery (EUR). Considering the significant influence of the shale gas industry on the overall U.S. energy sector, it is essential to address the optimization of emerging shale gas supply chains with explicit consideration of EUR uncertainty and actual shale gas production profiles (Garcia and You, 2015). Supply chain design and optimization has long been known as a challenging problem that is vital to the success of industrial concerns. Currently, there are publications regarding shale gas supply chain planning (Cafaro and Grossmann, 2014), shale water management (Yang et al., 2015), process design (He and You, 2014), and GHG emissions (Gao and You, 2015a). In this work, we focus on addressing the optimal design and operations of a shale gas supply chain under EUR uncertainty. We derive the EUR distribution based on real-world reports (Swindell, 2014). By reviewing the reported EUR data, we identify a wide distribution of EUR with distinct “long tails”, which is suitable for applying a stochastic programming approach to explicitly account

530

J. Gao and F. You

for the EUR uncertainty (You et al., 2009). A multiobjective two-stage stochastic mixed-integer linear programming (SMILP) model is hereby proposed to simultaneously minimize the expected total cost and the financial risk. The objective is to find a solution with the best expected performance across all scenarios. Notably, the resulting problem size may increase exponentially as the number of scenarios increases. To tackle this challenge, we adopt a sample average approximation (SAA) approach to generate scenarios based on the real-world EUR distribution data, combined with statistical methods to determine the number of scenarios required to achieve the desired accuracy (Shapiro, 2003). In order to further boost the solution process of the resulting two-stage SMILP problem, an L-shaped method is used to solve large-scale problems efficiently. Finally, a case study based on the Marcellus shale play is presented to illustrate the application of the proposed modelling framework and solution approaches (Gao and You, 2015b).

2. Problem Statement
In this section, we formally state the optimization problem of a shale gas supply chain under EUR uncertainty. A shale gas supply chain network includes a set of shale sites with potential wells to be drilled, a set of processing plants where raw gas is processed and pipeline-quality shale gas is obtained, and a set of end customers, namely power plants, industrial customers, commercial customers, and residential customers, where shale gas is consumed. Shale gas is transported through pipelines in this network. The major decision variables comprise two stages. The first-stage decisions involve all the design decisions, which are made “here-and-now” prior to the realization of the EUR uncertainty. The second-stage decisions are all the operational decisions, which are made “wait-and-see” after the EUR uncertainty is revealed. The detailed decision variables are summarized in the following:
Stage I: Design Decisions
• Drilling schedule at each shale site;
• Construction and corresponding capacities of processing plants;
• Installation and corresponding capacities of pipelines among shale sites, processing plants, and end customers.
Stage II: Operational Decisions
• Amount of shale gas produced at each shale site in each time period;
• Amount of shale gas processed at each processing plant in each time period;
• Amount of gas sent to each end customer in each time period.
The objective of this two-stage stochastic programming problem is to minimize the expected total cost across the supply chain as well as the financial risk. The latter criterion is evaluated by different risk management approaches, namely downside risk and CVaR management. The ultimate goal of this work is to seek a balanced strategy regarding both expected economic performance and risk reduction.

3. Model Formulation
Based on the general problem statement, we develop a multiobjective SMILP model. The economic objective is to minimize the expected total cost, which includes the first-stage cost TC1 plus the expected second-stage cost, given by the sum over scenarios of the scenario probability p_js times the associated scenario cost TC2,js.


$$\min \; E(\text{Cost}) = TC_1 + \sum_{js\in JS} p_{js}\cdot TC_{2,js} \qquad (1)$$

The first-stage cost includes the capital investment regarding shale well drilling, construction of processing plants, and installation of pipeline networks, calculated by:

$$TC_1 = \sum_{i\in I}\sum_{t\in T} \frac{sdc_{i,t}\cdot NN_{i,t}}{(1+dr)^t} + \sum_{p\in P}\sum_{r\in R}\left( pri_{r-1}\cdot yp_{p,r} + \left(PC_{p,r} - prc_{r-1}\cdot yp_{p,r}\right)\cdot\frac{pri_r - pri_{r-1}}{prc_r - prc_{r-1}} \right)$$
$$+ \sum_{i\in I}\sum_{p\in P}\sum_{r\in R}\left( tpri_{r-1}\cdot xp_{i,p,r} + \left(TPC_{i,p,r} - tprc_{r-1}\cdot xp_{i,p,r}\right)\cdot\frac{tpri_r - tpri_{r-1}}{tprc_r - tprc_{r-1}} \right)$$
$$+ \sum_{p\in P}\sum_{m\in M}\sum_{r\in R}\left( tpri_{r-1}\cdot xpm_{p,m,r} + \left(TPMC_{p,m,r} - tprc_{r-1}\cdot xpm_{p,m,r}\right)\cdot\frac{tpri_r - tpri_{r-1}}{tprc_r - tprc_{r-1}} \right) \qquad (2)$$

The second-stage cost corresponding to each scenario accounts for the shale gas production cost, gas processing cost and transportation cost, as well as the income from sales of natural gas liquids (NGL), which enters as a negative term:

$$TC_{2,js} = \sum_{i\in I}\sum_{t\in T} \frac{spc_{i,t}\cdot SP_{i,t,js}}{(1+dr)^t} + \sum_{i\in I}\sum_{p\in P}\sum_{t\in T} \frac{vp\cdot STP_{i,p,t,js} + vtcs\cdot lsp_{i,p}\cdot STP_{i,p,t,js}}{(1+dr)^t}$$
$$+ \sum_{p\in P}\sum_{m\in M}\sum_{t\in T} \frac{vtcm\cdot lpm_{p,m}\cdot STPM_{p,m,t,js}}{(1+dr)^t} - \sum_{p\in P}\sum_{t\in T} \frac{plt\cdot PLS_{p,t,js}}{(1+dr)^t} \quad \forall js \qquad (3)$$

The risk objective is to minimize the financial risk, which can be addressed by either the downside risk management approach (4) or the CVaR management approach (5).

$$\min \; \text{DRisk}(x,\Omega) = \sum_{js\in JS} p_{js}\cdot\psi_{js}(x,\Omega)$$
$$\text{s.t.} \quad \psi_{js} \ge \text{Cost}_{js} - \Omega, \qquad \psi_{js} \ge 0 \qquad (4)$$

In the downside risk management method, a positive variable ψ_js is introduced, defined as the positive deviation of the scenario cost Cost_js above the target Ω. If the scenario cost Cost_js is less than the target Ω, ψ_js equals 0; otherwise, ψ_js equals the difference.

$$\min \; \text{CVaR}(x,\alpha) = \frac{\sum_{js\in JS} p_{js}\cdot\phi_{js}}{1-\alpha} + \text{VaR}$$
$$\text{s.t.} \quad \phi_{js} \ge \text{Cost}_{js} - \text{VaR}, \qquad \phi_{js} \ge 0, \qquad \text{VaR} \ge 0 \qquad (5)$$

CVaR management is another risk management approach (Gebreslassie et al., 2012). Instead of fixing a target cost Ω, a given quantile α ∈ (0,1) is considered, and two non-negative variables are introduced: the value-at-risk (VaR) and the scenario excess φ_js. If the scenario cost Cost_js is less than VaR, φ_js equals 0; otherwise, φ_js equals the difference. These objectives are optimized subject to several types of constraints, including mass balance, capacity, bounding, and logic constraints. Due to the limited space of this paper, the detailed formulations of these constraints are not provided here, but they are available upon request.
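Given a set of scenario costs, both risk measures in Eqs. (4)-(5) can be evaluated directly, which is useful when checking an optimization model. The scenario data below are synthetic; the CVaR evaluation uses the empirical α-quantile as VaR, which attains the minimum in Eq. (5) for the sample distribution.

```python
import numpy as np

# Synthetic scenario costs and probabilities (not from the paper)
rng = np.random.default_rng(0)
cost = rng.lognormal(mean=5.1, sigma=0.3, size=300)   # e.g., M$ per scenario
p = np.full(cost.size, 1.0 / cost.size)

def downside_risk(cost, p, omega):
    """Expected cost overshoot above the target Omega, Eq. (4)."""
    return float(p @ np.maximum(cost - omega, 0.0))

def cvar(cost, p, alpha):
    """CVaR: VaR plus expected excess over VaR scaled by 1/(1-alpha), Eq. (5)."""
    var = np.quantile(cost, alpha)
    return float(var + p @ np.maximum(cost - var, 0.0) / (1.0 - alpha))

print(downside_risk(cost, p, omega=190.0), cvar(cost, p, alpha=0.9))
```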


4. Solution Method
4.1. Sample Average Approximation
Stochastic programming models that rely on scenarios are often computationally demanding. In order to circumvent these computational challenges, we consider the sample average approximation (SAA) for the two-stage stochastic optimization problem. Given the EUR distribution based on actual historical data of 2,600 wells reported in the Marcellus shale play, we use Oracle Crystal Ball to generate sampling points based on the Monte Carlo method. By applying the SAA technique, a sample size of around 300 is enough to find the optimal solution with 98% probability.
4.2. L-shaped Method

Algorithm. L-Shaped Method
1:  Initialization. LB ← −∞, UB ← +∞, Iter ← 1, Gap ← +∞
2:  Solve Master Problem to obtain initial first-stage decisions
3:  while Gap ≥ Tol do Iter ← Iter + 1
4:    while terminate = false do Scenario s ← s + 1
5:      Solve Subproblem with given first-stage decisions
6:      if Subproblem of scenario s is feasible then generate optimality cut: θ ≥ e_o x + d_o
7:      else generate feasibility cut: e_f x + d_f ≤ 0
8:      end if
9:      if count(s) ≥ S then terminate ← true end if
10:   end while
11:   if all the subproblems are feasible then add θ ≥ e_o x + d_o to Master Problem, update UB
12:   else add e_f x + d_f ≤ 0 to Master Problem
13:   end if
14:   Solve Master Problem with updated cuts; update LB
15: end while
16: Output optimal solution

Figure 1. Algorithm for L-shaped method.

In order to solve the resulting SMILP problem, the L-shaped method is applied to take advantage of the two-stage structure (You et al., 2013). As can be seen in Figure 1, we first solve the master problem to obtain the first-stage decisions. Then we fix the first-stage decisions and solve all the scenario-based subproblems to obtain the optimal second-stage decisions. Depending on the feasibility of each subproblem, either a feasibility or an optimality cut is added to the master problem. The optimal solution is obtained when the stopping criterion is satisfied.
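To make the loop of Figure 1 concrete, the sketch below runs a single-cut L-shaped iteration on an invented toy problem with complete recourse, so only optimality cuts appear; feasibility cuts (lines 7 and 12 of the algorithm) are never triggered here. The subproblems are simple enough to solve analytically.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage problem:  min  x + E_s[ 2*max(d_s - x, 0) ],  0 <= x <= 10.
d = np.array([2.0, 5.0, 8.0])
p = np.array([0.3, 0.4, 0.3])
cuts, LB, UB = [], -np.inf, np.inf

while UB - LB > 1e-8:
    # Master problem: min x + theta  s.t.  theta >= a + g*x for all cuts
    A = [[g, -1.0] for (a, g) in cuts] or None
    b = [-a for (a, g) in cuts] or None
    res = linprog([1.0, 1.0], A_ub=A, b_ub=b, bounds=[(0, 10), (0, None)])
    x_hat, theta = res.x
    LB = res.fun
    # Subproblems, solved analytically: expected recourse and its subgradient
    Q = float(p @ (2.0 * np.maximum(d - x_hat, 0.0)))
    g = float(p @ (-2.0 * (d > x_hat)))
    UB = min(UB, x_hat + Q)
    cuts.append((Q - g * x_hat, g))        # optimality cut: theta >= a + g*x

print(x_hat, UB)                           # converges to x = 5, cost 6.8
```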


5. Case Study
To illustrate the application of the proposed model and solution algorithms, we consider a shale gas supply chain problem based on the Marcellus shale play. In this case study, five shale sites are included, each of which can accommodate 4 to 8 wells at maximum. We consider 3 potential shale gas processing plants (He and You, 2015) and 4 different types of shale gas end customers. The total planning horizon is 10 years, which is close to the real productive life of Marcellus wells (Gao and You, 2015a). A total of 300 scenarios are considered. The resulting stochastic model consists of 174,281 constraints, 114,250 continuous variables, and 125 discrete variables. The deterministic model is based on the nominal value of EUR for the shale wells and gives a total cost of $153 million. Meanwhile, the lowest expected total cost returned by the stochastic model is $171 million, 12% higher than that given by the deterministic model. Moreover, we note that the scale of the deterministic model is much smaller than that of the stochastic one, making it easier to solve. However, it is worth pointing out that the optimal supply chain design obtained from the deterministic model fails to provide a feasible solution for 205 out of the 300 scenarios considered in this work. This means that for most realizations of the EUR uncertainty, the specific optimal design from the deterministic model cannot provide the required capacity. In contrast, the stochastic model provides a design that is not only feasible for all the scenarios, but also maintains the best economic performance on average. The optimal design of the shale gas supply chain network is presented in Figure 2.

Figure 2. Optimal design of shale gas supply chain network from stochastic model.

Figure 3. Comparison of the cost distribution before and after risk management.


Here we compare the cost distributions of the different models in Figure 3, including the stochastic model with the economic objective, the downside risk management model, and the CVaR management model. Notably, the stochastic model attains the lowest expected total cost of $171 million. After risk management, the downside risk management model gives a total cost of $176 million, with a target value Ω of $190 million; the total cost in the CVaR management model is $177 million, where the α = 90% quantile corresponds to $201 million.

6. Conclusions
In this work, the stochastic model is shown to be superior to the deterministic model for determining the optimal design of a shale gas supply chain under EUR uncertainty. Additionally, risk management is proven to be an effective tool for reducing the financial risk by sacrificing economic performance to some extent, and CVaR can be the more convenient choice since it does not rely on the choice of a target value Ω.

References
D. C. Cafaro, I. E. Grossmann, 2014, Strategic planning, design, and development of the shale gas supply chain network, AIChE Journal, 60(6), 21.
J. Gao, F. You, 2015a, Shale Gas Supply Chain Design and Operations toward Better Economic and Life Cycle Environmental Performance: MINLP Model and Global Optimization Algorithm, ACS Sustainable Chemistry & Engineering, 3(7), 1282-1291.
J. Gao, F. You, 2015b, Deciphering and handling uncertainty in shale gas supply chain design and optimization: Novel modeling framework and computationally efficient solution algorithm, AIChE Journal, 61(11), 3739-3755.
J. Gao, F. You, 2015c, Optimal Design and Operations of Supply Chain Networks for Water Management in Shale Gas Production: MILFP Model and Algorithms for the Water-Energy Nexus, AIChE Journal, 61, 1184-1208.
B. H. Gebreslassie, Y. Yao, F. You, 2012, Design under Uncertainty of Hydrocarbon Biorefinery Supply Chains: Multiobjective Stochastic Programming Models, Decomposition Algorithm and A Comparison between CVaR and Downside Risk, AIChE Journal, 58, 2155-2179.
D. J. Garcia, F. You, 2015, Supply chain design and optimization: Challenges and opportunities, Computers & Chemical Engineering, 81, 153-170.
J. Gong, F. You, 2015, Sustainable Design and Synthesis of Energy Systems, Current Opinion in Chemical Engineering, 10, 77-86.
C. He, F. You, 2014, Shale Gas Processing Integrated with Ethylene Production: Novel Process Designs, Exergy Analysis, and Techno-Economic Analysis, Industrial & Engineering Chemistry Research, 53(28), 11442-11459.
C. He, F. You, 2015, Toward more cost-effective and greener chemicals production from shale gas by integrating with bioethanol dehydration: Novel process design and simulation-based optimization, AIChE Journal, 61(4), 1209-1232.
A. Shapiro, 2003, Monte Carlo Sampling Methods, in: A. Ruszczynski, A. Shapiro (eds.), Handbooks in Operations Research and Management Science, Elsevier, 353-425.
G. S. Swindell, 2014, Marcellus Shale in Pennsylvania: A 2,600 Well Study of Estimated Ultimate Recovery, SPE Annual Meeting, Dallas, TX.
L. Yang, I. E. Grossmann, M. S. Mauter, R. M. Dilmore, 2015, Investment optimization model for freshwater acquisition and wastewater handling in shale gas production, AIChE Journal, 61(6), 1770-1782.
F. You, I. E. Grossmann, 2013, Multi-cut Benders Decomposition Algorithm for Process Supply Chain Planning under Uncertainty, Annals of Operations Research, 210, 191-211.
F. You, J. M. Wassick, I. E. Grossmann, 2009, Risk management for a global supply chain planning under uncertainty: Models and algorithms, AIChE Journal, 55(4), 931-946.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50094-1

Optimal Design and Synthesis of Shale Gas Processing and NGL Recovery Processes Jian Gong, Fengqi You* Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA

Abstract
The booming shale gas industry is predicted to reshape North America's energy landscape. This paper addresses the robust design and synthesis of shale gas processing and natural gas liquids (NGLs) recovery processes under uncertain feedstock compositions. The problem is addressed in three steps. First, we construct an uncertainty set for raw shale gas compositions. Second, we develop a superstructure for shale gas processing and NGLs recovery processes that includes monoethanolamine (MEA) and diethanolamine (DEA) absorption processes in an acid gas removal section, triethylene glycol (TEG) absorption and condensation processes in a dehydration section, standalone and integrated designs of an NGLs recovery process, and a nitrogen rejection process. The proposed superstructure explicitly shows process configurations for the feedstocks in the uncertainty set. In the last step, we propose a two-stage adaptive robust mixed-integer linear programming problem based on the superstructure and a tailored solution method for the proposed model. The obtained robust optimal design is immunized against uncertainty in the feedstock compositions.
Keywords: shale gas processing, NGLs recovery, superstructure, adaptive robust optimization.

1. Introduction
Shale gas is regarded as an important driver leading the change in North America's energy landscape. Due to the application of mature extraction technologies such as horizontal drilling and hydraulic fracturing, total natural gas production in the U.S. is predicted to increase 45% by 2040 (EIA, 2015). The rapid expansion in upstream shale gas production motivates developing additional shale gas processing plants and downstream facilities to absorb the increasing supplies of shale gas (Gao and You, 2015). There is great economic potential and practical need to address the optimal design and synthesis of shale gas processing and NGLs recovery processes (Julian-Duran et al., 2014). However, uncertain raw shale gas compositions complicate the design and synthesis of shale gas processes (Bullin and Krouskop, 2009). If a processing plant is designed to process shale gas with fixed compositions, a large amount of off-spec gas will be produced when the real composition deviates from the designed composition. In order to maintain a stable supply of qualified gas products, uncertain feedstock compositions must be considered at the design stage (Gong and You, 2015). In this work, we address the robust design and synthesis of shale gas processing and NGLs recovery processes under uncertain feedstock compositions. We first construct an uncertainty set for raw shale gas compositions. Next, we propose a novel process superstructure with technology and process alternatives. The


superstructure explicitly demonstrates the process configurations for all compositions in the discrete uncertainty set. Based on the superstructure, we develop a two-stage adaptive robust mixed-integer linear programming (MILP) model and a tailored solution strategy to solve the optimization problem. The robust optimal design is analyzed and compared with a deterministic case.

2. Uncertainty set definition
There are seven major constituents in raw shale gas: C1, C2, C3+, CO2, H2S, N2, and H2O. The upper and lower bounds of the molar fraction of each constituent are shown in Table 1 (Speight, 2013). We construct a discrete uncertainty set with 12 representative compositions, based on the upper and lower bounds in Table 1. Upper bounds are essential for obtaining a robust capacity of a process; therefore, each upper bound is assigned to 3 compositions in the discrete uncertainty set. The difference among these 3 compositions resides in the molar fractions of the other constituents in the raw shale gas.

Table 1. Upper and lower bounds of raw shale gas composition (molar fraction in %).

Constituent   C1     C2     C3+   CO2   H2S   N2     H2O
Lower bound   77.50  3.00   0.40  0.10  0.02  0.20   0.05
Upper bound   95.50  16.10  4.00  9.00  1.80  14.30  0.05
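A sketch of one way such a 12-point set can be assembled follows. Pinning each wide-ranging constituent at its upper bound follows the text above; how the remaining constituents are varied across the three copies, and letting C1 close the balance, are our assumptions, not the paper's procedure.

```python
import numpy as np

# Bounds from Table 1 (molar fraction in %)
lo = dict(C1=77.5, C2=3.0, C3p=0.4, CO2=0.1, H2S=0.02, N2=0.2, H2O=0.05)
hi = dict(C1=95.5, C2=16.1, C3p=4.0, CO2=9.0, H2S=1.8, N2=14.3, H2O=0.05)

def composition(at_upper, frac):
    """Pin `at_upper` at its upper bound; set the others a fraction `frac`
    of the way between their bounds; let C1 absorb the remainder."""
    c = {k: lo[k] + frac * (hi[k] - lo[k]) for k in lo if k != "C1"}
    c[at_upper] = hi[at_upper]
    c["C1"] = 100.0 - sum(c.values())
    assert lo["C1"] <= c["C1"] <= hi["C1"]   # stays within Table 1 bounds
    return c

U = [composition(k, f) for k in ("C2", "C3p", "CO2", "N2")
     for f in (0.0, 0.05, 0.15)]             # 3 variants per upper bound
print(len(U))                                # 12 representative compositions
```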

3. Process description

Figure 1. Superstructure of shale gas processing and NGLs recovery processes.

A shale gas processing plant is designed to separate valuable NGLs and undesired constituents, so that the resulting gas product meets the specifications for transmission (Gao and You, 2015b). There are typically four sections in shale gas processing and NGLs recovery processes, namely acid gas removal (AGR), dehydration, NGLs recovery, and nitrogen rejection (He and You, 2014). Even when processed by the same technology, different feedstock conditions yield products that are hardly uniform, and the downstream processes have to be tailored in order to satisfy the different processing needs.


In Figure 1, the proposed superstructure explicitly demonstrates the process configuration for each feedstock composition. We consider two solvent absorption alternatives in the AGR section: the first uses MEA as the solvent and the second uses DEA. MEA has a high solution capacity for H2S and CO2, but it also has a relatively high energy requirement for regeneration. In contrast, DEA has a lower energy requirement for regeneration due to its lower vapor pressure, but DEA is unable to dissolve as much of the acid gases as MEA if the same amount of solvent is used. All off-gas streams produced in the shale gas processing and NGLs recovery processes are sent to a power generation system. The process flowsheet is shown in Figure 2. The sweet shale gas stream from the AGR section is water-saturated. However, both the operation of cryogenic units in the NGLs recovery and nitrogen rejection processes and the transport of the gas product in pipelines require a low concentration of water (0.1 ppm). In Figure 3, we consider two gas dehydration alternatives: a TEG absorption process and a condensation process. TEG is selected as the liquid desiccant because of its high absorption efficiency, less energy-intensive regeneration, nontoxicity, and lack of interaction with the hydrocarbons. Due to the need for refrigeration in the condensation and NGLs recovery processes, we further integrate the condensation process with an NGLs recovery process, as shown in process (2) of Figure 3. In order to make the alternatives comparable, a standalone NGLs recovery process is sequentially added to the TEG absorption process in process (1) of Figure 3. As shown in the superstructure, 8 of the 24 gas products from NGLs recovery should be further upgraded to reduce surplus nitrogen. We consider a cryogenic distillation process in the nitrogen rejection section (He and You, 2015).

Figure 2. Process flowsheet of acid gas removal processes.


Figure 3. Process flowsheet of the dehydration and NGLs recovery processes.

4. Model formulation and solution method Based on the superstructure, we develop a two-stage adaptive robust MILP model shown as Equations (1)-(4) (Ben-Tal et al., 2009). The model consists of a design stage and an operational stage (Gao and You, 2015a). The design-stage problem minimizes the annualized investment cost AIC before the uncertainty is revealed. Y is an integer variable for technology selection, and CAP is a continuous variable representing the equipment capacity. Superstructure configuration constraints require that one and only one technology can be selected in a section of the superstructure, and fixed connections are enforced between dehydration and NGLs recovery processes. The objective function of the design-stage problem is a linear function of equipment capacities.

$\min_{Y,\, CAP \in \Omega_1} \; AIC + \max_{c \in U} \; \min_{AOC \in \Omega_2(Y, CAP, c)} AOC$   (1)

$\Omega_1 = \{\text{superstructure configuration constraints, capacity bounding constraints}\}$   (2)

$U = \{\text{12 points in the uncertainty set}\}$   (3)

$\Omega_2 = \{\text{linking constraints, annual operating cost evaluation constraints}\}$   (4)

In contrast, the operational-stage problem minimizes the worst-case annual operating cost AOC, given the design-stage decisions and the revealed uncertainty. All variables in the operational stage are continuous. Constraints in the operational stage include linking constraints and annual operating cost evaluation constraints. Linking constraints


require that the equipment capacity of each section be larger than the processing flowrate of that section for every feedstock composition in the uncertainty set. The operating cost consists of the feedstock purchasing cost, utility cost, waste treatment cost, and operation and maintenance cost. These costs are determined by the realization of the uncertain feedstock in the maximization problem of the operational stage and by the process configuration determined in the design stage. The proposed ARO model cannot be solved directly by any off-the-shelf solver because of its multi-level optimization structure. Because a discrete uncertainty set is used to describe the uncertainty realization in the operational stage, the two-stage problem can be equivalently reformulated into a single-level optimization problem. The key idea is to obtain the worst-case realization of the annual operating cost by simultaneously considering all uncertain compositions in a single-level optimization problem (Shi and You, 2015). As a result, the second-stage constraints are introduced into the first-stage problem.
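To make the single-level reformulation concrete, the following minimal sketch (Python with PuLP) applies the same idea to a toy design problem: the operational variables are duplicated for each point of the discrete uncertainty set, and an auxiliary epigraph variable bounds the worst-case operating cost from above. The two technologies, all cost coefficients, and the two-point feed set are hypothetical placeholders, not the paper's data.

```python
# Sketch of the scenario-indexed single-level reformulation of the two-stage
# ARO MILP, assuming a discrete uncertainty set. All data are hypothetical.
import pulp

TECHS = ["MEA", "DEA"]                   # AGR solvent alternatives
FIX_COST = {"MEA": 2.0, "DEA": 1.5}      # $MM installed cost (hypothetical)
OP_COST = {"MEA": 0.8, "DEA": 1.1}       # $MM per unit throughput (hypothetical)
FEED = {"lean": 9.0, "rich": 12.0}       # stand-in for the 12 composition points
CAP_COST, BIG = 0.3, 100.0

m = pulp.LpProblem("aro_single_level", pulp.LpMinimize)
y = pulp.LpVariable.dicts("select", TECHS, cat="Binary")
cap = pulp.LpVariable("capacity", lowBound=0, upBound=BIG)
theta = pulp.LpVariable("worst_AOC", lowBound=0)   # epigraph of worst-case AOC
flow = pulp.LpVariable.dicts("flow", [(t, s) for t in TECHS for s in FEED],
                             lowBound=0)

# Objective: annualized investment cost plus the worst-case operating cost.
m += pulp.lpSum(FIX_COST[t] * y[t] for t in TECHS) + CAP_COST * cap + theta

m += pulp.lpSum(y[t] for t in TECHS) == 1            # one technology per section
for s, feed in FEED.items():                         # duplicate operations per point
    m += pulp.lpSum(flow[t, s] for t in TECHS) == feed   # process the whole feed
    for t in TECHS:
        m += flow[t, s] <= cap                       # linking constraint
        m += flow[t, s] <= BIG * y[t]                # only the chosen unit runs
    m += theta >= pulp.lpSum(OP_COST[t] * flow[t, s] for t in TECHS)  # epigraph

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: int(y[t].value()) for t in TECHS},
      "cap =", cap.value(), "worst-case AOC =", theta.value())
```

Because theta must dominate the operating cost of every composition point, minimizing AIC + theta sizes the equipment against the worst realization, which is exactly how the second-stage constraints enter the first-stage problem.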

5. Results and discussion
The robust optimal design and a deterministic optimal design are shown in Figure 4. Both designs select the MEA absorption process and the integrated condensation and NGLs recovery process, while nitrogen rejection is selected only in the robust optimal design. The robust optimal process can handle all feedstocks in the uncertainty set, owing to the inclusion of a three-stage compression system in the integrated NGLs recovery process and another three-stage compression system installed in the nitrogen rejection section. As a result, purified gas with less nitrogen can be compressed directly in the NGLs recovery process; otherwise, the purified gas can be further upgraded by the nitrogen rejection process. Both designs demonstrate that the integrated dehydration and NGLs recovery process is more cost-effective than the sequential processes.

Figure 4. Process configurations of two optimal designs.

The minimum total annualized cost of the shale gas processing plant is $43.3 MM. Specifically, the annualized investment cost is $6.0 MM, and the largest share of the investment cost is dedicated to the nitrogen rejection process. The reason for such an expensive nitrogen rejection section is the use of a series of expensive compressors, which are key to maintaining the required low temperature in the cold boxes and to pressurizing the gas product before distribution. As an integrated dehydration and NGLs recovery process, the second and third sections together contribute 28% of the annualized investment cost. The annual operating cost is $37.3 MM, more than five times the annualized investment cost. The feedstock cost is the highest (79%) among the four categories; nearly all of the feedstock expenditure is used to purchase the raw shale gas.

6. Conclusions
We address the robust design and synthesis of shale gas processing and NGLs recovery processes under uncertain feedstock compositions in this work. A three-step computational framework is proposed, based on which we develop a discrete uncertainty set, a superstructure, and a two-stage adaptive robust MILP model with a tailored solution method. With a minimum total annualized cost of $43.3 MM, the robust optimal design consists of an MEA absorption process in the AGR section, a condensation process in the dehydration section, an integrated process in the NGLs recovery section, and a nitrogen rejection process. The robust optimal design is compared with a deterministic optimal design, and the results show the necessity of building extra equipment units to hedge against uncertain feedstock compositions.

References
A. Ben-Tal, L. El Ghaoui, A. Nemirovski, 2009, Robust Optimization, Princeton University Press.
K. A. Bullin, P. E. Krouskop, 2009, Compositional Variety Complicates Processing Plans for US Shale Gas, Oil & Gas Journal, 107, 10, 50-55.
EIA, 2015, Annual Energy Outlook 2015, Washington, DC, DOE/EIA-0383(2015). Available at: http://www.eia.gov/forecasts/aeo/pdf/0383(2015).pdf.
J. Gao, F. You, 2015, Deciphering and handling uncertainty in shale gas supply chain design and optimization: Novel modeling framework and computationally efficient solution algorithm, AIChE Journal, 61, 11, 3739-3755.
J. Gao, F. You, 2015, Optimal Design and Operations of Supply Chain Networks for Water Management in Shale Gas Production: MILFP Model and Algorithms for the Water-Energy Nexus, AIChE Journal, 61, 4, 1184-1208.
J. Gao, F. You, 2015, Shale Gas Supply Chain Design and Operations toward Better Economic and Life Cycle Environmental Performance: MINLP Model and Global Optimization Algorithm, ACS Sustainable Chemistry & Engineering, 3, 7, 1282-1291.
J. Gong, F. You, 2015, Sustainable design and synthesis of energy systems, Current Opinion in Chemical Engineering, 10, 77-86.
C. He, F. You, 2015, Toward more cost-effective and greener chemicals production from shale gas by integrating with bioethanol dehydration: Novel process design and simulation-based optimization, AIChE Journal, 61, 4, 1209-1232.
C. He, F. You, 2014, Shale Gas Processing Integrated with Ethylene Production: Novel Process Designs, Exergy Analysis, and Techno-Economic Analysis, Industrial & Engineering Chemistry Research, 53, 28, 11442-11459.
L. M. Julian-Duran, A. P. Ortiz-Espinoza, M. M. El-Halwagi, A. Jimenez-Gutierrez, 2014, Techno-Economic Assessment and Environmental Impact of Shale Gas Alternatives to Methanol, ACS Sustainable Chemistry & Engineering, 2, 10, 2338-2344.
H. Shi, F. You, 2015, A computational framework and solution algorithms for two-stage adaptive robust scheduling of batch manufacturing processes under uncertainty, AIChE Journal, DOI: 10.1002/aic.15067.
J. G. Speight, 2013, Chapter 4 - Shale Gas Properties and Processing, in Speight, J.G. (ed.), Shale Gas Production Processes, Boston: Gulf Professional Publishing, 101-119.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50095-3

A Leader-Follower Game-Based Life Cycle Optimization Framework and Application
Jiyao Gao, Fengqi You*
Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208, USA

Abstract
In this work, we address the life cycle optimization of a shale gas supply chain covering the well-to-wire life cycle of shale gas-generated electricity. A non-cooperative supply chain with multiple players is considered. Following the Stackelberg game, the power generation sector is identified as the leader, which takes action first and cares about both its own cost and the greenhouse gas emissions across the product life cycle. After observing the power plants' decisions, the shale gas producer, as the follower, acts correspondingly to optimize its own profit. Both players need to make both design and operational decisions. The resulting problem is formulated as a multiobjective mixed-integer bilevel linear programming (MIBLP) problem, which cannot be solved directly using any off-the-shelf solvers. Based on a case study of the Marcellus shale play, the levelized cost of electricity ranges from $75/MWh to $133/MWh, and the corresponding unit greenhouse gas (GHG) emissions range from 111 to 469 kg CO2-eq/MWh. The application of carbon capture and storage has significant impacts on both economic and environmental performance. The non-cooperative supply chain has 9% higher upstream GHG emissions than the cooperative one.
Keywords: game theory, life cycle optimization, shale gas, MIBLP

1. Introduction
In the past decade, the wide application of horizontal drilling and hydraulic fracturing techniques has led to a boom in natural gas production. Accompanying the economic benefits, concerns have been raised regarding the environmental impacts of shale gas production, among which greenhouse gas (GHG) emissions are of special interest to both academia and industry. There is a huge body of work focusing on evaluating the life cycle GHG emissions of shale gas, most of which is based on life cycle assessment (LCA) methodology (Weber and Clavin, 2012). Meanwhile, some publications focus on the optimization of shale gas supply chains; issues such as supply chain planning (Cafaro and Grossmann, 2014), process design (He and You, 2014), water management (Yang et al., 2015), uncertainties (Gao and You, 2015a), and GHG emissions (Gao and You, 2015b) have been addressed. However, all of these studies assume a centralized model, while in practice the management of a shale gas supply chain is normally decentralized and run by different companies. Consequently, the optimal strategy obtained from a centralized model can be practically unachievable (Von Stackelberg, 2010). There is therefore an urgent need to properly address the non-cooperative relationship between multiple stakeholders in the life cycle optimization of a shale gas supply chain (Garcia and You, 2015). To address this challenge, a leader-follower game-based LCO model is proposed, which integrates the leader-follower Stackelberg game (Chu and You, 2014) and the LCO framework (Gong and You, 2015). A "well-to-wire" system boundary is chosen,


covering the whole shale gas supply chain system from shale sites to power plants. A three-echelon shale gas supply chain superstructure is given in Figure 1.

Figure 1. Superstructure of a shale gas supply chain.

2. Problem Statement
As mentioned in the previous section, the stakeholders in a shale gas supply chain are classified into two players: the customer, representing the power generation sector, and the producer, representing the shale sites, processing plants, and corresponding facilities. The customer is the leader in this game and makes decisions first, which include:
• Selection of carbon capture and storage (CCS) technology in each power plant;
• Amount of natural gas to purchase from each of the processing plants;
• Amount of conventional natural gas to purchase from external sources.
The shale gas producer is the follower in this game and reacts rationally according to the leader's decisions; the producer's decisions include:
• Drilling schedule at each shale site;
• Selection of location and capacity for each potential processing plant;
• Amount of raw shale gas transported to each existing processing plant;
• Processing planning at each processing plant;
• Amount of processed natural gas transported to each power plant.
The power plants care not only about their own economic performance, but also about the life cycle environmental performance embedded in the final products. More specifically, the objectives of the power plants are:
• Minimizing the total cost of the power plants;
• Minimizing the total life cycle GHG emissions in the shale gas supply chain.
Meanwhile, the producer takes action after the leader and only cares about its own profit. Thus, after the realization of the power plants' decisions, the producer reacts accordingly to optimize its own objective:
• Maximizing its total profit.


3. Model Formulation and Solution Method
According to the problem statement in the previous section, a multiobjective MIBLP model is developed to address the sustainable design and operations of non-cooperative shale gas supply chain networks. A general form of this MIBLP problem, denoted (P0), is presented as follows.

(P0):
Economic objective: $\min \; TC_{leader} = c_R^T x^u + c_Z^T y^u + d_R^T x^l + d_Z^T y^l$
Environmental objective: $\min \; TE = e_R^T x^u + e_Z^T y^u + f_R^T x^l + f_Z^T y^l$
s.t. $A_R x^u + A_Z y^u + B_R x^l + B_Z y^l \le r$, $\; x^u \in \mathbb{R}_+^{m_R}$, $\; y^u \in \mathbb{Z}_+^{m_Z}$
where $(x^l, y^l)$ solves:
$\max \; TP_{follower} = w_R^T x^l + w_Z^T y^l$
s.t. $Q_R x^u + Q_Z y^u + P_R x^l + P_Z y^l \le s$, $\; x^l \in \mathbb{R}_+^{n_R}$, $\; y^l \in \mathbb{Z}_+^{n_Z}$

where TC_leader denotes the power plants' total cost; TE denotes the total life cycle GHG emissions throughout this shale gas supply chain; and TP_follower denotes the producer's total profit. x^u and y^u indicate the continuous and discrete variables of the upper-level problem, respectively. x^u involves decisions on the amount of natural gas purchased from external sources, the amount of shale gas purchased from the shale gas supply chain, and the amount of gas used for power generation with a certain CCS technology. y^u involves decisions on whether a CCS technology is applied at a power plant. Correspondingly, x^l and y^l indicate the continuous and discrete variables of the lower-level problem, respectively. x^l addresses decisions regarding shale gas production at each shale site, the capacity of the processing plants and the corresponding schedule, and the amount of shale gas sold to power plants. y^l addresses both the drilling decisions at shale wells and the construction decisions for processing plants. The constraints for both the leader and the follower can be classified into five parts, namely mass balance constraints, environmental constraints, economic constraints, capacity constraints, and logic constraints. This leads to a multiobjective MIBLP problem, which cannot be solved directly using any off-the-shelf solvers. Thus, a tailored optimization algorithm is applied to tackle this problem efficiently. We first reformulate the original MIBLP into an equivalent generalized semi-infinite program, which is further reformulated into a projection-based single-level optimization problem. To boost the computational efficiency, a decomposition approach is implemented based on the single-level reformulation. By iteratively solving one master problem and two subproblems, a Karush-Kuhn-Tucker (KKT)-condition-based cut is generated in each iteration and added to the master problem, so that the lower and upper bounds can be updated correspondingly.
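The authors' tailored algorithm generates KKT-condition-based cuts inside a decomposition; to make the underlying KKT idea concrete, the sketch below solves a small textbook bilevel LP (not the shale gas model) by replacing the follower's problem with its KKT conditions and enforcing complementarity through big-M binaries, so that the whole problem collapses into a single-level MILP.

```python
# Single-level KKT reformulation of a toy bilevel LP (optimistic setting),
# solvable with PuLP/CBC. Leader: min x - 4y; follower: min y subject to the
# linear constraints below. The instance and big-M value are illustrative.
import pulp

# Follower constraints a*x + b*y <= c, with y >= 0 written as -y <= 0.
cons = [(-1.0, -1.0, -3.0),   # x + y >= 3
        (-2.0,  1.0,  0.0),   # y <= 2x
        ( 2.0,  1.0, 12.0),   # 2x + y <= 12
        ( 3.0, -2.0,  4.0),   # y >= (3x - 4)/2
        ( 0.0, -1.0,  0.0)]   # y >= 0
M = 100.0

m = pulp.LpProblem("bilevel_kkt", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y")
lam = [pulp.LpVariable(f"lam{j}", lowBound=0) for j in range(len(cons))]
z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(len(cons))]

m += x - 4 * y                                    # leader objective

for j, (a, b, c) in enumerate(cons):
    m += a * x + b * y <= c                       # follower primal feasibility
    m += lam[j] <= M * z[j]                       # lam_j = 0 unless constraint active
    m += c - (a * x + b * y) <= M * (1 - z[j])    # zero slack when z_j = 1

# Follower stationarity: d/dy [ y + sum_j lam_j*(a_j*x + b_j*y - c_j) ] = 0
m += 1 + pulp.lpSum(lam[j] * b for j, (a, b, c) in enumerate(cons)) == 0

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("x =", x.value(), " y =", y.value(), " leader obj =", pulp.value(m.objective))
```

For this instance the MILP recovers the known bilevel optimum (x, y) = (4, 4) with leader objective -12. With discrete lower-level variables, as in (P0), the plain KKT replacement is no longer valid, which is why the paper resorts to the semi-infinite reformulation and iterative KKT-based cuts instead.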

4. Case Study and Results Discussion
To illustrate the application of the proposed modelling framework and solution algorithm, we consider a case study based on the Marcellus shale play. A 12 mile × 12 mile area within Beaver County, PA is considered, where a total of 20 shale sites are included as reported by FracFocus. Each shale site can accommodate 4-8 shale wells. There are 3 potential conventional shale gas processing plants, each with 3 capacity ranges considered (He and You, 2015). Five gas turbine combined cycle power plants are considered. Each power plant can choose whether or not to incorporate CCS technology into its original processes. There are 3 different CCS technologies for the power plants to select from, including two post-combustion capture technologies and one oxyfuel combustion process. The two post-combustion capture technologies considered here are Fluor's Econamine FG PlusSM process and MHI's KS-1 process. The oxyfuel combustion process uses oxygen rather than air for combustion of the fuel (Davison, 2007). The total planning horizon is 10 years.
4.1. Pareto-Optimal Curve
The resulting multiobjective MIBLP problem is solved using the proposed optimization algorithm, and 10 Pareto-optimal solutions are obtained, which form the Pareto-optimal curve shown in Figure 2. The x-axis represents the total life cycle GHG emissions generated throughout the shale gas supply chain. The y-axis represents the leader's total cost. The total cost decreases as the total life cycle GHG emissions increase, which explicitly shows the trade-off between the economic and environmental objectives in the leader's problem.

Figure 2. Pareto-optimal curve with breakdown of the cost and emissions: the pie charts represent cost breakdowns; the donut charts represent emissions breakdowns.

As can be seen in Figure 2, point A is the extreme solution with the lowest total life cycle GHG emissions of 5.5 billion kg CO2-eq, equivalent to 111 kg CO2-eq/MWh; meanwhile, it has the highest leader's total cost of $6.6 billion, with an equivalent levelized cost of electricity (LCOE) of $133/MWh. We therefore identify point A as the environment-oriented solution. On the contrary, point B is identified as the economics-oriented solution, with the lowest power plants' total cost of $3.7 billion and a corresponding LCOE of $75/MWh. The total life cycle GHG emissions of point B are 23.2 billion kg CO2-eq, and the equivalent unit GHG emissions are 469 kg CO2-eq/MWh. Solutions between points A and B are all equivalently optimal according to the definition of Pareto optimality; they have a lower power plants' total cost than A and lower life cycle GHG emissions than B.
4.2. Design and Operational Results
To present a detailed comparison between points A and B regarding the shale gas flow, we use the Sankey diagrams in Figure 3 to illustrate the corresponding results, where the width of each flow is proportional to its amount. For point A, which minimizes the life cycle GHG emissions, all 20 potential shale sites are active, with a total of 105 wells drilled. For point B, which minimizes the leader's total cost, 18 shale sites are active, and 96 wells are drilled in total. For point A, processing plants 1 and 3 are constructed with capacities of 15.1 million standard cubic feet (MMscf) of shale gas per year and 27.5 MMscf of shale gas per year, respectively. For point B, processing plant 1 is constructed with a capacity of 37.3 MMscf of shale gas per year. The total processed shale gas in point A is 426 MMscf. Notably, in point A the external conventional natural gas source supplies 30 MMscf of extra natural gas to power plant 1 to satisfy the demand, accounting for 7% of the power plants' total natural gas demand and 42% of power plant 2's demand. On the contrary, in point B the total natural gas demand of 373 MMscf is satisfied solely by shale gas. The total electricity demand is fixed at 49.5 million MWh for both points.

Figure 3. Summary of shale gas flow in environment-oriented solution (Left-Point A) and economics-oriented solution (Right-Point B).

4.3. Comprehensive Performance Comparison

Figure 4. Comparison between the economics-oriented model and environment-oriented model

As can be seen in Figure 4, when the power generation sector pursues the optimal economic performance, the power plants' total cost is reduced to $3.7 billion, and the corresponding LCOE is reduced to $75/MWh, compared with the $133/MWh LCOE of the environment-oriented solution. When the power plants target minimizing the total life cycle GHG emissions in the supply chain, the power plants' total cost increases to $6.6 billion. In return, the total GHG emissions are reduced from 23.2 billion kg CO2-eq to 5.5 billion kg CO2-eq, and the unit GHG emissions are reduced from 469 kg CO2-eq/MWh to only 111 kg CO2-eq/MWh, owing to the application of CCS technologies.


5. Conclusions
In this work, we propose a leader-follower game-based MIBLP model to capture the interactions among different players and the trade-offs between conflicting objectives in a non-cooperative shale gas supply chain. Through a case study based on the Marcellus shale play, we come to a few important conclusions: (1) the non-cooperative perspective in supply chain optimization normally results in worse but more realistic performance than the cooperative one regarding both economic and environmental criteria; (2) proper application of CCS technology can lead to significant improvement of the overall environmental performance of a shale gas supply chain; (3) the interaction between the power plants and the shale gas producer is so complex that it may lead to different designs and performance depending on the power plants' preference.

References
D. C. Cafaro, I. E. Grossmann, 2014, Strategic planning, design, and development of the shale gas supply chain network, AIChE Journal, 60(6), 21.
Y. Chu, F. You, 2014, Integrated scheduling and dynamic optimization by Stackelberg game: Bilevel model formulation and efficient solution algorithm, Industrial & Engineering Chemistry Research, 53, 5564-5581.
J. Davison, 2007, Performance and costs of power plants with capture and storage of CO2, Energy, 32(7), 1163-1176.
J. Gao, F. You, 2015, Deciphering and handling uncertainty in shale gas supply chain design and optimization: Novel modeling framework and computationally efficient solution algorithm, AIChE Journal, 61(11), 3739-3755.
J. Gao, F. You, 2015, Shale Gas Supply Chain Design and Operations toward Better Economic and Life Cycle Environmental Performance: MINLP Model and Global Optimization Algorithm, ACS Sustainable Chemistry & Engineering, 3(7), 1282-1291.
J. Gao, F. You, 2015, Optimal Design and Operations of Supply Chain Networks for Water Management in Shale Gas Production: MILFP Model and Algorithms for the Water-Energy Nexus, AIChE Journal, 61, 1184-1208.
D. J. Garcia, F. You, 2015, Supply chain design and optimization: Challenges and opportunities, Computers & Chemical Engineering, 81, 153-170.
J. Gong, F. You, 2015, Sustainable Design and Synthesis of Energy Systems, Current Opinion in Chemical Engineering, 10, 77-86.
C. He, F. You, 2014, Shale Gas Processing Integrated with Ethylene Production: Novel Process Designs, Exergy Analysis, and Techno-Economic Analysis, Industrial & Engineering Chemistry Research, 53(28), 11442-11459.
C. He, F. You, 2015, Toward more cost-effective and greener chemicals production from shale gas by integrating with bioethanol dehydration: Novel process design and simulation-based optimization, AIChE Journal, 61(4), 1209-1232.
H. Von Stackelberg, 2010, Market Structure and Equilibrium, Springer Science & Business Media.
C. L. Weber, C. Clavin, 2012, Life Cycle Carbon Footprint of Shale Gas: Review of Evidence and Implications, Environmental Science & Technology, 46(11), 5688-5695.
L. Yang, I. E. Grossmann, M. S. Mauter, R. M. Dilmore, 2015, Investment optimization model for freshwater acquisition and wastewater handling in shale gas production, AIChE Journal, 61(6), 1770-1782.
D. Yue, F. You, 2014, Game-theoretic modeling and optimization of multi-echelon supply chain design and operation under Stackelberg game and market equilibrium, Computers & Chemical Engineering, 71, 347-361.
D. Yue, F. You, 2014, Fair Profit Allocation in Supply Chain Optimization with Transfer Price and Revenue Sharing: MINLP Model and Algorithm for Cellulosic Biofuel Supply Chains, AIChE Journal, 60, 3211-3229.

Zdravko Kravanja , Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50096-5

Adjustable Robust Optimization for Scheduling of Batch Processes under Uncertainty
Hanyu Shi, Fengqi You*
Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208, USA

Abstract
In this work, we hedge against uncertainty in batch process scheduling by using a novel two-stage adjustable robust optimization (ARO) approach. We introduce symmetric uncertainty sets into the deterministic mixed-integer linear programming (MILP) model for the batch scheduling problem and then reformulate it into a two-stage problem. Budgets of uncertainty are used to adjust the degree of conservatism. Since the resulting two-stage ARO problem cannot be solved directly by any existing optimizer, the column-and-constraint generation (C&CG) algorithm is applied to solve it efficiently. A case study on batch manufacturing processes is considered to demonstrate the validity of the two-stage ARO model formulation and the efficiency of the C&CG algorithm.
Keywords: batch processes, scheduling, two-stage adaptive robust optimization, column-and-constraint generation algorithm.

1. Introduction
Batch processes have been widely used in the process industry to manufacture myriad products, such as high-value-added chemicals, pharmaceuticals and bioproducts (Méndez et al., 2006; Wassick et al., 2012). In batch manufacturing processes, production scheduling is one of the most crucial decisions, since it has a critical influence on production efficiency and economic performance (Chu and You, 2015). In recent decades, increasingly complex manufacturing processes, increasingly fierce competition, and a constantly fluctuating market environment have led to new challenges in the batch scheduling problem (Chu and You, 2014, 2015). It is now more important than ever for the manufacturing industry to develop robust batch scheduling methods and tools to maximize profitability and maintain system reliability in the presence of increasing uncertainty (Li and Ierapetritou, 2008). In this paper, we propose a two-stage adaptive robust optimization (ARO) approach to deal with the production scheduling problem for batch manufacturing processes in the face of uncertainty. When batch manufacturing processes are subject to uncertainty during production, the two-stage ARO strategy can help increase scheduling flexibility and ameliorate the overall performance of the manufacturing system. In the two-stage framework for batch scheduling, decisions for the operational task-equipment assignment are made "here-and-now", and their corresponding decision variables are treated as first-stage variables. The remaining decision variables are second-stage variables, which are determined in a "wait-and-see" mode after the realization of the uncertain parameters. Due to the complicated multi-level decision-making structure, the two-stage ARO problem for batch scheduling cannot be solved directly by any off-the-shelf optimization solvers. In this work, we apply the column-and-constraint generation (C&CG) algorithm to solve the two-stage ARO problem (Zeng and Zhao, 2013).


2. Model formulation
In this work, the continuous-time formulation with global time points is used to model the single-stage deterministic scheduling of batch manufacturing processes (Méndez et al., 2006). We then introduce a symmetric uncertainty set (Bertsimas and Sim, 2003) into the single-stage deterministic model and reformulate it into a two-stage model under uncertainty. Since uncertainty is nearly inevitable in batch manufacturing processes, we assume that the fixed processing times of some chemical reaction tasks are subject to uncertainty and are modelled with uncertainty sets. We assume that the uncertain fixed processing time $\widetilde{fd}_i$ follows a symmetric distribution and can be expressed within the symmetric interval (1) around the expected mean value $fd_i$, where $\Delta fd_i$ is the absolute bound for the deviation of the fixed processing time and $I_u$ is the subset of tasks whose fixed processing times might be subject to uncertainty.

$\widetilde{fd}_i \in \left[\, fd_i - \Delta fd_i,\; fd_i + \Delta fd_i \,\right], \quad \forall i \in I_u$   (1)

To adjust the degree of conservatism, an integer parameter $\Gamma^{fd}$, known as the budget of uncertainty, is introduced to confine the number of tasks whose fixed durations can deviate far from their deterministic values (Bertsimas and Goyal, 2010). In order to integrate the budget of uncertainty into the model formulation, the symmetric interval is reformulated as constraint (2), where $\alpha_i^{fd}$ is a continuous variable and $\beta_i^{fd}$ a binary variable.

$\widetilde{fd}_i = fd_i + \alpha_i^{fd}\, \Delta fd_i, \quad -\beta_i^{fd} \le \alpha_i^{fd} \le \beta_i^{fd}, \quad \forall i \in I_u$   (2)

Constraint (3) shows that the sum of the binary variables $\beta_i^{fd}$ is bounded by the budget of uncertainty $\Gamma^{fd}$.

$\sum_{i \in I_u} \beta_i^{fd} \le \Gamma^{fd}$   (3)
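For intuition, the budgeted set defined by constraints (2)-(3) has finitely many worst-case candidates: at most Γ^fd tasks deviate, and for a scheduling problem the damaging deviations are those to the upper bounds. The sketch below enumerates these candidates for hypothetical task data (the task names, nominal durations, and the upper-bound-only assumption are illustrative).

```python
# Enumerate the worst-case-candidate realizations of the budgeted uncertainty
# set (2)-(3). Task data are hypothetical placeholders.
from itertools import combinations

fd = {"rxn1": 10.0, "rxn2": 8.0, "rxn3": 12.0}    # nominal fixed times (min)
dfd = {i: 0.4 * v for i, v in fd.items()}         # 40% absolute deviations
GAMMA = 2                                         # budget of uncertainty

def extreme_points(fd, dfd, gamma):
    """Yield realizations in which at most `gamma` tasks take their upper
    bounds (longer processing times hurt a schedule, so these are the
    candidates for the worst case)."""
    tasks = list(fd)
    for k in range(gamma + 1):
        for subset in combinations(tasks, k):
            yield {i: fd[i] + (dfd[i] if i in subset else 0.0) for i in tasks}

for point in extreme_points(fd, dfd, GAMMA):
    print(point)
```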

The order demand for each product is obtained from the customers, and customer needs can vary during the scheduling horizon. As a result, we also assume that order demand is subject to uncertainty in this work. The uncertainty set constraints for the order demand are defined in a similar way to those for the uncertain fixed processing times and are not given in detail for brevity.

The two-stage model (ARO_Two) comprises the first-stage problem (ARO_First) and the second-stage problem (ARO_Second):

(ARO_First):
$\max_{Ws_{i,n},\, Wf_{i,n}} \; Obj_{first} + \theta$   (4)
s.t. first-stage constraints, optimality cuts, feasibility cuts

(ARO_Second):
$\min_{p_{Uncertain} \in D} \; \left[ \max_{V_{second}} \; Obj_{second} \right]$   (5)
s.t. connecting constraints, second-stage constraints, uncertainty set constraints

In the two-stage model, the decisions for production sequence, unit assignment and resource allocation are made "here-and-now"; the corresponding variables and constraints are allocated to the first-stage


problem. All the remaining decisions are made in a "wait-and-see" mode, and their corresponding variables and constraints are placed in the second stage. However, (ARO_Second) is a min-max problem that cannot be solved directly. As a result, the dual formulation of (ARO_Second) is derived, which can be solved as a generic minimization problem, shown as (ARO_Dual):

(ARO_Dual):
$\min_{d_s \in D} \; \min_{V_{dual\text{-}second}} \; Dualobj_u$   (6)
s.t. connecting constraints, dual problem constraints, uncertain processing time and demand constraints

3. Solution Strategies
Since the two-stage ARO problem for batch scheduling cannot be solved directly by any off-the-shelf optimization solvers, in this section we present the C&CG method (Zeng and Zhao, 2013) for solving the two-stage ARO problem efficiently. We note that for the two-stage robust batch scheduling problem, the second-stage problem determines a set of continuous variables and is a linear programming (LP) problem.

Figure 1. Flow chart of the C&CG method.

The flow chart of the C&CG method is given in Figure 1. In the initialization step, the upper and lower bounds are set to positive and negative infinity, respectively. The master problem of the C&CG method, (C&CG_Master), is then solved. If the master problem is feasible, the upper bound is updated and the first-stage decision variables are passed to the dual of the second-stage problem, (General_Dual). If the master problem is infeasible, the entire algorithm terminates and the problem has no feasible solution. If (General_Dual) with the fixed first-stage variables can be solved to optimality, the optimal solution of the dual problem is passed to the master problem to formulate one optimality cut in the next iteration, and the lower bound is updated with that optimal solution. The difference between the upper and lower bounds is then compared with the predetermined tolerance. If the difference is smaller than the tolerance, the entire algorithm terminates; otherwise, (C&CG_Master) with the new optimality cut is solved again and a new iteration starts. If (General_Dual) is unbounded, we find its extreme rays, build a feasibility cut from the current extreme rays, and add it to the master problem; (C&CG_Master) is then solved again with the new feasibility cut and a new iteration starts. For our specific two-stage ARO batch scheduling problem, the master problem is (ARO_First) and the dual problem is (ARO_Dual). We note that in each iteration of the C&CG algorithm, the second-stage problem provides the master problem with a non-trivial uncertainty realization, and the master problem takes on a new set of second-stage decision variables and new columns of constraints. As a result, the problem size of the master problem of the C&CG algorithm is large in general.
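To make the loop concrete, the following self-contained sketch runs C&CG on a toy two-stage problem that mirrors the flow chart: a binary first-stage decision y (say, reserving a backup unit for 5 cost units) and a scalar second-stage production level whose capacity degrades under scenario d. Because the second stage here is a small LP over a finite scenario set, the worst case is found by direct enumeration rather than through the dual subproblem; all data are hypothetical.

```python
# Minimal C&CG loop on a toy two-stage max-min problem, using PuLP/CBC.
import pulp

SCENARIOS = [0.0, 2.0, 4.0]      # capacity-degradation realizations
FIX_COST, PRICE = 5.0, 2.0

def second_stage_profit(y_val, d):
    """Recourse LP: max 2x s.t. x <= 10 - d + 3y."""
    sub = pulp.LpProblem("sub", pulp.LpMaximize)
    x = pulp.LpVariable("x", lowBound=0)
    sub += PRICE * x
    sub += x <= 10.0 - d + 3.0 * y_val
    sub.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(sub.objective)

identified = [SCENARIOS[0]]      # start from the nominal scenario
UB, LB, tol, it = float("inf"), -float("inf"), 1e-6, 0
while UB - LB > tol:
    it += 1
    # Master: max -5y + theta with one recourse copy per identified scenario
    # (the "column" part: new variables x_d and constraints enter each round).
    master = pulp.LpProblem("master", pulp.LpMaximize)
    y = pulp.LpVariable("y", cat="Binary")
    theta = pulp.LpVariable("theta", upBound=1e4)
    master += -FIX_COST * y + theta
    for k, d in enumerate(identified):
        xd = pulp.LpVariable(f"x_{k}", lowBound=0)
        master += xd <= 10.0 - d + 3.0 * y
        master += theta <= PRICE * xd
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    UB = pulp.value(master.objective)     # master relaxes the worst case
    # Subproblem: worst scenario for the fixed first-stage decision.
    y_val = y.value()
    worst = min(SCENARIOS, key=lambda d: second_stage_profit(y_val, d))
    LB = max(LB, -FIX_COST * y_val + second_stage_profit(y_val, worst))
    if worst not in identified:
        identified.append(worst)          # add its columns next round
print(f"converged in {it} iterations: y = {y_val}, profit = {LB}")
```

On this toy instance the loop converges in two iterations; each iteration enlarges the master with a fresh recourse copy (x_d and its constraints), which is the "column-and-constraint" growth that makes the master problem large in general.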

4. Case study

Figure 2. STN representation for case study.

In this case study, we solve a complex industrial-scale scheduling problem originally from The Dow Chemical Company (Chu et al., 2013, 2014; Yue et al., 2013). The STN of this batch process is shown in Figure 2. This multipurpose batch plant is used to manufacture four different products from four raw materials and six intermediates. Batch splitting and mixing are allowed in this network batch process. Equipment units include one mixer, two reactors, one finishing unit and one drumming unit. The first-stage decision variables are the assignment of eight tasks to five equipment units and the production sequencing. The second-stage decision variables are the batch size of each task, the starting and end times, etc. The scheduling horizon for this large-scale case is set to 1,600 min, and eleven time points are used.


In this case study, we assume that the fixed processing times of all reaction tasks and the order demands of all final products are subject to uncertainty. The absolute bounds for the fixed processing time deviations are set at 40% of the expected mean values, and the absolute bounds for the order demand deviations at 15% of the expected mean values. There are six uncertain fixed processing times, and we set the corresponding budget of uncertainty Γ^fd to four. There are four uncertain order demands for the four final products, and we set the corresponding budget of uncertainty Γ^dem to three. The deterministic model, the conventional robust optimization model and the proposed two-stage ARO model for the batch scheduling problem are solved and compared in this work. We assume there is no uncertainty in the deterministic model, that is, all parameters are fixed at their expected values. The worst-case scenario is considered in the conventional robust optimization model, and all parameters are fixed at their worst-case values. In this case study, all the problems are MILP problems and can be solved by CPLEX 12. Both the deterministic problem and the conventional robust optimization problem share the same problem size, which contains 198 binary variables, 1,489 continuous variables, and 2,821 constraints. For the two-stage ARO model, the subproblem of the C&CG algorithm includes 20 binary variables, 2,913 continuous variables, and 1,642 constraints. The problem size of the master problem of the C&CG algorithm is obtained from its last iteration: the last master problem contains 198 binary variables, 6,645 continuous variables, and 13,427 constraints.

Figure 3. Gantt charts for the case study returned by (a) the deterministic model; (b) the two-stage ARO model; (c) the conventional robust optimization model.

Gantt charts returned by the different methods are given in Figure 3. The results show that the deterministic problem returns an optimal profit of $17,700 in 1.28 CPU seconds, and the conventional robust optimization problem leads to a total profit of $11,765 in 8.33 CPU seconds. The C&CG algorithm takes five iterations to converge, solves the two-stage ARO problem in 564.94 CPU seconds, and returns an optimal solution of $12,815. The conventional robust optimization method is the most conservative of the three, and the profit returned by the two-stage ARO is 8.92% higher than that returned by the conventional robust optimization method. The deterministic problem is the most optimistic, but its solution is not robust: the schedule returned by the deterministic model becomes infeasible if we apply the same uncertain scenario used in the two-stage ARO model.

5. Conclusions
In this work, we developed a two-stage ARO approach to solve the batch scheduling problem under uncertainty. We first formulated the two-stage model for batch scheduling with uncertain parameters. The unit assignment, production sequence and resource allocation were the first-stage decisions, while the remaining decisions were the second-stage decisions. We then introduced symmetric uncertainty sets to confine the uncertain parameters and budgets of uncertainty to control the degree of conservatism. We applied the C&CG method to solve the two-stage ARO problem for batch scheduling; the C&CG method iteratively adds either optimality cuts or feasibility cuts to its master problem. A case study on multi-product batch manufacturing processes was considered to demonstrate the applicability of the proposed computational framework. The results showed that the C&CG method was able to solve the two-stage ARO problem efficiently. The deterministic model returned the highest profit, but its resulting schedule became infeasible under uncertain parameters. The conventional robust optimization problem was the most conservative and led to the lowest profit. The two-stage ARO resulted in an intermediate profit and returned a production schedule that is robust against uncertainty.

References
D. Bertsimas, V. Goyal, 2010, On the power of robust solutions in two-stage stochastic and adaptive optimization problems, Mathematics of Operations Research, 35(2), 284-305.
D. Bertsimas, M. Sim, 2004, The price of robustness, Operations Research, 52(1), 35-53.
Y. Chu, J. M. Wassick, F. You, 2013, Efficient scheduling method of complex batch processes with general network structure via agent-based modeling, AIChE Journal, 59(8), 2884-2906.
Y. Chu, J. M. Wassick, F. You, 2014, Hybrid method integrating agent-based modeling and heuristic tree search for scheduling of complex batch processes, Computers & Chemical Engineering, 60, 277-296.
Y. Chu, F. You, 2014, Integrated planning, scheduling, and dynamic optimization for batch processes: MINLP model formulation and efficient solution methods via surrogate modeling, Industrial & Engineering Chemistry Research, 53, 13391-13411.
Y. Chu, F. You, 2015, Model-based integration of control and operations: Overview, challenges, advances, and opportunities, Computers & Chemical Engineering, 83, 2-20.
Y. Chu, F. You, J. M. Wassick, A. Agarwal, 2015, Integrated planning and scheduling under production uncertainties: Bi-level model formulation and hybrid solution method, Computers & Chemical Engineering, 72, 255-272.
Z. Li, M. G. Ierapetritou, 2008, Robust optimization for process scheduling under uncertainty, Industrial & Engineering Chemistry Research, 47(12), 4148-4157.
C. A. Méndez, J. Cerdá, I. E. Grossmann, I. Harjunkoski, M. Fahl, 2006, State-of-the-art review of optimization methods for short-term scheduling of batch processes, Computers & Chemical Engineering, 30(6), 913-946.
J. M. Wassick, A. Agarwal, N. Akiya, J. Ferrio, S. Bury, F. You, 2012, Addressing the Operational Challenges in the Development, Manufacture, and Supply of Advanced Materials and Performance Products, Computers & Chemical Engineering, 47, 157-169.
D. Yue, G. Guillen-Gosalbez, F. You, 2013, Global Optimization of Large-Scale Mixed-Integer Linear Fractional Programming Problems: A Reformulation-Linearization Method and Process Scheduling Applications, AIChE Journal, 59(11), 4255-4272.
D. Yue, F. You, 2013, Sustainable Scheduling of Batch Processes under Economic and Environmental Criteria with MINLP Models and Algorithms, Computers & Chemical Engineering, 54, 44-59.
B. Zeng, L. Zhao, 2013, Solving two-stage robust optimization problems using a column-and-constraint generation method, Operations Research Letters, 41(5), 457-461.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50097-7

Effective Sampling, Modeling and Optimization of Constrained Black-box Problems
Ishan Bajaj and M.M. Faruque Hasan*
Artie McFerrin Department of Chemical Engineering, Texas A&M University, 3251 TAMU, College Station, TX 77843, USA
*Corresponding author, E-mail: [email protected]

Abstract
An effective strategy for the sampling, modeling, and optimization of black-box problems with constraints is proposed. Black-box systems are often approximated and solved using surrogate models. Motivated by the fact that the surrogate model needs to be accurate only in the feasible region, we use feasible samples to train the surrogate model and thereby reduce the overall number of function evaluations. A new mathematical programming based approach is employed to obtain the desired number of feasible, unique and space-filling samples. We then approximate the black-box function by a kriging function and apply a trust-region based ε-exact method to converge to a local optimal solution. The overall framework, comprising mathematical programs for obtaining feasible samples, modeling and optimization, was tested on a suite of 16 problems from GlobalLib. Compared to the case when both feasible and infeasible samples are used to train the surrogate model, an average reduction of 25.69% in function evaluations was observed when only feasible samples are selected a priori. The framework is also applied to optimize the operating conditions of a tri-reformer that converts and utilizes CO2 from power plants.
Keywords: Black-box, Feasible sampling, Optimization, Trust-region, CO2 utilization

1. Introduction
Many important engineering, process synthesis, material screening, and multi-scale problems do not have a known or explicit functional form of the objective function and/or the constraints, and are labeled as black-box problems (Biegler et al. (2014), Hasan et al. (2013), Nuchitprasittichai and Cremaschi (2011)). They have the following form:

$\min_x f(x), \quad \text{s.t. } g_i(x) \le 0 \;\; \forall i \in \{1, \ldots, P\}$   (1)

where $x \in \mathbb{R}^d$, $f(x): \mathbb{R}^d \to \mathbb{R}$ is a black-box function and $g_i(x): \mathbb{R}^d \to \mathbb{R}$. The black-box function can be the result of an experiment (Bunin et al. (2014)), a computational fluid dynamics simulation, the solution of a nonlinear partial differential equation system (Boukouvala et al. (2015)) or a flowsheet simulation (Caballero and Grossmann (2008)). Although such models approximate the real systems effectively, obtaining data and derivative information can be computationally and economically expensive. Even when the computational time is not substantial, obtaining derivatives by finite differences is unreliable. In these cases, gradient-based solvers cannot be used for optimization and a more practical approach is needed. To this end, derivative-free optimization (DFO) has gained significant attention in recent times (Rios and Sahinidis (2013)). These methods are classified into direct-search and model-based methods. Vicente and Custódio (2012) discussed the effectiveness of direct-search methods for non-smooth black-box problems. However, these methods may lead to a large number of function evaluations for continuous problems. The model-based


approach, which uses an inexpensive approximation of the original model, is a promising alternative and has been the subject of recent studies (e.g. Henao and Maravelias (2011), First et al. (2014), Agarwal and Biegler (2013)). Wild et al. (2008) employed radial basis functions and trust-region methods for local optimization of unconstrained black-box functions. Conn et al. (2009) proposed a trust-region based method which guarantees convergence to a stationary point under the assumption of the fully-linear property. Boukouvala and Ierapetritou (2013) used both feasible and infeasible samples with respect to known constraints to develop a surrogate model of the feasible region, which may lead to more function evaluations. Recently, Boukouvala et al. (2015) proposed an algorithm for constrained grey-box problems and studied the effects of initial sampling variability, the type of surrogate model, and local versus global parameter estimation on the performance of the algorithm. Since black-box problems are computationally expensive and a single simulation may take a long time, the challenge is to obtain a solution satisfying a valid convergence criterion using a minimum number of function evaluations. In what follows, the problem statement is presented first. We then present the overall black-box optimization framework, followed by an overview of the surrogate modeling approach. Next, the trust-region method along with the black-box optimization algorithm is presented. Finally, several examples, including literature problems and a chemical engineering case study, are provided to demonstrate the efficacy of the framework.

2. Problem Statement
We propose a sampling, modeling and optimization framework for problem (1), where f(x) represents the black-box objective function and gi(x) are constraints whose analytical form is known. Both f(x) and gi(x) are assumed to be twice continuously differentiable, f(x) is bounded, and the second derivatives of f(x) and gi(x) are uniformly bounded. Generally, in constrained black-box problems, surrogate models are constructed using both feasible and infeasible samples. But this may lead to more function evaluations, since it is inconsequential how the original model behaves in the infeasible region, and samples spent developing the surrogate model there are wasted. Eliminating infeasible samples obtained by traditional sampling such as LHD, and repeating this process until the desired number of samples is obtained, is one way of getting feasible samples. But this may not always be successful, especially when equality constraints are involved, and it may not lead to space-filling samples. To address this problem, we present an optimization formulation based approach that yields the desired number of space-filling, feasible samples. Once the samples are obtained, a surrogate model is developed and the ε-exact method is used, which guarantees convergence to a local optimal solution within a finite number of function evaluations, even though the true functional form of f(x) is unknown.

3. Surrogate Model Optimization Framework
The overall framework is outlined in Figure 1, and each of the steps is described in the subsequent subsections. Feasible samples obtained by minimizing the wrap-around L2 discrepancy are used for function evaluations; a kriging function is then developed using the design points obtained, and a trust-region method is employed to converge to a local solution of the original model.
3.1. Design of Feasible Experiments
Selecting the design for performing expensive computations is an important step in developing an efficient surrogate model. Obtaining a sufficient number of samples is important to develop a surrogate model that is ε-exact. The Latin hypercube design (LHD) is popular in the black-box optimization


literature. However, LHD does not yield the desired number of feasible points, and utilizing both feasible and infeasible samples leads to a higher number of function evaluations, which is evident from Table 1. Optimization formulations based on entropy, integrated mean square error, minimax and maximin distances, and discrepancy measures are popular in the statistics literature. In this work we use the wrap-around L2 discrepancy (WD) proposed by Hickernell (1998) to obtain the desired number of feasible samples, defined as:

$W^2 = -\left(\frac{4}{3}\right)^d + \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \prod_{k=1}^{d} \left[ \frac{3}{2} - \left|u_k^{(i)} - u_k^{(j)}\right| \left( 1 - \left|u_k^{(i)} - u_k^{(j)}\right| \right) \right]$   (2)

where $d$ is the dimension of the variable and $N$ denotes the number of design points. $u_k^{(i)}$ is the normalized $k$-th variable of the $i$-th sample. Here, $x$ is replaced by the normalized variable $u$ since Eq. (2) is applicable only for the range [0,1]. WD captures uniformity over the unit cube, so that the design points obtained are space-filling and uniformly scattered on the domain in a low-discrepancy sense. We solve the following DNLP to obtain a set of feasible samples for evaluating $f(x)$:

$\min_u W^2, \quad \text{s.t. } g_i(u) \le 0, \; \forall i \in \{1, \ldots, P\}$   (3)

The inclusion of the constraints ensures feasibility of the samples obtained. However, the above is an NP-hard problem, and obtaining a global optimum is a challenge.
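A short numerical sketch of Eq. (2) is given below, assuming the standard Hickernell form of the wrap-around L2 discrepancy; the samples must first be scaled to the unit hypercube.

```python
# Compute the (squared) wrap-around L2 discrepancy of a design, Eq. (2).
import numpy as np

def wrap_around_l2(u):
    """u: (N, d) array of samples in [0, 1]^d. Returns W^2; lower is more uniform."""
    n, d = u.shape
    diff = np.abs(u[:, None, :] - u[None, :, :])        # (N, N, d) pairwise |u - u|
    prod = np.prod(1.5 - diff * (1.0 - diff), axis=2)   # product over dimensions
    return -(4.0 / 3.0) ** d + prod.sum() / n ** 2

rng = np.random.default_rng(0)
print(wrap_around_l2(rng.random((20, 3))))              # random design, d = 3
```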

3.2. Surrogate Model Construction and Optimization
Surrogate models such as kriging, radial basis functions and artificial neural networks are widely used in the literature. In this study, we use the kriging model as the surrogate model because of its ability to accurately model complex systems. We consider the following form of the kriging model:

$y_s(x) = \mu + \sum_{i \in SIP} c_i \exp\left[ -\sum_k \theta_k (x_k - x_{i,k})^2 \right]$   (4)

where $y_s(x)$ represents the value of the kriging model as a function of $x$, $\mu$ and $\theta_k$ are the parameters to be estimated, and $x_{i,k}$ denotes the $k$-th variable of the $i$-th sample belonging to the interpolation set SIP. We divide the design points obtained by solving problem (3) into three sets (interpolation, validation and testing) using the DUPLEX algorithm (Snee (1977)), which aims to divide the data into sets covering approximately the same region and having similar statistical properties. DUPLEX is applied twice to separate the three sets: first, the validation set is separated; second, the interpolation and testing sets are separated from the remaining samples. The parameters $\theta_k$ and $\mu$ are obtained by minimizing the error between the function value estimated by kriging and the actual data obtained by simulation over the validation set, while the testing set is used to establish the error bound $\varepsilon$. The following nonlinear parameter estimation problem is solved to global optimality using BARON (Tawarmalani and Sahinidis (2005)):

$\min_{\theta_k, \mu} \sum_{i \in SCV} (SP_i + SN_i)$   (5)
$\text{s.t. } y_i + SP_i - SN_i = \mu + \sum_{j \in SIP} c_j R_{i,j} \quad \forall i \in SCV, \qquad \sum_{j \in SIP} c_j R_{i,j} = y_i - \mu \quad \forall i \in SIP$
$R_{i,j} = \exp\left[ -\sum_k \theta_k (x_{i,k} - x_{j,k})^2 \right] \quad \forall i \in S, \; j \in SIP$

Figure 1: Algorithm description (flowchart: initialize the trust region as the entire domain; perform the design of experiments by solving problem (3); use DUPLEX to divide the sample set into interpolation, validation and testing sets; develop and optimize the surrogate model; if the convergence criteria are not met, decrease or expand and move the trust region and repeat; otherwise stop).

Here, SCV is the set of validation points, $SP_i \ge 0$ and $SN_i \ge 0$ are slack variables, and $S$ denotes the set of sample points. The kriging model is optimized to obtain $x$:

$\min_x y_s(x), \quad \text{s.t. } g_i(x) \le 0, \; \forall i \in \{1, \ldots, P\}, \; x \in \Delta^k$   (6)

where $\Delta^k$ is the trust region at the $k$-th iteration.
3.3. ε-Exact Method
Any surrogate-model-based optimization framework requires a valid convergence criterion, based either on a maximum number of function evaluations or on a stationary point. Two approaches based on trust regions (Conn et al. (2000)) have been proposed in the literature to guarantee convergence to a stationary point of the original model. The first approach requires derivatives from the original model, which are generally unavailable. The more practical approach does not need derivatives from the original model but must satisfy the fully-linear property (Conn et al. (2009)), which implies that the surrogate model approximates the original model and its gradients well within a trust region $\Delta^k$. We adopt a slightly modified version of the ε-exact optimization strategy of Biegler et al. (2014) for our framework to converge to a local optimum. Since the constraints are assumed to be known, problem (6) is always constructed at a feasible point and the intersection of the trust region and the feasible region is non-empty. We therefore do not use a penalty function, so that the framework is not sensitive to penalty and smoothing parameters, and instead solve problem (6) to global optimality within a trust region $\Delta^k$. The rationale is that if kriging is able to capture the nonlinearities of the original model within the first few iterations, we may converge to a global optimum. The adaptive trust-region framework does not require the surrogate model to be accurate in the entire design space, but only in the subregion likely to contain a local optimum.
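The paper does not spell out the radius-update rule, but the parameters reported in Section 4 (η0, η1, υ, υinc) suggest a standard ratio test comparing actual to predicted reduction. The sketch below shows one plausible form of that update; the acceptance thresholds and their roles are assumptions, not the authors' exact rule.

```python
# A generic trust-region update based on the ratio of actual to predicted
# reduction; parameter roles are assumed from the values in Section 4.
def update_trust_region(f_old, f_new, m_old, m_new, delta,
                        eta0=0.25, eta1=0.75, shrink=0.5, expand=3.0):
    """Return (accept_step, new_radius) from the ratio test."""
    pred = m_old - m_new                  # reduction predicted by the kriging model
    rho = (f_old - f_new) / pred if pred > 0 else -1.0
    if rho < eta0:                        # poor agreement: reject and shrink
        return False, shrink * delta
    if rho > eta1:                        # good agreement: accept and expand
        return True, expand * delta
    return True, delta                    # acceptable: accept, keep the radius
```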

4. Results
4.1. Numerical Examples
The above framework is applied to 16 problems from GlobalLib, for fourteen of which the global solution is known. Fourteen of them are constrained problems with linear, nonlinear, equality and inequality constraints, while two have box constraints. The objective functions are treated as black-box and the constraints are assumed to be known. εr, η0, η1, υ and υinc are set to 10⁻⁵, 0.25, 0.75, 0.5 and 3.0, respectively, and the trust-region bounds σ̲ and σ̄ are chosen arbitrarily. The number of function evaluations and the computing time are limited to 2,500 and 24 hours, respectively. The results are summarized in Table 1, where the first column lists the examples taken from GlobalLib; Nconstraints and Nvar give the numbers of constraints and variables; fglobal is the reported global optimum; Np is the total number of evaluations required by our method; and NLHD is the total number of evaluations required when both feasible and infeasible points given by LHD are used. The overall framework is able to achieve at least a local optimum for all the problems. In twelve of the problems, our algorithm is able to achieve the global optimum. The kriging function is able to capture the nonlinearities of the original model in the first few iterations and narrow the trust region down to the global optimum. We compared with the case when LHD is used and observe an average reduction of 25.69% in function evaluations. In twelve of the examples, Np is less than NLHD; however, in three of the examples, the result is the opposite. One reason for this could be that design uniformity affects the accuracy of the surrogate model, and in those cases the uniformity of LHD outweighed the advantage of sampling only in the feasible region.


Table 1: Summary of results.

Problem          Nconstraints, Nvar   fglobal      ys(x)        Np    NLHD
ex.2.1.1         1, 5                 -17          -11.424      235   254
ex.2.1.2         2, 6                 -213         -212.887     232   241
ex.2.1.4         5, 6                 -11          -10.999      62    62
ex.2.1.5         11, 10               -268.015     -267.837     915   456
ex.2.1.6         5, 10                -39          -39          326   377
ex.2.1.9         1, 10                -0.375       -5.28×10⁻³   547   1817
ex.3.1.2         6, 5                 -3.06×10⁴    -3.06×10⁴    126   126
ex.3.1.3         6, 6                 -310         -310         582   2500
ex.3.1.4         3, 3                 -4           -4           32    112
ex.5.2.2 case1   6, 9                 -400         -400         85    92
ex.5.2.2 case2   6, 9                 -600         -600         89    92
ex.8.1.1         0, 2                 –            -2.02        36    68
ex.8.1.2         0, 1                 –            -1.0709      50    53
circle           10, 2                4.5742       4.5742       121   1727
dispatch         2, 4                 3155.2879    3155.2181    161   1810
chance           3, 4                 29.8944      29.8944      33    42

4.2. Tri-reforming Case Study
To further illustrate the efficacy of the method, we applied it to a chemical engineering case study in which we maximize H2 production by combining flue gas, methane and air. CO2 conversion and utilization is essential for sustainable development, since CO2 is also a source of carbon. A novel tri-reforming process enabling conversion of the CO2 in flue gas to CO and H2, by integrating CO2 reforming, steam reforming and partial oxidation of methane in a single reactor, was proposed by Song and Pan (2004). Methane, air and flue gas are introduced into a one-dimensional, steady-state, adiabatic, homogeneous plug flow reactor. In this case study, we maximized the H2 mole fraction, yH2, with the flow rates of air, n_air (mol/s), and methane, n_CH4 (mol/s), the inlet temperature, T0 (K), and the inlet pressure, P0 (MPa), as the decision variables. The reactor model is a set of nine ODEs, and the MATLAB ODE solver ode15s is used for simulation. The kinetic data and rate equations are taken from Aboosadi et al. (2011). Flue gas from a coal-fired power plant with a flow rate n_flue of 1 mol/s, containing 14 mol% CO2, 9 mol% H2O, 4 mol% O2 and 74 mol% N2, is taken for simulation. The optimization problem is formulated as:

$\max_x \; y_{H_2}$
$\text{s.t. } \frac{dy_i}{dz} = \frac{RT}{P} \rho_b r_i, \quad i \in \{CO, CO_2, CH_4, H_2, N_2, O_2, H_2O\}$
$\frac{dT}{dz} = -\frac{\rho_b \sum_i \Delta H_{f_i} r_i}{\rho_f u_s C_p}, \qquad \frac{dP}{dz} = -\frac{2 \rho_f u_s^2 f}{d_h}$
$f = \frac{(1-\varepsilon)}{2\varepsilon^3} \left[ \frac{150(1-\varepsilon)}{Re_h} + 1.75 \right]$
$0 \le n_{air}, n_{CH_4} \le 1, \quad 0.1\,\text{MPa} \le P_0 \le 2\,\text{MPa}, \quad 800 \le T_0 \le 1200, \quad 0 \le y_i \le 1$
$T|_{z=0} = T_0, \quad y_i|_{z=0} = \frac{n_i}{n_{flue} + n_{CH_4} + n_{air}}, \quad P|_{z=0} = P_0$

The optimal values of n_CH4, n_air, T0 and P0 and the number of evaluations are 0.5782 mol/s, 1 mol/s, 1199.2 K, 1.969 MPa and 1259, respectively. The local solution is verified by perturbing around the solution obtained. The resulting mole fraction of H2 is 0.1686, while the mole fraction of N2 is 0.5876. If oxygen-enriched air is used for combustion, the flue gas will contain less N2, resulting in a higher concentration of H2.
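To show how the reactor simulation plays the role of the black box f(x), the sketch below wires a decision vector into an ODE integration and returns the outlet H2 mole fraction. The original model is a nine-ODE MATLAB simulation with the Aboosadi et al. (2011) kinetics; here a single lumped placeholder reaction (CH4 + H2O → CO + 3H2, with an arbitrary rate constant and no T0/P0 dependence) stands in, so the numbers are illustrative only.

```python
# Black-box objective sketch: integrate species molar flows down the bed and
# return the outlet H2 mole fraction. The kinetics are a placeholder, not the
# tri-reforming rate expressions of Aboosadi et al. (2011).
import numpy as np
from scipy.integrate import solve_ivp

N_FLUE = 1.0  # mol/s flue gas: 14% CO2, 9% H2O, 4% O2, balance N2

def outlet_h2_fraction(n_ch4, n_air, T0, P0, length=1.0):
    # Species order: CO, CO2, CH4, H2, N2, O2, H2O. T0 and P0 would enter the
    # real rate expressions; the placeholder rate below ignores them.
    n0 = np.array([0.0,
                   0.14 * N_FLUE,
                   n_ch4,
                   0.0,
                   0.74 * N_FLUE + 0.79 * n_air,
                   0.04 * N_FLUE + 0.21 * n_air,
                   0.09 * N_FLUE])

    def rhs(z, n):
        # Lumped step CH4 + H2O -> CO + 3 H2, arbitrary rate constant.
        r = 5.0 * max(n[2], 0.0) * max(n[6], 0.0)
        return [r, 0.0, -r, 3.0 * r, 0.0, 0.0, -r]

    sol = solve_ivp(rhs, (0.0, length), n0, method="LSODA")
    n_out = sol.y[:, -1]
    return n_out[3] / n_out.sum()

# Evaluate near the reported optimum (decision-variable values from the text).
print(outlet_h2_fraction(0.5782, 1.0, 1199.2, 1.969))
```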



5. Conclusions

We presented an optimization framework that addresses black-box problems with known constraints. We proposed a novel way to systematically select a desired number of feasible points a priori to train the surrogate model at each iteration, which leads to a reduction in the number of function evaluations. We exploited the fact that the surrogate model need not be accurate over the entire domain, but only in the feasible region. The efficacy of the framework was shown by applying it to 16 problems from GlobalLib and to a tri-reformer reactor case study. Although we have assumed the constraints to be known, the framework can be adapted to unknown constraints as well. The framework shows promise for application to challenging problems such as pressure swing adsorption, material optimization, process intensification and experimental optimization.

References
Z. A. Aboosadi, A. H. Jahanmiri, M. R. Rahimpour, 2011. Optimization of tri-reformer reactor to produce synthesis gas for methanol production using differential evolution (DE) method. Applied Energy 88 (8), 2691–2701.
A. Agarwal, L. T. Biegler, 2013. A trust-region framework for constrained optimization using reduced order modeling. Optimization and Engineering 14 (1), 3–35.
L. T. Biegler, Y. D. Lang, W. Lin, 2014. Multi-scale optimization for process systems engineering. Computers & Chemical Engineering 60, 17–30.
F. Boukouvala, M. M. F. Hasan, C. A. Floudas, 2015. Global optimization of general constrained grey-box models: new method and its application to constrained PDEs for pressure swing adsorption. Journal of Global Optimization, 1–40.
F. Boukouvala, M. G. Ierapetritou, 2013. Surrogate-based optimization of expensive flowsheet modeling for continuous pharmaceutical manufacturing. Journal of Pharmaceutical Innovation 8 (2), 131–145.
G. A. Bunin, G. François, D. Bonvin, 2014. Feasible-side global convergence in experimental optimization. arXiv preprint arXiv:1406.4063.
J. A. Caballero, I. E. Grossmann, 2008. An algorithm for the use of surrogate models in modular flowsheet optimization. AIChE Journal 54 (10), 2633–2650.
A. R. Conn, N. I. M. Gould, P. L. Toint, 2000. Trust region methods. Vol. 1. SIAM.
A. R. Conn, K. Scheinberg, L. N. Vicente, 2009. Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points. SIAM Journal on Optimization 20 (1), 387–415.
E. L. First, M. M. F. Hasan, C. A. Floudas, 2014. Discovery of novel zeolites for natural gas purification through combined material screening and process optimization. AIChE Journal 60 (5), 1767–1785.
M. M. F. Hasan, E. L. First, C. A. Floudas, 2013. Cost-effective CO2 capture based on in silico screening of zeolites and process optimization. Physical Chemistry Chemical Physics 15 (40), 17601–17618.
C. A. Henao, C. T. Maravelias, 2011. Surrogate-based superstructure optimization framework. AIChE Journal 57 (5), 1216–1232.
F. J. Hickernell, 1998. Lattice rules: how well do they measure up? Springer.
D. R. Jones, M. Schonlau, W. J. Welch, 1998. Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13 (4), 455–492.
A. Nuchitprasittichai, S. Cremaschi, 2011. Optimization of CO2 capture process with aqueous amines using response surface methodology. Computers & Chemical Engineering 35 (8), 1521–1531.
L. M. Rios, N. V. Sahinidis, 2013. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization 56 (3), 1247–1293.
R. D. Snee, 1977. Validation of regression models: methods and examples. Technometrics 19 (4), 415–428.
C. Song, W. Pan, 2004. Tri-reforming of methane: a novel concept for catalytic production of industrially useful synthesis gas with desired H2/CO ratios. Catalysis Today 98 (4), 463–484.
M. Tawarmalani, N. V. Sahinidis, 2005. A polyhedral branch-and-cut approach to global optimization. Mathematical Programming 103, 225–249.
L. N. Vicente, A. L. Custódio, 2012. Analysis of direct searches for discontinuous functions. Mathematical Programming 133 (1-2), 299–325.
S. M. Wild, R. G. Regis, C. A. Shoemaker, 2008. ORBIT: Optimization by radial basis function interpolation in trust-regions. SIAM Journal on Scientific Computing 30 (6), 3197–3219.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50098-9

Integration of Energy-cost Optimization and Production Scheduling Using Multiparametric Programming Hubert Haderaa,b,1, Rachid Labrikc, 2, Juha Mäntysaarid, Guido Sanda, Iiro Harjunkoskia*, Sebastian Engellb a

ABB Corporate Research, Wallstadter Str. 59, 68526 Ladenburg, Germany
b Technical University of Dortmund, Emil-Figge-Str. 70, 44221 Dortmund, Germany
c KTH Royal Institute of Technology, 10044 Stockholm, Sweden
d ABB Oy Industry Solutions, Strömbergintie 1B, 00380 Helsinki, Finland
* [email protected]

Abstract
In energy-intensive industries, the traditional strategy is to schedule the production first. From the production schedule, the demand for energy is predicted and an optimization of the energy-supply cost is performed. The academic approach is to combine all the production and energy-cost related constraints into a single monolithic problem. In contrast, in this work a different approach is proposed. The energy-cost optimization problem is solved using Multiparametric Programming (mp), separately from the scheduling problem. Based on the solution of the mp-MILP (Mixed Integer Linear Programming) problem, several production scheduling problems, each with sensitivity information from the mp-MILP solution embedded, can be solved in parallel in order to find the system-optimal solution. The approach is tested on realistic data instances of a stainless-steel process and obtains the optimal solution. However, due to limitations of the mp-MILP solvers, the computational performance restricts the approach to very small instances.
Keywords: Multiparametric Programming, production scheduling, stainless-steel, energy cost optimization

1. Introduction
Today's industrial practice is to first schedule the production, satisfying all production-specific rules, and, based on the production schedule, to predict the energy demand (Hadera et al., 2015a). Afterwards, energy purchase and sales optimization solutions generate the best available energy portfolio for the given schedule. This approach does not necessarily result in a system-wide optimal solution. Most reported methods in the scientific literature combine the scheduling problem with all energy-related information into a single monolithic problem. This ensures finding the system-optimal solution, but it requires noticeable effort to integrate both scheduling and energy planning into one model, and solution times can be very long. Therefore, decomposition techniques are of interest. Moreover, if the energy-cost optimization can be separated from production scheduling, it is possible to combine tailored solutions for both problems.

1 Currently: BASF SE, Carl-Bosch-Str. 38, 67056 Ludwigshafen, Germany
2 Currently: Quintiq SAS, Rue de Londres, 75009 Paris, France



2. Problem statement
The goal of this work is to develop an integration strategy for minimizing the system-wide cost that consists of the energy-cost (energy problem) and the production scheduling cost (scheduling problem) components. In practice, the two problems are solved separately in a sequential fashion. The energy-cost optimization problem is to be solved without knowledge of the constraints of the scheduler (Hadera et al., 2015b). The cost of energy consists of the cost of electricity purchase, reduced by revenues from the sale of surplus electricity. The production scheduling cost consists of a term that represents the makespan of the schedule and of financial penalties for the deviation of the consumption of electricity from pre-agreed values.

2.1. Solution approach
We propose an approach which treats the energy-cost MILP optimization problem as an oracle, separately from the scheduling problem. For the scheduling problem the load (the energy demand of the production process) is a variable, while for the energy-cost optimization problem it is treated as a parameter. The latter model is solved using Multiparametric Programming (Gal & Nedoma, 1972; Dua & Pistikopoulos, 2000) with the load as the investigated parameter. The results of the mp-MILP problem are load parameter regions with corresponding linear optimal solutions. The mp-MILP solution provides a map of the space of the varying load parameter, where each region represents a certain optimal purchasing strategy (with the binary decisions fixed) of the energy cost minimization problem. Each region is characterized by a load parameter interval with maximum and minimum load levels which must not be exceeded, otherwise the conditions of another region apply. For each region, the mp-MILP solution returns the marginal cost curve of the related linear solution, which represents the cost of increasing energy use in the different time periods. For each load region that is feasible for the scheduling problem, the scheduling problem can be solved with the corresponding marginal cost curve embedded into the objective function as a penalty for the use of energy (i.e., an "aggregated cost" of energy). In this way many schedules are obtained. The one with the minimum objective function value represents the system-wide optimal schedule, for which a system-wide optimal energy purchase structure was found in the previous step. We apply the developed approach to a stainless-steel production process using a continuous-time general precedence scheduling approach. The scheduling model includes the minimization of the deviation penalties resulting from the committed load problem.

Figure 1. The energy-cost optimization problem



Figure 2. Stainless-steel production process layout (Hadera et al., 2016)

In the energy cost optimization we use the generalized minimum cost flow network model (Hadera et al., 2015a) to find the optimal structure of multiple time-sensitive electricity contracts, including base load, time-of-use, day-ahead spot market and onsite power generation, as well as the opportunity to sell electricity back to the grid for revenue (Figure 1). Both models are formulated as MILP problems using realistic data instances from the literature and industrial practice.

2.2. Use case: Stainless-steel process
Stainless-steel production is an energy-intensive batch process with complicated production constraints. The goal is to optimize a 24 h production schedule and its corresponding cost of energy. The plant layout and input data considered in this work are the same as in previous studies by Hadera et al. (2015a). The continuous-time general precedence scheduling formulation follows the one reported by Hadera et al. (2015a). The scheduling model is extended by improved energy-awareness constraints according to Hadera et al. (2016) in order to account for electricity consumption in different time slots. The corresponding energy-cost optimization problem is formulated as a minimum-cost flow network with the same structure and similar input data as in Hadera et al. (2015a). The simplified plant layout and the flow network problem are shown in Figure 2.

3. Model formulation and implementation
The objective of the total problem is to minimize the system-wide cost (Eq. 1), consisting of the net electricity cost (μ, see notation in Table 1) and the production schedule cost (makespan t^mks and penalties for load deviation δ), as shown in Eq. (2). The minimum-cost flow network model consists of equations (47)–(57) reported by Hadera et al. (2015a). The production scheduling problem consists of the improved model reported by Hadera et al. (2016) and the additional equations (58)–(60) from Hadera et al. (2015a) that account for deviation penalties.

Table 1: Model notation

Sets: T = time slots
Parameters: q_t^min / q_t^max = lower/upper bound of the load in interval t; MC_t^k = marginal cost in interval t for region k; k = parametrized region of the mp-MILP solution
Variables: μ = net electricity cost; t^mks = makespan; δ = penalties for load deviation; q_t = electricity consumption (load) in time interval t



\min f^{total} = \min\left(f^{energy} + f^{sched}\right) \quad (1)

f^{energy} = \mu; \qquad f^{sched} = t^{mks} + \delta \quad (2)

g_k^{sched} = t^{mks} + \delta + \sum_{t} MC_t^k \cdot q_t \quad (3)

\min f^{total} = \min_k \left( g_k^{sched} \right), \quad k \in \mathbb{N} \quad (4)

The solution algorithm starts by solving the mp-MILP model of the energy-cost problem with the loads q_t as the investigated parameters (Figure 3). The mp-MILP solving process consists of two subproblems that are solved iteratively (Dua & Pistikopoulos, 2000). For the first subproblem, a multiparametric linear problem is obtained by fixing the binary variables. The second subproblem (an MILP) is created by treating the parameters as unknowns. As a result, we obtain a set of objective functions and variables that are expressed as functions of the parameters, together with upper and lower bounds (q_t^min / q_t^max) of the parameters within which these functions are valid; a map of linear solutions (regions) and affine functions is obtained. For each region k, it is possible to obtain a related marginal cost MC_t^k which describes how much the objective function value would change in case of a load increase by 1. The marginal cost stays valid for a given solution as long as the load parameter does not exceed the bounds (i.e., as long as the optimal cost structure does not change). In the second step of the algorithm, an MILP scheduling problem is solved for each of the identified regions (a schematic sketch of this loop is given after Figure 3). The scheduling problem is modified to account for the marginal cost in the objective function (Eq. 3). Each of the scheduling problems is solved with the load variable restricted to the upper/lower bound values of the region for which the marginal cost applies.

Figure 3. Sequential solution approach using Multiparametric Programming with example graph showing the mp-MILP solution for the load parameter for 2 successive hours
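Conceptually, the second step is a loop over the regions returned by the mp-MILP step, followed by picking the region with the smallest scheduling objective (Eq. 4). The sketch below illustrates this; `solve_scheduling_milp` and the region dictionaries are hypothetical stand-ins for the GAMS/CPLEX scheduling model and the YALMIP mp-MILP output described in Section 4.

```python
# Schematic sketch of the sequential approach; `regions` stands for the
# mp-MILP output (each region k carries per-slot load bounds q_min/q_max and
# a marginal cost curve MC), and `solve_scheduling_milp` is a hypothetical
# stand-in for the scheduling MILP.
def system_optimal_schedule(regions, solve_scheduling_milp):
    best = None
    for region in regions:
        # Scheduling MILP with the marginal cost embedded in the objective
        # (Eq. 3) and the load q_t restricted to the region's bounds.
        result = solve_scheduling_milp(q_bounds=(region["q_min"], region["q_max"]),
                                       marginal_cost=region["MC"])
        if result is None:      # the region's load bounds are infeasible
            continue            # for the scheduler: skip it
        if best is None or result["objective"] < best["objective"]:
            best = result       # current minimizer of Eq. (4)
    return best                 # system-wide optimum and purchase structure
```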



Table 2: Computational times of the mp-MILP energy-cost problem

Number of time slots            2     3      4      5       6
CPU time [s]                   54    67   1471  38302  629709
Number of LP regions found     18    85    461   2809   10639

For scheduling problems with feasible loads, the optimal objective function value can be obtained for each region and compared with the values for the other regions in order to find the minimal one. The latter, together with its corresponding electricity purchase structure, is the system-wide optimal solution of the original problem (Eq. 4).

4. Numerical case study
The mp-MILP model of the minimum-cost flow network is solved in MATLAB using the YALMIP toolbox, version 20131002 (Löfberg, 2004). The scheduling problem is solved using GAMS 24.1.2 and the CPLEX 12.5.1 solver. The computational times of the mp-MILP problem are shown in Table 2. The solution time grows significantly with the increasing number of binary variables. For the minimum-cost flow network model, the number of binary variables is directly related to the number of time slots. The largest tractable instance, with 6 time slots considered in the energy-cost optimization problem, takes more than 7 days. The stainless-steel process use case assumes a scheduling horizon of 24 h with a 1 h discretization interval for the prices of electricity. Thus the energy problem should also consider 24 time slots. Since this is not possible due to the computational limitations, the time interval that is considered in the scheduling step can be changed such that a larger scheduling horizon is taken into account. The time slot lengths considered in the stainless-steel use case are 90, 120, 150 and 210 min. The solution times reported in Table 3 relate only to the production scheduling part of the algorithm, in which multiple scheduling problems are solved. In order to assess the performance of the proposed approach (denoted in Table 3 as the "mp-MILP" approach), the results can be compared to the monolithic MILP formulation (denoted as "mono") of the same problem using the scheduling model described in Hadera et al. (2016). The total computational time of the mp-MILP approach consists of the CPU time of the mp-MILP energy-cost optimization step (Table 2) and the CPU time of the entire scheduling optimization step (Table 3).

5. Discussion and conclusion
Many of the solution regions identified in the mp-MILP step were defined by load restrictions which were not feasible for the scheduler. The results show that the new approach obtains exactly the same system-wide optimal objective function values (total cost) as the monolithic model strategy. For some problem instances the resulting load curve (q_t) of the monolithic approach differed from the one obtained by the sequential approach. This is because there may be multiple optimal solutions with the same objective function value. Also, there is a trade-off between the production and energy cost: for some solutions a higher production cost can result in a lower energy cost, and vice versa. The computational performance of the sequential approach is much worse than that of the monolithic one. The proposed approach is limited to small examples due to the limitations of the Multiparametric Programming step. Further work could focus on finding an effective pre-computation step for the mp-MILP problem in such a way that

564

H. Hadera et al.

it would simplify the optimization problem in order to reduce the number of relevant mp-MILP regions. Another improvement at the mp-step could restrict the solution exploration space to only those load values which are feasible for the scheduling problem. In this way, a potentially much smaller number of regions would need to be assessed by the mp-MILP and evaluated by the scheduler. It would also be interesting to solve the scheduling problem first and use the optimal load curve to initialize the mp-energy problem. This could limit the number of relevant solution regions in the latter, but would require an additional scheduling step.

Table 3: Stainless-steel use case results

No. of slots / slot length | No. of products | No. of feasible regions in mp-MILP | CPUs mono | CPUs mp-MILP (scheduling step) | Total cost
5 slots / 90 min  | 2 | 8  | 0.51 | 1397.7 | 45523
5 slots / 90 min  | 3 | 13 | 0.62 | 1308.7 | 43420
5 slots / 90 min  | 4 | 21 | 3.15 | 1308.3 | 42953
4 slots / 120 min | 2 | 8  | 0.56 | 170.2  | 36029
4 slots / 120 min | 3 | 7  | 0.66 | 200.5  | 35449
4 slots / 120 min | 4 | 4  | 1    | 221.4  | 46544
3 slots / 150 min | 2 | 5  | 0.39 | 30     | 26278
3 slots / 150 min | 3 | 2  | 0.41 | 31     | 34367
2 slots / 210 min | 2 | 1  | 2.12 | 7      | 26138

Acknowledgment
We would like to acknowledge the Marie Curie FP7-ITN project "Energy savings from smart operation of electrical, process and mechanical equipment – ENERGY-SMARTOPS", Contract No: PITN-GA-2010-264940, for financial support.

References
Gal, T., Nedoma, J., 1972. Multiparametric Linear Programming. Manag. Sci., 18, 7, 406–422.
Dua, V., Pistikopoulos, E., 2000. An Algorithm for the Solution of Multiparametric Mixed Integer Linear Programming Problems. Ann. Oper. Res., 99, 1-4, 123–139.
Hadera, H., Harjunkoski, I., Sand, G., Grossmann, I. E., Engell, S., 2015a. Optimization of steel production scheduling with complex time-sensitive electricity cost. Comput. Chem. Eng., 76, 117–136.
Hadera, H., Labrik, R., Sand, G., Harjunkoski, I., Engell, S., 2016. An improved energy-awareness formulation for general precedence continuous-time scheduling model. Accepted to Ind. Eng. Chem. Res., DOI: 10.1021/acs.iecr.5b03239.
Hadera, H., Wide, P., Harjunkoski, I., Mäntysaari, J., Ekström, J., Sand, G., Engell, S., 2015b. A Mean Value Cross Decomposition Strategy for Demand-side Management of a Pulping Process. Comput. Aided Chem. Eng., 37, 1931–1936.
Löfberg, J., 2004. YALMIP: A toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50099-0

Simulation and optimization of the molecular weight distribution in a non-ideal polymerization reactor Chi Zhang,a Xi Chen,a* Zhijiang Shao,a Chaozhong Xu,b Jiajun Wangb a State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, P.R. China b State Key Laboratory of Chemical Engineering, College of Chemical and Biological Engineering, Zhejiang University, Hangzhou, 310027, P.R. China

[email protected]

Abstract
Molecular weight distribution (MWD) is essential for describing the microstructural quality of a polymer. In the past decade, the simulation and optimization of the MWD of polymerization processes have received considerable attention. However, most studies are limited to ideal reactors, such as CSTRs, whereas in large-scale industrial reactors imperfect mixing often leads to a non-uniform distribution. The commercial software Fluent is used in this study to simulate free radical polymerization by the method of moments and computational fluid dynamics (CFD). An interface is designed in-house to extend the simulation to the MWD calculation. Given a specific MWD curve as the target, process optimization is performed by combining the CFD and MWD calculations through the in-house developed interface between C language and Fluent. The approach is demonstrated for a non-ideal tubular reactor of the low-density polyethylene process.
Keywords: computational fluid dynamics; free radical polymerization; molecular weight distribution; process optimization.

1. Introduction
The end-use properties of polymers are strongly dependent on their molecular weight distribution (MWD). Some studies have been conducted on MWD calculation, but few have focused on determining the optimal operating policies to achieve a target MWD. In addition, most of these studies rely on an ideal reactor assumption, which does not apply to large-scale industrial reactors with poor mixing. In recent years, researchers have used the computational fluid dynamics (CFD) method to obtain more accurate simulation results. Read et al. (1997) used the CFD method to simulate a low-density polyethylene (LDPE) autoclave reactor and investigated its properties. Wells and Ray (2005) solved the polymerization reaction in an autoclave reactor and studied its stability. Serra et al. (2007) investigated polymerization in microreactors and calculated the polydispersity index (PDI) and monomer conversion. Roudsari et al. (2013) modeled the polymerization in a lab-scale reactor with the CFD method and studied the factors influencing conversion. Among the studies on CFD-based modeling of polymerization, none used the MWD as the product quality measure, probably because of the computational complexity. In this study, a distributed model is developed using the method of moments. The MWD is computed as a function of reactant concentrations and the moments with the use of an



in-house developed interface. Given a specific MWD curve as the target, the optimal operating conditions are determined to achieve the maximal conversion under the MWD constraint for a non-ideal reactor.

2. CFD Simulation
Free radical polymerization is studied in this project. It involves initiator decomposition, chain initiation, propagation, chain transfer to monomer, termination by combination, termination by disproportionation, chain transfer, and chain termination. By applying the method of moments (Kiparissides et al., 1997), one can replace the polymerization scheme with a conventional reaction scheme that has a manageable size. The moments of live radical and dead polymer are defined as follows:

\lambda_i = \sum_{n=1}^{\infty} n^i [R_n], \qquad \mu_i = \sum_{n=1}^{\infty} n^i [P_n], \qquad i = 0, 1, 2 \quad (1)

where R_n is a radical; P_n is a polymer; n is the chain length of radicals or polymers; and i represents the order of the moments. A simplified reaction scheme using the moments can be derived as (Zhou et al., 2001)

Step 1: I \xrightarrow{k_d} 2A
Step 2: A + M \xrightarrow{k_x} R
Step 3: R + M \xrightarrow{k_p} R
Step 4: R + R \xrightarrow{k_{td} + \frac{1}{2}k_{tc}} P
Step 5: M + (R) \xrightarrow{k_{trm}} P + (R)
\quad (2)

where I is the initiator; M is the monomer; A is a radical; R denotes the total radicals, R = \sum_{n=1}^{\infty} R_n; P denotes the total polymers, P = \sum_{n=1}^{\infty} P_n; and k is the rate constant of the reaction. In addition to the five-step reaction scheme, a mass source term (k_{td} R^2) is added to the transport equation of the total radicals, R, for consistency with the source terms derived in Eq. (2). The scalar transport equations for the moments given in Eq. (3) are also solved simultaneously.

\frac{d[\lambda_0]}{dt} = \frac{d[R]}{dt} = -k_{td}\lambda_0^2 + k_x[A][M], \qquad \frac{d[\mu_0]}{dt} = \frac{d[P]}{dt} = k_{td}\lambda_0^2 + k_{trm}[M]\lambda_0

\frac{d[\lambda_1]}{dt} = -k_{td}\lambda_0\lambda_1 + k_x[A][M] + k_p\lambda_0[M] + k_{trm}[M](\lambda_0 - \lambda_1), \qquad \frac{d[\mu_1]}{dt} = k_{td}\lambda_0\lambda_1 + k_{trm}[M]\lambda_1

\frac{d[\lambda_2]}{dt} = -k_{td}\lambda_0\lambda_2 + k_x[A][M] + k_p\lambda_0[M] + k_{trm}[M](\lambda_0 - \lambda_2) + 2k_p\lambda_1[M], \qquad \frac{d[\mu_2]}{dt} = k_{td}\lambda_0\lambda_2 + k_{trm}[M]\lambda_2

\quad (3)
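Transcribed directly into code, the right-hand sides of Eq. (3) read as follows. This is a plain reference transcription; in the CFD model these terms enter as UDF source terms rather than as a standalone ODE system, and the rate constants are whatever the kinetic scheme supplies.

```python
def moment_rhs(lam, A, M, rate_constants):
    """Right-hand sides of the moment balances in Eq. (3).

    lam: live-radical moments (lambda_0, lambda_1, lambda_2)
    A, M: radical and monomer concentrations
    rate_constants: (ktd, ktrm, kx, kp)
    """
    l0, l1, l2 = lam
    ktd, ktrm, kx, kp = rate_constants
    dl0 = -ktd * l0**2 + kx * A * M
    dl1 = -ktd * l0 * l1 + kx * A * M + kp * l0 * M + ktrm * M * (l0 - l1)
    dl2 = (-ktd * l0 * l2 + kx * A * M + kp * l0 * M
           + ktrm * M * (l0 - l2) + 2.0 * kp * l1 * M)
    dm0 = ktd * l0**2 + ktrm * M * l0       # dead-polymer moments mu_0..mu_2
    dm1 = ktd * l0 * l1 + ktrm * M * l1
    dm2 = ktd * l0 * l2 + ktrm * M * l2
    return (dl0, dl1, dl2), (dm0, dm1, dm2)
```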

Both R and P, which are equivalent to λ0 and μ0 respectively, need not be solved. The mixture viscosity is described as

\eta = \eta_{ethy} \exp\left[ 2.00 + 0.017\left(\frac{\mu_1}{\mu_0}\right)^{0.556}\mu_1 + \frac{-500 + 560\,\mu_1}{R_g}\left(\frac{1}{T} - \frac{1}{423}\right) \right] \quad (4)

where \eta_{ethy} is the viscosity of ethylene. Given this description of viscosity, the effects of temperature and product distribution are included in the fluid motion. Following the Flory distribution method, the MWD can be calculated as

W(n) = n(\tau + \beta)\left[\tau + \frac{\beta}{2}(\tau + \beta)(n-1)\right]\left(\frac{1}{1 + \tau + \beta}\right)^{n+1} \quad (5)

\tau = \frac{k_{td}[R] + k_{trm}[M]}{k_p[M]}, \qquad \beta = \frac{k_{tc}[R]}{k_p[M]} \quad (6)
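Eqs. (5)-(6) are straightforward to evaluate once τ and β are known at a location. A direct transcription is given below; the authors' post-processing is written in C, so this Python version is only an illustration of the formulas.

```python
import numpy as np

def flory_parameters(ktd, ktrm, ktc, kp, R_conc, M_conc):
    """tau and beta of Eq. (6) from local concentrations and rate constants."""
    tau = (ktd * R_conc + ktrm * M_conc) / (kp * M_conc)
    beta = ktc * R_conc / (kp * M_conc)
    return tau, beta

def flory_mwd(n, tau, beta):
    """Weight-fraction distribution W(n) of Eq. (5) for chain length(s) n."""
    n = np.asarray(n, dtype=float)
    return (n * (tau + beta)
            * (tau + 0.5 * beta * (tau + beta) * (n - 1.0))
            * (1.0 / (1.0 + tau + beta)) ** (n + 1.0))
```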

For the CFD simulation, Gambit 2.4.26 is used to generate the reactor geometry and discretize the computational domain. ANSYS Fluent 15.0 is employed to solve all transport equations in a steady-state condition. The first-order upwind scheme is selected to calculate the cell face fluxes. Pressure–velocity coupling is achieved using the SIMPLE algorithm. The finite volume method is used in the CFD modeling to solve the governing transport equations.

Figure 1. Process diagram using Fluent to solve the MWD.

The entire process diagram is shown in Figure 1. User-defined function (UDF) codes are written in C for the source terms and properties and linked to the species transport model in Fluent. Fluent features a reactive species balance option, which is designed to deal with chemically reacting species. However, this option cannot be used directly for the lumped kinetic model of free radical addition polymerization, because the propagation reaction shown in Step 3 of Eq. (2), "R + M → R," violates the mass conservation law. Another UDF code is therefore written to define the polymerization reaction, as shown in the left part of Figure 1. Since the simulation is difficult to converge, a three-stage approach is employed. The steady-state flow solution is carried out first, followed by solving the steady-state convection–diffusion equations for the concentrations of initiator, monomer, and total radicals simultaneously with constant viscosity. Finally, the viscosity equation is added and calculated together with the other equations. After the simulations have finished successfully, the distributions of all the variables, including those required to further calculate the MWD in Eqs. (5) and (6), are available in Fluent. Then, as shown in the right part of Figure 1, τ and β are calculated by the Custom Field Function Calculator. An interface is designed so that Fluent writes τ and β to files. Finally, the MWD can be calculated externally by reading these data files in C.

3. Optimizations
The process optimization with the CFD simulation is also studied; it is formulated as the following problem:



\max_x \ \mathrm{conversion}(x)
\mathrm{s.t.} \quad \left| MWD - \overline{MWD} \right| \le \varepsilon
\qquad\quad x_{low} \le x \le x_{up}
\quad (7)

The goal of the optimization is to maximize the conversion at the exit center subject to the MWD constraint with a given target, \overline{MWD}, and a specified tolerance, ε. The initiator mass fraction at the inlet is chosen as the manipulated variable, x. A derivative-free optimization algorithm is preferred in this work because of the difficulty of obtaining gradient information. The simplex method is used because of its good performance and easy implementation. An interface is developed to conduct the optimization using the ANSYS Fluent software. As shown in Figure 2, the left part describes the optimization process: the polymerization process is simulated by Fluent at each iteration. The right part of Figure 2 describes the details of conducting the Fluent simulation. An interface between C and Fluent, including the interface program and interface data, is developed in-house to automate the optimization, as illustrated in the middle part of Figure 2. The interface program consists of three parts (a sketch of the resulting driver loop is given after Figure 2):
1) Write journal program. This program creates a Fluent journal file containing the specific initiator mass fraction and the other operations required in Fluent. The journal file is written with textual commands, which are concise and easy to understand. It also instructs Fluent to write data into files after a simulation.
2) Call Fluent program. This program is used to call Fluent during the optimization. Fluent is activated to read the journal file and solve the reaction under the new condition.
3) Read data program. This program reads τ, β, and the conversion from the data file. The data are fed back to the optimization program.

Figure 2. Schematic of the optimization method.
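Schematically, the three interface programs form one driver loop around the simplex search. The sketch below illustrates the idea in Python with SciPy's Nelder-Mead, whereas the authors implemented the interface and simplex method themselves in C; the journal commands, file names and the mwd_error helper are illustrative assumptions, not Fluent's or the authors' actual syntax.

```python
import subprocess
import numpy as np
from scipy.optimize import minimize

def run_fluent_case(x_initiator):
    # 1) Write journal: set the inlet initiator mass fraction and request
    #    that tau, beta and the conversion be exported after solving.
    #    (Commands below are placeholders, not verbatim Fluent journal syntax.)
    with open("case.jou", "w") as jou:
        jou.write(f"; set inlet initiator mass fraction = {x_initiator:.6e}\n")
        jou.write("; ...solve, then write tau, beta, conversion to results.txt\n")
    # 2) Call Fluent in batch mode on the journal file.
    subprocess.run(["fluent", "2d", "-g", "-i", "case.jou"], check=True)
    # 3) Read tau, beta and the conversion back from the exported file.
    tau, beta, conversion = np.loadtxt("results.txt")
    return tau, beta, conversion

def objective(x, mwd_error, eps=1e-6, weight=1e8):
    # mwd_error: callable returning the normalized mismatch between the Flory
    # MWD for (tau, beta) and the 48 target samples (cf. flory_mwd above).
    tau, beta, conversion = run_fluent_case(float(x[0]))
    error = mwd_error(tau, beta)
    penalty = weight * error if error > eps else 0.0
    return -conversion + penalty        # maximize conversion via minimization

# Illustrative call of the simplex search:
# res = minimize(objective, x0=[2.0e-4], args=(my_mwd_error,),
#                method="Nelder-Mead", options={"xatol": 2e-3})
```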

4. Results and Discussion
4.1. CFD simulation of LDPE tubular reactors
In this project, a two-dimensional tube, 10 m in axial length and 0.038 m in diameter, is studied. The initiator is assumed to be premixed with the monomer and uniformly injected into the tube. The inlet initiator and monomer mass fractions are 0.000378 and 0.999622, respectively. The inlet temperature is 500 K. The velocity is 21.85 m/s, and the operating pressure is 2150 atm. The physical constants are the same as those used by Zhou et al. (2001). The finite rate model is used to deal with the reaction. The moments of radicals and polymers are solved by six additional user-defined scalar equations. The rate constants are set following Zhou et al. (2001).



The conversion distribution of the simulation is shown in Figure 3. It should be noted that the radial direction has been scaled up 20 times to better illustrate the results. The conversion increases rapidly at approximately 5 m downstream of the inlet along the axial direction. It is slightly lower at the center than near the wall along the radial direction. Differences are evident in the radial direction, as shown in Figure 3; thus, the flow in the reactor is actually not plug flow. The distribution of the MWD along the axial center can be derived using the in-house developed interface, as shown in Figure 4. The MWD peak initially increases and subsequently drops to a stable value after approximately 5 m downstream of the inlet. The MWD also becomes wider farther away from the inlet. The axial PDI increases from approximately 1.9 to 2.46, as shown in Figure 5.

Figure 3. Conversion distribution in the reactor.

Figure 4. The distribution of MWD along the axial direction.

4.2. Optimization of conversion for target MWD
As shown in Figure 6, the curve with asterisks shows the target MWD. The initiator mass fraction is used as the optimized variable, x. A total of 48 equidistant sampling points are chosen from the target logarithmic MWD curve and normalized to calculate the MWD error constraint. The tolerance of the normalized MWD error constraint, ε, is set to 1e-6. If the error is larger than the tolerance, it is added to the objective function as a penalty with a weight of 1e8. The upper and lower bounds of the optimized variable x are set to 5.67e-4 and 1.89e-4, respectively. The initial iterate is set to 2.0e-4. At each iteration, two simplex points are obtained, and the better ones are listed in Table 1. The program is terminated when the deviation of the two points is within the convergence tolerance, which is set to 2e-3. In this case, the program takes nine steps to find the optimum point. The optimal initiator mass fraction x is 5.5e-4, and the conversion is 7.722e-2.

Table 1. Optimization results at every simplex step

Step | Initiator concentration | Objective function | MWD error | Conversion | Convergence tolerance
0 | 2.000e-4 | -2.152e+4 | 2.152e-4  | 6.261e-2 | -
1 | 3.200e-4 | -5.855e+3 | 5.855e-5  | 6.907e-2 | 2.132e+0
2 | 4.800e-4 |  7.510e-2 | 9.982e-7  | 7.510e-2 | 7.796e+4
3 | 4.800e-4 |  7.510e-2 | 9.982e-7  | 7.510e-2 | 2.945e+3
4 | 5.200e-4 |  7.634e-2 | 8.797e-13 | 7.634e-2 | 1.623e-2
5 | 5.200e-4 |  7.634e-2 | 8.797e-13 | 7.634e-2 | 7.980e-3
6 | 5.400e-4 |  7.693e-2 | 9.857e-7  | 7.693e-2 | 7.671e-3
7 | 5.400e-4 |  7.693e-2 | 9.857e-7  | 7.693e-2 | 3.805e-3
8 | 5.500e-4 |  7.722e-2 | 9.873e-7  | 7.722e-2 | 3.734e-3
9 | 5.500e-4 |  7.722e-2 | 9.873e-7  | 7.722e-2 | 1.860e-3



The MWD curves of the initial, target, and optimum points are compared in Figure 6. The initial MWD curve is different from the target MWD, whereas the optimum and target MWD curves almost overlap, with a normalized difference of 9.873e-7. The method not only finds a suitable inlet initiator concentration for the target MWD, but also helps improve the conversion of the process. The conversion curves along the axial direction are shown in Figure 7. The optimum curve presents a higher conversion than the initial curve.

Figure 5. Axial PDI distribution.

Figure 6. MWD curves of the initial, target, and optimum points.

Figure 7. Conversion curves of the initial and optimum points.

5. Conclusions
In this study, the CFD method is demonstrated to be useful for predicting the MWD in a non-ideal polymerization reactor. An interface between the CFD simulation and the MWD calculation is developed in-house to enable process optimization with the MWD embedded. With the use of CFD simulation and process optimization, an industrial tubular reactor for LDPE achieves the maximal conversion subject to the MWD constraint.

Acknowledgments
We gratefully acknowledge the financial support of the 973 Program (No. 2012CB720503) and the National Natural Science Foundation of China (Nos. U1509209 & 61374205).

References
N. K. Read, S. X. Zhang, W. H. Ray, 1997. Simulations of a LDPE reactor using computational fluid dynamics. AIChE Journal, 43(1): 104-117.
G. J. Wells, W. H. Ray, 2005. Methodology for modeling detailed imperfect mixing effects in complex reactors. AIChE Journal, 51(5): 1508-1520.
W. Zhou, E. Marshall, L. Oshinowo, 2001. Modeling LDPE tubular and autoclave reactors. Industrial & Engineering Chemistry Research, 40(23): 5533-5542.
C. Serra, G. Schlatter, N. Sary, F. Schönfeld, G. Hadziioannou, 2007. Free radical polymerization in multilaminated microreactors: 2D and 3D multiphysics CFD modeling. Microfluidics & Nanofluidics, 3(4): 451-461.
S. F. Roudsari, F. Ein-Mozaffari, R. Dhib, 2013. Use of CFD in modeling MMA solution polymerization in a CSTR. Chemical Engineering Journal, 219(3): 429-442.
C. Kiparissides, G. Daskalakis, D. S. Achilias, E. Sidiropoulou, 1997. Dynamic Simulation of Industrial Poly(vinyl chloride) Batch Suspension Polymerization Reactors. Industrial & Engineering Chemistry Research, 36(4): 1253-1267.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50100-4

Efficient optimization-based design of energetically intensified distillation processes Thomas Waltermanna,* and Mirko Skiborowskia a TU Dortmund University, Faculty of Chemical and Biochemical Engineering, Laboratory of Fluid

Separations, Emil-Figge-Straße 70, D-44227 Dortmund, Germany *[email protected]

Abstract
Although suffering from low energy efficiency, distillation is still the most widespread separation technique in industry. Different means for the improvement of energy efficiency, such as heat-pump assisted and thermally coupled distillation columns, have been developed. However, the comparison of these different configurations and the identification of the most energetically or economically beneficial option for a specific separation is an elaborate task. Therefore, an efficient optimization-based method for the evaluation of the different options is proposed. In contrast to most previous approaches, the method is based on a superstructure equilibrium tray model including rigorous thermodynamics. It is therefore directly applicable to the separation of non-ideal mixtures and considers a broad range of different options for energetically intensified distillation processes. For illustration purposes the design method is applied to the separation of a ternary non-ideal mixture.
Keywords: Distillation, heat integration, thermal coupling, optimization

1. Introduction
Due to the increasing scarcity of fossil feedstocks and rising energy prices there is a need for energy efficiency and more sustainable industrial processes. While distillation is still the most widespread separation technique in industrial application, conventional distillation processes suffer from low thermodynamic efficiency and therefore offer large potential for improvements in terms of energy savings (Kiss, 2014). Different approaches to heat integration as well as concepts for the thermal coupling of distillation columns have been proposed to increase the energy efficiency of distillation processes. Heat integration can either be achieved by adjusting the operating conditions (i.e., the pressure) of the single columns in a distillation sequence (HI) (Harwardt et al., 2009) or by introducing heat pumps, e.g., vapor recompression (VRC), for individual columns (Kiss, 2014). Without altering the operating conditions or introducing heat pumps, the energy requirement can be reduced by means of thermal coupling of the columns (TC) in a distillation sequence, which can further be integrated into a single column shell, resulting in a so-called divided wall column (DWC) and savings in capital costs (Kiss, 2014). Figure 1 provides an overview of the different distillation configurations for the separation of a mixture into three products. For each of the basic configurations (direct, indirect and sloppy split) different energetically intensified distillation configurations can be designed, which are exemplarily shown for a direct sequence. For each basic configuration there is an optimal heat-integrated or thermally coupled design. The selection of the most efficient configuration is an elaborate task requiring the evaluation of the single options at optimal design and operating conditions, which



depend on the mixture properties and separation tasks. In order to shorten this selection process, Kiss et al. (2012) proposed a selection scheme based on heuristic rules. For the resulting short list of distillation techniques, rigorous simulations are nonetheless required to determine the most efficient option for an individual separation task (Kiss et al., 2012). Methods for the screening of thermally coupled distillation configurations are usually based on shortcut methods. In most cases modifications of the Fenske-Underwood-Gilliland method are used, whose application is limited to ideal mixtures (Brueggemann and Marquardt, 2004). For non-ideal mixtures rigorous thermodynamics have to be considered. Therefore shortcut methods such as the rectification body method are required and can be used for an efficient screening of TC configurations for minimum heat requirements (Brueggemann and Marquardt, 2004). However, since these shortcut methods cannot consider investment costs, an economic comparison of thermally coupled, conventionally heat-integrated and heat-pump assisted configurations requires at least rigorous MESH models. While Duennebier and Pantelides (1999) extended the investigation of TC columns including DWC to a rigorous equilibrium tray model, the application was limited to ideal mixtures and other energetically intensified distillation configurations were not considered. The same applies to other previous works using rigorous modeling and optimization, which were mainly focused on the detailed investigation of a specific kind of distillation configuration (Harwardt et al., 2009; Kraemer et al., 2009a; Harwardt and Marquardt, 2012). Nevertheless, in order to determine the optimal configuration, as many options as possible should be included in the screening process and compared on a reliable basis. In order to determine the most energetically or economically beneficial of these options, they have to be compared at their optimal settings. Therefore, in order to identify the maximum saving potential, an efficient optimization-based method for the evaluation of the different options is proposed. This method makes use of rigorous equilibrium tray models including non-ideal thermodynamics and is therefore applicable to non-ideal as well as azeotropic mixtures. Moreover, both operating and investment costs are taken into account for the evaluation of the process variants. Consequently, this allows a profound comparison of a broad range of distillation configurations for difficult separation tasks.

Figure 1: Overview of different distillation configurations for a three-product separation process

2. Modeling and Optimization
In this work several concepts for energetically intensified distillation are compared. The investigated variants include conventional distillation, VRC, thermally coupled columns as well as DWC for the direct, indirect and sloppy splits of a mixture into three products. Moreover, for the direct and indirect split conventional heat integration by adjusting column pressure is also considered. This results in the screening of 16 different process configurations for one specific separation task. The proposed method is based on a rigorous equilibrium tray model including non-ideal thermodynamics and does not require additional short-cut calculations for initialization. The model is briefly described in the following subsections, while a more elaborate description can be found in the referenced articles.



2.1. Modeling
The superstructure model is based on the MESH equations (mass balances, equilibrium conditions, summation constraints and enthalpy balances) and is elaborately described in the articles of Kraemer et al. (2009b) and Skiborowski et al. (2015). In the VRC configuration an isentropic compression of the top vapor stream is modeled similarly to the approach of Harwardt and Marquardt (2012). For all heat-integrated configurations additional heat exchangers are considered to provide additional cooling or heating duty by means of external utilities.

2.2. Cost estimation
Dimensioning and cost estimation are done similarly to the work of Kraemer et al. (2009b) and Skiborowski et al. (2015) using the cost correlations of Guthrie (Biegler et al., 1997). The calculated capital costs are adjusted by the M&S index for 2010 and depreciated over a time period of 10 years with an interest rate of 6 %, assuming an annual operation time of 8000 h. The assumed utility costs for steam, cooling water and electricity are given in Table 1. The investment costs of the DWC are estimated by considering only one column shell, which is sized based on the maximum vapor stream. For the partitioned section the vapor streams of both sides are added up. Moreover, to account for the additional costs due to the partition wall, a surcharge factor of 20 % is added to the investment costs of the column shell (Dejanovic et al., 2011). For the conventional heat-integrated configurations the investment costs of the required pumps are neglected.

Table 1: Assumed utility costs (values taken from Baerns et al. (2012))

Utility           Costs
Steam (3 bar)     16 €/t
Steam (15 bar)    18 €/t
Cooling water     0.05 €/t
Electricity       0.06 €/kWh
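The annualization just described corresponds to a standard capital-recovery calculation. As a small illustration (one common reading of "depreciated over 10 years with an interest rate of 6 %" is the annuity factor below; the authors' exact convention is not spelled out, and the Guthrie correlations themselves are not reproduced):

```python
def annualized_capital_cost(investment, years=10, interest=0.06):
    """Annuity (capital recovery) factor over 10 years at 6 % interest."""
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return investment * crf

def annual_utility_cost(steam_3bar_t_h=0.0, steam_15bar_t_h=0.0,
                        cooling_water_t_h=0.0, electricity_kw=0.0,
                        hours_per_year=8000.0):
    """Yearly operating cost from the utility prices of Table 1."""
    return hours_per_year * (16.0 * steam_3bar_t_h + 18.0 * steam_15bar_t_h
                             + 0.05 * cooling_water_t_h + 0.06 * electricity_kw)
```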

2.3. Optimization
The design procedure itself is based on a few sequential steps which are illustrated in Figure 2. All optimizations are done by means of a gradient-based approach and a superstructure formulation. For each split, the conventional columns are first initialized individually by means of flash calculations and subsequent optimization for minimum energy demand (MED). Then, all energetically intensified configurations are introduced. For the thermally coupled configurations and the DWC design the superstructure is automatically adapted from the sequence of simple columns. Afterwards, the energetically intensified configurations are optimized for MED, before all process variants are finally optimized for minimum TAC.

Figure 2: Stepwise optimization procedure for the evaluation of the individual distillation configurations

The developed method is implemented in GAMS 23.6. The calculation of the thermodynamic properties is done by external functions as described by Skiborowski et al. (2015). Furthermore, the resulting mixed-integer nonlinear programming (MINLP) problem is solved as a series of successively relaxed MINLP problems in order to increase the efficiency and robustness of the calculations (Kraemer et al., 2009b). The complete optimization of one individual process configuration does not exceed 15 minutes of calculation time on a PC with a 3.2 GHz quad-core CPU. Moreover, about 94 % of the calculations finished in less than 10 minutes, and the computational time over the different configurations averages 301 seconds. While the current calculations were carried out in sequential runs, the overall calculation time could be significantly shortened by using parallel processing.
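The successive relaxation strategy can be caricatured as solving a sequence of NLPs in which the binary decisions are relaxed to [0,1] and driven towards integrality by a growing penalty. The generic sketch below illustrates that idea only; it is not the specific formulation of Kraemer et al. (2009b).

```python
import numpy as np
from scipy.optimize import minimize

def successively_relaxed(obj, y0, bounds, penalties=(0.0, 1.0, 10.0, 100.0)):
    """Solve a sequence of relaxed NLPs, penalizing fractional binaries.

    The term sum(y * (1 - y)) vanishes only at integral points, so raising
    its weight pushes the relaxed binaries y in [0, 1] towards {0, 1}.
    """
    y = np.asarray(y0, dtype=float)
    for rho in penalties:
        res = minimize(lambda v: obj(v) + rho * np.sum(v * (1.0 - v)),
                       y, bounds=bounds, method="L-BFGS-B")
        y = res.x
    return np.round(y), y   # rounded decisions and the final relaxed point
```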



3. Case study - Results & discussion
The developed method is demonstrated by application to the separation of a ternary, non-ideal mixture consisting of chloroform (C), benzene (B) and toluene (T) with a feed flowrate of 10 mol/s. The feed stream enters the process as saturated liquid at atmospheric pressure and is separated into three different products with a purity requirement of 99.9 mol-% each. Equilibrium calculations are based on the Wilson model for the liquid phase, while the vapor phase is modeled as an ideal gas. Since the composition of the feed stream has a major influence on the energy demand and total costs of a separation process, different exemplary feed compositions are investigated, and for each the optimal distillation configuration is determined.


  

3.1. Equimolar feed

Figure 3: Annualized cost values of the different distillation configurations for the separation of an equimolar feed of CBT


Figure 4: Indirect sequence with heat integration (column 1 with elevated operating pressure) as optimal configuration for an equimolar feed of CBT

For most configurations the TOC outweigh the TIC significantly. However, for the heat-pump assisted VRC configurations the ratio of TIC to TOC is basically inverted due to the additional investment costs for the required compressors.



3.2. Benzene-rich feed


The design method is also applied to the separation of a mixture rich in the middle-boiling component. The resulting TAC can be seen in Figure 5. The comparison of the different configurations reveals the sloppy split as the most suited conventional sequence for this specific separation task. Overall, the DWC with a middle partition wall (see Figure 6) results in the lowest TAC and savings of about 22 % in relation to the sloppy split by conventional columns. The DWC configuration benefits from the high amount of middle-boiling component, since in contrast to other configurations remixing of the components is reduced and the middle-boiling component can be withdrawn by a sidestream at high purity (Kiss, 2014).

Figure 5: Annualized cost values of the different distillation configurations for the separation of a benzene-rich feed of CBT (xC = 0.2, xB = 0.6, xT = 0.2)


3.3. Benzene-poor feed Finally, the design method is applied to the separation of a feed with a low fraction of the middle boiling component. TAC values for the different configurations are given in Figure 7. The optimal configuration uses conventional heat integration in a indirect sequence with column 1 operating at elevated pressure (HI Col1) which was also the optimal configuration for the equimolar feed. In addition to the optimal configuration most of the alternative approaches used on the direct cut also display saving potentials. Here, the thermal coupled sequence and the DWC with an upper wall are the best options while the VRC configuration and conventional heat integration with column 2 operating at elevated pressure result in slightly less savings. Altogether, the average TAC as well as the saving potentials of the distillation variants are the lowest among the investigated feed compositions.

T. Waltermann and M. Skiborowski

576

4. Conclusion

350 325 300 275 250 225 200 175 150 125 100 75 50 25 0

TOC

TC

DWC

VRC

simple

TC

DWC

VRC

HI Col2

simple

HI Col1

TC

DWC

VRC

HI Col2

simple

HI Col1

  

The developed method has been successfully applied to the separation of a non-ideal mixture into three products. The design method enables the comparison of a variety of energetically intensified distillation configurations for a specific separation task and determines the optimal distillation configuration within a short amount of computational time. In the case study, varying feed compositions were considered. The results show that the feed composition has a strong influence and leads to varying optimal distillation configurations. The presented approach facilitates an efficient evaluation of the different configurations and can further be used to evaluate the influence of different parameters, such as different utility or investment costs. These factors vary with the location at which a process is implemented, and the presented approach can easily be adapted to account for different settings. It could therefore also be used to address the uncertainty related to the different cost factors.

Figure 7: Annualized cost values of the different distillation configurations for the separation of a benzene-poor feed of CBT (xC = 0.4, xB = 0.2, xT = 0.4)

References
M. Baerns, A. Behr, J. Gmehling, H. Hofmann, U. Onken, A. Renken, 2012. Technische Chemie. Wiley-VCH.
L. T. Biegler, I. E. Grossmann, A. W. Westerberg, 1997. Systematic methods for chemical process design. Prentice Hall, Old Tappan, NJ (United States).
S. Brueggemann, W. Marquardt, 2004. Rapid screening of design alternatives for nonideal multiproduct distillation processes. Computers & Chemical Engineering 29 (1), 165–179.
I. Dejanovic, L. Matijasevic, I. J. Halvorsen, S. Skogestad, H. Jansen, B. Kaibel, Z. Olujic, 2011. Designing four-product dividing wall columns for separation of a multicomponent aromatics mixture. Chemical Engineering Research and Design 89 (8), 1155–1167.
G. Duennebier, C. C. Pantelides, 1999. Optimal design of thermally coupled distillation columns. Industrial & Engineering Chemistry Research 38 (1), 162–176.
A. Harwardt, K. Kraemer, W. Marquardt, 2009. Optimization based design of heat integrated distillation processes. In: Proceedings of the 8th World Congress of Chemical Engineering (WCCE).
A. Harwardt, W. Marquardt, 2012. Heat-integrated distillation columns: Vapor recompression or internal heat integration? AIChE Journal 58 (12), 3740–3750.
A. A. Kiss, 2014. Distillation technology - still young and full of breakthrough opportunities. Journal of Chemical Technology and Biotechnology 89 (4), 479–498.
A. A. Kiss, S. J. Flores Landaeta, C. A. Infante Ferreira, 2012. Towards energy efficient distillation technologies - making the right choice. Energy 47 (1), 531–542.
K. Kraemer, A. Harwardt, W. Marquardt, 2009a. Design of heat-integrated distillation processes using shortcut methods and rigorous optimization. In: 10th International Symposium on Process Systems Engineering: Part A.
K. Kraemer, S. Kossack, W. Marquardt, 2009b. Efficient optimization-based design of distillation processes for homogeneous azeotropic mixtures. Industrial & Engineering Chemistry Research 48 (14), 6749–6764.
M. Skiborowski, A. Harwardt, W. Marquardt, 2015. Efficient optimization-based design for the separation of heterogeneous azeotropic mixtures. Computers & Chemical Engineering 72, 34–51.

Zdravko Kravanja, Miloš Bogataj (Editors), Proceedings of the 26th European Symposium on Computer Aided Process Engineering – ESCAPE 26 June 12th -15th, 2016, Portorož, Slovenia © 2016 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63428-3.50101-6

Anaerobic Bio-reactor Modeling

Cansu Birgen (a,*), Heinz A. Preisig (a), Alexander Wentzel (b), Sidsel Markussen (b), Bernd Wittgens (c), Ujjaini Sarkar (d), Anindita Ganguly (d), Sudeshna Saha (d) and Sibashish Baksi (d)

(a) Norwegian University of Science and Technology, Chemical Engineering, Sem Sælands vei 6, Trondheim, Norway
(b) SINTEF Materials and Chemistry, Biotechnology and Nanomedicine, Richard Birkelands vei 3, Trondheim, Norway
(c) SINTEF Materials and Chemistry, Industrial Process Technology, Sem Sælands vei 2A, Trondheim, Norway
(d) Jadavpur University, Chemical Engineering, 188 Raja S. C. Mullick Road, Kolkata, India
(*) [email protected]

Abstract
EcoLodge aims at providing a proof-of-concept for a new, integrated biotechnological production process for the C8 ester butyl butyrate, a promising supplement/substitute for diesel and jet engine fuels, from lignocellulose-derived C5 and C6 sugars. Butanol and butyric acid are the process intermediates produced via anaerobic fermentation. This paper focuses on the mathematical modeling and simulation of BuOH fermentation with an integrated gas stripping unit. The inhibitory effects of butanol and substrate are identified as the main obstacles to high-productivity butanol fermentation. Therefore, in situ removal of inhibitory butanol from the bio-reactor by gas stripping is suggested, taking the substrate inhibition into consideration. Mathematical models from the literature are employed together with experimentally estimated model parameters. Model simulations of the integrated unit are performed in continuous mode to determine optimum values for the substrate concentration in the feed and the feed flow rate. The full domain of the operating ranges was explored for three different inhibitory concentrations of butanol (5, 8 and 10 g/l), which resulted in more than 14,000 simulation experiments. The results provide a clear insight into the feasible combinations of the operating conditions. The optimal operating range obtained for 5 g/l will be employed in the experimental bio-reactor and gas stripping unit in the next phase of the project.
Keywords: Bio-reactor, Biobutanol, Fermentation, Modeling, Gas stripping

1. Introduction
EcoLodge aims at providing a proof-of-concept for a new, integrated biotechnological production process for the C8 ester butyl butyrate (BuB), a promising supplement/substitute for diesel and jet engine fuels, from lignocellulose-derived C5 and C6 sugars. Previous studies claim that BuB should have excellent properties both as a gasoline and as a diesel component (Lange et al., 2010). Butyl butyrate is therefore an interesting biofuel option, of which butanol and butyric acid are the process intermediates produced via anaerobic fermentation. This paper focuses on the model simulation of BuOH fermentation with an integrated gas stripping unit to determine optimal operating points. The process for production of butanol via anaerobic fermentation by clostridia is popularly known as acetone-butanol-ethanol (ABE) fermentation (Mayank et al., 2013). Clostridia are rod-shaped, spore-forming, Gram-positive bacteria and typically strict anaerobes (Lee et al., 2008).


Among the many solventogenic clostridia, Clostridium acetobutylicum ATCC 824 remains the best studied and manipulated strain (Dürre, 2005); it is therefore chosen for butanol production in this paper. Clostridial solvent production is biphasic. The first phase is the acidogenic phase, during which acetate, butyrate, hydrogen, and carbon dioxide are produced as major products; it is observed during the exponential growth phase of the cells (Andersch et al., 1983; Hartmanis and Gatenbeck, 1984). The second phase is the solventogenic phase, during which the acids are re-assimilated and the solvents acetone, butanol and ethanol are produced. High substrate cost, low solvent yield, low solvent tolerance, and culture degeneration are regarded as the main bottlenecks of BuOH fermentation (Dürre, 2011). High substrate cost poses a problem when substrates such as sugar cane are used; our substrate, however, is sugar derived from lignocellulose, so its cost does not pose a major problem. The low solvent yield can be approached by strain manipulation and optimization of the fermentation conditions, but a high solvent yield would still create a problem due to solvent intolerance (Gu et al., 2011). Therefore, researchers have investigated various alternative techniques to recover butanol from the fermentation broth, including adsorption, liquid-liquid extraction, perstraction, pervaporation, reverse osmosis, and gas stripping (Maddox, 1989). Gas stripping is widely applied as a reliable option, since it is simple, does not require expensive equipment, and does not harm the culture (Qureshi and Blaschek, 2001). During gas stripping, gas is sparged through the bio-reactor and butanol is condensed and recovered; the butanol concentration in the broth is thereby kept below the inhibitory level of 5 g/l reported in previous studies. However, gas stripping would be more efficient when the butanol concentration in the broth is higher than 8 g/l, at which point the condensed vapour would have a butanol concentration higher than its solubility and thus yield a highly concentrated organic phase (Xue et al., 2012). The use of concentrated substrate solutions could enhance solvent production if the solvent intolerance problem can be tackled; however, a too high substrate concentration causes a lag phase due to substrate inhibition (Ezeji et al., 2004). For a continuous fermentation system, the inhibitory substrate concentration and the minimum substrate concentration required for cell maintenance are 75 g/l and 20 g/l, respectively. In a similar manner, the substrate concentration in the feed has to be within a certain range to prevent drastic changes in the fermentation broth, as the cells are sensitive to large changes; this range is defined as 250-500 g/l (Ezeji et al., 2005). Here we illustrate how the operation improves when the criteria for the substrate concentration in the feed are fulfilled and the constraints on the substrate and product concentrations in the broth are met.

2. Mathematical model for integrated fermentation and gas stripping unit
2.1. Process description
A schematic diagram of the integrated fermentation process is shown in Figure 1. The operating objective of this continuous bio-reactor with gas stripping is to achieve constant, high-concentration cultivation of C. acetobutylicum ATCC 824 with high butanol production. The feed stream Fin contains growth medium and substrate (Sin), and the outlet stream Fout contains the stripped product, which is only butanol in this study. In the bio-reactor, anaerobic fermentation takes place and the products are formed. Butanol is separated from the fermentation broth via gas stripping: the stripping gas stream Fgas flows through the reactor and takes up the butanol, which is then condensed and collected in the condenser, while the stripping gas is recycled back to the reactor.
2.2. Process model development
For the model we assume an ideally stirred tank reactor with constant volume of the liquid contents. This implies that the intensive properties, like the concentrations of the cell mass and substrate, are uniform throughout the liquid body.


Figure 1: Integrated fermentation unit (feed tank, bio-reactor and product tank; feed stream Fin with substrate Sin; liquid volume Vl with concentrations S, X, P; stripping gas loop Fgas with condenser; product stream Fout).

The Monod model is employed to describe the change in cell concentration over time. The model describes the cell growth rate, and thus the cell concentration, in terms of the specific cell growth rate (μ); the decay term accounts for the natural death of the cells (kd), which we assume to be negligible in this project. Butanol inhibition is also not considered, because the butanol concentration is always kept below the inhibitory level as a result of the in situ removal by gas stripping (Kovárová-Kovar and Egli, 1998). Thus, the change in cell concentration in the whole unit over time [g/l/h] is described by:

\frac{dX}{dt} = \text{cell growth} - \text{cell death} = \mu X - k_d X \qquad (1)

The Monod model relates the cell growth rate to the concentration of a single growth-limiting substrate (μ := f(S)). The two parameters are the maximum specific growth rate (μmax) and the substrate affinity constant (Ks).

\mu := \frac{\mu_{max} S}{K_s + S} \qquad (2)
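As a quick illustration, Eq. (2) can be evaluated directly. The following minimal Python sketch is not part of the original paper; the function name is ours, and the default parameter values are the ones estimated later in this work (μmax = 0.18 1/h, Ks = 1.2 g/l):

def mu_monod(S, mu_max=0.18, Ks=1.2):
    """Monod specific growth rate, Eq. (2): mu = mu_max*S/(Ks + S).

    S is the growth-limiting substrate concentration [g/l]; returns
    the specific growth rate [1/h]."""
    return mu_max * S / (Ks + S)

# Because Ks = 1.2 g/l is small, growth is already near-maximal over the
# broth window of 20-75 g/l: mu(20) ~ 0.17 1/h vs. mu_max = 0.18 1/h.
print(mu_monod(20.0))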

The equation for the change in substrate concentration links the specific substrate consumption rate (qs) and the cell concentration (X) with the inlet substrate concentration (Sin), the inlet flow rate (Fin), and the active liquid volume (Vl) [g/l/h]:

\frac{dS}{dt} = \text{inflow} - \text{consumption} = \frac{S_{in} F_{in}}{V_l} - q_s X \qquad (3)

The specific rate of substrate consumption (qs) is defined in terms of the cell growth yield coefficient on the substrate (Yx/s), the cell growth rate (μ), and the maintenance term (ms), which accounts for the substrate consumption necessary for cell survival and is assumed negligible (Pirt, 1965):

q_s := \frac{\mu}{Y_{x/s}} + m_s \qquad (4)

Y_{x/s} := \frac{dX}{dS} \qquad (5)


Butanol is the only product considered in this paper. Therefore, the model equation describes the change in the butanol concentration over time and links the specific productivity rate (qp) and the cell concentration (X) with the stripping rate (Rs) [g/l/h]:

\frac{dP}{dt} = q_p X - R_s \qquad (6)

The specific productivity rate (qp) is defined in terms of the production yield coefficient (Yp/x), the cell growth rate (μ), and the maintenance factor for the product (mp), which we assumed to be negligible as did Luedeking and Piret (1959):

q_p := Y_{p/x}\,\mu + m_p \qquad (7)

The gas stripping rate of the product butanol (Rs) from the fermentation broth is modelled in terms of the product concentration (P) and the gas stripping rate constant (Ksα), as reported by Truong and Blackburn (1984):

R_s := K_{s\alpha} P \qquad (8)
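For intuition, Eq. (8) implies first-order removal of butanol with time constant 1/Ksα. A quick arithmetic check (ours, not from the paper) using the value estimated later, Ksα = 0.059 1/h:

import math

# With production switched off, Eq. (8) gives P(t) = P0 * exp(-Ksa*t),
# so stripping acts on a time scale of roughly a day.
Ksa = 0.059                    # gas stripping rate constant [1/h]
tau = 1.0 / Ksa                # time constant, ~16.9 h
t_half = math.log(2.0) / Ksa   # half-life, ~11.7 h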

Considering the assumption that the liquid volume in the bio-reactor is constant, the product separated by gas stripping has the flow rate Fout [l/h]:

\frac{dV_l}{dt} = F_{in} - F_{out} = 0 \qquad (9)

F_{out} := \frac{R_s V_l}{\rho_{BuOH}} \qquad (10)
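To make the balance equations concrete, the following Python sketch assembles Eqs. (1)-(8) into a single right-hand side for the state vector [X, S, P]. The authors report their simulations in Matlab; this translation, its function and parameter names, and the dictionary layout are our assumptions:

def rhs(t, y, p):
    """Balance equations (1), (3) and (6) for y = [X, S, P], all in g/l.

    p is a dict of model parameters; kd, ms and mp are kept in the
    expressions but are assumed negligible (zero) in this paper.
    Eqs. (9)-(10) fix Vl constant; the stripped-product flow would be
    Fout = Rs*Vl/rho_BuOH and does not enter these three balances."""
    X, S, P = y
    mu = p["mu_max"] * S / (p["Ks"] + S)            # Eq. (2), Monod kinetics
    qs = mu / p["Yxs"] + p["ms"]                    # Eq. (4)
    qp = p["Ypx"] * mu + p["mp"]                    # Eq. (7)
    Rs = p["Ksa"] * P                               # Eq. (8), gas stripping
    dXdt = mu * X - p["kd"] * X                     # Eq. (1)
    dSdt = p["Sin"] * p["Fin"] / p["Vl"] - qs * X   # Eq. (3)
    dPdt = qp * X - Rs                              # Eq. (6)
    return [dXdt, dSdt, dPdt]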

2.2.1. Experimental estimation of model parameters
Estimation of parameters by fitting experimental data to a model equation is a commonly applied practice (Mayank et al., 2013). Experiments were performed to determine μmax, Ks, Yx/s and Yp/x. Batch cultivation of Clostridium acetobutylicum ATCC 824 was performed on a complex growth medium containing glucose as the growth-limiting substrate. The cell concentration was measured using a spectrophotometer, which provides the optical density; the latter is plotted versus time to obtain the cell growth rate. These data are used to estimate μmax and Ks. The substrate and product concentrations in the fermentation broth were measured using high-pressure liquid chromatography, and the resulting data are used to estimate Yx/s and Yp/x, respectively. All estimated and assigned model parameters and experiment design considerations are shown in the table below.

Model parameter / experiment design consideration             Symbol    Value
Maximum specific growth rate (1/h)                            μmax      0.18
Substrate affinity constant (g/l)                             Ks        1.2
Cell growth yield coefficient (g cells/g substrate)           Yx/s      0.28
Product yield coefficient (g product/g cells)                 Yp/x      0.41
Minimum substrate concentration in fermentation broth (g/l)   Smin      20
Maximum substrate concentration in fermentation broth (g/l)   Smax      75
Minimum substrate concentration in feed (g/l)                 Sin,min   250
Maximum substrate concentration in feed (g/l)                 Sin,max   500
Maximum butanol concentration in fermentation broth (g/l)     Pmax      5, 8, 10
Gas stripping rate constant (1/h)                             Ksα       0.059
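A hedged usage sketch building on the rhs() function above: the parameter entries mirror the table, while Sin, Fin, Vl, the initial state and the 200 h horizon are illustrative assumptions of ours, not values given in the paper.

from scipy.integrate import solve_ivp  # requires the rhs() sketch above

params = {
    "mu_max": 0.18,  # maximum specific growth rate [1/h]
    "Ks": 1.2,       # substrate affinity constant [g/l]
    "Yxs": 0.28,     # cell growth yield coefficient [g cells/g substrate]
    "Ypx": 0.41,     # product yield coefficient [g product/g cells]
    "Ksa": 0.059,    # gas stripping rate constant [1/h]
    "kd": 0.0, "ms": 0.0, "mp": 0.0,  # assumed negligible in the paper
    "Sin": 300.0,    # feed substrate concentration [g/l], inside 250-500
    "Fin": 0.05,     # feed flow rate [l/h] (illustrative)
    "Vl": 1.0,       # liquid volume [l] (illustrative)
}
# Integrate from an inoculated, substrate-rich, butanol-free initial state.
sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 50.0, 0.0], args=(params,))
X_end, S_end, P_end = sol.y[:, -1]  # state as the unit approaches steady state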


3. Results and Discussion
A simple fermentation model is constructed for a continuous bio-reactor equipped with gas stripping using Monod kinetics. The model is used to determine the optimal substrate feed concentration and the optimal feed flow rate whilst considering substrate and butanol inhibition, by simulating the process at all combinations of the feed and inhibition constraints and meeting the composition constraints imposed by the gas stripping. The inhibitory values for the butanol concentration in the fermentation broth are taken from the literature as 5 g/l, 8 g/l and 10 g/l, and the inhibitory substrate concentration and the minimum substrate concentration required for cell maintenance as 75 g/l and 20 g/l, respectively (Ezeji et al., 2005; Xue et al., 2012). The substrate concentration in the feed has to be within a certain range to prevent drastic changes in the fermentation broth because the cells are sensitive to large changes; Ezeji et al. (2005) suggest a range of 250-500 g/l. All the design considerations (Smin, Smax, Sin,min, Sin,max, Pmax) are implemented to compute the optimal Sin and Fin so that the fermentation process can operate continuously without inhibiting the growth of the cells. The model is simulated in Matlab at steady state.
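One plausible Python reconstruction of the steady-state evaluation behind Figure 2 is sketched below. This is our reading of the procedure, not the authors' code: the balance of Eq. (3) is closed with a representative cell concentration X, and both that value and the grid are assumptions.

import numpy as np  # reuses the params dict defined above

def feed_load_for_S(S, X, p):
    """Eq. (3) at steady state (dS/dt = 0): Sin*Fin = Vl*qs(S)*X [g/h],
    for a given broth substrate concentration S and a representative
    cell concentration X."""
    mu = p["mu_max"] * S / (p["Ks"] + S)  # Eq. (2)
    qs = mu / p["Yxs"]                    # Eq. (4) with ms = 0
    return p["Vl"] * qs * X

S_grid = np.linspace(20.0, 75.0, 200)        # broth window from the table
load = feed_load_for_S(S_grid, 2.0, params)  # X = 2 g/l is illustrative
# Since mu(S) saturates, S rises ever more steeply with Sin*Fin, consistent
# with the sensitivity at high feed loads discussed around Figure 2.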

Figure 2: Substrate concentration in broth (S) vs. substrate flow rate in feed (Sin·Fin).

Figure 3: Substrate concentration in feed (Sin) vs. flow rate of feed stream (Fin).

Figure 2 is plotted for the pre-defined range of S ∈ [20, 75] g/l and the corresponding Sin·Fin values, which are obtained by evaluating Equation (3) at steady state. Figure 2 shows clearly that S increases as Sin·Fin increases. The relation is not linear: the rate of change accelerates at higher values of Sin·Fin, which indicates that the system is more sensitive to changes at higher values of Sin·Fin. This trend would result in dramatic changes in the fermentation broth, which is not desired as it would strongly affect the cells. The data are used to determine the relation between Sin and Fin. As can be seen in Figure 3, different combinations of Sin and Fin are determined for the three different Pmax values. Each pair of Sin and Fin indicates an operating point which fulfils the criteria. A total of 14,056 different combinations are obtained, of which 941 points fulfil the criteria for Pmax = 5 g/l, 2,958 points fulfil the criteria for Pmax = 8 g/l, and 10,157 points fulfil the criteria for Pmax = 10 g/l.

Figure 4: Substrate concentration in broth (S) vs. cell concentration in broth (X).

A combination of Sin and Fin is chosen for each Pmax value. The change of the cell concentration (X) with the substrate concentration in the broth (S) is illustrated in Figure 4. X increases with decreasing S, which implies that the cells are growing by consuming substrate. For Pmax = 5 g/l, the response of X to changing S is quicker compared to the other two cases. The trends of the curves for Pmax = 8 g/l and Pmax = 10 g/l closely match.
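The constraint screening over the (Sin, Fin) domain can be sketched as follows, reusing the rhs() and params definitions above. The grid resolution, the Fin search range and the 400 h horizon are our assumptions, so the counts will not reproduce the 14,056 cases reported here exactly.

import itertools
import numpy as np
from scipy.integrate import solve_ivp

Sin_grid = np.linspace(250.0, 500.0, 40)  # feed window from the table
Fin_grid = np.linspace(0.005, 0.2, 40)    # feed flow rate [l/h], assumed range

def feasible(Sin, Fin, Pmax, p):
    """Run one (Sin, Fin) case to a long horizon and check the broth
    constraints 20 <= S <= 75 g/l and P <= Pmax at the final state."""
    case = dict(p, Sin=Sin, Fin=Fin)
    sol = solve_ivp(rhs, (0.0, 400.0), [0.1, 50.0, 0.0], args=(case,))
    X, S, P = sol.y[:, -1]
    return 20.0 <= S <= 75.0 and P <= Pmax

# Enumerate the grid and keep the operating points satisfying all criteria.
operating_points = [(Sin, Fin)
                    for Sin, Fin in itertools.product(Sin_grid, Fin_grid)
                    if feasible(Sin, Fin, 5.0, params)]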

4. Conclusions
A simple reactor model combined with Monod kinetics for the anaerobic fermentation of sugar to butanol using C. acetobutylicum ATCC 824 has been used to explore an optimal operating range with respect to the feed concentration and feed rate, as well as the product concentration constraint, maintaining suitable conditions for the cells whilst enabling continuous stripping of the butanol from the reactor. The full domain of the operating ranges was explored, which resulted in more than 14,000 simulation experiments. The results provide a clear insight into the feasible combinations of the operating conditions. Cells can still be productive at somewhat toxic levels of butanol; however, this decreases the overall productivity and the lifetime of the bio-reactor. Therefore, the inhibitory butanol concentration Pmax is chosen as 5 g/l in this study so that no inhibition occurs. The optimal operating range obtained for Pmax = 5 g/l will be employed in the experimental bio-reactor and gas stripping unit in the next phase of the project.

References
W. Andersch, H. Bahl, G. Gottschalk, 1983. Level of enzymes involved in acetate, butyrate, acetone and butanol formation by Clostridium acetobutylicum. European Journal of Applied Microbiology and Biotechnology 18 (6), 327–332.
P. Dürre, 2005. Handbook on Clostridia. CRC Press.
P. Dürre, 2011. Fermentative production of butanol - the academic perspective. Current Opinion in Biotechnology 22 (3), 331–336.
T. Ezeji, N. Qureshi, H. Blaschek, 2004. Acetone butanol ethanol (ABE) production from concentrated substrate: reduction in substrate inhibition by fed-batch technique and product inhibition by gas stripping. Applied Microbiology and Biotechnology 63 (6), 653–658.
T. Ezeji, N. Qureshi, H. Blaschek, Apr. 28, 2005. Process for continuous solvent production. US Patent App. 10/945,551. URL https://www.google.com/patents/US20050089979
Y. Gu, Y. Jiang, H. Wu, X. Liu, Z. Li, J. Li, H. Xiao, Z. Shen, H. Dong, Y. Yang, et al., 2011. Economical challenges to microbial producers of butanol: feedstock, butanol ratio and titer. Biotechnology Journal 6 (11), 1348–1357.
M. G. Hartmanis, S. Gatenbeck, 1984. Intermediary metabolism in Clostridium acetobutylicum: levels of enzymes involved in the formation of acetate and butyrate. Applied and Environmental Microbiology 47 (6), 1277–1283.
K. Kovárová-Kovar, T. Egli, 1998. Growth kinetics of suspended microbial cells: from single-substrate-controlled growth to mixed-substrate kinetics. Microbiology and Molecular Biology Reviews 62 (3), 646–666.
J.-P. Lange, R. Price, P. M. Ayoub, J. Louis, L. Petrus, L. Clarke, H. Gosselink, 2010. Valeric biofuels: a platform of cellulosic transportation fuels. Angewandte Chemie International Edition 49 (26), 4479–4483.
S. Y. Lee, J. H. Park, S. H. Jang, L. K. Nielsen, J. Kim, K. S. Jung, 2008. Fermentative butanol production by clostridia. Biotechnology and Bioengineering 101 (2), 209–228.
R. Luedeking, E. L. Piret, 1959. A kinetic study of the lactic acid fermentation. Batch process at controlled pH. Journal of Biochemical and Microbiological Technology and Engineering 1 (4), 393–412.
I. S. Maddox, 1989. The acetone-butanol-ethanol fermentation: recent progress in technology. Biotechnology and Genetic Engineering Reviews 7 (1), 189–220.
R. Mayank, A. Ranjan, V. S. Moholkar, 2013. Mathematical models of ABE fermentation: review and analysis. Critical Reviews in Biotechnology 33 (4), 419–447.
S. Pirt, 1965. The maintenance energy of bacteria in growing cultures. Proceedings of the Royal Society of London B: Biological Sciences 163 (991), 224–231.
N. Qureshi, H. Blaschek, 2001. Recovery of butanol from fermentation broth by gas stripping. Renewable Energy 22 (4), 557–564.
K. N. Truong, J. W. Blackburn, 1984. The stripping of organic chemicals in biological treatment processes. Environmental Progress 3 (3), 143–152.
C. Xue, J. Zhao, C. Lu, S.-T. Yang, F. Bai, I. Tang, et al., 2012. High-titer n-butanol production by Clostridium acetobutylicum JB200 in fed-batch fermentation with intermittent gas stripping. Biotechnology and Bioengineering 109 (11), 2746–2756.