Advances in Production Management Systems. Towards Smart Production Management Systems: IFIP WG 5.7 International Conference, APMS 2019, Austin, TX, USA, September 1–5, 2019, Proceedings, Part II [1st ed. 2019] 978-3-030-29995-8, 978-3-030-29996-5

The two-volume set IFIP AICT 566 and 567 constitutes the refereed proceedings of the IFIP WG 5.7 International Conference on Advances in Production Management Systems, APMS 2019, held in Austin, TX, USA, in September 2019.


Language: English. Pages: XXVII, 645 [650]. Year: 2019.



Table of contents:
Front Matter ....Pages i-xxvii
The APMS Conference & IFIP WG5.7 in the 21st Century: A Bibliometric Study (Makenzie Keepers, David Romero, Thorsten Wuest)....Pages 1-13
Front Matter ....Pages 15-15
Price Decision Making in a Centralized/Decentralized Solid Waste Disposal Supply Chain with One Contractor and Two Disposal Facilities (Iman Ghalehkhondabi, Reza Maihami)....Pages 17-26
Understanding the Impact of User Behaviours and Scheduling Parameters on the Effectiveness of a Terminal Appointment System Using Discrete Event Simulation (Mihai Neagoe, Hans-Henrik Hvolby, Mohammad Sadegh Taskhiri, Paul Turner)....Pages 27-34
Full-Scale Discrete Event Simulation of an Automated Modular Conveyor System for Warehouse Logistics (Alireza Ashrafian, Ole-Gunnar Pettersen, Kristian N. Kuntze, Jacob Franke, Erlend Alfnes, Knut F. Henriksen et al.)....Pages 35-42
Handling Uncertainties in Production Network Design (Günther Schuh, Jan-Philipp Prote, Andreas Gützlaff, Sebastian Henk)....Pages 43-50
Supply Chain Scenarios for Logistics Service Providers in the Context of Additive Spare Parts Manufacturing (Daniel Pause, Svenja Marek)....Pages 51-58
Supply Chain Optimization in the Tire Industry: State-of-the-Art (Kartika Nur Alfina, R. M. Chandima Ratnayake)....Pages 59-67
Collaborative Exchange of Cargo Truck Loads: Approaches to Reducing Empty Trucks in Logistics Chains (Hans-Henrik Hvolby, Kenn Steger-Jensen, Mihai Neagoe, Sven Vestergaard, Paul Turner)....Pages 68-74
An Integrated Approach for Supply Chain Tactical Planning and Cash Flow Valuation (Sabah Belil, Asma Rakiz, Kawtar Retmi)....Pages 75-83
UAV Set Covering Problem for Emergency Network (Youngsoo Park, Ilkyeong Moon)....Pages 84-90
A Stochastic Optimization Model for Commodity Rebalancing Under Traffic Congestion in Disaster Response (Xuehong Gao)....Pages 91-99
Optimal Supplier Selection in a Supply Chain with Predetermined Loading/Unloading Time Windows and Logistics Truck Share (Alireza Fallahtafti, Iman Ghalehkhondabi, Gary R. Weckman)....Pages 100-108
Scheduling Auction: A New Manufacturing Business Model for Balancing Customization and Quick Delivery (Shota Suginouchi, Hajime Mizuyama)....Pages 109-117
Passenger Transport Disutilities in the US: An Analysis Since 1990s (Helcio Raymundo, João Gilberto M. dos Reis)....Pages 118-124
Front Matter ....Pages 125-125
Configuring the Future Norwegian Macroalgae Industry Using Life Cycle Analysis (Jon Halfdanarson, Matthias Koesling, Nina Pereira Kvadsheim, Jan Emblemsvåg, Céline Rebours)....Pages 127-134
Operationalizing Industry 4.0: Understanding Barriers of Industry 4.0 and Circular Economy (Lise Lillebrygfjeld Halse, Bjørn Jæger)....Pages 135-142
Business Model Innovation for Eco-Efficiency: An Empirical Study (Yan Li, Steve Evans)....Pages 143-150
Atmospheric Water Generation (AWG): Performance Model and Economic Analysis (Faraz Moghimi, Hamed Ghoddusi, Bahram Asiabanpour, Mahdi Behroozikhah)....Pages 151-158
Life Cycle Assessment for Ordinary and Frost-Resistant Concrete (Ramin Sabbagh, Paria Esmatloo)....Pages 159-167
Front Matter ....Pages 169-169
Simulation Based Optimization of Lot Sizes for Opposing Logistic Objectives (Janine Tatjana Maier, Thomas Voß, Jens Heger, Matthias Schmidt)....Pages 171-179
A Proposal of Order Planning Method with Consideration of Multiple Organizations in Manufacturing System (Ken Yamashita, Toshiya Kaihara, Nobutada Fujii, Daisuke Kokuryo, Toyohiro Umeda, Rihito Izutsu)....Pages 180-188
Reduction of Computational Load in Robust Facility Layout Planning Considering Temporal Production Efficiency (Eiji Morinaga, Komei Iwasaki, Hidefumi Wakamatsu, Eiji Arai)....Pages 189-195
Decision-Making Process for Buffer Dimensioning in Manufacturing (Lisa Hedvall, Joakim Wikner)....Pages 196-203
Postponement Revisited – A Typology for Displacement (Fredrik Tiedemann, Joakim Wikner)....Pages 204-211
Efficient Heuristic Solution Methodologies for Scheduling Batch Processor with Incompatible Job-Families, Non-identical Job-Sizes and Non-identical Job-Dimensions (M. Mathirajan, M. Ramasubramanian)....Pages 212-222
Optimizing Workflow in Cell-Based Slaughtering and Cutting of Pigs (Johan Oppen)....Pages 223-230
Increasing the Regulability of Production Planning and Control Systems (Günther Schuh, Philipp Wetzchewald)....Pages 231-239
Possibilities and Benefits of Using Material Flow Information to Improve the Internal Hospital Supply Chain (Giuseppe Ismael Fragapane, Aili Biriita Bertnum, Jan Ola Strandhagen)....Pages 240-247
Medical Supplies to the Point-Of-Use in Hospitals (Giuseppe Ismael Fragapane, Aili Biriita Bertnum, Hans-Henrik Hvolby, Jan Ola Strandhagen)....Pages 248-255
Combining the Inventory Control Policy with Pricing and Advertisement Decisions for a Non-instantaneous Deteriorating Product (Reza Maihami, Iman Ghalehkhondabi)....Pages 256-264
Assessing Fit of Capacity Planning Methods for Delivery Date Setting: An ETO Case Study (Swapnil Bhalla, Erlend Alfnes, Hans-Henrik Hvolby)....Pages 265-273
Front Matter ....Pages 275-275
From a Theory of Production to Data-Based Business Models (Günther Schuh, Malte Brettel, Jan-Philipp Prote, Andreas Gützlaff, Frederick Sauermann, Katharina Thomas et al.)....Pages 277-284
Real-Time Data Sharing in Production Logistics: Exploring Use Cases by an Industrial Study (Masoud Zafarzadeh, Jannicke Baalsrud Hauge, Magnus Wiktorsson, Ida Hedman, Jasmin Bahtijarevic)....Pages 285-293
Scenarios for the Development and Use of Data Products Within the Value Chain of the Industrial Food Production (Volker Stich, Lennard Holst, Philipp Jussen, Dennis Schiemann)....Pages 294-302
Bidirectional Data Management in Factory Planning and Operation (Uwe Dombrowski, Jonas Wullbrandt, Alexander Karl)....Pages 303-311
Open Access Digital Tools’ Application Potential in Technological Process Planning: SMMEs Perspective (Roman Wdowik, R. M. Chandima Ratnayake)....Pages 312-319
Front Matter ....Pages 321-321
Implementation of Industry 4.0 in Germany, Brazil and Portugal: Barriers and Benefits (Walter C. Satyro, Mauro de Mesquita Spinola, Jose B. Sacomano, Márcia Terra da Silva, Rodrigo Franco Gonçalves, Marcelo Schneck de Paula Pessoa et al.)....Pages 323-330
Planning Guideline and Maturity Model for Intra-logistics 4.0 in SME (Knut Krowas, Ralph Riedel)....Pages 331-338
Self-assessment of Industry 4.0 Technologies in Intralogistics for SME’s (Martina Schiffer, Hans-Hermann Wiendahl, Benedikt Saretz)....Pages 339-346
Industry 4.0 Visions and Reality- Status in Norway (Hans Torvatn, Pål Kamsvåg, Birgit Kløve)....Pages 347-354
Exploring the Impact of Industry 4.0 Concepts on Energy and Environmental Management Systems: Evidence from Serbian Manufacturing Companies (Milovan Medojevic, Nenad Medic, Ugljesa Marjanovic, Bojan Lalic, Vidosav Majstorovic)....Pages 355-362
Front Matter ....Pages 363-363
Virtualization of Sea Trials for Smart Prototype Testing (Moritz von Stietencron, Shantanoo Desai, Klaus-Dieter Thoben)....Pages 365-371
IoH Technologies into Indoor Manufacturing Sites (Takeshi Kurata, Takashi Maehata, Hidehiko Hashimoto, Naohiro Tada, Ryosuke Ichikari, Hideki Aso et al.)....Pages 372-380
3D Visualization System of Manufacturing Big Data and Simulation Results of Production for an Automotive Parts Supplier (Dahye Hwang, Sang Do Noh)....Pages 381-386
Front Matter ....Pages 387-387
Blockchain as an Internet of Services Application for an Advanced Manufacturing Environment (Benedito Cristiano A. Petroni, Jacqueline Zonichenn Reis, Rodrigo Franco Gonçalves)....Pages 389-396
Development of a Modeling Architecture Incorporating the Industry 4.0 View for a Company in the Gas Sector (Nikolaos A. Panayiotou, Konstantinos E. Stergiou, Vasileios P. Stavrou)....Pages 397-404
Process for Enhancing the Production System Robustness with Sensor Data – a Food Manufacturer Case Study (Sofie Bech, Thomas Ditlev Brunoe, Kjeld Nielsen)....Pages 405-412
In-Process Noise Detection System for Product Inspection by Using Acoustic Data (Woonsang Baek, Duck Young Kim)....Pages 413-420
Front Matter ....Pages 421-421
Closed-Loop Manufacturing for Aerospace Industry: An Integrated PLM-MOM Solution to Support the Wing Box Assembly Process (Melissa Demartini, Federico Galluccio, Paolo Mattis, Islam Abusohyon, Raffaello Lepratti, Flavio Tonelli)....Pages 423-430
Modeling Manual Assembly System to Derive Best Practice from Actual Data (Susann Kärcher, David Görzig, Thomas Bauernhansl)....Pages 431-438
Application of a Controlled Assembly Vocabulary: Modeling a Home Appliance Transfer Line (Chase Wentzky, Chelsea Spence, Apurva Patel, Nicole Zero, Adarsh Jeyes, Alexis Fiore et al.)....Pages 439-446
What Product Developers Really Need to Know - Capturing the Major Design Elements (Bjørnar Henriksen, Andreas Landmark, Carl Christian Røstad)....Pages 447-454
Front Matter ....Pages 455-455
Design-for-Cost – An Approach for Distributed Manufacturing Cost Estimation (Minchul Lee, Boonserm (Serm) Kulvatunyou)....Pages 457-465
Computer-Aided Selection of Participatory Design Methods (Michael Bojko, Ralph Riedel, Mandy Tawalbeh)....Pages 466-474
Knowledge Management Environment for Collaborative Design in Product Development (Shuai Zhang)....Pages 475-480
A Multi-criteria Approach to Collaborative Product-Service Systems Design (Martha Orellano, Khaled Medini, Christine Lambey-Checchin, Maria-Franca Norese, Gilles Neubert)....Pages 481-489
Front Matter ....Pages 491-491
MES Implementation: Critical Success Factors and Organizational Readiness Model (Daniela Invernizzi, Paolo Gaiardelli, Emrah Arica, Daryl Powell)....Pages 493-501
Identifying the Role of Manufacturing Execution Systems in the IS Landscape: A Convergence of Multiple Types of Application Functionalities (S. Waschull, J. C. Wortmann, J. A. C. Bokhorst)....Pages 502-510
A Generic Approach to Model and Analyze Industrial Search Processes (Philipp Steenwerth, Hermann Lödding)....Pages 511-519
A Methodology to Assess the Skills for an Industry 4.0 Factory (Federica Acerbi, Silvia Assiani, Marco Taisch)....Pages 520-527
Front Matter ....Pages 529-529
A Theoretical Approach for Detecting and Anticipating Collaboration Opportunities (Ibrahim Koura, Frederick Benaben, Juanqiong Gou)....Pages 531-538
The Systematic Integration of Stakeholders into Factory Planning, Construction, and Factory Operations to Increase Acceptance and Prevent Disruptions (Uwe Dombrowski, Alexander Karl, Colette Vogeler, Nils Bandelow)....Pages 539-546
Service Engineering Models: History and Present-Day Requirements (Roman Senderek, Jan Kuntz, Volker Stich, Jana Frank)....Pages 547-554
Design and Simulation of an Integrated Model for Organisational Sustainability Applying the Viable System Model and System Dynamics (Sergio Gallego-García, Manuel García-García)....Pages 555-563
Front Matter ....Pages 565-565
Enabling Energy Efficiency in Manufacturing Environments Through Deep Learning Approaches: Lessons Learned (M. T. Alvela Nieto, E. G. Nabati, D. Bode, M. A. Redecker, A. Decker, K.-D. Thoben)....Pages 567-574
Retail Promotion Forecasting: A Comparison of Modern Approaches (Casper Solheim Bojer, Iskra Dukovska-Popovska, Flemming Max Møller Christensen, Kenn Steger-Jensen)....Pages 575-582
A Data Mining Approach to Support Capacity Planning for the Regeneration of Complex Capital Goods (Melissa Seitz, Maren Sobotta, Peter Nyhuis)....Pages 583-590
Developing Smart Supply Chain Management Systems Using Google Trend’s Search Data: A Case Study (Ramin Sabbagh, Dragan Djurdjanovic)....Pages 591-599
Front Matter ....Pages 601-601
Managing Knowledge in Manufacturing Industry - University Innovation Projects (Irina-Emily Hansen, Ola Jon Mork, Torgeir Welo)....Pages 603-610
Technology Companies in Judicial Reorganization (Ricardo Zandonadi Schmidt, Márcia Terra da Silva)....Pages 611-616
Multiscale Modeling of Social Systems: Scale Bridging via Decision Making (Nursultan Nikhanbayev, Toshiya Kaihara, Nobutada Fujii, Daisuke Kokuryo)....Pages 617-624
e-Health: A Framework Proposal for Interoperability and Health Data Sharing. A Brazilian Case (Neusa Andrade, Pedro Luiz de Oliveira Costa Neto, Jair Gustavo de Mello Torres, Irapuan Glória Júnior, Cláudio Guimarães Scheidt, Welleson Gazel)....Pages 625-630
Managing Risk and Opportunities in Complex Projects (Asbjørn Rolstadås, Agnar Johansen, Yvonne C. Bjerke, Tobias O. Malvik)....Pages 631-639
Back Matter ....Pages 641-645


IFIP AICT 567

Farhad Ameri Kathryn E. Stecke Gregor von Cieminski Dimitris Kiritsis (Eds.)

Advances in Production Management Systems Towards Smart Production Management Systems

IFIP WG 5.7 International Conference, APMS 2019 Austin, TX, USA, September 1–5, 2019 Proceedings, Part II


IFIP Advances in Information and Communication Technology

Editor-in-Chief
Kai Rannenberg, Goethe University Frankfurt, Germany

Editorial Board Members
TC 1 – Foundations of Computer Science: Jacques Sakarovitch, Télécom ParisTech, France
TC 2 – Software: Theory and Practice: Michael Goedicke, University of Duisburg-Essen, Germany
TC 3 – Education: Arthur Tatnall, Victoria University, Melbourne, Australia
TC 5 – Information Technology Applications: Erich J. Neuhold, University of Vienna, Austria
TC 6 – Communication Systems: Aiko Pras, University of Twente, Enschede, The Netherlands
TC 7 – System Modeling and Optimization: Fredi Tröltzsch, TU Berlin, Germany
TC 8 – Information Systems: Jan Pries-Heje, Roskilde University, Denmark
TC 9 – ICT and Society: David Kreps, University of Salford, Greater Manchester, UK
TC 10 – Computer Systems Technology: Ricardo Reis, Federal University of Rio Grande do Sul, Porto Alegre, Brazil
TC 11 – Security and Privacy Protection in Information Processing Systems: Steven Furnell, Plymouth University, UK
TC 12 – Artificial Intelligence: Ulrich Furbach, University of Koblenz-Landau, Germany
TC 13 – Human-Computer Interaction: Marco Winckler, University of Nice Sophia Antipolis, France
TC 14 – Entertainment Computing: Rainer Malaka, University of Bremen, Germany

IFIP – The International Federation for Information Processing

IFIP was founded in 1960 under the auspices of UNESCO, following the first World Computer Congress held in Paris the previous year. A federation for societies working in information processing, IFIP’s aim is two-fold: to support information processing in the countries of its members and to encourage technology transfer to developing nations. As its mission statement clearly states:

IFIP is the global non-profit federation of societies of ICT professionals that aims at achieving a worldwide professional and socially responsible development and application of information and communication technologies.

IFIP is a non-profit-making organization, run almost solely by 2500 volunteers. It operates through a number of technical committees and working groups, which organize events and publications. IFIP’s events range from large international open conferences to working conferences and local seminars.

The flagship event is the IFIP World Computer Congress, at which both invited and contributed papers are presented. Contributed papers are rigorously refereed and the rejection rate is high. As with the Congress, participation in the open conferences is open to all and papers may be invited or submitted. Again, submitted papers are stringently refereed.

The working conferences are structured differently. They are usually run by a working group and attendance is generally smaller and occasionally by invitation only. Their purpose is to create an atmosphere conducive to innovation and development. Refereeing is also rigorous and papers are subjected to extensive group discussion.

Publications arising from IFIP events vary. The papers presented at the IFIP World Computer Congress and at open conferences are published as conference proceedings, while the results of the working conferences are often published as collections of selected and edited papers.

IFIP distinguishes three types of institutional membership: Country Representative Members, Members at Large, and Associate Members. The type of organization that can apply for membership is a wide variety and includes national or international societies of individual computer scientists/ICT professionals, associations or federations of such societies, government institutions/government related organizations, national or international research institutes or consortia, universities, academies of sciences, companies, national or international associations or federations of companies.

More information about this series at http://www.springer.com/series/6102


Editors

Farhad Ameri, Texas State University, San Marcos, TX, USA
Kathryn E. Stecke, The University of Texas at Dallas, Richardson, TX, USA
Gregor von Cieminski, ZF Friedrichshafen AG, Friedrichshafen, Germany
Dimitris Kiritsis, EPFL (SCI-STI-DK), Lausanne, Switzerland

ISSN 1868-4238    ISSN 1868-422X (electronic)
IFIP Advances in Information and Communication Technology
ISBN 978-3-030-29995-8    ISBN 978-3-030-29996-5 (eBook)
https://doi.org/10.1007/978-3-030-29996-5

© IFIP International Federation for Information Processing 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The revolution in information and communication technology (ICT) is rapidly transforming our world. The manufacturing industry is no exception: it has already undergone profound changes driven by advances in information technology. The digitization of production systems has been the most influential trend in the manufacturing industry over the past few years. The concept of the Cyber-Physical Production System (CPPS) is being adopted increasingly across sectors of the manufacturing industry to promote further intelligence, connectivity, and responsiveness throughout the product value chain. There are several enablers of this vision of digitized, cyber-enabled, sustainable, and smart production systems, including big data analytics, artificial intelligence, virtual and augmented reality, digital twins, and Human-Machine Interaction (HMI). These are the key components of the fourth industrial revolution and the main research thrusts of the smart manufacturing and Industry 4.0 research community. The core challenge is how to improve the effectiveness and efficiency of production systems while, at the same time, enhancing their sustainability and intelligence. Redefining the role of humans in the new generation of automated production systems is another major challenge faced by researchers and practitioners. APMS 2019 in Austin, Texas, brought together leading international experts from academia, industry, and government in the area of production systems to discuss globally pressing issues in smart manufacturing, operations management, supply chain management, and Industry 4.0. A large international panel of experts reviewed all the papers and selected the best ones to be included in these conference proceedings. The topics of interest at APMS 2019 included Smart Supply Networks, Knowledge-Based Product Development, Smart Factory and IIoT, Data-Driven Production Management, Lean Production, and Sustainable Production Management.
The proceedings are organized in two parts:

– Production Management for the Factory of the Future (Volume 1)
– Towards Smart Production Management Systems (Volume 2)

The conference was supported by the International Federation for Information Processing (IFIP) and was organized by the IFIP Working Group 5.7 on Advances in Production Management Systems and Texas State University. We would like to thank all contributors for their high-quality work and for their willingness to share their innovative ideas and findings. We are also indebted to the members of the IFIP Working Group 5.7, the Program Committee members, and the Scientific Committee members for their support in the review of the papers. Finally, we appreciate the generous support from our sponsors, namely, Texas State University - College of Science and Engineering, the University of Texas at Dallas - Naveen Jindal School of Management, AlphaNodus, and PennState Service Enterprise Engineering.

September 2019

Farhad Ameri Kathryn Stecke Gregor von Cieminski Dimitris Kiritsis

Organization

Conference Chair
Farhad Ameri, Texas State University, USA

Conference Co-chair
Dimitris Kiritsis, École polytechnique fédérale de Lausanne, Switzerland

Program Chair
Kathryn Stecke, University of Texas at Dallas, USA

Program Co-chair
Gregor von Cieminski, ZF Friedrichshafen AG, Germany

Program Committee

Albert Jones, National Institute of Standards and Technology (NIST), USA
Boonserm Kulvatunyou, National Institute of Standards and Technology (NIST), USA
Vital Prabhu, The Pennsylvania State University, USA
Kathryn Stecke (Committee Chair), University of Texas at Dallas, USA
Thorsten Wuest, West Virginia University, USA

Doctoral Workshop Co-chairs

Boonserm Kulvatunyou, National Institute of Standards and Technology (NIST), USA
Gregor von Cieminski, ZF Friedrichshafen AG, Germany

International Advisory Committee

Dragan Djurdjanovic, University of Texas at Austin, USA
Gül Kremer, Iowa State University, USA
Ilkyeong Moon, Seoul National University, South Korea
David Romero, Tecnologico de Monterrey University, Mexico


Scientific Committee

Erry Yulian Triblas Adesta, International Islamic University Malaysia, Malaysia
Erlend Alfnes, Norwegian University of Science and Technology, Norway
Thecle Alix, IUT Bordeaux Montesquieu, France
Susanne Altendorfer-Kaiser, Montanuniversitaet Leoben, Austria
Farhad Ameri, Texas State University, USA
Bjørn Andersen, Norwegian University of Science and Technology, Norway
Eiji Arai, Osaka University, Japan
Frédérique Biennier, INSA Lyon, France
Umit S. Bititci, Heriot Watt University, UK
Adriana Giret Boggino, Universidad Politécnica de Valencia, Spain
Magali Bosch-Mauchand, Université de Technologie de Compiègne, France
Abdelaziz Bouras, Qatar University, Qatar
Jim Browne, University College Dublin, Ireland
Luis Camarinha-Matos, Universidade Nova de Lisboa, Portugal
Sergio Cavalieri, University of Bergamo, Italy
Stephen Childe, Plymouth University, UK
Hyunbo Cho, Pohang University of Science & Technology, South Korea
Gregor von Cieminski, ZF Friedrichshafen AG, Hungary
Catherine Da Cunha, Ecole Centrale de Nantes, France
Frédéric Demoly, Université de Technologie de Belfort-Montbéliard, France
Shengchun Deng, Harbin Institute of Technology, China
Melanie Despeisse, Chalmers University of Technology, Sweden
Alexandre Dolgui, IMT Atlantique Nantes, France
Slavko Dolinšek, University of Ljubljana, Slovenia
Sang Do Noh, Sungkyunkwan University, South Korea
Heidi Carin Dreyer, Norwegian University of Science and Technology, Norway
Eero Eloranta, Helsinki University of Technology, Finland
Soumaya El Kadiri, Texelia AG, Switzerland
Christos Emmanouilidis, Cranfield University, UK
Åsa Fasth-Berglund, Chalmers University, Sweden
Jan Frick, University of Stavanger, Norway
Paolo Gaiardelli, University of Bergamo, Italy
Bernard Grabot, INP-ENIT (National Engineering School of Tarbes), France
Samuel Gomes, Belfort-Montbéliard University of Technology, France
Gerhard Gudergan, FIR Research Institute for Operations Management, Germany
Thomas R. Gulledge Jr., George Mason University, USA
Hironori Hibino, Tokyo University of Science, Japan

Hans-Henrik Hvolby, Aalborg University, Denmark
Dmitry Ivanov, Berlin School of Economics and Law, Germany
Harinder Jagdev, National University of Ireland at Galway, Ireland
John Johansen, Aalborg University, Denmark
Toshiya Kaihara, Kobe University, Japan
Dimitris Kiritsis, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Tomasz Koch, Wroclaw University of Science and Technology, Poland
Pisut Koomsap, Asian Institute of Technology, Thailand
Gül Kremer, Iowa State University, USA
Boonserm Kulvatunyou, National Institute of Standards and Technology, USA
Thomas R. Kurfess, Georgia Institute of Technology, USA
Andrew Kusiak, University of Iowa, USA
Lenka Landryova, Technical University of Ostrava, Czech Republic
Jan-Peter Lechner, First Global Liaison, Germany
Ming K. Lim, Chongqing University, China
Hermann Lödding, Hamburg University of Technology, Germany
Marco Macchi, Politecnico di Milano, Italy
Vidosav D. Majstorovich, University of Belgrade, Serbia
Adolfo Crespo Marquez, University of Seville, Spain
Gökan May, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Jörn Mehnen, Strathclyde University Glasgow, UK
Hajime Mizuyama, Aoyama Gakuin University, Japan
Ilkyeong Moon, Seoul National University, South Korea
Dimitris Mourtzis, University of Patras, Greece
Irenilza de Alencar Naas, UNIP Paulista University, Brazil
Masaru Nakano, Keio University, Japan
Torbjörn Netland, ETH Zürich, Switzerland
Gilles Neubert, EMLYON Business School Saint-Etienne, France
Manuel Fradinho Duarte de Oliveira, SINTEF, Norway
Jinwoo Park, Seoul National University, South Korea
François Pérès, Université de Toulouse, France
Fredrik Persson, Linköping Institute of Technology, Sweden
Selwyn Piramuthu, University of Florida, USA
Alberto Portioli-Staudacher, Politecnico di Milano, Italy
Vittaldas V. Prabhu, Pennsylvania State University, USA
Ricardo José Rabelo, Federal University of Santa Catarina, Brazil
Mario Rapaccini, Florence University, Italy
Joao Gilberto Mendes dos Reis, UNIP Paulista University, Brazil
Ralph Riedel, TU Chemnitz, Germany
Asbjörn Rolstadås, Norwegian University of Science and Technology, Norway
David Romero, Tecnologico de Monterrey University, Mexico

Christoph Roser, Karlsruhe University of Applied Sciences, Germany
Martin Rudberg, Linköping University, Sweden
Thomas E. Ruppli, University of Basel, Switzerland
Krzysztof Santarek, Warsaw University of Technology, Poland
John P. Shewchuk, Virginia Polytechnic Institute and State University, USA
Dan L. Shunk, Arizona State University, USA
Riitta Smeds, Aalto University, Finland
Vijay Srinivasan, National Institute of Standards and Technology, USA
Johan Stahre, Chalmers University, Sweden
Kathryn E. Stecke, University of Texas at Dallas, USA
Kenn Steger-Jensen, Aalborg University, Denmark
Volker Stich, FIR Research Institute for Operations Management, Germany
Richard Lee Storch, University of Washington, USA
Jan Ola Strandhagen, Norwegian University of Science and Technology, Norway
Stanislaw Strzelczak, Warsaw University of Technology, Poland
Shigeki Umeda, Musashi University, Japan
Marco Taisch, Politecnico di Milano, Italy
Kari Tanskanen, Aalto University School of Science, Finland
Ilias Tatsiopoulos, National Technical University of Athens, Greece
Sergio Terzi, Politecnico di Milano, Italy
Klaus-Dieter Thoben, Universität Bremen, Germany
Jacques H. Trienekens, Wageningen University, The Netherlands
Mario Tucci, Universitá degli Studi di Firenze, Italy
Gündüz Ulusoy, Sabancı University, Turkey
Bruno Vallespir, University of Bordeaux, France
Agostino Villa, Politecnico di Torino, Italy
Hans-Hermann Wiendahl, University of Stuttgart, Germany
Joakim Wikner, Jönköping University, Sweden
Hans Wortmann, University of Groningen, The Netherlands
Thorsten Wuest, West Virginia University, USA
Iveta Zolotová, Technical University of Košice, Slovakia

Contents – Part II

Smart Supply Networks The APMS Conference & IFIP WG5.7 in the 21st Century: A Bibliometric Study. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Makenzie Keepers, David Romero, and Thorsten Wuest

1

Price Decision Making in a Centralized/Decentralized Solid Waste Disposal Supply Chain with One Contractor and Two Disposal Facilities. . . . Iman Ghalehkhondabi and Reza Maihami

17

Understanding the Impact of User Behaviours and Scheduling Parameters on the Effectiveness of a Terminal Appointment System Using Discrete Event Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . Mihai Neagoe, Hans-Henrik Hvolby, Mohammad Sadegh Taskhiri, and Paul Turner

27

Full-Scale Discrete Event Simulation of an Automated Modular Conveyor System for Warehouse Logistics. . . . . . . . . . . . . . . . . . . . . . . . . Alireza Ashrafian, Ole-Gunnar Pettersen, Kristian N. Kuntze, Jacob Franke, Erlend Alfnes, Knut F. Henriksen, and Jakob Spone Handling Uncertainties in Production Network Design . . . . . . . . . . . . . . . . . Günther Schuh, Jan-Philipp Prote, Andreas Gützlaff, and Sebastian Henk Supply Chain Scenarios for Logistics Service Providers in the Context of Additive Spare Parts Manufacturing. . . . . . . . . . . . . . . . . . . . . . . . . . . . Daniel Pause and Svenja Marek Supply Chain Optimization in the Tire Industry: State-of-the-Art. . . . . . . . . . Kartika Nur Alfina and R. M. Chandima Ratnayake Collaborative Exchange of Cargo Truck Loads: Approaches to Reducing Empty Trucks in Logistics Chains. . . . . . . . . . . . . . . . . . . . . . Hans-Henrik Hvolby, Kenn Steger-Jensen, Mihai Neagoe, Sven Vestergaard, and Paul Turner An Integrated Approach for Supply Chain Tactical Planning and Cash Flow Valuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sabah Belil, Asma Rakiz, and Kawtar Retmi UAV Set Covering Problem for Emergency Network . . . . . . . . . . . . . . . . . Youngsoo Park and Ilkyeong Moon

35

43

51 59

68

75 84

xii

Contents – Part II

A Stochastic Optimization Model for Commodity Rebalancing Under Traffic Congestion in Disaster Response . . . 91
Xuehong Gao

Optimal Supplier Selection in a Supply Chain with Predetermined Loading/Unloading Time Windows and Logistics Truck Share . . . 100
Alireza Fallahtafti, Iman Ghalehkhondabi, and Gary R. Weckman

Scheduling Auction: A New Manufacturing Business Model for Balancing Customization and Quick Delivery . . . 109
Shota Suginouchi and Hajime Mizuyama

Passenger Transport Disutilities in the US: An Analysis Since 1990s . . . 118
Helcio Raymundo and João Gilberto M. dos Reis

Sustainability and Production Management

Configuring the Future Norwegian Macroalgae Industry Using Life Cycle Analysis . . . 127
Jon Halfdanarson, Matthias Koesling, Nina Pereira Kvadsheim, Jan Emblemsvåg, and Céline Rebours

Operationalizing Industry 4.0: Understanding Barriers of Industry 4.0 and Circular Economy . . . 135
Lise Lillebrygfjeld Halse and Bjørn Jæger

Business Model Innovation for Eco-Efficiency: An Empirical Study . . . 143
Yan Li and Steve Evans

Atmospheric Water Generation (AWG): Performance Model and Economic Analysis . . . 151
Faraz Moghimi, Hamed Ghoddusi, Bahram Asiabanpour, and Mahdi Behroozikhah

Life Cycle Assessment for Ordinary and Frost-Resistant Concrete . . . 159
Ramin Sabbagh and Paria Esmatloo

Production Management Theory and Methodology

Simulation Based Optimization of Lot Sizes for Opposing Logistic Objectives . . . 171
Janine Tatjana Maier, Thomas Voß, Jens Heger, and Matthias Schmidt

A Proposal of Order Planning Method with Consideration of Multiple Organizations in Manufacturing System . . . 180
Ken Yamashita, Toshiya Kaihara, Nobutada Fujii, Daisuke Kokuryo, Toyohiro Umeda, and Rihito Izutsu

Reduction of Computational Load in Robust Facility Layout Planning Considering Temporal Production Efficiency . . . 189
Eiji Morinaga, Komei Iwasaki, Hidefumi Wakamatsu, and Eiji Arai

Decision-Making Process for Buffer Dimensioning in Manufacturing . . . 196
Lisa Hedvall and Joakim Wikner

Postponement Revisited – A Typology for Displacement . . . 204
Fredrik Tiedemann and Joakim Wikner

Efficient Heuristic Solution Methodologies for Scheduling Batch Processor with Incompatible Job-Families, Non-identical Job-Sizes and Non-identical Job-Dimensions . . . 212
M. Mathirajan and M. Ramasubramanian

Optimizing Workflow in Cell-Based Slaughtering and Cutting of Pigs . . . 223
Johan Oppen

Increasing the Regulability of Production Planning and Control Systems . . . 231
Günther Schuh and Philipp Wetzchewald

Possibilities and Benefits of Using Material Flow Information to Improve the Internal Hospital Supply Chain . . . 240
Giuseppe Ismael Fragapane, Aili Biriita Bertnum, and Jan Ola Strandhagen

Medical Supplies to the Point-Of-Use in Hospitals . . . 248
Giuseppe Ismael Fragapane, Aili Biriita Bertnum, Hans-Henrik Hvolby, and Jan Ola Strandhagen

Combining the Inventory Control Policy with Pricing and Advertisement Decisions for a Non-instantaneous Deteriorating Product . . . 256
Reza Maihami and Iman Ghalehkhondabi

Assessing Fit of Capacity Planning Methods for Delivery Date Setting: An ETO Case Study . . . 265
Swapnil Bhalla, Erlend Alfnes, and Hans-Henrik Hvolby

Data-Driven Production Management

From a Theory of Production to Data-Based Business Models . . . 277
Günther Schuh, Malte Brettel, Jan-Philipp Prote, Andreas Gützlaff, Frederick Sauermann, Katharina Thomas, and Mario Piel

Real-Time Data Sharing in Production Logistics: Exploring Use Cases by an Industrial Study . . . 285
Masoud Zafarzadeh, Jannicke Baalsrud Hauge, Magnus Wiktorsson, Ida Hedman, and Jasmin Bahtijarevic

Scenarios for the Development and Use of Data Products Within the Value Chain of the Industrial Food Production . . . 294
Volker Stich, Lennard Holst, Philipp Jussen, and Dennis Schiemann

Bidirectional Data Management in Factory Planning and Operation . . . 303
Uwe Dombrowski, Jonas Wullbrandt, and Alexander Karl

Open Access Digital Tools’ Application Potential in Technological Process Planning: SMMEs Perspective . . . 312
Roman Wdowik and R. M. Chandima Ratnayake

Industry 4.0 Implementations

Implementation of Industry 4.0 in Germany, Brazil and Portugal: Barriers and Benefits . . . 323
Walter C. Satyro, Mauro de Mesquita Spinola, Jose B. Sacomano, Márcia Terra da Silva, Rodrigo Franco Gonçalves, Marcelo Schneck de Paula Pessoa, Jose Celso Contador, Jose Luiz Contador, and Luciano Schiavo

Planning Guideline and Maturity Model for Intra-logistics 4.0 in SME . . . 331
Knut Krowas and Ralph Riedel

Self-assessment of Industry 4.0 Technologies in Intralogistics for SME’s . . . 339
Martina Schiffer, Hans-Hermann Wiendahl, and Benedikt Saretz

Industry 4.0 Visions and Reality – Status in Norway . . . 347
Hans Torvatn, Pål Kamsvåg, and Birgit Kløve

Exploring the Impact of Industry 4.0 Concepts on Energy and Environmental Management Systems: Evidence from Serbian Manufacturing Companies . . . 355
Milovan Medojevic, Nenad Medic, Ugljesa Marjanovic, Bojan Lalic, and Vidosav Majstorovic

Smart Factory and IIoT

Virtualization of Sea Trials for Smart Prototype Testing . . . 365
Moritz von Stietencron, Shantanoo Desai, and Klaus-Dieter Thoben

IoH Technologies into Indoor Manufacturing Sites . . . 372
Takeshi Kurata, Takashi Maehata, Hidehiko Hashimoto, Naohiro Tada, Ryosuke Ichikari, Hideki Aso, and Yoshinori Ito

3D Visualization System of Manufacturing Big Data and Simulation Results of Production for an Automotive Parts Supplier . . . 381
Dahye Hwang and Sang Do Noh

Cyber-Physical Systems

Blockchain as an Internet of Services Application for an Advanced Manufacturing Environment . . . 389
Benedito Cristiano A. Petroni, Jacqueline Zonichenn Reis, and Rodrigo Franco Gonçalves

Development of a Modeling Architecture Incorporating the Industry 4.0 View for a Company in the Gas Sector . . . 397
Nikolaos A. Panayiotou, Konstantinos E. Stergiou, and Vasileios P. Stavrou

Process for Enhancing the Production System Robustness with Sensor Data – a Food Manufacturer Case Study . . . 405
Sofie Bech, Thomas Ditlev Brunoe, and Kjeld Nielsen

In-Process Noise Detection System for Product Inspection by Using Acoustic Data . . . 413
Woonsang Baek and Duck Young Kim

Knowledge Management in Design and Manufacturing

Closed-Loop Manufacturing for Aerospace Industry: An Integrated PLM-MOM Solution to Support the Wing Box Assembly Process . . . 423
Melissa Demartini, Federico Galluccio, Paolo Mattis, Islam Abusohyon, Raffaello Lepratti, and Flavio Tonelli

Modeling Manual Assembly System to Derive Best Practice from Actual Data . . . 431
Susann Kärcher, David Görzig, and Thomas Bauernhansl

Application of a Controlled Assembly Vocabulary: Modeling a Home Appliance Transfer Line . . . 439
Chase Wentzky, Chelsea Spence, Apurva Patel, Nicole Zero, Adarsh Jeyes, Alexis Fiore, Joshua D. Summers, Mary E. Kurz, and Kevin M. Taaffe

What Product Developers Really Need to Know - Capturing the Major Design Elements . . . 447
Bjørnar Henriksen, Andreas Landmark, and Carl Christian Røstad

Collaborative Product Development

Design-for-Cost – An Approach for Distributed Manufacturing Cost Estimation . . . 457
Minchul Lee and Boonserm (Serm) Kulvatunyou

Computer-Aided Selection of Participatory Design Methods . . . 466
Michael Bojko, Ralph Riedel, and Mandy Tawalbeh

Knowledge Management Environment for Collaborative Design in Product Development . . . 475
Shuai Zhang

A Multi-criteria Approach to Collaborative Product-Service Systems Design . . . 481
Martha Orellano, Khaled Medini, Christine Lambey-Checchin, Maria-Franca Norese, and Gilles Neubert

ICT for Collaborative Manufacturing

MES Implementation: Critical Success Factors and Organizational Readiness Model . . . 493
Daniela Invernizzi, Paolo Gaiardelli, Emrah Arica, and Daryl Powell

Identifying the Role of Manufacturing Execution Systems in the IS Landscape: A Convergence of Multiple Types of Application Functionalities . . . 502
S. Waschull, J. C. Wortmann, and J. A. C. Bokhorst

A Generic Approach to Model and Analyze Industrial Search Processes . . . 511
Philipp Steenwerth and Hermann Lödding

A Methodology to Assess the Skills for an Industry 4.0 Factory . . . 520
Federica Acerbi, Silvia Assiani, and Marco Taisch

Collaborative Technology

A Theoretical Approach for Detecting and Anticipating Collaboration Opportunities . . . 531
Ibrahim Koura, Frederick Benaben, and Juanqiong Gou

The Systematic Integration of Stakeholders into Factory Planning, Construction, and Factory Operations to Increase Acceptance and Prevent Disruptions . . . 539
Uwe Dombrowski, Alexander Karl, Colette Vogeler, and Nils Bandelow

Service Engineering Models: History and Present-Day Requirements . . . 547
Roman Senderek, Jan Kuntz, Volker Stich, and Jana Frank

Design and Simulation of an Integrated Model for Organisational Sustainability Applying the Viable System Model and System Dynamics . . . 555
Sergio Gallego-García and Manuel García-García

Applications of Machine Learning in Production Management

Enabling Energy Efficiency in Manufacturing Environments Through Deep Learning Approaches: Lessons Learned . . . 567
M. T. Alvela Nieto, E. G. Nabati, D. Bode, M. A. Redecker, A. Decker, and K.-D. Thoben

Retail Promotion Forecasting: A Comparison of Modern Approaches . . . 575
Casper Solheim Bojer, Iskra Dukovska-Popovska, Flemming Max Møller Christensen, and Kenn Steger-Jensen

A Data Mining Approach to Support Capacity Planning for the Regeneration of Complex Capital Goods . . . 583
Melissa Seitz, Maren Sobotta, and Peter Nyhuis

Developing Smart Supply Chain Management Systems Using Google Trend’s Search Data: A Case Study . . . 591
Ramin Sabbagh and Dragan Djurdjanovic

Collaborative Technology

Managing Knowledge in Manufacturing Industry - University Innovation Projects . . . 603
Irina-Emily Hansen, Ola Jon Mork, and Torgeir Welo

Technology Companies in Judicial Reorganization . . . 611
Ricardo Zandonadi Schmidt and Márcia Terra da Silva

Multiscale Modeling of Social Systems: Scale Bridging via Decision Making . . . 617
Nursultan Nikhanbayev, Toshiya Kaihara, Nobutada Fujii, and Daisuke Kokuryo

e-Health: A Framework Proposal for Interoperability and Health Data Sharing. A Brazilian Case . . . 625
Neusa Andrade, Pedro Luiz de Oliveira Costa Neto, Jair Gustavo de Mello Torres, Irapuan Glória Júnior, Cláudio Guimarães Scheidt, and Welleson Gazel

Managing Risk and Opportunities in Complex Projects . . . 631
Asbjørn Rolstadås, Agnar Johansen, Yvonne C. Bjerke, and Tobias O. Malvik

Author Index . . . 641

Contents – Part I

Lean Production

Total Quality Management and Quality Circles in the Digital Lean Manufacturing World . . . 3
David Romero, Paolo Gaiardelli, Daryl Powell, Thorsten Wuest, and Matthias Thürer

Practical Boundary Case Approach for Kanban Calculation on the Shop Floor Subject to Variation . . . 12
Christoph Roser and Daniel Nold

Options for Maintaining Weak FIFO in Parallel Queues . . . 21
Kaan Kalkanci and Christoph Roser

Sketching the Landscape for Lean Digital Transformation . . . 29
Alireza Ashrafian, Daryl J. Powell, Jonas A. Ingvaldsen, Heidi C. Dreyer, Halvor Holtskog, Peter Schütz, Elsebeth Holmen, Ann-Charlott Pedersen, and Eirin Lodgaard

Cyber-Physical Waste Identification and Elimination Strategies in the Digital Lean Manufacturing World . . . 37
David Romero, Paolo Gaiardelli, Matthias Thürer, Daryl Powell, and Thorsten Wuest

Using Prescriptive Analytics to Support the Continuous Improvement Process . . . 46
Günther Schuh, Jan-Philipp Prote, Thomas Busam, Rafael Lorenz, and Torbjörn H. Netland

Lean Leadership in Production Ramp-Up . . . 54
Uwe Dombrowski and Jonas Wullbrandt

No Lean Without Learning: Rethinking Lean Production as a Learning System . . . 62
Daryl Powell and Eivind Reke

The Effect of Team Size on the Performance of Continuous Improvement Teams: Is Seven Really the Magic Number? . . . 69
Daryl Powell and Rafael Lorenz

Lean and Digitalization—Contradictions or Complements? . . . 77
Rafael Lorenz, Paul Buess, Julian Macuvele, Thomas Friedli, and Torbjørn H. Netland

Production Management in Food Supply Chains

Neuro-Fuzzy System for the Evaluation of Soya Production and Demand in Brazilian Ports . . . 87
Emerson Rodolfo Abraham, João Gilberto Mendes dos Reis, Aguinaldo Eduardo de Souza, and Adriane Paulieli Colossetti

Port Logistic Support Areas (PLSA) for Exporting Grains: An Exploratory Case-Study in the Largest Port in Latin America . . . 95
Clayton Gerber Mangini, Irenilza de Alencar Nääs, Antônio Carlos Estender, Meykson Rodrigues Alves Cordeiro, and Agnaldo Vieira da Silva

Sustainability of Meat Chain: The Carbon Footprint of Brazilian Consumers . . . 102
Raquel Baracat T. R. Silva, João Gilberto Mendes Reis, Thayla M. R. Carvalho Curi, Nilsa D. S. Lima, Solimar Garcia, and Irenilza de Alencar Nääs

Global Warming Impact in a Food Distribution System: A Case-Study in an Elementary School in Piaui . . . 108
Genyvana Criscya G. Carvalho, Ivonalda Brito de Almeida Morais, Manoel Eulálio Neto, Raimundo Nonato Moura Rodrigues, Francisco Canindé Dias Alves, Irenilza de Alencar Nääs, and Oduvaldo Vendrametto

Broiler Meat Production in Piaui State: A Case Study . . . 116
Eldelita A. P. Franco, Lilane de A. M. Brandão, José A. A. Luz, Kelly L. F. Gonçalves, and Irenilza de A. Nääs

Collaborative Production Chains: A Case-Study of Two Agri-Food Companies in Brazil . . . 123
Yuri Claudio C. de Lima, Silvia Piva R. de Morais, Luis A. Mendes de M. Araujo, Daiane da S. A. Castelo Branco, and Irenilza de A. Nääs

An Evaluation of Brazilian Ports for Corn Export Using Multicriteria Analysis . . . 129
Aguinaldo Eduardo de Souza, João José Giardulli Junior, João Gilberto Mendes dos Reis, Ataide Pereira Cardoso Junior, Paula Ferreira da Cruz Correia, Ricardo Zandonadi Schimidt, José Benedito Sacomano, and Márcia Terra da Silva


Port Terminals Assessment: An Empirical Analysis of Requirements of Brazilian National Plan of Port Logistics . . . 135
Aguinaldo Eduardo de Souza, João Gilberto Mendes dos Reis, Ataide Pereira Cardoso Junior, Emerson Rodolfo Abraham, Oduvaldo Vendrametto, Renato Marcio dos Santos, and Roberta Sobral Pinto

Brazilian Coffee Export Network: An Analysis Using SNA . . . 142
Paula F. da Cruz Correia, João Gilberto M. dos Reis, Aguinaldo E. de Souza, and Ataíde Pereira Cardoso Jr.

CNN-Based Growth Prediction of Field Crops for Optimizing Food Supply Chain . . . 148
Shunsuke Iitsuka, Nobutada Fujii, Daisuke Kokuryo, Toshiya Kaihara, and Shinichi Nakano

Asymmetrical Evaluation of Forecasting Models Through Fresh Food Product Characteristics . . . 155
Flemming M. M. Christensen, Iskra Dukovska-Popovska, Casper S. Bojer, and Kenn Steger-Jensen

Horizontal Integration in Fresh Food Supply Chain . . . 164
Flemming M. M. Christensen, Soujanya Mantravadi, Iskra Dukovska-Popovska, Hans-Henrik Hvolby, Kenn Steger-Jensen, and Charles Møller

Reverse Logistics and Waste in the Textile and Clothing Production Chain in Brazil . . . 173
Solimar Garcia, Irenilza de Alencar Nääs, Pedro Luiz de Oliveira Costa Neto, and João Gilberto Mendes dos Reis

Port Performance Measures in Brazil: An Analysis in Port of Santos . . . 180
Renato Márcio dos Santos, João Gilberto Mendes dos Reis, Júlio Cesar Raymundo, Emerson Rodolfo Abraham, Ataide Pereira Cardoso Junior, and Aguinaldo Eduardo de Souza

CO2 Gas Emissions of Soybean Production and Transportation in the Different Macro-regions of Mato Grosso State - Brazil . . . 187
Marley Nunes Vituri Toloi, Rodrigo Carlo Toloi, Helton Raimundo Oliveira Silva, João Gilberto Mendes dos Reis, and Silvia Helena Bonilla

Sustainability and Reconfigurability of Manufacturing Systems

Classification of Optical Technologies for the Mapping of Production Environments . . . 197
Marius Greger, Daniel Palm, Louis Louw, and Konrad von Leipzig

A DRC Scheduling for Social Sustainability: Trade-Off Between Tardiness and Workload Balance . . . 206
Muhammad Akbar and Takashi Irohara

Towards Reconfigurable Digitalized and Servitized Manufacturing Systems: Conceptual Framework . . . 214
Xavier Boucher, Audrey Cerqueus, Xavier Delorme, Clemens Gonnermann, Magdalena Paul, Gunther Reinhart, Julia Schulz, and Fabian Sippl

Simulation of Reconfigurable Assembly Cells with Unity3D . . . 223
Magdalena Paul, Daria Leiber, Julian Pleli, and Gunther Reinhart

Decision Support System for Joint Product Design and Reconfiguration of Production Systems . . . 231
S. Ehsan Hashemi-Petroodi, Clemens Gonnermann, Magdalena Paul, Simon Thevenin, Alexandre Dolgui, and Gunther Reinhart

Simple Assembly Line Balancing Problem with Power Peak Minimization . . . 239
Paolo Gianessi, Xavier Delorme, and Oussama Masmoudi

Modular Robot Software Framework for the Intelligent and Flexible Composition of Its Skills . . . 248
Lisa Heuss, Andreas Blank, Sebastian Dengler, Georg Lukas Zikeli, Gunther Reinhart, and Jörg Franke

A Competence-Based Description of Employees in Reconfigurable Manufacturing Systems . . . 257
Svenja Korder, Barbara Tropschuh, and Gunther Reinhart

Product and Asset Life Cycle Management in Smart Factories of Industry 4.0

Identification of the Inspection Specifications for Achieving Zero Defect Manufacturing . . . 267
Foivos Psarommatis and Dimitris Kiritsis

Risk Sources Affecting the Asset Management Decision-Making Process in Manufacturing: A Systematic Review of the Literature . . . 274
Adalberto Polenghi, Irene Roda, Marco Macchi, and Paolo Trucco

Conceptual Framework for a Data Model to Support Asset Management Decision-Making Process . . . 283
Adalberto Polenghi, Irene Roda, Marco Macchi, and Alessandro Pozzetti

Hybrid Approach Using Ontology-Supported Case-Based Reasoning and Machine Learning for Defect Rate Prediction . . . 291
Bongjun Ji, Farhad Ameri, Junhyuk Choi, and Hyunbo Cho

Semantic Model-Driven PLM Data Interoperability: An Application for Aircraft Ground Functional Testing with Eco-Design Criteria . . . 299
D. Arena, M. Oliva, I. Eguia, C. Del Valle, and D. Kiritsis

A Method for Converting Current Data to RDF in the Era of Industry 4.0 . . . 307
Marlène Hildebrand, Ioannis Tourkogiorgis, Foivos Psarommatis, Damiano Arena, and Dimitris Kiritsis

Total Cost of Ownership Driven Methodology for Predictive Maintenance Implementation in Industrial Plants . . . 315
I. Roda, S. Arena, M. Macchi, and P. F. Orrù

Ontology-Based Resource Allocation for Internet of Things . . . 323
Zeinab Nezami, Kamran Zamanifar, Damiano Arena, and Dimitris Kiritsis

Variety and Complexity Management in the Era of Industry 4.0

Bringing Advanced Analytics to Manufacturing: A Systematic Mapping . . . 333
Hergen Wolf, Rafael Lorenz, Mathias Kraus, Stefan Feuerriegel, and Torbjørn H. Netland

Impact of Modeling Production Knowledge for a Data Based Prediction of Transition Times . . . 341
Günther Schuh, Jan-Philipp Prote, Philipp Hünnekes, Frederick Sauermann, and Lukas Stratmann

Reconfigurable Manufacturing: A Classification of Elements Enabling Convertibility and Scalability . . . 349
Alessia Napoleone, Ann-Louise Andersen, Alessandro Pozzetti, and Marco Macchi

Industry 4.0 in SMEs: A Sectorial Analysis . . . 357
Javier Luco, Sara Mestre, Ludovic Henry, Simon Tamayo, and Frederic Fontane

Reconfigurable Manufacturing: A Case-Study of Reconfigurability Potentials in the Manufacturing of Capital Goods . . . 366
Bjørn Christensen, Ann-Louise Andersen, Khaled Medini, and Thomas D. Brunoe

A DSM Clustering Method for Product and Service Modularization . . . 375
Omar Ezzat, Khaled Medini, Maria Stoettrup Schioenning Larsen, Xavier Boucher, Thomas D. Brunoe, Kjeld Nielsen, and Xavier Delorme

Customization and Variants in Terms of Form, Place and Time . . . 383
Joakim Wikner and Fredrik Tiedemann

A Framework for Identification of Complexity Drivers in Manufacturing Companies . . . 392
Rasmus Andersen, Thomas D. Brunoe, and Kjeld Nielsen

Identification of Platform Candidates Through Production System Classification Coding . . . 400
Daniel G. H. Sorensen, Hoda A. ElMaraghy, Thomas Ditlev Brunoe, and Kjeld Nielsen

5G-Ready in the Industrial IoT-Environment: Requirements and Needs for IoT Applications from an Industrial Perspective . . . 408
Kay Burow, Marco Franke, and Klaus-Dieter Thoben

Complexity Management in Production Systems: Approach for Supporting Problem Solving Through Holistic Structural Consideration . . . 414
Samuel Horler, Ralph Riedel, and Egon Müller

Participatory Methods for Supporting the Career Choices in Industrial Engineering and Management Education

The Teaching of Engineers Focused on Innovative Entrepreneurship . . . 425
Danielle Miquilim and Marcia Terra da Silva

Research Initiative: Using Games for Better Career Choices . . . 433
Nick B. Szirbik and Vincent R. Velthuizen

Blockchain in Supply Chain Management

Blockchain as Middleware+ . . . 443
David Holtkemper and Günther Schuh

Towards a Blockchain Based Traceability Process: A Case Study from Pharma Industry . . . 451
Ferdinando Chiacchio, Diego D’Urso, Lucio Compagno, Marcello Chiarenza, and Luca Velardita

An Architecture of IoT-Based Product Tracking with Blockchain in Multi-sided B2B Platform . . . 458
Shantanoo Desai, Quan Deng, Stefan Wellsandt, and Klaus-Dieter Thoben

A Blockchain Application Supporting the Manufacturing Value Chain . . . 466
Bjørn Jæger, Terje Bach, and Simen Alexander Pedersen

Design of a Blockchain-Driven System for Product Counterfeiting Restraint in the Supply Chain . . . 474
Sotiris P. Gayialis, Evripidis Kechagias, Georgios A. Papadopoulos, and Grigorios D. Konstantakopoulos

Designing and Delivering Smart Services in the Digital Age

A Dual Perspective Workflow to Improve Data Collection for Maintenance Delivery: An Industrial Case Study . . . 485
Roberto Sala, Fabiana Pirola, Emanuele Dovere, and Sergio Cavalieri

The Impact of Digital Technologies on Services Characteristics: Towards Digital Servitization . . . 493
David Romero, Paolo Gaiardelli, Giuditta Pezzotta, and Sergio Cavalieri

Capability-Based Implementation of Digital Service Innovation in SMEs . . . 502
David Görzig, Susann Kärcher, and Thomas Bauernhansl

Digital Servitization: The Next “Big Thing” in Manufacturing Industries . . . 510
Ugljesa Marjanovic, Slavko Rakic, and Bojan Lalic

Organization of Sales for Smart Product Service Systems . . . 518
Benedikt Moser, Achim Kampker, Philipp Jussen, and Jana Frank

Operations Management in Engineer-to-Order Manufacturing

Exploring Logistics Strategy in Construction . . . 529
Martin Rudberg and Duncan Maxwell

Architecture for Digital Spare-Parts Library: Use of Additive Layer Manufacturing in the Petroleum Industry . . . 537
R. M. Chandima Ratnayake, Arvind Keprate, and Roman Wdowik

IPD Methodology in Shipbuilding . . . 546
Hajnalka Vaagen and Lucky C. Masi

Changing Markets: Implications for the Planning Process in ETO Companies . . . 554
Kristina Kjersem and Marte F. Giskeødegård

Purchasing Strategies, Tactics, and Activities in Engineer-to-Order Manufacturing . . . 562
Mikhail Shlopak, Espen Rød, and Gabriele Hofinger Jünge

Examining Circular Economy Business Models for Engineer-to-Order Products . . . 570
Nina Pereira Kvadsheim, Deodat Mwesiumo, and Jan Emblemsvåg

Digitalized Manufacturing Logistics in Engineer-to-Order Operations . . . 579
Jo Wessel Strandhagen, Sven-Vegard Buer, Marco Semini, and Erlend Alfnes

Aspects for Better Understanding of Engineering Changes in Shipbuilding Projects: In-Depth Case Study . . . 588
Natalia Iakymenko, Marco Semini, and Jan Ola Strandhagen

Practical Guidelines for Production Planning and Control in HVLV Production . . . 596
Erik Gran and Erlend Alfnes

APS Feasibility in an Engineer to Order Environment . . . 604
Erlend Alfnes and Hans-Henrik Hvolby

The Operator 4.0 and the Internet of Things, Services and People

Empowering and Engaging Solutions for Operator 4.0 – Acceptance and Foreseen Impacts by Factory Workers . . . 615
Eija Kaasinen, Susanna Aromaa, Päivi Heikkilä, and Marja Liinasuo

Task-Technology Fit in Manufacturing: Examining Human-Machine Symbiosis Through a Configurational Approach . . . 624
Patrick Mikalef, Hans Yngvar Torvatn, and Emrah Arica

Augmented Reality for Humans-Robots Interaction in Dynamic Slotting “Chaotic Storage” Smart Warehouses . . . 633
Peter Papcun, Jan Cabadaj, Erik Kajati, David Romero, Lenka Landryova, Jan Vascak, and Iveta Zolotova

Analyzing Human Robot Collaboration with the Help of 3D Cameras . . . 642
Robert Glöckner, Lars Fischer, Arne Dethlefs, and Hermann Lödding

Investments of the Automotive Sector and the Industry 4.0. Brazilian Case . . . 650
Sergio Miele Ruggero, Nilza Aparecida dos Santos, José Benedito Sacomano, and Marcia Terra da Silva

Process Innovation in Learning Factories: Towards a Reference Model . . . 658
Maria Stoettrup Schioenning Larsen, Astrid Heidemann Lassen, and Kjeld Nielsen

Applicability of Agile Methods for Dynamic Requirements in Smart PSS Development . . . 666
Stefan Wiesner, Jannicke Baalsrud Hauge, Paul Sonntag, and Klaus-Dieter Thoben

Smart Service Engineering: Promising Approaches for a Digitalized Economy . . . 674
Roman Senderek, Jan Kuntz, Volker Stich, and Jana Frank

Strategies for Implementing Collaborative Robot Applications for the Operator 4.0 . . . 682
Åsa Fast-Berglund and David Romero

Situation Awareness for Effective Production Control . . . 690
Andreas D. Landmark, Emrah Arica, Birgit Kløve, Pål Furu Kamsvåg, Eva Amdahl Seim, and Manuel Oliveira

Intelligent Diagnostics and Maintenance Solutions for Smart Manufacturing

A Study on the Diagnostics Method for Plant Equipment Failure . . . 701
Minyoung Seo and Hong-Bae Jun

Detailed Performance Diagnosis Based on Production Timestamps: A Case Study . . . 708
Johannes Cornelis de Man and Felix Mannhardt

Modeling the Maintenance Time Considering the Experience of the Technicians . . . 716
Hyunjong Shin, Kai-wen Tien, and Vittaldas Prabhu

A Thesaurus-Guided Method for Smart Manufacturing Diagnostics . . . 722
Farhad Ameri and Reid Yoder

Author Index . . . 731

The APMS Conference & IFIP WG5.7 in the 21st Century: A Bibliometric Study

Makenzie Keepers¹, David Romero², and Thorsten Wuest¹

¹ West Virginia University, Morgantown, USA
[email protected], [email protected]
² Tecnológico de Monterrey, Monterrey, Mexico
[email protected]

Abstract. The APMS conference and the IFIP WG 5.7 community can proudly look back at a rich history of research and practical impact in the field of production and production management. However, in light of the recent disruptions of the field, often summarized under the terms Industry 4.0 or Smart Manufacturing, it is critical to assess recent research trends and changing key topics within the community to enable informed decisions about the future directions of the conference. This paper takes a critical look at 1,428 published papers from the APMS proceedings that are available on Scopus and derives key insights through a bibliometric study. A special focus is put on the last five years to reflect the recent effects of digital transformation on the driving topics of the conference. The results show the emergence and dominance of Industry 4.0 among the recent topics, but also provide evidence of established topics, such as sustainability, remaining relevant. Overall, the study provides a wealth of information that forms the foundation for a forward-looking discussion among the community members.

Keywords: Key topics · APMS · Production management · Smart factory · Smart manufacturing · Industry 4.0 · IFIP · Bibliometric analysis



1 Introduction

The field of production and production management is currently experiencing an interesting phase, with paradigms like Smart Manufacturing and Industry 4.0 disrupting whole industries on a global scale [1]. Exciting technologies such as the Industrial Internet of Things (IIoT), Additive Manufacturing, Cyber-Physical Systems, AI, and machine learning are being introduced to the shop-floor and beyond [2]. This digital transformation has a strong influence on industry and academia alike, and also affects policies related to the domain. With regard to these disruptive and rapid changes, it is necessary to critically reflect on (i) the topics that have been covered by the contributions of the APMS community, as well as (ii) observable changes in preferences regarding the dominant topics and research areas, especially for an established community with a long history such as IFIP WG5.7 and APMS. It is then crucial to provide transparent and insightful data to enable an informed discussion around the future

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 1–13, 2019. https://doi.org/10.1007/978-3-030-29996-5_1


directions of the community and the APMS conference, thus ensuring and solidifying its stance at the pinnacle of production management research and industrial relevance.

The flagship conference of the International Federation for Information Processing (IFIP) Working Group 5.7¹ was originally established in 1978, and with it the ‘International Conference on Advances in Production Management Systems (APMS)’² as a working conference. Starting out as a triennial event, it has emerged as a premier international conference held every year since 2005. From 2005 to 2012, the conference venues were mainly located in Europe. However, paying tribute to its global ambition, today the APMS conference location rotates in a three-year rhythm through Asia (incl. Australia/New Zealand), Europe/Africa, and the Americas. This regular global rotation started in 2013 with the conference being held in State College, Pennsylvania, USA. The APMS conferences in the 21st Century, their years, and locations are illustrated in Table 1 and Fig. 1 (locations only).

This paper pays tribute to the history of this conference series with a focus on the 21st Century. With a history of 23 events (by 2018) spanning over 40 years, the objective of this research is to investigate the international collaborations and the topics covered over time, and, most importantly, to provide insights into emerging topics of relevance to the APMS community. The objective is to build (i) a solid understanding of the roots of this international conference series and the community built around it, and (ii) provide insights on relevant topics, including historic and forward-looking trends based on solid bibliometric data.

Fig. 1. World map with APMS locations from 2000–2018.

¹ https://www.ifipwg57.org
² https://www.apms-conference.org


The remainder of this paper is structured as follows: First, we briefly elaborate on our methodology and provide in-depth insights into the data used for our analysis in Sect. 2. Then, we present the main results in Sect. 3 and discuss selected topics of relevance in more detail in Sect. 4. Section 5 concludes the paper and provides an outlook on future work and next steps.

2 Methodology and Data

We chose a bibliometric study as the methodology for this paper. Furthermore, we decided to concentrate on two main timeframes: 2000–2018 (a.k.a. the 21st century) and 2014–2018 (a.k.a. the last five years). The main data source of this bibliometric analysis was the published proceedings of the APMS conference from 2000 until 2018. It has to be noted that the available data for our analysis was not complete for the first timeframe (see Table 1), as some of the earlier editions are not available as part of the Springer series and thus the Scopus database. The second timeframe provided a complete dataset for analysis, including all relevant meta-data.

For our analysis, we focused on the Scopus database, as the most established provider of high-quality conference proceedings data, where we identified the proceedings and pre-processed the data. Table 1 depicts the year, location, conference topic, and number of papers included in this analysis. Hyperlinks are included in the table to provide direct links to proceedings when available. Several editions are published in up to three books – please click on the Roman numeral (I–III) to activate the hyperlinks in such cases; otherwise, click on the topic of the specific conference. Cleaning refers to adjusting for the correct date, as the conference year and the year of publication of the proceedings varied in selected cases. Furthermore, we had to remove papers from other conferences with similar titles and/or published in Springer’s AICT series.

We then exported the identified papers from Scopus and again pre-processed the .csv files to ensure consistency and compliance with our analytical tools. We mainly relied on MS-Excel and VOSviewer for our analysis and data visualizations. VOSviewer is a bibliometric analysis tool focused on visualization of similarities.
The tool works by first developing a similarity matrix based on association strength, then uses the similarity values to determine location and proximity of labels. To augment the data derived from the Scopus files, we went through the minutes of the IFIP WG 5.7 meetings for the years 2000–2018, mainly to identify the Special Interest Groups (SIGs) formed, active, merged, and resolved (see Fig. 6).
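The similarity computation described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not VOSviewer's actual implementation: it counts keyword co-occurrences across per-paper keyword lists and normalizes each pair by the product of the keywords' total occurrence counts, i.e., the association-strength measure.

```python
from collections import Counter
from itertools import combinations

def association_strength(papers):
    """Similarity of keyword pairs: co-occurrence counts normalized by
    the product of the keywords' total occurrence counts."""
    occurrences = Counter(k for keywords in papers for k in set(keywords))
    co_occurrences = Counter()
    for keywords in papers:
        # Each unordered pair of distinct keywords co-occurring in a paper
        for a, b in combinations(sorted(set(keywords)), 2):
            co_occurrences[(a, b)] += 1
    return {pair: c / (occurrences[pair[0]] * occurrences[pair[1]])
            for pair, c in co_occurrences.items()}

# Toy input; in practice the keyword lists would come from the Scopus .csv export
papers = [["industry 4.0", "sustainability"],
          ["industry 4.0", "lean manuf."],
          ["industry 4.0", "sustainability", "lean manuf."]]
sim = association_strength(papers)
```

VOSviewer then uses such similarity values to place frequently co-occurring keywords close to each other on the map.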

Table 1. APMS conference proceedings 2000–2018 (based on Scopus data)

Year | No. | Location | Conference topic* | Papers
2002 | 8 | Eindhoven, Netherlands | Collaborative Systems for Production Mgmt. | -
2003 | 9 | Karlsruhe, Germany | Integrating Human Aspects in Prod. Mgmt. | -
2005 | 10 | Washington D.C., USA | - | -
2006 | 11 | Wroclaw, Poland | Lean Business Systems and Beyond | -
2007 | 12 | Linköping, Sweden | Advances in Production Management Systems | -
2008 | 13 | Espoo, Finland | Innovation in Networks | 59
2009 | 14 | Bordeaux, France | New Challenges, New Approaches | 82
2010 | 15 | Cernobbio, Italy | Competitive and Sustainable Manufacturing, Products and Services | 142
2011 | 16 | Stavanger, Norway | Value Networks: Innovation, Technologies, and Management | 66
2012 | 17 | Rhodes Island, Greece | Competitive Manufacturing for Innovative Products & Services (Part I/II) | 184
2013 | 18 | State College, USA | Sustainable Production and Service Supply Chains (Part I/II) | 134
2014 | 19 | Ajaccio, France | Innovative & Knowledge-Based Prod. Mgmt. in a Global-Local World (Part I/II/III) | 233
2015 | 20 | Tokyo, Japan | Innovative Production Management Towards Sustainable Growth (Part I/II) | 164
2016 | 21 | Iguassu Falls, Brazil | Initiatives for a Sustainable World | 112
2017 | 22 | Hamburg, Germany | The Path to Intelligent, Collaborative and Sustainable Manufacturing (Part I/II) | 122
2018 | 23 | Seoul, South Korea | Production Mgmt. for Data-Driven, Intelligent, Collaborative, & Sustainable Manuf./Smart Manuf. for Industry 4.0 (Part I/II) | 129
Total number of papers included in analysis: 1,428

* Hyperlink of proceedings and/or further information provided if available.

3 Results

We structured the results from our analysis into four main sub-sections: co-authorship and country networks; most productive and highly-cited authors and countries; most relevant keywords of the APMS proceedings; and Special Interest Groups (SIGs). In the following, we present the analysis results as a basis for our discussion in the next section.

3.1 Co-authorship and Country Networks

The IFIP WG 5.7 and its APMS conference have global ambitions and continuously provide a forum for international exchange. We analyzed the number of different countries represented among the authors of the respective APMS proceedings for each year from 2014 to 2018, as well as an accumulated count for the timeframe 2000–2018 (see Table 2).


Table 2. Summary of APMS collaborations (based on Scopus data)

Year | # of papers | # of countries | # of authors
2014 | 233 | 34 | 601
2015 | 164 | 28 | 418
2016 | 112 | 22 | 281
2017 | 122 | 32 | 351
2018 | 129 | 29 | 348
2000–2018 | 1,428 | 57 | 2,531

Furthermore, we analyzed the authorship networks and visualized the clusters using VOSviewer based on (i) individual authors (see Figs. 2 and 4), as well as (ii) their respective countries (see Figs. 3 and 5), for the two timeframes 2000–2018 (see Figs. 2 and 3) and 2014–2018 (see Figs. 4 and 5).

Fig. 2. Co-authorship Network Diagram between 2000–2018 (Authors with at least 4 papers)

Fig. 3. Network Diagram of Countries between 2000–2018 (Countries with at least 1 paper)


Fig. 4. Co-authorship Network Diagram between 2014–2018 (Authors with at least 2 papers)

Fig. 5. Network Diagram of Countries between 2014–2018 (Countries with at least 1 paper)

3.2 Most Productive and Highly-Cited Authors and Countries

We analyzed the most productive (measured by no. of published papers) and highly-cited (measured by no. of citations) authors and countries for our two chosen timeframes: 2000–2018 (see Table 3) and 2014–2018 (see Table 4). Furthermore, we analyzed the 10 most cited papers from 2000–2018 (see Table 5) and 2014–2018 (see Table 6).


Table 3. Most productive and highly-cited authors & countries (2000–2018)

Highly-cited authors (no. of citations): Taisch, M. (155); Thoben, K.-D. (67); Stahre, J. (64); Romero, D. (62); May, G. (57); Nielsen, K. (56); Bernus, P. (47); Noran, O. (47); Alfnes, E. (44); Dolgui, A. (44); Garetti, M. (44)

Most productive authors (no. of papers): Taisch, M. (49); Vendrametto, O. (36); Kiritsis, D. (32); Alfnes, E. (25); Nielsen, K. (25); Thoben, K.-D. (25); Sacomano, J. (24); Strandhagen, J. (23); Stich, V. (22); Dos Reis, J. (19); Abe, J. (19); Schuh, G. (19)

Most productive countries (no. of papers): Germany (191); Italy (189); Brazil (181); France (150); Norway (147); Japan (94); Switzerland (88); Denmark (81); United States (78); United Kingdom (63)

Table 4. Most productive and highly-cited authors & countries (2014–2018)

Highly-cited authors (no. of citations): Romero, D. (61); Stahre, J. (59); Bernus, P. (47); Noran, O. (47); Taisch, M. (47); Negri, E. (41); Nielsen, K. (36); Thoben, K.-D. (36); Brunoe, T. (35); Fumagalli, L. (34)

Most productive authors (no. of papers): Vendrametto, O. (26); Sacomano, J. (22); Nielsen, K. (21); Kiritsis, D. (20); Taisch, M. (18); Brunoe, T.D. (16); Alfnes, E. (16); Dos Reis, J. (15); Strandhagen, J. (15); Goncalves, R. (15); Abe, J. (15)

Most productive countries (no. of papers): Brazil (141); Germany (101); Norway (84); France (78); Italy (69); Switzerland (49); Japan (49); Denmark (47); United States (40); Sweden (40)

3.3 Most Relevant Keywords of APMS Proceedings

We decided to analyze the keywords instead of the titles and/or abstracts to identify trends regarding the key topics and research domains covered by the APMS conference over the last five years. Our rationale behind this decision is that keywords introduce less bias, e.g., through multiple usage of one word in the abstract of an individual paper, and are also supposedly more standardized to enable topical searches. While this is partly true, there is still a large variability among the keywords within the data.

Table 5. Top 10 highest-cited papers (2000–2018)

Authors | Year | Title (adapted/shortened) | Ref. | No.
Romero, D. | 2016 | The operator 4.0: Human CPS & adaptive automation | [3] | 27
Bogdanski, G. | 2013 | Ext. energy value stream appr. applied on electronics ind. | [4] | 22
Romero, D. | 2015 | Towards a human-centered ref. architecture | [5] | 20
Mourtzis, D. | 2015 | Perf. indicators for eval. of PSS design: A review | [6] | 19
Andersen, A. | 2015 | Reconf. manuf. on multi. levels: Lit. rev. & res. directions | [7] | 19
Fumagalli, L. | 2014 | Ontology-based modeling of manuf. & log. systems | [8] | 19
Bentaha, M. | 2013 | Stoch. formulation of disassembly line balancing probl. | [9] | 18
Bentaha, M. | 2013 | Chance constrained programming model … | [10] | 16
Trentesaux, D. | 2014 | Sustainability in manuf. operations scheduling | [11] | 16
Bocewicz, G. | 2012 | Cyclic steady state refinement: Multimodal proc. persp. | [12] | 16

Table 6. Top 10 highest-cited papers (2014–2018)

Authors | Year | Title (adapted/shortened) | Ref. | No.
Romero, D. | 2016 | The operator 4.0: Human CPS & adaptive automation | [3] | 27
Romero, D. | 2015 | Towards a human-centered ref. architecture | [5] | 20
Mourtzis, D. | 2015 | Perf. indicators for eval. of PSS design: A review | [6] | 19
Andersen, A. | 2015 | Reconf. manuf. on multi. levels: Lit. rev. & res. directions | [7] | 19
Fumagalli, L. | 2014 | Ontology-based modeling of manuf. & log. systems | [8] | 19
Trentesaux, D. | 2014 | Sustainability in manuf. operations scheduling | [11] | 16
De Carolis, A. | 2017 | Maturity model for digital readiness of manuf. companies | [13] | 14
Nielsen, P. | 2014 | An empirical investigation of lead time distributions | [14] | 13
Bruno, G. | 2014 | Expl. of semantic platform to store & reuse PLM knowl. | [15] | 13
Garza-Reyes, J. | 2014 | Lean & green – syn., diff., lim., & need for six sigma | [16] | 13

Table 7 illustrates the most used keywords from each of the 2014–2018 APMS proceedings, as well as the accumulated count for the five-year period. The number of displayed keywords varies from 6 to 9, following the methodology of including whole groups of equally frequent keywords only as long as they do not expand the list beyond ten entries. The total count of different keywords analyzed was 955 (2014); 670 (2015); 452 (2016); 518 (2017); 540 (2018); and 3,135 (total).
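The tie-group cut-off rule for the keyword lists can be made precise with a short sketch (an illustrative reconstruction, not the authors' actual script): keywords are grouped by count in descending order, and whole tie groups are added only while the list stays within ten entries.

```python
from collections import Counter
from itertools import groupby

def top_keywords(counts, limit=10):
    """Keep whole tie groups of equally frequent keywords, descending,
    stopping before a group would push the list beyond `limit` entries."""
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    selected = []
    for _, group in groupby(ranked, key=lambda kv: kv[1]):
        group = list(group)
        if len(selected) + len(group) > limit:
            break
        selected.extend(group)
    return selected

# Counts resembling the 2017 column of Table 7; 'iot' and 'scheduling'
# at count 3 are made-up fillers to show the cut-off behavior.
counts_2017 = Counter({"industry 4.0": 23, "manufacturing": 6, "smart manuf.": 6,
                       "lean manuf.": 5, "sustainability": 4, "education": 4,
                       "supply chain": 4, "digitization": 4, "logistics": 4,
                       "iot": 3, "scheduling": 3})
top = top_keywords(counts_2017)
```

Here the five keywords tied at four still fit (nine entries in total), while the next tie group would push the list past ten and is therefore dropped.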

3.4 Special Interest Groups (SIGs)

The data on the Special Interest Groups (SIGs) within IFIP WG 5.7 is rather limited, and there are only a few SIGs active at a given time. Figure 6 highlights the formed, active, merged, and resolved SIGs in the 21st Century.

Table 7. Top 6–9 most used keywords of APMS proceedings 2014–2018 (keyword: count)

2014: sustainability (15); SMEs (9); scheduling (7); simulation (7); case study (7); lean manuf. (7)
2015: cloud manuf. (10); engineer-to-order (9); lean manuf. (9); sustainability (8); paraconsistent logic (8); case study (7)
2016: paraconsistent logic (8); sustainability (7); emergy (5); supply chain (4); simulation (4); CPPS (4); innovation (4); lean manuf. (4)
2017: industry 4.0 (23); manufacturing (6); smart manuf. (6); lean manuf. (5); sustainability (4); education (4); supply chain (4); digitization (4); logistics (4)
2018: industry 4.0 (24); CPPS (8); mass customization (7); smart factory (7); lean manuf. (5); engineer-to-order (5)
2014–2018: industry 4.0 (50); sustainability (36); lean manuf. (30); engineer-to-order (24); simulation (22); CPPS (22)


Fig. 6. Timeframe of Active Special Interest Groups (SIGs) between 2000–2018

4 Discussion

The previous results section reported the ‘hard facts’ directly derived from the data. The analyses and visualizations chosen follow common standards in bibliometric analysis, e.g., authorship networks that illustrate the strength of relationships. We chose to provide more visualizations in lieu of detailed explanations given the page limit. In this section, we discuss and expand on these results and put them into context, adding information that we were not able to represent in the results above.

With the results of our authorship analysis showing a consistent representation of 22 to 34 different countries each year over the last five years, and 57 different countries in the 21st Century, APMS stands true to its claim of being a truly international conference with global impact.

The authors’ keywords are a main focus of our discussion, as there is wide variability in the terms used to describe similar domains. However, the clustering is inherently subjective, as the interpretation and definition of the cluster boundaries is subjective in itself. Therefore, the following discussion is not comprehensive, but a snapshot of the data the authors deemed most interesting given the context and objectives of this paper.

The popularity of the keyword ‘case study’ (see Table 7) is an indication that the APMS objective of practice-oriented research that provides value to industry is taken seriously. The keyword, while not always top-ranked, is consistently among the higher-ranked keywords every year. The keyword ‘industry 4.0’ is not only the most dominant keyword in the last two years and over the timeframe of 2014–2018, but when seen as a cluster, and thus combined with ‘smart manufacturing’, ‘CPS’, ‘smart factory’, and ‘intelligent manufacturing’, to name a few, it becomes even more dominant. When we interpret the cluster more broadly still, and include ‘product service systems’, ‘servitization’, ‘digital transformation’, and ‘internet of things’ as part of the ‘industry 4.0’ cluster as well, this trend is reinforced further. Overall, it can safely be concluded that Industry 4.0 is, unsurprisingly, a core topic of the APMS community and is here to stay for the near future.

The ‘smart factory’ keyword that emerges in 2018 can be traced back to the host country South Korea, where smart factory is the term used for their federal Industry


4.0/Smart Manufacturing program. The simultaneous emergence of ‘maturity model’ alongside ‘industry 4.0’ and ‘smart manufacturing’ may indicate the struggle of companies to adapt to and cope with these new paradigms.

There are many other clusters that emerge from the data and reflect the key topical domains covered. Of those, we would like to mention three specifically: (i) ‘supply chain’ is represented among the top keywords, however not in a position that does the topic justice when we look at the cluster that might include ‘logistics’, ‘production logistics’, ‘SCM’, and so forth. Supply chain and logistics have been (and seem to continue to be) an essential area of interest to the APMS community. When analyzing the keywords, a cluster around (ii) ‘data analytics’ emerges that includes ‘machine learning’, ‘modeling’, and ‘simulation’, as well as various specific algorithms. This is a very diverse cluster, yet of key interest to the community. Another cluster that we feel is worthy of discussion is (iii) ‘business model’, which highlights the economic viability and management perspective that many of the keywords reflect. This is another key aspect of the APMS community, which brings together different disciplines and thus different domains. Finally, we created a visual representation of all keywords used in the APMS papers from 2014–2018 as a word cloud (see Fig. 7) that allows a quick overview of key topics, but also shows the diversity of topics covered by this conference.

Fig. 7. Word cloud of all keywords from APMS proceedings 2014–2018

5 Conclusion and Outlook

This paper reflects on the rich history and topical development of IFIP WG 5.7 and the associated APMS conference over the course of the 21st Century. With the radical disruption of the Fourth Industrial Revolution already at full pace, this is a perfect time to critically reflect on whether the topical focus of the APMS conference is still up to date and a good representation of the reported work, or whether adjustments are necessary to be fit for the future.


Summarizing our findings, our analysis shows that the APMS conference is a truly international community with strong collaborations among its members. The keywords show that new topical areas emerge and become more dominant – especially the cluster around Industry 4.0, which is already reflected in the thriving ‘smart manufacturing’ SIG as well. The outlook is promising when we look at the most cited papers from 2000–2018, with 6 out of 10 being published within the last five years. However, increasing the number of citations of APMS proceedings should be a focus of the community in order to increase the impact and reputation of the conference.

There are several limitations that need to be kept in mind when reading our study. First, the early years of the 21st Century are not represented in the data, as the proceedings are not available on Scopus. Therefore, the total number of papers reported for the timeframe 2000–2018, as well as other derived results such as author networks, are not 100% accurate. However, the objective of this paper was to reflect the recent changes in scope and focus of the community, and in that sense the recent five years, for which all data was available and included in the analysis, can be considered more important. A second limitation that needs to be reported is the subjectivity that revolves around the clusters of keywords briefly discussed in Sect. 4. While the results of the keyword analysis are objective and accurate, the discussion is influenced by the authors’ interpretation and, as such, is subjective to some extent.

Acknowledgment. This work would not have been possible without the help of former and current officers of IFIP WG 5.7. We especially thank Dimitris Kiritsis, Marco Taisch, Umit Bititci, Klaus-Dieter Thoben, and Gregor von Cieminski, as well as the helpful APMS community at large. This work was supported by the J. Wayne and Kathy Richards Faculty Fellowship in Engineering at West Virginia University.

References

1. Thoben, K.-D., Wiesner, S., Wuest, T.: “Industrie 4.0” and smart manufacturing – a review of research issues and application examples. Int. J. Autom. Technol. 11(1), 4–19 (2017)
2. Mittal, S., Khan, M., Romero, D., Wuest, T.: Smart manufacturing: characteristics, technologies and enabling factors. Proc. Inst. Mech. Eng., Part B: J. Eng. Manuf. 1–20 (2017). Online first
3. Romero, D., Bernus, P., Noran, O., Stahre, J., Berglund, Å.F.: The operator 4.0: human cyber-physical systems & adaptive automation towards human-automation symbiosis work systems. In: Nääs, I., et al. (eds.) APMS 2016. IAICT, vol. 488, pp. 677–686. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-51133-7_80
4. Bogdanski, G., Schönemann, M., Thiede, S., Andrew, S., Herrmann, C.: An extended energy value stream approach applied on the electronics industry. In: Emmanouilidis, C., Taisch, M., Kiritsis, D. (eds.) APMS 2012. IAICT, vol. 397, pp. 65–72. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40352-1_9
5. Romero, D., Noran, O., Stahre, J., Bernus, P., Fast-Berglund, Å.: Towards a human-centred reference architecture for next generation balanced automation systems: human-automation symbiosis. In: Umeda, S., Nakano, M., Mizuyama, H., Hibino, H., Kiritsis, D., von Cieminski, G. (eds.) APMS 2015. IAICT, vol. 460, pp. 556–566. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22759-7_64


6. Mourtzis, D., Fotia, S., Doukas, M.: Performance indicators for the evaluation of product-service systems design: a review. In: Umeda, S., Nakano, M., Mizuyama, H., Hibino, H., Kiritsis, D., von Cieminski, G. (eds.) APMS 2015. IAICT, vol. 460, pp. 592–601. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22759-7_68
7. Andersen, A.-L., Brunoe, T.D., Nielsen, K.: Reconfigurable manufacturing on multiple levels: literature review and research directions. In: Umeda, S., Nakano, M., Mizuyama, H., Hibino, H., Kiritsis, D., von Cieminski, G. (eds.) APMS 2015. IAICT, vol. 459, pp. 266–273. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22756-6_33
8. Fumagalli, L., Pala, S., Garetti, M., Negri, E.: Ontology-based modeling of manufacturing and logistics systems for a new MES architecture. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.) APMS 2014. IAICT, vol. 438, pp. 192–200. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44739-0_24
9. Bentaha, M.L., Battaïa, O., Dolgui, A.: A stochastic formulation of the disassembly line balancing problem. In: Emmanouilidis, C., Taisch, M., Kiritsis, D. (eds.) APMS 2012. IAICT, vol. 397, pp. 397–404. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40352-1_50
10. Bentaha, M.L., Battaïa, O., Dolgui, A.: Chance constrained programming model for stochastic profit–oriented disassembly line balancing in the presence of hazardous parts. In: Prabhu, V., Taisch, M., Kiritsis, D. (eds.) APMS 2013. IAICT, vol. 414, pp. 103–110. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41266-0_13
11. Trentesaux, D., Prabhu, V.: Sustainability in manufacturing operations scheduling: stakes, approaches and trends. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.) APMS 2014. IAICT, vol. 439, pp. 106–113. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44736-9_13
12. Bocewicz, G., Nielsen, P., Banaszak, Z.A., Dang, V.Q.: Cyclic steady state refinement: multimodal processes perspective. In: Frick, J., Laugen, B.T. (eds.) APMS 2011. IAICT, vol. 384, pp. 18–26. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33980-6_3
13. De Carolis, A., Macchi, M., Negri, E., Terzi, S.: A maturity model for assessing the digital readiness of manufacturing companies. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) APMS 2017. IAICT, vol. 513, pp. 13–20. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66923-6_2
14. Nielsen, P., Michna, Z., Do, N.A.D.: An empirical investigation of lead time distributions. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.) APMS 2014. IAICT, vol. 438, pp. 435–442. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44739-0_53
15. Bruno, G., Antonelli, D., Korf, R., Lentes, J., Zimmermann, N.: Exploitation of a semantic platform to store and reuse PLM knowledge. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.) APMS 2014. IAICT, vol. 438, pp. 59–66. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44739-0_8
16. Garza-Reyes, J.A., Winck Jacques, G., Lim, M.K., Kumar, V., Rocha-Lona, L.: Lean and green – synergies, differences, limitations, and the need for six sigma. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.) APMS 2014. IAICT, vol. 439, pp. 71–81. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44736-9_9

Smart Supply Networks

Price Decision Making in a Centralized/Decentralized Solid Waste Disposal Supply Chain with One Contractor and Two Disposal Facilities

Iman Ghalehkhondabi¹ and Reza Maihami²

¹ School of Business and Leadership, Our Lady of the Lake University, San Antonio, TX 78207, USA
[email protected]
² School of Business and Leadership, Our Lady of the Lake University, Houston, TX 77067, USA
[email protected]

Abstract. Solid waste management has been an interesting topic for researchers in the last few decades. This paper studies a price-sensitive demand for the waste disposal service of two disposal facilities that deal with a contractor to gain more profit. The waste disposal process is studied in a supply chain structure where a contractor collects the waste from the producers and transports it to the facilities for disposal. Two scenarios are proposed: in the first, the disposal facilities lead a price Stackelberg game over the contractor; in the second, both disposal facilities and the contractor cooperate on the chain decision variables in an integrated framework. A numerical example is presented to illustrate the efficiency and applications of the proposed model.

Keywords: Stackelberg game · Coordinated supply chain · Solid waste management · Pricing



1 Introduction

Municipal solid waste includes the waste produced by residential and business buildings, public parks, municipalities, and service providers. Municipal solid waste is generally categorized into recyclable, hazardous, and garbage waste [1]. It is a challenge for many municipalities to design or select a solid waste management system that minimizes the negative environmental impacts of waste and ensures a reasonable level of health and welfare for the citizens [2]. Solid waste management includes managing waste generation, collection, separation, transportation, treatment, distribution, and disposal; therefore, we can define waste management as a supply chain management problem [3]. An inappropriate waste management system can lead to disease transmission, contaminated ground and surface water, negative impacts on the ecosystem, greenhouse gas emissions, and negative impacts on tourism and other business activities [4].

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 17–26, 2019. https://doi.org/10.1007/978-3-030-29996-5_2


There are some operations research capabilities which motivate using mathematical models in solid waste management area. For instance, the hidden relation between different parameters in a problem become noticeable when developing a mathematical model. Or having the mathematical model enables the decision makers to test different scenarios and make informed decisions [5]. The environmental effects of solid waste management have been an important place of academic discussions in last two decades. Assessment of hospital biomedical waste management [6], relations between solid waste management and climate change [7], and sustainable design of a waste management system [3] are among some of the recent studies in this area. Besides the environmental impacts, economics always play an important role in municipal services. Pricing of the waste management service has been studied by many researchers. But, most of the waste management pricing studies consider the elasticity of waste production versus waste management service price rather than studying the profitability of the waste disposal facilities as economic entities. Weight-based pricing in the collection of household waste [8], effects of unitbased pricing on household waste collection demand [9], public willingness to pay and participate in waste management [10] are among the works which studied pricing in waste management systems. In this study, we consider a solid waste disposal supply chain including two disposal facilities who compete for selling the service, and one contractor who is the dealer between the municipal waste producers and the disposal facilities. Demand for each disposal facility is a function of its own price and the competitor price. We solve the model under two scenarios. 
In the first scenario we consider a Stackelberg game, where the disposal facilities determine their service prices, and the contractor determines its own profit margin based on the given prices, setting the price for the final customer (the waste producer). In the second scenario, an integrated supply chain is considered, where a single agent makes the decisions for all entities; the service price and the contractor's profit margin are therefore determined at the same time. Figure 1 shows the structure of this model.

Fig. 1. Waste disposal supply chain

Price Decision Making in a Centralized/Decentralized Solid Waste Disposal

19

The objective of this study is to show the effect of integrated decision making on the decision variables and profit of a waste disposal supply chain.

2 Model Representation

The following notation is used in modeling the problem:

$D_i$: demand for the service of facility $i$
$d$: total potential demand for solid waste disposal
$p_i$: customer price for the service provided by facility $i$ (per kilogram)
$a$: price sensitivity of the demand for facility $i$ with respect to its own price
$b$: price sensitivity of the demand for facility $i$ with respect to its competitor's price
$h_i$: price facility $i$ asks from the contractor (per kilogram)
$l_i$: contractor's profit for selling the service of facility $i$ (per kilogram)
$o_i$: operation cost at facility $i$ (per kilogram)

We consider the demand for urban solid waste disposal to be a function of price: increasing the price at disposal facility 1 (or decreasing the price at facility 2) may reduce the demand for facility 1 (Choi, 1991). Equation (1) gives the price-sensitive demand function for the solid waste disposal facilities:

$$D_i = d - a p_i + b p_{3-i}, \quad i = 1, 2, \qquad d, a, b > 0 \tag{1}$$

According to Jeuland and Shugan (1983), $a > b$ should hold true; a larger difference between these two parameters means the two services have a lower possibility of substitution.

2.1 Decentralized Waste Management Supply Chain

Considering the demand function (1), we can define the disposal facilities' and the contractor's profit functions $\pi_{F_i}$ and $\pi_{CO}$ as follows:

$$\pi_{F_i} = (h_i - o_i)(d - a p_i + b p_{3-i}), \quad i = 1, 2 \tag{2}$$

$$\pi_{CO} = \sum_{i=1}^{2} l_i \left(d - a p_i + b p_{3-i}\right) \tag{3}$$

The profit the contractor makes on selling the service of each facility equals the difference between the price it receives from the customers and the cost it pays to that facility. In our first scenario, the disposal facilities have enough power to make their price decisions $h_i$ and dictate them to the contractor; given $h_i$, the contractor determines the final price $p_i$ asked from the customers. Substituting $l_i = p_i - h_i$, the contractor's reaction function to the prices $h_1$ and $h_2$ can be calculated through the first-order derivatives of (3):

$$\frac{\partial \pi_{CO}}{\partial p_1} = d - 2 a p_1 + 2 b p_2 + a h_1 - b h_2 \tag{4}$$

$$\frac{\partial \pi_{CO}}{\partial p_2} = d - 2 a p_2 + 2 b p_1 + a h_2 - b h_1 \tag{5}$$

The contractor's profit function is jointly concave in $p_1$ and $p_2$ if the Hessian matrix (6) is negative definite:

$$\begin{bmatrix} \frac{\partial^2 \pi_{CO}}{\partial p_1^2} & \frac{\partial^2 \pi_{CO}}{\partial p_1 \partial p_2} \\[4pt] \frac{\partial^2 \pi_{CO}}{\partial p_1 \partial p_2} & \frac{\partial^2 \pi_{CO}}{\partial p_2^2} \end{bmatrix} = \begin{bmatrix} -2a & 2b \\ 2b & -2a \end{bmatrix}, \qquad \det = 4a^2 - 4b^2 \tag{6}$$

Under our assumption $a > b$, $4a^2 - 4b^2$ is positive and the Hessian matrix is negative definite. Setting Eqs. (4) and (5) to zero, the optimal prices are:

$$p_1 = \frac{d}{2(a - b)} + \frac{h_1}{2} \tag{7}$$

$$p_2 = \frac{d}{2(a - b)} + \frac{h_2}{2} \tag{8}$$

Substituting the optimal price values into Eq. (2), the first-order derivative of (2) with respect to $h_i$ gives the maximization condition for the disposal facilities' profit functions:

$$\frac{\partial \pi_{F_i}}{\partial h_i} = \frac{1}{2}\left(d - 2 a h_i + b h_{3-i} + a o_i\right), \quad i = 1, 2 \tag{9}$$

Setting Eq. (9) to zero gives the optimal $h_i$ values:

$$h_i = \frac{d}{2a - b} + \frac{a \left(2 a o_i + b o_{3-i}\right)}{(2a - b)(2a + b)}, \quad i = 1, 2 \tag{10}$$

Substituting (10) into (7), we have the optimal price asked from the customers:

$$p_i = \frac{d \left(3a - 2b\right)}{2 (2a - b)(a - b)} + \frac{a \left(2 a o_i + b o_{3-i}\right)}{2 (2a - b)(2a + b)}, \quad i = 1, 2 \tag{11}$$

To simplify the problem, we can reasonably assume that the operation cost is the same for both disposal facilities. This assumption also removes the dependence of our analysis on the technology used at each facility. Using $o$ as the operation cost at both facilities, the optimal $h_i$ and $p_i$ prices are as follows:

$$h_i = \frac{d + a o}{2a - b}, \quad i = 1, 2 \tag{12}$$

$$p_i = \frac{d \left(3a - 2b\right) + a o \left(a - b\right)}{2 (2a - b)(a - b)}, \quad i = 1, 2 \tag{13}$$

Important model outputs can be derived from the values in (12) and (13), as shown in Table 1. Note that a disposal facility can only cover its costs if the service price it asks from the contractor is greater than or equal to the operation cost; therefore, (14) should always hold true:

$$\frac{d + a o}{2a - b} \geq o \quad \Longrightarrow \quad o \leq \frac{d}{a - b} \tag{14}$$
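The equal-cost equilibrium can be checked numerically. The short script below is our illustration, not part of the paper; the parameter values are arbitrary and only need to satisfy $a > b > 0$ and $o \leq d/(a-b)$. It confirms that the closed-form prices (12) and (13) satisfy the first-order conditions (4) and (9):

```python
# Numerical check of the decentralized equilibrium with equal operation costs
# (illustrative parameter values, chosen to satisfy a > b > 0 and o < d / (a - b)).
d, a, b, o = 1000.0, 8.0, 3.0, 20.0

# Closed-form equilibrium prices, Eqs. (12) and (13)
h = (d + a * o) / (2 * a - b)
p = (d * (3 * a - 2 * b) + a * o * (a - b)) / (2 * (2 * a - b) * (a - b))

# Contractor first-order condition (4) with symmetric prices p1 = p2, h1 = h2
foc_contractor = d - 2 * a * p + 2 * b * p + a * h - b * h
# Facility first-order condition (9) with symmetric h1 = h2
foc_facility = d - 2 * a * h + b * h + a * o

assert abs(foc_contractor) < 1e-9
assert abs(foc_facility) < 1e-9
assert h >= o            # cost-coverage condition (14)
print(f"h = {h:.3f}, p = {p:.3f}")
```

Both first-order conditions evaluate to zero at the closed-form prices, and the cost-coverage condition (14) holds.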

2.2 Integrated Waste Management Supply Chain

Our second scenario considers the case where a single agent makes the decisions for all players in the solid waste disposal chain. In this case there is only one profit function, the supply chain profit function:

$$\max_{l_1, l_2} \; \pi_{SC} = \sum_{i=1}^{2} l_i \left(d - a (o + l_i) + b (o + l_{3-i})\right) \tag{15}$$

The model's optimal values can be derived through the optimal values of $l_i$. The first-order conditions with respect to $l_i$ follow:

$$\frac{\partial \pi_{SC}}{\partial l_1} = d - a (o + 2 l_1) + b o + 2 b l_2 \tag{16}$$

$$\frac{\partial \pi_{SC}}{\partial l_2} = d - a (o + 2 l_2) + b o + 2 b l_1 \tag{17}$$

The Hessian matrix is:

$$\begin{bmatrix} \frac{\partial^2 \pi_{SC}}{\partial l_1^2} & \frac{\partial^2 \pi_{SC}}{\partial l_1 \partial l_2} \\[4pt] \frac{\partial^2 \pi_{SC}}{\partial l_1 \partial l_2} & \frac{\partial^2 \pi_{SC}}{\partial l_2^2} \end{bmatrix} = \begin{bmatrix} -2a & 2b \\ 2b & -2a \end{bmatrix}, \qquad \det = 4a^2 - 4b^2 \tag{18}$$

so the solid waste disposal supply chain profit has a maximum for $a > b$. Setting (16) and (17) to zero, we have:

$$l_1 = \frac{d - o a + o b + 2 b l_2}{2a} \tag{19}$$

$$l_2 = \frac{d - o a + o b + 2 b l_1}{2a} \tag{20}$$

Substituting (20) into (19), we have:

$$l_1 = l_2 = \frac{d - o a + o b}{2a - 2b} \tag{21}$$

Table 1. Optimal prices and profits (writing $X = d - o(a - b)$)

Equil. value | Decentralized chain | Integrated chain
$h_i$ | $(d + ao)/(2a - b)$ | –
$p_i$ | $[d(3a - 2b) + ao(a - b)]/[2(2a - b)(a - b)]$ | $\frac{1}{2}\left(o + d/(a - b)\right)$
$l_i$ | $aX/[2(2a^2 - 3ab + b^2)]$ | $X/(2a - 2b)$
$D_i$ | $aX/(4a - 2b)$ | $X/2$
$\pi_{F_i}$ | $aX^2/[2(2a - b)^2]$ | –
$\pi_{CO}$ | $a^2 X^2/[2(a - b)(2a - b)^2]$ | –
$\pi_{SC}$ | $a(3a - 2b)X^2/[2(a - b)(2a - b)^2]$ | $X^2/[2(a - b)]$
3 Discussion

In this section we analyze the behaviour of the system through algebraic analysis.

Proposition 1. The price values in the two scenarios are ordered as $p_i^{Dec} > p_i^{Int}$.

Proof.

$$\frac{d(3a - 2b) + a o (a - b)}{2(2a - b)(a - b)} > \frac{1}{2}\left(o + \frac{d}{a - b}\right) \;\Longleftrightarrow\; 0 < \frac{d - o a + o b}{4a - 2b}$$

We know that $o \leq \frac{d}{a - b}$ and $a > b$, so $0 < \frac{d - o a + o b}{4a - 2b}$ is always true. □

Unlike in the integrated supply chain, each disposal facility in a decentralized chain adds its own profit to the service operation cost, and the contractor adds its own profit $l_i$ as well. The addition of a marginal profit by every chain player raises the final service price of the decentralized chain above that of the integrated chain.

Proposition 2. The contractor's profit margins are ordered as $l_i^{Dec} < l_i^{Int}$.


Proof.

$$\frac{a\left(d - o(a - b)\right)}{2\left(2a^2 - 3ab + b^2\right)} < \frac{d - o a + o b}{2a - 2b} \;\Longleftrightarrow\; 0 < (a - b)\left(d - o(a - b)\right)$$

which holds since $a > b$ and $0 < d - o(a - b)$. □

In the integrated scenario there is no separate disposal profit for the individual disposal facilities; all of the supply chain profit arises as the contractor's profit, which the contractor must then divide among the players of the supply chain. Without the disposal facility margin ($h_i$) in the integrated chain, the contractor has more room to increase its own profit margin.

Proposition 3. The service demands for each facility in the two scenarios are ordered as $D_i^{Dec} < D_i^{Int}$.

Proof.

$$\frac{a\left(d - o(a - b)\right)}{4a - 2b} < \frac{1}{2}\left(d - o(a - b)\right) \;\Longleftrightarrow\; \frac{a}{2a - b} < 1 \;\Longleftrightarrow\; b < a \qquad \square$$

By Proposition 1, the price in the decentralized chain is higher than in the integrated chain. Therefore, the demand for the service provided by the decentralized chain is lower than the demand for the integrated chain.

Proposition 4. The supply chain profit in the coordinated scenario exceeds the supply chain profit in the decentralized scenario.

Proof.

$$\frac{\left(d - o(a - b)\right)^2}{2(a - b)} - \frac{a(3a - 2b)\left(d - o(a - b)\right)^2}{2(a - b)(2a - b)^2} = \frac{(a - b)\left(d - o(a - b)\right)^2}{2(2a - b)^2} > 0 \qquad \square$$

Many former studies confirm that, in most cases, integration improves the profitability of a supply chain. Proposition 4 shows that the profitability of the solid waste disposal supply chain increases under cooperation relative to decentralized decision making.
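The four propositions can also be spot-checked numerically. The script below is our illustration: it draws random parameter sets satisfying the model assumptions ($a > b > 0$, $0 < o < d/(a-b)$) and verifies each ordering:

```python
import random

# Spot-check of Propositions 1-4 over random parameter draws that satisfy
# the model assumptions a > b > 0 and 0 < o < d / (a - b) (illustrative).
random.seed(0)
for _ in range(10_000):
    a = random.uniform(1.0, 50.0)
    b = random.uniform(0.0, a * 0.999)
    d = random.uniform(10.0, 5000.0)
    o = random.uniform(0.0, d / (a - b) * 0.999)
    X = d - o * (a - b)

    p_dec = (d * (3 * a - 2 * b) + a * o * (a - b)) / (2 * (2 * a - b) * (a - b))
    p_int = (d + o * (a - b)) / (2 * (a - b))
    l_dec = a * X / (2 * (2 * a - b) * (a - b))
    l_int = X / (2 * (a - b))
    D_dec = a * X / (2 * (2 * a - b))
    D_int = X / 2
    pi_dec = a * (3 * a - 2 * b) * X ** 2 / (2 * (a - b) * (2 * a - b) ** 2)
    pi_int = X ** 2 / (2 * (a - b))

    assert p_dec > p_int      # Proposition 1: decentralized price is higher
    assert l_dec < l_int      # Proposition 2: integrated margin is higher
    assert D_dec < D_int      # Proposition 3: integrated demand is higher
    assert pi_int > pi_dec    # Proposition 4: integrated chain profit is higher
print("Propositions 1-4 hold on all sampled parameter sets")
```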

4 Numerical Example

A simple numerical example shows the variation of the equilibrium values with different $b$ values. We assume $d = 750$, $a = 45$, and $o = 5$. Based on the initial model assumption $a > b$, $b$ can vary from 0 to 44.9. Figure 2 shows that as $b$ approaches $a$, the disposal facility price in the decentralized chain and the final prices in both the decentralized and integrated chains increase. When $b$ is very close to $a$, there is little difference between the two service providers and the services are more substitutable. Unlike in the current model, if the disposal facilities were not assumed to charge the same price, a price-lowering contest between the competitors would be expected.
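The example can be reproduced with a few lines of code. The sketch below is our illustration, using the closed-form decentralized prices (12) and (13) and the integrated price from Table 1:

```python
# Reproduction of the numerical example (d = 750, a = 45, o = 5), using the
# closed-form equilibrium prices from Sect. 2 (illustrative sketch).
d, a, o = 750.0, 45.0, 5.0

def prices(b):
    h = (d + a * o) / (2 * a - b)                                                   # Eq. (12)
    p_dec = (d * (3 * a - 2 * b) + a * o * (a - b)) / (2 * (2 * a - b) * (a - b))   # Eq. (13)
    p_int = (d + o * (a - b)) / (2 * (a - b))                                       # integrated price
    return h, p_dec, p_int

for b in (10.0, 20.0, 30.0, 40.0):
    h, p_dec, p_int = prices(b)
    assert p_dec > p_int          # Proposition 1 holds at every sampled b
    print(f"b = {b:4.1f}  h = {h:6.2f}  p_dec = {p_dec:6.2f}  p_int = {p_int:6.2f}")

# All three prices rise as b approaches a, i.e. as the two services
# become closer substitutes (compare Fig. 2).
assert prices(40.0)[1] > prices(10.0)[1]
```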

Fig. 2. Price variations versus b (series: disposal facility price; customer price, decentralized; customer price, integrated)

Figure 2 shows that increasing $b$ increases the service price in both scenarios. By selling a more expensive service, the contractor can also be expected to make more profit. Figure 3 shows that the contractor makes more profit in the integrated channel than in the decentralized one.

Fig. 3. Contractor profit versus b (series: contractor profit, decentralized; contractor profit, integrated)

Demand, which depends on the sale price, determines the total profit in both scenarios. In Propositions 1 and 4, we proved that the integrated price is lower than the price in the decentralized scenario and that the total profit in the integrated scenario is greater than in the decentralized one. Thus, the demand level is higher in the integrated scenario. From a managerial viewpoint, if the facilities want more profit, they should follow the scenario that leads to more demand (Fig. 4). Larger $b$ values lead to more demand and higher prices; therefore, as the two services become more substitutable, the total profit of the disposal chain increases. Figure 5 shows that the total profit in the integrated chain is slightly higher than that of the decentralized chain.

Fig. 4. Demand for each disposal facility versus b (series: demand, decentralized; demand, integrated)

Fig. 5. Total profit versus b (series: chain profit, decentralized; chain profit, integrated)

5 Conclusions

The waste management practice of a service supply chain with two disposal facilities and a contractor has been studied. It is shown that operating as an integrated supply chain can slightly improve the profitability of the chain. If the services of the disposal facilities become more similar, the facilities can sell the service at a higher price in both the decentralized and integrated supply chains. Under the parameter assumptions of this study, demand also increases as the services of the two disposal facilities become more similar.

References

1. Asefi, H., Lim, S., Maghrebi, M., Shahparvari, S.: Mathematical modelling and heuristic approaches to the location-routing problem of a cost-effective integrated solid waste management. Ann. Oper. Res. 273(1–2), 75–110 (2019)
2. Heidari, R., Yazdanparast, R., Jabbarzadeh, A.: Sustainable design of a municipal solid waste management system considering waste separators: a real-world application. Sustain. Cities Soc. 47, 101457 (2019)


3. Mohammadi, M., Jämsä-Jounela, S.-L., Harjunkoski, I.: Optimal planning of municipal solid waste management systems in an integrated supply chain network. Comput. Chem. Eng. 123, 155–169 (2019)
4. Chinasho, A.: Review on community based municipal solid waste management and its implication for climate change mitigation. Am. J. Sci. Ind. Res. 6(3), 41–46 (2015)
5. Pires, A., Martinho, G., Rodrigues, S., Gomes, M.I.: Optimization in waste collection to reach sustainable waste management. In: Sustainable Solid Waste Collection and Management, pp. 207–238. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-93200-2_12
6. Alam, I., Alam, G., Ayub, S., Siddiqui, A.A.: Assessment of bio-medical waste management in different hospitals in Aligarh city. In: Kalamdhad, A.S., Singh, J., Dhamodharan, K. (eds.) Advances in Waste Management, pp. 501–510. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-0215-2_36
7. de Oliveira, J.A.P.: Intergovernmental relations for environmental governance: cases of solid waste management and climate change in two Malaysian states. J. Environ. Manage. 233, 481–488 (2019)
8. Linderhof, V., Kooreman, P., Allers, M., Wiersma, D.: Weight-based pricing in the collection of household waste: the Oostzaan case. Resour. Energy Econ. 23(4), 359–371 (2001)
9. Van Beukering, P.J., Bartelings, H., Linderhof, V.G., Oosterhuis, F.H.: Effectiveness of unit-based pricing of waste in the Netherlands: applying a general equilibrium model. Waste Manag. 29(11), 2892–2901 (2009)
10. Han, Z., Zeng, D., Li, Q., Cheng, C., Shi, G., Mou, Z.: Public willingness to pay and participate in domestic waste management in rural areas of China. Resour. Conserv. Recycl. 140, 166–174 (2019)

Understanding the Impact of User Behaviours and Scheduling Parameters on the Effectiveness of a Terminal Appointment System Using Discrete Event Simulation

Mihai Neagoe1, Hans-Henrik Hvolby2,3, Mohammad Sadegh Taskhiri1, and Paul Turner1

1 ARC Centre for Forest Value, Discipline of ICT, College of Sciences and Engineering, University of Tasmania, Hobart, Australia
[email protected]
2 Centre for Logistics, Department of Materials and Production, Aalborg University, Aalborg, Denmark
3 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway

Abstract. This research improves understanding of the impact of specific types of truck driver behaviour and temporal scheduling on the effectiveness of a terminal appointment system. A discrete event simulation model of a bulk cargo marine terminal is developed to analyse the impact of parameters related to driver behaviour (punctuality and proportion of planned appointments) and temporal scheduling (appointments per time window and time window spacing) on truck flows and turnaround times at the terminal. The model is based on an Australian wood chip export marine terminal currently experiencing significant truck congestion. The terminal operator and stakeholders have expressed interest in the implementation of an appointment system to address this issue, and the modelling presented in this research was used to inform their investigation into developing an appointment system solution. Simulation results indicate that the proportion of planned appointments, used as a proxy for appointment system use, has a significant impact on truck turnaround times, whereas greater truck arrival punctuality only marginally improves them. Interestingly, most optimization approaches continue to focus on improving punctuality through service rules or financial penalties in order to achieve optimal turnaround times. However, the additional cost, in terms of complexity or assumptions, of optimal solutions over non-optimal approaches is rarely weighed against the marginal improvements they generate. By involving terminal users (drivers and transporters) in the design of an appointment system and its scheduling parameters, terminal operators can significantly improve appointment system use and effectiveness by increasing the probability of positive user behaviours.

Keywords: Transport management · Supply chain collaboration · User requirements · Congestion management

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 27–34, 2019. https://doi.org/10.1007/978-3-030-29996-5_3


28

M. Neagoe et al.

1 Introduction

Terminal appointment systems are one of the most effective methods to manage congestion and communicate with multiple users at marine terminals. The system's performance in terms of truck turnaround times and equipment use efficiency can be affected by the system's parameters and by the behaviours of the terminal users. System parameters can include the lead time for selecting an appointment time, the appointment window length [1], the number of appointments per time window [2, 3], appointment spacing, and truck servicing rules [4]. User behaviours can be modelled by the probability that drivers miss appointments, arrive un-appointed [3], or by their arrival punctuality [4, 5]. The system parameters are often determined by optimization approaches: linear programming [6], queuing theory [7], or simulation [8]. Some user behaviours are seen as disruptions to the optimal system solution that must be dealt with. Some authors recommend penalty systems to enforce compliance with the appointment system schedules [5], while others introduce more complex truck service rules to maintain a high level of system efficiency [4]. Although enforcement and system-rule approaches have merit in moderating users' behaviours, they suffer from a series of shortcomings:

• The information required for complex system rules may be difficult to collect in real time, making implementation in real-life scenarios challenging;
• Neither approach (enforcement nor system rules) is based on an understanding of the underlying causes of the users' behaviours, which can lead to unexpected outcomes such as system misuse [9];
• Users' involvement in decisions regarding the system's parameters and functionality is typically limited [10].

The potential consequences of the lack of involvement in decision making are explained by Ackoff [11]: "In problems, the solution to which involve the reactions of others, their participation in the problem-solving process is the best protection against unexpected responses […] A failure to consult others who have a stake in our decision is often seen as an act of aggression".

This research therefore introduces a discrete event simulation model of a bulk cargo marine terminal to analyse the impact of parameters related to driver behaviour (punctuality and proportion of planned appointments) and temporal scheduling (appointments per time window and time window spacing) on truck flows and turnaround times at the terminal. The modelling results are then used to support the involvement of terminal users prior to the implementation of an appointment system, to understand their requirements, the consequences for users' behaviours, and the consequences of those behaviours on the effectiveness of the system.

2 The Wood Chip Export Terminal Field Site

The marine terminal on which the modelling work is based is an Australian wood chip export facility. Wood chips are processed from logs in facilities located in close proximity to the terminal and then delivered to the terminal. Wood chips are stored at the terminal and subsequently loaded on dry bulk cargo ships belonging to international


pulp and paper producers. The terminal receives approximately 1.6 million tons of wood chips and handles roughly 52,000 truck deliveries per year. The terminal's customers have outsourced the transportation task between their production sites and the terminal to a stable base of transport contractors, creating a small and relatively closed transport system. At the terminal, trucks are first weighed at a weigh-bridge. Next, trucks drive to an on-dock staging area where they wait for one of two unloading ramps to become available. Once a ramp is available, trucks are unloaded and then drive to a weigh-bridge to be weighed once more. The terminal and supply chain setup have been explored in greater detail in previous work [12].

The terminal was experiencing significant truck congestion. The consequences of congestion included increased costs for transporters, increased supply chain uncertainty for terminal customers, and additional staff and maintenance scheduling for the terminal operator. The stakeholders were considering a range of potential options to mitigate congestion and expressed interest in an appointment system to manage truck flows at the terminal. The researchers developed a simulation model to further understanding of the consequences that different system parameters and behaviours can have on truck waiting and turnaround times, and ultimately to inform the stakeholders' investigation into the development of an appointment system solution.

3 Data Collection and Simulation Model

Data on truck arrivals were collected from the terminal weigh-bridge database, and geolocation data were collected from GPS units mounted on several trucks. These data were supplemented by on-site observations by the research team and discussions with stakeholders. The GPS data were used in conjunction with geo-fences set up around terminal infrastructure to quantify the duration of truck visits at every stage. The truck arrival frequency and geo-fence visit duration data were analysed with Arena Input Analyzer to generate distributions. Approximately 7 months of truck arrival data and 3 months of geo-location data were used. The fitted distributions formed the input for the discrete-event simulation model.

The simulation model, presented in Table 1, comprises two stages: the truck arrival generator and the truck processing. The lack of complexity in the model's design is purposeful, as the model and results primarily served as a discussion point with the terminal's and its users' staff, who had diverse demographic and socioeconomic backgrounds. The use of a relatively simple model aimed to improve its accessibility to a broader audience. The model is implemented in the Python programming language.

In the first stage, the truck arrivals are generated. The planned arrivals parameter indicates the percentage of appointed arrivals and walk-ins. The arrival time of each truck is then calculated based on whether or not the respective truck has an appointment. A time-window spacing coefficient is applied to each one-hour interval (e.g. a 6-min spacing means that within a 60-min interval, the first 6 min do not have any appointments). The truck arrival list is then sorted in ascending order of arrival times and fed to a generator function which creates truck objects with payload, capacity, and product characteristics.

Table 1. Simulation model stages and steps

Stage 1 – truck arrival generator
1.1   Random choice of appointed/un-appointed arrival;
1.2 A If un-appointed arrival: arrival time = previous truck arrival time + inter-arrival time (gamma, k = 1.49, θ = 6.97);
1.2 B Else appointed arrival: arrival time = previous appointment time + spacing + appointment interval + punctuality;
1.3   Sort truck list based on truck arrival times;
1.4   Generate truck object (payload, capacity, product and arrival times);

Stage 2 – truck processing
2.1   Next truck object from arrival list;
2.2   Weigh-in at weigh-bridge (1 min);
2.3   Drive to unloading area (1 min);
2.4 A If any unloading ramp free and available: unloading, lognormal (m = 5.16, s = 3.97);
2.4 B Else no unloading ramp free and available: accrue waiting time;
2.5   Drive to weigh-bridge (2 min);
2.6 A If weigh-bridge free: weighing, normal (μ = 3.46, σ = 1.68);
2.6 B Else weigh-bridge not free: accrue waiting time;
2.7   Calculate total service and waiting times for truck;
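The two stages in Table 1 can be sketched in Python, the language the model is implemented in. The sketch below is our illustration, not the authors' implementation: the function names and event bookkeeping are ours, and the lognormal unloading parameters (m = 5.16, s = 3.97) are assumed to be the mean and standard deviation in minutes and are converted to log-space before sampling.

```python
import math
import random

# Illustrative sketch of the two-stage model in Table 1 (not the authors' code).
# Trucks are served first-come, first-served, irrespective of appointment time.
random.seed(42)

def gen_arrivals(n_trucks, planned_share=0.5, spacing=6.0, app_interval=10.0, punct_sd=5.0):
    """Stage 1: interleave appointed and walk-in arrival times (minutes)."""
    arrivals, t_walkin, t_app = [], 0.0, 0.0
    for _ in range(n_trucks):
        if random.random() < planned_share:          # step 1.1: appointed arrival
            t_app += spacing + app_interval
            arrivals.append(t_app + random.gauss(0.0, punct_sd))  # punctuality noise
        else:                                        # step 1.2 A: walk-in arrival
            t_walkin += random.gammavariate(1.49, 6.97)           # inter-arrival time
            arrivals.append(t_walkin)
    return sorted(arrivals)                          # step 1.3

def process(arrivals):
    """Stage 2: weigh-in -> drive -> ramp (2 servers) -> drive -> weigh-out."""
    m, s = 5.16, 3.97
    sigma = math.sqrt(math.log(1.0 + (s / m) ** 2))  # log-space std (assumption)
    mu = math.log(m) - sigma ** 2 / 2.0              # log-space mean (assumption)
    ramp_free = [0.0, 0.0]       # time each unloading ramp becomes free
    bridge_free = 0.0            # time the exit weigh-bridge becomes free
    turnarounds = []
    for t in arrivals:
        ready = t + 1.0 + 1.0                        # weigh-in (1 min) + drive (1 min)
        i = min(range(2), key=lambda k: ramp_free[k])
        start = max(ready, ramp_free[i])             # waiting accrues here (step 2.4 B)
        ramp_free[i] = start + random.lognormvariate(mu, sigma)
        at_bridge = ramp_free[i] + 2.0               # drive back to weigh-bridge (2 min)
        done = max(at_bridge, bridge_free) + max(0.0, random.gauss(3.46, 1.68))
        bridge_free = done
        turnarounds.append(done - t)                 # step 2.7
    return sum(turnarounds) / len(turnarounds)

print("average turnaround (min):", round(process(gen_arrivals(1000)), 1))
```

Scenario analysis then amounts to sweeping `planned_share`, `spacing`, and `punct_sd` over the values described in the next subsection.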

In the second stage, the truck objects are processed by the terminal object. The two-stage approach is required because trucks are served on a first-come, first-served basis, irrespective of their appointment time. While other priority rules are considered in the literature [4], the first-come, first-served approach was chosen for this model for its lower level of complexity. The second stage is largely based on modelling presented in previous work [13]. The weigh-out and unloading ramp service times are stochastically determined from distributions fitted to the GPS data. The weigh-in stage is held constant at 1 min/truck; the drive times to the unloading ramps and back to the weigh-bridge are held constant at 1 and 2 min, respectively. The total truck processing times, including waiting, are then summarized and output. The second simulation stage follows closely the unloading process observed by the research team at the terminal and described in Sect. 2. The simulation model and its logic have been presented to and discussed with terminal staff to improve the accuracy and validity of the representation.

Scenario Analysis: System Parameters and User Behaviour

The four factors included in the scenario analysis were driver behaviours (punctuality, missed and unplanned appointments) and system parameters (appointments per time window and appointment buffers). These factors were adapted from the appointment systems literature. The factors included are:

• Number of appointments per time window. Two values were included for each one-hour time window: 6 (low frequency) and 8 appointments (high frequency). In the cases where all appointments were unplanned, an inter-arrival time distribution providing a similar arrival frequency was used.


• Time window spacing. Each time window contains the same number of appointments but has a starting buffer period. The three values included were 0-, 6-, and 12-min/time window.
• Planned/unplanned arrivals. The proportion of planned and unplanned arrivals was varied in 25% increments between 0% (all un-appointed arrivals) and 100% (all appointed arrivals).
• Arrival punctuality. Punctuality was modelled by adding a stochastic component to each appointed arrival time. Three normal distributions were used to simulate truck arrival punctuality, similar to the approach presented in [5]: (1) High: 95% of arrivals are within ±5 min of the appointment time {N(0, 2.5)}; (2) Medium: 68% of arrivals are within ±5 min of the appointment time {N(0, 5)}; (3) Low: 38% of arrivals are within ±5 min of the appointment time {N(0, 10)}.

The scenario analysis included combinations of the 4 factors and resulted in 74 scenarios. Each scenario was run 20 times, and each iteration simulated a year of operations.
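The three punctuality levels can be checked against the normal distribution: under N(0, σ), with σ in minutes, the share of arrivals within ±5 min of the appointment is erf(5/(σ√2)). A short check (our illustration):

```python
import math

# Check of the three punctuality levels: under N(0, sigma), the probability
# of arriving within +/-5 min of the appointment is erf(5 / (sigma * sqrt(2))).
def share_within_5(sigma):
    return math.erf(5.0 / (sigma * math.sqrt(2.0)))

for label, sigma, stated in (("high", 2.5, 0.95), ("medium", 5.0, 0.68), ("low", 10.0, 0.38)):
    share = share_within_5(sigma)
    assert abs(share - stated) < 0.01      # matches the stated percentages
    print(f"{label:6s} sigma = {sigma:4.1f}  within +/-5 min: {share:.1%}")
```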

4 Modelling Results

The results of the simulation model in terms of average truck turnaround time for the scenarios tested are presented in Fig. 1. The scenario where an average of 6 trucks per hour arrive uncoordinated resembles the situation empirically observed at the terminal (yellow diamond symbol). The modelled turnaround time in this scenario was approximately 23.5 min per truck, similar to the empirically observed average turnaround time. The turnaround time in this scenario includes an average waiting time of 6 min per truck, most of which accrues while waiting for an unloading ramp to become available. The sequential nature of the processes at the terminal means that reductions in turnaround time arise primarily from reductions in waiting times. Since the terminal's infrastructure is fixed, most of the benefits of reduced waiting and turnaround times accrue directly to terminal users.

The change from low- to high-frequency truck arrivals generates a 20% increase in throughput, from 1.6 to 2 million tons. The change in truck arrival frequency is represented in Fig. 1 by the change in symbol colours, from yellow to blue. This increase is also met with a doubling of truck turnaround times, most likely because throughput approaches the terminal's maximum physical capacity. In the higher arrival frequency scenarios, an improvement in turnaround times of 25 to 30% is generated if at least 25% of arrivals are scheduled. The turnaround time improvement gradually decreases as a higher percentage of trucks arrive appointed. When all trucks arrive with appointments, the expected turnaround time improvement is between 37 and 46% compared to the scenario where all appointments are unplanned, depending on arrival punctuality.
In the low arrival frequency scenarios, the marginal improvement with each increment of planned appointment proportion remains relatively constant, between 3 and 5%, depending on the arrival punctuality.


Fig. 1. Simulation model scenario analysis results in terms of average truck turnaround times, truck arrival punctuality, and percentage of planned arrivals (Color figure online)

The impact of low arrival punctuality increases with the proportion of appointed truck arrivals. Punctuality is represented in Fig. 1 by the change of symbol type and darkening colour tones. Low punctuality halves the effectiveness of the appointment system compared to high punctuality in virtually all low arrival frequency scenarios. In high arrival frequency scenarios, the impact of low punctuality is smaller, at 16–20% compared to high punctuality. Time window spacing (not depicted in Fig. 1) has limited impact on average truck turnaround times: introducing 6- or 12-min spacing between time windows increases turnaround times by 2% compared to no spacing. The next section discusses the simulation results in the context of the extant literature and of potential applications.

5 Discussion and Future Research

Simulation results indicate that truck turnaround times increase non-linearly with throughput, which corroborates literature findings [12, 14]. Particularly in the high frequency arrival scenarios, a small proportion of known, even less punctual, arrivals can have a significant impact on turnaround times, similar to [15]. Finally, arrival punctuality, while an important influencing factor, was not the most important determinant of turnaround times. Modelling results revealed that the factor with the most influence on turnaround times was the use of the system.

If the potential reduction in turnaround times achievable through the use of the terminal appointment system is not fully appreciated by its users, it is likely that system use will not be as high as expected. This situation can


create a vicious circle in which low use reduces the impact of the appointment system on turnaround times, thereby leading to even lower system use. It is therefore paramount to involve users in the design of appointment systems, and particularly in determining their operating parameters, to encourage system use. While not directly considered in this paper, it is important to acknowledge that the appointment system's usability and adoption, among other factors, can also influence system use. Design for usability should also be central in design workshops.

The impact of arrival punctuality was important but considerably smaller than that of appointment system use. Much of the extant literature focuses on optimizing costs [1, 14], in which case low truck arrival punctuality can increase terminal and user costs and, consequently, change the optimal solution. By focusing solely on the optimal solution, however, its potential cost in complexity or assumptions is rarely weighed against the additional benefits it can generate compared to near-optimal solutions. To ensure compliance from this perspective, variations in service rules [4] or financial penalties have been proposed. However, the literature rarely considers whether the disruptions caused by driver behaviours are sufficiently significant to warrant introducing complex service rules and financial penalty variations while risking a reduction in appointment system use.

Future research aims to use the findings from the scenario analysis in conducting participatory design workshops with the terminal operator and its users. The collaboration with the terminal operator is part of an ongoing multiple case-study investigation of mechanisms and technologies to address maritime terminal land-side congestion. The simulation and participatory design approach will be extended to the other case studies to seek to replicate the results or to identify differentiating factors.

6 Conclusion

This research introduced a discrete event simulation model of a bulk cargo marine terminal to analyse the effect of parameters related to driver behaviour (punctuality and proportion of planned appointments) and temporal scheduling (appointments per time window and time window spacing) on truck flows and turnaround times at the terminal. The modelling findings highlight the importance of involving terminal users in the design of the appointment system and its parameters to ensure its use and, consequently, its effectiveness in reducing truck turnaround times. Arrival punctuality appears to have less impact than appointment system use. However, a lack of punctuality is typically penalized financially, which tends to cause tensions between terminals and transporters. Shifting the focus from enforcing punctuality towards ensuring system use may have a more positive impact on turnaround times. In the context of this research it is important to acknowledge some of its limitations. The model scope is limited to the terminal gate and unloading operations, as insufficient data were available to model the entire chain. Bulk cargo marine terminal operations are typically less complex than those observed in container terminals, on which the majority of the terminal appointment systems literature is based. It is nevertheless likely that the insights generated in this research are, at least in part, transferrable to other bulk cargo and container marine terminals.


M. Neagoe et al.

This research is part of an ongoing project undertaken in Australia funded by the Australian Research Council through the Industrial Transformation Research Program.

References

1. Chen, G., Govindan, K., Yang, Z.: Managing truck arrivals with time windows to alleviate gate congestion at container terminals. Int. J. Prod. Econ. 141, 179–188 (2013)
2. Torkjazi, M., Huynh, N., Shiri, S.: Truck appointment systems considering impact to drayage truck tours. Transp. Res. Part E Logist. Transp. Rev. 116, 208–228 (2018)
3. Huynh, N., Walton, C.M.: Robust scheduling of truck arrivals at marine container terminals. J. Transp. Eng. 134, 347–353 (2008)
4. Li, N., Chen, G., Govindan, K., Jin, Z.: Disruption management for truck appointment system at a container terminal: a green initiative. Transp. Res. Part D Transp. Environ. 61, 261–273 (2018)
5. Ramírez-Nafarrate, A., González-Ramírez, R.G., Smith, N.R., Guerra-Olivares, R., Voß, S.: Impact on yard efficiency of a truck appointment system for a port terminal. Ann. Oper. Res. 258, 195–216 (2017)
6. Chen, G., Jiang, L.: Managing customer arrivals with time windows: a case of truck arrivals at a congested container terminal. Ann. Oper. Res. 244, 349–365 (2016)
7. Guan, C., Liu, R.: Modeling gate congestion of marine container terminals, truck waiting cost, and optimization. Transp. Res. Rec. J. Transp. Res. Board 2100, 58–67 (2009)
8. Huynh, N., Walton, M.: Improving efficiency of drayage operations at seaport container terminals through the use of an appointment system. In: Böse, J. (ed.) Handbook of Terminal Planning, pp. 323–344. Springer, New York (2011). https://doi.org/10.1007/978-1-4419-8408-1_16
9. Morais, P., Lord, E.: Terminal appointment system study. Transp. Res. Board 1, 123 (2006)
10. Huynh, N., Smith, D., Harder, F.: Truck appointment systems. Transp. Res. Rec. J. Transp. Res. Board 2548, 1–9 (2016)
11. Ackoff, R.: The Art of Problem Solving. Wiley, New York (1978)
12. Neagoe, M., Taskhiri, M.S., Nguyen, H.-O., Hvolby, H.-H., Turner, P.: Exploring congestion impact beyond the bulk cargo terminal gate. In: Logistics 4.0 and Sustainable Supply Chain Management, Proceedings of HICL 2018, pp. 63–82 (2018)
13. Neagoe, M., Taskhiri, M.S., Nguyen, H.-O., Turner, P.: Exploring the role of information systems in mitigating gate congestion using simulation: theory and practice at a bulk export terminal gate. In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018. IFIP AICT, vol. 535, pp. 367–374. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99704-9_45
14. Guan, C., Liu, R.: Container terminal gate appointment system optimization. Marit. Econ. Logist. 11, 378–398 (2009)
15. Chen, G., Govindan, K., Yang, Z.Z., Choi, T.M., Jiang, L.: Terminal appointment system design by non-stationary M(t)/Ek/c(t) queueing model and genetic algorithm. Int. J. Prod. Econ. 146, 694–703 (2013)

Full-Scale Discrete Event Simulation of an Automated Modular Conveyor System for Warehouse Logistics

Alireza Ashrafian1(&), Ole-Gunnar Pettersen1, Kristian N. Kuntze1, Jacob Franke1, Erlend Alfnes1, Knut F. Henriksen2, and Jakob Spone3

1 Norwegian University of Science and Technology, 7491 Trondheim, Norway
[email protected]
2 Swisslog, 0581 Oslo, Norway
3 ASKO, 0950 Oslo, Norway

Abstract. This paper presents the use of advanced simulation modeling to optimize the operation of a fully automated modular conveyor system in a large-scale warehouse. At its peak capacity, the smooth flow of material in the system was greatly impaired due to the appearance of bottlenecks. A full-scale 3D discrete event simulation (DES) model of the system was built, and time-dependent statistical models were carefully designed and implemented in the model in order to capture the randomness and complex dynamics of the operation. The model was verified and validated, and several scenarios have been analyzed. The paper demonstrates a practical example of how data-driven simulation modeling provided a cost-effective solution to enhance efficiency. The paper highlights the crucial aspects that must be taken into account in the modeling of the system in order to create a reliable standalone decision support system. Moreover, the paper highlights the identified key steps that are yet to be taken from dynamical modelling towards a Digital Twin.

Keywords: Discrete event simulation · Discrete event logistics system · Warehouse logistics · Distribution center automation · Material handling · Modular conveyor systems · Decision support system

1 Introduction

Automated modular conveyor systems are widely used in central warehouses and material distribution centers to achieve a high degree of throughput and smooth material flow while remaining flexible and efficient [4]. Achieving high efficiency, however, is not always problem free. The higher capacity and increased complexity of interactions in smart material handling systems also require advanced analysis to avoid design mistakes and to ensure efficient operation. Factors such as varying demand and a high degree of randomness pose significant challenges in achieving the desired efficiencies and maintaining smooth material flows. Verifying that the system design adheres to operational requirements under constantly varying conditions is often beyond the capability of conventional analysis tools [3].

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 35–42, 2019. https://doi.org/10.1007/978-3-030-29996-5_4


A. Ashrafian et al.

In the past few decades, computer simulation has become one of the most effective decision support tools used in logistics systems. 3D transient computer simulations have provided reliable solutions to identify and avoid costly design and operational mistakes. Among them, discrete event simulation (DES) [5] has become an indispensable tool for understanding the complex dynamics in logistics and supply chain systems [2, 9]. Moreover, within the paradigm of the Industry 4.0 digital transformation, computer simulations integrated with virtual reality systems and operational data have become the basis of digital twin technology [6, 7]. According to Stark et al. [8], “A Digital Twin is the digital representation of a unique asset (product, machine, service, product service system or other intangible asset), that alters its properties, condition and behavior by means of models, information and data”. A Digital Twin is a digital abstraction of a system that is fed with live, high-quality data. The vision of a digital twin for smart warehouse systems is, therefore, to create a mirror of processes incorporating all related information within the warehouse system. The twin can be used to monitor processes in real time and to continuously fine-tune all operations in the warehouse in order to optimize the flow, find bottlenecks, etc. In short, if such a digital twin is realized, it allows end-to-end processes to be better understood, bottlenecks to be found and performance to be improved substantially. The success of digital twins is, therefore, greatly dependent on, i.a., the performance of the underlying simulation model in capturing the key characteristics and complex dynamics of the system. The purpose of this paper is to demonstrate how a full-scale DES model can support the design and optimization of smart warehouse systems. The paper also clarifies the crucial aspects that must be taken into account in full detail in the model in order to create a reliable standalone decision support system. Moreover, the paper highlights the key steps that are yet to be taken to make the dynamical model a real Digital Twin that can serve as an effective tool for decision-making and optimization.

2 The Automated Modular Conveyor System Under Study

2.1 The Geometry and Operation

The system under study is an automated modular conveyor system for handling totes, trays and cartons [4] that has a complex interconnected geometry (Fig. 1). The system comprises five major parts: the “supplier”, the mini-load storage systems, the “highway”, 5 sub-loops, and 21 pickup stations. The supplier feeds the system with totes that are stacked with products from a pallet. The highway is the central conveyor that runs through the system and makes it possible for totes fed by the supplier to travel between the sub-loops. Typically, a tote that comes from the supplier via the highway to a sub-loop will end up at a mini-load station for storage. Mini-loads are vertical storage units that store loaded totes and trays for later use. Each mini-load system is connected to a sub-loop in the conveyor system and has 5 input and output ports. Alternatively, if a tote enters the system through a mini-load, it will usually go to one of the four pickup stations connected to the associated sub-loop. These pickup stations hold a tote for a statistically distributed amount of time before it is released to its next destination, e.g., another pickup station.


All routing operations are fully automated using scanners and barcodes. In front of each decision point, the control system sends a tote's barcode to a bin destination manager (BDM), which immediately returns a command telling the control system whether the tote should proceed straight ahead or change direction. The BDM is part of the central warehouse management system (WMS). Each module has a capacity constraint, i.e., it can contain only a limited number of totes in order to function properly. Congestion occurs if this limit is exceeded and, consequently, the smooth functioning of the system is impaired. To prevent congestion, all of the entrances to and exits from the areas are equipped with optical counters that track the number of totes in each section. The system automatically blocks further incoming totes if the maximum capacity is reached.

Fig. 1. The geometry of the modular conveyor system under study (Courtesy of ASKO and Swisslog).
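The counter-based admission control described above can be sketched in a few lines. The section names, capacities and tote IDs below are illustrative only, not taken from the installed system.

```python
class ConveyorSection:
    """Section with an optical-counter style capacity limit: totes are
    admitted only while the current count is below the maximum."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.count = name, capacity, 0

    def try_enter(self, tote_id):
        if self.count >= self.capacity:
            return False              # entrance blocked, tote waits upstream
        self.count += 1
        return True

    def leave(self):
        self.count -= 1              # optical counter at the exit

def route(tote, destination_of, sections):
    """Minimal BDM-style decision: look up the tote's destination and
    admit it only if the target section has spare capacity."""
    target = sections[destination_of[tote]]
    return target.name if target.try_enter(tote) else "blocked"

sections = {"SL1": ConveyorSection("SL1", capacity=2)}
destination_of = {"T1": "SL1", "T2": "SL1", "T3": "SL1"}
results = [route(t, destination_of, sections) for t in ("T1", "T2", "T3")]
# results -> ["SL1", "SL1", "blocked"]
```

A real WMS would of course resolve destinations dynamically and release blocked totes as soon as a downstream counter decrements; the sketch only captures the admission rule.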

2.2 Operational Challenges

The system is designed such that the bottleneck always resides at the pickup stations. This ensures a continuous flow of totes in the other parts of the system and guarantees that totes are always available for the operators at the pickup stations. The operators have a relatively high operational cost, and they should never be idle. However, during periods of increased strain on the system, the highway often becomes the bottleneck and the flow of totes is disrupted. Several observations from operating the system suggested that the bottleneck appeared on the highway due to its capacity limitation as well as the way the supplier feeds the highway, which consumes a large share of its capacity. The current simulation modeling and analysis was therefore aimed at finding a new way of feeding totes from the supplier into the system and at supporting the corresponding design changes with data-driven analysis.

3 The Full-Scale Dynamical Model

The simulation modeling and analysis software FlexSim® [1] was used to build a full-scale DES model of the automated modular conveyor system under study. The model geometry is based on the 3D CAD data and is identical to the real system in operation (Fig. 2).

Fig. 2. The full-scale computer simulation model for the installed modular conveyor system.

3.1 The Model Logic

The operational logic of the system has several important aspects that are carefully implemented in the model. Certain variables are tagged to each individual tote; they contain information about its current state and destination, as well as an individual name that is used for statistics collection and, later on, for model verification. These variables come into play at every crossroad where decision points are located. Right before every crossroad, where a tote is faced with several different paths to continue on, there is a decision point containing code that evaluates the variables of each tote and decides whether it continues on one conveyor or another. A decision point may also change the variables containing information about the tote's destination, e.g., by choosing which pickup station to travel to right after entering a sub-loop.


As mentioned before, when a tote has arrived at a pickup station, it will be held at that station for a certain amount of time. This stay time varies statistically depending on the work of the operator at that station. The pickup station also changes the information of the tote upon its release. For example, when the pickup station labels a tote as “empty”, the decision points are forced to direct the tote to the fastest available route out of the system. The final part of the operational logic deals with the function of the suppliers, which provide the model with totes based on statistical distributions that change over time (see Sect. 3.2). They also create and set the values of the variables in each tote, and these values are likewise determined by time-dependent statistical distributions. The operational logic explained above is programmed into each corresponding individual element of the model using the script language provided by the FlexSim® software [1].

3.2 Simulation Input and Statistical Modelling

Various time-dependent statistical data from the operation were integrated into the model. The data were collected from the system over one week of operation, covering approximately 500,000 totes. The data included the number of totes arriving at the system per hour and the number of totes being processed by the system per hour. Moreover, statistical analysis was performed in order to model the statistical distributions [5] for a number of input variables such as the manual feeding rate to the supplier, the feeding rates from the mini-loads, the processing time at pickup stations, and the decision points that distribute the totes throughout the system. Hourly operational data indicated a high degree of dynamics in the variations associated with these statistical distributions. A considerable effort was made to model these variations correctly and to program them into the DES model.

3.3 Model Verification and Validation

Several verifications of the model were performed under controlled conditions in order to ensure the consistency of the model and its internal logic. The model, including its various statistical sub-models, was also validated by comparing the simulation results with operational data for several variables. To ensure the consistency of the model in terms of continuity between all geometrical areas, the totes' residence times in the system were calculated and it was verified that no tote remains in the system for an excessively long time. The statistical correctness of the decision points in sending totes to various destinations was validated by comparing the simulated cumulative numbers of totes that ended up at the various sub-loops with the operational data after several hours of operation (Fig. 3).
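The comparison of cumulative routing counts can be reduced to a simple share-based check: compute each sub-loop's share of the routed totes in both data sets and compare the largest deviation against a tolerance. The counts below are invented for illustration and are not the study's operational data.

```python
def routing_shares(counts):
    """Convert absolute per-sub-loop counts into fractions of the total."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def max_share_error(observed, simulated):
    """Largest absolute difference in routing share across sub-loops."""
    obs, sim = routing_shares(observed), routing_shares(simulated)
    return max(abs(obs[k] - sim[k]) for k in obs)

# Illustrative cumulative counts per sub-loop after several hours:
observed  = {"SL1": 1200, "SL2": 950, "SL3": 800, "SL4": 700, "SL5": 350}
simulated = {"SL1": 1180, "SL2": 975, "SL3": 790, "SL4": 710, "SL5": 345}
error = max_share_error(observed, simulated)   # accept if, say, < 0.02
```

A formal goodness-of-fit test (e.g., chi-square) would be the statistically rigorous variant of the same comparison.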


Fig. 3. Cumulative number of totes sent to various sub-loops (SLs) by the decision points.
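As a side note on the time-dependent input modelling of Sect. 3.2: hourly varying arrival rates of the kind described there are commonly sampled with Lewis–Shedler thinning for a non-homogeneous Poisson process. The hourly rate table below is a made-up example, not data from the warehouse.

```python
import random

# Hypothetical hourly feeding rates (totes per minute) over an 8-hour shift.
HOURLY_RATE = [2.0, 3.5, 5.0, 4.0, 1.5, 4.5, 5.0, 2.5]

def rate(t_min):
    """Piecewise-constant arrival rate at minute t."""
    return HOURLY_RATE[min(int(t_min // 60), len(HOURLY_RATE) - 1)]

def arrival_times(horizon_min, rng):
    """Lewis-Shedler thinning: draw candidate arrivals at the peak rate,
    then keep each candidate with probability rate(t) / peak."""
    peak = max(HOURLY_RATE)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(peak)          # candidate inter-arrival time
        if t >= horizon_min:
            return times
        if rng.random() < rate(t) / peak:   # thinning acceptance step
            times.append(t)

rng = random.Random(42)
times = arrival_times(8 * 60, rng)          # one simulated shift
```

The expected number of arrivals is the integral of the rate over the shift (here 1,680 totes); any time-varying rate estimated from operational data can be plugged into `rate()` unchanged.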

4 Results and the Improved Design

Various simulations were performed considering multiple scenarios for design modifications to the system. The scenario that had the greatest effect on removing the bottleneck from the highway, and therefore on improving the smooth material flow in the system, is presented here. The reference case is identical to the conditions in the original system. The analysis was based on several key performance indicators, e.g., the average content of the highway and the sub-loops, the idle time of the operators at pickup stations, and the passages through certain decision points along the highway. These give clear indications of how the implementation of the suggested design modification would affect the system. The proposed new design (Fig. 4) contains a new loop along which newly stacked totes travel alongside the highway and further on to the allotted sub-loops and, finally, to the mini-load for storage. The basic idea was to increase the availability of totes for the pickup stations while reducing the average number of totes on the highway and hence avoiding congestion. It should be noted that the time-dependent statistical data for the number of totes arriving at the system, as well as the statistical distributions for, e.g., manual feeding to the system, remain unchanged and are equally applicable to the system with the new design. The average content of the entire system dropped by about 8%. Results for the highway and each of the sub-loops are shown in Fig. 5. Note that the number of totes processed in each sub-loop remained almost unaffected. The number of totes passing through the decision points on the highway dropped by 28% (not shown here), which clearly indicates a significant reduction in the traffic on the highway. Another advantage of the design is that if one of the sub-loops is full, a tote can loop around instead of congesting the highway. Moreover, the availability of totes to the operators increased without consuming additional floor area in the facility. Figure 6 shows a heat map of the average tote content on the highway. The reduction in the number of totes on the highway is evident, especially in the area closest to the entry from the supplier (the right end of the figure). The analyses show a clear reduction of the strain put on the system and a more uniform distribution of totes along the whole highway conveyor.


Fig. 4. The proposed new design including a new conveyor loop marked in red. (Color figure online)

Fig. 5. Average number of totes per hour on the highway and sub-loops.

Fig. 6. Heat maps indicating the average number of totes on the highway conveyor. Top: base case, bottom: the new design.


5 Conclusions

Highly automated logistics systems incorporate enhanced capacity and efficiency as well as a higher degree of complexity in their system dynamics. To this end, advanced computer simulation models are of great value for creating near-reality standalone models of complex discrete event logistics systems (DELS). The model presented in this study captured the complex geometry, the variety of system interactions, the system logic, and the key stochastic dynamics of the real operation. It was demonstrated that the current model can be used as a reliable standalone decision support system for operation and design changes. Throughout the course of this study, some requirements were also identified for turning conventional simulation modeling into an effective tool for Digital Twins. Firstly, connectivity and integration with the operational or enterprise resource planning databases must be realized. Secondly, and as demonstrated in this study, the modelled system for a Digital Twin application must include a high level of detail and a low level of abstraction in all aspects, including geometry, stochastics and logic. Ultimately, the construction of models, the process of optimization through design modifications, the development of model versions, and the construction of various scenarios should be automated to the highest degree in order to achieve efficiency without manual interventions. The need for such automation functionality was of significant importance in the course of conducting the current study, since manual modifications of the simulation model can be time consuming, especially if a large set of model variations needs to be built.

References

1. Beaverstock, M., Greenwood, A.G., Lavery, E., Nordgren, B.: Applied Simulation: Modeling and Analysis Using FlexSim. FlexSim Software Products Inc., Orem (2017)
2. Brailsford, S., Dangerfield, B., Churilov, L. (eds.): Discrete-Event Simulation and System Dynamics for Management Decision Making. Wiley, Hoboken (2014)
3. Jia, Y., Jiang, P.F.: The application of simulation technology in distribution center. In: Applied Mechanics and Materials, Zurich, vol. 865, pp. 675–680 (2017)
4. QuickMove: Flexible, modular conveyor system for small loads. Swisslog Holding Ltd. (2018). https://www.swisslog.com/quickmove. Accessed Apr 2018
5. Law, A.M.: Simulation Modeling and Analysis, 5th edn. McGraw-Hill, New York (2015)
6. Rodic, B.: Industry 4.0 and the new simulation modelling paradigm. Organizacija 50(3), 193 (2017)
7. Rosen, R., von Wichert, G., Lo, G., Bettenhausen, K.D.: About the importance of autonomy and digital twins for the future of manufacturing. IFAC-PapersOnLine 48(3), 567–572 (2015)
8. Stark, R., Kind, S., Neumeyer, S.: Innovations in digital modelling for next generation manufacturing system design. CIRP Ann. Manuf. Technol. 66(1), 169–172 (2017)
9. Tako, A.A., Stewart, R.: The application of discrete event simulation and system dynamics in the logistics and supply chain context. Decis. Support Syst. 52(4), 802–815 (2012)

Handling Uncertainties in Production Network Design

Günther Schuh, Jan-Philipp Prote, Andreas Gützlaff(&), and Sebastian Henk

Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen University, 52074 Aachen, Germany
{g.schuh,j.prote,a.guetzlaff,s.henk}@wzl.rwth-aachen.de

Abstract. Decision making in production network design is complex due to a large number of influencing factors, options and uncertainties. Furthermore, the agility in production networks, and therefore the demand for decisions, increases, while decisions once made are often hard to revise. Hence, a fast yet holistic decision-making process is key for sustainable production network development. While many existing approaches target the overall network optimization, few of them include a systematic approach to cover uncertainty, and barely any cover the uncertainty of the information and models used for the decision-making. In practice, these approaches result in unsystematic and time-consuming decision-making processes. This paper presents an approach to take uncertainty systematically into consideration and splits it into internal uncertainty, which can be reduced by the decision maker, and external uncertainty, which has to be considered in the sensitivity analysis. The method was applied to the site selection of an automotive supplier.

Keywords: Production networks · Decision process · Uncertainty

1 Introduction and Motivation

As globalization progresses, companies are trying to grow by internationalizing their business activities and, at the same time, to become less dependent on local economic fluctuations. This development is reflected in an increasing number of production locations for manufacturing companies [1]. Manufacturing companies are also increasing their market share across an increasing range of products. At the same time, product life cycles are shortening. An increasing number of variants and a decreasing number of identical parts are created in production [2]. Due to the global distribution of added value and the increasing product complexity, production networks today are among the most complex and dynamic man-made systems, which often results in historically grown structures [3]. At the same time, location decisions are difficult to revise, which is why decisions have to be made carefully [4]. Existing approaches for production network design generate a large operational modelling effort in order to tackle the complexity and impact of decisions in the network (see Sect. 2.1). However, these approaches often lack a holistic view of uncertainty. Decisions are therefore often prolonged by unsystematic additional analyses [5]. The aim of this paper is to present an approach for assessing uncertainty in the planning process of production networks. This enables a focus on the most important factors of the decision, and the planning time can be shortened significantly by avoiding unnecessary iterations.

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 43–50, 2019. https://doi.org/10.1007/978-3-030-29996-5_5

2 State of the Art

2.1 Existing Approaches for Production Network Design

Current approaches for production network design can be distinguished between procedure models that describe how to proceed in production network design, optimization and simulation models, and – as a new trend within the last years – big-data-driven analysis approaches. Procedure models describe consecutive steps to create and evaluate production network scenarios. LANZA et al. give an overview of recent approaches [6]. CHRISTODOULOU et al., for example, describe a multi-stage approach for network design that proposes a complexity reduction by defining simplifying assumptions and splitting the network into subnetworks for each product group. The approach emphasizes the importance of creating reliable production cost models and finding a realistic demand forecast, but does not further detail how to achieve this [7]. STOFFEL developed a V-Model to design large production networks that focuses on a strategy-based reconfiguration of the whole network. It structures the information need and the level of granularity, but also does not systematically cover uncertainty in the decision-making process [8]. Optimization models are the most commonly described method to design production networks [6]. A detailed overview of optimization approaches is given by CHENG et al. [9] and OLHAGER et al. [10]. LANZA and MOSER, for example, provide a detailed approach that evaluates the impact of changing influencing factors and includes objectives such as delivery time, quality, flexibility and coordination effort [11]. Even though they give an optimal solution from a mathematical point of view, optimization models do not recognize wrong input data and have to quantify and rank qualitative network design objectives. Simulation models support the decision-making process by providing information about complex interactions, e.g., about the robustness of a network scenario regarding different production volumes, as demonstrated by PUTNIK et al. [12]. These models can be helpful to decrease uncertainty, but are not suitable for systematically creating network alternatives. Big data analyses, as presented by GÖLZER et al. [13], try to provide decision support systems based on the analysis of large data sets. An overview of recent approaches is given by TIWARI et al. [14]. These approaches might help to gather information faster, but they can neither provide information about the reliability of the data sets nor consider qualitative aspects of production network design.

2.2 Handling Uncertainties

In general, proper decision making requires the consideration of environmental states [15]. These conditions are often not known with certainty. Thus, uncertainty represents any deviation from the ideal of complete knowledge [16]. There are various ways to account for uncertainties in decision making. These can be classified between the two extremes of totally neglecting uncertainties and solving complex stochastic problems. The goal is to find a good compromise between planning effort and result [17]. To identify and prioritize critical uncertainties, WALKER et al. distinguish three dimensions of uncertainty to enable an adequate consideration in the decision-making process [16]: the nature of uncertainty, the level of uncertainty and the location of uncertainty. The nature of uncertainty focuses on the origin and describes whether the uncertainty is based on a lack of knowledge (epistemic uncertainty) or on inherent variability (variability uncertainty). While the former can be reduced by further research and empirical efforts, the latter remains due to natural randomness as well as behavioral and societal variability [16]. The level of uncertainty describes where the uncertainty is ranked within a seven-level spectrum between deterministic knowledge and total ignorance. Depending on the level of uncertainty, a statistical or scenario-based consideration is required, as shown in Fig. 1 [18].

Fig. 1. Levels of uncertainty by WALKER et al. [18]

The location of uncertainty characterizes where the uncertainty manifests itself within an existing or designed model complex. There are four generic locations of uncertainty [19]: context uncertainty, data uncertainty, model uncertainty and phenomenological uncertainty. While context uncertainty covers the endogenous and exogenous uncertainties caused by surrounding influences [20], data uncertainty describes uncertainties due to data incompleteness, data inaccuracy and variation in the input data [21]. In addition, the inaccuracies of a simplifying model in comparison to reality lead to model uncertainties, comprising conceptual, mathematical and computational model uncertainties. Phenomenological uncertainty affects the consequences of a decision [19].
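One way to make these three dimensions operational is a small uncertainty register that classifies each factor and marks which uncertainties are reducible by further analysis. The factor names below are hypothetical examples for a network design decision, not taken from the case study.

```python
from dataclasses import dataclass
from enum import Enum

class Nature(Enum):
    EPISTEMIC = "lack of knowledge (reducible)"
    VARIABILITY = "inherent randomness (irreducible)"

class Location(Enum):
    CONTEXT = "context"
    DATA = "data"
    MODEL = "model"
    PHENOMENOLOGICAL = "phenomenological"

@dataclass
class UncertaintyFactor:
    name: str
    nature: Nature
    level: int          # 1 (near-deterministic) .. 7 (total ignorance)
    location: Location

    @property
    def reducible(self) -> bool:
        # Only epistemic uncertainty can be reduced by further research.
        return self.nature is Nature.EPISTEMIC

register = [
    UncertaintyFactor("wage development", Nature.VARIABILITY, 5, Location.CONTEXT),
    UncertaintyFactor("transport cost data", Nature.EPISTEMIC, 2, Location.DATA),
]
# Spend survey effort on reducible factors, highest uncertainty level first;
# irreducible factors go to the sensitivity analysis instead.
priorities = sorted((f for f in register if f.reducible),
                    key=lambda f: -f.level)
```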

3 Approach

In order to systematically consider uncertainty in production network design, the presented approach follows the decision-making process by SPETZLER [5] and distinguishes the phases of initialization, information gathering, alternatives creation, evaluation and decision making (Fig. 2). Based on these steps, uncertainty factors are identified and prioritized in order to focus the analysis effort and the evaluation scope.

Fig. 2. Adaptation of the generic decision making process by SPETZLER [5] for production network design
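To illustrate how external uncertainty can enter the evaluation step of such a process, the sketch below sweeps a simplified site-selection landed-cost model over several wage scenarios. All cost figures and site parameters are invented; a real application would use the evaluation methods selected in the initiation phase.

```python
def landed_cost(units, unit_labor_cost, transport_per_unit, fixed_cost):
    """Very simplified total landed cost model for one candidate site."""
    return fixed_cost + units * (unit_labor_cost + transport_per_unit)

def wage_sensitivity(site, units, wage_multipliers):
    """Evaluate a site's landed cost under several external wage scenarios."""
    return {m: landed_cost(units, site["labor"] * m,
                           site["transport"], site["fixed"])
            for m in wage_multipliers}

# Hypothetical candidate sites for an automotive supplier:
site_a = {"labor": 4.0, "transport": 1.0, "fixed": 500_000}
site_b = {"labor": 2.0, "transport": 1.5, "fixed": 700_000}

scenarios = (1.0, 1.2, 1.5)               # external wage uncertainty
cost_a = wage_sensitivity(site_a, 100_000, scenarios)
cost_b = wage_sensitivity(site_b, 100_000, scenarios)
# If the ranking of the sites flips across scenarios, the wage uncertainty
# is decision-critical and deserves deeper analysis or new scenarios.
```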

3.1 Initiation Phase

The focus within the initiation phase is to transfer the general thoughts on an upcoming need for action into a structured decision problem. Determining the general scope and the targeted objectives enables the subsequent selection of the evaluation method. A rough cost structure analysis helps to identify the cost-critical aspects of the decision making, e.g., the importance of transport costs or wages as part of the total landed cost. The literature contains a wide variety of evaluation methods (see, e.g., SCHUH et al. [22]), which should be selected depending on the desired degree of detail of the consideration and the importance with regard to the cost structure. Based on the selected evaluation methods and the decision scope, the information needed to apply the selected methods can be derived.

3.2 Modelling Phase

The modelling phase includes the steps of information gathering and alternatives creation. First, the elements and influencing factors of the evaluation method determined in the initialization phase are divided into internal and external factors on the basis of the uncertainty evaluation by WALKER et al. (Fig. 3). Internal uncertainty describes the uncertainties associated with the considerations within the decision-making process, which primarily depend on the selected level of detail. External uncertainties arise from externally driven situations and developments. These are unknown to the decision maker and can only be anticipated to a limited extent by obtaining information.

Fig. 3. Differentiation of uncertainty forms for further consideration

For the information gathering, a distinction is made between context and data. Context includes all internal and external developments that affect the decision-making process. External to the company is, for example, the general market development; internal is the planned introduction of new products. However, a distinction must be made as to whether the decision maker can influence a factor or not. For external context uncertainty, only an assessment can be made; complete certainty is not possible. Historical developments can be extrapolated and supplemented by expert assessments in order to narrow down possible developments. The assessment of the level of uncertainty is based on the experience of the decision maker and the importance of the influencing factor in relation to the overall decision. Depending on the assessment of the uncertainty, a sensitivity analysis of the scenarios is conducted or new scenarios are created in order to quantify various development possibilities.

Data uncertainty is broken down into flawed measurement, inconsistency and incompleteness. In contrast to external uncertainty, this raises the question of the extent to which uncertainty can be reduced by additional survey effort. This is usually possible by including additional data sources, new or additional data collection or expert surveys. The data collection can be conducted iteratively until a sufficiently valid data basis for modelling has been created. While the final assessment of sufficient validity depends on the risk affinity of the decision maker, the remaining uncertainty has to be considered in the evaluation phase in order to achieve a high decision quality. It is important to achieve the required accuracy in this step before moving on to modelling, as otherwise the uncertainty of the data is transferred to the model uncertainty and the model cannot be calibrated.

For alternatives creation, a calibration of the as-is model with the existing network is necessary to determine the model uncertainty. The error deviation can be used for an iterative calibration of the model. The remaining uncertainty has to be mapped by sensitivity analyses after alternative scenarios have been created in the next step. If the uncertainty is very high, additional scenarios should be considered (Fig. 2). For the scenario generation, existing methods such as the previously introduced approach "Making the right things in the right places" can be used [7].

3.3 Evaluation and Decision Making

In the final step, the alternatives are converted into evaluation scenarios with sensitivities according to scenario techniques such as the one developed by GAUSEMEIER et al. [23]. All relevant influences are taken into account with their inherent uncertainty. The range of sensitivity per influencing factor depends on the uncertainty identified in the preceding process. The final decision is based on the action alternatives and their sensitivities regarding the focused objectives. A holistic approach to scenario assessment based on Economic Value Added is presented by SCHUH et al. [22], for example. The decision can have two different outcomes: either there is a dominant scenario, or there are different scenarios that cannot be conclusively weighed against each other due to their sensitivity. In the latter case, it should be examined whether further sources are available to reduce the internal uncertainty. Otherwise, it is advisable to choose the scenario preferred from a strategic point of view.
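The dominance check at the end of this phase can be illustrated with a minimal numeric sketch. The cost components, corridor widths and figures below are hypothetical illustrations, not values from the paper; the paper's actual evaluation method is richer (e.g. Economic Value Added [22]).

```python
# Minimal sketch (hypothetical figures) of the dominance check: compare
# scenario cost ranges under per-factor sensitivity corridors.

def total_cost(base_costs, factor_shift):
    """Sum the cost components, shifting each factor by a relative amount."""
    return sum(v * (1 + factor_shift.get(name, 0.0)) for name, v in base_costs.items())

def cost_range(base_costs, corridors):
    """Best/worst total cost when each uncertain factor varies within +/- its corridor."""
    low = total_cost(base_costs, {f: -c for f, c in corridors.items()})
    high = total_cost(base_costs, {f: +c for f, c in corridors.items()})
    return low, high

# Hypothetical scenarios: cost components in million EUR over the planning horizon.
scenarios = {
    "existing plant": {"transport": 40.0, "wages": 55.0, "invest": 20.0},
    "low-cost site": {"transport": 70.0, "wages": 30.0, "invest": 35.0},
}
# A wider corridor for transport reflects its higher assessed uncertainty.
corridors = {"transport": 0.30, "wages": 0.05}

ranges = {name: cost_range(costs, corridors) for name, costs in scenarios.items()}
print(ranges)

# "Absolutely dominant" here means: the worst case of one scenario still
# beats the best case of the other; otherwise the ranges overlap and
# strategic criteria have to decide.
a, b = ranges["existing plant"], ranges["low-cost site"]
print("existing plant absolutely dominant:", a[1] < b[0])
```

In this invented example the ranges overlap, so no scenario is absolutely dominant, which mirrors the second decision outcome described above.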

4 Application

The expansion or opening of a press shop was planned for the production of body parts for a high-volume model of an OEM. During the initialization phase, the scope of consideration could be narrowed down to one existing plant or two potentially new plants: a low-transport site and a low-cost site. As the sales price was fixed, the valuation was based on the cumulated cash out over the product lifetime of 14 years. A static cost model on an annual basis was chosen for modelling purposes and iteratively refined to reduce data and model uncertainty. A rough cost estimation in the initiation phase showed that transport and wages have a higher impact on the costs than the investment in machines and buildings. The information gathering therefore focused on those two factors, while investments and fixed costs were estimated with good accuracy on the basis of previous projects. While wage costs could be determined with good accuracy, transport costs are difficult to estimate, especially due to the long planning period and external factors. Therefore, a broad sensitivity corridor had to be taken into account.

The model results in a dominant scenario for the existing location, but due to the sensitivity of transport costs it is not absolutely dominant (Fig. 4). The management chose the dominant scenario due to lower investments and therefore lower risk of sunk-cost effects. The entire decision-making process took two weeks.

Fig. 4. Results of the method application
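The structure of such a static annual cost model, cumulated over the product lifetime, can be sketched as follows. All figures are hypothetical illustrations; the paper does not disclose the OEM's numbers.

```python
# Sketch of a static annual cost model cumulated over the product lifetime.
# Cost components and values are hypothetical, not the OEM's data.

LIFETIME_YEARS = 14

def cumulated_cash_out(investment, annual_fixed, annual_wages, annual_transport,
                       years=LIFETIME_YEARS):
    """One-off investment plus the yearly operating cash flows over the lifetime."""
    return investment + years * (annual_fixed + annual_wages + annual_transport)

# Hypothetical sites (million EUR): the existing plant needs less investment;
# the low-cost site saves wages but pays more for transport.
existing = cumulated_cash_out(investment=10.0, annual_fixed=2.0,
                              annual_wages=5.0, annual_transport=3.0)
low_cost = cumulated_cash_out(investment=40.0, annual_fixed=2.0,
                              annual_wages=2.5, annual_transport=5.5)

print(existing, low_cost)  # 150.0 180.0
```

The uncertain factors (here transport and wages) would then be varied within their sensitivity corridors, as described in Sect. 3.3, to test the robustness of the ranking.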

5 Conclusion and Future Research

A focused decision-making process can be derived from a preliminary analysis of the decision problem and an uncertainty-oriented procurement of information. By recording and evaluating the uncertainty during information acquisition and modelling, unnecessary detailing loops can be avoided, since the uncertainty is mapped in the sensitivity analysis or covered by additional scenarios. Detailed information is only obtained for the most relevant factors. Thus, even investment-intensive, far-reaching decisions can be made in a short time. The assessment of uncertainty, however, continues to depend on the experience and knowledge of those involved in the process and cannot be completely quantified. Further research is therefore needed to derive concrete specifications for the classification of uncertainty in relation to sensitivities and scenario alternatives.

6 Acknowledgment

The authors would like to thank the German Research Foundation (DFG) for the kind support within the Cluster of Excellence "Internet of Production" (Project-ID: 390621612).

References

1. Schuh, G., Reuter, C., Prote, J.-P., Stöwer, M., Witthohn, C., Fränken, B.: Konsortial-Benchmarking: Gestaltung von globalen Produktionsnetzwerken (2016)
2. Roland Berger Strategy Consultants: Mastering product complexity (2012)
3. Váncza, J.: Production networks. In: Laperrière, L., Reinhart, G. (eds.) CIRP Encyclopedia of Production Engineering, pp. 1–8. Springer, Berlin (2016). https://doi.org/10.1007/978-3-662-53120-4
4. Krystek, U.: Internationalisierung: Eine Herausforderung für die Unternehmensführung (1997)
5. Spetzler, C.S.: Decision Quality: Value Creation from Better Business Decisions. John Wiley & Sons, Hoboken (2016)
6. Lanza, G., et al.: Global production networks: design and operation. CIRP Annals 68(2) (2019)
7. Christodoulou, P., Fleet, D., Hanson, P., Phaal, R., Probert, D., Shi, Y.: Making the right things in the right places: a structured approach to developing and exploiting 'manufacturing footprint' strategy (2007)
8. Stoffel, M.: V-Modell zur Auslegung großer Produktionsnetzwerke. Apprimus Verlag, Aachen (2016)
9. Cheng, Y., Farooq, S., Johansen, J.: International manufacturing network: past, present, and future. Int. J. Oper. Prod. Manag. 35(3), 392–429 (2015)
10. Olhager, J., Pashaei, S., Sternberg, H.: Design of global production and distribution networks. Int. J. Phys. Distrib. Log. Manage. 45(1/2), 138–158 (2015)
11. Lanza, G., Moser, R.: Multi-objective optimization of global manufacturing networks taking into account multi-dimensional uncertainty. CIRP Ann. 63(1), 397–400 (2014)
12. Putnik, G.D., Škulj, G., Vrabič, R., Varela, L., Butala, P.: Simulation study of large production network robustness in uncertain environment. CIRP Ann. 64(1), 439–442 (2015)
13. Gölzer, P., Simon, L., Cato, P., Amberg, M.: Designing global manufacturing networks using big data. Procedia CIRP 33, 191–196 (2015)
14. Tiwari, S., Wee, H.M., Daryanto, Y.: Big data analytics in supply chain management between 2010 and 2016: insights to industries. Comput. Ind. Eng. 115, 319–330 (2018)
15. Laux, H., Gillenkirch, R.M., Schenk-Mathes, H.Y.: Entscheidungstheorie. Springer, Berlin (2014). https://doi.org/10.1007/978-3-642-55258-8
16. Walker, W.E., et al.: Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integr. Assess. 4(1), 5–17 (2003)
17. Klein, R., Scholl, A.: Planung und Entscheidung: Konzepte, Modelle und Methoden einer modernen betriebswirtschaftlichen Entscheidungsanalyse, München (2012)
18. Walker, W.E., Lempert, R.J., Kwakkel, J.H.: Deep uncertainty. In: Encyclopedia of Operations Research and Management Science, pp. 395–402. Springer, Boston (2013)
19. Kreye, M.E., Goh, Y.M., Newnes, L.B.: Manifestation of uncertainty - a classification. In: Proceedings of the 18th International Conference on Engineering Design (ICED 11), pp. 96–107 (2011)
20. de Weck, O., Eckert, C., Clarkson, J.: A classification of uncertainty for early product and system design. In: Proceedings of the 16th International Conference on Engineering Design (ICED 07), pp. 159–171, Paris (2007)
21. Huijbregts, M.A.J., et al.: Framework for modelling data uncertainty in life cycle inventories. Int. J. LCA 6(3), 127–132 (2001)
22. Schuh, G., Prote, J.-P., Fränken, B., Gützlaff, A.: Assessment of production network scenarios based on economic value added. In: 24th International Conference on Production Research (ICPR 2017), Poznan, Poland, 30 July–3 August 2017
23. Gausemeier, J., Fink, A., Schlake, O.: Szenario-Management: Planen und Führen mit Szenarien. Hanser, München (1996)

Supply Chain Scenarios for Logistics Service Providers in the Context of Additive Spare Parts Manufacturing

Daniel Pause and Svenja Marek

Institute for Industrial Management (FIR), Aachen, Germany
{daniel.pause,svenja.marek}@fir.rwth-aachen.de

Abstract. Current supply chain structures in spare parts logistics are changing profoundly due to the influence of digitalization and additive manufacturing (AM). In particular, the Logistics Service Provider (LSP) is affected by this change, as the physical transport of goods could become redundant due to the digital transmission of production data. This leads to a reduction of the LSP's share in the value chain, so conceptualizing a new role for the LSP for additively manufactured spare parts is necessary. Therefore, five different scenarios are identified, in which the LSP serves as a transport carrier, a digital distributor, an AM decision maker, a selector of the manufacturer and an AM service provider.

Keywords: Additive/Rapid Manufacturing (AM) · Logistics Service Provider (LSP) · Spare parts management · Spare parts logistics · Supply chain management



1 Introduction

The common value structures in logistics and supply chain management are about to undergo a profound change, mainly driven by technology. 95% of all logistics companies surveyed are convinced that digitization will trigger a profound change in logistics processes [1, 2]. In particular, booking platforms will gain in importance and take on a new mediating position between the client and the Logistics Service Provider (LSP) [3, 4]. In the long term, however, Additive Manufacturing (AM) may have an even greater influence on the reorganization of existing logistics structures. Significant advantages of AM, such as on-demand production of small lot sizes, drive the implementation of the described future scenario [5].

A study concerning the future of spare parts shows that German industry sees high potential for AM in the spare parts business as an example of Rapid Manufacturing. The study states that in the next five years, more than 85% of spare parts suppliers will apply Rapid Manufacturing to their business, and companies which incorporate printing spare parts today will gain a sustainable competitive advantage in the future [6].

The high potential in the field of Rapid Manufacturing is illustrated by the recently high and dynamic research and development activities, which lead to a maturing of the procedures and applications. Regarding publications and patent registrations, Rapid Manufacturing has already gained more importance than Rapid Prototyping [7]. There are several examples of the increasing importance of Additive Manufacturing of spare parts in the industry: General Electric Aviation is printing fuel nozzles for its aircraft engines [8]. Daimler additively manufactures spare parts for trucks and already has over 30 digital models in a "virtual" warehouse. AM is furthermore applied by Siemens in the field of train manufacturing [9]. Siemens stores more than 450 digital models in a "virtual" warehouse, which can be printed on demand [10].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 51–58, 2019. https://doi.org/10.1007/978-3-030-29996-5_6

2 Motivation

Following the influence of digitization and the resulting possibility to digitally transmit production data, the physical transport of goods between manufacturer and customer could become redundant. The CAD¹ file of a spare part would be sent digitally to the place of use and manufactured locally [11]. The reduction or abolition of delivery routes could reduce the LSP's share in the value chain. Consequently, the arising question is what the LSP's future market position will look like. For this reason it is necessary to design a new role concept for the LSP in the value chain for additively manufactured spare parts.

From the LSP's viewpoint, additional services in the field of after-sales or spare-parts logistics seem especially attractive, as the margin is much higher than in the traditional transport business. After-sales offers contain value-added services, so the LSP takes over additional value-adding tasks such as stock and demand planning [12]. As the number of manufacturing service providers increases, so does the demand for players who can act as intermediaries between end customers and manufacturing service providers. Spare parts customers benefit from intermediaries who offer a complete supply chain solution with capacity, transport and technology planning. In this context, spare parts customers can continue to concentrate on their individual core competencies [12].

Especially in the context of spare parts, the central challenge is the fast delivery of needed parts. Oftentimes, specialized spare parts are not constantly available, and an expensive loss of production can be the result. AM can increase the service efficiency and have a positive impact on the logistics costs due to the optimization of spare parts availability. The morphological box shown in Fig. 1 illustrates the field of observation of the research. The target group encompasses suppliers, the Original Equipment Manufacturer (OEM), the LSP and the customers, and refers to capital goods. Part of the logistics field are the intralogistics as well as the logistics across companies. Furthermore, the subsystems storage, warehousing and transport are taken into account. Spare parts are the field of observation, in particular single items which are critical to the function.

¹ CAD: computer-aided design.

Fig. 1. Field of observation based on a morphological box. The criteria and attributes of the box are listed below; the original figure marks each attribute as in, partly in, or not in the field of observation:

Users
- Target group: Supplier, OEM, Wholesale/Retail, LSP, Customer
- Industry sector: Capital good, Consumer good

Logistics area
- Logistics area: Intralogistics, Interlogistics
- Logistic subsystem: Order processing, Storage, Warehouse, Transport

Spare parts
- Type of spare part: Long-lasting capital and consumer goods, Non-durable goods
- Criticality: Critical to the function, Uncritical to the function
- Abrasion behavior: Predictable, Sporadic
- Demand: Single parts, Low quantities, High quantities
- Concept of maintenance: Periodic maintenance, Unscheduled maintenance, Emergency service
- Type of manufacturing: Single-item production, Single- and small-series production, Series production, Mass production

3 State of the Art

The impact of additive spare parts manufacturing on the service portfolio of LSPs has been reviewed in several public articles, which claim its future importance [13, 14]. There are several pilot projects of international logistics service companies, such as UPS offering AM services in 29 US states [14]. However, all AM approaches so far resemble a trial-and-error procedure and do not follow a scientific approach. Therefore, an investigation of the scientific relevance and the state of research on the LSP's role in AM is essential.

Huth and Goele [15] investigate the potential of spare parts logistics in the region of Berlin, Germany. Based on empirical research, the importance of spare parts logistics for manufacturing companies is identified. The role of spare parts logistics for corporate success, the outsourcing potential, the current application as well as challenges in the application are measured.

Barkawi et al. [12] investigate potentials of after-sales businesses for different sectors. The authors focus on the collaboration of different supply chain actors to achieve an increase of value. There is no application of a typology or empirical research.

Geiger [16] investigates the spare parts industry as a future market for LSPs and the current and future role that a LSP can take. Systematic fields of application of LSPs, customer requirements as well as prerequisites for entering new businesses are identified. For this purpose, an empirical study as well as a typology are applied.


The illustrated research studies analyze the LSP's possible role and market position in the future. However, none of these studies considers the impact of AM technologies on the LSP's service portfolio, which will be addressed in this work.

4 Supply Chain Scenarios for LSPs in the Context of Additive Spare Parts Manufacturing

The term supply chain is often used as an equivalent for the term logistics network [17]. The core task in supply chain management is the economic and demand-oriented connection of sources and sinks, or production sites and customers, through the exchange of material, information and financial flows [18]. Schuh et al. [17] divide a supply chain network vertically into four levels: network, company, function and resources. In this study the network level is considered. In particular, spare parts logistics, which represents a sub-area of supply chain management [15], is projected onto this level. According to Huth and Goele [15], spare parts logistics ensures that the spare parts required for the maintenance and/or repair of plants, equipment and end products are made available to the appropriate consumer in the required quantity, type and quality at the right time and as cost-effectively as possible.

Since the influence of AM on spare parts logistics has not yet been sufficiently investigated, possible scenarios in which the logistics service provider is involved in the spare parts service process are presented below. Apart from the basic scenario, which characterizes the LSP as a carrier of spare parts, there are four other scenarios; the LSP changes its role in each of them (see Fig. 2). The spare parts supply chain consists of different functional roles, which are necessary to fulfil the additive spare parts service and which the logistics service provider could potentially play: the digital distributor (distribution of the digital file), the AM decision maker (decision about AM construction), the selector of the manufacturer (selection of the manufacturer), the manufacturer of additive spare parts (production of spare parts), and the carrier of spare parts (transport of spare parts).
The spare parts customer (Customer) triggers the process with a spare parts order and then receives the finished spare part. The spare parts customer determines the demand for spare parts and what is required from the suppliers in terms of delivery time, flexibility and availability. The customer can be a private person or a company. The fast delivery of spare parts can be economically essential for the customer.

In Scenario 0, the LSP functions as a carrier and is responsible for the planning as well as the execution of the transport and delivery orders. The other tasks are performed by other actors, for which there are various variants: for example, the customer could decide for himself whether the part can be produced additively, or this decision could be a service offered by the producer. There are also various possibilities for the selection of the additive manufacturer; this could be done individually by the customer but could also be a service provided by a third party. Express shipping or emergency logistics, for example, are already offered by LSPs and could also be used in the context of additive spare parts logistics, e.g. via night express, same-day transport, direct travel, direct flight or taxi.


Fig. 2. Scenarios for the LSP in the additive spare parts supply chain

Scenario 1 characterizes the LSP as a digital distributor. The digital distributor is responsible for the safe delivery of construction data, such as CAD data, from the spare parts customer to the AM service provider [19]. If the spare parts customer is not in possession of the construction data, he might contact the OEM to get the necessary data. Otherwise, the OEM can also represent the spare parts customer in the spare parts supply chain presented above. In order to fulfill the role of digital distributor, extensive know-how in the field of data management and data distribution is necessary to ensure a safe, fast and reliable transfer of the CAD data. Moreover, it is required that IP rights are maintained through new technologies, crypto procedures, privacy management, etc. The LSP serves as the first contact person for the AM service provider and the customer. Therefore, it needs to be considered that the LSP ensures the limited availability of the recipient. Whether and to what extent the LSP performs further tasks in this scenario must be decided individually by the LSP. It would be imaginable that the LSP also takes over the role of spare parts carrier.

In Scenario 2 the LSP serves as an AM decision maker and holds knowledge of the development and construction of spare parts. The LSP consults the customer regarding the potentials of AM related to the development, production and delivery of spare parts and decides whether to apply additive or conventional manufacturing to the object of interest. To provide a well-founded information and decision basis, various aspects have to be taken into account. For example, the shape and size of the object are important factors. In addition, the economic efficiency and possible potentials that could result from topology and shape optimization must be taken into account. Beyond that, it might be useful to combine this scenario with the previous one. It would also be necessary to decide which actor carries out the other tasks.

In Scenario 3 the LSP functions as a selector of the manufacturer. After the AM data has been developed, the LSP chooses an AM service provider that fits the order. To do this, the LSP must first define the information requirements of the customer and the producer. The LSP takes up the customer's requirements in a systematic approach and compares them with the characteristics of the producers, using a matching algorithm and a defined valuation model. The result is a list of possible producers, which is presented to the customer; based on this list, the customer can choose the producer. The other tasks in the scenario can be performed by different actors. This decision must be made individually based on a business case consideration.

Scenario 4 characterizes the LSP as an AM service provider. The LSP has its own production capacity as well as production and construction skills, which is why in Scenario 4 the initial investment costs are comparatively high. AM is a technology that can theoretically be used anywhere in the world and thus enables customer-oriented production. A LSP which has several distribution and warehouse locations nationally and globally has optimal prerequisites for the implementation of 3D printers at its locations. A distinction can be made between central and decentral production; both have specific advantages and disadvantages, which should be weighed up individually. The LSP competes with other AM service providers and can be requested directly by the customer or in cooperation with a digital distributor.
This is a contract manufacturing of the spare part where the data is delivered by the digital distributor. After the spare part has been manufactured, it is delivered directly from the LSP to the customer. However, the delivery can also be carried out by another logistics service provider. Furthermore, the production of the spare part can be subcontracted, or core aspects such as redesign, 3D printing, post-processing or quality assurance can be outsourced by the LSP.

The scenarios described above are summarized in Fig. 3. For a structured summary, a morphological box with three criteria and their associated characteristics is used, which summarizes the necessary competencies of the logistics service provider for the respective scenarios. The first descriptive criterion is the function of the logistics service provider: the LSP can be a service provider (performer), a consultant or an intermediary. In addition, the necessary professional competence is presented; a distinction is made between competence in handling information transfer, production, design and logistics. Finally, the existence of production capacities is defined: these can belong directly to the logistics service provider itself, or outside capacities can be used.


Fig. 3. Overview of the function, competencies and production capacities of logistics service providers per supply chain scenario
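The matching step described in Scenario 3 can be sketched as a weighted requirements check. The criteria, weights and provider profiles below are invented for illustration; the paper does not specify the valuation model.

```python
# Hypothetical sketch of the Scenario 3 matching step: score AM service
# providers against weighted customer requirements and rank them.
# Criteria, weights and provider profiles are invented for illustration.

def match_score(provider, requirements, weights):
    """Weighted share of requirements the provider satisfies (0..1)."""
    total = sum(weights.values())
    met = sum(w for crit, w in weights.items()
              if provider.get(crit, 0) >= requirements[crit])
    return met / total

# Customer requirements (each criterion reads as "at least this value").
requirements = {"build_size_mm": 250, "quality_certified": 1, "capacity_per_week": 50}
weights = {"build_size_mm": 0.3, "quality_certified": 0.5, "capacity_per_week": 0.2}

providers = {
    "P1": {"build_size_mm": 300, "quality_certified": 1, "capacity_per_week": 40},
    "P2": {"build_size_mm": 200, "quality_certified": 1, "capacity_per_week": 80},
}

# Ranked list presented to the customer, best match first.
ranking = sorted(providers,
                 key=lambda p: match_score(providers[p], requirements, weights),
                 reverse=True)
print(ranking)  # ['P1', 'P2']
```

The resulting ranked list corresponds to the list of possible producers that the LSP presents to the customer in Scenario 3.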

5 Conclusion and Further Research

Based on the changing role of the LSP due to digitalization, five scenarios describing the role of the LSP have been developed to identify the LSP's future market position. Each role requires different capabilities and a different knowledge base. Thus, the identification of the optimal role of a LSP depends on its individual characteristics.

Further research is needed in the areas of technology, property rights and business models. Concerning the technology, it is essential to decrease the process times of AM so that on-demand manufacturing can be realized. In terms of property rights, the ownership rights of the CAD files are yet to be determined, and the digital interfaces between OEM, digital distributor and AM service provider need to be specified exactly. In addition, business models have to be identified in order to quantify the value created for the LSP and enable a well-founded decision to be made.

Acknowledgements. This research and development project 3Dsupply is funded by the German Federal Ministry of Education and Research (BMBF) within the "Innovations for Tomorrow's Production, Services, and Work" Program (funding number 02K16C162) and implemented by the Project Management Agency Karlsruhe (PTKA). The author is responsible for the content of this publication.


References

1. Kayikci, Y.: Sustainability impact of digitization in logistics. Procedia Manuf. (2018). https://doi.org/10.1016/j.promfg.2018.02.184
2. Bundesvereinigung Logistik: Digitalisation in Logistics (2017). https://www.bvl.de/en/positionpaper-digitisation. Accessed 8 Aug 2019
3. Baums, A.: Industrie 4.0 - politische Aspekte (2015). http://plattform-maerkte.de/wp-content/uploads/2015/03/Industrie-4_0-Feb-2015_Blog.pdf. Accessed 8 Feb 2019
4. van Marwyk, K., Treppte, S.: Digital business models in logistics - results. Roland Berger, München (2016). Accessed 5 Aug 2018
5. Gebhardt, A.: Additive Fertigungsverfahren. Additive Manufacturing und 3D-Drucken für Prototyping - Tooling - Produktion, 5th edn. Hanser, München (2016)
6. Geissbauer, R., Wunderlin, J.: The future of spare parts is 3D. A look at the challenges and opportunities of 3D printing. PwC Strategy& (2017)
7. TAB: Technikfolgenabschätzung (TA). Additive Fertigungsverfahren (3-D-Druck). TAB-Arbeitsbericht Nr. 175. Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB), Berlin (2017). https://www.dgm.de/fileadmin/DGM/Archiv/Print-Medien/Positionspapiere/2017-09-06-Technikfolgeabschaetzung-Additive-Fertigung.pdf
8. Heinze-Wallmeyer, S.: GE Aviation nutzt additive Fertigung zur Herstellung von Flugzeugturbinen (2017). https://www.3d-grenzenlos.de/magazin/kurznachrichten/geaviation-3d-druck-von-flugzeugturbinen-27230923/
9. Pankow, G.: Daimler druckt erstes Lkw-Ersatzteil aus Metall (2017). https://www.produktion.de/nachrichten/unternehmen-maerkte/daimler-druckt-erstes-lkw-ersatzteil-ausmetall-112.html
10. Breuer, H.: A print-on-demand manufacturing plant (2016). https://www.siemens.com/innovation/en/home/pictures-of-the-future/industry-and-automation/additive-manufacturingspare-parts-for-the-rail-industry.html. Accessed 28 Mar 2018
11. Thomas, O., et al.: Supply Chain 4.0: Revolution in der Logistik durch 3D-Druck. IM + io Fachzeitschrift für Innovation, Organisation und Management (März 2016), 58–63 (2016)
12. Barkawi, K., Baader, A., Montanus, S.: Erfolgreich mit After Sales Services. Geschäftsstrategien für Servicemanagement und Ersatzteillogistik. Springer, Heidelberg (2006). https://doi.org/10.1007/3-540-34548-5
13. Scott, C.: FedEx introduces new 3D printing services with FedEx Forward Depots (2018). https://3dprint.com/201472/fedex-3d-printing-forward-depots/
14. Ward, J.: Warum UPS glaubt, dass 3D-Drucker die Logistikkette verändern werden (2016). https://news.sap.com/germany/2016/05/warum-ups-glaubt-dass-3d-drucker-dielogistikkette-verandern-werden/
15. Huth, M., Goele, H.: Potenzial der Ersatzteillogistik von produzierenden Unternehmen in der Region Berlin/Brandenburg (2013)
16. Geiger, R.: Schlussbericht zum Forschungsvorhaben ET-LDL. IPRI - International Performance Research Institute, Stuttgart (2012)
17. Schuh, G., Stich, V. (eds.): Logistikmanagement. Handbuch Produktion und Management 6. VDI-Buch, vol. 6, 2nd edn. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-28992-7
18. Stadtler, H., Kilger, C., Meyr, H. (eds.): Supply Chain Management and Advanced Planning. Concepts, Models, Software, and Case Studies, 5th edn. Springer, Heidelberg (2015)
19. Business Dictionary: Original Equipment Manufacturer (OEM) (2018). http://www.businessdictionary.com/definition/original-equipment-manufacturer-OEM.html. Accessed 30 Jan 2019

Supply Chain Optimization in the Tire Industry: State-of-the-Art

Kartika Nur Alfina1 and R. M. Chandima Ratnayake2

1 University of Indonesia, Depok, Indonesia
2 Department of Mechanical and Structural Engineering and Materials Science, University of Stavanger, Stavanger, Norway
[email protected]

Abstract. Recent research underlines the crucial role of supply chain optimization in maximizing profit and minimizing cost. Today, stakeholders are increasingly empowered and organizations are becoming stakeholder-centered; since the main objectives of a supply chain are availability and inventory control, the pursuit of availability must relate to stakeholder satisfaction. The implementation of supply chain optimization in the tire industry nowadays focuses not only on profit, but also on environmental and societal effects, which are regarded as the route to a sustainable supply chain and stakeholder satisfaction. While there is a wealth of literature on supply chain optimization aimed at maximizing profit and minimizing cost, to the best of our knowledge there is only a limited state-of-the-art review of supply chain optimization that jointly considers the economy, the environment and stakeholder satisfaction. This manuscript analyzes the research streams on supply chain optimization with economic objectives (profit maximization and cost minimization), environmental effects and stakeholder satisfaction, with the aim of relating the existing optimization methods to empirical research and deriving a conceptual framework. The paper classifies the existing research streams and their application in the tire industry by optimization subject. The results give an outlook on which optimization methods are available to supply chain managers and provide a conceptual framework for the tire industry that considers sustainable supply chain factors covering economic, environmental and societal effects.

Keywords: Supply chain optimization · Tire industry · Environment · Societal

1 Introduction

Today, millions of tires are used each year and, with the growing concern about environmental issues in recent years, the problem of used tire disposal has attracted many practitioners and researchers [1]. World demand for tires is increasing by 4.1% per year and will reach 3.0 billion units in 2019, according to a U.S. Environmental Protection Agency (EPA) report [4]. Hence, the tire industry has become an important issue for both academics and practitioners. Supply chain optimization is the application of processes

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 59–67, 2019. https://doi.org/10.1007/978-3-030-29996-5_7


and tools to ensure the optimal operation of a manufacturing and distribution supply chain [5]. In the existing studies, the problem statements generally considered are:

• supply chain optimization with consideration of sustainability factors
• supply chain optimization without consideration of sustainability factors.

The goal of this study is to classify the existing research streams and their application in the tire industry by optimization subject. The results give an outlook on which optimization methods are available to supply chain managers and provide a conceptual framework for the tire industry that considers sustainable supply chain factors covering economic, environmental and societal effects.

2 State-of-the-Art Review

2.1 Literature Selection

The supply chain in the tire industry is getting more complex today, and the variability of market demand and supply adds to this complexity [7]. Supply chain optimization is the application of processes and tools to ensure the optimal operation of a manufacturing and distribution supply chain [5]. In order to restrict the scope of this research, we focus on the supply chain optimization typically used in the tire industry. Reverse logistics and closed-loop supply chains show an increasing trend in recent years [10]. A single-period mixed integer linear programming (MILP) model considering uncertainty parameters was proposed for the closed-loop supply chain [11]; a multi-echelon reverse logistics network was also adopted in an Indian case study [13], which used a mixed integer nonlinear programming (MINLP) model to maximize profit.

2.2 Mixed Integer Linear Programming (MILP)

Mixed Integer Linear Programming (MILP) involves problems in which only some of the variables, xi, are constrained to be integers, while the other variables are allowed to be non-integers; this is why it is called "mixed" [14]. A mixed integer linear programming model can be designed for the closed-loop supply chain to maximize total profit. Such a model typically determines the optimum number of distribution, collection, recycling and retreading centers, together with the flow of products that meet the quality requirements for remanufacturing. In fact, uncertainty is embedded in the optimization model [11].
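As a minimal sketch of this structure (the center names, costs and capacities below are invented for illustration, not taken from the reviewed models), consider a toy closed-loop instance: binary variables decide which recycling centers to open, and continuous variables carry tire flows. With only two binaries we can enumerate their combinations and solve each remaining continuous subproblem greedily, mimicking what a MILP solver's branch-and-bound does:

```python
from itertools import product

FIXED_COST = {"C1": 100.0, "C2": 80.0}   # opening cost per recycling center
UNIT_COST  = {"C1": 2.0,   "C2": 3.0}    # per-tire processing cost
CAPACITY   = {"C1": 60.0,  "C2": 50.0}   # center capacity (tires)
DEMAND     = 90.0                        # tires that must be processed

def solve_toy_milp():
    best = None
    for opens in product([0, 1], repeat=len(FIXED_COST)):
        open_map = dict(zip(FIXED_COST, opens))   # binary open/close decisions
        remaining, cost, flows = DEMAND, 0.0, {}
        # continuous subproblem: route demand to the cheapest open centers
        for c in sorted(open_map, key=UNIT_COST.get):
            if open_map[c]:
                x = min(remaining, CAPACITY[c])   # continuous flow variable
                flows[c] = x
                cost += UNIT_COST[c] * x
                remaining -= x
        if remaining > 1e-9:
            continue                              # these binaries give no feasible plan
        cost += sum(FIXED_COST[c] for c, o in open_map.items() if o)
        if best is None or cost < best[0]:
            best = (cost, open_map, flows)
    return best

cost, open_map, flows = solve_toy_milp()
```

Here cost minimisation stands in for the profit maximisation used in [11]; a real model would add transport arcs, multiple products and uncertainty sets, and hand the formulation to an off-the-shelf MILP solver.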

2.3 Mixed Integer Non-linear Programming (MINLP)

Mixed integer nonlinear programming (MINLP) refers to optimization problems with continuous and discrete variables and nonlinear functions in the objective function and/or the constraints [14]. MINLP problems arise in a wide range of fields, including chemical engineering, finance, and manufacturing. A closed-loop supply chain MINLP model was used to maximize profit in [16], while an MINLP model for reverse logistics was proposed in [13] to maximize profit in tire remanufacturing.
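A minimal way to see the difference from MILP (again with invented numbers): once a nonlinear term enters the objective, the continuous part can no longer be solved as a linear program. Below, a binary retread-line decision interacts with a quadratic processing cost; we enumerate the binary variable and grid-search the single continuous one:

```python
def total_cost(open_line: int, x: float) -> float:
    """Cost of sending x tires to a retreading line (binary on/off decision)."""
    FIXED, DISPOSAL, DEMAND = 20.0, 4.0, 40.0   # assumed toy data
    if not open_line:
        x = 0.0                                 # line closed: nothing is retreaded
    retread = 1.5 * x + 0.05 * x * x            # quadratic term makes this nonlinear
    return FIXED * open_line + retread + DISPOSAL * (DEMAND - x)

def solve_toy_minlp(steps=400):
    # enumerate the binary variable, grid-search the continuous one
    grid = [i * 40.0 / steps for i in range(steps + 1)]
    return min(((total_cost(b, x), b, x) for b in (0, 1) for x in grid),
               key=lambda t: t[0])

cost, open_line, x = solve_toy_minlp()
```

Real MINLP solvers combine branch-and-bound with nonlinear relaxations; grid search only works for this one-variable toy.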

2.4 Closed-Loop Supply Chain

A closed-loop supply chain comprises the design, control, and operation of a system to maximize value creation over the entire life cycle of a product with dynamic recovery [8]. Designing an economically and ecologically optimized closed-loop supply chain network is a prerequisite for tire producers to facilitate increased environmental responsibility and sustainable development [3]. Several literature review papers have been published on closed-loop supply chains in the tire industry, such as [3], which minimizes environmental impact and maximizes profit, and [2]. In [17], the first-stage model maximizes profit, while the second stage focuses on sustainability factors: minimizing environmental and social effects while maximizing profit.

2.5 Reverse Logistics

Reverse logistics (RL) has been defined as a term referring to the role of logistics in product returns, source reduction, recycling, materials substitution, reuse of materials, waste disposal, and refurbishing [9]. Reverse logistics systems use mathematical tools to design for the recovery of products that have ended their life cycle [18]. Besides MILP or MINLP models, for example, [9] proposed a genetic algorithm for reverse logistics to minimize total cost, and [19] used a fuzzy multi-objective mixed integer programming model to maximize total profit and coverage area.
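The genetic algorithm of [9] is not reproduced here, but its flavour can be sketched with invented data: a chromosome assigns each collection zone to a recycling center, and the total transport cost plus a capacity-violation penalty is minimised:

```python
import random

COST = [[4, 9], [7, 3], [5, 6], [8, 2], [3, 7]]  # COST[zone][center], assumed data
CAP = [3, 3]                                     # max zones per center
N_ZONES, N_CENTERS = len(COST), len(CAP)

def fitness(assign):
    # transport cost plus a penalty that drives out capacity violations
    loads = [assign.count(c) for c in range(N_CENTERS)]
    penalty = 100 * sum(max(0, l - cap) for l, cap in zip(loads, CAP))
    return sum(COST[z][c] for z, c in enumerate(assign)) + penalty

def evolve(pop_size=40, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_CENTERS) for _ in range(N_ZONES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_ZONES)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(N_ZONES)] = rng.randrange(N_CENTERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

On a real network the chromosome would also encode facility-opening decisions and vehicle routes, which is where GA's flexibility over exact MILP/MINLP formulations pays off.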

3 Analysis and Observations

3.1 Literature Analysis

Based on the literature analysis of supply chain optimization in the tire industry, our next objective is to derive classifications regarding the following issues:

• What types of supply chain optimization should be considered by supply chain managers?
• Which methods are most suitable for supply chains in the tire industry?
• How can challenges in implementing supply chain optimization in the tire industry be identified?
• What conceptual framework represents supply chain implementations in the tire industry?

As a first step, the implementation of supply chain optimization in the tire industry is identified in Table 1, which describes the methodology, the evaluated factors and the research method.

Table 1. Supply chain optimization implementation research in tire industry

| Authors | Year | Methodology | Evaluated factor | Method | Research sector |
|---|---|---|---|---|---|
| [10] Kannan | 2009 | Reverse logistics | Minimize total supply chain cost | GA (genetic algorithm) | Tire, plastic goods |
| [3] Subulan | 2015 | Closed-loop supply chain | Maximize total profit, minimize total environmental impact | MILP (mixed integer linear programming), IFGP | Tire |
| [19] Radhi and Zhang | 2016 | Closed-loop supply chain | Maximize total profit | MINLP (mixed integer nonlinear programming) | Tire |
| [2] Simic | 2016 | Closed-loop supply chain | Maximize total profit | Interval-parameter chance-constrained programming model | Tire |
| [4] Amin | 2017 | Closed-loop supply chain | Maximize total profit | MILP | Tire |
| [12] Pedram | 2016 | Closed-loop supply chain | Maximize total profit | MILP | Tire |
| [20] Simic | 2017 | Closed-loop supply chain | Maximize total profit, minimize environmental and social effects | Interval-parameter chance-constrained programming model | Tire |
| [21] Yadollahinia | 2018 | Reverse logistics | Maximize total profit, maximize customer satisfaction, minimize distance to collecting facilities | MILP | Tire |
| [16] Sahedjamnia | 2018 | Closed-loop supply chain | Minimize total cost, minimize total environmental impact | MILP | Tire |
| [1] Pereira | 2018 | Closed-loop supply chain | Forecasting volume of scrap tires, probability of return | FTM (transfer function model) | Tire |
| [22] Saxena | 2018 | Reverse logistics | Maximize total profit, maximize coverage | Fuzzy multi-objective mixed integer programming model | Remanufacturing tire |
| [23] Ebrahimi | 2018 | Closed-loop supply chain | Minimize total cost, minimize environmental effects, maximize demand responsiveness | Stochastic multi-objective programming | Tire |
| [13] Fathollahi-Fard | 2018 | Closed-loop supply chain | Minimize fixed cost, minimize transportation cost, minimize purchasing cost | Tri-level programming model | Tire |
| [24] Banguera | 2018 | Reverse logistics | Minimize total cost | MILP | Tire |
| [25] Costa-Salas | 2017 | Reverse supply chain network design | Maximize economic benefit, minimize environmental impact | MILP | Tire |
| [14] Sasikumar | 2010 | Reverse logistics | Maximize profit | MINLP | Remanufacturing tire |
| [26] Bhattacharyya | 2017 | Closed-loop supply chain | Maximize profit | MILP | Tire |
| [27] Farias | 2017 | Supply chain network design | Minimize fixed cost, minimize variable cost | MILP | Tire |

3.2 Critical Analysis

Publications on supply chain optimization specifically for the tire industry have boomed from 2015 to the present; in the early 2010s there were no significant publications, as shown in Fig. 1, which summarizes the supply chain optimization implementation research in the tire industry over the last decade and analyzes the objective functions. Most recent publications evaluate factors in the economic dimension; the implementation of sustainable supply chain factors is still limited.


Fig. 1. Publication review of supply chain optimization in tire industry

3.3 Managerial Implications

In many practical settings, companies need analysis tools to estimate both the robustness and the sustainability of the supply chain. For a sustainable supply chain, the objective function needs to consider economic, environmental and social impacts. Tables 2 and 3, which classify the reviewed literature by objective function category, can therefore serve as a decision-support reference for supply chain managers in the tire industry, helping them select the approach that best matches their company's objectives.

Table 2. Supply chain optimization classified by objective function

| Economic | Economic, Environment | Economic, Environment, Social |
|---|---|---|
| [10] Kannan, [19] Radhi and Zhang, [2] Simic, [4] Amin, [12] Pedram, [13] Fathollahi-Fard, [24] Banguera, [14] Sasikumar, [26] Bhattacharyya, [27] Farias | [3] Subulan, [16] Sahedjamnia, [22] Saxena, [25] Costa-Salas | [20] Simic, [21] Yadollahinia, [23] Ebrahimi |

Table 3. Matrix of managerial implications

| | Suppliers | Manufacturer | Collecting center | Recycler |
|---|---|---|---|---|
| Economic | Optimize raw material purchasing | Increase manufacturing process and capacity, optimize production planning | Optimize distance for collecting used/scrap tires | Optimize recycling/retreading planning |
| Environment | Reduce scrap materials | Reduce scrap tires | Minimize return of used/scrap tires | Minimize return of used/scrap tires |
| Social | Increase service level compliance | Optimize demand fulfilment | Labour cost efficiency | Labour cost efficiency |


4 Towards a Conceptual Framework

The developed conceptual framework is expected to provide general guidance [22] on supply chain optimization in the tire industry. Figure 2 illustrates the conceptual framework, which is constructed based on the analysis of the findings in the literature. The framework comprises four elements, which represent the essential features for successful supply chain optimization implementation in the tire industry:

(1) reliable data support;
(2) a sustainable model;
(3) reliable solvers; and
(4) simultaneous implementation.

Fig. 2. A conceptual framework for implementing supply chain optimization in tire industry

The contribution of the conceptual framework in this manuscript is to describe the sustainable supply chain factors that, based on the literature analysis, yielded robust results in implementation. Combining the three factors of economic, environmental and social impact, as in [17], [1] and [24], gives a robust and sustainable impact in the tire industry. The simultaneity factor needs to be highlighted for supply chain managers in their practical problems in order to achieve robust results.


5 Conclusions

Supply chain optimization is a crucial part of ensuring that the tire business remains profitable while maintaining good stakeholder relations. This study has revealed managerial implications and a conceptual framework for sustainable supply chain optimization covering economic, environmental and social factors. It thereby contributes decision support for supply chain managers facing practical problems, indicating which method best fits the company objective. Although research on supply chain optimization is growing, publications that focus on stakeholder satisfaction as the research objective remain limited. In future, social methodologies such as customer relationship management need to be studied further, in the tire business as well as in other industries.

References
1. Yadollahinia, M., Teimoury, E., Paydar, M.M.: Tire forward and reverse supply chain design considering customer relationship management. Resour. Conserv. Recycl. 215–228 (2018). https://doi.org/10.1016/j.resconrec.2018.07.018
2. Simic, V., Dabic-Ostojic, S.: An interval parameter chance constrained programming model for uncertainty based decision making in tire retreading industry. J. Clean. Prod. 1–9 (2016). https://doi.org/10.1016/j.jclepro.2016.10.122
3. Subulan, K., Taşan, A.S., Baykasoğlu, A.: Designing an environmentally conscious tire closed-loop supply chain network with multiple recovery options using interactive fuzzy goal programming. Appl. Math. Model. 2661–2702 (2015). https://doi.org/10.1016/j.apm.2014.11.004
4. Amin, S.H., Zhang, G., Akhtar, P.: The effects of uncertainty on a closed loop supply chain tire network. Expert Syst. Appl. (2017). https://doi.org/10.1016/j.eswa.2016.12.024
5. Wikipedia: Supply-chain optimization. Wikimedia Foundation, Inc., 09 February 2019. https://en.wikipedia.org/wiki/Supply-chain_optimization
6. Alfina, K.N., Dachyar, M., Farizal, F.: Optimization supply chain strategy of tire manufacturing using goal programming method. SSRN eLibrary (2018). https://doi.org/10.2139/ssrn.3248155
7. Van Wassenhove, L.N., Guide Jr., V.D.R.: The evolution of closed-loop supply chain research. Oper. Res. (INFORMS), 10–18 (2009). https://doi.org/10.1287/opre.1080.0628
8. Kannan, G., Noorul Haq, A., Devika, M.: Analysis of closed loop supply chain using genetic algorithm and particle swarm optimisation. Int. J. Prod. Res. 47, 1175–1200 (2009). https://doi.org/10.1080/00207540701543585
9. Govindan, K., Soleimani, H.: A review of reverse logistics and closed-loop supply chains for cleaner production. J. Clean. Prod. (2017). https://doi.org/10.1016/j.jclepro.2016.03.126
10. Pedram, A., Yusoff, N.B., Udoncy, O.E., Mahat, A.B., Pedram, P., Babalola, A.: Integrated forward and reverse supply chain: a tire case study. Waste Manag. (2016). https://doi.org/10.1016/j.wasman.2016.06.029
11. Fathollahi-Fard, A.M., Hajiaghaei-Keshteli, M., Mirjalili, S.: Hybrid optimizers to solve a tri-level programming model for a tire closed-loop supply chain network design problem. Appl. Soft Comput. (2018). https://doi.org/10.1016/j.asoc.2018.06.021
12. Sasikumar, P., Kannan, G., Haq, A.N.: A multi-echelon reverse logistics network design for product recovery—a case of truck tire remanufacturing. Int. J. Adv. Manuf. Technol. 1223–1234 (2010). https://doi.org/10.1007/s00170-009-2470-4
13. Research Gate: MILP, MIP and ILP, 7 July 2017. https://www.researchgate.net. Accessed May 2019
14. Radhi, M., Zhang, G.: Optimal configuration of remanufacturing supply network with return quality decision. Int. J. Prod. Res. (2015). https://doi.org/10.1080/00207543.2015.1086034
15. Simić, V., Dabić-Ostojić, S., Bojović, N.: Interval-parameter semi-infinite programming model for used tire management and planning under uncertainty. Comput. Ind. Eng. (2017). https://doi.org/10.1016/j.cie.2017.09.013
16. Banguera, L.A., Sepúlveda, J.M., Ternero, R., Vargas, M., Vásquez, Ó.C.: Reverse logistics network design under extended producer responsibility: the case of out-of-use tires in the Gran Santiago city of Chile. Int. J. Prod. Econ. (2018). https://doi.org/10.1016/j.ijpe.2018.09.006
17. Saxena, L.K., Jain, P.K., Sharma, A.K.: A fuzzy goal programme with carbon tax policy for Brownfield tyre remanufacturing. J. Clean. Prod. (2018). https://doi.org/10.1016/j.jclepro.2018.07.005
18. Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature review. MIS Q. 26(2), xiii–xxiii (2002)
19. Fink, A.: Conducting Research Literature Reviews from Internet to Paper. SAGE, Los Angeles (2014)
20. Rachman, A., Ratnayake, R.C.: Adoption and implementation potential of the lean concept in the petroleum industry: state-of-the-art. Int. J. Lean Six Sigma (2018). https://doi.org/10.1108/IJLSS-10-2016-0065
21. Meredith, J.: Theory building through conceptual methods. Int. J. Oper. 13(5), 3–11 (1993)
22. Ebrahimi, S.B.: A stochastic multi-objective location-allocation-routing problem for tire supply chain considering sustainability aspects and quantity discounts. J. Clean. Prod. (2018). https://doi.org/10.1016/j.jclepro.2018.07.059
23. Pati, R.K., Vrat, P., Kumar, P.: Quantifying bullwhip effect in a closed loop supply chain. Oper. Res. Soc. India, 231–253 (2011). https://doi.org/10.1007/s12597-010-0024-z
24. Shaharudin, M.R., Govindan, K., Zailani, S., Tan, K.C., Iranmanesh, M.: Product return management: linking product returns, closed-loop supply chain activities and the effectiveness of the reverse supply chains. J. Clean. Prod. (2017). https://doi.org/10.1016/j.jclepro.2017.02.133
25. Sasikumar, P., Kannan, G., Haq, A.N.: Multi-echelon reverse logistics network design for product recovery: truck tire remanufacturing. Int. J. Adv. Manuf. Technol. 1223–1234 (2010). https://doi.org/10.1007/s00170-009-2470-4

Collaborative Exchange of Cargo Truck Loads: Approaches to Reducing Empty Trucks in Logistics Chains

Hans-Henrik Hvolby1, Kenn Steger-Jensen1, Mihai Neagoe2, Sven Vestergaard1, and Paul Turner2

1 Centre for Logistics, Department of Materials and Production, Aalborg University, Aalborg, Denmark
[email protected]
2 ARC Centre for Forest Value, Discipline of ICT, College of Sciences and Engineering, University of Tasmania, Hobart, Australia

Abstract. Reducing the volume of trucks carrying empty or below-capacity loads on road networks is both a socio-economic and an environmental sustainability issue for the logistics industry. Planning concepts for a collaborative logistics exchange based on real-time data are described, as well as the benefits in terms of optimizing load capacity utilization, minimizing empty running, and reducing costs, traffic congestion, and truck emissions.

Keywords: Collaborative · Transport · Cargo · Logistics · Agent-based · Constraints · Optimise

1 Introduction

Based on data from the European Union, [1] identified that empty backhauls represent about 25% of road transportation activity and that loaded trucks on average use 57% of their capacity. This triggered a further investigation into how truckload planning is conducted, as well as into the conditions and rules for transport companies. This research project was initiated with the aim of reducing empty running by exchanging loads among freight carriers. The centre-point of the project is to provide real-time data and advanced decision support tools to the transport companies, thereby reducing costs and delivering environmental benefits in terms of reduced pollution and congestion. The potential benefits of horizontal collaboration for logistics service providers are recognized in the literature in the form of increased efficiency and productivity gains [2], decreased environmental impact, and improved market presence or access [3, 4]. From an economic perspective, modeling results of collaborative approaches reveal cost savings ranging from less than 10% [5], through 20–30% [6–8], to as high as 50% [9]. The effectiveness of collaborative approaches can be influenced by various factors, such as geography [10] and, not least, the partners' similarity in distribution networks [11]. Only two freight carriers are participating in the research (demonstrator) project, but in a later implementation a higher number of carriers is required to achieve high efficiency and to reduce the number of non-connecting deliveries.

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 68–74, 2019. https://doi.org/10.1007/978-3-030-29996-5_8


From an environmental perspective, the reduction of empty backhaul trips has been suggested as a way to mitigate the environmental footprint of transport [10]. Environmental impact reductions will likely follow the cost reductions of collaborative approaches, since fuel consumption accounts for a large proportion of transport costs. Sustainability of the industrial sector has become one of the most significant societal, political and business issues, due to the fact that the manufacturing sector has a huge impact on the environment, the economy and the quality of human life. The focus on the impact of supply chain activities, including logistics and transportation, has captured huge academic and industrial interest, which has led to significant contributions regarding figures and measures [15, 16], concepts and strategies, and methodologies and tools such as Life-Cycle Assessment (LCA) and Corporate Social Responsibility (CSR). While collaborative transport planning can generate a series of efficiency benefits, companies are also required to navigate challenges regarding information sharing and security with rivals, and to ensure the delivery of adequate services by other companies for their own customers [4]. In cases where full information disclosure is accepted by the collaborating parties, decisions can be made by a central decision-maker. However, in cases where not all information is shared amongst the parties, decentralized decision-making approaches are adopted [2]. In a decentralized decision-making approach, transporters retain the ability to choose which routes or cargoes they are willing to exchange. The decisions by individual transporters regarding routes or cargoes can depend on the information and options available [12], as well as on the relationships between logistics service providers and their customers [13, 14]. In this context, decision support systems can be useful tools to inform decision-makers of available collaboration alternatives while leaving the final choice under human direction. Today, only few companies are part of supply chains that jointly support an overall planning body to optimize the flow. Supporting optimized goods flows between independent second-party logistics providers (2PLs) calls for supply chain collaboration solutions that increase visibility and integrate decision support into current transport management systems. Firms are used to optimizing internally, but collaborative logistics creates the need to develop a new business model that takes inter-firm relations into account. This paper lines up the findings so far, as well as conceptual solutions to support the sharing of loads. Two options have been identified: an agent-based approach and an optimisation approach. The paper is organised as follows. First, the planning approach in the transport companies is presented, alongside constraints in terms of policies (as well as driving regulations). Secondly, the identified options are presented, followed by a discussion and conclusions.

2 Planning Approach in Case Companies

The freight carriers in focus are specialised in handling point-to-point transport freights, normally of minimum 5 pallets. The direct approach has a number of advantages compared to the hub-and-spoke setup mainly used for smaller freights. The main advantages are less handling and a faster delivery time, whereas the downside is a higher


risk of empty running. The freight carriers have three types of shippers: contractual, regular and occasional customers. Contractual customers have to be served, whereas regular customers can be turned down. Occasional customers are accepted if the load fits the planned route and loads. Most order requests come from customers, but some requests come from other transport companies aiming to sell off transports that do not fit their routes and capacity. The actual load per day for contractual customers is not known until a few hours before the truck starts its route, and it may change over the day. This setup generates some uncertainty with respect to which transport orders to accept. On the one hand, if too many orders are accepted, the transport capacity may be insufficient for the day's deliveries. On the other hand, accepting fewer orders, or taking a more risk-averse approach, may mean that transport capacity is not fully utilized. Due to uneven load balances, trucks are rarely fully loaded on both outbound and inbound trips. The goal is to have a full outbound load and to aim for as high a load as possible on the inbound trips. Most freight carriers have some casual transports to increase loads when possible, e.g. empty pallets. Another option is transport portals such as Timocom and Teleroute [17–19], which act as brokers between shippers and freight carriers. Uber Freight [20] is also entering the European market, but here the portal is a freight marketplace focused on full-load transportation. The transport companies investigated plan their transports manually but use domain-specific systems to support the decision making, to share the current plan with colleagues, and for the final launch of trips. Orders from contractual customers arrive in semi-automated ways, such as Excel sheets or through manual access to the customers' information systems. Regular and occasional customers communicate via email or phone. Surprisingly, very limited planning was conducted in real time, and refilling of trucks was rarely used. The companies were aware that further investigation of these areas could improve capacity utilisation and earnings. The normal strategy of the route planners is to accept most requests, as "it is easier to form a good plan if you have more orders". During the planning process, planners may sell off requests that were accepted earlier in the planning process, or contractual transports, if the load either exceeds the transport capacity or is too small to fill up a truck. In the latter case, the planners contact transport colleagues by phone or email to sell off transports. Usually, the price of a transport increases during the day, and therefore there is a risk element in having too many unallocated transports in the late afternoon.

3 Planning Concepts

As previously stated, the idea of the paper is to present planning concepts supporting the current processes in transport companies. Two different concepts have been brought forward with the aim of improving capacity utilisation:

• an agent-based approach, aimed at supporting the selling and buying of transports to and from other transport companies
• an optimisation approach, where all transports are re-distributed among the participating transport companies under a given set of constraints configured by each participant, such as:
  • accepted transport colleagues to undertake a transport for the given company
  • blocked customers and types of goods that are not to be transferred to colleagues
  • a matrix expressing how many extra miles of route deviation a carrier is willing to drive in order to pick up X number of pallets (e.g. 2 miles for 2 pallets and 20 miles for 10 pallets)
  • a threshold value expressing the minimum profit margin for accepting a request (Fig. 1)

Fig. 1. Conceptual illustration of tenders, requests, constraints and business model

The constraints are used to evaluate whether a request from a customer or a transport company qualifies for manual decision making. To enable this pre-qualification process, the order request must, as a minimum, include the following information: type of goods, number of pallets, weight and volume, order date & time, pick-up information (date & time, address, customer name), delivery information (date & time, address, customer name), and offered price. In an agent-based setup, the route planner will receive the pre-qualified requests which match the configured settings. This ensures that only relevant requests require the attention of the planner. Further, the system should support a full business process transaction, in terms of automatically entering the order specification, notifying the cargo owner when the products are delivered, invoicing the cargo owner, etc. These processes are quite labour-intensive in the current manual setting.
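The pre-qualification step described above can be sketched as a simple filter. The field names, constraint values and the `Request` type here are illustrative assumptions, not the project's actual data model; a request is only forwarded to the human planner if it passes every configured constraint:

```python
from dataclasses import dataclass

@dataclass
class Request:
    carrier: str          # who offers the load
    customer: str
    goods_type: str
    pallets: int
    offered_price: float
    detour_miles: float   # extra driving needed to pick the load up

CONFIG = {
    "accepted_carriers": {"CarrierA", "CarrierB"},
    "blocked_customers": {"CustomerX"},
    "blocked_goods": {"hazardous"},
    # matrix: max extra miles accepted per number of pallets picked up
    "detour_matrix": {2: 2.0, 5: 8.0, 10: 20.0},
    "min_margin": 50.0,   # minimum profit threshold
}

def max_detour(pallets: int) -> float:
    # largest detour allowance whose pallet threshold the request meets
    eligible = [m for p, m in CONFIG["detour_matrix"].items() if pallets >= p]
    return max(eligible, default=0.0)

def prequalify(req: Request, est_cost: float) -> bool:
    return (
        req.carrier in CONFIG["accepted_carriers"]
        and req.customer not in CONFIG["blocked_customers"]
        and req.goods_type not in CONFIG["blocked_goods"]
        and req.detour_miles <= max_detour(req.pallets)
        and req.offered_price - est_cost >= CONFIG["min_margin"]
    )
```

In a full system this filter would sit in front of the agent's inbox, with the accepted requests then flowing into the order-entry, delivery-notification and invoicing transaction described above.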


As trucks constantly move, and new orders are accepted while some old orders are sold off, the picture changes over time; we have tried to illustrate this in Fig. 2. The planning process is similar to the "Net-Change" in Enterprise Resource Planning (ERP) systems. This means that during the day new orders are received and accepted, and each time we consider how best to fit the new order into the existing plan. The alternative to this is a "Full plan" (in ERP this is done during the night, or in some companies over the weekend, due to time constraints), where all plans are cleared and a totally new plan is developed. The trade-off is that the Net-Change does not find an optimal plan, but a feasible plan given the already planned tasks.
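The Net-Change idea can be illustrated with a cheapest-insertion sketch (the distance matrix and stops are invented): instead of re-planning all routes from scratch, each incoming order is slotted into the existing route at the cheapest feasible position:

```python
def route_cost(route, dist):
    # total driving distance along consecutive stops
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def net_change_insert(route, stop, dist):
    # try the new stop in every interior position, keep the cheapest route
    candidates = [route[:i] + [stop] + route[i:] for i in range(1, len(route))]
    return min(candidates, key=lambda r: route_cost(r, dist))

# symmetric distances between depot (0) and stops 1..3 (assumed numbers)
DIST = [
    [0, 4, 6, 3],
    [4, 0, 2, 5],
    [6, 2, 0, 4],
    [3, 5, 4, 0],
]
route = [0, 1, 2, 0]                       # current plan: depot -> 1 -> 2 -> depot
new_route = net_change_insert(route, 3, DIST)
```

A "Full plan" would instead re-optimise over all stops at once, which may find a cheaper tour but is far more expensive to compute and disrupts decisions already communicated to drivers and customers.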

Fig. 2. Real-time scan for relevant tenders to accept or reject a request based on constraints such as position and time-window.

As an alternative to the agent-based planning concept, a constrained or optimised concept could be considered. Constraint-based planning is based on hard and soft (or goal) constraints [19]. Its distinguishing feature is that the objectives can be stated as minimising deviations from pre-specified goals. Hard constraints cannot be overruled, whereas soft constraints can be overruled if necessary. If the number of trucks is considered a hard constraint, then capacity load, based on customer order acceptance, is considered a soft constraint. As no plan optimisation objectives or criteria are considered, this option produces a feasible but not necessarily optimal plan. Therefore, a hidden plan objective function is used to drive the planning and the trade-off among the soft constraints. The hidden plan objective function is defined as minimizing plan cost. In addition to hard and soft constraints, it is possible to use business rules and demand priorities. Business rules are used as explicit decisions made when there are several options to choose among in the plan generation. Business rules are ranked by use of priorities on given topics and play an important role in constraint-based planning by


avoiding the traditional (time-consuming) re-planning and re-scheduling after plan generation. Optimised plans are generated from plan objectives, penalty factors and constraints in addition to the hard and soft constraints [21]. The constraint-based rules are exchanged for decision variables and penalty factors, in place of the hidden objective function, business rules and demand priorities. In the optimisation, soft constraints may be overruled if this reduces the total cost; for example, demand priorities and supplier allocation ranks could be overruled to reach the best profit. The benefit of a constrained or optimised plan is that it automates the planning process. The question, however, is whether planning automation and the elimination of individual decision-making in each company is a change that transport companies are willing to accept. With reference to manufacturing, it is a long journey from manual planning to semi-automated and fully automated planning, as this change requires reliable data and fit-for-purpose decision rules.
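A minimal sketch of how such constraint-based scoring might work, assuming illustrative hard/soft constraints and penalty values (none of these figures or rules come from the paper):

```python
# Hypothetical illustration of constraint-based plan scoring: hard
# constraints filter candidates, soft constraints add penalty costs,
# and a hidden objective (minimise total plan cost) decides the rest.

HARD = [lambda plan: plan["load"] <= plan["truck_capacity"]]   # never overruled
SOFT = [  # (constraint, penalty cost if overruled)
    (lambda plan: plan["km_detour"] <= 50, 200.0),             # detour limit
    (lambda plan: plan["arrival"] <= plan["window_end"], 500.0),  # time window
]

def plan_cost(plan, km_rate=1.2):
    """Hidden objective: base transport cost plus soft-constraint penalties;
    a violated hard constraint makes the candidate infeasible."""
    if not all(check(plan) for check in HARD):
        return float("inf")
    cost = plan["km_detour"] * km_rate
    cost += sum(pen for check, pen in SOFT if not check(plan))
    return cost

candidates = [
    {"load": 30, "truck_capacity": 33, "km_detour": 40, "arrival": 14, "window_end": 16},
    {"load": 30, "truck_capacity": 33, "km_detour": 90, "arrival": 14, "window_end": 16},
    {"load": 40, "truck_capacity": 33, "km_detour": 10, "arrival": 14, "window_end": 16},
]
best = min(candidates, key=plan_cost)  # feasible candidate with lowest cost
```

The hard-constraint check mirrors the rule that hard constraints are never overruled, while the penalty terms let the hidden cost objective trade off soft-constraint violations.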

4 Discussion and Conclusion

The paper discusses possible planning concepts in an automated collaborative logistics system based on real-time collection and analysis of shipment and tracking data, which makes it possible for large competing logistics companies to share load capacity on less-than-truckload shipments and minimise empty running. Currently, planning and coordination are handled manually, and the aim of the ongoing project is to optimise the utilisation of load capacity and minimise empty running, reducing costs, traffic congestion, and truck emissions. At present, only the most obvious constraints have been included in the work. For example, we are not yet able to handle dependent options, where one option is only relevant if another option is also fulfilled. It is also required that all trips have been initiated before the collaborative tool is able to suggest relevant loads to share (the chicken-and-egg challenge). Regular (returning) customers have less integrated planning routines but nevertheless expect their loads to be handled despite lacking contractual agreements. To support this, we suggest that historical data be used to forecast capacity requirements on a daily basis, in order to make room for these customers. Finally, a threshold value for the minimum number of empty pallets in the truck is needed if refill of trucks is considered. This value depends on the type of truck and the number of deliveries to be completed. The important issue here is to avoid unloading many “new” pallets to enable offloading of “older” pallets. This last issue is to some degree in conflict with the initial goal of obtaining a higher capacity load on trucks.

Acknowledgements. The authors would like to express their gratitude to Innovation Fund Denmark, which the Collaborative Cargo (Directly) research project is part of.


References

1. Juan, A.A., Faulin, J., Pérez-Bernabeu, E., Jozefowiez, N.: Horizontal cooperation in vehicle routing problems with backhauling and environmental criteria. Procedia Soc. Behav. Sci. 111, 1133–1141 (2014)
2. Gansterer, M., Hartl, R.F.: Collaborative vehicle routing: a survey. Eur. J. Oper. Res. 268, 1–12 (2018)
3. Cruijssen, F., Cools, M., Dullaert, W.: Horizontal cooperation in logistics: opportunities and impediments. Transp. Res. Part E Logist. Transp. Rev. 43, 129–142 (2007)
4. Cruijssen, F.C.A.M.: Horizontal cooperation in transport and logistics. CentER Cent. Econ. Res. 46, 216 (2006)
5. Quintero-Araujo, C.L., Gruler, A., Juan, A.A.: Quantifying potential benefits of horizontal cooperation in urban transportation under uncertainty: a simheuristic approach. In: Conference of the Spanish Association for Artificial Intelligence, pp. 280–289 (2016)
6. Soysal, M., Bloemhof-Ruwaard, J.M., Haijema, R., van der Vorst, J.G.: Modeling a green inventory routing problem for perishable products with horizontal collaboration. Comput. Oper. Res. 89, 168–182 (2018)
7. Montoya-Torres, J.R., Muñoz-Villamizar, A., Vega-Mejia, C.A.: On the impact of collaborative strategies for goods delivery in city logistics. Prod. Plan. Control 27, 443–455 (2016)
8. Bailey, E., Unnikrishnan, A., Lin, D.-Y.: Models for minimizing backhaul costs through freight collaboration. J. Transp. Res. Board 2224, 51–60 (2011)
9. Sanchez, M., Pradenas, L., Deschamps, J.-C., Parada, V.: Reducing the carbon footprint in a vehicle routing problem by pooling resources from different companies. NETNOMICS Econ. Res. Electron. Netw. 17, 29–45 (2016)
10. Pérez-Bernabeu, E., Juan, A.A., Faulin, J., Barrios, B.B.: Horizontal cooperation in road transportation: a case illustrating savings in distances and greenhouse gas emissions. Int. Trans. Oper. Res. 22, 585–606 (2015)
11. Adenso-Díaz, B., Lozano, S., Moreno, P.: Analysis of the synergies of merging multi-company transportation needs. Transp. A Transp. Sci. 10, 533–547 (2014)
12. Ukovich, W., Nolich, M., Fanti, M.P., Iacobellis, G., Rusich, A.: A decision support system for cooperative logistics. IEEE Trans. Autom. Sci. Eng. 14, 732–744 (2017)
13. Gligor, D.M., Autry, C.W.: The role of personal relationships in facilitating supply chain communications: a qualitative study. J. Supply Chain Manag. 48, 24–43 (2012)
14. Gligor, D.M., Holcomb, M.: The role of personal relationships in supply chains: an exploration of buyers and suppliers of logistics services. Int. J. Logist. Manag. 24, 328–355 (2013)
15. Dornfeld, D.A.: Green Manufacturing: Fundamentals and Applications. Springer, New York (2013). https://doi.org/10.1007/978-1-4419-6016-0
16. Bunse, K., Vodicka, M., Schönsleben, P., Brülhart, M., Ernst, F.: Integrating energy efficiency performance in production management – gap analysis between industrial needs and scientific literature. J. Clean. Prod. 19(6–7), 667–679 (2011)
17. Timocom. www.timocom.com. Accessed 12 Apr 2019
18. Teleroute. www.teleroute.com. Accessed 12 Apr 2019
19. Trucker Path. truckerpath.com. Accessed 12 Apr 2019
20. Uber Freight. www.uberfreight.com. Accessed 9 June 2019
21. Hooker, J.N.: Logic-Based Methods for Optimization: Combining Optimization and Constraint Satisfaction. Wiley, New York (2000)

An Integrated Approach for Supply Chain Tactical Planning and Cash Flow Valuation

Sabah Belil1,2, Asma Rakiz1,3(✉), and Kawtar Retmi1

1 Emines-Mohammed VI Polytechnic University, 43150 Ben Guerir, Morocco
{Sabah.belil,Asma.rakiz,Kawtar.retmi}@emines.um6p.ma
2 LIMOS, Clermont Auvergne University, UMR CNRS 6158, Aubière, France
3 Paris II Panthéon-Assas University, 75006 Paris, France

Abstract. This paper presents a methodology combining a flow optimization model and a cost model in order to simultaneously realize the tactical planning of a productive system and evaluate the financial performance of the proposed plans. The system addressed is a multi-site, multi-product supply chain structure with finite production, storage and transport capacities. To model the physical flow, we propose an optimization model taking into consideration all of the physical system's constraints; it calculates production and transport plans while maximizing the demand satisfaction rate. Then, to financially evaluate the solution found by the optimization model, we propose a cost model using Activity Based Costing (ABC) as a valuation method built on the cost-driver mechanism. Finally, to couple both the optimization and cost models in a global integrated model, we use an approach called PREVA (PRocess EVAluation), generally used to set up a supply chain's management control system using financial and physical metrics.

Keywords: Cost model · Linear programming model · Financial valuation

1 Introduction

The supply chain is defined as a network of physical entities (factories, workshops, warehouses, etc.) crossed by flows (financial, material, informational, etc.), grouped into an integrated logistic process [1]. Supply chain management thus consists of modelling a set of flows, managing them in an integrated way, and improving their coordination to create value for the final customer [2]. A supply chain is crossed by at least three flows [3]: the physical flow (purchase of raw material, transformation into finished product, and delivery to the customer); the financial flow, whose optimization aims at satisfying the actors who contribute to the functioning of the supply chain; and the information flow, which coordinates the other two. The aim of our paper is to evaluate supply chain performance by presenting the integration of physical and financial flows across the chain. [4, 5] propose a methodology for analyzing supply chain performance in terms of the management of assets and the flow of cash along the supply chain.

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 75–83, 2019. https://doi.org/10.1007/978-3-030-29996-5_9


The purpose is to formalize the relationships between the physical and financial flows by integrating them in tactical planning for an internal supply chain. The proposed approach allows the use of budgeting to evaluate the tactical production plan, by integrating financial parameters such as payment terms into an Activity Based Costing (ABC) valuation model and by coupling this kind of model with the tactical model. The rest of the article is organized as follows. In Sect. 2 a review of the literature is presented. Section 3 gives a detailed description of the modeling framework for supply chain evaluation. Finally, Sect. 4 concludes this paper.

2 Literature Review

In this literature review, we first study tactical planning in multi-site supply chains. In the second part, we address literature that presents the ABC valuation method as a tool for building a cost model of an industrial system. Finally, we present some literature that links physical and financial models.

2.1 Multi-level Tactical Planning

Tactical planning deals with decisions about material flows, inventory and capacity utilization over a planning horizon from one or several months to two years. The main objective at this stage is to improve cost efficiency (inventory) and customer satisfaction. In the hierarchical decision-making environment described above, lot-sizing decisions fall into either the tactical or the operational level. Many researchers have been interested in supply chain tactical planning using the lot-sizing problem, and the literature proposing its modelling is diverse and varied. Most of these papers propose mathematical models to achieve this goal. Multi-level lot-sizing models are the models used par excellence to grasp the problem of horizontal synchronization of the global supply chain. [6] propose a literature review of lot sizing dedicated to the supply chain. The basic multi-level model, called the Multi-Level Capacitated Lot Sizing Problem (MLCLSP), was proposed by [7]. Its purpose is to link the end-product demand with the needs for internal components using the Gozinto matrix. Starting from this basic model, researchers have developed multi-site models that solve different problems; we cite for instance [8–10]. Although the problems solved by these models have different configurations, the considered variables and activities are similar.

2.2 A Valuation Method: Activity Based Costing

According to a comparison of the various cost calculation methods [11], no method can be excluded, as each one is appropriate to a specific context. As part of our research, we chose the ABC valuation method as a tool. This choice was made in order to be able to move to a Business Unit (BU) oriented industrial process in the context of an industrial system, and thereby to couple optimization models with the industrial management control systems that are associated with cost centers per functional entity.
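As an illustration of the driver mechanism underlying ABC, a minimal allocation sketch follows; the activities, cost pools and figures are hypothetical, not taken from the case study:

```python
def abc_allocate(pools, consumption):
    """pools: {activity: (indirect_cost, total_driver_volume)}
    consumption: {product: {activity: driver_units_consumed}}
    Returns the indirect cost charged to each product via driver rates."""
    rates = {a: cost / volume for a, (cost, volume) in pools.items()}
    return {p: sum(rates[a] * units for a, units in use.items())
            for p, use in consumption.items()}

pools = {"machine setup": (9000.0, 90),      # rate: 100 per setup
         "quality control": (4000.0, 200)}   # rate: 20 per inspection
consumption = {"P1": {"machine setup": 60, "quality control": 50},
               "P2": {"machine setup": 30, "quality control": 150}}
charged = abc_allocate(pools, consumption)
# charged -> {"P1": 7000.0, "P2": 6000.0}
```

Dividing each indirect cost pool by its total driver volume gives a rate, and each product is then charged in proportion to the driver units it consumes — the causal allocation that the ABC literature describes.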


ABC can improve organizational performance in a number of ways: it helps organizations become more efficient; provides a clear picture of where funds are spent; offers a better alternative for product-based costing; and identifies value-added activities while eliminating or reducing non-value-added activities. ABC gives organizations a better understanding of cost formation using a causal approach through the driver mechanism and identifies action levers leading to better organizational performance. In order to build a Decision Support System (DSS) linking optimization models to management control, we should focus on approaches combining optimization models and cost models.

2.3 Coupling and Review Analysis

Many works have been done on the product flow in supply chain whereas there is a little research works on the financial aspect of the supply chain. Traditional approaches for supply chain management usually focus on process operations and neglect the financial side of the problem. The study of supply chain manager interest for integration of financial impact in operational and tactical planning is done by [12]. [13] addresses the implementation of financial cross-functional links with the supply chain operations and retrofitting activities at plant level. In [14], the authors estimate the shareholder wealth effects of supply chain glitches that resulted in production or shipment delays. [15] considers a cash management problem of firms in a two-asset setting. [16] define an approach to assess the nature of a sale point based on supply chain activity. [17, 18] present a methodology to solve the problems of efficiency, process control, and discrete production system cost management. [19] present the activity based costing method in combination with a discrete event simulation as a part of a discrete supply chain. [20] propose a model for the optimization of a global supply that maximizes the after tax profits in strategic planning. In [21], authors study the impact of financing constraints on inventory and cash management. [22] examines the relationship between liquidity management, operating performance and corporate value for firms. Table 1. Literature analysis Authors

Decision level Strategic

[13]

Tactical

Operational

X

[14]

X

[15]

X

[16]

X

[5] [12]

X

[20]

X

[17]

X

[4]

Physical flow

Modeling approach

Financial flow

Modeling approach

X

Mixed integer linear programming

X

Net present value (NPV) method

X

Linear programming

X

Statistics methods

X

Diffusion approximation technique

X

Discrete event simulation

X

Activity Based Costing method

X

X

Linear programming

X

Linear programming

X

X

Linear programming

X

Statistics methods

X

Linear programming

X

Linear programming

X

X

Discrete event simulation

X

Activity Based Costing method

X

X

Linear programming

X

Linear programming

[23]

X

X

Discrete event simulation

X

Activity Based Costing method

[22]

X

X

Linear programming

X

Statistics methods

[18]

X

X

Discrete event simulation

X

Activity Based Costing method

[21]

X

Linear programming

X

Linear programming

[19]

X

Discrete event simulation

X

Activity Based Costing method

X X


A synthesis of the different papers studied, addressing the integration of the physical and financial flows, is given in a grid (Table 1); this grid indicates, for each article studied, the decision-making level addressed and the modeling techniques used.

3 An Approach for the Physical and Financial Flows Assessment of the Supply Chain

In this section we present a detailed description of the modeling framework for supply chain evaluation.

3.1 Designing a Decision Support System Using PREVA

In order to evaluate a supply chain based on the combined use of the ABC method and the optimization model, we propose to model the supply chain as a set of activities considered as Business Units: autonomous entities belonging either to the supply chain itself or to a supplier/distributor integrated into it. We therefore assume the existence of a supply chain made up of 1 to N BUs. In order to use ABC models to evaluate processes, we propose to allocate, to each BU (a plant, a transport activity, etc.) made up of at least one elementary supply chain process, the different items required to translate physical flow activities into financial flow items. This phase is realized in our case using the PREVA approach, which has been used repeatedly to set up management control systems for the supply chain; the robustness of the approach has been shown several times [11, 16, 18]. The design and evaluation of activities by PREVA for the supply chain involves three steps: (i) the first step deals with the evaluation of physical flow performance based on an optimization model, which involves building a mathematical model of the studied system; (ii) in the second step, an action model for the financial flow is built using the ABC method with a granularity matching that of the optimization; (iii) in the third step, the results are structured into prospective scorecards.
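The three PREVA steps can be sketched as a small pipeline; everything below is a hypothetical stand-in (a toy capacity rule in place of the MILP, invented driver rates), intended only to show how the outputs of each step feed the next:

```python
# Hypothetical sketch of the three PREVA steps: (i) optimize the physical
# flow, (ii) value the resulting plan with a driver-based ABC cost model at
# the same granularity, (iii) assemble both views into a scorecard.

def optimize_physical_flow(demand, capacity):
    """Step (i): toy stand-in for the optimization model -- produce as much
    of each period's demand as the capacity allows."""
    return [min(d, capacity) for d in demand]

def abc_valuation(plan, driver_rate, indirect_pool_share):
    """Step (ii): value each period's production with a driver-based cost."""
    return [q * driver_rate + indirect_pool_share for q in plan]

def scorecard(demand, plan, costs):
    """Step (iii): combine physical and financial metrics."""
    return {
        "service_rate": sum(plan) / sum(demand),
        "total_cost": sum(costs),
        "cost_per_unit": sum(costs) / sum(plan),
    }

demand, capacity = [100, 140, 90], 120
plan = optimize_physical_flow(demand, capacity)   # [100, 120, 90]
costs = abc_valuation(plan, driver_rate=4.0, indirect_pool_share=50.0)
kpis = scorecard(demand, plan, costs)
```

The scorecard carries both a physical metric (service rate) and financial metrics (total and unit cost), mirroring the mixed financial/physical metrics PREVA targets.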

3.2 Formalization of Tactical Production Planning in a Multi-site Supply Chain Structure

The proposed Mixed Integer Linear Program (MILP) addresses a multi-level, multi-product supply chain structure. The problem is modeled as a small-bucket multi-level lot-sizing model. The productive system consists of a set of manufacturing and storage sites. Each site produces a set of products intended to meet both a final demand and an intermediate demand. This product structure is modeled by the Gozinto matrix, initiated by [7]. Each plant and inventory has a finite capacity. End-products can be stored in downstream or final inventories. Final products are transported by rail to the final inventories with a transport time; the train is limited by maximum and minimum transport capacities. In order to satisfy demand, the model calculates, for each period, the production, storage and transport plans. We present the mathematical model below:


S: set of manufacturing sites; I: set of storage sites; T: set of periods; P: set of products; d_{i,t}: final demand of product i in period t; a_{i,j}: number of units of product i necessary to produce one unit of product j (a_{i,j} = 0 if j < i, i.e. j is a predecessor of i); Cap_s: maximum production capacity of plant s; Inv^{int}: downstream inventory; Inv^{fin}: final inventories; Inv^{max}_j: maximum stock capacity; Sat_{i,t}: demand satisfaction rate; P_{i,t,s}: quantity of product i produced at plant s; R_{i,t,s}: quantity of product i delivered by train from site s to the final stocks; R^{max}, R^{min}: maximum and minimum transport capacity; z_{s,inv}: transport time; b_{i,s}: binary parameter equal to 1 if product i can be produced/stored in plant/inventory s.

max Σ_{i,t} Sat_{i,t}    (1)

s.t.

Inv^{fin}_{i,t,j} = Inv^{fin}_{i,t−1,j} − Sat_{i,t}·d_{i,t},  ∀i, t < z_{s,inv}, j = m..I    (2)
Inv^{fin}_{i,t,j} = Inv^{fin}_{i,t−1,j} + Σ_s R_{i,t−z_{s,j},s} − Sat_{i,t}·d_{i,t},  ∀i, t ≥ z_{s,inv}, j = m..I    (3)
P_{i,t,s} ≤ Cap_s,  ∀i, t, s    (4)
Σ_i Inv^{int}_{i,t,j} ≤ Inv^{max}_j,  ∀t, j    (5)
Σ_i Inv^{fin}_{i,t,j} ≤ Inv^{max}_j,  ∀t, j    (6)
P_{i,t,s} ≤ b_{i,s}·Cap_s,  ∀i, t, s    (7)
Inv^{int}_{i,t,j} ≤ b_{i,j}·Inv^{max}_j,  ∀i, t, j    (8)
Inv^{fin}_{i,t,j} ≤ b_{i,j}·Inv^{max}_j,  ∀i, t, j    (9)
R_{i,t,s} ≤ R^{max},  ∀i, t, s    (10)
R_{i,t,s} ≥ R^{min}·x_{i,t,s},  ∀i, t, s    (11)
R_{i,t,s} − M·x_{i,t,s} ≤ 0,  ∀i, t, s    (12)
Inv^{int}_{i,t,s}, Inv^{fin}_{i,t,s}, P_{i,t,s}, R_{i,t,s} ≥ 0,  ∀i, t, s    (13)
x_{i,t,s} ∈ {0, 1},  ∀i, t, s    (14)

The objective function (1) maximizes the demand satisfaction rate. Constraints (2) and (3) are the stock balance equations of the final stocks. Constraints (4), (5) and (6) ensure respect of the maximum production and inventory capacities, respectively. Constraints (7), (8) and (9) assign the products to the plants/stocks in which they can be produced/stored. Constraints (10) and (11) impose the maximum and minimum quantity limits for transport by train. Constraint (12) binds the binary variables to the corresponding decision variables. Finally, constraints (13) and (14) define the domains of the decision variables.

3.3 Cost Model

Regarding the valuation model, we first distributed the charges. Since no operational decision is related to fixed costs, our modeling considers direct and indirect variable costs. Direct costs pose no imputation problem; to impute the indirect costs, however, it is necessary to use cost drivers. In a second step, we built our cost drivers. These drivers must reflect the consequences of the decisions made; some are linked to the line and others to the processors. Then, once cost drivers are available across the supply chain, we can move from the existing industrial management control systems, associated with cost centers per functional entity, to process-oriented BUs. The construction of an ABC model thus makes it possible to evaluate the value creation of each BU constituting the supply chain, taking into account all the inputs and outputs of each process entity. This model is then coupled with the optimization model. In our case, we performed a processor division to obtain the cost per quality and per hour. This combines the direct cost and the support cost, where the support cost comprises the cost of shutdowns SC, the cost of maintenance MC, the cost of running conditions RCC, and the cost of energy EC for each product. These support costs are allocated using cost drivers: running conditions drivers RCI, stop drivers SI, maintenance drivers MI and energy drivers EI.

RCI = RCC × RC    (15)
MI = MC × (MD / DDM)    (16)
SI = SC × (SD / DDM)    (17)
EI = EC × POE_pr    (18)

Here RC denotes the running conditions, MD the maintenance duration, DDM the duration of decision making, SD the stop duration, and POE_pr the percentage of energy per processor. Finally, we calculated a margin per hour and per quality. Once the cost model per line, per hour and per processor was built for all outputs, we gathered all the direct variable costs of these outputs into one matrix.

3.4 Implementation of the Proposed Approach in a Chemical Process Industry

In this section, we illustrate our model on a real case from a chemical process industry. The program has been solved for one month of demand. It realized the production planning of three products P1, P2 and P3 that can be produced in two plants F1 and F2, according to two scenarios. The first scenario consists of encouraging the production of


the product P1 in plant F1 (product P2 can be produced simultaneously in plants F1 and F2). The second scenario consists of encouraging the production of P2 in plant F2 rather than F1. Then, using the cost model, we financially evaluated the two proposed solutions in order to choose, economically, the best one.
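The coupling step can be illustrated with a toy valuation that uses the driver formulas (15)–(18) for the support cost and then compares the two plans; all figures below are invented pro-forma numbers, not the case-study data:

```python
def support_cost(rcc, rc, mc, md, sc, sd, ddm, ec, poe):
    """Support cost for one hour from the driver formulas (15)-(18):
    RCI = RCC*RC, MI = MC*(MD/DDM), SI = SC*(SD/DDM), EI = EC*POE_pr."""
    return rcc * rc + mc * (md / ddm) + sc * (sd / ddm) + ec * poe

def total_cost(hours):
    """hours: list of (produced_qty, direct_cost_per_unit, support_cost)."""
    return sum(q * dc + s for q, dc, s in hours)

# Invented pro-forma figures: the same support-cost drivers every hour.
s = support_cost(rcc=2.0, rc=10, mc=300.0, md=2, sc=100.0, sd=1, ddm=8,
                 ec=50.0, poe=0.4)                  # 127.5 per hour
scenario1 = [(530, 8.5, s), (530, 8.5, s)]          # two production hours
scenario2 = [(272, 7.0, s), (272, 7.0, s)]
best = min(("scenario 1", total_cost(scenario1)),
           ("scenario 2", total_cost(scenario2)), key=lambda t: t[1])
```

The optimization model supplies the hourly quantities; the cost model values each hour and the cheaper plan is selected, which is exactly the role of the coupling in the DSS.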

Table 2. Coupling results (pro-forma data)

| Time | Qty F1 (scenario 1) | Qty F2 (scenario 1) | Direct cost (Dhs) | Support cost (Dhs) | Total cost (Dhs) | Qty F1 (scenario 2) | Qty F2 (scenario 2) | Direct cost (Dhs) | Support cost (Dhs) | Total cost (Dhs) |
|---|---|---|---|---|---|---|---|---|---|---|
| 15/02/2017 01:00 | 258 | 272 | 4481 | 123 | 2441728 | 272 | 0 | 3585 | 123 | 1009739 |
| 15/02/2017 02:00 | 258 | 272 | 4491 | 123 | 2446691 | 272 | 0 | 3583 | 123 | 1009229 |
| 15/02/2017 03:00 | 0 | 272 | 4489 | 123 | 1255809 | 69 | 0 | 3584 | 123 | 257422 |
| 15/02/2017 04:00 | 258 | 272 | 4491 | 123 | 2446691 | 272 | 0 | 3583 | 123 | 1009229 |
| 15/02/2017 05:00 | 258 | 272 | 4481 | 123 | 2441728 | 272 | 0 | 3585 | 123 | 1009739 |
| 15/02/2017 06:00 | 0 | 272 | 4491 | 123 | 1256357 | 0 | 39 | 3583 | 123 | 143207 |
| 15/02/2017 07:00 | 258 | 272 | 4489 | 123 | 2445625 | 0 | 0 | 3584 | 123 | 0 |
| 15/02/2017 08:00 | 132 | 0 | 4494 | 123 | 610455 | 272 | 39 | 3583 | 123 | 1152211 |

Each scenario is simulated using the DSS proposed in the previous section. The accounting and physical data of one month were used in the real case to validate the operation of the system; the results reported here are based on pro-forma data. Table 2 presents the results. Their analysis shows that, over the period of analysis and given the data used, producing P2 in both plants F1 and F2 (scenario 1) is not relevant from a financial point of view. Our DSS can thus provide economic insight.

4 Conclusion

In conclusion, we proposed an integrated approach combining two models: a flow-planning model and a financial valuation model. The combined approach aims to model and plan the physical flows on the one hand, and to evaluate the financial performance of the solution found on the other. The physical flow planning model is a MILP proposed to solve a multi-level, multi-product lot-sizing problem. The financial flow is modeled with the ABC valuation method using the cost-driver mechanism. In order to combine the two models, we use the PREVA approach, a method that models supply chain subsystems as Business Units.


References

1. Fenies, P., Gourgand, M., Tchernev, N.: A framework for supply chain performance evaluation. In: Congresso Internacional de Pesquisa em Logistica, pp. 1–12 (2004)
2. Pagh, J.D., Lambert, D.M., Cooper, M.C.: Supply chain management: more than a new name for logistics. Int. J. Logist. Manag. 8, 1–14 (1997)
3. Lapide, L.: What about measuring supply chain performance. Achiev. Supply Chain Excel. Technol. 2, 287–297 (2000)
4. Guillén-Gosálbez, G., Guillén, G., Badell, M., Puigjaner, L.: A holistic framework for short-term supply chain management integrating production and corporate financial planning. Int. J. Prod. Econ. 106, 288–306 (2007)
5. Gupta, S., Dutta, K.: Modeling of financial supply chain. Eur. J. Oper. Res. 211, 47–56 (2011)
6. Brahimi, N., Dauzère-Pérès, S., Najid, N., Nordli, A.: Etat de l'art sur les problèmes de dimensionnement des lots avec contraintes de capacité. In: Conférence Francophone de MOdélisation et SIMulation, pp. 385–392 (2003)
7. Billington, P.J., McClain, J.O., Thomas, L.J.: Mathematical programming approaches to capacity-constrained MRP systems: review, formulation and problem reduction. Manag. Sci. 29, 1126–1141 (1983)
8. Gnoni, M.G., Iavagnilio, R., Mossa, G., Mummolo, G., Di Leva, A.: Production planning of a multi-site manufacturing system by hybrid modelling: a case study from the automotive industry. Int. J. Prod. Econ. 85, 251–262 (2003)
9. Thierry, C.: Supply chain management, models and implementation for medium-term decision support. University habilitation memory to conduct research, Toulouse II Le Mirail University (2003)
10. Spitter, J.M., Hurkens, C.A.J., De Kok, A.G., Lenstra, J.K., Negenman, E.G.: Linear programming models with planned lead times for supply chain operations planning. Eur. J. Oper. Res. 163, 706–720 (2005)
11. Retmi, K.: An approach for an economic evaluation of operational and tactical decisions: implementation on the OCP supply chain. Ph.D. thesis, Paris Nanterre University and ENSEM-Hassan II University (2018)
12. Vickery, S.K., Jayaram, J., Droge, C., Calantone, R.: The effects of an integrative supply chain strategy on customer service and financial performance: an analysis of direct versus indirect relationships. J. Oper. Manag. 21, 523–539 (2003)
13. Badell, M., Romero, J., Puigjaner, L.: Optimal budget and cash flows during retrofitting periods in batch chemical process industries. Int. J. Prod. Econ. 3, 359–372 (2005)
14. Hendricks, K.B., Singhal, V.R.: The effect of supply chain glitches on shareholder wealth. J. Oper. Manag. 21, 501–522 (2003)
15. Premachandra, I.M.: A diffusion approximation model for managing cash in firms: an alternative approach to the Miller–Orr model. Eur. J. Oper. Res. 1, 218–226 (2004)
16. Fenies, P., Lagrange, S., Tchernev, N.: A decisional modelling for supply chain management in franchised networks: application in franchise bakery networks. Prod. Plan. Control 21, 595–608 (2010)
17. Chan, K.K., Spedding, T.A.: An integrated multidimensional process improvement methodology for manufacturing systems. Comput. Ind. Eng. 44, 673–693 (2003)
18. Comelli, M.: Modélisation, optimisation et simulation pour la planification tactique des chaînes logistiques. Ph.D. thesis, Université Blaise Pascal – Clermont-Ferrand II (2008)
19. Mahal, I., Hossain, A.: Activity-based costing (ABC) – an effective tool for better management. Res. J. Financ. Account. 10 (2015)


20. Vidal, C.J., Goetschalckx, M.: A global supply chain model with transfer pricing and transportation cost allocation. Eur. J. Oper. Res. 129, 134–158 (2001)
21. Brown, W., Haegler, U.: Financing constraints and inventories. Eur. Econ. Rev. 48, 1091–1123 (2004)
22. Wang, Y.-J.: Liquidity management, operating performance, and corporate value: evidence from Japan and Taiwan. J. Multinatl. Financ. Manag. 12, 159–169 (2002)
23. Lange, J., Bergs, F., Weigert, G., Wolter, K.-J.: Simulation of capacity and cost for the planning of future process chains. Int. J. Prod. Res. 50, 6122–6132 (2012)

UAV Set Covering Problem for Emergency Network

Youngsoo Park1 and Ilkyeong Moon1,2(✉)

1 Department of Industrial Engineering, Seoul National University, Seoul 08826, Korea
2 Institute for Industrial Systems Innovation, Seoul National University, Seoul 08826, Korea
[email protected]

Abstract. Recent technology allows UAVs to be deployed not only in the fields of military operations, videography, and logistics but also in the social security area, especially for disaster management. UAVs can carry a router and provide a wireless network to survivors in a network-shadowed area. In this paper, a set covering problem reflecting the characteristics of UAVs is defined with a mathematical formulation. An extended formulation and a branch-and-price algorithm are proposed for efficient computation. We demonstrate the capability of the proposed algorithm with a computational experiment.

Keywords: UAV · Disaster management · Set covering problem

1 Introduction

Over the last few years, there has been increasing interest in unmanned aerial vehicles (UAVs) in various fields including the military, telecommunication, and aerial videography [1,2]. Although UAVs have been widely used for commercial or military purposes, this study suggests that they can also be useful for disaster management. When a disaster occurs, activities to mitigate further damage, such as relief logistics, casualty transportation, and evacuation, are planned. Because situations in the demand (disaster) areas vary extremely, it is crucial to establish a plan based on scientific decisions. Therefore, accurate data collection through contact with survivors is needed to carry out these activities efficiently. However, large-scale disasters can cause survivors to be isolated or disconnected in disaster areas. In this case, reconstruction of a temporary network using UAVs can enable communication with the survivors and the gathering of real-time data [3,4]. This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning [Grant no. 2017R1A2B2007812]. © IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 84–90, 2019. https://doi.org/10.1007/978-3-030-29996-5_10 By using UAVs with built-in


network routers, the network in shadowed areas can be rebuilt when the proper number of UAVs is distributed to appropriate locations. If the number of UAVs is sufficient, launching a large number of UAVs simultaneously can reconnect the network easily. However, a decision maker with limited resources is obliged to make an optimal plan, because either an under- or an over-sized plan risks damage to human life. Therefore, the following two questions arise naturally.

• What is the minimum number of UAVs needed to cover all areas?
• Where should each UAV be located?

By developing a mathematical model based on the set covering problem, this research analyzes the minimum number of UAVs and their flight positions needed to cover every survivor. The proposed UAV set covering problem (USCP) generalizes the classical set covering problem by incorporating the flexibility of UAVs, which face no restrictions on the positions at which they can be located. Beyond disaster situations, the USCP can model various environments, including the manufacturing industry. In the smart factory established by Industry 4.0, individual resources communicate with each other via wired or wireless networks. Especially for the wireless network, it is vital to cover every resource efficiently with minimal investment. As in the USCP, the locations where wireless network routers can be installed are relatively unrestricted, so the choice cannot be reduced to a given set of candidate positions. This study proposes a branch-and-price approach to overcome the intractability caused by the quadratic constraint and to solve the USCP efficiently. The overall structure of the study takes the form of five sections, including this introductory section. Section 2 is concerned with the description of the USCP and the standard formulation of the mathematical model. In Sect. 3, an extensive formulation and a branch-and-price algorithm are presented for the problem. In Sect. 4, computational experiments are conducted, and results are analyzed. Section 5 summarizes the findings of the research.

2 Problem Definition and Mathematical Formulation

The objective of USCP is to cover every demand point with the minimum number of UAVs, each with a fixed coverage radius and with no restriction on its position. The detailed assumptions of USCP are as follows:

(1) Positions of demand points are deterministic.
(2) The coverage distance of every UAV is identical.
(3) There are no restrictions on the positions of UAVs.
(4) Each wireless network is uncapacitated, and network traffic is ignored.
(5) Overlap interference between UAVs and the shadowing effect of buildings are ignored.

A mathematical model is developed based on these assumptions. Let N denote the set of demand points where the survivors are distributed. The following notation is used in the standard formulation of USCP.
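Under these assumptions, coverage reduces to a plain Euclidean distance test. As a minimal illustration (not from the paper; the function name and data layout are illustrative), a candidate UAV placement can be checked against assumptions (2), (3), and (5) as follows:

```python
import math

def all_covered(points, uav_positions, R):
    """Check that every demand point lies within Euclidean distance R of
    at least one UAV (identical coverage radius, unrestricted UAV
    positions, interference and shadowing ignored)."""
    return all(
        any(math.hypot(px - cx, py - cy) <= R for cx, cy in uav_positions)
        for px, py in points
    )
```

For example, two demand points at (0, 0) and (3, 0) are both covered by a single UAV at (1.5, 0) when R = 2, but not when R = 1.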

Y. Park and I. Moon

Parameters
a_i^x  x-coordinate of demand point i, ∀i ∈ N.
a_i^y  y-coordinate of demand point i, ∀i ∈ N.
R      coverage radius of a UAV.

Decision variables
y_j = 1 if UAV j is used, 0 otherwise, ∀j ∈ N.
x_ij = 1 if demand point i is covered by UAV j, 0 otherwise, ∀i ∈ N, ∀j ∈ N.
c_j^x ∈ ℝ  x-coordinate of UAV j, ∀j ∈ N.
c_j^y ∈ ℝ  y-coordinate of UAV j, ∀j ∈ N.

When the coverage range is given as a parameter, the quantities of interest are the minimum number of UAVs and the position of each UAV in the x-y plane that cover all demand points. The corresponding mixed-integer programming formulation is:

min Σ_{j∈N} y_j                                                          (1)
s.t. x_ij ≤ y_j,  ∀i ∈ N, ∀j ∈ N                                         (2)
     Σ_{j∈N} x_ij ≥ 1,  ∀i ∈ N                                           (3)
     (a_i^x − c_j^x)² + (a_i^y − c_j^y)² ≤ R² + M(1 − x_ij),  ∀i ∈ N, ∀j ∈ N   (4)
     x_ij ∈ {0, 1},  ∀i ∈ N, ∀j ∈ N                                      (5)
     y_j ∈ {0, 1},  ∀j ∈ N                                               (6)
     c_j^x, c_j^y ∈ ℝ,  ∀j ∈ N                                           (7)

Objective function (1) minimizes the number of UAVs needed to cover all demand points. Constraint (2) is the linking constraint between a demand point and a UAV: UAV j must be in use to cover demand point i. Constraint (3) requires each demand point i to be covered by at least one UAV. Constraint (4) is the logical constraint incorporating the network coverage of a UAV: when demand point i is covered by UAV j, the location of the demand point (a_i^x, a_i^y) lies within a circle of radius R centered at (c_j^x, c_j^y). Constraints (5) and (6) state that x_ij and y_j are binary variables. Constraint (7) states that c_j^x and c_j^y are real variables. For distinction, this formulation is referred to as the Euclidean standard formulation (ES). ES contains the non-linear constraint (4); accordingly, it is hard to obtain an optimal solution within a reasonable time, even for small-sized problems. Since fast decisions are vital in disaster response, a branch-and-price approach for USCP is designed, which is introduced in the next section.
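For very small instances, the quadratic constraint can be sidestepped entirely: a single UAV can cover a subset of demand points if and only if the subset fits inside some circle of radius R, and for such fixed-radius feasibility checks it suffices to try each demand point and the pairwise intersections of the radius-R circles as candidate centers (a standard fact for the fixed-radius 1-center problem). The following brute-force sketch, which is not the paper's method and uses illustrative names, computes the exact minimum UAV count by search over covered subsets:

```python
import itertools
import math

def candidate_centers(points, R):
    """Candidate UAV positions: each demand point itself, plus the two
    intersection points of the radius-R circles around every pair of
    points."""
    cands = list(points)
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        dx, dy = x2 - x1, y2 - y1
        d2 = dx * dx + dy * dy
        if d2 == 0 or d2 > 4 * R * R:
            continue  # coincident points, or circles too far apart to meet
        h = math.sqrt(R * R - d2 / 4.0)               # midpoint-to-intersection distance
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        ux, uy = -dy / math.sqrt(d2), dx / math.sqrt(d2)  # unit normal to the segment
        cands.append((mx + h * ux, my + h * uy))
        cands.append((mx - h * ux, my - h * uy))
    return cands

def coverage_masks(points, R, eps=1e-9):
    """Bitmask of the demand points covered from each candidate center."""
    masks = set()
    for cx, cy in candidate_centers(points, R):
        m = 0
        for i, (px, py) in enumerate(points):
            if (px - cx) ** 2 + (py - cy) ** 2 <= R * R + eps:
                m |= 1 << i
        masks.add(m)
    return masks

def min_uavs(points, R):
    """Exact minimum number of UAVs by breadth-first search over covered
    subsets (exponential in |N|, so suitable for tiny instances only)."""
    full = (1 << len(points)) - 1
    masks = coverage_masks(points, R)
    seen = {0: 0}
    frontier = {0}
    k = 0
    while full not in seen:
        k += 1
        nxt = set()
        for s in frontier:
            for m in masks:
                t = s | m
                if t not in seen:
                    seen[t] = k
                    nxt.add(t)
        frontier = nxt
    return seen[full]
```

For three collinear points spaced 10 apart with R = 5, one UAV midway between each adjacent pair covers two points, so two UAVs suffice.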

3 Branch-and-Price Approach for USCP

The branch-and-price (B&P) approach is a well-known exact algorithm for large-scale optimization problems. By incorporating the column generation technique into branch-and-bound, it can significantly improve the bounds of the linear programming relaxation and resolve the symmetry of solutions while branching. For detailed information on the B&P approach, one can refer to [5].

3.1 Master Problem

Let Ω denote the set of possible patterns covering the demand points by one UAV. A pattern is defined by a given parameter w_ij indicating whether demand point i is included in pattern j. The minimum number of UAVs can be determined by the following integer program:

min Σ_{j∈Ω} y_j                          (8)
s.t. Σ_{j∈Ω} w_ij · y_j ≥ 1,  ∀i ∈ N     (9)
     y_j ∈ {0, 1},  ∀j ∈ Ω               (10)

Objective function (8) minimizes the number of UAVs required to cover all demand points. Constraint (9) is the assignment constraint for the demand points. Optimality under the current basis is determined by the pricing subproblem.

3.2 Pricing Subproblem

Let π_i be the dual price of constraint (9). By solving the pricing subproblem, one can identify whether there is a better assignment pattern of demand points for a UAV. To construct a pattern (column), the binary decision variable w_i identifies whether demand point i is covered by the generated column. The decision variables c^x and c^y represent the position of the UAV of the generated pattern. Additional columns for the master problem are generated by solving the following pricing problem:

min 1 − Σ_{i∈N} π_i · w_i                                         (11)
s.t. (a_i^x − c^x)² + (a_i^y − c^y)² ≤ R² + M(1 − w_i),  ∀i ∈ N   (12)
     w_i ∈ {0, 1},  ∀i ∈ N                                        (13)
     c^x, c^y ∈ ℝ                                                 (14)
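Because an optimal UAV position for any coverage pattern can be taken from a finite candidate set (each demand point, plus the pairwise intersections of radius-R circles), the pricing problem (11)-(14) can, for small instances, be solved by enumeration instead of as a quadratic MIP. A hedged sketch, with function and variable names that are mine rather than the paper's:

```python
import itertools
import math

def price_column(points, duals, R, eps=1e-9):
    """Enumerative pricing sketch for (11)-(14): try each candidate center,
    read off the coverage pattern w, and return the smallest reduced cost
    1 - sum(pi_i * w_i) together with its pattern. A negative reduced cost
    means the pattern is a column worth adding to the master problem."""
    cands = list(points)
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        dx, dy = x2 - x1, y2 - y1
        d2 = dx * dx + dy * dy
        if 0 < d2 <= 4 * R * R:
            h = math.sqrt(R * R - d2 / 4.0)
            mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
            ux, uy = -dy / math.sqrt(d2), dx / math.sqrt(d2)
            cands += [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]
    best_rc, best_pattern = 1.0, ()
    for cx, cy in cands:
        pattern = tuple(i for i, (px, py) in enumerate(points)
                        if (px - cx) ** 2 + (py - cy) ** 2 <= R * R + eps)
        rc = 1.0 - sum(duals[i] for i in pattern)
        if rc < best_rc:
            best_rc, best_pattern = rc, pattern
    return best_rc, best_pattern
```

For two points at distance 1 with duals 0.6 each and R = 1, one UAV can cover both, giving reduced cost 1 − 1.2 = −0.2, so the combined pattern enters the master problem.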

4 Computational Experiments

To compare the effectiveness of the proposed solution algorithms, computational experiments were performed. All optimization models were developed in FICO® Xpress Mosel version 7.9. Experiments were performed on an Intel® Core™ i5-6600 CPU @ 3.30 GHz with 32 GB of RAM running Windows 10 64-bit. To reflect the needs of disaster management, each experiment was conducted with a run-time limit of 1800 seconds. The data set was created based on the benchmark data from the OR-Library [6,7]. For each number of demand points (10, 20, and 50), 10 instances were created, with the demand points distributed uniformly on a 100 x 100 Euclidean plane. Three coverage radii (10, 20, and 30) were examined for each instance. An analysis of algorithmic performance and a sensitivity analysis are provided for managerial insight in disaster management.

Table 1 lists the computational results. The columns are defined as follows. #Opt/#Feas: the number of problems solved to optimality / with a feasible solution within the time limit. Time: the average computation time; for problems not solved within the time limit, 1800 seconds was used in the average. Gap_L: the average gap between the lower (LP) bound and the feasible solution. # of UAVs: the average objective value of the feasible solution. Gap: the average gap between the # of UAVs of ES and of B&P, calculated as {(# of UAVs of ES) − (# of UAVs of B&P)}/(# of UAVs).

Table 1. Computational results

| N_D | R  | #Opt/#Feas ES | #Opt/#Feas B&P | Time(s) ES | Time(s) B&P | Gap_L(%) ES | Gap_L(%) B&P | # of UAVs ES | # of UAVs B&P | Gap(%) |
|-----|----|---------------|----------------|------------|-------------|-------------|--------------|--------------|---------------|--------|
| 10  | 10 | 1/9           | 10/10          | 1661.2     | 0.2         | 33.32       | -            | 7.6          | 7.2           | −4.43  |
| 10  | 20 | 9/9           | 10/10          | 388.8      | 1.1         | 4.00        | -            | 4.6          | 4.0           | −6.00  |
| 10  | 30 | 10/10         | 10/10          | 11.2       | 2.7         | -           | -            | 2.8          | 2.8           | -      |
| 20  | 10 | 0/0           | 10/10          | 1800*      | 17.9        | 85.00       | -            | 20.0         | 10.7          | −46.50 |
| 20  | 20 | 0/9           | 10/10          | 1800*      | 91.9        | 46.00       | -            | 6.9          | 5.5           | −7.00  |
| 20  | 30 | 5/10          | 10/10          | 944.8      | 78.3        | 12.50       | -            | 3.5          | 3.5           | -      |
| 50  | 10 | 0/0           | 10/10          | 1800*      | 447.9       | 96.00       | -            | 50.0         | 16.5          | −67.00 |
| 50  | 20 | 0/0           | 5/10           | 1800*      | 1594.7      | 96.00       | 26.77        | 50.0         | 11.6          | −85.56 |
| 50  | 30 | 0/2           | 1/10           | 1800*      | 1765.1      | 88.47       | 68.61        | 41.0         | 17.8          | −75.64 |

1800*: no problem finished within the time limit.

As shown in Table 1, ES was not capable of providing optimal solutions even for the smallest problems. The long computation times and high Gap_L of ES were caused both by a weak LP bound and by the scarcity of feasible solutions. ES could not provide feasible solutions for 49 of the 90 problems, which led to a high average number of UAVs required to cover the demand points. Especially for the problems with 50 demand points, Gap_L was higher than 88%, showing the intractability of ES. The B&P algorithm solved 76 of the 90 problems and provided a feasible solution for every problem. For 7 of the 9 problem classes, B&P showed a Gap_L of 0. For every data set used in the computational experiments, Gap was always zero or negative, which means that the B&P algorithm produces plans that use the same number of UAVs or fewer to cover the area. As expected, the smaller the coverage radius, the more UAVs were required. For the same coverage radius, the number of UAVs grew with the number of demand points, but under a fixed-size area the growth rate was less than proportional: 7.2 UAVs with coverage radius 10 were required on average to cover 10 demand points, while only 10.7 UAVs were required to cover 20 demand points. In the extreme case, the number of UAVs is bounded above by the number needed to cover the whole area without any network-shadow region.

5 Conclusions

We introduced a UAV set covering problem with fixed coverage and without restrictions on positions, for efficiently planning an emergency wireless network in disaster areas. Due to the intractability of the quadratic constraint, the proposed ES formulation could not be solved to optimality by a commercial solver within a practical time. An extended formulation of ES was proposed to implement a B&P algorithm for USCP, which provided a better LP bound and removed the symmetry of the solutions. The computational experiments showed that the B&P algorithm can provide optimal solutions for small-sized problems within reasonable time limits. A sensitivity analysis was conducted to show the relationships among the number of demand points, the coverage radius, and the number of UAVs required. Acknowledgements. This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning [Grant no. 2017R1A2B2007812].

References
1. Kim, D., Lee, K., Moon, I.: Stochastic facility location model for drones considering uncertain flight distance. Ann. Oper. Res. 1–20 (2018). https://doi.org/10.1007/s10479-018-3114-6
2. Kim, S., Moon, I.: Traveling salesman problem with a drone station. IEEE Trans. Syst. Man Cybern. Syst. 49, 42–52 (2019). https://doi.org/10.1109/TSMC.2018.2867496
3. Aida, S., Shindo, Y., Utiyama, M.: Rescue activity for the Great East Japan Earthquake based on a website that extracts rescue requests from the net. In: Proceedings of the Workshop on Language Processing and Crisis Information 2013, pp. 19–25 (2013)
4. Heinzelman, J., Waters, C.: Crowdsourcing Crisis Information in Disaster-Affected Haiti. US Institute of Peace, Washington (2010)
5. Vanderbeck, F., Wolsey, L.A.: Reformulation and decomposition of integer programs. In: 50 Years of Integer Programming 1958–2008, pp. 431–502. Springer, Berlin (2010). https://doi.org/10.1007/978-3-540-68279-0_13


6. Beasley, J.E.: OR-Library: distributing test problems by electronic mail. J. Oper. Res. Soc. 41, 1069–1072 (1990). https://doi.org/10.1057/jors.1990.166
7. Osman, I.H., Christofides, N.: Capacitated clustering problems by hybrid simulated annealing and tabu search. Int. Trans. Oper. Res. 1, 317–336 (1994). https://doi.org/10.1016/0969-6016(94)90032-9

A Stochastic Optimization Model for Commodity Rebalancing Under Traffic Congestion in Disaster Response

Xuehong Gao
Pusan National University, Busan, Republic of Korea
[email protected]

Abstract. After a large-scale disaster, emergency commodities should be distributed to relief centers. However, the initial commodity distribution may be unbalanced due to incomplete information and an uncertain environment, so it is necessary to rebalance the emergency commodity among relief centers. Traffic congestion is an important factor delaying delivery of the commodity, yet commodity rebalancing under traffic congestion has not been considered in previous studies. In this study, a two-stage stochastic optimization model is proposed to manage the commodity rebalancing, where uncertainties of demand and supply are considered. The goals are to minimize the expected total weighted unmet demand in the first stage and to minimize the total transportation time in the second stage. Finally, a numerical analysis is conducted on a randomly generated instance; the results illustrate the effectiveness of the proposed model for commodity rebalancing over a transportation network with traffic congestion.

Keywords: Commodity rebalancing · Emergency logistics · Stochastic optimization · Traffic congestion


1 Introduction

In the last decade, large-scale natural disasters have occurred frequently. Such natural disasters pose serious threats to the sustainable development of society, the economy, and ecology. A large number of people are impacted significantly, and many assets are damaged severely. When such disasters occur, relief centers should be determined and emergency commodities should be distributed to these relief centers to provide basic life support [1–3]. However, because the initial commodity distribution may be unbalanced due to incomplete information and an uncertain environment, each relief center may have a surplus or a shortage. In such a situation, the surplus should be redelivered to unmet relief centers to make efficient use of the commodity. Traffic congestion is considered in the transportation process of rebalancing the commodity, because large-scale disasters usually trigger huge travel demand in the disaster area, which the damaged road network can no longer serve sufficiently. Hence, rebalancing the commodity over a transportation network with traffic congestion is very difficult to accomplish under demand and supply uncertainties. Against this backdrop, the aim of this study is to formulate this commodity rebalancing problem with stochastic elements considering traffic congestion and to solve it using mathematical programming.

The rest of this paper is organized as follows. Section 2 reviews previous studies and highlights the main differences from them. Section 3 provides a problem description, a stochastic optimization model, and a solution method. The application of the proposed model to a numerical instance is then shown, and the results are presented and discussed in Sect. 4. Finally, Sect. 5 concludes this study with contributions and future directions.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 91–99, 2019. https://doi.org/10.1007/978-3-030-29996-5_11

2 Literature Review

Humanitarian logistics research has attracted growing attention as human suffering and economic losses continue to increase. Many studies have surveyed humanitarian logistics for disaster management. Caunhye and Nie [4] reviewed optimization models in emergency logistics, divided into facility location, relief distribution, casualty transportation, and other operations. Galindo and Batta [5] reviewed recent OR/MS research in disaster operations management and provided future research directions. Here, related studies are reviewed on commodity distribution, commodity rebalancing (also referred to as redistribution), and humanitarian logistics under traffic congestion in disaster response. Dessouky and Ordonez [6] addressed facility location and vehicle routing problems to ensure the rapid distribution of medical supplies in a logistics network. Jotshi and Gong [7] developed a robust methodology for dispatching and routing emergency vehicles in a post-disaster environment with the support of data fusion. Chen and Yu [8] applied integer programming and network-based partitioning to determine temporary locations for EMS facilities after a disaster. Al Theeb and Murray [9] presented a mathematical programming model to deliver goods, disaster victims, and volunteer workers through a road network. Gao and Lee [10] proposed a stochastic programming model to facilitate the multi-commodity redistribution process under uncertainty. Gao and Lee [11] proposed a two-stage stochastic programming model to design a multi-modal transportation network for multi-commodity redistribution.

Traffic congestion is one of the most important factors delaying humanitarian logistics, increasing both delivery times and the number of injuries after disasters [12]. Transportation time may also be affected by traffic congestion on the roads for various reasons [7]. Feng and Wen [13] pointed out that roadway systems usually suffer different levels of damage after a severe earthquake, so roadway capacity is reduced, causing traffic congestion. Nagurney and Flores [14] proposed a network model of multiple nongovernmental organizations seeking to supply multiple demand points with relief items after a disaster, reducing convergence and even congestion. To estimate traffic congestion, the travel time function of the traditional Bureau of Public Roads (BPR) curve [15] gives the relationship between link travel time and the volume of traffic on a highway network, in the following functional form:

F(x) = F0 · [1.0 + a · (x / C)^b].   (1)

where F(x) is the link travel time when the link-flow rate is x, F0 is the free-flow travel time, C is the practical link capacity, and a ≥ 0, b ≥ 0 are parameters. This BPR function is widely used in the transportation planning field [16]. Although many studies have been dedicated to humanitarian logistics after disasters, few of them concern the commodity rebalancing process. To fill this research gap, this study focuses on the commodity rebalancing problem incorporating traffic congestion. The novelty and contributions can be summarized as follows. A stochastic mixed-integer nonlinear programming model is proposed to formulate this commodity rebalancing problem, which has not been studied before. A linearization method is then proposed to reformulate the nonlinear model so that it can be solved with the optimization solver CPLEX. Finally, a numerical instance is carried out and some managerial insights are obtained.
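The BPR curve is straightforward to evaluate. A small sketch (function and parameter names are illustrative), using as defaults the values a = 0.15 and b = 4.0 that the paper adopts later in Sect. 4:

```python
def bpr_travel_time(flow, free_flow_time, capacity, a=0.15, b=4.0):
    """BPR link travel time, Eq. (1): F(x) = F0 * (1.0 + a * (x / C) ** b)."""
    return free_flow_time * (1.0 + a * (flow / capacity) ** b)
```

At zero flow the link takes its free-flow time; at flow equal to capacity, travel time increases by the factor (1 + a), i.e. 15% with the default parameters.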

3 Problem and Methodology

3.1 Problem Statement

A transportation network consisting of a number of roads and relief centers is considered in this study. An initial commodity distribution has been delivered to these relief centers. After the disaster, the surplus commodity at some relief centers needs to be redelivered to other unmet relief centers. It is difficult to determine how much commodity should be delivered and received at each relief center, which makes supply and demand uncertain. In this study, a scenario-based approach is applied for the uncertain elements, which are represented by a number of discrete realizations of the stochastic quantities. For a demand or supply relief center, there is a set of scenarios Ξ. A particular scenario ξ ∈ Ξ has a probability of occurrence P(ξ), such that P(ξ) ≥ 0 and Σ_{ξ∈Ξ} P(ξ) = 1. Two assumptions are made: (i) each relief center is a separate unit in terms of the possible quantities of demand or supply; (ii) the available paths with background traffic-flow rates are given. In light of the above requirements, this study develops a two-stage stochastic mixed-integer nonlinear programming model for the commodity rebalancing problem. The objectives are to minimize the expected total weighted unmet demand in the first stage and to minimize the total transportation time in the second stage.

3.2 Model Formulation

The problem is modeled using the following notation.

Sets
S  Set of supply relief centers, indexed by s ∈ S.
D  Set of demand relief centers, indexed by d ∈ D.
Ξ  Set of scenarios, indexed by ξ ∈ Ξ.

Parameters
S_s^min, S_s^max  Minimum and maximum supply at relief center s.
R_d^min, R_d^max  Minimum and maximum demand at relief center d.
M                 A big positive number.
W, V              Commodity weight and volume.
CW, CV            Vehicle weight and volume capacities.
Z_s, Z_d          Weighted values of supply relief center s and demand relief center d.
D_sd              Distance between relief centers s and d.
B_sd              Background traffic-flow rate between relief centers s and d.
C_sd              Practical traffic-flow rate capacity between relief centers s and d.
T                 Planning period for the transportation process.
V                 Vehicle travel speed.
L                 Vehicle loading/unloading time.
o_s^ξ             Possible supply at relief center s in scenario ξ.
i_d^ξ             Possible demand at relief center d in scenario ξ.
p_s^ξ, p_d^ξ      Probabilities of occurrence of o_s^ξ and i_d^ξ.

Decision variables
d_s   Outgoing quantity at relief center s.
r_d   Incoming quantity at relief center d.
w_sd  Commodity flow from relief center s to d.
n_sd  Number of vehicles from relief center s to d.

The complete formulation is given as the following two-stage stochastic mixed-integer nonlinear programming model. In the first stage, the objective function is defined as

W1 = Σ_{d∈D} Σ_{ξ∈Ξ} p_d^ξ · Z_d · max{i_d^ξ − r_d, 0} + Σ_{s∈S} Σ_{ξ∈Ξ} p_s^ξ · Z_s · max{d_s − o_s^ξ, 0}.   (2)

To remove the max functions in (2), the following two auxiliary binary variables are introduced into the model.

(3)

(4)
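The first-stage objective (2) is a plain expectation over the discrete scenarios. As a sketch (the data layout and names are mine, not the paper's): for each relief center, list the scenario values and their probabilities, then sum the weighted shortfalls:

```python
def first_stage_objective(p_d, i_d, r, Z_d, p_s, o_s, d, Z_s):
    """Eq. (2): expected total weighted unmet demand. p_d[k] / i_d[k] list
    the scenario probabilities / demands of demand center k, which receives
    r[k]; p_s[k] / o_s[k] list those of supply center k, which sends d[k]."""
    unmet = sum(p * Z_d[k] * max(dem - r[k], 0)
                for k in range(len(r))
                for p, dem in zip(p_d[k], i_d[k]))
    overdraw = sum(p * Z_s[k] * max(d[k] - sup, 0)
                   for k in range(len(d))
                   for p, sup in zip(p_s[k], o_s[k]))
    return unmet + overdraw
```

For one demand center with demand 10 or 20 (probability 0.5 each), weight 2, and incoming quantity 15, only the high-demand scenario is unmet, contributing 0.5 · 2 · 5 = 5.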


Then, the commodity rebalancing problem can be formulated as a deterministic optimization model:

(5)

s.t. Σ_{s∈S} d_s = Σ_{d∈D} r_d,   (6)

(7)
(8)
(9)
(10)

S_s^min ≤ d_s ≤ S_s^max,  ∀s ∈ S,   (11)
R_d^min ≤ r_d ≤ R_d^max,  ∀d ∈ D,   (12)

where objective function (5) minimizes the expected total weighted unmet demand at the relief centers. Constraint (6) guarantees the balance of outgoing and incoming shipments. Constraints (7)-(10), using the auxiliary binary variables, guarantee that relief centers only account for unmet demand. Constraints (11) and (12) bound the decision variables. After the first-stage decision variables are obtained, the second-stage problem can be formulated as follows.

min W2 = Σ_{s∈S} Σ_{d∈D} T · n_sd + Σ_{s∈S} Σ_{d∈D} (D_sd / V) · [1 + a · ((n_sd + B_sd · T) / (C_sd · T))^b] · n_sd   (13)

s.t. Σ_{d∈D} w_sd ≤ d_s,  ∀s ∈ S,   (14)
Σ_{s∈S} w_sd ≥ r_d,  ∀d ∈ D,   (15)
Σ_{s∈S} Σ_{d∈D} w_sd = Σ_{d∈D} r_d,   (16)
w_sd · W ≤ n_sd · CW,  ∀s ∈ S, d ∈ D,   (17)
w_sd · V ≤ n_sd · CV,  ∀s ∈ S, d ∈ D,   (18)
n_sd ≤ (C_sd − B_sd) · T,  ∀s ∈ S, d ∈ D,   (19)
n_sd ≥ 0 and integer,  ∀s ∈ S, d ∈ D,   (20)
w_sd ≥ 0,  ∀s ∈ S, d ∈ D,   (21)

where objective function (13) aims to minimize the total transportation time. Constraint (14) ensures that the total outgoing shipment cannot exceed the available amount at relief center s. Constraint (15) ensures that the total incoming shipment is greater than or equal to the demand at relief center d. Constraint (16) guarantees transportation balance. Constraints (17) and (18) restrict the assigned vehicles to be able to deliver the commodity within both weight and volume capacities. Constraint (19) ensures that the assigned vehicles cannot exceed the route capacity. Constraints (20) and (21) are non-negativity (and integrality) constraints on the variables.

3.3 Solution Method

The second-stage objective function is nonlinear when the BPR function is considered. To linearize it, two auxiliary parameters are introduced. The first is the maximum number of commodity units in one vehicle, G = min{CW/W, CV/V}. The second is the number of vehicles x from relief center s to d, represented as n_sd^x. Because the vehicle number must be an integer, a discrete solution space can be built, with consecutive integers representing the potential solutions:

n_sd^x ∈ {0, 1, 2, ..., x, ..., min[(C_sd − B_sd) · T, min(d_s/G, r_d/G)]},  ∀s ∈ S, d ∈ D.   (22)

An auxiliary binary variable represents the second-stage decisions:

u_sd^x = 1 if n_sd^x vehicles are used from relief center s to d, and 0 otherwise.   (23)

Then, the BPR function can be rewritten as

w(n_sd^x) = [1 + a · ((n_sd^x + B_sd · T) / (C_sd · T))^b] · n_sd^x,  ∀s ∈ S, d ∈ D.   (24)

With the new BPR function, the second-stage model can be rewritten as follows, where X_sd = min[(C_sd − B_sd) · T, min(d_s/G, r_d/G)] abbreviates the largest candidate vehicle count of Eq. (22) on route (s, d):

min W2 = Σ_{s∈S} Σ_{d∈D} Σ_{x=0}^{X_sd} [T · n_sd^x + (D_sd / V) · w(n_sd^x)] · u_sd^x,   (25)

s.t. Σ_{x=0}^{X_sd} n_sd^x · u_sd^x ≤ min(d_s/G, r_d/G),  ∀s ∈ S, d ∈ D,   (26)
Σ_{x=0}^{X_sd} n_sd^x · u_sd^x ≤ (C_sd − B_sd) · T,  ∀s ∈ S, d ∈ D,   (27)
Σ_{x=0}^{X_sd} u_sd^x = 1,  ∀s ∈ S, d ∈ D,   (28)
Σ_{s∈S} Σ_{x=0}^{X_sd} n_sd^x · u_sd^x · G ≥ r_d,  ∀d ∈ D,   (29)
Σ_{d∈D} Σ_{x=0}^{X_sd} n_sd^x · u_sd^x · G ≤ d_s,  ∀s ∈ S,   (30)
n_sd^x ≥ 0 and integer,  ∀s ∈ S, d ∈ D,   (31)

where objective function (25) represents the total transportation time. Constraints (26) and (27) restrict the total number of vehicles on each route. Constraint (28) ensures that exactly one candidate is selected from the set of potential solutions. Constraint (29) ensures that the pre-determined demand is met. Constraint (30) ensures that the outgoing commodity cannot exceed the pre-determined supply. Constraint (31) defines the decision variable.
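The enumeration that constraints (22)-(28) encode can be sketched directly: for a fixed shipment of q units on one route, list the admissible vehicle counts and pick the one minimizing the route's time term from (25). The ceiling in `need` is my reading of the d_s/G bound (the paper does not mark rounding), and all names are illustrative:

```python
def congested_route_time(n, D_sd, V, B_sd, C_sd, T, a=0.15, b=4.0):
    """Per-route term of Eq. (25): loading term T * n plus the congested
    travel term (D_sd / V) * w(n), with w(n) from Eq. (24)."""
    w = (1.0 + a * ((n + B_sd * T) / (C_sd * T)) ** b) * n
    return T * n + (D_sd / V) * w

def best_vehicle_count(q, G, D_sd, V, B_sd, C_sd, T):
    """Enumerate candidate vehicle counts for shipping q units (G units fit
    per vehicle) on one route, returning the count with the smallest time
    contribution, or None if the residual route capacity (C_sd - B_sd) * T
    cannot carry the shipment."""
    need = -(-q // G)                 # ceil(q / G): vehicles required for q units
    cap = int((C_sd - B_sd) * T)      # residual route capacity, Eq. (19)
    if need > cap:
        return None
    return min(range(need, cap + 1),
               key=lambda n: congested_route_time(n, D_sd, V, B_sd, C_sd, T))
```

Since the congested time term grows with n, the cheapest feasible choice is the smallest count that carries the shipment; with 12 units, 5 units per vehicle, and residual capacity 8, that is 3 vehicles.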

4 Numerical Analysis

To illustrate the validity of the proposed model and solution approach, a numerical analysis with a randomly generated instance is carried out and the related results are reported. In this numerical instance, 12 relief centers and one commodity (food) are considered. In the BPR function, a = 0.15 and b = 4.0 are used to estimate the travel time under traffic congestion. The planning period T equals 1. Relief-center weight values are integers generated randomly in the interval [10, 30]. The minimum and maximum quantities of demand and supply are integers drawn randomly from the intervals [5, 15] and [20, 30], respectively. Moreover, the scenarios at each supply and demand relief center are consecutive integers from S_s^min to S_s^max with probability 1/(S_s^max − S_s^min + 1), and from R_d^min to R_d^max with probability 1/(R_d^max − R_d^min + 1), respectively. The commodity has a weight of 2.0 and a volume of 1.0. The vehicle has weight and volume capacities (10, 10), a loading/unloading time of 2, and a speed of 1. The distances between relief centers come from the interval [20, 60]. The background traffic-flow rates are randomly generated in the interval [0.4, 0.8]. The practical capacity of each route is randomly generated in the interval [60, 100]. All models are implemented in IBM ILOG CPLEX Optimization Studio (version 12.6).

Table 1 shows that 6 relief centers are considered supply relief centers. The parameters and decision variables at the relief centers are also given in Table 1. The incoming and outgoing quantities of the commodity are closely related to the weighted values, demand, and supply at the relief centers. Generally, a demand relief center with a large weighted value and great demand receives more to meet its high-pressure need, while a supply relief center with a small weighted value and great supply shares more with other relief centers. For instance, the fifth demand relief center with weighted value 30 receives more food than the first demand relief center with weighted value 30. The second supply relief center, with a weighted value of 28, shares less and maintains a higher food inventory level. To obtain better insight into the behavior of the transportation process with traffic congestion, the obtained first-stage decision variables are expanded ten times. After that, the vehicle assignment between relief centers is obtained and shown in Table 2.

Table 1. Results of incoming and outgoing food at relief centers in the first stage

| Relief center ID  | S1 | S2 | S3 | S4 | S5 | S6 | D1 | D2 | D3 | D4 | D5 | D6 |
|-------------------|----|----|----|----|----|----|----|----|----|----|----|----|
| Z_s / Z_d         | 16 | 28 | 19 | 16 | 19 | 13 | 30 | 30 | 17 | 22 | 30 | 26 |
| S_s^min / R_d^min | 11 | 9  | 5  | 12 | 12 | 12 | 13 | 13 | 12 | 9  | 6  | 5  |
| S_s^max / R_d^max | 26 | 21 | 28 | 26 | 22 | 20 | 21 | 21 | 28 | 30 | 28 | 20 |
| d_s / r_d         | 21 | 13 | 18 | 21 | 17 | 19 | 18 | 18 | 18 | 20 | 21 | 14 |

Table 2. Results of vehicle assignment between relief centers in the second stage

| n_sd | D1 | D2 | D3 | D4 | D5 | D6 |
|------|----|----|----|----|----|----|
| S1   | –  | 32 | 10 | –  | –  | –  |
| S2   | –  | –  | –  | –  | 26 | –  |
| S3   | –  | –  | –  | 36 | –  | –  |
| S4   | –  | –  | 26 | 4  | 12 | –  |
| S5   | 30 | –  | –  | –  | 4  | –  |
| S6   | 6  | 4  | –  | –  | –  | 28 |

5 Conclusion and Future Research

This paper presents a two-stage stochastic mixed-integer nonlinear programming model for commodity rebalancing considering traffic congestion under uncertainties of demand and supply. A method to linearize the model is developed so that it can be solved with the CPLEX solver. A numerical analysis demonstrates the applicability of the solution method for the proposed model. Several aspects of the problem can be explored in future studies: multi-commodity rebalancing, extension of the model to a multi-period rebalancing process, and extension of this work to budget-based uncertainty. These questions will be considered in further research.

References
1. Akgün, İ., Gümüşbuğa, F., Tansel, B.: Risk based facility location by using fault tree analysis in disaster management. Omega 52, 168–179 (2015)
2. Chen, Y., et al.: The regional cooperation-based warehouse location problem for relief supplies. Comput. Ind. Eng. 102, 259–267 (2016)
3. Gao, X., et al.: A hybrid genetic algorithm for multi-emergency medical service center location-allocation problem in disaster response. Int. J. Ind. Eng. 24(6), 663–679 (2017)
4. Caunhye, A.M., Nie, X., Pokharel, S.: Optimization models in emergency logistics: a literature review. Socio-Econ. Plann. Sci. 46(1), 4–13 (2012)
5. Galindo, G., Batta, R.: Review of recent developments in OR/MS research in disaster operations management. Eur. J. Oper. Res. 230(2), 201–211 (2013)
6. Dessouky, M., et al.: Rapid distribution of medical supplies. Int. Ser. Oper. Res. Manag. Sci. 91, 309 (2006)
7. Jotshi, A., Gong, Q., Batta, R.: Dispatching and routing of emergency vehicles in disaster mitigation using data fusion. Socio-Econ. Plann. Sci. 43(1), 1–24 (2009)
8. Chen, A.Y., Yu, T.-Y.: Network based temporary facility location for the Emergency Medical Services considering the disaster induced demand and the transportation infrastructure in disaster response. Transp. Res. Part B: Methodol. 91, 408–423 (2016)
9. Al Theeb, N., Murray, C.: Vehicle routing and resource distribution in postdisaster humanitarian relief operations. Int. Trans. Oper. Res. 24(6), 1253–1284 (2017)
10. Gao, X., Lee, G.M.: A stochastic programming model for multi-commodity redistribution planning in disaster response. In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018. IAICT, vol. 535, pp. 67–78. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99704-9_9
11. Gao, X., Lee, G.M.: A two-stage stochastic programming model for commodity redistribution under uncertainty in disaster response. In: Proceedings of International Conference on Computers and Industrial Engineering, CIE (2018)
12. Campos, V., Bandeira, R., Bandeira, A.: A method for evacuation route planning in disaster situations. Procedia Soc. Behav. Sci. 54, 503–512 (2012)
13. Feng, C.-M., Wen, C.-C.: A fuzzy bi-level and multi-objective model to control traffic flow into the disaster area post earthquake. J. East. Asia Soc. Transp. Stud. 6, 4253–4268 (2005)
14. Nagurney, A., Flores, E.A., Soylu, C.: A generalized Nash equilibrium network model for post-disaster humanitarian relief. Transp. Res. Part E: Logistics Transp. Rev. 95, 1–18 (2016)
15. U.S. Bureau of Public Roads: Traffic assignment manual for application with a large, high speed computer. U.S. Dept. of Commerce, Bureau of Public Roads, Office of Planning, Urban Planning Division (1964)
16. Florian, M.A.: Traffic Equilibrium Methods. Lecture Notes in Economics and Mathematical Systems, vol. 118. Springer, Heidelberg (1974). https://doi.org/10.1007/978-3-642-48123-9

Optimal Supplier Selection in a Supply Chain with Predetermined Loading/Unloading Time Windows and Logistics Truck Share

Alireza Fallahtafti1, Iman Ghalehkhondabi2, and Gary R. Weckman1

1 Industrial and Systems Engineering Department, Ohio University, Athens, OH 45701, USA
{af551515,weckmang}@ohio.edu
2 School of Business and Leadership, Our Lady of the Lake University, San Antonio, TX 78207, USA
[email protected]

Abstract. Rapid population growth and the increasing demand for transportation necessitate more efficient transportation and logistics processes. Efficient logistics processes in a supply chain can make the supplier selection procedure more effective in terms of delivery time. This paper studies a three-stage supply chain which enables truck sharing for delivery. All suppliers and the manufacturer have a time window for loading and unloading the material. A nonlinear programming model is developed to find the optimal truck share among different suppliers. A numerical example shows the applicability of the proposed model.

Keywords: Truck sharing · Delivery time window · Supply chain management · Third-party logistics provider · Supplier selection



1 Introduction

Supplier selection is one of the most important decision-making practices in supply chain management. Supplier selection and order assignment are two activities which affect a company's performance and its supply chain competitiveness [1]. Selecting the appropriate supplier can reduce the purchase cost and improve competitiveness [2]. Factors such as price, quality, and delivery time affect supplier selection practices [3]. Increasing competition among companies, as well as higher customer expectations, has made delivery time more important in recent years. Most companies outsource their logistics processes to third-party logistics (3PL) providers so they can focus on their own specialty and competitive advantages [4].
Many researchers have studied the supplier selection problem under various assumptions and within different frameworks. Studies such as supplier selection and order assignment with supply capacity [5], logistics provider selection using the analytic hierarchy process (AHP) and linear programming [6] or using the fuzzy analytic network process (FANP) [7], applying multi-criteria decision making to select a third-party

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 100–108, 2019. https://doi.org/10.1007/978-3-030-29996-5_12


logistics provider [8], and selecting a third-party logistics provider for a mass customization logistics process [9] are some of the works in this area.
Truck transportation has been a major part of logistics processes for decades. The recent increase in demand for truck transportation has caused congestion problems at container terminals, even in developed ports such as the Port of Rotterdam [10]. These problems inspired researchers and decision makers to look for creative ways to address this truck transportation capacity problem. One of the ideas developed to deal with truck transportation capacity brings the concept of ride sharing into logistics processes. Ride sharing, along with dynamic tolling and managed lanes, is a solution used to prevent road congestion [11]. Ride sharing can reduce the negative impacts of traffic congestion, such as air pollution and wasted resources (i.e., time, gas, etc.) [12]. Truck sharing adopts this concept for logistics processes: it is a way to reduce empty trips, as well as to address the lack of capacity in truck transportation [13]. Studies such as [14] and [15] applied the truck sharing concept in container terminals to optimize both the assignment of cranes to containers and truck sharing among the containers.
Although there are many research studies in the fields of truck sharing, third-party logistics providers, and supplier selection, no research has considered the impact of truck sharing while simultaneously selecting both the suppliers and the 3PL providers. In this study we consider a supply chain with three stages. A manufacturer requires a major raw material which can be fulfilled by different suppliers. Third-party logistics providers are available for delivering the material from the suppliers to the manufacturer. Available trucks can share capacity among the pallets from different suppliers.
More explicitly, each truck can make a tour to visit different suppliers and collect pallets, all in one tour, to deliver to the manufacturer. Following the assumptions of ride sharing, there is a loading time window for each supplier. The manufacturer has a desired time and a possible time window for unloading the material as well. An opportunity cost is incurred if the truck cannot be unloaded at the manufacturer's desired time. Therefore, pallets can be collected and delivered only within the time windows provided by the suppliers and the manufacturer. Figure 1 shows the schematic of the studied supply chain.

Fig. 1. Structure of proposed model


The remainder of this paper is organized as follows: the mathematical formulation and solution method are provided in Sect. 2. Section 3 is dedicated to a numerical example, and the conclusions and directions for future studies are given in Sect. 4.

2 Mathematical Model

We describe our network by the graph G(N, A), where N is the set of vertices and A is the set of arcs. Node set N includes the third-party logistics (3PL) providers renting out trucks (D), the suppliers (R), and the single manufacturer {0}. We define the first stage on the node set N_1 = D ∪ R and the second stage on the node set N_2 = R ∪ {0}. Each arc (i, j) has a nonnegative transportation cost ct_{ij} based on the Euclidean distance between i and j. We apply the following assumptions and notation to the model:

2.1 Assumptions

• The number and location of (1) the 3PL companies, (2) the suppliers, and (3) the manufacturer are known in advance;
• The total raw material demand (in pallets) of the manufacturer is known;
• A supplier may not be selected by the manufacturer (the raw material can be supplied by a subset of the suppliers);
• A truck first visits its assigned supplier(s) and picks up the raw materials, then moves to the manufacturer's parking;
• All suppliers have a time window for loading raw material;
• A supplier (if selected) can be visited by only one vehicle;
• There is a limited number of parking lots and limited time for unloading raw materials;
• The Euclidean distance between i and j is used as a proxy for the travel time along each arc (i, j).

2.2 Sets

D: Set of 3PL companies, D = {1, …, d}
R: Set of suppliers, R = {1, …, r}
K: Set of vehicles, K = {1, …, k}
N_1: Set of 3PL companies and suppliers in the first stage, N_1 = D ∪ R
N_2: Set of suppliers and the manufacturer, N_2 = R ∪ {0}

2.3 Parameters

ct: Average cost of travelling (dollars/hour)
co: Average cost of opportunity (dollars/hour)
dem: Total demand
cap_k: Capacity of vehicle k
p: Capacity of the parking for unloading raw materials
t_{ij}: Average travel time from node i to node j (min)
st_i: Average service time at node i (min)
[ept_r, lpt_r]: Time window within which supplier r can deliver the raw materials
dtu: Desired time for unloading the raw material at the manufacturer's parking
ltu: Latest possible time for unloading the raw material at the manufacturer's parking

2.4 Decision Variables

t_i^k: Arrival time of vehicle k at node i ∈ N
x_{ij}^k: 1 if vehicle k traverses arc (i, j) ∈ N; 0 otherwise
y_i^k: 1 if vehicle k visits i ∈ N_2; 0 otherwise

2.5 Mathematical Formulation

\min Z = \sum_{k \in K} \sum_{i \in N_1} \sum_{j \in N_2} ct \, t_{ij} \, x_{ij}^{k} + \sum_{k \in K} co \, y_{0}^{k} \, \lvert dtu - t_{0}^{k} \rvert \qquad (1)

subject to:

\sum_{k \in K} y_{r}^{k} \le 1 \qquad \forall r \in R \qquad (2)

\sum_{k \in K} \sum_{i \in N_1} x_{ij}^{k} \le 1 \qquad \forall j \in R \qquad (3)

\sum_{k \in K} \sum_{j \in N_2} x_{ij}^{k} \le 1 \qquad \forall i \in N_1 \qquad (4)

\sum_{i \in N_1} x_{ir}^{k} = \sum_{j \in N_2} x_{rj}^{k} \qquad \forall r \in R, \ \forall k \in K \qquad (5)

\sum_{i \in N_1} x_{ij}^{k} = y_{j}^{k} \qquad \forall j \in N_2, \ \forall k \in K \qquad (6)

\sum_{k \in K} \sum_{i \in R} y_{i}^{k} \ge dem \qquad (7)

\sum_{i \in N_1} \sum_{j \in N_2} x_{ij}^{k} \le cap_k \qquad \forall k \in K \qquad (8)

\sum_{k \in K} y_{0}^{k} \le p \qquad (9)

t_{j}^{k} \ge t_{i}^{k} + st_i + t_{ij} - M(1 - x_{ij}^{k}) \qquad \forall i \in N_1, \ \forall j \in N_2, \ \forall k \in K \qquad (10)

y_{r}^{k} \, ept_r \le t_{r}^{k} \le y_{r}^{k} \, lpt_r \qquad \forall r \in R, \ \forall k \in K \qquad (11)

t_{0}^{k} \le ltu \, y_{0}^{k} \qquad \forall k \in K \qquad (12)

x_{ij}^{k}, \ y_{j}^{k} \in \{0, 1\} \qquad \forall i \in N, \ \forall j \in N, \ \forall k \in K \qquad (13)

t_{i}^{k} \ge 0 \qquad \forall i \in N, \ \forall k \in K \qquad (14)

Objective function (1) minimizes the total costs, including the transportation cost and the opportunity cost. Constraint (2) expresses that the produced raw materials and subassemblies of each supplier are picked up by at most one vehicle. Constraint (3) ensures that a truck cannot reach a certain destination from two different sources at once. Constraint (4) declares that a truck cannot head to different destinations simultaneously. Equations (5) and (6) are the flow-balance constraints associated with each supplier. Constraint (7) ensures that the total demand is satisfied. Constraints (8) and (9) guarantee that the truck capacity and the parking capacity for unloading the raw materials shipped from suppliers are respected. Constraint (10) tracks the time at which each node is visited. The time window constraint of the suppliers is modeled in Constraint (11), declaring that the raw materials can be picked up after the specified earliest pickup time and before the specified latest pickup time. Constraint (12) imposes that a truck must reach the parking lot for unloading the raw material before the latest possible time. Finally, constraints (13) and (14) impose the binary and non-negativity restrictions on the decision variables.
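The time-related constraints (10)-(12) can be traced on a small instance. The following Python sketch is illustrative only and not part of the original model: the node names, the minutes-after-8-am time origin, and the rule that a truck may wait until a supplier's window opens (consistent with the ≥ form of constraint (10)) are assumptions. The travel times are taken from Table 1 and the loading windows from Table 3 of the numerical example.

```python
def simulate_tour(start, stops, depot, travel, windows, service, ltu):
    """Trace one truck's tour and check the time-window logic of
    constraints (10)-(12). Times are minutes after 8:00 am (assumed).
    Returns the arrival times, or None if the tour is infeasible."""
    t, prev, arrivals = 0.0, start, {}
    for node in stops:
        arrive = t + travel[(prev, node)]
        ept, lpt = windows[node]
        arrive = max(arrive, ept)      # wait for the loading window to open
        if arrive > lpt:
            return None                # constraint (11) violated
        arrivals[node] = arrive
        t, prev = arrive + service, node
    arrive = t + travel[(prev, depot)]
    if arrive > ltu:
        return None                    # constraint (12) violated
    arrivals[depot] = arrive
    return arrivals

# Travel times (min) from Table 1, loading windows from Table 3,
# service time 30 min, ltu = 5 pm (540 min after 8:00 am).
travel = {("d1", "r4"): 30, ("r4", "r1"): 30, ("r1", "parking"): 42,
          ("d1", "r1"): 19, ("r1", "r4"): 30, ("r4", "parking"): 30}
windows = {"r1": (120, 180), "r4": (60, 120)}  # 10-11 am and 9-10 am

print(simulate_tour("d1", ["r4", "r1"], "parking", travel, windows, 30, 540))
print(simulate_tour("d1", ["r1", "r4"], "parking", travel, windows, 30, 540))
```

Visiting r4 (window 9-10 am) before r1 (window 10-11 am) yields a feasible tour, while the reverse order misses r4's window entirely and returns None, mirroring how constraint (11) prunes tours in the model.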

2.6 Model Linearization

The proposed model is a mixed-integer nonlinear program (MINLP). However, the objective function can be rewritten in linear form using a set of linear constraints [16]. Linearizing objective function (1):

\min Z = \sum_{k \in K} \sum_{i \in N_1} \sum_{j \in N_2} ct \, t_{ij} \, x_{ij}^{k} + \sum_{k \in K} co \, y_{0}^{k} \, \lvert dtu - t_{0}^{k} \rvert = \sum_{k \in K} \sum_{i \in N_1} \sum_{j \in N_2} ct \, t_{ij} \, x_{ij}^{k} + \sum_{k \in K} co \, y_{0}^{k} \, (a_{1}^{k} + a_{2}^{k}) \qquad (16)

a_{1}^{k} - a_{2}^{k} = dtu - t_{0}^{k} \qquad \forall k \qquad (17)

a_{1}^{k}, \ a_{2}^{k} \ge 0 \qquad \forall k \qquad (18)

Objective function (16) is still nonlinear; each of the two bilinear expressions in it must be rewritten again. The following formulas show the linearized form of \sum_{k \in K} co \, y_{0}^{k} a_{1}^{k}:

\sum_{k \in K} co \, y_{0}^{k} a_{1}^{k} = \sum_{k \in K} co \, a_{1a}^{k} \qquad (19)

a_{1a}^{k} \le M y_{0}^{k} \qquad \forall k \qquad (20)

a_{1a}^{k} \le a_{1}^{k} \qquad \forall k \qquad (21)

a_{1a}^{k} \ge a_{1}^{k} - M(1 - y_{0}^{k}) \qquad \forall k \qquad (22)

Constraints (17), (18), (20), (21), and (22) are the new constraints of our optimization model. The term \sum_{k \in K} co \, y_{0}^{k} a_{2}^{k} in Eq. (16) should be rewritten in the same manner as (19) to (22).
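The two linearization steps can be sanity-checked numerically. The sketch below (pure Python, with hypothetical values of dtu and t_0^k) verifies that the split in (17)-(18) recovers the absolute-value term at the LP optimum, and that the big-M constraints (20)-(22), together with nonnegativity, pin the auxiliary variable a_{1a}^k to exactly y_0^k · a_1^k:

```python
def abs_split(dtu, t0):
    """Split dtu - t0 into a1 - a2 with a1, a2 >= 0 (constraints (17)-(18)).
    At an LP optimum at most one of a1, a2 is positive, so a1 + a2 = |dtu - t0|."""
    diff = dtu - t0
    return max(diff, 0.0), max(-diff, 0.0)

def product_interval(y, a1, M=1e4):
    """Feasible interval for the auxiliary variable a1a under the big-M
    constraints (20)-(22), combined with a1a >= 0."""
    upper = min(M * y, a1)
    lower = max(a1 - M * (1 - y), 0.0)
    return lower, upper

# constraint (17) holds and the |dtu - t0| term is recovered
for dtu, t0 in [(15.0, 17.5), (15.0, 12.0), (15.0, 15.0)]:
    a1, a2 = abs_split(dtu, t0)
    assert abs((a1 - a2) - (dtu - t0)) < 1e-9
    assert abs((a1 + a2) - abs(dtu - t0)) < 1e-9

# When y = 1, constraints (20)-(22) pin a1a to exactly a1;
# when y = 0, they force a1a to 0 -- i.e. a1a = y * a1 in both cases.
assert product_interval(1, 3.5) == (3.5, 3.5)
assert product_interval(0, 3.5) == (0.0, 0.0)
```

The last two assertions show why the product y_0^k a_1^k can be replaced by the single variable a_{1a}^k without changing the optimal solution.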

3 Computational Results

To demonstrate the applicability of the proposed model, a numerical example is presented in this section. Consider a supply chain with 8 suppliers and one manufacturer. The manufacturer requires 5 pallets of raw material to run its production process. Each supplier can provide one pallet of raw material, and the manufacturer chooses the suppliers based upon the travelling and opportunity costs. There are three third-party logistics providers (3PLs) with one truck each, and the 3PLs can apply ride sharing to reduce the shipment cost. Travel times, obtained from the Euclidean distances, are provided in Table 1. The other required parameter values are provided in Tables 2 and 3.

Table 1. Travel time between nodes (minutes)

         Parking  d1  d2  d3  r1  r2  r3  r4  r5  r6  r7  r8
Parking        0  27  40  40  42  38  48  30  42  42  38  42
d1            27   0  48  67  19  27  40  30  57  57  60  68
d2            40  48   0  57  48  72  27  19  81  13  30  48
d3            40  67  57   0  81  72  78  60  48  48  30  13
r1            42  19  48  81   0  42  30  30  75  60  68  80
r2            38  27  72  72  42   0  67  55  42  78  75  78
r3            48  40  27  78  30  67   0  19  89  40  55  72
r4            30  30  19  60  30  55  19   0  72  30  40  55
r5            42  57  81  48  75  42  89  72   0  80  68  60
r6            42  57  13  48  60  78  40  30  80   0  19  38
r7            38  60  30  30  68  75  55  40  68  19   0  19
r8            42  68  48  13  80  78  72  55  60  38  19   0

Table 2. Parameters

Parameter  Value     Parameter  Value
dem        5         dtu        3 pm
ct         40        ltu        5 pm
co         30        cap_k1     3
p          3         cap_k2     2
st_i       30        cap_k3     1

Table 3. Suppliers' loading time windows

r1 [10 am, 11 am]   r2 [10 am, 12 pm]   r3 [11 am, 12 pm]   r4 [9 am, 10 am]
r5 [10 am, 11 am]   r6 [10 am, 12 pm]   r7 [11 am, 12 pm]   r8 [12 pm, 1 pm]

Figure 2 shows the solution schematically. Solving the model gives the lowest possible cost of $204, which is achievable through the following schedule:
• The truck from the 1st logistics company takes one pallet from the first supplier at 10 am, one pallet from the fourth supplier at 12 pm, and unloads the entire shipment in the manufacturer's parking lot at 3 pm.
• The truck from the 2nd logistics company takes one pallet from the sixth supplier at 10 am, one pallet from the seventh supplier at 11 am, and unloads the entire shipment in the manufacturer's parking lot at 3 pm.
• The truck from the 3rd logistics company takes one pallet from the eighth supplier at 12 pm and unloads the entire shipment in the manufacturer's parking lot at 3 pm.

Fig. 2. Optimal truck share plan


The short distances between the different suppliers and the manufacturer's parking make it possible to deliver the material to the manufacturer earlier than the specified time, but the manufacturer's employees are only available in the assigned time slot from 3 pm to 5 pm. This solution suggests the potential value of an agreement between the suppliers and the manufacturer to coordinate the loading/unloading time windows.

4 Conclusions

The truck transportation sector has witnessed a demand boom in recent years. Sharing trucks among different loads can improve truck capacity utilization and reduce traffic congestion as well. Improved transportation services would increase the competitiveness of third-party logistics providers. The model proposed in this study showed the applicability of truck sharing in a three-stage supply chain. The numerical example illustrated that the available trucks can be shared among the suppliers to deliver material to the manufacturer while respecting the available time windows for loading/unloading trucks. Studying the same problem in a multi-product, multi-period, multi-objective framework would be an interesting topic for future research. Moreover, considering uncertain parameters (e.g., in demand) may increase the attractiveness of the model and make it more applicable. Studying the possibility of bargaining over the loading/unloading time windows among the chain players would be a valuable area for future research as well.

References

1. Moghaddam, K.S.: Fuzzy multi-objective model for supplier selection and order allocation in reverse logistics systems under supply and demand uncertainty. Expert Syst. Appl. 42(15–16), 6237–6254 (2015)
2. Willis, T.H., Huston, C.R., Pohlkamp, F.: Evaluation measures of just-in-time supplier performance. Prod. Inven. Manag. J. 34(2), 1 (1993)
3. Moore, D.L., Fearon, H.E.: Computer-assisted decision-making in purchasing. J. Purch. 9(4), 5–25 (1973)
4. Vaidyanathan, G.: A framework for evaluating third-party logistics. Commun. ACM 48(1), 89–94 (2005)
5. Ghodsypour, S.H., O'Brien, C.: The total cost of logistics in supplier selection, under conditions of multiple sourcing, multiple criteria and capacity constraint. Int. J. Prod. Econ. 73(1), 15–27 (2001)
6. Tian, Y., Zantow, K., Fan, C.: A framework of supplier selection of integrative logistics providers. Int. J. Manag. Enterp. Dev. 7(2), 200–214 (2009)
7. Nobar, M.N., Setak, M., Tafti, A.F.: Selecting suppliers considering features of 2nd layer suppliers by utilizing FANP procedure. Int. J. Bus. Manag. 6(2), 265 (2011)
8. Hwang, B.-N., Shen, Y.-C.: Decision making for third party logistics supplier selection in semiconductor manufacturing industry: a nonadditive fuzzy integral approach. Math. Probl. Eng. 2015, 1–12 (2015)
9. Hu, X., Wang, G., Li, X., Zhang, Y., Feng, S., Yang, A.: Joint decision model of supplier selection and order allocation for the mass customization of logistics services. Transp. Res. Part E Logist. Transp. Rev. 120, 76–95 (2018)
10. Behdani, B., Fan, Y., Wiegmans, B., Zuidwijk, R.: Multimodal schedule design for synchromodal freight transport systems. Eur. J. Transp. Infrastruct. Res. 16, 424–444 (2014)
11. Xiong, C., Hetrakul, P., Zhang, L.: On ride-sharing: a departure time choice analysis with latent carpooling preference. J. Transp. Eng. 140(8), 04014033 (2014)
12. Wang, F.-Y., Tang, S., Sui, Y., Wang, X.: Toward intelligent transportation systems for the 2008 Olympics. IEEE Intell. Syst. 18(6), 8–11 (2003)
13. Islam, S., Olsen, T.: Truck-sharing challenges for hinterland trucking companies: a case of the empty container truck trips problem. Bus. Process. Manag. J. 20(2), 290–334 (2014)
14. Islam, S.: Simulation of truck arrival process at a seaport: evaluating truck-sharing benefits for empty trips reduction. Int. J. Logist. Res. Appl. 21(1), 94–112 (2018)
15. Vahdani, B., Mansour, F., Soltani, M., Veysmoradi, D.: Bi-objective optimization for integrating quay crane and internal truck assignment with challenges of trucks sharing. Knowl. Based Syst. 163, 675–692 (2019)
16. Bazaraa, M.S., Jarvis, J.J., Sherali, H.D.: Linear Programming and Network Flows. Wiley, Hoboken (2011)

Scheduling Auction: A New Manufacturing Business Model for Balancing Customization and Quick Delivery

Shota Suginouchi and Hajime Mizuyama

Aoyama Gakuin University, 5-10-1 Fuchinobe, Sagamihara, Kanagawa 252-5258, Japan
[email protected]

Abstract. This paper proposes a new manufacturing business model that is enabled by a novel scheduling method based on an auction, called Scheduling Auction. The proposed scheduling method and the associated business model aim to balance expanding customization and quick delivery to better satisfy customers. The method accepts bids from customers, captures their preferences in terms of product type and due date more extensively than before from the bids, and reflects those preferences in the production schedule. The fundamental framework of the method and its working are illustrated with a simple case of a factory with a single machine. Future research directions include conducting validation experiments with human subjects and extending the method to more complex production systems.

Keywords: Production scheduling · Auction · Mechanism design

1 Introduction

Manufacturing companies must meet customers' demands. To satisfy diverse market demands, production styles have continuously changed from high-volume low-variety production to low-volume high-variety production. This trend keeps increasing product variety, and yet the requested due dates never become longer. Thus, companies face a crucial challenge of how to balance expanding customization and quick delivery to satisfy their customers. The conventional manufacturing business model can be broadly classified into make-to-stock (MTS) and make-to-order (MTO) models. MTS is characterized by a short lead-time from order to delivery, where quick delivery is accomplished in exchange for holding product inventory. Because the cost and risk associated with holding inventory increase with product variety, naive MTS is not suitable for most of the extremely high-variety production environments in use today. MTO, in contrast, deals with high-variety production with minimal inventory,

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 109–117, 2019. https://doi.org/10.1007/978-3-030-29996-5_13


by making the production schedule after receiving orders. However, the resulting longer lead-time to delivery is not always accepted by the market. Mass customization is a relatively new business model that combines MTO of final product assembly with MTS of standardized parts and units, thereby making the lead-time shorter than that of naive MTO with less inventory than naive MTS. However, demanding customers remain unsatisfied. Thus, this paper proposes a new manufacturing business model that is enabled by a novel scheduling method based on an auction, called Scheduling Auction.
The importance of scheduling is well recognized even in conventional MTO and mass customization, where the problem is captured as an efficient allocation of jobs to machines along the time axis. The jobs considered here correspond to a set of orders provided by customers, which specify product type and due date. Thus, zooming out to each customer's perspective, another relevant decision is made in advance by the customer in terms of which product type and due date should be claimed. The proposed method is unique in that it deals with these two decision problems simultaneously by involving customers in the computational process, and it tries to obtain a production schedule preferable to the customers. In Scheduling Auction, customers are invited to join an auction and bid on a product type and due date. A schedule is then made according to the bids collected from the customers. In conventional production scheduling, an order from a customer is specified by a single set of values for the product type, price, and due date, e.g., "Product A, $100, and within two weeks". In the proposed method, in contrast, the preference of the customer can be expressed more flexibly, e.g., "I want to buy Product A or Product B but not both.
Product A is better than Product B, if it is delivered by the end of this month." This flexibility expands the solution space of the corresponding scheduling problem and is expected to lead to a superior solution. This paper presents the fundamental framework of Scheduling Auction using a simple case of a factory with a single machine.

1.1 Literature Review

Some papers have proposed production scheduling methods based on an auction mechanism. For example, Kutanoglu and Wu [1] used a combinatorial auction and Lagrangean relaxation for scheduling. In their study, the auctioned goods are the right to occupy a machine in a specified time slot. Each job is treated as an agent and bids for a combination of time slots for processing itself. The auctioneer coordinates the bids by adjusting the price of each time slot according to the conflicts among the bidders. The primary motivation behind these studies is how to solve a scheduling problem through distributed computation rather than how to capture and incorporate customers' detailed preferences into the schedule. Suginouchi et al. [2,3] utilized a combinatorial auction for taking customers' preferences into consideration in scheduling. This work addresses the scheduling problem by repeated auctioning among computer agents representing customers, which imitates a negotiation process among the customers. A sub-optimal solution for the corresponding combinatorial optimization problem is obtained by a matheuristic approach [4]. To more accurately capture customers' preferences, it is desirable to have the actual customers participate in the negotiation process. However, it is not realistic for actual customers to follow the troublesome process of repeating an auction many times. Furthermore, strategic interaction among the customers needs to be properly managed to involve them in the computational process. Mechanism design is an academic field addressing strategic interactions among self-interested agents, and it has recently started to be applied to production scheduling [5–7]. Most studies in this field focus on the theoretical aspect and deal with simplified settings, where either machines or jobs are simply deemed self-interested agents, and the earlier decision of having customers choose product type and due date is not incorporated. There have also been some initial application studies. For example, Zhong et al. [8] and Zheng et al. [9] applied auction-based scheduling methods to an energy hub and to service clouds, respectively. Nishino et al. [10] proposed an auction mechanism for arranging seat reservations in a movie theater. However, all the settings considered in these studies are different from the one for Scheduling Auction.

Fig. 1. Schema of Scheduling Auction

2 Proposed Method

It is assumed that a manufacturing company divides the planning horizon into multiple terms and holds an auction to plan the production schedule of every term. This paper focuses on an auction held for a certain term. The term is further divided into smaller time units, and due dates, processing times, completion times, etc., are all specified in terms of these units in the following.


Figure 1 shows a schema of the proposed method, Scheduling Auction. The manufacturing company takes the role of the auctioneer, and customers are treated as bidders. The auctioned goods are not products but combinations of product type and due date. The customers' preferences over the goods are assumed to be quasi-linear. Each bidder may submit multiple bids for different combinations of product type and due date, but at most one of a bidder's bids can be assigned to the schedule. Because the production capacity is limited, not all customers can receive a product. If none of a customer's bids is chosen in the auction, the customer may join the next-term auction or give up obtaining a product. Once the bids are collected, the auctioneer determines the winning bids among them and establishes the corresponding production schedule. The criterion of this winner determination is to maximize the social surplus. The contract price is determined using the Vickrey-Clarke-Groves (VCG) mechanism [11–13] to induce truthful behavior of the bidders.

3 Example Case

In this section, a single-machine scheduling problem is used to illustrate the working of the proposed method.

3.1 Notation

i: Customer (bidder) (i = 1, …, I)
t: Product type (t = 1, …, T)
PT_t: Processing time of type t
d: Index of possible due dates (d = 1, …, D)
DD_d: Due date corresponding to index d
b_{i,t,d}: Customer i's bid value on type t and due date index d
p_i: Customer i's payment
x_{i,t,d} ∈ {0, 1}: Decision variable indicating whether b_{i,t,d} is accepted
j: Job processed in the jth position (j = 1, …, I)
jn_j: Customer ordering the product corresponding to job j (decision variable)
CO_j: Completion time of job j
PO_j: Processing time of job j
DO_j: Due date of job j
SS: Social surplus
SS_{-i}: Social surplus in the absence of customer i

3.2 Formulation

The following formulation specifies the problem of optimizing the production schedule and determining winners of the auction at the same time.

\max \sum_{i=1}^{I} \sum_{t=1}^{T} \sum_{d=1}^{D} b_{i,t,d} \, x_{i,t,d} \qquad (1)

subject to:

CO_j = \begin{cases} PO_j & \text{if } j = 1 \\ CO_{j-1} + PO_j & \text{otherwise} \end{cases} \qquad \forall j \qquad (2)

DO_j = \sum_{t=1}^{T} \sum_{d=1}^{D} DD_d \, x_{jn_j,t,d} \qquad \forall j \qquad (3)

CO_j \le DO_j \qquad \forall j \qquad (4)

jn_j \ne jn_{j'} \qquad \forall j, \ \forall j' \mid j \ne j' \qquad (5)

PO_j = \sum_{t=1}^{T} \sum_{d=1}^{D} PT_t \, x_{jn_j,t,d} \qquad \forall j \qquad (6)

\sum_{t=1}^{T} \sum_{d=1}^{D} x_{i,t,d} = 1 \qquad \forall i \qquad (7)

jn_j \in \{1, \ldots, I\} \qquad \forall j \qquad (8)

x_{i,t,d} \in \{0, 1\} \qquad (9)

Algorithm

In the case of the single-machine scheduling problem, the algorithm of Scheduling Auction is as follows: STEP 1. Customers who want to order a product participate in the auction as bidders. Each bidder may bid for all combinations of product type t and due date DDd . Table 1 shows an example of such bids, where the number of product types T is three and the number of possible due dates D is two. STEP 2. The auctioneer gathers bids and solves the integer optimization problem formulated in Sect. 3.2 to determine the winning bids and to plan a production schedule. The social surplus SS is also obtained. STEP 3. The auctioneer determines each bidder’s payment pi using the VCG mechanism [11–13]. To do so, i is initialized to be 1.

114

Table 1. An example of bids

Due date DD_d   t = 1   t = 2   t = 3
DD_1            $200    $100    $50
DD_2            $150    $100    $0

Table 2. Experimental conditions

Number of customers I:          5, 10
Number of types T:              2, 10
Number of possible due dates D: 2, 10
Number of trials:               10
Bid value b_{i,t,1}:            $U[1, 100]^a
Bid value b_{i,t,d} (d ≥ 2):    max{U[1, 100], b_{i,t,d-1}}
Processing time PT_t:           t
Due date DD_d:                  ceil{(d + 1) × (T × I)/D}^b

a U[a, b]: random sampling from a uniform distribution
b ceil(x): roundup integer

STEP 4. The auctioneer calculates the social surplus SS_{-i} in the absence of customer i by solving the integer optimization problem formulated in Sect. 3.2. Equation (10) determines customer i's payment p_i.

p_i = SS_{-i} - \left( SS - \sum_{t=1}^{T} \sum_{d=1}^{D} b_{i,t,d} \, x_{i,t,d} \right) \qquad (10)

STEP 5. If i < I, set i := i + 1 and go to STEP 4. Otherwise, the auctioneer ends the auction.
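The five steps above can be sketched end to end in code. The brute-force Python example below is illustrative only: the instance (processing times, due dates, and bid values) is hypothetical and is not the paper's experiment, and the exactly-one-bid constraint of Eq. (7) is relaxed to "at most one bid per customer" so that the tiny instance remains feasible.

```python
from itertools import product

# A hypothetical single-machine instance (NOT the paper's experiment):
# two product types with assumed processing times, and three bidders.
PT = {"A": 2, "B": 2}

# bids[i] = list of (product type, due date, bid value); each bidder
# wins at most one of its bids.
bids = {
    1: [("A", 2, 10)],
    2: [("A", 2, 8), ("A", 4, 6)],
    3: [("B", 4, 7)],
}

def feasible(jobs):
    """Single-machine schedulability: sequence by earliest due date and
    check every completion time against its due date (cf. (2)-(4))."""
    t = 0
    for typ, due, _ in sorted(jobs, key=lambda j: j[1]):
        t += PT[typ]
        if t > due:
            return False
    return True

def winner_determination(bids):
    """Brute-force STEP 2: maximize the sum of accepted bid values."""
    best_value, best_alloc = 0, {}
    choices = [[None] + bids[i] for i in bids]
    for combo in product(*choices):
        jobs = [c for c in combo if c is not None]
        if feasible(jobs):
            value = sum(c[2] for c in jobs)
            if value > best_value:
                best_value = value
                best_alloc = {i: c for i, c in zip(bids, combo) if c is not None}
    return best_value, best_alloc

def vcg_payments(bids):
    """STEPs 3-5: payment p_i = SS_{-i} - (SS - b_i), as in Eq. (10)."""
    ss, alloc = winner_determination(bids)
    payments = {}
    for i in bids:
        ss_minus_i, _ = winner_determination({j: b for j, b in bids.items() if j != i})
        won_value = alloc[i][2] if i in alloc else 0
        payments[i] = ss_minus_i - (ss - won_value)
    return ss, alloc, payments

ss, alloc, pay = vcg_payments(bids)
print(ss, alloc, pay)
```

For this instance the winner determination yields a social surplus of 17 with customers 1 and 3 winning, and the VCG payments are 8, 0, and 6: each winner pays exactly the externality it imposes on the other bidders, which is what induces truthful bidding.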

3.4 Computational Experiments

To discuss the performance of Scheduling Auction, numerical experiments were conducted under the conditions shown in Table 2. This study used CPLEX 12.6.3 [14] on an Intel(R) Core i7 3.50 GHz computer with 16.0 GB of memory. Figure 2 shows a schedule obtained by Scheduling Auction when the number of customers I is 5, the number of product types T is 2, and the number of possible due dates D is 2. The objective function value corresponding to the social surplus is $336. Table 3 shows the bid values; * signifies accepted bids. In this case, customer 1 pays $5, customer 2 pays $29, customer 3 pays $29, customer 4 pays $0, and customer 5 pays $19. The mean calculation time for solving a problem was 0.03 s over 10 trials.
The calculation time increased exponentially as the number of customers I, the number of product types T, and the number of possible due dates D increased. For example, the mean calculation time was 22.02 s over 10 trials when I = 10, T = 2, and D = 2. When I = 10, T = 2, and D = 10, the mean calculation time for solving a problem was 96.06 s over 10 trials. When I = 10,

Fig. 2. Example of an obtained schedule (I = 5, T = 2, D = 2)

Table 3. Each bidder's bid values

Customer i   t = 1, d = 1   t = 1, d = 2   t = 2, d = 1   t = 2, d = 2
1            *79            11             42             2
2            40             40             *73            6
3            19             13             *97            41
4            34             8              43             *29
5            58             *58            87             40

T = 10, and D = 2, the mean calculation time was 180.58 s in 10 trials. The exact solution cannot be obtained in cases where I is more than 15. Furthermore, to decide each customer’s payment, the auctioneer must solve the integer optimization problem shown in Sect. 3.2 (I + 1) times. Thus, an efficient optimization method is required to realize Scheduling Auction.

4 Conclusion

This paper proposes an auction-based scheduling method and a new manufacturing business model enabled by the method. The proposed method captures customers' preferences in terms of product type and due date more extensively than before from their bids, and these preferences are then reflected in the production schedule. A winner determination procedure is also presented. This paper uses a simple case with only one machine to describe the fundamental framework of the proposed scheduling method and the associated business model. Future research directions include conducting experiments with computer agents and/or human subjects and extending the problem formulation and solution procedure to more complex production systems. Furthermore, this paper implicitly assumes that customers' preferences are determined only by the characteristics of the final product, such as specification, price, and due date. In the future, however, customers may also care about how the product is produced, i.e., the production process itself. A typical example is observed in the liberalized energy market. In April 2016, the retail electricity market in Japan was fully liberalized, and consumers became able to select a retailer.


S. Suginouchi and H. Mizuyama

Consumers' selection criteria were found to include not only the price but also the greenness, etc. Some retailers started to appeal to consumers through the way their electricity is generated, e.g., from renewable energy sources rather than nuclear power plants. Thus, future customers may have a preference in terms of the energy utilized to produce a product, and such a preference could also be incorporated into the production schedule by extending the method proposed in this paper.

Acknowledgements. This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Young Scientists, 2019–2022 (19K15240, Shota Suginouchi).

References

1. Kutanoglu, E., Wu, S.D.: On combinatorial auction and Lagrangean relaxation for distributed resource scheduling. IIE Trans. 31(9), 813–826 (1999). https://doi.org/10.1023/A:1007666414678
2. Suginouchi, S., Kokuryo, D., Kaihara, T.: Value co-creative manufacturing system for mass customization: concept of smart factory and operations method using autonomous negotiation mechanism. In: Proceedings of the 50th CIRP Conference on Manufacturing Systems (USB) (2017). https://doi.org/10.1016/j.procir.2017.03.313
3. Suginouchi, S., Kaihara, T., Fujii, N., Kokuryo, D.: Utilization of pheromone in production scheduling by negotiation and cooperation among customers. In: Proceedings of SICE Annual Conference 2018, pp. 773–778 (2018). https://doi.org/10.23919/SICE.2018.8492625
4. Boschetti, M.A., Maniezzo, V., Roffilli, M., Bolufé Röhler, A.: Matheuristics: optimization, simulation and control. In: Blesa, M.J., Blum, C., Di Gaspero, L., Roli, A., Sampels, M., Schaerf, A. (eds.) HM 2009. LNCS, vol. 5818, pp. 171–177. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04918-7_13
5. Heydenreich, B., Müller, R., Uetz, M.: Games and mechanism design in machine scheduling - an introduction. Prod. Oper. Manag. 16(4), 437–454 (2007). https://doi.org/10.1111/j.1937-5956.2007.tb00271.x
6. Christodoulou, G., Koutsoupias, E.: Mechanism design for scheduling. Bull. EATCS 97, 40–59 (2009)
7. Kress, D., Meiswinkel, S., Pesch, E.: Mechanism design for machine scheduling problems: classification and literature overview. OR Spectr. 40, 583–611 (2018). https://doi.org/10.1007/s00291-018-0512-8
8. Zhong, W., Yang, C., Xie, K., Xie, S., Zhang, Y.: ADMM-based distributed auction mechanism for energy hub scheduling in smart buildings. IEEE Access 6, 45635–45645 (2018). https://doi.org/10.1109/ACCESS.2018.2865625
9. Zheng, B., Li, P., Liu, S.: Mechanisms for optimally scheduling and pricing pleasingly parallel jobs in service clouds. IEEE Access 6, 73733–73749 (2018). https://doi.org/10.1109/ACCESS.2018.2882605
10. Nishino, N., Fukuya, K., Ueda, K.: An auction mechanism considering seat reservations in movie theater services. Int. J. Organ. Collect. Intell. (IJOCI) 2(1), 63–76 (2011). https://doi.org/10.4018/joci.2011010104
11. Vickrey, W.: Counterspeculation, auctions, and competitive sealed tenders. J. Financ. 16(1), 8–37 (1961). https://doi.org/10.2307/2977633


12. Clarke, E.H.: Multipart pricing of public goods. Public Choice 11(1), 17–33 (1971). https://doi.org/10.1007/BF01726210
13. Groves, T.: Incentives in teams. Econometrica 41(4), 617–631 (1973). https://doi.org/10.2307/1914085
14. IBM CPLEX Optimizer web page. https://www.ibm.com/analytics/data-science/prescriptive-analytics/cplex-optimizer

Passenger Transport Disutilities in the US: An Analysis Since 1990s

Helcio Raymundo and João Gilberto M. dos Reis(&)

Postgraduate Programme in Production Engineering, Paulista University, Dr. Bacelar, 1212, São Paulo, São Paulo 04026-002, Brazil
[email protected], [email protected]

Abstract. Even while providing the means for human displacement, passenger transport causes disadvantages that can be called disutilities, such as time and money spending, insecurity and discomfort, and negative impacts on communities. From the National Transportation Statistics, it is possible to measure passenger transport disutilities and reach some conclusions that can help the planning and public policies of the country. The results show that Americans have been wasting more time and spending more money on their cars since the 1990s. Insecurity related to traffic in all modes of transportation has decreased significantly, and the discomfort in automobiles may have experienced an increase due to improvements in the infrastructure. America is lowering its per capita emissions of local gases, but there is insufficient data for conclusions regarding the greenhouse gases.

Keywords: Passenger transport · Disutilities · Mobility · The USA

1 Introduction

Despite supplying the means for people's displacement, passenger transport causes losses, inconveniences, and disadvantages that can be called disutilities. The passenger transport disutilities imposed on passengers are time and money spending, insecurity and discomfort, and, on society, negative impacts on communities. High levels of service imply low levels of disutility and vice-versa, and passenger transport problems are not evident by the mere occurrence of disutilities, but rather by their manifestation at undesirable levels [1]. This principle seems never to have been fully present in driving the destiny of passenger transport in the world, and in the United States [1]; otherwise, perhaps the automobile might not have been hegemonic, dividing its tasks with collective modes of transportation in a more equitable way, thus reducing the disutilities of passenger transportation in the country. The automobile is the preferred mode of travel for most Americans. Passenger transport in the country has always been based on it, and public transportation, although essential, plays a secondary role and has had a small market share [2]. These circumstances have permeated the passenger transport industry (including governmental agencies) and have been shaping the official statistics, such as the National Transportation Statistics, prepared by the Bureau of Transportation Statistics annually since 1970 [3].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 118–124, 2019. https://doi.org/10.1007/978-3-030-29996-5_14


Thus, considering that the National Transportation Statistics 2018 is one of the most comprehensive compendiums on transport and transit in the USA, it is possible to find in a single publication most of the elements that allow the country to evaluate past actions and anchor its future planning and the development of public policies [4]. Therefore, this paper aims to establish an overview of passenger transport disutilities in the USA from this source [4], adequately compiled, applying a specific methodology for measurement and the corresponding analysis developed by the authors [5]. The authors conducted a similar study regarding the conditions of Brazilian urban areas in cities with more than 60 thousand inhabitants, representing about 60% of the population of the country, reaching satisfactory results [6].

2 Materials and Methods

2.1 Analysis of the National Transportation Statistics 2018

Table 1 shows how to assess passenger transport disutilities.

Table 1. Disutilities measurements (Source: Adapted from [5, 7, 8])

Disutility | Methodological main characteristics
Time | Complete journey time (timed or estimated), origin to destination, regarding time spent
Cost | All expenses and trip costs should be included (cost per passenger): (i) passengers' expenses with vehicles; (ii) fares; and (iii) non-explicit cost of time
Insecurity | All passengers may suffer traffic accidents, which may result in unsafe conditions. Typical accident rates can be adopted, weighted by the number of inhabitants
Discomfort | Discomfort is limited not only to physical aspects but also to the psychological side (privacy/freedom) in (i) public transport terminals, stations, stops, and (ii) vehicles
Negative impacts on communities | (i) consumption of areas devoted to infrastructure, and (ii) environmental impacts, the latter classified into noise pollution and pollution caused by gases, liquids, and solids that reach the air, water and soil, measured by costs per passenger of noise pollution, greenhouse gas emissions (GGE) (CO2) and local gas emissions

Time. Three sets of data and information can help to understand the disutility of time:
• Annual Person-Hours of Highway Traffic Delay per Auto Commuter – "the extra time spent during the year traveling at congested speeds rather than free-flow speeds in the peak periods" [4, 9];
• Travel Time Index – "the ratio of travel time in the peak period to travel time at free-flow conditions" [4, 9]; and


• Annual Roadway Congestion Index (RCI) – "a measure of vehicle travel density on major roadways in an urban area during the peak period" [4].

Despite the similarities between these indicators, all of them highlight a considerable and worrying growth in time spending. Undoubtedly, people using cars in the USA are spending more time every year, from 18 h in 1982 to 42 h in 2006, almost 2.3 times as much. Also, despite the small relief experienced in recent years as of 2008, the indicators had returned to their previous levels by 2011 (Fig. 1).

Fig. 1. Annual Person-Hours of Highway Traffic Delay per Auto Commuter, 471 urban area average (Source: [4])

Cost. People have been spending more money using their cars in the USA since 1998, considering:
• Annual Highway Congestion Cost per Peak Auto Commuter – "cost of wasted time and fuel associated with congestion" [4, 10]; and
• Expenditures per Capita – the result of the indicator "Personal Expenditures by Category - millions of current dollars" [4], chained into 2014 dollars using the US Inflation Calculator [11] and divided by the corresponding population [12].

The per capita expenditures, from a level of almost US$ 1,800 in 1960, reached US$ 3,300 in 1985 and went to US$ 3,900 in 2005 (about 2.2 times the 1960 level), maintaining an average level of US$ 3,600 from 2011 to 2014 (Fig. 2).

Insecurity. The best indicator available for this disutility is the number of fatalities per million inhabitants for seven modes of transportation [4], as shown in Fig. 3 and Table 2.
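The chaining step behind the Expenditures per Capita indicator can be sketched as a simple two-stage computation: deflate nominal expenditures to base-year dollars via a CPI ratio, then divide by population. The figures in the example are illustrative placeholders, not the official series used by the authors.

```python
def per_capita_real(nominal_millions, cpi_year, cpi_base, population):
    """Chain nominal expenditures (millions of current dollars) into base-year
    dollars via a CPI ratio, then divide by the corresponding population.
    All inputs here are hypothetical, not the official statistics."""
    real_millions = nominal_millions * (cpi_base / cpi_year)
    return real_millions * 1_000_000 / population

# placeholder numbers: $600M nominal, prices have doubled since, 1M people
spend = per_capita_real(600.0, cpi_year=100.0, cpi_base=200.0, population=1_000_000)
# -> 1200.0 dollars per capita in base-year terms
```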


Fig. 2. Expenditures per Capita in 2014 US dollars (Source: Adapted from [4])

Fig. 3. Fatalities per Mode per Million of Inhabitants in the USA (Source: Adapted from [4]). Modes shown: passenger car occupants, motorcyclists, bus occupants, pedestrians, pedalcyclists, transit bus and others, transit rail.

Table 2. Reduction of insecurity disutility by mode

Modes | Comments
Passenger car occupants | A decrease from 120.0 (1980) to 39.0 (2011), stability (2011 to 2014) (40.0)
Motorcyclists | Worrying highs and lows. Increase from 1960 to 1980 (23.0), decrease (1980–1997) (8.0), increase (1997–2008) (18.0), stability (2008–2015) (15.0)
Bus occupants | Negligible values, ranging from 0.0 to 0.1
Pedestrians | The highest value (44.0) was in 1970, the lowest (16.0) in 2004, a trend of stability since 1997 (
Pedalcyclists |
Transit bus and others |
Transit rail |

$t_\alpha(\mu)\}$, where $n^*$ is the ordinal number of the best individual in the current generation, $t_\alpha(\mu)$ stands for the t-ratio with $\mu$ degrees of freedom at the $\alpha\%$ confidence level, and $\mu$ is the integer closest to

\[ \mu := \frac{\left(\sigma_{n^*}^2/g_{n^*} + \sigma_n^2/g_n\right)^2}{\dfrac{\left(\sigma_{n^*}^2/g_{n^*}\right)^2}{g_{n^*}-1} + \dfrac{\left(\sigma_n^2/g_n\right)^2}{g_n-1}}. \tag{7} \]

Reduction of Computational Load in Robust Facility Layout


A pre-given number of the individuals in this set D are eliminated from the current population, since they can be regarded as significantly bad individuals. The same number of new individuals are then generated by performing crossover on pairs of individuals that do not belong to D, and all the other individuals are carried over to the next generation.
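The selection test above can be sketched as follows, assuming each individual carries a sample mean, sample variance, and sample count of its fitness evaluations (with higher mean regarded as better, matching the sign of $g_{n^*} - g_n$), and with the critical value $t_\alpha(\mu)$ supplied as a function. This is an illustrative sketch of the one-sided Welch t-test, not the authors' implementation.

```python
import math

def welch_df(var_best, n_best, var_i, n_i):
    """Welch's approximate degrees of freedom, as in Eq. (7)."""
    a, b = var_best / n_best, var_i / n_i
    return (a + b) ** 2 / (a ** 2 / (n_best - 1) + b ** 2 / (n_i - 1))

def significantly_bad(stats, best, t_crit):
    """Collect the set D of individuals whose sampled mean fitness is
    significantly worse than the best individual's, by a one-sided Welch
    t-test. `stats` maps individual -> (mean, variance, sample count);
    `t_crit(df)` returns the critical value t_alpha for df degrees of freedom."""
    g_best, v_best, n_best = stats[best]
    D = set()
    for ind, (g, v, n) in stats.items():
        if ind == best:
            continue
        t = (g_best - g) / math.sqrt(v_best / n_best + v / n)
        if t > t_crit(welch_df(v_best, n_best, v, n)):
            D.add(ind)
    return D

# illustrative values: individual 2 is clearly worse than the best (0)
stats = {0: (10.0, 1.0, 30), 1: (9.9, 1.0, 30), 2: (5.0, 1.0, 30)}
bad = significantly_bad(stats, best=0, t_crit=lambda df: 2.0)
```

In practice `t_crit` would come from a t-distribution table (or `scipy.stats.t.ppf`); a constant is used here only to keep the sketch self-contained.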

Fig. 3. Flowchart of the improved robust FLP method.

Fig. 4. Result of Case 1.

Fig. 5. Result of Case 2.

4 Numerical Experiments

Due to page limitation, this section briefly describes the results of numerical experiments using an example with F = 7, J = 5 and Oj = 7. The range of the number of productions of each kind of job was given as {4, 5, 6} (Case 1) and {2, 3, 4, 5, 6, 7, 8} (Case 2). In Case 1, as shown in Fig. 4, the index value of the layout plan obtained by the proposed method was worse than that of the conventional robust FLP, though the computation time was reduced drastically to about 10%. It was also worse than the value of the layout plan obtained by considering only the standard scenario. Figure 5 shows the result of Case 2, where it was impossible to obtain a layout plan by the conventional method due to its computational load (S = 16807). The index value obtained by the proposed method was better than at least that of the layout obtained by considering only the standard scenario. Although it is necessary, as future work, to perform optimization based on the conventional method and compare the resulting index value with that of the proposed method, this result implies the potential of the proposed method for efficient robust FLP considering temporal efficiency.

E. Morinaga et al.

5 Summary

This paper has described the reduction of the computational load of the robust FLP method considering temporal production efficiency. A GA based on the sampling approach and a statistical theory was applied to the robust FLP method. Numerical experiments implied that the proposed method has the potential to enable efficient robust FLP considering temporal efficiency. Further numerical studies with both narrower and broader ranges of scenarios will be performed in future work.

References

1. Drira, A., Pierreval, H., Hajri-Gabouj, S.: Facility layout problems: a survey. Annu. Rev. Control 31(2), 255–267 (2007)
2. Hosseini-Nasab, H., Fereidouni, S., Ghomi, S.M.T.F., Fakhrzad, M.B.: Classification of facility layout problems: a review study. Int. J. Adv. Manuf. Technol. 94(1–4), 957–977 (2018)
3. Lawler, E.L.: The quadratic assignment problem. Manag. Sci. 9(4), 586–599 (1963)
4. Sherali, H.D., Fraticelli, B.M.P., Meller, R.D.: Enhanced model formulations for optimal facility layout. Oper. Res. 51(4), 629–644 (2003)
5. Chwif, L., Barretto, M.R.P., Moscato, L.A.: A solution to the facility layout problem using simulated annealing. Comput. Ind. 36(1–2), 125–132 (1998)
6. Gonçalves, J.F., Resende, M.G.C.: A biased random-key genetic algorithm for the unequal area facility layout problem. Eur. J. Oper. Res. 246(1), 86–107 (2015)
7. Pour, H.D., Nosraty, M.: Solving the facility and layout and location problem by ant-colony optimization-meta heuristic. Int. J. Prod. Res. 44(23), 5187–5196 (2006)
8. Fujihara, Y., Osaki, H.: A facility layout method linked to production scheduling. Trans. Jpn. Soc. Mech. Eng. Ser. C 63(605), 297–303 (1997). (in Japanese)
9. Hino, R., Moriwaki, T.: Resource reallocation based on production scheduling (1st report). J. Jpn. Soc. Precis. Eng. 69(5), 655–659 (2003). (in Japanese)
10. Morinaga, E., Shintome, Y., Wakamatsu, H., Arai, E.: Facility layout planning with continuous representation considering temporal efficiency. Trans. Inst. Syst. Control Inf. Eng. 29(9), 408–413 (2016)
11. Morinaga, E., Iwasaki, K., Wakamatsu, H., Arai, E.: A facility layout planning method considering routing and temporal efficiency. In: 2016 International Symposium on Flexible Automation, pp. 193–198. IEEE (2016)
12. Moslemipour, G., Lee, T.S., Rilling, D.: A review of intelligent approaches for designing dynamic and robust layouts in flexible manufacturing systems. Int. J. Adv. Manuf. Technol. 60(1–4), 11–27 (2012)
13. Rosenblatt, M.J., Lee, H.L.: A robustness approach to facilities design. Int. J. Prod. Res. 25(4), 479–486 (1987)
14. Kouvelis, P., Kurawarwala, A.A., Gutierrez, G.J.: Algorithms for robust single and multiple period layout planning for manufacturing systems. Eur. J. Oper. Res. 63(2), 287–303 (1992)
15. Morinaga, E., Iwasaki, K., Wakamatsu, H., Arai, E.: A robust facility layout planning method considering temporal efficiency. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) APMS 2017. IAICT, vol. 514, pp. 168–175. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66926-7_20


16. Yoshitomi, Y., Ikenoue, H., Takeba, T., Tomita, S.: Genetic algorithm in uncertain environments for solving stochastic programming problem. J. Oper. Res. Soc. Jpn. 43(2), 266–290 (2000)
17. Tokoro, K.: A statistical selection mechanism of GA for stochastic programming problems. Trans. Math. Model. Appl. 43(SIG10), 157–164 (2002)

Decision-Making Process for Buffer Dimensioning in Manufacturing

Lisa Hedvall(&) and Joakim Wikner

Jönköping University, Jönköping, Sweden
[email protected]

Abstract. Systematic and stochastic variations, both endogenous and exogenous to companies, are a constant challenge for decision makers struggling to maintain a competitive advantage for the business. In response, the decision maker introduces buffers to absorb variations, but this does not target the source of the problem. The first step should instead be to focus on how to reduce variations and then to handle the remnant variations. In summary, variation management should be performed first, and buffer management applied as the second step. The combination of these two subprocesses represents service performance management, and within this context buffer dimensioning is a key challenge. Input data, decision maker and process logic are identified as three key aspects of buffer dimensioning, which are integrated, resulting in six scenarios. These scenarios unravel different conditions for performing buffer dimensioning and facilitate an awareness of a match or mismatch between the current and desired situation.

Keywords: Buffer dimensioning · Dimensioning process · Service performance management
1 Introduction

Competitiveness can be based on different business characteristics such as patents protecting a market, a unique brand that attracts customers or a value-adding flow aligned with customer requirements. From a manufacturing perspective the flow-based type of competitiveness is of particular interest since this is where manufacturing is active in providing the foundation of competitiveness. A swift and even flow is in line with the one-piece continuous flow portrayed as the ideal state, according to lean thinking. This state is however seldom reached except in isolated segments of the flow. Instead, a holistic picture of the flow unravels a number of discontinuities that represent variations in different shapes. Such variations may occur along the flow in terms of e.g. different lot sizes or capacity availability but can also be exogenous and related to characteristics of demand or external supply. Even though there is a wide set of variations, both systematic and stochastic, they have in common that they represent major challenges to decision making. Frequently the countermeasure to variations is to introduce buffers. This approach does however overlook the potential of first addressing and reducing the variations, which as a consequence also reduces the need for buffers. The challenge is that there are numerous sources of variations affecting a business. In other

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 196–203, 2019. https://doi.org/10.1007/978-3-030-29996-5_23


words, it is not enough to only consider one type of variation; it is important to manage all variations that have a significant impact on the delivery capability, and hence the competitiveness. Previous studies have even shown that variance reducing activities have a greater impact on the output rate than a buffer such as safety stock [1]. However, it is most often neither economically sustainable nor viable to try to eliminate all variations. Variation management (VM), with the intention to reduce variations, is therefore recommended as a first step before addressing variations by the use of buffers. The total amount of variations sets the preconditions for the need of buffers, realized in terms of safety stocks, safety capacity or safety lead-time, that can be used to absorb variations and protect the delivery capability. To actually manage buffers is referred to as buffer management (BM), which complements VM and concerns decisions regarding the selection, positioning and dimensioning of buffers. In the current body of knowledge, studies have shown that an additional amount of resources can, for example, reduce lead-times [2] and increase responsiveness [3]. BM should therefore not be confused with reducing variations through VM. BM is rather a response to this and an approach to absorb variations [4]. This is an important notion since buffers will not reduce the causes that create challenges through variations; rather they are a symptomatic treatment to decrease the effects of variations. In order to minimize the effects of variations by utilizing buffers, it is of utmost importance that the right buffer and buffer size is configured. Given that decisions are made for the type of buffer and position, the decision of buffer size remains. Buffer dimensioning refers to the process of the latter concern, focusing on determining the right buffer size based on the requirements by utilizing different dimensioning methods. For materials, the literature is vast, with, for example, different methods to calculate an appropriate safety stock level for certain circumstances and trade-offs between different cost components. However, for safety capacity and safety lead-time the theoretical support is not as prominent. This is also observed in industrial practice, where managers have expressed a lack of decision support for dimensioning in general, independent of the type of buffer [5]. The purpose of this research is therefore to outline the key aspects of buffer management, with focus on the implications for the decision-making process for buffer dimensioning. This research builds on insights from practitioners and is a conceptualization of findings from a research project. The remainder of this paper is structured as follows. First, a theoretical frame of reference is presented where VM and BM are briefly described. Thereafter an integrated process of VM and BM is discussed from a general perspective for the key aspects included in service performance management, followed by a detailed perspective on how buffer dimensioning varies depending on the context. Finally, the insights from this research are interpreted in terms of managerial implications by some concluding remarks.

2 Theoretical Frame of Reference

The process for establishing service performance is in focus here, and it encompasses variation management and buffer management. This chapter is therefore dedicated to the parts that influence the service performance.

2.1 Variation Management

All companies are to some extent exposed to variations. Endogenous variations in processes have been the focus of attention in quality management literature due to the fact that variations exist in all sorts of processes and that variations are sources of quality problems [6]. Exogenous variations, such as demand, are however often mentioned as the most challenging type of variations and often have a direct impact on performance [7]. It is important to map existing variations, monitor the effects and reduce the variations to be competitive [8]. This management of variations is here referred to as variation management (VM), with focus on variation reducing activities. To identify the source of uncertainty is a first step to reduce variations, but it does not automatically reveal the cause that drives the variations. In order to fully map variations it is important to identify both the source and the cause to find appropriate actions that reduce the variations. How well a process is studied and the information available determine the possibility to identify variations and thereby also the possibilities to reduce the variations. One important notion is that the variations, independent of source, can have different characteristics depending on the time horizon concerned. In general, the variations a company is exposed to can be classified as systematic or stochastic, representing long-term patterns or short-term random patterns.

2.2 Buffer Management

Variations that remain after the application of VM can be handled by buffers to absorb variations and thereby reduce negative effects on business performance. Managers need to select appropriate buffers, positioned where needed and dimensioned to the right size to sufficiently achieve this purpose. This work is referred to as buffer management (BM). Buffer selection is mainly about choosing what type of buffer to use and for what purpose. The first feature is whether the variation(s) the buffer is supposed to absorb is of systematic or stochastic character. Next the buffer selection is decided based on what type of buffer is appropriate for the specific variation. As mentioned in the introduction, buffers can be in terms of additional material, additional capacity and additional lead-time [7]. Within the technique drum-buffer-rope, a set of rules for buffer positioning is proposed with buffers referred to as constraint, assembly and shipping buffers depending on their position. When the buffer selection and positioning are determined, the focus shifts to the dimensioning of the buffer size, referred to as buffer dimensioning.

2.3 Buffer Dimensioning

A buffer should hedge against variations, for example unexpectedly high demand. If variations are not reduced or absorbed, they can create varying lead-times and negative effects on the delivery performance [9]. Established buffers can be used to absorb deviations from the normal requirements of materials, capacity or lead-time. The size a buffer should be depends on the amount of existing variations weighted against the costs implied by holding the buffer, in order to achieve a balance between costs, the safety provided and the risks the


company is prepared to take. How extensive the safety should be is in turn determined by established performance measures. These buffer decisions are in general covered in the buffer dimensioning where the focus is on determining buffer size.

3 Service Performance Management and Buffers

Competitiveness rests on the capability of rendering service to the customers. A major challenge for competitive service performance is to reduce variations and to absorb variations through appropriate buffers. VM and BM can therefore be regarded as fundamental parts of service performance management, with the aim to meet customer needs by high service performance. In Fig. 1, it is illustrated that the subprocesses of service performance management are influenced by three main aspects: input data, decision maker and process logic. Buffer dimensioning, as a part of BM, is the focus of attention in this research, and how the main aspects can influence the decision-making process of buffer dimensioning is emphasized below.

Fig. 1. Service performance management process

3.1 Input Data

Buffer dimensioning varies depending on the amount and type of available data, methods and system support. Valuable input data are for example the remaining variations after application of VM. In this research the input data is differentiated in terms of quantitative or qualitative data. Input data that are measured and included in the dimensioning process are regarded as quantitative, while parts that are hard to measure or simply not measured, but still considered in the dimensioning process, are referred to as qualitative input data. The amount and type of input data may be used in dimensioning but can also affect the methods used, and two main categories of methods can be identified. The first category is here referred to as assessment methods, which relates to more or less intuitive and experience-based ways to determine buffer size without formally performed calculations. It can be regarded as informal methods for decision making, e.g. deciding that a buffer should be of a certain size based on what has been required previously (by experience), what feels appropriate (intuition or gut feeling) or as a proportion of demand or other appropriate measures. The last example implies that


simple calculations are done, but as the proportion variable is to a large extent estimated, this method is considered an assessment method. The second category is referred to as computational methods and regards more formal ways to calculate the buffer size by considering statistical distributions of prevailing variations. It can for example be to calculate the buffer size based on a desired service level or as a trade-off between carrying costs and shortage costs. These types of methods have been extensively researched for material buffers but are not as established for capacity and lead-time buffers.
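The computational category can be illustrated with a minimal sketch of two textbook methods for a material buffer: a service-level-based safety stock (SS = z·σ_d·√L, assuming i.i.d. normally distributed demand per period and a fixed lead time) and a newsvendor-style critical ratio trading off shortage against carrying costs. Both are standard formulas used for illustration, not methods specific to this paper.

```python
import math
from statistics import NormalDist

def safety_stock(service_level, demand_std, lead_time):
    """Service-level method: SS = z * sigma_d * sqrt(L), assuming i.i.d.
    normally distributed demand per period and a fixed lead time L (periods)."""
    z = NormalDist().inv_cdf(service_level)  # z-value for the desired cycle-service level
    return z * demand_std * math.sqrt(lead_time)

def critical_ratio(shortage_cost, carrying_cost):
    """Newsvendor-style trade-off: the cycle-service level that balances
    shortage cost against carrying cost."""
    return shortage_cost / (shortage_cost + carrying_cost)

# e.g. 95% cycle service, demand std of 20 units/period, lead time 4 periods
ss = safety_stock(0.95, 20.0, 4.0)        # ~65.8 units
target_level = critical_ratio(9.0, 1.0)   # 0.9, i.e. aim for 90% service
```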

3.2 Decision Maker

Two categories of buffer dimensioning methods were identified above, one referred to as assessment methods, relying mainly on experience, and the other referred to as computational methods, based on more formal calculations. For a decision maker, the decisions can also be either more based on experience or relying on the method applied. The latter is here referred to as a rational decision maker, implying that decisions are systematically made based on facts. A rational decision maker often employs a series of analytical steps to review input data and possible outcomes before proceeding with a decision. If subjective judgement and assessment are involved, it is difficult to apply fully rational reasoning. For this matter, when a decision maker instead relies on experience in the decisions, it is here referred to as an intuitive decision maker, meaning that information processing can be regarded as non-sequential and based on previous experiences, emotions or implicit knowledge. This implies that the decision making is subject to influence by the decision maker's experiences beyond common sense, sometimes even based on an intrinsic feeling that can be referred to as gut feeling, instinct or inner sense. This type of decision maker can thereby receive input or ideas to subjectively use in the decision making without always knowing exactly where the input came from.

3.3 Process Logic

The discussion has so far focused on different aspects affecting the decisions and not the process of buffer dimensioning as such. Decision making regarding buffer dimensioning can be based on expected future requirements and feedback of the actual state of buffers. Two main process logics can therefore be identified: conventional and adaptive process logic. The former represents processes where control signals are processed based on controlled variables and actions are made when variables are outside control limits. An example could be to replenish a material buffer that is below its target level. An adaptive process logic, on the other hand, embraces an iterative approach and adjusts controllers to achieve or maintain a desired level of performance when variables are unknown, uncertain and/or change. This means that the buffer dimensioning is adapted based on changing conditions, either by updating parameters or changing the dimensioning method depending on prevailing circumstances.
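The contrast between the two logics can be sketched minimally: a conventional rule acts when a controlled variable crosses a limit, while an adaptive rule iteratively adjusts the buffer dimension from measured service performance. Both rules and the gain value are hypothetical illustrations, not prescriptions from this paper.

```python
def needs_replenishment(stock_level, target_level):
    """Conventional process logic: act when the controlled variable
    falls outside its control limit (here, below the target level)."""
    return stock_level < target_level

def adapt_buffer(size, achieved_service, target_service, gain=2.0):
    """Adaptive process logic: proportionally enlarge the buffer when the
    measured service level falls short of target, and shrink it when the
    target is exceeded. The gain of 2.0 is a hypothetical tuning choice."""
    return max(0.0, size * (1.0 + gain * (target_service - achieved_service)))

# service came in at 90% against a 95% target: enlarge the buffer by 10%
new_size = adapt_buffer(100.0, achieved_service=0.90, target_service=0.95)
```

Applied repeatedly, the adaptive rule re-dimensions the buffer as conditions drift, whereas the conventional rule only triggers replenishment against a fixed parameter.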

Decision-Making Process for Buffer Dimensioning in Manufacturing


4 Scenarios for Buffer Dimensioning

The main aspects identified above for buffer dimensioning are input data, decision maker and process logic. In combination, they influence and create different scenarios in the decision-making process for buffer dimensioning. An integrated perspective of the main aspects is presented, followed by a discussion of scenarios resulting from this integration.

4.1 Integrating the Buffer Dimensioning Aspects

The three aspects influencing buffer dimensioning are outlined above. Even though each aspect is interesting on its own, it is the integration of the three that defines the buffer dimensioning process and its main characteristics. The quality and type of input data and the type of buffer dimensioning method can influence how the method is utilized and how the dimensioning process is performed. A typical example of a fully rational decision-making process would be that a computational dimensioning method is used and the decision maker applies the calculated buffer size to the manufacturing system without adaptations. The other extreme would be that an assessment method based only on gut feeling is utilized, representing a genuinely intuitive situation. There might, however, also be situations where the decision maker has previous experience that a certain computational method does not provide sufficient performance, so that manual modifications are applied to the calculated value. The buffer dimensioning would then be partly rational and partly intuitive. However, if the required input data is not available, measured or observed for a computational method, or there is a lack of computational methods available for the buffer of interest, it is hardly possible to be fully rational. Finally, the decision maker may employ adaptive control, utilizing the input or adapting the dimensioning method based on obtained service performance. However, if no explicit adaptation is employed, the approach is referred to as conventional. This discussion has highlighted that input data can be quantitative or qualitative, the decision maker can be rational or intuitive, and the process logic can be adaptive or conventional. In total, the combination of these aspects provides three dimensions and creates eight different scenarios for buffer dimensioning, illustrated as a cube in Fig. 2.
The cuboids should not be perceived as mutually exclusive but rather as representing a continuum with different characteristics. In most cases there are no clear dividing lines between fully rational and fully intuitive; rather, one aspect is more prevalent than the other, and in total there are eight main categories. The buffer dimensioning within a company could therefore be classified within the cube based on these characteristics, perhaps with varying positions in the cube for different buffers. The current situation for buffer dimensioning, in relation to the individual decision maker's preferences or to the desired situation, can constitute a match or a mismatch.


L. Hedvall and J. Wikner

Fig. 2. An integrated perspective of buffer dimensioning aspects

4.2 Six Scenarios for Buffer Dimensioning

The cube in Fig. 2 highlights eight different cuboids, each representing a unique scenario for buffer dimensioning. However, a rational decision maker cannot rely on genuinely qualitative data, and hence only six cuboids represent viable buffer dimensioning strategies. Several of the identified relationships have been indicated in a research project where the buffer dimensioning process is, to a great extent, based on assessment methods due to a lack of decision support, especially for capacity buffers, for which established buffer dimensioning methods are lacking. For material buffers, which are well covered in the literature, the situation is slightly different, with a higher degree of calculation and less intuitive influence. One company has a fully rational approach for dimensioning safety stock. For this company, the buffer is dimensioned based on an advanced formula that only a few employees are fully knowledgeable of, which could explain why manual and more intuitive changes are not commonly employed.
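The eight scenarios, and the exclusion of the two non-viable ones, can be reproduced mechanically. The sketch below is illustrative only; the dimension labels follow the paper, everything else is invented.

```python
from itertools import product

# Illustrative enumeration of the buffer dimensioning scenarios (cf. Fig. 2).
input_data = ["quantitative", "qualitative"]
decision_maker = ["rational", "intuitive"]
process_logic = ["conventional", "adaptive"]

scenarios = list(product(input_data, decision_maker, process_logic))
print(len(scenarios))  # -> 8, the eight cuboids of the cube

# A rational decision maker cannot rely on genuinely qualitative data,
# so the two qualitative-rational cuboids are excluded as viable strategies.
viable = [(d, m, p) for (d, m, p) in scenarios
          if not (d == "qualitative" and m == "rational")]
print(len(viable))  # -> 6
```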

5 Conclusions

A general process for service performance management has been established to support the management of different types of variations, in combination with buffers of additional materials, capacity and lead time to absorb the variations. The main aspects (input data, decision maker and process logic) contribute to greater awareness of what constitutes and affects the buffer dimensioning process, thereby creating better conditions for making appropriate changes when there is a mismatch between the current and the desired situation. Further studies can refine the preconditions, challenges and opportunities for the different buffer dimensioning scenarios, outline what is needed to change to another dimensioning scenario, and investigate where different types of buffers tend to be located in terms of dimensioning scenarios, thereby highlighting where more research is needed from a theoretical and practical perspective.


Acknowledgement. The study has been performed within the project KOPability which is funded by the research environment SPARK at Jönköping University (through the Knowledge Foundation) and the six participating companies.


Postponement Revisited – A Typology for Displacement

Fredrik Tiedemann(&) and Joakim Wikner

Jönköping University, Jönköping, Sweden
{fredrik.tiedemann,joakim.wikner}@ju.se

Abstract. Since its introduction, postponement as a supply chain strategy has received a lot of attention in the operations management and the supply chain management literature. Nevertheless, there are still mixed answers about the meaning of postponement and, as such, about its operational benefits. For instance, while some scholars argue that postponement results in a shorter delivery lead time, others claim the contrary. To reconcile these apparently conflicting findings, the purpose of this study is to establish a typology that highlights the three key properties of displacement, which is a collective term for preponement and postponement. By breaking down postponement into the three dimensions of form, place, and time, as well as introducing its antithesis preponement, a typology for displacement is presented and illustrated using a well-known postponement case.

Keywords: Postponement · Preponement · Displacement · Flow thinking · Decoupling point

1 Introduction

Postponement, also known as delayed product differentiation [1–6], delayed differentiation [7], and late customization [4, 5], was already put into practice in the 1920s [8–10] but was initially introduced in the marketing literature by Alderson [11] as an approach to reducing or eliminating the risk and uncertainty costs associated with the differentiation of goods [e.g., 12–15]. Since then, many success stories owing to postponement have been reported in the literature [e.g., 2, 16]. However, there are still mixed answers about the meaning of postponement. Starting with Alderson [11], several works [e.g., 2, 6, 16–19] define postponement as an approach to reducing or eliminating the risk and the uncertainty associated with the differentiation of goods, where the differentiation processes in a supply chain should be delayed as long as possible. The second group of works [e.g., 20, 21] argues that the point of differentiation should be postponed as close as possible to the customer order entry (i.e., the customer order decoupling point [CODP]) but not necessarily at or downstream of the CODP. The third group [e.g., 9, 10, 12, 14, 15, 22–26] defines postponement as the delay of activities until customer orders are received, that is, at or downstream of the CODP. The mixed answers regarding the definition of postponement have obviously resulted in mixed answers about how postponement relates to operational performance and even why it is beneficial to pursue [27]. For instance, some authors argue that higher levels of postponement lead to longer delivery lead times [e.g., 12, 26], whereas others [e.g., 5, 28] claim the opposite. As such, what some researchers deem postponement might be considered preponement by others (preponement is a newer term, first mentioned by Blackburn et al. [3], and is here considered the antithesis of postponement). Thus, to reconcile these apparently conflicting findings, the purpose of this study is to establish a typology that highlights the three key properties of displacement. Note that the movement of a transformation activity within a flow structure is here referred to as ‘displacement’ and is thus used as a collective term for postponement and preponement. In the next section, the idea behind postponement and its relation to the CODP is discussed. Thereafter, a typology for displacement is developed using the three generic dimensions of postponement, that is, form, place, and time [see, e.g., 7, 11, 29]. The typology is then applied to one well-known postponement case. The paper ends with a discussion, conclusions, and further research.

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 204–211, 2019. https://doi.org/10.1007/978-3-030-29996-5_24

2 Defining Form and Place Postponement in Terms of the CODP

For many kinds of products, the individual customer demand is unique, especially when taking into consideration the basic use, special features, colors, sizes, and places of purchase [11]. However, products belonging to the same family usually share common components and/or processes, meaning that in their initial stages of production, these products are in a generic form and place [1, 4, 6]. It is not until specialized components are inserted and/or special processes are performed that the items are progressively differentiated into specific end-products [6, 30], at what is referred to as the point of differentiation [e.g., 1, 30, 31]. Each step taken to differentiate a product based on speculation involves a certain marketing risk [11], that is, risk and uncertainty costs tied to the differentiation of the good [23, 32]. The idea behind postponement is therefore to delay the points of differentiation, keeping products generic as long as possible. As such, despite the mixed answers about the definition of postponement, the general idea behind it is to reduce the risk of performing activities based on speculation (forecast), especially the activities related to variants and customization. Postponing such activities closer to, or under, a commitment from a customer order reduces the risk and uncertainty costs tied to the differentiation of goods [23, 32]. The concept of displacement (i.e., postponement and preponement) is thus tightly related to the flow-thinking ontology [see 33], specifically the flow driver, that is, the CODP, which is the point that separates the forecast-driven part of the flow from the customer-order-driven part of the flow [34]. The position of the CODP then also coincides with the upstream part of the delivery lead time, which is the customer's requested delivery lead time [34].
It is thus logical that many definitions of postponement use the concept of a customer order [e.g., 9, 10, 12, 14, 15, 20–26]. Pagh and Cooper [12] even state that the converse concept of postponement is speculation, that is, involving the activities carried out upstream of the CODP [34]. Forza et al. [27] use the CODP as a point of departure for creating their form postponement typology, which offers a coherent picture of form postponement and its effects on operational performance. Forza et al. [27] also mention that if the point of differentiation were moved upstream, rather
than downstream, the logical opposite of form postponement would be achieved, called form preponement in this paper. Nevertheless, by defining form postponement or place postponement based on the customer order (i.e., the CODP), postponement can be achieved by simply extending the delivery lead time, that is, moving the CODP upstream in the flow. As such, several parts of the structure are added later and can therefore be considered postponed in relation to the CODP. This modification is illustrated in the time-phased bill of materials (BOM) in Fig. 1, where the left part of the figure is the state before the displacement and the right part is the displaced state (i.e., the postponed state). Before the postponement, the CODP is situated six time-units from the delivery of the product, that is, the delivery lead time consists of six time-units. In the postponed state, the delivery lead time is extended to ten time-units. As such, the transformation activities Q, U, V, and W are postponed in relation to customer requirements, where U, V, and W are even carried out after the receipt of a customer order. However, the word postpone means to put off to a later time, where pre means earlier than, prior to, or before. Postponement can thus be regarded as repositioning to a later point in time, with preponement being the opposite, repositioning to an earlier point in time. Both postponement and preponement are thus related to changes in an existing state, that is, repositioning an already existing position. For the example of Forza et al. [27], this means that the activities of form transformation are carried out closer to or upon request from a customer order (i.e., customer-order-driven). However, the form is still achieved at the same point within the BOM and the same time units before delivery. In other words, it still has the same flow structure, and the time required to perform the transformation activities remains the same. Arguably, the form has not been postponed. 
Even so, postponement and preponement are arguably tightly related to the CODP, but it might not be feasible to define postponement and preponement based on it, at least not for the concepts of form and place postponement.

Fig. 1. Before versus after displacement, shifting the CODP upstream [based on 34]. (Time-phased BOMs with activities Q, U, V, W, X, Y, Z between source and sink (customer); delivery lead time D = 6 before and D = 10 after, on an accumulated lead time scale of 12, 10, 9, 8, 6, 5, 2, 0 time units.)
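The effect of the CODP shift in Fig. 1 can be sketched as a small classification. Only the two delivery lead times (six and ten time units) come from the text; the activity offsets used below are invented for illustration.

```python
# Illustrative sketch: which activities become customer-order-driven when the
# CODP is shifted upstream (delivery lead time extended from 6 to 10 units)?
# The offsets (time units before delivery) are invented for this example.
activities = {"Q": 11, "U": 8, "V": 5, "W": 2}

def order_driven(activities, delivery_lead_time):
    """An activity is customer-order-driven when it is carried out within the
    delivery lead time, i.e., at or downstream of the CODP."""
    return {a for a, offset in activities.items() if offset <= delivery_lead_time}

print(sorted(order_driven(activities, 6)))   # D = 6  -> ['V', 'W']
print(sorted(order_driven(activities, 10)))  # D = 10 -> ['U', 'V', 'W']
```

With the longer delivery lead time, more transformation activities fall inside the customer-order-driven part of the flow, mirroring the discussion of activities Q, U, V and W above.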

3 Creating a Typology for Displacement

Both postponement and preponement have their base in the time dimension, meaning that a time-phased flow structure is a suitable starting point. This also basically means that both terms are based on displacement in relation to a fixed point of reference, which is the flow sink (customer) in this case (see Fig. 1). Regarding form transformation, it is essentially about transforming raw materials into a finished product in
terms of physical aspects, such as size, color, shape, and function. As for place transformation, it involves relating identity not only to what (form) but also to where (place), which is the meaning of the term stock-keeping unit (SKU), covering both form and place. Place is therefore related to having something at one location (central) or at several locations (local). Another important property of transformation is time (when). One conventional way of combining form and place with time is through a time-phased BOM; for an example, see Fig. 1. In contrast to a traditional BOM, a time-phased BOM is usually illustrated horizontally to clearly illuminate the time dimension of the activities performed. Each segment of the structures in Fig. 1 represents different transformations in terms of form and place. The time phasing means that each transformation step is offset by the corresponding time for each segment, that is, the lead time. Displacement of form and place transformation can be achieved by either creating new flow structures or changing the time required to perform the transformation activities. From a form perspective, this means that through postponement, the form is achieved later in the flow, for instance by modularization, using standardized modules instead of building complete unique products from scratch. On the contrary, preponement could be used to create variants of a previously standard product. Postponement in terms of place transformation usually refers to the differentiation in location being performed at a later point in time, for instance, by keeping the products in a centralized warehouse instead of distributing the goods to local warehouses. This can be formulated as product proliferation in terms of the place property, done at a later point in time. Conversely, preponement means that product proliferation in terms of the place property is done at an earlier point in time, to ensure local availability, for instance.
The third dimension is then time. Displacement of time may seem to count time twice, but here, the term time refers to two different things. Displacement has traditionally referred to the change of the point in time when a transformation is carried out in relation to the requested delivery lead time [see, e.g., 27]. Displacement therefore conforms with the definition of the CODP [33]. To summarize, displacement and form, place, and time transformation can be combined in six different ways, as shown in Table 1. Basically, there are two types of changes in state. For form and place, the change is related to flow proliferation, with an ‘absolute’ displacement since it is made in time units. Time is then related to the flow driver and a relative displacement since it pertains to the delivery lead time. This also means that for form and place, the points of departure are the individual points of transformation in the flow, but for time, the point of departure is the CODP, which has its starting point in the delivery lead time.

Table 1. Combinations of displacement and form, place, and time properties.

- Form: preponement = a form transformation activity performed earlier in the flow structure; postponement = a form transformation activity performed later in the flow structure.
- Place: preponement = a place transformation activity performed earlier in the flow structure; postponement = a place transformation activity performed later in the flow structure.
- Time: preponement = CODP earlier in the flow structure; postponement = CODP later in the flow structure.
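Table 1 can be read as a simple two-way lookup. The following sketch is a hypothetical encoding of the table, not part of the original typology:

```python
# Hypothetical encoding of Table 1: a displacement is named by combining the
# transformation property (form, place, time) with the direction of movement.
def classify_displacement(prop, direction):
    if prop not in {"form", "place", "time"}:
        raise ValueError(f"unknown property: {prop}")
    if direction not in {"earlier", "later"}:
        raise ValueError(f"unknown direction: {direction}")
    kind = "preponement" if direction == "earlier" else "postponement"
    return f"{prop} {kind}"

print(classify_displacement("form", "later"))    # -> form postponement
print(classify_displacement("time", "earlier"))  # -> time preponement
```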


4 Applying the Typology for Displacement

The relevant literature provides a vast array of examples of postponement as used in practice. One of the more famous and disseminated examples is the case of Benetton [10, 35, 36]. The new typology for displacement is here applied to this example to illustrate how the transformation process is changed in terms of the three dimensions. Benetton's market strategy and product diversity are essentially based on colors [35]. In fact, the company is currently known as the United Colors of Benetton (UCB). In its initial manufacturing process, the company dyed its yarn in different colors before knitting it into finished garments [10]. According to the company, this process resulted in too many garments in colors that the customers did not want, whereas the garments that the customers wanted were sold out [10, 36]. UCB has since changed the order of its dyeing and knitting processes, using bleached yarn for knitting its garments and dyeing them only at a later stage, when UCB either has received an order or has a better idea of which colors are selling [10]. Figure 2 presents the UCB application of displacement. Note that the activities presented in the figure are based on the information provided by Yang and Burns [10]. However, the lead times and the time-phased BOMs are conceptually developed and merely used as illustrative examples when discussing the displacement in terms of the three dimensions. Note that each segment of the time-phased BOM represents the process to be performed, positioned as ending at the required point in time. In actual execution, different processes could be aggregated and performed simultaneously. For example, the two Qs at the very left of the ‘before BOM’ in Fig. 2 could be purchased at the same time despite having different requirement dates.

Fig. 2. Before versus after displacement carried out at UCB. (Activities: Q = purchase yarn; U = dye yarn/garments; V = finish yarn/garments; W and X = manufacture garment parts; Y = join parts; Z = pack and ship. Delivery lead time D before, D = Z+V+U after, on an accumulated lead time scale of 15, 14, 11, 10, 9, 8, 7, 6, 4, 2, 0 time units.)

By changing the order of knitting and dyeing, the point at which the garments are dyed is postponed, that is, form postponement. Obviously, the garments are dyed later but probably in the same factory as before and sold in the same retail stores as before. As such, there is no place postponement. However, it is not clear if the CODP has been shifted in time. According to Lee [36], in its before state, UCB used a make-to-stock strategy, manufacturing its finished products to stock, whereas in its after state, UCB now applies a build-to-order mode. As such, it could be argued that the delivery lead time has been extended (see Fig. 2), now also including the dyeing of the garments.


Thus, there is a time preponement. However, considering individual cases, in the new production process, some customers probably do not have to wait for garments that have been sold out. As such, it could be argued that the delivery reliability has increased, in line with the reported improvement in customer service [36].
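The UCB analysis can be mimicked by comparing an activity's position in the flow before and after the change. The two sequences below are an illustrative simplification based on the activities named in the text (Q = purchase yarn, U = dye, V = finish, W/X = make parts, Y = join, Z = pack and ship); the exact orderings are assumptions.

```python
# Illustrative before/after activity sequences at UCB (assumed orderings):
before = ["Q", "U", "V", "W", "X", "Y", "Z"]  # dye (U) early: dyed yarn is knitted
after  = ["Q", "W", "X", "Y", "U", "V", "Z"]  # dye (U) late: garments dyed to order

def displacement(activity, before, after):
    """Classify the movement of one activity between two flow structures."""
    b, a = before.index(activity), after.index(activity)
    if a > b:
        return "postponement"
    if a < b:
        return "preponement"
    return "no displacement"

print(displacement("U", before, after))  # -> postponement (form postponement at UCB)
print(displacement("Z", before, after))  # -> no displacement
```

Note that this captures only the form dimension; the time dimension (the CODP shift, here a time preponement) is judged against the delivery lead time, not the activity sequence.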

5 Discussion, Conclusions, and Further Research

The introduction states that there are mixed answers about the definition of postponement. In numerous scientific and managerial documents, postponement has been discussed as the delaying of activities, especially transformation activities associated with the differentiation of goods, which should be postponed as long as possible [2, 6, 16–19], closer to the CODP [20, 21], or even to or downstream of the CODP [9, 10, 12, 14, 15, 22–26]. Moreover, this way of perceiving postponement is somewhat one-dimensional; thus, some researchers have discussed postponement using one or more of the three dimensions, that is, form, place, and time [e.g., 8, 14, 25, 27, 29]. By using these three dimensions of postponement and including preponement, a more nuanced understanding of displacement is gained, creating the ability to compare different cases of postponement and their operational benefits. As shown in the illustrative postponement example of UCB, even though the company has achieved form postponement, there is no place displacement, and the delivery lead time has even been somewhat extended, resulting in time preponement. Furthermore, when creating the typology presented in this paper, the meanings of the words prepone and postpone are used as starting points. As such, the point of reference is changed from the CODP to the initial time at which a transformation activity is conducted in terms of form or place. This way, form postponement or form preponement cannot be achieved simply by repositioning the CODP upstream or downstream of the flow, respectively. However, the CODP is still used for time postponement and time preponement. As such, the form and the place dimensions are related to whether a transformation activity is conducted based on speculation or on a commitment from a customer.
In this way, the operational results of postponement and preponement can be taken into consideration, for instance, if an activity after the displacement can be performed based on a commitment from a customer, rather than on speculation. As such, the typology relates back to Alderson’s [11] novel view of postponement as illuminating the risks and the uncertainties of carrying out certain activities based on speculation. To conclude, the typology presented here uses the existing dimensions of postponement, that is, form, place, and time. However, form and place are not defined using the CODP, thereby excluding the view of merely repositioning the CODP as being the postponement of form and place. The typology also includes preponement, realizing that an activity can be preponed. As such, a typology for displacement (i.e., postponement and preponement) has been established. By utilizing flow thinking [33] and the CODP, a better understanding of the operational effects of a preponement or a postponement decision is also obtained. The established typology for displacement thus complements the literature on postponement and offers a more nuanced understanding of it. In turn, the typology can help managers make more nuanced displacement
decisions by more clearly distinguishing between flow driver and flow differentiation in terms of form and place. However, further research could include specific studies on customization and variety in combination with preponement and postponement. Here, flow thinking, specifically the customer adaptation decoupling point and the system lead time [see, e.g., 33, 34], could assist in providing a better understanding of the operational implications of preponing or postponing a customization activity upstream or downstream of the flow, as well as whether a delivery-unique variant can be offered.

Acknowledgement. This research has been conducted under the KOPtimera project, funded by the Swedish Knowledge Foundation and Jönköping University.

References

1. Aviv, Y., Federgruen, A.: The benefits of design for postponement. In: Tayur, S., Ganeshan, R., Magazine, M. (eds.) Quantitative Models for Supply Chain Management. ISOR, vol. 17, pp. 553–584. Springer, Boston (1999). https://doi.org/10.1007/978-1-4615-4949-9_18
2. Aviv, Y., Federgruen, A.: Design for postponement: a comprehensive characterization of its benefits under unknown demand distributions. Oper. Res. 49(4), 578–598 (2001)
3. Blackburn, J.D., Guide Jr., V.D.R., Souza, G.C., Van Wassenhove, L.N.: Reverse supply chains for commercial returns. Calif. Manag. Rev. 46(2), 6–22 (2004)
4. Garg, A., Lee, H.L.: Managing product variety: an operations perspective. In: Tayur, S., Ganeshan, R., Magazine, M. (eds.) Quantitative Models for Supply Chain Management. ISOR, vol. 17, pp. 467–490. Springer, Boston (1999). https://doi.org/10.1007/978-1-4615-4949-9_16
5. Swaminathan, J.M., Lee, H.L.: Design for postponement. In: de Kok, A.G., Graves, S.C. (eds.) Handbooks in Operations Research and Management Science, pp. 199–226. Elsevier, Amsterdam (2003)
6. Lee, H.L., Tang, C.S.: Modelling the costs and benefits of delayed product differentiation. Manag. Sci. 43(1), 40–53 (1997)
7. Christopher, M.: Logistics and Supply Chain Management: Strategies for Reducing Costs and Improving Services, 2nd edn. Financial Times/Pitman Publishing, London (1998)
8. Council of Logistics Management: World Class Logistics: The Challenge of Managing Continuous Change, 1st edn. Council of Logistics Management, Oak Brook, IL (1995)
9. Wang, H.-J., Ma, S.-H., Zhou, X.: Postponement in mass customization: a literature review. J. Donghua Univ. (Eng. Ed.) 20(4), 111–117 (2003)
10. Yang, B., Burns, N.: Implications of postponement for the supply chain. Int. J. Prod. Res. 41(9), 2075–2090 (2003)
11. Alderson, W.: Marketing efficiency and the principle of postponement. Cost Profit Outlook 3(4), 1–3 (1950)
12. Pagh, J.D., Cooper, M.C.: Supply chain postponement and speculation strategies: how to choose the right strategy. J. Bus. Logist. 19(2), 13–33 (1998)
13. Yang, B., Yang, Y.: Postponement in supply chain risk management: a complexity perspective. Int. J. Prod. Res. 48(7), 1901–1912 (2010)
14. Van Hoek, R.I.: The rediscovery of postponement: a literature review and directions for research. J. Oper. Manag. 19(2), 161–184 (2001)
15. Van Hoek, R.I.: The thesis of leagility revisited. Int. J. Agil. Manag. Syst. 2(3), 196–201 (2000)
16. Li, J., Cheng, E.T.C., Wang, S.: Analysis of postponement strategy for perishable items by EOQ-based models. Int. J. Prod. Econ. 107(1), 31–38 (2007)
17. Yang, B., Burns, N.D., Backhouse, C.J.: Management of uncertainty through postponement. Int. J. Prod. Res. 42(6), 1049–1064 (2004)
18. Swaminathan, J.M., Tayur, S.R.: Managing broader product lines through delayed differentiation using vanilla boxes. Manag. Sci. 44(12, Part 2), S161–S172 (1998)
19. García-Dastugue, S.J., Lambert, D.M.: Interorganizational time-based postponement in the supply chain. J. Bus. Logist. 28(1), 57–81 (2007)
20. Cooper, J.C.: Logistics strategies for global businesses. Int. J. Phys. Distrib. Logist. Manag. 23(4), 12–23 (1993)
21. Mikkola, J.H., Skjøtt-Larsen, T.: Supply-chain integration: implications for mass customization, modularization and postponement strategies. Prod. Plan. Control 15(4), 352–361 (2004)
22. Zinn, W., Bowersox, D.J.: Planning physical distribution with the principle of postponement. J. Bus. Logist. 9(2), 117–136 (1988)
23. Yang, B., Burns, N.D., Backhouse, C.J.: The application of postponement in industry. IEEE Trans. Eng. Manag. 52(2), 238–248 (2005)
24. Van Hoek, R.I.: Logistics and virtual integration: postponement, outsourcing and the flow of information. Int. J. Phys. Distrib. Logist. Manag. 28(7), 508–523 (1998)
25. Bowersox, D.J., Closs, D.J.: Logistical Management: The Integrated Supply Chain Process. International editions. McGraw-Hill, Singapore (1996)
26. Waller, M.A., Dabholkar, P.A., Gentry, J.J.: Postponement, product customization, and market-oriented supply chain management. J. Bus. Logist. 21(2), 133–160 (2000)
27. Forza, C., Salvador, F., Trentin, A.: Form postponement effects on operational performance: a typological theory. Int. J. Oper. Prod. Manag. 28(11), 1067–1094 (2008)
28. Cannas, V.G., Pero, M., Rossi, T., Gosling, J.: Integrate customer order decoupling point and mass customisation concepts: a literature review. In: Hankammer, S., Nielsen, K., Piller, F., Schuh, G., Wang, N. (eds.) Customization 4.0. SPBE, pp. 495–517. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77556-2_31
29. Aitken, J., Childerhouse, P., Christopher, M., Towill, D.: Designing and managing multiple pipelines. J. Bus. Logist. 26(2), 73–96 (2005)
30. Garg, A., Tang, C.S.: On postponement strategies for product families with multiple points of differentiation. IIE Trans. 29(8), 641–650 (1997)
31. Tang, C.S.: Perspectives in supply chain risk management. Int. J. Prod. Econ. 103(2), 451–488 (2006)
32. Bucklin, L.P.: Postponement, speculation and the structure of distribution channels. J. Mark. Res. 2(1), 26–31 (1965)
33. Wikner, J.: An ontology for flow thinking based on decoupling points – unravelling a control logic for lean thinking. Prod. Manuf. Res. 6(1), 433–469 (2018)
34. Wikner, J.: On decoupling points and decoupling zones. Prod. Manuf. Res. 2(1), 167–215 (2014)
35. Andries, B., Gelders, L.: Time-based manufacturing logistics. Logist. Inf. Manag. 8(3), 30–36 (1995)
36. Lee, H.L.: Postponement for mass customization: satisfying customer demands for tailor-made products. In: Gattorna, J. (ed.) Strategic Supply Chain Alignment: Best Practice in Supply Chain Management, pp. 77–91. Gower, Aldershot (1998)

Efficient Heuristic Solution Methodologies for Scheduling Batch Processor with Incompatible Job-Families, Non-identical Job-Sizes and Non-identical Job-Dimensions

M. Mathirajan¹ and M. Ramasubramanian²

¹ Department of Management Studies, Indian Institute of Science, Bangalore 560012, India
[email protected]
² Loyola Institute of Business Administration, Chennai 600034, India
[email protected]

Abstract. Efficient scheduling of a heat-treatment furnace (HTF), a batch processor (BP), is very important for meeting both throughput targets and the due dates committed to customers, since heat-treatment operations require very long processing times and account for a large part of the total casting processing time in the entire steel casting manufacturing process. In recent times, a good number of studies have been reported in the literature on scheduling BPs in many discrete parts manufacturing settings. However, the HTF scheduling problem has received only scant treatment; it has a unique job characteristic, non-identical job-dimensions, that differentiates it from most other BP problems reported in the literature. Thus, this study considers the scheduling of an HTF with close-to-real-life problem characteristics and proposes efficient heuristic solution methodologies.

Keywords: Heat-treatment furnace · Non-identical job-dimensions · Lower bound · Greedy heuristic algorithm · Genetic algorithm



1 Introduction

This study is motivated by steel-casting industries, particularly the scheduling of heat-treatment furnaces (HTF), a batch processor (BP). Heat treatment is one of the most important operations, as it determines the final properties that enable components to perform under demanding service conditions such as large mechanical loads, high temperatures and corrosive environments. The jobs (castings) in the work-in-process (WIP) inventory in front of an HTF vary widely in size and dimensions. Furthermore, jobs are primarily classified into several job families based on alloy type. These job families are incompatible, since the temperature requirements for low-alloy and high-alloy jobs differ even for similar heat-treatment operations. The job families are further classified into various sub-families based on the type of heat-treatment operations they require. These sub-families are also incompatible, as each requires a different combination of heat-treatment operations.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 212–222, 2019. https://doi.org/10.1007/978-3-030-29996-5_25


The widely varying job sizes, job dimensions and multiple incompatible job families, which differentiate the HTF from batch processing operations in many other industries, make scheduling the HTF complex. It is also important to note that the heat-treatment operation requires the lengthiest processing time (between 6 h and 48 h), taking up a large part of the total processing time. Because of this, heat treatment is a major bottleneck in the entire steel casting process. Considering its complexity and bottleneck nature, efficiently scheduling this operation to maximize throughput, and thereby enhance the productivity of the entire steel casting manufacturing process, is important to firms. Management's concern with increasing the throughput of the bottleneck machine, and thereby productivity, motivated us to adopt the scheduling objective of minimizing makespan, as this objective can serve as a surrogate measure for maximizing the throughput of the entire steel casting manufacturing process.

Accordingly, this study focuses on developing a few simple greedy heuristic algorithms and a meta-heuristic, a Genetic Algorithm, for the research problem of minimizing makespan (Cmax) on a batch processor (BP) with multiple incompatible job families (MIJF), where all jobs of the same family have identical processing times and jobs of different families cannot be processed together, non-identical job sizes (NIJS), non-identical job dimensions (NIJD), and non-agreeable release times and due dates (NARD).

The rest of this paper is organized as follows. Closely related literature is reviewed in Sect. 2. In Sect. 3, several greedy heuristic algorithms (GHA) and a meta-heuristic, a Genetic Algorithm (GA), are proposed to solve the problem. The computational experiments, including the lower bound procedure used as the benchmark, the experimental design and the performance measures, are described in Sect. 4. The performance of the proposed heuristic algorithms is analysed and presented in Sect. 5. Finally, conclusions and future research directions are discussed in Sect. 6.

2 Related Literature Review

Scheduling BPs in discrete parts manufacturing has been widely studied in the literature [10, 12, 13], due to its varied applications in real-world industry. This research is motivated by heat-treatment operations, performed in heat-treatment furnaces, in steel casting industries. One of the job (casting) characteristics, non-identical job dimensions, does not arise in most applications of BPs and uniquely differentiates this problem from almost all BP applications listed in the literature. Furthermore, the non-identical job size and non-identical job dimension characteristics have not been addressed together [1, 3, 4, 11]. [14, 15] considered the scheduling of an HTF with a single job family. Subsequently, [16] extended these earlier studies by considering the important additional real-life characteristic of multiple incompatible job families, proposed an MILP model and demonstrated its computational intractability. Due to the computational intractability of scheduling BPs with incompatible job families, researchers have recently started to propose meta-heuristic algorithms, in addition to simple greedy heuristic algorithms, for scheduling BPs [5, 6, 17]. In the same vein, this


study proposes (a) a lower bound procedure and (b) a few greedy heuristics and a meta-heuristic, a Genetic Algorithm, for obtaining efficient HTF schedules for the problem considered in this study.

3 Solution Methodologies

This section discusses the development of nine variants of a simple greedy heuristic algorithm (GHA) and a meta-heuristic, a Genetic Algorithm (GA), to address the research problem considered in this study. In the proposed GHA, we first sort the jobs by a criterion and then construct a set of batches by picking jobs from the sorted list (this process is called Sort by Criteria and Construct Batch). Finally, the constructed batches are sequenced. Accordingly, the step-by-step procedure of the proposed GHA is as follows:

3.1 Greedy Heuristic Algorithm (GHA)

Step 1: Group the jobs into families based on the family identification data. For each family, sort the list of jobs to be processed in the HTF using a specific sorting criterion.
Step 2: Select family F and set b = 1.
Step 3: From the selected job family, select a set of feasible jobs — jobs not yet assigned to a batch that satisfy the capacity restrictions on size and dimension and the release time constraint — sequentially from the sorted list, and construct batch b. Also compute EBAT(b, F) as follows:

EBAT(b, F) = maximum {release time of all jobs in batch b of family F}

Step 4: If the job list of the selected family F is not empty, form the next batch from the selected family, i.e., set b = b + 1.
Step 5: Repeat Steps 3 and 4 until the job list of the selected family F is empty; then go to Step 6.
Step 6: Set F = F + 1. If family F was the last family of the considered problem, go to Step 7; otherwise go to Step 2.
Step 7: Sort the set of batches formed across all families in non-decreasing order of EBAT(b, F). In this sorted order, compute the completion time C(b, F) of batch b of family F as follows:

C(b, F) = EBAT(b, F) + processing time of family F

Step 8: Calculate the makespan for the numerical problem using the following equation:

Cmax = max {C(b, F) : b = 1, 2, …, number of batches formed for family F; F = 1, 2, …, number of families grouped for the given problem}
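The batching and sequencing logic of Steps 1–8 can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the `Job` fields, the purely weight-based capacity check (a real HTF must also verify a feasible 3-D placement of the jobs), and the chaining of batch completions on the single processor are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    family: int
    size: float      # weight (kg)
    length: float
    width: float
    height: float
    release: float

def gha_makespan(jobs, proc_time, cap_size, cap_dims, sort_key):
    """Sort by criterion, construct batches family by family, then schedule
    batches in non-decreasing order of EBAT(b, F) on the single processor."""
    families = {}
    for j in jobs:                       # Step 1: group into families
        families.setdefault(j.family, []).append(j)
    batches = []                         # list of (EBAT, family)
    for f, fjobs in families.items():    # Steps 2-6: batch each family
        pending = sorted(fjobs, key=sort_key)
        while pending:
            batch, used, rest = [], 0.0, []
            for j in pending:
                # simplified capacity check: cumulative weight plus per-job
                # dimensions; the paper also enforces dimensional packing
                if (used + j.size <= cap_size and j.length <= cap_dims[0]
                        and j.width <= cap_dims[1] and j.height <= cap_dims[2]):
                    batch.append(j)
                    used += j.size
                else:
                    rest.append(j)
            if not batch:
                raise ValueError("a job does not fit the processor on its own")
            batches.append((max(j.release for j in batch), f))
            pending = rest
    batches.sort()                       # Step 7: minimum-to-maximum EBAT
    t = cmax = 0.0
    for ebat, f in batches:
        t = max(t, ebat) + proc_time[f]  # C(b, F), chained on machine availability
        cmax = max(cmax, t)
    return cmax                          # Step 8
```

The nine variants of Table 1 differ only in `sort_key`; for instance, sorting by width (largest first) corresponds to the SWB variant.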

The proposed GHA can be varied by using the nine different sorting criteria given in Table 1, yielding nine proposed variants of GHA for the scheduling of the HTF considered in this study to minimize makespan.

Table 1. Sorting criteria considered for the proposed variants of GHA

Code  Sorting criterion                                 Proposed variant of GHA
SL    Sort by Length and construct Batches              SLB
SW    Sort by Width and construct Batches               SWB
SH    Sort by Height and construct Batches              SHB
SV    Sort by Volume and construct Batches              SVB
SS    Sort by Size and construct Batches                SSB
SD    Sort by Due date and construct Batches            SDB
SVD   Sort by (Volume/Due date) and construct Batches   SVDB
SSD   Sort by (Size/Due date) and construct Batches     SSDB
SR    Sort by Release time and construct Batches        SRB

3.2 Genetic Algorithm (GA)

GA has been applied to scheduling BPs in the semiconductor industry [2, 8, 18]. However, to the best of our knowledge, there are no studies considering GA for the problem configuration defined in this study. Accordingly, this study adopts a GA with a random-key-based representation. Further, to fix the values of the GA parameters (number of generations, population size, crossover percentage, crossover probability, migration percentage and mutation percentage), which are problem specific, we conducted a computational experiment. Based on that experiment, the GA parameters are fixed as in Table 2.

Table 2. GA parameters and their values for the GA implementation

GA parameter            Value
Number of generations   200
Population size         25
Migration percentage    20%
Crossover percentage    50%
Mutation percentage     30%
Crossover probability   0.6


Considering the GA parameter values given in Table 2, the step-by-step procedure of the proposed GA for scheduling a BP with MIJF, NIJS and NIJD is as follows:

Step 1. Generate a population of permutation sequences of size 25; each sequence consists of the job indices 1 to n.
Step 2. Sort the first sequence by job length using criterion SL of Table 1, with the maximum-length job first and the minimum-length job last.
Step 3. Repeat Step 2 for the next eight sequences using the criteria SW, SH, SV, SS, SD, SVD, SSD and SR of Table 1.
Step 4. Generate random permutation sequences for the rest of the population.
Step 5. If the number of generations is less than or equal to 200, construct and schedule batches using the GHA of Sect. 3.1 for each sequence; otherwise go to Step 16.
Step 6. Calculate the fitness value and the makespan of the schedule for each sequence.
Step 7. Sort the population of sequences by fitness value, lowest at the top.
Step 8. Encode the permutation sequences using random keys.
Step 9. Keep the top 20% of the population for migration to the next generation.
Step 10. Select two chromosomes randomly from the population and apply the crossover operation.
Step 11. Repeat Step 10 until the new offspring amount to 50% of the population.
Step 12. Replace the middle 50% of the population with the new offspring.
Step 13. Generate the remaining 30% of the chromosome population randomly and replace the lowest 30% of the population.
Step 14. Sort each chromosome by its random keys, generated through crossover and mutation, in descending order.
Step 15. Decode the chromosomes to obtain a new set of permutation sequences. Go to Step 5.
Step 16. Output the best makespan value from the last generation.
Each of the proposed heuristic algorithms is implemented in Turbo C++ on a system with 1 GB RAM and a 2.4 GHz processor running Windows XP.
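The GA loop of Steps 1–16 can be condensed into the following hypothetical sketch. `makespan_of` stands in for the GHA decoder of Sect. 3.1, seeding with the nine sorted sequences (Steps 2–3) is omitted for brevity, and the random-key crossover shown is one plausible reading of Steps 8–15 rather than the authors' exact operator.

```python
import random

def rank_keys(perm):
    # random-key encoding: each job's key is its (scaled) position in the sequence
    keys = [0.0] * len(perm)
    for pos, job in enumerate(perm):
        keys[job] = pos / len(perm)
    return keys

def decode(keys):
    # smallest key first recovers a valid permutation
    return sorted(range(len(keys)), key=lambda j: keys[j])

def random_key_ga(n_jobs, makespan_of, generations=200, pop_size=25,
                  migrate=0.2, cross=0.5, p_cross=0.6, seed=1):
    rng = random.Random(seed)
    def rand_perm():
        p = list(range(n_jobs))
        rng.shuffle(p)
        return p
    pop = [rand_perm() for _ in range(pop_size)]
    best = None
    for _ in range(generations):
        pop.sort(key=makespan_of)              # Steps 5-7: evaluate and sort
        if best is None or makespan_of(pop[0]) < makespan_of(best):
            best = pop[0][:]
        keep = pop[:int(migrate * pop_size)]   # Step 9: top 20% migrate
        children = []
        while len(children) < int(cross * pop_size):   # Steps 10-12: crossover
            a, b = rng.sample(pop, 2)
            keys = [ka if rng.random() < p_cross else kb
                    for ka, kb in zip(rank_keys(a), rank_keys(b))]
            children.append(decode(keys))
        # Step 13: the remainder (about 30%) is regenerated randomly
        fresh = [rand_perm() for _ in range(pop_size - len(keep) - len(children))]
        pop = keep + children + fresh
    return best, makespan_of(best)
```

As a toy check, a fitness function counting inversions drives the GA toward the sorted sequence, whose fitness is zero.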

4 Computational Experiments

The following proposed (a) benchmark procedure, (b) experimental design, and (c) performance measure are used for the empirical evaluation of the proposed heuristic algorithms.

4.1 Benchmark Procedure: Proposed Lower Bound (LB) Procedure

In the BP literature, researchers have proposed LB procedures for evaluating heuristic algorithms (e.g., [7]). However, these studies do not explicitly consider the NIJS and NIJD characteristics when developing an LB procedure for scheduling a BP. Further, LB procedures have been proposed for the three-dimensional bin packing problem [9], but these consider only the NIJD characteristic. To the best of our knowledge, no study considers both NIJD and NIJS to obtain an LB when scheduling a BP. This motivated us to develop an LB procedure for the research problem considered in this study. Since our research problem is an extension of the three-dimensional bin packing problem, the LB procedure proposed by [9] is used, with appropriate modification, to consider both the NIJD and NIJS characteristics and obtain an LB on the makespan. The proposed LB procedure is as follows:

Step 1: Index the jobs in non-decreasing order of the release times rj.
Step 2: Consider all the jobs to be available at time zero.
Step 3: Calculate Cmax(f, j, n) for jobs j, …, n belonging to the first job family f, taking the appropriate processing time for the family, using the following sub-steps:
  Step 3(a): Calculate an LB on makespan considering only NIJD, denoted Cmax^NIJD, using the LB procedure proposed in [9] with suitable modification.
  Step 3(b): Calculate an LB on makespan considering only NIJS, denoted Cmax^NIJS.
  Step 3(c): Calculate the LB on makespan considering both NIJD and NIJS as LB_Cmax = max {Cmax^NIJD, Cmax^NIJS}.
Step 4: Repeat Step 3 for the other job families in the list with their appropriate processing times.
Step 5: Add all Cmax(f, j, n) over the families and, finally, add the release time of the job at the head of the list.
Step 6: Store the value rj + Σf Cmax(f, j, n) in a separate list.
Step 7: Remove the job at the head of the list.
Step 8: Repeat Steps 2–7 until the list is empty.
Step 9: The LB for the research problem is the maximum of all stored values:

LB_Cmax = max over j = 1, 2, …, n of { rj + Σ (f = 1 to p) Cmax(f, j, n) }

where p is the number of job families. The procedure to obtain the LB on makespan for scheduling a BP with MIJF, NIJS and NIJD is implemented in Turbo C++.
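A sketch of the LB computation is given below, under simplifying assumptions: the dimension-based bound of [9] is replaced here by its continuous (total-volume) relaxation, and jobs are plain dictionaries. This illustrates the shape of the procedure, not the authors' code.

```python
import math

def lower_bound_makespan(jobs, proc_time, cap_size, cap_dims):
    cap_vol = cap_dims[0] * cap_dims[1] * cap_dims[2]
    jobs = sorted(jobs, key=lambda j: j["release"])        # Step 1
    best = 0.0
    for i in range(len(jobs)):                             # Steps 7-8: walk the list
        tail = jobs[i:]                                    # jobs j..n, taken at time zero
        total = 0.0
        for f in {j["family"] for j in tail}:              # Steps 3-4, per family
            fj = [j for j in tail if j["family"] == f]
            vol = sum(j["length"] * j["width"] * j["height"] for j in fj)
            siz = sum(j["size"] for j in fj)
            n_batches = max(math.ceil(vol / cap_vol),      # NIJD (volume) bound
                            math.ceil(siz / cap_size))     # NIJS (weight) bound
            total += n_batches * proc_time[f]
        best = max(best, tail[0]["release"] + total)       # Steps 5-6 and 9
    return best
```

Because both bounds are relaxations of the true packing constraints, the returned value never exceeds the optimal makespan.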

4.2 Experimental Design

Based on the problem configurations considered in this study, the required data are the following parameters: number of jobs (n), job size (si), job dimensions (length li, width wi, height hi), number of incompatible job families (f), release time (ri) and due date (di). Accordingly, an experimental design is developed (Table 3) to generate suitable test data. The range of each uniform distribution is based on observations from the user industries. The proposed experimental design for generating test problems is implemented in Turbo C++.

Table 3. A summary of the experimental design for the research problem

Problem parameter                      No. of levels  Values
No. of jobs (n)                        6              25, 50, 75, 100, 125, 150
No. of incompatible job families (f)   2              4, 6
Release time (ri)                      2              U[0, 84], U[0, 42]
Due date (di)                          1              ri + U[168, 240]
Job size (si)                          2              U[1, S/2], U[1, S/4]
Job width (wi)                         2              U[1, W], U[1, W/2]
Job height (hi)                        2              U[1, H], U[1, H/2]
Job length (li)                        2              U[1, L], U[1, L/2]
Number of configurations                              6 × 2 × 2 × 1 × 2 × 2 × 2 × 2 = 384
Number of instances per configuration                 10
Total number of instances                             3840
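One way to realize a single instance of the design in Table 3 is sketched below; the function name, field names and the per-level divisor arguments are assumptions layered on the uniform distributions and machine data given in the text.

```python
import random

def generate_instance(n, n_families, r_hi, s_div, w_div, h_div, l_div, rng,
                      S=2500, L=2500, W=1000, H=1250):
    jobs = []
    for _ in range(n):
        r = rng.uniform(0, r_hi)                 # ri ~ U[0, 84] or U[0, 42]
        jobs.append({
            "family": rng.randrange(n_families), # 4 or 6 incompatible families
            "release": r,
            "due": r + rng.uniform(168, 240),    # di = ri + U[168, 240]
            "size": rng.uniform(1, S / s_div),   # si ~ U[1, S/2] or U[1, S/4]
            "width": rng.uniform(1, W / w_div),
            "height": rng.uniform(1, H / h_div),
            "length": rng.uniform(1, L / l_div),
        })
    return jobs

# one of the 384 configurations: n = 25, f = 4, first release-time level
jobs = generate_instance(25, 4, 84, 2, 1, 1, 1, random.Random(0))
```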

In addition to the problem parameters of Table 3, we assume that there is only one BP, with BP size S = 2500 kg and BP dimensions L = 2500 mm, W = 1000 mm and H = 1250 mm. We also assume family processing times of (a) 13, 15, 12 and 10 when the number of job families is 4, and (b) 13, 15, 12, 10, 22 and 18 when the number of families is 6.

4.3 Measures of Effectiveness

The performance of the proposed heuristic algorithms is compared using the Average Relative Percentage Deviation (ARPD), which indicates the average performance of the proposed heuristic algorithms. The calculation of ARPD is as follows.


Let CH(i) be the makespan given by the proposed heuristic algorithm H on instance i, and let CLB(i) be the lower bound on Cmax given by the proposed LB procedure. Then the Relative Percentage Deviation (RPD) of heuristic H on instance i is computed as:

RPD_H(i) = [(C_H(i) − C_LB(i)) / C_LB(i)] × 100    (A)

The average RPD (ARPD) is calculated as:

ARPD_H(p) = [Σ (i = 1 to N) RPD_H(i)] / N    (B)

where ARPD_H(p) is the ARPD of the proposed heuristic algorithm H for problem configuration p over the N instances of configuration p.
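Equations (A) and (B) translate directly into code; a minimal sketch:

```python
def rpd(c_h, c_lb):
    # Relative Percentage Deviation, equation (A)
    return (c_h - c_lb) / c_lb * 100.0

def arpd(heuristic_makespans, lb_makespans):
    # Average RPD over the N instances of one configuration, equation (B)
    devs = [rpd(ch, clb) for ch, clb in zip(heuristic_makespans, lb_makespans)]
    return sum(devs) / len(devs)
```

For example, makespans of 110 and 120 against lower bounds of 100 give RPDs of 10 and 20, hence an ARPD of 15.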

5 Performance Evaluation of the Proposed Heuristic Algorithms

To assess the performance of the proposed heuristic algorithms against the proposed LB, the 3840 randomly generated problem instances are used. First, each of the proposed heuristic algorithms is run on each of the 3840 instances and the makespan values are recorded. The LB procedure is then run on the same instances to obtain the LB on makespan. With these values, the RPD is computed for each problem instance using equation (A). Then, per problem instance and per heuristic algorithm, the computed RPD_H(i) values are used to calculate the ARPD scores using equation (B). The computed ARPD scores are presented in Table 4.

From Table 4, we observe that (a) the proposed GA outperforms the nine proposed variants of GHA, and (b) among the variants of the greedy heuristic algorithm, SWB performs relatively best and SVDB is relatively second best.

Finally, to check the influence of the individual problem parameters on the performance of the proposed heuristic algorithms relative to the LB on makespan, the computed RPD scores are analysed statistically. A multi-factor (heuristic algorithm, f, n, r, s, w, h, l) ANOVA on the RPD score was intended; since the normality and equal-variance assumptions failed for our data, we used the Mann-Whitney non-parametric test for factors with two groups (f, r, s, h, w, l) and the Kruskal-Wallis non-parametric test for factors with more groups (heuristic algorithm, n). The results show that all problem parameters considered in this study influence the performance of the proposed heuristic algorithms.
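For the two-group comparisons, the Mann-Whitney U statistic can be computed on the RPD samples as below; this is a generic textbook implementation (with average ranks for ties), shown only to illustrate the test, not the authors' statistical code.

```python
def average_ranks(values):
    # 1-based ranks; tied values receive the average of their positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    # U statistic for two independent samples; compare against critical values
    r = average_ranks(list(a) + list(b))
    u1 = sum(r[:len(a)]) - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)
```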

Table 4. Performance of the proposed heuristic algorithms w.r.t. the ARPD score based on the LB

Problem configuration      No. of     Proposed variants of GHA                                           GA
                           instances  SLB   SWB   SHB   SVB   SSB   SDB   SVDB  SSDB  SRB
(n = 1, *, *, *, *, *, *)  640        18.9  17.5  18.4  17.5  19.5  22.9  17.8  20.6  26.3   4.4
(n = 2, *, *, *, *, *, *)  640        17.0  15.8  16.9  16.2  16.5  20.3  16.6  17.6  21.4   3.9
(n = 3, *, *, *, *, *, *)  640        24.9  20.8  22.3  21.1  28.4  29.3  21.5  29.2  29.5   9.2
(n = 4, *, *, *, *, *, *)  640        15.3  13.7  14.2  13.5  14.5  17.1  13.9  15.1  17.2   4.9
(n = 5, *, *, *, *, *, *)  640        12.9  11.4  12.6  11.6  12.4  14.2  11.8  12.6  14.6   4.3
(n = 6, *, *, *, *, *, *)  640        18.2  13.3  15.4  14.2  23.8  20.8  14.4  23.4  20.6   7.9
(*, f = 1, *, *, *, *, *)  1920       19.1  16.4  17.7  16.7  20.4  21.5  17.0  20.8  22.3   6.2
(*, f = 2, *, *, *, *, *)  1920       16.7  14.4  15.5  14.6  18.0  20.0  15.0  18.7  20.9   5.3
(*, *, r = 1, *, *, *, *)  1920       19.9  17.8  18.9  17.8  21.4  23.4  18.2  22.2  24.5   6.6
(*, *, r = 2, *, *, *, *)  1920       15.9  13.1  14.4  13.5  17.0  18.1  13.8  17.3  18.7   4.9
(*, *, *, s = 1, *, *, *)  1920       10.6   9.9  10.3   9.9   9.0  12.4  10.0   9.6  13.3   2.5
(*, *, *, s = 2, *, *, *)  1920       25.2  20.9  23.0  21.5  29.4  29.2  22.0  29.9  29.9   9.0
(*, *, *, *, h = 1, *, *)  1920       22.7  18.0  20.2  19.0  25.3  26.0  19.2  25.9  26.9   9.0
(*, *, *, *, h = 2, *, *)  1920       13.0  12.8  13.0  12.4  13.1  15.5  12.8  13.6  16.3   2.5
(*, *, *, *, *, w = 1, *)  1920       22.5  18.0  20.1  18.6  24.9  26.2  19.0  25.7  27.0   8.9
(*, *, *, *, *, w = 2, *)  1920       13.2  12.8  13.2  12.8  13.4  15.4  13.0  13.8  16.2   2.6
(*, *, *, *, *, *, l = 1)  1920       21.0  17.3  19.6  17.8  23.4  24.6  18.0  24.0  25.4   8.1
(*, *, *, *, *, *, l = 2)  1920       14.7  13.5  13.7  13.6  15.0  16.9  14.0  15.5  17.8   3.4


6 Conclusion

This study addresses new problem configurations, close to real-life situations, in the scheduling of an HTF. Due to the computational difficulty of obtaining optimal solutions for the research problem defined in this study, nine variants of a simple greedy heuristic algorithm and a meta-heuristic, a Genetic Algorithm, are proposed to obtain efficient schedules. To assess the efficiency of the proposed heuristic algorithms, an LB procedure is developed. Based on the series of computational experiments conducted on 3840 randomly generated problem instances (representing 384 problem configurations), we observe that (a) the problem parameters considered in this study influence the performance of the heuristic algorithms, (b) the proposed LB procedure is efficient, and (c) the proposed GA outperforms the other proposed heuristic algorithms. However, the computational time required by the GA increases as the problem size grows. If the decision maker wants a computationally advantageous heuristic among the proposed algorithms, the greedy heuristic variant sort by width and construct batch (SWB) is the relatively better algorithm for the research problem considered.

There are several interesting directions for future research. For example: (i) appropriate modification and/or extension of the LB procedure and the heuristic algorithms to address the availability of more than one non-identical BP; (ii) studying other completion-time-based objectives such as total completion time and total flow time; and (iii) considering due-date-based objectives such as minimizing the number of tardy jobs, maximum lateness, total tardiness and total weighted tardiness.

References

1. Baykasoglu, A., Ozsoydan, F.B.: Dynamic scheduling of parallel heat treatment furnaces: a case study at a manufacturing system. J. Manuf. Syst. 46, 152–162 (2018)
2. Cheragh, S.H., Vishwaram, V., Krishnan, K.K.: Scheduling a single batch processing machine with disagreeable ready times and due dates. Int. J. Ind. Eng.-Theor. Appl. Pract. 10(2), 175–187 (2003)
3. Gokhale, R., Mathirajan, M.: Minimizing total weighted tardiness on heterogeneous batch processors with incompatible job families. Int. J. Adv. Manuf. Technol. 70(9–12), 1563–1578 (2014)
4. Huang, J., Liu, J.J.: Hierarchical production planning and real-time control for parallel batch machines in a flow shop with incompatible jobs. Mathematical Problems in Engineering (2018). https://www.hindawi.com/journals/mpe/2018/7268578/abs/. Accessed 24 May 2019
5. Hulett, M., Damodaran, P., Amouie, M.: Scheduling non-identical parallel batch processing machines to minimize total weighted tardiness using particle swarm optimization. Comput. Ind. Eng. 113, 425–436 (2017)
6. Jia, Z., Wang, C., Leung, J.Y.: An ACO algorithm for makespan minimization in parallel batch machines with non-identical job sizes and incompatible job families. Appl. Soft Comput. 38, 395–404 (2016)
7. Koh, S.G., Koo, P.H., Kim, D.C., Hur, W.S.: Scheduling a single batch processing machine with arbitrary job sizes and incompatible job families. Int. J. Prod. Econ. 98(1), 81–96 (2005)
8. Malve, S., Uzsoy, R.: A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Comput. Oper. Res. 34(10), 3016–3028 (2007)
9. Martello, S., Pisinger, D., Vigo, D.: The three-dimensional bin packing problem. Oper. Res. 48(2), 256–267 (2000)
10. Mathirajan, M., Gokhale, R., Ramasubramaniam, M.: Modeling of scheduling batch processor in discrete parts manufacturing. In: Ramanathan, U., Ramanathan, R. (eds.) Supply Chain Strategies, Issues and Models. Springer, London (2014). https://doi.org/10.1007/978-1-4471-5352-8_7
11. Mathirajan, M., Sivakumar, A.I.: Minimizing total weighted tardiness on heterogeneous batch processing machines with incompatible job families. Int. J. Adv. Manuf. Technol. 28(9), 1038–1047 (2006)
12. Mathirajan, M., Sivakumar, A.I.: A literature review, classification and simple meta-analysis on scheduling of batch processors in semiconductor. Int. J. Adv. Manuf. Technol. 29(9–10), 990–1001 (2006)
13. Monch, L., Fowler, J.W., Mason, S.J., Dauzere-Peres, S.: A survey of problems, solution techniques, and future challenges in scheduling semiconductor manufacturing operations. J. Sched. 14(6), 583–599 (2011)
14. Ramasubramanian, M., Mathirajan, M., Ramachandran, V.: Minimizing makespan on a single heat-treatment furnace in steel casting industry. Int. J. Serv. Oper. Manag. 7, 112–142 (2010)
15. Ramasubramanian, M., Mathirajan, M.: Heuristic algorithms for scheduling heat-treatment furnace of steel-casting foundry manufacturing. Int. J. Adv. Oper. Manag. 3, 271–289 (2011)
16. Ramasubramaniam, M., Mathirajan, M.: A mathematical model for scheduling a batch processing machine with multiple incompatible job families, non-identical job dimensions, non-identical job sizes, non-agreeable release times and due dates. In: 2013 International Conference on Manufacturing, Optimization, Industrial and Material Engineering (MOIME 2013) (2013)
17. Su, L., Qi, Y., Jin, L.: Integrated batch planning optimization based on fuzzy genetic and constraint satisfaction for steel production. Int. J. Simul. Modell. 15(1), 133–143 (2016)
18. Wang, C.S., Uzsoy, R.: A genetic algorithm to minimize maximum lateness on a batch processing machine. Comput. Oper. Res. 29(12), 1621–1640 (2002)

Optimizing Workflow in Cell-Based Slaughtering and Cutting of Pigs

Johan Oppen
Møreforsking Molde, 6410 Molde, Norway
[email protected]

Abstract. In this paper we describe and solve a scheduling problem taken from the slaughterhouse industry. In an on-going research project, a new concept for slaughtering and cutting of pigs is being developed. The idea is to replace the traditional production line with a number of meat factory cells, where an operator slaughters and rough-cuts the pig, assisted by a robot. A solution approach aimed at minimizing non-productive operator time is presented together with computational results.

Keywords: Production planning · Scheduling · Optimization · Meat industry · Meat Factory Cell

1 Introduction

The meat industry, like most other industries, is constantly searching for decreased costs and increased revenues. Cost savings have mainly been achieved through centralization, with fewer, larger and more specialized and automated factories and increased transport distances. This has resulted in very large plants with a slaughter capacity of up to 1 400 pigs per hour in the Netherlands [2], and 5 000 smallstock (lambs/sheep) per day in Bordertown, Australia [1]. Also in Norway, many small slaughterhouses have been closed down and replaced by larger and more automated plants during the last decades. However, long distances and relatively few livestock in Norway makes it impossible to decrease the number and increase the size of slaughterhouses further. The transportation time for livestock from many farms to the closest slaughterhouse is already close to eight hours, which is the longest time allowed for transportation of live animals in Norway. This means that a more efficient meat production must be sought in other ways than by replacing relatively small plants by larger ones with increased capacity and more automation. Nortura [3], which is the largest meat company in Norway, is currently running a research project called Meat 2.0 to find out if a new way of organizing slaughtering and rough-cutting can give benefits in terms of increased flexibility, improved hygiene and better utilization of edible offals, so-called plus products, without losing efficiency compared to today’s assembly line-like production system. The basic idea is to replace the traditional production line, where each c IFIP International Federation for Information Processing 2019  Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 223–230, 2019. https://doi.org/10.1007/978-3-030-29996-5_26

224

J. Oppen

worker performs a small operation before the carcass continues to the next station, with a number of Meat Factory Cells, where one animal at the time is slaughtered and rough-cut by an operator assisted by a robot. In the project, parts of the Meat Factory Cell concept is tested on pigs at Nortura’s plant in Tønsberg, Norway. The focus in this paper is how the work in the cell is organized, so we will just give a brief overview of the whole process before looking at the operations in the cell and how these can be optimized. The pigs are first stunned, put to death, emptied for blood, skalded, dehaired and disinfected. Up to this point, the process is the same as in today’s production line system. Each pig is then transported to a cell, where an operator, assisted by a robot, slaughters and rough-cuts the pig and places the parts on a rack or trolley. When all the parts are on the rack, the rack is transported via meat control to sorting and further processing of the parts. The equipment in the cell is cleaned, and the next pig is placed in the cell for processing. The equipment and operations in the cell are still being developed and tested, but it is already clear that both an operator and a machine or robot will be working in the cell, so we have enough information to look at how the work in the cell should be optimized. Each operator will be working in more than one, most likely two, cells in parallel. This is to avoid unnecessary waiting when the robot works in the cell, as the operator must keep a certain distance to a working robot. In addition to choosing which cell to work in at a given time, the operator can also sometimes choose between different operations to carry out, this adds to the complexity of the planning problem. 
The time needed for the operator to complete all operations to slaughter and rough-cut each pig is assumed to be constant and known, so the goal is to minimize the amount of time the operator spends waiting for the robot and moving between cells. The problem has many similarities to scheduling problems, see, e.g., [5], as we are looking for the best ordering or sequence of operations or tasks. As in most scheduling problems in food processing, "industry-specific characteristics induce specific and complex scheduling problems" [4, p. 1]. This is also the case here, and we have not been able to find similar problems in the literature. The option for the operator to either stay in the current cell and wait for the robot to finish its current task, or move to another cell, seems to be unique. We have tried a few modeling and solution approaches from the operations research literature to handle the problem. The remainder of the paper is organized as follows. The problem is presented in more detail in Sect. 2. A heuristic solution method is described in Sect. 3. Computational testing is described in Sect. 4, followed by conclusions in Sect. 5.

2 Problem Description

When a pig arrives in the cell, it is placed on a table and kept in place by four grippers, each holding one leg. The operator cuts off the pig's head and the four legs; the parts are lifted onto a rack by the robot. The grippers "hand over" the legs to the robot.

Optimizing Workflow in Cell-Based Slaughtering and Cutting of Pigs

225

After the head and the legs have been cut off and removed, the pig is turned around on the table, now with the back up. The operator uses a saw to open the carcass by performing one cut on each side of the spine; the robot then lifts the neck/back over to the rack. While the robot is lifting, the operator cuts the back/neck free from the carcass. This means that in this so-called combined task, the operator is assisted by the robot, and the robot is active while the operator is working in the cell. During the other robot operations the operator has to keep a certain distance from the robot and can therefore not perform operations on the pig. Next, the operator removes the intestines (heart, lungs, liver, stomach, etc.), which are also placed on the rack. When all the inner organs are removed, the robot lifts the belly part over to the rack. The table is then cleaned before the next pig arrives. An overview of operator, robot and combined tasks is given in Table 1. The times needed to perform the different tasks, and to move between cells, are assumed to be known. The times given in Table 1 are preliminary estimates which will become more precise as more testing is performed in the project.

Table 1. Operations in the cell

Task  Description                   Predecessors  Est. time
Ot1   Place pig on table            –             60 s
Ot2   Cut off head                  Ot1           20 s
Ot3   Cut off legs                  Rt1           100 s
Ot4   Turn pig on table             Rt2           20 s
Ot5   Saw both sides of back        Ot4           40 s
Ot6   Remove intestines             Ct1           120 s
Ot7   Clean table                   Rt3           90 s
Om    Move between cells            –             15 s
Rt1   Lift head to rack             Ot2           20 s
Rt2   Lift legs to rack             Ot3           60 s
Rt3   Lift belly to rack            Ot6           20 s
Ct1   Cut loose and lift back/neck  Ot5           30 s
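The precedence structure in Table 1 forms a small directed acyclic graph, and any feasible single-cell work sequence is a topological order of it. As a quick illustration (a sketch; task IDs and times transcribed from Table 1, with the move task Om left out since it is not a cell task), the table can be encoded and one feasible order derived:

```python
# Tasks from Table 1: estimated duration (seconds) and immediate predecessor(s).
TASKS = {
    "Ot1": (60, []),      "Ot2": (20, ["Ot1"]), "Ot3": (100, ["Rt1"]),
    "Ot4": (20, ["Rt2"]), "Ot5": (40, ["Ot4"]), "Ot6": (120, ["Ct1"]),
    "Ot7": (90, ["Rt3"]), "Rt1": (20, ["Ot2"]), "Rt2": (60, ["Ot3"]),
    "Rt3": (20, ["Ot6"]), "Ct1": (30, ["Ot5"]),
}

def feasible_order(tasks):
    """Return a task sequence in which every task follows all its predecessors."""
    done, order = set(), []
    while len(order) < len(tasks):
        for name, (_, preds) in tasks.items():
            if name not in done and all(p in done for p in preds):
                done.add(name)
                order.append(name)
                break
        else:
            raise ValueError("cycle in the precedence graph")
    return order
```

For the estimates in Table 1 this yields the single chain Ot1 → Ot2 → Rt1 → Ot3 → Rt2 → Ot4 → Ot5 → Ct1 → Ot6 → Rt3 → Ot7, with 450 s of operator-only work and 100 s of robot-only work per pig; the robot-only intervals are exactly the slack that working in a second cell is meant to fill.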

Operators can work in multiple (normally two) cells at a time, and must choose what to do while the robot works. In general, the operator will wait in the cell if the robot task is short, and go to the neighboring cell and work there if the robot task is long. Depending on the duration of tasks and on when the operator moves between cells, it may happen that the robot is still working when the operator returns to a cell, forcing the operator to wait. The time estimates given in Table 1 will not lead to such situations, as the longest robot task (except the task where the operator and the robot work together) does not last longer than the shortest operator task. We nevertheless want to consider this as a possibility in case the time estimates change, and because time estimates may differ between animal types. If the project shows that the Meat Factory Cell concept should be used in production, both cattle and smallstock may also be slaughtered using this method.

3 Modeling and Solution Approaches

When studying a new and unknown optimization problem, we have often found it useful to start by writing down a mathematical model and trying to find optimal solutions using traditional algorithms. This was also the first approach for the problem presented here. The model was a mixed integer model with indicator variables α_{otpc,t′p′c′} equal to 1 if operator o performs task t on pig p in cell c immediately after performing task t′ on pig p′ in cell c′, and 0 otherwise. The model also has variables giving the times at which the operator starts the different tasks, and variables for moving and waiting times between operations. The constraints in the model ensure that all tasks are performed in the correct order, and that all time variables are computed correctly. The model is relatively easy to understand and straightforward to write down, but it turns out to be very hard to solve by standard solvers like Gurobi, and not even the smallest instances included in the computational experiments in Sect. 4 could be solved to optimality in reasonable time. We therefore do not present this model in detail.
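To make the structure concrete, the core of such a sequencing formulation can be sketched as follows (our illustrative notation, not the authors' exact model): s_k is the start time of task k, d_k its duration, m_{kl} the move time between the cells of k and l, M a large constant, and x_{kl} = 1 if the operator performs task l immediately after task k.

```latex
\begin{align*}
\min\;\; & C_{\max} \\
\text{s.t.}\;\; & s_l \ge s_k + d_k
    && \text{for every precedence relation } k \prec l,\\
& s_l \ge s_k + d_k + m_{kl} - M\,(1 - x_{kl})
    && \text{for operator tasks } k \ne l,\\
& C_{\max} \ge s_k + d_k
    && \text{for all tasks } k,
\end{align*}
```

together with assignment constraints forcing each operator task to have exactly one immediate predecessor and successor in the operator's tour. The big-M disjunctions are a typical reason such models solve poorly.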

3.1 A Construction Heuristic

Because solving a standard mathematical model turned out to be impossible in reasonable time for realistically sized instances, a heuristic approach seemed natural. We have developed a simple heuristic which finds good solutions in short time for problem instances of reasonable size. The solution method builds schedules for one operator working in two cells in parallel, and works as follows.
– An initially empty vector P of partial schedules is constructed. An empty schedule is created, the first operator task in cell 1 is added, cell 1 is set as the current cell (the cell where the operator is currently staying), and the schedule is added to P.
– While P is not empty, a partial schedule is picked from P and extended in all feasible ways, and the resulting schedules are put back into P. When all tasks in both cells have been added to a schedule, its completion time is compared to the best so far, and the schedule with the earliest completion time is kept.
– When a partial schedule is extended, an operator task, a robot task or a combined task is added to the schedule. The previous task in the current cell must have the new task as a successor, and all predecessors of the new task must already have been added for the current cell. If the added task is an operator task or a combined task, the current cell does not change.
– If the added task is a robot task, the current cell may change. If the robot task lasts no longer than the time it takes the operator to move to the other cell, the operator stays in the cell and the current cell does not change. If the robot task lasts longer than the operator move time, the times before the operator can start working in the current cell and in the other cell are computed and compared.
  • If the difference is at most equal to the operator move time, both changing and not changing the current cell are considered by making two copies of the schedule.
  • If the difference is larger, the alternative where the operator can start working first is chosen.
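To illustrate the kind of search this describes, the following is a minimal sketch (not the authors' C++ implementation): a depth-first enumeration of the operator's choices for one operator and two cells, on a small hypothetical instance in which each cell runs the same linear task chain. The chain, the durations and the move time are all invented for illustration, and no pruning of dominated choices is done.

```python
# A toy two-cell instance (all numbers invented): each cell processes one pig
# through the same linear chain. "op" and "comb" tasks occupy the operator;
# "rob" tasks run unattended once their predecessor is finished.
MOVE = 2  # operator move time between the two cells

CELLS = (
    (("op", 6), ("rob", 4), ("op", 10), ("comb", 3), ("op", 8)),
    (("op", 6), ("rob", 4), ("op", 10), ("comb", 3), ("op", 8)),
)

def best_makespan():
    """Enumerate all choices of which cell the operator serves next."""
    best = [float("inf")]

    def dfs(idx, ready, loc, t):
        # idx[c]: next task index in cell c; ready[c]: earliest time the
        # operator may work in cell c again (the robot is busy before that);
        # loc: the operator's current cell; t: the operator's current time.
        if all(idx[c] == len(CELLS[c]) for c in (0, 1)):
            best[0] = min(best[0], max(t, max(ready)))
            return
        for c in (0, 1):
            i = idx[c]
            if i == len(CELLS[c]):
                continue
            arrive = t + (0 if c == loc else MOVE)
            start = max(arrive, ready[c])   # forced waiting if the robot is busy
            end = start + CELLS[c][i][1]    # perform the operator/combined task
            i += 1
            r = end
            while i < len(CELLS[c]) and CELLS[c][i][0] == "rob":
                r += CELLS[c][i][1]         # unattended robot work starts at once
                i += 1
            dfs(idx[:c] + (i,) + idx[c + 1:],
                ready[:c] + (r,) + ready[c + 1:], c, end)

    dfs((0, 0), (0, 0), 0, 0)
    return best[0]
```

On this toy instance the best schedule has makespan 60: the operator fills each cell's robot gap by working in the other cell and moves three times in total, giving 54 time units of operator work plus 6 of moving, with no forced waiting.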

4 Computational Experiments

In order to find out how well the heuristic performs, both in terms of run time and solution quality, we have conducted computational experiments. In the following subsections, we describe the test instances used and the test results.

4.1 Test Instances

We have created 60 test instances, based on five different problem sizes. The smallest instances have three operator tasks, two robot tasks and one combined task, and are small enough to allow us to check that solutions are correct, and even optimal, by inspection. The largest instances have more operator and robot tasks than we consider in the problem presented here; in addition, there are up to five possible successor tasks. Instance 9 represents the real-world problem described in Sect. 2 with the tasks listed in Table 1. For each problem size, we create instances with two different sets of task durations: one where most operator tasks are longer than the longest robot task (instances with odd-numbered IDs), and one with shorter operator tasks and longer robot tasks (instances with even-numbered IDs). Each instance is then replicated with one and two pigs per cell, and with three different operator move times. An overview of the instances is given in Table 2, and a graphical view of instance 3 is given in Fig. 1.
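The replication scheme can be spelled out explicitly: ten base instances, each run with one or two pigs per cell and with the three move times appearing in Table 3 (1, 2 and 5), giving the 60 instances. A sketch of the ID convention:

```python
# Instance IDs as used in Table 3: "base-pigs-movetime", e.g. "9-2-5" is base
# instance 9 with two pigs per cell and operator move time 5.
BASE_INSTANCES = range(1, 11)  # the ten base instances of Table 2
PIGS_PER_CELL = (1, 2)
MOVE_TIMES = (1, 2, 5)

instance_ids = [f"{b}-{p}-{m}"
                for b in BASE_INSTANCES
                for p in PIGS_PER_CELL
                for m in MOVE_TIMES]
```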

4.2 Results

The heuristic outlined in Sect. 3 was implemented in C++, and the tests were run on an Intel Core i7-4600U CPU @ 2.10 GHz with 8 GB of RAM. Results are shown in Table 3. From the results in Table 3, it is evident that the real-world instances (instances 9 and 10) are easy to solve, even though the number of tasks is quite high. Instances 7 and 8 are especially hard to solve; these have both the largest number of tasks and the largest complexity in terms of many possible successors for some of the tasks. This means the number of possible paths through the network of tasks is quite high, and finding the best one therefore takes time. Even though the


Table 2. Problem instances. "Inst" refers to instance numbers; the columns under "Size" give the number of operator, robot and combined tasks, and the maximum number of successors for a task, respectively. The ranges for task durations are given in the columns under "Duration", and the last two columns give the total operator and robot time needed per pig.

Inst  Size                                Duration                     Time per pig
      Op tasks  R tasks  C tasks  Succ    Op tasks  R tasks  C tasks   Operator  Robot
1     3         2        1        2       6–16      2–10     12        46        20
2     3         2        1        2       2–10      6–20     12        34        40
3     5         2        1        3       6–16      2–10     14        64        30
4     5         2        1        3       2–10      6–20     14        38        42
5     6         3        1        3       6–16      2–10     14        74        36
6     6         3        1        3       2–10      6–20     14        46        52
7     9         6        1        5       6–16      2–10     14        112       46
8     9         6        1        5       2–10      6–20     14        90        74
9     7         3        1        1       4–24      4–12     6         90        20
10    7         3        1        1       2–16      8–16     6         58        32

Fig. 1. Instance 3 with five operator tasks (in blue), two robot tasks (in red), and one combined task (in green). Durations for each task are given, and the arrows indicate possible orderings of tasks. In this instance, Ot3 has the highest number of possible successors (three). (Color figure online)


Table 3. Test results. Explanation of column headers: "Instance" identifies the instances, where the three numbers give the instance number, the number of pigs processed in each cell and the operator move time, respectively. "Op time" is the total operator time needed to perform the operator and combined tasks. "Obj" is the best solution found in terms of completion time. "Move" and "Wait" are the numbers of time units the operator spends moving and waiting in the best found solution, and "Time" is the time (in seconds) the algorithm needs to solve the problem. A * in the Time column means the algorithm ran out of memory before it completed; the solution given is the best one found before this occurred.

Instance  Op time  Obj  Move  Wait  Time      Instance  Op time  Obj  Move  Wait  Time
1-1-1     92       97   5     0     0.00      2-1-1     68       81   5     8     0.00
1-1-2     92       102  6     4     0.00      2-1-2     68       84   10    6     0.00
1-1-5     92       111  15    4     0.00      2-1-5     68       93   25    0     0.00
1-2-1     184      193  9     0     0.00      2-2-1     136      155  9     10    0.00
1-2-2     184      202  10    8     0.00      2-2-2     136      160  18    6     0.00
1-2-5     184      217  25    8     0.00      2-2-5     136      181  45    0     0.00
3-1-1     128      133  5     0     0.00      4-1-1     76       91   5     10    0.00
3-1-2     128      138  10    0     0.00      4-1-2     76       94   10    8     0.00
3-1-5     128      151  15    8     0.00      4-1-5     76       103  25    2     0.00
3-2-1     256      265  9     0     0.01      4-2-1     152      171  9     10    0.01
3-2-2     256      274  18    0     0.01      4-2-2     152      178  18    8     0.01
3-2-5     256      297  25    16    0.01      4-2-5     152      199  45    2     0.01
5-1-1     148      155  7     0     0.01      6-1-1     92       103  7     4     0.01
5-1-2     148      162  14    0     0.02      6-1-2     92       108  14    2     0.02
5-1-5     148      181  25    8     0.01      6-1-5     92       127  35    0     0.02
5-2-1     296      309  13    0     8.91      6-2-1     184      201  13    4     8.97
5-2-2     296      322  26    0     14.63     6-2-2     184      212  26    2     18.20
5-2-5     296      357  45    16    6.18      6-2-5     184      249  65    0     25.80
7-1-1     224      237  13    0     36.10     8-1-1     180      203  11    12    21.56
7-1-2     224      250  18    8     30.10     8-1-2     180      212  26    6     49.25
7-1-5     224      275  35    16    16.21     8-1-5     180      245  65    0     108.47
7-2-1     448      475  23    4     *         8-2-1     360      407  21    26    *
7-2-2     448      498  34    16    *         8-2-2     360      424  46    18    *
7-2-5     448      545  65    32    *         8-2-5     360      487  115   12    *
9-1-1     192      199  7     0     0.01      10-1-1    128      137  7     2     0.00
9-1-2     192      206  14    0     0.01      10-1-2    128      142  14    0     0.01
9-1-5     192      223  15    16    0.01      10-1-5    128      163  35    0     0.01
9-2-1     384      397  13    0     0.02      10-2-1    256      273  13    4     0.01
9-2-2     384      410  26    0     0.02      10-2-2    256      282  26    0     0.01
9-2-5     384      441  25    32    0.02      10-2-5    256      321  65    0     0.01


problem studied here normally does not have that many options, we find it useful to test some instances with more successors, as there may be other applications of this problem with a different structure. It is not obvious that running instances with two pigs in each cell is needed, as we expect the same pattern of movements between cells and waiting for the robot to be repeated, but we wanted to see how much harder the instances become with two pigs. In addition, it may be beneficial to use different cutting patterns to get more flexibility in how the carcasses are utilized, so the ordering and duration of tasks may change from one pig to the next, and between cells. From Table 3, we also see that the time spent moving and waiting varies with different move times and different task durations. For the odd-numbered instances with long operator tasks and short robot tasks (the left half of Table 3), waiting occurs when moving takes more time than waiting for the robot task to complete, and this never happens with short move times. For the even-numbered instances with short operator tasks and long robot tasks (the right half of Table 3), waiting may occur even with short move times: when the operator comes back after working in the other cell, the robot may still be working and thus force the operator to wait.

5 Conclusions

We have described and solved a scheduling problem in cell-based slaughtering and rough-cutting of pigs, based on a new concept which is currently being tested in a research project. Using standard software to solve a traditional mathematical model is not feasible, so we have developed a simple heuristic to solve the problem. Computational testing shows that this method gives good results in very short time for realistically sized problem instances.

Acknowledgements. This work is funded by the Norwegian Research Council, grant 256266.

References

1. JBS Bordertown plant (2013). http://www.jbssa.com.au/ourfacilities/processingfacilities/Bordertown/default.aspx. Accessed 3 Jan 2019
2. F-line, Marel Meat (2019). https://marel.com/meat-processing/systems-and-equipment/slaughter-robotization/pig-slaughter-line-robotization/f-line/1763. Accessed 3 Jan 2019
3. Nortura (2019). http://www.nortura.no/. Accessed 3 Jan 2019
4. Akkerman, R., van Donk, D.P.: Analyzing scheduling in the food-processing industry: structure and tasks. Cogn. Technol. Work 11(3), 215–226 (2009)
5. Nahmias, S.: Production and Operations Analysis, 6th edn. McGraw-Hill, New York (2009)

Increasing the Regulability of Production Planning and Control Systems

Günther Schuh and Philipp Wetzchewald

Institute for Industrial Management (FIR), Campus-Boulevard 55, 52074 Aachen, Germany
[email protected]

Abstract. In an environment of constantly growing market dynamics and the associated increasing complexity of company structures and their production processes, manufacturing companies are forced to adapt to this environment. Information technology is thereby a key for manufacturing companies to regain sovereignty over their own production processes. Digital networking, across their own company as well as the overall supply chain, can only succeed if digital planning reflects reality as accurately as possible and if production control can react to deviations in real time. In essence, this leads to a development from process control towards process regulation. While long-term production and resource planning is usually mapped by Enterprise Resource Planning (ERP) systems, detailed planning, including short-term deviations and real-time data at the production level, is increasingly supported by Manufacturing Execution Systems (MES) at the production control level. However, in order to bring the underlying system concepts into line with Industry 4.0 efforts in a standardized manner, mutual functional integration within the framework of interoperable production planning and control is of crucial importance. For this purpose, studies were carried out, in particular into cause-effect relationships. The overarching research objective is thus a valid design model to increase the controllability of production planning and control (PPC) systems in the context of Industry 4.0.

Keywords: Production management · Production planning and control (PPC) · Enterprise Resource Planning · Manufacturing Execution · Management cybernetics · Industrie 4.0 Maturity Index

1 Background

The increasing dynamics and corresponding fluctuations in market demands constantly push customer requirements and preferences towards individualized products, which have to be manufactured within the shortest delivery times and nevertheless at the highest quality [1–4]. This inevitably leads to an increasing range of variants with shortened product life cycles and, at the same time, to rising demands on the performance, flexibility and speed of the production processes [2–5]. This often results in decreasing order sizes and thus smaller batch sizes, so that the complexity of planning and controlling production processes and their handling is constantly increasing [6].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 231–239, 2019. https://doi.org/10.1007/978-3-030-29996-5_27

232

G. Schuh and P. Wetzchewald

The central challenge in the area of production is the mastery of the resulting complexity and dynamics under simultaneously rising demands on quality, real-time evaluation and cost pressure. In the future, companies will have to be able to make well-founded decisions much more quickly based on valid data and information, which in the production environment is only realistic on the basis of real-time data processed in the corresponding PPC systems [2]. Companies are thus in a highly dynamic environment that forces them to continuously increase their own, highly individual productivity and the flexibility of their processes and information provision. With necessary regard to digital development, this is to be achieved in particular through the support of suitable operative application systems of PPC, in order to secure a long-term existence on the market [2]. For this purpose, a design model is being developed by the authors, which is described in this paper. Against the background of Industry 4.0 opportunities, this paper addresses the challenges of increasing the controllability of PPC systems.

2 Introduction

Today, manufacturing companies are faced with the great challenge of digitally networking their entire production processes. In its implementation, however, the vision of Industry 4.0 turns out to be blurred, non-transparent and diverse. The majority of these digitalization challenges arise from the constant demand for increasing process efficiency in the face of unforeseeable events, combined with a lack of assured, valid data quality for real-time analysis and evaluation. This effort is often countered by poor communication based on insufficient data availability without any system support. Processing of average values, with a simultaneous lack of responsiveness to information changes, inevitably leads to a wrong decision basis along the planning process. This has a negative impact on production performance, costs and time [7, 8]. A successful PPC process is characterized by high process efficiency and the real-time availability of information and data in the right place at the right time [9]. Manufacturing companies already generate and manage an enormous amount of raw data, usually in separate data silos. The associated data access, data cleansing, data aggregation, data filtering, data contextualization and data synchronization require great manual effort. With regard to cross-domain problems within PPC systems, missing context information, large amounts of data, low information density, limited access, as well as the industry knowledge necessary for understanding and using the data, are common problems and obstacles within the digital transformation [7, 9]. Already during data generation, a refinement must be carried out and an aggregated image (the digital shadow) must be stored, to which the different operational application systems of the PPC have access [9].
However, for target-oriented processing of available real-time information by supporting PPC systems, it is imperative not only to control production processes, but rather to regulate them by using an adequate regulation model based on the feedback control loop (see Fig. 1).
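The step from control to regulation can be made concrete with a minimal numeric sketch (purely illustrative, not from the paper): a proportional feedback loop that feeds the measured deviation between planned and actual output back into the next planning cycle, whereas open-loop control would keep the plan fixed regardless of the outcome.

```python
# Purely illustrative: a discrete proportional feedback loop in the spirit of
# Fig. 1. Open-loop control keeps the plan fixed; closed-loop regulation
# feeds the measured deviation back into planning.
def regulate(target, disturbances, gain=0.5):
    """Adjust the planned input each period from the measured deviation."""
    plan, outputs = target, []
    for d in disturbances:
        actual = plan + d                  # process outcome incl. disturbance
        outputs.append(actual)
        plan += gain * (target - actual)   # feed the deviation back
    return outputs
```

With a persistent disturbance of −10 per period and a target of 100, an open-loop plan would keep delivering 90, while the regulated output converges back towards the target.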


Fig. 1. Feedback control loop of systems with cybernetic approach [10]

Therefore, the model approach for developing an increased regulability of PPC processes through the support of suitable PPC systems, in the context of Industry 4.0 levers, is subsequently illustrated in this paper. The core objective is to analyse how business application systems can be used properly for digital development.

3 Methodological Approach and Preconditions

An essential precondition for the model is the availability of data and information in real time at the right place through the digital image, also called the "digital shadow". This has already been improved through the use of various technologies such as barcodes and RFID tags. However, this crucial prerequisite can only be fulfilled by the integration and use of Cyber-Physical Systems (CPS). Through Industry 4.0 and an integrated view of product, production equipment and production system in terms of model technology, architecture, communication technology and interaction, taking changing and changed processes into account, CPS migrate to Cyber-Physical Production Systems (CPPS) [11]. For this purpose, various scientific models have been analyzed for their applicability. The Viable System Model (VSM) by Beer [12] has already established itself in the literature for dealing with high company complexity and the dynamic environment of production companies. It defines, on the one hand, the necessary planning and regulation tasks through scientifically founded control mechanisms and, on the other hand, the required and sufficient information for the PPC processes (see Fig. 2). Beer defines viability as the continuous conservation of system identity against the backdrop of a continuously changing environment [12]. This definition applies to companies in the Industry 4.0 transition, so the VSM is used as a conceptual framework.


Fig. 2. Structure of the Viable System Model (VSM) [12]

In addition, it represents a promising production control approach for the design and functionality of complex systems. The focus of the VSM is therefore on the enterprise, which is divided into the following five subsystems of a viable system [12]:
• System 1: Production, the operational units (value-added activities); these units must be viable in themselves.
• System 2: Coordination (of the value-adding Systems 1), the place of self-organization of the Systems 1 among each other.
• System 3: Optimization (use of resources in the here and now), selective, supplementary procurement of information on the state of the operative systems.
• System 4: Future analysis and planning (resource planning for there and then), the world of options; it focuses on the future and the environment of the entire system.
• System 5: If Systems 3 and 4 are unable to agree on a common course, System 5 makes the final decision.
Based on the VSM, the management model of versatile production systems has been developed [13]. In addition to basic cybernetic principles of control and regulation, this extended model includes the consideration of the essential sub-areas of order processing and PPC, based on the "Aachen PPC Model" [5, 13]. This scientific model describes four reference views of tasks, functions, processes and process architecture as the basis of PPC. Following this model, the increase in the regulability of the operative PPC processes can be further explored through extended research into the tasks and functions of PPC. This analysis has been carried out with reference to scientific and technical literature as well as norms and standards of PPC systems.


4 Components of the Model

The constant development of cybernetic models for the control and regulation of companies, broken down into various business processes along order processing with their respective individual tasks and functions, reached its limits with regard to the degree of digital networking required for processing real-time data. Only Industry 4.0, in direct interaction with the establishment of CPS and CPPS in production (see also Sect. 3), creates a high level of transparency about the production process. This established the basis for the generation and aggregation of real-time data for the use and evaluation of operational application systems of the PPC, on which the regulation model is based [14]. This paper builds on existing approaches that extend the feedback control loops of ERP systems by integrating ME systems in order to increase regulability (see Fig. 3). However, the open question remains how to efficiently align these PPC systems with each other.

Fig. 3. Regulation model: Feedback control loop with ERP system and MES [15]

An ERP system is a pure input-output system and cannot, without further aids, establish functioning regulation in an enterprise. For example, there are no automated decision proposals for changing inputs, let alone autonomous system interventions in existing master data. Until now, there has been no approach to facilitate the integration of these benefits into ERP systems. By first merging the systems and their functional worlds, the basis for a functioning control loop can be laid in the PPC. By expanding the functional range of the PPC systems, it becomes possible to interact with real-time data from the production environment in addition to strategic planning and control, and to establish a viable feedback loop of the information and data flow. In this regard, Manufacturing Execution Systems, which meet these requirements in many aspects, are currently established on the software market. So far there has been no investigation into how companies benefit from the utilisation of MES with regard to a transition to Industry 4.0 via the feedback control loop [15].


In the scientific context, a research gap has thus been identified. There is still no analysis of which measures must be taken to firmly anchor this basic model of a feedback control loop within the PPC application software with regard to system tasks and functions. A comparison with the requirements of Industry 4.0 and an identification of the benefits, which would further a comprehensive establishment in the corporate world, are also missing. The main benefits of digital connectivity are to be illustrated by this study using scientific models. Finally, it must be examined in this context how dynamic market influences affect the adaptability of a feedback control loop according to the VSM approach. To address this deficit, the "Industrie 4.0 Maturity Index" was developed by acatech, the German Academy of Science and Engineering, to determine the utility of Industry 4.0. The Industrie 4.0 Maturity Index defines a multi-stage development path for the step-by-step further development of companies towards Industry 4.0: digitisation consists of computerisation and connectivity and forms the basis of the fourth industrial revolution. Building on this, the path comprises the development levels of visibility, transparency, predictability and adaptability (see Fig. 4) [16].

Fig. 4. Stages of the Industry 4.0 Development Path [16]

The model corresponds to a directional instruction for development, but does not indicate how this can be achieved, e.g. with regard to the use of PPC systems. Along this development path, however, it is now possible to analyse in principle how differently the application systems influence the degree of networking of the respective business processes, by analysing the different functions they use for mostly the same processes. Following this logic, a differentiated analysis of the ERP and ME system functions is carried out along the standardized business processes of order processing for contract manufacturers. Due to the targeted consideration of the described system interface, the focus of the analysis is especially on in-house production planning and control as a core task of PPC. This study is based on scientific literature, among others the VDI guideline 5600, and has been validated through expert discussions. As a representative example of the entire research, the task of detailed production planning, a fundamental task of PPC systems, is described in more detail (see Fig. 5).

Fig. 5. Maturity comparison of the task “detailed production planning” in ERP and ME systems (own representation)

The difference becomes apparent in how far ERP and ME systems can support this task and how this affects the maturity level of the Industrie 4.0 Maturity Index. ERP systems usually plan against unlimited capacities. Nevertheless, there are already solutions which are able to plan against limited capacities, thus enabling the visibility of machine utilization and other capacity-limiting variables. ERP systems, however, do not have functions to identify the causes of capacity bottlenecks. ME systems, on the other hand, plan against limited capacities as standard. The additional functional option of production simulation allows companies to identify interrelationships and causes on the one hand and to make forecasts about future events on the other [5, 17]. In this way, an ME system achieves the level of transparency in detailed production planning, and at various points even the level of predictability. The extension of the functional spectrum of the PPC by the ME system leads to more precise and real-time information flows within the feedback control loop. Deviations from planning as well as external influences on the feedback control loop can be processed immediately. This also meets the requirements of adaptability and the viability of companies according to the VSM regulatory framework.
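The contrast between planning against unlimited and limited capacities can be sketched in a few lines (a hypothetical single-machine example; all data invented):

```python
# Hypothetical single-machine example contrasting the two planning modes.
def infinite_loading(orders):
    """ERP-style: every order starts at its desired date; capacity is ignored."""
    return {name: desired for name, (desired, duration) in orders.items()}

def finite_loading(orders):
    """MES-style: orders queue on one machine in order of their desired start."""
    t, plan = 0, {}
    for name, (desired, duration) in sorted(orders.items(), key=lambda kv: kv[1][0]):
        plan[name] = max(t, desired)   # wait until the machine is free
        t = plan[name] + duration
    return plan

orders = {"A": (0, 4), "B": (0, 3), "C": (2, 5)}
```

Infinite loading schedules A, B and C at their desired dates 0, 0 and 2 and silently overloads the machine; finite loading reveals the realistic start dates 0, 4 and 7, making the bottleneck visible, which is the precondition for identifying its causes.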

5 Outlook and Further Research In this paper, existing scientific models in the context of production regulation to derive fields of action for the future initiative Industry 4.0 has been discussed. Therefore the VSM as framework and the Aachen PPC Model has been used to present the scientific foundations for the guidance of organizations and their business divisions. In combination with the Industrie 4.0 Maturity Index and its maturity levels, it has been demonstrated, how business processes are supported by functions from the ERP system in the context of Industry 4.0 maturity. Moreover it has been exemplarily shown, how


G. Schuh and P. Wetzchewald

an extension of the task and function spectrum, e.g. by ME system functions within the in-house PPC, has a direct influence on the development of the Industry 4.0 maturity level. As a further research activity, all business processes as well as their tasks and functions must now be examined and a conclusion drawn about the possible extension of the operational application software. This is the only way to realize well-founded and precise recommendations for the design of IT-supported business processes on the development path to Industry 4.0.

References

1. Bellmann, L., Crimmann, A.: Company dynamics and flexibilisation on the German labour market. In: Bornewasser, M. (ed.) Working Time - Temporary Work. Making Work More Flexible in Response to Globalisation, pp. 43–60. Springer, Wiesbaden (2013). https://doi.org/10.1007/978-3-8349-3739-1_2
2. Deuse, J., Weisner, K., Hengstebeck, A., Busch, F.: Design of production systems in the context of Industry 4.0. In: Botthof, A., Hartmann, E. (eds.) The Future of Work in Industry 4.0, pp. 99–109. Springer, Berlin (2015). https://doi.org/10.1007/978-3-662-45915-7_11
3. Kletti, J. (ed.): MIP - Manufacturing Integration Platform. Opening up New Horizons in Production IT. NetSkill Solutions GmbH, Cologne (2018)
4. Kletti, J.: Manufacturing Integration Platform. ZWF 112(10), 707–709 (2017). https://doi.org/10.3139/104.111801
5. Schuh, G., Stich, V. (eds.): Production Planning and Control 1. Basics of PPC, 4th edn. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-25423-9
6. Bundesministerium für Wirtschaft und Energie (BMWi): Monitoring-Report Wirtschaft DIGITAL. TNS Infratest Business Intelligence, Berlin. https://www.bmwi.de/Redaktion/DE/Publikationen/Digitale-Welt/monitoring-report-wirtschaft-digital-2015.pdf?__blob=publicationFile&v=12. Accessed 23 June 2018
7. Brecher, C.: Integrative Production Technology for High-Wage Countries. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20693-1
8. Schuh, G., et al.: High-resolution supply chain management: optimized processes based on self-optimizing control loops and real time data. Prod. Eng. Res. Dev. 5(4), 433–442 (2011). https://doi.org/10.1007/s11740-011-0320-3
9. Schuh, G. (ed.): Digital Connected Production. Werkzeugmaschinenlabor (WZL) der RWTH Aachen, Aachen (2017)
10. Schwaninger, M.: Systems theory. An introduction for executives, economists and social scientists. Discussion contribution. University of St. Gallen, Institute for Business Administration, St. Gallen. http://www.forschungsnetzwerk.at/downloadpub/systemtheorie-%20einfuehrung.pdf. Accessed 18 June 2019
11. Kagermann, H., Wahlster, W., Helbig, J. (eds.): Implementation Recommendations for the Future Project Industry 4.0. Final Report of the Working Group Industry 4.0. Acatech - Deutsche Akademie der Technikwissenschaften, Frankfurt a. M. (2013)
12. Beer, S.: Cybernetics and Management. Fischer, Frankfurt a. M. (1983)
13. Brosze, T.: Cybernetic Management of Versatile Production Systems. Apprimus, Aachen (2011)
14. Meißner, J., Hering, N., Hauptvogel, A., Franzkoch, B.: Cyberphysical production systems. In: Productivity Management, vol. 18, no. 1, pp. 21–24. GITO, Berlin (2013)

Increasing the Regulability of Production Planning and Control Systems


15. Kletti, J., Schumacher, J.: The Perfect Production. Manufacturing Excellence Through Short Interval Technology (SIT), 2nd edn. Springer, Berlin (2014). https://doi.org/10.1007/978-3-662-45441-1
16. Schuh, G., Anderl, R., Gausemeier, J., ten Hompel, M., Wahlster, W. (eds.): Industrie 4.0 Maturity Index. Managing the Digital Transformation of Businesses. Acatech - Deutsche Akademie der Technikwissenschaften. Herbert Utz Verlag (acatech STUDIE), München (2017)
17. Wiendahl, H.-H., Kluth, A., Kipp, R.: Market Mirror Business Software - MES. Production Control 2017/2018, 6th edn. Aachen (2017)

Possibilities and Benefits of Using Material Flow Information to Improve the Internal Hospital Supply Chain

Giuseppe Ismael Fragapane, Aili Biriita Bertnum, and Jan Ola Strandhagen

Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway
{giuseppe.fragapane,aili.b.bertnum,ola.strandhagen}@ntnu.no

Abstract. The concept of Supply Chain Management has become increasingly important in healthcare and notably in hospitals. Information along the supply chain is the key element for analysis and improvement purposes. The aim of this study is to analyze and visualize the material flow at a Norwegian hospital in order to identify possibilities and benefits for current and future planning and operation. The integration of IT enables combining the material and information flow. Statistical analyses of the material flow can support the planning and control of logistics activities, and its visualization can support long-term decisions, e.g. on how to distribute departments within the hospital.

Keywords: Material flow · Automated Guided Vehicle · Hospital

1 Introduction

The concept of Supply Chain Management (SCM) has become increasingly important in healthcare and notably in hospitals. The primary objective of SCM is “to integrate and manage the sourcing, flow, and control of materials using a total systems perspective across multiple functions and multiple tiers of suppliers” [1]. While literature concerning SCM on a strategic level is extensive, less academic literature focuses on tactical and operational challenges particular to the healthcare industry [2]. One of the key challenges and barriers to effective SCM in the healthcare industry is the lack of capital to build a sophisticated Information Technology (IT) infrastructure supporting supply chain operations [3].

Information along the supply chain is the key element for analysis and improvement purposes. In order to benchmark and measure performance improvements, it is important to define measures that track both material activities and related costs, and supply chain performance related to the core activity, patient care [2]. A recent literature review by Volland et al. [4] suggests that the availability of information across the supply chain can lead to more integrated supply chain concepts for hospitals. There is a need to understand how consistent IT systems and data standards across hospitals can improve the hospital supply chain.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 240–247, 2019. https://doi.org/10.1007/978-3-030-29996-5_28

Several studies have


shown that IT systems can improve purchasing decisions and reduce costs by lowering inventory levels [5]. However, using SCM approaches in hospitals has proven to be more complex. The hospital supply chain has to manage a variety of products and services, including medical consumables, pharmaceuticals, catering, laundry cleaning, waste management, home-care products, information technology, vehicle fleet management and general supplies [6].

Norwegian hospitals have started centralizing and automating both the external and especially the internal material flow [7, 8]. Consequently, the IT and automated transportation systems collect and align a large amount of information across the supply chain. A previous study on internal logistics in a Norwegian hospital visualized the material and information flow in order to analyze the current situation of the supply chain [8]. Several IT and sharing systems are used, and materials can be tracked and controlled throughout the internal supply chain. However, the IT and automated transportation systems create a huge amount of data, mainly for monitoring and controlling purposes. This data is often not further used or analyzed for improvement purposes and is typically known as ‘idle data’ [9]. Increased computing power has facilitated the use of data analytics to discover patterns and improvement possibilities in datasets where a human would not necessarily have found them.

This study aims at answering the following questions: How can visualization of material flow information improve the internal supply chain of a hospital? Further, how can the hospital supply chain benefit from an integrated IT system? The study is based on the data collection from the previous study at a Norwegian hospital, further explained in Sect. 3.

The paper is structured as follows: Sect. 2 provides a background on related literature within hospital logistics and SCM. Section 3 presents an overview of the case hospital supply chain, followed by a description of the methodology. In Sect. 4, the results of the statistical analysis are presented, which are further discussed in Sect. 5, ending with recommendations for further improvements for the case hospital and future research within this topic.

2 Theoretical Background

Over the past 20 years, there has been an increase in healthcare expenditures in all OECD countries [10], which has motivated efforts to improve the healthcare sector. A study by Poulin [11] states that half of the logistics costs can be eliminated with efficient logistics management. Furthermore, Volland et al. [4] state that increasing logistics efficiency might not directly influence patient care but may give medical staff more time for patient-related activities. It is important to recognize that logistics activities have an impact on the overall performance of the hospital [12].

Polater et al. [13] describe a hospital supply chain as “a complex system that requires the flow of products and services, in order to satisfy the needs of those who serve patients”, with the aim to “deliver products at the right time, for the purpose of fulfilling the requirements of those providing healthcare”. The hospital can be divided into an external and an internal supply chain. The internal supply chain has to establish its own logistics network that correlates with external suppliers in order to supply the


customer, in this case, the patient [14]. According to Volland et al. [4], the external supply chain has received the most attention, while the internal supply chain is the weak spot in the hospital supply chain. Granlund and Wiktorsson [12] argue that the performance of the internal supply chain highly affects the overall performance, and remark on the importance of continuous improvement in order to achieve increased competitiveness.

The focus of internal logistics is to provide supplies to the company’s core activities, which for hospitals is patient care. Logistics and material handling are described as an extensive and important part of the healthcare sector, but they are far from the sector’s core competencies [12]. A high variety of materials is often required in a hospital, which usually leads to complex logistics activities. In addition, different departments are responsible for various procurement activities, e.g. food services are responsible for the food supply, while pharmacy services are responsible for managing the procurement of pharmaceuticals, etc. [11, 15], further increasing the complexity of SCM.

Introducing automation to logistics activities in the healthcare sector is viewed as a measure for improving efficiency and productivity [16]. The main challenges are knowledge transfer and the adoption of technology, rather than the absence of available technology. Due to the complexity of the system, it is often difficult to establish central management and a consensus regarding a purpose of the system that fits all actors. It is crucial to distribute tasks and responsibilities appropriately, to demonstrate the ability to take a comprehensive view of the planning process, and to implement strategies and technologies on the basis of long-term benefits [12]. Several decisions and changes in hospitals have made the Automated Guided Vehicle (AGV) system a success and a standard for material transportation in Norwegian hospitals.

An AGV system can be defined as a driverless transportation system that is used for horizontal movement of materials, allowing for flexible material handling [17, 18]. Some of the advantages of implementing an AGV system are increased productivity and flexibility, cost efficiency, savings in labor costs, reduced emissions and energy consumption, and improved safety [19]. In general, success factors for the implementation of automation in hospitals include benchmarking, learning from other industries, involving hospital personnel, and identifying the internal logistics functions. Therefore, available data about hospital logistics is crucial. While there has been a strong interest in establishing patient data, material flow data are still rarely used and analyzed.

The Nordic countries are leading in eHealth applications. The number of general practitioners using electronic health records is among the highest in Europe [20]. While it is well known that IT in healthcare and social services has the potential to improve the welfare and efficiency of systems, the potential of IT technologies to improve hospital logistics is still scarcely explored. According to Johns [21], Information and Communication Technology (ICT) is an important tool for increasing competitiveness in the healthcare industry. Furthermore, ICT can help achieve total integration of healthcare systems.


3 Methodology

This study has used a mixed methods approach, combining qualitative and quantitative methods in order to strengthen the credibility of the results. More precisely, the methodology used is triangulation, one of the most common mixed methods designs. It is described as “the use and combination of different methods to study the same phenomenon” [22] and aims to “validate quantitative statistical findings with qualitative data results” [23]. The research design consists of a case study and a literature study. A literature study was conducted to establish the state of the art on the hospital supply chain, internal transportation in hospitals, and IT supporting hospitals. Observations and semi-structured interviews were performed to provide an understanding of the case hospital and its internal supply chain.

The case study was carried out in a large Norwegian hospital with a capacity of 800 beds. The hospital implemented and launched an automated material handling system in 2009. Today, the automated material handling system consists of 21 AGVs, transporting approximately 50–70 tons of goods every week between 114 pick-up and delivery stations in different buildings, at different levels and departments. Daily, 500–650 containers are transported by the AGV system. The AGV system is operated with a centralized structure. A transportation schedule for the AGV system has been defined based on simulations, the hospital layout, and battery management and maintenance. This schedule has been adapted to the changing demand for goods over the years. Orders are dispatched to the nearest AGV; this dispatching rule was chosen to reduce AGV idle transportation time. The hospital has integrated a radio frequency communication system, connecting the different buildings, doors and elevators with the AGV system. The AGVs can lift and move the wagons within the 4500-m guide-path that connects all departments. AGVs can operate continuously for approximately three hours and are then sent to be charged for one hour.

Several individual and group interviews have been conducted with employees from both the operating and planning departments of the hospital. The positions of the interviewed hospital employees are Technical System Administrator, Supply Planner, Operating Manager, and Operating Technician. Based on the data received from the case hospital, a statistical analysis of the material transportation was conducted. The data represent the internal material transportation from October 2017 until March 2018.
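The nearest-AGV dispatching rule described above can be sketched as follows. The coordinates, fleet data, and Euclidean distance metric are illustrative assumptions, not details taken from the case hospital's control system.

```python
import math

# Illustrative sketch of a nearest-AGV dispatching rule. Fleet positions and
# the distance metric are assumed for the example, not case-hospital data.

def dispatch(order_pos, agvs):
    """Assign the order to the closest idle AGV to reduce idle travel time."""
    idle = [a for a in agvs if a["idle"]]
    if not idle:
        return None  # in practice the order would be queued until an AGV frees up
    best = min(idle, key=lambda a: math.dist(a["pos"], order_pos))
    best["idle"] = False  # the chosen vehicle is now busy
    return best["id"]

fleet = [
    {"id": "AGV-1", "pos": (0, 0), "idle": True},
    {"id": "AGV-2", "pos": (40, 10), "idle": True},
    {"id": "AGV-3", "pos": (5, 5), "idle": False},  # busy, not eligible
]
print(dispatch((6, 4), fleet))  # → AGV-1 (closest idle vehicle)
```

A real controller would additionally weigh battery state and guide-path routing, but the core rule is this one-line minimization over idle vehicles.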

4 Analysis

Based on the data retrieved from the AGV system of the case hospital, information about the material transportation was analysed to investigate the variation of transportation times and locations within the hospital. Figure 1 shows the average waiting and delivery times during the main hospital operation hours. During these hours, the following material groups are transported and supplied within the hospital: consumer and medical goods from both external suppliers and the central warehouse, waste, laundry, sterile goods, and food.
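The per-hour aggregation behind such a waiting/delivery-time profile can be sketched with standard tooling. The log records below are invented placeholders, not data from the case hospital.

```python
from collections import defaultdict

# Minimal sketch of aggregating AGV transport logs into an hourly profile
# (as in a Fig. 1-style analysis). Records are invented placeholders:
# (hour of day, waiting minutes, delivery minutes).

log = [(7, 4.0, 11.0), (7, 6.0, 12.0), (12, 15.0, 11.5), (12, 9.0, 12.5)]

def hourly_profile(records):
    buckets = defaultdict(list)
    for hour, wait, deliver in records:
        buckets[hour].append((wait, deliver))
    return {
        h: (sum(w for w, _ in rs) / len(rs),   # mean waiting time
            sum(d for _, d in rs) / len(rs))   # mean delivery time
        for h, rs in buckets.items()
    }

profile = hourly_profile(log)
print(profile)  # e.g. hour 12 shows a longer average waiting time than hour 7
```

The same grouping applied to pick-up/delivery stations instead of hours yields the per-building volumes shown later in the analysis.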


Fig. 1. AGV transportation time during an average day of operation.

Fig. 2. Material groups transported and supplied within the case hospital during an average day

The pattern of transported goods is documented and clearly recognizable (see Fig. 2). Further, the material data provide insights about material supply during the different weekdays. The daily average amount of containers picked and delivered by the AGVs at the different buildings within the hospital can be seen in Fig. 3, and is visualized in the hospital layout in Fig. 4. Goods arrival and disposal is located in building 10, which clearly represents the highest volume of hospital material movement. Building 2, where the kitchen, central sterilization service, and the emergency department are located, follows. Laboratories and specialized departments of the hospital are located in the rest of the buildings, with a considerably lower volume of material movement.

Fig. 3. Average amount of containers picked and delivered at the different buildings within the case hospital per day.

Fig. 4. Amount of containers picked and delivered, visualized in the case hospital layout.


5 Discussion and Conclusion

The hospital supply chain has high requirements for quality of service, timely delivery of materials being one of them. A lack of timely delivery of materials can have negative effects for a patient seeking treatment and care in a hospital. Sterile goods used in the operation departments must be delivered on time to ensure availability for any planned or acute medical operation. Patients need to receive their food within a certain time period in order to be ready for further operations or treatments.

The statistical analysis shows the transportation time of materials by AGVs in the case hospital. In general, the total transportation time varies throughout the day. While the average delivery time is rather stable, the waiting times are quite long at certain periods of the day. This can decrease the delivery precision of materials, which can lead to negative consequences for the patients. It can be argued that the waiting time is influenced by both the transportation volume and the AGV capacity in the specific periods. Balancing the transportation volume or combining deliveries to reduce the transportation volume are possible approaches for achieving timely delivery. This is possible since the materials and material groups transported and supplied during the operation hours of the case hospital vary strongly.

Changing the timing of material supply can, however, have critical consequences in hospitals. According to Abdulsalam et al. [24], the healthcare supply chain is more fragile with respect to critical supplies compared to other industries. Some materials and products must always be available in case of an emergency. It can be critical or even fatal if a patient cannot instantly receive treatment due to missing supplies. Even though customization is an increasing trend in most industries, some supplies in the hospital supply chain are highly customized, one-of-a-kind products for one certain patient [25, 26]. Often, a large number of actors with different interests are involved in supplying materials or products in the hospital supply chain [24]. Thus, the hospital supply chain can be described as highly fragmented, which can prevent it from operating as a system [26]. In general, not all materials moved within hospitals can be captured by the ICT system. Centralizing the material flow with the AGV system in a hospital can help link the information along the supply chain and visualize the main material movements.

The statistical analysis and the visualization of the material flow within the case hospital provide new opportunities to improve the timely delivery of materials and the performance of the internal hospital supply chain. Information derived from the statistical analysis supports the identification of material groups that can rather be moved outside of the overloaded operation hours. The traceability and tracking of materials within the case hospital have been demonstrated in a previous study [8]. The integration of IT technologies in hospitals enables combining the material and information flow along the supply chain and thereby identifying critical products. These measures help to identify and decide whether products can be transported in a different period or not. Visualizing the material flow in hospitals can support decision-makers in scheduling the material flow on a tactical level, balancing the internal material flow to increase the timely delivery of materials. In the case hospital, the material traffic is strongly unevenly distributed.
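One way to operationalize the idea of moving non-critical material groups out of overloaded hours is sketched below. The group names, criticality flags, and the peak window are illustrative assumptions, not recommendations from the case hospital.

```python
# Hypothetical sketch of flagging transport jobs that could be shifted out of
# peak hours. Group names, time-criticality flags, and the peak window (10-14)
# are illustrative assumptions only.

PEAK_HOURS = range(10, 14)  # assumed overloaded window

jobs = [
    {"group": "sterile goods", "hour": 11, "time_critical": True},
    {"group": "waste", "hour": 12, "time_critical": False},
    {"group": "laundry", "hour": 13, "time_critical": False},
    {"group": "food", "hour": 12, "time_critical": True},
]

def shiftable(jobs):
    """Non-time-critical jobs scheduled inside the peak window are candidates
    for rescheduling; time-critical supplies stay where they are."""
    return [j["group"] for j in jobs
            if j["hour"] in PEAK_HOURS and not j["time_critical"]]

print(shiftable(jobs))  # → ['waste', 'laundry']
```

The criticality classification itself is the hard part in practice; as the discussion notes, it requires linking material flow data to the clinical importance of each product.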


The current policy in several countries is to expand and restructure larger hospitals while simultaneously closing smaller hospitals [27]. The insights from the material flow information can support decisions on where to place new departments or where to move current departments. Decisions on centralizing or decentralizing, insourcing or outsourcing departments or inventories, and changing the time period of internal material supply can also be based on a statistical analysis of the material flow. Future research should focus on how to link both material flow information and electronic health records to further improve decision-making and the performance of hospitals. Thus, electronic health records can be integrated and serve as input for forecasting and planning the material flow.

Acknowledgement. This research received funding from the strategic research area NTNU Health in 2018 at NTNU, Norwegian University of Science and Technology. The authors also gratefully acknowledge the case hospital that made it possible to carry out this study.

References

1. Monczka, R.M., et al.: Purchasing and Supply Chain Management. Cengage Learning, Boston (2015)
2. McKone-Sweet, K.E., Hamilton, P., Willis, S.B.: The ailing healthcare supply chain: a prescription for change. J. Supply Chain Manag. 41(1), 4–17 (2005)
3. Burns, L.R.: The Health Care Value Chain: Producers, Purchasers, and Providers. Jossey-Bass, San Francisco (2002)
4. Volland, J., et al.: Material logistics in hospitals: a literature review. Omega 69, 82–101 (2017)
5. Kumar, S., DeGroot, R.A., Choe, D.: Rx for smart hospital purchasing decisions: the impact of package design within US hospital supply chain. Int. J. Phys. Distrib. Logist. Manag. 38(8), 601–615 (2008)
6. Gattorna, J.: Strategic Supply Chain Alignment: Best Practice in Supply Chain Management. Gower Publishing Company, Boston (1998)
7. Ullrich, G.: Automated Guided Vehicle Systems. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-44814-4
8. Fragapane, G.I., et al.: Material distribution and transportation in a Norwegian hospital: a case study. In: IFAC INCOM 2018, 16th IFAC Symposium on Information Control Problems in Manufacturing, Bergamo, Italy (in press)
9. Schmidt, R., Möhring, M., Härting, R.-C., Reichstein, C., Neumaier, P., Jozinović, P.: Industry 4.0 - potentials for creating smart products: empirical research results. In: Abramowicz, W. (ed.) BIS 2015. LNBIP, vol. 208, pp. 16–27. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19027-3_2
10. OECD: Fiscal Sustainability of Health Systems (2015)
11. Poulin, É.: Benchmarking the hospital logistics process. CMA Manag. 77(1), 20 (2003)
12. Granlund, A., Wiktorsson, M.: Automation in internal logistics: strategic and operational challenges. J. Logist. Syst. Manag. 18, 538–558 (2014)
13. Polater, A., Bektas, C., Demirdogen, S.: An investigation of government and private hospitals’ supply chain management. In: 2014 International Conference on Advanced Logistics and Transport (ICALT) (2014)


14. Rivard-Royer, H., Landry, S., Beaulieu, M.: Hybrid stockless: a case study: lessons for health-care supply chain integration. Int. J. Oper. Prod. Manag. 22(4), 412–424 (2002)
15. Ozkil, A.G., et al.: Service robots for hospitals: a case study of transportation tasks in a hospital. In: 2009 IEEE International Conference on Automation and Logistics (2009)
16. Bačík, J., et al.: Pathfinder - development of automated guided vehicle for hospital logistics. IEEE Access 5, 26892–26900 (2017)
17. Vis, I.F.A.: Survey of research in the design and control of automated guided vehicle systems. Eur. J. Oper. Res. 170(3), 677–709 (2006)
18. Mehrabian, A., Tavakkoli-Moghaddam, R., Khalili-Damaghani, K.: Multi-objective routing and scheduling in flexible manufacturing systems under uncertainty. Iran. J. Fuzzy Syst. 14(2), 45–77 (2017)
19. Bechtsis, D., et al.: Sustainable supply chain management in the digitalisation era: the impact of Automated Guided Vehicles. J. Clean. Prod. 142, 3970–3984 (2017)
20. Bergstrøm, R., Heimly, V.: Information technology strategies for health and social care in Norway. Int. J. Circumpolar Health 63(4), 336–348 (2004)
21. Johns, P.M.: Integrating information systems and health care. Logist. Inf. Manag. 10(4), 140–145 (1997)
22. Chris, V., Nikos, T., Mark, F.: Case research in operations management. Int. J. Oper. Prod. Manag. 22(2), 195–219 (2002)
23. Hesse-Biber, S.N.: Mixed Method Research - Merging Theory with Practice. The Guilford Press, New York (2010)
24. Abdulsalam, Y., et al.: Health care matters: supply chains in and of the health sector. J. Bus. Logist. 36(4), 335–339 (2015)
25. Dobrzykowski, D., et al.: A structured analysis of operations and supply chain management research in healthcare (1982–2011). Int. J. Prod. Econ. 147, 514–530 (2014)
26. Rimpiläinen, T.I., Koivo, H.: Modeling and simulation of hospital material flows. In: Tenth International Conference on Computer Modeling and Simulation, UKSIM 2008. IEEE (2008)
27. Giancotti, M., Guglielmo, A., Mauro, M.: Efficiency and optimal size of hospitals: results of a systematic search. PLoS ONE 12(3), e0174533 (2017)

Medical Supplies to the Point-Of-Use in Hospitals

Giuseppe Ismael Fragapane1, Aili Biriita Bertnum1, Hans-Henrik Hvolby1,2, and Jan Ola Strandhagen1

1 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway
{giuseppe.fragapane,aili.b.bertnum,hans.h.hvolby,ola.strandhagen}@ntnu.no
2 Centre for Logistics, Department of Materials and Manufacturing Engineering, Aalborg University, Aalborg, Denmark

Abstract. In order to match financial sustainability with the delivery of high-quality healthcare, hospitals need to seek efficient ways of managing inventories with a large variety of medical supplies. Finding a balance in the trade-off between cost and the service levels that ensure on-time and high-quality patient treatment is a challenge. It is especially crucial in the hospital setting, where the consequence of a stock-out can be much more severe than lost revenue. The process of ensuring that the required supplies are available at the right time is a particularly important supporting role within hospital logistics. The scope of this study is inventory control at the point-of-use inventories in hospitals. It concerns the short-term planning and control area, which focuses on coping with actual demand and making the necessary changes in order to match plans efficiently. This study aims to model the inventory control process and discuss how technology can support high availability of medical supplies in hospitals.

Keywords: Inventory control · Point-of-use · Hospital

1 Introduction

New medical advances broaden the spectrum of possible treatments, and patients have higher expectations of the quality of treatment than ever [1, 2]. The trend of increasing demand in hospitals is accompanied by increasing shortages of doctors, nurses and supporting staff [3]. In addition, healthcare expenditures represent an increasing share of the economy in most OECD countries [2]. The cost of supplies and services is starting to outpace hospital budgets. In order to match financial sustainability with the delivery of high-quality healthcare, hospitals need to seek efficient ways of managing inventories with a large variety of medical supplies [4, 5].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 248–255, 2019. https://doi.org/10.1007/978-3-030-29996-5_29

Several challenges are present in the processes of inventory control in hospitals. Hospitals experience unpredictable demand and large variations in supply consumption due to a varying patient mix. Fluctuating demand is especially present in the emergency department, in which patients have no scheduled treatments. Hospitals need to keep an extensive number of stock keeping units (SKUs) to be prepared to treat a


variety of illnesses. The variety is further increased by individual physicians’ preferences regarding specific brands of equipment or consumables. Traditionally, there are at least two levels of inventory in the hospital supply chain: a central warehouse and point-of-use (POU) inventories. The POU inventories are generally characterized by a lack of space to store the different medical supplies needed at the point of care. The space restrictions, combined with fluctuating demand and a large number of different SKUs, lead to the need for efficient control of available inventories.

Though slightly neglected in the past, inventory management in hospitals has in recent years been recognized as one key lever towards realizing efficiency improvements with regards to cost and waste, while at the same time satisfying service levels [6]. Finding a balance in the trade-off between cost and the service levels that ensure on-time and high-quality patient treatment is a challenge. It is especially crucial in the hospital setting, where the consequence of a stock-out can be much more severe than lost revenue [7]. The process of ensuring that the required supplies are available at the right time is a particularly important supporting role of hospital logistics. The study concerns the short-term planning and control area, which focuses on coping with the actual demand and making the necessary changes in order to match plans efficiently [8]. The objective of this study is to investigate the challenges of inventory control in hospitals and discuss how technology can support high availability of medical supplies. The inventory control processes in hospitals will also be visualised, showing the interdependencies between the internal and external hospital supply chain.

The rest of this study is organized as follows. Section 2 embodies the literature on inventory control in general, specific inventory control policies and their application in an overall inventory control system. Section 3 describes the inventory control processes of an emergency department at a large hospital in Norway, which serves as input to the following section. Section 4 discusses the different review policies in hospitals and enablers with the use of technology for inventory control. The study is concluded with a summary of major insights and an outline of future research.
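The cost-service trade-off described above is often made concrete through a service-level-driven reorder point. A minimal sketch, assuming normally distributed daily demand; the demand figures and the 99% service level are invented for the example, not case-hospital parameters.

```python
from math import sqrt
from statistics import NormalDist

# Illustrative reorder-point calculation for a point-of-use inventory under a
# continuous-review policy. Demand parameters and the service level are
# assumptions for the example, not case-hospital data.

def reorder_point(mean_daily_demand, sd_daily_demand, lead_time_days, service_level):
    z = NormalDist().inv_cdf(service_level)              # safety factor
    safety_stock = z * sd_daily_demand * sqrt(lead_time_days)
    return mean_daily_demand * lead_time_days + safety_stock

# e.g. a consumable used 20 units/day (sd 5), 2-day replenishment lead time,
# 99% cycle service level (high, reflecting the severity of a stock-out):
r = reorder_point(20, 5, 2, 0.99)
print(round(r, 1))  # reorder when on-hand inventory falls to roughly 56 units
```

Raising the service level raises the safety stock, which is exactly the cost side of the trade-off: in hospitals the stock-out cost argues for high service levels, while scarce POU storage space pushes the other way.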

2 Theoretical Background

Inventory control is based on answering the questions of what to order, how much to order, and when to order [9]. It entails controlling the inventory both in the form of on-hand physical inventory and in terms of record counts and monetary worth [10]. Demand is hard to predict, and there is a lack of information on the consumption of supplies from the POU inventories [7]. Non-availability of supplies may result in the postponement of surgeries and can, in the worst case, be critical to a patient’s life. On the other hand, overstocking or hidden stocks lead to supply chain inefficiencies and higher inventory costs [11]. Demand for items in POU inventories can be characterized as independent. Demand for a product is independent when it is influenced by conditions outside the control of the organization, rather than being dependent on another product [10]. With independent demand, efforts must be made to match the inventory with demand based

250

G. I. Fragapane et al.

on historical demand, forecasts, and “best guesses”, while still being prepared to respond quickly when mismatches occur [8]. According to Muller [10], independent demand calls for a replenishment approach to inventory management, where stock is replenished as it is used, in order to have items on hand to fulfill demand. The decisions on how much to order and when to order are closely linked: how much to order depends on the frequency and timing of orders, just as when to order depends on the volumes ordered. To control inventories, a policy that is simple and efficient to use is important, and the policy needs to be combined with supporting identification technologies to decrease the response time. The choice of a suitable inventory (replenishment) policy is a widely discussed topic within hospital inventory management [6]. Traditionally, the choice of inventory policy concerns deciding the suitable inventory review cycle and reorder quantity [6]. The review cycle can be either periodic or continuous. Slack [8, p. 707] defines the periodic review as “an approach to making inventory decisions that define points in time for examining inventory levels and making decisions accordingly”. These points in time can be, for example, daily or weekly. Continuous review is defined as “an approach to managing inventory that makes inventory-related decisions when inventory reaches a particular level” [8, p. 702]. This review policy requires continuous tracking of inventory levels. The review cycle must be determined based on variability in demand and the costs of ordering, holding, and delivering supplies [12]. The periodic review seems to be most commonly used in hospitals today, although some argue that continuous review is superior to periodic review [7], due to the ability of continuous review to provide real-time updates on inventory levels through data-capturing technologies. Çakıcı et al. [13] argue that in cases where real-time information is made available without large additional cost, continuous review is superior to periodic review. For example, the use of RFID allows for a continuous review policy instead of the typically utilized periodic review [14]. The number of shipments is expected to be higher with a continuous review policy than with a periodic review policy, which makes continuous review most convenient when the cost per shipment is low. However, with a periodic review policy it is easier to combine different orders in the same shipment, which saves transportation costs. The traditional periodic review policies applied in hospitals are being replaced with continuous or hybrid (periodic-continuous) review policies enabled by modern point-of-use technology, such as automated dispensing cabinets and two-bin systems in combination with RFID [15]. Inventory control systems, also called replenishment systems or distribution systems, describe how inventory control policies are applied in practice. They define how inventories are intentionally organized and combined with enabling identification technologies in order to enable different inventory control policies. Several methods for controlling and distributing supplies from the central warehouse to the POU in hospitals have been applied and discussed in the literature [16]. The variation in methods and inventory policies has an impact on the degree of staff involvement: the replenishment responsibility is either centralized to the logistics department or decentralized to the staff at the POU locations. The range of methods applied at POU inventories varies from manual systems like the requisition-based

Medical Supplies to the Point-Of-Use in Hospitals

251

system or exchange carts, which are rarely used today, to commonly used systems such as Periodic Automatic Replacement (PAR) or two-bin systems enabled by barcode or RFID technology.
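To make the contrast between the two review cycles concrete, the following sketch simulates a single POU item under a periodic order-up-to policy and under a continuous reorder-point policy. All parameter values (demand range, order-up-to level of 50, reorder point of 20, a two-day lead time) are illustrative assumptions, not data from the case hospital or the cited studies.

```python
import random

def simulate(policy, days=365, order_up_to=50, reorder_point=20,
             review_period=7, lead_time=2, seed=42):
    """Simulate daily demand for one SKU under a periodic or continuous
    review policy. Returns (stockout_days, orders_placed). Illustrative only."""
    rng = random.Random(seed)
    on_hand = order_up_to
    pipeline = []            # outstanding orders as (arrival_day, quantity)
    stockouts = orders = 0
    for day in range(days):
        # Receive any replenishment due today.
        on_hand += sum(q for d, q in pipeline if d == day)
        pipeline = [(d, q) for d, q in pipeline if d != day]
        demand = rng.randint(0, 8)
        if demand > on_hand:
            stockouts += 1
        on_hand = max(0, on_hand - demand)
        # Inventory position = on hand + everything already on order.
        position = on_hand + sum(q for _, q in pipeline)
        if policy == "periodic" and day % review_period == 0 and position < order_up_to:
            pipeline.append((day + lead_time, order_up_to - position))
            orders += 1
        elif policy == "continuous" and position <= reorder_point:
            pipeline.append((day + lead_time, order_up_to - position))
            orders += 1
    return stockouts, orders
```

Running both variants illustrates the trade-off discussed above: continuous review reacts to the inventory level itself and therefore tends to place more, smaller orders, while periodic review batches decisions at fixed review points.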

3 Case Study

3.1 Methodology

The case study was carried out with a large Norwegian hospital that treats 60,000 inpatients (patients who stay overnight in the hospital during treatment) and 370,000 outpatients (patients who visit the hospital for treatment without staying overnight) yearly. The hospital has 800 beds, 8,000 employees, and an annual budget of 8.2 billion NOK [17]. The purpose of the case study was to get a holistic view of the logistics processes at a hospital in practice, and more specifically of the process of inventory control at the POU inventories in the emergency department, which is subject to fluctuating demand. The advantage of a single case study is the ability to analyse the phenomena in greater depth [18]. Empirical data was collected through semi-structured interviews, observations, and information retrieved from field documents. Multiple semi-structured interviews were conducted both on-site and through meetings. The purpose was to interview key personnel able to provide solid descriptions of the processes inside and outside the hospital. Interviews were conducted with a warehouse manager, a purchaser, and a logistics manager at the hospital, as well as a porter from the emergency & cardiothoracic centre. A semi-structured interview is a flexible interview technique consisting of a set of predefined questions while allowing for open-ended exploration and deviation from the predefined questions [19]. In preparation for the interviews, a brief interview guide was prepared, covering the purpose of the study, the topics to be addressed, and suggested questions. During the interviews, most of the topics were covered and additional information was collected. Based on notes made during the interviews, reports were formulated as documentation.

3.2 Materials Ordering Flow

The case hospital uses periodic review with an order-up-to level as its inventory control policy. The current inventory level is determined by roughly estimating the inventory levels of the different products. However, the input provided to the handheld computer is not a count of the current inventory level, but rather a direct input of the quantity to be replenished. The disturbances that can occur during the periodic measurement and ordering of inventory are a combination of human and technological errors. It is up to human accuracy to scan the right barcodes, give the correct input on order quantity, and make sure that every room, cabinet, and shelf is checked on each round. If the order is not sent according to the provided schedule, that is also a human error. Additionally, the handheld computers are quite old and the software operates slowly, which sometimes leads to technical errors such as hang-ups or failures to scan the barcode.
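For reference, the order quantity implied by a count-based order-up-to policy is simply the gap between the target level and the current inventory position; the porters at the case hospital instead key in the desired quantity directly. A minimal sketch (the function name and levels are illustrative):

```python
def replenishment_order(order_up_to_level, inventory_position):
    """Periodic review, order-up-to: order the gap to the target, never negative."""
    return max(0, order_up_to_level - inventory_position)

# Example: target level of 30 units with 12 estimated on hand -> order 18.
print(replenishment_order(30, 12))  # -> 18
```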

252

G. I. Fragapane et al.

The deadline for sending the order is 8:30 AM. Once the orders are sent, there is a lead time of 2 h (8:30–10:30 AM) for replenishment between the central warehouse and the goods reception in the hospital. As all supplies are delivered to the goods reception at once, queues in AGV transportation often occur. Therefore, the supplies are normally not available at the POU before around 12:00 PM. An additional disturbance to the system regarding the replenishment of supplies was observed: several times a week the hospital receives the wrong quantities or products. This can be due to technological or human errors in the ordering process at the POU, or to incorrect picking of orders at the central warehouse. The overall processes of inventory control from the central warehouse to the POU in the case hospital are visualised in Fig. 1. The external supply processes are shown in the top timeline, while the internal inventory control processes are shown in the bottom timeline. T1, T2, and T3 represent the scheduled transportation of goods deliveries by truck from the central warehouse to the hospital.

Fig. 1. The inventory control process in the case hospital

4 Discussion

Due to the demand patterns found in the emergency department (especially on a busy night), stock-outs can easily occur between the periodic reviews of inventory. Given the demand characteristics of emergency departments, the literature states that a continuous review policy is a more suitable method. New technologies enabling real-time inventory records, visibility of material flows, and automated order generation are already available. RFID is an enabler of continuous review that has received attention within the field of hospital inventory management. However, hospitals have not adopted the technology as fast as other industries. Different inventory control policies and technologies are combined into replenishment systems that control the POU inventories. The descriptive control model in Fig. 1 serves to describe the POU inventory system and supports the discussion of how different technological solutions support the availability of medical supplies.


The storage area in the emergency department at the case hospital is very limited, meaning that not all supplies can be stored in the common two-bin systems. With limited storage capacity, the availability of items becomes more dependent on the responsiveness of the overall supply chain. The study by Little and Coughlan [20] shows that optimizing inventory within space constraints requires more frequent deliveries of supplies to keep service levels high. This implies using continuous review solutions and quick replenishment systems, by either employing extra labour or implementing technological solutions. The inventory control system model can support the identification and mapping of the current state of the replenishment and inventory control process. Modelling the process provides a visualization of the different flows, inputs, and outputs controlling the inventory level, which can support the choice of a suitable technological solution within the internal hospital supply chain. One of the strengths of the inventory control system at the case hospital is the central warehouse’s quick response to signals and demand, and the use of order-up-to levels to decide order quantities. This solution is appropriate and efficient for limited storage space, while still providing high service levels [21]. The responsibility for, and time-consuming task of, monitoring inventory and supplies has been transferred from the nurses to a small group of porters. However, stock-outs and unavailability of medical supplies are unavoidable and occur weekly in the emergency department. Stock-outs can occur especially quickly in situations where several patients are treated at the emergency department due to, e.g., a car crash. Implementing technologies for the inventory control processes can support the external supply processes by providing better visibility of the inventory. With this information, the supply chain can adjust its efforts towards being more responsive to changing demand.
The use of real-time information on inventory levels can enable increased responsiveness in the overall supply chain, which can make it possible to achieve better availability of a broader range of SKUs while keeping smaller quantities of each SKU at POU inventories in the hospital. In an emergency department, this can prove especially valuable due to its fluctuating and emergent demand.

5 Conclusion and Further Research

Most hospitals have limited storage space at the POU and an increasing variety of SKUs. This creates challenges for the availability of medical supplies. Every hospital has its own supply chain capabilities and configurations, and its own space restrictions for storage. The infrastructure can be new or old, and the degree of technological advancement varies. This has led to a range of different ways to control POU inventories. In the case hospital, the handheld computers used for ordering are old and can cause several types of errors. The literature highlights periodic review in combination with either a fixed quantity or an order-up-to level as the traditional inventory control policy applied at POU inventories in hospitals. However, this may not be a robust enough control policy if the goal is to assure availability: the policy gives poor stock visibility between periodic reviews and lacks real-time inventory data. The case hospital often experiences


that the wrong quantity or product is delivered from the central warehouse. By improving stock visibility and using real-time inventory data when ordering, it can be assumed that such errors can be reduced or even eliminated. Different inventory control policies are applied in combination with various supporting technologies to form the replenishment systems of the POU inventories. The inventory control process has been mapped and visualised. To improve the visibility of inventory levels, new replenishment systems have been introduced. In conclusion, ensuring the availability of medical supplies requires a joint effort from both the supply chain and the inventory control processes. Identification technology such as RFID has shown great potential to enable continuous real-time tracking of inventory levels and automatic order generation, but several barriers to the successful implementation of these solutions in hospitals remain. Continuous improvements in both areas, combined with the possibilities offered by emerging technologies, can change the design of the inventory control process towards a state where better availability of medical supplies can be ensured. Future research should focus on how different technological solutions and hospital supply chain configurations affect response time and inventory efficiency.

Acknowledgement. This research received funding from the strategic research area NTNU Health in 2019 at NTNU, Norwegian University of Science and Technology. The authors also gratefully acknowledge the case hospital that made it possible to carry out this study, and Elise Keseler, master student at NTNU, for supporting this research.

References

1. Bacik, J., et al.: Pathfinder-development of automated guided vehicle for hospital logistics. IEEE Access 5, 26892–26900 (2017)
2. OECD: Health at a Glance 2017 (2017)
3. Bendavid, Y., Boeck, H., Philippe, R.: RFID-enabled traceability system for consignment and high value products: a case study in the healthcare sector. J. Med. Syst. 36, 3473–3489 (2011)
4. Rosales, C.R., Magazine, M., Rao, U.: The 2Bin system for controlling medical supplies at point-of-use. Eur. J. Oper. Res. 243(1), 271–280 (2015)
5. Papanicolas, I., Smith, P.: Health System Performance Comparison: An Agenda for Policy, Information and Research. McGraw-Hill Education, Maidenhead (2013)
6. Volland, J., et al.: Material logistics in hospitals: a literature review. Omega Int. J. Manag. Sci. 69, 82–101 (2017)
7. Moons, K., Waeyenbergh, G., Pintelon, L.: Measuring the logistics performance of internal hospital supply chains – a literature study. Omega 82, 205–217 (2019)
8. Slack, N., Brandon-Jones, A., Johnston, R.: Operations Management, 7th edn. Pearson Education, Harlow (2013)
9. Waters, C.D.J.: Inventory Control and Management, 2nd edn. Wiley, Chichester (2003)
10. Muller, M.: Essentials of Inventory Management. AMACOM, Saranac Lake (2011)
11. de Vries, J.: The shaping of inventory systems in health services: a stakeholder analysis. Int. J. Prod. Econ. 133(1), 60–69 (2011)


12. Rossetti, M.D., Buyurgan, N., Pohl, E.: Medical supply logistics. In: Hall, R. (ed.) Handbook of Healthcare System Scheduling. ISOR, vol. 168, pp. 245–280. Springer, Boston (2012). https://doi.org/10.1007/978-1-4614-1734-7_10
13. Çakıcı, Ö.E., Groenevelt, H., Seidmann, A.: Using RFID for the management of pharmaceutical inventory — system optimization and shrinkage control. Decis. Support Syst. 51(4), 842–852 (2011)
14. Paltriccia, C., Tiacci, L.: Supplying networks in the healthcare sector: a new outsourcing model for materials management. Ind. Manag. Data Syst. 116(8), 1493–1519 (2016)
15. Rosales, C.R.: Technology Enabled New Inventory Control Policies in Hospitals. University of Cincinnati (2011)
16. Landry, S., Beaulieu, M.: The challenges of hospital supply chain management, from central stores to nursing units. In: Denton, B.T. (ed.) Handbook of Healthcare Operations Management. ISOR, vol. 184, pp. 465–482. Springer, New York (2013). https://doi.org/10.1007/978-1-4614-5885-2_18
17. Nedland, S.M.: Avanserte logistikkløsninger på St. Olavs Hospital (2015)
18. Voss, C., Frohlich, M., Tsikriktsis, N.: Case research in operations management. Int. J. Oper. Prod. Manag. 22(2), 195–219 (2002)
19. Wilson, C.: Chapter 2 - Semi-structured interviews. In: Wilson, C. (ed.) Interview Techniques for UX Practitioners, pp. 23–41. Morgan Kaufmann, Boston (2014)
20. Little, J., Coughlan, B.: Optimal inventory policy within hospital space constraints. Health Care Manag. Sci. 11(2), 177–183 (2008)
21. Bijvank, M., Vis, I.F.A.: Inventory control for point-of-use locations in hospitals. J. Oper. Res. Soc. 63(4), 497–510 (2012)

Combining the Inventory Control Policy with Pricing and Advertisement Decisions for a Non-instantaneous Deteriorating Product

Reza Maihami¹ and Iman Ghalehkhondabi²

¹ Department of Business, School of Business and Leadership, Our Lady of the Lake University, Houston, TX 77067, USA. [email protected]
² Department of Business, School of Business and Leadership, Our Lady of the Lake University, San Antonio, TX 78207, USA. [email protected]

Abstract. A non-instantaneous deteriorating item is a product whose deterioration starts after a specific period of time rather than immediately upon its arrival in stock. In this paper, we study the inventory control policy for a non-instantaneous deteriorating item subject to pricing and advertising decisions. The demand function is price- and time-dependent, and shortage is allowed and partially backlogged. The retailer aims to maximize its total profit by determining the optimal selling price and inventory control variables. We formulate the proposed model and develop an algorithm to find the optimal solution. Finally, we present a numerical example with discussion to show the efficiency of the proposed model.

Keywords: Inventory control · Non-instantaneous deteriorating items · Pricing · Advertisement

1 Introduction

Many products, such as medicine, high-tech products, fruits, and blood, are exposed to a deterioration process: their usefulness decreases over time due to loss of utility or loss of the original value of the items. For some products, there is a span of maintaining quality or original condition during which no deterioration occurs. [1] introduced this kind of product as a “non-instantaneous deterioration item”. This phenomenon is common in the real world; for example, fresh vegetables and fruits have a short span of maintaining fresh quality, during which there is almost no spoilage. Afterward, some of the items start to decay. For such items, the assumption that deterioration starts from the instant of arrival in stock may cause retailers to make inappropriate replenishment policies, because it overvalues the total annual relevant inventory cost. Therefore, in the field of inventory management, it is necessary to consider the inventory problems of non-instantaneous deteriorating items. Some important studies in this area are [2–6].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 256–264, 2019. https://doi.org/10.1007/978-3-030-29996-5_30


Furthermore, companies use marketing policies such as pricing and advertisement efforts to improve their performance. In fact, marketing policy is a necessary element in controlling inventory and customer demand for any company. Thus, many studies, such as [7–14], have combined the inventory control problem of deteriorating products with pricing and advertisement activities. This paper introduces a pricing and inventory control model for a non-instantaneous deteriorating product subject to advertisement effort. The demand is price- and time-dependent, and shortage is allowed and partially backlogged. We simultaneously determine the optimal sale price, replenishment schedule, and order quantity to maximize the total profit as the objective function. The rest of the paper is organized as follows: Sect. 2 introduces the notation, the mathematical model, and the searching algorithm used to solve the proposed model. Section 3 presents the numerical example and a discussion based on sensitivity analysis. Finally, we conclude the paper in Sect. 4 and suggest some future work.

2 Mathematical Modelling

First, we introduce the notation applied throughout the paper:

$c$: unit purchase cost
$h$: unit holding cost
$s$: unit backorder cost
$o$: unit lost sale cost
$p$: unit sale price, where $p > c$
$\theta$: deterioration parameter
$q$: advertisement coefficient
$t_d$: no-deterioration time length
$T$: inventory cycle time length
$t_1$: no-shortage time length
$Q$: order quantity
$I_1(t)$: inventory level at time $t \in [0, t_d]$
$I_2(t)$: inventory level at time $t \in [t_d, t_1]$
$I_3(t)$: inventory level at time $t \in [t_1, T]$
$I_0$: maximum inventory level
$S$: maximum amount of demand backlogged
$PE$: advertisement cost
$TP(p, t_1, T)$: total profit per unit time of the inventory system

Figure 1 shows the studied inventory model. There are $I_0$ units of product at the beginning of each cycle. The inventory level falls to zero due to demand and deterioration in the system. Then, shortage occurs until the end of the order cycle. We assume that the demand function $D(p,t) = (a - bp)e^{kt}$ (where $a > 0$, $b > 0$) depends on price and time: demand may increase when the sale price decreases, and it changes over time. [9] showed that this kind of demand function is appropriate for deteriorating products such as high-tech items, fruits and vegetables, and fashion commodities. In the inventory system, shortage is allowed and partially backlogged. We assume that the fraction of shortage backordered is $\beta(t) = k_0 e^{-\delta t}$ ($0 < k_0 \le 1$, $\delta > 0$), where $t$ is the waiting time up to the next replenishment and $\delta$ is a positive constant, with $0 \le \beta(t) \le 1$ and $\beta(0) = 1$. We also adopt the advertisement cost function from [9] as $PE = K(q-1)^2 \left[\int_0^T D(p,t)\,dt\right]^{\alpha}$, where $K > 0$ and $\alpha$ is a constant. Further, we assume that the advertisement coefficient $q$ correspondingly impacts the effort-induced demand, i.e., demand becomes $qD(p,t)$. Finally, it is assumed that the no-shortage time length is greater than the no-deterioration time length, i.e., $t_1 > t_d$.

Fig. 1. Inventory level against time (by author).

During the time interval $[0, t_d]$, the inventory level is reduced only to satisfy demand, so the following differential equation describes the inventory status:

$$\frac{dI_1(t)}{dt} = -qD(p,t) = -q(a-bp)e^{kt}, \qquad 0 \le t \le t_d \qquad (1)$$

With $I_1(0) = I_0$, solving (1) results in:

$$I_1(t) = \frac{q(a-bp)}{k}\bigl(1 - e^{kt}\bigr) + I_0, \qquad 0 \le t \le t_d \qquad (2)$$

In the second interval $[t_d, t_1]$, demand and the deterioration process decrease the inventory level. The inventory status is therefore given by the differential equation:

$$\frac{dI_2(t)}{dt} + \theta I_2(t) = -qD(p,t), \qquad t_d \le t \le t_1 \qquad (3)$$

With the condition $I_2(t_1) = 0$, Eq. (3) yields:

$$I_2(t) = \frac{q(a-bp)e^{-\theta t}}{k+\theta}\bigl(e^{(k+\theta)t_1} - e^{(k+\theta)t}\bigr), \qquad t_d \le t \le t_1 \qquad (4)$$

From Fig. 1 we conclude $I_1(t_d) = I_2(t_d)$; therefore, the maximum inventory level $I_0$ is:

$$I_0 = \frac{q(a-bp)e^{-\theta t_d}}{k+\theta}\bigl(e^{(k+\theta)t_1} - e^{(k+\theta)t_d}\bigr) - \frac{q(a-bp)}{k}\bigl(1 - e^{kt_d}\bigr) \qquad (5)$$

Substituting (5) into (2) gives:

$$I_1(t) = \frac{q(a-bp)}{k}\bigl(1 - e^{kt}\bigr) + \frac{q(a-bp)e^{-\theta t_d}}{k+\theta}\bigl(e^{(k+\theta)t_1} - e^{(k+\theta)t_d}\bigr) - \frac{q(a-bp)}{k}\bigl(1 - e^{kt_d}\bigr), \qquad 0 \le t \le t_d \qquad (6)$$

During the third interval $[t_1, T]$, the inventory system confronts shortage, and the demand is partially backlogged according to the fraction $\beta(T-t)$. Thus, the inventory level is formulated as:

$$\frac{dI_3(t)}{dt} = -qD(p,t)\beta(T-t) = -\frac{qD(p,t)}{e^{\delta(T-t)}}, \qquad t_1 \le t \le T \qquad (7)$$

From the initial condition $I_3(t_1) = 0$, solving Eq. (7) yields:

$$I_3(t) = \frac{q(a-bp)e^{-\delta T}\bigl(e^{(\delta+k)t_1} - e^{(k+\delta)t}\bigr)}{k+\delta}, \qquad t_1 \le t \le T \qquad (8)$$

Setting $t = T$ in (8), the maximum amount of demand backlogged is:

$$S = -I_3(T) = -\frac{q(a-bp)e^{-\delta T}\bigl(e^{(\delta+k)t_1} - e^{(k+\delta)T}\bigr)}{k+\delta} \qquad (9)$$

The order quantity per cycle $Q$ equals the sum of $S$ and $I_0$, i.e. $Q = S + I_0$:

$$Q = -\frac{q(a-bp)e^{-\delta T}\bigl(e^{(\delta+k)t_1} - e^{(k+\delta)T}\bigr)}{k+\delta} + \frac{q(a-bp)e^{-\theta t_d}}{k+\theta}\bigl(e^{(k+\theta)t_1} - e^{(k+\theta)t_d}\bigr) - \frac{q(a-bp)}{k}\bigl(1 - e^{kt_d}\bigr) \qquad (10)$$

Now, we can compute the inventory costs and revenue as follows.
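As a sanity check on the derivation, the closed form (4) can be verified numerically against the differential equation (3) by a central finite difference. The parameter values below are illustrative (loosely based on the numerical example in Sect. 3), not part of the model itself.

```python
import math

# Illustrative parameter values (assumptions for demonstration only)
a, b, p = 500.0, 0.5, 400.0
k, theta, q = 0.98, 0.08, 2.0
t1 = 0.128

def D(t):
    """Demand rate D(p, t) = (a - b p) e^{k t}."""
    return (a - b * p) * math.exp(k * t)

def I2(t):
    """Closed form (4) for the inventory level on [t_d, t1]."""
    return (q * (a - b * p) * math.exp(-theta * t) / (k + theta)
            * (math.exp((k + theta) * t1) - math.exp((k + theta) * t)))

# Check that (4) satisfies dI2/dt + theta*I2 = -q*D(p,t) at an interior point.
t, h = 0.08, 1e-6
lhs = (I2(t + h) - I2(t - h)) / (2 * h) + theta * I2(t)
rhs = -q * D(t)
assert abs(lhs - rhs) < 1e-3   # residual of the ODE is numerically zero
assert abs(I2(t1)) < 1e-9      # boundary condition I2(t1) = 0
```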

$HC$, the inventory holding cost:

$$HC = h\left[\int_0^{t_d} I_1(t)\,dt + \int_{t_d}^{t_1} I_2(t)\,dt\right] \qquad (11)$$

$SC$, the shortage cost due to backlog:

$$SC = s\int_{t_1}^{T} \bigl[-I_3(t)\bigr]\,dt \qquad (12)$$

$OC$, the opportunity cost due to lost sales:

$$OC = o\int_{t_1}^{T} qD(p,t)\bigl(1 - \beta(T-t)\bigr)\,dt \qquad (13)$$

$PC$, the purchase cost:

$$PC = cQ \qquad (14)$$

$SR$, the sales revenue:

$$SR = p\left(\int_0^{t_1} qD(p,t)\,dt + S\right) \qquad (15)$$

$PE$, the promotion cost:

$$PE = K(q-1)^2\left[\int_0^{T} D(p,t)\,dt\right]^{\alpha} \qquad (16)$$
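The cost integrals (12) and (13) depend only on the closed forms already derived, so they can be evaluated with simple quadrature. The sketch below uses illustrative parameter values and a basic trapezoidal rule; both are assumptions for demonstration, not the paper's calibrated example.

```python
import math

# Illustrative parameter values (assumptions for demonstration only)
a, b, p = 500.0, 0.5, 525.0
k, delta, q = 0.98, 0.1, 2.0
t1, T = 0.128, 0.182
s_cost, o_cost = 80.0, 120.0

def D(t):
    """Demand rate D(p, t) = (a - b p) e^{k t}."""
    return (a - b * p) * math.exp(k * t)

def I3(t):
    """Closed form (8); negative on (t1, T], i.e. the backlog level."""
    return (q * (a - b * p) * math.exp(-delta * T)
            * (math.exp((delta + k) * t1) - math.exp((k + delta) * t))
            / (k + delta))

def trapz(f, lo, hi, n=2000):
    """Composite trapezoidal rule on [lo, hi] with n subintervals."""
    step = (hi - lo) / n
    return step * (f(lo) / 2 + sum(f(lo + i * step) for i in range(1, n)) + f(hi) / 2)

# Eq. (12): shortage cost integrates the backlog -I3(t) over [t1, T].
SC = s_cost * trapz(lambda t: -I3(t), t1, T)
# Eq. (13): lost-sales cost weights demand by the non-backlogged fraction.
OC = o_cost * trapz(lambda t: q * D(t) * (1 - math.exp(-delta * (T - t))), t1, T)
```

Both quantities come out strictly positive, as they must, since the backlog $-I_3(t)$ and the non-backlogged fraction $1-\beta(T-t)$ are non-negative on $(t_1, T]$.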

With $A$ denoting the ordering cost, the total profit per unit time $TP(p, t_1, T)$ is:

$$TP(p, t_1, T) = \frac{SR - A - HC - SC - OC - PC - PE}{T} \qquad (17)$$

The explicit closed-form expression of (17) is obtained by substituting (10)–(16). The decision maker aims to maximize the total profit by determining the optimal ordering policy and sale price; in other words, we want to maximize $TP(p, t_1, T)$ by finding the optimal values of $(p, t_1, T)$. We first show that for any given $p$, optimal values of $t_1$ and $T$ exist. Then, for any given $(t_1, T)$, there exists a unique $p$ that maximizes $TP(p, t_1, T)$. Since $TP(p, t_1, T)$ is a function of $p$, $t_1$, and $T$, for any given $p$ the necessary conditions for the total profit per unit time (17) to be maximized are:

$$\frac{\partial TP(p, t_1, T)}{\partial t_1} = 0 \qquad (18)$$

$$\frac{\partial TP(p, t_1, T)}{\partial T} = 0 \qquad (19)$$

It is easy to prove the concavity of $TP(p, t_1, T)$. So for any given price, the point $(t_1^*, T^*)$ that maximizes the total profit per unit time not only exists but is unique.


Next, we study the condition under which the optimal selling price also exists. For any $(t_1^*, T^*)$, the first-order necessary condition for $TP(p, t_1^*, T^*)$ to be maximized is:

$$\frac{\partial TP(p, t_1^*, T^*)}{\partial p} = 0 \qquad (20)$$

We will numerically show that $TP(p, t_1^*, T^*)$ is a concave function of $p$ for a given $(t_1^*, T^*)$; hence, the value of $p$ obtained from (20) is unique. As a result, there is a unique value $p^*$ that maximizes $TP(p, t_1^*, T^*)$, and $p^*$ can be obtained by solving (20).

2.1 Searching Algorithm

Based on the mathematical formulation, we apply a searching algorithm to obtain $(p^*, t_1^*, T^*)$. The algorithm is an iterative search process that starts with an arbitrary initial value and then, using the proposed model formulation, finds the optimal solution. It uses the computations from the model formulation section to find the solutions in steps 2 and 3. As shown in the model formulation, the algorithm solves a non-linear optimization model that depends on three variables. In step 1, the algorithm sets an initial value for one of the variables. Then, using the first-order conditions, it computes the other two variables, and these values are in turn used to find the next value of the first variable. The algorithm repeats the entire process until the difference between two consecutive values of the first variable is sufficiently small. The algorithm steps are as follows:

Step 1: Begin with $j = 0$ and set $p_j = p_1$ as the initial value of the sale price.
Step 2: For $p_j$, solve the system of equations (18) and (19) and determine the optimal values $(t_1^*, T^*)$.
Step 3: Using the result of step 2, solve Eq. (20) to find the next price $p_{j+1}$.
Step 4: If $|p_j - p_{j+1}| \le 0.0001$, set $p^* = p_{j+1}$; then $(p^*, t_1^*, T^*)$ is the optimal solution and the algorithm stops. Otherwise, set $j = j + 1$ and go back to step 2.

Using the above algorithm, we obtain the optimal solution $(p^*, t_1^*, T^*)$. Then, we can obtain $Q^*$ from (10) and $TP^*$ from (17).
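The iteration structure of the algorithm can be sketched in code. Since the closed-form total profit (17) is lengthy, the sketch below substitutes a simple concave toy profit function so the fixed-point behaviour is visible; the toy function, search bounds, and starting value are illustrative assumptions, and steps 2 and 3 are implemented with ternary search instead of symbolic derivatives.

```python
def argmax_1d(f, lo, hi, tol=1e-9):
    """Ternary search: maximizer of a concave one-dimensional function on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

def total_profit(p, t1, T):
    # Toy concave stand-in for Eq. (17) -- NOT the paper's closed form.
    a, b = 500.0, 0.5
    return ((a - b * p) * p
            - 2000.0 * (t1 - 0.0002 * p) ** 2
            - 1500.0 * (T - t1 - 0.05) ** 2)

def searching_algorithm(p_init, eps=1e-4, max_iter=50):
    p = p_init                                  # step 1: initial price p_1
    t1 = T = 0.0
    for _ in range(max_iter):
        # Step 2: for fixed p, maximize over (t1, T) -- stands in for (18)-(19).
        T = 0.5
        for _ in range(20):                     # coordinate ascent on the inner pair
            t1 = argmax_1d(lambda x: total_profit(p, x, T), 0.0, T)
            T = argmax_1d(lambda x: total_profit(p, t1, x), t1, t1 + 1.0)
        # Step 3: for fixed (t1, T), maximize over p -- stands in for (20).
        p_next = argmax_1d(lambda x: total_profit(x, t1, T), 0.0, 1000.0)
        # Step 4: stop when two consecutive prices are sufficiently close.
        if abs(p - p_next) <= eps:
            return p_next, t1, T
        p = p_next
    return p, t1, T

p_star, t1_star, T_star = searching_algorithm(600.0)
```

With the toy function, the iteration converges in a handful of outer steps, mirroring the five iterations reported for the paper's own example in Table 1.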

3 Numerical Example

In this section, we rely on a numerical example to show the efficiency of the proposed model and algorithm. The results were obtained using Mathematica 9.0. In the subsequent analysis, the following base values are used: $A = \$250$, $c = \$200$, $h = \$40$, $s = \$80$, $o = \$120$, $\theta = 0.08$, $q = 2$, $t_d = 0.04$, $D(p,t) = (500 - 0.5p)e^{0.98t}$, and $\beta(t) = e^{-0.1t}$. We set the initial sale price to $p_1 = 600$. Using the searching algorithm, after 5 iterations the optimal solution is computed as $p^* = 525.948$, $t_1^* = 0.128$, $T^* = 0.182$, $TP^* = 204292$, and $Q^* = 19.111$. Table 1 shows the computational results.

Table 1. The computational results

j   p_j       t_1     T       Q        TP
1   600       0.085   0.158   15.156   105678
2   534.560   0.108   0.176   16.189   193456
3   526.678   0.113   0.180   18.987   203456
4   525.947   0.125   0.181   19.089   204292
5   525.948   0.128   0.182   19.110   204292

Table 1 highlights the main output of this study. Applying the suggested algorithm, we obtained the optimal solution for the selling price and replenishment policy for a vendor who sells a non-instantaneous deteriorating product. This optimal solution helps the vendor achieve the highest total profit in each inventory cycle. This finding provides a mathematical tool for the vendor to make better, simultaneous decisions about two significant aspects of its system: the inventory control policy and pricing. As discussed earlier, the total profit is a concave function of the sale price. Here, we solve the numerical example with various starting values of the sale price: 460, 480, 500, 520, 540, 560, 580, 600, and 620. As shown in Fig. 2, the results reveal that TP is strictly concave in p. Hence, we are assured that the local maximum obtained from the proposed algorithm is indeed the global maximum solution.

Fig. 2. Optimal total profit associated with p

3.1 Discussion

In this section, we derive some managerial implications based on a sensitivity analysis of the parameters. First, we solve the suggested numerical example for distinct values of $t_d$. This sensitivity analysis shows the impact of the non-instantaneous deterioration phenomenon. The computational results for $t_d \in \{0, 0.08, 0.16, 0.24\}$ are shown in Table 2. If $t_d = 0$, the model becomes the instantaneous-deterioration case, and the optimal solution is TP* = 193678. It can be seen that there is an improvement in total profit from the non-instantaneously deteriorating demand model. Moreover, the longer the time where no deterioration occurs, the greater the improvement in total profit from

Combining the Inventory Control Policy

263

Table 3. Optimal solution for various values of q q

p

t1

T

Q

TP

1 1.2 1.4 1.6 1.8 2

524.855 525.084 525.307 525.524 525.737 525.948

0.132 0.131 0.130 0.129 0.129 0.128

0.187 0.185 0.184 0.183 0.182 0.182

9.599 11.504 13.408 15.310 17.211 19.110

101687 122275 142830 163351 183838 204292

the non-instantaneously deteriorating demand model. This implies that if the retailer can convert the instantaneously to non-instantaneously items by improving stock equipment, then the total profit per unit time will increase. Next, we perform the Example for different values of the promotional effort q. The results are shown in Table 3.

Table 2. Optimal solution for various values of td td

p

t1

T

Q

TP

0 0.08 0.16 0.24

526.780 525.948 525.678 523.987

0.117 0.128 0.134 0.146

0.180 0.182 0.187 0.196

18.678 19.110 19.219 19.789

193678 204292 210890 223456

If q = 1 (the retailer does not adopt the promotion policy), the optimal solutions is TP* = 101687. This optimal total profit is greatly lower than TP when ¼ 2. Moreover, the greater of promotional effort, the greater the improvement in total profit. This implies that if the retailer can increase the effect of promotional activity, then the total profit will increase drastically.

4 Conclusions and Future Work

In this study, we developed an inventory control model with pricing and advertising efforts for a non-instantaneously deteriorating product. We assumed price- and time-dependent demand and partially backlogged shortages. The mathematical formulation is presented, and a search algorithm to find the optimal solution is developed. In the last section, we ran a numerical example with a sensitivity analysis on the main parameters. The main result of this paper is the determination of the optimal sale price and replenishment policy for the retailer. We also showed that the total profit is significantly higher in the non-instantaneous case than in the instantaneous deterioration case. Furthermore, the retailer's total profit improves in the presence of advertising activity. This research can be extended in several directions. It would be interesting to examine the proposed model with a stochastic demand or deterioration function. Besides, we only


consider a single retailer. Since a supply chain includes multiple players, it becomes imperative to examine the inventory model in a supply chain context.


Assessing Fit of Capacity Planning Methods for Delivery Date Setting: An ETO Case Study

Swapnil Bhalla1, Erlend Alfnes1, and Hans-Henrik Hvolby1,2

1 Department of Mechanical and Industrial Engineering, NTNU, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]
2 Department of Materials and Production, Centre for Logistics, Aalborg University, Aalborg, Denmark

Abstract. The paper studies an engineer-to-order (ETO) manufacturing firm. A novel approach is used to assess the fit of capacity planning methods in the planning environment of the firm, and towards delivery date setting, which is of strategic importance for ETO firms.

Keywords: Engineer-to-order (ETO) · Strategic fit · Delivery date or due date · Capacity planning · Rough-cut capacity planning (RCCP)

1 Introduction

Developing product designs for specific customer orders allows manufacturers to deliver customised products that address customers' unique requirements. This manufacturing approach, known as customer-driven manufacturing, is a key concept for future factories [9,19,26]. Such a customer-driven approach to manufacturing is prevalent among enterprises producing high-value capital products, such as shipbuilding, offshore equipment manufacturing, etc. [11,22]. Based on the customer order decoupling point (CODP) framework, such manufacturing contexts are characterised by a supply chain strategy or product delivery strategy known as engineer-to-order (ETO) [17]. While firms producing high-value products benefit from an ETO strategy, which enables them to address specific customer requirements, they also operate in relatively complex planning environments due to increased uncertainty regarding specifications of the product and production process [23]. Customising products for every customer's requirements leads to newness within order fulfilment activities for each customer order, which include engineering, purchasing, production, etc. The newness of order fulfilment activities for a product is managed by organising these activities as a project [27].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 265–273, 2019. https://doi.org/10.1007/978-3-030-29996-5_31

A precursor to planning a project and confirmation of customer orders is the customer

enquiry stage, where estimated price, project delivery date, etc. should be quoted by the manufacturer [23]. The acceptance of these delivery dates by potential customers is often a criterion for confirmation of customer orders [23]. Due to the uncertainty in product and process specifications, and the newness or non-repetitiveness of production activities, identifying reliable production delivery dates at this stage is not a trivial task in these manufacturing environments. It is worth clarifying that while 'project delivery date' is used here to refer to the date promised to the customer for handing over the finished product, 'production delivery date' refers to the estimated date when production is expected to be completed, which precedes the project delivery date. The possibility to determine reliable production delivery dates through capacity planning in ETO (and make-to-order (MTO)) environments is a primary criterion for the applicability of production planning and control (PPC) systems in these environments [23]. The fit between PPC systems and the corresponding planning environment has often been emphasised as consequential to manufacturers' performance [1,16,24]. Motivated by the importance of this fit towards improving manufacturing performance, different frameworks for mapping planning environments have been proposed in the literature [3,13,18,20]. These frameworks are intended to be starting points for identifying suitable PPC systems. This paper presents a case study that set out to investigate the applicability of relevant theoretical knowledge to delivery date setting practice in ETO manufacturing. Through the case study, the paper also demonstrates a novel approach for investigating how the mapping of a planning environment can be used to assess the fit of PPC methods, as called for by Buer et al. [3] as a possibility for future work. The paper is organised as follows. Section 2 contextualises this paper using literature.
Section 3 outlines relevant capacity planning techniques. Section 4 presents the research framework and case study. Section 5 serves as a brief conclusion to the paper.

2 Delivery Date Setting in ETO Manufacturing

The strategic importance of estimating, quoting and setting production delivery dates in customised manufacturing environments such as ETO and MTO has often been emphasised in the literature [12,14,15,25,29]. This repeatedly emphasised importance has triggered much research on the subject. However, much of this literature is primarily focused on MTO contexts. Undeniably, firms operating with strategies that are a hybrid of ETO and MTO can be found in practice [19,21]. Nevertheless, the primary difference between the two strategies is the engineering aspect, which may introduce uncertainty into the planning environment depending on the level of customisation [29]. As a result, delivery date setting approaches applicable in MTO contexts may not demonstrate equally satisfactory performance in ETO contexts. The remainder of this section discusses some salient contributions to the delivery date setting literature to contextualise the contribution of this paper.


Zorzini et al. [28] studied the delivery date setting process at 15 capital goods manufacturers. They report that the majority of the studied firms opted to perform aggregated capacity analysis for quoting delivery dates, as compared to detailed or no workload analysis. While the sampled firms using aggregated capacity analysis are scattered across the spectrum of customisation and complexity, Zorzini et al. [28] point out that the assembly process was found to be a fixed bottleneck resource for all of these firms. However, this might not be the case for all ETO environments. It was also not found to be true for our case study context, where, historically, different machining resources have been observed to be the bottleneck for different products. Further, among the firms sampled by Zorzini et al. [28], it is also not clear how firms across different levels of customisation differently manage uncertainties regarding product and process specification. Their proposed model assumes that average lead times can be estimated based on past orders, but the validity of this assumption can be expected to vary with the level of customisation and the size of the product portfolio. Ebadian et al. [7] propose a hierarchical PPC model to support delivery date setting, which assumes that incoming orders can be prioritised according to the service level desired for different customers. Carvalho et al. [6] present an optimisation approach developed for tactical capacity planning under uncertainty in an ETO firm, calling for exploration of the validity of the proposed approach in other ETO contexts. As outlined above, context-specificity can be observed as a common feature of most research on delivery date setting in ETO. While these are valuable contributions to theory, the generalised validity of the findings is limited. This highlights the contingent nature of the applicability of delivery date setting methods, as also argued by Zorzini et al.
[29] for taking a contingency theory approach to studying customer enquiry management. Therefore, the case study presented in this paper assesses the applicability of basic theoretical methods in an ETO setting, while explicitly demonstrating the approach, which can be replicated in other ETO contexts for assessing the applicability of seemingly relevant methods.

3 Rough-Cut Capacity Planning

Capacity planning refers to "the process of determining the amount of capacity required to produce in the future" [2]. It entails different activities at different hierarchical levels of PPC, such as strategic or long-term resource requirement planning, rough-cut capacity planning (RCCP) at the tactical or master production scheduling (MPS) level, and detailed capacity requirements planning (CRP) at the material requirements planning (MRP) level [2]. As emphasised frequently in the literature, delivery date setting is most commonly observed at the tactical level of PPC [5,10,23,28], and therefore RCCP methods can be classified as most relevant for delivery date setting. This section briefly presents three basic RCCP methods that are later assessed for their fit to the case environment and for their capability to support delivery date setting. The methods are explained using descriptions from Vollmann et al. [24] and the APICS dictionary [2].


Capacity Planning Using Overall Factors (CPOF). Using overall factors for RCCP is a relatively simple approach where the MPS is used as the starting point. The scheduled quantities of end products in different time buckets serve as the basis for estimating the capacity requirements of different work centres, by applying historical percentages to the total number of hours for producing the item. This essentially gives the estimated workload requirement from different work centres for producing the scheduled quantities, without consideration of the actual timing of the capacity requirement projections. The advantages offered by this approach are minimal data requirements and computational simplicity.

Capacity Planning Using Capacity Bills (CPCB). Using capacity bills follows a similar computational procedure as CPOF, but differs in the data requirements. Instead of using historical percentages for different work centres, as in CPOF, CPCB requires bill-of-material (BOM) and routing data with labour-hour or machine-hour data for each operation. Like CPOF, CPCB does not consider production lead times, and capacity requirements are not time-phased.

Capacity Planning Using Resource Profiles (CPRP). Among the three basic RCCP techniques, using resource profiles for capacity planning is the most sophisticated. It takes production lead times into account and provides time-phased projections of capacity requirements. Using the MPS, BOM and routing data, capacity requirements are estimated as in CPCB. These estimates are further utilised to develop time-phased projections by offsetting the capacity requirements.
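The difference between the two simpler procedures can be contrasted in a small sketch. All figures here (the MPS quantities, total hours per unit, historical percentages and the capacity bill) are invented for illustration; the computation follows the generic textbook procedure, not any dataset from this paper.

```python
# Illustrative sketch of CPOF vs. CPCB; all figures are hypothetical.

# Master production schedule: units of each end product per period.
mps = {"P1": [10, 12], "P2": [4, 6]}

# --- CPOF: total direct hours per unit, split by historical percentages ---
total_hours = {"P1": 5.0, "P2": 8.0}           # hours per unit, all operations
historical_share = {"WC1": 0.6, "WC2": 0.4}    # past workload split per work centre

def cpof(mps, total_hours, share):
    """Workload per work centre and period from overall factors."""
    periods = len(next(iter(mps.values())))
    load = {wc: [0.0] * periods for wc in share}
    for prod, qty in mps.items():
        for t, q in enumerate(qty):
            for wc, s in share.items():
                load[wc][t] += q * total_hours[prod] * s
    return load

# --- CPCB: a bill of capacity gives hours per unit at each work centre ---
capacity_bill = {"P1": {"WC1": 3.5, "WC2": 1.5},
                 "P2": {"WC1": 4.0, "WC2": 4.0}}

def cpcb(mps, bill):
    """Workload per work centre and period from capacity bills."""
    periods = len(next(iter(mps.values())))
    wcs = {wc for b in bill.values() for wc in b}
    load = {wc: [0.0] * periods for wc in wcs}
    for prod, qty in mps.items():
        for t, q in enumerate(qty):
            for wc, h in bill[prod].items():
                load[wc][t] += q * h
    return load

print(cpof(mps, total_hours, historical_share))
print(cpcb(mps, capacity_bill))
```

Note that neither routine offsets the load in time: each period's scheduled quantity loads the work centres in that same period, which is exactly the limitation CPRP addresses.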

4 Case Study

The case company is a supplier of equipment for the maritime industry. Manufacturing activities comprise the main products and spare parts for previously sold products, all undertaken at the same facility. The products can be broadly classified into four types, where every type has various sub-types and size alternatives that essentially serve as templates for tailoring the product designs to specific customer requirements. The cumulative production volume of the different product types is typically below 500 units per year. Presently, at the customer enquiry stage, production delivery dates are determined using a method that is a hybrid of the CPOF and CPRP methods. The product templates and experience from past projects are used to estimate the total workload for a project and, consequently, the workloads for different work centres. These workloads are then offset in time to obtain time-phased projections of capacity requirements, and delivery dates are estimated based on these projections. Maintaining delivery precision for the production department has been challenging, and has worsened in recent years with the widening of the product portfolio and variations in the product mix of demand.
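The offsetting step of the case company's hybrid method (the CPRP-like part) can be sketched as follows. All workloads, work-centre names and lead-time offsets below are hypothetical; the sketch only illustrates how estimated loads are offset in time from a delivery period to obtain time-phased projections.

```python
# Hypothetical sketch of time-phased capacity projections via lead-time
# offsetting, as in the CPRP-style step of the hybrid method.

# Estimated workload per work centre for one project (hours) and the
# offset, in periods before delivery, at which that load occurs.
workloads = {"machining": 400, "assembly": 250, "testing": 80}
offsets   = {"machining": 3,   "assembly": 1,   "testing": 0}

def time_phased_load(delivery_period, workloads, offsets, horizon):
    """Place each work centre's load in its offset period before delivery."""
    profile = {wc: [0.0] * horizon for wc in workloads}
    for wc, hours in workloads.items():
        t = delivery_period - offsets[wc]
        if 0 <= t < horizon:
            profile[wc][t] += hours
    return profile

# A project delivered in period 5 loads machining in period 2,
# assembly in period 4 and testing in period 5.
print(time_phased_load(5, workloads, offsets, horizon=6))
```

Summing such profiles over all confirmed and tentative projects gives the capacity picture against which a candidate delivery date can be checked.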


Fig. 1. Research framework underlying the case study

4.1 Methodology

The purpose of the case study was to better understand the challenges in setting delivery dates in ETO environments. As earlier collaboration revealed that the case company recognises delivery date setting as one of its challenging managerial tasks, the firm served as an appropriate context for this case study. The case sample presented in this paper is limited to a single in-depth case study due to space constraints and can be expanded in the future to test the generalisability of the findings. Figure 1 shows the underlying research framework for the case study process, which can be utilised in future studies to assess the fit of planning activities or PPC systems towards planning environments.

4.2 Assessing Fit

The case company is mapped using Buer et al.’s [3] framework, where characteristics of a planning environment are classified into product, market and manufacturing process related variables. The fit of the RCCP methods is then assessed towards the clustered environmental variables and delivery date setting, based on a combination of literature synthesis and logical assumptions, as shown in Table 1. The planning environment is characterised as follows. Product-Related Variables. CODP placement: ETO-MTO; level of customisation: some specifications are allowed; product variety: high; BOM complexity: 3–5 levels; product data accuracy: low-medium; level of process planning: fully designed process. Market-Related Variables. P/D ratio < 1; demand type: customer order allocation; source of demand: customer order; volume/frequency: few large customer orders per year; frequency of customer demand: unique-sporadic; time distributed demand: annual figure; demand characteristics: dependent; type of procurement ordering: order by order procurement; inventory accuracy: medium. The market-related variables can be further distinguished into demand-related (from P/D ratio to demand characteristics) and supply-related variables (type of procurement and inventory accuracy).


Table 1. Assessing fit of RCCP methods towards planning environment and delivery date setting.

Product characteristics
- Overall factors (CPOF): CPOF reflects a poor fit to the product characteristics. Values of mutually causative variables such as CODP, customisation, product variety, BOM complexity and data accuracy [3] render CPOF unreliable for the case environment. This is consistent with CPOF's success criterion of a flat BOM [13,24].
- Capacity bills (CPCB): CPCB reflects a poor fit to the product characteristics, primarily due to relatively low product data accuracy, which is detrimental to the use of CPCB [8]. Product data entails detailed BOM and routing data, the availability and reliability of which are integral to the success of CPCB [4].
- Resource profiles (CPRP): CPRP reflects a poor fit to the product characteristics. The performance of CPRP, like that of CPCB, relies on the availability and reliability of detailed BOM and routing data and time standards, the accuracy of which is significantly low during RCCP. Customisation and high product variety indirectly contribute to this [3].

Market characteristics
- Overall factors (CPOF): CPOF reflects a poor fit to the market characteristics, and more specifically to demand-related variables such as customer-allocated and customer-order-originated demand, and the low frequency and uniqueness of demand. While a P/D ratio < 1 is a favourable situation for the fit of CPOF [13], the overall fit is rendered poor by the majority of the other variables.
- Capacity bills (CPCB): CPCB reflects a poor fit to the market characteristics, and specifically to the demand-related characteristics. However, these characteristics affect the fit of CPCB indirectly rather than directly. Dependent demand influences the time distribution of demand [3], which in turn influences the CODP placement [3], leading to the unavailability of BOM and routing data during RCCP.
- Resource profiles (CPRP): CPRP reflects a poor fit to the market characteristics of the planning environment. The demand-related characteristics that cause CPCB to be a poor fit also cause CPRP to have a poor fit. None of the discussed RCCP methods are influenced by supply-related characteristics, as on-hand stocks of components are not considered in any of them [13].

Process characteristics
- Overall factors (CPOF): CPOF reflects a poor fit to the manufacturing process due to environmental characteristics such as a non-homogeneous manufacturing mix [13], production in single units or small series, long throughput times, a relatively high number of operations and planning points, and infrequent repetition of production orders.
- Capacity bills (CPCB): CPCB reflects a poor to neutral fit to the process characteristics of the planning environment. The lack of homogeneity in the manufacturing mix does not cause any particular challenges in using CPCB, as it uses a detailed bill of resources [13]. The large number of major operations is expected to negatively influence CPCB's reliability.
- Resource profiles (CPRP): CPRP reflects a neutral to good fit to the manufacturing process characteristics of the planning environment. A high number of major manufacturing operations is expected to increase the importance of offsetting the capacity requirements in time for reliable projections, thus qualifying CPRP as the best fit.

DD setting
- Overall factors (CPOF): Using CPOF for RCCP is expected to provide unreliable delivery dates, as it does not offset capacity requirements in time.
- Capacity bills (CPCB): Using CPCB for RCCP is expected to provide unreliable delivery dates, as it does not offset capacity requirements in time.
- Resource profiles (CPRP): Using CPRP is expected to give reliable delivery dates, as it offsets capacity requirements in time to obtain time-phased projections.


Manufacturing Process Characteristics. Manufacturing mix: mixed products; shop floor layout: fixed-position - cell; type of production: single-unit - smallseries; throughput time: months - weeks; number of major operations: high; batch size: equal to customer quantity; frequency of production order repetition: infrequent repetition; fluctuations of capacity requirements: medium; planning points: medium; set-up times: medium; sequencing dependency: medium; part flow: one-piece/lot-wise; material flow complexity: medium; capacity flexibility: low; load flexibility: medium.

5 Conclusion

The studied case environment demonstrated an overall low applicability of the theoretical RCCP methods. It can be concluded that the fit of RCCP methods to the planning environment is influenced less directly by the manufacturing process characteristics than by the market and product characteristics. The case study also revealed that existing mapping frameworks [3,13,18,20] lack 'production monitoring accuracy' as an environmental variable. Production monitoring accuracy refers to the accuracy of the data used to monitor actual production against planned production. The availability and reliability of this data was found to play a vital role in the success of the delivery date setting process by providing information about available capacity in different planning periods, and was found to be a factor in the low delivery precision observed at the case company. A more comprehensive investigation of the influence of production monitoring data on delivery date setting is a possibility for future work.

References

1. Berry, W.L., Hill, T.: Linking systems to strategy. Int. J. Oper. Prod. Manag. 12(10), 3–15 (1992)
2. Blackstone, J.H. (ed.): APICS Dictionary, 14th edn. APICS (2013)
3. Buer, S.-V., Strandhagen, J.W., Strandhagen, J.O., Alfnes, E.: Strategic fit of planning environments: towards an integrated framework. In: Temponi, C., Vandaele, N. (eds.) ILS 2016. LNBIP, vol. 262, pp. 77–92. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-73758-4_6
4. Burcher, P.G.: Effective capacity planning. Manag. Serv. 36(10), 22–25 (1992)
5. Carvalho, A.N., Oliveira, F., Scavarda, L.F.: Tactical capacity planning in a real-world ETO industry case: an action research. Int. J. Prod. Econ. 167, 187–203 (2015)
6. Carvalho, A.N., Oliveira, F., Scavarda, L.F.: Tactical capacity planning in a real-world ETO industry case: a robust optimization approach. Int. J. Prod. Econ. 180, 158–171 (2016)
7. Ebadian, M., Rabbani, M., Torabi, S., Jolai, F.: Hierarchical production planning and scheduling in make-to-order environments: reaching short and reliable delivery dates. Int. J. Prod. Res. 47(20), 5761–5789 (2009)
8. Fogarty, D.W., Hoffmann, T.R.: Production and Inventory Management. Thomson South-Western, West Chicago (1983)
9. Gosling, J., Hewlett, B., Naim, M.M.: Extending customer order penetration concepts to engineering designs. Int. J. Oper. Prod. Manag. 37(4), 402–422 (2017)


10. Hans, E.W., Herroelen, W., Leus, R., Wullink, G.: A hierarchical approach to multi-project planning under uncertainty. Omega 35(5), 563–577 (2007)
11. Hicks, C., McGovern, T., Earl, C.F.: A typology of UK engineer-to-order companies. Int. J. Logist. 4(1), 43–56 (2001)
12. Hicks, C., Earl, C.F., McGovern, T.: An analysis of company structure and business processes in the capital goods industry in the UK. IEEE Trans. Eng. Manage. 47(4), 414–423 (2000)
13. Jonsson, P., Mattsson, S.A.: The implications of fit between planning environments and manufacturing planning and control methods. Int. J. Oper. Prod. Manag. 23(8), 872–900 (2003)
14. Kingsman, B., Worden, L., Hendry, L., Mercer, A., Wilson, E.: Integrating marketing and production planning in make-to-order companies. Int. J. Prod. Econ. 30, 53–66 (1993)
15. Konijnendijk, P.A.: Coordinating marketing and manufacturing in ETO companies. Int. J. Prod. Econ. 37(1), 19–26 (1994)
16. Lödding, H.: Handbook of Manufacturing Control: Fundamentals, Description, Configuration. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-24458-2
17. Olhager, J.: The role of the customer order decoupling point in production and supply chain management. Comput. Ind. 61(9), 863–868 (2010)
18. Olhager, J., Rudberg, M.: Linking manufacturing strategy decisions on process choice with manufacturing planning and control systems. Int. J. Prod. Res. 40(10), 2335–2351 (2002)
19. Rudberg, M., Olhager, J.: Manufacturing networks and supply chains: an operations strategy perspective. Omega 31(1), 29–39 (2003)
20. Schönsleben, P.: Integral Logistics Management: Planning and Control of Comprehensive Supply Chains. CRC Press, Boca Raton (2016)
21. Semini, M., Haartveit, D.E.G., Alfnes, E., Arica, E., Brett, P.O., Strandhagen, J.O.: Strategies for customized shipbuilding with different customer order decoupling points. Proc. Inst. Mech. Eng. Part M: J. Eng. Marit. Environ. 228(4), 362–372 (2014)
22. Sriram, P.K., Alfnes, E.: Taxonomy of engineer-to-order companies. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.) APMS 2014. IAICT, vol. 440, pp. 579–587. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44733-8_72
23. Stevenson, M., Hendry, L.C., Kingsman, B.G.: A review of production planning and control: the applicability of key concepts to the make-to-order industry. Int. J. Prod. Res. 43(5), 869–898 (2005)
24. Vollmann, T., Berry, W., Whybark, D.: Manufacturing Planning and Control Systems. Irwin/McGraw-Hill, New York (1997)
25. Watanapa, B., Techanitisawad, A.: Simultaneous price and due date settings for multiple customer classes. Eur. J. Oper. Res. 166(2), 351–368 (2005)
26. Wortmann, J.C., Muntslag, D.R., Timmermans, P.J.: Why customer driven manufacturing. In: Wortmann, J.C., Muntslag, D.R., Timmermans, P.J.M. (eds.) Customer-Driven Manufacturing, pp. 33–44. Springer, Dordrecht (1997)
27. Yang, L.R.: Key practices, manufacturing capability and attainment of manufacturing goals: the perspective of project/engineer-to-order manufacturing. Int. J. Project Manage. 31(1), 109–125 (2013)


28. Zorzini, M., Corti, D., Pozzetti, A.: Due date (DD) quotation and capacity planning in make-to-order companies: results from an empirical analysis. Int. J. Prod. Econ. 112(2), 919–933 (2008)
29. Zorzini, M., Hendry, L., Stevenson, M., Pozzetti, A.: Customer enquiry management and product customization: an empirical multi-case study analysis in the Italian capital goods sector. Int. J. Oper. Prod. Manag. 28(12), 1186–1218 (2008)

Data-Driven Production Management

From a Theory of Production to Data-Based Business Models

Günther Schuh1, Malte Brettel2, Jan-Philipp Prote1, Andreas Gützlaff1, Frederick Sauermann1, Katharina Thomas1, and Mario Piel2

1 Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen University, Aachen, Germany
{g.schuh,j.prote,a.guetzlaff,f.sauermann,k.thomas}@wzl.rwth-aachen.de
2 Innovation and Entrepreneurship Group (WIN) – TIME Research Area, RWTH Aachen University, Kackertstr. 7, 52072 Aachen, Germany
{brettel,piel}@time.rwth-aachen.de

Abstract. Producing companies are challenged by competition in global markets, in which customers generally have a strong negotiating position. In order to improve their competitive situation, companies constantly attempt to decrease production costs. However, on the one hand, companies often struggle not only to decrease their production costs but also to determine current production costs as a basis for improvements. On the other hand, producing companies face an increasing volume of production data in the course of Industrie 4.0. This data is expected to be potentially usable as an additional sales asset. Yet, especially traditional companies do not know how to translate generated production data into incoming cash flows. In order to tackle both issues, this paper presents an overview of data-based business models for producing companies as well as a tool for increasing the transparency of production costs in global production networks.

Keywords: Theory of production · Business model · Global production network

1 Introduction

In recent years, customers of producing companies have demanded increasingly individualized products at similar or even lower costs. In competitive markets, companies have responded to this trend by increasing their product variants. Formerly cost-efficiency-oriented production configurations have at least partially been transformed into flexible job shop structures [1]. While this decreases the transparency of cost allocation to jobs and machines, the challenge of producing high-quality yet efficient cost calculations for new orders has increased. This challenge does not only affect local production sites but, even more so, complete global production networks [2]. Moreover, due to global markets that have developed from supplier markets into customer markets, cost pressure is another important influencing factor for producing companies. Thus, an important first step towards decreasing costs is creating transparency of production costs in the production network.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 277–284, 2019. https://doi.org/10.1007/978-3-030-29996-5_32

278

G. Schuh et al.

In the last decade, the use of new production technologies, e.g. 3D printing, has become widespread [3]. In parallel, newly designed machines have been equipped with numerous sensors. Not only digitization but also the connection of machines is a major trend, commonly known as Industrie 4.0 [4]. As a result, companies face a significant increase of production data in growing granularity, which has the potential not only to be used for increasing transparency but also to be sold as a good in itself. This paper aims to give producing companies guidance in identifying the right data-based business model for their needs and to increase the transparency of global production network costs as a particular example of a data-based business model. The remainder of the paper is structured as follows. In Sect. 2, a brief introduction to the research background of data-based business models is given. In Sect. 3, different data-based business models that have been researched in the past are presented. As a particular example of one of the derived business models, the concept and implementation of a tool for increasing the transparency of production costs in global production networks is presented in Sect. 4. In Sect. 5, the main findings are summarized.

2 Towards a Theory of Production

In the cluster of excellence “Integrative Production Technology for High-Wage Countries” at RWTH Aachen University, research was conducted on developing a generic theory of production. This theory aimed to increase the assessability of individual, technological and managerial advances with respect to a company’s profitability. The developed theory of production relies on a number of both cost and profit drivers. Quantifying the relevant parameters of those cost drivers supports the determination of the different costs of newly designed products that can potentially be produced with different technologies. By comparing the advantageousness of the considered technologies, a well-founded and objective decision on the best alternative can be made. In doing so, the trade-off between efficiency and flexibility is objectified [3]. Production research often focuses on quantifying cost drivers. Yet, researching the profitability drivers of companies is an important objective, too. Therefore, an investigation of data-based business models that are built on or supported by Industrie 4.0 was conducted in the transition phase from the above-mentioned cluster of excellence to RWTH Aachen University’s new cluster of excellence, “Internet of Production”. Against this backdrop, the intention of this paper’s underlying research is to answer the question of how digitized manufacturing firms can use production data to increase profits through business model innovation. The results of this research are presented and validated in the following sections.
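The comparison logic described above can be sketched in a few lines. The cost drivers, figures and technology names below are illustrative assumptions, not parameters of the cluster's actual theory of production:

```python
# Hedged sketch: quantify cost drivers per candidate production technology and
# pick the most advantageous one for a given volume. All numbers are invented.
def unit_cost(drivers: dict, volume: int) -> float:
    """Fixed drivers are allocated over the volume; variable drivers count per piece."""
    return drivers["fixed"] / volume + drivers["variable"]

technologies = {
    "milling":     {"fixed": 500_000.0, "variable": 12.0},  # efficient at scale
    "3d_printing": {"fixed": 50_000.0,  "variable": 35.0},  # flexible, low setup cost
}

volume = 100_000
costs = {name: unit_cost(d, volume) for name, d in technologies.items()}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))  # milling 17.0
```

At low volumes the flexible technology wins, at high volumes the efficient one; this is exactly the efficiency/flexibility trade-off the text says the theory objectifies.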

3 Data-Based Business Models

3.1 Foundations of Business Models

Digitization revolutionizes the production industry with respect to new possibilities of collaboration among individuals, machines, and firms. In order to leverage these opportunities, the German federal government has adopted Industrie 4.0 as a key pillar

From a Theory of Production to Data-Based Business Models

279

of its High-Tech Strategy 2020 [5]. In this context, Industrie 4.0 involves radical ICT-driven technological and organizational changes within the production industry [6]. A central element of Industrie 4.0 is the so-called “smart factory”, which enables ICT-based, flexible, autonomous and self-optimizing production systems and networks [7]. Within smart factories, processes, machines, products, and resources are represented via cyber-physical systems (CPS) and the Industrial Internet of Things (IIoT) as a “digital twin” [8, 9]. Besides its advantages (e.g. improved flexibility, productivity, and transparency), the smart factory also offers new opportunities with respect to data availability and the use of data in new business models [10]. Based on the value chain concept by Porter, the data value chain serves as a base for exploring monetization opportunities for production data [11]. Referring to the information-systems-based conceptualization that includes a hierarchical link of data, information, knowledge, and wisdom (DIKW), corresponding management activities for the four phases, and analytical methods that enable the transitions between the phases, it is proposed that firms can directly generate value from production data in all four phases (Fig. 1) [12–14]. Actual monetization of production data can be achieved through (innovation of) data-based business models.

[Figure: the data value chain as four phases – Data (Generation & Acquisition), Information (Structured Storage & Processing), Knowledge (Integration & Optimization) and Wisdom (Presentation & Marketing). Each phase is paired with an analytics type and guiding question: Descriptive Analytics, ‘What has happened?’ (information generation); Diagnostic Analytics, ‘Why has it happened?’ (pattern recognition); Predictive Analytics, ‘What will happen?’ (prognostic capacity); Prescriptive Analytics, ‘What can be done?’ (decision-making ability). Descriptive and diagnostic analytics address the past and real time; predictive and prescriptive analytics address the future.]

Fig. 1. Data value chain [12–14]
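The column structure of Fig. 1 can be written down directly. This is a sketch of the mapping as read off the figure, not code from the cited sources:

```python
# Mapping of each DIKW phase to its management activity, the analytics type in
# the same column of Fig. 1, and the guiding question that analytics answers.
DATA_VALUE_CHAIN = [
    ("Data",        "Generation & Acquisition",        "Descriptive Analytics",  "What has happened?"),
    ("Information", "Structured Storage & Processing", "Diagnostic Analytics",   "Why has it happened?"),
    ("Knowledge",   "Integration & Optimization",      "Predictive Analytics",   "What will happen?"),
    ("Wisdom",      "Presentation & Marketing",        "Prescriptive Analytics", "What can be done?"),
]

def analytics_for(phase: str) -> str:
    """Return the analytics type associated with a DIKW phase."""
    return next(a for p, _, a, _ in DATA_VALUE_CHAIN if p == phase)

print(analytics_for("Knowledge"))  # Predictive Analytics
```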

As shown in Table 1, Osterwalder and Pigneur [15] and Holm et al. [16] conceptualize business models via four dimensions, i.e. value proposition, value delivery, value creation, and value capture. These, in turn, involve nine components, i.e. value proposition, customer segments, channels, customer relationships, key resources, key activities, key partnerships, revenue streams, and cost structure. Casadesus-Masanell and Zhu [17] describe business model innovation (BMI) as a change of key elements of organizations and their business logic in order to generate additional value for stakeholders. As for production data, existing or new data can be used to generate new business models or to improve existing ones to make them fully data-based [18]. Data-based business models can be categorized according to the use of data, i.e. solely firm-internal use of data, sharing data with partners, trading data as a product, or making data available for free [19]. Against this backdrop, a holistic and practice-oriented framework was developed that involves production-data-based, innovative business models for the production industry in high-wage countries.

Table 1. Business model components

Meta-component | Component | Description
Value proposition | Value proposition | Gives an overall view of a company’s bundle of products and services
Value delivery | Customer segments | An organization serves one or several customer segments
Value delivery | Channels | Value propositions are delivered to customers through communication, distribution, and sales channels
Value delivery | Customer relationships | Customer relationships are established and maintained with each customer segment
Value creation | Key resources | Key resources are the assets required to offer and deliver the previously described elements
Value creation | Key activities | A number of key activities are performed by key resources
Value creation | Key partnerships | Some activities are outsourced and some resources are acquired outside the enterprise
Value capture | Revenue streams | Revenue streams result from value propositions successfully offered to customers
Value capture | Cost structure | The business model elements result in the cost structure

3.2 Framework for Production Data-Based Business Models and Innovations

As a starting point for the development of a framework for production data-based business model types and innovations, a systematic literature analysis on keywords such as “data-based business models”, “data monetization” and “data strategy” was conducted. Following Nickerson et al., the business model categories that evolved from the literature review were structured and selected so that the remaining categories form a mutually exclusive and collectively exhaustive framework [20]. Business models that are not applicable to production firms were excluded. Likewise, business models that are not based on proprietary data but only use external data were excluded. The remaining business model types were categorized by the authors across the data value chain into three core categories, i.e. “measure”, “infuse” and “apply” (first level in Fig. 2), and visualized in the framework presented in Fig. 2. Infuse entails data-based business model types that embed DIKW in specific products, services or networks of products and services. By making products and services more engaging for customers, these business models enable firms to expand their revenue streams [14]. Apply involves those business model types that apply knowledge and wisdom without combining it with existing products and services. Predictive and prescriptive data analytics enable firms to identify patterns, relations, and future outcomes [21, 22]. Figure 2 depicts the corresponding eight innovation paths (second level in Fig. 2) for digitized production firms, which, in turn, are divided into thirteen production data-based BMIs (third level in Fig. 2).


Fig. 2. Derived framework for data-based business models

4 Validation

The business models were evaluated using different criteria, i.e. implementation complexity, upside potential and downside risks. Within the framework of data-based business models proposed in Sect. 3, the business model “production network location consulting”, as an example of the innovation path “Consultancy” (business model type “apply” in Fig. 2), has been validated by developing and implementing a data-based online tool for designing production networks. Production network location consulting derives its competitive advantage over other consultancies from leveraging actual production data from smart factories. For this reason, the software demonstrator OptiWo has been developed at the Laboratory for Machine Tools and Production Engineering (WZL) at RWTH Aachen University with the goal of supporting product allocations and the distribution of value added in global networks. Nowadays, production network designers face the challenges of a globalized world with linked markets, shorter planning cycles and production networks that need to be adapted more rapidly. Therefore, all production network decisions need to be based on viable data at a comprehensive level of aggregation in order to make faster as well as better-founded decisions in uncertain business environments [8]. The relevant data is spread over different IT systems, such as ERP, MES or PDM systems. The OptiWo tool creates value for the end user by streamlining planning and decision-making processes through focused visualization and a precise and transparent dissection of production network interdependencies. Considering that designing production networks involves many uncertain influencing factors, the assessment of dependencies is crucial. Therefore, OptiWo takes into account different target values, visualizing costs, delivery times and network risks to allow a profound network analysis.
The necessary prerequisite for using the tool is the availability of data on production sites, sales regions, demand quantities, production resources, processes and transport routes. The data can be entered directly into the tool at a freely selectable aggregation level, so the user can decide for each project which products, processes and cost types should be considered for the network analysis. The OptiWo tool works based on a total landed costs approach, in which all production costs of a product, including all transport costs and customs duties, are included [23]. Hence, fixed costs such as base costs of production sites and depreciation as well as variable costs such as direct and indirect personnel, variable machining costs and purchased parts are taken into account. A major advantage of the tool is the freely selectable level of detail of the data acquisition and, consequently, of the network visualizations. At the same time, the tool guides the user through the process of data acquisition and displays the data requirements of the relevant areas in successive menu tabs. After data collection, the tool summarizes the results of the network analyses in a “viewer”, as shown in Fig. 3. The viewer offers several interactive visualizations in which the user gets an overview of the existing production network as well as, if desired, deeper insight via several mouse-over effects and access to additional information. In detail, the named network target values (costs, delivery times and network risks) are visualized for each production site. Thus, the connection between these target values is visualized in a user-friendly way, which supports the user in understanding the impact of their decisions about product allocations and the distribution of value added.
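A minimal numeric sketch of the total landed costs idea follows. The cost categories mirror the ones listed above, but the names, figures and the assumption that customs duties apply to the production cost value are illustrations, not OptiWo's actual model:

```python
# Hedged sketch of total landed cost (TLC) per piece: production costs
# (fixed allocated over volume + variable) plus transport and customs duties.
from dataclasses import dataclass

@dataclass
class SiteCosts:
    fixed_per_year: float    # base costs of the site, depreciation, ...
    direct_labor: float      # variable personnel cost per piece
    indirect_labor: float
    machining: float         # variable machining cost per piece
    purchased_parts: float

def total_landed_cost(site: SiteCosts, annual_volume: int,
                      transport_per_piece: float, duty_rate: float) -> float:
    """Total landed cost per piece for one site-to-market relation."""
    variable = (site.direct_labor + site.indirect_labor
                + site.machining + site.purchased_parts)
    fixed_allocated = site.fixed_per_year / annual_volume
    production_cost = variable + fixed_allocated
    # assumption: duty is levied on the production cost value
    return production_cost + transport_per_piece + duty_rate * production_cost

site = SiteCosts(fixed_per_year=2_000_000, direct_labor=4.0, indirect_labor=1.5,
                 machining=3.0, purchased_parts=6.5)
tlc = total_landed_cost(site, annual_volume=100_000,
                        transport_per_piece=1.2, duty_rate=0.05)
print(round(tlc, 2))  # 37.95
```

Computing such a figure per site and market is what makes allocations comparable across a network.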

[Figure: viewer screenshot with a location overview and a network costs panel (cost per piece, transports, overhead)]

Fig. 3. Visualization of transport relations, delivery times and costs in a production network

As a result, the user gains holistic transparency of the performance of the production network and can derive specific needs for action to improve the network, such as redistributing value added or decreasing transport efforts.

5 Summary and Outlook

The approach presented in this article has been developed to illustrate the potential of data-based business models in the field of production management. In particular, the developed framework includes three core categories of business model types with eight corresponding innovation paths and thirteen specific production data-based BMIs.


Thereby, the framework depicts the spectrum of strategic opportunities that producing firms can leverage to generate value from proprietary production data. While the presented demonstrator tool OptiWo mainly focuses on increasing transparency over the existing distribution of value added in production networks, the next step will be to support the decision maker in proactively designing the production network by illustrating different network scenarios and assessing their impact on strategic targets. By supporting the network designer in creating different network scenarios, sensitivity analyses can be performed, which is becoming increasingly important in uncertain business environments. Furthermore, the tool-based decision support should be developed further with regard to identifying an optimal interaction of data-based decision preparation and integrated expert knowledge. Therefore, different user groups should be surveyed to analyze their interaction with the tool and to evaluate the different tool features. Thus, a new way of strategic decision-making can be developed through autonomous decision preparation, enabling decision makers to focus on the value-adding part of decisions in designing future production structures.

Acknowledgement. The authors would like to thank the German Research Foundation DFG for the kind support within the Cluster of Excellence “Integrative Production Technology for High-Wage Countries”.

References
1. ElMaraghy, H., et al.: Product variety management. CIRP Ann. 62(2), 629–652 (2013)
2. Schuh, G., Potente, T., Varandani, R., Schmitz, T.: Global footprint design based on genetic algorithms. CIRP Ann. 63(1), 433–436 (2014)
3. Brecher, C., Özdemir, D.: Integrative Production Technology. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-47452-6
4. Wang, J., Zhang, J., Wang, X.: A data driven cycle time prediction with feature selection. IEEE Trans. Semicond. Manuf. 31(1), 173–182 (2018)
5. Roth, A.: Einführung und Umsetzung von Industrie 4.0. Springer, Heidelberg (2016)
6. Brecher, C., Klocke, F., Schmitt, R., Schuh, G. (eds.): Industrie 4.0: Aachener Perspektiven, 1st edn. Shaker, Herzogenrath (2014)
7. Lucke, D.: Smart factory. In: Westkämper, E., Spath, D., Constantinescu, C., Lentes, J. (eds.) Digitale Produktion, pp. 251–269. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-20259-9_21
8. Schuh, G., Klocke, F., Straube, A.M., Ripp, S.: Integration als Grundlage der digitalen Fabrikplanung. VDI-Z Int. Prod. 144(11/12), 48–51 (2002)
9. Shariatzadeh, N., Lundholm, T., Lindberg, L., Sivard, G.: Integration of digital factory with smart factory based on internet of things. Proc. CIRP 50, 512–517 (2016)
10. Yin, S., Kaynak, O.: Big data for modern industry: challenges and trends [point of view]. Proc. IEEE 103(2), 143–146 (2015)
11. Porter, M.E.: Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, New York (2004)
12. Friedli, T., Lanza, G., Schuh, G., Reuter, C., Arndt, T.: Industrie 4.0 - ein Beitrag zur Entwicklung von “Smart Networks”. ZWF 110(6), 378–382 (2015)
13. Rowley, J.: The wisdom hierarchy: representations of the DIKW hierarchy. J. Inf. Sci. 33(2), 163–180 (2007)


14. Tempich, C., Bodenbenner, P., Feuerstein, L.: Turning data into profit: success factors in data-centric business models (2011)
15. Osterwalder, A., Pigneur, Y.: Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. Wiley, Hoboken (2010)
16. Holm, A.B., Günzel, F., Ulhøi, J.P.: Openness in innovation and business models: lessons from the newspaper industry. IJTM 61(3/4), 324 (2013)
17. Casadesus-Masanell, R., Zhu, F.: Business model innovation and competitive imitation. Strat. Manag. J. 34(4), 464–482 (2013)
18. Dinter, B., et al.: Big Data und Geschäftsmodell-Innovationen in der Praxis (2015)
19. Walker, R.: From Big Data to Big Profits: Success with Data and Analytics. Oxford University Press, New York (2015)
20. Nickerson, R.C., Varshney, U., Muntermann, J.: A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 22(3), 336–359 (2013)
21. Auschitzky, E., Hammer, M., Rajagopaul, A.: How big data can improve manufacturing. McKinsey & Company Operations
22. Delen, D., Demirkan, H.: Data, information and analytics as services. Decis. Support Syst. 55(1), 359–363 (2013)
23. Reuter, C., Hausberg, C.: An IT driven approach for global production network design. In: The 2015 IAENG International Conference on Communication Systems and Applications, pp. 888–893. IAENG, Hong Kong (2015)

Real-Time Data Sharing in Production Logistics: Exploring Use Cases by an Industrial Study

Masoud Zafarzadeh1, Jannicke Baalsrud Hauge1, Magnus Wiktorsson1, Ida Hedman2, and Jasmin Bahtijarevic2

1 KTH Royal Institute of Technology, Stockholm, Sweden
[email protected]
2 AstraZeneca, EMEA Operations, Södertälje, Sweden

Abstract. Production logistics systems often consist of a number of low value-added activities combined with a high degree of manual work. Therefore, increasing effectiveness and responsiveness has always been a target for production logistics systems. Sharing data in real time may have considerable potential to increase both. The first step towards realising real-time data sharing is a clear understanding of the current state of PL systems and their requirements. This work performs an ‘as-is’ situation analysis of an industrial case, aiming at identifying which areas and applications would benefit most from real-time data sharing. The findings are a step towards a better understanding of CPS and Industry 4.0 in production logistics.

Keywords: Production logistics · Real-time data · Efficiency

1 Introduction

Production logistics (PL) systems often comprise a number of low value-added activities combined with a high degree of manual work [1]. To improve this situation, automating operations and streamlining processes is on companies’ agendas [1]. This transition is often stepwise, and thus it is not unusual for companies to have inhomogeneous technologies within their logistics systems, which in turn may reduce the overall effect of implementing a technology component. Typical problems are incompatibility and a lack of interoperability across the system, leading to low information visibility and causing long response times and low efficiency in PL systems [2]. High information visibility allows an improved decision-making process and is therefore an important factor for improving the value of PL systems for stakeholders [3]. Furthermore, it can mitigate information-sharing challenges caused by implementing inhomogeneous technologies. One of the main items facilitating effective information sharing is real-time data sharing [4]. Defining real-time information is contested, but Brahim et al. [3] define it as “information that constantly allows action on the system in order to react rapidly and in suitable ways with respect to environment dynamics”. In line with this definition, it is argued that a system is real-time when the information is still valid and relevant after collection and processing [3]. Reviewing the literature shows that using information flows in real time has benefits for PL systems in terms of integrated end-to-end delivery planning, more accurate material delivery and increased flexibility [5]. Real-time data enables PL systems to react agilely and swiftly to changes and unplanned events and can help to improve aligned decision-making among different stakeholders [6, 7]. Further examples of these benefits are reductions of bullwhip effects, misplacement and theft, and identification errors, better replenishment policies and scheduling, the securing of dangerous and temperature-controlled goods, improved traceability of products in routing and improved distribution planning [3]. Besides all this, real-time data is an imperative for any CPS (cyber-physical system) and for the Industry 4.0 concept [4, 8]. So far, research has tried to clarify these concepts and their applications for PL systems. For example, Kagermann et al. [4] have depicted a stepwise digitalisation model for companies in which collecting data in real time is necessary for a successful transition towards Industry 4.0. Qu et al. [9] have discussed the AUTOM infrastructure, which facilitates real-time communication and helps to achieve better resource management in production logistics; it describes a general process for creating a Radio Frequency Identification (RFID)-enabled shop floor. Lee et al. [10] have proposed the 5C architecture for CPS implementation, which ensures real-time data circulation between the physical world and cyberspace. Tu et al. [11] have developed an emulation testbed in order to evaluate the effects of implementing IoT-based CPS on PL activities. Most of the activities in their testbed depend heavily on real-time data collected by means of IoT technologies.
The means of implementation differ, but examples can be categorized as IoT technologies such as RFID, Real-Time Locating Systems (RTLS) and Wireless Sensor Networks (WSN) [12]. In addition, cloud computing plays an important role in using IoT technologies more efficiently in order to create CPS [13]. PL systems are complex and highly dynamic [1, 3], and introducing new concepts and technologies that enable real-time data sharing requires a clear understanding of the existing situation. The authors have found that most existing research in this area is still at a conceptual level and that empirical studies are scarce. To be able to realise these concepts in real PL systems, it is first essential to understand which areas and processes in PL systems need more attention and can benefit most from real-time data sharing. Therefore, the aim of this paper is to identify real-time data sharing use cases in PL systems by analysing the current state of a case study in production logistics. Finally, some suggestions regarding enabling technologies are briefly discussed.
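The validity criterion from Brahim et al. [3] can be sketched as a freshness check; the validity windows per data type are assumed values for illustration, not from the paper:

```python
# Hedged sketch: data is treated as "real-time" only if it is still valid when
# used, i.e. younger than an acceptable age for its data type.
import time
from typing import Optional

VALIDITY_WINDOW_S = {        # assumed acceptable age per data type, in seconds
    "agv_location": 1.0,     # vehicle positions go stale almost immediately
    "elevator_status": 5.0,
    "inventory_level": 60.0,
    "temperature": 30.0,
}

def is_still_valid(data_type: str, collected_at: float,
                   now: Optional[float] = None) -> bool:
    """True if the reading is young enough to act on for this data type."""
    now = time.time() if now is None else now
    return (now - collected_at) <= VALIDITY_WINDOW_S.get(data_type, 0.0)

t0 = 1_000.0
print(is_still_valid("agv_location", t0, now=t0 + 0.5))  # True (fresh enough)
print(is_still_valid("agv_location", t0, now=t0 + 2.0))  # False (stale)
```

The point is that "real-time" is relative to the dynamics of each process, not a single fixed latency.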

2 Research Methodology and Approach

According to Yin [14], a case study has the potential to investigate a phenomenon in its context. The number of empirical studies in this area appears rather scarce [15], and solutions still need to be found at the individual level. Moreover, because of the complex nature of this phenomenon, a case study is an appropriate method to shed light on some specific problems. It might even help to address more generic solutions based on this case study. The case company was visited several times, and experts were interviewed to establish the current state of the PL processes as well as the company’s vision regarding PL. In addition, a group of managers at the case company participated in several meetings with the authors in order to clarify and confirm the company’s current and future plans. Further data regarding logistical routines and procedures were reviewed by the authors to form a more detailed picture.

3 Case Study

AstraZeneca is an international producer of highly advanced pharmaceuticals, has almost 61,100 employees in 18 production sites and supplies over 100 different markets. The company pays enormous attention to product and process quality in all parts of its supply chain. AstraZeneca has recognized that its logistics processes can be improved, that there is a low degree of information transparency and that it has inhomogeneous technologies in terms of both interoperability and maturity. Therefore, AstraZeneca is a good representative of an industry in digital transition facing the problems described above, and thus very suitable as a case study. One of the major production sites, located in Sweden, comprises a central warehouse as well as high-bay, fixed-bearing, cold-chain and picking storages as its warehousing system. Today, on average, each item needs 20 touches to travel from the initial stages to the end of the process. AstraZeneca envisions a “one-touch” strategy in order to achieve a more efficient and responsive process flow within its production logistics with inhomogeneous levels of automation and interconnectivity. The research presented here was carried out partly within the project ‘DIGILOG’ [16] and partly as a practical case study for undergraduate students at KTH. The next section presents the AS-IS analysis, including an analysis of current problems, real-time data sharing use cases and some possible technical solutions to overcome the challenges.

Fig. 1. Schematic flow of production logistics at AstraZeneca

3.1 Production Logistics at AstraZeneca

The focus of the study is the analysis of the two areas ‘goods receiving’ and ‘internal material handling’ at AstraZeneca. Figure 1 shows the material flow in the case company. The figure is numbered 1 to 4 for ease of reading; each number represents one sub-process and contains the relevant activities. For internal logistics, after unloading from the trucks (1), the pallets go through quality control (2). Then, after barcode scanning, they are transferred to the pallet exchange zone, since some of the wooden pallets need to be changed to hygiene pallets (3). Next, the pallets are sent to the transit hall to be shipped to the ‘fixed bearing’ storage, the semi-finished storage, the cold storage or directly to the production area (4). Material movement takes place across several floors by means of trucks and elevators. Before and after storing the items, as well as during quality control, the ERP (Enterprise Resource Planning) system must be updated by the operators.

3.2 Real-Time Data Sharing Use Cases at AstraZeneca

AstraZeneca’s strategy to automate some of the PL processes and the existing issues in PL are the two major forces motivating real-time data sharing in order to meet the one-touch vision’s objectives. The issues mentioned here are those that are in some way affected by data sharing among different stakeholders. This is reflected in Fig. 2. Each of these driving forces is discussed in the next step, leading to use cases for each category.

Fig. 2. Real-time data sharing in the context of production logistics at AstraZeneca

A. Use cases to address existing issues. After reviewing the PL process, interviewing experts at AstraZeneca and analysing the results of the student projects, the following issues were identified. 1. For goods receiving, there is no information regarding the arrival time, the type of items or the exact quantity of the items ordered from the central warehouse. Logistics personnel only become aware when trucks arrive at the loading docks. 2. A similar issue exists when parts are shipped from the transit hall to the production areas, which receive no information regarding type, quantity and arrival time. Logistics personnel act based on their experience in collecting received items.


3. Transportation of waste is a logistics duty. Logistics personnel move around regularly and remove waste bins. Sometimes there is no waste to be removed, which causes unnecessary movement for the personnel due to the lack of information. 4. Destination information is stored in barcodes labeled on each of the pallets. Operators use laser scanners to read the tags on the pallets to make sure items are delivered to the right destination. Sometimes labels are in poor condition for scanning. The information visibility is rather low, and manual scanning can be considered a low value-added activity. 5. Storing of materials in the picking storage follows no specific order; there is no information on the exact address of each pallet, which can be placed in any available storage position. After placement, the storage address is updated in the ERP system for later retrieval. 6. Pallets are transported through the elevators by trucks. Since there is no information regarding the availability of the elevators, truck drivers sometimes have to wait for an available elevator or even deal with unavailable elevators. In general, the elevators have no means of connection and information sharing with the world around them. 7. The ERP system needs to be updated at several stages, such as storing items, retrieving items and registering quality inspection results. All of this happens manually, creating low value-added activities for the operators.
B. Use cases caused by planned automation. AstraZeneca wishes to automate some of the existing processes. The following describes what information is required for an effective implementation of automation. 1. Automated material transportation, vertically and horizontally. In this case, the following data should be shared in real time: location, estimated travel time, operational status and destination. To be able to use the elevators, the elevators’ position, operational data, maintenance data, estimated travel time and capacity should be shared in real time with the automated transportation system. 2. Automated visual quality inspection. Quality inspection does not only concern finding defects but also identifying the correct item. Identification data, the quantity of items and the picking location need to be shared in real time with both the transportation system and the ERP system. 3. Automated storing and retrieval of parts from the picking storage. The inventory level of each item needs to be shared with the ERP system in real time. To make the automated storing and retrieval system effective, location, estimated travel time, operational status and destination need to be shared among the system agents. This enables the system to plan and schedule dynamically and to avoid probable collisions. 4. Automated communication between the conveyor and the transportation system. Data regarding the location and positioning of items placed on the conveyor belts needs to be shared with both the conveyor system and the transportation system.
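As a sketch of how such status data could be shared in real time (for instance for the elevators in use case B.1), a minimal in-process publish/subscribe bus is shown below. Topic names and payload fields are assumptions for illustration, not AstraZeneca's implementation:

```python
# Hedged sketch: a tiny pub/sub bus. An elevator publishes its status; the
# automated transport system subscribes and receives the data immediately.
import json
import time
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self.subs[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        msg = json.dumps(payload)            # serialized as it would be on the wire
        for handler in self.subs[topic]:
            handler(json.loads(msg))

received = []
bus = Bus()
bus.subscribe("elevator/1/status", received.append)   # transport system listens
bus.publish("elevator/1/status", {                    # elevator reports state
    "position_floor": 2,
    "operational": True,
    "est_travel_time_s": 40,
    "capacity_pallets": 2,
    "ts": time.time(),
})
print(received[0]["operational"])  # True
```

In practice such messages would travel over a broker protocol rather than in-process, but the data contract (position, operational status, travel time, capacity) is the essential part.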

M. Zafarzadeh et al.

3.3 Real-Time Data Sharing and Enabling Technologies

Based upon the analysis of which information is required to overcome the challenges identified in Sect. 3.2, the following information needs to be provided in real-time: (1) Location and position; (2) Operational data (type of mission, availability for the next mission); (3) Time (estimated arrival time); (4) Items identification (type and quantity); (5) Maintenance data (service time, service appointments, etc.); (6) Quality inspection results (OK, not OK); (7) Speed (to be used for fleet management); (8) Parts temperature. The next step is therefore to map which existing technologies can ensure real-time data sharing for AstraZeneca. Table 1 lists these technologies, the data they should share in real-time, and the PL processes they support at AstraZeneca. Part of this work was carried out by KTH students at ML1136 in January and February 2018.

Table 1. Technologies ensuring real-time data sharing in AstraZeneca PL processes.

Technology | Data | Process
AGV fleet management (Automated Guided Vehicle) | Location, Operational data, Time, Maintenance data, Speed | Unloading from the trucks; MM (material movement) between quality zone and the conveyor system; MM from and to the drop-zone; MM through the elevators; MM from the warehouse to the production
RTLS (Real-Time Locating System) | Location | MM between quality zone and the conveyor system; MM from and to the drop-zone; MM through the elevators; MM from the warehouse to the production
RFID (Radio Frequency Identification) | Items identification | Unloading from the trucks; Updating the ERP system; Visual quality control
Automated vision inspection system | Quality inspection results | Visual quality control
WSN (Wireless Sensor Networks) | Operational data, Temperature | Visual quality control; MM through the elevators; Drop-zone
AS/RS (Automated Storage and Retrieval System) | Location, Operational data, Time, Maintenance data, Speed, Items identification | Storage and retrieval from the picking storage; Updating the ERP system

Real-Time Data Sharing in Production Logistics


In the next step, a brief explanation of the available technologies and their use in PL processes is presented. The types and number of existing technologies in this area vary. To decide which technology might be suitable, communication capability, flexibility, technology maturity and spatial constraints were considered. Other criteria, such as cost and AstraZeneca's competence to adopt these technologies, were not considered, either because no information was available or because they would severely limit the study. Trucks can be unloaded using AGVs, conveyor belts or crane systems. Even though a more detailed study is needed to select the right technology for implementation, considering flexibility, implementation effort and compatibility with other technologies, using AGVs seems most reasonable at this stage. AGV types, variants and navigation techniques differ: some follow magnets on the floor while others use radio waves or lasers. 'Toyota' [17] and 'Rocla' [18] are two examples of AGVs that use lasers for navigation [19]. Another example is 'MIR', which scans the area and then uses a camera for navigation. The advantage of solutions similar to MIR is the 'fleet management system', which enables the logistics system to plan and schedule multiple transports simultaneously [20]. In addition, a WSN can support material delivery to and from the conveyor belt system by identifying when items should be delivered and detecting any stoppage in the line. For pallet exchange, automated pallet exchange machines or automated load transfer systems can be used; an example is the load transfer system from "Cherry's Industrial Equipment" [21]. To make material movement through the elevators more efficient, connecting the WSN and RTLS with the AGV fleet management can ensure that items are transported vertically in a fully automated way, with possibly shorter delivery times.
The types of sensors and interconnection methods need further analysis. Regarding the quality control process, in addition to RFID and industrial cameras, dimension-measuring technologies might be necessary; an example is the VIPAC D3 [22], which uses infrared laser scanners. AS/RS solutions such as Autostore [23] can be considered for automating storage and retrieval. However, implementing an AS/RS requires heavy investment, which makes this suggestion not critical for the first step.

4 Discussion and Conclusion

This paper has identified use cases for real-time data sharing in PL by investigating an industrial case study at AstraZeneca. The findings show that PL systems should have access to data within a specific time frame to be effective and responsive. Otherwise, data might only have value for historical analysis and not for immediate use by other stakeholders such as production line units. Goods receiving, delivery to production lines, waste transportation, storing and retrieval from storage, transportation through elevators and updating the ERP system will benefit most from sharing data in real-time. The findings of this case study reveal two major issues in the case company. The first is a lack of logistical data such as arrival times, shipment quantities, shipment locations and transportation equipment positions. The second is the difficulty of retrieving data, e.g. reading barcodes, finding storage addresses and updating the ERP system on several occasions. To deal with these issues, sharing data in real-time is required. First, some of the processes need to be automated, not only to facilitate real-time data sharing but also to increase effectiveness and make the system less vulnerable to problems caused by human factors. Some automation suggestions are presented in Sect. 3. It should be noted that automation creates new requirements for data sharing, which are discussed in the form of use cases in Sect. 3. Further research can focus on evaluating the results of real-time data sharing considering enabling technologies.

Acknowledgment. The authors would like to acknowledge the financial support from Vinnova and Produktion2030 to the project DigiLog.

References

1. Granlund, A., Wiktorsson, M.: Automation in internal logistics: strategic and operational challenges. Int. J. Logist. Syst. Manag. 18(4), 538–558 (2014)
2. Khurana, M.K., Mishra, P.K., Singh, A.R.: Barriers to information sharing in supply chain of manufacturing industries. Int. J. Manuf. Syst. 1, 9–29 (2011)
3. Brahim-Djelloul, S., Estampe, D., Lamouri, S.: Real-time information management in supply chain modelling tools. Int. J. Serv. Oper. Inform. 7(4), 294–312 (2012)
4. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic initiative INDUSTRIE 4.0. acatech, Frankfurt (2013)
5. Hoffman, E., Rusch, M.: Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 89, 23–34 (2017)
6. Ward, P., Zhou, H.: Impact of information technology integration and lean/just-in-time practices on lead-time performance. Decis. Sci. 37(2), 177–203 (2006)
7. Cantor, D.E.: Maximizing the potential of contemporary workplace monitoring. J. Bus. Logist. 37(1), 18–25 (2016)
8. Obitko, M., Jirkovský, V.: Big data semantics in Industry 4.0. In: Mařík, V., Schirrmann, A., Trentesaux, D., Vrba, P. (eds.) HoloMAS 2015. LNCS (LNAI), vol. 9266, pp. 217–229. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22867-9_19
9. Qu, T., Yang, H., Huang, D., Zhang, G., Luo, Q., Qin, Y.: A case of implementing RFID-based real-time shop-floor material management for household electrical appliance manufacturers. J. Intell. Manuf. 23(6), 2343–2356 (2012)
10. Lee, J., Bagheri, B., Kao, H.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
11. Tu, M., Lim, M.K., Yang, M.-F.: IoT-based production logistics and supply chain system – Part 2. Ind. Manag. Data Syst. 118(1), 96–125 (2018)
12. Lee, I., Lee, K.: The Internet of Things (IoT): applications, investments, and challenges for enterprises. Bus. Horiz. 58(4), 431–440 (2015)
13. Wang, L., Wang, X.: Cloud-Based Cyber-Physical Systems in Manufacturing. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-67693-7
14. Yin, R.K.: Case Study Research: Design and Methods, 4th edn. SAGE, London (2009)
15. Ruiz, E., Syberfeldt, A., Urenda, M.: The Internet of Things, Factory of Things and Industry 4.0 in manufacturing: current and future implementations. In: Conference on Manufacturing Research, UK, pp. 221–226 (2017)
16. DigiLog page. https://www.vinnova.se/en/p/digilog—digital-and-physical-testbed-forlogistic-operations-in-the-production/. Accessed 19 Apr 2019
17. Toyota homepage. https://toyota-forklifts.se/last. Accessed 19 Apr 2019
18. Rocla homepage. https://www.rocla-agv.com/. Accessed 19 Apr 2019


19. Feledy, C., Schiller, M.: A State of the Art Map of the AGVS Technology and a Guideline for How and Where to Use It. Master's thesis, Lund University, Lund (2017)
20. MIR homepage. https://www.mobile-industrial-robots.com/en/. Accessed 19 Apr 2019
21. Cherry's Industrial Equipment homepage. https://cherrysind.com/pallet-changers.html. Accessed 19 Apr 2019
22. Vitronic homepage. https://www.vitronic.com/. Accessed 19 Apr 2019
23. Swisslog homepage. https://www.swisslog.com/. Accessed 19 Apr 2019

Scenarios for the Development and Use of Data Products Within the Value Chain of the Industrial Food Production

Volker Stich1, Lennard Holst1, Philipp Jussen1, and Dennis Schiemann2

1 Research Institute for Industrial Management (FIR) at RWTH Aachen, Campus-Boulevard 55, 52074 Aachen, Germany
[email protected]
2 Lindt & Sprüngli Germany GmbH, Süsterfeldstraße 130, 52072 Aachen, Germany

Abstract. Industrial food production is currently caught between the increasing demands of numerous stakeholders, economic profitability and the challenges of digitization. One way to face these various challenges is the aggregation of data into higher-value, independent data products that can be offered and sold on a buyer's market. Large amounts of heterogeneous data are already available in the value chain of industrial food production, e.g. from the data-driven harvesting of primary products, further processing by interconnected production facilities and the information-intensive distribution of products to end consumers. However, this data is usually only evaluated and used locally to optimize internal processes or, at most, within comprehensive partnerships. The purpose of this paper is to identify new revenue opportunities for current and future players in industrial food production by using data as an independent economic good (data products). For this purpose, scenarios for the development and use of data products via Industrial Internet of Things platforms are developed for a food-technical reference process, industrial chocolate production, and its value chain. On this basis, examples of different types of data products and their value propositions are derived. The results can serve not only food producers and relevant stakeholders but all industrial producers as input for the future, yield-increasing orientation of their business models.

Keywords: Industrial food production · Chocolate production · Food value chain · Data products via IIoT · Scenario analysis · Data economy

1 Introduction

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 294–302, 2019. https://doi.org/10.1007/978-3-030-29996-5_34

The food industry plays a decisive financial and social role in the German economy, with a turnover of 179.6 billion euros and more than 595,000 employees in 2017. It is the fourth-largest economic sector in Germany and has a globally significant influence on politically and ecologically relevant issues such as climate change, land usage, public health and environmental protection [1–3]. Along the entire value creation chain, industrial food producers face various internal and external challenges these days [4]. Global competition leads to increasing cost pressure, while at the same time a large number of stakeholders exert influence on food production ecosystems. Manufacturers are forced to meet the increasing requirements of consumers and political regulations, which demand high quality and safety, sustainability, tracking and tracing of products, and versatility and availability everywhere and at any time. Figure 1 shows the key stakeholders of industrial food production:

Fig. 1. Mapping of stakeholders in the industrial food production ecosystem, based on [5]

In addition to the requirements of the relevant stakeholders, digitization is one of the greatest challenges and opportunities for industrial food producers. According to a recent survey by Bitkom Research on behalf of the Federation of the German Food Industry, 70% of industrial food producers consider digitization challenging, while only 29% have their own dedicated digitization team. On the other hand, 98% of the companies expect increasing process efficiency as a result of digitization, and 66% already use digital technologies [6]. Due to the growing data base in the food industry, this paper addresses the chances of overcoming some of these numerous challenges through the development of data products (cf. Sect. 2). Through modern possibilities of mass data production, storage and processing via Industrial Internet of Things (IIoT) platforms, data can be transformed from a by-product to the actual core of the value proposition itself and can thus


be traded as an independent asset (data product) via suitable marketplaces [7, 8]. IIoT platforms are technical systems that collect information about the use of physical things, e.g. production facilities, which are installed and operated over a distributed area, and make this information available for further processing, e.g. for the development of apps [8, 9]. For this purpose, it is necessary to consider the framework conditions for the development, trade and use of data products in a defined research area. Because food production is very broad and varied, the production of chocolate is used as the object of investigation, i.e. as a reference process, in this paper. Chocolate production combines a large number of comprehensive input factors and widely used processes. While cocoa beans are subject to a manual harvesting process and thus generate little automated data input, other ingredients, such as sugar, can be obtained through a highly automated and data-driven beet harvest. The global supply processes and the further processing into cocoa mass and chocolate also generate large amounts of data, as do sales and customer surveys. However, a comprehensive use of the data beyond a company's own ecosystem is currently not state of the art. Therefore, this paper presents research on data products and prognoses regarding the future of chocolate production in Sect. 2. Additionally, a systematic scenario analysis is described (Sect. 3) and conducted (Sect. 4) to examine data products with regard to their potential for solving the presented challenges of industrial food production.
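The IIoT-platform role described above, collecting usage information about distributed physical things and making it available for further processing, can be sketched minimally in Python. All names here are illustrative assumptions, not a specific product's API.

```python
from collections import defaultdict

# Minimal in-memory sketch of an IIoT platform: physical things report
# usage records, and apps retrieve them for further processing.
class IIoTPlatform:
    def __init__(self):
        self._records = defaultdict(list)

    def ingest(self, thing_id: str, record: dict) -> None:
        """A physical thing (e.g. a production facility) reports usage data."""
        self._records[thing_id].append(record)

    def query(self, thing_id: str) -> list:
        """An app retrieves all records reported by one thing."""
        return list(self._records[thing_id])

platform = IIoTPlatform()
platform.ingest("conching-machine-1", {"throughput_kg_h": 410})
platform.ingest("conching-machine-1", {"throughput_kg_h": 395})
```

A real platform would add authentication, persistence and streaming interfaces, but the essential pattern, ingest from distributed things and query for apps, is the one shown here.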

2 State of Research

Currently, there are no scenario descriptions for the use of data products in the chocolate production value chain in the scientific and research literature. Therefore, data products (their different types and a development process) and scenarios for future developments in chocolate production are examined separately. The results of this literature analysis show the research gap in the application of data product management within the chocolate production industry.

2.1 Data Products

The significance of data as an actual value driver of a product or service is increasing rapidly. The change from data as a by-product or tool to the core of a value proposition to customers represents a new perspective, especially for industrial production companies, which are used to developing, building and handling physical products. Several approaches can be found in the literature that apply product management methods to the data perspective in order to gain insights about data products and data product management [7, 10]. Within this paper, three different types of data products according to Tempich are examined [7]: Type 1 – Data product as a service: Data can be used to generate direct sales, i.e. number of data × price = sales. Examples are stock exchange, address or weather data.
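The Type 1 relation stated above (number of data × price = sales) can be written out directly; the function name and example values below are illustrative.

```python
def type1_sales(records_sold: int, price_per_record_cents: int) -> int:
    """Type 1 'data product as a service': revenue scales linearly with
    the number of data records sold (number of data x price = sales).
    Prices are kept in integer cents to avoid floating-point rounding."""
    return records_sold * price_per_record_cents

# e.g. selling 10,000 weather records at 2 cents each
revenue_cents = type1_sales(10_000, 2)
```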


Type 2 – Data enhanced product: Data can be used to enrich physical or virtual products. In this case, the change in sales of the physical product corresponds to the sales generated by the data. Examples are the Nike+ compatible products. Type 3 – Data product as digital insight: Data is used to improve sales activities or quality without involving external entities. This data product is only used internally (partnerships included) and does not generate direct revenue. The development process of data products is described by Sands. According to her, it should follow product management set-ups with some major changes, since e.g. the value of data products increases over time, correlating with the amount of data and the number of users. The data component adds a layer of complexity to product management that should be tackled by emphasizing cross-functional collaboration, by evaluating data products in the long term, and by starting iteratively and simply when setting up a new digital business model [10].

2.2 Future Trends and Scenarios for the Chocolate Production

The future developments within chocolate production have been discussed in several interviews with leading managers of the chocolate manufacturer Lindt & Sprüngli Germany. In addition to the technical and organizational challenges of digitization and data management within the production value chain, the issue of sustainable cocoa cultivation plays a very important role in future scenarios. A lot has been invested in the past to sustain and improve conditions in the cocoa-cultivating countries [11]. Further support for local farmers to increase crop yields is expected to be needed, as crop yields currently vary by a factor of 10. Climatic conditions are another driver of future scenarios. At the current speed of global warming, cocoa plants will no longer be cultivable by the middle of the century because the soil around the equator will be too dry. Furthermore, the use of modern technologies plays a decisive role in the future of chocolate production. This includes new manufacturing technologies, the traceability of molds in production using tracking technology, automatic inline quality control, automatic process engineering set-ups at the production facilities, the use of IIoT platforms for comprehensive data analysis and cross-industrial networking, as well as the prediction of the remaining lifetime of technical components. A stronger integration of the customer is also expected in the future. This is reflected in increasing customer requirements for information on the safe production of chocolate and in order batch sizes.

3 Methodology

At the beginning of the development of new products, such as data products, it is necessary to identify internal and external factors that influence the company's ecosystem and the new products to be developed. A proven method for this is the scenario analysis, which is characterized by the design of alternative visions of the future [12, 13]. Various approaches to scenario analysis are suggested in the literature, but most of them differ only in the accentuation of the content or the formal


presentation, so that a general approach can be abstracted. Basically, the individual steps of the scenario analysis must be determined separately for each situation, depending on the object of investigation. Due to the holistic approach, the focus is placed on the first five steps of the scenario analysis according to Reibnitz and Geschka [14, 15] (Fig. 2):

Fig. 2. The five steps of the scenario analysis research method, based on [14, 15]

The first step is the definition of the scenario field, i.e. a description of the field of investigation (thematic, temporal and territorial) whose future is to be mapped in the form of scenarios. In the second step, a large number of factors influencing the object of investigation are systematically determined through environment and influence analyses. Through literature analyses and expert workshops, the influencing factors are weighted, and key factors, i.e. the factors with the most significant influence on the object of investigation, are selected. The third step involves the development of alternative future projections for the defined key factors. Three projection types are developed for this purpose: an optimistic, a neutral trend and a pessimistic type, based on literature research and expert interviews. In the fourth step, the three types of each key factor are evaluated with regard to their mutual consistency. The program ScenarioWizard is used to calculate consistent cross-impact matrices and to form various consistent scenario bundles. In the last step 5, the scenario bundles are finally described, interpreted and used to analyze implications for the initial question of how to develop and use data products within industrial chocolate production as well as food production in general.
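The consistency evaluation in step 4 can be illustrated with a small sketch: given pairwise consistency ratings between key factor projections, every candidate scenario bundle is scored, and the most consistent bundles form the scenarios. This is a simplified stand-in for the actual ScenarioWizard cross-impact calculation, with invented factor names and ratings.

```python
from itertools import product

# Simplified consistency scoring for scenario bundles; each key factor
# has three projections, as in step 3 of the methodology.
PROJECTIONS = ["optimistic", "trend", "pessimistic"]

def bundle_score(bundle, ratings):
    """Sum the pairwise consistency ratings of one scenario bundle.
    `bundle` maps key factor -> chosen projection; `ratings` maps
    ((factor_a, proj_a), (factor_b, proj_b)) -> consistency value."""
    factors = sorted(bundle)
    total = 0
    for i, a in enumerate(factors):
        for b in factors[i + 1:]:
            total += ratings.get(((a, bundle[a]), (b, bundle[b])), 0)
    return total

def most_consistent(factors, ratings):
    """Enumerate every projection combination and return the bundle
    with the highest total consistency score."""
    bundles = (dict(zip(factors, combo))
               for combo in product(PROJECTIONS, repeat=len(factors)))
    return max(bundles, key=lambda b: bundle_score(b, ratings))

# Toy ratings for two invented key factors: a pessimistic climate
# projection is inconsistent with optimistic sustainability, while the
# two trend projections reinforce each other.
ratings = {
    (("climate", "pessimistic"), ("sustainability", "optimistic")): -2,
    (("climate", "trend"), ("sustainability", "trend")): 3,
}
best = most_consistent(["climate", "sustainability"], ratings)
```

With 12 key factors the full enumeration covers 3^12 bundles, which is why tool support such as ScenarioWizard is used in practice.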

4 Results

4.1 Execution of the Scenario Analysis (Steps 1 to 4)

The research question on future scenarios for data products within the industrial food production value chain developed during the execution of the research project EVAREST (cf. Acknowledgements). As the first step, this research question defined the scenario field, limited to the observation area of chocolate production. In Step 2, the key factors were selected through an in-depth scenario field analysis: relevant literature and studies concerning data products were examined, and current global and digitization developments in the chocolate industry were investigated with industrial experts. Using the theoretical, macroeconomic model of the PESTEL analysis, a long list of influencing factors was set up and the factors were sorted into the categories political, economic, social, technological, ecological and legal [16]. In expert workshops with Lindt & Sprüngli Germany, 12 key factors were selected,


described in their current state, and three future projections, an optimistic, a trend and a pessimistic type, were developed for each key factor (Step 3). In Step 4, the program ScenarioWizard was used to set up weighted cross-impact matrices for the future projections. Thus, three consistent global scenario alternatives were determined that exhibit the highest degree of cross-linking between the different key factor projections (Fig. 3).

Fig. 3. The 12 selected key factors in the scenario field analysis: A: Sustainability of cocoa bean cultivation; B: Climatic change; C: Political restrictions; D: Industrie 4.0; E: Track & tracing; F: Individualized clients; G: IIoT platforms; H: Digital business models; I: Data market places; J: Data security; K: Capabilities of data product management; O: Education

4.2 Scenario Description and Interpretation (Step 5)

Based on the consistency analysis, the three global scenarios are described below from a future perspective. The different scenarios are illustrated in Fig. 4 with regard to the development and monetization possibilities of data products within each scenario alternative. Subsequently, conclusions on the use of data products are drawn.

Fig. 4. Development stages of data products, differing in the three global scenarios A to C

Scenario A can be considered a pessimistic extreme scenario. The digitization hype has led to a multitude of redundant systems and data sources. IIoT Platforms are available and in use to aggregate data, but there is no “Single Source of Truth”. The collection of specific, valid data with time stamps fails. Data is only used to optimize the cocoa bean value chain. Due to insufficient broadband expansion this enormous quantity of data cannot be transmitted in real time. Data marketplaces are not profitable.


There are no clear mechanisms for distributing the value generated from possible data products. Companies are too slow to establish comprehensive data science teams that can evaluate the heterogeneous data and combine domain knowledge of chocolate production with intelligent machine learning processes. In the countries where cocoa beans are grown, local political crises and global weather extremes also lead to crop failures that generally hamper the market. Furthermore, the large number of stakeholders makes a concrete orientation of data products, tailored to the various needs, impossible. In this scenario, only data products of type 3, "data products as digital insights", are practicable.

Scenario B can be considered a trend scenario. In particular, the secure use of IIoT platforms, also across manufacturers, enables a consistent data basis for smart services and aggregated data products. The data economy set-up is ready for the trade of data products on suitable marketplaces, with existing mechanisms for the fair distribution of the monetary value added by the various data generators. A high level of traceability of the cocoa bean throughout all stages of the chocolate production value chain can be guaranteed. The continued focus on sustainability in the cultivating areas leads to the increasing collection of valid data about the harvest process. A practical example: the demand of various stakeholders such as politicians, NGOs and consumers for CO2 reduction has increased further and can be addressed in chocolate production by a type 2 data product. Each pack of chocolate carries a QR code that shows the exact manufacturing history, including the overall CO2 emissions of the single chocolate product. A premium price can then be maintained or achieved for these products on the market.

The optimistic extreme scenario C builds on the trend scenario. In this scenario, a global data market is operated by data brokers.
The use of digital technologies such as drones for crop monitoring and the distance measurement of plantations, as well as universally deployed sensors for soil data, leads to enormous amounts of data that can be traded on a real-time-capable stock market. In this scenario C, type 1 data products can be developed and aggregated, as a constant, real-time supply of valid data is guaranteed, while chocolate producers focus intensively on data value creation. Both the optimistic and the trend scenario characterize an environment in which data products can be used for extensive monetization and not only complement the core business of chocolate production but also become a stand-alone digital business model.
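The type 2 data product from scenario B, a per-pack QR code exposing the manufacturing history and CO2 footprint, could rest on a payload along the following lines. All field names and values are illustrative assumptions, not an actual manufacturer's format.

```python
import json

# Illustrative payload for the per-pack QR code described in scenario B.
def build_qr_payload(batch_id: str, stations: list, co2_kg: float) -> str:
    """Bundle the manufacturing history and total CO2 footprint of one
    chocolate product into a compact JSON string, which a QR code
    generator would then encode onto the pack."""
    return json.dumps({
        "batch": batch_id,
        "history": stations,          # ordered processing steps
        "co2_total_kg": round(co2_kg, 2),
    })

payload = build_qr_payload(
    "B-2019-0415",
    ["bean sourcing", "roasting", "conching", "molding", "packaging"],
    co2_kg=0.4172,
)
```

Because the payload aggregates data generated at several value chain stages (harvest, processing, packaging), it is exactly the kind of data product whose added value would need the fair distribution mechanisms described for scenario B.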

5 Conclusion and Outlook

In this paper, key factors for the future development and use of data products within the food production industry were selected, using chocolate production as a reference process. Three global scenario alternatives were developed, determined and illustrated. The validity of the scenario analysis as a strategic planning tool was ensured by the development of individual projections for each key factor and the establishment of cross-impact matrices by chocolate production experts. The further use of trend scenario B as a framework for the technical implementation of a suitable IIoT platform for the production and trading of data products is supported by the results of an internal Lindt maturity study on Industry 4.0. In the future, the key factor projections must be continuously re-evaluated for relevance in the context of changing framework conditions and


stakeholder requirements in chocolate production. The food production specialists in the EVAREST consortium evaluated the general validity of the identified, projected and interpreted key factors regarding their transferability to other branches of the food industry. Figure 5 summarizes the research results at a comprehensive level, giving an estimation of the revenue-increasing potential of the different data product types and classifying them into the different scenario set-ups developed in this paper.

Fig. 5. Summary of the research results for the development and use of data products

Acknowledgements. The research and development project EVAREST that forms the basis for this report is funded within the scope of the "Smart Data Economy" technology program run by the Federal Ministry for Economic Affairs and Energy and is managed by the DLR project management agency. The authors are responsible for the content of this publication.

References

1. Minhoff, C.: Jahresbericht 2017/2018. Bundesvereinigung der Deutschen Ernährungsindustrie e.V., Berlin (2018)
2. Berners-Lee, M., Kennelly, C., Watson, R., Hewitt, C.N.: Current global food production is sufficient to meet human nutritional needs in 2050 provided there is radical societal adaptation. Elem. Sci. Anth. (2018). https://doi.org/10.1525/elementa.310
3. United Nations: Sustainable Development Goals (2015). https://www.un.org/sustainabledevelopment/sustainable-development-goals/. Accessed 28 Mar 2019
4. Beulens, A.J.M., Broens, D.-F., Folstar, P., Hofstede, G.J.: Food safety and transparency in food chains and networks: relationships and challenges. Food Control (2005). https://doi.org/10.1016/j.foodcont.2003.10.010
5. The Orlen Group: Orlen Group's stakeholder map (2015). https://raportzintegrowany2015.orlen.pl/en/the-orlen-group-and-its-environment/the-orlen-group/our-stakeholders.html. Accessed 29 Mar 2019
6. Bitkom Research: Ernährung 4.0 – Digitalisierung bringt Transparenz für Industrie und Verbraucher (2019). https://www.bve-online.de/veranstaltungen/konferenzen/unternehmertag-2019/interviews-ut-2019/beitrag-bitkom-studie. Accessed 10 Apr 2019
7. Tempich, C.: Inovex GmbH – Datenprodukte erklärt! (2017). https://www.inovex.de/blog/datenprodukte-erklaert/. Accessed 28 Mar 2019
8. Dorst, et al.: Digitale Geschäftsmodelle für die Industrie 4.0 (2019)


9. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of Things (IoT): a vision, architectural elements, and future directions. Futur. Gener. Comput. Syst. (2013). https://doi.org/10.1016/j.future.2013.01.010
10. Sands, E.: How to build great data products (2018). https://hbr.org/2018/10/how-to-buildgreat-data-products. Accessed 28 Mar 2019
11. Lindt & Sprüngli AG: Sustainability Report 2017 (2017)
12. Eversheim, W.: Innovationsmanagement für technische Produkte. VDI-Buch. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-642-55768-2
13. Hassani, B.K.: Scenario Analysis in Risk Management: Theory and Practice in Finance. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-25056-4
14. Reibnitz, U.: Szenario-Technik. Instrumente für die unternehmerische und persönliche Erfolgsplanung, 2nd edn. Gabler Verlag, Wiesbaden (1992). https://doi.org/10.1007/978-3-663-15720-5
15. Geschka, H., Hammer, R.: Die Szenario-Technik in der strategischen Unternehmensplanung. In: Hahn, D., Taylor, B. (eds.) Strategische Unternehmungsplanung, pp. 311–336. Physica-Verlag HD, Heidelberg (1990). https://doi.org/10.1007/978-3-662-41484-2_14
16. Yüksel, I.: Developing a multi-criteria decision making model for PESTEL analysis. IJBM (2012). https://doi.org/10.5539/ijbm.v7n24p52

Bidirectional Data Management in Factory Planning and Operation

Uwe Dombrowski, Jonas Wullbrandt, and Alexander Karl

Institute for Advanced Industrial Management (IFU), Technische Universität Braunschweig, Langer Kamp 19, 38106 Brunswick, Germany
[email protected]

Abstract. Due to a growing number of product variants, shorter lead times, and global supply chains, planning and launching production systems is becoming increasingly important, and managing the production ramp-up period becomes a competitive advantage. To handle the increasing complexity and uncertainty of this phase, data availability is a prerequisite for efficient decision making. However, in this early phase of the product and production system lifecycle, data quantity and quality are not guaranteed, owing to the novelty of the processes, technologies and human behaviors involved. This paper analyzes how selected data from the factory planning phase as well as the factory operation phase can be jointly processed by the personnel involved in the ramp-up as value-adding information. Finally, the presented use case, as well as the derived data management approach, will help companies to better manage production ramp-ups in the future.

Keywords: Production ramp-up · Digital factory · Data management

1 Introduction

Due to shorter product lifecycles and a growing range of product variants, the ramp-up phase, i.e. the transition from factory planning to factory operation, becomes increasingly important [1]. In addition, the digital transformation of products and processes leads to a higher availability of data in the product development cycle. In the planning phase, the digital factory approach supports production planning departments with a “network of digital models, methods and tools (…), which are integrated by a consistent data management” [2]. In the operating phase, the interconnection of people, products and other resources in the form of cyber-physical production systems focuses on collecting, analyzing and providing real-time data in order to improve value-adding processes [3]. To ensure a timely and cost-efficient ramp-up phase, the coordinated management of planning data and real data is crucial [4]. The purpose of this paper is therefore to close the “ramp-up data gap” by examining two levers: “exploiting digital factory planning tools” and “enrichment of planning data with real data”. After discussing the characteristics of both fields of action, practical experiences with bidirectional data management in an innovative test environment are presented.

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 303–311, 2019. https://doi.org/10.1007/978-3-030-29996-5_35

The paper concludes with recommendations on how to improve planning accuracy in the digital factory as well as in the factory operations phase.

2 Production Ramp-Up Phase as the Transition from Factory Planning to Factory Operation

2.1 The Ramp-Up Data Gap

The production ramp-up phase can be described as the link between the product development phase and the series production process and therefore represents an important integrating function in the physical implementation of a product’s value-added processes (compare Fig. 1) [5, 6]. The conflicting factors of “low production capacity” and “high production demand” are characteristic of this period. Demand is high because the product is new and market demand is strong. Low production capacity results from unstable processes, little knowledge about the interactions of the socio-technical system, and the low qualification levels of the employees involved [7].

[Figure 1 shows the process organization of the production ramp-up along the development, ramp-up and production phases: product and process development, build-up and adjustment, pilot production, zero series and ramp-up fabrication, with the Digital Factory on the planning side and the cyber-physical production system on the operating side. Planning data and real data (the digital shadow) overlap only partially, leaving a “ramp-up data gap” in smart data availability.]

Fig. 1. Process organization of production ramp-up, based on [5]

As illustrated in Fig. 1, a fundamental data basis along the whole product development process is necessary in order to make use of smart data in all phases. Especially in the unstable and critical production ramp-up phase, it is beneficial in terms of time-to-market reduction to manage high-quality planning data from the factory planning phase as well as real operating data bidirectionally [4]. From the moment the physical production system is built up, real data availability in the form of a digital shadow is beneficial, because real data is more accurate than historical data and is therefore needed to adjust the planning data in the digital factory. The result can be described as the digital twin, in which real data is used in test scenarios to simulate alternatives virtually. Useful improvements can then be applied to the real factory operation. In general, Industrie 4.0 leads to real-time data availability (the digital shadow), which, if integrated into the digital twin by a consistent data management, results in the highest planning accuracy in the digital factory as well as in factory operations [8]. However, in the early production system ramp-up phase, real data is often unavailable because the data-generating sensors and technologies of the cyber-physical system are not yet installed, interconnected and tested. Therefore, DOMBROWSKI et al. define the dashed field in Fig. 1 as the “ramp-up data gap” [5]. In order to minimize this gap and secure an integrated bidirectional data management in the ramp-up phase, two levers are defined:

1. Lever 1 – Exploiting digital factory planning tools: Which tools and methods from the digital factory can be used to process and provide high-quality planning data in the ramp-up phase?
2. Lever 2 – Enrichment of planning data: Which real data is available from the production phase and can be used to complement or enrich planning data for ramping up production?
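The bidirectional loop described above (planning data adjusted by the digital shadow as soon as sensors come online) can be illustrated with a minimal sketch. All parameter names and values below are invented for illustration; they are not taken from the paper or from any particular digital factory tool.

```python
# Sketch of the planning-data/real-data loop: measured values from the
# "digital shadow" overwrite planned assumptions in the "digital twin"
# wherever a sensor already delivers data. Names are hypothetical.

def update_twin(planning_data: dict, shadow_data: dict) -> dict:
    """Overwrite planned parameter values with measured ones where available."""
    twin = dict(planning_data)          # start from the planned model
    for parameter, measured in shadow_data.items():
        if measured is not None:        # sensor installed and delivering data
            twin[parameter] = measured  # real data is more accurate than estimates
    return twin

planned = {"cycle_time_s": 45.0, "scrap_rate": 0.05, "oee": 0.60}
measured = {"cycle_time_s": 52.3, "scrap_rate": None, "oee": 0.48}  # scrap sensor not yet live

twin = update_twin(planned, measured)
print(twin)  # planned values survive only where no real data exists yet
```

In this toy loop, the early ramp-up phase corresponds to many `None` entries in the shadow; as sensors are installed and tested, more planned values are replaced by measured ones and the gap narrows.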

2.2 Lever 1: Exploiting Digital Factory Planning Tools

The planning process of a factory involves a systematic, goal-oriented and phase-structured procedure, which is carried out with the aid of methods and tools [9]. These include, for example, tools for product design, process and layout planning, robot simulation or material flow simulation [2]. Numerous approaches and concepts for planning a factory have been developed and discussed in the literature. The approaches differ mainly in the division and naming of individual phases and in the scope of the factory life cycle considered. While the VDI definition only covers the range from the specification of targets to the ramp-up of production, Dombrowski et al. extend this area to the entire factory life cycle, also including operation and shutdown with the sub-categories industrial wasteland, re-utilization and revitalization [10]. While the basic procedure for planning a factory has remained almost unchanged in recent years, numerous changes have been made to the methods and tools available for planning. Especially in the field of digital tools, scores of technological innovations offer additional potential throughout the planning process and thus also for the ramp-up phase. The set of different digital models, methods and tools that are integrated by a continuous data management can be summarized under the term Digital Factory; they share the common aim of “holistic planning, evaluation and ongoing improvement of all main structures, processes and resources of the real factory in conjunction with the product” [2]. With the help of these planning assistance systems, factory and production structures can be planned in a targeted and standardized manner. Planning errors, for example through incorrect assumptions or the violation of legal restrictions, can be identified in a very early planning phase and corrected before implementation. This results in a significant shortening of the necessary planning time and an increase in planning quality, in which the consequences of decisions on the entire lifecycle of a factory become clear. The results of a German study conducted in 2009 show that planning errors can be reduced by up to 70% by using the classic methods and tools of the Digital Factory. In addition, these results show that planning time could be reduced by up to 30%, change costs by 15%, investment costs by 10%, and manufacturing costs by 3% to 5%. It was also possible to increase product and machine maturity by 12% [11]. Building on these findings, a qualitative online survey of 97 German companies was conducted at the Institute for Advanced Industrial Management in 2018 to investigate the effects of the fourth industrial revolution (Industrie 4.0) on the digital factory. The results show that the relevance of the Digital Factory is increasing significantly: 90% of the companies surveyed agree with the statement that Industrie 4.0 will have a major impact on the Digital Factory. In addition, the survey results show that large companies tend to rate its relevance higher than small and medium-sized enterprises. This confirms the need to adapt the classic concept of the Digital Factory to ensure its future efficient and profitable use. Based on a literature review, different barriers to the Digital Factory were identified and evaluated in the study. The results of the evaluation are shown in Fig. 2 [12].

[Figure 2 shows, for n = 72 respondents, the rated severity of eight barriers on a scale from “very large barriers” to “no barriers” (plus “no allegation”), with mean scores ranging from about 2.1 to 3.2: tools not adjustable, connectivity of tools, investment costs, connectivity of the production, tools not available, knowledge of operation, running costs, and knowledge of implementation.]

Fig. 2. Evaluation of various barriers to the further development of the Digital Factory (based on [12])

The biggest barriers therefore lie in the adaptability and networking of individual tools: more than 80% of the participants rated these two barriers as at least medium. The same applies to the general networking of production, which is considered very costly and time-consuming. A lack of knowledge in operating and introducing the Digital Factory is considered a comparatively smaller barrier, and running costs are likewise not considered critical to its introduction or development [12]. To overcome these barriers, individual approaches already exist, for example concerning the reengineering of classic factory planning processes towards the new requirements [10]. It has also been shown, in isolated cases, how factory planning tools have to be designed in terms of content and user-specific needs for these future requirements [10, 13]. For the establishment of bidirectional data management in particular, the continuous networking of tools represents a highly relevant barrier for which no concrete results and design guidelines exist yet. Removing the barrier of data-to-data networking between factory planning and operations requires a more detailed view of each digital tool and how it interacts. In order to assess the current state of implementation of the Digital Factory, a study was conducted in the form of expert interviews with a German automobile manufacturer. In the course of the interviews, existing processes were identified through a systematic survey of the experts, and the methods and tools contained therein were classified. The processes were then arranged along the entire product development process. Both the industry and the company were deliberately selected due to the high degree of maturity of their existing digital tools, providing a very good basis for relevant insights. A summary of these results is shown in Fig. 3.

[Figure 3 arranges the OEM’s digital planning processes along the phases product development, production preparation, production and product ramp-up. It covers, among others, layout and factory structure planning, logistics planning, system and robot planning with offline programming and simulation, bodywork construction planning, capacity and assignment planning, ramp-up scenarios and simulation for ramp-up and serial support, as well as planning and virtual coverage of material adjustment, working funds, buildability and layout, including VR processes and processes in the pilot phase.]

Fig. 3. Current implementation status of a German OEM


It is noticeable that the analyzed systems are mainly used in the development phase and the ramp-up phase. From the transition of the ramp-up to the production phase (SOP) onwards, fewer processes are supported digitally by the named systems. According to the expert interviews, this is due to the lack of up-to-date and permanent data from real production; without these data, Digital Factory systems cannot be enriched. Again, the relevance of bidirectional data management becomes clear. In addition, it becomes clear that the first lever cannot be applied without a specific analysis of the real data: the two levers depend directly on each other.

2.3 Lever 2: Enrichment of Planning Data

The second lever that can help to close the ramp-up data gap is the use of real data generated in the operation phase. It is therefore necessary to understand and define what kind of data is available in production and can be useful in the early ramp-up phase or in the digital factory. Once a stable series production level is reached, more and more data is tracked, documented and used for the continuous improvement of the value-added processes. To close the ramp-up data gap, the permanent real-time availability of data would be ideal [4]. In practice, however, it is not economical in terms of cost-benefit ratio to track every data point in real time from the beginning, because the complexity of ramping up the necessary cyber-physical production system would increase tremendously, which in turn would lead to even higher instability and coordination expenses. This dilemma can be described as the “real-time ramp-up data dilemma” [5]. To resolve this conflict, it is necessary to analyze which data would be especially helpful in real time during the production ramp-up phase, in order to focus on generating this data when setting up future ramp-up scenarios. To evaluate this research question, a qualitative data collection approach was conducted in the form of workshop discussions and interviews with production experts from a German commercial vehicle manufacturer. After introducing the research topic, the general categories of a cause-and-effect diagram were presented to the experts (compare Fig. 4). This classification is helpful since those areas are of special relevance for high-quality processes. In a brainwriting phase, the participants (operational leaders, ramp-up managers and process experts) were asked to identify the four most relevant data types. After discussing the individual ideas, the answers were categorized, prioritized and summarized as shown in Fig. 4.

It can be derived that real-time data with a direct impact on the overall quality of the value-added processes, such as the efficiency of personnel, the quality of raw and finished parts, or problem-solving issues, is particularly helpful in the production ramp-up. Data that helps to monitor process deviations, such as OEE, tracking and tracing of machine and material parameters, or statistical process control, was also identified by the experts as very beneficial. Other data, such as 5S activities or accident rates, is considered less important in real time. Since the results of the different research studies show clear evidence that both levers can help to minimize or close the ramp-up data gap, the next step is to analyze practically how both approaches can be integrated in order to jointly optimize the ramp-up phase.
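The prioritization described above can be turned into a simple selection rule: only data channels rated important by the experts are instrumented in real time first, which mitigates the real-time ramp-up data dilemma. The channel names and ratings below are illustrative placeholders, not the study’s raw results.

```python
# Hypothetical expert ratings of real-time data channels during ramp-up
# (cf. the cause-and-effect categories in Fig. 4). Names are examples only.
EXPERT_RATING = {
    "quality_finished_parts": "important",
    "oee": "important",
    "statistical_process_control": "important",
    "accident_rate": "less important",
    "5s_activities": "less important",
}

def realtime_channels(ratings: dict) -> list:
    """Select only the channels worth instrumenting in real time during ramp-up."""
    return sorted(channel for channel, rating in ratings.items()
                  if rating == "important")

print(realtime_channels(EXPERT_RATING))
```

Channels rated “less important” would still be documented, just not connected in real time from day one, keeping the cyber-physical system’s ramp-up complexity manageable.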

[Figure 4 is a cause-and-effect diagram rating real data in the production ramp-up as “important” or “less important” across six categories: Human (productivity of personnel, accident rate, efficiency, qualification level); Machine (performance/usability level, OEE, scrap rate, tracking and tracing); Material (delivery locations, inventory/work in progress, quality of raw/finished parts, tracking and tracing); Method/Management (5S, CIP activities, problem solving, shopfloor management); Environment (temperature, humidity, IT infrastructure, supplier network); Process (lead/cycle time, process efficiency, degree of value added, statistical process control).]

Fig. 4. Importance of real data in production ramp-up

2.4 Practical Validation and Results

At the Center of Excellence for Lean Enterprise 4.0 (CoE) at the Institute for Advanced Industrial Management (Technische Universität Braunschweig), a restriction-free research environment was set up in which all components of a factory are interconnected and their communication with each other can be experienced in real time [14]. In order to analyze how the bidirectional data management approach can help to close the ramp-up data gap, the following use case study was carried out.

Research Design: In the CoE, participants can experience the production of a simple product. Before starting the series production phase, the participants are asked to plan the production system virtually using digital factory tools. In the second phase, the participants are asked to launch production and try to fulfill customer demand in the shortest time possible; the time-to-market is measured by a timekeeper. In the third phase, the participants are asked to launch production again and to constantly improve the ramp-up phase by making use of the bidirectional data management approach; again, the time-to-market is measured. After completing all rounds, the participants share their experiences of rounds two and three in a guided discussion and interview session.

Research Results: In this specific use case, the focus was on using real data about material and machines (compare Fig. 4). RTLS sensors were placed on every machine, workstation and container, so that these could be tracked and traced in real time. Additionally, RFID sensors were attached to every part and material so that material flows could also be monitored permanently. The participants started to rearrange the production layout in order to improve material flows.
Since the cyber-physical production system in the CoE is interconnected with the digital factory tools, real data from the RTLS and RFID sensors could be integrated into the virtual planning model that the participants had created on the planning table beforehand. The real-time data from the changed production system layout could be used to perform a new material flow calculation as well as a walking-route optimization on the planning table. As a result, a U-shaped machine layout with a changed workplace layout was suggested as the best alternative. Since the possible improvement was presented to the participants via VR glasses and the virtual planning table, it could be transferred to the real production system immediately. As a result, the time-to-volume could be reduced by about 30%.

The research findings from the three areas above have shown that the ramp-up data gap cannot be eliminated by a single lever; rather, the interactions of both levers should be considered. Overall, the following key findings can be summarized:

Human Level: At the human level, the continuous support of the employee with information is necessary. CIM (Computer Integrated Manufacturing) focused on rapid data delivery, which continues to evolve as the Digital Factory develops toward transparent information processing. Increasingly, the focus is not on the pure quantity of information, but on the qualitative preparation of the information in a digital assistant as a meaningful and transparent decision-making basis. To close the ramp-up data gap, humans need to be assisted by real data and/or enriched planning data that is provided in a user-oriented way (e.g. VR/AR, digital planning table).

Organization: On an organizational level, the increasing relevance of the “frontloading” design field, which aims to shift processes to tasks in the early phase of planning, can be identified. This process relocation should identify planning errors early on and keep the associated change costs as low as possible. While the focus was initially on process integration, company boundaries are increasingly being dissolved and operator models are emerging. Regardless of frontloading, process organization is also gaining in importance: only through a consistent process orientation can relevant data be provided efficiently by making use of digital tools in the production ramp-up phase.

Technology: In the CIM approach, the common use of data can primarily be identified as an important task field. Due to the ongoing fourth industrial revolution, larger amounts of real data and increasing interconnection are the core areas of current technical challenges. In the future, the focus is on the enrichment of planning data with real data in order to improve the quality of the digital shadow.
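The material-flow recalculation performed on the planning table in the use case can be approximated by a toy model: station positions (as an RTLS would report them) feed a total transport-distance metric, which a layout change such as the U-shape should reduce. Station names, coordinates and the Euclidean metric are our own simplifying assumptions, not measurements from the CoE.

```python
# Sketch: total material-flow distance along a route, computed from
# (hypothetical) RTLS station positions; a U-shaped layout shortens it.
from math import dist  # Euclidean distance between two points (Python >= 3.8)

def flow_length(positions: dict, route: list) -> float:
    """Total transport/walking distance along the material-flow route."""
    return sum(dist(positions[a], positions[b]) for a, b in zip(route, route[1:]))

route = ["goods_in", "machining", "assembly", "packing"]

# Straight line layout vs. a U-shaped rearrangement (coordinates in meters)
line_layout = {"goods_in": (0, 0), "machining": (10, 0), "assembly": (20, 0), "packing": (30, 0)}
u_layout = {"goods_in": (0, 0), "machining": (10, 0), "assembly": (10, 5), "packing": (0, 5)}

print(flow_length(line_layout, route))  # 30.0
print(flow_length(u_layout, route))     # 25.0 — the U-shape shortens the flow
```

With live RTLS positions, such a metric can be recomputed whenever the layout changes, which is essentially what the planning-table recalculation did for the participants.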

References

1. Lanza, G., Sauer, A.: Simulation of personnel requirements during production ramp-up. Prod. Eng. Res. Devel. 6(4–5), 395–402 (2012). https://doi.org/10.1007/s11740-012-0394-6
2. The Association of German Engineers: VDI 4499, Part 1: Digital Factory: Fundamentals. Beuth (2008)
3. Bauernhansl, T., ten Hompel, M., Vogel-Heuser, B.: Industrie 4.0 in Produktion, Automatisierung und Logistik: Anwendung, Technologien, Migration. Springer, Wiesbaden (2014). https://doi.org/10.1007/978-3-658-04682-8
4. The Association of German Engineers: VDI 4499, Part 2: Digital Factory Operations (2011)


5. Dombrowski, U., Wullbrandt, J., Krenkel, P.: Industrie 4.0 in production ramp-up management. Procedia Manuf. 17, 1015–1022 (2018). https://doi.org/10.1016/j.promfg.2018.10.085
6. Schuh, G., Stölzle, W., Straube, F.: Anlaufmanagement in der Automobilindustrie erfolgreich umsetzen: Ein Leitfaden für die Praxis. VDI-Buch. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78407-4
7. Lanza, G.: Simulationsbasierte Anlaufunterstützung auf Basis der Qualitätsfähigkeiten von Produktionsprozessen (127) (2005)
8. Dombrowski, U., Stefanak, T., Krenkel, P.: Aspekte der Fabrikplanung für die Ausrichtung auf Industrie 4.0. In: Reinhart, G. (ed.) Handbuch Industrie 4.0: Geschäftsmodelle, Prozesse, Technik, pp. 169–190. Hanser, München (2017)
9. The Association of German Engineers: VDI 5200: Factory Planning: Planning Procedures. Beuth, Berlin (2009)
10. Dombrowski, U., Karl, A., Reiswich, A.: Reengineering of Factory Planning Process for the Realization of Digital Factory 4.0 (2018)
11. Bracht, U., Spillner, A.: Die Digitale Fabrik ist Realität: Ergebnisse einer Umfrage zum Umsetzungsstand und zu weiteren Entwicklungen der Digitalen Fabrikplanung bei deutschen OEM. ZWF 104(7–8), 648–653 (2009)
12. Dombrowski, U., Karl, A., Ruping, L.: Herausforderungen der Digitalen Fabrik im Kontext von Industrie 4.0. ZWF 113(12), 845–849 (2018). https://doi.org/10.3139/9783446437029
13. Dombrowski, U., Reiswich, A., Karl, A.: Designing digital tools for factory planning: integrating requirements for usability on a meta-level. In: Proceedings of the 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Torino, Italy, 04–07 September 2018, pp. 99–106. IEEE, Piscataway (2018)
14. Dombrowski, U., Wullbrandt, J., Fochler, S.: Center of Excellence for Lean Enterprise 4.0. Procedia Manuf. 31, 66–71 (2019). https://doi.org/10.1016/j.promfg.2019.03.011

Open Access Digital Tools’ Application Potential in Technological Process Planning: SMMEs Perspective

Roman Wdowik1 and R. M. Chandima Ratnayake2

1 The Faculty of Mechanical Engineering and Aeronautics, Rzeszów University of Technology, 35-959 Rzeszów, Poland
[email protected]
2 Department of Mechanical and Structural Engineering and Materials Science, University of Stavanger, 4036 Stavanger, Norway
[email protected]

Abstract. This concept study focuses on technological process planning (TPP), taking into account the application potential of open access digital tools (OADT) in small- and medium-scale manufacturing enterprises (SMMEs). It presents the authors’ classification of the digital tools (DT) used in SMMEs and the available groups of OADT. It also proposes possible scenarios for future TPP, taking into account developments in artificial intelligence (AI) and immersive technologies, i.e. virtual and augmented reality (VR/AR). It further addresses challenges and procedures regarding the implementation of DT in specific SMME environments, focusing on how open access tools play a crucial role in the first stages of SMME development, as these tools enable the minimization of resource wastage. Although the capabilities of these tools are limited, it is vital to develop implementation strategies within an SMME based on its specific needs.

Keywords: Digital tools · Open access · Small-medium scale manufacturing enterprises · Process planning · Digitalization



1 Introduction

1.1 Motivation

Digital transformation technologies, such as cloud data computing techniques (CDCT), machine learning (ML), virtual reality (VR), cloud-based tools (CBT), augmented reality (AR), artificial intelligence (AI), data-driven production control, etc., offer new opportunities to enhance the operational performance of mechanical and manufacturing engineering [1, 4, 9]. In addition, fast-growing digital transformation leads to a demand for more personalized, connected, smart, and sustainable products and services [2]. Although the availability of information is immense due to digitalization, it is very important to share and use it through the effective use of computer networks and at the lowest possible cost (e.g. by the use of open access tools) [3]. Although the aforementioned is novel, significant challenges exist in small- and medium-scale manufacturing enterprises (SMMEs), especially in implementing the newest, fast-growing digital technology on their shop floors, often with limited funds and limited availability of other infrastructure resources such as human resources, R&D and training centers, etc.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 312–319, 2019. https://doi.org/10.1007/978-3-030-29996-5_36

Technological process planning (TPP) plays a vital role in manufacturing environments, as it can be considered one of the most important stages of production, especially from an overall productivity enhancement perspective [5]. TPP provides a foundation for all manufacturing success, leading to the definition of the order of machining operations and the choice of the necessary fixtures, tools and process parameters for CNC machine tools and technological equipment [6]. Although the efforts regarding the implementation of CAPP tools [7] and CAD/CAM systems [8] are well known, and these digital tools have proved their usability in digital transformation, contemporary advanced digital and smart manufacturing redefines overall success in relation to the optimization of design, production, supply chain and customer service perspectives [7]. In this context, it is vital to investigate how to reinvent TPP to cater for the latest and expected future needs and how to enable the expansion of start-ups and SMMEs. Reinventing TPP to be efficient and effective requires making use of expert knowledge together with artificial intelligence (AI), databases and machine learning techniques, as well as the analysis of market trends and the implementation of various digital tools (DT), such as software and/or devices. In addition, cloud-based techniques could be used [9]. In this context, SMMEs need appropriate strategies and plans in order to accelerate their development and to be able to compete with potential competitors. For instance, there is currently a trend to use digital tools with common software platforms (e.g.
3D systems, Autodesk® BIM 360 Design, etc.) to share data, accelerate TPP, test prototypes (i.e. in a physical or virtual environment) or operate existing hardware (e.g. CNC machine tools). In this context, SMMEs must decide, at the early stage of their operation, whether to use open access (free or low-cost) digital tools or to buy other tools existing in the market. Hence, this manuscript presents findings and suggestions focusing on the use of digital tools in SMMEs, and provides a concise discussion of the existing possibilities for the use of digital tools in SMMEs which operate several CNC machine tools in a manufacturing environment. Furthermore, it takes into account the groups of digital tools, their examples, their limitations and approaches for their use in TPP. In addition, it suggests the potential to increase R&D work focusing on free digital tools.

1.2 Classification of Digital Tools Used in TPP

The digital tools, which are used in TPP, may be classified into groups such as: Software tools (S), e.g. computer applications; Devices (D), e.g. PCs, smartphones, tablets, smartwatches, etc.; machines, machine tools and technological equipment (MTE), e.g. CNC machine tools; technological equipment (e.g. tools, chucks, presetters, etc.); and Internet tools (IT), e.g. webpages, webguides, webcatalogues, internet calculators, internet clouds, internet search engines, etc. The classification proposed by the authors is presented in Fig. 1. It classifies the most important DT on the basis of

314

R. Wdowik and R. M. C. Ratnayake

expert knowledge, experience and discussion with industrial partners. However, the number of DT in SMMEs is limited compared to those in large firms, due to the costs generated by DT.

Fig. 1. Classification of the groups of digital tools used in TPP: software (S): PC software and software for mobile devices such as smartphones, tablets and laptops; Internet tools (IT): internet-based tools such as web pages and electronic catalogues; machines and technological equipment (MTE): CNC machine tools, measuring devices (CMMs, tool presetters), furnaces and assembly stations; devices (D): smartphones, smartwatches, tablets, laptops and virtual reality/augmented reality devices.

1.3 Perspectives of Technological Process Planning in SMMEs

There are several approaches to TPP which can be considered by industrial companies (refer to Fig. 2). The first, standard process planning, mainly uses the expert knowledge of a process planner and is a combination of experience, tests and the use of guidelines. It has been used by process planners for many decades, and SMMEs, which are in general careful when facing new investments, have so far mostly used this approach; in the authors’ opinion, this is mainly due to its availability and low cost. Recent decades have shown that the application of CAD/CAM systems may help to eliminate paper documentation, accelerate and automate the manufacturing of complex parts, and provide the opportunity for safer (digital) simulation of manufacturing operations before they are run on real machines. In parallel with the two abovementioned approaches, it can be beneficial to invest in completely new solutions for TPP based on immersive technology (e.g. VR) and AI. Immersive technology may provide the opportunity to prepare and test technological processes in a virtual environment, while AI may lead to automation; AI can be adapted both to VR/AR and to standard approaches. Figure 2 presents the level of complexity of the aforementioned approaches and shows their approximate implementation timelines and possible future scenarios. For SMMEs, the use of immersive technology may be beneficial because it still allows expert knowledge to be used, with all TPP tasks performed in one virtual environment. The limited resources of SMMEs are also easier to represent in VR than the extensive resources of large enterprises. In addition, for many SMMEs, immersive testing of manufacturing processes seems more important than automation, mainly if open access virtual environments were to be delivered. AI may play a crucial role in less complex manufacturing tasks and

Open Access Digital Tools’ Application Potential

315

Fig. 2. The development of TPP applications and possible future scenario.

infrastructures (e.g. SMMEs’ infrastructures) that do not require advanced expert knowledge or the analysis of many variables and external noise, and in which the implementation of AI algorithms should be cost-effective. The coming years will also be important because of the implementation of faster communication techniques. The current roll-out of 5G mobile networks may be very beneficial for SMMEs, as this new technology may allow a company to be managed more efficiently. In several years or decades, 5G networks could make it easier to start many SMMEs: their infrastructures will be well suited to wireless communication, so resources existing in different places can be connected and used for the development of new SMMEs. This will be very important at the start of an SMME, where connecting advanced manufacturing resources in this way is much faster.

2 Existing Open Access Digital Tools and Their Availability for SMMEs

The main issue regarding open access digital tools (OADT) concerns effective strategies for their use in SMMEs. Table 1 presents the main groups of open access tools for TPP, together with their main functionalities and examples. In the authors’ opinion, OADT are limited to software tools (S) and Internet tools (IT); SMMEs must still invest funds in devices (D) and machine tools/technological equipment (MTE). It can be observed that different CAD tools, CAM tools and digital calculators (i.e. tools enabling the selection and verification of machining parameters) are available in open access mode both as stand-alone licenses on PCs,
smartphones, tablets, and as Internet tools. Usually their functionalities are limited compared to commercial software (e.g. CNC programming is limited to 2 or 2.5 axes, 3D modelling is less effective or impossible, etc.), but they can be considered the first choice if the existing budget is consumed by the purchase of machine tools and technological equipment. In the case of commercial software used by large/developed companies, it is reasonable to use software offered by one producer, which allows fast training of employees; however, the bankruptcy of that producer would cause additional expenses for a large company. If SMMEs want to use open access tools, they must assemble a list of digital tools consisting of products from different companies. Developing proper training procedures will protect an SMME against unwanted expenditure.

Table 1. The main groups and examples of OADT for TPP in SMMEs.

CAD (software tools, S). Main functionalities: 2D drawing preparation, viewing of 2D or 3D models, measurement of dimensions, printing, changing a file format, copying or pasting CAD data, error correction, etc. Examples: for PC: DraftSight, FreeCAD, LibreCAD; for mobile devices: CAD Reader, CAD Assistant.

CAM (software tools, S). Main functionalities: basic 2-, 2.5- or 3-axis machining, simulation of tool paths, basic postprocessing capabilities, etc. Examples: for PC: CamBam (partially free), FreeMILL, G-Simple.

Calculators (software tools, S). Main functionalities: calculation of power requirements in machining, choice of depths of cut, roughness, dressing conditions for grinding, etc.

Internet tools (IT). Main functionalities: choice of tools and machining parameters, data sharing over a network, data searching, etc. Examples (calculators and Internet tools): web/mobile apps: Walter feeds and speeds, Dr Kaiser App Tool Guide, Iscar Tool Advisor.

To conclude this section, it should be stated that OADT currently find wider application in SMMEs than in large enterprises, but this could change with a several-fold increase in R&D on, and dissemination of, digital techniques.
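The “list of digital tools” that an SMME assembles from the products of different producers can be sketched as a simple catalogue keyed by the groups of Table 1. The schema and field names below are our illustrative assumptions; the tool names are the open access examples listed in Table 1:

```python
# Minimal sketch of an OADT catalogue for an SMME, keyed by the tool groups
# of Table 1. The schema is an illustrative assumption; the tool names are
# the open access examples from Table 1.
OADT_CATALOGUE = {
    "CAD": {
        "functionalities": ["2D drawing preparation", "viewing 2D/3D models",
                            "dimension measurement", "file format conversion"],
        "examples": {"PC": ["DraftSight", "FreeCAD", "LibreCAD"],
                     "mobile": ["CAD Reader", "CAD Assistant"]},
    },
    "CAM": {
        "functionalities": ["basic 2-, 2.5- or 3-axis machining",
                            "tool path simulation", "basic postprocessing"],
        "examples": {"PC": ["CamBam (partially free)", "FreeMILL", "G-Simple"]},
    },
    "Calculators/IT": {
        "functionalities": ["machining power calculation", "depth-of-cut choice",
                            "tool and parameter selection", "data sharing"],
        "examples": {"web/mobile": ["Walter feeds and speeds",
                                    "Dr Kaiser App Tool Guide",
                                    "Iscar Tool Advisor"]},
    },
}


def tools_for(group):
    """Return a flat list of the open access tools registered for a group."""
    entry = OADT_CATALOGUE[group]
    return [tool for platform in entry["examples"].values() for tool in platform]
```

A process planner could then query, for example, `tools_for("CAM")` when deciding which free CAM package to trial first; proper training procedures would be attached per entry.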

3 The Proposed OADT Implementation Procedures

Implementation of OADT requires proper procedures (see Fig. 3). Approach I is the standard procedure, proposed at an early stage of an SMME’s existence. If OADT cannot be implemented in the specific manufacturing environment of the SMME, traditional TPP (without digital solutions) may be applied instead. As the SMME grows and savings accumulate (e.g. funds that can be used for new investments), additional possibilities should be considered (Approach II). These possibilities, however, require detailed risk calculations and cost-effective investment in short-term software rental, license purchase or subcontracting of the SMME’s contracts to other companies.


The further development of the SMME may lead to the eventual purchase of commercial DT. However, further development of the OADT adopted at an early stage may also be sufficient to meet all the main requirements of the SMME, without investing in commercial software in the future. This concept generates added value by promoting a concentration on innovation in the area of OADT development in the SMME (e.g. investments in new functions of existing OADT or in new open access tools).

Fig. 3. The approaches of OADT application in TPP of SMMEs.
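The staged logic of Approaches I and II can be sketched as a small decision function. The stage names, the savings threshold and the returned option labels are hypothetical illustrations of the procedure described above, not part of the original figure:

```python
# Illustrative sketch of the staged OADT implementation procedure.
# The savings threshold and option labels are hypothetical assumptions.
def select_tpp_strategy(oadt_feasible, savings, savings_threshold=0.0):
    """Choose TPP tooling options for an SMME at a given stage.

    Approach I  (early stage): use OADT if feasible, else traditional TPP.
    Approach II (growth stage, savings available): weigh short-term software
    rental, license purchase or subcontracting against risk and cost
    (the risk calculation itself is not modelled here).
    """
    if savings > savings_threshold:
        # Approach II: candidate options requiring detailed risk calculation.
        return ["short-term software rental", "license purchase",
                "subcontracting", "further OADT development"]
    # Approach I: early-stage defaults.
    if oadt_feasible:
        return ["open access digital tools (OADT)"]
    return ["traditional TPP (no digital tools)"]
```

Calling `select_tpp_strategy(True, 0.0)` models the early-stage default of relying on OADT, while a positive savings figure switches the SMME to the Approach II option set.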

4 Cost-Related Implementation of the Digital Tools in SMMEs

The cost of digital tools (CDT) is the sum of machine and technological equipment costs (CMTE), software costs (CS), Internet tools’ costs (CIT) and devices’ costs (CD):

CDT = CMTE + CS + CIT + CD (1)
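Equation (1) is a straightforward sum, as the short sketch below illustrates; the cost figures in the example are invented for illustration only:

```python
# Total cost of digital tools, Eq. (1): CDT = CMTE + CS + CIT + CD.
def digital_tools_cost(c_mte, c_s, c_it, c_d):
    """Sum machine/equipment, software, Internet-tool and device costs."""
    return c_mte + c_s + c_it + c_d


# Hypothetical example: OADT keep software and Internet-tool costs at zero,
# so CDT reduces to the hardware-related expenses.
cdt = digital_tools_cost(c_mte=150_000, c_s=0, c_it=0, c_d=5_000)  # 155000
```

The example mirrors the argument of this section: for an SMME using OADT, the CS and CIT terms can approach zero, leaving the unavoidable CMTE and CD expenses.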

Usually, in the case of manufacturing firms, the number of DT increases as the firm grows. Figure 4 presents this for a small enterprise (SE), a medium enterprise (ME) and a large enterprise (LE); the increase is caused by growing customer expectations as well as internal developments. The increase in the number of DT may also increase the use of OADT. Which tools employees should use is usually a management decision, but nowadays employees may often decide themselves how to enhance their work by using new open access applications. In SMMEs that do not define rigid procedures, this is a win-win situation for both employees and employers: the employer does not need to invest additional funds, and
the employee can contribute to the firm’s growth by using OADT. With the larger number of DT in a larger company, DT-related expenses may also increase; these expenses should be balanced by adequate income in order to obtain an adequate profit. For SMMEs, the cost of preparing a technological process is high; in this context, the technological process can be understood as a product which costs less per unit if “mass-produced”.

Fig. 4. Comparison of selected and expected DT-related indicators in small (SE), medium (ME) and large (LE) enterprises.

On the basis of the abovementioned discussion, it can be stated that, in the case of SMMEs, a reduction in technological process planning expenses may be achieved by using open access digital tools, even if the total number of available digital tools is limited. Moreover, added value may be generated if the development process also focuses on OADT capabilities. Figure 5 presents the challenges (aims) of SMMEs regarding the implementation of digital tools: the main challenges concern reducing expenses, automation, data sharing, training designed for employees, distance work, new investments and investors, co-funding capabilities and promotion. Next to these aims, Fig. 5 presents the groups of digital tools which can be useful in achieving them, together with the groups of products which can be offered by SMMEs; in this context, manufacturing firms can offer digital data, subcontracting, services and other products.

Fig. 5. Challenges of CNC machine tools-based SMMEs and products offered, with groups of digital tools used for specific aims (refer to Fig. 1).

5 Conclusion

SMMEs usually need additional resources for investments. The use of DT available as commercial licenses becomes possible as SMMEs develop and gather funds; at the early stage of their existence, they need the support of open access digital tools (OADT). OADT have functional limitations, but they are offered for free or at a relatively low price and can be further developed for the specific needs of an SMME. This study provides a concise analysis of OADT. The OADT could be applied by SMMEs using the proposed implementation procedures, also taking into account the presented capabilities.

Acknowledgement. This study was developed within the project “Science internship for the investigations of digitalization in manufacturing”, financed by the Polish National Agency for Academic Exchange (www.nawa.gov.pl) in the Bekker Programme.

References

1. Chryssolouris, G., Mavrikios, D., Papakostas, N., Mourtzis, D., Michalos, G., Georgoulias, K.: Digital manufacturing: history, perspectives, and outlook. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. (2009). https://doi.org/10.1243/09544054JEM1241
2. Moghaddam, M., Cadavid, M.N., Kenley, C.R., Deshmukh, A.V.: Reference architectures for smart manufacturing: a critical review. J. Manuf. Syst. 49, 215–225 (2018). https://doi.org/10.1016/j.jmsy.2018.10.006
3. Calvo, I., López, F., Zulueta, E., González-Nalda, P.: Towards a methodology to build virtual reality manufacturing systems based on free open software technologies. Int. J. Interact. Des. Manuf. 11, 569–580 (2018). https://doi.org/10.1007/s12008-016-0311-x
4. Frazzon, E.M., Kück, M., Freitag, M.: Data-driven production control for complex and dynamic manufacturing systems. CIRP Ann. 67(1), 515–518 (2018). https://doi.org/10.1016/j.cirp.2018.04.033
5. Wdowik, R., Magdziak, M., Ratnayake, R.M.C., Borsellino, C.: Application of process parameters in planning and technological documentation: CNC machine tools and CMMs programming perspective. Procedia CIRP 78, 43–48 (2018). https://doi.org/10.1016/j.procir.2018.09.054
6. Feld, M.: Basics of Technological Process Planning of Machines’ Parts (in Polish). Wydawnictwa Naukowo-Techniczne, Warsaw (2003)
7. Zhang, H.-C.: Computer aided process planning: the state-of-the-art survey. Int. J. Prod. Res. 27(4), 553–585 (1989). https://doi.org/10.1080/00207548908942569
8. Kutin, A., Dolgov, V., Sedykh, M., Ivashin, S.: Integration of different computer-aided systems in product designing and process planning. Procedia CIRP 67, 476–481 (2018). https://doi.org/10.1016/j.procir.2017.12.247
9. Tarchinskaya, E., Taratoukhine, V., Matzner, M.: Cloud-based engineering design and manufacturing: state-of-the-art. In: 7th IFAC Conference on Manufacturing Modelling, Management, and Control, International Federation of Automatic Control, Saint Petersburg, Russia, 19–21 June (2013)

Industry 4.0 Implementations

Implementation of Industry 4.0 in Germany, Brazil and Portugal: Barriers and Benefits

Walter C. Satyro1, Mauro de Mesquita Spinola1, Jose B. Sacomano2, Márcia Terra da Silva2, Rodrigo Franco Gonçalves1,2, Marcelo Schneck de Paula Pessoa1, Jose Celso Contador3, Jose Luiz Contador4, and Luciano Schiavo1

1 Production Engineering Research, Polytechnic School of USP – Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, 1380, Butanta, São Paulo, SP 05508-010, Brazil
[email protected]
2 Postgraduate Program in Production Engineering, UNIP – Universidade Paulista, Rua Dr. Bacelar, 1212, Sao Paulo, SP 04026-000, Brazil
[email protected]
3 Postgraduate Program in Administration, UNIP – Universidade Paulista, São Paulo, SP, Brazil
[email protected]
4 Postgraduate Program in Administration, FACCAMP - Faculdade Campo Limpo Paulista, Campo Limpo Paulista, SP, Brazil
[email protected]

Abstract. Industry 4.0 is a subject that has attracted the interest of researchers worldwide for its ability to achieve productivity gains and to provide competitiveness to companies. Although much research has been done in technical studies, little attention has been paid to the challenges that decision-makers, executives and managers face in implementing the concepts of Industry 4.0 in their companies. This research was based on secondary data from surveys of 246 companies in Brazil, 287 in Germany and 72 in Portugal, which studied the internal and external obstacles and expectations of these 605 companies. The originality and practical implication of this research lie in comparing these three countries, studying common and different points in implementing the concepts of Industry 4.0, so that researchers can conduct their studies to provide answers to practical expectations, linking research to practice.

Keywords: Barriers · Industry 4.0 · Strategy · Implantation

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 323–330, 2019. https://doi.org/10.1007/978-3-030-29996-5_37

324

W. C. Satyro et al.

1 Introduction

There has been a substantial increase in scientific publication on Industry 4.0 [1], which has attracted the interest of researchers from all over the world [2] through its capacity to provide flexible and practically reconfigurable manufacturing systems, making mass customization processes possible [3, 4]. Industry 4.0, also referred to as the digital transformation of the company or digital manufacture, is the new production paradigm [5, 6]; it is based on interconnectivity, in which what matters is the Internet, not only the computer [7, 8], so the production line can exchange information and data online with the supply chain, customers and other important stakeholders [9, 10]. Industry 4.0 was made possible by the integration of Information Technology (IT) and Automation Technology (AT) with production supported by high technology, so that humans and machines can interact with each other, bringing companies new possibilities to improve productivity and competitiveness. The aim of Industry 4.0 is to generate value by establishing new business models, services and products, solving problems and increasing competitiveness through the interconnection of the internal and external environments of companies [11–13]. The objective of this study is to analyze the barriers and benefits that decision-makers, executives and managers from Germany, Brazil and Portugal face in implementing the concepts of Industry 4.0 in their companies.

2 Literature Review

2.1 Barriers

A barrier can be defined as any system, operational organization or technical solution that minimizes the probability of events occurring and thus limits the consequences of such events [14]. A barrier can also be understood as regulatory activities put in place to avoid loss of technical integrity or to reduce possible consequences [15].

2.2 Benefits

Benefit has different definitions, which vary according to the perspective of analysis. A benefit can be understood as something that provides gains or advantages, is helpful or convenient, or brings a good effect or something of value [16, 17], i.e. a positive result obtained by an action [18, 19].

2.3 Digitization and Industry 4.0

Digitalization is the introduction of Internet-connected digital applications and technologies by companies, impacting relationships in a business network and the way value is created [20]; it is the adoption of IT-based solutions using predominantly the Internet [20]. Industry 4.0 and industrial digitalization are considered synonyms, with digitalization defined as the actions necessary to implement the concepts of Industry 4.0 [21]. The technologies that support the concepts brought by Industry 4.0 can create intelligent systems that reduce risks, lead times and costs, but the barriers to implementing them can be enormous [22], and many companies are struggling to see the challenges and opportunities of digital transformation [23].

3 Method

We used secondary data provided by a series of surveys conducted by Siemens AG [24–26] in Germany, Brazil and Portugal about digitalization. The focus was to identify the problems and expectations that decision-makers, executives and managers faced in implementing the concepts of Industry 4.0 (the digital transformation of the companies, or digital manufacture). The surveys were conducted between 2014 and 2015 in companies of all sizes and from all types of industries, and no more recent surveys for these countries have been published by Siemens AG so far. The surveys, carried out among Siemens customers, were also intended to understand the complexity that digital transformation represents in the daily life of companies. Although the questionnaires were intended to be standardized, there were some differences among them, which made it impossible to take full advantage of them; some parts had to be excluded from the comparison and some parts had to be adapted.

4 Results and Discussion

Table 1 presents the size of the companies per country. It shows that, except in Portugal, the majority of the 605 companies that collaborated with the surveys were large; large companies were considered those with over 500 employees.

Table 1. Size of the companies involved in the survey.

Country    Large companies   Small and medium-sized   Total of respondents
Brazil     177               69                       246
Germany    151               136                      287
Portugal   11                61                       72

Table 2 shows the position of the respondents. In Germany the majority of the respondents were from middle management, while in Brazil and Portugal the numbers of top and middle managers were almost balanced; Portugal had by far the largest share of respondents in C-level positions.

Table 2. Position of the respondents.

Country    Top management   Middle management   C-level position
Brazil     40%              44%                 16%
Germany    29%              58%                 13%
Portugal   30%              29%                 41%

In Brazil, industries from 21 sectors of the economy participated in the survey: 16% Automotive, 13% Power utilities, 11% Power transmission, 8% Minerals & Mining, and others. In Germany the survey involved 30 sectors of the economy, but the sectors and percentages were not reported; for Portugal the number of sectors was not presented. Participants were asked whether they had already developed a structured digital strategy; the results are displayed in Fig. 1. Curiously, the highest share of affirmative answers came from companies in Brazil (43%), followed by Portugal (35%) and Germany (19%). The largest share of German companies (43%) reported that they did not have a formulated digital strategy, followed by Portuguese (33%) and Brazilian (29%) companies.

Fig. 1. Developed a structured digital strategy.

The majority of the companies from these three countries reported that responsibility for the implementation of the digital strategy lay with IT, or with IT together with other departments, as illustrated in Fig. 2. It is worth mentioning that implementing the concepts of Industry 4.0 in a company is a complex task that must be managed by the top executives, like the ISO 9000 standards and other important projects. Delegating a task of this magnitude to IT represents a serious risk to the whole company, and shows that the top managers interviewed at the time did not understand the changes in structure and strategy necessary to implement the concepts of Industry 4.0. The respondents were asked about the challenges faced in implementing the concepts of Industry 4.0, presented in the survey reports as internal and external barriers. The internal barriers are presented in Table 3; since companies could report more than one option, the sums can exceed 100%.

Fig. 2. Central responsibility for the implementation of Industry 4.0.

Table 3. Internal challenges to implement the concepts of Industry 4.0.

Internal challenges                                           Brazil   Germany   Portugal
Company structure/culture                                     57%      31%       46%
High operating costs (licenses and software updates)          53%      36%       64%
Unclear benefits (lack of economic feasibility study, etc.)   52%      41%       46%
Financing of technologies/software                            42%      32%       64%
Costs for further education/training                          48%      (*)       57%
(*) not reported

Brazilian companies reported that company structure and culture were the biggest internal barrier, followed by high operating costs (licenses, software and their updates). German companies reported that unclear benefits (lack of economic feasibility studies, etc.) represented the biggest internal barrier, followed by high operating costs and the need to finance technologies and software. Portuguese companies named high operating costs and the need to finance technologies and software as the biggest internal barriers, followed by the costs of further education/training. The challenges represented by external barriers to implementing the concepts of Industry 4.0 are shown in Table 4; once again the sums can exceed 100%, as multiple options were possible. Brazilian companies reported data security as the main external barrier, followed by the absence of a tax advantage for the investments. German companies revealed that the lack of technical standardization was the predominant concern, followed by data security and the lack of demand for Industry 4.0 from customers or suppliers. Portuguese companies informed that the principal

Table 4. External challenges to implement the concepts of Industry 4.0.

External challenges                       Brazil   Germany   Portugal
Discussions related to data security      55%      39%       38%
No tax advantage for the investments      52%      (*)       57%
Lack of technical standardization         45%      46%       50%
No demand from customers or suppliers     33%      35%       38%
(*) not reported
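One way to read the “accumulated” cross-country figures discussed in Sect. 4.1 is as a respondent-weighted average of the country-level percentages. The weighting scheme is our assumption (the reports do not state how the aggregate was formed); the respondent counts come from Table 1 and the example percentages from Table 4:

```python
# Respondent-weighted average of a reported percentage across countries.
# Weighting by sample size is an assumption; counts are taken from Table 1.
RESPONDENTS = {"Brazil": 246, "Germany": 287, "Portugal": 72}


def weighted_share(percent_by_country):
    """Aggregate country-level percentages, weighted by respondent counts."""
    total = sum(RESPONDENTS[c] for c in percent_by_country)
    return sum(RESPONDENTS[c] * p for c, p in percent_by_country.items()) / total


# Table 4, "Discussions related to data security":
data_security = weighted_share({"Brazil": 55, "Germany": 39, "Portugal": 38})
# ≈ 45.4% across the 605 respondents
```

Weighting by sample size keeps the small Portuguese sample (72 respondents) from dominating the aggregate; an unweighted mean of the three countries would give a different picture.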

external barrier was the absence of a tax advantage for the investments, followed by the lack of technical standardization and data security.

4.1 Comparison of Results

The majority of the respondents in Brazil and Portugal reported that they already had a structured strategy to implement the concepts of Industry 4.0, in contrast to Germany, where the majority reported that such a strategy was lacking; aggregated across the three countries, the respondents reported not having a solid digital strategy. The three countries had in common that they delegated the central responsibility for implementing Industry 4.0 to IT, without assessing the consequences of this decision. Overall, the respondents in Brazil, Germany and Portugal saw as the main internal challenges the expensive investments and operating costs needed to implement the concepts of Industry 4.0, followed by unclear benefits, or the lack of economic feasibility studies to guide the investments, and the need to finance the required technologies and software. Only in Brazil was the absence of support from top management mentioned as an internal challenge. The lack of technical standardization was reported as the main external challenge in all three countries, followed by the need to improve data security and the absence of a tax advantage for the investments. The respondents in Brazil and Portugal stated that they feared competitors from other industries could be faster in the implementation process, and that in these two countries they had not found the right partner to help them in this process. The respondents hoped that Industry 4.0 could increase resource and energy efficiency, improve service processes, enhance decision-making, provide greater transparency in business processes, strengthen synergies and/or collaboration, and be guided by a stronger orientation towards the customer.

5 Conclusion

This study involved 605 companies from three countries, Germany, Brazil and Portugal, which participated in surveys conducted by Siemens AG to evaluate the expectations and difficulties these companies faced in implementing the concepts of Industry 4.0, or industrial digitalization.


Although the data may appear old (Germany 2014; Brazil and Portugal 2015), which is a limitation of this study, they are useful for comparing three different countries that pursued the common goal of implementing the concepts of Industry 4.0 at close points in time, and they show that some challenges still persist. As internal challenges, the respondents of the three countries were afraid of having to invest in new operating lines, technology financing and culture/structure change, among others, without seeing clear economic benefits. Among the external challenges or barriers mentioned were the lack of technical standardization and data security; despite progress in these areas, these problems persist to this day. The concepts of Industry 4.0 can help companies to increase international competitiveness, in order to be more active and relevant in global markets and to achieve superior levels of competitiveness; however, in doing so, decision-makers, executives and managers will face obstacles, risks and barriers, but also opportunities and benefits. It is not an easy task, since it may be necessary to change structure or strategy, or both, involving high-magnitude restructurings in the middle of an environment full of uncertainties. We hope the main internal and external challenges identified in this study can serve as a basis for future research or special congress sections, so that academic studies can provide answers to relevant practical expectations, linking research to practice, where both sides win.

References

1. Klötzer, C., Weißenborn, J., Pflaum, A.: The evolution of cyber-physical systems as a driving force behind digital transformation. In: Proceedings - 2017 IEEE 19th Conference on Business Informatics, CBI 2017, pp. 5–14 (2017)
2. Schneider, P.: Managerial challenges of Industry 4.0: an empirically backed research agenda for a nascent field. Rev. Manag. Sci. 12(3), 803–848 (2018)
3. Pasetti Monizza, G., Rojas, R.A., Rauch, E., Garcia, M.A.R., Matt, D.T.: A case study in learning factories for real-time reconfiguration of assembly systems through computational design and cyber-physical systems. In: Chiabert, P., Bouras, A., Noël, F., Ríos, J. (eds.) Product Lifecycle Management to Support Industry 4.0. IFIP Advances in Information and Communication Technology, vol. 540, pp. 227–237. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-66926-7
4. Simon, J., Trojanova, M., Zbihlej, J., Sarosi, J.: Mass customization model in food industry using industry 4.0 standard with fuzzy-based multi-criteria decision making methodology. Adv. Mech. Eng. 10(3) (2018)
5. Galvão, J., Sousa, J., Machado, J., Mendonça, J., Machado, T., Silva, P.V.: Mechanical design in Industry 4.0: development of a handling system using a modular approach. In: Machado, J., Soares, F., Veiga, G. (eds.) HELIX 2018. LNEE, vol. 505, pp. 508–514. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-91334-6_69
6. Modrak, V., Soltysova, Z., Poklemba, R.: Mapping requirements and roadmap definition for introducing I 4.0 in SME environment. In: Hloch, S., Klichová, D., Krolczyk, G., Chattopadhyaya, S., Ruppenthalová, L. (eds.) Advances in Manufacturing Engineering and Materials. LNME, pp. 183–194. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-99353-9_20
7. Plattform Industrie 4.0 (2016). https://www.plattform-i40.de/I40/Navigation/EN/Home/home.html. Accessed 10 Jan 2019


8. Lopez, H.A.G., Cisneros, M.A.P.: Industry 4.0 & Internet of things in supply chain. In: CLIHC ’17 Proceedings of the 8th Latin American Conference on Human-Computer Interaction, Article No. 23 (2017)
9. Choi, S., Kang, G., Jung, K., Kulvatunyou, B., Morris, K.C.: Applications of the factory design and improvement reference activity model. In: Nääs, I., et al. (eds.) APMS 2016. IAICT, vol. 488, pp. 697–704. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-51133-7_82
10. Wiesner, S., Hauge, J.B., Thoben, K.-D.: Challenges for requirements engineering of cyber-physical systems in distributed environments. In: Umeda, S., Nakano, M., Mizuyama, H., Hibino, H., Kiritsis, D., von Cieminski, G. (eds.) APMS 2015. IAICT, vol. 460, pp. 49–58. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22759-7_6
11. Kang, H.S., et al.: Smart manufacturing: past research, present findings, and future directions. Int. J. Precis. Eng. Manuf. Green Technol. 3(1), 111–128 (2016)
12. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios: a literature review. Working Paper No. 01/2015, Technische Universität Dortmund, Fakultät Maschinenbau and Audi Stiftungslehrstuhl - Supply Net, Order Management, pp. 1–15 (2015)
13. Sanders, A., Elangeswaran, C., Wulfsberg, J.: Industry 4.0 implies lean manufacturing: research activities in industry 4.0 function as enablers for lean manufacturing. J. Ind. Eng. Manag. 9, 811–833 (2016)
14. Bento, J.-P.: Review from an MTO-Perspective of Five Investigation Reports from BP (Draft). Stavanger, Norway (2003)
15. Purton, L., Clothier, R., Kourousis, K., Massey, K.: The PBP Bow-Tie framework for the systematic representation and comparison of military aviation regulatory frameworks. Aeronaut. J. 118(1210), 1433–1452 (2014)
16. Kim, M.J., Bonn, M., Lee, C.-K.: Seniors’ dual route of persuasive communications in mobile social media and the moderating role of discretionary time. Asia Pac. J. Tour. Res. 22(8), 799–818 (2017)
17. Coombs, C.R.: When planned IS/IT project benefits are not realized: a study of inhibitors and facilitators to benefits realization. Int. J. Project Manage. 33(2), 363–379 (2015)
18. Yoshikawa, M., Shimizu, T.: A new ranking scheme and result representation for XML information retrieval based on benefit and reading effort. In: Proceedings - International Conference on Informatics Education and Research for Knowledge-Circulating Society, ICKS 2008, 4460473, pp. 87–92 (2008)
19. Adams, R.M.: Issues in assessing the economic benefits of ambient ozone control: some examples from agriculture. Environ. Int. 9(6), 539–548 (1983)
20. Pagani, M., Pardo, C.: The impact of digital technology on relationships in a business network. Ind. Mark. Manage. 67, 185–192 (2017)
21. Schlüter, F., Hetterscheid, E.: A simulation based evaluation approach for supply chain risk management digitalization scenarios. In: 2017 International Conference on Industrial Engineering, Management Science and Application, ICIMSA 2017, 7985579 (2017)
22. Lowenstein, D., Slater, C.: Management of test utilization, optimization, and health through real-time data. In: AUTOTESTCON (Proceedings), 8532554, September 2018
23. Bienhaus, F., Haddud, A.: Procurement 4.0: factors influencing the digitisation of procurement and supply chains. Bus. Process Manage. J. 24(4), 965–984 (2018)
24. Siemens AG: Germany 2014 Digitalization - Siemens Customer Survey | Result Report, pp. 1–18 (2015)
25. Siemens AG: Digitalization - Trends and Solutions for a More Competitive Brazil. 2015 Siemens Customer Survey | Result Report, pp. 1–24 (2015)
26. Siemens AG: Digitalization - Trends and Solutions for a More Competitive Portugal. 2015 Siemens Customer Survey | Result Report, pp. 1–23 (2016)

Planning Guideline and Maturity Model for Intra-logistics 4.0 in SME

Knut Krowas1 and Ralph Riedel2

1 TUCed Affiliated Institute for Transfer and Continuing Education, Chemnitz University of Technology, Chemnitz, Germany
2 Department of Factory Planning and Factory Management, Chemnitz University of Technology, Chemnitz, Germany
[email protected]

Abstract. Logistics systems have a key function in meeting competitive criteria such as delivery time, punctuality or flexibility. Industry 4.0 technologies are considered an important key to mastering increasing requirements such as individualization, shorter product lifecycles or global competition. However, bringing the complex structures and processes of a logistics system to a higher level of maturity is not an easy endeavor. The actions to be planned and implemented need to be rooted in the overall digitalization strategy of the company. Furthermore, they need to be interlinked with the development of other corporate functions such as production, quality or planning, and they need to be based on current capabilities. To support such a systematic development process, maturity models seem to be the method of choice, and a considerable number of such models is already available. As those models mainly focus on the company as a whole or specifically on production systems, we identified the need for specific support for logistics. Therefore, in this paper we describe the relevant background as well as the components of a maturity model for Intralogistics 4.0.

Keywords: Maturity model · Industry 4.0 · Intralogistics

1 Introduction

One of the most important trends of our time is digitalization, which goes along with long-lasting changes in many areas. A common synonym for digitalization, especially in the manufacturing sector, is Industry 4.0, which will lead to disruptive changes, providing opportunities but also challenges for business models, production technology, and work organization [12]. Mastering this (r)evolution is considered the key to the future sustainability of an (industrial) enterprise. Industry 4.0 technologies will form the basis for increased transparency and improved safety and security in supply chains [18] as well as for sustainable manufacturing [15]. Nowadays, logistics systems need to fulfil high requirements. The trend towards customer-individual production leads to the need for quick-response and efficient processes despite small lot sizes. Besides other approaches such as lean logistics, Industry 4.0 technologies are considered an important key to mastering those challenges.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 331–338, 2019. https://doi.org/10.1007/978-3-030-29996-5_38


However, there is no “off-the-shelf” solution for a “Logistics 4.0”. It rather needs to be tailored to the specific needs of a company or a supply chain, and it needs to be viewed from a holistic perspective rather than being restricted to single technologies. Therefore, a systematic guideline for the design of company-specific solutions is more than desirable. Especially for small and medium-sized enterprises (SME) it is not easy to deal with these topics, due to a general lack of resources, deficits in strategic thinking, and an individual infrastructure which limits adaptability. A recent observable trend to address these shortcomings has been to provide SME with maturity or readiness models, which are meant to allow easier access to the topic [17]. However, many solutions remain at a rather general level or are focused mainly on production systems, which does not really help to derive concrete decisions for particular functions like logistics. Therefore, we identified the need to develop a planning guideline with a maturity model especially for intralogistics. This should serve as a basis for a structured and comprehensible evaluation of the current system, its processes and capabilities, as well as for the derivation of concrete actions for further evolution.

2 Theoretical Background

2.1 Intralogistics

Intralogistics comprises the organization, control, execution and optimization of the intra-company material flow and its accompanying information flow [18]. The objective of intralogistics is to supply the right part or the right tool, in the right quantity and quality, at the right time, at the right place, with minimal costs. Operative functions of logistics are transportation, handling, storage and commissioning [6, 7]. In logistics, a huge amount of data should already be available that just needs to be exploited [10]. In this context the terms Logistics 4.0 and Smart Logistics emerged. Logistics 4.0 refers to the combination of logistics with the innovations and applications of Cyber-Physical Systems [2]. The intended optimization shall be supported by intelligent systems, embedded in software and databases, from which relevant information is provided and shared through Internet of Things (IoT) systems in order to achieve a higher degree of automation [2].

2.2 Industry 4.0

Central paradigms of Industry 4.0 are a horizontal integration throughout value-adding networks, a vertical integration in networked production systems, as well as an integrated engineering along the whole value chain [12, 15]. Industry 4.0 is based on the acquisition of data and their intelligent usage. The vision is real-time feedback in all kinds of processes for their active control and manipulation. This leads to a paradigm shift from rigid production structures to autonomous, self-organizing, intelligent systems. The basis for Industry 4.0 is formed by new sensor technology for data acquisition, mechatronic components which are enriched with intelligent functions, a comprehensive interlinking of those


components for data distribution and exchange, modern information technology for information processing, and human-machine interaction [16].

2.3 Maturity Models

As the digital transformation of a company should not be an occasional process, a roadmap is needed, which in turn should be based on a thorough analysis of the current status and capabilities [5]. A prominent approach to support this process is provided by maturity models, which serve to evaluate the quality of a company’s processes, often against some specific target state [11, 14]. A maturity model usually consists of the following components [1]: maturity levels, maturity dimensions and indicators, weights for indicators and/or dimensions, and a maturity level – parameter matrix. There are frameworks and procedures that provide a sound methodological basis for designing maturity models, see for instance [3, 4]. Moreover, a considerable number of maturity models has been published in the context of Industry 4.0, Smart Manufacturing, or Smart Services [5, 8, 9, 11, 13, 14]. However, many of those mainly address a technical perspective or do not refer to particular functions. So far, we have not found any maturity model that is particularly focused on logistics. Therefore, the gap we identified is a maturity model which allows the as-is situation of Intralogistics 4.0 to be analysed and evaluated, and its relevant potentials to be recognized and exploited. Such a model might be a good extension for the evolution of production processes, or it might be helpful for companies which base their business model on logistics processes, such as logistics service providers.
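The components listed above (maturity levels, dimensions, indicators, weights, and a level – parameter matrix) can be expressed as a small data model. The following sketch is purely illustrative and not part of the authors' model; all class and field names are our own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A single maturity indicator, e.g. 'Integration of sensors and actuators'."""
    name: str
    # One row of the maturity level - parameter matrix: requirement per level 1..5.
    level_requirements: dict = field(default_factory=dict)
    weight: float = 1.0  # optional weighting of the indicator

@dataclass
class Dimension:
    """A maturity dimension grouping several indicators, e.g. 'Data'."""
    name: str
    indicators: list = field(default_factory=list)
    weight: float = 1.0

@dataclass
class MaturityModel:
    levels: list        # e.g. ['outsider', 'beginner', ...]
    dimensions: list

# Minimal instantiation with one dimension and one indicator from the paper:
model = MaturityModel(
    levels=["outsider", "beginner", "advanced", "experienced", "expert"],
    dimensions=[
        Dimension("Data", [
            Indicator("Integration of sensors and actuators",
                      {1: "No usage of sensors and actuators",
                       2: "Sensors and actuators are integrated"}),
        ]),
    ],
)
print(len(model.levels))  # 5
```

Such a structure makes the modularity requirement discussed below concrete: indicators can be added, removed or re-weighted without touching the rest of the model.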

3 Maturity Model for Intralogistics 4.0

3.1 Requirements and Context

The maturity model should meet the following requirements:

• It should be able to evaluate the current degree of implementation of Industry 4.0 technology in the logistics sector of the company.
• It should take a holistic view, especially taking into account socio-technical aspects [16].
• It should be modular, so that indicators can be adapted according to the needs of the respective application.
• Its application should be possible without special training and without special expert knowledge [17].
• It should be able to identify dimensions with high potential and offer guidance on how to attain a higher level of maturity [17].

The application of the maturity model itself is embedded in a planning guideline which consists of five clearly separated parts, see Fig. 1. Hereby, it is possible to secure interim results, and the whole process becomes more transparent for all involved people. We assume that changing logistics towards 4.0 is a complex endeavor which needs to be made manageable, especially in the implementation phase. The definition of (internal) projects of manageable size and risk might be a good approach for that. The planning guideline is loosely oriented towards problem-solving methods, e.g. from Systems Engineering. Snapshot views are avoided, and project management principles such as the involvement of relevant stakeholders and the definition of objectives and of activities for detailed engineering, implementation and necessary resources are considered.


Fig. 1. Steps of the planning guideline

An important component of the planning guideline is a particular maturity model for intralogistics, which reflects the specifics of that corporate function.

3.2 Maturity Levels

The maturity levels are based on a phase model in which the particular phases build on each other but are separated by quality gates. The achievement of one level also implies the achievement of all subjacent levels. In our model a five-level approach was chosen, as this allows a compact presentation of results and follows other widely accepted models, see Table 1.

3.3 Dimensions and Indicators

Based on the literature we defined twelve indicators and categorized them into four dimensions. Hereby, we try to map the intralogistics system in a holistic and socio-technical way. The content-related clustering allows the derivation of recommendations for actions in each dimension.

Data: Intralogistics 4.0 or Smart Logistics is based on data and their intelligent usage. An important precondition is the integration of sensors (and possibly also actuators) to be capable of acquiring data at all. The transport units need to be identified and localized; at higher levels, information processing is needed.

Communication: The exchange of data and information is an essential precondition for the (autonomous) interaction of the different entities of the logistics system. Communication can occur between machines (M2M) and between humans and machines (HMI); furthermore, the information exchange throughout the whole logistics system needs to be considered.

Processes: The relevant areas of action in intra-logistics are the basic working processes of transportation, storage and commissioning. Their optimization is the main goal of applying Industry 4.0 technologies.


Table 1. Maturity levels

Level 1 (outsider): The logistics system does not fulfill any requirement in the context of Industry 4.0 applications. The topic is not known or currently not relevant.

Level 2 (beginner): Industry 4.0 is recognized as relevant and first pilot projects have been realized. However, only a few logistics processes are supported by information technology and the logistics system does not fulfil the requirements for future networking and communication. For further improvement only limited competences are available.

Level 3 (advanced): Industry 4.0 is part of the company’s strategy. The implementation of Industry 4.0 technologies in logistics is pursued and controlled as a continuous improvement process. Data are acquired partly automatically and are used to a limited extent. Necessary competencies are available but require upgrading.

Level 4 (experienced): The company acts based on an Industry 4.0 strategy. A sector-specific innovation management supports the implementation of Industry 4.0 technologies. In the logistics sector all software systems are connected by interfaces; important data are acquired automatically. Internal and inter-company information exchange takes place partly system-integrated. The upgrading of Industry 4.0 competencies is part of the company’s strategy.

Level 5 (expert): The company has already realized an Industry 4.0 strategy. A company-wide innovation management controls the implementation of respective projects. In intra-logistics there are consistent information and communication technologies; all relevant data are acquired and processed automatically. The transportation and storage systems operate autonomously. The company possesses mature competences to develop processes and systems further.

Intellectual Capital: This dimension deals with humans, work organization and the company as a whole. The dimension and its indicators aim at a holistic, socio-technical perspective. Flexibility and adaptability are considered main requirements on logistics systems. Due to the still high proportion of manual work, humans and the work structures they are embedded in play a decisive role in fulfilling these requirements (Table 2).

Table 2. Dimensions and indicators of maturity

Data: integration of sensors and actuators; intelligent transport units; data exchange
Communication: machine-to-machine communication; human-machine interface; information and communication technology
Processes: transportation system; storage system; commissioning
Intellectual capital: employees’ competences (human capital); work design (structural capital); innovation culture (relation capital)


By assigning different weights to the dimensions or indicators it is possible to differentiate them according to their importance.

3.4 Maturity Level – Parameter Matrix

This matrix is the central component, because it represents the evaluation basis for the current and also for future state(s). For each indicator, ordinally scaled requirements are determined and assigned to the different maturity levels. Hereby, it becomes possible to categorize and evaluate the current stage of intralogistics with respect to the particular maturity indicators. For each of the aforementioned dimensions resp. indicators, specific requirements for each of the five maturity levels have been defined, see Table 3. The particularities of logistics are especially considered in the “processes” dimension. The parameters for the logistics processes transportation, storage and commissioning cover characteristics from purely manual over mechanically supported, mechanized and automated up to autonomous.

Table 3. Cut-out of the maturity-level – parameter matrix (dimension “Data”)

Indicator: Integration of sensors and actuators
- Level 1: No usage of sensors and actuators
- Level 2: Sensors and actuators are integrated
- Level 3: Logistics system processes sensor data
- Level 4: Logistics system interprets data for analyses
- Level 5: Logistics system acts autonomously based on data

Indicator: Intelligent transport units
- Level 1: No functionality available
- Level 2: Clear identification and localization possible
- Level 3: Storage of data and conditions possible
- Level 4: Execution of predefined actions
- Level 5: Autonomously acting transport unit

Indicator: Data exchange
- Level 1: No connection to other corporate sectors
- Level 2: Information exchange in intra-logistics via e-mail
- Level 3: Consistent data formats and rules for data exchange
- Level 4: Inter-divisionally connected data servers
- Level 5: Completely connected IT solutions, company-wide
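To illustrate how indicator assessments, weights and the matrix could interact, the following sketch computes an overall maturity level from per-indicator level assessments. This is our own illustrative reading, not code from the paper; the aggregation rule (a weighted average, floored, and capped by a crude stand-in for the quality-gate idea that a level presupposes all subjacent levels) is an assumption.

```python
from math import floor

# (dimension, indicator) -> assessed level 1..5, read off the
# maturity-level/parameter matrix; names follow the paper's tables.
assessment = {
    ("Data", "Integration of sensors and actuators"): 2,
    ("Data", "Intelligent transport units"): 2,
    ("Data", "Data exchange"): 3,
    ("Processes", "Storage system"): 1,
}
weights = {key: 1.0 for key in assessment}  # equal weights by default

def weighted_average_level(assessment, weights):
    """Weighted mean of the assessed indicator levels."""
    total_w = sum(weights.values())
    return sum(level * weights[k] for k, level in assessment.items()) / total_w

def overall_level(assessment, weights):
    """Floor of the weighted average, but never more than one level above
    the weakest single indicator - a stand-in for the quality gates."""
    avg = floor(weighted_average_level(assessment, weights))
    gate = min(assessment.values()) + 1
    return min(avg, gate)

print(overall_level(assessment, weights))  # 2
```

With the sample values above the result is level 2 (“beginner”), which matches the magnitude reported for the pilot company in Sect. 4; the specific assessment values are invented for illustration.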

4 Experiences from Practical Application and Conclusion

The planning guideline and especially the maturity model have been applied in a medium-sized company (150 employees) that produces ceramic tiles. In the logistics department there are 20 employees who are responsible for commissioning, warehouse management, material supply and transportation. So far, the company has started digitalization projects only in production, not in logistics. Current challenges of the company are increasing cost pressure from the market and higher requirements from customers regarding the availability and delivery time of final products. The purpose of the use case was to test the developed model exemplarily and to validate its applicability, i.e. its ability to produce useful results in a practical context.


The proposed planning guideline has been applied completely. In the preparation phase a project team was formed, consisting of employees and middle managers in logistics. The planning horizon was defined as three years. Important goals for intra-logistics are the reduction of stored material, faster deliveries, higher customer satisfaction, and higher efficiency in customer-individual production. In the measurement phase, data collection was done with the help of a semi-structured questionnaire. Gaps between the current state and a future state were identified in all dimensions. However, the gap in the “communication” dimension was evaluated as smaller than those in the “data”, “processes” and “intellectual capital” dimensions. In the evaluation phase a thorough analysis led to the conclusion that the company is at a beginners’ level (level 2) in our maturity model. The storage system was identified as an indicator with many deficiencies in the “processes” dimension. As a consequence, an internal improvement project was defined, which aimed at the seamless identification and localization of every material and part in stock. In the planning phase the internal project was structured in detail with concrete measures such as an update and extension of the identification system using RFID, the equipment of transportation means (e.g. forklifts) with readers and interfaces to the internal WiFi, etc. The implementation is still in progress.
The application of the developed model showed that it was possible to evaluate the maturity of intralogistics regarding Industry 4.0 in a given setting without much effort and without extra training of the involved people. Dimensions and indicators could be easily understood. The application showed that there were neither blind spots nor redundancies. Obviously, the four maturity dimensions with their twelve maturity indicators were able to cover the field of intralogistics in an Industry 4.0 context completely, at least for the pilot company. The defined five maturity levels seemed to be sufficient for discrimination. The results of the maturity evaluation could be easily interpreted, even though the involved people did not have any experience with maturity models. Therefore, the (easy and purposeful) applicability of the concept can be concluded. All in all, we can assume that the developed maturity model can serve as a sound basis for industrial companies to evaluate and further develop their intralogistics systems towards Industry 4.0. The model helps to determine the state of the digital maturity of the logistics system. Areas with high potential for further development can be identified. As a consequence, companies are able to derive and implement purposeful strategies and actions which serve their needs. The modular structure of the model allows the user to adapt specific indicators according to the needs of a particular company. It is also possible to extend the model with additional indicators or even dimensions. The weighting of dimensions or indicators further supports the diversification of the model. It could be shown that the planning guideline with its maturity model has been helpful to systematically analyze and evaluate the current state, to identify the right main points for changes, and to generate appropriate ideas for the evolution of intralogistics towards Industry 4.0. Therefore, our solution seems to be a suitable management tool for the improvement of the logistics system, its elements and processes.


References

1. Akkasoglu, G.: Methodik zur Konzeption und Applikation anwendungsspezifischer Reifegradmodelle unter Berücksichtigung der Informationsunsicherheit. Dissertation, Universität Erlangen-Nürnberg (2013)
2. Barreto, L., Amaral, A., Pereira, T.: Industry 4.0 implications in logistics: an overview. Procedia Manuf. 13, 1245–1252 (2017)
3. Becker, J., Knackstedt, R., Pöppelbuß, J.: Developing maturity models for IT management. Bus. Inf. Syst. Eng. 1(3), 213–222 (2009)
4. De Bruin, T., Freeze, R., Kaulkarni, U., Rosemann, M.: Understanding the main phases of developing a maturity assessment model. In: Campbell, B., Underwood, J., Bunker, D. (eds.) Australasian Conference on Information Systems (ACIS), Sydney, Australia, 30 November–2 December 2005 (2005)
5. De Carolis, A., Macchi, M., Negri, E., Terzi, S.: A maturity model for assessing the digital readiness of manufacturing companies. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) APMS 2017. IFIP AICT, vol. 513, pp. 13–20. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66923-6_2
6. Farahani, R.Z.: Logistics Operations and Management: Concepts and Models. Elsevier, Amsterdam (2011)
7. Gudehus, T., Kotzab, H.: Comprehensive Logistics. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-24367-7
8. Kampker, A., Frank, J., Emonts-Holley, R., Jussen, P.: Development of maturity levels for agile industrial service companies. In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018. IFIP AICT, vol. 536, pp. 11–19. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99707-0_2
9. Lichtblau, K.: Industrie 4.0 - Readiness. Impuls-Stiftung (2015)
10. Maslarić, M., Nikoličić, S., Mirčetić, D.: Logistics response to the Industry 4.0: the physical internet. Open Eng. 6, 511–517 (2016)
11. Mittal, S., Romero, D., Wuest, T.: Towards a smart manufacturing maturity model for SMEs (SM3E). In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018. IFIP AICT, vol. 536, pp. 155–163. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99707-0_20
12. Pereira, A.C., Romero, F.: A review of the meanings and the implications of the Industry 4.0 concept. Procedia Manuf. 13, 1206–1214 (2017)
13. Schuh, G., Anderl, R., Gausemeier, J., ten Hompel, M., Wahlster, W. (eds.): Industrie 4.0 Maturity Index. Managing the Digital Transformation of Companies. acatech STUDY. Herbert Utz Verlag, München (2017)
14. Schumacher, A., Erol, S., Sihn, W.: A maturity model for assessing Industry 4.0 readiness and maturity of manufacturing enterprises. Procedia CIRP 52, 161–166 (2016)
15. Stock, T., Seliger, G.: Opportunities of sustainable manufacturing in Industry 4.0. Procedia CIRP 40, 536–541 (2016)
16. Wagner, T., Herrmann, C., Thiede, S.: Industry 4.0 impacts on lean production systems. Procedia CIRP 63, 125–131 (2017)
17. Wiesner, S., Gaiardelli, P., Gritti, N., Oberti, G.: Maturity models for digitalization in manufacturing - applicability for SMEs. In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018, Part II. IFIP AICT, vol. 536, pp. 81–88. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99707-0_11
18. Zijm, H., Klumpp, M., Regattieri, A., Heragu, S.: Operations, Logistics and Supply Chain Management. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-92447-2

Self-assessment of Industry 4.0 Technologies in Intralogistics for SME’s

Martina Schiffer, Hans-Hermann Wiendahl and Benedikt Saretz

Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Nobelstraße 12, 70569 Stuttgart, Germany
[email protected]

Abstract. The 4th industrial revolution generates a high potential for smart production systems. Many manufacturing companies are therefore considering the application of cyber-physical systems in the sector of intralogistics. The aim is to achieve better logistics performance or lower costs. However, small and medium-sized enterprises (SME) are hesitant about introducing Industry 4.0 technologies. They fear high implementation costs and low benefits, and a lack of know-how increases their reluctance. This paper presents a procedure which enables SMEs to assess the benefits of Industry 4.0 technologies by themselves. The model follows the recognized principle: first improve your processes, then automate them:

• The methodical basis is a process model of intralogistics, which also considers self-controlling cyber-physical systems. In addition, the benefit aspects are assigned to the individual process steps.
• In the specific application, the company first determines the digitization potential of the individual activities and then the associated benefits of Industry 4.0 technologies.

The procedure reduces, on the one hand, the uncertainty of wrong decisions and, on the other hand, enables companies to select Industry 4.0 technologies in a goal-oriented way. The described procedure was validated with SMEs.

Keywords: Intralogistics · Industry 4.0 · Cyber-physical systems · Smart factory

1 Introduction

Production companies are in a constant state of change. Comparable service offers with regard to functionality, quality and price of products bring logistics services, such as short delivery times, to the fore as a competitive factor. The introduction of Industry 4.0 technologies, in particular cyber-physical systems (CPS), is seen as a possible answer to the requirements of the market. Most companies examine the planning and evaluation of Industry 4.0 technologies in value-added processes [1, 2]. The adjoining areas, such as intralogistics, have received little attention. But the implementation of

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 339–346, 2019. https://doi.org/10.1007/978-3-030-29996-5_39


Industry 4.0 technologies in this area holds a high potential. On the one hand, intralogistics secures the flow of materials and information within the company and, on the other hand, it enables a successful supply chain [3]. Despite this high potential, many companies do not invest in Industry 4.0 technologies. In particular, small and medium-sized enterprises (SMEs) have deficits in planning and implementation [4]. The reasons are the high introduction costs, the missing know-how of the companies, as well as the benefits of Industry 4.0 technologies being hard to evaluate. The contribution of this paper is a two-step procedure which enables SMEs to assess the benefits of Industry 4.0 technologies by themselves. The procedure was validated with SMEs in workshops and was developed within the framework of an Industry 4.0 research project.

2 Related Work

Intralogistics is responsible for the material and information flow between the value creation steps within a company. The use of cyber-physical systems can realize great improvement potentials along the internal operational processes. Cyber-physical systems can communicate with each other via the Internet and record their environment with their sensors. The generated data are evaluated, linked and used for the control of corresponding actuators. The result is a decentralized network that can optimize itself and counteract problems along the entire value creation process. CPS technologies offer great advantages especially for SMEs, which often produce small series or individual products. Through intelligent linkage of the material flow, a flexible and fast-reacting factory is realized [5–7]. For the introduction of CPS technologies an economic evaluation is necessary. A benefit or potential analysis is especially important for SMEs, which have tighter budgets than large enterprises. A quantification of the potential of a certain CPS technology is, however, very difficult due to the cross-sectional function of logistics. In the literature, different evaluation methods are named depending on the problem definition and the area of application. Consequently, a uniform procedure for the potential analysis of CPS technologies for use in intralogistics has not yet been defined [6, 8].

3 Methodic Procedure

The process model of intralogistics builds the methodical basis of the procedure. A (process) model is a reflection of reality which, through abstraction and simplification, provides conclusions about states, changes and functional relationships [9, 10]. The requirement criteria for the process model of intralogistics are simplicity, completeness and intuitive presentation. Thus, suitability for SMEs is granted on the one hand, while adaptability to larger enterprises remains possible on the other. The model is subdivided into six process modules:

• Incoming goods
• Internal transport
• Storage
• Order picking
• Packaging
• Outgoing goods

The objects of consideration are the characteristics of the material and information flow, the resources used, and the data relevant for planning and scheduling. Figure 1 shows the generic process model. The model follows the process sequences preparation, implementation and completion known from general process and action models [11]. The process sequences apply to both the information and the material flow level and give the model a clear structure.

Fig. 1. Generic process model

Process activities describe the respective process module in a sequential order. This applies to both the information and the material flow level. The preparation phase includes the steps ‘Create order’, ‘Accept order’ and ‘Release order’ on the information flow level. Each activity must produce a result, which represents the possibility of a system-technical representation. The trigger arrow forms the interface between information and material flow. The implementation phase starts only with the order release and the physical availability of the material on the input buffer area. In the implementation phase, the ‘Start order’ and ‘Monitor order’ activities are at the information flow level. Both activities reflect the progress of the order on the one hand and provide the data basis for calculating the lead time on the other. At the material flow level, implementation begins with the transfer of the performance object (PO) from the buffer area. Generically, the two activities ‘take on PO’ and ‘transfer PO’ take place within the model. Specific process-module activities are possible, e.g. ‘Check PO’ in the incoming goods process. In order to consider the above-mentioned requirement criteria, assumptions and application limits of the process model must be drawn. The intralogistics activities of

342

M. Schiffer et al.

SME’s and the associated interfaces to extra logistics are the object of consideration. The model is be subject to the following assumptions: Information flow level • Rework or cancellation orders are not taken into account • Generation of demand is prerequisite Material flow level • Within the six process modules transport is neglected • Complete and error-free order processing • The input and output buffer area is a defined transfer point for the upstream and downstream processes Based on the generic process model, a detailed process module refinement was developed, see Fig. 2:

Fig. 2. Detailed process model (cut-out)

The detailed process module reflects the structure of the generic model. The only additions are variants of the activities, sub-activities and variants of the sub-activities. The characteristics show the possible attributes of the respective activity. The color differentiation indicates whether the values are additive or alternative. The potential assessment aims to make the benefits of Industry 4.0 Technologies in intralogistics transparent. A multi-stage procedure is recommended for estimating the potential, see Fig. 3. In the first step, the digitization potential of the individual intralogistics activities is determined. The potential is identified with the help of the four target dimensions of variability, quality, velocity and effort [12]. In order to take full account of the potential of Industry 4.0 Technologies, transparency is added as a fifth dimension.

Self-assessment of Industry 4.0 Technologies in Intralogistics for SME’s


To ensure that corporate strategy and goals can be taken into account when evaluating the digitization potential, it is possible and recommended to prioritize the dimensions [12]. Furthermore, there are conflicting objectives between the dimensions which must be taken into account when determining the digitization potential [13]. The step of determining the digitization potential enables the company to make a strategically sound selection of the processes to be digitized. The degree of detail of the target dimensions is not sufficient for estimating the potential of Industry 4.0 Technologies. Therefore, in the next step, KPIs were defined on the basis of the target dimensions, see Fig. 3.

Fig. 3. Procedure for potential assessment

The calculation of these KPIs does not deviate from those commonly used in the literature, so no further definition is given here. The potential estimation is carried out with the values low, medium and high. As with the determination of the digitization potential, the conflicting objectives of the KPIs must also be taken into account when estimating the potential of Industry 4.0 Technologies. The process model forms the basis of the potential estimation. Possible Industry 4.0 Technologies were assigned to the activities of the process model. The assignment of the technologies as well as the estimation of potential took place with the help of experts from research and industry.
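The multi-stage procedure described above can be sketched in code. The following is an illustrative encoding only, not an implementation from the paper: all activity names, strategy weights, dimension scores and KPI ratings are invented for the example.

```python
# Illustrative sketch of the potential-assessment procedure (all activities,
# weights and scores below are invented examples, not data from the paper).

DIMENSIONS = ["variability", "quality", "velocity", "effort", "transparency"]

def digitization_potential(scores, weights):
    """Step 1: weighted average of per-dimension scores (0 = none .. 3 = high).
    Weights let a company prioritize dimensions according to its strategy."""
    return (sum(weights[d] * scores[d] for d in DIMENSIONS)
            / sum(weights.values()))

# Hypothetical ratings of two intralogistics activities
activity_scores = {
    "Check PO (incoming goods)": {"variability": 1, "quality": 3, "velocity": 2,
                                  "effort": 2, "transparency": 3},
    "Transfer PO (storage)": {"variability": 2, "quality": 1, "velocity": 3,
                              "effort": 3, "transparency": 1},
}
strategy_weights = {"variability": 1, "quality": 2, "velocity": 1,
                    "effort": 1, "transparency": 2}

# Rank activities by digitization potential to pick what to digitize first
ranked = sorted(activity_scores,
                key=lambda a: digitization_potential(activity_scores[a],
                                                     strategy_weights),
                reverse=True)
print("Digitize first:", ranked[0])

# Step 2: experts assign technologies to activities and estimate their
# benefit per KPI on the ordinal low/medium/high scale
POTENTIAL = {"low": 1, "medium": 2, "high": 3}
tech_benefit = {("Check PO (incoming goods)", "RFID tracking"):
                {"lead time": "high", "error rate": "medium"}}
for (activity, tech), kpis in tech_benefit.items():
    score = sum(POTENTIAL[v] for v in kpis.values()) / len(kpis)
    print(f"{tech} at '{activity}': mean KPI potential {score:.1f}")
```

Averaging is only one possible aggregation; the conflicting objectives mentioned above would in practice argue for keeping the per-dimension and per-KPI values visible rather than collapsing them into a single number.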

4 Results/Validation

The validation of the methodical approach took place at two medium-sized factory equipment suppliers during a workshop. Within the scope of the workshop, the requirements for the applicability of the procedure in SMEs were examined.


M. Schiffer et al.

Criteria for this included comprehensibility, extensibility, survey effort and consistency. The procedure for the self-assessment of Industry 4.0 Technologies was essentially confirmed. In particular, the approach of optimizing the processes first and then digitizing them met with approval. For example, the process model revealed gaps in process responsibility and thus provided initial fields of action for process improvement. The procedure for determining the digitization potential and the benefit potential of Industry 4.0 Technologies was confirmed under the aspects of SME suitability. Three extension requests were mentioned by the participating companies:

1. Possible combination of the process modules
It was noted that in SMEs, several process modules are often processed together, e.g. combined picking and transport orders. Figure 4 shows the possible combinations of the individual process modules. For better understanding, production/assembly has been included in this overview. Internal transport is shown between the process modules in this diagram.

Fig. 4. Possible combination of the process modules

2. Extension of the KPIs by soft factors
In the course of the potential assessment, participants noted that Industry 4.0 Technologies are not only introduced to improve process capability. Especially for SMEs, the external impact is very important: towards the customer on the one hand, but also towards potential employees. SMEs often have difficulty finding suitable personnel. In order to be seen as an innovative and sustainable company, companies should also invest in new technologies for these reasons. To this end, the potential assessment could be supplemented by KPIs such as degree of innovation and employee motivation.


3. Extension of the model by life cycle costing
Not only the estimation of potential is relevant for the introduction of Industry 4.0 Technologies. As already mentioned, SMEs fear the introduction of such technologies. These fears result on the one hand from the benefit not being recognizable and on the other hand from the high costs. In order to have a comprehensive picture for the decision, the procedure should be extended by life cycle costing. The extension of life cycle costing in relation to Industry 4.0 Technologies is also currently part of the research project "Industry 4.0 profitable".

5 Conclusion

Industry 4.0 Technologies will gain in strategic importance in the future. SMEs in particular should not fall behind with regard to digitalization. Nevertheless, the approach is: first improve your processes, then digitize them. The step-by-step approach in this article supports SMEs in improving intralogistics processes and in making decisions about Industry 4.0 Technologies. After the process improvement, the procedure provides decision support on which process step the company should digitize first and, subsequently, with the help of which Industry 4.0 Technology. The results of the validation show that the procedure appears conclusive and plausible. The extensions to the procedure requested by the companies are currently being examined and will subsequently be taken into account in the model.

Acknowledgements. The IGF project 19183 N of the Forschungsvereinigung Bundesvereinigung Logistik (BVL) e.V., Schlachte 31, 28195 Bremen was funded by the AiF within the framework of the program for the promotion of joint industrial research (IGF) of the Federal Ministry of Economics and Energy by resolution of the German Bundestag.

References

1. Dombrowski, U., et al.: Prozessorientierte Potentialanalyse von Industrie 4.0-Technologien. ZWF 113(3), 107–111 (2018)
2. Schönmann, A., et al.: Planung und Bewertung von Produktionstechnologien. ZWF 113(12), 7–11 (2018)
3. Fördern+Heben: Gut gewartet, hält länger – Funktionssicherheit mit Life-Cycle-Plänen langfristig erhalten. f+h Fördern und Heben 5, 26–28 (2015)
4. Wang, Y., Wang, G., Anderl, R.: A holistic approach for introducing the strategic initiative Industrie 4.0. In: IAENG Transactions on Engineering Sciences, vol. 2, London (2016)
5. Bauernhansl, T.: Industrie 4.0 in Produktion, Automatisierung und Logistik. Springer, Wiesbaden (2014). https://doi.org/10.1007/978-3-658-04682-8
6. Hausladen, I.: IT-gestützte Logistik. Systeme – Prozesse – Anwendungen, 3rd edn. Springer Gabler, Wiesbaden (2016). https://doi.org/10.1007/978-3-658-13080-0
7. Ludwig, T., et al.: Arbeiten im Mittelstand 4.0 – KMU im Spannungsfeld des digitalen Wandels. In: Digitalisierung IT und Arbeit, vol. 53. Springer, Heidelberg (2016)


8. Saam, M., Viete, S., Schiel, S.: Digitalisierung im Mittelstand: Status Quo, aktuelle Entwicklungen und Herausforderungen. Forschungsprojekt im Auftrag der KfW Bankengruppe, Mannheim (2016)
9. Karer, A.: Optimale Prozessorganisation im IT-Management. Ein Prozessmodell für die Praxis. Springer, Berlin (2007). https://doi.org/10.1007/978-3-540-71558-0
10. VDI 3633: Simulation von Logistik-, Materialfluss- und Produktionssystemen – Grundlagen. Blatt 1, 2014-12 (2014)
11. Ballmer, T., Brennenstuhl, W.: Deutsche Verben. Eine sprachanalytische Untersuchung des deutschen Verbwortschatzes. In: Ergebnisse und Methoden moderner Sprachwissenschaft, Band 19. Narr, Tübingen (1986)
12. Erlach, K.: Wertstromdesign, 2nd edn. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-540-89867-2
13. Ward, P.T.: Configurations of manufacturing strategy, business strategy, environment and structure. J. Manag. 22, 597–626 (1996)

Industry 4.0 Visions and Reality – Status in Norway

Hans Torvatn, Pål Kamsvåg, and Birgit Kløve

SINTEF Digital, Trondheim, Norway
[email protected]

Abstract. The concept and vision of Industry 4.0 has been around for almost a decade and has gained a lot of momentum and attention globally. Central to the vision of Industry 4.0 is the concept of a “Cyber-Physical System”, linking the IT elements of an enterprise (cyber) with its physical system (man and machine). This vision is well known and promoted as crucial in radically transforming today’s manufacturing industry. While there is a plethora of papers and studies on the various “cyber” aspects and on the concept, visions, benefits and downsides of Industry 4.0, few papers have much to say about actual implementation. Based on a digital maturity mapping of ten front-line manufacturing enterprises in Norway, this paper analyses implementation at shop floor level of both the cyber and the physical system and their interaction. From the survey data a clear picture emerges of the development of a cyber system, as well as worker usage and benefit of the system. However, the two systems do not interact very well: worker interaction is limited to plain old keyboard usage, instead of employing more mobile, hands-free, voice-based or similar interaction methods. Currently there is no cyber-physical system, rather a burgeoning cyber system poorly linked to the physical world. If the cyber-physical system is to be realized, there is a need for a rethinking and upgrading of man-machine interaction.

Keywords: Smart manufacturing & Industry 4.0 · Human-machine interaction (HMI) & Operator 4.0 · Cyber-physical systems · Survey · Norway



1 Introduction

Industry 4.0 (I4.0) as a concept and vision has been around since 2011. If not countless, at least thousands of academic papers have been written on it all over the world. Most of these papers focus on the technological elements of I4.0, but there are enough papers outlining the concept and its merits. The human aspects of the concept, on the other hand, remain under-researched. While I4.0 is a German concept in its origin, it has become quite popular in Norway and the other Nordic countries. Through whitepapers, workshops, networks, industry and government agencies, the idea has been promoted and encouraged in Norway. However, there have been few, if any, attempts to measure the implementation rate or the benefits for those implementing it. While some case studies exist, no surveys of the status of I4.0 in Norwegian industry exist, and no one knows what the industry is struggling with in its implementation. This paper employs survey data to describe the situation in a group of frontrunning Norwegian manufacturing companies participating in a national strategic research program called “Sustainable growth of Manufacturing”.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 347–354, 2019. https://doi.org/10.1007/978-3-030-29996-5_40

2 Visions of Industry 4.0

I4.0 refers to the current trend of digitalization, automation and data exchange in manufacturing. According to the I4.0 Working Group, the German originators of the I4.0 initiative, progress in the field of information technology and concepts such as the Internet of Things (IoT) and cyber-physical systems (CPS) pave the way for a “fourth industrial revolution” [1, 2]. Cyber-physical systems merge the virtual and physical world through embedded networks which are capable of monitoring and controlling physical processes. These systems detect data from physical objects through sensors and interact with physical processes via machinery, actuators and human movements [3, 4]. The current digitalization of manufacturing challenges the traditional role of industrial workers [5]. A shift from physical, repetitive and low-skilled work to more complex and cognitive tasks is anticipated [1, 6, 7]. Operators at the shop floor will probably need to control more machines simultaneously and thereby know more about the production processes in the future [7]. In order to handle the increasing complexity of production and the increased data flows from cyber-physical systems (CPS), operators need to be supported by well-functioning assistance systems [7].

2.1 Operationalization of Industry 4.0

In a much-cited paper by Hermann et al. [2], four design criteria were outlined; these four criteria must be met for a system to be called an I4.0 system:
• Interconnection: This is the system’s ability to communicate and collaborate internally and externally (human-human, human-machine and machine-machine). Wireless communication with sensors, IoT and IoE is a critical part of this. It also includes the security aspects of the system.
• Technical assistance: This is the system’s ability to offer assistance to humans in their work, both virtual assistance (information, cognitive support) and physical assistance by various tools.


• Decentralized decisions: This refers to the system’s ability to delegate authority of decision making from managers through operators and ultimately to machines.
• Information transparency: This refers to the fusion of the physical and virtual world through the linking of sensor data with digitalized plant models, enabling the creation of a virtual copy of the physical world (digital twin).

2.2 The Human Aspect of Industry 4.0

I4.0 is criticized for being just another “tech concept”. However, the I4.0 literature actually accentuates the human factor and considers the vision of a completely automated factory as neither desirable nor realistic [9]. The proponents of I4.0 expect digital assistance systems and a new generation of collaborative industrial robots to make work more exciting and rewarding across all hierarchical levels [1, 2]. Hence, in order to create an optimal cyber-physical system, the human workers should be able to interact with and use the cyber system as much as possible. As I4.0 is introduced to enterprises, this becomes even more prominent. With the advancement in technology, “the number of computing devices that a person uses is increasing and there is a need of faster and non-intrusive methods of communicating with these devices” [11].

3 Survey Method, Questionnaire and Sample

3.1 Design of Survey

Sustainable growth of Manufacturing is a cross-disciplinary center for competitive high-value manufacturing in Norway, established in 2015. Its vision is that with the right products, technologies and humans involved, sustainable and advanced manufacturing is possible in high-cost countries such as Norway. I4.0 is a key element in this vision of preserving Norwegian manufacturing and keeping it competitive. In the spring of 2017, it was decided to carry out a survey mapping the digital maturity of the participants. The participating companies were concerned about their ability to implement I4.0 and wanted an analysis of their performance. It should be noted that the enterprises in question clearly belong to the more advanced group of Norwegian manufacturers. While they vary in product, ownership and geographic region in Norway, they are all exporters with decades of operation at the location investigated. They also think strategically about development, have prior experience of working with research communities, and devote considerable internal effort to this.

3.2 Designing and Implementing the Survey

A cross-sectional study was carried out in ten Norwegian manufacturing companies, covering all organizational levels and roles (N = 3188) in spring/summer 2017. The survey was constructed in dialogue with the enterprises, building on the I4.0 design criteria. However, “interconnection” and “information transparency” were dropped from the survey, because the respondents (especially at shop floor level) were not expected to have knowledge of these issues. The study was mostly conducted via email, but some paper copies were also distributed. After repeated follow-ups, the total sample consists of 1023 males and 160 females. With a total response rate of 37%, the sample can be said to be representative of the participating enterprises, but as outlined, the participating companies are not representative of Norwegian industry as a whole. Within the enterprises, not all respondents were relevant to the sample, given our focus on production and production workers. The sub-sample (n = 305) of interest in this study consists of operators working in the production hall. Looking at the whole sample together, the most frequent age range is 41 to 60 years (59.67%), and 9.8% were women.
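As a quick plausibility check of the reported figures (our own arithmetic, not the authors'): 1023 males plus 160 females gives 1183 respondents, which against the N = 3188 invited matches the reported 37% response rate.

```python
# Plausibility check of the reported sample figures (our own arithmetic).
males, females, invited = 1023, 160, 3188
respondents = males + females            # 1183 completed responses
response_rate = respondents / invited
print(f"{respondents} of {invited} -> {response_rate:.1%}")  # 1183 of 3188 -> 37.1%
```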

4 Results

4.1 Existing Digital Tools and Systems at Shop Floor Level

We surveyed a set of digital tools and systems at shop floor level. Figure 1 reports the usage of each digital tool. Note that this finding supports the prediction of Fatima et al. [11] on the number of devices; the number of devices per user is now clearly above two on average. A total of 98% of the respondents use computers in their work. The computer is a traditional digital tool, but also an extremely powerful and versatile one. It can be integrated with all kinds of systems and used for almost all kinds of digital tasks, including of course all administrative tasks. However, as can be seen from Fig. 1, the PC is not the only tool used. The typical operator uses several tools; new tools are added to the old ones, not supplanting them. As we can see, 61% also use portable smartphones to carry out their work. Only 7% use tablets or smart watches, and less than 3% voice control from portable equipment or smart glasses. In addition to PCs,

Fig. 1. Digital tools employed at shop floor level. Percentage using this tool in their work.


smartphones and photo/video, we can see a fair use of “cyber systems”. 58% use MES, and all ten companies employ MES. Tracking systems (RFID and similar), robots and portable scanners are used by 39%, 35% and 30%, respectively. Of those workers whose work descriptions include administrative tasks such as recording and documentation of quality, ordering components and planning their own production, a total of 67–77% have digital systems to perform these tasks. More specifically, 77% of the respondents use digital systems for documenting product quality, compared to 20% who only have paper or oral documentation systems. Also, about 70% of the respondent group use digital systems for orders, production planning and maintenance. Thus, it seems that I4.0 has gained some position on the shop floor, as digital tools and systems are available, at least to some extent, at the shop floor level. As outlined in the design principles of I4.0, supporting companies in identifying and implementing I4.0 scenarios, decentralized decisions refer to the ability of cyber-physical systems (machines) to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of interventions, exceptions or conflicting goals are tasks delegated to a higher level (humans). However, only 9% of the respondents in the study report that ICT systems suggest or take decisions in normal operation, 29% report that people and ICT systems cooperate to some degree in proposing and evaluating solutions, while 47% report that all decisions are made by people in production. 16% report that they do not know. Thus, the I4.0 transformation towards decentralized decisions has started, as digital decision tools are to some degree available at the shop floor level, but there is a lot of work remaining before an I4.0 level is reached.
For those respondents having access to digital systems for their own production planning (N = 156), only 6% have access to portable information (information available on the body through smartphones, tablets, smart watches, etc.), and only 40% experience that the information is updated instantly when changes occur. Most workers have the information used in production planning available at the workstation (82%), thus limiting their ability to move around on the shop floor while maintaining full control over production so as to continuously optimize their production planning.

4.2 Use, Perceived Usefulness and Quality of Digital Technology

85% of the total sample of N = 305 use digital tools to get information about production in their work. However, only about half of the respondents who receive this type of information digitally believe that the information they receive is sufficient (54%), updated (47%) and understandable (51%). Thus, the perceived quality of digital information about production at the shop floor level has room for improvement. Still, the operators see digitalization as useful for carrying out work tasks. More than 70% of the respondents believe that the quality of work gets better, and that they get the work done faster, using digital tools. Also, about 60% believe that they get the work done more safely, and that by using digital tools they get work done which could not be carried out without such tools.


5 Analysis

5.1 State of the Cyber-Systems

The survey clearly identifies the beginning of a cyber-system. We can see that the investigated companies have implemented manufacturing execution systems (MES), use robotics and production planning systems, and have started to delegate authority to machines. The changes have reached the shop floor level; operators find the information useful, albeit incomplete, and rely on it in their work. Coordination, information, maintenance and order planning are done digitally for two thirds of the operators. We have not discussed issues of interconnection (connectivity and security), but we know that the system is at least functioning in a daily work setting. While we can clearly see the beginning of a cyber-system, it is also clear that there is much potential for further development. Information is not perceived as trustworthy by half of the users, it is real-time for only 40%, and a third of the workers are not using digital coordination and order planning.

5.2 State of the Human-Computer Interaction

“The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as an interaction between humans” [12]. We can see that this is not the case in these companies. The most frequent form is still computer interaction taking place via a keyboard and a mouse. Historically, we interacted with computers as “key strokers” working on a keyboard. Over the years, several new features and possibilities have been introduced. The first major update was the introduction of touch screens, providing lightweight, mobile and easy-to-use interfaces relying on wireless communication. From then on we had evolved into “screen pawers”, and as can be seen from Fig. 1, a total of 61% use their smartphone and 7% their tablet at work. The second upgrade is a set of technologies including virtual reality, augmented reality, various voice control and voice command systems, gestures/hand movements and eye movements. Even brain waves are now possible [13]. We can call this third generation “data whisperers”, since several of these systems allow for speech commands. However, use is still limited, and interaction between man and machine happens through computers using keyboards or through smartphones/tablets. To what degree are these various tools suitable at shop floor level? We will analyze this along two dimensions: mobility and the need for the operator to employ his/her hands in interaction with computers. Starting with the latter, since an operator at shop floor level moves around and uses his/her hands a lot in the operation, it seems obvious that an easy interaction should be mobile and allow for hand usage during work. PCs are not a good choice, relying on keyboards and hand usage. Neither are smartphones, because information must be retrieved by keystrokes or touching screens, which limits other hand usage and ties the operator up in the task of commanding the smartphone instead of doing his physical work.
Voice commands would be very appropriate if the noise level allows it, and eye movement would also be a good option for a worker.


Regarding mobility, PCs are not a good choice because they are stationary, which significantly reduces the operator’s ability to move freely while still getting access to updated and important information when he needs it in his work. The smartphone is a powerful portable device that is easy to carry along and deploy, often has a very simple user interface, and is user-friendly, as most workers use this digital tool more or less all day outside of work. It can provide necessary real-time information and is very flexible in use, as one can access almost any information through the internet. Of course, as the smartphone is very flexible, it can support voice control or hands-free usage through custom applications delivering important information through a speaker or headset. Digital tools such as smart glasses or voice control, though often not as flexible as smartphones, are completely hands-free, and thus support the operator in doing his/her tasks while at the same time receiving information. However, so far it seems that operators’ interaction with digital systems in production planning is limited to plain old stationary keyboard usage, instead of more mobile, hands-free, voice-based or eye-movement-based interaction systems. If I4.0 goals are to be achieved, operators at the shop floor will need to control more machines simultaneously, and therefore cannot be placed stationary in front of a workstation. As they will most likely need to know more about the production processes in the future to become strategic decision-makers rather than pure operators of machines, operators will need to be supported by well-functioning portable assistance systems that provide the necessary real-time information and let them continuously optimize their own production planning.

6 Conclusion

Comparing the ideals of I4.0 to the reality of a group of frontrunning Norwegian manufacturing enterprises, we can clearly see the start of a cyber-physical system. There are digital tools and information in use at shop floor level, and decentralization of decisions has started. While we can see a start, there is also a lot of room for improvement. This is especially true regarding the human parts of the system. Our findings indicate that the employees on the shop floor lack the necessary digital tools and assistance systems to form a truly interconnected cyber-physical system. Digitally old forms of human-computer interaction, such as the stationary computer, dominate. Some respondents use mobile digital tools, such as tablets and smartphones, but they are not nearly as common in manufacturing as in people’s private lives (at least in Norway). Technologies like smart glasses, virtual reality and augmented reality are almost completely absent in the investigated companies. Human employees at all levels and in all departments of the organization, and especially at the shop floor, need such tools to be part of the CPS and to improve their performance. For instance, smart glasses and voice control could make it easier for operators to receive information and guidance while keeping their hands free and ready to handle their actual job. Mobile solutions, such as tablets or smartphones, could make operators more flexible and capable of controlling more machines simultaneously. In order to utilize these tools, the interfaces should be designed in a way which satisfies the demands of the operators. Both hardware and software must be developed with respect to the workers at the shop floor. In order to improve the implementation of cyber-physical systems, we need to improve the collaboration between man and machine through better interfaces. We consider this finding applicable outside of Norway as well. As far as we know, no survey-based studies outlining the situation of other countries in implementing I4.0 have been carried out. However, while the exact level of I4.0 implementation is likely to vary, the challenges facing the operator at shop floor level are similar across nations; there is a need for mobile and hands-free HCI in other countries as well. Therefore, we expect the general problem of poor human-machine interfaces to be relevant in settings outside of Norway.

References

1. Kagermann, H., Wahlster, W., Helbig, J.: Securing the future of German manufacturing industry. Recommendations for implementing the strategic initiative INDUSTRIE 4.0. Final report of the Industrie 4.0 Working Group (2013)
2. Hermann, M., Pentek, T., Otto, B.: Design principles for industrie 4.0 scenarios. In: 2016 49th Hawaii International Conference on System Sciences (HICSS). IEEE (2016)
3. Wang, L., Törngren, M., Onori, M.: Current status and advancement of cyber-physical systems in manufacturing. J. Manuf. Syst. 37, 517–527 (2015)
4. Thoben, K.-D., Wiesner, S., Wuest, T.: Industrie 4.0 and smart manufacturing – a review of research issues and application examples. Int. J. Autom. Technol. 11(1), 4–16 (2017)
5. Schneider, P.: Managerial challenges of Industry 4.0: an empirically backed research agenda for a nascent field. Rev. Manag. Sci. 2(3), 1–46 (2018)
6. Hecklau, F., et al.: Holistic approach for human resource management in Industry 4.0. Procedia CIRP 54, 1–6 (2016)
7. Prinz, C., et al.: Learning factory modules for smart factories in industrie 4.0. Procedia CIRP 54, 113–118 (2016)
8. Howaldt, J., Kopp, R., Schultze, J.: Why industrie 4.0 needs workplace innovation – a critical essay about the German debate on advanced manufacturing. In: Oeij, P.R.A., Rus, D., Pot, F.D. (eds.) Workplace Innovation. APHSW, pp. 45–60. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56333-6_4
9. Hirsch-Kreinsen, H., Weyer, J., Wilkesmann, J.D.M.: Industry 4.0 as Promising Technology: Emergence, Semantics and Ambivalent Character. Universitätsbibliothek Dortmund (2016)
10. McAfee, A., Brynjolfsson, E.: Machine, Platform, Crowd: Harnessing Our Digital Future. WW Norton & Company, New York (2017)
11. Fatima, R., Usmani, A., Zaheer, Z.: Eye movement based human computer interaction. In: 2016 3rd International Conference on Recent Advances in Information Technology (RAIT) (2016)
12. Rautaray, S.S., Agrawal, A.: Vision based hand gesture recognition for human computer interaction: a survey. Artif. Intell. Rev. 43(1), 1–54 (2015)
13. Tan, D., Nijholt, A.: Brain-computer interfaces and human-computer interaction. In: Tan, D., Nijholt, A. (eds.) Brain-Computer Interfaces. Human-Computer Interaction Series, pp. 3–19. Springer, London (2010). https://doi.org/10.1007/978-1-84996-272-8_1

Exploring the Impact of Industry 4.0 Concepts on Energy and Environmental Management Systems: Evidence from Serbian Manufacturing Companies

Milovan Medojevic1, Nenad Medic1, Ugljesa Marjanovic1(&), Bojan Lalic1, and Vidosav Majstorovic2

1 Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia
{medojevicm,nenad.medic,umarjano,blalic}@uns.ac.rs
2 Department for Production Engineering, University of Belgrade, Belgrade, Serbia
[email protected]

Abstract. It has become more than evident that the era of Industry 4.0 is upon us: industrial manufacturing companies face strong demand to increase their productivity and profitability by realizing, or upgrading to, smart factories and resource-efficient manufacturing processes. With this in mind, the aim of this paper is to provide insight into best practices in implementing Industry 4.0 concepts and their implications for manufacturing energy and environmental management systems and overall manufacturing energy efficiency. Our analysis used the Serbian dataset from the European Manufacturing Survey conducted in 2018. A non-parametric (Spearman's) correlation analysis of the introduction of technologies on the one hand and EN ISO 50001 and EN ISO 14001 on the other was carried out. The results indicate significant correlations between Industry 4.0 concepts and both energy and environmental management systems.

Keywords: Industry 4.0 · ISO 50001 · ISO 14001 · EMS

1 Introduction

Energy Management Systems (EnMS) and Environmental Management Systems (EnvMS) have emerged over the last two decades as a proven best-practice methodology to ensure sustainable and steadily improving energy and environmental efficiency performance in manufacturing firms [1–3]. An EnMS outlines a structured and systematic approach to integrating energy efficiency into an enterprise's management culture [4]. In contrast, Industry 4.0, a German strategic initiative, aims at the creation of smart factories in which manufacturing technologies are upgraded and transformed by Cyber-Physical Systems, the Internet of Things (IoT), and cloud computing [5, 6]. In the Industry 4.0 era, manufacturing systems can monitor physical processes by generating a so-called "digital twin" (or "cyber twin") of the physical system or process, and make smart decisions through real-time communication and cooperation between humans,

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 355–362, 2019. https://doi.org/10.1007/978-3-030-29996-5_41


machines, and sensors [7, 8]. Today, the factors driving Industry 4.0 are paving the path for intelligent energy management. Nevertheless, when companies embark on efforts to introduce Industry 4.0, they mostly focus on the 'core' parameters of production efficiency, quality, and cost, which is obviously what drives revenue. However, there is a strong case for also including EnMS in such projects. After all, systematically tracking energy consumption, production, and costs is a major challenge in most industries today, and one that has a lot in common with the Industry 4.0 way of thinking.

The development of technologies for the instrumentation and monitoring of industrial processes enables data capture at ever-increasing resolution, allowing increasingly powerful analyses. In the field of energy management, sophisticated physical meters (instruments) can interpret physical quantities that allow a precise understanding of the processes of interest, monitoring variables that range from applied power to harmonics describing the quality of the electricity consumed. From the energy management perspective, Industry 4.0 is realized in the connectivity between measuring instruments and the entire information and automation architecture of industrial organizations. Bearing in mind the combination of environmental concerns, cost pressure, and regulation, as well as the proactiveness of organizations when it comes to efficient consumption of energy, energy management becomes one of the main pillars of Industry 4.0. In energy management, the available data can give rise to prediction models for the energy consumption (or generation) of operations, starting from planned production levels or other contextual variables.

This study examines how and whether integrated EnMS, EnvMS, and Industry 4.0 concepts influence manufacturing efficiency. In addition, it provides insight into the correlation between integrating energy management systems and Industry 4.0 concepts into daily practice.
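As a toy illustration of the kind of consumption-prediction model mentioned above (not taken from the paper; all names and numbers are invented), planned production levels can be mapped to expected energy consumption with a simple least-squares fit:

```python
# Illustrative sketch: fit energy consumption (kWh) as a linear function of
# planned production volume, then predict the energy need of a planned run.
# The monthly data below is entirely hypothetical.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x with a single regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

units = [100, 150, 200, 250, 300]     # units produced per month
kwh   = [1200, 1450, 1700, 1950, 2200]  # energy consumed per month

a, b = fit_linear(units, kwh)
predicted = a + b * 400  # expected kWh for a planned run of 400 units
```

In practice such models would be fitted on high-resolution meter data and could include further contextual regressors (shift patterns, ambient temperature), but the principle is the same.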
Our study relies on a unique dataset from the European Manufacturing Survey (EMS), with a sample of 240 industrial companies from the Republic of Serbia. Serbia is a developing country with a substantial inflow of foreign investment into various types of manufacturing; for instance, electricity cost savings versus the EU-28 average amount to 41% [9].

2 Background and Related Work

2.1 Implications of Industry 4.0 Concepts on Industrial Energy Strategies

For manufacturers to be competitive in the fourth industrial revolution, reducing production costs is of crucial importance. One way to achieve this is to create a solid strategy for EnMS in an Industry 4.0 environment. Among the many factors that need to be optimized in order to stay competitive, energy efficiency is one that is still easily forgotten in most companies. While improved energy efficiency, as well as an effectively implemented EnMS, is always welcome, it is rarely the main driver of Industry 4.0 deployments. However, energy savings have been reported by those organizations that have attempted to make Industry 4.0 a reality. For example, Daimler in Germany has reported a 30% improvement in energy efficiency for its robot systems that use Industry 4.0 techniques. Another example is Canadian Forest Products, which reported a 15% reduction in energy consumption by using real-time alerts for energy consumption outside of anticipated norms [8].


According to previous literature, the highest impact on energy strategy comes from the IoT [10]. As a consequence of the widespread implementation of IoT solutions, huge amounts of data are being generated, introducing the term Big Data, used to describe data sets characterized by high volume, high velocity, and high variety [11], which require advanced analytical tools to turn data into actionable information by identifying patterns, trends, and relationships [12]. However, it is estimated that less than 1% of all available data is currently analyzed [13]. Big Data therefore creates important challenges and opportunities now and in the coming years.

In addition, the presence of Big Data opens the door to Blockchain technologies: distributed databases and ledgers made of blocks stored on a large number of machines, so that any change made to the database is permanently recorded, while every record is made publicly available thanks to the distributed design [14]. This puts the spotlight on new opportunities for reducing or eliminating the need for a trusted middleman in many operations, for ensuring a supply of certified renewable electricity from distributed energy generation, for the verification of legal energy provisions, and more. From the EnMS perspective, Blockchain technology offers companies the potential to analyze their energy consumption with greater accuracy and efficiency, to monetize the data collected, and to validate energy consumption and savings in real time. Additionally, this technology provides a new way to record and monetize carbon credits where applicable. Rising electricity costs and energy sustainability are a critical risk for factory owners and asset managers worldwide. For instance, German energy executives see a wide range of possible applications for Blockchains in the energy sector and believe the technology could have the potential to reduce costs and spur new business models [15].
However, Blockchain technology faces many challenges: privacy and data security issues, but also technical issues such as the currently rather long time needed to conduct a transaction [16].

Rapid prototyping technologies represent one of the major developments in modern manufacturing [17]. Beyond prototyping, technologies such as additive-layer manufacturing also benefit serial and mass manufacturing processes. For example, the 3D-printed fuel nozzles of General Electric's LEAP engine made it possible to go from 18 sub-parts to only one [16]. This not only multiplied the durability of the component fivefold and reduced its weight by 25%, but also enabled a better-optimized geometry that achieves higher combustion efficiency [18]. This led to fuel savings throughout the life of the engine and reduced CO2 emissions.

Another example is Augmented Reality (AR). A spread of AR use in EnMS through the "Open Energy" concept has been observed [19]. For instance, the lobby showcase of the Fraunhofer Center for Sustainable Energy Systems uses thousands of sensors inside and outside its headquarters, turning the 5CC building into a living laboratory that gathers and shares live data.

2.2 Research Questions

Given the above, the following research questions were proposed in an attempt to identify the relationship between the implementation of digital technologies and the implementation of energy and environmental management systems:


• RQ1: What is the relationship between the implementation of Industry 4.0 concepts and energy management systems in Serbian manufacturing companies?
• RQ2: What is the relationship between the implementation of Industry 4.0 concepts and environmental management systems in Serbian manufacturing companies?

3 Data and Methodology

For the purposes of the analysis, the Serbian dataset from the EMS conducted in 2018 was used. The EMS is a survey on manufacturing strategies and the application of innovative organizational and technological concepts in production in the European manufacturing industry [20–23]. The survey was conducted among manufacturing companies (NACE Rev. 2 codes from 10 to 33) having at least 20 employees. Each survey has been carried out on a proportionally size- and industry-stratified random sample. The dataset includes 240 companies from all manufacturing industries, as given in Table 1. A comparison of the data regarding firm size distribution shows no significant size bias.

Table 1. EMS database – distribution of companies by size

          Company size
          20 to 49 employees    50 to 249 employees    250 and more employees
          n (%)                 n (%)                  n (%)
Serbia    110 (45.8)            103 (42.9)             27 (11.3)

Table 2. Classification of manufacturing sectors according to share of total sample

NACE Rev. 2   Manufacturing industry                                        Share of total sample (%)
10            Manufacture of food products                                  16.3
25            Manufacture of fabricated metal products, except
              machinery and equipment                                       15.0
22            Manufacture of rubber and plastic products                     8.8
27            Manufacture of electrical equipment                            6.3
28            Manufacture of machinery and equipment n.e.c.                  6.3
14            Manufacture of wearing apparel                                 5.8
16            Manufacture of wood and of products of wood and cork,
              except furniture; manufacture of articles of straw and
              plaiting materials                                             4.6
23            Manufacture of other non-metallic mineral products             4.6
29            Manufacture of motor vehicles, trailers and semi-trailers      4.2
              Others                                                        28.5

Note: NACE Rev. 2 stands for the Statistical classification of economic activities in the European Community.


The largest industry in the sample is the manufacture of food products (NACE 10; 16.3%), followed by the manufacture of fabricated metal products (NACE 25; 15.0%) and the manufacture of rubber and plastic products (NACE 22; 8.8%). Tables 1 and 2 depict the sample distribution of the dataset from Serbia. Lastly, to analyze the relationships between the implementation of digital technologies on one side and EnMS and EnvMS on the other, we employed a non-parametric (Spearman's) correlation analysis.
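The correlation computation described above can be illustrated with a small sketch (the survey items and responses below are entirely invented, not drawn from the EMS dataset): Spearman's rank correlation between a binary technology-adoption indicator and a binary certification indicator, computed via average ranks so that ties are handled correctly.

```python
# Illustrative sketch of Spearman's rank correlation with tie handling,
# applied to hypothetical yes/no survey responses from eight firms.

def rank_with_ties(values):
    """Return 1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank_with_ties(x), rank_with_ties(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses (1 = yes, 0 = no).
erp_adopted   = [1, 1, 0, 1, 0, 0, 1, 0]   # "uses ERP system"
iso50001_cert = [1, 0, 0, 1, 0, 1, 1, 0]   # "ISO 50001 certified"
rho = spearman(erp_adopted, iso50001_cert)
```

For binary indicators like these, Spearman's rho coincides with the phi coefficient; in a real analysis one would additionally compute a p-value, e.g. with `scipy.stats.spearmanr`.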

4 Results and Discussion

Table 3 depicts the results of the correlation analysis (i.e., Spearman correlation coefficient values) between the implementation of Industry 4.0 concepts and the implementation of energy (RQ1) and environmental (RQ2) management systems. The analysis revealed a significant positive relationship between the implementation of each Industry 4.0 concept on the one hand and the implementation of EnMS and EnvMS on the other.

Table 3. Results of the correlation analysis

Technologies                                                            RQ1       RQ2
Digital factory
  Mobile/wireless devices for programming and controlling
  facilities and machinery (e.g. tablets)                               .304***   .276***
  Digital solutions to provide drawings, work schedules or work
  instructions directly on the shop floor                               .266***   .215***
  Software for production planning and scheduling (e.g. ERP system)     .297***   .273***
  Digital exchange of product/process data with suppliers/customers     .303***   .199***
  Near real-time production control system                              .276***   .308***
  Systems for automation and management of internal logistics           .307***   .248***
  Product-Lifecycle-Management systems (PLM) for product/process
  data management                                                       .282***   .205***
  Virtual Reality or simulation for product design or development       .237***   .295***
Automation and robotics
  Industrial robots for manufacturing processes                         .279***   .269***
  Industrial robots for handling processes                              .304***   .249***
Additive manufacturing technologies
  3D printing technologies for prototyping                              .256***   .246***
  3D printing technologies for manufacturing of products,
  components and forms, tools, etc.                                     .274***   .248***
Energy efficiency technologies
  Technologies for recycling and re-use of water                        .261***   .261***
  Technologies to recuperate kinetic and process energy                 .256***   .215***

Source: own research results; Note: ***p < 0.001


These results provide evidence that Serbian manufacturing companies recognize the importance of integrating EnMS and EnvMS along with the implementation of the indicated Industry 4.0 technology concepts. However, as can be seen from the results, although this linkage is statistically significant, it is a low-strength correlation. On the other hand, this finding can be read as verification of the earlier statement that EnMS and EnvMS have been deployed by those organizations that have attempted to make Industry 4.0 a reality.

Nowadays, manufacturing systems worldwide implement advanced manufacturing or Lean Manufacturing principles, in which minimal resources are applied to bring maximum value to the business. These same principles can and should be used for energy utilization. From this perspective, it is all about using energy within the manufacturing system efficiently, reducing consumption where possible, and transferring adequate knowledge to relevant personnel across the organization in order to deliver the same value with less energy and thus fewer harmful emissions [8]. However, it is strongly believed that the majority of manufacturing companies, both in Serbia and throughout the world, still lack well-tailored strategies to integrate environmental sustainability into manufacturing, which is of utmost importance for competitiveness. Successfully implementing energy and resource efficiency programs, pollution prevention and control programs, and sustainability initiatives is an urgent need [24–26], and with the appropriate application of Industry 4.0 concepts these can be readily integrated into processes.

5 Conclusion

This study provides insight into best practices in implementing Industry 4.0 concepts and their implications for manufacturing EnMS and EnvMS and overall manufacturing energy efficiency. Based on an analysis of the Serbian dataset from the EMS conducted in 2018, it was revealed that there is a significant positive relationship between the implementation of each Industry 4.0 concept on one hand and the implementation of EnMS and EnvMS on the other. More importantly, these results provide evidence that Serbian manufacturing companies recognize the importance of EnMS and EnvMS integration along with the implementation of the indicated Industry 4.0 technology concepts. However, although the results are statistically significant, the linkage is a low-strength correlation.

Despite the obvious similarities and the huge potential, EnMS is not (yet) fully incorporated in most factory digitization projects. There is reason for optimism, though: awareness is growing quickly among key decision makers in industry. With this in mind, Industry 4.0 (with special emphasis on IoT and Big Data) combined with EnMS and EnvMS suggests an exciting future with reduced costs and pollutant emissions alongside improved performance. Without careful deployment, however, one could be left wondering where all the promise and money went; in other words, manufacturing decision makers should take full, but careful, advantage of these Industry 4.0 tracking tools for EnMS and EnvMS.

Lastly, the sample was drawn from a single developing country, probably lacking the diversity that could be expected from a comparable sample chosen from


across different economies, both developed and developing. Further research should test the model and relationships in manufacturing companies within other EMS countries (e.g., Austria, Germany, Slovenia, Lithuania).

References

1. Global Superior Energy Performance Partnership: Models for Driving Energy Efficiency Nationally Using Energy Management (2012)
2. UNIDO: Achieving impact and market credibility - policy and conformity assessment frameworks for EnMS/ISO 50001 - Expert Group Meeting report, Vienna, 8–10 April 2014
3. Petrovic, J., Medojevic, M.: Importance and role of the state, ISO 50001 standards and regulations in the introduction of the energy management system. In: IEEP 2015, Zlatibor (2015)
4. Medojevic, M., Petrovic, J., Medic, N., Medojevic, M.: ISO 50001 as a tool to establish an adequate energy management system. In: 6th International Symposium on Industrial Engineering, SIE 2015, 24–25 September, Belgrade, Serbia (2015)
5. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
6. Lasi, H., Fettke, P., Kemper, H.G., Feld, T., Hoffmann, M.: Industry 4.0. Bus. Inf. Syst. Eng. 6, 239–242 (2014)
7. Wang, S., Wan, J., Zhang, D., Li, D., Zhang, C.: Towards smart factory for Industry 4.0: a self-organized multi-agent system with big data based feedback and coordination. Comput. Netw. 101, 158–168 (2016)
8. Medojevic, M., Díaz Villar, P., Cosic, I., Rikalovic, A., Sremcev, N., Lazarevic, M.: Energy management in Industry 4.0 ecosystem: a review on possibilities and concerns. In: Katalinic, B. (ed.) 29th DAAAM International Symposium on Intelligent Manufacturing and Automation, pp. 0674–0680. DAAAM International, Vienna, Austria (2018)
9. Development Agency of Serbia: Why Invest in Serbia, March 2019. https://ras.gov.rs/uploads/2019/03/why-invest-march-2019.pdf
10. Ashton, K.: That "Internet of Things" thing. RFiD J. 22, 97–114 (2009)
11. De Mauro, A., Greco, M., Grimaldi, M.: What is big data? A consensual definition and a review of key research topics. In: International Conference on Integrated Information (ICININFO 2014), AIP Conference Proceedings 1644, 97, Madrid, 5–8 September (2014)
12. Lycett, M.: 'Datafication': making sense of (big) data in a complex world. Eur. J. Inf. Syst. 22(4), 381–386 (2013)
13. Gantz, J., Reinsel, D.: The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East. IDC, United States (2012)
14. Crosby, M., Nachiappan, Pattanayak, P., Verma, S., Kalyanaraman, V.: Blockchain technology: beyond bitcoin. Applied Innovation Review, Issue 2, Sutardja Center for Entrepreneurship and Technology, Berkeley (2016)
15. Burger, C., Kuhlmann, A., Richard, P., Weinmann, J.: Blockchain in the energy transition: a survey among decision makers in the German energy industry. Deutsche Energie-Agentur GmbH (dena), Berlin (2016)
16. Nagasawa, T., et al.: Accelerating clean energy through Industry 4.0: manufacturing the next revolution. UNIDO (2017)
17. Kruth, J.P., Leu, M.C., Nakagawa, T.: Progress in additive manufacturing and rapid prototyping. CIRP Ann. 47(2), 525–540 (1998)


18. Ford, S., Despeisse, M.: Additive manufacturing and sustainability: an exploratory study of the advantages and challenges. J. Cleaner Prod. 137, 1573–1587 (2016)
19. Brackney, L.J.: Augmented reality building operations tool. US Patent App. 12/946,455 (2010)
20. Lalić, B., Rakic, S., Marjanović, U.: Use of Industry 4.0 and organisational innovation concepts in the Serbian textile and apparel industry. Fibres Text. East. Eur. 27(3), 10–18 (2019)
21. Lalic, B., Majstorovic, V., Marjanovic, U., Delić, M., Tasic, N.: The effect of Industry 4.0 concepts and e-learning on manufacturing firm performance: evidence from transitional economy. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) APMS 2017. IAICT, vol. 513, pp. 298–305. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66923-6_35
22. Lalic, B., Medic, N., Delic, M., Tasic, N., Marjanovic, U.: Open innovation in developing regions: an empirical analysis across manufacturing companies. Int. J. Ind. Eng. Manag. 8(3), 111–120 (2017)
23. Lalic, B., Anisic, Z., Medic, N., Tasic, N., Marjanovic, U.: The impact of organizational innovation concepts on new products and related services. In: Proceedings of the 24th International Conference on Production Research (ICPR), pp. 110–115. DEStech Publications, Inc., Poznan, Poland (2017)
24. May, G., Stahl, B., Taisch, M.: Energy management in manufacturing: toward eco-factories of the future - a focus group study. Appl. Energy 164, 628–638 (2016)
25. Vujica, H.N., Buchmeister, B., Beharic, A., Gajsek, B.: Visual and optometric issues with smart glasses in Industry 4.0 working environment. Adv. Prod. Eng. Manag. 13(4), 417–428 (2018)
26. Vieira, A.A.C., Dias, L.M.S., Santos, M.Y., Pereira, G.A.B., Oliveira, J.A.: Setting an Industry 4.0 research and development agenda for simulation - a literature review. Int. J. Simul. Model. 17(3), 377–390 (2018)

Smart Factory and IIOT

Virtualization of Sea Trials for Smart Prototype Testing

Moritz von Stietencron1(&), Shantanoo Desai1,2, and Klaus-Dieter Thoben1,2

1 BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen, Hochschulring 20, 28359 Bremen, Germany
{sti,des,tho}@biba.uni-bremen.de
2 Faculty of Production Engineering, University of Bremen, Badgasteiner Straße 1, 28359 Bremen, Germany

Abstract. The design and development of new vessels is a cost- and time-intensive effort that relies greatly on expertise and experience. Especially for small producers who do not sell on volume, prototype building and testing often coincide with the production of the first vessel. This further increases the need for other means of reliable and accurate prototype experimentation. This paper presents a procedure for the virtualization of the sea trials in which vessel prototypes are tested, generating a concise and reliable data model of the trial that can be used in simulation and other product development tasks.

Keywords: IoT · Knowledge-based engineering · Vessel design · CFD

1 Introduction

The landscape of European producers of specialized boats, such as emergency response and recovery vessels (ERRVs), is marked by small and medium enterprises which manufacture these vessels on a made-to-order basis. With regard to the product design and development stage, this adds further restraints to the fact that the total volumes of these vessels are rather small (the German DGzRS currently employs 39 [1] and the Norwegian RS 51 [2] small rescue vessels). Given these constraints, and the fact that these vessels are financed through donations, there is a significant emphasis on the development phase, as the margin for building separate prototypes is usually not available. At the same time, particularly high standards and requirements [3] are attached to each order, as a lot is at stake during search and rescue missions.

The majority of manufacturers already utilize computer-based prototype testing and development methods, e.g., through simulations. However, the output of these measures can only be granted limited credibility, as it relies heavily on assumptions, e.g., about the driving conditions and the corresponding vessel behavior. Often these assumptions are complemented by the extensive experience of the involved vessel designers and naval engineers. Still, in the pursuit of reducing uncertainty about the product's real-world behavior, real-world sea trials remain the state-of-the-art means of

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 365–371, 2019. https://doi.org/10.1007/978-3-030-29996-5_42


product experimentation. While vessel stability is usually well understood by the developers and is the main objective of the testing procedures, increasing attention is given to the effects of the boat's performance on the personnel and the environment [4]. The extended usage of digital development aids is seen as a necessary direction in the production of these vessels, but it currently faces challenges which obstruct the implementation of entirely virtual sea trials, of which the reliability of the underlying information is a crucial factor. This paper presents an approach to eliminate some of the uncertainty in running virtual sea trials by digitizing real-world sea trials, offering a pathway to the usage of virtual sea trials in a fact-based vessel design process.

2 Related Work

Across the range of uses of small vessels, usage patterns vary significantly, which results in a high level of ambiguity in the design process; this paper proposes to reduce that ambiguity by introducing a higher level of real-world product behavior into the development process. This chapter briefly introduces the two main concepts behind this, namely knowledge-based vessel development and marine sensor data acquisition.

2.1 Knowledge-Based Vessel Development

It is necessary to compress development cycles [5] to optimize the vessel development process towards high-quality products without compromising the financial feasibility of the developments. In recent years, concepts like concurrent engineering [6] have been applied to the domain of vessel development [7]. These advances are complemented by investigations on the integration of product behavior knowledge into the product development process [8, 9]. Together, this has led to the implementation of a modeling language for knowledge-based engineering tasks, called KbeML (knowledge-based engineering modeling language), in the vessel design process [10]. This process is highly dependent on a reliable stream of vessel-related sensor data.

2.2 Sensor Data Acquisition

Modern vessels contain a large number of sensors, depending upon the requirements of the users as well as the vessel designers. These sensors tend to communicate with the vessel's subsystems through standardized protocols developed by the National Marine Electronics Association (NMEA). Two well-known standards within the maritime sector are NMEA0183 and NMEA2000. NMEA0183 uses the RS422 serial interface, while NMEA2000 uses the modern Controller Area Network (CAN) interface and provides higher speeds than NMEA0183 [11]. For modern IoT applications, these protocols pose crucial challenges: since the data is only available on the local network of the vessel, obtaining information from these networks and sending it to cloud-based infrastructure is a critical challenge. Since operating vessels are most often at sea, the


information exchange between the vessel and the cloud services becomes a vital task to execute. While there are several commercial systems available to monitor either variables at a global level on the vessel or highly specialized data sets for single development questions [12], only a few approaches offer the flexibility and capability needed to make such a system appealing to small and medium vessel producers. Efforts are currently being made by open-source communities to bridge this gap and bring such vital information from vessels to cloud infrastructure, where it can be analyzed and fed back into production for better and optimal vessel design. Two such projects are Signal K [13] and the Universal Marine Gateway (UMG), which provide different approaches to acquiring sensor data from vessels and are described below.

Signal K. Signal K is an open-source solution driven by a community of sailing and marine enthusiasts. It strives to achieve an open data format for the maritime sector by using standard internet technologies on the vessel via a dedicated Signal K server. The complete solution is licensed under the permissive Apache v2.0 open-source license. A significant advantage of the project is its representation of data from various heterogeneous sources of information. It is easily deployable on a standard laptop as well as on different single-board computers (SBCs) and relies on standard internet technologies like REST, WebSocket, and the TCP/IP suite for information exchange [14]. The data format within Signal K is represented using JSON Schema with UTF-8 encoding, making it compatible with many standard IoT solutions.

UMG. The Universal Marine Gateway (UMG) is a data acquisition unit capable of interfacing with various data busses as well as different sensors, and has been developed through a series of collaborative research projects.
At its hardware core is an industrial-grade single-board computer with a customized operating system. The UMG hosts a time-series database which stores the vessel data and allows for data curation. While the Signal K framework is addressed more towards the requirements of the consumer market, the UMG is an implementation for professional users. It comprises a subsystem of UMG Nodes which handle data from various heterogeneous data sources (e.g., data buses and digital or analog sensors) and send it to the UMG through an Ethernet network deployed within the vessel. UMG Nodes make it possible to place different sensors, such as accelerometers, at locations within the vessel that are normally difficult to access, like the hull or the engine room.
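To make the Signal K data format described above concrete, the following minimal sketch parses a Signal K "delta" message, which groups updates of path/value pairs under a vessel context. The delta structure (context/updates/values) follows the open Signal K specification; the particular paths and numbers below are invented, and in a live system the JSON would arrive over the server's WebSocket stream rather than from a string.

```python
# Minimal sketch: flatten a Signal K delta message into a path -> value map.
import json

delta = json.loads("""
{
  "context": "vessels.self",
  "updates": [
    {
      "values": [
        {"path": "navigation.speedOverGround", "value": 3.2},
        {"path": "environment.wind.speedApparent", "value": 7.9}
      ]
    }
  ]
}
""")

def flatten(delta):
    """Collect all path/value pairs from every update in a delta message."""
    out = {}
    for update in delta.get("updates", []):
        for item in update.get("values", []):
            out[item["path"]] = item["value"]
    return out

readings = flatten(delta)
```

A monitoring client would apply `flatten` to each incoming delta and merge the result into its current picture of the vessel's state.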

3 Approach

The basis for a virtual sea trial is usually a computational fluid dynamics (CFD) simulation of the vessel model in the relevant simulation space. Besides a well-defined vessel model, the data basis for this simulation is the crucial step towards running a credible virtual sea trial. This paper's approach is, therefore, to employ a flexible data acquisition system to capture a precise model of a sea trial's parameters and deliver reliable data which can remove ambiguity from the current digital design process.

3.1 Virtualization Prerequisites

It is crucial to work from a concise set of requirements and expectations to create a meaningful model of a sea trial. Therefore, the first step in the virtualization process is to define these. It is essential to define the scope of the digitization. In general, the following three data scopes are differentiated: 1. Data describing the environmental input to the trial situation (e.g., wind speed, wave height) 2. Data describing the vessel’s input to the trial situation (e.g., engine speed, heading) 3. Data describing the vessel’s behavior in the trial situation (e.g., deformation, accelerations). Once the scope has been decided for the sea trial at hand, the individual measurements and their parameters need to be defined. The process will involve the selection of data sources (e.g., sensors, onboard systems) as well as their type of placement and calibration parameters. 3.2

3.2 Sea Trial Virtualization

With the data acquisition parameters finalized, the second step is the virtualization of a real-world sea trial. Besides configuring the data acquisition system to the requirements fixed in the first step, the mode of installation on the vessel must be prepared before the sea trial. The most important factors are the exact placement and fixation of the sensors, which is especially crucial for sensors that track physical variables such as vibration. Before the actual sea trial, the behavior of the vessel in the zero-state condition shall be captured: data from all sensors are stored with minimal environmental influence to capture the impact of the vessel itself on the readings and to establish a baseline for measurements during the sea trial. It is recommended to do this for 15 min with the engine turned off and another 15 min with the engine at idle speed. It is vital to ensure proper synchronization of the system time to allow the correlation of the vessel data with external data sources (e.g., a weather service), and it is highly recommended to do this before the sea trial. During the sea trial, the live data can be monitored in real time on the vessel, or with minimal delay on an accompanying vessel or shore station. After the end of the sea trial, it is recommended to repeat the zero-state capturing procedure to document whether, and if so how, the sea trial has impacted the installation.
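The zero-state capture described above amounts to recording a baseline and later subtracting it from trial readings. The following is a minimal sketch of that idea, assuming simple mean/spread statistics; the function names and the example values are illustrative only.

```python
import statistics

def zero_state_baseline(samples):
    """Return (mean, stdev) of a zero-state recording (engine off or idling)."""
    return statistics.mean(samples), statistics.pstdev(samples)

def corrected(value, baseline_mean):
    """Subtract the baseline to isolate the trial-induced part of a reading."""
    return value - baseline_mean

# Illustrative accelerometer values [g] captured with the engine turned off.
engine_off = [0.02, 0.01, 0.03, 0.02]
baseline_mean, baseline_std = zero_state_baseline(engine_off)
```

Storing both the engine-off and engine-idle baselines also enables the pre/post comparison used later during data curation.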

3.3 Data Curation

After the sea trial, data curation is recommended and should consist of the following steps. First, the data validity needs to be checked. Besides quantitative consistency checks (e.g., on the required versus actual sampling frequency), qualitative verifications (e.g., comparison of the pre- and post-sea-trial zero-state conditions) shall also be performed to ensure a high level of data quality. Subsequently, the data shall be annotated with events (e.g., the time of a specific maneuver) and subjective findings (e.g., an uncomfortable driving situation) from the sea trial. Finally, the data should be persisted both in a database for further use in analysis and simulation as well as in a report summarising the sea trial.
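The two validity checks named above can be sketched as follows, under stated assumptions: the quantitative check compares the achieved sampling rate against the required one, and the qualitative check compares the pre- and post-trial zero-state means. The tolerance values are illustrative, not from the paper.

```python
def sampling_rate_ok(timestamps, required_hz, tolerance=0.05):
    """Quantitative check: achieved rate within `tolerance` of the required rate."""
    if len(timestamps) < 2:
        return False
    achieved = (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
    return abs(achieved - required_hz) / required_hz <= tolerance

def zero_states_consistent(pre_mean, post_mean, max_drift):
    """Qualitative check: the installation did not drift during the trial."""
    return abs(post_mean - pre_mean) <= max_drift

# Eleven timestamps at 0.1 s spacing, i.e. a 10 Hz recording over one second.
ts = [i * 0.1 for i in range(11)]
```

Readings that fail either check would be flagged before annotation and persistence.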

4 Validation

For the validation of the sea trial virtualization procedure, an ongoing vessel development process of the Norwegian boat manufacturer Hydrolift1 has been selected. The sea trial virtualization was experimented with during the development of a highly modular vessel platform. Figure 1 below gives an overview of the stages of the validation experiment.

Fig. 1. Sea trial virtualization schema

In the beginning, a simulation space is created, which takes into consideration all relevant virtual parameters for a vessel. Such information is available from blueprints of the prototype vessels and engineering information knowledge banks. From the simulations, simulated parameters of the vessel are obtainable, such as the trim (angle between the vessel and the engine), pitch, and drag. The next step was to understand whether these parameters are measurable directly or indirectly from the vessel. Many parameters (like vessel orientation and engine values) are directly obtainable via the NMEA2000 interface and were collected via the UMG. The drag of the vessel in the water is usually measured indirectly through accelerometers measuring acceleration values at specific points of interest within the vessel. By mapping these parameters to the CFD simulations, the gap between simulation space and actual sea trial conditions is narrowed. The data collected on the UMG via interfaces such as the onboard systems (NMEA2000) and the sensors deployed via the UMG Nodes was logged at a fixed time interval. Figure 2 shows a snapshot of parts of the sea trial live dashboard.

1 https://hydrolift.com.

Fig. 2. Sea trial dashboard

Post sea trial, the data was curated and used in comparative analyses. Comparisons between measured parameters – namely, pitch angle of the vessel versus speed, and engine/fuel consumption versus speed – were found to provide a more sensible representation than the interpretation of individual measurements. Finally, based on the analyses, an optimization of the vessel can be planned and validated through further simulations, while at the same time the initial simulation space can be refined.
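The comparative analysis just described can be sketched as relating one parameter to another (e.g., pitch versus speed) and then comparing the measured curve against the simulated one. The binning scheme and the sample values below are illustrative assumptions, not the paper's actual data.

```python
from collections import defaultdict

def curve(pairs):
    """Average pitch per integer speed bin: {speed_kn: mean_pitch_deg}."""
    bins = defaultdict(list)
    for speed, pitch in pairs:
        bins[round(speed)].append(pitch)
    return {s: sum(v) / len(v) for s, v in bins.items()}

# Illustrative (speed [kn], pitch [deg]) pairs from trial and CFD simulation.
measured = [(10.2, 4.1), (10.4, 4.3), (20.1, 2.2)]
simulated = [(10.0, 4.0), (20.0, 2.0)]

# Per-speed-bin gap between measurement and simulation.
gap = {s: curve(measured)[s] - curve(simulated)[s]
       for s in curve(measured).keys() & curve(simulated).keys()}
```

A small, systematic gap across bins would point at a refinable simulation parameter rather than at noisy individual readings.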

5 Conclusion

This paper has presented an approach towards the virtualization of sea trials for the vessel prototype testing process. The proposed procedure relies on the capturing and integration of vessel-related sensor data into a knowledge-based engineering process. From the ideation and initial experimentation of the procedure, first benefits for the vessel developers have already been experienced in the form of an increased understanding of the vessel's behavior under the test conditions. Also, the trust in the CFD simulation, which is being employed in the design process, has been increased. To fully assess the viability and universal applicability in the vessel manufacturing domain, further experimentation is needed and foreseen. At the same time, the range of measurements will be expanded to extend the scope from mechanical product development and validation, e.g., towards the assessment of the real lifecycle costs and emissions.

Acknowledgments. The research leading to these results has received funding from the Research Council of Norway within the project “Robust Industriell Transformasjon - RIT” as well as the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 727982. The contents of this paper reflect only the authors’ view, and the Commission is not responsible for any use that may be made of the information it contains.

References

1. Deutsche Gesellschaft zur Rettung Schiffbrüchiger (DGzRS): Numbers and Facts (2019). https://www.seenotretter.de/en/who-we-are/portrait/numbers-facts/. Accessed 12 Apr 2019
2. Redningsselskapet: Redningsskøytene (2019). https://www.redningsselskapet.no/om-oss/redningsskoytene/. Accessed 12 Apr 2019
3. HM Coastguard: MCA OREI SAR Requirements (2019)
4. Dobbins, T., Thompson, T., McCartan, S.: Addressing Crash and Repeated Shock Safety Design Requirements of Fast Craft (2015)
5. Røstad, C.C., Henriksen, B.: ECO-Boat MOL capturing data from real use of the product. In: Rivest, L., Bouras, A., Louhichi, B. (eds.) PLM 2012. IAICT, vol. 388, pp. 99–110. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35758-9_9
6. Jo, H.H., Parsaei, H.R., Sullivan, W.G.: Principles of concurrent engineering. In: Parsaei, H.R., Sullivan, W.G. (eds.) Concurrent Engineering, pp. 3–23. Springer, Boston (1993). https://doi.org/10.1007/978-1-4615-3062-6_1
7. Henriksen, B., Røstad, C.C., von Stietencron, M.: Development projects in SMEs. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) Advances in Production Management Systems. The Path to Intelligent, Collaborative, and Sustainable Manufacturing, pp. 193–201. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66926-7
8. Klein, P., Lützenberger, J., Thoben, K.-D.: A proposal for knowledge formalization in product development processes. In: Proceedings of the 20th International Conference on Engineering Design, ICED 2015, Milan, Italy, pp. 27–30 (2015)
9. Lützenberger, J., Klein, P., Hribernik, K., Thoben, K.-D.: Improving product-service systems by exploiting information from the usage phase. A case study. Proc. CIRP 47, 376–381 (2016)
10. Sullivan, B.P., et al.: A prospective data-oriented framework for new vessel design. In: 25th IEEE International Conference on Engineering, Technology, and Innovation (ICE/ITMC) (2019, to be published)
11. Kim, K.-Y., Park, D.-H., Shim, J.-B., Yu, Y.-H.: A study of marine network NMEA2000 for e-Navigation. J. Korean Soc. Mar. Eng. 34, 133–140 (2010)
12. Swartz, R.A., et al.: Hybrid wireless hull monitoring system for naval combat vessels. Struct. Infrastruct. Eng. 8, 621–638 (2012)
13. Signal K: Signal K (2019). http://signalk.org/. Accessed 20 Apr 2019
14. Signal K: Signal K specification. In: Signal K Documentation. https://signalk.org/specification/1.3.0/doc/. Accessed 30 Apr 2019

IoH Technologies into Indoor Manufacturing Sites

Takeshi Kurata1,2(&), Takashi Maehata1, Hidehiko Hashimoto1, Naohiro Tada1, Ryosuke Ichikari2, Hideki Aso3, and Yoshinori Ito3

1 IoT R&D Center, Sumitomo Electric Industries, Ltd., Osaka, Japan
[email protected]
2 Human Augmentation Research Center, AIST, Tokyo, Japan
3 IoT Acceleration Lab, J-Power Systems Co. Ltd., Hitachi, Japan

Abstract. This paper focuses on introducing measurement technologies into manufacturing sites regarding the worker-oriented part of 6M, which consists of Man, Machine, Material, Method, Mother Nature, and Money. First, we introduce indoor positioning and work motion recognition systems that we have developed as key components of Internet of Humans (IoH) technologies. Next, we briefly report on two case examples of manufacturing sites where worker behavior measurement, analysis, and visualization are promoted. Then, we conclude this paper with a discussion of the costs and benefits of the introduction of indoor positioning technologies into manufacturing sites.

Keywords: IoH · Indoor positioning · Work motion recognition · 6M · Mieruka · Kaizen · Manufacturing site

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 372–380, 2019. https://doi.org/10.1007/978-3-030-29996-5_43

1 Introduction

To comprehensively understand and specifically improve the situation of manufacturing sites, it should be effective to aggregate big data regarding 6M (Man, Machine, Material, Method, Mother Nature, and Money) [1]. This research is currently continuing with the aims of realizing 6M ‘mieruka’, which means visualization or visual control, as well as providing technologies which support continual kaizen and work/safety/health-care management. With the proliferation of Internet of Things (IoT) products and services, visualization in terms of tangible things (Machine and Material) such as facilities, equipment, raw materials (RM), finished goods (FG), etc. is progressing rapidly for grasping the present situation and confirming the results of kaizen (improvement). However, the development of visualization technologies and methodologies is still ongoing when it comes to intangible things (Man, Method) such as worker conditions and workflow processes [2]. The lack of relevant data on human behavior can be considered a major disincentive for progress on this front. It is relatively easy to collect individual worker data in cases where the work is repetitive in a specific area of mass production. In cellular manufacturing or high-mix low-volume production, however, moving and working are often combined, and this poses a major barrier to data collection on each worker. Therefore, it is crucial to further develop and utilize Internet of Humans (IoH) technology for collecting data on human behavior. As there is often a high correlation between worker positions and operation contents, and manufacturing sites mainly comprise indoor environments, indoor positioning technology is regarded as one of the key IoH components. Accordingly, this paper especially focuses on measurement technologies for the worker-oriented part of 6M, and briefly reports on two case examples of manufacturing sites where worker behavior measurement, analysis, and visualization are promoted. It also discusses the costs and benefits of the introduction of indoor positioning technologies into manufacturing sites.

2 IoH Technologies

2.1 Indoor Positioning System Configuration

Our indoor positioning system, shown in Fig. 1-left, has two features suitable for introduction into actual manufacturing/service sites. The first feature is that the system utilizes xDR (Dead Reckoning for X), which includes PDR (Pedestrian Dead Reckoning) [3] and VDR (Vehicle Dead Reckoning) [4], as a method of relative positioning. The second feature is that solar-powered BLE beacons are employed as stationary nodes placed around the area of interest. This is an integrated positioning system that combines xDR, RSSI-based BLE positioning, and map constraints. Although solar-powered BLE beacons are two to three times as expensive as typical battery-powered BLE beacons, the combination of xDR and BLE positioning makes it possible to reduce the number of BLE beacons to only a fraction of the number it would take with BLE positioning alone. This allows for a battery-free setup without an increase in initial installation costs. Additionally, a battery-free setup eliminates the need for battery replacement, greatly reducing operational costs. Although this configuration inherently has issues with lighting for energy harvesting, we have been developing a method of sustaining positioning performance under the uneven time intervals of BLE signal transmission caused by insufficient lighting. We will present the details in the near future. Stationary nodes are naturally placed to cover the areas of interest. However, significant flow lines or activities may occur outside of the expected areas of interest, and there are cases in which such data are crucial for worker behavior analysis. While BLE positioning alone can do nothing in such areas, we can still keep tracing flow lines with xDR. As hardware for mobile nodes, the workers are able to use embedded modules and smartphones equipped with a ten-axis sensor (3 axes for acceleration, 3 for angular velocity, 3 for magnetism, 1 for barometric pressure).
Especially because of their increasing size in recent years, smartphones were not used in this research, as they could impede worker operations. Instead, the Android-based terminal BL-02 made by BIGLOBE is adopted in our manufacturing sites. The same setup can also be applied to forklifts for VDR. Figure 1-right shows the cost-effective arrangements of stationary and mobile nodes. In general, transmitters such as BLE beacons and passive RFID tags are much less expensive than receivers or transponders such as smartphones and IoT gateways. In cases with a larger area and a smaller number of persons measured, it is better from a cost perspective to choose transmitters as stationary nodes and receivers as mobile nodes (SB-type system). Our manufacturing sites fall into this category. In addition, because cameras are often difficult to bring in and install on site due to issues of cost and privacy, vision-based positioning technologies are not integrated in our system.
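The cost logic behind the SB-type arrangement can be sketched numerically. The unit prices below are illustrative assumptions (not figures from the paper); the point is only that cheap transmitters should go wherever more units are needed.

```python
def deployment_cost(n_stationary, n_mobile, stationary_unit, mobile_unit):
    """Total hardware cost for one arrangement of node types."""
    return n_stationary * stationary_unit + n_mobile * mobile_unit

BEACON, RECEIVER = 20.0, 200.0   # assumed unit prices, for illustration only

def cheaper_arrangement(n_points, n_workers):
    """Compare SB-type (transmitters stationary) against the reverse."""
    sb = deployment_cost(n_points, n_workers, BEACON, RECEIVER)
    rb = deployment_cost(n_points, n_workers, RECEIVER, BEACON)
    return "SB" if sb <= rb else "reversed"
```

With a large area (many stationary points) and few tracked workers, the SB-type wins, matching the situation described for the authors' manufacturing sites.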

Fig. 1. (Left) indoor positioning system in use at manufacturing sites; (right) cost-effective arrangements of transmitters and receivers.

Fig. 2. Work motion recognition systems differing in the number of IMUs.

2.2 Work Motion Recognition

If a micro-level understanding of behaviors is required, as in the analysis of hospitality in customer service, cooking and assembly work, the skills involved, etc., position data alone are not enough. Inertial Measurement Units (IMUs), as in the mobile nodes for xDR, go beyond tracking position: they are also capable of capturing the type and size of motions, allowing for micro-level analysis of work motions and for safety management by detecting falling movements. Figure 2 depicts three examples of work-motion capture systems. Typically, 10 to 20 IMUs are attached all over the body, as in the left example in Fig. 2. Although this sort of setup is usually permitted for short-term collection of data, the time involved in attaching and detaching the system, and its potential to interfere with work tasks, make such a system unlikely to be adopted for long-term, everyday use. The system in Fig. 2-center is designed to reduce the number of IMUs to only five, in order to be less cumbersome for workers and to reduce hardware costs. In this case, a smartphone is placed inside the ‘obi’ belt as one of the IMUs and also as a BLE receiver. Compared to a configuration where IMUs are attached to the whole body, this kind of configuration results in a precision reduction of around 10 to 20%. The whole-body configuration provides the position and movement of each body part based on a skeleton model. In contrast, with the partial-body configuration, as in Fig. 2-center, some of the more detailed information on work motions is missed, and we must rely on the local movement data from the available sensors only. To address such problems, an integrated IoH sensor module with a wearable passive RFID tag reader and a ten-axis sensor module has been developed through METI’s project to support the advancement of strategic core technologies (represented by Gobi) (Fig. 2-right). This allows the micro-level information lost with the decreased number of IMUs to be partially recovered by taking micro-positional data with RFID tag reading, and an improvement in motion recognition precision can be expected.
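As one concrete instance of IMU-based safety management mentioned above, fall detection can be sketched as a simple threshold rule: a fall typically shows up as near free-fall (acceleration magnitude well below 1 g) followed shortly by an impact spike. This is a minimal illustrative sketch, not the authors' actual algorithm; the thresholds are assumptions.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def detect_fall(samples, free_fall=0.4 * G, impact=2.5 * G, window=5):
    """samples: list of (ax, ay, az). True if free-fall is followed by impact."""
    mags = [math.sqrt(ax**2 + ay**2 + az**2) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        # Free-fall phase, then an impact spike within the next `window` samples.
        if m < free_fall and any(x > impact for x in mags[i:i + window]):
            return True
    return False

quiet = [(0.0, 0.0, G)] * 10                                  # standing still
fall = [(0.0, 0.0, G)] * 3 + [(0.0, 0.0, 1.0), (0.0, 0.0, 30.0)]
```

Production systems would add filtering and posture checks, but the two-phase signature is the core idea.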

3 Manufacturing Site Case Studies

In this section, we briefly introduce two sample cases of manufacturing sites where worker behavior measurement, analysis, and visualization are promoted. The implications and suggestions drawn from the following cases are not yet well structured; however, we believe that they fit many situations at actual manufacturing sites and would therefore be helpful for anyone in charge of developing or installing systems for worker behavior measurement, analysis, and visualization.

Fig. 3. Example of flow line measurement at a manufacturing site.

3.1 Cable Manufacturing Line

J-Power Systems Co., Ltd. (hereafter referred to as JPS) is a member of the Sumitomo Electric Group and is involved in the design, manufacture, construction, and sale of electric power transmission cables and overhead power transmission lines, amongst other things. We are introducing the indoor positioning system depicted in Fig. 1-left and the 6M visualization setup in hopes of increasing equipment utilization rates on cable production lines.


Although not addressed much in this paper, the first step of implementation at JPS involved the Mother Nature (Environment) aspect of 6M. A 3D model of the indoor environment was automatically generated using LIDAR and an omnidirectional camera [5]. This 3D model was then converted into a 2D floor map for the site’s area of interest. As shown in Fig. 3-left, BLE beacons were placed on the floor map, and the beacons’ geospatial coordinates in a real-world coordinate system were automatically extracted. Floor maps are necessary for positioning programs and visualization tools, but many sites lack a CAD model or have CAD models with outdated information. Furthermore, the process of calculating geospatial coordinates for BLE beacons can be complicated by inaccurate floor maps. The approach we have taken makes it possible to avoid such issues. The installation of a safety laser rangefinder (LRF) has also been arranged to prevent accidents in which workers are caught in equipment. Within its range of measurement, the LRF can be used for precise positioning. Precise position correction by the PLC also becomes possible. However, since the worker ID is unknown in both methods, a process to assign IDs with PDR is necessary.

3.2 Assembly Line

At an assembly line in a factory of Company A, motion capture data have been gathered by multiple IMUs attached to each part of workers’ bodies, and micro motion recognition on the assembly line is underway. From this, we hope to precisely define standardized procedures for work elements and to efficiently evaluate the variation in the work involved. On typical assembly lines, the workpiece of interest (Material) often moves a little bit at a time. Because of this, it is not enough to simply obtain a static floor map, as was the case at JPS. Rather, linking dynamic data on the line with sensor data makes it possible to perform highly accurate work motion recognition. In addition, quirks in movement peculiar to each individual are inevitably included in the machine learning training data for micro-motion recognition. There is a need to develop technologies and methodologies that realize not only efficient training data collection but also motion recognition that is not affected by such individual differences.

4 Discussion

4.1 Make Time Tangible

We would like to discuss some of the things that should be considered when introducing measurement technologies to manufacturing or service sites. Generally, there is little objection to the importance of visualization when it comes to 6M as a whole or in part. However, two lines of thinking are common: that short-term data collection and a one-time “before and after” comparison are sufficient, or that using a single system successively in many different areas can cut costs, rather than opting for continual long-term data collection.


To shed light on these misconceptions, this section uses Fig. 4 to discuss some of the advantages of continual on-site measurement of data. The top and middle portions of the diagram show sites that are not continually collecting data, while the bottom shows a site that is. Firstly, a common concern heard during on-site interviews is that workers are not used to their work being monitored, and it will be difficult to determine whether the resulting data reflect realistic and natural circumstances or not. Continual data collection would allow such uncertainty to be dispelled.

It is also necessary to consider how the causes of identified issues are analyzed and addressed. Typically (i.e., where continual data collection has not been adopted), the data collection system is only set up after a problem is identified, in which case one must wait for enough data surrounding the issue to be collected (a period referred to as the “Before data collection”). This results in a significant time lag between when the problem is first identified and when its causes can begin to be analyzed and addressed. Furthermore, as shown in Fig. 4-top, if the exact issue of concern does not arise again after beginning the Before data collection, there is no data to analyze to identify causes.

Dashboard cameras in cars use accelerometers to detect incidents such as collisions and sudden braking, and are able to keep records of such incidents for future use and to report accidents. Likewise, with continual data collection in manufacturing or service sites, issues can be analyzed at any time, with a seamless transition into data collection for confirming the effectiveness of solutions (the “After data collection”). In this way, issues can be swiftly addressed and dealt with. This “Virtual Time Machine” concept [6] is expected to take root in more areas in the future as 6M visualization technology continues to develop and mature.
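The dashboard-camera analogy can be sketched with a bounded ring buffer: under continual collection, the most recent samples are always retained, so the "Before" data already exist at the moment an incident is detected. The class and method names are illustrative only.

```python
from collections import deque

class ContinualRecorder:
    """Always-on recorder that freezes pre-incident data on demand."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest samples fall off
        self.incidents = []

    def record(self, sample):
        self.buffer.append(sample)

    def incident(self, label):
        # Snapshot the "Before" data instantly; no waiting period needed.
        self.incidents.append((label, list(self.buffer)))

rec = ContinualRecorder(capacity=3)
for s in [1, 2, 3, 4]:
    rec.record(s)
rec.incident("near-miss")
```

Without continual collection, the equivalent of this snapshot can only begin after the problem is identified, which is exactly the time lag the text describes.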

Fig. 4. Benefits of continual data collection.

4.2 Cost Comparison

With regard to the costs involved in introducing human behavior measurement technology, in this research we considered three company cases, Companies P, Q, and R, to investigate the costs of time study and work sampling with human-wave tactics, as well as of automatic data collection with data collection systems. The details for each case are as follows:

• Company P: Time study by video recording, 1 observer, 1 worker recorded, and macro-positioning granularity [1] (TS-Macro)
• Company Q: Work sampling, 5 observers, 33 workers recorded, and mezzo-positioning granularity (WS-Mezzo)
• Company R: Time study by video recording, 2 observers, 3 workers recorded, and micro-positioning granularity (TS-Micro)

We added one more system configuration for cost comparison. Up to now, each of the systems discussed here – for flow-line measurement and analysis (system #1), for safety management with worker fall detection (system #2), and for health-care management with gait evaluation (system #3) – has been individually implemented and adopted at each manufacturing or service site. Because the data collection system, which contains sensors such as accelerometers, used in system #1 can also be used for systems #2 and #3, it is possible to integrate the systems with one another. If the system cost can be evenly distributed across the three systems #1, #2, and #3, the cost of system #1 can be reduced to a third.

The results of the cost comparison are shown in Table 1. In all three companies’ cases, we confirmed that the costs of time study and work sampling with human-wave tactics become higher than those of data collection system usage. Additionally, in the case of Company Q, because work sampling was conducted, there is inevitable loss of data with human-wave tactics, whereas visualization was possible with lossless data by adopting automatic measurement. Integrating work analysis with health-care and safety management into a unified system brings additional value and advantages beyond mere system cost reduction. For example, such a system can send a fall-detection-based emergency call with position information even indoors, whereas conventional fall detection systems do not cover indoor sites.
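The H>S and H>D columns of the table can be read as break-even day counts: the first day on which cumulative manual-observation cost exceeds the system cost (S), or a third of it (D) when the system is shared across the three applications. The sketch below uses illustrative costs, not the paper's actual figures.

```python
import math

def break_even_days(daily_manual_cost, system_cost):
    """Smallest whole number of days d with d * daily_manual_cost > system_cost."""
    return math.floor(system_cost / daily_manual_cost) + 1

H_DAILY = 400.0      # assumed daily cost of observers (human-wave tactics)
SYSTEM = 30000.0     # assumed total data collection system cost

h_gt_s = break_even_days(H_DAILY, SYSTEM)        # H > S
h_gt_d = break_even_days(H_DAILY, SYSTEM / 3)    # H > D (cost split three ways)
```

Splitting the system cost across the three applications shortens the break-even period to roughly a third, which is why the H>D columns in Table 1 are so much smaller than H>S.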

Table 1. Cost comparison of manual observation and automatic measurement with three cases. H: Cost for human-wave tactics, S: Cost of data collection system usage, D: 1/3 cost of data collection system usage.

Case                  | Num of workers | Num of observers | H>S (days) | H>D (days)
Company P (TS-Macro)  | 1              | 1                | 35         | 3
Company Q (WS-Mezzo)  | 11             | 5                | 76         | 24
Company R (TS-Micro)  | 3              | 2                | 101        | 25


5 Conclusion

Needless to say, even if a massive amount of 6M data can be gathered by IoT/IoH devices in real manufacturing sites, it does not hold all the answers to comprehensively understanding the real sites, since big data in general has issues of quality and variety. In-depth surveying, such as retrospective interviewing, has the potential to compensate for the shortcomings of big data; however, it inevitably requires intensive effort with a high workload. In-depth surveying with subject screening based on big data would alleviate the load, and it would result in efficient surveying with both breadth and depth [7]. This is consistent with the idea of “Pier Data” outlined in Fig. 5, in which “Big Data” are integrated with “Deep Data” [8]. Demonstration of such a methodology at actual sites through further development of 6M data collection and visualization technology, including IoH technologies, is one of our future works.

Fig. 5. Screening with 6M big data for obtaining deep data by in-depth survey.

Acknowledgements. This work was supported by JST-OPERA Program Grant No. JPMJOP 1612, Japan.

References

1. Kurata, T., et al.: Towards realization of 6M visualization in manufacturing sites. In: Proceedings of IEEE VR Workshop on Smart Work Technologies (WSWT), 4 pages (2019)
2. Fukuhara, T., et al.: Improving service processes based on visualization of human behavior and POS data: a case study in a Japanese restaurant. In: Proceedings of ICServ, pp. 1–8 (2013). https://doi.org/10.1007/978-4-431-54816-4_1
3. Kourogi, M., Kurata, T.: Personal positioning based on walking locomotion analysis with self-contained sensors and a wearable camera. In: Proceedings of ISMAR, pp. 103–112 (2003). https://doi.org/10.1109/ismar.2003.1240693
4. Ichikari, R., et al.: Off-site indoor localization competitions based on measured data in a warehouse. Sensors 19(4), 763 (2019). 26 pages. https://doi.org/10.3390/s19040763
5. Kuramachi, R., et al.: G-ICP SLAM: an odometry-free 3D mapping system with robust 6DoF pose estimation. In: Proceedings of IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 176–181 (2015). https://doi.org/10.1109/robio.2015.7418763
6. Hirose, M., et al.: Virtual time machine. In: Proceedings of ICAT, SS1-6 (2004)
7. Nakajima, M., et al.: Cognitive chrono-ethnography lite. Work J. Prev. Assess. Rehabil. 41, 617–622 (2012). https://doi.org/10.3233/wor-2012-0219-617 (IEA: 18th World Congress on Ergonomics – Designing a sustainable future)
8. Kurata, T., et al.: Making pier data broader and deeper: PDR challenge and virtual mapping party. In: Proceedings of MobiCASE, pp. 3–17 (2018). https://doi.org/10.1007/978-3-319-90740-6_1

3D Visualization System of Manufacturing Big Data and Simulation Results of Production for an Automotive Parts Supplier

Dahye Hwang and Sang Do Noh(&)

Sungkyunkwan University, Suwon, South Korea
[email protected]

Abstract. Recently, many manufacturers have recalled their products owing to quality issues. It is increasingly difficult to determine the cause of quality issues because of the complexity of the supply chain. Thus, it is essential to share manufacturing information throughout the product life cycle. However, small and medium-sized enterprises (SMEs) often lack the necessary infrastructure and information systems. This research proposes an open-source system allowing the 3D visualization of production history and simulation results. The production history includes products’ time stamps and inspection results, defect information, and the status of each facility. This information is then used to construct a product workflow and simulation model. Further, it is possible to compare simulation results for up to three alternative scenarios. The system is developed using open-source libraries for easy diffusion and application to SMEs in the automobile industry. A method for the implementation of this system at Korean auto parts companies is introduced.

Keywords: 3D visualization · Lot tracking · Manufacturing information

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 381–386, 2019. https://doi.org/10.1007/978-3-030-29996-5_44

1 Introduction

As the companies involved in the production cycle of products are diverse, identifying the cause of quality issues is often extremely challenging and time-consuming [1, 2]. A more thorough record and analysis of production cycle information could mitigate this challenge [3]. However, many small and medium-sized manufacturing companies have difficulty systematically organizing the information generated during parts manufacturing due to a lack of appropriate information infrastructure [4, 5]. Additionally, as the fourth industrial revolution develops, the number of research efforts to digitize factories increases. However, most of this research surrounds real-time data exchange for virtual reality factory implementation or Cyber Physical System realization [6–8]. This study proposes an open-source 3D visualization system to cope with quality problems after shipment. If a quality problem occurs within the quality assurance period after the sale to the customer, it is difficult to find the cause by simply checking the production history of the product. The purpose of this study is to provide users with an intuitive view of the quality problem by visualizing production history data. The system presents the process status and material flow of the product claimed after its delivery. It also allows users to simulate a production model with their own inputs, using the production history as base information and returning simulation results in 3D.

2 3D Visualization of Production History and Simulation Results

The architecture of the 3D visualization system of production history and simulation results is shown in Fig. 1. It consists of a user application, a production history tracking system, a 3D visualization system with a simulation engine, and four databases.

Fig. 1. Architecture of the 3D visualization system

The user application manages the input and output of the system. A serial number with the “Product/Lot” information is used to track an item’s production history. A production line code is used to load the layout and facility location data. Simulation settings include the total simulation time, warm-up time, process cycle time, loading/unloading time, buffer capacity, and defect rate. Finally, the user application outputs a 3D visualization of past production flow and simulation results. The production history tracking system consists of two management modules: one for the process-facility library and one for the production history. While the process-facility library management module handles adding, editing, and deleting 3D models and the basic information for processes and facilities, the production history management module searches the past production information of a “Product/Lot” using its serial number and configures a neutral file to represent the scenario.
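The lookup step described above (serial number → production history → neutral file) can be sketched as follows. The paper does not publish its neutral-file schema, so all field names, the in-memory database, and the JSON encoding are assumptions for illustration only (the actual system was implemented in C#).

```python
import json

# Hypothetical production-history store keyed by serial number.
PRODUCTION_DB = {
    "SN-2019-0042": {
        "line_code": "EC-LINE-03",
        "events": [
            {"process": "P10-PRESS", "facility": "F-PRESS-1", "result": "OK"},
            {"process": "P20-WELD",  "facility": "F-WELD-2",  "result": "DEFECT"},
        ],
    }
}

def build_neutral_file(serial_number: str) -> str:
    """Look up a Product/Lot by serial number and serialize its history
    as a neutral (here: JSON) scenario file for the visualization system."""
    history = PRODUCTION_DB[serial_number]
    scenario = {
        "serial_number": serial_number,
        "line_code": history["line_code"],
        "process_sequence": [e["process"] for e in history["events"]],
        "defects": [e for e in history["events"] if e["result"] == "DEFECT"],
    }
    return json.dumps(scenario, indent=2)

neutral = build_neutral_file("SN-2019-0042")
```

The neutral file decouples the legacy MES data from the 3D renderer: any system that can produce this intermediate format can drive the visualization.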

3D Visualization System of Manufacturing Big Data and Simulation Results


The 3D visualization system is composed of a simulation engine and three management modules for (1) rerun scenario/simulation, (2) 3D model control, and (3) data visualization. The rerun scenario/simulation model management module creates, saves, and deletes rerun scenarios or simulation models. Each scenario includes up to three simulation models and is created based on a neutral file generated by the production history tracking system. A rerun scenario contains the real production history and visualization data, such as capacity, defect rate, process sequence, process-facility codes, and layout, to represent the line status when the selected product was being produced. A simulation model, on the other hand, contains process time and buffer capacity in addition to the rerun scenario’s visualization data. The 3D model control module accelerates or decelerates the speed of 3D objects in the presented animation. The data visualization module refines raw simulation data and shows the results in graphs and tables. The four databases used in this system are (1) the process-facility library, (2) the manufacturing information database, (3) the process design database and (4) the rerun scenario database. Of these, the manufacturing information database and the process design database are part of a legacy system, for example an MES. The process-facility library is a 3D model library consisting of a standard process-facility library and a customized process-facility library. The standard process-facility library provides basic pre-modelled 3D models commonly used in the target industry, while the customized process-facility library contains user-supplied process-facility 3D models.
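The relationship between the two scenario types described above (a simulation model extends a rerun scenario with timing parameters) can be expressed as a small class hierarchy. Attribute names and values are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RerunScenario:
    """Visualization data shared by both scenario types."""
    capacity: int
    defect_rate: float
    process_sequence: List[str]
    process_facility_codes: List[str]
    layout: str

@dataclass
class SimulationModel(RerunScenario):
    """Adds the timing parameters that only simulation needs."""
    process_time_s: float = 0.0
    buffer_capacity: int = 0

# A simulation model can be derived directly from a rerun scenario,
# reusing the production-history data as its base information.
base = RerunScenario(100, 0.02, ["P10", "P20"], ["F1", "F2"], "EC-LINE-03")
sim = SimulationModel(**vars(base), process_time_s=12.5, buffer_capacity=8)
```

This mirrors how the system uses production history as the base information for user-configured simulations.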

3 Implementation

In this study, a standard process-facility library was developed by selecting commonly used facilities found by visiting all the production lines of the target company. As a result, 23 types of facilities frequently used in the electric component manufacturing line were selected and modeled in 3D using Trimble’s SketchUp Pro 2018. To evaluate the performance of the developed 3D visualization system, the authors requested software testing from the Korea Testing Laboratory. The evaluation measures the time required to load a single 3D process-facility model and then divides that time by the size of the model file to calculate the speed (s/MB). Table 1 summarizes the system environment for the performance evaluation.

Table 1. Tools/software used in system development

Item                  Tool/Software
OS                    Windows 10 Pro 64 bit
CPU                   Intel Core i7-7700 3.60 GHz
RAM                   16 GB
HDD                   500 GB
Programming language  C#
Open-source library   MonoGame


Seventeen models were selected at random from the process-facility library to measure the speed. The sizes of the model files used for the evaluation were between 1.00 MB and 1.6 MB. The loading speed had a maximum of 0.768 s/MB, a minimum of 0.692 s/MB, and an average of 0.735 s/MB. Additionally, the difference between the fastest and the slowest loading speed was very small, 0.076 s/MB, which suggests that the model loading speed is affected more by loading preprocessing than by model file size. Visualizing a large amount of production history information with 3D animation makes it easier to identify the problematic process and the results of the inspection process than viewing such information in existing text-based documents. In particular, when confirming the production history information of each lot, it became easy to specify information on various layers such as time, process, and product, as shown in Fig. 2. Facilities that have a defect history or are selected by the user are highlighted in red or blue.
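The reported metric is simply load time divided by file size. The seventeen individual measurements are not published, so the two samples below are invented solely to reproduce the reported extremes (0.692 and 0.768 s/MB) and their spread.

```python
# Hypothetical measurements chosen to match the reported min/max speeds.
measurements = [
    {"load_time_s": 0.692, "size_mb": 1.00},  # invented sample -> 0.692 s/MB
    {"load_time_s": 0.960, "size_mb": 1.25},  # invented sample -> 0.768 s/MB
]

# speed (s/MB) = load time / model file size
speeds = [m["load_time_s"] / m["size_mb"] for m in measurements]
fastest, slowest = min(speeds), max(speeds)
spread = slowest - fastest  # the paper's "very small" difference
```

With a spread this small relative to the file-size range, per-model load time is dominated by the fixed preprocessing cost, matching the paper's interpretation.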

Fig. 2. Example of production history visualization (Color figure online)

As a result of the simulation, the number of defects per hour, the number of defects per facility per hour, the utilization of each process, and the buffer utilization over time can be efficiently viewed and considered, as seen in Fig. 3. Simulation results can also be shown as 3D animations, in the same way as the production history visualization.


Fig. 3. Examples of simulation results

4 Conclusion

This research has developed an open-source based 3D visualization system of production history and simulation results for SMEs. Through this system, the production environment of a specified product or lot can be represented in 3D. In this manner, users can better visualize supply chain occurrences and errors through analyzed data rather than through extensive text and document-based data. For example, users who want to check a production line or process status at a specific production point can view the necessary information on a single screen without a separate query search or document comparison. To evaluate current production line productivity, a comparison of present production lines and hypothetical alternatives is possible through simulations that utilize adjustable settings. Application to other production lines and improvement of the simulation engine’s performance remain as future work.

Acknowledgement. This work was supported by the Development of the Reconfigurable Manufacturing Core Technology based on the Flexible Assembly and ICT Converged Smart Systems grant of MOTIE/KEIT (10052972).

References

1. Kaynak, H., Hartley, J.: A replication and extension of quality management into the supply chain. J. Oper. Manag. 26, 468–489 (2008)
2. Venugopalan, J., et al.: Analysis of decision models in supply chain management. In: 12th Global Congress on Manufacturing and Management, pp. 2259–2268 (2014)


3. Statistics Korea: Survey on the level of informatization of SMEs (2016)
4. Bala, K.: Supply chain management: some issues and challenges – a review. Int. J. Curr. Eng. Technol. 4(2), 946–953 (2014)
5. Ghobakhloo, M., Hong, T.S., Sabouri, M.S., Zulkifli, N.: Strategies for successful information technology adoption in small and medium-sized enterprises. Information 3, 36–67 (2012)
6. Choi, S., et al.: The integrated design and analysis of manufacturing lines (I) – an automated modeling & simulation system for digital virtual manufacturing. Korean J. Comput. Des. Eng. 19(2), 138–147 (2014)
7. Lee, J., Choi, S., Park, Y., Noh, S.: A study on factory review using virtual reality model based on P3R information. Korean J. Comput. Des. Eng. 15(5), 343–353 (2010)
8. Hwang, D., Kim, S., Lee, S., Kang, J., Noh, S.: A study on 3D based visualization system of production and quality history for an automotive parts supplier. Korean J. Comput. Des. Eng. 23(4), 404–413 (2018)

Cyber-Physical Systems

Blockchain as an Internet of Services Application for an Advanced Manufacturing Environment

Benedito Cristiano A. Petroni1, Jacqueline Zonichenn Reis1, and Rodrigo Franco Gonçalves1,2

1 Graduate Studies in Production Engineering, Paulista University, Sao Paulo, Brazil
[email protected], [email protected], [email protected]
2 Polytechnic School, University of Sao Paulo, Sao Paulo, Brazil

Abstract. In the current dynamic and competitive market, contemporary manufacturing systems must be constantly adapted to meet the requirements for more agile and smart production. The advent of Industry 4.0 serves as a reference for the development of applications and technologies for manufacturing process innovation. Among the pillars of Industry 4.0, noticeable relevance is given to Cyber-Physical Systems, the Internet of Things and the Internet of Services. In parallel, new technologies such as Blockchain and Smart Contracts are important innovations also coined in the information technology domain. More specifically, the Internet of Services is characterized by a service-oriented computing model enabling a diversity of software-based services through the Internet, among them the Blockchain solution. The paper explores these technologies, presenting their intersection as well as their possible applications at the shop floor level. Through the interlock of these concepts, the paper aims to propose an architecture that promotes the use of Blockchain for the validation of service demands in an advanced manufacturing scenario of Industry 4.0. Lastly, a hypothetical case study is presented to illustrate the proposed architecture.

Keywords: Blockchain · Internet of services · Industry 4.0 · Cyber-physical systems · Smart contracts

1 Introduction

The advent of Industry 4.0 serves as a reference for the development of applications and technologies for manufacturing process innovation. This paradigm integrates Cyber-Physical Systems, the Internet of Things and the Internet of Services [1–3]. Individuals and enterprises are every day more surrounded by connected devices and remote services with the potential for real-time interaction. These interconnected objects demand application-centric technologies able to provide a diversity of services [4], from which the Internet of Things and Internet of Services domains emerge.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 389–396, 2019. https://doi.org/10.1007/978-3-030-29996-5_45


The Internet of Services, for instance, is characterized as a distributed computing environment which can be populated by a large number of software services [5]. Blockchain is also considered a new endeavor in the pervasive area of information technology, supporting a secure exchange of data, knowledge and services so that the parties can contribute their parts [6]. Considering that manufacturing involves many types of contracts related to industrial services (e.g. maintenance and support), we can pose the following question: how could conventional service contracts turn into Smart Contracts? The aim of this paper is to propose an architecture for the application of Blockchain technology and Smart Contracts as the linkage between the Internet of Services and the data generated at the shop floor level through Cyber-Physical Systems.

2 Research Method

The paper is theoretical and proposes an architecture for possible implementation, whose main goal is the utilization of Blockchain technology for the validation of manufacturing service contracts. First, through an exploratory bibliographic review, the paper presents the technical concepts of Blockchain as well as the rules of use concerning Smart Contracts. It also presents the state of the art for the technologies subsumed under Industry 4.0, such as Cyber-Physical Systems, the Internet of Things and the Internet of Services. Based on the definitions and functionalities of these new technologies, an interlock is proposed considering their application in a shop floor scenario. Lastly, a hypothetical case study is presented to illustrate the proposed architecture.

3 Blockchain and Smart Contracts

Blockchain creates reliable transactions through intelligent code. This possibility promotes collaboration in a community, since transactions are validated among two or more parties, differently from the traditional centralized model [7]. Technically, Blockchain is a protocol, or sequence of messages, between at least two computers, consisting of algorithms that communicate through those messages. During its operation, blocks are stored with information about specific applications, which allows intelligent devices to be identified and programmed according to the interactions [6]. Blockchain works as a distributed ledger maintained by a system of computers in a network. Its functions are recording transactions, executing electronic contracts and performing asset tracking. There is no central control and no participant is more reliable than any other; the system depends, therefore, on a decentralized consensus that comes from the other participants’ validation of the existing rules. Such rules, which the blockchain can store and validate, are specific domain information called Smart Contracts [8]. Smart Contracts can be understood as a computer protocol used to facilitate and automate financial contracts [9]. According to the operation of the Blockchain, the


possibility of storing records with the implementation of Smart Contracts for transactions (the application of rules) in distinct domains has become feasible. One of the possible areas of implementation that is emerging is the Internet of Services [10].
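The ledger idea described above can be made concrete with a minimal sketch: each block stores a hash of its predecessor, so tampering with any recorded transaction invalidates every later block. This is only the data-structure core; real Blockchain networks add peer-to-peer consensus, which is out of scope here, and the transaction fields are invented for illustration.

```python
import hashlib
import json

def make_block(prev_hash: str, transaction: dict) -> dict:
    """Create a block whose hash covers both the transaction and the
    previous block's hash, chaining the records together."""
    payload = json.dumps({"prev": prev_hash, "tx": transaction}, sort_keys=True)
    return {"prev": prev_hash, "tx": transaction,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        payload = json.dumps({"prev": block["prev"], "tx": block["tx"]}, sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"contract": "maintenance", "status": "signed"})
chain = [genesis,
         make_block(genesis["hash"], {"contract": "maintenance", "status": "delivered"})]
```

In this sense a Smart Contract is simply domain-specific rule data recorded in such blocks, whose validity any participant can recheck independently.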

4 Industry 4.0 Structural Elements

4.1 Cyber Physical Systems

Cyber-Physical Systems (CPS) are the convergence of two layers of information technology: the physical or technology layer and the “cyber” or virtual layer [11]. The architecture of this type of electronic system is composed of sensors and actuators that monitor the physical processes in the environment in which they act, creating a virtual twin of the real world [12]. Sensors are devices responsible for capturing information from the physical environment and transforming it into electrical and digital signals. Actuators perform interventions in the physical world triggered by digital signals and commands, such as the opening and closing of valves, for example. Cyber-Physical Systems will be increasingly involved in production systems due to their interconnectivity skills and their autonomy over external influences and internally stored configurations. In such production systems, both horizontal integration through value networks and vertical integration through networked manufacturing systems can be built to produce intelligent production [13].

4.2 Internet of Things

It is through the Internet of Things that Cyber-Physical Systems communicate with each other and with people in real time [12]. Internet of Things (IoT) is a term coined by Kevin Ashton, a British entrepreneur and founder of start-ups. His idea, formulated in 1999, described a system in which the material world communicates with computers through the exchange of data with omnipresent sensors. Almost a decade after its creation, in 2009, the number of devices connected to the network exceeded the number of inhabitants of our planet. According to Cisco, this moment marks the true birth of the “Internet of Things” [4]. With the advent of IoT, objects such as machines, vehicles and home appliances are connected and accessed remotely by mobile devices connected to the Internet. Therefore, it is possible to affirm that the operation of a given cyber-physical system has become conditioned on the operation of the Internet of Things.

4.3 Internet of Services

The Internet of Services (IoS) can be defined as a new way for businesses to relate to their target audience through intelligent objects, offering new services and transforming business models [14]. Through the IoS, both internal and external services are created, offered and reused by the value chain participants [12].


At a technical level, a service can be software available on the Internet with a defined interface [15]; a platform used to pay for products through the web; or part of an infrastructure such as a virtual machine where files are stored, named the “cloud” [16]. The services generated by the Internet of Services bring even more value to the Internet of Things because, through the many connected objects, new services can be created and combined with each other. The cross-referencing of information across the companies and the users of the systems makes it possible to add value to the service being provided to the final customer.
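The idea of services with defined interfaces that can be combined with each other can be sketched minimally. The service names, fee, and pricing below are invented purely to illustrate composition; real IoS platforms expose such interfaces over the network (e.g. as web services).

```python
from typing import Callable

def make_service(name: str, fn: Callable[[float], float]) -> dict:
    """A 'service' here is just a name plus a defined numeric interface."""
    return {"name": name, "call": fn}

# Two standalone services with invented pricing rules.
storage = make_service("cloud-storage", lambda mb: mb * 0.01)              # price per MB
payment = make_service("payment", lambda amount: round(amount * 1.05, 2))  # adds a 5% fee

def compose(s1: dict, s2: dict) -> dict:
    """Chain two services whose interfaces match: s1's output feeds s2."""
    return make_service(f"{s1['name']}+{s2['name']}",
                        lambda x: s2["call"](s1["call"](x)))

# A new, combined service created from the existing ones.
billed_storage = compose(storage, payment)
```

The composed service is itself a service with the same kind of interface, which is exactly what lets IoS participants keep reusing and recombining offerings along the value chain.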

5 Proposed Architecture

By the intersection of the technologies described, we propose an architecture for a Smart Contracts application on Blockchain technology, as illustrated in Fig. 1.

Fig. 1. Architecture for blockchain and smart contracts through the Internet of Services (layers shown: Shop Floor, CPS with virtual twin, IoS with service request/delivery, Blockchain Smart Contracts, and Service Providers offering supply, maintenance and support services)

The proposed architecture works as follows:

– The Shop Floor (physical) layer is responsible for performing manufacturing transformation operations. In productive processes, this can be understood as operating machines and tools, conveyor belts, robots, furnaces, boilers, manual operations, reactor tanks, etc. The events in this layer can request services (e.g. task validation, job information, poka-yoke digitalization).
– The Cyber-Physical System (CPS) layer is interconnected to the physical elements through sensors and actuators, so that the monitoring and control of the physical environment can be performed from the virtual environment. In addition to the control of such equipment, tests and simulations can also be carried out. “Smart” equipment allows for decentralized decision-making and real-time human cooperation. The CPS layer delivers services to the Shop Floor layer through actuators, displays, sign lights, etc.
– The Internet of Services layer works as a service bus where different services can be accessed, matched and integrated by discovery and composition applications for an Industry 4.0 smart manufacturing environment [17].
– The Blockchain layer is implemented by Blockchain DLT (Distributed Ledger Technologies), providing an immutable ledger that is maintained by peers in a private network where all Smart Contracts can be accessed. When a service is requested and delivered, a smart contract is established between the company and one service provider in a peer-to-peer transaction, validated by the Blockchain network. Thus, the Blockchain layer is responsible for:
  – the validation of the predefined rules (services), in the form of Smart Contracts, according to the distributed network consensus and stored in the specific Blockchain;
  – the interactions in the development of new rules, as well as the diversity of operations, that come from the interconnection existing through the Internet of Services;
  – the control, which is automated and decentralized in the units, with their intelligent and autonomous objects on the production line.
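The Shop Floor → CPS interaction described above can be sketched as a simple rule: a sensor reading crosses a threshold, the CPS layer raises a service request toward the IoS layer, and an actuator command (e.g. a sign light) is returned to the shop floor. The sensor names, threshold, and service names are assumptions for illustration.

```python
def cps_handle_reading(sensor_id: str, value: float, threshold: float = 80.0):
    """Map one sensor reading to (service_request, actuator_command).

    A reading above the threshold triggers a maintenance service request
    and a red sign light; otherwise no request is raised.
    """
    if value > threshold:
        request = {"service": "maintenance", "sensor": sensor_id, "value": value}
        command = {"actuator": "sign_light", "state": "red"}
    else:
        request = None
        command = {"actuator": "sign_light", "state": "green"}
    return request, command

# An abnormal furnace temperature produces both a service request and
# a visible actuator response on the shop floor.
req, cmd = cps_handle_reading("furnace-temp-1", 95.2)
```

In the full architecture, the emitted request would be matched to a provider on the IoS service bus and the resulting transaction validated as a Smart Contract in the Blockchain layer.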
The Service Providers are different players, both virtual and physical, that, through Blockchain validation, become trusted third parties within a collaborative business ecosystem where the services are offered and consumed in combined use.

The total integration of the value chain with the production structure enables communication with suppliers and customers through the Internet of Services.

6 Application Case

The proposed architecture can be illustrated by considering the following scenario: a steel company uses a complex emission filter system, provided by a partner through servitization. Servitization in the manufacturing equipment industry is enabled by the increased incorporation of sensors that provide data about the condition and usage of manufacturing equipment. The collected data can be processed and analyzed to gain new insights and provides companies with the opportunity to create new value-adding services [18]. In a product-oriented business model, the manufacturer has a low level of servitization. The revenue stream is largely based on product sales and spare parts, and


the ownership of the product is considered as transferred to the customer. This is a transaction-based “production and consumption” business model, where the responsibilities of ownership lie with the customer. In a service-oriented business model, which has a higher level of servitization, the supplier still collocates the equipment, but the ownership of the product is not transferred to the customer. Instead, the supplier takes responsibility for equipment selection, consumables, monitoring of performance and carrying out servicing and disposal. In return, the supplier receives payment as the customer uses the capabilities that the equipment provides [19]. This service-oriented or “pay-per-use” model in a manufacturing company relies on connected manufacturing equipment containing embedded technologies, enabling it to interact with other objects or the external environment.

6.1 Context of the Hypothetical Case

The steel company “XYZ” presents a volatile productive capacity; it therefore demands elasticity, as well as pass-through rates and updating of allocation rules. If an increase in production demand is forecasted, the filtering capacity should be flexible enough to match the new emission volume. On the other hand, when demand decreases to a lower level, the company should not pay for a higher emission filtering capacity. To manage this critical configuration, software versioning and real-time access control are vital to maintain information quality and demand accuracy. Further, a connected network and “smart” equipment allow for decentralized, real-time decision-making. Moreover, smart contracts may mitigate bureaucracy and automate the contract rules by replacing conventional service contracts. In this “value in use” business model, the responsibility for equipment performance lies with the supplier or service provider, who receives revenue as the customer uses the filters.

6.2 Application of Proposed Architecture

By contracting the filtering capacity through the Internet of Services and Blockchain technology, the contract rules between the steel company “XYZ” and its service providers can be quickly adapted to the demand sensed by the filters at the shop floor level. Illustrated through our proposed architecture, we have the following:

– Steel production filters on the shop floor are in the physical layer with their sensors and actuators. Emission volume and emission composition sensors, which involve gases and particulate material, request services from the Cyber-Physical System (CPS) layer, depending on the upward or downward fluctuation.
– The Cyber-Physical System (CPS) layer is interconnected to these physical elements through their sensors and actuators, so that the monitoring and control of the physical environment can be performed from the virtual environment.
– Services can be invoked from the Internet of Services layer, such as the Supplier service and the Maintenance support service, among others. Regarding the emission volume control, the Supplier service is triggered for a contract update with the supplier.
– Depending on the gas composition, one contract model or another applies. A Smart Contract is managed via Blockchain, according to the filtering capacity of the virtual twin.
– The service provider, or supplier, validates the new level of gas emission and the Blockchain-based smart contract is updated in a peer-to-peer transaction.

Such an architecture addresses self-adaptive contracts, real-time flexibility and security, which are the challenges that the “XYZ” manufacturing company faces when developing its servitization model. It promotes a range of expected benefits that encourage manufacturers to embrace Blockchain technology.
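A self-adaptive contract rule of the kind described above could look like the sketch below: the contracted (and billed) filtering capacity tracks the emission volume sensed on the shop floor, so "XYZ" pays only for the tier it actually needs. The tier bounds and prices are entirely invented; a real deployment would encode such a rule as a Smart Contract validated by the Blockchain network rather than as local code.

```python
# (max emission volume in m^3/h, contracted capacity tier, price per hour)
# -- all values are hypothetical.
CAPACITY_TIERS = [
    (1000, "small", 50.0),
    (5000, "medium", 180.0),
    (float("inf"), "large", 400.0),
]

def update_contract(sensed_volume_m3h: float) -> dict:
    """Select the contract tier matching the sensed emission volume,
    scaling the pay-per-use agreement up or down with demand."""
    for max_volume, tier, price in CAPACITY_TIERS:
        if sensed_volume_m3h <= max_volume:
            return {"tier": tier,
                    "price_per_hour": price,
                    "sensed_volume_m3h": sensed_volume_m3h}
    raise ValueError("unreachable: last tier is unbounded")

# Demand rises, the contract scales up; demand falls, it scales back down.
rush = update_contract(4200)
quiet = update_contract(800)
```

Because each tier change is a deterministic function of sensed data, both parties can recompute and validate the update independently, which is what makes it suitable for peer-to-peer Blockchain validation.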

7 Final Considerations

This paper presents a conceptual Blockchain architecture as a solution that can be implemented in a manufacturing environment. It may validate certain manufacturing service demands from members of the Internet of Services context, through the rules and definitions of Smart Contracts. The services generated by the Internet of Services bring value to Cyber-Physical Systems in a manufacturing environment. By involving such Industry 4.0 structural elements and communication protocols, there is great potential for creating rules, or Smart Contracts, through Cyber-Physical Systems, to be applied in the validation and control of manufacturing processes. Through data transmitted in real time, external elements can act on a productive system, controlling, monitoring and intervening in both a reactive and a proactive way. Nowadays, individuals and organizations are increasingly connected and use applications or services offered by companies through the cross-referencing of information considered relevant to the final consumer. However, in the current model, there is a need for a third party to validate a given service. With applications in distributed systems, and through the interaction and security offered by Blockchain technology, the validation will be performed by the network members themselves. In a hypothetical case, we have presented an approach for implementing an adaptive servitization contract based on the proposed Blockchain architecture.

References

1. Kagermann, H., Wahlster, W., Helbig, J.: Securing the future of German manufacturing industry: recommendations for implementing the strategic initiative Industrie 4.0. Acatech – National Academy of Science and Engineering, pp. 1–82 (2013)
2. Sanders, A., Elangeswaran, C., Wulfsberg, J.: Industry 4.0 implies lean manufacturing: research activities in industry 4.0 function as enablers for lean manufacturing. J. Ind. Eng. Manag. 9(3), 811 (2016)
3. Hofmann, E., Rüsch, M.: Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 89, 23–34 (2017)


4. Witkowski, K.: Internet of things, big data, industry 4.0 – innovative solutions in logistics and supply chains management. Procedia Eng. 182, 763–769 (2017)
5. Autili, M., Giannakopoulou, D., Tivoli, M.: Thematic series on verification and composition for the Internet of Services and things. J. Internet Serv. Appl. 9(1), 10 (2018)
6. Li, Z., Wang, W.M., Liu, G., Liu, L., He, J., Huang, G.Q.: Toward open manufacturing: a cross-enterprises knowledge and services exchange framework based on blockchain and edge computing. Ind. Manag. Data Syst. 118(1), 303–320 (2018)
7. Tapscott, D., Tapscott, A.: Blockchain Revolution: How the Technology Behind Bitcoin is Changing Money, Business, and the World. Penguin (2016)
8. Kweon, M.: A study of blockchain technology in facilitating lending services with distributed risk aversion. Rev. Comput. Sci. Eng. 4(1), 91 (2018)
9. Bitfury Group: Smart contracts on Bitcoin blockchain. http://bitfury.com/content/5-whitepapers-research/contracts-1.1.1.pdf. Accessed 18 Feb 2019
10. Gill, A.Q., Braytee, A., Hussain, F.K.: Adaptive service E-contract information management reference architecture. VINE J. Inf. Knowl. Manag. Syst. 47(3), 395–410 (2017)
11. Givehchi, O., Landsdorf, K., Simoens, P., Colombo, A.W.: Interoperability for industrial cyber physical systems: an approach for legacy systems. IEEE Trans. Ind. Inform. 99, 1–1 (2017)
12. Hermann, M., Pentek, T., Otto, B.: Design principles for industrie 4.0 scenarios: a literature review. Working Paper No. 01/2015, Technische Universität Dortmund, Fakultät Maschinenbau and Audi Stiftungslehrstuhl - Supply Net, Order Management (2015)
13. Liu, Q.: An application of horizontal and vertical integration in cyber physical production systems. In: International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, pp. 110–113, Xi’an (2015)
14. Satyro, W.C., Sacomano, J.B., da Silva, M.T., Gonçalves, R.F., Contador, J.C., Von Cieminski, G.: Industry 4.0: evolution of the research at the APMS conference. In: IFIP International Conference on APMS 2017, pp. 39–47 (2017)
15. Soriano, J., Heitz, C., Hutter, H.-P., Fernández, R., Hierro, J.J., Vogel, J., Edmonds, A., Bohnert, T.M.: Internet of Services. In: Bertin, E., Crespi, N., Magedanz, T. (eds.) Evolution of Telecommunication Services. LNCS, vol. 7768, pp. 283–325. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41569-2_14
16. Vozmediano, R.M., Montero, R.S., Llorente, I.M.: Key challenges in cloud computing: enabling the future Internet of Services. IEEE J. Mag. 17(4), 18–25 (2011)
17. Zonichenn Reis, J., Gonçalves, R.: The role of Internet of Services (IoS) on Industry 4.0 through the Service Oriented Architecture (SOA). In: IFIP WG 5.7 International Conference, APMS 2018, Seoul, Korea, August 26–30, 2018, Proceedings, Part II, pp. 20–26 (2018). https://doi.org/10.1007/978-3-319-99707-0_3
18. European Commission: Cross-Cutting Business Models for IoT. European Union, European Commission (2017)
19. Samuelsson, S.: Evaluating Servitization in the Manufacturing Equipment Industry (2018)

Development of a Modeling Architecture Incorporating the Industry 4.0 View for a Company in the Gas Sector

Nikolaos A. Panayiotou, Konstantinos E. Stergiou, and Vasileios P. Stavrou

National Technical University of Athens, Zografos Campus 15780, Athens, Greece
[email protected]

Abstract. Industry 4.0 is a fast-growing concept which has gained ground over the last few years and strives to achieve a higher and more efficient production rate through the use of automation. This concept is directly correlated with Business Process Management, because its implementation concerns the improvement of business processes. Business Process Modeling is a tool of Business Process Management which can depict the processes of an organization so that they can be elaborated and improved. For that reason, models are widely used for the better understanding of processes and as a first step in the insertion of new concepts, such as Industry 4.0, into an organization. Hence, a comprehensive framework for a modeling architecture is essential for a company that desires the transition to new concepts according to its needs, its processes and its structure. In this paper, a complete architecture proposed for a company operating in the gas industry is presented, including the appropriate models for the recording of business processes and how Industry 4.0 principles could be incorporated into them.

Keywords: Industry 4.0 · Modeling architecture · Gas sector

1 Introduction

The term Industry 4.0 (or the 4th industrial revolution) is a technology-oriented concept concerning primarily the manufacturing domain; however, it can be adapted and applied to any value chain organization [1]. The integration of Industry 4.0 principles in business processes, following Business Process Management (BPM) tools, could assist and facilitate their improvement [2]. BPM’s contribution to Industry 4.0 is accomplished using the method of Process Modeling, providing stakeholders with adequate means to control intelligent manufacturing processes and smart factories efficiently and effectively [3]. Business process models examine and exploit the behavioral aspects of a system and are usually developed in an early stage of requirements design [1]. Model-based systems engineering (MBSE) is a key enabler for building complex systems, as demonstrated by the increased number of related publications [4, 5]. The deployment of employees as well as the human-machine interaction is depicted by systems model development [1, 6, 7], leading to a complete view of Industry 4.0 processes.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 397–404, 2019. https://doi.org/10.1007/978-3-030-29996-5_46


Through the capability of process simulation via the configured models, a better understanding of system operation can be achieved, providing feedback about changes and readjustments [8] that can be implemented for improved results and better resilience. Simulation as a modeling method is proposed by [8, 9] and [10] as a reliable way of obtaining results. UML-based models are utilized by [1, 11, 12] and [13]; they also constitute a large category of process modeling techniques used as BPM tools. Hence, both dynamic and static techniques are exploited for the configuration of models and architectures. BPMN, on which the architecture of this paper is based, is reported to be a valid and adaptable method for both static and dynamic approaches to modeling [1, 11]. Complete five-level architectures are proposed by [14, 15] for the deployment of Industry 4.0 in manufacturing environments where real-time data acquisition and management is needed for operations control. According to [16], an architecture exploiting human knowledge for data input in smart factories can be developed. A vertical integration architecture is suggested by [17] in order to collect data and transform it into valuable information for the system. PERA (Purdue Enterprise Reference Architecture) for Industry 4.0 is used by [18] as a base for the development of adjusted architectures such as the one proposed in this study. The development of an enterprise architecture provides a business with useful details about how to align its strategy with its business processes in order to achieve the desired results [19].
The aim of this paper is the development of a Business Process Modeling architecture prototype that describes methodologies and guidelines for the design and combination of business processes and Industry 4.0 in an organization that is responsible for operating the middle- and low-pressure gas distribution networks and desires the transition of its processes to an Industry 4.0 framework. First, the views that satisfy the needs of the organization are presented, followed by the description of the proposed architecture. PERA was chosen as the base of the adjusted meta-architecture of this case, developed as a next-step emerging framework according to PERA principles, because of its simplicity and generic nature [20], which facilitated the fit with the needs and the mentality of the company. PERA and ARIS were combined to develop a new meta-architecture based on the needs of a specific organization operating in the gas industry.

2 Proposed Architecture

The first step in the configuration of the proposed architecture is understanding the role of BPM in the implementation of Industry 4.0 in a gas company. As shown in Fig. 1, BPM constitutes the highest level (Level 6) of the “pyramid” that represents the vertical integration of Industry 4.0 in business processes, comprising business process modeling, execution and control. The lowest level (Level 0) is the Actual Process of the organization's operation. Level 1 encompasses all the equipment needed for direct interaction and coordination with the actual processes, followed by SCADA in Level 2, which collects the data of the Level 1 equipment used in the actual processes. Level 3 includes Equipment System and Maintenance Management, operating on the aggregated data coming from SCADA. Level 4, Product
Lifecycle Management (PLM), is the link between Level 3 and Level 5, because it develops products based on the requirements coming from the ERP and CRM systems of Level 5 while simultaneously controlling the low-level production. The concept of security is vital at all levels of the architecture for a company operating gas distribution networks and has different expressions in the six levels of the pyramid. For example, security is important at a physical level (e.g., covering the risk of physical asset destruction) as well as at an informational and network level (e.g., preventing a cyberattack). Similarly, knowledge has to be managed at the different levels of the pyramid in different ways (e.g., explicit knowledge concerning executional processes is expected to be found in the lower levels of the pyramid, possibly in the form of analytical procedures and work instructions, whereas more tacit knowledge is found in the higher levels, possibly in the form of management good practices and shared experiences).

Fig. 1. The “pyramid” of Industry 4.0 implementation levels
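The six-level structure described above can be summarized in a small sketch (illustrative only: the level descriptions paraphrase the text, and the dictionary and helper function are hypothetical additions, not part of the proposed architecture):

```python
# Illustrative mapping of the Industry 4.0 "pyramid" levels described above.
# Descriptions paraphrase the text; the identifiers are invented.
PYRAMID_LEVELS = {
    0: "Actual Process (operation of the gas distribution network)",
    1: "Field equipment (direct interaction with the actual processes)",
    2: "SCADA (collects data from the Level 1 equipment)",
    3: "Equipment System and Maintenance Management",
    4: "Product Lifecycle Management (PLM)",
    5: "ERP and CRM systems",
    6: "BPM (business process modeling, execution and control)",
}

def data_path(from_level: int, to_level: int) -> list[int]:
    """Vertical integration: data passes through the levels one at a time."""
    step = 1 if to_level > from_level else -1
    return list(range(from_level, to_level + step, step))
```

For example, data collected by field equipment reaches the maintenance-management systems via SCADA: `data_path(1, 3)` yields `[1, 2, 3]`.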

Based on the needs of the organization in which the proposed architecture was implemented, the views of the system analysis have been identified and categorized as follows:
• Organization View: Encompasses the organizational structure of the company according to employees' positions and their allocation to departments.
• Process View: Composed of the processes, subprocesses and activities of the organization.
• Information Systems View: Depicts the information systems used by the company, as well as the applications included in them and their interconnections.
• Industry 4.0 and Internet of Things View: Analyzes the utilization of automation in the operation of semi-autonomous functions using sensors, actuators and telecommunication networks.
• Documents/Files View: The recording and categorization of documents and files significant for the operation of business processes are included in this view.


• Rules/Legislation View: Refers to the business rules and laws that influence organization processes and should be recorded.
• Risks/Controls View: Includes a listing of risks according to their category and their implications for business processes.
• Products/Services/Customers View: Contains the analysis of the products and services provided by the examined organization, as well as the approaches by which its main customer categories are served.
In accordance with the views described above, which should be covered based on the analysis of the organization, ARIS was chosen as the modeling architecture and, more specifically, as the meta-architecture modeling framework to be used. ARIS was selected not only because it completely encompasses the views that have to be included in the analysis, but also because it is understandable and easily accessible to all employees, owing to the existence of supporting software. In Fig. 2 the views of ARIS are presented, including the methods (diagrams) used in each view. The ARIS views are: Organizational, Data, Processes, Functions, and Products and Services. The representation is conducted through the House of ARIS, depicting the interconnections between views. These views correspond to specific levels of the PERA architecture through specific modeling methods. In particular, in Levels 1, 2 and 3, in which the Industry 4.0 principles are implemented in company processes, Network Diagrams, Network Topology Diagrams, IoT Object Definition Diagrams, IoT Context Diagrams and Information Carrier Diagrams are used. In order to represent the information systems requirements, structure and function in Levels 3, 4 and 5, the Application System Type Diagram, Application Collaboration Diagram, Requirements Tree and Requirements Allocation Diagrams, Customer Journey Map and Customer Journey Landscape (for CRM), Product/Service Tree and Information Carrier Diagram are used.
BPM in Level 6 is expressed by the Value-added Chain Diagram, Enterprise Collaboration Diagram (BPMN), Function Allocation Diagram (FAD), Business Rule Architecture Diagram, Business Controls Diagram, KRI Allocation Diagram, Risk Diagram and Information Carrier Diagram. In Fig. 3 the methods of ARIS are redistributed and adjusted to each organization view. In both Figs. 2 and 3 an extra Supervision View has been added, constituting the common platform connecting the other views in the form of a control panel. In particular, the Organization view is structured by the configuration of the Organizational Chart. The Information Carrier Diagram is used for the recording and categorization of documents and files, covering the Documents/Files view. The Function view is supported by the Enterprise Collaboration Diagram, which is the main diagram of the suggested architecture, as it depicts all the functions of the organization in BPMN form, and both “as-is” and “to-be” states can be represented. BPMN diagrams are supported by Function Allocation Diagrams (FAD), which connect BPMN with the diagrams of the other views, and Value-added Chain Diagrams, which group the business processes at a high level. The view of System Requirements is presented by the Requirements Tree Diagram, determining the hierarchy of requirements, which are subsequently analyzed by


Fig. 2. Used diagrams distributed in ARIS views

Fig. 3. Used diagrams distributed in organization views

the Requirements Allocation Diagram. System requirements connect “as-is” with “to-be” diagrams and are connected with business process improvement initiatives. One of the most important views for the utilization of the proposed meta-architecture is the view of Industry 4.0 and IoT, because it supports the transition of the organization towards the adoption of automation. The IoT Object Definition Diagram represents the structure of automations, including the function of sensors and


actuators, supplemented by the IoT Object Context Diagram, which describes the function of the automation in a process. In addition, Network Topology and Network Diagrams depict the communication and interconnection networks of the automations, completing in that way the Industry 4.0 view of organization processes. The view of Risks/Controls/Policies is analyzed as an integrated framework because of the correlation of its elements. The Business Rule Architecture Diagram presents the business policies that the organization should comply with. The Risk Diagram lists the risk categories and the risks associated with the organization's operation, which are measured by the KRI Allocation Diagram and are confronted according to the recorded information encompassed in the Business Controls Diagram. The view of Information Systems is modeled through the Application System Type Diagram, registering all the information systems used by the company and decomposing their structure and functionality through the Application Collaboration Diagram. The final view of Products/Services/Customers uses the Product/Service Tree for the depiction of the products and services offered by the organization to customers. Moreover, the route followed by the customer from the beginning to the end of its interaction with the company, and the analysis of the parameters of each step of the route, are displayed by the Customer Journey Landscape and the Customer Journey Map, respectively. In sum, the proposed architecture consists of the main Process view, which constitutes its core, based on BPMN diagrams representing business processes, framed by the other views, which together form a complete framework harmonized with the needs of the studied gas company, in order to achieve the adoption of Industry 4.0 principles in its processes. BPMN lanes also offer better interaction between physical and electronic actors, which was the reason for choosing BPMN as the central architecture diagram.
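The distribution of diagrams over the organization views described above can be restated compactly (a hypothetical sketch: the dictionary merely collects the view-to-diagram assignments from the text, and all Python identifiers are invented):

```python
# View-to-diagram assignments as described in the text (illustrative only).
VIEW_DIAGRAMS = {
    "Organization": ["Organizational Chart"],
    "Documents/Files": ["Information Carrier Diagram"],
    "Process": ["Value-added Chain Diagram",
                "Enterprise Collaboration Diagram (BPMN)",
                "Function Allocation Diagram (FAD)"],
    "Industry 4.0 / IoT": ["IoT Object Definition Diagram",
                           "IoT Object Context Diagram",
                           "Network Diagram",
                           "Network Topology Diagram"],
    "Risks/Controls": ["Business Rule Architecture Diagram",
                       "Risk Diagram",
                       "KRI Allocation Diagram",
                       "Business Controls Diagram"],
    "Information Systems": ["Application System Type Diagram",
                            "Application Collaboration Diagram"],
    "Products/Services/Customers": ["Product/Service Tree",
                                    "Customer Journey Map",
                                    "Customer Journey Landscape"],
}

def diagrams_for(view: str) -> list[str]:
    """Return the modeling diagrams assigned to a given organization view."""
    return VIEW_DIAGRAMS.get(view, [])
```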
The transition primarily depended on the matching of Industry 4.0 and IoT methods to BPMN diagrams, along with their cooperation with the methods of the other views. Figure 4 represents an overview of the “connection of a new customer to the gas network”, a core process of the company. In this process, it is important to show the interaction of the actors in order to understand it better. A BPMN diagram can depict at a high level of detail the interaction of the customer with the departments of the company and the external partners involved. Every activity of this process can be connected with other diagrams of the same or other views for a better understanding of the system. For example, the smart tag activities in this process are associated with IoT Object Context Diagrams as part of the Industry 4.0 logic.

Fig. 4. (Simplified) Process of new customer connection in the gas network


3 Conclusions and Further Research

Modeling an enterprise system in the Industry 4.0 era is a complex task of strategic importance that faces many challenges in achieving the integration of different and often conflicting views in a holistic manner. In this paper, the design of an integrated architecture based on the ARIS framework is presented, incorporating the view of Industry 4.0 and the Internet of Things. The architecture was developed for a company operating middle- and low-pressure gas distribution networks and was based on its strategic orientation and specific needs. It is understood that this is a starting point and that many steps have to be taken in the future. The developed architecture has not been validated yet and has not been refined based on its full application. So far, 5 out of the 108 identified business processes have been designed and 12 out of the 21 methods have been utilized, of which only two involve cyber-physical operation, so the integration of all methods and their related and interconnected objects has not been fully verified. However, it has to be stated that the experience gained so far is encouraging, showing that the objective of connecting the Industry 4.0 view with other organizational views at different management and operational levels is being fulfilled. The example of the (simplified) process of “new customer connection in the gas network” shows the incorporation of Industry 4.0 workflows with other business processes as materialized through the developed architecture. After the refinement of the architecture in the case of the user company, it would be interesting to test its applicability to other companies operating in the same or other sectors.

Acknowledgements. This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T1EDK-01825).

References 1. Petrasch, R., Hentschke, R.: Process modeling for industry 4.0 applications: towards an industry 4.0 process modeling language and method. In: 13th International Joint Conference on Computer Science and Software Engineering (JCSSE) on IEEE, Khon Kaen, Thailand (2016). https://doi.org/10.1109/jcsse.2016.7748885 2. Nascimento, D.L., et al.: Exploring Industry 4.0 technologies to enable circular economy practices in a manufacturing context: a business model proposal. J. Manuf. Technol. Manag. (2018). https://doi.org/10.1108/jmtm-03-2018-0071 3. Rehse, J.-R., Dadashnia, S., Fettke, P.: Business process management for Industry 4.0 – three application cases in the DFKI-smart-lego-factory. Inf. Technol. 60(3), 133–141 (2018) 4. Szvetits, M., Zdun, U.: Systematic literature review of the objectives, techniques, kinds, and architectures of models at runtime. Softw. Syst. Model. 15(1), 31–69 (2016) 5. Heineck, T., Gonçalves, E., Sousa, A., Oliveira, M., Castro, J.: Model-driven development in robotics domain: a systematic literature review. In: Software Components, Architectures and Reuse (SBCARS), 2016 X Brazilian Symposium on IEEE, pp. 151–160, Maringá-PR-Brazil (2016)


6. Wortmann, A., Combemale, B., Barais, O.: A systematic mapping study on modeling for industry 4.0. In: 2017 ACM/IEEE 20th International Conference on Model Driven Engineering Languages and Systems (MODELS) on IEEE, Austin, TX, USA (2017) 7. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: 49th Hawaii International Conference on System Sciences (HICSS) on IEEE, Koloa, HI, USA, pp. 3928–3937 (2016) 8. Uriarte, A.G., Ng, A., Moris, M.U.: Supporting the lean journey with simulation and optimization in the context of Industry 4.0. Procedia Manuf. 25, 586–593 (2018) 9. Cicconi, P., Russo, A.C., Germani, M., Prist, M., Pallotta, E., Monteriù, A.: Cyber-physical system integration for industry 4.0: modelling and simulation of an induction heating process for aluminium-steel molds in footwear soles manufacturing. In: 2017 IEEE 3rd International Forum on Research and Technologies for Society and Industry (RTSI) on IEEE, Modena, Italy (2017) 10. Ullah, A.S.: Modeling and simulation of complex manufacturing phenomena using sensor signals from the perspective of Industry 4.0. Adv. Eng. Inf. 39, 1–13 (2019) 11. Petrasch, R., Hentschke, R.: Towards an internet-of-things-aware process modeling method: an example for a house surveillance system process model. In: 2nd Management and Innovation Technology International Conference (MITiCON2015), Bangkok, Thailand (2015) 12. Feldmann, S., et al.: Towards effective management of inconsistencies in model-based engineering of automated production systems. IFAC-PapersOnLine 48(3), 916–923 (2015) 13. Strang, D., Anderl, R.: Assembly process driven component data model in cyber-physical production systems. In: Proceedings of the World Congress on Engineering and Computer Science 2014, Vol. II, IAENG, San Francisco, USA (2014) 14. Bagheri, B., Yang, S., Kao, H., Lee, J.: Cyber-physical systems architecture for self-aware machines in Industry 4.0 environment.
IFAC-PapersOnLine 48(3), 1622–1627 (2015) 15. Lee, J., Bagheri, B., Kao, H.: A Cyber-Physical Systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015) 16. Fleischmann, H., Kohl, J., Franke, J.: A reference architecture for the development of socio-cyber-physical condition monitoring systems. In: 2016 11th System of Systems Engineering Conference (SoSE) on IEEE, Kongsberg, Norway (2016) 17. Pérez, F., Irisarri, E., Orive, D., Marcos, M., Estevez, E.: A CPPS architecture approach for Industry 4.0. In: 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA) on IEEE, Luxembourg, Luxembourg (2015) 18. Theorin, A., et al.: An event-driven manufacturing information system architecture for Industry 4.0. Int. J. Prod. Res. 55(5), 1297–1311 (2017) 19. Kitsios, F., Kamariotou, M.: Business strategy modelling based on enterprise architecture: a state of the art review. Bus. Process Manag. J. https://doi.org/10.1108/BPMJ-05-2017-0122 20. Li, H., Williams, T.J.: Some extensions to the Purdue Enterprise Reference Architecture (PERA): I. Explaining the Purdue architecture and the Purdue methodology using the axioms of engineering design. Comput. Ind. 34(3), 247–259 (1997)

Process for Enhancing the Production System Robustness with Sensor Data – a Food Manufacturer Case Study

Sofie Bech, Thomas Ditlev Brunoe, and Kjeld Nielsen

Department of Materials and Production, Aalborg University, Fibigerstræde 16, 9220 Aalborg, Denmark
{sofie,tdp,kni}@mp.aau.dk

Abstract. Global markets for food products are changing towards accommodating larger product variety while experiencing shorter product lifespans. These market trends put pressure on the existing manufacturing infrastructure; however, the food industry lacks investment in manufacturing technology. This research illustrates how the robustness of an existing production system can be enhanced using sensor data, including a case study of a Danish food company. The paper presents an outline of an iterative process performed in a case company to identify and realize the potential of utilizing sensor data from the existing production setup, and an example of the potential and return on investment is introduced. Several challenges are recognized when implementing sensor data to enhance production system robustness. In-house competencies for installing the sensor setup and operating a database with a large amount of data are lacking. In addition, there is the challenge of continuously maintaining the data and the analyses. The research points to the importance of involving operators to better understand the context of the production. In conclusion, the case company now obtains information about the production based on data rather than only on running adjustments based on the intuition of the operators, and has thereby enhanced the production system robustness.

Keywords: Food manufacturing · Sensor data · Production system robustness

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 405–412, 2019. https://doi.org/10.1007/978-3-030-29996-5_47

1 Introduction

Within the food industry, global markets are today driven by increasing global competition, demand for changes in the product mix, and frequent introductions of new products [1, 2]. In addition, the food and beverage market has shifted towards being very dynamic, with increasing competition, lower profit margins and extensive new regulations [3, 4]. Nevertheless, in many companies the food and beverage production setup has difficulties accommodating these new challenges, partly because the production setup is designed and operated with tacit knowledge and the production equipment has a long service life [5]. Another aspect is lower investment in manufacturing technology compared to other industries. As an indicator of this lack of investment, in 2016 the food industry commissioned 8,200 industrial robots worldwide, while the automotive industry commissioned 103,300 units, accounting for a share of


35% of the total supply of industrial robots [6]. Moreover, Kohr et al. [7] state that, in comparison, the automotive OEM industry has a ratio of about 30% of R&D sites to production sites, whereas the process industry has only an 11% ratio. This difference in investment and R&D indicates that food and beverage companies are less likely to design and commission new manufacturing lines when introducing new products and variants. Hence, new products need to be introduced into existing manufacturing systems, which yields a requirement for higher process robustness. The majority of research on designing robust processes focuses on establishing new manufacturing systems, i.e. green-field operations. Green-field research in production robustness thus cannot be applied directly, due to the lack of willingness to invest in new equipment. Therefore, to accommodate the new market demand with existing equipment, this research addresses whether sensors can be utilized to collect data from the existing manufacturing equipment to increase process robustness. The cost of sensor setups in manufacturing has decreased over the last few years, which has opened up this opportunity. The challenges and opportunities outlined in this section lead to the research question of this paper: How can the robustness of existing production systems be enhanced using sensor data?

2 Method

To address the research question, a case study was conducted within a Danish food manufacturing company. The case study methodology was selected due to the need to analyze the phenomenon of sensor data in an existing production system in its specific context (the food manufacturing industry) [8, 9]. The case company is introduced in Sect. 2.1. The research is based on a two-year project in the case company in which the authors participated. A new process for establishing sensor data acquisition, forming and testing hypotheses, and transforming those into actions to improve the manufacturing system is proposed by the authors and outlined in Sect. 3. This process is combined with the case study to exemplify the process and determine the potential in the specific case. The contribution of the research is thus twofold: (1) a proposed iterative method for establishing sensor-based data collection in food manufacturing, and (2) a case study outlining the application of the method and examples of actual improvements in the manufacturing system suggested by the method.

2.1 Case Company

The case study was conducted at a Danish manufacturer of Danish pastry. The company has two separate production lines with a total annual production of 12,700 tons. The products have different recipes, fillings, toppings and shapes, implying thousands of possible combinations. Nevertheless, the process flow is generic at an overall level, as illustrated in Fig. 1, and includes the following activities: (1) mixing of the dough, (2) processing of the dough and incorporating margarine, (3) shaping of products and adding filling and topping, (4) raising in a warm environment and (5) freezing.


Fig. 1. A generic process flow at the case company [10]

Figure 2 illustrates a section of the previous information flow at the case company, structured according to the automation pyramid [11, 12]. The boxes represent the systems that operate at different levels, and the lines represent flows of information. There is a dedicated system controlling the mixing process of the dough, which has a direct connection with the PLC on the field level; this is the general picture of most processes in the production lines. The factory level addresses the overall business processes and the supply management, and at this level data is only required for production planning etc. Today, at the case company, the information flow between the factory level and the cell and control level is sparse and rarely utilized. Specifically, performance data that could be utilized for optimizing production planning and control is not collected. The communication between the factory level and the cell and control level is primarily manual, and no direct integration is implemented. Hence, information primarily flows from the factory level downwards, whereas no information formally flows from the lower levels to the factory level. This implies that decisions at the factory level cannot be supported by information from the lower levels. At the cell and control level, parts of the lines are controlled by individual systems, and little to no communication occurs between these systems. This sparse communication is illustrated by a dotted line in Fig. 2. Moreover, the systems at the cell and control level are mainly controlled manually by an operator. Since little information on machine settings is formalized, this results in a production system run on tacit knowledge of how the operator intuitively thinks the process should be controlled, and a system with little fact-based justification of decisions at higher levels.

Fig. 2. The information flow at the case company (factory level; cell and control level: Mixing, Processing; field level: Extruder, Cut)
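The asymmetry of the information flow just described can be restated in a tiny sketch (illustrative only; the level names are shorthand for the automation-pyramid levels in the text, and the function is a hypothetical helper):

```python
# Directed information flows at the case company as described above.
# An edge (a, b) means information formally flows from level a to level b.
FLOWS = {
    ("factory", "cell_control"),   # primarily manual, sparse communication
    ("cell_control", "field"),     # e.g. mixing system -> PLC on the field level
}

def flows_formally(level_from: str, level_to: str) -> bool:
    """True if information formally flows from one level to another."""
    return (level_from, level_to) in FLOWS

# Note the absence of any upward edge: decisions at the factory level
# cannot be supported by data from the lower levels.
```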


3 Identifying the Potential

As stated in the introduction, the food industry generally lags behind other industries in terms of investment in utilizing sensor data for improving operations. Although adding sensors to existing production systems is an inexpensive alternative to establishing new, more digitized production systems, identifying how the sensor data can improve the processes is a major task in the case company. Moreover, determining the potential outcome beforehand is a time-consuming task, and thus the proposed process is iterative, encouraging experimentation with data from inexpensive sensors. Figure 3 illustrates the process from initial hypotheses to conclusion and capitalization. In the following, each part of the process in Fig. 3 is outlined. (1) The process starts with hypotheses, stated by experts in the company, about what the potential of using sensor data in the existing production system might be, and what data is necessary to address these hypotheses. Establishing the hypotheses involves several parts of the company, including operators, technicians, engineers and data analysts. The initial hypotheses thus imply requirements for new sensors. (2) The data needed, and the quality requirements for the data, depend on the hypotheses. There are basically two ways to obtain data from the existing production equipment: (a) adding new sensors or (b) accessing existing sensors in existing equipment. When adding new sensors to existing equipment, the location and type of the sensors must be carefully considered. In a tough and strongly regulated production environment such as the food industry, all sensors must withstand cleaning. Next, the sampling frequency of the data logging must be determined, so that the information is sufficient but not excessive. (3) Collect sensor data.
Once sensors are installed and integrated into the computer network, all sensor data is stored in one common database, which is made available to different stakeholders within the company. The required timespan is implied by the initial hypotheses. (4/5) Analyze the data to address the hypotheses. If data is logged using existing equipment PLCs, there is the task of translating the names in the PLC into terms that are familiar to stakeholders, since there may be a lack of documentation of the existing equipment. If the production line consists of both a continuous and a discrete flow, there is the task of relating the data to the same timeframe, especially when the production process is operator driven. Once this is completed, the actual analyses can be performed, applying data analytics software such as RStudio, Microsoft Power BI or a combination, depending on the analyses necessary to address the hypotheses. If the hypotheses can be addressed, the analyses can be concluded, the results can be capitalized on, and the loop may be terminated. Alternatively, if the hypotheses are rejected, new hypotheses can be formulated and evaluated through a new loop. (6) As a side effect of this process, questions about new potentials can be raised. Moreover, as a spin-off from the analyses, new hypotheses may be generated. If the data foundation is sufficient for the new analyses, this can lead to new findings through data analytics. If not, the process takes a new loop, as illustrated in Fig. 3, by establishing sensor data collection, collecting sensor data and analyzing the data. Section 4 presents an example application from the case company.
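The six steps above can be sketched as a simple loop (a sketch only: the callables are hypothetical placeholders for company-specific activities, not part of the paper's method):

```python
def robustness_loop(initial_hypotheses, form_setup, collect, analyse, capitalize):
    """Iterate the proposed process: hypotheses -> sensor setup -> data
    collection -> analysis -> conclusion/capitalization or new hypotheses.
    All callables are placeholders for company-specific activities."""
    hypotheses = list(initial_hypotheses)
    results = []
    while hypotheses:
        h = hypotheses.pop(0)
        setup = form_setup(h)                 # (2) add or access sensors
        data = collect(setup)                 # (3) log into the common database
        confirmed, new_hs = analyse(h, data)  # (4/5) test the hypothesis
        if confirmed:
            results.append(capitalize(h))     # (5) conclude and capitalize
        hypotheses.extend(new_hs)             # (6) spin-off hypotheses: new loop
    return results
```

The loop terminates when no hypothesis remains open, mirroring step 5 of Fig. 3; a spin-off hypothesis from step 6 simply re-enters the queue.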


Fig. 3. The process of establishing sensor data collection, analyzing and capitalizing on sensor data (1. Initial hypotheses; 2. Establish sensor data collection; 3. Collect sensor data; 4. Analyse data to address hypotheses; 5. Conclude & capitalize; 6. New hypotheses)

4 Example

This section presents an example from the case company applying the method illustrated in Fig. 3, from stating the initial hypothesis and establishing sensor data collection, to collecting sensor data for the analyses and concluding and capitalizing. Furthermore, the example includes figures on cost and savings resulting from applying the method.

4.1 Identifying Start Temperature

Temperature is key to producing high-quality Danish pastry. Prior to this project, only the initial temperature of the dough was measured, by a manual offline test. One operator was in control of mixing the dough and had a recipe defining guidelines for the dough temperature, which can vary from 4 °C to 7 °C regardless of the ambient temperature in the factory. The operator can add CO2 manually and thereby control the initial temperature of the dough. The optimum start temperature of the dough had been a topic of discussion for several years. The hypothesis formulated by the stakeholders at the case company was: controlling the initial temperature of the dough based on sensor data can reduce the CO2 consumption. The process of establishing sensor data collection started by looking into the current production setup. Temperature sensors were added to the system with the purpose of understanding the effect on the temperature: three online sensors measuring the dough temperature were added, and two sensors were installed to log the ambient temperature in the factory. The data collection had a 10-month timespan with a frequency of one set of data points

410

S. Bech et al.

per minute. The frequency of the data logging was determined by interviewing operators about how often changes were made. The logged data was analyzed by loading the dataset into Microsoft Power BI and applying the R plugin for advanced statistical analyses. Following this, a correlation analysis was performed, determining the correlation between the initial temperature of the dough, dough temperatures further along the process chain, the ambient temperature, and the amount of cooling CO2 injected into the dough. The analysis indicated that the initial temperature of the dough had in many cases been lower than actually necessary, and determined a clear correlation between the ambient temperature and the increase in dough temperature. This correlation was translated into a table indicating how much CO2 should be injected at different combinations of ambient temperature and initial dough temperature in order to achieve the desired dough temperature along the production line. In conclusion, the hypothesis was confirmed: using temperature sensor data from the production, it was now possible to define the optimal start temperature of the dough and determine the right amount of CO2.

4.2 Cost of Investment

The initial investment included new sensors, installation, documentation and connection to a database. The direct cost of this was 30,000 DKK (approx. 4,500 USD). The return on investment is directly related to the reduction of CO2 injected. Today, the annual cost of CO2 used for regulating dough temperature is approximately 1 million DKK. However, by accepting a higher initial temperature of the dough, CO2 consumption can be reduced by 30–40%. Therefore, based solely on the CO2 reduction, this investment has a payback time of roughly one month. As an additional benefit of a higher initial dough temperature, other analyses indicate a quality increase in the final product. Moreover, these analyses also indicated fewer production stops and less product waste due to the increase in quality. As a side effect of this analysis, new potentials were identified. Rather than controlling the initial dough temperature in the processing part of the production, the question was raised of controlling the temperature earlier in the process. In other words, going from reacting to the temperature measured to predicting the amount of CO2. This prediction can be based on the ambient temperature and the temperature of the ingredients, thereby controlling the temperature based on real-time data with a direct feedback loop. However, to make this analysis, new sensors have to be added to the flour silo and the water dispenser. This is currently an ongoing project at the case company.
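The roughly one-month payback can be reproduced from the stated figures; the 30% reduction used below is the conservative end of the reported 30–40% range:

```python
# Back-of-envelope payback calculation using the figures reported above (DKK).
investment = 30_000            # sensors, installation, documentation, database
annual_co2_cost = 1_000_000    # current annual CO2 cost for dough cooling
reduction = 0.30               # conservative end of the reported 30-40% range

annual_saving = annual_co2_cost * reduction            # 300,000 DKK per year
payback_months = investment / (annual_saving / 12)     # -> 1.2 months
print(round(payback_months, 1))
```

At the upper end of the range (40%) the payback drops below one month, consistent with the figure cited in the text.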

5 Conclusion

Increasingly dynamic markets impose pressure on food and beverage manufacturers, but investments in new state-of-the-art production lines are in some cases not possible, and the industry generally lacks investment in new technology. The aim of this research was to identify how sensor data from the existing production system can be utilized to achieve a more robust production system. To address this, an iterative


process for establishing sensor data collection and analyses was introduced, and a case study was conducted within a Danish food manufacturing company. In addition, an example of applying the method in the case company was included. By introducing sensors in the production at the case company, knowledge of how temperature affects the dough was established, showing that the case company can reduce the amount of CO2 used for cooling. This part of the project had a payback time of one month and, as a potential side effect, a quality increase in the product. Moreover, further recommendations for applying sensors and data analyses, drawn from this research, are outlined below:

- Extend in-house competences: Installing the sensor setup, establishing a link to a database holding a large amount of data, and navigating this data require special competences. This research revealed a lack of in-house competences in the case company. Thus, the case company has outsourced all setup of data logging and database development. However, there is still a major learning curve in making value-adding analyses with more than 10,000,000 data points.
- Maintain the analyses: Using data as the foundation of decisions in the production calls for continuous maintenance of the analyses and assumptions, and formulation of new hypotheses.
- The importance of including operators: In order to involve the operators in linking the data to a complex production context in the case company, several interviews were performed, thereby relating the data to specific challenges in the production. This proved to be a significant change management task.
- Possibility for a fact-based discussion of the production across management: Having production data changes the focus from operator-intuition-driven production to data-driven production. Reducing the operator dependency improves product quality due to a more stable process.
Applying the method proposed in this paper in the case company resulted in more robust manufacturing processes. Being in control of manufacturing processes and increasing robustness is expected to allow the company to increase product variety and reduce time to market, providing a more agile manufacturing setup fit for serving markets requiring increased variety and more frequent product introductions.

References

1. Koren, Y., Shpitalni, M.: Design of reconfigurable manufacturing systems. J. Manuf. Syst. 29(4), 141 (2010)
2. Wiendahl, H., et al.: Changeable manufacturing - classification, design and operation. CIRP Ann. 56(2), 783–809 (2007)
3. OpenText: Enterprise information management for the food and beverage industry (2013)
4. Gargouri, E., Hammadi, S., Borne, P.: A study of scheduling problem in agro-food manufacturing systems. Math. Comput. Simul. 60(3–5), 291 (2002)
5. McIntosh, R.I., Matthews, J., Mullineux, G., Medland, A.J.: Late customisation: issues of mass customisation in the food industry. Int. J. Prod. Res. 48(6), 1557–1574 (2010)


6. International Federation of Robotics: Executive summary world robotics 2017 industrial robots. https://ifr.org/downloads/press/Executive_Summary_WR_2017_Industrial_Robots.pdf
7. Kohr, D., Budde, L., Friedli, T.: Identifying complexity drivers in discrete manufacturing and process industry. Proc. CIRP 63, 52–57 (2017)
8. Voss, C.: Case research in operations management. In: Researching Operations Management, pp. 176–209. Routledge (2010)
9. Eisenhardt, K.M., Graebner, M.E.: Theory building from cases: opportunities and challenges. Acad. Manag. J. 50(1), 25–32 (2007)
10. Bech, S., Brunoe, T.D., Larsen, J.K.: Changeability of the manufacturing systems in the food industry – a case study. Proc. CIRP 72, 641–646 (2018)
11. Schilberg, D., Meisen, T., Reinhard, R.: Virtual production intelligence - process analysis in the production planning phase. In: Frerich, S., et al. (eds.) Engineering Education 4.0, pp. 131–144. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46916-4_11
12. Trsek, H.: Isochronous Wireless Network for Real-Time Communication in Industrial Automation. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49158-4

In-Process Noise Detection System for Product Inspection by Using Acoustic Data

Woonsang Baek and Duck Young Kim(&)

Ulsan National Institute of Science and Technology, Ulsan, South Korea [email protected]

Abstract. Objective quality inspection of products in the manufacturing process is inseparable from sensor technologies. Inspection methods based on the analysis of vibration signals have advantages such as being non-destructive, accurate, and fast for in-process application. This paper presents recent developments and applications of in-process product inspection using vibration and acoustic data in various industries. In detail, inspection systems developed with accelerometers, laser vibrometers, laser ultrasonic sensors, acoustic emission sensors, and microphones are presented. An in-process noise detection system for car body part inspection is introduced as a case study.

Keywords: In-process inspection · Vibration sensors · Acoustic sensors

1 Introduction

Increasing awareness of product quality has improved the performance of in-process inspection in highly automated manufacturing processes. Since manufacturing processes vary, from molding to machining and so on, numerous quality indicators, which are measurable physical manifestations of the products, have been suggested. Inspection with as many indicators as possible should provide plenty of information about the products; however, limitations of takt time and cost hinder the use of multiple indicators. Therefore, selecting proper quality indicators according to the characteristics of the inspection process has been a critical issue. For instance, visual inspection with computer vision technology is one of the representative inspection methods, measuring dimensional error or color of the products. It has been applied not only to manufactured products but also to the inspection of raw products such as agricultural foods [1]. However, visual inspection is not the best solution when it is necessary to inspect the inside of the product. In such cases, inspection using vibration data can be an alternative solution, since it has strength in detecting changes and dynamics inside the products [2]. However, in the past decades, most inspection using vibration data has been applied to condition monitoring of industrial machinery, not to inspection of products. The reason is that the vibration sensors had to be in contact with the surface of the target. Moreover, developing apparatus to fix the vibration sensors at the surface is impractical for industrial applications [3].

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 413–420, 2019. https://doi.org/10.1007/978-3-030-29996-5_48


Fig. 1. Illustration of five inspection methods using vibration sensors

Recently, advances in sensor technologies have enabled various types of vibration sensors capable of remote measurement of vibration signals. Also, improved signal processing algorithms have increased the precision of inspection. Therefore, numerous studies have adopted vibration sensors for product inspection. Our paper reviews the state-of-the-art in-process product inspection methods using vibration sensors. In detail, inspection systems using accelerometers, laser vibrometers, laser ultrasonic sensors, acoustic emission sensors, and microphones are presented. Moreover, the solution to a practical problem, the development of an in-process inspection system for a car body part, is introduced based on these state-of-the-art technologies. The presence of irritating small noises caused by assembly defects of the car body parts is determined by a noise detection algorithm.

2 Vibration Sensors for Inspection

Product inspection methods using vibration and acoustic data assume that assembly errors and random discrepancies of components may manifest themselves in abnormal vibratory behaviors. Traditionally, vibration data has been measured by contact-type sensors; however, contact-type sensors require a sensing part that is in


contact with the products. Non-contact type sensors have recently emerged as an attractive alternative due to their simple implementation. Post-processing analyses of vibration data make use of features in the time domain, frequency domain, and cepstral domain [4]. Considering the variety of sensors, five major categories are used to classify the existing inspection methods: inspection using accelerometers; inspection using laser vibrometers; inspection using laser ultrasonics; inspection using acoustic emission sensors; and inspection using microphones. Selected representative papers are introduced below.

2.1 Inspection Using Accelerometer

An accelerometer is a seismic-type sensor that consists of a spring-damper system and a piezoelectric crystal, creating an electric signal proportional to the acceleration of the measured vibration. The sensor is mostly installed on the surface of the product being measured. Most previous studies have monitored the continuous vibration response of fixed machinery for condition monitoring. However, there are several applications that adopted the sensor for product inspection. An inspection system for microchip packages, which measures the quality of sealing, installed the accelerometer on the vibrating fixture. Signal processing techniques such as PCA (Principal Component Analysis) and statistical process control were applied to determine the status of the collected data [5]. In the case of inspecting the injection-molding process, checking for the existence of flash caused by imperfect plastic resin is the main concern. Accelerometers were installed on the fixture of the inspection system to collect the dynamic trend of the mold movement during the filling stage. Data captured in the Z direction of the accelerometer was analyzed in particular to extract the feature signal, and a logistic regression model was applied to determine the state [6]. An automotive assembly process with a robot arm and gripper applied the accelerometer to monitor the quality of the assembled product. The traditional assembly monitoring system, consisting of force and moment sensors, was supplemented by analysis of the vibration data. Various features in the frequency domain were extracted, and then a three-layered neural network was applied for automatic classification of the defects [7].

2.2 Inspection Using Laser Vibrometer

A laser vibrometer measures the vibration velocity of objects using the Doppler effect while providing high spatial resolution in a wide frequency band. In addition, the laser vibrometer has its strength in easy application, which reduces the cost of developing the inspection station. In several industrial cases, the vibrometer has proven its advantages over accelerometers, especially when contact measurement is not available or contaminates the target signals. A comparative study of electric motor inspection measured the vibration data using both an accelerometer and a laser vibrometer. The study showed that the data measured by the laser vibrometer exhibited a higher correlation with defects than that of the accelerometer. The research mentioned that the accelerometer data was


contaminated by electromagnetic noise from the surface of the motor, which distorts the frequency spectra and compromises classification through frequency analysis [8]. For extracting a quality indicator of fruits, especially firmness, which is not easily measured by visual observation, power spectrum analysis was applied to detect differences in displacements smaller than a nanometer [9]. An on-line inspection system for washing machines installed the laser vibrometer right above the position where the products are fixed, for repeatable inspection procedures. The system concentrated on detecting abnormal vibrations of the tub, usually generated by faulty operation of the motor. Determination of the state was conducted by measuring features in the frequency domain after reducing the speckle noise in the collected signal [10]. In another study, a similar inspection system was installed on a mobile robot for warehouse inspection, analyzing the power spectrum of the collected signals to detect abnormal vibration [11].

2.3 Inspection Using Laser Ultrasonics

The laser ultrasonic sensor is a non-contact type sensor that has been widely used for non-destructive inspection of structures and products. The sensor collects the vibration response of the surface where brief laser pulses are induced by a pulse generator. Then, a heterodyne, fiber-coupled laser interferometer measures the out-of-plane displacement of the surface. Since the laser ultrasonic sensor has its strength in measuring vibration signals of rough surfaces, it has been applied to various industrial inspections. Inspection of chip-scale packages is a representative case of applying laser ultrasonics to product inspection. Inspection systems were developed to detect unexpected cracks inside multilayer ceramic capacitors. Features in both the time domain and the frequency domain were extracted and compared for fault detection in real time [12]. The laser ultrasonic sensor is also assumed to have potential advantages for the inspection of composites. Inspection of products in the aeronautic industry is one of the popular applications, since the products are mostly large, complex, and assembled from various materials. An automated inspection system for a composite wing plate installed the laser ultrasonic sensor on a mobile robot for full inspection, and developed an algorithm that detects cracks by using a Laplacian filter and standing wave extraction [13]. In the automotive industry, laser ultrasonic sensors were applied to the inspection of defects generated during spot welding, friction stir welding, and painting, and to checking the strength of adhesive bonds [14]. A Synthetic Aperture Focusing Technique (SAFT) algorithm was developed and applied to the laser ultrasonic data to measure the defects of welded spots [15]. Inspection of painting focused on measuring the thickness of the painted area by obtaining a resonance spectroscopy with the laser ultrasonic sensor.
The determination of the status was conducted by comparing the amplitude spectra with a model for propagation in multilayers.

2.4 Inspection Using Acoustic Emission Sensor

The acoustic emission sensor collects the transient elastic waves generated by the rapid release of energy from a localized source or sources within a material. The sensors are widely applied in the field of condition monitoring due to their advantage of


high sensitivity to dynamic processes or changes in a material. Although the sensor has limited applicability, like other contact-type sensors, there have been studies applying it to in-process inspection of products. Recently, additive manufacturing has adopted acoustic emission sensors for measuring internal defects, rather than traditional monitoring methods such as temperature monitoring [16]. The acoustic vibration signals were analyzed by extracting various frequency-domain characteristics and applying signal processing methods such as collaborative neural networks [17]. Similarly, an inspection system for stir welding and laser welding installed the sensor on the surface of the galvanized steel to detect internal dynamics using signal processing methods [18]. In addition, an inspection system for beverage containers, which checks for the existence of unwanted materials inside the container, installed the sensor. The pulse compression method was applied for data preprocessing, and a tomographic reconstruction method was applied for defect detection [19].

2.5 Inspection Using Microphone

Similar to the acoustic emission sensor, the microphone has been applied to condition monitoring of machinery during the manufacturing process. The methods are useful in certain cases, especially when it is impossible to access the machine, since audio measurements can be performed at a distance from the machine. However, microphone-based methods have not been developed and applied as widely as the other vibration-based inspection methods. This is due to the possibility of contamination of the acoustic signal by unexpected noises such as shop-floor noise [20]. Therefore, application of microphones always requires additional apparatus for insulating against the shop-floor noise, or signal processing methods for suppressing or removing the noise. An inspection system for cracks in rice grains after the milling process installed microphones with an equalizer and PVC pipes insulating against the shop-floor noise. The microphones collect the acoustic signals generated right after the grains crash into a plate [21]. In the automotive industry, traditional inspection for irritating noises in car body parts has been done by the human ear. Recently, automatic and objective judgment of this issue has been studied with microphones [22]. The acoustic signals were collected near a fixture that vibrates the products to resemble the driving situation. The dominant sound quality metrics in a multiple linear regression equation were applied as the inspection algorithm [23].
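The time-, frequency- and cepstral-domain features referred to throughout this section can be illustrated with a minimal sketch on a synthetic signal; the 8 kHz sampling rate and the 120 Hz component are arbitrary assumptions, not values from any of the cited studies:

```python
import numpy as np

fs = 8000                                   # assumed sampling rate (Hz)
t = np.arange(fs) / fs                      # one second of samples
rng = np.random.default_rng(0)
# Synthetic vibration signal: a 120 Hz component plus measurement noise
x = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(fs)

# Time-domain feature: RMS level
rms = np.sqrt(np.mean(x ** 2))

# Frequency-domain feature: dominant frequency of the magnitude spectrum
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

# Cepstral-domain feature: real cepstrum (inverse FFT of log magnitude)
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))

print(dominant_hz)
```

In practice such features feed the classifiers mentioned above (statistical process control, logistic regression, neural networks) rather than being thresholded directly.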

3 In-Process Noise Detection System for Car Body Parts

Recently, annoying noises in car body parts have become a critical issue for the automotive industry, since they can easily degrade the emotional satisfaction of drivers. The noises are caused by many potential sources, for instance errors in assembly operations, defects in components, and dimensional control issues [22]. Acquisition of the annoying noise has been conducted with microphones [23]. However, shop-floor noises are known to easily contaminate the signals of interest, as mentioned in Sect. 2.5.


Two methods have been proposed to reduce the effect of shop-floor noises: conducting the inspection in an anechoic chamber, or applying noise reduction methods. However, an anechoic chamber is not used for full inspection in a real manufacturing process due to its high cost. The realistic solution for full inspection is assumed to be the application of noise reduction methods. However, most inspection systems for annoying noise have utilized anechoic chambers, while noise reduction methods have not been seriously studied. Therefore, the author proposed an inspection system for car body parts, as shown in Fig. 2. The system is made applicable to the manufacturing process by developing a noise reduction system with two groups of microphones [24]. A pneumatic pusher and fixtures were installed to replicate the situation of a driver pushing a car door. A car door trim is slowly pressed down by the pneumatic pusher with a pressure of 10 kgf/cm2. The two groups of microphones, a microphone array and parabolic microphones, collect the acoustic signals during the operation of the pneumatic pusher. The microphones of the microphone array are installed right above the fixture that holds the product, while the parabolic microphones are installed outside the system to concentrate on collecting the shop-floor noise. The microphones were grouped to collect real-time data of the shop-floor noise, whereas most existing noise reduction methods use trained shop-floor noise. With assumptions about the sound sources of the two groups of microphones, a spectral subtraction method and a modified Non-Negative Matrix Factorization were developed to extract the annoying noise from the contaminated signal (Fig. 2).

Fig. 2. In-process noise detection system developed in the present work
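The spectral subtraction step can be sketched as follows. This is a textbook single-frame magnitude subtraction under the stated two-microphone-group assumption, not the authors' actual implementation; all parameter values are illustrative:

```python
import numpy as np

def spectral_subtraction(mixed, noise_ref, n_fft=512, floor=0.01):
    """Subtract a reference noise magnitude spectrum from a mixed signal.

    mixed: microphone-array signal (target noise + shop-floor noise).
    noise_ref: simultaneous parabolic-microphone signal, assumed to
               capture mainly the shop-floor noise.
    """
    X = np.fft.rfft(mixed, n_fft)
    N = np.fft.rfft(noise_ref, n_fft)
    # Subtract magnitudes, keep the mixed signal's phase, apply a spectral floor
    mag = np.maximum(np.abs(X) - np.abs(N), floor * np.abs(X))
    return np.fft.irfft(mag * np.exp(1j * np.angle(X)), n_fft)

# Toy check: a 100 Hz target tone mixed with a 50 Hz "shop-floor" tone
fs = 1024
t = np.arange(512) / fs
target = np.sin(2 * np.pi * 100 * t)
noise = 0.8 * np.sin(2 * np.pi * 50 * t)
clean = spectral_subtraction(target + noise, noise)
```

A production version would operate frame by frame with windowing and overlap-add; the real-time noise reference from the parabolic microphones is what distinguishes the approach from methods relying on a pre-trained noise profile.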


4 Conclusion

This paper reviewed current quality inspection of products during the manufacturing process. Since inspection methods using the analysis of vibration signals have advantages such as being non-destructive, accurate, fast, and applicable in-process, the review focused on vibration sensors. Applications of in-process product inspection using five categories of sensors were presented: accelerometers, laser vibrometers, laser ultrasonic sensors, acoustic emission sensors, and microphones. As a case study, the in-process noise detection system for the inspection of car body parts was presented, and the configuration of the developed system was briefly introduced. The system was developed to replace human-based inspection by devising noise detection algorithms based on acoustic data. Since the shop floor of the manufacturing line generates noise that may contaminate the acoustic signals, noise reduction methods were also developed. The system installed two groups of microphones to improve the noise reduction methods.

References

1. Narendra, V., Hareesha, K.: Quality inspection and grading of agricultural and food products by computer vision - a review. Int. J. Comput. Appl. 2, 43–65 (2010)
2. Chui, Y., Barclay, D., Cooper, P.: Evaluation of wood poles using a free vibration technique. J. Test. Eval. 27, 191–195 (1999)
3. Rodriguez, R., Cristalli, C., Paone, N.: Comparative study between laser vibrometer and accelerometer measurements for mechanical fault detection of electric motors. In: Fifth International Conference on Vibration Measurements by Laser Techniques: Advances and Applications, pp. 521–530 (2002)
4. Aryan, P., Sampath, S., Sohn, H.: An overview of non-destructive testing methods for integrated circuit packaging inspection. Sensors 18, 1–27 (2018)
5. Ostyn, B., Darius, P., De Baerdemaeker, J., De Ketelaere, B.: Statistical monitoring of a sealing process by means of multivariate accelerometer data. Qual. Eng. 19, 299–310 (2007)
6. Zhang, J.Z.: Development of an in-process pokayoke system utilizing accelerometer and logistic regression modeling for monitoring injection molding flash. Int. J. Adv. Manuf. Technol. 71, 1793–1800 (2014)
7. Mechefske, C.K., Sun, Q.: Failure detection in automotive light assemblies during vibration endurance testing. Int. J. Adv. Manuf. Technol. 51, 799–810 (2010)
8. Cristalli, C., Paone, N., Rodríguez, R.: Mechanical fault detection of electric motors by laser vibrometer and accelerometer measurements. Mech. Syst. Signal Process. 20, 1350–1361 (2006)
9. Santulli, C., Jeronimidis, G.: Development of a method for nondestructive testing of fruits using scanning laser vibrometry. NDT.Net 11, 1–12 (2006)
10. Torcianti, B., Cristalli, C., Vass, J.: Non-contact measurement for mechanical fault detection in production line. In: IEEE International Symposium on Diagnostics for Electric Machines, Power Electronics and Drives, pp. 297–301. IEEE, Atlanta (2007)
11. Raffaeli, R., Cesetti, A., Angione, G., Lattanzi, L., Longhi, S.: Virtual planning for autonomous inspection of electromechanical products. Int. J. Interact. Des. Manuf. 6, 215–231 (2012)


12. Erdahl, D.S., Ume, I.C.: Online-offline laser ultrasonic quality inspection tool for multilayer ceramic capacitors - Part I. IEEE Trans. Adv. Packag. 27, 647–653 (2004)
13. Dubois, M., Drake Jr., T.E.: Evolution of industrial laser-ultrasonic systems for the inspection of composites. Nondestruct. Test. Eval. 26, 213–228 (2011)
14. Blouin, A., Kruger, S., Lévesque, D., Monchalin, J.-P.: Applications of laser-ultrasonics to the automotive industry. In: Proceedings of the 17th World Conference on Non-Destructive Testing, pp. 105–112. WCNDT, Shanghai (2008)
15. Lévesque, D., et al.: Synthetic aperture focusing technique for the ultrasonic evaluation of friction stir welds. In: Proceedings of 2018 AIP Conference, pp. 263–270. ICAMSME, Incheon (2008)
16. Lu, Q.Y., Wong, C.H.: Additive manufacturing process monitoring and control by non-destructive testing techniques: challenges and in-process monitoring. Virtual Phys. Prototyp. 13, 39–48 (2018)
17. Yoon, J., He, D., Van Hecke, B.: A PHM approach to additive manufacturing equipment health monitoring, fault diagnosis, and quality control. In: Proceedings of the Prognostics and Health Management Society Conference, pp. 1–9 (2014)
18. Gu, H., Duley, W.W.: Resonant acoustic emission during laser welding of metals. J. Phys. D Appl. Phys. 29, 550–604 (1996)
19. Ho, K., Billson, D., Hutchins, D.: Inspection of drinks cans using non-contact electromagnetic acoustic transducers. J. Food Eng. 80, 431–444 (2007)
20. Vilela, R., Metrolho, J., Cardoso, J.: Machine and industrial monitorization system by analysis of acoustic signatures. In: Proceedings of the 12th IEEE Mediterranean Electrotechnical Conference, pp. 277–279. IEEE, New York (2004)
21. Buerano, J., Zalameda, J., Ruiz, R.: Microphone system optimization for free fall impact acoustic method in detection of rice kernel damage. Comput. Electron. Agric. 85, 140–148 (2012)
22. Bernard, T., et al.: The development of a sound quality-based end-of-line inspection system for powered seat adjusters. Technical report, SAE Technical Paper 0148-7191
23. Cook, V.G.C., Ali, A.: End-of-line inspection for annoying noises in automobiles: trends and perspectives. Appl. Acoust. 73, 265–275 (2012)
24. Baek, W., Kim, D.Y.: An in-process BSR-noise detection system for car door trims. In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018. IFIP AICT, vol. 536, pp. 35–38. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99707-0_5

Knowledge Management in Design and Manufacturing

Closed-Loop Manufacturing for Aerospace Industry: An Integrated PLM-MOM Solution to Support the Wing Box Assembly Process

Melissa Demartini1(&), Federico Galluccio3, Paolo Mattis3, Islam Abusohyon1, Raffaello Lepratti2, and Flavio Tonelli1

1 DIME - Department of Mechanical Engineering, Energetics, Management and Transportation, Polytechnic School, University of Genoa, Genoa, Italy
[email protected]
2 Siemens Italy S.p.A, Via Enrico Melen 83, 16152 Genoa, Italy
3 Siemens AG, Gleiwitzerstr. 555, 90475 Nuremberg, Germany

Abstract. The aim of this research is to provide an example of the importance that integrated Product Lifecycle Management (PLM) and Manufacturing Operation Management (MOM) systems have in realizing Digital Manufacturing. The research first examines what Digital Manufacturing involves and then identifies the Digital Twin and the related Digital Thread as key elements. PLM and MOM solutions support the Digital Twin and the Digital Thread, allowing the exchange of product-related information between the digital manufacturing model and the physical manufacturing execution. A Digital Twin of a wing box and its assembly process is created in PLM by building the bill of material and bill of process. It is then shown how the MOM system facilitates the production phase by managing production operations and advanced scheduling and by supporting the execution of the processes, and how analysis of the manufacturing performance becomes possible. The result of integrating these systems is having the right information at the right place at the right time, along with the related benefits in terms of cost, time and quality. The activity has been developed at Siemens Industry Software under the European project AirGreen 2, an integrated research action of the REG IADP (Regional Innovative Aircraft Demonstration Platform), part of the Joint Technical Programme, under the steering and coordination of LEONARDO Aircraft. The AirGreen 2 project is an Innovation Action funded by the Clean Sky 2 Joint Undertaking under the European Union's Horizon 2020 research and innovation programme (Grant Agreement N°807089 REG IADP).

Keywords: Digital manufacturing · Aerospace industry · Closed loop manufacturing

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 423–430, 2019. https://doi.org/10.1007/978-3-030-29996-5_49

1 Introduction

The digital transformation has already changed the way products are designed, produced and delivered, and this has a profound impact on manufacturing companies [1, 2]. In this competitive scenario, manufacturing companies perform by using up-to-date

424

M. Demartini et al.

information to progressively remove delays to the management and execution of its critical business processes. Therefore, they need to monitor their value-creation processes controlling both the internal (e.g. scheduling flexibility) and external events (e.g. customization and volume flexibility) in order to maintain their competitive advantage [1]. In the light of the above, the success of a manufacturing company depends upon effective coordination across all stages in a product’s development, from design and production up to aftermarket services [3]. Companies should define and optimize manufacturing processes, manage data and encourage collaboration between different types of engineers by incorporating both digital and product definitions. Thus, an integrated system is necessary to manage the different production stages and comprising various business functions. To this end, the aim of this paper is to introduce an innovative system to bridge the vertical integration gap between the engineering and manufacturing domains leading to an integrated production management system, which includes Product Lifecycle Management (PLM) and Manufacturing Operation Management (MOM). Thanks to the advancement of innovative technologies, such as Digital Twin and a Digital Thread, this gap could be enclosed. Through digital twin, any entity or physical system, product or process can be represented in a simulated way to respond immediately to external changes to prevent problems solving them in real time or improving performances. The digital twin is fed by information from the digital thread [6], which is the chain of information that connects all the involved parties with data to design and produce products [7]. The term “thread” is used because it interweaves and brings together data from all stages of the product and production lifecycles [8]. 
By bringing together the digital twin and the digital thread, it is possible to create a collaborative, connected information loop, the so-called closed-loop manufacturing (CLM). CLM enables companies to synchronize and optimize production across product design, production planning, manufacturing execution, automation, and intelligence gathered from product use in the field.

1.1 Research Goals and Questions

The purpose of this research is the design, implementation and industrial evaluation of an innovative system to bridge the vertical integration gap between the engineering and manufacturing domains. A digital twin of a wing box and its assembly process is created in the PLM system by building the bill of material (BOM) and the bill of process (BOP). It is then shown how the MOM system facilitates the production phase by managing production operations and supporting the execution of the processes, leading to a CLM.

The literature documents the transition of the aerospace industry, which is moving from fixed to flexible production cells, requiring a higher level of vertical integration between enterprise systems and shop floor controls while managing a higher level of customization. Aerospace companies are deploying new digital technologies in manufacturing to speed up production and reduce costs. In essence, this study is conducted to answer the following research question:

RQ: Can a solution based only on an ERP system properly manage the BOMs of highly complex products?

Closed-Loop Manufacturing for Aerospace Industry


When products are highly complex, integration between PLM and MOM is necessary to manage BOMs that otherwise cannot be handled by the ERP alone. The BOM involves unique challenges for the management of information across business functions. It is initially created by engineering to meet engineering needs, but multiple departments rely on it and need to use the information it contains in different ways. Managing these different views, integrating the various engineering disciplines, and maintaining completeness and traceability become more difficult as product complexity increases. To answer this research question, this study provides a qualitative literature review to analyze the current scenario and then a proof of concept of an integrated PLM and MOM system to demonstrate its benefits.

This paper is organized as follows: Sect. 2 presents the literature review, Sect. 3 describes the methodology adopted for this paper, Sect. 4 presents the results, and Sect. 5 draws the conclusions.

2 Research Background

The literature review was performed by selecting papers from Scopus. The authors chose this database for its ample coverage of articles in this field [9]. The strategy was to identify articles that included "smart manufacturing", "product lifecycle" and "digital twin" as keywords in all fields. Additionally, the authors considered various synonyms of each of these terms, such as "smart factory" and "digital thread". This search technique identified 38 academic papers, which were rigorously reviewed in order to evaluate their relevance to the study. After reading all the papers, none of them was found to address the importance of using the digital twin to realize the interconnection and interaction between the PLM and MOM platforms within manufacturing enterprises. This gap shows that more work is needed in this area.

Today's manufacturing enterprises use platforms to manage product design as well as production planning and manufacturing execution. The design and implementation models are developed on a PLM platform. Modern PLM systems bring together product and process development, allowing interaction in the development process. The MOM platform, instead, covers the scheduling and execution of the work orders based on the development process. Usually, PLM and MOM platforms do not interact appropriately with each other, and therefore the world of product and process design and that of production and execution are not integrated into flexible, scalable production processes that maximize responsiveness to real-time manufacturing events. Building a CLM is the solution to this problem. In this research, in order to achieve CLM and support the journey to Industry 4.0, the PLM and MOM platforms have been made able to communicate with each other: each has access to, and understands the language of, the other.
MOM is where the virtual plan becomes physical reality, and it is also where decentralized production can be globally orchestrated. To conclude, the topic of digital manufacturing has started to be studied more deeply over the last years. Nevertheless, the fact that only a few of the 38 identified articles were relevant may indicate that the importance of CLM is not yet fully recognized in the scientific literature. This gap represents the starting point of this


paper, which tries to fill this gap by showing an application of the integration between PLM and MOM systems in the aerospace industry.

3 Methodology

A proof-of-concept software system based on the PLM and MOM platforms has been implemented. The activity has been developed at Siemens Industry Software under the European project AirGreen 2, an integrated research action of the REG IADP (Regional Innovative Aircraft Demonstration Platform) under the steering and coordination of LEONARDO Aircraft. The project aims to develop and demonstrate innovative concepts and methodologies enabling the realization of a new-generation wing. This wing will be characterized by an innovative structure, resulting from an improved life cycle design; a high level of adaptability, enabling load control and alleviation strategies and enhancing aerodynamic performance in the different flight regimes; and an innovative aerodynamic design, oriented to the preservation of natural laminar flow and to drag reduction.

Among the various work packages set up to achieve these goals, Siemens Industry Software has been involved in the one called "Innovative Wing Structure D&M (Design & Manufacturing)". Its scope is the development, verification and optimization of suitable advanced technologies and materials for the manufacturing and assembly of the wing box, aimed at reducing manufacturing costs and improving environmental aspects compared to a traditional process. In this context, Siemens and Leonardo Aircraft contribute to the development of a process for the industrial case study under the European project AirGreen 2, task 2.1.1.15, "Wing Box Assembly Manufacturing Execution System (MES) in a Manual and Automatic Assembly Environment Collecting Manufacturing Data and Performance KPIs" (Fig. 1).

Fig. 1. Architecture of the CLM


In the industrial case analyzed during this research, a wing box assembly method based on a predictive Dimensional Management (DM) strategy has been used. It involves the reverse engineering of some main wing components into a Digital Twin representative of the actual shape of the parts within a defined level of accuracy. Intermediate parts are then defined in the Digital Twin environment and transformed into real items through a dedicated manufacturing method developed by Leonardo Aircraft. The achievement of quality requirements is ensured by adopting a Verification and Validation approach during the developmental stages of the product life cycle. Customer requirements, process capability, tooling and methods are developed in concurrent engineering with the design within the Digital Twin environment, with the aim of guaranteeing customer needs expressed in terms of geometrical requirements at each product level (key characteristics, KCs), following a top-down approach. Both RSS stack-up evaluations and Monte Carlo simulations are used to support variation propagation analysis and tolerance allocation in 1D and 3D. The Build-Up strategy, achieved through extensive use of DM, brings a set of advantages: (i) readiness for the digital factory revolution, since all feedback from the shop floor is fully implemented; (ii) lower fixed costs and easier design changes; (iii) reduced labor costs; and (iv) less assembly variability. Despite all the benefits of Build-Up, there are several risks associated with it. Primarily, it requires much more attention to issues of variation. For this reason, greater attention must be paid to enforcing Statistical Process Control strategies, including proper feedback and reporting into the Digital Twin through the PLM applications. Therefore, in addition to the process based on the Build-Up strategy, an alternative assembly method has been developed in order to mitigate potential failures discovered or predicted.
It is a back-up solution, based on the analysis of failure scenarios and tolerances (defective parts). Both processes have been modeled on the PLM platform, but only the first one has been transferred to the MES.
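The RSS stack-up and Monte Carlo analyses mentioned above can be illustrated with a minimal 1D tolerance-chain sketch. The dimensions and tolerances below are invented for illustration and do not come from the AirGreen 2 case:

```python
import math
import random

# Hypothetical 1D tolerance chain: nominal lengths (mm) and symmetric
# tolerances (±t, mm) of parts stacked along one assembly direction.
chain = [(120.0, 0.10), (80.0, 0.05), (200.0, 0.15)]

# Worst-case stack-up: tolerances simply add.
worst_case = sum(t for _, t in chain)

# RSS (root-sum-square) stack-up: statistical combination assuming
# independent, centered variations.
rss = math.sqrt(sum(t ** 2 for _, t in chain))

# Monte Carlo: sample each dimension (tolerance interpreted as 3-sigma
# of a normal distribution) and observe the assembly-length spread.
random.seed(42)
nominal = sum(n for n, _ in chain)
samples = [
    sum(random.gauss(n, t / 3.0) for n, t in chain)
    for _ in range(100_000)
]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

print(f"worst case: ±{worst_case:.3f} mm")
print(f"RSS:        ±{rss:.3f} mm")
print(f"Monte Carlo 3-sigma: ±{3 * std:.3f} mm around {mean:.1f} mm")
```

As expected, the RSS result is tighter than the worst case, and the Monte Carlo 3-sigma estimate converges to the RSS value when the inputs are independent and normally distributed; Monte Carlo becomes more informative when distributions are skewed or the stack-up is non-linear (3D).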

4 Results and Discussion

In the Siemens PLM solution (Teamcenter), the BOM of the wing box as well as the BOPs related both to the machining process and to the wing box assembly have been built. They are, respectively, the virtual twin of the product and the virtual twin of the processes, which are maintained in one shared manufacturing master data model and authored in one environment for production planning and production execution. The BOPs have been built by creating the process structure and creating operations. The link between the product structure and the process is formed by associating the end item of the BOM as the target of the BOP structure. Parts from the BOM are then assigned to the operations as "consumed parts". The BOPs also define the data that must be collected during each operation and the electronic work instructions (EWI) that allow the communication of all the manufacturing process information between engineers and operators.

The two assembly methods have been implemented in the PLM system using a 150% BOP. A 150% BOP is associated with a 150% BOM; these are just other names for variant structures or, more specifically, configurable structures. Configurable BOMs and the related BOPs have one or more optional or alternative components and/or optional or alternative operations which, when properly set, define a specific variation of a product

428

M. Demartini et al.

and its production. A "configurable BOM", also called a "150% BOM" or "variant BOM", is used by manufacturers when dealing with highly complex products, such as in the aerospace domain, and when there is a need to balance configurability, time to manufacture, and keeping costs under a threshold. A configurable bill of material contains all the parts that could be required to manufacture the material to a customer's specific requirements. A select-condition is then applied to this predefined structure, which determines whether a part is to be included in the final BOM or not.

Modern PLM tools enable manufacturing companies to better adapt to engineering practices and facilitate efficient management of these configurable BOMs using a unit effectivity method. Effectivity might be defined as a date, a serial number or, in a more complex way, as a "unit". The effectivity concept originally comes from the ERP environment; the most typical example is "date effectivity", which defines the period during which a particular part or item applies. PLM was originally created without effectivity in mind: most engineering systems were "revision" oriented rather than "effectivity" oriented, meaning that PLM is used to manage different revisions of objects, while it is the ERP system that manages the BOMs and their variations.

In this case, a 150% BOM has not been built; instead, the same BOM, without any variants, can be composed of different parts depending on the adopted assembly method. Custom parts based on the Digital Twin configuration will therefore be different from those used in the other assembly method, which involves traditionally fettled and drilled parts. Once the BOMs, the BOPs and the related resources have been verified, the components of the 150% structure (master data such as material, process, operation, etc.) have been released from the PLM to the Siemens MES solution. Structures, relations and configurations are not relevant in this step.
Instead, this step only concerns the building blocks of the unconfigured structures. Then, assuming that a request to produce a wing box using the DA assembly method comes from an ERP system, work orders have been created in the MES specifying the needed effectivity, and the download of the configured production structures (100% BOPs) from the PLM is automatically triggered. Therefore, in the manufacturing execution system, several work orders related to the released processes have been executed. During their execution, materials are displayed as information about what is "to be consumed" for each single operation or step, with the aim of guiding the operators in consuming the right parts at the right places (operation stations) at the right time (when the product is in the operation station), and of collecting in real time the information about their consumption. Data collection forms containing all the data and measures to be collected are displayed in the operator environment, as are the supporting documents and the work instructions needed to perform the operations, so that in a single place the operator can find what to do and what to measure. Both the work instructions and the data collection forms come directly from the PLM model as part of the BOP: this means the MES shows, operation by operation, the document version according to the most up-to-date BOP.

Once the processes have been executed in the MES, the manufacturing operations performance is analyzed. The reporting and analysis layer has the scope of providing transparency at an information level higher than the operational one, which is supported by the as-built records and the genealogy of the product and work orders inside the MES. It is the layer where work orders can be compared and relevant KPIs and information about products, materials, defects and equipment can be obtained outside the boundaries of a specific work order. It supports decision makers in the analysis of


the plant information, giving them the opportunity to understand how to eventually modify and improve the product or its related process. The digital model of the product and process has been continuously compared with the actual production in order to reduce the differences between as-planned and as-built. A PLM-derived and MES-driven BOP can ensure that every material is used and every operation is performed as intended. Requirements established in the earliest phases of engineering become features of the aircraft design, are dimensioned in the 3D models, published in the BOP, sent to operators in EWIs, measured, and finally stored in the as-built record thanks to the digital thread. The as-built record can be compared to the as-ordered configuration to highlight deviations on the shop floor versus customer expectations. Once published back to the Digital Twin, it can be matched against the as-engineered definition. The Build-Up strategy analyzed in this research is possible because it integrates the information available from production databases and methods into the design. This means that time is not spent reworking components within the final assembly, where a high level of capital expenditure is tied up in the operations and a bottleneck to production exists.

Finally, to answer the RQ: can an ERP system alone properly manage the BOMs of highly complex products? When the products are highly complex, the integration between PLM and MOM is necessary in order to manage their BOMs, which otherwise cannot be handled by the ERP alone. PLM allows managing and maintaining both the engineering and the manufacturing aspects of a BOM in a single context during each stage of the product development process. The BOM is the result of a great deal of design activity, and if an ERP system, coming in at the end of the process, holds it, we might fail to represent the real way in which the decisions, the logic and the engineering rules are built into the product.
ERP solutions generally do not optimize, and do not offer development tools for defining, the manufacturing BOM; they just focus on executing a defined BOM. Modern PLM systems, instead, have the tools to change, analyze and optimize the BOM, and it therefore makes sense to manage it through these solutions. To conclude, the problem of "siloed data" can be largely eliminated, because there is now a bi-directional level of communication between the ERP, PLM, business systems and the shop floor, allowing greater and immediate control of the business on the production floor.
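As a minimal illustration of the configurable-BOM mechanism discussed in this section, the following sketch filters a hypothetical 150% BOM into a 100% BOM by evaluating select-conditions against a chosen assembly method and a date effectivity. All part numbers and options are invented; this is not the Teamcenter data model or API:

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

@dataclass
class BomLine:
    """One line of the 150% BOM: a part plus the conditions under which it applies."""
    part: str
    qty: int
    # Select-condition: include the part only for matching configurations.
    condition: Callable[[dict], bool]
    # Date effectivity window (None = always effective).
    effective_from: Optional[date] = None
    effective_to: Optional[date] = None

def configure(bom_150, config, on_date):
    """Resolve a 150% BOM into a 100% BOM for one configuration and date."""
    result = []
    for line in bom_150:
        if line.effective_from and on_date < line.effective_from:
            continue
        if line.effective_to and on_date > line.effective_to:
            continue
        if line.condition(config):
            result.append((line.part, line.qty))
    return result

# Hypothetical wing-box 150% BOM covering both assembly methods.
always = lambda cfg: True
bom_150 = [
    BomLine("WB-SPAR-001", 2, always),
    BomLine("WB-RIB-010", 12, always),
    # Custom parts used only by the predictive DM (Build-Up) method.
    BomLine("WB-CUSTOM-SHIM", 24, lambda cfg: cfg["method"] == "build-up"),
    # Traditionally fettled/drilled parts for the back-up method.
    BomLine("WB-FETTLED-PANEL", 4, lambda cfg: cfg["method"] == "back-up"),
    # A redesigned bracket effective only from 2019 onwards.
    BomLine("WB-BRACKET-R2", 8, always, effective_from=date(2019, 1, 1)),
]

bom_100 = configure(bom_150, {"method": "build-up"}, date(2019, 6, 1))
print(bom_100)
```

The same structure serves both assembly methods: changing `config["method"]` swaps the method-specific parts, and the effectivity window selects the correct revision for the requested date, mirroring how a configured 100% BOP is derived from the released 150% master data.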

5 Conclusion

Among all industries, the aerospace sector has been identified as an excellent case to show the benefits that the Digital Twin, the Digital Thread and CLM, managed through PLM and MOM solutions, can bring. As demand for aircraft continues to grow, the aerospace industry has one of its largest opportunities to drive down existing backlogs and increase competitiveness by integrating the world of product and process design and that of production and execution into one common manufacturing model. Aerospace products and systems will be built in a far more complex manner, and newer technologies and processes require new methods of manufacturing. In order to face these challenges, aerospace manufacturers should introduce integrated PLM and MOM solutions into their manufacturing processes and thus obtain optimized order


management, automated advanced planning and scheduling for the coordination of supply chain flows and the optimization of material synchronization, and all the benefits deriving from a CLM. In this regard, using the combined power of PLM and MOM technology is a key part of a company's digitalization process. To drive innovation, it is essential to have the right technology in place in order to reduce development time and deliver high-quality solutions, with the ability to adapt easily to changes at every stage of the process.

References

1. Burger, N., Demartini, M., Tonelli, F., Bodendorf, F., Testa, C.: Investigating flexibility as a performance dimension of a manufacturing value modeling methodology (MVMM): a framework for identifying flexibility types in manufacturing systems. Procedia CIRP 63, 33–38 (2017)
2. Diana, C., Ioan, P.: Industrie 4.0 by Siemens: steps made next. J. Cases Inf. Technol. (2018)
3. Fei, T., Jiangfeng, C., Qinglin, Q., Meng, Z.: Digital twin-driven product design, manufacturing and service with big data. Int. J. Adv. Manuf. Technol. (2017)
4. Fei, T., Meng, Z.: Digital twin shop-floor: a new shop-floor paradigm towards smart manufacturing. IEEE Access 5, 20418–20427 (2017)
5. Giuseppe, A., Marco, S., Paolo, P.: A networked production system to implement virtual enterprise and product lifecycle information loops. IFAC-PapersOnLine 50, 7964–7969 (2017)
6. Demartini, M., Tonelli, F., Damiani, L., Revetria, R., Cassettari, L.: Digitalization of manufacturing execution systems: the core technology for realizing future smart factories. In: Proceedings of the Summer School Francesco Turco, September 2017, pp. 326–333 (2017)
7. Mike, P., Andreas, A., Donald, F., Victor, M., Marcus, H.: Smart connected digital factories: unleashing the power of Industry 4.0. In: Cloud Computing and Services Science, Selected Papers (2018)
8. Haijun, Z., Guohui, Z., Qiong, Y.: Digital twin-driven cyber physical production systems towards smart shop-floor. J. Ambient Intell. Hum. Comput. (2018)
9. Demartini, M., Tonelli, F.: Quality management in the Industry 4.0 era. In: Proceedings of the Summer School Francesco Turco, September 2018, pp. 8–14 (2018)

Modeling Manual Assembly System to Derive Best Practice from Actual Data

Susann Kärcher1, David Görzig2 and Thomas Bauernhansl1,2

1 Fraunhofer IPA, Nobelstrasse 12, 70569 Stuttgart, Germany
[email protected]
2 IFF University of Stuttgart, Nobelstrasse 12, 70569 Stuttgart, Germany

Abstract. In manual assembly systems, there is often little transparency and great potential for optimization, especially in assembly systems with small batch sizes. In this paper, a model is developed that supports an approach to automated assembly optimization. For this optimization, actual data is collected in manual assemblies. Based on the data, the optimized assembly sequence is derived as a best practice. Best practice describes a combination of assembly processes performed by the workers during the data collection. The model shows the relationships and the dependencies in the assembly system and allows it to be improved. First, the considered assembly system is defined as a socio-technical system and general modeling principles are presented. After presenting the benchmark approach to derive the best practice, the requirements for the model are identified. Then, the model is developed in four steps: the system boundary is defined, the features are described, the model is formalized, and it is validated. Finally, the model is applied and tested in an example project and its purposefulness is shown.

Keywords: Assembly · Model · Best practice

1 Introduction

Nowadays, the planning of assembly still requires great effort [1]. Therefore, especially with small lot sizes, assemblies are often only roughly planned and lack transparency for planners and workers, so there is still great potential for optimization [2]. In order to reduce the effort, the aim is to plan assembly automatically. One approach towards automated assembly planning is to collect actual data, analyze it and then derive an improved assembly sequence based on best practice. Best practice describes a combination of assembly processes performed by the workers during the data collection [2].

The aim of this paper is to develop a model supporting such an approach. The model should represent the manual assembly system sufficiently well while remaining pragmatic. Furthermore, it should allow data from assembly processes to be recorded and analyzed. The model allows manual assembly systems to be improved by deriving a best practice observed during the analysis.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 431–438, 2019. https://doi.org/10.1007/978-3-030-29996-5_50

432

S. Kärcher et al.

2 Model of Assembly Systems

2.1 Manual Assembly System as a Socio-Technical System

A system is a set of interrelated objects that are seen as a whole in a certain context and as separate from their surroundings [3]. Beer [4] equates the term 'system' with the word connectivity to emphasize that the parts of a system relate to each other. Hence, no part is independent of the other parts, and the behavior of the whole is influenced by the interaction of all parts [5]. Systems consist of individual elements that each have functions and properties. Relationships, for example information flows, material flows, etc., connect the individual elements with each other [6]. A system is delimited from its environment by a system boundary, depending on the purpose and the problem [7].

Socio-technical systems generally consist of the components humans, resources, and tasks and objectives. They are particularly complex, since these components can have various characteristics [7]. Social and technical systems constitute a unit in the form of a work or action system [8]. Trist et al. emphasize that it is necessary to optimize the social and the technical system in combination [9]. A company contains technical systems (e.g. tools, machines) and social systems (e.g. people) which generate value through the production of products; work connects the social and the technical system [10]. Thus, work systems are socio-technical systems [11]. Assembly, as a part of production, can also be described and analyzed as a system [12]. As a work system, a manual assembly system is therefore also a socio-technical system.

2.2 Model

A model is the image of a section of reality [13]. Models represent a process or a system sufficiently accurately [14] and always represent a 'constructed reality' [15]. A model is a simplification or abstraction of reality and can never illustrate it in all its aspects; the purpose of the model is decisive [6]. There is always a necessity to use the model or representative instead of the original [16], for example if a reduction, enlargement or simplification is necessary to illustrate the original, or if the original is reduced to basic contexts which explain or predict its behavior [17]. Reality thus becomes easier to understand and manage [18]. Stachowiak [17] summarizes three general characteristics of the model concept: the mapping characteristic (the model corresponds to a representation of the original), the shortening characteristic (the model records only the characteristics relevant to the creator and user), and the pragmatic characteristic (the model represents the original only for certain subjects, within certain time intervals, and limited to certain operations).

3 Modeling Manual Assembly System in Context of Deriving Best Practice from Actual Data

The model is intended to support an approach for optimizing manual assembly systems by deriving a best practice based on recorded data. The idea is not to search for the theoretical optimum, but to derive the best solution from all the solution strategies to assemble the


product executed by the workers. Thus, the optimization is based on actual data instead of plan data [2]. The presented approach benchmarks different solution strategies and adapts the four steps of Watson's benchmarking approach [20] to manual assembly, which are summarized in Fig. 1.

Fig. 1. Approach to apply benchmark in manual assembly (own figure based on [2]).

In the 'planning' phase, the goal is defined and the model is created or adapted [2]. In the 'data gathering' phase, actual data is collected by a system in which sensors are installed on components, tools and equipment to automatically identify process steps and record process times [19]. In particular, acceleration sensors, magnetometers and gyroscopes are used for this purpose. Data from video and an app is recorded as an auxiliary source until the system recognizes all processes reliably [2]. In the 'data analysis' phase, the best practice has to be found. For this purpose, a digraph is created in which each solution strategy to assemble the product is represented as a path. If workers execute the processes in the same order faster, the required assembly times are updated. Finally, the shortest path has to be found [2]. The fourth step is to introduce the improvements [2]. This paper focuses on the model development during the 'planning' phase.
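The 'data analysis' step can be sketched as follows. This is a minimal illustration with invented process names and times; in the real approach, nodes, edges and durations are derived from the sensor data:

```python
import heapq

# Digraph of observed assembly states: each edge is an assembly process
# labelled with the fastest time (seconds) observed for it so far.
# Every "start" -> ... -> "done" path is one solution strategy
# executed by the workers.
graph = {
    "start": {"base_mounted": 42.0, "cover_prepared": 30.0},
    "base_mounted": {"screwed": 55.0},
    "cover_prepared": {"screwed": 75.0},
    "screwed": {"done": 20.0},
}

def update_time(graph, src, dst, observed):
    """Keep only the fastest observed execution of a process (edge)."""
    if observed < graph[src].get(dst, float("inf")):
        graph[src][dst] = observed

def best_practice(graph, start="start", goal="done"):
    """Dijkstra's algorithm: the shortest path is the fastest sequence."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (time + t, nxt, path + [nxt]))
    return None

# A worker executes "cover_prepared -> screwed" faster than before,
# so the edge time is updated and the best practice may change:
update_time(graph, "cover_prepared", "screwed", 60.0)
print(best_practice(graph))
```

With the updated edge, the path via 'cover_prepared' (30 + 60 + 20 = 110 s) beats the path via 'base_mounted' (42 + 55 + 20 = 117 s), so it becomes the derived best practice.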

3.1 Derivation of Formal and Content-Related Requirements of the Model

A model is always required to meet both formal and content-related requirements. Patzak [7] derives formal characteristics of a good model: it should be empirically and formally correct, functional, and manageable with low effort in creation and application. Furthermore, the model should meet the following content requirements. As described in Sect. 2.2, the purpose of a model is crucial. The planning aim is to record the assembly processes of different workers and to derive the best solution strategy to assemble a product from the different solution strategies identified in the analysis. The target value is to minimize time; consequently, the best assembly sequence is the sequence that results in the shortest assembly time [2]. As derived in Sect. 2.1, a manual assembly system is a complex socio-technical system. To analyze and optimize it (e.g. the assembly sequence) in an


automated way, a model is needed that can handle this complex socio-technical system. Already existing models do not serve this purpose.

3.2 Modeling

Graphical modeling is selected for a transparent representation and easier understanding of the system parts and their relations [21]. Specker [22] distinguishes four design aspects in the analysis and modeling of complex systems: the process view (logical and temporal sequence of operations), the function view (similarity of elementary functions), the object view (processing elements) and the task view (personal assignment of operations). The elements and their connections are in the focus of this model, which is why the object view is chosen. Wiendahl [23] summarizes four steps for model creation – system delimitation, feature description, model formalization, model validation – on the basis of which the model is developed.

System Delimitation. In this step, the facts and the purpose are defined precisely. According to systems engineering, the problem should be detailed from rough to fine [6, 23]. The pragmatic and the shortening characteristics describe that a model represents the system for a certain purpose and records only the elements relevant to the creator [17] (see also Sect. 2.2). Figure 2 shows the manual assembly system considered in this paper.

Fig. 2. Analysis of the manual assembly system considered in the model.

The system consists of workers, work stations, products, assembly orders and assembly processes [2]. Data is collected over time from different workers who assemble different products at different work stations. The assembly system is a subsystem of the production system. Inputs of the system are information, assembly orders and parts; output are the assembled products. Preliminary and subsequent areas, such as parts production and logistics, are outside the system boundaries.

Feature Description. In the following, the main features of the model are described:

• Product: For the optimization, it is important to clearly identify the product in order to relate the assembly process steps and sequence to it. A product is a variant of a specific product type and may have options.

Modeling Manual Assembly System

435

• Work station: The work station influences the required assembly time, e.g. due to different material supply, which results in different gripping areas. When gathering data, it must therefore be recorded at which work station a product was assembled.
• Process: An assembly consists of individual assembly processes such as ‘screwing base plate’. The assembly sequence is described by specifying the predecessor and successor for each assembly process.
• Worker: The decisive factor in this socio-technical system is the worker. Depending on previous experience, competence, etc., she or he can execute assembly tasks in different ways and use different tools and devices. For pragmatic reasons, other influencing factors of the socio-technical system (e.g. motivation and daily form of the worker, other environmental conditions) are not considered in this model.
• Assembly order: The input of the assembly system and trigger of the assembly processes are the assembly orders.
• Optimization: The optimization derives the best practice to assemble the product.

Model Formalization. Unified Modeling Language (UML) is selected as the modeling language because it is established as a standard. A class diagram provides an overview of the code structure and its internal relationships [21]. Figure 3 shows the formalized model. The features described above are represented in classes, which are specified in the following: The class ‘Worker’ includes the attribute ‘worker_ID’. The worker’s name is not saved in this context. The method ‘worker_experience()’ counts how often the worker has already assembled a product, separated by type, variant and options. The method ‘worker_performance()’ calculates how fast the worker carries out the processes compared to the average. The results are not used to evaluate the workers themselves, but to evaluate her or his solution strategies and to identify outliers in the data.
An inexperienced worker usually cannot carry out an assembly significantly faster than an experienced worker; a faster assembly by an inexperienced worker may indicate a loss of quality. The class ‘WorkStation’ defines the work station via ‘work_station_ID’. ‘work_station_performance()’ establishes connections between the work station and the process times (e.g. the material supply at work station 1 is better than at work station 3). The class ‘Product’ describes the product uniquely by the attributes ‘product_name’, ‘product_ID’, ‘product_type’, ‘product_variant’ and ‘product_options’. The method ‘product_total_quantity()’ sums up the number of assembled products per type, variant and option. An object of the class ‘AssemblyOrder’ is uniquely identified by ‘assembly_order_ID’ and refers to products and quantity. Furthermore, it is assigned to at least one worker and one possible work station. The class ‘Process’ gets the actual data from the class ‘ActualData’ and is described by ‘process_ID’ and ‘process_name’. Moreover, it belongs to a ‘process_category’ and has a predecessor and a successor. It is always assigned to a specific product, worker, work station and assembly order (‘process_relations’). The process duration is calculated on the basis of the start and end dates (‘process_duration()’). ‘process_def()’ identifies the processes and puts them into context.
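A minimal sketch of two of the classes described above, written as Python dataclasses. Attribute and method names follow the paper's UML model; the internal logic and the data layout of the completed-assembly log are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Worker:
    """'Worker' class: identified only by ID; the worker's name is not saved."""
    worker_id: str
    # Hypothetical log of completed assemblies as (type, variant, options) tuples.
    completed: List[Tuple[str, str, str]] = field(default_factory=list)

    def worker_experience(self, ptype: str, variant: str, options: str) -> int:
        # Counts how often this worker has assembled the given configuration.
        return self.completed.count((ptype, variant, options))

@dataclass
class Process:
    """'Process' class: duration derived from start and end timestamps."""
    process_id: str
    process_name: str
    start: float
    end: float

    def process_duration(self) -> float:
        return self.end - self.start
```

The remaining classes (‘WorkStation’, ‘Product’, ‘AssemblyOrder’) would follow the same pattern, with methods such as ‘product_total_quantity()’ aggregating over the recorded instances.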

436

S. Kärcher et al.

Fig. 3. Model.

The class ‘OptimizationBenchmark’ runs the optimization methods and stores the results in the ‘OptimizationResult’ class. The method ‘digraph’ creates the digraph, checks it (e.g. for outliers or cycles) and finds the shortest path (see [2] for further information). In a manual assembly, often several products are assembled, and sufficient data is not collected for all products during data collection. The ‘times_other_products()’ method can fill this gap by deriving times for other variants. ‘same_process()’ looks for identical processes. Further optimizations are performed in ‘other_optimization()’.

Model Validation. In this step, it must be checked whether the model represents the system sufficiently well (mapping characteristic). In the literature, iterative modeling is frequently emphasized. Dörner [13], for example, recommends starting with a first draft of the model and improving it step by step (successive approximation).
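The shortest-path idea behind ‘digraph’ can be illustrated with a plain Dijkstra search over recorded process times. The graph layout and the function name below are assumptions for illustration, not the implementation from [2]: nodes are assembly states, edges carry observed process durations, and the shortest path yields the fastest observed (best-practice) sequence.

```python
import heapq

def shortest_assembly_path(graph, source, target):
    """Dijkstra over a digraph of assembly processes.

    graph: {node: [(successor, duration), ...]} with observed process times
    as edge weights. Returns (path, total_duration).
    """
    dist = {source: 0.0}
    prev = {}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for succ, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(succ, float("inf")):
                dist[succ] = nd
                prev[succ] = node
                heapq.heappush(queue, (nd, succ))
    # Reconstruct the best-practice sequence.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]
```

In practice the edge weights would come from ‘process_duration()’ values aggregated across workers, after outliers have been filtered out.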


The model has already been used in a practical example and has been improved further. The example project was the assembly of a rear axle. Three workers assembled the product three times each, and the process times were recorded (see [2] for further details concerning the project). All data was collected in a structured manner and initial optimization was made possible. Figure 4 shows exemplary instances.

Fig. 4. Exemplary instances.

4 Conclusion and Outlook

In this paper, a model is presented that supports the optimization of manual assemblies. Actual data is collected and, based on the data, the optimized assembly sequence is derived by developing a best practice. The joint mapping of the technical and social systems enables their joint optimization in a socio-technical system. The model was developed in four steps: system delimitation, feature description, model formalization and model validation. The application in the example project shows that the model allows the data to be structured in a meaningful way and is limited to the most important elements. In future work, the model will be applied to several industry projects and iteratively developed further. Moreover, the optimization classes will be refined.


References
1. Lotter, B., Wiendahl, H.-P.: Montage in der Industriellen Produktion, 2nd edn. Springer, Berlin (2012). https://doi.org/10.1007/3-540-36669-5
2. Kärcher, S., Bauernhansl, T.: Approach to generate optimized assembly sequences from sensor data. Procedia CIRP 81, 276–281 (2019)
3. DIN EN ISO 10209: Technische Produktdokumentation – Vokabular – Begriffe für technische Zeichnungen, Produktdefinition und verwandte Dokumentation (2012)
4. Beer, S.: Kybernetik und Management. Fischer Verlag, Frankfurt am Main (1962)
5. Ulrich, H., Probst, G.: Anleitung zum ganzheitlichen Denken und Handeln: Ein Brevier für Führungskräfte, 4th edn. Paul Haupt, Bern, Stuttgart, Wien (1995)
6. Haberfellner, R., de Weck, O., Fricke, E., Vössner, S.: Systems Engineering: Grundlagen und Anwendung. Orell Füssli, Zürich (2015)
7. Patzak, G.: Systemtechnik – Planung komplexer innovativer Systeme, Grundlagen, Methoden, Techniken. Springer, Berlin (1982). https://doi.org/10.1007/978-3-642-81893-6
8. Ropohl, G.: Allgemeine Technologie: Eine Systemtheorie der Technik, 3rd edn. KIT Scientific Publishing, Karlsruhe (2009)
9. Trist, E.L., Bamforth, K.W.: Some social and psychological consequences of the Longwall method of coal-getting. Hum. Relat. 4(1), 3–38 (1951)
10. Westkämper, E.: Einführung in die Organisation der Produktion. Springer, Berlin (2006). https://doi.org/10.1007/3-540-30764-8
11. Ulich, E.: CIM – eine integrative Gestaltungsaufgabe im Spannungsfeld von Mensch, Technik und Organisation. In: Cyranek, G., Ulich, E. (eds.) CIM – Herausforderung an Mensch, Technik, Organisation. vdf Verlag der Fachvereine, B.G. Teubner, Zürich, Stuttgart (1993)
12. Warnecke, H.-J., Löhr, H.-G.: Die Montage als Teil des Produktionssystems. In: Fachtagung Montage ‘73: 3. Arbeitstagung des IPA Stuttgart, Vortrag Nr. 1, pp. 1–24 (1973)
13. Dörner, D.: Modellbildung und Simulation. In: Roth, E., Holling, H. (eds.) Sozialwissenschaftliche Methoden: Lehr- und Handbuch für Forschung und Praxis, pp. 327–340. Oldenbourg Verlag, München, Wien (1999)
14. DIN IEC 60050-351: Internationales Elektrotechnisches Wörterbuch, Teil 351: Leittechnik (2014)
15. Stachowiak, H.: Modelle. Konstruktion der Wirklichkeit. Wilhelm Fink, München (1983)
16. Wüstneck, K.D.: Zur philosophischen Verallgemeinerung und Bestimmung des Modellbegriffs. Deutsche Zeitschrift für Philosophie 11(12), 1522 (1963)
17. Stachowiak, H.: Allgemeine Modelltheorie. Springer, Wien (1973)
18. Bandow, G., Holzmüller, H.H.: Das ist gar kein Modell! Unterschiedliche Modelle und Modellierungen in Betriebswirtschaftslehre und Ingenieurwissenschaften. Gabler, Wiesbaden (2010)
19. Kärcher, S., et al.: Sensor-driven analysis of manual assembly systems. Procedia CIRP 72, 1142–1147 (2018)
20. Watson, G.H.: Benchmarking. Vom Besten lernen. Verlag Moderne Industrie, Landsberg/Lech (1993)
21. Rumpe, B.: Modellierung mit UML. Xpert.press, Springer, Berlin (2011)
22. Specker, A.: Modellierung von Informationssystemen. Ein methodischer Leitfaden zur Projektabwicklung. vdf Hochschulverlag AG, Zürich (2015)
23. Wiendahl, H.H.: Stolpersteine der PPS. Ein sozio-technischer Ansatz für das industrielle Auftragsmanagement. In: Nyhuis, P. (ed.) Beiträge zu einer Theorie der Logistik, pp. 275–304. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-75642-2_13

Application of a Controlled Assembly Vocabulary: Modeling a Home Appliance Transfer Line

Chase Wentzky2, Chelsea Spence1, Apurva Patel2, Nicole Zero2, Adarsh Jeyes1, Alexis Fiore1, Joshua D. Summers2(✉), Mary E. Kurz1, and Kevin M. Taaffe1

1 Department of Industrial Engineering, Clemson University, Clemson, SC 29634-092, USA
2 Department of Mechanical Engineering, Clemson University, Clemson, SC 29634-092, USA
[email protected]

Abstract. A controlled vocabulary list that was originally developed for the automotive assembly environment was modified for home appliance assembly in this study. After surveying over 700 assembly tasks with the original vocabulary, additions were made to the vocabulary list as necessary. The vocabulary allowed for the transformation of work instructions in approximately 90% of cases, with the most discrepancies occurring during the inspection phase of the transfer line. The modified vocabulary list was then tested for coder reliability to ensure broad usability and was found to have Cohen’s kappa values of 0.671 < κ < 0.848 between coders and kappa values of 0.731 < κ < 0.875 within coders over time. Using this analysis, it was demonstrated that this original automotive vocabulary could be applied to the non-automotive context with a high degree of reliability and consistency.

Keywords: Transfer lines · Work instructions · Controlled vocabulary · Line balancing · Standard vocabulary

1 Why Implement a Controlled Vocabulary?

Manufacturing and assembly of products is often a collaborative effort between human associates and machines. With increasing demands on productivity and efficiency from human workers, it is especially important to ensure that the work instructions provided to them are accurate and descriptive. This includes both the content and delivery of work instructions. A controlled vocabulary is one such approach within manufacturing which focuses on the content of text-based instructions. It is meant to capture a particular implementation process, to provide clear instructions, and to prevent error. In this context, a set of verbs describes the motions performed to do a task, which can then be used to create a standard design for dictating process descriptions [1–3]. Prior to this, written documents describing assembly tasks lacked uniformity and were ambiguous and open to interpretation. This led to a variety of problems ranging from manufacturing defects to safety concerns.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 439–446, 2019. https://doi.org/10.1007/978-3-030-29996-5_51

440

C. Wentzky et al.

To create consistency across technologies and products, controlled vocabulary has been developed, which can help mitigate the differences in interpretation. Ford Body and Assembly Operations was one of the first firms to create a language to standardize their work instructions protocol for assembly tasks [1]. From there, they were quickly able to see the benefits, especially with respect to the assembly planners. Not only was the writing of assembly instructions standardized, but ease of machine translation was also apparent. In combination, this helped the planners to predict assembly times based on the verbs used in the work instructions [3]. A controlled vocabulary was also found to be helpful in creating decision support systems and in estimating assembly time [4]. In the Toyota production system, standardization has been found to be one of the bedrocks of kaizen, and the continuous improvement benefits of standardization have been well-documented [5, 6].

The standardization of vocabulary in instruction manuals or process sheets has been shown to provide great value [1, 7]. As previously mentioned, companies like Ford and Toyota have implemented standard language and found an increase in comprehension by individuals throughout the whole organization. Upon evaluating the standardization, Ford found that clear vocabulary helped limit the number of errors encountered in their processes. Additionally, with technology on the rise within manufacturing systems, a standard language allows for not only human understanding, but machine and software understanding as well. In turn, this provides a more cohesive communicative understanding between human and machine. Implementing a standard language helps decrease the complexity of the instructions, which makes the manuals clearer and more concise. As a result, this provides more accurate data on the given tasks and a thorough understanding of the process [1, 6].

2 Controlled Vocabulary Context

In previous work, a controlled vocabulary was proposed to analyze several aspects of an automotive assembly line [2]. In this paper, this controlled vocabulary is applied to a home appliance transfer line to understand the general applicability of the proposed vocabulary and identify any potential improvements. This standard vocabulary provided in previous work was used, among other things, to better understand the appropriate level of automation for a given assembly task. Although the aims of the previous study and this current research differ in some respects, the successes of the first study would suggest potential benefits in applying this vocabulary to a home appliance transfer line. This work explored what modifications, if any, would need to be made to apply the automotive standard vocabulary to a home appliance assembly line. Table 1 shows the standard vocabulary used for characterizing assembly tasks in the home appliance assembly line.

Table 1. List of standard verbs (adapted from [2])

Align     Disengage  Lay    Remove    Screw
Clamp     Get        Move   Restock   Snap
Clean     Insert     Open   Restrict  Tighten
Connect   Inspect    Place  Scan

Application of a Controlled Assembly Vocabulary

441

It should be noted that not all of the standardized verbs specified in prior work are listed in Table 1; rather, only the verbs used in this case are shown. Additionally, verbs associated with non-value-added tasks are shaded, while the remaining verbs are related to value-added tasks. In this study, value-added tasks are those that transform the product in a way that gives it some desirable feature or trait as defined by the end user.

The manufacturing system currently being evaluated is comprised of transfer lines for the assembly of home appliances. The appliances are assembled through a variety of stamping processes and automated assembly, with roughly 650 employees executing the process. The layout of the plant consists of two separate assembly lines for two different models (P1 and P2) of the appliance. Transfer lines were divided in a similar fashion, with tasks for Assembly (A) and Inspection (I), plus several support lines: two described in this analysis as M1 and M2, and three other miscellaneous support lines specified as X1, X2, and X3. For the purposes of this analysis, the whole line was broken down into approximately 700 tasks, which were distributed throughout the plant between human assembly, machine assembly, and the infrastructure used to transfer products between workstations. Operators in this facility typically conduct their tasks either sitting or, more commonly, standing; most workers also have some sort of storage space that contains parts or assembly pieces. Some operators use basic hand tools or torque wrenches to aid in their assembly tasks, and upon completion of their task, some operators use a foot pedal to advance the assembly to the next workstation.

3 Application of Controlled Vocabulary

3.1 Procedure

The standard vocabulary identified in prior work [2] was applied to this new assembly process. The tasks currently used on the assembly line were reviewed to apply the controlled vocabulary such that the substantive information within the task list was not modified, but the action verbs were changed to match the controlled vocabulary. The task list was divided into four equivalent sections, which were then processed by four coders. These were then combined to create the complete list of assembly tasks. The vocabulary used in this analysis is shown in Table 2, where the first column shows the standard vocabulary from previous work, and the remaining columns show verbs used in task descriptions for the home appliance assembly line. As shown in the table, mapping of verbs in the current task descriptions to the controlled vocabulary resulted in a one-to-many mapping. This suggests that assembly instructions currently used in the home appliance assembly line may be using different terms to describe the same action. Additionally, the word “position” in the current task descriptions was found to map to two verbs from the standard vocabulary: “align” and “place.” This shows that there is ambiguity in the term “position”, and it should possibly be avoided when describing assembly tasks in this setting.


Table 2. The final proposed standard vocabulary with key.

Standard verb  Value-add?  Verbs observed in original instructions
Align          VA          Twist, Position¹, Turn, Adjust, Organize
Clamp          VA          Attach, Clamp
Clean          VA          Wipe
Connect        VA          Attach, Install, Hook, Plug-in, Connect
Disengage      VA          Unhook, Separate
Get            NVA         Obtain, Pick up
Insert         VA          Start, Insert, Install, Dip
Inspect        NVA*        Inspect, Check
Lay            NVA         Route, Pull
Move           NVA         Move, Pull out, Close, Lift, Flip
Open           NVA         Unwrap, Open
Place          VA          Place, Position², Insert
Remove         NVA*        Peel
Restock        NVA*        Restock, Fill
Restrict       VA          Secure, Tie
Scan           VA          Scan
Screw          VA          Screw, Secure
Snap           VA          Snap
Tighten        VA          Tighten, Tool

¹ followed by prepositional phrase; ² without location restriction
* Identified as value-add in [3] but modified as non-value-add in this study.
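The one-to-many mapping and the “position” ambiguity can be expressed as a small lookup. The excerpt below covers only a few rows of Table 2, and the prepositional-phrase heuristic for “position” is an assumed illustration, not the coders' actual procedure.

```python
# Illustrative excerpt of the observed-verb -> standard-verb mapping (Table 2).
STANDARD_FOR = {
    "twist": "Align", "turn": "Align", "adjust": "Align", "organize": "Align",
    "wipe": "Clean",
    "hook": "Connect", "plug-in": "Connect",
    "unhook": "Disengage", "separate": "Disengage",
    "obtain": "Get", "pick up": "Get",
}

def standardize(verb, rest_of_instruction=""):
    """Map an observed verb to its standard verb.

    'position' is ambiguous: with a prepositional phrase it maps to 'Align',
    otherwise to 'Place' (a simplified heuristic for the footnote in Table 2).
    Returns None for verbs not addressed by the vocabulary.
    """
    v = verb.lower()
    if v == "position":
        has_prep = any(p in rest_of_instruction.lower().split()
                       for p in ("on", "in", "into", "against", "to", "under"))
        return "Align" if has_prep else "Place"
    return STANDARD_FOR.get(v)
```

A verb that returns None (e.g. “vacuum”) would count as “not addressed” in the coverage analysis of the next section.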

3.2 Applicability to Assembly Process

The process of verb standardization was found to be straightforward for the most part; however, not all of the assembly tasks were successfully converted into standardized verbs. Figure 1 shows the result of verb standardization. The graph is divided into seven sections, each being a segment or stage of the assembly line. The percentage of verbs that were compatible with the controlled vocabulary is shown by product model (P1 and P2) and by stage (A, I, M1, M2, X1, X2, and X3). It should be noted that the “X3” support line was only present for P1, and that there was no equivalent support line for P2. Overall, 89% of the verbs in the original work instructions were addressed by the standardization. The “M2” stage, which corresponds to highly mechanized tasks, had the highest standardization across both product models. Many of the verbs that were not addressed occurred in stages “I” and “M1”, which involved irregular tasks. For example, one of the tasks listed in the work instructions was vacuuming excess water from the appliance, with the assembly task described as “Wait time on vacuum.” This task was not indicative of the assembly process at hand but was rather an intermediate “irregular” step that related to the assembly environment.
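The per-stage percentages reported in Fig. 1 amount to a simple tally of standardized versus unaddressed verbs. A sketch of that tally follows; the task-record layout is an assumption for illustration.

```python
def coverage_by_stage(tasks):
    """Percent of task verbs addressed by the vocabulary, per (model, stage).

    tasks: iterable of (model, stage, standardized_verb_or_None) records,
    where None marks a verb the controlled vocabulary did not address.
    """
    totals, hits = {}, {}
    for model, stage, verb in tasks:
        key = (model, stage)
        totals[key] = totals.get(key, 0) + 1
        if verb is not None:
            hits[key] = hits.get(key, 0) + 1
    return {k: 100.0 * hits.get(k, 0) / totals[k] for k in totals}
```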


[Figure: bar chart of the percent of verbs addressed vs. not addressed, by product model (P1, P2) and stage (A, I, M1, M2, X1, X2, X3).]

Fig. 1. The results of the verb standardization by model and process stage.

When matching verbs to standardized language, it was important to have precise definitions. For example, place and align are standardized verbs that have similar meanings. To clarify, the group chose to use the verb “align” if the assembly instructions refer to an orientation, distance or angle with respect to another object; otherwise the verb “place” was used. The reason to differentiate between these verbs was to indicate when higher precision in assembly was required. This suggests that in addition to a standard list of verbs, each verb needs to be clearly defined to ensure accuracy and consistency of instructions. An example for each verb may also be beneficial to authors of assembly instructions.

3.3 Measures of Agreement

Once the final set of verbs was established, an intercoder reliability analysis was completed within and between coders. The four coders chosen for this test were all graduate students in either mechanical or industrial engineering and had work experience ranging from one to five years. The coders each applied the standard vocabulary to a random sample of the larger set of work instructions. This sample was gathered by taking every fifth line of the work instructions provided to assembly associates at their respective workstations, which allowed the entire length of the transfer line to be assessed. Each of the four coders assessed the same sample of work instructions separately, and each coder’s assignment of the standard vocabulary was compared to the other three sets. In order to properly understand the utility of the standard vocabulary amongst individuals with varying backgrounds, Cohen’s kappa (κ) was calculated for each of the comparison cases between coders. For intercoder agreement, kappa values greater than 0.600 are considered to indicate substantial agreement, while kappa values above 0.800 indicate “almost perfect” agreement [8, 9]. The kappa values found between each of the four coders are shown in Table 3.


Table 3. Cohen’s kappa (κ) values for the agreement analysis between the four coders.

         Coder 1  Coder 2  Coder 3  Coder 4
Coder 1  –        0.671    0.761    0.775
Coder 2  0.671    –        0.848    0.715
Coder 3  0.761    0.848    –        0.811
Coder 4  0.775    0.715    0.811    –

Kappa values indicated substantial agreement amongst the four coders, which is unsurprising given that all four had been trained with the standard vocabulary key in Table 2. To further analyze the utility of the new standard vocabulary, a similar analysis was completed using Cohen’s kappa, but in this instance within each of the four coders. After five weeks, the original four coders were given the same set of randomly selected tasks from the broader list of work instructions. Without consulting their previous responses, the coders were asked to apply the standard vocabulary once again, and agreement was assessed between this set of responses and their earlier set. This application of Cohen’s kappa helps to ensure that the vocabulary is a useful tool over time, and that the original intercoder agreement was not simply the result of recent training but was indicative of the vocabulary’s usability. The kappa value for each of the raters was greater than 0.731, suggesting a high level of internal consistency amongst coders using the standard vocabulary [8, 9]. In summary, the “rating” or application of the controlled vocabulary in this case was found to be consistent between coders and within coders, suggesting that the underlying method of vocabulary application is robust. Because each of the four coders was trained using the standard verb key, it may seem odd that there was not complete agreement (κ = 1) amongst the coders. However, there were a few discrepancies present in the sample of work instructions that could explain this variation. Namely, there were discrepancies in the number of tasks that an individual work instruction represented. Some work instructions explicitly stated two distinct tasks (e.g. “Open door and insert weight.”) or instructed the operator to interact with multiple parts (e.g. “Get bolts (3).”).
In these cases, if coders split those instructions into several different tasks, the test was adjusted to include only the verb assignments that each coder made. However, work instructions that included implied conditions may also have affected how a coder assigned a standard verb, based on prior knowledge of the process or of the parts and tools being used. This ambiguity was another possible explanation for some of the variation among coders in the sample tasks.
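The agreement statistics above follow the standard Cohen's kappa formula κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each coder's label frequencies. A direct implementation, illustrative rather than the authors' analysis script:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' verb assignments over the same tasks."""
    n = len(codes_a)
    # Observed agreement: fraction of tasks where both coders chose the same verb.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: product of each coder's marginal frequency per verb.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

For instance, with codes ["Get", "Place", "Get", "Align"] versus ["Get", "Place", "Get", "Place"], observed agreement is 0.75, chance agreement 0.375, giving κ = 0.6 (substantial agreement by the thresholds cited above).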

4 Summary

Due to the successes observed in the automotive industry from using a standard vocabulary, an existing assembly vocabulary was modified to evaluate a home-appliance manufacturing line. Nearly 90% of the tasks listed in the original work instructions were addressed by the standardization. In conclusion, the standard vocabulary stemming from the automotive industry applied well to the


home-appliance system with only slight variance. To test the agreement on the standard vocabulary list, an intercoder reliability test was conducted between and within the coders. This showed the strength of long-term agreement within the coders individually and the agreement between all the coders together. As a result, the controlled vocabulary proved useful over time and its application was robust. Several additional benefits were observed during the application of the standard vocabulary, including:

• automation of assembly time estimation [10, 11]
• benefits for system modelling using discrete event simulation [12]
• identifying the value stream of a product
• the elimination of potential translation errors

These additional benefits will be expounded upon in a later work, but also serve to demonstrate the optimized process flow that may accompany the implementation of a standard vocabulary in manufacturing and assembly.

4.1 Limitations

During the process of applying the standard vocabulary list to such a large number of assembly tasks, several limitations became apparent. Application of the standard vocabulary was done through document review only, meaning the actual work being done on the assembly line was not observed in order to apply the vocabulary to existing instructions. While this is one of the biggest advantages of applying the vocabulary, it also presented some challenges. Even when certain verbs were straightforward to standardize, there were some exceptions when the task description was vague. The verb “obtain” has a specific meaning, making it simple to assign a descriptive and accurate verb, while other verbs, like “secure”, left more room for interpretation (i.e. a part could be secured by screwing, tightening, or clamping). When these verbs were encountered, more work was required to find the most accurate standardized language. By looking at previous tasks, more context could be gained: if an operator needs to get screws immediately before the part is secured, there is a high likelihood that the part is screwed in. Using the pictures that accompany the work instructions can also help clarify exactly what the operator is doing. If none of the verbs on the standardized list were applicable, new ones could be added.

4.2 Future Work

Future work would benefit from looking at the physical work done by the operators and assessing the value of applying the modified controlled vocabulary. Additionally, future work can determine whether an appropriate level of specificity is provided within the standard vocabulary, and whether the level of specificity needed differs based on the products being assembled. Also, using the standard vocabulary in other manufacturing settings can help expand the applicability of the automotive assembly standard vocabulary from [2]. Because this application was successful for the home-appliance manufacturing line, it is expected that applying the standard vocabulary in other manufacturing systems could also produce positive benefits.


References
1. Rychtyckyj, N.: Standard Language at Ford Motor Company: A Case Study in Controlled Language Development and Deployment. Cambridge, Massachusetts (2006)
2. Salmi, A., David, P., Blanco, E., Summers, J.D.: Standardized vocabularies for assembly systems modelling and automation alternatives description. In: 36th Computers and Information in Engineering Conference, vol. 1B, p. V01BT02A009 (2016)
3. Peterson, M., Mocko, G.M., Summers, J.D.: Knowledge management for semi-automated automotive assembly instruction authorship and translation. In: ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, ASME, Portland, OR, pp. DETC2013-13070 (2013)
4. Miller, M., Griese, D., Peterson, M., Summers, J.D., Mocko, G.M.: Reasoning: installation process step instructions as an automated assembly time estimation tool. In: Proceedings of the ASME Design Engineering Technical Conference, ASME, Chicago, IL, pp. DETC2012-70109 (2012)
5. Johansson, P.E.C., Lezama, T., Malmsköld, L., Sjögren, B., Ahlström, L.M.: Current state of standardized work in automotive industry in Sweden. Procedia CIRP 7, 151–156 (2013)
6. Pereira, A., et al.: Reconfigurable standardized work in a lean company – a case study. Procedia CIRP 52, 239–244 (2016)
7. Peterson, M.: Standardization of Process Sheet Information to Support Automated Translation of Assembly Instructions and Product-Process Coupling (2012)
8. McHugh, M.L.: Interrater reliability: the kappa statistic. Biochem. Medica 22(3), 276–282 (2012)
9. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33(1), 159–174 (1977)
10. Miller, M.G., Summers, J.D., Mathieson, J.L., Mocko, G.M.: Manufacturing assembly time estimation using structural complexity metric trained artificial neural networks. J. Comput. Inf. Sci. Eng. 14(1), 011005 (2014)
11. Cakmakci, M., Karasu, M.K.: Set-up time reduction process and integrated predetermined time system MTM-UAS: a study of application in a large size company of automobile industry. Int. J. Adv. Manuf. Technol. 33(3), 334–344 (2007)
12. Lee, Y.-T.T., Riddick, F.H., Johansson, B.J.I.: Core manufacturing simulation data – a manufacturing simulation integration standard: overview and case studies. Int. J. Comput. Integr. Manuf. 24(8), 689–709 (2011)

What Product Developers Really Need to Know – Capturing the Major Design Elements

Bjørnar Henriksen(✉), Andreas Landmark, and Carl Christian Røstad

SINTEF Digital, Technology Management, 7465 Trondheim, Norway
[email protected]

Abstract. Digitalization is no longer about finding data or inputs to different business processes, as advances in technology now enable us to capture and/or retrieve all needed data. The challenges lie in the quality of the data, not only in the narrowest technical sense of the term, but also in relation to what e.g. product developers really need to know in order to support their processes, and how this should be presented. These challenges have been addressed in four industrial R&D projects, financed by the Norwegian Research Council and the EU. In the first project, a traditional approach was chosen: overall objectives and use-cases were defined, and the project quickly jumped to what everybody (including industrial users and research scientists) believed the users needed and what could be provided to them from pre-defined data sources. However, it became clear that this did not necessarily match what the users really needed. This is in line with experiences from more traditional process mapping. Through the four R&D projects, a methodology focusing on what the product developer really needs, named “Major Design Elements”, was developed, tested and implemented. The approach is to identify the most important design elements, then find what kind of knowledge is required, including relevant analyses, data and sources. This, together with an understanding of the relevant processes, forms the basis for a Design Dashboard enhancing the product development process.

Keywords: Product development · Fact-based design · Industry 4.0

1 Introduction

1.1 Challenges in Fact-Based Product Development

The product development process has become an even more important competitive element, given the greater pressure to meet customer needs faster, with more precision and increased customization to individual needs. Digitalization has accelerated and is an enabler for companies to keep up. However, it is not obvious how companies can achieve the desired effects from this "window of technology".
The Industry 4.0 paradigm, with its strong emphasis on data from the manufacturing process itself, has put more and more focus on process and organization. However, manufacturing companies are often not in the front seat in developing more fact-based product development processes. This is not necessarily due to a lack of technology, but more a question of knowing how to identify a company's need for data, analytics and knowledge – and how to put its existing data to good use.
Through a portfolio of R&D projects including industrial and R&D partners, we have tested, developed and described an approach for more fact-based product development. In these projects we have experienced the pitfalls of focusing too heavily on technology for data capture and analytics (i.e. capturing "facts") at the expense of the core tasks in product development and innovation. In this paper we present an approach developed through these projects. The methodology aims to analyze the need for facts in core activities of product development in manufacturing companies.

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 447–454, 2019. https://doi.org/10.1007/978-3-030-29996-5_52

1.2 The R&D Projects and Scientific Approach

This paper is based on research carried out in four R&D projects in medium-sized manufacturing companies. A common denominator for the projects is the objective of improving product development based on facts. The projects share many of the same partners, so the R&D work has been cross-fertilized between them. One is funded by H2020, while three are co-funded by the Norwegian Research Council (NRC). The action-oriented approach means that the researchers have actively worked out solutions the companies can use and implement, in line with traditional action research methodology.
LINCOLN (H2020): 2016–2019, 4 industrial and 12 R&D partners. The main objectives are to: (i) develop three types of radically new vessel concepts through simulation model testing, (ii) demonstrate Lean Product/Service Development in vessel design, and (iii) improve ICT tools to support vessel design and operations.
RIT (NRC): 2018–2022, 3 industrial and 3 R&D partners. The main objective is to develop a Design Dashboard where large data volumes are analyzed and presented together with other types of data according to product requirements in the leisure boat industry.
RADDIS (NRC): 2018–2022, 4 industrial and 2 R&D partners. The main objective is the reduction of physical work using enabling technologies within visualization, product digital twins and simulation. The project also aims to find more proactive ways to deal with regulations within the marine industry.
WRAPID (NRC): 2018–2022, 2 industrial and 2 R&D partners. The main objective is to develop solutions for fact-based modularized product design for heavy machinery for agricultural and industrial applications.

2 Perspectives on Fact-Based Design

2.1 Knowledge and "Facts"

Competence, knowledge and facts are terms frequently used when describing high-performance organizations. Gunasekaran and Ngai characterize these organizations by: (i) core competence, networks and cooperation; (ii) process orientation; (iii) free margins; (iv) learning organizational structures; and (v) knowledge management and information technology [1]. Product development is a source of new knowledge, but existing knowledge must also be reused to generate new knowledge, and this has to be fed back into the development cycle to match the ever-increasing complexity of design.
Knowledge and facts are often used interchangeably. A scientific fact is an objective and verifiable observation, in contrast to a hypothesis or theory, which is intended to explain or interpret facts [2]. In this paper, we treat a fact as similar to what Nonaka and Takeuchi term explicit knowledge, in contrast to tacit knowledge [3].
Applying the notions and theory of knowledge management to engineering activities, knowledge-based engineering (KBE) is the application of knowledge-based systems technology to the domain of manufacturing design and production. The design process is inherently a knowledge-intensive activity, so much of the emphasis in KBE is on the use of knowledge-based technology, originally to support computer-aided design (CAD). KBE can have a wide scope that covers the full range of activities related to product lifecycle management and multidisciplinary design optimization. The scope of KBE includes design, analysis, manufacturing, and support [4].

2.2 The Product Development Process

The product design process is extremely important for the competitiveness of manufacturing companies. It may also be one of the most complex processes in a company, due to the integration of heterogeneous, existing, and new knowledge that must transform requirements and constraints into technical solutions. The choices and decisions taken during the design phases impact other steps and processes in product development, production, logistics, etc. [5].
Within lean product development (LPD), knowledge plays an increasingly important role. Design cycles are examples of activities that may provide value or waste, depending on how one looks at them. Design process iterations could, for example, create valuable knowledge, e.g. about variability. However, an iteration that can be eliminated without loss of useful knowledge is waste, since only information that contributes to reducing risk provides value [8]. According to Rossi, Morgan, and Shook, lean product development is about creating value through a process that builds on knowledge and learning, enabled by an integrated system of people, processes, and technology [6]. The lean pioneer Allen C. Ward's core thesis is that the very aim of the product development process is to create profitable operational value streams, and that the key to doing so predictably, efficiently, and effectively is to create usable knowledge [7]. Kennedy prefers designations such as 'Knowledge-Based Product Development' and 'Learning-First Product Development' instead of LPD. He suggests that PD must be viewed as a 'world of knowledge' rather than a 'world of tasks' [10]. According to Kennedy, the value stream for knowledge represents the flow of useful knowledge across different projects, products and functional areas – a characteristic that most enterprises fail to leverage. This can be seen as "minimising the waste of knowledge", not originally one of the seven "wastes" in lean manufacturing, but highly relevant in PD.


There is a range of more recent methodologies for product development with close links to LPD. Jill Jusko presents some of the concepts gaining attention in the manufacturing product development community [9]:
• Agile Development relies on the ability of small teams and teamwork to make changes quickly and promote close customer collaboration.
• Knowledge-Based Development is characterized by the creation of reusable knowledge through learning cycles. It incorporates set-based design.
• Spiral Development is associated with software development and characterized by multiple iterations of the entire process, unlike staged processes.
The Industry 4.0 concept represents a new paradigm, as it takes account of the increased computerization of manufacturing, where physical objects are seamlessly integrated into the information network. The main features of Industry 4.0 are [11]:
• Interoperability: cyber-physical systems allow humans and smart factories to connect and communicate with each other.
• Virtualization: a virtual twin of the smart factory is created by linking sensor data with virtual plant models and simulation models.
• Decentralization: the ability of cyber-physical systems to make decisions on their own and to produce locally, thanks to technologies such as 3D printing.
• Real-time capability to collect and analyze data.
• Increasing service orientation.
• Modularity: flexible adaptation of smart factories to changing requirements.
Industry 4.0 enables a more effective infrastructure in which the design and development activities of a product's life cycle are closely integrated through real-time information and big data. The challenge is to understand how to use this extensive information to enhance product value and improve industrial productivity. Since information must be displayable, reusable and available in real time, the fourth industrial revolution is already well aligned with lean thinking [12].

2.3 Knowledge Requirements – Product and Process

Even though software development plays an increasing part, product development in manufacturing is normally conducted differently from pure ICT projects. However, as described in the sections above, knowledge and ICT support are becoming more and more important to the product design process. The challenge is then to define what we need to know, and how to obtain that knowledge. Generally, knowledge requirements can be categorized as product knowledge or process knowledge [13]. However, in product design we will normally need both kinds. The common approach to structuring product knowledge is through product models, one of the most common being the Function-Behavior-Structure model [14]. Depending on the product complexity, the knowledge can be distinguished into [5]: (i) Function; (ii) Structure – technical architecture, BOM; (iii) Behavior; (iv) Properties – a set of characteristics, e.g. material, geometry, technology; (v) Lifecycle. A range of sources and ICT tools, such as eBOM, CAD and ERP, are used to capture, transfer, analyze and present product knowledge.
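As an illustration, the five kinds of product knowledge listed above can be grouped in a simple record. The following Python sketch is illustrative only: the field names follow the categorization in [5], while the example values are invented.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ProductKnowledge:
    """Illustrative grouping of the five knowledge kinds listed in [5]."""
    function: str               # what the product must do
    structure: List[str]        # technical architecture / BOM items
    behavior: str               # how the product behaves in use
    properties: Dict[str, str]  # e.g. material, geometry, technology
    lifecycle: List[str]        # lifecycle phases the knowledge covers

# Invented example: a leisure-boat hull described in these terms.
hull = ProductKnowledge(
    function="carry payload safely at planing speed",
    structure=["hull shell", "stringers", "transom"],
    behavior="planes above a given speed; stable in beam seas",
    properties={"material": "GRP sandwich", "geometry": "deep-V"},
    lifecycle=["design", "production", "operation"],
)
```

In practice such a record would be populated from the ICT tools mentioned above (eBOM, CAD, ERP) rather than written by hand.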


The process knowledge is created during the whole lifecycle of a product and is normally based on activity models creating links between products, resources and their characteristics. An activity aggregates several kinds of knowledge, such as sequences, functions, rules and states. Various types of process mapping, formalized through software tools such as PLM and MPM [15], are common enablers.

3 The R&D Projects' Focus on Product Design

3.1 The Companies

The companies from the marine and agricultural industries participating in the R&D projects are medium-sized and operate in international markets, facing expectations of continuous product improvements and product launches. They have limited staff resources dedicated to product development and need to combine development with operational activities. However, the companies share an eagerness to bring new technology into their products and processes.

3.2 The Design Processes

Process analysis is a very common approach for understanding the business and the context in which new ICT solutions are meant to be implemented to improve performance, quality, etc. Figure 1 illustrates a typical product development model.

Fig. 1. Product development as an iterative process

The details presented are not important here, but what we see is that product development in our case companies (Fig. 2) is typically very unstructured, with many iterations and diffuse requirements. It is also difficult to identify which facts are integrated in the process and what is really needed for a good product development process.


Fig. 2. Product development, not a straightforward process in the case companies

The above process analysis gave an overall picture of the product development context and challenges for the companies, but did not provide sufficient insight into how to improve the situation. Hence, this traditional process analysis approach did not identify what kind of analyses, data, and ICT solutions are needed to really improve the process.

3.3 Identifying the Major Design Elements

Our need for better ways of defining requirements for facts in product development made us emphasize the product as such, i.e. product data and the core tasks in the design process. The challenge is to identify what kinds of analyses, and subsequently data, we really need in the (technical) design. What are the major design elements of the product? This approach tones down "what data is available" and "what would be nice to have".
For some industries and product/markets, the major design elements (MDE) are well known, even though their relative importance may vary. The leisure boat manufacturing industry is such an example, where the product designers work on well-defined design elements (Table 1). In this case, the next step is to define what analytics and data we should bring to the design dashboard to improve product design processes. For other industries and product/markets the MDEs are to some extent inherent, but not necessarily well defined. This was the case for the agricultural machine industry.
In our projects, a methodology has been developed that identifies the MDEs and the facts needed to improve the work on these elements. This was done through workshops where the people involved in the companies' product development participated together with external industrial experts (facilitators). The companies were well prepared before the workshops, so much of the workshop time was spent classifying and grouping the design elements and discussing examples.
Table 1 illustrates the MDEs for our cases in different industries. We see that the design elements the product designers focus on in technical product design have similarities. Hence, some of the data might be captured and analyzed using similar hardware and software solutions. However, it is important to respect the specifics of the industries and companies when defining the MDEs and their need for facts.


Table 1. Major Design Elements (MDE) in the marine and agricultural cases

Boatbuilders                    Product  Process
Hull definition                    x
Hydrostatics                       x
Weights                            x
Powering                           x
Stability                          x        x
Structure                          x        x
Arrangements                       x

Agriculture equipment           Product  Process
Material properties                x
Flow in machines                   x
Weight and geometry                x        x
Hydraulics capacity                x        x
Material input in machine          x
Robustness                         x        x
HSE (Health, Security, Env.)       x
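One way to make the next step concrete is a small register that links each MDE to the knowledge types, analyses and data sources that should feed the design dashboard. Every element, analysis and source in this Python sketch is hypothetical, chosen only to resemble the marine case.

```python
# Hypothetical MDE register: each major design element is mapped to its
# knowledge type(s), the analyses required, and the data sources that
# would feed a design dashboard.
mde_register = {
    "Stability": {
        "knowledge": ["product", "process"],
        "analyses": ["hydrostatic calculation"],
        "sources": ["CAD model", "weight database"],
    },
    "Powering": {
        "knowledge": ["product"],
        "analyses": ["resistance prediction"],
        "sources": ["sea-trial logs", "CAD model"],
    },
}

def dashboard_inputs(register):
    """Distinct data sources the design dashboard must connect to."""
    return sorted({src for mde in register.values() for src in mde["sources"]})

inputs = dashboard_inputs(mde_register)  # -> ['CAD model', 'sea-trial logs', 'weight database']
```

Collapsing the register into a list of distinct sources is what turns the MDE discussion into concrete integration requirements for the dashboard.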

When the MDEs are grouped and labeled, they have to be described and detailed to a level where we can see what kinds of analyses, and which data, provide the desired facts to the designers. To some extent, this is about defining a product design dashboard that provides facts and is also a gateway to more advanced analyses and data drilling. Some of the facts required in a design dashboard for the major design elements are already stored in company systems, e.g. ERP, quality and/or product-embedded systems. The challenge is to bring data of the correct quality into the dashboard and to create or make available relevant analytics for designers. This phase is the more technical part of the projects, where solutions are developed. However, this is also where the cost/benefit considerations of such a solution come into play.

3.4 A Hint of What Data We Might Have and What We Could Learn

The focus on MDEs assumes that the customer requirements and the overall process are fact-based and would benefit from the lean product design approaches presented in Sect. 2.2. A basis for the MDE methodology is to focus on the important tasks and challenges in product design, not on ICT solutions. However, in the later stages, we see that the discussions become more concrete and fruitful when we can give practical examples of data and analytics. In the agriculture case, data from product-embedded sensors were used to give examples related to patterns of use (6, 7 and 8 in Fig. 3), hydraulics/chamber pressure, etc. (3 and 5 in Fig. 3). For the marine industrial cases, several analytical approaches and visualizations are now being tested and implemented based on real-life data captured from prototype vessels and the fleets of vessels available, e.g. hull pressure, which can give important data on MDEs such as structure and powering.
A common experience across these companies is the iterative nature of explorative data analysis and PD-requirements-driven data analysis. To develop the product developer's intuition about the value of the data in the organization, it seems necessary to couple "product development"-minded data scientists with "data-minded" product developers. Aided by a methodology, it is possible to focus the discussions and avoid becoming overly technical or overly PD-driven.


4 Conclusion

Today the challenge for manufacturing is more and more about finding smart ways to use the data and knowledge available through different systems and devices. In product development, we often see different kinds of spiral approaches, where the real need for facts is difficult to define. Through different industrial R&D projects, a methodology has been developed and tested that aids companies in capturing and presenting data and analytics for the Major Design Elements in product development. This approach could reduce the time used in product development and improve the quality of products and services. In this way, the focus on Major Design Elements could enable companies to move toward lean and knowledge-based product development.

References

1. Gunasekaran, A., Ngai, E.W.T.: Knowledge management in 21st century manufacturing. Int. J. Prod. Res. 45(11), 2391–2418 (2007)
2. Gower, B.: Scientific Method: A Historical and Philosophical Introduction. Routledge (1997). ISBN 0-415-12282-1
3. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company. Oxford University Press, New York (1995)
4. Prasad, B.: What Distinguishes KBE from Automation. coe.org. Archived from the original on 24 March 2012. Accessed 3 July 2014
5. Bricogne, M., Belkadi, F., Bosch-Mauchand, M., Eynard, B.: Knowledge based product and process engineering enabling design and manufacture integration. In: Vallespir, B., Alix, T. (eds.) APMS 2009. IAICT, vol. 338, pp. 473–480. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-16358-6_59
6. Rossi, M., Morgan, J., Shook, J.: Lean product development. In: Netland, T., Powell, D.J. (eds.) The Routledge Companion to Lean Management. Productivity Press, New York (2016)
7. Ward, A.C., Sobek, D.K.: Lean Product and Process Development, 2nd edn. Lean Enterprise Institute, Cambridge (2014)
8. Reinertsen, D.G.: Lean thinking isn't so simple. Electron. Des. 47(19), 48 (1999)
9. Jusko, J.: https://www.industryweek.com/companies-amp-executives/new-models-productdevelopment2010. Accessed Feb 2019
10. Kennedy, M.N.: Learning First Product Development: Understanding Implementation Principles. Lean PD Seminar and Workshop at IVF, Gothenburg, Sweden (2008)
11. http://www.europarl.europa.eu/RegData/etudes/STUD/2016/570007/IPOL_STU(2016)570007_EN.pdf. Accessed Feb 2019
12. Cattaneo, L., Rossi, M., Negri, E., Powell, D., Terzi, S.: Lean thinking in the digital era. In: Ríos, J., Bernard, A., Bouras, A., Foufou, S. (eds.) PLM 2017. IAICT, vol. 517, pp. 371–381. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-72905-3_33
13. Zha, X., Du, H.: Knowledge-intensive collaborative design modelling and support – Part 1: review, distributed models and framework. Comput. Ind. 57, 39–55 (2006)
14. Gero, J.S., Kannengiesser, U.: The situated function-behavior-structure framework. Des. Stud. 25(4), 373–391 (2004)
15. Vliegen, H.J.W., Van Mal, H.H.: The structuring of process knowledge: function, task, properties and state. Robot. Comput. Integr. Manuf. 6(2), 101–107 (1989)

Collaborative Product Development

Design-for-Cost – An Approach for Distributed Manufacturing Cost Estimation

Minchul Lee and Boonserm (Serm) Kulvatunyou

Systems Integration Division, National Institute of Standards and Technology, Gaithersburg, USA
{minchul.lee,serm}@nist.gov

Abstract. Research has shown that design changes cost more in the later stages of product development. Therefore, companies adopt Design-for-X methods to optimize product designs for many aspects in the early design stage. Despite such efforts, products often undergo several design changes during the commissioning of production, a principal reason being failure to meet target costs. Accurately estimating cost in the early design stage is difficult due to insufficient information. In particular, as production becomes more distributed, cost estimation becomes more difficult because information is also more distributed. This paper introduces a cost estimation method to address this problem. It describes a distributed manufacturing situation and a cost breakdown framework. A use case illustrates how the framework allows for supply-chain cost negotiation and design adjustments in the early design stage.

Keywords: Cost estimation · New Product Introduction · Cost breakdown approach · Design-for-Cost · Supply chain management

1 Introduction

Research has shown that design changes cost more in later stages of product development [1]. Therefore, companies adopt Design-for-X methods to optimize the product design in the early design stage for many aspects, such as quality, time to delivery and cost. Companies have a strong interest in the ability to accurately estimate manufacturing cost early in the design stage, because executives typically focus on maximizing profit. In the New Product Introduction (NPI) process, companies typically set the target market, volume, price, and manufacturing cost, along with the design and functions of a product, at the first stage [2]. They are very interested in maintaining the profit margin; therefore, target manufacturing costs are validated at every NPI stage. Despite this general practice, unexpected costs usually show up at the commissioning of production, because the current cost estimation approach is insufficient for today's distributed manufacturing environment.

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 457–465, 2019. https://doi.org/10.1007/978-3-030-29996-5_53


Traditionally, manufacturers¹ have most of the information necessary for cost estimation. This is not the case in distributed manufacturing. Manufacturers need to interact with suppliers, estimate component costs, and consider delivery and packaging costs from the early design stage. Cost estimation approaches for distributed manufacturing need to enable manufacturers to negotiate with suppliers and devise supply chain strategies that reduce cost. For example, in addition to the typical design adjustments to reduce material and manufacturing process costs, manufacturers may find suppliers who can use the same material and purchase the material on behalf of all the suppliers to receive a larger bulk-buying discount. Without a proper cost breakdown in the estimation, manufacturers will have difficulty negotiating with suppliers, as they do not know which cost elements can be reduced.
This paper proposes a framework for manufacturing cost estimation in the early design stage for the case of distributed manufacturing. The framework reduces the risk of encountering unexpected costs at the commissioning of production that could result in a costly design change. The rest of the paper is organized as follows. First, a literature review on manufacturing cost estimation is given. Then, our cost estimation framework is outlined. Finally, a case study showing cost estimation of a supplied component is illustrated, followed by a conclusion and future work.

2 Literature Review

In this section, a summary of existing manufacturing cost estimation methods and cost elements is provided.

2.1 Cost Estimation Methods

Manufacturing cost estimation methods can generally be divided into two groups: qualitative methods and quantitative methods [3]. Qualitative methods are based on a comparative analysis between a new product and similar products manufactured previously. Quantitative methods, on the other hand, are based on a detailed cost analysis of the product design, its features, and the corresponding manufacturing processes, instead of simply relying on past data or the tribal knowledge of an estimator. Qualitative methods do not provide a cost breakdown that can be used to understand which cost elements would benefit from design changes and cost negotiation.
There are several types of quantitative methods, including operation-based [4], feature-based [5] and breakdown approaches [6]. The operation-based approach mainly estimates cost in terms of types of operations and considers material cost, factory expenses and manufacturing processing cost as part of the costs associated with the time of performing operations. This approach focuses on accurate estimation of the manufacturing processing cost but gives less detailed consideration to other cost elements. The feature-based approach identifies cost-related features of the design and estimates their costs; however, existing approaches in this category only consider conventional machining processes. The breakdown approach partitions manufacturing costs into cost elements, and an estimation is applied to each cost element. The estimated manufacturing cost is the sum of all cost elements incurred during the production cycle. Cost elements include material cost, manufacturing process cost, maintenance cost, and repair cost. Other cost elements are insurance cost [6], overhead costs [7], and manufacturing process cost calculated from the hourly usage of machinery [8].
For more accurate cost estimation, we adopt a cost breakdown approach. However, our research focuses on estimating cost for distributed manufacturing. For that, we have to look into what data is available to the manufacturer and use it to extend the scope of cost elements, e.g. with packaging and delivery costs. The scope of this paper and the associated cost elements are described in Sect. 2.2.

¹ In this paper, manufacturers refer to the organizations that design and/or produce products or components that need subassemblies from supplier organizations. The two roles are played by organizations in a distributed manufacturing chain. For example, a prime contractor (a manufacturer) designs and produces cars, which require an instrument panel assembly from a subcontractor (a supplier). A subcontractor can in turn be a manufacturer ordering an electrical harness from a supplier.

2.2 Cost Elements

According to [9], the selling price consists of machining, material, labor, indirect costs, selling and administrative expense, and profit, as shown in Fig. 1.

Fig. 1. Elements of cost

Most research focuses on Final Factory Cost; however, it is necessary to widen the scope to Selling Price, including packaging and delivery cost in the Total Cost to Sell, in order to estimate the cost of a distributed manufacturer. To enable detailed cost analysis, cost elements for machining, tools, and defects should be added to Prime Cost. In conclusion, the cost elements for distributed manufacturing are shown in Fig. 2.


Fig. 2. Cost elements for distributed manufacturing

3 Cost Estimation Framework

In this section, we introduce a cost estimation framework for a supplied component. We assume that the design of the component starts after the target market, volume, and price are set, as these are needed for the cost estimation. Several prior efforts provide logic (cost elements and formulas) for estimating costs. However, they can be difficult to apply in the early design stage of distributed manufacturing because the data are not available. Therefore, it is necessary to incrementally increase the accuracy of the cost estimation, as shown in Fig. 3: starting from logic requiring the least information, and adding more details and data as they become available. The approaches used to update the logic and improve the accuracy include further breaking down cost elements, decomposing parameters in formulas, and collecting additional data to update the database. Ways to collect data include investigating trends in the market, prototyping the component internally, contacting the supplier, contacting the equipment makers, and contacting raw material suppliers.

Fig. 3. Cost estimation framework for distributed manufacturing

As shown in Fig. 2, the Selling Price of the supplier (i.e., of the supplied component) needs to be estimated. Therefore, the cost of the designed part is estimated by considering the material, the manufacturing process, the packaging method, and the anticipated supplier, and then compared with the target cost. The right-hand side of Fig. 3 shows which cost elements (from Fig. 2) are related to each cost estimation step.


In the early stages of development, not every aspect of the design has been decided, such as the raw material or the manufacturing method, so assumptions have to be made about the parameters of the cost elements. Only as the design of the product becomes more concrete can the cost be estimated more accurately.
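As a sketch of this incremental refinement, using the material cost element as an example, one might keep a set of early-stage assumptions that measured data gradually overrides. All parameter names and numbers below are invented for illustration.

```python
# Sketch of incremental cost estimation: early-stage default assumptions
# are replaced by measured values as design and supplier data mature.
# Parameter names and all numbers are invented for illustration.

def estimate_material_cost(known, defaults):
    params = {**defaults, **known}  # known part data overrides assumptions
    return (params["net_kg"] + params["loss_kg"]) * params["unit_cost"]

defaults = {"loss_kg": 0.2, "unit_cost": 3.0}   # early-stage assumptions
early = estimate_material_cost({"net_kg": 1.5}, defaults)
# Later, a supplier quote and prototyping pin down two of the parameters:
later = estimate_material_cost({"net_kg": 1.5, "loss_kg": 0.05,
                                "unit_cost": 2.6}, defaults)
```

The same override pattern applies to any of the cost elements in the framework: the estimate stays computable at every stage, and its accuracy improves as assumptions are retired.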

4 Cost Estimation Method and Cost Breakdown

In this section, we introduce a basic formula for each cost element and discuss its essential aspects. The formulas can be adjusted for different supplied components and situations. Each cost element is estimated per part.

4.1 Material Cost

Most research suggests estimating material cost from a unit cost and an amount. A basic estimation formula for the material cost C_mat is

  Material cost: C_mat = m_t × U_mat,   (1)

where m_t is the total amount of raw material and U_mat is the unit cost of the raw material. However, the amount of material should be broken down into the net material, m_n, and the material loss, m_l, to expose opportunities to reduce the lost material. The design engineer may improve the design, or the manufacturer may work with the supplier to improve the process, to reduce the material loss. Material is lost, for example, through preheating in the injection molding process and through chips or scrap in NC machining. Thus, the material cost C_mat, which characterizes the material loss, is given by:

  Material cost: C_mat = (m_n + m_l) × U_mat.   (2)

4.2 Machining, Labor and Tool Costs

Generally, the basic estimation formula for machining cost is the product of a machine rate and a machining time [6]:

  Machining cost: C_mcn = r_m × t_m,   (3)

where r_m is the machine rate and t_m is the machining time. The machine rate is usually calculated by dividing the cost of operating a machine, or machines, for the needed processing duration by that duration. Details of machine-rate estimation are beyond the scope of this paper, but it can be approximated from depreciation, electricity and maintenance costs. Machining time can be subdivided into setup time (t_s), operation time (t_o) and non-operation time (idle time and downtime, t_no) [10]:

  Machining cost: C_mcn = r_m × (t_s + t_o + t_no).   (4)

Labor cost, which is similar to machining cost, is given by

  Labor cost: C_l = r_l × t_l,   (5)

where r_l is the labor rate and t_l is the labor operation time. A labor rate is the cost of labor used to derive the costs of various activities or products directly related to manufacturing. Calculating accurate labor costs can be difficult in distributed manufacturing because every supplier has different wages and compensation. The industry's average labor rate can be used and is a good reference point for negotiation. The average labor operation time can be obtained by dividing the work time per day by the total number of products made during the day. Techniques for analyzing operation time, such as Modular Arrangement of Predetermined Time Standards (MODAPTS) [11], can also be applied.
Tool cost, the cost of devices such as a mold or a jig, is calculated by dividing the price of the device by the target production quantity:

  Tool cost: C_tool = Tool price / Target production volume.   (6)

4.3 Defect Cost

As every manufacturing process has failures, defect cost needs to be considered. The defect rate is difficult to predict before production starts, and it can vary between suppliers. Therefore, the defect rate is typically set to the same as or lower than an average of historical defect rates. The formula to predict the defect cost, Cdft, is given by

Defect Cost Cdft = (Cmat + Cmcn + Cl) × rd, (7)

where rd is the defect rate.

4.4 Packaging Cost

Packaging cost includes all the materials and processes needed for delivering the supplied component to the manufacturer.

Packaging Cost Cpkg = pb / np + rl × tp, (8)

where pb is the cost of a package box, np is the number of parts per box, and tp is the packing time.

Design-for-Cost 463

4.5 Delivery Cost

Delivery cost is the cost of delivering the packaged product from the supplier to the manufacturer. The method of estimating this cost may vary depending on the mode of transportation, but the formula below includes the essential parameters for cost reduction analysis.

Delivery Cost Cdlv = (rd × d) / (nt × np), (9)

where rd is the delivery rate, d is the distance from the supplier to the buyer, and nt is the number of boxes in a transport.

4.6 Selling and Administrative Cost and Profit

These cost elements can be estimated as a percentage over the other cost elements. Selling and administrative (S&A) cost also includes research and development. These ratios vary by industry and supplier and may be obtained from industry statistics such as in [12, 13]. The S&A ratio can be higher for a supplier with an R&D center, such as a PCB manufacturer. In addition, the profit ratio can be negotiable, and it may increase for favorable partners such as those delivering consistent-quality components and on-time delivery. The basic formulas for the two cost elements are:

Selling and Administrative Cost Csac = (Cmat + Cmcn + Cl + Cpkg + Cdlv) × rsac (10)

Profit Cprf = (Cmat + Cmcn + Cl + Cpkg + Cdlv + Csac) × rprf, (11)

where rsac is the selling and administrative ratio and rprf is the profit ratio.

5 Use Case

The use case is a scenario in which a refrigerator manufacturer needs injection-molded parts from a supplier. After the supplier receives the drawings, molds are manufactured. After the injection-molded part production and quality inspections are completed, the parts are wrapped in protective tape and delivered in boxes. Below, we discuss the cost estimations and cost reductions experienced in this use case. The component used 120 g of ABS resin; therefore, according to (1):

Material Cost Cmat = mt (120 g) × Umat ($2.7/kg) = $0.324.

However, it was determined that if two parts were produced at the same time in a single mold, machining, labor, and tool costs were reduced despite an increase in material cost. The increase in material cost came from the additional resin needed for the sprue and runner (6 g) in the mold. Hence, the material cost using (2) is:

Material Cost Cmat = mt (120 g + 6 g) × Umat ($2.7/kg) = $0.340.

464

M. Lee and B. (Serm) Kulvatunyou

The price of the mold to produce the product is about $300,000, and it can be used 200,000 times. However, since we produce two parts in one mold,

Tool Cost Ctool = $300,000 / (200,000 × 2) = $0.75.

The machine rate for injection is calculated to be $24 per hour, and the machining time is 40 s for two pieces.

Machining Cost Cmcn = rm ($24/h) × tm (40 s / 2) = $0.13.

The operator extracts the injected parts from the machine and performs a visual inspection. The labor rate is $30 per hour and has the same takt time as the machining time. Therefore, applying (5),

Labor Cost Cl = rl ($30/h) × tl (40 s / 2) = $0.17.

Since historically the average failure rate of the supplier in the use case was 2.0%,

Defect Cost Cdft = (Cmat ($0.340) + Cmcn ($0.13) + Cl ($0.17)) × rd (2%) = $0.01.

After the manufacturing is completed, tape is attached, and a total of 100 parts are inserted into a $2 container, which takes about 20 s per part. Therefore,

Packaging Cost Cpkg = pb ($2) / np (100) + rl ($30/h) × tp (20 s) = $0.187.

A truck delivers 400 boxes of parts over 20 km, so

Delivery Cost Cdlv = rd ($20/km) × d (20 km) / (nt (400) × np (100)) = $0.01.

Since the average S&A rate is 7.0% and the profit rate is 8.0%, these cost elements come to $0.11 and $0.14. Finally, the total cost is the summation of all the cost elements: $1.85 per part.

Discussion: If the estimated cost cannot meet the target cost, it is necessary to consider changing to cheaper raw materials, increasing the number of parts in the mold, or changing the packaging method to reduce the cost. The estimated costs cannot be guaranteed to be available from the supplier, but they can be negotiated with the supplier based on the estimation. For example, it may be possible to negotiate the defect rate, the S&A rate, etc., or to come up with a supply chain strategy such as buying raw materials in bulk at a lower price on behalf of multiple suppliers, arranging logistics services, etc.
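The use-case figures above can be reproduced with a short script. This sketch is not part of the original paper; variable names follow the cost symbols of Sect. 4, and (as the use-case totals suggest) the S&A and profit ratios are applied to the running subtotal including tool and defect costs.

```python
# Sketch of the Sect. 4 cost model applied to the injection-molding use case.
# Monetary values in USD, times in seconds, masses in grams.

U_mat = 2.7 / 1000                 # raw-material unit cost, $/g (ABS, $2.7/kg)
r_m, r_l = 24 / 3600, 30 / 3600    # machine and labor rates, $/s

C_mat  = (120 + 6) * U_mat              # (2) net + loss material (sprue/runner)
C_mcn  = r_m * (40 / 2)                 # (4) 40 s cycle yields two parts
C_l    = r_l * (40 / 2)                 # (5) same takt time as machining
C_tool = 300_000 / (200_000 * 2)        # (6) mold price / total parts produced
C_dft  = (C_mat + C_mcn + C_l) * 0.02   # (7) 2% historical defect rate
C_pkg  = 2 / 100 + r_l * 20             # (8) $2 box of 100 parts + 20 s packing
C_dlv  = 20 * 20 / (400 * 100)          # (9) $20/km, 20 km, 400 boxes of 100

subtotal = C_mat + C_mcn + C_l + C_tool + C_dft + C_pkg + C_dlv
C_sac = subtotal * 0.07                 # (10) selling & administrative, 7%
C_prf = (subtotal + C_sac) * 0.08       # (11) profit, 8%
total = subtotal + C_sac + C_prf

print(f"total = ${total:.2f} per part")  # prints "total = $1.85 per part"
```

Running the sketch reproduces the per-part total of $1.85 reported above, which also serves as a consistency check on the individual cost elements.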


6 Conclusion

In this paper, we introduced a method to estimate costs for distributed-manufacturing components. In order to estimate the cost of distributed manufacturing, the selling price should be considered instead of the final factory cost. We also introduced the cost elements of the selling price for a more accurate cost estimation. Through this, a framework is proposed that incrementally enhances the detail of the cost estimation during the design process. A use case illustrated the cost estimation process for a component and discussed how the parameters identified in the cost elements could be used for adjusting the design and negotiating with suppliers. Future work includes integrating the quantitative cost estimation approaches described in this paper with qualitative approaches. In this way, machining time and working time can be better predicted based on data kept in a cost estimation database. Therefore, defining a data schema for such a database is an important research topic.

Disclaimer. Any mention of commercial products is for information only; it does not imply recommendation or endorsement by NIST.

References
1. Ulrich, K.T., Eppinger, S.D.: Product Design and Development. McGraw-Hill, New York (1999)
2. Katila, R., Ahuja, G.: Something old, something new: a longitudinal study of search behavior and new product introduction. Acad. Manag. J. 45(6), 1183–1194 (2002)
3. Niazi, A., Dai, J.S.: Product cost estimation: technique classification and methodology review. J. Manuf. Sci. Eng. 128(2), 563–575 (2005)
4. Shehab, E.M., Abdalla, H.S.: Manufacturing cost modeling for concurrent product development. Robot. Comput. Integr. Manuf. 17(4), 341–353 (2001)
5. Zhang, Y.F., Fuh, J.Y.H., Chan, W.T.: Feature-based cost estimation for packaging products using neural networks. Comput. Ind. 32, 95–113 (1996)
6. Son, Y.K.: A cost estimation model for advanced manufacturing systems. Int. J. Prod. Res. 29(3), 441–452 (1991)
7. Bernet, N., Wakeman, M.D., Bourban, P.E., Månson, J.A.E.: An integrated cost and consolidation model for commingled yarn based composites. Compos. Part A Appl. Sci. Manuf. 4, 495–506 (2002)
8. Ostwald, P.F.: Engineering Cost Estimating. Prentice Hall, Englewood Cliffs (1992)
9. Drury, C.: Management and Cost Accounting. Cengage Learning EMEA (1985)
10. Jung, J.: Manufacturing cost estimation for machined parts based on manufacturing features. J. Intell. Manuf. 13(4), 227–238 (2002)
11. International MODAPTS Association. http://www.modapts.org/. Accessed 13 Apr 2019
12. Macrotrends. https://www.macrotrends.net. Accessed 13 Apr 2019
13. Butler Consultant. http://research.financial-projections.com/. Accessed 13 Apr 2019

Computer-Aided Selection of Participatory Design Methods Michael Bojko, Ralph Riedel(&), and Mandy Tawalbeh Department of Factory Planning and Factory Management, Chemnitz University of Technology, Chemnitz, Germany {michael.bojko,ralph.riedel, mandy.tawalbeh}@mb.tu-chemnitz.de

Abstract. The activities to introduce Industry 4.0 and digitalization into manufacturing environments imply a multitude of new systems and technologies, which change the work environments and tasks that have existed there up to now. In order to cope with the increasing complexity, employees in production must be developed into knowledge workers. This requires new approaches to work organization, training and education, and the active involvement of employees in shaping future workplaces and processes. To achieve these goals, new approaches and methods for the collaborative design and reorganization of workplaces and processes involving employees by means of participatory design need to be developed and implemented. Due to the large number of available methods, targeted support is required for selecting suitable methods. In this paper, the motivation and reasons to use participatory design are explained, an approach to support method selection is developed, and a computer-aided selection procedure for empowering the responsible persons on the shop floor is presented. This forms the basis for applications in an industrial context, where the solution can be validated and improved based on practical experiences.

Keywords: Participatory design · Knowledge management · Method selection

1 Collaborative Workplace and Process Design Methods to Empower and Engage Employees

Against the background of current activities in research and the corporate landscape to establish Industry 4.0 in the manufacturing sector, new working environments emerge in factories and workshops, directly affecting the workers employed there [1–3]. These emerging changes will lead to new demands on employees in the medium to long term. In order to cope with future tasks and to master the increasing number of systems and information, the employees are further developed into so-called knowledge workers [2, 4]. Therefore, the EU-funded research project Factory2Fit focuses in particular on the design of sustainable and adaptive production environments involving employees as well as the prerequisites and methods necessary for achieving these goals. In the following, the concepts and tools developed in the project for involving and empowering workers by means of Participatory Design (PD) will be presented. PD is a systematic approach for designing workplaces and processes

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 466–474, 2019. https://doi.org/10.1007/978-3-030-29996-5_54


involving experienced employees. It has an advantageous effect on the quality and adaptability of the solutions achieved. Hence, a positive impact results on the success of the company as well as on the wellbeing of workers [5–7]. In addition, this approach leads to greater transparency towards those involved with regard to the solutions found and their respective suitability for fulfilling the task at hand. This transparency in turn creates acceptance for the final solution among the workers and also contributes to the understanding of future users regarding tasks and roles during the design [5]. Since the introduction of PD, a large number of methods have been established to achieve these goals in the field of workplace and process design [8], but there is a lack of support in finding and selecting the right method for a particular situation. Maintaining an overview of all available PD methods is therefore rather difficult. In many cases, people assigned to optimization or planning projects – as, for instance, in the piloting companies of Factory2Fit – have limited experience with the PD approach. This results in difficulties in selecting the most suitable method for a given application and use case. The challenge for these persons is not only the size of the literature pool but also the correct task definition and the uniqueness of the respective company. Despite all these obstacles, companies and worker representatives recognize the need to address this issue [9, 10]. Meeting the challenges of demographic change at the same time yields additional positive effects on the success of the company, as the expectations of the new generation of employees in terms of adequate and flexible involvement and participation are considered. For this reason, a suitable procedure, preferably implemented in a manageable tool, is required for the selection of suitable PD methods, which takes the above challenges into account.
With respect to the increasing digitalization and affinity of employees for digital solutions [11] and in order to limit the time required while ensuring the greatest possible user friendliness, it is advisable to implement a computer-based method selection.

2 Classification of Participatory Design Methods

2.1 Elements for Structuring the Participatory Design Method Cube

The development of a suitable procedure for the selection of PD methods is based on a model for classifying and describing the methods from the field of application. The literature shows a great variety of methodological approaches for the implementation of PD, since the established methods are suitable for a wide range of topics [8]. Although the selection of the most appropriate method is a multidimensional problem, we created a model in a three-dimensional space, as anything beyond three dimensions is barely visually processible by humans. The PD method cube was therefore derived to support the classification and selection of suitable PD methods for specific contexts with reasonable effort. The main aspects incorporated in the PD method cube are shown in Fig. 1. A pre-selection of suitable methods can be made on the basis of the team composition and the knowledge of the individual participants. In addition, the point of application within the innovation cycle is crucial in order to identify suitable methods [12].


Fig. 1. Main aspects for the development of the Participatory Design Method Cube

2.2 Attendees of Participatory Workshops in a Factory Context

Since there are no hierarchical or educational restrictions on participation, the suitability of methods corresponds with the composition of the group of attendees. Even though Factory2Fit’s primary goal is to involve factory workers with different professions, knowledge and responsibilities, they are also supported by, e.g., factory planners or other experts during the workshops. Therefore, the possibly participating stakeholders have to be identified for the pre-selection of suitable methods. Three main criteria have been defined to describe the knowledge base of the group of participants:

• Company affiliation (internal vs. external): The company affiliation describes the positions of the participants. For example, a participant originating from the same company or department where the workshop is conducted has advanced knowledge of processes, hierarchies, corporate culture and other characteristics of the organization. Participants from outside the company or department, on the other hand, usually only have general knowledge or knowledge of the organization where they work.

• Distribution of basic knowledge within the group (homogeneous vs. heterogeneous): The general distribution of knowledge describes the subject areas and their extent of representation in the participant group. The relevant perspectives, such as methodological, hierarchical or role knowledge, must be chosen on a case-by-case basis according to the objectives.

• Previous knowledge of the subject matter (uninitiated vs. experts): The previous knowledge refers to the context of the application of the method and describes the need for prior knowledge building in order to generate a common understanding of the topic.

2.3 Structuring the Participatory Design Method Cube

Starting from the relevant aspects for conducting a pre-selection, the method cube is divided into three dimensions: (1) Team composition, (2) Position in the innovation cycle and (3) Degree of prior knowledge [8]. They represent the main selection criteria for identifying suitable methods in the field of PD. Figure 2 shows the model developed for shaping the PD method cube, the dimensions and characteristics represented in the PD method cube as well as one of the sub-cubes. As demonstrated, three main levels or perspectives are represented by the method cube:


Fig. 2. Dimensions and levels of the participatory design method cube

• Target perspective: In general, the “target perspective” results from the methodological overview and provides suitable procedures with reference to the prerequisites and intended results. The further a design task has proceeded with regard to the innovation cycle, the more details are available due to the achieved progress.

• Team perspective: The “team perspective” covers the participants and their knowledge base. The knowledge base refers to the explicit knowledge of and experience with the process or workplace to be adapted and/or optimized during the design process. Depending on the origin of the participants, different procedures need to be applied.

• Competence perspective: The “competence perspective” combines the previous knowledge of the participants with the innovation cycle phase in its view and thus allows methods for achieving the defined goals to be matched with the skills of the participants.

The three perspectives are necessary to illustrate all interdependencies between the participants’ knowledge and origin as well as to consider the context and objectives of the design process. The main principle of the method cube is the subdivision of the solution space into 27 sub-cubes. Each of them has a defined innovation cycle phase and is characterized by a team composition as well as the available knowledge of the


participants. Suitable methods for the implementation of PD, including their sequence and combination, are determined based on the actual situation. To simplify the usage of the framework presented and to assist factory workers in identifying suitable PD methods, a computer-aided procedure has been developed and implemented prototypically.
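The 27-sub-cube lookup described above can be sketched as a mapping from the three dimension values to candidate methods. The dimension labels and the methods assigned to the example sub-cubes are illustrative assumptions, not the actual Factory2Fit method pool.

```python
from itertools import product

# Illustrative values for the three dimensions of the PD method cube
# (assumed labels following the three axes of Fig. 2).
TEAM = ("internal", "mixed", "external")           # team composition
PHASE = ("analysis", "design", "implementation")   # innovation cycle phase
KNOWLEDGE = ("uninitiated", "mixed", "expert")     # prior knowledge

# Every (team, phase, knowledge) triple addresses one of the 27 sub-cubes.
cube = {key: [] for key in product(TEAM, PHASE, KNOWLEDGE)}

# Populate a few sub-cubes with hypothetical method assignments.
cube[("internal", "analysis", "expert")] = ["Focus Group", "Process Walkthrough"]
cube[("mixed", "design", "mixed")] = ["Brainstorming", "Cardboard Engineering"]

def suitable_methods(team: str, phase: str, knowledge: str) -> list[str]:
    """Pre-select PD methods for the situation at hand via a sub-cube lookup."""
    return cube[(team, phase, knowledge)]

print(suitable_methods("mixed", "design", "mixed"))
# ['Brainstorming', 'Cardboard Engineering']
```

Keying the lookup on a triple keeps the pre-selection a constant-time operation regardless of how many methods are classified.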

3 Designing the Computer-Aided Selection Tool

3.1 Definition and Refinement of the Selection Criteria

With the classification of PD methods presented in Sect. 2 and the derived PD method cube as the initial support, a targeted, computer-aided selection procedure requires further detailing of the dimensions incorporated in the model. For this purpose, the PD method collection planned to be utilized for the Factory2Fit project was first completed. The considered methods were examined with regard to their characteristic properties, e.g. application context, object of investigation and prerequisites. From this, a further subdivision of the dimensions was derived. In the following, the derived criteria as well as their contribution to the method selection are explained.

• Object-related Method Selection: The object of a design process is relevant for the applicability of specific methods, so this criterion serves as an exclusion criterion. While certain methods, such as Cardboard Engineering, have been established for, e.g., the investigation and optimization of assembly workstations, these methods are not or only partially suitable for the determination and revision of cycle or process times for automated systems.

• Time-related Method Selection: An approach developed earlier in our research group for selecting quality methods classifies the methods contained in the pool with regard to their phase-related use. This has proven to be a well-suited approach for the classification and selection of methods. To determine this selection criterion for incorporation, the point of method application needs to be considered; the main demands are a time-related understanding and significance across industries. For this reason, the phases of the innovation cycle also used in the PD method cube framework are the basis for classification in the selection procedure.

• Participant-related Method Selection: In addition to the criteria already set out, the target group of participants provides further important information for the precise selection of a suitable method. In particular, the number of participants has a direct effect on how the workshop will be designed, because the chosen method(s) must be feasible for the expected group size while delivering the desired results. In addition to considering the number of participants, it is important to consider the group composition(s) for the success of the method application. While some methods, such as Brainstorming, benefit from a heterogeneous group composition, other methods, e.g. Focus Groups, require a homogeneous composition. Depending on the context and objective of a task, the evaluation of the group composition with


regard to the participants’ affiliations to organisations, product lines or process chains is therefore considered relevant.

• Knowledge-based Method Selection: Besides the origin of the participants, the available knowledge also plays a decisive role when selecting suitable methods. Depending on the task at hand and the required group composition, the organisation-, product-, process- or method-related knowledge of the attendees ranges from homogeneous to very heterogeneous. Particularly in the context of open innovation and the inclusion of interdisciplinary teams across organizational boundaries, heterogeneity is deliberately increased.

• Resource-based Method Selection: Additional important aspects for the people responsible for selecting methods are the constraints set by the methods and the resources required for their implementation. Two distinct constellations can occur during method selection: (1) the application of the method is planned for the future and the required resources can be provided in time; (2) the application of the method has to take place ad hoc, or the available resources are fixed and only existing means can be utilized. If there is limited availability of resources, the user can specify the available resources for the method selection, and the procedure identifies and excludes methods that cannot be conducted. Otherwise, the required resources do not represent a restriction, so a “no restrictions” option has been provided in the selection procedure; the evaluation of the suitability of the methods based on the constraints is omitted in this case.

The criteria derived for the PD method selection tool as well as the selectable options for the users are shown in the excerpt of the tool’s user interface in Fig. 3.

Fig. 3. Excerpt of the user interface of the participatory design method selection tool

3.2 Implementation of the Procedure for Participatory Design Methods

The demands regarding cross-industry applicability and ease of use of the procedure were also applied to the prototypical implementation by using Microsoft Excel, the spreadsheet program widely used in industry. This makes it easy to implement the logic and to validate the selection procedure in case studies without requiring the pilots to purchase additional software. In addition, all licenses of the Microsoft Office package include the integrated development environment Visual Basic for Applications (VBA), which enables the implementation of extensive logic and functions [13]. This environment was utilized to create both the storage for the classification information and a user-friendly display of the criteria and their respective characteristics in a user interface. All categories and characteristics are illustrated by means of the office suite and were supplemented with short explanatory notes for better understanding. In order to execute the selection procedure, the logic for the categories and/or characteristics and the considered methods needs to be made available in the tool. The provision of the decision base by experienced experts and the easy-to-use input mask of the selection tool enable even untrained employees to identify suitable PD methods. This contributes to the expansion of the application and acceptance of PD in the industrial environment and thus strengthens the involvement and empowerment of workers.

4 Summary and Future Plans

Against the background of the goal of involving employees in production environments in the design of workplaces and processes, PD was presented as a suitable approach for the Factory2Fit project. Due to the large number of PD methods, the lack of method selection procedures and the limited experience of the responsible persons in the companies, the targeted selection of suitable methods for the various tasks is quite difficult. In order to address this initial situation, a classification procedure for the targeted selection of PD methods was developed and mapped as the PD Method Cube. In order to reduce the effort involved in the targeted selection of methods, the developed classification procedure was subsequently implemented as a computer-aided tool using Microsoft Excel and VBA. For this, the dimensions of the method cube were further detailed, and a user interface was designed to simplify the execution of the method selection. In this way, the gap between theory and practice is bridged and the targeted use of PD methods in the case studies is enabled. In the further course of Factory2Fit, the presented results will be piloted under real conditions; the selection procedure for PD methods will be introduced, and the further development of the prototype will be promoted with the involvement of the users. The first workshops conducted with workers at the pilot sites as well as with worker representatives to introduce the concept and tools showed positive attitudes and expectations of the participants towards the utilization of PD [9]. The insights gained from the feedback form the basis for a future implementation of the approach as a web-based decision support system. The web-based design of the final system also provides the framework for the integration of


further functionalities, such as the direct exchange between PD experts and method users or algorithms for improving the task-related evaluation of method suitability. Acknowledgement. The Factory2Fit project has received funding from Horizon 2020 (H2020/2014-2020), the European Union’s Programme for Research and Innovation under grant agreement n° 723277.

References
1. Bundesministerium für Arbeit und Soziales (BMAS): Weissbuch Arbeiten 4.0. Berlin (2017)
2. Bundesministerium für Wirtschaft und Energie (BMWi): Zukunft der Arbeit in Industrie 4.0. Berlin (2014)
3. Kagermann, H., Wahlster, W., Helbig, J.: Deutschlands Zukunft als Produktionsstandort sichern. Umsetzungsempfehlungen für das Zukunftsprojekt Industrie 4.0. Abschlussbericht des Arbeitskreises Industrie 4.0. Deutsche Akademie der Technikwissenschaften e.V. (acatech) (2013)
4. Factory2Fit Consortium: Factory2Fit – Empowering and participatory adaptation of factory automation to fit for workers. Description of the Action (2016)
5. Riedel, R., Schmalfuß, F., Bojko, M., Mach, S.: Flexible Automatisierung in Abhängigkeit von Mitarbeiterkompetenzen und -beanspruchung. In: Gesellschaft für Arbeitswissenschaft e.V. (GfA) (ed.) Dokumentation der Herbstkonferenz der Gesellschaft für Arbeitswissenschaft e.V., 28. und 29. September 2017 in Chemnitz. GfA-Press, Dortmund (2017)
6. Vink, P., Koningsveld, E.A.P., Molenbroek, J.F.: Positive outcomes of participatory ergonomics in terms of greater comfort and higher productivity. Appl. Ergon. 37(4), 537–546 (2006)
7. Abildgaard, J.S., Hasson, H., von Thiele Schwarz, U., Løvseth, L.T., Ala-Laurinaho, A., Nielsen, K.: Forms of participation: the development and application of a conceptual model of participation in work environment interventions. Econ. Ind. Democracy, 1–24 (2018). https://doi.org/10.1177/0143831X17743576
8. Tawalbeh, M., Riedel, R., Horler, S., Müller, E.: Case studies of participatory design. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) APMS 2017. IAICT, vol. 514, pp. 159–167. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66926-7_19
9. Kaasinen, E., et al.: Empowering and engaging industrial workers with Operator 4.0 solutions. Comput. Ind. Eng. (2019, in press). https://doi.org/10.1016/j.cie.2019.01.052
10. Aromaa, S., et al.: User evaluation of Industry 4.0 concepts for worker engagement. In: Ahram, T., Karwowski, W., Taiar, R. (eds.) IHSED 2018. AISC, vol. 876, pp. 34–40. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-02053-8_6
11. Boes, A., et al.: Zwischen Empowerment und digitalem Fließband: Das Unternehmen der Zukunft in der digitalen Gesellschaft. In: Sattelberger, T., Welpe, I., Boes, A. (eds.) Das demokratische Unternehmen. Neue Arbeits- und Führungskulturen im Zeitalter digitaler Wirtschaft, pp. 57–76. Haufe-Lexware, Freiburg (2015)


12. Chen, X., Riedel, R., Bojko, M., Tawalbeh, M., Müller, E.: Knowledge management as an important tool in participatory design. In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) APMS 2018. IAICT, vol. 535, pp. 541–548. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99704-9_66
13. Morgado, F.: Programming Excel with VBA. A Practical Real-World Guide. Springer, New York (2016)

Knowledge Management Environment for Collaborative Design in Product Development Shuai Zhang(&) University of Greenwich, Greenwich, London SE10 9LS, UK [email protected]

Abstract. Knowledge management environments are being developed for product development activities to help companies reuse their knowledge. This trend has been identified in manufacturing companies that operate product design departments at various locations. Investigating how these companies can configure their knowledge management environments to fulfil engineers’ knowledge needs in design activities opens up a research topic. A well-configured knowledge management environment (KME) requires a clear understanding of what key features the KME shall have. The research focuses on the structures and operations of knowledge sharing for product development. A case study of four manufacturing companies was conducted to understand their KMEs. The study contributes to theory by providing an understanding of the structure of KMEs in companies. Researchers in the domain of knowledge management can develop a good understanding of how engineers interact with KMEs, so that they can propose knowledge management systems or methods that make tangible improvements. Chief engineers or managers in companies who are in charge of knowledge management can benefit from the understanding of their own KMEs.

Keywords: Knowledge management · Interorganisational system · Collaborative design

1 Introduction

Knowledge management can help companies reuse the knowledge generated from the design of previous products. Researchers in this domain have proposed and investigated knowledge systems for the capture, storage and retrieval of knowledge [1–3]. Studies also addressed some of the knowledge queries that engineers make in design and how they interact with specific knowledge management systems [4, 5]. Although some studies have attempted to investigate information needs, knowledge sources and interorganisational systems for engineering design [6, 7], there is a lack of systematic knowledge for understanding the KMEs that companies provide for engineers. Thus, the proposed research question for this report is ‘What are the structures and configurations of KMEs to support engineers in collaborative design?’ By answering this

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 475–480, 2019. https://doi.org/10.1007/978-3-030-29996-5_55

476

S. Zhang

question, the research will build up new knowledge about knowledge management for engineering design, which can help companies manage their knowledge reuse. The report presents case studies of the KMEs of four manufacturing companies. It starts by discussing the three terms data, information and knowledge, followed by a review of the literature on knowledge management in engineering design. Section 3 explains how the research is designed and how the data is collected and analysed. Sections 4 and 5 present the findings of the case study and the discussion, including the features of KMEs.

2 Literature Review

Literature in knowledge management usually distinguishes between data, information and knowledge, or at least defines the term 'knowledge' explicitly. There is a consensus among many researchers on the relationship between data, information and knowledge. The general view is that a large amount of data is refined and combined into meaningful structures to create smaller amounts of information, which is further distilled into knowledge when meaningful information is put into context. Ackoff [8] holds that data are symbols representing the properties of objects and events, while information is useful processed data. Information is contained in descriptions and provides answers to questions of who, what, when, where, and how many. Knowledge is conveyed by instructions and provides the answers to 'how-to' questions. This report treats data, information and knowledge as the three terms that cover all entities involved in knowledge management in the current engineering design context.

The general view that knowledge is something more than information has led many authors to draw distinctions. Spek and Spijkervet [9] hold that data are uninterpreted symbols, information is data endowed with meaning, and knowledge is the understanding used to assign meaning to information. Davenport [10] says that data are segmented observations, information is data processed with relevance and purpose, and knowledge is information with value. Sveiby [11] holds the view that information is meaningless and knowledge is interpreted information. Wiig [12] regards information as a combination of facts and data organised to describe situations, while knowledge consists of 'truths and beliefs, perspectives and concepts, judgement and expectations, methodologies and know-how'. The common idea is that data is something less than information, and information is something less than knowledge [13].
However, this does not always imply that data is the prerequisite of information, and that information is the prerequisite of knowledge. Tuomi [13] presents a reversed hierarchy of data, information and knowledge, in which data emerges only after information and knowledge are available.

Knowledge Management Environment for Collaborative

477

3 Research Design

3.1 Sampling

The theoretical sampling method suggests that suitable case companies help develop reliable theories [14, 15], which requires the researcher to select cases based on the theoretical categories of interest in the study. Considering the aims and focus of our study, the potential cases should include multi-location companies from one country. The case companies should also have established knowledge management systems. The following criteria were proposed and used to select case companies.

Criterion 1: Case companies need to be multi-location companies that conduct engineering design.
Criterion 2: Case companies need to be companies from the same developing country.
Criterion 3: Case companies need to have established information management facilities and systems.

Meanwhile, the cases should be selected from companies in the same industry. For the purpose of comparison, it is preferable to have paired cases. This leads to:

Criterion 4: Case companies need to be from the same industries, and if possible, cases in the same industry should be paired.

Furthermore, since the researcher's background is in mechanical engineering, cases were selected from companies that design mechanical products, so that the researcher could better understand the design activities in these companies. It is also helpful to focus on a specific type of product to cross-compare cases. Therefore, we also specify:

Criterion 5: Case companies need to design mechanical products that the researcher has knowledge of.

Finally, it is important to have easy access to the case companies. The researcher approached companies in the following ways: (1) persuading companies to participate in the study to help understand their information management; and (2) using personal contacts and social network sites such as LinkedIn. Four companies fulfilling the above selection criteria were recruited.
The four companies can be regarded as two pairs of cases in two industries in China: machine tools (Company A and Company B) and oil equipment (Company C and Company D).

3.2 Data Collection

Data was collected in 2018, with two visits to each case company. Semi-structured interviews and observations of engineers at work were conducted. The interview process was planned according to Miles and Huberman [16] and Yin [17]. Each participant was interviewed individually on-site. The interviews were recorded with audio devices. Each interview lasted for about one hour. Taped interviews were transcribed immediately after the interview, with notes taken by the researcher. Interviews were followed by observations of engineers at work. Each participant was observed for 32 h (4 working days). Four participants were observed in each case company. During the observations, information sheets were used to record participants’ information queries


and the information sources addressed. By the end of the observation study, 512 h of participants' work had been observed and 685 information sheets had been filled in. A second round of interviews was conducted after the observation study for discussion and feedback, to explore further and ensure the accuracy of the understanding gained in the first-round interviews.

3.3 Data Analysis

Analysis of qualitative data is usually an iterative process that allows intensive interaction between the data and the developing theory [18]. Considering the exploratory nature of this study, an inductive approach was deemed the most suitable. An inductive grounded approach [16, 19, 20] was adopted for the analysis of the data collected, including transcriptions, fieldwork notes and information sheets. The coding process resulted in the emergence of four theoretical codes, which were integrated to develop a typology of KMEs for understanding information management in the case companies' engineering design activities. The four theoretical codes are (1) strategic orientations, (2) KME structures, (3) organizational enablers, and (4) individuals' capabilities. The focus of the integration is to identify the common features of KMEs and the typical knowledge sharing and searching activities in these KMEs. Following configuration theory [21] and organisation theory [22, 23], the typology is developed based on the theoretical codes identified rather than on a classification of the case companies.

4 A Typology of KMEs

Cross-case analysis reveals different strategic orientations of KMEs. Three types of orientation are identified in the cases, namely project-based, document-possessor-based and integration orientation. These are a set of ideal types developed conceptually. Being ideal types, a real firm may come close to several types rather than realising one single type.

4.1 Project-Based KMEs

Typical cases with a project-oriented KME include Company A and Company D. In project-based KMEs, knowledge is stored according to the project it belongs to. The advantage of this approach is that project files can easily be found in the database when a user knows which project the knowledge belongs to. In Company A's database, participants search through documents directly within a tree-shaped hierarchy that allows users to navigate the content directly [24]. From a strategic perspective, a project-based KME is straightforward and easy to manage. With unrestricted access to sufficient information sources, engineers can explore with the help of retrieval systems. However, engineers may easily get lost if the retrieval systems cannot deliver the knowledge they require.
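The project-keyed, tree-shaped storage described above can be sketched as a small data structure. The sketch below is purely illustrative: the class, folder and document names are hypothetical and are not taken from Company A's actual database.

```python
# Illustrative sketch of a project-based KME: documents are filed under the
# project they belong to, and retrieval navigates a tree-shaped hierarchy.
# All names here are invented, not from the case companies' systems.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = {}   # sub-folders, keyed by folder name
        self.documents = []  # document titles filed at this node

    def add_path(self, path, document):
        """File a document under a path such as ['ProjectA', 'Drawings']."""
        node = self
        for part in path:
            node = node.children.setdefault(part, Node(part))
        node.documents.append(document)

    def find(self, path):
        """Navigate the hierarchy; returns None if the path is unknown."""
        node = self
        for part in path:
            node = node.children.get(part)
            if node is None:
                return None
        return node

root = Node("KME")
root.add_path(["ProjectA", "Drawings"], "spindle_assembly.dwg")
root.add_path(["ProjectA", "Reports"], "test_report_v2.pdf")

hit = root.find(["ProjectA", "Drawings"])
print(hit.documents)  # a user who knows the project finds the file directly
```

The sketch also shows the weakness noted above: a lookup for an unknown project path returns nothing, mirroring how engineers get lost when they cannot name the right project.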

4.2 Document-Possessor Based KMEs

Company B is a typical case with a document-possessor-oriented KME. In document-possessor-oriented KMEs, information is stored according to the author or owner of the documents. Knowledge is accessible to participants when they know who filed it. The advantage of this KME is that the company has control over potential confidentiality issues. From a strategic perspective, an engineer who is familiar with the company's organization and database is able to identify the required knowledge. The company keeps good control of information security, while engineers have direct access to knowledge.

4.3 Integration in Workflow

Company C is a typical example of a company with an integration-oriented KME, which is more complicated than the two orientations above. An integration-oriented KME focuses on capturing, storing and reusing design-relevant solutions [25–27], with internal integrated procedures to collaboratively assist knowledge management. Such integrative collaboration operates a series of activities to collect, document and share knowledge for design or product development, distributing product performance and service records to engineers in different departments. The integration-oriented nature of the KME combines the use of human resources and knowledge management systems, which supports the engineers in their design work.

5 Conclusion

The study focused on the KMEs of multi-location companies. It has been shown that there is an increasing tendency for design in product development to be conducted collaboratively in distributed organizations or in company networks. The typology of KMEs enables companies to understand their knowledge management for design activities. By providing reliable and usable support for engineers, companies can improve their product design. This results in good-quality products delivered on time at low cost, which increases the global competitiveness of the companies.

References

1. Ahmed, S.: Encouraging reuse of design knowledge: a method to index knowledge. Des. Stud. 26(6), 565–592 (2005)
2. Ettlie, J.E., Kubarek, M.: Design reuse in manufacturing and services. J. Prod. Innov. Manag. 25(5), 457–472 (2008)
3. Wang, H., Johnson, A.L., Bracewell, R.H.: The retrieval of structured design rationale for the re-use of design knowledge with an integrated representation. Adv. Eng. Inform. 26(2), 251–266 (2012)
4. Eckert, C., Maier, A., McMahon, C.: Communication in design. In: Clarkson, J., Eckert, C. (eds.) Design Process Improvement: A Review of Current Practice. Springer-Verlag, London (2005). https://doi.org/10.1007/978-1-84628-061-0_10


5. Jagtap, S.N.: Capture and structure of in-service information for engineering designers. PhD Dissertation, University of Cambridge (2008)
6. Heisig, P., Caldwell, N.H.M., Grebici, K., Clarkson, P.J.: Exploring knowledge and information needs in engineering from the past and for the future – results from a survey. Des. Stud. 31(5), 499–532 (2010)
7. Reed, N., Scanlan, J., Wills, G., Halliday, S.T.: Knowledge use in an advanced manufacturing environment. Des. Stud. 32(3), 292–312 (2011)
8. Ackoff, R.L.: Ackoff's Best: His Classic Writings on Management. John Wiley and Sons, New York (1999)
9. Spek, R., Spijkervet, A.: Knowledge Management: Dealing Intelligently with Knowledge. Kenniscentrum CIBIT, Utrecht (1997)
10. Davenport, T.H.: Ten principles of knowledge management and four case studies. Knowl. Process Manag. 4(3), 187–208 (1997)
11. Sveiby, K.E.: The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets. Berrett-Koehler, San Francisco (1997)
12. Wiig, K.M.: Thinking About Thinking – How People and Organizations Create, Represent, and Use Knowledge. Schema Press, Arlington (1993)
13. Tuomi, I.: Data is more than knowledge: implications of the reversed knowledge hierarchy for knowledge management and organizational memory. J. Manag. Inf. Syst. 16(3), 103–117 (1999)
14. Eisenhardt, K.M.: Better stories and better constructs: the case for rigor and comparative logic. Acad. Manag. Rev. 16(3), 620–627 (1991)
15. Siggelkow, N.: Persuasion with case studies. Acad. Manag. J. 50(1), 20–24 (2007)
16. Miles, M.B., Huberman, A.M.: Qualitative Data Analysis: An Expanded Sourcebook, 2nd edn. Sage, Thousand Oaks (1994)
17. Yin, R.K.: Case Study Research: Design and Methods, 2nd edn. Sage, Thousand Oaks (1994)
18. Dey, I.: Qualitative Data Analysis: A User-Friendly Guide for Social Scientists. Routledge, London (2003)
19. Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies of Qualitative Research. Routledge, London (2017)
20. O'Reilly, K., Paper, D., Marx, S.: Demystifying grounded theory for business research. Organ. Res. Meth. 15(2), 247–262 (2012)
21. Meyer, A.D., Tsui, A.S., Hinings, C.R.: Configurational approaches to organizational analysis. Acad. Manag. J. 36(6), 1175–1195 (1993)
22. Shi, Y., Gregory, M.: International manufacturing networks: to develop global competitive capabilities. J. Oper. Manag. 16(2–3), 195–214 (1998)
23. Zander, U., Kogut, B.: Knowledge and the speed of the transfer and imitation of organizational capabilities: an empirical test. Organ. Sci. 6(1), 76–92 (1995)
24. Vijaykumar, G., Chakrabarti, A.: Taxonomy for understanding knowledge captured in documents by designers. In: Proceedings of the 16th International Conference on Engineering Design (ICED07), Ecole Centrale, Paris (2007)
25. Liu, S., McMahon, C., Darlington, M.J., Culley, S.J., Wild, P.J.: An automatic mark-up approach for structured document retrieval in engineering design. Int. J. Adv. Manuf. Technol. 38(3–4), 418–425 (2008)
26. Bracewell, R., Wallace, K., Moss, M., Knott, D.: Capturing design rationale. Comput.-Aided Des. 41(3), 173–186 (2009)
27. Zhang, X., Hou, X., Chen, X., Zhuang, T.: Ontology-based semantic retrieval for engineering domain knowledge. Neurocomputing 116, 383–391 (2013)

A Multi-criteria Approach to Collaborative Product-Service Systems Design

Martha Orellano1(B), Khaled Medini2, Christine Lambey-Checchin3, Maria-Franca Norese4, and Gilles Neubert5

1 Mines Saint-Etienne, Univ Lyon, Univ Jean Moulin, Univ Lumière, Univ Jean Monnet, ENTPE, INSA Lyon, ENS Lyon, CNRS, UMR 5600 EVS, Institut Henri Fayol, 42023 Saint-Etienne, France
[email protected]
2 Mines Saint-Etienne, Univ Clermont Auvergne, CNRS, UMR 6158 LIMOS, Institut Henri Fayol, 42023 Saint-Etienne, France
3 Univ Clermont Auvergne, EA3849 CleRMa, 63008 Clermont-Ferrand, France
4 Politecnico di Torino, DIGEP, Turin, Italy
5 Emlyon Business School, CNRS, UMR 5600 EVS, 42009 Saint-Etienne, France

Abstract. The design of innovative systems involves a complex decision-making process spanning different criteria and stakeholders. The complexity of the design process is heightened at its early stages by data scarcity, which introduces high uncertainty and vagueness. Product-Service Systems (PSS), which are bundles of products and services designed to fit complex customer needs, are an example of such innovative systems. PSS design can thus be approached as a multi-criteria and multi-stakeholder decision process. The aim of this research is to provide a consistent framework for decision aiding in the early stages of collaborative PSS design. The framework was built within a collaborative project involving a French company interested in innovative solutions for managing its safety clothing system. At the methodological level, the Analytic Hierarchy Process (AHP) was used.

Keywords: Collaborative innovation · PSS design · AHP

1 Introduction

Product-Service Systems (PSS) are bundles of products and services designed to improve competitiveness by satisfying customers' needs during the entire life cycle of the offer [5]. Designing PSS requires strong collaboration among several actors along the supply chain, aiming at creating higher value than traditional offers [2]. Developing a collaborative PSS implies that the value to be co-created should be clearly and explicitly defined (i.e., beyond economics, involving organizational and sustainability dimensions), and that actors' expectations should be

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 481–489, 2019. https://doi.org/10.1007/978-3-030-29996-5_56


deeply understood [5]. Consequently, PSS design can be approached as a complex multi-criteria and multi-stakeholder decision process [2,6]. Several methodologies have been proposed in the literature to design and evaluate PSS offers. Most studies take the provider's perspective, focusing on the operational design of PSS alternatives [2,5,8,9,13]. Few works consider the customer's perspective in the early phases of PSS design [1,6,10,12]. This research explores the customer's perspective on early PSS design in the absence of a predefined set of PSS alternatives. An empirical study is carried out within a French company regarding its safety clothing system. The adopted approach starts by identifying the customer's needs and setting its objectives, then identifying the key actors, and finally drawing out a set of possible alternatives in collaboration with the providers. The Analytic Hierarchy Process (AHP) method is implemented to support the decision process. Indeed, the aim of this research is to provide a consistent framework for decision aiding in the early phases of collaborative PSS design, integrating both provider and customer perspectives. The structure of the paper is as follows. Section 2 briefly reviews Multi-Criteria Decision-Aiding (MCDA) approaches applied to PSS design. Section 3 introduces the AHP method, explaining its adequacy for early PSS design. Section 4 introduces the case study, describing the empirical approach and the application of AHP. The discussion is carried out in Sect. 5. Finally, conclusions and research perspectives are presented in Sect. 6.

2 MCDA Approaches for PSS Design

Decision aiding has become an important stream of research in the PSS literature, particularly during the last decade [2,5,8,9,13]. One of the main reasons why this research stream is gaining importance is the need for collaboration between several actors to develop PSS offers, seeking a trade-off between their expectations [2,6]. Engaging a decision process depends on the time horizon, the availability and quality of data and knowledge, and the quality of the key actors' relationships; these elements influence how effective the decisions are [2]. Additionally, making decisions in the early stages of PSS design involves an important level of uncertainty, vagueness and subjectivity, induced by the lack of knowledge [6]. To deal with these issues, several researchers have used MCDA techniques. Most studies take the provider's perspective, being interested in the operational analysis and design of a given set of PSS alternatives [8,9,13]. Methods such as TOPSIS and VIKOR have been used to choose among PSS alternatives by comparison with an ideal solution already available in the market [11]. In this case, decision approaches are often based on costing and environmental assessment, requiring a significant amount of data. Few works consider the customer's perspective in the early phases of PSS design [1,6,10,12], in which data is rather scarce, increasing the complexity of the decision process. Here, decision-aiding approaches are mostly based on the judgement of actors, and general aspects of the offer are described qualitatively. However, these studies are carried out within a structured decision situation, in which


there is a clear set of PSS alternatives proposed by a given provider. Against this background, there is a gap in the PSS design literature concerning early decision making from the customer's perspective (i.e., criteria prioritization) when there is no predefined set of alternatives. AHP can be used to prioritize the evaluation criteria for PSS design before dealing with the choice of an alternative [4]. The next section explains AHP, a judgement-based method extensively used in unstructured decision contexts, which will be explored in this research.

3 AHP for Early Stages of PSS Design

The Analytic Hierarchy Process (AHP) is an MCDA method introduced by Thomas Saaty [7]. The main foundation of AHP is psychological, seeking to integrate the actors' judgement into the decision process. AHP aims at systematizing actors' subjectivity instead of eliminating it [7]. It consists of two main phases: modelling and evaluation. The modelling phase structures the problem in a hierarchy involving goals, criteria and alternatives. The evaluation phase performs pairwise comparisons between the elements at each level of the hierarchy, using an ordinal scale (Table 1) [7]. The pairwise comparison process obeys the following mathematical rules [7]. For n elements, a matrix A ∈ Rn×n of comparisons is obtained, which is positive and reciprocal. Only n×(n−1)/2 comparisons aij are needed, taking into account that aii = 1 and aij = 1/aji, ∀i, j ∈ {1, · · · , n}. As explained in [4], this process results in a judgement matrix A for each category of comparison. To calculate the weights, the values in the columns of A are normalized, and the averages of the rows are then calculated, yielding the so-called eigenvector or vector of priorities [7]. From the theoretical point of view, AHP is based on three main axioms:

1. Homogeneity: elements in the same category should be comparable.
2. Hierarchy: elements in each level of the hierarchy should be independent.
3. Reciprocity: expressed as aij = 1/aji.

AHP does not need quantitative data to carry out the evaluation process, since it is based on the value judgements of actors. Furthermore, it allows several actors to be integrated into the decision process and takes the problem context into account [7]. Thus, it appears suitable for the early stages of PSS design. Using this method for preliminary decision aiding brings two main benefits: (i) illustrating the actors' priorities (i.e., weighting the evaluation criteria) and their compatibility, and (ii) verifying the adequacy of the current problem representation (i.e., the hierarchical model constituted by value dimensions, criteria, and alternatives).

4 The Case of a French Company

In this paper we analyse the case of a large-sized company in France, which will be designated as 'C' for confidentiality reasons. C's main activity is the production and distribution of energy. One of the company's most important support activities is the provision of safety clothing for its employees. Currently, the safety clothes belong to the employees, who are fully responsible for their usage, maintenance and recycling. This process leaves the company with a lack of information about the performance of the safety clothing system during the middle-of-life (MOL) and end-of-life (EOL) phases. Given this situation, C launched a two-year innovation project (2017–2019). The aim of this project is to transform the current offer by considering its entire life cycle, moving from a product-based offer towards a PSS offer. Since the project is about the early design phase of a PSS, C faces several decisions involving internal and external actors. As the project is in the context of purchasing decisions for innovative offers, the prioritization of the selection criteria is a major condition. AHP has been used for purchasing decisions in several contexts [3]. In this particular context, AHP facilitates the task of experts in expressing their preferences for sustainability criteria, following a collaborative approach.

Table 1. Scale of preferences of AHP [7].

  Importance level of i over j   Verbal scale
  1                              i and j are equally important
  3                              Weak preference of i over j
  5                              Net preference of i over j
  7                              Very strong preference of i over j
  9                              Absolute preference of i over j

4.1 The Collaborative Process in C

The research project is structured in three main phases: (i) structuring of the decision situation, (ii) identifying feasible business models based on PSS, and (iii) the transformation of C's purchasing strategy. Currently, the methodology covers the first and second phases, which can be summarized as follows from a practical point of view:

Intervention with internal actors: a workshop with employees from the departments of Human Resources (HR), Purchasing, Prescription, Research & Development (R&D) and Sustainable Development (SD). The aim of this step is to set the objectives and expectations of C.

Intervention with external actors: a workshop with key contractual and potential providers from the fields of confection, transportation, maintenance (washing) and end-of-life treatment. The aim is to understand the providers' expectations and their capabilities to answer C's needs.

Decision situation structuring: a collaborative workshop between internal and external actors to identify potential alternatives responding to C's needs and to formalize the evaluation criteria.

4.2 AHP Application in C

Since C is interested in reviewing the entire value chain associated with the safety clothing, the decision situation has been broken down into the three main life cycle stages of the offer: beginning of life (BOL), middle of life (MOL) and end of life (EOL). Given this holistic approach, the project involves several internal and external actors. Internal actors belong to the departments of Purchasing (project coordinator), Prescription, Sustainable Development, and Human Resources. External actors are confectioners, logistics providers, washing service providers, EOL service providers, and social and environmental organizations. In the following, the decision situation of C is explained from the highest level of abstraction to the lowest one, shaping the hierarchical structure proposed by AHP.

Level 1 – Definition of the Dimensions of Value Creation. The first level of reasoning corresponds to the identification of the main objectives linked to the decision process. Five main categories, called "value dimensions", were identified. They are linked to C's expected benefits. The five value dimensions have been defined in collaboration with the internal actors since the beginning of the project, and they are supported by the literature on PSS and value co-creation. They are defined as follows:

(E) Economic: refers to the economic benefits and costs for each stakeholder along the offer life cycle.
(N) Environmental: concerns the environmental impact of the offer, involving resource consumption and emissions to the air, soil and water.
(S) Social: considers the contribution to the well-being of the internal and external stakeholders.
(R) Relational: refers to the value derived from the quality of the relationships between the actors (i.e., enabling the common construction of knowledge).
(F) Functional: refers to the level of fit between the offer's functions and the customer's expectations.

Level 2 – Design of Evaluation Criteria.
The evaluation criteria were designed during an internal seminar conducted towards the end of 2018. It involved the collaborative work of four groups formed by employees from the key departments of C (HR, R&D, SD, Purchasing and Prescription). Each group identified a set of criteria for each value dimension, establishing the link with C's strategy. The criteria were analysed and synthesized by the research group, in collaboration with C for their validation. Table 2 shows the list of the social criteria.

Table 2. Social criteria (S) of C.

  S.1 – Social performance of providers: social evaluation of the provider according to the provider audit carried out by C.
  S.2 – Employees resistance: degree of employees' resistance (to change) to choosing one alternative over another regarding the clothing system; score obtained through a satisfaction survey.
  S.3 – Solidarity purchasing: purchasing of washing, maintenance and end-of-life treatment services for the safety clothes from the SAP.
  S.4 – Local job generation: number of new local jobs created by the new clothing system.

Level 3 – Identification of Alternatives. The second seminar was conducted with the providers (Dec. 2018). The actors worked collaboratively to generate ideas about possible alternatives fitting C's objectives. Individual interviews were carried out with 11 providers to detail the offers. It was possible to draw out the decision focus for each stage of the life cycle. For the BOL, the main decision concerns the type of fibre used to manufacture the safety clothes. Decisions on the MOL focus on the washing system and the type of technology for ensuring the clothes' traceability. For the EOL, decisions concern the system for waste revaluation. The interviews resulted in a first overview of potential alternatives.

Performing the AHP Evaluation. The data was computed automatically with the R software, using the AHP library. The weights were calculated for each stage of the life cycle. The evaluation process relies on questionnaires answered by the internal actors of C. Six actors from different departments, all with good knowledge of the project, were chosen. The questionnaires were answered with the guidance of the research team involved in the project. Table 3 shows the criteria weighting obtained from the Purchasing department.
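Since several actors answer the questionnaires, their judgements eventually have to be combined. A standard AHP practice (aggregation of individual judgements) is the element-wise geometric mean of the comparison matrices, which preserves reciprocity; the sketch below illustrates it with two invented 2×2 matrices and does not claim to reproduce the procedure or data used in the project.

```python
# Aggregating two actors' pairwise judgements by the element-wise geometric
# mean, a standard AHP group-aggregation technique. Matrix values are
# invented; they are not the case-study questionnaire answers.
import numpy as np

actor1 = np.array([[1.0, 3.0], [1/3, 1.0]])
actor2 = np.array([[1.0, 5.0], [1/5, 1.0]])

# The geometric mean preserves reciprocity: g_ij = 1 / g_ji still holds.
group = (actor1 * actor2) ** 0.5

# Weights from the aggregated matrix, as in the individual case.
weights = (group / group.sum(axis=0)).mean(axis=1)
print(group[0, 1])   # sqrt(3 * 5) ~ 3.873
print(weights)
```

A 2×2 comparison matrix is always perfectly consistent, so the consistency check only becomes meaningful for three or more criteria.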

5 Discussion

Based on Table 3 it is possible to highlight some preliminary conclusions about the preferences of Purchasing department. First, the economic dimension is a priority for all the stages of the life cycle. Second, the functional dimension is the most important aspect in the beginning of life (BOL), which is explained by the primary importance of keeping employees safe; however, this dimension is rather insignificant in the middle and end of life. Third, environmental and social dimensions have similar importance in the middle and end of life. This can be explained by the interest of C of reducing long-circuits purchasing (i.e., re-locating production in France), and a good knowledge of the market in these phases of the life cycle. However, the importance of the environmental dimension in the EOL is the lowest among all the dimensions, which would be explained by the lack of knowledge regarding this phase, combined with a misinterpretation of

A Multi-criteria Approach to Collaborative Product-Service Systems Design

487

Table 3. Preliminary criteria weights in each life cycle stage in C.

Criterion                      | BOL^a  | I^b   | MOL   | I     | EOL   | I
E – Economic                   | 32.8%  | 0.0%  | 42.4% | 0.0%  | 59.4% | 0.0%
E.1 – Life cycle cost          | 28.7%  | 0.0%  | 37.1% | 3.7%  | 52.0% | 0.0%
E.2 – Purchasing cost          | 4.1%   | 0.0%  | 5.3%  | 0.0%  | 7.4%  | 0.0%
N – Environmental              | 15.2%  | 0.0%  | 16.9% | 0.0%  | 4.5%  | 0.0%
N.1 – Fiber env. quality       | 12.7%  | 0.0%  | -     | -     | -     | -
N.2 – Provider env. perf.      | 2.5%   | 0.0%  | -     | -     | -     | -
N.3 – Chemical use             | -      | -     | 14.8% | 0.0%  | -     | -
N.4 – Carbon footprint         | -      | -     | 2.1%  | 0.0%  | 0.8%  | 0.0%
N.5 – Recycling rate           | -      | -     | -     | -     | 3.7%  | 0.0%
R – Relational                 | 4.5%   | 0.0%  | 15.8% | 0.0%  | 7.9%  | 0.0%
R.1 – Brand image              | 3.7%   | 0.0%  | 13.8% | 0.0%  | 6.9%  | 0.0%
R.2 – Innovation sharing       | 0.8%   | 0.0%  | 2.0%  | 0.0%  | 1.0%  | 0.0%
S – Social                     | 12.9%  | 0.0%  | 15.0% | 28.1% | 23.2% | 41.5%
S.1 – Provider social perf.    | 1.6%   | 0.0%  | -     | -     | -     | -
S.2 – User resistance          | 11.0%  | 0.0%  | 10.3% | 0.0%  | 15.6% | 0.0%
S.3 – Solidarity purchasing    | -      | -     | 3.5%  | 0.0%  | 1.6%  | 0.0%
S.4 – Local job generation     | -      | -     | 1.2%  | 0.0%  | 6.0%  | 0.0%
F – Functional                 | 34.6%  | 0.0%  | 9.8%  | 0.0%  | 5.3%  | 0.0%
F.1 – Fiber quality            | 17.3%  | 0.0%  | -     | -     | -     | -
F.2 – Availability             | 17.3%  | 0.0%  | -     | -     | -     | -
F.3 – Comfort                  | -      | -     | 7.3%  | 0.0%  | -     | -
F.4 – Clothing lifespan        | -      | -     | 2.0%  | 0.0%  | -     | -
F.5 – Traceability efficiency  | -      | -     | 0.5%  | 0.0%  | -     | -

^a BOL: beginning of life; MOL: middle of life; EOL: end of life. ^b Consistency ratio.

the environmental criteria. Finally, the relational dimension is the least important one from the Purchasing department's perspective, focusing on brand image. There is a general consistency in the judgements, except for the social dimension, in which inconsistencies exceed the maximum level allowed for AHP (I ≥ 0.1). These inconsistencies indicate a misunderstanding of the social criteria, explained by their high level of abstraction. From this evaluation, it is possible to recognize the need to clarify some criteria, especially abstract ones such as the relational and social dimensions. The remaining evaluations should provide enough evidence to obtain a well-structured set of priorities for value dimensions and criteria. At least two possible scenarios could take place: (i) the evaluations are completely divergent, or (ii) the evaluations show a clear convergence between the actors' perspectives. In the first case, it is not possible to draw reliable conclusions,


M. Orellano et al.

and a redefinition of the model and the decision approach should be considered. In the second case, small modifications of the model are required, and it is possible to draw out some conclusions about the criteria priorities.
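The consistency screening discussed above (acceptable judgements require a consistency ratio below 0.1) can be sketched as follows. This is an illustrative Python/NumPy sketch of Saaty's consistency ratio, not the authors' R implementation; the two matrices are hypothetical examples of an inconsistent and a consistent judgement set:

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices by matrix size

def consistency_ratio(M: np.ndarray) -> float:
    """Consistency ratio CR = CI / RI of a pairwise comparison matrix."""
    n = M.shape[0]
    lam = np.max(np.linalg.eigvals(M).real)   # principal eigenvalue
    ci = (lam - n) / (n - 1)                  # consistency index
    return ci / RI[n]

# Circular (contradictory) judgements: 1 > 2, 2 > 3, yet 3 > 1.
A = np.array([[1,   3,   1/3],
              [1/3, 1,   3],
              [3,   1/3, 1]])

# Perfectly consistent judgements: each entry equals the ratio of weights.
B = np.array([[1,   2,   4],
              [1/2, 1,   2],
              [1/4, 1/2, 1]])

print(consistency_ratio(A) >= 0.1)   # judgements should be re-elicited
print(consistency_ratio(B) < 0.1)    # judgements are acceptable
```

In the case study, matrices like A would correspond to the social-dimension judgements that exceeded the threshold and therefore call for clarifying the criteria with the actors.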

6 Conclusions

This paper presents an atypical design problem in the domain of Product-Service Systems (PSS), in which the innovation is triggered by the customer, without any knowledge about possible alternatives. The main intent of this research is to provide a supporting methodology to systematize the decision process in a highly unstructured situation for early PSS design. This facilitates the co-creation process between customers and providers. Here, the AHP approach, based on actors' judgements, provides a framework to structure such a complex decision situation. A correct application of AHP requires domain-specific knowledge, an understanding of the potentialities and limits of the methodology, and a good representation of the elements to be analysed. This approach helped to clarify the points of view of the key actors and their compatibility. The output of this work is a semi-structured decision model to support the early phases of PSS design in company C, providing the weights of criteria and value dimensions. In the next phase of the project, a multi-criteria decision approach should be used to compare a set of alternatives.

References

1. Bertoni, A., Bertoni, M., Johansson, C.: Analysing the effects of value drivers and knowledge maturity in preliminary design decision-making. In: Design Information and Knowledge Management. International Conference on Engineering Design, Design Society (2015)
2. Dahmani, S., Boucher, X., Peillon, S., Besombes, B.: A reliability diagnosis to support servitization decision-making process. J. Manuf. Technol. Manag. 27(4), 502–534 (2016)
3. Mani, V., Agrawal, R., Sharma, V.: Supplier selection using social sustainability: AHP based approach in India. Int. Strateg. Manag. Rev. 2(2), 98–112 (2014)
4. Medini, K., Cunha, C.D., Bernard, A.: Tailoring performance evaluation to specific industrial contexts – application to sustainable mass customisation enterprises. Int. J. Prod. Res. 53(8), 2439–2456 (2015)
5. Neubert, G., Lambey-Checchin, C.: The sustainable value proposition of PSSs: the case of ECOBEL "Shower Head". Procedia CIRP 47, 12–17 (2016)
6. Rondini, A., Bertoni, M., Pezzotta, G.: An IPA based method for PSS design concept assessment. Procedia CIRP 64, 277–282 (2017)
7. Saaty, T.L.: How to make a decision: the analytic hierarchy process. Eur. J. Oper. Res. 48(1), 9–26 (1990)
8. Schmidt, D.M., Malaschewski, O., Mörtl, M.: Decision-making process for product planning of product-service systems. Procedia CIRP 30, 468–473 (2015)
9. Shen, J., Erkoyuncu, J.A., Roy, R., Wu, B.: A framework for cost evaluation in product service system configuration. Int. J. Prod. Res. 55(20), 6120–6144 (2017)


10. Song, W., Ming, X., Han, Y., Wu, Z.: A rough set approach for evaluating vague customer requirement of industrial product-service system. Int. J. Prod. Res. 51(22), 6681–6701 (2013)
11. Song, W., Sakao, T.: A customization-oriented framework for design of sustainable product/service system. J. Clean. Prod. 140, 1672–1685 (2017)
12. Song, W., Sakao, T.: An environmentally conscious PSS recommendation method based on users' vague ratings: a rough multi-criteria approach. J. Clean. Prod. 172, 1592–1606 (2018)
13. Zhang, W., Guo, J., Gu, F., Gu, X.: Coupling life cycle assessment and life cycle costing as an evaluation tool for developing product service system of high energy-consuming equipment. J. Clean. Prod. 183, 1043–1053 (2018)

ICT for Collaborative Manufacturing

MES Implementation: Critical Success Factors and Organizational Readiness Model

Daniela Invernizzi1, Paolo Gaiardelli1(&), Emrah Arica2, and Daryl Powell3

1 University of Bergamo, Bergamo, Italy
[email protected], [email protected]
2 Sintef Digital, Oslo, Norway
[email protected]
3 Norwegian University of Science and Technology, Trondheim, Norway
[email protected]

Abstract. Manufacturing Execution Systems (MES) have evolved to alleviate the drawbacks of Enterprise Resource Planning (ERP) systems by providing real-time information exploitation from the shop floor. In parallel with the increasing number of companies adopting MES, the number of MES vendors has increased exponentially over the past two decades. While companies tend to focus merely on the technological aspects of MES implementation, the adoption of MES implies an organizational transformation process that needs to be properly addressed by companies for implementation success. This is important because the new functions, services, and operability offered by the MES need to be aligned with existing business processes and practices. Considering the human, technological, and organizational aspects holistically, this paper outlines critical success criteria and proposes an organizational readiness model for MES implementation.

Keywords: Manufacturing Execution System · Critical Success Factors · Organizational skills readiness model

1 Introduction

The implementation of Information Systems (IS) can bring many benefits to production planning and control, including ease of management, competitive advantage, efficient use of resources, effective data exchange, and the ability to provide accurate and timely information for decisions. As such, Manufacturing Execution Systems (MES) promise to improve the production control task through the collection and analysis of data in real-time. Data and information are made available to all users involved in the production activities in order to measure current performance, analyse flow operations and identify opportunities for improvement. However, when a company opts to implement MES, it often has limited knowledge regarding the factors for a successful implementation. Many unsuccessful MES implementations can be attributed to critical errors in the selection and adoption of the system.

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 493–501, 2019. https://doi.org/10.1007/978-3-030-29996-5_57


D. Invernizzi et al.

The implementation process must be aligned with the company's strategy, and a correctly planned and controlled implementation method is required. In the context of MES implementation, it is important to draft the most realistic model for the future manufacturing organization. The model can facilitate the understanding of which steps are needed and what activities are necessary to guide the manufacturing control tasks successfully. Based on a comprehensive literature review using the search terms "Manufacturing Execution System", "MES", "Enterprise Resource Planning", "ERP", "Enterprise Systems", "Information Systems", "Critical Success Factors" and "Implementation" in Science Direct and Google Scholar, this paper outlines the critical success factors for MES implementation, taking human, technological, and organizational (HTO) aspects into account. The identified factors are further consolidated into an organizational readiness model (ORM) to assess the preparedness of companies for MES implementation.

2 Critical Success Factors in MES Implementation

Critical Success Factors (CSFs) are defined as "the few key areas of activity in which favourable results are absolutely necessary [for a particular manager] to reach [his] goals" [1]. While there is a significant amount of research on CSFs in the implementation of ERP and other enterprise systems, the literature on CSFs for MES implementation is scarce. However, as an enterprise system, MES implementation possesses many similarities with ERP systems. As such, this study also benefits from extant literature by building on CSFs for ERP. The identified CSFs were categorised into three dimensions: human, technological, and organizational (HTO). This categorisation is consistent with [2], which states that human, technological and organizational factors represent the most significant dimensions for successful implementation of information systems.

2.1 Human Factors

Human skills, perception, and experience play a central role in successful MES implementation. Our literature search uncovered the following human factors: Project Team. A MES implementation team should consist of the best and brightest individuals, to foster innovation and creativity [3–5]. Individuals with greater knowledge, reputation and influence within the organization are the preferred candidates. Building a cross-functional team is also important. The competence and knowledge of the members should complement each other [6]. External consultants should be a direct part of the project team, to support the implementation [7] and help employees develop the necessary technical skills for design and implementation [8]. In addition, as successful implementation requires trust and the willingness to cooperate, teams with high morale, motivation and awareness are more likely to achieve successful MES implementation.


Communication. Communication is one of the most challenging and difficult tasks to achieve in any implementation project. Top management, project teams, employees and other stakeholders must communicate directly and clearly, sharing detailed information about the project status. A good level of communication should be established from the early stages of the implementation, making results and objectives easy to understand [9]. Education and Training. Employee education is a priority in the early phases of the project, as people must be aware of what will change and be guided towards acquiring new skills [10]. Training should concern the software and its functions, IT skills, the quality and accuracy of data, and the responsibilities in the new processes [9]. An appropriate training plan requires the identification of the right methodologies to adopt for each user, considering his/her individual characteristics, knowledge and skills. The knowledge of consultants or software vendors can be used to enable autonomous management of activities and problems [11]. User Involvement. User participation is a key parameter for improving the quality of any enterprise system and its information. The users' involvement should be foreseen in two phases: first, to define the needs to which the system will have to respond and, second, to define the level of participation in the software implementation [9]. With users' contributions, the system can be developed as a more user-friendly application. Accessing the correct data when needed improves individual performance. Moreover, employees with more job responsibilities are more productive, loyal and satisfied with their work [12, 13].

2.2 Technological Factors

Technological factors affect MES adoption, especially in terms of user-friendliness and software adoption success. The literature reveals the following technological factors: MES Selection Process. Hardware selection is driven by choices on a software package [14]. Therefore, during the selection process, it is important to identify the requirements to be met by the selected MES, plan and perform a professional analysis of the software package, and ensure the fit between the MES and the existing enterprise system. Additionally, software stops due to errors and maintenance should be assessed [5, 15]. Technological Infrastructure. A MES implementation implies a complex transition from previous information systems and business processes. Before starting the implementation, the new enterprise system's architecture should be established. This should prevent reconfiguration at every stage [16]. As the software architecture could be modified to meet the requirements of the MES, legacy systems must be evaluated in order to be ready to face problems and hindrances during implementation [7]. Moreover, the definition of an appropriate architecture that can support and integrate software and hardware must be considered. System integration with other IS, as well as system interoperability in terms of flexibility, accessibility, integration and efficiency, is also essential [17]. This aspect will no doubt prove to be of fundamental importance in the coming years, to better manage the new challenges created by Industry 4.0 [18].


Data Management. Data quality management refers to the selection of data and the level of accuracy satisfied by the implemented system [19]. It is recommended that the data analysis plan be developed from the beginning. The data model must be compatible with the data requirements to avoid implementation delays. Preventive measures can be taken by developing a plan for migrating and cleaning up data. Identification of the data that must be uploaded and conversion of data structures into a single consistent format are also crucial. In addition, support tools must be deployed to monitor the accuracy, timeliness, completeness, consistency, accessibility and security of data [20, 21].

2.3 Organisational Factors

Organizational factors are often overlooked by companies during the MES implementation process [7]. The following critical factors were identified from the organizational perspective. Top Management Support. Top managers must be fully involved and prepared to allocate and provide valuable resources to the implementation [22, 23]. In addition, top management should also influence the process of selecting the MES delivery partner [24], handling users' resistance [25] and encouraging users' participation during the implementation. Among others, the Project Champion is the top manager devoted to supporting the implementation project and the reference person to whom the team refers in case of problems or conflicts [26]. He/she is responsible for the identification of strategic objectives, and collaborates with the project team to check whether the business perspective is successfully translated into the adopted solution. Project Management. Good project management is vital to successful MES implementation [27]. Project management refers to the ongoing management of the implementation plan [28], which involves the creation of teamwork to support the implementation process. Therefore, project team tasks should be defined and comprehensively documented, and inter-departmental cooperation between the involved stakeholders must be set up [29]. Coordinating, scheduling and monitoring the defined activities, as well as identifying and managing risks, ensure that the stated objectives are achieved [9]. Moreover, realistic and up-to-date input is required to avoid any unnecessary delay [26]. Change Management. Change management strategies aim to handle the enterprise-wide cultural and structural changes [7]. Sharing a solid culture and corporate identity facilitates the management of this change [15, 30]. Moreover, resistance to change could hinder the transformation and delay the project. Organisations with a continuous improvement mind-set are more open to change [7]. This suggests that firms that manage MES implementation as a structured improvement project may be more likely to succeed. Business Process Re-engineering. Manufacturing companies often find it difficult to adapt their traditional processes to software implementation requirements. Business Process Re-engineering (BPR) might be adopted to make business and production processes more flexible [31], thus reducing errors and simplifying the integration of future MES updates. As the implementation of MES leads to changes in the


management system, it is necessary to manage and control such a change according to the selected BPR plan and implementation strategy. Implementation Strategy. A clear strategic vision of the implementation process ensures a more effective result. An initial strategic plan, including the management of expectations, the alignment between business and IT strategies, and business change, provides a macro perspective of the implementation. As users' and stakeholders' involvement reduces barriers and conflicts, it is important to act positively, fostering communication and expanding knowledge about the new software [7]. Acceptance Control. Measurement and evaluation of the implementation process ensure that deviations can be identified and promptly corrected [7]. Therefore, building performance indicators and mechanisms to monitor progress is essential. Key performance indicators should be analysed and updated throughout the project, thus contributing to system acceptance and improvement.

3 Organizational Readiness Model

We suggest that an Organizational Readiness Model (ORM) could help organizations understand their current level of preparedness before embarking on the implementation of a MES system, for example to identify strengths, weaknesses, opportunities and threats related to such an implementation. As such, we categorized the previously identified HTO-factors through dialogue with a group of industrial IT professionals in Norway and Italy. The model is built upon four levels of maturity: Premature, Aware, Willing, and Ready. These levels are summarized in Table 1:

Table 1. Organizational skills readiness model for MES implementation

Readiness level | Description
1 – Premature   | The organization has decided to implement MES but has very limited knowledge of all HTO-factors
2 – Aware       | The organization has minimum knowledge of MES and demonstrates effective mechanisms for education and learning as well as project management
3 – Willing     | The organization demonstrates the required level of top management support and exhibits a grand plan for how the implementation process will be executed
4 – Ready       | The organization is aware of how to manage data, monitor implementation and is able to govern the change management process effectively

3.1 Critical Success Factors at Each Readiness Level

For each of the identified readiness levels (1–4), it is anticipated that an organization should be able to relate to the various critical success factors (Fig. 1). For example, in the "Premature" phase, when the desire to implement MES is born in the organization, much of the pre-emptive knowledge and characteristics required


for successful MES implementation will not be exhibited by the organization. Before compiling the business case and subsequent implementation strategy, basic knowledge, education and training should be received by key parties at the organization. Selecting the correct vendor is important to create a relationship based upon mutual trust, as the collaboration that follows shall make or break the implementation.

Fig. 1. Organizational readiness model: CSFs for MES implementation

At the "Aware" level, the focus shifts to the company's project management capability, and to understanding the required integration level within the IT architecture. When internal skills are insufficient, the support of vendors or external consultants may be necessary. The ability to construct and deploy an effective communication strategy is also a key criterion at this level. At the third level of maturity, "Willing", the roles of the project manager and the project team become central. Members must be carefully selected as they play a primary role during the software implementation. Wrong choices of team members, as well as the selection of an unskilled or uninfluential project manager, could jeopardize the whole project. The company starts a process re-engineering phase, and management changes are needed. At the "Ready" level, the focus shifts mainly to data management and change management. Critical aspects also concern the technological infrastructure. However, the


acceptability of the new system must be taken into account, as imposing a new way of working and convincing people of the validity of the new system is not straightforward. If an organization demonstrates all of the requirements at each level, we suggest that the likelihood of successful MES implementation is maximised.
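The level-by-level gating described in this subsection can be sketched as a simple data structure. The CSF names and their allocation to levels below are a simplified assumption for illustration, not the exact mapping of Fig. 1:

```python
# Illustrative encoding of the readiness model: each maturity level gates
# on a set of CSFs; an organization sits at the highest level for which
# it (and all lower levels) demonstrates the required factors.
LEVEL_CSFS = {
    1: {"education and training", "vendor selection"},
    2: {"project management", "technological infrastructure", "communication"},
    3: {"project team", "business process re-engineering"},
    4: {"data management", "change management", "acceptance control"},
}

def readiness_level(demonstrated: set) -> int:
    """Return the highest readiness level whose CSFs (and those of all
    lower levels) are demonstrated; 0 if even level 1 is not met."""
    level = 0
    for lvl in sorted(LEVEL_CSFS):
        if LEVEL_CSFS[lvl] <= demonstrated:   # all CSFs of this level present
            level = lvl
        else:
            break
    return level

org = {"education and training", "vendor selection",
       "project management", "technological infrastructure", "communication"}
print(readiness_level(org))   # -> 2: the organization is "Aware", not yet "Willing"
```

An assessment like this makes the model's cumulative logic explicit: a missing level-3 factor (here, the project team) caps the organization at "Aware" even if some level-4 factors were already in place.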

4 Conclusion

This paper outlines the critical success factors (CSFs) for MES implementation, considering human, technological and organizational (HTO) aspects holistically. An organizational readiness model for assessing the preparedness of an organization in terms of each of the CSFs is further proposed. Besides these theoretical contributions, our model can be used by practitioners (i.e. managers, vendors and consultants) to gain a global overview of a MES implementation. As organizations very often do not have the knowledge and resources necessary for the adoption of new software, this work constitutes an opportunity to benefit from a structured approach for an effective and fruitful implementation of MES. Moreover, the achieved results can be used to create guidelines and checklists to adopt before and after the completion of each MES implementation phase, reducing the risk of failure and leading companies to perform a successful MES implementation. This paper is of course not without limitations, which require some further exploration. In particular, the proposed organizational readiness model should be tested and validated in action in order to assess its completeness and usefulness, as well as to identify necessary improvements. Moreover, in order to understand more fully whether a company is ready to proceed with a MES implementation, the model should be expanded to incorporate a skills perspective.

References

1. Bullen, C., Rockart, J.: A Primer on Critical Success Factors. Working paper 69, Massachusetts Institute of Technology (MIT), Sloan School of Management (1981)
2. Petter, S., DeLone, W., McLean, E.R.: Information systems success: the quest for the independent variables. J. Manage. Inform. Syst. 29(4), 7–62 (2013). https://doi.org/10.2753/MIS0742-1222290401
3. Shanks, G., Parr, A., Hu, B., Corbitt, B., Thanasankit, T., Seddon, P.: Differences in critical success factors in ERP systems implementation in Australia and China: a cultural analysis. In: ECIS 2000 Proceedings, Vienna, Austria, vol. 53, pp. 537–544 (2000)
4. Siau, K., Messersmith, J.: Analyzing ERP implementation at a public university using the innovation strategy model. Int. J. Hum-Comput. Int. 16(1), 57–80 (2003). https://doi.org/10.1207/S15327590IJHC1601_5
5. Nah, F.F.-H., Delgado, S.: Critical success factors for enterprise resource planning implementation and upgrade. J. Comput. Inform. Syst. 46(5), 99–113 (2006)
6. Görtz, M., Hesseler, M.: Basiswissen ERP – Systeme. Auswahl, Einführung & Einsatz betriebswirtschaftlicher Standardsoftware. W3I, Witten (2007)


7. Leyh, C., Sander, P.: Critical success factors for ERP system implementation projects: an update of literature reviews. In: Sedera, D., Gronau, N., Sumner, M. (eds.) Pre-ICIS 2010–2012. LNBIP, vol. 198, pp. 45–67. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17587-4_3
8. Sumner, M.: Critical success factors in enterprise wide information management systems projects. In: AMCIS 1999 Proceedings, Milwaukee, Wisconsin, pp. 297–303 (1999)
9. Bhatti, T.R.: Critical success factors for the implementation of enterprise resource planning (ERP): empirical validation. In: The Second International Conference on Innovation in Information Technology, vol. 110 (2005)
10. Roberts, H.J., Barrar, P.R.N.: MRPII implementation: key factors for success. Comput. Integr. Manuf. 5(1), 31–38 (1992). https://doi.org/10.1016/0951-5240(92)90016-6
11. Françoise, O., Bourgault, M., Pellerin, R.: ERP implementation by critical success factor management. Bus. Proc. Manag. J. 15(3), 371–394 (2009). https://doi.org/10.1108/14637150910960620
12. Karim, F., Rehman, O.: Impact of job satisfaction, perceived organizational justice and employee empowerment on organizational commitment in semi-government organizations of Pakistan. J. Bus. Stud. Q. 3(4), 92–104 (2012)
13. Hanaysha, J.: Examining the effects of employee empowerment, teamwork, and employee training on organizational commitment. Proc. Soc. Behav. Sci. 229, 298–306 (2016). https://doi.org/10.1016/j.sbspro.2016.07.140
14. Arica, E., Powell, D.J.: Status and future of manufacturing execution systems. In: Proceedings of 2017 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, pp. 2000–2004 (2017)
15. Rosario, J.G.: On the leading edge: critical success factors in ERP implementation projects. Bus. World Philipp. 17, 15–29 (2000)
16. Wee, S.: Juggling toward ERP success: keep key success factors high. ERP News, February 2000
17. Modrák, V., Mandulák, J.: Mapping development of MES functionalities. In: ICINCO-SPSMC, pp. 244–247 (2009)
18. Almada-Lobo, F.: The Industry 4.0 revolution and the future of manufacturing execution systems (MES). J. Inn. Manag. 3(4), 16–21 (2016)
19. Saade, R.G., Nijher, H.: Critical success factors in enterprise resource planning implementation: a review of case studies. J. Enter. Inf. Manag. 29(1), 72–96 (2016). https://doi.org/10.1108/JEIM-03-2014-0028
20. Saenz de Ugarte, B., Artiba, A., Pellerin, R.: Manufacturing execution system – a literature review. Prod. Plan. Control 20(6), 525–539 (2009). https://doi.org/10.1080/09537280902938613
21. Morariu, O., Morariu, C., Borangiu, T.: Policy-based security for distributed manufacturing execution systems. Int. J. Comput. Integr. Manuf. 31(3), 306–317 (2018). https://doi.org/10.1080/0951192X.2017.1413251
22. Holland, C.P., Light, B.: A critical success factors model for ERP implementation. IEEE Softw. 16(3), 30–36 (1999)
23. Dai, Q., Zhong, R., Huang, G.Q., Qu, T., Zhang, T., Luo, T.Y.: Radio frequency identification-enabled real-time manufacturing execution system: a case study in an automotive part manufacturer. Int. J. Comput. Integr. Manuf. 25(1), 51–65 (2012). https://doi.org/10.1080/0951192X.2011.562546
24. Chung, Y.S.: An empirical study of success factors influencing the implementation of information systems outsourcing (2016). https://digitalcommons.unl.edu. Accessed 21 Mar 2019


25. Lee, S.M., Hong, S.G., Katerattanakul, P., Kim, N.R.: Successful implementations of MES in Korean manufacturing SMEs: an empirical study. Int. J. Prod. Res. 50(7), 1942–1954 (2012). https://doi.org/10.1080/00207543.2011.561374
26. Loh, T.C., Koh, S.C.L.: Critical elements for a successful enterprise resource planning implementation in small- and medium-sized enterprises. Int. J. Prod. Res. 42(17), 3433–3455 (2004). https://doi.org/10.1080/00207540410001671679
27. Yang, H.S., Zheng, L., Huang, Y.: Critical success factors for MES implementation in China. In: Proceedings of 2012 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM 2012), Hong Kong, pp. 195–199 (2012)
28. Finney, S., Corbett, M.: ERP implementation: a compilation and analysis of critical success factors. Bus. Proc. Manag. J. 13(3), 329–347 (2007). https://doi.org/10.1108/14637150710752272
29. Shaul, L., Tauber, D.: Critical success factors in enterprise resource planning systems: review of the last decade. ACM Comput. Surv. 45(4), 1–39 (2013). https://doi.org/10.1145/2501654.2501669
30. Falkowski, G., Pedigo, P., Smith, B., Swanson, D.: A Recipe for ERP Success. Beyond Computing, pp. 44–45 (1998)
31. Sjödin, D.R., Parida, V., Leksell, M., Petrovic, A.: Smart factory implementation and process innovation. Res. Technol. Manage. 61(5), 22–31 (2018). https://doi.org/10.1080/08956308.2018.1471277

Identifying the Role of Manufacturing Execution Systems in the IS Landscape: A Convergence of Multiple Types of Application Functionalities

S. Waschull(&), J. C. Wortmann, and J. A. C. Bokhorst

University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands
[email protected]

Abstract. Manufacturing execution systems (MES) enable the detailed control of manufacturing operations, i.e. they facilitate digital and integrated shop-floor systems as envisioned by Industry 4.0. Yet, many manufacturing organizations struggle to integrate MES and demarcate it from other information systems (IS) in manufacturing. Therefore, this paper explores how MES can be functionally and technologically distinguished from other IS to determine its (future) role in the IS landscape. To provide an answer, this research applies the conceptualization of IS into five application functionalities and their underlying enabling technologies, referred to as transaction processing, interactive planning, analytics, document management, and process monitoring and control systems. We found that MES merges different types of application functionality into one system through its diverse functional requirements, and can therefore be characterized as technologically heterogeneous, in contrast to other 'classical' systems. MES then also takes on a central integrating role in the IS landscape. The findings offer an explanation for the challenges associated with the adoption of MES functionality, and highlight the importance of addressing integration questions in light of Industry 4.0.

Keywords: Industry 4.0 · Manufacturing execution systems · Application functionality · Integration

1 Introduction

Today more than ever, companies are forced to optimize production processes and procedures to achieve efficiency and quality while keeping costs down. Customer demands also put more pressure on companies, for example demanding the complete logging of production processes to ensure traceability. Developments such as just-in-time approaches can often be better achieved through detailed planning and the precise control of production. Many of today's requirements can only be met with sound information system (IS) support, i.e. digital and integrated shop-floor systems enabled by the adoption of Industry 4.0 technologies [1, 2]. This includes manufacturing execution systems (MES). MES facilitate the achievement of the integrated factory as envisioned by the computer-integrated manufacturing (CIM) movement and nowadays

© IFIP International Federation for Information Processing 2019. Published by Springer Nature Switzerland AG 2019. F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 502–510, 2019. https://doi.org/10.1007/978-3-030-29996-5_58

Identifying the Role of Manufacturing Execution Systems in the IS Landscape

503

promoted in Industry 4.0 [3, 4]. MES support shop-floor processes, their control and integration by collecting and distributing data related to a diverse set of manufacturing activities [1]. MES are considered to be the middle layer in a multi-layered architecture between the world of real-time process monitoring and control on the manufacturing shop-floor and the world of IS applications in the offices of industrial organizations [5, 6].

Despite their growing popularity in enabling organizations to meet future market requirements, in practice organizations often struggle to integrate MES in their IS landscape [7]. Its implementation seems to be less rewarding than theoretically asserted. One contributing factor is the ambiguity about the role and necessity of MES, considering the presence of other IS that provide support for similar functions, for example ERP systems, quality systems or maintenance systems [8, 9]. As de Ugarte et al. [8] point out, MES functions are generally difficult to classify, and the concept of MES is therefore not easily grasped by manufacturers in practice.

In order to better understand the current ambiguity that organizations are confronted with, we need to conceptualize how MES can be functionally and technologically distinguished. To our knowledge, the precise role of MES and its distinctive technological characteristics compared to other IS are hardly addressed in the literature. This is the goal of this research. We therefore identify five different classes of application functionality with their corresponding technological characteristics. We then assess MES based on this conceptualization of IS. As opposed to most IS, we found that MES can hardly be classified as one single class of application functionality, and that MES takes on a central role to integrate technologically heterogeneous IS. Our findings then provide some explanation of the challenges of adopting MES and why its adoption is often less straightforward than anticipated.

The paper is organized as follows. Section 2 provides the conceptualization of IS in manufacturing based on the type of application functionality and their enabling technology. This classification is used in Sect. 3 to evaluate generic MES functions in terms of the different dimensions specified in the classification. Based on the findings, Sect. 4 analyzes MES' role in the IS landscape to explain why organizations face challenges with regard to delineating MES from other systems. Section 5 concludes and presents ideas for future research.

2 Classification of Information Systems in Manufacturing

In this paper, we classify IS according to application functionality, i.e. the functionality exhibited by a certain application. In manufacturing, we distinguish five main types of application functionality, viz. Transaction Processing, Interactive Planning, Analytics, Document Management, and Process Monitoring and Control [10–12]. Each type of application functionality is characterized by a specific set of underlying enabling technologies, viz. Software Technology, User-interaction (UI) Technology and Database Technology, but also by the type of user and, if applicable, the time dimension. A summary is presented in Table 1. Each type of application functionality will be described in more detail.

504

S. Waschull et al.

1. Transaction processing systems: Systems providing transaction processing functionality form the information backbones of organizations [2]. They support business processes, integrate information flows and provide basic reporting for different types of users such as operations or quality managers [13, 14]. Transactional data is stored centrally as records in relational databases and shared among departments and systems, usually via standard software packages labeled enterprise resource planning (ERP) systems. In manufacturing, ERP provides modules for e.g. inventory control, quality control, asset management, shop-floor control, tracking and tracing, etc. The user interacts with the system through sessions that guide the user through several screens. These systems typically maintain the actual states of objects (e.g. the object Work Order has the states planned, released, open, finished, closed).

2. Interactive planning systems: These systems provide interactive planning functionality to planners and production supervisors. They support decision-making by relating proposed decisions to performance measures. They employ mathematical algorithms to develop plans that provide near-optimal solutions, addressing all supply chain constraints [15]. Through in-memory computing, advanced planning systems support users through interactive planning addressing varying time horizons. These systems usually take a snapshot of objects' states from transactional systems before they start their calculations.

3. Analytical systems: Through analytics functionality, organizations can further cut the time to access and analyze data by collecting information on objects of the same type over a predefined period of time in a data warehouse. For example, analytical systems produce management reports for strategic and tactical decisions for analysts or managers. Based on online analytical processing (OLAP) technology, they extract, transform and load (ETL) data from varying sources to produce aggregated reports as well as graphical presentations for users via customized visual user interfaces [2]. As data volumes continue to grow, innovative advanced techniques (e.g. big data, artificial intelligence) are further enhancing analytical applications.

4. Document management systems: In the office environment of manufacturing engineers, document management functionality is required, usually provided by product data management (PDM) systems. These systems manage and integrate product engineering data that originates from working with computer-aided engineering (CAE) systems. Engineering data is often very complex and includes texts, drawings, documents or product structures that require an object-oriented database. The workflow is procedural, deals with one item at a time and creates complex data structures [16].

5. Process monitoring and control systems: These systems are deployed on the shop-floor to monitor and control different production parameters and variables in real-time. The data is used to measure the behavior of certain variables (e.g. pressure) and, if necessary, to conduct real-time manipulation of selected production variables through actuators. These systems are mostly hardware-related, e.g. sensors or programmable logic controllers (PLC), but also include software such as supervisory control and data acquisition (SCADA) systems [17]. Data is mainly stored in log files.


Table 1. Classification of different application functionalities in manufacturing

| Application functionality | System user | Software technology | Database technology | User-interaction technology | Time dimension |
|---|---|---|---|---|---|
| Transaction processing | Different types of functional end-users | Client-server (2 or 3 tier), 4GL | Relational database; records | Sessions | Past and present |
| Interactive planning | Planner, supervisor | Advanced planning (APS) | In-memory database | Interactive planning work | Varying: present and future |
| Analytics | Analyst, manager | OLAP | Data warehousing | Graphical representations | Past (and future) |
| Document management | Engineer | Many CAE: Unix/computer-aided engineering | Object-oriented database; files | Computer-aided engineering work | N.A. |
| Process monitoring and control | Operator | Real-time operating system, SCADA | Real-time databases/log files | Control rooms with many screens | Real-time (seconds) and future |
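The classification above can be sketched as a small lookup structure. This is an illustrative encoding only (the class names and field values paraphrase Table 1 and are not part of the original paper), shown here to make the five-way distinction concrete and queryable:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionalityClass:
    """One row of the classification in Table 1 (paraphrased)."""
    user: str
    software: str
    database: str
    interaction: str
    time_dimension: str

# Illustrative encoding of Table 1; keys and values are our paraphrase.
CLASSIFICATION = {
    "transaction_processing": FunctionalityClass(
        user="functional end-users", software="client-server, 4GL",
        database="relational (records)", interaction="sessions",
        time_dimension="past and present"),
    "interactive_planning": FunctionalityClass(
        user="planner, supervisor", software="advanced planning (APS)",
        database="in-memory", interaction="interactive planning work",
        time_dimension="present and future"),
    "analytics": FunctionalityClass(
        user="analyst, manager", software="OLAP",
        database="data warehouse", interaction="graphical representations",
        time_dimension="past (and future)"),
    "document_management": FunctionalityClass(
        user="engineer", software="CAE tools",
        database="object-oriented (files)", interaction="CAE work",
        time_dimension="n.a."),
    "process_monitoring_control": FunctionalityClass(
        user="operator", software="real-time OS, SCADA",
        database="real-time/log files", interaction="control rooms",
        time_dimension="real-time"),
}

# Example query: which classes keep state in relational or in-memory databases?
fast_dbs = [name for name, c in CLASSIFICATION.items()
            if "relational" in c.database or "in-memory" in c.database]
```

Such a structure makes explicit that each class is defined by a distinct bundle of enabling technologies, which is the property the paper later contrasts with the heterogeneity of MES.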

3 Assessing the Role of MES and Its Functions

In Table 2, we provide an overview and definition of the specific MES functions as defined in the ISA95 standard [18]. To assess the role and boundaries of MES, we analyzed these MES functions in terms of the type of application functionality required in the fulfilment of that function (i.e. transaction processing, interactive planning, analytics, document management and process monitoring and control).

Table 2. MES functions related to types of generic application functionality

Resource allocation and status: Guiding what people, machines, tools and materials should do, and tracking what they are doing.
Application functionality: Tracking what resources do is mainly transaction processing, but it is also increasingly necessary to track and control resources in real-time, e.g. through IoT (process control functionality); guiding what resources should do is interactive planning functionality.

Operations/Detailed scheduling: Sequencing and timing of activities for optimized plant performance based on finite capacities.
Application functionality: When scheduling shop-floor operations, the planner needs both in-memory (interactive planning) functionality and transactional functionality. The transactional functionality is needed for updating plans after decisions are made.

Quality management: Recording, tracking and analyzing product and process characteristics against engineering ideals.
Application functionality: Process control (SPC) to e.g. calibrate machines or update control parameters; transaction processing to track product characteristics, e.g. inspections; analytics to determine quality causes (event processing). Analytical work is followed by updating, e.g. sample sizes in SPC or instructions in PDM.

Dispatching production units: Giving the command to send materials or orders to certain parts of the plant.
Application functionality: Mainly transaction processing; might also involve interactive planning when the dispatching sequence involves some scheduling functionality on the shop-floor.

Product tracking and genealogy: Monitoring the progress of units, batches or lots of output to create a full history of products.
Application functionality: Transaction processing paired with document management functionality, e.g. when creating a digital twin for a physical product.

Performance analysis: Comparing measured results in the plant with goals and metrics set by the corporation, customers etc.
Application functionality: Mainly analytics functionality to compare and visualize performances. Output is used by managers and staff, often on a weekly or monthly basis; hence the data often addresses a longer time horizon (weeks to months).

Labor management: Tracking and directing the use of personnel during a shift based on qualifications, work patterns and business needs.
Application functionality: Mainly transaction processing, e.g. tracking employees' working time, man-hours available in a department etc. Increasingly, IoT technologies might enter the shop-floor in the form of trackers and sensors, which can have significant effects on employees (process control).

Maintenance management: Planning and executing appropriate activities to keep equipment and other assets in the plant performing to goal.
Application functionality: Mainly transaction processing, but also document management (e.g. providing instructions, rendered via augmented reality); analytics to analyze machine status and performance.

Process management: Directing the flow of work in the plant based on planned and actual production activities.


Application functionality: Transaction processing paired with interactive planning; analytics to compare planned and actual figures (visualize processes); process control has to go together with some transaction processing, e.g. when a machine is operated, the runtime will be logged and posted to a maintenance application, and so on.

Data collection/acquisition: Monitoring, gathering and organizing data about processes, materials and operations from people, machines or controls.
Application functionality: Mainly process control, for example tracking the status of a machine, whereby the run-time will be logged and posted to a maintenance application. Transforming data into transactions is necessary.

Document control: Managing and distributing information on products, processes, designs or orders, as well as gathering certification statements of work.
Application functionality: Document management functionality: product designs, machine programs and instructions are distributed to shop-floor resources from the PDM system; in turn, feedback can be collected in the form of photos or texts.
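The mapping in Table 2 can be expressed as a simple data structure, which also makes it easy to check which MES functions require only a single type of application functionality. The function-to-type sets below are our illustrative reading of Table 2, not a normative part of the ISA95 standard:

```python
# Hedged sketch: ISA95 MES functions (Table 2) mapped to the types of
# application functionality they require, per our reading of the table.
MES_FUNCTIONS = {
    "resource allocation and status": {"transaction_processing", "process_control", "interactive_planning"},
    "detailed scheduling": {"interactive_planning", "transaction_processing"},
    "quality management": {"process_control", "transaction_processing", "analytics"},
    "dispatching production units": {"transaction_processing", "interactive_planning"},
    "product tracking and genealogy": {"transaction_processing", "document_management"},
    "performance analysis": {"analytics"},
    "labor management": {"transaction_processing", "process_control"},
    "maintenance management": {"transaction_processing", "document_management", "analytics"},
    "process management": {"transaction_processing", "interactive_planning", "analytics", "process_control"},
    "data collection/acquisition": {"process_control", "transaction_processing"},
    "document control": {"document_management"},
}

# Functions requiring a single type of application functionality; the paper
# identifies exactly two such functions.
single_type = sorted(f for f, kinds in MES_FUNCTIONS.items() if len(kinds) == 1)
```

Running this query yields document control and performance analysis, matching the observation in Sect. 4 that only these two MES functions rely on one type of functionality.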

4 The Interfaced Functionality of MES

Based on the previous analysis, we found that MES and its functions cannot be classified as a single type of application functionality based on homogeneous technologies, as traditionally seen in manufacturing. Rather, multiple types of application functionality converge in MES, which we refer to as 'interfaced functionality' (Fig. 1).

Fig. 1. Interfaced functionality of MES

Thus, there is no core application functionality and no single enabling technology of MES. MES takes on a central role integrating technologically heterogeneous systems, each with its own data model, its own user communication mechanisms and even its own database. Specifically, we found that transaction processing is an essential requirement across almost all MES functions, but it is generally expanded with other types of application


functionality, e.g. analytics or process control. Moreover, only two MES functions could clearly be identified as requiring one single type of application functionality, namely document control and performance analysis. Going further than earlier research, which states that it is difficult to demarcate MES from other IS [8], this research explains the underlying reasons by means of an analysis of the technical and functional characteristics of IS.

The findings provide an explanation for the challenges that organizations face when implementing MES in their IS landscape. In the sphere of MES, different heterogeneous technologies converge, resulting in high integration requirements and the accompanying typical integration problems. As previous research pointed out (e.g. [11]), even though technically possible, integrations are not trivial. Integrations are often costly, can block the upgrading of software, and can consequently make organizations less flexible and less future-proof. Organizations might also struggle with defining a suitable business case for MES, as direct returns on investment resulting from this integration might not be clearly demonstrable. Implementation, integration and maintenance of MES are usually expensive and hard to estimate upfront. This could explain why some organizations choose to expand their existing IS with MES functionality as opposed to implementing a separate MES. Alternatively, emerging technologies such as portals might also provide a solution to overcome these challenges. Due to MES' central position in the multi-layered architecture, it magnifies the many issues organizations currently face with regard to integrating their IS and processes in their quest to achieve the vision of Industry 4.0, namely a fully integrated factory.

5 Conclusion

In this paper, MES and its role in the IS landscape have been analyzed from a functional and technological perspective to better understand why organizations face challenges integrating MES in their IS landscape. We conceptually distinguished five application functionalities that IS can traditionally be classified into, based on their enabling technology, their system user and, if applicable, their time dimension. Assessing MES based on this classification showed that:

• MES provides different types of core application functionality (transaction processing, interactive planning, analytics, document management and process monitoring and control), in contrast to traditional IS, which usually comprise one application functionality.
• As MES is not based on one single type of technology, it is technologically heterogeneous: it merges technologies from different application fields into one system.
• MES takes on a central integrating role between technologically heterogeneous information systems; therefore its integration requirements must be well addressed. This is one contributing factor to why organizations struggle with the adoption of MES, e.g. costs, flexibility.


This research is currently mostly conceptual, and future research should empirically validate and corroborate these findings. As MES incorporates several application functionalities and enabling technologies, it would be interesting to further study integration problems. Integration is a central aspect of Industry 4.0 and will supposedly intensify in the future. Integration between heterogeneous technologies (e.g. creating automated transactions from real-time data originating from process control technologies) can be challenging and complex due to e.g. different semantics, syntax or data types. The future integration requirements of MES and other technologically heterogeneous systems must therefore be well addressed, as well as the implications for e.g. flexibility or scalability.

References

1. Kletti, J., Deisenroth, R.: Application concept – horizontal and vertical integration. In: MES Compendium, pp. 1–16. Springer, Heidelberg (2018). https://doi.org/10.1007/978-3-662-54983-4_1
2. Romero, D., Vernadat, F.: Enterprise information systems state of the art: past, present and future trends. Comput. Ind. 79, 3–13 (2016)
3. Kagermann, H.: Change through digitization—value creation in the age of industry 4.0. In: Albach, H., Meffert, H., Pinkwart, A., Reichwald, R. (eds.) Management of Permanent Change, pp. 23–45. Springer, Wiesbaden (2015). https://doi.org/10.1007/978-3-658-05014-6_2
4. Monostori, L.: Cyber-physical production systems: roots, expectations and R&D challenges. Procedia CIRP 17, 9–13 (2014)
5. Chen, D., Doumeingts, G.: Architectures for enterprise integration and interoperability: past, present and future. Comput. Ind. 59, 647–659 (2008)
6. Williams, T.J., Bernus, P., Brosvic, J., et al.: Architectures for integrating manufacturing activities and enterprises. Comput. Ind. 24, 111–139 (1994)
7. Arica, E., Powell, D.J.: Status and future of manufacturing execution systems. In: Proceedings of the 2017 IEEE IEEM, pp. 2000–2004 (2017)
8. de Ugarte, B.S., Artiba, A., Pellerin, R.: Manufacturing execution system – a literature review. Prod. Plann. Control 20, 525–539 (2009)
9. Schmidt, A., Otto, B., Österle, H.: A functional reference model for manufacturing execution systems in the automotive industry. In: Wirtschaftsinformatik Proceedings, p. 89 (2011)
10. Helo, P., Szekely, B.: Logistics information system: an analysis of software solutions for supply chain coordination. Ind. Manag. Data Syst. 105, 5–18 (2014)
11. Wortmann, H., Alblas, A.A., Buijs, P., Peters, K.: Supply chain integration for sustainability faces sustaining ICT problems. In: Prabhu, V., Taisch, M., Kiritsis, D. (eds.) APMS 2013. IAICT, vol. 415, pp. 493–500. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41263-9_61
12. Buijs, P., Wortmann, H.: Joint operational decision-making in collaborative transportation networks: the role of IT. Supply Chain Manag. Int. J. 19, 422–446 (2015)
13. Shehab, E.M., Sharp, M.W., Supramaniam, L., Spedding, T.A.: Enterprise resource planning: an integrative review. Bus. Process Manag. J. 10, 359–386 (2004)
14. Al-Mashari, M., Al-Mudimigh, A., Zairi, M.: Enterprise resource planning: a taxonomy of critical factors. Eur. J. Oper. Res. 146, 352–364 (2003)


15. Chen, I.J.: Planning for ERP systems: analysis and future trend. Bus. Process Manag. J. 7, 1463–7154 (2001)
16. Kim, S.-H., Oh, T.-H., Park, J.-Y.: The object-oriented modeling for product data management (PDM). In: Plonka, F., Olling, G. (eds.) Computer Applications in Production and Engineering. ITIFIP, pp. 33–46. Springer, Boston, MA (1997). https://doi.org/10.1007/978-0-387-35291-6_4
17. Molina, A., Panetto, H.: Enterprise integration and interoperability in manufacturing systems: trends and issues. Comput. Ind. 59, 641–646 (2008)
18. ISA: ISA 95 Standards (2014)

A Generic Approach to Model and Analyze Industrial Search Processes

Philipp Steenwerth and Hermann Lödding

Hamburg University of Technology, Hamburg, Germany
{philipp.steenwerth,loedding}@tuhh.de

Abstract. Search processes are omnipresent. In the field of industrial production they occur whenever material or information is needed. While searching is a fundamental activity within production processes, existing models and methods in the field of production management are not designed for modelling or analyzing industrial search processes. This paper presents a generic phase model that can be used to describe industrial search processes. Furthermore, an analysis method is proposed to determine and prioritize fields of action for the optimization of search processes.

Keywords: Search process · Manufacturing · Generic model · Analysis method

1 Introduction

Manufacturing companies continuously focus on increasing productivity by optimizing their processes [1]. Searching is a fundamental activity of every process, since it always occurs when a demand for material or information is present. A general understanding is the basis for designing and optimizing search processes. Since searching does not add value to a product, the reduction of searching in existing processes provides potential for further improvements [2]. Even though some methods, for example in the field of lean production [3], already aim at reducing search processes with standardized tools, a deeper understanding of searching is still missing. For a systematic improvement, a better understanding of the process and the characteristics of searching in the field of industrial production is needed. This paper presents a generic phase model that allows industrial search processes to be described. Furthermore, an analysis method is presented that uses the generic phase model to identify potential fields of action for the improvement of search processes.

2 Search Processes in the Field of Industrial Production

In the field of industrial production, search processes occur whenever materials, tools or information are needed for a job. They are relevant if employees spend significant time on searching or if an item or information cannot be found. Searching reduces labor productivity because search activities consume resources without adding value to the product [3].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 511–519, 2019. https://doi.org/10.1007/978-3-030-29996-5_59

512

P. Steenwerth and H. Lödding

[Bar chart omitted: percentage of present time per phase of a generic work cycle (information, material allocation, component preparation, execution, post processing).]

Fig. 1. Working time proportions in shipbuilding production (about 11,000 records in 12 areas of 4 companies, unrecorded time: 5%) [4]

Even though it is not possible to explicitly identify the proportion of searches from the analysis, it becomes clear from the collected data that searching for information, materials, and tools can be a very important topic for companies. This is especially true if material or information is not easily accessible. The trend towards a greater diversity of variants increases the importance of industrial search processes because it increases the variety of different search objects.

2.1 Identifying Search Processes

Despite the relevance of searching, common approaches in the field of production management do not specifically focus on search activities. The REFA (Association for Work Design, Industrial Organization and Corporate Development) classification, for instance, allows working time to be divided into procedures in order to analyze worker activities [5]. However, explicitly assigning search activities to one specific procedure is infeasible, since the searching process often includes various activities from different procedures. Predetermined motion time systems are another common methodology for describing work processes, with MTM (Methods-Time Measurement) as the best-known representative. They subdivide workflows into work elements to which time durations are assigned. However, occurring search activities are not considered in particular [6]. The classification of work content into value adding and waste, following the Toyota Production System, is another approach to describe the elements of work [7]. However, searching is not defined as a separate category.


Within the field of material allocation, search processes are described as separate activities. The activity of searching for objects is assigned to the process of order picking and is classified as downtime [8]. However, downtime is measured as a single value and additionally includes other activities. In summary, the analysis of common approaches shows that research and practice in production management agree that search activities constitute waste and should be reduced [2]. However, a model or method capable of describing and analyzing industrial search processes in general is still missing.

2.2 Improving Search Processes

Even though search processes are not considered separately within the field of lean production, methods that help to reduce searching efforts exist. One of these is the 5S method [3], which improves the placement of materials and tools at the workplace. Another method is Visual Management [9], which helps to identify materials and tools more quickly and to make the right information accessible. One problem with using these methods is that their effect on productivity is unclear before they are implemented, because the duration of the search process is unknown. Furthermore, the effectiveness of a method is hard to quantify, since there is no method to compare search processes in general. The increasing trend of digitalization also provides additional tools to reduce and eliminate search activities. One example is RFID technology, which can be used to keep information about the location of material up to date and to make this information easily accessible.

3 Modelling Industrial Search Processes

Figure 2 shows the generic phase model we propose as a means to describe and compare industrial search processes.

[Figure omitted: flow diagram of the phase model. (1) determining search object; for intangible search objects: (2a) determining data carrier, (3a) searching for data carrier, (4a) scanning data carrier; for tangible search objects: (2b) searching for location, (3b) moving to location, (4b) scanning location; both branches end with (5) assessment of success. Phases 3a and 2b contain subordinate search processes.]

Fig. 2. Generic phase model for industrial search processes



The model uses five generic phases to describe and compare search processes within the field of industrial production. The phases are described in the following.

Determining Search Object (1): The first phase of every search process is triggered by the need for a physical object or a piece of information. Search objects can be classified as tangible or intangible. Tangible search objects include tools, material and persons. Intangible search objects can generally be referred to as information. Important information in industrial processes includes, for example, dimensions or process parameters. The following phases depend on the search object. For intangible search objects, the phases are determining data carrier, searching for data carrier, and scanning data carrier. For tangible objects, the phases are searching for location, moving to location and scanning location. The phases for the information search are described by the example of a required dimension.

Determining Data Carrier (2a): For intangible search objects, this phase contains the activities that aim at determining the data carrier on which the required information is stored. For the dimension, this could be a 2D paper drawing.

Searching for Data Carrier (3a): Once the data carrier is defined, all activities that are necessary to get access to the data carrier are included in this phase. For the 2D drawing, the location where the drawing is stored needs to be found.

Scanning Data Carrier (4a): As soon as the data carrier is accessible, it can be scanned for the required information. This includes all activities that are necessary to gather information from the data carrier. Reading the 2D drawing is one example.

In comparison, the phases for the material search are described by the example of a required screwdriver.

Searching for Location (2b): Once a tangible search object is defined, information about possible locations is necessary. All activities that are necessary to get this information belong to this phase. Since the screwdriver is usually in a toolbox at another workstation, this could be the first location to search.

Moving to Location (3b): A defined location is the basis for this phase, including all activities necessary to get to the location.

Scanning Location (4b): As soon as the location is reached, the scanning for the search object starts. This includes all activities necessary to examine the location for the object, which in our example is a screwdriver.

Every search process, independent of the material constitution of the search object, ends with the last phase of the model, the assessment of success.

Assessment of Success (5): By comparing the defined characteristics with the characteristics of a found search object, the success of the search can be assessed. The decisive question is whether or not the object is found at an estimated place or on an estimated data carrier.

The model outlines that a search process can contain other searches. The search for a tangible object always contains a search for information about the location (intangible object). Vice versa, the search for information always includes the search for the data


carrier (physical object). This connection is represented in the model by the two searching phases (3a/2b). By this logic, the model is able to describe complex search processes.

Generally, the total time required to find a search object is an important characteristic of search processes. Similar to the throughput time of production processes [10, 11], a search throughput time can be defined, which comprises the time from the beginning to the end of the search (Fig. 3). Following this logic, the search throughput time of a main search process as well as of every subordinate search process can be divided into elements that represent the generic phases. Figure 3 shows a schematic example of a search process for information (intangible search object) in level 1, with a subordinate search process for a data carrier (tangible search object) in level 2. The data carrier that is determined in the first search process is at the same time the search object for the subordinate search process in level 2. Hence, the first phase of the model (determining the search object) does not occur in subordinate search processes.
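The phase model and its subordinate-search logic can be sketched as a small generator. The phase names follow Fig. 2 and the nesting rule follows the description above (a tangible search embeds an information search for the location, and vice versa); the function and type names are ours, and the recursion depth is capped purely for illustration:

```python
from enum import Enum

class Kind(Enum):
    TANGIBLE = "tangible"
    INTANGIBLE = "intangible"

# Phase sequences per search-object kind, after the generic phase model (Fig. 2).
INTANGIBLE_PHASES = ["determining data carrier", "searching for data carrier",
                     "scanning data carrier"]
TANGIBLE_PHASES = ["searching for location", "moving to location",
                   "scanning location"]

def phase_sequence(kind: Kind, level: int = 1, max_depth: int = 2):
    """Yield (level, phase) pairs. The 'searching for ...' phase (3a/2b)
    contains the subordinate search of the opposite kind; the first phase
    (determining search object) occurs only in the main search process."""
    if level == 1:
        yield (level, "determining search object")
    phases = TANGIBLE_PHASES if kind is Kind.TANGIBLE else INTANGIBLE_PHASES
    for phase in phases:
        yield (level, phase)
        if phase.startswith("searching") and level < max_depth:
            sub = Kind.INTANGIBLE if kind is Kind.TANGIBLE else Kind.TANGIBLE
            yield from phase_sequence(sub, level + 1, max_depth)
    yield (level, "assessment of success")
```

For an intangible main search, this reproduces the structure of Fig. 3: the level-1 information search expands into a level-2 search for the data carrier inside phase 3a, and both levels close with an assessment of success.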

[Figure 3 depicts the search throughput time split into the generic phases: level 1 (searching for information) comprises determining search object, determining data carrier, searching for data carrier, scanning data carrier and assessment of success; the searching phase corresponds to the level-2 search throughput time, which comprises searching for location, moving to location, scanning location and assessment of success.]

Fig. 3. Concept of search lead time with a main search process for intangible search objects

4 Analyzing Industrial Search Processes

The phase model for search processes can be used to identify and evaluate industrial search processes. For this purpose, a method was developed that is structured in three steps: data collection, data aggregation, and data analysis.

4.1 Data Collection

Initially, data regarding the search processes needs to be collected. A feasible method is to follow a worker during a whole search process and measure the time spent in each phase. Since some searching activities are hard to observe, it is essential that the worker comments on his or her activities while searching. In this way, cognitive activities, like determining the search object, can be observed as well as physical activities like moving to a location. It is also essential to gather information about the allocation of search efforts to subordinate search processes.
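As an illustration of this measurement principle, the following sketch is a hypothetical minimal stand-in for a phase-tracking tool (it is not the actual CheckIT application): the time for a selected phase accumulates until the next phase is selected.

```python
import time

# Hypothetical minimal stand-in for a phase-tracking tool (not the actual
# CheckIT application): the time for a selected phase accumulates until the
# next phase is selected, mirroring the measurement principle described above.
class PhaseTimer:
    def __init__(self):
        self.durations = {}        # phase name -> accumulated seconds
        self._current = None       # phase currently being timed
        self._since = None         # monotonic timestamp of last selection

    def select(self, phase):
        """Stop timing the current phase (if any) and start timing `phase`."""
        now = time.monotonic()
        if self._current is not None:
            self.durations[self._current] = (
                self.durations.get(self._current, 0.0) + now - self._since)
        self._current, self._since = phase, now

    def stop(self):
        """End the observation and return the accumulated durations."""
        self.select(None)
        return self.durations

timer = PhaseTimer()
timer.select("determining search object")   # phase 1 starts
timer.select("searching for location")      # phase 1 ends, phase 3a starts
durations = timer.stop()
```

A monotonic clock is used so that wall-clock adjustments during an observation cannot distort the measured intervals.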


P. Steenwerth and H. Lödding

A web-based application for productivity analyses (CheckIT) [12] was customized to support the data collection. The analysis starts with the classification of the actual search object, followed by a hierarchical query of a generic phase to track subordinate search processes. The time for a selected phase is tracked automatically until the next phase is selected.

4.2 Data Aggregation

Accordingly, the collected data is aggregated with respect to the generic phases. The output of the data acquisition can be aggregated for the main search process and for all subordinate search processes. In order to distinguish the individual processes, levels are introduced. If necessary, the model allows going through every phase several times within one search process. This could, for example, be the case if another location is scanned after a tangible object was not found at the first location. The search process for one search object is related to one level, beginning with the main search process. The total length of a phase equals the sum of all measured times for the phase within the same level. The sum of all phases of the same level equals the search throughput time (see Fig. 3) for this level. For intangible objects:

STTP_i = Σ ds_i + Σ dc_i + Σ sc_i + Σ sd_i + Σ as_i    (1)

For tangible objects:

STTP_i = Σ ds_i + Σ sfl_i + Σ ml_i + Σ sl_i + Σ as_i    (2)

With:
STTP_i  search throughput time
ds_i    determining search object
dc_i    determining data carrier
sc_i    searching for data carrier
sd_i    scanning data carrier
as_i    assessment of success
sfl_i   searching for location
ml_i    moving to location
sl_i    scanning location
i       level of search process

Additionally, the total length of a searching phase (3a/2b) within one level equals the search throughput time of the corresponding subordinate search process (STTP_{i+1}):

Σ sc_i = STTP_{i+1}    (3)

Σ sfl_i = STTP_{i+1}    (4)
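For illustration, Eqs. (1) and (2) reduce to summing the measured phase durations of one level. A minimal sketch, using the mean durations later reported for the tangible main search in the experiment (Sect. 5):

```python
# Sketch of Eqs. (1)-(4): the search throughput time STTP_i of a level is the
# sum of all measured phase durations on that level. The values below are the
# mean durations of the tangible main search reported in the experiment
# (Sect. 5, Fig. 4); "searching for location" equals the level-2 search
# throughput time, cf. Eq. (4).
def search_throughput_time(phase_durations):
    """phase_durations: mapping phase name -> measured duration in seconds."""
    return sum(phase_durations.values())

level_1 = {
    "determining search object": 12.6,
    "searching for location": 33.5,   # = STTP of level 2
    "moving to location": 19.7,
    "scanning location": 12.2,
    "assessment of success": 9.5,
}

sttp_1 = search_throughput_time(level_1)   # 87.5 s (up to float rounding)
```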

4.3 Data Analysis

The analysis includes the processing and visualization of the data in key figures. This makes it possible to gain a deeper understanding of existing search processes and to identify fields of potential as a basis for improvements. The main results of the analysis are:

• The given structure of the search process: the hierarchical aggregation of search phases illustrates the relations between all observed subordinate search processes. Thus, a deeper understanding of existing search processes is provided.
• Identification and prioritization of the greatest field of action: from the portion of each phase in every search process, a prioritization can be derived with regard to the potential for improvement.
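The prioritization step can be sketched as follows, again with the mean durations from the experiment in Sect. 5 as illustrative input:

```python
# Sketch of the prioritization step: the share of each phase in the search
# throughput time points to the greatest field of action. Durations are the
# mean values from the experiment in Sect. 5.
def phase_shares(phase_durations):
    total = sum(phase_durations.values())
    return {phase: d / total for phase, d in phase_durations.items()}

durations = {
    "determining search object": 12.6,
    "searching for location": 33.5,
    "moving to location": 19.7,
    "scanning location": 12.2,
    "assessment of success": 9.5,
}

ranked = sorted(phase_shares(durations).items(), key=lambda kv: kv[1],
                reverse=True)
# ranked[0] is the phase with the largest share, i.e. the first candidate
# for improvement ("searching for location" here).
```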

5 Evaluation by a Simplified Application

The model and the analysis method were tested in an experimental environment with 7 test persons to evaluate their usability. The experiment included a manufacturing task in which a case cover needs to be found. To support the search process, the test persons were given a document with possible locations of the missing workpiece. Every test person was observed during the entire search process, and the time spent in every phase was measured using the analysis method. During the whole experiment, the test persons were encouraged to comment on their thoughts. The results are shown in Fig. 4. All calculated numbers represent mean durations. As can be seen, the observed search process consists of two levels: the main search for the case cover (tangible search object) and one subordinate search process for information about the location (intangible search object). The average search lead time for the case cover is 87.5 s, including 33.5 s for searching for the location (subordinate search process). Within the information search, only the phase of scanning the data carrier was observable. This is probably because the document was already handed out to the test persons at the beginning of the experiment. Overall, scanning the data carrier was the most time-consuming phase of the entire search and is therefore a potential starting point for improvements.

[Figure 4 reports the mean phase durations for the search for the case cover: determining search object 12.6 s, searching for location 33.5 s (entirely spent scanning the data carrier in the subordinate search), moving to location 19.7 s, scanning location 12.2 s, assessment of success 9.5 s; search throughput time 87.5 s.]

Fig. 4. Results of experimental analysis (mean duration, n = 7)


The experiment also shows that some phases can be so marginal that it becomes difficult to observe and measure a meaningful time duration. In the experiment, this is the case for all phases of the search process for the data carrier except the scanning phase.

6 Summary and Outlook

The presented model is designed to comprehensively describe industrial search processes. Subordinate search processes make it possible to model complex, multi-level searches. With the presented analysis method, industrial search processes can be identified and evaluated. In future research, we intend to apply the approach on a larger scale. One idea is to combine the time durations from the analysis with information about the total number of search objects of the same class to estimate the aggregate search effort. The goal is to gain knowledge about existing search processes and to calculate an overall potential for improvements within a company.

Acknowledgement. The authors would like to thank the Deutsche Forschungsgemeinschaft (DFG) for funding the project “Analysis and optimization of searching procedures in manufacturing environments” (Project No. 365697984).

References
1. Czumanski, T., Lödding, H.: Analyse von Einflussfaktoren auf die Arbeitsproduktivität in der Serienproduktion. In: Müller, E. (ed.) Demographischer Wandel – Herausforderung für die Arbeits- und Betriebsorganisation der Zukunft, pp. 237–261. GITO-Verlag, Berlin (2012)
2. Liker, J.: The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. McGraw-Hill, New York (2004)
3. Hirano, H.: JIT Implementation Manual: The Complete Guide to Just-in-Time Manufacturing, 2nd edn. Waste and the 5S's, vol. 2. CRC Press, Boca Raton (2009)
4. Tietze, F., Lödding, H.: Analyse der Arbeitsproduktivität in der Unikatfertigung: Eine Grundlage für zielorientierte Verbesserungsprozesse in der Unikatfertigung. Industrie 4.0 Management 30(3), 62–66 (2014)
5. REFA: Datenermittlung. Methodenlehre der Betriebsorganisation, vol. 15. Carl Hanser Verlag, München (1997)
6. Bokranz, R., Landau, K.: Produktivitätsmanagement von Arbeitssystemen: MTM-Handbuch. Schäffer-Poeschel, Stuttgart (2006)
7. Ohno, T.: Toyota Production System: Beyond Large-Scale Production. Productivity Press, Cambridge (1988)
8. Hompel, M., Sadowsky, V., Beck, M.: Kommissionierung: Materialflusssysteme 2 – Planung und Berechnung der Kommissionierung in der Logistik. Springer, Berlin (2011)
9. Takeda, H.: The Synchronized Production System: Going Beyond Just-in-Time Through Kaizen. Kogan Page, London (2006)
10. Wiendahl, H.-P.: Betriebsorganisation für Ingenieure, 7th edn. Hanser, München (2010)
11. Bechte, W.: Steuerung der Durchlaufzeit durch belastungsorientierte Auftragsfreigabe bei Werkstattfertigung. Fortschritt-Berichte der VDI-Zeitschriften 2(70) (1984)
12. Grabner, C., Khokhar, F., Schoop, T., et al.: Ein digitales Universalwerkzeug für die Produktionsanalyse: Entwicklung einer Web-App zur methodenübergreifenden Analyse von Produktionsprozessen. Industrie 4.0 Management 33(6), 7–10 (2017)

A Methodology to Assess the Skills for an Industry 4.0 Factory

Federica Acerbi, Silvia Assiani, and Marco Taisch

Department of Management Engineering, Politecnico di Milano, via Lambruschini 4/B, 20156 Milan, Italy
{federica.acerbi,silvia.assiani,marco.taisch}@polimi.it

Abstract. The rapid change affecting society, together with the rise of new technologies, is impacting the manufacturing sector as well. Moreover, this change also has an impact on the skills that operators and managers should master. Companies, for their part, must stay up to date in order to maintain their competitive advantage. For these reasons, we carried out this study, which aims at a new methodology to assess the current level of companies' workforce in terms of the skills needed to take advantage of the Industry 4.0 paradigm. Starting from an analysis of skills assessment methods, we created DREAMY4Skills, a skills 4.0 assessment model focused on the specific job profiles within a company operating in the manufacturing sector. The model is based on a maturity model, which makes companies aware of their current status in terms of skills and thus helps them implement a transformation path to pursue a continuous improvement strategy. This work has two purposes: on one side, we would like to have a model that is useful in practical terms to enable the assessment of the skills 4.0 held by the workforce; on the other side, there is the scientific purpose of adding another small brick to the literature.

Keywords: Industry 4.0 · Skills · Assessment model · Maturity model

1 Introduction

Nowadays, we are living in a world where everything is transforming. The rise of new technologies is re-shaping the entire society and even humankind. Along with the Fourth Industrial Revolution, the Industry 4.0 (I4.0) concept has emerged and has been transforming the traditional manufacturing sector by enabling more efficient and flexible production systems. The rise of I4.0 enabling technologies [1] is bringing many advantages, but some challenges must be considered [2]. The required competencies are changing rapidly and, since the relevance of human capital is commonly recognized as a “competitive weapon” [3], today both new hard skills and soft skills are needed in order to be competitive [4]. Companies must develop clear plans for expenditures, but they should also take into account that the increase in productivity and the consequent expected revenue growth will shorten the

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 520–527, 2019. https://doi.org/10.1007/978-3-030-29996-5_60


payback time to a few years. So, in order to really take advantage of this revolution, firms must balance investments in human capital and in technologies. Considering both the relevance of human capital and the change in the skills necessary to be competitive in the manufacturing sector, and leveraging different studies on skills 4.0, such as [5], the paper [4] identified a wide spectrum of both hard and soft skills necessary to operate in an I4.0 factory. Moreover, a clustering of them into new or evolved job profiles was developed. Building on that work, a skills 4.0 assessment model is developed in this paper. In fact, companies are interested in understanding their current status in order to act in the proper way to undertake an improvement path. As stated by Watts Humphrey [6], “If you don't know where you are, a map won't help”. In this study, skills assessment methods were analysed together with maturity models to identify the best way to measure the level of workforce skills and thus enable companies to act according to their current status. In our view, the models already developed by academics and consulting companies were either not fully complete or not focused on skills for I4.0. Thus, we decided to develop DREAMY4Skills, a skills 4.0 assessment model focused on the specific job profiles characterising a current manufacturing company. This paper is divided into five sections: (1) introduction; (2) description of the methodology of the study, which is based on a literature review, two focus groups and a pilot assessment; (3) a literature review on skills assessment models and maturity models; (4) the description and development of the DREAMY4Skills model; (5) limitations and conclusions.

2 Methodology

In this section, we define the main steps that we undertook in order to perform the study and develop DREAMY4Skills. Considering the worldwide necessity to align workforce skills with the investments in new technologies, we performed a detailed analysis of skills assessment models in order to identify a new way to enable companies operating in the manufacturing sector to obtain a picture of their workforce's current status. For this purpose, a systematic literature review was performed. In order to collect material regarding the methods most used to evaluate the workforce, we searched for both hard and soft skills in the main scientific databases using the following keywords: “soft skills”, “hard skills”, “assessment models”, and “assessment methods”. Among all the papers that emerged from the search, also looking at their references, we selected the most relevant ones in order to understand how our evaluation could be performed. Once we understood which instruments are present in the literature, we decided to analyse which tools are put in place by some consulting companies. The skills assessment models analysed were not completely effective for our purpose, since none of them was explicitly developed to assess skills 4.0. Furthermore, maturity models have been found suitable to assess manufacturing companies' processes within the I4.0 context. For these reasons, we performed a deeper analysis of maturity models in order to evaluate whether they were suitable for people as well. The keywords used for this part of the analysis were: “maturity model”, “capability


maturity models”, “People capability maturity model”; these terms were then combined with “Industry 4.0”. To the results obtained from this search, we added a model suggested by academic experts [7], focused on assessing the readiness of companies' processes in terms of processes 4.0. Then, in order to validate our study and verify its feasibility and usability, we organized two focus groups: one with academic experts and one with industrial experts. Finally, we submitted our model to an Italian firm for a pilot assessment. The results are shown in Sect. 4.3.

3 Literature Review

3.1 Skills Assessment Models

In the academic world, many skills assessment models are described and studied. They focus either on soft skills or on hard skills; few of them cover both. Considering hard skills, the majority of the models are those developed in the clinical field, like [8–10]. Most of them assess whether a task is performed or not by using yes/no checklists; others do not report how the assessment is performed. Moreover, these studies do not drill down into which technical skills the workforce should have, and there is no distinction between different job profiles. As regards soft skills, the majority of the models analysed present a self-assessment through a questionnaire with answers based on the Likert scale [11, 12], which biases the results. Finally, the assessment of a mix of technical and soft skills, but still in the clinical field, is studied by [13]. There is a lack of studies about skills assessment models in the industrial field. Consulting companies' tools were investigated too, in order to have a more practical view of the topic. They were mainly focused on specific and restricted subjects not regarding the I4.0 paradigm, as for example [14–17], but the methodology was interesting because they propose a multi-source assessment in order to minimize subjectivity. Nevertheless, some attempts to assess both soft and hard skills were made. An example is the “Toolbox Workforce Management 4.0” [18], where an assessment of skills in an I4.0 context is proposed. The main limitation found was that it considers a generic set of skills that is not differentiated according to the different job profiles. An interesting point, instead, is that it is based on a maturity model; for this reason, we decided to further investigate the potential of these types of models to complete our work.

3.2 Maturity Models

A wide literature about maturity models has been written from the late 1970s until today. Even if they are applied with many different purposes, their structure is quite common to all of them. The main purposes for which maturity models are usually utilised are [19]:


– Descriptive: the final goal is to describe the current status of an organization; it can be used as a diagnostic tool;
– Prescriptive: the final goal is to identify the desired level of maturity and then provide guidelines for improvement; courses of action are suggested;
– Comparative: the final goal is to perform an internal or external benchmark on the basis of historical data.

The common characteristics shared among the maturity models analysed are:

– the number of levels, which varies from 3 to 6 (the most common models have 5 levels);
– a logical progression among the levels: to reach subsequent levels of maturity, all the previous levels must be covered;
– the main goal is always to enable an improvement.

Some models were focused on process assessment, others on workforce management; considering also the work done by [18], we thought that the structure of these models, in particular those with a descriptive purpose, could be exploited for skills assessment too. In fact, they allow obtaining a picture of the current status of the entity under analysis. They also support continuous improvement by giving specific details, step by step, on the next goal to be achieved. For these reasons, they can help companies design the correct training strategy for each person. Thus, in the end, we took inspiration from [20], which proposes a framework with a detailed, step-by-step description of how to create a maturity model.
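The logical-progression rule shared by these models can be made concrete with a small sketch (a hypothetical helper, not taken from any of the cited models): a level counts as reached only if all lower levels are also covered.

```python
# Hypothetical sketch of the logical-progression rule: a maturity level counts
# as reached only if all lower levels are covered as well.
def reached_level(covered_levels, max_level=5):
    """covered_levels: set of level numbers whose requirements are satisfied."""
    level = 0
    for candidate in range(1, max_level + 1):
        if candidate not in covered_levels:
            break
        level = candidate
    return level

reached_level({1, 2, 3, 5})   # -> 3: level 5 alone does not count, 4 is missing
```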

4 DREAMY4Skills Methodology

In this section, the focus is on the description of DREAMY4Skills and on its development steps, including the creation of the questionnaire needed to gather data.

4.1 DREAMY4Skills: Skills 4.0 Assessment Model

DREAMY4Skills, whose development is reported in Sect. 4.2, is a skills 4.0 assessment tool based on maturity models which aims to evaluate the competencies, in terms of both hard and soft skills, of managers and operators employed in manufacturing companies embracing the I4.0 paradigm. It is composed of five maturity levels, described below, characterizing the competency level of each worker.

(v) Proficient: the worker is fully able to manage the emerging technologies and is open to improving his/her capabilities, staying always up to date. He/she is able to use tablets and PCs and the software commonly installed on them autonomously and in complex situations, supervising others in case of need.

(iv) Competent: the worker is able to manage autonomously the majority of the emerging technologies and is open to improving his/her capabilities, trying to stay up to date. He/she is able to use tablets and PCs quite autonomously, as well as the software commonly installed on them.


(iii) Practiced: the worker has a few notions about the emerging technologies and is able to use some of them. Usually, he/she does not intend to update his/her capabilities unless it is of his/her own interest or it is necessary for his/her career or to keep his/her job. The worker has a working knowledge of tools like PCs, tablets and the software commonly used.

(ii) Aware: the worker has a few notions about the emerging technologies even if he/she has never used any of them. Moreover, he/she does not intend to update his/her capabilities unless it is of his/her own interest. The worker is aware of the existence of common tools like PCs and tablets but does not know how to use them.

(i) Basic: the worker is not aware of the majority of the new technologies and is not inclined to modify his/her current status. He/she has no idea about the existence of common tools like PCs or tablets, nor of their usage.

In order to perform the assessment, ad hoc questionnaires are provided to managers and operators according to the job profile. Thanks to their answers, it is possible to evaluate their competency level and thus to highlight their strengths and weaknesses. A final report with indications about the next objectives to be achieved is provided to the company. This enables the company to define the right training plan.

4.2 DREAMY4Skills Development

Inspired by [20], we started by defining the scope, which is essential for the development of a maturity model since it is the basis for all subsequent decisions. The scope is assessing the workforce's maturity to embrace Industry 4.0. In the design phase, we defined the general characteristics of the model for the accomplishment of the goal set. These are the following:

– the audience we address is companies;
– the method of application is a self-assessment supported by third-party analysts;
– the drivers of application are an internal need or a stimulus from an external analyst;
– the respondents of our questionnaire are both managers and workforce; in fact, in order to perform the assessment, we gather data from their answers;
– the application of the model is based on multiple entities, since the questionnaire is submitted to many people within different process areas. Moreover, the geographical application is defined on the basis of the company's needs and the localization of its facilities.

In terms of maturity levels, we defined 5 levels: basic, aware, practiced, competent, proficient, as reported in Sect. 4.1. In the populate phase, we defined what needs to be measured and how the measurement is performed. The definitions of the job profiles under assessment are based on the integration of the hard skills essential for the relevant process area with the soft skills that we found to be fundamental in general to properly embrace the I4.0 paradigm, and basic knowledge about ICT. This last dimension could be part of the hard skills but, considering its absolute importance in this context, we decided to examine it as a stand-alone category. Therefore, the assessment is performed along these three analysis dimensions.
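The paper does not prescribe how selected answers map onto a level. As a hypothetical sketch, assuming each normative answer is tagged with the maturity level (1–5) it describes, a per-dimension score could be aggregated as the median selected level:

```python
from statistics import median

# Hypothetical scoring sketch (the paper does not prescribe this mapping):
# each selected normative answer is assumed to be tagged with the maturity
# level (1-5) it describes; a dimension's score is the median selected level,
# rounded down to a whole level.
LEVEL_NAMES = {1: "basic", 2: "aware", 3: "practiced",
               4: "competent", 5: "proficient"}

def dimension_scores(answers):
    """answers: mapping analysis dimension -> list of selected levels (1-5)."""
    return {dim: int(median(levels)) for dim, levels in answers.items()}

scores = dimension_scores({
    "soft skills": [3, 3, 4, 2, 3],
    "ICT literacy": [4, 4, 3, 4],
    "hard skills": [2, 3, 2, 2, 3],
})
# scores == {"soft skills": 3, "ICT literacy": 4, "hard skills": 2}
```

The median is a deliberate choice in this sketch: it is robust against a single over- or under-confident answer, whereas a mean would let one outlier shift the level.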


In order to gather data from the workforce, we created a questionnaire tailored to each job profile. The first section of the questionnaire is dedicated to soft skills, the second one to ICT literacy; these two parts are submitted to all job profiles without any differences. The third one is focused on hard skills, which are specific to each job profile; therefore this section is developed ad hoc for each job profile according to its characteristics. To select the structure and formulate the normative answers, we were inspired by [7, 17]. We used normative answers, instead of checklists or similar methods, to properly describe a specific level without the subjectivity of the models analysed in Sect. 3.1 and to increase the level of standardization. The person answering the questionnaire selects the sentence that best describes his/her behaviour, attitude or thoughts. In the test phase, we organized two different focus groups. The first one was held with academic experts: the “Jobs&Skills” team of Polimi together with the author of DREAMY. The “Jobs&Skills” team was of fundamental importance for validating our questionnaire and the skills we defined as skills 4.0, since their past studies were focused on this topic. The author of DREAMY, on the other side, was fundamental for the validation of our model as a whole, since she provided suggestions on the possible criticalities of the first version of the model. A second focus group was organized with industrial experts in order to verify the formulation of each question and the related normative answers. Once the model had been created and tested, it was submitted to a sample of companies in order to verify its usability.

4.3 DREAMY4Skills Application

In order to validate our model, we applied it to a real case. We sent the questionnaire to a company and its personnel provided us with the answers. To report the final results, we used a radar chart (see Fig. 1) that shows all the job profiles together, which makes it possible to better see their differences. As far as soft skills are concerned, all the job profiles are around the third level; the main differences are due to the higher knowledge of foreign languages and higher interest in continuous learning shown by managerial roles with respect to the more operative ones. As regards ICT literacy, the results highlight that, as expected, the profiles with the lowest knowledge are warehouse and production operators, being around the third level. These are people who currently do not use PCs and tablets during their working activities. The profiles reaching the fourth level in ICT literacy are the data scientist followed by the managers, while the best performer is the IT-OT integrator. Looking at hard skills, the data analysed show that the majority of the job profiles have an evident gap. Also in this case, given that we are talking about new technologies like smart devices, operative profiles have a bigger gap than managerial ones, and the best performer is the IT-OT integrator.


Fig. 1. Radar chart showing the results of the case study

This assessment allowed us to understand the current status of the skills of the workforce; we then developed a report to explain to the company the meaning of the scores, giving a detailed description of the current level of each profile. Finally, we suggested, as forthcoming targets, the improvements needed to reach the upper level, underlining the strengths and weaknesses of each person.

5 Limitations and Conclusions

The model provides a picture of the current status of a company's workforce, enabling its continuous improvement. In order to apply it effectively, some communication effort is required from companies, because the assessment must be seen by the workforce as something positive for their professional growth and not as an internal investigation conducted by human resources to evaluate the good/bad performance of a person. Moreover, to guarantee the alignment between the assessment method and technological advancement, the model should always be kept up to date in terms of both the job profiles considered and the skills included in the analysis. To conclude, the proposed model has a descriptive purpose; therefore, the main follow-up that we envisage is to make it prescriptive, by providing companies with guidelines to achieve their next goal once the current situation has been assessed.

References
1. Gerbert, P., et al.: Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries (2015)
2. WMF: The 2018 World Manufacturing Forum Report: Recommendations for the Future of Manufacturing (2018)
3. Becker, G.S.: The Age of Human Capital (2002)


4. Acerbi, F., Assiani, S., Taisch, M.: A research on hard and soft skills required to operate in a manufacturing company embracing the Industry 4.0 paradigm (2019, submitted)
5. Pinzone, M., Fantini, P., Perini, S., Garavaglia, S., Taisch, M., Miragliotta, G.: Jobs and skills in industry 4.0: an exploratory research. In: Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D. (eds.) APMS 2017. IFIP AICT, vol. 513, pp. 282–288. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66923-6_33
6. Humphrey, W.S.: Managing the Software Process. Addison-Wesley, Reading (1989)
7. De Carolis, A.: A methodology to guide manufacturing companies towards digitalization. Politecnico di Milano (2017)
8. van der Vleuten, C.P.M., Swanson, D.B.: Assessment of clinical skills with standardized patients: state of the art. Teach. Learn. Med. 2(2), 58–76 (1990)
9. Martin, J.A., et al.: Objective structured assessment of technical skill (OSATS) for surgical residents. Br. J. Surg. 84(2), 273–278 (1997)
10. Mandel, L.S., Goff, B.A., Lentz, G.M.: Self-assessment of resident surgical skills: is it feasible? Am. J. Obstet. Gynecol. 193(5), 1817–1822 (2005)
11. Wilson-Ahlstrom, A., Yohalem, N., DuBois, D., Ji, P., Hillaker, B., Weikart, D.P.: From Soft Skills to Hard Data: Measuring Youth Program Outcomes, 2nd edn. (2014)
12. OECD: Take the Test: Sample Questions from OECD's PISA Assessments. Programme for International Student Assessment (2009)
13. Arora, S., et al.: Self vs expert assessment of technical and non-technical skills in high fidelity simulation. Am. J. Surg. 202(4), 500–506 (2011)
14. Consiglio, C., Santarpino, M.M.: Dalle competenze professionali ai risultati di successo. Un caso organizzativo di applicazione della BFC map
15. Giunti Psychometrics: Mappatura e valutazione delle competenze. http://www.giuntihdu.it/it/servizi/mappatura
16. Business Plus: Mappatura delle Competenze – Un Metodo e un Modello (2013). https://www.bplus.it/mappatura-delle-competenze-un-metodo-e-un-modello/
17. R-Group s.r.l.: Il Manuale dei profili professionali di competenza
18. Galaske, N., Arndt, A., Friedrich, H., Bettenhausen, K.D., Anderl, R.: Workforce management 4.0 – assessment of human factors readiness towards digital manufacturing. In: Trzcielinski, S. (ed.) AHFE 2017. AISC, vol. 606, pp. 106–115. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-60474-9_10
19. Pöppelbuß, J., Röglinger, M.: What makes a useful maturity model? A framework of general design principles for maturity models and its demonstration in business process management. In: European Conference on Information Systems, Paper 28 (2011)
20. de Bruin, T., Rosemann, M., Freeze, R., Kulkarni, U.: Understanding the main phases of developing a maturity assessment model. In: 16th Australasian Conference on Information Systems, pp. 8–19 (2005)

Collaborative Technology

A Theoretical Approach for Detecting and Anticipating Collaboration Opportunities

Ibrahim Koura1, Frederick Benaben1, and Juanqiong Gou2

1 IMT Mines Albi, Albi, France
{Ikoura,Frederick.Benaben}@mines-albi.fr, [email protected]
2 Beijing Jiaotong University, Beijing, China

Abstract. The concept of collaborative networks comes up frequently these days as an answer to the question of how enterprises can adapt and thrive in today's intensely competitive commercial environment. A substantial body of knowledge on collaborative networks has been gathered so far, from defining network types to characterizing levels of partnership and proposing models for partnership development. However, most of these efforts have not tackled a vital obstacle: detecting and predicting collaboration possibilities between enterprises. In this paper, a new classification of enterprise characteristics is proposed, to be used as a profile for characterizing enterprises susceptible to take part in a collaborative network. The proposed detection approach builds on this enterprise characteristics concept as well as on collaboration network types. A hypothesis for ranking potential partners using KPIs is also presented, along with the big picture of the approach and the future work that remains to be done.

Keywords: Collaborative networks · Enterprise characteristics · Collaboration detection · KPI classification



1 Introduction

To catch a transient opportunity in the market, today's enterprises would rather collaborate with other enterprises than invest in resources that may be scarcely used once the opportunity fades, even if the investment seems right at the moment the opportunity arrives. For example, Airbus has recently frozen the hiring of new employees and instead extended the network of its subcontractors. Hence, organizations have begun to partner up and rely on each other whenever a benefit is found [1]. Historically, however, companies were vertically integrated organizations, and partnerships were not easy to form. Nowadays, major changes are taking place in the economy towards more flexible network organizations, which are dedicated to improving flexibility and the ability to quickly set up and maintain partnerships [2, 3]. Working collaboratively can contribute significantly to the success of a business, delivering benefits including cost savings, increased sales, knowledge transfer, access to new markets, increased capacity, and improvements in efficiency and effectiveness. Members of a network will often participate in

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 531–538, 2019. https://doi.org/10.1007/978-3-030-29996-5_61


information-sharing and work together on cost-reduction measures to maximize their competitiveness. Collaboration allows the transformation of ordinary information-sharing activity into dynamic relationships that help all parties in the network [4, 5]. However, there may be limitations to the network. Forcing one party's style of working on the others, whether because of cultural diversity, conflicts in working style, or overshadowing, can be a negative aspect if not dealt with correctly. Timing can also be an issue: gathering information or checking with other parties on each decision can slow the process, even as the network's expectations push things to move faster. Therefore, balancing these aspects is necessary for a good collaboration [6]. So how can enterprises outline, assemble, and build their collaborations, and how can they optimize their partnership choices to benefit from each other as much as possible? The aim of this research is to propose an approach for suggesting potential collaborations between enterprises, using enterprise characterization together with network types and concepts. This article explains the characterization of an enterprise/organization and gives examples of collaboration network types. A new hypothesis is proposed based on an industrial classification criterion, which is used in the approach. Types of collaboration links are also defined, along with a ranking hypothesis based on KPI classifications and dimensions.

2 Collaboration Network Types

To understand how an enterprise can be capable of collaboration and become part of such a network, the types and properties of collaboration networks have to be studied. This section provides an overview of the involved research areas, as found in the current literature on collaboration network types. A collaborative network is a network of different entities, such as organizations or people, that are for the most part unrelated to each other in terms of geography, operating environment, culture, and so on. These entities come together to serve a certain purpose that will benefit all parties in the network [7]. They often collaborate on commercial ventures such as developing new products, penetrating new markets, improving existing processes, or buying and selling finished or unfinished goods. Moreover, depending on the collaboration goal of each enterprise, different types of networks can be formed to deliver different types of benefits. Business networks may provide member companies with access to resources that would otherwise be beyond the scope of a single business. Individual businesses can face a number of limitations when trying to compete in global markets, including scale and expertise. Through collaboration, businesses can often complement each other and specialize in different areas to compete in markets usually beyond their individual reach. Examples of such networks, which can be an output of the collaboration process, are as follows [4, 7–9].


• Extended Enterprise – a concept commonly applied to an organization in which a dominant enterprise "extends" its boundaries to all or some of its suppliers. An extended enterprise can be seen as a particular case of a VE.
• Virtual Enterprise (VE) – a temporary alliance of enterprises that come together to share skills or core competencies and resources in order to respond to business opportunities.
• Virtual Organization (VO) – a concept similar to a VE: a set of autonomous organizations that share resources and skills to achieve a mission/goal, but not limited to an alliance of for-profit ventures. A VE is a particular case of a VO.
• Dynamic Virtual Organization – typically refers to a VO that is established in a short time to respond to a competitive market opportunity and has a short life cycle, dissolving when the temporary purpose of the VO is accomplished.
• VO Breeding Environment (VBE) – an association of organizations and their related supporting institutions that have both the potential and the will to cooperate with each other through a long-term agreement. When a business opportunity is identified by one member, a subset of these organizations can be selected, thereby forming a VE or VO (Fig. 1).

Fig. 1. Network types example

3 Approach

The first step in this approach is to define a profile for each enterprise. Many characteristics can define an enterprise, for example the number of employees, the amount of sales, the number of branches, and so on. However, given the scarcity of literature on enterprise characterization usable for detecting potential collaboration, this article uses the following characteristics as the principal components of any organization:

1. Performance – concerns liquidity and solvency ratios of a company, as well as product quality, customer satisfaction, and so on. It can be described by revenue, cash flow, patrimony, market share, etc. For example, the revenue of company X (mobile manufacturing industry) could be 500,000 euros and its market share 15% of the whole market capacity.
2. Size – can be measured by various criteria such as number of employees, number of sites, outsourcing activities, and existing links with other enterprises.
3. Type of industry – for example the NACE code (the industry-standard classification system used in the European Union) [10].


4. Type of benefit desired – the exact goals behind the desired collaboration. For example, if company X wants to introduce a new product to the market, the goal would be to develop a new process for this product.
5. Collaboration capability – what the enterprise can offer to the collaboration and what exactly its tendencies are.
6. Non-tangible characteristics – for example, the social goals of an enterprise.

These characteristics will be used as a profile for any enterprise, and this profile will be used in identifying potential collaboration networks. Any subset of these six characteristics could be a relevant way to characterize organizations. To focus on a proof of concept, this article considers only one specific criterion: the type of industry (NACE code). This code classifies all industry types into four levels (sections, divisions, groups, and classes). The criteria for grouping divisions, groups, and classes are discussed in the official European Commission document [10]. Taking the division criteria into account, it can be said that for any enterprise identifiable under one or more industry classes, a potential collaboration link can be suggested within the same division, because at least two organizations within the same division can collaborate to sell, buy, or share something. Normally there could be many different collaboration network types, but starting with a VE, we can consider collaborations in which similar companies join forces to increase their workforce. We will therefore first consider enterprises in the same division as candidates for collaboration. This does not mean that a VE can only be formed by enterprises in the same division; it is just one possibility among many.

Obviously there could be other types, for instance between two groups with complementary activities in the same division, such as wholesale of food, beverages and tobacco and wholesale of household goods (for example, large hypermarkets). There could also be a potential collaboration between two divisions in different sections, such as manufacture of food products (Section C) and fishing (Section A). According to this approach, the first and most basic step for an enterprise is to look for potential collaboration with other industry classes within the same group, for example sharing a resource or working together to meet the expectations of the same client. The following example illustrates the idea. Enterprise X's industry activity is manufacturing rugs and carpets, so Enterprise X belongs to Section C (Manufacturing), and its class code falls in the fourth group of division 13, giving class code 1393. According to our approach, Enterprise X could have a potential collaboration with all the industry types in the same group of class codes: 1391, 1392, 1393, 1394, 1395, 1396, and 1399. Several types of elements can be exchanged between enterprises: resources, information, intermediate products (I Product), final products (F Product), and services. These elements are either given, received, or shared by enterprises. Table 1 summarizes these relations.
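The same-group detection rule described above can be sketched in a few lines. This is an illustrative sketch only: the `Enterprise` structure and function names are our own, not part of the NACE standard or of any existing implementation; the only assumption taken from the text is that a 4-digit NACE class belongs to the group given by its first three digits.

```python
# Sketch of the first-level detection rule: enterprises whose NACE class
# codes fall in the same group are flagged as collaboration candidates.
from dataclasses import dataclass


@dataclass
class Enterprise:
    name: str
    nace_classes: list  # 4-digit NACE class codes, e.g. ["1393"]


def nace_group(class_code: str) -> str:
    """A NACE class (4 digits) belongs to the group given by its first 3 digits."""
    return class_code[:3]


def candidates_same_group(target: Enterprise, others: list) -> list:
    """Return the enterprises sharing at least one NACE group with the target."""
    target_groups = {nace_group(c) for c in target.nace_classes}
    return [e for e in others
            if target_groups & {nace_group(c) for c in e.nace_classes}]


# Enterprise X (rugs and carpets, class 1393) against two hypothetical peers.
x = Enterprise("X (rugs and carpets)", ["1393"])
pool = [Enterprise("Y (other textiles)", ["1391"]),
        Enterprise("Z (food wholesale)", ["4631"])]
print([e.name for e in candidates_same_group(x, pool)])  # only Y shares group 139
```

Extending the rule to the division level (first two digits) or to cross-section links, as discussed above, only requires swapping the key function.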


Table 1. Relation between exchange types

|                    | Resource | Information      | I Product  | F Product          | Service  |
|--------------------|----------|------------------|------------|--------------------|----------|
| Given/Sold by      | Owner    | Informer/Advisor | Supplier   | Vendor, Endorsee   | Provider |
| Received/Bought by | Renter   | Recipient        | Integrator | Customer, Endorser | Receiver |
| Shared             | Parties  | Parties          | Parties    | Parties            | Parties  |
As shown in Table 1, there is a two-way relationship between the 'given/sold by' cells and the 'received/bought by' cells, due to the nature of the relation between them: if there is a customer then there is certainly a supplier, if there is a renter of a resource then there is an owner, and so on. In the 'shared' row, however, the two-way relationship lies within the cell itself, as sharing requires parties with the same role. A potential collaboration within the same division can exist for any of the exchange types described in Table 1. As discussed earlier, six characteristics define any enterprise's profile. Take two measurements of the first characteristic (performance) as an example: revenue and product quality. An enterprise will be affected on one or both of these measurements depending on which partner joins the collaboration network (Fig. 2). If the collaboration partner is an auditing company, this will affect quality in some way; if it is an endorser company that takes the final product, introduces it to the market under its own name, and sells it, this will affect revenue. Therefore, depending on the motivation behind the collaboration, choosing the best partner based on KPIs becomes useful.
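The complementary role pairs of Table 1 can be encoded directly, which makes the "two-way direction" remark checkable. The role names mirror the table; the function and the lowercase string encoding are our own illustrative choices, with sharing represented by both sides taking the role "parties".

```python
# Minimal encoding of Table 1: each 'given/sold by' role pairs with the
# corresponding 'received/bought by' role; sharing pairs 'parties' with itself.
COMPLEMENTS = {
    "owner": "renter", "renter": "owner",                              # resource
    "informer/advisor": "recipient", "recipient": "informer/advisor",  # information
    "supplier": "integrator", "integrator": "supplier",                # I product
    "vendor": "customer", "customer": "vendor",                        # F product
    "endorsee": "endorser", "endorser": "endorsee",
    "provider": "receiver", "receiver": "provider",                    # service
}


def matches(role_a: str, role_b: str) -> bool:
    """Two enterprises can be linked if their roles are complementary,
    or if both act as 'parties' sharing the same element."""
    a, b = role_a.lower(), role_b.lower()
    if a == b == "parties":
        return True
    return COMPLEMENTS.get(a) == b


print(matches("customer", "vendor"))   # True: F Product, two-way pair
print(matches("owner", "supplier"))    # False: roles from different exchange types
```

Such a lookup could serve as a filter on top of the NACE-based candidate detection, keeping only pairs whose declared roles are actually compatible.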

Fig. 2. Effect on enterprise example

The main concern of any enterprise is to manage its resources so as to achieve a certain goal [9]. The way for an enterprise to test whether it has achieved its set of goals is to examine the results of its KPIs. Many KPIs can be stated but, based on [11], the five dimensions listed below cover all types of KPIs for any industry type:

1. Financial – a measurable value indicating how well a company is doing at generating revenue and profits (e.g. current ratio).
2. HR – measures the efficiency and effectiveness of human-resources processes (e.g. employee productivity rate).


3. Learning & Growth – measures an organization's development (e.g. R&D expenses).
4. Product – measures product quality (e.g. safety and reliability).
5. Market (customer and sales perspectives) – measures product effectiveness on customers and the market (e.g. customer satisfaction and market share percentage).

Each enterprise can have different characteristic measurements within these five dimensions, depending on the nature of the enterprise's activity. For example, a plane manufacturer would be more interested in very strong product KPIs (such as quality, safety, and reliability) than in HR KPIs, whereas a university or an educational center would be more interested in improving its learning & growth KPIs than in focusing on financial KPIs, and so on. With these two examples in mind, a hypothesis can be proposed regarding the relation between the five KPI dimensions and the exchange types described in Table 1. Each relation can have a set of KPI characteristics described from the point of view of the five KPI dimensions. For example, in a vendor-customer relation, the customer is interested in a particular set of KPIs (Fig. 3) [12]. If the customer's priority is improving customer satisfaction rather than increasing revenue, then it is more convenient to collaborate with a company that has positive feedback from its customers, which in turn will affect the company's products from the customers' perspective.

Fig. 3. Vendor KPIs

Of course, one or more of the dimensions may be of little or no interest to some enterprises, depending on the type of industry and type of relation. For example, in a provider-receiver relationship, the receiver KPIs that a provider might focus on in the financial dimension could be cash flow, current ratio, or accounts payable turnover, to estimate how promptly the receiver will pay. For learning & growth, the KPIs might be average years of service, accidents, or R&D expenses as a share of total expenses; for product, usability, repairability, or maintainability; and for market, customer satisfaction, customer turnover rate, or relative market share. There might, however, be no interest at all in the HR KPIs, as they would not help the provider in any way. Therefore, if two or more companies that would act as receivers are considered to have a potential collaboration with this provider, they will be ranked by these KPI criteria. These were just examples of the many


KPIs that could be defined for each dimension. Using this hypothesis in a collaboration network to sell, buy, or share helps enterprises detect the best match to fulfill the goal behind the collaboration.
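The ranking step sketched above can be expressed as a weighted score over the five KPI dimensions, with the weights encoding which dimensions matter for a given relation (e.g. a zero HR weight for a provider-receiver link, as argued). All scores, weights, and candidate names below are invented for illustration and are not taken from the paper.

```python
# Rank candidate partners by a weighted sum over the five KPI dimensions.
def rank_candidates(candidates: dict, weights: dict) -> list:
    """candidates: {name: {dimension: normalized score in [0, 1]}};
    weights: {dimension: weight}, missing dimensions count as 0."""
    def score(kpis):
        return sum(weights.get(dim, 0.0) * val for dim, val in kpis.items())
    return sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)


# A provider's view of receiver KPIs: HR weighted 0, finance weighted highest.
provider_view = {"financial": 0.4, "hr": 0.0,
                 "learning_growth": 0.2, "product": 0.2, "market": 0.2}
receivers = {
    "Receiver A": {"financial": 0.9, "hr": 0.2, "learning_growth": 0.5,
                   "product": 0.6, "market": 0.4},
    "Receiver B": {"financial": 0.5, "hr": 0.9, "learning_growth": 0.6,
                   "product": 0.7, "market": 0.6},
}
print(rank_candidates(receivers, provider_view))  # Receiver A ranks first
```

Receiver B has the stronger HR score, but with HR weighted zero the financially stronger Receiver A ranks first, matching the intuition described in the text.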

4 Conclusion and Future Work

The aim of this research is to establish a solution for suggesting potential collaborations between enterprises, to help improve their businesses and let them benefit from each other as much as possible. This solution can be used by individual enterprises to find the most suitable collaboration network for serving a potential opportunity, or by governments to help build better business environments that improve and develop the economy (Fig. 4).

Fig. 4. Big picture

As presented in Fig. 4, this approach uses enterprise characteristics as a profile. To focus on a proof of concept, one subset of these characteristics was used: the NACE code. A new hypothesis was proposed that uses NACE industry-type codes as nodes and takes collaboration between classes within the same group of industry types as the first, basic level of potential collaboration. This potential collaboration can involve any of the exchange types and links described above. Furthermore, the five KPI dimensions were discussed as an enterprise profile that helps detect the most suitable partner for a collaboration of any type. After implementing this hypothesis, the next step of this research is to develop the model so that it can suggest potential collaborations between industry types in different divisions and sections, and not only within groups.


References
1. Bititci, U.S., Martinez, V., Albores, P., Parung, J.: Creating and managing value in collaborative networks. Int. J. Phys. Distrib. Logist. Manag. 34(3/4), 251–268 (2004)
2. Soares, A.L., de Sousa, J.P., Barbedo, F.: Modeling the structure of collaborative networks: some contributions. In: Camarinha-Matos, L.M., Afsarmanesh, H. (eds.) PRO-VE 2003. IFIP, vol. 134, pp. 23–30. Springer, Boston, MA (2004). https://doi.org/10.1007/978-0-387-35704-1_3
3. Camarinha-Matos, L.M., Abreu, A.: Performance indicators for collaborative networks based on collaboration benefits. Prod. Plan. Control 18, 592–609 (2007)
4. Bacquet, J., Fatelnig, P., Villasante, J., Zwegers, A.: An outlook of future research needs on networked organizations. In: Camarinha-Matos, L.M. (ed.) PRO-VE 2004. IFIP, vol. 149, pp. 17–24. Springer, Boston, MA (2004). https://doi.org/10.1007/1-4020-8139-1_2
5. McLaren, T., Head, M., Yuan, Y.: Supply chain collaboration alternatives: understanding the expected costs and benefits. Internet Res. 12(4), 348–364 (2002)
6. Radu, C.: Need and potential risks of strategic alliances for competing successfully. Ser. Manag. 13, 165–169 (2010)
7. Camarinha-Matos, L.M., Afsarmanesh, H.: Collaborative networks: a new scientific discipline. J. Intell. Manuf. 16, 439–452 (2005)
8. Camarinha-Matos, L.M., Afsarmanesh, H.: The virtual enterprise concept. In: Working Conference on Virtual Enterprises, pp. 3–14 (1999)
9. Martinez, M.T., Fouletier, P., Park, K.H., Favrel, J.: Virtual enterprise – organisation, evolution and control. Int. J. Prod. Econ. 74(1–3), 225–238 (2001)
10. Statistical Office of the European Communities: NACE Rev. 2: Statistical Classification of Economic Activities in the European Community. Office for Official Publications of the European Communities, Luxembourg (2008)
11. Bauer, K.: Key performance indicators: the multiple dimensions. New York, vol. 14, pp. 62–66, October 2004
12. Characteristics for Qualifying Suppliers, GCP Industrial Products. http://www.gcpindustrial.com/blog/characteristics-qualifying-suppliers. Accessed 05 Mar 2019

The Systematic Integration of Stakeholders into Factory Planning, Construction, and Factory Operations to Increase Acceptance and Prevent Disruptions

Uwe Dombrowski1, Alexander Karl1, Colette Vogeler2, and Nils Bandelow2

1 Institute for Advanced Industrial Management (IFU), Technische Universität Braunschweig, Langer Kamp 19, 38106 Brunswick, Germany
[email protected]
2 Chair of Comparative Politics and Public Policy (CoPPP), Technische Universität Braunschweig, Langer Kamp 19, 38106 Brunswick, Germany

Abstract. The construction of factories is based on the factory planning process. The construction of new factories or the expansion of existing ones represents a significant investment decision for companies. From an economic point of view, it is therefore necessary to ensure a trouble-free process. Current trends, such as urban factories, increase the likelihood of conflict with external stakeholders, which in many cases means high additional costs and risks for the companies concerned. The evaluation of current case studies shows that ignoring individual stakeholders during planning can lead to delays or, in the worst-case scenario, to the shutdown of the factory project. The aim of this article is to present the current state of stakeholder integration in factory planning, construction, and factory operations in research and practice. Based on the results of a research project in Germany, studies are presented and necessary fields of action identified. Subsequently, a concept is derived that facilitates the systematic and project-specific integration of stakeholders into the factory planning process.

Keywords: Factory planning process · Stakeholder participation

1 Introduction

The construction of large infrastructure or building projects often leads to conflict with citizens or citizen groups. These conflicts have repeatedly escalated, culminating in violent behavior. A prominent example is the conflict surrounding the German infrastructure and railway project "Stuttgart 21", which was accompanied by massive public – and partly violent – protests. Tens of thousands of citizens mobilized against the project, demanding an end to the construction works. Although the reasons for the resistance against the project are manifold, many opponents questioned the legitimacy of the project and even called for public participation in the decision-making process [1]. The public resistance even contributed to a change of the federal government and

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 539–546, 2019. https://doi.org/10.1007/978-3-030-29996-5_62


to the holding of a referendum on the project. Following the referendum, in which a majority voted to continue with the infrastructure project, the protests lost momentum [2]. The case of "Stuttgart 21" exemplifies how citizens increasingly demand participation in decisions over infrastructure and building projects. In many countries, policymakers are therefore confronted with pressure to introduce participatory elements into the decision-making process in addition to the existing representative democratic procedures [3, 4]. The degree to which individuals feel personally affected by a project often diverges from the "objective" or measurable impact, e.g. on local residents, that can be predicted by public or private project bodies [5]. A specific and comparatively well-researched phenomenon is the NIMBY (Not In My Backyard) behavior that has been observed in many conflicts surrounding public infrastructure [6]. The early integration of potentially affected citizens in planning processes may help to identify individual concerns, thereby preventing escalation processes. Psychological research emphasizes the role of trust in participatory processes [3]. The early integration of different stakeholders in the very first planning phase may avoid the consolidation of opposing coalitions and the formation of enemy images [1]. A specific challenge for participatory processes arises from their contradictory procedural form: although the possibilities to influence the project design are greatest during the early project phase, participation is often only envisaged for the later planning phases. The decreasing possibilities to influence the planning process in these later phases may lead to discontent among external stakeholders during the participation process and thus contribute to escalation. In the factory planning process, participation often only takes place with employees of the company.
In many cases, external stakeholders are only involved if the participation of authorities, for example, is absolutely necessary in the context of approval procedures. Compared to public project developers, private project bodies face an additional challenge. This challenge is particularly relevant for construction projects in areas where local inhabitants are directly affected, e.g. urban factories. Contrary to public projects, e.g. the construction of a public railway station, private projects rarely benefit the general public. The benefits of private projects are normally limited to private actors, whereas the costs must partially be borne by the general public, e.g. by local residents in the form of noise, air pollution, or increased traffic. We argue that this unequal distribution of costs and benefits in the case of private projects increases the likelihood of citizen resistance against the project and the risk of escalation. Our main argument is that the creation of general benefits by private project developers may decisively contribute to the approval of the project by external actors. In addition to providing individual financial compensation, which benefits only a limited number of actors, project developers should seek to create public benefits. A second challenge relates to the different levels involved in the cost-benefit considerations [7]. The negative side effects of private construction projects, such as the building of a new production plant, often affect only the local level. Noise or air pollution is mostly limited to the local level, whereas the benefits, in the form of economic gain, are not tied to it, i.e. they are transferred to the headquarters of the company. Even legally binding instruments, such as ecological compensation areas, are often created in other regions and thus do not benefit local residents. Accordingly, participatory processes


must pay special attention to the local stakeholders and find ways to create private and public benefits at the local level, e.g. in the form of jobs or public goods. In Fig. 1, these differences in costs and benefits between public and private projects are subsumed.

|                   | Costs: private                                      | Costs: public                                      |
|-------------------|-----------------------------------------------------|----------------------------------------------------|
| Benefits: private | Compensation payments by private actors             | Public participation and compensation payments     |
| Benefits: public  | Participation and private provision of public goods | Participation and public provision of public goods |

Fig. 1. Cost-benefit considerations in public and private projects (own compilation)

To resolve the depicted conflicts of interest, we propose a combination of compensation payments made specifically to affected citizens, the provision of public goods, and public participation that begins already in the early planning phase. Private project developers can use participatory instruments, such as neighborhood dialogues or roundtables, to create a basis of trust with external stakeholders and to identify concerns and reservations regarding the construction project. Based on the results of the participatory process, external stakeholders can jointly decide on the design and selection of compensatory measures in the form of public goods. There are a number of examples of excellent stakeholder involvement in the successful construction of factories in urban areas; in these instances, external actors were involved in the early phases of the planning process. To gain a deeper understanding of the dynamics between project developers and external actors during private construction projects, we first analyze the status quo of participatory elements in the factory planning process (see Sect. 2). To explore how both sides – planning bodies and external stakeholders – evaluate the current situation, we present preliminary results of a survey conducted in 2018 (see Sect. 3). The survey results indicate that external stakeholders increasingly demand participation and expect project developers to address their concerns personally. In addition, we present a concept for the systematic and project-specific integration of stakeholders in the context of factory planning (see Sect. 4).
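The quadrant logic of Fig. 1 can be read as a small lookup from the scope of a project's costs and benefits (private vs. public) to the proposed instrument mix. The encoding below is our own illustrative interpretation of the figure, not a prescriptive rule from the paper; the function name and string labels are invented.

```python
# Map (cost scope, benefit scope) to the instrument mix suggested by Fig. 1.
INSTRUMENTS = {
    ("private", "private"): "compensation payments by private actors",
    ("public", "private"): "public participation and compensation payments",
    ("private", "public"): "participation and private provision of public goods",
    ("public", "public"): "participation and public provision of public goods",
}


def suggested_instrument(cost_scope: str, benefit_scope: str) -> str:
    """Return the instrument mix for a project whose costs fall on
    cost_scope and whose benefits accrue to benefit_scope."""
    return INSTRUMENTS[(cost_scope, benefit_scope)]


# Urban factory case argued in the text: costs borne by the local public
# (noise, traffic), benefits remaining private (transferred to headquarters).
print(suggested_instrument("public", "private"))
```

The urban-factory case thus lands in the quadrant combining public participation with compensation payments, which is exactly the combination this section proposes.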

2 Status Quo of Participatory Elements in the Factory Planning Process

Participatory research identifies different forms of participation in factory planning. Within the framework of participation in factory planning projects, existing findings focus on the involvement and participation of employees [8]. The participation of external stakeholders is rarely, if at all, addressed in the relevant literature. The current research project (see ACKNOWLEDGMENT) was the first to


systematically highlight the relevance of various stakeholder groups to the factory planning process as well as their impact on the resulting costs, time and quality of the factory planning project. A detailed and up-to-date overview of the role of participation in the factory planning process is provided by Dombrowski et al. [9]. The so-called “participation paradox” is evident in the participation of both internal and external stakeholder groups (see Fig. 2).

Fig. 2. The participatory paradox of factory planning [9]

Accordingly, the actual interest of stakeholders in participating in the early phase of a factory project is very low; stakeholder interest only increases as the project progresses. Yet it is precisely in the early phase of a project that the possibilities of influencing it are highest and the resulting costs of change lowest. These findings already point to the necessity of establishing a systematic participation process. However, the appropriate degree of participation depends on the project and usually cannot be determined by companies in advance. A lack of stakeholder participation leads in numerous projects to increased communication effort and delays in the factory project, while overly intensive participation results in additional costs and approval procedures; here, too, project delays are likely. An additional layer of complexity arises because different process stages and their sub-processes within factory planning offer different possibilities and potentials for participation [9].

3 Selected Results of the Studies Based on the problems described above (see Sect. 2), several individual studies were carried out as part of the research project “Future Building Participation”. The studies were conceived with the involvement of the project consortium with partners from planning offices, authorities, industry, and science. The survey was conducted throughout Germany. The studies are qualitative as well as quantitative surveys.


Figure 3 summarizes further information on these studies. The quantitative studies are divided into two surveys (1). Within the framework of these two surveys, a total of 5,793 responses were collected within six months. The questions address the interaction between companies and stakeholders, the time and manner of integration, current obstacles and risks, as well as recommendations for participation. In addition to the current findings from the literature, the results of systematic analyses of a total of five case studies were used in the preparatory stage of the study (2). The results were supplemented and substantiated by ten expert interviews (3). The aim of the partial studies was to obtain a comprehensive overview of the current deficits, to concretize them, and to generate findings for the design of the action plan.

[Fig. 3 distinguishes three studies. (1) Quantitative questionnaire studies: Study 1 surveyed 52 participants from different companies (2,798 answers given) on the previous involvement of stakeholders and experiences, the time of and reasons for involvement, the relevance of methods and tools, obstacles and risks, and the need for action; Study 2 surveyed 108 external stakeholders (2,995 answers given) on their relationship to the specific project, details of their involvement and personal attitude towards the project, personal information, the importance of tools and instruments, aspects related to the project, instruments for the enforcement of one's own position, and influences on the environment. (2) Analysis of case studies: a systematic analysis of five factory planning processes. (3) Qualitative expert survey: ten experts from companies and authorities, interviewed on the basis of an interview guideline.]

Fig. 3. Overview of the studies carried out
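As a quick consistency check (a hypothetical sketch, not part of the studies themselves), the answer counts reported for the two questionnaire studies can be summed and compared with the stated total of 5,793 responses:

```python
# Illustrative cross-check: the per-survey answer counts reported for the
# two questionnaire studies should add up to the total of 5,793 responses
# stated in the text. Dictionary keys are descriptive labels only.
answers = {
    "study_1_companies": 2798,              # 52 participants
    "study_2_external_stakeholders": 2995,  # 108 participants
}
total = sum(answers.values())
assert total == 5793  # matches the total reported for the two surveys
print(total)  # → 5793
```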

Regarding (1): The results of the two surveys clearly demonstrate the need for action. Overall, more than 50% of the companies state that they do not yet have a system, guidelines or other processes for involving external stakeholders. Without such a system, the integration process runs without standards and is handled purely subjectively by the responsible employee. In these cases, successful participation is not guaranteed and becomes a matter of chance. The necessary fields of action are shown in Fig. 4.

[Fig. 4 rates the need for action for each field on a five-point scale, from "very large need for action (1)" to "no need for action at all (5)", together with the arithmetic mean value. The fields covered are: formulation of clear objectives; timing of communication; standardization; responsibilities; ways of communication (e.g. information event, flyer, email newsletter); convincing stakeholders of the project; tools to be able to leverage conflict/escalation potential; a clear process of stakeholder involvement; and methodological competence and professional communication (e.g. on where there is potential for conflict/escalation).]

Fig. 4. Need for action from the company perspective (n = 52)

544

U. Dombrowski et al.

According to the survey, over 62% of the companies see a large need for action regarding a transparent process for involving stakeholders. The participants' planning processes show considerable deficits due to the lack of participation standards, which can lead to escalations with external participants, higher costs and lower planning quality. Without this standardization, participation cannot be managed. This is confirmed by the survey, in which the lack of clear objectives is criticized as a relevant field of action. Another important field of action is the required methodological competence; its absence is closely linked to the way communication is implemented. In this context, the timing of communication can also be identified as a relevant field of action. Which needs the transparent process must be geared towards, however, remains unclear at this point.

In addition to the company perspective, a closer look at the external stakeholders is necessary. Figure 5 below contrasts the involvement demanded by the surveyed actors with their actual involvement in the project.

[Fig. 5 compares, for each form of involvement, the share of stakeholders demanding it with the share actually experiencing it. The forms of involvement are: direct information from the project-executing agency; consultation of the opinion on the project; the possibility of (partially) actively co-determining the project; (partial) co-decision on the project; involvement through the project managers beyond the consultation of the opinion; information on the project via third parties (e.g. press conference, press release, newspaper article); and no involvement in the project through the project managers.]

Fig. 5. Desired and actual involvement in the project from stakeholder perspectives (n = 108)

According to the respondents, greater participation in the planning process is required. Interestingly, the participants generally want only a low level of co-decision on the project. Rather, they would like to be informed directly by the project-executing agency. Having their opinions heard as well as the possibility of participating (partly) actively in the project are strongly expressed wishes. These results play a major role in the design of the concept, as they enable recommendations to be made on the required degree of participation.

Regarding (2): The analyses allowed the existing fields of action to be further concretized. On the one hand, relevant results are now available; on the other hand, concrete technical content for the design of an action guideline is still lacking. This is where the expert interviews come in. Partial results are summarized in Fig. 6.

[Fig. 6 tabulates, per subject area (authorities/public offices, licensing, time factor, environmental protection, fire department, construction noise, public participation, communication with authorities), the fields of activity raised by the experts and their recommendations. Recommended measures include, among others: a round table with the authorities and public offices roughly one year before the start of the project; early clarification of which approval procedure is the correct one and early contact with the appropriate authority; star-shaped distribution of the corresponding documents; timely appointment of expert witnesses, including long-term contracts for recurring examinations; timely and intensive consultation between authorities, expert witnesses and the plant team; creation of compensation and replacement areas; external trainings and seminars for plant operators; early announcement and publication of building projects, construction progress and the next planned steps via the company's own website, the local town hall, city council members and the media; and early involvement of the State Administration Office.]

Fig. 6. Structured results from the ten expert interviews (excerpt)

4 Concept

The approach consists of transferring the collected findings into a practice-oriented guideline for action. This guideline is intended to facilitate the project-specific involvement of stakeholders in factory planning and construction projects. Based on the results (see Sect. 3) and supported by the participants of the project consortium, the following four fields of action could be identified: "sustainable planning", "frontloading", "transparency and visualization" and "stakeholder orientation". For their operational implementation, the structure concept of Lean production systems (VDI 2870) was used. This concept offers an ideal structure for operationalizing fields of action; it is already familiar to numerous companies and has established itself in operational practice. The structure is shown in Fig. 7.

Fig. 7. Structure of the Guide to Action (based on VDI 2870, [10])


5 Summary and Outlook

The existing results provide the basis for the systematic integration of external stakeholders into the planning process of factories. In the future, it will be necessary to evaluate the methods and tools already identified for participation and to measure their influence on costs, time and quality. In addition, the concept must be fully validated in practice.

Acknowledgment. The research project "Integration of stakeholders to increase acceptance and prevent disruptions in planning and execution" (SWD-10.08.18.7-17.52) is funded by the research initiative "Zukunft Bau" of the Federal Institute for Building, Urban Affairs and Spatial Development (BBSR) within the Federal Office for Building and Regional Planning in Germany. The research project focuses on the degree of participation in construction projects in an entrepreneurial context. Its aim is the early, targeted involvement of experts in construction and planning processes; the resulting increase in acceptance should help to avoid long-term stress and the delays that result from it. Relevant results of the research project are to be prepared in a comprehensive and practice-oriented action guide that supports companies in the project-specific integration of stakeholders into their factory planning projects. The project runs from 08/2017 to 07/2019.

References

1. Vogeler, C.S., Bandelow, N.C.: Mutual and self perceptions of opposing advocacy coalitions: devil shift and angel shift in a German policy subsystem. Rev. Policy Res. 35(5), 717–732 (2018). https://doi.org/10.1111/ropr.12299
2. Wagschal, U.: Die Volksabstimmung zu Stuttgart 21: Zwischen parteipolitischer Polarisierung und "Spätzlegraben". Der Bürger im Staat 62(3), 168–173 (2012)
3. Bandelow, N.C., Thies, B.: Gerechtigkeitsempfindungen bei Großprojekten als Ursache von Konflikteskalationen? Vertrauen und Legitimität als moderierende Faktoren illustriert am Beispiel der Konflikte um die Erweiterung des Frankfurter Flughafens. Politische Psychologie (J. Polit. Psychol.) 3(1), 24–37 (2014)
4. Fink, S., Ruffing, E.: The differentiated implementation of European participation rules in energy infrastructure planning. Why does the German participation regime exceed European requirements? Eur. Policy Anal. 3(2), 274–294 (2017). https://doi.org/10.1002/epa2.1026
5. Lindloff, K., Lisetska, K., Eisenmann, T., et al.: Regionale Betroffenheit in informellen Beteiligungsverfahren bei Infrastrukturprojekten. der moderne staat (dms) (1), 89–115 (2017)
6. Di Nucci, M., Brunnengräber, A.: In whose backyard? The wicked problem of siting nuclear waste repositories. Eur. Policy Anal. 3(3), 295–323 (2017)
7. Hornig, E.-C.: Airport expansions and public protests – the democratic dilemma of vertically asymmetric policies. Eur. Policy Anal. 3(3), 324–342 (2017)
8. Dombrowski, U., Riechel, C., Schulze, S.: Enforcing employees participation in the factory planning process. In: 2011 IEEE International Symposium on Assembly and Manufacturing (ISAM), pp. 1–6. IEEE (2011)
9. Dombrowski, U., Karl, A., Imdahl, C.: The role of participation in the factory planning process, pp. 957–960 (2018)
10. Verein Deutscher Ingenieure: VDI 2870 Lean production systems. Basic principles, introduction, and review: Part 1 (2012)

Service Engineering Models: History and Present-Day Requirements

Roman Senderek, Jan Kuntz(&), Volker Stich, and Jana Frank

Institute for Industrial Engineering at RWTH Aachen University, Campus-Boulevard 55, 52074 Aachen, Germany
[email protected]

Abstract. Since the field of service engineering emerged in the late 20th century, the service industry has undergone drastic changes. Among the reasons for these changes is the increasing digitalization, which has made it difficult for companies to successfully develop new service offerings. While numerous service engineering models are available to provide guidance during the design of new services, many of them cannot keep up with the requirements of today's economic environment. The present paper examines the requirements that service engineering models need to meet in order to be suitable guidelines for the digital age. To this end, the introduction illustrates how digitalization has changed the service industry. Afterwards, selected service engineering models and related norms are presented. Finally, a set of requirements for modern service engineering models derived from best practices from recent years is introduced.

Keywords: Smart services · Service engineering · Digitalization

1 Introduction

Since the 1990s, service engineering has established itself as a systematic process for the development of services. As a strategic and creative process that aims at designing and implementing services and individualized solutions in a model-based way, service engineering is to services what product planning and development are to physical products. Among the overarching goals of service engineering are an efficient service development and a high level of service quality. Therefore, service engineering promises a competitive advantage and an increase in quality and customer satisfaction [1, 2]. Various service engineering models and related norms have been published over the course of the past decades. Service engineering models aim at supporting companies in developing successful service offers by providing a course of action that companies can follow in the development process, and many of them have proven their value for the development of services in the past. However, the service industry has changed significantly during the last years. One major factor contributing to this change is the ongoing digitalization, which has created a variety of new challenges. It has drastically changed the way services are created and delivered [3, 4], and lower barriers of entry have paved the way for stronger competition and an overall increased supply [5].

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 547–554, 2019. https://doi.org/10.1007/978-3-030-29996-5_63


Companies must therefore differentiate themselves from competitors by continuously delivering innovative solutions that address individual customer needs. As an effective tool to achieve this goal, digital services, and smart services in particular, have gained significant importance in recent years. Smart services can be defined as individualized combinations of physical and digital services that generate added value for providers and customers and offer demand-oriented value via digital platforms. They are based on smart products, which are connected to the internet, interact with their environment and gather environmental data. The collected data sets are combined with other easily accessible information and processed into so-called smart data, on the basis of which smart services are designed [6].

In this digitalized economic environment, many companies struggle to develop successful digital services. This is partly caused by a lack of service engineering methods suited for this task [7], as many older models lack the flexibility required to keep up with the dynamics of today's market. The increasing share of digital components in services reveals deficits in the direct application of classical service engineering methods to smart services. The development of smart services thus requires a new service engineering process that can quickly adapt to evolving customer needs, is efficient, requires few resources, and is centered on the customer and the value that can be created through data insights [8, 9].

The present paper suggests a list of requirements that service engineering models need to fulfill to succeed in today's economic environment. Before these requirements are presented, however, the following section provides an overview of selected service engineering models and norms that touch upon the topic of service engineering.
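The smart-service data flow described in this section (smart product gathers data, the data is combined with other accessible information into smart data, and a demand-oriented service is built on top) can be sketched as follows. This is a purely illustrative sketch; all field names, values and the temperature threshold are assumptions, not taken from the cited sources.

```python
# Hypothetical sketch of the smart-service data flow: a smart product
# gathers environmental data, which is combined with other accessible
# information into "smart data" on which a service is based.

def to_smart_data(readings, context):
    """Combine raw smart-product readings with external context info."""
    return [{**r, **context} for r in readings]

def condition_service(smart_data, temp_limit_c=80):
    """A demand-oriented service sketch: flag machines running hot."""
    return [d["machine_id"] for d in smart_data if d["temp_c"] > temp_limit_c]

readings = [
    {"machine_id": "M1", "temp_c": 85},  # data gathered by the smart product
    {"machine_id": "M2", "temp_c": 61},
]
context = {"ambient_temp_c": 22}         # other easily accessible information
smart = to_smart_data(readings, context)
print(condition_service(smart))          # → ['M1']
```

The point of the sketch is only the layering: raw product data is first enriched into smart data, and the service logic operates on the enriched records rather than on the raw readings.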

2 Selected Service Engineering Models and Related Norms

Numerous service engineering models have been published in the past. They prescribe a course of action that serves as a guideline for the development of new services and usually consists of phases that represent a high-level outline of how the model is structured and describe an overarching goal for each stage of the model. Each phase can comprise various activities that describe the individual tasks the company needs to complete to fulfill the phase's goal [10].

There are three main types of service engineering models. The first type is the linear model, in which each development phase builds on the previous one. While these models benefit from their simplicity and transparency, their strictly sequential structure leads to a lack of flexibility and adaptability. The second type is the iterative model, in which the individual development phases are meant to be repeated several times; with each iteration, a finer concept of the service is developed. This approach offers quick results and flexibility in correcting mistakes; however, maintaining an overview of the tasks fulfilled requires a high level of coordination. The third type is the prototyping model, which focuses on the early development of prototypes that can be tested with customers and improved based on their feedback. Prototyping ensures a strong customer orientation, although it demands a high level of coordination among all parties in order to function properly [7, 11, 12].
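The three model types differ mainly in their control flow. A minimal, purely illustrative sketch (the phase names, iteration count and feedback criterion are assumptions, not taken from any specific model in the literature) could look like this:

```python
# Illustrative control-flow sketch of the three model types.

PHASES = ["idea", "requirements", "design", "implementation", "test"]

def linear(phases):
    """Linear model: each phase builds on the previous one, run exactly once."""
    return list(phases)

def iterative(phases, iterations):
    """Iterative model: all phases repeat; each pass refines the concept."""
    log = []
    for i in range(1, iterations + 1):
        log.extend(f"{p} (iteration {i})" for p in phases)
    return log

def prototyping(build, satisfied, max_rounds=10):
    """Prototyping model: build early, then refine based on customer feedback."""
    prototype = build(None)
    rounds = 1
    while not satisfied(prototype) and rounds < max_rounds:
        prototype = build(prototype)  # improve using customer feedback
        rounds += 1
    return prototype, rounds

# Example: a prototype counts as 'done' once three feedback rounds are in.
proto, rounds = prototyping(
    build=lambda p: (p or 0) + 1,  # each round adds refinements
    satisfied=lambda p: p >= 3,    # stand-in for positive customer feedback
)
print(linear(PHASES)[0], len(iterative(PHASES, 2)), rounds)  # → idea 10 3
```

The sketch makes the trade-offs in the text concrete: the linear variant is the simplest but has no feedback path, the iterative variant multiplies coordination effort with each pass, and the prototyping variant terminates only when customer feedback says so.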


Apart from service engineering models, numerous norms touch upon the subject of service engineering directly or indirectly. Table 1 provides a short overview of German and international norms related to the service engineering process and of selected service engineering models. Due to the limited scope of this paper, these cannot be explained in detail; for further information, please refer to the sources listed in Table 1. Many of these norms and models have proven to be valuable aids for the development of services in the past. However, a majority of the available models are not compatible with today's market situation, as they are too complex and require excessive resources and development time before initial results can be produced and tested for value [8]. They are often inflexible and thus unsuitable for most fields of application today [32]. Researchers and companies alike agree that existing models must catch up with the new requirements for creating innovative service solutions. The next section explores what requirements a model needs to fulfill in order to be suitable for today's market.

Table 1. Selected service engineering models and related norms

Models:
• Scheuing and Johnson's linear model (1989) [13, 14]
• Edvardsson and Olsson's linear model (1996) [15]
• Ramaswamy's linear model (1997) [16]
• Jaschinski's iterative model (1998) [17]
• Liestmann's iterative model (2002) [18]
• Bullinger and Schreiner's circular model (2006) [19]
• Cernavin's linear model (2007)
• Meyer and Böttcher's approach (2011) [9]
• Leimeister's model (2012) [8]
• Roth's approach (2017)
• Pöppelbuß and Durst's Smart Service Canvas (2017)

Related norms:
• DIN Fachbericht 75 [20]
• ISO/IEC 15940:2013 [21]
• DIN ISO 9004-2:1991 [22]
• DIN PAS 1082 [23]
• DIN PAS 1094 [24]
• DIN PAS 1091 [25]
• DIN PAS 1014 [26]
• DIN PAS 1018 [27]
• DIN PAS 1019 [28]
• DIN PAS 1047 [29]
• DIN SPEC 91310 [30]
• DIN PAS 1076 [31]

3 Requirements for Modern Service Engineering Models

As explained above, many service engineering models are no longer suitable for today's economic environment as they lack agility and flexibility, which calls for the development of new service engineering methods. In order to identify requirements for service engineering models for the digital age, it seems sensible to look at methods that have recently proven their utility in practice and to find out what characteristics they share. While no all-encompassing recipe for service engineering has emerged yet, certain methodologies have proven their worth in creating value from a specific focal point. An analysis of recommendations from recent literature and of trends in service engineering reveals three main best practices with proven results in various industries. These are discussed in the following paragraphs.

The first best practice found in several recent and successful service engineering models is user centricity. While a clear focus on customer needs has always been


important in service engineering, it has gained importance in the digital age as customers have become more empowered through a greater selection of increasingly individualized products. In the sense of customer centricity, an offering is created by integrating the user into the entire development process and thus co-creating a positive customer experience. The more closely a user is involved in the development process, the better the offering will reflect their needs. Customer ideas can be used to create a first prototype, which is then presented to the user for testing and feedback. This is repeated in as many iterations as needed until the prototype matches user expectations. This approach requires an extensive collection and analysis of data concerning customer satisfaction and experience, but it also allows for a high degree of customization [33–35].

The second best practice is the utilization of service ecosystems. Services are normally not developed and implemented by a company alone but through a collaboration of a multitude of actors and resources. Service ecosystems can be defined as "relatively self-contained, self-adjusting system[s] of mostly loosely coupled social and economic (resource-integrating) actors connected by shared institutional logics and mutual value creation through service exchange" [36]. They provide a fertile environment for companies to innovate and realize challenging ideas, as they enable them to partner with actors that can complement and expand their own resources. Today, digital infrastructures allow more diverse actors and more resources to be integrated into a service ecosystem, which cultivates value co-creation on an immense scale. In a service ecosystem, all actors should be empowered by gaining access to the various ecosystem assets and infrastructures. That way, companies can tap into a wealth of resources that a service design model can develop into an innovative service offering [4, 36, 37].
The third best practice can be described by the term 'agile'. An agile mindset involves a quick and flexible development process and is customer-centric and collaborative in nature, as cross-functional teams are brought together. Moreover, it is output-oriented and entails constant reflection on previous work to identify shortfalls. In addition, the agile mindset is efficient in its use of resources [38, 39], enables the design process to be adapted to changing requirements at any time, and allows for a shorter time to market. One essential method is the development of a minimum viable product (MVP), meaning that a new offering is created with the bare minimum of core features that enable sufficient interaction for constructive user feedback. The final product is completed after multiple iterations of the MVP feedback loop. Agile approaches also tend to follow the lean mindset, which includes a reduction of waste and aims to achieve more with fewer resources, including time and information [40, 41].

The requirements resulting from these best practices, their purpose and suggestions for their application can be found in Fig. 1. For further information, please refer to the references listed there [2, 9, 36–50].


Fig. 1. Requirements for modern service engineering models

4 Outlook

Even though the present paper has argued that many existing service engineering models are no longer suitable for today's economic environment, it is worth mentioning that some promising models incorporating the requirements listed in Fig. 1 have already been or are currently being developed. Among these are Smart Service Engineering [51], Multilevel Service Design [52], Design Thinking for Industrial Services [53] and Recombinant Service System Engineering [54]. Whether these models will prove their success in practice in the long run remains to be seen.

5 Acknowledgment

This research and development project, Digivation, constitutes the superordinate project of the funding line Service Innovation Based on Digitization and is funded by the German Federal Ministry of Education and Research within the research program Innovations for Tomorrow's Production, Services and Work under registration number 02K14A221. The author is responsible for the contents of this publication.

References

1. Schuh, G., Gudergan, G., Senderek, R., Frombach, R.: Service Engineering. In: Schuh, G., Gudergan, G., Kampker, A. (eds.) Management industrieller Dienstleistungen. Handbuch Produktion und Management, pp. 169–199. Springer, Heidelberg (2016)


2. Richter, H.M., Tschandl, M., Platsch, M., Mallaschitz, C.: Erfolg durch neue Services: Service Design & Engineering. Methoden, Werkzeuge und Vorgehensweisen, Kapfenberg/Steyr (2016)
3. Roth, A., Höckmayr, B., Möslein, K.: Digitalisierung als Treiber für faktenbasiertes Service-Systems-Engineering. In: Dienstleistungen 4.0, pp. 185–203. Springer, Wiesbaden (2017). https://doi.org/10.1007/978-3-658-17550-4_8
4. acatech: Smart Service Welt. Digitale Serviceplattformen - Praxiserfahrungen aus der Industrie. Best Practices. acatech - Deutsche Akademie der Technikwissenschaften, Munich (2016)
5. Gotsch, M., Fiechtner, S., Krämer, H.: Open Innovation Ansätze für den Dienstleistungsinnovationsprozess. Die Entwicklung eines Service Open Innovation Frameworks. In: Thomas, O., Nüttgens, M., Fellmann, M. (eds.) Smart Service Engineering, pp. 29–54. Springer Fachmedien Wiesbaden, Wiesbaden (2017). https://doi.org/10.1007/978-3-658-16262-7_2
6. acatech: Smart Service Welt. Umsetzungsempfehlungen für das Zukunftsprojekt Internetbasierte Dienste für die Wirtschaft. Arbeitskreis Smart Service Welt (2014)
7. Bullinger, H.-J., Fähnrich, K.-P., Meiren, T.: Service engineering – methodical development of new service products. Int. J. Prod. Econ. 85(3), 275–287 (2003). https://doi.org/10.1016/s0925-5273(03)00116-6
8. Leimeister, J.M.: Dienstleistungsengineering und -management. Springer, Berlin (2012)
9. Meyer, K., Böttcher, M.: Entwicklungspfad Service Engineering 2.0. Neue Perspektiven für die Dienstleistungsentwicklung. Leipziger Beiträge zur Informatik, vol. 29 (2011)
10. Gudergan, G.: Service engineering: multiperspective and interdisciplinary framework for new solution design. In: Maglio, P.P., Kieliszewski, C.A., Spohrer, J.C. (eds.) Handbook of Service Science. Service Science: Research and Innovation in the Service Economy, pp. 387–415. Springer, New York, London
11. Richter, H.M., Tschandl, M.: Service Engineering – Neue Services erfolgreich gestalten und umsetzen. In: Dienstleistungen 4.0, pp. 157–184. Springer, Wiesbaden (2017). https://doi.org/10.1007/978-3-658-17550-4_7
12. Bullinger, H.-J., Scheer, A.-W. (eds.): Service Engineering. Entwicklung und Gestaltung innovativer Dienstleistungen, 2nd edn. Springer, Berlin (2006)
13. Scheuing, E.E., Johnson, E.M.: A proposed model for new service development. J. Serv. Mark. (1989). https://doi.org/10.1108/eum0000000002484
14. Schneider, K., Daun, C., Behrens, H., Wagner, D.: Vorgehensmodelle und Standards zur Entwicklung von Dienstleistungen. In: Bullinger, H.-J., Scheer, A.-W. (eds.) Service Engineering. Entwicklung und Gestaltung innovativer Dienstleistungen, 2nd edn, pp. 113–138. Springer, Heidelberg (2006)
15. Edvardsson, B., Olsson, J.: Key concepts for new service development. Serv. Ind. J. 16(2), 140–164 (1996)
16. Ramaswamy, R.: Design and Management of Service Processes: Keeping Customers for Life. Addison-Wesley Publishing Company, Boston (1996)
17. Jaschinski, C., Luczak, H., Eversheim, W. (eds.): Qualitätsorientiertes Redesign von Dienstleistungen. Schriftenreihe Rationalisierung und Humanisierung, vol. 14, pp. 1–148 (1998)
18. Liestmann, V., Luczak, H., Eversheim, W. (eds.): Dienstleistungsentwicklung durch Service Engineering: von der Idee zum Produkt, 2nd edn. FIR e.V. an der RWTH Aachen, Aachen (2002)


19. Bullinger, H.-J., Schreiner, P.: Service Engineering – Ein Rahmenkonzept für die systematische Entwicklung von Dienstleistungen. In: Bullinger, H.-J., Scheer, A.-W. (eds.) Service Engineering. Entwicklung und Gestaltung innovativer Dienstleistungen, 2nd edn, pp. 53–84. Springer, Heidelberg (2006)
20. DIN: Service Engineering. Entwicklungsbegleitende Normung (EBN) für Dienstleistungen. DIN-Fachbericht 75. Beuth Verlag GmbH, Berlin (1998)
21. International Organization for Standardization: Systems and software engineering. Software Engineering Environment Services. ISO/IEC 15940 (2013)
22. DIN: Qualitätsmanagement und Elemente eines Qualitätssicherungssystems. Leitfaden für Dienstleistungen. DIN EN ISO 9004-2. Beuth Verlag GmbH, Berlin (1992)
23. DIN: DIN SPEC PAS 1082. Standardisierter Prozess zur Entwicklung industrieller Dienstleistungen in Netzwerken (Standardized process for the development of industrial services in networks). Beuth Verlag GmbH, Berlin (2008)
24. DIN: DIN SPEC PAS 1094. Hybride Wertschöpfung - Integration von Sach- und Dienstleistung. Beuth Verlag GmbH, Berlin (2009)
25. DIN: DIN SPEC PAS 1091. Schnittstellenspezifikationen zur Integration von Sach- und Dienstleistung. Beuth Verlag GmbH, Berlin (2010)
26. DIN: DIN SPEC PAS 1014. Vorgehensmodell für das Benchmarking von Dienstleistungen. Beuth Verlag GmbH, Berlin (2001)
27. DIN: DIN PAS 1018:2002-12. Grundstruktur für die Beschreibung von Dienstleistungen in der Ausschreibungsphase. Beuth Verlag GmbH, Berlin (2002)
28. DIN: DIN SPEC PAS 1019. Strukturmodell und Kriterien für die Auswahl und Bewertung investiver Dienstleistungen. Beuth Verlag GmbH, Berlin (2002)
29. DIN: DIN PAS 1047:2005-01. Referenzmodell für die Erbringung von industriellen Dienstleistungen - Störungsbehebung. Beuth Verlag GmbH, Berlin (2005)
30. DIN: DIN SPEC 91310:2014-08. Klassifikation von Dienstleistungen für die technische Betriebsführung von Erneuerbare-Energie-Anlagen. Beuth Verlag GmbH, Berlin (2014)
31. DIN: DIN PAS 1076:2008-02. Aufbau, Erweiterung und Verbesserung des internationalen Dienstleistungsgeschäfts. Beuth Verlag GmbH, Berlin (2008)
32. Meiren, T., Edvardsson, B., Jaakkola, E., Khan, I., Reynoso, J., Schäfer, A., et al.: Derivation of a service typology and its implications for new service development. In: Gummesson, E. (ed.) The Naples Forum on Service 2015, pp. 1–10. University of Naples Federico II, Naples (2015)
33. Meyer, C., Schwager, A.: Understanding customer experience. Harvard Bus. Rev. 85(2), 116–126 (2007)
34. Deloitte: What is digital economy? Unicorns, transformation and the internet of things (2016). https://www2.deloitte.com/mt/en/pages/technology/articles/mt-what-is-digital-economy.html. Accessed 1 Aug 2018
35. McKinsey & Company: From touchpoints to journeys: seeing the world as customers do (2016). https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/from-touchpoints-to-journeys-seeing-the-world-as-customers-do. Accessed 23 Aug 2018
36. Lusch, R.F., Nambisan, S.: Service innovation: a service-dominant logic perspective. MIS Q. (2015). https://doi.org/10.25300/misq/2015/39.1.07
37. Immonen, A., Ovaska, E., Kalaoja, J., Pakkala, D.: A service requirements engineering method for a digital service ecosystem. SOCA (2016). https://doi.org/10.1007/s11761-015-0175-0
38. Lamberth-Cocca, S., Meiren, T.: Towards a reference model for agile new service development using the example of e-mobility service systems. Procedia CIRP (2017). https://doi.org/10.1016/j.procir.2017.03.052
39. Groll, J.G.: The Agile Service Management Guide. DevOps Institute (2017)

554

R. Senderek et al.

40. Ries, E.: The Lean Startup: How Today’s Entrepreneurs use Continuous Innovation to Create Radically Successful Businesses, 1st edn. Crown Business, New York (2011) 41. Blank, S.: Why the lean start-up changes everything. Harvard Bus. Rev. 91(5), 63–72 (2013) 42. Vargo, S.L., Lusch, R.F.: Service-dominant logic: Continuing the Evolution. J. Acad. Mark. Sci. (2008). https://doi.org/10.1007/s11747-007-0069-6 43. Piller, F.T., Blazek, P.: Core capabilities of sustainable mass customization. In: Felfernig, A., Hotz, L., Bagley, C., Tiihonen, J. (eds.) Knowledge-Based Configuration, pp 107–120. Elsevier, Amsterdam (2014) 44. Randhawa, K., Scerri, M.: Service innovation: a review of the literature. In: Agarwal, R., Selen, W., Roos, G., Green, R. (eds.) The Handbook of Service Innovation, pp. 27–51. Springer, London (2015). https://doi.org/10.1007/978-1-4471-6590-3_2 45. Piller, F., West, J.: Firms, users, and innovation. an interactive model of coupled open innovation. In: Chesbrough, H.W., van Haverbeke, W., West, J. (eds.) New Frontiers in Open Innovation, pp. 29–49. Oxford UP, Oxford (2014) 46. Maglio, P.P., Lim, C.-H.: Innovation and big data in smart service systems. J. Innov. Manag. 4(1), 11–21 (2016) 47. Ismail, S., Malone, M.S., van Geest, Y., Diamandis, P.H.: Exponential Organizations. Why New Organizations are Ten Times better, Faster. and Cheaper Than Yours (And What to do About it). Diversion Books, New York (2014) 48. Olsen, D.: The Lean Product Play Book. Wiley, Hoboken (2015) 49. Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M. et al.: Manifesto for agile software development (2001). http://agilemanifesto.org. Accessed 23 Aug 2018 50. Richter, H.M., Tschandl, M.: Service Engineering – Neue Services erfolgreich gestalten und umsetzen. Dienstleistungen 4.0, pp. 157–184. Springer, Wiesbaden (2017). https://doi.org/ 10.1007/978-3-658-17550-4_7 51. Senderek, R., Jussen, P., Moser, B., Ragab, S.: Smart service engineering. 
an agile approach to develop data-driven services. In: CNSM (ed.) Proceedings of the 14th International Conference on Network and Service Management, pp 1–5, Rome, Italy. IEEE (2018) 52. Patrício, L., Fisk, R.P., Falcão e Cunha, J., Constantine, L.: Multilevel service design. From customer value constellation to service experience blueprinting. J Serv Res. (2011). https:// doi.org/10.1177/1094670511401901 53. Redlich, B., Becker, F., Fischer, S., Fromm, J., Gernreich, C, Lattemann, C.P. et al. (2019) Design Thinking für das Service Engineering in kleinen und mittelständischen Unternehmen. In: Schuh, G., Gudergan, G., Senderek, R., Jussen, P., Krechting, D., Beverungen, D. (eds.) Dienstleistungsinnovation durch Digitalisierung. Springer (2019, in press) 54. Beverungen, D, Lüttenberg, H, Wolf, V: Recombinant Service System Engineering. In: Leimeister, J.M., Brenner, W. (eds.) Proceedings der 13. Internationalen Tagung Wirtschaftsinformatik. 13th International Conference on Wirtschaftsinformatik, St. Gallen, pp 136–150 (2017)

Design and Simulation of an Integrated Model for Organisational Sustainability Applying the Viable System Model and System Dynamics

Sergio Gallego-García and Manuel García-García

Research Area of Productive Systems at UNED University, Madrid, Spain
[email protected]

Abstract. The current global situation increases the exposure of organisations to their environment. As a consequence, companies have to consider a variety of topics that they had previously managed only to a limited extent. An organisation should now go beyond its own limits in order to maintain its viability over time; this means being sustainable and improving its related environment by enabling collaborative working. In addition, there are challenges not only in the management of external relationships but also in the management of internal ones. These are a consequence of the pressure for decision-making in short periods of time and a lack of coordination between different parts of organisations, which effectively operate as silos with divergent goals. Even with common goals, due to a lack of adaptability or flexibility, some areas do not see the benefits of coordinating work to secure a long-term USP (Unique Selling Proposition) for the organisation. In this context, for both external and internal challenges, researchers often fail to consider an organisation holistically as an agent that interacts with and creates ecosystems of cooperation. Thus, the aim of this study is to propose a holistic approach to how organisations can interact with their environment, as well as internally. To this end, the Viable System Model and System Dynamics were applied. In conclusion, the proposed approach enables companies to interact efficiently within their area of influence. In order to prove the conceptual model, a simulation for a manufacturing supply chain and its area of influence was performed.

Keywords: Sustainability · Organisational management · Supply chain · Key performance indicators · Viable system model · Simulation · System dynamics



1 Introduction

Many companies have “separate kingdoms” in their business operations, with employees being more loyal to their specific business area than to the company itself, and the challenge goes beyond the supply chain. As a result, for a company to be successful, a climate of trust, respect and dedication has to be developed by allowing other entities to have their fair share of mutual activities, enabling win-win situations [1]. Nowadays, our society requires the collaboration of all entities to support its commitment to the community. This means fulfilling human needs by creating a

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 555–563, 2019. https://doi.org/10.1007/978-3-030-29996-5_64


sustainable environment for all human beings. In this context, a company should contribute in its area of influence. However, the company will only support these purposes if there is a benefit from its actions. Within a company and society, individuals play a significant role. Moreover, these individuals have needs, which are classified according to Maslow’s hierarchy of needs, as well as their own interests. For these reasons, the purpose of this article is to develop a conceptual model for organisational sustainability at three levels: the company itself (with its functional areas as entities), its supply chain and the society in the area of influence of the company. The initial hypothesis is that an organisation that is built on the basis of sustainable principles and collaborative working, using the structure of the VSM, will have a positive impact on the achievement of the short-, medium- and long-term goals of all entities: society, supply chain and company. Based on the conceptual model, a case study for an organisation was chosen. The case study was performed for an automotive plant with different suppliers and customers, also including the interaction with society.

2 Fundamental Definitions and State of Research

2.1 Sustainability

The meaning of sustainability depends on the perspective (social, economic or ecological), and the development of appropriate indicators provides a useful framework for policy making [2]. The concept of sustainability assumes that humans and their economic systems are linked [3]. The World Commission on Environment and Development defines ‘Sustainable Development’ as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs”. As a consequence, sustainability not only has to meet our needs but also has to provide a benefit to society and external systems [4]. To sum up, this study uses a concept of sustainability that is based on the cybernetic concept of viability: if a company wants to be organised so as to maintain sustainability, then it needs to be organised with a view to viability [5].

2.2 The Company and Its Organisational Management

The organisational structure of many companies is highly function-driven. This approach to process improvement aims to minimise costs for each individual process, which is a significant mistake because the optimisation of individual processes does not yield the optimum for the entire process [6]; this has consequences for the organisational structure. As an example, the logistical corporate conflict of goals comprises three conflicts of interest [7]: the conflict between costs and delivery, the conflict between costs and throughput, as well as the conflict between throughput and delivery.

2.3 The Company and Its Supply Chain (SC): From the Bullwhip Effect to a Collaborative SC

The supply chain components involve all parties, either directly or indirectly, when fulfilling a customer’s request, and include the manufacturer, suppliers, transporters, warehouses, retailers, and even the customers themselves [8]. A challenge in the supply chain arises from the fact that every stage in the supply chain wants to reach the same goals, such as low stocks, high capacity utilisation, high delivery service, etc., creating an unavoidable conflict of interest between the entities. Moreover, another challenge is created by the lack of transparency between stages, named the ‘bullwhip effect’. As a result, demand along the supply chain is difficult to forecast [9] and it causes extreme fluctuations of stock [10]. Concepts for a collaborative supply chain strive to mitigate the bullwhip effect through close cooperation between chain members to effectively match demand and supply, which contributes to increasing the overall chain profitability. Although collaboration is based on a mutual objective, collaboration is a self-interested process in which firms only participate if it contributes to their own survival [11].

2.4 Individual and Society Needs in the Area of Influence of an Organisation

Social structures are the result of man’s actions in society, and these structures enable him to satisfy his needs [4]. In this study, the definition of human needs by Maslow is used. Maslow studied the motivation of individuals without considering rewards or unconscious desires [12]. Maslow stated that people are motivated to achieve certain needs: when one need is fulfilled, a person seeks to fulfil the next one, and so on [13]. The analysis of Maslow’s hierarchy within organisational studies remains “extremely rare” and therefore has high potential [14]. The hierarchy of needs can be divided into three types of needs: basic needs, psychological needs and self-fulfilment needs [15]. There are five categories: physiological needs, safety and security needs, love and belonging needs, self-esteem needs and self-actualisation needs [16].

2.5 Concurrent and Collaborative Work Applied to an Organisation and Its Related Environment

Organisational behaviour is described as the result of concurrent actions of “intelligent adaptive agents”, i.e. the individuals within an organisation. Therefore, organisational performance is a function of both individual actions and the context or environment in which they act [17]. As a consequence, the principles of concurrent and collaborative work are used as the basis for the conceptual model development. Accordingly, a model in which coordination mechanisms and tools for sharing information are available is compared with a model in which those tools do not exist.
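The comparison described here (entities with and without information-sharing tools) can be illustrated with a deliberately simple two-scenario simulation in the spirit of the bullwhip effect discussed in Sect. 2.3. All parameters, noise levels and the order policy below are hypothetical illustrations, not taken from the paper’s model:

```python
import random

def simulate(share_information, periods=200, seed=42):
    """Return the variance of a producer's orders over the simulation run."""
    rng = random.Random(seed)
    stock = target = 100.0
    smoothed = 10.0                 # producer's current demand estimate
    orders = []
    for _ in range(periods):
        demand = 10.0 + rng.gauss(0.0, 2.0)          # real end-customer demand
        if share_information:
            smoothed = demand                        # real demand is visible
        else:
            # only a noisy local signal of demand is available
            signal = demand + rng.gauss(0.0, 4.0)
            smoothed = 0.2 * smoothed + 0.8 * signal
        order = max(0.0, smoothed + 0.4 * (target - stock))  # order-up-to heuristic
        stock += order - demand
        orders.append(order)
    mean = sum(orders) / len(orders)
    return sum((o - mean) ** 2 for o in orders) / len(orders)

print(f"order variance, silo: {simulate(False):.1f}")
print(f"order variance, information sharing: {simulate(True):.1f}")
```

With shared demand information, order variance stays close to the variance of real demand; without it, forecasting from a noisy local signal inflates order variability, which is the qualitative effect the collaborative model is meant to avoid.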


3 Methodology and Concept Development

The method used to reach this goal was the following:
1. Methodological approach: the VSM was selected due to its structure for viability, and SD due to its capability to analyse the behaviour of complex systems over time
2. Literature review
3. Conceptual model design: logical inter-relationships and causal loop diagrams
4. Simulation and analysis of results: simulation models, assumptions and scenario definitions, extraction and analysis of results
5. Critical reflection and outlook

Beer deduced the VSM by taking the central nervous system of the human being and cybernetics as a basis in order to deal with complex systems. The VSM is built on three main principles: viability, recursivity and autonomy [18]. The cybernetic model of every viable system always consists of a structure of five necessary and sufficient subsystems, which are interrelated in any organism or organisation that is able to conserve its identity independently of its environment [19]. As described in the literature, the VSM is a conceptual and methodological tool for the modelling and design of organisations and their areas with the goal of being viable [20]. Moreover, SD is a computer-aided approach for studying, managing and solving complex feedback problems with a focus on policy analysis and design [21]. It is increasingly used to design successful policies in companies and public policy settings [22]. For this reason, Vensim (an SD software package) was used to program the conceptual model for the case study.
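The authors implemented their model in Vensim; conceptually, any SD stock-and-flow model reduces to numerical integration of stocks over time. The following minimal sketch illustrates that mechanism only; the stock, flows and parameter values are hypothetical, not the paper’s model:

```python
def simulate_stock(periods=60, dt=1.0):
    """Euler integration of a single stock with a balancing feedback loop:
    inventory is adjusted towards a target, as in a basic SD model."""
    inventory = 50.0          # stock (units)
    target = 100.0            # desired inventory (units)
    adjustment_time = 5.0     # how quickly the gap is closed (periods)
    shipments = 10.0          # constant outflow (units/period)
    history = []
    for _ in range(periods):
        production = shipments + (target - inventory) / adjustment_time  # inflow
        inventory += dt * (production - shipments)                       # stock equation
        history.append(inventory)
    return history

trace = simulate_stock()
print(f"final inventory = {trace[-1]:.1f}")  # converges towards the target of 100
```

The balancing loop (the gap between target and inventory driving the inflow) is the elementary building block from which the causal loop diagrams of Sect. 4 are assembled.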

4 Design and Simulation of an Integrated Model for Organisational Sustainability: Applying the Viable System Model and System Dynamics

4.1 Design of an Integrated Model for Organisational Sustainability: Applying the Viable System Model

The three levels under study are: society, the supply chain and the company itself. Applying the VSM, all levels consist of an invariant structure of five subsystems: normative (system 5), strategic (system 4), tactical (system 3), coordination (system 2) and operative units (system 1). In this research study, the supply chain is the network level (recursion level n), the company is the plant level (recursion level n + 1) and society is the related environment of the company. For the company, the key indicators for every functional area should be described in order to understand the motivation of each entity inside the organisation, i.e. the systems 1, the operative units within recursion level n + 1 (Table 1). Then, the collaborative and mutual benefits for the functional areas are identified by means of a system dynamics analysis with causal loop diagrams (Fig. 1); this was repeated between all indicators of the above-listed functional areas.

Table 1. Key indicators for functional areas of an organisation

Functional area | Indicators
Production | Production volume, performance rate, production costs, production lead time, OEE, capacity utilisation rate
Quality | Quality or rejection rate, control and rework costs
Maintenance | Availability, maintenance costs
Logistics | Service level, customer order lead time, logistics costs
Design & development | Time-to-concept, number of variants, variants adding value to end-customer
Sales & marketing | Customer demand, brand image, employer image, price, margin
Purchase | Number of suppliers, material costs, time to negotiate
Finance & IT | Product development costs, investments, logistics, production, maintenance, quality & general costs, cost per unit, margin per unit
Human resources | Number of employees, training, motivation, conditions
Other staff areas | Legal, compliance, strategy, etc.

Fig. 1. Extract of a company causal loop diagram: interrelationships between key indicators

As a second step, the recursion level of the supply chain was considered. For this, the different entities, their objectives and key indicators were considered in order to determine potentials for collaboration across the supply chain (Table 2). In the same way as for the company recursion level, causal loop diagrams were developed for the participants in the supply chain, describing the inter-relationships between key indicators. As an example, if the retailer knows the real demand information, then the logistics routes and the utilisation rates of trucks can be optimised, thereby reducing the distribution costs and improving the customer order lead time. By using this approach, benefit-sharing can be achieved. In the third level, society and its needs in the area of influence of the company are considered. For this, only the basic needs of society are taken into account. The following needs, indicators and measures contributed by the company to society were considered in the conceptual model and in the simulation (Table 3).
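For illustration, the kind of interrelationship collected in such causal loop diagrams can be encoded as a signed digraph, so that the overall polarity of a causal chain can be checked programmatically. The variables below are hypothetical examples in the spirit of the indicators discussed, not the authors’ model:

```python
# Signed digraph: edges are (cause, effect) -> polarity (+1 reinforcing, -1 opposing).
# The variables are hypothetical illustrations, not the paper's causal loop diagram.
CAUSAL_LINKS = {
    ("information sharing", "forecast accuracy"): +1,
    ("forecast accuracy", "stock levels"): -1,
    ("stock levels", "logistics costs"): +1,
    ("logistics costs", "margin per unit"): -1,
    ("forecast accuracy", "service level"): +1,
}

def path_polarity(path):
    """Overall polarity of a causal chain: the product of its link polarities."""
    polarity = 1
    for cause, effect in zip(path, path[1:]):
        polarity *= CAUSAL_LINKS[(cause, effect)]
    return polarity

# Does more information sharing increase the margin per unit along this chain?
chain = ["information sharing", "forecast accuracy", "stock levels",
         "logistics costs", "margin per unit"]
print(path_polarity(chain))  # +1: two opposing links cancel, the chain reinforces
```

A chain with an even number of negative links is reinforcing overall; this is the same polarity bookkeeping done visually when reading a causal loop diagram.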

Table 2. Key indicators for the supply chain participants

Participants | Indicators
Company | Replenishment time, rejection rate from suppliers, raw material stock levels, capacity utilisation, finished product stock levels, customer order lead time, service level, production costs, level of exchange of real customer demand information, adaptability to change
Suppliers | Stock level or capital employed, capacity utilisation, delivery service, product quality, order lead time, level of exchange of real customer demand information
Retailer | Stock levels, replenishment time, customer order lead time, level of exchange of real customer demand information, distribution costs, new quality problems
End-customer | Service level, order lead time, number & type of claims

Table 3. Key indicators & potential measures for basic needs in the area of influence

Basic needs | Specific need | Need key indicator | Measure
Physiological needs | Breathing | Pollution rate | Countermeasures for zero-impact
Physiological needs | Food | Malnutrition, obesity | Catering with healthy food, surplus for non-profit organisations
Physiological needs | Shelter | Homeless rate, working years for buying a house | Construction of flats for employees & relatives
Safety and security needs | Health | Sedentary level, cardiovascular diseases, death rate, absenteeism rate, accident rate, average time off work | Hospital, health insurance, gym, rehabilitation centre, catering with healthy food, ergonomics
Safety and security needs | Employment | Permanent contract, motivation rate | Leasing options, financing options, training, development options
Safety and security needs | Family | Two working parents, single parent, grandparents in the city, distance to work location | Home office, kindergarten, paid permissions

4.2 Simulation of an Integrated Model for Organisational Sustainability Using System Dynamics

The previously described levels were programmed in the Vensim software. For the case study, an automotive plant was selected. Several suppliers and a retailer with end-customers were considered. The assumptions in the simulation were: one car model, a 5-year simulation time with 220 working days per year, and a maximum capacity and demand of 1,000 cars per day. From the results obtained, it can be observed that the collaborative work simulation model presents better results at all three levels (Table 4).

Table 4. Simulation results for the three levels: company, supply chain and society measures

Simulation level | Key indicator | No collaborative work simulation model | Collaborative work simulation model
Company level | Margin per unit | 3,680 €/unit | 6,750 €/unit
Company level | Production capacity | 740 units/day (t = 1,100) | 910 units/day (t = 1,100)
Company level | Customer order lead time | 80 days (t = 50) | 60 days (t = 50)
Supply chain level | Margin per unit | 6,000 €/unit | 6,750 €/unit
Supply chain level | Customer demand | 910 units/day | 935 units/day
Supply chain level | Customer order lead time | 100 days (t = 50) | 60 days (t = 50)
Society measures of the company | Margin per unit | 1,600 €/unit | 6,750 €/unit
Society measures of the company | Absenteeism rate | 9% | 3%
Society measures of the company | Productivity rate | 90% | 99%

5 Critical Reflection and Outlook

After completion of the research work, the following points can be concluded:
• Thanks to a new model for organisational management that is able to make decisions in relation to its environment, its supply chain partners and its internal set-up, the sustainability of an organisation and its related environment can be assured.
• The VSM provides the necessary structure to determine the inter-relationships between areas and parameters, allowing them to be optimised in a recursive way.


• The simulation of an organisation using the conceptual model with collaborative mechanisms presents better results for the relevant key parameters at all three levels: company, supply chain and society impact. As a result, the simulation evolves into a decision-making support tool for managers.

The final goal is to transfer this research method to real organisations, applying it in particular cases as a management guide for sustainable design and development. Moreover, future research should study the specific levels in detail before trying to implement the model in a real organisation. In summary, all individual and society needs can be included as demand factors to complete the model.

References
1. Van Marrewijk, M.: Concepts and definitions of CSR and corporate sustainability: between agency and communion. J. Bus. Ethics 44(2–3), 95–105 (2003)
2. Brown, B.J., Hanson, M.E., Liverman, D.M., Merideth, R.W.: Global sustainability: toward definition. Environ. Manage. 11(6), 713–719 (1987)
3. Caradonna, J.L.: Sustainability: A History, p. 8. Oxford University Press, Oxford (2014)
4. Haan, F.J., Ferguson, B.C., Adamowicz, R.C., Johnstone, P., Brown, R.R., Wong, T.H.: The needs of society: a new understanding of transitions, sustainability and liveability. Technol. Forecast. Soc. Chang. 85, 121–132 (2014)
5. Schwaninger, M.: Systemic design for sustainability. Sustain. Sci. 13(5), 1225–1234 (2018)
6. Kletti, J., Schumacher, J.: Die perfekte Produktion. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-45441-1
7. Schuh, G., Weber, H., Kajüter, P.: Logistikmanagement. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-28992-7
8. Chopra, S., Meindl, P.: Supply chain management: strategy, planning & operation. In: Boersch, C., Elschen, R. (eds.) Das Summa Summarum des Management, pp. 265–275. Springer, Wiesbaden (2007)
9. Campuzano, F., Mula, J.: Supply Chain Simulation: A System Dynamics Approach for Improving Performance. Springer, London (2011). https://doi.org/10.1007/978-0-85729-719-8
10. Schönsleben, P.: Integrales Logistikmanagement: Operations und Supply Chain Management innerhalb des Unternehmens und unternehmensübergreifend. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-48334-3
11. Simatupang, T.M., Sridharan, R.: The collaborative supply chain. Int. J. Logistics Manage. 13(1), 15–30 (2002)
12. McLeod, S.: Maslow’s hierarchy of needs. Simply Psychol. 1, 2 (2007)
13. Maslow, A.H.: A theory of human motivation. Psychol. Rev. 50(4), 370 (1943)
14. Dye, K., Mills, A.J., Weatherbee, T.: Maslow: man interrupted: reading management theory in context. Manage. Decis. 43(10), 1375–1395 (2005)
15. Poston, B.: Maslow’s hierarchy of needs. Surg. Technol. 41(8), 347–353 (2009)
16. Maslow, A., Lewis, K.J.: Maslow’s hierarchy of needs. Salenger Incorporated 14, 987 (1987)
17. Sage, A.P., Rouse, W.B. (eds.): Handbook of Systems Engineering and Management, p. 661. Wiley, Hoboken (2009)
18. Schuh, G., et al.: High resolution supply chain management: optimized processes based on self-optimizing control loops and real time data. Prod. Eng. 5(4), 433–442 (2011)
19. Espejo, R., Harnden, R. (eds.): The Viable System Model: Interpretations and Applications of Stafford Beer’s VSM. Wiley, Chichester (1989)
20. Schwaninger, M.: Intelligent Organisations: Powerful Models for Systemic Management. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85162-2
21. Angerhofer, B.J., Angelides, M.C.: System dynamics modelling in supply chain management: research review. In: Proceedings of the 32nd Conference on Winter Simulation, pp. 342–351. Society for Computer Simulation International (2000)
22. Sterman, J.D.: Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/McGraw-Hill, Boston (2000)

Applications of Machine Learning in Production Management

Enabling Energy Efficiency in Manufacturing Environments Through Deep Learning Approaches: Lessons Learned

M. T. Alvela Nieto, E. G. Nabati, D. Bode, M. A. Redecker, A. Decker, and K.-D. Thoben

Faculty of Production Engineering, BIK - Institute for Integrated Product Development, University of Bremen, 28359 Bremen, Germany
{malvela,nabati,dbode,m.redecker,decker,thoben}@uni-bremen.de

Abstract. Currently, manufacturing industries are faced with ever-growing complexities. On the one hand, sustainability in the economic and ecological domains should be considered in manufacturing. With respect to energy, many manufacturing companies still lack energy-efficient processes. On the other hand, Industry 4.0 provides large manufacturing datasets, which can potentially enhance energy efficiency. Here, traditional methods of data analytics reach their limits due to the increasing complexity, high dimensionality and variability in the raw data of industrial processes. This paper outlines the potential of deep learning (DL) as an enabler for energy efficiency in manufacturing. We believe that insufficient consideration has been given to making manufacturing efficient in terms of energy. In this paper, we present three manufacturing environments where available DL approaches are identified as opportunities for the realization of energy-efficient manufacturing.

Keywords: Manufacturing · Energy efficiency · Deep learning · Industry 4.0

1 Introduction

Today’s manufacturing is moving towards an upgrade of the currently available manufacturing practices to a more efficient and intelligent level [13]. This upgrade incorporates advances from various fields, particularly the field of artificial intelligence, which helps in different facets of a manufacturing company such as machines, processes, facilities, software and staff [1]. Thus, sensory data will then be collected across the manufacturing company. The exchange of information and instructions in near real-time between smart machines and smart products will be a remarkable vision of manufacturing industries in the near future [13]. Under this setting, effective usage of data should consider different aspects of

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 567–574, 2019. https://doi.org/10.1007/978-3-030-29996-5_65


improving quality, reducing costs and energy consumption simultaneously [8]. Reducing the consumption of energy is referred to as “energy efficiency” from an engineering point of view [4]. To achieve energy efficiency in manufacturing, researchers such as May et al. [8] have considered the challenges against efficient energy usage in manufacturing. Based on the studies of [8,13], artificial intelligence (AI) has the potential to enable energy efficiency in manufacturing. However, as observed in the literature research, only a few papers show AI as a technology for reducing energy consumption in manufacturing. In this paper, we look at the methods of deep learning (DL), a subfield of AI. DL has been considered in this study because of its potential to support near real-time decision making in manufacturing by handling large data from different sensory components as well as their complexities. In addition, the high-volume modelling capability of DL allows automatic processing of large raw data instead of “handcrafted” data. This aspect of DL is a powerful advantage over conventional machine learning (ML) methods, because the performance of ML models is limited by their ability to process high-dimensional and varied input data in raw form [6]. Therefore, we investigate DL through three use cases for the assistance of energy-efficient processes in manufacturing. Section 2 describes the challenges for achieving energy-efficient manufacturing, as well as DL as a technology for enhancing efficiency in terms of energy. Section 3 introduces three case studies in the context of energy-efficient manufacturing. In each use case, DL contributes to an energy-efficient process. Numerical analysis of DL-model performance is not the focus of this article. Instead, in Sect. 4, practical lessons learned during the implementation and deployment of DL models in manufacturing processes are described. Section 5 provides the conclusion.

2 Challenges for Enabling Energy-Efficient Manufacturing

Modern manufacturing has been affected by growing energy prices, environmental regulations and customer demand for sustainable products [9]. Above all, industries face several barriers when implementing energy efficiency in manufacturing. Firstly, companies produce manufactured goods with a focus on quality and time efficiency. On the one hand, energy efficiency is mostly not considered due to the possible multiplicity of process configurations, as manufacturers alone cannot monitor numerous, highly dependent parameters and optimize them with regard to energy efficiency. On the other hand, most up-to-date data-driven methods are developed without considering how these methods can be applied in practice [5]. Consequently, manufacturers fail to implement pragmatic approaches to improve their processes energetically, qualitatively and economically, so that changes in process parameters do not negatively impact their product quality.


Secondly, the energy consumption of manufacturing machines is not static but dynamic. In some cases, it also depends on variation in the quality of input raw materials. Moreover, machinery often uses different interfaces to its internal meters, which becomes an inconvenience when interpreting the different production data and, later, when linking them to the energy-related data. Thirdly, some manufacturers do not know how to treat their available production data and how to interpret them in order to efficiently improve their processes and products [5]. In particular, there are not yet many software and modelling systems to analyze data in a simple manner [5]. In this regard, manufacturers require tools or settings that provide them with operational information about machines and data on energy consumption, which can then be used to evaluate and control energy-related performance [9]. Based on the above challenges, a better understanding of processes and settings in manufacturing, as well as better usage of available process data with less hand-engineering, are required. Moreover, the use of tools or technologies that provide exact information about manufacturing operations and energy consumption, and that at least do not harm the quality of the end product, is preferred. In this paper, we provide three relevant industrial use cases where DL is applied as a method that meets the above observations.
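A practical instance of the interface problem above is aligning production events from a machine interface with readings from a separate energy meter. A minimal sketch of such an alignment (timestamps, readings and part IDs are hypothetical), matching each production event to the most recent meter reading at or before it:

```python
from bisect import bisect_right

# Energy meter readings: (unix timestamp, cumulative kWh) -- hypothetical data
meter = [(0, 0.0), (60, 1.2), (120, 2.9), (180, 4.1)]
# Production events from the machine interface: (unix timestamp, part id)
events = [(70, "part-001"), (130, "part-002"), (190, "part-003")]

def latest_reading(ts, readings):
    """Return the last meter reading taken at or before timestamp ts."""
    times = [t for t, _ in readings]
    idx = bisect_right(times, ts) - 1
    if idx < 0:
        raise ValueError("no meter reading before this event")
    return readings[idx]

linked = {part: latest_reading(ts, meter) for ts, part in events}
print(linked["part-001"])  # (60, 1.2): the reading at t=60 precedes the event at t=70
```

This “most recent reading” join is the simplest way to attribute an energy context to each production record when the two data sources are sampled asynchronously.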

3 Energy Efficiency in Manufacturing with Deep Learning

In many energy-intensive manufacturing processes, there is no available model which supports energy efficiency when considering the fluctuations of input raw materials or other process-varying inputs such as quality features. A quality feature defines both the quality of the process and of the good, taking into account the energy efficiency of a manufacturing process. The following energy-intensive use cases were considered in this research because their raw materials (or natural resources) have inconsistent properties, i.e., the quality of the raw materials themselves varies in composition over time, which affects the process behavior. Additionally, manufacturing processes may change with regard to maintenance and operator skills, which also affects the data associated with the process. Here, DL approaches can be used as an advanced prediction tool once they have been trained on large data of a particular process. Therefore, the integration of DL forecasting models enables energy efficiency in processing. In the first case study, following the results of Bouktif et al. [3], a data-driven model based on a Long Short-Term Memory (LSTM) architecture is implemented to predict the energy consumption in food processing. In the second case study, inspired by the experimental results of the Single Shot Detection (SSD)-based model [7] on standard datasets, image processing for identifying raw materials in a waste processing facility is applied. And in the last study, an approach of Yan et al. [12] for predictive maintenance in the animal feed industry is investigated by using an autoencoder (AE).

M. T. Alvela Nieto et al.

3.1 Use Case 1: Energy Efficiency in Food Processing

The food processing industry involves energy-intensive processes. To ensure high energy efficiency and an optimal setting of parameters in a certain french fries processing line [2], parameters like temperature and steam pressure have to be adjusted continuously to follow fluctuations in the raw materials used. In this case study, streams of the raw material are scanned and measured, which generates a complex time series of statistical histograms for each raw-material criterion, such as shape or humidity. A data-based model of food processing in relation to energy efficiency and final product quality must include the quality of the input raw materials and the variations of the process settings. In order to handle these complex histograms of process data, an LSTM-based data-driven model is applied. An LSTM addresses sequences of varying-length statistics and captures long-term process dependencies across different time scales of food processing data. The time-series production data and the different parameter settings of the food process were collected over six months, so that the LSTM-based approach can learn features from these data histograms. Thereby, it projects the relationship between sensor data and quality features of the food process onto the model.
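One common way such histogram time series are prepared for a sequence model like an LSTM is sliding-window supervision; the sketch below is a minimal illustration with invented toy data, not the authors' pipeline:

```python
def make_windows(series, seq_len):
    """Slice a time series of per-scan histograms into fixed-length
    input windows and next-step targets, as a sequence model expects."""
    X, y = [], []
    for i in range(len(series) - seq_len):
        X.append(series[i:i + seq_len])
        y.append(series[i + seq_len])
    return X, y

# Toy stand-in for six months of raw-material histograms (3 bins per scan).
hists = [[0.2, 0.5, 0.3], [0.1, 0.6, 0.3], [0.3, 0.4, 0.3],
         [0.2, 0.5, 0.3], [0.25, 0.45, 0.3]]
X, y = make_windows(hists, seq_len=3)
# X[0] holds scans 0..2; y[0] is scan 3, the value the model learns to predict.
```

The resulting (window, target) pairs would then be fed to an LSTM implementation of choice.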

Fig. 1. Application of an LSTM-based data-driven model in food processing

Figure 1 shows a process flowchart for implementing the LSTM in order to increase energy efficiency in a french fries processing line. The model provides online forecasts of both the energy consumption and the final food quality. The forecasts are visualized and assist the plant operator, who is able to react in time to variations in process behavior rather than waiting for the end-product inspection. This additional information leads to quicker reactions, reduces end-product rejections during quality inspections and effectively enables energy efficiency in food processing.
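The operator-assistance step described above can be pictured as simple threshold checks on the model's online forecasts; the function, limits and values below are hypothetical:

```python
def check_forecasts(energy_forecast_kwh, quality_forecast,
                    energy_budget_kwh, quality_spec):
    """Turn model forecasts into operator alerts before end-product inspection."""
    alerts = []
    if energy_forecast_kwh > energy_budget_kwh:
        alerts.append("energy forecast above budget - review process settings")
    if quality_forecast < quality_spec:
        alerts.append("quality forecast below spec - adjust temperature/steam pressure")
    return alerts

# Toy example: both thresholds violated, so the operator gets two alerts.
alerts = check_forecasts(energy_forecast_kwh=540.0, quality_forecast=0.91,
                         energy_budget_kwh=500.0, quality_spec=0.95)
```

The point is that the alert arrives during processing, not after the batch is finished.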

3.2 Use Case 2: Energy Efficiency in Waste Processing

Enabling Energy Efficiency in Manufacturing

Waste management is one of the major global challenges of today. The collected municipal, commercial and industrial waste is first sorted in a waste processing facility. The sorted solid waste material is then pressed into tight bales in a certain baling line [11]; the bales are then ready to be delivered for reuse or further treatment. The length of the final bale and the pressure to be applied to the waste materials depend on the type of input raw waste processed in the baling line. Therefore, prior to pressing, the streams of waste material, which are transported by a conveyor belt, must be visually identified so that the material-dependent parameters of the baler press can be continuously adjusted. Otherwise, a poor-quality bale is rejected, causing additional logistical, financial and energy costs. Hence, key performance indicators for a waste baling line are the proper identification of the input raw materials and the energy consumed during the baling process. In this case study, a robust identification of input materials in a waste processing line to enhance energy efficiency is considered.

Fig. 2. Application of an SSD-model in waste processing

A computer vision-based DL method was integrated to support automatic identification of the input waste materials. When the baling line is in operation, streams of solid waste materials are driven through the image acquisition system, and RGB images of the materials on the conveyor belt are taken. Large datasets of 270 × 270-pixel RGB images of waste materials such as paper and plastic foils are used for training the SSD-model. As a result, the SSD-model automatically generates a near real-time prediction of the waste material present in each image. The predicted material type is displayed on a monitor, while the parameters of the baler press are automatically adjusted according to the model forecasts before processing a new bale. Figure 2 illustrates how the integration of an SSD-model helps to automatically adjust the parameters of the baler press in a waste baling line. The SSD-model forecasts assist both the automatic adjustment of process parameters and an energy-efficient baling process.
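How a material prediction might drive the press parameters can be sketched as a lookup gated by prediction confidence; the parameter table, class names and threshold below are invented for illustration:

```python
# Hypothetical parameter table: press settings per identified material class.
PRESS_PARAMS = {
    "paper":        {"pressure_bar": 160, "bale_length_mm": 1100},
    "plastic_foil": {"pressure_bar": 190, "bale_length_mm": 1000},
}

def adjust_baler(predicted_class, confidence, threshold=0.8):
    """Apply the model output to the press only when the prediction is
    confident; otherwise keep current settings to avoid rejected bales."""
    if confidence >= threshold and predicted_class in PRESS_PARAMS:
        return PRESS_PARAMS[predicted_class]
    return None  # keep current settings, ask the operator

params = adjust_baler("plastic_foil", confidence=0.93)
```

Gating on confidence is one plausible safeguard against the rejected-bale costs mentioned above.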

3.3 Use Case 3: Energy Efficiency in Feed Processing

Different machinery is used for processing animal feed. A hammermill is used to shred natural raw materials such as maize and wheat into smaller pieces [10]. The process of converting the grains (e.g., maize) into smaller particles is an essential step in animal feed production because it affects the extent to which the animal's body can absorb the feed nutrients. An analysis of the energy consumption in a feed processing plant showed that, for a particular compound feed product, the energy wasted during the shredding process because of hammermill failures exceeds the energy waste related to the raw materials or to operator behavior. An automated Supervisory Control And Data Acquisition (SCADA) system shows the warnings and alarms related to failure and maintenance of this equipment. However, no further information has been gained from these SCADA warnings and alarms for forecasting the occurrence of failures. A predictive maintenance system can use the SCADA information to predict the next possible sensory failure, and this prediction can be integrated into the maintenance schedule. The input data from the SCADA system contain the following: the frequency of warnings and failures, their importance (risk), the type of raw material being shredded in the hammermill, and the hammermill speed.
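The SCADA inputs listed above could be turned into a per-day feature vector roughly as follows (the codes, risk weights and field names are hypothetical):

```python
from collections import Counter

# One hypothetical daily log: (code, risk_weight) pairs plus process context.
day_log = {
    "events": [("W12", 1), ("W12", 1), ("F03", 5), ("W07", 2)],
    "raw_material": "maize",
    "mill_speed_rpm": 2900,
}

def featurize(day_log, known_codes=("W07", "W12", "F03")):
    """Counts per warning/failure code, a risk-weighted total, and context."""
    counts = Counter(code for code, _ in day_log["events"])
    risk = sum(w for _, w in day_log["events"])
    return ([counts.get(c, 0) for c in known_codes]
            + [risk, day_log["mill_speed_rpm"]])

vec = featurize(day_log)  # one row per day of the 1.5-year log history
```

Vectors of this kind are what a model such as the AE below would consume.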

Fig. 3. Application of an AE-model in feed processing

The data from the sensors were collected for 1.5 years in the form of daily logs. Because of the many failure codes, the high dimensionality of the data and their interdependence, the data exhibit complex behaviour; therefore, an AE-based model for predictive maintenance is proposed here. Figure 3 illustrates the model proposal based on an AE. The AE-model flags anomalous behaviors of the hammermill process based on complex high-level representations of the sensory data. The results of the AE are presented as clustered spaces of failure types, which show the similarities among failures. As a result, predictions of upcoming failures are added to the maintenance schedule, which in turn helps to adjust the maintenance and production plans and therefore enables an energy-efficient process.
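An AE scores anomalies by its reconstruction error. The sketch below uses a per-feature mean as a deliberately simplified stand-in "reconstruction" to show the scoring idea only; it is not the autoencoder itself:

```python
def fit_baseline(rows):
    """Per-feature mean over normal operation; a stand-in 'reconstruction'."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def anomaly_score(row, baseline):
    """Squared reconstruction error, the same criterion an AE would use."""
    return sum((x - m) ** 2 for x, m in zip(row, baseline))

# Toy feature vectors from normal hammermill days.
normal = [[1, 2, 0], [1, 2, 1], [1, 2, 0], [1, 2, 1]]
baseline = fit_baseline(normal)
# A day resembling normal operation scores far lower than an anomalous one.
low = anomaly_score([1, 2, 1], baseline)
high = anomaly_score([4, 0, 3], baseline)
```

In the real AE, the baseline is replaced by the decoder output, so the score also captures the non-linear interdependencies between failure codes.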

4 Discussion and Findings

The three case studies revealed the following findings. The application of DL approaches provides means for better analysis and comprehension of a

manufacturing process and its settings, which in the case studies was exploited to automatically provide suggestions for enhancing energy efficiency. However, there are still challenges associated with the adaptation, implementation and deployment of DL models for energy-efficient manufacturing processes. Challenges regarding data and learning transferability are discussed here.

Challenges of Annotation in Datasets. As DL models become larger and more complex, they require training datasets that are bigger than those required by other ML techniques. The majority of the available manufacturing datasets are not labeled or, if labeled, contain noisy labels that must be manually removed. Moreover, it is not easy to define labels by hand, e.g. for images of waste materials. Sometimes, hand-labeling becomes very complex when differentiating among classes, e.g. when the waste materials to be identified are located very close to each other or when they overlap and only a small portion of the material is visible to the DL model (in the input image).

Dependence on the Variability of Input Data. A common presumption in DL is that the performance of DL algorithms depends mainly on the scale and quality of the raw datasets. So far, DL has shown feasibility in our case studies when it is applied to a single input data type, such as images, and to well-defined tasks. However, the DL models of our use cases show that DL still has difficulties in identifying objects when the variability within a data class is high. Specifically, material classes that appear in many colours and shapes are still hard for a deep model to classify correctly, even when significant volumes of data are used.

Dependence on Hardware Performance. Selecting a deep model implies a certain dependence on the hardware platform used. The depth of the model architecture and the available data impact the DL-model performance significantly. Both of these factors demand high-performance hardware such as GPUs, because training deep models is computationally intensive.

Model Transferability. Additionally, the generation and maintenance of DL models require more data, which for novel or changed-domain manufacturing processes are not always readily available. Transferring entire deep models used in our case studies, or even parts of them, to similar process instances has been shown to be possible.

5 Conclusion

This paper presented an overview of the potential of DL to achieve energy efficiency in manufacturing processes. DL not only offers a new point of view on manufacturing operations but also supports near real-time measurement and energy saving during manufacturing. In our case studies, the integration of DL played a significant role in production planning and therefore in the energy
efficiency of these processes. Moving towards energy-efficient production planning requires the inclusion of energy efficiency within the goals of production design, input quality control and maintenance at all levels, together with time, cost and flexibility. Reducing machinery idle times through energy-efficient process planning, combined with better order organisation, predictive maintenance and quality controls, leads to a better prediction of the workflow and therefore to better assistance in energy-efficient manufacturing.

Acknowledgments. The authors would like to thank the Federal Ministry for Economic Affairs and Energy (BMWi) and the Project Management Juelich (PTJ) for funding the project "AI supported platform for the assistance of production control for improving energy efficiency" - KIPro (funding code 03ET1265A).

References

1. Albus, J.S., et al.: An intelligent systems architecture for manufacturing (ISAM); a reference model architecture for intelligent manufacturing systems (2002)
2. Behrens, A., Kerstens, T.: Demonstrationsvorhaben Wärmerückgewinnung und Abwärmenutzung durch Kombination von zwei unterschiedlichen Prozesslinien am Beispiel von Pommes Frites und Chips: Abschlussbericht zum Vorhaben 20265. BMU-Umweltinnovationsprogramm Umweltbereich Klimaschutz, Energie, Wildeshausen (2014)
3. Bouktif, S., Fiaz, A., Ouni, A., Serhani, M.: Optimal deep learning LSTM model for electric load forecasting using feature selection and genetic algorithm: comparison with machine learning approaches. Energies 11(7), 1636 (2018)
4. Irrek, W., Thomas, S.: Defining energy efficiency (2008)
5. Kusiak, A.: Smart manufacturing must embrace big data. Nature 544(7648), 23–25 (2017). https://doi.org/10.1038/544023a
6. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
7. Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_2
8. May, G., Stahl, B., Taisch, M.: Energy management in manufacturing: toward eco-factories of the future - a focus group study. Appl. Energy 164, 628–638 (2016)
9. May, G., Stahl, B., Taisch, M., Kiritsis, D.: Energy management in manufacturing: from literature review to a conceptual framework. J. Cleaner Prod. 167, 1464–1489 (2017)
10. Meyer, F.: Producing animal feed with less electricity and heat. FIZ Karlsruhe, Eggenstein-Leopoldshafen (2014)
11. Unotech GmbH: UPASMART. Die intelligente Kanalballenpresse: Automatische Kanalballenpresse ausgestattet mit Künstlicher Intelligenz. LM Group, Niederlangen (2018)
12. Yan, W., Yu, L.: On accurate and reliable anomaly detection for gas turbine combustors: a deep learning approach. In: Proceedings of the Annual Conference of the Prognostics and Health Management Society (2015)
13. Zhong, R.Y., Xu, X., Klotz, E., Newman, S.T.: Intelligent manufacturing in the context of industry 4.0: a review. Engineering 3(5), 616–630 (2017). https://doi.org/10.1016/J.ENG.2017.05.015

Retail Promotion Forecasting: A Comparison of Modern Approaches

Casper Solheim Bojer¹, Iskra Dukovska-Popovska¹, Flemming Max Møller Christensen¹, and Kenn Steger-Jensen¹,²

¹ Centre for Logistics (CELOG), Materials and Production, Aalborg University, Aalborg, Denmark
[email protected]
² Faculty for Technology and Maritime, Department of Maritime Technology, Operations and Innovation, University College of Southeast Norway, Notodden, Norway

Abstract. Promotions at retailers are an effective marketing instrument, driving customers to stores, but their demand is particularly challenging to forecast due to limited historical data. Previous studies have proposed and evaluated different promotion forecasting methods at product level, such as linear regression methods and random trees. However, there is a lack of a unified overview of the performance of the different methods due to differences in modeling choices and evaluation conditions across the literature. This paper adds to the methods the class of emerging techniques based on ensembles of decision trees, and provides a comprehensive comparison of different methods on data from a Danish discount grocery chain for forecasting chain-level daily product demand during promotions with a four-week horizon. The evaluation shows that ensembles of decision trees are more accurate than methods such as penalized linear regression and regression trees, and that the ensembles of decision trees benefit from pooling and feature engineering.

Keywords: Grocery retail · Promotion forecasting · Machine learning

1 Introduction

Grocery retail supply chains are facing pressure on margins because of tougher competition, more demanding customers, and an increasing focus on the reduction of food waste. Managing the supply chain efficiently is important, particularly for perishables, as a mismatch of supply and demand leads to lost profit due to lost sales, markdowns or waste. Forecasts at the product level are crucial for the alignment of supply and demand in the retail supply chain to ensure a smooth and timely flow of products. They are needed by manufacturers for the planning of capacity and materials, and by retailers as input to the replenishment process at both stores and distribution centers. However, forecasting at the product level is challenging due to several characteristics of the retail environment, such as stockouts, intermittency and promotions [1]. Promotion demand is particularly challenging to forecast due to the often-limited history of similar promotions.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 575–582, 2019. https://doi.org/10.1007/978-3-030-29996-5_66


C. S. Bojer et al.

Research has shown that stockouts are more frequent during periods of promotion [2, 3], confirming that promotions pose a challenge for the supply chain. Several researchers [4–6] have dealt with the challenge of forecasting promotions at product level. One of the widely used methods in retail, especially as a benchmark in the literature to justify the use of more complex methods, is the base-time-lift approach. This approach uses an estimate of the regular demand and adjusts for promotions using a multiplicative factor, the "lift" [1]. Different methods that achieve greater predictive performance than base-time-lift have been proposed, ranging from linear to non-linear methods. However, the evaluation conditions for the proposed methods differ in terms of aggregation level, forecast horizon and available data, which are context specific. The methods also differ in terms of: (1) model scope, i.e. whether the model deals with forecasting of both regular and promotion demand, or promotion demand only; (2) the level of pooling, i.e. whether multiple SKUs at a given aggregation level are included in the same model; and (3) the variables constructed given the available data, also known as feature engineering. Most of the previous research only compares the proposed methods to a simple baseline method, and thus it is unclear how the methods compare to each other under different conditions and modelling choices. This paper adds to the previously evaluated methods in the literature a class of emerging techniques based on ensembles of decision trees, and aims to provide a comprehensive comparison of different promotion forecasting methods under different levels of pooling and feature engineering.
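The base-time-lift approach reduces to a product of a base forecast and a lift factor estimated from past promotions; a minimal sketch with invented numbers (not tied to any dataset in the paper):

```python
def estimate_lift(promo_sales, base_sales):
    """Average ratio of promotion sales to regular (base) sales
    over comparable past promotions."""
    ratios = [p / b for p, b in zip(promo_sales, base_sales)]
    return sum(ratios) / len(ratios)

def base_time_lift(base_forecast, lift):
    """Promotion forecast = regular demand estimate times the lift factor."""
    return base_forecast * lift

# Two past promotions with lifts of 3.0 and 4.0 give an average lift of 3.5.
lift = estimate_lift(promo_sales=[2400, 2800], base_sales=[800, 700])
forecast = base_time_lift(base_forecast=800, lift=lift)
```

Its simplicity is exactly why it serves as the standard benchmark in this literature.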

2 Background

Previous work in the area of product-level promotion forecasting differs widely in approaches and evaluation conditions. A number of papers have evaluated ordinary least squares (OLS) linear regression models. Foekens et al. [7] examined forecasting accuracy for weekly product demand at different aggregation and pooling levels using the log-linear SCAN*PRO model. The best performing model at both chain and market level, in terms of mean absolute percentage error (MAPE) and median relative absolute error, was the chain-specific store-level SCAN*PRO model without weekly seasonality indicators. However, they did not include any other models in the comparison. Cooper et al. [4] presented a linear regression based model formulation for forecasting promotional product demand at store level. Information on price, advertising, display conditions, major events and the historical performance of promotions was included. Additionally, products were categorized into slow movers and fast movers, while promotion events were categorized based on their duration, with one model created for each combination of product and duration category. Thus, pooling was conducted with regard to stores and items. The authors find that the model is superior to using historical averages of matching display and advertising conditions in terms of forecast error measured in cases. Van Donselaar et al. [8] presented a linear regression model for forecasting the lift factor at chain and product level, including information on price, competitive information, advertising, display, baseline sales, weight and shelf life. They compare pooling at category level with one model for all categories. They find that the best pooling level differs by
category and that the regression model outperforms a moving average of historical lift factors in terms of MAPE and root mean square error (RMSE). Huang et al. [9] examined forecasting weekly product demand at chain level considering competitive information using an autoregressive distributed lag (ADL) model, and not using pooling. They found that the ADL model incorporating competitive information is more accurate than the base-time-lift method using a variety of error measures, including mean absolute error (MAE) and MAPE. A number of recent papers have dealt with more modern statistical or machine learning approaches, including penalized linear regression, decision tree models, support vector machines and neural networks. Ma et al. [6] considered the use of competitive information using a multi-stage LASSO model with an ADL formulation for forecasting weekly product demand at store level without the use of pooling. They found that the model improves upon base-time-lift and that including competitive information outside the product category only helps marginally in terms of a variety of error measures. Gür Ali et al. [5] compared a variety of methods for forecasting weekly product demand at store level, including linear regression, support vector machines and regression trees, using MAE. Different pooling schemes were considered: one model for all observations, pooling by store and pooling by subgroup. Compared to the previously mentioned papers, they use a more data-driven approach with extensive feature engineering as is often seen in the machine learning community. They found that regression trees outperform the other models considerably, particularly during promotions. The best pooling scheme for the regression tree was one model for all observations, and the feature engineering improved forecast accuracy. 
A later study by the same lead author compared the aforementioned models as well as penalized linear regression and neural networks for forecasting both weekly and daily product demand at store level using MAE and MASE [10]. Pooling was conducted at subcategory level, as well as more extensive feature engineering. They found that regression trees and penalized linear regression show similar performance for weekly forecasting, whereas penalized linear regression is best at daily forecasting. In addition to the published research, a retail forecasting competition was held by Ecuadorian grocery retail chain Corporacion Favorita. The challenge given was to forecast weekly product sales at store level given transaction data including historical sales and promotion indicators, but not price. The top five performers of the competitions used gradient boosted decision trees, neural networks or a combination of both. To sum up, most of the studies conducted only evaluate their proposed models against simple baselines. In addition, it is difficult to get a unified overview due to differences in modeling choices and evaluation conditions. It is therefore unclear how these proposed models stack up against each other. Two exceptions are the studies conducted by Gür Ali et al. [5] and Gür Ali [10], which compare several different techniques of varying complexity. However, it remains unclear how large the performance gap is between regression trees and penalized linear regression, and under which conditions one outperforms the other. In addition, the studies do not include recent advances in the field of machine learning such as random forest [11] and gradient boosted decision trees (e.g. [12]), which have shown great promise in forecasting competitions and are widely used by machine learning practitioners. In this study, a
comparison of the different methods presented in literature will be conducted, including recent developments within the area of machine learning. The aim is to provide further evidence as to which models are most accurate for product level retail forecasting, and thereby also contribute to the question of whether non-linear models, specifically decision tree-based models, are superior to linear models and warrant the added complexity.

3 Method

The purpose of the paper is to compare and evaluate the main methods used for forecasting demand during promotion events at a daily product level. More specifically, the regression-based methods - ordinary least squares (OLS) linear regression, penalized linear regression using LASSO and regression trees - are compared to modern machine learning methods based on ensembles of decision trees - random forest [11] and XGBoost [12] - which are non-linear. In addition, the historical average is used as a benchmark. The historical average method simply forecasts the historical average under matching conditions of price and advertising, with price as the dominant condition in case of no full match. The fallback forecast in case of no match is a naïve forecast of the last promotion. Base-time-lift is not included as a benchmark, as it cannot be used for items not sold outside promotion periods. The method comparison is conducted on promotional sales of fresh meat and fish from a large Danish discount grocery chain, as these items present a major challenge due to their perishable nature. Chain-level forecasts are sent to the suppliers four weeks prior to a promotion for the creation of a shared plan for meeting promotion demand. Promotions primarily have a duration of one week, and a forecast is required at the daily level for the promotional period. The case company uses a relatively simple promotion strategy based on price discounts advertised in a weekly flyer and occasionally on TV & radio. Most products are promoted at two or three price points. The data available consist of aggregated POS data, product master data and promotion master data, including price and advertising information, for the period of January 2015 to November 2018. Information on display conditions was not available in the case company databases. Only data from promotional periods are used for fitting the models.
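The historical-average benchmark described above (full match on price and advertising, then price-only match, then a naïve fallback to the last promotion) can be sketched as follows; the record layout is an assumption:

```python
def historical_average(history, price, advertising):
    """Benchmark forecast from past promotions of the same SKU."""
    full = [h["sales"] for h in history
            if h["price"] == price and h["advertising"] == advertising]
    if full:
        return sum(full) / len(full)          # average under full match
    price_only = [h["sales"] for h in history if h["price"] == price]
    if price_only:
        return sum(price_only) / len(price_only)  # price as dominant condition
    return history[-1]["sales"]               # naive fallback: last promotion

# Hypothetical promotion history for one SKU.
history = [
    {"price": 10, "advertising": "flyer", "sales": 900},
    {"price": 10, "advertising": "flyer", "sales": 1100},
    {"price": 12, "advertising": "tv",    "sales": 700},
]
full_match = historical_average(history, 10, "flyer")
price_match = historical_average(history, 12, "flyer")
fallback = historical_average(history, 15, "radio")
```

Despite its simplicity, this benchmark turns out to be hard to beat for some of the evaluated models (Sect. 4).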
The data are split into training, validation and test sets, where five weeks are used as the validation set to tune hyperparameters and twenty weeks are used as the test set. A total of 152 SKUs are present in the test dataset, with 48 of the SKUs having fewer than five promotions in the training and validation period. It has therefore not been possible to forecast these items using item-level models, although the use of pooling allows forecasting of new products with few or no observations. We evaluate the forecasting accuracy on these SKUs separately to illuminate which method performs best for products with a short promotion history and what accuracy can be achieved. The methods are compared using the basic dataset and with feature engineering. The features constructed include competitive intensity information and historical averages of sales by product, category, promotion conditions etc., similar to the work of Gür Ali [5, 10]. The feature engineering is not included for simple OLS regression, as it does
not have any built-in variable selection method and hence will overfit. Instead, a model formulation similar to that of the SCAN*PRO model is used. In addition, we consider the models at various levels of pooling:

• Full pooling, i.e. a single model used for forecasting all products
• Category pooling, i.e. one model per category
• Subcategory pooling, i.e. one model per subcategory
• No pooling, i.e. one model per product.
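The pooling levels above amount to different grouping keys for the training data, with one model fitted per group; a sketch with invented rows:

```python
from collections import defaultdict

def pool_key(row, level):
    """Group rows according to the pooling level used in the evaluation."""
    return {"full": "all",
            "category": row["category"],
            "subcategory": row["subcategory"],
            "none": row["sku"]}[level]

def split_by_pooling(rows, level):
    groups = defaultdict(list)
    for row in rows:
        groups[pool_key(row, level)].append(row)
    return groups  # one model would be fitted per group

rows = [
    {"sku": "A1", "category": "meat", "subcategory": "beef"},
    {"sku": "A2", "category": "meat", "subcategory": "pork"},
    {"sku": "B1", "category": "fish", "subcategory": "salmon"},
]
n_models_full = len(split_by_pooling(rows, "full"))        # a single model
n_models_none = len(split_by_pooling(rows, "none"))        # one per SKU
```

More pooling trades larger, more heterogeneous training samples against model specificity.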

The simple OLS model is considered with no pooling and subcategory pooling, as any higher pooling level is likely to produce bias. The decision tree-based models are evaluated with full pooling and with category pooling, since these methods generally need a larger sample size to be effective. This is in line with Gür Ali et al. [5], who found that including all observations for the regression tree improved performance significantly. Table 1 summarizes the models considered in the evaluation. The methods are evaluated using time series cross-validation, also known as rolling origin evaluation [13]. At each time step of one week the model is fitted and a four-week-ahead forecast is created for the promotion event. For the hyperparameter tuning, the model is not refitted due to the large computational demands. The forecast accuracy is evaluated in terms of forecast error magnitude and bias using both scale-dependent and scale-independent measures. The volume-weighted MAPE (WMAPE) is chosen as it is scale-independent and stable with zero values, while the RMSE is chosen as it is widely used. The mean error (ME) and a mean-scaled version are used to evaluate forecast error bias. The evaluation is carried out in the statistical computing language R [14].

Table 1. Models considered in the evaluation including dataset used and pooling strategy. N - evaluated using basic dataset, B - evaluated using both basic dataset and with feature engineering.

Method          | No pooling | Subcategory pooling | Category pooling | Full pooling
Simple OLS      | N          | N                   |                  |
LASSO           | N          | B                   | B                | B
Regression tree |            |                     | B                | B
Random forest   |            |                     | B                | B
XGBoost         |            |                     | B                | B
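The rolling-origin scheme and the WMAPE measure can be sketched as below (week indices and values are invented; this is a Python illustration, not the study's R code):

```python
def rolling_origin(n_weeks, start, horizon=4):
    """Yield (train_end, forecast_week) pairs: each week the model is fitted
    on data up to train_end and forecasts the promotion `horizon` weeks ahead."""
    for origin in range(start, n_weeks - horizon + 1):
        yield origin, origin + horizon

def wmape(actual, forecast):
    """Volume-weighted MAPE: scale-independent and stable with zero sales."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

# Ten weeks of data, first origin at week 5, four-week horizon.
splits = list(rolling_origin(n_weeks=10, start=5))
# A zero-sales day contributes to the numerator without breaking the measure.
err = wmape(actual=[100, 0, 50], forecast=[90, 10, 60])
```

Unlike per-observation MAPE, WMAPE remains defined when some daily actuals are zero, which matters for daily promotion data.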

4 Results

The forecasting accuracy of the best combination of dataset preparation and pooling for each of the evaluated methods for SKUs with at least five historical promotions can be seen in Table 2, whereas the forecast accuracy for newly introduced SKUs is presented in Table 3. From Table 2, it is clear that the best method for SKUs with historical information, in terms of all measures of error magnitude, is XGBoost with category-level pooling and feature engineering. The results are in general dominated by XGBoost and random forest, but XGBoost is slightly biased, whereas the random forest has lower bias at a very small decrease in accuracy. The regression trees and OLS
models perform significantly worse, not managing to beat the historical average method. In the middle of the performance spectrum lies LASSO, which is best without feature engineering and at the subcategory level. From Table 3, it is clear that LASSO is the best performer for new SKUs, with no feature engineering and category-level pooling coming out on top. The WMAPE and bias are much higher for the new items in general, with a WMAPE of 33.1% for the best model, compared to 16.2% for the SKUs with a longer history of promotions. Overall, the results show that the modern machine learning approaches outperform the historical averages, OLS, LASSO and regression trees given that more than five historical promotions are available for the SKUs being forecasted. In addition, feature engineering and either category or full pooling improve the performance of these methods.

Table 2. Best forecast accuracy for each method on SKUs with historical information.

Method     | Dataset | Pooling     | WMAPE | RMSE | ME  | Scaled ME
XGB        | With    | Category    | 0.162 | 1488 | 363 | 0.077
RF         | With    | Full        | 0.168 | 1578 | 174 | 0.037
LASSO      | Without | Subcategory | 0.182 | 1702 | 160 | 0.034
Hist. Avg. | Without | None        | 0.189 | 1692 | 164 | 0.036
RT         | With    | Category    | 0.229 | 2076 | 90  | 0.019
LM         | Without | None        | 0.249 | 2601 | 395 | 0.084

Table 3. Best forecast accuracy for each method on SKUs with few historical promotions.

Method | Dataset | Pooling  | WMAPE | RMSE | ME   | Scaled ME
LASSO  | Without | Category | 0.331 | 522  | 118  | 0.125
RF     | Without | Full     | 0.399 | 588  | −129 | −0.137
XGB    | With    | Full     | 0.420 | 631  | −60  | −0.063
RT     | Without | Category | 0.451 | 716  | −112 | −0.119

5 Discussion

The results are impacted by both the forecasting conditions and the modeling choices used in the evaluation. We hypothesize that the modern machine learning methods outperform linear models in situations with strong patterns, interaction effects and a large relevant sample. The daily aggregation level has the effect of increasing the sample size, but also presents more noise than the weekly level, whereas the chain aggregation level has the opposite effect. Our findings indicate that at this particular aggregation level the sample size and pattern strength are large enough to make the machine learning models superior to linear models. It is difficult to compare our findings to those of Gür Ali [10], as they look at forecasting store-level demand and do not include ensembles of decision trees. Contrary to previous research, our results show that for this case, linear regression and regression trees are surpassed by a simple benchmark method: historical averages under matching conditions.
This benchmark method has to our knowledge only been used by Cooper et al. [4], but its strong performance suggests that it should be considered when evaluating forecast accuracy for promotions.

Pooling in general proved useful, as it improves performance for all models except OLS. This underlines that patterns exist across products, and that the models can effectively use them. The linear models seem to benefit less from pooling, and the subcategory level seems to be the best trade-off between bias and variance in coefficient estimates, whereas the decision tree-based models favor more pooling, even performing well with one model for all fresh meat and fish products. This is likely due to the nature of decision trees, as they can effectively choose between pooling and no pooling where appropriate.

The feature engineering conducted benefitted the decision tree-based models and led to greater forecast accuracy, particularly for the random forest model. This was not the case for the linear models, which could be due to non-linear relationships between the created variables or high correlations. It is therefore plausible that feature engineering aimed specifically at linear models would have improved their performance. However, this would also demand greater effort on feature engineering because of the more restrictive nature of linear models. For the forecasting of products with limited demand history, the results indicate that while the models can provide forecasts for these products, they are not very accurate, and it is likely that a judgmental forecast by a category manager could provide better or at least similar accuracy.

6 Conclusion

The comparison of methods for forecasting product-level promotion demand found that modern machine learning methods in the form of ensembles of decision trees outperform previously proposed methods such as linear regression, penalized linear regression and regression trees for SKUs with more than five historical promotions on the task of forecasting chain-level daily demand. For SKUs with less promotion history, penalized linear regression is the best performer, although all of the methods have relatively high forecast errors. In addition, the comparison found that the ensembles of decision trees benefit from both feature engineering and pooling, either at the category level or with full pooling, whereas the linear models perform better without feature engineering at the subcategory level. The implication of these findings is that ensembles of decision trees should be considered candidates for forecasting product-level promotion demand at the daily chain level. An interesting area for further research is whether this also holds at the weekly chain level, where the sample size is reduced by a factor of seven, or at the weekly store level, where the sample size is increased significantly at the expense of much greater noise. Limitations of the study include that the results are based on a single case with one particular forecast horizon. A shorter horizon could potentially change the results, as lagged sales then become a valuable form of information not currently considered. In addition, we only use promotional data to fit the models, and it is possible that the linear models in particular can benefit from non-promotional data to obtain better estimates of seasonality, making this a topic worthy of further research.


C. S. Bojer et al.

References

1. Fildes, R., Ma, S., Kolassa, S.: Retail forecasting: research and practice. Working Paper. Lancaster University Management School (2018). https://doi.org/10.13140/RG.2.2.17747.22565
2. Taylor, J., Fawcett, S.: Retail on-shelf performance of advertised items: an assessment of supply chain effectiveness at the point of purchase. J. Bus. Logistics 22, 73–89 (2001). https://doi.org/10.1002/j.2158-1592.2001.tb00160.x
3. Corsten, D., Gruen, T.: Desperately seeking shelf availability: an examination of the extent, the causes, and the efforts to address retail out-of-stocks. Int. J. Retail Distrib. Manag. 31(12), 605–617 (2003). https://doi.org/10.1108/09590550310507731
4. Cooper, L.G., Baron, P., Levy, W., Swisher, M., Gogos, P.: PromoCast™: a new forecasting method for promotion planning. Market. Sci. 18(3), 301–316 (1999). https://doi.org/10.1287/mksc.18.3.301
5. Gür Ali, Ö., Sayin, S., Van Woensel, T., Fransoo, J.: SKU demand forecasting in the presence of promotions. Expert Syst. Appl. 36, 12340–12348 (2009). https://doi.org/10.1016/j.eswa.2009.04.052
6. Ma, S., Fildes, R., Huang, T.: Demand forecasting with high dimensional data: the case of SKU retail sales forecasting with intra- and inter-category promotional information. Eur. J. Oper. Res. 249, 245–257 (2016). https://doi.org/10.1016/j.ejor.2015.08.029
7. Foekens, W., Leeflang, P., Wittink, D.: A comparison and an exploration of the forecasting accuracy of a loglinear model at different levels of aggregation. Int. J. Forecast. 10, 245–261 (1994). https://doi.org/10.1016/0169-2070(94)90005-1
8. Van Donselaar, K.H., Peters, J., de Jong, A., Broekmeulen, R.A.C.M.: Analysis and forecasting of demand during promotions for perishable items. Int. J. Prod. Econ. 172, 65–75 (2016). https://doi.org/10.1016/j.ijpe.2015.10.022
9. Huang, T., Fildes, R., Soopramanien, D.: The value of competitive information in forecasting FMCG retail product sales and the variable selection problem. Eur. J. Oper. Res. 237, 738–748 (2014). https://doi.org/10.1016/j.ejor.2014.02.022
10. Gür Ali, Ö.: Driver moderator method for retail sales prediction. Int. J. Inf. Technol. Decis. Making 12, 1–26 (2013). https://doi.org/10.1142/S0219622013500363
11. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001). https://doi.org/10.1023/A:1010933404324
12. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM, New York (2016). https://doi.org/10.1145/2939672.2939785
13. Tashman, L.: Out-of-sample tests of forecasting accuracy: an analysis and review. Int. J. Forecast. 16, 437–450 (2000). https://doi.org/10.1016/S0169-2070(00)00065-0
14. R Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria (2018). https://www.R-project.org/

A Data Mining Approach to Support Capacity Planning for the Regeneration of Complex Capital Goods

Melissa Seitz, Maren Sobotta, and Peter Nyhuis

Institute of Production Systems and Logistics (IFA), Leibniz University Hannover, An der Universität 2, 30823 Garbsen, Germany
[email protected], http://www.ifa.uni-hannover.de

Abstract. With regard to the recommissioning of complex capital goods rendered inoperable by damage, high logistics efficiency is a very important competitive factor for regeneration service providers. Consequently, fast processing as well as high schedule reliability need to be realized. However, since the required regeneration effort for future damages may vary and is usually unknown at the time of planning, capacity planning for the regeneration of complex capital goods has to deal with a high degree of uncertainty. Regarding this challenge, the evaluation of prior regeneration process data by means of data mining offers great potential for the determination of load forecasts. This paper depicts the development of a data mining approach to support capacity planning for the regeneration of complex capital goods, focusing on rail vehicle transformers as a sample of application.

Keywords: Capacity planning · Database · Data mining · Complex capital goods · Logistics efficiency



1 Introduction

Complex capital goods consist of numerous expensive components [1]. Due to the high value of these goods and components, regeneration processes (maintenance, repair and overhaul) are used to increase the product service life along with their value-adding potential [2]. Some manufacturers of complex capital goods, such as rail vehicle transformers, thus additionally offer these regeneration services. However, there is a challenging uncertainty in terms of planning and synchronizing the capacities required for the regeneration processes that needs to be managed in order to achieve high schedule adherence and reliable delivery times [1, 3]. This uncertainty is caused by the fact that the regeneration effort varies with the good's condition and is usually not known when the planning is conducted [1, 4, 5]. The increasing availability of data due to progressive digitalization provides the opportunity to gain new, valid and relevant information from extensive databases, which may be used to support business decisions [6, 7]. Research by Eickemeyer has shown that data mining methods may support capacity planning when there is uncertain load information. Data mining methods may be used as a tool in planning capacities by generating load forecasts regarding the anticipated expenditures for the regeneration tasks that are to be conducted. The underlying data need to be provided by a database of already completed regeneration orders which includes information about the damage incident, diagnosis as well as conducted regeneration measures and the related expenditures [6, 8, 9]. With regard to the knowledge discovery in databases (KDD) process by Fayyad et al. [8], current research is thus focused on developing a database and identifying a suitable data mining method for generating reliable load forecasts to improve capacity planning in the regeneration of rail vehicle transformers [10]. Following the KDD process, the first step is to define the objective and identify available data [8]. During data selection, relevant data are chosen and integrated in a database [9]. These data are also referred to as target data and, over the course of the KDD process, they form the core for the gained data-based knowledge. In the next step, the data are prepared and pre-processed [8]. This step is critical because the data quality strongly influences the analysis results. During the subsequent transformation, the pre-processed database is transformed into a suitable database schema [11]. Following this introduction (Sect. 1), Sect. 2 focuses on the development of the database model for the regeneration of rail vehicle transformers in detail. In this database, all relevant data collected during the regeneration processes of prior regeneration orders are stored. It contains information about the damage incident, diagnosis, measures used and scope of the regeneration. In order to gain new information from these data, Sect. 3 describes the next step in the KDD process, which focuses on identifying an appropriate data mining approach to generate load forecasts.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 583–590, 2019. https://doi.org/10.1007/978-3-030-29996-5_67

[Figure: regeneration process for rail vehicle transformers (above): SL (Service Life) → D (Damage Incident) → Di (Diagnosis) → B/C (Bid Estimation/Contract) → MP (Material Procurement) → DA (Disassembly) → R (Regeneration) → A (Assembly) → QA (Quality Assurance); approach (below): Data Base Development → Data Mining → Interpretation and Evaluation → Expenditure Forecast → Capacity Planning]

Fig. 1. Regeneration process for rail vehicle transformers (above) and approach to support capacity planning by means of data mining (below), based on [10]


For the sample of application described in this paper, data mining is used to generate reliable forecasts of the anticipated expenditures for the regeneration tasks to support capacity planning when regenerating rail vehicle transformers. With the aid of these forecasts, capacity requirements can be better planned and measures for aligning capacities can be introduced early on. Since new and different information becomes available as the regeneration process progresses, load forecasts can be updated over the course of planning [6]. The determination of a suitable data mining approach to support capacity planning when regenerating rail vehicle transformers is depicted in Sect. 3. Figure 1 briefly depicts the regeneration process of rail vehicle transformers [10] and summarizes the approach of the research described in this paper.

2 Development of a Rail Vehicle Regeneration Database

In order to be able to analyze historical data and gain new information, there has to be a relevant data basis [8, 9]. Therefore, a database of damages and corresponding regeneration tasks was created for rail vehicle transformers as the addressed sample of application. A relational database was used as a model. In contrast to a large hierarchically structured data sheet, the data in a relational database are acquired in groups for individual topics. In technical language, these topics are referred to as entities. Consequently, relational databases are more flexible than hierarchical databases. This flexibility applies both to expanding the database and to changing its structure [12, 13]. The regeneration database was built in accordance with the database design process developed by Steiner [12]. The sequence of the individual steps of the selected design process is depicted in Fig. 2 and clarified below.

Define Function: The regeneration database for rail vehicle transformers is responsible for systematically acquiring information relevant to damages in order to forecast loads and the regeneration measures that are to be executed.

Form Entity Sets: The regeneration database for rail vehicle transformers comprises the following entities: transformer, project information, damage incident, diagnosis results and regeneration expenditure.

Define Relationships: The entities of a relational database can be linked with one another in a number of ways. These links are defined by their relationship type. Types of relationships result from the combination of the following types of association: simple association '1' (exactly one linked data set), conditional association 'c' (no or exactly one linked data set), multiple association 'm' (at least one linked data set), or multiple conditional association 'mc' (any number of linked data sets) [12].
Every manufactured rail vehicle transformer is assigned to a project. A project is a customer’s collective order and consequently can include multiple transformers (1-m). A damage incident recorded in the database belongs to exactly one transformer.


Theoretically, a transformer can be damaged a number of times during its lifecycle, whereby a transformer might have to be assigned to a number of damage incidents (1-m). For the diagnosis of damage incidents in the observed rail vehicle transformer application, the individual analyses are only conducted when instructed by the customer. The results of a test may in turn be identical for a number of transformers (1-c). This implies that there can be either one diagnosis result or no diagnosis result for a saved damage incident, and that a diagnosis result can be allocated to at least one damage incident. In contrast, exactly one expenditure is registered for the repair work entailed in regenerating a rail vehicle transformer, whereas the specific timespan required for the repair can theoretically be allocated to multiple damage incidents (1-m).

[Figure: Start → Define Function → Form Entity Sets → Define Relationships → Define Identification Keys → Global Normalization → Define Local Attributes → Formulate Consistency Requirements → Formulate Transactions → End]

Fig. 2. Process for designing a database [12]

Define Identification Keys: A clear identification key has to be set in order to classify the entities and the data sets contained within them [12, 13]. In our sample of application, these are the transformer's article number and the project number. For damage incidents, case numbers are created, and internal job numbers are used to unmistakably identify the diagnosis results and expenditures.

Global Normalization: In the global normalization step, the conceptual data model is converted into a physical data model with corresponding spreadsheets. To do so, auxiliary entities may be added [12]. To concretize the diagnosis, the results of the individual analyses are separately recorded in the database. The mechanical and electrical tests as well as the oil test are therefore assigned separate entities. On the relationship level, these are described as 1-c because the three named tests are conducted for each individual transformer only when instructed by the customer. Consequently, there can either be a result or there can be no result. The three test entities can also be assigned different individual results and are therefore defined by a 1-m relationship. Moreover, in the database, the regeneration expenditure is subdivided into four separate auxiliary entities (winding, prefabrication, final assembly and testing). Figure 3 depicts the entity block diagram that results following these transformation steps. The individual entity sets are linked via so-called foreign key attributes [14].


[Figure: entity sets Project Information, Transformer and Damage Incident, linked via 1, c and m associations to the auxiliary expenditure entities (Winding, Prefabrication, Final Assembly and Testing Regeneration Expenditure), the test entities (Visual Inspection/Mech. Testing, Electrical Testing, Oil Analysis) and their result entities (Visual/Mech. Inspection Results, Electrical Tests Results, Oil Analysis Results)]

Fig. 3. Entity relationship diagram for the physical data model of the regeneration database

Define Local Attributes: The local attributes describe the information that corresponds to the entities and thus define the content of the data sets, to which specific values can be assigned. Furthermore, as previously described, individual attributes serve as identification keys or foreign keys [12, 14]. In addition to the article number, the project number is also given for every transformer. A project is described by both its project number and the attributes customer and site of operation. The project number attribute consequently is the foreign key between the transformer and project information entity sets. The damage incidents, with the identification key damage case number, are further described, for example, by means of the service duration up to the time at which the damage occurred. The damage categories include attributes such as module failure location, component failure location and cause of failure. The diagnosis results entity includes the possible analyses with their corresponding results. For electrical testing, possible attributes entail results of resistance or ratio testing. The damage case number is added as a foreign key attribute to link the entity sets damage incident, diagnosis results and regeneration expenditure.

Formulate Consistency Requirements: During this step in the design process, we had to formulate requirements to ensure the consistency of data in the database. Among these is the necessity for well-defined coding [12]. Thus, for example, in the transformer spreadsheet, a project number can only be entered when a corresponding data set exists in the project information spreadsheet. Moreover, the data quality can be ensured by setting specific value ranges for the individual attributes.

Formulate Transactions: When in operation, the database is continually expanded by all of the regeneration orders that arise. To ensure data consistency, operations need to be defined in the form of transactions that uniquely specify entering, editing or deleting data sets. In terms of process, access rights are also set for individual persons or groups [12].
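These consistency requirements map directly onto standard relational constraints. The following sketch, using hypothetical table and column names in SQLite (a simplified subset, not the authors' actual schema), shows a foreign key enforcing that a transformer's project number must already exist in the project information table, plus a CHECK constraint as a simple value-range rule:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
con.executescript("""
CREATE TABLE project_information (
    project_number TEXT PRIMARY KEY,
    customer       TEXT,
    site_of_operation TEXT
);
CREATE TABLE transformer (
    article_number TEXT PRIMARY KEY,
    project_number TEXT NOT NULL REFERENCES project_information(project_number)
);
CREATE TABLE damage_incident (
    damage_case_number INTEGER PRIMARY KEY,
    article_number     TEXT NOT NULL REFERENCES transformer(article_number),
    service_years      REAL CHECK (service_years >= 0)  -- value-range rule
);
""")
con.execute("INSERT INTO project_information VALUES ('P-001', 'Operator A', 'Hannover')")
con.execute("INSERT INTO transformer VALUES ('T-100', 'P-001')")

# A transformer whose project number has no matching data set is rejected:
try:
    con.execute("INSERT INTO transformer VALUES ('T-999', 'P-404')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same mechanism extends to the damage-case foreign key linking damage incidents, diagnosis results and regeneration expenditures described above.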


3 Determination of a Suitable Data Mining Approach

According to the KDD process, once the database that is to be analyzed has been created, a suitable data mining method has to be selected [8]. In view of the targeted implementation in industrial practice, we derive the following criteria for selecting an approach:

• Forecast quality: The method should be able to generate reliable load forecasts with a high predictive value for planning capacities.
• Robustness: The method should also be able to process incomplete input information. This criterion is particularly relevant because usually not all of the information, or only imprecise information, about the damage incident is available at the start of a regeneration process.
• Scalability: The method should be applicable to a varying quantity of data in the damages database. This requirement ensures that the method will deliver valid forecasts even with a growing amount of data.
• Transparency: Users should be able to understand and interpret the application of the method and the targeted results. This requirement should ensure that users can identify discrepancies, for example resulting from input errors, and undertake appropriate correction measures.
• Time expenditure: The time required for the user to prepare and operate the method should be reasonable and proportionate to the obtained benefit.
• Flexibility: The method should be adjustable to changed conditions as quickly and easily as possible.

Within the framework of the research activities, different approaches from the fields of statistics and artificial intelligence were examined with regard to their suitability for determining load forecasts. Figure 4 briefly summarizes the results of our literature-based evaluation.

Fig. 4. Evaluation of possible data mining approaches


According to this assessment, the approaches from the field of artificial intelligence are characterized by a particularly high potential in terms of forecasting accuracy. For a sustainable application in industry, transparency, time expenditure and robustness are of decisive importance. That is why Bayesian networks are chosen as the most promising approach among those from the field of artificial intelligence. Bayesian networks are graphical models and result from a combination of graph and probability theory. The mathematical model underlying Bayesian networks is Bayes' theorem, which links the conditional and marginal probabilities of individual events. The symbolic or graphic representation of Bayesian networks is easily understood by users and transparent [15, 16]. Moreover, the forecasts of Bayesian networks are fairly accurate even with a minimal amount of data in the training set. Bayesian networks also fulfil the robustness requirement, since they can process incomplete data sets. Bayesian networks are therefore suitable for modelling and drawing conclusions when there is uncertainty resulting from unknown or incomplete information [17]. With regard to processing input information, Bayesian networks can work with continuous or discrete variables [16]. Compared to simulation models, queries are quickly answered by the Bayesian networks' analytical calculation [17]. To sum up, Bayesian networks fit the requirements described above comparatively best and will thus be implemented to determine load forecasts to support capacity planning when regenerating rail vehicle transformers.
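The underlying mechanism can be illustrated with Bayes' theorem on a toy numerical example (all probabilities here are hypothetical, not values from the regeneration database): an observed diagnosis finding updates the prior belief that an order will require high regeneration expenditure.

```python
# Bayes' theorem: P(high | finding) = P(finding | high) * P(high) / P(finding)
p_high = 0.3             # hypothetical prior: P(high expenditure)
p_find_given_high = 0.8  # hypothetical: P(winding damage found | high)
p_find_given_low = 0.1   # hypothetical: P(winding damage found | low)

# Marginal probability of the finding (law of total probability).
p_find = p_find_given_high * p_high + p_find_given_low * (1 - p_high)

# Posterior belief after the diagnosis finding is observed.
p_high_given_find = p_find_given_high * p_high / p_find

print(round(p_high_given_find, 3))  # 0.774
```

A Bayesian network chains many such local conditional-probability updates over a graph of variables (damage location, cause, expenditure class, etc.), and can answer the same query even when some findings are missing, which is the robustness property valued above.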

4 Summary and Outlook

Planning capacities for the regeneration of complex capital goods such as rail vehicle transformers is a tremendous challenge due to the imprecision of load information [1]. Methods from the field of data mining have proven to be useful tools to support planning in regeneration processes [6]. Since logistics efficiency is a very important competitive factor in the regeneration business, more reliable planning benefits sales and profit [6, 18]. In order to support planners by means of data mining, it is necessary to have an underlying, suitable database of already conducted regeneration orders [6, 9, 10]. We thus detailed the process we took to develop a database model for regenerating rail vehicle transformers as well as the selection of a suitable data mining approach to generate load forecasts as a basis for capacity planning and throughput time estimation. Following research activities will focus on determining load forecasts with Bayesian networks using real data from the regeneration industry to verify the general evaluation of this approach with regard to the specific sample of application. Subsequently, the load forecasts may be used to enable automated decision support for capacity adjustments. Furthermore, additional fundamental research activities may also address an empirical comparative analysis of different data mining approaches using data from different samples of application to review the thus far literature-based evaluation.

Acknowledgments. The authors kindly thank the German Research Foundation (DFG) for the financial support to accomplish the research project T3 "Capacity planning and quotation costing for transformer regeneration by means of data mining" within the Collaborative Research Centre (CRC) 871 – Regeneration of Complex Capital Goods.

References

1. Eickemeyer, S.C., Nyhuis, P.: Capacity planning and coordination with fuzzy load information. Bus. Rev. 16, 259–264 (2010)
2. Uhlmann, E., Bilz, M., Baumgarten, J.: MRO-challenge and chance for sustainable enterprises. Procedia CIRP 11, 239–244 (2013)
3. Kuprat, T., Nyhuis, P.: Designing capacity synchronization within the regeneration of complex capital goods. Univers. J. Manag. 4(10), 581–586 (2016)
4. Hoffmann, L.-S., Kuprat, T., Kellenbrink, C., Schmidt, M., Nyhuis, P.: Priority rule-based planning approaches for regeneration processes. Procedia CIRP 59, 89–94 (2017)
5. Gassner, S.: Instandhaltungsdienstleistungen in Produktionsnetzwerken. Mehrzielentscheidung zwischen Make, Buy, Concurrent Sourcing und Cooperate. Springer, Wiesbaden (2013). https://doi.org/10.1007/978-3-658-01367-7
6. Eickemeyer, S.C.: Kapazitätsplanung und -abstimmung für die Regeneration komplexer Investitionsgüter. PZH-Verlag TEWISS - Technik und Wissen GmbH, Garbsen (2014)
7. Cabena, P., Hadjinian, P., Stadler, R., Verhees, J., Zanasi, A.: Discovering Data Mining – From Concept to Implementation. Prentice Hall, New Jersey (1998)
8. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P.: From data mining to knowledge discovery in databases. AI Mag. 17(3), 37–54 (1996)
9. Maimon, O., Rokach, L.: Introduction to knowledge discovery and data mining. In: Maimon, O., Rokach, L. (eds.) Data Mining and Knowledge Discovery Handbook, 2nd edn, pp. 1–15. Springer, New York (2010). https://doi.org/10.1007/978-0-387-09823-4_1
10. Seitz, M., Sobotta, M., Nyhuis, P.: Einsatz von Data Mining im Regenerationsprozess von Schienenfahrzeug-Transformatoren. Potenziale für die Kapazitätsplanung und Angebotskalkulation. Zeitschrift für wirtschaftlichen Fabrikbetrieb 113(12) (2018)
11. Cleve, J., Lämmel, U.: Data Mining, 2nd edn. De Gruyter Oldenbourg, Berlin, Boston (2016)
12. Steiner, R.: Grundkurs Relationale Datenbanken. Einführung in die Praxis der Datenbankentwicklung für Ausbildung, Studium und IT-Beruf. Springer, Wiesbaden (2014). https://doi.org/10.1007/978-3-658-04287-5
13. Elmasri, R., Navathe, S.: Grundlagen von Datenbanksystemen. Pearson Studium Informatik, München (2005)
14. Cordts, S., Blakowski, G., Brosius, G.: Datenbanken für Wirtschaftsinformatiker. Vieweg+Teubner, Wiesbaden (2011)
15. Schiaffino, S., Amandi, A.: Intelligent user profiling. In: Bramer, M. (ed.) Artificial Intelligence: An International Perspective. LNCS (LNAI), vol. 5640, pp. 193–216. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03226-4_11
16. Sebastiani, P., Abad, M.M., Ramoni, M.F.: Bayesian networks. In: Maimon, O., Rokach, L. (eds.) Data Mining and Knowledge Discovery Handbook, 2nd edn, pp. 175–208. Springer, New York (2010). https://doi.org/10.1007/0-387-25465-X_10
17. Uusitalo, L.: Advantages and challenges of Bayesian networks in environmental modelling. Ecol. Model. 203(3), 312–318 (2007)
18. Lucht, T., Kämpfer, T., Nyhuis, P.: Characterization of supply chains in the regeneration of complex capital goods. In: International Conference on Competitive Manufacturing, COMA 2019, Proceedings, pp. 444–449 (2019)

Developing Smart Supply Chain Management Systems Using Google Trend's Search Data: A Case Study

Ramin Sabbagh and Dragan Djurdjanovic

The University of Texas at Austin, Austin, TX 78712, USA
[email protected], [email protected]

Abstract. Future manufacturing companies require smarter solutions to compete in the economy. Smart supply chain management systems are one of the most effective solutions. Use of past information can help companies predict market demand and react in an agile manner to sudden changes. Google receives over 63,000 searches per second on any given day. This huge amount of data provides the opportunity to investigate research questions in multiple subjects and extract useful information from the raw data that is available through Google Trends. In this research, we investigate the possible relationships between searches made in Google for two manufacturing capability terms, namely Precision Machining (PM) and Electric Discharge Machining (EDM). Time-series oriented research is conducted on these two datasets in order to find their dynamic characteristics as well as interesting hidden relationships between the two search items, to help build a smarter supply chain management system. Two different methods, namely ARMA and ARMAV models, are applied to fit a representative model to these datasets. The orders of both models are evaluated based on the AIC statistic. In addition, multiple seasonal trends are detected in the datasets. Finally, using the ARMA model, we predict the datasets one step ahead in order to validate our models. Recognition of seasonalities and correlations between the two datasets could lead to better prediction and smarter supply chain creation and management.

Keywords: Supply chain · Time-series analysis · Knowledge management

1 Introduction

In order to remain competitive in today's volatile economy, manufacturing companies need to be provided with smart supply chain management systems that enable them to manufacture products more efficiently, less expensively, and more quickly. Recent applications of machine learning and artificial intelligence have provided us with powerful tools and methods to make smarter decisions [1–4]. To react quickly to sudden changes and be aware of future demands and situations, companies need to focus more on past information and trends. Google created algorithms to help people find their way around the ever-growing amount of online content. Today, Google is a powerhouse that continues to innovate and improve the virtual world. Its ongoing success story is a result of its dedication to keep getting better, which is why it is the go-to search engine and arguably the most trusted source of information out there. Based on a recent report on the Search Engine Land website [5], Google has 90.46% of the search engine market share worldwide. However, 15% of all searches have never been searched before on Google. It is reported on the Search Engine Land website that Google receives over 63,000 searches per second on any given day and has a market value of $739 billion.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 591–599, 2019. https://doi.org/10.1007/978-3-030-29996-5_68

1.1 Google Trends

Google Trends is a website by Google that analyzes the popularity of top search queries in Google Search across various regions and languages. The website uses graphs to compare the search volume of different queries over time [6]. Google Trends also allows the user to compare the relative search volume of two or more terms. The quota limits for trend searches are based on the number of search attempts available per user/IP/device. Details of the quota limits have not yet been provided, but they may depend on geographical location or browser privacy settings. It has been reported in some cases that this quota is reached very quickly if one is not logged into a Google account before trying to access the trends service [7].

1.2 Precision Machining Versus Electric Discharge Machining

As discussed in the previous section, Google Trends is a great source of data for different fields and is customizable based on different categories such as country, time, search type, etc. In this research, we aim to explore the possible relationships between searches made in Google for two manufacturing capability terms, namely Precision Machining (PM) and Electric Discharge Machining (EDM). At first impression, one might expect some correlation and relationships between PM and EDM as concepts. To go beyond this impression, we perform time-series oriented research on these two datasets in order to uncover interesting hidden relationships between the two manufacturing concepts.

2 Methodology

In this section, autoregressive moving average models are utilized to fit the best model to the datasets according to a specific statistic. In addition, the vectorial ARMA model (ARMAV model) is introduced and its results are compared with those of the regular ARMA model. Moreover, the stochastic seasonalities associated with the datasets are evaluated. Finally, we use the datasets to predict future data points based on previous information.

2.1 Auto Regressive Moving Average (ARMA) Models

Developing Smart Supply Chain Management Systems

In order to fit a model to the data, the ARMAX modeling framework is utilized in this paper [8]. Several ARMA(2n, 2n − 1) models are fitted to the data and, using a procedure introduced by Pandit and Wu, the appropriate order for a model that represents the data sufficiently can be found [9]. The Akaike Information Criterion (AIC) [10] is used to find the optimum ARMA model among those that pass the white noise test [11]. A plain ARMA model gives no opportunity to use information from a different dataset as an extra input variable; vectorial ARMA models address this limitation, making the model more realistic and reliable.

2.2 Vectorial ARMA Models (ARMAV)

To investigate better options for fitting the datasets to appropriate models, an ARMAV model is utilized in this section. An ARMAV model is fitted to one dataset using the second dataset as an extra input. The results, including the model order as well as the residual sum of squares (RSS), are reported and compared between the two approaches at the end of this section.

2.3 Auto Regressive Characteristic Polynomial Roots

In both the ARMA and ARMAV models, the roots associated with the AR part of the model are mapped; depending on whether they lie inside, on, or outside the unit circle, the dataset is stationary, marginally stationary, or non-stationary, respectively. Finally, possible stochastic trends as well as seasonal trends are explored and detected using parsimonious models, in which roots close to 1 are forced to be exactly one and removed from the model.
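The stationarity check described above can be made explicit: for an AR(p) model x_t = a1·x_{t−1} + … + ap·x_{t−p} + e_t, the roots of z^p − a1·z^{p−1} − … − ap are compared against the unit circle. A small self-contained sketch:

```python
import numpy as np

def ar_roots(ar_coeffs):
    """Roots of the AR characteristic polynomial z^p - a1 z^(p-1) - ... - ap."""
    return np.roots([1.0] + [-a for a in ar_coeffs])

def classify(roots, tol=1e-8):
    """Classify a dataset by where its AR roots fall relative to the unit circle."""
    mags = np.abs(roots)
    if np.all(mags < 1 - tol):
        return "stationary"
    if np.any(mags > 1 + tol):
        return "non-stationary"
    return "marginally stationary"

print(classify(ar_roots([0.6, -0.2])))  # complex pair well inside the unit circle
print(classify(ar_roots([1.0])))        # root exactly at 1: a random walk
print(classify(ar_roots([1.5])))        # root outside the unit circle
```

Roots sitting almost exactly on the unit circle are the candidates that the parsimonious models of the paper force to be exactly one and remove.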

2.4 Prediction

The last step of the methodology is to predict the future of the datasets based on previous data points. For this purpose, 75% of the data is used for training and 25% as test data. Once the model is estimated, we make one-step-ahead, two-steps-ahead, …, N-steps-ahead predictions based on the training dataset. In this report, the prediction results for the ARMA and ARMAV models are compared, and the ARMAV model is used for the prediction due to its better performance.

3 Results

In this section, results are provided for the different parts of the methodology. First, we present the results for the ARMA models, including their roots and seasonalities. Then, the results for the ARMAV models and their seasonalities are illustrated.

3.1 Dataset Introduction

R. Sabbagh and D. Djurdjanovic

Two datasets are presented. Figure 1 illustrates the weekly sampled PM and EDM searches for the last five years. The EDM search shows higher interest over time than the PM search. In addition, some seasonalities and correlations between the two datasets are observable; one of the main purposes of this research is to find these seasonalities. The data show the interest over time for PM and EDM searches in the United States. Interest over time expresses the popularity of a term over a specified time range. Google Trends scores are based on the absolute search volume for a term, relative to the number of searches received by Google. Where sufficient data are available, Google Trends assigns a score between 0 and 100 to the inputted search terms on a monthly, weekly, or daily basis, and on a geographical basis.

Fig. 1. Interest over time for precision machining and EDM searches

Before presenting the results in more detail, we provide some extra information regarding the search data provided by Google Trends. The top five areas with the highest interest in PM are Illinois, Kansas, Oregon, California, and Wisconsin. The top five areas with the highest interest in EDM are New Hampshire, Utah, Idaho, Kentucky, and South Carolina. This measure provided by Google Trends can reflect the real interest or demand from the aforementioned states for PM and EDM, respectively. Moreover, the most popular related queries when searching for PM and EDM are also provided by the Google Trends website.

3.2 ARMA Model Results

Figure 2 shows the autocorrelation functions resulting from the ARMA models. For the PM search data, an ARMA(12, 11) model, and for the EDM data, an ARMA(1, 0) model, were found to be adequate by the AIC statistic. The resulting models indicate that the PM search data is more complicated than the EDM search data. Figure 2 also shows that both models are confirmed to be adequate, with an RSS of 9.367809e+03 for the PM search data and an RSS of 2.319061e+04 for the EDM search data.

3.3 Stochastic Trends and Seasonalities for ARMA Models

Fig. 2. Fitting the ARMA model to data (left: autocorrelation plot for the Precision Machining (PM) search data; right: autocorrelation plot for the EDM search data).

Figure 3 shows the roots associated with the AR parts of the ARMA models for the PM search and EDM search data separately. In the PM data, all five roots are inside the unit circle, so the PM search is a stationary dataset, with one real root and four complex roots. In addition, none of the roots is close to 1; thus, there are no stochastic or seasonal trends in the PM data. The EDM search dataset has only one root, which is real. This AR root is also inside the unit circle, so the EDM data is likewise a stationary dataset. After further processing of the EDM and PM datasets and a search for seasonal trends using parsimonious models, it turned out that neither has seasonal trends.

Fig. 3. Autoregressive roots of the datasets mapped along with the unit circle.

3.4 ARMAV Model Results

In this section, the results of fitting ARMAV models are presented. Figure 4 shows the autocorrelation functions resulting from the ARMAV models. For the PM search data driven by the EDM data, an ARMAV(19, 18) model, and for the EDM data driven by the PM data, an ARMAV(12, 11) model, were found to be adequate by the AIC statistic. The resulting models indicate that the PM search data driven by the EDM search data is more complicated than the EDM search data driven by the PM search data. Figure 4 also confirms that both models are adequate, with an RSS of 4.387391e+03 for the PM search data driven by EDM and an RSS of 1.366753e+04 for the EDM search data driven by PM.


Fig. 4. Fitting ARMAV Model to data - left figure is the autocorrelation plot for PM search data driven by EDM search data as input and right figure is the autocorrelation plot for EDM search data driven by PM search data as input.

3.5 Stochastic Trends and Seasonalities for ARMAV Models

Figure 5 shows the roots associated with the AR parts of the ARMAV models for the PM search and EDM search data separately. In the model for the EDM search data driven by the PM data, all 12 roots are inside the unit circle, so the EDM search is a stationary dataset, with zero real roots and 12 complex roots. In addition, none of these roots is close to 1; thus, there is no stochastic or seasonal trend in this data. The PM search data driven by the EDM search data has 19 roots, with three real roots and 16 complex roots. However, four of these roots are almost on the unit circle and could be the source of a possible seasonality. Moreover, the roots that are on the unit circle have a multiplicity of one; thus, the PM data is a marginally stationary dataset.

Fig. 5. Autoregressive roots of the datasets mapped along with the unit circle (left: PM search data driven by EDM search data; right: EDM data driven by PM data).

After further processing of the PM dataset by forcing the roots close to one to be exactly one, the parsimonious version of the ARMAV model associated with the PM dataset driven by the EDM search dataset is created in order to evaluate the seasonal trends of the EDM search data. It turned out that the EDM search data has two different seasonal trends, with periods of 13 and 52 weeks.

Developing Smart Supply Chain Management Systems

597

• Seasonality of 13: The ARMAV model of the PM data has a seasonality of 13. This seasonality looked unusual at first; however, considering the sampling rate of one week, 13 weeks corresponds to exactly one quarter (91 days). It is interesting that the precision machining search data has a quarterly seasonality.
• Seasonality of 52: The next detected seasonality is 52 weeks, which is exactly one year, similar to what we found based on the ARMA model results.
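The correspondence between a near-unit-circle complex AR root and a seasonal period can be made explicit: a root at angle θ (radians) corresponds to an oscillation with period 2π/θ samples. The following sketch recovers the 13-week (quarterly) and 52-week (yearly) periods from synthetic roots placed on the unit circle:

```python
import numpy as np

def period_from_root(root):
    """Period (in samples) implied by a complex root at angle theta: 2*pi/theta."""
    theta = abs(np.angle(root))
    return 2 * np.pi / theta

# A conjugate root pair on the unit circle at angle 2*pi/13 produces a
# 13-sample (13-week, i.e., quarterly) seasonality; 2*pi/52 gives yearly.
quarterly_root = np.exp(1j * 2 * np.pi / 13)
yearly_root = np.exp(1j * 2 * np.pi / 52)
print(round(period_from_root(quarterly_root)))  # 13
print(round(period_from_root(yearly_root)))     # 52
```

Reading periods off the root angles in this way is how the clusters of roots near the unit circle in Fig. 5 translate into the quarterly and yearly seasonalities discussed above.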

3.6 Performance Evaluation for ARMA and ARMAV Models

Now that adequate models have been identified, their roots examined, and stochastic and seasonal trends evaluated for both the ARMA and ARMAV models, we can compare the performance of the two approaches. One of the main criteria for model performance is the residual sum of squares (RSS), and the ARMAV model fitted both the EDM and PM search data better. The RSS of the ARMAV model for PM driven by EDM is 4.387391e+03, whereas the RSS of the ARMA model for PM is 9.367809e+03, a large reduction (53.17%) in RSS from using the ARMAV model. Similarly, the RSS of the ARMAV model for the EDM search dataset driven by the PM search data is 1.366753e+04, whereas the RSS of the ARMA model for the EDM search data is 2.319061e+04, a reduction of 41.06% in RSS from using the ARMAV model.
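As a sanity check, the percentage reductions can be recomputed directly from the reported RSS values:

```python
def rss_reduction(rss_before, rss_after):
    """Percentage reduction in RSS when moving from one model to another."""
    return 100.0 * (rss_before - rss_after) / rss_before

# RSS values as reported: ARMA vs. ARMAV, for each dataset.
pm = rss_reduction(9.367809e+03, 4.387391e+03)    # PM, plain vs. driven by EDM
edm = rss_reduction(2.319061e+04, 1.366753e+04)   # EDM, plain vs. driven by PM
print(f"PM: {pm:.2f}% reduction, EDM: {edm:.2f}% reduction")
```

This yields approximately 53.17% for the PM data and 41.06% for the EDM data.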

Fig. 6. Multiple-steps-ahead prediction using the ARMAV model for the Precision Machining (PM) dataset driven by the EDM dataset.

3.7 Prediction Results for the PM Search Dataset

In order to validate our model, we conducted a multiple-steps-ahead prediction of the PM search dataset. Figure 6 shows the result of the prediction. We updated the dataset by predicting the next data point and adding the predicted points as input for the subsequent predictions. The result is promising, especially since the prediction shown here was not performed with the ARMAV model, which would of course make the prediction stronger (by also using the EDM search data to predict the PM search data). Using ARMAV model predictions is one of the main future works for this research.

4 Conclusion and Future Work

In this research, we explored the possible relationships between Google searches for two manufacturing capability terms, namely Precision Machining and Electric Discharge Machining. The purpose of this investigation is to illustrate a possible method for building a smart supply chain management system based on online data provided by Google. Time-series-oriented research was conducted on these two datasets in order to find their dynamic characteristics as well as the hidden relationships between the two search items. Two different methods, namely ARMA and ARMAV models, were investigated in order to fit representative models to these datasets. The orders of both models were selected based on the AIC statistic. For the EDM and PM search data, the ARMAV model outperformed the ARMA model with RSS reductions of over 40%. Two different seasonal trends were detected in the EDM search dataset; it is concluded that there are quarterly and yearly seasonal trends in the EDM search data, whereas no stochastic or seasonal trends were found in the PM data. Finally, using the ARMAV model, we can predict the PM search data up to 60 weeks ahead using the multiple-steps-ahead method with high fidelity. For future work, more complicated predictions will be considered, such as using ARMAV models for more than two datasets. Google Trends could be a source for performing general predictions, especially for concepts for which gathering data is difficult.

References

1. Majstorovic, V., Zivkovic, S., Djurdjanovic, D., Sabbagh, R., Kvrgic, V., Gligorijevic, N.: Building of internet of things model for cyber-physical manufacturing metrology model (CPM3). Procedia CIRP 81, 862–867 (2019)
2. Sabbagh, R., Ameri, F.: Thesaurus-guided text analytics technique for capability-based classification of manufacturing suppliers. ASME J. Comput. Inf. Sci. Eng. 18(3), 031009 (2018)
3. Sabbagh, R., Ameri, F.: Supplier clustering based on unstructured manufacturing capability data. In: Proceedings of the ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V01BT02A036. American Society of Mechanical Engineers, Quebec City (2018)


4. Sabbagh, R.: Semantic text analytics technique for classification of manufacturing suppliers. Texas State University, San Marcos, Texas, USA (2018)
5. Sullivan, D.: Search Engine Land. https://searchengineland.com/google-now-handles-2-999trillion-searches-per-year-250247
6. Carneiro, H.A., Mylonakis, E.: Google trends: a web-based tool for real-time surveillance of disease outbreaks. Clin. Infect. Dis. 49(10), 1557–1564 (2009)
7. InsideGoogle (2007). http://google.blognewschannel.com/archives/2007/07/30/successgoogle-trends-updated/
8. Bierens, H.J.: ARMAX models (1988)
9. Pandit, S.M., Wu, S.-M.: Time Series and System Analysis with Applications. Wiley, New York (1983)
10. Sakamoto, Y., Ishiguro, M., Kitagawa, G.: Akaike Information Criterion Statistics, p. 81. D. Reidel, Dordrecht (1986)
11. Stoica, P.: A test for whiteness. IEEE Trans. Autom. Control 22(6), 992–993 (1977)

Collaborative Technology

Managing Knowledge in Manufacturing Industry - University Innovation Projects

Irina-Emily Hansen1, Ola Jon Mork1, and Torgeir Welo2

1 Department of Ocean Operations and Civil Engineering, Norwegian University of Science and Technology, 6009 Aalesund, Norway
[email protected]
2 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway

Abstract. Nowadays, manufacturing companies collaborate with universities in innovation projects to sustain or achieve competitive advantage. However, fundamental differences between the industrial and academic worlds hamper the utilization of the full innovation potential of such collaboration. As a countermeasure, industry stresses the need for knowledge management tools that can increase the value of collaborative innovation projects. This paper covers a qualitative study of research-based innovation projects owned by manufacturing companies and partly funded by government, in which academia has the role of research provider. We seek to answer two research questions: (1) how can strategies and objectives for collaboration be defined so as to meet both partners' expectations? (2) how can the projects be facilitated to enhance the creation and exploitation of knowledge? The study identifies that a modified version of Nonaka's so-called five-phase model of organizational knowledge creation is applicable in the given context. Based on this, we propose a conceptual knowledge management model for university-industry collaboration in innovation projects. The proposed model (1) provides management initiatives that intensify knowledge creation and exploitation processes, and (2) ensures the partners' commitment to collaboration along with the continuing improvement of university-industry collaborative concepts. It is proposed that the model will support knowledge managers in industry and university in conducting innovation projects more effectively and efficiently, as well as delivering even more innovation value to the partners and society. The model can also assist national and federal research/innovation councils in decision-making when assessing industrial research project applications.

Keywords: Industry-university collaboration · Knowledge management · Innovation project

1 Introduction

Research-based innovation projects between industry and university leverage competitiveness in the global market, while providing scientific knowledge and value for society. However, substantial differences between manufacturing companies and universities hamper collaboration, often leaving innovation potential from projects unexploited [1]. Industry stresses that the use of knowledge management tools would enable more and better-quality results from innovation projects with universities [2]. This study aims to contribute to the understanding of this challenge by answering two research questions that target the main challenges in University-Industry Collaboration (UIC) projects: (1) how can strategies and objectives be defined to meet both partners' expectations? (2) how can the projects be facilitated to amplify the creation and exploitation of knowledge? This study encompasses university-industry (UI) innovation projects that are owned by industry and partly funded by government, where academia has the role of 'external' research provider. The companies studied herein are characterized by mechanical production in marine and maritime businesses, including producers of propulsion systems, shipbuilders, manufacturers of equipment for fish factories, and similar. Within this industrial context, innovation typically occurs through solving specific industrial problems based on tacit knowledge acquired from work experience, often through learning by doing, using, and interacting, i.e., the so-called DUI mode of innovation [3]. This is an important research topic, since most existing UI studies address the scientific-technological (STI) type of innovation process, in which innovation is invented by researchers for industry and is not the result of joint activities between industry and university. The qualitative research conducted herein identifies that Nonaka & Takeuchi's model of organizational knowledge creation is applicable to the UI context [4]. Based on that, and using a somewhat modified version of their model, we propose a conceptual model of knowledge management of university-industry collaboration in innovation projects. The model aims to intensify knowledge creation and improve exploitation processes.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 603–610, 2019. https://doi.org/10.1007/978-3-030-29996-5_69
The authors believe that knowledge managers in industry and university can use this model as a practical guideline to execute innovation projects more effectively and efficiently, thus increasing innovation impact. Moreover, the model can support national and federal research/innovation councils in decision-making when assessing industrial research applications. The remainder of the article is organized as follows: Sect. 2 introduces the theoretical background; Sect. 3 presents the research methodology employed; the research findings are summarized in Sect. 4, followed by a discussion in Sect. 5; Sect. 6 gives conclusions and further work.

2 Theoretical Background

The acknowledged organizational knowledge creation model was introduced by Nonaka & Takeuchi [4]. It consists of five phases: sharing tacit knowledge, creating a concept, justifying the concept, building an archetype, and cross-leveling knowledge. The project team starts with sharing tacit knowledge: people share knowledge they acquired through personal experience in specific knowledge fields. For instance, a technology integrator can provide insights into the feasibility of integrating technology in factories. Based on the shared tacit knowledge, team members create a concept of a new product, process, or service. The created concepts must be justified against criteria identified by knowledge goals and the needs of society; justifying a concept often involves experts outside of the project group. Once the concept is justified, it must be tested through an archetype. The last step, cross-leveling of knowledge, implies sharing the knowledge derived from the project with the rest of the organization. Cross-leveling triggers a new cycle of knowledge creation, forming a spiral that accentuates that organizational knowledge creation is a continuous process.

3 Methodology and Background for Study

The study covers the so-called 'user-driven research-based innovation' type of project. The 'user' is an industrial company, which typically submits an application to the Research Council for financial support [5]. The Research Council of Norway (RCN) provides financial support for collaboration between industrial companies and research organizations to promote innovation and sustainable value creation through research-based innovation. RCN stimulates industry to innovate and mitigates the risk of innovation by covering around 40% of the cost of such projects. Typically, most cash funding for external academic research in such projects comes from RCN. The company contributes in-kind (hours and equipment) and some cash funding, acts as the contract partner with RCN, and is therefore responsible for the project and its budget. The context for this study is a set of manufacturing companies and a university campus located on the west coast of Norway. The size of the companies and their R&D capabilities vary significantly. A few large companies plan research many years ahead; they are the ones that conduct their own basic research. However, the majority of the companies are smaller, with significantly shorter research horizons. This work focuses on two research questions: (1) how to define goals for collaboration that meet the expectations of both industry and university, and (2) how to facilitate projects to amplify knowledge creation and exploitation. Fifteen individual semi-structured interviews were conducted. The respondents were six academic project managers, six industrial project managers, two academic PhD candidates, and one PhD employed by one of the companies. Furthermore, a workshop with fourteen PhD candidates and two senior researchers was arranged to collect input for the study. The first and second authors of this paper facilitated the workshop.
Moreover, seven observations of an ongoing project, including formal and informal meetings, combined with nine semi-structured interviews, were conducted as part of this study.

4 Findings

The collected data was used to create a new knowledge-management model of UI collaboration in innovation. The proposed model reflects the necessity of several aspects to support the knowledge creation process in the UI context that are not considered by Nonaka & Takeuchi's model: (1) commitment of resources, and (2) managerial initiatives that support not only the creation of knowledge, but also its exploitation.

606

I.-E. Hansen et al.

Commitment of dedicated resources to the project is one of the major tasks in managing UI projects [2]. Therefore, the proposed model is organized at three interdependent levels, i.e., each organization's strategic level, the UI collaborative strategic level, and the UI project level, as shown in Fig. 1. The integration of the project level into the UI collaborative level, and of the latter into the top-management level, reflects the necessity for top management to support and prioritize the projects by allocating sufficient resources to them, even in competition with operational needs and daily business.

Fig. 1. Conceptual model of knowledge management of UI innovation projects (the knowledge-creation phases of sharing knowledge, concept, justify concept, building archetype, and knowledge exploiting are repeated at the organization's strategic level, the UI collaborative strategic level, and the UI collaborative project level).

Exploiting knowledge in innovation projects is a major concern. Even when the projects generate new ideas, these are not always used in the ongoing project. To accentuate the importance of this aspect, the last phase in the UI collaboration model is formulated as 'knowledge exploiting'. Table 1 presents conceptual solutions that collectively support knowledge exploitation by answering the research questions on how to identify collaborative goals and facilitate knowledge processes. The collaborative concepts emerged from the data analysis: scrutiny of the data from interviews and observations led to categorizing the data into collaborative concepts on three levels, namely each organization's strategic level, the UI strategic level, and the UI project level. The model incorporates continued improvement of the collaborative concepts. Figure 1 shows that each organization's grand concept for collaboration in innovation is verified at the UI collaborative level. Here, the step 'building archetype' on level one triggers the knowledge creation process at the UI collaborative strategic level two. Both concepts go through a test at the project level: the knowledge-creation process on project level three evaluates the quality of the concepts from the levels above. Thus, the project functions as an archetype of level one and level two. For instance, a constant shortage of resources from university and industry in the project would imply a revision of the collaborative concepts at all levels.


Table 1. University-industry innovation projects: collaborative concepts

Organization's strategic level
  How to define collaborative goals: Knowledge vision and strategy with national and regional directions for innovation
  How to facilitate innovation: Dedicate and allocate resources for collaboration: PhD programs; industrial management and senior researchers support PhDs; interdisciplinary collaboration; clarification of partners' expectations

UI collaborative strategic level
  How to define collaborative goals: Building a knowledge platform: long-term collaborative strategy
  How to facilitate innovation: Strategic UI project group whose members are involved in many UI projects; absorptive capacity of those involved (industrial and academic background); clarification of partners' expectations

UI project level
  How to define collaborative goals: Project objectives are aligned with the UI collaborative strategy
  How to facilitate innovation: Anchoring the projects at top management; industry should navigate the project; involve internal and external stakeholders; provide a common language; create momentum: keep enthusiasm

5 Discussion

The study shows that the organizational five-step model of the knowledge-creation process introduced by Nonaka & Takeuchi can be applied to knowledge management in UI collaboration with some modifications. The modifications are consistent with the answers to the research questions: how to define collaborative goals, and how to facilitate knowledge creation and exploitation. The model contributes to UI collaboration by (1) providing collaborative concepts that intensify knowledge creation and exploitation processes, and (2) ensuring the commitment of partners to the collaboration and providing for the continuing improvement of the UI collaborative concepts. In the following, each of these contributions is discussed separately.

5.1 Collaborative Concepts for Knowledge Creation and Exploitation

The concept at each level of collaboration contains specific initiatives that enable the creation and application of new knowledge, thereby ensuring the success of UI innovation projects. The contributions to each phase of the knowledge creation process are considered individually.


The sharing tacit knowledge phase includes sharing experiences to enable shared mental models. The involvement of people with backgrounds from both academia and industry, such as senior researchers and highly educated industrial employees, helps partners relate to each other quickly. Integrating academic PhD candidates into a company's operational environment generates common experience with industry, making it easier to share tacit knowledge. University and industry knowledge strategies crystallize the multiplicity of shared mental models in one direction [4]. The managerial initiative at the project level of creating momentum by quickly delivering value at the beginning of the project generates positive collaborative experience and accelerates the sharing of tacit knowledge. For instance, researchers can use their methodological tools to solve a few small industrial problems at the very beginning of the project. Concept creation is about how the partners are going to collaborate in innovation. Table 1 depicts the concepts guiding how to define the goals for collaboration in innovation and the initiatives to achieve these goals. Dedication of resources to the innovation project is the core of concept creation. In practice, the partners should specify which knowledge contributors are required for the project and the amount of time they will spend on it. For instance, the project may require that one engineer use 40% of their time, two days a week, to design the prototype and participate in prototype building, while a mechanic and an electrician each use 20% of their time to work on the prototype. The same applies to the university: the project manager on behalf of the university and the researchers from the required knowledge fields, for example an automation expert and a software developer, will be assigned the time they are required to spend on the project. Dedication of resources is crucial for project success.
Otherwise, daily routines will take over and the UI innovation project will be given lower priority. Concept creation also implies initiatives that support a 'shared language': avoiding academic terminology and using industrial language, sketches, drawings, and mock-ups are means of providing mutual understanding. The model emphasizes the role of the strategic UI project group that collaborates over time on many technological projects; accumulated collaborative experience helps execute projects more effectively. Clarification of expectations in strategies and objectives supports concept justification. This study suggests that top management in university and industry has the main responsibility for incorporating justification criteria into the organizational knowledge vision and strategy, which must be consistent with national and regional plans for research and development. The collaborative UI unit should establish a set of sub-criteria in the form of a UI collaborative strategy for a long-term partnership, in line with the knowledge strategies of the organizations. Consequently, the project objectives present the set of sub-criteria that coincide with the justification criteria at the levels above. The research questions should be defined in line with industrial needs and leave some room for flexibility due to the uncertainty in innovation projects. Involving stakeholders (people from different departments in companies and universities, and end users of future innovative solutions) is vital for the creation and justification of the concepts. Building archetypes is a necessary part of creating a new process or product. Rapid prototyping and frequent interactions with users are a prerequisite for success.


Additionally, a prototype is a great communication tool for people from different backgrounds. Using it frequently will improve communication between industry and academia and increase the quality of the knowledge creation processes. Cross-leveling knowledge depends on university and industry committing resources to undertaking projects and implementing the results. The university should provide enough time for researchers to integrate new knowledge into educational programs and to develop it further through other projects with industry. To ensure the implementation of research findings in industry, one should actively involve the company's customers and/or operational users of the new knowledge in the project. This ensures that the project will meet industrial requirements and makes the company commit the resources needed to execute the project and implement the results. Moreover, engaging operational users in the project gives them ownership of the new knowledge, creating willingness to use it. The study also emphasizes that the integration of technology experts in the project ensures the feasibility of applying the project results.

Commitment and Continuous Improvement of Collaborative Concepts

Figure 1 illustrates the embeddedness of the project and strategic UI collaborative levels in the main knowledge creation process of each organization. Creating and exploiting knowledge at the project level needs support from a collaborative UI strategic unit and from the decision-makers at the top level of each organization. Therefore, the grand concept at each organization’s strategic level emphasizes the necessity of top-management commitment to collaboration; without it, the basis for initiating a new project is lacking. The model’s dynamic provides continuous improvement of the collaborative concepts, which function interdependently on three levels. Universities and industrial companies define their own grand concept for collaborating with others in innovation. The grand concept must support the concepts for collaboration at the levels below: the common UI collaborative strategy level and the UI project level. Modifications at each level of the collaboration trigger optimization processes at the other levels. Continuous improvement makes the model dynamic.

5.3 Verification of the Proposed Knowledge Management Model

The proposed model is newly developed. A first assessment of the theoretical model was made in a workshop with participants from one completed UI innovation project. The project leader from the university, the PhD candidate, and the industrial PhD candidate who during the project had to take over the role of project manager on behalf of the company evaluated the collaborative concepts. The criterion for evaluation was the degree of impact of the concepts’ substance on the project: low, medium or high. The participants assessed the concepts individually and also discussed the formulation and content of the statements. They saw the model’s potential to help university and industry deliver more innovation, and asserted that the model could also be a tool for setting up new projects for further exploration and exploitation of the knowledge derived from the project.

610

I.-E. Hansen et al.

The workshop results help the researchers develop a more practical version of the model for implementation. Converting the theoretical model into practical guidelines will make it easier for knowledge managers in university and industry to apply and validate the model. The plan is to use focus groups to test whether the practical guidelines are meaningful and suitable for practical application; the focus groups will involve experienced project managers from university and industry, as well as representatives from the Research Council of Norway (RCN). The true contribution of this work can only be evaluated by comparing a real innovation project before and after applying the proposed approach. In this connection, it is worth noting that such projects typically last three years, so it would take a long time to obtain final results. Moreover, one should develop criteria for comparing the innovation impact of projects with and without the model. Validating on as many projects as possible would be best, but one should be aware of the amount of work involved and the need to ensure the quality of the validation.

6 Conclusion

This research contributes to knowledge management theory by adapting the organizational knowledge creation theory of Nonaka and Takeuchi to the context of UI collaboration projects. In practice, the model has the potential to support university and industry in conducting innovation projects more effectively and efficiently. The model provides collaborative concepts on three levels: the strategic organizational level, the UI strategic collaborative level and the UI project level. The concepts are knowledge management initiatives that support the creation and application of knowledge in innovation projects; they encompass specific recommendations on how to define collaborative knowledge goals and the activities to achieve them. The model emphasizes the importance of continued knowledge exploitation, which triggers constant improvement of the collaborative concepts on all levels. Future research will validate the effect of applying the model to projects. Although the study covers mechanical engineering companies in the marine and maritime sectors on the west coast of Norway, the issues between university and industry are common to other industries and similar for any public-private collaboration.

References

1. Perkmann, M., et al.: Academic engagement and commercialisation: a review of the literature on university–industry relations. Res. Policy 42, 423–442 (2013)
2. Hansen, I.-E., Mork, O.J., Welo, T.: Towards a framework for managing knowledge integration in university-industry collaboration projects (2018)
3. Lundvall, B.-Å.: Interactive learning, social capital and economic performance. In: Advancing Knowledge and the Knowledge Economy, pp. 63–74 (2006)
4. Nonaka, I.: A dynamic theory of organizational knowledge creation. Organ. Sci. 5, 14–37 (1994). https://doi.org/10.1287/orsc.5.1.14
5. Research Council of Norway. https://www.forskningsradet.no/prognett-bia/Programme_description/1226993636103

Technology Companies in Judicial Reorganization

Ricardo Zandonadi Schmidt and Márcia Terra da Silva

Paulista University - UNIP, PPGEP, São Paulo, Brazil
[email protected]

Abstract. The updating of the judicial reorganization and bankruptcy legislation, Law 11.101/2005, resulted in an average increase of 63.7% per year in judicial recovery filings from 2005 to 2018, but with a success rate of only 1%. The speed at which new technologies are launched tends to contribute to corporate crises; to emerge from such a crisis, companies must monitor their financial indicators and, when necessary, request judicial recovery at the same speed as technological change.

Keywords: Recovery · Revenue

1 Introduction

The legal regimes of recovery and bankruptcy for companies in difficulty are instruments by which the entrepreneur can overcome a crisis in the execution of the economic activity, in order to preserve the company or to carry out its orderly closing, assuring the social function of this economic activity [1]. When judicial recovery is decreed, a stay period of 180 days begins, during which all executions against the company are suspended. Within the first 60 days of this period, the company must present a recovery plan, which will be analyzed by the creditors; at the end of the 180 days, at the general meeting of creditors, they may approve the plan, allowing the company’s activities to continue, or reject it, decreeing the bankruptcy of the company. In case of approval, the company remains under supervision for 24 months, during which it cannot file a new request for judicial recovery. Even though the law dates from 2005, judicial reorganization (or recovery) has been gaining ground as a survival option for companies because of the economic context, such as changes in the external market due to the deceleration in China, or the reduction of the public deficit in Europe and the peripheral countries, which caused difficulties in the valuation of goods [2]. One of the sources of instability for companies in recent years, which will possibly extend for years to come, is the high rate of breakthrough innovation, due to the emergence of new products and services based on innovative technology.
This is what happened with the emergence of music streaming: according to the annual Pró-Música Brasil report, digital media and streaming services such as Spotify, Apple Music and YouTube had revenues of $178.6 million in Brazil in 2017, compared with $15.8 million for physical media such as CD

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 611–616, 2019. https://doi.org/10.1007/978-3-030-29996-5_70


and DVD [3]. Another platform, Airbnb, according to research by the Foundation for Economic Research (Fipe), added R$2.5 billion to the Brazilian GDP in 2016, even though it still represented only 2.1% of the total number of guests in Brazil [4]. Since the creation of the law, there has been an exponential increase in requests for judicial recovery, from 110 in 2005 to 1,863 in 2016, while the recoveries granted went from 1 in 2005 to 606 in 2018. However, the recovery rate of US companies is around 30%, whereas the Brazilian rate is 1%. This paper therefore aims to identify the causes of the inefficiency of the Brazilian process, first by comparing the Brazilian and American legislation, and then by analyzing financial indicators of Brazilian companies that may explain the low success rate of judicial recovery in Brazil.

2 Methods

As judicial reorganization is a legal process, the vast majority of published studies and books are written by lawyers and address only the legal aspect. We therefore first address the legal issue, comparing Brazilian and American legislation (which inspired the Brazilian law) to identify whether any aspect of Brazilian law could imply an increase in corporate mortality in judicial reorganization proceedings [1, 13]. We then address the financial aspect of companies in judicial recovery, since one way to understand the economic situation of a company [7] is to evaluate its financial indicators [12]. In accordance with the determinations of the Brazilian Securities and Exchange Commission (Law 6.404/1976) and Law 11.101/2005, corporations are required to disclose financial information such as the balance sheet, the statement of accumulated profits or losses, the income statement for the year, and the statement of sources and uses of funds. We therefore examined the financial statements and analyzed the economic indicators of technology companies in the interior of São Paulo. This region was chosen because its companies have similar, competing products that became obsolete with the advent of the new technology, and mainly because we had access to the data.

3 Results

The mortality rate among companies whose judicial recovery is accepted is very high. This article therefore examines the legislation and studies the cases of three similar companies in judicial recovery, together with their financial statements.

3.1 Legislation

This section outlines an overview of American law and Brazilian law, studying the peculiarities of each with regard to the recovery of companies and pointing out some of the differences, similarities and contrasts between the laws of the two countries.


Table 1, comparing Law 11.101 with the American Chapter 11, does not intend to compare perfect equivalents: the two countries have different formations and peculiarities, and, above all, quite different legal orders.

Table 1. Comparative – Brazilian vs American legislation [2]

Topic | Brazil | US
Companies emerging from judicial recovery | 1% | 30%
Deadline for plan submission | 60 days | 120 days, with possibility of extension for up to 18 months
Classes of creditors | Labor creditors, secured and unsecured creditors | Free to create classes of creditors, which allows greater freedom of negotiation
Businesses excluded from judicial recovery | Banks, financial institutions, health plans and insurance companies | Operators of railways, stock brokers and commodities brokers
Who can request recovery | Businesses only | In addition to companies, individuals and even counties
Bankruptcy administrator function | Acts as an inspector of the debtor | Also performs administrative duties
Role of the treasury | Does not participate with its financing credits in the company’s recovery | Tax authorities can make concessions and receive shares
Maintenance of the debtor in the administration | Yes, unless there is fraud | Yes, under the debtor-in-possession figure, unless there is fraud
Block sale | Only when there is bankruptcy | Block sales can be made
Voting on the recovery plan | A majority of votes is needed and, for the application of cram down, approval of at least 1/3 of the dissenting class | The court has greater freedom to apply cram down

Brazilian legislation was based on Chapter 11 of the US Bankruptcy Code, so similar results would be expected, yet the outcomes differ: the American success rate historically varies between 20% and 30%, well above the Brazilian 1% [6]. Since the legislation itself does not explain the outcome of judicial recovery, we analyze the companies’ economic indicators to try to understand the differences between Brazilian and American companies.

3.2 Case Studies

The cases analyzed belong to three companies that applied for judicial reorganization following the arrival of an innovative product that competes head-on with products already consolidated in the market and offered by these companies. Lamps intended for residential lighting were of the incandescent type, which consumed a lot of energy and lasted a short time; they were gradually replaced by compact fluorescent, or electronic, lamps, four times more efficient and six times more durable than incandescents, though with a greater environmental impact. Recently, the popularization of LED (Light Emitting Diode) bulbs, which offer low power consumption, a longer life span and lower environmental impact, has emerged as a competitor for the companies producing fluorescent luminaires and lamp reactors (ballasts) [5]. The analyzed companies did not see the change coming and continued to produce luminaires and reactors for the fluorescent lamp segment. It was not long before revenue started to decline, and when they tried to produce the new technology and put their products on the market, they came up against a relentless player, China, with an extremely cheap product, making local manufacturing impractical. These companies once produced more than 70,000 reactors per month; today, production does not reach 8,000 reactors per month. This reduction shows the progress of LED: the cycle of fluorescent lighting is coming to an end, and production today serves only the maintenance of luminaires installed before the arrival of the new technology.

3.2.1 Economic Indicators

Economic and financial indicators are among the key indicators of a company and are used to elaborate strategic planning [11]. They are thermometers for measuring performance and evaluating the operational processes of business organizations. In accordance with Brazilian Law 6.404/1976, corporations are required to disclose financial information.
We therefore chose three indicators to analyze the economic situation of three technology companies in the interior of São Paulo whose business was based on manufacturing products for fluorescent lighting, examining their financial results in the last three years before judicial recovery. These data are public and appear in the companies’ initial petitions. The companies are referred to as company A, company B and company C. Company C filed for judicial recovery in 2015, while companies A and B applied in 2016.

3.2.2 Revenue

Revenue corresponds to the sum of sales of products and/or services in a given period: all the money that comes into the company’s cash from the sale of products, merchandise and services (Table 2).

Table 2. Annual revenue

Revenue     2013           2014            2015            2016
Company A   –              R$72.475.814    R$66.802.036    R$59.650.522
Company B   –              R$120.837.892   R$104.249.675   R$70.336.793
Company C   R$40.155.741   R$47.020.971    R$26.518.663    –
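For illustration, the revenue decline preceding each filing can be quantified directly from the Table 2 values. A minimal Python sketch (the figures are as reported above; the function name is our own):

```python
# Annual revenue in BRL from Table 2 (years with no reported data are omitted).
revenue = {
    "A": {2014: 72_475_814, 2015: 66_802_036, 2016: 59_650_522},
    "B": {2014: 120_837_892, 2015: 104_249_675, 2016: 70_336_793},
    "C": {2013: 40_155_741, 2014: 47_020_971, 2015: 26_518_663},
}

def yoy_change(series):
    """Year-over-year revenue change in percent, keyed by the later year."""
    years = sorted(series)
    return {
        y1: round(100 * (series[y1] - series[y0]) / series[y0], 1)
        for y0, y1 in zip(years, years[1:])
    }

for company, series in revenue.items():
    print(company, yoy_change(series))
```

The computation makes the pattern in the text visible: company C's revenue collapses in 2015 (the year it filed), while A and B decline more gradually into 2016.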


The companies were already showing a drop in revenue with the arrival of LED. Company C, realizing this, requested judicial recovery and stopped producing fluorescent technology once the request was accepted in 2015. This resulted in a significant drop in sales, but made it possible to redirect production to other products better suited to market needs.

3.2.3 Financial Expenses

Financial expenses represent the price paid by an enterprise to its creditors and are related to interest on loans contracted by the business; in two of the companies the values are significant (Fig. 1).

Fig. 1. Financial expenses of companies A, B and C, 2013–2016

Considering Brazilian real interest rates of around 4% per year in 2018 (excluding inflation over the next 12 months), Brazil occupied the fourth position among the 40 countries analyzed, behind Turkey (13.93%), Argentina (18.20%) and Russia (6.01%) [10]. The cost these companies pay for using third-party capital is therefore very high, making them even less competitive in the market. Company C reduced its financial expenses in 2015 with the request for judicial recovery, so the cash flow previously absorbed by this expense could be used for other purposes.

3.2.4 Statement of Profit or Loss

Profit or loss is the amount that results from commercial activity when the amount received is set against the costs of production: in other words, the amount that remains from each business transaction after discounting all the direct and indirect costs of a product (Table 3).

Table 3. Statement of profit or loss

Profit/Loss   Company A      Company B       Company C
2013          –              –               -R$3.781.724
2014          R$223.904      R$173.903       -R$8.132.169
2015          R$76.985       -R$17.794.254   -R$745.640
2016          -R$4.873.362   -R$13.352.732   –

Company C reduced its losses to amounts consistent with its revenue, while companies A and B significantly increased their losses and still maintained production of an outdated technology. Today, company C has emerged from judicial recovery and is fulfilling the commitments made at the creditors’ meeting. Companies A and B are in serious financial difficulty because of their late requests for judicial recovery.


4 Conclusion

The modernization of Brazilian law has not improved the success rate of judicial reorganization requests, today at 1%, compared with the roughly 30% success rate achieved under the American law that served as its basis. Taking too long to admit that the financial problem is serious is the most common mistake of entrepreneurs on the brink of crisis [9]. Analysis of the financial data of companies A, B and C makes clear that company C, which requested judicial recovery early, was able to reorganize and get out of the crisis; it now has its plan approved and is fulfilling the agreement reached at the creditors’ meeting. Companies A and B, with their late requests, are in serious economic difficulty and carry many debts affecting the approval of their recovery plans by the assembly of creditors. An earlier request could have allowed the survival of these companies, as they would still have had revenues in line with financial expenses, permitting a change of course (the development of new technologies) with a greater prospect of success. Companies, especially in the technology sector, have to be very attentive to technological change and innovation; only then can they survive in today’s world, where Industry 4.0 is already a reality.

References

1. Teófilo Jr., F., Bruno, B.S.: Terceira onda renovatória e a instituição de varas especializadas em recuperação judicial e (auto)falência. Tribunais Online Magazine (2018)
2. Abreu, L.F.: A recuperação judicial na Lei Brasileira e na Lei Americana (2014)
3. Pró-Música Brasil. https://pro-musicabr.org.br/wp-content/uploads/2018/04/Pro_MusicaBr_IFPIGlobalMusicReport2018_abril2017-003.pdf
4. Turismoetc. https://www.turismoetc.com.br/viagem/airbnb-interfere-na-economia-do-brasilsegundo-fipe/
5. Inmetro. http://www.inmetro.gov.br/inovacao/publicacoes/cartilhas/lampada-led/lampadaled.pdf
6. Jusbrasil. https://oab-rj.jusbrasil.com.br/noticias/111936478/so-1-das-empresas-sai-darecuperacao-judicial-no-brasil
7. Silva, V.F., Sampaio, J.S.: Reorganization requests in Brazil: an explanation with economic variables. Revista Brasileira de Finanças (2018)
8. Torres, C.F.: JusBrasil. https://jus.com.br/artigos/62669/lei-n-11-101-2005-e-seus-impactosno-indice-de-falencias-gráfico3
9. Bertão, N.F.: Recuperação judicial no Brasil: as lições de quem sobreviveu. Época Magazine (2018)
10. Infinity Asset (2018). http://infinityasset.com.br/blog/ranking-mundial-de-juros-reaisset18-2/
11. Kaplan, R.S., Norton, D.P.: Mapas Estratégicos: Balanced Scorecard. Campus, Rio de Janeiro (2001)
12. Oliveira, L. (2015). https://capitalsocial.cnt.br/situacao-economica-financeira-empresa/
13. Jupetipe, F.K.N., Martins, E., Mário, P.D.C., Carvalho, L.N.G.D.: Custos de falência no Brasil comparativamente aos estudos norte-americanos (2017)

Multiscale Modeling of Social Systems: Scale Bridging via Decision Making

Nursultan Nikhanbayev, Toshiya Kaihara, Nobutada Fujii, and Daisuke Kokuryo

Graduate School of System Informatics, Kobe University, Kobe, Hyogo 657-8501, Japan
[email protected]

Abstract. In recent years, technological advances have made it possible to connect heterogeneous systems at hierarchical levels, such as the macro level, where strategic decisions are made, and the micro level, where organizations interact with users. Modeling these connections alongside the systems themselves is a problem that can be addressed by modeling techniques that take the hierarchical nature of the systems into consideration. In this paper, we propose a multiscale modeling approach for social systems. We suggest designing a model by adopting certain entities: decision makers, resources, actions and propagation variables. The proposed approach is evaluated on an example of collaboration between two systems: electricity suppliers and manufacturers. Results of the computational experiments demonstrate the effectiveness of the proposed technique.

Keywords: Multiscale modeling · Simulation · Social systems · Scale bridging

1 Introduction

Recently, due to advances in technology, connections between heterogeneous systems have started to emerge. Usually, the stakeholders of heterogeneous systems are mostly concerned with their own objectives and goals. On top of that, in some cases those objectives conflict, which makes connecting heterogeneous systems a very complex endeavor. Connecting multiple systems is also one of the core ideas behind the concept of the Super Smart Society (or Society 5.0) [1], proposed by the Japanese government, which highlighted that social implementation and proper risk management are necessary to achieve the new society without losses. Building a simulation model can be a useful tool for dealing with this kind of problem. However, some processes in this setup are performed on different temporal or spatial scales: users, systems, and the government are concerned with objectives that differ greatly in scope. For this reason, a novel modeling technique that can capture the multiscale nature of the problem is necessary.

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 617–624, 2019. https://doi.org/10.1007/978-3-030-29996-5_71


There exist different modeling techniques, such as multimethod modeling [2] and hybrid modeling [3], that try to connect multiple parts of a system in one model. In the authors’ opinion, multiscale modeling is the more suitable technique in this case, mainly because of its ability to manage submodels with different temporal and spatial scales. This paper contributes to a specific part of multiscale modeling: the scale bridging phase. The authors suggest connecting the different scales of the model by using a set of entities: decision makers, resources, actions and propagation variables. The next section gives a more comprehensive explanation of the target model and of the technique.

2 Target Model and Multiscale Modeling

2.1 Target Model

In the target model we consider a collaboration between two systems: electricity suppliers and manufacturers. In general, the stakeholders of these systems are mostly concerned with their own business and do not take the decision variables of other systems into consideration. For example, in a connection between electricity suppliers and manufacturers, the former are interested in reducing peak demand and making the electricity load pattern as flat as possible (to avoid using inefficient power plants), while manufacturers are focused on making profit and reducing costs. In this case, one collaboration scenario might be a shift of the manufacturers’ working time to later hours, which would help reduce peak electricity demand. However, due to the late working hours, manufacturers would have to spend more money on workers’ salaries. Direct collaboration is therefore not in the manufacturer’s interest, and in order to achieve a connection that benefits both sides, more comprehensive scenarios are necessary. There are three parties in this example: the electricity supplier, the manufacturer and the residents.

Electricity Supplier. In reality, electricity supply is a very complex system, which includes many decision makers on different levels. In our example, we consider the electricity supplier to be interested in one objective: reducing peak demand (or flattening the electricity load pattern). This objective can be captured by the Peak-to-Average Ratio (PAR). The average (L_avg) and peak (L_max) loads of the grid are

L_avg = (1/T) Σ_{t∈T} L_t   and   L_max = max_{t∈T} L_t.   (1)

Consequently, the PAR is calculated as PAR = L_max / L_avg.

The electricity supplier interacts with the residents on one side and with the manufacturer on the other. The resources of the electricity supplier are power plants; they are utilized to generate electricity and satisfy the demand coming from manufacturers and residents. The electricity supplier makes decisions about which kind of


power plants to utilize in order to satisfy demand. This decision is represented as a cost minimization problem.

Manufacturer. Here we consider a simple supply chain consisting of the manufacturer and the users (residents). The manufacturer produces one type of product and uses a certain amount of electricity; its basic electricity usage is modeled with an electricity load pattern. Manufacturers are mainly concerned with minimizing costs and maximizing profit. Cost and profit are captured in the model in a straightforward way: C = FC + VC and Pr = pN - C, where FC is the fixed cost, VC is the variable cost, Pr is the profit, p is the price, and N is the number of products sold. In this particular case, the resources of the manufacturer are its products, the propagation variable is the price of the product, and the action performed over this resource is selling it.

Residents. In this model, residents use electric appliances based on their daily activities, as shown in Fig. 1, and demand electricity from the electricity supplier.
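The PAR of Eq. (1) and the manufacturer's cost/profit relations can both be checked with a few lines of Python. The 24-hour load pattern and the prices below are illustrative values only, not the paper's experimental data:

```python
def par(loads):
    """Peak-to-Average Ratio of a load pattern: PAR = L_max / L_avg (Eq. 1)."""
    return max(loads) / (sum(loads) / len(loads))

def profit(p, n, fc, vc):
    """Manufacturer's profit Pr = p*N - C, with total cost C = FC + VC."""
    return p * n - (fc + vc)

# Hypothetical hourly loads (kWh) with a 4-hour evening peak; a perfectly
# flat pattern would give PAR = 1, which is the supplier's ideal.
loads = [50] * 18 + [100] * 4 + [50] * 2
print(round(par(loads), 4))

# Hypothetical price, sales volume and costs for one day of production.
print(profit(p=500, n=100, fc=20_000, vc=6_000))
```

Note how the two objectives pull in opposite directions: flattening `loads` lowers PAR for the supplier, while the manufacturer only sees the effect through its cost terms.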

Fig. 1. An example of the usage of home appliances by a single working person on an hourly basis

Users are mainly concerned about their spending. They possess money and electric appliances; these resources connect users to the systems at the higher level: to the electricity supplier through money and electric appliances, and to the manufacturer through money.

2.2 Multiscale Modeling

In general terms, multiscale modeling is a way of designing a system by separating it into several scales, where each scale has its own inner dynamics. Multiscale modeling is mostly used in areas such as meteorology, mathematics, physics, materials science, and chemistry [4]. Most of the available literature concentrates less on the theoretical part of the approach and focuses mainly on the details of specific cases; only a few studies concentrate on the theoretical or methodological aspects of multiscale modeling [5–7]. The lack of a standardized theory is only the tip of the iceberg: in the case of multiscale modeling for social systems, to the best of our knowledge, there are only a limited number of studies that address the topic. Accordingly, adapting existing methods, as well as creating novel methods specific to social systems, would help in dealing with multiscale cases of social systems. The explanation of the approach follows the two steps in the creation of multiscale models mentioned by Hoekstra et al. [5]: scale separation and scale bridging.

Scale Separation. Scale separation is the step where we identify and clarify what our scales are and what they do. As mentioned, the modeling of social systems includes several heterogeneous systems that we are trying to connect. We assume here that society is located on the lowest scale and is accessible to all systems. Each heterogeneous system can be divided into several scales, and their number may differ from system to system. The main idea is to connect particular scales of one system with the appropriate scales of another. This breaks connections down into sublevels, providing more insight into the modeling of connections.
Scale Bridging. Scale bridging is the way of connecting scales. There already exist many scale bridging techniques, such as sampling, projection, up-scaling and homogenization [7–9]. However, these techniques are used in the exact sciences; due to the differences between those fields and social systems, an adaptation of scale bridging methods, or the creation of new ones, is necessary. In social systems, decision makers and decisions play a very important role. Therefore, we propose connecting scales using decision makers. The proposed connection approach is based on four entities: (1) Decision Making Entity (DME), (2) Resource, (3) Action and (4) Propagation variable. The definitions are: DME, an entity which owns and has control over a particular resource; Resource (R), an asset possessed by a particular DME, including but not limited to information, materials and people; Action over a resource (A), a process which affects or alters a particular resource; Propagation variable, a variable that propagates through scales. A DME owns resources and has a particular set of actions that are performed on a particular resource. Each resource has properties, and a propagation variable is a property of a resource that is of interest to a DME on another scale. Scale bridging is tightly related to the timescale: in general, the bottom-up propagation of a decision is captured as the impact of a propagation variable on a resource of the higher scale, and the propagation variable shifts upwards based on the time step of the lower scale.
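The four entities above can be sketched as plain data structures. This is our own illustrative encoding; the class and function names are assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """An asset possessed by a DME; its properties can serve as propagation variables."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class DecisionMakingEntity:
    """A DME owns resources and performs actions over them."""
    name: str
    resources: list = field(default_factory=list)

    def act(self, resource, prop, value):
        """Action over a resource: affect or alter one of its properties."""
        resource.properties[prop] = value

def propagate(resource, prop):
    """Read a propagation variable: a resource property of interest to a DME on another scale."""
    return resource.properties[prop]

# Bottom-up example: a resident decides on hourly appliance usage; the
# consumption property propagates upward to the electricity supplier.
appliance = Resource("electric appliance", {"consumption_kwh": 0.0})
resident = DecisionMakingEntity("resident", [appliance])
resident.act(appliance, "consumption_kwh", 1.2)   # hourly usage decision
print(propagate(appliance, "consumption_kwh"))     # value seen by the supplier
```

The design choice mirrors the text: the lower-scale DME only writes its own resource's properties, and the higher scale reads them at the lower scale's time step.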


Application of Scale Separation and Scale Bridging. Based on the dynamics of each actor in this example, there are two scales. The first scale is the residents: they have specific dynamics, such as using electric appliances on an hourly basis, that distinguish them from the next scale. On the second scale we have the manufacturer and the electricity supplier. The connection among scales is performed using the proposed approach. In total there are three DMEs in the model: the manufacturer, the electricity supplier and the residents. The bottom-up connection relies on the two types of resources that residents possess: electric appliances and money. Residents use electric appliances each hour and then send their consumption, as a property of that resource, to the electricity supplier. Residents are connected to the manufacturer through their money: if a particular resident wants to buy a certain product, the corresponding amount of money is sent to the manufacturer. The top-down propagation is the price of electricity and of the product, respectively, for each DME of the second scale. Figure 2 shows how the bottom-up connections are performed in the target model.

Fig. 2. Scale bridging of the target model: bottom-up connections

In this example, upper-level decision makers such as the government are omitted; therefore, only the prices of electricity and products are sent to the users as properties. In a model that also considers the government, additional kinds of properties, such as the volume of electricity, the energy fuels used, and the amount of sales, should be taken into consideration.

3 Computational Experiment and Discussions

In this experiment, we consider collaboration between two systems. The main objective of the collaboration is the identification of an optimal scenario.

622

N. Nikhanbayev et al.

We consider collaboration in terms of incentives offered by the electricity supplier. The manufacturer receives a cheaper electricity bill if it shifts its working hours to low-load hours. However, shifting working hours raises the manufacturer's costs, because it has to pay higher salaries. The objective of the simulation is therefore to find the conditions that minimize the negative impact of this trade-off. The main experimental setup is as follows. Electricity supplier: six types of energy fuels (coal, oil, gas, nuclear, hydro, renewable); basic electricity price of 20 yen/kWh. Manufacturer: electricity usage of 2 kWh per product; 20 workers; salary of 100 yen per hour. Users: 150 residents. Detailed settings are omitted due to space limitations.

State of the Systems Before Collaboration. The experimental results before any collaboration show a demand peak on the side of the electricity supplier (Fig. 3). The peak-to-average ratios (PARs) are as follows: PAR of the manufacturer PARm = 1.8017, PAR of the residents PARr = 1.5891, and combined PARc = 1.3149.
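The PAR figures quoted here follow the usual definition of peak-to-average ratio, i.e. the maximum of a demand pattern divided by its mean. A minimal sketch (the two example patterns are illustrative, not the paper's simulated data):

```python
from typing import Sequence

def peak_to_average_ratio(demand: Sequence[float]) -> float:
    """PAR of a demand pattern: peak load divided by average load.
    A perfectly flat pattern gives 1.0; peaks push the value above 1."""
    return max(demand) / (sum(demand) / len(demand))

# Illustrative hourly patterns.
flat = [10.0] * 24
peaky = [10.0] * 23 + [34.0]
```

A flat pattern yields a PAR of exactly 1; demand-side collaboration aims to push the combined PAR towards that limit.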

Fig. 3. Demand pattern of electricity supplier before collaboration

The manufacturer's cost of working at normal hours (8:00–18:00) is 20,000 yen (salaries) + 6,280.7 yen (electricity) = 26,280.7 yen per day.

Results of the Collaboration. It is assumed that the manufacturer has to pay 1.5 times the normal salary for work during 22:00–5:00, and that the electricity price at night is 15 yen/kWh (the day price is 20 yen/kWh). After several simulation runs, the results show that if six of the workers work at night (0:00–7:00) and the rest work as usual, the manufacturer's cost is affected the least. In this case the demand pattern changes as shown in Fig. 4. The PARs of the changed patterns are PARm = 1.384 and PARc = 1.3122. As expected, PARm becomes smaller, but PARc of the combined case does not change much, because the changes affected only the manufacturer. The manufacturer's costs are: night-shift salary, 7,200 yen; day-shift salary, 14,000 yen; spending


on night-shift electricity consumption, 1,843.964 yen; spending on day-shift electricity consumption, 4,720.7039 yen; total, 27,764.6686 yen. These values differ from the original cost by only 5.6%.
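The salary figures can be reproduced from the stated setup (20 workers, 100 yen/hour, a 1.5x night premium, and, apparently, 8-hour night and 10-hour day shifts, which is an inference from the totals); the electricity costs below are taken directly from the quoted simulation output:

```python
# Recomputing the manufacturer's daily cost comparison from the paper's numbers.
workers, wage = 20, 100            # total workers; yen per hour
night_workers, night_premium = 6, 1.5
night_hours, day_hours = 8, 10     # assumed shift lengths matching the totals

night_salary = night_workers * night_hours * wage * night_premium   # 7200 yen
day_salary = (workers - night_workers) * day_hours * wage           # 14000 yen

before = 20000 + 6280.7                                    # salaries + electricity
after = night_salary + day_salary + 1843.964 + 4720.7039   # quoted simulation costs
increase = (after - before) / before                       # roughly 0.056, i.e. 5.6%
```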

Fig. 4. Demand pattern of electricity supplier after collaboration

Additionally, if we consider collaboration between the residents and the electricity supplier via home batteries [10], we obtain even more promising results. Figure 5 clearly shows that the combined demand pattern becomes much flatter after implementing collaboration on multiple levels. The peak-to-average ratios are PARr = 1.1069 and PARm = 1.1769, which is a very good result. It should be noted, however, that this is only a simulation, and it shows what might happen under certain conditions.

Fig. 5. Demand pattern of electricity supplier in case of collaboration with residents and manufacturers

To sum up, the proposed technique demonstrated its ability to connect several systems in one executable social simulation model. Clarifying the resources, and the properties that should propagate, helps to understand how the connections among several systems work, which in general is not a trivial task.

4 Conclusions

This paper describes an implementation of multiscale modeling for the design of social systems. The work contributes to the existing knowledge of scale bridging by extending it to the case of social systems. More specifically, the authors suggest performing


scale bridging via decision makers. A multiscale model can be designed by adopting four entities: decision makers, resources, actions, and propagation variables. This way of modeling helps to clarify which kinds of resources ensure the connection among multiple systems. Moreover, an example of collaboration between an electricity supplier and a manufacturer was presented. The computational experiments demonstrated the necessity of a multiscale perspective for cases similar to the one discussed in this paper. In addition, it was found that, under certain conditions, collaboration among heterogeneous systems can be achieved. Finally, a number of limitations need to be considered. First, the presented example does not include certain features, such as a higher-level decision maker (e.g., the government) or the product market. More comprehensive cases might bring challenges that could help to improve the proposed technique. Second, the theoretical part of the proposed technique still lacks concretization, and a more standardized approach would help in dealing with complex systems. Finally, the validation of simulation models is difficult, and multiscale modeling is no exception. These limitations are subjects for future work.

References
1. Society 5.0. https://www.gov-online.go.jp/cam/s5/eng/index.html. Accessed 19 June 2019
2. Borshchev, A.: Multi-method modeling. In: Proceedings of the 2013 Winter Simulation Conference (2013)
3. Hooker, J.N.: Hybrid modeling. In: van Hentenryck, P., Milano, M. (eds.) Hybrid Optimization. Springer Optimization and Its Applications, vol. 45, pp. 11–62. Springer, New York (2011). https://doi.org/10.1007/978-1-4419-1644-0_2
4. Horstemeyer, M.F.: Multiscale modeling: a review. In: Leszczynski, J., Shukla, M. (eds.) Practical Aspects of Computational Chemistry, pp. 87–135. Springer, Dordrecht (2009). https://doi.org/10.1007/978-90-481-2687-3_4
5. Falcone, J., Chopard, B., Hoekstra, A.: MML: towards a multi-scale modeling language. Procedia Comput. Sci. 1(1), 819–826 (2010)
6. Chopard, B., Borgdorff, J., Hoekstra, A.: A framework for multi-scale modelling. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 372(2021), 20130378 (2014)
7. Weinan, E., Li, X., Ren, W., Vanden-Eijnden, E.: Heterogeneous multiscale methods: a review. Commun. Comput. Phys. 2, 367–450 (2007)
8. Fish, J.: Multiscale Methods: Bridging the Scales in Science and Engineering. Oxford University Press, Oxford (2009)
9. Ayton, G., Noid, W., Voth, G.: Multiscale modeling of biomolecular systems: in serial and in parallel. Curr. Opin. Struct. Biol. 17, 192–198 (2007)
10. Nikhanbayev, N., Kaihara, T., Fujii, N., Kokuryo, D.: A study on multiscale modeling and simulation approach for social systems. In: Proceedings of 2018 International Symposium on Flexible Automation (2018)

e-Health: A Framework Proposal for Interoperability and Health Data Sharing. A Brazilian Case

Neusa Andrade1,2, Pedro Luiz de Oliveira Costa Neto1, Jair Gustavo de Mello Torres1, Irapuan Glória Júnior1, Cláudio Guimarães Scheidt1,2, and Welleson Gazel1,2

1 Postgraduate Studies Program in Production Engineering, Paulista University, Rua Dr. Bacelar 1212, São Paulo 04026-002, Brazil, [email protected]
2 SPDM, Associação Paulista para o Desenvolvimento da Medicina, R. Dr. Diogo de Faria, 1036, São Paulo, Brazil

Abstract. Interoperability among systems is a challenge that involves several considerations and often complex infrastructure. Leading worldwide reports and frameworks indicate that interoperability can also improve health care and bring better outcomes for stakeholders. Implementing interoperability in developing countries is less affordable, even though it can also promote quality care and save lives. The best models and guidelines can offer protocols for sharing health data, allowing the construction of a system that delivers quality, transparency, and social value at the same time. This paper addresses an interoperability problem by describing the steps of a pilot used to build a conceptual framework for exchanging healthcare data through electronic health records (EHR), and presents the first step and an overview of a platform built using PDCA rules. The experiment, conducted in a small Brazilian town, is intended to become a standard for interaction between local government and citizens, and to let patients control their own medical records through a mobile application.

Keywords: Health interoperability · Health data exchange · Quality of care · Action research



1 Introduction

Interoperability in the health sector is considered a very complex task, but according to the American Hospital Association (AHA) it can provide advances and better health care outcomes, saving lives and involving key stakeholders [1, 2]. Recent studies from the European Commission (EC) show that even today many countries, such as France, Germany, and Italy, are still struggling with several challenges in exchanging their health data. Developing countries also suffer from a lack of infrastructure and financial resources, presenting systematic deficits in quality of care, costs, transparency, and management [2, 3].

© IFIP International Federation for Information Processing 2019 Published by Springer Nature Switzerland AG 2019 F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 625–630, 2019. https://doi.org/10.1007/978-3-030-29996-5_72


The Commission on Health System Quality (HQSS) and the World Health Organization (WHO) offer frameworks and guidelines for producing well-functioning, high-performance health systems that ensure the use of reliable and timely information while protecting the privacy and security of patient data [4, 5]. Brazil has 5,500 municipalities, where a major part of the population suffers from a lack of resources and depends on a single, universal public health care system (SUS). This research was enabled by a partnership between UNIP researchers and a small town in the Paraíba Valley to develop a pilot platform promoting interoperability, intended to become a standard, scalable solution that could be implemented in other cities facing the same problem [6, 7]. This paper is the first of an action research cycle; it describes the experience and the proposal to design a framework based on PDCA to implement interoperability in public Brazilian health systems. By following best practices, it is possible to design a strategy based on the premise that a cost-effective platform can be built to achieve data integration between entities, using the patient record as the key data. The work is competing for an international award as a solution to this Brazilian challenge [8–11].

2 Methodology

Action research is a qualitative method that has been used in production engineering research; it is a step-by-step method that helps conduct interventions to improve a business situation. The term "research" refers to the production of knowledge, and the term "action" to an intentional modification of a given reality. The method was chosen to drive this project because it fosters collaboration and involves stakeholders in diagnosis and active learning [8]. This work began with a diagnosis that led to an action plan. Action research is a tool whose cycles aim to affect and change social reality using observations, explanations, and understandings. Table 1 summarizes the steps used to fulfill the objectives of this research, providing a structure for replication [8].

Table 1. Applied cycles of this action research

Step                              | Main objective                          | Expected outcome
Identify the problem and theories | Research literature, cases and concepts | Set goal and target audience
Develop a plan                    | Envision success                        | Invitations to participate, techniques and analysis units
Collect data                      | Describe actual situation               | Register data
Analyze data                      | Develop a plan for improvement          | Compare theories and practices
Implement the plan                | Adjust the theory and plan              | Action plan improvements
Reports and results               | Framework guidelines                    | Provide structure for replication


The theoretical background was chosen after a detailed analysis of frameworks, interoperability ontologies, best practices, and policy documents regarding the exchange of patient health data. The main concepts were pointed out by the AHA, originating in countries such as the USA, and by the European Commission (EC), in countries such as Croatia, France, Germany, Italy, the Netherlands, and Sweden [1, 3, 5]. The final work will join the best practices and ontologies for interoperability suggested by the American Hospital Association (AHA) with the quality impacts described by The Lancet, as shown in Fig. 1, indicating the guidelines for the relationship between citizens and governments with regard to health data [1, 3, 5].

Fig. 1. Foundations for high-quality health systems commission [3].

Figure 2 shows the suggested rules for decision making, covering quality of access, transparency, and the use, sharing, and support of information for health professionals and patients. These rules encourage the government-citizen relationship and will also be used to measure the impact of the pilot project [1, 3, 5].

Fig. 2. Suggested framework for high-quality health systems commission [3].


Thus, we introduce a summary of the pilot under development, which is intended to become a standard for filling this gap. The Brazilian city for the pilot project was chosen because of its number of inhabitants and the relationship and proximity with its health managers, which facilitated the diagnosis of the local health infrastructure as the first step in collecting data to test the platform with patients for health data exchange [1, 9].

3 Results

The public health system (SUS) offered in Brazil was designed to cover all kinds of health care; its information policy supplies many free stand-alone software tools to manage data and provide information for the government. New societal goals require health systems to produce better health outcomes, including greater social value, suggesting that citizens should be able to collaborate in the government-citizen relationship, improving quality of access, transparency, and security of information, as well as support for decision making, all of which is hindered by these disconnected technologies [5, 7, 10].

3.1 A Brazilian Case

According to data from the Brazilian Statistics Institute (IBGE), Bananal is a small town in the state of São Paulo, in the metropolitan region of the Vale do Paraíba, with 10,775 inhabitants. The city has a health infrastructure with a single joint health unit, four offices for the family health strategy, and one basic care unit, which do not exchange information among themselves, causing many duplicate records and unnecessary costs, without any participation by the citizens [6, 7]. Figure 3 shows the first process and its three steps, which enable the platform to be implemented in the city through a standardized method. After this, the platform will use a single key of patient data from an electronic medical record to track and exchange data among entities; this also establishes a channel between the local municipality and the population and gives patients control of their medical history in the palm of their hand through their own mobile device [9, 10]. The platform also offers an interface that establishes a communication process between the local government and citizens through an ordinary mobile application (app). The final procedure consolidates and sends all historical medical data to patients who used the health services provided by the city, ensuring transparency. The model applied in the city will be extended to other cities, reaching a population of more than 2 million inhabitants across 39 municipalities [9–11]. However, health data are also highly privacy-sensitive, and even as more users complain about not having control over their personal health data, governments are compelled to face regulations that generate several challenges. In the platform, we take care to safeguard all steps of security and authentication, in a model similar to that of a blockchain platform, allowing tracking and reliability of the data [10]. A summary of the workflow is shown in Fig. 4, which illustrates the process from the first identification of each patient at any health facility in the city. Data must be validated before the first use of the mobile application [10].
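The single-key idea can be illustrated with a hypothetical sketch: a deterministic pseudonymous key, derived from stable patient identifiers, is used to merge encounters recorded at facilities that do not otherwise share data. The field names and hashing scheme are illustrative assumptions, not the platform's actual design:

```python
import hashlib
from dataclasses import dataclass
from typing import List

@dataclass
class EncounterRecord:
    facility: str
    patient_key: str
    summary: str

def derive_patient_key(national_id: str, birth_date: str) -> str:
    """Deterministic pseudonymous key; a real deployment would add a secret
    salt and proper key management to protect privacy-sensitive data."""
    return hashlib.sha256(f"{national_id}|{birth_date}".encode()).hexdigest()[:16]

def patient_history(records: List[EncounterRecord], key: str) -> List[EncounterRecord]:
    """Merge one patient's encounters across all facilities via the single key."""
    return [r for r in records if r.patient_key == key]
```

Because the key is deterministic, the same patient identified at different facilities yields the same key, so duplicate records can be consolidated without a central identifier registry.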


Fig. 3. Diagnose and a process of identification and patient authentication data.

Fig. 4. Summarized workflow process.

4 Conclusions

New societal demands are affecting the health sector, resulting in regulation. Electronic health records (EHR) and patient data are usually retained by health institutions; this very critical information must be safeguarded by systems that were not designed to give patients access to their own data. Best practices in healthcare and patient expectations include trusting that shared data is accurate. Health systems need to be designed to produce better outcomes, including greater social value [1, 2, 5, 10]. The American Hospital Association (AHA) suggests an interoperability ontology through electronic health records to build an efficient solution with a cost-effective


platform for improving health care and sharing best practices with stakeholders. The Lancet Global Health Commission provides a framework for achieving transparency by implementing high-quality health systems, which could save 8 million lives. These frameworks were chosen for this research because they offer models and guidelines for creating reliable health systems, including delivering data into the palm of patients' hands on their mobile phones [1, 2, 4, 5]. This work presented an interoperability proposal that reveals possibilities for sharing data between entities through a single key acquired from the electronic medical record. The pilot is intended to be a model for implementation on the suggested platform, allowing greater safety, tracking, and reliability of data in the systems. The project is competing for an international award among 1,294 practices, and it can generate significant savings and better health outcomes for the Brazilian population [9–11].

Acknowledgments. The authors thank the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for its support in the development of this research, Paulista University (UNIP), and the Brazilian Ministry of Health.

References
1. AHA - American Hospital Association: Sharing Data, Saving Lives: The Hospital Agenda for Interoperability, January 2019
2. Spyrou, S., Berler, A.A., Bamidis, P.D.: Information system interoperability in a regional health care system infrastructure: a pilot study using health care information standards. Stud. Health Technol. Inform. 95, 364–369 (2003)
3. Greer, S.L., Hervey, T.K., Mackenbach, J.P., McKee, M.: Health law and policy in the European Union (2013). http://dx.doi.org/10.1016/S0140-6736(12)62083-2
4. Kruk, M.E., et al.: High-quality health systems in the sustainable development goals era: time for a revolution. Lancet Glob. Health 6, 1196–1252 (2018). http://dx.doi.org/10.1016/S2214-109X(18)30386-3
5. WHO Library Cataloguing-in-Publication Data: Everybody's business: strengthening health systems to improve health outcomes: WHO's framework for action. Geneva, Switzerland (2007). ISBN 978 92 4 159607 7
6. IBGE: Instituto Brasileiro de Geografia e Estatística. Pesquisa Nacional de Saúde 2013. Rio de Janeiro: IBGE (2013). https://ww2.ibge.gov.br/english/geociencias/geografia/geografia_urbana/arranjos_populacionais/default.shtm?c=9
7. BRASIL: Ministério da Saúde. http://portalms.saude.gov.br. Accessed 25 Feb 2019
8. Mello, C.H.P., Turrioni, J.B., Xavier, A.F., Campos, D.F.: Pesquisa-ação na engenharia de produção: proposta de estruturação para sua condução. Production 22(1), 1–13 (2012). https://dx.doi.org/10.1590/S0103-65132011005000056
9. González, C., Blobel, B.G., López, D.M.: Ontology-based framework for electronic health records interoperability. Stud. Health Technol. Inform. 169, 694–698 (2011)
10. Liang, X., Zhao, J., Shetty, S., Liu, J., Li, D.: Integrating blockchain for data sharing and collaboration in mobile healthcare applications. In: IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, pp. 1–5 (2017)
11. BRASIL: Prêmio APS Forte para o SUS: Acesso Universal. https://apsredes.org/premio-apsforte/. Accessed 15 May 2019

Managing Risk and Opportunities in Complex Projects

Asbjørn Rolstadås1, Agnar Johansen1, Yvonne C. Bjerke2, and Tobias O. Malvik3

1 Norwegian University of Science and Technology, Trondheim, Norway, [email protected]
2 Norwegian Directorate of Public Construction and Property Management, Oslo, Norway
3 The University of South-Eastern Norway, Kongsberg, Norway

Abstract. Projects are the preferred model for one-of-a-kind production. Projects may be difficult to manage due to complexity and the many stakeholders involved. Stakeholders are a major source of uncertainty. Uncertainty may be both positive, creating opportunities, and negative, giving rise to risks. Risks and opportunities are either operational, strategic, or contextual. The traditional approach to managing risk comprises the identification and analysis of risks as well as response planning and control. There is a need for a shift in mindset for managing risks: rather than regarding risks as "evil", they should be managed, because uncertainties also create opportunities. The Bermuda Risk Triangle is the intersection between operational, strategic, and contextual risks. Project risk navigation is about how project leaders can navigate this triangle to reach their objectives. Opportunities are often more or less neglected in projects; at most, just a few are identified. A framework for managing opportunities is suggested. It builds on the project control variables of time, cost, and scope of work, and it contains a classification of eight opportunity types. Using this classification in dedicated workshops has been shown to produce far more opportunities than usual. The framework is verified in a case study: the construction of the new National Museum in Oslo, Norway. Through the framework, a total of 246 opportunities have been identified, representing an estimated cost reduction of about 64.2 million USD.

Keywords: Project management · Risk management · Opportunity management

1 Complex Projects

The classical manufacturing typology distinguishes between flow, batch, and one-of-a-kind production. One-of-a-kind production is widely applied in the shipbuilding, oil & gas, and construction industries. It is also the normal approach for developing process facilities and infrastructure (for example, power plants). One-of-a-kind production is also referred to as project work. Project management is a discipline in itself with its

© IFIP International Federation for Information Processing 2019
Published by Springer Nature Switzerland AG 2019
F. Ameri et al. (Eds.): APMS 2019, IFIP AICT 567, pp. 631–639, 2019. https://doi.org/10.1007/978-3-030-29996-5_73


own research community. The Project Management Institute defines a project as a temporary endeavor undertaken to create a unique product, service, or result [1]. Projects tend to be more difficult to manage than flow or batch production. In fact, many major projects fail to meet their objectives with respect to schedule and cost. One of the most cited examples is the Sydney Opera House, which overran its initial budget by a factor of 14 and ran 15 years behind schedule [2]. Another example is the early oil & gas development in the North Sea: the Statfjord A platform suffered a two-year delay and had a cost overrun of 222% [3]. There are several reasons why such projects are more difficult to manage:

• They may involve substantial innovation, as there might not be any proven technology available.
• As they are one-of-a-kind, there is limited experience data available for planning and execution.
• They involve cooperation among several organizations that may not have previous experience working together.
• They have high complexity, involving many actors and challenging processes, organizational aspects, and logistics.

Many authors have discussed complexity. One of the first is Baccarini [4], who defines two components of complexity: organizational and technological. Williams [5] combines these into structural complexity and adds uncertainty as a second component. Brady and Davies [6] take a systems view and distinguish between structural and dynamic complexity, where structural complexity has to do with the arrangement of components and subsystems into an overall systems architecture. This architecture includes the system produced, the producing system, and the wider system. Projects include a number of stakeholders. A stakeholder is a person or organization that either can influence the project execution or is affected by the project. The large number of stakeholders, with different views on what the best solution is and what to deliver, makes a project notoriously difficult to plan and manage. Stakeholders may also have different motives for participating: some may put their own interests above the outcome of the project. Thus, stakeholders create uncertainty, making it difficult to plan and predict the execution of the project.

2 Project Uncertainty, Risk and Opportunity

Uncertainty in projects is often regarded as a lack of information. This means that projects carry uncertainty, and that the uncertainty will vary over the project life cycle. Some major sources of uncertainty are [7]:

• Stakeholders may have conflicting expectations.
• Project objectives may change.
• Insufficient definition of the project result or development method.
• Technological constraints.
• Market conditions.
• Variations in quantities or quality.


The sources of uncertainty can be classified into controllable and non-controllable factors [8]. The controllable factors are foreseeable events such as quality issues and design deviations, changes of methods and technology, etc. The non-controllable factors are beyond the control of the project organization; they fall into two categories: nature (beyond human control) and stakeholders. There is both negative and positive uncertainty. Negative uncertainty and its potential impact constitute a risk in the project. Likewise, positive uncertainty and its impact represent an opportunity for the project. Whereas risk management has been a major focus of project managers and owners for many years, few projects have managed to exploit the opportunities that arise from positive uncertainty. Krane et al. [9] have shown that projects may take a balanced approach to managing both risks and opportunities at project start-up. However, while projects sustain their effort in managing risks during project execution, there is almost no focus on opportunities. Rolstadås and Johansen [10] classify risks and opportunities into three categories:

• operational
• strategic
• contextual

Operational risk and opportunity are due to internal circumstances in the project that can typically be controlled by the project management team. Strategic risk and opportunity concern the potential impact on the project benefits. Contextual risk and opportunity are connected to circumstances outside the project that may influence the scope of work and the performance of the organization.

3 Risk Navigation

The Project Management Institute defines risk management as the processes concerned with conducting risk management planning, identification, analysis, response planning, and controlling risk [1]. This classical approach to risk management is illustrated in Fig. 1. The first step is to identify risk factors. Risk analysis includes estimating the consequences of each risk factor. The final two steps are to develop an action plan and to monitor and control it.

Fig. 1. Classical risk management process (identification of risk factors → risk analysis → risk action plan → risk control).
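The four steps map naturally onto a risk register in which each entry is scored during analysis and tracked during control. A minimal sketch follows; the probability-times-impact scoring and the example entries are common conventions chosen for illustration, not something the paper prescribes:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Risk:
    description: str     # step 1: identification
    probability: float   # step 2: analysis (0..1)
    impact: float        # step 2: analysis (e.g. cost consequence in USD)
    response: str = ""   # step 3: action plan
    status: str = "open" # step 4: monitoring and control

    @property
    def exposure(self) -> float:
        """Expected consequence, used to rank risks after analysis."""
        return self.probability * self.impact

register: List[Risk] = [
    Risk("Ground conditions worse than surveyed", 0.3, 2_000_000),
    Risk("Key supplier insolvency", 0.1, 5_000_000),
]
register.sort(key=lambda r: r.exposure, reverse=True)  # highest exposure first
```

Ranking by exposure makes the action-planning step concrete: responses are developed first for the entries at the top of the register.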

This approach is based on a perception that risk is something “evil” that should be avoided or at least mitigated. This perception of risk has led to major efforts to forecast


any risk and to develop plans for mitigation or elimination. Despite this large effort, cost overruns and delays still happen. Rolstadås et al. [11] argue that project managers need to make a shift in mindset towards a stronger focus on managing risks rather than eliminating them:

• from projects as deliverables to projects as a means to enhance business value;
• from uncertainties as "evil" to acknowledging the project's nature as unique and uncertain;
• from projects as known tasks to be accomplished in known environments to embracing a continuum of tasks to be executed in a turbulent business environment;
• from deviations from project baselines as inaccurate planning or inappropriate control to acknowledging that deviations are the rule.

They argue for an extended risk concept based on the three risk and opportunity categories defined above, and launch a new approach to project risk navigation. The concept is illustrated in Fig. 2 and shows the three risk categories at the intersection between project management, the project owner, and the environment. The intersection of these three risk areas is called "The Bermuda Risk Triangle". Just as travel in the area of the Atlantic Ocean known as the Bermuda Triangle requires one to accept unknown risks, so too does the multi-year duration of a major capital project.

Fig. 2. A new concept for risk navigation: the Bermuda Risk Triangle at the intersection of strategic risk (corporate management), operational risk (project management), and contextual risk (environment).

Project risk navigation is about how to navigate the Bermuda Project Risk Triangle and reach the project's objectives. The navigator is a framework containing three major components: the governance system, the decision process, and strategic planning. Although this new approach covers risk, its principles are also valid for exploiting opportunities; this follows from the arguments for a shift in mindset.


4 Hunting for Opportunities

According to Davies et al. [12], innovation is often avoided in large projects due to uncertainty and the associated cost escalations. Rather than seeking novel ideas and innovative approaches, the owner and the project management try to minimize risks. They rely on proven technology and well-established procedures. In the end, this means choosing the lowest bid, transferring risk to contractors, and sticking to the original plan. In opportunity management, however, projects cannot be reluctant to innovate and change, since change and innovation are the drivers for harvesting and exploiting opportunities. There are several reasons why projects fail to hunt for opportunities:

• It feels safer to stick to the agreed plan rather than test new options, even if there is a potential reward.
• Existing tools neglect opportunities and focus on mitigating risks.
• Exploiting an opportunity can be time-consuming and costly; the opportunity thus turns into a risk (to be avoided).
• If the project is on track, there is little motivation for the project management to seek new innovations.

[Fig. 3 shows the nine-step process as a flow chart: (1) establish/update context, (2) identify key stakeholders, (3) identify new opportunities, (4) evaluate opportunities, (5) identify new risks, (6) evaluate risks, (7) decide action, (8) implement action, (9) follow up, review and reporting, repeated in a monthly review cycle.]

Fig. 3. Risk and opportunity management process.
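The monthly cycle in Fig. 3 can be sketched as a simple loop over the nine steps. The step names are taken from the figure; the `run_cycle` helper and its callback interface are illustrative assumptions, not part of the published method.

```python
# Illustrative sketch of the nine-step risk and opportunity management
# process (Fig. 3). Step names come from the figure; the loop structure
# and the callback interface are assumptions for illustration only.

STEPS = [
    "Establish/update context",
    "Identify key stakeholders",
    "Identify new opportunities",
    "Evaluate opportunities",
    "Identify new risks",
    "Evaluate risks",
    "Decide action",
    "Implement action",
    "Follow up, review and reporting",
]

def run_cycle(handle_step, months):
    """Repeat all nine steps once per monthly review."""
    for month in range(1, months + 1):
        for number, name in enumerate(STEPS, start=1):
            handle_step(month, number, name)
```

In the figure, steps 3–4 (opportunities) and 5–6 (risks) run as parallel branches that both feed step 7; the linear loop above is a simplification.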

The Project Management Institute defines opportunity as a condition or situation favorable to the project [1]. It will have a positive impact on the project objectives and represents a possibility for positive changes. Johansen et al. [13] have developed a nine-step process for identifying, analyzing and following up on project uncertainty, as shown in Fig. 3. In steps 1 and 2, goals and key deliverables are confirmed and stakeholders are identified. Steps 3 through 7 involve workshops for identifying, analyzing and developing measures for exploiting or controlling risks and opportunities. In steps 8 and 9, risks and opportunities are followed up over the project life cycle.

Hunting for opportunities requires ways of identifying them. Normally this is achieved using brainstorming techniques in workshops involving several key stakeholders. The authors have found that there have to be dedicated workshops for identifying opportunities, separate from those identifying risks. It has also proven helpful to develop a framework for classifying different types of opportunities. The suggested framework is based on the three project control variables (cost, time and scope of work) as well as the project's second-order effect for the project owner in the operation phase. Scope of work is in this context regarded as quality: it is well known that, in order to meet budget requirements, the quality level may have to be adjusted (for example by simplifying the design or by using alternative, less expensive materials). The three control variables are often illustrated by the three sides of a triangle referred to as the iron triangle [14]. The idea is that the need for a change in one of the variables may be compensated by a change in one or both of the others (the schedule may, for example, be compressed at a higher cost).

The classification framework, shown in Table 1, contains eight categories. The first category describes opportunities that are beneficial for cost, time and quality alike. Categories 2 through 4 represent opportunities that benefit two of the three.
Categories 5 through 7 contain opportunities that give benefits in only one of cost, time or quality. Opportunities that will have second-order consequences for the project owner in the operation phase are placed in category 8. This category differs from the others in that it is counted regardless of whether it is combined with other opportunity types.

Table 1. Framework for classification of opportunities.

Number  Opportunity category  Control variables
1       Multiple first order  Cost, Time, Quality
2       Double first order    Cost, Time
3       Double first order    Cost, Quality
4       Double first order    Time, Quality
5       Single first order    Cost
6       Single first order    Time
7       Single first order    Quality
8       Second order          Value for user/client
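The categories in Table 1 can be expressed as a small classification function. This is a minimal sketch of the table's logic; the function name, the string labels, and the `second_order` flag are our own illustrative choices.

```python
def classify_opportunity(benefits, second_order=False):
    """Map an opportunity to its category numbers from Table 1.

    `benefits` is the set of first-order control variables the
    opportunity improves, drawn from {"cost", "time", "quality"}.
    Category 8 (second-order value for the user/client) is counted
    regardless of any first-order benefits, so it is appended as an
    extra entry when the `second_order` flag is set.
    """
    order = [
        {"cost", "time", "quality"},  # 1: multiple first order
        {"cost", "time"},             # 2: double first order
        {"cost", "quality"},          # 3: double first order
        {"time", "quality"},          # 4: double first order
        {"cost"},                     # 5: single first order
        {"time"},                     # 6: single first order
        {"quality"},                  # 7: single first order
    ]
    categories = []
    for number, combination in enumerate(order, start=1):
        if set(benefits) == combination:
            categories.append(number)
    if second_order:
        categories.append(8)  # value for user/client
    return categories
```

For example, an opportunity that reduces cost and shortens the schedule falls in category 2, and one that also creates long-term value for the user is additionally counted in category 8.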

Eight different opportunity properties are defined: (a) reduced cost, (b) avoid cost overruns, (c) faster deliverance, (d) avoid delays, (e) higher quality, (f) avoid unnecessary high quality, (g) increased value for the client, (h) increased value for the user.


5 Verification Case

The suggested framework for hunting opportunities has been verified using a case from the construction industry: the new National Museum in Norway's capital, Oslo. How the project harvested opportunities was followed over a three-and-a-half-year period (2016–2019). The new museum is built in the city center, with demanding construction and site logistics and many stakeholders to be managed. Because of the art that will be exhibited and stored, the building is equipped with advanced, state-of-the-art alarm and monitoring systems and a lot of other sophisticated technical systems. The architectural and technical design is state of the art and includes several innovative solutions.

During spring 2016 the project faced problems: it was behind schedule and forecast a cost overrun. In spite of a good uncertainty management system, there were only three opportunities left to harvest in 2015, so the management system had to be adjusted. Two problems had to be solved: (1) how to chase opportunities during the execution phase, and (2) how to achieve commitment at all levels of the organization.

To get the project back on track, the project management created 11 different sources/arenas with activities that facilitated opportunity hunting, such as formal workshops and informal café dialogues. Six separate opportunity studies were organized on the largest contracts (believed to have the largest potential). By April 2019, 246 opportunities had been identified, and the project had an estimated cost reduction of about 64.2 million USD. 175 of the 246 identified opportunities were harvested; a harvested opportunity is one the project made an effort to realize. Table 2 shows the number of opportunities harvested for the first-order opportunities: 56% were assumed to have an effect on cost, 62% on time and 32% on quality.

Table 2. Number of first order opportunities harvested.

Variable  Property                        Count  Total
Cost      Reduced cost                    105    138
          Avoid cost overruns             33
Time      Faster deliverance              90     152
          Avoid delays                    62
Quality   Higher quality                  27     78
          Avoid unnecessary high quality  51
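The percentages quoted above follow from Table 2 if each total is taken over the 246 identified opportunities. A small check (the variable names are our own):

```python
# Recompute the shares quoted in the text from Table 2, assuming each
# percentage is taken over the 246 identified opportunities.
IDENTIFIED = 246

harvested = {
    "cost": 105 + 33,    # reduced cost + avoided cost overruns = 138
    "time": 90 + 62,     # faster deliverance + avoided delays = 152
    "quality": 27 + 51,  # higher quality + avoided unnecessary quality = 78
}

shares = {k: round(100 * v / IDENTIFIED) for k, v in harvested.items()}
print(shares)  # {'cost': 56, 'time': 62, 'quality': 32}
```

These match the 56%, 62% and 32% figures in the text, confirming that the shares are computed over all 246 identified opportunities rather than the 175 harvested ones.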

Based on the framework, there are two important conclusions for the museum case: (1) opportunities related to cost and time are dominant, and (2) there appears to be less focus on opportunities that benefit the National Museum in the long run. The project management ranked the performance objectives as (1) cost, (2) quality, (3) time. The frequencies of cost, time and quality in the framework show that the harvested opportunities are not aligned with these performance objectives. However, at the time the new opportunity management approach was implemented the focus was on reducing time and cost, and one could argue that the strategy was successful. The project had to get back on track with respect to cost and time, as the quality standard had already been set extremely high. This might explain why most of the opportunities were cost- and time-related in the construction phase and why these opportunities are so dominant in this case. Another reason is that cost and time are more straightforward to measure than quality, which could make such opportunities easier to define during identification.

6 Conclusion

Managing uncertainty is of crucial importance for a complex project to be successful. However, it is mainly the negative uncertainty (creating risks) that is considered in most projects; the positive uncertainty (enabling opportunities) is often neglected. To remedy this, a framework for managing opportunities is suggested. It uses an opportunity classification system in a number of formal and informal workshops. In this way, almost the same number of opportunities as risks can be identified. Future project managers must pay more attention to harvesting opportunities, as opportunities are crucial for enhancing project benefits or for bringing projects running over time and budget back on track. However, this requires a new mindset, in both the project organization and the project owner.

References

1. Project Management Institute: A Guide to the Project Management Body of Knowledge, 5th edn. Project Management Institute, Newtown Square (2013)
2. Kharbanda, O.P., Pinto, J.K.: What Made Gertie Gallop? Lessons from Project Failures. Van Nostrand Reinhold, New York (1996)
3. Rolstadås, A.: Cost study Norwegian continental shelf. Process. Econ. Int. 4(3), 15–23 (1983)
4. Baccarini, D.: The concept of project complexity - a review. Int. J. Project Manage. 14(4), 201–204 (1996)
5. Williams, T.: Modelling Complex Projects. Wiley, Hoboken (2002)
6. Brady, T., Davies, A.: Managing structural and dynamic complexity: a tale of two projects. Proj. Manag. J. 45(4), 21–38 (2014)
7. Johansen, A.: Project Uncertainty Management a New Approach: The "Lost Opportunities". NTNU, Trondheim (2015)
8. Johansen, A., Olsson, N., Jergeas, G., Rolstadås, A.: Project Risk and Opportunity Management - An Owner's Perspective. Routledge, London (2019)
9. Krane, H.P., Johansen, A., Alstad, R.: Exploiting opportunities in the uncertainty management. Procedia Soc. Behav. Sci. 119, 615–624 (2014)
10. Rolstadås, A., Johansen, A.: From Protective to Offensive Project Management. PMI Global Congress EMEA, Malta, 19–21 May 2008
11. Rolstadås, A., Hetland, P.W., Jergeas, G., Westney, R.: Risk Navigation Strategies for Major Capital Projects - Beyond the Myth of Predictability. Springer Series in Reliability Engineering. Springer, London (2011). https://doi.org/10.1007/978-0-85729-594-1


12. Davies, A., MacAulay, S., DeBarro, T., Thurston, M.: Making innovation happen in a megaproject: London's Crossrail suburban railway system. Proj. Manag. J. 45(6), 25–37 (2014)
13. Johansen, A., Sandvind, B.T.O., Økland, A.: Uncertainty analysis: 5 challenges with today's practice. Procedia Soc. Behav. Sci. 119, 591–600 (2014)
14. Rolstadås, A.: Applied Project Management: How to Organize, Plan and Control Projects. Tapir Academic Press, Trondheim (2008)

Author Index

Abraham, Emerson Rodolfo I-87, I-135, I-180 Abusohyon, Islam II-423 Acerbi, Federica II-520 Akbar, Muhammad I-206 Alfina, Kartika Nur II-59 Alfnes, Erlend I-579, I-596, I-604, II-35, II-265 Alvela Nieto, M. T. II-567 Alves, Francisco Canindé Dias I-108 Ameri, Farhad I-291, I-722 Andersen, Ann-Louise I-349, I-366 Andersen, Rasmus I-392 Andrade, Neusa II-625 Arai, Eiji II-189 Arena, Damiano I-299, I-307, I-323 Arena, S. I-315 Arica, Emrah I-624, I-690, II-493 Aromaa, Susanna I-615 Ashrafian, Alireza I-29, II-35 Asiabanpour, Bahram II-151 Aso, Hideki II-372 Assiani, Silvia II-520 Bach, Terje I-466 Baek, Woonsang II-413 Bahtijarevic, Jasmin II-285 Bandelow, Nils II-539 Bauernhansl, Thomas I-502, II-431 Bech, Sofie II-405 Behroozikhah, Mahdi II-151 Belil, Sabah II-75 Benaben, Frederick II-531 Bertnum, Aili Biriita II-240, II-248 Bhalla, Swapnil II-265 Bjerke, Yvonne C. II-631 Blank, Andreas I-248 Bode, D. II-567 Bojer, Casper Solheim I-155, II-575 Bojko, Michael II-466 Bokhorst, J. A. C. II-502 Bonilla, Silvia Helena I-187 Boucher, Xavier I-214, I-375 Brettel, Malte II-277

Brunoe, Thomas Ditlev I-366, I-375, I-392, I-400, II-405 Buer, Sven-Vegard I-579 Buess, Paul I-77 Burow, Kay I-408 Busam, Thomas I-46 Cabadaj, Jan I-633 Cardoso Jr., Ataíde Pereira I-142 Cardoso Junior, Ataide Pereira I-135, I-180 Carvalho Curi, Thayla M. R. I-102 Carvalho, Genyvana Criscya G. I-108 Cavalieri, Sergio I-485, I-493 Cerqueus, Audrey I-214 Chandima Ratnayake, R. M. I-537 Chiacchio, Ferdinando I-451 Chiarenza, Marcello I-451 Cho, Hyunbo I-291 Choi, Junhyuk I-291 Christensen, Bjørn I-366 Christensen, Flemming Max Møller I-155, I-164, II-575 Colossetti, Adriane Paulieli I-87 Compagno, Lucio I-451 Contador, Jose Celso II-323 Contador, Jose Luiz II-323 Cordeiro, Meykson Rodrigues Alves I-95 Costa Neto, Pedro Luiz de Oliveira I-173 D’Urso, Diego I-451 da Cruz Correia, Paula Ferreira I-129, I-142 da S. A. Castelo Branco, Daiane I-123 da Silva, Agnaldo Vieira I-95 da Silva, Márcia Terra I-129, II-323 de A. M. Brandão, Lilane I-116 de A. Nääs, Irenilza I-116, I-123 de Alencar Nääs, Irenilza I-95, I-102, I-108 de Almeida Morais, Ivonalda Brito I-108 de Lima, Yuri Claudio C. I-123 de M. Araujo, Luis A. Mendes I-123 de Man, Johannes Cornelis I-708 de Mello Torres, Jair Gustavo II-625 de Mesquita Spinola, Mauro II-323 de Morais, Silvia Piva R. I-123


de Oliveira Costa Neto, Pedro Luiz II-625 de Paula Pessoa, Marcelo Schneck II-323 de Souza, Aguinaldo Eduardo I-87, I-129, I-135, I-142, I-180 Decker, A. II-567 Del Valle, C. I-299 Delorme, Xavier I-214, I-239, I-375 Demartini, Melissa II-423 Deng, Quan I-458 Dengler, Sebastian I-248 Desai, Shantanoo I-458, II-365 Dethlefs, Arne I-642 Djurdjanovic, Dragan II-591 Dolgui, Alexandre I-231 Dombrowski, Uwe I-54, II-303, II-539 dos Reis, João Gilberto Mendes I-87, I-129, I-135, I-142, I-173, I-187, II-118 dos Santos, Nilza Aparecida I-650 dos Santos, Renato Márcio I-135, I-180 Dovere, Emanuele I-485 Dreyer, Heidi C. I-29 Dukovska-Popovska, Iskra I-155, I-164, II-575 Eguia, I. I-299 ElMaraghy, Hoda A. I-400 Emblemsvåg, Jan I-570, II-127 Esmatloo, Paria II-159 Estender, Antônio Carlos I-95 Evans, Steve II-143 Ezzat, Omar I-375 Fallahtafti, Alireza II-100 Fast-Berglund, Åsa I-682 Feuerriegel, Stefan I-333 Fiore, Alexis II-439 Fischer, Lars I-642 Fontane, Frederic I-357 Fragapane, Giuseppe Ismael II-240, II-248 Franco, Eldelita A. P. I-116 Frank, Jana I-518, I-674, II-547 Franke, Jacob II-35 Franke, Jörg I-248 Franke, Marco I-408 Friedli, Thomas I-77 Fujii, Nobutada I-148, II-180, II-617 Gaiardelli, Paolo I-3, I-37, I-493, II-493 Gallego-García, Sergio II-555

Galluccio, Federico II-423 Gao, Xuehong II-91 Garcia, Solimar I-102, I-173 García-García, Manuel II-555 Gayialis, Sotiris P. I-474 Gazel, Welleson II-625 Ghalehkhondabi, Iman II-17, II-100, II-256 Ghoddusi, Hamed II-151 Gianessi, Paolo I-239 Giskeødegård, Marte F. I-554 Glöckner, Robert I-642 Gonçalves, Kelly L. F. I-116 Gonçalves, Rodrigo Franco II-323, II-389 Gonnermann, Clemens I-214, I-231 Görzig, David I-502, II-431 Gou, Juanqiong II-531 Gran, Erik I-596 Greger, Marius I-197 Gützlaff, Andreas II-43, II-277 Halfdanarson, Jon II-127 Halse, Lise Lillebrygfjeld II-135 Hansen, Irina-Emily II-603 Hashemi-Petroodi, S. Ehsan I-231 Hashimoto, Hidehiko II-372 Hauge, Jannicke Baalsrud I-666, II-285 Hedman, Ida II-285 Hedvall, Lisa II-196 Heger, Jens II-171 Heikkilä, Päivi I-615 Henk, Sebastian II-43 Henriksen, Bjørnar II-447 Henriksen, Knut F. II-35 Henry, Ludovic I-357 Heuss, Lisa I-248 Hildebrand, Marlène I-307 Holmen, Elsebeth I-29 Holst, Lennard II-294 Holtkemper, David I-443 Holtskog, Halvor I-29 Horler, Samuel I-414 Hünnekes, Philipp I-341 Hvolby, Hans-Henrik I-164, I-604, II-27, II-68, II-248, II-265 Hwang, Dahye II-381 Iakymenko, Natalia I-588 Ichikari, Ryosuke II-372 Iitsuka, Shunsuke I-148 Ingvaldsen, Jonas A. I-29


Invernizzi, Daniela II-493 Irohara, Takashi I-206 Ito, Yoshinori II-372 Iwasaki, Komei II-189 Izutsu, Rihito II-180 Jæger, Bjørn I-466, II-135 Jeyes, Adarsh II-439 Ji, Bongjun I-291 Johansen, Agnar II-631 Jun, Hong-Bae I-701 Jünge, Gabriele Hofinger I-562 Junior, Ataide Pereira Cardoso I-129 Júnior, Irapuan Glória II-625 Junior, João José Giardulli I-129 Jussen, Philipp I-518, II-294 Kaasinen, Eija I-615 Kaihara, Toshiya I-148, II-180, II-617 Kajati, Erik I-633 Kalkanci, Kaan I-21 Kampker, Achim I-518 Kamsvåg, Pål II-347 Kamsvåg, Pål Furu I-690 Kärcher, Susann I-502, II-431 Karl, Alexander II-303, II-539 Kechagias, Evripidis I-474 Keepers, Makenzie II-1 Keprate, Arvind I-537 Kim, Duck Young II-413 Kiritsis, Dimitris I-267, I-299, I-307, I-323 Kjersem, Kristina I-554 Kløve, Birgit I-690, II-347 Koesling, Matthias II-127 Kokuryo, Daisuke I-148, II-180, II-617 Konstantakopoulos, Grigorios D. I-474 Korder, Svenja I-257 Koura, Ibrahim II-531 Kraus, Mathias I-333 Krowas, Knut II-331 Kulvatunyou, Boonserm (Serm) II-457 Kuntz, Jan I-674, II-547 Kuntze, Kristian N. II-35 Kurata, Takeshi II-372 Kurz, Mary E. II-439 Kvadsheim, Nina Pereira I-570, II-127 Lalic, Bojan I-510, II-355 Lambey-Checchin, Christine II-481 Landmark, Andreas II-447 Landmark, Andreas D. I-690


Landryova, Lenka I-633 Larsen, Maria Stoettrup Schioenning I-375, I-658 Lassen, Astrid Heidemann I-658 Lee, Minchul II-457 Leiber, Daria I-223 Lepratti, Raffaello II-423 Li, Yan II-143 Liinasuo, Marja I-615 Lima, Nilsa D. S. I-102 Lödding, Hermann I-642, II-511 Lodgaard, Eirin I-29 Lorenz, Rafael I-46, I-69, I-77, I-333 Louw, Louis I-197 Luco, Javier I-357 Luz, José A. A. I-116 Macchi, Marco I-274, I-283, I-315, I-349 Macuvele, Julian I-77 Maehata, Takashi II-372 Maier, Janine Tatjana II-171 Maihami, Reza II-17, II-256 Majstorovic, Vidosav II-355 Malvik, Tobias O. II-631 Mangini, Clayton Gerber I-95 Mannhardt, Felix I-708 Mantravadi, Soujanya I-164 Marek, Svenja II-51 Marjanovic, Ugljesa I-510, II-355 Masi, Lucky C. I-546 Masmoudi, Oussama I-239 Mathirajan, M. II-212 Mattis, Paolo II-423 Maxwell, Duncan I-529 Medic, Nenad II-355 Medini, Khaled I-366, I-375, II-481 Medojevic, Milovan II-355 Mendes dos Reis, João Gilberto I-180 Mestre, Sara I-357 Mikalef, Patrick I-624 Miquilim, Danielle I-425 Mizuyama, Hajime II-109 Moghimi, Faraz II-151 Møller, Charles I-164 Moon, Ilkyeong II-84 Morinaga, Eiji II-189 Mork, Ola Jon II-603 Moser, Benedikt I-518 Müller, Egon I-414 Mwesiumo, Deodat I-570


Nääs, Irenilza de Alencar I-173 Nabati, E. G. II-567 Nakano, Shinichi I-148 Napoleone, Alessia I-349 Neagoe, Mihai II-27, II-68 Netland, Torbjørn H. I-77, I-333 Netland, Torbjörn H. I-46 Neto, Manoel Eulálio I-108 Neubert, Gilles II-481 Nezami, Zeinab I-323 Nielsen, Kjeld I-375, I-392, I-400, I-658, II-405 Nikhanbayev, Nursultan II-617 Noh, Sang Do II-381 Nold, Daniel I-12 Norese, Maria-Franca II-481 Nyhuis, Peter II-583 Oliva, M. I-299 Oliveira, Manuel I-690 Oppen, Johan II-223 Orellano, Martha II-481 Orrù, P. F. I-315 Palm, Daniel I-197 Panayiotou, Nikolaos A. II-397 Papadopoulos, Georgios A. I-474 Papcun, Peter I-633 Park, Youngsoo II-84 Patel, Apurva II-439 Paul, Magdalena I-214, I-223, I-231 Pause, Daniel II-51 Pedersen, Ann-Charlott I-29 Pedersen, Simen Alexander I-466 Petroni, Benedito Cristiano A. II-389 Pettersen, Ole-Gunnar II-35 Pezzotta, Giuditta I-493 Piel, Mario II-277 Pinto, Roberta Sobral I-135 Pirola, Fabiana I-485 Pleli, Julian I-223 Polenghi, Adalberto I-274, I-283 Powell, Daryl I-3, I-37, I-62, I-69, II-493 Powell, Daryl J. I-29 Pozzetti, Alessandro I-283, I-349 Prabhu, Vittaldas I-716 Prote, Jan-Philipp I-46, I-341, II-43, II-277 Psarommatis, Foivos I-267, I-307

Rakic, Slavko I-510 Rakiz, Asma II-75 Ramasubramanian, M. II-212 Ratnayake, R. M. Chandima II-59, II-312 Raymundo, Helcio II-118 Raymundo, Júlio Cesar I-180 Rebours, Céline II-127 Redecker, M. A. II-567 Reinhart, Gunther I-214, I-223, I-231, I-248, I-257 Reis, Jacqueline Zonichenn II-389 Reis, João Gilberto Mendes I-102 Reke, Eivind I-62 Retmi, Kawtar II-75 Riedel, Ralph I-414, II-331, II-466 Rød, Espen I-562 Roda, Irene I-274, I-283, I-315 Rodrigues, Raimundo Nonato Moura I-108 Rolstadås, Asbjørn II-631 Romero, David I-3, I-37, I-493, I-633, I-682, II-1 Roser, Christoph I-12, I-21 Røstad, Carl Christian II-447 Rudberg, Martin I-529 Ruggero, Sergio Miele I-650 Sabbagh, Ramin II-159, II-591 Sacomano, José Benedito I-129, I-650, II-323 Sala, Roberto I-485 Saretz, Benedikt II-339 Satyro, Walter C. II-323 Sauermann, Frederick I-341, II-277 Scheidt, Cláudio Guimarães II-625 Schiavo, Luciano II-323 Schiemann, Dennis II-294 Schiffer, Martina II-339 Schimidt, Ricardo Zandonadi I-129 Schmidt, Matthias II-171 Schuh, Günther I-46, I-341, I-443, II-43, II-231, II-277 Schulz, Julia I-214 Schütz, Peter I-29 Seim, Eva Amdahl I-690 Seitz, Melissa II-583 Semini, Marco I-579, I-588 Senderek, Roman I-674, II-547 Seo, Minyoung I-701


Shin, Hyunjong I-716 Shlopak, Mikhail I-562 Silva, Helton Raimundo Oliveira I-187 Silva, Raquel Baracat T. R. I-102 Sippl, Fabian I-214 Sobotta, Maren II-583 Sonntag, Paul I-666 Sorensen, Daniel G. H. I-400 Spence, Chelsea II-439 Spone, Jakob II-35 Stavrou, Vasileios P. II-397 Steenwerth, Philipp II-511 Steger-Jensen, Kenn I-155, I-164, II-68, II-575 Stergiou, Konstantinos E. II-397 Stich, Volker I-674, II-294, II-547 Strandhagen, Jan Ola I-588, II-240, II-248 Strandhagen, Jo Wessel I-579 Stratmann, Lukas I-341 Suginouchi, Shota II-109 Summers, Joshua D. II-439 Szirbik, Nick B. I-433 Taaffe, Kevin M. II-439 Tada, Naohiro II-372 Taisch, Marco II-520 Tamayo, Simon I-357 Taskhiri, Mohammad Sadegh II-27 Tawalbeh, Mandy II-466 Terra da Silva, Márcia I-425, I-650, II-611 Thevenin, Simon I-231 Thoben, Klaus-Dieter I-408, I-458, I-666, II-365, II-567 Thomas, Katharina II-277 Thürer, Matthias I-3, I-37 Tiedemann, Fredrik I-383, II-204 Tien, Kai-wen I-716 Toloi, Marley Nunes Vituri I-187 Toloi, Rodrigo Carlo I-187 Tonelli, Flavio II-423 Torvatn, Hans II-347 Torvatn, Hans Yngvar I-624 Tourkogiorgis, Ioannis I-307

Tropschuh, Barbara I-257 Trucco, Paolo I-274 Turner, Paul II-27, II-68 Umeda, Toyohiro II-180

Vaagen, Hajnalka I-546 Vascak, Jan I-633 Velardita, Luca I-451 Velthuizen, Vincent R. I-433 Vendrametto, Oduvaldo I-108, I-135 Vestergaard, Sven II-68 Vogeler, Colette II-539 von Leipzig, Konrad I-197 von Stietencron, Moritz II-365 Voß, Thomas II-171 Wakamatsu, Hidefumi II-189 Waschull, S. II-502 Wdowik, Roman I-537, II-312 Weckman, Gary R. II-100 Wellsandt, Stefan I-458 Welo, Torgeir II-603 Wentzky, Chase II-439 Wetzchewald, Philipp II-231 Wiendahl, Hans-Hermann II-339 Wiesner, Stefan I-666 Wikner, Joakim I-383, II-196, II-204 Wiktorsson, Magnus II-285 Wolf, Hergen I-333 Wortmann, J. C. II-502 Wuest, Thorsten I-3, I-37, II-1 Wullbrandt, Jonas I-54, II-303 Yamashita, Ken II-180 Yoder, Reid I-722 Zafarzadeh, Masoud II-285 Zamanifar, Kamran I-323 Zandonadi Schmidt, Ricardo II-611 Zero, Nicole II-439 Zhang, Shuai II-475 Zikeli, Georg Lukas I-248 Zolotova, Iveta I-633
