Industrial Engineering : Management, Tools, and Applications 9781482226850, 1482226855

Annotation Industrial Engineering: Management, Tools, and Applications, Three Volume Set provides innovation application


English Pages [816] Year 2015



Table of contents :
Content: Volume 01
Front Cover
Contents
Preface
Editors
Contributors
Introduction
Chapter 1: Integrated Production Planning Model for Noncyclic Maintenance and Production Planning
Chapter 2: Non-Traditional Performance Evaluation of Manufacturing Enterprises
Chapter 3: Automotive Stamping Operations Scheduling Using Mathematical Programming and Constraint Programming
Chapter 4: Voting Security in Social Media
Chapter 5: Integrated Approach to Optimize Open-Pit Mine Block Sequencing
Chapter 6: Locating Temporary Storage Sites for Managing Disaster Waste, Using Multiobjective Optimization
Chapter 7: Using Earthquake Risk Data to Assign Cities to Disaster-Response Facilities in Turkey
Chapter 8: Factors Affecting the Purchasing Behaviors of Private Shopping Club Users: A Study in Turkey
Chapter 9: Traffic Signal Optimization: Challenges, Models, and Applications
Chapter 10: Comparative Financial Efficiency Analysis for Turkish Banking Sector
Back Cover
Volume 02
Front Cover
Contents
Preface
Editors
Contributors
Introduction
Chapter 1: Quality Management in Pharmaceutical Supply Chains: Developing-Country Perspective
Chapter 2: Risk Analysis and Efficient Logistics for Maritime Ports and Waterways in Qatar
Chapter 3: Combinatorial Auctions in Turkish Day-Ahead Electricity Markets
Chapter 4: Simulation-Based Inventory Control in a Chemical Industry
Chapter 5: Hand Torque Strength in Industry: A Critical Review
Chapter 6: Optimization of Traffic Flow on Kuwait's Roads and Highways
Chapter 7: Modeling, Simulation, and Analysis of Production Lines in Kuwait's Petroleum Sector
Chapter 8: Simulation and Analysis of Izmir's Metro Transportation System
Chapter 9: Productivity Improvement Studies in a Process Industry: A Case Study
Chapter 10: Modeling Inventory Dynamics: The Case of Frenudco
Back Cover
Volume 03
Front Cover
Contents
Preface
Editors
Contributors
Introduction
Chapter 1: Daily Planning for Three-Echelon Logistics Associated with Inventory Management under Demand Deviation
Chapter 2: New Local Search Algorithm for Vehicle Routing Problem with Simultaneous Pickup and Delivery
Chapter 3: Optimal Fencing in Airline Industry with Demand Leakage
Chapter 4: Bi-Objective Berth-Crane Allocation Problem in Container Terminals
Chapter 5: Route Selection Problem in the Arctic Region for the Global Logistics Industry
Chapter 6: Route Design in a Pharmaceutical Warehouse via Mathematical Programming
Chapter 7: Integrated Decision Model for Medical Supplier Evaluation
Chapter 8: Arc Selection and Routing for Restoration of Network Connectivity after a Disaster
Chapter 9: Feasibility Study of Shuttle Services to Reduce Bus Congestion in Downtown Izmir


Industrial engineering originated in the United States, and although the popularity of this discipline has grown worldwide, there is still little information available outside of the US regarding its practical use and application. Industrial Engineering Non-Traditional Applications in International Settings raises the bar and examines industrial engineering from a global perspective. Representing the best papers from the International Institute of Industrial Engineers (IIIE) conference held in Istanbul in June 2013, and developed by contributors from at least six different countries, the book lends their expertise on the international impact of industrial engineering applications and provides a thorough understanding of the subject. Focusing on two key aspects of the industrial engineering (IE) discipline, non-traditional settings and international environments, the book introduces applications and incorporates case studies illustrating how IE-based tools and techniques have been applied to diverse environments around the world. Each chapter represents a novel application of industrial tools and techniques. In addition, the authors highlight some of the more exciting developments and implementations of industrial engineering. The book enables both students and practitioners to learn from universal best practices and observe the international growth of the discipline. Industrial Engineering Non-Traditional Applications in International Settings explores the globalization of this expanding discipline and serves as a guide to industry professionals, including systems, industrial, manufacturing, design, production, environmental, and Lean Six Sigma engineers, and is also relevant to applied ergonomics, business supply chain management, business logistics, and business operations management.


Industrial Engineering Non-Traditional Applications in International Settings

“The value of this book lies in applying industrial principles and tools of IE to non-traditional areas. The chapters are presented like research articles. So, the book can be used for both teaching and research. More specifically it is an asset for research-led teaching in the field of industrial engineering.” —Arun Elias, Victoria University of Wellington, New Zealand


Engineering - Industrial & Manufacturing


Edited by Bopaya Bidanda, İhsan Sabuncuoğlu, and Bahar Y. Kara


Industrial Engineering: Management, Tools, and Applications
• Industrial Engineering Non-Traditional Applications in International Settings
• Industrial Engineering Applications in Emerging Countries
• Global Logistics Management


MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2015 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Version Date: 20140801
International Standard Book Number-13: 978-1-4822-2688-1 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

This book is dedicated to my dear wife, Louella, and my children, Maya and Rahul, who have spent many evenings and weekends without a husband and a father, when I was in the pursuit of integrating the global industrial engineering profession. Bopaya Bidanda


Contents

Preface
Editors
Contributors
Introduction
Chapter 1: Integrated Production Planning Model for Noncyclic Maintenance and Production Planning (Mehdi Bijari and Mehdi Jafarian)
Chapter 2: Non-Traditional Performance Evaluation of Manufacturing Enterprises (Ibrahim H. Garbie)
Chapter 3: Automotive Stamping Operations Scheduling Using Mathematical Programming and Constraint Programming (Burcu Caglar Gencosman, H. Cenk Ozmutlu, Huseyin Ozkan, and Mehmet A. Begen)
Chapter 4: Voting Security in Social Media (Seifedine Kadry)
Chapter 5: Integrated Approach to Optimize Open-Pit Mine Block Sequencing (Amin Mousavi, Erhan Kozan, and Shi Qiang Liu)
Chapter 6: Locating Temporary Storage Sites for Managing Disaster Waste, Using Multiobjective Optimization (Kıvanç Onan, Füsun Ülengin, and Bahar Sennaroğlu)
Chapter 7: Using Earthquake Risk Data to Assign Cities to Disaster-Response Facilities in Turkey (Ayşenur Sahin, Mustafa Alp Ertem, and Emel Emur)
Chapter 8: Factors Affecting the Purchasing Behaviors of Private Shopping Club Users: A Study in Turkey (Sercan Akkaş, Ceren Salkın, and Başar Öztayşi)
Chapter 9: Traffic Signal Optimization: Challenges, Models, and Applications (Mahmut Ali Gökçe, Erdinç Öner, and Gül Işık)
Chapter 10: Comparative Financial Efficiency Analysis for Turkish Banking Sector (A. Argun Karacabey and Fazıl Gökgöz)
Author Index
Subject Index

Preface

We are pleased to present to you this book that focuses on two key aspects of the industrial engineering (IE) discipline—non-traditional settings and international environments. IE was born and evolved in the United States during the first few decades of its existence. However, the last three decades have seen a dramatic shift in the application and evolution of this profession. The rapid growth we have witnessed in the IE profession and academic departments outside the United States has led to a significant body of IE knowledge being developed; this work sometimes does not find appropriate archival outlets. We are pleased to provide such a forum. This book clearly illustrates how IE-based tools and techniques have been applied to a diversity of environments. We therefore believe that this first-of-its-kind book can create an awareness of the breadth of the IE discipline both in terms of geography and scope.


MATLAB® is a registered trademark of The MathWorks, Inc. For product information, please contact: The MathWorks, Inc. 3 Apple Hill Drive Natick, MA 01760-2098 USA Tel: 508-647-7000 Fax: 508-647-7001 E-mail: [email protected] Web: www.mathworks.com

Editors

Bopaya Bidanda is currently the Ernest E. Roth professor and chairman in the Department of Industrial Engineering at the University of Pittsburgh. His research focuses on manufacturing systems, reverse engineering, product development, and project management. He has published five books and more than a hundred papers in international journals and conference proceedings. His recent (edited) books include those published by Springer—Virtual Prototyping & Bio-Manufacturing in Medical Applications and BioMaterials and Prototyping Applications in Medicine. He has also given invited and keynote speeches in Asia, South America, Africa, and Europe. He also helped initiate and institutionalize the engineering program on the Semester at Sea voyage in 2004. He previously served as the president of the Council of Industrial Engineering Academic Department Heads (CIEADH) and also on the board of trustees of the Institute of Industrial Engineers. He also serves on the international advisory boards of universities in India and South America.


Dr. Bidanda is a fellow of the Institute of Industrial Engineers and is currently a commissioner with the Engineering Accreditation Commission of ABET. In 2004, he was appointed a Fulbright Senior Specialist by the J. William Fulbright Foreign Scholarship Board and the U.S. Department of State. He received the 2012 John Imhoff Award for Global Excellence in Industrial Engineering given by the American Society for Engineering Education. He also received the International Federation of Engineering Education Societies (IFEES) 2012 Award for Global Excellence in Engineering Education in Buenos Aires and the 2013 Albert G. Holzman Distinguished Educator Award given by the Institute of Industrial Engineers. In recognition of his services to the engineering discipline, the medical community, and the University of Pittsburgh, he was honored with the 2014 Chancellor's Distinguished Public Service Award.

İhsan Sabuncuoğlu is the founding rector of Abdullah Gul University. He earned his BS and MS in industrial engineering from the Middle East Technical University in 1982 and 1984, respectively. He earned his PhD in industrial engineering from Wichita State University in 1990. Dr. Sabuncuoğlu worked for Boeing, Pizza Hut, and the National Institute of Health in the United States during his PhD studies. He joined Bilkent University in 1990 and worked as a full-time faculty member until 2013. In the meantime, he held visiting positions at Carnegie Mellon University in the United States and at Institut Français de Mécanique Avancée (IFMA) in France. His research interests are in real-time scheduling, simulation optimization, and applications of quantitative methods to cancer-related health-care problems. His research has been funded by TUBITAK (The Scientific and Technological Research Council of Turkey) and EUREKA (a European-wide initiative to foster European competitiveness through cooperation among companies and research institutions in the field of advanced technologies). Dr. Sabuncuoğlu also has significant industrial experience in aerospace, automotive, and military-based defense systems. His industrial


projects are sponsored by a number of both national and international companies. He is currently the director of the Bilkent University Industry and University Collaboration Center (USIM) and the chair of the Advanced Machinery and Manufacturing Group (MAKITEG) at TUBITAK. In addition to publishing more than a hundred papers in international journals and conference proceedings, Dr. Sabuncuoğlu has edited two books. He is also on the editorial board of a number of scientific journals in the areas of industrial engineering and operations research. He is a member of the Institute of Industrial Engineers, the Institute for Operations Research and the Management Sciences, and the Simulation Society. He is also a member of the Council of Industrial Engineering Academic Department Heads (CIEADH) and various other professional and social committees.

Bahar Y. Kara is an associate professor in the Department of Industrial Engineering at Bilkent University. Dr. Kara earned an MS and a PhD from the Bilkent University Industrial Engineering Department, and she worked as a postdoctoral researcher at McGill University in Canada. Dr. Kara was awarded Research Excellence in PhD Studies by INFORMS (Institute for Operations Research and the Management Sciences) UPS-SOLA. In 2008, Dr. Kara was awarded the TUBA-GEBIP (National Young Researchers Career Development Grant) Award. She attended the World Economic Forum in China in 2009. For her research and projects, the IAP (Inter Academy Panel) and the TWAS (The Academy of Sciences for the Developing World) awarded her the IAP's Young Researchers Grant. Dr. Kara was elected as an associate member of the Turkish Academy of Sciences in 2012. She has been acting as a reviewer for the top research journals within her field. Her current research interests include distribution logistics, humanitarian logistics, hub location and hub network design, and hazardous material logistics.


Contributors

Sercan Akkaş, Department of Industrial Engineering, İstanbul Technical University, İstanbul, Turkey
Mehmet A. Begen, Richard Ivey School of Business, University of Western Ontario, London, Ontario, Canada
Mehdi Bijari, Department of Industrial and System Engineering, Isfahan University of Technology, Isfahan, Iran
Mustafa Alp Ertem, Department of Industrial Engineering, Çankaya University, Ankara, Turkey
Emel Emur, Department of Industrial Engineering, Çankaya University, Ankara, Turkey
Ibrahim H. Garbie, Department of Mechanical and Industrial Engineering, College of Engineering, Sultan Qaboos University, Muscat, Oman; and Department of Mechanical Engineering, College of Engineering at Helwan, Helwan University, Cairo, Egypt
Burcu Caglar Gencosman, Department of Industrial Engineering, Uludag University, Bursa, Turkey
Mahmut Ali Gökçe, Department of Industrial Engineering, Izmir University of Economics, Izmir, Turkey
Fazıl Gökgöz, Faculty of Political Sciences, Department of Management, Ankara University, Ankara, Turkey
Gül Işık, Department of Industrial Engineering, Izmir University of Economics, Izmir, Turkey
Mehdi Jafarian, Department of Industrial and System Engineering, Isfahan University of Technology, Isfahan, Iran
Seifedine Kadry, Department of Industrial Engineering, American University of the Middle East, Egaila, Kuwait
A. Argun Karacabey, Faculty of Economics and Business Administration, Department of Management, Okan University, İstanbul, Turkey
Erhan Kozan, School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
Shi Qiang Liu, School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
Amin Mousavi, School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
Kıvanç Onan, Department of Industrial Engineering, Doğuş University, İstanbul, Turkey
Erdinç Öner, Department of Industrial Engineering, Izmir University of Economics, Izmir, Turkey
Huseyin Ozkan, Beycelik Gestamp, Bursa, Turkey
H. Cenk Ozmutlu, Department of Industrial Engineering, Uludag University, Bursa, Turkey
Başar Öztayşi, Department of Industrial Engineering, İstanbul Technical University, İstanbul, Turkey
Ayşenur Sahin, Department of Industrial Engineering, Çankaya University, Ankara, Turkey
Ceren Salkın, Department of Industrial Engineering, İstanbul Technical University, İstanbul, Turkey
Bahar Sennaroğlu, Department of Industrial Engineering, Marmara University, İstanbul, Turkey
Füsun Ülengin, School of Management, Sabancı University, İstanbul, Turkey


Introduction

The discipline of industrial engineering has evolved for more than a hundred years. Over the last two decades, much of the growth of applications in industrial engineering has occurred outside the United States. This book focuses on non-traditional applications in international settings and will therefore detail some of the more exciting developments, applications, and implementations of industrial engineering and related tools. This book contains 10 chapters developed by authors and coauthors from at least six different countries. Though the chapters are arranged in no particular order, each represents a novel application of industrial tools and techniques. Chapter 1 details an integrated model that can be applied to noncyclic maintenance planning, which can also be integrated with production planning. The authors develop a model that can be applied to smaller problems, but since the problem is NP-hard, they also call for more efficient heuristic models that can be developed and applied to large problems. Chapter 2 presents a series of performance metrics for manufacturing (and possibly other types of) enterprises. These metrics are based on the degree or level of complexity, level of leanness, and level of agility. These metrics are then implemented and validated in an organization based in the Middle East.


Chapter 3 focuses on scheduling of automotive stamping operations, which are typically characterized by high set-up times and small unit run times. The authors present a solution utilizing mixed integer programming and apply this, in part, to the operations of a real-life workplace. How this model can significantly increase production capacity by effective scheduling is also discussed. Chapter 4 presents a completely new application in that Heider's balance theory is utilized to detect voting fraud in social media. This is especially important with the increasing utility of social media in our everyday lives. Chapter 5 presents applications in the area of natural resource development, or more specifically open-pit mining. An optimization model is developed and applied (via case studies) to optimize the extraction sequence of blocks—an operation that can have a major impact on mining profitability. Chapter 6 details where best to locate sites for disaster waste processing. Multiobjective optimization is used to identify site locations and provide solution guidance to this important, yet often unnoticed, problem that can occur at a moment's notice anywhere in our world. Chapter 7 also studies disasters, but from a different perspective. It first details the shift in Turkey from crisis management to risk management and assigns disaster response facilities to near optimal locations. The work was driven by the risks posed by earthquakes in Turkey and also by the locations where disaster response facilities currently exist. Chapter 8 deals with a more pleasant topic—one that studies factors affecting buying patterns and behaviors at private shopping clubs. Turkey is taken as a benchmark, and a technology acceptance model is used to study the buying behavior. Results from a questionnaire that has already been administered are also discussed. Chapter 9 shifts gears to detail optimization methods that can be used to increase the effectiveness of the timing of traffic signals. With the rapid urbanization of emerging countries and related congestion in cities, this is a problem that will continue to receive much attention. Chapter 10 discusses the Turkish banking sector and the measurement of efficiency of its banks, a topic that greatly impacts


the emerging financial market. The authors apply quantitative models to study 29 commercial banks and 12 investment banks and show the relative efficiency (or lack thereof) of each type of bank. As can be seen, there is a refreshing diversity of application environments and types of tools used.


Chapter 1
Integrated Production Planning Model for Noncyclic Maintenance and Production Planning
Mehdi Bijari and Mehdi Jafarian

Contents
1.1 Introduction
1.2 Literature Review
1.3 Integrated Model for Noncyclic Maintenance Planning and Production Planning
1.3.1 Problem Statement
1.3.2 Assumptions
1.3.3 Profit Maximization of General Lot-Sizing and Scheduling Problem
1.3.4 Integrated Model
1.3.5 Model Output Representation
1.3.6 Hybrid Solution Algorithm
Acknowledgment
References

1.1  Introduction

Maintenance and production are closely related in different ways. This relationship makes production planning and maintenance planning the most important and demanding areas in the process of decision-making by industrial managers. Hence, they have been the focus of attention in the manufacturing industry while a lot of research has also been devoted to them in the area of operations research. Although the two activities are interdependent, they have most often been performed independently. Integration of production planning and maintenance planning into one single problem is a complex and


challenging task since the resulting integrated planning problem leads to nonoptimal solutions. Maintenance becomes necessary because of either a failure in production or the undesirably low quality of the items produced. However, the significance of maintenance planning can be more vividly realized when maximum plant availability and maximum mean time between equipment failures are sought at the lowest cost. Maintenance activities may be classified into four types: corrective maintenance, predictive maintenance, repairs maintenance, and preventive maintenance (PM). Corrective maintenance can be defined as the maintenance that is required when an item has failed or worn out, to bring it back to working order. While predictive maintenance tends to include direct measurement of the item, repairs maintenance is simply doing maintenance work as need develops. This elementary approach has sometimes been replaced by periodic overhauls and other preventive maintenance activities. PM is performed periodically in order to reduce the incidence of equipment failure and the costs associated with it. These costs include disrupted production schedules, idled workers, loss of output, and damage to products or other equipment. PM, thus, improves production capacity, production quality, and overall efficiency of production plants. Moreover, it can be scheduled to avoid interference with production. There are trade-offs between PM planning and production planning. PM activities take time that could otherwise be used for production, but delaying PM for production may increase the probability of machine failure. Whenever an unexpected machine failure occurs, the current production plan becomes inadequate and needs to be modified. Changes in the production plan sometimes cause extra costs or significant changes in the service level and production line productivity. Production planning mainly has two aspects: lot-sizing and scheduling. Lot-sizing concerns determining production quantity while scheduling concerns sequencing products on the production line. Decisions for these two problems are mostly made in a hierarchical manner. In other words, the lot-sizing problem is solved first and the output is used in the sequencing and scheduling problem. The problem is sometimes described as the general lot-sizing


and scheduling problem (GLSP) (Fleischmann and Meyr 1997) or the capacitated lot-sizing problem with sequence-dependent setup times (CLSP-SD), which addresses the lot-sizing and scheduling problems simultaneously because of their interdependence. In this chapter, a new integrated model is presented for the noncyclic maintenance and production planning problem. The Markov chain is used to produce the parameters required for the model of a single-stage, multiparallel-machine production system with the objective of maximizing profits under the assumption of demand flexibility. In this model, the value of maintenance has been taken into account. The product yield depends on equipment conditions, which deteriorate over time. The objective is to determine the equipment maintenance schedule, demand quantities, lot sizes, and production schedules in such a way that the expected profit is maximized.

1.2  Literature Review

PM planning models are typically stochastic models accompanied by optimization techniques designed to maximize equipment availability or minimize equipment maintenance costs. These are either mathematical or simulation models. The literature abounds in papers on planning and optimizing maintenance activities. However, only a few can be found dealing with models that combine PM planning and production planning. The models reported in the literature include such decision variables as the number of maintenance activities, safety buffers, and inspection intervals. While the objective in most models is minimizing costs, some also consider system lifetime, which is generally assumed to have a Weibull distribution. Models developed for integrating PM planning and production planning are NP-hard; they can be solved optimally for small-size instances, but obtaining optimal solutions is impractical for large-size instances. This, therefore, warrants efficient solvers for large-size problems. Both heuristic and metaheuristic methods, including genetic algorithms, simulated annealing, and Lagrangian procedures, as well as expert systems, have been used to solve these models. Most papers in this area deal with production scheduling, and the models used for production scheduling and PM planning are designed


with an implicit common goal of maximizing equipment productivity. Some studies extend the simple machine scheduling models by considering the maintenance decisions as given, or as constraints, rather than integrating them. The problem in these studies is modeled as a sequencing and scheduling problem with the machine availability constraint (Molaee et al. 2011). Different methods have been used to develop models for the production planning and PM problems. Cassady and Kutanoglu (2005) have classified these methods into two broad approaches: reactive and robust. In the reactive approach, attempts are made to update production when a failure occurs. In robust planning, the plan is not sensitive to failure events. In another classification (Iravani and Duenyas 2005), the studies are classified into two groups. While the first focuses on the effect of failure on production schedule, the other group integrates production and maintenance planning into a single problem. Meller and Kim (1996) reviewed the literature and classified studies into two categories: one focusing on PM, and the other focusing on the statistical analysis of safety-stock-based failure neglecting PM. Brandolese et al. (1996) considered a single-stage, multiproduct production environment with flexible parallel machines. They developed an expert system for the planning and management of a multiproduct and one-stage production system made up of flexible machines operating in parallel. The system schedules both production and maintenance at the same time. Setup costs are sequence dependent. Sloan and Shanthikumar (2000) studied a multiproduct, single-machine problem in which the machine has states that change during the planning horizon such that the machine state affects the production rate of each product. They used the Markov chain and their objective function aimed to maximize profits. In each period, either a product is being produced or a maintenance activity is being performed. Their model determines the optimal policy of production and maintenance. The objective is achieving optimal maintenance policy in such a way that the sum of the discounted costs of maintenance, repairs, production, backorders, and inventory is minimized. Aghezzaf and Najid (2008) presented a production plan and a maintenance plan in a multiproduct, parallel machine system with corrective maintenance and PM. It is assumed that when a production line fails,


a minimal repair is carried out to restore it to an as-bad-as-old status. PM is also carried out periodically at the discretion of the decision maker to restore the production line to an as-good-as-new status. The resulting integrated production and maintenance planning problem is modeled as a nonlinear mixed-integer program when each production line implements a cyclic PM policy. When noncyclic PM policies are allowed, the problem is modeled as a linear mixed-integer program. In this situation, maintenance activities decrease production capacity. Sitompul and Aghezzaf (2011) proposed an integrated production and maintenance hierarchical plan. Noncyclic maintenance in the single machine problem has been considered in another study (Nourelfath et al. 2010), in which it is assumed that while production capacity is constant, a decision must be made in each period about implementing the PM. Fitouhi and Nourelfath (2012) extended upon previous studies (Nourelfath et al. 2010). They proposed a model in which the noncyclic maintenance assumption was abandoned and the assumption that the machine has several states due to its components was adopted instead. The proposed model coordinates the production with the maintenance decisions so that the total expected cost is minimized. We are given a set of products that must be produced in lots on a multistate production system during a specified finite planning horizon. Planned PM and unplanned corrective maintenance can be performed on each component of the multistate system. The maintenance policy suggests cyclic preventive replacements of components and a minimal repair on failing components. The objective is to determine an integrated lot-sizing and PM strategy of the system that will minimize the sum of preventive and corrective maintenance costs, setup costs, holding costs, backorder costs, and production costs, while satisfying the demand for all products over the entire horizon. Production yield is influenced by the machine state. Yao et al. (2005) studied the joint PM and production policies for an unreliable production-inventory system in which maintenance/ repair times are nonnegligible and stochastic. A joint policy decides (1) whether or not to perform PM and (2) if PM is not performed, then how much to produce. A discrete-time system is considered, and the problem is formulated as a Markov decision process model. Although their analysis indicates that the structure of the optimal joint policies is generally very complex, they were able to characterize


several properties regarding PM and production including optimal production/maintenance actions under backlogging and high inventory levels. Wee and Widyadana (2011) studied the economic production quantity models for deteriorating items with rework and stochastic PM time. Lu et al. (2013) studied system reliability. According to them, a system reliability lower bound is determined that is smaller than the system reliability. Marais and Saleh (2009) investigated the maintenance value and proposed that maintenance has an intrinsic value. They argue that the existing cost-oriented models ignore an important dimension of maintenance activities that involves quantifying their value. They consider systems that deteriorate stochastically and exhibit multistate failures. The state evolution is modeled in their study using the Markov chain and directed graphs. To account for maintenance value, they calculate the net present value of maintenance in their model. Njike et al. (2011) used the value-optimized concept in their research. They sought to develop an optimal stochastic control model in which interactive feedback consisted of the quantity of flawless and defective products. The main objective was to minimize the expected discounted overall cost due to maintenance activities, inventory holding, and backlogs. A near-optimal control policy of the system was then obtained through numerical techniques. The originality of their research lies in the fact that all operational failures have been taken into account in the same optimization model. This brings added value for maintenance and operations managers who need to consider all failure parameters before making cost-related decisions.

1.3 Integrated Model for Noncyclic Maintenance Planning and Production Planning

1.3.1  Problem Statement

In this section, an integrated model is presented for the noncyclic maintenance planning and production planning problem. The objective is to maximize profits. The model considers simultaneous lot-sizing and scheduling. The challenge commonly faced within production planning is the coordination of demand and production capacity. Demand flexibility assumption is also introduced into the model. In many industries, product yield is heavily influenced by equipment


conditions. Previous studies have focused either on maintenance at the expense of the effect of equipment conditions on yield or on production at the expense of the possibility for actively changing machine state.

1.3.2  Assumptions

The assumptions made here are classified into those related to production planning and those concerning PM planning. Assumptions of production planning:

• The model is a multiproduct one.
• The model is in a multiparallel machine environment.
• The available capacity is finite. Considering PM planning in each period, the capacity may take different values as computed based on mathematical expectation.
• The planning horizon is finite and consists of T periods.
• The demand for a product is not known before each period and is determined by the model. For each product, this value ranges between a lower and an upper bound for the demand.
• Shortage is allowed in periods. However, the total demand should be met at the end of the planning horizon.
• Setup times and setup costs are sequence dependent.
• Holding costs, setup costs, and production costs are time independent.
• The model has the characteristic of setup preservation. It means that if we have an idle time, the setup state would not change after it.
• Lots are continuous. This means that production can continue for the next period with no break and with no setup.
• The setup state is specific at the beginning of the planning horizon.
• It is possible to produce some types of products in each period. In other words, the model is a big-time bucket one.
• The objective function is maximizing the sales revenue minus production, holding, and setup costs.
• The breakdown of setup time between two periods is not allowed, and the setup is finished in the same period in which it begins.


Assumptions of PM planning:
• The machine has the following three states:
  1. Working at good efficiency (state 1).
  2. Working at low efficiency (state 2).
  3. Where the machine breaks down, a non-PM repair state starts after a sudden breakdown (state r).
• Product quality is not influenced by the machine state.
• Maintenance operation does not create a disturbance or a change in the setup state.
• The PM operation is an activity with a positive effect; it increases system efficiency. There is at least one state in which the PM improves system efficiency.
• For PM planning, microperiods are considered to be separate from microperiods in production planning. Therefore, the PM schedule is discrete.
• In each microperiod, it is decided whether one and only one PM is to be performed or not.
• The efficiency or the capacity of a machine is reduced by production or as a result of exploiting this capacity. Therefore, the state of the machine goes toward state 2 or state r.
• The state of the machine turns to state 1 after a PM operation.
• Both the PM operation and the emergency maintenance operation are costly. In addition, they reduce the capacity of the period as they use this capacity.
• Only one PM operation is possible in each maintenance microperiod. On the other hand, it is assumed that only one sudden breakdown may happen in each microperiod.
• Transition of the machine state is memoryless from one period to the next.

In addition to these assumptions, the proposed model considers the existence of triangular inequality conditions or their absence. In most industries, setup times conform to the triangular inequality. This assumption, which can also be applied to costs, is stated as follows:

$$Sc_{ik} + Sc_{kj} > Sc_{ij} \qquad (1.1)$$

where $Sc_{ij}$ represents the setup time from product i to product j.
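As a quick, hedged illustration of condition (1.1), the short sketch below scans a sequence-dependent setup-time matrix for violations of the triangular inequality; the matrix values are hypothetical and simply mimic the black/blue/white color-change example discussed next.

```python
# Minimal check of the triangular inequality (1.1) on a hypothetical
# setup-time matrix.  A triple (i, k, j) is reported when changing over
# via k is not slower than the direct changeover i -> j.
def triangle_violations(sc):
    n = len(sc)
    return [(i, k, j)
            for i in range(n) for j in range(n) for k in range(n)
            if len({i, j, k}) == 3 and sc[i][k] + sc[k][j] <= sc[i][j]]

# Colors: 0 = black, 1 = blue, 2 = white (illustrative numbers only).
setup_time = [[0, 2, 10],   # from black
              [3, 0,  2],   # from blue
              [4, 3,  0]]   # from white
print(triangle_violations(setup_time))   # [(0, 1, 2)]: black -> blue -> white beats direct black -> white
```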


In simple words, the triangular inequality states that the setup time or the setup cost for moving directly from product i to product j is less than the time or cost when an intermediate product is processed in between. In some industries, it is plausible that setup times do not conform to the triangular inequality. Changing color in some industries can be mentioned as an example; changing color from black to white needs more setup time than changing from black to blue and then from blue to white.

1.3.3  Profit Maximization of General Lot-Sizing and Scheduling Problem

The proposed integrated model for maintenance planning and production planning is based on the profit maximization general lot-sizing and scheduling problem (PGLSP) model (Sereshti 2010). Recently, attention has been directed toward the simultaneous lot-sizing and scheduling problem that has come to be called the general lot-sizing and scheduling problem, or GLSP. For modeling this problem, two distinctive approaches may be employed. In the first approach, there are two kinds of time buckets, small buckets and large ones. Small buckets, or positions, are within the large buckets, or macroperiods. The positions, or microperiods, are used for sequencing. This approach was first presented by Fleischmann and Meyr (1997). Meyr (2000) extended GLSP to deal with sequence-dependent setup times. The second approach is based on the CLSP-SD, which is related to the traveling salesman problem (Almada-Lobo et al. 2008). The profit maximization of GLSP with demand choice flexibility is an extension of the GLSP in which the assumption of flexibility in choosing demands is also included. The accepted demand in each period can vary between its upper and lower bounds. The upper bound could be the forecasted demand, and the lower bound is the organization's commitment toward customers or the minimum production level according to production policy. PGLSP can be described as follows. Having P products and T planning periods, the decision maker seeks to determine (1) the accepted demand of each product in each period, which is between an upper and a lower bound, (2) the quantity of lots for each product, and (3) the sequence of lots. The objective


Table 1.1  Parameters
T: Number of planning periods
P: Number of products
N: Number of positions in the planning horizon
πn: The period in which position n is located (n = 1,…,N)
Ct: Available capacity in each period (t = 1,…,T)
Ldjt: Demand lower bound for product j in period t (j = 1,…,P; t = 1,…,T)
Udjt: Demand upper bound for product j in period t (j = 1,…,P; t = 1,…,T)
nt: Number of positions in period t (t = 1,…,T)
Ft: First position in period t (t = 1,…,T)
Lt: Last position in period t (t = 1,…,T)
hj: Holding cost for one unit of product j (j = 1,…,P)
rjt: Sales revenue for one unit of product j in period t (j = 1,…,P; t = 1,…,T)
Cpj: Production cost for one unit of product j (j = 1,…,P)
pj: Processing time for one unit of product j (j = 1,…,P)
Sij: Setup cost for transition from product i to product j (i, j = 1,…,P)
Stij: Setup time for transition from product i to product j (i, j = 1,…,P)
Ij0: Initial inventory level for product j (j = 1,…,P)

function is maximizing the sales revenues minus production, holding, and setup costs. Backlog is not allowed. Setup times and costs are sequence dependent. The triangular inequality holds between setup times. The parameters for this model are presented in Table 1.1. This model is an extension of the model proposed by Meyr (2000). PGLSP has also been modeled through the traveling salesman problem approach (Sereshti and Bijari 2013). In Meyr's model, the microperiods or positions within the planning periods are used as a modeling consideration to define the sequence of products. The number of these microperiods in each macroperiod forms a parameter of the model, and they are used to define the first and last positions in each period. The decision variables for this model are as follows:
Ijt = Inventory level of product j at the end of period t.
Djt = Accepted demand of product j in period t.
Qjn = Quantity of product j produced in position n.
Yjn = A binary variable that is 1 when the setup state in position n is for product j.


Xijn = A positive variable whose amount is always 0 or 1. This variable is 1 when the setup state changes from product i to product j in position n.

The mathematical model is presented as follows:

$$\max \sum_{j=1}^{P}\sum_{t=1}^{T} r_{jt} D_{jt} - \sum_{j=1}^{P}\sum_{n=1}^{N} Cp_{j} Q_{jn} - \sum_{j=1}^{P}\sum_{i=1}^{P}\sum_{n=1}^{N} S_{ij} X_{ijn} - \sum_{j=1}^{P}\sum_{t=1}^{T} h_{j} I_{jt}$$

subject to

$$I_{jt} = I_{j(t-1)} + \sum_{n=F_t}^{L_t} Q_{jn} - D_{jt} \quad j = 1,\ldots,P,\; t = 1,\ldots,T \qquad (1.2)$$

$$Ld_{jt} \le D_{jt} \le Ud_{jt} \quad j = 1,\ldots,P,\; t = 1,\ldots,T \qquad (1.3)$$

$$Q_{jn} \le M_{j\pi_n} Y_{jn} \quad j = 1,\ldots,P,\; n = 1,\ldots,N \qquad (1.4)$$

Other constraints of the model are the same as in Meyr's model. The objective function of the model is to maximize sales revenues minus production, setup, and holding costs. Constraint (1.2) shows the balance among demand, production, and inventory. Constraint (1.3) guarantees that the accepted demand for each product in each period is between its upper and lower bounds. Constraint (1.4) ensures that a product can be produced only when its setup is complete. The upper bound of production in this constraint is given in statement (1.5). If we just use $C_t/p_j$ as the upper bound, the constraint is still valid; using the maximum value of the remaining demand in the following periods may, however, result in a tighter constraint, which occurs when the remaining demand is less than the production capacity.

$$M_{jt} = \min\!\left(\frac{C_t}{p_j},\; \sum_{k=t}^{T} Ud_{jk}\right) \quad j = 1,\ldots,P,\; n = 1,\ldots,N \qquad (1.5)$$
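To make the PGLSP structure concrete, here is a minimal sketch that builds a reduced version of the model above (objective plus constraints (1.2) through (1.5)) with the open-source PuLP modeler and its bundled CBC solver. All data values are hypothetical, the sequencing variables X and the setup terms are omitted, and one position per period is assumed, so this illustrates the formulation rather than reproducing the authors' implementation.

```python
# Sketch of a reduced PGLSP with constraints (1.2)-(1.5); all data are
# hypothetical.  Sequencing variables X and setup costs are omitted, and each
# period holds a single position, so Q is indexed by (j, t) instead of (j, n).
import pulp

P, T = 2, 3                                                  # products, periods
r  = {(j, t): 10 + j for j in range(P) for t in range(T)}    # unit revenue r_jt
cp = {0: 4, 1: 5}                                            # unit production cost
h  = {0: 0.5, 1: 0.7}                                        # unit holding cost
p  = {0: 1.0, 1: 2.0}                                        # unit processing time
C  = {t: 40 for t in range(T)}                               # capacity per period
Ld = {(j, t): 5 for j in range(P) for t in range(T)}         # demand lower bound
Ud = {(j, t): 30 for j in range(P) for t in range(T)}        # demand upper bound
I0 = {0: 0, 1: 0}                                            # initial inventory

prob = pulp.LpProblem("reduced_PGLSP", pulp.LpMaximize)
Q = pulp.LpVariable.dicts("Q", (range(P), range(T)), lowBound=0)
D = pulp.LpVariable.dicts("D", (range(P), range(T)), lowBound=0)
I = pulp.LpVariable.dicts("I", (range(P), range(T)), lowBound=0)

# Objective: revenue minus production and holding costs (setup terms dropped).
prob += pulp.lpSum(r[j, t] * D[j][t] - cp[j] * Q[j][t] - h[j] * I[j][t]
                   for j in range(P) for t in range(T))

for j in range(P):
    for t in range(T):
        prev = I0[j] if t == 0 else I[j][t - 1]
        prob += I[j][t] == prev + Q[j][t] - D[j][t]          # (1.2) inventory balance
        prob += D[j][t] >= Ld[j, t]                          # (1.3) demand bounds
        prob += D[j][t] <= Ud[j, t]
        # (1.4)-(1.5): production bounded by the tighter of the two limits
        M_jt = min(C[t] / p[j], sum(Ud[j, k] for k in range(t, T)))
        prob += Q[j][t] <= M_jt

for t in range(T):                                           # shared capacity per period
    prob += pulp.lpSum(p[j] * Q[j][t] for j in range(P)) <= C[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```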

1.3.4  Integrated Model

We have used PGLSP for modeling our problem (Bijari and Jafarian 2013). In the model, both macroperiods and microperiods are


considered as in the basic GLSP. The decision maker wants to determine the decisions mentioned in the previous section and also the periods in which PM will be executed. The objective function is maximizing sales revenues minus production, holding, shortage, setup, PM, and non-PM costs. The model is stochastic because the machine state is stochastic, too. The objective is maximizing the expected value of profits. The machine has three states. Production microperiods are used for producing products; maintenance microperiods are used for performing maintenance activities. In each period, the probability of the machine state after the last PM can be determined. The parameters of the model are as follows:

R: Number of products
πn: The period (macroperiod) containing position n
nt: Number of positions available in period t
nrt: Number of maintenance microperiods in period t
β: Efficiency coefficient when the machine is in state 2
Cpj: Production cost of product j
ρjk: Usage rate of machine k for producing item j
Sijk: Setup cost on machine k for producing item j after item i
Stijk: Setup time on machine k for producing item j after item i
e: Discount rate in each period
bj: Shortage cost of product j in each period
cr: Non-PM (emergency repair) cost
mj: Minimum production lot size of product j
Pi^τr: Probability of state i after (τr − 1) microperiods from the last PM
πnr: The macroperiod that includes maintenance microperiod nr
Crp: PM cost
Uk: Maintenance microperiod capacity of machine k
δij: Probability of transition from state i to state j

The model and its decision variables are proposed as follows:

Ijt+: Inventory of product j at the end of period t
Ijt−: Shortage of product j at the end of period t
Qjnk: Production quantity of product j in microperiod n on machine k


trnrk: The number of microperiods (plus 1) between the last PM and maintenance microperiod nr on machine k
Yjnk: Binary variable; it is 1 if product j is produced in position n on machine k
Xijnk: Equal to 1 if the setup state of machine k changes from product i to product j in position n; otherwise 0 (i, j = 1,…,R; k = 1,…,K; n = 1,…,N)
qknr: Binary variable; equal to 1 if PM is performed in maintenance microperiod nr on machine k, and 0 otherwise

Other parameters and decision variables are the same as in PGLSP. The production microperiod parameters are expressed as follows:

Ftk: The first position in period t for machine k (k = 1,…,K; t = 1,…,T)
Ltk: The last position in period t for machine k
N: Total number of positions in the planning horizon

$$F_t = \sum_{k=1}^{t-1} n_k + 1 \qquad (1.6)$$

$$L_t = F_t + n_t - 1 \qquad (1.7)$$

$$N = \sum_{t=1}^{T} n_t \qquad (1.8)$$

The maintenance (PM) microperiod parameters are as follows:

Frtk: The first PM position in period t for machine k
Lrtk: The last PM position in period t for machine k
Nr: Total number of PM positions

$$Fr_t = \sum_{k=1}^{t-1} nr_k + 1 \qquad (1.9)$$

$$Lr_t = Fr_t + nr_t - 1 \qquad (1.10)$$

$$Nr = \sum_{t=1}^{T} nr_t \qquad (1.11)$$
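A small helper, written against definitions (1.6) through (1.11), that derives the first and last production or maintenance positions from the per-period counts; the counts used here are made up for illustration.

```python
# Derive position indices from per-period counts, following (1.6)-(1.11).
# The counts below are made-up; positions are numbered starting at 1.
def first_last_positions(counts):
    """Return (first, last, total) position indices per period."""
    first, last, start = [], [], 1
    for c in counts:
        first.append(start)          # F_t = sum of earlier counts + 1
        last.append(start + c - 1)   # L_t = F_t + n_t - 1
        start += c
    return first, last, sum(counts)  # N = total number of positions

n_t  = [3, 2, 4]        # production microperiods per period (hypothetical)
nr_t = [1, 1, 2]        # maintenance microperiods per period (hypothetical)
print(first_last_positions(n_t))    # ([1, 4, 6], [3, 5, 9], 9)
print(first_last_positions(nr_t))   # ([1, 2, 3], [1, 2, 4], 4)
```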


The model is shown as follows:

$$\begin{aligned}
\max\; E(Z) ={} & \sum_{j=1}^{R}\sum_{t=1}^{T}(1-e)^{t-1}\, r_{jt} D_{jt}
 - \sum_{j=1}^{R}\sum_{n=1}^{N}\sum_{k=1}^{K}(1-e)^{\pi_n-1}\, cp_j\, Q_{jnk} \\
 & - \sum_{j=1}^{R}\sum_{i=1}^{R}\sum_{n=1}^{N}\sum_{k=1}^{K}(1-e)^{\pi_n-1}\, S_{ij}\, X_{jink}
 - \sum_{j=1}^{R}\sum_{t=1}^{T}(1-e)^{t-1}\, h_j\, I_{jt}^{+} \\
 & - \sum_{j=1}^{R}\sum_{t=1}^{T}(1-e)^{t-1}\, b_j\, I_{jt}^{-}
 - cr \sum_{k=1}^{K}\sum_{nr=1}^{Nr}\sum_{\tau_r=1}^{Nr}
   \frac{(1-e)^{\pi_{nr}-1}\, P_r^{\tau_r}}{(tr_{nrk}-\tau_r)M+1} \\
 & - crp \sum_{k=1}^{K}\sum_{nr=1}^{Nr}(1-e)^{\pi_{nr}-1}\, q_k^{nr}
\end{aligned} \qquad (1.12)$$

subject to

$$I_{jt}^{+} = I_{j(t-1)}^{+} - I_{j(t-1)}^{-} + \sum_{k=1}^{K}\sum_{n=F_{tk}}^{L_{tk}} Q_{jnk} - D_{jt} + I_{jt}^{-} \quad \forall t, j \qquad (1.13)$$

$$\sum_{k=1}^{K}\sum_{n=1}^{N} Q_{jnk} = \sum_{t=1}^{T} D_{jt} \quad \forall j \qquad (1.14)$$

$$Ld_{jt} \le D_{jt} \le Ud_{jt} \quad \forall t, j \qquad (1.15)$$

$$Q_{jnk} \le M\, Y_{jnk} \quad \forall j, n, k \qquad (1.16)$$

$$\sum_{j=1}^{R}\sum_{n=F_{tk}}^{L_{tk}} \rho_{jk} Q_{jnk}
 + \sum_{i=1}^{R}\sum_{j=1}^{R}\sum_{n=F_{tk}}^{L_{tk}} St_{ijk} X_{ijnk}
 \le \sum_{nr=Fr_{tk}}^{Lr_{tk}} \sum_{\tau_r=1}^{Nr}
   \frac{P_1^{\tau_r} U_k + P_2^{\tau_r}\,\beta\, U_k}{(tr_{nrk}-\tau_r)M+1}
 \quad \forall k, t \qquad (1.17)$$

$$\sum_{j=1}^{R} Y_{jnk} = 1 \quad \forall n, k \qquad (1.18)$$

$$X_{ijnk} \ge Y_{i(n-1)k} + Y_{jnk} - 1 \quad \forall j, i, n, k \qquad (1.19)$$

$$tr_{nrk} = tr_{(nr-1)k}\left(1 - q_k^{nr}\right) + 1 \quad \forall nr, k \qquad (1.20)$$

$$Y_{jnk},\; q_k^{nr} \in \{0,1\} \quad \forall j, n, k, nr \qquad (1.21)$$

$$X_{ijnk},\; Q_{jnk},\; I_{jt}^{+},\; I_{jt}^{-},\; tr_{nrk},\; D_{jt} \ge 0 \quad \forall j, n, k, i \qquad (1.22)$$

Table 1.2  Transition Matrix (rows: from state; columns: to state)

From \ To    1      2      r
1            δ11    δ12    δ1r
2            0      δ22    δ2r
r            1      0      0

The transition matrix is shown in Table 1.2. The probability of state i after (τr − 1) microperiods from the last PM can be obtained by the following equations:

$$P_1^{\tau_r} = P_1^{\tau_r-1}\,\delta_{11} + P_r^{\tau_r-1}$$
$$P_2^{\tau_r} = P_2^{\tau_r-1}\,\delta_{22} + P_1^{\tau_r-1}\,\delta_{12}$$
$$P_r^{\tau_r} = P_2^{\tau_r-1}\,\delta_{2r} + P_1^{\tau_r-1}\,\delta_{1r}$$
$$P_1^{1} = P_2^{1} = P_r^{1} = 0$$
$$P_1^{2} = 1, \qquad P_2^{2} = P_r^{2} = 0$$
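The recursion above can be transcribed directly; in the sketch below the transition probabilities δ are placeholder values, not data from the chapter.

```python
# Propagate machine-state probabilities tau_r - 1 microperiods after the last
# PM, following the recursion above.  State order: 1 (good), 2 (degraded),
# r (breakdown/repair).  The delta values are placeholders, not chapter data.
d11, d12, d1r = 0.80, 0.15, 0.05
d22, d2r = 0.70, 0.30

def state_probabilities(tau_max):
    """Return a list of (P1, P2, Pr) for tau_r = 1 .. tau_max."""
    probs = [(0.0, 0.0, 0.0)]              # tau_r = 1: all zero (PM just performed)
    p1, p2, pr = 1.0, 0.0, 0.0             # tau_r = 2: machine starts in state 1
    probs.append((p1, p2, pr))
    for _ in range(tau_max - 2):
        p1, p2, pr = (p1 * d11 + pr,       # P1 <- P1*d11 + Pr (repair returns to state 1)
                      p2 * d22 + p1 * d12, # P2 <- P2*d22 + P1*d12
                      p2 * d2r + p1 * d1r) # Pr <- P2*d2r + P1*d1r
        probs.append((p1, p2, pr))
    return probs

for tau, (p1, p2, pr) in enumerate(state_probabilities(5), start=1):
    print(tau, round(p1, 3), round(p2, 3), round(pr, 3))
```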

All $P_i^{1}$ (i.e., τr = 1) are equal to zero when PM is performed on the machine. A discount rate is used in the objective function; it designates the value of maintenance. The first term in the objective function is related to sales revenue. The other terms designate the production cost, setup cost, holding cost, shortage cost, expected value of the non-PM cost, and PM cost. The non-PM cost is determined by multiplying the emergency non-PM cost by the probability of this state ($P_r^{\tau_r}$) τr − 1 microperiods after the last PM. The denominator ensures that the value of τr is properly chosen: only when τr equals $tr_{nrk}$ does the denominator equal 1; otherwise, it takes a very large value because M is a big number, and the corresponding fractions become zero.
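The role of the $(tr_{nrk}-\tau_r)M+1$ denominator can also be seen numerically: with a sufficiently large M, only the term whose τr matches the elapsed counter survives (a toy illustration, not part of the model data).

```python
# Toy illustration of the (tr - tau_r)*M + 1 selector used in (1.12) and (1.17):
# only the term with tau_r == tr keeps a denominator of 1; the rest vanish.
M = 1e6
tr = 4                                    # value of tr_nrk for some (nr, k)
weights = {tau: 1.0 / ((tr - tau) * M + 1) for tau in range(1, tr + 1)}
print(weights)    # {1: ~3.3e-07, 2: ~5.0e-07, 3: ~1.0e-06, 4: 1.0}
```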


Constraint (1.13) shows the production, demand, inventory, and shortage balance. Constraint (1.14) ensures that production quantities are equal to the satisfied demand. Constraint (1.15) shows the demand range. The next constraint shows the relation between setup and production feasibility. Constraint (1.17) ensures that the machine usage for production and setup does not exceed the available capacity. The right side of this constraint estimates the available capacity: the numerator calculates the expected value of capacity, the $P_r^{\tau_r}$ term modifies the error of the inequality, and the denominator ensures that the value of τr is properly chosen. Constraint (1.18) shows that only one product can be produced in each microperiod. Constraint (1.19) ensures that if two different products are manufactured in two consecutive microperiods, then a setup is necessary. Constraint (1.20) counts the number of periods since the last PM. As long as PM is not performed, that is, $q_k^{nr} = 0$, the value of $tr_{nrk}$ in each maintenance microperiod is one unit greater than that of the previous maintenance microperiod; otherwise, its value is only 1, which means that PM occurred in maintenance microperiod nr. Constraint (1.23) limits the minimum production batch size. The constraint is added to enforce the minimum batch size ($m_j$) after each machine setup. It can be written as follows:

$$Q_{jnk} \ge m_j \left( Y_{jnk} - Y_{j(n-1)k} \right) \quad \forall j, n, k \qquad (1.23)$$

In some industries, if a setup occurs, the production batch size must be greater than a minimum level due to technological or economic factors. This constraint ensures that the minimum batch size is produced after each setup: if the setup for a product was not carried out in microperiod (position) n − 1 but was carried out in position n, then the batch size of that product must be at least equal to the minimum batch size of the product.

1.3.5  Model Output Representation

The output of the model should contain the production schedule and the PM schedule. The maximum number of lots equals the number of positions in the model; therefore, the number of lots may be less than the number of positions. In this case, setup carryover is applied


to the remaining positions at the end of the period. In other words, one setup state is determined for each position in the resulting solution, while production may not occur in some positions at the end of the period. The following conditions may be regarded as an example. There are three types of products and five microperiods in each period. This means five lots can be produced in each period. If it is assumed that the triangular inequality conditions do not hold between setup times, there might be two or more lots of a product in one period. However, assuming that the triangular conditions hold between setup times, production of a product occurs in only one lot in each period. Therefore, it will not be necessary for the number of microperiods in each period to exceed the number of product types. In addition, it is worth mentioning that if there are three types of products, there will be no need for the number of microperiods to be greater than the number of products under any assumption. For this problem size, there is no need to create a more complex state with additional microperiods. However, as the number of products increases, problem complexity may also increase to the extent that prediction of the number of microperiods becomes impossible, especially when we are simultaneously faced with setup times and setup costs.

1.3.6  Hybrid Solution Algorithm

Given the fact that the PGLSP is NP-hard, the model presented in this chapter is NP-hard, too. Hence, efficient methods need to be developed that can obtain near-optimal solutions in a reasonable time for large-size instances. A simulated annealing (SA) algorithm and a hybrid algorithm have been developed for this purpose. The hybrid algorithm combines a heuristic algorithm and SA. The heuristic algorithm has two parts: part 1 satisfies the minimum product demand, and part 2 assigns available capacities to products with higher profits. SA determines the product sequence and the PM schedule, while lot sizes and demand quantities are obtained by the heuristic algorithm. The solutions obtained from solving the mathematical models have been used to assess the quality of the algorithms. Numerical results show the efficiency of the developed hybrid algorithm.
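The chapter only outlines the hybrid method, so the following is a schematic sketch of the kind of loop it describes: SA proposes product sequences and PM schedules, while a simple placeholder function stands in for the two-part lot-sizing heuristic that evaluates each proposal. All names, the neighborhood move, and the evaluation function are illustrative assumptions, not the authors' code.

```python
# Schematic hybrid SA sketch: SA searches over (sequence, PM plan); a placeholder
# heuristic evaluates profit.  Everything here is illustrative.
import math
import random

def evaluate(sequence, pm_plan):
    """Placeholder for the two-part heuristic: part 1 would satisfy minimum
    demands, part 2 would assign remaining capacity to the most profitable
    products.  Here we just return a dummy score so the loop is runnable."""
    return -sum(abs(a - b) for a, b in zip(sequence, sorted(sequence))) + sum(pm_plan)

def neighbor(sequence, pm_plan, rng):
    """Swap two products in the sequence and flip one PM decision."""
    seq = sequence[:]
    i, j = rng.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    pm = pm_plan[:]
    k = rng.randrange(len(pm))
    pm[k] = 1 - pm[k]
    return seq, pm

def simulated_annealing(n_products, n_pm_periods, iters=2000, t0=10.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    seq = list(range(n_products))
    rng.shuffle(seq)
    pm = [0] * n_pm_periods
    best = current = evaluate(seq, pm)
    best_sol, temp = (seq[:], pm[:]), t0
    for _ in range(iters):
        cand_seq, cand_pm = neighbor(seq, pm, rng)
        cand = evaluate(cand_seq, cand_pm)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cand >= current or rng.random() < math.exp((cand - current) / temp):
            seq, pm, current = cand_seq, cand_pm, cand
            if current > best:
                best, best_sol = current, (seq[:], pm[:])
        temp *= cooling
    return best, best_sol

print(simulated_annealing(n_products=5, n_pm_periods=4))
```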

18

M EH D I BI JA RI A N D M EH D I JA FA RIA N

Acknowledgment The authors thank Dr. Ezzatollah Roustazadeh from Isfahan University of Technology for editing the final English manuscript of this chapter.

References

Aghezzaf, E.H. and N.M. Najid. 2008. Integrated production planning and preventive maintenance in deteriorating production systems. Information Sciences 178: 3382–3392. Almada-Lobo, B., D. Klabjan, M.A. Carravilla, and J.F. Oliveira. 2007. Single machine multi-product capacitated lot sizing with sequence-dependent setups. International Journal of Production Research 45: 4873–4894. Bijari, M. and M. Jafarian. 2013. An integrated model for non cyclical maintenance planning and production planning. Proceedings of International IIE Conference, Istanbul, Turkey. Brandolese, M., M. Franci, and A. Pozzetti. 1996. Production and maintenance integrated planning. International Journal of Production Research 34: 2059–2075. Cassady, C.R. and E. Kutanoglu. 2005. Integrating preventive maintenance planning and production scheduling for a single machine. IEEE Transactions on Reliability 54: 304–309. Fitouhi, M.C. and M. Nourelfath. 2012. Integrating noncyclical preventive maintenance scheduling and production planning for a single machine. International Journal of Production Economics 136: 344–351. Fleischmann, B. and H. Meyr. 1997. The general lot-sizing and scheduling problem. OR Spectrum 19: 11–21. Iravani, S.M.R. and I. Duenyas. 2002. Integrated maintenance and production control of a deteriorating production system. IIE Transactions 34: 423–435. Lu, Z., Y. Zhang, and X. Han. 2013. Integrating run-based preventive maintenance into the capacitated lot sizing problem with reliability constraint. International Journal of Production Research 51: 1379–1391. Marais, K.B. and J.H. Saleh. 2009. Beyond its cost, the value of maintenance: An analytical framework for capturing its net present value. Reliability Engineering System Safety 94: 644–657. Meller, R.D. and D.S. Kim. 1996. The impact of preventive maintenance on system cost and buffer size. European Journal of Operational Research 95: 577–591. Meyr, H. 2000. Simultaneous lot-sizing and scheduling by combining local search with dual reoptimization. European Journal of Operational Research 120: 311–326.

IN T EG R AT ED P R O D U c TI O N P L A N NIN G M O D EL

19

Molaee, E., G. Moslehi, and M. Reisi. 2011. Minimizing maximum earliness and number of tardy jobs in the single machine scheduling problem with availability constraint. Computers & Mathematics with Applications 62: 3622–3641. Njike, A., R. Pellerin, and J.P. Kenne. 2011. Maintenance/production planning with interactive feedback of product quality. Journal of Quality in Maintenance Engineering 17: 281–298. Nourelfath, M., M. Fitouhi, and M. Machani. 2010. An integrated model for production and preventive maintenance planning in multi-state systems. IEEE Transactions on Reliability 59: 496–506. Sereshti, N. 2010. Profit maximization in simultaneous lot-sizing and scheduling problem. MSc dissertation, Isfahan University of Technology, Isfahan, Iran. Sereshti, N. and M. Bijari. 2013. Maximization in simultaneous lot-sizing and scheduling problem. Applied Mathematical Modelling 37: 9516–9523. Sitompul, C. and E.H. Aghezzaf. 2011. An integrated hierarchical production and maintenance-planning model. Journal of Quality in Maintenance Engineering 17: 299–314. Sloan, T.W. and J.G. Shanthikumar. 2000. Combined production and maintenance scheduling for a multiple-product, single-machine production system. Production and Operations Management 9: 379–399. Wee, H.M. and G.A. Widyadana. 2011. Economic production quantity models for deteriorating items with rework and stochastic preventive maintenance time. International Journal of Production Research 35: 1–13. Yao, X., X. Xie, M.C. Fu, and S.I. Marcus. 2005. Optimal joint preventive maintenance and production policies. Naval Research Logistics 52: 668–681.

This page intentionally left blank

2 N O N -TR AD iTi ONAL P ERFO RmAN CE EVALUATi ON O F M ANUFACTURiN G E NTERpRiSES I b R a H im H . G a R bie Contents

2.1 Introduction and Motivation 2.2 Importance and Background 2.3 Analysis of Non-Traditional Evaluation Aspects 2.4 Proposed Methodology 2.4.1 Fuzzy Logic Approach 2.4.1.1 Fuzzification Interface 2.4.1.2 Fuzzy Measure 2.4.1.3 Defuzzification Interface 2.4.2 Methodology Procedure 2.5 Case Study and Implementation 2.6 Conclusions References

21 22 26 29 29 29 30 31 31 33 34 35

2.1  Introduction and Motivation

Non-traditional evaluation of manufacturing enterprises regarding the existing status will be suggested and discussed in this chapter. There are many nonconventional aspects for measuring performance measurements in the manufacturing organizations/firms. These aspects are represented into level of complexity, level of leanness, and level of agility. These aspects are also considered as performance measurements. In this chapter, these performance measurements are used as a new evaluation of the existing status of manufacturing enterprise or firms. With respect to complexity, manufacturing firms require reduction in their complexity. Complexity in manufacturing firms presents a new 21

22

IB R A HIm H. G A RBIE

challenge, especially, during the existing global recession. Estimating the level of complexity in manufacturing firms is still unclear due to difficulty of analysis. Lean thinking and/or lean manufacturing, which is mainly focusing on minimizing the wastes in production processes, is considered the second aspect of non-traditional evaluation. Measuring the level of manufacturing leanness is most important especially when it is considered as one of the most important strategies in manufacturing firms to increase their utilization of resources, processes, and materials. The last performance measure concerns agility, where manufacturing firms have great interest in developing their manufacturing systems to be more competitive in terms of flexibility and capability. Agile philosophy will be considered as one important issue of the next industrial revolution as a core prerequest of sustainable manufacturing enterprises. It is a manufacturing and/or management strategy that integrates technology, people, production strategies, and industrial organization management systems. In this chapter, three proposals for estimating complexity, leanness, and agility were suggested and discussed. A fuzzy logic approach was proposed to estimate these levels of the manufacturing firms. An illustrative example is used to obtain a very clear understanding of complexity, leanness, and agility. 2.2  Importance and Background

The complexity in any organization has a direct impact on its performance. Reducing the complexity in industrial/service organizations reduces their costs and also increases their revenue and enhances their competence in local and international markets. Complexity has a direct relationship between inputs and outputs of the organization. As organizations grow bigger and expand to satisfy their demand, they tend to have more complex supply management and manufacturing operations than simple ones. Today, several definitions of complexity exist as immense international interest and knowledge for the scientific basis. First, industrial and/or manufacturing complexity was defined as systemic characteristics that integrate several key dimensions of the manufacturing environment including size, variety, information, uncertainty, control, cost, and value (Garbie and Shikdar, 2011a,b). Also, flexibility and agility are considered as the most desirable of certain system properties for the manufacturing enterprises

N O N -T R A D ITI O N A L P ERF O Rm A N c E E VA LUATI O N

23

with respect to structural and operational complexity measures. These properties will give industrial organizations more ability to cope with increased environmental uncertainty and adapting to the faster pace of change of today’s markets (Giachetti et al., 2003). There are two different forms of complexity: (1) static or structural complexity, which is designed into the system architecture, (2) operational or dynamical complexity, which also can change dramatically in short periods of time according to its environment. Although most measurements were concentrated on operational measures, both structural and operational characteristics are important for the performance of the system as a whole. Determining the industrial system complexity still has different concepts and views. Also complexity in manufacturing systems was divided into two different categories: time-independent complexity and time-dependent complexity (Kuzgunkaya and ElMaraghy, 2006). Time-independent complexity is used to add the complexities arising from the designer’s perception while time-dependent complexity is either combinational or periodic. The structural complexity measure is very close to time-independent complexity and provides a good description of the inherent complexity of its components, the relationship among them, and their influence (Kuzgunkaya and ElMaraghy, 2006). But dynamic complexity is more applicable to the system time-dependent behavior and requires data normally obtained during actual operations or simulation of the shop floor (Garbie, 2012a,b). Reducing complexity level is a key factor for reducing costs and enhancing operating performance in many organizations. The more reduction in complexity in the organization, the greater is the customer expectations. This will lead to improve system’s reliability, find out the particular parts of complexity of an organization, and measure overall performance (Garbie and Shikdar, 2011a). Optimizing the complexity in industrial organizations was recommended to be one of several solutions for the recovery of the existing financial recession (Garbie, 2009, 2010; Garbie and Shikdar, 2011b). Structural complexity on job shop manufacturing system was investigated considering processing time and scheduling rules (Jenab and Liu, 2010). There are also many types of complexities mentioned by several academicians such as process complexity and operational complexity. While the process complexity analysis focuses on the tools, equipment, and operations

24

IB R A HIm H. G A RBIE

used to manufacture it (Hu et al., 2008b), operational complexity was considered as the cognitive and physical effort associated with the tasks related to a product/process combination. Complexities in supply chain management are considered as complexity issues regarding manufacturing enterprises such as upstream complexity, internal manufacturing complexity, and downstream complexity (Bozarth et al., 2009). Measuring the manufacturing complexity in assembly lines based on assembly activities is presented with different configurations and manufacturing strategies (Wu et al., 2007). Also, the effect of scheduling rules with processing times on hybrid flow a system is investigated (Yang, 2010). The complexity levels in industrial firms are estimated through several case studies based on a general framework that includes a questionnaire focusing on each issue in a firm (Garbie and Shikdar, 2011a). They concluded that complexity arises from not only the size of the system but also the interrelationships of the system components and the emergent behavior that could not be predicted from the individual system components (Cho et al., 2009). Also, complexity can be classified into four different types: time-independent real complexity, timeindependent imaginary complexity, time-dependent combinatorial complexity, and time-dependent periodic complexity. Also, technological complexity can be considered as another type of manufacturing complexity analysis (Tani and Cimatti, 2008) especially when applied to engineering and industrial manufacturing. Analysis of complexity is widely used to analyze the industrial enterprises or firms (Garbie and Shikdar, 2011a) as it is considered as one of important issues to reconfiguring manufacturing enterprises (Garbie, 2013a) and for sustainability (Garbie, 2013b). Since 1980s until 1990s, manufacturing analysts have used the terms lean production/manufacturing for achieving greater flexibility, optimizing inventory, minimizing manufacturing lead-times, and increasing the level of quality in both products and customer service. The lean manufacturing is defined as a systematic approach to identifying and eliminating wastes or non-value-added activities through continuous improvement by flow of the product(s) at the pull of the customer in purist of perfection (Thomas et al., 2012). It can be expressed in industrial/manufacturing firms as the performancebased process to increase competitive advantage. The basics of lean

N O N -T R A D ITI O N A L P ERF O Rm A N c E E VA LUATI O N

25

manufacturing employ continuous improvement processes in order to focus on the elimination of wastes or non-value-added activities within an organization. The challenge to organizations utilizing lean manufacturing is to create a culture that will create and sustain longterm commitment. Toyota production system is considered the leading lean exemplar in the world. It became the largest car maker in the world in terms of overall sales due to adopting lean thinking. Lean thinking brings growth to every manufacturing company in the world year after year as a new manufacturing and/or management philosophy to maximize productivity and quality, and minimizing costs. The managers are also adapting to the tools and principles beyond manufacturing in different areas such as logistics and distribution, services, retail, healthcare, construction, maintenance, and even government. Therefore, lean thinking is beginning to implement its tools and techniques in all sectors today in general and in manufacturing sector in specific. Lean and six sigma are used as subgoals to measure the performance measurement in manufacturing companies (Hu et al. 2008a). In 1991, about two decades ago, when the industry leaders were trying to formulate a new paradigm for successful manufacturing enterprises in the twenty-first century, even though many manufacturing firms were still struggling to implement lean thinking and concepts, the agile manufacturing paradigm was formulated in response to the constantly changing new economy as a basis for returning to global competitiveness based on practical study under the auspices of the Iacocca Institute at Lehigh University. This study was sponsored by the US Navy Mantech program and involved 13 US companies. The objective of the study was to consider what the characteristics would be that successful manufacturing companies will possess (Groover, 2001). By the time the study was completed, more than 100 companies had participated in addition to the original 13 companies. The report of the study was entitled “21st Century Manufacturing Enterprise Study.” The term agile manufacturing was coined to describe a new manufacturing paradigm that was recognized as emerging to replace mass production. Agility means different things to different enterprises under different contexts. Agility is characterized by cooperativeness and synergism, a strategic vision, responsive creation and customer-valued delivery, nimble organization structures, and an information infrastructure

26

IB R A HIm H. G A RBIE

(Garbie et al., 2008a). Agile system does not represent a series of techniques much as it represents a fundamental change in production and/or management philosophies (Gunasekaran et al., 2002). These improvements required are required not only in a small scale but in a completely different way of doing business with the primary focus of flexibility and quick response to changing markets as well. As agility is used to update the level of manufacturing firms for competition or industry modernization programs, this new concept non-traditional or nonconventional, or nonclassic should be introduced into manufacturing firms to assess the competitive strategy of these firms. Evaluations of manufacturing firms non-traditionally are still the most important issue for the next period, and it will be highly considered. This will lead to a great change in the traditional manufacturing organizational/firms. There will be changes in production such that manufacturing firms will quickly respond to customer demand with high quality in compressed time. On the other side, it can be found that the traditional manufacturing workers on the shop floor will focus on their own small portion of the process without regard to the next step. There will be other changes in some areas such as the following: production support, production planning and control, quality assurance, purchasing, maintenance, marketing, engineering, human resources, finance, and accounting. These changes will cause a revolution in the manufacturing enterprises (Garbie et al., 2008a). However, there is a need for a systemic approach to evaluate and study the nonconventional performance measurements in manufacturing enterprises. Therefore, there is a strong relationship between lean production and agile manufacturing. Measuring leanness and agility must be related and integrated based on the complexity of the system. 2.3  Analysis of Non-Traditional Evaluation Aspects

Non-traditional aspects (complexity, agility, and leanness) are still ambiguous and an ill-structured problem because they are subjectively described assessments and are unsuitable and ineffective classical techniques. Regarding complexity, there are four important questions to be asked concerning manufacturing complexity (Garbie and Shikdar, 2011a) as follows:

N O N -T R A D ITI O N A L P ERF O Rm A N c E E VA LUATI O N

• • • •

27

How is the complexity level of a firm estimated? How can a firm reduce its complexity? Which issues are more important than others? How can firms identify the adverse factors for reducing complexity?

Regarding leanness aspect, there are also some comments to be discussed before analyzing the manufacturing leanness such as the following (Garbie, 2010): • Value-added and non-value-added activities. • Which lean manufacturing techniques can be used? • How the non-value-added can be eliminated? With respect to manufacturing agility, there are six important questions to be asked concerning agility as follows (Garbie et al., 2008a): • How far down the path is a company toward becoming a manufacturing organization? • How and to what degree does the organizational attributes affect the company’s business performance? • How do you measure or evaluate the agility of a company? • How can a company improve its agility? • Which factors are more important than others? • How can companies identify the adverse factors for improving? Based on these concepts of complexity, agility, and leanness, this proposed evaluation suggests three frameworks to focus on complexity, leanness, and agility, respectively. Regarding complexity, the system vision complexity, system structure (design) complexity, system operating, and system evaluation complexity are used as the infrastructures of complexity (see Table 2.1). Also, three major infrastructures of leanness (supplier related, customer related, and internally related) are used with their sub-major infrastructures to estimate the manufacturing leanness (see Table 2.2). With respect to agility, four infrastructures are used to focus on agile capabilities (technology, people, manufacturing strategy, and management) (see Table 2.3). They are considered to be the pillars of nonconventional performance evaluation of manufacturing enterprises (see Figure 2.1). As the overall problem of performance measurement is limited to the three frameworks, the

28

IB R A HIm H. G A RBIE

Table 2.1  Complexity Aspect and Its Components AspEct Complexity (CL)

ThEmE System vision (SV)

System design (SD)

System operating (SO)

System evaluation (SE)

SUBthEmE Time to market Supply chain management Demand variability Introducing no. of new products Product life cycle Product structure and design System design Manufacturing philosophies Status of operating resources Shop floor control Work in progress Business operations Product cost Quality Productivity Response Performance appraisal

Table 2.2  Leanness Aspect and Its Components AspEct Leanness (LM)

ThEmE

SUBthEmE

Supplier (SC)

Supplier feedback Just-in-time (JIT) delivery Supplier Customer Pull Continuous flow Setup time reduction Statistical process control Employees involvement Total productive maintenance

Customer (CU) Internal (IN)

major fundamental questions, what to measure, how to measure, and how to evaluate the results, will be determined. The analysis could be performed in an interview survey by quantifying the importance from 1 to 10 based on three concepts of evaluation: optimistic, most likely, and pessimistic. This analysis is also proposed from a manufacturing system analyst’s perspective, which means it has some delimitation by distributing a questionnaire among industry experts. These questions might not be enough, but they give an idea of how the company is struggling today and an indication of influences in the future.

N O N -T R A D ITI O N A L P ERF O Rm A N c E E VA LUATI O N

29

Table 2.3  Agility Aspect and Its Components AspEct Agility (AL)

ThEmE

SUBthEmE

People (PE)

Manufacturing strategies (MS)

Technology (TE)

Organization management (OM)

Complexity level (CL)

Knowledge and skills of workers Workforce empowerment Interpersonal skills Team-based work Job enrichment Job enlargement Improved workforce capability and flexibility Virtual manufacturing environment Supply chain management Concurrent engineering Reconfiguration Production design infrastructure Components infrastructure Information infrastructure Customer oriented Time to market for launching new product Number of new products produced by factory Interdepartmental conflicts

Leanness level (LM)

Agility level (AL)

Performance evaluation (PE)

Figure 2.1  Aspects of non-traditional performance evaluation.

2.4  Proposed Methodology 2.4.1  Fuzzy Logic Approach

The basic architecture of each aspect (complexity, agility, and leanness) is depicted in Figure 2.2. In order to perform the aspect evaluation, the system architecture consists of three main parts: fuzzification interface, fuzzy measure, and defuzzification interface. The details of fuzzy logic approach will be discussed in depth through methodology procedure. 2.4.1.1  Fuzzification Interface  The variables of basic-level attributes

may be expressed with fuzzy values to characterize their uncertainty.

30

IB R A HIm H. G A RBIE

Nonfuzzy data (crisp data)

Fuzzification interface Fuzzy measure Defuzzification interface Nonfuzzy output (crisp data)

Figure 2.2  Architecture for fuzzy logic approach.

Triangular membership functions were used in this study to express these basic-level attributes. Because the units and the range of raw values for the basic attributes are different, it is difficult to compare them directly. The raw value of each basic variable should be transformed into an index that is bounded in the uniform range from 1 to 10 by using the best value and worst value for the basic attributes. The transformation process normalizes the attribute values in relation to the best and worst values for a particular criterion. The expert assigns the best value (BV) and the worst value (WV) for a particular attribute. The linear transformation index value μ(xi ) can be calculated for the raw value of each attribute, Z i , as follows (Garbie et al., 2008a; Garbie and Shikdar, 2011a):

µ ( xi ) =

Zi − WV BV − WV

(2.1)

where Zi is the raw value of each attribute or each question (WV | i ∈ I , m ∈ M : Aim = 1}



Curr = {< i, j >|i ∈ I, j ∈ Oi : Pij > 0}



Decision variables

yi: Interval variable for job i in [0.3600 × Kmax × U]. eij: Interval variable for operation j of job i, size Ji /Oi. xim: Interval variable for alternative machines m of job i in [0.3600 × Kmax × U]. gm = pulse(eij , 1), ∀ m ∈ M (For pulse, see below).



∈Curr

In order to attach the related operations to jobs, we define them as interval variables. We indicate the definition space of jobs using in expression, which means that jobs must be completed between 0 and 3600 × Kmax × U seconds. We define the size of each operation according to the size of job i with Ji/Oimax, so the total size of those operations would be equal to the size of job i. As mentioned before, every job has an alternative machine set, and jobs must be assigned according to these sets. Therefore, we define the alternative machines as the interval variables and use the alternative expression to make sure that every job must be assigned to one of its alternative machines. We assume that

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

53

every operation causes 1 unit usage of machine, which is added up by pulse expression, and we present the cumulative usage of each machine with gm function. According to these variables, the constraints are summarized as follows:

min max i (endOf ( yi )) endBeforeStart (eij , eij +1 ),

(3.15)

∀i ∈ I , j ∈ Oi | j + 1 ≤ Oi

(3.16)

yields presenceOf (eij )  → presenceOf (eij +1 ), ∀i ∈ I , j ∈ Oi | j + 1 ≤ Oi

(3.17) span( yi , all ( j ∈ Oi )eij ), ∀i ∈ I

(3.18)

alternative( yi , all (a in Alter : a.i = i )xa ), ∀i ∈ I

(3.19)

noOverlap(all (a in Alter : a.m = m)xa ), ∀m ∈ M

(3.20)

 presenceOf (xa ) > 0   endOf (xa ) ≤ startOf (xb )   yields    OR → AND      presenceOf (xb ) > 0   endOf (xb ) ≤ startOf (xa )      ∀a, b ∈ Alter| Oa .imax > 1 AND b.i ≠ a.i AND b.m ≥ a.m AND

(3.21)

b.m ≤ a.m + Oa .imax − 1

∑g

m

≤M

(3.22)

The aim of CP1 is to minimize the maximum completion time of the production horizon. The endOf term defines the end time of a job, so expression (3.15) implies the Cmax. The endBeforeStart term prevents eij + 1 to start before eij ends; so constraint (3.16) ensures that the start time of operations must be sequential. The presenceOf term is a logical true–false expression, which returns 1–0, and constraint (3.17) links the operations of jobs together. Constraint (3.18) uses span expression to make sure that a job must cover its operations in terms of start and end times. Constraint (3.19) guarantees that each job can be assigned to one of its alternative machines. Constraint (20) prevents to assign more than one job to a machine at the same time with noOverlap m∈M

54

BUR c U C AG L A R G EN c O Sm A N E T A L .

Table 3.4  Comparison of MIP1 and CP1 with Real Data PRoBlEm 1 2 3 4 5 6 7 8 9 10

CMAX (H)

ElapsEd TimE (S)

No. of JoBs

No. of PERiods

MIP1

CP1

MIP1

5 10 15 20 25 30 35 40 45 52

5 8 13 22 25 27 32 40 49 54

16.3a 29.3a 44.9 77.6 85.0 99.0 119.2 146.8 — —

16.3a 29.3a 43.9 73.8 78.8 89.0 98.2 126.1 156.4 170.0

1.1 6.7 1800 1800 1800 1800 1800 1800 1800 1800

Gap%

ADOL%

CP1

MIP1

CP1

2.9 2.7 109.7 119.5 136.1 141.8 175.2 211.0 305.7 371.2

— — 22.3 63.7 78.7 87.8 91.3 95.9 — —

— — 2.1 4.9 7.3 10.0 17.6 14.1 — —

a Proven optimality. Note: Highlighted lower Cmax values are in bold.

term. Constraint (3.21) ensures that if job a is assigned to machine m, no other jobs can be assigned until its required number of operations are completed. Constraint (3.22) restricts the cumulative usage of machines with machine capacity. To evaluate the performance of CP1, we used the same 10 instances from real scheduling data as in Section 3.3. We compared CP1 with MIP1, and we restricted MIP1 within 1800 seconds and CP1 with 700,000 fail limits, which is determined in Section 4.1. We provide the results in Table 3.4 and highlighted lower Cmax values in bold. The first three columns present the number of instances, the number of jobs, and the number of periods. The fourth part indicates the makespan of problems that are calculated by MIP1 and CP1, the fifth part illustrates the elapsed time of two models. The gap for MIP1 is calculated with Equation 3.14. To calculate the improvement of schedules by using CP1 instead of MIP1, we use the average percentage deviation of the best objective line (ADOL) of MIP1 against CP1, which is presented in Equation 3.23 and detailed in last column of Table 3.4. (Meyr and Mann 2013).

ADOL % =

MIPCmax − CPCmax × 100 MIPCmax

(3.23)

Table 3.4 shows that MIP1 fails to find the optimal schedules of larger problems. On the other hand, CP1 performs better than MIP1, and

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

55

it finds successful schedules with an average of 157.6 seconds. In addition, CP1 improves the quality of feasible solutions by 7%, and it is able to generate feasible schedules for larger problems. However, because of the termination limit, CP1 has a heuristic nature, and we evaluate the best limit considering the trade-off between the quality of solutions and the elapsed time in Section 3.4.1. 3.4.1  Effective Value of the Fail Limit for CP1

The solution procedure of CP1 is different from MIP1. First of all, CP1 eliminates all infeasible solution points by its propagation techniques and builds the solution space with feasible solution points. Then, it searches the solution space for the best objective value by evaluating each feasible solution point. Therefore, if we limit the CP1 model with a sufficient time, it is able to reach the optimal solution of a problem, and because of this property, the CP1 model can be classified as an exact method. However, the termination limit of models is determined by the user, and this situation adversely affects the quality of the solution. If the user chooses a short time limitation, the model generates a feasible solution quickly, but it may be far from the optimal one. If the user chooses a higher time limitation, the quality of the solution could improve, but the solution time would take longer time. Since the termination limit is a significant parameter for the CP1 model, we need to determine an appropriate time limit by testing the algorithm on different problems. We generate 10 different problems by using the real production data for past 10 weeks. We consider 1-week schedule as one problem, and we produce 10 different problems, which are described in Section 3.6. We use the fail limit and the elapsed time as a termination limit for CP1. We run the CP1 model with different fail limits and with 1800 seconds time limit. We calculate the ADOL% of C max values CP1 models with fail limits against CP1 models with 1800 seconds. Thus, we try to find the gap between the solutions with fail limits and 1800 seconds. We detail our findings in Table 3.5 and in Table 3.6. The first two columns in Table 3.5 give information about the problems, and the remaining columns illustrate the ADOL% of C max values between the CP1 models with different fail limits and the CP1 model with time limit. For example, if we use the fail limit

56

ADOL% foR DiffEREnt Fail Limits PRoBlEm 1 2 3 4 5 6 7 8 9 10

No. of JoBs

CP1_ 100,000

CP1_ 300,000

CP1_ 500,000

CP1_ 700,000

CP1_ 900,000

CP1_ 1,000,000

CP1_ 1,500,000

CP1_ 1800 S

63 64 55 61 59 58 57 54 54 49 Avg.

−12.4 −14.6 −11.8 −13.3 −6.8 −10.4 −27.7 −9.8 −15.2 −10.0 −13.2

−8.1 −8.5 −5.7 −7.5 −2.4 −5.9 −20.2 0.0 −8.8 −6.1 −7.3

−5.8 −6.6 −5.7 −5.8 −2.3 −5.9 −7.2 0.0 −6.6 −4.6 −5.0

−3.0 −6.6 −5.7 −5.1 −1.2 −0.6 −7.2 0.0 −2.8 −1.3 −3.4

−2.3 −6.6 −5.7 −5.1 −1.2 0.0 −4.0 0.0 −2.8 −1.1 −2.9

−2.3 −6.6 −5.7 −5.1 −1.2 0.0 −2.3 0.0 −2.8 0.0 −2.6

−1.7 −4.2 −5.7 −2.7 0.0 0.0 −1.5 0.0 0.0 0.0 −1.6

0 0 0 0 0 0 0 0 0 0 0

BUR c U C AG L A R G EN c O Sm A N E T A L .

Table 3.5  ADOL% Values of CP1 Models with Different Fail Limits against CP1 Models with 1800 seconds

ElapsEd TimE (S) PRoBlEm 1 2 3 4 5 6 7 8 9 10

No. of JoBs

CP1_ 100,000

CP1_ 300,000

CP1_ 500,000

CP1_ 700,000

CP1_ 900,000

CP1_ 1,000,000

CP1_ 1,500,000

CP1_ 1800 S

63 64 55 61 59 58 57 54 54 49 Avg.

31.5 35.7 57.8 40.3 65.6 56.5 24.9 54.4 20.2 28.4 41.5

113.2 131.0 172.1 108.4 187.4 235.7 78.5 155.6 47.7 87.3 131.7

199 262.3 258.8 164.9 306.6 379.1 108.8 259.5 76.8 195.1 221.1

320.4 522.4 435.6 222.4 546.6 549.6 157.8 393.9 140.1 241.5 353.0

578.4 659.0 569.0 287.0 578.9 674.6 118.3 466.4 227.2 305.3 446.4

640.5 702.2 609.7 319.9 621.0 756.1 247.5 523.1 268.9 393.2 508.2

981.9 1055.8 943.3 450.6 898.2 1206.6 427.8 817.7 424.1 696.1 790.2

1800 1800 1800 1800 1800 1800 1800 1800 1800 1800 1800

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

Table 3.6  Elapsed Time in Seconds of CP1 Models with Different Termination Limits

57

58

BUR c U C AG L A R G EN c O Sm A N E T A L .

as 100,000, we generate schedules with an average of 13.2% worse than the schedules generated in 1800 seconds. Similarly, if we use 700,000 fail limit, the CP1 model generates 3.4% worse schedules than the CP1 model with 1800 seconds, and it only spends 353 seconds on average to reach these solutions instead of 1800 seconds. However, if we choose 900,000 fail limit, the average solution time would increase to 446.4 seconds, but the quality of the solution would only increase by 0.5% on average. Considering the trade-off between the quality of solutions and the elapsed time, we chose the termination limit as 700,000 fail limit. The results so far show that MIP1 is able to find optimal schedules for small-size instances, but it fails to find feasible solutions in acceptable times for larger instances. On the other hand, CP1 is able to find successful schedules for large instances however, it fails to prove optimality. In conclusion, we see that these models have weaknesses and strengths. We next develop methods to improve both MIP1 and CP1. 3.5  Model Improvements 3.5.1  MIP1 Improvements

Although MIP1 works slow for real-world problems, it provides information on the quality of a solution by either proving optimality or providing an optimality gap. Therefore, we attempt to accelerate the MIP1 model by searching the effects of initiating the model with a lower bound. We convert MIP1 to a relaxed-LP model (LP) and use its objective function value as a lower bound for MIP1. MIP1 starts with this lower bound and tries to find the optimal solution with the same constraints as given in Section 3.3 (MIP1). The experimental results, provided in Table 3.7, indicate that the LP solutions are too far from the optimal solutions, and they would not give a useful lower bound for MIP1. As a second approach, we use CP1 solution as an initial point for MIP1. CP1 is able to find successful solutions in short times, but it could not prove optimality. We first run CP1 to find a solution, and then we give this solution to MIP1 as a starting point. The idea is that starting a good initial point would increase the performance of

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

59

Table 3.7  Comparison of Relaxed LP, MIP1, and CP1 Using Real Data PRoBlEm 1 2 3 4 5 6 7 8 9 10

CMAX (H)

ElapsEd TimE (S)

No. of JoBs

No. of PERiods

LP

MIP1

CP1

LP

MIP1

CP1

5 10 15 20 25 30 35 40 45 52

5 8 13 22 25 27 32 40 49 54

11.7 19.0 18.9 18.9 18.9 45.5 45.5 45.5 48.4 48.3

16.3a 29.3a 44.9 77.6 85.0 99.0 119.2 146.8 — —

16.3a 29.3a 43.9 73.8 78.8 89.0 98.2 126.1 156.4 170.0

0.1 0.1 1.0 1.7 2.0 2.5 3.1 8.2 40.7 75.8

1.1 6.7 1800 1800 1800 1800 1800 1800 1800 1800

2.9 2.7 109.7 119.5 136.1 141.8 175.2 211.0 305.7 371.2

a Proven optimality. Note: Highlighted lower Cmax values are in bold.

MIP1; hence, we may be able to find optimal solutions of large-size problems. We also compare our findings with pure MIP1 solutions, and we record the solution stages of ILOG CPLEX 12.4. Results show that MIP1 with initial solution starts with a smaller gap, but the CPLEX algorithm generates more cuts for pure MIP1 automatically. In conclusion, the initial solution of CP1 is not useful for MIP1. In this section, we develop different approaches to accelerate MIP1 but none of them is good enough to increase the performance. Therefore we next continue with CP1 and work on improving it. 3.5.2  CP1 Improvements

As mentioned before, CP1 is able to find successful solutions in short times, but it could not prove optimality. The idea is to improve the performance of CP1 so that it may be faster to reach an optimal solution. The CP allows the user to define specific search algorithms, which includes rules for variable and value selection in constraint propagation phase. We develop various search algorithms and evaluate their performances. We consider xim decision variable representing the alternative machines of jobs. Intuitively, the jobs with the least alternative machine should be considered first. With this variable selection, we generate three different value selection methods: assigning

60

BUR c U C AG L A R G EN c O Sm A N E T A L .

jobs to machines with the smallest machine index first, assigning jobs to machines with the largest machine index first, and assigning jobs to machines with random order. We compare the performance of these search algorithms considering the solution points, number of branches, and choice points. The experimental results for randomly chosen problems indicate that the number of branches and choice points are the same for three distinct search algorithms. In addition, we increase the number of branches with an average of 85.6% and the choice points with an average of 88.4%, and we decrease the quality of solutions with an average of 0.2% by using proposed search algorithms instead of CP1 with automatic search algorithm. We conclude that the pure CP1 model searches the solution space faster than our search rules, and it reaches better results within same limits. In addition, we consider the most affected jobs in terms of the number of operations and the amount of production. We generate an algorithm that assigns the most affected job first but there is no significant difference between the previous algorithms. Although we could not improve the performance of CP1 by different search strategies, we could reduce the search space including some constraints to the model. If we aim to minimize the waste time of machines, CP1 would be able to find better schedules. We define a new decision variable called occupiedmk, which is dependent on machine and period. This variable checks machine m at time k × 3600 × 4. If machine m is processing a job at that time, the occupiedmk is equal to 1, otherwise is 0. This inspection is done for each machine and each period. We also can break the symmetry using constraint (3.24), which indicates that the occupancy rate of the first half of production horizon should be equal to or greater than the second half of production horizon: M



K /2

∑∑ m =1 k =1

M

occupied mk ≥

K

∑∑

m =1 k =1+ K / 2

occupied mk

(3.24)

To observe the impact of constraint (3.24), we repeat the experiments with randomly generated data that are detailed in Table 3.8. The first part represents the results of CP1 model, and the second part represents the results of CP1 model with constraint (3.24) and the occupiedmk decision variables. The new decision variable expands the number of branches and the choice points as expected, but this addition adversely

CP1 No. of JoBs 10 15 20 25 30 35 40 45 50

CP1_constRaint (3.24)

SolUtion TimE

CMAX

SolUtion Points

No. of BRanchEs

ChoicE Points

SolUtion TimE

CMAX

SolUtion Points

No. of BRanchEs

ChoicE Points

No. of BRanchEs

ChoicE Points

CMAX

2.8 36.1 73.5 99.1 143.8 191.9 267.2 313.6 494.6

131.3a 259.9 415.8 468.8 551.0 636.9 770.2 909.8 982.5

1 3 1 6 9 7 4 6 7

25,259 238,073 254,126 242,452 264,249 257,140 285,566 253,935 280,986

14,703 141,762 156,923 143,672 166,401 158,066 187,026 154,073 180,449

4.4 92.1 376.4 456.3 366.4 667.6 865.6 1114.5 1833.8

131.3a 259.9 415.8 468.8 551.3 637.9 771.9 906.5 984.8

1 4 4 7 8 8 3 11 20

25,433 313,010 394,509 365,114 293,161 305,326 311,744 323,773 321,911

16,350 215,426 297,572 267,207 194,710 206,781 212,921 225,204 222,315 Avg.

−0.7 −31.5 −55.2 −50.6 −10.9 −18.7 −9.2 −27.5 −14.6 −24.3

−11.2 −52.0 −89.6 −86.0 −17.0 −30.8 −13.8 −46.2 −23.2 −41.1

— — — — −0.1 −0.1 −0.2 0.4 −0.2 −0.1

Proven optimality. Note: Highlighted lower Cmax values are in bold.

a

ADOL%

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

Table 3.8  Impact of Constraint (3.24) to CP1 Model

61

62

BUR c U C AG L A R G EN c O Sm A N E T A L .

affects the performance of CP1 with constraint (3.24). Pure CP1 finds better schedules within the same limits. According to the generated schedules by two models, it is observed that there is no difference between solutions in terms of occupancy rates. In conclusion, we can claim that CP1 generates schedules with the same logic without constraint (3.24). The symmetry breaking is an important tool to reduce the solution space of CP1. Using an effective symmetry-breaking constraint decreases the alternative solutions and improves the performance of the model. In the previous experiment, we attempt to reduce the solution space by constraint (3.24) considering the time axis, which is already performed by pure CP1. However, we observe that in the optimal schedules, the first half of the machines are more occupied than the other machines. Therefore, we generate a new symmetrybreaking constraint (3.24′) instead of constraint (3.24), which takes into account the symmetry of assigned jobs to machines:





a in Alter | a . m ≤6

presenceOf (xa ) ≥



presenceOf (xa )

a in Alter | a . m >6



(3.24′)

To investigate the impact of constraint (3.24′), we repeat the experiments and present the findings in Table 3.9. According to the last part of Table 3.9, if we use CP1_constraint (3.24′) instead of CP1, we can reduce the number of branches with an average of 1.7%, choice points with an average of 1.6%, and improve the quality of solutions with an average of 0.04% by reaching more feasible solutions. In conclusion, we can improve the performance of CP1 including the symmetry-breaking constraint (3.24′) to the model. In this section, we develop methods to improve the performance of MIP1 and CP1. Although we could not achieve an improvement with MIP1, we accelerate CP1 by constraint (3.24′). We next compare the performance of CP1 with the real schedules. 3.6  Comparison of Real-World Schedules with CP1

Beycelik Gestamp works with weekly demands. A production engineer spends the first day of the week to generate schedules and spends 4  hours a day to make sure the production is running smoothly.

CP1 No. of JoBs 10 15 20 25 30 35 40 45 50

CP1_constRaint (3.24′)

ADOL%

SolUtion TimE

CMAX

SolUtion Points

No. of BRanchEs

ChoicE Points

SolUtion TimE

CMAX

SolUtion Points

No. of BRanchEs

ChoicE Points

No. of BRanchEs

ChoicE Points

CMAX

2.8 36.1 73.5 99.1 143.8 191.9 267.2 313.6 494.6

131.3 259.9 415.8 468.8 551.0 636.9 770.2 909.8 982.5

1 3 1 6 9 7 4 6 7

25,259 238,073 254,126 242,452 264,249 257,140 285,566 253,935 280,986

14,703 141,762 156,923 143,672 166,401 158,066 187,026 154,073 180,449

3.0 39.9 74.5 96.8 131.9 178.4 240.4 319.2 451.4

131.3 259.9 415.8 468.8 551.0 636.9 770.2 906.5 981.8

1 2 1 6 10 7 4 5 15

22,231 249,220 254,159 240,544 257,615 254,668 285,798 266,882 255,443

12,942 154,250 156,949 142,679 160,432 155,192 187,381 167,312 155,303 Avg.

12.0 −4.7 0.0 0.8 2.5 1.0 −0.1 −5.1 9.1 1.7

12.0 −8.8 0.0 0.7 3.6 1.8 −0.2 −8.6 13.9 1.6

0 0 0 0 0 0 0 0.36 0.07 0.05

Note: Highlighted lower Cmax values are in bold.

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

Table 3.9  Impact of Constraint (3.24′) to CP1 Model

63

64

BUR c U C AG L A R G EN c O Sm A N E T A L .

Thus, the engineer spends 28  hours to maintain the production for the whole week. However, we can generate successful schedules in minutes by CP1. In order to simulate the real-world schedules, we examine the real-world schedules of past 10 weeks, and we generate 10 different problems. We consider the actual production time in the company by eliminating shift/lunch breaks. The company and CP1 assume that jobs may not be preempted, the production period is 4 hours, and the changeovers are done in shift/lunch breaks. Thus, we do not need to consider the changeovers in both systems. However, the company faces with two groups of interruptions; the unpredictable interruptions include breakdowns of machines and maintenance activities, and the predictable interruptions include educations of operators and public holidays. To reflect the real system accurately, we consider these interruptions as dummy jobs. We first determine the start time, the duration, and the related machine to which interruption happened. We run the CP1 model without interruptions. If the model generates a schedule that is completed before the interruption, we do not need to consider the interruption. For example, if a model finds a schedule with the Cmax value of 100 hours and a breakdown happens after 120 hours, we do not need to consider this interruption. On the other hand, if the length of the schedule includes the start time of the interruption, we define a dummy job to represent the interruption. The dummy job includes only one operation and only one alternative machine. The processing time is determined by considering the total duration of the interruption. We run the CP1 model with dummy jobs and with a new constraint to present the interruptions at the same time and machine with the real-world schedule. If an interruption occurs at machine m in period k, the dummy job i' is represented with the constraint (3.25) in CP1 model:

startOf (xa ) = k × 4 × 3600, ∀a ∈ Alter| a.m = m, a.i = i ′ (3.25)

The dummy jobs have the same durations with the real-world schedules, and they are assigned to the same machine at the same time. Therefore, we can reflect the production horizon with interruptions precisely. We run the CP1 model with 700,000 fail limit (CP1_700,000) and 1800 seconds (CP1_1800) as seen in Table 3.10. The first two columns give information about the problems, and

PRoBlEm 1 2 3 4 5 6 7 8 9 10

CMAX (H)

ElapsEd TimE (S)

ADOL%

No. of JoBs

Company

CP_700,000

CP_1800

Company

CP_700,000

CP_1800

CP_700,000

CP_1800

63 64 55 61 59 58 57 54 54 49

150 142.5 142.5 150 142.5 142.5 150 127.5 135 150

108.9 120.9 110.4 121.6 112.3 116.7 118.1 106.7 134.2 137.0

108.1 116.5 105.5 119.6 111.0 112.2 118.1 106.7 131.8 137.0 Avg.

100,800 100,800 100,800 100,800 100,800 100,800 100,800 100,800 100,800 100,800 100,800

416.4 280.0 338.0 374.8 394.4 570.8 225.6 232.1 358.4 374.7 356.5

1800 1800 1800 1800 1800 1800 1800 1800 1800 1800 1800

27.4 15.2 22.5 18.9 21.2 18.1 21.3 16.3 0.6 8.7 17.0

28.0 18.2 25.9 20.3 22.1 21.3 21.3 16.3 2.3 8.7 18.4

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

Table 3.10  Comparison of CP1 with Real-World Schedules

65

66

BUR c U C AG L A R G EN c O Sm A N E T A L .

the remaining columns illustrate the C max values of schedules by CP1_700,000 and CP1_1800, the elapsed times, and the ADOL% values of CP1 models against the real-world schedules of the company. The company spends 28 hours on average to generate the realworld schedules. However, we can improve the manually generated schedules with an average of 17% by CP1_700,000, and the model spends only 356.5 seconds instead of 28 hours (100,800 seconds). If we aim to find better schedules, we can improve the schedules with an average of 18.4% by using CP1_1800. Although CP1_1800 is able reach better results, the difference between CP1_700,000 and CP1_1800 is 1.4% on average, and the company may choose to generate weekly schedules in 356.5 seconds rather than 1800 seconds. In conclusion, the CP1 model is able to improve the weekly schedules with an average of 17%, which has a positive effect on reducing the production cost and increasing the weekly capacity. 3.7  Conclusion

This chapter presents a novel problem description of stamping scheduling. The problem is presented by a MIP model utilizing the conventions and practices of the company. The complexity of the problem adversely affects the quality of solutions, and MIP1 could not find any feasible solutions of the real-world instances of the problem. We next develop a new model, CP1, by using CP. The CP1 model is able to find successful solutions of the problem instances; however, its termination limit has a negative effect on the solutions. To determine the trade-off between the solution quality and the solution time, we run experiments to determine a good termination limit. Although we develop various approaches to combine MIP1 and CP1, the pure CP1 model outperforms MIP1 and combined methods. Therefore, we focus on improving CP1 model by considering the symmetry-breaking constraints. We also compare CP1 model with manually generated schedules by the company. The CP1 model improves the weekly schedules with an average of 17% and reaches these solutions within 356.5 seconds. The company can convert CP1 model to a practical software and can use it to generate effective schedules in minutes.

AU T O m O TI V E S TA mpIN G Op ER ATI O NS Sc HED ULIN G

References

67

Baptiste, P., Le Pape, C., and Nuijten, W. (2001). Constraint-Based Scheduling: Applying Constraint Programming to Scheduling Problems (Vol. 39). Springer, Berlin, Germany. Barlatt, A. Y., Cohn, A., Gusikhin, O., Fradkin, Y., Davidson, R., and Batey, J. (2012). Ford Motor Company implements integrated planning and scheduling in a complex automotive manufacturing environment. Interfaces, 42(5), 478–491. Błażewicz, J., Ecker, K. H., and Pesch, E. (Eds.). (2007). Handbook on Scheduling [Electronic Resource]: From Theory to Applications. Springer, Berlin, Germany. Edis, E. B., Oguz, C., and Ozkarahan, I. (2013). Parallel machine scheduling with additional resources: Notation, classification, models and solution methods. European Journal of Operational Research, 230(3), 449–463. Edis, E. B. and Ozkarahan, I. (2012). Solution approaches for a reallife resource-constrained parallel machine scheduling problem. The International Journal of Advanced Manufacturing Technology, 58(9–12), 1141–1153. Khayat, G. E., Langevin, A., and Riopel, D. (2006). Integrated production and material handling scheduling using mathematical programming and constraint programming. European Journal of Operational Research, 175(3), 1818–1832. Malapert, A., Guéret, C., and Rousseau, L. M. (2012). A constraint programming approach for a batch processing problem with non-identical job sizes. European Journal of Operational Research, 221(3), 533–545. Meyr, H. and Mann, M. (2013). A decomposition approach for the general lotsizing and scheduling problem for parallel production lines. European Journal of Operational Research, 229(3), 718–731. Nuijten, W. P. and Aarts, E. H. (1996). A computational study of constraint satisfaction for multiple capacitated job shop scheduling. European Journal of Operational Research, 90(2), 269–284. Pinedo, M. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, Berlin, Germany. Relvas, S., Boschetto Magatão, S. N., Barbosa-Póvoa, A. P. F., and Neves Jr., F. (2013). Integrated scheduling and inventory management of an oil products distribution system. Omega, 41(6), 955–968. Van Hentenryck, P. (1999). The OPL Optimization Programming Language. MIT Press, Cambridge, MA.

This page intentionally left blank

4 VOTiN G S ECURiT Y iN S O CiAL M ED iA S ei F e D i N e Ka D Ry Contents

4.1 Introduction 4.2 What Are Social Networks? 4.3 Network Workbench Tool 4.4 Simulation and Results 4.4.1 Data Description 4.4.2 Voting Security 4.5 Heider’s Balance Theory 4.6 Conclusion References

69 70 71 72 72 72 77 79 80

4.1  Introduction

Social networks are websites on the Internet with a common ownership. The members of social networks have some privacy, with some information being available for selected people but blocked for others. The security problem is a very complex issue in social networks. Theft of personal data and intrusion into profile pages are recorded in large numbers. Researchers and developers are therefore looking for ways to protect these data. The biggest problem is how to protect the voting in social networks from the risk of penetration. The most important risks in voting are votes that are stolen by candidates or those obtained by creating fake votes. In this chapter, we develop a fast and accurate process for checking the social network voting data. This method will compare the votes cast by voters with those obtained by candidates with the use of algorithms to detect any stolen or fake votes [1].

69

70

SEIF ED INE K A D RY

4.2  What Are Social Networks?

Social networks are networks of relationships between institutions or individuals that share a common bond, such as classmates, coworkers, or family members. Social networking is the number of websites providing services to users for blogs, e-mail, publishing articles, posting photos, exchanging ideas, advertising, etc. [2]. The structure of social networks makes it easier for researchers to visualize the structure of social relations between institutions and individuals. In the last few years, social networking sites have increasingly spread around the world at a fast pace. Some social networking websites involve as many as 800 million users. The most popular social networking websites that have fastest and widespread usage worldwide are Wikipedia, Facebook, Twitter, and Myspace, among others. Wikipedia, deriving from wiki, meaning “quick” in Hawaiian, and “pedia,” the suffix in “encyclopedia,” was launched in January 2001. Wikipedia is a free encyclopedia where any user can create or edit articles. It is a very important website on the Internet comprising a classification of terms and is the sixth most visited website worldwide. In 2012, Wikipedia was available in more than 225 languages. In the United States alone, the Wikipedia website recorded 2700 million visits per month [3]. Social network analysis (SNA) is a method of viewing the structure of social networks, where each individual or institution within the social network is considered a node and every relationship between the nodes is called a link (edge). The purpose of graphical social networks is to understand the structure and the links between members, and to apply social theories to them [2]. Voting on social networks is one of the main reasons for the success of the idea of social networks on the Internet. Social networks allow users the freedom to express their opinions and vote democratically. Users of social networks can vote for an article, image, or video [4]. For example, they can vote for a video on YouTube (like or dislike) or a status or photo on Facebook (like); shortly, Facebook will be adding “dislike” to voting. For Wikipedia, one can vote to choose

V O TIN G SEc URIT Y IN S O cIA L M ED IA

71

administrators (negative or positive). As Wikipedia is a nonprofit website, administrators are selected by the election of some incompetent users. The duty of administrators is correcting articles. The most important goal of voting on social networks is to know the views of the members of the community and their aspirations and promotion of social democracy. 4.3 Network Workbench Tool

The network workbench (NWB) tool is the most famous and the latest SNA software in recent years. The NWB (Figure 4.1) is a network analysis, modeling, and visualization toolkit for physics, biomedical, and social science research [5–7]. The NWB is designed on a Cyber Infrastructure Shell (CIShell). In 2007, it became an open-source software framework for the easy integration and utilization of datasets, algorithms, tools, and computing resources. The NWB defines the features of a social network in the simplest way, through representation by numbers, graphs, and histograms.

Figure 4.1  User interface for NWB.

72

SEIF ED INE K A D RY

The NWB is an easy tool for researchers and developers of the concept of social networks that involve a large number of algorithms to be applied to extensive data of social networks. This tool allows users to create a model network and apply the necessary algorithms; they may also use different visualizations to represent the network to effectively adopt the characteristics of the network analyzed. In 2008, the NWB was updated as having 80 algorithms and 30 sample datasets. As the NWB tool is developed using JAVA, algorithms developed in other programming languages such as FORTRAN, C, and C++ can be easily integrated. The NWB interface is easy to use and consists of several lists containing the names of the algorithms, of network models, and a number of other functions available in the tool. 4.4  Simulation and Results 4.4.1  Data Description

Wikipedia is a nonprofit organization that hosts a large and continually increasing number of articles (in 2012, over 3.9 million articles had been published in English). Articles are constantly added by the numerous users of the website. A small number of users are administrators, who are responsible for the maintenance and amendment of articles. The administrators of Wikipedia must be elected by its users. The data shown in Figure 4.2 are taken from January 2008 and record users' votes to select administrators [8–10]. Users can cast positive or negative votes. As shown in Figure 4.3, the total number of votes is 103,663 and the number of users participating in the voting is 7,066. The group comprises users and administrators. The number of candidates who received votes (positive or negative) is 2,794. Of these, 1,235 winners received votes in their favor and were promoted to administrators, while the remaining 1,559 candidates received negative votes and lost.

4.4.2  Voting Security

The in-degree of a node is the number of edges (links) coming into the node. The in-degree algorithm builds two histograms, each showing the in-degree values of all nodes calculated in a different way.


Figure 4.2  Structure of data: 7,066 members of the group; 2,794 members received votes; 1,235 members received positive votes; 1,559 members received negative votes.

Figure 4.3  Extract of data network.

In the first histogram (Figure 4.4), the occurrence of each in-degree value between the minimum and the maximum is counted and divided by the number of nodes in the network in order to obtain a probability. In the second histogram (Figure 4.5), the interval spanning the in-degree values is divided into bins whose size grows toward higher values of the variable; the size of each bin is obtained by multiplying the size of the previous bin by a fixed number [11,12]. The purpose of applying this algorithm to the data is to find the probability distribution of the votes received by the candidates, where each node in the network is considered a candidate and each incoming edge is a vote.

The out-degree of a node is the number of edges (links) going out of the node. Applying the out-degree algorithm gives two histograms, each showing the out-degree values of all nodes calculated in a different way. In the first histogram (Figure 4.6), the occurrence of each out-degree value between the minimum and the maximum is counted and divided by the number of nodes in the network to obtain a probability. In the second histogram (Figure 4.7), the interval spanning the out-degree values is divided into bins whose size grows toward higher values of the variable; the size of each bin is obtained by multiplying the size of the previous bin by a fixed number [13,14]. The purpose of applying this algorithm to the data is to find the probability distribution of the votes cast by the voters, where each node in the network is considered a voter and each outgoing edge is a vote.
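As a rough, illustrative sketch of these degree calculations outside the NWB, the snippet below computes the in-degree and out-degree probability distributions, a logarithmically binned in-degree distribution, and PageRank scores for the Wikipedia voting network using the Python networkx library. The file name wiki-Vote.txt (the SNAP edge list cited in [8]) and the use of networkx instead of the NWB tool are assumptions made only for illustration.

# Minimal sketch (assumes the SNAP "wiki-Vote.txt" edge list has been downloaded
# locally); the original study used the Network Workbench tool, not networkx.
from collections import Counter

import networkx as nx

# Each line "A B" means user A voted for candidate B, so votes are directed edges.
G = nx.read_edgelist("wiki-Vote.txt", create_using=nx.DiGraph, comments="#", nodetype=int)
n = G.number_of_nodes()

# Probability of each in-degree value (votes received) and out-degree value (votes cast):
# the occurrence of each value divided by the number of nodes.
in_dist = {k: c / n for k, c in Counter(d for _, d in G.in_degree()).items()}
out_dist = {k: c / n for k, c in Counter(d for _, d in G.out_degree()).items()}

# Logarithmic binning: each bin is a fixed factor wider than the previous one.
def log_binned(degrees, factor=2.0):
    counts, lower, upper = [], 1, 2      # first bin covers degree 1 only
    max_deg = max(degrees)
    while lower <= max_deg:
        counts.append(((lower, upper), sum(1 for d in degrees if lower <= d < upper) / n))
        lower, upper = upper, upper + int((upper - lower) * factor)
    return counts

binned_in = log_binned([d for _, d in G.in_degree() if d > 0])

# PageRank with damping factor alpha (anticipating the PageRank discussion below).
pr = nx.pagerank(G, alpha=0.85)

print(n, G.number_of_edges())
print(sorted(in_dist.items())[:5])
print(binned_in[:5])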

Figure 4.4  In-degree distribution divided by the number of nodes.

Figure 4.5  In-degree distribution divided by the number of bins.

Figure 4.6  Out-degree distribution divided by the number of nodes.

Figure 4.7  Out-degree distribution divided by the number of bins.

The PageRank algorithm calculates the rank of web pages. The histogram in Figure 4.7 is divided into intervals spanned by the PageRank values: one divides the range of variation of the PageRank into equal bins and determines the number of nodes whose PageRank values lie inside each bin; the counts are then divided by the number of nodes in the network to obtain probabilities. The damping factor is a real number between 0 and 1 that is used in calculating the probability. When measuring the PageRank of a large website such as Google, the damping factor is set at 0.85; when measuring the PageRank of social network data, the damping factor is usually set at 0.15 (i.e., when we apply the algorithm, we set the damping factor to 0.15). The purpose of applying this algorithm to the data is to find the probability of the rate of visiting a web page in order to vote [15].

4.5  Heider's Balance Theory

Heider's balance theory is one of the most important theories of social network analysis. It explains the relationships between individuals in a social network on the basis of emotions. The theory can be applied to several social platforms, to businesses, to elections, and even to identify those following people on a particular subject [16,17]. In social networks, it can be used to understand the structure of the network, to measure the strength of relationships, and to discover the type of relationship between individuals within the social network. If a balanced state occurs between two people (e.g., if there is a link between the first person A and the second person B such that A → B and B → A), this relationship is called dyad reciprocity. If the relationship between three people {A, B, C} is such that A → B, B → C, and C → A, this is called a triad.

Table 4.1 shows the results of applying these algorithms to the Wikipedia voting data.

Table 4.1  Votes in Wikipedia Results

Status  Algorithm                                                        Result
1       Number of dyads with reciprocated relation                       100,762
2       Number of ordered triads (A → B and B → C)                       29,091,160
3       Number of transitive ordered triads (A → B, B → C, and C → A)    3,650,334

Status 1, in which the dyad algorithm is used, counts the number of dyads with the reciprocated relation A → B and B → A. This number is very large relative to the total number of links between the nodes in the network; in other words, reciprocal voting relationships among the users of Wikipedia are very common, which indicates a good relationship between individuals in the network.

Status 2, in which the triad algorithm is used, counts the number of ordered triads (A → B and B → C). Heider's balance theory assumes the principle of triangular relationships.


For example, my friend's friend is my friend, my friend's enemy is my enemy, my enemy's friend is my enemy, and my enemy's enemy is my friend. This means that if user A has a positive link with user B and user B has a positive link with user C, then in all probability user C will have a positive link with user A. In this way, one can theoretically guess the votes issued by user C, even though those votes are not yet present. The theory could even benefit political elections, where one can guess how people who have not yet voted would choose, either because they are under the legal voting age or because they did not participate for other reasons. For example, suppose party A obtains 100 votes in the first city and party B obtains 100 votes in the second city. The theory helps resolve this tie: if the first city has more voters of legal age than the second city, then party A can be expected to win, because its result corresponds to the ideas, ideology, and religion of the voters, their family members, and the population of the city.

In status 3, the triad algorithm counts the number of transitive ordered triads (A → B, B → C, and C → A); in other words, it takes into account only the three links between the nodes that already exist, without needing estimation.

Diagrams of the three cases: Status 1, a reciprocated dyad between A and B; Status 2, an ordered triad over A, B, and C; Status 3, a transitive ordered triad over A, B, and C.
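As a small, hedged sketch of how the dyad and triad counts reported in Table 4.1 could be computed directly from the directed voting graph, the following helpers follow the chapter's own definitions of the three statuses. The file name, the use of networkx, and the counting functions themselves are illustrative assumptions; they are not the NWB implementations.

# Counting reciprocated dyads and triads on the directed voting graph (a sketch).
import networkx as nx

def reciprocated_dyads(G):
    # Dyads A -> B with B -> A; each pair is seen from both edges, so divide by 2.
    return sum(1 for a, b in G.edges() if G.has_edge(b, a)) // 2

def ordered_triads(G):
    # Ordered triads A -> B and B -> C with A != C: paths of length 2 through B.
    total = 0
    for b in G:
        # All (A, C) combinations minus those where A == C (reciprocated edges at B).
        recip_at_b = sum(1 for a in G.predecessors(b) if G.has_edge(b, a))
        total += G.in_degree(b) * G.out_degree(b) - recip_at_b
    return total

def transitive_ordered_triads(G):
    # Triads A -> B, B -> C, and C -> A as defined in the chapter; each such triad
    # is found once per starting edge, so divide by 3.
    count = 0
    for a, b in G.edges():
        for c in G.successors(b):
            if c != a and c != b and G.has_edge(c, a):
                count += 1
    return count // 3

G = nx.read_edgelist("wiki-Vote.txt", create_using=nx.DiGraph, comments="#", nodetype=int)
print(reciprocated_dyads(G), ordered_triads(G), transitive_ordered_triads(G))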

The balance theory enables us to check whether the data in a network are real or fictitious. If the data are obtained from a reliable source, or if they are existing data from an actual social network, the data are considered real. When the dyad and triad algorithms were applied to the network data, the results showed large numbers of such links, which indicates that the network data were balanced and real. Many of the links within the network shared the dyad relationship more than the triad.

4.6  Conclusion

The purpose of this study is to check large amounts of data in social networks. If the data are vulnerable to hacking and counterfeiting, this can be detected by using certain algorithms contained within SNA software. In this chapter, we examined the data from Wikipedia's voting to choose its administrators. The results of the analysis revealed that the election was not exposed to fraud or electronic piracy. The NWB tool was used for the analysis. The in-degree and out-degree algorithms were applied to calculate the number of votes cast by the voters and the number obtained by the candidates. If the curve for in-degree is similar to the curve for out-degree, the number of votes cast by the voters equals the number of votes obtained by the candidates. When applying the k-nearest neighbors (KNN) algorithm, the tool compares the degree of each node with the degrees of its neighbors; in other words, the number of votes is compared with the votes of neighbors. In the four cases that compared the sent and received votes of users with those of their neighbors, the curves obtained were similar. When applying PageRank to measure the rate of users' access to the voting page, the rate of users logging in to the voting web page was compatible with the number of voters and the number of votes in the network. This indicates that no votes were deleted or added. The balance theory showed that both dyad and triad relations were found between the nodes inside the network, with large numbers of such links in the social network. This indicates that the network data are real and not fictional. It also shows the strength of social cohesion between the individuals involved in Wikipedia. Moreover, the balance theory allows the use of these algorithms to predict the votes of some voters.


Researchers in this field are working on expanding the use of these algorithms for a quick check of data, especially for electronic voting in social networks and in other areas. Moreover, there is a need to develop efficient algorithms for the detection of fraud in electronic voting. Voting in social networks is a good and important way of getting feedback from the community about social issues. Thus, social networks play a significant role in granting freedom of expression for members of the community.

References

1. G. Hogben (2009) Security issues in the future of social networking. ENISA Position Paper for W3C Workshop on the Future of Social Networking, Barcelona, Spain.
2. D. Passmore (2011) Social network analysis: Theory and applications. Retrieved January 3, 2011, from http://train.ed.psu.edu/WFED-543/SocNet_TheoryApp.pdf.
3. Wikipedia. Retrieved June 15, 2012, from http://en.wikipedia.org/wiki/Wikipedia.
4. P. Boldi, F. Bonchi, C. Castillo, and S. Vigna (2009) Voting in social networks. In Conference on Information and Knowledge Management (CIKM), pp. 777–786.
5. Network Workbench About. Retrieved June 15, 2012, from http://nwb.cns.iu.edu/about.html.
6. B. Herr and W. Huang (2007) Introduction to the network workbench and cyber infrastructure shell, Introductory Paper. School of Library and Information Science, Indiana University, Bloomington, IN.
7. B. (Weixia) Huang, M. Linnemeier, T. Kelley, and R. J. Duhon (2009) Network workbench tool. Cyberinfrastructure for Network Science Center, School of Library and Information Science, Indiana University, Bloomington, IN.
8. Stanford University. Wikipedia vote network. Retrieved June 15, 2012, from http://snap.stanford.edu/data/wiki-Vote.html.
9. J. Leskovec, D. Huttenlocher, and J. Kleinberg (2010) Predicting positive and negative links in online social networks, ACM WWW International Conference on World Wide Web (WWW).
10. J. Leskovec, D. Huttenlocher, and J. Kleinberg (2010) Signed networks in social media, ACM SIGCHI Conference on Human Factors in Computing Systems (CHI).
11. Network Workbench. Node in-degree. Retrieved June 15, 2012, from https://nwb.slis.indiana.edu/community/?n=AnalyzeData.NodeIndegree.


12. Network Workbench. In-degree distribution. Retrieved June 15, 2012, from https://nwb.slis.indiana.edu/community/?n=AnalyzeData.IndegreeDistribution.
13. Network Workbench. Node out-degree. Retrieved June 15, 2012, from https://nwb.slis.indiana.edu/community/?n=AnalyzeData.NodeOutdegree.
14. Network Workbench. Out-degree distribution. Retrieved June 15, 2012, from https://nwb.slis.indiana.edu/community/?n=AnalyzeData.OutdegreeDistribution.
15. Network Workbench. PageRank. Retrieved June 15, 2012, from https://nwb.slis.indiana.edu/community/?n=AnalyzeData.PageRank.
16. D. Khanafiah and H. Situngkir (2003) Social balance theory: Revisiting Heider's balance theory for many agents. Technical Report. Retrieved July 11, 2014, from http://cogprints.org/3641/1/Heidcog.pdf.
17. N. P. Hummon and P. Doreian (2003) Some dynamics of social balance processes: Bringing Heider back into balance theory, Social Networks 25: 17–49.


5  Integrated Approach to Optimize Open-Pit Mine Block Sequencing

Amin Mousavi, Erhan Kozan, and Shi Qiang Liu

Contents

5.1  Introduction
5.2  Mathematical Programming
     5.2.1  Notations
     5.2.2  Decision Variables
     5.2.3  Objective Function
     5.2.4  Constraints
5.3  Computational Experiments
5.4  Conclusion
Acknowledgment
References

5.1  Introduction

Nowadays, optimization techniques are applied to solve a variety of problems in the mining industry, including production planning, production scheduling, determining capacities, optimizing the mining layout, obtaining optimal resource allocation, determining material destinations, equipment maintenance, and rostering (Kozan and Liu, 2011; Newman et al., 2010; Osanloo et al., 2008). One of the vital optimization problems in open-pit mining is to determine the optimal extraction sequence of material. The open-pit mine sequencing problem is defined as specifying the sequence in which material should be extracted from pits and then transferred to appropriate destinations. Generally, material with no economic value is dumped, while profitable material is processed at mills or stocked at stockpiles for future use.


In the first step of mining operation optimization, the mineral deposit is divided into several blocks, and attributes such as grade and density are estimated for each block. The set of these blocks covering the whole deposit and the surrounding area is called the three-dimensional (3D) block model. A block model contains several thousand to over several million blocks, depending on the size of the orebody and the dimensions of each individual block. The block model provides the most important information for open-pit optimization problems. The kriging estimator, as the best linear unbiased estimator, and geostatistical simulation methods such as sequential Gaussian simulation, p-field simulation, and simulated annealing are widely used to estimate block attributes (Lark et al., 2006; Vann et al., 2002; Verly, 2005). After estimating block characteristics, the block economic value or cash flow of the block can be calculated by considering economic parameters.

Identifying the grade that classifies material into waste or ore has been a challenging subject in mining engineering for several decades. A cut-off grade is applied to distinguish ore from waste in a given mineral deposit. Generally, the cut-off grade is defined as the minimum amount of valuable mineral that must exist in one unit (e.g., one tonne) of material before this material is sent to the processing plant. In conventional methods, the cut-off grade is determined as a function of the product price and the cash costs of mining and processing, while in reality the capacities of mining and processing as well as the grade–tonnage distribution of the deposit should also be taken into account (Asad and Topal, 2011; Johnson et al., 2011). Therefore, the most comprehensive way to determine a dynamic cut-off grade is to integrate cut-off grade optimization with the determination of the extraction sequence.

The mine production sequencing problem may be solved at different levels of accuracy. For simplification, some blocks are aggregated into bigger units to obtain extraction sequences of these units with less computational effort (Askari-Nasab et al., 2010; Ramazan, 2007). As a result of the aggregation, the distinct nature of these blocks is ignored. Therefore, to keep the resolution of the solution, the production sequencing problem is solved at the level of the block, the smallest unit of material for which attributes are estimated, and the problem is called the open-pit block sequencing (OPBS) problem (Cullenbine et al., 2011). The OPBS is a challenging problem


to solve due to the size of the problem, in terms of the number of decision variables and constraints, and due to its complexity. Typically, the constraints relate to accessibility of the blocks, mining and milling capacities, grades of mill feed and concentrates, capacities of extraction equipment, and physical and operational requirements such as the minimum required width for the machinery. The objective of the OPBS problem is to maximize the mining economic value together with mining operation efficiency.

In the literature, mixed integer programming (MIP) has been used to formulate the OPBS problem. Dagdelen and Johnson (1986) applied Lagrangian relaxation to relax and solve the MIP model of the OPBS problem. Bienstock and Zuckerberg (2010) presented a column generation method with some modifications in the iterations to solve linear programming relaxations. Ramazan (2007) proposed a tree algorithm to reduce the size of the MIP formulation. Caccetta and Hill (2003) presented a branch-and-cut algorithm to solve the model, but the details were not given in their paper. Ramazan and Dimitrakopoulos (2004) proposed a relaxed MIP model with fewer binary variables. Menabde et al. (2004) presented a MIP model that simultaneously integrates cut-off grade optimization. Boland et al. (2007) presented a disaggregation method in order to control the processing feed at the level of block decisions and increase the freedom of the variables.

To solve the OPBS problem more efficiently, several authors developed heuristics. Gershon (1987) proposed a heuristic approach for the OPBS in which blocks are ranked based on the value of the blocks located beneath the given block; blocks with higher rank have priority to be extracted. Tolwinski and Underwood (1996) combined dynamic programming, stochastic optimization, artificial intelligence, and heuristic approaches to solve the OPBS problem. Cullenbine et al. (2011) proposed a sliding-time-window heuristic algorithm to solve the model. Chicoisne et al. (2012) combined LP relaxations of the problem with a topological-sorting-based rounding algorithm. Sattarvand and Niemann-Delius (2008) discussed metaheuristics that have been applied to the OPBS. In an early work, Denby and Schofield (1994) applied a genetic algorithm to solve the large-size problem. Kumral and Dowd (2005) recommended simulated annealing to obtain the solution of the OPBS problem. The Lagrangian relaxation technique was applied to obtain a suboptimal solution,


and later simulated annealing was applied to improve the initial solution of the multiobjective model. Ferland et al. (2007) developed a hybrid greedy randomized adaptive search procedure (GRASP) and particle swarm algorithm: the GRASP is employed to construct the initial population (swarm), and the particle swarm algorithm then searches within a feasible domain to improve the initial solution. Sattarvand (2009) presented an algorithm based on ant colony optimization (ACO). Myburgh and Deb (2010) presented an evolutionary algorithm based on a predeterministic stripping ratio to solve large-sized OPBS instances and introduced the evORElution package. Lamghari and Dimitrakopoulos (2012) proposed a Tabu search algorithm to solve the OPBS problem; in order to explore an extensive domain, they applied long-term memory and variable neighborhood search methods.

In this chapter, a model with real-life constraints is developed for the OPBS problem. This model is applicable for finding the optimum extraction sequence of blocks over hourly based periods. Computational experiments are performed to validate the proposed model and to recommend solution approaches for real-size cases.

5.2  Mathematical Programming

A general process flow in an iron ore mine is shown in Figure 5.1. Blocks are extracted by excavators, and the mined material is carried to different destinations by trucks. This run-of-mine material is classified as waste, low grade, or high grade based on the block content. Stockpiles are designed for mixing or blending material with different characteristics or for deferring the processing of material to the future. The low-grade material is enriched in the processing plants to achieve the final product requirements. Finally, the product of the low-grade plants and the high-grade crusher is blended and sent to the rail loading area as the final product.

Based on the process in Figure 5.1, the OPBS problem aims to determine the optimal sequence of blocks. After solving the block sequencing problem, two important questions are answered: which blocks are selected to be extracted in each period, and to which destination will each block be sent? The problem is formulated using the following notations.

Figure 5.1  A general process flow in an iron ore mine (excavating and haulage of run-of-mine material by excavators, shovels, loaders, trucks, dozers, scrapers, and graders; stockpiling and dumping at waste dumps and stockpiles; crushing and processing of low-grade and high-grade ore through crushers, the drum plant, and the wet screening plant; fine and dry stockpiles; final product to the rail and ship).


5.2.1  Notations

T: number of time periods
t: time period index, t = 1, 2, …, T
r: time period index, r = 1, 2, …, t
i: block index, i = 1, 2, …, I
b_i: tonnage of block i
A: number of attributes
α: attribute (grade) index, α = 1, 2, …, A
g_{iα}: specification of attribute α in block i
Γ^p_{i′}: set of immediate successors of block i′
u_{id}: unit value of block i when it is sent to destination d
M: number of machines
m: machine index (e.g., excavator, shovel, loader), m = 1, 2, …, M
e_{im}: extraction rate of machine m for block i
p_{im}: number of time periods needed to extract block i by machine m (p_{im} = b_i / e_{im})
θ^t_m: equal to 1 if machine m is not available in time period t, and 0 otherwise
P: total number of mills (mineral processing plants)
ρ: mill index, ρ = 1, 2, …, P
W: number of waste dumps
w: waste dump index, w = 1, 2, …, W
D: number of destinations (D = W + P)
d: destination index, d = 1, 2, …, D
M^min_ρ: minimum capacity of mill ρ
M^max_ρ: maximum capacity of mill ρ
φ^min_{αd}: minimum required attribute α for destination d
φ^max_{αd}: maximum required attribute α for destination d

5.2.2  Decision Variables

x^t_{imd} = 1 if block i is being extracted by machine m in time period t and sent to destination d, and 0 otherwise.

y^t_{ii′}: binary decision variable used to handle the precedence (if–then) constraint.


5.2.3  Objective Function

The objective function is to maximize the profit:

\max \sum_{t=1}^{T} \sum_{i=1}^{I} \sum_{m=1}^{M} \sum_{d=1}^{D} x_{imd}^{t}\, e_{im}\, u_{id}     (5.1)

5.2.4  Constraints

\sum_{r=1}^{t-1} \sum_{d=1}^{D} x_{i'md}^{r} - p_{i'm} \ge L\,(y_{ii'}^{t} - 1)  \quad \forall \{i, i' \in I \mid i' \ne i,\ i \in \Gamma^{p}_{i'}\};\ t = 2, \ldots, T;\ m = 1, \ldots, M     (5.2)

\sum_{m=1}^{M} \sum_{d=1}^{D} \sum_{i \in \Gamma^{p}_{i'}} x_{imd}^{t} \le L\, y_{ii'}^{t}  \quad \forall \{i, i' \in I \mid i' \ne i,\ i \in \Gamma^{p}_{i'}\};\ t = 1, 2, \ldots, T     (5.3)

\sum_{t=1}^{T} \sum_{m=1}^{M} \sum_{d=1}^{D} x_{imd}^{t}\, e_{im} \le b_{i}  \quad \forall i = 1, 2, \ldots, I     (5.4)

\sum_{i=1}^{I} \sum_{d=1}^{D} x_{imd}^{t} \le 1  \quad \forall m = 1, 2, \ldots, M;\ t = 1, 2, \ldots, T     (5.5)

\sum_{t=1}^{T} \sum_{i=1}^{I} \sum_{d=1}^{D} x_{imd}^{t} + \sum_{t=1}^{T} \theta_{m}^{t} = T  \quad \forall m = 1, 2, \ldots, M     (5.6)

\sum_{m=1}^{M} \sum_{d=1}^{D} x_{imd}^{t} \le 2  \quad \forall i = 1, 2, \ldots, I;\ t = 1, 2, \ldots, T     (5.7)

\sum_{m=1}^{M} \sum_{i=1}^{I} x_{im\rho}^{t}\, e_{im} \ge M_{\rho}^{\min}  \quad \forall t = 1, 2, \ldots, T;\ \rho = 1, 2, \ldots, P     (5.8)

\sum_{m=1}^{M} \sum_{i=1}^{I} x_{im\rho}^{t}\, e_{im} \le M_{\rho}^{\max}  \quad \forall t = 1, 2, \ldots, T;\ \rho = 1, 2, \ldots, P     (5.9)

\sum_{m=1}^{M} \sum_{i=1}^{I} x_{im\rho}^{t}\,\bigl(g_{i}^{\alpha} - \varphi_{\alpha\rho}^{\min}\bigr) \ge 0  \quad \forall t = 1, 2, \ldots, T;\ \rho = 1, 2, \ldots, P;\ \alpha = 1, 2, \ldots, A     (5.10)

\sum_{m=1}^{M} \sum_{i=1}^{I} x_{im\rho}^{t}\,\bigl(g_{i}^{\alpha} - \varphi_{\alpha\rho}^{\max}\bigr) \le 0  \quad \forall t = 1, 2, \ldots, T;\ \rho = 1, 2, \ldots, P;\ \alpha = 1, 2, \ldots, A     (5.11)

x_{imd}^{t} \in \{0, 1\}     (5.12)
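Before turning to the meaning of each constraint, the following is a minimal sketch of how a toy instance of this formulation could be coded with the open-source PuLP modeling library; it implements objective (5.1) together with constraints (5.4), (5.5), and (5.7) only. All data values, the library choice (the authors used the ILOG CPLEX optimizer), and the simplifications are assumptions made for illustration.

# Toy OPBS sketch with PuLP: maximize (5.1) subject to (5.4), (5.5), and (5.7).
# All data below are made-up illustrative values, not from the chapter.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

T, I, M, D = 4, 3, 2, 2                                    # periods, blocks, machines, destinations
b = {0: 100.0, 1: 80.0, 2: 120.0}                          # block tonnages b_i
e = {(i, m): 30.0 for i in range(I) for m in range(M)}     # extraction rates e_im
u = {(0, 0): 5.0, (0, 1): 0.0,                             # unit values u_id (destination 1 = waste dump)
     (1, 0): 3.0, (1, 1): 0.0,
     (2, 0): 6.0, (2, 1): 0.0}

prob = LpProblem("toy_opbs", LpMaximize)
x = LpVariable.dicts("x", (range(I), range(M), range(D), range(T)), cat=LpBinary)

# Objective (5.1): total value of extracted material.
prob += lpSum(x[i][m][d][t] * e[i, m] * u[i, d]
              for i in range(I) for m in range(M) for d in range(D) for t in range(T))

# (5.4): at most b_i tonnes extracted from block i over the horizon.
for i in range(I):
    prob += lpSum(x[i][m][d][t] * e[i, m]
                  for m in range(M) for d in range(D) for t in range(T)) <= b[i]

# (5.5): each machine works on at most one block per period.
for m in range(M):
    for t in range(T):
        prob += lpSum(x[i][m][d][t] for i in range(I) for d in range(D)) <= 1

# (5.7): at most two machines on a block per period.
for i in range(I):
    for t in range(T):
        prob += lpSum(x[i][m][d][t] for m in range(M) for d in range(D)) <= 2

prob.solve()
print("objective:", value(prob.objective))

In a full implementation, the precedence constraints (5.2) and (5.3), the availability constraint (5.6), and the mill capacity and grade constraints (5.8) through (5.11) would be added in the same way.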

Equations 5.2 and 5.3 ensure that the precedence constraint is satisfied. The precedence constraint indicates that the directly related overlying blocks must be mined before extracting the target block. The directly related overlying blocks, or precedence relations, for the target block are determined by applying a slope pattern. Several slope patterns are used to identify precedence relationships; for example, the 1:5-pattern, the 1:5:9-pattern, and the knight's-move pattern are commonly used to generate precedence relationships (Hochbaum and Chen, 2000). The 1:5:9-pattern is shown in Figure 5.2: according to this pattern, each block is connected to five blocks in the upper level and nine blocks in the second upper level. In other words, 14 overlying blocks must be mined before mining the target block. Generally, if the block dimensions or stable slopes vary in different directions or levels, the slope pattern may change. Wright (1990) discussed the details of methods applied to identify precedence relations. Equation 5.4 restricts the material extracted from block i over the time horizon to at most b_i tonnes. Equation 5.5 enforces that each machine (excavator) works on at most one block in a time period. Equation 5.6 indicates that no machine can work more than the whole time horizon; in addition, a machine may not work in some periods due to maintenance or other reasons. According to Equation 5.7, a maximum of two machines can work on a block in a time period. Equations 5.8 and 5.9 represent the mill capacity constraints: the total tonnage of ore material sent to the mineral processing plant must not exceed the maximum capacity of the processing plant.
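As a minimal sketch of the slope-pattern idea just described, the function below generates the predecessor set of a block under one common reading of the 1:5:9-pattern on a regular grid. The coordinate convention (level z + 1 lying directly above level z) and the grid-index representation are assumptions made for illustration.

# 1:5:9 precedence sketch: the 5 blocks one level above (cross pattern) and the
# 9 blocks two levels above (3 x 3 neighborhood) must be mined before a block.
def predecessors_159(x, y, z, nx_, ny_, nz_):
    """Return grid indices (x', y', z') that must be mined before block (x, y, z)."""
    preds = set()
    # Level z + 1: the block directly above and its four side neighbors.
    for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        preds.add((x + dx, y + dy, z + 1))
    # Level z + 2: the full 3 x 3 neighborhood above that.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            preds.add((x + dx, y + dy, z + 2))
    # Keep only blocks that exist inside the model boundaries.
    return {(i, j, k) for (i, j, k) in preds
            if 0 <= i < nx_ and 0 <= j < ny_ and 0 <= k < nz_}

# Example: predecessors of an interior block in a 10 x 10 x 5 block model.
print(sorted(predecessors_159(4, 4, 1, 10, 10, 5)))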

Figure 5.2  (a) A 3D block model; (b) 1:5:9-pattern; to extract a block in level k, 5 blocks in level k + 1 and 9 blocks in level k + 2 must be mined first; (c) precedence relations in a side view.

In addition, this tonnage must not be less than the minimum required feed for the processing plant in time period t. The mill feed must contain the required quality of ore content; Equations 5.10 and 5.11 ensure that the grade quality for the mill is satisfied. Finally, Equation 5.12 states that the decision variables are binary.

5.3  Computational Experiments

To illustrate and validate the proposed model, numerical investigations are performed. Several sample data sets are prepared based on data collected from an iron ore mine in Australia. The characteristics of the instances are summarized in Table 5.1. The proposed model has been coded in the ILOG CPLEX optimizer as a MIP problem and run on a PC with an Intel Core i7 at 2.7 GHz, 8 GB of RAM, and Windows 7. To optimize a MIP problem,


Table 5.1  Characteristics of Instances

Instance  Blocks  Machines  Destinations  Time Periods (h)
OPBS1       25       4          2             12
OPBS2       25       5          2             12
OPBS3       25       6          2             12
OPBS4       25       4          4             12
OPBS5       25       5          4             12
OPBS6       25       6          4             12
OPBS7       50       4          2             24
OPBS8       50       5          2             24
OPBS9       50       6          2             24
OPBS10      50       4          4             24
OPBS11      50       5          4             24
OPBS12      50       6          4             24
OPBS13      75       4          2             30
OPBS14      75       5          2             30
OPBS15      75       6          2             30
OPBS16      75       4          4             30
OPBS17      75       5          4             30
OPBS18      75       6          4             30
OPBS19     100       4          2             48
OPBS20     100       5          2             48
OPBS21     100       6          2             48
OPBS22     100       4          4             48
OPBS23     100       5          4             48
OPBS24     100       6          4             48
OPBS25     150       4          2             72

CPLEX constructs a tree with the linear relaxation of the original MIP at the root and subproblems at the nodes of the tree. The best node is the node with the best achievable objective function value, and the best MIP bound is the best integer objective value among all the remaining subproblem nodes. The relative MIP GAP is the difference between the best integer objective and the best node objective. If CPLEX reaches the acceptable GAP, the branch-and-cut procedure stops and begins polishing a feasible solution. The relative MIP GAP is calculated as follows (Cplex, 2010):

GAP% = (1 − |Best Node| / |Best Integer|) × 100.

Results of these instances are given in Table 5.2. In the proposed model, the destinations consist of at least one mill and one waste dump; however, the model can be adapted for more mills

Table 5.2  Results of the Numerical Experiments

                                                MIP Solver                                      CP Solver
Instance  Decision   Number of    Best Integer  Objective    GAP%  CPU Time    Objective    GAP%  CPU Time
          Variables  Constraints  (Million $)   (Million $)        (s)         (Million $)        (s)
OPBS1       2,508       645        170.71        170.71       0.0       4       170.71       0.0      420
OPBS2       3,108       660        159.91        159.91       0.0       5       159.91       0.0      610
OPBS3       3,708       673        116.33        116.33       0.0       3       116.33       0.0      840
OPBS4       4,908       695        239.79        239.79       0.0      16       239.79       0.0      180
OPBS5       6,108       709        237.86        237.86       0.0      15       237.86       0.0      320
OPBS6       7,308       723        227.07        227.07       0.0      12       227.07       0.0    1,550
OPBS7      10,080     2,412        345.60        345.44       0.05 10,800       342.16       0.99  10,800
OPBS8      12,480     2,436        325.38        325.38       0.0      42       283.28      12.9   10,800
OPBS9      14,880     2,462        292.91        292.91       0.0     102        20.65      92.9   10,800
OPBS10     19,680     2,508        419.17        419.17       0.0      44       NS           —     10,800
OPBS11     24,480     2,532        397.58        397.58       0.0      54       NS           —     10,800
OPBS12     29,280     2,560        375.97        375.97       0.0     364       NS           —     10,800
OPBS13     19,560     5,694        399.57        399.57       0.0      28       260.05      34.9   10,800
OPBS14     24,060     5,725        350.18        350.18       0.0      50       NS           —     10,800
OPBS15     28,561     5,757        NA            NA            —       —        NA           —         —
OPBS16     37,560     5,813        399.57        399.57       0.0      37       NS           —     10,800
OPBS17     46,560     5,845        372.57        372.57       0.0               NS           —     10,800
OPBS18     55,560     5,877        NA            NA            —       —        NA           —         —
OPBS19     42,336    13,166        575.09        575.09       0.0   6,300       NS           —     10,800
OPBS20     51,936    13,215        NA            NA            —       —        NA           —         —
OPBS21     61,536    13,264        NA            NA            —       —        NA           —         —
OPBS22     80,736    13,356        575.09        NS            —   10,800       NS           —     10,800
OPBS23     99,936    13,406        736.67        NS            —   10,800       NS           —     10,800
OPBS24    119,136    13,456        NA            NA            —       —        NA           —         —
OPBS25    100,080    38,896        103.60        NS            —   10,800       NS           —     10,800

Note: NA, no feasible solution can be found for this instance; NS, no solution has been found within the predetermined solution time.


and waste dumps for larger mining operations. The analysis, as shown in Table 5.2, demonstrates that increasing the number of destinations or machines does not change the complexity of the problem very much; however, the effects of increasing the number of blocks or time periods are more substantial. The computational experiments show that the OPBS problem with a large number of blocks cannot be solved by the MIP solver in a reasonable time, and that a better solution technique is needed to solve this NP-complete problem. The input data, especially the block information, are changed and updated frequently; therefore, the solution algorithm must be quick enough to solve the problem in reasonable time on a standard computer. The infeasibility of instances OPBS15, OPBS18, OPBS20, OPBS21, and OPBS24 comes from the input data, where there is not enough qualified material for the mill.

In this chapter, the ability of constraint programming (CP) to solve the OPBS problem is also investigated. In CP, a solution is a feasible assignment of the variables that fulfills the constraints (Van Beek and Chen, 1999). In this study, the CPLEX CP engine has been used to solve the instances, and the results of CP are presented in Table 5.2. The results show that high-quality solutions are obtained by CP for small-size instances of this problem. However, the computational results for larger instances indicate that CP is not a time-efficient approach for this problem. The reason for this observation may be that there are many binary variables to be assigned a value (0 or 1); therefore, during the CP process, a large number of branching steps must be carried out, which increases the solution time. An alternative approach to overcome this drawback is to combine CP with a constructive heuristic method in order to direct the CP branching steps by finding an initial feasible solution. Investigation and verification of this alternative are left for future work.

5.4  Conclusion

In summary, we have developed a MIP model for the OPBS problem. In the proposed model, real-life constraints including precedence relationships, mill capacity, resource capacity, and grade control are


considered. The applicability of the model has been tested by performing computational experiments on data based on a real mine. Numerical investigations demonstrate that the OPBS problem is a complex problem that cannot be solved by exact solution techniques in a reasonable time. In addition, the capability of the CP approach to solve this problem has been investigated. The computational experiments with the CPLEX CP engine indicate that CP may not be a time-efficient approach for this problem; however, to confirm this claim, more studies should be conducted in future research. In addition, a state-of-the-art meta-heuristic algorithm will be developed to solve large-size instances.

Acknowledgment

The authors acknowledge the support of CRC ORE, established and supported by the Australian Government's Cooperative Research Centers Program.

References

Asad, M. W. A. and Topal, E. (2011). Net present value maximization model for optimum cut-off grade policy of open pit mining operations. The Journal of The Southern African Institute of Mining and Metallurgy, 111, 741–750. Askari-Nasab, H., Awuah-Offei, K., and Eivazy, H. (2010). Large-scale open pit production scheduling using mixed integer linear programming. International Journal of Mining and Mineral Engineering, 2(3), 185–214. Bienstock, D. and Zuckerberg, M. (2010). Solving LP relaxations of largescale precedence constrained problems. In F. Eisenbrand and B. Shepherd (eds.), Integer Programming and Combinatorial Optimization (pp. 1–14). Springer, Berlin, Germany. Boland, N., Dumitrescu, I., Froyland, G., and Gleixner, A. M. (2007). LP-based disaggregation approaches to solving the open pit mining production scheduling problem with block processing selectivity. Computers & Operations Research, 36, 1064–1089. Caccetta, L. and Hill, S. P. (2003). An application of branch and cut to open pit mine scheduling. Journal of Global Optimization, 27, 349–365. Chicoisne, R., Espinoza, D., Goycoolea, M., Moreno, E., and Rubio, E. (2012). A new algorithm for the open-pit mine production scheduling problem. Operations Research, 60(3), 517–528. Cplex, I. I. (2010). 12.2 User’s Manual. IBM, United States.


Cullenbine, C., Wood, R. K., and Newman, A. (2011). A sliding time window heuristic for open pit mine block sequencing. Optimization Letters, 5, 365–377. Dagdelen, K. and Johnson, T. B. (1986). Optimum open pit mine production scheduling by Lagrangian parameterization. In Proceedings of the 19th Symposium of APCOM, Jostens Publications, State College, PA (pp. 127–141). Denby, B. and Schofield, D. (1994). Open-pit design and scheduling by use of genetic algorithms. Transactions of the Institution of Mining and Metallurgy. Section A. Mining Industry, 103, A21–A26. Ferland, J. A., Amaya, J., and Djuimo, M. S. (2007). Application of a particle swarm algorithm to the capacitated open pit mining problem. Studies in Computational Intelligence (SCI), 76, 127–133. Gershon, M. (1987). Heuristic approaches for mine planning and production scheduling. International Journal of Mining and Geological Engineering, 5, 1–13. Hochbaum, D. S. and Chen, A. (2000). Performance analysis and best implementations of old and new algorithms for the open-pit mining problem. Operation research, 48(6), 894–914. Johnson, P. V., Evatt, G., Duck, P., and Howell, S. (2011). The determination of a dynamic cut-off grade for the mining industry. In S. I. Ao and L. Gelman (eds.), Electrical Engineering and Applied Computing (Vol. 90, Chapter 32, pp. 391–403). Springer-Verlag, Berlin, Germany. Kozan, E. and Liu, S. Q. (2011). Operations research for mining: A classification and literature review. ASOR Bulletin, 30(1), 2–23. Kumral, M. and Dowd, P. A. (2005). A simulated annealing approach to mine production scheduling. The Journal of the Operational Research Society, 56(8), 922–930. Lamghari, A. and Dimitrakopoulos, R. (2012). A diversified Tabu search approach for the open-pit mine production scheduling problem with metal uncertainty. European Journal of Operational Research, 222(3), 642–652. Lark, R. M., Cullis, B. R., and Welham, S. J. (2006). On spatial prediction of soil properties in the presence of a spatial trend: The empirical best linear unbiased predictor (E-BLUP) with REML. European Journal of Soil Science, 57, 787–799. Menabde, M., Froyland, G., Stone, P., and Yeates, G. (2004). Mining schedule optimisation for conditionally simulated orebodies. In Proceedings of the International Symposium on Orebody Modelling and Strategic Mine Planning: Uncertainty and Risk Management, Perth, Western Australia (pp. 347–352). Myburgh, C. and Deb, K. (2010). Evolutionary algorithms in large-scale open pit mine scheduling. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (pp. 1155–1162). ACM, New York. Newman, A. M., Rubio, E., Caro, R., and Eurek, K. (2010). A review of operations research in mine planning. Interfaces, 40(3), 222–245.


Osanloo, M., Gholamnejad, J., and Karimi, B. (2008). Long-term open pit mine production planning: A review of models and algorithms. International Journal of Mining, Reclamation and Environment, 22(1), 3–35. Ramazan, S. (2007). The new fundamental tree algorithm for production scheduling of open pit mines. European Journal of Operational Research, 177, 1153–1166. Ramazan, S. and Dimitrakopoulos, R. (2004). Recent applications of operations research and efficient MIP formulations in open pit mining. Mining, Metallurgy, and Exploration Transactions 316, 73–78. Sattarvand, J. (2009). Long term open pit planning by ant colony optimization. PhD Dissertation, RWTH Aachen University, Aachen, Germany, p. 144. Sattarvand, J. and Niemann-Delius, C. (2008). Perspective of metaheuristic optimization methods in open pit production planning. Gospodarka Surowcami Mineralnymi, 24(4), 143–156. Tolwinski, B. and Underwood, R. (1996). A scheduling algorithm for open pit mines. IMA Journal of Mathematics Applied in Business & Industry, 7, 247–270. Van Beek, P. and Chen, X. (1999). CPlan: A constraint programming approach to planning. In AAAI/IAAI, Orlando, FL (pp. 585–590). Vann, J., Bertoli, O., and Jackson, S. (2002). An overview of geostatistical simulation for quantifying risk. In Proceedings of Geostatistical Association of Australasia Symposium “Quantifying Risk and Error”, Perth, Western Australia (p. 1). Verly, G. (2005). Grade control classification of ore and waste: A critical review of estimation and simulation based procedures. Mathematical Geology, 37(5), 451–475. Wright, A. (1990). Open Pit Mine Design Model: Introduction with FORTRAN 77 Programs. Trans Tech Publication, Clausthal-Zellerfeld, Germany.

6  Locating Temporary Storage Sites for Managing Disaster Waste, Using Multiobjective Optimization

Kıvanç Onan, Füsun Ülengin, and Bahar Sennaroğlu

Contents

6.1  Introduction
6.2  Background of the Study
     6.2.1  Disaster Waste Management
6.3  Methodology
6.4  Results
6.5  Discussion
6.6  Conclusion
References

6.1  Introduction

Disaster waste (DW) may include recyclable, reusable, and hazardous materials. For instance, asbestos is a hazardous waste that may be found in disaster debris and can cause serious illnesses when not properly disposed of. Because of the risks in the debris composition, separation and treatment of materials is a serious problem. The composition of the waste is also an opportunity in terms of environmental sustainability. Generally, collected DW is disposed of at landfill areas all around the world, but uncontrolled disposal of the waste is environmentally and economically harmful. There are ways of achieving proper disaster waste management (DWM). The composition of DW is similar to that of regular construction and


Figure 6.1  Suggested layout of a temporary storage site (storage site for incoming unprocessed debris; separate areas for concrete, bricks, stones, wood debris, and scrap metal; area reserved for a recycling plant; area needed to maneuver trucks, loaders, and other vehicles working in the area; waste to dumpsite; gate). (From Söder, A.B. and Müller, R., Disaster Waste Management Guidelines, United Nations Office for the Coordination of Humanitarian Affairs Environmental Emergencies Section [UNEP/OCHA Environment Unit], Geneva, Switzerland, 2011.)

demolition (C&D) wastes, so C&D waste treatment methods can be a starting point when DW is considered. The most common way to manage C&D waste is to collect all waste and transport it to dumpsites for final disposal without any treatment, just like present DWM practice. Another way is to transport the collected waste to a recycling facility and separate the reusable and recyclable parts from the debris either before or after transporting them. A further way, which is the most environmentally effective one, is to separate all materials at a temporary site (Figure 6.1), so that the landfill disposal of untreated hazardous wastes is minimized and all applicable materials can be recycled or reused (Söder and Müller, 2011). Separated materials can be immediately reusable for new construction, which may be important for the immediate reconstruction activities after a disaster, or can be recycled for immediate or future use in these reconstruction activities. This last way of treating waste can therefore be said to be a more proper way of DWM, considering both the environment and economics.

A special research session on DWM was held at the 2010 Intercontinental Landfill Research Symposium in Japan, and one of the three main areas emphasized as future research areas was emergency


Figure 6.2  Concept of temporary storage site (flows from the collapsed building site to the temporary storage site and on to the recycling facility, landfill disposal, and reconstruction).

temporary storage areas (Milke, 2011). Since emergency response circumstances lead to poor storage area planning for DW, there is a need for research on planning temporary storage areas for keeping, separating, and recycling DW (Milke, 2011).

Separating hazardous materials from DW is an important part of the process and must be carried out with extreme care, especially because of asbestos and other hazardous and harmful materials that may be found in the composition of DW. The concept of temporary storage facilities (Figure 6.2) is therefore very important in order to properly treat these hazardous materials in the debris and decrease the risks to people's health. As an example of the harm caused by hazardous materials, consider the major risk posed by asbestos: once building parts made of asbestos are broken, during the demolition, loading, transportation, or treatment of a building or the disposal of DW, huge amounts of very small particles spread through the air, and breathing or contacting these particles may cause several diseases, including a high risk of cancer (Söder and Müller, 2011).

In order to manage DW properly, the first step is assessing the risk and then estimating the disaster-related damages, in other words, loss estimation. After a survey of disaster risk assessment and loss estimation studies, a tool called ELER, developed by researchers at the Kandilli Observatory and Earthquake Research Institute (KOERI), was selected. ELER is a tool developed to rapidly estimate losses after earthquakes with damage potential, which is vital for efficient emergency response and also for informing the community. ELER is a MATLAB®-based tool, which is used to simulate an earthquake and


assess the building damage and casualties caused by this simulated earthquake (Hancılar et al., 2010). The software includes a module called Hazard, which simulates an earthquake according to given parameters, and three modules for loss estimation, named level 0, level 1, and level 2. The Hazard module simulates the earthquake and produces shake maps according to the given ground motion parameters: peak ground acceleration, peak ground velocity, and spectral acceleration, as well as depth, magnitude, broken fault, and so on, for simulating an earthquake. These parameters are used to calculate ground motion prediction equations (Hancılar et al., 2010).

The estimated ground motion parameters calculated with the Hazard module are used to produce loss estimations according to the level of assessment. Demographic data and the given building inventory are used to calculate the loss estimations. The building inventory includes the number and properties of buildings and also the number of dwellings in each building; these values are used to transform the number of damaged buildings into an amount of waste. Three parameters are used for calculating building damage and casualties: population distribution, building inventory data, and vulnerability relationships. The level 1 module produces estimations of building damage and casualty distributions, and it is used to produce the damage estimations for the model proposed in this study.

The aim of this research is to determine candidate locations for temporary storage areas for DW while minimizing the total cost and, at the same time, minimizing the amount of population subject to risk. To this point, the main motives and first steps of the study have been explained; the background, methodology, model, solution, and discussion are given in the following sections.

6.2  Background of the Study

This section presents the underlying ideas and motives for the proposed framework. For this purpose, the related literature was reviewed. First, the two main reports about Istanbul and


the expected earthquake were reviewed: the Earthquake Master Plan of Istanbul and the JICA report on the Istanbul earthquake. These reports include the most detailed studies of two alternative estimated earthquake models (Model A and Model C) for Istanbul, as well as brief information on the estimated ratio of heavily damaged buildings for each district of Istanbul (Ansal et al., 2003; Ikenishi et al., 2002).

Another field reviewed is DWM. There are four main issues when treating DW: collection of waste, reuse, recycling, and landfill disposal. When planning the collection of waste, the hazardous materials that the debris may contain must be considered in order to keep the process environmentally sustainable and protect people's health. The most effective way of dealing with hazardous materials in a sustainable way is to minimize landfill disposal.

As mentioned before, the aim of this research is to determine candidate locations for the temporary storage areas while minimizing the total cost and, at the same time, minimizing the amount of population subject to risk related to the storing, transportation, and recycling of DW. In order to optimize these objectives, the problem must be modeled and solved mathematically. Since the problem is of a multiobjective type (minimization of total cost and minimization of risk to the population), related studies were reviewed. Multiobjective evolutionary optimization methods are the state of the art, so these methods were reviewed in detail (Collette and Siarry, 2003; Deb, 2001, 2008; Marler and Arora, 2004). Brief information about DWM studies is given in the following section.

6.2.1  Disaster Waste Management

Disasters can cause large amounts of waste. DW is mainly caused by the disaster itself, but waste may also occur during the recovery stage. These wastes can cause health risks in case of contact with hazardous wastes such as asbestos, pesticides, oils, and solvents, and DW may also block relief efforts (Brown et al., 2011; FEMA, 2010; Milke, 2011; Söder and Müller, 2011). Despite having so many negative impacts on health and relief efforts, DW may also contain materials that are needed for recovery, such


as concrete, steel, and wood. These valuable materials can be used for the reconstruction of the affected area and so decrease the need for natural resources.

Because of the issues mentioned earlier, managing DW is critical: proper management reduces the risks and provides recovery opportunities. Recent studies show that this importance is not recognized enough; in most cases, DWM means collecting and dumping waste without control, separation, recycling, or reuse. This kind of treatment may cause environmental problems that affect public health, may result in the contamination of valuable land, and leads to extra costs for the further actions needed to deal with these problems. There was therefore a need for a guideline on appropriate DWM, and the United Nations in particular, as well as several researchers, has produced recent studies on DWM (Brown et al., 2011; Milke, 2011). The common objectives of these studies are minimizing the risk to health, minimizing the risk to the environment, and ensuring recycling or reuse for the benefit of the affected people; another objective considered by the authorities is the cost of these operations.

DW issues include uncollected waste from damaged buildings, dumping at unsuitable sites, problems in solid waste services due to the disaster, and uncontrolled disposal of hazardous and infectious waste. These issues have several impacts: damaged sites can be treated as dumping areas, which increases the amount of waste to be collected; closely located disposal sites affect health; usable land is destroyed; water resources are affected; responding to these impacts causes additional cost; and health risks occur through inhalation or exposure.

Since the main focus of this study is the earthquake disaster, a deeper focus is needed. Earthquake damage may make it difficult to separate hazardous wastes from other wastes, since a total collapse may happen. Recovery requires heavy machinery, which increases the cost and difficulty of collecting wastes. But the most important issue is that the quantity of waste generated by an earthquake may be very high compared to other disaster types.

The United Nations Environment Programme's (UNEP) DWM guidelines suggest preparing a contingency plan prior to a disaster, and


it is also suggested that this plan should be cost-effective and should increase control over waste management (Söder and Müller, 2011). The UNEP guidelines give a pathway for dealing with DW, and the steps of this pathway include the following:

• Forecasting amounts of waste and debris
• Monitoring current capacity for waste and debris management
• Selecting waste and debris storage sites prior to disaster
• Creating a debris removal strategy

Researchers indicate that the increasing number and intensity of natural disasters increases the importance of efficient and low-impact recovery (Brown et al., 2011). They also note that the importance of DWM has been recognized since "Planning for Disaster Debris" was published by the United States Environmental Protection Agency in 1995 and updated in 2008. The literature review highlighted that the importance of DWM from a sustainability perspective is barely recognized by researchers. DWM-related studies mostly focus on guiding authorities on how to make a plan prior to the disaster and how to act after the disaster (FEMA, 2010; Söder and Müller, 2011), but these studies do not include mathematical models optimizing both environmental effect and cost. It can therefore be seen that both multiobjective approaches, for sustainable and economic management, and further DW-related studies are needed (Milke, 2011).

6.3  Methodology

Figure 6.3 represents the general framework of the methodology proposed in this study. Briefly, the study consists of three steps: producing loss estimations with ELER, obtaining the data produced by ELER and producing the OD matrix with ArcGIS, and finally using these data to determine the candidate points with a multiobjective optimization (MOO) model. The solution gives the Pareto-optimal set, or front. The final step is to choose solutions from the set according to the preferences of the decision maker (DM), which is called higher-level information. Examples of higher-level information are the total distance to landfill disposal sites or the distance to water supplies.
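As a rough sketch of the OD-matrix step, the snippet below builds a small origin–destination distance matrix and the average weighted distances A_i used later in the model. Straight-line (haversine) distances stand in for the ArcGIS network-analyst road distances, and the cell coordinates and damaged-building weights are toy values, so everything here is an illustrative assumption rather than the study's actual GIS workflow.

# Toy OD-matrix sketch: haversine distances stand in for road-network distances,
# and damaged-building counts are made-up weights (not the ELER outputs).
from math import asin, cos, radians, sin, sqrt

def haversine_km(p, q):
    # Great-circle distance between two (lat, lon) points in kilometers.
    (lat1, lon1), (lat2, lon2) = p, q
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Toy cell centers (lat, lon) and damaged-building counts per cell.
cells = [(41.00, 28.70), (41.05, 28.75), (41.00, 28.80), (41.05, 28.85)]
damaged = [120, 300, 80, 50]

# OD matrix: distance from every candidate cell i to every waste source cell j.
od = [[haversine_km(ci, cj) for cj in cells] for ci in cells]

# A_i: average distance of cell i to the other cells, weighted by damaged buildings.
total_weight = sum(damaged)
A = [sum(od[i][j] * damaged[j] for j in range(len(cells))) / total_weight
     for i in range(len(cells))]
print([round(a, 2) for a in A])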


Figure 6.3  General framework of the study (earthquake center, broken fault, magnitude, and depth values; loss estimations and building damage estimation; GIS editing of the OD matrix and damaged building quantities; the multiobjective optimization model solved with NSGA-II; the Pareto-optimal solution set; and selection of the points from the Pareto-optimal set).

In this case, since the aim is to determine the candidate locations for temporary storage facilities, all selected points will be considered. The mathematical model, a biobjective model for deciding candidate locations and afterward selecting the most preferred solution from the Pareto-optimal front, is formulated as follows:

\min z_1 = \sum_{i=1}^{n} x_i A_i     (6.1)

\min z_2 = \sum_{i=1}^{n} x_i P_i     (6.2)

subject to

\sum_{i=1}^{n} x_i \ge M     (6.3)


where x_i is the binary integer variable, equal to 1 if a temporary storage site is located in the ith cell. A_i is the average weighted distance of the ith cell from the other cells, so Equation 6.1 represents the total average weighted distance of the temporary storage sites to the waste source points. P_i is the population of the ith cell, and Equation 6.2 represents the total population of the cells that contain a temporary storage site; this objective minimizes the amount of population subject to the risks of the recycling and separation facilities. M is the minimum number of candidate locations to be selected, which forces the model to locate at least one site; this number can be determined by the DM.

This mathematical model is solved using the nondominated sorting genetic algorithm (NSGA-II), an elitist evolutionary genetic algorithm (GA)–based methodology (Deb, 2011). Here the term domination should be explained. Consider a biobjective (minimization-type) problem in which the objective functions are equally important. One solution dominates another if it is at least as good in both objective function values and strictly better in at least one; a pair of solutions is nondominated if neither dominates the other. The details of the applied algorithm, including the GA parameters, are given in Section 6.4.

Other issues that should be explained about the applied methodology are the use of GIS and of multiobjective optimization methods. GISs are used to create and analyze geographical systems. The ELER software produces outputs in shapefile format, and these files can only be opened and edited with GIS software. GIS can also be used to create origin–destination (OD) cost or distance matrices. ArcGIS is the most commonly used GIS software; all disaster data created by ELER can be edited and viewed via ArcGIS. ArcGIS also has tools stored under the network analyst toolbox, which include several network analyses used for solving problems such as routing or allocation. The origin–destination matrix can be calculated by defining source and demand points, and the road network is needed to calculate distances. All shapefile-formatted files can be edited by ArcGIS. The integration of loss estimation tools with GIS tools is crucial for the evaluation of the results and for editing these results for further research.
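To make the biobjective formulation (6.1) through (6.3) and the notion of domination concrete, the sketch below evaluates random binary location vectors and filters them down to a nondominated set. The toy values of A_i and P_i, the random sampling used in place of NSGA-II's genetic operators, and the plain-Python dominance test are all illustrative assumptions, not the study's implementation.

# Biobjective evaluation and Pareto filtering for the storage-site model (a toy sketch).
import random

random.seed(0)

n_cells, M_min = 12, 3                                     # toy problem size and minimum sites (M)
A = [random.uniform(1.0, 10.0) for _ in range(n_cells)]    # avg weighted distances A_i (toy)
P = [random.randint(100, 5000) for _ in range(n_cells)]    # cell populations P_i (toy)

def objectives(x):
    # z1 = total average weighted distance (6.1); z2 = total population at risk (6.2).
    return (sum(xi * a for xi, a in zip(x, A)), sum(xi * p for xi, p in zip(x, P)))

def feasible(x):
    return sum(x) >= M_min                                 # constraint (6.3)

def dominates(za, zb):
    # za dominates zb if it is no worse in both objectives and different in at least one.
    return za[0] <= zb[0] and za[1] <= zb[1] and za != zb

# Sample random feasible solutions and keep only the nondominated ones.
pop = []
while len(pop) < 200:
    x = [random.randint(0, 1) for _ in range(n_cells)]
    if feasible(x):
        pop.append((x, objectives(x)))

pareto = [(x, z) for x, z in pop
          if not any(dominates(z2, z) for _, z2 in pop)]
print(len(pareto), sorted(z for _, z in pareto)[:5])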


The proposed solution methodology is an elitist multiobjective evolutionary optimization method. NSGA-II, SPEA, and PAES are among the latest elitist methodologies proposed for solving multiobjective models efficiently, and NSGA-II is considered one of the most robust of them. ELER's building damage values and the ArcGIS OD matrix are used to build the multiobjective model, which minimizes both the total weighted distance of the storage sites to the waste source points and the population in the selected cells.

6.4  Results

The evolutionary multiobjective solution procedure was applied to the problem, and all possible candidate locations were determined; these are the points that appear in at least one solution on the Pareto-optimal front. Figure 6.4 shows all points considered when calculating candidate points. The total number of cells in the grid is 193, covering the entire European side of Istanbul (west of the Bosporus). The studied map is grid based: the grid is composed of 0.05° × 0.05° cells, each represented by its center point on the map. These are all the points in the solution space where a temporary storage site can be located, and each cell carries the population and building inventory information for that cell. Figure 6.5 shows the 26 points selected by the solution procedure, marked with light-colored circles. As the figure shows, the selected cells are mostly close to the city center yet less populated than the cells in the center itself. The proposed model is solved using the solution methodology explained in the previous section. Since this methodology is a GA-based evolutionary algorithm that searches for better solutions, its operating parameters must be determined through several runs or trials. One important parameter is the total number of generations to be created, in other words, the termination criterion, which is 10,000 iterations in this case. All other parameters of the algorithm were fixed (crossover rate = 0.7, mutation probability = 0.0005).
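The chapter does not provide its implementation. Purely as a sketch of how a run with these settings could be set up, the following uses the open-source DEAP library with randomly generated placeholder values for the per-cell weights A_i and populations P_i; the population size of 100 and the penalty handling of constraint (6.3) are assumptions, not taken from the study.

```python
import random
from deap import base, creator, tools

N_CELLS = 193                                            # grid cells (west of the Bosporus)
A = [random.uniform(1.0, 50.0) for _ in range(N_CELLS)]  # placeholder distance weights A_i
P = [random.randint(100, 90_000) for _ in range(N_CELLS)]  # placeholder populations P_i
M = 5                                                    # assumed minimum number of sites

creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))  # minimize both objectives
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("bit", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.bit, N_CELLS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def evaluate(ind):
    if sum(ind) < M:                              # enforce constraint (6.3) by penalty
        return float("inf"), float("inf")
    return (sum(x * a for x, a in zip(ind, A)),   # Eq. (6.1): total weighted distance
            sum(x * p for x, p in zip(ind, P)))   # Eq. (6.2): total exposed population

toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxOnePoint)                  # single-point crossover
toolbox.register("mutate", tools.mutFlipBit, indpb=0.0005)  # bitwise mutation
toolbox.register("select", tools.selNSGA2)                  # elitist NSGA-II selection

pop = toolbox.population(n=100)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)
pop = toolbox.select(pop, len(pop))               # assigns crowding distances

for _ in range(10_000):                           # termination criterion from the text
    offspring = [toolbox.clone(i) for i in tools.selTournamentDCD(pop, len(pop))]
    for c1, c2 in zip(offspring[::2], offspring[1::2]):
        if random.random() < 0.7:                 # crossover rate from the text
            toolbox.mate(c1, c2)
        toolbox.mutate(c1)
        toolbox.mutate(c2)
        del c1.fitness.values, c2.fitness.values
    for ind in offspring:
        if not ind.fitness.valid:
            ind.fitness.values = toolbox.evaluate(ind)
    pop = toolbox.select(pop + offspring, len(pop))

front = tools.sortNondominated(pop, len(pop), first_front_only=True)[0]
```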


Figure 6.4  The 0.05 × 0.05 cell grid of Istanbul (points represent the center of each cell for west of Bosporus).


Figure 6.5  Map representing all solutions of first stage (all candidate locations).


To summarize the final run of the actual model: the mathematical model is composed of two objective functions (distance and population), and an evolutionary multiobjective optimization approach is used to solve it. This approach is based on a GA in which the crossover operation is single-point and the mutation operation is bitwise.

6.5  Discussion

This study proposes a solution methodology for sustainable DWM, with the west side of the Bosporus in Istanbul as a case study. First, the earthquake risk in Istanbul is emphasized with the related background. Next, a suitable disaster hazard assessment methodology is sought: studies by Turkey's principal earthquake institute (KOERI) and by researchers worldwide were reviewed, and the ELER tool was chosen for risk assessment and disaster loss estimation in order to obtain the data needed to estimate waste amounts. One of the expected earthquake scenarios, considered highly likely to occur, is used to create a hazard simulation in ELER; this scenario is an earthquake with a magnitude of 7.5 on the Richter scale. Using the building database and loss estimation modules, the number of damaged buildings was calculated in each damage category. These numbers were used to calculate the amount of DW in each cell of the GIS-based grid used by ELER for assessing hazards. Research on the 1999 Marmara earthquake supplied very important information for converting the number of damaged buildings into the amount of waste generated in tons; Baycan's simple formula is used for this conversion (Baycan, 2004). Once the damage data are expressed in tons, a framework is constructed for the candidate temporary storage site location problem, which consists of locating temporary storage sites in cells; this framework includes a solution procedure to determine candidate locations. In terms of sustainability, minimizing the distance covered alone is not satisfactory, so another objective was defined for the model: since transportation, disposal, and recycling operations are hazardous and harmful, it is important to keep these sites as far as possible from the population.


To achieve this goal, another objective is included in the model: minimization of the total population in the cells selected for temporary sites. The framework thus turns into a biobjective location problem.

To solve this biobjective optimization problem, multiobjective optimization methods were reviewed. These methods have been classified differently in several surveys; in general, they can be grouped into a priori, a posteriori, interactive, and evolutionary methods. The review showed that for large-scale problems with conflicting objectives, evolutionary methods usually promise better performance, so the focus was narrowed to evolutionary methods for this study. Evolutionary multiobjective optimization methods are GA-based search algorithms and can be classified as nonelitist or elitist, where elitist methods have a mechanism for preserving elite solutions. Since elitist methods evolved from the classic evolutionary methods, they are more up to date and, according to previous studies, promise better performance. Elitist evolutionary multiobjective optimization methods were reviewed to select an elitism mechanism for solving the two-stage, biobjective location-allocation optimization problem, and this review showed that most procedures share a framework similar to the NSGA-II methodology developed by Kalyanmoy Deb (2011).

After selecting an appropriate solution method, the problem is formulated as a biobjective location optimization model and solved with the elitist evolutionary NSGA-II methodology; the results were presented in the previous section. The result is a set of solutions, named the Pareto-optimal front. A final discussion is needed to select one or more solutions from this set, and it requires higher-level knowledge: any information that can be used to favor a solution or solutions from the Pareto-optimal solution set.


In this case, the points that appear in at least one solution of the set are taken as the final result, since this provides the widest range of options for the number of temporary storage facilities to select when managing the DW.

6.6  Conclusion

The most important contribution of this study is the proposed framework, which integrates risk assessment with multiobjective modeling for planning DWM. This integration is achieved by combining several methods and tools: geographical information systems, earthquake loss estimation, and evolutionary multiobjective optimization. The framework is a new approach to sustainable DWM. Since DWM and MOO are emerging fields, new methods can be applied to similar problems within this framework in the future, and future loss estimation methods can be incorporated in the same way.

This study aimed to determine candidate locations of temporary storage sites for DWM, but the framework can also be used for other problems that may arise after a disaster. For example, with suitable loss estimation tools, infrastructure damage can be estimated and, by considering several objectives such as cost and priority, a postdisaster repair plan can be produced. The proposed framework is applied here to a postdisaster scenario, but predisaster planning studies can also be carried out in this context; for example, prioritized areas for urban transformation can be determined with a similar approach.

The information and regulations about temporary storage sites are technically insufficient or even nonexistent, so studies on the design of these facilities are needed. The mathematical model presented in this study does not include constraints based on the technical limitations of a temporary storage site, so such constraints can be added to the model in future research. Transportation is represented in this model as distance covered; this objective could instead be expressed as a transportation cost, and rather than using a constant unit cost for carrying waste, the cost could be computed dynamically as a function of the distance traveled.


References


Ansal, A., Özaydın, K. et al. (2003). Earthquake Master Plan of Istanbul. Istanbul, Turkey: Istanbul Metropolitan Municipality.
Baycan, F. (2004). Emergency planning for disaster waste: A proposal based on the experience of the Marmara earthquake in Turkey. In 2004 International Conference and Student Competition on Post-Disaster Reconstruction "Planning for Reconstruction", Coventry, U.K.
Brown, C., Milke, M., and Seville, E. (2011). Disaster waste management: A review article. Waste Management, 31, 1085–1098.
Collette, Y. and Siarry, P. (2003). Multiobjective Optimization: Principles and Case Studies. Berlin, Germany: Springer.
Deb, K. (2001). Multi-Objective Optimization Using Evolutionary Algorithms (1st edn.). Wiltshire, U.K.: John Wiley & Sons, Ltd.
Deb, K. (2008). Introduction to evolutionary multiobjective optimization. In J. Branke, K. Deb, K. Miettinen, and R. Slowinski (Eds.), Multiobjective Optimization (1st edn., pp. 59–96). Heidelberg, Germany: Springer-Verlag.
Deb, K. (2011). Multi-Objective Optimization Using Evolutionary Algorithms: An Introduction. Kanpur, India: Indian Institute of Technology.
FEMA. (2010). Debris Estimating Field Guide. U.S. Department of Homeland Security, http://www.fema.gov/pdf/government/grant/pa/fema_329_debris_estimating.pdf.
Hancılar, U., Tüzün, C., Yenidoğan, C., and Erdik, M. (2010). ELER software—A new tool for urban earthquake loss assessment. Natural Hazards and Earth System Sciences, 10(12), 2677–2696.
Ikenishi, N., Kadota, T. et al. (2002). The study on a disaster prevention/mitigation basic plan in Istanbul including microzonation in the Republic of Turkey. Istanbul, Turkey: Japan International Cooperation Agency.
Marler, R. and Arora, J. (2004). Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, 26(6), 369–395.
Milke, M. (2011). Disaster waste management research needs. Waste Management, 31, 1.
Söder, A. B. and Müller, R. (2011). Disaster Waste Management Guidelines. Geneva, Switzerland: United Nations Office for the Coordination of Humanitarian Affairs Environmental Emergencies Section (UNEP/OCHA Environment Unit).

7
Using Earthquake Risk Data to Assign Cities to Disaster-Response Facilities in Turkey

Ayşenur Sahin, Mustafa Alp Ertem, and Emel Emur

Contents

7.1 Introduction
7.2 Literature Review
7.3 Solution Methodology
7.3.1 Mathematical Model
7.4 Experimental Studies
7.4.1 First Case
7.4.2 Second Case
7.4.3 Third Case
7.5 Conclusion and Future Work
Acknowledgments
References

7.1  Introduction

Turkey is located in one of the most active earthquake regions of the world: it ranks third in the world in terms of human loss and eighth in terms of the number of people affected by earthquakes (AFAD 2012). Beyond political events and changing economic conditions over the years, the one unchanging reality of Turkey is the earthquake. Most of Turkey's population can be considered at risk because of the North Anatolian Fault (NAF) line, along which several earthquakes have been reported. On August 17, 1999, the Marmara earthquake occurred on the western part of the NAF line


with a magnitude of 7.4 on the Richter scale (Görmez et al. 2011). This major earthquake marked a turning point in disaster management and in the coordination of disaster relief activities in Turkey; the great loss of life and property revealed that disaster management in Turkey needed to be reconsidered (AFAD 2012).

There is uncertainty in the nature of disasters such as earthquakes, because their timing and location cannot be predicted beforehand, and this uncertainty affects the proper management of disaster relief operations. It has been observed that earthquakes show different destructive power in different parts of Turkey; earthquake severity and building quality may be the main sources of this difference. Moreover, when a particular fault line is taken into account, some locations in Turkey clearly have a higher risk of experiencing devastating earthquakes than others. In our study, we call this potential "the earthquake risk."

Humanitarian logistics is defined as "the process of planning, implementing and controlling the efficient, cost-effective flow and storage of goods and materials, as well as related information, from the point of origin to the point of consumption for the purpose of alleviating the suffering of vulnerable people" (Thomas and Kopczak 2005). Activities in humanitarian logistics include preparedness, planning, procurement, transport, warehousing, tracking and tracing, and customs clearance (Thomas and Kopczak 2005). Like business supply chain and logistics activities, humanitarian logistics includes diverse activities such as procurement and prepositioning. Before the onset of a disaster, relief items are procured from global or local sources and stored in warehouses; prepositioning thus provides time and place utility, since the time and location of disasters cannot be predicted beforehand. After the disaster onset, the warehouses are continuously resupplied because of the flow of relief items from warehouses to disaster locations. Planning the storage locations of relief supplies and selecting these locations in terms of vulnerability is therefore a crucial predisaster task for humanitarian relief organizations.

This study aims to assign demand points to prepositioned disaster-response facilities (DRFs) in terms of population in order to


minimize the distance between demand points and DRFs while considering the earthquake risk. The DRFs considered in this study are the new container warehouses proposed by AFAD (Turkish Prime Ministry Disaster and Emergency Management Presidency), the Turkish Red Crescent warehouses, and the AFAD Civil Defense Search and Rescue City Directorates. The Turkish Red Crescent Society is a humanitarian organization that provides relief to the vulnerable and those in need by mobilizing the power and resources of the community; AFAD is the government agency for disasters and emergencies and acts as an umbrella organization, collaborating with the Ministry of Foreign Affairs, the Ministry of Health, the Ministry of Forests and Hydraulic Works, and other relevant ministries as well as nongovernmental organizations. We develop a mathematical model that assigns each demand point to a DRF while accounting for the destruction powers and restricting the capacity of each DRF by its population size.

The rest of the chapter is organized as follows. In Section 7.2, we provide an overview of prepositioning in humanitarian logistics and of risk management in disasters. In Section 7.3, we describe the system and the problem in detail and present an integer programming model formulation. In Section 7.4, we test the model with case studies and report the computational results. Finally, we conclude and discuss future work in Section 7.5.

7.2  Literature Review

Despite humanitarian logistics’ importance, the literature in this area is limited (Van Wassenhove 2006). Altay and Green (2006) survey the literature to identify potential research directions in disaster operations, discuss relevant issues, and provide a starting point for interested researchers. In the fall of 2005, since hurricanes Katrina, Wilma, and Rita caused damage of more than $100 billion and highlighted the inadequacy of existing preparedness strategies, some research effort was aimed at devising prepositioning plans for emergency supplies (Rawls and Turnquist 2010). Ukkusuri and Yushimoto (2008) modeled the prepositioning of supplies as a location-routing problem. Their model incorporates


the reliability of the ground transportation network in case of destruction. They maximize the probability that all demand points can be served by a service location, given fixed probabilities of link/node failure and a specified budget constraint; this model is related to our study in terms of demand points and service locations. Balcik and Beamon (2008) developed a model to design a prepositioning system that balances costs against risks in the relief chain. It is a variant of the maximal covering location model that integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions; their computational experiments reveal the effects of pre- and postdisaster relief funding on the relief system's performance, specifically on response time and the proportion of demand satisfied.

Duran et al. (2011) developed a mixed-integer programming inventory-location model that, given a specified upfront investment (in terms of the maximum number of warehouses to open and the total inventory available to allocate), finds the configuration of the supply network that minimizes the average response time over a set of typical demand instances worldwide. The model derives the typical demand instances from historical data; the supply network consists of the number and location of warehouses and the quantity and type of items held in inventory in each warehouse. The basic differences between that study and ours are stock prepositioning, response times, and coverage area, since our model provides an emergency response by assigning demand points to the DRFs with minimum earthquake risk in Turkey.

Görmez et al. (2011) developed a mathematical model to determine the locations of DRFs for Istanbul with the objectives of minimizing the average-weighted distance between casualty locations and DRFs and opening a small number of facilities, subject to distance limits and backup requirements under regional vulnerability considerations. They analyzed the trade-offs between these two objectives under various disaster scenarios and investigated solutions for several modeling extensions. The main difference of our study is our aim


of covering all of Turkey and considering a single objective of minimizing the total traveled distance. Dükkancı et al. (2011) developed a model for the Turkish Red Crescent Society (Kızılay in Turkish) that determined DRF locations by evaluating demographic information and data on past disasters in order to cover the maximum number of people.

Risk is a widely used term in everyday life and in business. Knight (1921) defined it as follows: "if you don't know for sure what will happen, but you know the odds, that's risk, and if you don't even know the odds, that's uncertainty." The concept of resilience is closely related to the capability and ability of an element to return to a predisturbance state after a disruption (Bhamra et al. 2011). After a disaster, there may be risks related to the disruption of transportation roads and long delivery times, which should be analyzed carefully. In this study, we use an earthquake risk map, including destruction powers, to integrate the risk concept into our model. To the best of our knowledge, the assignment of demand points to prepositioned DRF locations (in terms of cities) throughout Turkey considering the earthquake risk has not been analyzed thoroughly. The next section presents an integer programming model for assigning city demand points to prepositioned DRF locations in Turkey considering the earthquake risk.

7.3  Solution Methodology

When the prepositioning literature is analyzed, it is seen that either the distance traveled between DRFs and affected areas or the elapsed time is minimized by considering the closeness of DRFs to disaster-prone areas. In this study, the areas affected by the disaster are called demand points. The assumptions used in the problem are as follows:
• The DRFs can cover a population of at most 15,000,000, because we limit the coverage by the population sizes of the cities that have DRFs.
• The DRFs can satisfy their own requirements from an infinite supply.


7.3.1  Mathematical Model

The objective is to minimize the distances between demand points and DRFs in order to quickly respond to the requirements of beneficiaries. The following notation is used for the DRF assignment model:

Sets
C: set of DRF locations; i ∈ C
T: set of demand points; j ∈ T

Parameters
D_ij: distance between DRF i and demand point j
K_j: population of demand point j
P_i: capacity of DRF i in terms of population
R_ij: average destruction power based on the magnitude of the earthquake for DRF i and demand point j

Decision Variables
x_ij = 1 if demand point j is covered by DRF i, and 0 otherwise

The mathematical model for the problem is as follows:

\min \sum_{i \in C} \sum_{j \in T} D_{ij} x_{ij} \quad (7.1)

subject to

\sum_{j \in T} K_j x_{ij} \le P_i \quad \forall i \in C \quad (7.2)

\sum_{i \in C} x_{ij} \ge 1 \quad \forall j \in T \quad (7.3)

\sum_{j \in T} R_{ij} x_{ij} \ge 1 \quad \forall i \in C \quad (7.4)

x_{ij} \in \{0, 1\} \quad \forall i \in C, \ \forall j \in T \quad (7.5)


The objective function (7.1) minimizes the total distance between the DRFs and the demand points. Constraint set (7.2) ensures that a DRF covers demand points only up to its population capacity. Constraint set (7.3) ensures that every demand point is covered by at least one facility. Constraint set (7.4) requires the total average destruction power between each DRF and its assigned demand points to be at least one; thus, the DRFs cover the demand points that have large destruction powers. Constraint set (7.5) ensures that the coverage variables are binary.
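The chapter solves this model with GAMS 23.7 and CPLEX 11. Purely as an illustration, the same formulation could be written with the open-source PuLP library as sketched below; the tiny data set (city names, distances, populations, and destruction powers) is invented for the example and does not come from the study.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Invented toy data; the study uses 81 cities with KGM distances, TUIK populations,
# and destruction powers derived from Ercan (2010).
C = ["Ankara", "Erzurum"]                      # DRF locations
T = ["Corum", "Kars", "Ardahan"]               # demand points
D = {("Ankara", "Corum"): 241, ("Ankara", "Kars"): 1075, ("Ankara", "Ardahan"): 1088,
     ("Erzurum", "Corum"): 744, ("Erzurum", "Kars"): 203, ("Erzurum", "Ardahan"): 230}
K = {"Corum": 530_000, "Kars": 304_000, "Ardahan": 106_000}   # assumed populations
P = {"Ankara": 15_000_000, "Erzurum": 15_000_000}             # capacity cap (Section 7.3)
R = {(i, j): 0.51 for i in C for j in T}                      # assumed destruction powers

model = LpProblem("DRF_assignment", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary) for i in C for j in T}

model += lpSum(D[i, j] * x[i, j] for i in C for j in T)       # objective (7.1)
for i in C:
    model += lpSum(K[j] * x[i, j] for j in T) <= P[i]         # capacity (7.2)
    model += lpSum(R[i, j] * x[i, j] for j in T) >= 1         # destruction power (7.4)
for j in T:
    model += lpSum(x[i, j] for i in C) >= 1                   # coverage (7.3)

model.solve()
print([(i, j) for (i, j), var in x.items() if var.value() == 1])  # selected (i, j) pairs
```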

7.4  Experimental Studies

The proposed mathematical model is tested for the DRFs of the new container warehouses proposed by AFAD, the Turkish Red Crescent warehouses, and the AFAD Civil Defense Search and Rescue City Directorates in the following sections. Only the data set and computational results of the first case are given in detail; for the other cases, a visual representation of the results is provided. The data set (i.e., risk, population, distance) used for all cases is the same.

7.4.1  First Case

This experiment is conducted for the 27 container warehouse locations recently proposed by AFAD. Earthquake risk data are taken from the earthquake risk map at the city and town level prepared by Prof. Dr. Ahmet Ercan (Ercan 2010). The distances between cities are taken from KGM (General Directorates for Highways 2013), and demographic information on cities and towns (populations) is taken from TUIK (Turkish Statistical Institute 2012). The average destruction powers given in Table 7.1 are derived from the minimum and maximum destruction powers in the earthquake map (Ercan 2010). The first column of the table shows the cities, the second column the populations, and the third column the corresponding risk regions. The fourth and fifth columns show the minimum and maximum destruction powers corresponding to the risk regions, and the sixth column is the average destruction power, calculated as the average of the minimum and maximum destruction powers. The average is used to obtain a moderate representation of the destruction power.


Table 7.1  Sample from the Data Set

City        2012 Population   Risk Region   Min. Destruction   Max. Destruction   Avg. Destruction
                                            Power (a, cm/s²)   Power (a, cm/s²)   Power (a, cm/s²)
Adana       2,125,635         IX            0.31               0.71               0.51
Adıyaman    595,261           IX            0.31               0.71               0.51
Afyon       703,948           IX            0.31               0.71               0.51
Ağrı        552,404           XI            1.50               3.10               2.30
Amasya      322,283           X             0.71               1.50               1.10
Ankara      4,965,542         VIII          0.15               0.31               0.23
Antalya     2,092,537         IX            0.31               0.71               0.51
Artvin      167,082           VIII          0.15               0.31               0.23
Aydın       1,006,541         X             0.71               1.50               1.10
Balıkesir   1,160,731         X             0.71               1.50               1.10
…           …                 …             …                  …                  …
Kilis       124,320           VI            0.03               0.07               0.05
Osmaniye    492,135           VII           0.07               0.15               0.11
Düzce       346,493           XII           3.10               7.10               5.10
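The midpoint averaging used for the last column of Table 7.1 can be reproduced in a few lines; the sketch below uses pandas and a handful of rows copied from the table (it is only illustrative and is not part of the original study).

```python
import pandas as pd

# A few rows of Table 7.1 (minimum and maximum destruction powers)
df = pd.DataFrame({
    "city": ["Adana", "Ağrı", "Ankara", "Kilis", "Düzce"],
    "min_dp": [0.31, 1.50, 0.15, 0.03, 3.10],
    "max_dp": [0.71, 3.10, 0.31, 0.07, 7.10],
})
df["avg_dp"] = (df["min_dp"] + df["max_dp"]) / 2   # average destruction power column
print(df)                                          # e.g., Düzce -> 5.10, Kilis -> 0.05
```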

According to Table 7.1, the maximum average destruction power is 7.1g for Düzce, in the most risky region (XII), and the minimum average destruction power is 0.051g for Kilis, in the least risky region (VI).

The proposed mathematical model was solved using GAMS 23.7 with the CPLEX 11 solver. The total traveled distance is 10,778 km with 59 (i, j) pairs, where an (i, j) pair stands for the assignment of demand point j to DRF i; we refer to them as pairs because the model determines the (i, j) pairs, and the cases are compared through their pair assignments. The total average destruction power over the (i, j) pairs is 60.92. The assignment of demand points to DRFs for this case is given in Table 7.2. The first and fifth columns list the prepositioned DRFs; the second and sixth columns list the demand points assigned to them; the third and seventh columns give the distance of each (i, j) pair; and the fourth and eighth columns give the average destruction power of each (i, j) pair. The results show that demand points are assigned to DRFs able to serve them within at most 4 h by highway under normal conditions, except for the Elazığ–Rize assignment (570 km).

Table 7.2  Assignment of Demand Points to DRFs for Container Warehouses Proposed by AFAD

DRFs Adana

Afyon Ankara Antalya Balıkesir Bursa Denizli Diyarbakır

AvG. DEstRUction PowER BEtwEEn (i, J ) PaiRs

69 205 289 349 110 144 100 453 122 130 224 322 126 150 144 95 100 98

0.37 0.31 0.37 0.81 0.31 0.37 1.41 1.27 0.51 0.51 1.70 2.00 0.81 0.51 0.81 0.31 0.31 0.37

DRFs Manisa Kahramanmaraş

Muğla Muş

Samsun

Sivas Tekirdağ

CovEREd DEmand Points

DistancE BEtwEEn (i, J ) PaiRs

Aydın Uşak Gaziantep Tokat Osmaniye Aydın Isparta Bitlis Siirt Şırnak Giresun Ordu Sinop Amasya Kayseri Çanakkale Edirne Kırklareli

156 195 80 415 100 99 292 83 180 275 196 152 163 222 195 188 140 121

AvG. DEstRUction PowER BEtwEEn (i, J ) PaiRs 0.81 0.51 0.17 0.67 0.17 0.81 0.51 0.51 0.31 0.51 1.27 1.27 1.27 1.11 0.67 1.41 1.27 1.27 (Continued )


Elazığ

Mersin Niğde Karaman Bingöl Şanlıurfa Eskişehir Kütahya İstanbul Burdur Isparta Kütahya İzmir Aydın Uşak Bingöl Mardin Batman Malatya

DistancE BEtwEEn (i, J ) PaiRs


Adıyaman

CovEREd DEmand Points


Table 7.2 (Continued )  Assignment of Demand Points to DRFs for Container Warehouses Proposed by AFAD

Erzincan

Erzurum

Hatay Kastamonu Kocaeli

DistancE BEtwEEn (i, J ) PaiRs

AvG. DEstRUction PowER BEtwEEn (i, J ) PaiRs

Rize Gümüşhane Trabzon Tunceli Ağrı Artvin Kars Bayburt Ardahan Kilis Bartın Karabük Sakarya

570 131 231 130 184 226 203 125 230 147 181 114 37

2.67 2.61 2.67 2.81 1.41 0.37 0.51 0.31 0.37 1.18 0.67 0.67 5.10

DRFs Van Aksaray

Kırıkkale

Yalova Düzce

CovEREd DEmand Points

DistancE BEtwEEn (i, J ) PaiRs

AvG. DEstRUction PowER BEtwEEn (i, J ) PaiRs

Hakkari Iğdır Çorum Kırşehir Konya Nevşehir Çankırı Çorum Yozgat Bilecik Bolu Zonguldak

202 225 326 110 148 75 105 167 141 129 45 114

1.41 1.27 0.61 0.17 0.17 0.11 0.37 0.67 0.23 1.21 3.10 2.67


DRFs

CovEREd DEmand Points


It can be concluded that each DRF covers at least one and at most five demand points (e.g., the Bursa and Erzurum DRFs). Most demand points receive relief supplies from a single facility, since facility capacity is limited by population size; a few demand points, such as Kütahya, Aydın, Uşak, and Bingöl, receive relief supplies from more than one facility. The assignments of the (i, j) pairs are illustrated in Figure 7.1, which shows the assignment of demand points to DRFs, symbolized by containers.

7.4.2  Second Case

This experiment is conducted for 30 Turkish Red Crescent warehouses. The proposed mathematical model was again solved using GAMS 23.7 with the CPLEX 11 solver. The total distance traveled is 10,617 km with 59 (i, j) pairs, and the total average destruction power between the (i, j) pairs is 47, which is less than the value observed in the first case. The assignment of demand points to DRFs for the second case is shown in Figure 7.2. As the figure shows, the demand points are assigned to DRFs able to serve them within at most 4 h by highway under normal conditions, except for the Gaziantep–Çorum and Rize–Amasya assignments (630 and 535 km, respectively). Each DRF covers at least one and at most five demand points (e.g., the Ağrı and Gaziantep DRFs). Most demand points receive relief supplies from a single facility, since facility capacity is limited by population size; a few, such as Kütahya, Çankırı, Aydın, Bitlis, and Bingöl, receive relief supplies from more than one facility.

7.4.3  Third Case

This experiment is conducted for the 11 DRFs of the AFAD Civil Defense Search and Rescue City Directorates. The total traveled distance is 13,997 km with 71 (i, j) pairs; it is higher than in the first and second cases because the number of DRFs is smaller. The total average destruction power between the (i, j) pairs is 72.71, which is more than the values observed in the first and second cases. The assignment of demand points to DRFs for this case is shown in Figure 7.3.


Figure 7.1  Assignment of cities for container warehouses proposed by AFAD.



Figure 7.2  Assignment of cities for Turkish Red Crescent warehouses.


Figure 7.3  Assignment of cities for AFAD Civil Defence and Rescue City Directorates.


As seen from Figure 7.3, the demand points are assigned to DRFs able to serve them within at most 4 h by highway under normal conditions, except for the Van–Erzincan assignment (602 km). Each DRF covers at least 1 and at most 12 demand points (e.g., the Diyarbakır DRF). The demand points receive relief supplies from a single facility, since facility capacity is limited by population size; Kütahya is the exception, receiving relief supplies from two facilities.

The sets of warehouses used in the first, second, and third cases are given in Figure 7.4, which displays the overlapping DRFs among them. There are 12 overlapping cities for the container warehouses and the Turkish Red Crescent warehouses, 3 overlapping cities for the AFAD warehouses and the Turkish Red Crescent warehouses, and 1 overlapping city for the container warehouses and the AFAD warehouses.

[Figure 7.4 groups the DRF cities by warehouse set: Turkish Red Crescent (Kızılay) only — Ağrı, Bolu, Eskişehir, Isparta, Gaziantep, Rize, Tokat, Trabzon; Kızılay and container warehouses — Denizli, Düzce, Elazığ, Muğla, Erzincan, Muş, Hatay, Sivas, İstanbul, Kastamonu, Yalova, İzmir; Kızılay and AFAD — Kocaeli, Manisa, Sakarya; common to all three — Adana, Diyarbakır, Afyon, Erzurum, Ankara, Van, Bursa; container warehouses only — Adıyaman, Antalya, Balıkesir, Kahramanmaraş, Tekirdağ, Aksaray, Kırıkkale; container warehouses and AFAD — Samsun; AFAD only — none.]

Figure 7.4  The set of the warehouses used in the case studies.


Table 7.3  Comparison of the Cases According to Numerical Results

Case          (a) No. of DRFs   (b) No. of (i, j) Pairs   (c) Demand Points Covered by Two DRFs   (d) Total Traveled Distance (km)   (e) Total Avg. Destruction Power   (f) = (e)/(b) Avg. Destruction Power per (i, j) Pair
First case    27                59                        5                                       10,778                             60.92                              1.033
Second case   30                59                        8                                       10,617                             47.00                              0.797
Third case    11                71                        1                                       13,997                             72.71                              1.024

Eight cities belong only to the Turkish Red Crescent warehouses, and seven cities belong only to the container warehouses. Seven DRFs are common to all cases: Adana, Diyarbakır, Afyon, Erzurum, Ankara, Van, and Bursa.

The three cases are summarized in Table 7.3 for comparison. The first column lists the three cases: container warehouses, Turkish Red Crescent warehouses, and AFAD warehouses, respectively. The second and third columns show the number of DRFs in each case and the number of (i, j) pairs produced by the assignment model. The fourth column gives the number of demand points covered by more than one DRF: in the three cases, five, eight, and one demand points are covered by two DRFs, respectively. This double coverage is induced by the model parameters and would increase if the capacity limits of the DRFs were increased. The fifth and sixth columns report the total distance traveled and the total average destruction power obtained by the assignment model. In the last column of Table 7.3, the average destruction power per (i, j) pair is calculated for each case by dividing the total average destruction power by the number of (i, j) pairs. This value can be compared with the situation in which there are 81 DRFs (i.e., one warehouse in each city) and 81 demand points. If every demand point were assigned to every DRF, there would be 81 × 81 assignments, and the overall average destruction power per assignment, found by dividing the average destruction powers of the (i, j) pairs by the number of demand points (81), is 0.85. This means that if all cities behaved like DRFs and were able to serve all cities, the average destruction value of any assignment would be 0.85. However, we take into


account the population capacity of each DRF as well as the destruction powers. The assignment of demand points to the prepositioned DRFs can be said to be less risky when the obtained value is below 0.85, so 0.85 is taken as the moderate value. From this point of view, the second case is superior to the other cases, since it has the lowest average destruction power per (i, j) pair.

7.5  Conclusion and Future Work

In this study, our aim was to minimize the total distance between prepositioned DRFs and the demand points in cities while considering facility capacities and the average earthquake destruction powers between them. We developed an integer programming model for the assignment of demand points to the prepositioned DRFs and tested it with three cases: container warehouses proposed by AFAD (Turkish Prime Ministry Disaster and Emergency Management Presidency), Turkish Red Crescent warehouses, and AFAD Civil Defense Search and Rescue City Directorates. The results give the total distance traveled, the number of demand points covered by each DRF, and the total average earthquake destruction power.

We observed that the humanitarian relief organizations considered in the experimental studies use common cities to store relief items, even though they are unaware of each other's warehouse decisions. This shows that those common cities are suitable for DRFs and suggests that some of the factors considered in selecting DRF locations are the same. The study can be used to see the effect of the assignments on the average destruction powers and on the number of assigned demand points. The cases have different average destruction powers per (i, j) pair: two of them are above the moderate value of 0.85, and one is below it. The assignment for the Turkish Red Crescent warehouses is the best in terms of all performance measures.

The study can be extended by considering the exact locations and capacities of the DRFs; in future studies, distances based on exact locations would support implementation of the model and improve the analysis. A backup facility concept can also be introduced to the


model in order to guard against the risk of not being able to deliver relief items to demand points when a warehouse or the roads are destroyed.

Acknowledgments

We are grateful to Turkish Prime Ministry Disaster and Emergency Management Presidency (AFAD) personnel and to the Scientific and Technological Research Council of Turkey (TUBITAK), which partially supported this study with research Grant No. 113M493.

References

AFAD. T.C. Başbakanlık Afet ve Acil Durum Yönetimi Başkanlığı [cited December 2012]. Available from http://www.afad.gov.tr.
Altay, N. and W. G. Green. 2006. OR/MS research in disaster operations management. European Journal of Operational Research 175(1): 475–493.
Balcik, B. and B. M. Beamon. 2008. Facility location in humanitarian relief. International Journal of Logistics: Research and Applications 11(2): 101–121.
Bhamra, R., S. Dani, and K. Burnard. 2011. Resilience: The concept, a literature review and future directions. International Journal of Production Research 49(18): 5735–5793.
Dükkancı, O., Ö. Koşak, A. İ. Mahmutoğulları, H. Özlü, and N. Timurlenk. 2011. Merkezi Afet Yönetiminde Karar Destek Sistemi Tasarımı. Bilkent University, Ankara, Turkey.
Duran, S., M. A. Gutierrez, and P. Keskinocak. 2011. Pre-positioning of emergency items worldwide for CARE International. Interfaces 41(3): 223–237.
Ercan, A. Ö. S. 2010/12. Türkiye'nin Deprem Çekincesi: İl il, ilçe ilçe Deprem Belgeseli. PARA Dergisi, 14–20 Mart.
Görmez, N., M. Köksalan, and F. S. Salman. 2011. Locating disaster response facilities in Istanbul. Journal of the Operational Research Society 62(7): 1239–1252.
KGM. General Directorates for Highways [cited December 2013]. Available from http://www.kgm.gov.tr.
Knight, F. H. 1921. Risk, Uncertainty and Profit. Hart, Schaffner & Marx, Boston, MA.
Rawls, C. G. and M. A. Turnquist. 2010. Pre-positioning of emergency supplies for disaster response. Transportation Research Part B: Methodological 44(4): 521–534.
Thomas, A. S. and L. R. Kopczak. 2005. From logistics to supply chain management: The path forward in the humanitarian sector. Fritz Institute, San Francisco, CA.
TUIK. Turkish Statistical Institute [cited December 2012]. Available from http://tuik.gov.tr.


Ukkusuri, S. V. and W. F. Yushimoto. 2008. Location routing approach for the humanitarian prepositioning problem. Transportation Research Record: Journal of the Transportation Research Board 2089(1): 18–25.
Van Wassenhove, L. N. 2006. Humanitarian aid logistics: Supply chain management in high gear. Journal of the Operational Research Society 57(5): 475–489.


8
Factors Affecting the Purchasing Behaviors of Private Shopping Club Users: A Study in Turkey

Sercan Akkaş, Ceren Salkın, and Başar Öztayşi

Contents

8.1 Introduction
8.2 Background
8.2.1 E-Commerce and Online Shopping
8.2.2 E-Commerce Success Factors and Previous Modeling Approaches
8.3 Technology Acceptance Model with Multiple Regression Analysis
8.4 Future Research Directions
8.5 Conclusion
References

8.1  Introduction

Technological improvements have made business more convenient, and as widespread Internet usage increases around the world, existing companies have begun adapting and integrating their business models to the electronic environment alongside new entrepreneurs; as a result, novel business models have started to appear. One of these is online shopping, which is growing in Turkey as well as all over the world. The modern business models that have emerged bring new perspectives to the e-commerce sector thanks to improvements in online shopping. In recent years, "Private


Shopping Clubs (PSCs)" have emerged as a new type of business model, with branch categorizations on online shopping websites, making them popular business models that encourage fast shopping. Like every business model, PSCs have their own internal dynamics and characteristics. PSCs are websites on which products are classified by sector and brands are sold online in smaller quantities with large discounts rather than through mass customization. The discounts are time limited in order to induce rapid shopping and increase sales, and members are notified before their discounts begin. This online business model usually works by invitation and is structured as a closed-loop shopping system: people who are not members of a PSC cannot see the products, nor are they informed about the daily discounts and promotions.

Previous studies that examined the purchasing behavior of customers in online shopping have proposed different models for determining end users' shopping behavior. In this study, the PSC model is examined with the Technology Acceptance Model (TAM). The recommended model includes factors such as the tendency toward utilitarian and hedonic shopping behaviors, social effects, personal innovativeness, and e-commerce service quality, among others. The study focuses on explaining the behavior of PSC users in order to support improvements that increase online shopping. The data are analyzed with multiple regression analysis, and the proposed model is confirmed within an acceptable error rate. The results can serve as a guide for existing PSCs and for new enterprises intending to adopt this business model and seeking to understand customers' purchasing tendencies. Researchers can also examine what PSC users expect from online shopping websites and which properties they value in the shopping process. Moreover, the study is useful for investigating the intention behind purchasing behavior, and customer satisfaction after the actual purchase could also be analyzed to improve innovative models on related topics.

This chapter is organized as follows. Section 8.2 provides information about online shopping, e-commerce success factors, and


previous studies in the literature. Section 8.3 presents the statistical formulation of the proposed model and the results of the conducted survey. Section 8.4 gives suggestions for further research, and the chapter is concluded in Section 8.5.

8.2  Background

In this section, we investigate the critical factors of purchasing from online shopping websites and e-commerce applications. First, some background on e-commerce and online shopping is given; then, previous studies are reviewed to clarify the concept.

8.2.1  E-Commerce and Online Shopping

Online shopping is a continuous and integrated process that begins with obtaining materials and ends with the final customer's purchase of goods or with after-sales service (Nath et al., 1998). The behavior of online shopping players changes continuously in response to the other players: for instance, a seller's main objective is to sell raw materials or semifinished products at a better profit, while the customer's aim is to buy goods cheaply. In addition to these divergent purposes, online shopping is affected by world events that can create uncertainty (Navarro et al., 2007). In this dynamic environment, B2B and B2C members should consider the other chain members' decisions and establish coordination mechanisms that respond to external economic changes and environmental factors affecting the whole chain (Numberger and Rennhak, 2005). Peppers and Rogers (2001) therefore refer to a centralized purchasing management model for accurate decision making. In addition, purchasing risk management and the definition of risk performance criteria have become key elements of purchasing activities (Lee et al., 2008). In this respect, Lee et al. (2003) proposed a purchasing management concept consisting of five components: triggers, decision-making characteristics, management factors, management responses, and performance outcomes. They emphasized that aggregate management comprising


nodes and chains diversifies the activities that are combined with e-commerce management activities. According to Hsia et al. (2008), activities that reduce flow in purchasing management need to be controlled and corrected simultaneously. Hawes and Lumpkin (1986) described uncertain parameters in purchasing management such as demand, supply, processing, transportation, shortage of goods, and capacity. Choudhury and Hartzel (1998) noted performance control parameters in e-commerce applications from the perspective of resources, lead times, capacity, and inventory levels; according to the same authors, five e-commerce problem factors are reported in the literature: delivery lateness, prices above expectations, quality failures, confidence, and flexibility problems. Beyond these factors related to e-commerce activities, the authors note that a psychological perspective on purchasing and on the tendency toward Internet shopping is lacking.

8.2.2  E-Commerce Success Factors and Previous Modeling Approaches

In the literature, models built for various purposes generally rely on optimization tools such as linear programming, stochastic modeling, and deterministic modeling (Atchariyachanvanich et al., 2007). Because of shortcomings in the interpretation of their results, they do not provide sufficient information about purchasing behavior or about how e-commerce success should be measured (Aboelmaged, 2010). Some authors also point out the lack of models that can analyze interactions among the psychological components that should be examined (Anumba and Ruikar, 2002). Because the virtual relations between these factors are emergent and dynamic, models should capture sudden changes and provide continuous monitoring. From this point of view, mathematical models that formulate purchasing behavior cannot cope with instant variations or with continuous monitoring of the entire system (Browne and Cudeck, 1993); in such static structures, the interactions and variations of the parameters cannot be represented. Therefore, models constructed with system dynamics, agent-based simulation, and case-based reasoning are more suitable for determining causal relationships and controlling parameter effects on system behavior (Jiang and Qian, 2009).



Figure 8.1  Online shopping rates in Europe in 2012. (Eurostat Report—Online Payments 2012; http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-CD-12-001/EN/KS-CD-12-001-EN.PDF.)

Table 8.1  E-Commerce Success

Contribution to Success   Internal Effects             External Effects
Triggers                  1. Cost management           1. Product pricing
                          2. Reputation                2. Devoted time
                          3. Market                    3. Convenience
                          4. Entrance to the sector    4. External relationships
Obstacles                 1. Financial predicaments    1. Costs caused from customers
                          2. Risks                     2. Delivery time
                          3. Experience                3. Transaction risk
                                                       4. Access

Source: Quaddus, M. and Achjari, D., Telecommun. Policy, 29, 127, 2005.

In recent years, online shopping has developed as an effective management approach that takes into account unstable external factors and the interactions among them (Figure 8.1). The main question is how the key performance factors of e-commerce applications (Table 8.1) interact with each other and how these interactions affect the whole online shopping process (Jun et al., 2004). In relation to this problem, purchasing behavior originates from three components: information on a loss-making event, the probability of existing risks, and the effect of the event. Purchasing management is a systematic way of managing that includes determining uncertainties in the processes that could cause failures and trouble in a system and monitoring these processes and indicators continuously


(Ho et al., 2007). In this sense, e-commerce management should incorporate financial risks, which imply a balance of cost and profit; demand uncertainty and fluctuations, which cause the butterfly effect; and tardiness and delays (Chen et al., 2008). All these factors trigger purchasing behavior and affect sales in online shopping. Since online shopping websites deal with a huge number of complaints, this study investigates psychological factors that indirectly affect purchasing behavior. Unlike former studies, this model treats individual perspectives on purchasing behavior as the main indicators in the analysis.

8.3  Technology Acceptance Model with Multiple Regression Analysis

The Technology Acceptance Model (TAM) was first introduced by Davis (1989) on the basis of his doctoral research. The methodology builds on the Theory of Reasoned Action and is regarded as a sociopsychological method with a well-structured perspective for forecasting the acceptability of new technologies (Wang, 2002). The theory rests on a cost–benefit paradigm, the adoption of innovations, self-efficacy, and expectancy theory (Davis, 1989). The model can also be extended with other factors that could affect the tendency to purchase (Legris et al., 2003). Substantial factors that bear on purchasing intention are listed as follows:

• External variables such as late delivery and high prices
• Perceived usefulness
• Perceived ease of use
• Attitude to utilization
• Intuitions of use
• Usage of the system

The basic infrastructure of the model includes the following factors:

• Utilitarian and hedonic shopping orientation
• Information quality related to the product
• Reliance on the website
• Website quality
• Personal innovativeness
• Perceived pleasure
• E-commerce service quality
• Content wealth
• Satisfaction

For measuring the direct and indirect effects on the purchasing behavior of the customers in private shopping, we construct a multiple regression model that can be shown generally as follows:

Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4 + \cdots + \beta_n x_n + \varepsilon

In this equation, Y denotes the dependent variable, which represents private shopping behavior, and x_1, x_2, …, x_n are independent variables that directly or inversely affect Y. The independent variables are perceived usefulness, perceived ease of use, attitude to utilization, intuitions of use, usage of the system, utilitarian and hedonic shopping orientation, information quality related to the product, reliance on the website, website quality, personal innovativeness, perceived pleasure, e-commerce service quality, content wealth, and satisfaction. To measure the effect of these variables, a questionnaire was administered using a Likert scale; the demographic characteristics of the survey are given in Tables 8.2 through 8.4. Before multiple regression analysis, a reliability analysis should be applied to assess the suitability of the data for statistical analysis. Reliability can be examined by checking the Cronbach Alfa value: if it is greater than or equal to 0.7, the data can be considered suitable for statistical analysis. The Cronbach Alfa values are given in Table 8.5; in this table, the utilitarian and hedonic shopping orientation and individual norms are not reliable for multiple regression analysis, and deleting these items increases the reliability of the analysis.

Table 8.2  Considered Branches in the Survey

Private Shopping Clubs   Frequency   Percentage (%)
Markafoni                304         41.87
Trendyol                 130         17.91
Limango                  44          6.06
Morhipo                  30          4.13
1v1y                     20          2.75
Others                   198         27.27


Table 8.3  Demographic Properties of the Participants

Criteria          Characteristics           Frequency    Percentage (%)
Sex               Male                      342          47.11
                  Female                    384          52.89
Marital status    Married                   78           10.74
                  Single                    648          89.26
Age               …                         416          57.30
                  …                         286          39.39
                  40                        24           3.31
Education         Doctorate                 7            0.96
                  MSc                       105          14.46
                  BSc                       558          76.86
                  Undergraduate             7            0.96
                  Technical High School     8            1.10
                  High School               37           5.10
                  Secondary School          4            0.55
Monthly income    …                         165          22.73
                  …                         135          18.60
                  …                         196          27.00
                  …                         204          28.10
                  5000 TL                   26           3.58

Table 8.4  Previous Online Shopping Experience of the Participants

Previous Online Shopping Experience         Frequency    Percentage (%)
Shopping less than once in 1–2 years        81           11.16
Shopping once or twice a year               91           12.53
Shopping three or four times a year         196          27.00
Shopping once or twice a month              253          34.85
Shopping two or three times a month         105          14.46

As Table 8.5 shows, the utilitarian and hedonic shopping orientation and individual norms scales are not sufficiently reliable for multiple regression analysis; deleting these items increases the reliability of the analysis. After the reliability analysis, we specify the multiple regression model shown in Figure 8.2. H0 denotes the null hypothesis, which states that the factor in question has no effect on the population mean, and Hi denotes the corresponding alternative hypothesis. Subsequently, we group the independent variables in order to construct the relationships between the independent and dependent variables. As seen in Table 8.6, we obtain 12 components that together explain 69.275% of the variance of the independent variables (Tables 8.7 and 8.8).
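As a rough illustration of the reliability check described above, the following minimal Python sketch computes Cronbach's alpha for one Likert-scale construct and flags it against the 0.7 threshold. The data, column layout, and the NumPy usage are illustrative assumptions and are not taken from the original study.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of Likert scores for one construct
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses for a 4-item construct
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.3f}; reliable for regression: {alpha >= 0.7}")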


Table 8.5  Cronbach's Alpha Values of the Variables

Variable                                        Cronbach's Alpha    Cronbach's Alpha if Item Is Deleted
Innovativeness                                  0.81                —
Utilitarian and hedonic shopping orientation    0.62                0.777
Individual norms                                0.693               0.742
Perceived risk                                  0.813               —
Reliance on the website                         0.924               —
Information quality related to the product      0.787               —
Website quality                                 0.78                —
E-commerce service quality                      0.815               —
Content wealth                                  0.846               —
Attitude to utilization                         0.877               —
Usage of the system                             0.825               —
Perceived pleasure                              0.903               —
Intuitions of use                               0.847               —
Satisfaction                                    0.906               —

Figure 8.2  Multiple regression model for the explanation of purchasing behavior (path diagram linking the TAM constructs through hypotheses H1–H15).

After applying the principal component analysis, we fit the multiple regression models, retaining only the terms whose significance value (p value) in the analysis of variance (ANOVA) is acceptable at the 95% confidence level. The first of the resulting equations is:

Usage of the system = 1.945 + (0.431 × website quality) + (0.76 × personal innovativeness)
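The sketch below illustrates, on made-up data, the two-step procedure just described: a principal component analysis of the questionnaire items followed by an ordinary least squares fit whose p values can be screened at the 5% level. The library choices (scikit-learn, statsmodels) and all variable names are illustrative assumptions, not the chapter's actual computation.

import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical standardized questionnaire scores: 726 respondents x 10 items
items = rng.normal(size=(726, 10))

# Step 1: principal component analysis of the item pool
pca = PCA(n_components=3)
components = pca.fit_transform(items)
print("explained variance (%):", np.round(100 * pca.explained_variance_ratio_, 2))

# Step 2: regress a dependent construct (e.g., usage of the system) on selected predictors
website_quality = components[:, 0]
personal_innovativeness = components[:, 1]
usage_of_system = (1.9 + 0.4 * website_quality + 0.7 * personal_innovativeness
                   + rng.normal(scale=0.5, size=726))

X = sm.add_constant(np.column_stack([website_quality, personal_innovativeness]))
model = sm.OLS(usage_of_system, X).fit()
print(model.params)            # intercept and coefficients
print(model.pvalues < 0.05)    # keep terms significant at the 95% confidence level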


Table 8.6  Principal Component Analysis Results

            Initial Eigenvalues                    Extraction Sums of Squared Loadings    Rotation Sums of Squared Loadings
Component   Total    % of Variance  Cumulative %   Total    % of Variance  Cumulative %   Total    % of Variance  Cumulative %
1           14.769   29.538         29.538         14.769   29.538         29.538         6.997    13.993         13.993
2           3.567    7.134          36.672         3.567    7.134          36.672         3.573    7.146          21.14
3           2.487    4.975          41.646         2.487    4.975          41.646         3.372    6.743          27.883
4           2.228    4.457          46.103         2.228    4.457          46.103         2.996    5.992          33.875
5           2.013    4.025          50.128         2.013    4.025          50.128         2.761    5.522          39.396
6           1.943    3.887          54.015         1.943    3.887          54.015         2.64     5.279          44.676
7           1.598    3.195          57.21          1.598    3.195          57.21          2.299    4.598          49.274
8           1.446    2.892          60.102         1.446    2.892          60.102         2.246    4.492          53.766
9           1.337    2.674          62.776         1.337    2.674          62.776         2.175    4.35           58.116
10          1.152    2.304          65.08          1.152    2.304          65.08          2.15     4.299          62.415
11          1.069    2.139          67.219         1.069    2.139          67.219         1.765    3.529          65.944
12          1.028    2.056          69.275         1.028    2.056          69.275         1.665    3.331          69.276

Table 8.7  Variables Indirectly Affecting Each Other

                          Norm    Risk    Service  Pragmatic  Innovativeness  Website  Satisfaction  Intuitions of Use  Perceived Pleasure  Usage of the System  Attitude to Utilization
Satisfaction              0.12    −0.10   —        0.10       0.04            0.33     —             —                  0.25                0.35                 0.54
Intuitions of use         —       —       —        0.12       0.05            0.41     —             —                  —                   0.39                 —
Perceived pleasure        —       —       —        —          —               —        —             —                  —                   —                    —
Usage of the system       —       —       —        —          —               —        —             —                  —                   —                    —
Attitude to utilization   —       —       —        —          0.07            0.35     —             —                  —                   —                    —

Table 8.8  Variables Directly Affecting Each Other

                          Norm    Risk    Service  Pragmatic  Innovativeness  Website  Satisfaction  Intuitions of Use  Perceived Pleasure  Usage of the System  Attitude to Utilization
Satisfaction              0.15    —       0.22     —          —               —        —             0.81               —                   —                    —
Intuitions of use         —       −0.13   —        —          —               —        —             —                  0.31                0.04                 0.66
Perceived pleasure        —       —       —        0.39       —               —        —             —                  —                   —                    —
Usage of the system       —       —       —        —          0.12            0.60     —             —                  —                   —                    —
Attitude to utilization   —       —       —        —          —               0.23     —             —                  —                   0.59                 —

Intuition of use = 0.679 + (0.069 × individual norms) + (0.067 × image) − (0.045 × perceived risk) + (0.356 × usage of the system) + (0.080 × attitude to utilization) + (0.238 × perceived pleasure)

Perceived pleasure = 2.645 + (0.281 × pragmatic shopping orientation)

Satisfaction = 0.725 + (0.625 × intuition of use) + (0.239 × e-commerce service quality)

To summarize, usage of the system depends strongly on personal innovativeness and website quality; PSCs can therefore raise purchasing levels by improving the quality of their websites. Intuition of use is associated with individual norms, image, perceived risk, usage of the system, attitude to utilization, and perceived pleasure, most of which reflect personal views that PSCs can scarcely influence directly. Perceived pleasure is affected by utilitarian and hedonic shopping orientation, which can be indirectly controlled by PSCs. Finally, satisfaction is driven by intuition of use and e-commerce service quality, both of which PSCs can monitor (Figure 8.3).

Figure 8.3  Accepted Hi hypotheses (continuous line) and rejected Hi hypotheses (dotted line) and their coefficients.
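To make the link between the regression equations and the indirect effects reported in Table 8.7 concrete, the short sketch below multiplies direct path coefficients along a chain of relationships to obtain an indirect effect. The coefficient values come from the equations above, but treating them as path weights that can simply be multiplied is an illustrative simplification, not a step stated in the chapter.

# Direct effects taken from the fitted equations in the text
direct = {
    ("website quality", "usage of the system"): 0.431,
    ("usage of the system", "intuition of use"): 0.356,
    ("intuition of use", "satisfaction"): 0.625,
}

def indirect_effect(path):
    """Multiply the direct effects along a chain of variables."""
    effect = 1.0
    for src, dst in zip(path, path[1:]):
        effect *= direct[(src, dst)]
    return effect

chain = ["website quality", "usage of the system", "intuition of use", "satisfaction"]
print(round(indirect_effect(chain), 3))  # indirect effect of website quality on satisfaction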


8.4 Future Research Directions

This research can be expanded with structural equation modeling in order to examine the relationships between the independent variables. Moreover, another survey can be conducted to compare the effectiveness of these two methods.

8.5  Conclusion

This study aims to identify the significant factors that affect the purchasing behavior of private shopping end users in e-commerce. To determine customer expectations, multiple regression analysis is integrated with psychological attributes such as individual norms, image, perceived risk, usage of the system, attitude to utilization, and perceived pleasure, in accordance with an effective representation of the TAM. From the results of this research, companies can analyze the intention to use PSC business models. Moreover, PSC users' expectations of PSC websites can be specified, which enables PSCs to assess customers' priorities from their purchasing behavior. Additionally, PSCs can translate the findings of this study into concrete sales actions and can analyze customer satisfaction after the actual purchase.

References

Aboelmaged, M.G. (2010). Predicting e-procurement adoption in a developing country: An empirical integration of technology acceptance model and theory of planned behavior, Industrial Management & Data Systems, 110, 392–414.
Anumba, C.J. and Ruikar, K. (2002). Electronic commerce in construction: Trends and prospects, Automation in Construction, 11, 265–275.
Atchariyachanvanich, K., Okada, H., and Sonehara, N. (2007). Theoretical model of purchase and repurchase in internet shopping: Evidence from Japanese online customers. In ICEC'07, Minneapolis, MN.
Browne, M.W. and Cudeck, R. (1993). Testing Structural Equation Models. Newbury Park, CA: Sage Publications.
Chen, D.N., Jeng, B., Lee, W.P., and Chuang, C.H. (2008). An agent-based model for consumer-to-business electronic commerce, Expert Systems with Applications, 34, 469–481.


Choudhury, V. and Hartzel, K.S. (1998). Uses and consequences of electronic markets: An empirical investigation in the aircraft parts industry, MIS Quarterly, 22, 471–503.
Davis, F.D. (1989). A technology acceptance model for empirically testing new end user information systems: Theory and results. Doctoral Dissertation, MIT Sloan School of Management, Cambridge, MA.
Hawes, J.M. and Lumpkin, J.R. (1986). Perceived risk and the selection of a retail patronage mode, Journal of the Academy of Marketing Science, 14, 37–42.
Ho, S.C., Kauffman, R.J., and Liang, T.P. (2007). A growth theory perspective on B2C e-commerce growth in Europe: An exploratory study, Electronic Commerce Research and Applications, 6, 237–259.
Hsia, T.L., Wu, J.H., and Li, E.Y. (2008). The e-commerce value matrix and use case model: A goal-driven methodology for eliciting B2C application requirements, Journal of Information Management, 45, 321–330.
Jiang, X. and Qian, X. (2009). Study on intelligent e-shopping system based on data mining, School of Electronic Information & Electrical Engineering, Changzhou Institute of Technology CZU, Chanzhou, China.
Jun, M., Yang, Z., and Kim, D. (2004). Customers' perceptions of online retailing service quality and their satisfaction, International Journal of Quality & Reliability Management, 21, 817–840.
Lee, Y., Kozar, K.A., and Larsen, K.R.T. (2003). The technology acceptance model: Past, present, and future, Communications of the Association for Information Systems, 12, 752–780.
Legris, P.J., Ingham, P., and Collerette, P. (2003). Why do people use information technology? A critical review of the technology acceptance model, Information and Management, 40, 191–204.
Nath, R., Akmanligil, M., Hjelm, K., Sakaguchi, T., and Schultz, M. (1998). Electronic commerce and internet: Issues, problems and perspectives, International Journal of Information Management, 18, 91–101.
Navarro, J.G.C., Jimenez, D., and Conesa, E.A.M. (2007). Implementing e-business through organizational learning: An empirical investigation in SMEs, International Journal of Information Management, 27, 173–186.
Numberger, S. and Rennhak, C. (2005). The future of B2C e-commerce, Electronic Markets, 15, 269–282.
Peppers, D. and Rogers, M. (2001). One to One B2B: Customer Development Strategies for the Business-to-Business World. New York: Doubleday.
Quaddus, M. and Achjari, D. (2005). A model for electronic commerce success, Telecommunications Policy, 29, 127–152.
Wang, Y.S. (2002). The adoption of electronic tax filing systems: An empirical study, Government Information Quarterly, 20, 333–352.


9  Traffic Signal Optimization: Challenges, Models, and Applications

Mahmut Ali Gökçe, Erdinç Öner, and Gül Işık

Contents

9.1 Introduction
9.2 Challenges
9.2.1 Coordination
9.2.2 Scale
9.2.3 Measurement
9.3 Models
9.3.1 Analytical Optimization Models
9.3.2 Heuristic Optimization Models
9.4 Conclusion
References

9.1  Introduction

Traffic congestion is one of the most important problems of urban life. The urban traffic system, composed of vehicles, pedestrians, traffic lights, and the traffic network structure, gives rise to a complex problem to be solved (Salimifard and Ansari, 2013). Even as new roads are built, the demand for capacity on roads and other transportation systems expands faster than the supply, so congestion continues to cause an increasing loss of valuable time and resources in urban areas. Different approaches may be used to reduce traffic congestion in urban traffic systems. Improving public transportation and deploying intelligent transportation system applications are among the main approaches in use. Another such approach is the effective use of traffic signals. Traffic signal systems are used to


control and regulate traffic flow for pedestrians and vehicles at road intersections or other locations where heavy traffic flow is present (Roess et al., 2004). Traffic congestion is essentially a flow problem; it can therefore be managed through how the combination of signals along a route is regulated. Researchers generally agree that the correct staging of traffic signals can help to reduce traffic congestion by improving the flow of vehicles in urban areas (Garcia-Nieto et al., 2013). This chapter is on traffic signal optimization. In the next section, we discuss the challenges associated with the traffic signal optimization problem. In Section 9.3, we present a review of models for different types of traffic signal optimization problems, with examples of applications.

9.2  Challenges

The challenges of the traffic signal optimization problem can be summarized under three headings: coordination, scale, and measurement.

9.2.1  Coordination

Traffic signal operations are generally designed for individual locations (single intersections). However, to improve and regulate traffic flow in an urban traffic system, the coordination of traffic signals, or the whole traffic network, should be taken into consideration. The effects of the signal operations at any intersection on the downstream and upstream intersections have to be considered: a design that improves flow at one intersection may degrade flow at downstream or upstream intersections. The capacity of the intersections and the storage area available at the traffic signals are other important characteristics that have to be considered in the urban traffic system. The large number of intersections in urban areas results in short distances between intersections, which may not provide enough capacity to hold a high number of vehicles at the signals and can produce spillback queues onto arterial roads.


In addition to intersection capacity, undisrupted traffic flow should be provided to reduce urban traffic congestion. Short distances between intersections may result in stop-and-go traffic flow. Green waves and coordinated traffic signal optimization approaches are used to overcome this stop-and-go condition (Pengdi et al., 2012).

9.2.2  Scale

The optimization of traffic signal operations in an urban traffic system is a complex problem, since an intersection consists of a number of approaches and the crossing area (Dotoli et al., 2006). Because the traffic signal switches between red and green, discrete variables must be introduced into the signal timing optimization problem, which makes it combinatorial; and when the whole urban traffic system is considered, the problem becomes very large very quickly (Papageorgiou et al., 2003). Another difficulty in solving the traffic signal timing optimization problem is the unpredictable disturbances (incidents, illegal parking, intersection blocking, etc.) in the traffic flow, which also introduce stochasticity into the urban traffic problem.

9.2.3  Measurement

The prerequisite for improving any system is the ability to measure the effects of the changes made to it. To optimize traffic signal operations in an urban traffic system, ideally real-time data are required for measuring these effects. Due to the nature of the traffic system, the data vary over the day, week, month, and year. The number of vehicles, the percentages of different vehicle types in the urban traffic system, the distance and location information of the intersections, the number of pedestrians, and the entry and exit points of vehicles and pedestrians are required to solve the traffic signal optimization problem optimally. With technological advancements, it is becoming easier to collect traffic data. Inductive loop detectors, radar, microwave, and ultrasonic sensors, infrared cameras, and in-vehicle GPS/GSM receivers/transmitters


(floating car data) are a few of the sensor technologies used to collect traffic data (Van Lint and Hoogendoorn, 2010). The use of real-time traffic data to solve the traffic signal optimization problem also requires real-time evaluation and analysis of the data. Developing models for the analysis of these large amounts of data is a great challenge.

9.3  Models

The traffic signal optimization problem is studied either for a single intersection or for a network of intersections. A single intersection is by nature isolated and relatively easier to study. Although networks of intersections are much more realistic than a single intersection, modeling and solving such models becomes harder very quickly. Sometimes a single intersection is sufficiently isolated that one can study and arrange its signal timings to obtain a significant improvement in traffic flow. There are three main types of plan for traffic signal control: fixed-time, semiactuated, and fully actuated. Fixed-time plans, also called pretimed plans, are the most basic; each phase of the signal lasts for a specific duration before changing to the next phase. Fixed-time plans usually use historical data to determine the timings. Although the timings are independent of the current traffic flow, multiple timing settings can be used for different times of the day. The advantage of fixed-time plans is that they are relatively cheap to implement. Actuated plans, semi or fully, are traffic responsive but require significantly more investment to implement: vehicle sensors and detectors need to be installed at many intersections, along with an algorithm to manage the real-time data collected. Some of the simpler actuated plans choose, among the fixed-time plans stored in the system, the one that best fits the real-time data collected at that time. Because traffic signal timing is one of the cheapest and most effective methods of reducing traffic congestion on metropolitan roads and networks (Spall and Chin, 1997), the problem of signal timing optimization has been studied with a variety of methods. We present here some of these methods, grouped by the approach used.
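For fixed-time plans of the kind described above, a classical starting point is Webster's (1958) approximate optimal cycle length together with green splits proportional to the critical flow ratios. The short Python sketch below applies those textbook formulas to made-up flow data; the numbers and variable names are illustrative, and the formulas are the standard Webster expressions rather than a method proposed in this chapter.

def webster_fixed_time(flows, sat_flows, lost_time):
    """Webster's approximate optimal cycle and proportional green splits.

    flows, sat_flows: critical-lane demand and saturation flows (veh/h) per stage.
    lost_time: total lost time per cycle (s).
    """
    # Critical flow ratio of each stage and their sum Y
    y = [q / s for q, s in zip(flows, sat_flows)]
    Y = sum(y)
    if Y >= 1.0:
        raise ValueError("Demand exceeds capacity; no feasible fixed-time plan.")
    # Webster's optimal cycle length C0 = (1.5 L + 5) / (1 - Y)
    cycle = (1.5 * lost_time + 5.0) / (1.0 - Y)
    # Effective green time shared in proportion to the flow ratios
    greens = [(yi / Y) * (cycle - lost_time) for yi in y]
    return cycle, greens

# Illustrative two-stage intersection
cycle, greens = webster_fixed_time(flows=[600, 450], sat_flows=[1800, 1800], lost_time=10)
print(round(cycle, 1), [round(g, 1) for g in greens])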


9.3.1  Analytical Optimization Models

Signal timing optimization requires a measure of performance for a particular setting of the signal timing. Developing an analytical model that can take into account vehicles arriving from different directions at nondeterministic rates is challenging. Therefore, to the best knowledge of the authors, there are relatively few analytical models for signal timing optimization in the literature. For a single intersection, a fixed-time strategy can be either stage-based or phase-based. A stage-based strategy determines the optimal cycle and split times. A phase-based fixed-time strategy also determines stage specifications, which include the options (turning left, going straight, etc.) a vehicle has at an intersection. One of the earliest and best-known stage-based fixed-time strategies was SIGSET (Allsop, 1971). SIGSET's objective function was a nonlinear delay function derived by Webster (1958). SIGSET used m linear constraints on the capacity of stage specifications, which resulted in a linearly constrained nonlinear programming problem. SIGCAP, also developed by Allsop, aimed to maximize the intersection's capacity (Allsop, 1976); SIGCAP, however, was a linear programming problem. Improta and Cantarella (1984) solved a similar single-intersection signal optimization problem but included stage specifications (a phase-based fixed-time strategy). Their approach determined split and cycle times, as well as stage specifications, to optimize total delay or system capacity. To determine stage specifications, they had to add binary variables, which made their problem a mixed-integer linear programming (MILP) problem of increased difficulty. For a network of intersections, one of the earliest MILP models was developed by Little (1966) under the name MAXBAND. Little developed a MILP for an n-intersection two-way arterial, for which split and cycle times are assumed to be given. His model determined the optimal offsets by maximizing the number of vehicles that can travel a given range without stopping. Later, MAXBAND was transformed into a portable Fortran code that was able to handle three-artery networks with up to 17 signals (Little et al., 1981). Chaudhary et al. (1991) reduced the computational requirements of MAXBAND. Stamatiadis and Gartner (1996) extended MAXBAND so that it became applicable to networks of arterials.
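As a small illustration of the linear programming flavor of stage-based analytical models such as SIGCAP, the sketch below allocates green time among stages so as to maximize a common demand multiplier that the intersection can carry for a fixed cycle. The formulation and data are a simplified illustration in the spirit of capacity maximization, not the actual SIGSET or SIGCAP formulations.

import numpy as np
from scipy.optimize import linprog

def max_capacity_splits(demand, sat_flow, cycle, lost_time, g_min=7.0):
    """Maximize mu such that s_i * g_i / C >= mu * q_i for every stage i,
    subject to sum(g_i) = C - L and g_i >= g_min. Variables: [g_1..g_n, mu]."""
    n = len(demand)
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # maximize mu
    A_ub = np.zeros((n, n + 1))
    for i in range(n):
        A_ub[i, i] = -sat_flow[i] / cycle          # -s_i g_i / C + mu q_i <= 0
        A_ub[i, -1] = demand[i]
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0                              # green times fill the cycle
    b_eq = [cycle - lost_time]
    bounds = [(g_min, None)] * n + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]

greens, mu = max_capacity_splits(demand=[600, 450], sat_flow=[1800, 1800],
                                 cycle=60.0, lost_time=10.0)
print(np.round(greens, 1), round(mu, 2))           # mu > 1 means spare capacity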

9.3.2  Heuristic Optimization Models

Urban traffic analysis and control is a problem whose complexity makes it difficult to analyze with traditional analytical methods (Tartaro et al., 2001). Even Webster, in his work from 1958, states "Since a theoretical calculation of delay is very complex and direct observation of delay on the road is complicated by uncontrollable variations, it was decided to use a method whereby the events on the road are reproduced in the laboratory by means of some machine which simulates behavior of traffic…," suggesting the use of simulation for traffic problems. Hewage and Ruwanpura (2004) stated that computer simulation can be useful for analyzing traffic flow patterns and signal timings. The Urban Traffic Control System was the first traffic simulation software developed under the direction of the Federal Highway Administration (FHWA) and was later named NETSIM, short for Network Simulation. FREESIM (abbreviated from Freeway Simulation) was an enhanced version of NETSIM, which could handle more complex freeway geometrics and provide a more realistic representation of traffic on a freeway. In 1998, NETSIM and FREESIM were combined and offered to the public under the new name CORSIM. There are two main approaches in traffic modeling using simulation: microscopic and macroscopic. In a microscopic model, each car is simulated individually, and the dynamic variables of the model represent microscopic properties such as the position and velocity of single vehicles. Macroscopic models take a more aggregated approach, in which traffic flows as a whole are assumed to be comparable to fluid streams; in this case, one is more interested in traffic flow characteristics such as density and the mean speed of the traffic flow. TRANSYT-7F is a macroscopic traffic simulation model that also performs optimization using genetic algorithms (GA) and a hill climbing method. It was originally developed in the


United Kingdom by the Transport and Road Research Laboratory. It was later adapted by the FHWA, thus acquiring "7F" after version 7. TRANSYT-7F considers platoons of vehicles instead of individual vehicles but simulates the flow in small time increments, which allows a more detailed representation than most other macroscopic models. TRANSYT-7F allows the user to choose either a GA or a hill climbing method to optimize signal timings (TRANSYT-7F User's Manual, 1998). Synchro evaluates a series of cycle lengths while applying a heuristic method to determine green splits so as to optimize the four signal timing parameters (Synchro, 2014). During these evaluations, it also conducts an exhaustive search for left-turn phase position and a quasi-exhaustive search for offsets. Synchro uses percentiles of traffic flow as the optimization criterion. However, TRANSYT-7F and Synchro are unable to fully consider important aspects of traffic behavior due to the nature of the macroscopic simulation model. For example, lane-changing behavior and vehicle interactions are not considered in macroscopic models in a realistic manner. Although there are studies with macroscopic models, there is a growing recognition of the usefulness of stochastic microscopic simulation models (Lindgren and Tantiyanugulchai, 2003). Microscopic simulation models allow realistic scenarios to be tested under real-world conditions and also provide network-wide performance measures such as travel times, delays, and emissions (White, 2001). Microscopic simulation models are becoming a more accepted tool for signal timing and capacity studies (Park et al., 2003). CORSIM was among the earliest microscopic simulation tools developed. Today, VISSIM, SUMO, PARAMICS, and SIMTRAFFIC are just some of the microscopic models available. Park and Yun (2003) compared various microscopic simulation models in terms of computation time and capability of modeling a coordinated actuated signal system. An earlier but much wider review of microscopic models is given by Algers et al. (1997) in the SMARTEST project. Among the heuristic optimization methods, the use of GAs is dominant for determining signal timing. Foy et al. (1992) proposed a GA to determine signal timing for a two-phase system.


Hadi and Wallace (1993) developed a GA to be used in combination with the TRANSYT-7F optimization routine to determine signal timing and phasing. Clement and Taylor (1994) developed a GA and a knowledge-based system for dynamic traffic signal control. Abu-Lebdeh and Benekohal (1997, 1998) applied GAs to oversaturated arterials for traffic control and queue management. Park et al. (2001) applied a GA with the CORSIM microscopic traffic simulation package. Park and Schneeberger (2003) applied a GA with the VISSIM microscopic traffic simulation package. The list of GA-based signal timing optimizations based on single-objective searches is long and cannot be covered here comprehensively (see the literature review in Stevanovic et al., 2007). However, there are also studies that implemented evolutionary multiobjective optimization to retime traffic signals. Sun et al. (2003) applied NSGA-II to optimize delay and stops for an isolated intersection under two-phase control; NSGA-II obtained a close approximation to the Pareto set while using an analytical formula to determine delay and stops. Abbas et al. (2007) applied NSGA-II to a small three-signal network while choosing signal settings from a predetermined set obtained from a single-objective optimization at different cycle lengths; thus, the multiobjective optimizer considered only variations in cycle length, simplifying the exercise considerably. Kesur (2010) investigated and suggested a multiobjective optimization when there are numerous optimization variables. The use of other metaheuristic methods to optimize signal timing has been limited. Chen and Xu (2006) applied particle swarm optimization (PSO) to train a fuzzy logic controller located at intersections with the aim of determining green light timings; they used a simple network with two basic junctions to test their model. More recently, Peng et al. (2009) developed a PSO algorithm for a restricted one-way road with two intersections using a custom microscopic traffic flow model. Finally, Garcia-Nieto et al. (2013) proposed a PSO algorithm to optimize traffic light cycle programs using a microscopic traffic simulator, SUMO. Işık et al. (2013) proposed a PSO algorithm to optimize signal timings using VISSIM microscopic traffic simulation as an evaluation function and presented a real-life application for a 28-signal head timing optimization in a roundabout in Izmir, Turkey. Their PSO model


goes through different cycle times and finds the best split times for each signal head at each cycle time. Results from the experimentation show that their proposed model provides a 10% increase in the number of vehicles passing through the roundabout and a 56% decrease in the average delay through the roundabout compared to the current system settings. There are also some studies that utilize neural networks, cellular automata, fuzzy control, and fuzzy-neuro methods for urban traffic signal control, but they diverge quite a bit in the settings that they consider and, for that reason, are not included here.
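To give a flavor of how PSO-based signal timing approaches like those reviewed above operate, the sketch below runs a bare-bones particle swarm over the green split of a two-stage signal, scoring candidates with a simple deterministic delay proxy instead of a microscopic simulator such as VISSIM or SUMO. The objective function, parameters, and bounds are illustrative assumptions and do not reproduce the algorithm of any cited study.

import numpy as np

rng = np.random.default_rng(1)

CYCLE, LOST = 60.0, 10.0
DEMAND, SAT = np.array([600.0, 450.0]), np.array([1800.0, 1800.0])

def delay_proxy(g1):
    """Crude delay proxy: penalize stages whose degree of saturation approaches 1."""
    greens = np.array([g1, CYCLE - LOST - g1])
    x = DEMAND * CYCLE / (SAT * np.maximum(greens, 1e-6))   # degree of saturation per stage
    return np.sum(1.0 / np.maximum(1.0 - np.minimum(x, 0.999), 1e-3))

# Plain PSO over the single decision variable g1 (green time of stage 1)
n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
lo, hi = 7.0, CYCLE - LOST - 7.0
pos = rng.uniform(lo, hi, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), np.array([delay_proxy(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([delay_proxy(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(round(float(gbest), 1), round(CYCLE - LOST - float(gbest), 1))  # optimized green splits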

9.4  Conclusion

Traffic congestion in urban areas is a growing problem. Traffic lights are used to manage and regulate traffic in these urban settings, and one of the cheapest and most effective ways of improving traffic congestion is better management of traffic signal timing. Signal timing improvement requires almost no hardware investment and can easily be implemented after being carefully developed in controlled environments. This chapter is on traffic signal optimization. The challenges associated with the traffic signal optimization problem were discussed, and an extensive review of the different models and methods used to solve different types of traffic signal optimization problems was provided. Although the authors make no claim that this chapter is a definitive or complete review, it is a good resource for anyone interested in traffic signal optimization who wishes to learn about the important challenges and models of the subject matter.

References

Abbas, M.M., H.A. Rakha, and P. Li. 2007. Multi-objective strategies for timing signal systems under oversaturated conditions. Proceedings of the 18th IASTED International Conference, Montreal, Quebec, Canada, pp. 580–585.
Abu-Lebdeh, G. and R.F. Benekohal. 1997. Development of a traffic control and queue management procedure for oversaturated arterials. Proceedings of the 76th Transportation Research Board Annual Meeting, Washington, DC.


Abu-Lebdeh, G. and R.F. Benekohal. 1998. Evaluation of dynamic signal coordination and queue management strategies for oversaturated arterials. Proceedings of the 76th Transportation Research Board Annual Meeting, Washington, DC.
Algers, et al. 1997. SMARTEST final report. http://www.its.leeds.ac.uk/projects/smartest/finrep.PDF (last accessed June 12, 2014).
Allsop, R.B. 1971. SIGSET: A computer program for calculating traffic capacity of signal-controlled road junctions. Traffic Engineering & Control, 12, 58–60.
Allsop, R.B. 1976. SIGCAP: A computer program for assessing the traffic capacity of signal-controlled road junctions. Traffic Engineering & Control, 17, 338–341.
Chaudhary, N.A., A. Pinnoi, and C. Messer. 1991. Proposed enhancements to MAXBAND-86 program. Transportation Research Record 1324, pp. 98–104.
Chen, J. and L. Xu. 2006. Road-junction traffic signal timing optimization by an adaptive particle swarm algorithm. Proceedings of the Ninth International Conference on Control, Automation, Robotics and Vision, Vols. 1–5, pp. 1–7.
Clement, S.J. and M.A. Taylor. 1994. The application of genetic algorithms and knowledge-based systems to dynamic traffic signal control. Proceedings of the Second International Symposium on Highway Capacity, Vol. 1, pp. 193–202.
Dotoli, M., M. Pia Fanti, and C. Meloni. 2006. A signal timing plan formulation for urban traffic control. Control Engineering Practice, 14(11), 1297–1311.
Foy, M., R.F. Benekohal, and D.E. Goldberg. 1992. Signal timing determination using genetic algorithms. Transportation Research Record 1365, pp. 108–115.
Garcia-Nieto, J., A.C. Olivera, and E. Alba. 2013. Optimal cycle program of traffic lights with particle swarm optimization. IEEE Transactions on Evolutionary Computation, 17(6), 823–839.
Hadi, M.A. and C.E. Wallace. 1993. Hybrid genetic algorithm to optimize signal phase and timing. Transportation Research Record 1421, pp. 104–112.
Hewage, K.N. and J.Y. Ruwanpura. 2004. Optimization of traffic signal light timing using simulation. Proceedings of the Winter Simulation Conference, Vol. 2, pp. 1428–1433.
Improta, G. and G.E. Cantarella. 1984. Control systems design for an individual signalised junction. Transportation Research B, 18, 147–167.
Işık, G., E. Öner, and M.A. Gökçe. 2013. Traffic signal timing optimization for a signalized roundabout in Izmir. The International IIE (Institute of Industrial Engineers) Conference/YAEM, June 26–28, 2013, İstanbul, Turkey.
Kesur, K.B. 2010. Generating more equitable traffic signal timing plans. Transportation Research Record 2192, pp. 108–115.


Lindgren, R.V. and S. Tantiyanugulchai. 2003. Microscopic simulation of traffic at a suburban interchange. 2003 Annual Meeting of the Institute of Transportation Engineers, Seattle, WA.
Little, J.D.C. 1966. The synchronisation of traffic signals by mixed-integer linear programming. Operations Research, 14, 568–594.
Little, J.D.C., M.D. Kelson, and N.H. Gartner. 1981. MAXBAND: A program for setting signals on arteries and triangular networks. Transportation Research Record 795, pp. 40–46.
Papageorgiou, M., C. Diakaki, V. Dinopoulou, A. Kotsialos, and Y. Wang. 2003. Review of road traffic control strategies. Proceedings of the IEEE, 91(12), 2043–2067.
Park, B., N.M. Rouphail, and J. Sacks. 2001. Assessment of a stochastic signal optimization method using microsimulation. Transportation Research Record 1748, pp. 40–45.
Park, B. and J.D. Schneeberger. 2003. Microscopic simulation model calibration and validation: A case study of VISSIM for a coordinated actuated signal system. Transportation Research Record 1856, pp. 185–192.
Park, B., I. Yun, and K. Choi. 2003. Evaluation of microscopic simulation programs for coordinated signal system. In 13th ITS America's Annual Meeting, Minneapolis, MN, May 9–22.
Peng, L., M.-H. Wang, J.-P. Du, and G. Luo. 2009. Isolation niches particle swarm optimization applied to traffic lights controlling. Proceedings of 48th IEEE Conference on Decision Control/28th Chinese Control, pp. 3318–3322.
Pengdi, D., N. Muhan, W. Zhuo, Z. Zundong, and D. Honghui. 2012. Traffic signal coordinated control optimization: A case study. 24th Chinese Control and Decision Conference (CCDC), Taiyuan, China, May 23–25, 2012, pp. 827–831.
Roess, R.P., E.S. Prassas, and W.R. McShane. 2004. Traffic Engineering, 3rd edn. Upper Saddle River, NJ: Pearson/Prentice Hall.
Salimifard, K. and M. Ansari. 2013. Modeling and simulation of urban traffic signals. International Journal of Modeling and Optimization, 3(2), 172–175.
Spall, J.C. and D.C. Chin. 1997. Traffic-responsive signal timing for system-wide traffic control. Transportation Research—C, Vol. 5, pp. 153–163.
Stamatiadis, C. and N.H. Gartner. 1996. MULTIBAND96: A program for variable bandwidth progression optimization of multiarterial traffic networks. Transportation Research Record, No. 1554, pp. 917.
Stevanovic, A., P.T. Martin, and J. Stevanovic. 2007. VISGAOST: VISSIM-based genetic algorithm optimization of signal timings. Transportation Research Record 2035, pp. 59–68.
Sun, D., R.F. Benekohal, and S.T. Waller. 2003. Multi-objective traffic signal optimization using non-dominated sorting genetic algorithm. IEEE Intelligent Vehicles Symposium, Piscataway, NJ, pp. 198–203.
Synchro. http://208.131.129.243/wp-content/uploads/2013/08/SignalTimingBackground.pdf (last accessed January 30, 2014).


Tartaro, M.L., C. Toress, and G. Wainer. 2001. Defining models of urban traffic using the TSC tool. Proceedings of the Winter Simulation Conference, pp. 1056–1063.
TRANSYT-7F User's Manual. Transportation Research Center, University of Florida, Gainesville, FL, March 1998.
Van Lint, J.W.C. and S.P. Hoogendoorn. 2010. A robust and efficient method for fusing heterogeneous data from traffic sensors on freeways. Computer-Aided Civil and Infrastructure Engineering, 25, 596–612.
Webster, F.V. 1958. Traffic signal settings. Road Research Technical Paper No. 39, Road Research Laboratory, London, U.K.
White, T. 2001. General overview of simulation models. 49th Annual Meeting of Southern District Institute of Transportation Engineers, Williamsburg, VA.

10  Comparative Financial Efficiency Analysis for Turkish Banking Sector

A. Argun Karacabey and Fazıl Gökgöz

Contents

10.1 Introduction
10.2 Current Status of Banking and Finance Sector in Turkey
10.3 Literature
10.3.1 DEA Models
10.4 Empirical Studies
10.4.1 Data and Methodology
10.4.2 Efficiency Results for Commercial Banks
10.4.3 Efficiency Results for Investment Banks
10.4.4 Results for the Improvement Ratios
10.5 Conclusions
References
Online Resources

10.1  Introduction

Financial decision makers wish to measure the efficiency level of a decision-making unit (DMU) by considering both positive and negative conditions. In this framework, efficiency analysis becomes a vital instrument for an enterprise operating under global competitive market conditions. The CCR model of Charnes et al. (1978) introduced a widely used efficiency measurement technique known as data envelopment analysis (DEA). It is advantageous because the DEA technique does not require a functional relationship between inputs and outputs (Charnes et al. 1978, Choi and Murthi 2001). Moreover, the DEA has become popular in evaluating technical and pure efficiencies since the method easily processes multiple outputs


without requiring input price data (Ruggiero 2001). There are numerous financial efficiency analyses applied to the banking sector: empirical studies such as Baurer (1993), Berger and Humprey (1997), and Berger et al. (1993) carried out frontier efficiency analyses for US banks, while other studies, such as Carbo et al. (2002) and Conceicao et al. (2007), performed efficiency measurements for banks in the European Union. Similar efficiency analyses have been carried out for the Turkish banking system, introduced particularly in the studies of Çingi and Tarım (2000), Ertuğrul and Zaim (1999), Karacabey (2002), Kasman (2002), Mercan and Yolalan (2000), and Zaim (1995). Due to the financial crises encountered in the Turkish economy in 2000 and 2001, financial efficiency estimation for the Turkish banking system has become crucial in evaluating the vital components of the country's financial system. The aim of this chapter is to measure the financial efficiency of Turkish commercial banks (COMs) and investment banks (INVs) for the 2010–2011 period and to compare the technical efficiency (TE), pure technical efficiency (PTE), and scale efficiency (SE) of these DMUs. In this framework, the CCR and BCC versions of the DEA models were applied to the COMs and INVs of Turkey so as to determine their financial efficiency scores. The remainder of the chapter is organized as follows: Section 10.2 briefly explains the characteristics of the Turkish banking and finance sector, while Section 10.3 gives a detailed account of the DEA model. Section 10.4 describes the methodology with regard to the data used and presents the results. Section 10.5 provides the concluding remarks.

10.2  Current Status of Banking and Finance Sector in Turkey

Turkey's crisis experience emphasizes the complementary relationship between macroeconomic and financial stability and structural reforms (BRSA 2010). A weak banking system was one of the main causes of Turkey's 2000 and 2001 crises, which adversely affected the Turkish economy. With these financial crises, the Turkish banking sector underwent a restructuring process, during


which 14 banks were transferred to the Savings Deposit Insurance Fund (SDIF) between 2000 and 2003 because they were unable to meet their liabilities (SDIF 2003). This process was beneficial both for solving the financial problems of the Turkish banking sector and for aligning the banking legislation with internationally accepted practices. In 2002, the Turkish economy began to recover, experiencing average annual growth of 6.8% between 2002 and 2007. High economic growth strengthened the banking sector, which in turn contributed positively to economic growth. As a result of the consolidation experienced between 2002 and 2006, the number of banks decreased from 59 to 50 as of December 2006. Nevertheless, Turkish banks widened their branch networks and increased their personnel in line with the accelerated economic growth between 2002 and 2007 (BRSA 2006, 2007). The global financial crisis, which broke out in 2008, affected the economic stability of all countries, and global economic growth subsequently decelerated. Having shrunk by 4.1% in the 2008–2009 period, the Turkish economy recovered in 2010 and grew at an annual average of 9% between 2010 and 2011. Experiencing relatively limited adverse effects from the global financial crisis, the Turkish banking sector is continuing to expand its branch structure. However, following the crisis, the growth rate in the number of branches diminished because banks had to reevaluate their branching projections. Moreover, it is considered that the proliferation of call centers and Internet banking has contributed to this trend (BRSA 2006). The banking sector improved its operational efficiency by intensifying the use of technology. Alternative distribution channels such as online banking, ATMs, and call centers increased in parallel with the technological developments in the electronic environment of the banking sector. The volume of financial transactions grew as these channels offered widespread accessibility and saved time and costs (BRSA 2011). However, during the same period, the growth rate of personnel numbers also decreased due to the global financial crisis. It is thought that the contribution of the financial institutions to total employment


is quite low. In the period between 2008 and 2011, the banking sector's contribution to employment averaged 3.1% (BRSA 2011). Although the nonbanking financial sector has grown in number and size in Turkey, banks still dominate the sector. Alongside banks, the Turkish financial sector largely comprises insurance and private pension companies. Nonbanking financial institutions such as factoring, leasing, and consumer financing companies and intermediary institutions also operate in the sector. Asset sizes of the main financial services subsectors in Turkey as they stood at the end of 2011 are shown in Table 10.1. In the period between 2002 and 2011, the asset size of the Turkish financial sector increased substantially. In this period, banking sector assets rose from 212.7 billion TL to 1,217.6 billion TL as of 2011, implying more than a fivefold increase (BRSA 2009, 2011). Profitability of the banking sector fluctuated in the last decade due to various factors such as changes in profits and growth of assets. Despite the global financial crisis, the sector succeeded in maintaining its precrisis return on assets ratio of approximately 2% in the 2008–2011 period (TBB 2011). The high-quality capital structure of the banking sector is a protective buffer against possible financial or macroeconomic fluctuations (BRSA 2011). The banking sector's strong capital structure contributes to the maintenance of economic growth at a sustainable level. On the other hand, the capital adequacy ratio has tended to decrease since 2003 due to the relatively high increases in assets. Standing at 30.9% at the end of 2003, the capital adequacy ratio of the banking sector declined until 2008 to the level of 18%. The decreasing asset size due to the global financial crisis caused the ratio to jump to 20.6% in 2009. In the 2010–2011 period, it continued to decrease and stood at 16.5% at the end of 2011. However, the capital adequacy ratio of the banking sector is still above the target ratio of 12% (Treasury 2013).


Table 10.1  Asset Size of the Main Financial Services Subsectors in Turkey (Billion TL)

Subsector            2002    2003    2004    2005    2006    2007    2008    2009    2010     2011
Banks                212.7   249.7   306.4   406.9   499.7   581.6   732.5   834.0   1006.0   1217.6
Financial leasing    3.8     5       6.7     6.1     10      13.7    17.1    14.6    15.7     18.6
Factoring            2.1     2.9     4.1     5.3     6.3     7.4     7.8     10.4    14.5     15.7
Consumer financing   0.5     0.8     1.5     2.5     3.4     3.9     4.7     4.5     6.0      8.9
Insurance            5.4     7.5     9.8     14.4    17.4    22.1    26.5    31.8    35.1     39.9
Secur. Int. Inst.    1       1.3     1       2.6     2.7     3.8     4.2     5.2     7.5      9.6

Source: Banking Regulation and Supervision Authority of Turkey, Financial market reports, 2009, 2011.


10.3  Literature

10.3.1  DEA Models

The DEA is a nonparametric, linear programming technique that has been used to compare the TE of relatively homogeneous sets of DMUs. The theoretical consideration of TE has existed in the economic literature since Koopmans (1951), who defined TE as a feasible input/output vector for which it is technologically impossible to increase any output without simultaneously increasing at least one input or decreasing another output (Ruggiero 2000). Farrell performed one of the earliest studies regarding efficiency measurement for homogeneous DMUs. However, Farrell (1957) viewed efficiency in technical and allocative terms and determined efficiencies within a framework of one output and multiple inputs. The DEA, by contrast, has the advantage of accommodating a multiple-output structure in the efficiency analysis. As a data-oriented, nonparametric* linear programming technique, the DEA has also been applied successfully to the finance sector. Murthi et al. (1997) affirm that the DEA does not need a theoretical model as a measurement benchmark. Moreover, the DEA may address the problem of endogeneity of transaction costs by considering transaction costs such as the expense ratio and turnover, and the model is flexible and may evaluate performance on a number of outputs and inputs simultaneously. The DEA method also facilitates observation of the marginal contribution of each input in affecting returns. As a consequence of these advantages, the method permits the analysis of the performance level of a particular DMU in comparison with the efficient ones (Gökgöz 2009a,b, 2010). The efficiency score of the DEA can be defined as the ratio between a weighted sum of outputs and a weighted sum of inputs. The objective function of the DEA for n DMUs consuming k inputs and producing m outputs is as follows (Gökgöz 2009a,b, 2010):



Max  u′yi / v′xi        (10.1)

* Parametric and nonparametric approaches are well-known quantitative techniques that can be classified as efficient frontier approaches. The Stochastic Frontier Approach, the Distribution Free Approach, and the Thick Frontier Approach are parametric efficiency measurement techniques, whereas Free Disposal Hull and Data Envelopment Analysis (DEA) are considered nonparametric approaches.


where
u′ is the output weight vector (m × 1)
yi is the amount of output produced by DMUi
v′ is the input weight vector (k × 1)
xi is the amount of input utilized by DMUi

The efficiency score lies between 0 and 1 for the input-oriented model, while the output-oriented model's efficiency score ranges between 1 and ∞. For both models, a DMU with an efficiency score of 1 is considered efficient. The efficiency results of the DEA identify efficient and inefficient DMUs under the constant returns to scale (CRS) and variable returns to scale (VRS) assumptions; these results also reveal slack inefficiency levels for the inefficient observations. There are two basic DEA models on the basis of orientation. The output-oriented model assesses the capacity of a DMU to reach the maximum production level (output) with the available inputs. The input-oriented model refers to the ability to produce the same level of output with the minimum input level (Cooper et al. 2000, 2006). TE scores of DMUs are measured by the CCR model, which was introduced by Charnes et al. (1978). The model* depends upon the CRS† assumption (Fandel 2003). The input-oriented CCR model is formulated as follows (Cooper et al. 2000, 2006):

Max u′yi                                   (10.2)
s.t.
v′xi = 1                                   (10.3)
u′yj − v′xj ≤ 0,  j = 1, …, n              (10.4)
u, v ≥ 0
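The following minimal Python sketch solves this input-oriented CCR multiplier problem for each DMU with scipy.optimize.linprog. The bank input/output data are made up purely for illustration, and the implementation is a generic sketch of the CCR model above rather than the computation reported in this chapter.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (multiplier form) efficiency of DMU o.
    X: (n, k) inputs, Y: (n, m) outputs."""
    n, k = X.shape
    m = Y.shape[1]
    # Decision vector z = [u (m output weights), v (k input weights)]
    c = np.concatenate([-Y[o], np.zeros(k)])                   # maximize u'y_o
    A_ub = np.hstack([Y, -X])                                  # u'y_j - v'x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(m), X[o]]).reshape(1, -1)  # v'x_o = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + k), method="highs")
    return -res.fun                                            # efficiency score in (0, 1]

# Illustrative data: 5 hypothetical banks, 2 inputs, 2 outputs
X = np.array([[20., 300.], [30., 200.], [40., 100.], [20., 200.], [10., 400.]])
Y = np.array([[1000., 20.], [1000., 30.], [1200., 25.], [800., 55.], [600., 15.]])
scores = [round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))]
print(scores)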

* Fandel states that a DMU will demonstrate technical efficiency if TE equals 1 according to the CCR model; besides, if TE is below 1, the DMU is considered technically inefficient (Fandel 2003).

• … 29 years
• Handles: Diamond knurled knobs with diameters of 0.375 in. (0.95 cm), 0.5 in. (1.25 cm), and 0.75 in. (1.9 cm) and a length of 0.5 in. (1.25 cm)
• Posture: In a standing position, using the preferred hand to twist the knob while steadying the apparatus with the other hand

Results:
• Torque increased with knob size and decreased with the use of gloves, and side knobs allowed more torque than front knobs.
• Torque strength did not differ significantly between civilian and military personnel.
• Torque strength did not change with age.

In the study by Pheasant and O'Neill (1975), the hand–handle linkage was studied to generate useful data for the optimal design of screwdrivers and other devices. Cylindrical handles were chosen for the first stage of the experiments due to their similarity to the large number of practical devices used in real-world activities.

Experiment 1: Maximal steady voluntary torques were exerted by 24 subjects on various handles that were made of polished steel and ranged in diameter from 1 to 7 cm.
Results: Torque strength changed with handle size. For knurled cylinders, a 5 cm diameter appears to be optimal.

Experiment 2: 10 subjects were required to exert maximal voluntary torques and maximal thrusts along the axes of the handles.
Results: The actual shape of the handle was unimportant for forceful activities; however, the effectiveness of the activity was limited by the size of the handle and the quality of the hand–handle interface.


Rohles et al. (1983) conducted an experimental study to determine wrist-twisting strength capabilities in opening and closing jar lids.

Method:
• Subjects:
  • One hundred elderly males and one hundred elderly females with ages ranging from 62 to 92 years
  • One hundred boys and one hundred girls with ages ranging from 44 to 58 months
• Handles: The lids of eight commercially available food jars with diameters from 2.7 to 12.3 cm
• Direction of force: Both clockwise and counterclockwise
• Task: The strength was exerted on the jar lid snapped onto a torquemeter, twice in each of the clockwise and counterclockwise directions

Results:
• Males were stronger than females in both age groups.
• Age was negatively correlated with torque in the older group; however, in the children it was positively correlated.
• Torque strength increased with the diameter of the lid.
• Direction of twisting did not affect torque significantly.
• Grasp and lateral prehension were significant indicators of wrist-twisting strength for all subjects.

Replogle (1983) studied the effect of handle diameter on hand torque strength.

Method:
• Subjects: 10 males and 10 females
• Handles: A series of 11 smooth phenolic fiber cylinders ranging in diameter from 0.95 to 8.89 cm
• Task: The participants applied turning force to each cylinder in both clockwise and counterclockwise directions with the preferred hand until the hand slipped
• Procedure: Not mentioned


Results:
• Female torque capability was about 40% of that of males.
• The grip span (where the fingers and palm just touch without overlapping) and maximum-torque diameters did not vary greatly between males and females.
• Torque increased as the square of the handle diameter up to the grip span diameter (2.5 cm). For larger diameters, the torque continued to increase, but at a decreasing rate, and reached a maximum when the diameter was approximately 5 cm.
• The maximum torque was approximately one and one-half times the torque obtainable at the grip span diameter.
• The same diameter handles could be used by males and females to develop maximum torque.

Mital and Sanghavi (1986) examined the effects of several operator- and task-related variables on peak volitional static hand torque exertion capabilities of males and females with common nonpowered hand tools.

Method:
• Subjects: A U.S. sample of 55 healthy college students (30 males, 25 females)
• Tools: Five different hand tools: two screwdrivers and three wrenches (short screwdriver, 3.7 cm grip diameter and 5.1 cm stem; long screwdriver, 2.9 cm grip diameter and 15.2 cm stem; spanner wrench, 25.4 cm long with 2.2 cm opening; vise grip, 19 cm long with an adjusted opening of 2.2 cm; socket wrench, 24.1 cm long with 1.7 cm opening)
• Task heights: Three heights of torque application (eye, shoulder, and elbow height)
• Postures: Sitting and standing
• Reach distances: Three reach distances (45.7, 58.4, and 71.1 cm) from the seat reference point for the sitting posture and three (33, 45.7, and 58.4 cm) from the ankles for the standing posture
• Test combinations: 540 treatment combinations (5 tools × 2 postures × 3 heights × 3 reach distances × 6 angles)
• Protocol used: Claimed to be Caldwell et al.'s (1974) protocol, but not exactly; reach maximum in 3 s and hold the maximum for 1 s.


The peak value was considered the maximum. A one-minute rest break was given between test combinations. No repetition.

Results:
• The average female peak torque exertion capability was 66% of the male value.
• Type of hand tool, posture, and reach distance had significant effects on torque strength.
• Both genders generated significantly higher torque values with wrenches than with the screwdrivers. Higher torque values were also generated with the socket wrench than with the spanner wrench (and the lowest with the vise wrench); the increase in torque was proportional to the increase in the lever arm. Higher torque values were exerted with the screwdriver with the larger grip diameter: the handle diameter of the screwdriver, but not its length, was important for higher hand torque force.
• Both males and females exerted significantly greater torques in the standing posture than in the sitting posture with wrenches, but the opposite was obtained with the screwdrivers.
• The torque exertion capability of both males and females decreased significantly with reach distance. The maximum torque was exerted at a distance of 33 cm and the minimum at a distance of 71.1 cm.
• The effect of the height and angle of torque application, though statistically significant, was not of much practical value for either males or females.
• Isometric shoulder strength appeared to limit the maximum volitional torque exertion capability.

Imrhan and Loo (1986) investigated the effects of container lid variables on maximal counterclockwise (opening) wrist-twisting torque on circular lids in the elderly population.

Method:
• Subjects: A U.S. sample of 42 elderly persons aged 60–97 years
• Lid types: Smooth and rough lid surfaces with four different diameters each: 31, 55, 74, and 113 mm


• Posture: Standing, except for four subjects who preferred to apply force in a sitting posture; the preferred hand held the container and the other hand held the tester handle to stabilize it
• Procedure: Not clearly defined; maximal voluntary opening force was applied to open circular-lid containers, a slow buildup was allowed, and the peak force was sustained for 2 s

Results:
• Females were about 75% as strong as males.
• Strength decreased with age and increased with diameter, except for the large smooth lid, for which torque decreased.
• Torque increased as diameter increased with rough lids. With smooth lids, on the other hand, torque increased up to a certain diameter (74 mm) and then decreased.
• The estimated optimum lid diameter was 93 mm.
• Hand breadth, hand length, and hand circumference were all positively correlated with torque.

Nagashima and Konz (1986) examined the effects of jar lid diameter, gripping material, and knurling on torque strength.

Method:

Handles: Diameters of 4.8, 6.7, and 8.6 cm.

Experiment 1: • Subjects: 10 female subjects. • Task: Each subject twisted six jar lids in the counterclockwise direction—a smooth and a knurled lid at each of three diameters (48, 67, and 86 mm)—with a bare hand, a rubber gripper, and a cotton cloth. • Procedure: Not mentioned in detail. Results: • Torque strength increased with increasing lid diameter. • Torque with the rubber gripper was higher than with the bare hand and lowest with the cotton cloth in the hand. • There was no difference between the torques with the smooth and knurled lids.


Experiment 2: To repeat the first experiment with a larger, more varied group of subjects to see if knurling was worthwhile. • Subjects: 29 subjects (17 males and 12 females). Results: Statistically significant differences between smooth and knurled lids were found only for the 6.7 and 8.6 cm diameters. Adams and Peterson (1986) conducted an experimental study to determine the maximum static hand grip torque that can be exerted during tightening or loosening of circular electrical connectors. Purpose: Proper design of connectors and task configuration. Method: • Factors investigated: The effects of connector size, grip type employed, orientation of the connector, use of work gloves, reach height of the connectors, and direction of rotation on hand grip torque. • Connector size: Tightening ring diameters of 2.3, 3.8, and 5.1 cm. • Subjects: 20 males (18–32 years) and 11 females (19–40 years). • Procedure: Modified Caldwell procedure. A 3 s static force was exerted by each subject in a standing posture. Torque was applied to simulated connector rings. Results: • Hand grip torque increased with connector diameter. • The orientation of the connector affected torque strength. • The use of gloves also resulted in slightly higher torque values. • Height and direction of rotation had little effect on torque strength. • Males were significantly stronger than females in both tightening and loosening. Imrhan and Jenkins (1999) investigated the effects of surface finish, wrist action, arm position, and hand laterality on the wrist flexion and extension torque capabilities of male and female adults in simulated maintenance tasks.


Method: • Handles: Two identical solid cylindrical aluminum handles, one knurled and one smooth (diameter, 5.72 cm; length, 13.97 cm). • Sample: 10 healthy males (manual workers; 28–43 years) and 10 females (1 manual worker and 9 homemakers; 25–40 years). • Task: Generation of maximal volitional static torques on a cylindrical handle snapped onto a portable torquemeter over 24 different test conditions. • Procedure: Subjects gripped the handles bare-handed with a power grip in a comfortable standing posture with their arms fully extended and approximately in the sagittal plane. The peak force was recorded. Rest time was ≥1.5 min between exertions. No further details were provided. Results: • Overall, males were twice as strong as females. However, the sex difference depended on certain task and handle variables, such as handle surface and wrist action. • In both males and females, torque was greater in extension than in flexion and with the knurled handle compared to the smooth one. Kim and Kim (2000) studied the effects of body posture and of different types of common nonpowered hand tools on the maximum static volitional torque exertion capabilities of Korean people. Method: • Hand tools: Screwdriver, socket wrench, cylindrical handle, rotating knob, steering wheel. • Subjects: A Korean sample of 15 healthy male and 15 female university students. • Experimental variables: 15 body postures and 5 hand tools, for a total of 75 test combinations per subject. • Procedure: A modified version of Caldwell et al.’s (1974) protocol (though not explicitly stated). Subjects were asked to build up to the maximum torque gradually, without jerking, over a 3 s period and then hold it at the maximum for about 1 s. The exertion was repeated at least twice


to be within a ±10% difference. The higher value was chosen as the MVC. Rest breaks were ≥1 min between trials. Results: • The torque exertion capability was significantly affected by the type of tool and posture for both males and females: relatively higher torques were exerted in the order of steering wheel, wrench, handle, knob, and screwdriver. It may be said that torque exertion was affected by the lever arm of the tools. • Female torque strength was about 51.5% of that of males. • Both males and females exerted the most torque when standing, at eye height, with the tool axis horizontal, whereas males showed the least value when standing, overhead, with the tool axis vertical, and females exerted the least torque when kneeling on one knee, overhead, with the tool axis vertical. • Higher torques were exerted at shoulder and eye height, whereas relatively lower torques were found at elbow and overhead height. The study by Voorbij and Steenbekkers (2002) aimed to answer the question of what maximum torque should be allowed for opening a jar (opening torque). Method: • Opening torque: Measured in people over 50 years old with a jar lid. The jar, which was made of aluminum, weighed 650 g. The lid had a diameter of 66 mm, while at its widest point the jar was 75 mm in diameter. The total height was 113.5 mm. • Subjects: A Dutch sample of 750 healthy subjects: 123 of them were aged between 20 and 30 years as a reference group, and 627 were over 50 years of age. • Posture: The wrist-twisting force was measured while the subject was standing. The subject was asked to adopt the posture normally used for opening jars. One hand was on the lid while the other grasped the jar. • Procedure: The subject was instructed to build up force to maximum and to hold this maximum force until the second


attendant called a stop. This attendant checked for an acceptable length of the constant phase in the force graph (i.e., 1 s). The force exertion was repeated once after a 2 min period of rest. The measurement was thus made twice, and the peak value of the two was taken as the maximum torque value. Results: • The preferred way of opening a jar was with both hands: one on the lid and one on the jar. • Laterality was significant: the preferred hand grasped the jar. • The required torque for opening a jar should not exceed 2 N m to accommodate >95% of users between 50 and 94 years. Crawford et al. (2002) investigated the impact of the shape, diameter, and height of a lid on static wrist torque and also examined the opening torque of commercially available food jars. (Grip and pinch strengths were also measured.) Method: • Subjects: 40 healthy adults: 20 young (10 males and 10 females aged 20–39 years) and 20 older (males aged 69–81 years and females aged 60–72 years). • Test pieces and combinations: 12 nylon test pieces of 10, 20, and 30 mm height (9 circular with 20, 50, and 80 mm diameters and 3 square, 50 mm across, with rounded edges). Each participant made two maximal wrist torque exertions for each of six test pieces. • Body posture and torque direction: The torque exertions were made in a standing position, turning the test piece in a counterclockwise direction (as if to open a jar), using the preferred hand to create the torque and the nonpreferred hand to hold the circular fixing point. • Opening forces (required to open a variety of food products): Measurements were carried out by placing each jar in the fixture and opening the product with the preferred hand. • Torque direction: Counterclockwise with the preferred hand. • Procedure: Not reported.


Results: • Higher torques were generated on square lids than on circular lids of the same diameter. • As lid diameter and lid height increased, torque increased for test pieces between 20 and 50 mm in diameter. • A linear relationship for torque existed for the test pieces between 20 mm diameter with 10 mm height and 50 mm diameter with 30 mm height. • Height, weight, hand length, and hand breadth were positively correlated with torque strength. • Lid surface area and torque strength were highly correlated; thus, a linear model was developed to describe this relationship:



Torque = −7.26 + (1.23 × ln(surface area))

For the weakest group of participants, the model was described by

Torque = −5.69 + (0.94 × ln(surface area))

The model could be used to predict maximal torque closure levels for use in the packaging industry.
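As a rough illustration of how such a log-linear model could be applied, the sketch below evaluates both equations for a lid of a given surface area. The unit of surface area (assumed here to be mm² of the lid's top face) and the helper name predicted_torque are illustrative assumptions, not details taken from Crawford et al. (2002).

```python
import math

def predicted_torque(surface_area_mm2, weakest_group=False):
    """Predict maximal closure torque (N m) from lid surface area using the
    log-linear models quoted above; the mm^2 unit is an assumption."""
    if weakest_group:
        return -5.69 + 0.94 * math.log(surface_area_mm2)
    return -7.26 + 1.23 * math.log(surface_area_mm2)

# Example: top face of a 66 mm diameter circular lid.
area = math.pi * (66 / 2) ** 2
print(round(predicted_torque(area), 2))                      # whole sample
print(round(predicted_torque(area, weakest_group=True), 2))  # weakest group
```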

Peebles and Norris (2003), in their normative static strength study, measured six strength capabilities in freestanding postures: fingerpush strength, pinch–pull strength, hand grip strength, push and pull strength, and the hand torque strengths (wrist-twisting and opening). For our purpose, hand torque strengths only were discussed here. Method: • Wrist-twisting strength: Twisting force was exerted with dominant hand in a clockwise direction on a variety of knobs and handles: (1) door lever (diameter 15  mm, length 170  mm), (2) door knob (diameter 65 mm, depth 45 mm), (3) circular knob (diameter 40 mm, depth 20 mm), (4) ridged knob (length 40  mm, depth 15  mm), (5) butterfly nut (length 40  mm, depth 10 mm), and (6) tap (diameter 50 mm, depth 40 mm).


All handles were positioned at elbow height and orientated vertically (vertical wrist-twisting strength). The ridged knob, butterfly nut, and tap were also orientated horizontally (horizontal wrist-twisting strength). • Opening strength: Measured on three custom-made aluminum jars (height 125 mm) with smooth and knurled lids (diameters 45, 65, and 85 mm). The jar was held with one (preferred) hand, and a static twisting force was exerted with the other hand on the lid of the jar. • Subjects: A UK sample of healthy 150 males and females aged from 2 to 86 years. • Procedure: Caldwell et al.’s (1974) with modification (not the middle 3 s steady-state average value but the peak value is considered). The higher of the two repetitions was taken as the maximum. Results: • All torque strength data exhibited a normal distribution. • Maximum strength increased throughout childhood, peaked in adulthood, and then decreased with age from around 50 years. • Each successive age group (2–5, 6–10, 11–15  years) was found to be significantly stronger than the previous for all measurements. Generally, however, no significant differences were found within the adult (16–20, 21–30, 31–50 years) or older adult (51–60, 61–70, 71–80, 81–90  years) age groups. No significant differences in maximum strength were generally found between 11–15 and 60–80 years old or 6–10 and 80–90 years old. • Female/male strength ratio ranged from 55% to 75%. However, no significant differences in maximum strength were found between male and female children. • No significant correlations were found between the six strength measurements. • The handle or control type, the direction of force, and the number of hands used all significantly affected the amount of force that could be exerted.


• A significant difference was found between two relatively similar measurements—wrist-twisting strength (on a variety of handles placed at elbow height) and opening strength on jars. • Knurling on the lids increased the opening strength. Nayak and Queiroga (2004) studied the pinch grip, power grip, and wrist-twisting strengths of healthy older adults in the United Kingdom. Method: • Subjects: 150 subjects (all of Caucasian origin), 65 males and 85 females, in the age range of 55–85 years. • Equipment: A torquemeter to measure wrist-twisting strength. The main body of the unit was 90 mm long with a diameter of 44 mm. At one end of the unit, a 50 mm diameter plastic lid was fixed, to which the torque was applied. The lid thickness was 10 mm, with a slightly rough texture. • Posture: Seated, following the guideline of the American Society of Hand Therapists. • Procedure: Subjects, while seated, were instructed to hold the jar-shaped unit with the nonpreferred hand (power grip position) and to apply the twisting movement on the lid with the preferred hand (spherical grip position). They were instructed to exert their maximum possible torque and to hold it for about 5 s. Results: A removal torque of 1.3 N m could be used as a guide for the design of screw tops for child-resistant bottles, such as medication containers. Miller et al. (2005) used a simple device to quantify the twisting strength necessary to perform daily activities. Method: • Subjects: 64 normal subjects (46 females and 18 males) and 13 arthritic patients (9 females and 4 males) with arthritis of the thumb carpometacarpal joint. • Handles: Five disks of 8 mm thickness and diameters of 2.5, 5, 7.5, 10, and 12.5 cm were fabricated from plastic; the edges were rounded and smoothed, and each disk was rubber coated.


• Task: Apply a twisting force to each of the five disks with each hand, in both the clockwise and counterclockwise directions, for three trials of each. • Posture: Each subject adopted the standard testing position and applied a twisting force to each of the five disks. • Measurement protocol: Maximum values were recorded for each trial, and the results of the three trials were averaged. Results: • Males applied greater torques than females. • The dominant hand applied greater torque. • Subjects diagnosed with carpometacarpal arthritis could not apply normal levels of torque. • There were no differences in the test–retest group due to the day of testing. • All disk sizes generated significantly different torques. • Ulnar and radial torques were similar. • The torque values recorded were lower than those reported by Voorbij and Steenbekkers (2002), Crawford et al. (2002), and Peebles and Norris (2003). The study by Yoxall et al. (2006) investigated the forces applied by consumers to open a jar lid, specifically opening a wide-mouth vacuum lug closure (such as those used for jams, sauces, and pickles). Method: • Subjects: 235—138 males (aged from 8 to 93) and 97 females (aged from 8 to 95). • Handle: Jar lid with a diameter of 75 mm. • Task: Open the jar as the users would normally do. Subjects could pick up the jar or leave it resting on the table. No type of grip was suggested, and multiple attempts using different postures or grips were allowed. • Torque direction: Counterclockwise. • Response: Peak applied torque. • Posture: As preferred, sitting or standing.


Results: • Most of the females would struggle to open some jars. • Males were considerably stronger than females, so most of them would not have trouble opening the bulk of jars. • After 60 years of age, strength started to decrease rapidly. • For a 75 mm jar, 15% of women of any age would struggle with 50% of the jars they bought, indicating that the openability of jars of this type was a significant problem. The study by Kong et al. (2007) investigated the effects of screwdriver handle shape, surface material, and workpiece orientation on torque performance, total finger force, and muscle activity in a maximum screwdriver tightening torque exertion task. Method: • Handles: 24 screwdriver handles, each with a length of 130 mm, were constructed with factorial combinations of longitudinal cross-sectional shape (circular, hexagonal, triangular), lateral cross-sectional shape (cylindrical, double frustum, cone, reverse double frustum), and surface material (plastic, rubber coated). The nominal diameter of all the handles in cross section was 45.0 mm, but the maximum dimension in cross section depended on the handle’s cross-sectional shape. • Material: Rubber and plastic. • Subjects: 12 healthy male university students. • Task: Perform maximum clockwise screw-tightening exertions using screwdriver handles with 3 longitudinal shapes (circular, hexagonal, and triangular), 4 lateral shapes (cylindrical, double frustum, cone, and reversed double frustum), and 2 surfaces (rubber and plastic) (all 24 screwdriver handles). Six of the 12 subjects tested maximum torque exertion in the vertical workpiece orientation; the other 6 subjects exerted their maximum torque in the horizontal orientation. • Procedure: The Caldwell procedure was used (though not explicitly stated). The dominant hand exerted maximum torque on the screw assembly on the torque sensor in a standing posture, with a straight elbow in the horizontal workpiece orientation


or approximately 90° elbow flexion in the vertical workpiece orientation. Exertion duration was 4 s and was repeated twice for each handle, with 2 min of rest between trials. The steady-state average value was used as the maximal torque strength. Results: • Torque output with the rubber handles was 15% greater than with the plastic handles. • The vertical workpiece orientation was associated with higher torque output (5.9 N m) than the horizontal orientation (4.69 N m). • Screwdrivers designed with a circular or hexagonal cross-sectional shape resulted in greater torque outputs (5.49 and 5.57 N m) with less total finger force (95 and 105 N). • Reversed double-frustum handles were associated with less torque output (5.23 N m) than the double-frustum (5.44 N m) and cone (5.37 N m) handles. • Screwdriver handles designed with combinations of circular or hexagonal cross-sectional shapes with double-frustum and cone lateral shapes were optimal. Seo et al. (2007) conducted an experimental study to develop a model to describe the relationship between grip and torque. Method: • Subjects: 12 subjects (6 females and 6 males) aged 21–35 years. • Handles: Diameters of 45.1, 57.8, and 83.2 mm. • Task: Grasp cylindrical objects with diameters of 45.1, 57.8, and 83.2 mm in a power grip and perform maximum torque exertions about the long axis of the handle in two directions: the direction the thumb points and the direction the fingertips point. Maximum torque, grip force, total normal force, and fingertip/thumb force were measured. • Posture: Seated, with the elbow flexed about 90° and the forearm horizontal, grasping a vertical cylindrical handle with the right hand in a power grip.


Results: • Handle diameter had a significant effect on the torque exerted. • Hand torque was greater when the torque on a cylinder was applied in the direction the fingertips point. Seo et al. (2008) investigated the relationship among friction, applied torque, and axial push force on cylindrical handles. Method: • Subjects: 12 healthy participants. • Handles: Handle diameters of 5.78 and 5.12 cm for the rubber and aluminum handles, respectively. • Task: Exert anteriorly directed maximum push forces along the long axis of the aluminum and rubber handles while applying deliberate inward or outward torques, no torque (straight), and an unspecified (preferred) torque. • Posture: Standing. • Protocol: All data were averaged over 2 s during maximum exertions. No further information was provided. Results: • Axial push force was 12% greater for the rubber handle than for the aluminum handle. • Participants exerted mean torques of 1.1, 0.3, 2.5, and −2.0 N m and axial push forces of 94, 85, 75, and 65 N for the preferred, straight, inward, and outward trials, respectively. Left to decide for themselves, participants tended to apply inward torques, which were associated with increased axial push forces. • Participants appeared to intuitively know that the application of an inward torque would improve their maximum axial push force. Wieszczyk et al. (2009) aimed to determine the effect of the height of an industrial valve hand wheel on maximum torque production and the risk of injury to the shoulders and back of workers.


Method: • Valve wheel: 45 cm in diameter. • Task: Maximum torque exertions in the clockwise and counterclockwise directions at three heights (knee, chest, and overhead) while standing. • Subjects: 24 healthy power plant mechanics or operators (23 males and 1 female; 32–61 years). • Procedure: Participants exerted two maximum torques for each condition of height and direction, with at least 2 min of rest between consecutive torque exertions. Participants wore leather gloves during all torque exertions. Maximum torque was the average of the two trials. No further details were given; the Caldwell protocol was possibly partially implemented. • Torque direction: Both clockwise and counterclockwise. Results: • Torque generated in the counterclockwise direction was greater than that generated clockwise. • Ten percent or greater torque was exerted at the overhead level than at the chest level. However, there was no difference in maximum torque between the knee and overhead levels or between the knee and chest levels. • Design engineers should avoid placing hand wheel valves at knee height or lower. Rowson and Yoxall (2011) conducted a study to determine the effect of different hand grips on maximum opening torque. Method: • Subjects: 34 (19 females and 15 males). • Task: Apply a twisting force to open a jar with 7 different grip types and 3 different closure diameters (21 tests for each participant). Results: • Female participants generally produced lower torques than males.


• Different grip styles were then seen to produce different peak torque values. • Only a limited number of grip styles applied by women gave them a sufficient strength to be able to open the jar. • The spherical grip choice produced the highest torque for the females, and they are likely to use a spherical grip on containers of this type. • In males, all of the grip styles produced maximum torques above the torque required for opening jars. The study by Ekşioğlu and Recep (2013) aimed to establish the static hand torque strength norms of healthy adult female population of Turkey and to investigate the effects of handle type, posture, age group, job group, and several anthropometric variables on hand torque strength. Method: • Subjects: 257 females (18–69 years). • Handles: Cylindrical (diam., 51  mm; length, 113  mm), circular (diam.: 60 mm), ellipsoid (with axis lengths: 55.6 and 42 mm), and key. • Task: Maximum voluntary static torque strengths of dominant hand were measured both in sitting and in standing with four types of handles. • Procedure: Caldwell et al.’s (1974) was used. • Posture: Both freestanding body and neutral sitting posture with the shoulder, elbow, and wrist about in neutral posture. • Direction: Clockwise. Results: • Torque strength norms were developed for the adult females (18–69 years). • The handle type, age group, and job group significantly affected torque strength. The highest values were obtained with cylindrical handle followed by circular, ellipsoid, and the lowest with key handle. • The torque strength peaked in 30–39 age group for nonmanual and in 40–49 age group for manual workers.


• Manual workers were stronger than nonmanual workers. • Marginally higher strength values were recorded in standing posture. • Overweight group was marginally stronger than normal weight group. • Grip strength and some of the anthropometric variables, such as forearm circumference and hand breadth, were positively correlated with torque strength. • The comparison results showed similarities and differences with some other nationalities. Ekşioğlu and Baştürk (2013) estimated the static hand torque strength norms of healthy adult male population of Turkey and investigated the effects of handle type, posture, age group, job group, and several anthropometric variables on hand torque strength. Method: • Subjects: 257 males (18–69 years). • Handles: Cylindrical (diam., 51  mm; length, 113  mm), circular (diam.: 60 mm), ellipsoid (with axis lengths: 55.6 and 42 mm), and key. • Task: Maximum voluntary static torque strengths of dominant hand were measured both in sitting and in standing with four types of handles. • Procedure: Caldwell et al.’s (1974) was used. • Posture: Both freestanding body and neutral sitting posture with the shoulder, elbow, and wrist about in neutral posture. • Direction: Clockwise. Results: • Torque strength norms were developed for the adult males (18–69 years). • The handle type, age group, and job group significantly affected torque strength. The highest values were obtained with cylindrical handle followed by circular, ellipsoid, and the lowest with key handle. • The hand torque strength peaked in 40–49 age group for both manual and nonmanual job groups in the three handles

(ellipsoid, circular, and key). On the other hand, for the cylindrical handle, hand torque strength peaked in the 18–29 age group for both manual and nonmanual job groups. • Manual workers were stronger than nonmanual workers. • Marginally higher strength values were recorded in the standing posture. • Grip strength, height, hand length, hand breadth, and forearm circumference were positively correlated with hand torque strength. • Body mass index did not have a significant effect on torque strength when considering only the normal and overweight groups. • The comparison results showed similarities and differences with some other nationalities.

5.3.1  Hand Strength Data

A summary of the hand torque strength data of world populations obtained from some of the aforementioned studies is given in Table 5.2. As can be seen, torque values show some cross-national variation. Some of the variation can be attributed to the experimental conditions, sample sizes, instruments and methodologies used, and age ranges of the subjects studied. Part of the variation, however, may be due to differences among the characteristics of the nations.

5.3.2  Summary and Critique of Findings

1. The following can be drawn from the reviewed studies in terms of methodology: a. All studies reviewed measured static hand torque strength. None addressed dynamic or psychophysical hand torque strength. b. Only a few of the studies may be considered normative. The rest tried to identify the effects of some factors on hand torque strength with very small sample sizes. c. Most of the studies were about young and middle-aged healthy adult populations; only a few were about the elderly and children.

United States

United States

United States Korea

Imrhan and Loo (1986)

Imrhan and Jenkins (1999) Kim and Kim (2000)

COUNTRY

Nagashima and Konz (1986)

STUDY

10 M and 10 F 15 M and 15 F

42 M and F

Exp. 2: 17 M and 12 F

Exp. 1: 10 F

SAMPLE SIZE AND TYPE

28–43 18–29

60–97

NM a

AGE (YEARS)

Table 5.2  Summary Results of Some Torque Strength Studies in the Literature

Cylindrical handle Cylindrical handle

Rough and smooth lid with diameters

Jar lid

HANDLE/OBJECT

113 mm 74 mm 55 mm 31 mm 57.2 mm 34 mm

86 mm

48 mm 67 mm 86 mm 67 mm

DIAMETER

Rubber Bare Smooth: 9.8 7.8 Knurled: 8.9 7.9 Smooth: 11.3 10.4 Knurled: 10.9 10.1 Rough 5.01 4.20 3.30 1.62 9.11 ± 0.72 11.4 ± 1.52 12.66 ± 1.73

MALE

9.5 Smooth 3.29 4.19 3.25 1.53 4.68 ± 0.39 5.96 ± 2.23 6.94 ± 2.08 (Continued)

8.7

6.3

6.3

3.17 5.02 6.04 Cloth

FEMALE

TORQUE STRENGTH (Nm)


United Kingdom

United States

Peebles and Norris (2003)b

Miller et al. (2005)

COUNTRY

Netherlands

Voorbij and Steenbekkers (2002)

STUDY

46 F and 18 M and 13 arthritic patients

150 M and F

750 M and F

SAMPLE SIZE AND TYPE

19–74

61–70

51–60

31–50

20–30 50–54 55–59 60–64 65–69 70–74 75–79 80+ 21–30

AGE (YEARS)

Disk

Butterfly nut

Disk

Jar lid

HANDLE/OBJECT

Table 5.2 (Continued)  Summary Results of Some Torque Strength Studies in the Literature DIAMETER

5 cm 7.5 cm

4 cm

66 mm

8.7 ± 2.2 7.6 ± 1.8 7.6 ± 2.3 6.4 ± 1.8 6.5 ± 2.1 5.4 ± 2.1 5 ± 1.7 4.9 ± 1.7 4.1 ± 1.8 4.5 ± 1.7 4.2 ± 1.1 3.2 ± 1.4 3.9 ± 1 4.3 ± 1.6 3.6 ± 0.8 3.2 ± 0.5 2.16 ± 0.63 3.37 ± 0.86

MALE

5.6 ± 1.4 4.8 ± 1.5 4.7 ± 1.4 4.8 ± 1.4 4 ± 1.2 3.7 ± 1.1 3.5 ± 1.3 3.4 ± 0.9 3.5 ± 1.3 3 ± 1.3 3.5 ± 0.6 2.6 ± 0.6 2.4 ± 0.4 2.8 ± 0.7 2.7 ± 0.5 2.3 ± 0.6 1.44 ± 0.41 2.20 ± 0.63

FEMALE

TORQUE STRENGTH (Nm)


Turkey

Ekşioğlu and Baştürk (2013)

b

257 M

257 F

6 M and 6 F

18–69

18–69

21–35

NM: not mentioned. The study involves 2–86 years of males and females. Only part of the study is shown here.

Turkey

Ekşioğlu and Recep (2013)

a

United States

Seo et al. (2008)

Cylindrical handle

50.7 mm

59.98 mm

Circular handle Key handle

55.58 mm

Ellipsoid handle

Cylindrical handle

50.7 mm

59.98 mm

Circular handle Key handle

55.58 mm

57.8 mm

Ellipsoid handle

Cylindrical handle (aluminum)

Cylindrical handle (rubber)

Inward torque 8.7 ± 2.5 Inward torque 6.9 ± 1.3

Inward torque 3.5 ± 2.1 Inward torque 2.8 ± 1.7 Sit:2.74 ± 0.71 Stand: 2.96 ± 0.78 Sit: 3.30 ± 0.86 Stand: 3.51 ± 0.88 Sit: 1.59 ± 0.39 Stand: 1.65 ± 0.42 Sit: 5 ± 1.3 Stand: 5.3 ± 1.38 Sit: 4.12 ± 1.09 Stand: 4.25 ± 1.12 Sit: 4.83 ± 1.12 Stand: 5.10 ± 1.28 Sit: 1.97 ± 0.38 Stand: 2.05 ± 0.45 Sit: 6.87 ± 1.82 Stand: 7.07 ± 1.87


d. There was no consistency among the strength measurement protocols in the studies. Although the protocol developed by Caldwell et al. (1974) is considered the most scientific and the standard for static strength measurement, only a few studies followed it. e. Only a few studies have investigated hand torque strength at actual hand–handle interfaces, such as on hand tools like screwdrivers and wrenches, cylinders simulating handles, jar lids, electrical connectors, and small knobs. f. There were only two studies that investigated key and oval handles, which are actually widely used in daily life and at work. g. Most of the studies measured torque strength in a standard fixed standing posture, and only a few of them used a freestanding posture. Only two of the studies attempted to measure torque in both standing and sitting postures and compared them. h. Some measured torque in the clockwise direction, some in the counterclockwise direction, and still others in both directions. i. Only a few studies measured two-handed torque strength. As can be seen, there are variations in certain aspects of the methodology used among the studies. Most importantly, since most of the studies did not use the standard protocol (the Caldwell protocol), the accuracy of the results is questionable. Therefore, it is important that future studies follow the standard protocol, both for the accuracy of the results and for comparison purposes. Dynamic torque strength studies should also be carried out, and conversion factors should be estimated between static and dynamic torque strengths for practical applications. Psychophysical torque strength studies should also be carried out to estimate psychophysical torque strength capabilities. More studies are needed on two-handed torque strength applied to large wheels and similar handles. The postures studied are very limited; thus, varying upper extremity and body postures need to be studied to simulate more realistic situations. Free-body posture seems more realistic and should be preferred in future studies. More studies should


involve different world populations, the elderly, children, and people with disabilities. Normative data are very rare. A considerable number of studies should involve generating normative data for various world populations, covering a wide age range and occupation groups for both genders, with large numbers of participants. 2. The following can be drawn in terms of the obtained results: a. Gender has a significant effect on hand torque strength. Caucasian females are about 60%–75% as strong as Caucasian males. b. Hand torque strength is approximately normally distributed within the population. c. Handle type, handle material, handle shape, handle diameter, and handle surface all influence hand torque strength. d. The effect of knurling on torque strength: results are mixed. Some studies indicate that it depends on the diameter, with knurling being ineffective for diameters smaller than 86 mm. e. Rubber handles allow higher torque strength compared to plastic ones. f. Increasing the surface contact area increases hand torque strength. g. Hand torque strength increases as the diameter of a cylindrical handle increases, reaches a maximum at a 50 mm diameter, and afterward slowly decreases. h. The effect of gloves on hand torque strength remains inconclusive. i. Higher torque strength is exerted in the standing posture than in the sitting posture, but some studies find this difference practically insignificant. In addition, exertion height has an effect on hand torque strength. j. Free-body posture allows higher torque strength than standard fixed-body postures. k. Torque strength follows a curvilinear relationship with age: in general, strength increases throughout childhood, peaks in adulthood, and then decreases with age from around 50 years.


l. In general, dominant hand is stronger than nondominant hand. m. The effect of the direction of torque exertion remains inconclusive. n. Manual job group is found to be significantly stronger than nonmanual group. o. The highest torque strength values are obtained with cylindrical handle followed by circular, ellipsoid, and the lowest with key handle. p. Torque outputs with hexagonal and circular handles are higher than triangular handles. q. Torque capacity of subjects with carpometacarpal arthritis is lower than normal subjects. r. 1.3 N m can be recommended as a removal torque for opening child-resistant bottle tops for Dutch population. s. The fifth percentile torque value for females for British older adults is found 1.32 N m, whereas a torque value of 2 N m has been quoted in the literature for Dutch older adults. t. Higher torques can be exerted on square lids compared to those that are circular of the same diameter.

Most of the results summarized above are based on nonstandard strength measurement protocols; thus, some of them are questionable. Most studies did not provide a statistical estimation of the sample size, so whether the sample sizes were sufficient for the conclusions drawn is unknown. The fifth percentile torque strength values are still unknown for most world populations. Children’s torque strength values are also unknown. The optimal circular handle diameter was estimated to be 50 mm. However, considering the hand length differences between genders and among individuals, the reliability of this result is questionable. As with grip strength, one may expect the optimal diameter for torque strength to be a function of hand length. Hence, this result needs further verification.

5.4 Conclusions

It is important for ergonomists and designers to consider human capacity when designing for humans. In designing work or products involving hand torque strength, the hand torque strength capacity of the corresponding


population needs to be consulted. A close examination of the torque strength studies revealed important gaps in the torque strength data available for use in work and product design. Therefore, much work remains to be done. First of all, only a few normative torque data sets are available. Considering the cross-national variations, hand torque strength norms need to be developed worldwide, particularly for the elderly, children, and people with disabilities. Besides jar-opening and child-resistant removal torque strengths, studies should continue to determine hand torque design values for other torque applications in daily life and in industry. Hand torque studies with varying tool types, dimensions, and postures of the body and hand need to be continued. Along with static torque strength, dynamic hand torque strength studies should also be carried out, at least to allow accurate estimation of dynamic data from static data. Furthermore, psychophysical hand torque strength studies should be performed to determine safe and acceptable torque levels for long-duration torque exertion tasks. Researchers should be more rigorous in using the scientifically accepted standard torque strength procedure (i.e., Caldwell et al., 1974) and equipment, as well as statistically adequate sample types and sizes, to obtain reliable and universal results.

References

Adams, S.K., 2006. Hand grip and pinch strength. In W. Karwowski (ed.), International Encyclopedia of Ergonomics and Human Factors, 2nd edn., Vol. 1. CRC Press, Boca Raton, FL, pp. 365–376. Adams, S.K., P.J. Peterson, 1986. Maximum voluntary hand grip torque for circular electrical connectors. In Proceedings of the Human Factors Society, 30th Annual Meeting. Human Factors Society, Santa Monica, CA, pp. 847–851. Berns, T., 1981. The handling of consumer packaging. Applied Ergonomics, 12, 153–161. Caldwell, S.L., D.B. Chaffin, F.N. Dukes-Dobos, K.H.E. Kroemer, L.L. Laubach, S.H. Snook, D.E. Wasserman, 1974. A proposed standard procedure for static muscle strength testing. American Industrial Hygiene Association Journal, 35(4), 201–206. Chaffin, D., G. Andersson, B. Martin, 2006. Occupational Biomechanics, 4th edn. Wiley-Interscience, Hoboken, NJ.


Chaffin, D.B., 1975. Ergonomics guide for the assessment of human static strength. American Industrial Hygiene Association Journal, 36(7), 505–511. Crawford, J.O., E. Wanibe, L. Nayak, 2002. The interaction between lid diameter, height and shape on wrist torque exertion in younger and older adults. Ergonomics, 45(13), 922–933. Daams, B.J., 1990. Static force exertion in standardized, functional and free postures. In Proceedings of the Human Factors Society, 34th Annual Meeting, Santa Monica, CA, pp. 724–728. Daams, B.J., 1994. Human Force Exertion in User-Product Interaction. Physical Ergonomics Series. Delft University Press, Delft, the Netherlands. Daams, B.J., 2006. Torque data. In W. Karwowski (ed.), International Encyclopedia of Ergonomics and Human Factors, 2nd edn., Vol. 1. CRC Press, Boca Raton, FL, pp. 534–544. Ekşioğlu, M., 2004. Relative optimum grip span as a function of hand anthropometry. International Journal of Industrial Ergonomics, 34(1), 1–12. Ekşioğlu, M., 2006. Optimal work-rest cycles for an isometric intermittent gripping task as a function of force, posture and grip span. Ergonomics, 49(2), 180–201. Ekşioğlu, M., 2011. Endurance time of grip-force as a function of grip-span and arm posture. International Journal of Industrial Ergonomics, 41(5), 401–409. Ekşioğlu, M., E. Baştürk, 2013. An estimation of isometric hand torque strength of adult male population of turkey and effects of various factors. In Proceedings of International IIE Conference & YAEM 2013, İstanbul, Turkey, pp. 111–112. Ekşioğlu, M., K. Kızılaslan, 2008. Steering-wheel grip force characteristics of drivers as a function of gender, speed, and road condition. International Journal of Industrial Ergonomics, 38, 354–361. Ekşioğlu, M., Z. Recep, 2013. Hand torque strength of female population of turkey and the effects of various factors. In P. Arezes, J.S. Baptista, M.P. Barroso, P. Carneiro, P. Cordeiro, N. Costa, R.B. Melo, A.S. Miguel, G. Perestrelo (eds.), Occupational Safety and Hygiene. CRC Press, Boca Raton, FL, pp. 37–41. Gallagher, S., J.S. Moore, T.J. Stobbe, 1998. Physical Strength Assessment in Ergonomics. American Industrial Hygiene Association, Fairfax, VA. Imrhan, S.N., C. Loo, 1986. Torque capabilities of the elderly in opening screw top containers. In Proceedings of the Human Factors Society, 30th Annual Meeting, Dayton, OH, Vol. 30(12), pp. 1167–1171. Imrhan, S.N., G.D. Jenkins, 1999. Flexion-extension hand torque strengths: Applications in maintenance tasks. International Journal of Industrial Ergonomics, 23, 359–371. Kim, C.H., T.K. Kim, 2000. Maximum torque exertion capabilities of Korean at varying body postures with common hand tools. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, San Diego, CA, Vol. 3, pp. 157–160. Kong, Y.K., B.D. Lowe, S.J. Lee, E.F. Krieg, 2007. Evaluation of handle design characteristics in a maximum screwdriving torque task. Ergonomics, 50(9), 1404–1418.


Kroemer, K., 2006. Static and dynamic strength. In W. Karwowski (ed.), International Encyclopedia of Ergonomics and Human Factors, 2nd edn., Vol. 1. CRC Press, Boca Raton, FL, pp. 511–512. Kroemer, K.H.E., 1970. Human strength: Terminology, measurement and interpretation of data. Human Factors, 12(3), 297–313. Kroemer, K.H.E., H.J. Kromer, K.E. Kroemer-Elbert, 1997. Engineering Physiology, 3rd edn. Van Nostrand Reinhold, p. 107. Leigh, J.P., 2011. Economic burden of occupational injury and illness in the United States. The Milibank Quarterly, 89(4), 728–772. Miller, M.C., M. Nair, M.E. Baratz, 2005. A device for assessment of hand and wrist coronal plane strength. Journal of Biomechanical Engineering, 127, 998–1000. Mital, A., 1986. Effect of body posture and common hand tools on peak torque exertion capabilities. Applied Ergonomics, 17(2), 87–96. Mital, A., S. Kumar, 1998. Human muscle strength definitions, measurement and usage: Part I—Guidelines for the practitioner. International Journal of Industrial Ergonomics, 22, 101–121. Mital, A., N. Sanghavi, 1986. Comparison of maximum volitional torque exertion capabilities of males and females using common hand tools. Human Factors, 28(3), 283–294. Nagashima, K., S. Konz, 1986. Jar lids: Effect of diameter, gripping materials and knurling. In Proceeding of the Human Factors Society, 30th Annual Meeting. Human Factors Society, Santa Monica, CA, pp. 672–674. Nayak, U., J. Queiroga, 2004. Pinch grip, power grip and wrist twisting strengths of community-dwelling, healthy older adults. Gerontechnology, 3(2), 77–88. Norris, B., J.R. Wilson, 1997. Designing Safety into Products. Product Safety and Testing Group, Institute for Occupational Ergonomics, London, U.K. Peebles, L., B. Norris, 2003. Filling ‘gaps’ in strength data for design. Applied Ergonomics, 34, 73–88. Pheasant, S., D. O’Neill, 1975. Performance in gripping and turning—A study in hand/handle effectiveness. Applied Ergonomics, 6(4), 205–208. Replogle, J.O., 1983. Hand torque strength with cylindrical handles. In Proceedings of the Human Factors Society, 27th Annual Meeting. Human Factors Society, Santa Monica, CA, pp. 412–416. Rohles, F.H., K.L. Moldrup, J.E. Laviana, 1983. Opening Jars: An Anthropometric Study of the Wrist Twisting Strength of Children and the Elderly. Institute for Environmental Research, Kansas State University, Manhattan, KS. Rowson, J.A., A. Yoxall, 2011. Hold, grasp, clutch or grab: Consumer grip choices during food container opening. Applied Ergonomics, 42, 627–633. Schoorlemmer, W., H. Kanis, 1992. Operation of controls on everyday products. In Proceedings of the Human Factors, 36th Annual Meeting. Human Factors and Ergonomics Society, Santa Monica, CA, pp. 509–13. Seo, N.J., T.J. Armstrong, J.A. Ashton-Miller, D.B. Chaffin, 2007. The effect of torque direction and cylindrical handle diameter on the coupling between the hand and a cylindrical handle. Journal of Biomechanics, 40, 3236–3243.


Seo, N.J., T.J. Armstrong, D.B. Chaffin, J.A. Ashton-Miller, 2008. The effect of handle friction and inward or outward torque on maximum axial push force. Human Factors, 50, 227–236. Smith, J.L., 2006. Static muscle strength. In W. Karwowski (ed.), International Encyclopedia of Ergonomics and Human Factors, 2nd edn., Vol. 1. CRC Press, Boca Raton, FL, pp. 513–514. Snook, S.H., 1985. Psychophysical acceptability as a constraint in manual working capacity. Ergonomics, 28, 331–335. Steenbekkers, L.P.A., 1993. Child Development, Design Implications and Accident Prevention. Physical Ergonomics Series. Delft University Press, Delft, the Netherlands. Swain, A.D., G.C. Shelton, L.V. Rigby, 1970. Maximum torque for small knobs operated with and without gloves. Ergonomics, 3(2), 201–208. Voorbij, A.I.M., L.P.A. Steenbekkers, 2002. The twisting force of aged consumers when opening a jar. Applied Ergonomics, 33, 105–109. Wieszczyk, S.M., R.W. Marklin, H.J. Sánchez, 2009. Height of industrial hand wheel valves affects torque exertion. Human Factors, 51, 487–496. Yoxall, A., R. Janson, S.R. Bradbury, J. Langley, J. Wearn, S. Hayes, 2006. Openability: Producing design limits for consumer packaging. Packaging Technology and Science, 19, 219–225.

6  Optimization of Traffic Flow on Kuwait's Roads and Highways
Chibli Joumaa, Noriah Al-Mas, Suad Al-Suwaiti, Noor Ashour, and Shaima Goodarzi

Contents

6.1 Introduction
6.2 Literature Review
6.2.1 Survey Development
6.2.2 Algorithms
6.2.2.1 Network Model
6.3 Methodology
6.3.1 Data Collection
6.3.1.1 Developing Questions
6.3.1.2 Paper-Based Surveys
6.3.1.3 Online-Based Surveys
6.3.2 Road Information
6.3.3 Dividing Kuwait's Map
6.3.4 Building Kuwait's Network
6.4 Results and Discussion
6.4.1 Analyzing Results of Surveys
6.4.1.1 General Survey Analysis
6.4.1.2 Statistical Analysis of Traffic
6.4.2 Simulation and Improvement
6.4.2.1 Minimal Spanning Tree of Kuwait
6.4.2.2 Scenarios and Improvements
6.5 Conclusion
Appendix 6.A: Surveys

6.A.1 Paper-Based Survey
6.A.1.1 Analyzing Kuwait's Road Traffic
6.A.1.1.1 Weekdays
6.A.1.1.2 Weekends
6.A.1.1.3 After Work
6.A.2 Online-Based Survey
References

6.1 Introduction

As residents of the state of Kuwait, our main focus in this study is the increasing traffic problem on Kuwait's roads. Traffic is steadily getting worse and, if not addressed, will continue to do so. The aim of this chapter is to study and optimize the traffic flow and reduce congestion as much as possible in Kuwait. The population of Kuwait has been increasing. According to the Kuwait government [1,2], the number of residents in Kuwait went from 321,621 in 1961 to 1,697,301 in 1985, to 2,213,403 in 2005, and to 3,328,136 in June 2008. Naturally, this large increase over the years has led to an increase in the number of automobiles circulating in the country. As a matter of fact, according to a governmental study [2], Kuwait witnesses around a 6%–9% annual increase in the number of vehicles. Since the area of Kuwait is rather small, the increase in cars has resulted in traffic congestion. The study also showed that the population will continue to grow and is expected to exceed 8,000,000 residents by 2020, meaning that the traffic problems will only get worse. That is why optimizing the traffic in Kuwait has become a necessity. In Section 6.2, the literature review, including survey development and the algorithms for network optimization and transportation models applied to Kuwait's network, is presented. These methods have been drawn from several operations research books as well as scientific papers. Section 6.3 presents the approach used to gather all the information required to analyze the existing flow of traffic. Gathering the information was the longest process; it took about 2 months to develop, distribute, and collect the surveys. Surveys were developed to accommodate the public, in the sense of making


it as easy as possible for people to follow. Online surveys helped speed up the process because they were distributed through social media; therefore, the responses came in faster. To approach the issue, the existing road divisions of Kuwait had to be studied. Kuwait was divided into different zones using both its existing road network division and the results of the surveys. The main roads, highways, areas, and people's behavior were observed in order to study the flow of automobiles. Finally, a network was created for Kuwait that linked all of the zones together using arcs and nodes representing roads and areas, respectively. In Section 6.4, the results and discussion, including the analysis of the surveys along with the statistical analysis of traffic, are presented for every time period of the day. Each time period has a dedicated map that shows the flow of traffic and the zones that are busy. Simulation through the minimal spanning tree then indicates the shortest route from one zone to another, linking all zones in Kuwait. Finally, scenarios were conducted and improvements made for special cases. It was important to conduct scenarios to show the natural flow and to verify that the improvements hold even when disruptions occur. The chapter ends with the conclusion. Appendix 6.A includes the paper-based and online surveys and is followed by the references.

6.2  Literature Review

A literature review calls attention to the critical points of knowledge in the information and sources used for a study in a certain field; it also evaluates them and designates the path of the chapter. A literature review is a summary and outline of a specific area of research: it indicates the reasons and aims for pursuing the research and helps the reader clearly understand what the chapter is about.

6.2.1  Survey Development

In order to study the traffic flow and analyze and optimize it, information about the traffic is required. Ref. [3] offers sample survey questions, answers, and tips to those who incorporate surveys into projects or services. One of the first important aspects when developing


surveys is to ultimately satisfy customers; sample questions are offered for inspiration, one of which is to define gender and include age ranges. The reference emphasizes that answers should be provided in the form of bullet points, so that the choices are clear. By definition, a survey studies a sample of individuals from a given population and draws conclusions about the population based on the sample. Therefore, for accurate and reliable results, the sample to be studied must be defined correctly, and it should represent the whole population well.

6.2.2 Algorithms

Linear programming [4–10] is used for decision-making purposes. All linear programming models include three components:
1. Decision variables that we want to determine, which is the first step in developing a model.
2. The objective function (goal) that will be optimized by either maximizing or minimizing:

Maximize/Minimize  c₁x₁ + c₂x₂ + c₃x₃ + ⋯ + cₙxₙ.

3. The constraints [4] that are used to limit the values, and the solution must satisfy those constraints:

a₁₁x₁ + a₁₂x₂ ≤ b₁,
a₂₁x₁ + a₂₂x₂ ≤ b₂,
a₃₁x₁ + a₃₂x₂ ≤ b₃.

Any values of the defined variables that satisfy all the constraints give a feasible solution. Otherwise, the solution is infeasible. However, the goal of linear programming is to find the optimum, which is the best feasible solution that either maximizes or minimizes the objective function [4].
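To make these three components concrete, here is a minimal sketch that solves a tiny two-variable linear program with SciPy's linprog; the coefficients are arbitrary illustrative values, not data from this chapter.

```python
from scipy.optimize import linprog

# Illustrative (arbitrary) problem: maximize 3x1 + 2x2
# subject to  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x1, x2 >= 0.
c = [-3, -2]                 # linprog minimizes, so negate to maximize
A_ub = [[1, 1], [1, 3]]      # left-hand sides of the <= constraints
b_ub = [4, 6]                # right-hand sides

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)       # optimal decision variables and objective value
```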

6.2.2.1  Network Model  In this section, we introduce network optimization algorithms. Many algorithms were developed over the years; we will focus on the following:


Figure 6.1  Nodes and arcs.

1. Minimal spanning tree
2. Shortest route
   a. Dijkstra's algorithm
   b. Floyd's algorithm

A network consists of a set of nodes and arcs. The notation for describing a network model is (N, A), where N is the set of nodes and A is the set of arcs. Nodes are sources and destinations, and the arcs represent the flow between the nodes (Figure 6.1). A network is considered connected if every pair of nodes is linked by a path. The flow on the arcs in our case will be the automobile traffic flow on highways [4].

6.2.2.1.1  Minimal Spanning Tree Algorithm  The minimal spanning tree algorithm links the nodes of a network using the shortest link possible [5,7]. The advantage of the minimal spanning tree solution is that it provides the most economical design of the road system. Nodes may represent areas, intersections, and bus stops, while the links can represent the capacity of the route, cost, or distance. This algorithm is used in both transportation and communication infrastructures to reach the optimal network [6]. Algorithm:

Step 1: Select the shortest link between any two given nodes in the network.
Step 2: Select the shortest potential link between a node that has already been touched by a link and a node that has not yet been connected.
Step 3: Repeat step 2 until every node is touched by a link and until the final destination is reached.
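The steps above essentially describe Prim's way of growing the tree. A minimal sketch in Python, assuming a simple adjacency-dictionary representation with hypothetical zone labels and road lengths (not Kuwait's actual network):

```python
import heapq

def minimal_spanning_tree(graph, start):
    """Prim's algorithm: grow the tree from `start`, always adding the
    shortest link between a connected node and an unconnected one."""
    visited = {start}
    # Candidate links reachable from the tree: (distance, from_node, to_node)
    edges = [(d, start, v) for v, d in graph[start].items()]
    heapq.heapify(edges)
    tree = []
    while edges and len(visited) < len(graph):
        d, u, v = heapq.heappop(edges)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, d))
        for w, dw in graph[v].items():
            if w not in visited:
                heapq.heappush(edges, (dw, v, w))
    return tree

# Hypothetical zones and road lengths (km)
roads = {
    "Z1": {"Z2": 4, "Z3": 7},
    "Z2": {"Z1": 4, "Z3": 2, "Z4": 5},
    "Z3": {"Z1": 7, "Z2": 2, "Z4": 3},
    "Z4": {"Z2": 5, "Z3": 3},
}
print(minimal_spanning_tree(roads, "Z1"))
# [('Z1', 'Z2', 4), ('Z2', 'Z3', 2), ('Z3', 'Z4', 3)]
```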

6.2.2.1.2  Shortest Route Algorithm  In a transportation network, the shortest route algorithm determines the shortest route between two nodes, a source and a destination. Two algorithms are used for solving such networks [4].


6.2.2.1.2.1  Dijkstra’s Algorithm  This algorithm is used to determine the shortest routes between the source node and every other node in the network. Dijkstra’s algorithm can be applied to improve and optimize a logistics distribution system. As a result, the efficiency of transport vehicles will be improved, and manpower and material resources will be saved.

Algorithm:

Step 1: Label the first node (the source) with a zero label.
Step 2: Add the cost on the arc between node zero and the next destination to the label of node zero, and place the result in the destination node.
Step 3: Repeat step 2, but this time the source is the destination with the shortest route reached in the last step.
Step 4: Repeat the previous steps until you reach the final desired destination.
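A compact sketch of Dijkstra's algorithm on the same kind of zone network (the labels and distances below are hypothetical, chosen only to illustrate the idea):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from `source` to every other node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {
    "Z1": {"Z2": 4, "Z3": 7},
    "Z2": {"Z1": 4, "Z3": 2, "Z4": 5},
    "Z3": {"Z1": 7, "Z2": 2, "Z4": 3},
    "Z4": {"Z2": 5, "Z3": 3},
}
print(dijkstra(roads, "Z1"))   # {'Z1': 0, 'Z2': 4, 'Z3': 6, 'Z4': 9}
```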

6.2.2.1.2.2  Floyd’s Algorithm  Floyd’s algorithm is used for determining the shortest route between any two nodes in the network. It is a more general approach than Dijkstra’s algorithm because it yields the shortest route between every pair of nodes rather than from a single source only. It is used, for example, for freight trains that use a railroad network to transport goods from an origin to a destination [9]. Algorithm:

Step 1: Start by drawing two tables whose rows and columns correspond to the nodes of the network, one for the distances (D₀) and one for the nodes (S₀).
Step 2: Highlight the first column and row of table D₁. In the unhighlighted cells, search for an infinity sign if available; if not, search for the highest number.
Step 3: The cell containing the infinity sign is changed to the sum of its highlighted coordinates. The cell that has been changed in table D₁ is set to the iteration k in table S₁.
Step 4: Repeat steps 2 and 3, but instead of highlighting the first row and column, highlight the second, then the third, and so on, until you reach the last column or row.
After editing the tables, the end result is the shortest distance between each source and destination.
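The tabular procedure above corresponds to the classic Floyd–Warshall update. A minimal sketch of the distance-matrix part (hypothetical 4-zone data; the predecessor table S is omitted for brevity):

```python
INF = float("inf")

def floyd_warshall(dist):
    """In-place Floyd-Warshall: dist[i][j] becomes the shortest distance
    from node i to node j, trying every node k as an intermediate stop."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Hypothetical 4-zone distance matrix (INF = no direct road)
D = [
    [0, 4, 7, INF],
    [4, 0, 2, 5],
    [7, 2, 0, 3],
    [INF, 5, 3, 0],
]
print(floyd_warshall(D))  # e.g., D[0][3] becomes 9 and D[0][2] becomes 6
```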


6.3 Methodology

This section presents the approach leading to the results and improvements for Kuwait's roads. The first step is the data collection process. Section 6.3.1 describes how the surveys were developed and how data were gathered through online and paper-based surveys. As mentioned earlier, Kuwait's population is above three million, so a sample population should be defined, and the minimum number of collected surveys should be determined in order to ensure that the sample is representative of Kuwait's population. Afterward, the data regarding Kuwait's road lengths, capacities, and speeds are computed. After the calculations regarding Kuwait's roads are made, Kuwait is divided into zones according to the highways and their intersections. Finally, Kuwait's network is built.

This section represents the approach leading to the results and improvements of Kuwait’s roads. The first step is the data collection process. Provided in Section 6.3.1 is how surveys were developed and data gathered through online and paper-based surveys. As mentioned earlier, Kuwait’s population is above three million. A sample population should be defined. The minimum number of collected surveys should be determined in order to ensure the representativeness of the sample with regard to Kuwait’s population. Afterward, the data regarding Kuwait’s road lengths, capacities, and speed information will be computed. After the calculations regarding Kuwait’s roads are made, Kuwait is divided into zones according to the highways and their intersections. Finally, Kuwait’s network is built. 6.3.1  Data Collection

The first step taken was the data collection process. Surveys were developed, distributed, and collected. Before completing the data collection process, the minimum sample size required to obtain accurate results needs to be calculated. Several methods exist to define the sample size to be used. In this study, the infinite-population sample size formula from [11], applicable when the population exceeds 50,000, was used. The minimum sample size required is calculated using the following formula:

n = Z² × p × (1 − p)/C²,  (6.1)

where
n is the sample size
Z is the Z-value (1.645 for a 90% confidence level)
p is the percentage of the population expected to respond (generally 0.5)
C is the confidence interval (margin of error), expressed as a decimal (0.06 here; it can range between 4% and 6%)
The minimum sample size expected is 187.9 ≈ 188 answered surveys.
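As a quick check of Equation 6.1 with the values used in the chapter (Z = 1.645, p = 0.5, C = 0.06), a short sketch; the helper name min_sample_size is only illustrative:

```python
import math

def min_sample_size(z=1.645, p=0.5, c=0.06):
    """Infinite-population sample size formula (Equation 6.1)."""
    return z ** 2 * p * (1 - p) / c ** 2

n = min_sample_size()
print(round(n, 1), math.ceil(n))  # 187.9 -> rounded up to 188 surveys
```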


Figure 6.2  Constructing a survey (flowchart steps: develop survey structure and type, develop questions, pretest, sample size calculation, distribution, collect responses, filter relevant responses, analysis of responses, analysis of results, final conclusion).

After computing the minimum number of surveys needed, two types of surveys were developed: paper-based and online surveys. A sample of these surveys can be found in Appendix 6.A. The information needed for the study was defined, and questions were tailored accordingly in the surveys. Using the surveys, we can specify the routes, sources, and destinations of the people at given times and hence track their movements. One hundred and nine paper-based surveys and one hundred and thirty-one online surveys were answered. This gives a total of 240 surveys, which is above the minimum calculated sample size. The steps followed are shown in Figure 6.2.

6.3.1.1 Developing Questions  Certain questions were developed for

the purpose of tracking the congestion of roads and zones. These questions include the timing of departure from sources and arrival at destinations. In addition, people were asked about the routes they chose in order to determine the trend. The whole process took approximately 2 months, which included developing, distributing, and gathering the surveys. At first, only paper-based surveys were used and distributed to various people. However, it became clear that a more random sample and a wider population were needed; therefore, the idea of creating an online survey was introduced. Online surveys were kept simple because the main goal was for people to complete them as fast as possible. As for the distribution of the


surveys, the online version was spread through social networks. The online survey sample is shown in Appendix 6.A.2. After all the data were collected, they were entered into an Excel sheet, which made their organization and analysis easier.

6.3.1.2  Paper-Based Surveys  To locate the traffic on Kuwait roads and

to know what areas are frequently visited, a survey was distributed to the public. The goal of the survey was to know the source, the destination, and the links between them, which are roads. Knowing the time of leaving from and arriving at a certain area made it easier to calculate the time spent on the road. The paper-based survey was basically a combination of three different surveys, one addressing workdays, one addressing after-work movements, and one addressing the weekend. A sample survey is provided in Appendix 6.A.1. Two hundred and thirty paper-based surveys were collected, but only one hundred and nine surveys were counted in the analysis due to the fact that some of them were incomplete. Therefore, the response rate was 47.39%.

6.3.1.3  Online-Based Surveys  After the paper-based surveys were car-

ried out, a broader sample of people was needed. Therefore, an online survey was created using Survey Monkey (2012) and was randomly distributed to the general public in Kuwait using social networks. The purpose of this survey was to determine where people were during specific time periods. Just like the paper-based surveys, the online survey differentiated between workdays and weekends. For each section, the time periods were specified, and people stated which area they were in during that time, which helped identify the areas that were the most crowded during the time periods. The results also helped in dividing Kuwait into zones and pointing out the congested zones.

6.3.2  Road Information

Capacity analysis involves quantitative evaluation of the capacity of a road section. It uses a set of procedures to determine the maximum flow of traffic that a section would carry [12].


Possible capacity is defined as the maximum number of vehicles that can pass a point in 1 h under prevailing roadway and traffic conditions. Practical capacity, on the other hand, is the maximum number of vehicles that can pass a point without restricting the average driver’s ability to pass other vehicles. The main goal of this section is to find out the number of cars a road can handle. Therefore, information regarding the roads was gathered, and numerous calculations were conducted. The Ministry of Interior provided the information needed for conducting the capacity calculations, which involves the length of the roads, the maximum speed, and the minimum speed. The average speed was computed along with the time for one car to cross the road. Equation 6.2 gives the time T (in hours) for one car to cross the road:

$$T = \frac{L}{A} \quad (6.2)$$

where A is the average speed for each lane (km/h) and L is the length of the road (km). The next step is to find the capacity of the lane to ultimately compute the actual number of cars on the road. The average length of a car was found by measuring several cars to eventually find out how many cars fit in a lane. To find the capacity per lane, the average length of cars (C), as well as the space provided between the cars (S), is needed. The average length of a car is 0.00443 km (three cars of 0.0043, 0.0043, and 0.0047 km were measured):

$$\text{Capacity per lane} = \frac{R}{C + S} \quad (6.3)$$

where R is the road length (km), C is the average car length (km), and S is the space between each car (0.001 km). Table 6.1 shows information on the roads, whereas Table 6.2 shows information on the capacity of the roads.
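Equations 6.2 and 6.3 reduce to a few arithmetic steps. The sketch below reproduces the 1st Ring Road row of Tables 6.1 and 6.2 as a check (length 4.7 km, average speed 70 km/h, average car length 0.00443 km, spacing 0.001 km); the function names are ours.

```python
def crossing_time(length_km, avg_speed_kmh):
    """Equation 6.2: time (h) for one car to cross the road."""
    return length_km / avg_speed_kmh

def capacity_per_lane(length_km, car_length_km=0.00443, spacing_km=0.001):
    """Equation 6.3: number of cars that fit in one lane."""
    return length_km / (car_length_km + spacing_km)

t = crossing_time(4.7, 70)       # ~0.067 h, as in Table 6.1
cap = capacity_per_lane(4.7)     # ~866 cars/lane, as in Table 6.2
print(round(t, 3), round(cap))
```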

Table 6.1  Road Information

STREET NAME | LENGTH (km) | MAXIMUM SPEED (km/h) | MINIMUM SPEED (km/h) | AVERAGE SPEED (km/h) | TIME FOR ONE CAR TO CROSS THE ROAD (T in h)
1st Ring Road | 4.7 | 100 | 40 | 70 | 0.067
2nd Ring Road | 8 | 80 | 40 | 60 | 0.133
3rd Ring Road | 8.3 | 80 | 40 | 60 | 0.138
4th Ring Road | 15 | 80 | 40 | 60 | 0.250
5th Ring Road | 30 | 120 (bridge between Rumaithiya and Salmiya = 80) | 40 | 80 | 0.375
6th Ring Road | 58 | 120 | 40 | 80 | 0.725
7th Ring Road | 32 | 120 | 40 | 80 | 0.400
Fahaheel Highway (starting from south until 4th Ring Road) | 50 | 120 | 60 | 90 | 0.556
Fahaheel (after 4th Ring Road) | 6.4 | 80 | 60 | 70 | 0.091
King Fahad (starting from Saudi Arabia until 5th Ring Road) | 96.5 | 120 | 60 | 90 | 1.072
King Fahad (after 5th Ring Road until Kuwait City) | 7 | 100 | 60 | 80 | 0.088
King Faisal (from Airport to Qortuba) | 2.5 | 120 | 60 | 90 | 0.028
King Faisal (from Qortuba until 1st Ring Road) | 1.5 | 100 | 60 | 80 | 0.019
Al-Ghazally Street | 10 | 100 | 60 | 80 | 0.125
Damascus Street | 7 | 80 | 40 | 60 | 0.117


Table 6.2  Capacity Information

STREET NAME | CAPACITY PER LANE (CARS/LANE) | NUMBER OF LANES
1st Ring Road | 866 | 2
2nd Ring Road | 1,474 | 3
3rd Ring Road | 1,529 | 3
4th Ring Road | 2,763 | 3
5th Ring Road | 5,525 | 3
6th Ring Road | 10,682 | 3
7th Ring Road | 5,894 | 4
Fahaheel Highway (starting from south until 4th Ring Road) | 9,209 | 3
Fahaheel (after 4th Ring Road) | 1,179 | 4
King Fahad (starting from Saudi Arabia until 5th Ring Road) | 17,772 | 3
King Fahad (after 5th Ring Road until Kuwait City) | 1,290 | 4
King Faisal (from Airport to Qortuba) | 461 | 3
King Faisal (from Qortuba until 1st Ring Road) | 277 | 3
Al-Ghazally Street | 1,842 | 2
Damascus Street | 1,290 | 3

The time taken for one car (T) to cross the road is shown in Table 6.1. The next step is to determine how many cars are able to cross the road in a specific period of time. The time was determined through the specified time slots that were used during the data collection process. The calculation up to this point has been directed toward an ideal case; since real conditions differ, an estimate was added to the formula. Therefore, a percentage was added to the road capacity to account for the cars already on the road that still reach their destination within the period [9]. Our estimate was that 40% of the cars would pass the road and reach their desired destinations:

$$N = (P + 0.40 \times \text{capacity per lane}) \times L \quad (6.4)$$

where N is the actual number of cars the road can carry in the period, L is the number of lanes, and P is the number of cars that can cross a lane in the specified period, P = (hours in the period)/T. Table 6.3 shows the number of cars on each road.
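Equation 6.4 can be checked against Table 6.3. The sketch below reproduces the 1st Ring Road figure for the 6–8 a.m. period (a 2 h period, T ≈ 0.067 h, about 866 cars per lane, two lanes) under the chapter’s 40% pass-through estimate; the function name is ours.

```python
def cars_per_period(period_hours, crossing_time_h, cap_per_lane, lanes, pass_share=0.40):
    """Equation 6.4: N = (P + pass_share * capacity per lane) * lanes, with P = period / T."""
    p = period_hours / crossing_time_h   # cars that can cross one lane during the period
    return (p + pass_share * cap_per_lane) * lanes

n = cars_per_period(period_hours=2, crossing_time_h=0.067, cap_per_lane=866, lanes=2)
print(round(n))  # ~753 cars, matching Table 6.3 for the 1st Ring Road, 6-8 a.m.
```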


Table 6.3  Capacity of Each Road per Period

STREET NAME | 6–8 A.M. | 8 A.M.–12 P.M. | 12–4 P.M. | 4–6 P.M. | 6–11 P.M.
1st Ring Road | 753 | 812 | 812 | 753 | 842
2nd Ring Road | 1,814 | 1,859 | 1,859 | 1,814 | 1,882
3rd Ring Road | 1,878 | 1,922 | 1,922 | 1,878 | 1,943
4th Ring Road | 3,340 | 3,364 | 3,364 | 3,340 | 3,376
5th Ring Road | 6,646 | 6,662 | 6,662 | 6,646 | 6,670
6th Ring Road | 12,827 | 12,835 | 12,835 | 12,827 | 12,839
7th Ring Road | 9,450 | 9,470 | 9,470 | 9,450 | 9,480
Fahaheel Highway (starting from south until 4th Ring Road) | 11,062 | 11,072 | 11,072 | 11,062 | 11,078
Fahaheel (after 4th Ring Road) | 1,974 | 2,062 | 2,062 | 1,974 | 2,106
King Fahad (starting from Saudi Arabia until 5th Ring Road) | 21,332 | 21,338 | 21,338 | 21,332 | 21,340
King Fahad (after 5th Ring Road until Kuwait City) | 2,155 | 2,246 | 2,246 | 2,155 | 2,291
King Faisal (from Airport to Qortuba) | 767 | 982 | 982 | 767 | 1,089
King Faisal (from Qortuba until 1st Ring Road) | 648 | 964 | 964 | 648 | 1,122
Al-Ghazally Street | 1,506 | 1,538 | 1,538 | 1,506 | 1,554
Damascus Street | 1,599 | 1,651 | 1,651 | 1,599 | 1,676

6.3.3  Dividing Kuwait’s Map

Kuwait’s roads are designed to cover the whole country. According to the map of Kuwait, there are roads that are well defined; they include horizontal ring roads, and roads and highways that intersect vertically through the ring roads. Seven horizontal ring roads are named sequentially starting from 1 to 7. • The 1st Ring Road—which starts from Sharq and ends at Qiblah. • The 2nd Ring Road—which starts from Bnaid Al-Gar and ends at Shuwaikh. • The 3rd Ring Road—which starts from Al-Daiya and ends at Kaifan. • The 4th Ring Road—which starts from Salmiya and ends at Shuwaikh Industrial.


• The 5th Ring Road—which starts from Salmiya and ends at Doha.
• The 6th Ring Road—which starts from Mishref and ends at Jahra.
• The 7th Ring Road—which starts from Mubarak Al-Kabeer and extends to the west of Kuwait.

The vertical roads that intersect the ring roads are as follows:

• Gulf Road—which starts from Salmiya and ends at Shuwaikh.
• Fahaheel/Istiqlal (30)—which starts from Kuwait City and ends at Chalets District.
• King Fahad/Maghreb (40)—which starts from Kuwait City and ends at the Kuwait/Saudi Arabia borders.
• King Faisal/Riyadh (50)—which starts from Kuwait City and ends at Al-Ahmadi.
• Airport Road (55)—which starts from Shuwaikh Industrial and ends at the Kuwait International Airport.
• Ghazalli Road (6)—which starts from Shuwaikh Residential and ends at the Airport.
• Al-Jahra Road (80)—which starts from Kuwait City and ends at Al-Jahra.

Based on the ring roads and intersections, the map was divided into various zones. On the map and the survey results, the first identifications were the main roads and areas of attraction. Kuwait City was clearly the biggest issue, so it was taken as a separate zone. Moreover, The Avenues (the largest mall in Kuwait, which is a very popular attraction), Jabriya (a residential area with some famous schools and hospitals), and Kuwait University were also some of the congested areas on weekends and weekdays. Finally, 48 zones were created. Figure 6.3 describes the chosen ring roads and highways. As shown in Figure 6.4, extracted from Ref. [13], the marked destinations are identified due to the intersections made by main roads through each ring road. These intersections connect main roads and divide areas beneath the ring roads; therefore, the divided areas were chosen as zones. The red borders in Figure 6.4 show how Al-Adailiya is selected as a zone, as it lies between the 3rd


Figure 6.3  Representation of Kuwait’s roads (schematic of the seven ring roads crossed by Gulf Road, Fahaheel (30), King Fahad (40), King Faisal, Airport Road (55), Ghazali (6), and Damascus Street).

Figure 6.4  Method used to divide zones.

and 4th Ring Roads and is intersected by King Faisal Road and Damascus Street as indicated. The zones are selected with respect to the results of the surveys concerning the attraction areas and roads that are often used. The zones are summarized in Table 6.4.


Table 6.4  Zones

ZONE | AREAS
1 | Al-Zahra
2 | Abdullah Al-Salem
3 | Abraq Khaitan
4 | Adan + Qusoor + Qurain + Mubarak Al-Kabeer + Sabah Al-Salem + Egaila + Al-Rigga + Sabahiya
5 | Airport District + Al-Dhajeej + Abdullah Al-Mubarak + Jleeb Al-Shuyoukh
6 | Al-Abdli
7 | Al-Adailiya
8 | Al-Ahmadi
9 | Al-Bidea
10 | Al-Daiya + Qadsiya
11 | Al-Farwaniya
12 | Al-Jahra + Sa’ad Al-Abdullah
13 | Al-Khafji + Al-Khiran + Bnaider + Jlai’a + Al-Zoor
14 | Al-Mansouriyah + Dasma
15 | Al-Messila + Abu Al-Hassani + Mahboula + Fintas + Abu Halifa + Al-Mangaf + Fahaheel
16 | Al-Rabiya + Al-Rahab + Ishbilya + Ardiya
17 | Al-Rai
18 | Al-Rawda
19 | Kabd
20 | Al-Sha’ab
21 | Al-Shamiya
22 | Al-Wafra
23 | Andalus + Riggae + Nahda
24 | Bayan + Mishref
25 | Bnaid Al-Gar
26 | Dahar
27 | Faiha
28 | Firdous + Sabah Al-Nasser
29 | Hawalli
30 | Jabriya
31 | Kaifan
32 | Khaldiya
33 | Kuwait City (Murgab + Sharq + Kuwait City)
34 | Mina Abdullah
35 | Nuzha
36 | Qayrawan
37 | Qortuba
38 | Qurnata
39 | Rumaithiya + Salwa
40 | Salmiya
41 | Shuwaikh A (Residential)
42 | Shuwaikh B (Industrial)
43 | Shuwaikh C (Health Region + Kuwait University)
44 | South Surra
45 | Subhan
46 | Sulaibiya
47 | Surra
48 | Yarmouk

6.3.4  Building Kuwait’s Network

A network model of Kuwait is built in order to simulate the current road situation. When building a network, nodes and arcs are needed. The nodes are represented by zones, which are either sources or destinations. The arcs represent the routes that link zones together. To construct the network, each zone has been taken and directly linked to a neighboring zone. The route has been recorded, and the distance and the capacity have been calculated. The data are then used to simulate several optimization algorithms. Figure 6.5 shows a sample of Kuwait’s network.

Figure 6.5  Kuwait’s network sample.
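One simple way to hold such a network outside TORA is a dictionary keyed by zone, with the distance and capacity stored on every arc; the zone labels and values below are placeholders, not the measured data.

```python
# Each record: (zone_a, zone_b, distance_km, capacity_cars_per_period) -- placeholder values
arc_records = [
    ("40", "24", 4.6, 6600),
    ("24", "39", 3.1, 6600),
    ("39", "15", 8.2, 11000),
]

network = {}
for a, b, dist, cap in arc_records:
    # roads carry traffic in both directions, so store both orientations of the arc
    network.setdefault(a, {})[b] = {"distance_km": dist, "capacity": cap}
    network.setdefault(b, {})[a] = {"distance_km": dist, "capacity": cap}

print(network["24"])  # neighbors of zone 24 with their arc attributes
```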


6.4  Results and Discussion

6.4.1  Analyzing Results of Surveys

6.4.1.1  General Survey Analysis  The sample population for the surveys

is described in Table 6.5. Males answered 46.7% of the surveys, and females answered 53.3%. Note that surveys were distributed randomly between males and females, across different age groups and occupations. This variety helps obtain more accurate and reliable results. The majority of the participants (95%) were aged 18 and above; therefore, the data gathered were for people driving and owning cars. The Ministry of Interior generously provided the day’s peak times, and after the surveys were analyzed and the behavior of people during the day was studied, the day was divided into five periods:

1. 6–8 a.m.: Mostly students and employees leave during this time.
2. 8 a.m.–12 p.m.: The range of time at which people arrive at work, depending on their shifts.
3. 12–4 p.m.: The range of time at which people arrive at or leave work, depending on their shifts.
4. 4–6 p.m.: Most people leave work or their house to run errands.
5. 6–11 p.m.: People are busy with their daily activities.

Table 6.5  Survey Participants

Number of people participated: Male 112, Female 128
Age group of participants: Below 18: 3; 18–25: 103; 26–45: 91; Above 45: 34
Occupation of participants: Students: 77; Employees: 149; Retired: 3; Other: 9

6.4.1.2  Statistical Analysis of Traffic  Grouping the data was required for the analysis of results from the surveys. Areas in Kuwait should be ranked according to the number of people in that area and in


neighboring main routes. One approach was to group the data into two elements—congested and not congested. This method is inaccurate since it lacks detail on the levels of congestion. For each time period in both weekdays and weekends, results from the surveys were grouped using cluster analysis. Specifically, the method used was hierarchical cluster analysis in the IBM SPSS Statistics software, grouping the data as sources, destinations, and their routes. This was done to set the data in clusters that aid in visualizing and analyzing the level of congestion in each zone and road. Before beginning the cluster analysis, the number of clusters should be determined. Starting with two clusters—congested and not congested—a lot of detail is eliminated. Three clusters were therefore compared with four clusters. Figure 6.6 shows the result during the weekdays from 6 to 8 a.m. The map on the left in Figure 6.6 represents three clusters, and the map on the right represents four clusters. Working with three clusters, zones that are congested are shown to have low congestion, such as zone 33 (Kuwait City). With four clusters, zone 33 has shifted from having low congestion to high congestion. Therefore, more detail is added, and crowded zones appear with four clusters.

Figure 6.6  Comparison between three and four clusters (two congestion maps of Kuwait’s 48 zones; left legend: high, medium, low congestion; right legend: very high, high, medium, low congestion).

In addition, a mathematical approach is used. To divide the data into clusters, the optimal number of clusters k is determined using Equation 6.5 [14]:

$$K = \sqrt{\frac{n}{2}} \quad (6.5)$$
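The grouping step can be sketched as follows: the rule of thumb from Equation 6.5, followed by an agglomerative (hierarchical) clustering of per-zone trip counts. The sketch below uses SciPy rather than SPSS, and the trip counts are made up for illustration.

```python
import math
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

n_zones = 48
k = round(math.sqrt(n_zones / 2))              # Equation 6.5 gives ~4.9, so four or five clusters
trip_counts = np.random.default_rng(0).integers(0, 60, size=(n_zones, 1))  # hypothetical counts
tree = linkage(trip_counts, method="ward")     # hierarchical (agglomerative) clustering
labels = fcluster(tree, t=k, criterion="maxclust")  # congestion level per zone, 1..k
print(k, labels[:10])
```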

where n is the number of data points. When Kuwait was divided into 48 zones, the number of clusters k was 4.8. From the choice of four or five groups, four clusters were chosen: as seen in Figure 6.7, with five groups little detail was added—zone 30 (Jabriya) changed colors but is still considered medium congested. With four groups chosen, Figure 6.8 shows the results of the weekdays with four clusters.

Figure 6.7  Comparison between four and five clusters (two congestion maps of Kuwait’s 48 zones; left legend: very high, high, medium, low congestion; right legend adds very low congestion).

Figure 6.8  Weekdays, 6–8 a.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

The three most congested routes in this time period are Fahaheel Motorway (30) with a 20% congestion level compared with all road

activities, the 4th Ring Road with 13.61% congestion, and finally King Faisal Motorway—Riyadh (50)—with 11.52% of the crowding level:

1. The zones ranked high in congestion are as follows:
   a. Residential areas: South Surra (44) and Qortuba (37). During this time period, occupants leave their houses to go to work or to drop their children to school in the same or neighboring zones.


   b. Major organizations: Subhan (45). This location has major companies, including oil companies and, mostly, Kuwait-based factories. As mentioned, King Faisal Motorway (50), passing by the very highly congested zones, is busy.
2. The zone ranked high in congestion is Kuwait City (33), which is a destination point for multiple ministries such as the Ministry of Higher Education and the Ministry of Social Affairs and Labor. Also, the routes leading to this area, Fahaheel Motorway (30) and King Faisal Motorway (50), are crowded, as shown in Figure 6.8.
3. Hawalli (29), Jabriya (30), Bayan and Mishref (24), and Rumaithiya and Salwa (39) contain both public and private schools and universities; therefore, they are ranked with medium congestion during 6–8 a.m. Zones (4) and (15), containing residential areas like Mubarak Al-Kabeer, Abu Halifa, and Al-Qusoor, are also medium congested.

The congested routes represented in Figure 6.9 are again Fahaheel Motorway (30) with 15.38% congestion, King Faisal Motorway (50) with 26.92%, and the 4th Ring Road with 11.54% congestion. During this time period, the number of people on Fahaheel Motorway and the 4th Ring Road has decreased; however, the congestion level of King Faisal Motorway, passing by a very highly congested zone and leading to a highly congested zone, has increased by 15.4%:

1. The zone ranked very high in congestion is South Surra (44); besides being a residential area, this area also includes ministries such as the Ministry of Electricity and Water and the Public Authority for Civil Information. Clients come to this area for their paperwork.
2. Kuwait City (33) is again ranked high in congestion. Routes (30) and (50) leading to this area are also busy.

Figure 6.10 shows that the most congested routes during weekdays from 12 till 4 p.m. are Fahaheel Motorway (30) with the highest percentage of 15.72%, King Faisal Motorway (50) with

Figure 6.9  Weekdays, 8 a.m.–12 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

13.84%, and King Fahad Motorway (40) with 10.69% of the total roads’ activity: 1. The zone ranked very high in congestion is only Qortuba (37). During this time period, occupants leave their work and start heading home. Others also may start leaving for work for their second shift.

Figure 6.10  Weekdays, 12–4 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

2. During this time period, Kuwait City (33) is classified as a highly congested zone. It is where employees are either leaving work or heading to work for another shift. In addition, Subhan (45), which is dedicated to employees working in the oil sector and factories, is also highly congested. 3. Hawalli (29) and Salwa and Rumaithiya (39) fall in the category of medium-congested areas mainly because students and teachers leave their schools. Figure 6.11 shows the routes during weekdays from 4 to 6 p.m. During this time period, residents are most likely at home. This explains why residential areas in the northern part of Kuwait are congested. Road

Figure 6.11  Weekdays, 4–6 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

30 has a congestion level of 31.88%, followed by the 5th Ring Road with 14.49%, and finally road 50 with only 9%. Congested routes represented in Figure 6.12 are the 4th and 5th Ring Roads, both with 16.5% congestion, and Fahaheel Motorway (30) with 14.56%. The reason behind the heavy congestion on the 4th and 5th Ring Roads is that they both lead to Salmiya, a very highly congested zone (40):

1. The busiest zone from 6 to 11 p.m. is Salmiya (40). During this time, people go to cafes, restaurants, and malls located in this area.
2. The zones with high congestion are Kuwait City (33) and Qortuba (37), which are consistent with the previous time periods.

Figure 6.12  Weekdays, 6–11 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

After the analysis of the weekdays, the results of the weekends are reviewed, as shown in Figure 6.13. Congested routes represented in Figure 6.13 are the 5th and the 6th Ring Roads with 33.33% each followed by Damascus Street and Fahaheel Motorway with 16.67% each: 1. The zones classified as highly congested are Salmiya (40), containing cafes, restaurants, and shopping malls, as well as Al-Rai (17), where The Avenues mall is located. 2. The zone ranked high in congestion is a residential zone (15), consisting of Al-Messila, Abu Al-Hassani, Mahboula, Fintas, Abu Halifa, Al-Mangaf, and Fahaheel, where several restaurants are located along the coastline.

Figure 6.13  Weekend, 6–8 a.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

Congested routes represented in Figure 6.14 are ranked starting with the 5th Ring Road with 30.30% congestion and the 4th Ring Road with 16.67%, both leading to very highly congested and highly congested zones, respectively. The third congested road is Fahaheel Motorway (30) with 15.15%: 1. The zones categorized with very high congestion are as follows: a. Salmiya (40), where cafes, restaurants, malls, and other activity centers are located. b. Qortuba (37), which is a residential area where people have family gatherings during the weekends.

Figure 6.14  Weekend, 8 a.m.–12 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).



c. South Surra (44), which is a residential area and also contains one of Kuwait’s main attraction points, which is 360 Mall. 2. The zone classed as highly congested is Al-Rai (17), where one of Kuwait’s main attraction points—The Avenues mall— is located. 3. Medium congested zones are (15) containing Al-Messila, Abu Al-Hassani, Mahboula, Fintas, Abu Halifa, Al-Mangaf, and Fahaheel. Also, Al-Sha’ab (20) includes restaurants, a theme park, and several complex buildings with stores, clinics, and beauty salons.

Figure 6.15  Weekend, 12–4 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

Figure 6.15 shows the traffic during weekends from 12 to 4 p.m. For this time period, the congestion level for both routes and zones is still consistent and similar to the previous period of 8 a.m.–12 p.m., except for Salmiya (40) and South Surra (44), which shifted from very highly congested zones to low-congested ones. The routes during weekends from 4 till 6 p.m. are shown in Figure 6.16. Congested routes represented in this time frame are the 5th Ring Road with 20.34% congestion compared with the total activity level of all roads, followed by the 6th Ring Road with 15.25% and Damascus Street with 13.56%.

Figure 6.16  Weekend, 4–6 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

1. The zones classified as highly congested are Salmiya (40), which is a main attraction point for several family activities, and South Surra (44), where 360 Mall is located. 2. The zone categorized as highly congested is (39) Rumaithiya and Salwa. These are residential areas located near the sea and are an intermediate path that people take to go to the beach. 3. The moderately busy zone is Al-Rai (17), where The Avenues mall is located.

Figure 6.17 shows the routes during weekends from 6 till 11 p.m. Congested routes represented in the figure are the same as the previous time period of 4–6 p.m. However, the congestion level increases

Figure 6.17  Weekend, 6–11 p.m. (congestion map of Kuwait’s zones; legend: very high, high, medium, and low congestion).

to 29.20% for the 5th Ring Road, decreases to 14.16% for the 6th Ring Road, and increases to 14.16% for Damascus Street. During this time period, the two most congested areas are Al-Rai (17) and South Surra (44), where The Avenues and 360 Mall are located, respectively.

6.4.2  Simulation and Improvement

With the complete data of Kuwait’s network, such as the distance from one zone to the other and the capacity of each segment of the road, collected, a simulation must be created to arrive at a plan to


handle the traffic: the shortest routes taken and the routes that can handle the largest number of cars. Furthermore, improvements can be suggested. The software used to model Kuwait’s network is TORA (TAHA—Operations Research: An Introduction), an operations research program developed by Dr. Hamdy Taha. The data are entered in several files, one each for distance and capacity, depending on both the number of lanes and the maximum speed. Figure 6.18 is a representation of the data entered in TORA, which corresponds to Kuwait’s road capacities.

6.4.2.1  Minimal Spanning Tree of Kuwait  The minimal spanning tree

algorithm has been used to link Kuwait’s network using the shortest roads. All the divided zones have been taken into consideration and linked together using the shortest distances. Each zone is linked to another zone separately. In other words, one highway is not taken as a whole to link all its neighboring areas; however, highways and ring roads are divided into segments. Figure 6.19 shows the shortest routes linking all the 48 zones of Kuwait. Figures 6.19 through 6.30 show, in detail, the minimal spanning tree of Kuwait. For Fahaheel Motorway (30), a segment of it, presented in gray in Figure 6.20, is taken as the shortest link between large zones that are in the south of Kuwait. The zones connected together with this shortest link are zone 34 (Mina Abdullah), zone 15 (Al-Messila + Abu Al-Hassani + Mahboula + Fintas + Abu Halifa + Al-Mangaf + Fahaheel), zone 4 (Adan + Qusoor + Qurain + Mubarak Al-Kabeer + Sabah Al-Salem + Egaila + Al-Rigga + Sabahiya), zone 39 (Rumaithiya + Salwa), and zone 24 (Bayan + Mishref). King Fahad Motorway (40) starts at Kuwait City and ends at the Kuwait/Saudi Arabia borders. For the areas in the south of Kuwait, Fahaheel Motorway (30), as shown in Figure 6.21, is the shortest route linking them. In the case of the areas up north closer to Kuwait City, King Fahad (40) is the shortest route. The zones linked by road 40 are Abdullah Al-Salem (2), Al-Mansouriyah and Dasma (14), Nuzha (35), Al-Daiya and Qadsiya (10), Al-Rawda (18), and Hawalli (29). The shortest road linking the residential areas Faiha (27), Nuzha (35), Al-Adailiya (7), Al-Rawda (18), Qortuba (37), Surra (47), and South Surra (44) is Damascus Street, shown in gray in Figure 6.22,


Figure 6.18  Capacity matrix in TORA.


Figure 6.19  Minimal spanning tree of Kuwait (network diagram linking all 48 zones with the shortest arcs).

rather than the roads King Faisal (50) and King Fahad (40) to the left and right of those areas. King Faisal Motorway (50) starts at the 1st Ring Road and ends at the 7th Ring Road. As shown in Figure 6.23 in gray, not the entire road is considered the shortest link between the areas to the right and left of it. It is taken in segments. For the first segment in the north, the road links the residential areas Kaifan (21), Abdullah Al-Salem (2), Faiha (27), and Al-Shamiya (31) closer to Kuwait City (33). For the second segment, it links only two zones, Al-Yarmouk (48) and Qortuba (37). And for the last segment, it links Al-Zahra (1), the Airport District, Al-Dajeej, Mubarak Al-Abdullah (5), and Subhan (45).
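The minimal spanning tree itself can be computed with a standard greedy method such as Kruskal’s algorithm; a minimal sketch is given below. The zones and distances are hypothetical placeholders (the chapter’s own computation was done in TORA).

```python
def kruskal(nodes, arcs):
    """arcs: list of (distance_km, zone_a, zone_b); returns the arcs of a minimum spanning tree."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for dist, a, b in sorted(arcs):         # consider arcs from shortest to longest
        ra, rb = find(a), find(b)
        if ra != rb:                        # the arc joins two separate components: keep it
            parent[ra] = rb
            tree.append((a, b, dist))
    return tree

zones = ["A", "B", "C", "D"]
arcs = [(5.0, "A", "B"), (4.0, "B", "C"), (3.0, "C", "D"), (9.0, "A", "D")]
print(kruskal(zones, arcs))  # three arcs linking all four zones with minimum total length
```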

Figure 6.20  Fahaheel Motorway in the minimal spanning tree (network diagram with the Fahaheel segment highlighted).

As highlighted in Figure 6.24, for Al-Ghazalli road (6), only one link is available between zones 11 (Al-Farwaniya) and 16 (Al-Rabiya + Al-Rahab + Ishbilya + Ardiya). Al-Jahra road (80) goes through all areas of Shuwaikh with the shortest distance as highlighted in gray in Figure 6.25. In Figure 6.26, the 2nd Ring Road is taken as a whole as the shortest road linking the zones above and below it. The 3rd Ring Road only links zone 10 (Al-Daiya + Qadsiya) to zones 20 (Al-Sha’ab) and 29 (Hawalli), and zone 31 (Kaifan) to zone 42 (Shuwaikh B [Industrial]), as shown in Figure 6.27.

Figure 6.21  King Fahad Motorway in the minimal spanning tree.

Figure 6.22  Damascus Street in the minimal spanning tree.

Figure 6.23  King Faisal Motorway in the minimal spanning tree.

Figure 6.24  Al-Ghazalli road in the minimal spanning tree.

Figure 6.25  Al-Jahra road in the minimal spanning tree.

Figure 6.26  The 2nd Ring Road in the minimal spanning tree.

For the zones above the 3rd Ring Road, the 2nd Ring Road is considered the shortest road. Damascus Street is the shortest road linking the zones below the 3rd Ring Road. Figure 6.28 shows that one segment of the 4th Ring Road is considered the shortest link between zones 42 (Shuwaikh B [Industrial]), 32 (Khaldiya), 48 (Yarmouk), and 17 (Al-Rai). Figure 6.29 shows the 5th Ring Road. A large segment of the 5th Ring Road is considered in Kuwait’s minimal spanning tree, except

Figure 6.27  The 3rd Ring Road in the minimal spanning tree.

Figure 6.28  The 4th Ring Road in the minimal spanning tree.

for the link between Qortuba (37) and South Surra (44) because Damascus Street is a shorter road between these two areas. Figure 6.30 shows that the shortest link in the 6th Ring Road is between zones 24 (Bayan + Mishref), 39 (Salwa + Rumaithiya), 15 (Al-Messila + Abu Al-Hassani + Mahboula + Fintas + Abu Halifa + Al-Mangaf + Fahaheel), and 4 (Adan + Qusoor + Qurain + Mubarak Al-Kabeer + Sabah Al-Salem + Egaila + Al-Rigga + Sabahiya). As shown in Figure 6.31, the 7th Ring Road starts at zone 4 (Adan +  Qusoor + Qurain + Mubarak Al-Kabeer + Sabah Al-Salem + Egaila + Al-Rigga + Sabahiya) and extends to the west of Kuwait, and it is the shortest route for these areas.

Figure 6.29  The 5th Ring Road in the minimal spanning tree.

Figure 6.30  The 6th Ring Road in the minimal spanning tree.

Figure 6.31  The 7th Ring Road in the minimal spanning tree.

The following description links the minimal spanning tree of Kuwait with the busy roads of the survey data. During the two periods of 6–8 a.m. and 8 a.m.–12 p.m., the congested routes—Fahaheel Motorway (30), King Faisal Motorway (50), and the 4th Ring Road—remain the same. According to the minimal spanning tree shown in Figure 6.32, the three busy roads in those time periods that are also the shortest links between each zone in Kuwait are as follows: 1. Fahaheel Motorway (30) linking zones Rumaithiya and Salwa (39), where many private schools are located; Bayan and Mishref (24), where private universities and colleges are located; Adan, Qusoor, Qurain, Mubarak Al-Kabeer, Sabah Al-Salem, Egaila, Al-Rigga, and Sabahiya (4), which are large residential areas with private schools and universities. Also, people in those areas use the road (30) to go to their work up north. Finally, Al-Messila, Abu Al-Hassani, Mahboula, Fintas, Abu Halifa, Al-Mangaf, and Fahaheel (15), where restaurants and cafes are located by the sea. 2. King Faisal motorway—road 50—linking Subhan (45), the airport, Abdullah Al-Mubarak, and Jleeb Shuyookh (5) to Al-Zahra (1) and linking the zones close to Kuwait City (33) is congested because people tend to take the shortest route.

Figure 6.32  Comparison of shortest routes to busy routes from 6 a.m. to 12 p.m. (two zone maps of Kuwait).

6.4.2.2 Scenarios and Improvements  The algorithm applied in this

section for the network model of Kuwait is the shortest route method to

1. Find the shortest route in terms of distance (km) from and to every zone
2. Find the routes with the highest capacities that link one zone to the other

A sketch of both computations on a small illustrative network follows this list.
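The two computations can be illustrated on a toy network: a Dijkstra-style search over arc distances for item 1, and a “widest path” (maximum bottleneck capacity) variant for item 2. The zone labels, distances, and capacities below are hypothetical.

```python
import heapq

def shortest_route(graph, source, target):
    """Minimum total distance (km); graph: node -> {neighbor: (distance_km, capacity)}."""
    dist = {n: float("inf") for n in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist[node]:
            continue
        for nxt, (length, _cap) in graph[node].items():
            if d + length < dist[nxt]:
                dist[nxt] = d + length
                heapq.heappush(heap, (dist[nxt], nxt))
    return dist[target]

def widest_route(graph, source, target):
    """Maximum bottleneck capacity (cars/period) achievable along any route."""
    width = {n: 0.0 for n in graph}
    width[source] = float("inf")
    heap = [(-width[source], source)]
    while heap:
        w, node = heapq.heappop(heap)
        w = -w
        if node == target:
            return w
        for nxt, (_length, cap) in graph[node].items():
            bottleneck = min(w, cap)
            if bottleneck > width[nxt]:
                width[nxt] = bottleneck
                heapq.heappush(heap, (-bottleneck, nxt))
    return width[target]

# Hypothetical three-zone network: a direct arc and a two-arc detour
net = {
    "44": {"33": (6.8, 3300), "37": (3.0, 4100)},
    "37": {"33": (5.5, 4100)},
    "33": {},
}
print(shortest_route(net, "44", "33"), widest_route(net, "44", "33"))  # 6.8 km vs 4100 cars
```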

Doing so will aid in further analysis of all the areas and roads of Kuwait and their optimal linkages. The simulation results were compared with the cluster analysis results taken from the survey data. For each time period, the busiest zones and roads shown in the cluster analysis maps given in Section 6.4.1.2 have been taken in order to compare the optimal results with the real-life situation. From 6 to 8 a.m., the zone ranked very high in congestion is South Surra (44). It is a residential area and a destination point for

Figure 6.33  Congestion on weekdays—6–8 a.m. and 8 a.m.–12 p.m. (two congestion maps of Kuwait’s zones).

the employees and clients of the Ministry of Electricity and Water, the Public Authority for Civil Information, and other organizations such as the International Islamic Charitable Organization. During this time, Kuwait City (33) is considered to be a highly congested area. It is the target of many ministries such as the Ministry of Higher Education, banks such as the Industrial Bank of Kuwait, and many private companies and organizations. These two zones remain congested for the time period of 8 a.m.–12 p.m. As for the routes, the busiest highways leading to the most congested zones (44 and 33) during the two mentioned time periods are Fahaheel Motorway (30) and King Faisal Motorway (50). When the shortest route algorithm is applied to the network, the shortest route from zone 44 to zone 33 is King Faisal Motorway (50), with 6.83 km. The route with the highest capacity linking these two zones is King Fahad Motorway (40), as shown in Figure 6.33.

6.4.2.2.1 Scenario 1  As shown in Figure 6.33, King Fahad

Motorway (40) is not congested; therefore, people tend to take the shortest route from South Surra (44) to Kuwait City (33) via King Faisal Motorway (50). This has led to the case of increasing the capacity of King Faisal Motorway (50) since it is the shortest link and the frequently used road. To increase the capacity of road 50, a lane was added. The road originally consisted of three lanes and was increased to four. When the new capacity of the road was simulated, the road turned out to be

Figure 6.34  Optimal route (network diagram showing road 50 between zones 44 and 33).

Table 6.6  Capacity Increase of Road 50

THREE-LANE CAPACITY | FOUR-LANE CAPACITY | PERCENT INCREASE
3328 | 4156 | 19.92%

the optimal route linking zones 33 and 44 in terms of both distance and capacity, as shown in Figure 6.34. This improvement will also reduce the congestion level of residential areas such as Qortuba (37). Table 6.6 compares the capacity of the segment between zones 44 and 33 before and after the improvement.

6.4.2.2.2  Scenario 2  In case of a blockage, such as that caused by an accident, where road 50 cannot handle any more cars, route 40 would be the best alternative in terms of capacity. This case is likely to happen since road 50 is busy during 6 a.m.–12 p.m., as shown in Figure 6.35.

6.4.2.2.3  Scenario 3  Another scenario considers Salmiya as a destination point. On weekdays, from 6 to 11 p.m., Salmiya is highly congested; therefore, the routes leading to it are also busy. According

Figure 6.35  Congested route and alternative route (two network diagrams showing roads 50 and 40).

to the results of the surveys, the two most congested routes leading to Salmiya are the 4th and 5th Ring Roads. A highly congested residential area between the 4th and 5th Ring Roads—Qortuba (37)—was taken, and the shortest route algorithm was applied to determine the road with the shortest distance and the highest capacity to reach this destination. As represented in Figure 6.36, the optimal route from zone (37) to zone (40) is the 5th Ring Road in terms of the highest capacity, and both the 4th and 5th Ring Roads in terms of the shortest distance of 4.65 km. To reduce the congestion on the 5th Ring Road, a new lane was added to the 4th Ring Road, increasing its capacity. In this way, the congestion would be distributed between the 4th and the 5th Ring Roads. In the simulation, it is found that the optimal route from Qortuba (37) to Salmiya (40) is then the 4th Ring Road, as shown in Figure 6.37. Table 6.7 shows the increase in capacity after the improvement.

Figure 6.36  Roads with the shortest distance and the highest capacity (two network diagrams showing the 4th and 5th Ring Roads between zones 37 and 40).

Figure 6.37  Optimal route from (37) to (40) (network diagram).

Table 6.7  Capacity Increase of 4th Ring Road

THREE-LANE CAPACITY | FOUR-LANE CAPACITY | PERCENT INCREASE
1295 | 1517 | 14.63%

The previous scenarios represented weekday behaviors. As for the weekend, different behaviors were noticed. As shown in Figure 6.38, Salmiya (40), South Surra (44), and Qortuba (37) were considered very highly congested areas during the 8 a.m.–12 p.m. time period. Salmiya is a popular destination point, which includes shopping malls, restaurants, cafes, and other entertainment destinations. Qortuba and South Surra are residential areas where family gatherings occur, especially during the weekend. Al-Rai (17) is another destination for those going for shopping, automobile companies, and car garages. One of Kuwait’s most famous attraction points, The Avenues mall, is located in this zone. According to the map in Figure 6.38, The Avenues is considered a highly congested area. The two roads leading to Salmiya (40) and Al-Rai (17) are the 4th and 5th Ring Roads. As represented in Figure 6.38, these two roads are highlighted, meaning that they are busy during this time period.

Figure 6.38  Destination points Al-Rai (17) and Salmiya (40) (congestion map of Kuwait’s zones).

Simulation was done to find the best route leading to Al-Rai (17). According to capacity, the optimal route leading to zone 17 was the 5th Ring Road, and according to distance, the 4th Ring Road is the shorter route to zone 17, at 3.7 km. In order to improve this situation and reduce the overcrowding of the 5th Ring Road, an increase in the 4th Ring Road capacity is needed. The maximum speed was increased to 100 km/h instead of 80 km/h. After simulating the new scenario, the optimal road leading to zone 17 is still the 5th Ring Road in terms of capacity. However, as shown previously in scenario 3, the best route leading to Salmiya (40) is the 4th Ring Road after the improvement, in terms of both distance and capacity. Table 6.8 shows the capacity changes between Qortuba (37) and Salmiya (40), whereas Table 6.9 shows the capacity changes between Qortuba (37) and Al-Rai (17).

Table 6.8  Capacity Increase of the 4th Ring Road Leading to Salmiya (40)

MAXIMUM SPEED 80 KM/H | MAXIMUM SPEED 100 KM/H | PERCENT INCREASE
1262 | 1323 | 4.61%


Table 6.9  Capacity Increase of the 4th Ring Road Leading to Al-Rai (17)

MAXIMUM SPEED 80 KM/H | MAXIMUM SPEED 100 KM/H | PERCENT INCREASE
849 | 883 | 3.85%

6.5 Conclusion

With the increase in population and immigration, the traffic fluctuates and only becomes worse. The topic presented in this chapter was studied by monitoring the behavior of people to understand the trends on Kuwait’s roads. Studying the existing flow of traffic along with the current road and zone division was fundamental. Surveys were the only source for gathering data, as the Ministry of Interior and the telecommunication companies did not provide any information on people’s sources, destinations, and frequently used roads. Two types of surveys were developed: paper-based surveys and online surveys. Online surveys had a faster response, making it easier to distribute and gather data. From the surveys, time periods were developed. Specifically, five time periods divided the day; the time periods applied to both weekdays and weekends. Constructing matrices, including the zones along with the routes, was the next step after the time period division. The matrices were different for every time period of the day and for both weekends and weekdays. They were the tools used to develop the cluster analysis maps. Those maps were developed using a statistical technique called hierarchical cluster analysis, which grouped the levels of congestion in a hierarchical form, ranking from the highest to the lowest. Finally, a representation of the road congestion was viewed through the evolution of these maps, in the manner of a weather map. Simulation helped in visualizing the existing road conditions and therefore contributed to the development of new scenarios. At first, the existing flow of traffic was simulated, and then the capacity of the roads and the distance of each road were simulated to detect the optimal route from one place to another. Moreover, speed limits were altered to reach the ideal road capacity and ultimately reduce traffic. Throughout the procedure, peak times were always considered regarding the congestion of roads. Thus, from the simulation of different situations, several improvements and alternatives were suggested.


Future studies on this topic could treat it in more detail. Since this chapter covers Kuwait’s main roads and highways, the inner roads of the zones were not taken into consideration. The addition of these inner roads, roundabouts, and intersections would be an interesting extension of the study. Also, attention should be paid to traffic lights for a more precise simulation of Kuwait’s existing roads, as they also cause delays. This would result in a microscopic view of the traffic in Kuwait.

Appendix 6.A:  Surveys

6.A.1  Paper-Based Survey

6.A.1.1  Analyzing Kuwait’s Road Traffic

This survey is part of a graduation project for industrial engineering students. We are aiming to collect data concerning the traffic load on Kuwait’s roads and highways.

6.A.1.1.1 Weekdays

Gender
• Male  • Female

Age
• Less than 18  • 18–25  • 26–45  • 46 and above

Occupation
• Student  • Employee  • Retired  • Other (please specify) ___________________

Work/Class Destination (If you drop your kids to schools, please mention the areas in order) From: _____________________ From: _____________________ From: _____________________

To: _____________________ To: _____________________ To: _____________________


Going to work or class, what route(s) do you take? (If you take more than one road, please number them by their order.)

•  1st Ring Road •  2nd Ring Road •  3rd Ring Road •  4th Ring Road •  5th Ring Road •  6th Ring Road

•  7th Ring Road

•  Damascus street •  Fahaheel Motorway •  Al-Ghazally Street (60) •  King Fahad Bin Abdulaziz (40) •  Maghreb Street • King Faisal Motorway—Airport Road (50) •  Other (please specify) ___________________

Leaving work or class, what route(s) do you take? (If you take more than one road, please number them by their order.)

•  1st Ring Road •  2nd Ring Road •  3rd Ring Road •  4th Ring Road •  5th Ring Road •  6th Ring Road

•  7th Ring Road

•  Damascus street •  Fahaheel Motorway •  Al-Ghazally Street (60) •  King Fahad Bin Abdulaziz (40) •  Maghreb Street • King Faisal Motorway—Airport Road (50) •  Other (please specify) ___________________

Time leaving the house _______ Time of arrival to your work _______
Time leaving work _______ Time of arriving home _______

6.A.1.1.2 Weekends

What is your preferred time for going out during the weekend? • Morning • Afternoon • Evening

What is your preferred destination during the weekend? From: ____________________ To: ____________________


From home to the desired destination, what route(s) do you take? (If you take more than one road, please number them in their order.)

•  1st Ring Road •  2nd Ring Road •  3rd Ring Road •  4th Ring Road •  5th Ring Road •  6th Ring Road

•  7th Ring Road

•  Damascus street •  Fahaheel Motorway •  Al-Ghazally Street (60) •  King Fahad Bin Abdulaziz (40) •  Maghreb Street • King Faisal Motorway—Airport Road (50) •  Other (please specify) ___________________

What route(s) do you take going back home? (If you take more than one road, please number them in their order.)

•  1st Ring Road •  2nd Ring Road •  3rd Ring Road •  4th Ring Road •  5th Ring Road •  6th Ring Road

•  7th Ring Road

•  Damascus street •  Fahaheel Motorway •  Al-Ghazally Street (60) •  King Fahad Bin Abdulaziz (40) •  Maghreb Street • King Faisal Motorway—Airport Road (50) •  Other (please specify) ___________________

Time leaving the house __________ Time of arrival to destination _____ Time leaving the destination _____ Time of arriving home ___________ 6.A.1.1.3  After Work

During weekdays, do you go to specific places regularly? If yes, please fill this page. Kindly write down your destination below:

From: ____________________ To: ____________________ What road(s) do you take going to your desired destination? (If you take more than one road, please number them by their order.)


•  1st Ring Road •  2nd Ring Road •  3rd Ring Road •  4th Ring Road •  5th Ring Road •  6th Ring Road

•  7th Ring Road

•  Damascus street •  Fahaheel Motorway •  Al-Ghazally Street (60) •  King Fahad Bin Abdulaziz (40) •  Maghreb Street • King Faisal Motorway—Airport Road (50) •  Other (please specify) ___________________

What road(s) do you take going back home? (If you take more than one road, please number them by their order.)

•  1st Ring Road •  2nd Ring Road •  3rd Ring Road •  4th Ring Road •  5th Ring Road •  6th Ring Road

•  7th Ring Road

•  Damascus street •  Fahaheel Motorway •  Al-Ghazally Street (60) •  King Fahad Bin Abdulaziz (40) •  Maghreb Street • King Faisal Motorway—Airport Road (50) •  Other (please specify) ___________________

Time leaving the house __________ Time of arrival to destination _____ Time leaving the destination _____ Time of arriving home ___________ Thank you for your time. ☺


6.A.2  Online-Based Survey

References

1. Kuwait Government Online. (2011). Population of Kuwait. Retrieved November 25, 2012, from http://www.e.gov.kw.
2. Ministry of Interior. (2007). Al-Mururiya magazine, issue No. 9. Retrieved November 30, 2012, from http://www.moi.gov.kw/portal/vArabic/storage/other/mjm9.pdf.
3. N.A. (2008). Sample survey questions, answers and tips. Retrieved February 28, 2013, from http://www.constantcontact.com/aka/docs/pdf/survey_sample_qa_tips.pdf.
4. Taha, H. (2010). Operations Research: An Introduction. 9th edn. Prentice Hall, NJ: Pearson Education.
5. Bosch, R. and Trick, M. (2005). Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques. Oberlin, OH: Springer.
6. Moradkhan, M.D. (2010). Multi-criterion optimization in minimum spanning trees. Studia Informatica Universalis, 8(2): 185–208.


7. Li, D.-L., Li, R.-W., Li, Y.-H., and Zhang, P.-J. (2009). Improved spanning tree-based genetic algorithm and its application in cost optimization of logistics dispatching system. Mathematics in Practice and Theory (21): 38–44.
8. Dijkstra, E.W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1(1): 269–271.
9. Teodorovic, D. (1986). Transportation Networks. New York: Gordon & Breach Science Publishers.
10. Chvatal, V. (1983). Linear Programming. New York: W.H. Freeman.
11. Godden, B. (2004). Sample Size Formula. Chicago, IL: Marketing Research Association.
12. Nicholas, J.G. and Lester, A.H. (2002). Traffic and Highway Engineering. Bill Stenquist, Books/Cole Thomson Learning.
13. Google. (2012). Google Maps. Retrieved December 9, 2012, from https://www.google.com.kw/maps/@29.1924637,47.7780801,10z?hl=en.
14. Mardia, K., Kent, J., and Bibby, J. (1980). Multivariate Analysis (Probability and Mathematical Statistics). London, U.K.: Academic Press Inc.

7  Modeling, Simulation, and Analysis of Production Lines in Kuwait’s Petroleum Sector

SEIFEDINE KADRY, RAWAN JARAGH, REEM AL-MADY, SHAHAD SHEER, AND SHAIKHA AL-DABBOUS

Contents

7.1 Introduction
7.2 Methodologies and Approaches
7.2.1 Heuristic Method
7.2.2 Mathematical Programming
7.2.3 Computer Simulation
7.2.4 Application of Arena
7.3 Process Description
7.4 Arena Simulation
7.4.1 Step-By-Step Process Identification
7.5 Output Analysis
7.5.1 Number In/Number Out
7.5.2 Parameter: Duration
7.5.3 Parameter: Employee Utilization
7.5.4 Usage
7.6 Summary and Concluding Remarks
References

7.1 Introduction

A production line is a repetitive manufacturing process in which the product passes through the same sequence of operations. Gaining a thorough understanding of a production line grants you the ease of tackling rising issues. Hence, industrial engineers seek to improve


operation systems wherever they are placed. They target clear objectives throughout the stages of production. This chapter conveys the analysis and examination of production lines to find emerging issues and ways to improve them.

7.2  Methodologies and Approaches

Throughout this research, data were collected from the oil sector in Kuwait. The data were used to discover if any enhancements can be made to improve the production rate of the company. Different methods were found to approach the project. 7.2.1  Heuristic Method

According to Groner et al. (1983), a heuristic method is a classical problem-solving method that has been used by many to speed up the process by retracing steps to prove the outcome. It starts off with an analysis and a concrete assumption as a mean for proving if the expected consequences are in fact true or not. This method relies strongly on trial-and-error and up-to-date mathematical results for gaining a step-by-step path toward anticipated results. This method (Cortes et al., 2009) was used in a case study aimed to solve the assembly line-balancing problem of a motorcycle manufacturing company. The method worked well; however, it did not provide accurate results. Therefore, this method will be eliminated from the options at hand. 7.2.2  Mathematical Programming

Mathematical (linear) programming is one of the most popular methods for modeling with the purpose of increasing one’s profitability. It easily provides accurate results when used with simple production lines containing a single queue and a few stations. Saad et al. (2009) used such method as a tool for crude oil scheduling. Although this method may provide concrete facts, it requires a higher-level degree. The reason for the dismissal of this method is the current lack of availability of mathematical software and tools. Moreover, although

P R O D U c TI O N LINE S SImUL ATI O N

17 7

this method may provide accurate results, it would turn complicated with complex systems. 7.2.3  Computer Simulation

Kleijenen (2008) refers to computer simulation as a trial-and-error approach where systems can be modeled with regard to statistical methods. When a set of input data is ready for evaluation, placing it in a simulation software provides a simulated model where changes can be applied without the need of changing the actual environment. This methodology was found most suitable for this project since it does not require a large financial investment and does not consume extreme periods of time. According to Carson and Maria (1997), using a simulation approach, one can manipulate the different parameters to compare between scenarios in order to seek the most suitable scenario. If optimized results were found, the improved model may be suggested to the company where they can safely implement the changes, considering that such suggestions were backed up with evidence. This research will use the computer simulation software, Arena®. The software is user friendly and provides thorough statistical results, and the group members are familiar with the software. 7.2.4  Application of Arena

Prior to designing the project’s simulation model, one needs to research the topic to gain insight on how simulation using Arena was implemented previously. Cortes et al. (2009) simulated the assembly line of a motorcycle manufacturing company in order to solve the line-balancing problem. Several scenarios were developed using Arena as a solution to the issue. They modeled the company’s current assembly line and improved it using a couple of different approaches. Both approaches chosen proved to resolve the problem; however, they chose the scenario that resulted in accurate balancing of the production line with an increase in productivity. According to Hecker et al. (2010), when planning to simulate a real-life production line, it is preferable to follow the 40-20-40 rule to ensure optimization. This rule states that 40% of the time should

178

SEIF ED INE K A D RY E T A L .

be dedicated to the gathering of data. The 20% that follows is for the designing and simulation of the model. The remaining 40% is assigned for the verification of the model as well as the validation. During that 40% of the time, one would start enhancing the process in order to modify and implement the changes. The aim of the journal was to present existing bakeries with an opportunity to improve their production plan in accordance to machine utilization and energy consumption. Arena was used to simulate an existing scenario; such scenario was further altered to reach beneficial results. They were able to reduce the energy consumed by three machines and found a way to save 32% of the salaries given out by decreasing the total shift time. This study is the first in the domain of petroleum sector. Figure 7.1, in the following page, depicts a flowchart that examines the framework in the simulation procedure. The process begins with formulating a problem and ends with documenting the findings as well as the implementation. 7.3  Process Description

Ideally, one would seek to influence the production rate, a key project parameter. The following parameters have been set as a guideline when visiting a company: • • • • •

Employee utilization Input and output Waiting time Optimizing process flow Cost

One parameter will be changed, while the rest remain controlled. The influence of the chosen parameter on the production rate would be noted. When a customer request arrives at the refinery, the company receives a customized blend request form matching the customer’s specification. The process is performed by the controller unit, with the capacity of 4 employees in each 1 of the 10 units. Once the blending process has been completed, a sample is to be taken to the laboratories for testing to see whether the blend complies with the customer’s request. If it does not, then the mixture goes through a process known as intermediate blending. This cycle continues until the lab test proves

P R O D U c TI O N LINE S SImUL ATI O N

Formulate the problem

Collect data and develop a model

Computerize the model

Verified

No

Yes

Validated

No

Yes

Design the experiment

Perform simulation runs

Analyze output data

Simulation complete?

No

Yes

Document and implement runs

Figure 7.1  Simulation framework.

179

18 0

SEIF ED INE K A D RY E T A L .

a match. They, then, contact a third-party inspector chosen by the customer to validate the match. The following step would be the loading of the oil mixture in tanks, waiting for the arrival of the customer’s vessel. The area known as the industrial island holds up to two ships. Once the vessel is ready, the filling process takes place. The mixture travels through three pipelines filling only 1 ft of the vessel, or about 10%. Now, a sample is taken to the lab to check whether the mixture has been affected. This effect is usually due to the vessel being corroded and, thus, requires that the customer cleans the vessel. Once the vessel is cleaned, the filling process would resume once more. Yet, if the mixture remains unaffected, the lab would send an approval to load the full vessel. When the loading is completed, another sample is taken before sealing the vessel. If the lab states that the mixture is altered, the process must start over again. One must note that this rarely occurs. In most cases, the lab tests are positive and a quality certificate, also known as bill of lading, is issued. When the customer receives the quality certificate, the products become ready for shipment; and the customer has a right to do whatever he or she wants with the product. This process is illustrated in Figure 7.2. 7.4  Arena Simulation

The refinery’s simulated work flow can be seen in Figure 7.3. 7.4.1  Step-By-Step Process Identification

181

P R O D U c TI O N LINE S SImUL ATI O N

Arrival

Blending

Load in tanks

Load in vessel (1 ft)

Inspection

Lab testing (1 ft)

Yes Lab testing

Give draft certificate?

Approved?

No

Yes

Intermediate blending

Load full vessel

No

Clean vessel

Lab test (full vessel)

Give quality certificate?

Yes

Ship product

No

Figure 7.2  Refinery process flowchart.

The first process is the arrival of the customer request. The time to process this request is 1 day. The company stated that currently an average of 300 vessels is their output per year.

SEIF ED INE K A D RY E T A L .

Figure 7.3  Arena simulation.

18 2

P R O D U c TI O N LINE S SImUL ATI O N

18 3

The company can process a maximum of three customer blends at a time; any blend coming while the three blends are being processed must be held in a queue.

The blending process always takes 3  days to be completed. The process is performed by the controller unit, with the capacity of 4 employees in each 1 of the 10 units.

18 4

SEIF ED INE K A D RY E T A L .

After the blending process, the employees need to take a spec to the lab for testing. There are four types of labs: gas lab, oil lab, certificate lab, and water and analytical lab. Each lab has 20 employees whose process takes between half an hour and 8 h to be completed.

The decide process is a two-way process. There is a 95% chance of giving the spec a draft certificate and a 5% chance of rejecting the spec. The reason the lab rejects the spec is that the blend did not match the required standards.

P R O D U c TI O N LINE S SImUL ATI O N

18 5

When a spec is disapproved, it goes to the intermediate blending to change the blend to meet the specifications. This process is a delay process, because the blend takes additional time to be processed.

The third-party inspector comes to confirm that the spec matches the customer’s requirements. He or she takes an average of 5–10 min to complete inspection (assuming all mixture must pass inspection).

18 6

SEIF ED INE K A D RY E T A L .

The company has 50 holding/storage tanks.

Loading in all tanks is an automated process that takes 1–3 days, most likely 2 days.

P R O D U c TI O N LINE S SImUL ATI O N

18 7

The industrial island, where the blends are loaded on to vessels, can hold two vessels at one time.

For loading the vessel, the company first has to load for 1 ft, roughly 10% of the vessel’s volume. This process takes about half a day.

18 8

SEIF ED INE K A D RY E T A L .

A lab test is, now, required to check if the blend has been altered. This shows that the vessel is suitable for full loading. Ninety-nine percent of the time, the spec is approved.

The reason for disapproval is due to the interior of vessel being corroded. The customer, then, must clean the vessel. This will delay the process for about 1–3 days.

P R O D U c TI O N LINE S SImUL ATI O N

18 9

If the approval is issued, the pipe will fill the vessel to its maximum capacity, taking 1 day to be completed.

A final lab test will be done for the full vessel. One percent of the time, the quality certificate is not given. The product must be sent to blending to repeat the whole process once more. Arena counts the rework as an output.

When a quality certificate is finally issued, the customer receives full responsibility of the product. 7.5  Output Analysis

Once one has simulated this complex production line, one is given results based on that simulation. The first step of analysis was the

19 0

SEIF ED INE K A D RY E T A L .

Table 7.1  Processing Time in Each Process ENTITY Blending Inspection Lab testing Load in tanks Load in vessel for 1 ft Load full vessel

VALUE-ADDED TIME (DAYS)

WAITING TIME (DAYS)

TOTAL TIME (DAYS)

3.0000 0.00513163 0.1716 2.0038 0.5000

0.6313 0.00001087 0.000 0.1427 0.2643

3.6313 0.00514250 0.1716 2.1465 0.7643

1.0001

0.2102

1.2102

verification and validation of the model itself. The model was found compatible with the real-life process with an average output of 300 vessels for 1 year. Upon analyzing Arena’s results, one can see several parameters being influenced by the process components such as time, capacity, and utilization. In Table 7.1, one can note the total time spent in each process. From this, one can see how long the product remains in each entity and how long it waits to be processed. The process with the highest values in both is blending. This is accurate due to the fact that the blending process is the core step in formulating the product to suit customers’ specifications. The entities that follow revolve around the blend of the product. The value-added time cannot be reduced unless the process itself is changed. That is because within the petroleum industry, processes require a standard amount of time to be executed. Moreover, the reason behind the waiting time for blending is due to company possessing three pipelines, where only three customer requests can be processed at one time. The results also show that the inspection process requires the least amount of time. That can be verified because the inspector does not add to the product but simply checks if the blend specs match the customer’s request. Figure 7.4 emphasizes the utilization of the company employees compared to the inspector’s in relevance to the company’s production line. Loading 1 ft of the vessel has a value-added time of 0.500 days, while loading full vessel requires 1.0001 day. This can be justified because within the loading of the 1 ft process, an employee takes a sample from the vessel and sends it for lab testing.

191

P R O D U c TI O N LINE S SImUL ATI O N

0.7 0.6

Value

0.5 0.4 0.3 Automated

0.2

Automated 2 Control units 1–10

0.1

Inspector

0

Figure 7.4  Utilization of resources.

7.5.1  Number In/Number Out

After simulating the process for 1 year, the numbers in and out are shown in Table 7.2. The average number of output is 300 vessels. However, the output of each process is greater due to necessary rework. For example, three blends were found to be altered after loading 1 ft of the vessel, considered by Arena as additional input. Figure 7.5 emphasizes that lab testing has the highest number out with an output of 316 vessels. This means that an average of 16 samples was rejected during the simulated year. This is agreeable because it accounts for a tolerable 5.33% of total output. Table 7.2  Number of In and Out after One Year ENTITY Average output Blending Inspection Lab testing Load in tanks Load in vessel for 1 ft Load full vessel

NUMBER IN/NUMBER OUT (VESSELS) 300 301 301 316 301 303 301

19 2

SEIF ED INE K A D RY E T A L .

320 315

Vessels

310 305

Blending Inspection

300

Lab testing Load full vessel

295

Load in tanks Load in vessel for 1 ft

290

Figure 7.5  Number in/number out.

7.5.2  Parameter: Duration

The initial model focused on the duration of 1 year.* The company states that 300 vessels are their average output; this complies with the result of the simulated model. Though 300 vessels are the current average output, the simulations prove that the production line can withstand the maximum output number of 384 vessels. Hence, the company can further utilize their resources, if needed, to increase their production rate and therefore increase their profitability. In reference to Table 7.3, one can see the output generated at a given amount of time. This is important for future knowledge in terms of profit. Given the number of vessels produced in a span of 10  years, the company can multiply the number of vessels by $X, which is the future price of an oil barrel. Thus, they can calculate their expected profit within 10 years. This can also be useful to know how much the company will lose if the refinery were to shut down for a week. As of April 20, 2013, the price of an oil barrel (Kuwait News Agency, 2013) was $96.90. The company claims a vessel holds an * The oil sector’s year starts in April and ends in March.

P R O D U c TI O N LINE S SImUL ATI O N

19 3

Table 7.3  Generated Output DURATION (DAYS)

NUMBER OUT

7 30 180 365 3650

1 vessel 20 vessels 176 vessels 300 vessels 3569 vessels

average of 20,000–60,000 barrels. Knowing this, one can deduce the following: A refinery shut down for a week will result in an estimated loss of

$96.90 × 20,000 = $1,938,000



$96.90 × 60,000 = $5,814,000

The price of a vessel ranges from $1,938,000 to $5,814,000. The annual profit based on an output of 300 vessels ranges from $581,400,000 to $1,744,200,000. 7.5.3  Parameter: Employee Utilization

Human capital is a vital component within the oil sector. Employees formulate decisions and operate the production line. An employee in the control room has an average salary range of 2400–3000 KD. Ideally, a reduction in the number of employees would reduce labor cost. The control room in the refinery has 10 units, each with 4 employees. Utilization of employees reflects on how efficiently a company utilizes its personnel. The allowed range of utilization is within 40% and 90%. As the percentage of utilization increases, this resource would be utilized more efficiently. Currently, the four employees of a unit have a utilization percentage of 61.85% as shown in Table 7.4. Though it is within the range, reducing each unit by one employee can save the company an average of 27,000 KD and increases the utilization of the employees to 83.01%, still within the acceptable range. A unit of two employees exceeds the acceptable limit reaching an unhealthy 99.95%.

19 4

SEIF ED INE K A D RY E T A L .

Table 7.4  Utilization Percentage of Employees NUMBER OF EMPLOYEES 2 3 4 5 6

UTILIZATION (%)

NUMBER BUSY

99.95 83.01 61.85 49.81 41.51

1.99 2.49 2.47 2.49 2.49

120.00% Utilization percentage

100.00% 80.00% 60.00% 40.00% 20.00% 0.00%

2

3

4 Number of employees

5

6

Figure 7.6  Employee utilization.

Though the idea of reduction in the number of employees is desirable, changing this parameter would increase the waiting time in the blending process from 0.6313 to 1.5578 days. Referring to Figure 7.6, one can notice that as the number of employees increases, the utilization percentage decreases. 7.5.4 Usage

Table 7.5 shows the utilization of workers and automated systems. The most utilized entity is Automated 2 with 61.99%. This is veritable since it controls two processes: loading vessel for 1 ft and loading full vessel. Automated follows with 55.08% since it fills 50 tanks. Regarding the 10 control units, each has a utilization of 61.85%, explained previously. In reference to Figure 7.7, Automated 2 has an output value of 604 vessels. That is rational because it is in charge of loading 1 ft of the vessel as well as loading full vessels, summing up to 604. The remaining

19 5

P R O D U c TI O N LINE S SImUL ATI O N

Table 7.5  Utilization Percentage of Resources RESOURCE

SCHEDULED UTILIZATION

TOTAL NUMBER SEIZED

0.5508 0.6199 0.6185 0.00423183

301 604 301 301

Automated Automated 2 Controllers 1–10 Inspector 700 600

Vessels

500 400 300 200

Automated Automated 2

100 0

Control units 1–10 Inspector

Figure 7.7  Total number seized.

resources match the number in/number out, indicating that the output of each resource matches the output of the company. 7.6  Summary and Concluding Remarks

A simulated model of a refinery’s crude oil production line was provided to the company for future use. They can apply possible changes to analyze how they impact the real system instead of direct application. The oil industry in Kuwait has complex production lines, integrated to provide Kuwait with 95% of its income. Each of Kuwait’s petroleum companies is in charge of different operations within these production lines. The model, designed for the company, was verified and then validated with the actual system’s average output of 300 vessels. Through simulation, results indicate that the refinery can manage a maximum average output capacity of 384 vessels.

19 6

SEIF ED INE K A D RY E T A L .

Upon analyzing the statistical results of Arena, the current situation was vindicated. Every attribute was backed up with evidence that supports its case. Having the number of output per year provides knowledge of expected profit/loss. The simulation model will be contributed to the company to provide a basis for future modifications of their crude oil refining production line, where they can change processes without affecting the real system.

References

Carson, Y. and Maria, A. (1997). Simulation optimization: Methods and applications. Proceedings of the 1997 Winter Simulation Conference, Atlanta, GA, pp. 118–126. Cortes, P., Onieva, L., and Guadix, J. (2009). Optimizing and simulating assembly line balancing problem in a motorcycle manufacturing company: A case study. International Journal of Production Research, 48(12), 2840–2860. Groner, R., Groner, M., and Bischof, W. F. (eds.). (1983). Methods of Heuristics. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc. Publishers, pp. 79–88. Hecker, F., Hussein, W., and Becker, T. (2010). Analysis and optimization of bakery production line using arena. International Journal of Simulation Modelling, 9(4), 208–216. Kleijnen, J. P. C. (2008). Computer simulation: Practice & research [PowerPoint slides]. http://www.tilburguniversity.edu/webwijs/files/center/kleijnen/ TiLPS.pdf. Retrieved April 15, 2008. Kuwait News Agency. (2013). Kuwaiti crude price rises to USD 96.9 pb. KUNA. http://www.kuna.net.kw/. Retrieved April 20, 2013. Saad, S. M., Lau, K. H., and Omer, A. (2009). Design and analysis of oil production area—A simulation approach. Proceedings of the 23rd European Conference on Modeling and Simulation, Madrid, Spain, pp. 52–59.

8 S ImUL ATI ON

A NALYsIs OF I ZmIR ’ s M E TRO TR ANsp ORTATI ON SYsTEm AND

A DA L E T O N E R A N D G U L E R O Z T U R K Contents

8.1 Introduction 197 8.2 Characteristics of Metro Line and Operations Management 198 8.3 Literature Survey 198 8.4 Input Analysis 200 8.5 Simulation Model 202 8.6 Verification and Validation 203 8.7 Output Analysis 205 8.8 Results and Discussions 209 References 214 8.1 Introduction

Izmir is the third largest city in Turkey with a population of approximately three million people. The city has a public transportation system that is a combination of land, sea, and underground networks. The underground transportation system (metro line) is 12  km in length and carries approximately 180,000 passengers a day on average. This chapter focuses on the analysis of operations in metro line via a simulation model. The analysis includes all the phases that a typical simulation study covers, such as data collection, input analysis, model building, verification and validation, experimental design, and output data analysis. The goal is to build a simulation model to analyze and evaluate the effectiveness of operations in metro line. The performance criteria of effectiveness are the cost of energy spent for the operations (0.7 TL/ coach/km), the average waiting time of passengers at stations, and 19 7

19 8

A DA L E T O NER A N D GUL ER O Z T URK

the comfort of passengers (the number of straphangers in each square meter of the empty space in the coach). 8.2  Characteristics of Metro Line and Operations Management

The metro line has 10 stations. The first and last stations are terminuses, where trains begin and finish their trips in the line. The metro line lies in west–east direction; station #1 represents the west terminus, whereas station #10 sits at the east end of the line. The distance between stations varies between 0.8 and 1.6  km. The line has two tracks; therefore, trains can travel in both directions simultaneously. Station #6 is the midpoint of this line, which consists of management center and maintenance facilities. Besides, it serves as a parking lot for trains. Trains access and leave the metro line at this station. Each train contains four coaches currently. There are 45 coaches in total, and each coach has 44 seats. There is additional 34 m 2 of empty space in each coach for standing passengers who cannot have a seat when the coach is crowded. Under congested situations, when all seats in a coach are taken, some more passengers can still get into the coach as straphangers are available in the empty space. The number of straphangers is limited to 6 per m 2. In such a case, each coach may have up to 248 passengers. This maximum capacity creates an uncomfortable voyage and is not desirable. The company wants to provide a service not to exceed 3 standing passengers per square meter of empty space, which means 146 passengers per coach. The system begins operating at 6 o’clock every day, when trains start their daily tours. Traveling time between stations is known since the distance between stations is fixed. When a train arrives at a station, it waits for a specific period of time (dwell time) for passengers to get in and get off the train. Dwell times vary from station to station. There are 190 trips in each direction, which means 380 trips in total a day. 8.3  Literature Survey

There are many studies in the literature that involve transportation systems. Some studies are concerned with the analysis of the system

SImUL ATI O N A N D A N A LYSIS

19 9

whereas the others include the improvement and optimization of some performance criteria in the system. Goverde (1998) dealt with the synchronization control of scheduled train services to minimize passenger waiting times. This model can be utilized to review the optimal synchronization control policy from the point of view of cost of arrival delays. The objectives are to minimize the total generalized passenger waiting time and to resolve buffer times related to the passenger waiting time in service network timetables. Chang et al. (2000) are concerned with a multiobjective model for passenger train services planning that is applied to Taiwan’s highspeed rail line. The goal is to improve a multiobjective programming model for the optimal assignment of passenger train services on an intercity high-speed rail line without branches. Li (2000) built up a simulation model of a train station and its passenger flow. The simulation model consists of the processes, equipment, and queues a passenger comes across from entering the station. All these parameters directly affect the total passenger travel time. Minimizing the total passenger travel time and increasing the service quality are the purpose of this chapter. Martinez (2002) proposed the application of Siman ARENA™, which is a discrete event simulation tool in the operational planning of a rail system. This model also involves an animation of a Siman simulation. The simulation model gives the capability of using a realistic model of a rail network and calculated waiting time in the platform and on-time performance for special system performance parameters. Sheu and Lin (2011) presented the optimization of train regulation and energy usage of metro lines using an adaptive—optimal—control algorithm. The automatic train regulation system involves service quality, transport capacity, and energy usage of metro line operations. The train regulator has a purpose of maximizing the schedule and headway commitment while minimizing the energy depletion. Finally, Yalçınkaya and Bayhan (2009) described the modeling and optimization of average travel time for a metro line by simulation. They present a modeling and solution approach based on simulation and response surface methodology for optimization. The aim is to find the optimum headways and to minimize the average passenger time

200

A DA L E T O NER A N D GUL ER O Z T URK

spent in the metro line with a satisfactory rate of carriage fullness. Actually, their study is conducted on the same transportation system as in our model. There are some discrepancies in that study such that the arrival rates of passengers are assumed to be fixed during the day. However, it is shown that the arrival rates are nonstationary. On the other hand, the objective is to find the optimal headways (time interval between consecutive trips at a station). They assume that there are an infinite number of trains available at the end of the metro lines. In fact, there are a limited number of trains available and the headways are limited by some other factors. 8.4  Input Analysis

There are two stochastic inputs of the system. The first one is the arrival of passengers into the system, and the other one is the destination station of the passengers. The details of input analysis related to each input are given as follows. When a passenger arrives at a station, he/she uses an electronic pass card or a token to access the system. Therefore, the arrival information, including the time stamp and station, is automatically stored in a database. The records of passengers served by the system within last 15 months have been considered for the analysis in order to determine arrival rates. There are more than 50 millions of such records in the database. The average number of passengers has been determined for every minute of a day, for each day of the week, and for each station. The idea is to identify the differences, if any, in arrival rates for different stations and hours in a day. We are also interested to know whether there is a difference between days of a week (working days vs weekend) and between winter and summer seasons. It has been observed that every station has its own characteristic pattern for arrival rates. It is not surprising since the stations are located in different parts of the city. Some stations are near the business center whereas some others are in residential areas. The arrival patterns for two different stations are given in Figure 8.1, which depicts the average number of passengers per minute that arrive at those stations over time during the day. Another remarkable observation is that the patterns repeat themselves at each particular station during weekdays but differ only

2 01

SImUL ATI O N A N D A N A LYSIS

Station #3 70 60 50 40 30 20 10 0 06:00

10:00

14:00

18:00

Station #1

22:00

100 90 80 70 60 50 40 30 20 10 0 06:00

10:00

14:00

18:00

22:00

Figure 8.1  The average number of passengers per minute at two different stations on Monday.

during weekends, Saturday and Sunday. It leads to an important result such that there should be three scenarios in the model: one for weekdays, the one for Saturdays, and the last one for Sundays. The final observation is that only the magnitude changes, whereas the shapes of the patterns do not change when seasonal changes come into question. Another stochastic input is the destination of passengers. When a passenger accesses the system at a particular station, there is no information available about at which station he/she leaves the metro line. However, it is important to have this information in order to assess the performance of the system. One-week lasting public surveys have taken place twice during winter and summer seasons in order to provide information about the destination stations of the passengers. The surveys are conducted with the assistance of the security personnel. At each station, security men questioned the arriving passengers about their destination station. The answers, including time stamps, have been automatically recorded with their hand counters. Each survey covers more than 500,000 passengers in total. The collected data have been organized and analyzed to produce probability tables for destination stations. The outcomes are assumed to be confident and consistent since sample sizes are large for each station. Destination probabilities are organized in two complementary tables for each station, because when a passenger arrives at a station, it is required to identify the probability of the direction that he/she will take. The probability of destination station should be determined next. If the station is a terminus, it is obvious that the passenger will

202

A DA L E T O NER A N D GUL ER O Z T URK

travel in either west or east direction depending on the terminus he/ she arrives. However, if the station is an intermediate one, a preliminary probability should be assigned for the direction, and then a probability table should accompany for the destination stations. Due to socioeconomical behavior of the passengers and the attributes of the stations in the line, destination probabilities may change over time during the day. Therefore, the tables should be prepared as a function of time to represent those fluctuations. 8.5  Simulation Model

Simulation model is created by ARENA software, which uses discrete-event simulation methodology. An animation model is also established to accompany the simulation model. The model is accompanied by two external files, one for inputs and the other for outputs of the simulation. Both of them are MS Excel™ files. The input file includes travel durations for trains between stations, dwell times at stations, current train schedule, arrival rates at each station at each minute, the probabilities of direction and destination stations of passengers arriving at each station as function of time, and the number of trains in the system over time during the day. The model is developed in a modular structure that includes several submodels. For example, a submodel is concerned with the arrival of passengers, that is, creating passenger entities. Statistical input analysis reveals that the arrival rates are constantly changing during a day. It means that it is not appropriate to investigate a single probability distribution to fit for interarrival times. It can be shown that arrivals should be modeled using nonstationary Poisson process. Let λ(t) be the arrival rate of passengers to a particular station at time t. Parameter λ(t) is not constant but changes over time. For example, it would be larger during the morning rush hour than in the middle of the afternoon. In our model, the input analysis of arrivals has been summarized in a table that consists of the average number of arrivals per minute for each station. The entries of that tables correspond to parameter λ(t) for nonstationary Poisson process at minute t. Since the passengers are created using Poisson process, the time between arrivals should come from the exponential distribution with mean (1/λ(t)). The simulation model reads the average number of passengers from

SImUL ATI O N A N D A N A LYSIS

203

the input file and then assigns to corresponding parameter λ(t) for that particular minute. For the following minutes, different values are read from the input file, which represent different values for parameter λ(t). Those values are then assigned to corresponding variables in the simulation model. When a passenger arrives at an intermediate station, first of all, it is required to identify the probability of the direction he/she will take. Then the probability of destination station should be known. Furthermore, those probabilities may change over time during the day; therefore, they have to be identified as a function of time. All those probability tables are predetermined through input analysis and stored in the input file. The simulation model reads those values and assigns them to corresponding variables. There are other submodels that take care of other activities in the system. One of them simulates train motions and their entrance into and exit from the system. It also controls time, distance between stations, and the speed of the trains. Another submodel controls special statistical counters and system parameters. Finally, a submodel that controls the interaction between trains and passengers has been developed for each station. 8.6  Verification and Validation

In simulation studies, the concept of verification is concerned with building the model right. It is used in the comparison of the conceptual model with the computer representation. It asks the following questions: “Is the model implemented correctly in the computer? Are the input parameters and logical structure of the model correctly represented?” In order to verify the model, each module and unit has been tested using the debugging tools of ARENA software during the model development process. The movements of particular passengers are tracked upon arrival at the system. Choosing direction and identifying destination station and then putting the passenger into the appropriate queue has been verified by tracking random passengers at each station. Furthermore, especially the train–passenger interactions are investigated carefully. Technically, trains and passengers are two different kinds of entities in ARENA. When a train is ready to move

204

A DA L E T O NER A N D GUL ER O Z T URK

from a terminus, the entities representing the passengers getting in the train and the entity representing the train are merged into a group and they are moved together to the next station. Upon arriving at the next station, some of the entities should be separated from the group to represent the leaving passengers. Some new entities are then merged with the group to simulate the passengers getting in the train. The movements of trains are tracked over time as to whether they comply with regulations and time schedule. On the other hand validation is concerned with building the right model. It is used to determine that a model is an accurate representation of the real system. Validation is usually achieved through the calibration of the model, an iterative process of comparing the model with the actual system behavior and using the discrepancies between the two, and the insights gained, to improve the model. This process is repeated until the accuracy of the model is judged to be acceptable. Some comparisons have been prepared in order to validate the model. Actual and simulated numbers of passengers arriving at the stations are compared, and it is found that they are very close to each other. Table 8.1 shows the comparison related to the total passengers at each station. Furthermore, the numbers of arriving passengers are also compared on an hourly basis at each station. Table 8.2 presents the comparisons for two selected stations at different hours of the day. Table 8.1  Comparison of the Total Number of Passengers Arriving at Stations STATION NAME Station #10 Station #9 Station #8 Station #7 Station #6 Station #5 Station #4 Station #3 Station #2 Station #1 Total

SIMULATION AVERAGE

ACTUAL SYSTEM AVERAGE

29,943 7,296 2,706 10,048 11,460 1,118 7,904 18,735 25,446 21,257 135,913

30,028 7,321 2,722 10,059 11,503 1,132 7,930 18,780 25,516 21,267 136,258

205

SImUL ATI O N A N D A N A LYSIS

Table 8.2  Comparison of the Hourly Number of Passengers Arriving at Stations STATION #1 HOURS 05:30–07:00 07:00–08:00 08:00–09:00 09:00–10:00 10:00–11:00 13:00–14:00 14:00–15:00 16:00–17:00 17:00–18:00 18:00–19:00 19:00–20:00 22:00–23:00 23:00–00:00

STATION #2

GENERATED BY SIMULATION

ACTUAL

DEVIATION (%)

GENERATED BY SIMULATION

ACTUAL

DEVIATION (%)

547 2985 4390 1688 1085 1344 1252 1048 997 890 642 202 117

531 2920 4419 1712 1083 1345 1252 1055 1000 887 645 202 121

3 2 −1 −1 0 0 0 −1 0 0 0 0 −3

393 1571 2094 1260 1000 1605 1687 2322 2567 2299 1762 442 255

394 1547 2109 1271 998 1602 1684 2315 2573 2304 1781 445 265

0 2 −1 −1 0 0 0 0 0 0 −1 −1 −4

Please note that actual and simulated numbers of passengers arriving at the stations are very close to each other, which enable us to be confident with the verification and validation of the model. Furthermore, the animation model also provides a visual tool for validating the model. It is especially very helpful to present the model to the management of the company. 8.7  Output Analysis

The model developed in this chapter is terminating simulation since metro operations are stopped at midnight each day and start with an empty state the next day. It is required to collect data across some replications and analyze them statistically to construct confidence intervals for performance measures in the model. The performance measures in consideration are the number of passengers in the train across stations for each trip and the average waiting times of passengers at stations. The number of replications is determined to be 10 in the beginning as proposed by Law and Kelton (2000). Table 8.3 indicates the confidence intervals for the number of passengers in the train across stations for selected trips.

STATION #10

21.8 ± 3.61 10.8 ± 2.33 ⋯ 184.8 ± 7.39 205.4 ± 10.1 ⋯ 220.2 ± 10.1 195.9 ± 10.9 ⋯ 161.3 ± 11.6 ⋯ 50 ± 5.48 ⋯

TRIP TIME

06:00 06:10 ⋯ 08:06 08:10 ⋯ 13:18 13:24 ⋯ 17:42 ⋯ 20:47 ⋯

26.1 ± 3.18 14.1 ± 1.95 ⋯ 232 ± 9.53 254.5 ± 10.5 ⋯ 246.8 ± 12.0 220.1 ± 9.64 ⋯ 184.9 ± 13.0 ⋯ 55.5 ± 6.03 ⋯

STATION #9 27.8 ± 2.99 14.4 ± 2.13 ⋯ 237 ± 10.8 255.6 ± 9.13 ⋯ 251.5 ± 12.4 229.3 ± 10.4 ⋯ 192.3 ± 12.5 ⋯ 58.3 ± 7.66 ⋯

STATION #8 33 ± 4.19 16.9 ± 3.06 ⋯ 278.9 ± 8.96 303.9 ± 12.5 ⋯ 299.3 ± 13.2 269.2 ± 10.9 ⋯ 215.7 ± 13.9 ⋯ 62.7 ± 6.79 ⋯

STATION #7 26.9 ± 3.36 13.4 ± 2.50 ⋯ 281.4 ± 8.04 311.2 ± 15.1 ⋯ 273.1 ± 14.3 269.5 ± 12.6 ⋯ 203.3 ± 13.1 ⋯ 73 ± 0.01 ⋯

STATION #6 26 ± 3.16 13.5 ± 2.26 ⋯ 274.1 ± 8.35 304.3 ± 14.2 ⋯ 270.4 ± 14.4 266.5 ± 12.9 ⋯ 201.3 ± 12.7 ⋯ 73 ± 0.02 ⋯

STATION #5 23.5 ± 3.02 16.9 ± 2.39 ⋯ 252.3 ± 10.3 279.5 ± 11.6 ⋯ 251.8 ± 14.6 253.2 ± 13.1 ⋯ 200 ± 12.0 ⋯ 71 ± 0.01 ⋯

STATION #4

Table 8.3  Confidence Intervals for the Number of Passengers in Train across Stations (Number of Replications = 10) 19.1 ± 3.34 14.9 ± 2.59 ⋯ 175.6 ± 9.13 187.8 ± 5.64 ⋯ 222.2 ± 10.2 221.8 ± 14.0 ⋯ 214.2 ± 11.7 ⋯ 88 ± 0.03 ⋯

STATION #3

9.5 ± 2.55 11.6 ± 2.78 ⋯ 80.8 ± 3.67 80.9 ± 3.06 ⋯ 106 ± 8.98 113 ± 7.22 ⋯ 137 ± 4.53 ⋯ 65 ± 0.04 ⋯

STATION #2

206 A DA L E T O NER A N D GUL ER O Z T URK

SImUL ATI O N A N D A N A LYSIS

207

The averages and half-widths of confidence intervals differ substantially with respect to the trip time. In order to compare the significance of the confidence intervals in a common ground, the estimates of relative error are calculated for each trip. The estimate of relative error is defined as the division of half-width by the average (Law and Kelton, 2000). Table 8.4 shows the estimates of relative errors for the same trips given in Table 8.3. Estimates for relative errors are very small for all trips except the ones that are the early trips in the morning and the late trips in the evening. It is not surprising since the number of passengers arriving at those hours is relatively small for every station, which leads to smaller averages. Confidence intervals and estimates of relative errors are also determined for the average waiting times of passengers at different stations. The estimates of relative errors derived from 10 replications are reviewed for all system parameters, and it is observed that the highest relative error occurs for the number of passengers in the train at trip with time 06:00 am in west direction (see Table 8.4). This parameter will be used as the reference point to find the number of replications required to provide a smaller relative error level, which is determined to be 0.05. In other words, we need to figure out the number of replications to decrease the half-widths such that relative error is less than 0.05 for each system parameter. The concept of relative error is defined by Law and Kelton (2000). If the estimate X(n) is such that |X(n)−μ|/|μ| = γ, then we say that X(n) is a relative error of γ. Suppose that we make replications of a simulation until the half-width of the confidence interval divided by |X(n)| is less than or equal to γ (0  p2), where expected demand



x2 − y2

ξ2

F2 (ξ 2 )dξ 2 is protected from discounted fare

class 2 and reserved for full fare class 1. The last term represents the loss in revenue due to an observed demand for full fare class, which is lower than the actual capacity allocated x1 +



x2 − y2

ξ2

F2 (ξ 2 )dξ 2 . The

airline problem in this case is formulated as follows: P : Max ≠ˆ (3.6) p1 , p2 ,x1 ,x 2



subject to : x1 + x 2 ≤ c (3.7)

3.2.2  With Fencing Investment

Given that the price differentiation strategy results in imperfect fences and hence, in demand leakage, the airline’s problem extends to diminishing the customers’ shifting from full fare class to discounted fare class. Without loss of generality, we presume that the airline decides to increase fencing levels through an investment of specific costs. Suppose that for reaching γ leakage, the airline must bear a cost, G(γ), assumed nonnegative, continuous and monotonically decreasing in γ. Thus, the revenue function from Equation 3.5 is adjusted by the fencing cost G(γ), and the airline problem is formulated now as a constraint nonlinear optimization problem, P′: Pʹ :

Max

p1 , p2 ,x1 ,x 2 , γ

π = πˆ − G ( γ ) (3.8)

subject to : x1 + x 2 ≤ c (3.9)

The optimal expected revenue when fencing investment decisions are taken would be π* ( p1* , p2* , x1* , x 2* , γ * ), and the airline’s problem is to determine the optimal integrated decisions on fare prices p1* and p2* , seat inventory control x1* and x 2* ,and investment G(γ*) for demand leakage γ*. It is important to notice here that the optimality of revenue, ≠ˆ , from P would be an upper bound on the optimal total expected revenue, π, from P′, when the airline decides on fencing investment.

60

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

3.3  Model Analysis

We address first the airline’s optimization problem, P, to jointly determine the fare pricing and seat inventory control. Due to computational complexity in structural properties analysis of the revenue function, we provide two approaches to solve the model: sequential (hierarchical) optimization and joint optimization. In hierarchical optimization, the decision control parameters are optimized sequentially such that the airline determines first the optimal fare prices, p1* and p2* , and later, the optimal inventory control decisions, x1* and x 2*. In problem P′, an additional decision parameter γ is considered to determine the fencing investment, achieved also by sequential optimization. This approach of addressing inventory control and pricing decisions has been applied in several studies (see Smith et al., 2007; Zhang et al. 2010). 3.3.1  Hierarchical Optimization

To apply the sequential approach, we consider problem P, and we use a hierarchical optimization procedure while demand leakage rate, γ, is fixed, thus, no investment assumed to control the fencing via demand leakage rate. In our pursuit to determine the fare pricing, while ignoring the seat inventory control decisions x1 and x2, and since pricing decisions are mostly dependent on the price-dependent deterministic demands, yi, ∀i = {1,2}, we formulate a deterministic version of problem, P, as problem, DP. Given that demand uncertainties are ignored, the stochastic demands D 1 and D 2 are approximated with the expectations, z1 = y1 + μ1 and z2 = y2 + μ2, respectively. Thus, the deterministic problem, DP, of the airline would be

DP : Max πd = p1z1 + p2 z2 (3.10) p1 , p2

subject to : z1 + z2 ≤ c (3.11)

For DP, we can determine the optimal fare prices, p1* and p2* , as outlined in Proposition 3.1. Proposition 3.1 In DP, the following holds: 1. The optimal prices pi* ,  ∀i = {1,2} are determined by solving the following system of nonlinear equations:

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

61



α1 − 2 p1 ( β1 + γ ) + 2 p2 γ + λβ1 + μ1 = 0 (3.12)



α 2 − 2 p2 ( β2 + γ ) + 2 p1 γ + λβ2 + μ 2 = 0 (3.13)



c − ( α1 + α 2 ) + β1 p1 + β2 p2 − ( μ1 + μ 2 ) = 0 (3.14)

2. πd is jointly concave in pi, ∀i = {1,2}. Proof: See Appendix. Now, we reconsider problem P with no fencing investment while the stochastic demand assumption with sequential arrivals holds. We create the stochastic problem for the airline as: x2 − y2

P : Max πˆ = p1x1 + p2 x 2 + ( p1 − p2 ) p1 , p2 ,x1 ,x 2

− p1



F2 ( ξ 2 ) dξ 2

ξ2 x1 +





x2 − y2

∫ξ2

F2 ( ξ 2 ) − y1



F1 ( ξ1 ) dξ1

(3.15)

ξ1

subject to : x1 + x 2 ≤ c (3.16)

In P, the optimal expected revenue would be given by ≠ˆ * ( p1* , p2* , x1* , x 2* ), where p1* and p2* are the optimal fare prices, and x1* and x 2* , are the optimal seat inventory controls of full fare and discounted fare class, respectively. The constraint in Equation 3.16 is the flight cabin limitation. In this problem, the optimal fare prices, p1* and p2* , are first obtained from Proposition 3.1, and the expected total revenue function, ≠ˆ , in Equation 3.15 can be optimized to determine the optimal seat inventory controls x1* and x 2* , as outlined in Proposition 3.2. Proposition 3.2  In problem P, the following holds: 1. Given that the optimal fare prices p1 and p2 are fixed, the optimal booking limit x 2* is such that x 2* = y 2 + 3 σ.

62

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

≠ˆ is jointly concave in booking limits x1 and x2 if p1Φ1 – 2. x2 − y2 ⎛ ⎞ (p1 − p2 ) ≥ 0, where Φ1 = F1 ⎜ x1 + F2 (ξ 2 ) − y1 ⎟ and Φ2 = ξ2 ⎝ ⎠ F2 (x 2 − y 2 ).



Proof: See Appendix. 3.3.2  Joint Optimization

In this section, we extend the optimization procedure approached earlier for problem P. Proposition 3.3 outlines the procedure to determine the joint optimal control for problem P. The decision controls here are the optimal fare prices p1* and p2* and the optimal seat inventory controls x1* and x 2* .Due to the complex structure of the revenue function, ≠ˆ , mainly contributed from demand uncertainty, sequential demand arrival (nested control), and price-dependent demand leakage, proving the joint concavity in all decision variable could be a prohibitive task and it is not explored in this study. However, joint ˆ shown in seat inventory control for fixed fare prices concavity of ≠ is and vice versa. These results may be found more restrictive in terms of a more general condition for joint concavity of ≠ˆ , but again, given that the complex structure of the revenue function, an analytical framework to derive a less-restrictive condition seems limiting. Proposition 3.3  In problem P′, the following holds for the joint optimization: 1. For a fixed set of inventory control xi,  ∀i = {1,2}, π is jointly concave in fare prices pi, ∀i = {1,2}, as long as p1ϕ1t 1t 2− (β2 + 2γ + γ(β2 + γ)t 3) ≥ 0, where t 1 = β1 + γ(1−Φ2), t 2 = Φ2β2−​ γ(1−Φ2),  t 3 = p1Φ1−(p1−p2), and t 1, t 2, t 3 ≥ 0. 2. The optimal fare prices p1* and p2* , seat inventory controls x1* and x 2*, and demand leakage γ* are determined by solving the following system of nonlinear equations:



p1 ( 1 − Φ1 ) − p2 + Φ 2 p1Φ1 − ( p1 − p2 ) = 0 (3.17)

(

)

x1 − I 1 + I 2 − ( p1 − p2 )Φ 2 γ − p1Φ1 ( β1 + γ(1 − Φ 2 ) ) = 0 (3.18)

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry



x 2 − I 2 + ( p1 − p2 ) Φ 22 ( β2 + γ ) − p1Φ1 ( β2 Φ 2 − γ ( 1 − Φ 2 ) ) = 0 (3.19) 2

−Φ 2 ( p1 − p2 ) − p1Φ1 ( p1 − p2 ) ( 1 − Φ 2 ) −

63

∂G = 0 (3.20) ∂γ

c − x1 − x 2 = 0 (3.21)

Next, we study problem P′, which includes also the cost of fencing. A similar study in a firm’s context with no capacity constraints has been reported in Zhang et al. (2010). There are two types of fencing cost models considered here: linear fencing cost and nonlinear fencing cost. 3.3.2.1  Linear Fencing Cost  For a linear fencing cost approach, the

cost function of the fencing investment is linearly linked to the leakage rate, γ. We define the linear cost function considering the range of leakage as G(γ) = G 0−(G 0/K)γ, where G 0 > 0 is the cost of null leakage, when the perfect fence is achieved (γ = 0), and K > 0 is the maximum leakage level when there is no initiative to invest in fencing and G(γ) = 0. Proposition 3.4 Given, xi, pi,  ∀i = {1,2} and a linear fencing cost, G(γ) = G 0 − (G 0/K)γ, the following hold in the problem, P′: 1. The revenue, π is quasi-concave (unimodal) in γ, if ϕ2(p1 − p2) − p1(ϕ1 (1−Φ2)2 + Φ1(1 – ϕ2)) ≤ 0. 2. The optimal leakage rate, γ* can be determined by solving, G (p1 − p2)(Φ2(p1 − p2) + p1Φ1(1 − Φ2)) + 0 = 0. K Proof: See Appendix. 3.3.2.2  Nonlinear Fencing Cost  In the case of nonlinear fencing cost,

it is assumed that for a small leakage rate the cost of fence grows rapidly and then slowly when leakage rate reaches high levels. This behavior is more realistic than the case of linear fencing cost function. We define the nonlinear function under similar considerations on leakage rate so that a representative function is G(γ) = G 0/(K + γ), where G 0/K is the cost of perfect fence (γ = 0), and G 0 > 0, K ≥ 0.

64

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

3.4  Numerical Experimentation

In this section, a numerical study is presented to examine the impact of demand leakage rate, γ, and demand variability, σ, on an airline’s optimal strategy for fare pricing, seat inventory control, and fencing cost investment. The model-related parameters are adopted from the related numerical study presented in Zhang et al. (2010) in an illustrative example, but these parameters are customized as per the authors’ best guess and the additional parameter of airline’s cabin capacity, c. Thus, α1 = 80, β1 = 0.2,  α2 = 180, β2 = 0.8, μ1 = μ2 = 0, and c = 100. For simplicity, σi,  i = {1,2} are assumed equal for each fare class segment, thus, σ  = σi and σ = {2,5,10,15}. In addition to this and consistent with Mostard et al. (2005), the random factor is assumed ξi ∈ U [− 3σ, 3σ]. In a complex problem like the one formulated here, the numerical experimentations are conducted with uniformly distributed price-dependent stochastic demand only (see Zhang et al. 2010). The benefits of fare class creation and differentiated fare pricing are compared with the revenue from the corresponding single fare class, which has a cumulative (equivalent of two fare classes) pricedependent deterministic demand, 260−p, and an equivalent single fare class stochastic demand factor ξ ∼ tri[2 ξ , 2 ξ ] = tri[− 6σ, 6σ] with triangular distribution from the convolution of the two uniformly distributed demands (see Zhang et al. 2010). The corresponding single fare class optimal revenues (1)π* at demand variability σ = {2,5,10,15} are 15,959.93, 15,745.19, 15,379.58, and 15,010.36, respectively. A numerical experimentation from the hierarchical optimization approach suggested previously for problem P is presented in Table 3.2. The table reports the optimal decision control parameters, p1* , p2* , and x 2* which are prices in each fare class and seat inventory allocation for the discounted fare class segment; notice here that the optimal seat inventory would be simply x1* = c − x 2* and therefore not presented in the table. The airline’s revenue from the two fare classes for σ = 2 is 17,025.81 at no demand leakage (perfect market segmentation), which is about 6.7% superior to the corresponding optimal single segment revenue. Whereas at a higher demand variability and no demand leakage, the revenue gain from two fare classes is noticed about 4.8% superior to the corresponding single fare class revenue. Now, at demand leakage rate of γ = 1, and a low demand variability, σ = 2, the

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

65

Table 3.2  Numerical Experimentation with Hierarchical Optimization Procedure γ

x 2*

p1*

p2*

≠ˆ *

2

0 0.25 0.5 0.75 1

92.79 86.75 98.86 99.65 69.97

230.00 187.32 176.97 172.31 169.66

142.50 153.17 155.76 156.92 157.59

17,025.81 16,315.83 16,143.71 16,066.16 16,022.04

5

0 0.25 0.5 0.75 1

76.63 96.46 76.26 81.11 99.15

230.00 187.32 176.97 172.31 169.66

142.50 153.17 155.76 156.92 157.59

16,727.04 16,072.50 15,913.82 15,842.33 15,801.65

10

0 0.25 0.5 0.75 1

87.98 99.22 94.61 98.66 84.55

230.00 187.32 176.97 172.31 169.66

142.50 153.17 155.76 156.92 157.59

16,229.07 15,666.94 15,530.67 15,469.27 15,434.34

15

0 0.25 0.5 0.75 1

98.45 99.00 98.45 99.01 99.00

230.00 187.32 176.97 172.31 169.66

142.50 153.17 155.76 156.92 157.59

15,731.11 15,261.39 15,147.52 15,096.21 15,067.02

σ

airline’s revenue from two fare classes is noticed only 0.39% superior to the corresponding optimal single fare class revenue. Similarly, at a higher demand variability, σ = 15, the optimal revenue gains of the airline offering two fare classes are only 0.38% superior to the corresponding optimal single fare class revenue. This clearly leads us to the conclusion that an increase in demand leakage rate, γ, causes a significant effect on the airline’s revenue while using market segmentation based on two fare classes compared to a single fare class. The higher demand variability also impacts toward diminishing the revenue gains to an airline, as it can be clearly noticed from the same Table 3.2. Table 3.3 reports a numerical experimentation with similar findings noticed earlier in sequential optimization approach for problem P. A comparative study of the two methodologies is presented in Figure 3.1, where it can be clearly noticed that both demand leakage rate and demand variability have significant impact on the airline’s profitability. In the joint optimization, at σ = 2, with no demand leakage, the airline

66

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

Table 3.3  Numerical Experimentation with Joint Optimization Procedure γ

x 2*

p1*

p2*

≠ˆ *

2

0 0.25 0.5 0.75 1

92.26 74.23 68.47 99.60 82.69

231.04 188.17 177.79 173.12 170.47

144.19 154.40 156.84 157.93 158.54

17,068.45 16,334.95 16,158.04 16,078.49 16,033.27

5

0 0.25 0.5 0.75 1

92.17 83.35 73.20 80.38 72.52

232.11 189.08 178.69 174.02 171.37

146.46 155.97 158.19 159.17 159.73

16,823.91 16,114.38 15,944.67 15,868.58 15,825.40

10

0 0.25 0.5 0.75 1

97.49 78.88 79.24 90.22 98.79

232.73 189.74 179.46 174.84 172.21

149.65 157.98 159.85 160.66 161.12

16,396.15 15,735.01 15,579.25 15,509.78 15,470.46

15

0 0.25 0.5 0.75 1

98.94 86.01 86.51 93.45 86.88

232.11 189.59 179.50 174.99 172.44

152.22 159.41 160.94 161.60 161.97

15,950.36 15,345.90 15,205.82 15,143.68 15,108.61

σ

improves its profitability from 17,025.81 to 17,068.45, which yields about 0.27% revenue increase if the joint optimization procedure is used. However, when demand leakage rate increases to γ = 1, with a low demand variability of σ = 2, the optimal revenue of the airline is 16,022.04 in the sequential optimization, while the profitability achieved using the joint optimization is 16,033.27, which is only 0.07% revenue improvement from the sequential framework. At a high demand variability, σ = 15, and a perfect market segmentation, γ = 0, the airline’s optimal revenue using sequential optimization would be 15,731.11, which is improved with 1.46% to 15,950.36 through the joint optimization approach. Similar to an observation with low demand variability, σ = 2, when both demand leakage rate and demand variability are higher, γ = 1 and σ = 15, the revenue gain from the joint optimization framework compared to sequential optimization reduces to only 0.28%. Thus, we can conclude here that the sequential optimization procedure is quite competitive to the joint optimization procedure.

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

1.75

67

×104 1 fare class, s =2 2 fare classes, sequential s =2 2 fare classes, joint s =2

1.7

1 fare class, s =15

Optimal expected revenue

2 fare classes, sequential s =15 2 fare classes, joint s =15 1.65

1.6

1.55

1.5

0

0.5 g

1

Figure 3.1  Impact of demand leakage and demand variability.

We consider next the extended problem, P′, which enables the airline to mitigate or enhance demand leakage rate, γ, between the two fare classes at an additional investment given by G(γ). In a study reported in Zhang et al. (2010), we have noticed that the linear fencing, G(γ), resulted the firm’s optimal decision to either fully control the demand leakage, γ, to zero, or to not invest in fencing. This is due to the fact that the revenue function, π, in problem P′ is convex in γ. Alternatively, the nonlinear fencing cost G(γ) = G 0/(K + γ) is reported in the same study to have a concave revenue function for a firm. Noticeably, when K = 0, it is prohibitive for an airline to stop the demand leakage, regardless of its investment, limγ→0G(γ) = limγ→0(G 0/γ)→∞. In this study, we have considered the nonlinear fencing cost to optimize the airline’s joint decisions on p1, p2, x1, x2, and γ. The fencing cost function used is given by G(γ) = (100/γ). Next, we study the airline’s optimal decision of a joint control on p1, p2,  x1, x2, and γ, at various demand variability and with a nonlinear fencing control. In Table 3.4, optimal fencing decision γ is

68

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

Table 3.4  Optimal Fencing Decision σ

p1*

p2*

x 2*

γ*

π*

2 5 10 15

180.22 180.01 178.36 175.25

156.27 157.91 160.04 161.56

98.67 82.60 85.15 93.37

0.42 0.45 0.55 0.73

15,959.93 15,745.19 15,379.58 15,010.36

(a)

1.6

×104 1 fare class 2 fare classes

1.55

1.5

0

5

10

σ

15

Optimal fencing investment

Optimal expected revenue

determined by a numerical optimal procedure in MATLAB® and Global Optimization Toolbox (The MathWorks, 2013). GlobalSearch procedure from the toolbox with default settings is utilized. It is obvious to notice here that with higher demand variability, an airlines optimal decision on fencing investment would be to keep an increased demand leakage rate. Figure 3.2a through c illustrate the impact of demand variability, σ, and the optimal fencing decision of the airline. It is obvious to notice here that, with an increase in the demand variability, an airline’s optimal investment decision on fencing would be to diminish it 250 G(γ*) 200 150 100

(b)

Optimal leakage rate

0.8

(c)

0

5

γ*

0.7 0.6 0.5 0.4

0

5

σ

Figure 3.2  (a)–(c) Impact of optimal fencing decision.

10

15

σ

10

15

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

69

as the demand variability increases. Naturally, it will lead to an airline to increase the optimal demand leakage rate, γ*. 3.5 Conclusions

In this research, an integrated approach to optimal fare pricing and seat inventory control is presented for an airline that experiences demand leakage. The fences that segment the market demand are considered imperfect. Due to imperfect market segmentation, the airline observes demand leakage from full fare class to the discounted fare class. The research provides models of RM for an airline in the situation when it experiences stochastic price-dependent demands. The models are analyzed to determine an integrated optimal control to fare pricing, seat inventory control and fencing cost decisions. Numerical experimentations are carried out to underline the impact of both market segmentation and fencing efforts onto the airline’s profitability. The future work directions include investigating the optimal investment strategies in regard to different types of consumer behaviors or specific product features in order to keep the airline immune to demand leakage effects. The present analysis has considered the firm in monopoly only; an interesting avenue, therefore, would be to consider a game theoretic approach to this problem in duopoly or oligopoly. Appendix 3.A 3.A.1 Derivation of the Revenue Function

E [πˆ ] = p1 min {x1 + x 2 − min {x 2 , D2 } , D1 } + p2 min {x 2 , D2 } (3.22) Notice that min{a,b} = a − [a − b] + = b− [b − a] + , where a, b ∈ R and [a] + = max{a,0}. Also, [a − b] + = (a − b)−[b − a] +  (see Gallego and Moon, 1993; Chen et al., 2004; AlFares and Elmorra, 2005 for details). Furthermore, Eξi ( Di ) = zi = yi + μi , ∀i = {1, 2}. Thus, we obtain min { D2 , x 2 } = x 2 − Eξ2 [x 2 − D2 ]+ and min{x1 + x 2 − min{x 2 , D2 }, D1 } = x1 + Eξ2 [x 2 − D2 ]+ − Eξ1 [x1 + Eξ2 [x 2 − D2 ]+ − D1 ]+ , and therefore, the revenue from Equation 3.22 becomes:

70

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

(

)

+

+ + πˆ = p1 x1 + Eξ2 [ x 2 − D2 ] − p1 Eξ1 ⎡x1 + Eξ2 [ x 2 − D 2 ] − D1 ⎤ ⎣ ⎦ +

− p 2 x 2 − p 2 E ξ 2 [ x 2 − D2 ]



(3.23)

Using earlier studies (see Yao, 2002; Yao et al., 2006) in Equation 3.23, we have Eξ2 [x 2 − D2 ]+ =



x2 − y2

ξ2

F2 (ξ 2 )dξ 2 .

And, similarly we can determine the following expression: + Eξ1 ⎡x1 + Eξ2 [ x 2 − D2 ] − D1 ⎤ ⎣ ⎦ x1 +

x2 − y2

∫ξ2

F2 ( ξ 2 ) dξ 2 − y1



=

=

x2 − y2

∫ξ2

x2 − y2 ⎛ ⎞ ⎜x + F2 ( ξ 2 ) dξ 2 − y1 − ξ1 ⎟ f 1 ( ξ1 ) dξ1 ⎟⎟ ⎜⎜ 1 ξ2 ⎠ ⎝



ξ1 x1 +

+

F2 ( ξ 2 ) dξ 2 − y1



F1 ( ξ1 ) dξ1

ξ1

Substituting these expressions in Equation 3.23 yields the following revenue function: x2 − y2

πˆ = p1x1 + p2 x 2 + ( p1 − p2 )



F2 ( ξ 2 ) dξ 2

ξ2 x1 +

x2 − y2

∫ξ2

− p1

F2 ( ξ 2 ) − y1



F1 ( ξ1 ) dξ1

(3.24)

ξ1



Proof of Proposition 3.1 1. Applying Karush Kuhn Tucker (KKT) optimality conditions, the Lagrangian function associated to problem DP is:

L ( p1 , p2 ,λ ) = p1z1 + p2 z2 + λ ( c − z1 − z2 ) (3.25)

where z1 = α1−β1p1−γ(p1−p2) + μ1 z2 = α2−β2p2 + γ(p1−p2) + μ2

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

71

The first-order optimality conditions (FOCs) are







⎛ ∂z ∂z ∂L ∂z ∂z = p1 1 + z1 + p2 2 − λ ⎜ 1 + 2 ∂p1 ∂p1 ∂p1 ⎝ ∂p1 ∂p1

⎞ ⎟ = 0 (3.26) ⎠

⎛ ∂z ∂z ∂L ∂z ∂z = p1 1 + z2 + p2 2 − λ ⎜ 1 + 2 ∂p2 ∂p2 ∂p2 ⎝ ∂p2 ∂p2

⎞ ⎟ = 0 (3.27) ⎠

∂L = c − z1 − z2 ≥ 0, λ ≥ 0, ∂λ

( c − z1 − z2 ) λ = 0 (3.28)

where (∂z1/∂p1) = −(β1 + γ) (∂z1/∂p2) = (∂z2/∂p1) = γ (∂z2/∂p2) = −(β2 + λ) Since c−z1−z2 = 0 must be satisfied, therefore, λ > 0. After the simplification, the KKT optimality conditions become

α1 − 2 p1 ( β1 + γ ) + 2 p2 γ + λβ1 + μ1 = 0 (3.29) α 2 − 2 p2 ( β2 + γ ) + 2 p1 γ + λβ2 + μ 2 = 0 (3.30) c − (α1 + α 2 ) + β1 p1 + β2 p2 − (μ1 + μ 2 ) = 0 (3.31)

2. To prove the joint concavity in p1 and p2 of πd, from DP, we explore the Hessian matrix H:



⎡ ∂ 2 πd ⎢ ∂p 2 1 H=⎢ 2 d ⎢ ∂ π ⎢ ⎢⎣ ∂p1∂p2

∂ 2 πd ⎤ ∂p1∂p2 ⎥⎥ (3.32) ∂ 2 πd ⎥ ⎥ ∂p22 ⎥⎦

Notice here that (∂ 2 πd /∂p12 ) = −2(β1 + γ ) ≤ 0, (∂ 2 πd /∂p22 ) = −2(β2 + γ ) ≤ 0 and (∂2πd /∂p1∂p2) = 2γ ≥ 0. Now, H is given by



⎡ −2 ( β1 + γ ) H=⎢ ⎣ 2γ

⎤ 2γ ⎥ (3.33) −2 ( β2 + γ ) ⎦

72

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

In order to prove the joint concavity of πd in p1 and p2, the two first principal minors (∂ 2 πd /∂p12 ) and (∂ 2 πd /∂p22 ) must be nonpositive, and the second principal minor H = (∂ 2 πd /∂p12 )(∂ 2 πd /∂p22 ) − (∂ 2 πd /∂p1∂p2 )2 must be nonnegative. Form Equation 3.33, it can be clearly noticed that both principal minors are negative, and |H| = 4(β1γ + β2γ + β1β2) ≥ 0. This proves the joint concavity of πd in p1 and p2. Proof of Proposition 3.2 x2 − y2

1. πˆ = p1x1 + p2 x 2 + ( p1 − p2 )



F2 ( ξ 2 ) dξ 2

ξ2 x1 +

− p1

x2 − y2

∫ξ2

F2 ( ξ 2 ) − y1



(3.34)

F1 ( ξ1 ) dξ1

ξ1

The revenue function from this equation 3.34 is simplified



x2 − y2

using the following notations: I 2 = F2 (ξ 2 )dξ 2 and x2 − y2 ξ2 x1 + ∫ F2 ( ξ 2 ) − y1 ξ2 I1 = F1 (ξ1 )dξ1 , where y1(p1,p2,γ) = α1−β1



ξ1

p1 − γ(p1 − p2) and y2(p1,p2,γ) = α2 − β2p2 + γ(p1 − p2), so that πˆ = p1x1 + p2 x 2 + ( p1 − p2 ) I 2 − p1 I 1 (3.35)



The FOCs w.r.t. xi, i = {1,2}, are ∂πˆ ∂I = p1 − p1 1 (3.36) ∂x1 ∂x1



∂πˆ ∂I ∂I = p2 + ( p1 − p2 ) 2 − p1 1 (3.37) ∂x 2 ∂x 2 ∂x 2



In these Equations 3.36 and 3.37, (∂I1/∂x1) = Φ1, (∂I1/∂x2) = Φ1Φ2, (∂I2/∂x1) = 0, and (∂I2/∂x2) = Φ2, where Φ1 = F1 x2 − y2 x2 − y2 ⎛ ⎛ ⎞ F2 (ξ 2 ) − y1 ⎟ , Φ2 = F2 (x2−y2) and φ1 = f 1 ⎜x1 + ⎜ x1 + ξ2 ξ2 ⎝ ⎝ ⎠ ⎞ F2 (ξ 2 ) − y1 ⎟ , φ2 = f 2 (x 2 − y 2 ). Thus, from Equation 3.36, ⎠





Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

73

we have p1(1−Φ1) = 0, and from Equation 3.37, p1Φ2(1−Φ1) + p2(1−Φ2) = 0. Substituting p1(1−Φ1) = 0 in Equation 3.37, we obtain p2(1−Φ2) = 0. Furthermore, it is obvious to notice that p2  >  0, which yields the optimality condition such that Φ2 = 1. This translates into F2(x2−y2) = 1 and since ξi = 3σ,  which will result, F2−1 (1) = 3σ, and thus optimal seat allocation for discounted fare class would be x 2* = y 2 + 3σ. 2. The joint concavity of ≠ˆ in x1 and x 2 is satisfied if the Hessian matrix H is negative semidefinite:



⎡ ∂ 2 πˆ ⎢ ∂x 2 H = ⎢ 21 ⎢ ∂ πˆ ⎢ ⎣ ∂x1∂x 2

∂ 2 πˆ ⎤ ∂x1∂x 2 ⎥⎥ (3.38) ∂ 2 πˆ ⎥ ⎥ ∂x 22 ⎦

The first principal minor conditions for the joint concavity of ≠ˆ are (∂ 2 πˆ / ∂x12 ) = − p1φ1 ≤ 0 and (∂ 2 πˆ / ∂x 22 ) = −φ2 ( p1Φ1 − ( p1 − p2 )) − p1φ1Φ 22 ≤ 0, given p1Φ1−(p1−p2) ≥ 0. Next, the 2 2 2 2 2 second principal minor is H = (∂ πˆ / ∂x1 )(∂ πˆ / ∂x 2 ) − (∂ πˆ / 2 2 ∂x1 ∂x 2 ) , where (∂ πˆ / ∂x1∂x 2 ) = − p1φ1Φ 2 , and therefore, |H| = p1ϕ1ϕ2(p1Φ1−(p1−p2)). To prove the joint concavity of ≠ˆ w.r.t. capacity allocations xi, ∀i = {1,2}, the condition for second principal minor p1ϕ1ϕ2(p1Φ1−(p1−p2)) ≥ 0 must be satisfied, which implies a similar condition established for the principal minors of H, that is, p1Φ1−(p1−p2) ≥ 0.

Proof of Proposition 3.3 1. Joint concavity of π w.r.t. p1 and p2 is satisfied if H is negative semidefinite, where





⎡ ∂2π ∂2π ⎤ ⎢ ∂p 2 ∂p1∂p2 ⎥⎥ 1 ⎢ (3.39) H= ⎢ ∂2π ∂2π ⎥ ⎢ ⎥ ∂p22 ⎥⎦ ⎢⎣ ∂p1∂p2 The first-order derivatives of π from Equation 3.8 w.r.t. p1 and p2 are

∂π ∂I ∂I = x1 + I 2 + ( p1 − p2 ) 2 − I 1 − p1 1 = 0 (3.40) ∂p1 ∂p1 ∂p1

74



Sy ED A SIF R A Z A A N D MIHAELA T URIAc

∂π ∂I ∂I = x 2 − I 2 + ( p1 − p2 ) 2 − p1 1 = 0 (3.41) ∂p2 ∂p2 ∂p2 Using the previous relations,



∂y1 = − ( β1 + γ ) , ∂p1

∂y1 ∂y 2 ∂y 2 = = γ, and = − (β2 + γ ) ∂p2 ∂p1 ∂p2

and the derivatives ∂I 1 = Φ1 ( β1 + γ ( 1 − Φ 2 ) ) , ∂p1



∂I 2 = Φ 2 (β2 + γ ) , ∂p2

∂I 1 = Φ1 (β2 Φ 2 − γ (1 − Φ 2 ) ) ∂p2

∂I 2 = −Φ 2 γ ∂p1

we can write Equations 3.40 and 3.41 as



∂π = x1 − ( p1 − p2 ) Φ 2 γ − p1Φ1 ( β1 + γ ( 1 − Φ 2 ) ) − I 1 + I 2 = 0 ∂p1 (3.42) ∂π = x 2 − I 2 + ( p1 − p2 ) Φ 2 ( β2 + γ ) ∂p2



− p1Φ1 ( β2 Φ 2 − γ ( 1 − Φ 2 ) ) = 0



(3.43)

Hessian’s first principal minors are given by





∂2π ∂I 2 ∂2I 2 ∂I 1 ∂ 2 I 1 (3.44) = + p − p − − p 2 2 ( ) 1 2 1 ∂p12 ∂p1 ∂p12 ∂p1 ∂p12 ∂2π ∂I 2 ∂2I 2 ∂ 2 I 1 (3.45) = − + − − 2 p p p ( ) 1 2 1 ∂p22 ∂p2 ∂p22 ∂p22 And, the partial derivative is



∂2π ∂I ∂I ∂2I 2 ∂I ∂2I1 = 2 − 2 + ( p1 − p2 ) − 1 − p1 ∂p1∂p2 ∂p2 ∂p1 ∂p1∂p2 ∂p2 ∂p1∂p2 (3.46)

75

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

where the partial second-order derivatives of Ii, ∀i = {1,2}, w.r.t. pi, ∀i = {1,2} are: 2 ∂2I1 = Φ1φ2 γ 2 + φ1 ( β1 + γ ( 1 − Φ 2 ) ) 2 ∂p1 2 ∂2I1 2 = Φ1φ2 ( β2 + γ ) + φ1 ( Φ 2β2 − γ ( 1 − Φ 2 ) ) 2 ∂p2

∂2I1 = φ1 ( β1 + γ ( 1 − Φ 2 ) ) ( β2 Φ 2 − γ ( 1 − Φ 2 ) ) ∂p1∂p2 − Φ 1φ 2 γ ( β 2 + γ ) 2



∂ I2 2 = φ2 ( β2 + γ ) , ∂p22

∂2I 2 = φ2 γ 2 , ∂p12

∂2I 2 = −φ2 γ ( β2 + γ ) ∂p1∂p2

For further simplification, we use the following notations: t 1 = β1 + γ(1−Φ2), t 2 = Φ2β2−γ(1−Φ2), and t 3 = p1Φ1−(p1−p2). It is obvious to notice that t 1 ≥ 0, and with the findings from Proposition 3.2, we find that t 3 ≥ 0. To further simplify, we assume that t 2 ≥ 0. This yields t 1, t 2, and t 3 ≥ 0. Thus, Equations 3.44 through 3.46 can be reduced using t 1,t 2, and t 3 notations to the following expressions: ∂2π = −2Φ 2 γ + ( p1 − p2 ) φ2 γ 2 − 2Φ 1 ( β1 + γ ( 1 − Φ 2 ) ) 2 ∂p1

(

− p1 Φ 1φ2 γ 2 + φ1 ( β1 + γ ( 1 − Φ 2 ) )

2

) (3.47)

= −2Φ 2 γ − 2Φ 1t1 − p1φ1t12 − φ2 γ 2 t 3 2 ∂2π = −2Φ 2 ( β2 + γ ) + ( p1 − p2 ) φ2 ( β2 + γ ) 2 ∂p2

(

2

− p1 Φ1φ2 ( β2 + γ ) + φ1 ( Φ 2β2 − γ ( 1 − Φ 2 ) )

2

= −2Φ 2 ( β2 + γ ) − p1φ1t 22 − φ2 ( β2 + γ ) t 3

2

) (3.48)

76

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

∂2π = Φ 2 ( β2 + γ ) + Φ 2 γ − φ2 γ ( β2 + γ ) ( p1 − p2 ) ∂p1∂p2 − Φ1 ( β2 Φ 2 − γ (1 − Φ 2 ) )

(

− p1 φ1 ( β1 + γ ( 1 − Φ 2 ) ) ( β2 Φ 2 − γ ( 1 − Φ 2 ) ) − Φ 1φ 2 γ ( β 2 + γ ) )



= Φ 2 ( β2 + 2 γ ) − Φ1t 2 − p1φ1t1t 2 + φ2 γ ( β2 + γ ) t 3



(3.49)

It is clear to notice from Equations 3.47 and 3.48 that the first principal minors are both nonpositive. Now, we need to show the second principal minor sign is positive; therefore, we need to prove 2

∂2π ∂2π ⎛ ∂2π ⎞ | H |= 2 2 − ⎜ ⎟ ≥0 ∂p1 ∂p2 ⎝ ∂p1∂p2 ⎠



where |H| is determined after some simplification as

(

| H |= p1φ1t12 + φ2 γ 2 t 3 + 2 ( Φ 2 γ + Φ1t1 )

(

)

× p1φ1t 22 + φ2 (β2 + γ )2 t 3 + 2Φ 2 ( β2 + γ )

)

(

− Φ 2 ( β2 + 2 γ ) + φ2 γ ( β2 + γ ) t 3 − ( Φ1t 2 + p1φ1t1t 2 )

)

2



(3.50) Given that p1, Φ1, ϕ1, p2, Φ2, ϕ2, t 1, t 2, t 3 ≥ 0, we can achieve a lower bound on |H| established in Equation 3.50 by ignoring some positive terms. While simplifying the rest of the terms, we obtain the following reduced form:



(

| H | ≥ p1φ1t12

)( p φ t ) − (β 2 1 1 2

2

2 + 2 γ + γ ( β2 + γ ) t 3 ) (3.51)

Therefore, the condition for joint concavity will be

( p φ t )( p φ t ) − (β 2 1 1 1

2 1 1 2

2

2

+ 2 γ + γ ( β2 + γ ) t 3 ) ≥ 0 (3.52)

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

77

which can be further written as

( p φ t t + (β + 2γ + γ (β + γ ) t ) ) 1 1 1 2



2

2

3

)

× ( p1φ1t1t 2 − ( β2 + 2 γ + γ ( β2 + γ ) t 3 ) ≥ 0 (3.53)

Finally, the necessary condition for joint concavity of π will be p1ϕ1t 1t 2−(β2 + 2γ + γ(β2 + γ)t 3) ≥ 0. There can be other possibilities that may also guarantee the joint concavity of π; however, this chapter only focuses on the single possibility presented in this proof. 2. The Lagrangian function of nonlinear problem P′ is L ( x1 , x 2 , p1 , p2 , γ, λ ) = p1x1 + p2 x 2 + ( p1 − p2 ) I 2 − p1 I 1 − G ( γ ) + λ ( c − x1 − x 2 )



The KKT optimality conditions are ∂L = p1 ( 1 − Φ1 ) − λ = 0 (3.54) ∂x1





∂L = p2 + ( p1 − p2 ) Φ 2 − p1Φ1Φ 2 − λ = 0 (3.55) ∂x 2 ∂L = x1 − I 1 + I 2 − ( p1 − p2 ) Φ 2 γ ∂p1 − p1Φ1 ( β1 + γ ( 1 − Φ 2 ) ) = 0



∂L = x 2 − I 2 + ( p1 − p2 ) Φ 2 ( β2 + γ ) ∂p2 − p1Φ1 ( β2 Φ 2 − γ ( 1 − Φ 2 ) ) = 0



(3.56)





(3.57)

2 ∂L = −Φ 2 ( p1 − p2 ) − p1Φ 1 ( p1 − p2 ) ( 1 − Φ 2 ) ∂γ



∂G ( γ ) =0 ∂γ



(3.58)

78

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

∂L = c − x1 − x 2 = 0 (3.59) ∂λ



Recalling for Equations 3.54 through 3.58, the notations are x2 − y2 ⎛ ⎞ ⎜ Φ1 = F1 x1 + F2 ( ξ 2 ) − y1 ⎟ , Φ 2 = F2 ( x 2 − y 2 ) ⎜⎜ ⎟⎟ ξ2 ⎝ ⎠



x1 +

x2 − y2

I2 =

∫ F ( ξ ) dξ , 2

2

2

I1 =

ξ2

x2 − y2

∫ξ 2

F2 ( ξ 2 ) − y1



F1 ( ξ1 ) dξ

ξ1

Therefore, to determine the optimal solution (x1* , x 2*, p1*, p2*, γ * ), we will have to solve the following system of nonlinear equations:





(

x1 − I 1 + I 2 − ( p1 − p2 ) Φ 2 γ − p1Φ1 ( β1 + γ ( 1 − Φ 2 ) ) = 0 (3.61) x 2 − I 2 + ( p1 − p2 ) Φ 2 ( β2 + γ ) − p1Φ1 ( β2 Φ 2 − γ ( 1 − Φ 2 ) ) = 0

2





(3.62) −Φ 2 ( p1 − p2 ) − p1Φ1 ( p1 − p2 ) ( 1 − Φ 2 ) −



)

p1 ( 1 − Φ1 ) − p2 + Φ 2 p1Φ1 − ( p1 − p2 ) = 0 (3.60)

∂G = 0 (3.63) ∂γ

c − x1 − x 2 = 0 (3.64)

Proof of Proposition 3.4 We consider the linear fencing cost function G(γ) = G 0−(G 0/K)γ, where G 0 > 0, K > 0, and G 0 is the cost of null leakage when perfect fences are achieved so that γ = 0. When there is no initiative to invest in fencing, G(γ) = 0. The rate of change in G(γ) w.r.t. γ is (∂G/∂γ) = −(G 0/K) and (∂2G/∂γ2) = 0 due to linear G(γ). Notice here that G(γ = 0) = G 0 and G(γ = K) = 0.

79

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

1. Recalling the revenue function, π, from Equation 3.8

π ( xi , pi , γ ) = p1x1 + p2 x 2 + ( p1 − p2 ) I 2 − p1 I 1 − G ( γ ) The partial derivatives of π w.r.t. γ are ∂π ∂I ∂I ∂G ( γ ) = ( p1 − p2 ) 2 − p1 1 − ∂γ ∂γ ∂γ ∂γ 2



= −Φ 2 ( p1 − p2 ) − p1Φ1 ( p1 − p2 ) ( 1 − Φ 2 ) −

G0 (3.65) K

∂2π ∂2I ∂2I = ( p1 − p2 ) . 22 − p1 . 21 2 ∂γ ∂γ ∂γ

(

= φ2 ( p1 − p2 )3 − p1 ( p1 − p2 ) 2 φ1 (1 − Φ 2 )2 + Φ1 (1 − Φ 2 )

(

) )

= ( p1 − p2 )2 φ2 ( p1 − p2 ) − p1 (φ1 (1 − Φ 2 )2 + Φ1 (1 − φ2 )) (3.66)

where ∂I 2 = −Φ 2 ( p1 − p2 ) , ∂γ

∂I 1 = Φ1 ( p1 − p2 ) ( 1 − Φ 2 ) ∂γ

2 ∂2I 2 = φ2 ( p1 − p2 ) , 2 ∂γ

2 ∂2I1 2 = ( p1 − p2 ) φ1 ( 1 − Φ 2 ) ∂γ

(

+Φ1 ( 1 − φ2 ) ) x2 − y2 ⎛ ⎞ ⎜ Φ1 = F1 x1 + F2 ( ξ 2 ) − y1 ⎟ , Φ 2 = F2 ( x 2 − y 2 ) ⎜⎜ ⎟⎟ ξ2 ⎝ ⎠



x2 − y2 ⎛ ⎞ ⎜ φ1 = f 1 x1 + F2 ( ξ 2 ) − y1 ⎟ , ⎜⎜ ⎟⎟ ξ2 ⎝ ⎠





φ2 = f 2 ( x 2 − y 2 )

From Equation 3.65, we can determine γ* by solving (p1−p2)​(Φ2(p1−p2) + p1Φ1(1−Φ2)) + (G 0/K) = 0, given that pi, xi,

80

Sy ED A SIF R A Z A A N D MIHAELA T URIAc

∀i = {1,2} are known. Notice from Equation 3.65 that the total expected revenue, π, is nonincreasing in leakage rate, γ, as (∂π/∂γ) ≤ 0 for 0 ≤ γ ≤ K. From Equation 3.66, π is quasiconcave in γ if ϕ2(p1 − p2) − p1(ϕ1(1 − Φ2)2 + Φ1(1 – ϕ2)) ≤ 0.

Acknowledgment This publication was made possible by NPRP grant # 5-023-05006 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

References

AlFares, H. and Elmorra, H. (2005). The distribution-free newsboy problem: Extensions to the shortage penalty case, International Journal of Production Economics 93/94, 465–477. Anon. (n.d.). The theory and practice of revenue management [online]. Available at: http://www.springer.com/business+&+management/operations+​ research/book/978-1-4020-7701-2 (accessed January 26, 2014). Bell, P.C. (1998). Revenue management: That’s the ticket, OR/MS Today, 25(2). Chen, F.Y., Yan, H., and Yao, Y. (2004). A newsvendor pricing game, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 34(4), 450–456. Chiang, W.C., Chen, J.C.H., and Xu, X. (2007). An overview of research on revenue management: Current issues and future research, International Journal of Revenue Management 1(1), 97–128. Chiang, W.K. and Monahan, G.E. (2005). Managing inventories in a twoechelon dualchannel supply chain, European Journal of Operational Research 162(2), 325–341. Choi, S.C. (1996). Pricing competition in a duopoly common retailer channel, Journal of Retailing 72(2), 117–134. Cote, J.P., Marcotte, P., and Savard, G. (2003). A bilevel modelling approach to pricing and fare optimisation in the airline industry, Journal of Revenue Management and Pricing 2, 23–36. Feng, Y. and Xiao, B. (2001). A dynamic airline seat inventory control model and its optimal policy, Operations Research 49, 939–949. Gallego, G. and Moon, I. (1993). The distribution free newsboy problem: Review and extensions, Journal of Operational Research Society 44, 825–834. Hanks, R., Cross, R., and Noland, P. (2002). Discounting in the hotel industry, Cornell Hotel and Restaurant Administration Quarterly 43, 94–103. Kimes, S.E. (2002). Perceived fairness of yield management, Cornell Hotel and Restaurant Administration Quarterly 43, 21–30.

Op TIM A L F EN cIN G IN A IRLINE IN D US T Ry

81

Li, M.Z.F. (2001). Pricing non-storable perishable goods by using a purchase restriction with an application to airline fare pricing, European Journal of Operational Research 134(3), 631–647. Littlewood, K. (1972). Forecasting and control of passenger booking. AGIFORS 12th Annual Symposium Proceedings, Nathanya, Israel. McGill, J.I. and Van Ryzin, G.J. (1999). Revenue management: Research overview and prospects, Transportation Science 33, 233–256. Mostard, J., Koster, R., and Teunter, R. (2005). The distribution-free newsboy problem with resalable returns, International Journal of Production Economics 97, 329–342. Petruzzi, N.C. and Dada, M. (1999). Pricing and the news vendor problem: A review with extensions, Operations Research 47, 183–194. Philips, R.L. (2005). Pricing and Revenue Optimization Stanford, CA: Stanford University Press. Raza, S.A. and Akgunduz, A. (2008). An airline revenue management fare pricing game with seats allocation, International Journal of Revenue Management 2(1), 42–62. Raza, S.A. and Akgunduz, A. (2010). The impact of fare pricing cooperation in airline revenue management, International Journal of Operational Research 7(3), 277–296. Smith, N.R., Martinez-Flores, J.L., and Cardenas-Barron, L.E. (2007). Analysis of the benefits of joint price and order quantity optimisation using a deterministic profit maximization model, Production Planning and Control 18(4), 310–318. MATLAB and Global Optimization Toolbox R. (2013a). The MathWorks, Inc., Natick Massachusetts, United States, Software available at http:// www.mathworks.com/products/matlab/ Weatherford, L.R. (1997). Using prices more realistically as decision variables in perishable-asset revenue management problems, Journal of Combinatorial Optimization 1, 277–304. Yao, L. (2002). Supply Chain Modeling: Pricing, Contracts and Coordination. The Chinese University of Hong Kong, Shatin, Hong Kong. Yao, L., Chen, Y.F., and Yan, H. (2006). The newsvendor problem with pricing: Extension, International Journal of Management Science and Engineering Management 1(1), 3–16. Zhang, M. and Bell, P. (2010). Price fencing in the practice of revenue management: An overview and taxonomy, Journal of Revenue and Pricing Management 11(2), 146–159. Zhang, M. and Bell, P.C. (2007). The effect of market segmentation with demand leakage between market segments on a firm’s price and inventory decisions, European Journal of Operational Research 182(2), 738–754. Zhang, M., Bell, P.C., Cai, G., and Chen, X. (2010). Optimal fences and joint price and inventory decisions in distinct markets with demand leakage, European Journal of Operational Research 204, 589–596.

4 B I - O B JECtI V E B ERtH – C R ANE A LLO CAtI ON P ROBLEm IN C ONtAINER TERmINALS D E N I Z O Z D E M I R A N D E V R I M U R S AVA S Contents

4.1 Motivation 83 4.2 Related Work 85 4.3 Model Description 87 4.3.1 Assumptions 88 4.3.2 Notation 88 4.3.3 Model 89 4.4 Solution Methodology 92 4.5 Case Study 94 4.6 Conclusions and Further Research Directions 102 Acknowledgment 103 References 103 4.1 Motivation

Transportation via sea continues to rise as a result of the increasing demand due to its advantages over other transportation modes in terms of cost and security. Actually, as of 2013, seaborne trade accounted for 80% of global trade in terms of volume (UNCTAD, 2013), and since 2006, it counts for 70.1% in terms of value (Rodrigue et al., 2009). Due to this trend toward sea transportation, efficient port management has become a major issue for port owners and shipping companies. Typical operations in a port consist of allocation of berths to arriving vessels, allocation of cranes to docked vessels at the quayside, routing of internal transportation vehicles, storage space assignment, and gantry crane deployment at the yard side. Berth allocation problem (BAP) consists of assigning berth spaces to the incoming vessels. Crane allocation problem (CAP) is the determination of 83

84

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

the assignment sequence of cranes to a container ship. Both problems on the quayside have received significant attention from researchers (Bierwirth and Meisel, 2010). More often, these two problems are studied separately in the literature, resulting in suboptimal solutions. To find more realistic solutions, researchers offer solutions that combine the two problems. Port operations involve multiple parties such as ship owners, crane operators, port management, and government officers. By its nature, each party has its own concerns and requirements that need to be addressed in a decision-making process. Hence, the berth allocation and crane scheduling problem requires that the decision makers consider multiple objectives at a time, which, again, adds to the complexity of the problem. An essential concern to deliberate is the fact that objectives such as minimizing vessel service time and maximizing crane utilizations frequently conflict with each other. That is, the decision maker is forced to attain a balance among those conflicting objectives. However, recent literature on the berth and crane scheduling problem does not provide adequate support to resolve the issue. With those in mind, this study attempts to simultaneously determine the berthing and crane allocations under multiple objectives. In principle, with the existence of more than one objective, we would expect to have a set of optimal solutions instead of a single optimal solution. Therefore, our approach will be to determine these set of solutions, also referred as Pareto optimal solutions, in order to determine Pareto efficient frontier. Following this multisolution approach offers the decision maker the flexibility of adjusting the balance within conflicting objectives. We may depict the contributions of this chapter as twofold. First, we extend the existing literature by embracing more practical assumptions to better represent the real-world implementation. Second, we formulate a bi-objective integer problem and propose an ε-constraint method-based solution algorithm to acquire the nondominated berth– crane assignments and schedules as Pareto optimal front. The structure of the remaining part of the chapter is as follows: the following section is dedicated to the related studies in the literature. Section 4.3 is devoted to the mathematical model description of the problem. Section 4.4 puts forward our solution methodology based on ε-constraint method. Section 4.5 reports the computational

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

85

experiments via a case study. Finally, Section 4.6 concludes the study and states future research directions. 4.2  Related Work

BAPs and CAPs aim to display the berthing position and service sequence of all the vessels; hence, it denotes an assignment and scheduling problem structure. In most of the studies in literature, crane allocation is planned after berthing the ship, which results in suboptimal solutions. Our focus in this review process will put an emphasis on studies that simultaneously tackle both problems. Work by Zhou and Kang (2008) has used the genetic algorithm to search through the solution space and compared it with the greedy algorithm for the BAP and CAP with stochastic arrival and handling times. The genetic algorithm proposed has significantly improved the greedy algorithm solutions so as to minimize the average waiting time of containerships in terminal. Zhang et al. (2010) use the subgradient optimizations technique to solve the problem with the aim of minimizing the weighted sum of the handling costs of containers. Review work provided by Bierwirth and Meisel (2010) as well as Carlo et al. (2014) presents state-of-the-art research on the topic that jointly tackles berth allocation and crane scheduling. Recent studies that maintain a multiobjective approach can be summarized as follows: Imai et al. (2007) address the problems with a bi-objective approach that considers the minimization of delay of ships’ departure and minimization of the total service time. They use the weighting method that combines all objectives into a single one by assigning weights and by changing the weights in a systematic fashion. They so form the noninferior solution set. Golias et al. (2009) use the multiobjective approach to differentiate the service level given to customers with different priorities. Total service time minimization is realized separately for different levels of customer preferences. Their solution approach is by the use of evolutionary algorithms. In their latter work, they propose a nonnumerical ranking preference method to select the efficient berth schedule (Golias et al., 2010). Cheong et al. (2010a) model the BAP so as to minimize the three objectives of makespan, waiting time, and degree of deviation from a predetermined priority schedule. They use a multiobjective evolutionary

86

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

algorithm to find the Pareto efficient frontier. However, studies discussed here do not tackle the CAP, and the solution set they provide is not guaranteed to be optimal. Cheong et al. (2010b) extend the literature by incorporating the crane scheduling problem. They design their problem to solve the two objectives of waiting time and handling time of ships. They as well use the multiobjective evolutionary algorithm approach to model the port conditions at the Pasir Panjang container terminal. The most related work to our study belongs to Liang et al. (2011). In their bi-objective crane and berth allocation model, they propose a hybrid genetic algorithm to minimize the sum of the handling time of containers and the number of crane movements concurrently. Their computational experiments are realized by a real-world case study of Shanghai container terminal. In this chapter, we approach the berth–crane scheduling problem concurrently, while considering two objectives of total service time minimization and crane setup minimization. Our crane-related objective differs from the work of Liang et al. (2011), in that their approach aims to avoid the probable crane splits among berths. However, there is no cost incurred for a vessel to be served by crane j at time t, then crane j′ at time t + 1 and crane j again at time t + 2 as long as the cranes are at the same berth. We, in turn, by minimizing the crane setups for each vessel, incorporate the potential cost of crane splitting together with their setup cost, giving a more detailed analysis of crane activities. Moreover, we lead the former work in the perspective of real-world representation. In our model, cranes differ in terms of their technical specifications regarding their container handling rates. Hence, particular cranes may be favored to another in convenient cases. Berth length restrictions and vessel length compatibility issues are also reflected in our model. To the best of our knowledge, this is the first attempt to provide the optimum Pareto efficient frontier for the considered problem. As to the exact methods for the solution of multiobjective combinatorial optimization problems, several scalarization techniques may be used. The most popular is by the use of the weighted sum approach where different objectives are aggregated through weighted sums. Although the efficient solutions found by the technique may be valid for linear programming problems, due to the discrete structure of the

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

87

combinatorial problems, the results may not compromise the whole efficient solution set for the considered problem. The consideration of these nonsupported efficient solutions, which are not optimal for any weighted sum of the objectives, becomes crucial when there are more than one sum objective in contrast to cases where at most one sum objective is present and the others are bottleneck objectives. Another approach followed is the compromise solution method, where the distance to a reference point is minimized. The reference point is defined by the separate minima of each objective. Obviously, for conflicting objectives, it is not possible to obtain the minimum limits simultaneously. For bi-objective problems, the use of ranking methods is popular. As required by the technique, the computation of nadir point is difficult to obtain when there are more than two objectives. For the comprehensive description of the available methods, readers may refer to Ehrgott and Gandibleux (2002). For the case of two objectives, the two-phase method is described as a general framework. In twophase method, the supported efficient solutions are found by the use of scalarization methods in the first phase, and then the nonsupported efficient solutions are found by problem-specific techniques in the second phase. The solution approach we use to solve our bi-objective integer problem is an iterative algorithm incorporating the branch-and-cut solution embedded in ε-constraint method. ε-Constraint method is one of the well-known techniques to solve multiobjective optimization problems. In ε-constraint method, instead of combining the objectives with weights, only one of the original objectives is minimized while the others are rearranged as constraints. An extensive discussion of the method can be found in Ehrgott (2005). 4.3  Model Description

This study attempts to simultaneously determine the berthing and crane allocations under two objectives. The wharf is modeled to be discrete, that is, it represents a collection of partitioned sections. Different types of cranes with different handling rates are considered. Handling time and the number of cranes to be assigned to the ship are not known in advance. Handling time depends on the type and the number of cranes allocated to a vessel, which is dynamic throughout

88

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

the service time. For instance, a vessel can start to be served by only one crane and end up being served by three cranes. Therefore, the ships do not have to wait until a specified number of cranes are available. This prevents suboptimal solutions resulting from misleading crane unavailability assumption. We now present the bi-objective optimization model for solving simultaneous berth–vessel–crane allocation problem. The basic assumptions of the model can be summarized as follows. 4.3.1 Assumptions

1. There are discrete berths with specified lengths. A vessel may be assigned to any of the available berths as long as the vessel length fits to the berth length. 2. There are cranes with different technology that give service with varying handling rates. 3. Some of the cranes are mobile, in a sense that cranes can be assigned to any berth and any vessel in any order. 4. Crane allocation is dynamic throughout the handling period of a vessel. The number and the type of cranes assigned are flexible, and vessel handling time is dependent on crane allocations. 5. A vessel cannot be given service before its arrival. 6. Each different crane allocation incurs a cost. 7. There are a maximum allowable number of cranes that can be assigned to a vessel. The indices, parameters, decision variables, and the integer linear programming model are defined as follows. 4.3.2 Notation

Indices i = (1, …, I) set of vessels j = (1, …, J) set of cranes, where first p cranes are static and last J–p cranes are assumed to be portable k = (1, …, K) set of berths t = (1, …, T) time periods

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

89

Input Parameters li: Vessel length including the safety margin for the vessel Qk: Length of berth k ai: Arrival time of vessel i Ni0: Number of containers initially on the vessel U: Maximum number of cranes that can be assigned to a vessel simultaneously Rj: Container handling rate of jth crane For modeling purposes, we define two constants: M: Large constants m: Constant 0 ≤ m ≤ 1 Decision Variables yijtk: 1 if crane j is allocated to vessel i at time t at berth k and 0 otherwise BVitk: 1 if vessel i is assigned to berth k at time t Nit: Total number of containers on vessel i at time t Δik: 1 if vessel i is assigned to berth k YHit: 1 if vessel i is served at time t PHit: 1 if vessel i has remaining containers at time t CRijt: 1 if crane j will start serving vessel i at time t + 1 TempHit: Auxiliary variable that realizes the logical connection between yijtk and YHit 4.3.3 Model

∑∑PH ( setup ) : min ∑∑∑CR

f 1 ( time ) : min

it

i

f2

t = ai

ijt

i

l i ⋅ yijtk ≤ Qk

∑y k

ijtk

j

t = ai

∀i , j , t , k (4.1)

≤ 1 ∀i , j , t (4.2)

90

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

∑∑ y



i

∑y



≤ 1 ∀j , t , k (4.4)

ijtk

i

∑∑∑ y



j

t = ai

j

ijtk



∑y

ijt +1k

k



∑y

ijtk

∑∑R ⋅ y j

j

k

ijtk

∀i , t , k (4.8)

≤ CRijt

k

N it +1 ≤ M ⋅ PH it N it −



≤ M ⋅ BVitk

ijtk

j





∀i , t (4.6)

≤U

≤ 1 ∀t , k (4.7)

itk

i

∑y

≥ 1 ∀i (4.5)

k

∑ BV



ijtk

k

∑∑ y





≤ 1 ∀j , t (4.3)

ijtk

k

∀i , j , t (4.9)

∀i , t , t ≠ T (4.10)

= N i ,t +1 ∀i , t , t ≠ T (4.11)

N i ,T ≤ 0 ∀i (4.12)



YH it ≤ PH it



YH it ≤ TempH it

∀i , t (4.13) ∀i , t (4.14)

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

∑∑ y



j



j





j

ijtk

≤ M ⋅ TempH it

∀i , t (4.16)

YH it ≥ m ⋅ TempH it

∀i , t (4.17)

∑∑ y

∀i , k (4.18)

ijtk ʹ

t = ai k ʹ≠ k

j

j

≥ m ⋅ Δik

≤ M ⋅ (1 − Δik ) ∀i , k (4.19)

t = ai k ʹ≠ k

≤ M ⋅ Δik

ijtk ʹ

≥ m ⋅ (1 − Δik ) ∀i , k (4.21)

⎛ yij ʹtkʹ ≤ M ⋅ ⎜ 1 − ⎜ j ʹ≥ j + 1 k ʹ≤ k −1 ⎝

∑∑ ∑

∀i , k (4.20)

ijtk

t = ai

∑∑∑ y

i

ijtk

t = ai

∑∑ y





∀i , t (4.15)

∑∑∑ y





≥ m ⋅ TempH it

k

j





ijtk

k

∑∑ y

91

∑ i

⎞ yijtk ⎟ ∀j ≤ p, t , k (4.22) ⎟ ⎠

yijtk , Δik , PH it , YH it , TempH it , CRijt , BVi ,t ,k ∈ {0, 1} ∀i , j , t (4.23) N it ∃ ∀i , t (4.24)

The first objective f 1 minimizes the total time the vessels spend at the port. When all the containers are handled, the handling time is calculated by summing the total number of assignments in the time horizon. To calculate the total time, waiting time of the vessels on

92

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

the bay is also considered. The second objective f 2 minimizes the total number of crane setups. Constraint set (4.1) ensures that the allocation of a vessel does not exceed the quay length. Constraint set (4.2) implies that a vessel can be assigned to at most one berth. Constraint set (4.3) does not allow any crane to be allocated to more than one vessel at multiple berths at time t. Constraint set (4.4) implies that a single vessel can be served by a certain crane at any given time. Constraint set (4.5) ensures that all arriving vessels are served. Constraint set (4.6) guarantees that the total number of cranes allocated in a time period exceeds the maximum number of cranes that can be allocated to a vessel. By constraint set (4.7), the number of vessels allocated to a berth at a given time is limited to 1. Constraint set (4.8) ensures that the value of BV itk at the considered berth–vessel pair is set to 1 if a vessel is given service at the dock at a given time. In constraint set (4.9), crane setup indicators are updated. By constraint set (4.10), a vessel’s PHit value is set to 1, if the vessel has arrived and there are remaining containers. In constraint set (4.11), the number of containers to be handled in each vessel is decreased by the crane handling rate at each period. Constraint set (4.12) ensures that all the containers on the vessel are handled. The logical connection between PHit and YHit is secured by constraint set (4.13). Constraint sets (4.14) through (4.17) formulate the equations for solving the total handling time of each vessel. If an yijtk assignment exists for a vessel at a given time, the vessel handling time variable, YHit, is set to 1. Constraint sets (4.18) through (4.21) ensure that a vessel is docked at a single berth. Constraint set (4.22) handles the crane passing constraints for static cranes. If a crane j is serving a vessel at berth k, then no other crane with a larger crane id can serve a vessel at any berth that is positioned to its right. In the next section, our solution approach will be discussed. 4.4  Solution Methodology

The solution approach that we propose for solving the integrated BAP and CAP problems with multiobjectives relies on an iterative algorithm consisting of a branch-and-cut solver embedded in the ε-constraint method. ε-Constraint method is a well-recognized

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

93

technique to solve multicriteria optimization problems (Ehrgott, 2005). Figure 4.1 illustrates our solution algorithm. The ε-constraint method does not aggregate the multiple objectives into one criterion as done in a weighted sum method, but minimizes one of the original objectives and transforms the others into constraints. For bi-objective model, values a and b shown in Figure 4.1 give the range for the objective criteria f 2. A general multiobjective problem with O objectives may be substituted by the ε-constraint method as follows:

Start

a = min f2; b = max f2 v = (f1, f2)

Make ε = b

Optimize the MIP model with a solver

Add to v No Check if ε = a Yes Get Pareto front from v

End

Figure 4.1  Flow chart diagram of the solution algorithm.

ε=ε–1

94

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

min f j (x ) x∈X

s.t. f j (x ) ≤ ε k

k = 1,…,O , k ≠ j

where ε ∈ O

Here, the choice of the criteria that will be selected to be treated as constraint depends on the problem structure. For our problem, this transformation has been implemented by selecting f 1(time) as objective function and f 2(setup) as constraint. This is mainly due to the highly esteemed customer service levels that are related to time considerations. This transformation is also suitable for the optimization structure as the integer values of the setup parameters allow the parameter ε to be changed by one unit in each subsequent iteration. Additionally, the range for the objective criteria is again appropriate considering the number of iterations that would be required in case of a wider range. Although this range is actually dependent on the instance data examined, nevertheless, the range for time criteria is expected to be broader than of the setup criteria. Therefore, we would anticipate having the number of iterations that needs to be realized as higher in the case where objective f 1(time) was selected to be treated as constraint. Due to the conflicting nature of both objective criteria, we expect to have the lowest values of one criterion while the other one takes its highest values. This fact allows for the solution algorithm where we may reduce the values of parameter ε for the objective function treated as constraint, to be reduced iteratively, after the selection of one of the criterion as the main objective. To retrieve the interval where parameter ε varies, we solve each problem with a single objective. We expect to have the highest setup cost values when the objective function f 1 is optimized. 4.5  Case Study

Based on the real data obtained from the port of Shanghai container terminal, the model proposed is used to optimize the simultaneous assignment of berths and cranes to the incoming container vessels. The problem has previously been demonstrated by Liang et al. (2009).

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

95

Table 4.1  Input for the Computational Study

1 2 3 4 5 6 7 8 9 10 11

SHIP NAME

ARRIVAL TIME

ARRIVAL TIME (IMPLEMENTED)

DUE TIME

TOTAL NUMBER OF LOADING/UNLOADING CONTAINER (TEU)

MSG NTD CG NT LZ XY LZI GC LP LYQ CCG

9:00 9:00 0:30 21:00 0:30 8:30 7:00 11:30 21:30 22:00 9:00

10 10 2 22 2 10 8 13 23 23 10

20:00 21:00 13:00 23:50 23:50 21:00 20:30 23:50 23:50 23:50 23:50

428 455 259 172 684 356 435 350 150 150 333

Note that, as the same dataset has later been studied by Han et al. (2010) and Liang et al. (2011), the real case problem might be used as a benchmark. The arrival time, the total number of containers in TEU, and due dates for each vessel are given in Table 4.1. We represent a 24 h day by 24 equal time intervals and convert all the times in Table 4.1 accordingly. Figure 4.2 illustrates the time scale used for modeling the problem. The same scaling is used for each day. The berth structure is discrete, and the whole quay area is partitioned into four berths. Since berth lengths are not indicated in the benchmark problem, physical length restrictions are not reflected. There are seven quay cranes, with a handling rate equal to 40 TEUs/h. Due to the lack of available accurate data, the cranes are taken as identical in terms of their handling rates. That, in fact, is a generalization of our model structure, as we allow for variable quay crane handling rate specification. With more realistic crane specifications, our model can be used much more efficiently. The maximum allowable number of cranes assigned to a vessel is 4. In order to show the impact of portable and static cranes, crane ids 6 and 7 are assumed to be portable, that is, move among the berths, while five of the seven cranes are assumed to be static. (00:00–00:59) (01:00 –01:59) (02:00 –02:59) t=1 t=2 t=3

Figure 4.2  Time implementation frame.

...

(22:00– 22:59) (23:00– 23:59) t = 23 t = 24

96

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

The model is coded in GAMS 22.5 and solved with GUROBI solver for solving integer problems. The preliminary computational experimentation is conducted on NEOS server in January 2012 (Gropp and More, 1997; Czyzyk et al. 1998; Dolan, 2001). The implemented model has 8058 constraints and 3950 variables of which 3749 of them are discrete. The execution of the solver for each instance is reported to have less than 1 CPU s. However, the observed real time is between 5 min (for corner points) and 2 h (for points lying in the center of the Pareto frontier). The summary results of Pareto solutions are provided in Table 4.2, whereas Figure 4.3 illustrates the optimum Pareto efficient frontier. Table 4.2  Summary of the Solutions

SOLUTION ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

F1: TOTAL SERVICE TIME (H)

F2: TOTAL NUMBER OF CRANE SETUP

NONDOMINATED SOLUTION (✓ IF NONDOMINATED)

39 39 40 40 40 41 42 42 43 44 45 46 47 48 50 52 53 56 59 63 68 72 80 89

42 41 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 18

𝟀 ✓ 𝟀 𝟀 ✓ ✓ 𝟀 ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ 𝟀

Number of crane setups

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

97

(39,41) 40 38 35 (40,36) (41,35) 34 (42,33) 32 (43,32) (44,31) 30 (45,30) (46,29) 28 (47,28) (48,27) (50,26)

26

(52,25) (53,24)

24

(56,23)

22

(59,22)

(63,21)

20 41

43

45

47

49

51

53

55

57

59

61

63

65

(68,20) 71 73 75 67 69 (72,19)

77 79 (80,18)

Service time

Figure 4.3  Pareto efficient frontier.

Note that not all of our computed solutions contribute to the Pareto efficient frontier, since some of our solutions are dominated by the others. As in the case of solutions #3 and #4, both with total service time equal to 40 min, while total number of crane setups are 38 and 39 respectively, are dominated by solution #5 with exactly same total service time but with less total number of crane setups. Finally, we obtain 19 nondominated solutions out of 24 solutions to form the Pareto efficient frontier. In the study by Liang et al. (2009), the Pareto efficient frontier that is provided has seven solutions. With 19 nondominated solutions, we have further developed the decision support tool by offering an extended number of alternatives to the decision maker. The second solution in Table 4.2 gives the Pareto optimal solution, which minimizes the service time of the vessels. The relative computational results for the solution are given in Figure 4.4 and Table 4.3. The service time is the difference between the departure time and the arrival time of a vessel. The waiting time is defined as the time

98

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

23:59 22:00

22:05 21:00

20:00

Ship 10 (1)(2)(3)(7)

Ship 4 (1)(2) (3)(7)

23:02 22:45 Ship 9 crane (4)(5)(6) 21:30

18:00 16:00 14:00 12:00 11:47 10:00

14.42 Ship 8 cranes: (1) (2) (3)

Ship 2 cranes: (4)(5)(6)(7)

17:29 14:38

Ship 1 cranes: (4)(5)(6)(7)

11:57 Ship 6 cranes: (4)(5)(6)(7)

Ship 11 cranes: (1) (2) (3) 09:00

08:00

08:30

09:43 Ship 7 cranes: (4)(5)(6)(7) 07:00

06:00 04:47

04:00 02:00 00:30 00:00

Ship 5 cranes: (1)(2)(3)(7)

Berth 1

02:40 Ship 3 cranes: (4)(5)(6) Berth 2

Berth 3

Berth 4

Figure 4.4  The Gantt chart of solution 2.

a vessel spends in the bay before being berthed; that is, the berthing time minus the arrival time. Handling time is the time vessel spends at the port. Delay is the due date minus departure time of the vessel. Note that time scale is converted into minutes for benchmark purposes with earlier studies in literature. In this solution, total service time is 2165, handling time is 1555, waiting time is 610, and delay time is 0. The number of crane setups is 40. The last nondominated solution in Table 4.2 (solution 23) gives the Pareto optimal solution, which minimizes the quay crane setups. The relative computational results, presented in a similar fashion for this solution, are given in Figure 4.5 and Table 4.4. In solution 23, total service time is 4396, handling time is 3880, waiting time is 516, and delay time is 0. The number of crane setups is 18. Note that this

99

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

Table 4.3  Decomposition of the Objectives for Solution 2

1 2 3 4 5 6 7 8 9 10 11 Total

SHIP NAME

ASSIGNED BERTH

MSG NTD CG NT LZ XY LZI GC LP LYQ CCG

2 2 2 2 1 3 4 1 4 3 1

WAITING TIME [1] (MIN)

HANDLING TIME [2] (MIN)

SERVICE TIME ([1] + [2])

TOTAL DELAY (MIN)

NUMBER OF CRANE SETUPS

177 338 0 0 0 73 0 17 0 5 0 610

161 171 130 65 257 134 163 175 75 57 167 1555

338 509 130 65 257 207 163 250 75 62 167 2165

0 0 0 0 0 0 0 0 0 0 0 0

4 4 3 4 4 4 4 3 3 4 3 40

23.59

23:13 Ship 10 22.00 cranes: (1) (2) 20.00 18:00 16:00

23:09 Ship 4 cranes: (3) (4) 21:00

18:54 Ship 8 cranes: (1) (2)

14:21 14:00

Crane: (1)

12:00

Ship 1 cranes: (1) (2)

17:53

23:23

21:00 17:24

04:00

Ship 7 crane: (3)

Ship 6 crane: (4)

Ship 2 cranes: (6) (7)

06:59 Ship 5 cranes: (1) (2)

Ship 3 crane: (4)

02:00 00:00

Ship 11 crane: (7) 14:42

10:00 09:03 08:00 06:00

23:02

Ship 9 cranes: (5) (6)

00:30 Berth 1

Berth 2

Figure 4.5  The Gantt chart of solution 23.

Berth 3

Berth 4

08:30

10 0

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

Table 4.4  Decomposition of the Objectives for Solution 23

1 2 3 4 5 6 7 8 9 10 11 Total

SHIP NAME

ASSIGNED BERTH

MSG NTD CG NT LZ XY LZI GC LP LYQ CCG

1 4 3 2 1 3 2 1 3 1 4

WAITING TIME [1] (MIN)

HANDLING TIME [2] (MIN)

SERVICE TIME ([1] + [2])

TOTAL DELAY (MIN)

NUMBER OF CRANE SETUPS

3 0 0 0 0 0 0 171 0 0 342 516

321 342 389 129 513 534 653 273 113 113 500 3880

324 342 389 129 513 534 653 444 113 113 842 4396

0 0 0 0 0 0 0 0 0 0 0 0

2 2 1 2 2 1 1 2 2 2 1 18

solution belongs to an extreme point in the optimal Pareto efficient frontier. It is, therefore, foreseeable to have service time increased by a large extent, though it is interesting to see that the solution still does not cause any delays with respect to the due date given. To provide an additional example, in Figure 4.6 and Table 4.5, computational results for solution 16 are reported. Here, total service time is 2842, handling time is 2578, waiting time is 264, and delay time is 0. The number of crane setups is 25. No delay times have been encountered for the given solutions. The minimum total service time found in Liang et al. (2009) is reported as 2165 min for all vessels. Han et al. (2010) have demonstrated the minimum total service time as approximately 36 h for the case where the maximum allowable number of cranes for a vessel is set to 4. Our solution with 2165 min for the Pareto optimal solution, which minimizes the service time of the vessels, is equal to the value found by Liang et al. (2009). Consequently, our study proves the optimality of this solution by implementing an exact algorithm approach. In both studies by Liang et al. (2009) and Han et al. (2010), crane movements are defined for movements among berths. In our formulation, however, we also take into account the movement among vessels, as the setup cost of a crane to serve a vessel is not neglected (LALB Harbor Safety Committee, 2012). Moreover, in their formulation,

101

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

23:59

23:23

22:00

Ship 10 cranes: (1) (2) 21:30

20:00

14:00 12:00

14:38

Ship 1 crane: (7) 10:56

Cranes: (1)(2) (1)(2)(3) Ship 7 06:12

06:00

15:53 13:40

Ship 2 cranes: (1) (2)

08:00 09:00

02:00

21:00

21:00

Ship 1 cranes: (3) (7)

10:00

04:00

22:18

Ship 9 cranes: (5) (6)

Ship 4 cranes: (3) (4)

18:05

18:00 16:00

23:23

23:09

Ship 11 cranes: (3) (4)

Berth 1

11:30 Ship 6 11:28 cranes: (4) (6) (7) 08:30

Crane: (4)

07:00

Ship 5 cranes (1) (2) (3)

00:00

Ship 8 cranes: (5) (6)

03:45

Ship 3 cranes (4)(5) Berth 2

Berth 3

03:30 Berth 4

Figure 4.6  The Gantt chart of solution 16. Table 4.5  Decomposition of the Objectives for Solution 16

1 2 3 4 5 6 7 8 9 10 11 Total

SHIP NAME

ASSIGNED BERTH

MSG NTD CG NT LZ XY LZI GC LP LYQ CCG

1 2 3 2 1 4 2 4 3 1 3

WAITING TIME [1] (MIN)

HANDLING TIME [2] (MIN)

SERVICE TIME ([1] + [2])

TOTAL DELAY (MIN)

NUMBER OF CRANE SETUPS

148 116 0 0 0 0 0 0 0 0 0 264

387 342 195 129 342 178 236 263 113 113 280 2578

535 458 195 129 342 178 236 263 113 113 280 2842

0 0 0 0 0 0 0 0 0 0 0 0

2 2 2 2 3 3 3 2 2 2 2 25

10 2

D ENIZ O Z D EMIR A N D Ev RIM URS AvA S

cranes are assumed as identical, and the information as to which specific crane is assigned to a vessel cannot be retrieved from the solution, and this decision is left to the decision maker. We, in turn, also support the decision maker by specifying the crane identities. From the numerical results, one can conclude that when quay crane setup costs are ignored, the total service time of vessels decreases. A decision maker might choose to prefer a solution closer to the lefthand side of the Pareto efficient frontier in Figure 4.3, if the setup costs are not so significant. On the other hand, in case of extreme setup costs of quay cranes, the decision maker is directed toward the solutions in the right-hand side. The Pareto efficient frontier in this case may be used as an efficient decision support tool for decision makers. 4.6  Conclusions and Further Research Directions

Port management is often faced with many challenging problems that require the decision makers to consider numerous issues all at a time. Involvement of multiple parties in the activities associated with container terminal operations makes the port management problem even more complex. The presence of such complications necessitates the use of a decision support tool. In this study, we propose a decision support tool for the simultaneous berth allocation and crane scheduling problem in consideration of the multiple objectives that need to be satisfied. We first extend the literature by better reflecting practical considerations. We then formulate this problem by bi-objective integer programming. To solve the problem, we follow an ε-constraint method–based solution algorithm to acquire the nondominated berth–crane assignments and schedules as the Pareto optimal frontier. The decision makers may use the obtained optimal Pareto frontier as a decision aid tool. As an insight, we may say that the decisions will be made toward the left-hand side of the frontier if crane setup costs are not so substantial. Conversely, with extreme crane costs, the decision makers are directed toward the solutions in the right-hand side. With this multisolution approach, decision maker is offered the flexibility of adjusting the balance within conflicting objectives.

BI - O B JEc TI v E BER T H – C R A NE A L L O c ATI O N

10 3

We would like to emphasize the fact that this study is part of an ongoing work. We aim to implement our model to other ports of the world to further examine practical considerations that may be required. We will work toward the potential to incorporate our solution procedure with in-house-developed optimization techniques. As a further future work, we believe that the framework we have presented here may further be extended to capture more realistic implementations incorporating issues such as the uncertainty residing in the arrival time of vessels and handling time of cranes. Furthermore, objectives of the model may be analyzed in detail and restructured in parallel to the needs of the decision makers. As last words, it should be kept in mind that this model is a decision tool that can help decision makers to understand the situation better, rather than finding the optimum design. By adjusting parameters or assigning priorities to different objectives, it is possible to obtain a number of satisfactory solutions; however, the ultimate decision always lies with the decision maker.

Acknowledgment

This study is part of a research project funded by TUBITAK (The Scientific and Technological Research Council of Turkey) under the 1001 Support Program for Scientific and Technological Research Projects, grant no. 112M865.


5  Route Selection Problem in the Arctic Region for the Global Logistics Industry

BEKIR SAHIN

Contents

5.1 Introduction 105
5.2 Route Selection Problem in the Arctic Region 107
5.3 Methodology 110
5.3.1 Linguistic Variable 110
5.3.2 Fuzzy Sets and Triangular Fuzzy Numbers 111
5.3.3 Fuzzy Analytic Hierarchy Process 112
5.3.4 Centric Consistency Index 118
5.4 GF-AHP Design and Application for Track Selection 118
5.5 Conclusion 129
References 129

5.1  Introduction

Logistics is the process of managing a distribution network and optimizing the flow of resources; logistics management therefore benefits from optimal product transportation. The transportation patterns of the world economy are undergoing a dramatic change with the emergence of new Arctic seaways (Wilson et al. 2004). The melting of sea ice in the northern hemisphere is being observed with great attention. As a result of both greenhouse effects and seasonal fluctuations in long-term average temperatures, a historic opportunity presents itself to extend maritime transport over the Arctic region. For instance, the Northern Sea Route shortens the Yokohama–London distance from 11,447 nautical miles via the Suez Canal to 7,474 nautical miles, a reduction of roughly 35%.


Figure 5.1  Overview of the Northern Sea Route and the Suez Route. (Adapted from Schøyen, H. and Bråthen, S., J. Transp. Geogr., 19, 977, 2011.)

Figure 5.1 illustrates the Arctic Sea routes that shorten the traditional routes. A shipping route over the Arctic region would significantly reduce the time and energy spent on long voyages on a regular basis. However, because the possible shipping routes are still covered by floating ice (i.e., open ice, closed ice), Arctic routing remains the subject of ongoing debate owing to its highly technical circumstances. Navigational track (route) optimization, entry into the ice field, and route selection are some of the challenges in this field. Among these, route selection is the main concern of this chapter, and it depends on a number of factors such as the dimensions and the physical conditions of the route. Arctic navigation is a new concept with a short literature that includes track optimization among other aspects (Thomson and Sykes 1988; Ari et al. 2013). Once a navigational route is selected, various studies can improve the navigational quality in terms of time, structural stress, and fuel consumption. Route selection, however, is the primary problem, and it has not been discussed in earlier studies.

The route selection problem can be categorized into static route selection and dynamic route selection. The static approach is based on instantaneous inputs of the indicators and assumes that ice field and weather conditions are stationary over the intended navigational sea field. In the dynamic approach, the ice field and weather may vary over the region, and the size and direction of the relevant vectors may change over time. As an introduction to the problem, this chapter deals with the static route selection approach from the perspective of the subjective judgments of shipmasters. These field experts have ample experience and knowledge of ice navigation operations and winterization. The problem is investigated using the fuzzy analytic hierarchy process (F-AHP). The reasons for selecting F-AHP are twofold: first, AHP is very useful for handling both quantitative and subjective matters, and second, the fuzzy extension facilitates the process for the survey subjects by using linguistic representations. Decision makers' (DMs') uncertainty is common, and fuzzy transformations help the moderator (i.e., the researcher) collect a span of data rather than a single crisp number with an unknown degree of certainty.

5.2  Route Selection Problem in the Arctic Region

Logistics activities in the Arctic region are regularly conducted by ferries, large roll-on/roll-off vessels (RO-ROs), and icebreaker convoys. These powerful vessels leave tracks in the ice; therefore, recent tracks are preferable for navigation in ice-covered sea regions. Figure 5.2 is an empirical image showing an objective vessel and previous tracks opened by icebreakers or other vessels. Ice navigation is difficult in such an environment, and route selection management requires field experience. For vessels traveling from one point to another in ice, it is important to identify the optimal routes that reduce travel time and fuel consumption and lower the risk of getting stuck in ice. Seafarers gather route information from various sources such as radar (e.g., automatic radar plotting aids, ARPA), satellite images, infrared cameras, visual recognition, and charts. After many continuous observations, the available paths are drawn as shown in Figure 5.3. There are three different possible routes connecting the starting point to the final destination. The average route width (ARW), slot availability (S), maximum width along the track (Max), minimum width along the track (Min), ice concentration (IC), route length (RL), sea depth (SD),


Figure 5.2  A ship prepares to navigate in ice-covered sea regions.


Figure 5.3  Routes for the ship navigation in ice-covered sea regions (tracks 1, 2, and 3 from left to right).

Let A = (aij)n×n, with aij > 0 and aij × aji = 1, be a judgment matrix. The prioritization method denotes the process of acquiring a priority vector w = (w1, w2, …, wn)^T, where wi ≥ 0 and ∑_{i=1}^{n} wi = 1, from the judgment matrix A. Let D = {d1, d2, …, dm} be the set of experts and λ = {λ1, λ2, …, λm} be the weight vector of the DMs, where λk > 0, k = 1, 2, …, m, and ∑_{k=1}^{m} λk = 1.

Let E = {e1, e2, …, em} be the set of professional experience values (in years for this chapter) of the experts; λk for each expert is defined by


Figure 5.9  GF-AHP procedure: definition of the objective, determination of the alternatives and criteria for the Arctic route selection problem, expert consultation by survey, structuring of the decision hierarchy, data collection and pairwise comparisons with a consistency check loop (model accepted when CCI < 0.37), data analysis and evaluation of the alternatives, and selection of the best alternative for the Arctic route problem.


Table 5.1  Membership Function of Linguistic Scale

FUZZY NUMBER   LINGUISTIC SCALE       MEMBERSHIP FUNCTION   INVERSE
A1             Equally important      (1, 1, 1)             (1, 1, 1)
A2             Moderately important   (1, 3, 5)             (1/5, 1/3, 1)
A3             More important         (3, 5, 7)             (1/7, 1/5, 1/3)
A4             Strongly important     (5, 7, 9)             (1/9, 1/7, 1/5)
A5             Extremely important    (7, 9, 9)             (1/9, 1/9, 1/7)

λk = ek / ∑_{k=1}^{m} ek    (5.12)

Let A^(k) = (aij^(k))n×n be the judgment matrix that is gathered by the DM dk. wi^(k) is the priority vector of criteria for each expert, calculated by

wi^(k) = ( ∏_{j=1}^{n} aij^(k) )^{1/n} / ∑_{i=1}^{n} ( ∏_{j=1}^{n} aij^(k) )^{1/n}    (5.13)

The individual priority aggregation is defined by

wi^(w) = ∏_{k=1}^{m} ( wi^(k) )^{λk} / ∑_{i=1}^{n} ∏_{k=1}^{m} ( wi^(k) )^{λk}    (5.14)

where wi^(w) is the aggregated weight vector. Then the extent synthesis method (Chang 1996) is applied for the subsequent selection. A pairwise comparison between alternatives i and j for criterion C is defined by

aij^C = Ari / Arj    (5.15)

where Ari is the rank valuation set of alternative i. With the final consistency control, the generic fuzzy AHP (GF-AHP) procedure is complete. Consistency control and the centric consistency index (CCI) for F-AHP applications are described in the following section.
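For concreteness, the sketch below works through Equations 5.12 through 5.14 on crisp judgment matrices: expert weights are derived from years of experience, each expert's priority vector is obtained by the row geometric mean, and the individual priorities are combined by a weighted geometric aggregation. The two-expert, three-criterion data are hypothetical, and the chapter's fuzzy (triangular) judgments and extent synthesis step are not reproduced here.

```python
import numpy as np

def expert_weights(experience_years):
    """Eq. (5.12): weight each expert by professional experience."""
    e = np.asarray(experience_years, dtype=float)
    return e / e.sum()

def priority_vector(judgment_matrix):
    """Eq. (5.13): normalized row geometric mean of a (crisp) judgment matrix."""
    A = np.asarray(judgment_matrix, dtype=float)
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[1])   # (prod_j a_ij)^(1/n)
    return gm / gm.sum()

def aggregate_priorities(priority_vectors, lambdas):
    """Eq. (5.14): weighted geometric aggregation of the experts' priorities."""
    W = np.asarray(priority_vectors, dtype=float)   # shape (m experts, n criteria)
    lam = np.asarray(lambdas, dtype=float)
    agg = np.prod(W ** lam[:, None], axis=0)        # prod_k (w_i^(k))^lambda_k
    return agg / agg.sum()

# Hypothetical data: two shipmasters with 20 and 10 years of experience,
# each providing a reciprocal pairwise comparison matrix for three criteria.
lam = expert_weights([20, 10])
A1 = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]
A2 = [[1, 1, 3], [1, 1, 5], [1/3, 1/5, 1]]
w = aggregate_priorities([priority_vector(A1), priority_vector(A2)], lam)
print("expert weights:", lam)                  # [0.667, 0.333]
print("aggregated criterion weights:", w)
```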

5.3.4  Centric Consistency Index

According to Saaty's approach, all DMs' matrices should be consistent in order to analyze the selection problem (Saaty and Vargas 1987). For the consistency control of the F-AHP method, Duru et al. (2012) proposed a CCI based on the geometric consistency index (Crawford and Williams 1985; Aguarón and Moreno-Jimenez 2003). The CCI is calculated as

CCI(A) = [2 / ((n − 1)(n − 2))] ∑_{i<j} ( log((aLij + aMij + aUij)/3) − log((wLi + wMi + wUi)/3) + log((wLj + wMj + wUj)/3) )^2    (5.16)

When CCI(A) = 0, A is fully consistent. Aguarón and Moreno-Jimenez also express the thresholds as GCI = 0.31 for n = 3, GCI = 0.35 for n = 4, and GCI = 0.37 for n > 4. When CCI(A) is below the corresponding threshold, the judgment matrix is regarded as acceptably consistent.

where t, s, qik, and uik are decision variables. (7.13)


Step 8: Calculate the weight of each criterion, ψj = (ψ1j, ψ2j, ψ3j), for α = 0 and α = 1, employing

(ψj)α^U = max ∑_{i=1}^{m} vi (X′ij)α^U
subject to
λ (Wi)α^L ≤ vi ≤ λ (Wi)α^U,  i = 1, 2, …, m
∑_{i=1}^{m} vi = 1
λ, vi ≥ 0    (7.14)

(ψj)α^L = min ∑_{i=1}^{m} vi (X′ij)α^L
subject to
λ (Wi)α^L ≤ vi ≤ λ (Wi)α^U,  i = 1, 2, …, m
∑_{i=1}^{m} vi = 1
λ, vi ≥ 0    (7.15)

where λ and vi are decision variables.

Step 9: Calculate the distances from the ideal and the anti-ideal solutions (Dp* and Dp−, respectively) for each alternative as

Dp* = ∑_{j=1}^{n} (1/2) { max( ψ1j |y1pj − 1|, ψ3j |y3pj − 1| ) + ψ2j |y2pj − 1| }    (7.16)

Dp− = ∑_{j=1}^{n} (1/2) { max( ψ1j |y1pj − 0|, ψ3j |y3pj − 0| ) + ψ2j |y2pj − 0| }    (7.17)


Step 10: Calculate the ranking index (RI) of the pth supplier:

RIp = Dp− / (Dp− + Dp*)    (7.18)

Step 11: Rank the suppliers according to RIp values in descending order. Identify the alternative with the highest RIp as the best supplier.
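The sketch below illustrates Steps 8 through 11 under simplifying assumptions: the bound computation of Equations 7.14 and 7.15 is posed as a small linear program in the variables (v1, …, vm, λ), and Equations 7.16 through 7.18 are then applied to hypothetical ψ weights and normalized ratings. All numerical inputs are illustrative placeholders rather than the chapter's data.

```python
import numpy as np
from scipy.optimize import linprog

def weight_bound(x_col, w_lower, w_upper, sense="max"):
    """Eqs. (7.14)/(7.15): upper (max) or lower (min) bound of a criterion
    weight at a given alpha-cut; decision variables are v_1..v_m and lambda."""
    m = len(x_col)
    c = np.append(np.asarray(x_col, dtype=float), 0.0)
    if sense == "max":
        c = -c                                    # linprog minimizes
    A_ub, b_ub = [], []
    for i in range(m):
        row = np.zeros(m + 1); row[i] = 1.0; row[m] = -w_upper[i]
        A_ub.append(row); b_ub.append(0.0)        # v_i - lambda*W_i^U <= 0
        row = np.zeros(m + 1); row[i] = -1.0; row[m] = w_lower[i]
        A_ub.append(row); b_ub.append(0.0)        # lambda*W_i^L - v_i <= 0
    A_eq = [np.append(np.ones(m), 0.0)]           # sum_i v_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + 1), method="highs")
    return res.fun if sense == "min" else -res.fun

def ranking_index(psi, y):
    """Eqs. (7.16)-(7.18): psi has rows (psi1, psi2, psi3), y has rows
    (y1, y2, y3) with one supplier's normalized ratings over the n criteria."""
    d_star = 0.5 * (np.maximum(psi[0] * np.abs(y[0] - 1),
                               psi[2] * np.abs(y[2] - 1)) + psi[1] * np.abs(y[1] - 1))
    d_minus = 0.5 * (np.maximum(psi[0] * np.abs(y[0] - 0),
                                psi[2] * np.abs(y[2] - 0)) + psi[1] * np.abs(y[1] - 0))
    D_star, D_minus = d_star.sum(), d_minus.sum()
    return D_minus / (D_minus + D_star)

# Illustrative Step 8 call for one criterion with three CNs (m = 3):
psi_u = weight_bound([0.8, 0.6, 0.7], [0.10, 0.05, 0.08], [0.20, 0.15, 0.18], "max")
psi_l = weight_bound([0.3, 0.2, 0.4], [0.10, 0.05, 0.08], [0.20, 0.15, 0.18], "min")
print(psi_l, psi_u)

# Illustrative Steps 9-11 for one supplier over n = 2 criteria:
psi = np.array([[0.05, 0.08], [0.10, 0.13], [0.16, 0.17]])
y = np.array([[0.6, 0.7], [0.7, 0.8], [0.8, 0.9]])
print("RI =", ranking_index(psi, y))
```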

7.7  Case Study

Over the past two decades, parallel to the upsurge in the number and complexity of medical devices, the medical device industry has become intensely competitive, with an increase in the number of manufacturing companies. Selecting the best medical device supplier among multiple alternatives has become one of the most critical decisions faced by purchasing managers in the medical device supply chain. The performance of suppliers plays a key role in cost, quality, and service in achieving customer satisfaction in the health-care industry. In order to demonstrate the application of the proposed decision-making method to medical device supplier selection, an evaluation of epidural catheter suppliers is presented. The case study is conducted in a private hospital on the Asian side of Istanbul. The hospital operates all major departments and includes facilities such as clinical laboratories, an emergency service, intensive care units, and operating rooms. First, a HOQ is constructed that demonstrates the relationships between the features that epidural catheters must possess and the supplier assessment criteria, as well as the interactions among the supplier assessment criteria. As a result of discussions with experts from the purchasing department of the hospital, nine fundamental characteristics required of epidural catheters purchased from medical suppliers (CNs) are determined. These are cost (CN1), kink resistance (CN2), friction (CN3), high tensile strength (CN4), atraumatic tip design (CN5), ease of threading and removal (CN6), ease of anchoring with the catheter connector (CN7), good flow characteristics (CN8), and shear resistance (CN9). Nine criteria relevant to supplier assessment are identified as product volume (TA1), delivery (TA2), payment method (TA3), supply variety (TA4), reliability (TA5), experience in the sector (TA6), earlier business relationship (TA7), management (TA8), and geographical location (TA9). There are 12 suppliers who are in contact with the hospital. The evaluation of the direct influence matrix among the CNs is conducted by a committee of six decision makers (DM1, DM2, DM3, DM4, DM5, DM6). DM1, DM2, and DM3 used the linguistic term set definitely low (DL), very low (VL), low (L), moderate (M), high (H), very high (VH), and definitely high (DH), as shown in Figure 7.1, whereas the remaining three decision makers, namely DM4, DM5, and DM6, preferred a different linguistic term set with very low (VL), low (L), moderate (M), high (H), and very high (VH), as depicted in Figure 7.2. The β values of the direct influence matrix among the CNs are given in Table 7.2.


Figure 7.1  A linguistic term set where DL (0, 0, 0.16), VL (0, 0.16, 0.33), L (0.16, 0.33, 0.50), M (0.33, 0.50, 0.66), H (0.50, 0.66, 0.83), VH (0.66, 0.83, 1), and DH (0.83, 1, 1).


Figure 7.2  A linguistic term set where VL (0, 0, 0.25), L (0, 0.25, 0.5), M (0.25, 0.5, 0.75), H (0.5, 0.75, 1), and VH (0.75, 1, 1).
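As a small illustration of the two linguistic term sets, the sketch below encodes the triangular fuzzy numbers from the captions of Figures 7.1 and 7.2 and evaluates a triangular membership function; it is only a convenience for readers, not part of the chapter's methodology.

```python
# Triangular fuzzy numbers (l, m, u) taken from the captions of Figures 7.1 and 7.2.
TERM_SET_7_1 = {  # used by DM1, DM2, DM3
    "DL": (0.0, 0.0, 0.16), "VL": (0.0, 0.16, 0.33), "L": (0.16, 0.33, 0.50),
    "M": (0.33, 0.50, 0.66), "H": (0.50, 0.66, 0.83), "VH": (0.66, 0.83, 1.0),
    "DH": (0.83, 1.0, 1.0),
}
TERM_SET_7_2 = {  # used by DM4, DM5, DM6
    "VL": (0.0, 0.0, 0.25), "L": (0.0, 0.25, 0.5), "M": (0.25, 0.5, 0.75),
    "H": (0.5, 0.75, 1.0), "VH": (0.75, 1.0, 1.0),
}

def triangular_membership(x, tfn):
    """Membership degree of x in the triangular fuzzy number tfn = (l, m, u)."""
    l, m, u = tfn
    if x <= l or x >= u:
        # degenerate shoulders (l == m or m == u) still peak at m
        return 1.0 if x == m else 0.0
    return (x - l) / (m - l) if x <= m else (u - x) / (u - m)

print(triangular_membership(0.4, TERM_SET_7_1["M"]))   # partially "moderate"
print(triangular_membership(0.4, TERM_SET_7_2["M"]))
```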


Table 7.2  β Values of the Direct Influence Matrix among CNs

       CN1    CN2    CN3    CN4    CN5    CN6    CN7    CN8    CN9
CN1    0.000  6.642  6.311  6.391  6.294  7.281  6.521  6.534  7.434
CN2    7.281  0.000  6.714  6.899  6.679  7.434  1.083  4.789  7.434
CN3    7.380  7.327  0.000  2.192  6.910  7.380  1.421  0.437  6.342
CN4    6.968  6.279  3.482  0.000  2.731  2.296  0.054  6.642  7.242
CN5    7.281  5.910  6.285  0.958  0.000  7.434  2.024  0.726  6.279
CN6    7.434  6.971  6.082  1.167  6.210  0.000  1.110  0.264  6.142
CN7    6.899  0.000  0.085  0.000  2.677  1.330  0.000  0.264  5.892
CN8    7.434  3.998  0.057  5.696  0.759  0.000  0.000  0.000  3.760
CN9    7.434  6.642  6.024  6.968  5.639  4.742  6.575  4.007  0.000

By employing the DEMATEL method, the weights of the CNs are determined as 0.1598, 0.1337, 0.1138, 0.0993, 0.1130, 0.1130, 0.0583, 0.0709, and 0.1382, respectively. The data related to supplier selection, provided in Table 7.3, consist of the assessments of three decision makers employing the linguistic variables defined in Figure 7.1. Using Equations 7.12 through 7.15, the weights of each TA are calculated as in Table 7.4. The distances from the ideal and the anti-ideal solutions for each alternative and the ranking index of each alternative are computed employing Equations 7.16 through 7.18, as in Table 7.5. The rank order of the suppliers is Sup 7 ≻ Sup 1 ≻ Sup 4 ≻ Sup 2 ≻ Sup 3 ≻ Sup 6 ≻ Sup 9 ≻ Sup 8 ≻ Sup 11 ≻ Sup 5 ≻ Sup 10 ≻ Sup 12. According to the results of the analysis, supplier 7 is determined to be the most suitable supplier, followed by supplier 1, supplier 4, and supplier 2. Suppliers 10 and 12 are ranked at the bottom due to late delivery times and inadequate product volume.
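For readers unfamiliar with DEMATEL, the sketch below shows the classical crisp computation on an already-aggregated direct influence matrix such as the β values in Table 7.2: the matrix is normalized, the total relation matrix T = N(I − N)⁻¹ is formed, and criterion weights are taken as the normalized prominence (row sum plus column sum). This is a generic DEMATEL outline under those assumptions; the chapter's fuzzy aggregation of the multi-granular linguistic assessments is not reproduced, so the numbers it yields are only indicative.

```python
import numpy as np

def dematel_weights(direct_influence):
    """Classical DEMATEL: normalize, build the total relation matrix,
    and derive weights from the prominence (D + R) of each criterion."""
    X = np.asarray(direct_influence, dtype=float)
    n = X.shape[0]
    # Common normalization: divide by the larger of the max row and column sums.
    N = X / max(X.sum(axis=1).max(), X.sum(axis=0).max())
    T = N @ np.linalg.inv(np.eye(n) - N)      # total relation matrix
    D = T.sum(axis=1)                          # influence dispatched by each criterion
    R = T.sum(axis=0)                          # influence received by each criterion
    prominence = D + R
    return prominence / prominence.sum()

# Usage with a small hypothetical 3x3 direct influence matrix:
beta = [[0.0, 4.0, 2.0],
        [3.0, 0.0, 1.0],
        [2.0, 2.0, 0.0]]
print(dematel_weights(beta))
```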

7.8 Conclusion

Considering the global challenges in the manufacturing environment, organizations are forced to optimize their business processes in order to remain competitive. To reach this aim, firms must work with their supply chain partners to improve the chain's total performance. Supplier performance plays a key role in cost, quality, delivery, and service in achieving the objectives of a supply chain. Hence, supplier selection is considered one of the most critical activities of purchasing management in a supply chain.

Table 7.3  Ratings of Suppliers with respect to TAs (each row lists the three decision makers' ratings for Sup 1 through Sup 12)

TA1: (VH, VH, VH) (M, VH, M) (M, M, M) (L, M, L) (M, M, M) (H, H, H) (VH, DH, VH) (M, VL, L) (M, M, M) (L, M, L) (M, VL, M) (DL, DL, VL)
TA2: (M, H, L) (H, VH, H) (H, DH, H) (VH, VH, VH) (H, VH, H) (H, H, H) (M, H, VH) (M, M, M) (H, H, H) (L, M, VL) (L, M, VL) (VL, L, DL)
TA3: (H, DH, H) (M, M, M) (H, H, M) (VH, H, VH) (H, VH, H) (VH, DH, VH) (VH, VH, DH) (DH, DH, H) (M, H, M) (H, H, H) (H, H, H) (H, H, H)
TA4: (VH, VH, VH) (H, H, H) (M, H, M) (L, H, L) (M, M, M) (H, H, H) (H, H, H) (L, L, L) (M, M, H) (H, M, M) (M, VL, M) (L, M, VL)
TA5: (H, VH, VH) (VH, H, VH) (M, H, H) (H, DH, VH) (L, L, L) (H, H, H) (H, VH, H) (M, H, M) (H, M, H) (M, M, M) (L, L, L) (H, H, H)
TA6: (DH, DH, VH) (VH, H, DH) (H, H, H) (H, VH, H) (M, M, M) (L, M, L) (VH, DH, H) (L, H, M) (M, L, M) (H, L, H) (H, M, H) (M, M, M)
TA7: (H, H, M) (H, VH, H) (DH, VH, VH) (H, DH, H) (H, H, H) (H, VH, H) (H, VH, VH) (VH, M, VH) (H, H, H) (M, M, H) (H, VH, M) (L, L, L)
TA8: (H, H, H) (H, H, H) (M, H, M) (H, H, VH) (L, VL, L) (H, M, H) (VH, VH, VH) (M, H, M) (M, M, H) (M, M, M) (H, H, H) (M, VL, M)
TA9: (M, VL, M) (L, L, L) (H, VH, H) (M, H, H) (M, M, M) (M, L, M) (L, M, M) (VH, DH, VH) (M, VH, M) (L, H, L) (VH, DH, VH) (M, DL, L)


Table 7.4  Weights of Each TA

TA    IMPORTANCE WEIGHT
TA1   (0.0434, 0.0708, 0.1122)
TA2   (0.0848, 0.1192, 0.1648)
TA3   (0.0648, 0.0952, 0.1381)
TA4   (0.0800, 0.1122, 0.1561)
TA5   (0.1050, 0.1369, 0.1740)
TA6   (0.1079, 0.1391, 0.1776)
TA7   (0.0984, 0.1355, 0.1730)
TA8   (0.0972, 0.1328, 0.1731)
TA9   (0.0334, 0.0584, 0.0981)

Table 7.5  Ranking of Suppliers

SUPPLIER   Dp*     Dp−     RIp     RANK
Sup 1      0.3116  0.9437  0.7518  2
Sup 2      0.3470  0.9035  0.7225  4
Sup 3      0.3568  0.8813  0.7118  5
Sup 4      0.3275  0.9273  0.7390  3
Sup 5      0.4916  0.7248  0.5959  10
Sup 6      0.3823  0.8593  0.6921  6
Sup 7      0.2761  0.9945  0.7827  1
Sup 8      0.4438  0.7814  0.6378  8
Sup 9      0.4326  0.7898  0.6461  7
Sup 10     0.5056  0.7074  0.5832  11
Sup 11     0.4801  0.7427  0.6074  9
Sup 12     0.6291  0.5762  0.4781  12

In a medical device supply chain, identifying the most appropriate supplier among multiple alternatives is of utmost importance. In this study, a fuzzy multicriteria group decision-making algorithm is presented for medical supplier evaluation and selection. The methodology developed here treats QFD planning as a fuzzy multicriteria group decision tool. It makes it possible to consider not only the impact of the relationships between the purchased product features and the supplier selection criteria, but also the inner dependencies among the supplier selection criteria, in order to better satisfy the company's requirements. Applying the decision framework presented here to real-world group decision-making problems in other disciplines that can be represented using HOQ matrices will be the subject of future studies.



8  Arc Selection and Routing for Restoration of Network Connectivity after a Disaster

AYŞE NUR ASALY AND F. SIBEL SALMAN

Contents

8.1 Introduction and Problem Definition 165
8.2 Literature Review 169
8.3 Complexity Analysis 171
8.4 Mathematical Model 174
8.4.1 Sets, Indices, and Input Parameters 175
8.4.2 Decision Variables 175
8.4.3 Objective Function 176
8.4.4 Vehicle Balance Equations 176
8.4.5 Constraints That Relate Variables xij and zij 176
8.4.6 Flow Balance Equations 177
8.4.7 Constraints That Relate Variables fij and xij 178
8.4.8 Component Connectivity Constraints 178
8.4.9 Constraints That Define the Variables 178
8.5 Data Acquisition and Generation 179
8.6 Computational Experiments and Results 188
8.7 Conclusions 192
References 193

8.1  Introduction and Problem Definition

Disaster management involves taking actions before and after a disaster to minimize its destructive effects. After a disaster, it is critical to reach affected areas to provide relief operations such as search and rescue, medical services, aid delivery, and the establishment of temporary shelters. Furthermore, routes should be provided for evacuation, and major gateways in the transportation system, such as airports and ports, should be accessible. One of the outcomes of a high-impact disaster is the disruption of transportation systems, which cripples postdisaster emergency and relief activities. In the 2013 Bohol earthquake and Typhoon Haiyan, rescue workers struggled to reach ravaged towns and villages in the central Philippines (Mogato and Ng 2013). Relief operations were hampered because roads, airports, and bridges had been destroyed or were covered in wreckage. After the devastating 2011 earthquake and the resulting tsunami in northeast Japan, almost 4000 road segments, 78 bridges, and 29 railway locations were reported to be damaged (BBC News and National Police Agency of Japan 2012). Accumulated debris in downtown Kamaishi City, Iwate Prefecture, and a damaged arterial road (National Highway 45) virtually isolated the community from rescue efforts. About 76% of the highways in the area were closed due to damage.

This study focuses on logistics planning to ensure connectivity of road networks in the immediate disaster response stage. As experienced in many cases worldwide, roads can be severely damaged in a natural disaster. For instance, in a high-magnitude earthquake, (1) some parts of the roads may be blocked by building, lamppost, tree, and car debris, or deformed, distorted, and ruptured due to ground failure and liquefaction; and (2) vulnerable structures such as bridges and viaducts may collapse. Damage to other infrastructure networks, such as natural gas or drainage systems, may also render roads dysfunctional. As a result, traffic is blocked at various links of the road network, and some nodes may become unreachable. Some of the damaged roads can be cleared or restored in a short time, whereas it may take many hours, days, or months to eliminate other types of damage. For example, after the 2011 earthquake and tsunami in Japan, Japanese road administrators immediately launched an emergency road restoration operation with the cooperation of local construction companies. The efforts concentrated on 16 routes, establishing first the vertical artery, followed by east–west routes. The operation was completed after 9 days. In general, the emergency restoration goal is to ensure connectivity of the road network and provide accessibility between people in different areas as fast as possible.


For this purpose, first, the road conditions are assessed, and the time to clear/open each road is estimated. The tasks that would take too long are postponed to later stages. Then, among the remaining tasks, a subset that enables connectivity should be selected, and a fleet of machinery or vehicles routed to carry them out in the shortest time. Since some people will want to evacuate the disaster area while others will be coming in to help, strong connectivity of the network is required. Recently, several studies have focused on upgrading a road network or improving accessibility after a disaster situation. These studies are reviewed in Section 8.2. To the best of our knowledge, the restoration of roads after a disaster by routing a fleet of vehicles in order to ensure strong connectivity of a network has not been addressed in the literature. In this study, we define a new network optimization problem to address this topic. Since the problem combines arc routing and network design elements, it is called the Arc Routing for Connectivity Problem (ARCP).

Before we define ARCP formally, some definitions are useful. A connected graph contains a directed path from a node i to another node j or a directed path from j to i for every pair of nodes i and j; otherwise, the graph is disconnected. A graph is strongly connected if it contains a directed path from i to j and a directed path from j to i for every pair of nodes i and j; otherwise, the graph is disconnected in the strong sense. We define ARCP on a directed, strongly connected, and simple graph G = (V, A) with nonnegative arc costs. After a natural disaster, the speed of transportation is highly dependent on road and extraordinary traffic conditions, as also stated in Nolz et al. (2011). Therefore, costs are calculated in terms of estimated time instead of distance. The traversal time on an unblocked (i.e., not blocked initially) arc, or on a blocked arc after it has been unblocked (i.e., opened), is equal to cij, where (i, j) represents the arc. We refer to the fleet of emergency response machinery (possibly including lighting, drainage pump, and satellite communication vehicles) that moves together as a single vehicle, which is located initially at a node d, for example, its depot or an emergency response facility. Moreover, a subset B of arcs, which are determined to be blocked according to postdisaster information on road conditions, is given such that GB = (V, A\B) is disconnected in the strong sense. The set B consists of all blocked arcs, and the set R, a subset of B, represents the arcs that will be traversed and cleared by the vehicle in order to restore strong connectivity of the graph. The set R is not known in advance, and its selection is a decision in the problem. The solution identifies R and constructs a walk for the vehicle that starts at its depot. We want the walk in the solution to cover the arcs in R; in other words, the arcs in the set A\B ∪ R should induce a connected graph, GR, on the set V. We assume that there are |Q| disconnected components in GB, in the strong sense, where Q is the set of disconnected components. Each component in Q consists of strongly connected nodes. We partition Q into three classes: (1) components within which the nodes are strongly connected and which require at least one incoming and one outgoing arc to be unblocked in order to be strongly connected to the remaining graph, (2) components that require at least one outgoing but no incoming arc to be unblocked in order to be strongly connected to the remaining graph, and (3) components that require at least one incoming but no outgoing arc to be unblocked in order to be strongly connected to the remaining network. Moreover, unblocking, that is, passing through a blocked arc for the first time, incurs work time in addition to the traversal time. More formally, we define the additional time of unblocking arc (i, j) as bij, where bij ≥ 0. In a walk, cij time units elapse each time an arc is traversed, and in addition, bij units elapse once for each blocked arc that is unblocked during the walk. In other words, a blocked arc is unblocked by the vehicle in its first traversal of that arc. We assume that traffic cannot flow in both directions after a blocked road is unblocked in one direction by a vehicle. Considering that allowing traffic in the reverse direction would slow down response activities, this is a reasonable assumption.

The objective is to minimize the time at which the graph becomes strongly connected; that is, by definition, there must be a path from each vertex to every other vertex in the network. In order to connect all the disconnected components, at least two arcs in opposite directions within the cutset of a component must be unblocked; otherwise, the network cannot be strongly connected. Since we are interested in minimizing the time at which the graph becomes connected, the return of the vehicle to its depot is not considered; therefore, the walk is open. We can define the objective function as min c(W) + b(W), where W is the walk of the vehicle, c(W) is the total traversal time, calculated by summing the traversal times cij of the arcs traversed by the vehicle, and b(W) is the total additional unblocking time (in terms of bij) incurred by the vehicle. The aim of this study is to develop a solution method for this connectivity problem that generates a solution in a short time. We formulate ARCP and observe, through numerical tests, for which cases it can be solved in reasonably short time. Our tests are performed on instances generated by considering the Istanbul road network at a macro level and its vulnerability to a potential earthquake. Our analysis of the solutions over a set of scenarios provides some insights for preparedness. The organization of this study is as follows: Section 8.2 reviews relevant studies in the literature. Section 8.3 gives a computational complexity proof for ARCP. In Section 8.4, a mixed integer programming (MIP) model for ARCP is given. Section 8.5 presents the data related to the Istanbul highway network, and Section 8.6 gives the computational results. Finally, in Section 8.7, we conclude the study with a summary, some comments, and directions for future research.
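To make the objective and the feasibility notion concrete, the sketch below uses networkx to check whether a candidate set R of unblocked arcs restores strong connectivity and to evaluate c(W) + b(W) for a candidate walk. It is an evaluation helper under the chapter's definitions, applied to a tiny made-up graph; it is not the solution method developed later in the chapter.

```python
import networkx as nx

def restores_strong_connectivity(V, A, B, R):
    """True if the arcs in (A \\ B) together with the unblocked arcs R
    induce a strongly connected graph on the node set V."""
    G = nx.DiGraph()
    G.add_nodes_from(V)
    G.add_edges_from((set(A) - set(B)) | set(R))
    return nx.is_strongly_connected(G)

def walk_time(walk, c, b, blocked):
    """Objective value c(W) + b(W): every traversal of an arc costs c[arc],
    and the first traversal of a blocked arc additionally costs b[arc]."""
    total, opened = 0.0, set()
    for arc in zip(walk, walk[1:]):
        total += c[arc]
        if arc in blocked and arc not in opened:
            total += b[arc]
            opened.add(arc)
    return total

# Tiny illustrative instance: two components {1, 2} and {3} after the disaster.
V = [1, 2, 3]
A = [(1, 2), (2, 1), (2, 3), (3, 2)]
B = [(2, 3), (3, 2)]                       # blocked arcs
c = {arc: 1.0 for arc in A}
b = {(2, 3): 4.0, (3, 2): 2.0}

R = [(2, 3), (3, 2)]
print(restores_strong_connectivity(V, A, B, R))        # True
print(walk_time([1, 2, 3, 2], c, b, set(B)))           # 1 + (1+4) + (1+2) = 9.0
```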

8.2  Literature Review

Arc routing problems have attracted the interest of researchers for a long time and have many application areas, such as delivery services and snow plowing. The problem addressed in this study falls into the class of arc routing problems. The main goal of this section is to introduce the problems most closely related to ARCP. In the rural postman problem (RPP), a given subset of arcs is required to be traversed at least once by a closed walk. The objective is to minimize the total travel time. The RPP is NP-hard on both undirected and directed graphs (Lenstra and Rinnooy Kan 1976). If the arc costs satisfy the triangle inequality, there exists a 3/2-approximation algorithm (Frederickson 1979). From this point on, the heuristic algorithm that Frederickson presents will be referred to as Frederickson's heuristic. Fernandez et al. (2003) give formulations and compare them with earlier formulations from the literature. They also propose a heuristic method that is based on Frederickson's heuristic. A local search approach is applied to the RPP by Groves and van Vuuren (2005). Another heuristic method is a constructive algorithm that performs local postoptimization in each step (Ghiani et al. 2006). Based on Frederickson's heuristic, Holmberg (2010) proposes heuristics using a minimum spanning tree solution and postprocessing techniques. A detailed review of work before the early 1990s can be found in Eiselt et al. (1995). Akoudad and Jawab (2013) provide a recent survey that presents some variations and applications of the RPP.

A variation of the RPP is studied by Araoz et al. (2009). In this problem, there is no required edge to be traversed. A profit function is defined on the edges that is taken into account only the first time an edge is traversed. The objective is to maximize the net profit after the cost of traversing edges is deducted. They solve a relaxed model and propose a heuristic method based on the 3T heuristic used in Fernandez et al. (2003). Araoz et al. (2006) studied the privatized RPP on an undirected graph and analyzed several linear systems of inequalities. In this problem, the edge profit function is similar to the unblocking time in ARCP. There is a cost of traversing an edge that is paid each time the edge is traversed, whereas profit is collected only the first time an edge is traversed. The aim is to find a closed walk starting and ending at a depot and traversing some edges in order to maximize the total profit. ARCP differs from the literature in several ways. In ARCP, strong connectivity is the main concern, whereas most other studies do not aim to ensure strong connectivity of the network. ARCP is similar to the RPP, but in our problem, the set of required arcs is not known in advance, and there is no requirement for the walk to be closed. Moreover, in ARCP, after the first traversal of a blocked arc, the traversal cost changes.

In the disaster context, several recent studies modeled upgrading the road network or improving accessibility after a disaster without considering routing. They focus on the selection of road segments that are to be upgraded or repaired. One such study is by Duque and Sörensen (2011). They investigate the case where there is a budget constraint and a number of nonoperative roads that need to be repaired after a disaster. They assign weights to rural towns depending on their importance, and their objective is to minimize the weighted sum of the times to travel from each rural town to its closest regional center (Duque and Sörensen 2011). They find the roads to be repaired in order to obtain the shortest paths between node pairs. Another study is by Campbell et al. (2006), which focuses on determining the number of edges to be upgraded before a catastrophe while minimizing the maximum travel time between any source–terminal/origin–destination (s–t) pair. They use heuristic methods to solve the problem. Only a few recent studies have addressed debris removal operations in terms of selecting the order in which the unblocking of edges should be conducted. Stilp et al. (2011) model debris management as a multiperiod network expansion problem and propose efficient heuristics. Sahin et al. (2013) aim at visiting critical disaster-affected districts as quickly as possible, taking into account priority levels and traversing (if necessary) blocked arcs by carrying out unblocking operations. They formulate a multiperiod mixed integer program and solve a case study. Aksu and Ozdamar (2014) consider a dynamic path-based model to identify the order in which blocked links should be restored within a given time limit. The objective is to maximize the total weighted earliness of all paths' restoration completion times. ARCP differs from these problems in its objective of ensuring connectivity in the shortest time.

8.3  Complexity Analysis

The problem defined in this study, namely, ARCP, is new to the arc routing literature. Therefore, we analyze the computational complexity of ARCP.

Theorem 8.1  ARCP is NP-hard.

Proof: In order to prove this theorem, we consider another NP-hard problem, RPP. We reduce RPP to ARCP.

Definition 8.1  Undirected rural postman problem (RPP). Let G = (V, E) be an undirected graph, where V is the vertex set, E is the edge set, cij (≥0) is the cost of traversing edge (i, j) ∈ E, and R ⊆ E is the set of required edges. The RPP is to determine a least cost closed walk starting from and ending at a depot, traversing each edge of R at least once. The RPP is known to be NP-hard (Lenstra and Rinnooy Kan 1976). Now, let us consider ARCP.


Definition 8.2  Arc routing for connectivity problem (ARCP). Let H = (N, A) be a directed strongly connected graph, where N is the vertex set, A is the arc set, and B ⊆ A is the set of blocked arcs. The graph induced by A\B is disconnected (in the strong sense). cij is the traversal time on an open arc (i, j) ∈ A, and bij is the time of unblocking arc (i, j) ∈ B in addition to the traversal time cij. ARCP finds a walk starting from the depot, traversing some of the blocked arcs in B to unblock them at the first traversal in order to connect the network. The travel time of the walk is minimized, such that the resulting graph is strongly connected.

For this proof, we take an instance I of RPP and construct an instance II of ARCP by a polynomial transformation τ between them.

Definition 8.3  Transformation τ. We define a directed and strongly connected graph H from G as follows. We replace every edge (i, j) in E\R with two arcs in both directions with the corresponding traversal times. We take G = (V, E), delete the edges in the set R, and for each (i, j) ∈ R, add three new nodes i′, j′, and p. We define blocked arcs (i, i′), (i′, i), (j, j′), and (j′, j), all with traversal and additional unblocking times of 0. Moreover, between i and j, new blocked arcs (i, p), (p, i), (j, p), and (p, j) with traversal and additional unblocking times cij/2 and 0, respectively, are defined. In order to transform a closed walk in I to an open walk in II, we add a dummy depot d′, which is connected to the original depot d of I in the ARCP instance with two arcs in both directions, one of them blocked. The traversal and additional unblocking times on this blocked arc from d to d′ are zero. The arc (d′, d) that is not blocked has a high traversal time, say M. By assigning a high traversal time to this arc, we force the vehicle to visit d′ last. The vehicle, located at d, first traverses the other arcs in its walk and then, to ensure a strongly connected graph, visits d′ as the last node in its walk. It does not visit d′ in the early stages of its walk, because then it would have to continue its walk to connect the remaining nodes by traversing the arc (d′, d), which would greatly increase the objective value. Instances I and II and the transformation are illustrated in Figure 8.1.



Figure 8.1  Instance I, instance II, and transformation.
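As an illustration of Definition 8.3, the sketch below builds the ARCP instance II from an RPP instance I as a dictionary of arcs with traversal time c, extra unblocking time b, and a blocked flag. The graph representation, the node naming scheme, and the value chosen for M are assumptions made for this sketch only.

```python
def rpp_to_arcp(V, E, R, c, depot, M=10**6):
    """Construct the ARCP instance of Definition 8.3 from an undirected RPP
    instance (V, E, R, c).  Returns (nodes, arcs), where arcs maps
    (u, v) -> {'c': traversal time, 'b': extra unblocking time, 'blocked': bool}."""
    nodes = set(V)
    arcs = {}

    def add(u, v, cost, extra=0.0, blocked=False):
        arcs[(u, v)] = {"c": cost, "b": extra, "blocked": blocked}

    # Non-required edges become a pair of open arcs.
    for (i, j) in set(E) - set(R):
        add(i, j, c[(i, j)])
        add(j, i, c[(i, j)])

    # Each required edge (i, j) is replaced by nodes i', j', p and blocked arcs.
    for (i, j) in R:
        ip, jp, p = f"{i}'", f"{j}'", f"p_{i}_{j}"
        nodes.update([ip, jp, p])
        for u, v in [(i, ip), (ip, i), (j, jp), (jp, j)]:
            add(u, v, 0.0, 0.0, blocked=True)            # zero-time blocked arcs
        for u, v in [(i, p), (p, i), (j, p), (p, j)]:
            add(u, v, c[(i, j)] / 2.0, 0.0, blocked=True)

    # Dummy depot d': blocked zero-time arc (d, d') and expensive open arc (d', d).
    dprime = f"{depot}_dummy"
    nodes.add(dprime)
    add(depot, dprime, 0.0, 0.0, blocked=True)
    add(dprime, depot, M)
    return nodes, arcs

# Toy RPP instance: a square graph with one required edge (1, 2).
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
R = [(1, 2)]
c = {(1, 2): 4.0, (2, 3): 1.0, (3, 4): 1.0, (4, 1): 1.0}
nodes, arcs = rpp_to_arcp(V, E, R, c, depot=1)
print(len(nodes), len(arcs))   # 8 nodes, 16 arcs
```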

Lemma 8.1  Transformation τ from I to II runs in polynomial time in terms of the size of the instance I of RPP.

Proof: For every edge in the set R in I, we delete one edge and add three nodes and eight arcs. Moreover, for the depot node, one dummy depot and two arcs are added. The edges that are not required to be traversed are doubled into arcs.

Now, we need to show that we can obtain an optimal solution to I when ARCP is solved on II. The nodes i′, j′, p, and d′ need to be visited in order to make the graph strongly connected. No matter from which direction the vehicle comes (from i to j or from j to i), it unblocks the arcs (i, i′) and (j, j′) to reach i′ or j′. By the definition of ARCP, for connectivity, the arcs (i′, i) and (j′, j) have to be unblocked as well. Moreover, node p has to be connected to the network, and unblocking one arc going out of node p and one arc coming into it is sufficient to ensure strong connectivity of p to the network. Possible routes for the arc segment that corresponds to a required edge are i−i′−i−p−j−j′−j or i−i′−i−p−i−⋯−j−j′−j and their reverses. In all cases, the travel time of these route segments is cij. If the vehicle merely needs to pass through nodes i and j, it does not visit i′ and j′, so as not to increase its travel time unnecessarily. These route segments can be converted to the edge (i, j) in the RPP. Consequently, the required edge (i, j) is traversed. In each traversal of (i, p) and (j, p) (or their reverses) together, the cost of the required edge, cij, is paid. Since the vehicle starts its walk at d and visits d′ as the last stop for connectivity purposes, the resulting walk can be transformed into a closed walk starting and ending at the depot node d by omitting the dummy node d′ and the corresponding arcs. In the end, the solution of RPP on I is obtained by solving ARCP on II. Since RPP is NP-hard and τ runs in polynomial time, ARCP is at least as hard as RPP.

8.4  Mathematical Model

This section presents a mathematical programming formulation of ARCP. Some properties of a feasible solution of ARCP are as follows:

• It is necessary that the arcs in a subset R of B are unblocked. The arcs in the cutsets of the components are candidates to be in R. However, additional arcs may also be unblocked in order to reach one of these arcs in a shorter time.
• In order to ensure connectivity of the graph, the total number of blocked arcs that are unblocked in the cutsets of all components has to be greater than or equal to 2(|Q| − 1); otherwise, connectivity cannot be ensured. In other words, in each component's cutset, at least two arcs that are in opposite directions must be open. This property is necessary for a solution to be feasible, but it is not sufficient for optimality.

In order to ensure connectivity and continuity of the walk, we define flow variables fij for each arc. For the depot, there is an amount of supply depending on the number of nodes that are visited by the vehicle. Similarly, each component has unit demand, so that each component receives flow and the graph becomes connected in the end. Then, to prevent flows on an arc that is not traversed, we relate the flow variables to xij, which denotes the number of times an arc (i, j) is traversed. The flow variables are defined as real numbers; however, owing to the unimodularity property, they take integer values because the flow variables have integer coefficients in the constraints. Moreover, we add a dummy sink node and force the vehicle to end its tour at this sink node (n + 1). For connectivity, we include cutset constraints. The details are given in the following subsections.

8.4.1  Sets, Indices, and Input Parameters

i, j: Indices of the vertices
n + 1: Index of the dummy sink node
V: Set of vertices: 1, …, n
A: Set of arcs
B: Set of blocked arcs
d: Index for the depot
D: Set of possible depots
q: Index of the components
Q: Set of disconnected components
S: Set of all subsets of components within which the nodes are strongly connected
s: Index of elements of S
Y+: Set of all subsets of components that require at least one outgoing arc but no incoming arc to be unblocked in order to be strongly connected to the remaining graph
Y−: Set of all subsets of components that require at least one incoming arc but no outgoing arc to be unblocked in order to be strongly connected to the remaining graph
y: Index of the components
M: A nonnegative scalar with a large enough value

8.4.2  Decision Variables

xij: Number of times that the vehicle traverses arc (i, j)
zij: Binary variable indicating if blocked arc (i, j) is unblocked
fij: Flow variable on arc (i, j)
vi: Number of times the vehicle visits node i

The MIP model for ARCP determines an open walk such that the disconnected components in the network are connected after unblocking a subset of the blocked arcs. The walk traverses a subset of the arcs in B, say R, so that the graph G′ = (V, A\B ∪ R) is connected. The model that solves ARCP gives a strongly connected graph. We explain the objective function and the constraints group by group as follows.


8.4.3  Objective Function

Constraint (8.1) represents the objective function that minimizes the total time spent by the vehicle until the network becomes strongly connected:

Minimize  ∑_{(i,j)∈A} cij xij + ∑_{(i,j)∈B} bij zij    (8.1)

8.4.4  Vehicle Balance Equations

Constraints (8.2) through (8.5) are vehicle balance equations. Constraint (8.2) ensures that the vehicle starts the tour at the depot vertex where it is positioned. Constraint (8.3) balances arrivals and departures for a nondepot node i. Constraint (8.4) forces the walk to end in the sink node. There is only one visit to the sink node and no return. The latter case is satisfied by constraint (8.5). The vehicle leaves the depot and its component, and does not return there if it will not visit another disconnected component by passing through its own component:

∑_{j∈V∪{(n+1)}} (xdj − xjd) = 1,  d ∈ D    (8.2)

∑_{j∈V∪{(n+1)}} (xij − xji) = 0,  ∀i ∈ V\D    (8.3)

∑_{j∈V} xj(n+1) = 1    (8.4)

x(n+1)i = 0,  ∀i ∈ V    (8.5)

8.4.5  Constraints That Relate Variables xij and zij

Constraint (8.6) ensures that if a blocked arc is unblocked, it is also traversed. We assume a blocked arc becomes open in both directions whenever the vehicle unblocks it in one direction. This assumption is meaningful because, in disaster situations, roads have to be used in both directions in order to reach disaster areas and deliver aid. Constraint (8.7) prevents the vehicle from traversing a blocked arc if it is not unblocked. If an arc (i, j) is unblocked, it can be traversed by the vehicle at most 2(|Q| − 1) times. The vehicle connects one component each time it traverses the same arc, by unblocking one arc going out of the subset of components and one arc coming into it; therefore, we multiply this value by 2. Except for the component in which it is deployed, there are (|Q| − 1) components in total to be connected; thus, the scalar in this constraint takes the value 2(|Q| − 1):

xij ≥ zij,  ∀(i, j) ∈ B    (8.6)

xij ≤ 2(|Q| − 1) zij,  ∀(i, j) ∈ B    (8.7)

8.4.6  Flow Balance Equations

For connectivity of the nodes in the vehicle's walk, we define a flow variable fij for each arc that it passes through. For the depot vertex, the net flow out of it equals the total number of visits to all vertices except the depot (as seen in constraint [8.9]). For the other vertices, the net inflow equals the number of visits to the corresponding node (as seen in constraint [8.8]). In other words, the vehicle leaves one unit of flow each time it visits a node. Constraint (8.10) prevents backward flow from the sink node to any other node. Constraint (8.11) requires that the walk end at the sink node by sending one unit of flow to the sink node:

∑_{j:(i,j)∈A, {i,j}∈V∪{(n+1)}} (fij − fji) = −vi,  ∀i ∈ V ∪ {(n + 1)}\D    (8.8)

∑_{j∈V∪{(n+1)}} (fdj − fjd) = ∑_{i∈V∪{(n+1)}\{d}} vi,  d ∈ D    (8.9)

f(n+1)j = 0,  ∀j ∈ V    (8.10)

∑_{j∈V} fj(n+1) = 1    (8.11)


8.4.7  Constraints That Relate Variables f ij and xij

Constraint (8.12) does not allow flow on an arc unless it is traversed. Constraint (8.13) shows that if an arc is traversed, then there must be a positive amount of flow passing through it:

fij ≤ M xij,  ∀(i, j) ∈ A, {i, j} ∈ V ∪ {(n + 1)}    (8.12)

fij ≥ xij,  ∀(i, j) ∈ A, {i, j} ∈ V ∪ {(n + 1)}    (8.13)

8.4.8  Component Connectivity Constraints

For component connectivity, constraints (8.14) and (8.15) require at least one arc into and one arc out of each subset of components within which the nodes are strongly connected to be unblocked. Similarly, constraints (8.16) and (8.17) require, for the connectivity of the components in the sets Y+ and Y−, at least one arc out of each subset in Y+ and at least one arc into each subset in Y− to be unblocked. As a result, the graph becomes strongly connected:

∑_{(i,j)∈δ+(s)} zij ≥ 1,  ∀s ⊂ S    (8.14)

∑_{(i,j)∈δ−(s)} zij ≥ 1,  ∀s ⊂ S    (8.15)

∑_{(i,j)∈δ+(y)} zij ≥ 1,  ∀y ⊂ Y+    (8.16)

∑_{(i,j)∈δ−(y)} zij ≥ 1,  ∀y ⊂ Y−    (8.17)

8.4.9  Constraints That Define the Variables

Constraints (8.18) through (8.20) are integrality constraints, whereas constraints (8.21) are binary constraints. Constraints (8.22) state that flow variables are nonnegative real numbers:

xij, xji ∈ Z+,  ∀(i, j) ∈ A, {i, j} ∈ V    (8.18)

xi(n+1) ∈ Z+,  ∀i ∈ V    (8.19)

vi ∈ Z+,  ∀i ∈ V    (8.20)

zij ∈ {0, 1},  ∀(i, j) ∈ B    (8.21)

fij, fji ∈ R+,  ∀(i, j) ∈ A    (8.22)
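The component connectivity constraints are the part of the model that depends most directly on the postdisaster graph, so the sketch below shows how the data behind constraints (8.14) and (8.15) could be generated with networkx: the strongly connected components of GB are computed, and for each component the blocked arcs leaving and entering it are collected as candidate cutset arcs. For simplicity, it treats every component as needing both an incoming and an outgoing unblocked arc and only generates singleton-component cutsets; the chapter's finer classification into S, Y+, and Y− and the larger subsets of components are not reproduced.

```python
import networkx as nx

def component_cutsets(V, A, B):
    """For each strongly connected component of G_B = (V, A \\ B), return the
    blocked arcs leaving it (delta_plus) and entering it (delta_minus).
    Constraints (8.14)/(8.15) would require at least one z_ij = 1 in each set."""
    GB = nx.DiGraph()
    GB.add_nodes_from(V)
    GB.add_edges_from(set(A) - set(B))
    components = [frozenset(comp) for comp in nx.strongly_connected_components(GB)]

    cutsets = []
    for comp in components:
        delta_plus = [(i, j) for (i, j) in B if i in comp and j not in comp]
        delta_minus = [(i, j) for (i, j) in B if j in comp and i not in comp]
        cutsets.append({"component": comp,
                        "delta_plus": delta_plus,
                        "delta_minus": delta_minus})
    return cutsets

# Toy instance: blocking (2, 3) and (3, 2) splits the graph into {1, 2} and {3}.
V = [1, 2, 3]
A = [(1, 2), (2, 1), (2, 3), (3, 2)]
B = [(2, 3), (3, 2)]
for cut in component_cutsets(V, A, B):
    print(sorted(cut["component"]), cut["delta_plus"], cut["delta_minus"])
```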

8.5  Data Acquisition and Generation

For computational experiments, we constructed a network of Istanbul by considering province centers and real road distances. By using Google Maps, we identified strategically important locations such as province centers and provinces that have hospitals, disaster coordination centers, ports, airports, bus terminals, and bridges. Possible depot points are given in Table 8.1. Depot points are determined according to locations related to highway maintenance, that is, locations that may have machinery such as cranes and trucks. There are 74 nodes, including 38 province centers and 34 populated districts (see Figure 8.2). In total, there are 360 links (720 arcs) (see Figure 8.3). Arcs are created between neighbors, and arc traversal times are determined by using the road distances given in Table 8.2, which are calculated using Google Maps. We converted road distances into time (in hours) assuming an average speed of 50 km/h for the vehicle.

Table 8.1  Possible Locations of Depots

LOCATION                                                 PROVINCE           NODE
Disaster Coordination Center                             Kağıthane          23
GDH Division of Machinery Supply                         Maltepe            29
GDH Division of Road Maintenance and Repair              Kartal             32
GDH Division of Road Maintenance and Repair              Edirnekapı/Eyüp    15
GDH Regional Division of Maintenance and Operations      Kavacık            27
GDH Regional Division of Maintenance and Operations      Kurtköy/Pendik     36
General Directorate of Highways (GDH)                    Kağıthane          23
Istanbul Metropolitan Municipality                       Fatih              19
Istanbul Metropolitan Municipality—additional building   Merter/Güngören    17


Figure 8.2  Nodes on Istanbul map from Google Earth.

Ten scenarios with different sets of blocked roads are generated by referring to the latest earthquake risk map of Istanbul, reported in a 2002 study by the Japan International Cooperation Agency and the Istanbul Metropolitan Municipality (The Japan International Cooperation Agency [JICA]; Istanbul Metropolitan Municipality 2002). We classified the roads into three groups based on the earthquake risk map: high-risk roads (see Table 8.3), low-risk roads (see Table 8.4), and the remaining ones. More roads are picked to be blocked in the high-risk areas than in the low-risk areas, but within each risk level, blocked roads are selected randomly. In this way, three to six disconnected components are formed. The number of disconnected components and the number of blocked roads in each scenario are given in Table 8.5. For each scenario, two instances, with high and low unblocking times, are generated. The unblocking time of an arc is set proportional to its traversal time, that is, b_ij = α c_ij. The factor α is generated randomly as follows. First, blocked roads are classified into high-, medium-, and low-damage groups randomly, with the probabilities listed in Table 8.6. In the high–unblocking time case, high-damage roads are more likely and low-damage roads are less likely; for example, around 60% of the blocked arcs would have high damage, while 10% would have low damage. The factor α has a uniform distribution and takes values in the intervals (10, 50), (5, 10), and (2, 5) for the high-, medium-, and low-damage groups, respectively.
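As an illustration of this generation procedure, the short sketch below (with hypothetical blocked arcs and traversal times; only the probabilities and α intervals of Table 8.6 are taken from the chapter) draws a damage class for each blocked arc and sets b_ij = α c_ij.

```python
import random

# Hypothetical traversal times (hours) of the blocked arcs in one scenario.
blocked_arcs = {(10, 11): 0.10, (16, 19): 0.13, (25, 30): 0.15}

# High-unblocking time case of Table 8.6: (class, probability, (alpha_lo, alpha_hi)).
damage_classes = [
    ("high",   0.6, (10, 50)),
    ("medium", 0.3, (5, 10)),
    ("low",    0.1, (2, 5)),
]

def unblocking_times(arcs, classes, seed=0):
    rng = random.Random(seed)
    labels = [c[0] for c in classes]
    weights = [c[1] for c in classes]
    bounds = {c[0]: c[2] for c in classes}
    times = {}
    for arc, c_ij in arcs.items():
        damage = rng.choices(labels, weights=weights)[0]  # draw a damage class
        alpha = rng.uniform(*bounds[damage])               # alpha ~ U(lo, hi)
        times[arc] = alpha * c_ij                          # b_ij = alpha * c_ij
    return times

print(unblocking_times(blocked_arcs, damage_classes))
```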

Figure 8.3  The network representing the main roads in Istanbul.

Table 8.2  Real Road Distances (km). Each block lists origin nodes, destination nodes, and distances in parallel; the kth entries of the three rows form one link.

Origin:   1 1 2 2 2 3 3 3 3 3 4 4 4 5 5 5 5 6 6 6 6
Destination: 2 3 1 3 4 1 2 5 6 39 2 7 39 3 6 8 39 3 5 7 8
Distance: 39.0 39.0 39.0 18.0 45.0 39.0 18.0 12.0 12.0 11.0 45.0 25.0 30.0 12.0 9.5 11.0 9.5 12.0 9.5 5.5 10.0

Origin:   22 23 23 23 23 24 24 24 25 25 25 25 25 26 26 26 26 27 27 27 28
Destination: 56 21 49 50 61 60 61 62 30 64 66 68 69 22 67 69 70 28 38 70 27
Distance: 3.0 3.0 5.0 1.2 5.5 8.0 11.0 10.0 7.5 6.5 10.0 10.0 3.5 9.5 3.0 5.0 2.5 6.5 6.5 10.0 6.5

Origin:   48 48 49 49 49 49 49 50 50 50 50 50 51 51 51 51 52 52 52 52 52
Destination: 44 49 15 18 21 23 48 21 23 51 59 61 21 50 52 59 21 51 54 56 59
Distance: 2.5 4.5 4.5 4.5 5.5 5.0 4.5 2.0 1.2 2.5 3.5 6.0 2.5 2.5 1.5 2.5 2.5 1.5 3.0 2.5 1.9

Origin:   6 6 7 7 7 7 7 7 8 8 8 8 9 9 9 9 9 9 10 10 10 10 10
Destination: 39 42 4 6 14 39 42 43 5 6 9 40 8 11 40 41 42 43 11 12 16 17 45
Distance: 4.0 18.0 25.0 5.5 26.0 7.0 10.0 19.0 11.0 10.0 9.0 12.0 9.0 6.0 8.0 3.0 14.0 11.0 5.0 7.0 3.5 5.0 3.5

Origin:   28 28 28 29 29 29 29 30 30 30 30 30 30 31 31 31 31 31 31 32 32 32 32
Destination: 31 35 70 32 65 66 68 25 31 34 66 67 71 28 30 34 35 67 69 29 33 36 71
Distance: 17.0 21.0 17.0 6.5 5.0 8.0 6.5 7.5 5.0 28.0 4.0 5.5 15.0 17.0 5.0 16.0 9.5 4.5 5.0 6.5 16.0 4.5 13.0

Origin:   53 53 53 54 54 54 54 54 55 55 55 55 55 56 56 56 57 57 57 57 58 58 58
Destination: 20 54 55 21 22 52 53 55 20 21 22 53 54 22 52 57 38 56 58 59 38 57 59
Distance: 2.0 4.0 2.5 2.5 2.0 3.0 4.0 3.0 1.6 3.5 4.0 2.5 3.0 3.0 2.5 3.5 0.5 3.5 2.5 3.0 1.7 2.5 1.5

Origin:   10 11 11 11 11 11 11 11 12 12 12 12 12 12 12 13 13 13 13 13 14
Destination: 46 9 10 12 17 40 41 45 10 11 13 17 41 43 47 12 14 18 43 47 7
Distance: 4.5 6.0 5.0 6.5 5.0 8.0 5.0 3.5 7.0 6.5 3.0 4.5 5.0 10.0 12.0 3.0 11.0 8.5 11.0 8.0 22.0

Origin:   33 33 33 33 33 34 34 34 34 34 35 35 35 36 36 36 36 37 37 38 38
Destination: 32 34 36 63 71 30 31 33 35 71 28 31 34 32 33 37 63 36 63 27 57
Distance: 16.0 5.0 13.0 7.5 5.0 28.0 16.0 5.0 7.0 5.0 21.0 9.5 7.0 4.5 13.0 11.0 8.0 11.0 13.0 6.5 0.5

Origin:   58 59 59 59 59 59 59 59 59 59 60 60 60 60 61 61 61 61 61 62 62
Destination: 62 21 50 51 52 57 58 60 61 62 24 59 61 62 23 24 50 59 60 24 38
Distance: 6.0 3.5 3.5 2.5 1.9 3.0 1.5 6.0 7.0 8.0 8.0 6.0 3.0 4.0 5.5 11.0 6.0 7.0 3.0 10.0 4.0

Origin:   14 14 14 15 15 15 15 15 15 16 16 16 16 16 16 17 17 17 17 17 17 17 17
Destination: 13 18 43 18 21 44 48 49 74 10 17 19 44 46 74 10 11 12 16 41 44 47 74
Distance: 11.0 10.0 13.0 2.5 7.5 3.0 1.6 4.5 2.0 3.5 4.5 6.5 9.0 1.9 7.5 5.0 5.0 4.5 4.5 11.0 4.5 7.0 7.0

Origin:   38 38 39 39 39 39 39 40 40 40 40 40 41 41 41 41 41 41 41 42 42 42 42
Destination: 58 62 3 4 5 6 7 8 9 11 41 45 9 11 12 17 40 42 43 6 7 9 41
Distance: 1.7 4.0 11.0 30.0 9.5 4.0 7.0 12.0 8.0 8.0 8.5 4.0 3.0 5.0 5.0 11.0 8.5 8.0 5.5 18.0 10.0 14.0 8.0

Origin:   62 62 62 63 63 63 64 64 64 64 64 65 65 65 66 66 66 66 66 66 67 67 67
Destination: 58 59 60 33 36 37 25 65 66 68 69 29 64 68 25 29 30 64 68 69 26 30 31
Distance: 6.0 8.0 4.0 7.5 8.0 13.0 6.5 5.5 6.0 4.5 6.0 5.0 5.5 2.0 10.0 8.0 4.0 6.0 3.5 7.5 3.0 5.5 4.5

Origin:   18 18 18 18 18 18 18 19 19 19 19 19 20 20 20
Destination: 13 14 15 44 47 48 49 16 20 72 73 74 19 21 22
Distance: 8.5 10.0 2.5 3.0 8.0 1.9 4.5 6.5 3.0 3.0 3.0 3.0 3.0 5.0 4.0

Origin:   42 43 43 43 43 43 43 43 44 44 44 44 44 44 44
Destination: 43 7 9 12 13 14 41 42 15 16 17 18 46 47 48
Distance: 2.5 19.0 11.0 10.0 11.0 14.0 5.5 2.5 3.0 9.0 4.5 4.0 7.0 5.0 2.5

Origin:   67 67 68 68 68 68 69 69 69 69 69 69 70 70 70
Destination: 69 70 29 64 65 66 25 26 31 64 66 67 26 27 28
Distance: 3.5 2.0 6.5 4.5 2.0 3.5 3.5 5.0 5.0 6.0 7.5 3.5 2.5 18.0 17.0

Origin:   20 20 20 21 21 21 21 21 21 21 21 21 21 22 22 22 22
Destination: 53 55 72 15 20 23 49 50 51 52 54 55 59 20 26 54 55
Distance: 1.8 1.6 1.8 10.0 5.0 3.0 5.5 2.0 2.5 2.5 2.5 3.5 3.5 4.0 9.5 2.0 4.0

Origin:   44 45 45 45 46 46 46 46 46 47 47 47 47 47 47 48 48
Destination: 74 10 11 40 10 16 44 73 74 12 13 17 18 44 48 15 18
Distance: 3.5 3.5 3.5 4.0 4.5 1.9 7.0 7.0 7.0 12.0 9.5 7.0 8.0 5.0 6.5 1.6 2.5

Origin:   70 71 71 71 71 72 72 72 73 73 73 74 74 74 74 74 74
Destination: 67 30 32 33 34 19 20 73 19 46 72 15 16 17 19 44 46
Distance: 2.0 15.0 13.0 5.0 5.0 3.0 1.8 1.3 3.0 7.0 1.3 2.0 7.5 7.0 3.0 3.5 7.0


Table 8.3  High-Risk Roads (arcs i–j)
Europe: 3–5, 5–8, 8–9, 8–40, 9–11, 9–40, 10–11, 10–17, 10–16, 10–46, 16–46, 16–19, 16–74, 19–73, 19–72, 19–20, 20–22, 40–45, 40–11, 45–10, 72–20, 72–73
Asia: 25–30, 25–69, 25–64, 25–66, 29–32, 30–71, 32–33, 32–36, 32–71, 33–36, 33–63, 36–63, 36–37, 37–63, 64–68, 64–65, 64–66, 64–69, 65–68, 65–29, 66–29, 66–30, 66–69, 68–66, 68–29, 71–33

8.6  Computational Experiments and Results

The effects of the following parameters on computational performance and objective value are analyzed: (1) the degree of damage (i.e., the high and low b_ij cases) and (2) the location of the depot. To solve the models, CPLEX 12.5 was run as a multithreaded application (using GAMS 24.0 on a computer with two 3.30 GHz processors and 32 GB RAM under a 64-bit operating system). The results of the high and low unblocking time cases (with node 23 as the depot) are given in Table 8.7. All scenarios are solved to optimality in a short time (at most 114 s, and in less than a minute for all


Table 8.4  Low-Risk Roads (arcs i–j)
Europe: 1–3, 1–2, 2–4, 2–3, 3–39, 4–7, 4–39, 6–7, 6–42, 6–5, 6–8, 6–3, 7–42, 7–43, 7–14, 11–17, 12–11, 12–10, 12–17, 13–12, 13–47, 13–18, 14–13, 14–18, 15–74, 15–21, 15–49, 17–16, 17–74, 17–44, 17–47, 17–16, 18–49, 18–44, 20–55, 21–20, 21–55, 21–54, 21–52, 21–51, 21–50, 23–61, 23–50, 23–21, 38–27, 38–57, 39–7, 39–6, 39–5, 41–40, 41–11, 41–12, 41–17, 41–9, 42–41, 42–43, 42–9, 43–14, 43–12, 43–13, 43–41, 43–9, 44–16, 44–74, 44–15, 44–48, 47–44, 47–18, 47–48, 47–12, 48–15, 48–49, 48–18, 49–23, 49–21, 50–51, 50–59, 50–61, 51–59, 52–59, 52–51, 52–56, 53–55, 53–54, 53–20, 54–52, 54–22, 55–22, 55–54, 55–20, 56–22, 57–56, 58–57, 58–38, 59–57, 59–60, 59–62, 59–58, 60–24, 60–62, 61–24, 61–60, 61–59, 62–24, 62–58, 62–38, 74–19, 74–46
Asia: 26–67, 26–69, 27–28, 27–70, 28–31, 28–35, 31–35, 31–30, 31–69, 31–34, 34–33, 34–71, 34–30, 35–34, 67–31, 67–30, 67–69, 70–28, 70–67, 70–26

Table 8.5  Scenarios
SCENARIO | NUMBER OF BLOCKED ARCS | NUMBER OF DISCONNECTED COMPONENTS
1 | 30 | 3
2 | 32 | 3
3 | 40 | 4
4 | 42 | 3
5 | 52 | 4
6 | 60 | 4
7 | 76 | 5
8 | 80 | 6
9 | 82 | 6
10 | 84 | 5


Table 8.6  Damage Level, Probabilities, α, Classification of Blocked Roads
CASE | DAMAGE LEVEL | PROBABILITY | DISTRIBUTION OF α
High–unblocking time case | High | 0.6 | U(10, 50)
High–unblocking time case | Medium | 0.3 | U(5, 10)
High–unblocking time case | Low | 0.1 | U(2, 5)
Low–unblocking time case | High | 0.1 | U(10, 50)
Low–unblocking time case | Medium | 0.3 | U(5, 10)
Low–unblocking time case | Low | 0.6 | U(2, 5)

Table 8.7  Effect of Degree of Damage and Computational Results
SCENARIO | NUMBER OF BLOCKED ARCS | NUMBER OF DISCONNECTED COMPONENTS | HIGH COST: OBJECTIVE (H) | HIGH COST: LB (H) | HIGH COST: TIME (S) | LOW COST: OBJECTIVE (H) | LOW COST: LB (H) | LOW COST: TIME (S)
1 | 30 | 3 | 4.5 | 4.5 | 39.0 | 2.5 | 2.5 | 30.0
2 | 32 | 3 | 3.0 | 3.0 | 8.0 | 2.4 | 2.4 | 10.0
3 | 40 | 4 | 8.5 | 8.5 | 8.0 | 5.1 | 5.1 | 9.0
4 | 42 | 3 | 6.4 | 6.4 | 6.0 | 3.4 | 3.4 | 9.0
5 | 52 | 4 | 3.3 | 3.3 | 8.0 | 2.7 | 2.7 | 8.0
6 | 60 | 4 | 6.1 | 6.1 | 13.0 | 3.4 | 3.4 | 9.0
7 | 76 | 5 | 7.3 | 7.3 | 50.0 | 5.5 | 5.5 | 114.0
8 | 80 | 6 | 11.2 | 11.2 | 27.0 | 5.9 | 5.9 | 27.0
9 | 82 | 6 | 9.1 | 9.1 | 23.0 | 5.9 | 5.9 | 17.0
10 | 84 | 5 | 8.2 | 8.2 | 14.0 | 3.5 | 3.5 | 21.0
Average | | | 6.7 | 6.7 | 19.6 | 4.0 | 4.0 | 25.4

but one). This shows that, for problems of the tested size, the goal of solving the problem very quickly is achieved. The low–unblocking time case gives 23% higher runtime on average compared to the high–unblocking time case. As the number of components and blocked arcs increases, the effect of the damage level on runtime becomes more visible. We can conclude that when unblocking times decrease, solution time increases, since the decision of which arcs to unblock gets more difficult and both connectivity and routing decisions strongly affect the solution value. In order to evaluate the effect of the location of the depot on the runtime and objective value, we picked several different nodes as the depot and solved the model with high unblocking times. Nodes 15 and 23, located on the European side of Istanbul, and nodes 27, 29, and 32 on the Asian side were picked one by one as the depot. It is possible to consider other nodes, but we picked these for demonstration purposes. Table 8.8 shows the results for all scenarios. When the depot is at node 15, 27, 29, or 32, all scenarios are solved even faster, and the


Table 8.8  Effect of Location of the Depot on the Solution

Depot ID: 15
SCENARIO | OBJECTIVE (H) | LB (H) | TIME (S)
1 | 4.3 | 4.3 | 10.0
2 | 2.8 | 2.8 | 6.0
3 | 8.6 | 8.6 | 8.0
4 | 6.2 | 6.2 | 6.0
5 | 3.2 | 3.2 | 6.0
6 | 5.9 | 5.9 | 9.0
7 | 7.2 | 7.2 | 22.0
8 | 11.1 | 11.1 | 29.0
9 | 8.9 | 8.9 | 13.0
10 | 8.2 | 8.2 | 11.0
Average | 6.6 | 6.6 | 12.0

Depot ID: 23
1 | 4.5 | 4.5 | 39.0
2 | 3.0 | 3.0 | 8.0
3 | 8.5 | 8.5 | 8.0
4 | 6.4 | 6.4 | 6.0
5 | 3.3 | 3.3 | 8.0
6 | 6.1 | 6.1 | 13.0
7 | 7.3 | 7.3 | 50.0
8 | 11.2 | 11.2 | 27.0
9 | 9.1 | 9.1 | 23.0
10 | 8.2 | 8.2 | 14.0
Average | 6.7 | 6.7 | 19.6

Depot ID: 27
1 | 4.5 | 4.5 | 15.0
2 | 3.2 | 3.2 | 9.0
3 | 8.6 | 8.6 | 8.0
4 | 7.7 | 7.7 | 12.0
5 | 3.5 | 3.5 | 8.0
6 | 6.3 | 6.3 | 15.0
7 | 7.4 | 7.4 | 32.0
8 | 11.2 | 11.2 | 39.0
9 | 9.3 | 9.3 | 13.0
10 | 8.2 | 8.2 | 14.0
Average | 7.0 | 7.0 | 16.5

Depot ID: 29
1 | 4.0 | 4.0 | 6.0
2 | 2.7 | 2.7 | 5.0
3 | 8.4 | 8.4 | 8.0
4 | 7.0 | 7.0 | 6.0
5 | 3.0 | 3.0 | 6.0
6 | 6.1 | 6.1 | 8.0
7 | 6.5 | 6.5 | 8.0
8 | 10.4 | 10.4 | 11.0
9 | 8.8 | 8.8 | 11.0
10 | 7.8 | 7.8 | 9.0
Average | 6.5 | 6.5 | 7.8

Depot ID: 32
1 | 4.1 | 4.1 | 7.0
2 | 2.9 | 2.9 | 6.0
3 | 8.5 | 8.5 | 8.0
4 | 7.1 | 7.1 | 6.0
5 | 3.1 | 3.1 | 6.0
6 | 5.9 | 5.9 | 19.0
7 | 6.3 | 6.3 | 9.0
8 | 10.3 | 10.3 | 12.0
9 | 9.0 | 9.0 | 18.0
10 | 8.4 | 8.4 | 11.0
Average | 6.6 | 6.6 | 10.2


objective value does not change much. Choosing node 29 as the depot seems rational since it gives a better solution in terms of both objective value and runtime. The reason for this performance may be explained as follows. The network has a roughly rectangular shape, and node 29 resides in its bottom-right corner; node 32 has a similar position. The other depots are in a more central position with respect to the layout of the network. Starting from a central location may result in traversals back and forth to the component in the center, so the travel time may be longer.

8.7  Conclusions

In this study, we introduced the ARCP, which is applicable to restoring network connectivity after a disaster. The aim is to make the disconnected graph strongly connected in the shortest time by unblocking some of the blocked roads. The responsible team leaves the depot and unblocks selected roads, with an unblocking time that is spent only the first time a blocked road is traversed. We show that the ARCP is NP-hard and develop an MIP formulation that can be solved quickly for instances of realistic size. This can be attributed to the fact that the arc routing part of the problem is handled efficiently by sending flows, which, to the best of our knowledge, is new in the arc routing literature. To generate test data, the Istanbul highway network is used. Ten different scenarios with differing blocked arcs are constructed by selecting links in high-risk areas of existing earthquake scenarios. Two levels of damage are defined, and unblocking times are calculated accordingly, leading to 20 test instances. While the MIP is solved in at most 2 min in all 20 instances, we observe that high–unblocking time cases are easier to solve than low–unblocking time cases. Changing the location of the depot does not affect the objective value and runtime much in our instances. We expect the solution of larger instances to take longer. However, having a single fleet traverse a wide area with many arcs will not provide an efficient solution; instead, covering the area with multiple teams would be the way to go in a disaster situation. In order to analyze the computational performance of the model, a more extensive numerical study would be required. Here, we have used only a single network with 20 postdisaster scenarios. Instead of


the Istanbul map, a smaller network can be used. For example, different provinces of Istanbul can be taken, and a more detailed road network with shorter distances can be generated for each one. Then, having a single vehicle or fleet responsible for each region would reduce the completion time for connectivity of the larger network. When the network gets larger, having multiple vehicles becomes necessary, and the timing of the vehicles becomes an issue since a vehicle may need to wait while another works on an arc. A heuristic approach to solve the multivehicle case quickly can be to partition the graph and solve the ARCP exactly for each partition; clearly, the quality of the solutions would depend on the partitioning step. This line of research can be extended in several directions in future work. The problem can be defined on an undirected graph. If connecting the entire network takes too long, connectivity to certain nodes such as supply points, hospitals, and airports can be prioritized; the objective would then change to connecting given origin–destination pairs. This modification can be handled by multicommodity flows for connectivity.

References

Akoudad, K. and F. Jawab. Recent survey on bases routing problems: CPP, RPP and CARP. International Journal of Engineering Research & Technology (IJERT) 2 (2013): 3652–3668.
Aksu, D. T. and L. Ozdamar. A mathematical model for post-disaster road restoration: Enabling accessibility and evacuation. Transportation Research Part E 61 (2014): 56–67.
Araoz, J., E. Fernandez, and O. Meza. Solving the prize-collecting rural postman problem. European Journal of Operational Research 196 (2009): 886–896.
Araoz, J., E. Fernandez, and C. Zoltan. Privatized rural postman problems. Computers and Operations Research 33 (2006): 3432–3449.
BBC News and National Police Agency of Japan. Japan quake: Loss and recovery in numbers. BBC News, March 11, 2012. http://www.bbc.com/news/world-asia-17219008 (accessed August 13, 2014).
Campbell, A., T. Lowe, and L. Zhang. Upgrading arcs to minimize the maximum travel. Networks 47 (2006): 72–80.
Duque, P. M. and K. Sörensen. GRASP metaheuristic to improve accessibility after. OR Spectrum 33 (2011): 525–542.
Eiselt, H. A., M. Gendreau, and G. Laporte. Arc routing problems, part II: The rural postman problem. Operations Research 43 (1995): 399–414.


Fernandez, E., O. Meza, R. Garfinkel, and M. Ortega. On the undirected rural postman problem: Tight bounds based on a new formulation. Informs 51 (2003): 281–291.
Frederickson, G. N. Approximation algorithms for some postman problems. Journal of the ACM 26 (1979): 538–554.
Ghiani, G., D. Lagana, and R. Musmanno. A constructive heuristic for the undirected rural postman problem. Computers and Operations Research 33 (2006): 3450–3457.
Groves, G. V. and J. H. van Vuuren. Efficient heuristics for the rural postman problem. ORiON 21 (2005): 33–51.
Holmberg, K. Heuristics for the rural postman problem. Computers and Operations Research 37 (2010): 981–990.
Lenstra, J. K. and A. H. G. Rinnooy Kan. On general routing problems. Networks 6 (1976): 273–280.
Mogato, M. and R. Ng. Philippine typhoon survivors beg for help as rescuers struggle. in.reuters.com, November 11, 2013. http://in.reuters.com/article/2013/11/11/philippines-typhoon-haiyanidINDEE9A802120131111 (accessed August 13, 2014).
Nolz, P. C., F. Semet, and K. F. Doerner. Risk approaches for delivering disaster relief supplies. OR Spectrum 33 (2011): 543–569.
Sahin, H., O. E. Karasan, and B. Y. Kara. On debris removal during the response phase. Proceedings of INOC 2013, Tenerife, Spain, May 20–22, 2013.
Stilp, K., J. A. Carbajal, O. Ergun, P. Keskinocak, and M. Villareal. Managing debris operations. 2011 Health and Logistics Conference Poster Session, Georgia Tech Supply Chain & Logistics Institute, Center for Humanitarian Logistics, Atlanta, GA, March 2–3, 2011.
The Japan International Cooperation Agency (JICA) and Istanbul Metropolitan Municipality. The Study on a Disaster Prevention/Mitigation Basic Plan in Istanbul Including Microzonation in the Republic of Turkey. Istanbul Metropolitan Municipality, Istanbul, Turkey, 2002.

9  Feasibility Study of Shuttle Services to Reduce Bus Congestion in Downtown Izmir

ERDİNÇ ÖNER, MAHMUT ALİ GÖKÇE, HANDE ÇAKIN, AYLİN ÇALIŞKAN, EZGİ KINACI, GÜRKAN MERCAN, EZEL İLKYAZ, AND BERİL SÖZER

Contents

9.1 Introduction
  9.1.1 Statement of the Problem
  9.1.2 Objectives of the Study
9.2 Literature Review
9.3 Solution Method
  9.3.1 Data Collection and Analysis
  9.3.2 Simulation Model
    9.3.2.1 Average Waiting Time at Bus Station
    9.3.2.2 Average Traveling Time
9.4 Experimental Design and Results
9.5 Conclusions and Future Work
References

9.1  Introduction

Experiencing traffic congestion is inevitable for most people living in large cities, especially during rush hours. The growth of population and employment, especially in city centers, is the main reason behind this traffic congestion. According to the 2012 Urban Mobility Report for the United States prepared by the Texas Transportation Institute of Texas A&M University, the estimated annual travel delay is increasing drastically. The annual travel delay was 1.1 billion hours in 1982 and reached


5.5 billion hours in 2011. Moreover, it was recorded that while the amount of CO2 produced during congestion was 10 million lb in 1982, it increased to 47 million lb in 2000 and to 56 million lb in 2011 (Schrank et al., 2012). An efficient public transport system can smooth traffic, reduce people’s travel time, and help reduce environmental pollution. Based on the 2012 Urban Mobility Report, extending public transportation yields significant savings: while its contribution to savings in yearly travel delay was 409 million hours in 1982, it increased to 865 million hours in 2011. In addition, the use of public transportation resulted in an annual congestion cost saving of $8.0 billion in 1982 and $20.8 billion in 2011.

9.1.1  Statement of the Problem

Izmir is the third most crowded city in Turkey, with a population of 4.1 million, and it also has the country’s second largest port. Public bus transportation in Izmir is managed by ESHOT (Izmir Public Transportation Authority) in five main districts (Figure 9.1). ESHOT launched a smart ticketing system for public bus transportation on March 15, 1999. Following that progress, the smart ticketing system was also integrated with the subway, suburban railway, and sea transportation. ESHOT provides free transfers between alternative modes of public transportation for passengers when they use their smart tickets within 90 min of first use.

Figure 9.1  Bus operation districts of ESHOT.

In the Izmir city center, the area between the Halkapınar, Kemer, and Konak connection centres is known as the prestige location of Izmir. It has a high concentration of businesses, shopping centers, and historical structures. Figure 9.2 shows this area, which is referred to as the triangle area throughout this study.

Figure 9.2  Triangle area in Izmir downtown.

As presented in Table 9.1, the high number of public buses that go in and out of the triangle area shows the potential congestion problem caused by these buses. The numbers show that approximately half of the ESHOT bus fleet actively operates in this area, and considering that the total surface area of Izmir province is 12,007 km2 while the city center’s is approximately 816 km2, the buses have a high density in the city center. Besides, a comparison of the total bus exits and entrances to the triangle area during rush hours and over the whole day shows that the 3 h morning rush period alone generates approximately one-fourth of the entire day’s entrances and exits. Therefore, traffic congestion caused by public buses in the triangle area during rush hours results in higher congestion costs than any other period of the day.

Table 9.1  Facts of the Triangle Area
Number of bus lines in the triangle area: 90 | Total number of bus lines in Izmir: 317 | Percentage: 0.29
Number of active buses in the triangle area: 651 | Total number of active buses in Izmir: 1,408 | Percentage: 0.46
Total bus exits and entrances to the triangle area in morning rush hours (3 h): 2,418 | Total bus exits and entrances to the triangle area in a day (18 h): 9,480 | Percentage: 0.26

As a result of traffic congestion, bus travel time increases in the triangle area. This means that the total transit time of passengers is too long compared to the length of the route within the triangle area. For instance, while the total traveling time of line 169 from Balçova to Halkapınar varies in the range of 54–60 min, it takes 26–32 min between Konak and Halkapınar; almost half of the total time is spent in the triangle area although the triangle portion is approximately a quarter of the total route length of line 169 (Figure 9.3).

Figure 9.3  Example of time spent vs. distance traveled in the triangle area by buses (Balçova–Halkapınar: 18 km, about 60 min; Konak–Halkapınar: 5 km, about 32 min).

9.1.2  Objectives of the Study

In this study, a shuttle service system is proposed to replace the current bus routes and schedules in the triangle area of Izmir and solve the congestion problem. Figure 9.4 presents the proposed transfer hubs and the shuttle system for the triangle area. First, a simulation model of the current bus transportation system in the triangle area was developed, verified, and validated. Then, the expected benefits of the proposed shuttle system were determined through an experimental design using the simulation model.


Figure 9.4  Proposed transfer hubs and shuttle system for the triangle area.

Eight shuttle service lines with different routes were planned to operate in the triangle area. Routes were based on work done jointly with ESHOT’s transportation planning department. Some of the bus stations that were not frequently used in the current system were eliminated while determining the shuttle services’ routes. The expected outcomes of the proposed shuttle system were as follows:
• Decreasing average traveling and waiting times for passengers in the triangle area
• Decreasing the number of buses traveling in the triangle area
• Minimizing the simultaneous arrival of buses at bus stations (trailing)
• Decreasing total CO2 emissions

9.2  Literature Review

Many metropolitan cities suffer from heavy traffic congestion in their city centers, and the factors behind this problem are numerous. The high number of business and entertainment centers can be counted as the first reason for the crowding. A great number of people travel into this region during the same few hours each morning and evening, known as peak periods; therefore, roads and public transportation systems do not have enough capacity for the simultaneous arrival or departure of everyone who wants to use them (Downs, 2004).


On the other hand, according to Rosenbloom (1978), although traffic congestion is inevitable in metropolitan cities, there are ways to at least decrease its intensity. These can be divided into two groups: changing the demand on road system capacity and changing the system capacity itself. The first group includes reorienting travel to less-congested alternative routes or reducing the number of vehicles while increasing vehicle occupancy; the second consists of constructing additional roadways or adding lanes to existing routes. However, according to Parry (2002), even when highway capacity is increased, the growth in vehicle miles traveled will be higher, and as a result congestion has grown steadily worse. In his study, statistics from the Department of Transportation database are given to support this statement; for example, while vehicle miles traveled in urban areas increased by 289% between 1960 and 1991, total road capacity in urban areas increased by only 75%.

According to the Transportation Demand Management Encyclopedia (Victoria Transport Policy Institute, 2013), improvements to decrease congestion include increasing frequency and operating hours, improving coordination among different modes, providing real-time information to customers (GIS), and designing services that serve particular travel needs, such as express commuter buses, special event services, and various types of shuttle services. Other improvement suggestions for public transportation can be found in Boll’s (2008) thesis under the title of physical priority to buses. Grade-separated rights of way, median busways, and contra-flow lanes built on one-way streets are some of the examples implemented worldwide to give physical priority to buses; a video enforcement system can be implemented to control adherence to the rules when these systems are in place.

Another group of methods to improve public transportation works through incentives to use different modes of public transportation. A good example of this kind of incentive is the linked transport system in Izmir, Turkey, which gives customers a free ride within 90 min after the first ride (ESHOT General Directorate). Madrid has also provided incentives to promote public transportation. Intermodal exchange stations for connections between urban and suburban transportation modes were built in Madrid to promote


public transportation within the city. In the study of Vassallo et al. (2012), the effects of this implementation were analyzed in terms of users, public transportation operators, infrastructure managers, the government, the abutters, and other citizens.

However, there is no universal measurement for analyzing the effectiveness of the suggestions given earlier, since congested traffic is a relative term. “In common sense, the traffic of any given artery can be considered congested when it is moving at speeds below the artery’s designed capacity because drivers are unable to go faster” (Downs, 2004). Based on this concept, the Texas Transportation Institute and the Federal Highway Administration developed some measures of congestion that include the travel time index. This index is calculated as the ratio of the total travel time during rush hours to the total travel time during nonrush hours for the same route.

Simulation is a powerful tool for analyzing all of the improvement suggestions given earlier and for reaching such an index, since it makes it possible to study detailed relations that might be lost in analytical or numerical studies. “The reasons to use simulation in the field of traffic are the same as in all simulation: the difficulty in solving the problem analytically; the need to test, evaluate and demonstrate a proposed course of action before implementation; to make research (to learn) and to train people” (Pursula, 1999). Boxill and Yu (2000) also claim that simulation is a useful method because of the stable and unstable states and the chaotic and stochastic behaviors of traffic. Olstam and Tapani (2011) define each step of developing a traffic simulation in detail. They state that, first, the aim and scope of the study should be determined before collecting the necessary data; following these steps, the simulation model can be constructed, but it also needs verification, calibration, and validation. After it is ensured that the model represents reality in a reasonable way, alternative scenarios should be tried and each of them analyzed; they assert that the final step should be documentation. According to Balci (1990), representing reality in a reasonable way does not mean absolute accuracy; he claims that while in some cases a 60% level of confidence is enough for the aim of the study, other cases can require a 90% level of confidence.

Boxill and Yu (2000) categorize traffic simulation into three fields: microscopic, macroscopic, and mesoscopic. While microscopic


models focus on the behaviors of individual vehicles, such as speed and location, or on the characteristics of drivers, macroscopic models aim to evaluate traffic flow or density as a continuum (Oner, 2004). A mesoscopic model, on the other hand, integrates characteristics of these two approaches by considering individual vehicles’ behavior as well as general traffic flow.

9.3  Solution Method

In this section, the data collection and analysis and the developed simulation model are explained.

9.3.1  Data Collection and Analysis

The main sources of data used in this study are the ESHOT smart ticketing system, the GPS bus tracking database, and the ESHOT transportation planning database. Bus lines that operate in the triangle area were determined using ArcGIS, a system for designing and managing solutions through the application of geographic knowledge. Ninety bus lines that go through the triangle were identified, together with their entrance and exit points and all stations within the triangle area. Bus routes in the triangle area were identified from ESHOT’s website, and distances between bus stations were retrieved in km from ESHOT’s database.

ESHOT’s smart ticket database keeps the information (date, time, direction) of each boarding passenger for each bus line. The October 2011 data were selected since it was the busiest month of that year. Rush hours were selected because a minor improvement in rush hours would certainly also improve nonrush hours. Data for the morning (06:00–09:00) and evening (16:00–20:00) rush hours were used. Three- and four-hour datasets were used instead of hourly datasets; the Kolmogorov–Smirnov two-sample test showed no statistically significant difference between the two choices. The travel time between consecutive stations on a line for each of the buses was fitted to a distribution using the ARENA Input Analyzer.


The travel times between the bus stations can then be generated in the simulation model using these distributions. Using the smart ticketing system database, the arrival of each bus line at each station and the number of passengers that accumulated between two consecutive bus arrivals were recorded. Assuming a steady flow of passengers to the stations, dividing the time difference between two consecutive arrivals by the number of accumulated passengers gives the interarrival time of passengers. Interarrival times of passengers were calculated using an Excel macro, and the Input Analyzer was used for generating passenger interarrival time distributions. Data collection was done for every bus stop in the triangle area for each bus line. Data on passengers dropped off at each bus stop were provided by ESHOT and were based on an estimate. The smart ticketing system database was also used to identify the number of passengers at each line’s first bus stop, namely, the Konak, Kemer, and Halkapınar bus stations.
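A minimal sketch of this interarrival-time estimate (with hypothetical arrival records, not the study’s Excel macro) is shown below.

```python
# Interarrival time at a station: gap between consecutive bus arrivals divided
# by the passengers that accumulated in that gap (steady-flow assumption).
bus_arrivals_min = [0.0, 9.0, 17.5, 30.0]   # consecutive arrival times at one station
boarded = [None, 12, 7, 15]                 # passengers boarding each arriving bus

interarrival_times = []
for k in range(1, len(bus_arrivals_min)):
    gap = bus_arrivals_min[k] - bus_arrivals_min[k - 1]   # time between buses
    if boarded[k]:                                        # passengers accumulated in the gap
        interarrival_times.append(gap / boarded[k])       # minutes per arriving passenger

print([round(t, 2) for t in interarrival_times])   # [0.75, 1.21, 0.83]
```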

9.3.2  Simulation Model

ARENA simulation software by Rockwell Automation was used to simulate both the current system and the proposed shuttle system. Although there are 90 bus lines that run through the triangle area, not all of them were included in the simulation model. Instead, the bus lines were grouped based on their entrance points to the triangle area, and then all three groups were scaled down to one-third. The resulting scaled-down bus lines were as follows:
• Konak entrance point: Lines 8, 12, 169, 300, 554, 42, 44, 45, 46, and 269
• Kemer entrance point: Lines 37, 38, 39, 42, 44, 45, and 46
• Halkapınar entrance point: Lines 131, 140, 147, 148, 63, 576, 986, 886, 79, 70, and 169

The performance parameters of these models are representative of all 90 bus lines. The model starts by creating the passengers and the buses. When a bus arrives at a station, passengers are first dropped off, and then the passengers waiting for that particular bus line are picked up. Finally, the bus moves to the next station. Because of capacity restrictions, the pickup and drop-off processes require some additional decision blocks in the model. These can be seen in detail in Figure 9.5, which shows the main logic of bus station events.

Figure 9.5  Flowchart of the simulation model for bus station events. (Upon a bus arrival, the number of passengers to be dropped off, X, is determined; if the number on the bus exceeds X, X passengers are dropped off randomly, otherwise all passengers are dropped off. If the bus capacity exceeds the number in the queue, all waiting passengers are picked up, otherwise passengers are picked up until the bus is full. The bus then moves to the next station.)

In this study, two different performance measures are used:
• Average waiting time of passengers at bus stations within the triangle
• Average traveling time of passengers in the triangle

9.3.2.1  Average Waiting Time at Bus Station

Passengers are created according to the identified passenger interarrival time distributions by a CREATE block and then wait in the stations’ queues modeled with a HOLD block in the ARENA model. Passengers arrive at the stations at different times, and when they get on the buses, the average waiting time is calculated. The logic of this calculation is as follows:

\[
\text{Average waiting time of passengers at each station} =
\frac{\sum_{i=1}^{n}\bigl(T_{\text{now}}(\text{arrival of bus to the station}) - T_{\text{now}}(\text{arrival of passenger } i \text{ to the station})\bigr)}
{\text{Total number of passengers getting on the bus at the station } (n)} \quad (9.1)
\]

Then, the average waiting time of passengers for each line was calculated as follows:

\[
\text{Average waiting time of passengers for each line} =
\frac{\sum_{i=1}^{n}\bigl(\text{Average waiting time of passengers at station } i \times \text{Number of passengers at station } i\bigr)}
{\sum_{i=1}^{n} \text{Number of passengers at station } i} \quad (9.2)
\]

9.3.2.2  Average Traveling Time

Traveling time is the duration a passenger spends between getting on and getting off the bus. It is calculated as follows:

\[
\text{Average traveling time} =
\frac{\sum_{i=1}^{n}\sum_{j>i}\bigl(\text{Traveling time between stations } i \text{ and } j \times \text{Number of passengers traveling between stations } i \text{ and } j\bigr)}
{\sum_{i=1}^{n}\sum_{j>i} \text{Number of passengers traveling between stations } i \text{ and } j} \quad (9.3)
\]
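The three measures above are passenger-weighted averages and can also be computed outside the simulation. The sketch below uses small made-up station and origin-destination records (not data from the study) to show the arithmetic of Equations 9.1 through 9.3.

```python
# Eq. 9.1/9.2: waiting time per station, then a passenger-weighted average per line.
station_stats = [
    # (average waiting time at the station in seconds, passengers boarding there)
    (120.0, 35),
    (90.0, 50),
    (150.0, 15),
]
line_avg_wait = (
    sum(w * n for w, n in station_stats) / sum(n for _, n in station_stats)
)

# Eq. 9.3: average traveling time weighted by the number of passengers
# traveling between each station pair (i, j), j > i.
od_trips = {
    # (origin station, destination station): (travel time in seconds, passengers)
    (1, 3): (300.0, 20),
    (1, 5): (520.0, 12),
    (2, 5): (410.0, 25),
}
avg_travel = (
    sum(t * n for t, n in od_trips.values()) / sum(n for _, n in od_trips.values())
)

print(round(line_avg_wait, 1), round(avg_travel, 1))   # 109.5 394.6
```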

The simulation model developed in ARENA was verified by comparing the number of buses created and the number of passengers picked up and dropped off at the bus stations with the data obtained from ESHOT for the current system.


9.4  Experimental Design and Results

After constructing the simulation model for the shuttle service system, three parameters are selected to analyze the proposed system under different conditions: the expected percentage of passenger transfers to different transportation modes, the frequency of shuttle services during rush hours, and the expected percentage decrease in travel time due to the reduced number of buses in the triangle area. With each design parameter having three levels, a total of 27 scenarios results. Table 9.2 shows the experimental design parameters and their levels used in this study. Each scenario is run with 30 replications, and the results are recorded.

The first parameter is the expected percentage of passenger transfers to different transportation modes such as ferry and subway. This transfer takes place when passengers arrive at the shuttle service transfer hubs at the entrances of the triangle area. Three levels for this parameter are selected as 10%, 15%, and 20%, estimated from the smart ticketing system passenger information for different modes of transportation by the ESHOT Transportation Planning Department.

The second parameter is the frequency of the shuttle buses, determined for each hour of the morning rush period. The first level assigns shuttle buses every 5 min between 6:00 and 7:00, every 2 min between 7:00 and 8:00, and every 4 min between 8:00 and 9:00. The other levels schedule shuttles every 10, 5, and 6 min and every 6, 4, and 5 min in the same three periods, respectively. The frequency of the shuttle bus services varies over the 3 h morning rush period based on the smart ticketing system data of the triangle area bus stations.

Table 9.2  Experimental Design Parameters and Their Levels
PARAMETER | LEVEL 1 | LEVEL 2 | LEVEL 3
Expected percentage of passengers’ transfer to different transportation modes (%) | 10 | 15 | 20
Shuttle schedules for 06:00–07:00, 07:00–08:00, and 08:00–09:00 (min between shuttles) | 5, 2, 4 | 6, 4, 5 | 10, 5, 6
Percentage of expected decrease in travel time between two consecutive stations (%) | 2 | 4 | 6
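For illustration, the 27 scenarios implied by Table 9.2 can be enumerated as the Cartesian product of the three parameter levels. The sketch below uses the level values from the table; the scenario label convention (e.g., 1-1-1) matches the one used later for Figure 9.6.

```python
from itertools import product

transfer_pct = [10, 15, 20]                        # parameter 1 (%)
headways_min = [(5, 2, 4), (6, 4, 5), (10, 5, 6)]  # parameter 2 (min per rush-hour slot)
travel_time_cut_pct = [2, 4, 6]                    # parameter 3 (%)

scenarios = []
for (i, p1), (j, p2), (k, p3) in product(
    enumerate(transfer_pct, 1),
    enumerate(headways_min, 1),
    enumerate(travel_time_cut_pct, 1),
):
    scenarios.append({"label": f"{i}-{j}-{k}", "transfer": p1,
                      "headways": p2, "time_cut": p3})

print(len(scenarios))   # 27
print(scenarios[0])     # {'label': '1-1-1', 'transfer': 10, 'headways': (5, 2, 4), 'time_cut': 2}
```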


The third parameter is the expected decrease in bus travel time in the triangle area. The proposed shuttle service system decreases the number of buses used in the triangle area; therefore, it is expected that the traffic congestion will be reduced, which might be reflected in decreased bus travel times. This reduction is represented in the ARENA model by reducing the travel time between stations. Three levels for this parameter are 2%, 4%, and 6% reductions in bus travel time.

Each of the eight shuttle lines is compared with the currently active buses on those routes under all scenarios in terms of average passenger traveling time in the triangle area and average waiting time of passengers at the bus stations. Average travel time in the triangle and average waiting time at the bus stations for the current system are compared with the best- and worst-case scenarios of each shuttle line of the proposed system in Tables 9.3 and 9.4, respectively. In these tables, buses are grouped according to their routes and matched with the shuttle system’s lines. It is observed that even with the worst-case scenarios, almost all shuttle lines outperformed the current system.

The individual effects of the experimental design parameters, which make up the scenarios, can be better observed by plotting all of the scenarios and their improvements in the two performance measures at the same time.

Table 9.3  Comparison of Average Traveling Time of Passengers in the Triangle Area for the Current System with the Best- and Worst-Case Scenarios of the Proposed Shuttle System
SHUTTLE LINE | CURRENT: BUS GROUP AVERAGE TRAVELING TIME (S) | BEST: SHUTTLE AVERAGE TRAVELING TIME (S) | % IMPROVEMENT OF BEST | WORST: SHUTTLE AVERAGE TRAVELING TIME (S) | % IMPROVEMENT OF WORST
1 | 724.7 | 482.19 | 33.46 | 555.38 | 23.36
2 | 764.691 | 664.17 | 8.35 | 720.72 | 0.55
3 | 572.08 | 511.11 | 29.47 | 673.05 | 7.13
4 | 393.2 | 314.5 | 56.60 | 344.69 | 52.44
5 | 613.41 | 358.37 | 50.55 | 427.87 | 40.96
6 | 585.23 | 513.8 | 29.10 | 596.91 | 17.63
7 | 753.08 | 509.11 | 29.75 | 671.05 | 7.40
8 | 603.88 | 394.16 | 45.61 | 414.59 | 42.79

Table 9.4  Comparison of Average Waiting Time of Passengers at the Bus Stations for the Current System with the Best- and Worst-Case Scenarios of the Proposed Shuttle System
SHUTTLE LINE | CURRENT: BUS GROUP AVERAGE WAITING TIME (S) | BEST: SHUTTLE AVERAGE WAITING TIME (S) | % IMPROVEMENT OF BEST | WORST: SHUTTLE AVERAGE WAITING TIME (S) | % IMPROVEMENT OF WORST
1 | 430.14 | 152.09 | 79.01 | 312.74 | 56.85
2 | 469.85 | 145.34 | 79.94 | 956.13 | −31.93
3 | 339.94 | 106.75 | 85.27 | 366.24 | 49.46
4 | 343.81 | 107.36 | 85.19 | 234.52 | 67.64
5 | 400.28 | 135.55 | 81.30 | 257.48 | 64.47
6 | 552.26 | 136.81 | 81.12 | 655.6 | 9.53
7 | 403.94 | 103.75 | 85.68 | 363.24 | 49.88
8 | 326.81 | 112.69 | 84.45 | 250.23 | 65.47

Figure 9.6 shows the percentage improvements in average waiting time and average travel time for all 27 scenarios for shuttle line 1. The percentage improvements in Figure 9.6 clearly form three clusters. Upon closer examination, we found that the clusters are almost perfectly determined by the level of the second parameter,

Figure 9.6  Shuttle line 1 travel time and station waiting time percentage improvements. (The plot shows the average travel time improvement percentage on the horizontal axis and the average waiting time improvement percentage on the vertical axis for the 27 scenarios, labeled by their parameter levels; for example, 1-1-1 denotes a 10% expected transfer percentage, 5, 2, and 4 min between shuttles, and a 2% expected decrease in travel time between two consecutive stations.)

which is the shuttle bus frequency during the morning rush hours. Although the other two parameters also affect the performance measures, the bus frequency schedule seems to have the highest impact. Plots for the other shuttle lines show a similar trend.

With the reduction in the number of buses in the triangle area, CO2 emissions are also expected to decrease. Table 9.5 gives the CO2 emissions of the current bus lines that share routes with the shuttle lines. As can be seen in Table 9.5, CO2 emissions (tons/year) can be roughly halved even with the worst-case scenario. According to the UK Department for Environment, Food, and Rural Affairs (2012 DEFRA database), the average CO2 emission for local buses is 0.11195 kg/km. The total distance traveled within the triangle area is calculated, and the expected reduction in CO2 emissions is estimated from it.
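As a rough illustration of that estimate, the sketch below multiplies the DEFRA emission factor by an assumed total distance; the route length and trip counts are hypothetical and are not the values behind Table 9.5.

```python
EMISSION_FACTOR_KG_PER_KM = 0.11195   # average local bus emission (DEFRA 2012)

def annual_co2_tons(route_length_km, trips_per_day, days_per_year=365):
    """Approximate annual CO2 emissions (tons) for one route."""
    total_km = route_length_km * trips_per_day * days_per_year
    return total_km * EMISSION_FACTOR_KG_PER_KM / 1000.0

current = annual_co2_tons(route_length_km=5.0, trips_per_day=40)  # assumed values
shuttle = annual_co2_tons(route_length_km=5.0, trips_per_day=20)  # assumed values
print(round(current, 2), round(shuttle, 2),
      f"{100 * (current - shuttle) / current:.1f}% reduction")     # 50.0% reduction
```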

9.5  Conclusions and Future Work

Traffic congestion and its consequences are significant for many metropolitan cities around the world, and Izmir is no exception. In this study, the focus was traffic congestion in the Izmir city center caused by the large number of public buses, especially during rush hours. An alternative shuttle system was proposed that prevents the entrance of large public buses into the city center, referred to as the triangle area. In the proposed system, passengers transfer either to newly designed shuttle buses or to alternative modes of the public transportation system while traveling through the triangle. Average passenger traveling times and average passenger waiting times at the bus stations were used as performance indicators to compare the current system with the proposed system. For each shuttle line, 27 different scenarios were generated based on three design parameters, and the results were compared with the current bus lines that serve the same routes. The results showed significant improvements in the performance measures, even for the worst-case scenarios. As an added benefit, the proposed shuttle system also produces lower CO2 emissions due to the reduced number of buses in the triangle area. Although the extra transfers made by passengers may seem counterintuitive, our results show that a well-designed system will improve

Table 9.5  Expected CO2 Emission Reductions due to the Proposed Shuttle System in the Triangle Area
SHUTTLE LINE | CURRENT BUS GROUP AVERAGE CO2 (TONS/YEAR) | 28 SHUTTLES: CO2 (TONS/YEAR) | 28 SHUTTLES: IMPROVEMENT (%) | 37 SHUTTLES: CO2 (TONS/YEAR) | 37 SHUTTLES: IMPROVEMENT (%) | 57 SHUTTLES: CO2 (TONS/YEAR) | 57 SHUTTLES: IMPROVEMENT (%)
1 | 0.05 | 0.02 | 60.0 | 0.03 | 40.0 | 0.04 | 20.0
2 | 0.06 | 0.02 | 66.7 | 0.03 | 50.0 | 0.05 | 16.7
3 | 0.06 | 0.02 | 66.7 | 0.03 | 50.0 | 0.04 | 33.3
4 | 0.03 | 0.01 | 66.7 | 0.01 | 66.7 | 0.02 | 33.3
5 | 0.05 | 0.02 | 60.0 | 0.03 | 40.0 | 0.04 | 20.0
6 | 0.06 | 0.02 | 66.7 | 0.03 | 50.0 | 0.05 | 16.7
7 | 0.06 | 0.02 | 66.7 | 0.03 | 50.0 | 0.04 | 33.3
8 | 0.03 | 0.01 | 66.7 | 0.01 | 66.7 | 0.02 | 33.3


passenger experience and benefit the city as a whole. We believe that, where conditions are similar, simulation can be used efficiently to experiment with new system designs for public transportation systems. Improvement of some of the estimation procedures is planned for future work. The estimates of the percentage of passengers transferring to other transportation modes and of the reduction in travel times in the triangle area due to the reduced number of buses were used in the experimentation for evaluating alternative scenarios. In addition, although the passengers boarding at the stations were known from the smart ticketing system, passenger destinations had to be estimated. More data on passenger travel habits and the traffic flow pattern in the triangle area, together with the use of a microsimulation package, will improve the accuracy of the results of this study.

References

Balci, O. Guidelines for successful simulation studies. Proceedings of the 1990 Winter Simulation Conference, Piscataway, NJ. IEEE, New York, 1990, pp. 25–32.
Boll, C.M. Congestion protection for public transportation: Strategies and application to MBTA bus route 66. Thesis, Northeastern University, Boston, MA, 2008, pp. 8–14.
Boxill, S.A. and L. Yu. An evaluation of traffic simulation models for supporting ITS. Technical report. Center for Transportation Training and Research, Texas Southern University, Houston, TX, 2000.
Department of Environment, Food, and Rural Affairs. 2012 Guidelines to Defra/DECC's GHG conversion factors for company reporting: Methodology paper for emission factors. UK Department of Environment, Food, and Rural Affairs, London, U.K., July 2012.
Downs, A. Still Stuck in Traffic: Coping with Peak-Hour Traffic Congestion. The Brookings Institution, Washington, DC, 2004.
ESHOT General Directorate Webpage. Smart ticketing system operations. http://www.eshot.gov.tr/Faaliyet.aspx?MID=195 (accessed January 2014).
Olstam, J. and A. Tapani. A review of guidelines for applying traffic simulation to level-of-service analysis. Sixth International Symposium on Highway Capacity and Quality of Service, Stockholm, Sweden. Elsevier Ltd., Oxford, U.K., 2011, pp. 771–780.
Oner, E. A simulation approach to modeling traffic in construction zones. MSc thesis, Ohio University, Athens, OH, 2004.
Parry, I.W.H. Comparing the efficiency of alternative policies for reducing traffic congestion. Journal of Public Economics 85 (2002): 334.


Pursula, M. Simulation of traffic systems: An overview. Journal of Geographic Information and Decision Analysis 3 (1999): 1–8.
Rosenbloom, S. Peak Period Traffic Congestion: A State of the Art Analysis and Evaluation of Effective Solutions. Elsevier Scientific Publishing Company, Amsterdam, 1978, p. 169.
Schrank, D., B. Eisele, and T. Lomax. TTI's 2012 urban mobility report. Yearly mobility report. Texas A&M Transportation Institute, College Station, TX, 2012.
Vassallo, J.M., F. Di Ciommo, and A. Garcia. Intermodal exchange stations in the city of Madrid. Transportation (Springer Science+Business Media, LLC) 29 (2012): 975–995.
Victoria Transport Policy Institute. Public transit improvements. Victoria Transport Policy Institute, Victoria, British Columbia, Canada, August 28, 2013.

10  Relocation of the Power Transmission and Distribution Division of a Multinational Electronics and Electrical Engineering Company

MESUT KUMRU

Contents

10.1 Introduction
10.2 Facility Layout Problem
10.3 Literature Search
10.4 SLP, Deltahedron, and Nadler's Approaches
10.5 Case Study
  10.5.1 Company
  10.5.2 Problem
  10.5.3 Methodology
  10.5.4 Relocating the PTD Division
    10.5.4.1 Production Capacity, Departments, and Their Abbreviations in PTD Division
    10.5.4.2 Analysis of the Existing Layout
    10.5.4.3 Alternative Layouts
  10.5.5 Evaluation of Alternative Layouts
10.6 Conclusion
References

10.1  Introduction

Facility layout and design are an important issue for any business entity's overall operations, in terms of both maximizing the effectiveness of production processes and meeting employee needs and/or


desires. Facility layout is defined by Weiss and Gershon (1993) as "the physical arrangement of everything needed for the product or service, including machines, personnel, raw materials, and finished goods. The criteria for a good layout necessarily relate to people (personnel and customers), materials (raw, finished, and in processes), machines, and their interactions."

Business owners need to consider many operational factors when building or renovating a facility for maximum layout effectiveness. These factors include future expansion or possible changes of the facility, land use, workflows, material movements, transportation and procurement needs, output requirements, ease of communication and support, employee morale and job satisfaction, promotional values, and safety. In order not to redesign the facility continuously, the facility layout problem should be handled very carefully. There are many goals in facility design, such as keeping material movement at a minimum level, avoiding bottlenecks, minimizing machine interventions, enhancing employee morale and security, and providing flexibility.

There are three basic types of layouts: product, process, and fixed position. Three hybrid types are also used: cellular layouts, flexible manufacturing systems, and mixed-model assembly lines. Essentially, two distinct types of layout (product and process) are widely implemented. Product layout mainly concerns the assembly line arrangement and is closely tied to the products produced, whereas process layout is established according to the production processes used to generate the products. Product layout is principally applied to high-volume repetitive operations, while process layout is applied to low-volume make-to-order operations.

Carefully planning the layout of a facility can have significant long-term benefits for the company's manufacturing and distribution activities. Creating a sustainable growth plan is essential to this effort, and many issues (production process routings and flows, material handling methods and equipment requirements, product mix and volumes, etc.) must be considered while developing it. The basic purpose of a layout is to ensure a smooth flow of work, material, and information through the system. However, many objectives are considered to achieve that: minimization of material handling


costs; efficient utilization of space and labor; elimination of bottlenecks; facilitation of communication and interaction between workers, between workers and their supervisors, and/or between workers and customers; reduction of manufacturing cycle time and customer service time; elimination of wasted or redundant movement; facilitation of entry and exit; placement of material, products, and people; incorporation of safety and security measures; promotion of product and service quality; encouragement of proper maintenance activities; provision of visual control of operations or activities; and provision of flexibility to adapt to changing conditions.

In designing process layouts, the most significant objective is to minimize material handling costs. This implies that departments that incur the most interdepartmental movement should be located closest to one another. For this purpose, two main approaches are widely used to design layouts: algorithmic and procedural approaches (Yang et al., 2000). Algorithmic approaches consider only quantitative factors, whereas procedural approaches can use both quantitative and qualitative factors. Algorithmic approaches can efficiently generate alternative layout designs, but often with oversimplified objectives (Yang and Hung, 2007), and they can be computationally complex and prohibitive. That is why systematic layout planning (SLP) was adopted in industry as a viable approach over the past few decades (Han et al., 2012). Therefore, a procedural layout design approach, SLP, is preferred in this chapter to solve the facility relocation problem of an electronics and electrical engineering company. Furthermore, the performance of the preferred method is compared to those of Nadler's ideal systematic approach (another procedural approach) and the deltahedron method (a graph-theoretic heuristic algorithm) by use of linear weighting in factor analysis. After the facility layout problem definition and the literature survey results are given, along with an introduction of the techniques used in the study, details of the application are presented in the following sections.

10.2  Facility Layout Problem

The placement of facilities on the plant site is often known as the facility layout problem. This activity has a significant influence


on manufacturing costs, operation processes, lead times, and productivity. A suitable placement of facilities contributes to the overall efficiency of the plant and can reduce operating expenses by up to 50% (Tompkins et al., 1996). Simulation studies are usually carried out to measure the benefits and performance of given layouts (Aleisa and Lin, 2005). Since layout problems are known to be complex and generally NP-hard (Garey and Johnson, 1979), numerous research studies have been conducted in this area during the past decades. As researchers have taken various ideas into consideration in their studies, they have not agreed on a standard and exact definition of layout problems.

A facility layout is an arrangement of everything needed for the production of goods or delivery of services. A facility is an entity that facilitates the performance of any job; it may be a machine tool, a work center, a manufacturing cell, a machine shop, a department, a warehouse, etc. (Heragu, 1997). Koopmans and Beckmann (1957) defined the facility layout problem as a common industrial problem in which the objective is to configure facilities so as to minimize the cost of transporting materials between them. Azadivar and Wang (2000) reported that the facility layout problem is the determination of the relative locations for a given number of facilities and the allocation of the available space among those facilities. According to Lee and Lee (2002), the facility layout problem consists in arranging unequal-area facilities of different sizes within a given total space, which can be bounded by the length or width of the site area, so as to minimize the total material handling and slack area costs. Shayan and Chittilappilly (2004) defined the facility layout problem as an optimization problem that tries to make layouts more efficient by considering various interactions among facilities and material handling systems while designing layouts.

Drira et al. (2007) stated that the problems addressed in research works differ depending on factors such as the following:

Workshop characteristics impacting the layout: product variety and volume, facility shapes and dimensions, material handling systems, multifloor layout, backtracking and bypassing, and pickup and drop-off locations.

• Static versus dynamic layout problems (formulation of layout problems): discrete formulation, continual formulation, fuzzy formulation, multiobjective layout problems, and simultaneous solving of different problems.
• Resolution approaches: exact approaches and approximated approaches.

Above all, recent papers rest on complex and realistic features of the manufacturing systems studied. Facility layout is considered together with typical parameters such as pickup/deposit points, corridors, and complex geometric constraints when the layout design problem is formulated. Much research still contains restrictive assumptions that are not adapted to the complexity of many manufacturing system facilities; this is an outdated approach and certainly an important issue to be addressed (Benjaafar et al., 2002). However, research is still needed. Designing a plant using a third dimension, a more recent approach, requires further research, for example, on selecting and optimizing the resources related to the vertical transportation of parts between floors. Researchers have mostly preferred to deal with static layout problems rather than dynamic ones. However, considering the changing conditions of operating systems, it is clear that static approaches are unable to follow these changes. Dynamic approaches have been developed to address changing business conditions and are sometimes seen as good alternatives, while fuzzy methods may offer possibilities for assessing uncertainty. Meanwhile, as already noted by Benjaafar et al. (2002), research is still needed to suggest or improve methods for designing (1) robust and adaptive layouts, (2) sensitivity measures and analyses of layouts, and (3) stochastic models used to evaluate solutions. As far as the methods used in the solution of layout problems are concerned, metaheuristic methods have been widely used in facility layout studies that deal with larger problems and take constraints into account in a more realistic way. Evolutionary algorithms seem to be among the most popular approaches. Solution methods are also hybridized (integrated) to solve complex problems or to develop more realistic solutions. Studies based on artificial intelligence are now rarely published.

On the other hand, due to the difficulty of solving any problem without the use of expert systems, hybrid methods capable of optimizing the layout while taking the available expert knowledge into account are likely to still be needed. Most of the published research has focused on the determination of the plant layout alone. In practice, however, this problem is often addressed together with other design issues, such as the selection of production or transportation resources, the design of cells, and the determination of resource capacities. These problems are generally dependent on each other; for example, the selection of a material handling conveyor as a means of transportation influences the selection criteria for automated guided vehicles. Therefore, during plant design, research is needed that addresses this variety of problems simultaneously rather than sequentially. Such studies are promising for the development and improvement of plant layouts. This approach will indeed direct researchers to focus on workshop design problems rather than concentrating only on facility location problems.

10.3  Literature Search

Facility layout design approaches in the literature are commonly categorized as algorithmic and procedural approaches (Yang et al., 2000). Algorithmic approaches can efficiently generate alternative layout designs, but often with oversimplified objectives (Yang and Hung, 2007). In these approaches, material handling distances and loads are used quantitatively to develop the layout alternative with the minimum total material handling cost. Since these approaches take the flow distance as either a Euclidean or a rectilinear distance, which may not represent the physical flow distance, they simplify both the design constraints and the objectives in order to obtain a surrogate objective function for attaining a solution. When qualitative design criteria are concerned, these approaches lack functionality and credibility for a quality solution; their shortcoming comes out when all qualitative factors must be aggregated into one criterion. These approaches can generate better results when commercial software is available. The basic limitation

of these approaches is that they consider only quantitative factors and do not consider any qualitative factors. Some additional approaches in this category used the flow distance as a surrogate function to solve the layout design problem via mixed-integer programming formulations (Heragu and Kusiak, 1991; Peters and Yang, 1997; Yang et al., 2005; Chan et al., 2006), but they were often computationally prohibitive. Heuristics, metaheuristics, neural networks, and fuzzy logic, as well as exact procedures, have also been utilized to generate layout alternatives (Singh and Sharma, 2006). The majority of the existing literature reports on algorithmic approaches (Heragu, 1997). On the other hand, procedural approaches can incorporate both qualitative and quantitative objectives in the design process, which is divided into several steps that are then solved sequentially (Han et al., 2012). These approaches rely on experts' experience (Yang et al., 2000). An effective and well-known method in this category is the SLP procedure (Muther, 1973). SLP is widely used in both industry and academia. The practical application of traditional SLP requires intricate steps that can lead to unstable results if not applied properly. Since algorithmic approaches require advanced training in mathematical modeling techniques, SLP has been adopted in industry as a viable approach over the past few decades (Han et al., 2012). Chien (2004) proposed new concepts and several algorithms to modify the procedures and enhance the practicality of traditional SLP. In order to solve a factory layout design problem, Yang et al. (2000) applied SLP as the infrastructure and then the analytic hierarchy process (AHP) for evaluating the design alternatives. Considering hygienic factors, Van Donk and Gaalman (2004) utilized SLP in planning layouts in the food industry. Based on SLP and AHP, a cellular manufacturing layout design was applied to an electronic manufacturing service plant (Nagapak and Phruksaphanrat, 2011). Mu-jing and Gen-gui (2005) combined SLP and a genetic algorithm to solve the facility layout problem. Differing from the earlier studies, Han et al. (2011) proposed a parametric layout design for a flexible manufacturing system. The SLP method applied in this study is a practical approach for new layout designs that does not require deep mathematical knowledge.

Its performance is compared in this work to those of the graph theoretic-based deltahedron heuristic and the procedure-based Nadler's approach. All three approaches are briefly described in the next section.

10.4  SLP, Deltahedron, and Nadler's Approaches

The SLP procedure (Muther, 1973) uses sequential steps in solving the layout problem. It is based on the input data and an understanding of the roles and relationships between activities. In the first step, a material flow analysis (from–to chart) and an activity relationship analysis (activity relationship chart) are performed. From these analyses, a relationship diagram is developed to be used as the foundation of the procedure. The next two steps involve the determination of the amount of space to be assigned to each activity and the allocation of the total space to the departments, considering the relationship diagram. The criterion (objective measurement) for the positioning of departments is department adjacency or some other user-defined metric. Irrelevant adjacencies receive zero scores, while relevant adjacencies receive relationship/adjacency scores determined as the number of interdepartmental material flows. Based on modifying considerations and practical limitations, a number of layout alternatives are developed and evaluated, and the preferred alternative is then recommended.

The deltahedron heuristic was developed by Foulds and Robinson (1978) to construct a maximal planar adjacency graph by a sequence of insertions of a new vertex and three edges into a triangular face. It starts with a complete graph on four vertices (K4). Its input requirements are the initial K4 and the insertion order in which the vertices will be processed. Each vertex is successively inserted into the face of the triangulation that results in the largest increase in edge score (Giffin et al., 1995). The objective is to maximize the relationship (adjacency) scores. Once the adjacency graph has been obtained, a corresponding block layout is constructed.

Nadler's ideal system approach (Nadler, 1961) is one of the procedural approaches to facility layout planning and is based on the following hierarchical steps: (1) aim for the theoretical ideal system, (2) conceptualize the ultimate ideal system, (3) design the technologically workable ideal system, and (4) install the recommended system.
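
As an illustration of the insertion step, the following is a minimal Python sketch of a deltahedron-style construction, assuming the relationship scores are supplied as a dictionary keyed by department pairs and that the insertion order is simply the order of the input list; the function and variable names are illustrative and are not taken from Foulds and Robinson (1978).

from itertools import combinations

def deltahedron(departments, score, seed):
    # Greedy deltahedron-style construction of a maximal planar adjacency graph:
    # start from a complete graph on four seed departments (K4) and insert every
    # remaining department into the triangular face whose three corners give the
    # largest total relationship score with it.
    faces = [tuple(f) for f in combinations(seed, 3)]      # the four faces of K4
    edges = {frozenset(e) for e in combinations(seed, 2)}
    for dept in departments:
        if dept in seed:
            continue
        best = max(faces, key=lambda f: sum(score.get(frozenset((dept, v)), 0) for v in f))
        faces.remove(best)
        # splitting the chosen face keeps the graph planar and fully triangulated
        faces += [(best[0], best[1], dept), (best[0], best[2], dept), (best[1], best[2], dept)]
        edges |= {frozenset((dept, v)) for v in best}
    return edges    # department pairs judged adjacent; a block layout is drawn from these

The returned edge set plays the role of the adjacency graph from which a corresponding block layout is subsequently drawn.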

10.5  Case Study

10.5.1  Company

The case company is a global powerhouse in electronics and electrical engineering, operating in the manufacturing, energy, and health-care sectors. It has operations in almost 190 countries, including Turkey, and owns approximately 285 production and manufacturing facilities with nearly 470,000 employees around the world. The company also has operating facilities in Istanbul and already produces various products in its Kartal plant. The product range of the Kartal plant spans 14 main categories, from medium-voltage panels to protection systems, and the plant employs around 2,500 people.

10.5.2  Problem

Among its production divisions, the power transmission and distribution (PTD) division has achieved the highest growth rates in the past years. Since it had reached its full capacity, an urgent facility revision was needed for the division. The assembly and tooling areas, as well as the preproduction, preassembly, and storage areas, were no longer sufficient to meet new demand. Factory management therefore decided to investigate relocating the PTD division to a separate area, and a project team was established to carry out the work. The new facility area was located in Gebze, a district of Kocaeli city, and was more than twice as large as the existing facility area in Kartal: the current facility area was about 9,000 m2, while the new area was almost 20,000 m2. For this reason, an entirely new facility placement, rather than an adaptation of the current layout, was necessary for the plant. The new facility would also include a new department called voltage circuit breaker (VCB). The products of the VCB department had formerly been outsourced, but for quality and transportation cost reasons, the company decided to establish this department in-house. One of the most important problems was the placement of the quality control (QC) department, as it was outside the main area. In addition, the dispatching department was experiencing a shortage of space. Though all of the departments (except VCB) were located under the same roof in the current

facility, the available site was not sufficient to meet the increasing customer demand caused by unexpectedly high growth rates. Therefore, instead of redesigning the existing layout around the needs of the QC, VCB, and dispatching departments, these departments were transferred to a separate production area outside the plant as an urgent solution. In this way, additional production space adjacent to the main location was provided. In the meantime, however, material handling costs increased, and the time wasted between the separate production places became an important issue. Thus, a new settlement (relocation) of the division was considered unavoidable. According to the research and forecasts, the current production capacity was almost full, and the existing production area would not be adequate within the coming 3 years. Hence, the relocation of the production site became a matter of priority for the company. This issue was addressed in a project that would include various alternative layouts for the new area. The problem was not simply changing the current layout; it was more than that. A relocation plan for the production facility would be established with a brand-new design. After all alternative layouts were prepared, they would be compared in terms of effectiveness scores, and the layout that best solved the problem would be chosen.

10.5.3  Methodology

While solving the relocation problem, machine and equipment costs were ignored; the material flow and the departmental needs were taken into account. The figures for the sizes of the areas required for the new project were obtained from previous research projects. The SLP technique was used to design the layout plans. It was based on quantitative data as well as on the identification of the roles and relationships between production activities. A material flow analysis (from–to chart) and an activity relationship analysis (activity relationship chart) were constructed. The relationship diagram positioned the activities spatially, with proximities used to reflect the relationship between pairs of activities. The next step involved the determination of the amount of space to be assigned to each activity. The SLP procedure, which depends on the relationship scores between the departments, was used to develop

first a block layout and then a detailed layout for each department. If two departments' borderlines touched each other, their score was counted; otherwise, it was taken as 0. All the interdepartmental relationships were quantified according to this principle. After the relationship diagram was prepared, three alternative layout solutions (of rectangular and L shapes) were developed with SLP. Two more layout solutions were generated by using the graph theoretic-based deltahedron heuristic and Nadler's ideal system approach, for performance comparison of the techniques. All of the alternative layout solutions were compared by use of linear weighting on the basis of the criteria of relationship/adjacency, waiting time, flexibility, safety, and ease of supervision. Depending on the results, the performances of the techniques were evaluated comparatively, and the most effective solution was recommended.
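
As a small illustration of how the from–to scores are turned into the closeness ratings of Table 10.5 (where each rating band is one fifth of the maximum score of 958), the following sketch assigns the A/E/I/O/U codes; the function name and the equal-band assumption are illustrative rather than prescribed by SLP.

def closeness_rating(score, max_score=958.0):
    # Map an annual interdepartmental flow score to the A/E/I/O/U closeness
    # ratings of Table 10.5; each band spans one fifth of the maximum score.
    band = max_score / 5.0                   # 191.6 for the case data
    ratings = ["U", "O", "I", "E", "A"]      # 0-191.6, 191.6-383.2, ..., 766.4-958
    return ratings[min(int(score // band), 4)]   # clamp the maximum score into "A"

# e.g., closeness_rating(786) -> "A", closeness_rating(294) -> "O", closeness_rating(69) -> "U"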

10.5.4  Relocating the PTD Division

10.5.4.1  Production Capacity, Departments, and Their Abbreviations in the PTD Division

In the medium-voltage product range of the company,

there are three main kinds of products: BT panels, BK panels, and Simoprime. The BT panels in turn have three subgroups, identified as 8BT1, 8BT2, and 8BT3. Hence, the total number of product groups is five (8BT1, 8BT2, 8BT3, 8BK20, and Simoprime). The production capacities of the product groups are given in Table 10.1. Thirteen departments operate in the PTD division; their names and the abbreviations used in the following sections are given, in line with the process flow, in Table 10.2.

Table 10.1  Production Capacities

NO.  PRODUCT    PRODUCT QUANTITY/3 YEARS  MONTHLY AVERAGE  PIE %
1    8BT1          74                       3                2
2    8BT2         954                      27               16
3    8BT3         160                       5                3
4    8BK20       1887                      53               32
5    Simoprime   2841                      79               47

Table 10.2  Departments and Their Abbreviations in Line with the Process Flow

NO.  DEPARTMENT               ABBREVIATION
1    Warehouse                WH
2    Trumatic (punching)      TR
3    Bending                  B
4    Welding                  W
5    Painting                 P
6    Logistic center          LC
7    Low-volt panel assembly  LV
8    Voltage circuit breaker  VCB
9    Truck assembly           TA
10   Main assembly            MA
11   Quality control          QC
12   Crash area               CA
13   Dispatch                 D

Table 10.3  Minimum Space Requirements of the Departments

NO.  DEPT.  SPACE (m2)
1    WH        900
2    TR        700
3    B         800
4    W         700
5    P       1,600
6    LC      1,300
7    LV      1,700
8    VCB       700
9    TA        700
10   MA      3,000
11   QC      1,700
12   CA        600
13   D       1,500
     Total  16,000

The minimum space requirement of each department in the PTD division was determined as given in Table 10.3. The activity relationships, represented by codes indicating which activities are related to each other, are given in Table 10.4. The annual number of material flows between two departments is called their score. The closeness (adjacency) rating corresponding to the scores is determined with respect to the maximum score of 958, as given in Table 10.5.

Table 10.4  Activity Relationships

DEPT.  A    E   I     O   U
WH     TR   —   —     —   —
TR     —    —   —     B   —
B      —    —   W, P  —   —
W      —    P   —     MA  —
P      MA   —   —     —   TA
LC     VCB  —   —     LV  TA, MA
LV     MA   —   —     —   —
VCB    TA   MA  —     —   —
TA     —    MA  —     —   —
MA     —    —   QC    —   —
QC     D    —   —     —   CA
CA     —    —   —     —   —
D      —    —   —     —   —

Table 10.5  Closeness Rating as to the Scores

RATING                      SCORE RANGE
A (absolutely necessary)    766.4–958
E (especially important)    574.8–766.4
I (important)               383.2–574.8
O (ordinary)                191.6–383.2
U (unimportant)             0–191.6

Figure 10.1  From–to chart with closeness ratings. The nonzero interdepartmental scores are as follows (all other department pairs score 0 (U)):

WH–TR  786 (A)    TR–B   294 (O)    B–W    512 (I)    B–P    480 (I)
W–P    723 (E)    W–MA   308 (O)    P–TA    69 (U)    P–MA   958 (A)
LC–LV  303 (O)    LC–VCB 881 (A)    LC–TA  182 (U)    LC–MA  146 (U)
LV–MA  833 (A)    VCB–TA 880 (A)    VCB–MA 727 (E)    TA–MA  622 (E)
MA–QC  542 (I)    QC–CA   36 (U)    QC–D   940 (A)

The relationships between departments (from–to chart) are given in Figure 10.1 in terms of the corresponding scores and closeness ratings.

10.5.4.2  Analysis of the Existing Layout

The current block layout of the PTD division is shown in Figure 10.2. The interdepartmental closeness ratings corresponding to the scores, and the total score attained for the existing layout, are given in Table 10.6.

10.5.4.3  Alternative Layouts

Based on the current layout input, five new alternative layouts were generated for the PTD division to be established in the new plant area in the Gebze region. The first three layout alternatives were generated by use of the SLP approach, while the fourth and the fifth were generated by use of the deltahedron heuristic and Nadler's approach, respectively.

Figure 10.2  Existing block layout (Kartal), showing the block arrangement of the departments QC, LC, MA, D, CA, B, LV, TR, W, P, TA, and WH.

Table 10.6  Relations and Scores for the Existing Layout

DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS
LV–MA        833       B–TR         294       CA–D         0
LV–TA        0         B–W          512       QC–LV        0
MA–LC        146       TR–W         0         QC–MA        542
MA–P         958       TR–WH        786       D–LV         0
MA–TA        622       W–WH         0         B–P          480
LC–P         0         QC–CA        36        TA–P         0
LC–B         0         QC–D         940       Total        6149
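
The total score of a block layout, such as the 6149 of Table 10.6, follows from the adjacency principle described in Section 10.5.3: the from–to scores are summed over every department pair whose blocks share a border. The short sketch below reproduces that total; the adjacency list contains only the border-sharing pairs of the existing Kartal layout that carry nonzero flow, since zero-flow adjacencies (such as LV–TA or LC–B) add nothing to the sum, and the names used are illustrative.

def layout_score(adjacent_pairs, flow):
    # Sum the from-to chart scores over department pairs whose blocks share a border.
    return sum(flow.get(frozenset(pair), 0) for pair in adjacent_pairs)

# Nonzero scores of the from-to chart in Figure 10.1 (every other pair scores 0):
flow = {frozenset(p): s for p, s in [
    (("WH", "TR"), 786), (("TR", "B"), 294), (("B", "W"), 512), (("B", "P"), 480),
    (("W", "P"), 723), (("W", "MA"), 308), (("P", "TA"), 69), (("P", "MA"), 958),
    (("LC", "LV"), 303), (("LC", "VCB"), 881), (("LC", "TA"), 182), (("LC", "MA"), 146),
    (("LV", "MA"), 833), (("VCB", "TA"), 880), (("VCB", "MA"), 727), (("TA", "MA"), 622),
    (("MA", "QC"), 542), (("QC", "CA"), 36), (("QC", "D"), 940)]}

# Border-sharing pairs with nonzero flow in the existing Kartal layout (Table 10.6):
existing_adjacencies = [("LV", "MA"), ("MA", "LC"), ("MA", "P"), ("MA", "TA"),
                        ("B", "TR"), ("B", "W"), ("TR", "WH"), ("QC", "CA"),
                        ("QC", "D"), ("QC", "MA"), ("B", "P")]
print(layout_score(existing_adjacencies, flow))   # 6149, the total of Table 10.6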

Three of these layout alternatives are of rectangular shape, and the remaining two are of L shape. The layouts and their corresponding total scores are given in Figures 10.3 through 10.7 and Tables 10.7 through 10.11. Adjacency was adopted as the definition of closeness: if two activities had a common border, they were judged to be close; otherwise, they were not. Detailed drawings of the current and candidate layouts were produced with AutoCAD software, and alternative configurations with varying placements of the departments were tested several times in order to find the best settlement.

10.5.5  Evaluation of Alternative Layouts

The alternative layouts developed in this study differed from each other with respect to the relationship scores. When only relationship

Figure 10.3  Alternative layout 1.

Figure 10.4  Alternative layout 2.

scores are considered, the second SLP alternative provides the best solution, with the highest score of 9531. The other alternatives follow in descending order of score: Nadler's (8124), deltahedron (7761), SLP1 (7559), and SLP3 (6539). The best solution is of rectangular type, whereas the second best is of L shape. The deltahedron heuristic took third place with its block design. Though SLP generated the best solution with its block diagram (SLP2), it also generated the worst one with its L-shape diagram (SLP3). The procedural approaches, SLP and Nadler's, provided good solutions by taking the first two places in the list, while the graph theoretic-based deltahedron approach took only third place. When

Figure 10.5  Alternative layout 3.

Figure 10.6  Deltahedron layout (layout 4).

the first three layout alternatives are regarded, the SLP method seems superior to the others. Though the relationship score is an objective measurement for layout planning, it is not usually sufficient by itself to select the proper layout and should be supported with other criteria that also influence the decision-making process. In our case, four more criteria were taken into consideration along with the relationship (adjacency) score in order to select the appropriate layout. These additional criteria were

Figure 10.7  Nadler's layout (layout 5).

Table 10.7  Relations and Scores for Alternative Layout 1

DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS
WH–P         0         VCB–MA       727       LC–MA        146
WH–TR        786       QC–D         940       TA–MA        622
TR–B         294       QC–MA        542       TA–LV        0
TR–P         0         CA–LV        0         CA–TA        0
B–W          512       CA–QC        36        MA–P         958
B–P          480       CA–D         0         W–P          723
W–VCB        0         LC–P         0         MA–CA        0
W–MA         308       LC–TA        182       Total        7559
VCB–QC       0         LC–LV        303

Table 10.8  Relations and Scores for Alternative Layout 2

DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS
TR–WH        786       LC–TA        0         MA–W         308
B–WH         0         LC–MA        0         LV–CA        0
B–W          512       P–MA         958       LV–QC        0
B–P          480       P–QC         0         QC–CA        36
W–P          723       TA–QC        0         QC–D         940
WH–LC        0         TA–MA        622       CA–D         0
WH–P         0         MA–VCB       727       TA–VCB       880
LC–LV        303       MA–LV        833       Total        9531
LC–VCB       881       MA–QC        542

Table 10.9  Relations and Scores for Alternative Layout 3

DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS
TR–WH        786       LC–TA        182       MA–CA        0
B–WH         0         LC–MA        146       LV–CA        0
B–W          512       P–TA         69        LV–QC        0
B–P          480       P–QC         0         QC–CA        36
W–P          723       TA–QC        0         QC–D         0
WH–LC        0         TA–MA        622       CA–D         0
WH–P         0         MA–VCB       727       Total        6539
LC–P         0         MA–LV        833
LC–VCB       881       MA–QC        542

Table 10.10  Relations and Scores for the Deltahedron Layout

DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS
TR–WH        786       LC–TA        182       MA–P         958
B–WH         294       LC–MA        146       LV–CA        0
B–TR         0         VCB–TA       880       LV–QC        0
B–P          480       P–QC         0         QC–CA        36
W–P          723       TA–QC        0         QC–D         940
WH–LC        0         TA–MA        622       CA–D         0
WH–P         0         MA–VCB       0         Total        7761
LC–P         0         MA–LV        833
LC–VCB       881       MA–QC        0

Table 10.11  Relations and Scores for Nadler's Layout

DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS   DEPARTMENTS  WEIGHTS
TR–WH        786       LC–MA        146       QC–CA        36
TR–B         294       P–LC         0         CA–D         0
B–WH         0         TA–VCB       880       QC–D         940
W–B          512       P–TA         69        QC–MA        542
B–P          480       VCB–P        0         TA–QC        0
W–P          723       MA–LV        833       VCB–QC       0
WH–LC        0         MA–TA        622       Total        8124
WH–P         0         MA–P         958
LC–LV        303       LV–CA        0

flexibility, safety, waiting time, and ease of supervision. The flexibility criterion in designing the facility layout was concerned with accommodating short- and medium-term changes in the production process and manufacturing volumes. The safety criterion was concerned with the safety level of material movement and personnel workflow. The waiting time criterion referred to the time elapsed during the workflow, and ease of supervision was related to the complexity of the workflow. A factor analysis by linear weighting was applied to the selection criteria that have an impact on the facility layout decision. First, appropriate weights were assigned by the plant experts to each criterion according to its relative importance. Then, the alternative layouts were assessed one by one on the basis of those four criteria, and scores out of 100 were assigned to each layout alternative with respect to the criteria identified (Table 10.12). While assigning scores to the criteria for each layout alternative, the experts took into account several inputs, such as minimum movement of people, material, and resources; space allocation and free space area; complexity and density of the layout; and interdepartmental disconnection distances. After normalization of the scores, the total weight of each layout alternative was computed, and the one with the highest total weight was selected as the best alternative (Table 10.13).

Table 10.12  Scores of the Layout Alternatives as to the Criteria

METHODS      RELATIONSHIP SCORE  FLEXIBILITY  SAFETY  WAITING TIME  EASE OF SUP.
SLP1         7559                 20          50      60             40
SLP2         9531                 40          75      80             60
SLP3         6539                 70          40      40             80
Deltahedron  7761                 50          60      70             50
Nadler's     8124                100          20      30            100

Table 10.13  Normalized Scores and Total Weights of the Layout Alternatives

METHODS      RELATIONSHIP SCORE (0.70)  FLEXIBILITY (0.15)  SAFETY (0.08)  WAITING TIME (0.05)  EASE OF SUP. (0.02)  TOTAL WEIGHT
SLP1         0.19                       0.07                0.20           0.21                 0.12                 0.1724
SLP2         0.24                       0.14                0.31           0.29                 0.18                 0.2319
SLP3         0.16                       0.25                0.16           0.14                 0.25                 0.1743
Deltahedron  0.20                       0.18                0.25           0.25                 0.15                 0.2025
Nadler's     0.21                       0.36                0.08           0.11                 0.30                 0.2189
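
The total weights of Table 10.13 can be reproduced, up to the rounding applied there, by normalizing each criterion column of Table 10.12 by its column total and then taking the weighted sum with the criterion weights 0.70, 0.15, 0.08, 0.05, and 0.02. The following sketch illustrates this computation under that normalization assumption; it yields the same ranking as Table 10.13, with SLP2 first.

# Criterion weights and the raw scores of Table 10.12 (columns: relationship
# score, flexibility, safety, waiting time, ease of supervision).
weights = [0.70, 0.15, 0.08, 0.05, 0.02]
raw = {
    "SLP1":        [7559,  20, 50, 60,  40],
    "SLP2":        [9531,  40, 75, 80,  60],
    "SLP3":        [6539,  70, 40, 40,  80],
    "Deltahedron": [7761,  50, 60, 70,  50],
    "Nadler's":    [8124, 100, 20, 30, 100],
}

# Normalize each criterion column by its column total, then take the weighted sum.
column_totals = [sum(scores[c] for scores in raw.values()) for c in range(len(weights))]
total_weight = {
    method: sum(w * scores[c] / column_totals[c] for c, w in enumerate(weights))
    for method, scores in raw.items()
}
for method, value in sorted(total_weight.items(), key=lambda kv: -kv[1]):
    print(f"{method}: {value:.4f}")    # SLP2 ranks first, as in Table 10.13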

Table 10.13 shows that the relationship score was given the highest weight, 0.70, while the other criteria took weights between 0.15 and 0.02. This implies that the plant experts regarded the relationship criterion as the most important one affecting the layout decision. According to the total weights, the layout alternatives rank in descending order as SLP2 (0.2319), Nadler's (0.2189), deltahedron (0.2025), SLP3 (0.1743), and SLP1 (0.1724). Though this ordering is largely consistent with the ordering of the layouts by relationship score, the SLP1 layout was penalized by the flexibility and ease-of-supervision criteria relative to the other alternatives and dropped to last place instead of fourth. Hence, the SLP2 layout alternative proved superior to Nadler's and the deltahedron alternatives and was selected as the best one. Given that the relationship score of the existing layout is 6149, the selected layout, with its relationship score of 9531, is about 55% better than the existing layout in this respect. Even if it cannot be measured easily, a considerable improvement in overall efficiency may be expected from implementing the selected layout in the PTD division. The five alternative layouts are near-optimal solutions; theoretically, it is possible to find better solutions as well. However, when the applicability (building limitations, etc.) of such solutions is regarded, the proposed alternative layouts stand out as feasible compared to the others.

10.6  Conclusion

The most important cause of high material handling costs is the lack of strategic facility planning. With an effective layout, material handling costs can be reduced by 10%–30%, which directly affects the cost of production. This research was conducted in line with the expectations and needs of a multinational company to find an urgent and appropriate solution to the relocation problem of one of its production divisions, namely, PTD. The PTD division was incurring high handling costs that necessitated an urgent layout revision. To solve the problem, the needs of the division were first defined; the solution was then reached by the use of the SLP, Nadler's, and

deltahedron methodologies and linear weighting for the criteria comparisons. SLP was found to be the most effective of the methods and is a widely used alternative for this kind of relocation problem. In our application, five alternative layouts were generated, including the most effective one; of course, several other methods could also be used for such problems. The handling cost at the PTD division is expected to have been reduced by about 50% after the relocation. Each layout design application is unique in nature, that is, different attributes are associated with different applications; thus, the success of the present study is no guarantee of its applicability to other cases, and judicious use of a design method is advised when solving a specific application. The project implementation given in this chapter focuses on manufacturing system applications. In fact, layout design problems exist in almost every type of system, such as manufacturing plants, hospitals, hotels, ports, and supermarkets; however, advances made in the specific areas of manufacturing may positively influence the design of layouts for other systems. Finally, we should refer to the commercial software tools available on the market for assisting in the design of manufacturing layouts. Though these tools have been developed with manufacturing systems in mind, they are limited in number. Therefore, additional software tools with generic solution approaches are needed in order to bring easy and quick solutions to every type of layout design problem.

References

Aleisa, E.E. and Lin, L. (2005). For effectiveness facilities planning: Layout optimization then simulation, or vice versa? Proceedings of the 37th Winter Simulation Conference. Orlando, FL, pp. 1381–1385. Azadivar, F. and Wang, J. (2000). Facility layout optimization using simulation and genetic algorithms, International Journal of Production Research, 38(17), 4369–4383. Benjaafar, S., Heragu, S.S., and Irani, S.A. (2002). Next generation factory layouts: Research challenges and recent progress, Interface, 32(6), 58–76. Chan, F.T.S., Lau, K.W., Chan, P.L.Y., and Choy, K.L. (2006). Two-stage approach for machine-part grouping and cell layout problems, Robotics and Computer-Integrated Manufacturing, 22, 217–238.

Chien, T.-K. (2004). An empirical study of facility layout using a modified SLP procedure, Journal of Manufacturing Technology Management, 15(6), 455–465. Drira, A., Pierreval, H., and Gabou, S.H. (2007). Facility layout problems: A survey, Annual Reviews in Control, 31(2), 255–267. Foulds, L.R. and Robinson, D.F. (1978). Graph theoretic heuristics for the plant layout problem, International Journal of Production Research, 16, 27–37. Garey, M.R. and Johnson, D.S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, New York. Giffin, J.W., Watson, K., and Foulds, L.R. (1995). Orthogonal layouts using the deltahedron heuristic, Australasian Journal of Combinatorics, 12, 127–144. Han, K.H., Bae, S.M., Choi, S.H., Lee, G., and Jeong, D.M. (2011). Parametric layout design and simulation of flexible manufacturing system. Proceedings of the 10th WSEAS International Conference on System Science and Simulation in Engineering, Penang, Malaysia, October 3–5, 2011, pp. 94–99. Han, K.H., Bae, S.M., and Jeong, D.M. (2012). A decision support system for facility layout changes. In: Latest Trends in Information Technology, Anderson, D., Yang, H.-J., and Varacha, P. (eds.), Wseas Press, Vienna, Austria, pp. 79–84. Heragu, S.S. (1997). Facilities Design. PWS Publishing, Boston, MA. Heragu, S.S. and Kusiak, A. (1991). Efficient models for the layout design problem, European Journal of Operational Research, 53, 1–13. Koopmans, T.C. and Beckmann, M. (1957). Assignment problems and the location of economic activities, Econometrica, 25(1), 53–76. Lee, Y.H. and Lee, M.H. (2002). A shape-based block layout approach to facility layout problems using hybrid genetic algorithm, Computers & Industrial Engineering, 42, 237–248. Mu-jing, Y. and Gen-gui, Z. (2005). Method of systematic layout planning improved by genetic algorithm and its application to plant layout design, Journal of East China University of Science and Technology, 31(3), 371–375. Muther, R. (1973). Systematic Layout Planning, 2nd edn. Cahners Books, Boston, MA. Nadler, G. (1961). Work Design: A Systems Concept. Richard D. Irwin, Inc., Homewood, IL. Nagapak, N. and Phruksaphanrat, B. (2011). Cellular manufacturing layout design and selection: A case study of electronic manufacturing service plant. Proceedings of the International Multiconference of Engineers and Computer Scientists, Kowloon, Hong Kong, Vol. 2, March 16–18, 2011. Peters, B.A. and Yang, T. (1997). Integrated facility layout and material handling system design in semiconductor fabrication facilities, IEEE Transactions on Semiconductor Manufacturing, 10(3), 360–369. Shayan, E. and Chittilappilly, A. (2004). Genetic algorithm for facilities layout problems based on slicing tree structure, International Journal of Production Research, 42(19), 4055–4067.

Singh, S. P. and Sharma, R.R.K. (2006). A review of different approaches to the facility layout problems, International Journal of Advanced Manufacturing Technology, 30(5/6), 425–433. Tompkins, J.A., White, J.A., Bozer, Y.A., Frazelle, E.H., Tanchoco, J.M., and Trevino, J. (1996). Facilities Planning. Wiley, New York. Van Donk, D.P. and Gaalman, G. (2004). Food safety and hygiene: Systematic layout planning of food processes, Chemical Engineering Research and Design, 82(A11), 1485–1493. Weiss, H.J. and Gershon, M.E. (1993). Production and Operations Management, 3rd edn. Allyn & Bacon, Boston, MA. Yang, T. and Hung, C.-C. (2007). Multiple-attribute decision making methods for plant layout design problem, Robotics and Computer-Integrated Manufacturing, 23, 126–137. Yang, T., Peters, B.A., and Tu, M. (2005). Layout design for flexible manufacturing systems considering single-loop directional flow patterns, European Journal of Operational Research, 164, 440–455. Yang, T., Su, C.-T., and Hsu, Y.-R. (2000). Systematic layout planning: A study on semiconductor wafer fabrication facilities, International Journal of Operation Production Management, 20(11), 1359–1371.

11
Location Problems with Demand Regions

DERYA DİNLER, MUSTAFA KEMAL TURAL, AND CEM İYİGÜN

Contents

11.1  Introduction
11.2  Location Problems with Demand Points
11.3  Location Problems with Demand Regions
      11.3.1  Problems with Maximum Distance
              11.3.1.1  Single-Facility Minisum Problem with Euclidean Maximum Distance
              11.3.1.2  Other Minisum Problems with Maximum Distance
      11.3.2  Problems with Minimum Distance
              11.3.2.1  Minisum Problems with Minimum Distance
              11.3.2.2  Minimax Problems with Minimum Distance
      11.3.3  Problems with Average Distance
              11.3.3.1  Minisum Problems with Average Distance
              11.3.3.2  Minimax Problems with Average Distance
      11.3.4  Problems on Networks
11.4  Conclusions
References

11.1  Introduction

Facility location problems involve strategic decisions requiring large investments and long-term planning. They have been extensively studied by researchers from a variety of disciplines, including geography, marketing, and supply chain management.

The facility location problem is to locate q serving facilities for m demanding entities and to allocate the demanding entities to the facilities so as to optimize a certain objective, such as minimizing the transportation cost or the distance to the farthest entity. Most facility location problems are combinatorial in nature and challenging to solve to optimality. The locations of warehouses, hospitals, retail outlets, radar beams, and exploratory oil wells are some of the application areas of facility location problems. These problems can differ in several ways, including the objective pursued, the number of facilities to locate, and the solution space in which the problem is defined. The facility location problem is called a discrete facility location problem if there are a finite number of candidate facility locations. If the facilities can be placed anywhere in some continuous region, then the problem is called a continuous facility location problem. When q is equal to 1, the problem is called a single-facility location problem; otherwise, it is called a multiple-facility location problem. More details about facility location problems, particularly about their classification, can be found in [17,19,20].

In location theory, customers are generally assumed to be fixed points in space. When the sizes of the customers are relatively small with respect to the distances between facilities and customers, this assumption can be justified. Otherwise, it is better to treat customers as groups of points or as regions with density functions or distributions representing the demand over the regions. Here, we restrict ourselves to the case where each demanding entity is represented by a region. It is more appropriate to represent a demanding entity as a region instead of a fixed point when

1. The size of the demanding entity is not negligible with respect to the distances in the problem
2. The location of the demanding entity follows a bivariate distribution on the plane
3. The number of demanding entities is so large that they may be clustered into regions instead of treating each one separately

The concept of demand spreading over an area appears in several applications. For example, first consider the problem of locating a fire station that will serve forests. If each forest is represented as a point by its center and a fire breaks out in an area far from the center, it may take more

than the estimated time for the firefighters to reach the fire area. In such cases, representing forests as demand areas is more meaningful, as in case (1). Second, consider establishing mobile headquarters or mobile health centers. The location of each unit follows a bivariate distribution, so the decision of where to place a facility to serve these units should consider each of them as a region along with a density function representing the likelihood of the unit's presence, as in case (2). Third, consider the problem of collecting waste from many districts. The waste collection center should be located by treating each district as a group of demand points (private residences) or as a region, as in case (3). In this chapter, we consider continuous facility location problems where the demanding entities are represented as regions in the plane. The chapter is organized as follows. In Section 11.2, we introduce two solution approaches for continuous facility location problems with demand points that are commonly used in solving the problems with demand regions. In Section 11.3, we model 12 continuous location problems with demand regions, focus on one of them, and discuss several solution approaches; we also provide a brief literature review on some of the remaining problems.

11.2  Location Problems with Demand Points

The Weber problem is a well-known continuous facility location problem [25]. It is to find a center (x*, y*) so as to minimize the sum of weighted Euclidean distances between this center and m fixed points (demand points) with given coordinates (a_i, b_i), i = 1, 2, …, m. Each point i is associated with a positive weight w_i. The problem can be formulated as follows:

\text{Minimize}_{x,y} \; W(x,y) = \sum_{i=1}^{m} w_i \, d_i(x,y), \qquad (11.1)

where

d_i(x,y) = \sqrt{(x - a_i)^2 + (y - b_i)^2}. \qquad (11.2)

A common approach to solve the Weber problem is the Weiszfeld procedure [28]. It is an iterative method that expresses and updates the facility location as a convex combination of the locations of the customers.
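
A minimal sketch of the procedure for the objective in (11.1) and (11.2) is given below; the convergence tolerance and the safeguard for an iterate landing on a demand point are common practical choices rather than part of the original method.

import numpy as np

def weiszfeld(points, weights, x0=None, tol=1e-8, max_iter=1000):
    # Weiszfeld procedure for the Weber problem: re-express the facility location
    # as a convex combination of the demand points, each weighted by w_i / d_i(x, y).
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    x = pts.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dist = np.linalg.norm(pts - x, axis=1)
        if np.any(dist < 1e-12):             # iterate coincides with a demand point
            return x
        lam = w / dist
        x_new = (lam[:, None] * pts).sum(axis=0) / lam.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# e.g., weiszfeld([(0, 0), (4, 0), (0, 3)], [1, 1, 1])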

Another important continuous facility location problem is the location allocation (LA) problem, a generalization of the Weber problem to the case of multiple facilities. Given the locations of a set of m demand points, the LA problem is to find the locations of q facilities and to allocate the demand points to the facilities while minimizing the total distance between the demand points and the facilities they are allocated to. The alternate location allocation (ALA) heuristic developed by Cooper [12] for the LA problem is one of the most commonly used schemes in the multifacility location literature. The ALA heuristic mainly depends on two simple problems: (1) given the facility locations, determine the allocations of the demand points, and (2) given the allocations of the points, find the locations of the facilities. These problems are solved in an iterative manner: starting with a set of initial facility locations, q subsets of demand points are generated by allocating each point to one of the facilities; then, for each subset, a single-facility location problem is solved, and each demand point is allocated to the nearest facility, thereby generating new subsets of demand points. The iterations are repeated until a stopping criterion is met.
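
The two alternating steps can be sketched as follows, reusing the weiszfeld routine above for the location step; the random initialization and the fixed iteration cap are illustrative choices, not part of Cooper's original scheme.

import numpy as np

def ala(points, weights, q, iterations=50, seed=0):
    # Alternate location-allocation sketch: (1) assign each demand point to its
    # nearest facility, then (2) relocate each facility by solving a single-facility
    # Weber problem on its cluster (here with the weiszfeld routine sketched above).
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    centers = pts[rng.choice(len(pts), size=q, replace=False)]   # initial facilities
    assign = np.zeros(len(pts), dtype=int)
    for _ in range(iterations):
        # allocation step: nearest facility for every demand point
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # location step: one Weber problem per nonempty cluster
        new_centers = centers.copy()
        for i in range(q):
            members = assign == i
            if members.any():
                new_centers[i] = weiszfeld(pts[members], w[members])
        if np.allclose(new_centers, centers):      # stopping criterion
            break
        centers = new_centers
    return centers, assign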

11.3  Location Problems with Demand Regions

When demand regions are considered, an important aspect is the way of measuring the distance between a demand region and a facility. Usually, there are three ways to define the distance [22]:

1. Maximum distance
2. Minimum distance
3. Average distance

Using one of these distance measures, a facility location problem can be modeled with several objectives. Here, we consider four different objectives: (1) minimize the sum of the distances, (2) minimize the maximum distance, (3) maximize the sum of the distances, and (4) maximize the minimum distance. In total, there are 12 possible problems. Assuming each demand region has a finite number of demand points, all these problems are formulated using the following notation:

• q ≥ 1: number of facilities
• m ≥ 1: number of demand regions

• x_i ∈ ℜ^2: location of the ith facility, i = 1, …, q
• k_j ≥ 1: number of demand points in the jth demand region, j = 1, …, m
• K_j := {1, …, k_j}, j = 1, …, m
• s_k^j ∈ ℜ^2: location of the kth demand point of the jth demand region, j = 1, …, m, k ∈ K_j
• w_j > 0: weight of the jth demand region, j = 1, …, m
• d: a distance metric on ℜ^2 that measures the distance between any two points in ℜ^2

The maximum, minimum, and average distances from the jth demand region to the closest facility, denoted by d_{\max}^j, d_{\min}^j, and d_{\mathrm{avg}}^j, respectively, are calculated as

d_{\max}^j = \min_{i=1,\ldots,q} \left\{ \max_{k \in K_j} d\left(s_k^j, x_i\right) \right\}, \qquad (11.3)

d_{\min}^j = \min_{i=1,\ldots,q} \left\{ \min_{k \in K_j} d\left(s_k^j, x_i\right) \right\}, \qquad (11.4)

d_{\mathrm{avg}}^j = \min_{i=1,\ldots,q} \left\{ \frac{1}{k_j} \sum_{k=1}^{k_j} d\left(s_k^j, x_i\right) \right\}. \qquad (11.5)
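
For demand regions given as finite point sets, the three measures in (11.3) through (11.5) can be evaluated directly; the following sketch does so for a single region and the Euclidean metric, with illustrative names.

import numpy as np

def region_distances(region_points, facilities):
    # Maximum, minimum, and average Euclidean distances (11.3)-(11.5) from one
    # demand region (a finite set of points s_k^j) to its closest facility.
    S = np.asarray(region_points, dtype=float)      # k_j x 2 demand points
    X = np.asarray(facilities, dtype=float)         # q x 2 facility locations
    D = np.linalg.norm(S[:, None, :] - X[None, :, :], axis=2)   # d(s_k^j, x_i) for all k, i
    d_max = D.max(axis=0).min()     # closest facility in terms of its farthest point
    d_min = D.min(axis=0).min()     # closest facility in terms of its nearest point
    d_avg = D.mean(axis=0).min()    # closest facility in terms of the average distance
    return d_max, d_min, d_avg

# e.g., region_distances([(0, 0), (2, 0), (1, 1)], [(5, 0), (0, 4)])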

All of the 12 possible problems are summarized in Table 11.1. The single-facility versions of problems 2, 8, and 9 are equivalent to known problems with demand points [18]; the same does not hold for the multiple-facility versions of these problems. The single-facility version of problem 11 is equivalent to a single-facility location problem whose objective is to maximize a weighted sum of the distances between the facility and all demand points. Selected solution approaches from the literature for some of the problems in Table 11.1 are reviewed in the following sections. Our aim is not to completely review the related literature; we explain some problems in more detail while giving less or no attention to others.

Table 11.1  Classification of Problems

NO.  OBJECTIVE                                   OBJECTIVE FUNCTION

Problems with maximum distance
1    Minimize the sum of maximum distances       Minimize Σ_{j=1..m} w_j d_max^j
2    Minimize the maximum of maximum distances   Minimize max_{j=1,…,m} {w_j d_max^j}
3a   Maximize the sum of maximum distances       Maximize Σ_{j=1..m} w_j d_max^j
4a   Maximize the minimum of maximum distances   Maximize min_{j=1,…,m} {w_j d_max^j}

Problems with minimum distance
5    Minimize the sum of minimum distances       Minimize Σ_{j=1..m} w_j d_min^j
6    Minimize the maximum of minimum distances   Minimize max_{j=1,…,m} {w_j d_min^j}
7a   Maximize the sum of minimum distances       Maximize Σ_{j=1..m} w_j d_min^j
8a   Maximize the minimum of minimum distances   Maximize min_{j=1,…,m} {w_j d_min^j}

Problems with average distance
9    Minimize the sum of average distances       Minimize Σ_{j=1..m} w_j d_avg^j
10   Minimize the maximum of average distances   Minimize max_{j=1,…,m} {w_j d_avg^j}
11a  Maximize the sum of average distances       Maximize Σ_{j=1..m} w_j d_avg^j
12a  Maximize the minimum of average distances   Maximize min_{j=1,…,m} {w_j d_avg^j}

a  The solution space of these problems should be restricted, as otherwise they would not have a finite optimal objective function value.

11.3.1  Problems with Maximum Distance

For problems where worst-case scenarios are important, such as the location of emergency facilities (e.g., fire stations, police stations, and hospitals), the maximum distance is commonly used. In such problems, the demand regions can be taken as closed convex polygons, since the farthest point will occur at a corner of the convex hull of the demand points in a region (in cases with finitely many demand points). Problems with regions containing an infinite number of demand points, for example, ellipsoids, can be handled by approximating the regions with polygons. In this subsection, the regions are all assumed to be closed convex polygons.

11.3.1.1  Single-Facility Minisum Problem with Euclidean Maximum Distance

The single-facility version of this problem with the

Euclidean norm can be modeled as a second-order cone programming (SOCP) problem [16]. It can be solved in the worst case in time O(m^2 N^{3/2}), where m is the number of demand regions and N is the total number of corners in all the regions. Note that problems including closed circular regions in addition to polygonal ones can also be handled directly (without a polygonal approximation) by an SOCP formulation. SOCP problems can be solved in polynomial time [1,23], and several efficient software packages have been developed to solve them [14]. Jiang and Yuan [21] studied the same problem considering closed convex polygonal and circular demand regions. The difficulty in solving this problem is the discontinuity of the farthest points. They therefore partitioned the plane into polygonal fixed regions in such a way that, within the interior of each fixed region, there is a single farthest point of each demand region. The problem is thus turned into a set of Weber problems with the additional constraint that the facility has to be confined within a certain polygon; by solving such constrained Weber problems, they find the optimal solution. The weaknesses of this approach are twofold: constructing the fixed regions and possibly having to solve a large number of constrained Weber problems. Even in the case with r rectangular regions, the number of fixed regions can be as large as 2r^2 + r + 1; see [21]. The authors proposed an approach to discard some of the fixed regions

without solving the associated constrained Weber problem. Since the computational time of this algorithm is highly dependent on the starting fixed region, the authors also proposed a heuristic for the initialization. Drezner and Wesolowsky [18] proposed another approach for the same problem. They did not explicitly construct the fixed regions; instead, they dealt with the discontinuity of the farthest points by an algorithm whose iterations use the Weiszfeld procedure repeatedly, together with golden section search as needed. All three methods explained so far solve the problem to optimality. The SOCP formulation is very easy to implement and can be used to solve small- or medium-size instances. As there has not been any computational comparison of the aforementioned methods, it is not known which one would perform best on large-size instances.
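
As an illustration of how compact the SOCP formulation is for the single-facility case, the following sketch writes the weighted minisum problem with Euclidean maximum distances in the cvxpy modeling package (an assumed tool here, not the software of [16]); the epigraph variable t_j bounds the distance from the facility to every corner of region j, so it equals the maximum distance at optimality. The regions and weights in the example are illustrative.

import cvxpy as cp
import numpy as np

def minisum_max_distance(regions, weights):
    # SOCP sketch: minimize sum_j w_j * t_j subject to ||x - c|| <= t_j for every
    # corner c of polygonal region j, so each t_j is the maximum distance at optimum.
    x = cp.Variable(2)                         # facility location
    t = cp.Variable(len(regions))              # one epigraph variable per region
    constraints = [cp.norm(x - np.asarray(corner), 2) <= t[j]
                   for j, corners in enumerate(regions)
                   for corner in corners]
    problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(weights, t))), constraints)
    problem.solve()
    return x.value, t.value

# Two illustrative rectangular demand regions with unit weights:
regions = [[(0, 0), (2, 0), (2, 1), (0, 1)], [(5, 4), (6, 4), (6, 6), (5, 6)]]
print(minisum_max_distance(regions, np.ones(2)))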

11.3.1.2  Other Minisum Problems with Maximum Distance

In [18], the authors also studied a single-facility minisum problem with maximum distance using the rectilinear norm and showed that it can be formulated as a linear programming problem. In [22], Jiang and Yuan considered a multiple-facility version of the problem in [21], that is, problem 1 with the Euclidean norm, and developed an ALA heuristic for it. In the location step, they solved constrained Weber problems as in [21]; since the number of constrained Weber problems to solve may be too large, the authors proposed a version of the Barzilai–Borwein gradient method and proved its convergence. In the allocation step, each region is assigned to the closest facility. They demonstrated the efficiency of their method with a numerical study. In [15], the authors considered a multiple-facility version of the problem using the squared Euclidean norm and modeled it as a mixed-integer SOCP problem. Since this formulation is weak, they proposed three heuristics applicable to general polygons. Two of them are ALA heuristics; the third uses a smoothing strategy to turn the problem into an unconstrained nonlinear problem that is then solved with a quasi-Newton algorithm. The authors also proposed a special heuristic for the case where the demand regions are rectangles with sides parallel to the standard coordinate axes.

11.3.2  Problems with Minimum Distance

For problems where the flow from/to the facilities enters/leaves the given demand area at the closest point (i.e., drop-off and take-off points), the minimum distance is commonly used. The internal distribution costs within the demand area are usually not considered in such problems.

11.3.2.1  Minisum Problems with Minimum Distance

The single-facility

version of this problem with closed convex demand regions (not necessarily polygonal) is studied in [7]. The authors proposed an iterative algorithm starting from an arbitrary initial facility location. In each iteration, the closest point of each demand region is found, and those points are treated as fixed points replacing the associated demand regions; the resulting Weber problem is then solved with the Weiszfeld algorithm. Convergence properties of the algorithm and modifications for some special cases are discussed in detail in [7]. In [5], the authors considered both regional demands and regional facilities. Their objective was to locate a facility so as to minimize the sum of the distances from the closest point of the facility to the closest points of the demand areas. They proved that when the demand regions and the facility are closed convex regions, the distance is a convex function of a defined center of the facility for any norm; therefore, the objective function of the problem is convex, and a classical descent method can be used to find the global optimum. However, the authors stated that the calculation of step sizes and descent directions is not easy because of the discontinuity in the derivatives and the lack of an explicit form for the closest distance. They overcame these difficulties in some special cases by using the rectilinear norm and presented a solution approach for the case where both the demand regions and the facility are of rectangular shape; the results obtained for the single-facility case with rectangular regions and facility were extended to the multiple-facility case. In [26], the authors represented the demanding entities as convex sets of points. The objective is to minimize an increasing convex function of the minimum distances between the facility and the demand regions; the single-facility minisum problem with minimum Euclidean

distances is a special case of the considered problem. A geometrical characterization of the set of optimal solutions is presented using tools from convex analysis, and a constructive approach is developed for the case with polyhedral norms.

11.3.2.2  Minimax Problems with Minimum Distance

The single-facility

minimax problem with Euclidean minimum distance is also a special case of the problem considered in [26]. The same problem with some other distance measures was also studied in [6], where a procedure based on iso-contours was developed. It was shown that the proposed methodology can lead to efficient solution methods in some special cases, for example, rectangular regions with rectilinear distance.

11.3.3  Problems with Average Distance

For problems where the distances from the facility to each demand point in the region are important, the average distance is commonly considered. It is also generally used in problems where the demand points are represented as random vectors. The average distance has been used more extensively in the literature than the other distance definitions. Problems with average distances generally require the evaluation of complex integral expressions. In [27], Stone gave explicit expressions and approximate power series for four common cases, namely, rectangular and circular demand regions with both the rectilinear norm and the Euclidean norm. For each case, expressions are presented for the average distance between the demand region and a facility at the center, in the interior, or in the exterior of the demand region.

11.3.3.1  Minisum Problems with Average Distance

Love [24]

considered the situation in which the number of demand points is too large to treat each of them separately. He grouped the demand points into rectangular regions, and his objective was to find the location of a facility minimizing the total expected Euclidean distance between the rectangular regions and the facility. He developed a response surface technique utilizing a gradient-reducing process.

In [3], the authors extended Love’s study [24]. They replaced the demand regions, which are not necessarily rectangular and uniformly distributed, with the centroids of the regions. Their method can be used for demand regions having a geometric shape with an easily found centroid. Comparing with the study in [24], they computationally showed that their method is faster for the rectangular regions with uniformly distributed demand. In [13], the location of each demanding entity is assumed to be a random variable having a bivariate normal distribution with zero correlation. The author aimed to locate a single facility to minimize the sum of the expected Euclidean distances between the demanding entities and the facility. It was proven that the objective function of the problem is strictly convex and an iterative algorithm for the problem was proposed. In [2], the problem of locating one or more facilities to serve existing rectangular regions was studied where the rectilinear norm was used. The authors proposed a gradient-free direct search method for the problem and a heuristic for the initialization. Their method converged experimentally, but no formal proof of the convergence was given. Carrizosa et al. [8] discussed the similarities and differences between a generalized Weber problem with demanding entities represented by density functions and its point version. They showed that when the probability distributions of the demanding entities are absolutely continuous, gradient descent algorithms can be used instead of evaluating complex expectations. The authors also proved that the problem has a unique optimal solution in some special cases. In [9], the authors approximated demand regions with simpler regions while keeping an approximation error under control. For example, an elliptical region can be approximated by an n-sided polygon where the approximation error gets smaller with larger values of n. For polygonal approximations, the triangles constructed with the corners of the polygons are used to calculate the expected distances. Using this idea, they proposed an algorithm whose running time increases with the number of sides of the approximation polygons. However, they obtained promising results even when the number of sides of the approximation polygon is not too large.
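
Since these minisum models repeatedly evaluate expected distances, a quick Monte Carlo check is often useful when closed-form expressions are unavailable. The sketch below estimates the expected Euclidean distance from a fixed facility to a demand region uniformly distributed over an axis-aligned rectangle; it is an illustrative check, not a method taken from [2], [3], [13], or [24].

import numpy as np

def expected_distance_rect(facility, low, high, n=200_000, seed=0):
    # Monte Carlo estimate of E||S - facility|| for S uniform on the axis-aligned
    # rectangle [low[0], high[0]] x [low[1], high[1]] (the quantity appearing in
    # minisum average-distance objectives).
    rng = np.random.default_rng(seed)
    samples = rng.uniform(low, high, size=(n, 2))       # uniform demand over the rectangle
    return np.linalg.norm(samples - np.asarray(facility), axis=1).mean()

# e.g., expected distance from a facility at (3, 0) to the unit square [0, 1] x [0, 1]
print(expected_distance_rect((3.0, 0.0), (0.0, 0.0), (1.0, 1.0)))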

Chen [11] proposed a Weiszfeld-like iterative approach to locate a single facility that serves circular demand regions using the Euclidean norm. He recommended taking a weighted average of the centers of the circular demand regions as a starting point for the approach. Cavalier et al. [10] studied the problem of minimizing the sum of weighted expected distances between convex polygonal demand regions having uniform demand and a single facility (or multiple ones). For the single-facility case, an iterative algorithm that uses the Weiszfeld technique was developed and the convergence of the algorithm was proven. For the multiple-facility case, an ALA heuristic was presented. This algorithm was also convergent, but global optimality was not guaranteed.

11.3.3.2  Minimax Problems with Average Distance

In [18], the authors also studied a single-facility minimax problem with average distance using the Euclidean norm. An Elzinga–Hearn-type algorithm was proposed.

11.3.4  Problems on Networks

All the papers mentioned until now studied location problems where both the demanding entities and the facilities lie on the plane. In [4], the authors studied location problems on a network. They investigated nine different problems (as in Table 11.1 except problems 3, 7, and 11) where the demand points are clustered into groups. For each problem, a set of potential locations of the facility(s) is derived. For the multiple-facility case, heuristics (tabu search and simulated annealing) were proposed.

11.4 Conclusions

Classical location problems with demand points have been studied for a long time. Recently, new variants of facility location problems with demand regions have attracted the attention of researchers. In this chapter, several facility location problems with demand regions are reviewed and selected solution approaches from the literature are discussed. The papers reviewed are summarized in Table 11.2 in terms of the type of the problems studied, the number of facilities located, the type of the demanding entities, the distance measure used, and the solution approaches.

Table 11.2  Summary of Papers Reviewed. For each paper, the table reports the problem type (see Table 11.1), the reference, the number of facilities located (single, multiple, or both), the type of demanding entity, the distance measure used, and the solution approach.

References


1. Alizadeh, F., Goldfarb, D. 2003. Second-order cone programming, Mathematical Programming 95:3–51.
2. Aly, A.A., Marucheck, A.S. 1982. Generalized Weber problem with rectangular regions, The Journal of the Operational Research Society 33:983–989.
3. Bennett, C.D., Mirakhor, A. 1974. Optimal facility location with respect to several regions, Journal of Regional Science 14:131–136.
4. Berman, O., Drezner, Z., Wesolowsky, G.O. 2001. Location of facilities on a network with groups of demand points, IIE Transactions 33:637–648.
5. Brimberg, J., Wesolowsky, G.O. 2000. Note: Facility location with closest rectangular distances, Naval Research Logistics 47:77–84.
6. Brimberg, J., Wesolowsky, G.O. 2002. Locating facilities by minimax relative to closest points of demand areas, Computers and Operations Research 29:625–636.
7. Brimberg, J., Wesolowsky, G.O. 2002. Minisum location with closest Euclidean distances, Annals of Operations Research 11:151–165.
8. Carrizosa, E., Conde, E., Munoz-Marquez, M., Puerto, J. 1995. The generalized Weber problem with expected distances, RAIRO Operations Research 29:35–57.
9. Carrizosa, E., Munoz-Marquez, M., Puerto, J. 1998. The Weber problem with regional demand, European Journal of Operational Research 104:358–365.
10. Cavalier, T.M., Sherali, H.D. 1986. Euclidean distance location-allocation problems with uniform demands over convex polygons, Transportation Science 20:107–116.
11. Chen, R. 2001. Optimal location of a single facility with circular demand areas, Computers and Mathematics with Applications 41:1049–1061.
12. Cooper, L. 1964. Heuristic methods for location-allocation problems, SIAM Review 6:37–53.
13. Cooper, L. 1974. A random locational equilibrium problem, Journal of Regional Sciences 14:47–54.
14. CVX Research, Inc. April 2011. CVX: Matlab software for disciplined convex programming, version 2.0. http://www.cvxr.com/cvx (accessed August 7, 2014).
15. Dinler, D., Tural, M.K., İyigün, C. 2013. Heuristics for a continuous multi-facility location problem with demand regions, Submitted for publication.
16. Dinler, D. 2013. Heuristics for a continuous multi-facility location problem with demand regions, Master's thesis. Middle East Technical University, Ankara, Turkey.
17. Drezner, Z. (ed.). 1995. Facility Location: A Survey of Applications and Methods. Springer, New York.
18. Drezner, Z., Wesolowsky, G.O. 2000. Location models with groups of demand points, INFOR 38:359–372.
19. Drezner, Z., Hamacher, H.W. (eds.). 2002. Facility Location: Applications and Theory. Springer, Berlin, Germany.


20. Farahani, R.Z., Hekmarfar, M. (eds.). 2009. Facility Location: Concepts, Models, Algorithms and Case Studies. Springer, Berlin, Germany.
21. Jiang, J., Yuan, X. 2006. Minisum location problem with farthest Euclidean distances, Mathematical Methods of Operations Research 64:285–308.
22. Jiang, J., Yuan, X. 2012. A Barzilai-Borwein-based heuristic algorithm for locating multiple facilities with regional demand, Computational Optimization and Applications 51:1275–1295.
23. Lobo, M.S., Vandenberghe, L., Boyd, S., Lebret, H. 1998. Applications of second-order cone programming, Linear Algebra and Its Applications 284:193–228.
24. Love, R.F. 1972. A computational procedure for optimally locating a facility with respect to several rectangular regions, Journal of Regional Sciences 12:233–242.
25. Luangkesorn, L. Weber problem [PDF document]. Retrieved from: http://www.pitt.edu/~lol11/ie1079/notes/ie2079-weber-slides.pdf (accessed September 24, 2013).
26. Nickel, S., Puerto, J., Rodriguez-Chia, A. 2003. An approach to location models involving sets as existing facilities, Mathematics of Operations Research 28:693–715.
27. Stone, R.E. 1991. Some average distance results, Transportation Science 25:83–91.
28. Weiszfeld, E. 1937. Sur le point par lequel la somme des distances de n points donnés est minimum, Tohoku Mathematical Journal 43:355–386.

12  A New Approach for Synchronizing Production and Distribution Scheduling: Case Study

E. GHORBANI-TOTKALEH, M. AMINNAYERI, AND M. SHEIKH SAJADIEH

Contents

12.1 Introduction
12.2 Literature Review
12.3 Problem Description
12.4 Solution Approach
12.4.1 Exact Solution
12.4.2 Proposed Algorithm
12.5 Numerical Study
12.6 Conclusions
References

12.1 Introduction

Production and distribution are two important processes in the supply chain. In the last two decades, much work has been done on integrated production–distribution models at the strategic and tactical planning levels (Bilgen and Ozkarahan, 2004, Chen, 2004, Goetschalcks et al., 2002). These articles use inventory decisions to link the two parts of the supply chain in decision problems. More recently, the integrated production and distribution scheduling problem has also come into consideration. To obtain a globally optimal solution for the scheduling problem spanning the production and distribution parts of the supply chain, these two parts must be considered in an integrated manner. This is especially important for make-to-order products: for such products, like time-sensitive products, we should find a solution that achieves on-time delivery at minimum cost. Formerly, the scheduling of the production and distribution parts was done consecutively; in other words, the output of the production scheduling decision was an input to the distribution scheduling part. It has been shown that this sequential approach yields suboptimal solutions (Chen and Vairaktarakis, 2005, Pundoor and Chen, 2005). Thus, integrated production–distribution scheduling is more beneficial. Moreover, the integrated approach to scheduling can reduce the total cost and increase the customer service level owing to lead time reduction. We can also estimate lead times or due dates more accurately when delivery and production schedules are considered jointly. Production and distribution scheduling problems have been studied separately in many articles in the past decades (Ball et al., 1995, Pinedo, 2002). Considering production and distribution jointly for scheduling is a relatively new area that has been studied in the past decade. This stream of literature is called integrated production and outbound distribution scheduling (IPODS) problems (Chen, 2010). The rest of the chapter is organized as follows. In Section 12.2, we review the related literature and existing models in the IPODS area. Notation and a description of our specific real-world problem are given in Section 12.3. In Section 12.4, after modeling the problem mathematically, the detailed solution method is presented. Numerical results for evaluating the proposed solution method and for gaining insights are presented in Section 12.5. Conclusions are provided in Section 12.6.

12.2  Literature Review

Zhi-Long Chen (2010) proposed a five-field notation, α|β|π|δ|γ, to represent most IPODS models. In this notation, α describes the machine configuration in the production part, β specifies restrictions on the orders, and γ describes the objective function. These three factors are used in production scheduling models (Lawler et al., 1993, Pinedo, 2002).


Table 12.1  Types of Machine Configuration (α)

VALUE        DESCRIPTION
1            Single machine
Pm           Parallel machine
Fm           Flow shop
Bm           Bundling
F(m1,m2)     Two-stage flexible flow shop
MP           Multiple plants

Table 12.2  Restrictions and Constraints on Order Parameters (β)

VALUE        DESCRIPTION
rj           Orders have unequal release dates
dj ≡ d       Orders have a common due date d
d̄j           Each order j has a deadline
[aj, bj]     Order j must be delivered to its customer within this time window
fdj          Each order j has a fixed delivery time
sij          There are sequence-dependent setup times between different orders
prec         Orders have precedence constraints between them
pmtn         Order processing can be preempted and resumed later
pickup       Orders must be picked up from the customer before they can be processed
no wait      Each order should be processed without idle time from one machine to the next
r-a          One machine has a known unavailable period of time
∑Dj ≤ D0     Total delivery time of the orders cannot exceed a specified deadline

Values that can be taken for α and β are presented in Tables 12.1 and 12.2, respectively. γ describes the objective function, for example, customer service level, total cost, or total revenue. There are two other factors in the notation proposed by Chen (2010), π and δ, which specify the delivery process and the number of customers, respectively. The first factor, π, describes the features of the vehicles by V(x,y) (number and capacity of the vehicles) and the delivery method, that is, there are two parameters in this notation. The values each parameter can take are given in Table 12.3. Most existing models consider homogeneous vehicles, and this notation covers only such models. The second factor, δ, defines the number of customers, and it is 1 (one customer), k (k customers who are fixed in each scheduling horizon), or n (a different number of customers in different scheduling horizons).


Table 12.3  Delivery Characteristics (π)

PARAMETER          VALUE      DESCRIPTION                                                               WHAT IT SHOWS
X                  1          Single-delivery vehicle                                                   Number of vehicles
                   υ          Number of available vehicles is finite or limited
                   ∞          There are a sufficient (infinite) number of vehicles
Y                  1          Each shipment can deliver only one unit of product                        Capacity of vehicles
                   c          Each shipment can deliver up to c orders (orders have an equal size)
                   ∞          Each shipment can deliver an infinite number of orders
                   Q          Each shipment can deliver at most Q units (orders have different sizes)
Delivery method    iid        Individual and immediate delivery                                         Methods of distribution
                   direct     Batch delivery by direct shipping
                   routing    Batch delivery with routing
                   fdep       Shipping with fixed delivery departure date
                   split      An order can be split to be delivered in multiple shipments

Table 12.4 reviews the recent articles considering IPODS models.

12.3  Problem Description

In this part, we model a problem faced by a company. We consider each 30 minutes to be one time unit; thus, there are 16 time units per day, which is the scheduling horizon. In our case study, orders need to be processed on a single machine (α = 1). Each customer has a time window [ai, bi] for the delivery of its demand. If the delivery is received earlier (delivery time < ai) or later (delivery time > bi), a penalty cost per unit of time of earliness or tardiness is incurred, so orders have a specific time window for delivery (β = [ai, bi]). In this problem, there are three vehicles of two different types with different capacities, so they have different transportation costs. A specific notation for different vehicle types is not given by Chen (2010); we denote them by V1(1,Q1), V2(2,Q2). A routing method is used for the delivery of the orders, as this method can save transportation costs.

Table 12.4  IPODS Models in Existing Articles. For each of the reviewed articles, from Hall and Shmoys (1992) to Chen and Pundoor (2009), the table lists the machine configuration α, the order restrictions β, the delivery characteristics π, the number of customers δ, the objective function γ, and the solution method.

In each time horizon, there are a number of different customers (δ = n). We need to determine the production start times as well as the dispatching time of each vehicle in order to minimize the total cost, including the travel cost and the tardiness and earliness penalty costs. The model can be written in the five-field notation as follows:

1 | [a_j, b_j] | V1(1, Q1), V2(2, Q2), routing | n | VC + ∑_i (α_i E_i + β_i T_i)

Our contribution in this chapter is to solve a real problem of a company, and our goal is to minimize the company's tardiness and earliness penalty costs and its transportation costs. For modeling this problem, the following assumptions are made to relax the problem:

1. Each region is considered as a customer; that is, we compute the total demand of a region and plan to send its demand at the best time to minimize the transportation and penalty costs.
2. We absorb the service time into the time window; that is, if the service takes one unit of time and the time window is [a, b], then we consider it as [a − 1, b − 1].
3. The total demand of each region does not exceed the capacity of any of the vehicles, and thus a single vehicle is enough to transfer the goods to each destination.
4. All vehicles are at the depot at time zero. Moreover, they do not return to the depot after finishing their job, so each vehicle is available once in each horizon.

The problem is characterized by the following sets, decision variables, and parameters:

• Sets
C = {1, 2, …, n}: set of customers in the scheduling horizon (1 day)
Z = {0, 1, 2, …, s}: set of region zones
V = {1, 2, …, m}: set of vehicles

• Decision variables
x_{jkk'}: binary variable, taking a value of 1 if vehicle j travels from region k to region k' and 0 otherwise
t_j^s: dispatching time of vehicle j from the depot
t_{k'}^p: production start time for the demands of region k'


• Nondecision variables and parameters
Cap_j: capacity of vehicle j
[a_i, b_i]: delivery time window of customer i
c_{jkk'}: cost of traveling from region k to region k' by vehicle j
q_k: volume of the demands of region k
α: penalty per unit time of tardiness
β: penalty per unit time of earliness
t_{kk'}: travel time between regions k and k'
p_k: production time for the demands of region k
D_{rk}: arrival time at region k when it is visited in the rth order
τ_{ik}: binary parameter, taking a value of 1 if customer i is in region k and 0 otherwise
T_i: tardiness of the delivery to customer i
E_i: earliness of the delivery to customer i

The mixed-integer nonlinear programming (MINLP) model is formulated as follows:

min ∑_{j=1}^{m} ∑_{k=0}^{s} ∑_{k'=1}^{s} c_{jkk'} x_{jkk'} + ∑_{i=1}^{n} (α T_i + β E_i)    (12.1)

s.t.

∑_{j=1}^{m} ∑_{k'=1}^{s} x_{j0k'} ≤ m    (12.2)

∑_{j=1}^{m} ∑_{k'=1}^{s} x_{j0k'} ≥ 1    (12.3)

∑_{j=1}^{m} ∑_{k'=1}^{s} x_{jkk'} ≤ 1,  k ∈ {1,…,s}    (12.4)

∑_{j=1}^{m} ∑_{k=0}^{s} x_{jkk'} = 1,  k' ∈ {1,…,s}    (12.5)

x_{1kk'} + ∑_{l=1}^{s} x_{2k'l} + ∑_{l=1}^{s} x_{3k'l} ≤ 1,  ∀k, ∀k'    (12.6)

x_{2kk'} + ∑_{l=1}^{s} x_{1k'l} + ∑_{l=1}^{s} x_{3k'l} ≤ 1,  ∀k, ∀k'    (12.7)

x_{3kk'} + ∑_{l=1}^{s} x_{1k'l} + ∑_{l=1}^{s} x_{2k'l} ≤ 1,  ∀k, ∀k'    (12.8)

∑_{k=0}^{s} ∑_{k'=1}^{s} q_{k'} x_{jkk'} ≤ Cap_j,  ∀j ∈ V    (12.9)

t_j^s = ∑_{k=0}^{s} ∑_{k'=1}^{s} (p_{k'} + t_{k'}^p) x_{jkk'},  ∀j ∈ V    (12.10)

D_{1k'} = ∑_{j=1}^{m} (t_j^s x_{j0k'} + t_{0k'} x_{j0k'}),  ∀k' ∈ {1,…,s}    (12.11)

D_{rk'} = ∑_{j=1}^{m} ∑_{k=1}^{s} (D_{r−1,k} x_{jkk'} + t_{kk'} x_{jkk'}),  r ≥ 2, ∀k' ∈ {1,…,s}    (12.12)

T_i + b_i ≥ ∑_{k'=1}^{s} ∑_{r=1}^{s} D_{rk'} τ_{ik'},  ∀i ∈ C    (12.13)

E_i + ∑_{k'=1}^{s} ∑_{r=1}^{s} D_{rk'} τ_{ik'} ≥ a_i,  ∀i ∈ C    (12.14)

x_{jkk'} ∈ {0,1},  k = 0,…,s, k', r = 1,…,s, k ≠ k', j = 1,…,m, i = 1,…,n    (12.15)

The objective function (12.1) minimizes the total cost, including the travel cost and the tardiness and earliness penalty costs. Constraints (12.2) and (12.3) specify that the number of vehicles traveling


from the depot to the regions must lie between 1 and the number of available vehicles. Constraint (12.4) states that at most one vehicle may leave each region, while constraint (12.5) ensures that every region is serviced exactly once. Constraints (12.6) through (12.8) ensure that each region is entered and left by the same vehicle, that is, the vehicle cannot be changed at a region; these three constraints are written explicitly because our case study has three vehicles. Constraint (12.9) enforces the vehicle capacities. Constraint (12.10) computes the dispatch time t_j^s of each vehicle. Constraints (12.11) and (12.12) compute the arrival times at the regions according to the order in which they are served. Constraints (12.13) and (12.14) define the tardiness and earliness values.
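To make the cost structure concrete, the following is a minimal Python sketch that evaluates a candidate plan against objective (12.1): it propagates arrival times along each vehicle's route in the spirit of recursions (12.11) and (12.12) and then prices tardiness and earliness against the customer time windows. The function and argument names are ours, used only for illustration, and are not part of the chapter's implementation.

```python
def plan_cost(routes, dispatch, travel, cost, windows, region_of, alpha, beta):
    """Evaluate objective (12.1) for a candidate delivery plan (illustrative sketch).

    routes[j]    : ordered list of regions visited by vehicle j (the depot, region 0, is implicit)
    dispatch[j]  : dispatch time t_j^s of vehicle j from the depot
    travel[k][l] : travel time t_kl between regions k and l (index 0 is the depot)
    cost[j][k][l]: travel cost c_jkl of vehicle j on arc (k, l)
    windows[i]   : delivery time window (a_i, b_i) of customer i
    region_of[i] : region of customer i (plays the role of tau_ik in the model)
    """
    total_travel, arrival = 0.0, {}
    for j, route in enumerate(routes):
        time, prev = dispatch[j], 0            # leave the depot at t_j^s
        for k in route:
            total_travel += cost[j][prev][k]
            time += travel[prev][k]            # arrival times, cf. (12.11)-(12.12)
            arrival[k] = time
            prev = k
    penalty = 0.0
    for i, (a, b) in enumerate(windows):
        d = arrival[region_of[i]]
        penalty += alpha * max(0.0, d - b) + beta * max(0.0, a - d)  # tardiness/earliness
    return total_travel + penalty
```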

12.4  Solution Approach

In this section, we propose an exact method and a heuristic method to solve the problem. Our case study is of small and medium size, so it can be solved exactly. However, since Karp (1972) proved that the traveling salesman problem (TSP) is strongly NP-hard, we also propose a heuristic algorithm for solving larger instances of such problems.

12.4.1  Exact Solution

For solving and testing this model, we collected 20 instances from 20 working days of the company. The company is active in the furniture industry. We consider its warehouse as the main depot for production and distribution. The scheduling horizon is assumed to be a working day, from 8:00 AM to 4:00 PM. By dividing this time window into 16 parts, each time unit is 30 min. We consider the process of assembly, disassembly, and packing of the products as the manufacturing process. A team of workers performs these tasks together and can be treated as a single machine. Three different vehicles transfer the products to the customers' locations. Each vehicle has a different capacity and a different transportation cost, but all have the same speed. Customers can specify their delivery time windows. We try to schedule the manufacturing and delivery processes to minimize the sum of the transportation cost and the penalties for tardiness and earliness of the deliveries.


We coded this mathematical model in Microsoft Visual Studio 2010 and used its text output as input to Lingo 11.0; for all instances, the best answer was obtained by the branch-and-bound solver. It is important to note that for larger problem sizes this exact method cannot return an answer in a reasonable time, so a heuristic algorithm has to be used.

A heuristic algorithm is proposed for solving the problem, because this routing problem with multiple customers contains the strongly NP-hard TSP (Karp, 1972). As mentioned before, our case study problem is of small and medium size and can be solved by the exact method; the proposed algorithm is therefore intended for cases of larger size. The algorithm uses a genetic algorithm (GA), because experience shows that population-based algorithms perform well on routing and batching problems like ours. After generation and improvement with the GA, the algorithm starts a local improvement phase using simulated annealing (SA). After a specified number of iterations, the algorithm stops and returns the best answer found over all iterations. Figure 12.1 shows the flowchart of the proposed algorithm. The important steps of the proposed algorithm are as follows:

1. Define chromosomes as shown in Figure 12.2.
2. Use partially mapped crossover as shown in Figure 12.3. This operator randomly selects two cut points in each parent; the segment between the cut points is copied from the first parent to the child exactly, and the remaining positions of the child are filled from the second parent.
3. Use a replacement operator for mutation; an example is shown in Figure 12.4.
4. For the initial temperature in the SA part of the algorithm, define a linear relation between T_0 and the number of regions s: T_0 = θ·s, where θ is a constant.

Figure 12.1  Flowchart of the solution algorithm: generate an initial population, improve the population with the GA operators, send the improved population to the SA local search, check the stopping condition, and return the best solution found.

Figure 12.2  A sample chromosome in the proposed algorithm: the region sequence array (5, 4, 3, 1, 2, 6) paired with the vehicle allocation array (2, 1, 3, 2, 3, 1).

5. Use a geometric cooling schedule for the temperature: T_i = α^i·T_0, where 0.5 ≤ α ≤ 0.99.
6. Create neighborhoods by using an inversion operator for the sequence array and an insertion operator for the allocation array (see Figures 12.5 and 12.6); a small sketch of these moves is given below.

For parameter tuning, a factorial design is used and three levels of three important parameters of the algorithm are tested. Table 12.5 shows the levels of the experiment. By running all 3^3 = 27 tests, the best levels were determined as follows: initial population size 60, crossover probability 0.6, and initial temperature ratio 1.
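The following Python sketch illustrates steps 4 through 6; the function names and the random-choice details are illustrative assumptions rather than the authors' code.

```python
import random

def initial_temperature(theta, s):
    """Step 4: linear relation T_0 = theta * s between initial temperature and number of regions."""
    return theta * s

def cooled_temperature(T0, alpha, i):
    """Step 5: geometric cooling T_i = alpha**i * T_0 with 0.5 <= alpha <= 0.99."""
    return (alpha ** i) * T0

def invert_sequence(sequence):
    """Step 6a: inversion move on the region sequence array (cf. Figure 12.5)."""
    seq = sequence[:]
    i, j = sorted(random.sample(range(len(seq)), 2))
    seq[i:j + 1] = reversed(seq[i:j + 1])   # reverse the chosen segment
    return seq

def reinsert_allocation(allocation):
    """Step 6b: insertion move on the vehicle allocation array (cf. Figure 12.6)."""
    alloc = allocation[:]
    i, j = random.sample(range(len(alloc)), 2)
    alloc.insert(j, alloc.pop(i))           # remove one gene and reinsert it elsewhere
    return alloc
```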

Figure 12.3  Crossover operator in the solution algorithm.

Figure 12.4  Mutation operator in the proposed algorithm: (a) before mutation and (b) after mutation.

Figure 12.5  Neighborhood creation by the inversion operator for the sequence array.

Figure 12.6  Neighborhood creation by the insertion operator for the allocation array.


Table 12.5  Parameter Tuning

PARAMETER                       INDEX OF LEVELS    LEVELS
Initial population              1 / 2 / 3          60 / 80 / 100
Probability of crossover        1 / 2 / 3          0.6 / 0.7 / 0.8
Ratio of initial temperature    1 / 2 / 3          1 / 2 / 3

12.5  Numerical Study

Twenty test problems, taken from 20 working days, are solved by the exact method and by the heuristic algorithm. All examples were tested on a personal computer with an Intel Core i5 CPU, 2.66 GHz, and 4 GB RAM. The solutions of the exact method and of the proposed heuristic algorithm are compared with real practice. Table 12.6 shows the results for the 20 test problems. The values in the table are the objective function values under the three methods; an improvement is achieved in 85% of the tests.

Table 12.6  Results of Test Problems

PROBLEM NUMBER    REAL PRACTICE    EXACT METHOD    PROPOSED ALGORITHM
1                 21               17              17
2                 17               10              10
3                 20               19              19
4                 27               25              26
5                 23               21              21
6                 20               17              17
7                 19               15              15
8                 26               19              21
9                 14               14              14
10                16               13              13
11                10                8               8
12                17               10              10
13                28               20              22
14                18               18              18
15                16                8               8
16                17               13              15
17                 8                7               7
18                15               11              11
19                 8                6               6
20                12               12              12


Figure 12.7  Comparison between the exact method, the proposed algorithm, and real practice in terms of total cost over the 20 experiments (cost versus experiment index).

The exact method gives the best solution in all experiments because our problems are of small and medium size; for large problems, the exact method may not be able to return an answer, so a heuristic algorithm should be used for large-size problems. Our algorithm finds the best answer in 80% of the cases. The proposed algorithm solves the problems in an average of 0.7 s, while the exact method needs an average of 43 s, which shows that the reduced solving time is an advantage for large-size problems. The graphical comparison between the exact method, the proposed algorithm, and the real-world case, in terms of total cost, is shown in Figure 12.7. As the figure shows, the exact method reduces the objective function value in 17 problems (85% of the problems), and in 3 problems the total cost is equal under all methods.

12.6 Conclusions

Our results show that the model can be useful for many companies that hope to reduce their transportation costs and increase the satisfaction of their customers. For large problems, we propose the heuristic algorithm. Our goal in working on this problem was to decrease the hidden costs created by customer dissatisfaction with late or early deliveries, while also keeping transportation costs at a minimum. For the problem considered here, which is of small and medium size, the proposed algorithm is efficient; however, the algorithm should also be tested on large-size problems. In this chapter, we have successfully formulated the IPODS problem as a mixed-integer nonlinear programming model with different vehicles, time windows, and a routing method of distribution. A heuristic algorithm, which is a combination of GA and SA, is proposed to solve the research problem. We solved 20 real problems by the exact method with Lingo 11.0 and compared the results with the solutions of the proposed heuristic algorithm and with the real-world practice. The computational results show that our solution method is effective and efficient: it reduces costs by 20%, which is equal to 1035 cost units, and the proposed algorithm finds the best solution for 80% of the problems. For future research, we suggest developing the proposed solution method into software for enterprises, which would take data from users and return the best sequencing and scheduling solution for each horizon.

References

Averbakh, I., Z. Xue. 2007. On-line supply chain scheduling problems with preemption. European Journal of Operational Research 181:500–504.
Ball, M.O., T.L. Magnanti, C.L. Monma, G.L. Nemhauser. 1995. Network Routing. Handbooks in Operations Research and Management Science, Vol. 8. North-Holland, Amsterdam, the Netherlands.
Bilgen, B., I. Ozkarahan. 2004. Strategic tactical and operational production-distribution models: A review. International Journal of Technology Management 28:151–171.
Chang, Y.C., C.Y. Lee. 2004. Machine scheduling with job delivery coordination. European Journal of Operational Research 158:470–487.
Chen, B., C.Y. Lee. 2008. Logistics scheduling with batching and transportation. European Journal of Operational Research 189:871–876.
Chen, Z.L. 2004. Integrated production and distribution operations: Taxonomy, models, and review. In D. Simchi-Levi, S.D. Wu, Z.J. Shen (eds.), Handbook of Quantitative Supply Chain Analysis: Modeling in the E-Business Era. Kluwer Academic Publishers, Norwell, MA.
Chen, Z.L. 2010. Integrated production and outbound distribution scheduling: Review and extensions. Operations Research 1:130–148.
Chen, Z.L., G. Pundoor. 2006. Order assignment and scheduling in a supply chain. Operations Research 54:555–572.
Chen, Z.L., G. Pundoor. 2009. Integrated order scheduling and packing. Production Operation Management 18:672–692.


Chen, Z.L., G.L. Vairaktarakis. 2005. Integrated scheduling of production and distribution operations. Management Science 51:614–628.
Cheng, T.C.E., V.S. Gordon, M.Y. Kovalyov. 1996. Single machine scheduling with batch deliveries. European Journal of Operational Research 94:277–283.
Dawande, M., H.N. Geismar, N.G. Hall, C. Sriskandarajah. 2006. Supply chain scheduling: Distribution systems. Production and Operations Management 15:243–261.
Garcia, J.M., S. Lozano. 2004. Production and delivery scheduling problem with time windows. Computers and Industrial Engineering 48:733–742.
Garcia, J.M., S. Lozano. 2005. Production and vehicle scheduling for ready-mix operations. Computers and Industrial Engineering 46:803–816.
Garcia, J.M., S. Lozano, D. Canca. 2004. Coordinated scheduling of production and delivery from multiple plants. Robotics and Computer-Integrated Manufacturing 20:191–198.
Garcia, J.M., K. Smith, S. Lozano, F. Guerrero. 2002. A comparison of GRASP and an exact method for solving a production and delivery scheduling problem. Proceedings of Hybrid Information Systems: Advances in Soft Computing. Physica-Verlag, Heidelberg, Germany, pp. 431–447.
Gharbi, A., M. Haouari. 2002. Minimizing makespan on parallel machines subject to release dates and delivery times. Journal of Scheduling 5:329–355.
Goetschalcks, M., C.J. Vidal, K. Dogan. 2002. Modeling and design of global logistics systems: A review of integrated strategic and tactical models and design algorithms. European Journal of Operational Research 143:1–18.
Hall, L.A., D.B. Shmoys. 1992. Jackson's rule for single-machine scheduling: Making a good heuristic better. Mathematics of Operations Research 17:22–35.
Hall, N.G., C.N. Potts. 2003. Supply chain scheduling: Batching and delivery. Operations Research 51:566–584.
He, Y., W. Zhong, H. Gu. 2006. Improved algorithms for two single machine scheduling problems. Theoretical Computer Science 363:257–265.
Ji, M., Y. He, T.C.E. Cheng. 2007. Batch delivery scheduling with batch delivery cost on a single machine. European Journal of Operational Research 176:745–755.
Kaminsky, P. 2003. The effectiveness of the longest delivery time rule for the flow shop delivery time problem. Naval Research Logistics 50:257–272.
Karp, R.M. 1972. Reducibility among combinatorial problems. Complexity of Computer Computations. Plenum Press, New York, pp. 85–103.
Lawler, E.L., J.K. Lenstra, A.H.G. Rinnooy Kan, D.B. Shmoys. 1993. Sequencing and scheduling: Algorithms and complexity. Handbooks in Operations Research and Management Science, Vol. 4. North-Holland, Amsterdam, the Netherlands, pp. 445–552.
Lee, C.Y., Z.L. Chen. 2001. Machine scheduling with transportation considerations. Journal of Scheduling 4:3–24.
Li, C.L., J. Ou. 2005. Machine scheduling with pickup and delivery. Naval Research Logistics 52:617–630.


Li, C.L., J. Ou. 2007. Coordinated scheduling of customer orders with decentralized machine locations. IIE Transactions 39:899–909.
Li, C.-L., G. Vairaktarakis. 2007. Coordinating production and distribution of jobs with bundling operations. IIE Transactions 39:203–215.
Li, C.L., G. Vairaktarakis, C.Y. Lee. 2005. Machine scheduling with deliveries to multiple customer locations. European Journal of Operational Research 164:39–51.
Li, K.P., V.K. Ganesan, A.I. Sivakumar. 2005. Synchronized scheduling of assembly and multi-destination air-transportation in a consumer electronics supply chain. International Journal of Production Research 43:2671–2685.
Liu, Z., T.C.E. Cheng. 2002. Scheduling with job release dates, delivery times and preemption penalties. Information Processing Letters 82:107–111.
Mastrolilli, M. 2003. Efficient approximation schemes for scheduling problems with release dates and delivery times. Journal of Scheduling 6:521–531.
Pinedo, M. 2002. Scheduling Theory, Algorithms, and Systems, 2nd edn. Prentice Hall, Upper Saddle River, NJ.
Pundoor, G., Z.L. Chen. 2005. Scheduling a production-distribution system to optimize the tradeoff between delivery tardiness and total distribution cost. Naval Research Logistics 52:571–589.
Van Buer, M.G., D.L. Woodruff, R.T. Olson. 1999. Solving the medium newspaper production/distribution problem. European Journal of Operational Research 115:237–253.
Wang, G., T.C.E. Cheng. 2000. Parallel machine scheduling with batch delivery costs. International Journal of Production Economics 68:177–183.
Wang, H., C.Y. Lee. 2005. Production and transport logistics scheduling with two transport mode choices. Naval Research Logistics 52:796–809.
Woeginger, G.J. 1994. Heuristics for parallel machine scheduling with delivery times. Acta Informatica 31:503–512.
Woeginger, G.J. 1998. A polynomial-time approximation scheme for single-machine sequencing with delivery times and sequence independent batch set-up times. Journal of Scheduling 1:79–87.
Zhong, W., G. Dosa, Z. Tan. 2007. On the machine scheduling problem with job delivery coordination. European Journal of Operational Research 182:1057–1072.

13  An Integrated Replenishment and Transportation Model: Computational Performance Assessment

RAMEZ KIAN, EMRE BERK, AND ÜLKÜ GÜRLER

Contents

13.1 Introduction
13.2 Model
13.2.1 Assumptions
13.2.2 Formulations
13.3 Numerical Study
13.3.1 Overall Assessment
13.3.2 ANOVA Assessment
Acknowledgment
References

13.1 Introduction

Transformation processes with multiple inputs typically exhibit nonlinearities in their output with respect to input usages. They have been traditionally modeled via production functions in the microeconomics literature (Heathfield and Wibe, 1987). One of the most common production functions is the Cobb–Douglas (C–D) production function. This production function assumes that multiple (n) inputs (also called factors or resources) are needed for output, Q, and they may be substituted to take advantage of the marginal cost differentials. In general, it has the form Q = A ∏_{i=1}^{n} [x^{(i)}]^{α_i}, where A represents the total factor productivity of the process given the technology level, x^{(i)} denotes the amount of input i used, and α_i > 0 is the input elasticity. The total elasticity parameter r (= 1/∑_{i=1}^{n} α_i) may be greater than (smaller than) or equal to 1 depending on whether there are diminishing (increasing) returns to resources, resulting in convex (concave) operational costs. The C–D production function was first introduced to model the labor and capital substitution effects for the US manufacturing industries in the early twentieth century (Cobb and Douglas, 1928). Despite its macroeconomic origins, since then, it has been widely applied to individual transformation processes at the microeconomic level, as well. For example, the C–D production function was employed to model production processes in the steel and oil industries by Shadbegian and Gray (2005) and in agriculture by Hatirli et al. (2006). Logistics activities associated with shipment preparation, transportation/delivery, and cargo handling also use, directly and/or indirectly, multiple resources such as labor, capital, machinery, materials, energy, and information technology. Therefore, it is not surprising that there is a growing literature on the successful applications of the C–D-type production functions to model the operations in the logistics and supply chain management context. Chang's (1978) work seems to be the earliest to construct a C–D production function to analyze the productivity and capacity expansion options of a seaport. Rekers et al. (1990) estimate a C–D production function for port terminals and specifically model cargo handling service. In a similar vein, Tongzon (1993) and Lightfoot et al. (2012) consider cargo handling processes at container terminals for their production functions. In a recent work, Cheung and Yip (2011) analyze the overall port output via a C–D production function. Studies on technical efficiency in cargo handling and port operations provide additional support for the C–D-type functional relationships, where output is typically measured in volume of traffic (in terms of twenty-foot equivalent units, TEUs) and inputs may be as diverse as number or net usage time of cranes, types of cranes, number of tug boats, number of workers or gangs, length and surface of the terminals, berth usage, volume carried by land per berth, and energy (e.g., Notteboom et al. 2000, Cullinane 2002, Estache et al. 2002, Cullinane et al. 2002, 2006, Cullinane and Song 2003, 2006, Tongzon and Heng 2005). Comprehensive surveys can be found in




Maria Manuela Gonzalez and Lourdes Trujillo (2009), Trujillo and Diaz (2003), Tovar et al. (2007), and Gonzalez and Trujillo (2009). For land transportation, we may cite the evidence from Williams (1979) and, for supply chain management, Ingene and Lusch (1999) and Kogan and Tapiero (2009). Although multi-input activities in the area of logistics have received the attention of researchers for economic modeling and efficiency measurements, this body of knowledge has been only partially incorporated into decision making at the operational level. As Lee and Fu (2014) observed, the most commonly used transportation cost structures are tapering rates, proportional rates, and blanket rates (Lederer 1994, Taaffe et al. 1996, Ballou 2003, Coyle et al. 2008). Hence, scale economies are the most frequently made assumption. (See also Xu [2013] in a location context.) However, we believe that this assumption ignores the fundamental economic fact that returns to input usage are typically nonincreasing. That is, a C–D production function with total input elasticities summing to less than unity results in optimal input usage with usage costs being convex in the output level. Our work has been motivated by the observation that the existing literature on dynamic joint replenishment and transportation models does not incorporate economic production functions. Incorporating such production functions of transportation/delivery activities into the existing logistics management models yields interesting theoretical and practical insights. First, these empirically supported functions typically result in models that are nonlinear and convex in the decision variables for certain parameter settings. For such settings, the theoretical findings of the classical models no longer hold. Hence, these new settings are of theoretical interest. Second, the solution methodologies suitable and satisfactory for the classical models become less useful and, in some cases, even unusable. This necessitates the development of novel heuristics. (For a detailed discussion of both aspects in a dynamic lot-sizing framework, see Kian et al. 2014.) In this work, we focus on the suitability of the existing generic solvers and their computational performance for a logistics model with convex costs. We envision a firm that produces a single product and delivers the production quantity to its vendor-managed inventory warehouse. We consider the dynamic joint replenishment and transportation


problem for this integrated two-stage inventory system where the delivery times of the items from the production site to the warehouse and from the warehouse to a customer's site are negligible, but the logistical operations associated with shipment preparation, transportation/delivery, and cargo handling are nonlinear in the shipment quantity. In particular, we assume that the quantity transported requires multiple inputs whose usage is expressed by a C–D-type production function so that the resulting transportation costs are convex. Therefore, our work differs greatly from the existing models on replenishment and inbound/outbound logistics. Among the significant works in this area, we may cite Lippman (1969), Lee (1989), Pochet and Wolsey (1993), Lee et al. (2003), Jaruphongsa et al. (2005), Berman and Wang (2006), Van Vyve (2007), Hwang (2009), and Hwang (2010). Integrated replenishment and transportation problems have close similarity with the dynamic lot-sizing models in mathematical structure and analytical properties. A dynamic lot-sizing model with convex cost functions of a power form has been studied recently by Kian et al. (2014). It was shown that replenishment is possible even with positive on-hand inventory (contrary to the classical Wagner–Whitin model in Wagner and Whitin [1958]), and thereby, a forward solution algorithm does not exist. In lieu of the optimal solution, heuristics were designed and approximate solutions were investigated. For the related literature and the analytical intricacies of the particular lot-sizing model, we refer the reader to the aforementioned work. The rest of the chapter is organized as follows. In Section 13.2, we present the assumptions of the model and provide three formulations. In Section 13.3, we provide a numerical study and discuss our findings.

13.2 Model

13.2.1 Assumptions

We consider a single item. The problem is of finite horizon length, T. The demand amount in period t is denoted by d_t (t = 1,…,T). All demands are nonnegative and known, but may be different over the planning horizon. No shortages are allowed. The amount of replenishment (production) in period t is denoted by q_t and is uncapacitated. Replenishment in any period t incurs a fixed cost (of setup) K_t (≥0) and unit variable cost p_t. All units replenished in a period are transported to the warehouse; that is, the dispatch quantity in a period is the same as the production quantity. Fixed costs associated with shipments are assumed negligible (or, equivalently, may be viewed as subsumed in the fixed replenishment cost under the assumed dispatch policy). Each unit shipped in period t incurs a cost of τ_t. Additionally, the transportation and delivery use m (≥1) inputs with the unit acquisition cost of input i in period t being a_t^{(i)} for 1 ≤ i ≤ m. It is assumed that there are no economies of scale in the acquisition of the inputs and that unit acquisition costs are nonspeculative over the problem horizon. These assumptions dictate that a lot-for-lot acquisition policy is optimal for the inputs needed. (A similar set of assumptions is implicitly made for the ingredients/raw materials needed for the replenishment that involves actual manufacturing.) The input usage for transporting q_t units of the item in period t is determined through a stationary C–D function as q_t = A ∏_{i=1}^{m} [x_t^{(i)}]^{α_i} with α_i ≥ 0 for all i. The stationarity of the function parameters is realistic in that the planning problem considered herein would be of very short term compared to the timeframe required for technological changes that would impact the values of the elasticity and total factor productivity parameters. The inventory on hand at the end of period t at the warehouse is denoted by I_t; each unit of ending inventory in the period is charged a unit holding cost of h_t. Without loss of generality, the initial inventory level, I_0, is assumed to be zero. Given the short-term nature of the decisions, no discounting is assumed over the horizon, although it can easily be incorporated into the model. The objective is to find a joint replenishment and transportation plan that determines the timing and amount of production and delivery (q_t) such that total costs over the horizon are minimized. Before we proceed with the formulations of the problem, a few remarks are in order about the particulars of our problem setting. (1) In the presence of zero fixed costs of shipment, the assumed dispatch policy is optimal. However, with nonzero fixed costs, it would be suboptimal. This particular fixed cost structure has been studied by Jaruphongsa et al. (2005) with zero unit variable costs. Under




nonspeculative (fixed and unit) costs, it has been established that the replenishment quantity in any period k needs to be either zero or equal to the sum of a number of future dispatch quantities. In our setting, we chose fixed shipment costs to be zero so that the impact of the special nature of the variable costs is brought to the foreground. (2) Since Lippman (1969), the shipments have taken into account the cargo capacity of individual vehicles and considered stepwise cost structures. Again, for better exposition of the special cost function we assume herein, we ignore this aspect. Thus, our results may be viewed as a relaxation of this cargo capacity constraint. (3) The dynamic lot-sizing problems are special cases of the joint replenishment and transportation problems and, thereby, show close affinity with them under certain cost structures and policies. This is true in our setting, as well. The characteristics of the model herein are similar to those of Kian et al. (2014), and the two-echelon inventory system may be reduced to the single-location lot-sizing model studied in the mentioned work. Therefore, in this work, we focus on the computational issues.

13.2.2 Formulations

We first formulate the problem as a mixed-integer nonlinear programming (MINLP) problem. We will consider two equivalent variants. In the first formulation, P_T^1, the decision variables are the replenishment (and shipment) quantities q_t, the binary variables y_t for replenishment setup, and the input quantities x_t^{(i)} for i = 1,…,m, with the intermediate inventory variables I_t for 1 ≤ t ≤ T. The objective function is linear in the variables, but the constraints contain the nonlinear production function that relates the inputs to the replenishment/shipment quantity. In the second formulation, P_T^2, we first determine the optimal input usage for any replenishment/shipment quantity (which may be viewed as preprocessing) and incorporate the production function relationship into the objective function, rendering the problem into a form with a nonlinear objective function and only linear constraints. In P_T^2, the decision variables are the replenishment (and shipment) quantities q_t and the binary variables y_t for replenishment setup, with the intermediate inventory variables I_t for 1 ≤ t ≤ T. We state the first formulation P_T^1, which acts as a building block for the second formulation, formally as follows:

min ∑_{t=1}^{T} [ K_t y_t + (p_t + τ_t) q_t + ∑_{i=1}^{m} a_t^{(i)} x_t^{(i)} + h_t I_t ]    (13.1a)

s.t.

M y_t ≥ q_t,  t ∈ {1,…,T}    (13.1b)

I_t = I_{t−1} + q_t − d_t,  t ∈ {1,…,T}    (13.1c)

q_t = A ∏_{i=1}^{m} [x_t^{(i)}]^{α_i},  t ∈ {1,…,T}    (13.1d)

y_t ∈ {0,1}, x_t^{(i)} ≥ 0, q_t ≥ 0,  i ∈ {1,…,m}, t ∈ {1,…,T}    (13.1e)

where M is a sufficiently large positive number. The first set of constraints (13.1b) ensures that setups are performed only in the periods in which replenishment is positive, (13.1c) gives the evolution of on-hand inventories, (13.1d) represents the production function relating the inputs and the transported quantity, and (13.1e) contains the binary and nonnegativity constraints. We assume that the initial inventory is zero and that the demands are net demands. The second formulation P_T^2 is obtained from P_T^1 by first deriving the optimal input allocations for a given shipment quantity. To this end, consider the subproblem where the input acquisition costs in period t are minimized given q_t = Q. As the input usage is uncapacitated, the first-order conditions imply that, for any i and j, j ∈ {1,…,m},

x_t^{(i)}(Q)* = [α_i a_t^{(j)} / (α_j a_t^{(i)})] x_t^{(j)}(Q)*    (13.2)

where x_t^{(i)}(Q)* is the optimal usage of input i to transport Q units of the item. Hence, for 1 ≤ i ≤ m,

x_t^{(i)}(Q)* = (α_i / a_t^{(i)}) A^{−r} ∏_{j=1}^{m} (a_t^{(j)}/α_j)^{α_j r} Q^r    (13.3)

(For details, see Heathfield and Wibe 1987.) Correspondingly, for a shipment quantity Q, the minimum transportation cost in period t, C_t*(Q), becomes

C_t*(Q) = w_t Q^r + τ_t Q    (13.4)

where

w_t = (1/r) A^{−r} ∏_{j=1}^{m} (a_t^{(j)}/α_j)^{α_j r}
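As a quick numerical check of (13.2) through (13.4), the following Python sketch (our own illustration, not from the chapter) computes the optimal input quantities and the resulting minimum transportation cost for a given shipment quantity Q.

```python
import numpy as np

def min_transport_cost(Q, a, alpha, A, tau):
    """Minimum period cost C_t*(Q) of (13.4) under a Cobb-Douglas transportation technology.

    a     : unit input acquisition costs a_t^(1..m)
    alpha : input elasticities alpha_1..m (their sum equals 1/r)
    A     : total factor productivity
    tau   : per-unit shipping cost tau_t
    """
    a, alpha = np.asarray(a, float), np.asarray(alpha, float)
    r = 1.0 / alpha.sum()
    scale = A ** (-r) * np.prod((a / alpha) ** (alpha * r))
    x = (alpha / a) * scale * Q ** r                  # optimal input usages, Equation (13.3)
    assert np.isclose(A * np.prod(x ** alpha), Q)     # the inputs indeed produce Q
    w = (1.0 / r) * scale                             # the coefficient w_t defined above
    return w * Q ** r + tau * Q                       # Equation (13.4)
```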

The expression of C_t*(Q) enables us to rewrite the MINLP formulation as P_T^2 as follows:

min ∑_{t=1}^{T} [ K_t y_t + p_t q_t + C_t*(q_t) + h_t I_t ]    (13.5a)

s.t.

M y_t ≥ q_t,  t ∈ {1,…,T}    (13.5b)

I_t = I_{t−1} + q_t − d_t,  t ∈ {1,…,T}    (13.5c)

y_t ∈ {0,1}, q_t ≥ 0,  t ∈ {1,…,T}    (13.5d)

where M is as defined before. The constraints (13.5b), (13.5c), and (13.5d) perform the same function as in P_T^1, but we have been able to eliminate the input variables and to render all constraints linear at the expense of nonlinearizing the objective function. Clearly, the second formulation is more compact and has computational advantages, as demonstrated in our numerical study. We can also formulate the problem as a dynamic programming (DP) problem. Define J_t^T(I_t) as the minimum total cost under an optimal joint replenishment and transportation plan for periods t through T, where I_t is the ending inventory as defined before in the recursions (13.1c) or (13.5c). Then,

J_{t−1}^T(I_{t−1}) = min_{q_t ≥ max(0, d_t − I_{t−1})} { K_t 1{q_t > 0} + h_t I_t + p_t q_t + C_t*(q_t) + J_t^T(I_t) },  t ∈ {1,…,T}    (13.6)

where 1{q_t > 0} indicates the existence of a setup in period t, with the boundary condition in period T being J_T^T(I_T) = 0 for any I_T ≥ 0. The optimal solution is found using the earlier recursion, and J_0^T(0) denotes the minimum cost over the problem horizon.
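For illustration, a compact Python sketch of recursion (13.6) is given below; it assumes integer demands so that the inventory states can be enumerated, and the function names and the upper bound on q_t are our own illustrative choices, not part of the chapter.

```python
from functools import lru_cache

def dp_optimal_cost(d, K, p, h, w, tau, r):
    """Backward dynamic program for recursion (13.6) with C_t*(q) = w_t q^r + tau_t q."""
    T = len(d)

    @lru_cache(maxsize=None)
    def J(t, inv):
        # Minimum cost of periods t..T-1 when entering period t with inv units on hand.
        if t == T:
            return 0.0                                   # boundary condition J_T^T = 0
        best = float("inf")
        q_max = sum(d[t:]) - inv                         # producing beyond remaining demand cannot help
        for q in range(max(0, d[t] - inv), q_max + 1):   # q_t >= max(0, d_t - I_{t-1})
            I_end = inv + q - d[t]                       # inventory recursion (13.5c)
            cost = ((K[t] if q > 0 else 0.0) + p[t] * q
                    + w[t] * q ** r + tau[t] * q         # C_t*(q)
                    + h[t] * I_end + J(t + 1, I_end))
            best = min(best, cost)
        return best

    return J(0, 0)                                       # J_0^T(0): minimum total cost
```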


The main difficulty with this formulation is its high dimensionality. The memory requirements and the system state size become prohibitively large, and the solution times are too long. It is not suitable for problems of large sizes in terms of horizon lengths and/or demand values. For our work, this formulation is important in that it provides a guaranteed optimal solution and serves as the benchmark in our numerical study.

13.3  Numerical Study

For our numerical study, we constructed our experiment set in line with Kian et al. (2014). We considered a problem horizon of T = 100 periods. Period demands are generated randomly from three normal distributions with respective coefficients of variation cov = 0.8, 0.4, and 0.2 and standard deviation σ (=40), where negative demand values have been replaced with zero demands. We denote the three demand patterns by D1, D2, and D3, respectively. All other system parameters are stationary. Noting that the unit replenishment cost p_t and the unit transportation cost τ_t can be subsumed into h_t by simple transformations through the inventory recursions, we assume them to be negligible over the entire problem horizon. We set the unit holding cost rate h_t = h = 1, and the setup cost is selected as a function of the mean demand rate, K_t = K = [J^2/2]μ, where J may be viewed as a proxy for the average size of a replenishment quantity under the simple EOQ formula. We have J ∈ {2, 3, 4, 5}. We considered r = 1.5. This corresponds to the C–D-type economic production function with convex costs. To select the parameters for the nonlinear transportation/delivery component, we used the formulation P_T^2 as the base. For this formulation, we set w_t = w and considered the variable cost of transportation per unit when the dispatched quantity equals the average demand per period, w̄, where w̄ = [wμ^r]/μ = wμ^{r−1}. Letting a = h/w̄, we have w = hμ/(aμ^r) with a ∈ {0.02, 0.05, 0.1}, so that the resulting variable cost for a shipment quantity of q units is given by [hμ/a](q/μ)^r. Note that w is decreasing in a. The same sets of 10 demand realizations generated for each demand distribution were used for all experiment instances throughout the study. Overall, we have 120 = (4 × 3 × 10) experiment instances for P_T^2. As part of our study, we also tested the efficacy of formulation P_T^1, which is structurally different from P_T^2.


For consistency, we selected the parameters for this formulation as follows. We considered three values for the number of iso-elastic inputs, m = 1, 2, 5, and α_i = α for 1 ≤ i ≤ m with mα = 1/r. (All other parameters were selected as for P_T^2.) Overall, we have 360 = (3 × 4 × 3 × 10) experiment instances for P_T^1. The optimal plan was obtained by the DP algorithm discussed earlier. We tested the solvers AlphaECP, Baron, Bonmin, Couenne, LINDOGlobal, and KNITRO available online at the NEOS server (http://www.neos-server.org/neos/solvers/index.html). The server's goal has been described as specifying and solving optimization problems with minimal user input (Dolan et al. 2002). The solver options were left at their defaults, except that the time limit on each solver was set to 1500 s, since smaller time limits resulted in too many interrupts in preliminary tests. In our numerical study, (1) we considered an overall assessment of the computational performances of the two formulations with respect to the demand patterns and the number of inputs using different optimizers, and (2) focusing on the formulation P_T^2, we used the ANalysis Of VAriance (ANOVA) to identify the factors that have a statistically significant impact on the solution quality.

13.3.1  Overall Assessment

The performance measures are (1) the number of instances in which a feasible solution was obtained by a solver, and (2) the percentage deviation from the optimal solution for the obtained solutions, averaged over all 120 experiment instances for a particular demand distribution. Note that in the latter computation, the experiment instances in which a solver failed have been excluded. We begin our analysis with our findings on formulation P_T^1. The overall performance summary with m = 1, 2, 5 for the entire experiment set for this formulation is presented in Table 13.1, where # denotes the first performance measure and % denotes the second. For the cases in which no feasible solution was obtained, a dash (—) is used to denote the unavailable second measure. AlphaECP failed to obtain a solution in all experiment instances, whereas LINDOGlobal was able to obtain a solution in all experiment instances except for the demand distribution D2. However, for that pattern, it still produced a solution in the largest number of instances.
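A compact way to compute these two measures from a table of solver runs is sketched below; the column names ('solver', 'distribution', 'feasible', 'deviation') and the data layout are assumptions for illustration only.

```python
import pandas as pd

def summarize(results: pd.DataFrame) -> pd.DataFrame:
    """results: one row per (solver, instance), with a boolean 'feasible' flag and the
    percentage deviation from the DP optimum in 'deviation' (layout is illustrative)."""
    num_feasible = results.groupby(["solver", "distribution"])["feasible"].sum()    # measure '#'
    avg_deviation = (results[results["feasible"]]                                   # failed runs excluded
                     .groupby(["solver", "distribution"])["deviation"].mean())      # measure '%'
    return pd.DataFrame({"#": num_feasible, "%": avg_deviation})
```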

Table 13.1  Overall Summary of Performance Measures
[Rows: formulations P_T^1 (with m = 1, 2, 5) and P_T^2, each under demand distributions D1, D2, D3. Columns: # and % for each of the solvers AlphaECP, Baron, Bonmin, Couenne, LINDOGlobal, and KNITRO.]


Bonmin has a low success rate in obtaining a solution, but the quality of the solutions it does obtain is very good (optimal in many instances). Regardless of the number of inputs in the system, it was able to get a near-optimal solution for D1. The distribution D2 seems to present the most difficulty for given m and other parameters, except for Bonmin. For LINDOGlobal, the number of inputs in the problem setting has a negative impact on the quality of the obtained solutions. For other solvers, the behavior may not be monotone (e.g., KNITRO, Bonmin). However, in a very general qualitative sense, we get the impression that solver performance (in both criteria) tends to worsen as the number of inputs increases in the problem setting. This observation motivated us to construct the second formulation, P_T^2. For P_T^2, the performances of all solvers improved significantly in terms of the number of instances for which a feasible solution was obtained; none of the solvers failed across the entire experimental bed. The solution quality for all solvers except LINDOGlobal (for the m = 1 case) also improved. These results indicate that the formulation P_T^2 is more amenable to the available solvers.

13.3.2  ANOVA Assessment

The overall assessment presented earlier was based on the performances of the two formulations and the solvers in an aggregate sense. Next, we focus on the formulation P_T^2 and use the formal statistical tool ANOVA to identify the factors that have a statistically significant impact on the solution quality. We considered a three-way ANOVA in which the factors are (1) K (representing the fixed replenishment cost), considered at four levels K_i, i = 1,…,4; (2) W (representing the transportation cost coefficient w), considered at three levels W_j, j = 1, 2, 3, as given earlier in the experimental bed; and (3) the solver, denoted by S, with six levels S_k, k = 1,…,6, corresponding to the solvers in the order given earlier, with n = 10 replications (corresponding to the demand realizations) at each experimental instance. The response variables y_ijkl, i = 1,…,4; j = 1,2,3; k = 1,…,6; and l = 1,…,10, are taken as the percentage deviations of the solutions provided by the solvers from the optimal solution, which is obtained by DP. The ANOVA study was conducted for each demand distribution separately.
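A sketch of such a three-way ANOVA in Python/statsmodels is given below. The balanced 4 × 3 × 6 × 10 layout matches the design above, but the response values here are random placeholders and the variable names are illustrative; the sequential (Type I) sums of squares correspond to the SEQ SS column reported in Tables 13.2 through 13.4.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Balanced design: 4 K-levels x 3 W-levels x 6 solvers x 10 replications = 720 observations.
rng = np.random.default_rng(1)
design = pd.MultiIndex.from_product(
    [range(1, 5), range(1, 4), range(1, 7), range(1, 11)],
    names=["K", "W", "S", "rep"]).to_frame(index=False)
design["dev"] = rng.gamma(shape=2.0, scale=1.0, size=len(design))  # placeholder response

# Full factorial model with all two- and three-way interactions, as in Tables 13.2 through 13.4.
model = ols("dev ~ C(K) * C(W) * C(S)", data=design).fit()
print(sm.stats.anova_lm(model, typ=1))   # sequential (Type I) sums of squares
```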


Table 13.2  ANOVA for D1

SOURCE    DF    SEQ SS     ADJ SS     ADJ MS    F        P
K           3   18285.12   18285.12   6095.04   1246.5   0
W           2    4611.81    4611.81   2305.9     471.58  0
S           5   17151.57   17151.57   3430.31    701.54  0
K*W         6   12221.28   12221.28   2036.88    416.56  0
K*S        15    7259.13    7259.13    483.94     98.97  0
W*S        10    2259.74    2259.74    225.97     46.21  0
K*W*S      30    6201.09    6201.09    206.7      42.27  0
Error     648    3168.54    3168.54      4.89
Total     719   71158.26

The ANOVA tables for the three distributions are given in Tables 13.2 through 13.4. The performance statistics for each factor level, computed across the other experiment parameters, are tabulated in Table 13.5 for each demand distribution. Finally, the interaction effects of the factor levels are provided in Figures 13.1 through 13.3 for each distribution, respectively. The inspection of these results reveals the following findings. First, all the factors and their interactions have a significant impact on the solution quality, which is indicated by very large F values and correspondingly very small P-values, implying that the hypothesis that all factor levels have the same effect on the response variable is rejected for all three distributions. A closer inspection of the results provides further information regarding (1) the relative impact of the factors, (2) the direction of the factor-level impact, and (3) the interaction effects. We treat each demand distribution separately.

Table 13.3  ANOVA for D2

SOURCE    DF    SEQ SS     ADJ SS     ADJ MS    F        P
K           3    640.025    640.025   213.342   754.86   0
W           2    542.101    542.101   271.051   959.05   0
S           5   1020.997   1020.997   204.199   722.52   0
K*W         6   1163.260   1163.26    193.877   685.99   0
K*S        15    134.743    134.743     8.983    31.78   0
W*S        10    120.064    120.064    12.006    42.48   0
K*W*S      30    347.503    347.503    11.583    40.99   0
Error     648    183.140    183.14      0.283
Total     719   4151.832


Table 13.4  ANOVA for D3

SOURCE    DF    SEQ SS     ADJ SS     ADJ MS    F         P
K           3    451.116    451.116   150.372   3136.61   0
W           2    353.541    353.541   176.771   3687.26   0
S           5     86.901     86.901    17.38     362.53   0
K*W         6    855.188    855.188   142.531   2973.06   0
K*S        15     82.740     82.74      5.516    115.06   0
W*S        10     63.062     63.062     6.306    131.54   0
K*W*S      30    218.256    218.256     7.275    151.75   0
Error     648     31.066     31.066     0.048
Total     719   2141.870

Consider Table 13.2. Comparing the F values, we observe that the most important factors are, respectively, K, S, W, and the two-way K*W interaction. From Table 13.5, we see that K4, W3, and S2 (solver Baron) result in the worst solution quality on average; that is, the largest deviations from the optimal results are observed when the fixed cost is at its highest level K4, when W is at the W3 level, and when solver S2 is used. From Figure 13.1, we observe that the differential effect as K increases depends on the level of W, implying a significant interaction of K and W, with the worst performance occurring at the K4W3 combination. Although not as significant, there is also some interaction of K with the solvers: as the K level changes from 3 to 4, the performance deteriorates markedly with solvers S5 (LINDOGlobal) and S6 (KNITRO). A similar relation also holds for the interaction between W and the solvers. A similar analysis for D2 and D3 reveals the following. For D2, the factors with the highest F values are ordered as W, K, S, and the two-way interaction K*W. Table 13.5 shows that the differences between the average solution qualities corresponding to the different factor levels are less drastic. Figure 13.2 shows that the K*W interaction is still significant and that the difference between the levels of K is highest for W3, while the interaction of the solvers with K and W is reduced. The ordering of solver performances is similar to that for D1. For D3, we note that the factors with the highest F values are ordered as W, K, K*W, and S.

Table 13.5  Response Variable Statistics for Different Factors
[For each demand distribution D1, D2, D3: the minimum (MIN), maximum (MAX), and average (AVE.) percentage deviation from the optimal solution at each factor level K1 through K4, W1 through W3, and S1 through S6.]


Figure 13.1  Factor interaction for D1. [Interaction plot for RESP (fitted means) across the levels of K (1–4), W (1–3), and S (1–6).]

Figure 13.2  Factor interaction for D2. [Interaction plot for RESP (fitted means) across the levels of K (1–4), W (1–3), and S (1–6).]

We again observe that the average solution qualities corresponding to the different factor levels generally become closer to each other, while the K*W interaction remains pronounced and the interactions with the solvers become less pronounced.


Figure 13.3  Factor interaction for D3. [Interaction plot for RESP (fitted means) across the levels of K (1–4), W (1–3), and S (1–6).]

From the earlier analysis, we see that the solvers' performances get closer to each other as the coefficient of variation of the demand distribution gets smaller, and that the worst performances are observed for the K4W3 combination of a large fixed cost and a low transportation cost coefficient. Furthermore, S1, S3, and S4 (solvers AlphaECP, Bonmin, and Couenne, respectively) are always among the best three performing solvers (although their ordering may change), whereas the worst performer is S2 in all three demand distributions. We observe that solver performances depend drastically on the problem formulation as well as on the cost parameters. We should also mention that they may depend on possible user interventions, such as initial point selection, which were not imposed in our study.

Acknowledgment

The work of Ramez Kian is partially supported by TÜBİTAK (The Scientific and Technological Research Council of Turkey).

References

Ballou, R.H. (2003). Business Logistics/Supply Chain Management, 5th edn. Prentice Hall, Upper Saddle River, NJ.
Berman, O. and Q. Wang. (2006). Inbound logistic planning: Minimizing transportation and inventory cost. Transportation Science 40(3): 287–299.
Chang, S. (1978). Production function and capacity utilization of the port of Mobile. Maritime Policy and Management 5: 297–305.
Cheung, S.M.S. and T.L. Yip. (2011). Port city factors and port production: Analysis of Chinese ports. Transportation Journal 50(2): 162–175.
Cobb, C.W. and P.H. Douglas. (1928). A theory of production. American Economic Review 8(1): 139–165.
Coyle, J.J., C.J. Langley, B.J. Gibson, R.A. Novack, and E.J. Bardi. (2008). Supply Chain Management: A Logistics Perspective, 8th edn. South-Western College Publication, Cincinnati, OH.
Cullinane, K., T.-F. Wang, D.-W. Song, and P. Ji. (2006). The technical efficiency of container ports: Comparing data envelopment analysis and stochastic frontier analysis. Transportation Research Part A 40(4): 354–374.
Cullinane, K.P.B. (2002). The productivity and efficiency of ports and terminals: Methods and applications. In C.T. Grammenos (Ed.), The Handbook of Maritime Economics and Business, Informa Professional, London, U.K., pp. 803–831.
Cullinane, K.P.B. and D.-W. Song. (2003). A stochastic frontier model of the productive efficiency of Korean container terminals. Applied Economics 35: 251–267.
Cullinane, K.P.B. and D.-W. Song. (2006). Estimating the relative efficiency of European container ports: A stochastic frontier analysis. In K.P.B. Cullinane and W.K. Talley (Eds.), Port Economics, Research in Transportation Economics, Vol. XVI. Elsevier, Amsterdam, the Netherlands, pp. 85–115.
Cullinane, K.P.B., D.-W. Song, and R. Gray. (2002). A stochastic frontier model of the efficiency of major container terminals in Asia: Assessing the influence of administrative and ownership structures. Transportation Research Part A: Policy and Practice 36: 743–762.
Dolan, E., R. Fourer, J.J. Moré, and T.S. Munson. (2002). Optimization on the NEOS server. SIAM News 35(6): 1–5.
Douglas, P.H. (1976). The Cobb–Douglas production function once again: Its history, its testing, and some new empirical values. Journal of Political Economy 84(5): 903–916.
Estache, A., M. Gonzalez, and L. Trujillo. (2002). Efficiency gains from port reform and the potential for yardstick competition: Lessons from Mexico. World Development 30(4): 545–560.
Gonzalez, M.M. and L. Trujillo. (2009). Efficiency measurement in the port industry: A survey of the empirical evidence. Journal of Transport Economics and Policy 43(Part 2): 157–192.
Hatirli, S.A., B. Ozkan, and C. Fert. (2006). Energy inputs and crop yield relationship in greenhouse tomato production. Renewable Energy 31(4): 427–438.
Heathfield, D. and S. Wibe. (1987). An Introduction to Cost and Production Functions. Humanities Press International, New Jersey, NJ.
Hwang, H.C. (2009). Inventory replenishment and inbound shipment scheduling under a minimum replenishment policy. Transportation Science 43(2): 244–264.
Hwang, H.C. (2010). Economic lot-sizing for integrated production and transportation. Operations Research 58(2): 428–444.
Ingene, C.A. and R.F. Lusch. (1999). Estimation of a department store production function. International Journal of Physical Distribution & Logistics Management 29(7/8): 453–464.
Jaruphongsa, W., S. Cetinkaya, and C.-Y. Lee. (2005). A dynamic lot sizing model with multi-mode replenishments: Polynomial algorithms for special cases with dual and multiple modes. IIE Transactions 37: 453–467.
Kian, R., Ü. Gürler, and E. Berk. (2014). The dynamic lot-sizing problem with convex economic production costs and setups. International Journal of Production Economics 155: 361–379.
Kogan, K. and C.S. Tapiero. (2009). Optimal co-investment in supply chain infrastructure. European Journal of Operational Research 192(1): 265–276.
Lederer, P.J. (1994). Competitive delivered pricing and production. Regional Science and Urban Economics 24(2): 229–252.
Lee, C.-Y. (1989). A solution to the multiple set-up problem with dynamic demand. IIE Transactions 21: 266–270.
Lee, C.-Y., S. Cetinkaya, and W. Jaruphongsa. (2003). A dynamic model for inventory lot sizing and outbound shipment scheduling at a third-party warehouse. Operations Research 51(5): 735–747.
Lee, S.-D. and Y.-C. Fu. (2014). Joint production and delivery lot sizing for a make-to-order producer-buyer supply chain with transportation cost. Transportation Research Part E 66: 23–35.
Lightfoot, A., G. Lubulwa, and A. Malarz. (2012). An analysis of container handling at Australian ports. 35th ATRF Conference 2012, Perth, Western Australia, Australia.
Lippman, S.A. (1969). Optimal inventory policy with multiple set-up costs. Management Science 16: 118–138.
Notteboom, T.E., C. Coeck, and J. Van den Broeck. (2000). Measuring and explaining relative efficiency of container terminals by means of Bayesian stochastic frontier models. International Journal of Maritime Economics 2(2): 83–106.
Pochet, Y. and L.A. Wolsey. (1993). Lot-sizing with constant batches: Formulations and valid inequalities. Mathematics of Operations Research 18(4): 767–785.
Rekers, R.A., D. Connell, and D.I. Ross. (1990). The development of a production function for a container terminal in the port of Melbourne. Papers of the Australasian Transport Research Forum 15: 209–218.
Shadbegian, R.J. and W.B. Gray. (2005). Pollution abatement expenditures and plant-level productivity: A production function approach. Ecological Economics 54(2): 196–208.
Taaffe, E.J., H.L. Gauthier, and M.E. O'Kelly. (1996). Geography of Transportation, 2nd edn. Prentice-Hall, Inc., Upper Saddle River, NJ.
Tongzon, J. and W. Heng. (2005). Port privatization, efficiency and competitiveness: Some empirical evidence from container ports (terminals). Transportation Research Part A 39: 405–424.
Tongzon, J.L. (1993). The Port of Melbourne Authority's pricing policy: Its efficiency and distribution implications. Maritime Policy and Management 20(3): 197–203.
Tovar, B., S. Jara-Díaz, and L. Trujillo. (2007). Econometric estimation of scale and scope economies within the port sector: A review. Maritime Policy & Management 34(3): 203–223.
Trujillo, L. and S. Jara-Díaz. (2003). Production and Cost Functions and their Application to the Port Sector: A Literature Survey, Vol. 3123. World Bank Publications, http://dx.doi.org/10.1596/1813-9450-3123.
Van Vyve, M. (2007). Algorithms for single-item lot-sizing problems with constant batch size. Mathematics of Operations Research 32: 594–613.
Wagner, H.M. and T.M. Whitin. (1958). Dynamic version of the economic lot size model. Management Science 5(1): 89–96.
Williams, M. (1979). Firm size and operating costs in urban bus transportation. The Journal of Industrial Economics 28(2): 209–218.
Xu, S. (2013). Transport economies of scale and firm location. Mathematical Social Sciences 66: 337–345.
