Project Management and Engineering Research: AEIPRO 2019 [1st ed.] 9783030544096, 9783030544102

This book gathers the best papers presented at the International Congress on Project Management and Engineering, in its

1,491 57 16MB

English Pages XI, 549 [531] Year 2021

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

Project Management and Engineering Research: AEIPRO 2019 [1st ed.]
 9783030544096, 9783030544102

Table of contents :
Front Matter ....Pages i-xi
Front Matter ....Pages 1-1
Strategies to Enhance Impact and Visibility of Research Projects (L. Canals Casals, B. Amante, M. M. González)....Pages 3-15
Comparative Analysis of the SCRUM and PMI Methodologies in Their Application to Construction Project Management (M. A. López-González, L. Serrano-Gómez, V. Miguel-Eguía, J. I. Muñoz-Hernández, M. Sánchez-Núñez)....Pages 17-31
Comparative Study of Project Management Approaches: Traditional Versus Holistic (U. Apaolaza Perez de Eulate, A. Lizarralde Aiastui)....Pages 33-48
Study of an Innovation Indicator Applied to Future Projects Monitoring Using the Earned Value Technique (E. García-Escribano)....Pages 49-62
Rethinking Maturity Models: From Project Management to Project-Based (Víctor Hermano)....Pages 63-73
Classification of Software in the BIM Process According to the PMBoK Knowledge Areas and Levels of Development (LOD) (J. María Rodrigo-Ortega, J. Luis Fuentes-Bargues)....Pages 75-91
Portuguese Project Management Profile—An Overview (A. Andrade Dias, Antonio Amaral)....Pages 93-101
The Utilization of Project Management Tools and Techniques and Their Relationship with the Success in Chemical Engineering (B. Llorens Bellón, R. Viñoles-Cebolla, M. J. Bastante-Ceca, M. C. González-Cruz)....Pages 103-115
Human Aspects of Project Management: Agent-Based Modeling (M. Nakagawa, K. Bahr, V. Lo-Iacono-Ferreira)....Pages 117-129
Application of the DBR Approach to a Multi-project Manufacturing Context (U. Apaolaza Perez de Eulate, A. Lizarralde Aiastui)....Pages 131-145
A Bibliometric Analysis of the Professional Skills in the Scientific Journals of Project Management (J. R. Otegi-Olaso, J. R. López-Robles, N. K. Gamboa-Rosales)....Pages 147-158
The Influence of the Use of Project Management Tools and Techniques on the Achieved Success (B. Llorens, R. Viñoles-Cebolla)....Pages 159-171
Design Thinking: Virtues and Defects of a Project Methodology for Business, Academic Training, and Professional Design (M. Puig Poch, F. Felip Miralles, J. Galán Serrano, C. García-García, V. Chulvi Ramos)....Pages 173-187
Statistical Learning Techniques for Project Control (Fernando Acebes, Javier Pajares, Adolfo López-Paredes)....Pages 189-204
Program and Project Management Articulation: Evidences from the Infrastructure Sector (V. González, E. Hetemi, M. Bosch-Rekveldt, J. Ordieres-Meré)....Pages 205-219
Competences and the Digital Transformation (C. Wolff, O. Mikhieieva, A. Nuseibah)....Pages 221-234
Front Matter ....Pages 235-235
Influence of the Separation Between Photovoltaic Modules Within an Array on the Wind Pressure Distribution (R. Escrivá-Pla, C. R. Sánchez-Carratalá)....Pages 237-251
How Are Sustainability Criteria Included in the Public Procurement Process? (L. Montalbán-Domingo, C. Torres-Machi, A. Sanz-Benlloch, E. Pellicer)....Pages 253-262
Front Matter ....Pages 263-263
Inclusive Design at the Different Phases of the Ageing Process (A. González-de-Heredia, D. Justel, I. Iriarte)....Pages 265-276
A Methodological Approach to Analyzing the Designer’s Emotions During the Ideation Phase (V. Chulvi, J. Gual, E. Mulet, J. Galán, M. Royo)....Pages 277-292
Effect of the Technique Used for the Particle Size Analysis on the Cut Size of a Micro-hydrocyclone (Javier Izquierdo, Jorge Vicente, Roberto Aguado, Martin Olazar)....Pages 293-301
Design and Development of a Snow Scoot with an Innovative Impact Absorption System (H. Malon, V. Romero, D. Ranz)....Pages 303-317
Robustness Analysis of Surface Roughness Models for Milling Carbon Steels (M. Ortells-Rogero, J. V. Abellán-Nebot, J. Serrano-Mira, G. Bruscas-Bellido, J. Gual-Ortí)....Pages 319-332
Forecasting on Additive Manufacturing in Spain: How 3D Printing Would Be in 2030 (M. P. Pérez-Pérez, M. A. Sebastián, E. Gómez)....Pages 333-349
Front Matter ....Pages 351-351
Analysis of the Key Variables That Affect the Generation of Municipal Solid Waste in the Balearics Islands (2000–2014) (C. Estay-Ossandón, A. Mena-Nieto, N. Harsch, M. Bahamonde-Garcia)....Pages 353-367
Front Matter ....Pages 369-369
Paradoxes Between Energy Labeling and Efficiency in Buildings (M. Macarulla, L. Canals Casals)....Pages 371-380
Spanish Winery Interest in Energy Efficiency Based on Renewables (N. García-Casarejos, P. Gargallo, J. Carroquino)....Pages 381-390
Sustainability Building Rating Systems. A Critical Review. Time for Change? (G. Martínez Montes, J. Alegre Bayo, B. Moreno Escobar, T. Mattinzioli, M. J. Álvarez Pinazo)....Pages 391-404
Analysis and Risk Management in Projects of Change to Led in Street Lighting According to ISO-21500 and UNE-EN-62198 (M. J. Hermoso-Orzáez, R. D. Orejón-Sánchez, A. Gago-Calderón)....Pages 405-424
Improvement of Energy Efficiency in the Building Life-Cycle. Study of Cases (J. M. Piñero-Vilela, J. J. Fernández-Domínguez, A. Cerezo-Narváez, M. Otero-Mateo)....Pages 425-436
Methodology for Valuation of Water Distribution Network Projects Based on the Potential Energy Recovery (M. Iglesias-Castelló, P. L. Iglesias-Rey, F. J. Martínez-Solano, J. V. Lozano-Cortés)....Pages 437-448
Use of Grey-Box Modeling to Determine the Air Ventilation Flows in a Room (M. Macarulla, M. Casals, N. Forcada, M. Gangolells)....Pages 449-461
Front Matter ....Pages 463-463
Development of Local Capabilities for Rural Innovation in Indigenous Communities of Mexico (F. J. Morales-Flores, J. Cadena-Iñiguez, B. I. Trejo-Téllez, V. M. Ruiz-Vera)....Pages 465-475
Adoption of Good Project Management Practices in the International Cooperation Sector in Colombia (Maricela I. Montes-Guerra, H. Mauricio Diez-Silva, Hugo Fernando Castro Silva, Torcoroma Velásquez Pérez)....Pages 477-489
The Logical Framework Approach, Does Its History Guarantee Its Future? (R. Rodríguez-Rivero, I. Ortiz-Marcos, L. Ballesteros-Sánchez, J. Mazorra, M. J. Sánchez-Naranjo)....Pages 491-501
Front Matter ....Pages 503-503
The Use of the Cloud Platform to Register and Perform Intelligent Analysis of Energy Consumption Parameters in the Service and Industrial Sectors (V. Rodríguez, J. García, H. Morán, B. Martínez)....Pages 505-515
Including Dynamic Adaptative Topology to Particle Swarm Optimization Algorithms (Patricia Ruiz, Bernabé Dorronsoro, Juan Carlos de la Torre, Juan Carlos Burguillo)....Pages 517-531
Front Matter ....Pages 533-533
Quantitative Analysis on Risk Assessment in Photovoltaic Installations: Case Study in the Region of Murcia and the Dominican Republic (G. C. Guerrero-Liquet, M. S. García-Cascales, J. M. Sánchez-Lozano)....Pages 535-549

Citation preview

Lecture Notes in Management and Industrial Engineering

José Luis Ayuso Muñoz José Luis Yagüe Blanco Salvador F. Capuz-Rizo   Editors

Project Management and Engineering Research AEIPRO 2019

Lecture Notes in Management and Industrial Engineering Series Editor Adolfo López-Paredes, INSISOC, University of Valladolid, Valladolid, Spain

This book series provides a means for the dissemination of current theoretical and applied research in the areas of Industrial Engineering and Engineering Management. The latest methodological and computational advances that both researchers and practitioners can widely apply to solve new and classical problems in industries and organizations constitute a growing source of publications written for and by our readership. The aim of this book series is to facilitate the dissemination of current research in the following topics: • • • • • • • • • • • • • •

Strategy and Enterpreneurship Operations Research, Modelling and Simulation Logistics, Production and Information Systems Quality Management Product Management Sustainability and Ecoefficiency Industrial Marketing and Consumer Behavior Knowledge and Project Management Risk Management Service Systems Healthcare Management Human Factors and Ergonomics Emergencies and Disaster Management Education

More information about this series at http://www.springer.com/series/11786

José Luis Ayuso Muñoz José Luis Yagüe Blanco Salvador F. Capuz-Rizo •

Editors

Project Management and Engineering Research AEIPRO 2019

123



Editors José Luis Ayuso Muñoz E.T.S. Ingeniería Agronómica y de Montes Universidad de Córdoba Córdoba, Spain

José Luis Yagüe Blanco Departamento de Ingeniería Agroforestal Universidad Politécnica de Madrid Madrid, Spain

Salvador F. Capuz-Rizo Departamento de Proyectos de Ingeniería Universitat Politècnica de València Valencia, Spain

ISSN 2198-0772 ISSN 2198-0780 (electronic) Lecture Notes in Management and Industrial Engineering ISBN 978-3-030-54409-6 ISBN 978-3-030-54410-2 (eBook) https://doi.org/10.1007/978-3-030-54410-2 © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The Spanish Association of Project Management and Engineering (AEIPRO) is pleased to issue this volume. It compiles a selection of the best papers presented at the 21st and 22nd International Congress on Project Management and Engineering. They are a good sample of the state of the art in the fields of Project Management and Project Engineering. Since 2008, AEIPRO Scientific Committee follows a two-step procedure for selection. Firstly, it assesses all the papers presented to select the approved ones to the Congress. After the conclusion and taking into account the chairman reports of every session, a second assessment is performed by a reduced Scientific Committee. We hope that the fruit of this process, this volume, contributes to the improvement of project engineering research and enhances the transfer of results to the job of Project Engineers and Project Managers. The Spanish Association of Project Management and Engineering is a nonprofit organization founded in 1992. It is an entity for the professionalization of Project Management and Engineering with the following goals: to facilitate the association of scientists and professionals within the Project Management and Engineering areas; to serve as a tool for improving communication and cooperation among these professionals; to improve experts’ knowledge in the different fields of Project Management and Engineering; to promote the best professional practices in these fields; to identify and define the needs that may arise in the everyday development of these activities; and finally, to adopt positions in order to orientate society when faced with differences with the fields of action. AEIPRO is the Spanish Member Association of IPMA (International Project Management Association), a federation of about 70 Member Associations that brings together more than 50,000 project management professionals and researchers from Europe, Asia, Africa, the Middle East, Australia, and North and South America. Chapters presented in this book address methods, techniques, studies, and applications to project management and all the project engineering areas. The contributions have been arranged in eight thematic areas:

v

vi

Preface

• • • • • • • •

Project Management Civil Engineering and Urban Planning. Construction and Architecture Product and Process Engineering and Industrial Design Environmental Engineering and Management of Natural Resources Energy Efficiency and Renewable Energies Rural Development and Development Cooperation Projects Technologies of Information and Communications (TIC). Software Engineering Risk Management and Safety

We want to acknowledge all the contributors and reviewers. Valencia, Spain November 2020

José Luis Ayuso Muñoz José Luis Yagüe Blanco Salvador F. Capuz-Rizo

Contents

Part I 1

2

3

4

5

6

7

Project Management

Strategies to Enhance Impact and Visibility of Research Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L. Canals Casals, B. Amante, and M. M. González Comparative Analysis of the SCRUM and PMI Methodologies in Their Application to Construction Project Management . . . . . . . M. A. López-González, L. Serrano-Gómez, V. Miguel-Eguía, J. I. Muñoz-Hernández, and M. Sánchez-Núñez

3

17

Comparative Study of Project Management Approaches: Traditional Versus Holistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

33

Study of an Innovation Indicator Applied to Future Projects Monitoring Using the Earned Value Technique . . . . . . . . . . . . . . . E. García-Escribano

49

Rethinking Maturity Models: From Project Management to Project-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Víctor Hermano

63

Classification of Software in the BIM Process According to the PMBoK Knowledge Areas and Levels of Development (LOD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues Portuguese Project Management Profile—An Overview . . . . . . . . . A. Andrade Dias and Antonio Amaral

75 93

vii

viii

Contents

8

The Utilization of Project Management Tools and Techniques and Their Relationship with the Success in Chemical Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 B. Llorens Bellón, R. Viñoles-Cebolla, M. J. Bastante-Ceca, and M. C. González-Cruz

9

Human Aspects of Project Management: Agent-Based Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 M. Nakagawa, K. Bahr, and V. Lo-Iacono-Ferreira

10 Application of the DBR Approach to a Multi-project Manufacturing Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui 11 A Bibliometric Analysis of the Professional Skills in the Scientific Journals of Project Management . . . . . . . . . . . . . . . . . . . . . . . . . . 147 J. R. Otegi-Olaso, J. R. López-Robles, and N. K. Gamboa-Rosales 12 The Influence of the Use of Project Management Tools and Techniques on the Achieved Success . . . . . . . . . . . . . . . . . . . . 159 B. Llorens and R. Viñoles-Cebolla 13 Design Thinking: Virtues and Defects of a Project Methodology for Business, Academic Training, and Professional Design . . . . . . . 173 M. Puig Poch, F. Felip Miralles, J. Galán Serrano, C. García-García, and V. Chulvi Ramos 14 Statistical Learning Techniques for Project Control . . . . . . . . . . . . 189 Fernando Acebes, Javier Pajares, and Adolfo López-Paredes 15 Program and Project Management Articulation: Evidences from the Infrastructure Sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 V. González, E. Hetemi, M. Bosch-Rekveldt, and J. Ordieres-Meré 16 Competences and the Digital Transformation . . . . . . . . . . . . . . . . . 221 C. Wolff, O. Mikhieieva, and A. Nuseibah Part II

Civil Engineering and Urban Planning. Construction and Architecture

17 Influence of the Separation Between Photovoltaic Modules Within an Array on the Wind Pressure Distribution . . . . . . . . . . . 237 R. Escrivá-Pla and C. R. Sánchez-Carratalá 18 How Are Sustainability Criteria Included in the Public Procurement Process? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 L. Montalbán-Domingo, C. Torres-Machi, A. Sanz-Benlloch, and E. Pellicer

Contents

Part III

ix

Product and Process Engineering and Industrial Design

19 Inclusive Design at the Different Phases of the Ageing Process . . . . 265 A. González-de-Heredia, D. Justel, and I. Iriarte 20 A Methodological Approach to Analyzing the Designer’s Emotions During the Ideation Phase . . . . . . . . . . . . . . . . . . . . . . . . 277 V. Chulvi, J. Gual, E. Mulet, J. Galán, and M. Royo 21 Effect of the Technique Used for the Particle Size Analysis on the Cut Size of a Micro-hydrocyclone . . . . . . . . . . . . . . . . . . . . 293 Javier Izquierdo, Jorge Vicente, Roberto Aguado, and Martin Olazar 22 Design and Development of a Snow Scoot with an Innovative Impact Absorption System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 H. Malon, V. Romero, and D. Ranz 23 Robustness Analysis of Surface Roughness Models for Milling Carbon Steels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319 M. Ortells-Rogero, J. V. Abellán-Nebot, J. Serrano-Mira, G. Bruscas-Bellido, and J. Gual-Ortí 24 Forecasting on Additive Manufacturing in Spain: How 3D Printing Would Be in 2030 . . . . . . . . . . . . . . . . . . . . . . . . 333 M. P. Pérez-Pérez, M. A. Sebastián, and E. Gómez Part IV

Environmental Engineering and Management of Natural Resources

25 Analysis of the Key Variables That Affect the Generation of Municipal Solid Waste in the Balearics Islands (2000–2014) . . . . 353 C. Estay-Ossandón, A. Mena-Nieto, N. Harsch, and M. Bahamonde-Garcia Part V

Energy Efficiency and Renewable Energies

26 Paradoxes Between Energy Labeling and Efficiency in Buildings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 M. Macarulla and L. Canals Casals 27 Spanish Winery Interest in Energy Efficiency Based on Renewables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381 N. García-Casarejos, P. Gargallo, and J. Carroquino 28 Sustainability Building Rating Systems. A Critical Review. Time for Change? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391 G. Martínez Montes, J. Alegre Bayo, B. Moreno Escobar, T. Mattinzioli, and M. J. Álvarez Pinazo

x

Contents

29 Analysis and Risk Management in Projects of Change to Led in Street Lighting According to ISO-21500 and UNE-EN-62198 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 M. J. Hermoso-Orzáez, R. D. Orejón-Sánchez, and A. Gago-Calderón 30 Improvement of Energy Efficiency in the Building Life-Cycle. Study of Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 J. M. Piñero-Vilela, J. J. Fernández-Domínguez, A. Cerezo-Narváez, and M. Otero-Mateo 31 Methodology for Valuation of Water Distribution Network Projects Based on the Potential Energy Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437 M. Iglesias-Castelló, P. L. Iglesias-Rey, F. J. Martínez-Solano, and J. V. Lozano-Cortés 32 Use of Grey-Box Modeling to Determine the Air Ventilation Flows in a Room . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449 M. Macarulla, M. Casals, N. Forcada, and M. Gangolells Part VI

Rural Development and Development Cooperation Projects

33 Development of Local Capabilities for Rural Innovation in Indigenous Communities of Mexico . . . . . . . . . . . . . . . . . . . . . . 465 F. J. Morales-Flores, J. Cadena-Iñiguez, B. I. Trejo-Téllez, and V. M. Ruiz-Vera 34 Adoption of Good Project Management Practices in the International Cooperation Sector in Colombia . . . . . . . . . . . 477 Maricela I. Montes-Guerra, H. Mauricio Diez-Silva, Hugo Fernando Castro Silva, and Torcoroma Velásquez Pérez 35 The Logical Framework Approach, Does Its History Guarantee Its Future? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 R. Rodríguez-Rivero, I. Ortiz-Marcos, L. Ballesteros-Sánchez, J. Mazorra, and M. J. Sánchez-Naranjo Part VII

Technologies of Information and Communications (TIC). Software Engineering

36 The Use of the Cloud Platform to Register and Perform Intelligent Analysis of Energy Consumption Parameters in the Service and Industrial Sectors . . . . . . . . . . . . . . . . . . . . . . . 505 V. Rodríguez, J. García, H. Morán, and B. Martínez 37 Including Dynamic Adaptative Topology to Particle Swarm Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517 Patricia Ruiz, Bernabé Dorronsoro, Juan Carlos de la Torre, and Juan Carlos Burguillo

Contents

Part VIII

xi

Risk Management and Safety

38 Quantitative Analysis on Risk Assessment in Photovoltaic Installations: Case Study in the Region of Murcia and the Dominican Republic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535 G. C. Guerrero-Liquet, M. S. García-Cascales, and J. M. Sánchez-Lozano

Part I

Project Management

Chapter 1

Strategies to Enhance Impact and Visibility of Research Projects L. Canals Casals, B. Amante, and M. M. González

Abstract Research quality is commonly evaluated by the quantity of papers published in high impact factor-indexed journals. In fact, many of these journals and conference articles show partial or final results of research projects done individually or together with other research centers, universities, and companies. However, the visibility and impact on society of these projects, even with the existence of specific websites, funding, and publicizing requirements, are generally low. This study takes the example of the ReViBE project, in which the Research Group on Project Engineering of the Polytechnic University of Catalonia takes part, by presenting how the use of three tools designed to generate ideas, such as surveys, brainstorming, and teamstorming, serves at the same time to give support to research and as a loudspeaker to enhance the visibility of the project. The study compares the quality and quantity of data collected or generated by each tool and methodology and evaluates its capacity for dissemination. Keywords Project management · Impact evaluation · Brainstorming · Teamstorming

L. Canals Casals (B) · B. Amante · M. M. González GIIP Group. Dpt. of Project and Construction Engineering, ETSEIAT, Universitat Politècnica de Catalunya, Barcelona-TECH. C\Colom 11, 08222 Terrassa, Spain e-mail: [email protected] URL: https://giip.upc.edu/es B. Amante e-mail: [email protected] M. M. González e-mail: [email protected] L. Canals Casals IREC (Catalonia Institute for Energy Research), Energy System Analytics group, 08930 Sant Adrià de Besòs, Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_1

3

4

L. Canals Casals et al.

1.1 Introduction Projects having public funding are required to disseminate and present their results. To satisfy the interests of funding administrations, many projects do not only publish their work on scientific publications but they also use web pages and participate in social networks such as Facebook, Instagram, and Twitter among others. However, the academia evaluates the quality research of projects basically by the quantity and impact of the articles published in indexed journals, with those with a higher impact factor and most cited being best qualified. But are these tools valid for this purpose? Does research dissemination reach the interested general public? To provide an answer to these questions, this study works in the framework of the ReViBE project (Reuse and life of batteries and energy, TEC2015-63899-C3-1-R), funded by the Spanish Ministry of Economy, Industry and Competitiveness together with the public European Regional Development Fund (ERDF). The ReViBE project studies the degradation of lithium ion batteries, develops control and monitoring systems, and searches businesses and applications where second life electric vehicle (EV) batteries may be reused considering that EV batteries are not useful for traction purposes when they have lost 20% of their capacity [21], participating in the circular economy [11]. To achieve this last objective, the project prepared strategies, such as the use of several creativity tools to generate ideas. Creativity is useful to find good quality original solutions in unspecific and unstructured new situations [12]. The reuse of EV batteries fits perfectly in this reality. Moreover, as the sessions dedicated to the generation of ideas require working in groups of people, they will be used to give visibility to the project. The project’s website counts on a monitoring tool that is useful to analyze access and participation. Through the sessions and other actions (news, surveys, etc.), this study could evaluate if they had a significant impact on the visibility of the project. Additionally, this study presents the results obtained during the creativity sessions and compares the proposed energy storage business ideas from common citizens in relation to what the scientific community publishes in scientific journals. Scientific publications perceive that the applications presenting a higher interest are expected to give support to the electricity grid such as the improvement of the quality service, area and frequency regulation [10, 5], and transmission deferral [13], or to give support to renewable power sources that depend on weather conditions to produce energy and suffer from instability [17, 9]. In addition, industries are interested in peak shaving services to reduce the electricity bill [1], load leveling, or time shifting storing energy during low fare periods to use it when electricity is more expensive [7]. A similar idea is used in energy arbitrage that takes advantage of short-term price fluctuations [3]. In residential or tertiary buildings, businesses such as the self-consumption ones are gaining interest, although regulation changes from one country to another. For example, in Spain it is forbidden to use batteries for such purposes [16]. Finally, second life batteries are also studied to give support

1 Strategies to Enhance Impact and Visibility of Research Projects

5

to fast-charging EV stations, putting together two important sectors: the automotive and the energy sector [8].

1.2 Objective In summary, this study verifies if diffusion tools are useful and, at the same time, if they contribute to increasing the knowledge in the field of second life applications of EV batteries.

1.3 Methodology In the first place, the aim of the study is to measure the effectivity of tools that increase the visibility of projects. To do so, the ReViBE project prepared a website (http://revibe.upc.edu/en) following the corporate image of the Universitat Politècnica de Catalunya, Barcelona TECH. To monitor visits to this website, this study took advantage of Google Analytics. The website content is available in two languages: English and Spanish. ReViBE was also included as a project in ResearchGate, the social network for researchers (https://www.researchgate.net/project/ReViBE), and was linked to the involved researchers. Once the website was ready, the study launched several strategies to promote visits to the website that, at the same time, served to work on the scientific challenge: the identification of business models for second life EV batteries. Working based on problem-solving is, on its own, a challenge [19] that this study faces with the use of four tools to generate ideas. The first tool is an online survey (described with further detail in Sect. 1.3.1) and three group sessions to generate ideas using different dynamics: a teamstorming session and two brainstorming sessions (described in further detail in Sect. 1.3.2). Finally, the study analyzes the impact and quality of the collected ideas from each strategy.

1.3.1 The Survey The online survey [6] became the first activity launched to generate ideas in the ReViBE project. It was done using the template for surveys from Google and Google Forms. The survey begins with some misleading images to prepare and warm up the brain toward creativity. After that, there are three simple and fast-to-answer questions that guide the participant to the point of interest, businesses for second life batteries. In these three questions, the participants are required to write the first two words that

6

L. Canals Casals et al.

come to their mind when reading Energy, Electricity, and Batteries. Then, the survey follows with a detailed explanation of its purpose and asks for five ideas in six areas where batteries could be used: in buildings, terrestrial mobility, water, air, hobbies, and assistance. Finally, the survey ends with two general questions to classify the profile of the participant. This survey was spread through different platforms at different moments to facilitate the evaluation of the impact of each strategy. It was first published on the project website and then it was sent by e-mail to researchers and known people. Finally, it was published on social networks: Facebook, LinkedIn, and ResearchGate.

1.3.2 Face-to-Face Creativity Sessions The first session, which took place in Barcelona (ETSEIB-UPC) on November 23rd, 2017, was directed by a specialist in creative exercises following the Creative Problem Solving (CPS) methodology using a Teamstorming tool [15]. The participants of this session were related to EV batteries in some way, including four members of the ReViBE team, two PhD students, and three teachers. The session was divided into three clearly distinct stages: the first stage was oriented to guiding the participant toward a challenge to solve; the second stage was focused on the generation of ideas; finally, the last stage was conceived to select the best ideas. Before the first and second stages, participants did some warm-up exercises to prepare them. 1st stage: In this stage, the participants generate a wide perception of the challenge, as a clear identification of the challenge or question promotes knowledge [14]. This stage has several phases: • Selection of a challenge to work on: Each participant selects one of the working fields where batteries could fit (the same areas defined in the survey). • List and question the general belief : In this phase, each participant analyzes and tries to dismantle his/her prejudices when confronting the problem. • Collect information about the challenge: Working in groups of two or three participants. • 5-Why: These same groups ask for causes and motivations. • Focus of the work: Each participant carefully prepares the challenge. 2nd stage: The generation of ideas count on two additional phases: • Searching for inspiration: Individually, each participant should find inspiring examples of innovative or revolutionary ideas in the past to use as a mirror. • Teamstorming: It is in this phase that all participants work together. Each participant begins by writing three ideas to solve his/her challenge and posting them on the wall. From this moment on, participants start to pick up ideas randomly from any challenge and should develop new ideas from these picked-up ideas and put them back on the wall, having more and more ideas at each round. This

1 Strategies to Enhance Impact and Visibility of Research Projects

7

process is done ten times following the indications of the specialist that orients the creativity toward contrast, juxtaposition, getting closer, or taking distance from the picked-up ideas. This methodology follows the premise that ambiguity foments the generation of ideas to solving a problem [2]. 3rd stage: The selection is done individually and each participant takes the ideas of his/her challenge and orders them from most realistic and new to least realistic and new. The second session took place in Castelló (Spain) on March 14, 2017, at the Jaume I University, and it was guided by two members of the ReViBE project who participated in the first session. This second session counted with 11 participants, all teachers from the university who had never worked with energy storage systems but who had knowledge of eco-design and creative decision-making methodologies. This session was less guided, leaving much more space for the participants to search in the direction they felt more appropriate. Additionally, to better predispose the participants to creativity, many elements to break the structure were used, such as the selective attention test [18], counting triangles in a pentagon, building towers with children’s games, and drawing inspiring images from a sheet of paper filled with circles and a Kahoot (https://kahoot.it/) competition using ambiguous images. Additionally, all participants selected some disguising elements, such as hats or brightly colored wigs, with the objective of disinhibiting and reducing the influence they could have on the rest of the participants. Once ready, the directors of the activity briefly described the main aspects of electric vehicle batteries and the strategies available to reuse them [4]. With these premises, the participants discussed the areas where they thought it made sense to store electricity, and each participant selected one of them to start to work with. To do so, an A3 sheet was given to all participants and each one drew the selected area on top of it. With the areas defined and selected, each participant had five minutes to draw (if they felt comfortable with it) all the ideas they could gather in their area. After these five minutes, they passed their A3 sheet to the participant on his/her right, so they all had a new sheet to work with for five more minutes. This process was repeated four times. Then, all the ideas were put together having five more minutes to add additional ideas they could find in any of the areas on the table. The session ended selecting the ideas that they thought were more interesting, and discarding those they felt were useless. To do so, all participants took off the disguise, so they recovered the seriousness and personality required in a process of alternative selection, and took three red stickers to put on the ideas they did not like and three green stickers to put on the most interesting ideas. Finally, the session ended by sharing their feelings about the session itself. The third session took place on March 21, 2017, in Terrassa (back in the UPC) gathering eight participants (five PhD students and three professors) from the Department of Project and Construction Engineering, the department in which some ReViBE researchers work . This third session was similar to the second session in Castelló but had fewer transgressive activities and had no disguises. In this case,

8

L. Canals Casals et al.

before the brainstorming took place, the participants passed the selective attention test, counted triangles in a pentagon, and did the Kahoot. The rest of the session followed the same dynamics and phases from the second session. Again, at the end of the session, the participants shared their feelings and were asked to highlight the good aspects and those that should be improved.

1.4 Results and Discussion This section is divided into two parts. The first part focuses the attention on the results regarding the impact and visibility of the project, and the second part analyzes the ideas generated with the different creative tools. Using Google Analytics it was possible to count us to view the number of visits of the ReViBE website along time, shown in Fig. 1.1. The page was created in June, receiving 439 visits since then, equating to 1.5 visits per day. The first action carried out to give visibility to the project was to publish news from relevant events related to the project research (green arrows in 2). As expected, this activity seems not to have any impact on visibility, but it shows that the project is alive. On November 18, 2016, the survey was not only published on the website but it was also sent by e-mail to contacts (orange arrows). In this case, there is clearly a reaction with more than 15 visits. The link of the survey was not directed to the project website; it went directly to the survey, which implies that, after doing the survey, 15 people were interested in the project and visited the website intentionally. The yellow arrows indicate the days when creative sessions were done, not having much impact on website visibility. On the contrary, the publication of the survey on social networks (red arrows) does seem to have an impact. Notice that in these cases, the link was directed to the project website instead, leading afterwards to the survey. The survey was first published on Facebook, showing more than 20 visits. Then on 30 25 20 15 10 5 0 -

Fig. 1.1 ReViBE website visits through time

1 Strategies to Enhance Impact and Visibility of Research Projects 2% 2% 2%

PhD Master Bachelor

3% Engineering Sciences

22%

45%

Social

PF

Educaon

Highschool

Literature

3% 5%

8%

6%

55% 23%

Elementary

Other

9

Other

26%

Fig. 1.2 Participants’ profile distribution of the survey

January 26, 2017, it was published on ResearchGate, having almost no impact, and just one person participated in the survey following this path. The higher visit peak is found after the second creative session, when the ReViBE researchers encouraged the participants to visit the website, sending them the website link after the session. Additionally, this moment coincides with a second round of social network publication on LinkedIn and Facebook again. In summary, 15% of the visits to the website of the ReViBE project come from social networks. In particular, 64% correspond to Facebook, 21% to LinkedIn, and only 15% to ResearchGate. Moreover, the visits to the website do coincide with diffusion activities. It seems that there is a soft tendency to increase the visits to the website. The second part of this study analyzes the responses given in these creativity sessions and in the survey in order to evaluate their quality. After three months on the website of the project, the survey received 58 valid responses. Figure 1.2 shows that the profiles of the participants are quite high, with 45% being doctors and 92% having finished university studies. Moreover, the formation is aligned with engineering and sciences, which cover 78% of the participants. It seems obvious that this profile is not very representative of society, but it points out that the researcher’s close environment follows this profile or that the people interested in responding, due to the area treated, are closer to technology. In any case, the survey provided 1167 ideas where batteries could be used, resulting in an average of 20 proposals per participant. Figure 1.3 shows how mobile devices similar to “Hoverboards” or “Segways” catch the attention of most of the participants when thinking of applications using batteries. In fact, in 25% of the ideas, mobility is the area where the use of batteries seems more effective, including applications in planes, boats, motorbikes, bicycles, cars, or UAVs (Unmanned Aerial Vehicles). Lighting had many applications, such as local applications, emergency lights, traffic lights, and illumination in events, among others, and is the fourth area best evaluated. Motorbikes, in the sixth place, garnered 4% of the ideas. We obtain this same percentage putting together heating and cooling systems (independently if it concerns

10

L. Canals Casals et al.

40

20

0

Hoverboard, … Plane Ship Lighting Entertainment Motorbikes Safety Bicycles Phones… Appliances Heating Solar and Wind Car Electric chair Drones Food Photography Tramway Bus Hot air baloon Tidal power plant Buoy Cooling Train Submarine Machinery Location Radar Irrigation Lighthouse Truck Park Signals Helicopter Hang glider Mountain Defibrillator

Número de ideas

60

Fig. 1.3 Battery SOH evolution under different second life applications at 25 °C

spaces or food) or renewable energy generation systems (such as wind generators, solar panels, and tidal wave energy). Finally, there are two more areas to highlight. The entertainment area, including outdoor concerts, cinema, and theater, takes the fifth place and security seems to be among serious concerns too. This latter area includes devices such as emergency sounds and lights, speakers, and remote signal emitters. Concerning the first in-person creativity session, teamstorming, it should be taken into consideration that the working areas were given by the director of the activity. The teamstorming session generated a total of 140 ideas, which divided by the number of participants indicates that each participant gave an average of 15.5 ideas. Curiously, 45 of these ideas, corresponding to 32%, had no relation with batteries, so they should be removed from the list, leaving 95 ideas related to batteries (almost 10 per participant). Following this methodology, the generation of ideas per area depends on the participant working on it, as he/she is the only one responsible for the selected challenge. Table 1.1 shows how, effectively, the distribution of ideas is heterogeneous, having some areas with less than 10 ideas and others over 20. Additionally, as the director of the teamstorming session was frequently giving new directives to force the participants to think in different directions, results show that some participants lost sight of their challenge. As an example, from all the generated ideas, some participants took into account the use of batteries only in 33% of the cases, while others had them clearly in mind and used them in 86% of cases. Analyzing the results with the director of the activity, we could observe that there were some ideas that misled the participants when taking them from the wall to create new ideas. That is, when participants took bad ideas, they normally generated new equally unfortunate ideas. These misleading ideas were defined as “troll or infested ideas” as they spread through all the areas without any control or filter. In fact, unexpected proposals appeared from successive deviations such as the creation of batteries from animal excrement or the use of electric shocks to keep people awake.

1 Strategies to Enhance Impact and Visibility of Research Projects

11

Table 1.1 Areas, challenges, and generated ideas during the teamstorming session Application areas Challenge

Ideas

Ideas with batteries

Relevant ideas

Agriculture and farming

From fuel to electric machinery

17

7 (41%)

3

Maritime

Use batteries in cruise ships entering into port

14

11 (78%)

3

Air

Use batteries to offer new services in airports

11

7 (64%)

3

Industry

Gain competitiveness in productive processes

22

17 (77%)

7

Hobbies

Power street vendor markets

17

8 (47%)

3

Solar

Reduce the size of solar fields

13

5 (38%)

1

Home

Use batteries to pass from a centralized to a distributed grid

22

12 (55%)

3

Mobility

Establish a transportation system in “megacities”

9

3 (33%)

1

Mobility 2

Use batteries to reduce 15 operating costs of public transportation offering additional services

13 (86%)

5

However, the dissemination of ideas through the areas of interest affected also good ideas. Good examples are the use of solar panels on rooftops, buses, shops, façades, windows, cars, planes, or even in sails or sunshade fabric. Electric trolleys were also repeatedly used in industries (to transport tools), in markets (to distribute products or groceries), in airports and ports (for people or baggage), and in the agro industry (to feed animals). Finally, we could also find some funny ideas, such as an electric autonomous scarecrow, interchangeable production steps, and piezoelectric systems on the road or touchscreens. To evaluate the brainstorming sessions (sessions two and three), we began analyzing the profile of the participants from their response to the Wilson Learning test [20]. The result distribution shows that both groups are quite similar (Fig. 1.4), having a tendency to be expressive and friendly (black dots represent members from Jaume I University and blue dots represent those from UPC). Bigger dots indicate the average of both groups. However, some differences appeared in the duration of the brainstorming session from the very beginning. The group from “Jaume I” took wide concept areas: islanded

12

L. Canals Casals et al. 1

Fig. 1.4 Number of cycles done per battery and total accumulated capacity discharged on each second life application

Friendly

Expressive

-1

1

Manager -1

UPC Jaume I

Analytic

buildings, home, industry, agriculture, deserts, water basins, air, cities, and recreational spaces. On the other hand, the UPC group worked on more focused areas: selfconsumption, electricity generation, lighting, mobility, signals, rural areas, cities, and recreational spaces. Both groups coincide only in the latter three options. Nonetheless, there were many more coincidences when dealing with specific applications where batteries could be used. Figure 1.5 shows how in both cases lighting and signal systems and devices had special relevance. These applications go from urban lighting to helmet lights for firemen or speleology helmets. In the case of the UPC, this prominence was expected because two specific areas dealt with this kind of application (resulting in nine ideas per area). What was surprising is that the “Jaume I” group identified similar ideas not having detected these areas as such. Coincidences concerned lighthouses, emergency exit lights, traffic lights, luminous traffic panels, lighting of roads, camping, and islanded zones. 20 15 10 5 0

Jaume I

UPC

Fig. 1.5 Most frequent ideas from the second and third brainstorming sessions

1 Strategies to Enhance Impact and Visibility of Research Projects

13

Another point of convergence was appreciable in security applications, where participants seem to be seriously concerned. The applications repeated in both sessions were detection sensors, alarms, megaphones, and other similar applications. Notice that cooking, heating, and cooling food and drinks was also one of the main topics, including disparate ideas like babies’ bottle heating, selling machines or electric barbecues. Curiously, in both sessions, the use of electric stretchers in sports events to assist injured sportsmen and women was identified and, together with bicycles and elevators, were the only mobility systems identified where second life batteries could be used. Additionally, from all the applications highlighted by scientific studies, only those related to renewable energy generation were identified, on several occasions, by the participants in both sessions. However, the session from UPC had a specific area dedicated to electricity generation and distribution that pointed out other scientifically common applications such as peak shaving, contracted power reduction, energy arbitrage, electric vehicle charge, and uninterrupted power supply. It is also interesting to see the differences regarding cleaning and acclimatization of spaces, being quite frequent in the second session (Jaume I) and almost unappreciated by UPC members. During the process of classification of ideas with green and red stickers, participants indicated that applications related to islanded house, rural areas, agriculture, industry, and lighting were counted with major acceptance. On the contrary, maritime and aerospace applications gathered most of the red stickers, discarding these areas for second life battery applications. A big difference between both sessions is found in the presentation of ideas. Figure 1.6 shows how the second session (Jaume I) used mainly drawings to represent the ideas while the third session (UPC) preferred to write them down in plain text. This seemed to have an effect on the quantity of ideas generated, as the second session generated more ideas per participant than the third session, as Table 1.2 shows. However, this production difference could be somehow attributed to the differences in the warm-up activities.

Fig. 1.6 Pictures of the ideas generated in the second (left) and third (right) sessions

14

L. Canals Casals et al.

Table 1.2 Generation of ideas from all the activities and tools Nº participants

Survey

Teamstorming

Brainstorming Jaume I

Brainstorming UPC

58

9

11

8

Nº ideas

1167

140

174

90

Ideas/person

20, 1

15, 5

15, 8

11, 25

Table 1.2 shows how the survey presents better results in terms of ideas generated per person. This is surely caused by the fact that the survey had six questions asking for five ideas each, giving some pressure to the participant. Something similar happened in the teamstorming session, as the director required a number of ideas in each round. On the contrary, brainstorming sessions did not have this kind of pressure, based solely on the motivation of the participants. Therefore, it seems that motivation in the second session was noticeable, having better results than the other in-person sessions.

1.5 Conclusions This study shows that, effectively, diffusion activities may contribute to generating valuable knowledge if appropriate tools are used. This study presents four proposals to generate ideas: an online survey, a teamstorming session, and two brainstorming sessions that contributed to increasing visibility and knowledge at the same time and several methods to spread them, confirming that social networks are very useful in disseminating results. Results from these activities show that individual transportation (electric boards or similar devices), lighting, entertainment, security and mobile phone, tablet and laptop recharge are some of the areas where more interest arises or are of concern to the participants. On the contrary, scientific research prefers large-size stationary applications offering electricity grid and generation services. Therefore, this study confirms that second life battery businesses or applications perceived by people not related to batteries or electric vehicles are seriously distanced from what specialized research works on. How can we explain this divergence in business perception? Is it caused by ignorance in the field? By the commercial interests of certain sectors that enhance the research on them first? To the difference of shortterm against long-term perspectives? Or is it only because some search for useful applications while others are more interested in making money? Acknowledgments Authors appreciate the contribution of the UPC and the ReViBE project (TEC2015-63899-C3-1-R) from MINECO in the research of second life battery businesses.

1 Strategies to Enhance Impact and Visibility of Research Projects

15

References 1. Ahmadi L et al (2014) Energy efficiency of Li-ion battery packs re-used in stationary power applications. Sustain Energy Technol Assessments 8:9–17. https://doi.org/10.1016/j.seta.2014. 06.006. (Elsevier Ltd) 2. Berg H, Taatila V, Volkmann C (2012) Fostering creativity—a holistic framework for teaching creativity. Develop Learn Organizations: Int J 26(6):5–8. https://doi.org/10.1108/147772812 11272242. (Taatila V (ed). Emerald Group Publishing Limited) 3. Brivio C, Mandelli S, Merlo M (2016) Battery energy storage system for primary control reserve and energy arbitrage. Sustain Energy, Grids Netw 6:152–165. https://doi.org/10.1016/ j.segan.2016.03.004. (Elsevier Ltd) 4. Canals Casals L, Amante García B (2016) Assessing electric vehicles battery second life remanufacture and management. J Green Eng 6(1):77–98. https://doi.org/10.13052/jge19044720.614 5. Canals Casals L, Amante García B (2017) Second-life batteries on a gas turbine power plant to provide area regulation services. Batteries 3(1):10. https://doi.org/10.3390/batteries3010010 6. Encuesta (2016). Available at: https://docs.google.com/forms/d/e/1FAIpQLSc7CRlkG4_Yk_f tp2kOiBV_7cl1iuJkKfjadtIhAwgzb-ArdA/viewform. Accessed 5 Apr 2017 7. Gitis A, Leuthold M, Sauer DU (2015) Chapter 4—applications and markets for grid-connected storage systems. In: Electrochemical energy storage for renewable sources and grid balancing, pp 33–52. https://doi.org/10.1016/b978-0-444-62616-5.00004-8 8. Hamidi A, Weber L, Nasiri A (2013) EV charging station integrating renewable energy and second—life battery. In: International conference on renewable energy research and applications. Madrid, pp 20–23. https://doi.org/10.1109/icrera.2013.6749937 9. Knowles M, Morris A (2013) Impact of second life electric vehicle batteries on the viability of renewable energy sources. Br J Appl Sci Technol 4(1):152–167. Available at: http://www. sciencedomain.org/uploads/Revised-manuscript_version1_5632.pdf 10. Lacey G, Putrus G, Salim A (2013) The use of second life electric vehicle batteries for grid support. IEEE EuroCon 2013:1255–1261. https://doi.org/10.1109/EUROCON.2013.6625141 11. Lewandowski M (2016) Designing the business models for circular economy-towards the conceptual framework. Sustainability 8(1):1–28. https://doi.org/10.3390/su8010043 12. Mumford MD, Medeiros KE, Partlow PJ (2012) Creative thinking: processes, strategies, and knowledge. J Creative Behav 46(1):30–47. https://doi.org/10.1002/jocb.003 13. Neubauer J, Pesaran A (2011) The ability of battery second use strategies to impact plugin electric vehicle prices and serve utility energy storage applications. J Power Sources 196(23):10351–10358. https://doi.org/10.1016/j.jpowsour.2011.06.053. (Elsevier BV) 14. Neve TO, Skanska AB (2012) Begin with the end to solve business problems and create a competitive advantage. In: Proceedings of the 4th European conference on on intellectual capital. Arcada University of Applied Sciences, Helsinki, pp 338–344 15. Osborn AF (1953) Applied imagination. Scribners, Oxford 16. Reinhardt R et al (2016) Critical evaluation of European union legislation on the second use of degraded traction batteries. In: 13th international conference on the European energy market, EEM. IEEE, Porto. https://doi.org/10.1109/eem.2016.7521207 17. Shokrzadeh S, Bibeau E (2012) Repurposing Batteries of Plug-In Electric Vehicles to Support Renewable Energy Penetration in the Electric Grid. https://doi.org/10.4271/2012-01-0348 18. Simons DJ, Chabris CF (1999) Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception 28(9):1059–1074. https://doi.org/10.1068/p281059 19. Vizioli R, Kaminski PC (2017) Problem definition as a stimulus to the creative process: analysis of a classroom exercice. J Technol Sci Edu 20. Wilson Learning W (2007) Social style profile. Available at: http://www.wilsonlearning.com/ wlw/products/brv. Accessed 29 Jan 2017 21. Wood E, Alexander M, Bradley TH (2011) Investigation of battery end-of-life conditions for plug-in hybrid electric vehicles. J Power Sources 196(11):5147–5154. https://doi.org/10.1016/ j.jpowsour.2011.02.025. (Elsevier BV)

Chapter 2

Comparative Analysis of the SCRUM and PMI Methodologies in Their Application to Construction Project Management M. A. López-González, L. Serrano-Gómez, V. Miguel-Eguía, J. I. Muñoz-Hernández, and M. Sánchez-Núñez Abstract The application of the SCRUM methodology to construction projects may provide improvement and significant benefits to their management without compromising the project’s rigor and control, thus, minimizing risks, lowering costs and reaction times for the required changes during the project evolution. In the first place, the more relevant aspects with regard to the state of the art of the mentioned methodology are highlighted in the present work. Also, a review of different applications for construction projects is performed. Then, a generic comparative study is performed of the SCRUM methodology with regard to the classic project management methodology established by the Project Management Institute, PMI. In order to M. A. López-González (B) Grupo INGENIERIA DE PROYECTOS, Dpto. de INGENIERIA, iONE Ingeniería & Peritaciones S.L. C/Dr. Fleming Nº 45, 02004 Albacete, Spain e-mail: [email protected] L. Serrano-Gómez (B) · J. I. Muñoz-Hernández Grupo INGENIERIA DE PROYECTOS, Dpto. de MECANICA APLICADA E INGENIERIA DE PROYECTOS, Escuela de Ingenieros Industriales de Albacete, Universidad de Castilla-La Mancha, Edificio Infante Don Juan Manuel, Avda. de España S/N, 02071 Albacete, Spain e-mail: [email protected] J. I. Muñoz-Hernández e-mail: [email protected] V. Miguel-Eguía Grupo Ciencia e Ingeniería de Materiales, Dpto. de MECANICA APLICADA E INGENIERIA DE PROYECTOS, Escuela de Ingenieros Industriales de Albacete, Instituto de Desarrollo Regional, Universidad de Castilla-La Mancha, Edificio Infante Don Juan Manuel, Avda. de España S/N, 02071 Albacete, Spain e-mail: [email protected] M. Sánchez-Núñez DIRECCION Y GERENCIA DE PROYECTOS DE INGENIERIA Y CONSTRUCCION, Dpto. Construcción Internacional ACCIONA, Ingeniero Industrial, PMP®, Scrum Fundamentals, Madrid, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_2

17

18

M. A. López-González et al.

verify the applicability of the SCRUM methodology to construction project management, it is applied to the construction of a wind farm. Advantages and disadvantages of SCRUM are identified compared to those of the traditional PMI methodology. Keywords SCRUM · Wind energy · Construction projects · PMI

2.1 Introduction and State of the Art Historically, project administration has been developed and has evolved into the concept accepted nowadays. From the moment, in 1920, when Henry Gantt introduced his famous scheduling graph, the Gantt diagram, to this day, different methods and systems have been developed for project planning, control, administration, monitoring, and execution. In the beginning of the ’80s, Nonaka and Takeuchi [10] identified and defined an agile development model, after analyzing how new products were developed in the most important technological manufacturing companies (Fuji-Xerox, Canon, Honda, Nec, Epson, Brother, 3M, and Hewlett Packard). In their study, they compared the new teamwork modality with the advance in melee formation (scrum) of Rugby players. This way, the term “SCRUM” was coined to name this teamwork modality. SCRUM, like other agile methodologies, is based in the four postulates of the “Agile Manifesto,” defined in 2001 by 18 advocates of these methods. In addition to the postulates of these four values in which it is based, the Agile Manifesto establishes 12 principles [1]. Although the SCRUM methodology emerged in technology products manufacturing companies, it is applicable not only to projects with unstable requirements, but also to all projects requiring speed and flexibility [8]. Tomanek et al. [11] developed a comparison between two work schemes, PRINCE2 and SCRUM. Shiohama et al. [9] analyzed the determination of sprint length or actuation sequences and proposed several methods for effort estimation and duration of these efforts. In a similar sense, Zahraoui and Abdou Janati Idrissi [12] proposed several factors to be considered in the calculation and estimates of efforts and the sprint length. López-Martínez et al. [6] have identified the disadvantages arising from the adoption of the SCRUM methodology in software development. They encompass work habit changes in four groups: people, processes, projects, and the organization. On the other hand, Gartzen et al. [4] address the uncertainty generated in project management and how this uncertainty diminishes with the application of SCRUM agile methods to product development outside the software context, especially in the development of prototypes for new technology products. Finally, Klein and Reinhart [5] present the results of their research work on the socalled agile engineering, on the transfer of agile procedures such as SCRUM that were exclusive of software engineering and which are applicable to the development of mechatronic systems and also to the development of machines for their construction.

2 Comparative Analysis of the SCRUM and PMI …

19

In the present work, a comparison is developed of the Classic Methodology in the Integrated Project Management, according to the Project Management Foundation Guidelines PMBOK® 5th edition [7], with regard to the “SCRUM” agile methodology approach, according to the Guide to the SCRUM body of Knowledge SBOKTM [8].

2.2 Comparison Between Agile and Classic Methodologies, According to the Project Management Institute (PMI) Agile methodologies were developed to provide swift responses to changes that could be required during project development. They allow obtaining deliverables in less time than with the classic methodology and minimize the occurrence of new risks in the project. Other than SCRUM, the main agile methodologies are XP or eXtreme Programming, Agile Unified Process or AUP, and ICONIX and Crystal Methods, among others. In Figueroa et al. [3], a comparison of the different agile methodologies mentioned above can be seen. It can be highlighted that with the XP and SCRUM methodologies, large and highly complex projects can be executed with a small development team. The SCRUM method is based on a group of roles, events, artifacts, and associated rules. Each role, which corresponds to a team member, has a specific and completely defined purpose. Rules established by SCRUM manage the relationship and interaction among roles, artifacts and events, and the SCRUM teams. SCRUM is based on a logical sequence of steps cyclically repeated, called Sprint, for each one of the User Stories of the Product Backlog, consisting of tasks, until the entire finished product is completed (SBOKTM Guide). The traditional method with its predictive approach begins with the scope definition and the project management planning in the first phases of the project life cycle. It implies an extra effort to predict the possible tasks that would arise in the medium and long terms. If the scope definition changes, the project requirements will change, along with the planning, thus affecting times, the budget, and the project quality. Regarding the project team, the construction world frequently requires the participation of different trades to execute the different tasks, depending on the specific project. It is then very complicated to have such a multidisciplinary team with the experience and training to fulfill all the needs of the different execution phases of the project, as SCRUM suggests. In spite of these big concept differences, both methodologies have common characteristics. They share three fundamental concepts: project objectives, times, and costs, as Table 2.1 indicates. This table is a review of the study made by Figueroa et al. [3]. When comparing the development phase structure of both methodologies, it can be seen that both the Classic methodology (PMBOK®) and SCRUM (SBOKTM) are

20 Table 2.1 Classic and SCRUM methodologies’ characteristics

M. A. López-González et al. Characteristic

Classic

SCRUM

Focus

In processes

In people

Process

Lineal

Iterative

Planning

Advance planning high

Advance planning low

Management

Centralized

Decentralized and self-organized

Based on

In norms and standards

In team heuristics

Resistance to change

High

Very low/null

Command and control

Leadership

Collaborative leadership

Rules imposition

Externally

Internally (by the team)

Process control

High, many policies/norms

Low, few principles

Client interaction

Through meetings

Is part of the development team

Artifacts

Many artifacts

A few artifacts

Roles

Many roles

A few roles

Team size

Big and distributed

Small and in the same place

Contractual relationship

Predefined and very rigid contract

Do not exist or it is quite flexible

divided into five large management process groups but there are differences in the identification of the subprocesses they include which can belong to different phases and may not have a direct correspondence among them, as shown in Table 2.2. In the agile methodologies, documentation is also prepared, but unlike in the traditional methodology it is less important, because product functionality is a priority. It is observed that several identified SCRUM processes correspond to different process groups in the classic methodology, in some instances up to three different groups at the same time. Also, some of these processes are encompassed in others. For example, Risk Planning and Quality Planning that are part of the User Stories Preparation and the Product Backlog. This is due to the agility which characterizes SCRUM that requires a constant review of the project evolution in order to identify obstacles that could arise and solve them immediately, thus minimizing unnecessary risks for the project. Classic methodology considers 23 processes distributed in five process groups, while SCRUM proposes 18 processes in the other five groups. This noticeable reduction of 22% in the number of SCRUM processes compared to the classic

2 Comparative Analysis of the SCRUM and PMI …

21

Table 2.2 Processes of the Classic and SCRUM methodologies (PMBOK and SBOK) Classic / SCRUM Initiation / Initiate

Classic

SCRUM

Project Charter.

Create Project Vision

Identify Stakeholders.

Identify SCRUM Master and Stakeholders Form SCRUM Team. Develop Epics. Create Prioritized Product Backlog. Conduct Release Planning.

Planning / Plan and Estimate

Project Management Plan.

Create User Stories

Requirements Identification

Approve, Estimate and Assign User Stories.

Scope Definition.

Estimate Tasks.

WBS (Work Breakdown Structure) Creation.

Create Sprint Backlog.

Activities Definition Legend: Initiate Plan and Estimate Implement Review and Retrospect Release.

Activities’ Sequence Definition. Estimation and Duration of Used Resources. Schedule Development. Budget. Quality Planning. Human Resources Planning. Communication Planning. Risk Management Planning. Procurement Planning.

Execution / Implement

Performance and Control / Review and Retrospect

Closing / Release

Execution Management.

Create Deliverables.

Quality Assurance Management.

Conduct Daily Standup.

Project Team Development.

Maintenance of the Prioritized Product Backlog

Work Performance and Control.

Convene SCRUM of SCRUMs.

Project Scope Control.

Demonstrate and Validate Sprint.

Risk Monitoring and Control.

Retrospect Sprint.

Project and Phases Closing.

Ship Deliverables. Retrospect Project.

methodology implies that resources that should be allocated for the completion of these processes can be assigned to other tasks, thus streamlining management, reducing time, and therefore costs, without losing information or diminishing project management capacity.

2.3 Case Study Hereafter, the SCRUM Agile Methodology is applied to a practical project management case for the construction of a 36 MW Wind Farm located in Castilla–La Mancha [2], composed of 18 wind turbines with 2 MW of nominal power. This practical case is not typical of the traditional application of SCRUM, which is always focused on the Technology and Software industry. Management of the proposed project for the case study will be executed with both methodologies, the classic one, according to the PMI’s PMBOK® guidelines

22

M. A. López-González et al.

and SCRUM, according to the SBOKTM guidelines, highlighting the advantages and disadvantages resulting from the application of both methodologies.

2.3.1 Time Management Schedule differences arise when the civil work of the transformer substation is divided into its different components. Due to Sprint planning in SCRUM, periods of different duration are required, compared to the classic methodology, depending on the estimated execution time. The project schedule changes, regrouping or dividing line items related to the classic management planning to adjust it to SCRUM Sprints, as shown in Table 2.3. Table 2.3 Execution line items’ comparison Classic

SCRUM

Line item

Start

End

Line item

Start

End

4.2.1.5 SET civil work 02/28/17 05/16/17 4.2.1.5

SET civil work

02/28/17 05/16/17

A01

Earth moving Concrete

A03

Steel

02/28/17 03/09/17 A01, 03/09/17 03/31/17 A02 and 03/31/17 04/12/17 A03

Earth moving, concrete and steel

02/28/17 04/12/17

A02

A04

Masonry Earthing network

A06

Sanitation network

A07

Illumination

03/31/17 04/07/17 A04, 04/07/17 04/14/17 A05, A06, A07 03/31/17 04/14/17 and A08 04/14/17 04/26/17

Masonry, Earthing network, Sanitation network, Illumination, Metallic Struct

03/31/17 05/08/17

A05

A08

Metallic structure

04/26/17 05/08/17

A09

Control building civil work

02/28/17 05/16/17 A09

Control building civil work

02/28/17 05/16/17

4.2.2.2 Wind turbine 05/08/17 11/03/17 4.2.2.2. A02 installation and A02 commissioning 1st. part

1–6th Wind 05/08/17 07/10/17 turbines Installation and Commissioning

4.2.2.2. A02 2nd. part

7–12th. Wind 05/08/17 07/10/17 turbines Installation and Commissioning

4.2.2.2. A02 3rd. part

13–18th. Wind 05/08/17 07/10/17 turbines installation and Commissioning

2 Comparative Analysis of the SCRUM and PMI …

23

Global times are reduced because in this case three Sprints defined in line item 4.2.2.2 A02 can be executed simultaneously, thus reducing the time defined according to the classic methodology by 33% in SCRUM. This can be observed in Fig. 2.1 and Table 2.4. This time reduction due to activity overlapping should not be mistaken with resources’ increase or planning strategy, because it is due to the multidisciplinary premise of the SCRUM teams. The schedule adjustment implies adjusting the Gantt diagram as well, without changing the critical path. This is due to the prevalence of the execution priority hierarchy already established, as shown in Fig. 2.2 and Table 2.5.

Fig. 2.1 Gantt diagrams comparison of items 4.2.2.2 A02

Table 2.4 Line items 4.2.2.2 A02 comparison Methodology

Task name

Classic

Start Wind turbine installation and commissioning

SCRUM

Duration (days)

Start

End

0

Mon 05/08/17

Mon 05/08/17

130

Mon 05/08/17

Fri 11/03/17

End

0

Mon 11/06//17

Mon 11/06//17

Start

0

Mon 05/08/17

Mon 05/08/17

1–6th. Wind turbine installation and commissioning

46

Mon 05/08/17

Mon 07/10/17

7–12th. Wind turbine installation and commissioning

46

Mon 05/08/17

Mon 07/10/17

13–18th. Wind turbine installation and commissioning

46

Mon 05/08/17

Mon 07/10/17

0

Mon 11/06//17

Mon 07/10/17

End

24

M. A. López-González et al.

Fig. 2.2 Gantt diagrams comparison for line item 4.2.1.5

Table 2.5 Execution line items 4.2.1.5 comparison Methodology Task name

Duration (days) Start

End

Classic

Start

0

Tue 02/28/17

Tue 02/28/17

Earth moving

7.5

Tue 02/28/17

Tue 02/28/17

Concrete

16

Thu 03/09/17

Fri 03/31/17

Steel

8

Fri 03/31/17

Wed 04/12/17

SCRUM

Masonry

5

Fri 03/31/17

Fri 04/07/17

Earthing network

5

Fri 04/07/17

Fri 04/14/17

Sanitation network

10

Fri 03/31/17

Fri 04/14/17 Wed 04/26/17

Illumination

8

Fri 04/14/17

Metallic structure

8

Wed 04/26/17 Mon 05/08/17

Control building civil work

55

Tue 02/28/17

Mon 05/15/17

End

0

Tue 05/16/17

Tue 05/16/17

Start

0

Tue 02/28/17

Tue 02/28/17

Earth moving, concrete, and 32 steel

Tue 02/28/17

Mon 07/10//17

Masonry, Earthing network, 26.5 sanitation network, illumination, and metallic structure

Fri 03/31/17

Mon 05/08/17

Control building civil work

55

Tue 02/28/17

Mon 05/15/17

End

0

Tue 05/16/17

Tue 05/16/17

Line item grouping represents the execution line item 4.2.1.5 of the schedule in Table 2.5. It matches Sprint 5. 6 and 7 of the Release Planning Program, as can be seen in Table 2.6.

2 Comparative Analysis of the SCRUM and PMI …

25

Table 2.6 Release Planning Schedule Release planning schedule Project: Works for the wind farm in Castilla–La Mancha (OPECM) Functionality: Wind Farm Sprint Deliverable

Start

4.2.1.5 SET Civil works

Release

02/28/17 05/16/17

5

A01, A02, and A03

Earth moving, concrete and steel

6

A04, A05, A06, A07, and Masonry, Earthing network, 03/31/17 05/08/17 A08 Sanitation network, Illumination, and metallic structure

7

A09

Control building civil work

02/28/17 04/12/17

02/28/17 05/16/17

Table 2.7 Release planning schedule Release planning schedule Project: Works for the wind farm in Castilla–La Mancha (OPECM) Functionality: Wind farm Sprint

Deliverable

Start

Release

Wind turbine installation and commissioning

05/08/17

07/10/17

12

A021

1st–6th. Wind turbine installation and commissioning

05/08/17

07/10/17

13

A022

7th–12th. Wind turbine 05/08/17 installation and commissioning

07/10/17

14

A023

13th–18th. Wind 05/08/17 turbine installation and commissioning

07/10/17

4.2.2.2. A02

For the second case, line items breakdown represents the 4.2.2.2. A02 execution line item is shown in Table 2.4 schedule. It matches Sprint 12, 13, and 14 of the Release Planning Schedule in Table 2.7.

2.3.2 Roles Assignment Regarding the acting roles in each methodology, Table 2.8 shows there is no direct correspondence between them. Roles in SCRUM are distributed among the four participant figures: the Client, the Product Owner, the SCRUM Master, and the SCRUM Team.

26 Table 2.8 Role identification from Classic to SCRUM

M. A. López-González et al. Role

Classic

SCRUM

“Big shot”

Manager

Client

General manager

General manager

Product owner

Change control

Project manager

Product owner

Project manager

Project manager

Product owner

Field engineer

Field engineer

SCRUM team

Technical office engineer

Technical office engineer

SCRUM team

Quality inspector

Quality manager

SCRUM team

Studies coordinator

Studies manager

SCRUM team

Procurement coordinator

Procurement manager

SCRUM team

H.R. coordinator

H.R. manager

P. Owner/SCRUM master

ORP coordinator

ORP manager

SCRUM team

Construction workers

Trade crew

SCRUM team

It becomes clear, after identifying and comparing roles among both methodologies, that the organizational charts are different. In SCRUM, the organizational chart will have two levels less, as Fig. 2.3 shows. This comes from the premise that SCRUM teams must be composed of multidisciplinary professionals, who are experts in a given field and master some others. The matrix organization with functional departments, which are transversal to the project in the classic model, should not be mistaken with the notion that these departments should be SCRUM Teams, because resources are merely given so they can function as SCRUM teams for the specific project.

Fig. 2.3 Organizational Charts

2 Comparative Analysis of the SCRUM and PMI …

27

2.3.3 Generated Documentation With regard to documentation, SCRUM encourages minimum document generation within its five management phases. In some cases, these documents are similar to those generated with the classic methodology, but they are not always produced in the processes of the same phase. Moreover, one document of the traditional methodology may correspond to a group of contents of several documents in SCRUM, as stated in Table 2.9. The difference is found in the initial premise regarding documentation reduction in which the SCRUM methodology is based: SCRUM does not assign too much importance to the “how is the product going to be” documents generation and makes more emphasis on the “how is” the functional product. SCRUM does not limit document generation; it just offers guidance and advice on the essential documents, in the interest of resource optimization and, therefore, agility. Classic methodology generated 49 documents versus 28 documents generated in SCRUM. It represents a 43% reduction. This significant reduction implies that resources that should be used to generate these documents can be allocated to other tasks, thus streamlining management, reducing time and, therefore, costs.

2.3.4 Summary Table 2.10 shows a summary of the comparison between methodologies when applied to the construction of a wind farm, beyond the characteristics that define each methodology. As can be seen, there are big differences in the application of one methodology or the other, but they have the same purpose: to obtain a functional product in terms of time, quality, and cost. Process and documentation reduction in SCRUM with regard to the classic methodology does not affect the information content. It has been proved that a document of the classic methodology can be formed by the information contained in up to 5 SCRUM documents. The same applies to processes. Such reduction of 22% of processes and 43% of generated documents in SCRUM entails that resources that should be used to the execution of these processes can be destined for other tasks, speeding management, reducing time and, therefore, costs.

2.4 Conclusions In this work, the advantages and disadvantages of the Agile Methodology SCRUM as a replacement for the Classic PMBOK methodology are discussed in an objective manner, for construction projects. Many aspects of the agile methodologies have been clarified as well as their possible use in a successful project. It is of crucial importance

28

M. A. López-González et al.

Table 2.9 Documents generated in the Classic and SCRUM methodologies (PMBOK and SBOK) Classic / SCRUM Initiation / Initiate

Classic

SCRUM

Project charter

Product Owner Identification

Stakeholders Identification

Project Vision Statement

Stakeholders Classification

SCRUM Master Identification

Stakeholders Register

Stakeholders Identification

Stakeholders Management Strategy

SCRUM Team Identification Epic Development Persons Creation Prioritized Product Backlog Finished Criterion Release Planning Program Sprint Duration.

Planning / Plan and Estimate

Integrated Change Control

User Stories

Configuration Management Plan Preliminary Scope Statement

Approved, Estimated and Committed User Stories

Requirements Documentation

Task List

Requirements Traceability Matrix

Estimated Task List

Project Management Plan

Sprint Backlog List

WBS Creation

BurnDown Chart

WBS Dictionary Identification and Sequencing Project Network Resources and Activities Estimation Project Schedule Phases Budget Weekly Budget Legend: Initiate Plan and Estimate Implement Review and Retrospect Release.

Budget-Time Graphs Quality Metrics Quality Baseline Quality Activities Matrix Quality Management Plan Human Resource Management Plan Professional Staff Organizational Cart Responsibilities Assignment Matrix Roles Description Personnel Recruitment Log Communication Management Plan Communication Matrix Terminology Index Risk Identification and Assessment Risk Response Plan Contract Management Plan Project Contracts Matrix

(continued)

2 Comparative Analysis of the SCRUM and PMI …

29

Table 2.9 (continued) Classic / SCRUM Execution / Implement

Classic

SCRUM

Work Performance Report

SCRUM board Sprint

Project Team Register

Obstacles Registry.

Project Coordination Meeting Minutes

Sprint Deliverables

Quality Audit Reports

Prioritized Product Backlog Update

Controversies Control Performance and Control / Review and Retrospect

Change Requirements

Team Coordination

Project Performance Report

Sprint Validation.

Risk Control Report

Deliverables Acceptance Report Sprint Retrospect Agreed Improvement Measures

Closing / Release

Phase/Project Acceptance Document

Deliverables Shipment

Project Final Performance Document

Project Retrospect

Project Acceptance Document Lessons Learned Generated Lessons Learned Information Project Documentation Report

Table 2.10 Summary of the comparison of Classic and SCRUM methodologies

Comparison

Classic

SCRUM

Number of phases

5 phases

5 phases

Number of processes

23 processes (28% more)

18 processes (22% less)

Number of documents

49 documents (75% more)

28 documents (43% less)

Information quantity

High

High

Organizational chart

Rigid, 5 levels

Flexible, 3 levels

Human resources More quantity

Less quantity, multidisciplinary

Schedule

The same

Less

Gantt diagram

The same

Less

Critical path

The same

The same

Monitoring

Periodical weekly

Periodical daily

Product delivery

Final

Periodical, sprint

Value for the client

At the end

At the end

that all the SCRUM Core Team members have a deep knowledge of the methodology and its principles. It has been shown that there is no direct acknowledgement of the roles among participants, management processes, and generated documentation from one method to the other.

30

M. A. López-González et al.

Hierarchy reduction and, consequently, simplification of the organizational charts favor agility, because any internal procedure in the organization needs a shorter route for its accomplishment and, therefore, less cost. The SCRUM methodology has some complications and limitations that advise against its use in substitution of the Classic one: lack of knowledge of the Methodology outside the software and technology product’s environment; SCRUM Team participants must have a multidisciplinary background and this situation is completely different in the construction context (where trades are frequent); the client must be completely involved in the project; deliverables at the end of each Sprint lack value for the client, aside from recognizing the appropriate project evolution; generated documentation decrease produces distrust and loss of project control which will disappear as experience in agile management is acquired. Some important advantages of SCRUM utilization in construction projects are process reduction (22% less), documentation reduction (43% less), execution time reduction due to activities overlapping without resource use increase (multidisciplinarity, up to 67% less), roles and resources reduction, and cost reduction. It is evident that disadvantages emerge when replacing the classic methodology by SCRUM in construction projects because the construction context is traditionally very stable. These disadvantages emerge not because the method cannot be applied to this type of project, but because previous experiences do not exist to support its success in this field. Once SCRUM’s principles are known and understood in detail, some established rules can be avoided using, of course, sensible criteria and it is possible to adopt a pragmatic SCRUM, personalized and better adapted to the project and the team’s specific circumstances. Specific rules can be defined considering individual needs, thus supporting the adaptation principle and continuous improvement. In this sense, it is these authors’ belief that it is necessary to evolve toward the agile methodology but not in a rigorous application, but rather in a classic–agile fusion, so the rigidity and hierarchy of PMBOK® are diminished for the most part, and the participative, collaborative, and agile philosophy of the SBOKTM is adopted. This new approach can be called “the Pseudo Agile” approach.

References 1. Beck K, Beedle M, Bennekum AV, Cockburn A, Cunningham W, Fowler M, Thomas D (2001) El Manifiesto Ágil. Extraído el 29-02-16 de http://www.agilemanifesto.org 2. Castillo VD (2014) Trabajo Fin de Master Nº Master/0014, Gestión y Dirección Integral en un Parque Eólico. de la Escuela de Ingenieros Industriales de Albacete, UCLM 3. Figueroa RG, Solís CJ, Cabrera AA (2008) Metodologías tradicionales versus Metodologías Ágiles. Escuela de Ciencias en Computación de la Universidad Técnica Particular de Loja (Ecuador). Extraído el 16-07-2016 de: https://adonisnet.files.wordpress.com/2008/06/articulometodologia-de-sw-formato.pdf

2 Comparative Analysis of the SCRUM and PMI …

31

4. Gartzen T, Brambring FY, Basse F (2016) Target-oriented prototyping in highly iterative product development. In: 3rd international conference on Ramp-up management (ICRM), pp 19–23. https://doi.org/10.1016/j.procir.2016.05.095 5. Klein TY, Reinhart G (2016) Towards agile engineering of mechatronic systems in machinery and plant construction. In: Science direct procedia CIRP 52, pp 68–73. https://doi.org/10.1016/ j.procir.2016.07.077 6. López-Martínez J, Juárez-Ramírez R, Huertas C, Jiménez S (2016) Problems in the adoption of Agile-SCRUM methodologies: a systematic literature review, pp 141–148. https://doi.org/ 10.1109/conisoft.2016.30 7. Project Management Institute (2013) Guía de los Fundamentos para la Dirección de Proyectos (Guía PMBOK®)-5th Edición 8. Satpathy T (2013) Conocimiento de SCRUM (Guía SBOKTM). Extraído el 29-02-16 de http:// www.SCRUMstudy.com/SBOK/SCRUMstudy-SBOK-Guide-2013-spanish.pdf 9. Shiohama R, Washizaki H, Kuboaki S, Sakamoto KY, Fukazawa Y (2015) Investigating the relationship between project constraints and appropriate iteration length in agile development through simulations. Int J Comput Appl Technol x(x):1–11. https://doi.org/10.1504/ijcat.2016. 10001351 10. Takeuchi H, Nonaka I (1986) El nuevo juego del desarrollo de producto nuevo. Harvard Business Review 11. Tomanek M, Cermak R, Smutny Z (2014) A conceptual framework for web development projects based on project management and agile development principles. In: 10th European Conference on Management Leadership and Governance (ECMLG). Zagreb, Republica de Croacia, pp 550–558 12. Zahraoui H, Abdou Janati Idrissi M (2015) Adjusting story points calculation in SCRUM effort and time estimation. ISBN:978-1-5090-0220-7

Chapter 3

Comparative Study of Project Management Approaches: Traditional Versus Holistic U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Abstract Several studies and publications support the idea that it is critical for companies to manage projects successfully, thus gaining a competitive edge. The effort made in the last decades to develop project management has led to a dramatic progress of this discipline. As a result, the theories, know-how, means and capabilities available today are superior by far to those available a few years ago. However, it has also been concluded that projects continue to struggle to achieve their objectives. Furthermore, some authors consider that the respective approach in itself is one of the main causes for this situation. This study is a real-world based comparison of two approaches, applied to the same project. In particular, the regular approach of the company, based on a traditional project management perspective, has been confronted with a holistic perspective. The results suggest that aspects which are generally considered to be adequate can be counterproductive. Keywords Project management · Traditional · Holistic · Systemic · Performance measurement

3.1 Introduction The importance of project management (PM) for companies has been widely discussed. Its contribution to aspects such as strategy or producing results has generated, over the last decades, an exponential development of this discipline, driven by academics, professional associations, and practitioners and organizations. As a result, there has been a breakthrough regarding knowledge, development of methodologies or means of support such as software. U. Apaolaza Perez de Eulate (B) · A. Lizarralde Aiastui Industrial Organisation, Mechanical and Industrial Production Department, Faculty of Engineering, Mondragon Unibertsitatea, Loramendi 4, 20500 Mondragon, Spain e-mail: [email protected] A. Lizarralde Aiastui e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_3

33

34

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Undoubtedly there has been a great leap forward, and the conditions available today to manage projects successfully are far superior to those available only a few years ago. However, there are a number of studies claiming that the results still leave significant room for improvement [7, 15, 21]. These studies highlight the difficulties still existing today in meeting projects’ objectives. And while it is true that complexity has undergone exponential growth in a short time, due to technological progress or globalization having significantly altered the business context, they do not appear to be the only causes that have led to such results. Despite the aforementioned development, the appropriateness of PM approaches has been constantly questioned [9, 10, 18]. The emergence of new management methodologies such as Critical Chain [4] or agile PM methods [2] is an example of this. On one hand, its mere existence calls into question the perspective of traditional PM, since both methods are based on their own philosophies. Thus, while Critical Chain is based on the Theory of Constraints [5], the different methods inside the agile philosophy share the same foundations, gathered in the Agile Manifesto [2]. These different interpretations of the problems related to PM, logically, suggest different solutions. Likewise, other aspects such as the definition and use of performance metrics have also been analysed by other authors, who consider these aspects to have a significant impact on PM [11, 19]. Traditionally, project-based organizations and manufacturing-based organizations have been treated in a very different way, despite obvious similarities. However, a number of authors have combined existing knowledge from both worlds aiming to achieve better results [6, 10]. This paper addresses the problem of PM in an organization in the manufacturing sector. The origin of the study lies in the difficulties of the organization to successfully manage its projects in accordance with the traditional perspective. The remaining part of this paper proceeds as follows: it first gives a brief overview of relevant literature, also describing the research background. Then it states the objectives and the research method. Chapter four describes the work done, leading to the results explained in chapter five. Finally, chapters six and seven respectively provide a discussion of the results and the main conclusions of the study.

3.2 Literature Review and Research Background PM actually can be considered to be a competitive advantage for companies [6, 10]. However, as a result of the constant debate on the PM methods and approaches, several authors have explicitly advocated the appropriateness of changing the perspective of PM. Thus, it has been proposed that the traditional PM approach is not suitable to manage current big, complex and fast projects, concluding that this approach can “be counterproductive” and “seriously undermine performance” [9]. The necessity for organisational alignment has also been highlighted, emphasizing the relevance of consistent integration of strategy, context, and performance measurement; with a particular focus on what is being measured [11]. Importantly, Smith

3 Comparative Study of Project Management …

35

and Smith [19] assert that in order to achieve high due-date performance, companies should work aligned with return on investment (ROI), and the key for that purpose is flow. However, they found that companies run into problems getting and using the required information. Thus, they conclude that the assumption that local efficiency leads to improved global results causes companies to lose the connection with flow, and consequently the opportunity to measure their performance aligned with ROI. These perspectives are consistent with the idea that in the current scenario it is essential to constantly adapt to changing needs [14]. Indeed, the traditional PM-based models seek organisational alignment and consistency through program and portfolio management. This view is based on a proper individual management of each single project, which at the same time is connected and aligned with the whole system. In other words, although projects are part of programs and portfolios, each project must be individually managed according to its objectives, beyond the dependence on portfolio and program management. The Earned Value Management (EVM) method is a clear exponent of this issue. It is the main method suggested by traditional approaches [3, 16], and its “contributions and cost-effectiveness are widely recognized, regardless of industry sector, motivation, country, or other variable” [20]. Developed as a supplement to the PMBOK, it is defined as “a management methodology for integrating scope, schedule, and resources; for objectively measuring project performance and progress; and for forecasting project outcome”. The underlying idea is that by a proper management of the project costs, overall costs will also be reduced, and that this will be beneficial to the company. Nevertheless, a number of authors have criticised the method due to its limitations [8, 12, 13, 21]. The practice standard for earned value management itself warns of the risks of “Not using EVM at the portfolio and program management levels” as one of the main pitfalls [17]. It explicitly clarifies that the concepts of resource consumption and cash flow are different, also highlighting that “…was developed to measure scope accomplishment and cost and schedule performance”. It is also stated that “In most cases for projects, the budget of the scope accomplished does not equate the value of business benefits achieved, nor economic value produced”. However, the authors of this standard stop short of suggesting how to do so, but merely state that EVM should be applied at the program and portfolio level. Thus it remains unclear how to properly use EVM in a project aligned with the overall objectives and needs. The practice standard for earned value management is explained in a single-project context, and it is hardly mentioned in the standard for portfolio management. The issue is to determine a simple way to coherently link the individual project management with program and portfolio management. It seems that many companies are not capable of finding a way to do this correctly by themselves, especially in the case of companies without the required PM knowledge, means and maturity. This assumption is consistent with other authors’ conclusions, who agree that companies often do not understand the holistic approach needed to perform in entrepreneurial contexts [1, 10].

36

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

3.2.1 Research Background The company analysed here operates worldwide and is a market leader in high precision machining and high value machined and welded solutions. It is specialised in providing customised solutions for their customers, such as complex structures, pressure vessels or vacuum chambers. Therefore, the evolution of the company has been based on the mastery of processes and on a strategy founded on the following aspects: • Orientation towards niche markets which are defined as being very demanding from a technical and organizational point of view. • Diversification in markets, both geographically and by industry. • Integration in the customer’s value chain, providing solutions beyond manufacturing. • On-time delivery orientation, a critical condition to gain customers’ trust. The company employs a staff of 65 highly qualified professionals who combine their long experience with wide-ranging know-how to make use of the latest process management tools. It also owns the required manufacturing technologies for that purpose: cutting, press-forming, welding, blasting, pickling and passivating, shotblasting, painting and machining. For some years, the company has been immersed in a transformation process towards a participatory and people-based culture. Once the vision and the strategy were defined, the company started to change the organizational system. This study is set in this transformation process towards a global management model. The managers expressed their concern for the real results of the company right from the initial contact. While those results were not negative, they were not as good as they should have been. Actually, despite having identified the critical aspects mentioned above, problems to meet project deadlines have remained. According to the managers, the existing capacity should be sufficient to complete the work on schedule. However, overtime and the need to outsource have been very often used to complete the projects on time or to minimize the delay. Consequently, overall profitability has been lower than expected. Based on the overall diagnosis provided by the organization, a detailed analysis of the procedures was carried out, and a diagnosis of needs was elaborated. The fundamental underlying idea in the diagnosis is the misalignment between objectives, metrics and actions. The work sessions with managers showed the generalized nature of this organizational misalignment, which involved all departments. Specifically, the difficulties of maintaining consistency between single PM, simultaneous management of multiple-projects, and project portfolio management were highlighted. Under these conditions it was decided to launch a transformation aimed at achieving better overall results. It would address the improvement of the whole PM process from a systemic perspective. Consequently, it would include single PM,

3 Comparative Study of Project Management …

37

multi-project management and project portfolio management. From the organizational point of view, the areas most affected by such a project are the Technical Department, Production and Procurement Departments, directly involved in both the individual and the multi-project management. The Sales Department, aiming at getting orders, is mainly involved in the project portfolio management (i.e. selection and prioritization of projects). This study focuses on the transformation related to PM, but excluding project portfolio management. As a result of the diagnosis, it was decided that the transformation would focus mainly on the Production Department, seeing its extraordinary difficulties in meeting project deadlines. The capacity constrained resource (CCR) of the system, also called Drum or bottleneck, are the machining facilities. Consequently, the selected method to address the problem was the Drum-Buffer-Rope (DBR) approach of Goldratt’s [5] Theory of Constraints (TOC). Importantly, DBR implies a break with the previous paradigm, which was based not only on a local view of resource-efficiency, but also on a managerial perspective, founded on the sum of the different parts composing the system, but ignoring their respective dependencies. DBR systemic approach on the other hand is based on the management of the organization’s bottleneck, aligned with its objectives. Therefore, the rest of the resources are not constraints and have some protective capacity. According to this view, the protective capacity of these resources should be used to timely provide the needs of the bottleneck, instead of producing more than actually needed or before the need date. This way, DBR was expected to improve the overall result of the company. The objective of the transformation is to improve the overall result of the organization through the subordination of the rest of the system to the needs of the bottleneck. Likewise, the new approach is expected to provide the Technical Department with timely and accurate information about the actual need dates for each project. This information should facilitate this department to work aligned with the organizational objectives.

3.3 Objectives and Research Methodology This study aims to demonstrate the existence of a real risk of making incorrect decisions, even when they are based on apparently correct decisions from a local perspective. In other words, it is intended to confirm that decisions taken in accordance with commonly accepted approaches, criteria, indicators, etc. for the individual management of projects can be counterproductive from a global perspective. Consequently, the following two hypotheses are considered as basis for the contrast: • Hypothesis 1: seemingly correct decisions taken from a local perspective (i.e. individual or project) can be counterproductive from a global perspective (i.e. company).

38

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Fig. 3.1 Research methodology

• Hypothesis 2: seemingly incorrect decisions taken from a local perspective can be correct from a global perspective. In order to test these hypotheses, this study focuses on a project framed within this context (hereafter called Project 4042–16). The project allows analysing and comparing the decisions based on the old paradigm against the approach proposed by the new paradigm. In addition, the observation of the global context allows verifying the impact, validity and significance of the findings in their real context (i.e. multiple projects managed simultaneously and sharing the resources of the company). Figure 3.1 summarizes the research methodology of this study, which is further detailed in Sect. 3.4. The contribution made by this study is twofold: on one hand, it stresses the limitations of traditional project management approaches in achieving consistent organisational alignment through a pilot project; on the other hand, it proposes a simple systemic approach for aligned project management in manufacturing environments.

3.4 Description of the Project 3.4.1 Current Approach: “How We Do It Today” The analysis carried out with the managers of the company starting from the initial diagnosis has shown that the PM approach coincides with the perspectives suggested by the traditional project management approach, as explained in Sect. 3.2. As a general rule, the company seeks to obtain maximum overall profit by maximizing profit in each individual project. Thus, projects are planned aiming to achieve both on-time delivery and maximum profit. On-time delivery of projects entails fulfilment of all commitments with a customer’s order. Thus, an order can consist of a single final delivery of the product or of several partial deliveries of parts. Likewise, each delivery may include (or not) an inspection. Thus, when it comes to planning, these commitments become milestones, i.e. significant events ruling the management of the project. The search for maximum profit in each project starting from an existing budget causes project plans to be oriented towards cost minimisation. According to the company’s accountancy system, the main costs in a budget are material (mainly raw material and components), labour costs and subcontracting costs. Therefore, the key guidelines to maximizing the result of a project are minimisation of the number human-resources hours and reduction of purchasing and subcontracting expenses.

3 Comparative Study of Project Management …

39

Resource cost minimisation is addressed through “resource optimisation”, i.e. minimizing the hours necessary to perform the jobs. As a result, similar jobs of the same project are often grouped into batches, seeking the maximum occupation of resources by minimizing the time spent on activities such as changes and adjustments. Similarly, the minimization of purchase expenses is addressed through negotiation for better prices. Consequently, the purchasing needs are grouped into purchase batches to get better prices. Finally, in certain situations the same task can be carried out by both own resources and subcontracting. The decision criterion is the comparison between the hourly cost of subcontracting and the internal equivalent cost.

3.4.2 Analysis of Deficiencies The company originally expected on-time delivery of projects through an efficient use of resources, as a consequence of the application of the criteria described in Sect. 3.4.1; as well as minimising purchase and subcontracting expenses. However, as explained in Sect. 3.2, the reality is that this approach has not led to the expected result. An analysis was performed in order to identify the causes for said poor result. The information provided by the organization and observation have revealed problems arising from the procedures of the company. These problems are mainly related to purchasing, supply and production policies, as detailed below. The search for “efficient” production through resource optimisation leads to grouping project deliveries into large production batches, including as many similar deliveries for the same project as possible in a single production batch. Furthermore, the same criterion may also be applied to different projects, leading to grouping parts of these projects into a single batch. This implies buying and manufacturing according to the need date of the most urgent delivery in the batch. Consequently, the rest of the material and parts composing this artificially created batch are bought and/or manufactured sooner than necessary. Batching generates activity peaks that require significant capacity reserves. In addition, other parts from the same project or other projects may need the same capacity at the same time. In such a case there is a real risk of delay, and the system is forced to react. Recovering the delays entails additional cost, such as overtime or subcontracting. It has also been noted that the parts manufactured in advance (i.e. the part of a batch not required yet) remain parked in progress on the shop floor until needed, often waiting for weeks or even months. Given the big volume and weight of many parts, the regular identification and manipulation of the parts actually required is significantly obstructed, slowing the flow of orders. Furthermore, the risk of errors, accidents, etc. is also increased. Another problem arising from batch-manufacturing is that it requires raw material and components also to be purchased in batches. This entails that each batch is bought according to the earliest need dates of the parts composing it. Obviously,

40

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

buying the material before the required dates also entails paying for them sooner. The high cost of material implies early and costly disbursements, which significantly affect the cash flow of the project and consequently also the company’s cash flow. Importantly, according to the organization’s approach this aspect is irrelevant to the project managers, whose decisions are guided by the abovementioned local objectives. In addition, the volume of grouped purchase orders may also entail a capacity problem for the supplier. The need to produce large batches in a short period of time can unnecessarily jeopardize the timely supply of orders, especially if several orders affect the same supplier. In such cases, the organization is forced to react hastily, with little room for manoeuvring and consequently poor negotiation conditions, triggering additional costs. Significantly, this problem may be caused by a single project, but the consequences can affect other projects and vice versa. Likewise, it has been verified that the “Make or Buy” decisions, based on unit cost criteria as mentioned in Sect. 3.4.1, often cause a negative economic impact to the company. The cost-based comparison for certain jobs has led to decisions to outsource work in situations where the existing capacity allowed to assume the work internally, theoretically avoiding additional costs. Surprisingly, inverse situations have also been identified. Thus, by focusing exclusively on unit cost criteria, internal capacity was assigned regardless of the actual available capacity. Delays were caused by attempting to carry out “internally profitable” work.

3.4.3 Proposed Approach and Expected Results Taking into account the shortcomings identified in the study, an alternative reference model has been designed. It suggests a different approach to planning and managing projects, aligned with the aim and the actual context of the organization. From this perspective, the project is a contribution to the objectives of the organization and must be developed in a way compatible with the rest of projects as well as with the limitations of the company. The immediate implication of this approach is that decisions must simultaneously take into account the needs and constraints of both, the local (project) and the global (organization) perspectives. The general criteria defined in the new model for single project management are set out below. Milestones, such as deliveries, inspections or tests, determine the shape of the project. This involves decomposition into subprojects oriented towards the fulfilment of these partial objectives. As a general rule subprojects will be managed individually, but aligned with the perspective and objectives of the overall project. This implies that the main criterion of production will be to produce the different parts as late as possible, according to the required need dates whereas taking into account the limitations of the production system. With a view to gaining flexibility for planning, this measure allows a greater margin to move project (or subproject) schedules forward or backward, depending on the actual workload.

3 Comparative Study of Project Management …

41

Consequently, the impact of erroneous production policies on other projects is also expected to be reduced or even eliminated. As a result of staggering the launch of production orders in accordance with real need dates of deliverables, a significant reduction of work in progress (WIP) is also forecasted. By launching orders timely, a double impact is foreseen: reduction of the number of projects/subprojects running simultaneously, and reduction of the amount of parts in process. This should also result in other indirect improvements, such as reduction of quality-related incidents or increased productivity. Obviously, exceptions to the general rule are expectable, such as those situations derived from near necessity dates of several deliverables, or long changeover times. In these cases, due to the lack of flexibility it will be necessary to perform groupings in manufacturing batches. However, the underlying logic must remain valid: maximum adaptation to the actual need dates respecting the limitations of the system. In addition, it is probable that the change from batch-based production to need-date-based production will entail an increase of resource needs in the initial phase. These not being CCRs, this change should not cause any capacity problems, but it is advisable to control this variance. On the other hand, purchases must be subordinated to production needs. The general criterion is the same: to get the required material as late as possible, according to the need dates. The expected results of this change are improved cash flow and the level of compliance of suppliers, the latter resulting from a more uniform flow of purchase orders. The exception to this criterion are those cases in which the cost savings related to the grouped purchases are negligible when compared to the benefits of getting material timely. However, it is important to note that a grouped purchase may imply multiple deliveries according to the real needs, instead of a single delivery. This option is especially interesting when it entails identical or similar prices to those of the single delivery grouped purchase, as it includes the positive side of each case. The main guidelines for subcontracting are the same as for purchases, but also considering other aspects: • The “Make or buy” decision must be based on a real need, and not artificially induced. If so, a cost-benefit analysis should be carried out from the company’s overall perspective, i.e. taking into account the organization’s available capacity and seeking the most appropriate use of it. • The confidential or strategic nature of certain jobs may make it advisable not to outsource jobs which, from an economic point of view, should be subcontracted. • Likewise, depending on the potential impact on the on-time delivery, subcontracting may or may not be convenient.

3.4.4 Application: Project 4042-16 The application of the new approach to the selected project started by rethinking the project strategy. Based on the mandatory milestones, the project was divided into

42

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Fig. 3.2 Comparison of the expected flow of costs after versus before change

four subprojects. These were launched taking into account both the deadlines and the available capacity of the company. As a result, a new plan was developed, resulting in a different project schedule and budget curve, if compared to the traditional approach. Figure 3.2 shows, on a monthly basis, the traditional budget and the new budget, from the perspective of cash flow. The execution and control of the project deadlines would be managed according to both the on-time delivery TOC approach and metrics provided by the system, enabling objective and aligned prioritisation. Project cost control (except labour hour costs) would be performed as previously, but subordinated to project/subproject on– time delivery. Internal labour hours would be measured for historical information purposes, but ignored for cost control purposes.

3.5 Results As explained above, there has been a change in the management of projects that implies a break with the previous approach. It is still in the transition phase, under the influence of previous decisions according to the old criteria, so it is too early to discuss definitive results and conclusions. However, already within the first months after the implementation of the new approach, relevant findings and partial results have been achieved and seem to confirm the initial expectations. The main results so far, both in the project and from the company’s overall perspective, are described and analysed below.

3.5.1 Project 4042-16 The first aspect to be analysed is on-time delivery of the project deliverables, one of the main objectives of the new model. Subproject 1 has not been completed on date, mainly due both to supply problems derived from the old way of purchasing; and

3 Comparative Study of Project Management …

43

to capacity adjustments required to work in accordance with the new model. As a result, the product test has been delayed, although this has had no consequences for the customer. The test for subproject 2 has also been delayed, exactly for the same reasons as subproject 1. However, it is noted that the delay decreased when compared to subproject 1. Furthermore, this delay recovery trend has continued in subprojects 3, 4 and 5. Thus, these subprojects are expected to meet their deadlines, since work is now flowing correctly and orders are already arriving to the bottleneck on time, as explained in Sect. 3.2.1. In addition, an increase of the hours dedicated in the first phases was expected as a consequence of the new planning model. Indeed, the hours in the cutting and forming operations increased by 25% and 15% respectively. Despite this fact, these sections have delivered all orders on time without any delays or capacity problems. Finally, no relevant deviations have been reported on the rest of operations. Another aspect of interest is the evolution undergone by purchases and subcontracting, from a triple perspective: cost of purchases, size of the purchase-batches and evolution of the budget. Though, no significant purchasing cost differences have been reported. The only relevant difference is the inclusion of a clause by suppliers to cope with possible variations in the cost of raw material. In cases where there is a significant difference in the price from the moment of receiving the order to the time it had to be purchased in accordance with the need date, that clause would be applied. However, as deviations in the cost of raw material are subject to market fluctuation, this aspect is not considered relevant for this study. The direct consequence of buying as late as possible in accordance with the real need date is a considerable reduction in the size of the purchase batches. The exceptions found to this policy are two: low-cost material whose acquisition “as needed” would overcomplicate their management, and purchases in which transport cost constitutes a significant factor. In both cases material needs are grouped in batches for purchasing. On the other hand, the production is managed according to real needs, that is, purchased material not required for current orders remain in the warehouse until they are really required. In brief, the budget of the project is evolving in a very similar way to that foreseen in Fig. 3.2. No significant differences from what was expected have been found.

3.5.2 General Results From the general perspective of the organization, the result observed to date is summarized as an unprecedented production flow. This is expressed mainly in terms of reduction of WIP levels, as shown by the following data. They summarize the main results achieved so far, which still keep on improving: • Lead time reduction: 10%. • Reduction in the number of open purchase orders: 20%. • Reduction in the number of open manufacturing orders: 25%.

44

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

• Reduction of the volume of semi-finished material in progress: 40%. • WIP reduction (Number of projects running simultaneously): 20%. • Reduction of quality-related incidents: 20%. Other aspects also seem to be improving, although the values are still modest. Thus, it is too early to obtain significant conclusions. Firstly, once the adjustment period has passed, the bottleneck deadlines are being met in general (as in project 4042-16). This is a critical intermediate deadline for on-time project completion. Thus, this trend suggests that the same result will also be reached for project deadlines in a short period of time. Additionally, it has been observed that the instantaneous capacity actually available has increased occasionally. With the new planning system, by not launching the work unnecessarily in advance, availability time windows have arisen for nonbottleneck resources. This has not only allowed reducing overtime, but to take on jobs that previously were outsourced. This way, more tasks have been carried out by the staff, on time and generating real savings for the organization. The performance of suppliers also seems to be improving in terms of level of service. The number of supplier-related problems has been considerably reduced, as have quality-related incidents. The global cash flow has also been improved, but due to the long lead times of the projects significant improvements have not yet been achieved. Thus, this trend it is expected to continue growing in the following months, thereby providing significant results. There are other improved qualitative aspects relevant for this study, as they contribute to better control of both individual projects and the entire system. As for the planning side, projects now are launched with discretion and reliability. Thanks to the new approach the system knows how a project fits into the production system right from the beginning. This way, it is also possible to identify the point of no return in planning, that is, the latest time a project can be launched to be completed on time. This information is essential to managing uncertainty and making decisions consistent with priorities and capacity; as well as to providing relevant information to the Design Department. Regarding multi-project execution, the portfolio of projects is managed from a global perspective, with greater manoeuvrability. This means an agile and aligned management that allows to adjust (i.e., to advance or to delay) the different operations, at any given moment and according to real needs. The short-term execution also has been improved. Those in charge of the sections have specific production plans consistent with their capacity, subordinated to the bottleneck and therefore aligned with the general objectives. This way, coordinators know what must be done at every moment, as well as what they should ask from other sections.

3 Comparative Study of Project Management …

45

3.6 Discussion The results to date show a clear evolution of the system towards a better use of its capacity. Still in the adaptation and adjustment period of the system to the new management approach, the results and data indicate that the change is moving in the foreseen direction. Thus, on-time completion rate of the successive deliverables of project 4042-16 is progressively improving. Furthermore, this trend reflects in the rest of orders. Therefore, the aimed-for situation has already been almost reached, as the material is arriving timely at the bottleneck. This way, the entire organization is operating at full capacity and in accordance with the general objectives, thereby having achieved an outstanding flow. This fact confirms the validity of the approach based on the proper use of the organization’s bottleneck. The corresponding subordination of the rest of the system has led to staggered launching of the subprojects, in accordance with the available capacity and with the deadlines, i.e. consistently aligned with the global priorities. As a result, projects in progress are now perfectly monitored and controlled both individually and from a general perspective. However, this approach is in conflict with the old approach, based on the apparent benefits of working in batches. Said model seeks to increase productivity and reduce costs in every individual project as way to achieve better global profit rates. These seemingly local improvements have proven to be potentially counterproductive because of their possible negative overall impact. The change from batches to timely real needs has not generally meant significant cost increases. In addition, whenever bundling seems convenient, it doesn’t necessarily entail batch-based production. Thus, it is proven that by searching local productivity improvements, workload peaks are artificially induced, leading to unnecessary in-advance purchasing and subcontracting. In other words, the lack of aligned and coherent vision leads to misaligned local decisions, causing a negative overall impact. One of the most conflicting aspects in this regard is accounting. According to the traditional PM perspective, the project budget should include both external and internal costs. Internal costs are imputed proportionally to the resource application to the project. For instance, having used more cutting and forming resource hours in the project 4042-16 project, consistently with this point of view, its cost would be higher compared to the traditional approach. Therefore, the project margin and the profit of the company would be lower. The reality is that neither the income nor the expenditures of the organization have been altered for this reason. The increase of working hours caused by the new approach is limited to non-bottleneck resources, and has been covered by the existing resources without the need to subcontract or pay overtime. This result is already considered to be a general trend, as can be observed in general and not only in project 4042-16. This way, the fallacy of internal resource dedication-based cost estimation for projects becomes evident. In that approach all the hours imputed to the project are equally considered, regardless of whether they are critical or not, or whether they are carried out by the bottleneck or by other resources. The result is that individual

46

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

project management through traditional methods may lead to incorrect decisions results from a global perspective, causing a negative impact to the company. This confirms hypothesis 1. The impact of the new approach on the cash flow of project 4042-16 also confirms the validity of the new approach. The result so far is similar to the one predicted (see Fig. 3.2), and the expected final result also points in this direction. Taking into account that the cost of procurement and subcontracting of the project is estimated to be around 70% of the total cost of the project, the conclusion is that expenditures have been significantly delayed whereas the difference in cost is negligible. Projecting these results onto other projects suggests a very positive financial impact in the short term. Indeed, it is evident that if the WIP of the company lowers and individual cash flow of projects improves, the overall cash flow of the organization will be noticeably improved. This idea summarizes the main conclusion drawn from this study: in order to achieve good overall results: the management of projects and resources should not be exclusively based on local approaches and decisions. The overall capacity of the system (and therefore its entire capacity to carry out projects and generate incomes) is dependent on the proper use of resources. In this sense it is essential to distinguish between CCRs and others. As bottlenecks limit the global capacity of the company, their use determines its outcomes. The underlying idea is that, as long as no additional capacity is required, it is false that an increase in hours used by non-CCRs will increase the cost of the project. Therefore, the policies for resource usage must be based on the bottlenecks. Those in turn must be oriented towards maximum overall results, in contrast with local measures such as resource productivity or minimisation of resource usage. This confirms hypothesis 2.

3.6.1 Limitations of the Study and Future Research As discussed above, the main limitation of the present study is the limited period available for observation. Thus, the results and conclusions cannot be deemed definitive. However, despite the recent implementation, the first results have been achieved quickly and in the foreseen direction. This seems to confirm the appropriateness of the initial approach. In any case, the next step would be a broader analysis, including all projects carried out over a longer time span. On the other hand, the conflict observed between individual project management and simultaneous project management also occurs at the organizational level. Clearly, the various departments have misaligned approaches. Their local objectives are apparently aligned with the general objectives, but they are inconsistent with each other. An analysis of the whole organization from an aligned global perspective could result in the identification of keys for improved overall performance.

3 Comparative Study of Project Management …

47

3.7 Final Conclusion The way of measuring project performance as well as the expected results of projects strongly influences the project management approach. Hence, if organizational misalignment occurs, adequate management from the local perspective can lead to negative or inferior overall results when compared to those that can be achieved with the actual capacity. Organisational misalignment leads to poor performance, taking place at different levels simultaneously. Thus, the lack of consistency between the objectives of the different levels of decision-taking can lead to achieving certain objectives while making it impossible to achieve others. Likewise, the lack of consistency between metrics can lead to decisions seemingly improving the outcome of a project, but actually being harmful for the company. The effect of misalignment is not limited to the sum of the individual causes. Their simultaneous existence feeds further factors, increasing the total real impact. A system managed from multiple local perspectives will hardly be able to identify the impact of its own decisions on the overall outcome. In order to get positive results from a general perspective, decisions should not solely be taken from local perspectives. On the contrary, it is imperative that the system itself is actually able to function in an aligned way, so that local decisions can be subordinated to the global perspective.

References 1. Abrantes R, Figueiredo J (2015) Resource management process framework for dynamic NPD portfolios. Int J Project Manag 33:1274–1288. https://doi.org/10.1016/j.ijproman.2015.03.012 2. Beck K, Beedle M, Van Bennekum A, Cockburn A, Cunningham W, Fowler M, Grenning J, Highsmith J, Hunt A, Jeffries R (2001) The agile manifesto 3. Caupin G, Knoepfel H, Koch G, Pannenbäker K, Pérez-Polo F, Seabury C (2006) ICB-IPMA competence baseline, Version 3.0. International Project Management Association 4. Goldratt EM (1997) Critical chain. The North River Press, Massachusetts 5. Goldratt EM, Cox J (1984) The goal: excellence in manufacturing. North River Press 6. Gray V, Felan J, Umble E, Umble M (2000) A comparison of drum-buffer-rope (DBR) and critical chain (CC) buffering techniques. In: Proceedings of PMI research conference. Paris, France, p 257 7. Hastie S, Wojewoda S (2015) Standish group 2015 Chaos Report 8. Kim Y-W, Ballard G (2000) Is the earned-value method an enemy of work flow. In: Eighth annual conference of the International Group for Lean Construction (IGLC-8), pp 17–19 9. Koskela L, Howell G (2002) The underlying theory of project management is obsolote. In: Proceedings of the PMI research conference, pp 293–302 10. Maylor H, Turner N, Murray-Webster R (2015) It worked for manufacturing.! operations strategy in project-based operations. Int J Project Manag 33:103–115. https://doi.org/10.1016/ j.ijproman.2014.03.009 11. Melnyk SA, Bititci U, Platts K, Tobias J, Andersen B (2014) Is performance measurement and management fit for the future? Manag Account Res 25:173–186. https://doi.org/10.1016/j.mar. 2013.07.007 12. Montero G (2016) Diseño de indicadores para la gestión de proyectos. Thesis, Universidad de Valladolid

48

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

13. Narbaev T, De Marco A (2014) An earned schedule-based regression model to improve cost estimate at completion. Int J Project Manag 32:1007–1018 14. Petit Y (2012) Project portfolios in dynamic environments: organizing for uncertainty. Int J Project Manag 30:539–553 15. PMI (2016) Pulse of the profession 16. PMI (2013) Project management body of knowledge a guide to the project management body of knowledge. https://doi.org/10.1002/pmj.20125 17. PMI (2011) Practice standard for earned value management. Project Management Institute 18. Sauser BJ, Reilly RR, Shenhar AJ (2009) Why projects fail? How contingency theory can provide new insights–a comparative analysis of NASA’s Mars Climate Orbiter loss. Int J Project Manag 27:665–679 19. Smith D, Smith C (2014) Demand driven performance: using smart metrics. McGraw Hill Professional 20. Song L (2010) Earned value management: a global and cross-industry perspective on current EVM practice. Project Management Institute 21. White D, Fortune J (2002) Current practice in project managementan—an empirical study. Int J Project Manag 20:1–11

Chapter 4

Study of an Innovation Indicator Applied to Future Projects Monitoring Using the Earned Value Technique E. García-Escribano

Abstract Evidence shows that companies with a business model based on innovation have a higher growth in terms of sales and operating margins. The objective of this work is to study through a practical case the innovation indicator applied to the tools and techniques of monitoring and controlling processes of the different project management knowledge areas. Two real cases are analyzed to study the information that this indicator can provide in order to improve forecasts for future projects using the earned value technique, knowing past project innovation indicators with the same characteristics and where similar efforts and investments have been made in innovation of control tools. Keywords Innovation indicator · Earned value · Project management · Monitoring processes

4.1 Introduction Innovation is an important aspect of organizations to achieve competitive advantages. Managers use the management control systems of companies to ensure an effective and efficient use of the resources to achieve the goals set. Many authors, including Draft [7], Athey and Schmutzler [3] have conducted research works to show the link between innovation, improved competitiveness and economic growth. Innovation can be defined as the internal or external adoption of new systems, policies, programs, processes, products or services. Utterback and Abernathy [24] show how the rate of adoption of new products and the rate of integration of new processes are very different in the stages of a company’s business development. During the last months, a work on innovation measurement that can be applied to project management has been carried out, trying to model the innovation applied to the different monitoring and control processes applied in each of the knowledge E. García-Escribano (B) INSISOC Social Systems Engineering, Escuela de Ingenierias Industriales, Universidad de Valladolid, Paseo del Cauce, 59, Valladolid 47011, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_4

49

50

E. García-Escribano

areas [19]. Innovation indicators have been calculated on various real projects carried out in several industrial companies. A mathematical innovation indicator model has been applied taking into account the innovation activities that can be included in project management and the relationship between them, since they obviously will not be independent from each other. It has been a joint task together with the general managers and project managers of these companies to introduce innovation activities in the monitoring processes in order to carry out an analysis of the innovation influence on project monitoring. Measurement is the first step that leads to control and eventually to improvement. If something cannot be measured, it cannot be controlled or improved. The objective of this research work is to improve and help project control, improving forecasts and reducing uncertainties so that project evolution can be followed by studying possible deviations between actual costs and budgeted costs, and between deadlines and planned deadlines. The article is structured as follows: first, the objectives that want to be achieved are listed. Next, a brief description of the problem to be studied is given and a review is made of the state of the art. Then, the practical case study is described. Finally, the results are analyzed and the conclusions and contributions in this research work are specified.

4.2 Objectives The main objective of this work is to analyze through a real practical case the influence of innovation applied to the tools and techniques of project management monitoring and control processes. To do this, it will be necessary to measure the innovation applied to the controlling and monitoring tasks using an innovation indicator. The two most representative projects of all studied along these months will be presented. All the collected data will be shown in order to understand the main advantage of knowing the innovation indicator in key projects, which would improve their control tasks. The analysis of the earned value management at a given point of the two selected projects will be performed and the differences between them will be shown once the innovation indicators have been calculated and the projects have been finished. With the results obtained, it can be verified if the innovation indicator gives some extra information in each possible scenario to try to optimize project development or, on the contrary, extra resources and efforts need to be added in order to reorient its execution towards the main goal set.

4 Study of an Innovation Indicator Applied …

51

4.3 Problem Description and Literature Review Most authors evaluate innovation basing their studies on the economic results of companies. The authors Amirteimoori and Shafiei [1], Chen et al. [4], Kao [15], and Tone and Tsutsui [21] can be mentioned. There are others that have studied the relationship between innovation and business management at the process or product level such as Merchant and Otley [17], and Chenhall, Kallunki and Silvola [5]. Also, from the point of view of industrial production, there are studies like the ones that can be read in Li [16] that consider the influence of inputs on outputs including the influence of external agents to evaluate innovation. Although Hollanders and Celikel-Esser [14] argued that innovation is not a linear process where inputs automatically transfer into outputs, it has been taken into consideration for this study the innovation from a linear point of view where inputs really transfer automatically into outputs. This makes much more sense in a practical investigation from the perspective of empirical measurement. In Rossi and Emilia [23] an empirical study can be found where the development of an innovation production activity is seen as a process made up of sequential stages, characterized by unidirectional causal relationships. Innovation processes can be seen as chains of innovation [20] where the value of innovation flows and it is encapsulated in different events [12]. So, innovation can be considered as a sum of consecutive processes with a linear relationship between them. Project management can be seen as a series of interrelated processes, where each process is characterized by inputs, outputs and tools and techniques. All of these authors have achieved interesting results in the evaluation of innovation processes at a global company level, but there are no studies where an innovation analysis in relation with project monitoring can be seen. There are also many relevant studies on how to evaluate companies’ innovation activities. Furman et al. [8] conclude that to carry out the innovation evaluation an indicator that links the efficiency between the different activities and provides information of the generated value is needed. This indicator, which can be called innovation indicator, quantitatively shows the skills, knowledge, tools and financial resources applied in project monitoring tasks, so it can be a great help in taking decisions. It is very common to use software that implements the earned value technique to control projects in companies. This methodology is described in an easy way in Christensen [6], Anbari [2] and Henderson [13]. This method shows measures of cost and time efficiency, warns about deviations, and allows to make new budget estimations and set new project completion dates. However, the earned value methodology can be improved and complemented in order to obtain a comprehensive monitoring system that overcomes some of the limitations of this technique. In Garcia-Escribano [10], a study can be seen that improves the predictive capacity in the control of industrial projects, which can complement the most commonly control techniques used such as those described by García-Escribano [9] and Vanhoucke [25]. In this article, a practical case will be used

52

E. García-Escribano

to describe how the earned value technique can be improved and how the applied innovation on the project control processes can influence their development.

4.4 Practical Case Study The earned value methodology takes into account three fundamental variables: planned value (PV), which is the budgeted cost of the scheduled work; actual cost (AC), which is the money spent to perform the actually performed work; and earned value (EV), which is the budgeted cost of the performed work. It is necessary to see which work has been actually performed, in order to control the project. After reviewing the literature on the earned value technique, different hypotheses to make new forecasts can be considered: the first one is to suppose that problems that have led to extra costs or delays are perfectly identified and resolved, with the result that the rest of the project will be executed as it was planned. The second one is to assume that a mistake in the planning was made, and consequently the rate of execution of the project observed until the review date is reasonable. And the third one is that neither of the above assumptions is true, and a complete recalculation of the project costs and deadlines needs to be made. Now, it is necessary to add the results obtained from the innovation indicator to make new forecasts and complement these hypotheses. Innovation, as a global process, is very complex, within which many iterations and interventions take place so, to simplify, this work has been based on the hypothesis that it is possible to identify and evaluate the innovation associated to the monitoring activities analyzing the innovation of the different monitoring and control processes applied in each knowledge areas of project management. As has been said before, this work attempts to view the innovation from an empirical perspective. Thus, the innovation applied to the tools and techniques that link inputs and outputs has been evaluated. 11 processes in total have been tested, where innovation has been evaluated in 50 tools and techniques. In Vega-González [26] and Morel and Boly [18] models based on questionnaires and multi-criteria measurement algorithms can be seen that are used to measure technological innovation in companies through indicators that can be observed in a series of practices and an innovation value through an expert judgment can be assigned. Also, in the studies presented by Grant and Pennypacker [11] there are approaches based on the evaluation of the innovation process that consider different levels or practices to calculate an indicator of innovation. They subdivide the practices into directly observable and measurable sub-practices. The objective is also to obtain an overall innovation score. In the work presented here, the simple and focused way applied in these studies to estimate the value of innovation in the monitoring projects tasks is also used. For this, different degrees of innovation and weights are evaluated and assigned to each of the tools and techniques of project management control processes. These tools

4 Study of an Innovation Indicator Applied …

53

and techniques are considered to be the minimum evaluable element to which a value to the applied innovation can be given. To explain this broadly with a practical case, an applied innovation estimator (Sij ) can be considered over the tools (j) for each process (i) which is calculated according to Eq. 4.1. n Sij =



j=1 G j tools_and_techniques_(process_i) N ◦ _tools_and_techniques_(process_i)

 (4.1)

Each estimator (Sij) will have a value between 0 and 1 and for its calculation a degree of applied innovation to the tool (Gj ) must be assigned, also between 0 and 1, which quantifies the innovation that is being applied. The value 0 indicates total absence of innovation and the value 1 indicates the application of a total innovation to the used tools. The innovation degree (Gj ) of each tool is calculated together with general managers and project managers of the studied companies, the value of which is also the result of past experience and knowledge. After assigning the innovation degree (Gj ) and calculating the applied innovation estimator (Sij ) in each process (i), a weight (wj ) to each tool (j) needs to be assigned, depending on how important it is for the control of the ongoing project on which the innovation wants to be evaluated. The project innovation indicator (IIij ) can be calculated according to Eqs. 4.2, 4.3 and 4.4, multiplying the obtained innovation estimator (Sij ) in each process by the weight (wj ) assigned to each tool used by project managers. The total sum of the weights of the tools and techniques applied will be equal to the unit. This indicator will also be in the range [0, 1] and it will give an idea of the total innovation applied to the project monitoring tasks. I Iij =

n 

wj Sij

(4.2)

i=1 j=1 n 

wj = 1

(4.3)

j=1

0 ≤ I I (x)ij ≤ 1

(4.4)

All weights have been assigned to each tool in collaboration with project managers who have carried out the project management control tasks in each of the companies tested in this study. Summarizing, in the next tables, from 1 to 11, the innovation degrees (Gj ) can be seen that have been assigned in one of the real projects studied. This is an example of a project where the project manager has invested heavily in innovation of the tools and techniques of the control processes in each knowledge area. With this, the

54

E. García-Escribano

applied innovation estimator (Sij ) is calculated and a weight (wj ) to each tool of each process is assigned (Tables 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 4.10 and 4.11). It can be observed how the calculation method of the innovation indicator of each process (IIij [Px ]) can be developed in order to obtain the project innovation indicator (IIij ), which has a value of 0.405 in this case. This value gives an idea of the great efforts made by the company and the great interest shown by the project manager in applying the greatest possible innovation in the control processes. Table 4.1 Knowledge area, integration management—processes, controlling and monitoring project work Gj

Sij

P1. Controlling and monitoring project work

wj

0.20

IIIIij (P1 ) 0.008

HT1. Expert judgment

0.15

0.01

HT2. Analytical techniques

0.15

0.01

HT3. Information system for project management

0.20

0.01

HT4. Meetings

0.30

0.01

Table 4.2 Knowledge area, integration management—processes, perform integrated control change Gj

Sij

P2. Perform integrated control change

wj

0.33

HT5. Expert judgment

0.20

IIIIij (P2 ) 0.033

0.02

HT6. Meetings

0.30

0.03

HT7. Control change tools

0.50

0.05

Table 4.3 Knowledge area, scope management—processes, scope validation Gj

Sij

P3. Scope validation

wj

IIIIij (P3 )

0.30

0.015

HT8. Inspection

0.30

0.02

HT9. Decision group techniques

0.30

0.03

Table 4.4 Knowledge area, scope management—processes, scope control Gj P4. Scope control

Sij

wj

0.35

IIIIij (P4 ) 0.028

HT10. Control change system

0.40

0.02

HT11. Analysis of variation

0.30

0.02

HT12. Re-plan

0.40

0.02

HT13. Configuration management system

0.30

0.02

4 Study of an Innovation Indicator Applied …

55

Table 4.5 Knowledge area, time management—processes, schedule control Gj

Sij

P5. Schedule control

wj

0.58

IIIIij (P5 ) 0.088

HT14. Progress report

0.50

0.01

HT15. Schedule change control system

0.60

0.02

HT16. Performance measurement

0.60

0.02

HT17. Project management software

0.70

0.02

HT18. Analysis of variation

0.60

0.04

HT19. Schedule comparison bar chart

0.50

0.04

Table 4.6 Knowledge area, cost management—processes, costs control Gj P6. Costs control

Sij

wj

0.47

IIIIij (P6 ) 0.056

HT20. Cost control system

0.40

0.01

HT21. Performance measurement analysis

0.50

0.01

HT22. Projections

0.30

0.02

HT23. Project performance reviews

0.50

0.02

HT24. Project management software

0.80

0.03

HT25. Variation management

0.30

0.03

Table 4.7 Knowledge area, quality management—processes, perform quality control Gj

Sij

P7. Perform quality control

wj

IIIIij (P7 )

0.40

0.020

HT26. Inspection

0.20

0.01

HT27. Control charts

0.50

0.01

HT28. Pareto diagrams

0.40

0.01

HT29. Statistical sampling

0.30

0.01

HT30. Flowcharts

0.50

0.01

HT31. Trend analysis

0.50

0.02

Table 4.8 Knowledge area, communications management—processes, communications control Gj P8. Communications control

Sij

wj

0.40

IIIIij (P8 ) 0.020

HT32. Information management system

0.60

0.01

HT33. Expert judgment

0.30

0.02

HT34. Meetings

0.30

0.02

56

E. García-Escribano

Table 4.9 Knowledge area, risks management—processes, risks control Gj

Sij

P9. Risks control

wj

0.33

IIIIij (P9 ) 0.040

HT35. Risk reassessment

0.60

HT36. Risk Audits

0.60

0.02 0.02

HT37. Analysis of variation and trends

0.20

0.03

HT38. Measurement of technical performance

0.30

0.01

HT39. Backup analysis

0.10

0.02

HT40. Meetings of project status

0.20

0.02

Table 4.10 Knowledge area, acquisitions management—processes, acquisitions control Gj

Sij

P10. Acquisitions control

wj

0.43

IIIIij (P10 ) 0.056

HT41. Contract change control system

0.60

0.01

HT42. Acquisition performance reviews

0.70

0.02

HT43. Inspections and audits

0.50

0.01

HT44. Performance reports

0.30

0.02

HT45. Payment systems

0.30

0.01

HT46. Claims management

0.30

0.03

HT47. Registration management system

0.30

0.03

Table 4.11 Knowledge area, stakeholder management—processes, manage commitments with stakeholders Gj P11. Manage commitments with stakeholders

Sij

wj

0.33

IIIIij (P11 ) 0.033

HT48. Communication methods

0.40

0.02

HT49. Interpersonal skills

0.40

0.03

HT50. Management skills

0.30

0.05

4.5 Results and Validation Pajares and Lopez-Paredes [22] show how the earned value technique does not take into account the learning effect throughout the project life cycle; however, a range of reference innovation indicators can be calculated for each project type according to the assigned weights. This range of indicators can be interesting to manage future projects with similar characteristics, where the tools and the processes are repetitive, taking advantage of information obtained in past projects. Figure 4.1 illustrates the schedule performance aspect of the earned value management of the project that has been used as an example in the article to calculate the

4 Study of an Innovation Indicator Applied …

57

Fig. 4.1 Earned value management in project 1 at month 10

innovation indicator (IIij ). The analysis was carried out in the month 10 of the project. Up to that moment, it can be seen how the project was above the budget in relation to the amount of work done since the beginning of the project, but it progressed faster than planned. If the analysis is carried out once the execution of the project has been finished, it can be seen how the trend remains until the end, increasing project costs above the costs budgeted. This trend can be analysed in Fig. 4.2. This increase in expense could be due to the great interest of the project manager in investing in innovation in project monitoring tasks. It should be noted that at the end of the project, the earned value tends to the planned value, since the budgeted value of the work performed matches with the budget after completing all the tasks. In Fig. 4.3, the aspect of the earned value management studied in month 10 of the project can be seen in order to compare it with the previous project. Now, the project is above budget, in relation to the amount of work done from the beginning, but it progresses faster than planned as it happened in the previous project. The analysis can also be carried out once the project has been completed. It can be seen in Fig. 4.4, where the project is delayed in the last month with respect to the initial planning. The innovation indicator of project 1 (IIij ) is 0.405. This is a very high value, as it has already been mentioned, and it has the highest value obtained from all the projects studied, so the indicator calculated can be considered as valid. By contrast, the indicator of project 2 has a value of 0.056, which is one of the lowest of the study.

58

Fig. 4.2 Earned value management in project 1. execution finished

Fig. 4.3 Earned value management in project 2 at month 10

E. García-Escribano

4 Study of an Innovation Indicator Applied …

59

Fig. 4.4 Earned value management in project 2. execution finished

In this case the project manager and the general manager of the company didn’t show interest in applying innovation on the control tools.

4.6 Conclusions and Future Research 4.6.1 Empirical Results and Findings The weights and the innovation degree of each tool have been assigned in collaboration with project managers and general managers who have coordinated the management and control tasks of the projects studied. The importance of performing a correct allocation of weights lies in obtaining a reference range of innovation values for each type of project. This will allow to classify projects according to the different innovation indicators obtained, and as it has already been mentioned, this might be very useful for the realization of future projects of similar characteristics, where tools and processes are repetitive. Another important finding is that, in the case of the first project and according to the results obtained, it can be said that the earned value technique shows the innovation applied to the control tools, which could translate into a monitoring improvement and into a decision making optimization. The project has progressed faster than planned, despite it has exceeded the costs, which in part could be due to the high investment in innovation. However, in project 2, it can be seen how innovation has not influenced

60

E. García-Escribano

at all, so it will not be used as a pattern for other projects. Note that these are the relationships sought between the indicator and the project control management that could provide more flexibility or points of alerts in the control tasks. On a methodological point of view, this research is a contribution to an improvement project control framework and on a practical point of view, it helps project managers to complement the project management control techniques that they are using to achieve the goals set.

4.6.2 The Limitations of the Indicator The study does not take into account the inefficiency embedded into the internal operations. There is no precise control over the efficiency in the use and application of the tools and techniques of project management carried out by the project manager and all the parties involved. Another limitation of this empirical method, at least in the context of project control management, is the temporal dependence between inputs and outputs. The time lag between input innovation and output project control management improvement has not been considered. This is not a static model that assumes input–output correspondences are contemporaneous in the sense that the innovation indicators observed in a lagged time period are only the product of the innovation applied to the controlling and monitoring tasks observed during some period. This is a complex and difficult problem.

4.6.3 Concluding Remarks and Future Research A representation could be made with an innovation S curve that can be developed with the monitoring and control processes. A study of the S curve parameterization of the cumulative cost of a project can be found in García-Escribano [10]. According to this model, innovation is slow at first, but little by little and after dedicating more resources, it progressively accelerates to a level where, despite dedicating more resources, the performance remains constant and the innovation capacity tends to stabilize. As has been said before, a range of reference innovation indicators can be calculated that can help to manage future projects with similar characteristics, where the tools and the processes are repetitive, taking advantage of information obtained in past projects. The final utility of this indicator or rage of indicators consist in the control and monitoring tasks, since they could allow to generate an innovation frontier that, if achieved, it could affect the variability of project risks, project schedule and project costs. Among the many possible directions for future work, the possibility to model this limit line applying techniques such as earned value management results particularly

4 Study of an Innovation Indicator Applied …

61

intriguing, in order to see if an increase in efficiency and a control tasks improvement can be obtained. This analysis could also include the cost increase of the investment in innovation control tools and the study to see if the project can be affected positively or negatively.

References 1. Amirteimoori A, Shafiei M (2006) Measuring the efficiency of interdependent decision making sub-units in DEA. Appl Math Comput 173(2):847–855 2. Anbari FT (2003) Earned value project management method and extensions. Project Manag J 34(4):12–23 3. Athey S, Schmutzler A (1995) Product and process flexibility in an innovative environment. RAND J Econ 26:557–574. 4 4. Chen Y, Cook WD, Li N et al (2009) Additive efficiency decomposition in two-stage DEA. Eur J Oper Res 196(3):1170–1176 5. Chenhall R, Kallunki J, Silvola H (2011) Exploring the relationships between strategy, innovation and management control systems: the roles of social networking, organic innovative 6. Christensen D (1999) Using the earned value cost management report to evaluate the contractor’s estimate at completion. Acquisition Review Quarterly 7. Daft R (1982) Bureaucratic versus nonbureaucractic structure and the proceess of innovation and change. Res Soc Organisations 1:129–166 8. Furman JL, Porter ME, Stern S (2002) The determinants of national innovative capacity. Res Policy 31:899–933 9. García-Escribano E (2010) Specific project portfolio management techniques for monitoring and controlling enterprise project portfolios. Best Practices in Project Management. CEPMaW 2010. Insisoc Ed 10. García-Escribano E (2013) Estimación de la evolución de proyectos en el ámbito de la producción industrial mediante la parametrización de la curva S del coste acumulado. In: Book of proccedings of 7th international conference on industrial engineering and industrial management. XVII Congreso de Ingeniería de Organización. CIO 2013. Valladolid 11. Grant KP, Pennypacker JS (2006) Project management maturity: an assessment of project management capabilities among and between selected industries. IEEE Trans Eng Manag 53(1):59–68 12. Hansen MT, Birkinshaw J (2007) The innovation value chain. Harvard Bus Rev 85(6):121–130 13. Henderson K (2003) Earned schedule: a breakthrought extension to earned value theory? a retrospective analisys of real project data. The Measurable News 14. Hollanders H, Celikel-Esser F (2007) Measuring innovation efficiency. INNO Metrics 2007 report. European Commission, DG Enterprise, Brussels 15. Kao C (2009) Efficiency decomposition in network data envelopment analysis: a relational model. Eur J Oper Res 192(1):949–962 16. Li X (2009) China’s regional innovation capacity in transition: an empirical approach. Res Policy 38(2):338–357 17. Merchan K, Otley D (2007) A review of the literature on control and accountability. Handbook of management accounting research. Elsevier Press, Amsterdam 18. Morel L, Boly V (2008) Chapter 23 in management of technology innovation and value creation: selected papers from the 16th international conference on management of technology. from World Scientific Publishing Co. Pte. Ltd, pp 381–397 19. PMBOK (2013) A guide to the project management body of knowledge, 5th edn 20. Roper S, Dub J, Love JH (2008) Modelling the innovation value chain. Res Policy 37(6/7):961– 977

62

E. García-Escribano

21. Tone K, Tsutsui M (2009) Network DEA: a slacks-based measure approach. Eur J Oper Res 197(1):243–252 22. Pajares J, López-Paredes A (2007) Gestión integrada del coste y del plazo de proyectos. Más allá de la Metodología del Valor Ganado (EVM). In: International conference on industrial engineering and industrial management—CIO 23. Rossi F, Emilia UMR (2002) An introductory overview of innovation studies. MPRA Working Paper No. 9106 24. Utterback JM, Abernathy WJ (1975) A dynamic model of process and product innovation. Omega 3(6):639–656 25. Vanhoucke M (2009) Measuring time (improving project performance using earned value management). Springer, E.U.A. 26. Vega-González LR, Saniger-Blesa JM (2010) Valuation methodology for technology developed at academic R&D groups. J Appl Res Technol 8(1). (México)

Chapter 5

Rethinking Maturity Models: From Project Management to Project-Based Víctor Hermano

Abstract Project management maturity models are increasingly being considered important building blocks in modern organizations. The majority of these maturity models have their roots in the discipline of total quality management and claim that better processes lead to better outcomes. Thus, academics, consultants and project management institutions have played an important role in the development of maturity models and today’s offer exceeds the number of 30 different maturity models. However, there is no empirical study supporting the link between higher level of maturity and improved project and/or organizational performance. This paper presents an original maturity model that abandons the process perspective and focus on the development of capabilities and project-based learning. Specifically, we state that project-based maturity instead of project management maturity is the dimension directly linked with an improved organizational performance; hence, we propose a project-based maturity model. Keywords Dynamic capabilities · Organizational performance · Project-based learning

5.1 Introduction Modern organizations are increasingly implementing projects for developing a greater variety of activities. Thus, projects have been recognized as important elements for achieving both operational and strategic goals of organizations [1]. Given the important role projects play in organizational success, there has been a drive to improve the discipline of project management [2]. Among the different tools developed for improving the management of projects (e.g. project management bodies of knowledge and standards developed by project management institutions, certification systems for project management professionals, etc.), maturity V. Hermano (B) Faculty of Economics and Business, University of Valladolid, Avenida Valle Esgueva, 6, 47011 Valladolid, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_5

63

64

V. Hermano

models (MMs) have become an essential framework treated both by academics and professionals [2]. Based on the discipline of total quality management, MMs belief that better project management processes would lead to better project performance [2]. The majority of MMs asses the quality of project management processes implemented by the organization against the processes described in a particular project management standard [3, 4]. Moreover, MMs are designed to identify areas upon which improvements efforts should be directed [5]. However, despite of the great effort for developing and improving MMs, today’s offer exceeds 30 different project management maturity models (PMMMs) [5], both scholars and consultants have found disappointing results achieved in all types of large capital projects [6, 7]. In fact, while the claim that better processes lead to better performance is conceptually appealing, empirical research has yet failed to prove this assertion [2]. Thus, the assumption that maturity is good and more maturity is better has lost the status of mantra and so MMs’ philosophy has been assessed. Theoretically, scholars have found several limitations of MMs, e.g. most of them are based on software MMs which lack theoretical foundations [3]; they are based on a normative and control-oriented perspective assuming that elimination of uncertainty and control is feasible [2]. Researchers have also criticized MMs from a practical perspective, e.g. they are inflexible and do not account for the rapid pace of change in projects environment [3]; they focus on problems identification but do not provide clues for problem solving [3]; they focus on project management processes but ignore the broader organizational and contextual factors [2]. Given all these liabilities, scholars suggest that in order to be meaningful, MMs should broad their focus towards the organizational level and increase their flexibility [2, 8]. The aim of this paper is to develop a maturity model that instead of focusing on project management routines, takes project and organizational capabilities as the cornerstone. Following the research stream that calls for abandoning the so called lonely-project perspective [9], we draw on the dynamic capabilities approach for studying the issue of maturity from an organizational level. Thus, we are not developing another project management maturity model that enlarges today’s offer, but a project-based maturity model that might help to explain how organizations achieve their organizational goals through multiple projects implementation. The rest of the paper is structured as follows. Section two provides a literature review on PMMMs highlighting their liabilities and current research trends. Section three explains our project-based maturity model. Section four discusses the model and presents the main conclusions of the paper as well as the limitations and avenues for future research.

5 Rethinking Maturity Models …

65

5.2 Literature Review 5.2.1 General Overview of Maturity Models The majority of MMs have their roots in the Plan-Do-Check-Act cycle of Deming [10] that constitutes one of the cornerstones of the discipline of Total Quality Management [2]. Total Quality framework states that process control and increase in process maturity have two important consequences: (1) the reduction of variability within process performance and (2) the increase in process performance. The concept of process maturity as a way for improving performance has been assumed by several disciplines including project management. Specifically, the application of MMs to project management starts in the 1990s and has suffered a great expansion since then [11]. Both project management consultants and professional associations have been promulgated the utility of PMMMs and nowadays there are more than 30 different PMMMs that can be applied [5]. Even though there is some diversity in the frameworks and operational aspects those different PMMMs applied, all of them share some characteristics: • PMMMs’ underlying assumption is that better processes lead to better performance, hence, there is a positive relationship between higher levels of maturity and improve performance [4, 2]. Thus, PMMMs assume that maturity is good and more maturity is better [2]. • PMMMs identify the areas upon which improvement efforts should be directed by assessing how mature an organization is [3, 4, 2]. • PMMMs assess organizational practices against standard criteria (best practices) from professional bodies of knowledge [3, 4, 2]. • PMMMs promote process uniformity. There is the presumption that all projects should be managed in the same fashion, so the goal is to achieve that all project teams adhere to the same process in all circumstances [2]. • PMMMs assume that both the elimination of uncertainties and the achievement of total control are feasible. Thus, PMMMs conceive projects as linear events that proceed in conformance with a plan [2]. • Most PMMMs are organized around ladders of maturity. Specifically, most PMMMs assume that maturity develops in time and so it can be assessed through several linear stages [12].

5.2.2 Liabilities of Current Maturity Models Despite of the great effort dedicated to the development of PMMMs both operationally (more than 30 models have been developed and applied) and theoretically (since 1990s there has been a great amount of papers researching three main topics i.e. the construction of PMMMs; the comparison of project management maturity

66

V. Hermano

along different industries; and the analysis of benefits of project management maturity; [13]), the basic premise of PMMMs, namely that better project management processes lead to better project performance, has not been empirically supported [5]. Researches have not overlooked the fact that PMMMs’ basic premise remains a promise, and a great effort has been devoted for identifying PMMMs shortcomings. On the one hand, PMMMs have limitations from a theoretical perspective. Most of PMMMs are founded on a software maturity model, i.e. the capability maturity model for software developed by Carnegie Mellon University, which lack a theoretical basis [3]. Thus, although the logic of PMMMs is conceptually appealing, the actual means by which they are put in place are not as rigorous as they should be [14]. Furthermore, some scholars have questioned the own definition of maturity assumed by PMMMs. Most PMMMs assume that maturity is synonymous with standardization and so they evaluate how mature an organization is by focusing in what people are doing, which has been considered as too narrow by several researchers [12, 3]. Finally, scholars have highlighted that there should be a debate on the real benefits of PMMMs applications. Specifically, one can distinguish between the general benefits of the application of a PMMM (e.g. an strategic thinking of project management) and the benefits of being more mature (increases in projects performance?), which is not the same [15]. On the other hand, PMMMs have also limitations from an operational point of view. Based on a literature review, Jugdev et al. [3: 6] provide a list of six operational problems PMMMs have: • • • •

PMMMs are inflexible and should be applied equally to all types of projects. PMMMs geared toward identifying problems but not in how to solving them. PMMMs do not account for environmental shifts. The maturity ladders do not offer enough granularity to measure organizational progress over time. • PMMMs are impractical and overwhelming as methodologies. • PMMMs focuses on the “know-what” (project management processes) ignoring another project management aspects such as human resources or organizational issues. Given all this shortcomings, some scholars question if PMMMs provide meaningful and relevant guiding for improving project management [2] or if they can be considered as a source for organizational competitive advantage [3].

5.2.3 The New Wave of Maturity Models Given the great amount and the importance of shortcomings PMMMs actually suffer, there is logic in the lack of empirical support for the main PMMMs’ assumptions. In this sense, several scholars call for a new wave of research within the PMMMs literature that help to strength the incipient, or inexistent theoretical framework and to ease the correct application of the models [3, 16, 2].The first thing to do, as stated

5 Rethinking Maturity Models …

67

by Van Looy et al. [16] is to provide a clear definition of what process maturity is (in our case, what is project management maturity), and to establish the differences between maturity and capability (in our case between project management maturity and project management capability). By clearly defining these two concepts, PMMMs would increase their accuracy in what are they actually measuring and so empirical studies about project management maturity and their consequences would also be more conclusive. Secondly, Mullaly [2] state that the main problem of PMMMs is to prove that the inherent assumptions they are founded on are actually realistic. As presented in Sect. 5.2.1, the main assumption of PMMMs i.e. better processes lead to better performance, has not been empirically supported. Moreover, there are two other important assumptions embedded within PMMMs structure that are also far away from being theoretically, or empirically supported, that is the presumption of uniformity and the presumption of certainty and control [2]. Regarding the presumption of uniformity, PMMMs claim that rigorous codification of project management practices through project management standards will result in best practices, that correctly applied, will increase the chances of project success. Moreover, PMMMs presume that these best practices can be applied to every project, since all projects within an organization must be managed in the same fashion. Therefore, PMMMs presumption of uniformity establishes that the ultimate goal of a mature organization is a uniform and rigorous adherence the same project management processes by all projects, in all circumstances. As a way to overlap the rigidities of the presumption of uniformity, scholars suggest a broad interpretation of PMMMs based on the literature of contingency models [2]. Specifically, researchers propose to adopt a contingent view of projects (assuming that there are different types of projects that should be managed differently), a contingent view of project management processes, and a contingent view of both organizational and environmental factors [2]. Moreover, in order to make PMMMs really meaningful, scholars posit to expand even more their focus from individual project processes to broader organizational and contextual factors [4, 2, 8]. Regarding the presumption of certainty and control, as previously stated in Sect. 5.2.1, PMMMs asses project management practices against standards criteria. Thus, some of the shortcomings of project management standards are directly transfer to PMMMs. Most project management standards assume an engineering approach where problems are fully specifiable and can be fully solved through optimal solutions [17, 18]. In this sense, PMMMs assume that the elimination of uncertainties is feasible, and so the most important task of project managers is to control activities in line with a pre-established plan. Under the presumption of certainty and control, projects always proceed straightforward. However, today’s projects develop in a turbulent environment, and so none of them can be specified as a linear sequence of operations [18, 19]. In order to overlap the rigidities of the presumption of certainty and control scholars again claim for a revision of the literature of contingency models (specifically to apply a contingent view of processes and context).

68

V. Hermano

5.3 Project-Based Maturity Model As presented in Sect. 5.2.2, PMMMs suffer from serious shortcomings that narrow their theoretical foundation and hinder their correct real-life application. Recently, a new wave of research within the PMMMs topic has put in an appearance focused on a new philosophy for PMMMs that asks for clarifying terminology, introducing flexibility, and broadening the approach from isolated projects to organizational issues [2]. Drawing on the dynamic capabilities approach, a theory of the firm connecting static approaches such as Agency Theory or Transaction Costs Economics, to the dynamic Evolutionary Approaches [20], in this paper we respond to the call for a new wave of PMMMs by developing a new maturity model that instead of measuring project management maturity, focuses on measuring project-based maturity. The first thing to do before developing our project-based maturity model is to define maturity and capability. The Webster dictionary (:617) defines mature as being ripe of having reached the state of full natural or maximum development. In an organizational setting, maturity refers to a state where the organization is perfectly suited to achieve its objectives [12]. For the purpose of this paper, project management maturity means that the organization is perfectly conditioned to deal with its projects. Project-based maturity means that the organization is perfectly conditioned to deal with its projects, programs and portfolios. Moreover, mature project-based organizations create and shape organizational capabilities through project processes and learning, independently of their internal organizational structure (matrix, functional or adhocracy), the number of activities developed through projects and the purpose of the projects. We clearly distinguish the concept of maturity (both project management and project-based maturity) from the concept of capability. A capability is defined as the ability of firms to combine resources in such a way that enables superior performance [21]. Specifically we define project management capability as the organization ability to get into new markets and perform unique projects and also exploit current routines when environmental conditions become more predictable and stable and we define program and portfolio capability as the organizations ability to consolidate project learning and spreading it throughout the business unit and the entire firm [22]: 3451. Regarding our project-based maturity model, we build it around the concept of ladders of maturity (see Fig. 5.1), which is used in the majority of maturity models [12]. Our ladder of project-based maturity consists in four steps. The first level is the ad hoc problem solving level. At this level project managers and project team manage projects without any established process. Problems are solved in unique ad hoc ways and so, the organization has no project-management capability at all and neither project nor organizational learning occurs. The second level is project management capability level. At this level both project manager and the project team sense project environment in the search for uncertainties that could affect the project; determine how the opportunities and threats previously sensed would affect project content, establish decision-making protocols; and ensure that projects and project management processes apply the needed changes

5 Rethinking Maturity Models …

Project Management Capability

69

Portfolio Management Capability

Top Management Involvement

Ad-hoc Problem Solving Fig. 5.1 Project-based maturity model

identified [23]. Thus, in order to develop a project management capability, organizations should implement sensing-seizing-transforming routines at the project level (for more information regarding the sensing-seizing-transforming model of dynamic capability building please see [23], [24]). The next level is Portfolio management capability. At his level, the organization captures individual project learning and spreads it throughout the entire organization. Moreover, the portfolio management capability allows the organization to institutionalizing new routines and capabilities based on those project experiences [23]. Thus, in order to develop a portfolio management capability, organizations should implement sensing-seizing-transforming routines at the portfolio level Hermano 2014, [24]. Finally, the fourth level is Top Management Involvement. It is recognized that and involved top management team is decisive in the success of projects, programs and portfolios [22]. Specifically, Top managers participate in projects definition, have the capacity to allocate the appropriate funds and resource endowment, and are empowered to make difficult decisions during crisis situations [25, 22]. Thus, an involved top management team is consider as a project champion that motivates the project team to do even more than they thought possible [26].

5.4 Discussion Even though there are actually more than 30 different PMMMs [5], there are still two open questions: (1) do PMMMs deliver value to organizations? (2) Is maturity as currently codified a source for organizational competitive advantage? In order to answer these questions some scholars propose that MMs need to be more flexible in their structure, more adaptable in their application, and more responsive in their

70

V. Hermano

interpretation [2]. Moreover, in order to be meaningful, MMs need to expand their focus from a process view to a broader organizational and contextual approach [2]. This paper argues for the necessity to broaden the focus of MMs. Specifically, we develop a maturity model that instead of focusing on the project management maturity, it focuses on project-based maturity. Thus, our model do not pretend to measure how good an organization is in managing its projects (the better its project management processes the better its projects performance) but to measure the ability of an organization for achieving organizational performance through multiple project and portfolio implementation. Our model assumes that a Project-based organization is an organization where project and portfolio capabilities shape not just project management process but all internal and external competences of the organization (Hermano 2014: 40). Thus, the higher the ladder an organization achieves in our project-based maturity model, the most project-based this organization is. We agree with Van Loy et al. [16] and we assume that and organization applying project management is not necessarily project-based. As can be deduced from our maturity model, being project-based implies more than developing several projects, programs and portfolios. Being project-based entails to achieve top management involvement; to consider projects, programs and portfolios as strategic weapons for achieving the overall organizational goals; to be able to learn from projects implementation and to spreads that knowledge throughout the entire organization with the ultimate goal of building or modifying new organizational capabilities that help the organization to address environmental shifts. Our maturity model contributes to the maturity models literature in several ways. First, we provide a clear definition of project management maturity, and projectbased maturity. Moreover, we establish differences about maturity and capabilities by defining both project management and portfolio management capabilities and by including these two concepts as specific ladders in the maturity model. Scholars recognized that an important shortcoming of PMMMs is the confusion between maturity and capability. Thus, a claim is made for defining both concepts and establishing the existing relations between them. Secondly, our maturity model helps to develop a theoretical framework for the maturity models. By drawing on the dynamic capabilities approach, and specifically on Teece’s [24, 27] sensing-seizing-transforming model for dynamic capability building, we provide theoretical reasons for establishing how mature a project-based organization is and also we provide managers with the explanatory power of dynamic capabilities for linking project-based maturity and organizational performance. Our project-based maturity model directly contributes to the new stream of research advocating for a new wave of maturity models. This new literature stream calls for maturity models that shift their focus from isolated projects to organizational issues, maturity models that asses know-how instead of know-what and maturity models applying a contingent approach [3, 2]. Our maturity model assesses projectbased maturity instead of project management maturity. Moreover, the ladders of our maturity model do not contain the specific processes that must be deploy for being in one level or another, but the nature of the routines necessary to build project

5 Rethinking Maturity Models …

71

management and portfolio management capabilities. Finally, our project-based maturity model applies a contingent approach since it is founded on the dynamic capabilities framework. The dynamic capabilities framework establishes the processes developed by firms for creating and reconfiguring their capabilities and assets endowment in order to address environmental changes [28]. As a possible extension of our project-based maturity model, we could link eah of the four ladders of project-based maturity with the different definitions of projectbased organizations that can be found in the literature. Although the project-based organization has been recognized in the literature for over 25 years, there is no consensus about its specific definition or terminology (e.g. several terms can be found in the literature such as, project-based organizations, project-oriented companies, project-led organizations, or multi-project organizations), [29]. Each of our ladders of project-based maturity could be related with a term and a definition given in the literature. • Project-led or project-supported organizations are those where the main business processes adopted are routine, but project-based working makes a significant contribution to operations [30]. These organizations will be positioned in our second ladder of project-based maturity, project management capability. • Multi-project organizations are those that rely on many projects executed at the same time where resources are allocated among projects [31]. These organizations will be positioned in our third ladder of project-based maturity, portfolio management capability. • Project-oriented organizations of project-based organizations are those that define management by projects as its organizations strategy; manage their work through projects and programs; manage portfolios of different types of projects; use project program and portfolio management as specific business processes; and views themselves as being project-oriented or project-based [32]. These organizations will be positioned in our fourth ladder of project-based maturity, top management involvement.

5.5 Concluding Remarks Drawing on the dynamic capabilities approach this study provides a project-based maturity model. Based on four ladders of project-based maturity, our model asseses if an organization manages its projects chaotically; through a project management capability that senses environmental shifts and ensures that projects and project management processes apply the needed changes; through a project management capability and a portfolio management capability that capture individual project learning and spread it throughout the entire organization; and through a project management capability, a portfolio management capability and the involvement of the top management team. We state that the more project-based mature an organization is, the better the overall organizational performance.

72

V. Hermano

References 1. Irja H (2006) Project management effectiveness in project-oriented business organizations. Int J Project Manag 24(3):216–225. Available at: http://www.sciencedirect.com/science/article/ pii/S0263786305000955 2. Mullaly M (2014) If maturity is the answer, then exactly what was the question? null. Int J Managing Projects Bus 7(2):169–185. Available at: http://dx.doi.org/10.1108/IJMPB-09-20130047 3. Jugdev K, Thomas J (2002) Project management maturity models: the silver bullets of competitive advantage. Project Manag J 33(4):4–14 4. Lee LS, Anderson RM (2006) An exploratory investigation of the antecedents of the IT project management capability. e-Serv J 5(1):27–42 5. Grant KP, Pennypacker JS (2006) Project management maturity: An assessment of project management capabilities among and between selected industries. IEEE Trans Eng Manag 53(1):59–68 6. Group S (2009) No title. The CHAOS report 7. Lovallo D, Kahneman D (2003) Delusions of success. Harvard Bus Rev 81(7):56–63 8. Pasian B (2014) Extending the concept and modularization of project management maturity with adaptable, human and customer factorsnull. Int J Managing Projects Bus 7(2):186–214. Available at: http://dx.doi.org/10.1108/IJMPB-01-2014-0006 9. Engwall M (2003) No project is an island: linking projects to history and context. Res Policy 32(5):789–808. Available at: http://www.sciencedirect.com/science/article/pii/S00487333020 00884 10. Deming WE (1993) The new economics. Massachusetts institute of technology. Center for advanced engineering study, Cambridge, MA, p 240p 11. Brookes N et al (2014) The use of maturity models in improving project management performance: An empirical investigation. Int J Managing Projects Bus 7(2):231–246. Available at: http://dx.doi.org/10.1108/IJMPB-03-2013-0007 12. Andersen ES, Jessen SA (2003) Project maturity in organisations. In: Selected papers from the fifth Biennial conference of the international research network for organizing by projects. Held in Renesse, Seeland, The Netherlands, 21(6):457–461. Available at: http://www.sciencedirect. com/science/article/pii/S0263786302000881 13. Jan CA, Spang K (2014) Linking the benefits of project management maturity to project complexity: Insights from a multiple case study. Int J Manag Projects Business 7(2):285–301. https://doi.org/10.1108/IJMPB-08-2013-0040 14. McBride T, Henderson-Sellers B, Zowghi D (2004) Project management capability levels: an empirical study. In: Software engineering conference, 2004. 11th Asia-Pacific. IEEE, pp 56–63 15. Jan CA, Spang K (2014) Linking the benefits of project management maturity to project complexity: Insights from a multiple case study. Int J Managing Projects Bus 7(2):285–301. Available at: http://dx.doi.org/10.1108/IJMPB-08-2013-0040 16. Van Looy A, De Backer M, Poels G (2011) Defining business process maturity. A journey towards excellence. Total Qual Manag Bus Excellence 22(11):1119–1137 17. Dybå T, Dingsøyr T (2008) Empirical studies of agile software development: a systematic review. Inf Softw Technol 50(9):833–859 18. Hermano V, Martin-Cruz N (2019) Expanding the knowledge on project maangement standards: A look into the PMBOK with dynamic lenses. In: Ayuso JL, Yagüe JL, Capuz-Rizo S (eds) Project management and engineering research. Springer International Publishing, pp. 19–34. https://doi.org/10.1007/978-3-319-92273-7 19. Styhre A et al (2010) Garbage-can decision making and the accommodation of uncertainty in new drug development work. Creativity Innovation Manag 19(2):134–146. Available at: http:// dx.doi.org/10.1111/j.1467-8691.2010.00551.x 20. Douma S, Schreuder H (2008) Economic approaches to organizations. Pearson Education, NJ 21. Amit R, Schoemaker PJH (1993) Strategic assets and organizational rent. Strateg Manag J 14(1):33–46

5 Rethinking Maturity Models …

73

22. Hermano V, Martín-Cruz N (2016) The role of topmanagement involvement in firms performing projects: a dynamic capabilities approach. J Bus Res 69:3447–3458 23. Hermano V, Martín-Cruz N (forthcomming) The project-based firm: a theoretical framework for dynamic capabilities building. Sustainability 24. Teece DJ (2009) Dynamic capabilities and strategic management: organizing for innovation and growth. Oxford University Press, USA 25. Boonstra A (2013) How do top managers support strategic information system projects and why do they sometimes withhold this support? Int J Project Manag 31(4):498–512. Available at: http://www.sciencedirect.com/science/article/pii/S0263786312001275 26. Kissi J, Dainty A, Tuuli M (2013) Examining the role of transformational leadership of portfolio managers in project performance. Int J Project Manag 31(4):485–497. Available at: http://www. sciencedirect.com/science/article/pii/S026378631200110X 27. Teece DJ (2007) Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance. Strateg Manag J 28(13):1319–1350. Available at: http://dx.doi. org/10.1002/smj.640 28. Teece DJ, Pisano G, Shuen A (1997) Dynamic capabilities and strategic management. Strateg Manag J 18(7):509–533. Available at: http://dx.doi.org/10.1002/(SICI)1097-0266(199708)18: 7%3C509::AID-SMJ882%3E3.0.CO 29. Miterev M, Mancini M, Turner R (2017) Towards a design for the project-based organization. Int J Project Manag 35(3):479–491. Available at: http://www.sciencedirect.com/science/art icle/pii/S0263786316304847. Accessed 21 Apr 2017 30. Hobday M (2000) The project-based organisation: an ideal form for managing complex products and systems? Res Policy 29(7–8):871–893. Available at: http://www.sciencedirect.com/ science/article/pii/S0048733300001104 31. Canonico P, Söderlund, J (2010) Getting control of multi-project organizations: combining contingent control mechanisms. Int J Project Manag 28(8):796–806. Available at: http://www. sciencedirect.com/science/article/pii/S0263786310000876. Accessed 21 Apr 2017 32. Gareis RM, Huemann M (2000) Project management competences in the project-oriented organization, Published in: Turner JR, Simister SJ (eds) The gower handbook of project management, Gower, Aldershot, pp 709–721

Chapter 6

Classification of Software in the BIM Process According to the PMBoK Knowledge Areas and Levels of Development (LOD) J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues Abstract The purpose of this communication is to associate the different knowledge areas of the PMBOK and the different software in the market, and how can they be used to add value to the BIM project through its development according to the Levels Of Development (LOD) established by the document G202-2013 “Project Building Information Modeling Protocol Form” from the American Institute of Architects (AIA). For this objective, an analysis of available software and their role in the BIM process will be performed, in order to classify them firstly by their use and function in the different areas of knowledge of the PMBOK, and secondly, by the Level Of Development (LOD) from which they can begin to be used. Given that one software can be used in more than one area of knowledge or LOD, a classification according to the suitability for the use will be performed, establishing a matrix relationship that places the different existing software in the market according to the area of knowledge and Level Of Development to which they are applicable. Keywords LOD · BIM · PMBOK · Knowledge areas · Software

J. María Rodrigo-Ortega (B) Master Dirección y Gestión de Proyectos, Escuela Técnica Superior de Ingenieros Industriales, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] Cofounder and BIM Manager in BIMTEC Facility Management, S.L. C/Leonardo Da Vinci 10 4º, 46980 Paterna, Spain J. Luis Fuentes-Bargues Grupo de Investigación de Diseño y Dirección de Proyectos, Dpto. de Proyectos de Ingeniería, Escuela Técnica Superior de Ingenieros Industriale, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_6

75

76

J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

6.1 Introduction The BIM methodology is currently in the process of implementation at a global level. Many countries have already established their roadmap for State implementation, and how to enforce it for public tenders. In the European Union, in particular, the directive 2014/24/EU establishes the need for electronic systems (media and tools to model the data of the building) in processes of contracting of works, services, and supplies starting from September 2018. The implementation of this BIM methodology implies, on the one hand, a change of mentality throughout the design process of the building, moving toward a point of view that takes into account the life cycle of the building during its design process, what has attracted numerous professionals and researchers [1]. On the other hand, it also implies learning new software tools adapted to this methodology. One of the approaches having more international acceptance is that of project development through different levels of development or Level Of Development (LOD) which raises the standard G202TM—2013 of the American Institute of Architects (AIA), which establishes five LODs: LOD 100, LOD 200, LOD 300, LOD 400, and LOD 500 [2]. These levels range from lower to higher development of the project in time, and are used with different objectives, from the conceptual design to the building maintenance. The new BIM methodology has led to the flourishing of a multitude of new software products compatible with this technology, with different aims such as building design, structure design, and scheduling work. In addition, these computer programs may apply only to a level of development of the project, or to several, so to the difficulty resulting from the change of perspective of the BIM methodology is added the effort of learning new tools and knowing how to put them in context throughout the project.

6.2 Objectives and Methodology Accordingly, this communication will analyze the role of the most popular BIMrelated software tools of the market, classifying them according to the level of development LOD in which they are applicable, and the area of knowledge for which they are being used, starting from the LOD 100 to the LOD 500, for each of the areas of knowledge of Integration, Scope, Time, Cost, Quality, Human Resources, Communication, Risk, Procurement, and Stakeholders [3]. The list of software tools that will be used as reference appears as a reference for BIM tools on the website of the BIM Forum (the U.S. Chapter of Building Smart, www.bimforum.org). To these software tools, others can be added that the authors of this communication consider relevant by their judgment and experience. Similarly, those software tools which are not placed on the market or are being significantly used in Spain will not be included.

6 Classification of Software in the BIM Process …

77

Lastly, given that there are different software companies that have different products for different purposes, those software programs will be incorporated that belong to the same manufacturer, in such a way that there are products to choose from along the BIM implementation if a particular software manufacturer is preferred. The methodology followed in this paper is based on linking, for each LOD, the PMBOK knowledge areas with the most used software of the BIM environment that allows the direction and management of a project.

6.3 Results 6.3.1 LOD 100 This is the first level of development of the project, and corresponds with the first studies of economic viability and the first volumetric sketches.

6.3.1.1

Integration

The first step is to find the appropriate field and conduct a market study. To this end, there are City Information Modeling (CIM) tools, such as Autodesk Urban Canvas (recently acquired from UrbanSim, www.urbansim.com), which are able to enrich GIS construction plans and CAD land registry, with other land regulation information such as urban development regulations, infrastructure, and zoning that allows elaborating far more comprehensive predictive models of economic viability.

6.3.1.2

Cost

On the other hand, at the beginning of a construction project, usually, land is acquired, which is why predicting the growth and price of the land in the area in the future is of special interest. A software that can perform this function, other than Urban Canvas (Fig. 6.1), is Microsoft Business Intelligence (https://powerbi.microsoft.com), which allows establishing relationships between data that does not necessarily need to be hosted in an own computer. For example, it could complement the previous feasibility analysis, linking data with a search for prices in the area on the idealista website and create predictive models of the prices in the area from user-generated algorithms.

6.3.1.3

Scope

The building design begins with a conceptual volumetric approach. The architect will design the distribution based on volumes with the square meters contained in

78

J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

Fig. 6.1 Predictive model of the increase in the price of the land in the city of Valencia Source own elaboration

the program specified by the customer. A tool that allows verifying if the program is fulfilling its aim is dRofus (www.drofus.no), through which the client can set his requirements of square meters and equipment in each room and link it to the BIM model. This software may be present during the entire project, in order to serve as a support to check if all those elements that the client requests are being included. Given that the building concept is being worked on, the rapidity of the sketch becomes a priority, and in this respect Autodesk FormIt (www.formit360.autodesk. com) stands out, which compared to other programs like Trimble SketchUp (www. sketchup.com) has a greater degree of integration with the BIM model to be produced later. The first volumetric approximations can also be designed with the same software tools that can later be used to develop a model, such as GRAPHISOFT ArchiCAD (www.archicad.com), Nemetschek Allplan (www.allplan.com), or Autodesk Revit Architecture (www.autodesk.es/products/revit-family/overview), among others.

6.3.1.4

Quality

Although the design of the building is still in its conceptual stage, it is already possible to conduct energy analysis from these conceptual volumes, which will be useful to evaluate different design options from the point of view of energy efficiency and sustainability. For this purpose, the tools from Autodesk Green Building Studio are available, from now on GBS (www.gbs.autodesk.com) integrated within FormIt and Revit and EcoDesigner (www.graphisoft.es/archicad/ecodesigner/), integrated within ArchiCAD.

6 Classification of Software in the BIM Process …

6.3.1.5

79

Communication

On this phase of the project, the architect must create explanatory views of the model and schematic renders to help in the design’s decision-making process. To this end, the same programs as for the scope can be used.

6.3.2 LOD 200 Once an alternative for volumetric design and distribution of spaces has been chosen, the basic design will be developed. This corresponds to the LOD 200, where the basic partitioning and structure will be defined, detailing elements such as its thickness, but without specifying its internal composition. The Bim Execution Plan (BEP), which contains the guidelines and strategies to use and integrate the BIM methodology throughout the project, must be completed before the end of this development level to ensure the proper use of computer tools along the project.

6.3.2.1

Scope

Basically, the work of the architecture team will be to elaborate the building plans with an almost definitive interior layout, determining partitions, openings, rooms, pillars, furniture, etc., although these are still susceptible to small position changes, and its internal composition or construction details are not yet determined. To this end, BIM design software programs such as GRAPHISOFT ArchiCAD, Nemetschek Allplan, and Autodesk Revit Architecture can be used for this work. Also, the first BIM structure model approaches shall be included.

6.3.2.2

Integration

So far, the documentation contained in the BIM model was very scarce. From this moment on, all teams will truly begin to work on the model. For this reason, the BIM Manager will turn the model of the previous stage into a central archive for the architecture discipline, and will create two additional central files, of structures and MEP, linked to each other, so that all teams can see the work development of the other teams, but without interrupting one another. From this point on, a great number of archives are going to be generated, with subsequent revisions during the phase of design, paying particular attention to the file structure of the project. For the project’s document management, there are several software solutions available, including Autodesk BIM 360 (www.bim360.autodesk.com) and Aconex.

80

6.3.2.3

J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

Human Resources and Communication

In addition to the fact of an increase in file production and the subsequent documentation, the Responsible Parties for each task will have to be established those who will be Accountable for them, those who will be Consulted and Informed, through the RACI matrix. To this end, access control must be established giving read or write permissions to relevant users, creating different types of roles which must comprise the different users. The BIM 360 (www.bim360.autodesk.com) and (www.aconex.com) Aconex applications contain these features.

6.3.2.4

Cost

At this point in the project, only an approximation of costs based on rooms and finishes and certain structural elements can be extracted by assigning costs per square meter. To this end, the following software stand out because of their integration: Arquímedes (www.arquimedes.cype.es) and Presto (www.rib-software.es/presto).

6.3.3 LOD 300 Once the relevant licenses have been obtained, the LOD 300 phase will begin. In this phase, a detailed design for each of the disciplines shall be developed: architecture, structure, and Mechanical, Electrical, and Plumbing (MEP), including the necessary calculations and the justification of current regulations. All integrating elements of the project shall be detailed so that their execution can take place. Illustrations or renders will have to be contributed that show the considered solutions for the design, estimated costs based on the elements and not on rooms, the energy analysis, etc., so as to obtain the license to begin construction.

6.3.3.1

Scope

This is the phase in which more design hours will be invested; so hereafter, the scope for each one of the disciplines will be defined. Architecture Starting with the architecture discipline, in this LOD level all the elements and their integrating parts must be defined; therefore, in contrast with the design information in LOD 200, where a wall was simply defined by its thickness, in this LOD level it is necessary to define the layers that compose this wall and their thickness, specifying the material of these elements, as shown in Fig. 6.2.

6 Classification of Software in the BIM Process …

81

Fig. 6.2 Internal structure of dividing walls in LOD 300. Source Own elaboration

To this end, the same software programs used in the scope can be used, increasing only the detail level of the BIM model from the program itself in which the LOD 200 model has been created. The elements that must be designed by the structures or MEP team shall be extracted, and the models that make these equipment shall be linked. Structure Regarding the structures team, the structure of the building shall be designed and its corresponding loads assigned in order to perform the load analysis and determine its dimensions. Depending on the software, the structure can be designed with the architecture modeling program and exported later to a structural analysis program. The first task will be to design the 3D structure according to the changes that have occurred in the project, giving preliminary values to the structural predesign that will be verified further ahead. For this purpose, the programs that stand out would be CSI Sap2000 and ETABS (www.csiespana.com), Trimble Tekla Structures (www.tekla.com/la/ productos/tekla-structures), CYPECAD 3D (www.cype3d.cype.es), Nemetschek Scia (www.scia.net/en), and Autodesk Robot (www.autodesk.com/products/robotstructural-analysis/overview). Autodesk Robot can link the structure and integrate it into Revit Structure. MEP The acronym MEP corresponds to Mechanical, Electrical, and Plumbing, which refers to the works of mechanics (ventilation and air conditioning), electricity, and plumbing. In parallel with the work of the architecture and structure disciplines, the MEP team will design and calculate the building’s MEP systems.

82

J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

The first internal deliverable for the team will be bringing up to date the values of occupation and load estimate of spaces in the LOD 200 MEP model, and to update them according to the estimates of the LOD 300 model. The software mentioned for this purpose are CYPECAD MEP (www.instalaci ones.cype.es), Trimble Duct Designer and Pipe Designer (www.buildings.trimble. com/products/ductdesigner-3d), GRAPHISOFT MEP Modeler (www.graphisoft. es/archicad/mep_modeler), Nemetschek Data Design System (https://www.nemets chek.com/en/brands/data-design-system/), and Autodesk Revit MEP.

6.3.3.2

Integration

For the integration of the project in LOD 300, the same tools as in LOD 200 can be used. The Autodesk solution will now be called BIM 360 Glue, which incorporates a visual display of the models, and, in addition, allows to detect clashes between them. Aconex, for its part, also has this visual display although it cannot detect interferences.

6.3.3.3

Quality

To make sure that design errors do not occur that reduce the quality of the project, or that the design does not comply with standards, it is possible to carry out “automatic” computerized quality controls. This consists of creating model verification rules such as “checking that the width of any room is greater than a given value” or checking that “the distances from any point in the room to a fire escape do not exceed a certain distance” [4]. For this type of verification, the software of Nemetschek Solibri Model Checker (https://www.solibri.com/products/solibri-model-checker/) is proposed. It can detect, for example, collisions between the opening of a window and a pillar, an action that a standard collision detection software would not perform. Besides, in this point of quality, it must be mentioned that they can be used for the calculation of the building’s energy efficiency, with the remark that at this moment it will not use for calculation conceptual volumes as in LOD 100 but the concrete elements of the building with its thicknesses and materials. These tools can be, for example, GBS, EcoDesigner, and CYPETHERM HE Plus (cypetherm-heplus.cype.es).

6.3.3.4

Communication

In addition to the communication tools included in the integration software, it is possible to show the model to someone external to the project, or someone who does not have the software with which the design is being made. For this purpose, IFC viewers are available.

6 Classification of Software in the BIM Process …

83

The Industry Foundations Classes (IFC) is a type of file whose content is defined by ISO 16739: 2013 (https://www.iso.org/standard/51622.html) and which serves as a common connection point between different software programs, since it contains the information about the elements of the building and its properties, although with limitations. There are free IFC viewers, such as Nemetschek Solibri Model Viewer (www.solibri.com/products/solibri-model-viewer), Trimble Tekla BIM sight (www. teklabimsight.com/), and GRAPHISOFT BIMx (www.graphisoft.es/bimx). Although it is not linked to a specific software, it is important to highlight the BCF file format (Building Collaboration Format), which allows creating a report for the communication of incidents that identifies the elements by its Global Unique IDentifier (GUID, a specific identifier that corresponds only to one element), in such a way that there can be no errors in the interpretation of the elements to be checked.

6.3.3.5

Procurement

In this LOD, the detailed building design will be created, specifying materials and manufacturers; therefore, it will be important to obtain the BIM objects from the catalog of these manufacturers as far as possible. There are websites that contain BIM libraries of manufacturers’ objects, the following two being the most relevant: • www.nationalbimlibrary.com (BIM objects adapted to the British NBS standard) • www.bimobject.com (the most widespread and with the largest catalog of real manufacturers). These objects could have been incorporated into the LOD 200 model, but it is not until the LOD 300 when a certain supplier is actually prescribed for bidding.

6.3.3.6

Risks

In this risk section, it is appropriate to mention, not as a risk but as an opportunity, the opportunities for improvement that can be obtained from computational design. The computational design consists of, through a programming language, creating algorithms that are capable of generating complex geometries. Another application is to perform simple repetitive processes that require time on the part of the designers, for example, to generate new views and apply a certain template for each of them, and then place them on a sheet to print them. This type of process can be automated through the use of these programming languages, reducing in this way the hours invested by the designers in repetitive tasks that, on the other hand, wear down the moral, thus leaving more time to the designers for their own design tasks. Most BIM modeling programs can connect with programming languages through their Application Programming Interface (API), but this can be especially complicated for designers, so at this point the (more simple) visual programming software

84

J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

will be highlighted: Dynamo (dynamobim.org) and Grasshopper (www.grasshopp er3d.com).

6.3.3.7

Costs

Unlike in the LOD 200 where costs were elaborated from determined elements and from the surfaces of rooms, on this level of development of LOD 300, the elements are already defined, and their “types” are the definitive ones, so that measurements can be directly extracted from “keynotes” assigned to these “types”. The keynotes are a .txt file that contains a classification of constructive elements by their nature and type. This file of keynotes can be extracted directly from the price bases owned by the company, or from official websites, such as the price generator for construction (www.generadordeprecios.info). With this keynotes file, each constructive element is associated with an existing “item code” that is in the price database, in order to obtain measurements in a much faster and direct way, through budget software already mentioned like Presto or Archimedes.

6.3.4 LOD 400 At this level, the definitive information of other disciplines such as structure and MEP are incorporated into the architectural model. The information is already coordinated and there are no clashes, so the model is ready to begin its real construction. This is the level that corresponds to the execution project (although it adds more information than it traditionally contains). In practice, given the great leap in information and the need for coordination between different disciplines, most projects establish an intermediate deliverable known as “LOD 350”, which is where the first coordination approaches are made.

6.3.4.1

Integration

The first thing that must be done is the LOD 350. For this, it will be necessary to carry out the intrinsic coordination of each model and the coordination with the rest of the models. The most outstanding programs for this purpose are the already mentioned Solibri Model Checker and Autodesk Navisworks Manage (www.autodesk.com/pro ducts/navisworks/overview). From these programs, reports will be generated in BCF format that will be used to modify the BIM models.

6 Classification of Software in the BIM Process …

6.3.4.2

85

Scope

With the reports generated in the previous section, the BIM model will be modified to free it from clashes and prepare it for its construction. To this end, any of the software programs used in the LOD 300 can be used, since they are compatible with the BCF format. And so, the level known as “LOD 350” will be reached, and the next step is to construct the building. The scope of the elements to be built will be mostly set by the BIM model; however, there are procedures and auxiliary elements necessary for construction that are often not included in this model. Therefore, it is of special interest to be able to link these elements within the planning to an element of the BIM model, in such a way that it can be seen that in order to execute a certain element (a slab) another unmodeled element (a crane) is needed that is not in the BIM model, but it is linked to the same task in which that element is built (and, therefore, its leasing cost can be extracted, for example). The software tools that stand out for this purpose are Trimble Vico Office (www.vicosoftware.com/products/vico-office), Synchro Pro from Synchro software (www.synchroltd.com), and iTWO from RIB Software (www.itwo.com).

6.3.4.3

Time

As mentioned, once the LOD 350 is reached, the model is ready to carry out the detailed programming based on it. To produce the project schedule and link it to the BIM model, two paths can be taken, one more indirect, which would be to use the Microsoft task planning software Project (www.products.office.com/microsoft/Project) and then link it to other software that is able to import this planning and link it to BIM models, such as Naviswork Manage. The most direct way is to use planning programs that can directly integrate BIM models, and in this aspect the programs of Synchro PRO, iTwo, and Vico Office stand out. These tools could have been used in the previous LOD, but retrospective replanning would have been necessary.

6.3.4.4

Quality

From the Navisworks Manage, Vico Office, iTWO, and Synchro Pro BIM tools, custom parameters such as “7-day concrete strength” can be created and technical documentation can be linked to BIM planning models, such as technical specifications of certain elements that may have been obtained directly from the price bases.

86

6.3.4.5

J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

Risks

As explained in the scope section, from the BIM planning tools non-modeled elements can be added to the schedule. This implies that protection elements can be added to the model, and the date in which they must be installed can be assigned. Given the ability to detect the development of planning activities in spaces that have not been equipped with adequate protection measures, the Synchro Pro program stands out in this regard. In addition, this software has the capacity to manage risks according to the directives of the PMBOK, assigning probabilities of occurrence and risk impact to the tasks.

6.3.4.6

Human Resources

Once the detailed budget and the schedule has been completed, the necessary human resources will be recruited and the material resources and equipment will be acquired. The planning BIM tools provide functions for assigning human resources to tasks and leveling them according to the specific project [5].

6.3.4.7

Costs

Similarly, for costs the same planning tools can be used, since incorporating the cost data to the resources and tasks will allow to establish baselines and compare planned prices against real prices, generating curves of gained value [6].

6.3.4.8

Procurement

In turn, when the aforementioned tools have the BIM model linked to planning, it is possible to anticipate the moment at which it will be necessary to buy, or for example, make estimates of what amount of concrete is going to be needed every day.

6.3.4.9

Communication

In addition to the communication in BCF required for integration and that existing in the document management, it should be noted that the possibility exists of using the Aconex, BIM 360, and Synchro Pro programs from mobile devices like tablets, so that a photo of the building can be taken and associated to an element of the BIM model or the planning in order to report potential problems.

6 Classification of Software in the BIM Process …

87

6.3.5 LOD 500 Once the construction work of the building is completed, the purpose of the LOD 500 is to gather all the information that has been generated during the construction, including the changes that have taken place on site, to deliver a model in Revit “as built”, that is, as it has been built.

6.3.5.1

Scope

To update the models with these changes on site, the same software that was used in the LOD 400 and 300 will be used. The maintenance information should be extracted from the model in COBie format (Construction Operations Building Information Exchange). This format is an XML file, a spreadsheet that does not include the model geometry, but the values of the most important parameters for maintenance, such as “manufacturer’s name”, “serial number”, and “date of purchase “, and it is especially useful for building maintenance.

6.3.5.2

Integration

With these updated models, a software will be required that can manage the warranties and the maintenance equipment associated with the elements that make up the model. An example of these programs is Autodesk Building OPS (www.ops.autodeskbuil dingops.com) and GRAPHISOFT Archi FM (www.archifm.net).

6.3.5.3

Human Resources

These same programs can record the hours that each worker spends in the maintenance of each element, or notify a worker in particular if a malfunction occurs.

6.3.5.4

Stakeholders

Finally, having prepared already the databases and the maintenance teams determined, it should be noted that the Building OPS application offers a free version for mobile phones that any user can download and report incidents in the building. Table 6.1 shows a summary of the relationship between software, LOD, and knowledge areas.

Revit

Autodesk

Synchro

Allplan

Nemetschek



FormIt

ArchiCAD

GRAPHISOFT



Revit

Allplan

ArchiCAD



Synchro

Revit Structure and Robot (1)

Revit MEP (2)

Revit Architecture (3)

Allplan (3)

Data Design System (2)

Scia (1)

MEP Modeler (2)

ArchiCAD (3)

Duct and Pipe Designer (2)



CYPECAD 3D (1)

Drofus

Aconex

Solibri



Navisworks

BIM 360

LOD 400

Tekla structures (1)



CYPECAD MEP (2)

SAP2000 (1)

ETABS (1)

Drofus

Aconex





BIM 360

LOD 300



– SketchUp

Trimble









CYPE





Drofus

Aconex



CSI



Aconex Drofus



Nemetschek



BIM 360

LOD 200

Drofus

Microsoft BI

Microsoft

Scope

U. Canvas

Autodesk

Integration

LOD 100

Developer

Knowledge areas

Table 6.1 List of programs with knowledge areas and LODs

– (continued)

Drofus

Aconex

Archi FM



B. OPS

LOD 500

88 J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

Communications

Human Resources

Quality

– – –

Nemetschek

RIB

Aconex FormIt



Trimble

Autodesk



Synchro

GRAPHISOFT –

– EcoDesigner

CYPE

Autodesk



CYPE

Nemetschek



Microsoft GBS

Microsoft BI

Autodesk

Autodesk

– U. Canvas

RIB



Synchro

Costs



Trimble –



Synchro

RIB



Microsoft

Time



Autodesk

Time

LOD 100

Developer

Knowledge areas

Table 6.1 (continued)

BIM 360

Aconex









BIM 360









Arquímedes





Presto













LOD 200

BIM 360

Aconex









BIM 360

EcoDesigner

CYPETHERM

Solibri

GBS







Presto

Synchro

iTWO

Vico Office

Synchro

Project

Navisworks

LOD 300

BIM 360

Aconex

iTWO



Vico Office

Synchro

Navisworks











Project

Navisworks

iTWO

Synchro

iTWO

Vico Office

Synchro

Project

Navisworks

LOD 400







(continued)

Archi FM





B.OPS



























LOD 500

6 Classification of Software in the BIM Process … 89

– – –

Synchro

RIB

Trimble –



Autodesk







Synchro





SketchUp

Trimble –



Synchro

R. McNeel



Nemetschek

Autodesk

– –









www.bimobject.com

www.nationalbimlibrary.com















Aconex





GRAPHISOFT

LOD 200

LOD 100

Aconex

Developer

(1) Structure. (2) MEP. (3) Architecture. (4) Free IFC viewers Source Own elaboration

Stakeholders:

Acquisitions

Risks

Knowledge areas

Table 6.1 (continued)















Grasshopper

Dynamo

BIM Sight (4)



Solibri Viewer (4)

BIMx (4)

Aconex

A360 (4)

LOD 300



Vico Office

iTWO

Synchro





Synchro







Synchro





Aconex



LOD 400

B.OPS

























LOD 500

90 J. María Rodrigo-Ortega and J. Luis Fuentes-Bargues

6 Classification of Software in the BIM Process …

91

6.4 Conclusions As can be seen in Table 6.1, there is a great number of software manufacturers in the market that offer different solutions throughout the life cycle of a project. Despite this variety of programs, only one is available to manage stakeholders, and that is at the end of the project for the end users. There is no software available that is able to manage the general public opinions with respect to a city-planning performance in its design phase. To this end, free IFC visualizers could be used but they would not integrate the mass generated comments linked to the BIM model. It is noteworthy that although there are 3 programs able to manage planning and costs in general, only one of them is able to manage risks according to the PMBOK directives. Finally, it shall be brought to attention that as a result of what was observed in the table, the only manufacturer that offers a software suite that covers most of the life cycle of the project and areas is Autodesk.

References 1. Zhao X (2017) A scientometric review of global BIM research: analysis and visualization. Automat Cons 80:37–47 2. National Institute of Building Sciences (NIBS) (2007). United States national building information standard modeling. Version1-Part 1: overview, principles and methodologies 3. Project Management Institute (PMI) (2013) A guide to Project Management Body of Knowledge (PMBOK Guide), 5th edn. PMI, Newton Square, Pennsylvania 4. Ciribini A, Mastrolembo Ventura S, Paneroni M (2016) Implementation of an interoperable process to optimise design and construction phases of a residential building: A BIM Pilot Project. Automat Cons 71:62–73 5. Liu H, Al-Hussein M, Lu M (2015) BIM-based integrated approach for detailed construction scheduling under resource constraints. Automat Cons 53:29–43 6. Lu Q, Won J, Cheng JCP (2016) A financial decision making framework for construction projects based on 5D Building Information Modeling (BIM). Int J Project Manag 34:3–21

Chapter 7

Portuguese Project Management Profile—An Overview A. Andrade Dias and Antonio Amaral

Abstract (English) The Portuguese Project Management Association—APOGEP planned to assess the project management community toward identifying an average profile. Based on an inquiry to the community, we’ve obtained 534 valid answers, on a global sample of 734 responses. The topics addressed in the inquiry were mainly focused on the project managers’ background, the sector, the organizational dimension and project categories, the average wage, and the characterization of the professional certifications. The primary results point to the following Portuguese Project Management profile: Male; age between 36 and 45; living and working in the big Lisbon; with a full-time activity in organizations with more than 250 employees; with 15 years of working experience; working in the IT and construction sectors; with a minimum academic curriculum of a 5 years Bachelor degree; with an average salary between 20 K and 40 K/year. The study proved the contribution of the professional certification to the career’s development and to the average salary increase. Keywords Project manager profile · Portugal · APOGEP, IPMA, trends, and salary · Certification

7.1 Introduction Since immemorial times, mankind has been projecting, evaluating, and managing projects. A project activity is commonly associated with the creative and artistic

A. Andrade Dias Porto Business School—Oporto University, Porto, Portugal e-mail: [email protected]; [email protected] A. Amaral (B) CIICESI, Escola Superior de Tecnologia e Gestão, Instituto Politécnico do Porto, Felgueiras, Portugal e-mail: [email protected] Algoritmi Research Center, Industrial Engineering and Management, Guimarães, Portugal © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_7

93

94

A. Andrade Dias and A. Amaral

nature of the human being, as well as with its inventive and entrepreneurial vein that aspires him to solve the problems and challenges of societies. It is generally recognized that a project’ development is limited to a temporary effort carried out toward the achievement of a product or service with a concrete and predetermined objective, consuming a limited set of resources, which may be human as well as nonhuman (materials, equipment, monetary, etc.), with a well-defined time horizon [1, 6, 9]. This definition is sufficiently comprehensive and somehow adaptable to any type of project despite its complexity, size, or even the time of its implementation. Each project compiles a vast set of competencies and, in some way, is originated by a very concrete environment that results from the combination of efforts from multiple knowledge areas, as well as, represents and personalizes several, well-defined, dimensions such as technical-technological, social, labor, context, and human [5]. Over time, organizations have undergone drastic changes in the way they operate and in the way their business has evolved. Market conditions, global competition, technological evolution, the employees’ skills level, and management paradigms are elements that trigger new needs and different ways of managing organizations, presenting unprecedented challenges and complexities. In this way, the perception and influence of project management evolve endemically, following the dynamics of change and giving a competitive advantage to the organization that is able to conveniently read and process the pace, vicissitudes, and complexities of a business area or markets [1, 2, 6]. The professional developments achieved in project management, leveraged in underlying scientific developments, confirm, within a short space of time, its socioeconomic importance of the area and its future potential for growth. Pinto and Kharbanda [8] reinforce the idea that project management will tend to replace traditional management in organizations and that will constitute a critical factor toward gaining competitive advantages in the twenty-first century. The Portuguese Project Management Association—APOGEP intended to explore the project management community toward identifying the Portuguese Project Manager profile. The study was constructed and applied in the survey monkey platform and was disclosed in the APOGEP website, social networks like Facebook and LinkedIn which resulted in an overall 534 valid answers, on a global sample of 734 responses. The topics addressed in the questionnaire were mainly focused on the project managers (PM) background, the business sector, the organizational dimension and project categories, the average wage, and the characterization of the professional certifications. This study has followed the survey pattern conducted annually by reference organizations, such as the Association for Project Management (APM Salary and Market Trends Survey) and the Project Management Institute (Pulse of the Profession). These studies have revealed the growing importance of project management professionals, the increasing demand for professionals in the market, and the project success rates growth. In the case of the APM survey, 2,700 professionals from public and private sectors have participated in the questionnaire. In the case of the PMI report, 3,234

7 Portuguese Project Management Profile—An Overview

95

project managers from diverse organizations have answered the survey. The results are, somehow, similar to those obtained in Portugal, even though the salary scale meets the contexts involved. Likewise, there is a tendency toward adopting agile methods and increasing the importance of skills development. On the other hand, the same document has shown that 32% of the involved organizations consider technical and leadership skills as high priorities for the following years. Therefore, this might be a clear evidence that the future of project management will be based on competencies development through the influence of local and international project management associations, such as IPMA’s member associations (APM and APOGEP) and IPMA on a global scale. The paper is organized as follows: 1. Methodology adopted; 2. Questionnaire results; 3. Conclusions, limitations, and further developments.

7.2 Research Methodology The research strategy followed was the survey—a quantitative monomethod approach—supported by the literature review performed, and its adoption was chosen since it is a suitable method to collect attitudes and perceptions on a significant scale [13]. The target group comprises project managers with or without professional certifications. The survey was available online for almost 2 months. The survey’s implementation allowed a global sample of 538 valid answers. The authors conducted the survey on behalf of APOGEP with a perspective of approach to knowledge and to the state of the art, to present and deliver results in public (as exemplified by this scientific work) and not to use it for commercial purposes. APOGEP has treated the data with impartiality and with a statistical approach so that the results obtained lacked deviations (bias). Throughout the research methodology application and due to ethical principles, the data collected from the survey was designed to fully respect the anonymity and confidentiality of all participants and it hasn´t incorporated the respondents’ information into any commercial file. Simultaneously, the data used for research purposes was requested through an informed consent among all participants to ensure their availability to participate and its acceptance in publishing this work’ results. The authors did not have any kind of involvement with the Certification Body of APOGEP during the research period, nor do they assume any kind of responsibility in management. The certification body is independent of the Executive Board of APOGEP and both authors had no link with it.

96

A. Andrade Dias and A. Amaral

7.3 Questionnaire Results As previously pointed, some homologous studies were performed by leading institutions related to Project Management, which intend to measure the pulse of the profession [10, 11] such as the ones performed by Project Management Institute (PMI) and the English Association of Project Management (APM) assessing the state of project management, the salary and market trends [3, 4] and the level of maturity of project management practices, performance, and success [14].

7.3.1 Portuguese Project Managers Profile The primary results obtained from the study point to the following project manager profile: • • • • • • •

Male (± 72%); Age between 36 and 45 years (46.8%); Living and working in the big Lisbon area (74.5%); With a full-time activity in organizations with more than 250 employees (57.44%); With an experience of 15 or more years (27.84%); Working in IT and construction sectors (40.1%); With a minimum academic curriculum of 5 years (33.3%) as Bachelor of Arts, postgraduation (20.6%), and masters (20.6%); • With an average salary between 20 K and 40 K/year (33.3%). The following results obtained from the questionnaire are important to measure and to form a clear picture of the current state of project management among its practitioners, the Portuguese community of Project Managers. Therefore, these clues are essential for defining new approaches and strategies toward embracing new challenges, especially those related to the project management profession recognition. Most of the Portuguese Project Managers (85.1%) indicated that they already use a methodology in project management. In fact, SCRUM (43.2%) and PRINCE2 (10.3%) [7] are the ones with higher relevancy, which can be easily explained by the number of PMs working in the IT sector where the SCRUM and other agile approaches are well widespread. Another important question placed deals with the type of professional certifications in project management. About 40% of the respondents mentioned the absence of possessing any type of certification, which presents an exceptional opportunity toward capturing new PMs to certification programs. This is extremely important toward achieving the recognition of the profession in project management and for strengthening and increasing the visibility of the project managers’ class. The more predominant certifications are, respectively, the IPMA D (24.5%), PMP (15.6%), and IPMA C (12.6%).

7 Portuguese Project Management Profile—An Overview

97

The associativism in Portugal is considered as a “poor” relative. This point is very noticeable when, roughly, 42.6% of the respondents are not members of any professional association of project management. And among the associations with more relevance in Portugal, APOGEP (38.7%) has a considerably more significant dimension than the PMI chapter Portugal (13.2%). This means that associations still have a way to go in order to make themselves known better and increase the level of recognition of their brands. According to Figs. 7.1 and 7.2, 34% of PMs managed 5 or more projects during the last year and in almost 60% of the cases the overall budget was higher than 250,000. This corroborates the fact that commonly project management is more recognized than its need in big firms and in this case the respondents (57.44%) are in full-time activities in organizations with more than 250 employees. As can be seen in Fig. 7.3, the typology of the project management scope is diversified, nevertheless, the predominant areas are Information systems (31%), organizational changes and improvements (18%), development of new products and services (18%), and construction and assembly (15%). Therefore, the importance of addressing new agile approaches might, in some contexts, introduce better results, especially because of the type of change that occurs in the projects with higher rhythm and frequency or level of complexity. Therefore, new approaches gain space because of the business area particularities and of the dynamics created by short project life cycles, high level of changes, and multiple stakeholders involved. It is also important to mention that the area of innovation is one of the areas with more importance and that it will grow in the near future. However, there will be distinct needs and challenges that must be supported by a set of competencies which will allow operating in circumstances of high level of uncertainty.

Fig. 7.1 Number of projects managed by PM during the last year

98

A. Andrade Dias and A. Amaral

Fig. 7.2 Overall budget for the projects managed during the last year

Fig. 7.3 Usual typology and scope of projects

Figure 7.4 presents the perception level of satisfaction and success of the projects managed during the last 5 years, according to the level of compliance in Time, Costs, and Quality, as well as, the level of satisfaction of sponsors, stakeholders, and overall classification. It’s interesting to see that the level of compliance and satisfaction is extremely high, especially, when it is compared with the data available

7 Portuguese Project Management Profile—An Overview

99

Fig. 7.4 Average level of project success perception for PM during the last 5 years

in Fig. 7.5 about the international behavior/perception of the PM about the same metrics. Therefore, it seems that there are some dazzled opinions in the Portuguese PMs community, or perhaps it can be explained by the culture of not assuming the errors and try to paint a pink reality. According to PMI [11], pulse of profession study, indicates the performance’ perception of the international PMs pointed during the execution of their projects. As can be compared, the differences are notorious and speak for themselves. It is, for that reason, vital to constitute a level of openness and sharing in organizations between peers so that they can diagnose problematic situations as prematurely as possible and thus be able to develop appropriate actions to mitigate unwanted effects. Therefore, as can be seen in Table 7.1, 5 of the 6 metrics of success presented a reduction between 2012 and 2016, according to PMI [10]. The only metric that presented a small improvement was the failed project’s budget lost. Notwithstanding, despite the knowledge gained over the years, the project management techniques, methodologies, and standards continue to be problematic, as can be seen by the results presented in the following table. Figure 7.5 points the success factors that have a higher level of importance in the performance obtained in projects, according to PMI [10]. The dimensions asked were, respectively: Team; Effective governance; Standards, Methods and tools; Financing; Commitment of stakeholders; Support from the organization; Sponsor Support, and Project Planning. All the dimensions assessed were extremely high evaluated, which in our view, once more, might not be reliable enough to constitute the common ground of project managers.

100

A. Andrade Dias and A. Amaral

Fig. 7.5 Factors for project success

Table 7.1 Evolution of international metrics of success

International KPIs of success

2012 (%)

2016 (%)

Meet original goals

64

62

Completed within original budget

55

53

Completed on time

51

49

Experienced scope creep

44

45

Failed project’s budget lost

34

32

Deemed failures

15

16

7.4 Conclusions, Limitations, and Further Developments The project manager function should be recognized at the national level as a profession—82.31% of the respondents answered affirmatively to this challenge. It is possible to see that the average annual salary of those who hold IPMA A, B, and C certificates is higher than the average of those who do not have any IPMA certificate, at least 500,000/year. Likewise, we can conclude that the value of the remuneration of professionals certified by the PMI is similar to the IPMA A and C certificates, being lower than those of IPMA B and higher than those of IPMA D.

7 Portuguese Project Management Profile—An Overview

101

We can conclude with 99% certainty that the fact of having or not having an IPMA certificate influences the annual level of remuneration. With 99% certainty, whether or not an IPMA certificate influences the overall budget value of the projects managed in the last year, it is possible to verify that the certification positively influences the overall budget value of the projects managed. As mentioned by Roth and Kleiner [pp. 58, 12], “Good intentions are not enough to guarantee improvements—commitment, support and skill are all essential. Furthermore, a clear and shared understanding of the organization’s objectives is important if organizations are to learn collectively and thereby reap the significant benefits associated with collaborative reflection”. Therefore, based on the results achieved throughout the collection of opinions and perceptions from the survey’s respondents, it allowed us, among other things, to clearly identify the key determinants that might contribute to increasing the awareness among the different stakeholders about the knowledge acquisition and the spiral of learning creation. Furthermore, other factors must be considered like technological environment and leadership profile in order to gain a broader view as well as to be able to assess their impacts and measure their contribution’s degree to this theme. Hereafter, this research intends to develop and validate a formal theoretical model in a newer future.

References 1. AMA (2011) The AMA handbook of project management. (PC Dinsmore, JC Brewin eds). Amacom Books, Broadway, New York 2. Amaral A (2012) Avaliação e gestão do portefólio de projetos. Accessed from http://repositor ium.sdum.uminho.pt/handle/1822/22255 3. APM (2009) The APM body of knowledge, 5th edn. APM, Princes Risborough, Buckinghamshire, UK 4. APM (2017) Salary and market trends survey 2017. APM, Princes Risborough, Buckinghamshire, UK 5. Caupin G, Knoepfel H, Koch G, Pannenbäcker K, Pérez-Polo F, Seabury C (2006) ICB4— IPMA competence baseline. IPMA, Nijkerk, Netherlands 6. Gower (2009) Gower handbook of project management (R. Turner, Ed.) Gower, Hampshire, England 7. OCG (2009) Managing successful project with PRINCE2. TSO, Norwich, UK 8. Pinto JK, Kharbanda OP (1999) How to fail in project management without really trying. Bus Horiz 39(4):45–53 9. PMI (2013) A guide to the project management body of knowledge, 5th edn. PMI, Newtown Square, PA 10. PMI (2016) Pulse of Profession—The High Cost of Low Performance. How will you improve business results? (8th Ed. Global Project Management Survey). PMI, Newtown Square, PA 11. PMI (2017) Pulse of Profession—Success Rates Rise. Transforming the high cost of low performance (9th Ed. Global Project Management Survey). PMI, Newtown Square, PA 12. Roth G, Kleiner A (1998) Developing organizational memory through learning histories. Org Dyn 27(2):43–60 13. Saunders M, Lewis P, Thornhill A (2007) Research methods for business students, 4th edn. Prentice Hall, London 14. Wellingtone (2016) The state of project management survey 2016. APM PMO SIG, Windsor, Berkshire, UK

Chapter 8

The Utilization of Project Management Tools and Techniques and Their Relationship with the Success in Chemical Engineering B. Llorens Bellón, R. Viñoles-Cebolla, M. J. Bastante-Ceca, and M. C. González-Cruz Abstract Managing a project successfully implies that project managers have to be experts at initiating, planning, executing, monitoring and controlling, and closing projects. For that, tools and techniques are utilized as a help, in order to manage activities properly during the entire project life cycle, because these tools and techniques allow them to implement the bases of this discipline more easily. The main objective of this work consists of studying the utilization of project management tools and techniques by chemical engineers, and establishing relationships with the success of the projects in which they have participated. The study begins with the selection of the most widely used project management tools and techniques. This selection was made taking into account not only the tools and techniques of traditional methodologies but also those related to agile methods, owing to their increase in use. After that, a survey was designed and handed to a panel of experts in Chemical Engineering to assess their professional profile, the success of the projects in which they have participated, and the utilization of the project management tools and techniques taken into account. Keywords Chemical engineering · Tools · Techniques · Project management · Success

B. Llorens Bellón (B) Escuela Técnica Superior de Ingenieros Industriales, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain e-mail: [email protected] R. Viñoles-Cebolla · M. J. Bastante-Ceca · M. C. González-Cruz Departamento de Proyectos de Ingeniería Grupo de Investigación de Diseño y Dirección de Proyectos, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_8

103

104

B. Llorens Bellón et al.

8.1 Introduction Although there is wide acceptance of the iron triangle to graphically explain the success of project management, many authors have developed other definitions for this concept, such as Wit [18], the International Project Management Association [7], Kerzner [8], Ellatar [5], Carvalho and Rabechini [3], the Project Management Institute [13], or Morioka and Carvalho [9]. As a conclusion, as time goes by, this term has become a broader and more accurate definition by incorporating costs, schedule, results, objectives, resources, or risks to the older concept of just fulfilling the technique specifications or reaching the satisfaction of stakeholders. To make project management successful, it is usual to use tools and techniques to help in the management of tasks through the life cycle of the projects. Nevertheless, the reality is that companies and organizations, for the most part, do not use many of them, maybe due to the lack of training or certification in project management by their employees or maybe because they cannot afford it. As the National Agency of Quality Evaluation and Accreditation [ANECA] states [1], “one of the competencies that Chemical Engineers have is to lead, to coordinate and to manage complex and interdisciplinary projects”, which highlights the need to use the tools and techniques of project management to achieve the success. Therefore, for this paper, which shows the result of the work developed for the Final Master Thesis of the corresponding author, to complete the Master in Project Management performed at Universitat Politècnica de València, an analysis of the utilization rate of the tools and techniques of project management by Chemical Engineers, as well as, the influence of these on the success criteria of the projects will be determined.

8.2 Objectives The main objective of this work consists of studying the employment of the tools and techniques of project management and their relationship with the success of projects in the field of Chemical Engineering. Nevertheless, to achieve this objective adequately, some more specific objectives are to be met first, such as • To make a selection of the tools and techniques of both project management and agile methods most commonly used by experts. • To analyze the answers of experts in Chemical Engineering to the questionnaire about the employment of selected tools and techniques. • To determine the relationship between success factors and tools and techniques of project management through linear regression.

8 The Utilization of Project Management Tools …

105

8.3 Case Study 8.3.1 Determination of the Most Common Tools and Techniques Used by Experts Nowadays, there are lots of project management tools and techniques. Most of them can be found at Project Management Body of Knowledge, known as PMBOK [13] and in Managing successful projects with PRINCE2 [10]. Those tools and techniques are applicable to a wide range of different contexts, which make difficult the preference of ones over others. Because of this, some researchers have carried out empirical studies to identify the most commonly used (see Table 8.1). Regarding the research included in Table 8.1, Table 8.2 shows the 10 more common tools and techniques used by experts. Analyzing Table 8.2, the conclusion is that the results are qualitatively similar for Fernandes et al. [6], and Besner and Hobbs [2], since 6 out of 10 tools and techniques appears in both studies. This suggests a high degree of concordance between both researches. For the Patanakul, Lewwongcharoen, and Milosevic study, results were more different with regard to Besner and Hobbs, due to the incorporation of tools and techniques that did not appear at the reference study. In fact, from 10 tools and techniques most used by professionals, the only one the two studies have in common Table 8.1 Studies about the use of project management tools and techniques Authors

Description of the research

Number of tools and techniques studied

Questionnaire data

Thamhain [15]

Identification of 38 popularity of tools and techniques

180 projects and 294 professionals

Raz and Michael [14]

Use of tools for risk management

38

84 project managers

White and Fortune [17]

Identification of tools and techniques most used

44

236 projects

Besner and Hobbs [2]

Determination of use 70 of techniques and tools

753 project managers

Patanakul et al. [12]

Study of use of techniques and tools about project’s life cycle and success

39

412 PMI members

Fernandes et al. [6]

Identification of practices most used

68

793 project managers

Table generated from Besner and Hobbs [2], Fernandes et al. [6], Patanakul et al. [12], Raz and Michael [14], Thamhain [15], and White and Fortune [17]

106

B. Llorens Bellón et al.

Table 8.2 Prioritization of the use of project management tools and techniques according to research Nº

Besner and Hobbs

Patanakul, Lewwongcharoen, and Milosevic

Fernandes, Ward, and Araújo

1

PM software to task scheduling

Gantt chart

Progress report

2

Progress report

Similar estimation

Requirements analysis

3

Project scope statement

Ascendant estimation

Progress meetings

4

Requirements analysis

Performance report

Risk identification

5

Kick-off meeting

Checklist

Project scope statement

6

Gantt chart

Lessons learned

Kick-off meeting

7

Lessons learned

Milestone diagram

Milestone planning

8

Change request

Baseline performance measurement

Work breakdown structure

9

PM software to monitoring schedule

Project changes log

Change request

10

Work breakdown structure

Brainstorming

Project issue log

Table generated from Besner and Hobbs [2], Fernandes et al. [6], and Patanakul et al. [12]

is “Lessons learned”, which appears in 6th position in the ranking of Patanakul, Lewwongcharoen, and Milosevic study and in the 7th position in the list of Besner and Hobbs. In this sense, from the results of the three most recent researches, a reduced group of tools and techniques has been selected, by attempting to be common for all the studies and representatives, as far as possible, of the whole group of processes and knowledge areas of project management. Besides, due to their increasing use, they have considered tools and techniques associated with agile methods, described at Cervone [4], and Palacio et al. [11]. Table 8.3 list the tools and techniques selected for both cases. Regarding the column showing tools and techniques associated with project management of Table 8.3, it is important to remark that it includes, in an important sense, the tools and techniques that 3 groups of authors have in common. It also includes the standards analyzed (6 out of 20 elements, which means a 30%), or a combination of authors (also a 30% of total). The objective is to make the selection as well-founded as possible in the results of researches. The rest of the tools and techniques (8 out of 20 chosen, which means the las 40%) are the same, or similar, to those appearing in PMBOK [13] and PRINCE2 [10].

8 The Utilization of Project Management Tools … Table 8.3 Tools and techniques considered for this study

107

Associated with project management

Associated with agile methods

Analysis with a checklist

Financial indicators

Scrumpoker

Analysis of stakeholders

Probability and impact matrix

Estimation of duration

Earned value management

Critical path method

Progress chart

Audits

Communication methods

Product chart

House of quality

Projections and forecasts

Work in progress limit

Ishikawa diagram Meetings

Pending worklist

Gantt chart

PM software

Retrospective meetings

Breakdown structure

Decision-making

Daily meetings

Control chart

Brainstorming

Sprint review

Interpersonal skills

Expected monetary value

Dashboard

8.3.2 Description of the Method This study has been carried out by using a questionnaire, since it allows obtaining information in a standardized way. This will make it possible, afterwards, to treat statistically the information, in order to obtain the relevant conclusions. The questionnaire was sent by e-mail and by using social networks as channels to contact the addressees during January and February of 2017. The experts’ panel was formed by • Members of the Spanish Association of Project Management (AEIPRO), • Former students from Universitat Politècnica de València, • Chemical Engineers with the freedom to practice their profession. Regarding the questionnaire’s design, it has been divided into three different sections according to the following structure: • Section one (questions 1–9) gives information about the professional profile of addressees and the success degree of the projects in which he or she has participated. • Section two (questions 10–29) gives information about the use of tools and techniques associated with traditional project management. • Section three (questions 30–39) gives information about the use of tools and techniques associated with agile methods.

108

B. Llorens Bellón et al.

8.4 Results After 2 months of field survey, 132 questionnaires were considered as valid (from 850 addressees consulted). Figures 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, and 8.8 show the results after processing the data.

Fig. 8.1 Age interval of the experts

Fig. 8.2 Professional sectors more experienced by the experts

Fig. 8.3 Years of experience of experts in the selected professional sector

Fig. 8.4 Level of certification of experts in project management

8 The Utilization of Project Management Tools …

109

Fig. 8.5 Size of projects managed by experts

Fig. 8.6 Success rate of projects managed by experts

8.5 Analysis of Results 8.5.1 Determination of Sampling Error Since only a sample of individuals has been analyzed, there exists an associated sampling error. To calculate it, an expression links the sampling error (e) with the size of the sample (n), the size of the population (N), the factor associated with the confidence interval (Z), and the rate of individuals that have the characteristic (p). Thus, according to [16], the size of the sample, n, is calculated as n=

N · Z 2 · p · (1 − p) (N − 1) · e2 + Z 2 · p · (1 − p)

(8.1)

According to the results obtained for a sample of 132 individuals, considering an infinite size of the population, considering a confidence interval of 95%, and supposing that half of the individuals have the characteristic to be analyzed, the error (e) is 8.53%.

110

B. Llorens Bellón et al.

1: Expected Monetary Value

6: Projections and forecasts

11: Interpersonal skills

16: House of Quality

2: Brainstorming

7: Communication methods

12: Control chart

17: Audits

3: Decision making

8: Critical Path Method

13: Breakdown structure

18: Earned Value Management

4: PM Software

9: Probability and impact matrix

14: Gantt Chart

19: Analysis of stakeholders

5: Meetings

10: Financial indicators

15: Ishikawa diagram

20: Analysis with check list

Fig. 8.7 Level of use of tools and techniques associated with project management

Fig. 8.8 Level of use of tools and techniques associated with agile methods

8 The Utilization of Project Management Tools …

111

8.5.2 Analysis of Professional Profile First, concerning the experts’ age interval, it should be noted that the central interval (between 36 and 45 years) represents a 41.7% of the total. The relative importance of the rest of the intervals decreases considerably when moving to the edges. Regarding the professional sector where the experts are more experienced, the two large majority sectors are chemical, petrochemical or pharmaceutical industry (22%), and food or biotechnology industry (20.5%). Concerning the experience of experts, the higher percentage is for the central interval (between 6 and 10 years of experience), with 30.3% of answers. About the level of certification in project management, 107 out of 132 chemical engineers consulted do not have any specific training in project management. In respect of the size of the projects the experts have managed, almost half of the respondents participate in projects that lasted more than a year but their budget is less than 1,000,000. Finally, concerning the success factors, it underlines that the chemical engineers do not perceive that the criteria may be never reached (none of the answers was “never” for any of the factors). On the other hand, 3.8% consider that their projects always comply with the deadline, and a 4.5% consider that their projects always comply with costs. Lastly, none of the respondents considers the management of the team to be more appropriate.

8.5.3 Analysis of Use of Project Management Techniques and Tools On the one hand, about the tools and techniques associated with project management with a higher lack of knowledge, the three most unknown are, in order, Decisionmaking (47.7%), House of Quality (43.9%), and the Ishikawa diagram (40.9%). Regarding the tools and techniques that the experts have never used, despite they knowing them, the more outstanding are Earned Value Management (68.9%), Interpersonal skills (59.8%), and Critical Path Method (55.3%). On the other hand, the tools and techniques that the experts used always are mainly the Communications methods (64.4%), Meetings (46.2%), and the Gantt Chart (37.1%). Finally, regarding the most used tools and techniques associated with project management, independently of its degree of implementation, they are Meetings (100%), and Communication methods or Brainstorming (both with a 98.5%).

112

B. Llorens Bellón et al.

8.5.4 Analysis of Use of Agile Methods and Techniques First, regarding the tools and techniques associated with agile methods with the higher lack of knowledge, the three most unknown are, in decreasing order, the Scrumpoker (71.2%), the estimation of duration (58.3%), and the progress chart (34.1%). Concerning the tools and techniques that the experts have never used, despite they knowing them, the more outstanding are daily meetings (55.3%), Work in Progress limit (48.5%), and Dashboard (46.3%). On the other hand, the tools and techniques used always by the experts are mainly Dashboards and Pending worklist (both with a 4.5%), and the Sprint review (3%). Finally, with respect to the most used tools and techniques associated with agile methods, independently of their degree of implementation, Sprint review (97%), Retrospective meetings (94.7%), and Pending worklist (93.9%) are more remarkable.

8.5.5 Analysis of Linear Regression For this case, we considered 16 independent and 4 dependent variables, after a statistical analysis conducted to determine the validity of 30 tools and techniques considered, from the point of view of asymmetry and kurtosis coefficients. Table 8.4 shows the compilation of variables considered for this analysis. Table 8.4 Variables of multiple linear regression model

Symbol

Variable

Symbol

Variable

Y3

Customer expectative

Dependent variables Y1

Schedule’s compliance

Y2

Cost’s compliance Y4

Management of project team

Independent variables X1

Checklists

X9

Risk management

X2

Stakeholder analysis

X10

Critical path method

X3

Audits

X11

Meetings

X4

Quality management

X12

Specific software

X5

Gantt Diagram

X13

Decision-making

X6

Work breakdown structure

X14

Brainstorming

X7

Soft skills

X15

Limitation of tasks

X8

Financial indicators

X16

Dashboard and work packages

8 The Utilization of Project Management Tools …

113

Table 8.5 Results of multiple linear regression analysis Iteration R2 (%) PV 15

9.3

14

14.5

15

9.7

13

18.4

SL

Mathematical expression

6.6·10−4 0.023 Y1 = 0.428 + 0.117 · X5 + 0.251 · X7 3.8·10−5 0.035 Y2 = 0.446 + 0.116 · X2 − 0.235 · X4 + 0.184 · X6 5.2·10−4 0.020 Y3 = 0.455 + 0.100 · X6 + 0.351 · X11 5.1·10−6 0.041 Y = 4 0.160 + 0.194·X1 + 0.134·X5 + 0.378·X11 −0.163·X15

An iterative statistical estimation has been performed in order to determine the value of different coefficients, based on the calculation of parameters through ANOVA, and evaluating the result of determination coefficient (R2 ), F critical value (PV), and Significance Level (SL). After each iteration, the variable with the highest SL value is eliminated (provided it is higher than 0.05), and the process is repeated until all the values are lower than 0.05. Table 8.5 shows the results of this analysis. According to results shown in Table 8.5, it is demonstrated that there exists a mathematical relation between the use of the tools and techniques considered for the study and the success factors. In particular, the deadline’s compliance is explained in 9.3% by the use of Gantt chart and techniques for the improvement of interpersonal skills. On the other hand, cost’s compliance depends, 14.5%, on the stakeholders’ analysis, on the use of tools and techniques for quality management and on the breakdown structure. Customer expectative, in turn, are related to the use of work breakdown structures and meeting at 9.7%. Finally, management of the project team is explained, in an 18.4%, by the use of checklists, Gantt chart, meetings, or Work in progress limit. Nevertheless, there exists between 81.6% and 90.7% of data (depending on success factors considered) that are not explained by models of Table 8.5. This gives rise to the proposal of a new research line based on the determination of factors or tools and techniques that may affect or influence those success factors.

8.6 Conclusions In this paper, the tools and techniques more used by the project managers have been selected. The selection is based on previous researches. The analysis of these researches shows that there is a reduced group of tools and techniques that are common to all the previously considered studies, and that usually are included in the handbooks (such as checklists, analysis of stakeholders, earned value management, breakdown structures, critical path method, and meetings). When the number of answers to the questionnaire was considered as representative of the population, the information was analyzed and statistically treated. The conclusions are the following:

114

B. Llorens Bellón et al.

• The chemical engineers do not have the perception that the success factors may be “never” reached, whereas the perception that they can be “always” reached is very reduced for the majority of the cases (being nil for the “management team” factor). • Regarding the level of use of tools and techniques, those associated with project management have proven to be more widely known and more often used than those associated with agile methods. The lack of knowledge of the latter reaches, in some cases, more than 50%. • Finally, after performing a multiple linear regression models analysis to establish the relationship between the use of the tools and techniques associated with project management considered for the study, the conclusion is that there is a direct dependence between the success factors and the level of use of tools and techniques associated with project management. Specifically, deadline’s compliance is explained in 9.3% for the use of Gantt chart and interpersonal skills; cost’s compliance depends 14.5% on the stakeholders’ analysis, on the use of tools and techniques for quality management, and on the breakdown structure. Customer expectative, in turn, are related to the use of breakdown structures and meetings in 9.7%. Finally, management of the team is explained, in 18.4%, by the use of checklists, Gantt chart, meetings, or Work in progress limit.

References 1. Agencia Nacional de Evaluación de la Calidad y Acreditación (2005) Libro blanco del título de grado en ingeniería química. Agencia Nacional de Evaluación de la Calidad y Acreditación, Madrid 2. Besner C, Hobbs B (2004) An empirical investigation of project management practice: in reality, which tools do practitioners use? In: Slevin DP, Cleland DI, Pinto JK (eds) Innovations: project management research. Project Management Institute, Newton Square, pp 337–351 3. Carvalho MM, Rabechini R (2011) Fundamentos em gestão de projetos: construindo competências para gerenciar projetos. Atlas, São Paulo 4. Cervone HF (2011) Understanding agile project management methods using Scrum. OCLC Syst Serv Int Digital Libr Perspect 27:18–22 5. Ellatar SMS (2009) Towards developing an improved methodology for evaluating performance and achieving success in construction projects. Sci Res Essay 4:549–554 6. Fernandes G, Ward S, Araújo M (2013) Identifying useful project management practices a mixed methodology approach. Int J Inf Syst Project Manag 1:5–21 7. International Project Management Association (2006) IPMA competence baseline, 3rd edn. International Project Management Association, Zurich 8. Kerzner H (2009) Project management. A systems approach to planning, scheduling and controlling, 10th edn. Wiley, New Jersey 9. Morioka S, Carvalho MM (2014) Análise de fatores críticos de sucesso de projetos: um estudo de caso no setor varejista. Production 24:132–143 10. Office of Goverment Commerce (2009) Managing successful projects with PRINCE2, 5th edn. The Stationery Office, Belfast 11. Palacio J, Menzinsky A, López G (2016) Scrum Manager. Guía de formación, 2th edn. Iubaris Info 4 Media SL, Zaragoza

8 The Utilization of Project Management Tools …

115

12. Patanakul P, Lewwongcharoen B, Milosevic D (2010) An empirical study on the use of project management tools and techniques across project life-cycle and their impact on project success. J General Manag 35:41–65 13. Project Management Institute (2013) Guía de los fundamentos para la dirección de proyectos (Guía del PMBoK), 5th edn. Project Management Institute, Newton Square 14. Raz T, Michael E (2001) Use and benefits of tools for project risk management. Int J Project Manag 19:9–17 15. Thamhain HJ (1999) Emerging project management techniques: a managerial assessment. In: Kocaoglu DF, Anderson TR (ed) Portland international conference on management of engineering and technology. IEEE Xplore, Portland, p 243. https://doi.org/10.1109/picmet. 1999.80826 16. Valdivieso CE, Valdivieso R, Valdivieso OA (2011) Determinación del tamaño muestral mediante el uso de árboles de decisión. UPB—Investigación Desarrollo 11:148–176 17. White D, Fortune J (2002) Current practice in project management: an empirical study. Int J Project Manag 20:1–11 18. Wit A (1988) Measurement of project success. Int J Project Manag 6:164–170

Chapter 9

Human Aspects of Project Management: Agent-Based Modeling M. Nakagawa, K. Bahr, and V. Lo-Iacono-Ferreira

Abstract Having tools and techniques of project management are a necessary but not a sufficient condition for project success. If a manager cannot handle people, she or he will have difficulty with managing projects. Many successful projects have teams of people that are involved and committed; however, friction between the team members still occurs due to misunderstandings, conflicts, and personality differences. Project managers must be prepared to deal with these behavioral problems of team members. One way to minimize the impact of behavioral problems is to provide training for all team members (including managers themselves) in interpersonal skills. This area has often been neglected in many organizations; other times, just managers are required to be trained but not team members. This paper presents a unique opportunity for both managers and team members to gain insight into understanding the complex nature of collective behavior. The only way to understand how individual behavioral problems translate into those of a team is to model problems using a bottom-up simulation method like the one presented here. This introductory paper presents basic concepts and a simple application of Agent-based modeling. Keywords Project management · Agent-based modeling · Bottom-up simulation · Individual behavior

M. Nakagawa Colorado School of Mines, 1500 Illinois St. Golden, Golden, CO 80401, USA e-mail: [email protected] K. Bahr Graduate School of Environmental Studies, Tohoku University, Sendai 980-8579, Japan e-mail: [email protected] V. Lo-Iacono-Ferreira (B) Univeristat Politècnica de València, Plaza Ferrándiz y Carbonell 1, 03801 Alcoy, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_9

117

118

M. Nakagawa et al.

9.1 Introduction to Agent-Based Modeling (ABM) Agent-based modeling (ABM) is a versatile simulation technique that has been applied to a number of different disciplines from the behavioral analyses of real-world socioeconomic phenomena to the analysis of the system at far-from-equilibrium such as segregation phenomena of sand grains in deserts. Through all these applications, there exists a common theme and that is the complexity of the phenomena. For example, the diffusion process of a new technology to society is a complex process with many stakeholders (agents) interacting to exchange views [3]. One of the most unique aspects of this modeling method is the possibility of witnessing emergent phenomena that cannot be easily predicted. Unexpected and sometimes counterintuitive emerging phenomena are certainly difficult to produce when a set of well-behaved mathematical equations drive the outcome. The ABM is a bottom-up approach and intends to connect individual behavior to team behavior [4]. In Agent-based modeling (ABM), a system is modeled as a collection of autonomous decision-making entities called “Agents”. Each agent individually assesses its situation and makes decisions on the basis of a set of rules. Agents may execute various behaviors appropriate for the system they represent—for example, information transfer in the case of interacting community members. Repetitive, competitive interactions between agents are a feature of agent-based modeling, which relies on the power of computers to explore dynamics out of the reach of pure mathematical methods. At the simplest level, an ABM consists of a system of agents and the relationships between them that is defined by the interaction rules. Even a simple ABM can exhibit complex behavior patterns and provide valuable information about the dynamics of the real-world system that it emulates [2, 5–7]. In addition, agents may be capable of evolving, allowing unanticipated behaviors to emerge. Sophisticated ABM can incorporate neural networks, evolutionary algorithms, or other learning techniques to allow realistic learning and adaptation [8].

9.2 Example of ABM In this section, an example of ABM is given in the context of natural resources development. In particular, mining operations are becoming increasingly complex both in terms of their engineering aspects and their socio-econo-environmental aspects, but the technical complexity has been mitigated by ever-more sophisticated machinery and rapidly developing computer simulation capabilities. On the other hand, it has been a recognition among the impacted communities as well as the mining industry that the way the mining industry gains the community acceptance is not simply to implement the newest and fastest computer-aided state-of-the-art engineering. In fact, that may be the last thing that a particular mining community needs and desires. Obtaining a social license is getting more difficult and is not straightforward. It requires a very comprehensive understanding of the complex, and sometimes

9 Human Aspects of Project Management: Agent-Based Modeling

119

fragile, dynamics of an ecosystem that includes all stakeholders, villages, and the environment in which they live.

9.2.1 Model Description A brief description of an ABM model project is given. A mining company “X” wants to gain a social license to use some land for waste storage with the possibility of recovering waste energy in the future. The company X purchased the land, and has the approval of the community to store waste there. There are, however, several individuals in the community who have traditionally used this land for grazing, and feel that they should be further compensated for the use of the land. The company X wants to gain social license with these dissenters before they begin to use the land. X needs to know how many of their representatives they need to have speaking to these community members in order to reach a consensus in a given time frame, or how long it would take to reach consensus with a given number of people as representatives.

9.2.2 The Agents The first step in creating an ABM model is to establish who the participating agents are and define the variables that are relevant to the problems at hand. For this hypothetical model, we use two different types of agents, each with their own characteristics and rules for interaction among themselves and with the other agents. Figure 9.1 shows a computational environmental space where these two types of agents can interact. The company “X” agents are stars and the community people are shaped like people. The black box in which the agents are located represents an area in which Fig. 9.1 ABM environment and agents

120

M. Nakagawa et al.

agents may meet and interact. Agents are initially placed randomly in the environmental space and also move around this area randomly, and when they come into specified proximity with another agent, the interaction rules dictate what the agents will do next. Agent “X” The company X is interested in knowing how they can most effectively change the “opinion” of certain members of the local community. Their strategy for doing this is to have a number of their employees set up as “info-agents” that increase the information level of other agents they come in contact with. The variable that governs how much information is given at an interaction is the “opinion” of the member of the community that they come into contact with. Someone who is a “high blocker” will be less receptive to the information that they are given from an X agent. With this in mind, the following scale was devised for determining the level of information transfer in this interaction. The color scale shown is used to differentiate between “attitude” types in the model. Information is given on a 10-point scale. This means that when an X agent meets with a member of the community whose attitude level is “blocker,” the community member receives an increase of 4 points in their “information” variable representing a transfer of the 40% of the 10 information points. Possible values for an agent’s information variable range between 0 and 50, and each 10-point interval (0– 10, 10–20, etc.) represents each of the five opinion categories shown in the scenario where when a community agent gains or loses 10 information points, their attitude changes. Also, it should be noted that if a neutral agent gains 10 points, they become a supporter, and if they lose 10 points, they become a blocker. This continues until all community agents become high supporters. The amount of time it takes for this to happen can be adjusted by adding or subtracting the number of X agents involved in a given simulation run. The number of X “info-agents” can be adjusted using the “infoagent” slider discussed later. For the current model, the number of “info-agents” can be varied from none to eight. Several sets of initial conditions were tested in order to understand the general trends in the behavior of the agents. Three sets of initial conditions are particularly interesting, including a set where community members are initially all neutral, one where they are initially all high blockers, and a varied case that represents a realistic situation in which many are initially high blockers, but others vary from blocker to high supporters. These initial conditions allow us to examine many important features of the model, such as which interactions (individual–individual, individual–X agent, etc.) drive the consensus one way or another.

9.3 Community Individuals The second type of agent in our model is the individual member of the community. They interact with X agents as described above, and they can also interact with each other. Inter-community interactions are governed not only by the “attitude”

9 Human Aspects of Project Management: Agent-Based Modeling

121

variable, but also by the level of “influence” or sway that an individual holds over other agents. The community agents were given an influence value based on a 5-point scale corresponding to five discrete categories of influence defined as Dominant (D), Strong (S), Influential (I), Vulnerable (V), and Marginalized (M). These interactions are defined by the following symmetric matrix. The “attitude” variable shown in the previous table is also important here, since it determines whether agents gain or lose information. For instance, if a dominant individual meets with a marginalized individual, the marginalized individual will either gain or lose 10 information points since the matrix above dictates a 100% transfer of information for this influence combination. If the dominant individual has a lower attitude level than the marginalized individual, then the marginalized individual will lose 10 information points. If the dominant individual has a higher attitude level, then the marginalized individual will gain 10 information points. In contrast, if a vulnerable individual meets with a strong individual, then the vulnerable individual will gain or lose 5 points (50%) based on whether the attitude of the strong individual is higher or lower than that of the vulnerable individual. This interaction accounts for the information transfer that happens within the community, out of the control of the X agents, and is calculated using the so-called “utility function” shown in Eq. 9.1, which represents the interaction between two community-member agents, i and j.  For Ii > I j

    x j,τ +t = 0.25 Ii − I j sgn Ci − C j α xi,τ +t = 0

(9.1)

In Eq. 9.1, Ii and Ij represent the influence of agents i and j, and Ci and Cj represent the opinions of i and j, respectively. α is a scaling factor used for modeling convenience, and is set to a value of ten for the simulations reported here. In this iteration of the model, the influence of each agent is set at the beginning of the model run, and remains static throughout the simulation. In Sect. 9.4, we will discuss ways in which this model may be expanded to include dynamic influence, among other things.

9.3.1 Modeling Environment and Topology The second important attribute of an ABM is related to how agents are chosen for interaction. This is known as the model’s topology. There are several possible choices for topology, depending on the nature of the system being modeled. These include cellular automata, in which agents interact only with direct neighbors in a rigid spatial grid; Euclidean space, in which agents move around a virtual domain and interact when their paths intersect; networks, wherein agents are nodes in a network graph and interact with neighboring nodes sequentially or en masse; GIS, a virtual space overlaid on a mapped feature (such as a transportation or geologic map) and

122

M. Nakagawa et al.

representing a real physical location; and aspatial topology, in which agents have no physical or virtual location, but instead are paired for interaction randomly. In the model described in this paper, a two-dimensional Euclidean topology is used One of the features of this topology is that the probability of interaction of any two agents in a given time step is proportional to their spatial proximity in the Euclidean space in the previous time steps. In other words, as agents move closer together within the interaction space, the probability of an interaction between the two increases in the next time step. This topology was chosen for the simulation in order to introduce an element of randomness to the model, while still allowing for a distribution of interaction probabilities. Space-based topologies such as this and the GIS topology also allow for the introduction of environmental variables (physical and intellectual resources, etc.) to the model. Coupling this spatial topology with the constraining characteristics of a network topology potentially has many interesting features and implications, a discussion of which will follow the model description and results in Sect. 9.4 of this paper.

9.3.2 Architecture of the Simulation Model with NetLogo There are several controls used in this simulation software NetLogo [9]. They are shown in Fig. 9.2, and their uses are defined and explained below. Setup: This button “resets” the model. It clears all variables and re-randomizes agent positions. Go: This button runs the model by calling all the functions that define interactions between agents. Info-centers (Info-agents): This slider gives the number of Q agents present in a given run. It can easily be manipulated so that different cases can be run quickly. Show-information?: When this switch is “on,” information levels of individuals in the environment are shown. Fig. 9.2 Model control panel

9 Human Aspects of Project Management: Agent-Based Modeling

123

Interactions and Ticks: These two counters keep track of the number of interactions and time steps that elapse in a given run. These numbers are important for calibrating the model against real-time interactions. A tick is a time step in which the rules of interaction are implemented. The individual agents start out randomly distributed throughout the interaction space. In the first tick, each agent generates a random number between 0 and 360, turns that number of degrees to the right, and moves forward one space. If another agent already occupies the space, then they interact based on the characteristics and rules given above. If that space is empty, the agent simply waits until the next time step and repeats the process. Ticks have no real-time meaning and are only given significance based on the average number of ticks that occur between interactions, and the reallife amount of time that elapses between interactions. This means that the ticks must be carefully calibrated to time increments in real life.

9.3.3 Decision-Making Since it can sometimes be difficult to visualize or describe the procedures that happen in every time step, Fig. 9.3 shows the algorithm logic that is invoked in every time step. It should be noted that it is possible that no interaction between agents may take place in a time step.

9.4 Case Study In order to understand the team behavior of the agents in the model, several “what if?” scenarios were run by changing specific variables and analyzing the results. To start, the number of X agents was varied from one to eight to see whether more agents could cause a consensus within a given amount of time, and also to see what the marginal benefit would be by adding more agents. Since the results are somewhat randomized (due to the random movement of agents), the number of ticks that it would take to reach consensus should exhibit a normal distribution. A thousand runs were made for each case (1–8 X agents), and the results are shown in Fig. 9.4 below. In Fig. 9.4, the bins represent the number of ticks it took for each run to reach consensus. The frequency is the number of times that the runs reached consensus in a given bin value. One of the interesting features of these histograms is that each run shows the expected normal behavior. What varies between runs is the average number of ticks for consensus and the scale of standard deviations. With one infocenter (X agent), the average time to consensus was around 21,000 time steps. Two info-centers bring the average time down by 48% to 11,000 time steps. This time was further reduced by adding a third and fourth info-center, but after the fourth info-center, the marginalized benefit becomes much smaller. In other words, adding

124

M. Nakagawa et al.

Fig. 9.3 Interaction algorithm

a fifth, sixth, seventh, and eighth X agent does not decrease the number of time steps enough to justify their use. This diminishing return can further be seen in Fig. 9.5. The second scenario that was run was the initial condition comparison mentioned above. Three sets of initial conditions were run. The first was based on data from a real project wherein most of the community members start out as high blockers (case x). In the second scenario, all of the community members were high blockers (case y). In the third, all of the community members start out neutral (case z). The result shows a decaying trend, which would be expected based on the analysis of the comparative histogram of Fig. 9.5. Figure 9.5 shows the resulting curves for the initial conditions test.

9 Human Aspects of Project Management: Agent-Based Modeling

125

Fig. 9.4 Comparative histogram results

Fig. 9.5 Initial conditions comparison

The decreasing marginal benefit can be seen quite clearly in Fig. 9.5. In each case, the benefit of adding more than four info-centers is smaller than that gained by adding the first three. This information is useful since each agent employed by company X represents a cost that could be used elsewhere. In deciding whether a fifth agent should be used to educate the community, it is clear that the fifth agent should be used elsewhere.

126

M. Nakagawa et al.

9.5 On Parameterization and Model Expansion Parameterization One of the biggest difficulties involved in social simulation generally and ABM, in particular, is the challenge of choosing the parameters and values that reflect realworld behavior and will accurately produce realistic phenomena without forcing the model to be overly deterministic (over-constrained). For example, in the example above, the authors chose a linear scale for determining the amount of information transferred between agents (Tables 9.1 and 9.2). These linear values were estimated by a group of experts familiar with the social dynamics of the social group that served as the basis of the original model and case study, but could also be determined using other empirical observations or sociological theory. Other variable values such as opinion and influence of individual stakeholders should be measured using some combination of interview/survey techniques and network mapping. In the case study alluded to the above, the “Social License to Operate” measurement methodology of Boutilier and Thomson [1, 10] was used to gather data on the opinion of stakeholders about a local development project. In addition to measuring the opinion of stakeholders as a basis for model parameterization, Boutilier and Thomson also use snowball sampling to map the stakeholder network like the one shown in Fig. 9.6 below. The resulting graph serves as a convenient basis for measuring the relative influence of stakeholders. There are many quantitative metrics for assessing the importance of any given node in the overall network graph, including betweenness, eigenvector, page-rank, and degree centrality. Using one of these metrics—normalized against some maximum possible Table 9.1 Profile description

Profile of a community individual

Degree of information accepted per interaction (%)

High supporter

80

Supporter

70

Neutral

60

Blocker

40

High blocker

20

Table 9.2 Interaction definition D S I V M

D (%)

S (%)

I (%)

V (%)

M (%)

0

25

50

75

100

0

25

50

75

0

25

50

0

25 0

9 Human Aspects of Project Management: Agent-Based Modeling

127

Fig. 9.6 Agent network graph

value—makes it possible to assign a robust, empirical value to the influence variable of associated agents. Network Evolution Generally, when a network topology is used, it serves as the basis for interaction. However, coupling the measurability of network features with the probabilistic interactions of the Euclidean topology allows the modeler to observe the evolution of the network structure (Fig. 9.7).

Fig. 9.7 Potential evolution paths from one network structure to another

128

M. Nakagawa et al.

The structure of the network is important, because it relates to both the efficiency of that network in performing a given task, and also the distribution of influence measures among the nodes. For example, a strongly hierarchical network structure can diffuse information more quickly in a top-down manner, but can make it difficult for a good idea or practice to travel up the levels and become implemented. For the same reason, such a structure may be prone to opinion clustering, which makes group decision-making and buy-in more difficult to achieve. In addition to the efficiency of different networks, the structure has an effect on the measurable characteristics of a network by definition. For example, it is well known that random networks have a characteristic normal degree centrality distribution, while scale-free networks (commonly observed in both natural and social systems) approximately follow a power-law distribution. From a practical, project management perspective, this means that understanding the structure of the team network is vital in order to understand the social dynamics. Combining these observations and measurement techniques with the Euclidean topology agent-based modeling methodology above has the potential to create a considerable tool for understanding and exploring the sociological dynamics that are such a big part of team-based project management.

9.6 Concluding Remarks ABM can be a powerful tool in understanding individual behaviors, as well as the emergent behavior of the group as a whole. Through understanding the resultant group-level behavior, companies can make decisions based on statistically robust simulations, rather than relying wholly on “gut feelings” and intuition. The model presented here is deficient as it is clear that in a real-life situation, the community will not always come to a consensus on the side of company X. We are happy to inform you that a more sophisticated model with more realistic interaction rules has already been developed and tested. ABM is a complexity management tool, and with increasing reality in the model, the interpretation of results will become increasingly more complex. The readers can replace Agent X with Program Managers and Community Individuals with Team Members when interested in modeling how interacting individual behaviors of team members emerges to collective team behavior. Acknowledgements The authors would like to thank PhD. Juan Ignacio Torregrosa López at the Universitat Politècnica de València for his time to discuss our ABM projects and his encouragement.

9 Human Aspects of Project Management: Agent-Based Modeling

129

References 1. Boutilier R, Thomson I (2009) How to measure the socio-political risk in a project. XXVIII Convención Minera Internacional. Veracruz, Mexico, Asociacion De Ingenieros De Minas Metalurgistas Y Geologos De Mexico, A.C, pp 438–444 2. Doran J (2006) Agent-based modelling of ecosystems for sustainable resource management. In: Multi-agent systems and applications. Accessed from http://www.springerlink.com/index/ l4j1de865ym56ecv.pdf 3. Fujiono H (2011) The diffusion of innovation in the mining industry: agent-based modeling and simulation, PhD Thesis, Colorado School of Mines 4. Gilbert N (2008) Agent-based models. Sage Publications, Inc 5. Laurence D (2011) Establishing a sustainable mining operation: an overview. J Cleaner Prod 19(2–3):278–284. https://doi.org/10.1016/j.jclepro.2010.08.019. (Elsevier Ltd.) 6. Lucena J, Schneider J, Leydens JA (2010) Engineering and sustainable community development. Synthesis lectures on engineers, technology and society, vol 5, pp 1–230. https://doi.org/ 10.2200/s00247ed1v01y201001ets011 7. Macal CM, North MJ (2005) Tutorial on agent-based modeling and simulation. In: Winter simulation conference. Accessed from http://oai.dtic.mil/oai/oai?verb=getRecord&metadataP refix=html&identifier=ADA488796 8. Macal CM, North MJ (2010) Tutorial on agent-based modelling and simulation. J Simul 4(3):151–162. https://doi.org/10.1057/jos.2010.3 9. NetLogo (2012) Accessed from http://ccl.northwestern.edu/netlogo/ 10. Thomson I, Boutilier RG (2011) Social license to operate. In: Darling P (ed) SME mining engineering handbook, 3rd edn. Society for Mining, Metallurgy, and Exploration, INC., Englewood, CO, pp 1779–1796. http://doi.org/10.1007/s10551–015-2976-7

Chapter 10

Application of the DBR Approach to a Multi-project Manufacturing Context U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Abstract In certain project-based contexts, the most effective managerial approach may be different from a “pure” Project Management approach. Thus, under certain circumstances, a Production Management approach may be more suitable. This research offers some important insights into the problem of addressing contexts in which both, Project and Production Management characteristics, coexist. This paper presents a real-world case study in an industrial company devoted to designing and manufacturing special cutting tools. Depending on the items in the orders, different lead times, resources, and batch sizes occur, leading to different levels of complexity. Production Management approaches are preferred for standard items, whereas complex items are better managed by applying a Project Management approach. In an attempt to maximize effectiveness, the management followed the suggestion to apply the DBR production approach in order to harmonize the capacity of the plant and the requirements of the orders. Thus, this research deals with the implementation of this methodology in the abovementioned environment. The consequences, limitations, and conclusions are discussed, offering insights for further research. Keywords Project · Production · DBR · TOC

10.1 Introduction The traditional perspective views Operations Management (OM) as responsible for managing the process of transforming inputs into outputs. More recently, focus has been placed on a general and inclusive view of processes, aiming for mutual consistency of the strategies that compose the system [11]. Thus, coherence with business U. Apaolaza Perez de Eulate (B) · A. Lizarralde Aiastui Industrial Organisation, Mechanical and Industrial Production Department, Faculty of Engineering, Mondragon Unibertsitatea, Loramendi, 4, 20500 Arrasate–Mondragón, Spain e-mail: [email protected] A. Lizarralde Aiastui e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_10

131

132

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

unit strategy and functional areas is aimed at in particular. The benefits of linking strategy and operations are well documented in the literature [15]. Aligning strategy and operational capability is essential in order to prevent those problems or limitations that exceed the operational field [15]. Moreover, this way a company may gain a competitive advantage [20]. Given that a company cannot be good at everything, Operations Strategy (OS) looks for an understanding of the respective impact of the trade-offs within a system [20]. That way, an organization can create a resource configuration prioritizing the most important factors [16]. For that purpose, two aspects have been identified as key: focus and “operationalization” of that focus [20]. Focusing means deciding on which objectives the company should concentrate on, whereas operationalization implies consistently deploying the strategy to the operational level. Thus, once the focus has been identified, internal strategic consensus is required in order to achieve organizational alignment toward the objectives [3]. Although the benefits of focusing have been proven [20, 21], its application may be complex. OS and OM have traditionally been linked to repetitive operations, while Project Management (PM) literature has paid little attention to the OS/OM approach [15]. Hayes and Wheelwright [13] describe the relationship between product volume and process in their product–process matrix. Chase, Aquilano, and Jacobs [4] highlight the usefulness of this approach to connect marketing and production strategies, also introducing the relevance of an appropriate flexibility-vs-cost balance. According to these perspectives, Maylor et al. [15] define project-based operations (PBO) as low-medium volume and medium-high variety operations, and typically adopt job-shop or batch production. Thus, PBOs involve high flexibility and high costs if compared to other efficiency-oriented models. Maylor et al. [15] claim that the application of OS to PBOs is scarce. Hence, their contribution to filling a gap in the literature seems to confirm the convenience of a focus-based OS approach, but they also point toward a need for additional ingredients such as alignment and consistent configuration of resources. However, their findings are not conclusive. Therefore, they suggest further research in certain fields with special attention to focusing on organizational alignment rather than pursuing local optimization of isolated issues by individual parts of the organization. In brief, the application of manufacturing-management approaches in project contexts is not new. A number of authors have analyzed the characteristics that, depending on the scenario, a management system should have. However, recent research calls for further study in this field, in particular, the application of TOC [10, 23]. The Theory of Constraints (TOC) is a management philosophy based on systemic thinking [2]. Given its features, it offers a suitable framework for further research. It provides a global perspective of the relationship between OM and the organization. In addition, TOC consistently orients the components of a system toward its goal [5]. However, as in the case of OM, literature looks at TOC mainly in the context of its application to repetitive operations contexts [11, 17, 24]. The main underlying idea is that every system has at least one constraint that limits its performance [9]. One of its strengths is focusing on the constraints as a basis for the management and improvement of the system. According to Goldratt

10 Application of the DBR Approach …

133

and Cox [9], a constraint or bottleneck (BN) “is anything that limits a system from achieving higher performance versus its goal”. Therefore, it is critical to identify the BN(s) and manage the organization according to the impact of said BN(s) on the goal. Thus, an improvement of the BN’s performance entails an improvement of the entire system. On the contrary, the capacity of non-constrained (NC) resources—i.e., the rest of resources—is composed of both productive and idle capacity [14]. From a TOC’s perspective, idle capacity is not considered an excess of capacity but rather a margin of capacity that protects the system against uncertainty. The use of this idle capacity as productive capacity would not only not improve throughput, but would also unnecessarily increase inventory. The guide for implementing the TOC principles in companies is known as the Process Of OnGoing Improvement (POOGI), composed of two prerequisites and Five Focusing Steps (5FS) [18, 24]: • • • • • • •

First prerequisite: define the system and identify its purpose. Second prerequisite: define the measures to align the system with the purpose. Step 1: identify the constraint(s) of the system. Step 2: Decide how to exploit the system’s constraint(s). Step 3: Subordinate everything else to the above decision. Step 4: Elevate the system’s constraint(s). Step 5: If in any of the previous steps a constraint is broken, go back to step 1. Do not let inertia become the next constraint.

Physical constraints tend to be less usual than other kinds of constraints, such as inadequate policies, procedures, or ways of thinking; lack of market demand, or poor relationships with suppliers [8, 11]. Physical constraints must be exploited, i.e., managed as effectively as possible in terms of the goal, whereas managerial constraints must be eliminated and replaced with a policy which is aligned with the goal [18]. The scheduling mechanism of TOC, Drum-Buffer-Rope (DBR), is a powerful production planning and control technique in shops with BNs (also called Drums), oriented toward addressing market or physical constraints [23]. One strength of this approach is its simplicity: in order to control the whole system, precision is required only in the Drum [12]. Thus, once the constraint has been identified, DBR synchronizes production with customer requirements through the Rope (i.e., the connection between the input of work and the BN) [23]. Finally, DBR uses Drum and Shipping buffers (“time or a time-equivalent amount of work-in-process”) to enable this synchronization while protecting the throughput of the system from variability with reduced levels of WIP [23]. The benefits of applying TOC-DBR have been documented in the literature [17, 24]. Significant results have been reported in terms of inventory, WIP and lead-time reduction, or delivery performance improvement, among others. Similarly, much research related to the effectiveness of DBR focuses on simulations under specific conditions. Importantly, it is concluded that the severity of the constraint is a decisive aspect when choosing a release method in job-shops with a BN.

134

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Although these results are mainly related to repetitive manufacturing, they are especially promising when it comes to DBR in make-to-order (MTO) contexts [22]. However, in spite of the existence of such extensive bibliography discussing the use of TOC, there is a lack of real-world works related to current job-shop contexts. Thus, recent research works encourage further research to validate their findings. One of these areas is the study of additional actual complex cases including many products [15, 23]. Another topic of interest is the impact of the mix of jobs with and without BN usage in production processes in job-shop systems [15]. According to Fernandes et al. [7], controlling NC resources, as long as their throughput remains at a level similar to the level of throughput in the BN, leads to better performance. However, they did not observe significant differences when the capacity of NC resources is more than 20% higher than the BN capacity. This conclusion is consistent with Gupta and Boyd [11], who argue that, in spite of having some capacity margin, NC resources may occasionally turn into limitations due to deficient planning. Therefore, they advise considering other potential limitations when creating production plans. In brief, the existing literature is focused on the identification and exploitation of the current BN, and tends to consider the use of only one fixed BN as an important limitation of their research. In contrast, a number of authors claim that the constraint should be selected from a strategic perspective [5, 8, 9, 19]. In addition, the application of the two prerequisites and the five focusing steps remains insufficiently studied [11].

10.2 Objectives The present study deals with the practical implications of applying TOC to a jobshop environment in a real-world context. We agree with the idea of integrating a strategic perspective to select and maintain the constraint of the system. Given the lack of research, in particular, in PBO contexts, this paper contributes to the discussion so far lacking in the literature, as described by other researchers. Thus, the research questions are: Can TOC-DBR constitute a suitable approach for PBO contexts? How should TOC-DBR be implemented? For that purpose, we evaluate the overall impact and the practical implications of the implementation of the TOC-DBR approach. Hence, the focus is on the POOGI of TOC within an MTO-PBO context characterized by those limitations and research needs. Firstly, we present the case study and the research methodology. Secondly, we suggest an adaptation of the POOGI consistent with the strategic perspective for the BN selection. Then we report the implementation process step by step. Finally, we present and discuss the results, gaining insights, and suggesting areas for further research.

10 Application of the DBR Approach …

135

10.3 Case Study and Research Methodology 10.3.1 Case Study The present case study analyzes a company located in northern Spain, which designs, develops, and manufactures special cutting tools. Originally devoted to manufacturing tools for the woodworking industry, as a result of their growth strategy the company has expanded their activity to other manufacturing industries. Consequently, on-time delivery (OTD) and quality have become key aspects of survival and success. The context is characterized by customer orders demanding varying types and numbers of tools, involving different processes. Some tools are standard, but others are special and require design work, but all the items included in the same customer order must be supplied in one delivery. In addition, due to the high cost and variety of raw materials, only a limited amount of raw material stock may be held—if available at all. Therefore, depending on the product, raw materials may require extraordinary purchasing efforts. The main values of the company—a member-owner cooperative—are fair and sustainable human development, and decision-making based on consensus. Consequently, the company implemented a team-based management model, successfully implemented in other companies. Thus, apart from the staff, there are four teams: Design, Machining, Welding, and Sharpening. From a functional perspective, teams are equivalent to sections, and the main trait of this model is self-management. However, team performance is oriented toward achieving team objectives. On the one hand, teams have to complete the production orders of the production program. On the other hand, team performance is assessed through a metric called “Local Invoicing”: an estimate of the value of the weekly production of a team. As a result, the teams aim at maximizing the weekly departmental invoicing (WDI) by producing the orders of the production plan. We selected this case because of its fit with the context and needs described in the literature review. In brief, the company works on 100% MTO within a job-shop; additionally, the variety of products and resource needs to entail a complex context; finally, due to the problems to comply with their objectives, the company decided to implement a new model based on the TOC-DBR principles.

10.3.2 Research Methodology Qualitative research aims at a deep understanding of the phenomenon [25]. According to Yin [25], case studies are useful when “how” and “why” questions are asked about a contemporary set of events over which there was little or no control. Furthermore, they allow studying the phenomenon in its natural context, overcoming the limitations of other research methods. Thus, this research is defined as a qualitative inquiry with exploratory and descriptive purposes. It consists of a holistic single case study.

136

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Fig. 10.1 Research methodology [1]

Considering the research question, we consider the following proposition: “the DBR methodology can be successfully applied to PBO contexts”. This study was conducted combining the implementation methodology presented in Sect. 10.3.3, and the research methodology of Apaolaza and Lizarralde [1] shown in Fig. 10.1:

10.3.3 The Implementation Process The starting points for this research are the general POOGI of TOC and the integration of a strategic perspective for the BN selection, implementing an adapted methodology defined by the researchers (Table 10.1). As a result, this process includes the two prerequisites and steps 1–3 of the 5FS. Based on the process defined in Table 10.1, the implementation process is explained gradually. Analysis of the System The first step aims at gaining a profound understanding of the current management system, with a special focus on the manufacturing process and policies, resource capacities, and performance metrics. Despite the fact that the management model had been successfully implemented in other companies in the past, in the beginning Table 10.1 Implementation methodology based on the TOC Process of OnGoing Improvement Stage

Process

Purpose

1. Analysis of the system

Analyze the production process, resource capacities, batch policies, metrics, and parts routes

Familiarize with the company’s policies, rules, and metrics

2. Load versus capacity analysis

Compare the demand in a period against the resource capacity in the same period

Identify the current limitations of the system

3. Strategic decision: placing the constraint within the system

Research team and Select placement of the Management analyze where to constraint within the system position the constraint within the system

4. Decide how to exploit the system

Define the policies when Design aligned policies, rules, planning the use of the BN, as and metrics to measure the well as subordinate the rest of performance of the system the system

10 Application of the DBR Approach …

137

Table 10.2 Synthesis of the initial situation Overall view

Detailed view

• Low OTD rate (Objective: 80%; achieved: 50 to 70%)

• Delays often occur at initial stages

• Average delivery time (Objective: 4 weeks; achieved: 4 to 6 weeks)

• Lack of visibility: load versus capacity, priorities and overall perspective

• Delivery time: key to success (Strategy: delivery time vs price balance)

• No planning-programming tool • Lack of organizational capabilities • Problems with suppliers (time, quality) • Lack of qualification (need for training/learning)

of the project the staff reported significant problems in achieving their targets. The result of the analysis that was developed during the first stage is summarized in Table 10.2. As explained above, the company was trying to maximize its performance and output by maximizing the WDI metric. Thus, teams developed their local weekly plans based on the production orders but oriented toward the local performance metric. The effect of this approach was “as soon as possible” order launching, regardless of whether there was or was not sufficient capacity in the system, consequently flooding the workshop with an excess of WIP. As a result, other drawbacks occurred: it was difficult to control the WIP; cash flow decreased; lead times exceeded the standards; and OTD was below the required rate. Thus, the results reported by the company are consistent with the literature, according to which high resource usage rates and high OTD levels are difficult to achieve in this kind of environment [6]. Load versus capacity analysis The second step aims at identifying the current limitations of the system. For that purpose, capacity and workload are compared over a defined period. In this particular situation, the research team decided to analyze the data of the past two years, with a particular focus on the second year. It was found that the Design team showed a very low OTD performance, however, according to the data the team was functioning at below 50% of capacity. In addition, the resource with the highest average saturation was the Sharpening team, i.e., the three sharpening machines. On average, 90% of production drew on this resource. It was also found that, depending on the products composing the BN program, occasionally other resources seemed to turn into a constraint for short time periods. Finally, sometimes delays and quality problems in supplies resulting in delayed deliveries had been reported. Strategic decision: placing the constraint within the system To be consistent with the strategic view, the researchers advised considering certain criteria when analyzing potential BNs. Said criteria are based on the abovementioned findings from the literature. Firstly, the capacity of the BN is limited due to reasons

138

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

such as need for high investment, complications when buying or contracting additional capacity, etc. Secondly, the know-how and/or strategic knowledge of the BN are key to the activity of the company. Thirdly, its workload should remain relatively stable despite product-mix changes. Finally, the BN processes the majority of the products. The analysis led to the decision that the tool-sharpeners would be the BN of the system. Indeed, the realized investment was high; tool sharpening is a significant added-value operation from the product perspective; the vast majority of products requires sharpening in these machines. Consequently, the percentage of the total workload tends to remain stable. In addition, selecting the tool-sharpeners as the constraint of the system entailed other measures, as some of the problems identified in the previous steps had to be solved in order to reach an effective performance in the BN. The first decision was to divide the system into two subsystems: Design and Workshop. The different nature of customer orders may involve fluctuating and demanding capacity needs in these systems. Thus, it is advisable to manage both subsystems independently but aligned with mutual final objectives. Consequently, the DBR approach would be applied to the Workshop—the place in which the BN is located—and the Design team would function as an internal supplier, independent but driven by a new main performance metric: OTD. That way, the Workshop would guide the management of the Design team while remaining aligned with global objectives. As a result, the Workshop had to be programmed in line with internal lead times, setting feasible delivery dates for both the Design team and external suppliers. The division of the system into two subsystems entailed the need to reorganize the Design team. The objective was to build a team capable of servicing the three types of activities that required Design resources: design tasks, support for the sales staff, and support for the ongoing activities in the workshop. A user-friendly approach was also required. The company decided to implement a number of measures to redress the situation: staff was trained to enable a different resource assignment policy, thus gaining flexibility. Then, the department redefined their routine, setting up a strict support schedule for Sales staff and Workshop. Finally, the need dates for design work in the Workshop were monitored, also setting a new priority: design tasks would be given the highest priority before support activities. Decide how to exploit the system Exploiting the BN goes beyond maximizing the use of the constraint. According to the analysis of the system, OTD is a key ingredient, and must be taken into account when deciding how to best use the scarce resource. In addition, other potential limitations of the system, which may hamper the proper use of the BN, require identification [11]. Thus, two main lines of action were developed: product analysis and capacity analysis. Product analysis identified the features of the products, as well as their manufacturing processes and resource needs. The analysis of the current manufacturing process revealed that there was a significant potential for internal lead-time reduction. The manufacturing criteria of the company caused the transference batches to

10 Application of the DBR Approach …

139

be the same as production batches, leading to long internal lead times. An in-depth study of the processes uncovered a significant lead-time reduction potential. Once the processes were analyzed, the research team created and defined seven product families, based on the production data of the past 2 years and according to DBR (e.g., BN and Shipping buffers). This way, each part of an order would be analyzed and assigned to one of these families, adopting their respective generic buffer values. Furthermore, the order deadline and the estimated BN capacity consumption would also be required in order to program the corresponding production order. Interestingly, some families consisted of products that do not consume BN capacity—also called “free-jobs” [23]. Regarding the capacity analysis, we further examined the initial Load versus Capacity Analysis (see step 2 in Table 10.2). As a result, two lines of action were developed: the use of the BN and the identification and management of other critical points (i.e., those non-constrained operations in which workload peaks occur occasionally). The cause for workload peaks in these critical points is an excess of production orders which do not consume any BN capacity. That way, non-constrained resources may occasionally be required over capacity, causing anticipated problems. Thus, the researchers developed a tool to enable production management according to DBR principles. Thus, the production program is focused on the BN. It is oriented toward OTD of customer orders, via OTD of production orders. Having received the information about customer orders and the production orders, the planner creates a program for the BN. This program results in three deadlines to comply with: customer delivery date, BN delivery date, and Design delivery date. Thus, the teams arrange their activity based on the priorities imposed by those deadlines. The deadlines enable monitoring the current priorities of each team according to global priorities, as well as displaying the remaining time to complete each task. On the other hand, the historical information facilitated the identification of the critical NC resources: lapping, welding, and electrical discharge machining (EDM) processes. In order to avoid peaks in those resources, the planner must keep their workload below certain limits. In this particular case, the consequence of this condition was the definition of workload limits for lapping, welding, and EDM processes. Thus, a warning system was added to the tool to prevent resource overload at these critical points. Similarly, based on the historical information, the company carried out an analysis of supplies and suppliers that had been a source of delays and quality problems in the past, causing delivery delays. The company took the following actions to prevent the abovementioned problems: stock of raw materials, renegotiation of the supply conditions, and search for alternative suppliers.

10.3.4 Modeling the System Starting from the initial work described in Sect. 10.3.3 (the implementation process), the research team developed the system to govern the entire company according to

140

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Informaon Flow Drum Buffer, Rope2

Launching Programme

DRUM Programme

Shipping Buffer, Rope 1

Delivery Programme

Design EDM

Welding Raw Material

Drilling

Sharpening

Final works

Product

Machining

Crical materials: • Stock of raw materials • Supply condions • Alternave suppliers

Crical points: • Electrical Discharge Machining (EDM) • Welding

Crical points: • Lapping

Material Flow

Fig. 10.2 Components of the DBR system (Workshop)

DBR. Due to the volatile nature of orders, the research team divided the system into two subsystems, Design and Workshop, to be able to react more effectively to changing demands. However, to make global functioning effective, both subsystems must be connected and work in an aligned manner. From a global perspective, there is a flow of information as well as a flow of materials (see Fig. 10.2). The applied DBR programming logic is iterative, as the system is affected by changing factors such as workload or product mix. The basic programming logic is further explained below: • The core of the system is the Drum, namely the three sharpening machines. The Drum and Shipping Buffers protect the system against variability. Finally, the Ropes link the Drum to both, the beginning and the end of the production process. Their purpose is to stagger and balance the workflow in the Drum, as well as guaranteeing OTD. • The planning (information flow) is based on the needs derived from the orders, the basis for developing the BN program. This program is developed taking into account capacity, dates of need, characteristics of the products, and priorities. Once the BN program is completed, the dates of need for the different inputs are displayed (designs, materials, components, etc.), and finally, the plan of each area is created. • The execution is carried out in reverse, following the respective plans. Thus, the entire system subordinates its activity to the needs of the BN program, ensuring that everything necessary reaches the BN in accordance with the deadlines required.

10 Application of the DBR Approach …

141

10.3.5 Keys to Managing the System Workshop subsystem: As explained in the literature, DBR facilitates the development of a production schedule based on the BN. The rest of the system is then subordinated to the main program, and organized accordingly. The following aspects stand out: • Procurement needs: the supply delivery deadlines are based on the general program and in line with the characteristics of each product and product family. We identified delivery delays and quality deficiencies as the main risks from the suppliers’ side. If the problem occurs in pre-BN activities, it may also cause BN capacity losses. • Production needs: each area has a local program aligned with global priorities. The need dates in the BN and the scheduled delivery dates govern the system. Thus, the teams organize the resources in accordance with these dates and priorities. Control of the workload in critical NC resources (i.e., lapping, welding, and EDM processes) is essential. When integrating free-jobs into the general program, it is crucial to ensure that resource overload does not occur. Design subsystem: It services several activities of different nature: design, offers, and assistance to production. When orders require design, they are scheduled within a period of weeks. However, other activities are not as predictable, and in general, the time span available for scheduling them is a few days or even less. Thus, a management system is necessary to schedule and deliver all the activities on time. The anticipated availability of delivery dates provides flexibility in the use of resources, so that these can be adapted to different situations. The main risk at this stage is related to managing capacity. Thus, the Design team must develop its local program only when both, maximum capacity and the current free capacity, are identified.

10.4 Results and Findings The aim of this research is to evaluate whether TOC-DBR is a suitable tool for PBO contexts, rather than being aimed at achieving quantitative results. Nevertheless, it is important to highlight the context in which the implementation was developed: during the observed period, the company increased its sales by 15% and tripled its annual earnings without significant staff variations. Obviously, the adoption of a TOC-DBR approach is not the only reason for such a result, but it could not have been reached under the previous model. Thus, the main result of this research is the successful performance of the TOC-DBR approach in a PBO context.

142

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

Indeed, the systemic approach of TOC provided a different perspective, enabling the alignment of the entire company toward mutual objectives and priorities. The two prerequisites of the TOC served to set the goal (“Earn money now and in the future”) as well as the main criterion to manage the company, namely OTD, as the main metric to measure its performance. The 5FS of TOC establish how to maximize the results (i.e., the managerial approach of the company). Consequently, the company is now capable of reacting more effectively to the changing needs of the market. Consistent with the new approach, the system was divided into two subsystems. The Design subsystem changed its internal organization and was adapted to the new needs, thereby improving their response to the various external demands. Similarly, the Workshop subsystem was adapted to the DBR logic. As a result, the managers and the teams responsible for managing orders and resources gained visibility and alignment, resulting in significant lead-time reductions. We would like to point out that the main causes that restricted the system’s performance did not seem to be of a physical nature. Instead, the WDI metric was identified as the main cause for organizational misalignment. Consequently, the metric was replaced by the weekly compliance level of the programs. Furthermore, the company identified those supplies that had caused delays in the past, and implemented a set of measures to avoid such delays. A further remarkable result is the development and successful implementation of a specific tool that empowers the management in accordance with DBR. The spreadsheet we designed is an important means of providing information for aligned planning and execution management. Its user-friendliness makes planning and monitoring tasks faster and easier. In addition, the tool links subsystems, teams, and staff in turn providing updated and reliable information. This way, local decisions are indeed consistently subordinated to global priorities. Finally, we identified lead-time reduction as a key to reaching a competitive advantage. Indeed, delivery time was identified as an important factor in the market. The lead-time improvements that the company has achieved up to now have not yet been turned into delivery time reductions. This marks a clear opportunity for the company to address its original issue, namely unreliable OTD performance. In other words, if the company continues implementing future dynamic development processes, not only will they improve their OTD performance, but also gain a significant competitive edge in the market.

10.5 Conclusion The present research faced those problems identified by other researchers, who had asked for further research. Firstly, the context of the company is considered complex, as it involves a vast diversity of products and several process routings. The existence of free-jobs entailed the risk of workload peaks in NC resources. In addition, a number of nonphysical constraints were identified. According to the literature, the

10 Application of the DBR Approach …

143

particular situation defined by the amount, variety, and apparent complexity of the problems suggested a TOC-DBR approach. The short time span that was required for the implementation project, as well as the quality of the results, confirms the suitability of this approach. Thus, this study confirms the proposition that the DBR methodology can be successfully applied to PBO contexts. In particular, this case demonstrates that under certain circumstances the said approach is suitable for job-shop environments. Consequently, these systems might benefit from the strengths of DBR. From the perspective of the researchers, the aspects related to the implementation of the TOC are of special interest, particularly the application of the POOGI. Drawing on the analysis of the historical information of the company as well as on the result of the project, we conclude that both TOC and the implementation methodology are suitable for this kind of context. Furthermore, the strategical selection of the BN and its stability (i.e., keep the same BN over time) provide a simple and effective way to systemically govern the system. The application of the TOC principles supposed a rupture with respect to the previous scenario: although the general objective was the same, in the past its achievement was based on attaining local objectives, which in turn were measured in terms of WDI. The problem with this metric was that it was a source of organizational misalignment. Indeed, decisions oriented toward improving resource efficiency often led to better results according to the WDI indicator. However, this apparent improvement complicated the transitions between departments, thereby obstructing the workflow. Departments were forced to react and change their local plans in order to get good results, again according to the local metrics. The consequence was low OTD performance. In other words, the lack of mutual consistency of local metrics led to organizational misalignment downstream, causing overall poor results. Starting from the strategy of a company, the two prerequisites of TOC assist the company with setting a global orientation and defining an appropriate management model. Thus, the company has to establish its goal and turn it into objectives. Then it has to define the tactics to address the strategy, involving key aspects related to the market, the product, and resources, among others. Finally, consistent metrics must be defined and implemented in order to guide managers and resources toward general objectives. Importantly, selecting the BN is essential, as it limits the total capacity of the system. Deciding on the suitable BN goes beyond identification: it is a strategic decision that may involve additional measures to adjust the capacity in the entire organization. The underlying idea is to prevent BN changes by taking measures throughout the rest of the system, such as adding resources, subcontracting, and rescheduling tasks. This way, the BN will remain unchanged, and thus maintain suitable conditions for DBR management. In other words, a complex production system as described here can benefit from certain advantages experienced in repetitive operations contexts. The organizational alignment that the DBR methodology requires results in a BN-based production plan, which in turn results in subsidiary local plans oriented

144

U. Apaolaza Perez de Eulate and A. Lizarralde Aiastui

toward overall objectives. In other words, the entire system is consistently subordinated to the goal of the company, thereby improving overall results. This means that instead of maximizing local efficiencies, it is proactively aimed at a suitable workflow throughout the entire organization. The transition toward a DBR-based approach requires a profound change of mindset. Apart from expert guidance and support, it requires a real and effective commitment of the entire organization.

10.5.1 Limitations and Suggestions for Further Research The results of this research are in line with the promising results observed in previous experiences. However, the amount of real-world-based research in PBOs related to this perspective is still limited, and the need for further research is obvious. From our perspective, additional case studies in different organizations are of special interest in order to achieve a profound understanding of the underlying factors in these contexts, especially in low-medium volume and medium-high variety operations. Regarding the present investigation, the researchers progressively transferred the know-how and the responsibilities to the staff in order to provide the organization with the required capabilities to manage the system. That way, now the company is able to work autonomously according to the DBR system. However, we observed two areas that require special attention: • As mentioned above, the commitment of the organization is essential for an effective implementation of TOC-DBR. The sustainability of this approach requires maintaining this commitment over time. It is particularly crucial to respect the DBR logic, i.e., to keep the entire system aligned toward the main objectives. Thus, situations that may jeopardize the adopted managerial approach should be studied to strengthen future implementations. • Changes must be anticipated as a result of new demands from the market. In order to remain competitive, the company will have to keep adapting to the current approach. Although the new capabilities of the company allow them to work according to the current system, these may not be good enough to improve and/or adapt to changing needs. A question that remains open is whether the company in the future continues to fall back on and take advantage of external support as they did in this case, or whether they decide on developing internal procedures that allow them a continuous and autonomous adaption to an ever-changing external context.

10 Application of the DBR Approach …

145

References 1. Apaolaza U, Lizarralde A (2017) Comprehensive reorganization of project management: a case study BT—lect note manage ind res, In: Ayuso Muñoz JL, Yagüe Blanco JL, Capuz-Rizo SF (eds) Springer International Publishing, Cham, pp. 3–17 2. Boyd L, Gupta M (2004) Constraints management: what is the theory? Int J Oper Prod Man 24:350–371 3. Boyer KK, McDermott C (1999) Strategic consensus in operations strategy. J Oper Manag 17:289–305 4. Chase RB, Aquilano NJ, Jacobs, FR (1998) Production and operations management. Irwin/McGraw-Hill, Boston 5. Cox JF, Blackstone JH, Schleier JG (2003) Managing operations: a focus on excellence. North River Press, Great Barrington 6. Darlington J, Francis M, Found P, Thomas A (2015) Design and implementation of a Drumbuffer-rope pull-system. Prod Plan Control. https://doi.org/10.1080/09537287.2014.92640 7. Fernandes NO, Land MJ, Carmo-Silva S (2014) Workload control in unbalanced job shops. Int J Prod Res. https://doi.org/10.1080/00207543.2013.827808 8. Goldratt EM (1990) What is this thing called theory of constraints and how should it be implemented?. North River Press, Great Barrington 9. Goldratt EM, Cox JF (1984) The goal: excellence in manufacturing. North River Press, Great Barrington 10. Golmohammadi D (2015) A study of scheduling under the theory of constraints. Int J Prod Econ. https://doi.org/10.1016/j.ijpe.2015.03.015 11. Gupta M, Boyd LH (2008) Theory of constraints: a theory for operations management. Int J Oper Prod Man. https://doi.org/10.1108/01443570810903122 12. Gupta M, Snyder D (2009) Comparing TOC with MRP and JIT: a literature review. Int J Prod Res 47:3705–3739 13. Hayes RH, Wheelwright SC (1979) Link manufacturing process and product life cycles. Harvard Bus Rev 57:133–140 14. Lockamy A, Cox F (1995) An empirical study of division and plant performance measurement systems in selected world class manufacturing firms: linkages for competitive advantage. Int J Prod Res 33:221–236 15. Maylor H, Turner N, Murray-Webster R (2015) It worked for manufacturing…!: operations strategy in project-based operations. Int J Proj Manag. https://doi.org/10.1016/j.ijproman.2014. 03.009 16. Nigel S, Michael L (2008) Operations strategy. Prentice Hall Financial, Harlow 17. Panizzolo R (2016) Theory of constraints (TOC) production and manufacturing performance. Int J Ind Eng Manag 7:15–23 18. Rahman S (1998) Theory of constraints: a review of the philosophy and its applications. Int J Oper Prod Man 18:336–355 19. Ronen B, Pass S (2008) Focused operations management: achieving more with existing resources. Wiley, Hoboken 20. Skinner W (1974) The focused factory. Harvard Business Review, Brighton 21. Skinner W (1969) Manufacturing-missing link in corporate strategy. Harvard Business Review, Brighton 22. Stevenson M, Hendry LC, Kingsman BG (2005) A review of production planning and control: the applicability of key concepts to the make-to-order industry. Int J Prod Res 43:869–898 23. Thürer M, Stevenson M, Silva C, Qu T (2017) Drum-buffer-rope and workload control in high-variety flow and job shops with bottlenecks: an assessment by simulation. Int J Prod Econ. https://doi.org/10.1016/j.ijpe.2017.03.025 24. Watson KJ, Blackstone JH, Gardiner SC (2007) The evolution of a management philosophy: the theory of constraints. J Oper Manag 25:387–402 25. Yin RK (2009) Case study research: design and methods. SAGE Publications, Los Angeles

Chapter 11

A Bibliometric Analysis of the Professional Skills in the Scientific Journals of Project Management J. R. Otegi-Olaso , J. R. López-Robles , and N. K. Gamboa-Rosales

Abstract Project Management has become a core activity for organizations and professionals, independently of their area of expertise, bearing in mind that the objective of Project Management is the application of knowledge, skills, and techniques to implement, manage, and develop projects effectively and efficiently. It is necessary for organizations and professionals to develop a results-oriented culture, effective decision-making, and collaboration through the development of professional skills associated with it. To this purpose, there are different competency development models, including the ICB 4.0 model of the International Project Management Association (IPMA). In this way, this research proposes a bibliometric analysis of the main scientific journals within the Project Management field, in order to know the main research lines and see how these are related to the ICB 4.0 model. To do that, bibliometric techniques and tools have been used to analyze and visualize the intellectual structure of the scientific journals of project management. Keywords Bibliometric performance analysis · Co-occurrence mapping · ICB 4.0 · Strategic Intelligence · Professional skills · Science mapping

J. R. Otegi-Olaso (B) Department of Graphic Design and Engineering Projects, University of the Basque Country (UPV/EHU), Alameda Urquijo s/n, 48013 Bilbao, Spain e-mail: [email protected] J. R. López-Robles Postgraduate Program of Engineering and Applied Technology (National Laboratory CONACYT-SEDEAM), Autonomous University of Zacatecas, Av. Ramón López Velarde Col. Centro, 98000 Zacatecas, Mexico e-mail: [email protected] N. K. Gamboa-Rosales Academic Unit of Electrical Engineering, CONACYT - Autonomous University of Zacatecas (UAZ), Jardín Juárez 147, 98000 Zacatecas, Mexico e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_11

147

148

J. R. Otegi-Olaso et al.

11.1 Introduction The aim of this article is to analyze the evolution of scientific production in the Project Management field and the relationship of their main research themes with the ICB 4.0 competence model through the use of bibliometric techniques and tools. To this purpose, the research themes hosted by the scientific journals of the project management have been analyzed and compared with the competencies identified in the ICB 4.0 [9]. For this analysis, the project management journals within the WoS Core Collection of the Web of Science (WoS) was used as a data source [6]. Firstly, the project management journals have been identified and classified. Subsequently, an analysis period was defined to allow a comparative analysis between the journals, covering the last ten years (2008–2018) [5]. Secondly, the project management authors and main research themes in the field have been identified and analyzed in terms of production. As a final result, a co-occurrences map of the most relevant research themes in the project management field is presented. This map graphically identifies the competencies that are being emphasized in the literature and the relationship through the use of different bibliometric techniques and tools [7, 10–13].

11.2 Methodology and Data Set Bibliometrics is defined as the set of methods and tools for evaluating and analyzing the evolution of scientific publications and their citations. Its aim is to explore its impact and how these contribute to the progress of science in the main knowledge areas. In addition, bibliometrics allows the measurement of scientific and productive quality of the research field [8]. In the bibliometric work categories, performance analysis and co-occurrence map can be highlighted. The performance analysis focuses on the impact of publications based on its production and citation evolution, while co-occurrences map represents how the main publications, research themes, or authors are interrelated. This last category is widely used to visualize, understand, and discover relationships that are difficult to see or that are hidden between core themes for the development of the field [3]. Based on a first review of the scientific production and the journals identified, it was decided to take as source the Web of Science Core Collection, which is considered as one of the most important databases in scientific and academic terms. Within the Web of Science Core Collection, five specialized project management journals have been identified: • International Journal of Project Management (IJPM) (ISSN: 0263-7863); • Project Management Journal (PMJ) (ISSN: 1938-9507);

11 A Bibliometric Analysis of the Professional Skills …

149

• International Journal of Managing Projects in Business (IJMPB) (ISSN: 17538378); • International Journal of Information Technology Project Management (IJITPM) (ISSN: 1938-0232); • International Journal of Information Systems and Project Management (IJISPM) (ISSN: 2182-7796). The raw data has been retrieved from the Web of Science Core Collection using the following advanced query: IS = (“0263-7863” OR “1938-9507” OR “1753-8378” OR “1938-0232” OR “2182-7796”). In addition, this search was refined, limiting the results to articles, proceedings, and reviews published in English. This search retrieved a total of 1,639 publications, of which 1,464 correspond to the period 2008–2018. Once this was done, all publication abstracts were downloaded and reviewed.

11.3 Performance Bibliometric Analysis of the Project Management Journals In order to understand how Project Management has evolved in terms of publication, citations, and impact, its performance has been evaluated through the analysis of the main bibliometric indicators: publications and citations, most cited articles, most cited authors, geographical distribution, and h-index [4]. To this end, the bibliometric performance analysis of the project management journals identified has been structured in two parts: (1) analysis of the publications and their citations to evaluate the scientific field’s growth; (2) analysis of authors, publications, journals, and research areas to evaluate the impact of the main publications.

11.3.1 Publication and Citations In Table 11.1, the main indicators related to the performance of the journals identified are shown. The h-index represents a reference indicator that identifies the number of reference publications within the scientific production, for example, IJITPM has an h-index of 3, which establishes that all publications and their authors with more than 3 citations are part of the reference publications for this journal within the defined period of time [1]. In Fig. 11.1, the evolution of the project management journals is shown. In this way, it is possible to observe that during the last ten years, growth in the scientific production of this field has been maintained.

150

J. R. Otegi-Olaso et al.

Table 11.1 Performance of the main scientific journals of project management Journal (ISSN)

Publications

h-index

Citations average per year

Citations

Publications in which it is cited

IJPM (0263–7863)

974

50

15.89

15,480

7,453

PMJ (1938–9507)

258

23

8.49

2,191

1,364

IJMPB (1753–8378)

130

5

1.27

165

129

IJITPM (1938–0232)

58

3

0.69

40

29

IJISPM (2182–7796)

44

4

1.16

51

44

PM Literature

1,464

51

12.25

17,927

8,025

Fig. 11.1 Distribution of the publications per journal from 2008 to 2018

Two turning points can be identified within the period evaluated. The first of these was recorded in 2008 and 2009, when scientific production increased from 14 to 80 publications, respectively. This change is due to the publication of the International Journal of Project Management (ISSN: 0263-7863). The second inflection period occurred in 2015, with the production of three new journals (International Journal of Managing Projects in Business (ISSN: 1753-8378), International Journal of Information Technology Project Management (ISSN: 1938-0232), and International Journal of Information Systems and Project Management (ISSN: 2182-7796), going from 161 publications in 2014 to 248 in 2015. It is important to highlight that 2018 is an active period, so the final value does not represent the total publications for this year. Furthermore, in the last two years, there has been a slight decrease in scientific production, previous years reflect the growing interest in the project management area and its application. Therefore, it is necessary

11 A Bibliometric Analysis of the Professional Skills …

151

Fig. 11.2 Distribution of the citations per journal from 2008 to 2018

to wait for the final indicators of the current annuity (2018) before determining a loss in interest. On the other hand, the distribution of citations by year is shown in Fig. 11.2. In line with the publications, the distribution of citations showed a positive trend from 2008 to 2018. Nevertheless, despite the fact that recent years seem to offer a downward trend, this downward trend should be considered fictitious. The citations have a window period between 3 and 7 years according to the experts; it means that the publications of 2018 can explode as referring publications until 7 years after its publication. Taking into account the growth patterns, both in terms of publications and citations, it can be assumed that interest in project management will continue growing in the coming years. The most productive authors, the most cited authors, and geographical distribution are analyzed in the following section.

11.3.2 Most Productive Authors, Most Cited Authors, and Geographic Distribution Bearing in mind that project management is a growing field, it is necessary to know who are the most productive authors and who are the most cited authors, as well as their geographical correspondence, in order to identify the main agents and driving countries in the field. This section together with the previous one will allow establishing the parameters for the development of the co-occurrence map. In Table 11.2, the most productive authors for each journal are shown. It is important to mention that in cases where there has been a tie, the authors have been

152

J. R. Otegi-Olaso et al.

Table 11.2 Most productive authors per journal (2008–2018) IJPM

PMJ

IJMPB

IJITPM

IJISPM

Muller R (18)

Muller R (7)

Whitty SJ (5)

Alvarez-Dionisi LE (2)

Boonstra A (2)

Artto K (14)

Aubry M (6)

Anderse B (4)

Bhasi M (2)

Colomo-Palacios R (2)

Chan APC (11)

Davies A (5)

Walker D (4)

Brunoe TD (2)

Cormican K (2)

Ahola T (10)

Gemunden HG (5)

Aaltonen K (3)

Emsley MW (2)

Hussein BA (2)

Martinsuo M (10)

Bredillet CN (4) Costello SB (3)

Larsen JK (2)

Kaariainen J (2)

Whitty SJ (9)

Chipulu M (4)

Eskerod P (3)

Lindhard SM (2)

Lundqvist S (2)

Cheung SO (8)

Eskerod P (4)

Ibrahim CK (3)

Lundqvist S (2)

Van Hillegersberg J (2)

Gemunden HG (8)

Hobbs B (4)

Medina A (3)

Marcusson L (2)

Williams SP (2)

Klein G (8)

Ika LA (4)

Turner M (3)

Presulli M (2)



Kock A (8)

Ojiako U (4)

Wilkinson S (3)

Rahman N (2)



Ruuska I (8)

Sankaran S (4)







Soduerlund J (8) Williams T (4)







Zwikael O (8)







Zhao XB (4)

included in alphabetical order. This criterion is applied in all the tables presented in this research. It is important to point out that the most productive authors concentrate their participation in three journals: International Journal of Project Management (ISSN: 02637863), Project Management Journal (ISSN: 1938-9507), and International Journal of Managing Projects in Business (ISSN: 1753-8378). These journals concentrate more than 90% of total publications. In terms of citations, Table 11.3 shows the most cited authors for each journal. In this sense, Table 11.4 presents the most productive and cited authors for the period 2008–2018. Finally, Table 11.5 lists the most productive countries in terms of project management from 2008 to 2018. In order to effectively analyze project management field evolution, the following section determines the relationship between the main research themes developed during the last ten years (2008–2018) using VOSviewer (software tool to build and visualize bibliometric networks). Furthermore, these themes are compared with the ICB 4.0 competency model.

11 A Bibliometric Analysis of the Professional Skills …

153

Table 11.3 Most cited authors per journal (2008–2018) IJPM

PMJ

IJMPB

IJITPM

IJISPM

Artto K (404)

Blomquist T (172)

Lloyd-Walker BM (14)

Kar N (4)

San Cristobal JR (12)

Muller R (343)

Flyvbjerg B (141)

Walker D (11)

Mitra S (4)

Bathallat S (7)

Chan APC (306)

Muller R (134)

Amoatey (9)

Rodriguez V (3)

Smedberg A (7)

Turner R (279)

Soderholm A (108)

Sarmento JM (8) Cousillas S (3)

Kjellin H (7)

Asltonen K (231)

Martinsuo, M (106)

Renneboog L (8)

Ortega F (3)

Arviansyah S (5)

Ogunlana SO (178)

Hallgren M (99) Amoatey C (8)

Villanueva B (3)

Spil T (5)

Bryde D (159)

Nilsson A (99)

Ameyaw YA (8) Gharaibeh H (3)

Hillrhrtdbrtg J (5)

Table 11.4 Most productive and cited authors from 2008 to 2018 Author

Publications

Average (%)

Author

Cites

Average (%)

Muller R

27

1.84

Muller R

487

2.72

Artto K

15

1.02

Artto K

416

2.32

Martinsuo M

15

1.02

Chan APC

306

1.71

Aubry M

14

0.96

Turner R

279

1.56

Gemunden HG

14

0.96

Asltonen K

231

1.29

Whitty SJ

14

0.96

Ogunlana SO

178

0.99

Table 11.5 Most productive countries in terms of project management from 2008 to 2018

Country

Publications

Average (%)

Australia

227

15.50

England

215

14.68

Peoples R China

191

13.04

USA

167

11.40

Canada

89

6.07

Norway

88

6.01

Finland

82

5.60

Germany

79

5.39

Sweden

79

5.39

France

69

4.71

154

J. R. Otegi-Olaso et al.

11.4 Science Mapping Analysis of the Project Management Journals The co-occurrence map tool allows the analysis of scientific publications, and it can be used to visualize and evaluate complex research themes [2]. In Fig. 11.3, the project management co-occurrence map is shown. It is based on the information retrieved from the Web of Science (WoS), which covers the publications within the journals between 2008 and 2018. In this way, 150 main themes have been identified and classified into six clusters. The themes included in the figure have a minimum occurrence of 10 times within the database created for this research. The themes of the same color are considered as Cluster and these group the themes that are closely related because these are included in the same publications. For example, blue themes are focused on themes related to “Innovation”, “Product Development”, “Project Performance”, “Value Creation”, “Project Portfolio Management”, “Complex Project”, “Strategic Management”, among others. On the other hand, the thickness of the connection line indicates the strength of the relationship between two themes. For example, the strength of the relationship between “Product Development” and “Project Portfolio Management”

Fig. 11.3 Co-occurrence map of the main scientific journals of project management (2008–2018)

11 A Bibliometric Analysis of the Professional Skills …

155

is 22 and represents a thick line, which states a strong relationship in the development of both concepts. Taking into account the competencies identified in the ICB 4.0 model, Table 11.6 shows the relationship between the most developed themes in the literature and the competencies of the ICB 4.0 model. In line with the previous table, it can be seen that the PERSPECTIVE axis of the ICB 4.0 model is most exploited within the project management literature, followed by PRACTICE and PEOPLE, respectively. Another aspect to review is that the most developed competency is “Organization and Information” for its more general content, followed by “Strategy” and “Project Design”. Finally, Fig. 11.4 presents the relationship between the main authors of project management in terms of production. In this sense, sixteen Clusters were identified concentrating the 100 main authors. The size of the sphere is proportional to the scientific production of each author and the thickness of the line to the strengthen relationship.

11.5 Conclusions The size of literature related to Project Management shows a remarkable increase in the last decade (2008–2018). Given the large volume of citations received in this field, the growth of literature is expected to continue in the coming years. Taking the ICB 4.0 model as a starting point, the competencies that are most widely disseminated within the literature are Strategy, Governance, Structures and processes, Power and interest, Culture and values, Personal communication, Leadership, Teamwork, Project design, Time, Organization and information, Risk and opportunity, and Stakeholders. On the other hand, the competencies with less presence in the literature are Compliance, Standards and regulation, Self -reflection and self -management, Personal integrity and reliability, Relationships and engagement, Conflict and crisis, Resourcefulness, Negotiation, Results orientation, Requirements and objectives, Scope, Quality, Finance, Resources, Procurement, Plan and control, and Change and transformation. In the Project Management literature, there is only one publication associated with the ICB 4.0 model, IPMA ICB 4.0-A global standard for project, programme and portfolio management competences [9], a fact that can be translated as an opportunity to future research activities in this knowledge area.

156

J. R. Otegi-Olaso et al.

Table 11.6 ICB 4.0 and main research themes from 2008 to 2018 Individual competence baseline (ICB 4.0)

Cluster (color)

Links

Total links strength

Occurrences

Strategy

Light blue

117

451

88

Governance, structures, and processes

Light blue

62

174

39

Compliance, standards, and regulation









Power and interest

Green

40

60

12

Culture and values

Light blue

52

120

23

Self-reflection and self-management









Personal integrity and reliability









Personal communication

Yellow

75

219

38

Relationships and engagement









Leadership

Yellow

83

286

57

Teamwork

Yellow

45

96

15

Conflict and crisis









Resourcefulness









Negotiation









Results orientation









Project design

Purple

106

404

80

Requirements and objectives









Scope









Time

Purple

71

164

37

Organization and information

Green

140

1.418

268

Quality









Finance









Perspective

People

Practice

Resources









Procurement









Plan and control









Risk and opportunity

Red

99

332

74

Stakeholders

Red

53

105

22 (continued)

11 A Bibliometric Analysis of the Professional Skills …

157

Table 11.6 (continued) Individual competence baseline (ICB 4.0)

Cluster (color)

Links

Total links strength

Occurrences

Change and transformation









The competencies with the symbol (–) are not within the 150 most used concepts, which does not mean that these are not exploited elements within the literature

Fig. 11.4 Co-occurrence map of the main authors in terms of project management (2008–2018)

Acknowledgments The authors J. R. López-Robles and N. K. Gamboa-Rosales acknowledge the support by the CONACYT—Consejo Nacional de Ciencia y Tecnología (Mexico) and DGRIDirección General de Relaciones Internacionales (Mexico) to carry out this study.

References 1. Bornmann L, Daniel HD (2007) What do we know about the h index? J Assoc Inf Sci Technol 58(9):1381–1385 2. Calero-Medina C, Noyons EC (2008) Combining mapping and citation network analysis for a better understanding of the scientific development: The case of the absorptive capacity field. J

158

J. R. Otegi-Olaso et al.

Informetrics 2(4):272–279 3. Cobo MJ, López-Herrera AG, Herrera-Viedma E et al (2011) Science mapping software tools: review, analysis, and cooperative study among tools. J Assoc Inf Sci Technol 62(7):1382–1402 4. Glanzel W (2003) Bibliometrics as a research field a course on theory and application of bibliometric indicators 5. Hanisch B, Wald A (2012) A bibliometric view on the use of contingency theory in project management research. Project Manag J 43(3):4–23 6. Leydesdorff L, Rafols I (2009) A global map of science based on the ISI subject categories. J Assoc Inf Sci Technol 60(2):348–362 7. Van Eck NJ, Waltman L (2010) Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84(2):523–538 8. Van Raan AF (2003) The use of bibliometric analysis in research performance assessment and monitoring of interdisciplinary scientific developments. Technol Assessment-Theory Practice 1(12):20–29 9. Vukomanovic M, Young M, Huynink S (2016) IPMA ICB 4.0-A global standard for project, programme and portfolio management competences. Int J Project Manag 34(8):1703–1705 10. López-Robles JR, Otegi-Olaso JR, Gamboa-Rosales NK, Gamboa-Rosales H, Robles-Berumen H, Gamboa- Rosales A (2019) Visualizing and mapping the project management research areas within the International Journal of Project Management: a bibliometric analysis from 1983 to 2018 11. López-Robles JR, Otegi-Olaso JR, Porto-Gómez I (2018) Bibliometric analysis of worldwide scientific literature in Project Management Techniques and Tools over the past 50 years: 1967– 2017 12. López-Robles JR, Otegi-Olaso JR, Cobo MJ, Robles R, López-Robles, Gamboa-Rosales NK (2020) Exploring the use of strategic intelligence as support tool in the project management field using advanced bibliometric methods. Research and education in project management (Bilbao, 2020), 65 13. López-Robles JR, Otegi-Olaso JR, Cobo MJ, Furstenau LB, Sott MK, Robles R, GamboaRosales NK (2020) The relationship between project management and industry 4.0: bibliometric analysis of main research areas through scopus. Research and education in project management (Bilbao, 2020), 56

Chapter 12

The Influence of the Use of Project Management Tools and Techniques on the Achieved Success B. Llorens and R. Viñoles-Cebolla

Abstract The management of a project is considered successful provided that it has been finished within the agreed period of time, if it does not exist any budget overrun, as long as all client requests and expectations are fulfilled, and on condition that the project team management has been appropriate. For that, there are many tools and techniques which allow the task management to be effective and efficient. The purpose of this work consists of studying the amount of success of the projects performed by Project Managers and Chemical Engineers without training in Project Management, by using tools and techniques of this discipline. The study starts with a selection of the most widely used project management and agile methodologies tools and techniques, from research made by specialists. Afterwards, a survey, given to an expert panel for each field, has been made, in which they have been asked for the scale of use of the chosen tools and techniques, as well as the level of success achieved in the executed projects. Finally, a statistical study has been performed, in order to determine if the fact of knowing and utilizing those tools and techniques improves the level of success. Keywords Project management · Agile methodologies · Tools · Techniques · Project success · Chemical engineers

12.1 Introduction When considering the success of project management, one of the most traditional approaches is the iron triangle, which states that cost, time and results are the three variables that characterize any project, and must be jointly managed [10]. However, many authors have created alternative definitions for this term, such as Wit [20], B. Llorens · R. Viñoles-Cebolla (B) Escuela Técnica Superior de Ingenieros Industriales, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain e-mail: [email protected] B. Llorens e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_12

159

160

B. Llorens and R. Viñoles-Cebolla

International Project Management Association [IPMA] [7], Ellatar [5], Kerzner [8], Carvalho and Rabechini [3], Project Management Institute [PMI] [15] or Morioka and Carvalho [11], so that this concept, with the passage of time, has become more complex. Successfully managing a project implies that project managers must be experts in initiating, planning, executing, monitoring and controlling, and closing projects. To achieve this, they often use tools and techniques as an aid in the management of activities throughout the life cycle of the project. The National Agency for Quality Assessment and Accreditation [ANECA] [1] specifies that one of the competences that chemical engineers possess is leading, coordinating, and managing complex and interdisciplinary projects. So, in this article, which is the result of the work done in the Master’s thesis of one of the authors, within the Master degree on Project Management at the Universitat Politècnica de València, a study will be carried out on the extent of the use of project management tools and techniques, as well as the perception of success by chemical engineers and project managers in the exercise of their profession.

12.2 Objetives The main objective of this article is to study the use of project management tools and techniques, as well as the perception of success of projects carried out by chemical engineers and project managers. However, to achieve this objective, more specific ones have to be accomplished, such as: • To select the project management tools and techniques and those of the agile methodologies, most often used by experts. • To analyze and compare the results, obtained through questionnaires, of the opinions of experts in chemical engineering and project management, on the use of the selected tools and techniques and the perception of success of the executed projects.

12.3 Case Study 12.3.1 Determination of the Tools and Techniques Most Often Used by Experts Most of the project management tools and techniques are included in Managing successful projects with PRINCE2 [12] and in the PMBOK Guide [15], which are used in a great variety of situations. This implies that it is extremely complicated to determine which is the most appropriate for each context.

12 The Influence of the Use of Project Management … Table 12.1 Tools and techniques considered in this study

161

Project management

Agile methodologies

Checklist analysis Stakeholders analysis Earned value analysis (EVM) Audits House of Quality Ishikawa diagram Gantt diagram Decomposition structures Control diagram Interpersonal skills Financial indicators Probability and impact matrix Critical path method (CPM) Communication methods Projections and forecasts Meetings Management software Decision-making Brainstorming Expected monetary value (EMV)

Poker estimation Estimation of duration Progress charts Product charts WIP (Work in Progress) limit Pending work list Retrospective Daily meetings Sprint review Visual boards

As indicated [9], some experts, among them Besner and Hobbs [2], Fernandes et al. [6], Patanakul et al. [14], Raz and Michael [16], Thamhain [17] and White and Fortune [19], have done research to determine what tools and techniques are the most used by project managers. Likewise, Cervone [4], and Palacio et al. [13] have described the tools and techniques pertaining to the agile methodologies, which are characterized by an increasingly more widespread use among professionals in project management. Thus, Table 12.1 shows the tools and techniques selected for the present study. As specified in Llorens et al. [9], the project management tools and techniques selected for this study (see Table 12.1) are common to groups of authors and to the standards consulted (30%), or to combinations of authors (30%). Other tools and techniques (40%) coincide, or are similar to those that appear in the PMBOK [15] and PRINCE2 [12] guides.

12.3.2 Method Description This study was conducted through surveys, since it is a standardized method that makes it possible to carry out a statistical analysis of the results. Between January and February 2017, surveys were sent through email and social networks to a panel of experts composed of:

162

B. Llorens and R. Viñoles-Cebolla

• Partners of the Spanish Association of Project Management and Engineering [AEIPRO]. • Former students of the Universitat Politècnica de València. • Practicing chemical engineers in Spain. • Practicing project managers in Spain. The questionnaire has been divided into three blocks, the questions of each covering a specific subject, as described below. • Block I (P6-P9): Project Management success parameters. • Block II (P10-P29): Degree of use of traditional project management tools and techniques. • Block III (P30-P39): Degree of use of agile methodologies tools and techniques.

12.4 Results After two months of field work, a total of 132 valid answers have been obtained from chemical engineers and 109 valid responses from project managers. The results mean is shown from Figs. 12.1, 12.2 and 12.3.

Fig. 12.1 Arithmetic mean of the success parameters in project management

12 The Influence of the Use of Project Management …

163

Fig. 12.2 Arithmetic mean of the use of conventional techniques and tools

Fig. 12.3 Arithmetic mean of the use of agile techniques and tools

12.5 Results Analysis 12.5.1 Sampling Error Determination As a sample of individuals has been studied, there is an error associated with the sample. To calculate it, an expression is used that relates the sampling error (e) with the sample size (n), the population size (N), the factor associated with the confidence

164

B. Llorens and R. Viñoles-Cebolla

interval (Z) and the fraction of individuals that possess the characteristic (p). Thus, according to Valdivieso et al. [18], the sample size is defined as: n=



    N · Z2 · p · (1 − p) (N − 1) · e2 + Z2 · p · (1 − p)

(12.1)

The existing error in the results obtained from a sample of 132 chemical engineers, assuming an infinite population size, within a 95% confidence interval and if half of the individuals has the studied characteristic, is of 8.53%. Similarly, in the case of project managers, the existing error for a sample of 109 individuals is 9.39%.

12.5.2 Analysis of the Project Management Success Parameters Results Analyzing Fig. 12.1, it is observed that for the four variables considered, the mean of the project managers is higher than that of the chemical engineers. In addition, it highlights the fact that for both samples, there is a consistent pattern in which maximum values are obtained for the satisfaction of customer expectations (0.66 for chemical engineers and 0.76 for project managers), while the minimum values belong to cost compliance (0.49 for chemical engineers and 0.57 for project managers) and suitability of team management (0.49 for chemical engineers and 0.59 for project managers). On the other hand, it is worth mentioning that, in the case of project managers, all the success parameters considered have a mean above 0.5. This implies that there is a greater perception of project success than of failure. However, among chemical engineers who participated in the study, the results are different, because the means of two parameters (cost compliance and team management suitability) are slightly lower than the mentioned value (0.49). In spite of this, assuming that the four variables represent project success in the same percentage, in both cases the perception of global success is greater than 0.5, this value being higher for project managers (0.63) than for chemical engineers (0.55). Regarding the quantification of data variability for both experts groups considered in the study, the standard deviation (s) will be used, since the results of this parameter are expressed in the same units as the original data. Thus, Fig. 12.4 shows the standard deviation associated with the success criteria. By analyzing Fig. 12.4, it is observed that the standard deviation values for both experts groups are very similar. However, in all cases, except for customer expectations, this dispersion parameter is higher for project managers than for chemical engineers. In addition, it is worth mentioning that this variable represents the lowest standard deviation of the whole, with a value of 0.15 for chemical engineers and 0.14 for project managers.

Relative frequency

12 The Influence of the Use of Project Management …

165

1.00 0.90 0.80 0.70 0.60 0.50 0.40 0.30 0.20 0.10 0.00

s (Chemical engineers)

s (Project managers)

Fig. 12.4 Standard deviation of project management success parameters

To determine data anomalies with respect to the Normal distribution, skewness and kurtosis coefficients are calculated since they allow to determine how far away the position data from the corresponding position in a Normal distribution (skewness), and the existence of anomalous data in samples (kurtosis) are. Thus, Fig. 12.5 shows the representation of skewness values (CA) and kurtosis (CC) for the success criteria. As observed by analyzing Fig. 12.5, there is no skewness nor kurtosis for the project management success parameters considered, since both the skewness and the kurtosis coefficients for the four criteria, are included between –2 and 2. It should be noted, on the other hand, that for customer expectations the skewness and kurtosis coefficients are very close to zero, which evidences a remarkable adaptation to a Normal distribution.

12.5.3 Results Analysis of the Use of the Considered Tools and Techniques Considering the means obtained for the tools and techniques of both methodologies (Figs. 12.2 and 12.3), in the majority of cases the frequency of use for project managers is higher than for chemical engineers. The frequency of use of the tools and techniques by chemical engineers is higher than in the case of project managers

166

B. Llorens and R. Viñoles-Cebolla

2.00 1.50 1.00 0.50 0.00 -0.50 -1.00 -1.50 -2.00

CA (Chemical engineers)

CA (Project managers)

CC (Chemical engineers)

CC (Project managers)

Fig. 12.5 Skewness and kurtosis of success parameters of project management

in meetings (0.82 for chemical engineers and 0.81 for project managers), decisionmaking (0.10 for chemical engineers and 0.09 for project managers), the expected monetary value (0.17 to chemical engineers and 0.14 for project managers) and the poker estimation (0.04 for chemical engineers and 0.03 for project managers). In addition, it is important to note that there are two tools and techniques of agile methodologies which use frequency is the same for both samples. It is the estimation of duration (0.07) and the work in progress limit (0.14). On the other hand, both project managers and chemical engineers agree that communication methods is the most used tool (0.92 for project managers and 0.86 for chemical engineers), while the poker estimate is the tool they least use (0.04 for chemical engineers and 0.03 for project managers). It should be noted that, grouped by methodologies, the conventional project management tools and techniques are more used than those of the agile methodologies, being project managers the study group that more often employ both types of tools and techniques. In particular, the relative frequency of use of the conventional tools and techniques is 0.45 for project managers and 0.29 for chemical engineers, while these values go down to 0.35 and 0.24 respectively in the case of the agile methodologies. In this way, altogether, project managers use more the 30 tools and techniques considered in this study, since the overall mean relative frequency is 0.40 for them, while in the case of chemical engineers, the value is noticeably smaller (0.26).

12 The Influence of the Use of Project Management …

167

Relative frequency

However, although project managers use more often the 30 tools and techniques, their overall average frequency does not even reach the value of 0.5. This implies that these tools and techniques are used, on average, in less than half of the projects that the consulted experts carry out. Regarding the quantification of data variability for both experts groups considered in the study, the standard deviation (s) will be used. Thus, Figs. 12.6 and 12.7 show the standard deviation associated with the tools and techniques considered in the study. 1.00 0.90 0.80 0.70 0.60 0.50 0.40 0.30 0.20 0.10 0.00

s (Chemical engineers)

s (Project managers)

Relative frequency

Fig. 12.6 Standard deviation of the use of conventional techniques and tools

1.00 0.90 0.80 0.70 0.60 0.50 0.40 0.30 0.20 0.10 0.00

s (Chemical engineers)

s (Project managers)

Fig. 12.7 Standard deviation of the use of agile techniques and tools

168

B. Llorens and R. Viñoles-Cebolla

According to the information shown in Figs. 12.6 and 12.7, in general terms, the standard deviation is higher for project managers than for chemical engineers. The opposite case is only true for the Gantt diagram (the standard deviation for chemical engineers is 0.32, while for project managers it is 0.22), and for communication methods (0.22 for chemical engineers and 0.12 for project managers). In addition, the fact arises that the value of this dispersion parameter is the same (0.23) than for the case of the expected monetary value. Calculating the average dispersion for each type of tools and techniques, in the case of project managers, the mean of the standard deviations is identical for conventional tools and techniques than for the agile ones (0.28). However, in the case of chemical engineers, values vary between the two types of tools and techniques, so that the mean standard deviation is 0.23 for the conventional ones, while for the agile ones it is 0.20. As indicated, when the dispersion of the success criteria for project management has been analyzed, there is a greater variability of the data for project managers regarding the arithmetic mean, which translates in that data are not compacted around it, but more dispersed than in the case of chemical engineers. To determine the anomalies in the data with respect to the Normal distribution, the skewness and kurtosis coefficients are calculated. Thus, Figs. 12.8 and 12.9 show the representation of the skewness and kurtosis values for the tools and techniques considered in the study. Based on the information shown in Figs. 12.8 and 12.9, it is observed, first, that poker estimation is the only case in which there is skewness and kurtosis for both groups of experts. In addition, the fact arises that the kurtosis coefficient associated with this technique for project managers is the highest of all the calculated ones (16.48), while for chemical engineers the highest kurtosis coefficient appears in product charts (8.35).

5.00 4.00 3.00 2.00 1.00 0.00 -1.00 -2.00 -3.00

CA (Chemical engineers)

CA (Project managers)

CC (Chemical engineers)

CC (Project managers)

Fig. 12.8 Skewness and kurtosis analysis of the use of conventional techniques and tools

12 The Influence of the Use of Project Management …

169

17.00 15.00 13.00 11.00 9.00 7.00 5.00 3.00 1.00 -1.00 -3.00

CA (Chemical engineers)

CA (Project managers)

CC (Chemical engineers)

CC (Project managers)

Fig. 12.9 Skewness and kurtosis analysis of the use of agile techniques and tools

Regarding the skewness coefficient, it is noteworthy that the earned value analysis (2.02), forecasts (2.24), the poker estimation (2.55) and product charts (2.71) for chemical engineers, as well as the House of Quality (2.18) and the estimation of duration (2.78) for project managers, present a slight skewness, since they are included between 2 and 3, with the exception of the poker estimation for project managers, which value is higher (3.95). Finally, it is important to mention that the values of the skewness and kurtosis coefficients that have exceeded the limits that ensure data normality, are all positive, which implies that, in these cases, the skewness is positive, moving the values with higher frequencies to the left of the bell curve, and that the data are leptokurtic, that is, they present values that are more frequently far from the mean than would be expected in a Normal distribution.

12.6 Conclusions In this article, a selection has been performed of the tools and techniques most used by project managers, based on previous research conducted by experts and on standards available for project management tools and techniques (PMBOK and PRINCE2). Once sufficient answers were obtained from surveys designed for chemical engineers and project managers, these were analyzed, concluding that projects carried out by project managers are more successful than those carried out by chemical engineers, since the mean of the four criteria considered are greater in the first case. Regarding the use of tools and techniques, it should be noted that, in general terms, project managers use more often the tools and techniques considered in the study (the

170

B. Llorens and R. Viñoles-Cebolla

average use rate is 40% for managers of projects compared with 26% for chemical engineers). Regarding the skewness and kurtosis coefficients, 7 tools and techniques presented skewness and/or kurtosis for chemical engineers, whereas for project managers, this value is reduced to 5, having in common only the poker estimation and the estimation of duration.

References 1. ANECA (2005) Libro blanco del título de grado en ingeniería química. Agencia Nacional de Evaluación de la Calidad y Acreditación (ANECA), Madrid 2. Besner C, Hobbs B (2004) An empirical investigation of project management practice: in reality, which tools do practitioners use?. Project Management Research, Innovations, pp 337–351 3. Carvalho MM, Rabechini R (2011) Fundamentos em gestão de projetos: construindo competências para gerenciar projetos. Atlas, São Paulo 4. Cervone HF (2011) Understanding agile project management methods using Scrum. OCLC Syst Serv Int Digital Libr Perspect 27:18–22 5. Ellatar SMS (2009) Towards developing an improved methodology for evaluating performance and achieving success in construction projects. Sci Res Essay 4:549–554 6. Fernandes G, Ward S, Araújo M (2013) Identifying useful project management practices: a mixed methodology approach. Int J Inf Syst Project Manag 1:5–21 7. IPMA (2006) IPMA competence baseline, 3th edn. International Project Management Association (IPMA), Zurich 8. Kerzner H (2009) Project management. A systems approach to planning, scheduling and controlling, 10th edn. Wiley, New Jersey 9. Llorens B, Viñoles R, Bastante MJ, González MC (2017) Uso de herramientas y técnicas de la dirección de proyectos y su relación con el éxito en ingeniería química. XXI International Congress on Project Management and Engineering, Cádiz 10. Machado FJ, Martens CDP (2015) Project management success: a bibliometric analysis. J Gestão e Projetos 6:28–44 11. Morioka S, Carvalho MM (2014) Análise de fatores críticos de sucesso de projetos: um estudo de caso no setor varejista. J Prod 24:132–143 12. OGC (2009) Managing successful projects with PRINCE2, 5th edn. Office of Goverment Commerce (OGC), Belfast 13. Palacio J, Menzinsky A, López G (2016) Scrum Manager. Guía de formación, 2nd edn. Iubaris Info 4 Media SL, Zaragoza 14. Patanakul P, Iewwongcharoen B, Milosevic D (2010) An empirical study on the use of project management tools and techniques across project life-cycle and their impact on project success. J General Manag 35:41–65 15. PMI (2013) A guide to the Project Management Body of Knowledge (PMBOK guide), 5th edn. Project Management Institute, Newton Square 16. Raz T, Michael E (2001) Use and benefits of tools for project risk management. Int J Project Manag 19:9–17 17. Thamhain HJ (1999) Emerging project management techniques: a managerial assessment. Portland Int Conf Manag Eng Technol. https://doi.org/10.1109/PICMET.1999.808260 18. Valdivieso CE, Valdivieso R, Valdivieso OA (2011) Determinación del tamaño muestral mediante el uso de árboles de decisión. UPB—Investigación Desarrollo 11:148–176

12 The Influence of the Use of Project Management …

171

19. White D, Fortune J (2002) Current practice in project management: an empirical study. Int J Project Manag 20:1–11 20. Wit A (1988) Measurement of project success. Int J Project Manag 6:164–170

Chapter 13

Design Thinking: Virtues and Defects of a Project Methodology for Business, Academic Training, and Professional Design M. Puig Poch, F. Felip Miralles, J. Galán Serrano, C. García-García, and V. Chulvi Ramos Abstract Design Thinking is a methodology that can be used for business innovation, allowing to address complex problems and provide viable solutions by placing the user in the center of reflection. But this methodology can corrupt the reflected and argued work of the design project when it is not carried out applying the appropriate knowledge. This work is structured around a reflection on this methodology and its application in different areas. Firstly, a comparative analysis of representative bibliography is conducted to outline a theoretical basis on Design Thinking. In the second part, a fieldwork based on interviews and observation of creative workshops helps decipher how the Design Thinking is used in different fields, to see up to what extent its application can modify the design results. This article concludes that the broad description of Design Thinking helps to explain the methodology of a project at an educational level, its tools helping to unlock a project at a business level, and its techniques helping to explain the phases of a project at a professional design level. Keywords Design thinking · Methodology · Project · Design process

M. Puig Poch ESDi Escola Superior de Disseny, Universitat Ramón Llull, Marqués de Comillas 81-83, 08202 Sabadell, Spain F. Felip Miralles (B) · J. Galán Serrano · C. García-García DACTIC Research Group, Department of Industrial Systems Engineering and Design, School of Technology and Experimental Sciences, Universitat Jaume I, Av. Sos Baynat S/N, 12071 Castelló de La Plana, Spain e-mail: [email protected] V. Chulvi Ramos DACTIC Research Group, Department of Mechanical Engineering and Construction, School of Technology and Experimental Sciences, Universitat Jaume I, Av. Sos Baynat S/N, 12071 Castelló de La Plana, Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_13

173

174

M. Puig Poch et al.

13.1 Introduction The starting point of this work is built on the uncertainty of the own consistency of Design Thinking as a methodology beyond the use of tools and techniques, especially in the business field (departing from its own design), but also within the discipline, from the more traditional industrial design to the newest such as services and interface design and user experience (UI-UX), as well as the own designers’ training. At a time when design thinking seems to be everywhere and even in certain sectors, it becomes the structural axis of the project; it seems pertinent to wonder about the validity of the methodology by putting on the table questions such as where it comes from, on what basis it is built, and how it is applied or used today. Several fields where design thinking takes action are outlined. On the one hand, within a company design thinking sells innovation, a way to meet the new challenges, both social and technological. It also means a business structure, a new transversal and collaborative way of organizing the company. Both meanings use the main virtues of this methodology: its ability to build a global vision on any topic and place the person ahead of other priorities. As Kimbell [9] theorizes, perhaps the greatest fortune for design thinking to grow and reach so many areas is to have coincided with a deep economic crisis, in a global digital time and to have joined with a number of sociocultural characteristics that take as their own the premises proposed by design thinking relating to putting the person at the center of the project, creativity, or the promise of a procedure that verifies its steps. Thus, another application field of design thinking has been reached, in this case directly related to specific areas of design. The twenty-first century arrives with the takeoff of service design and UI-UX design. Both disciplines are born and, without methodological tradition, they pick up what is newer, in this case design thinking, assuming as their own their postulates and using the tools that they propose. Additionally, the methodology evolves creating diversifications, such as the double diamond of the Design Council, specific for service design. Simultaneously, other design disciplines with a much more settled methodological tradition, for example, product or interior design, find it hard to break with the past and are often limited to using their own tools and techniques. Both approaches are discussed through interviews with professionals who use design thinking, or part of it, in their day-to-day work. At the training level, design thinking is in all design schools, although in a very variable way. It is difficult for it to be introduced in a transversal manner, but its tools and techniques are inside the classrooms. In this sense, and as can be read from authors such as John Chris Jones [8] and Nigel Cross and Roozenburg [6], the fact of externalizing design thinking makes possible its popularization, and for the training methodology, it is a great help since it helps structure the project. However, it is not possible to advance without sketching, beyond the design thinking methodology, what is hidden under the concept of design thinking. In this sense, and thanks to the bibliographic review, it can be affirmed that the concept of design thinking exists, in lower case, as a concept that designates how designers

13 Design Thinking: Virtues and Defects …

175

think, how they organize their knowledge, how they move with uncertainty, and how their mind works to advance in a proposal for the future. It is a general concept that overlaps itself because of the language, since it can be translated without problems; in this case, it is not a proper name. In order to define the characteristics of this concept, basically, two of the initial works in the world of the design methodological research are gathered: Design Methods [8], published for the first time in 1970, and Research in Design Thinking [6], a publication that was a result of a research conference at Delft University in 1991. Subsequently, it is analyzed how this system of thought is collected, through creative techniques for innovation from business schools, becoming Design Thinking, in capital letters, as a proper name. To do this, the Cross and Dorst bibliography is analyzed, which continues with the research around design, and other authors are added [9, 15], experts in the world of creativity and design processes. Finally, and as the last field where Design Thinking appears as reviewed in this paper, an approximation is made to the extensive bibliography of Elizabeth Sanders to visualize how methodology evolves from design. The key question of this work is to know whether Design Thinking (as a methodology) is valid and current beyond the business world that embraces it, that is, if it is also valid as an attitude, a reflective capacity, a formative axis, or as a way of understanding and solving problems that are specific to the designer, who also takes the person as its central axis and structures a project iteratively focused on action.

13.2 Objectives The objectives of this work are • To outline the mental processes a designer develops in order for his thoughts to become a working method outside the design. • To review the concept of design thinking, how it was born, and how it evolves. • To outline the most common methodology uses at a theoretical level, both in the business and the design fields, as well as in design training. • To make an approximation to the practical use of design thinking in different areas, design schools, businesses, and design studies. • To compare the theory that conceptualizes design thinking with its application in these various areas beyond great success stories.

13.3 Methodology This paper has two different parts: a theoretical part that is addressed through the literature review, and a part of the analysis about the application of Design Thinking, which is addressed through interviews and observation. Furthermore, the literature review is also divided into three parts:

176

M. Puig Poch et al.

1. A first part that outlines the characteristics of the design process drawn at the end of the twentieth century: how the project is structured and how the designer’s mind works, 2. A second part which seeks to outline a theoretical map in relation to Design Thinking to describe its main characteristics, 3. A third part that shows how design uses Design Thinking already into the twentyfirst century. The three visions serve to be able to compare the main characteristics of the design process with the methodological characteristics proposed by Design Thinking. A review of this process shall be made to justify the inclusion of the word “Design” in the name of this methodology for business innovation, which supposedly copies the designer’s work methodology. On the other hand, for the analysis of the implementation of Design Thinking at present, a series of interviews with people using this methodology, or part of it, in different areas were held: academic, entrepreneurial, and professional designs. One of the main objectives of the interviews was to inquire about the conflictive issues (where methodology falters when applied) and about its strengths (when its application improves the project). In addition, fieldwork is conducted with nonparticipant observation, which makes it possible, on the one hand, to observe the development of creative workshops where Design Thinking tools are used, and on the other, to approach people outside the creative field little accustomed to this kind of methodologies.

13.3.1 Literature Review Understanding the characteristics of the designing process, what makes it different from other processes of thought, is the key to understanding why the methodology called Design Thinking adds the term “design” to its name. The literature review begins with a research about the designers’ mental processes in order to frame the characteristic features of the discipline and define a common thread that will guide the design thinking, understood as the way a designer works, to Design Thinking, under its own name, being a methodology of business innovation based precisely on the way designers work. Although during the sixties of the twentieth century some contributions on design research could already be found, it is not until the 1990s when special emphasis was made in the process of design itself. Katja Tschimmel [15] proposes as a starting point of this new approach the international seminar organized in 1991 by the University of Delft (the Netherlands) which result was so positive that it became the most important symposium on design thinking research. The result was published in 1992 [6]. Cross et al. [6] reviews previous studies on the design process with the intention of extracting universal methods, arriving at the conclusion that the designer works from ill-defined problems and focuses his efforts on the solution (and not on the

13 Design Thinking: Virtues and Defects …

177

definition of the problem); that is, potential solutions are used to explore the problem posed. For this, the designer works with intuition and creativity and makes use of the abductive thought described by Charles Sanders Peirce. For Cross [5], the big difference for discipline research is that it does not underline the forms of knowledge, but the ways that the design has to know, think, and act. That is, it focuses on the methodology of the project. On the other hand, an essential reference work is “Design Methods” [8], published for the first time in 1970 that appears again in 1980 and in 1992 with revised editions. In this work, Jones sets the bases for the design focused on people, in his own words “the purpose of design methods is simply to change the priorities” [8], by placing the machine behind the person. It is an important work of reference due to the innovative vision of its author, which already in its first 1970 edition includes 35 methods explained with examples, just as decades later Design Thinking manuals would. These methods are called “new methods”. For Jones, the new methods externalize the design process making it known and bringing a more creative and multidisciplinary way of working. In addition, this author breaks down the design process into three basic phases: analysis, synthesis, and evaluation, or as they appear in Chap. 5 of “The Design Process Disintegrated” [8] divergence, transformation, and convergence, the basic concepts of the Design Thinking which work iteratively with the divergence and convergence. Also in the 1990s, design underwent a process of transformation which changes the area where the designer adds value. Designers start to work more globally in intangible projects such as services, process organization and interaction. Parallel to the reflection on the project methodology, during the last decade of the twentieth century the process by which design thinking has become a methodology associated with innovation in business processes takes place, now called Design Thinking, a proper name which designates a specific methodology. According to Kimbell [9], one of the key companies in this process is the IDEO consultancy firm, headquartered in Palo Alto (California) and founded in 1991 whose CEO Tim Brown is one of its main theorists. IDEO is the first design consultancy that sells not only projects, but also their working method: Design Thinking. For Brown [2], reasoning as a designer can transform the way to develop products, services, and processes. In her Design Thinking analysis, Kimbell [9] states that since approximately 2005, the term Design Thinking can be considered widely known, has become a collection of effective tools and strategies, and has been separated from the world of design. Tim Brown [2] makes it clear when he states that it is not necessary to be a designer to take advantage of Design Thinking. The methodology provides the company with tools to identify, visualize, solve, and forecast problems in a systematic way within a framework where creativity is one of the main values. Brown also proposed structural and performance changes for the company, beyond innovation projects that may arise, referring about a much deeper change, which falls outside the scope of this work. According to Kimbell [9], the acceptance of this methodology is possible, thanks to certain changes that are happening within society at the same time. For example, the concept of “New spirit” of Capitalism that Boltanski and Chiapello [1] describe

178

M. Puig Poch et al.

evolves the hierarchic structures toward network structures, the bureaucratic discipline toward teamwork and makes emphasis on the people with different abilities. Three key points for the acceptance of the Design Thinking and the beginning of the co-era are codesign, cocreation, and coinnovation, among others. Another key point for the rise of Design Thinking is related to the deep worldwide economic crisis that began in 2008. Dorst [7] considered that companies give attention to design for the capacity of this discipline to work with indefinite and complex challenges, as well as uncertainty. Jones [8] explains that designing is a bet on the future with the information available at the time knowing that the forecasted future will only occur if the predictions are true. When facing the uncertainty of the crisis, a methodology is chosen that knows how to work with it. For Tschimmel [15], the Design Thinking reflects the ability of the designer to take into consideration human needs, available materials, and technical resources, while establishing restrictions and opportunities for the project or business; that is, a very valuable global vision for the company that must survive the crisis. For Simonet [14], the great virtue of Design Thinking is to be oriented toward results and the users, i.e., that innovation is defined as something new coming to the market to meet a need, which provides the company with a new approach. The Design Thinking knows very clearly that it can work with the initial uncertainty of the project and proposes techniques and methods to reduce this uncertainty. However, the specific tools are not part of this bibliographical review, since what is proposed here is the comparison of its methodological basis. Tschimmel [15] elaborates a table that exemplifies the differences between the way of doing things of a traditional company and the way to do them when applying Design Thinking, placing each one at an end of an ideal model that serves in this work as a sample of the differences of approach with and without Design Thinking (Table 13.1). Upon reaching the second decade of the twenty-first century, the division between the design itself and the business world becomes more difficult since the transfer of information is constant. Parallel to all this business development, from the University of Delft, research on design continues, with Liz Sanders and Jan Stappers, among others. Their work Convivial Toolbox (2012) takes a step beyond Design Thinking, and speaks from a perspective of generative design, with a different view regarding the user. Here, design thinking is still written in lower case. Years before, Sanders [10] described the design as a discipline which should not design for users but with the users. Sanders speaks of a change of mentality. One of the most interesting and valuable notes for the development of the design based on user experience (and not just the user) is the opportunity to work with creativity and psychology. Sanders describes a new language that includes both cognitive and emotional design tools. Another key point that Sanders describes in his theories can be found when he describes the fuzzy front end [12], p. 22. It is the initial and most chaotic part of the design process, through which designers and users work together to define a design challenge. In this initial step, the role of the designer changes, because his tools become tools of ethnographic observation, selection, classification, or anthropological coding. In line with what already Jones [8] did in his time linking the need for

13 Design Thinking: Virtues and Defects …

179

Table 13.1 Differences in business focus: traditional versus design thinking Design thinking

Traditional

Mainly visual, use of sketches and prototyping Mainly verbal, use of diagrams and tables tools Intensive observation and stereotyped and challenging (questioning) perception

Immediate perception and quick interpretation of the situation

Emotional and rational at the same time, subjective. Reality as social construction

Mainly rational and objective. The reality is fixed and quantifiable

Abductive and inventive reasoning

Analytical, deductive and inductive reasoning

Error is part of the process. Iteration leads to a Search for “the correct” answer and the best better solution solution Comfortable with ambiguity and uncertainty. Novelty search

Guided by planning. It seeks control and stability

Empathetic and directed to the user, deep understanding of people’s needs and dreams

Customer-driven, deep understanding of what the customers would want according to their social status

Mainly collaborative

Mainly individual

“new methods” with the rise of complexity, Sanders and Stappers [11] also speak of the environment complexity to affirm that the user-centered design approach is no longer enough for the twenty-first century, since it cannot respond to the increasing complexity and challenges of the present day. Sanders and Stappers [11] define here the designers as facilitators of a creative process; therefore, they provide the tools and knowledge so that the creative process can take place. This role given to the designer is very similar to the role of the (non) designer who works with Design Thinking within a company. Kimbell [9] also indicated it. Design thinking outlines a role of the designer that encompasses much more than the process of design itself, since the designer should also translate the various project elements, both at the cultural and communication levels between individual members of the team, which can be made up of people from very different areas. Finally, it is important to emphasize that Sanders and Stappers [13], from their academic and design field, coincide with Brown and Roger [3], from their field of business innovation, in focusing new methodologies toward action, developing by doing, putting into practice or building as a tool for interpretation and observation.

13.3.2 Interviews After the bibliographic review, this second section seeks to shed light on what happens in the real world when these creativity methodologies proposed by Design Thinking are applied, and how the professionals who use it perform.

180

M. Puig Poch et al.

To this end, three interviews have been conducted, the main objective of them being to know what people who work with this methodology think about it, also, to know if they are using it as a unique methodology or combining it with others, or if only their tools are being used. Furthermore, when using Design Thinking in different areas, it is intended to get to know what difficulties may be faced, as well as to detect the weak points and see how they are solved, also to identify the strong areas and see how they value them. It is interesting to see what happens when the big success examples, such as IDEO and their projects, are left behind, and the work is focused on a local area (Barcelona and area of influence), away from Anglo-Saxon and Scandinavian epicenters, more advanced in this type of creative processes. The first interview was conducted for Marc Garcia and Itziar Pobes, founders of We Question Our Project, a service design company, who are at the same time the organizers of the GovJam,1 and collaborate with different schools of design in Barcelona teaching courses in Masters. They come from the world of graphic and web design. The interview took place in their studio and lasted about an hour. A fluent conversation is established through a directed and semi-structured interview format, which revolves around their experiences with the GovJam, their day-to-day work, and their teaching role. In this interview, a special emphasis was made on reeling off their opinion about the validity of the Design Thinking and its resulting projects. A second interview was conducted for Carmen Mora, with a background in journalism, who was interested in the subject of creativity being applied to the business and education fields, a fact that led to Design Thinking, and now serves as a consultant on issues of co-creation using tools and techniques of this methodology. She has worked both in enterprises and design studios. This interview took place virtually through Skype and lasted approximately 40 min. Just as before, this interview was planned as an open, semi-structured conversation. In this case, the topic mainly focused on the personal experience of the interviewee in creative processes within the company, which were compared with the same processes in a product design studio. Finally, the third interview was conducted for Jessica Fernández, an Industrial Design Engineer, responsible for the project area of the school of design ELISAVA in Barcelona. In this case, the interview was also conducted virtually through Skype and lasted approximately one hour. In Jessica’s case, special emphasis was made on the creative processes within the schools of design, as well as the use of Design Thinking to explain the project methodology to future designers.

1 GovJam (www.govjam.org) is a global event that brings together workers in the public service in creative workshops of 48 h which take place simultaneusly. In these workshops, services are designed through Design Thinking techniques. GovJam is part of one larger project, which includes, under the same concept, 3 different workshops of 48 h: the GovJam, the Sustainability Jam (http://planet. globalsustainabilityjam.org) and the Service Jam (http://planet.globalservicejam.org).

13 Design Thinking: Virtues and Defects …

181

With these three interviews, the three main areas where Design Thinking is used as a working methodology can be covered: in the design field at a professional level, in the educational field to train designers, and in business through creative consulting. In addition, a non-participant observation in the GovJam took place in June 2015. During the three-day workshop, participants, mentors, and organizers could be observed, and it was possible to talk to them about their concerns regarding the event: what they learn, how they believe it can be used in the future, if it can later be applied to their jobs, etc. The participants of the GovJam were public workers concerned with change, but had no experience in creative processes of this kind, nor in methodologies such as Design Thinking. Thanks to this observation, as well as the interviews, it can now be sketched what happens with Design Thinking, how it is understood, beyond the written theory, and how it is implemented. Data collection makes possible the realization of a comparative analysis of the state of the art. The aforementioned theoretical approach has been useful to understand the very different views that exist in Design Thinking: a design process, a tool for business creativity and innovation, a way of organizing the business structure, an approach to people, collaborative design, etc.

13.4 Results This section includes explanations and reasoning of the interviewees to contrast it with features that are defined through methodologies, design thinking in lowercase, and uppercase. However, before starting, it is convenient to make a note about the definition of design. In this work, designing, and therefore design, is considered to be a process in itself, a series of repetitive actions ending with a proposal for the future that seeks to improve the lives of people in any field. A broad view of the design concept is that it relies, as Isabel Campi says, “in a faithful observation of reality and a wise use of materials and the available technology”2 [4]. On the other hand, the broad scope of design (fashion, product, interior, graphic, UI-UX, services, etc.) is justified because they share the same method and as Jones [8] says, designer is whoever puts the focus not on the final product, but on the initial need that will result in a product that improves the quality of life, understanding this product in the widest possible way. From the literature review, a number of features that define design are extracted that later are replicated in the Design Thinking characteristics at the theoretical level. The salient features of the design process are the following: • Designing is to carry out a project methodology that allows turning into products (of any kind) the needs and desires of people, as well as solving their problems. • The designer works on ill-defined problems and focuses on their solution. 2 Catalan

translation: “en una fidel observació de la realitat i en una sàvia utilització dels materials i tecnologia disponible”.

182

M. Puig Poch et al.

• To design is to make proposals for the future based on the present knowledge. The abductive thought described by Peirce is used for this purpose. • The person is the center of the design project. • The design can also be intangible (e.g., services or interaction design). • The design works in a multidisciplinary way to address the growing complexity of the surrounding world. • The design process is described in many different ways, but always includes at least one phase of information gathering (divergence) and another phase of decision-making (convergence). • The design process is holistic and iterative. • The designer plays an important role at a cultural level and is a communicator. The salient features of the design process of Design Thinking are the following: • Develop a project focused on the user. • It includes the design process: identify, visualize, and solve (divergence and convergence). • High value to creativity, innate ability of every person. • Ability to create an intangible product. • It is intended to be a fresh and innovative methodology. • Shows results in a graphic and visual form. • The methodology makes it possible to work on complex and undefined challenges. • It begins with the detection of problems or needs, and is oriented toward results and users. • There is no need for designers, but facilitators who are aware of the process and guide the different project participants in the process. • Promotes action, prototyping as an essential tool. • Works in a multidisciplinary and iterative way. • To be aware of the process at all times. All phases will be explicit. There are many aspects in which both views seem to agree, demonstrating that one is a reflection of the other, and vice versa. But in the vision of the applied methodology and in relation to the interviewees’ own experience, the following characteristics can be commented, beginning prominently with the affirmation that Design Thinking is very skillful in its own description, so that approaching the methodology is easy; however, mastering it requires experience. This leads to the fact that if the Design Thinking is misunderstood and poorly executed, it can become one of its main problems. Design Thinking explains the design process through iterative phases, with little variation between the different proposals (Table 13.2). In addition, Design Thinking proposes tools and techniques for the development of each of these phases, such as brainstorming, the user journey, card sorting, and the service blueprint, and tools that the facilitator must know in order to be able to apply them properly.3 3 For

further information visit www.designkit.org/methods as a display of proposals compilation.

13 Design Thinking: Virtues and Defects …

183

Table 13.2 Comparison between project phases described from various schools Schoolsa

Project phases

J. C. Jones

Divergence

Transformation

Convergence

3i

Inspiration

Ideation

HDC

Listen

Create

Give

d.School 1

Understand

Define

Design

Prototyping

Test

d.School 2

Empathize

Define

Design

Prototyping

Test

2 Diamond

Discover

Define

Develop

Deliver

Observe

Implementation

a J.

C. Jones describes the phases of the project in his work Design Methods (1992). 3i refers to the model proposed by IDEO (see also www.ideo.org). HDC, also proposed by IDEO, is the Humancentered Design model (see also www.designkit.org/human-centered-design). The d.School is a design school associated with Stanford, and model 1 corresponds to the proposal of the HassoPlattner Institute. Proposal 2 is the evolved model and the one with the most follow-ups (see also https://dschool.stanford.edu/). Finally, 2Diamond is the model proposed by the British Design Council specific to the design of services (see also www.designcouncil.org.uk)

Another feature that is widely defended is that design is a collective, multidisciplinary activity, which in no case is the result of the creative genius, but of work and experience. As Cross et al. [6] say, experience and practice will help make it better. Also, from the theoretical as well as the applied level, it is agreed that everyone can be creative. There is a need for a number of features that promote creativity to be present, but from this point on, anyone can contribute. As a counterpoint, and in relation to the previous paragraph, Carmen Mora additionally explains that it is necessary for those who pretend to be creative to believe that they are. As a consultant, she has seen cases where the group considers itself to have little creative capacity, being this indeed one of the causes for bad performance in a creative session. At this point, Design Thinking brings not only work techniques, but also of disinhibition to work with participants, the fear of ridicule, and their own confidence. In relation to the constant fast evolution of technology and technique, Design Thinking offers a global vision in every field, not focused on a final product, but rather focused on a need and finds a solution according to the current technological, sociocultural and economic circumstances. In this sense, it also works on change on the starting point that triggers the design process: traditionally, this process is triggered by a more or less specific briefing, but with the Design Thinking and the fuzzy front end that Elizabeth Sanders describes, it is required to start at a previous moment. Another criticism that Design Thinking faces is its volatility, understood as the lack of consistency of the proposals that have emerged from a creative process. This is a key point to understand the distrust generated by Design Thinking, especially at a business level, since the creative, short and intense workshop may be confused with the actual design process, which is much longer and more laborious. Precisely in this sense, all the interviewees coincide in specifying that the design process should not be confused with creative workshops (including GovJam) which must be described as simulation or even play. Marc Garcia and Itziar Pobes give as example

184

M. Puig Poch et al.

companies that after a creative workshop for innovation decide not to use Design Thinking anymore because it has not brought real solutions. Garcia and Pobes speak of disrepute of the term Design Thinking because of how badly it is explained by their own promoters. Mora, in turn, who promptly participates in companies, speaks of making it very clear from the beginning what shall be achieved with the workshop and to properly define the limits of the possible results to avoid misunderstandings. For example, GovJam works according to the development plan of a project: observe, generate ideas, prototype, test, fix, implement, and start again. And the goal is to lay the foundations of a project that has some viability. Therefore, why talk about simulation or game when using the actual tools of the methodology, if they are being applied to real problems, setting frameworks and real limitations, and even proposing feasible solutions? The answer is, on the one hand, that there is a temporary issue that cannot be ignored; the level of depth and analysis which can be reached within 48 h is limited. Design processes are much longer. On the other hand, and in a related way, the outgoing proposals are only the tip of an iceberg that should be developed. The reduced and express format is only the first point of contact with certain tools and techniques described by Design Thinking. The real challenge of a creative workshop is that participants come to understand the foundations that guide this process: the importance of the person and their experience, the importance that any initial idea must be accepted, as well as the importance of prototyping, testing, and reviewing. By limiting the experience to the world of design, it becomes only a simulation. An important difference between the traditional design and the one guided by Design Thinking is very well developed by Sanders and Stappers [12]: the designer turns into a facilitator, that is, someone familiar with a series of techniques that facilitate (guide) the creative process and stops designing himself. At the company level, workshops and creative conferences with Design Thinking can be used as “shock therapy”, that is how Garcia describes it during the interview. In this sense, it is about unlocking a particular topic without pretending to go deeper or implementing anything; they are just creative conferences on creativity and innovation. In the professional design context, as Mora claims in the interview, the designer already uses the methodology to a large extent as an intuitive own process, even if he does not do it explicitly. Design Thinking provides the designer with resources to visualize and document all the initial analysis information, focusing on what the user does as well as the user’s experience. On the other hand, at an educational level, Design Thinking has much to contribute. It is a methodology that puts all its emphasis on the process, on explaining the different phases of the design process rather than on specifying a final result, and for that, it provides countless tools and techniques that facilitate it. The process is precisely what should be taught in schools, the steps to follow to deal with any problem, even to discover where the problem lies, and be capable of providing solutions in any area. It is a way of approaching and learning about new areas through a few steps which diminish the complexity since they are well established. In the end, as Garcia and Pobes describe, the student must basically understand three things: the

13 Design Thinking: Virtues and Defects …

185

concept of iteration, the steps of divergence and convergence, and the fact of working from a briefing that expresses a need and not a solution. With this brief summary, and although referring to the educational field, it is very clearly drawn what Design Thinking means for design: it is a work process that allows approaching the end user and offers a series of techniques that facilitate the necessary processes to go through the different phases of a project.

13.5 Conclusions Although it can be considered that the design activity has been present throughout all history, it is not until the second half of the twentieth century when design research really started at a scientific and academic level in a methodological form. The results of these studies direct research toward the design process itself to explain how the project is developed and what a designer does. The characteristics describing the designer’s job, how decisions are made, how he focuses on the solutions, where he puts the value in his reasoning, and how he develops the project serve as an inspiration to business schools to develop a methodology for innovation and creativity to be called Design Thinking. From the second decade of the twenty-first century, this concept has been popularized and is applied in fields as diverse as business innovation or designers training. The most important characteristics of the methodology are related, on the one hand, with the global vision and the decision-making focused on the solution (rather than the definition of the problem), as well as an inaccurate or poorly defined starting point. On the other hand, it makes the person and their experiences a priority, to meet their needs and desires. All this is framed in a moment in which society is prepared to understand this way of proceeding capable of coping with increasing complexity, especially at the technological level. Design Thinking gathers these characteristics and vastly describes a process by phases, accompanied by multiple techniques and tools to carry them out. It is necessary to recognize, in this sense, its effort to describe itself and to create methodological content. This effort is compensated by its fast and easy expansion, but simultaneously, it is being turned into a methodology that is being applied without really understanding where the base is of what it proposes. Through interviews and fieldwork, the conclusion has been reached that Design Thinking is understood and used in very different ways according to the field it is applied in, and that, even under the same name, it can be many different things. For the company, in general, these are tools that will allow unlocking specific situations within the development of the project, although it becomes a greater challenge to apply it at a general level or structurally as a new way of functioning. But in this sector, events such as a creativity workshop can be very beneficial, since the great implementation versatility of this methodology facilitates this type of simulations. For professional design, it is about explaining a process that in many cases already takes place in an intuitive way, but making it explicit turns the phases of the project into something clear and iterative. In this field, the main contribution is related to

186

M. Puig Poch et al.

the vision about the user, more specifically, the user’s experience, to the point of considering that the user should be an integral part of the design team. Finally, at the educational level, Design Thinking can explain the design process in an understandable and well-schematized matter, providing a number of tools that make this process explicit. If the professional designer can move forward intuitively without problems, for a designer in training this methodology it can be a great help, because it makes explicit what should be done intuitively. All this turns Design Thinking into a value methodology, as long as its proposition is understood and is properly applied according to each area, defining and explaining what it is for and how far it can go. If with its application, the resulting projects are going to be better, with less risk of failure or more innovative, it is something that will depend on each specific case, since the Design Thinking can guide the process, but it cannot guarantee the results, nor can any other methodology do it. However, it is also true that Design Thinking strives to secure results as much as possible through a way of approaching the design process that places the person as a central axis, encourages action and prototyping, and structures the project iteratively with times of divergence and convergence, to draw conclusions that allow moving more safely during the development of the product. Acknowledgment This work was supported by Universitat Jaume I (grant number P1·1B2015-30).

References 1. Boltanski L, Chiapello È (2007) The new spirit of capitalism. Verso, London 2. Brown T (2008) Design thinking. Harvard Business Review [online], pp 84–92. Availabe at: https://www.ideo.com/by-ideo/design-thinking-in-harvard-business-review. Accessed 8 Jan 2018 3. Brown T, Roger M (2015) Design for action. Harvard Business Review [online], pp 56–64. Availabe at: https://www.hbr.org/2015/09/design-for-action. Accessed 8 Jan 2018 4. Campi I (1995) Què és el disseny?. Columna Jove, Barcelona 5. Cross N (2001) Designerly ways of knowing: design discipline versus design science. Des Issues 17(3):49–55 6. Cross N, Dorst K, Roozenburg N (1992) Research in design thinking. Delft University Press, Delft 7. Dorst K (2011) The core of ‘design thinking’ and its application. Des Stud 32(6):521–532 8. Jones JC (1992) Design methods. Wiley, Hoboken 9. Kimbell L (2011) Rethinking design thinking: Part I. J Des Stud Forum 3(3):285–306 10. Sanders E (2002) From user-centered to participatory design approaches. A design and the social SCIENCES. Taylor & Francis Books Limited, Abingdon 11. Sanders E, Stappers PJ (2008) Co-creation and the new landscapes of design. CoDesign 4(1):5– 18 12. Sanders E, Stappers PJ (2012) Convivial toolbox. Generative research for the front end of design. BIS Publishers, Amsterdam 13. Sanders E, Stappers PJ (2014) Probes, toolkits and prototypes: three approaches to making codesign. Codesign 10(1):5–14 14. Simonet G (2012) Design Thinking como estrategia. El caso IDEO [online]. Available at: http:// goo.gl/O8Ilt9. Accessed 8 Jan 2018

13 Design Thinking: Virtues and Defects …

187

15. Tschimmel K (2012) Design thinking as an effective toolkit for innovation. In: ISPIM (The International Society for Professional Innovation Management), XXIII ISPIM conference: action for innovation: innovation from experience. ISPIM, Barcelona, Spain

Chapter 14

Statistical Learning Techniques for Project Control Fernando Acebes, Javier Pajares, and Adolfo López-Paredes

Abstract There are proven techniques and tools that project managers use to monitor and control the project. Some of them are as common as the Earned Value methodology (EVM). Due to the limitations of the previous methodology, procedures arise that incorporate uncertainty in the project control. However, the advance in research on advanced statistical techniques allows the development of more current tools that determine, on the one hand, the status of the project, and on the other, predict the future situation at the end of the project. In this article, we show how to use these advanced classification and regression techniques applied to project management, focused on the supervision and control of the time and the cost of the project. An example is included where we apply Monte Carlo simulation to obtain a set of data that will be treated with statistical tools and algorithms, which will allow us to determine, for any project control point, the probability of compliance with the planned objectives of time and cost and, where appropriate, predict what will be the deviation from such planning. Keywords Project management · Project control · Duration forecasting · Cost forecasting · Statistical learning techniques

F. Acebes (B) · J. Pajares · A. López-Paredes GIR INSISOC, Dpto. de Organización y CIM, Escuela de Ingenierías Industriales, Universidad de Valladolid, Pso. del Cauce 59, 47011 Valladolid, Spain e-mail: [email protected] J. Pajares e-mail: [email protected] A. López-Paredes e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_14

189

190

F. Acebes et al.

14.1 Introduction Successful management of the schedule is a fundamental task for the timely completion of projects [32]. This requires a precise definition of the tasks, their sequencing and, of course, an accurate estimate of the duration of each of the project activities. The first planning methodologies based on the representation of complex network diagrams (PERT, CPM, and ROY techniques) emerged in the middle of the last century. Due to the great practicality of all of them and their extensive use, these techniques have lasted until today. However, after defining the sequencing and programming of the project, it is necessary to carry out the execution of the activities. In addition, for this reason, the establishment of systems to control the performance of activities related to the program and cost is a fundamental part of achieving the objectives of the organization [35]. The development of effective control systems becomes a crucial aspect in the organization, to ensure the fulfillment of project objectives, under the threat of various sources of uncertainty [17]. The project control process aims to minimize deviations from project plans during the execution phase, determine the status of the project, compare it with established planning, analyze deviations, and implement appropriate corrective actions. The project control system’s par excellence for its recognition and use is the Earned Value (EVM) methodology. Originally developed in the 1960s by the U.S. Department of Defense as a standard method for measuring project performance, this method uses a set of simple metrics to measure and evaluate the overall health of a project. We use these indicators as an early warning signal to detect project problems early or to take advantage of project opportunities. Shtub et al. [34], Anbari [5], Fleming and Koppelman [14], Project Management Institute [32], among others, explain the main features of the methodology and how to put it into practice, showing an overview of the fundamentals of EVM. Initial studies focused on the objective of cost. Lipke’s work [23, 24] proved to be a turning point as, using Earned Schedule (ES), it allowed project managers to monitor progress in units of time rather than using monetary units. In doing so, it solved the problems presented by the EVM methodology with respect to its forecasting capabilities in the later phases of the project life cycle. On the other hand, to predict and estimate the future development of the project, in terms of both time and cost, Earned Value metrics have been used, based on actual performance to date and in the assumptions on future performance. The use of indicators to predict the final cost of the project has been collected by Christensen [12], who reviews different cost forecast formulas and examines their accuracy, and by Zwikael et al. [43], who evaluate five prediction methods. Anbari [5], Jacob [19], Lipke [23], Vandevoorde and Vanhoucke [36], and Vanhoucke and Vandevoorde [37] have extensively studied weather prediction indicators. However, the EVM methodology presents certain limitations collected by different authors [1, 2, 16, 18, 30], Aliverdi et al. [4] who consider as a more important

14 Statistical Learning Techniques for Project Control

191

limitation the lack of consideration of the uncertainty incorporated in the activities that make up the project. Several methodologies try to incorporate uncertainty into the control of the project. Pajares and López-Paredes [30] propose two new control indicators: the Schedule Control Index (SCol) and the Cost Control Index (CCol). Acebes et al. [1] extend this methodology, incorporating a graphic control framework with which to interpret the status of the project. In addition, following this line of research, Acebes et al. [2] propose the Triad Methodology for the control of the project, using Monte Carlo simulation. In recent years, there has been an increase in researches related to the study of statistical concepts and techniques applied to project control. Lipke et al. [25] introduce confidence intervals, Barraza et al. [6, 7] introduce Stochastic S-curves, Narbaev and De Marco [28, 29] use S-curves; Naeni et al. [27] and Salari et al. [33] describe Fuzzy techniques for Earned Value Management. Willems and Vanhoucke [42] provide a more complete description of the literature on EVM and project control. In this article, we incorporate advanced statistical tools to carry out the control of the project, as well as to estimate the duration and final cost. We propose the use of statistical techniques based on learning, considering the stochastic behavior of the project. We use Monte Carlo simulation to extract the necessary data, in order to estimate the probability of delays or cost overruns in the project and define confidence limits. The analysis ends with the prediction of the budget and the planned time to complete the project. Hastie et al. [15] develop research known as statistical learning. More specifically, they implement new procedures to construct final project duration estimates. These technics come from the area of Artificial Intelligence (AI), which are used to solve problems of classification or prediction depending on the relationship between inputs and outputs, or prediction purposes. Cheng et al. [10] implement Support Vector Machine (SVM) to estimate the final cost in several construction projects. Cheng and Roy [11] used these tools for the approximation of functions and the estimation of costs. Wauters and Vanhoucke [39] applied Support Vector Regression for project time control and cost forecasting. The authors compared in a large number of data, the accuracy and robustness when applying this procedure, with respect to the best results offered by the EVM and ES methodologies. Their research showed that SVMbased predictions are currently more accurate than any other Earned Value-based prediction method, when the training set is equal to or at least similar to the test set. More recently, Acebes et al. [3] illustrated the use of various statistical learning techniques and anomaly detection algorithms by a case study. Finally, Wauters and Vanhoucke [40] present five Artificial Intelligence (AI) methods for predicting the final duration of a project. The objective of this research is to promote the use of advanced statistical techniques that make it possible to determine the current state of the project (probability of completing the project in time and cost). On the other hand, we intend to predict the future situation at the end of the project (estimation of the delay and/or final cost overrun). In this article, we show how to use these advanced statistical techniques

192

F. Acebes et al.

of classification and regression applied to the management of projects focused on monitoring and controlling the time and cost of the project. The rest of the article is developed as follows. In the next section, we develop the methodology that we have carried out in our research. Here is an example to show the results obtained at each step. We finish the article by sharing the main ideas of our research.

14.2 Methodology In this section, we will give an overview of the methodology used to apply to any project with the aim of controlling time and cost, while providing accurate forecasts of time and cost of completion. It consists of six different steps, which are generation of the project network to be executed, application of Monte Carlo simulation, selection of the control point, calculation of the expected range of variability, determination of the probability of delay/overcost (resolution of the classification problem), and prediction of the final state of the project (resolution of the regression problem). Next, we will analyze each part of the methodology.

14.2.1 Generation of the Project Network In this first phase, we have the activities that compose the project and the useful information of each one of them, that is to say, duration, cost, and previous activities. With these data, we can construct the project’s graph and calculate its planned duration. We can determine what the project’s schedule is, as well as its cost baseline or Planned Value curve, which will be used as a reference point for project control data. We will compare the project’s progress information with the reference schedule, to determine deviations in terms of time and cost. Due to the turbulent environment in which the projects are developed, characterized by their high complexity and dynamism, the activities that compose them are distinguished by the associated uncertainty in the estimation of their duration, becoming considered as activities of stochastic duration. In this sense, in this first phase, we will be able to assign to each activity the type of distribution function that best suits its behavior, adjusting the parameters defined for each type of function and activity. Because of the above, the cost of the activity, which depends on its duration, presents variability.

14 Statistical Learning Techniques for Project Control

193

14.2.2 Monte Carlo Simulation Due to the uncertainty with which we have characterized the project activities, we can incorporate variability in the duration and costs of the project. Using Monte Carlo simulation, we can calculate the statistical distribution functions of cost and duration at the end of the project. It also allows us to determine other parameters related to the activities, such as the Criticality and Cruciality Indices of each of them. Each of the simulations represents a possible execution of the project. Each of these projects has been simulated by randomly assigning, according to its distribution function, a duration (and therefore also a cost) to each activity in each of the simulations. Thus, we can store the information of each of these simulated projects, which we will use in the following phases of Classification and Regression. Specifically, we will have the data relating to the final duration of each of the simulations and, consequently, we will be able to know for each simulation, if it had a delay/overcharge with respect to the planned project. In addition, we will be able to calculate how much was the delay/overcharge of each simulation with respect to the planned project.

14.2.3 Control Point Selection After having planned the project, and having applied Monte Carlo simulation to obtain the assumed “n” projects, from which we know the duration and cost assigned to each activity in each of them, we move on to the execution phase. As suggested by Acebes et al. [2] is that at each monitoring and control moment, we calculate the parameter “% of project execution”, understood as % = EV/BAC, where EV is the Earned Value, and BAC is the final planned budget of the project. By definition, the Earned Value represents the budgeted cost of the work that has been carried out in the project until the control date. We know that, by definition of EV, this parameter coincides with the budgeted value (BAC) at the end of the project. The % of execution indicates the percentage of the budgeted cost of the project that has already been executed until the control period. In each of the simulated projects, we look for the “% of execution” value that coincides with the value we have calculated for our real project in the control period. Thus, we obtain “n” pairs of points (time, cost), each of them corresponding to a simulated project and each of them results in the same “% of execution” in your simulated project as our real project, which we want to control. Finally, we can represent a cloud of “n” points (time, cost) with the same “% of execution”. In addition, to each point, we will associate the information relative to the final result of your simulated project: if it has delay/overcost and how much the concrete value of the delay/overcost is, with respect to the initial planning (each one with respect to your simulated project).

194

F. Acebes et al.

We have used the computer application “Matlab” for the programming of the calculations made in this section, as well as those made in the previous section. For the following sections, we will use as a computer application “R”, since it is a programming environment and language with an approach to statistical analysis.

14.2.4 Calculation of the Expected Range of Variability Acebes et al. [2] use the simulation results to construct percentile curves in order to determine the expected ranges of costs and time, considering the variability planned in the different stages of development of the project. This approach improves the methods used by other authors to date, but does not consider the possible existence of a correlation between time and cost, which may exist due to the definition of duration and cost of project activities. The decomposition of the magnitudes time and cost can avoid the detection of abnormal situations in the project (that could produce false negatives). In this work, it is proposed to use techniques developed for Anomaly Detection [13] together with Monte Carlo simulation as a methodology for monitoring and controlling projects with uncertainty. There are different strategies for dealing with novelty detection techniques. These methodologies focus on determining probability density curves from training data. The calculated function will be used to estimate the probability that the distribution could generate a new observation [31]. Core density estimation is a nonparametric technique for estimating probability density functions. We have used the function “kde2d” for the estimation of the core density with radial nuclei, included in the package R “MASS” [38].

14.2.5 Determination of the Probability of Delay/Overcost In order to know the probability that the project will end earlier than planned or with savings on the budget, we can perform an analysis of the data and treat it as a classification problem. To solve a classification problem, we need to predict a quantitative variable often referred to as a response (dependent variable). To do this, we need a set of qualitative and/or quantitative variables, called predictors (independent variables). Solving this problem will allow the project manager to know whether the project will be completed on time and within budget. Alternatively, it will allow the project manager to know the likelihood of that happening. For the calculation of probability, we use the package R “CARET” [21]. This package allows us to carry out first, a cross validation. Cross validation is a technique used to evaluate the results of a statistical analysis and to ensure that they are independent of the partition between training and test data. It consists of repeating and

14 Statistical Learning Techniques for Project Control

195

calculating the arithmetic mean obtained from the evaluation measures on different partitions. Then, and after having handled the data properly, we go on to look for the algorithm that offers us more accuracy for the classification problem we are looking for. Among the different algorithms that we can choose are linear (Linear Discriminant Analysis—LDA), nonlinear (Classification and Regression Trees—CART, k-Nearest Neighbors—kNN) or complex nonlinear methods (Support Vector Machines with a radial kernel—SVM, Random Forest—RF). For a detailed study of these algorithms refer to Breiman [8, 9], James et al. [20], Mika et al. [26], and Weinberger et al. [41]. Once we have chosen the most suitable algorithm for our data set, we apply to this algorithm the control point of the project to which we want to monitor, and we obtain the prediction of the probability of finishing the project on time and the probability of finishing without extra costs.

14.2.6 Prediction of the Final State of the Project We can use the data obtained from Monte Carlo simulation not only to estimate the probability of incurring delays. We can also estimate the expected cost and time of project completion by treating the information as a regression problem. A regression problem consists of predicting a qualitative variable called response (independent variable) using a set of variables called predictors (qualitative and/or quantitative variables). In this case, the data we have from the Monte Carlo simulation, for each simulated project, are the delay/ahead value of each one of them, as well as the monetary value of overcost/infra cost incurred by each simulated project with respect to the planning. As in the previous section, we use the package R “CARET” [21]. In this case, we will apply linear algorithms (Linear Regression (LR), Generalized Linear Regression (GLM), and Penalized Linear Regression (GLMNET)) as well as nonlinear algorithms (Classification and Regression Trees (CART), Support Vector Machines with a radial basis function (SVM), and k-Nearest Neighbors (KNN)) to look for which of them offers us greater accuracy in the prediction. Once the algorithm to be applied has been chosen, we proceed to estimate the prediction of delay/ahead value and overcost/infra cost for our project, at the control point where we monitor it.

196

F. Acebes et al.

Table 14.1 Duration and cost of activities Activity Id.

Precedents

Duration

Variance

A1



2

0.15

Cost 755

A2



4

0.83

1750

A3



7

1.35

93

A4

A1

3

0.56

916

A5

A2

6

1.72

34

A6

A2, A3

4

0.28

1250

A7

A4

8

2.82

875

A8

A6

2

0.14

250

Fig. 14.1 Project AON Network

14.3 Case Study 14.3.1 Generation of the Project Network To illustrate our research, we chose as an example the project network previously used by Lambrechts et al. [22]. The information on the activities is shown in Table 14.1. The activities have been modeled with a normal distribution function, while the cost depends linearly on the duration of the corresponding activity (the whole process that we are going to explain could be solved if stochastic distributions were assigned to the costs as is done with time). With the activity data, we can represent the AON network (Activity On Node), as shown in Fig. 14.1. We could calculate the PVPERT duration of the project, obtaining 13 time units of planned duration.

14.3.2 Monte Carlo Simulation We use Matlab software tool to apply Monte Carlo simulation to the project. As a result, we obtain 10,000 fictitious projects, using the programmed data of the activities that compose them. In addition, we can extract the results related to Criticality

14 Statistical Learning Techniques for Project Control

197

and Cruciality Indices, percentiles of duration, and cost of the project, as well as other statistics of the functions of duration and total cost of the project.

14.3.3 Control Point Selection To illustrate the case study, we assume that the project manager decides to control the status of the project in an instant in which the % of project execution is 65% (% = EV/BAC = 16000/24613). At this control point, the Actual Time (AT) is 7.5 time units, the Earned Value is 16,000 monetary units, and the Current Cost is 17,000 monetary units ([AT, AC] = [7.5,17000]). In Fig. 14.2, we represent the point cloud of the finished projects, obtained after Monte Carlo simulation (blue color), together with the point cloud of the same simulated projects that are in the 65% of execution (red color). The black dot represents the current status of our ongoing project ([AT, AC] = [7.5,17000]). For the point cloud of the simulated projects in 65% of the execution, we distinguish by color which will be the state of each simulated project once it has finished, in cost and programming. We differentiate, for each of them, whether or not it will have extra costs and/or delays compared to the planned project, Fig. 14.3.

Fig. 14.2 Projects completed and projects at 65% execution

198

F. Acebes et al.

Fig. 14.3 Future status of simulated projects, at 65% implementation

14.3.4 Calculation of the Expected Range of Variability In Fig. 14.4, we represent the 10000 simulations in 65% of the execution of the project. We also represent the current status of the project (blue dot), located above the simulated projects. For the point cloud, we calculate what the adjustment of the density distribution of the represented projects is. The contour line represents the probability that the project is within the expected range, considering the hypothesis that the project follows a stochastic behavior. Figure 14.4 shows that this method detects anomalies resulting from the expected correlation between time and cost. This procedure corrects the calculation of point cloud percentiles as carried out by [2]. Using the computer application, we can calculate the percentile of the project executed at the current moment of control, corresponding to the value of 0.38718537. In other words, the project under monitoring and control has a 38.718% probability of being within the expected variability.

Fig. 14.4 Probability density estimation

14 Statistical Learning Techniques for Project Control

199

14.3.5 Determination of the Probability of Delay/Overcost The conclusions of the previous section have helped us to determine with what probability the project under control follows the expected variability model. In addition, we can evaluate the probability of finishing the project with delay/overcost, depending on the current moment in which the control is carried out (if we assume that the project follows the stochastic patterns specified in the planning). As represented in Fig. 14.3, if the control point corresponds to 65% of project execution, we have the data of the projects that were completed on time and those that were not, as well as those that had extra costs and those that did not. With these data, we can determine the value of probability that our project in execution ends with delay or extra cost. In order to do this, we obtain the algorithm that offers the most accuracy in the estimation of probability. From the computer application “R”, we obtain that the most exact algorithm for this classification problem is Support Vector Machines with a radial basis function (SVM), with an average accuracy of 0. 8075228, Fig. 14.5. Once we have found the best algorithm to use in the classification problem, we apply that algorithm to the data set, and obtain the probability we are looking for. Thus, for our control point, the highest probability is delay. In addition, the application tells us that the probability of delay is 61.60943%, while the probability of advancement is 38.39057%. We perform the same operation for the cost and obtain that the best algorithm is also Support Vector Machines with a radial basis function (SVM), with an accuracy of 0.8561450. The highest probability indicated by the application is overcome with a percentage of 81.87624% probability. Therefore, the probability of having a saving in the project control situation is 18.12376%.

Fig. 14.5 Comparison of accuracy applied to different algorithms

200

F. Acebes et al.

Fig. 14.6 Time and cost probability graphs for the classification problem

In Fig. 14.6, we represent the probability graphs obtained, for the term and for the cost, situating in them the current control point of the project.

14.3.6 Prediction of the Final State of the Project Undoubtedly, it is interesting to determine the probability of delay/overcost, but it is also important to estimate its size. From the simulation data, for each time–cost pair of data we can deduce the final duration and total cost of each simulated project (Fig. 14.7). We use this data set as input variables for regression models, and they will provide us with an estimate of the time and expected cost of the project, if it follows the planned variability. Following the process of the previous point, this time applied to a regression problem, we calculate the algorithm that most accurately provides us with the desired data (we use the criteria R Square and RMSE to determine the best algorithm). In this case, both for time and cost calculation, the algorithm chosen is Classification and Regression Trees (CART) whose mean RMSE value is 1.574231 for time and 1408.017 for cost. Applying each of the algorithms indicated for time and cost, the results of future estimation at the end of the project determine that the project will be delayed by 0.3947526 t.u. In terms of cost, the project will have an extra cost of 117.8164 m.u. As a summary of the case study, for the project control point [AT,AC] = [7.5,17000], we have calculated the Earned Value (EV) at that point with a result of 16000 um. This translates into a project execution rate of 65%.

14 Statistical Learning Techniques for Project Control

201

Fig. 14.7 Time and cost forecast graphs at the end of the project

With these data, applying Monte Carlo simulation, the project stands at a 38.718% probability of expected variability. As for future estimates, it is forecast, with a 61.61% probability, that the project will be delayed by 0.3947 t.u. In addition, the forecast for the term will be that, with an 81.87% probability, the project will have an additional cost of 117.82 m.u.

14.4 Conclusions In this work, we have proposed a methodology to control stochastically modeled projects, using advanced statistical techniques. At any stage of project execution, the project manager can monitor and control the status of the project. The necessary information to implement this methodology are the project definition (stochastic data of the activities and their sequencing), the planned value curve, the usual calculations of the Earned Value methodology at the moment of control and, finally, applying Monte Carlo simulation to obtain a database with which to apply statistical classification and regression techniques. This approach makes it possible to detect anomalous situations with respect to the definition of the project, considering the likely correlation that exists between time and cost, which previous methodologies did not take into account. In addition, the probabilities of delays and cost overruns at each control point can also be calculated. All this information can also be visually integrated into a series of graphs provided by the software application.

202

F. Acebes et al.

This methodology is a major step forward in comparison with the previous ones. With respect to the EVM methodology, since it incorporates uncertainty in the definition of the activities, with these conditions we are able to observe the current state of the project and predict the future state, considering the stochastic definition of the activities. Concerning other methodologies that incorporate uncertainty in activities. Not only because the project control is more precise since it detects anomalous situations due to time/cost correlation problems, but also because they incorporate the possibility of making predictions of the final state of the project. This research has been possible, thanks to statistical advances both in applications and in the algorithms used. The application using (“R”) is implemented with open source which allows a continuous renewal and updating of the algorithms, as well as the appearance of new ones that could incorporate new functionalities.

References 1. Acebes F, Pajares J, Galán JM, López-Paredes A (2013) Beyond earned value management: a graphical framework for integrated cost, schedule and risk monitoring. Procedia—Soc Behav Sci. 74:231–239. (Elsevier B.V.) 2. Acebes F, Pajares J, Galán JM, López-Paredes A (2014) A new approach for project control under uncertainty. Going back to the basics. Int J Project Manag 32:423–434 3. Acebes F, Pereda M, Poza D, Pajares J, Galán JM (2015) Stochastic earned value analysis using Monte Carlo simulation and statistical learning techniques. Int J Project Manag. https:// doi.org/10.1016/j.ijproman.2015.06.012 4. Aliverdi R, Moslemi Naeni L, Salehipour A (2013) Monitoring project duration and cost in a construction project by applying statistical quality control charts. Int J Project Manag 31(3):411–423. (Elsevier Ltd and IPMA) 5. Anbari FT (2003) Earned Value Project Management method and extensions. Project Manag J 34(4):12–23 6. Barraza GA, Back WE, Mata F (2000) Probabilistic forecasting of project performance using stochastic S curves. J Constr Eng Manag 126(2):142–148 7. Barraza GA, Back WE, Mata F (2004) Probabilistic monitoring of project performance using SS-curves. J Constr Eng Manag 130:25–32 8. Breiman L (1996) Bagging predictors. machine learning. Mach Learn 24(2):123–140 9. Breiman L (2001) Random forests. Mach Learn 45(1):5–32 10. Cheng M-Y, Peng H-S, Wu Y-W, Chen T-L (2010) Estimate at completion for construction projects using evolutionary support vector machine inference model. Autom Constr 19(5):619– 629 11. Cheng M-Y, Roy AF (2010) Evolutionary fuzzy decision model for construction management using support vector machine. Exp Syst Appl 37(8):6061–6069 12. Christensen D (1993) The estimate at completion problem: a review of three studies. Project Manag J 24:37–42 13. Ding X, Li Y, Belatreche A, Maguire LP (2014) An experimental evaluation of novelty detection methods. Neurocomputing 135:313–327 14. Fleming QW, Koppelman JM (2005) Earned value project management. newtown square. Project Management Institute Inc, PA 15. Hastie T, Tibshirani R, Friedman JH (2009) The elements of statistical learning: data mining, inference, and prediction. Math Intell 27(2):83–85. (Edited by Springer, New York)

14 Statistical Learning Techniques for Project Control

203

16. Hazir Ö, Shtub A (2011) Effects of the information presentation format on project control. J Oper Res Soc 62(12):2157–2161 17. Herroelen W, Leus R (2002) Project scheduling under uncertainty survey and research potentials. Eur J Oper Res 165(2):289–306 18. Hillson D (2004) Effective opportunity management for projects—exploiting positive risk. Marcel Dekker, New York 19. Jacob DS (2003) Forecasting project schedule completion with earned value metrics. Measurable News 7–9 20. James G, Witten D, Hastie T, Tibshirani R (2013) An introduction to statistical learning: with applications in R. Springer, New York 21. Kuhn M (2015) A short introduction to the caret package. In: R Foundation for statistical computing, pp 1–10. Available at: https://www.cran.r-project.org/web/packages/caret/vignet tes/caret.pdf 22. Lambrechts O, Demeulemeester E, Herroelen W (2008) Proactive and reactive strategies for resource-constrained project scheduling with uncertain resource availabilities. J Sched 11:121– 136 23. Lipke W (2003) Schedule is different. Measurable News. Summer 24. Lipke W (2004) Connecting earned value to the schedule. The Measurable News, Winter, pp 1–16 25. Lipke W. Zwikael O, Henderson K, Anbari FT (2009) Prediction of project outcome. The application of statistical methods to earned value management and earned schedule performance indexes. Int J Project Manag 27(4):400–407. (Elsevier Ltd and IPMA) 26. Mika S, Ratsch G, Weston J, Scholkopf B, Mullers KR (1999) Fisher discriminant analysis with kernels. In: Neural networks for signal processing IX, 1999. Proceedings of the 1999 IEEE signal processing society workshop, pp 41–48 27. Naeni LM, Shadrokh S, Salehipour A (2011) A fuzzy approach for the earned value management. Int J Project Manag 29(6):764–772. https://doi.org/10.1016/j.ijproman.2010.07.012. (Elsevier Ltd and IPMA) 28. Narbaev T, De Marco A (2014a) An Earned Schedule-based regression model to improve cost estimate at completion. Int J Project Manag 32(6):1007–1018 29. Narbaev T, De Marco A (2014b) Combination of growth model and earned schedule to forecast project cost at completion. J Constr Eng Manag 140 30. Pajares J, López-Paredes A (2011) An extension of the EVM analysis for project monitoring: the cost control index and the schedule control index. Int J Project Manag 29(5):615–621 31. Pimentel MAF, Clifton DA, Clifton L, Tarassenko L (2014) A review of novelty detection. Signal Process 99:215–249 32. Project Management Institute (2017) A guide to the project management body of knowledge: PMBoK(R) guide, 6th edn. Project Management Institute Inc 33. Salari M, Bagherpour M, Reihani M (2015) A time -cost trade-off model by incorporating fuzzy earned value management: a statistical based approach. J Intell Fuzzy Syst 28:1909–1919 34. Shtub A, Bard JF, Globerson S (1994) Project management—engineering technology and implementation. Prentice Hall, USA 35. Shtub A, Bard JF, Globerson S (2004) Project management: processes, methodologies, and economics. Prentice Hall 36. Vandevoorde S, Vanhoucke M (2006) A comparison of different project duration forecasting methods using earned value metrics. Int J Project Manag 24(4):289–302 37. Vanhoucke M, Vandevoorde S (2007) A simulation and evaluation of earned value metrics to forecast the project duration. J Oper Res Soc 58(10):1361–1374 38. Venables WN, Ripley BD (2002) Modern applied statistics with S. Springer, New York 39. Wauters M, Vanhoucke M (2014) Support Vector Machine Regression for project control forecasting. Autom Constr 47:92–106 40. Wauters M, Vanhoucke M (2016) A comparative study of Artificial Intelligence methods for project duration forecasting. Exp Syst Appl 46:249–261. https://doi.org/10.1016/j.eswa.2015. 10.008. (Elsevier Ltd.)

204

F. Acebes et al.

41. Weinberger K, Blitzer J, Saul L (2006) Distance metric learning for large margin nearest neighbor classification. Adv Neural Inf Process Syst 18:1473. https://doi.org/10.1007/978-3319-13168-9_33 42. Willems LL, Vanhoucke M (2015) Classification of articles and journals on project control and earned value management. Int J Project Manag 33(7):1610–1634. https://doi.org/10.1016/j.ijp roman.2015.06.003. (Elsevier Ltd. APM and IPMA) 43. Zwikael O, Globerson S, Raz T (2000) Evaluation of models for forecasting the final cost of a project. Project Manag J 31(1):53–57

Chapter 15

Program and Project Management Articulation: Evidences from the Infrastructure Sector V. González, E. Hetemi, M. Bosch-Rekveldt, and J. Ordieres-Meré

Abstract Project-focused structures in infrastructure endeavors involve the execution of simultaneous efforts with shared resources. This research highlights to what end such organizational structure is complex to manage. The study focuses on project governance structures’ impact over project-oriented organizations, particularly by exploring the ineffective co-operation/interaction between project(s) and the program. The paper is based on a single case study carried out at a Railway Infrastructure Companies’ programs located in Northern Europe, involving two embedded projects. From the study, it becomes possible to understand the relevance of the governance approaches in projects and programs. Moreover, some guidance is proposed in order to help in the accommodation procedure. Keywords Program management · Project complexity · Project–program tensions · Governance · Organizational design

V. González · E. Hetemi Grupo PMQ, Dpto. de Ingeniería de Organización, Administración de Empresas y Estadística, Escuela de Ingeniería Industrial, Universidad de Politécnica de Madrid, José Gutiérrez Abascal 2, 28006 Madrid, Spain e-mail: [email protected] E. Hetemi e-mail: [email protected] M. Bosch-Rekveldt Department Materials, Mechanics, Management & Design (3Md) Integral Design & Management Engineering Project Management, Faculteit Civiele Techniek en Geowetenschappen, Technical University of Delft, Stevinweg 1, 2628 CN Delft, The Netherlands e-mail: [email protected] J. Ordieres-Meré (B) Grupo PMQ, Dpto. de Ingeniería de Organización, Administración de Empresas y Estadística, Escuela Técnica Superior de Ingenieros Industriales, Universidad de Politécnica de Madrid, José Gutiérrez Abascal 2, 28006 Madrid, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_15

205

206

V. González et al.

15.1 Introduction A contemporary dynamic business environment requires organizations to orchestrate a number of concurrent change initiatives and adjustments, many of them executed as projects.1 This often means that organizations become project-oriented, where to a large extent, the operations are executed as simultaneous or successive projects, while drawing at least some resources from a common resource pool. Increasingly, projects are being used strategically to transform organizational practices and processes, not only to deliver products, services, or infrastructure [5]. Public investments for large-scale infrastructure projects have traditionally been delivered by line managers in the larger regional system. This has caused intraorganizational tensions, due to fragmentation caused by several autonomous projects. The following types of fragmentation can be identified in complex projects or programs, elaborated from previous works [8]: • Fragmentation of content, as every autonomous project has its own interest and goals, and • Fragmentation of management, as there is a natural tendency to manage projects separately. In this particular field, there are significant intersections between Project Managers (PMs) and those line managers, understood as Chief System Engineers (CSEs). Because of the intersections, sometimes the CSE becomes PM and in other cases is the PM who leads the project, but some other intermediate configurations are possible. However, when roles and responsibilities are not well defined early in the project life cycle, it becomes a source of tensions between figures, which was widely analyzed in former studies [7]. Other sources of tensions in projects that have been well studied in the past show that tensions are a product of the precursors of complexity, uncertainty, and equivocality, and an attempt is made to characterize tension as it arises in projects—its genesis, its nature, its effects, and (sometimes) its resolution [31]. Huge efforts have been dedicated in the past to analyze conflict mechanisms inside projects, conceived as a rather isolated entity [18, 25, 26]. However, in this research we are mostly interested in tensions when fitting into the perspective of projects interacting with business strategy. The concept is to consider those instruments as vectors for change and, from this perspective; tensions can affect their expected benefits. As Martinsuo and Hoverfält [20] state, programs have evolved from fuzzy and unmanaged entities or extensions of projects into mechanisms of coordinating and integrating various strategic change activities toward business benefits. Standards have been carried out at ISO level covering different topics as project management, portfolio management, and program management [16, 28, 29], which consolidates the clear recognition of professional utility for most businesses in most situations. 1 The

European Railway Operation Company helps protect the confidentiality agreement, does not reveal the actual company.

15 Program and Project Management Articulation …

207

Program management can be considered one form of multi-project organizing that is usually established to achieve certain strategic benefits through organizing and managing changes in the organization. Program management makes the execution of the project portfolio more effective [17], as it aims to coordinate projects delivered in parallel and efficiently allocate resources of the organization along the portfolio. Program managers have to deal with several projects simultaneously, which by definition are activities with their own logic and limitations in scope and schedule (cf. [8]), while applying the same managerial style to all projects of their program, at the same time. This kind of organizational structure is often complex and requires clear procedures and guidelines. The interest of this research is to realize the distance between the research outcomes and the practical implementation for the particular case of the infrastructure sector, to understand the sources of tensions on the dynamic interplay among project and program forces. The structure of the paper starts with Sect. 15.2, providing the literature background, particularly revealing the literature on sources of tensions, then in Sect. 15.3, the context for the accomplished case study and the data collection is presented. The next one, Sect. 15.4, presents the preliminary analysis of governing structures. Section 15.5 is devoted to the discussion, analyzing those effects under the lights of the research outcomes, discussing the advantages and limitations involved. The last section will draw the main conclusions.

15.2 Literature Background: Sources of Tension It is strongly argued and well established in the literature that the understanding of projects cannot be easily transferred to the program settings—the multi-project setting (cf. [23]). This due to distinctions between the two organizational forms [1]. The contingency perspective forms the cornerstone of program management research [20]. Herein, extant literature seeks to align the program management structures with the program context (e.g., [24]), on the other hand, the particularities of the project tasks are emphasized (e.g., [21]). Prior studies have explored various aspects of tensions in project-oriented organizations, and studies show that sources of tensions are first and foremost created by the coexistence of fundamentally different organizing routines. To this end it is relevant how the actors perceive themselves as line functions, or as more related with the projects and competing for limited organizational resources (cf. [2]). The van Buuren et al. [8] study on comparisons between program and project management approaches has reinforced the interplay between the integration of interrelated activities and the segregations of activities. According to the authors, program management is about the synchronization of project implementation trajectories. Another study that recognizes that program management sits between the project delivery and the overall organizational strategy is the one by Lycett et al. [19]. In the authors’ view, significant tensions arise between the “inward focus and task-oriented” set of projects, and the “strategy focused” set of programs. Important for this paper’s

208

V. González et al.

purpose is the argument that the standard program management approach exacerbates the project–program tensions. Tensions around projects being developed and their effects have been studied from different perspectives. Projects are complex social settings characterized by tensions between unpredictability, control and collaborative interaction among diverse participants on a project [9]. Such perspectives also include the business value for the project’s outcome depending on the used styles for project management [20]. Indeed, tensions within the organization play an important role in determining the path that an organization’s development will follow. In turn, new structural configuration, as for instance, the Project Management Office (PMO), realigns the power structure and creates new tensions [3]. Although previous studies are relevant and have brought the knowledge to the forefront, their perspectives on the research are relatively holistic. There are still aspects very relevant to the present analysis, e.g., how challenging the implementation of spatial projects can be. Indeed, implementation has traditionally been done by public line managers (cf. [8]). During the last few decades, however, the implementation of spatial investments has more often than not been placed in the hands of project managers [12, 14]. The main characteristic and focus of project management seems (in words of [8] p. 672) also “to be its main disadvantage: it tends to focus primarily on the realisation of one single project ambition, suffers from a singular logic and is limited in terms of scope and time”. This is rather problematic in project and program settings, it presents a grand challenge where the management is intertwined with social, technical, and environmental elements (cf. [6, 10]). There are often a variety of problems as well as a variety of projects in these settings. All the projects have to be realized in the same implementation space [8]. This perspective interests most to our research, as it addresses intra-structural organization tensions (projects-programs), as well as the relationship with the spatial dimension. On the other side, there is a relevant dimension under consideration when the project owner becomes a Public Body. Holt and Rowe [15] established that the problems posed by different stakeholder views were both technical and managerial. The project sponsor has the responsibility to explore divergent issues as to harmonize interests, and to realize satisfactory solutions given both, constraints and various points of views. A relevant aspect directly related to this end is the way to monitor and to report the program evolution. Consequently, there is still a need to study and trace back the tensions underlying project–program management.

15 Program and Project Management Articulation …

209

15.3 Methodology In line with the research interest at hand, the study is based on a case study method of a multi-project setting: the program involving two embedded projects in the infrastructure sector in the Netherlands. The case study method adopted here seems appropriate to tackle the complex and evolving mix of technical and social elements (cf. [10, 11]). Indeed previous research building from cases has powerfully addressed issues of tensions arising from project–program settings, thus developing insightful theorizing (e.g., [2, 8]).

15.3.1 The Context: Rail Infrastructure in the Netherlands The Netherlands is the most densely populated country of the European Union, and one of the most densely populated countries in the world. It has a rail network of approximately 3.200 km in length that contributes to transportation as a key factor of the Netherlands’ economy [22]. The railway network is dense, mostly focused on passengers’ transport—most distances traveled on Dutch public transport is by rail—and connection toward major towns and cities. In fact, there are as many train stations as there are municipalities in the Netherlands. The network has been found the most cost-effective in Europe, together with Finland’s: per kilometer of track, the Dutch rail network is the busiest in the European Union, handling over a million passengers a day. The Railway Infrastructure Company has responsibilities for caring for existing rails and tracks along the country, extensions, allocating rail capacity, traffic control, and development of new railway stations. The infrastructural facilities are related to the traffic on the main railway network and are managed by the Railway Infrastructure Company, such as the transfer areas in stations, refueling facilities, and bicycle storage facilities. Another company, however, operates the network as such. The Railway Infrastructure Company implements a dual structure, on one side the hierarchical and functional configuration, with one Board, and one Executive Committee, and Functional Departments. On the other side, there is a project-oriented structure, including the program layer and the project layer. Infrastructural programs in the Railway Infrastructure Company are focused either on realizing the same type of physical objects on the network (hereafter “repetitive programs”) and programs which reach their goals by realizing different assets in the network. Examples for the first category of programs is the Accessibility Program for disabled people and the Bicycle Parking Program. An example of the second category is the High Frequent trains program that aims to improve the density of utilization of the trains in the country. All programs consist of different projects and those programs fit into the Railway Infrastructure Companies’ strategic framework, through different strategic objectives, including Safety, Reliability, Sustainability, etc.

210

V. González et al.

The adopted methodology to better understand the tensions in the organization is a case study, as it allows performing explorative research and the possibility of analyzing qualitative data for gaining insight into complex social processes [11]. Such methodology enables direct observation of the interesting processes as well as the capability to inquire process owners about specific evidences found. In particular, this work aims to maximize the utility of information from small samples and single cases, as recommended from [11]. Two different sources of evidence have been analyzed. Reliability, on the other hand, will be achieved by the repeatability of the operations of the case study. A case study protocol will be used—structured interviews—and repeated through the timeline to different subjects and subcomponents. In the Railway Infrastructure Companies’ case, the frequency of reporting from the Program Manager to the end client—the Ministry, and from the subprograms to the main program—is fixed for every four months. The Program Manager and the Ministry representatives meet every six months, to review the evolution of the program, including requirements for extra resources. In order to be prepared, an internal deadline is established, requiring reporting every three months. In order to fully understand the complexity involved in the Railway Infrastructure Companies’ case, it is worth mentioning that its public nature and the requested actions from its strategic mandate require strong agreements with all the stakeholders, but in particular with the station’s operator. Those aspects introduce additional constraints which require additional managerial decisions. Such decisions are sources of tensions as well as elements challenging the existing organizational structure.

15.3.2 Program Descriptions and Overview The Accessibility Program is part of a larger portfolio and comprises the services provided to passengers with physical disabilities on railway stations. This program aims to comply with the European Regulations for Accessibility in the European Union, which requires stations that are currently under construction to be accessible and meet the accessibility requirements stated. However, the Netherlands is the only country from the European Union that is also implementing the accessibility standards in existing stations. This program coordinates three projects in parallel, related to Platform Heights, aiming to give individuals the possibility of getting into any train without the aid of other persons. The program also is related to Step Free Station Project, aiming to give individuals the possibility in reaching every platform of the station by covering existing height differences with either an elevator or ramps. Finally, the program initiated a Small Measures subprogram. This subprogram realizes measures to help individuals with visual impairment into—without the aid of other persons—finding their way in the train stations, by means of Braille maps and transport card readers and floor signaling.

15 Program and Project Management Articulation …

211

The Bicycle Parking Program has the main objective of providing every station of the Netherlands with parking for bicycles. This program has a slight difference in the organizational structure, as the layer of subprogram does not exist, as the diversity of goals is lower: the program is directly divided into projects, looking at each one to develop a bicycle parking in one station (or similar ones). It is worth remembering that, in the Netherlands, stations belong to a different company not related to Railway Infrastructure Companies. This means that specific agreements are mandatory between the two companies when planned projects are going to be developed into those specific areas.

15.3.3 Data Collection A total of 12 interviews were conducted for drawing conclusions from the empirical data. Interviews were scheduled in advance, and questions were designed according to the role of the interviewee. All interviews were recorded and a transcript was sent to the interviewed person for his consent. In addition, procedures, reports, and different documentation relating to the programs were reviewed. Table 15.1 presents a schematic view of all data collection methods. As Flyvbjerg [13] suggests, it is important to write down whatever impressions occur when developing case studies, because it is often difficult to know what will and will not be useful in the future. During the duration of the study, resources were achievable, as well as conversations with any kind of employees. The organization allowed rescheduling interviews, repeating meetings, and changing the agenda during the period of the study. Every member of both programs examined presented a positive attitude about the semi-structured interviews, sharing resources and time. Table 15.2 shows an overview of the interviews scheduled. Despite each interview focusing on one aspect more than the others, depending on the interviewee, all of them followed the same structure: Presentation and Functions. In this section, candidates were asked about the program and their contribution, how the program was organized, and daily activities. Relation between Program and Project. Table 15.1 Overview of data collection in the programs studied

212

V. González et al.

Table 15.2 Overview of interview roles and interest for the study. SAP stands for stations accessibility program; bp stands for bicycle parking program

This section of the interview mainly focused in finding if there was a formal description of Program Management in the company, as well as figuring out the relation between the different layers that are present in the program: reporting procedures, contact, and key performance indicators (KPIs) of both program and project. Decision-making in the Program and Project levels. Every candidate was asked how the decision-making process worked, from his/her point of view, especially in the allocation of resources, client requests, and change management. Most common conflicts and failures. This section focused on gathering information about the common problems that the candidate faced at work, as classifying the source of the conflict. Room for improvement. Based on the previous talk, and whenever it was possible, it was suggested to candidates to give a solution or recommendation to previous conflicts described. For the interviews, it was crucial that every answer tended to be anecdotic rather than pragmatic: examples were asked every time, as a way of supporting the argument, and anecdotes were linked to one another. Despite some interviews following the structure, others turned out to a discussion of a topic and still, valuable information was acquired: most questions included follow-ups to get more insight and a deeper understanding of the situation.

15 Program and Project Management Articulation …

213

Documentation was also a relevant source of information to gain a deeper understanding of the current procedures and organizational methodologies at the company. The Railway Infrastructure Company provided access to the following documentation: KPIs evaluation, contact list within the program, stakeholder list, overview of the planning, documents of their processes (based on PRINCE2 standard), examples of client reporting structures, and internal reporting methodologies, at the program level.

15.4 Data Analysis 15.4.1 Organizational Structure The organization is mainly functional as expected for long-term repeatable activities. However, as it aims to achieve temporal goals, it has implemented a matrix organization belonging to the Operations Area, with one Project Department hosting program and project managers. The programs are defined to cope with the development of strategic opportunities or goals. In the Railway Infrastructure Company, they have defined a particular configuration valid for the context they are committed to develop, and this configuration is presented in Fig. 15.1. According to such configuration, main programs are connected with high-level strategic goals (for instance, to increase Accessibility levels in Stations). Such

PROGRAM P01-Acon 1

PROGRAM P01-Acon 2

PROGRAM P01-Acon n

PROGRAM P01

Project 1

Project 1

Project 1

Project 2

Project 2

Project 2

Project m

Project n

Project k

Fig. 15.1 Configuration of programs in the railway infrastructure company, including both programs and projects

214

V. González et al.

programs are decomposed into narrow programs, addressing specific actions (for instance, Platform Heights, Step Free Station, and Small Measures). Inside of such subprograms, there are projects looking to implement such actions as per station. However, not all the stations require all the actions, therefore, projects inside subprograms are the convenient organizational representation. Such configuration that matches the new form of cooperation, called “interactive planning”, is characterized in terms of “political space”, “architecture”, and “action mechanisms” [14]. From the project management methods’ point of view, the Railway Infrastructure Company has developed its own made methodology to deal with projects, strongly based on the PRINCE2 standard [4]. In terms of roles, the Railway Infrastructure Company establishes per program one program manager, one program controller, one responsible for finance, one program planner, and one risk analyst as well as a program assistant. Almost all of those figures are part time devoted to the program. As per the subprogram, the team is similar, plus one technical team, with the number of members variable, depending on the workload compromised.

15.5 Discussion The case study carried out at the Railway Infrastructure Company included the analysis of different internal documents as well as seven semi-structured interviews with different roles in the Railway Infrastructure Companies, including Program Manager, Subprogram Manager, Program Planner, Program risk analyst, Program Assistant, Subprogram Assistant, and Program Coordinators. Through the analysis of the collected material, we found induced patterns for tensions, out of the already known sources: 1. 2. 3. 4.

Overlap between expectations at different structural levels, Lack of flexibility because of the existing methodologies, Lack of governance guidance, providing guidance when exceptions arise, and Overlaid or overladen managerial decisions because of the specific regulation of public bodies and time constraints.

Regarding the overlap of expectations at different structural levels, it is possible to realize how the adopted methodology enables the management of projects but its hierarchical extension to cover programs lacks enough managerial differentiation. The final effect is the over-management from the different roles for the same dimensions. Such decision-making process, as producing interferences between agents, is easily understood as a source of tensions and, at least, it becomes somehow inefficient. Part of these interferences happen as the adopted methodology is not clear enough in assigning fully differentiated layers of management, including coherent articulation of responsibilities. In the Railway Infrastructure Companies’ case, the lack of differentiation happens as the PRINCE2-like adopted methodology was scaled up over the program

15 Program and Project Management Articulation …

215

layers, but such an escalation process does not establish differentiated goals and responsibilities. The lack of flexibility happens because of the defined Key Performance Indicators (KPIs) which are based on the integration of the same KPIs at lower levels. Therefore, a learned lesson is that which when managing a set of projects in an integrated way provides benefits; there shall be an articulated methodology, caring about such different levels. Therefore, the scope management at the program level must not be the overview of the added scopes from the underlying projects, instead, it must consider the scheduled actions to cope with the expected outcomes at such a program level. Specifically, at the program level, it means considering when the scheduled projects are going to produce the expected products or services instead of just reviewing the performance of activity or Work Package items. Implementing flexibility at the appropriate management level allows to kick-off new projects or to reconfigure existing ones to cope with the changing context (cf. [1]). Lack of flexibility becomes a clear source for tensions, because of misalignments between current goals and scheduled instruments’ focus. It generates frustration when adequate decisions are not made, but it also happens at lower levels, when clear governing rules are required to manage specific situations (rewards, provide steering to specific agents, etc.). The Railway Infrastructure Company somehow lacks a formal governance guide at different managerial levels, which hardly helps to consolidate best practices through the different organizational structures when required to manage specific situations. By implementing such a governance guide, uncertainty levels get reduced as higher levels of reproducible decision-making are reached with higher levels of accountability. In addition to the other already discussed dimensions, specific constraints are linked to the interventions being carried out during operation of infrastructures when the responsible body is public, and it is enforced to follow strict regulations not only in terms of budget but also in terms of time for advertisements, etc. These constraints are not frequent enough; therefore, they have not been addressed in a formal and abstract way. In some cases, the approach represented in Fig. 15.1 is not fully respected. Such situations happen when the implementation of one specific project belonging to one specific subprogram as per one specific station is going to be implemented when some other project from another subprogram is scheduled to be deployed at the same station. Because of the delays in the bidding process and the overheads for work monitoring, the Railway Infrastructure Company has decided to implement a smart solution, which is to merge such two projects (scope, budget, quality, risks, etc.) integrating the on-site works in a way that distortions get reduced and better managed than when they become operated independently. Although the mentioned approach produces theoretical benefits, it also becomes a source of risk, when the methodology is not aware of it. When the project manager of the hosting project receives the extension, no matter if she wanted to, the risk is to manage such components as an added part. It can be perceived that her performance is not depending on the added part and, transparency upwards becomes accepted

216

V. González et al.

ORGANIZATIONAL DESIGN PROGRAM / PROJECT

TOOLS

STANDARDS METHODS

GOVERNANCE

Managerial dimension of KPIs Differenated methodologies Differenated rules and guidance

FUNCTIONAL Bridge procedures enabling consistency between both perspecves

Fig. 15.2 Elements able to reduce tensions at different organizational levels, depending on the selected tool

when activities account for tasks related to both projects. This is because when the new activities are configured, it is not usual to establish how to be related to the original tasks. Therefore, imputation to the relevant subprograms will be affected, and additional uncertainty values or extra efforts from the subprogram controllers will be required. Such external requirements, usually under pressure, become evident sources of tensions across the organizational structure. Overall, and considering that the study is ongoing, it is feasible to put forth a framework showing the relationship between management (tools) and organizational design, as presented in Fig. 15.2. In order to reduce tensions, there are different operational tools organizations can use. The lower level is to increase the standardization of the KPIs; establishing procedures to agree with KPIs should be used as well as the formal mechanism to increase their performance [30]. When the organization implements both project and programs, specific methodologies for managing them are convenient. In the same way, it would be a benefit to developing specific governance rules both at project and program management levels, providing guidance in dealing with aspects like accountability, responsibility, etc., having impacts on the managerial level. When the focus is the integration between program/projects and the functional part of the organization, it is particularly relevant to establish procedures not only for resource acquisition but also for outcomes’ transference. No matter what tool is going to be considered, to adopt a holistic vision both in logical and temporal dimensions becomes an additional advantage. Therefore, such a logical perspective enables to implement longitudinal actions capable of incremental improving mechanisms, e.g., maturity models.

15.6 Conclusions In this research, carried out by a case study methodology looking at the infrastructure sector, an analysis of the sources of tensions between programs and projects has

15 Program and Project Management Articulation …

217

been carried out. The method used here enabled the importance of emergent and spontaneous work activities into understanding the tensions’ landscape. Based on findings from interviews, non-participant observations, and reviewed documents, it was clarified that tensions are related to organizational misalignments as well as to individual preferences or interests. It was also found that those sources of tensions were not only related to the connection between functional and matrix perspectives, but also related to the effective management tool being the most convenient in use. Neglect of inter-project coordination is what we observed, thus confirming the theoretical arguments in practice [1], the learning myopia. In addition, the case study shows the limitations and impact of specific decisions, like hierarchical scale-up of the project management methodology to the program levels, etc. Yet, program management is not just scale-ups of projects, and must not be treated with such instrumental approach (cf. [1, 19]). It is proposed that through a synthesis process, a framework identifying relevant elements becomes adopted. This framework will be able to act at different organizational levels and tools to reduce the intensity of tensions. Certainly, there are specificities for each element which depend on the organization decision-making process, thus, yet misalignments (internal subcontracting of execution activities, etc.) can occur. However, the awareness will help responsible people in the most suitable ways of governing that helps accommodate and reduce such tensions. The articulated intervention will provide additional benefits as corporate learning contributes to the organization’s growth. The core argument here is that attention to the governance in projects implies a detailed analysis of the tensions among projects and programs. Particularly, the governing projects need to be seen not only through a set of organizational formals associated with governance, but focusing on spontaneous events enables a fruitful understanding of sources of tensions—and why they develop as they do. This should not be understood as governance being replaced by the governing focus, but as a complementary approach shedding new light on the tensions’ landscape. That is, governance cannot be conceptualized as a preplanned form of organization and consciously interpreted (cf. [27]). More detailed analysis is needed to give insight into how the studied programs effectively deal with tensions identified in the literature and how they perform in dealing with other kinds of existing tensions. From there, the next step will be to formulate suggestions by which programs can better cope with observed tensions in a project-oriented organization. Acknowledgments The authors express their gratitude to the interviewees, for their support and openness in discussing all the issues during the research.

218

V. González et al.

References 1. Artto K, Martinsuo M, Gemünden HG, Murtoaro J (2009) Foundations of program management: A bibliometric view. Int J Project Manage 27(1):1–18 2. Arvidsson N (2009) Exploring tensions in projectified matrix organisations. Scand J Manag 25(1):97–107 3. Aubry M et al (2010) Project management offices in transition. Int J Project Manage 28(8):766– 778 4. Bentley C (2012) Prince2: a practical handbook, Routledge 5. Bjørkeng K, Clegg S, Pitsis T (2009) Becoming (a) practice. Manage Learn 40(2):145–159 6. Bosch-Rekveldt M, Jongkind Y, Mooi H, Bakker H, Verbraeck A (2011) Grasping project complexity in large engineering projects: The TOE (Technical, organizational and environmental) framework. Int J Project Manage 29(6):728–739 7. Boswell JW, Anbari FT, Via JW (2017) Systems engineering and project management: points of intersection, overlaps, and tensions. In: 2017 Portland international conference on management of engineering and technology (PICMET). IEEE, pp 1–6 8. van Buuren A, Buijs J-M, Teisman G (2010) Program management and the creative art of coopetition: Dealing with potential tensions and synergies between spatial development projects. Int J Project Manage 28(7):672–682 9. Cicmil S et al (2006) Rethinking project management: researching the actuality of projects. Int J Project Manage 24(8):675–686 10. Eisenhardt KM, Graebner ME, Sonenshein S (2016) Grand challenges and inductive methods: rigor without rigor mortis. Acad Manag J 59(4):1113–1123 11. Flyvbjerg B (2006) Five misunderstandings about case-study research. Qual Inq 12(2):219–245 12. Flyvbjerg B, Bruzelius N, Rothengatter W (2003) Megaprojects and risk : an anatomy of ambition. Cambridge University Press 13. Flyvbjerg B (2004) Phronetic planning research: theoretical and methodological reflections. Plann Theor Pract 5(3):283–306. https://www.tandfonline.com/doi/abs/10.1080/146493504 2000250195 14. Glasbergen P, Driessen PPJ (2005) interactive planning of infrastructure: the changing role of dutch project management. Environ Plann C: Govern Policy 23(2):263–277 15. Holt R, Rowe D (2000) Total quality, public management and critical leadership in civil construction projects. Int J Qual Reliab Manage 17(4/5):541–553 16. ISO TC258 (2015) ISO 21504:2015—Project, programme and portfolio management—Guidance on portfolio management. ISO. Available at https://www.iso.org/standard/61518.html? browse=tc 17. Jerbrant A, Karrbom Gustavsson T (2013) Managing project portfolios: balancing flexibility and structure by improvising. Inter J Manag Proj Bus 6(1):152–172 18. Lewis MW et al (2002) Product development tensions: Exploring contrasting styles of project management. Acad Manag J 45(3):546–564 19. Lycett M, Rassau A, Danson J (2004) Programme management: a critical review. Int J Project Manage 22(4):289–299 20. Martinsuo M, Hoverfält P (2018) Change program management: Toward a capability for managing value-oriented, integrated multi-project change in its context. Int J Project Manage 36(1):134–146 21. Miterev M, Engwall M, Jerbrant A (2016) Exploring program management competences for various program types. Int J Project Manage 34(3):545–557 22. NFIA Holland (2018) Road & rail transportation in The Netherlands | NFIA. Road & rail web report. Available at https://investinholland.com/infrastructure/road-rail/ 23. Pellegrinelli S (2011) What’s in a name: project or programme? Int J Project Manage 29(2):232– 240 24. Pellegrinelli S, Partington D, Hemingway C, Mohdzain Z, Shah M (2007) The importance of context in programme management: an empirical review of programme practices. Int J Project Manage 25(1):41–55

15 Program and Project Management Articulation …

219

25. Pitsis TS et al (2014) Governing projects under complexity: theory and practice in project management. Int J Project Manage 32(8):1285–1290 26. Le Roy F, Fernandez A-S (2015) Managing coopetitive tensions at the working-group level: The rise of the coopetitive project team. Br J Manag 26(4):671–688 27. Sanderson J (2012) Risk, uncertainty and governance in megaprojects: a critical discussion of alternative explanations. Int J Project Manage 30(4):432–443 28. TC258 I (2012) ISO 21500:2012—Guidance on project management. ISO. Available at: https:// www.iso.org/standard/50003.html 29. TC258 I (2017) ISO 21503:2017—Project, programme and portfolio management—Guidance on programme management. ISO. Available at: https://www.iso.org/standard/66680.html?bro wse=tc 30. Villalba-Diez J, Ordieres-Meré J (2015) Improving manufacturing performance by standardization of interprocess communication. IEEE Trans Eng Manage 62(3) 31. Wilson T, Burström T (2016) Project based tensions: complexity, uncertainty and equivocality. In Proceedings of northeast decision sciences institute 2016 annual conference. pp 963–975. Available at http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A934944& dswid=3422

Chapter 16

Competences and the Digital Transformation C. Wolff, O. Mikhieieva, and A. Nuseibah

Abstract People believe that the way we live and work will be transformed— digitally transformed. The proactive attitude toward such change is to be prepared, meaning, it is a very relevant issue to ask for the required competences which enable a person not only to cope with the digital transformation but maybe even to shape and steer it. For the project management community, this raises the question of how the well-elaborated project management competences match with the ones required for the digital transformation. On the one hand, we expect projects to be a major tool for shaping the digital transformation, on the other hand, we expect the way we do projects to be digitally transformed. Besides that pull effect caused by the obvious need for projects and the digital transformation, there is a possible push effect since the project management community could transfer their elaborated and proven view on competences (for project management) into the new domain of the digital transformation. This contribution attempts to structure the topic “competences” with the ambition to document what consequences the digital transformation has for our dealing with competences and what our elaborated systems of competence models, etc., can provide for the definition of the relevant competences for the digital transformation. By defining structures and terminologies, the foundation for a continuous development and iterative refinement of a competence baseline for the digital transformation is laid. Keywords Competence models · Competence profiles for the digital transformation · Competence development path · Competence maturity model

C. Wolff (B) · O. Mikhieieva · A. Nuseibah IDiAL Institute, Fachhochschule Dortmund, Otto-Hahn-Str. 23, 44227 Dortmund, Germany e-mail: [email protected] O. Mikhieieva e-mail: [email protected] A. Nuseibah e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_16

221

222

C. Wolff et al.

16.1 Introduction—Reasoning About the Digital Transformation The digital transformation is expected to be a driving factor for change in many areas of public, business, and private life. It causes tremendous chances and challenges for organizations and individuals—and it is a management task. Digital transformation projects will be one of the dominant tools for mastering the digital change. As a result, managing the digital transformation is a very important task for project managers and a major driver for the further development of project management methodology. Project managers, project management organizations, and project management methodology have to adapt to the needs of the digital transformation. It is necessary to understand what “digital transformation” means, what it means for management, and what it means for the people who want to manage it. Therefore, project management for the digital transformation will be an important topic for the project management community for a long time. On the other hand, project managers use the tools, technologies, and processes generated by the digital transformation. New ways of managing projects, a new way of working and collaboration, and new possibilities for data analytics and transparency are changing the project management methodology and the work of project managers with an impressing pace, meaning, the digital transformation of project management is another point where project managers get in touch with the megatrend of the digital transformation. Project management is interested in competences for quite some time. IPMA decided to build their project management methodology on a competence-based view [11]. Therefore, the question of how digital transformation changes the competences needed for project management and the question of which competences are needed to manage the digital transformation are quite obvious. In addition, the digital transformation will change the view on competences in general, and it will change the way we are dealing with competences in general. The standardization of competences, competence profiles, and competence levels is a result of the efficiency and effectivity mechanisms of the industrial revolution. Mass production requires standardization and with standardized procedures and products, the mass production becomes more efficient and more effective. The digital transformation is expected to lead to individualization or at least mass customization. In the digital era, competences probably can be examined on a very individual and fine level of granularity. Talking about the digital transformation can cause a lot of confusion about terminology and wording. There are three main terms which are sometimes used synonymously [12]: • “Digitization” is frequently used to describe the conversion of an analog (electrical) signal into digital data. It is used to describe technical processes. • “Digitalization” is frequently used to describe a process, e.g., the conversion of books or documents into digital files. This goes with the automation of document handling processes or (previously paper-based) business processes. It is used in the IT domains to describe IT-based business process engineering.

16 Competences and the Digital Transformation

223

• “Digital Transformation” is used for anything going beyond the two other terms. It can not only address the effect on an organization due to business process reengineering based on digital technologies, but it also addresses the changes caused in society by digital technologies. For project managers, the first two terms are well addressed by the domains of engineering project management and IT project management. The new challenge is the management of the complexity of the digital transformation. In project management, a logical system can be constructed which positions the project as an element in a cause-and-effect chain which provides results in order to achieve a certain effect in the environment. According to this logic [22], the project implements the production system of the cause-and-effect chain by turning resources into outputs. This implementation system is managed as a project. The outputs are delivered to the operational system which launched the project in order to contribute to the operations. It exploits the outputs of the project. The exploitation system run projects in order to generate outputs by doing projects. The outputs are cumulated into bigger outcomes (the effects of the projects) which are the intended results of the project for the exploitation system. With these outcomes, the intended impact is achieved. Ideally, the impact leads to improved performance of the organization/operation (which is doing the project) within the environment. By doing all the projects, the operation generates benefits which feed the projects in terms of the provision of the required resources. With this system, a cyclic operational model is proposed which understands projects as a tool in a cause-and-effect chain which turns resources into outputs. These outputs “pay-in” for the intended outcomes of the operations of the organization which does the projects. Due to the contribution to the operations, the benefits can be generated which are needed to feed the projects with resources. In addition, the outcomes generate the intended impact on the environment. For structuring the cause-and-effect chains of the digital transformation, it is useful to apply a result-oriented logic (as in the result-oriented monitoring (RoM) approach [6]). This approach is also applied in the assessment of the effectivity and efficiency of projects [3, 7, 25] and, therefore, is familiar in the project management community. The basic idea (see Fig. 16.1) is to build cause-and-effect chains where inputs/resources are transformed (e.g., by a project) into outputs—which are (shortterm) deliverables while doing the project. For the organization which is conducting the project, the outputs cumulate into the desired (medium-term) outcomes. The outcomes are the goals for which the project is done. They support the operations of the organization in pursuing more general goals. Therefore, the outcomes are part of the exploitation of the project. The organization itself causes an impact on the (projectized) operations. The impact can be caused within the organization or as a more long-term result within the society or business environment surrounding the organization. Based on the different levels of results in the input-output-outcomeimpact (IOOI) logic [3], the results can be monitored and assessed based on the different time scales (short-, medium-, long-term effects) and the affected target

224

C. Wolff et al.

The digital business of the future … impact

be a leader of the digital transformaon shape the development , drive socio-economic change, create a sustainable transformaon

outcome

output input

competence & content: innovaons for the digital transformaon the “right” (me, place, need) products and services for the digital, projeczed and service oriented future

delivery: turn innovaon into customer experience customer experience (online, digital), user centred (co-producon projects), individualized, adapve producon

community: your organisaon as an agile team digital, service-oriented processes, agile project teams, virtual collaboraon

3 main fields: digital, projeczed, adapve Fig. 16.1 Input-Output-Outcome-Impact (IOOI) chain for the digital transformation of a business

groups (within the project, the organization, and the society). Therefore, the resultoriented monitoring (RoM) based on the IOOI is a valid tool for measuring effectivity and efficiency. For the digital transformation, the IOOI logic leads to the different elements which need to be digitally transformed and therefore managed adequately. The broader understanding of the “digital transformation” assumes that complete (IOOI) cause-and-effect chains are transformed. The project management of a digital transformation project has to deal with all levels of the IOOI (see Fig. 16.1): • The input of the digital transformation project is the team, the organization, or the business process itself. It needs to produce the outputs in a new, digitally transformed way to lead to a really sustainable result. Therefore, building an organization for the digital era means building a community which is adaptive and organized with digital and projectized means. • The output layer is the way how services and products are delivered or produced (delivery layer). The community uses projects and delivers a digitally enhanced user (or customer) experience. • The outcome layer is the digitally transformed products and services. For this, the content (especially for products) and the competence (especially for services) need to be based on digital, projectized, and adaptive means. • In terms of sustainability and stakeholder management, the impact layer has to be considered. This is where a digitally transformed business or organization is positioned. Reasoning about the competences required by a project manager in digital transformation projects, it is visible that competences on all four layers can play a role. Many projects will not address all layers or will at least put a focus on one or two

16 Competences and the Digital Transformation

225

layers. Therefore, the required competences can be narrowed down to that area. Nevertheless, structuring the domain which is characterized by the term “digital transformation” is a necessary requirement for the development of an understanding of the competences for the digital transformation [4, 17, 23].

16.2 Dealing with Competences in the Digital Era 16.2.1 The Competence Development Path Researchers have proved that competence-based approaches can positively and to a high extent improve the performance of employees [2]. One of the human resource topics is the role of competence management in helping organizations to cope with the changing environment, and the need to integrate it into an organization’s human resource strategy and its business strategy [9]. These challenges are directly related to the digital transformation, which besides technical subjects requires dealing with business value, managing changes, and applying managerial and soft skills. A competence profile is a set of individual competences. Competence profiles are used for job descriptions and competence development plans. One of the examples of a competence development plan can be found in the Project Management Competency Development (PMCD) Framework; it lists the activities which are to be undertaken, including required learning outcomes, types of learning activities, target dates, mentors’ names, predevelopment deadlines, and post-development deadlines (level to be achieved), as well as costs and metrics [18]. For coping with the digital transformation, a person has to develop an adequate competence profile (see Fig. 16.2). To assess what “adequate” means, it has to be considered that competence profiles do not only describe the competences “owned” by an individual but they can also describe the competences required by a job. Such a job competence profile describes the competence profile of a kind of “ideal” person perfectly suitable for doing the job. An “adequate” competence profile of an individual who wants to do the job is therefore such a one where the competence gap between the job competence profile and the individual competence profile is not too big. Competence profiles change over time since a person can build up a certain competence. Therefore, a person starting at a time t0 with a respective competence profile (t0 ) may develop additional competences and have a competence profile (tn ) after the time tn − t0 has passed, meaning, somehow the competence of the person has been increased by a competence delta. This delta can be described by competence (tn − t0 ) = competence profile (tn )—competence profile (t0 ) (where “−” and “=” are operators on competence profiles). The competence delta (which is also a competence profile) may be caused by one or several competence deliveries (e.g., training).

226

C. Wolff et al.

job competence profile competence assessment

competence delivery

competence profile t0

competence gap

competence profile tn

competence development path ´ efficiency + effecvity

competence

competence models, competence breakdown structure Fig. 16.2 Terminology for a competence-oriented view on the digital transformation

Moving from a competence profile (t0 ) to a competence profile (tn ) for a person happens along a competence development path. Possibly, several competence development paths can lead to competence profile (tn ). Efficiency and effectivity of a competence development path and competence deliveries may be assessed by measuring how fast (1st derivation of the competence delta) the competence is increased. “Measuring” competence is related to competence assessment or competence evaluation. In terms of the digital transformation, it is straightforward to support dealing with competences with digital means. This is not only the digital competence delivery (e.g., by eLearning or online courses). Even more important is a digital representation of competence profiles. This allows “calculations” and the comparison of competence profiles. To allow the digitalization of competence profiles, competence models with quantifiable (or at least formally described) indicators are required. As a result, competences become objects of the digital transformation, and dealing with competences becomes a digitally transformed process. This is the basis for competence development and a competence delivery according to the IOOI model of the digital transformation (see Fig. 16.1).

16 Competences and the Digital Transformation

227

16.2.2 Competence Models and Competence Profiles There are three widely used sources of data for competency models: (1) resource panels or focus groups with subject matter experts, (2) critical event interviews with superior performers, and (3) generic competency dictionaries. Generic competency dictionaries are conceptual frameworks of commonly encountered competences and behavioral indicators. These generic competency dictionaries typically have 20–40 competences, each with 5–15 behavioral indicators [15]. For the digital transformation, it is a question if there is one single competence model. And—furthermore—there is the question of how stable this model would be over time, giving the tremendous pace of change in the digital transformation. Nevertheless, deriving a competence model and standardizing it is desirable for several reasons. First of all, it allows the exchange of competence profile data since competence profiles will be formulated using the “language” and taxonomy of the competence model. Second, doing “calculations” with competence profiles and competences requires operators (like the “−”) which only make sense in a consistent conceptual framework. Third, formulating competence profiles with such a competence model opens the door to the application of assessment, evaluation, and data analytics. The standardization of a competence model does not contradict the trend toward individualization. The competence model is the “language”, the competence profiles are the words and sentences which can be very individual. Especially in the case of competence profiles with quantified indicators, the comparison and evaluation of individual competence profiles are possible.

16.2.3 Competence Quantification Competence models lead to the digital transformation of the processing of competence profiles. Even qualitative indicators may be formally described and put on an ordinal scale. The digital representation of competence profiles opens the door for a number of “operations” which increase the usefulness dramatically, especially if the operations are digitalized. The competence assessment could be quantified [2]. This can be done by a kind of competence measurement, e.g., using tests or other competence assessments which intend to measure exactly if a person has competence on a certain level or not. Nevertheless, competence measurement may be an illusion. Another way is the correlation of observations or achievements of a person with a certain competence level. This is specifically promising since data analytics and deep learning associate and correlate available data (e.g., your coffee consumption) with other data (e.g., which handbag your wife wants) with pretty high accuracy. Learning analytics can be used to find the best way of competence delivery for a certain person and a certain learning situation [8]. It analyzes the efficiency and

228

C. Wolff et al.

effectivity of the competence delivery by correlation of the delivery method with the competence delta (or even the 1st derivation of the competence delta). Learning analytics is used to steer and control the competence development path in terms of achieving a good learning outcome. The last but not least effect of the digital transformation of competence management is the possibility of monetarization of competences. It is quite obvious that businesses put a price tag on a job competence profile (they call it salary, see Fig. 16.2). For a person, it can be quite important to know what monetarized value their own competence profile has. For educational organizations and their customers, it is interesting to monetarize the value of a competence delivery. It is obvious that such data has a high value in the digital era.

16.3 Competences for the Digital Transformation 16.3.1 Defining Competence Profiles for the Digital Transformation After reasoning about how competence profiles can be used in the digital era, it is still open how they can be generated. In a new era with disruptive innovations, it is very hard to predict which competences will be needed even for the next project. How much more difficult is it to develop a holistic and consistent set of competences for the whole process of the digital transformation? Therefore—as mentioned before—it may be more advisable to focus first on the development of a competence model as a basis for competence profile descriptions. The main body of work leading to competence profiles is then still the analysis of the jobs and qualifications in digital transformation projects and the prolongation and projection of that competence profiles along the possible competence development paths. Nevertheless, the digital transformation may be able to help with this tedious work too. Instead of expert panels and interviews as described in Sect. 16.2.2, data analytics may help to generate competence dictionaries. Assessing available training material and curricula of study programs can help to derive competence descriptions automatically (e.g., by semantic analysis). Structuring and clustering can be done by correlation analysis. Since the focus is on the management of the digital transformation by the means of doing projects, the connection to the existing competence models and competence profiles for project management is necessary and useful [24]. The structure of the competence models and frameworks can help to find a structure for the competences for the digital transformation too. This is based on the assumption that digital transformation projects are still predominantly projects or even just a special type of project. Nevertheless, deriving the models from project management frameworks may also lead to too much focus on the project management view on the digital

16 Competences and the Digital Transformation

229

transformation which is not valid for all professionals in the digital transformation (e.g., software developers). Reasoning about the competences for digital transformation projects and for the digital transformation, in general, is required to come to a more or less standardized and widely accepted and applied model (or meta model). Such a competence model for the digital transformation (CMDT) would be of great help for the compilation of relevant competence profiles. The competence profiles are then the required input for the definition of competence delivery methods as a basis for training, education, and learning. With well-described competence profiles, the competence assessment can be developed (or indirect assessment via correlation or data analytics can be used). Therefore, the work on an adequate and usable competence model for the digital transformation (CMDT) is the consequent next step in preparing project management for the challenges of the digital era. Even with the described approaches toward competence models and profiles, there will be still a lot of expert knowledge to support the data analytics, to select the right sets of data, and to condition the processing with the right models and frameworks. As mentioned in Sect. 16.1, the structuring (breakdown) of the domain of the digital transformation into interrelated competence areas is crucial for the definition of meaningful competence profiles and competence levels.

16.3.2 Competency Breakdown Structure The competency breakdown structure (CBS) is based on a common project management approach to structure a complex system into manageable elements. Both methods—competence profile and competency breakdown structures—can be useful both for the formalization of the digital transformation as an area itself and for managing teams of digital transformation projects due to the following advantages [16]: • • • • •

Common language; Basis for self-reflection and self-improvement; Provides the point of references for team building; Managing trust; Motivating for personal development.

The CBS method is attractive not only for structuring the competence area and the elements and indicators of the competence profile, but also for the development and partitioning of the competence delivery and for the different aspects considered in the competence assessment. Actually, it is a necessary tool for the management of the individual competence development path.

230

C. Wolff et al.

16.3.3 Organizational Competence and Organizational Maturity An organization’s maturity in project management is defined as the “organization’s ability to deliver the desired strategic outcomes in a predictable, controllable, and reliable manner” [1]. Furthermore, a maturity model is defined as a conceptual model that consists of different yet subsequent maturity levels for processes in one or more areas, and represents a wanted evolutionary path for these processes [21]. Maturity models attempt to systemize processes and areas within an organization, which explains their wide adoption worldwide [14]. Maturity models consist of “a set of maturity levels and provide precise criteria to achieve each level of maturity”. Then, an organization willing to follow a maturity model is to apply certain techniques or best practices to develop capabilities (and competences) that may allow them to achieve the next level. Moving upwards along those levels means the organization has improved; each level represents a higher state of maturity in comparison to the previous level or the level below. Because of this, the concept of maturity is linked to the success/failure rate an organization holds [1, 20]. The organizational maturity in dealing with the digital transformation can be linked to the combined competences for the digital transformation in the organization—forming the organizational competence. In terms of project management for the digital transformation, this is linked to the project management maturity models developed especially for strongly projectized organizations [13]. Consequently, a digital transformation maturity model (DTMM) should be developed to help organizations to assess their status in the process of adopting the digital transformation into their organization. Again, for projectized organizations the right way to derive the maturity model and the maturity levels could be the modification and adaptation of the existing project management maturity models [20, 21]. Maturity models are, in general, a common tool to describe how far an organization got in a transformation. Therefore, apart from the project management maturity models, a variety of maturity models for other domains was and is developed. An analysis of the existing maturity models in related fields (e.g., the Industrie 4.0 Maturity Index [19]) will contribute to the development of the digital transformation maturity model (DTMM). Nevertheless, the focus should be on competences, not on other technical or organizational capabilities. IPMA Delta® is a relevant example of such a maturity model since it focuses on competence instead of processes to achieve project management maturity, IPMA OCB® is the standard that in fact describes the organizational competence [5, 10]. A major advantage of the definition of a digital transformation maturity model (DTMM) is the possibility of an organizational assessment. Such a digital transformation maturity assessment (or even a quick check) can help organizations to understand how they are positioned in terms of the challenges of the digital transformation. For such an assessment based on digital transformation competences, the quantified representation of the competence profiles would be specifically useful. First, an organization can calculate a kind of cumulative (or even weighted) competence profile

16 Competences and the Digital Transformation

231

of the organization out of the competence profiles of the individuals. Second, it can also calculate the competence gap compared to a desired (cumulative) job competence profile or compared to the required competence profiles of customer projects. Furthermore, the organization can plan an organizational competence development path as a strategy for the digital transformation. The definition of an organizational competence profile and an organizational maturity (in terms of a competence level) can be a very powerful tool for business development, organizational learning, and even for company valuation (if the organization competence is monetarized) for digital companies (knowledge based or virtual).

16.4 Toward a Competence Baseline for the Digital Transformation 16.4.1 Competence Baseline Development in Dynamically Changing Environments A major challenge in defining the competences for the digital transformation is the dynamics and pace of the changes due to the digital transformation. Compared to the development of the project management domain, the rise of the digital era seems to be faster, more drastic, and less predictable. This is an issue for the goal of having widely adopted competence baselines published and distributed. The only way to cope with the dynamics of the change is to make the competence baseline dynamic and flexible as well. That leads to competence profiles which are not fixed for a period of several years but which are updated much more frequently. Roles and competence profiles attached to roles in the digital transformation need to be allowed to emerge and vanish relatively fast. Competence profiles need to get a time stamp or they even have indicators which are dynamic over time. This requires adequate competence models and it would benefit from a digital representation of the competence models and the competence profile data sets. Since not only the competence development paths but also the further development of competence profiles or even competence models are relatively uncertain and unknown, an explorative or iterative approach for the development and refinement can help. Again, we can learn from project management approaches in such innovative and uncertain environments. Agile and iterative approaches (a kind of Scrum for the development of competence profiles) with a constant involvement of the users can lead to the provision of competence profiles which meet the requirements of the progress of the digital transformation [23].

232

C. Wolff et al.

16.4.2 Steps in Competence Model and Competence Profile Development The considerations and conclusions of the previous sections do not lead to competence profiles for the digital transformation but to the outline of a process required for the generation of such competence profiles. This process has to fulfill certain requirements: • An important first step is the definition of a first draft or outline of a competence model for the digital transformation (CMDT) which allows the digital representation of competence profiles and the coding of (possibly dynamic) competence indicators. • Analysis of structural “maps” of the domains of the digital transformation and analysis of training and educational programs leads to role descriptions and requirements for competence profiles. In combination with the CMDT, such competence profiles can be derived and descriptions of competences and indicators can be generated. • Competence delivery for the respective competence profiles and roles can be defined and translated into educational formats and measures. • The first three steps are iterative. They need regular update and refinement. • Based on the competence profiles, it is possible to define competence levels and competence development paths to get from one competence level to another by using competence delivery. • Competence levels and competence profiles are the input for the development and iterative refinement of the digital transformation maturity model (DTMM) and the respective assessment of the organizational competence and maturity. • Competence assessment methodology, learning analytics, and competence evaluation are supporting processes and methods. With this approach, the community of people dealing with competences and the digital transformation should be able to move forward.

16.5 Conclusion and Outlook The digital transformation is a chance and a challenge for individuals and organizations at the same time. The expectation is the drastic change of many areas of life and work at a tremendous pace. Organizations and individuals want to be prepared and they want to be able to benefit from the digital transformation. Therefore, the question about the right competences is very important. Project management has seen a similar transformation—maybe less drastic and less fast—when many areas of life and work moved to a projectized approach and

16 Competences and the Digital Transformation

233

style. Therefore, the project management domain was forced to formulate competences and maturity levels to help individuals and organizations cope with the challenges and benefit from the trend. For the digital transformation—and especially the management of the digital transformation—the success of the competence orientation in project management may be repeated. This is a chance and a task for the project management community too. With this contribution, a path toward dealing with the competences for the management of the digital transformation is outlined. There are a number of similarities with the development of the project management competence and maturity models. But there are also differences, especially in complexity, dynamics of change, uncertainty, and the pace of change. Therefore, several methods of structuring and shaping the path toward competence models and competence profiles for the digital transformation are proposed. The next steps should be the definition of the competence model for the digital transformation (CMDT) and the digital transformation maturity model (DTMM) from the standpoint of project management. It looks straightforward to start with this area since it can learn and transfer from what has been achieved in project management. The results can be evaluated in a large community which is on the one hand used to the competence-based view and on the other hand forced to deal with digital transformation projects.

References 1. Agorio Comas MF, Toledo Gandarias N, Arraibi JR, Gutiérrez Terrón A (2018) Maturity models’ selection criteria for assessment in a specific organisation: a case study.In: Otegi JR, Toledo N, Taboada I (eds) Proceedings of the 1st international conference on research and education in project management – REPM 2018, Asociación Española de Dirección e Ingeniería de Proyectos (AEIPRO), ISBN-13: 978-84-697-9972-7, Bilbao, Spain 2. Bassi LJ (1997) What works: assessment, development, and measurement. In: Bassi LJ, RussEft D (eds) American Society for Training and Development, Alexandria, VA 3. Bertelsmann Stiftung (2010) Corporate Citizenship planen und messen mit der iooi-Methode, Bertelsmann Stiftung 4. Back A (2017) Digital maturity & transformation report 2017. IWI-HSG, St. Gallen 5. Bushuyev S, Wagner RF (2014) IPMA Delta and IPMA organisational competence baseline (OCB) new approaches in the field of project management maturity. Int J Man Projects Bus 7(2):302–310 6. DAAD, A guide to results-oriented project planning and monitoring. https://www.daad.de/der daad/unsereaufgaben/entwicklungszusammenarbeit/aufgaben/en/37674-results-oriented-mon itoring/. Accessed 10 June 2017 7. European Commission (2012) commission staff working document impact assessment - a reinforced european research area partnership for excellence and growth. European Commission, Brussels 8. Ferguson R (2012) The state of learning analytics in 2012: a review and future challenges. Technical Report KMI-12-01, Knowledge Media Institute, The Open University, UK. http:// kmi.open.ac.uk/publications/techreport/kmi-12–01

234

C. Wolff et al.

9. Heffernan MM, Flood PC (2000) An exploration of the relationships between the adoption of managerial competencies, organisational characteristics, human resource sophistication and performance in Irish organisations. J. Er Ind Train 24(2/3/4):128–136 10. IPMA (2013) IPMA organisational competence baseline - the standard for moving organisations forward. International project management association (IPMA®) ed., 1.0th ed 11. IPMA (2015) Individual competence baseline 4th version (ICB4), 2015 12. i-scoop (2018) Digitization, digitalization and digital transformation: the differences. https:// www.i-scoop.eu/digitization-digitalization-digital-transformation-disruption/ 13. Kerzner H (2001) Strategic planning for project management using a project management maturity model. John Wiley & Sons, New York 14. Kosieradzka A (2017) Maturity model for production management, 2017. http://www.sciencedi rect.com/science/article/pii/S1877705817312456 ISBN 1877-7058. https://doi.org/10.1016/j. proeng.2017.03.109 15. Mansfield RS (2005) Practical questions in building competency models. Workitect.inc 16. Mikhieieva O (2017) Competency breakdown structure for managing intercultural issues in international project teams. In: Proceedings of ProMAC2017, Munich, Germany, pp 977–983 17. Mikhieieva O, Wolff C (2018) Competencies for managing the digital transformation for the setup of a master programme. In: Proceedings of the 1st international conference on research and education in project management – REPM 2018, Editors: Jose Ramón Otegi, Nerea Toledo and Ianire Taboada, Asociación Española de Dirección e Ingeniería de Proyectos (AEIPRO), ISBN-13: 978-84-697-9972-7, Bilbao, Spain 18. PMI (2017) Project manager competency development framework. Project Management Institute Inc, Newtown Square Pennsylvania 19. Schuh, G., Anderl, R., Gausemeier J., ten Hompel, M., Wahlster, W. (2017). Industrie 4.0 Maturity Index. Managing the Digital Transformation of Companies (acatech STUDY), Munich: Herbert Utz Verlag 2017 20. Souza TFD, Gomes CFS (2015) Assessment of maturity in project management: a bibliometric tudy of main models. Procedia Comput. Sci. 55(Supplement C):92–101. https://doi.org/10. 1016/j.procs.2015.07.012 21. Tarhan A, Turetken O, Reijers HA (2016) Business process maturity models: a systematic literature review. ISBN 0950-5849. https://doi.org/10.1016/j.infsof.2016.01.010 22. Turner R, Zolin R (2012) Forecasting success on large projects: developing reliable scales to predict multiple perspectives by multiple stakeholders over multiple time frames. Project Manage J 43(5). https://doi.org/10.1002/pmj.21289 23. Wolff, C. (2017). Managing the Digital Transformation – Digital & Projectized Master Education, Proceedings of the Dortmund International Research Conference 2017, Eds. Carsten Wolff, Christian Reimann, ISBN 978-3-00-058090-1, Dortmund 24. Wolff C, Otegi JR, Bushuyev S, Sachenko A, Ciutene R, Hussein B, Torvatn TA, Arras P, Reimann C, Dechange A, Toledo N, Nuseibeh A, Mikhieieva O (2017) Master level education in project management – the Eurompm model. In: 9th IEEE international conference on intelligent data acquisition and advanced computing systems: technology and applications, Bukarest, Romania 25. Wolff C, Nuseibah A (2017) A projectized path towards an effective industry-universitycluster: Ruhrvalley. In: Computer sciences and information technologies (CSIT), 2017 12th international scientific and technical conference on (Vol 2, pp 123–131). IEEE

Part II

Civil Engineering and Urban Planning. Construction and Architecture

Chapter 17

Influence of the Separation Between Photovoltaic Modules Within an Array on the Wind Pressure Distribution R. Escrivá-Pla and C. R. Sánchez-Carratalá

Abstract As has traditionally been considered, the gap between photovoltaic modules within the same array would be one of the key factors in the development of wind pressure on the tables of a solar farm and, therefore, in the resulting wind action on these surfaces. Irrespective of how widespread this belief is, currently, there is no specific standard directly applicable on which to base these considerations. In this paper, various numerical analyses are carried out to assess the actual influence of the separation between modules on the pressure produced locally by the wind on both sides of a panel. To do this, numerical simulation techniques of the wind field are used by applying algorithms of Computational Fluid Dynamics (CFD), performing a battery of tests on common design geometries in utility-scale plants. In these tests, the influence of different geometric parameters is studied, including the separation between modules and the inclination of the arrays. Parallel computing techniques in a general-purpose cluster are implemented. The results indicate that the gaps between the panels tend to reduce the total force on the array in a proportion similar to that in which the net area of the surface is reduced. Keywords Photovoltaic solar plants · Support structures for panels · Wind loads · Separation between modules within an array · Parallel computing · Computational Fluid Dynamics

R. Escrivá-Pla · C. R. Sánchez-Carratalá (B) Dpto. de Mecánica de Medios Continuos y Teoría de Estructuras, E.T.S. Ingenieros de Caminos, Canales y Puertos, Universitat Politècnica de Valéncia, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] R. Escrivá-Pla e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_17

237

238

R. Escrivá-Pla and C. R. Sánchez-Carratalá

17.1 Introduction The current rate of construction of power stations for photovoltaic production has caused an increasing interest in the optimization of support structures of panel arrays (also known as tables of generation modules). These are light prefabricated structures, of the plane frame type, formed by bars—normally metallic, either of aluminum or, sometimes, of steel—that support, as a cover, the table formed by the matrix of panels. This matrix is fastened to the framework of joists and purlins by means of anchoring elements (fixing clamps). Photovoltaic power generation stations located in an open field are constructed as an extensive repetition of structures like the one described above. The research interest in our case is the production, logistics, implementation, and maintenance of this type of structures, which are potentially subjected to relevant scale economies of special importance for the industry; this encourages the study of factors affecting their efficiency and behavior throughout their entire service life. Support structures are designed to withstand the wind as the main action, possibly acting together with an overload due to the accumulation of snow or sand on the panels’ surface, depending on the cases. Although the tables of generation modules can be considered as an exempt cover when subjected to wind, overloads caused by walking on them are usually not considered, since maintenance is carried out in such a way that the workers do not have to climb on the panels. No orientation or express guidance for calculating wind loads in these particular structures is currently found in codes and recommendations at the European or national level [2, 3, 9]. The most similar case is to consider them as a canopy roof or as a succession of parallel inclined planes like the ones in industrial premises with a multi-span sawtooth roof. Standards also include the possibility that the designer performs analytical studies, or physical or numerical experiments for the determination of wind loads; this type of approach is fully justified in cases such as the one in question, in which the required precision is compromised when trying to apply simplified prescriptions included in the standards. Recommendation PV2-12 [10] for photovoltaic support structures can also be used, but with a very specific field of application, limited to structures installed on flat roofs of low-rise buildings. This recommendation focuses on the two phenomena that mainly affect the determination of wind loads in that case, namely: the vortex shedding that occurs when an oblique wind acts on parapets in a roof; and the aerodynamic hiding that takes place between photovoltaic modules placed on the roof. In this context of information scarcity, numerical tests can be a very useful and cost-effective tool to shed light on the problems of flow transfer around blunt bodies, such as those represented by the panels of a solar park [4]. This type of tests has already been successfully applied to study complex phenomena such as the hiding between successive tables in a photovoltaic power plant [5]. Among the different factors that can affect the loads generated by the impact of the fluid mass on the panel arrays, there is one especially controversial that, despite being the subject of continuous discussion among the designers of support structures, has not received sufficient attention from researchers. This is the issue of the influence

17 Influence of the Separation Between Photovoltaic Modules …

239

that the separation between photovoltaic panels within the same array actually has in the distribution of wind pressure on the exempt cover that these panels form. The said separation is an advisable requirement (for refrigeration) for its normal operation, but it is also imposed by the presence of clamps that hold the panels attached to the corresponding table. The idea that the gap or groove between panels plays a relevant role in decreasing pressure due to the wind is common among many designers of support structures within the industry. The justification for this conjecture, with no basis other than mere intuition, is usually supported on two arguments: first, the local velocity increase in the vicinity of the gap, due to flow acceleration, which would be expected to produce a decrease in the pressure at the edges of the panels adjacent to the gap; and, second, the rupture of the turbulent wake leeward of the table due to the flow passing through the gap between the panels, which would result in a reduction of the suction on the rear face of the panels. The foregoing may contradict the fact that, by sufficiently separating the panels, they should be analyzed as two different planes. Applying without further detail the stipulations of EC1-1-4 [2] for roofs, both planes should incorporate the edge strips (areas where the absolute net value of the pressure is greater), so that greater wind forces would be obtained. Therefore, a more detailed study of the phenomenon is needed to be able to draw reliable conclusions for practical application. This paper focuses on addressing the conjecture about the substantial reduction of the resulting force due to the wind, that supposedly is produced by a small separation between photovoltaic modules, like the one imposed by cooling needs or the use of fixing clamps. To this end, numerical studies of certain specific geometries are performed based on Computational Fluid Dynamics. Cases with different arrangements of the matrix of panels within the table and with diverse angles of attack are tested, considering or not the gaps between panels. The obtained results allow us to make a comparison between forces generated in experiments where the only differentiating factor is the presence and orientation of the groove, thus making it possible to evaluate its real influence on the wind loads.

17.2 Problem Modeling Aspects related to the geometry of the panels and their separation, as well as to the wind flow over them, are outlined next. These aspects can condition, to a great extent, the modeling of the problem to be solved.

17.2.1 Considerations on Geometry Most manufacturers of modules establish zones where they recommend locating the anchorage elements required to fix the position of modules within the array or table. Fixing clamps can be considered as almost point-acting; they are metallic elements,

240

R. Escrivá-Pla and C. R. Sánchez-Carratalá

typically made of aluminum or stainless steel, which directly or through any type of elastomer (rubber) accommodate and fasten the modules with each other and to the lower bar structure formed by joists and purlins. The clamp function is mechanical, although it is subjected to certain conditions detailed hereafter. In the first place, clamps must provide the smallest possible separation between modules, in order to favor a better use of the total surface of the corresponding array. However, in most cases it is the manufacturer of the panels that recommends keeping a minimum separation between them, since transforming solar radiation into electricity causes a panel to heat. If not dissipated, this heating reduces the efficiency of the transformation; for this reason, it is advisable to separate the modules so that they can better dissipate heat by radiation to the surrounding air. The fact that there is a wind flow that forces convection further improves the efficiency of this process [6]. Secondly, the limitation in height of the clamp and its fixing elements (bolts, nuts, washers) is usually imposed to avoid shadowing on the modules and becoming a sand (or another material) accumulation spot on the upper face of the array. Regarding the constraints imposed by the element to be fixed, the frames of the panels are usually formed in aluminum and have a rectangular contour. Panels’ manufacturers usually recommend fixing them on the long sides at L/4 of the corners, L being the length of the long side. Regardless of the overall dimensions of the panel, an interval of ±100 mm for the recommended support point is commonly accepted as permissible tolerance for the anchoring. With a uniformly distributed load on the module surface, the position of supports that theoretically provides a greater allowable load per panel depends on the length ratio of panel sides. For the most usual cases, the best position is approximately at L/5 of the corners, since this way it is possible to match the maximum absolute values of the positive and negative moments on the long side, considered as a simply supported beam with two cantilevers. In our case, the assumption for the tests will be that the supports are placed at L/4 of the corners. From the point of view of the structural designer, the greatest interest in the separation between panels is due to its influence on the array opacity (which is related both with the net surface of the tables and with the airflow through them). This paper includes only the effect of the separation between panels in the same array on pressures (or forces) generated in it. However, opacity (along with separation between tables) being one of the most influencing aspects of the hiding factor between adjacent structures, it would be convenient to also study their effect on pressures occurring in the leeward tables. Doing a simplified analysis of the problem, it can be assumed that what takes place, when the gap is considered, is basically an effective reduction of the surface on which the pressures are developed. This simplification does not consider the nonlinear effects of the flow near the groove, nor the influence of other parameters such as the array inclination, the panels’ arrangement (with the long side horizontally or vertically), etc.; despite that, it can be considered as a reference value to compare with the results obtained in the numerical tests. In the case of the vertical arrangement of the modules (vertical groove), the opacity factor in a very long table would tend, disregarding the area of the clamps, to

17 Influence of the Separation Between Photovoltaic Modules …

ηo =

An b ≈ Ag b+s

241

(17.1)

where Ag is the gross area of the array in its plane, An is the net area of the array in its plane, s is the separation between panels, and b is the panel width (short side). On the other hand, the diaphaneity factor (which measures the reduction of the exposed surface) would tend to ηd = 1 −

An s ≈ Ag b+s

(17.2)

In the previous analysis, only the case with the vertical arrangement is mentioned, since it is the only one that can have enough development in plan (table length) to approximate the indicated limit. In a typical design case, with a 20 mm separation and a 995 mm panel width, the surface reduction would tend approximately to 1.97% in a very long table, being only 1.00% in a very short table consisting of only two panels.

17.2.2 Physical Conditioning of the Flow In the numerical experimentation of fluids, the decision-making on the physical parameters that mainly govern the simulated phenomenon is of crucial importance. In most cases, this decision is usually based on previous experience or recommendations by other authors, although in other situations it is the work constraints (time limitation, required accuracy, maximum cost, etc.) which have a greater weight over the choice. Alternatively, in the absence of these determinants, it is possible to consider the physical parameters as additional variables to take into account in the tests to be performed; the latter, in spite of facilitating decision-making, can notoriously increase the experimental work. In the works carried out for this paper, the physical restraints that govern the behavior of the incompressible supercritical flow are those corresponding to a RANS approach (Reynolds-Averaged Navier–Stokes equations), the k-epsilon one being the adopted turbulence model [8]. This decision is based, on the one hand, on the practical engineering application that is intended, as is traditionally made in the industrial context. On the other hand, since the problem to be solved considers a relatively large domain, but with relevant geometric details that are relatively small in size (gaps between panels), a very large mesh is required that needs the mentioned approach to be simulated with the available computing power. Regarding the boundary conditions, a right cuboid with its front face perpendicular to the wind progress direction is established as the external domain. The transfer conditions of the flow perpendicular to the boundary are controlled by the relative pressure: if the relative pressure at a point of the interior of the domain in a cell adjacent to the boundary is greater than zero, the flow leaves the domain; if the relative

242

R. Escrivá-Pla and C. R. Sánchez-Carratalá

pressure is less than zero, the flow enters. At the inlet (frontal face), a logarithmic profile of velocities, constant over time, is considered. This profile is applied to an initial uniform velocity field throughout the domain. At the outlet (face opposed to the one defined as inlet), an output condition is established (i.e., flow only comes out). Figure 17.1 shows the integration domain and the group of two panels to be tested, corresponding to modules installed in a vertical arrangement. Figure 17.2 shows the same, but for the horizontal groove model. The model without any separation between panels is also considered, as a reference.

Fig. 17.1 Geometric definition of the model with modules in a vertical arrangement for θ = 25°

17 Influence of the Separation Between Photovoltaic Modules …

243

Fig. 17.2 Geometric definition of the model with modules in a horizontal arrangement for θ = 25°

17.3 Analysis Methodology Computational Fluid Dynamics (CFD) is a discipline that emerged as an alternative to physical models, due to the need for obtaining practical application results to analytically unsolvable problems, and costly and complex laboratory experimentation. At first, it was only considered for simple problems that did not require great computing power; also to contrast, validate, or calibrate a series of small-scale physical tests of the same problem. With the development of computing technologies and the progressive increase of the available computing power, as well as with the implementation (both of equipment and programs) of parallel computing paradigms, CFD has evolved from a mere tool for obtaining practical application results, to a window toward scientific development [1], allowing to conceptualize unforeseen

244

R. Escrivá-Pla and C. R. Sánchez-Carratalá

processes and phenomena, as well as to pose and to experiment new hypotheses on fluids behavior. For the application to the problem under study, we have implemented our own methodology and a versatile and adaptable computer workflow, which can be broken down into the following steps: 1. Generation of the three-dimensional model in a local machine. 2. Export of the model and generation of the file with the meshing characteristics. 3. Transfer of the geometric model to a virtual machine in a server to perform its meshing. 4. Transfer of the mesh and introduction of the problem physical constraints in a general-purpose computing cluster. 5. Resolution and control of the numerical algorithm. 6. Export of the results to the local machine for post-processing. With regard to the selection of the numerical algorithm for solving the CFD problem, the program OpenFOAM has been chosen because it is an open-source project with great versatility and modularity. It allows the user to access the code and to modify it at will in order to adapt it to different types of problems [7]. It also has the advantage of its extensive use in the academic and scientific contexts, which guarantees continuity in its improvement and development.

17.4 Numerical Application An array consisting of only two 1,955 × 995 × 50 mm panels (50 mm thickness) is studied, with the two panels separated by two 50 × 20 × 50 mm fixing clamps (20 mm wide) placed in the long side of the panels. For this arrangement, the following actual diaphaneity factor is obtained (regardless of the horizontal or vertical arrangement of the panels): A g = (2 · 0.995 + 1 · 0.02) · 1.955 = 3.92955 m2 An = 2 · 0.995 · 1.955 + 1 · 2 · 0.05 · 0.02 = 3.89245m2 ηd = 1−

3.89245 = 0.94% 3.92955

The value thus obtained (logically, very similar to the aforementioned 1.00%) represents, for the indicated panels’ and clamps’ sizes, the smallest possible reduction of the exposed surface, because there are only two panels in the table (minimum influence of the joints between panels).

17 Influence of the Separation Between Photovoltaic Modules …

245

17.4.1 Studied Cases The study parameters considered are: the panel arrangement; the panel separation, s; and the panel inclination, θ. The coding adopted is as follows: • • • • • • •

VER: vertical arrangement, with groove and clamps in a vertical position HOR: horizontal arrangement, with groove and clamps in a horizontal position G10: inclination θ = 10° G25: inclination θ = 25° G40: inclination θ = 40° S00: separation s = 0 mm S20: separation s = 20 mm

The unobstructed height under the tables in the model is 0.800 m, a value required in some zones for livestock passage. The analysis will focus on the comparison between cases that differ in the only parameter to be studied, which for this research is the separation between panels (S00 and S20). This way, the biases that can alter the obtained result values are introduced equally in both tests to be compared, so that the assumed hypotheses and the rest of the parameters will affect both tests approximately in the same way. This will allow us to obtain greater robustness of the numerical test and, consequently, greater reliability of the conclusions derived from this comparison, as opposed to inaccuracies inherent to the simulation process of the physical reality that the test implies. The simulated cases are grouped into two sets or blocks, depending on the direction of the panel arrangement: VER Block • • • • • •

VER_G10_S00 VER_G10_S20 VER_G25_S00 VER_G25_S20 VER_G40_S00 VER_G40_S20

The meshes used for the three VER_S00 cases are made up of around 250,000 nodes, 1,350,000 cells, and a mesh file size in UNV format of about 200 MB. For the three VER_S20 cases, there are around 350,000 nodes, 1,800,000 cells, and a UNV file size of about 250 MB. HOR Block • • • • • •

HOR_G10_S00 HOR_G10_S20 HOR_G25_S00 HOR_G25_S20 HOR_G40_S00 HOR_G40_S20

246

R. Escrivá-Pla and C. R. Sánchez-Carratalá

The meshes used for the three HOR_S00 cases are made up of around 300,000 nodes, 1,650,000 cells, and a mesh file size in UNV format of about 210 MB. For the three HOR_S20 cases, there are around 350,000 nodes, 1,850,000 cells, and a UNV file size of about 245 MB.

17.4.2 Analysis and Discussion of Results Hereafter, results from the processing using the OpenFOAM libraries are given; these libraries calculate the resultant of the pressure and viscosity forces over all the exposed surfaces for all the panels on the array. The drag force (horizontal component in the direction of the inlet wind, positive in the forward direction) and the lift force (vertical component, positive upwards) are represented for each pair of cases with the same inclination angle (G10, G25, or G40) but with different separations between panels (S00 and S20). With the purpose of assessing the influence of the separation, it must be taken into account that the S20 cases have a higher mesh resolution (finer meshing) in the zone surrounding the groove, due to the smaller maximum size of the fluid elements considered in those areas; this is so because the separation between panels is, in the application case, smaller than their thickness. Therefore, both on the upper and lower panel surfaces close to the gap, and on the lateral surfaces of the gap itself, somewhat greater force values than in the case of panels without separation are to be expected. According to experimental principles, this represents what is known as a negative bias for the hypothesis to be validated by the test (that the total force on the table is reduced by the presence of gaps), which reinforces the validity of the result if the hypothesis is confirmed. VER Block Figure 17.3 shows the drag and lift forces for vertically arranged panels. As it could be anticipated, when the inclination angle (or angle of attack) is greater, more drag and lift are produced (in absolute value) on the studied array. The force ratios do not seem to follow a predictable distribution with the angle of attack. For G10, both the drag and the lift in permanent regime (which is reached in about 0.60 s) are greater for the geometry with a gap (S20); this may be due to the fact that the groove is almost parallel to the incident flow and the presence of the clamps adds more frontal area. On the contrary, it is observed that the peak in the lift force due to the initial transient (like the one that would be obtained during a gust of wind of short duration) is slightly greater in the case without a gap; this can be explained considering that the gap permits an effective vertical flow through it. For G25 and G40, both the drag and the lift are smaller in the case with a gap. Anyway, the influence of the separation is altogether small, as can be observed. Analyzing the variation of the total force exerted on the panels due to the presence of the vertical gap, the values given in Table 17.1 are obtained for all the simulated time (mean and median) or for the maximum force that appears during the transient

17 Influence of the Separation Between Photovoltaic Modules …

247

Fig. 17.3 Drag and lift forces with modules in a vertical arrangement Table 17.1 Variation of the total force on an array with modules in a vertical arrangement

Inclination (°)

Variation (%) Mean

Median

Peak

10

2.09

2.95

−1.80

25

−1.87

−2.16

−2.71

40

−1.54

−1.67

−0.38

248

R. Escrivá-Pla and C. R. Sánchez-Carratalá

(peak). It is verified that the variation is approximately between 1 and 3%, being positive (force increase) for very small inclinations, but negative (force reduction) for usual inclinations in photovoltaic parks located in Mediterranean latitudes (inclinations around 25–35°, to optimize annual production). Therefore, it is verified that the reduction is not very different from the value given by the diaphaneity factor in the panel plane. For this reason, as an engineering recommendation, the opacity factor could be applied to the force calculated over the gross surface. HOR Block The horizontal arrangement of the panels causes one of the lateral walls of the gap to be inclined with respect to the flow direction, which, on a first approach, will affect the transfer of flow passing through it. Since the panel sides used in the numerical experiment have a length ratio of 1.96:1 (approximately 2:1) and the clamps’ width is small, the total height of the table above ground level turns out to be approximately the same as with the vertical arrangement. Therefore, although the velocity profile at the inlet is logarithmic, it is to be expected that the total forces without separation between modules will be very similar to those calculated in the previous block. Figure 17.4 shows the drag and lift forces for horizontally arranged panels. On average, when the permanent regime is reached, it can be observed that the horizontal arrangement of the gap implies, in almost all cases, a slight increase in the effect of the gap (load decrease) with respect to the vertical arrangement; this can be attributed to the channeling effect of the flow lines over the upper surface of the first panel toward the gap. It is interesting to note that there is a reduction of the total force in the permanent regime for all the inclinations studied; only for G40, a small increase in the force is obtained in the transient peak. Figure 17.5 shows the current lines for an inclination of θ = 25°. It can be observed how those lines passing through the groove slightly affect the vortex that is generated in the leeward face of the table. The velocities are given in m/s and the relative pressures (i.e., with respect to the atmospheric pressure) in Pa. Finally, Table 17.2 gives the total force variation that is obtained in the array, which turns out to be of the same order of magnitude as in the vertical arrangement. Therefore, as a general rule, the same variation range as in the previous test block can be identified.

17.5 Conclusions In this paper, results obtained in two blocks of numerical tests performed on different geometries of photovoltaic panels subjected to wind action and arranged both vertically and horizontally have been presented, having as a fundamental parameter of study the existence of a separation (gap) between them, through which the wind can circulate. The analysis includes modeling of the anchoring elements of the modules inside the groove.

17 Influence of the Separation Between Photovoltaic Modules …

249

Fig. 17.4 Drag and lift forces with modules in a horizontal arrangement

It has been demonstrated that, for both arrangements, the separation between photovoltaic panels tends to decrease the resulting force on an array due to the wind action, except in the case of panels in a vertical arrangement and with small inclination angles. In addition, the force variation is of the same order as the one deduced from a simplified analysis based on the net area in the array plane. This means that the reduction is not large enough to have the engineering relevance that it was supposed to have among many designers of photovoltaic parks.

250

R. Escrivá-Pla and C. R. Sánchez-Carratalá

Fig. 17.5 Current lines above and below an array and through the gap with modules in a horizontal arrangement for θ = 25°

Table 17.2 Variation of the total force on an array with modules in a horizontal arrangement

Inclination (°)

Variation (%) Mean

Median

Peak

10

−1.98

−1.54

−1.33

25

−2.05

−1.99

−2.52

40

−2.08

−3.06

0.76

In view of the results obtained, other points of view can be proposed about a phenomenon such as the one studied, of which almost nothing was proven and almost everything was taken for granted up to now. The interest of the performed analysis invites to further continue deep in the study of the influence of other parameters with potential relevance in the design of solar energy structures subjected to wind action. Experimentation must be, in each case, the basis on which to progress in the knowledge of a scientific field as complex as that of fluid dynamics on blunt bodies of diverse geometry.

References 1. Coleman GN, Sandberg RD (2010) A primer on direct numerical simulation of turbulence - Methods, procedures and guidelines. Tech Rep AFM-09/01a. University of Southampton, Southampton, UK, p 21 2. Comité Técnico AEN/CTN-140 (2005) UNE-EN 1991-1-4:2007. Eurocódigo 1: Acciones en estructuras. Parte 1-4: Acciones generales. Acciones de viento, EC1-1-4/07, Spanish version of EN 1991-1-4:2005. AENOR, Madrid, Spain

17 Influence of the Separation Between Photovoltaic Modules …

251

3. Dirección General de Carreteras (2011) Instrucción sobre las acciones a considerar en el proyecto de puentes de carretera, IAPC/11. Dirección General de Carreteras, Ministerio de Fomento, Madrid, Spain 4. Escrivá-Pla R (2015) Estudio aerodinámico del factor de ocultamiento para la determinación de las cargas de viento a aplicar en el proyecto de centrales fotovoltaicas de generación eléctrica. Aplicación al proyecto de planta solar de 20 MW en Beneixama (Alicante). MSc Thesis, ETSICCP, Universidad Politécnica de Valencia, Spain 5. Escrivá-Pla R, Sánchez-Carratalá CR (2019) A computational study of the aerodynamic hiding factor between panels’ arrays for the optimization of support structures in solar parks. In: Ayuso Muñoz JL et al (eds), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering. Springer International Publishing. Cham, Switzerland, 271–284 6. Gökmen N, Hu W, Hou P, Chen Z, Sera D, Spataru S (2016) Investigation of wind speed cooling effect on PV panels in windy locations. Renew Energy 90:283–290 7. Jasak H, Jemcov A, Tukovi´c Z (2007) OpenFOAM: a C++ library for complex physics simulations. In: Proceedings of the International Workshop on Coupled Methods in Numerical Dynamics, Dubrovnik, Croatia, p 20 8. Menter FR (2011) Turbulence modeling for engineering flows. A Technical Paper from ANSYS Inc, ANSYS, Canonsburg, PA, p 25 9. Puertos del Estado (1995) Recomendaciones para obras marítimas: Acciones climáticas II: Viento, ROM-0.4/95. Puertos del Estado, Ministerio de Obras Públicas, Transportes y Medio Ambiente, Madrid, Spain 10. SEAOC (2012) Wind design for low-profile solar photovoltaic arrays on flat roofs, PV2-12. Structural Engineers Association of California, Sacramento, CA

Chapter 18

How Are Sustainability Criteria Included in the Public Procurement Process? L. Montalbán-Domingo, C. Torres-Machi, A. Sanz-Benlloch, and E. Pellicer

Abstract According to the World Economic Forum (2014), trillions of US dollars have been invested over the past decades to build and maintain the global infrastructure asset base, and every year, US$ 2.7 trillion is invested in new infrastructure. Construction and maintenance activities have a significant impact on the three pillars of sustainability; i.e., social, environmental, and economic sustainability. It is thus crucial to improve sustainable performance in this sector. In order to implement sustainable procurement, it is important to assess the current status of the use of the triple bottom line of sustainability (i.e., social, environmental, and economic factors) in the procurement process. This study is intended to show the current state of sustainable procurement in the field of road maintenance at the international level and the sustainable selection criteria identified in the content of tenders. Keywords Sustainability · Public procurement · Maintenance · Sustainability criteria · Road

18.1 Introduction During the past decade, public policies have been adapted to the new trends of society, which demands a more sustainable management. Indeed, sustainable procurement L. Montalbán-Domingo · A. Sanz-Benlloch · E. Pellicer (B) School of Civil Engineering, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] L. Montalbán-Domingo e-mail: [email protected] A. Sanz-Benlloch e-mail: [email protected] C. Torres-Machi Department of Civil, Environmental and Architectural Engineering, University of Colorado Boulder, 1111 Engineering Drive, Boulder, CO 80309-0428, USA e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_18

253

254

L. Montalbán-Domingo et al.

has recently acquired a high degree of salience in international policy. These trends are being reflected in the inclusion of environmental and social considerations on the daily activities of governments. Implementing effectively sustainable procurement has the potential to cut costs, shorten timescales, enhance stakeholder relationships, increase sales, reduce risks, enhance reputation, and improve margins [8].

18.1.1 Sustainable Public Procurement On the basis of the definition of UK Sustainable Procurement Task Force, Sustainable procurement is [3] “A process whereby organizations meet their needs for goods, services, works, and utilities in a way that achieves value for money on a whole life basis in terms of generating benefits not only to the organizations, but also to society and the economy, whilst minimizing damage to the environment”. Thus, sustainable public procurement refers to the integration of social and environmental impacts within procurement undertaken by the government or public sector bodies. However, according to the literature reviewed, there is no consensus on the definition of sustainable procurement and sustainable public procurement, at the international level. In fact, the emphasis on sustainable procurement in the European Union is environmental rather than social aspects. But, in other countries, such as the USA, Canada, and Australia, nondiscrimination issues, local issues, or ensuring procurement opportunities from Aboriginal businesses are an important part of the sustainable procurement policy [2]. On the other hand, not all stakeholders use the terms “Sustainable Public Procurement”. The related concepts of Green Public Procurement, Environmentally Preferable Procurement, Socially Responsible Procurement, and Responsible Purchasing are also included, which cover many of the same concepts. In this article, the term Sustainable Public Procurement (SPP) will be used, incorporating social, environmental, and economic aspects.

18.1.2 Impacts of Roads The effect of road maintenance on the environment comprises, among others, the consumption of natural resources, and impacts on human health, landscape, and natural environment. On the other hand, social value is important in road maintenance procurement. Aspects such as the creation of new jobs, the increase in labor participation of long-term unemployed in the local community, the creation of new skills and training programs, the promotion of fair wages and fair working-time, as well as equal opportunities and freedom of association at the workplace, should be considered in the procurement process [1]. Compared to road construction or traffic, the impacts of road maintenance activities might seem moderate. However, it is important to consider that road maintenance shall continue for a considerable period of life (the service life of a road is estimated

18 How Are Sustainability Criteria Included …

255

at 40–60 years) and that the cumulative impacts over time might be extensive. This indicates that there is a need to integrate environmental and social aspects of road management in order to minimize future problems [4].

18.2 Project Procurement and Delivery Methods in the Field of Road Maintenance As Garbarino et al. [5] state, the form of procurement chosen can have a significant influence on the outcome. This is because each type of contract brings with it distinct interactions between the different stakeholders. Moreover, each type of contract has advantages and disadvantages when seeking to procure a road with an improved environmental and social performance. It is therefore important to identify the main points in the sequence of procurement activities where sustainable public procurement criteria should be integrated.

18.2.1 Maintenance Actions Regarding maintenance actions, Garbarino et al. [5, 9] defined two main types of works, whose definitions were taken from PIARC Road Dictionary [10]: • Road maintenance considers the actions carried out in order to maintain and restore the level of service of roads, including – Routine maintenance, which represents the scheduled operations to satisfy a specific level of service. – Preventive maintenance, whose activities are mainly focused on preserving or restoring serviceability and extending the service life of a road. • Road reconstruction gathers those works aimed to increase the level of service of a network; this type of work can be based on replacing the entire road section.

18.2.2 Contract Types According to Road Maintenance Outsourcing Models The outsourcing of maintenance activities has evolved during the last years, according to the following three levels [11]: • Input-Driven Contracts: The owner of the road provides detailed method-based specifications and defines the activities which must be done by the Contractor.

256

L. Montalbán-Domingo et al.

• Output-Driven Contracts: The intervention criteria and performance standards for individual repairs are defined by the authority but the contractor has the flexibility to select the best methodology for the development of works. • Outcome-Driven Contracts: In these contracts, only the desired outcomes and required levels of services are defined by the authority, and the contractor chooses the best methodology to achieve them. According to these levels of outsourcing, the main contract types for maintenance of road network are [6, 12]; • Unit price contract: This contract modality includes a unit price for every work method. On this modality, payment to the supplier is calculated based on the quantity of work performed, multiplied by the respective unit price. • Performance-specified maintenance contract (PSMC): In this type of contract, the working method is not indicated. The contractor is paid to achieve some performance metrics and the method used is up to him. • Hybrid model: In this case, the contract includes two sections: one, which specifies a unit price for every work method and another one, which specifies the performance.

18.3 Materials and Methods The empirical material of this study was gathered from the documents of tender competitions. Following the method employed by Igarashi [7], a three-stage process was used for the current review: • Definition of boundaries: To delimitate the research, three important notes were made: – This analysis was aimed only at tender competitions related to road maintenance. – The countries analyzed were Australia, Chile, Spain, and the United Kingdom. – The analyzed tender competitions were open from May–July 2016. • Material collection: The search for tender competitions was mainly conducted following a structured keyword search. Public procurement websites were used to search for tender competitions in each country. To select the appropriate documents, the classification system of each country for public procurement was identified. A total of 97 tender documents were obtained as initial candidates from the database search (see Table 18.1). • Material classification and evaluation. The documents were analyzed through a careful reading of tender documents, looking for sustainable criteria.

18 How Are Sustainability Criteria Included … Table 18.1 Number of gathered documents for each country

257

Country

Documents

Australia

22

Chile

30

Spain

27

United Kingdom

18

TOTAL

97

18.4 Results The aim of this paper was to analyze the content of tenders in terms of sustainability criteria in the field of road maintenance. The documents were analyzed in search of the sustainable criteria. The criteria were analyzed on the basis of the four phases of the tendering process [13]: • Selection criteria (SC) which includes information about selection and exclusion provisions and solvency conditions. • Technical specifications (TS) which are detailed prescriptions of the characteristics that the product or service must accomplish to be accepted. • Award criteria (AC) which are the criteria that are taken into account by the contracting authority to select the best bid. • Contract performance clauses (CPC) which describe the execution clauses with which the awarded company contract must comply. The results show that, with respect to sustainable criteria included in tender documents, social criteria (SOC) stand out over environmental criteria (ENV), as displayed in Fig. 18.1. A detailed analysis of the results obtained from different countries (Fig. 18.1) shows that some countries, such as Spain, only consider environmental aspects as

Fig. 18.1 Distribution of sustainable criteria in the four phases of the tendering process by country

258

L. Montalbán-Domingo et al.

award criteria, while they are neglected in the selection criteria, technical specifications, and performance clauses. On the other hand, Australian tender documents show a more balanced approach between social and environmental aspects in all the phases of the tendering process. In general terms, Spain and Chile are more focused on social aspects, while the United Kingdom and Australia are focused on both environmental and social aspects. Moreover, Australia is the only country which includes social criteria at the award criteria phase. Table 18.2 Main social (SOC) and environmental (ENV) criteria included in the selection criteria phase of the tendering procedure Criteria

Percentage of occurrence Chile (%) Spain (%) Australia (%) UK (%)

SOC Compliance with social obligations and 100 equality legislation

100

100

100

Indigenous Development Plan Proposal

0

0

50

0

Work Health and Safety Management System

0

0

33

0

ENV Environmental Management System

0

0

33

0

Quality Management System

0

0

50

0

Compliance with environmental obligations

0

0

0

100

Compliance monitoring with environmental legislation on the part of suppliers

0

0

0

100

Table 18.3 Main social (SOC) and environmental (ENV) criteria included in the technical specifications phase of the tendering procedure Criteria

Percentage of occurrence Chile (%) Spain (%) Australia (%) UK (%)

SOC Holding a Health and Safety Policy

0

0

17

100

Employ and train a minimum number of apprentices/trainees who are registered in the Northern Territory

0

0

17

0

Companies that include in their workforce people with disabilities or in a situation of social exclusion

90

89

0

0

Labor occupational program generated by the contract

20

0

0

0

0

0

17

0

0

0

33

100

ENV Quality Management System Environmental Management System

18 How Are Sustainability Criteria Included …

259

Table 18.4 Main social (SOC) and environmental (ENV) criteria included in award criteria phase of the tendering procedure Criteria

Percentage of occurrence Chile (%) Spain (%) Australia (%) UK (%)

SOC Enhancement of industry and business capability in the territory

0

0

50

0

Improved capacity and quality in carrying out the Works

0

0

50

0

Accredited training programs currently supported by the Tenderer or that will be supported or utilized in carrying out the Works

0

0

50

0

Proposed level of usage of apprentices and trainees in carrying out the Works

0

0

50

0

Proposed number of jobs for Territorials that will be supported or utilized in carrying out the Works

0

0

50

0

Proposed level of involvement of local Indigenous enterprise in the Works

0

0

50

0

Regional development opportunities

0

0

50

0

0

11

0

0

0

11

0

0

0

0

17

0

50

44

0

100

ENV Minimization of impacts to users Environmental action program Fatigue Management Plan Quality

Table 18.5 Main social and environmental criteria included in contract performance clause phases of the tendering procedure Criteria

Percentage of occurrence Chile (%) Spain (%) Australia (%) UK (%)

SOC To comply with the work health and safety act and regulations

100

100

33

0

To employ a percentage of permanent workers, new recruitments, and disabled workers for the execution of the contract

0

100

0

0

To comply with the local benefit commitment

0

0

50

0

To use the services located and obtain supplies/materials available within the territory

0

0

50

0

To use accredited apprentices/trainees who are registered in the territory

0

0

33

0 (continued)

260

L. Montalbán-Domingo et al.

Table 18.5 (continued) Criteria

Percentage of occurrence Chile (%) Spain (%) Australia (%) UK (%)

The contractor will maintain and implement the indigenous development plan throughout the course of the contract

0

0

33

0

ENV To conserve energy, water, wood, and other resources, reduce waste and phase out the use of ozone-depleting substances and minimize the release of greenhouse gases, volatile organic compounds, and other substances damaging the health and the environment

0

0

0

100

Contractor’s environmental management plan

0

0

33

0

Maintenance requirements of the vegetation areas

0

0

17

0

A traffic management plan to be implemented during the contract

0

0

17

0

To ensure at all times that the requirements of all the relevant facts concerning noise, air, water, dust, and other pollutants are fully observed

0

0

17

0

To ensure that deleterious material deposited as a result of the works is removed from external roads, footpaths, and public areas

0

0

17

0

Any work or material damaged by water from any source shall be removed, replaced with fresh material, and reconstructed by the developer.

0

0

17

0

Appropriate arrangements must be made to provide anti-siltation measurements to prevent any deleterious matters entering the stormwater system

0

0

17

0

During the construction period and throughout the duration of the maintenance period, completed pavements shall be maintained by the developer in a clean and sound condition

0

0

17

0

Tables 18.2, 18.3, 18.4, and 18.5 show the main sustainable criteria, the stage at which they are included in tender documents and their percentage of occurrence by country.

18 How Are Sustainability Criteria Included …

261

According to the data, social criteria are better established than environmental criteria in all phases of the tendering process. With respect to social criteria, the following are the most common: “compliance with social obligations and equality legislation”; “including in their workforce people with disabilities or in a situation of social exclusion”; and “complying with the Work Health and Safety Act and Regulations”. These criteria are present in over 50 percent of the analyzed documents. With regard to environmental criteria, the following criteria are the most used (accounting for about 20% of occurrence): “compliance with environmental obligations”; “compliance monitoring with environmental legislation on the part of suppliers”; “environmental management system”; and “conserving energy, water, wood, paper, and other resources, reduce waste and phase out the use of ozone-depleting substances and minimize the release of greenhouse gases, volatile organic compounds, and other substances damaging health and the environment”.

18.5 Discussion and Conclusions This paper analyzed the sustainable criteria included in the four phases of the tendering process in the field of road maintenance. Documents of tender competitions were gathered from public procurement Internet websites to develop this research. The analyzed countries were Spain, the United Kingdom, Australia, and Chile. The results indicated that the most important sustainable criteria are those related to social aspects. Certain criteria are widely used. There is an important difference between the sustainable approaches in each country. There are many criteria that are rarely used and there are tendering organizations which use practically no environmental criteria. Additionally, there is a problem of determining what sustainable criteria are important and how important they are. There is no standard answer on the importance of sustainable procurement parameters, thus being complex to assess their suitability. On the other hand, thoroughly analyzing the documents, a lack of metrics and indicators to evaluate sustainability has been noted. The sustainable criteria are defined in a general way, and the definition of many of the criteria related to sustainability is vague and confusing. Thus, a new general way to assess sustainable performance needs to be found.

References 1. Akenroye TO (2013) An appraisal of the use of social criteria in public procurement in Nigeria. J Pub Procurement 13(3):364–397 2. Brammer S, Walker H (2011) Sustainable procurement in the public sector: an international comparative study. Int J Oper Product Manage 31(4):452–476. https://doi.org/10.1108/014435 71111119551

262

L. Montalbán-Domingo et al.

3. DEFRA (2006) Procuring the future. Sustainable procurement national action plan: recommendations from the sustainable procurement task force. Department for. http://www.keepsc otlandbeautiful.org/media/317324/sptf_presentation_bm_v2-051206.pdf 4. Faith-Ell C (2005) The application of environmental requirements in procurement of road maintenance in Sweden. KTH 5. Garbarino E, Rodriguez Quintero R, Donatello S, Gama Caldas M, Wolf O (2016) Revision of Green Public Procurement Criteria for Road Design. Constr and Mainten Proposal. https:// doi.org/10.2791/683567 6. Haapasalo H, Aapaoja A, Björkman S, Matinheikki M (2015) Applying the choosing by advantages method to select the optimal contract type for road maintenance. Int J Procure Manag 8(6):643. https://doi.org/10.1504/IJPM.2015.072385 7. Igarashi M, de Boer L, Fet AM (2013) What is required for greener supplier selection? A literature review and conceptual model development. J Purchasing Supply Manage 19(4):247– 263. https://doi.org/10.1016/j.pursup.2013.06.001 8. Kalubanga M (2012) SUSTAINABLE PROCUREMENT: concept, and practical implications for the procurement process. Int J Econ Manage Sci 1(7):01–07 9. Pakkala PA, Martin de Jong W, Äijö J (2007) International overview of innovative contracting practices for roads 10. PIARC Road Dictionary (2016). http://www.piarc.org/en/Terminology-Dictionaries-Road-Tra nsport-Roads/. Accessed 20 May 2016 11. Porter TM (2005) Procurement models for road maintenance. The 2005 annual conference of the transportation association of Canada 12. Ruparathna R, Hewage K (2015) Sustainable procurement in the Canadian construction industry : challenges and benefits. Can J Civil Eng 426(August 2014):417–426. http://doi. org/dx.doi.org/10.1139/cjce-2014-0376 13. Testa F, Grappio P, Gusmerotti NM, Iraldo F, Frey M (2016) Examining green public procurement using content analysis: existing difficulties for procurers and useful recommendations. Environ Dev Sustain 18(1):197–219. https://doi.org/10.1007/s10668-015-9634-1

Part III

Product and Process Engineering and Industrial Design

Chapter 19

Inclusive Design at the Different Phases of the Ageing Process A. González-de-Heredia, D. Justel, and I. Iriarte

Abstract People begin their ageing process well before the age of 65. In addition, although some patterns have been identified, each person ages differently. The tools of Inclusive Design help us to understand peoples’ capabilities, and therefore, develop products to meet their needs. However, these tools often require the involvement of users, and the tool itself is the initial obstacle. This study describes the different kinds of ageing, collects and classifies tools according to the degree of exclusion they cause. A selection of Inclusive Design tools is finally made and recommendations for their application are proposed for each stage of the ageing process. Keywords Inclusive design · Methodology · Ethics · Ageing

19.1 Introduction The world population is ageing, and will continue to do so if the increase in life expectancy and the current declining birth rates are maintained. For this reason, elderly people are becoming an interesting segment of the population for many companies. However, due to the diversity of this segment, developing products and services that respond to their needs and goals is not an easy task. This is because each person ages differently due to their genetic characteristics, their life history, and even their environment. Understanding their needs and desires, therefore, is increasingly complex. In general, all people experience some loss in their abilities throughout the ageing A. González-de-Heredia (B) · D. Justel · I. Iriarte Design Innovation Center, Faculty of Engineering, Mondragon Unibertsitatea, Loramendi 4, 20500 Arrasate-Mondragon, Spain e-mail: [email protected] D. Justel e-mail: [email protected] I. Iriarte e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_19

265

266

A. González-de-Heredia et al.

process. The Inclusive Design tools help us to understand the capabilities and needs of different people in order to ensure that the product meets their needs. However, these tools often require the participation of final users, and the tool itself is the initial obstacle. The aim of this study is to select the most suitable tools from Design for All or Inclusive Design to understand people’s needs throughout the ageing process, and to integrate them into the design process. In this study, we present a review of the description of the ageing process presented by different authors [3, 9, 12, 13, 16]. We also present a review of the state of the art of Design for All and Inclusive Design tools. Six methods are analyzed [1, 4, 5, 8, 11, 15]. Subsequently, the tools that explicitly consider user capabilities are identified and are then classified according to the degree of user participation that they demand. Finally, a selection of Inclusive Design tools is proposed, and their suitability for each stage or type of ageing is assessed.

19.2 State of the Art The following section describes the different concepts to be taken into account when we talk about the ageing process, and presents a review of the state of the art of Design for All and Inclusive Design.

19.2.1 Ageing Process The World Health Organization [16] describes the ageing process as follows: The changes that constitute and influence ageing are complex. At the biological level, ageing is associated with the accumulation of a wide variety of molecular and cellular damage. Over time, this damage leads to a gradual decrease in physiological reserves, an increased risk of many diseases, and a general decline in the intrinsic capacity of the individual. Ultimately, it results in death. But these changes are neither linear nor consistent, and they are only loosely associated with a person’s age in years. Furthermore, older age frequently involves significant changes beyond biological losses. These include shifts in roles and social positions, and the need to deal with the loss of close relationships. However, intrinsic capacity is only one of the factors that will determine what an older person can do. The other is the environments they inhabit and their interactions with them. These environments provide a range of resources or barriers that will ultimately decide whether people with a given level of capacity can do the things they feel are important. Thus, while older persons may have limited intrinsic capacity, this combination of people and their environments, and the interaction between them, is their functional capacity, defined by the report as the health-related attributes that enable people to be and do what they value in life.

19 Inclusive Design at the Different Phases of the Ageing Process

267

Also according to the WHO [16], people usually go through three stages during old age: • High and stable intrinsic capacity • Declining capacity • Significant loss of capacity Depending on the environment and the resources available to each person, their functional capability will vary in proportion or remain more stable. However, these phases occur in a nonuniform manner, which explains the diversity among the elderly population. The trends in the different subgroups of elderly people can vary enormously. Fernández-Ballesteros [9] proposes an alternative way to classify the types of ageing: • Successful ageing, as one with high physical and cognitive abilities and full of vitality • Normal ageing, with some ailments typical for the age, but with no serious illnesses, and still independent and autonomous • Pathological ageing, where the person suffers from more than one illness and depends on other people to develop daily activities Cavanaugh and Blanchard-Fields [3] identify another approach to understanding ageing and propose the following types of ages in addition to chronological age: • Chronological age, the time from the date of birth • Biological age, related to the physical and physiological changes that occur in the body • Psychological age, ability to react to events that occur in the environment, motivation, feeling, memory • Subjective age, the age the person feels • Social age, the roles determined by the society in a specific place and time, and how effectively they are carried out • Functional age, autonomy and independence to carry out daily activities In this approach, as in the WHO approach, we see how some of the ages relate to the intrinsic characteristics of the person (chronological, biological, and subjective age), while others depend on the person’s environment and their interaction with it (psychological, social, and functional age). The health system provides several methods to understand the level of capabilities of each person. Two of the most used methods are the Basic Activities of the Daily Life (BADL), and the Instrumental Activities of the Daily Life (IADL). The BADL evaluation [14], includes the following activities: feeding, cleaning, going to the bathroom, getting from one place to another, getting around in the home, climbing stairs, and having a bath. The IADL [12], includes doing the shopping, cooking meals and doing the housework, getting dressed, leaving the house, and communicating with others.

268

A. González-de-Heredia et al.

These methods help to predict how soon a person will become dependent. We define a dependent person as someone who is no longer able to carry out their basic daily activities without help. This occurs when their intrinsic capability can no longer be compensated for by environmental factors or by the use of welfare technologies. We should bear in mind that people can maintain autonomy, despite relying on care, if they retain the ability to make decisions about issues that concern them.

19.2.2 Relationship Between Ageing and Inclusive Design We can deduce that diversity is an essential factor to consider when talking about ageing. If we want to understand and meet the needs of people during their ageing process, it is necessary to include the concept of diversity of abilities in the design processes. Therefore, before starting the bibliographic review, we must define the concepts of disability, Design for All and Inclusive Design. According to the WHO [16], disability is an umbrella term for impairments, activity limitations, and participation restrictions. Impairments are problems in body function and structure, such as significant deviation or loss; activity limitations are difficulties in executing actions or tasks; and participation restrictions are problems in participating in vital situations. Therefore, disability is a complex phenomenon that reflects the interaction between the characteristics of the human organism and the characteristics of the society in which it lives. Design for All is the intervention on environments, products, and services in an effort to ensure that everyone, regardless of their gender, abilities, age, sexual orientation, lifestyle or any other aspect of human diversity, including future generations, is able to enjoy all the opportunities and activities offered by our society [1]. Inclusive Design is the design of mainstream products and/or services that are accessible to, and usable by, as many people as reasonably possible without the need for special adaptation or specialized design [2]. Thus, the objective is to apply inclusion criteria from the beginning of the design process to avoid adapting specific designs for certain users, as is often the case with people with disabilities. These measures are also interesting from the commercial point of view, since the ageing of the population, among other factors, is increasing the size of the market segment with specific needs. Therefore, the number of potential users interested in having inclusive products is increasing.

19.2.3 The Ageing Process and Inclusive Design Approaches Six interesting approaches related to Design for All and Inclusive Design have been identified:

19 Inclusive Design at the Different Phases of the Ageing Process

1. 2. 3. 4. 5. 6.

269

Universal Design (UD) [13] Inclusive Design Toolkit (IDT) [5] Human-centred Design (HCD) [11] Designing with People (DWP) [15] Innovating with People (IWP) [8] HUMBLES Method [1]

In this section, these approaches are described in terms of the theoretical approach, the design process, and the tools proposed, based on the references cited above [1, 8, 11, 13, 15].

19.2.3.1

Universal Design [4]

This approach, developed by the Center for Universal Design at the University of North Carolina, identifies seven principles that serve as a guide when designing inclusive products. The approach does not describe any specific design process, but rather is a design philosophy that includes as many users as possible. This philosophy consists of the aforementioned seven principles of Universal Design [4]: equitable use, flexibility in use, simple and intuitive use, perceptible information, tolerance for error, low physical effort, and appropriate size and space for approach and use. Based on these principles, the use of the Universal Design performance measures tool is proposed as a procedure to evaluate the level of satisfaction with the Universal Design principles.

19.2.3.2

Inclusive Design Toolkit [5]

The Inclusive Design Toolkit is an initiative carried out by the University of Cambridge, an approach that consists of a set of tools provided online (www.inc lusivedesigntoolkit.com). Clarkson et al. [5] defined the concept of Inclusive Design through three main characteristics: focused on users, aware of the capabilities of people, and focused on the business. In addition, they define how an inclusive product/service should be functional, usable, desirable, and feasible. Regarding the design process, this approach presents a cyclic model that consists of three phases: explore, create, and evaluate. In addition, specific tools are proposed for each of the phases, for example, the impairment simulator software that simulates different visual disabilities by distorting the images on a screen; simulation gloves and glasses that allow for simulating osteoarthritis and visual impairment; and the exclusion calculator that estimates the number of people excluded from using a particular product, based on disability statistics and data from the United Kingdom.

270

19.2.3.3

A. González-de-Heredia et al.

Human-Centred Design Toolkit [11]

The Human-centred Design Toolkit, developed by IDEO in 2011, is an approach that describes in a simple way the Human-centred Design process. The toolkit is a publication in written format downloadable free of charge from the network. The objective of this methodology is not the inclusive design per se, but rather it is aimed more specifically at facilitating the development of projects for the Base of the Pyramid [11]. The Base of the Pyramid concept refers to the population of developing countries and disadvantaged environments, and the opportunity that this population offers the industry in general. As mentioned, it is not an inclusive design approach like the others, but it shares many of the characteristics of these and is more comprehensive when proposing a people-centred design process. More specifically, the Human-centred Design is based on three aspects: desirability, technical feasibility, and financial viability. When designing, the primary starting point is the person (desirability), and from that point, the technical and financial viability aspects come into focus. The objective is to find solutions that satisfy all three aspects.

19.2.3.4

Designing with People [15]

Designing with People is the result of a collaborative project between the Engineering Design Centre of Cambridge University, the Well-being Institute, the University of Loughborough, and the Helen Hamlyn Centre (belonging to the Royal College of Art in London), with the objective to generate resources to help designers start designing “for people”. With this aim, they created a web page (www.designing withpeople.org) that offers relevant information in four blocks: people (users with different capability and skill levels, so that the designer can understand the characteristics of each of them); activities (analyzes the effects of diversity on daily activities); methods (includes twenty methods that allow designing that takes people into account). These methods are designed to be integrated into the designer’s own design process, although as a common basis it is cited in the Double Diamond design model [7]. The final block is ethics, which provides an ethical guide for carrying out research with users that defines five phases: contact, consent, confidentiality, conduct, and context.

19.2.3.5

Innovating with People [8]

Innovating with People [8], is a Design for All approach that seeks innovation by integrating the capabilities of people. The objective of the initiative is to inspire, motivate, and demonstrate to the industrial and commercial companies, the advantages and possibilities of implementing a people-centred approach. This approach considers that Design for All is a way to generate new ideas of high value in the market through the process of reducing costs.

19 Inclusive Design at the Different Phases of the Ageing Process

271

The goals and desires of people evolve, and values such as simplicity, efficiency, ease of use, sustainability, and ethical design are increasing in popularity. It is believed that an Inclusive Design strategy has innovative potential, as it can provide a meeting point for social interests (social inclusion, equality) and commercial interests (potential market increase). Nordby (2003), shows the market potential of Inclusive Design in pyramid form, and in doing this describes four market segments. The lower segment is the general market or primary segment, characterized by healthy consumers and usually referred to as means. The second segment represents a large number of people who need certain adjustments to be able to live in the designed world. This group includes people who wear glasses, left-handers, people with dyslexia, pregnant women, travelers with a lot of luggage or people with hearing problems. The other two segments include people who need assistive technologies or personal assistance to carry out daily activities such as having a shower or eating. Since their needs are very different from the main segment, they are generally not considered the main market of Inclusive Design; this would be the major difference between it and Design for All. In this sense, the consumers of the second mentioned segment, those who need individual adjustments, represent a very important potential market for Inclusive Design.

19.2.3.6

HUMBLES [1]

The HUMBLES method [1], is an approach aimed at helping companies take into account a greater number of customers by adapting their products and services to the diversity of their real and potential customers. HUMBLES consists, in its progressive approach, of seven phases to implement Design for All in the company. It advocates incorporating the user’s point of view into the design process, allowing companies to evaluate their business strategy in relation to their consumer needs, thus achieving a competitive advantage within their market. As mentioned above, the methodology has seven steps (whose initials are, in English, HUMBLES): Highlight design for all opportunities, User identification, Monitor interaction, Breakthrough options, Layout solutions, Efficient communication, and Success evaluation. The HUMBLES method is a holistic approach that allows innovation in terms of Design, for All as a part of the company strategy.

19.3 Contributions of the Inclusive Design Tools Suitable for the Ageing Process In this section, the tools from the analyzed methods that contemplate the capabilities in an explicit way are identified. The role that the user must take in the application of these tools is also analyzed, and consequently, the suitability of these tools at the different stages of ageing is considered.

272

A. González-de-Heredia et al.

19.3.1 Review of the Design for All and Inclusive Design Approaches As we have seen before, it is difficult to consider ageing without understanding the capabilities of people. Therefore, we have selected the tools that serve to analyze specifically the capabilities of people and their relationship with the products and environments that surround them. We can create two groups of tools: 1. Empathy tools. People (DWP), impairment simulation software, simulation gloves & glasses (IDT), and empathy tool (DWP). 2. Tools for quantifying the exclusion. Exclusion calculator (IDT) and measurements of the performance of the Universal Design (UD). The most important difference between the other types of tools and these tools is that these tools have been developed specifically to understand diversity and promote inclusion, generating results that are entirely focused on designing in a more inclusive manner. In total, thirteen (seven methods and six tools) of seventy-four methods and tools can be considered as specifically related to inclusion. However, IDT, DWP, and IWP are very valid approaches to working on inclusion, as they offer a structured and conscious way of developing a critical perspective beyond the traditional noninclusive design. Regarding the rest of the tools, the vast majority related to user research and, in general, HCD is used repeatedly in several approaches.

19.3.2 The Role of the User The tools that explicitly consider the capabilities of the people have been selected. Subsequently, they have been classified according to the level of user involvement required. Finally, the ability of people to participate in each stage of the ageing process has been estimated. To classify the tools according to the level of involvement required, the scheme of the Design Innovation Center [DBZ] [6], shown in Fig. 19.1, has been taken as a reference. The following changes are proposed to the Role of the User scheme: • To consider the personas tool (DWP) at the informs level, since as noted in previous studies [10], most of the versions of the personas tool analyzed to date, including that of DWP, propose a qualitative methodological approach but based on interviews with real people. • To include the empathy tools compiled in the state of the art of design for all: empathy tool (DWP), impairment simulator software (IDT), and simulation gloves and glasses (IDT), when users are not directly involved. • To include the tools for quantifying exclusion—exclusion calculator (IDT) and Universal Design performance measurements (UD)—at the level of not involved. They will also be included at the informs level, although in this case only partially,

19 Inclusive Design at the Different Phases of the Ageing Process

273

Fig. 19.1 Role of the user scheme. From [6]. Retrieved from https://dbz.mondragon.edu/docume nts/701437/711296/metodologia-dbz-mu.pdf/cdcaf18d-0dd5-42dd-8710-0db73dd03660 where it was published under a CC BY SA licence. Further permission for reuse in this (commercial) publication has been granted

since, depending on the phase and the type of project in which it is applied, user participation might not be necessary.

19.3.3 Role of the User According to His Stage or Type of Ageing On the other hand, the stages and types of ageing are compared with the possible roles of the user, and the levels of participation achievable by each segment are deduced (Table 19.1). Table 19.1 shows the level of participation achievable by users based on their stages or types of ageing. Thus, a person with successful ageing could adopt any of the roles. However, a person with normal ageing could get to inform and evaluate, but depending on their cognitive abilities will be more or less able to generate ideas in participatory design dynamics. Something similar is foreseen for people with pathological ageing, who depending on their cognitive abilities may be able to inform in interviews or participatory observation activities.

274

A. González-de-Heredia et al.

Table 19.1 Role of the user according to his stage or type of ageing Role of the user

Successful ageing High and stable capacity

Normal ageing Declining capacity

Pathological ageing Significant loss of capacity

Not involved

x

x

x

Informs

x

x

/

Evaluates

x

x

Generates

x

/

Develops

x

19.3.4 Selection of Tools for Each Stage or Type of Ageing The most appropriate tools for each stage or type of ageing are defined below.

19.3.4.1

High and Stable Capacity

In this case, all the tools described in the different approaches of Design for All and Inclusive Design could be used. In addition, the tools for quantifying the exclusion, in particular, could be used to quantify the slight losses of capabilities.

19.3.4.2

Declining Capacity

In general, all the tools that do not require a high level of involvement could be used. Some dynamics, such as brainstorming or lead user, could be a barrier for some of the normally ageing people. From the analysis of the tools that specifically analyze capabilities, it can be deduced that the tools for quantifying exclusion are more suitable for detecting slight impairments of capability. These tools promote a detailed analysis of the actions to be carried out in the use of a product and the demand for each capability. Therefore, it can be deduced that they can be adequate for the declining capacity or normal ageing stages.

19.3.4.3

Significant Loss of Capacity

It is understood that the personas tool is difficult to apply in the case of people with considerable loss of cognitive capacity, but it is proposed to look for alternatives. For example, we could direct the interviews to their main caregivers to create the personas with pathological ageing. It is considered that this tool deserves adaptation, since numerous projects and authors have demonstrated its value in focusing the design process on the person.

19 Inclusive Design at the Different Phases of the Ageing Process

275

On the other hand, it has been found that most of the empathy tools are focused on understanding physical and sensory abilities, and not so much on cognitive abilities. However, on the other hand, these are the tools that require less user participation. New empathy tools could be created to understand the significant losses of cognitive abilities suffered by some people with pathological ageing.

19.4 Conclusions This review of the state of the art of Design for All and Inclusive Design has revealed that the tools dedicated to understanding capabilities are relatively scarce. Specifically, thirteen out of the seventy-four methods work with capabilities explicitly. In addition, these tools largely focus on specific disabilities, mostly physical or sensory. The direct relationship between ageing and the gradual loss of capacity has not been observed in the tools. Therefore, there is a need to develop new tools that help us understand these capacity losses. Another solution could be the adaptation of existing inclusion tools to fit the ageing process. Only in this way will we be able to design new products and services that respond to the real needs of the elderly. In this study, a first proposal to adapt inclusion tools to fit the ageing process is made. These adaptations must be probed in future cases studies. The first one, the adaptation of the personas tool, is already being carried out. Thus, Elderpersonas has been tested with people from the three stages of ageing, and the results published [10]. The other adaptations will be the focus of experimentation in future projects.

References 1. Aragall F, Montana J (2012) Universal Design. The H.U.M.B.L.E.S. method for user-centred Business, Editorial Gower. ISBN 13: 9780566088650 2. British Standards Institute (2005) British standard 7000-6:2005. Design management systems – Managing inclusive design – Guide 3. Cavanaugh JC, Blanchard-Fields F (2014) Adult development sixth edition 4. Center for Universal Design (2002). Universal design: product evaluation countdown. Consultado el 20 de diciembre de 2013 en. http://www.ncsu.edu/ncsu/design/cud/pubs_p/docs/ UDPEC.pdf 5. Clarkson J, Coleman R, Hosking I, Waller S (2007) Inclusive Design toolkit. Cambridge University Consultado por última vez el 21 de Abril de 2017. http://www.inclusivedesigntoolkit. com/ 6. DBZ (2014) Metodología para la Innovación centrada en el usuario. Programa para la promoción de Gipuzkoa como un territorio que aprende 2013. Diputación de Gipuzkoa. Disponible en. https://dbz.mondragon.edu/documents/701437/711296/metodologia-dbz-mu. pdf/cdcaf18d-0dd5-42dd-8710-0db73dd03660 7. Design Council (2005) Double diamond design process. Consultado el 10 de Abril de 2017 en. http://www.designcouncil.org.uk/about-design/how-designers-work/the-design-process/ 8. Eikhaug O, Gherawo R, Berg S, Kunur M (2010) Innovating With People - The Business of Inclusive Design. Norwegian Design Council. ISBN: 978-82-991852-2-6

276

A. González-de-Heredia et al.

9. Fernández-Ballesteros R (1998) Vejez con éxito o vejez competente: un reto para todos En Ponencias de las IV Jornadas de la AMG: Envejecimiento y Prevención. AMG, Barcelona 10. González-de-Heredia A, Justel D, Iriarte I, Lasa G (2017) Elderpersonas, adapting personas to understands the real needs of elderly people. In: 21st International conference on engineering design 2017 11. Ideo (2011) Human centered design toolkit. Available online at: http://Www.Ideo.Com/ Work/Human-Centered-Design-Toolkit/ [Accessed 12.07.2013], 200. https://doi.org/978098 4645701 12. Lawton M, Brody E (1969) Assessment of older people: self-maintaining and instrumental activities of daily living. Gerontologist 9(3):179–186 13. Mace R (1997) What is universal design. Retrieved from https://www.ncsu.edu/ncsu/design/ cud/about_ud/udprinciplestext.htm 14. Mahoney F, Barthel D (1965) Functional evaluation: the barthel index. Maryland State Med J 14:56–61 15. Myerson J, Lee Y, Gheerawo R, Bichard J (2011) Designing with people | Putting people at the heart of the design process. Retrieved March 17, 2019 from http://designingwithpeople. rca.ac.uk/ 16. World Health Organization (2015) WHO | World report on ageing and health 2015. World Health Organisation. WHO, Geneva. https://doi.org/10.1016/j.jmgm.2016.10.012

Chapter 20

A Methodological Approach to Analyzing the Designer’s Emotions During the Ideation Phase V. Chulvi, J. Gual, E. Mulet, J. Galán, and M. Royo

Abstract The present paper shows a methodological approach for the realization of studies on designer’s emotional parameters during the idea generation phase. This approach is based on a comparative analysis of the different experiences carried out for measuring emotional states using different methods and tools. The aim of this methodological approach is to know the reactions of designers when solving a design problem or when using a specific method, and their relation with the results achieved, both at functional, creative or emotional level. This fact will allow a great advance in the field of design method development, since a better method-designer adaptation can be achieved, and consequently a better design results. The methodological approach has been tested thanks to the collaboration of the Vitale design study’s staff, from Castellón de la Plana (Spain). The present paper also shows this experience performance, as well as the results registered during it Keywords Emotions · Conceptual design · Designer’s perceptions · Problem-solving

20.1 Introduction Design for or through emotions is an attractive topic within the field of product design [9]. The key point is how to locate the elements that generate emotions in the product interaction experience, since these elements will allow an emotional link to be generated between the user (or consumer) and the product (or brand), and they also help to guarantee users’ satisfaction, as defended in the concept “Feel marketing” [32]. Different design methods and techniques are employed by designers in order to find V. Chulvi (B) · E. Mulet · M. Royo Departamento de Ingeniería Mecánica y Construcción, Universitat Jaume I de Castellón, Castelló de la Plana, Spain e-mail: [email protected] J. Gual · J. Galán Departamento de Ingeniería de los Sistemas Industriales y Diseño, Universitat Jaume I de Castellón, Castelló de la Plana, Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_20

277

278

V. Chulvi et al.

these affective components, which are personal but classifiable, as well as to increase the knowledge about the product experience. In this respect, we can look at Kansei engineering, which allows the emotional needs to be located in order to achieve a mathematical model with the aim of establishing the relations between product characteristics and users’ preferences [15, 35]. Another method commonly used in the emotional design, mainly to quantify product perception in affective terms, is the Semantic Differential [26]. Customer surveys or questionnaires are also a basic technique that we must consider at this point. They are useful to gather information about affective responses in quantitative terms, as can be seen in Desmet [8], who synthesized 25 positive emotions related to human-product interaction through an exhaustive procedure, the core of which was a questionnaire, or Mauss and Robinson [18], whose research defends that different types of stimuli that cause an emotion are associated with discrete patterns of response. There are also many qualitative techniques that make it possible to “capture” the emotions generated by users when they experiment with a product: structured interviews, focus groups or direct observation, among others [4, 5, 19, 36]. Some authors, however, chose to combine the techniques that best fit their objectives when they design an experiment. So, protocols are created that are useful to other researchers to broaden their studies, such as the case of the usability tests employed in the field of emotions and behavior [27]. Yet, besides product-generated emotions, the designer’s emotions should also be focused on, since they may affect the design process and, consequently, the design results. Several studies defend the idea that the emotional state of the individual will affect the results of the design activity, such as the Flow Theory in Nakamura and Csikszentmihalyi [23], where it is defended that there is an optimal mental state that is experienced when the individual perceives the opportunities for action as being in balance with his or her perceived skills. Similar to this line of work is the research carried out by Akinola and Mendes [1], who state that negative emotions promote artistic creativity, or the research by Hutton and Sundar [11], which, in contrast to the work of Akinola and Mendes, defends the notion that happy people are more creative than angry people. Thus, there is an interest in knowing what emotions they feel when designing and how these emotional states are transmitted to users and costumers through their creations. It is at this point where we find the study by Rodríguez [30], who works with design professionals and enterprises in order to discern the strategies that allow the generation of the “magical” surprise effect on customers. In this search for emotions [24], it becomes essential to be able to measure emotional parameters, where emotional parameters can be understood as defined emotional responses that can be identified and measured. These can be measured with different methods and devices, some of which look into them at a deeper level than the simple question-to-individual strategy, thus making it possible to avoid the subjective nature of the data gained in this way. Moridis and Economides [20] point to some common techniques, including facial expressions, speech recognition, physiological data, and questionnaires, among others. Nonetheless, cerebral response to external stimuli produces measurable electrical changes, which supposedly allow the researcher to measure what the subject perceives and his or her response to these

20 A Methodological Approach to Analyzing …

279

perceptions in a more accurate way, since the signals are measured directly from the brain and in the same instant in which the emotion is generated. For this purpose, electroencephalographic (EEG) signals must be recorded with sensors (there are both invasive and noninvasive devices). There is an extensive body of literature related to the brain-computer interaction behind this process [13, 21, 31], which points to the use of this kind of devices to capture and decode these signals from different parts of the brain in order to be able to analyze them. Many recent works have dealt with the recognition and treatment of customer emotions when faced with particular products or stimuli, ranging from chromatic aspects [22] to durable products [25], or their technological aspects [36]. Nonetheless, few works have measured, not customer emotions, but the emotions and mental processes that are generated in the designers’ minds when they are in their creative phase [33]. The possibility of being able to know the designer’s reactions (emotional response) when tackling a design problem or using a design method, and the relation between these reactions and the functional, creative, and emotional results achieved will represent a great step forward in the field of design methods, since the better the method-designer relationship is, the better the design results will be. For this purpose, the present paper shows, first of all, a comparative analysis of different studies carried out to measure the emotional parameters using different methods and devices. It also analyzes which emotional parameters are usually studied, the sample analyzed and the time of each experiment. A methodological approach is then presented in order to conduct studies on measuring the emotional parameters of designers during the design process. This methodology aims to act as a standard in the measurement of the emotional parameters of designers, thereby helping researchers, when preparing their experiments, with the aim of improving the performance of their measurements, and avoiding the problems that could cause incorrect data collection. The methodological approach has been tested, thanks to the collaboration of the staff working at the Vitale Design Studio. The paper also describes how this experiment was conducted, as well as the results registered and their formats, just to exemplify how data is collected and presented. The data analysis of this test is not used to defend any hypothesis about the designer’s emotions.

20.2 Background 20.2.1 Measuring Emotion Through Facial Recognition or Eye-Tracking Several works have been carried out in which facial recognition or eye-tracking has been used to identify human emotions (see Table 20.1). In some cases, they have been combined with other kinds of sensors to measure emotions, such as the studies by Arroyo et al. [2] and Cooper et al. [6], who also use sensors on the chair, the mouse and the wrist, or the work of Marti et al. [17], who use biosensors.

280

V. Chulvi et al.

Table 20.1 Studies on emotion recognition through facial expressions or eye-tracking Reference

Parameters

Sample

Time

Arroyo et al. [2]

Confidence, frustration, excitement, interest

38 students

(1) Motivation and perception test (2) 4–5 days using software (3) Perception test

Cooper et al. [6]

Confidence, frustration, excitement, interest

(a) 35 students (b) 29 students (c) 29 students

(1) Motivation and perception test (2) (a) and (b) 4–5 days using software (c) One session (3) Perception test

Florea and Kalisz [10]

• Boredom—curiosity • Distress—enthusiasm • Anxiety—confidence

Not tested Focused on students

Terzis et al. [34]

Neutral, angry, and sad

172 students

45 min to answer 45 multiple choice questions

Sun et al. [33]

Fixation

41 designers

30 min of sketching task

The facial recognition or eye-tracking measuring system has been used in most of the studies conducted on students (both secondary school and university). In these studies focused on students, confidence, excitement or enthusiasm and frustration were chosen as emotional parameters to measure interest or curiosity. There are also few works that concentrate on designers. A significant case is the work of Sun et al. [33], which is not focused on emotions themselves, but on the designer’s fixation during the sketching process in the conceptual phase. Regarding the conditions of the studies, they vary widely in terms of sample and time. That is, the sample size ranges from 29 to 172 subjects, and the time from 30 min to 5 days of data recording. Nonetheless, the study regarding designers’ activity comprises 41 subjects and 30 min of problem-solving.

20.2.2 Measuring Emotion Through Questionnaires, Interviews or Semantic Differential Among the studies on measures of emotion, there are some that base their methodological model on semantic differential tools. This technique frequently uses a questionnaire with previously selected adjectives as emotional items, commonly taken as bipolar pairs. The selection of these adjectives must be done carefully, since it can condition the results and cause bias. For this purpose, there are works like those

20 A Methodological Approach to Analyzing …

281

Table 20.2 Studies on emotion recognition through questionnaires, interviews or semantic differential in product analysis Reference

Parameters

Sample

Timing

Na and Suk [22]

150 emotional words (Experiment 1) 20 emotional words (Experiment 2) Flamboyant, soft and clear (Experiment 3)

30 participants



Desmet [8]

25 positive emotions

221 (Main study)

20–30 min

Ortiz et al. [25]

25 positive emotions

56 participants

15 min

Jordan [14]

10 emotional feelings

18 subjects

1 h (interview)

Yoon et al. [36]

Interest, annoyance, anxiety, disappointment and frustration

25 participants (Main study)

Duration used as behavioral measure

by Na and Suk [22], who introduce a previous phase in their studies in order to compile a set of words related to emotional parameters on which the study will be based. Following this line, Desmet [8], summarizes the positive emotions of the human-product interaction in 25 adjectives, which have been used by other authors in subsequent research work [25]. Nonetheless, the “emotional words” based questionnaires are normally presented in combination with other stimuli, like images [8, 25], mock-ups [22], or sounds [36]. There are other qualitative techniques for determining emotional reactions, such as interviews or direct observation. For example, Jordan [14], used semi-structured interviews in his studies on the pleasure of using a product. Yoon et al. [36] also combined these qualitative techniques with the use of questionnaires in their research. The research reviewed (Table 20.2), presents many differences in parameters, samples and time. The emotional parameters vary from 5 to 150, the sample size from 18 to 221, and the time from 15 min to 1 h.

20.2.3 Measuring Emotion Through Neuronal Response The most frequent studies in the field of direct measurement of emotional parameters through neurosensors are those referring to psychological works in which a predefined emotion is triggered and neuronal response is measured (see Table 20.3). That is, images, sounds or videos created to cause a predetermined emotion are shown to the subjects, and the neuronal electrical response that is caused is registered [3, 28, 29]. This is the reason why the role of the subjects is not really relevant in this kind of studies. The sample size ranges from 6 to 25 subjects. Time dedicated for each “measurement” is short because it is conditioned to the exposure to the stimulus considered in each case: seconds for sounds or images, minutes for videos.

282

V. Chulvi et al.

Table 20.3 Studies on emotion recognition through EEG Reference

Parameters

Sample

Time

Inventado et al. [12]

Frustration and excitement

10 students

(1) Software introduction tutorial (2) 20 min. using software (3) “Big Five” personality test

Ramirez and Vamvakousis [29]

Arousal-valence of positive and negative stimuli

6 subjects

12 sounds lasting 5 s. each with a 10 s. rest between them

Pham and Tran [28]

Amusement fear

Not given

8 videos with a 2 min. rest between them

Bayer and Schacht [3]

Arousal-valence of positive, negative and neutral stimuli

25 subjects

viewing 72 faces, pictures, and words on a computer

20.2.4 Considerations on Methodology Development From the background analysis, the aim is to discern the best way to identify and measure the emotional parameters of designers. It is also necessary to determine which emotional parameters are recommended for measurement. Moreover, an adequate sample size and time must also be set. Of the three options analyzed—facial recognition, questionnaires, and neuronal response—the first is not recommended, since the face must be oriented toward the same point all the time so that the cameras can detect or record the facial expressions or the eye movements. It is, therefore, not suitable for measuring design activities, in which the subject must move, and so the cameras must cover the full sphere of possible positions of the face (41,253 square degrees). Moreover, the face could be hidden from time to time by the designer’s body movements or hands. Questionnaires and interviews present the subjective memory of the subject after the action, while neuronal response can be measured at the same time as the action happens, and is presumably objective. Hence, a device for measuring neuronal response seems to be a better option for our purpose than the other analyzed options. Nonetheless, most of the related studies analyzed in the previous point consist of inducing an emotion with a stimulus that is known to cause this emotion in order to measure and identify the neuronal response, while the approach proposed in this paper aims to act in the opposite direction. That is, the methodology will be for performing experiments in which the emotion caused by an action is unknown, so we need to measure the neuronal response in order to identify the emotion. This is similar to the method presented by Inventado et al. [12], in which they measure students’ emotions when using a computer software application. Furthermore, from the background analysis, it can be seen that most of the work reported to date is centered on students’ emotional parameters or just “subjects”, with no information about their profession, due to the fact that those research studies are

20 A Methodological Approach to Analyzing …

283

not related to any specific task. Only the work by Sun et al. [33], is related to the work of designers, and it was more focused on identifying the designer’s fixation during the sketching process than on their emotional parameters. Thus, a methodological approach is needed in order to be able to perform this kind of studies—measuring the emotional parameters of designers during the design process.

20.3 Methodological Approach The measuring tool selected for testing the proposed methodological approach is the Emotiv EPOC neuroheadset (Fig. 20.1), since it has previously been validated [7, 16], and used successfully in several works [12, 28, 29]. The emotional parameters that it can measure are excitement (versus calm), disinterest (versus engagement), and the degree of meditation (understood as a state of deep thought), which are measured in the literature review presented above. The data-collecting electrodes are located and labeled according to the international 10–20 system depicted in Ramirez and Vamvakousis [29]. Another reason for selecting this device is that the Emotiv EPOC neuroheadset will allow the designer freedom of movement and it will record the emotional parameters in real time. A review of the research to date shows that sample size may vary widely (from 6 to 221). As the aim is for the results of the experiments to be analyzed statistically, the authors suggest a minimal sample size of 12 and an optimal one of 32 samples. With respect to the expected time to solve each design problem, all similar experiments reviewed allow for a range of 20 to 45 min per exercise. In addition, a break between problems is provided, in which the emotional response of the subjects is also recorded so it can be used as a control measurement. This point is easily achievable with the selected measuring tool, since it does not restrict the designers’ movement. For an initial pilot test, the authors decided to set 25 min for each problem-solving phase and 10 min for resting between problems. Fig. 20.1 Emotiv EPOC neuroheadset

284

V. Chulvi et al.

Moreover, a test in the last step of the methodology is proposed to discern whether the designers’ perceived emotions agree with those measured in real-time. Hence, the stages of the experiments will be: 1. Informal chat with some non-stimulant drink and/or food (in order to try to achieve a similar initial mood for all participants) 2. Explanation and instruction (if required) 3. Solving of case 1 (25 min) 4. Rest (10 min)—data recording is not stopped 5. Solving of case 2 (25 min) 6. Questionnaire Steps 4 and 5 can be repeated if there is more than one variable under consideration in the raised hypothesis.

20.4 Initial Results 20.4.1 Development of the Experiment The proposed method was tested on two professional designers from Vitale, a design studio specialized in furniture, lamps, and corporative design. The testing was performed only to prove that the proposed methodology is useful, to show how the results will be presented, and to find and solve the possible problems that can appear when using the methodology experimentally. The volunteer designers were two young adult males with more than 10 years’ experience in the field of design. As it was a test of the method and the selected tools, no hypothesis was analyzed. Thus, steps 4 and 5, were not considered and just one problem was raised. In this case, the designers had to generate ideas for the design of a small recycling container placed in urban spaces for collecting specific kinds of waste such as light bulbs, batteries, ball pens, mobile phones, battery chargers, ink cartridges, and DVDs. The main objective of the design was to make it fun to use. Following the steps of the method, the experiment was carried out as follows: 1. The Emotiv EPOC neuroheadsets were introduced to the designers, and they were shown a Powerpoint presentation, consisting of a reference video fostering good behavior in public spaces (https://www.youtube.com/watch?v=tcrhp-IWK2w), an image related to the problem (Fig. 20.2), and the problem statement. 2. The designers were equipped with the Emotiv EPOC neuroheadsets, and they were given 25 min to solve the problem individually. They were provided with A3 sheets of paper and markers of different colors. They were free to move how and when they wanted. 3. During the design, an external observer recorded every time the designer changed from one stage of the design process to another, the stages being: understanding the problem and searching for information (PU), thinking about ideas (TI),

20 A Methodological Approach to Analyzing …

285

Fig. 20.2 The image related to the problem that was shown to the designers in step 1

selecting the final solution from among the ideas (SS), and representing the solution (RS). 4. The results were recorded both graphically and numerically. An extract of the numerical data captured is shown in Table 20.4. The numerical data vary from 0 to 1 and a decrease in the data means that the emotion had become less intense and, on the contrary, if the number measured increases, then the emotion is more intense. A brief analysis of these data is shown in “4.2. Data analysis”. Steps 3 and 4 were omitted, as explained previously. 5. A test was answered by the designers in order to evaluate the experience and their feelings with the aim of comparing their answers with the measurements of the neuroheadsets. First of all, the test asked them about how motivating they felt the design problem was, by assigning a score of between 1 and 10. They then had to value whether their emotional state had changed or not during the four design tasks: problem understanding, searching for ideas, selection of ideas, and final representations. If their emotional state changed, they were asked to assess whether their enthusiasm increased or decreased, how intensely it changed, and finally if this change was a one-off occurrence or remained stable throughout the task. The last question concerned the intensity of concentration required for each design task, and they were asked to score it from 1 to 10 (Fig. 20.3). The answers of the two designers are also marked in Fig. 20.3.

Excitement

0.725

0.725

0.719

0.719

0.719

0.719

0.719

0.719

0.719

0.713

0.713

0.713

Stage

1

1

1

1

1

1

1

1

1

1

1

1

0.552

0.552

0.552

0.552

0.552

0.552

0.552

0.552

0.552

0.552

0.552

0.552

Disinterest

0.333

0.333

0.333

0.333

0.333

0.333

0.333

0.333

0.333

0.333

0.333

0.333

Meditation

1

0

0

0

1

1

1

1

1

0

0

1

Eyes open

0

0

0

0

0

0

0

0

0

0

0

0

Right blink

0

0

0

0

0

0

0

0

0

0

0

0

Left blink

0

0

0

0

0

0

0

0

0

0

0

0

Look left

Table 20.4 Extract of the numerical data captured (each measure—row—is taken approximately every 0.5 s)

0

0

0

0

0

1

1

0

0

0

0

0

Look right

0

0

0

0

0

0

0

0

0

0

0

0

Look up

0

0

0

0

0

0

0

0

0

0

0

0

Look down

286 V. Chulvi et al.

20 A Methodological Approach to Analyzing …

287

Fig. 20.3 Questionnaire given to the designers in step 5

20.4.2 Data Analysis Despite the fact that the aim of the study is to propose a methodology and no hypothesis was proposed about the data analysis, it can be commented on in order to better exemplify the case. Nonetheless, the authors are fully aware that the sample size is too small to be able to show solid conclusions about this data analysis. The data collected can be shown as a graph, in order to analyze peak points or general trends over time. Figure 20.4 shows an extract from the parameter excitement graph of one of the designers who were analyzed. Nonetheless, the huge number of points that the program provides suggests it might be better to use another kind of analysis, like a box and whiskers plot. Figures 20.5 and 20.6 show the plots of excitement and meditation for both designers in each of the design phases previously defined. Figure 20.5 shows that the two designers have a higher level of excitement after the initial stage (problem understanding) and it is remarkable how the second designer has a significant peak of excitement at the end. If we compare the acquired data with the designers’ perceptions, as they answered in the final questionnaire (Fig. 20.3), it can be seen that their perception on excitement also increases during the task. However, the first designer (orange marks) says that his excitement is stable during the selection and representation of the solution, but the box and whiskers plot shows that his excitement still had a fluctuation and maybe the perception of stable excitement is not easy to identify. Designer 2 said that he felt an occasional increase in his excitement until the end, when it became stable, which agrees with the data collected (Fig. 20.5), since excitement is less dispersed. On comparing the two designers, it is observed that the design problem is equally interesting for both of them and both

288

V. Chulvi et al.

1

Excitement

0,8 0,6 0,4 0,2

165,1 168,7 172,1 175,2 178,3 181,7 185,0 188,2 190,9 193,6 196,9 200,1 203,4 206,6 209,9 213,2 216,8 219,9 223,5 226,8 229,8 233,0 236,8 239,8 242,5 245,3 247,9

0

Fig. 20.4 Extract of the graph showing the evolution of the parameter “excitement” of one of the designers analyzed

Fig. 20.5 Box and whiskers plot of excitement measured for problem understanding (PU), thinking ideas (TI), solution selection (SS), and representing the solution (RS)

have increased the level of excitement during the task, but the second designer was more excited than the first. Figure 20.6 represents the data for the parameter “meditation” collected for the two designers. Two of the sensors of Designer 1 turned orange in the phase of “representing the solution”, which means that the data acquired may not be reliable. In this particular case, they do seem to affect the measurement of meditation, as can be seen in the graph, in which no variation of data is measured. According to Fig. 20.6, the median of meditation changed slightly during the design task but occasional increments were measured. These data contrast with the responses to the questionnaires, which show that concentration has been increasing throughout the design task for both designers. The reason for this could be that the designers

20 A Methodological Approach to Analyzing …

289

Fig. 20.6 Box and whiskers plot of the parameter “meditation” measured for problem understanding (PU), thinking ideas (TI), solution selection (SS), and representing the solution (RS)

have equated peaks of high levels of concentration with stable concentration during the design stage. The concentration levels are similar in both designers, as could be expected considering that they have a similar profile and they work on the same projects.

20.5 Discussion and Conclusions The paper shows a methodological approach for contrasting hypotheses by measuring different emotional parameters of designers during the idea generation phase. Such hypotheses include concepts such as: is there one design method that causes more excitement in designers than another? And frustration? Does this happen to all designers or does it vary depending on their personality profiles? Is there an interaction between emotional responses and the quality of the design results? In order to perform the measurement in a direct way and in real-time, the use of EEG or EPOC headsets is proposed. In our study, the Emotiv EPOC neuroheadset was chosen, as it provides direct data, allows the designers freedom of movement, and has a relatively low-cost. It has also been used in several recent studies. In order to set guidelines, times, and rests, a study of the state of the art of similar experiments or experiences was performed. Moreover, the last step consisting in a questionnaire was also established in order to compare the results achieved with direct measurement and to obtain the participants’ feedback. An experiment was performed to test the methodology and the equipment, thanks to the collaboration of two professional designers from the Vitale Design Studio. After the questionnaire in step 5 of the methodology, the researchers conducted an interview with the designers in order to get first-hand feedback about the methodology and the way the experiment was conducted. From this feedback, the usability

290

V. Chulvi et al.

of the tool becomes apparent, since the designers could carry out their design activity normally, and they were able to move about without any difficulty. The designers consider that they had enough time to work on concept generation, but they do not think the time will be enough if the idea needs to be expressed with the quality required for it to be evaluated later. That is, if future experiments need only to register the designer’s emotions without later having to analyze the solutions presented, 25 min per problem is enough for steps 2 and 4 of the methodology. On the other hand, if the problem results have to be presented correctly, more time must be assigned to these steps. As a result of testing the methodology, some advice and considerations can be postulated in order to improve future research using this methodology, as well as to avoid incorrect data recording. First of all, the selected tool is stable against most electromagnetic signals, mobile phones, wifi, etc., but it is not so stable with Bluetooth signals. As communication between the neuroheadset and the computer is achieved by Bluetooth, the system can record noise or even lose the signal when there are other Bluetooth signals close to them. Nonetheless, despite the fact that the neuroheadsets work with Bluetooth, the proximity of various neuroheadsets does not cause interferences between them if and when the virtual line between each neuroheadset and its corresponding receiver is not crossed by another one. This fact has to be taken into special consideration when planning experiments for design teams. Therefore, each designer must have their zone clearly delimited and not cross the other designer’s virtual zone. The authors also recommend performing the experiments in a Bluetooth-isolated area. As the sample size in the test consisted of only two designers, no confluent data can be discussed about the data analysis. Nonetheless, the aim of the research was not to compare any hypothesis against the data recorded, so this aspect is left for future research. There is no other direct method for analyzing the emotions of designers during their design process. The advantage of the proposed method over other possibilities is the free movement of designers when using the suggested device compared with the EEG devices with wires, and that the information is objectively obtained from the source, avoiding the personal interpretation of their own feelings, that can vary from one individual to another. So, the proposed method seems suitable for measuring different emotions, distinguished by type and intensity, in designers that need freedom of movement in order to perform their design activity. Moreover, the conversion into numerical data allows the statistical analysis of the results in order to contrast different hypotheses. Hence, as future lines of work, the proposed methodology will be applied to measure the emotions of designers when working, both individually or in design teams, during the creative design phase in order to test different hypotheses, such as whether designers’ feelings, emotions or mood are reflected in their jobs, if they affect the design results or the level of creativity achieved, or whether the kind of creative method employed during the design work (for example, more structured or more intuitive) causes different reactions in designers depending on their personality.

20 A Methodological Approach to Analyzing …

291

Acknowledgments This research has been possible thanks to the project GV/2017/098 “creación de espacios emocionales para incrementar los resultados creativos del diseñador durante la fase conceptual” founded by the Generalitat Valenciana government. Authors would also like to express their gratitude to the staff members of Vitale for their cooperation in this study. This research complied with the American Psychological Association Code of Ethics and was approved at the University Jaume I. Informed consent was obtained from each participant.

References 1. Akinola M, Mendes WB (2008) The dark side of creativity: Biological vulnerability and negative emotions lead to greater artistic creativity. Personality and Social Psychology Bulletin 2. Arroyo I, Cooper DG, Burleson W, Woolf BP, Muldner K, Christopherson R (2009) Emotion sensors go to school. AIED 200:17–24 3. Bayer M, Schacht A (2014) Event-related brain responses to emotional words, pictures, and faces–a cross-domain comparison Frontiers in psychology 5 4. Bradley MM, Lang PJ (1994) Measuring emotion: the self-assessment manikin and the semantic differential. J Behav Ther Exp Psychiatry 25(1):49–59 5. Coan JA, Allen JJ (2007) Handbook of emotion elicitation and assessment. Oxford university press 6. Cooper DG, Arroyo I, Woolf BP, Muldner K, Burleson W, Christopherson R (2009) Sensors model student self concept in the classroom. In: Modeling User (ed) Adaptation, and personalization. Springer, Berlin, Heidelberg, pp 30–41 7. Debener S, Minow F, Emkes R et al (2012) How about taking a low-cost, small, and wireless EEG for a walk? Psychophysiology 49:1617–1621 8. Desmet PMA (2012) Faces of product pleasure: 25 positive emotions in human-product interactions. Int J Des 6(2):1–29 9. Desmet PMA, Hekkert P (2009) Special issue editorial: design & emotion. Int J Des 3(2):1–6 10. Florea A, Kalisz E (2005) Embedding emotions in an artificial tutor. In: Symbolic and numeric algorithms for scientific computing, 2005. SYNASC 2005. Seventh international symposium on, 6. IEEE 11. Hutton E, & Sundar SS (2010) Can video games enhance creativity? Effects of emotion generated by Dance Dance Revolution. Creativity Res J, 22(3):294–303 12. Inventado PS, Legaspi R, Bui TD, Suarez M, (2010), Predicting student’s appraisal of feedback in an its using previous affective states and continuous affect labels from EEG data. In: Proceedings of the 18th international conference on computers in education, Putrajaya, Malaysia 13. Jackson MM, Mappus R (2010) Applications for brain-computer interfaces. Brain-computer interfaces. Springer, London, pp 89–103 14. Jordan PW (1998) Human factors for pleasure in product use. Appl Ergono 29(1):25–33 15. Jordan PW (2000) Designing pleasurable products. An introduction to the new human factors. Taylor and Francis. London 16. Lang M (2012) Investigating the Emotiv EPOC for cognitive control in limited training time. Honours report, University of Canterbury, 8 17. Martín H, Bernardos AM, Tarrío P, & Casar JR (2011) Enhancing activity recognition by fusing inertial and biometric information. In 14th International Conference on Information Fusion (pp. 1-8). IEEE 18. Mauss IB, Robinson MD (2009) Measures of emotion: a review. Cogn Emot 23(2):209–237 19. McDonagh D, Bruseberg A, Haslam C (2002) Visual product evaluation: exploring users’ emotional relationships with products. Appl Ergon 33(3):231–240

292

V. Chulvi et al.

20. Moridis CN, Economides AA (2008) Toward computer-aided affective learning systems: a literature review. J Educat Comput Res 39(4):313–337 21. Motte D (2009) Using brain imaging to measure emotional response to product appearance. In: 4th international conference on designing pleasurable products and interfaces-DPPI’09 (pp 187–198). Université de Technologie de Compiègne (UTC) 22. Na N, Suk HJ (2014) The emotional characteristics of white for applications of product color design. Int J Des 8(2):61–70 23. Nakamura J, Csikszentmihalyi M (2009) Flow theory and research. Handbook of positive psychology, 195–206 24. Oatley K, Keltner D, Jenkins JM (2006) Understanding emotions. Blackwell publishing 25. Ortiz JC, Aurisicchio M, Desmet PMA (2014) Pleasantness and arousal in twenty-five positive emotions elicited by durable product. In: The 9th international conference on design and emotion, Ediciones Uniandes, Bogotá, Colombia, 221–227 26. Osgood CE, Suci GJ, Tannenbaum PH (1957) The measurement of meaning. University of Illinois Press, Urbana 27. Partala T, Kangaskorte R (2009) The combined walkthrough: measuring behavioral, affective, and cognitive information in usability testing. J Usability Stud 5(1):21–33 28. Pham TD, Tran D (2012) Emotion recognition using the emotiv epoc device. In Neural information processing (pp. 394–399). Springer Berlin, Heidelberg 29. Ramirez R, Vamvakousis Z (2012) Detecting emotion from EEG signals using the emotive epoc device. Brain Informatics. Springer, Berlin, Heidelberg, pp 175–184 30. Rodríguez E (2014) Industrial design strategies for eliciting surprise. Des Stud 35(3):273–297 31. Rodríguez Bermúdez G, García Laencina PJ, Brizion D, Roca Dorda J (2013) Adquisición, procesamiento y clasificación de señales EEG para diseño de sistemas BCI basados en imaginación de movimiento. VI Jornadas de introducción a la investigación de la UPCT 6:10–12 32. Schmitt B (2010) Experience marketing: concepts, frameworks and consumer insights. Found Trends ® Mark 5(2):55–112 33. Sun L, Xiang W, Chai C, Yang Z, Zhang K (2014) Designers’ perception during sketching: an examination of creative segment theory using eye movements. Des Stud 35(6):593–613 34. Terzis V, Moridis CN, Economides AA (2013) Measuring instant emotions based on facial expressions during computer-based assessment. Pers Ubiquit Comput 17(1):43–52 35. Vergara M, Mondragón S (2008) Ingeniería Kansei: una potente metodología aplicada al diseño emocional. Faz 2:49–56 36. Yoon J, Desmet PMA, van der Helm A (2012) Design for interest: Exploratory study on a distinct positive emotion in human-product interaction. Int J Des 6(2):67–80

Chapter 21

Effect of the Technique Used for the Particle Size Analysis on the Cut Size of a Micro-hydrocyclone Javier Izquierdo, Jorge Vicente, Roberto Aguado, and Martin Olazar

Abstract Small diameter hydrocyclones, or micro-hydrocyclones (below 100 mm), are commonly used in the industry due to their ability to achieve cut sizes lower than 40 µm. In order to know this cut sizes, the analysis of the particle size of the hydrocyclone streams is required. There are different techniques to obtain this information, like sieving, laser diffraction, sedimentation or microscopy. The work presented is part of one of the stages of an R&D project performed between the CPWV research group of the Chemical Engineering Department of the UPV/EHU and NOVATTIA DESARROLLOS Ltd. with the objective of achieving a more flexible, compact, and efficient hydrocyclone technology than the actual one. The goal of the study was to test the influence of the technique used to analyze the particle size of the streams on the cut size of the micro-hydrocyclone. The equipment employed to measure the particle size were Mastersizer 2000 (laser diffraction) and Sedigraph III (sedimentation). The results showed that there is a great influence of the analysis technique on the cut size. Keywords Micro-hydrocyclone · Particle size measurement technique · Cut size

21.1 Introduction The use of hydrocyclones is a worldwide known technique for separation, classification, and thickening of particles in solid-liquid processes. Typical industries where this kind of equipment is used are oil refining, petrochemical industry, natural gas purification, mineral treatment, and paper industry [14]. A conventional hydrocyclone has one entry and two exits, one for the overflow and the other one for the underflow. The overflow is the diluted stream which carries J. Izquierdo (B) · R. Aguado · M. Olazar UPV/EHU, Leioa (Bizkaia), Spain e-mail: [email protected] J. Vicente NOVATTIA DESARROLLOS S.L, Derio (Bizkaia), Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_21

293

294

J. Izquierdo et al.

Fig. 21.1 Parts of a hydrocyclone and the important dimensions

the finest particles, whereas the underflow is responsible for carrying the coarser ones, and has a high concentration of solids. The hydrocyclone is composed of six parts, as can be seen in Fig. 21.1. The vortex finder is situated at the top of the cyclone (1), it is commonly known as the vortex, and is the exit for the overflow. The next part is the feed body (2) that normally consists of a tangential inlet. The cylindrical body (3) is connected to the feed body, and is followed by a conical body (4). Finally, at the bottom of the hydrocyclone, the apex (5), and the spigot (6) are placed, which is the exit of the underflow. The most important dimensions of the hydrocyclone are its diameter (D), the inlet diameter (Di ), the length (L), the vortex finder diameter (Do ), the vortex finder length (l), and the spigot diameter (Du ) [12]. The principle of operation of hydrocyclones is, primarily, the centrifugal sedimentation. When the feed flow enters tangentially through the hydrocyclone, a velocity field with three components (tangential, axial, and radial) is generated in the particle. This phenomenon generates a vortex that is responsible for creating the centrifugal force that impulses the particles to the wall, allowing the separation in the radial direction. The particles with high density or with a big diameter are directed to the wall and, as the density or diameter decreases the particles are placed to the center [3, 13]. In order to obtain better cut sizes, small diameter hydrocyclones or microhydrocyclones are used due to their ability to obtain high centrifugal forces. To select the correct device that is needed for a specific application it is necessary to know their cut size, so tests have to be done, where, among other things, the particle size of the different streams is measured.

21 Effect of the Technique Used for the Particle Size …

295

There are different methods to analyze the particle size. The most common one in hydrocyclones is the laser diffraction [1, 6–9, 15]. The reason is that this technique is able to give results in a quick, precise and reproducible way. Laser diffraction measures particle size distributions by measuring the angular variation in the intensity of light scattered, as a laser beam passes through a dispersed particulate sample. Large particles scatter light at small angles relative to the laser beam and small particles scatter light at large angles. The angular scattering intensity data is then analyzed to calculate the size of the particles responsible for creating the scattering pattern, using the Mie theory of light scattering. The particle size is reported as a volume equivalent sphere diameter (“Laser diffraction particle sizing technique”, s. f.). Another technique used to measure particle size is sedimentation [4], that consists of measuring the gravity-induced settling rates of different size particles in a liquid with known properties. Particle mass is measured directly via X-ray absorption by measuring the rate at which particles fall under gravity through a liquid. Having known properties as described by Stokes’ law, the SediGraph determines the equivalent spherical diameter of particles ranging from 300 to 0.1 µm. This is a simple yet extremely effective technique for providing particle size information for a wide variety of materials (“SediGraph III Plus | Micromeritics”, s. f.).

21.2 Objectives The work presented is part of one of the stages of an R&D project performed between the CPWV research group of the Chemical Engineering Department at the UPV/EHU and Novattia Desarrollos Ltd. with the objective of achieving a more flexible, compact, and efficient hydrocyclone technology than the actual one. The aim of this paper is to test the influence of the particle size measurement technique on the product quality and cut size of a micro-hydrocyclone.

21.3 Methodology Experiments were performed using a 50 mm micro-hydrocyclone, supplied by Novattia Desarrollos Ltd. in a test rig, as shown in Fig. 21.2. The test rig is composed of a 750 L feed tank with a stirrer (TK-1), a tank to feed the pump (TK-2), an eccentric screw pump (P-1), a magnetic flowmeter, a pressure indicator, a 50 mm microhydrocyclone (HCM-50), and the sample collection tank (TK-2). As it can be seen, the pilot plant is prepared to work in a closed circuit. The feed material used during the tests was kaolin, supplied by Caolines Lapiedra Ltd. The particle size distribution of the feed material was analyzed with a Malvern Mastersizer 2000, and the results are presented in Fig. 21.3. Using the kaolin, a 35 g/L suspension was prepared with water.

296

J. Izquierdo et al.

Fig. 21.2 Test rig with the 50 mm hydrocyclone

Fig. 21.3 Particle size distribution of the kaolin measured with a Malvern Mastersizer 2000

21 Effect of the Technique Used for the Particle Size … Table 21.1 Experimental data of the tests

Feed material

Kaolin

Solid content

35 g/L

Particle density

2600 kg/m3

Hydrocyclone diameter

50 mm

Spigot diameter

6 mm

297

Test I

Test II

Vortex diameter

10 mm

10 mm, normal-spiral-narrow

Feed pressure

1, 2, 3 and 4 bar

2 bar

Two types of tests were carried out. On the first one, a normal 10 mm diameter vortex was used and the kaolin suspension was fed into the hydrocyclone at 4 different pressures (1, 2, 3 and 4 bar). Samples of the overflow and underflow were taken for each pressure, and then the samples were filtered and dried to measure the concentration of each stream. As is usually the case, the overflow is the product stream of the hydrocyclone; for this reason, the particle size of these samples was measured with the laser diffraction technique (Mastersizer 2000) and sedimentation (Sedigraph), with the goal of seeing the effect on the product quality of the particle size analysis technique. On the second test, three different vortex geometries were used. The first one with a normal configuration, the second one with a spiral that forces the flow to the bottom, and the last one with a narrower head than the normal one. The kaolin suspension was fed into the hydrocyclone at 2 bar. Samples of the overflow and underflow were taken, and following the same procedure as in the previous test, were analyzed. In this case, the particle size of all the samples was measured with laser diffraction and sedimentation in order to see the effect of the measurement technique on the cut size. In Table 21.1, the material characteristics and the tests made is summarized. The analysis with Mastersizer 2000 was carried out on the facilities of SGIker, in the Faculty of Science and Technology of the University of the Basque Country. The Sedigraph analysis of the samples was done on the IGME (Instituto Geológico y Minero de España). To obtain the cut size, it was necessary to calculate the partition curve of the hydrocyclone. In order to do that, Eq. 21.1 was used [10]. The flow of the overflow and underflow streams was obtained by applying a mass balance to the hydrocyclone. η(d) =

Mu · xu (d) M f · x f (d)

(21.1)

where M u and M f are the solid mass flow in the underflow and in the feed, and x u (d) and x f (d) are the particle fraction of d diameter in the underflow and overflow.

298

J. Izquierdo et al.

21.4 Results 21.4.1 Results of the Stream Quality Figure 21.4 shows the comparison between the results of the particle size distribution of the overflow obtained at different pressures measured with the Mastersizer 2000 and the Sedigraph III. The particle sizes obtained with the Sedigraph III are in all the cases lower than the sizes obtained with the Mastersizer 2000. Table 21.2 includes the diameter where the 90% of the total volume of material in the samples is contained, D90 . As has been commented previously, the difference between the two analysis is obvious, but is difficult to say which is the best because it depends on a lot of factors, especially for this kind of materials (clays) where the particle shape is flat [2].

Fig. 21.4 Particle size distribution of the overflow samples obtained at different pressures. Note Here it is shown where the overflow samples are obtained at: a 1 bar, b 2 bar, c 3 bar and d 4 bar. In blue, the results obtained with Sedigraph are presented and in red the results of the Mastersizer

21 Effect of the Technique Used for the Particle Size … Table 21.2 D90 of the overflow samples obtained at different pressures

Pressure (bar)

299

D90 (µm) Sedigraph

Mastersizer

1

6.9

19.1

2

5.4

18.4

3

4.1

17.5

4

3.7

16.5

21.4.2 Results of the Cut Size Figure 21.5 presents the comparison between the partition curves obtained with Mastersizer and Sedigraph for the three different vortexes. The difference that exists when different particle size analyzers are used is clearly shown; the curve is displaced to the left, which means that the hydrocyclone is able to separate the finest particles. Therefore, a manufacturer can say that their hydrocyclone works very well, only by changing the method of measurement of the particle size.

Fig. 21.5 Partition curves of the micro-hydrocyclone with different vortex. Note a Normal vortex, b spiral vortex, c narrow vortex. In blue, the results obtained with Sedigraph are presented and in red the results of the Mastersizer

300 Table 21.3 Cut size of the micro-hydrocyclone obtained with different vortex

J. Izquierdo et al. Vortex

d50 (µm)

Normal

2.7

9.2

Spiral

3.1

10.8

Narrow

2.2

7.7

Sedigraph

Mastersizer

In order to see this difference numerically, in Table 21.3, the cut size (d50 ) obtained with the three different vortexes is gathered. This parameter is obtained from the partition curve, so the difference is the same that appears in Fig. 21.5.

21.5 Conclusions The objective of this work was to study the influence of the particle size measurement technique on the quality of the overflow stream and on the cut size of the hydrocyclone. From the results obtained in the analysis of the overflow samples, it was concluded that the measurement technique had a great influence on the particle size distribution, and in all the cases, the particle size obtained with the Sedigraph III was lower than the one obtained with the Mastersizer 2000. To decide which of the two analyses is better, a scanning electron microscopy (SEM) should be done, because it allows to see the real shape and size of the particles. Concerning the partition curve of the hydrocyclone, it was confirmed that if the particle size is analyzed with the sedimentation technique, the cut size obtained is smaller than if it is analyzed with the laser diffraction technique. This was a very important conclusion, because this parameter is one of the first that companies look when they need to install a hydrocyclone in their facilities. So it is important to specify the particle size measurement technique on the cut size of the hydrocyclones in order to ensure the correct operation of the equipment on the field.

References 1. Bourgeois F, Majumder AK (2013) Is the fish-hook effect in hydrocyclones a real phenomenon? Powder Technol 237:367–375. https://doi.org/10.1016/j.powtec.2012.12.017 2. Cooper JJ (1991) Particle-size measurements. Ceram Eng Sci Proc 12(1–2):133–143 3. Hwang K-J, Hwang Y-W, Yoshida H (2013) Design of novel hydrocyclone for improving fine particle separation using computational fluid dynamics. Chem Eng Sci 85:62–68 4. Ignatova T, Mincheva K, Ignatov S, Dzhelyaydinova A, Petkov T, Kyazimov A (2013) Obtaining coarse dispersed Kaolin for sanitary ceramics through hydrocycloning. J Chem Technol Metal 48(2):186–189 5. Laser diffraction particle sizing technique (s. f.) Recuperado 21 de abril de 2017, a partir de. http://www.malvern.com/en/products/technology/laser-diffraction/default.aspx

21 Effect of the Technique Used for the Particle Size …

301

6. Majumder AK, Shah H, Shukla P, Barnwal JP (2007) Effect of operating variables on shape of “fish-hook” curves in cyclones. Miner Eng 20(2):204–206. https://doi.org/10.1016/j.mineng. 2006.10.002 7. Neesse T, Dueck J, Minkov L (2004) Separation of finest particles in hydrocyclones. Miner Eng 17(5):689–696. https://doi.org/10.1016/j.mineng.2004.01.016 8. Neesse T, Dueck J, Schwemmer H, Farghaly M (2015) Using a high pressure hydrocyclone for solids classification in the submicron range. Miner Eng 71:85–88. https://doi.org/10.1016/ j.mineng.2014.10.017 9. Pasquier S, Cilliers JJ (2000) Sub-micron particle dewatering using hydrocyclones. Chem Eng J 80(1–3):283–288 10. Saengchan K, Nopharatana A, Songkasiri W (2009) Enhancement of tapioca starch separation with a hydrocyclone: effects of apex diameter, feed concentration, and pressure drop on tapioca starch separation with a hydrocyclone. Chem Eng Process 48(1):195–202 11. SediGraph III Plus | Micromeritics (s. f.) Recuperado 21 de abril de 2017, a partir de. http:// www.micromeritics.com/Product-Showcase/SediGraph-III-Plus.aspx 12. Svarovsky L (1977) Solid-liquid separation, 3rd edn. Butterworth & Co., London 13. Svarovsky L (1984) Hydrocyclones. Technomic Publishing Co., Inc, Lancaster, Pennsylvania 14. Yang Q, Li Z, Lv W, Wang H (2013) On the laboratory and field studies of removing fine particles suspended in wastewater using mini-hydrocyclone. Sep Purif Technol 110:93 15. Zhu G, Liow J-L, Neely A (2012) Computational study of the flow characteristics and separation efficiency in a mini-hydrocyclone. Chem Eng Res Des 90(12):2135

Chapter 22

Design and Development of a Snow Scoot with an Innovative Impact Absorption System H. Malon, V. Romero, and D. Ranz

Abstract This study focuses on the design of a Snow-bike (snow scoot), paying special attention to the structural optimization of the mainframe and the design and development of an innovative impact absorption system on the front skate, which favors the absorption of impacts produced during the jumps typical of this sport. From the development point of view, this project is approached through the use of a multi-criteria methodology, which involves different knowledge areas related to ergonomics, mechanical behavior, manufacturing economy, and final usability of the product. The different areas involved, feed each other in order to find the best final product among the different conceptual alternatives arising from the creative stage. For the analysis and structural optimization of the Snow-bike, a methodology based on the numerical calculation by finite elements is proposed to comply with the strong structural requirements to which the Snow-bike is subjected. Keywords Snow scoot · FEM · Impact wave · Multi-criteria · Design · Impact absorption

H. Malon (B) Dpto. de Ingeniería Mecánica. Escuela Politécnica Superior de Huesca, Universidad de Zaragoza, Carretera de Cuarte S/N, 22071 Huesca, Spain e-mail: [email protected] V. Romero (B) · D. Ranz (B) Dpto. de Ingeniería de Diseño y Fabricación. Escuela de Ingeniería y Arquitectura, Universidad de Zaragoza, Campus Río Ebro. C/María de luna S/N, 500018 Saragossa, Spain e-mail: [email protected] D. Ranz e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_22

303

304

H. Malon et al.

22.1 Introduction Within the winter sports-oriented to mountain practice, there is a minority one called snow scoot [4]. It is encompassed within the category of Snow-bike practice. The name Snow-bike designates the type of vehicles or products intended for leisure in the snow that combines movements typical of skis and snowboards with those of bicycles and motorcycles. They are similar to snowmobiles, but simpler structurally and functionally. This vehicle encompasses different variants of the same concept: to take the bicycle to the snowy mountain as a sport. Within this modality (Snow-bike) there are several variations between models and brands. They have different number of skis or boards, supports, shock absorbers, wheels, engines, etc. The presence of snow is essential for the operation of these products, as the element on which the user moves. The snow scoot, on the other hand, combines snowboarding and BMX [16], in a single product and allows the user to descend the mountain as if on a bike without seats and to turn by means of the movement of the body and the legs, sliding with the back of the vehicle on each curve. It is also aimed at competition and enjoyment in snow parks (areas where tricks, jumps, and pirouettes are performed). The user acts the same way as with skis and snowboards, climbing to the top of the mountain by ski lifts and sliding downhill through the snow, as shown in Fig. 22.1. Snow scoots always seek a combination of simplicity and functionality in all their designs, trying to improve the user experience by evolving the primitive concept of replacing the wheels of a bicycle with two boards similar to skis or snowboards. Components and functions of each of them could be summarized in two boards that can vary in width depending on the type of descent or practice performed. The front board is responsible for guiding the turn in each path, while the rear one is responsible for supporting the pilot’s weight and for sliding on its edge in each turn. This second board has a grip in the form of a strap for the feet, in charge of anchoring both legs to the body of the bike and allowing the pilot to guide the turn with his Fig. 22.1 Professional snow scoot pilot

22 Design and Development of a Snow Scoot …

305

body. The main structure of the snow scoot, also called frame on bicycles, consists of a set of tubes welded together and assembled to the boards. The structure has an upper handlebar with two grips or handles that allow the user to maintain a perfect balance and control of the entire product during the descent either on a straight path or in the different turns. It also has a strap as a safety accessory that allows to tie the structure of the snow scoot to the user, either to an arm or a leg. So in case of a fall, the snow scoot will not continue down the mountain without any control, avoiding possible accidents with other skiers.

22.2 Objectives Companies today seek to improve and evolve their models year after year, which leads to a constant search for failures, errors or possible changes that help users to improve their practice and enjoyment of this sport. To improve this leisure product, this work has been performed in collaboration with the inventor company and main manufacturer of this product. In addition, the world champion of this speciality has been consulted to improve some aspects of the design. The main objective of this study is aimed at achieving a competitive product but with which it is possible to reduce productive design costs, through optimization of materials, designs, and production in general, in order to reduce the purchase price, which for the range developed in this study is currently around e1,000. To be able to offer an economical version of the conventional snow scoot, it is necessary to reduce its weight, and therefore, the size of its structure, since it is there that more resources are invested and where the device gains more weight. In turn, it is essential to achieve a snow scoot with good performance and resistance, despite having less structure and frame. In addition, this implies the need to redesign the rear anchor for the feet, because changing the structural frame affects how the user can hold on to the vehicle. Thus, a new system or a new way of using the original model is necessary. To achieve the product’s price reduction while creating a new model, maximum use of existing and available pieces and materials must be accomplished. So the structure, fork, and front skate will not undergo development or modifications. Likewise, only professional buffering systems can be seen in events or championships, in which, brands sponsor experts in this sport and equip them with out-ofmarket models. So the company seeks as a second objective to innovate and develop an innovative system that improves current benefits without generating a very high extra cost.

306

H. Malon et al.

22.3 Methodology and Case Studies To carry out this study, a method based on the analysis of the current model is applied, in order to identify its strengths and weaknesses, and thus be able to propose several alternatives to improve it. A 3D model of the original snow scoot is made using the Solidworks software, and subsequently, an analysis is performed using the Finite Element Method (FEM), a technique widely used in the design of machine and vehicle components [1–3, 5–9, 11, 13–15]. The maximum body weight recommended for the snow scoot use is 150 kg making it clear that achieving high-speed descents is not guaranteed for this maximum weight. In the absence of specific regulations for this type of vehicle, the critical cases to be analyzed are established together with experts in this discipline. Habitual actions during descents are considered. Three types of falls, flat, front and rear, and an extreme turn (braking), are shown in Fig. 22.2. To establish the loads to which the snow scoot is subjected, the hypothesis was used to consider the jumps as parabolic shots in which the horizontal component is disregarded because the displacement is made on snow or ice. So only the vertical fall should be analyzed. Subsequently, to simulate these virtual tests the program SolidWorks is used, with its DROP TEST module, configuring the different key values in this test: height of fall, gravity (value and direction), ground plane, and its type of response to the impact. But the fundamental factor in this type of analysis is the solution time upon impact, showing the forces at a given moment once the model makes contact with the ground surface. Several analyses are made to calibrate the test and to determine the solution time after impact, so that the maximum stresses can be accurately determined. This solution time after the impact is finally established at 4,000 µs for the rear and front fall, whereas in flat fall 1,500 µs are enough to study the model behavior. The analysis method for falls consist of obtaining the nodes of maximum stress of 7 time intervals up to 4,000 or 1,500 µs according to the fall case to be analyzed. These maximum values will be identified using the tool “to identify values”, and later the “history” graph will be created with the solution time in order to obtain the stress absolute maximum value for the entire response time to the impact.

Flat fall

Fig. 22.2 Types of case studies

Rear fall

Front fall

Braking

22 Design and Development of a Snow Scoot …

Time (μs) 250 500 750 1000 1250 1500

Node Von Mises (MPa) 63026 260.4 68258 250 64619 263.2 65020 247.4 65591 264.7 64918 256.8

Part Front anchorage Rear end Main bar welding Main bar welding Support bar Main bar welding

307

Maximum Time (μs) Different mes

Node All

Von Mises (MPa) 265.1

Areas All

Fig. 22.3 Flat fall. Evolution of stresses in time for critical points

Time (μs) 1000 1500 2000 2500 3000 3500 4000

Node 2591 1638 4469 4262 4851 2258 2258

Von Mises (MPa) Part 51.5 Hole 60.3 Welding sheet 65.1 Lower sha tube 87.2 Upper sha tube 94.3 Lower sha tube 110.3 Hole 98.4 Hole

Maximum Time (μs) 3329

Node 4851

Von Mises (MPa) 94.4

Areas Lower sha tube

Fig. 22.4 Front fall. Evolution of stresses in time for critical points

Regarding the case of extreme turn, the braking is considered as the most critical case of turn. This sudden change of direction makes the device stop in a matter of a few meters and a short time, depending on the velocity. In order to define the braking situation, some fixed parameters are established: inclination 45°, weight 150 kg, and a velocity of 50 km/h (maximum velocity that could be reached in a track). The braking length is 20 m due to the high velocity and inclination. With these assumptions and the established values, the frictional force is determined that the board must sustain to be able to stop the movement in the indicated distance. Additionally, it is necessary to emphasize that the movement does not stop only by means of the friction force with the ground, but also the sum must be considered of the frictional force of the board steel edge with the snow and its displacement made by the board when braking (it raises and usually moves quite an amount of snow when making an aggressive braking). For the study, a braking force of 2,000 N is established, to which the board and the snow scoot structure will have to react.

308

Time (μs) 1000 1500 2000 2500 3000 3500 4000

H. Malon et al.

Node Von Mises (MPa) Part 47761 17.0 Rear holes 47761 43.6 Rear holes 47761 40.2 Rear holes 56823 32.1 Rear holes 51965 36.8 Support bar joint 53544 42.6 Curved rear bar 56821 40.9 Rear holes

Maximum Time (μs) 1759

Node 47761

Von Mises (MPa) 63.65

Areas Rear holes

Fig. 22.5 Rear fall. Evolution of stresses in time for critical points

Figures 22.3, 22.4 and 22.5 show the obtained results of Von Mises equivalent stresses for the different analyses performed on the original model for the three fall cases. Specifically, the nodes or salient points during impact are represented, as well as the time in which they are relevant. Figure 22.6 shows the results obtained for the braking case through a static study of the original model. In the scheme on Fig. 22.7, the different affected zones are shown, as well as the types of falls that influence them. Additionally, the impact intensity and relevance are highlighted with circles. Node

Von Mi s es (MPa )

Pa rt

63026

260.4

Front a nchora ge

Fig. 22.6 Braking, Von Mises stress

22 Design and Development of a Snow Scoot …

309

The welded connection of the main bars with the axis of rotation of the frame is affected by the types: front fall and flat fall The welded connection of the support bar with the axis of rotation of the frame is affected by the type: front fall

The welding between the support bar and the blade that supports the user is affected by the types: rear fall and flat fall

The front bends of the main structure are affected by the type: flat fall

The rear anchorage part of the structure to the board with threaded bolts is affected by the type: rear fall

Fig. 22.7 Affected zones

As conclusions on the initial model, it can be established that in the flat falls the more affected zones are the welded connections between the different bars, especially in the union of the support bar with the feet support blade, as well as the union of the main bars to the rotation axis of the frame. In the front fall, this last zone becomes important again since the greater stresses in magnitude and repetitions are grouped around the inferior zone of this axis, near the welded connection with the main bar. In the rear fall, the rear anchorage area is affected where the welded connection between the support bar and main structure is located. It is a complicated area since the end of this bar must adapt to two different heights, the support blade’s height and the transverse bar’s height. Additionally, in the braking analysis, the same zone stands out again, the rear anchorage, in particular, the rear anchorage orifice which connects the structure with the board. As a conclusion of the initial model analysis, the need is established to pay special attention to the anchorage areas of the structure with the rear board, as well as the connections with the rotation axis, as it is there where more stresses build up.

310

H. Malon et al.

Finally, from the braking analysis, the need for some kind of control on the rear skate can be extracted. When confronting a very big force, the flexion on the board is very large and it is limited by the main structure, in particular, by the curved end of the back part.

22.4 Alternatives and Study Regarding these latest conclusions, four alternatives are proposed. The first two simplify the support structure and the frame by totally replacing the anchor of the original model by a tube that runs longitudinally through the board. The difference between these two options is only at the end, to study the need for a final appendix serving as a stop for the board’s flexion. The other two alternatives offer a version of the snow scoot in which the anchor of the frame ends directly on the board and there is no connection between both frame tubes after their connection to the wood (Fig. 22.8). The same tests are performed to these alternatives in order to verify their performance and to obtain a set of conclusions which will allow to get closer to the final evolution. Table 22.1 summarizes the results of Von Mises equivalent maximum stresses obtained through numerical analysis performed on the four proposed alternatives. In this table, maximum stresses obtained for each load cases or analyzed maneuvers are highlighted in bold in order to facilitate a fast result analysis.

Fig. 22.8 Design alternatives

Table 22.1 Comparison of Von Mises stresses (MPa) of the alternatives Model

Flat fall (MPa)

Front fall (MPa)

Rear fall (MPa)

Braking (MPa)

Original

265.1

94.4

63.6

109.3

Alternative 1

264.0

96.8

74.3

200.0

Alternative 2

265.2

99.6

60.8

150.9

Alternative 3

264.7

97.3

77.4

144.4

Alternative 4

264.3

100.6

64.9

228.3

22 Design and Development of a Snow Scoot … Table 22.2 Material savings

Model

311

Mass (g)

Mass saved (g)

Mass saved (%)

Original

2,700.00

0

0

Alternative 1

1,479.22

1,220.78

45.21

Alternative 2

1,341.12

1,358.88

50.33

Alternative 3

1,109.42

1,590.58

58.91

Alternative 4

1,155.82

1,544.18

57.19

As can be seen, the results of Von Mises stresses for each maneuver are similar for the four alternative proposals and the initial model, except for the braking case in which stresses are almost doubled. In addition to the generated stresses, material saving is also analyzed for each alternative comparing to the original model. These savings are shown in Table 22.2. The results of mass saving show values higher than 45%. Reductions up to 58% are achieved. The obtained mass reduction shows values far higher than those obtained in other vehicles components’ optimization processes [10–12], indicating the need for optimization of the snow scoot’s initial design analyzed in this study. Analyzing the obtained mass saving variable, the third and fourth models are those that show more material saved. Also, from the point of view of the production process, these two alternatives are also the simplest to manufacture. Considering all these aspects, a weighting grid is created where the 4 alternatives are rated from 1 to 5 according to the performance of each model in different aspects: reaction to analyzed stresses (criterion 1), mass reduced with respect to the original (criterion 2), simple and economic production processes (criterion 3), and best response to everyday use (criterion 4). Assessment of the 4 alternatives considering the analyzed aspects is summarized in Table 22.3. Alternative number 3, which is shown in Fig. 22.9, is the one that combines the best qualities and characteristics and is taken as a starting point to achieve the final design. The conclusions drawn from all these analyses are taken into account in the evolution and further development of the snow scoot, and they are as follows:

Table 22.3 Multi-criteria assessment Model

Stresses

Mass

Processes

Use

Total

Alternative 1

4

3

1

4

12

Alternative 2

4

3

1

3

13

Alternative 3

4

4

3

4

15

Alternative 4

4

4

3

2

13

312

H. Malon et al.

Fig. 22.9 Chosen alternative

• Control and limitation of the back board’s flexion must be achieved. • A new location for the pilot’s feet and boots is necessary because the board itself supports too much impact modifying and altering the correct displacement of the wave along the board and structure of the snow scoot. • And finally, the design of the tubular frame must be modified with the objective of reducing the weaknesses exposed in fall studies.

22.5 Final Development To solve the problem of uncontrolled flexing of the rear board and the support of the user’s feet, an intermediate modification between the original model and alternative 3 is proposed. In the original snow scoot, there was an entire base of tubes and a metal sheet on the rear board that controlled the vehicle and served as a support. Now, the snow scoot will incorporate a steel plate that will fulfill the same functions as that original block (Fig. 22.10). In addition, with a more commercial approach, perforations remaining on the metal plate are used now to inscribe the company’s logo. And finally, to achieve a really good tubular frame that meets perfectly the defined needs without any weak points, a new redesign is necessary. For this reason, two new models are proposed that solve the critical connecting points between the tubes and the anchoring metal plates. By performing the same studies, the obtained tubular frame model is shown in Fig. 22.11. To complete the final redesign, the anchoring system of the original model is maintained in this new design and a velcro strap is used as anchorage for the feet, fixed laterally to the steel plate and with a rubber plate over the metallic surface, which is shown in Fig. 22.12.

22 Design and Development of a Snow Scoot …

313

Fig. 22.10 New steel plate design

Fig. 22.11 New tubular frame

Finally, it is necessary to develop a new buffering system that meets the established objective. The current basic models incorporate rubber cylinders between the boards and the structure as a cushion. These are the so-called silentblocks (anti-vibration elements that absorb vibrations and shocks that involve mechanical components). Thanks to the collaboration of an engineer in this sector, a new buffering system based on radial cylinders instead of axial (the ones usually used) and a support spring are implemented. This implies a structural modification of the front part of the snow scoot (Fig. 22.13).

314

1. The snow scoot attachment strap for the user's feet. It is used to fix the pilot's boots and prevent them from moving and separating from the snow scoot's support surface, but allowing the user to quickly remove it with its velcro closure.

H. Malon et al.

2. A pair of sleeves is added to the handlebars that improve the grip of the pilot's gloves to the steering.

3. In addition, a number of metal cylinders with a threaded hole in the center are added to the sides of the support plate. These pieces are welded to the support surface just like in the original model. They anchor the fastening tape.

4. A rubber-eva sheet provides the necessary grip and fastening between the user's boots and the frame structure. It is fixed with adhesive on the optimized sheet of the frame, and will follow the same drawing as the metal plate.

Fig. 22.12 Final details Fig. 22.13 New buffering system

22 Design and Development of a Snow Scoot …

315

22.6 Final Conclusions A calculation methodology has been established that has allowed the development of a new snow scoot model that meets the initial requirements of cost reduction and good structural behavior. In addition, aesthetically, the final model proposed incorporates a frame that visually is quite different from the original, which minimizes the number of components, because it comprises 3 tubes, a steel plate, 4 gusset plates, and 8 small fixing cylinders. All these elements are manufactured with the same aluminum type to facilitate the union between them. The final model obtained through the redesign and optimization process is shown in Fig. 22.14. With this final design, the objective set in the study to lighten the frame is achieved, obtaining a 30% mass saving. Initially, more than 50% of mass saving was achieved with alternative 3 from which this design started, but modifications have been made and a series of elements have been introduced to the optimized model in the final development phase (support plate, new tubular frame design, different grips, etc.) to guarantee an experience similar to the current one without affecting the user’s comfort. Finally, the buffering system developed previously is added to the final model, so that the redesigned snow scoot has a complete buffering system, which includes a compression spring system for great efforts such as jumps, and a silentblock in charge of reducing vibrations and motion noises, improving comfort and the pilot’s experience on the snow scoot. This final design is shown in Fig. 22.15.

Fig. 22.14 Optimized model

316

H. Malon et al.

Fig. 22.15 Final model

References 1. Beermann HJ (1984) Static analysis of commercial vehicle frames—A hybrid finite-elements and analytical method. Int J Veh Des 5(1–2):26–52 2. Cappello F, Ingrassia T, Mancuso A et al (2005) Methodical redesign of a semitrailer. In: 9th international conference on computer aided optimum design in engineering. Computer Aided Optimum Design in Engineering IX Book Series: Wit Transactions on the Built Environment, vol 80, pp 359–369 3. Carrera M, Castejon L, Miralbes R et al (2010) Behaviour IA rear underrun protection system on car-to-tank vehicle impact used for fuel transportation. Int J Heavy Veh Syst. 17(3–4):199–215 4. Ciao (2016) Snowscoot en Arinsal Pal, Vallnord, Andorra. http://www.arinsalskishop.com/not icias-temporada-de-esqui-en-arinsal-andorra/snowscoot-en-arinsal-pal-vallnord-andorra 5. Deng YD, Wang J, Wen Y et al (2011) The static and dynamic characteristics study of aluminum tank semitrailer. In: 2nd international conference on manufacturing science and engineering. Manufacturing process technology, PTS 1–5 Book Series: Advanced Materials Research, vol 189–193, pp 2233–2237 6. Hoefinghoff J, Jungk A, Knop W et al (2011) Using 3D field simulation for evaluating UHF RFID systems on forklift trucks. IEEE Trans Antennas Propag 59(2):689–691 7. Karaoglu C, Kuralay NS (2002) Stress analysis of a truck chassis with riveted joints. Finite Elem Anal Des 38(12):1115–1130 8. Kodiyalam S, Sobieszczanski-Sobieski J (2001) Multidisciplinary design optimization—Some formal methods, framework requirements, and application to vehicle design. Int J Veh Des 25(1–2):3–22 9. Li MH, Lam F, Lee G (2007) Structural assessment of van trailer floor systems with aluminium frame and wood decking. Int J Heavy Veh Syst 14(2):216–226 10. Malon H, Valladares D, Carrera M, Castejon L, Martin-Buro P, Lozano R (2014a) Experimental test and optimization by means of the finite elements method of a sub-frame for truck with crane. In: 18th International congress on project management and engineering. Alcañiz (Spain), pp 1134–1146

22 Design and Development of a Snow Scoot …

317

11. Malon H, Garcia-Ramos FJ, Vidal M, Bone A (2014) Design and optimization of a chassis for an Air-assisted sprayer with two fans using the finite element method. Proy Manag Eng Res. https://doi.org/10.1007/978-3-319-26459-2_8 12. Malon H, Castejon L, Cuartero J, Martin B (2015) Numerical analysis of two platforms designs for auxiliary construction truck. Exp Tech 39(6):53–60 13. Miralbes R, Castejon L (2010) Fatigue design of tanker semi-trailers. Dyna 85(6):480–488 14. Miralbes R, Malon H, Castejon L (2011) Diseño de accesorios para el acoplamiento en carretillas manipuladoras: plumines y portapalets. En: Libro de resúmenes del XV International Congress on Project Engineering. ISBN 978-84-615-4542-1 15. Vidal M, Bone A, Garcia-Ramos FJ, Malon H, Villacampa R (2011) Desarrollo de máquina para la aplicación localizada de cebo rodenticidas en parcelas agrícolas. En: Libro de resúmenes del XV International Congress on Project Engineering. ISBN 978-84-615-4542-1 16. Vanesha and Trisha (2015) ¿Que es el Snowscoot? [consulted 20th of April 2017]. https://actividadfisicaybienestar.com/category/deportes/deportes-de-invierno-deportes/ snow-scoot/que-es-el-snow-scoot/

Chapter 23

Robustness Analysis of Surface Roughness Models for Milling Carbon Steels M. Ortells-Rogero, J. V. Abellán-Nebot, J. Serrano-Mira, G. Bruscas-Bellido, and J. Gual-Ortí Abstract The modeling of surface finish in machining processes is very important in order to optimize cutting conditions and ensure surface quality. In the literature, many research works have presented specific surface roughness models for a given application. However, these models are not applicable in other environments with different machine tools, cutting tool, starting material, etc. In this work, different roughness models presented in the literature for milling carbon steels are studied. The most significant factors of each model, the ability to apply these models in different environments, and their robustness are analyzed. The study enables the analysis of the basic structure of the roughness model that provides greater robustness for the estimation of surface roughness under changes in the process. The model is validated in different machine tools under different cutting conditions. Keywords Surface roughness · Robustness · Machining · Modeling · Design of experiments

23.1 Introduction In machining processes, surface roughness is one of the most important characteristics of the product quality and has a great influence on manufacturing costs, aesthetic appearance, and the tribological properties of parts, such as fatigue strength or corrosion. The superficial quality is directly related to the cutting conditions and, therefore, it is very important to quantify the relationship between the surface roughness and the cutting parameters: the cutting speed, the feed rate, and the cutting depth. However, surface roughness is not only influenced by these cutting parameters, but also by a large number of factors such as: wear of the cutting tool, the characteristics of the cutting material and the part to be machined, the geometry of the cutting tool, the M. Ortells-Rogero · J. V. Abellán-Nebot (B) · J. Serrano-Mira · G. Bruscas-Bellido · J. Gual-Ortí Department of Industrial Systems Engineering and Design, School of Technology and Experimental Sciences, Jaume I University. Av. de Vicent Sos Baynat, s/n, 12071 Castellón de la Plana, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_23

319

320

M. Ortells-Rogero et al.

stability and rigidity of the machine tool, the clamping system, the emergence of a built-up edge during cutting, the existence of refrigerant/cutting lubricant, etc. [20]. Numerous research studies have tried to model surface roughness based on these factors in order to be able to estimate the final quality of the part, optimize cutting conditions, and reduce costs. Franco et al. [11] modeled the axial and radial runout effect on surface roughness and later expanded their study to examine the effect of back-cutting on surface roughness [12]. The influence of the machine tool rigidity in surface roughness generation was also later studied by De Aguiar et al. [8], where it was observed that in spite of vibrations during cutting, when using slender tools, it is possible to obtain a good surface roughness provided that the frequency of the tooth pitch used in the milling process (and its harmonics) does not produce a high frequency response. One of the most critical and studied factors associated with surface roughness is the wear of the cutting tool. In some experiments, as shown in the work of Yan et al. [23], higher roughness values are observed when the insert is in good condition (new) in comparison with a semi new state. When wear begins to develop, the radii of the tool edges increase, and therefore, the distance between peaks of the roughness profile due to the geometry of the cutting tool decreases, improving surface roughness. However, in most operations when the tool wear exceeds a certain limit, the surface roughness tends to increase considerably [20]. The effect of the cutting tool coating over the surface roughness has also been studied by different researchers. Nalbant et al. [16] analyzed the effects of coating on the cemented carbide inserts over surface roughness in CNC turning. The result was compared with inserts without coating, with PVD coating, and with CVD coating. In the experimentation, it was observed that the increase in the number of coating layers reduces the friction coefficient and decreases at the same time the mean surface roughness of the workpiece. Cakir et al. [4] used two types of inserts with the same geometry and substrate but different coating layers to assess the effects of the coating layers, as well as the cutting parameters, on the surface roughness. In this research, the authors showed that surface roughness values were significantly smaller when inserts coated with PVD (TiAlN) were used rather than inserts coated with CVD (TiCN + Al2 O3 + TiN). Other factors of great influence on the surface finish are related to the geometry of the cutting tool, such as the rake angle, the relief angle, the inclination angle, the tool nose radius, the tool edge, and the chip breaker geometry among others. In fact, these parameters can influence surface roughness as shown in the majority of technical catalogs of cutting tools manufacturers [5]. Some of these parameters have also been considered by researchers to analyze and model surface roughness under different cutting conditions. For example, Grzesik [13], studied the effect on the surface roughness of ceramic tools with different shapes when turning highhardness materials. Özel et al. [17] also examined the effect of the cutting geometry (polished edges) on surface roughness. The cutting parameters and their effect on surface roughness have also been extensively studied in the literature [5]. Many references show how increasing the cutting speed improves surface quality, since it reduces the cutting forces and also the effect

23 Robustness Analysis of Surface Roughness Models …

321

on vibrations. On the other hand, too low cutting speeds can produce adhesion of the material in the inserts (built-up edge) that decreases the surface quality, especially in soft materials [22]. A critical aspect in the cutting parameters is the feed rate and its relationship with the minimum undeformed chip thickness. The surface roughness profile shows more fluctuations as the feed decreases, which can be explained by the influence of undeformed chip thickness. As the feed rate decreases, smaller values of undeformed chip thickness are obtained and chip removal becomes more difficult. In these circumstances, there is a greater tendency toward elastic and plastic deformation of the workpiece surface, which implies greater fluctuations in the surface finish profile. The cutting depth is another parameter that may have an impact on surface roughness. Tammineni and Yedula [21] observed that surface roughness increases rapidly with the increase of the feed rate and decreases with the increase of the cutting speed, but the cutting depth has no clear impact. Cakir et al. [4] also observed that the cutting depth has a negligible effect on surface roughness; however, Darwish [6] did detect that this parameter was important and, according to his experimentation, the greater the cutting depth the lower the surface roughness. The effect of the workpiece material on the surface roughness has also been analyzed in different studies. Desale and Jahagirdar [9] observed that the influence of the workpiece material hardness on the surface finish is significant. Routara et al. [18] found in their experimentation that the cutting parameters (cutting speed, cutting depth, and feed rate) have a significant effect on the roughness parameters, but their influence varies according to the nature of the workpiece material (steel, aluminum, and brass). Therefore, their proposals for modeling based on the surface response to surface roughness were performed independently for each working material. The use of coolants and lubricants for cutting is another factor that has been studied in detail in various researches. Sreejith [19] studied the effect of different lubrication techniques (flood lubrication, MQL, and dry machining) during the machining of aluminum alloys 6061 with diamond coated carbide tools. The presence of built-up edges (adhesion of the workpiece material to the cutting tool), was observed in all cutting operations. This presence was most marked during dry machining and less so with flood lubrication, so the lubrication technique definitively conditioned the final surface quality of the workpiece. The use of the MQL technique improved the surface finish with respect to dry machining. Moreover, higher flow rates of lubricant produced lower surface roughness values. Yalcin et al. [22] observed similar results when machining soft materials with different cooling strategies. In their experimentation, they detected that the use of dry cutting for the milling operation of soft materials with tools of HSS material is inadequate due to the tendency to adhesion. Other cooling strategies such as air cooling can minimize the appearance of built-up edges in this type of materials. Despite research performed in the surface roughness modeling field in machining, there are few works that have studied robust strategies to model the process, with the capacity to be applied in different contexts and that do not limit their use to the process on which the model has been obtained. The problem of generating ad hoc models for each machine/tool/part system is that these models are difficult to

322

M. Ortells-Rogero et al.

use in the industrial field, since changes in the starting material, refrigerants, cutting tools, etc., makes the model quickly useless. Abellan-Nebot et al. [1, 2] introduce the problem of using a model based on experimental data regarding the process when it is subjected to changes such as a change in the material hardness or the use of another machine tool, or even a change in the lubrication type to be used. The model is no longer valid in most cases, showing the need to look for strategies to obtain robust models that are easily adaptable to changes in the process. This paper analyzes different surface roughness models published in independent research works and focused on milling operations of carbon steels, specifically in face milling operations using indexable milling cutters. The object of the study is to analyze the variability in similar roughness models made by different researchers. This seeks to study what factors are the most relevant for the modeling of surface roughness and if there is a base model structure in all the reviewed studies that allows establishing a robust “skeleton” model for the majority of cases within this type of machining operations. This communication is structured as follows. Section 23.2 presents the works previously published by other authors who model surface roughness in face milling operations on carbon steels. Models are compared and the result of applying each of the obtained models on the experimental data of the remaining studies is analyzed. This will show the particularity of each model to explain only the data with which it has been obtained. Section 23.3 analyzes the structure of each model and the parameters which influence all cases. Thus, this work aims to analyze the base structure of any surface roughness model for this specific machining application and search for the design of experiments with minimal experimentation that allows obtaining this model. This will be designated the base or robust model since its implementation is valid in any face milling process with carbon steel as work material. Section 23.4 shows the use of the robust model on the revised articles and also discusses its implementation in an experimental case. This section details the results of the robust model application in terms of its efficiency in the use of experimental data and its accuracy when compared to the results of the ad hoc models. Finally, the most relevant conclusions of the work are discussed.

23.2 Review and Comparison of Surface Roughness Models In order to analyze the problems of modeling surface roughness and the difficulty of generating valid models for different contexts, various experimental models published in journals and international conferences have been studied. To delimit the study, all the studied works focus on face milling operations using indexable milling cutters over carbon steel. Models derived from these works are shown in Table 23.1. As can be seen, each application modeling error is relatively low. However, the model is distinctly different in each case; despite the fact that the application is very similar in all of them: the face milling operation uses carbide inserts in indexable milling cutters and the material to be machined is carbon steel.

23 Robustness Analysis of Surface Roughness Models …

323

Table 23.1 Surface roughness models in recent publications Refs.

Author

Ref. [3]

Bajic et al.

Workpiece material

Error (%)

Ra Range (µm)

Cutting parameters V c ( m/min)

f z mm/tooth

ap ( mm)

St 52-3

9

0.34 1.10

140 190

0.20 0.40

1 3

Ra Model: 5.662 − 0.064 V c + 0.00017 V 2c + 7.773 f 2z + 0.027 a2p (*) Ref. [15]

Moghaddamm and Kolahan

AISI 1045

3.5

1.47 2.22

126 314

0.06 0.18

1 2

Ra Model: 2.02 − 0.012 V c + 13 f z + 2 × 10−5 V 2c − 42.2 f 2z +0.002 V c ap Ref. [14]

Kulkarni and Rathi

SAE 1541

7

0.59 1.48

94 125

0.09 0.17

0.2 0.4

Ra Model: 0.084 + 0.557 ap + 37.9 f 2z (*) Ref [10]

Erviti

AISI 1013

4

0.58 1.43

98 233

0.05 0.26

1

0.1 0.3

0.5 2

Ra model: 0.69 + 0.0038 V c + 8.7276 · f 2z −0.022 V c f z (*) Ref. [1]

Abellan-Nebot et al.

DIN S275JR

11

0.2 1

200 335

Ra Model: 0.06 + 2.5 f z + 3 × 10−7 V 2c − 0.007 a2p V c : Cutting speed, f z : Feed per tooth, ap : Depth of cut. (*) Models obtained using the data provided in the paper

The problem of the model’s portability, i.e., the possibility of using a model designed for a specific application (for a certain machine tool, workpiece material, etc.) in another context albeit with a similar application, can be displayed by exchanging models defined in Table 23.1, so the influence of using a model from a study with data from another similar study can be observed. This comparison is shown in Table 23.2, and it easily illustrates how experimental models are ad hoc and specific for the context in question. The diagonal of the table refers to the application of the model on the validation data that were used during its construction. These Table 23.2 Error matrix when using in each work the models described in the rest of communications Models Ref. [3] (%) Data

Ref. [15] (%)

Ref. [14] (%)

Ref. [10] (%)

Ref. [1] (%)

208

896

86

261

80

3

35

40

218

28

127

7

26

363

76

56

55

4

102

Ref. [3]

9

Ref. [15] Ref. [14] Ref. [10] Ref. [1]

47

80

56

63

11

Average

48

95

210

44

191

324

M. Ortells-Rogero et al.

values are reported in the table above, and it shows that for any combination of data from other works, the different surface roughness estimation models generate a very high estimation error, so the model is not feasible in these conditions. In the best scenario, the average of the model error using the values of all the works listed above is 44%.

23.3 Robust Model Structure On the other hand, in addition to the impossibility of applying empirical models derived for a context to a different one, it is interesting to observe that even the model structure is different. The importance of each cutting parameter and its interactions is shown in Table 23.3. As it is shown, in the majority of works the most important variables are, in this order, feed per tooth (f z ), the cutting speed (V c ), and the cutting depth (ap ). The interactions of these parameters may be important, in particular, the V c · f z interaction. However, only the feed per tooth parameter is used on all models. The fact that the model structure used in Table 23.1, is different may depend on the phenomena that occur in the range of the analyzed cutting conditions. For example, the range of working conditions can include the appearance of a built-up edge during cutting which creates a great impact on the surface roughness. The cutting speed is the main parameter influencing this phenomenon. Alternatively, working conditions can be those where surface finish is clearly determined by the cutting edge geometry (high feed speed), being the main cutting parameter the feed per tooth, without an important influence from other parameters. Davim [7] shows a typical behavior of the surface roughness against the feed per revolution and the cutting speed in turning operations (see Fig. 23.1). Therefore, it is not possible to obtain a model structure for surface roughness valid for all cutting conditions, even if the cutting operations and the work material are similar, as shown in Table 23.1. However, a Design of Experiments (DoE) strategy can be defined which allows, with minimal experimental cost, to obtain a base/robust model for surface roughness with acceptable accuracy and valid for most applications, leaving the development of more accurate models and at the same time more expensive experimentally for applications that require them. Table 23.3 Influence of each cutting parameter in surface roughness ap

V 2c

a2p

Vc · f z

V c · ap

80.9%

6.1%







3.4%





42.6%



91.6%











2.3%



39.9%





0.2%



0.3%







Vc

fz

Ref. [3]

11.6%





1.3%

Ref. [15]

17.6%

27.6%



8.5%

Ref. [14]





8.3%



Ref. [10]

57.7%





Ref. [1]



99.5%



f 2z

f z · ap

23 Robustness Analysis of Surface Roughness Models …

325

Fig. 23.1 Normal behavior of surface roughness according to feed rate and cutting speed

To delimit the scope of the proposed robust model, the behavior of the surface roughness is separated into 5 clear working zones. Zone I refers to the area where the feed per tooth is very small, which means that the thickness of the chip is so low that the shearing of material is not carried out properly. This may cause the upsetting of the material and increase surface roughness. Zone II refers to the area where the chip removal is properly conducted and the peaks and valleys of the surface roughness is due to the geometry of the tool tip. That is, zone II would encompass the zone of surface roughness that defines the cutting tool geometry when the feed per tooth is very high. Zone III, refers to the intermediate zone between the zone I and zone II. That is to say, the zone where neither the upsetting occurs, nor a roughness formed by the cutting tool geometry is generated. Zone IV occurs when the cutting speed is low and produces the phenomenon of built-up edge. When this appears, part of the sheared chip adheres to the cutting tool and is then dislodged, remaining embedded in the machined surface and considerably raising surface roughness. Finally, in zone V, the built-up edge phenomenon does not appear and the surface roughness remains relatively stable against variations in the cutting speed. There are many other variables that can influence surface roughness in addition to the cutting parameters (cutting speed, feed rate, and cutting depth), such as the machine-tool dynamics, the cutting tool wear, the application of different lubrication techniques, etc. However, many of these variables remain constant for a given application (for example, the used machine tool is established), and on other occasions, their influence is not as significant as the influence of the cutting parameters. In order to obtain a base model, it is necessary to propose a methodology that takes into account the area of behavior of the surface roughness considered. For the proposed model, it is assumed that the process is performed in zones II, III, and/or V, that is, the work is not done within a range where there is an excessive built-up edge or upsetting. Note that working in zones I and IV is meaningless since the appropriate cutting conditions for that application would not be used. In this way and looking at the factors influencing surface roughness models of the revised works (Table 23.3), common factors that influence all the works are feed per tooth and cutting speed, and within these factors, the quadratic component of feed per tooth is often present in all of them.

326

M. Ortells-Rogero et al.

DoE 2 x 3 +1 Regression, step by step Order of regressors: fz2, Vc·fz, Vc, fz

Vc

Ra = a·fz2 + b·Vc·fz + c·Vc + d·fz + e -1 -1

0 fz

+1

Fig. 23.2 Methodology based on Design of Experiments to obtain the base/robust model

Under these considerations, a minimum design of experiments that would allow to obtain a base/robust model in any application of face milling of carbon steels would be formed by a design of two levels in the cutting speed (maximum and minimum Vc values of the range to study) and three levels of feed rate (maximum, minimum, and intermediate fz values of the range to be studied). Thus, the model has a factorial design of experiments 2 × 3 = 6 experiments involving 5 degrees of freedom, that will allow to adjust four regressors that will define the robust model: V c , V c · f z , f z , f 2z . Due to the low number of experiments, to perform the regression of this model it is recommended to implement a procedure step by step, including a new regressor every time and in the following order: f 2z , V c · f z , V c , f z , which corresponds to the importance obtained in the analysis of Table 23.3. When the entry of a regressor does not significantly improve the adjustment, it would be eliminated proceeding with the next regressor. Figure 23.2 shows the methodology for base/robust models with minimal experimentation. Based on this methodology, a surface roughness model can be obtained, valid for the majority of cases, provided that the range of conditions analyzed is in the zones II, III or V described above. Otherwise, the quadratic factor of the cutting speed or several quadratic functions where the f z term would apply according to the work zone should be added which makes the model difficult to be obtained with minimal experimentation. This robust model could be the basis for a generic model that, with a small additional experimentation, could be adapted to any type of context. For example, the influence of changes in lubrication, wear of the tool, or other factors could be studied and analyzed later to be included in a model with greater accuracy and scope.

23.4 Robust Model Validation In order to validate the proposed methodology for obtaining base/robust surface roughness models, two independent studies are carried out. On the one hand, the outcome of the base model which would be obtained by applying this methodology

23 Robustness Analysis of Surface Roughness Models …

327

to the Refs. [1, 3, 10, 14, 15] of Table 23.1 is analyzed. These references indicate the experimental values obtained from the designs of experiments carried out to model the surface roughness, so this data can be used to obtain and compare the result of the base model with respect to the ad hoc model proposed in each reference. On the other hand, an experimental study is performed to validate the proposed modeling methodology on a real application. The accuracy of the robust model and its experimental cost can be analyzed and compared to other modeling methodologies, valuing the possibility of using it when the precision of the required modeling is not excessive.

23.4.1 Theoretical Validation. Comparison with Ad hoc Models Table 23.4 shows the characteristics of the models of Table 23.1, indicating the modeling technique (Central composite design, Taguchi orthogonal array or BoxBehnken technique); the number of factors; levels used in experimentation; number of experiments carried out for the model construction; experiments for model validation; and model accuracy. As shown, most of the models require a reasonable amount of experimental data. Reference 1 is the one that requires more experimental data since it is a DoE which afterwards aims to produce a model based in response surfaces where all factors used could have a quadratic component. The most efficient technique is the Taguchi orthogonal array L9, which indicates 9 experiments with a maximum of 4 three-level factors to analyze, though most of the interactions between factors are confused with one another or with their own main factors. The result of the application of the proposed methodology is shown in the second part of Table 23.4. As the necessary DoE is a 2 × 3 factorial design, two levels of V c and 3 levels of f z , 6 experiments are needed to carry out the model. The experimental data required for modeling is extracted from the data provided in each of the articles of Refs. [1, 3, 10, 14, 15], studied. In the case the experimental result of that combination of levels of the factors V c and f z is not available, this information is obtained by making use of the ad hoc model. To validate and quantify the accuracy of the model, the validation experiments available in each reference are used. In many cases even though the number of experiments carried out for the validation of the model is mentioned, these data are not shown explicitly so they cannot be used in the validation process. In these cases, validation is performed using all the experimental data available in the reference without discriminating between those used for the model construction or those excluded. As shown in Table 23.4, the models obtained using only 6 experiments (base/robust models) are less accurate than those obtained with a greater number of experiments. However, the difference in accuracy is not high, and the development of a sufficiently precise surface roughness model can be greatly simplified for many applications with a minimal number of experiments. According to Table 23.4, the average error of the ad hoc models is 7%, while the base/robust models obtained

328

M. Ortells-Rogero et al.

Table 23.4 Comparison of surface roughness modeling error. Ad hoc models and robust model Ad hoc models provided by references

Ref. [3]

Ref. [15]

Ref. [14]

Ref. [10]

Ref. [1]

Model1

A

B

C

D

E

Modeling technique

Central composite design

Taguchi array (L9)

Taguchi array (L9)

Not defined Box-Behnken

Variables & levels

V c, f z & V c , f z & ap V c , f z , ap ap , 3 levels 3 levels &Q3 levels

Nº Exp. (model)

Robust Model

20

9

18 9

f z 3 levels, V c 4 levels

V c , f z & ap 3 levels

12

15

Nº Exp (valid.)

Not indicated

6

Model error

9%

3.5%

7%

4%

11%

Model2

F

G

H

I

J

Modeling technique

Factorial DoE 2 × 3

Variables & levels

V c 2 levels, f z 3 levels

Nº Exp. (model)

Not indicated

6

6

Nº Exp. (valid.)

20

9

18

12

13

Model error (%)

13

10

9

5

9

5.662 − 0.064 V c + 0.0002 V 2c + 7.77 f 2z + 0.027 a2p ; B: 2.02 − 0.01 V c + 13 f z + 2 × 10−5 − 42 f 2z + 2 × 10−3 V c · ap ; C: 0.08 + 0.56 ap + 37.9 f 2z ; D: 0.7 + 0.004 V c + 8.72 · f 2z − 0.02 V c f z ; E: 0.06 + 2.5 f z + 3 × 10−7 V 2c − 0.007 a2p . 2 F: 0.458 + 14.4 f 2z − 0.0227 V c · f z ; G: 1.8 + 24.6 f 2z − 0.0129 V c · f z ; H: 0.28 + 9.14 f z ; I: 0.78 + 5.5 f 2z + 0.003 V c − 0.016 V c · f z ; J: −0.095 − 3.75 f 2z + 4.43 f z

1 A:

V 2c

with the proposed methodology, using only 6 experiments, allow a surface roughness prediction with an average error of 9%. Therefore, for the 5 references analyzed, the use of the robust model is feasible.

23.4.2 Experimental Validation. Case Study The methodology proposed for obtaining a robust model is experimentally validated on a Deckel Maho DMC70V machining center. The object of the surface roughness modeling is the operation of face milling of steels with low carbon content using carbide inserts and indexable milling cutters. These characteristics are similar to the

23 Robustness Analysis of Surface Roughness Models …

329

research works analyzed in Table 23.1. The workpiece material is a 250 × 250 mm billet made of DIN S275JR (HB 150) carbon steel. The machining is carried out with a 52 mm diameter face mill (reference KDM200RD12S075C) with 12 mm diameter round inserts (reference RDPX12T3M0SHN KCPM20). For the experimentation, a roughness profilometer Mitutoyo SJ-210 is used, calibrated with a pattern of 2.95 µm using a basic length of 0.8 mm and an evaluation length of 4 mm. 15 measurements were made and the Chauvenet criterion was applied to rule out measurement errors. Uncertainty of measurement of the roughness profilometer of I1 = ±0.012 µm was obtained as a result of the calibration. On the other hand, to estimate the inherent variability of the chip removal process, the face milling operation of several cutting passes was analyzed. For this study, the possible effect of wear on the surface roughness is not considered, so the experimentation is performed with a new tool, changing the inserts every 4 complete cutting passes. An ANOVA analysis of the surface roughness in these 4 cutting passes indicated that the possible wear during the machining of these passes was not statistically significant. The analysis shows that the natural variability of the machining process for this application is ±0.10 µm within a 95% confidence interval. Figure 23.3 illustrates the machining and inspection process. This application aims to model the surface roughness with a minimum number of experiments, in order to obtain a base or robust model to explain the generic behavior of the process. The experimental results after applying the methodology described above are shown in Table 23.5. As this is a 2 × 3 factorial design, 6 experiments are carried out (machined) with the parameter combinations indicated in Table 23.5. The cutting conditions used cover the range recommended by the manufacturer: V c [150, 300] m/min; f z [0.1, 0.5] mm/tooth. Applying step by step regression with the Cutting direction

Surface roughness measurements

Parameters: λc = 0.8 mm; ln = 4 mm 3 measurements per cutting pass (Ra)

Fig. 23.3 Face milling operation modeled and measurement of surface roughness

Table 23.5 Experimental results after the 2 × 3 factorial DoE Exp

Vc

fz

ap

Ra measured

Exp

Vc

fz

ap

Ra measured

1

150

0.1

0.5

0.203

4

300

0.1

0.5

0.210

2

150

0.3

0.5

0.463

5

300

0.3

0.5

0.559

3

150

0.5

0.5

1.309

6

300

0.5

0.5

1.082

330

M. Ortells-Rogero et al.

Table 23.6 Experimental results for the model validation Exp

Vc

fz

ap

Ra measured

Ra pred.

Exp

Vc

fz

ap

Ra measured

Ra pred.

1

225

0.1

1.25

0.24

2

262

0.4

0.5

0.81

0.18

8

225

0.5

0.5

1.36

1.18

0.74

9

300

0.1

2

0.16

3

187

0.2

0.5

0.26

0.16

0.30

10

150

0.5

2

1.31

1.28

4

225

0.5

1

1.29

1.19

11

300

0.5

2

1.08

1.09

5

225

0.1

1.75

0.26

0.18

12

150

0.1

2

0.28

0.20

6

262

0.4

0.5

0.80

0.74

13

150

0.1

0.5

0.22

0.20

7

187

0.2

0.5

0.27

0.30

regressors order listed above, the regression model Ra = 0.191 − 0.0026 V c · f z + 5.14 f 2z is obtained. To validate the obtained model, 13 additional experiments are performed where values are used from different points of the range of study, including different values of axial depth of cut, which can vary in the application from 0.5 to 2 mm. Note that in the experiments shown in Table 23.5, axial depth of cut was set to 0.5 mm, since in the 2 × 3 factorial design the ap factor remains constant under the hypothesis that the effect of this variable is less than that of the factors V c and f z . The results in Table 23.5, show both the measured value and the value estimated by the robust model. The model error, considering the set of experimental validation data, is 11% (Table 23.6).

23.5 Conclusions In spite of the numerous works published on surface roughness modeling regarding chip removal operations, specific ad hoc models are still being developed and no work is known that studies the possibility of developing robust models valid for the majority of cases within a given application. This work has analyzed different models published in international journals and conferences to study the possibility of extracting an empirical and relatively robust model structure for roughness models that can be used in different environments. The model obtained from the study of previous works has been subsequently validated both theoretically, making a comparison between the accuracy of the ad hoc models and the robust model according to the experimental data provided in the previously published works, and experimentally, obtaining a base/robust model with an experimental design consisting of 6 machined parts and validating the model with 13 additional machined parts. The results obtained show that this robust model based on only 6 experiments can be valid for the majority of machining applications analyzed (face milling operations with carbide inserts and indexable milling cutters and using low alloy carbon steels as working material), obtaining a model with 9% average prediction error. The obtained results show that

23 Robustness Analysis of Surface Roughness Models …

331

this study can be a valid starting point for the development of portable robust models in machining contexts. Future work focuses on developing online strategies to adapt these basic models in order to improve their accuracy so that an excessive experimentation is not required and even allowing the use of inspection data for adjustment and adaptation of these models under changing conditions, such as the state of the cutting tool, batch changes on the starting material, etc.

References 1. Abellan-Nebot JV, Bruscas GM, Serrano J, Vila C (2017) Portability study of surface roughness models in milling. Procedia Manuf 13:593–600 2. Abellan-Nebot JV, Mira JS, Bruscas GM, Vila C (2010) Portability problem of empirical surface roughness models. Annal of DAAAM & Proc 1131–1133 3. Bajic D, Jozic S, Podrug S (2010) Design of experiment’s application in the optimization of milling process. J Theory Pract Metal 49:123–126 4. Cakir MC, Ensarioglu C, Demirayak I (2009) Mathematical modeling of surface roughness for evaluating the effects of cutting parameters and coating material. J Mater Process Technol 209:102–109 5. Coromant S (2017) Milling. In: Cutter position. Sandvik Coromant. http://www.sandvik. coromant.com/en-us/knowledge/milling/getting_started/general_guidelines/cutter_position. Accessed 18 May 2018 6. Darwish SM (2000) Impact of the tool material and the cutting parameters on surface roughness of supermet 718 nickel superalloy. J Mater Process Technol 97:10–18 7. Davim JP (2010) Surface integrity in machining. Springer, London 8. De Aguiar MM, Diniz AE, Pederiva R (2013) Correlating surface roughness, tool wear and tool vibration in the milling process of hardened steel using long slender tools. Int J Mach Tools Manuf 68:1–10 9. Desale PS, Jahagirdar RS (2014) Modeling the effect of variable work piece hardness on surface roughness in an end milling using multiple regression and adaptive Neuro fuzzy inference system. Int J Ind Eng Comput 5:265–272 10. Erviti AG (2015) Influencia de Parámetros de Corte en la Rugosidad Superficial en Procesos de Fresado (Thesis), Escuela Técnica Superior de Ingeniería Universidad de Sevilla, Sevilla 11. Franco P, Estrems M, Faura F (2004) Influence of radial and axial runouts on surface roughness in face milling with round insert cutting tools. Int J Mach Tools Manuf 44:1555–1565 12. Franco P, Estrems M, Faura F (2008) A study of back cutting surface finish from tool errors and machine tool deviations during face milling. Int J Mach Tools Manuf 48:112–123 13. Grzesik W (2008) Influence of tool wear on surface roughness in hard turning using differently shaped ceramic tools. Wear 265:327–335 14. Kulkarni SS, Rathi MG (2014) The effect of process parameters on surface roughness in face milling. Int J Eng Sci Invent 3:54–58 15. Moghaddam MA, Kolahan F (2016) Application of orthogonal array technique and particle swarm optimization approach in surface roughness modification when face milling AISI1045 steel parts. J Ind Eng Int 12:199–209 16. Nalbant M, Gökkaya H, Tokta I, Sur G (2009) The experimental investigation of the effects of uncoated, PVD- and CVD-coated cemented carbide inserts and cutting parameters on surface roughness in CNC turning and its prediction using artificial neural networks. Robot ComputIntegr Manuf 25:211–223 17. Özel T, Hsu TK, Zeren E (2005) Effects of cutting edge geometry, workpiece hardness, feed rate and cutting speed on surface roughness and forces in finish turning of hardened AISI H13 steel. Int J Adv Manuf Technol 25:262–269

332

M. Ortells-Rogero et al.

18. Routara BC, Bandyopadhyay A, Sahoo P (2009) Roughness modeling and optimization in CNC end milling using response surface method: effect of workpiece material variation. Int J Adv Manuf Technol 40:1166–1180 19. Sreejith PS (2008) Machining of 6061 aluminium alloy with MQL, dry and flooded lubricant conditions. Mater Lett 62:276–278 20. Stephenson DA, Agapiou JS (2016) Metal cutting theory and practice. CRC Press, Boca Raton (USA) 21. Tammineni L, Yedula HPR (2014) Investigation of influence of milling parameters on surface roughness and flatness. Int J Adv Eng Technol 6(6):2416–2426 22. Yalçin B, Özgür AE, Koru M (2009) The effects of various cooling strategies on surface roughness and tool wear during soft materials milling. Mater Des 30:896–899 23. Yan L, Yuan S, Liu Q (2012) Influence of minimum quantity lubrication parameters on tool wear and surface roughness in milling of forged steel. Chin J Mech Eng 25:419–429

Chapter 24

Forecasting on Additive Manufacturing in Spain: How 3D Printing Would Be in 2030 M. P. Pérez-Pérez, M. A. Sebastián, and E. Gómez

Abstract Additive Manufacturing (AM) is presented as a set of alternative technologies to conventional manufacture. AM can cause profound changes in the industry, the economy, and the society of the future. These changes will affect in different degrees, all aspects that make up the current manufacturing landscape. The Delphi method is a technique that allows collecting the opinion of a group of individuals (experts) using repeated inquiries. This method is used, among other cases, when attempting to have a vision about future events. The main objective of this work is to analyze the responses to a Delphi inquiry addressed to Spanish experts from the professional and university field committed to the manufacturing and engineering sector. The consensual answers to the issues this consultation raises can provide a horizon indicating what changes the Additive Manufacturing will bring to the manufacturing scene. The issues that arise revolve around groups of technologies to be established in the future, to the different business models that will be used, and the standards and regulations that will be required. Keywords Additive manufacturing · 3D printing · Prospection · Forecasting · Delphi

M. P. Pérez-Pérez (B) · M. A. Sebastián (B) ETS de Ingenieros Industriales, Universidad Nacional de Educación a Distancia UNED, Juan del Rosal 12, 28040 Madrid, Spain e-mail: [email protected] M. A. Sebastián e-mail: [email protected] E. Gómez (B) ETS de Ingeniería y Diseño Industrial, UPM (Universidad Politécnica de Madrid), C/Ronda de Valencia 3, 28012 Madrid, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_24

333

334

M. P. Pérez-Pérez et al.

24.1 Introduction Additive Manufacturing (AM) is a process of union of materials to manufacture objects based on data from 3D models, usually layer to layer, as opposed to manufacturing methods that use material removal and forming as it is defined on the ISO/ASTM 52900:2017 standard. Since the 1980s, the AM has been known under different names, such as 3D Printing, Rapid Prototyping (RP), Rapid Manufacturing (RM), Generative Manufacturing, eManufacturing, Constructive Manufacturing, Additive Layer Manufacturing-ALM, Direct Digital Manufacture (DDM), Freeform Fabrication (FFF), Solid Freeform Fabrication (SFF) (ASTM Committee F42 on Additive Manufacturing Technologies, ASTM Committee F42 on Additive Manufacturing Technologies. Subcommittee F42.91 on Terminology 2012 [21]. Manufacturing capacity without using tools, manufacturing customization and the possibility of obtaining parts that are impossible to produce with conventional manufacturing, make AM a manufacturing option with huge future potential. Many authors define the 3D Printing as the new industrial revolution linked to ICT (information and communication technologies), comparable to the industrial revolution of the eighteenth century, because the inclusion of the Additive Manufacturing in manufacturing can lead to important changes in the industry, the economy, and/or the society of the future [4, 15, 16, 20]. These changes can affect in different degrees the various links of the manufacturing chain and the business models [5, 17] or distribution [19]. The way in which parts are designed, the manufacturing method, customer consumption habits or the definition of customer and supplier [6], may be globally affected, as well as employment [14], legal aspects of manufacturing, intellectual property, or what is currently the product warranty [10]. Currently, a specific Additive Manufacturing technology cannot be used with any material or under any condition, but it is clear that the progress of AM is in full development (Jornada IK4). This fact produces enormous expectations in the manufacturing world, because no one really knows how it will develop, how it will affect consumption, how the demand will vary or which these technologies will end up prevailing in the market, in short, what the future of 3D printing will be. The Delphi method consists of eliciting a confirmation from a consultation to experts in a field, with the purpose of obtaining the most reliable consensual opinion of the consulted group [18]. This work uses the Delphi survey as a consulting tool to explore the most likely scenarios that, according to a group of experts, may occur in the next decade. To perform the inquiry, a numerous group of Spanish experts dedicated to the manufacturing and engineering sector, mainly but not exclusively from the university field, has been selected. The issues that arise in this work, despite the brevity imposed by the format, try to cover all relevant aspects of Additive Manufacturing, from markets and supply chains to technologies or key issues to the development or not of 3D Printing [3]. The Delphi survey has been designed in two rounds, so that in the second round experts know the more consensual responses

24 Forecasting on Additive Manufacturing in Spain …

335

reached in the first round, and may, if they wish, analyze their answers in the first round, agreeing or not to the group consensus.

24.2 Objectives The main objective of this study is to be able to conclude which are the main changes that are expected to occur regarding the AM in 2030, and with what probability the Spanish experts believe that these changes will occur, using the Delphi prospecting as a tool. The final purpose is that the conclusions serve as a guidance in the university field so that a vision of the modifications to include in technicians formation can be deduced to ensure their adequate preparation for the expected new industrial revolution. The results of the study and a summary of the conclusions are offered, contributing the scenarios that have obtained greater and smaller consensus in the expert’s answers.

24.3 Methodology The Delphi method is an information gathering technique that allows obtaining the opinion of a group of individuals (experts) using repeated inquiries. This research technique is used, among other occasions, when the aim is to reach a probable scenario of future events. It is also useful when objective information is not available or in situations of uncertainty. The results of the Delphi method are more representative than the judgment of an individual expert because as several experts participate, the possible biases can be avoided. In addition, the various consultation rounds allow experts to reflect on the convergence or divergence of their views with respect to the group and address the issues from different perspectives, so the result is that a more elaborate consensus can be reached compared with a simple survey. The main characteristics of the Delphi Method are, according to [12], the anonymity, the interaction and controlled feedback, the group response and the heterogeneity of the participating experts [12]. The Delphi survey consists of four stages [9], represented in Fig. 24.1.

Problem formulation

Fig. 24.1 Delphi process stages

Experts’ election

Execution of the Delphi prospection

Development of future scenarios

336

M. P. Pérez-Pérez et al.

The year 2030, has been chosen as the time horizon of the study because it is a date that is about 10 years away from the present (2018). Developments and innovations regarding the AM are developing breathtakingly, not only by the speed with which technological advances are implanted in the market (due to globalization and digitization), they are also driven by the great expectation that these technologies are creating in the manufacturing world. In this context, 10 years seems a not-sodistant horizon, but enough to allow the academic changes that must occur to be developed. To achieve a sufficient level of response, all aspects of the questionnaire must help elicit an easy and brief response from the participants. The initial conditions for structure and extension for the elaboration of this inquiry, in form format, were as follows. The questions should be formulated so that no additional explanation is required. The questions should be brief and the wording should be simple. The questionnaire must be prepared so that the answers are given, if possible, with a single action by the participant, without the expert introducing data or values, for example, with a mouse “click”. The assessment of the answers must be qualitative, not quantitative (the transformation of the qualitative results to quantitative values has been done by the research team after the compilation of the individual questionnaires). The number of options which the participant can choose from is defined as even, four, in this case, to allow the expert to define if it is absolutely, occasionally, mostly or totally in favor of the presented future projection. If odd options are presented, the central election which is neutral is too comfortable and easy to select. Overall, an expert should not spend more than 5 min filling the form completely. It must have less than 15 questions. The form was designed with all the above features except the last, since finally, 21 projections were obtained. It was decided to keep all the scenarios because even with this extension, the questionnaire could be completed in less than 5 min. A classic Delphi procedure was maintained, discarding the “Real-time” application introduced by Gordon and Pease in 2006, and used in the latest published studies. The Delphi prospection in real-time, assumes that the survey participants can see the responses from all those who have answered before, so this information may influence their response (even in the first round). In this study, participants have to respond individually, without knowing what the rest of the participants’ answers are. Only when a round is closed, the group data is shared with all the experts who decide if that response of the group influences them or not in the next assessment. As a tool for sending questions and for answers collection, it has been decided to use the free Google tool (Google Forms), not only because it is free, but because after a survey of the 3 most popular tools to carry out digital surveys, it was the only one that allowed sending of data to Excel immediately, and therefore, the only one that allows to freely work the Delphi survey results.

24 Forecasting on Additive Manufacturing in Spain …

337

24.3.1 STAGE 1: Problem Formulation With the intention of including as many relevant aspects related to the Additive Manufacturing as possible and different perspectives [13], the research team collected 67 different scenarios. After an important simplification effort and the reformulation of most of the questions, 21 issues have been defined that revolve around the following major axes: • BLOCK 1: Questions about the market, the business models, the supply chain, new products, services and applications, specific training, certification, warranty, and intellectual property. • BLOCK 2: More specific data regarding what categories of processes will be applied depending on the manufacturing model. • BLOCK 3: Manufacturing technologies that will prevail, key factors for 3D Printing launching (strengths) and fundamental factors to be addressed to allow this launching (weaknesses). These 21 scenarios were tested and adjusted by the research team and are described in detail in Tables 24.1, 24.2, and 24.3.

24.3.2 STAGE 2: Experts Selection A large group of experts working on formation, as well as manufacturing and materials technologies, was selected to perform the inquiry. These experts belong, either to Spanish universities or to institutions and technological centers associated with the mentioned sectors. The consulted specialists represent 31 Spanish centers and universities largely.

24.3.3 STAGE 3: Implementation of the Delphi Survey For the execution of the traditional Delphi questionnaire in two rounds, it was decided to display the form on a single page, less elegant than the presentation “one scenarioone screen” but with the quality of allowing experts to see the extension of the questionnaire from the beginning, assuming that it could help the level of response. In the first block, questions were drafted in such a way that the answer will need a single reply and that the expert could appreciate what was his proximity to the proposed scenario, on a scale of four from “ not at all” (total discordance), up to “completely” (total agreement). In the second block, questions were configured as matrixes because it was intended to assess which Additive Manufacturing process category (according to the definition in standard ISO 017296-2) will adapt more to the two types of manufacturing extremes: homemade (domestic) and hybrid manufacturing. In the third block, the expert is asked to choose 3 options among the

In 2030, more than 50% of products will be manufactured in specialized stores with specialized personnel close to consumers (much like photocopiers in their day)

In 2030, more than 50% of products will be manufactured in factories where AM is included among their manufacturing processes as just another group of technologies

In 2030, more than 70% of prototypes will be manufactured using AM technologies

In 2030, more than 50% of the tools will be manufactured using AM technologies

In 2030, more than 50% of global production will be done using AM technologies

In 2030, lathes will not be used for manufacturing

2

3

4

5

6

7

(continued)

Development of real sectors in the market/changes in manufacturing

In 2030, it will be possible to manufacture parts in more than 50% of homes in Business model: domestic/specialist store/large factory industrialized countries

1

Scope

Block 1



Table 24.1 Block 1 of questions. Delphi scenarios for 2030

338 M. P. Pérez-Pérez et al.

In 2030, there will be a new market niche for customized production runs that can only be manufactured using AM

In 2030, users will obtain the digital formats of the parts to be manufactured from one (or more) databases

In 2030, digital formats will be free of charge and freely available

In 2030, more than 50% of manufacturing will be delocalized or distributed. Production will take place at points close to the consumer and the distribution sector (supply chain) will have changed to service this new kind of manufacturing

In 2030, more than 75% of AM processes and technologies will be classified, its production characteristics documented and standardized

In 2030, specific training and qualification will be needed to produce using AM

In 2030, there will be a procedure for issuing a warranty of unique parts (personalized) manufactured using AM technologies

In 2030, AM will have contributed to the sustainability of manufacturing (manufacturing will be less polluting than at present)

9

10

11

12

13

14

15

16

Sustainability

Degree of AM maturity/qualification of personnel/legal

Supply chain and distribution

In 2030, AM processes will be monitored in real-time. Sensors and production Quality assurance and part inspection control devices will be integrated and widespread in AM

8

Scope

Block 1



Table 24.1 (continued)

24 Forecasting on Additive Manufacturing in Spain … 339

340

M. P. Pérez-Pérez et al.

Table 24.2 Block 2 questions. Delphi scenarios for 2030 nº

Block 2

Scope

17

In 2030, the techniques extensively used for “DOMESTIC Manufacturing” will be:

Process categories

Photopolymerization in tanks or vats Material extrusion Material projection Powder bed fusion Agglutinant projection Localized energy deposition Sheet lamination Other 18

In 2030, the techniques extensively used for “HYBRID Manufacturing” will be

Process categories

Photopolymerization in tanks or vats Material extrusion Material projection Powder bed fusion Agglutinant projection Localized energy deposition Sheet lamination Other

proposals shown or those he chooses to add. A summary of the actions of the Delphi prospection is shown in Fig. 24.2.

24.4 Results 133 replies were obtained in the first consultation round, from 31 different centers and universities. The form was then sent back and 78% of them decided to participate again. Responses from both rounds provide the basis to generate the results of this study as reflected in Tables 24.4, 24.5, and 24.6 of Annex. A clear consensus can be observed in the second answers of the participants as seen in Fig. 24.3, which shows a concentration of replies reflected in the decrease of standard deviation values. The greatest consensus effect between the two rounds of consultations was achieved in homemade scenarios (mostly), in the manufacture of tools with 3D Printing technologies (on occasion) and in the existence of procedures for the issuance of warranty for unique pieces (mostly). Also, a slightly higher agreement is observed in the occasional possibility of using sheet lamination technologies in homemade manufacture and of the predominant use of material extrusion in the hybrid model.

24 Forecasting on Additive Manufacturing in Spain …

341

Table 24.3 Block 3 of questions. Delphi scenarios for 2030 nº Block 3

Scope

19 Choose the 3 technologies that, in your opinion, will prevail over others on Technologies the market in 2030 (i.e. will be most used). Which 3 technologies will outlive the others? Stereolithography (SLA) Fused deposition modeling (FDM) Selective laser sintering (SLS) Selective laser modeling (SLM) Direct metal deposition (DMD) Laminated object manufacturing (LOM) Other 20 Indicate the 3 factors you consider most relevant for AM to “impose itself” on other manufacturing methods in 2030

Strengths

Democratization of manufacturing Freedom of designers. Flexibility for design changes Reduction in product development cycles and time to market Lower tooling costs Shorter production runs. Customized or one-off production Lower raw material costs (less waste) Reduction in transportation and distribution costs and times Reductions in storage: of raw materials and finished products Contribution to environmental sustainability Other 21 Indicate the 3 key factors you think must be resolved for AM to “take off” by Weaknesses 2030 The technical limitation for achieving the required properties in the end product The certification of parts and finished products Changes in the way of thinking when designing parts Industrial property rights, taxation and the safety of the manufactured products The needs for training of AM equipment operators The cost of raw materials, machinery and/or transportation Limitations on the volume and/or speed of manufacture The need for post-processing The integration of AM into current manufacturing methods Other

342

M. P. Pérez-Pérez et al.

Formulation 67 scenarios

Conclusions

Simplification 21 scenarios

Preparation of results

Round 1 21 scenarios

Responses from experts

Responses from experts

Round 2 21 scenarios

Preparation of results

Fig. 24.2 Elaboration and development of the Delphi survey

Fig. 24.3 Standard deviation of answers of the first block of questions and part of the second block

The higher coincidence among experts (Fig. 24.4), is found in the rejection of the possibility of eliminating the lathe as a manufacturing tool by 2030. The intention with this question was to find out if experts consider that in 2030, and given the evolution and development of the AM, there will be a radical change in the manufacturing means with respect to the way in which it is performed today, using the lathe as the most significant example. A majority affirmative answer should lead to reflecting on the radical rather than sequential changes that would be required in student training and in the tools used for such training, with the time profile of 2030. A new market for custom productions will appear when using Additive Manufacturing technologies in manufacturing, as well as a niche for prototype manufacturing, with the advantages provided by: no increased cost, time or effort. The future most likely manufacturing model is the current concept factory, where technologies,

24 Forecasting on Additive Manufacturing in Spain …

343

Fig. 24.4 Results on block 1 of questions (%)

additive or not, will be combined and used by those in charge of manufacturing to produce in the most efficient way possible. In this model, 3D Printing technologies will be used in factories in an integrated manner and will be applied according to the convenience or efficiency of the processes. According to participants, the less likely commercialization model is the specialized store near the consumer. On the other hand, the joint result indicates that extrusion will be the most used processes category in households. More than 50% of the answers value four categories of processes that will be used mostly in factories: extrusion, material projection, powder bed fusion, and localized energy deposition (Fig. 24.5). The 3 prevailing technologies, according to the experts, that will be more used in the market by 2030, are: Fused Deposition Modeling (FDM), Selective Laser Sintering (SLS), and Selective Laser Melting (SLM), as shown in Fig. 24.6. The determining factors that experts have selected for the AM to be placed among the manufacturing methods in 2030, (Fig. 24.6), are: freedom of design and flexibility in design changes (which gives the possibility of manufacturing products with until now impossible geometries); the possibility of custom or unitary production and

Fig. 24.5 Results on block 2 of questions (%)

344

M. P. Pérez-Pérez et al.

Fig. 24.6 Results on block 3 of questions (%)

reduction of batches (it is a singularity of the 3D Printing technologies that does not render the process more expensive—except in the digital part); and reducing the product development cycles and time to market (which offers the possibility of checking the market reaction to product changes making it agile, rapid, and less expensive, representing a great value for developers and promoters). All these factors are related to the product itself and the manufacturing and marketing elements. The technical limitation to achieve the required properties in the final product stands out among the remaining challenges to overcome, the anisotropy, the control of physical properties (tensile strength, dimensional control, surface finish, etc.), chemical, conductivity, etc. They are clearly an essential current limitation so that Additive Manufacturing technologies can be used beyond specific developments or very specific market niches. In addition to improving the control of properties in the product, the experts believe that there is a current limitation in the manufacturing process (manufacturing volume limitation, which is given by the manufacturing machine, the production speed or the combination of both). The latest developments that machines manufacturers have placed in the market try to solve speed limit by placing several heads working simultaneously. Finally, certification and warranty of the finished product is the third major challenge most voted by the experts. It is not worth making the desired piece, with the required specification and at the right speed if the final product cannot be certified.

24.5 Conclusions Additive Manufacturing will be developed and standardized in the next decade as the experts panel suggests. For this, technicians training must be adjusted to the changes that this new form of manufacturing needs. The results obtained are optimistic regarding 3D Printing evolution in the near future, and conclude that manufacturing centers will not vary radically from the

24 Forecasting on Additive Manufacturing in Spain …

345

current ones, given that the most likely business model for this technology will be to integrate itself into the factories among the existing processes. AM will occupy an important place among the manufacturing processes, in particular in prototype and occasionally tool manufacturing, without implying a drastic change in the manufacturing manner, or making other consolidated procedures disappear. However, there will be a trend toward shorter productions, in which the supply chain will have to give a different service. These technologies will cause a new market niche of individual or customized productions to emerge, but it will not be predominant. To do this, legal and technical aspects will have to be developed that permit the issuance of warranties for unique pieces. The certification and warranty of the finished product is one of the three challenges to be solved according to participants. The standardization and certification of unique pieces is a current difficulty still unsolved. Freedom and flexibility in design changes, customization, and the ability to perform short productions are the determinants for AM technologies to emerge definitively. The designs will be acquired mainly from a digital market to which users will have access, but which gratuity is not clear, opening the opportunity for a digital market parallel to the manufacturing itself. The following questions immediately arise: who will control this market? Large corporations or users who design or platforms like Google or Amazon? One of the challenges currently facing in order to be used as a form of standard manufacturing is to control processes simultaneously to the manufacture itself, to ensure manufacturing according to specifications. Results reveal that this integration will mainly take place in 2030, which indicate that the sensor and control field has a great immediate development and it is a great opportunity to focus technicians’ formation. The contribution of these technologies toward a more sustainable manufacture can be a determining factor. Other aspects that have to be solved are the current technical limitation to obtain the required properties in the final product, the manufacturing volume or the production speed limitation (which depend on the manufacturing machine), or both factors combined. Both are technical aspects for which engineering can provide solutions. Each one of the Additive Manufacturing technologies requires specific training, although they all have in common the digital preparation of the design. If in 2030, a formation and specific qualification in a general or complete way is required, as the results reveal, the change in universities should begin as soon as possible. Acknowledgments The present paper has been produced within the scope of the doctoral activities carried out by the lead author at the International Doctorate School of the Spanish Universidad Nacional de Educación a Distancia (EIDUNED). The authors are grateful for the support provided by this institution. The authors of this work would also like to thank the more than 100 experts for their selfless participation in the prospecting process.

Annex 1: Result Tables of the Delphi Prospection See (Tables 24.4, 24.5 and 24.6).

53 6

Mostly

Completely 68

37

Occasionally

Standard deviation in number of responses

5

1

Not at all

Question nº/option

Table 24.4 Results of block 1 questions (%)

74

4

37

47

13

2 2

78

54

32

13

3 9

0

65

58

34

4 8

54

0

23

69

5

57

0

7

62

32

6

52

1

1

21

77

7 0

60

29

61

11

8 2

0

53

6

35

9

56

23

67

10

0

10

68

2

38

50

11

11

67

6

55

35

5

12

57

16

71

12

1

13

65

27

62

10

2

14

49

10

76

14

0

15

74

26

52

20

2

16

346 M. P. Pérez-Pérez et al.

24 Forecasting on Additive Manufacturing in Spain …

347

Table 24.5 Results of block 2 questions (%) “Domestic” manufacturing Process/option nº

1

2

3

4

5

“Hybrid” manufacturing

6

7

1

2

3

4

5

6

7

Not at all

14

2

18

17

13

22

20

11

1

3

5

5

2

18

Occasionally

73

22

61

63

61

62

64

67

24

21

29

62

22

61

Mostly

12

55

20

18

25

15

15

21

68

68

57

28

55

20

1

21

1

2

1

1

0

1

7

8

10

6

21

1

Standard deviation in 55 number of responses

72

65

66

64

64

60

59

56

61

71

66

72

65

Completely

Table 24.6 Result of block 3 questions (%) Results on technologies

Results on AM strengths

Stereolithography (SLA)

6%

Fused deposition modeling (FDM)

29%

Selective laser sintering (SLS)

25%

Selective laser modeling (SLM)

25%

Direct metal deposition (DMD)

12%

Laminated object modeling (LOM)

2%

Others

1%

Democratization of manufacturing

5%

Freedom of designers. Flexibility for design changes

28%

Reduction in product development cycles and time to 21% market Lower tooling costs

5%

Shorter production runs. Customized or one-off production

25%

Lower raw material costs (less waste)

3%

Reduction in transportation and distribution costs and 6% times

Results on AM weaknesses

Reductions in storage: of raw materials and finished products

5%

Contribution to environmental sustainability

2%

Others

0%

The technical limitation for achieving the required properties in the end product

28%

The certification of parts and finished products

19%

Changes in the way of thinking when designing parts 6% Industrial property rights, taxation and the safety of the products manufactured

3%

The needs for training of AM equipment operators

1% (continued)

348

M. P. Pérez-Pérez et al.

Table 24.6 (continued) The cost of raw materials, machinery and/or transportation

7%

Limitations on the volume and/or speed of manufacture

21%

The need for post-processing

4%

The integration of AM into current manufacturing methods

10%

Others

1%

References 1. ISO/ASTM 52900:2017 Fabricación aditiva. General principles Parte 2: Visión general de categorías de procesos y de materias primas 2. ASTM Committee F42 on Additive Manufacturing Technologies, & ASTM Committee F42 on Additive Manufacturing Technologies. Subcommittee F42 (2012) Standard terminology for additive manufacturing technologies. ASTM International 3. Birtchnell T, Urry J (2013) 3D, SF and the future. Futures 50:25–34 4. Dawson F (2014) How disruptive is 3D printing really. Forbes 5. Despeisse M, Ford S, Viljakainen A (2016) Product life extension through additive manufacturing. https://www.researchgate.net/profile/Melanie_Despeisse/publication/289335537_Pro duct_life_extension_through_additive_manufacturing_The_business_model_implications/ links/568b945708ae1975839f58b9.pdf 6. Ford SJ, Mortara L, Minshall TH (2015) The emergence of additive manufacturing: introduction to the special issue. Technol Forecast Soc Chang 102:156–159 7. Google Forms. https://docs.google.com/forms/u/0/ 8. Gordon T, Pease A (2006) RT delphi: an efficient, “round-less” almost real time delphi method. Technol Forecast Soc Chang 73(4):321–333 9. Heiko A, Darkow I (2010) Scenarios for the logistics services industry: a delphi-based analysis for 2025. Int J Prod Econ 127(1):46–59 10. Jiang R, Kleer R, Piller FT (2017) Predicting the future of additive manufacturing: a delphi study on economic and societal implications of 3D printing for 2030. Technol Forecast Soc Chang 117:84–97 11. Jornada IK4 (2017) Impacto de la fabricación aditiva en la industria: Visión tecnológica y últimas tendencias. Sede del CDTI 12. Landeta J (1999) El método Delphi. Una técnica de previsión del futuro. Ariel 13. Mitroff II, Linstone HA (1995) The unbounded mind: breaking the chains of traditional business thinking. Oxford University Press 14. Pérez-Pérez MP, Gómez E, Sebastián MA (2018) Delphi prospection on additive manufacturing in 2030: implications for education and employment in Spain. Materials 11(9):1500 15. Petrick IJ, Simpson TW (2013) 3D printing disrupts manufacturing: how economies of one create new rules of competition. Res-Technol Manag 56(6):12–16 16. Potstada M, Parandian A, Robinson DK, Zybura J (2016) An alignment approach for an industry in the making: DIGINOVA and the case of digital fabrication. Technol Forecast Soc Chang 102:182–192 17. Rayna T, Striukova L (2016) From rapid prototyping to fabrication home: how 3D printing is changing business model innovation. Technol Forecast Soc Chang 102:214–224 18. Reguant M, Torrado M (2016) El método Delphi. REIRE. Revista D’Innovació i Recerca En Educació 9(2):87–102 19. Srai JS, Kumar M, Graham G, Phillips W, Tooze J, Ford S, Tiwari MK (2016) Distributed manufacturing: scope, challenges and opportunities. Int J Prod Res 54(23):6917–6935

24 Forecasting on Additive Manufacturing in Spain …

349

20. Steenhuis H, Pretorius L (2016) Consumer additive manufacturing or 3D printing adoption: an exploratory study. J Manuf Technol Manag 27(7):990–1012 21. Zahera M (2012) La fabricación aditiva, tecnología avanzada para el diseño y desarrollo de productos. In: 16th international congress on project engineering. Valencia, pp 2088–2098

Part IV

Environmental Engineering and Management of Natural Resources

Chapter 25

Analysis of the Key Variables That Affect the Generation of Municipal Solid Waste in the Balearics Islands (2000–2014) C. Estay-Ossandón, A. Mena-Nieto, N. Harsch, and M. Bahamonde-Garcia

Abstract An econometric model based on official data from the Balearic Regional Government is presented. The main objective is to determine, through a multivariate regression analysis, the most significant variables that directly affect the generation of municipal solid waste (MSW) in the study area. A set of environmental and socioeconomic data has been obtained for the period 2000 to 2014, and panel data have been selected to generate and develop the model. Furthermore, the previous scientific literature on MSW management and generation, as well as econometric models, on MSW management are analyzed. As a result, six key variables that have a significant influence on the generation of municipal waste in the Balearic archipelago have been identified. Of these, five have a direct influence, with the resident population and the tourist population having the greatest influence, while the level of education has an inverse influence. The obtained linear model does not present autocorrelation or heteroscedasticity although a slight multicollinearity is omitted since the model’s explanatory capacity reaches 99.8%. Finally, the conclusions obtained are presented and the coefficients associated with the model are interpreted. Keywords Municipal solid waste · Econometric model · Balearic islands

C. Estay-Ossandón Westfälische Wilhelms-Universität Münster, Münster, Germany e-mail: [email protected] A. Mena-Nieto (B) · M. Bahamonde-Garcia School of Engineers, University of Huelva, Huelva, Spain e-mail: [email protected] M. Bahamonde-Garcia e-mail: [email protected] N. Harsch Zentrum für Lehrerbildung, Westfälische Wilhelms-Universität Münster, Münster, Germany e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_25

353

354

C. Estay-Ossandón et al.

25.1 Introduction Improvement in the management of municipal solid waste (MSW) is a key sustainability subject for any territory, most especially for touristic areas like the Balearics. An efficient and sustainable MSW management system is essential for the population’s health, environmental protection, and its conservation [8]. This is why MSW management represents a complex problem that integrates environmental, economic, public, private, and social concepts [15]. The Balearic Islands constitute a good study case in terms of MSW management, since they are among the most important touristic islands in the world according to the global tourist penetration index [16]. The contribution of tourism to the gross domestic product (GDP) reaches over 44.8%. Additionally, like any other European territory, the Balearic Islands must comply with the European Directive 2008/98/CE [6] on MSW, with the National Integrated Waste Plan of Spain [21] and the regional Director Sectoral Plan for the Management of Wastes [18]. In the study area, little research has been made on the improvement and planned management of MSW. Only a few authors have studied the compliance with European objectives regarding energy consumption and carbon dioxide emissions, as well as tourism and environmental pollution [3, 20]. It must be taken into account that a detailed understanding of who the actors are and which are their responsibilities is an important step toward establishing an efficient and effective MSW management system [1]. With regard to the optimal MSW management, Lilai et al. [11], suggest that the exact forecast of MSW generation is key and fundamental for the planning, operation, and optimization of any MSW management system. Other studies conclude that the population variable is the main factor in the generation of MSW [13]. Moreover, Márquez et al. [12], also consider variables specifically related to housing. Other authors corroborate the correlation between MSW generation and education [17]. Finally, others evidence the existence of a positive correlation between MSW generation and GDP [5, 10, 14]. Regarding the application of econometric models for the improvement of MSW management, without pretending to be thorough, the studies of Ali-Abdoli et al. [2], can be mentioned, who developed various models based on a multivariable econometric approach; Ghinea et al. [7] predict the generation of MSW through a prediction and regression analysis tool; Weng et al. [22] focus on the management and generation of MSW and short term projections of MSW generation.

25 Analysis of the Key Variables That Affect the Generation …

355

25.1.1 Objective The main objective of this work is the development of an econometric model that, under a regression analysis, identifies the driving forces or key variables that affect the generation of MSW in the Balearic Islands. This is important in order to improve the planning and decision-making regarding MSW management in the mentioned territory.

25.2 Methodology 25.2.1 Methodology for the Development of a Regression Model and Characteristics of the Study Area The Balearic Islands have a total population of 1.1 million inhabitants and a total area of 4,491 Km2 . However, it currently welcomes over 12 million tourists per year, being the biggest Spanish community that attracts foreign tourism. Recently, tourism has increased mainly because of the improving global economy, travel preferences, and other external factors. This increment also affects the generation of MSW, since it influences the design and dimensions of the collection facilities and treatment methods. In order to develop the model, panel data has been used (2000–2014). The statistical data of both the generation of MSW and socioeconomic data were extracted from official sources of the Balearic Government [9]. The data was used to determine, through a multivariable regression analysis, which were the most significant key variables that directly affect the generation of MSW in the study area. After establishing the main variables, the estimation method of the model’s parameters to be used is the Ordinary Least Squares Method (OLS) with the application of the econometric software, E-views 7. In order to reach this objective, the Backward method was also applied. This method estimates the regression coefficients of the variables using all available variables, later eliminating step by step those of no relevance in order of importance.

25.2.2 Specification and Construction of the Econometric Model The criterion of the OLS estimation method used is to provide estimations for the parameters that minimize the sum of the squares of errors. Operationally, the process is to define a function in terms of the sum of the square of the errors and obtain the calculation formulas of the estimators. Its algebraic expression is the following:

356

C. Estay-Ossandón et al.



ei2 =



Yi − Yˆi

2

=



Yi − βˆ1 − βˆ2 X i

2 (25.1)

according to the OLS principle. min



ei2 = min



Yi − βˆ1 − βˆ2 X i

2 (25.2)

By deriving this expression with respect to βˆ1 and βˆ2 , making it zero, respectively, and solving the normal equations, the estimators of the regression parameters. In order to find the best adjusted model, firstly an initial model was generated that included all variables of the data pool (15 initial variables). Subsequently, the Backward method was applied, which reduced the number of significant variables to 8 independent and 1 dependent variable.

25.2.3 Operative Definition of the Model Variables Endogenous or dependent variable: Generation of Municipal Solid Waste (MSW_gen) = Amount of urban solid waste generated in a specified period in kg/inhabitant/year. Exogenous or independent variables: Resident population (Res_Pop): Includes all inhabitants in a community (inhabitants/year). GDP per capita (GDP_per_cap): Annual average purchasing power of the population (e). Tourist population (Tour_Pop): Stabilized amount of tourists in a year (Inhabitants/year). Market share (Mark_share): Purchasing power or consumption capacity index (e). Industrial index (Indust_index): Annual gross production index (dimensionless). Education level (Educ_level): Index of the amount of people with general basic education, high school, higher education, and university (dimensionless). Economic units (Econ_units): Amount of companies that produce goods and services (dimensionless). Housing units (Hous_units): Amount of occupied housing (dimensionless). Postulate of the functional model form: equation that uses all variables. MSW_gen = f (β0 + β1 ∗ Res_Pop + β2 ∗ GDP_per_cap + β3 ∗ Tour_Pop + β4 ∗ Mark_share + β5 ∗ Indust_index + β6 ∗ Educ_level + β7 ∗ Econ_units + β8 ∗ Hous_units) + μi (25.3) Being μi the random disturbance of all errors.

25 Analysis of the Key Variables That Affect the Generation …

357

25.2.4 Adjusted Model for the MSW Generated According to the Model Output (Table 25.2) After finishing the calibration process with all variables in accordance with the Backward method, the following final model was determined that explains the generation of MSW through socioeconomic variables (Table 25.1): MSW_gen = f (β0 + β1 ∗ Res_Pop + β2 ∗ GDP_per_cap + β3 ∗ Tour_Pop + β6 ∗ Educ_level + β7 ∗ Econ_units + β8 ∗ Hous_units) + μi (25.4) The graphics on Fig. 25.1, corroborate the linear correlation among the selected variables. Down below, the output of the adjusted model with its corresponding statistics and preliminary signs analysis is shown.

25.2.5 Preliminary Analysis of the Value Signs in the Model Output By observing the model output, it can be verified that the estimation was done with 6 explanatory variables leaving 9° of liberty (15 observations − 6 variables = 9). The first analysis will be done on the signs of the variables. The aim is to analyze whether the signs of the parameters associated with each variable have a direct or inverse relation to the explanatory variables and the dependent variable. That is, they will be correct or not depending on the theoretical knowledge possessed on the event throughout the model specification. By analyzing the column Prob (Table 25.2), it can be observed that the six independent variables are significant to a 5%, meaning that all and each one of them have a great explanatory power over the dependent variable (MSW_gen), both individually and jointly. It can be appreciated that the Prob F-Statistic is almost zero, and therefore, there is a high probability of the variables together positively explaining the model (Table 25.3). Additionally, a statistical Durbin-W atson higher than 2 can be observed, which indicates that the model does not present autocorrelation. However, this does not present a limitation for the model to show heteroscedasticity and multicollinearity, which shall be analyzed further on. The parameters offered by the E-Views output on the coefficient column do not allow a precise knowledge of the relative importance of each variable. Therefore, it is not possible to affirm with total certainty which high coefficients identify which variables of major importance and vice versa. This is a result of handling variables expressed in different measurement units.

1.000000

0.991040

0.912500

0.97688

0.907843

0.983550

0.954210

MSW_gen

Res_Pop

GDP_per_cap

Tour_Pop

Edu_level

Econ_units

Hous_units

MSW_gen

0.984263

0.9966371

0.936343

0.987765

0.921525

1.000000

0.991040

Res_Pop

0.958810

0.983550

0.984263

0.997654

1.000000

0.971245

0.932564

GDP_per_cap

Table 25.1 Correlation matrix of key variables according to the model output

0.996754

0.968764

0.993225

1.000000

0.982341

0.967450

0.987989

Tour_Pop

0.939682

0.941122

1.000000

0.906787

0.988500

0.936343

0.907843

Edu_level

0.989819

1.000000

0.941122

0.968874

0.956456

0.998856

0.983550

Econ_units

1.000000

0.989819

0.939682

0.945678

0.912500

0.984263

0.954210

Hous_units

358 C. Estay-Ossandón et al.

25 Analysis of the Key Variables That Affect the Generation …

359

Fig. 25.1 MSW_gen correlation matrix versus independent variables

Table 25.2 Output model E-view (Dependent variable: MSW_gen; Method: OLS; Included observations: 15) Variables

Coefficient

Std. Error

t-Statistics

Prob.

C

3.022979

685.2062

0.004412

0.9965

Res_Pop

0.763623

0.066941

11.40735

0.0000

GDP_per_cap

23384.1

0.010620

12.33721

0.0005

Tour Pop

0.885401

0.022520

7.05211

0.0002

Educjevel

−0.301325

0.103100

−2.922649

0.0100

Econ_units

3.231921

0.822007

3.931742

0.0012

Hous_units

1.615429

0.139850

−11.55112

0.0000

R2

0.998546

Mean dependent var

25357.48

R2 adjusted

0.998182

S.D. dependent var

49175.68

S.E, of regression

2096.785

Akaike info criterion

18.33845

Sum squared resid

70344110

Schwarz criterion

18.58715

Log likelihood

−187.5538

F-statistic

2746.192

Durbin-Watson stat

2.282722

Prob(F-statistic)

0.000000

This leads to the necessity of transforming all initially obtained coefficients and transform them into standardized coefficients. To this end, the standard deviation calculation of both independent and dependent variables shall be obtained, which is given by the table of the model final statistics.

28367.51

7.789.000

218391.0

4.360.000

49175.68

3.167.501

1.264.319

1.164.526

0.115100

532507.2

4.84E+10

Mean

Media

Max.

Min.

Std. deviation

Abs. deviation

Kurtosis

Jarque-Bera

Prob.

Sum

Square Sum. Dif.

M5W_gen

1.39E+10

914416.0

0.102300

1.576.594

1.450.966

3.453.616

83403.31

1.324.100

383303.0

14.200.00

43543.62

Res_Pop

Table 25.3 Final general statistics of the obtained model GDP_per_cap

1.57E+10

67980.00

0.106310

4.482.646

4.712.975

0.825834

1.08221

0.283000

0.479000

0.359000

0.337759

1.92E+10

952875.2

0.520812

1.984.876

1.615.110

5.681.714

96807.12

1.175.004

441550.1

18.105.2

55623.0

Tour_pcp

3.73E+09

17614B.7

0.606318

1.309.363

1.335.446

3.256.629

13651.44

4.670.350

62786.81

3.970.146

8.338.286

Educ_level

1.45E+09

81465.00

0.561342

1.964.111

1.601.308

3.712.446

8500.01

4.500.000

39388.00

1.008.000

3.879.286

Eton_unitis

1.17E+10

232778.0

0.150675

2.184.193

1.679.194

3.853.718

24197.32

5.490.000

113384.0

4.494.000

11084.67

Hous_units

360 C. Estay-Ossandón et al.

25 Analysis of the Key Variables That Affect the Generation …

361

25.2.6 Coefficient Standardization Variables measured in different units have been used and it is not possible to know precisely the relative importance of each. Therefore, standardized coefficients allow determining which explanatory variable has a greater weight explaining the regressor (dependent variable). Now, using the value of standard deviations, the value of standardized parameters can be calculated using the following formulas: Sd (x j ) , where βˆ ∗j represents a standardized parameter. βˆ j is the nonβˆ ∗j = βˆ j Sd(y) standardized value of the same parameter. Sd (x j ) and Sd (y) are the standard deviations of the exogenous variable which parameter is being standardized and of the dependent variable (Table 25.4). 83403.31 sd(Res_ pop) ≈ 0.763623 ∗ = 1.295125 sd(M SW _gen) 49175.68 sd(G D P_ per _cap) 1.08221 = βˆG D P_ per _cap ≈ 23384.1 ∗ sd(M SW _gen) 49175.68 = 0.514450 96807.12 sd(T our _cap) ≈ 0.885401 ∗ = 1.781206 = βˆT our _cap sd(M SW _gen) 49175.68 13651.44 sd(Educ_level) ≈ −0.301325 ∗ = βˆEduc_level sd(M SW _gen) 49175.68 = −0.083634 8500.00 sd(Educ_units) ≈ 3.231921 ∗ = 0.558636 = βˆEduc_units sd(M SW _gen) 49175.68 sd(H ous_units) 24197.32 = βˆ H ous_units ≈ 1.615429 ∗ = 0.794881 sd(M SW _gen) 49175.68

∗ ˆ βˆ Res_ pop = β Res_ pop

βˆG∗ D P_ per _cap

βˆT∗ our _cap ∗ βˆEduc_level

∗ βˆEduc_units

βˆ H∗ ous_units

It can be observed that there is not much difference between coefficients before and after standardization. However, there is a significant difference in the economic units and GDP_per_cap, but that does not affect the model estimation. Table 25.4 Sumary of standardized variables

Variables

Non-standardized coefficients

Standardized coefficients

Res_Pop

0.763623

1.295125

GDP_per_cap

23384.1

1.514450

Tour_Pop

0.885401

1.781206

Educ_level

−0.301325

−0.083634

Econ_units

3.231921

0.558636

Hous_units

1.615429

0.794881

362

C. Estay-Ossandón et al.

In Fig. 25.2, it can be observed that residues distribute normally with a mean of zero, which indicates the absence of autocorrelation. As observed in the column Prob in Fig. 25.3, no significant coefficients can be observed in the residues, that is to say, all are above 5%. This means the model does not present heteroscedasticity. Down below, to confirm the previous result a second test will be used to verify the mentioned heteroscedasticity. It can be observed that both p-values of F are higher than 0.05. This leads to accept that the model is homoscedastic and presents no heteroscedasticity. Next, the normality of residues will be analyzed (Fig. 25.4).

Fig. 25.2 Graphic residual analysis to detect autocorrelation

Fig. 25.3 Correlogram to detect heterocedasticity (Sample: 1 12; Included observations: 15)

25 Analysis of the Key Variables That Affect the Generation … 14

Series: Residuals Sample 1 21 Observations 21

12 10 8 6 4 2 0 -4000

363

-2000

0

2000

Mean Median Maximum Minimum Std. Dev. Skewness Kurtosis

-4.14e-12 689.2322 3801.481 -3535.678 1875.421 -0.425762 2.722452

Jarque-Bera Probability

0.701861 0.704033

4000

Fig. 25.4 Error or normality measurements of residues

It can be observed that the statistic Jarque-Bera is significant in 70%, which means errors are distributed in a normal way.

25.2.7 Multicollinearity Detection Subsequently, to control the presence of multicollinearity which generally is present in cross-sectional data, a multicollinearity analysis must be performed. To this end, several steps have to be followed: in the first place, an equation is generated which involve all variables independent of each other. In order to detect and to see the degree of multicollinearity, the Condition Number (CN) which is a very well accepted measure in these cases, has to be calculated first. This number is generally considered as problematic if it reaches a value over 30. Another way to detect it is through the Variance Inflation Factor (VIF) and the Tolerance Index (TI) calculation. These calculations allow observing which variables are involved in the collinearity problem and in which degree. Viewing the eigenvalue “ev” (eigenvalue is obtained through an algebraic sequence used on E-view), it can be observed the maximum and minimum ev’s, where CN will be given by CN = (ev max/ev min)1/2 The ev obtained with each variable on this study are the following: ev1 0.002275 ev2 0.013451 ev3 0.062467

364

C. Estay-Ossandón et al.

Table 25.5 Application of the White Test to detect heterocedasticity

F-statistic

1.528394

Prob.F (6,11)

0.244764

Obs*R-squared

10.59845

Prob. Chi-Square

0.225506

ev4 0.743268 ev5 2.098122 ev6 0.011097 Substituting in the formula: CN = (ev max/ev min)1/2 CN = (2.098122/0.002275)1/2 CN = (922.251429)1/2 CN = 30.3685928 A slight multicollinearity in the independent variables can be seen, since they are over 30, but literature recommends for this case either to apply mountain range regression that would diminish collinearity problems or, simply, to disregard the multicollinearity in the model, since the variables possess a great capacity to explain the dependent variable, with high individual and joint significance. And so, the Prob (F-statistic on Table 25.2), is almost null and the goodness of fit of the model is of a coefficient of correlation R2 = 0.99. The following table shows the results of the FIV and IT application, obtained by doing the regression between the explanatory variables. Another effective way to detect which variables are responsible for the presence of collinearity (Table 25.5). According to the Tolerance Index (TI) values on Table 25.6, the variables that presumably present multicollinearity are: Res_Pop, Tour_Pop, Econ_units and Hous_units, since they all have values far from the unit and their VIF associated are all higher than 10. On the other hand, with Tolerance Indexes closer to the unit and with VIF < 10, the variables Educ_level, GDP_per_cap do not present collinearity problems. Table 25.6 Results of applying TI and VIF Regressor

R2

Tolerance index TI = 1-R2

Variance Inflation Factor Statistical VIF = 1/(1-R2 ) durbin-watson

Res_Pop

0.99

0.01

100

2.2

GDP_per_cap

0.92

0.11

9.05

2.1

Tour_Pop

0.98

0.02

100

2.2

Educ_level

0.89

0.11

9.01

2.2

Econ_units

0.99

0.01

100

1.8

Hous_units

0.98

0.02

50

2.0

25 Analysis of the Key Variables That Affect the Generation …

365

25.3 Result Analysis In order to analyze the results of the final model obtained, the verification will be performed of each assumption that an econometric model must meet: – Linearity of parameters: as observed in the final adjusted model approach the model presents total linearity of the parameters. – Adequate functional form: according to the literature, the best way to estimate the generation of residues is through a linear model as observed in the research of Buenrostro, Bocco, and Vence [4] and Rodríguez-Salinas [19]. – Adequate specification: thanks to the calibration process and the theoretical bases given by the literature, it can be assured that the proposed model includes all the most relevant variables as the base objective of this investigation. – The probability of the While statistic showed absence of heteroscedasticity and rejects the null hypothesis of homoscedasticity. – The Durwin Watson statistic shows there is no autocorrelation in the model. – The value p associated with the Jarque-Bera statistic indicates that noise has a normal distribution, and therefore, the interference in the model is valid. Therefore, after putting together the proper background obtained with the contrasts performed and the theoretical, statistical, and economic justifications and knowing all confounding variables are left in standby, the final equation of the model can be presented.

25.3.1 Results of the Obtained Final Model Substituting by the standardized coefficients of the variables in the model (Table 25.4), the equation obtained is MSW_gen = f (β0 + β1 ∗ 1.295125 + β2 ∗ 1.514450 + β3 ∗ 1.781206 + β6 ∗ − 0.083634 + β7 ∗ 0.558636 + β8 ∗ 0.794881) + μi (25.5)

25.3.2 Interpretation of Coefficients (Key Variables) Res_Pop: Ceteris paribus, an additional inhabitant unit causes an increase of 1.29 ton/year of residues generation in the Balearics. GDP_per_cap: Ceteris paribus, a 1% additional increase in the GDP per capita (associated consumption level) generates an increase of 1.51 ton/year of generated residues.

366

C. Estay-Ossandón et al.

Tour_Pop: Ceteris paribus, an additional unit of tourists causes an increase of 1.78 ton/year of generated residues. Educ_level: Ceteris paribus, an extra year of education or schooling results in a decrease of 0.083 ton/year of generated residues. Econ_units: Ceteris paribus, an additional economic unit (Companies) causes an increment of 0.55 ton/year of generated residues. Hous_units: Ceteris paribus, an additional housing unit causes an increase of 0.79 ton/year of generated residues. The values missing of the sub-indices (β 4 , β 5 ) correspond to the variables of nonsignificant influence that affect the generation of MSW. Therefore, they are not included in the equation. Additionally, it should be noted that the three main key factors (according to the theory) that influence the generation of MSW are: Resident population, Tourist population, and Education.

25.4 Conclusions A model has been obtained in which 6 variables proved to be significant, both jointly and individually. That is to say, that the explanatory capacity of these independent variables over the dependent one (MSW_gen) is 99,8% (R2 ). The regression analysis determined that the 6 key variables that directly affect the generation of MSW in the Balearics are, by order of significance: Res_Pop, Tour_Pop, Hous_units, GDP_per_cap, Econ_units, and Educ_level. Although after applying the Backward method, some nonsignificant variables to the model were left out, this does not mean they are not significant in areas different to the one on this study, since they might present a direct or inverse influence on the generation of MSW, like consumption level, employment level, and literacy rate among others. The goodness-of-fit statistics shown in Table 25.2, are very good for the model, which can be interpreted as the model being a good explanatory instrument of the generation of residues for the study area.

References 1. Abarca L, Mass G, Hogland W (2012) Solid waste management challenges for cities in developing countries. Waste Manag Res 33:220–232 (Netherlands). https://doi.org/10.1016/j.was man.2012.09.008 2. Ali-Abdoli M, Falahnezhad M, Behboudian S (2011) Multivariate econometric approach for solid waste generation modeling: a case study of Mashhad, Iran. Environ Eng Sci 1–7. https:// doi.org/10.1089/ees.2010.0234 3. Bakhat M, Roselló J (2011) Estimation of tourism-induced electricity consumption: The case study of Balearics Islands, Spain. Energy Econ 33:437–444. https://doi.org/10.1016/j.eneco. 2010.12.009

25 Analysis of the Key Variables That Affect the Generation …

367

4. Buenrostro O, Bocco G, Vence J (2001) Forecasting generation of urban solid waste in developing countries. A case study in Mexico. J Air Waste Manag Assoc 51(1):86–93. https://doi. org/10.1080/10473289.2001.10464258 5. Dangi MB, Pretz CR, Urynowicz MA et al (2011) Municipal solid waste generation in Kathmandu, Nepal. J Environ Manag 91(1):240–249 6. European Directive on Urban solid Waste (2008/98/EC) DOUE n. 312, 3–30. https://www.boe. es/buscar/doc.php?id=DOUE-L-2008-82319. Accessed 30 Dec 15 7. Ghinea C, Dragoi E, Comanita E et al (2016) Forecasting MSW generation using prognostic tool and regression analysis. J Environ Manag 182:80–93. https://doi.org/10.1016/j.jenvman. 2016.07.026 8. Gonzalez-Martínez T, Bräutigan K-R, Seifert H (2012) The potential of a sustainable municipal waste management system for Santiago de Chile, including energy production from waste. Energy Sustain Soc 2(24):1–14 9. IBESTAT (Statistical Institute of the Balearics Islands) (2014). http://ibestat.caib.es/ibestat/est adistiques/entorn-fisic/residus/a2a39da8-46d6-4191-a1e3-41a21061b5ed 10. Johnstone N, Labonne J (2004) Generation of household solid waste in OECD countries: an empirical analysis using macroeconomic data. Land Econ 80(4):529–538. https://doi.org/10. 1177/1070496507312575 11. Lilai X, Peiqing G, Shenghui, C et al (2012) A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China. Waste Manag 33:1324–1331. https:// doi.org/10.1016/j.wasman.2013.02.012 12. Marquez M, Hidalgo H, Ojeda S (2008) Identification of behaviour patterns in household solid waste generation in Mexicali‘s city: study case. Resour Conserv Recycl 52(11):1299–1306 13. Mazzanti M, Zoboli R (2008) Waste generation, waste disposal and policy effectiveness. Evidence on decoupling from the European Union. Resour Conserv Recycl 52(10):1221–1234 14. Mazzanti M, Zoboli R (2009) Municipal Waste Kuznets curves: evidence on socio-economics drivers and policy effectiveness from EU. Environ Resour Econ 44:203. https://doi.org/10. 1007/s10640-009-9280-x 15. Mingua Z, Xiumin F, Rovetta A et al (2009) Municipal solid waste management in Pudong new area, China. J Waste Manag 29:1227–1233 16. McElroy JL (2003) Tourism development in small islands across the world. Geogr Annal 85B(4):231–242 17. Ojeda-Benıtez S, Lozano-Olvera G, Adalberto Morelos R et al (2008) Mathematical modeling to predict residential solid waste generation. Waste Manag Res 28:S7 18. Regional Plan for Waste Management in Mallorca (2006–2013). https://www.tirme.com/ct/upl oad/file_aj30_06_11_11_11_59.pdf 19. Rodríguez-Salinas M (2004) Diseño de un modelo matemático de la generación de residuos sólidos municipales en Nicolás Romero, Mexico. http://www.bvsde.paho.org/bvsacd/cd48/ modelo.pdf 20. Saenz-de-Miera O, Roselló J (2014) Modeling tourism impacts on air pollution: the case study of PM10 in Mallorca. Tour Manag 40:273–281. https://doi.org/10.1016/j.tourman.2013.06.012 21. National Integrated Waste Plan of Spain (2008–2015) Quantitative goals of the plan. http:// www.cepco.es/Uploads/docs/ISA_PNIR_26_11_2007.pdf. Accessed Octo 2016 22. Weng Y-C, Fujiwara T, Matsouka Y (2009) Municipal solid waste management and short-term projection of the waste discard levels in Taiwan. J Mater Cycles Waste Manag 11(2):110–122

Part V

Energy Efficiency and Renewable Energies

Chapter 26

Paradoxes Between Energy Labeling and Efficiency in Buildings M. Macarulla and L. Canals Casals

Abstract Buildings are responsible for 40% of the energy consumption and around 36% of CO2 emissions in the European Union. Administrations are funding projects enhancing energy efficiency to achieve a zero-emission building. As a consequence, many companies perceive the business potential. Several public buildings are analyzed by comparing their energy rating labels against their real energy consumption. Results show that highly rated buildings are also the ones with higher consumption. This study analyzes and discusses the factors that end up in this paradox. Results show that new buildings consume more energy, which is caused by higher interiorcomfort standards. Moreover, control systems that should warrant optimal functionality of the building consume non-negligible amounts of energy. Finally, technical exigencies on buildings have increased in recent years. In conclusion, project management should incorporate the structure, control, security, and comfort systems together in the design phase. Moreover, this phase should also include the operating definition of the building during the day and throughout seasons of the year. Keywords Efficiency · Energy label · Buildings

26.1 Introduction Nature is in constant pursuit of improving the efficiency in almost all its evolutionary phases and activities. Similarly, humans aim to obtain the best results with minimum effort in all their activities. The use of technology and the availability of affordable energy power sources enormously accelerated our advances and increased M. Macarulla (B) · L. Canals Casals Dpto. de Ingeniería de Proyectos. Escuela ETSEIAT, Universitat Politècnica de Catalunya, C\Colom 11, 08222 Terrassa, Spain e-mail: [email protected] L. Canals Casals (B) Energy Systems Analytics Group. IREC-Institut de Recerca En Energia de Catalunya, C\Jardins de les dones de negre 1, 08930 Sant Adrià de Besòs, Barcelona, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_26

371

372

M. Macarulla and L. Canals Casals

the efficiency and impact of our activities during the last centuries. Paradoxically, the astonishing improvement of efficiency also came with an incredible increment of energy consumption never seen before, as Jevons suggested in 1866 [3], and consequently, the environmental impact grew accordingly. These dynamics did not change that much in the last decades. However, due to climate change, the growing consciousness of the environmental impact of human activities and the energy dependency of European countries [2], EU directives, administrations and governments insist in the design and implementation of strategies to reduce the energy consumption and the environmental impact as well. In fact, buildings are responsible for 40% of the energy consumption and 36% of the greenhouse gas (GHG) emissions in the EU [4]. Buildings consume energy all along their life cycle, from their construction until their dismantling, with the use phase having the highest consumption, between 80 and 90% of the total consumption [1, 5, 6, 8]. EU directives force EU countries to develop laws and regulations to reduce the buildings’ energy consumption and emissions. In addition, the EU enhances the research and development of technology and tools to improve efficiency using international calls for innovative projects. Many of these calls focus, again, on the increase of energy efficiency of buildings, in the pursuit of the topic zero-emission building. Moreover, these projects require the cooperation of research institutions with companies, showing the potential businesses that energy efficiency on buildings brings. Good examples of this cooperation are the two projects within the framework of this study: EnerGaWare and REFER.

26.2 Objective Knowing that Jevons perceived more than one hundred years ago that the improvement of efficiency does not implicitly entail a reduction of the global consumption, this study analyzes several public buildings and compares the results of the energetic certification label with their measured energy consumption. The main goal of this study is to determine if the qualification of the energetic label of buildings is correlated with the measured building energy consumption. Moreover, this study analyzes the possible causes of this paradox and how project management could improve these results.

26.3 Methodology This study compares the energy label of 13 public buildings from the Universitat Politècnica de Catalunya (UPC) against the data obtained from the buildings’ monitoring system.

26 Paradoxes Between Energy Labeling and Efficiency in Buildings

373

The analyzed buildings are dedicated to academic purposes. The uses of those buildings are divided into classrooms, laboratories, libraries, offices, and computational centers. The selected buildings may have one or more of the aforementioned uses. Table 26.1 shows the specific use of each of them. Energy labels were obtained during the years 2016 and 2017, using the software CEXv2.3. The estimated consumption of primary energy, GHG emissions, and heating demand was extracted from each energy label. The energy demand from the cooling systems was not considered because there was no access to monitored data related to the cooling of the studied buildings. The monitored energy consumption of the studied buildings was obtained using a platform installed specifically to monitor the building’s energy consumption. This platform is called SIRENA and it is part of an initiative by the UPC with the objective of knowing and understanding the UPC buildings’ energy consumption. The final aim is to develop strategies to reduce the buildings’ energy consumption. The energy labeling software used, provides the primary global energy consumption of the building. Thus, to correctly compare the consumption values between the label and the monitoring system, the average consumption of each building during the years 2013–2016 was taken, transforming this value to primary energy. To do so, the monitoring data should incorporate the energy losses caused by the transmission and distribution of electricity. In this study, the energy losses through the Spanish grid were assumed to be around 8.7% [9]. Table 26.1 Analyzed buildings, year of construction, uses, heating system, and surface area Building

Year

Uses

Heating

Surface (m2 )

R Rectorat

1920

Offices

Electricity

VX. Vèrtex

1999

Offices

Electricity & Gas

16,025

2,635

TR1-2-3

1901

Classrooms, offices and labs

Gas

12,803

TR4-45

1960

Classrooms, offices, labs and computational center

Electricity & Gas

TR5

1960

Classrooms, offices and labs

Gas

9,282

TR7

1960

Offices and laboratories

Electricity

2,077

TR8

1992

Classrooms, offices and labs

Gas

5,333

TR9

1994

Offices and library

Electricity

1,617

TR10

1960

Offices and computational center

Gas

1,676

TR11

1995

Classrooms, offices and labs

Gas

2,332

TR12

2002

Offices and laboratories

Electricity

2,418

TR14

2007

Offices and laboratories

Gas

4,947

TR30

1955

Offices and laboratories

Gas

965

9,733

374

M. Macarulla and L. Canals Casals

26.4 Results Figure 26.1 shows the evolution of the annual energy consumption of the studied buildings. There are three types of tendencies: buildings that increase their consumption considerably, buildings with a slight increment in the energy consumption, and buildings that keep a constant energy consumption or show a soft decrease. It is important to note that the energy consumption of TR14 increases considerably. This can be explained by the fact that this building was built in two phases, increasing its occupation rate in 2015. On the other hand, an increment of the opening hours explains the slight increase in the energy consumption of other buildings, such as TR9. In fact, this building is a library that expanded the opening hours to users since 2013. Finally, the majority of buildings keep a constant consumption or show a soft reduction during the first year. Therefore, taking the average consumption of these four years seems to be a coherent decision to start with the comparison. Table 26.2 shows the distribution of the energy label per building. The label obtained during the certification process indicates the characteristics of the building in relation to a reference building, considered to be the average of its class in the region. None of the 13 buildings analyzed was classified with the highest label (A); only four of them are below the average, nine are considered to consume a little more than the average, and none of them obtained the worse labels (F or G). Notice that, except the TR5 building, the other 3 buildings with a B or C label are recent constructions (less than 20 years old). Considering the GHG emissions per square meter (right column in Table 26.2), the labels show a better tendency, indicating that six buildings show better performance than the reference building and the rest of

Consumption (kWh/m2)

160

120

80

40

0 2013 R Rectorat TR7 TR12

2014 VX. Vèrtex TR8 TR14

years

TR1-2-3 TR9 TR30

2015 TR4-45 TR10

Fig. 26.1 Consumption of the analyzed buildings in the last four years

2016 TR5 TR11

26 Paradoxes Between Energy Labeling and Efficiency in Buildings

375

Table 26.2 Energy label and number of the analyzed buildings with this qualification according to the energy consumption (kWh/m2 ) or emissions (kgCO2 /m2 ) Nº (kWh/m2 )

Nº (kgCO2 /m2 )

Label

Obtained label value against a reference building

A

Label value < 40%

0

0

B

40% < Label value < 65%

2

2

C

65% < Label value < 100%

2

4

D

100% < Label value < 130%

7

7

E

130% < Label value < 160%

2

0

F

160% < Label value < 200%

0

0

G

200% < Label value

0

0

Estimated consumption from the certification (kWh/m2)

them are close to the reference building, that is, within the mean of the buildings of its kind. Figure 26.2 compares the estimated value obtained from the energy label software against the monitored energy consumption of the studied buildings including the energy losses from the transportation and distribution of electricity. Notice that the energy consumption estimations from the certification are significantly higher than the monitored energy consumption of the building. There is only one building (TR14) where the estimation and the monitoring results are similar. In fact, TR14 is the newest building, built in 2007, following the CTE norms. Results shown in Fig. 26.2, demonstrate that the labeling software used in this research tends to overestimate the global energy consumption of buildings. Only the buildings following the most updated norms show results close to the estimations.

400 300 200 100 0 0

100

200

300

400

Monitored energy consumption (kWh/m2) Fig. 26.2 Monitored energy consumption (considering the losses from transportation and distribution) against the estimated consumption indicated by the energy label

376

M. Macarulla and L. Canals Casals

In conclusion, Fig. 26.2, shows that the accuracy of the energy consumption estimation obtained from the building labeling software is very low. However, this was an expected result, as labeling does not take the building energy use into account. For example, a building having a computational center has higher energy consumption than an office building. In fact, the total energy consumption of a computational building can be distributed as follows: 40% caused by the server, 5% from communications, 10% from the energy power sources, 5% from storage systems, and another 40% caused by the cooling system [7]. In such cases, the cooling system is used to bring comfort to users and to keep servers under controlled temperature levels. In such buildings, the labeling process does not estimate the cooling demand and the global consumption correctly. This effect is clearly appreciable in Fig. 26.3, where the consumption of the TR4-45 building is presented in two circles; the solid circle includes the consumption of the computational center and the dotted circle shows the consumption of the building without the computational center, which is half of the global consumption of the building. A similar situation occurs with the TR10 building, although, in this case, it was impossible to determine the consumption of the computational center. Not considering buildings that have a computational center, Fig. 26.3, shows how the building with higher consumption (TR14) is the one with better labeling (the smaller dimension of the circle indicates that the energy label letter obtained is better). In fact, TR14 has the best equipment for heating, cooling, and ventilation, but it is also the one with more sensors and electronic devices to manage the entire building. Another important fact is that new buildings should fulfill higher air quality requirements and comfort standards than older buildings. As an example,

Real consumption (kWh/m2)

250 TR10

200 150 100

TR4-45 50 0 1890

1910

1930

1950 Year

1970

1990

2010

Fig. 26.3 Monitored energy consumption of buildings against the year of construction and labeling (size and color of circles)

26 Paradoxes Between Energy Labeling and Efficiency in Buildings

377

older buildings have natural ventilation, that is, opening and closing windows, and in many cases only a few rooms count on cooling systems. In this context, it is normal that newer buildings may have higher energy consumption than older ones, bringing us back to the Jevon’s paradox: while increasing the efficiency, the instant consumption is reduced but the use of the model entails an increment of the global energy consumption. In fact, Fig. 26.4, seems to indicate that there is no relevant difference in the real average energy consumption of buildings independently of the label obtained. On the contrary, the average estimated energy consumption resulting from the labeling process increases gradually in correspondence with the label obtained. Figure 26.4 endorses the statement that building labeling overestimates the energy consumption for this type of building. Regarding the heating systems, Fig. 26.5, shows how the estimation from labeling software is closer to the real energy consumption read by the monitoring equipment. In particular, there are two buildings in Fig. 26.5, (identified by empty circles) that have major dispersions in comparison to the others. In these two cases, the heating system uses two energy power sources, gas and electricity, but the monitoring system for electricity was unable to distinguish the consumption of the heating system from the general electricity consumption of the building. Thus, Fig. 26.5, presents only the monitored consumption of gas. Contrary to what Fig. 26.3 presented, where no reduction of consumption could be attributed to a better label qualification, Fig. 26.6, shows how buildings with better label qualifications do consume less energy. In Fig. 26.6, buildings identified with dashed circles correspond to the same two buildings having electricity and gas to run the heating system identified in Fig. 26.5. This is the reason why these two buildings present such a low monitored heating gas consumption.

Consumption (kWh/m2)

300 250 200 150 100 50 0 B

C

D

E

Fig. 26.4 Comparison between the average energy consumption of buildings estimated by the labeling process (orange columns) and data obtained from monitoring (blue columns)

Heating energy demand estimated by the labeling process (kWh/m2)

378

M. Macarulla and L. Canals Casals

200 150 100 50 0 0

50 100 150 Monitored gas consumption (kWh/m2)

200

Fig. 26.5 Monitored heating gas consumption in relation to the estimated labeling energy demand

Monitored gas consumption (kWh/m2)

250 200 150 100 50 0 1890

1910

1930

1950 Year

1970

1990

2010

Fig. 26.6 Monitored consumption of gas for each building in relation to the year of construction and the label qualification (color and dimension of the circle define the label qualification)

For the construction of new buildings, project designers generally define the building and its installation. In a few cases, they participate in the construction but they are almost never in charge or involved in the use phase when the effective management of the building is important. This results in the installation of very efficient systems that do not operate in coordination with the uses of the building or that they do not operate in their best performing operational range. In fact, many of the

26 Paradoxes Between Energy Labeling and Efficiency in Buildings

379

operational problems are unappreciated by designers and the error is repeated over and over. For this reason, it is convenient that building designers be involved and participate in the operational and maintenance tasks of the buildings they design.

26.5 Conclusions This study analyzes 13 public buildings from the UPC comparing the energy label and energy consumption estimations against their monitored energy consumption data. Results show that the global primary energy consumption estimated by the labeling software is substantially different from the energy consumption presented by the monitoring equipment. This deviation is caused by the fact that the energy label does not consider the specific uses of buildings. In this sense, labeling should not be used to perform global regional energy consumption estimations. Moreover, results show that the average monitored energy consumption of buildings is constant, independently of the label obtained. In some cases, buildings with a higher qualification are the ones consuming more energy. In fact, although having very efficient systems, these buildings also have lots of sensors and electronic devices to manage and control the installations. Moreover, newer buildings are subjected to higher requirements from updated regulations, such as the air quality and comfort in the interior of buildings. On the other hand, the heating energy demand estimated by the labeling software is closer to the monitored energy consumption of buildings. Moreover, in all cases, buildings with better labels do have lower real energy consumption. Acknowledgments Authors appreciate the contribution of the UPC and the REFER project (COMRDI15-1-0036) from ACCIÓ with European Regional Development Fund (ERDF).

References 1. Atmaca A, Atmaca N (2015) ‘Life cycle energy (LCEA) and carbon dioxide emissions (LCCO2A) assessment of two residential buildings in Gaziantep, Turkey. Energy and Build 102:417–431. https://doi.org/10.1016/j.enbuild.2015.06.008 2. Iten M, Liu S, Shukla A (2016) A review on the air-PCM-TES application for free cooling and heating in the buildings. Renew Sustain Energy Rev 61:175–186. https://doi.org/10.1016/j.rser. 2016.03.007 3. Jevons WS (1866) The coal question; an inquiry concerning the progress of a nation and the probable exhaustion of our coal mines. Macmillan and Co. https://doi.org/10.1093/library/s5XVII.3.238 4. Pérez-Lombard L, Ortiz J, Pout C (2008) A review on buildings energy consumption information. Energy Build 40(3):394–398. https://doi.org/10.1016/j.enbuild.2007.03.007 5. Praseeda KI, Reddy BVV, Mani M (2016) Embodied and operational energy of urban residential buildings in India. Energy Build 110:211–219. https://doi.org/10.1016/j.enbuild.2015.09.072

380

M. Macarulla and L. Canals Casals

6. Ramesh T, Prakash R, Shukla KK (2010) Life cycle energy analysis of buildings: an overview. Energy Build 42(10):1592–1600. https://doi.org/10.1016/j.enbuild.2010.05.007 7. Rong H et al (2016) Optimizing energy consumption for data centers. Renew Sustain Energy Rev 58:674–691. https://doi.org/10.1016/j.rser.2015.12.283 8. Whitehead B et al (2015) Assessing the environmental impact of data centres part 2: building environmental assessment methods and life cycle assessment. Build Environ 93:395–405. https:// doi.org/10.1016/j.buildenv.2014.08.015 9. Worldbank (2014) Electric power transmission and distribution losses. http://data.worldbank. org/indicator/EG.ELC.LOSS.ZS. Accessed: 15 Jan 2016

Chapter 27

Spanish Winery Interest in Energy Efficiency Based on Renewables N. García-Casarejos, P. Gargallo, and J. Carroquino

Abstract The expanding agri-food industry is highly interested in product and process innovation and acutely aware of climate change’s impact on the sector; however, it has still not seen an energy transformation that could lead to huge savings. These savings could come from energy efficiency and from the total or partial replacement of traditional fuels by renewable technology. Saving energy and using it more efficiently will enable companies in the sector to reduce their greenhouse gas emissions, generate wealth, and improve their competitiveness. In this industry, wineries are more innovative than the rest of the sector and they can serve as a model of how adopting energy efficiency and renewable use measures on a small scale can be profitable. The purpose of this study is to discover how willing Spanish wineries are to implement renewable energies in their vineyards and the measures they have taken or plan to take to improve their energy efficiency. Keywords Energy efficiency · Renewable energy · Wine sector · Survey · Sustainability

27.1 Introduction Some companies have followed a certain course of action in recent years that has led to environmental catastrophes, a deterioration of the ecosystem, a loss of biodiversity, and so on, which are contributing to climate change and global warming. These negative effects are managing to change the attitude of companies most committed to sustainability and they are now actively and voluntarily involved in the social, economic and environmental improvement of their surrounding area. N. García-Casarejos (B) · P. Gargallo Facultad de Economía y Empresa. Universidad de Zaragoza, Dirección Gran Vía, 2, 50005 Zaragoza, Spain e-mail: [email protected] J. Carroquino Intergia Energía Sostenible S.L, Avenida de Cataluña, 50014 Zaragoza, Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_27

381

382

N. García-Casarejos et al.

Agribusiness is one of the industries that can best help society achieve socioeconomic development and progress, since it is the economic sector with the highest business volume in the European Union (EU), e1.24 trillion per year, which represents 15% of the EU’s total production, and it directly employs over 4.2 million people [2]. This evidences its economic dynamism, as it is an expanding sector—concerned with product innovation (IVIE, Valencian Institute of Economic Research [4] and process innovation—and is acutely aware of climate change’s potential impact on the industry. Agribusiness is also one of the main energy consumers. In fact, in 2013, its consumption represented 26% of the EU’s total, and 28% of this consumption occurred directly in the industrial process [3]. This means that 7.3% of all the energy consumed in the EU is for the manufacture of food and drink. Furthermore, as Spain is one of the five European countries with the largest food and drink industry (the others are Germany, France, Italy, and the United Kingdom), it is also one of the countries that consumes most energy. The data in Spain, show that agribusiness has consolidated its position as the country’s top industrial sector, with almost half a million direct employees, representing 21% of the manufacturing industry in the country. These figures reveal that Spain has strengthened its position as the fourth economy in the EU in gross production (valued at e95,000 million) [4]. Consequently, anything relating to energy consumption that is valid for the EU overall will also be valid for the Spanish case, where the implementation of renewable energy would be associated with lowering costs and environmental improvements. This circumstance is reinforced by the gradual depletion of fossil fuel resources, the strong impact caused by overhead cables in rural areas (landscape, environmental degradation, and so on) and the continuing reduction in the costs of generating renewable energy, which make this option more competitive. The starting hypothesis is that the agri-food industry (agribusiness plus primary sector) is highly receptive to implementing renewable energies, since this not only favors the environment but also the essence of the business. Green, ecological, and organic aspects tie in with consumers’ tastes and demands. They also have a positive impact on companies offering these aspects by improving their image and reputation and creating a competitive edge, which is a source of differentiation [1], that helps them maintain or increase their profit margin. Within the agri-food industry, the particular characteristics of wineries and their vineyards make them more innovative than the rest of the sector, and therefore, they can serve as a model of how the use of renewables on a small scale can be profitable. The purpose of this study is to learn the penetration of renewable energies in the Spanish wine sector in both wineries and their vineyards. The aim of conducting surveys was to learn how willing Spanish wine companies are to implement renewable energies in their wineries and vineyards and to characterize the sector into geographical types, levels of activity, their perception of environmental aspects, their level of application of measures to mitigate climate change or improve energy efficiency and their energy uses and consumption. The presentation of the study has been organized as follows. The method followed in the process of creating the questionnaire and the design of the sample is briefly

27 Spanish Winery Interest in Energy …

383

described in Sect. 27.2. The results obtained from the statistical analysis conducted with data from the survey are detailed in Sect. 27.3. Lastly, the most relevant conclusions of this study are presented in Sect. 27.4.

27.2 Method In Sect. 27.2.1 of this section, we present the design of the questionnaire. The design of the sample of wineries selected to participate in the study is presented in Sect. 27.2.2.

27.2.1 Questionnaire As is usual in the design of a questionnaire, winery owners were first pretested in several Spanish regions to identify the key aspects and most relevant characteristics of this sector. This process involved in-depth semi-structured and directed interviews. The definitive questionnaire started with a block of questions to identify and locate the wineries by name, municipality and province, the wine area they belong to, the year they were established, the majority shareholder, and the legal form of the company. The purpose of the next block of questions was to learn the company’s activity. This involved questions on the number of employees, turnover volume, and gross floor area to establish whether it is a large or small winery. Wineries were also questioned on the percentage represented by the different qualities of the wine they produce (PDO, PGI, and so on) and the percentage of their foreign market sales out of their total sales. Another important issue was establishing whether wineries perform other additional activities (tourism, leisure sector, other agricultural products, renewable energy production...) and how much they represent of their total turnover (in percentage). Lastly, also of interest was learning where the grapes used in producing their wine come from, in other words, whether they are their own, come from winegrowers with or without a contract or from cooperatives. It was obviously important to discover how many hectares were involved and whether they participated in their management while the crop was growing or not. The next block of questions focused on environmental responsibility and policy. Wineries were asked if they had any organic winemaking certification, if they had calculated the carbon footprint of their activity or their products, if they conducted energy audits, and if they had their own resources to manage the company’s environmental policy. Another block of questions aimed to analyze wineries’ attitudes to climate change. Their purpose was to learn whether wineries were convinced that the climate had changed and their level of willingness to reduce their CO2 emissions to mitigate

384

N. García-Casarejos et al.

climate change. We were interested in finding out which measures they had already adopted. Another block included questions on renewable energy. We asked about the type of environmentally responsible energy wine companies use, whether they were convinced of the need to use them, their opinion on the outlay involved in implementing them, and the aspects they valued to adopt their use, such as reliability, environmental sustainability, grants, and the impact on their image. The last block of questions aimed to learn whether the use of renewable energies was technically and economically viable. Consequently, we included questions on the use and consumption of nonrenewables, especially electricity, diesel, and gas. Wineries were asked about their consumption trend throughout the year, their level of concern for energy costs, and whether they had reviewed invoices and the power stated in their contract with a view to reducing them.

27.2.2 Selecting the Sample of Wineries According to the SABI database in Spain, which contains information on Spanish and Portuguese companies, the Spanish wine map was formed by 3,894 wineries. We decided to use a simple random sample and stratify by autonomous community to simplify the sampling procedure as far as possible, yet ensure it was still representative. Nevertheless, sample sizes varied depending on confidence and permitted error. With the confidence of 95% and an error of 5%, the necessary sample size is 384 wineries. We decided to stratify by autonomous community so that the sample would contain all existing winegrowing areas. Consequently, Table 27.1, shows the sample sizes for the wineries by autonomous community.

27.3 Results This section presents a preliminary analysis of the data extracted from the survey organized into the main blocks shown in Table 27.2.

27.3.1 Identification and Location The main designations of origin in the country and the autonomous communities with the most production are represented in the sample. All the wineries belong to different municipalities. Almost 60% are limited liability companies.

27 Spanish Winery Interest in Energy … Table 27.1 Sample sizes by autonomous community (CCAA)

385

CCAA

Wineries population

Andalusia

287

27

Aragon

144

14

Asturias

19

2

Balearic Islands

57

6

Canary Islands

84

7

Cantabria

5

1

Castile-León

597

54

Castile-La Mancha

445

40

Catalonia

603

53

Extremadura

118

11

Galicia

342

30

La Rioja

326

29

Community of Madrid

195

17

87

8

Region of Murcia Navarre

116

11

Basque Country

261

24

Valencian Community

208

18

Total

Table 27.2 Questionnaire blocks

Wineries sample

Block I

3,894

384

Identification and location

Block II

Company activity

Block III

The company’s environmental policy

Block IV

Attitude to climate change

Block V

Renewable energies, use and attitude

Block VI

Use and consumption of nonrenewable energy

27.3.2 Company Activity The number of staff employed by the wineries selected is very varied; in fact, the average total at the analyzed wineries is 15 and 50% of them have over seven employees. The same large disparity is observed when analyzing their turnover volume. The average turnover volume is e5,358,365 and 50% of the wineries have a volume above e1 million. The average surface area is 99,126.44 m2 and 50% of the wineries have a surface area over 2,000 m2 . The average total wine production is 206,120.88 hl and 50% of the analyzed wineries have a production over 2,800 hl.

386

N. García-Casarejos et al.

On average, Protected Designation of Origin (PDO) wine represents 73.39% of the total of the winery’s sales at the analyzed wineries, and for 50% of the wineries, PDO represents 100% of the total of their sales. Approximately 51% of the wineries only obtain their income from the wine they produce, the remaining 49% obtain additional income from: tourism and leisure sector (81%); processing and/or sale of other agricultural products (cheese, preserves) (19%); and renewable energy production (11%). Nevertheless, their revenue for additional activities represents less than 10% of their turnover in 71% of the cases, which means these activity types are residual in the winery’s main business. Concerning the relationship these wineries have with other winegrowers (not grouped), 43% of the wineries in the sample use grapes from winegrowers with a stable contract in their winemaking process. On average, the number of hectares providing these grapes to the wineries is 29.18 ha. As was to be expected, out of the wineries with stable contracts with other winegrowers, 82.4% participate in the management of the grape-obtaining process while the crop is growing. Vineyards are diversifying their crops, as reflected by the fact that 33.3% of them have something other than vine cultivation. The number of total hectares dedicated to other crops on average is 127 ha, of which, on average, 35 ha are irrigated crops. Out of the vineyards we analyzed, 85.3% have rain-fed hectares and only 16% of these vineyards are planning to change to irrigated hectares; therefore, 84% have no plans to change. There are several reasons why the vineyards are considering transforming their rain-fed hectares into irrigated hectares. Firstly, 40% are planning to transform the hectares to increase grape production, 40% to compensate for weather and rainfall variations and 80% because they believe it improves grape quality. Similarly, there are many reasons why 84% do not want to transform their rain-fed hectares into irrigated hectares. Lack of water is the reason for 20%, no electricity supply for 2.5%, the vineyard’s microclimate for 25%, the type of wine they produce for 52.5%, administrative problems for 7.5%, maintaining traditions for 30% and other reasons, such as the designation of origin standards or moisture at night, for 15%.

27.3.3 The company’s Environmental Policy Only 28% of the wineries we analyzed have organic winemaking certification (organic agriculture, organic wine, organic vineyard, organic winery, ISO 14001, Aragonese Committee of Organic Agriculture, and so on). This percentage is even less, 16%, when it refers to the carbon footprint calculation or the winery’s energy audits. These data are consistent with the resources the wineries in the sample allocate to environmental management, since 33% of them do not assign any resource to these tasks, and only 7% have an exclusive environmental department. Intermediate positions can be found between both extremes. Consequently, 46% of the wineries have internal officers assigned other roles, but they are also responsible for environmental management, and 10% use external officers or consultants.

27 Spanish Winery Interest in Energy …

387

The percentage of vineyards using organic agriculture methods totals 58.7%, and out of these, 50% have some kind of certification. The measures they adopt to adapt to climate change or for energy efficiency vary. Some, 28%, have either started or increased the use of irrigation; 46.7% have brought forward the usual grape harvest dates; 21.3% are working with new grape varieties; 21.3% are growing the vines on land at a higher altitude; 13% have introduced changes in crop architecture; 25.3% have transformed the plant cover; and 4% have implemented other measures, such as using rootstocks or night harvesting. In short, vineyards’ average commitment level to adapt to climate change is high, 7.12 (out of 10), and 50% of the companies have a commitment level above 8 (out of 10).

27.3.4 Attitude to Climate Change Wineries seem highly aware of the climate change problem, as demonstrated by the average rating, 7.83 points (out of 10), given to the statement ‘the climate has changed’. This question was rated above 8 points (out of 10) by 50% of the wineries. Similarly, the average willingness of the sampled wineries to decrease their CO2 emissions is high, 7.83 (out of 10), with 50% of the wineries providing a score above 8 points (out of 10). Although these wineries still do not directly employ qualified personnel to manage environmental tasks, the measures the sampled wineries have already adopted to mitigate climate change or improve energy efficiency also highlight their awareness of the need to care for the environment. As a result, 93.3% of the analyzed wineries recycle; 69% manage their consumption more efficiently; 64% have improved thermal insulation; 51% have acquired low-consumption machinery, facilities and vehicles; 35% use new containers; 44% have decreased the weight of their boxes, bottles, containers, packaging, etc.; 15% have reduced their emissions; and 13% have implemented other measures, such as making better use of residual heat, intelligent temperature control, planting trees, and so on.

27.3.5 Renewable Energies, Use and Attitude No change was inferred in the energy mix of these companies towards more environmentally-friendly energies, since the use of renewables did not represent more than 10% of all the wineries’ consumption in any case. The three most usual renewable energy types are: biomass (11%), photovoltaic power (11%), and solar thermal power (9.3%). Despite the low level of use of renewable energies, wineries are convinced of the need to use them and rate this aspect with an average score of 7.28 (out of 10), and 50% of wineries rate it with more than 8 points (out of 10). This gap could be due to the wineries rating the initial investment involved in implementing renewable energies

388

N. García-Casarejos et al.

as high, with 8.33 points (out of 10), on average. They consider that operating costs and maintenance costs are lower, and they rate them, on average, with 5.5 (out of 10) and 5.4 (out of 10), respectively. Concerning aspects favoring wineries’ adoption of renewables, reliability is rated with 7.24 points (out of 10), on average, and 50% of the wineries rate this aspect with a score above 7 (out of 10). Environmental sustainability is rated with an average score of 8.6 points (out of 10), and 50% of the wineries give it a rating above 8 (out of 10). The average rating of the existence of grants was 7.12 (out of 10), and 50% rate it above 7 (out of 10). Lastly, the impact on the image was rated 7.84 (out of 10), on average, and 50% give it a score above 8 points (out of 10). As occurred in the wineries, the use of renewables in vineyards is still very limited. Only 5% of the vineyards use environmentally-friendly energies, and out of these, 50% use photovoltaic for pumping and 50% produce biomass in their own vineyard. Concerning pruning waste, 53.3% of the vineyards add it to the land, 28% burn it, 8% collect it to use as biomass, 5.3% sell it, and 2.7% chop it for compost.

27.3.6 Use and Consumption of Nonrenewable Energy To discover wineries’ energy use and consumption pattern throughout the year and to see whether this energy can be replaced by renewables, we need to know their demand. Our analysis shows that the demand for electricity is not stable throughout the year given seasonal trends in the wine-production process. In fact, the months with the highest consumption are September and October, coinciding with the harvesting period, although the two months before that also register high consumption. Between March and June, energy use is at average levels and the months of December and January represent the least consumption. Wineries’ concern for their increase in electricity consumption has resulted in 62.7% reviewing their reactive energy invoices, and 66.7% the contracted power, with a view to reducing them. To learn the energy requirements of Spanish wineries, the analyzed sample was asked if they had heating and air-conditioning systems; 50% of them have a heating system and 67% air-conditioning. Concerning possible consumption at vineyards, on average they have 1.5 independent irrigation pumping systems, 2.9 tractors, 0.55 grape harvesting machines, and 2.14 personnel transport vehicles. None have heaters and/or fans for protection against frost. Furthermore, machinery and vehicle diesel consumption at vineyards is not stable throughout the year. The lowest consumption occurs in November, December, and January, while September is the month with the most consumption, coinciding with the harvesting period, although the preceding months also register high consumption. The total average consumption of diesel at vineyards in 2016, was e14,092, although it varied greatly; 50% of the vineyards had a consumption of less than e5,000. Out of this consumption, on average, 82.7% corresponded to agricultural

27 Spanish Winery Interest in Energy …

389

machinery and 7.3% to irrigation pumping. To learn the vineyards’ energy requirements, they were asked how they obtain power for their pumping systems, and 32% stated that it was from the electricity grid, 12% from a diesel generator, 5.3% from a tractor, and 1.3% from gravity. Lastly, 40% of the analyzed sample had a power line expressly to supply a pumping system.

27.4 Conclusions In this study, we have used surveys to characterize the Spanish wine sector by geographical type, level of activity, perception of environmental aspects, level of application of measures to mitigate climate change or improve energy efficiency and energy uses and consumption. We also studied the reasons for implementing renewables at the wineries and their vineyards. A simple random sample was used for this study stratified by autonomous community. The wineries in the sample are very focused on their core business—the production and sale of wine—and a high level of connection between the ownership of the winery and the ownership of the vineyard was observed. In fact, more than threequarters of the analyzed wineries have their own vineyards. However, this contrasts with the fact that the wineries have hardly any secondary or tertiary sector activities to supplement their income, while many vineyards have expanded their crops. This demonstrates that diversifying income has occurred more in the primary sector than in the secondary and/or tertiary sectors. Companies also have a strong environmental awareness, since almost half practices organic agriculture and, in many cases, they obtain some form of certification. It is not yet possible to confirm that the production and sale of renewable energy forms a stable source of income for the wineries. Concerning the wineries’ environmental management, it is still in an embryonic phase in the company’s organization, since they either outsource these services or, if they have their own officers, they are not allocated exclusively to these tasks. A strong environmental awareness was perceived, however, as shown by the number of measures already adopted to mitigate climate change and improve energy efficiency. Furthermore, as the wineries are aware that the climate has changed, they would be prepared to adopt measures that can reduce CO2 emissions. However, this contrasts with the wineries’ low implementation of this type of measures, demonstrating that there is still great room for improvement, and therefore, that the proposal of actions targeting this area could be well received. The gap between high awareness and low implementation is probably due to a lack of knowledge of how to proceed and the perception of the high cost. The penetration of renewables is also still quite low and the most used energies are solar, photovoltaic, and biomass. Environmental sustainability is one of the main reasons the wineries in the sample give to implement renewable energies, which is consistent with their awareness of climate change and their willingness to reduce emissions. In fact, focusing on sustainability has become a key aspect to remain competitive.

390

N. García-Casarejos et al.

Although the qualitative leap of incorporating renewables has still not happened in either wineries or their vineyards, conditions for them to be incorporated do exist, since the main source of energy in the wine sector is still electricity, whose costs are constantly increasing. This could be due to the opinion that the initial investment involved in implementing this type of energies in the wine sector is high and not to other reasons, for example, operating and maintenance costs. Other studies posit the same by pointing out that financial barriers [5], are the most obvious for not implementing clean energies, since depreciation periods are lengthy. These authors also highlight that there is less interest in energy saving when no internal policy monitors or controls energy savings. Therefore, as an incentive to implement renewables in the sector, we should establish mechanisms that could change perception, conduct a medium-term cost–benefit analysis or design formulae that are better suited to each winery’s financial possibilities. Acknowledgments We thank the Life REWIND “Renewable Energy in the Wine Industry Profitable small-scale renewable energy systems in agribusiness and rural areas: a demonstration in the wine sector” LIFE13/ENV/ES/000280 for financial support. This research was funded by the ECO2016-77-P and ECO2016-79392-P projects (AEI/FEDER, UE). The Government of Aragón (Spain) and the ERDF have also collaborated: Group Métodos Estadísticos (grant number S41_20R) and Group CREVALOR (grant number S42_20R).

References 1. Fiore M, Silvestri R, Contò F, Pellegrini G (2017) Understanding the relationship between green approach and marketing innovations tool in the wine sector. J Clean Prod 142:4085–4091 2. FoodDrinkEurope (2016) European Food and Drink Industry 2014–2015. Data & Trends. https:// www.fooddrinkeurope.eu/uploads/publications_documents/DataandTrends2014-20151.pdf 3. Institute for Energy and Transport and Institute for Environment and Sustainability (2015) Energy use in the EU food sector: State of play and opportunities for improvement. European Commission Luxembourg Publications Office of the European Union. ISBN: 1831-9424 (online). https://doi.org/10.2790/158316 4. Instituto Valenciano de Investigaciones Económicas (2015) FIAB alimentamos el Futuro 2020 Informe Económico 2015. http://fiab.es/wp-content/uploads/2017/12/Informe-Econo% CC%81mico-2015.pdf 5. Meyers S, Schmitt B, Chester-Jones M, Sturm B (2016) Energy efficiency, carbon emissions, and measures towards their improvement in the food and beverage sector for six European countries. Energy 104:266–283

Chapter 28

Sustainability Building Rating Systems. A Critical Review. Time for Change? G. Martínez Montes, J. Alegre Bayo, B. Moreno Escobar, T. Mattinzioli, and M. J. Álvarez Pinazo

Abstract Over the last two decades, an increase in social awareness has occurred regarding the sustainability of all areas of human activity. Engineering and construction projects are no exceptions to this trend. In order for project developers to make their social corporate responsibility policies visible, many companies have opted for certification methods, which, at an international level, have attended to this awareness, increased the added value of the projects developed. Some examples of these methods are LEED, BREEAM, PASSIVHAUS, VERDE, and CASBEE. Enough time has passed for a critical review of these methods to be carried out and for possible improvements in the sustainability management of projects to be analyzed, which could lead to greater involvement on the part of the public administrations and institutions. Keywords Sustainability · Green buildings · Certification methods · Corporate social responsibility

G. Martínez Montes (B) · J. Alegre Bayo · B. Moreno Escobar · T. Mattinzioli Department of Engineering Construction and Projects, University of Granada, Campus Fuentenueva, C/ Severo Ochoa S/N, 18071 Granada, Spain e-mail: [email protected] J. Alegre Bayo e-mail: [email protected] B. Moreno Escobar e-mail: [email protected] T. Mattinzioli e-mail: [email protected] M. J. Álvarez Pinazo (B) GUAMAR S.A. C/Puerto, 14 – 3rd floor, 29016 Malaga, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_28

391

392

G. Martínez Montes et al.

28.1 Introduction The construction industry delivers a significant contribution to the numerous negative global environmental impacts [16]. Buildings are responsible for 40% of total energy consumption and 36% of CO2 emissions in the EU. During the life cycle of buildings, energy, water, and raw materials are consumed, waste is generated and potentially harmful atmospheric emissions are released [34]. In addition, more than half of the world’s population is now living in towns and cities, and by 2030, this number will have increased to roughly 5 billion [30]. These rising statistics and measurable effects have caused the green building concept to gain momentum, leading to the conception of many assessment tools and sustainable building rating systems, which are being put into practice around the world [1, 3, 11, 19]. Sustainability in the construction sector is going to be a mandatory concept in all building projects.

28.1.1 Defining the Concept of Sustainability The concept of sustainability is similar, but not equal to, the original definition of sustainable development, which is: “the ability to make development sustainable to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs” [35], p. 8]. While somewhat generic, this highlights the comprehensive nature of sustainability, which, as stated in sustainability literature by business authors and the United Nations Economic and Social Council, is a holistic and multidimensional approach required, which considers the three pillars of sustainability: the economic, social, and environmental aspects of any project [3, 23], UN ECOSOC, n.d. After the 1992 Agenda, 21 summit [12] took place, an institutional pillar was introduced on politics, social security systems, education etc. [11, 23]. This pillar will not be considered in this work due to its lack of clarity and its incorporation into rating systems, but it is recognized as having the power to accelerate change in building regulations via government involvement [28].

28.1.2 The Sustainable Building Movement The green building concept has gained recognition internationally, and a vast market of rating systems has developed [16]. Green buildings generally provide better energy performance compared to conventional buildings, reflected in their energy efficiency, water efficiency, and carbon emission reduction, saving up to 30% in energy consumption [25, 36].

28 Sustainability Building Rating Systems …

393

Novel sustainable building rating systems enable the evaluation of many aspects of construction, by establishing a comprehensive methodology including design checklists and credit rating calculators [3, 13]. Building performance can now be described extensively, creating an essential tool to stimulate an open dialog and learning, and assist in re-shaping the design process towards an approach which is more thoughtful, innovative, and integrated [10, 13]. It has been stated that: the more we build or renovate buildings in a sustainable way, the more we produce sustainable cities and reduce global environmental impacts [9]. This holds true as long as ecolabeling provides benefits to society, without being misleading, and truly represents a holistically sustainable building review [12].

28.1.3 European Union Energy Performance of Buildings Directive According to the 2010 Energy Performance of Buildings Directive [26] and the 2012 Energy Efficiency Directive [27], all new buildings must be nearly zero-energy by 31 December 2020, and all new public buildings by 31 December 2018. Furthermore, EU countries must set minimum energy performance requirements for new buildings, major renovations, and the replacement or retrofit of building elements. While the Directive expedites the advancement of sustainable buildings legally, a work platform—sustainable building certification systems—is required to guide and measure these changes.

28.1.4 Private and Public Rating Entities While energy efficiency labeling is generally mandatory, sustainable building labeling is mostly carried out on a voluntary basis [19]. The two largest sustainable building certification bodies, the USGBC and BRE, with their certification schemes LEED and BREEAM, respectively, are in fact private [4, 5], yet they do provide support to the public sector (for example, in the UK, BREEAM assessment is a mandatory requirement to receive funding from Department of Health and Education). The Japanese system CASBEE, on the other hand, is an exception, having actually been largely developed by the Japanese government [11]. As stated by J. Elkington, a founder of the corporate responsibility movement, private systems create an important change in governance. With the increased power of these systems, transparent methodologies are even more important for international private institutions [2]. The presence of various building certification schemes, with different indicators, does cast doubt upon the validity of these systems, even more so with their private nature and the implication of having different stakeholders with different interests [23].

394

G. Martínez Montes et al.

Following on from the concept of corporate responsibility, the notion of a circular economy has begun to gain traction too. With its aim to remove the current intensive linear economic framework, and make way for firms and consumers to reduce harm to the environment with a closed loop understanding of product lifestyle [22]. Sustainable building certification systems assist in solving the needs of environmental resilience and help promote the desire for the economic growth of a circular economy, via incorporating sustainable design strategies and measuring a building’s performance, making it quantifiable, and in turn, improvable. Once rating tools are stable in the market and considered more complete, it will be possible to implement some of the principles within the legislation, and governments have recognized the positive impact and contribution of these schemes [17]. It is the hope of the authors that rating systems are accepted by public institutions, as this would greatly improve the promotion of sustainable development in the construction sector, create a single, complete sustainable building rating methodology, and remove the bias from its evaluation. With the highest rate of rating system introduction occurring between 1995 and 2010 [3], we can observe that this phenomenon is now steadying [3], and so, now that many of these systems have been in place for a considerable amount of time, and the definition of sustainable buildings has become slightly outdated [12], we are granted the opportunity to robustly evaluate the effectiveness of these systems.

28.2 Objectives The scope of this paper is to evaluate the most popular sustainable building certification systems through an in-depth literature review and technical evaluation. To carry out this objective the following focuses have been identified: 1. Identify a wide spectrum of global and local sustainable building certification systems for new residential and commercial construction. 2. Explore existing published critical reviews on sustainable building certification systems to understand the current attitude in academia, develop an effective framework for evaluation and the key requirements for a modern rating system. 3. Critically evaluate the key criteria of the elected rating systems.

28.3 Methodology In order to provide a state-of-the-art investigation, it is necessary to understand the current situation in worldwide research. To provide an integral, systematic, clear, reproducible, and up-to-date review, a literature review was carried out utilizing the following keywords: critical; review; environmental; sustainability; certification system; rating system; sustainable building; green building. The following keywords

28 Sustainability Building Rating Systems …

395

were applied to the Web of Science and open source databases. Carrying out an indepth literature review has highlighted the need for this article [24]. It must be noted that for the extent of this study the terms green and sustainable buildings will be used interchangeably throughout this study [36]. Whilst a literature review provides an insight to various methodologies, this alone is not sufficient to complete a critical review, therefore, certification systems will be elected, and their manuals will then be analyzed by academic researchers. In this study, the systems have been elected according to the following criteria: residential & commercial focus; cited in at least 60 papers in the Science Direct database (appearing in titles, abstracts and keywords); incorporated in the evaluation of at least 500 projects; organization has been in vigor for at least 10 years, and scheme for 5 years. The selected systems will represent the global and regional, as well as the public and private aspects of sustainable building rating systems. Through this selection, it is hoped that this study will also provide guidance to academia and industry on sustainable building rating systems.

28.4 Results and Discussion 28.4.1 Introduction to Elected Certification Systems Various sources provide different answers as to how many rating systems are currently available, but it is believed to be an elevated amount [3,11]. Some of which have a local application (CASBEE etc.) and others an international one (BREEAM, LEED, PASSIVHAUS, etc.). In fact, the World Green Building Council was established to coordinate the efforts of various green building councils all over the world [36]. To initiate the evaluation of the selected certification systems, first they must be introduced. A brief introduction to the four chosen systems, including their history, methodology, and criteria can be found below.

28.4.1.1

BREEAM

The Building Research Establishment Environmental Assessment Method (BREEAM), developed in the UK, by the Building Research Establishment (BRE), is considered as the pioneer of the sustainable building certification systems, first launched in 1990, and is widely accepted that other systems have been influenced by it: LEED and CASBEE, for example [11]. BREEAM has now expanded to rating buildings all over the world, and has its largest presence in Europe occupying 80% of the market [3, 11, 33]. Multiple versions of BREEAM are available, which can evaluate the various aspects of a building´s life cycle: design, use, and refurbishment of buildings. It measures the performance of buildings following 10 main criteria: management,

396

G. Martínez Montes et al.

health & wellbeing, energy, transport, water, material, waste, land use & ecology, pollution and innovation, and depending on its adherence to the criteria system (with credit points, mandatory credits and a weighting system) it can be awarded a Pass, Good, Very Good, Excellent or Outstanding award [6]. If a project achieves an Outstanding rating, a case study will be produced for the building, as it provides an example of exemplar work, and will be published [6].

28.4.1.2

LEED

The Leadership in Energy and Environmental Design (LEED) method was first created in the USA, in 1998, by the United States Green Building Council (USGBC). It is widely considered as the most popular rating method worldwide [1, 11, 33]. There is a total of 9 credit categories for the latest version of LEED v4 Building Design and Construction: integrative process, location & transportation, sustainable sites, water efficiency, energy & atmosphere, materials & resources, indoor environmental quality, innovation, and regional priority. These credit categories are available in 5 different evaluation systems (building design & construction, neighborhood development, etc.). As part of the requirements for LEED credits, there are various options one can choose to fulfill certain credit criteria, depending on the nature of the project (accommodating different project types: historic district or brownfield site, etc.) [32]. Through an additive credit system, with a total of 100 points, the non-weighted score awarded can correspond to either a Certified, Silver, Gold or Platinum score [3, 31]. In addition to credits, in most sections (water efficiency, energy & atmosphere, etc.), there are prerequisites to be completed to obtain a certification, but do not count as summative credits for the final score.

28.4.1.3

CASBEE

The Comprehensive Assessment System for Built Environment Efficiency (CASBEE) method originates from Japan, and was launched in 2005, by the Japan Sustainable Building Consortium (JSBC) with the involvement of government [3, 15]. The Built Environment Efficiency (BEE) system, in its simplest form, assesses the environmental quality of the building against the environmental load of the building. To manage this method, two sections are considered: the internal (Q—Built Environment Quality: project design sustainability, etc.) and the external (L—Built Environment Load: energy and resource consumption, etc.), divided by a hypothetical virtual boundary. The final score is simply Q/L, and the result is then applied to the gradient-based BEE ranking chart which identifies if the building is either: Superior (S), Very Good (A), Good (B+ ), Fairly Poor (B− ), and Poor (C) [14]. The four assessment fields of the CASBEE rating system include: energy efficiency, resource efficiency, local environment, and indoor environment [15], which can be applied to various certification types: Predesign, New Construction, Existing

28 Sustainability Building Rating Systems …

397

Buildings, Renovation, and Cities. Supplementary rating systems are also available when the full rating system cannot be used, including: detached houses, temporary constructions, heat island effect, urban development, and city and market developments [3].

28.4.1.4

PASSIVHAUS

The Passive House concept was developed by Wolfgang Feist and Bo Adamson, in 1996 [33]. Born from the dominating output of CO2 emissions in residential buildings, a Passive House is a smart building with very small heating energy demand but with comfortable indoor environmental conditions and great indoor air quality [18]. To meet its criteria on heating energy (primary & renewable), energy generation, airtightness, and thermal comfort, house designers focus on: thermal insulation, passive house windows, adequate ventilation strategy, airtightness, and thermal bridge reduced design [20]. Depending on its final renewable energy demand and generation, a house can be awarded a Classic, Plus or Premium rating. The Passive House standard can be considered as the exception to the schemes evaluated in this study, due to its respectively smaller criteria and sustainability considerations.

28.4.2 Evaluation of Elected Certification Systems 28.4.2.1

Literature Evaluation

In order to gain an insight into the systems elected and develop a robust framework for the methodology of this work, literature references have been selected from the databases evaluated to act as a reference for this critical review (they were the most cited and include several sources of knowledge). These references will deliver stable, reliable, and unbiased information, and display the current attitude of rating systems in academia. The results can be seen in Table 28.1.

28.4.2.2

Scheme Evaluation

For each system, the latest available versions of manuals on residential and commercial new construction will be evaluated. The following manuals have been considered: BREEAM International New Construction (2016); LEED v4 Building Design and Construction (2018); CASBEE for Building (New Construction): Technical Manual (2014); Criteria for the Passive House, EnerPHit and PHI Low Energy Building Standard (2016). It is worth noting that all systems also offer a scheme which covers the design stage and post-construction stage for buildings and are considered as rating systems, instead of life cycle assessment tools [3].

398

G. Martínez Montes et al.

Table 28.1 Critical review of literature sources Author & Date

Source

Systems reviewed

Doan et al. [11]

Journal of Building and Environment (IF: 4.053)

LEED, BREEAM, CASBEE

Bernardi et al. [3]

Journal of Sustainability (IF: 1.789) BREEAM, CASBEE, LEED

Awadh [1]

Journal of Building Engineering

LEED, BREEAM

Mattoni et al. [19]

Journal of Renewable and Sustainable Energy Reviews (IF: 8.050)

CASBEE, LEED

Vega Clemente [33]

Thesis: Polytechnic University of Madrid

BREEAM, LEED, CASBEE, PASSIVHAUS

Liang et al. [18]

Energy Procedia

PASSIVHAUS

In Table 28.2, an insight is provided into the elected systems, created from data taken from literature sources and scheme manuals, and layout based on the work of Bernardi et al. [3], Doan et al. [11], and Mattoni et al. [19]. The years included are the general years in which publications were released (from construction to city manuals etc.), whereas all other information regards the specific manual stated above. The largest weighted section of a scheme can vary, as many systems apply area-dependent weighting, leading to different weightings per project. As seen in Table 28.2, the variation between the sustainable building systems can be immediately recognized. BREEAM is the oldest system, and accordingly, has the highest number of certified buildings, however, LEED has surpassed the BRE standard in geographical flexibility. CASBEE is the only organization considered as being public, and PASSIVHAUS has the most limited criteria. As expected, in the data provided it is easy to distinguish the presence of three clearly international schemes (BREEAM, LEED, and PASSIVHAUS), and a more locally applied scheme (CASBEE). Zuo and Zhao [36] state that building rating tools are defined according to local climatic and geographic conditions. It can be noted that the international systems offer a far simpler framework than the local system. This fact can provide a reason as to why these systems have been able to expand globally, giving versatility towards building evaluation in different regions. As a side note, CASBEE not only provides a result for adherence to the environmental criteria, but also a life-cycle CO2 production chart. Throughout the schemes evaluated, the energy criterion is clearly the key focus of the systems, as also found in the work of Illankoon et al. [16]. This is understandable, as to meet the EU targets for nearly zero-energy buildings (as stated in Chap. 1.3.), energy consumption requires the most attention. However, if the percentage of the energy criteria in the systems is considered, along with the other types of credit categories, it is apparent that the focus of these sustainable building rating systems does not adequately cover the three pillars of sustainability. This observation is also made throughout the relevant literature [3, 11, 16].

28 Sustainability Building Rating Systems …

399

Table 28.2 Comparison between the four systems System

BREEAM

LEED

CASBEE

PASSIVHAUS

Country of origin

UK

USA

Japan

Germany

Organization

BRE

USGBC

IBEC

PHI

Year introduced

1990

1998

2002

1996

Latest version

2016 (Pilot: 2018) 2018

2014

2016

Presence

77 countries

160 countries

1 country

>45 countries

Rating type

Weighted categories

Additive credits

BEE ranking chart

Ranking table

Establishment type

Private

Private

Public

Private

Scheme focus

Energy (23%), health & wellbeing (16%)

Energy & Atmosphere (30%), SS & EQ (15%)

Q: Indoor Environmental (40%) & L: Energy (40%)

House energy consumption & generation, and thermal insulation

Mandatory credits/sections

Excellent: 11%, 11% Outstanding: 17%





Certified buildings

561,600 (2017) [11]

79,100 (2017) [11]

> 14,000 (2015) 50,000 (2013) [3] [18]

Average Certification Price [8]

6,125e (min: 1.750e – max: 9.620e) 27 requested budgets analyzed

16,383e (min: 2.080e – max: 29.720e) 78 requested budgets analyzed



6.566e (min: 3.130e – max: 9.960e) 56 requested budgets analyzed

Recognition for innovation

Yes: 6%

Yes: 5.5%

No

Indirectly: renewable energy criterion

N.B.: where not stated, the source is from scheme manual or website of the corresponding system

The PASSIVHAUS system, despite being limited in its incorporation of holistic sustainability, is a very internationally accepted system. With a similar price to international rating systems and a number of certified buildings very close to that of LEED, it infers that even a small-scale system with a more direct, simple approach can be effective in saving energy. However, PASSIVHAUS does not display considerations outside of the building envelope, in comparison to the other systems, and so its elevated price raises the following question: is it just a hungry private business taking advantage of the name sustainability. However, the source for system pricing is only for Spain, and a more global evaluation may bear different results. The large price difference between the LEED and BREEAM pricing is interesting. An important feature in a modern building rating system is the recognition of innovation in construction. The work by Conte and Monno [10], also highlights the importance of innovation in the construction industry. Not only does it permit new technologies to develop faster, but it also provides value to stakeholders as to why

400

G. Martínez Montes et al.

they should invest. The LEED system can be considered to be taking advantage of the innovation criteria and offer a credit for using one of their own accredited professionals [32]. This behavior only does not assist the development of the industry and It has not additional value for sustainable construction. The authors elected the numerical quantity of certified buildings for CASBEE from the study of Bernardi et al. [3], as in the work of Doan et al. [11] and on the CASBEE website (2015), the number of certified projects appears outdated and questionable. As each rating method was originally developed for its local region, the criteria for each section can differ. The UK can be considered to have an abundance of water, compared to other countries, such as the US. For example, the water consumption (lpm) for a tap in the LEED manual is 1.9 (public room) or 8.3 (private room), whereas in the BREEAM manual a washbasin tap is much higher at 12 [6]. Similar results were found in the work of Awadh [1]. CASBEE only requires the implementation of water-saving valves and equipment but does not specify flow amounts (even though it provides rough guidelines for toilets). The standards all represent similar categories, but each has its own interpretation of sustainability. In previous critical reviews, such as in the work of Doan et al. [11], Awadh [1], Bernardi et al. [3], and Mattoni et al. [19], the evaluation of weighting distributions and credit allocation has been evaluated in depth. However, for the manuals chosen for this evaluation the credit and weighting methods between systems vary: BREEAM changes the weighting depending on the project type and location, LEED has summative credits but depends on project type, CASBEE has an independent system via the BEE gradient chart and PASSIVHAUS, while considers location, has no weighting system applied. Mattoni et al. [19] found that different systems with different methodologies obtain different sustainability results: BREEAM and CASBEE obtain higher scores than LEED and SB Tool, etc. Therefore, going back to Chaps. 1.2. and 1.4., a unification of these systems is required, to not make rating buildings a subjective exercise [23]. While the existence of the World Green Building Council has linked these systems, a consistent methodology is still lacking. The results of the evaluation of mandatory credits provided an interesting result. Only LEED and BREEAM enforced mandatory criteria in order to obtain a sustainable building status. In the opinion of the authors, mandatory criteria should be enforced, and the credits must be specifically chosen, so that a building which only obtains a minimal rating, still displays consideration of the holistic sustainability definition in Chap. 1.1. This means that the mandatory adherence to the three pillars should be imposed. In addition, the proportion of mandatory criteria should be increased from roughly 11%, to a higher 30–40%. BREEAM effectively incorporates additional mandatory criteria in order to achieve an Outstanding rating, but only by 6% from the previous rating level. Both LEED and BREEAM administer most of their mandatory credits in the energy section. Since the beginning of the decade, standards have been implemented for the proper development of sustainability in building construction: ISO 21929:2011 &

28 Sustainability Building Rating Systems …

401

Table 28.3 System manual and rating system comparison Scheme

Manual Clarity

Rating system Pragmatisms Simplicity Methodology Simplicity Three Innovation Pillars

BREEAM

3

3

3

3

3

2

2

LEED

1

3

2

3

3

2

2

CASBEE

2

2

3

2

3

1

1

PASSIVHAUS 2

2

3

2

3

1

1

21931:2010. This may explain why, since the beginning of the decade, the building rating system phenomenon has achieved stabilization [3].

28.4.2.3

System Comparison

It is important to review the clarity of the manuals. In spite of not being a representation of the effectiveness of a system as a standard, a clear and transparent manual would assist any system (Chap. 1.4.). This is a subjective exercise, as it is based on the opinions of the authors. But, as previously mentioned, the requirement for increasingly larger companies to have transparent methodologies, does prompt organizations to publicly provide this material. Through this evaluation, it is hoped that discussion may be encouraged on unifying a clear methodology for rating systems. In Table 28.3, the systems are compared via key aspects considered by the authors to be requirements for a clear, effective, and robust sustainable building certification system. The aspects are rated from 1 (poor), 2 (medium) to 3 (good). The values displayed were discussed and determined by the research group (it includes four academics and two practitioners). It took two discussion rounds before the final result showed in Table 28.3.

28.5 Conclusion This study presents an overview of four different sustainable building rating systems, which vary in methodology, location, magnitude, and cost. The increasing global interest in sustainability over the last two decades has caused many organizations to publish their own versions on how to measure sustainability. With various interpretations in the industry, no coherent and unified rating method has been developed. Now the sustainable building phenomenon has plateaued, the opportunity has arisen to critically review these systems for their unification and possible public involvement. The public and private orientation of these rating systems has also been investigated. CASBEE has been found to be the only public scheme out of the four. It is the

402

G. Martínez Montes et al.

youngest of the schemes considered, but also provides the smallest gap between environmental and social sustainability aspects—weighing energy and the indoor environment equally. However, BREEAM and LEED are more advanced, as they both provide mandatory credits and innovation recognition. The fact the private systems have adopted a circular economy mindset earlier displays its increased competitiveness as a sector, and the benefits and power the sector can provide through the development of modern rating systems. The technical instruments evaluated in this study also display both global and local methodologies, providing a limitation to one singular one-size-fits-all system. For the design of a customised system it is important that it incorporates both local and global considerations. However, systems should not, under any circumstances, include credits to increase financial gain, as in the case of LEED—the reoccurring theme of the drawbacks of private systems, and the need for regulation. PASSIVHAUS includes significantly lower certification criteria than BREEAM, with a higher cost of certification. What you pay for, does not necessarily represent what you are obtaining. Consequently, WGBC should have more of an input in regional methodologies, and the imbalance in prices should be amended. In conclusion, through the large development of sustainable building rating systems, many different systems have borne fruit internationally and have improved global awareness and discussion on sustainability. However, for these systems to be fully complete, further considerations are still necessary: improved sustainability and innovation considerations, manual clarity, increased mandatory criteria, and regional consensus. Also, the equilateral implementation of sustainability is required in modern businesses and society.

References 1. Awadh O (2017) Sustainability and green building rating systems: LEED, BREEAM, GSAS and Estidama critical analysis. J Build Eng 11:25–29. https://doi.org/10.1016/j.jobe.2017. 03.010 2. Berkovics D (2010) Cannibals with forks - the triple bottom line of 21st century Bus Rev. Majeure Alternative Management - HEC Paris 3. Bernardi E, Carlucci S, Cornaro C, Bohne RA (2017) An analysis of the most adopted rating systems for assessing the environmental impact of buildings. Sustainability 9(7):1–27. https:// doi.org/10.3390/su9071226 4. Bloomberg (2018) Building research establishment limited: private company information - Bloomberg. https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId= 140563. Accessed 12 Jan 2018 5. Bloomberg (2018) U.S. Green Building Council (USGBC): Private Company Information - Bloomberg. https://www.bloomberg.com/research/stocks/private/snapshot.asp?privca pId=37826486. Accessed 26 Feb 2018 6. BRE (2016) BREEAM international new construction 2016 technical manual. Watford, United Kindgom 7. BREEAM (2018) BREEAM website. https://www.breeam.com/. Accessed 9 Feb 2018 8. Certicalia (n.d.) Guia de precios de trámites - Certicalia. https://www.certicalia.com/precio. Accessed 2 Mar 2018

28 Sustainability Building Rating Systems …

403

9. Conte E, Monno V (2011) Evaluating environmental performances of sustainable technologies: from the building to the city scale. Hels World Sustain Build Conf 2:814–822 10. Conte E, Monno V (2012) Beyond the buildingcentric approach: a vision for an integrated evaluation of sustainable buildings. Environ Impact Assess Rev 34:31–40. https://doi.org/10. 1016/j.eiar.2011.12.003 11. Doan DT, Ghaffarianhoseini A, Naismith N, Zhang T, Ghaffarianhoseini A, Tookey J (2017) A critical comparison of green building rating systems. Build Environ 123:243–260. https:// doi.org/10.1016/j.buildenv.2017.07.007 12. Drexhage J, Murphy D (2010) Sustainable development: from Brundtland to Rio 2012. International Institute for Sustainable Development (IISD), New York, USA. 13. Fenner RA, Ryce Basc T (2008) A comparative analysis of two building rating systems. Part 1: Evaluation. Eng Sustain 161(ES 1):55–63. https://doi.org/10.1680/ensu.2008.161.1.55 14. IBEC (2014) CASBEE for building (new construction): technical manual. Institute for Building Environment and Energy Conservation (IBEC), Tokyo, Japan 15. IBEC (2015) CASBEE website. https://www.ibec.or.jp/CASBEE/english/. Accessed 9 Feb 2018 16. Illankoon IMCS, Tam VWY, Le KN, Shen L (2017) Key credit criteria among international green building rating tools. J Clean Prod 164:209–220. https://doi.org/10.1016/j.jclepro.2017. 06.206 17. Krizmane M, Slihte S, Borodinecs A (2016) Key Criteria across existing sustainable building rating tools. Energy Proc 96:94–99. https://doi.org/10.1016/j.egypro.2016.09.107 18. Liang X, Wang Y, Royapoor M, Wu Q, Roskilly T (2017) Comparison of building performance between Conventional House and Passive House in the UK. Energy Proc 142:1823–1828. https://doi.org/10.1016/j.egypro.2017.12.570 19. Mattoni B, Guattari C, Evangelisti L, Bisegna F, Gori P, Asdrubali F (2018) Critical review and methodological approach to evaluate the differences among international green building rating tools. Renew Sustain Energy Rev 82:950–960. https://doi.org/10.1016/j.rser.2017.09.105 20. PHI (2015) Passive house requirements. https://www.passiv.de/en/02_informations/02_pas sive-house-requirements/02_passive-house-requirements.htm. Accessed 3 Dec 2017 21. PHI (2016) Criteria for the Passive House, EnerPHit and PHI low energy building standard. https://passiv.de/downloads/03_building_criteria_en.pdf 22. Prieto-Sandoval V, Jaca C, Ormazabal M (2018) Towards a consensus on the circular economy. J Clean Prod 179:605–615. https://doi.org/10.1016/j.jclepro.2017.12.224 23. Rid W, Lammers J, Zimmermann S (2017) Analysing sustainability certification systems in the German housing sector from a theory of social institutions. Ecol Ind 76:97–110. https:// doi.org/10.1016/j.ecolind.2016.12.022 24. Shaw J (1995) A schema approach to the formal literature review in engineering theses. System 23(3):325–335 25. The Economist (2004) The rise of the green building. https://www.economist.com/node/342 2965. Accessed 18 Jan 2018 26. The European Commission (2010) DIRECTIVE 2010/31/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 19 May 2010 on the energy performance of buildings. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32010L0031 27. The European Commission (2012) DIRECTIVE 2012/27/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 25 October 2012 on energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and repealing Directives 2004/8/EC and 2006/32/EC. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=139937 5464230&uri=CELEX:32012L0027 28. UN (2014) Sustainable development indicator framework for africa and initial compendium of indicators. (United Nations Economic Commission for Africa, Ed.), Ethiopia 29. UN ECOSOC (n.d.) Sustainable development. https://www.un.org/ecosoc/en/sustainable-dev elopment. Accessed 19 Jan 2018 30. United Nations (2016) Urbanization. https://www.unfpa.org/urbanization. Accessed 22 Jan 2018

404

G. Martínez Montes et al.

31. USGBC (2018) LEED|USGBC website. https://new.usgbc.org/leed. Accessed 9 Feb 2018 32. USGBC (2018) LEED v4 for building design and construction. https://www.usgbc.org/sites/ default/files/LEEDv4BDC_01.5.18_current.pdf 33. Vega Clemente R (2015) Evaluación de la Sostenibilidad de Sistemas de Construcción Industrializados de Fachada en Edificios de Vivienda Colectiva 34. Vierra S (2016) Green building standards and certification systems|WBDG Whole Building Design Guide. https://www.wbdg.org/resources/green-building-standards-and-certificationsystems. Accessed 1 Dec 2017 35. WCED (1987) Our common future: report of the world commission on environment and development. Oxford University Press, Oxford 36. Zuo J, Zhao Z-Y (2014) Green building research–current status and future agenda: a review. Renew Sustain Energy Rev 30:271–281. https://doi.org/10.1016/j.rser.2013.10.021

Chapter 29

Analysis and Risk Management in Projects of Change to Led in Street Lighting According to ISO-21500 and UNE-EN-62198 M. J. Hermoso-Orzáez, R. D. Orejón-Sánchez, and A. Gago-Calderón Abstract LED technology is consolidated as the main alternative for lighting in all areas (home, industrial, and road). Its generalized implementation is guaranteed with excellent energy results and great life expectancy and quality. The conventional systems of lighting, incandescence, fluorescence, HM (Metal Halide) or VSAP (High Pressure Sodium Vapor) have been exhaustively analyzed and studied in depth. However, the sudden irruption of the LEDs has brought about doubts about the consequences and their effects on people, animals, and plants, which are associated with the unique characteristics of the LEDs: Their electronic nature, discrete high intensity point light sources, directional emission, high radiation energy (blue 440 nm), etc. Recently, studies have analyzed the effects LEDs can have on the health, visual perception, or well-being of living beings (glare, stroboscopic effects, circadian regulation processes, or systemic diseases). In this work, we establish mechanisms for observing the results of these investigations, analyze and assess associated risks, and establish criteria, guidelines, and components to properly manage implementation processes, renovation, or reconversion of facilities with LED technology. For this, we apply the procedures of the rules for the management of risks in projects: ISO 21500 and UNE-EN 62198. Keywords LED · Lighting · Projects · Risks

M. J. Hermoso-Orzáez (B) Departamento de Ingenieria Gráfica Diseño y Proyectos. Escuela Politecnica Superior de Jaén. Universidad de Jaén, Campus de las Lagunillas, 23071 Jaén, Spain e-mail: [email protected] R. D. Orejón-Sánchez · A. Gago-Calderón Departamento de Ingenieria Gráfica Diseño y Proyectos. Escuela de Ingenierías Industriales. Universidad de Málaga, C/ Dr. Ortiz Ramos s/n. Campus de Teatinos, 29071 Málaga, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_29

405

406

M. J. Hermoso-Orzáez et al.

29.1 Introduction In the past few years, LED lighting has become the most versatile and least consumed technology for general lighting, and in addition, to becoming a new architectural and design option, it allows for greater comfort and well-being and offers high quality light and great visual performance [10]. According to data from the Spanish Electricity Network, electricity consumption in Spain is approximately 260,000 GWh/year. Public lighting, around 4.4 million luminaires, represents 1.35% of the electricity consumed (3510 GWh/year), reaching a cost of over 300,000 million euros per year, which is equivalent to 0.29% of GDP [22]. On the other hand, a report from the Light Pollution Study group of the Department of Astrophysics and Atmospheric Sciences of the Complutense presents a dystopia as Spain looks to the future. Current data indicate that Spain holds the European record for energy consumption in street lighting per inhabitant, with 118– 114 kWh/year per citizen, compared to 90–77 in France and 48–43 in Germany. Consumption in public lighting accounts for 2.3% of world electricity consumption [11], and according to different studies in municipalities in developed countries, it reaches values of between 40% [15] and 60% of municipal electricity consumption [3]. A large number of public lighting installations were implemented between 30 and 40 years ago and are, therefore, obsolete [8]. At present, LED and/or SSL technology has reached high levels of efficiency (more than 276 lm/W) at increasingly lower costs. In addition, its product life expectancy is several times higher than that of discharge lamps [11]. According to the results of an independent and global pilot program published in the magazine of the Spanish Committee of the Illumination [2], public lighting with LED technology can generate energy savings of up to 85%. In the city of Detroit, energy savings were achieved in this amount quickly and at a reduced cost [9]. It seems clear that LED technology applied to public lighting will contribute to the achievement of sustainable and efficient growth, in Spain allowing reducing energy consumption and CO2 emissions [6]. However, despite the energy efficiency, economic savings, and good lighting performance of this new technological form, weaknesses and threats that add associated negative aspects are also beginning to be observed [5]. In general, there is a need for the designer to make an identification and subsequent assessment of all the risks associated with these projects [21], whose final objective will be the more or less massive replacement of the current discharge lamps by LED technology [4]. All this leads us to consider the need to evaluate the risk in projects to replace conventional luminaires with LED luminaires. Risk itself is an abstract concept. If we use the Dictionary of the Real Academia de la Lengua (RAE), a risk is defined as a “Contingency or proximity of a damage". It, therefore, considers the possibility that damage will happen. On the other hand, the Project Management Guide of the PMI [16] (PMBoK) or IPMA [7], defines the risk of a project as “an uncertain event or condition that,

29 Analysis and Risk Management in Projects …

407

if it occurs, has a positive or negative effect on one or more of the project’s objectives.” [20]. Additionally, the standard UNE-EN 62198: [14], “Risk management of the project Application guidelines” defines risk as the “effect of uncertainty on the fulfillment of project objectives". In any case, in the UNE-ISO 21500: [13], standard on “Guidelines for Project Management and Management”, risk is defined within the 10 groups of subjects to be controlled in any project, regardless of its nature, by dedicating special attention to it due to its impact [1]. In our case study, the risk can be positive or negative and its impact must be felt in one of the objectives of the project. It must also be taken into account that a risk may have one or more causes and, if materialized, one or more different consequences. The perception of risk also has a high component of subjectivity and this perception is conditioned by the boundary conditions: Activities that in other times or circumstances were not considered risky might currently become so, or vice versa. The same activity, according to its environment, can be considered risky or not. Thus, a simple or routine activity can become risky if environmental conditions change. On the other hand, “ISO 31000 Risk Management. Principles and Guidelines” defines risk as the “effect of uncertainty on the fulfillment of objectives” and risk management as “Coordinated activities to direct and control an organization with respect to risk". In this line, it describes the principles related to effective risk management, the framework that establishes the foundations and organizational provisions for the design, implementation, monitoring, review, and continuous improvement of risk management throughout the organization, and a management process that can be applied to any type of risk and organization. Basically, it is a very effective tool for determining and assessing the uncertainty of projects [17].

29.2 Objectives and Valuation Technique Associated Risks 29.2.1 Objective The objective of this work is to carry out an exhaustive analysis of the risks associated with projects that opt for the conversion of their lighting to LED technology applied to public lighting. It is intended to make use of the recently published regulations ISO 31000, ISO 21500, and UNE-EN 62198: [14], an assessment and quantification of risks and using the method proposed by the standard, which allows the designer to quantify the risk associated with the projects of this nature. To this end, an analysis of the most common risks associated with projects based on this new technological application will be made, based on a compilation of data obtained from technical publications. The conclusions obtained will point out the most outstanding aspects and parameters of this technology as opposed to other alternatives and will allow to provide a method of assessment analysis, which we understand can be very useful for designers and project managers of these characteristics.

408

M. J. Hermoso-Orzáez et al.

29.2.2 Risk Assessment Technique There are two determining factors to quantify risk: The probability that the risk event will occur and the consequences of the event. While events with a high probability of occurrence, such as 1/10 or 1/100, should be taken into consideration in complete safety, other events, whose probabilities are small, such as 1/1,000,000 0 1/100,000,000, will be negligible risks. Table 29.1 provides some risk levels of certain situations. To assess and evaluate the risk, the indicator “Impact of risk” is used, which is related to the probability of occurrence and the consequences of the event. Thus, we define risk impact, IR, as (29.1) IR = POR ∗ PA

(29.1)

where IR: Impact on risk. BY: Probability occurrence risk. PA: Estimated associated loss. Perception can also be taken into account (29.2). IR =

Probability occurrence risk × Estimated associated loss × Possible risk perception. IR = POR ∗ PA ∗ PPR

(29.2)

where IR: BY: PA:

Impact on risk. Probability occurrence risk. Estimated associated loss.

Table 29.1 Probabilities and tolerances Intolerable risk. Fundamental improvements are needed. Just consider if there are no alternatives and people are well informed Little tolerable. Significant improvement efforts are needed. Higher limit of involuntary risk It is only tolerable if it brings significant benefits and greater reductions in risk cannot be achieved. Public acceptance of voluntary risk Tolerable. The precautions must be maintained, and the cost of the effective alternatives studied. Public acceptance of natural disasters Negligible Public acceptance of human disasters Source prepared by the author

29 Analysis and Risk Management in Projects …

409

Table 29.2 Risk matrix (own preparation) Probability

Impact 1-insignificant 2-little

3-moderate

4-high

5. Almost certainly happens

Medium (5)

High (10)

High (15)

Very high (20) Very high (25)

4. High probability

Medium (4)

Medium (6)

High (12)

High (16)

3. It is possible

Low (3)

Medium (5)

Medium (9) High (12)

High (15)

2. It is rare that it happens

Very low (2)

Low (4)

Medium (6) Medium (8)

High (10)

1. It would be exceptional

Very low (1)

Very low (2) Low (3)

Low (4)

5-catastrophic

Very high (20)

Medium (5)

PPR: Possible perception of risk. In this way, if we assess the risks on a scale of 1–5 based on the probability of occurrence and we value, on a scale of 1–5, the consequence (impact or potential damage) associated with the risk, we can assess the impact of the risks with a risk matrix like the one in Table 29.2 [25].

29.3 Methodology for Risk Management/Case Study LED Risks The objectives of project risk management are to increase the probability and impact of positive events, and especially to reduce the probability and impact of negative events in the project [18]. Among the methodologies of risk management in projects, we can highlight two fundamental ones: The one established by the UNE-EN 62198 Standard “Risk management of the project. Application guidelines”, and the one proposed by the UNE-ISO 21.500 Standard “Guidelines for the management and management of projects". However, both maintain a common structure that bases the risk management of the project in the processes of identification, evaluation, treatment, and control of risks [23].

410

M. J. Hermoso-Orzáez et al.

Fig. 29.1 Flow chart for risk management, according to UNE-ISO 21500 (Source prepared by the author)

29.3.1 Risk Management in Accordance with UNE-EN 62198 and UNE-ISO 21.500 The UNE-EN 62198 Standard “Risk management of the project. Application Guidelines”, published in January 2015, establishes that risk management includes all management and control activities that allow an organization to face risk. The integrated processes in risk management, according to ISO 21.500, are: • • • •

Identification of risks; Risks evaluation; Risk treatment; Risk control.

These processes are located throughout the life cycle of the project, in the groups of planning, execution, and control processes [24]. See Fig. 29.1.

29.3.2 Techniques for the Identification of Risks There are numerous techniques for the identification of risks that we have enumerated: • • • • •

Information collection techniques (Brainstorming, Delphi technique, etc.); Analysis with checklist; SWOT Weaknesses Threats Strength Opportunities analysis; Expert judgment; Other techniques.

In our methodology, we used the SWOT Analysis of previous studies [5], as the basis, and expert judgment, historical information, review of scientific articles and studies, etc.

29 Analysis and Risk Management in Projects …

411

29.3.3 Definition of Associated Risks in LED Change Projects and Types of Risks In the case of road lighting installation projects, such as the study case proposed for the change to LED technology, where solutions are designed and implemented to complex problems to be evaluated in some aspects, and normally not all the information that would be necessary, we can also find many sources of risk. It is common for these projects to be executed with a certain level of uncertainty, which gives rise to situations of risk. In this way, and depending on the project, we can find risks due to the lack or delay of a permit, lack or uncertainty in the delivery time of the materials (LED lights), defective deliveries, accidents, strikes, political or legal changes, design errors, climatological factors, diseases or casualties in the project team, bankruptcy or failure of a supplier, management failures, etc. There are risks of different complexity and degree of uncertainty (See Fig. 29.2). Throughout the life cycle of the project, the probability of occurrence of risk will decrease as the uncertainty associated with the project decreases. However, the consequences of the risk event will be increasingly costly to recover. Therefore, there will be a part of the life cycle of the project, normally in the execution phase, where the risk impact (probability x consequence) will be at its maximum. See Fig. 29.3. Depending on where the risk is located, we can talk about two sources of risks in the projects: Internal sources and external sources. Fig. 29.2 Flowchart for the management of information and risk (Source prepared by the author)

Fig. 29.3 The impact of risk throughout the life cycle of the project (Source prepared by the author)

412

M. J. Hermoso-Orzáez et al.

• Internal risks are risks that are inherent in the project and depend on the type of project (size, complexity, degree of innovation, location, etc.); • External risks are risks derived from situations that are external to the project and beyond the reach of the project managers. Changes in inflation, market behavior, legislative changes, political changes, and so on will be external risks. Based on this classification, Table 29.3 defines the risks associated with LED change projects depending on their origin.

29.3.4 Determination of the Main Technical Risks Associated with LED Change Projects We then went on to identify, in greater detail, the technical risks that can affect the LED change project in its purely technical aspect and documented its characteristics. These risks can have a positive or negative impact on the project’s objectives. It is an iterative process, because new risks can be known or changed as the project progresses during its life cycle. Technical risks with a potentially negative impact on the project are called “threats or weaknesses” (Table 29.4), while the risks that have a potentially positive impact on the project are called “opportunities or strengths” (Table 29.5). This process can involve many of the participants: The client, the project sponsor, the project manager, the project team, senior managers, users, risk management experts, other members of the project management committee, and experts, according to the matter at hand. The main result of this process will be the risk registry and its potential assessment. We list the main associated risks in Tables 29.4 and 29.5.

29.3.5 Registration of the Associated Risks in LED Change Projects According to the Method proposed in the Reference Standards, we must establish a risk register. The risk register is a document in which the results of the risk analysis and the planning of the response to the risks are shown. We will include: • A list of identified risks. The risks identified are described at a reasonable level of detail. • A list of potential responses. We will identify potential responses to a risk during the process. • A definition of data associated with each risk, identifying: Type description, responsibility, probability/consequence (estimated) of its appearance, objectives affected and economic quantification of the perceived impact, activities affected by its appearance, possible emergency plans for if this contingency occurs, and

Strikes or political instability

Permits, bails, guarantees and/or administrative authorizations

Embargoes/Expropriations/Construal processes

Theft or theft

Overvoltages/fortuitous overcurrents

Storms/lightning

Quality deficiencies

Defective Contracts or lack of confidence (subcontracts)

Arrears in materials supplies (suppliers)

*Opportunities

*Strengths

*Threats

Excessive *Weaknesses hierarchy in decision making (rigid organization chart)

Note risks associated with the technical performance of LEDs are developed in the following Sect. 29.3.5

Subcontractors

Contest of creditors (suppliers or subcontractors)

Liquidity/Resource availability

Inflation/Project delays and changes in the CPI (defective sheets)

Change of laws/regulations and/or community directives

Modification of the Terms of execution

Defective works or hidden vices

Changes in the Design or improvement of the benefits not included in the original project

Modified projects (economic modification of the project)

/Inst. Related to the execution/inst

External stoppages of administrative processes

*Technical features or design

Related to the organization

Politics

Overwhelming force

Financial/Economic

Internal origin to the project (changes in design, technical, or execution)

External origin to the project (variations of the medium)

Table 29.3 Risks of the projects of change to LED depending on their origin

29 Analysis and Risk Management in Projects … 413

414

M. J. Hermoso-Orzáez et al.

Table 29.4 Negative technical risks of LED. (weaknesses-threats) Weaknesses Naturalness and temperature of Tª color ªK associated with heating by doping phosphorus Durability not contrasted Uncertainty associated with constant technological change Effects on health Light pollution Thermal dissipation Threats

Lack of regulations or technical regulations Damage to Led due to current peaks in cold start or overvoltages Excessive dependence on subsidies Reduction of costs at the expense of quality Annoying and disturbing glare Excessively corseted sheets in the technical characteristics of the LED material Properties influenced by external variables (delivery times, strikes, subcontractors breach, etc Perception in the quality of lighting

Table 29.5 Positive technical risks of LED. (opportunities-strengths)

Strengths

Energy and economic savings Good lighting performance Optimal uniformity Reduced maintenance costs

Opportunities

Market price tending to the downside Advanced technological development Temperature monitoring measures Sustainability and recycling of LED Telemanagement innovations associated with driver control Advances in hiring Easy implementation of solar energy Positioning in Smart City models

secondary risks arising from the occurrence of the contingency. (See Table 29.7 as an example of Risk Register.)

29 Analysis and Risk Management in Projects …

415

29.3.6 Qualitative and Quantitative Analysis of the Associated Risks in Projects of Change to LED The purpose of evaluating risks is to measure and prioritize the risks identified for a subsequent action. This process includes estimating the probability of the occurrence of each identified risk and the corresponding consequence in the objectives of the project if the risk occurs. Subsequently, risks are prioritized according to this evaluation, considering other factors, such as terms and risk tolerance of the main stakeholders. A qualitative analysis or a quantitative analysis can be used, and the result of this process will be the prioritized risks. It consists of prioritizing the risks to perform other analyses or subsequent actions, evaluating and combining the probability of occurrence and the impact of said risks. Thus, organizations can improve project performance by focusing on high priority risks. The priority of the identified risks is evaluated using the relative probability of occurrence, the corresponding impact on the objectives of the project, as well as other factors, such as the response time and risk tolerance on the part of the organization. The usual tool to perform qualitative risk analysis is the risk matrix, where we combine the probability of risk and the expected impact to obtain a prioritized risk classification. It is a quick and economical means of setting priorities for risk response planning and laying the groundwork for quantitative risk analysis if required. An example of a five-level assessment scale for assessing the consequences of risk to LED, according to four criteria (people, environment, economic aspects, and reputation) is the following, also taken from the UNE_EN 62,198 Standard (Table 29.6).

29.3.7 Treatment of the Associated Risks in Projects of Change to LED The purpose of treating risks is to develop options and determine the actions to be taken to respond to risks, improving opportunities and reducing threats to the project’s objectives. This process addresses the risks by incorporating resources and activities into the budget and schedule. The treatment should be appropriate to the risk, cost-effective, timely, realistic within the context of the project, understood by all interested parties, and assigned to an appropriate person. Risk treatment includes measures to avoid, reduce, transfer risk, or accept it by developing the contingency plans to be used if the risk occurs. The main result of this process will be the responses to the risks. The treatment strategies will be:

416

M. J. Hermoso-Orzáez et al.

Table 29.6 Scale of evaluation of the consequences in projects of change to LED, according to the impact on diverse agents and aspects Assessment

People

Environment

Financial aspects

Reputation

5

Several deaths or total permanent disability caused by an accident or occupational disease

Massive effects: Serious and lasting affectation to the environment or important annoyances caused in a large area. Great commercial, recreational, or relative losses to the conservation of Nature

Losses or direct earnings greater than 10 million euros

International impact: Public attention of international media (positive/negative)

4

A single dead person or a single permanent totally disabled person caused by an accident or an occupational disease

Major effects: Serious environmental impact. Intense measures developed by the company for the restoration of the contaminated or damaged environment to its original state

Direct losses or gains between 500.00 and 10 million euros

National impact: Attention of the public and national media (positive/negative)

3

Serious injuries or significant health effects (causes irreversible damage to health, chronic condition)

Localized effects: Limited losses or toxicity emissions known to affect the neighborhood, spontaneous recovery from limited damage in one year

Direct losses or gains between 100.00 and 500,000e

Significant impact: Attention of the public at regional level (positive/negative) and extensive attention from the local media

2

Injuries or slight effects on health (case of injured person who can work with restrictions or with sick leave). Limited or reversible health effects

Minor contamination: Damages important enough to affect the environment, but without permanent effects

Direct losses or gains between 10.00 and 100,000e

Limited impact: Relative attention of the local public (positive/negative), some attention from the local media

1

Wounds or minor health effects (First aid, medical treatment)

Unimportant effect: Damage to the local environment within limits

Direct losses or gains of less than 10,000e

Unimportant impact: Public knowledge without concern

1. Avoid/Remove Action is taken to eliminate the threat or to protect the project from its impact. It involves changing the project management plan in order to completely eliminate the threat. You can also isolate the project’s objectives from the impact of the risk or change the objective that is threatened. Examples of elimination in

29 Analysis and Risk Management in Projects …

417

the case of LED Change Projects: Limit excessive downtime in public tenders, avoid unproven or validated technical solutions, improve technical and economic studies of the project, define precontracts. 2. Reduce/Mitigate It acts to reduce the probability of occurrence or impact of a risk. It implies reducing the probability and/or the impact of an adverse risk to an acceptable threshold. Adopting early actions to reduce the likelihood and/or impact is more effective than trying to repair the damage. Examples of mitigation for the case of LED Change Projects: Adopt less complex projects, perform more tests, select a more reliable supplier, surveil and continuously control the project of change to LED, provide training and education to the installers and personnel involved. 3. To transfer The impact of a threat is transferred to a third party, along with the responsibility for the response. The transfer of a risk simply confers on a third party the responsibility for its management; it does not eliminate it. Transferring risk almost always involves paying a risk premium to the party that assumes the risk. The transfer of responsibility for a risk is more effective when dealing with exposure to financial risks. The transfer tools can be: Insurance subscription, compliance guarantees, bonds, guarantee certificates, etc. You can use contracts or agreements. Examples for the case of LED Change Projects: Subcontracting of risk processes (driver defects, LED diode defects, etc.), establishment of insurance for certain activities, use of manufacturer’s guarantees of LED supplier, transfer of risk by notices/contract. 4. Accept/Retain The risk is recognized, and it is decided not to take any action unless the risk materializes. This strategy is adopted when it is not possible or cost-effective to address a specific risk in another way. This strategy can be passive or active. Passive acceptance does not require any action, but it is a negligent attitude. The active acceptance strategy consists of preparing a contingency plan and a contingency reserve. Examples for the case of LED Change Projects: Contingency plans, contingency funds, reserve personnel.

29.4 Results for the Study of LED Change Projects. Analysis, Valuation, and Treatment of Risks Let us suppose a project of change to LED of 4000 points of light or luminarias with a total cost of PEM Price of material execution before taxes of 1,000,000 of Euros and a term of execution of 6 months. In Table 29.7, we tabulated the aspects related to the identification and initial assessment of the risks. We did this only for technical risks with a negative impact, to be used as an example and with the understanding that this procedure could be perfectly extended to risks with a positive impact and remnants of risks. In addition to identifying each of the risks with a number, we defined those responsible, as well

Color temperature (ºK)

Durability

Technological change

Effects on health

Light pollution

Thermal dissipation

001

002

003

004

005

006

Description

Effects on durability, quality, consumption, and maintenance of LED

Effects on the environment

Glare, stroboscopic effects, circadian regulation processes, or systemic diseases

Constant technical improvements competence

Premature aging of the LED

Perceived color

Risk

Identification and valuation of risk

Facilities Manager

Facilities Manager

Facilities Manager

Facilities Manager

Facilities Manager

Facilities Manager

4/4

3/3

3/4

4/4

5/4

3/3

dd/mm/aa

dd/mm/aa

dd/mm/aa

dd/mm/aa

dd/mm/aa

dd/mm/aa

3/4/4/5

1/2/3/4

1/3/3/5

High (16)

1/5/3/4

Medium 1/5/3/4 (5)

High (12)

High (16)

Very High (20)

4

4

4

3

5

2

(continued)

500,000/2 months

500,000/2 months

500,000/6 months

200,000/3 months

100,000/3 months

30,000/1 month

*Risk ***Category Impact cost assessment (e)/Time P/MA/CE/R

Medium 1/3/1/2 (9)

Responsibility Probability/Consequence Date/Ident Type of risk

Table 29.7 Risk Register, Valuation, and Action Plan for LED Change Projects

418 M. J. Hermoso-Orzáez et al.

Excessive dependence on subsidies

Reduction of costs at the expense of quality

Annoying and Effects on the disturbing glare health of pedestrian, driver, and environmental users

009

010

011

Low quality LEDs and impact on performance and durability

Risk of lack of liquidity to make investments

Effects on durability and maintenance. Nuisance to users

Damage to LED due to current peaks in cold start or over voltages

008

Effects on the technical characteristics of LED manufacturing

Regulations or technical regulations

Risk

007

Description

Identification and valuation of risk

Table 29.7 (continued)

Facilities Manager

Management

Management

Facilities Manager

Facilities Manager

4/3

3/2

4/3

5/4

3/4

dd/mm/aa

dd/mm/aa

dd/mm/aa

dd/mm/aa

dd/mm/aa

1/1/3/3

1/3/4/5

1/1/3/4

High (12)

2/5/2/3

3

2

3

4

3

(continued)

100,000/1 month

10,000/1 month

100,000/2 months

1,000,00/6 months

500,000/6 months

*Risk ***Category Impact cost assessment (e)/Time P/MA/CE/R

Medium 1/1/2/1 (5)

High (12)

Very High (20)

High (12)

Responsibility Probability/Consequence Date/Ident Type of risk

29 Analysis and Risk Management in Projects … 419

Technical bills and administrative clauses not in accordance with the technological evolution of LEDS

Properties influenced by external variables (delivery times, strikes, non-compliance subcontractors, etc.)

Perception in the quality of lighting

012

013

014

Description

Subjective risk but that can have a great impact

Management

Risks linked to Management external agents which are difficult to predict quantify or measure

Investigation and Development

3/2

3/3

4/4

dd/mm/aa

dd/mm/aa

dd/mm/aa

1/1/3/4

Medio (5)

1/1/1/3

2

2

3

(continued)

10,000/1 month

100,000/2 months

500,000/4 months

*Risk ***Category Impact cost assessment (e)/Time P/MA/CE/R

Medium 1/1/2/3 (9)

High (16)

Responsibility Probability/Consequence Date/Ident Type of risk

Decrease in Management levels of competitiveness for advanced companies with heavy investment I + D+I

Risk

Identification and valuation of risk

Table 29.7 (continued)

420 M. J. Hermoso-Orzáez et al.

Provider Provider

Transfer/ensure production against technical changes of the competition

Mitigate/protect the luminaries against harmful or annoying glare. Regulate the sources

Eliminate/protect those with diffuse covers and direct led to the Southern Hemisphere

Mitigate/improve systems of heat dissipation and ventilation

Transfer/development of R + D + I and reversible production lines

Eliminate/progressive start. Slow curve curves and surge protection

Delete/search for self-financing lines

Retain/find balance quality/price

Mitigate/improvements on the envelope of Led or luminaire covers

Transfer/constant feedback is needed with customers, users and/or Administration

Transfer/need to ensure production against unforeseen events

Retain/informative and informative campaigns. Surveys of subjective perception

003

004

005

006

007

008

009

010

011

012

013

014

*Risk assessment Scale from 1 to 5: (P (People)/MA (Environment)/CE (Economic Cost)/R (Reputation) **Description: Delete/Mitigate / Transfer/Retain ***Category risk. Scale from 1 to 5

Management

Management

Management

Management

Management

Management

Provider

Provider

Provider

Provider

Provider

Eliminate/Improve refrigeration source and equipment

002

Provider

Mitigate / Improve Doping LED

Responsible

001

Description**

Actions to take

Table 29.7 (continued)

10,000/2 months

100,000/2 months

100,000/4 months

100,000/1 month

10,000/1 month

100,000/2 months

500,000/6 months

500,000/6 months

500,000/6 months

250,000/2 months

1,000,000/4 months

100,000/3 months

200,000/3 months

20,000/1 month

Final quantification of the risk (e) /Time execution

Final quantification

Low

Medium/ low

Medium

Medium

Medium/Low

Medium

Medium

Medium

Medium

Medium/Low

Medium/Low

Medium

High/Medium

Low

Impact value

29 Analysis and Risk Management in Projects … 421

422

M. J. Hermoso-Orzáez et al.

as calculating them based on the probability and consequence of each risk, according to Table 29.2 (risk matrix). Once they were quantified, we defined the type of risk and its category based on the consequences that the risk has for the agents (people (P) and the environment (MA) and its impact on economic costs (CE) and the reputation (R) of the company executing the project, quantifying the economic impact and the term of each risk assessed. Once the risk had been quantified and categorized, we described the actions to be taken in order to avoid/eliminate, reduce/mitigate, transfer, or accept/retain in the way of dealing with risk, indicating who is responsible for these actions. Finally, we quantified the risk once we had acted, in order to assess its impact once it had been treated (see Table 29.7).

29.5 Conclusions With this work, we intended to analyze, assess, and treat the risks associated with the projects of massive change to LED, trying to determine, quantify, and categorize the different types of risks, according to the Methodology proposed in the updated Standards ISO 31000, ISO 21500, and UNE-EN 62198: [13], based on their potential danger, probability, consequences on people and the environment, economic impact, or the reputation of the companies that execute this type of project. Presenting the results in the form of a registry, we tried to establish for each of the assessed risks their corresponding corrective actions, in the sense of eliminating, mitigating, transferring, or retaining the assessed risk as the case may be, and sought, in any of the cases, a reduction of the impact of the same on the final result of the project. We believe that this work can be very useful for companies that want to know and assess the risks associated with this type of project, especially for the bidding companies that agree to compete in the numerous public bidding projects that are currently being published by the different local, regional, or state public administrations that are being promoted with great force from different areas of the European Union, and that aim to improve energy efficiency and to achieve the gradual replacement of traditional lighting systems by new forms of sustainable lighting with LED.

References 1. Barafort B, Mesquida AL, Mas A (2017) Integrating risk management in IT settings from ISO standards and management systems perspectives. Comput Stand Interfaces 54(3):176–185. https://doi.org/10.1016/j.csi.2016.11.010 2. CEI. Comité Español de Iluminacion (2015) Requerimientos técnicos exigibles para luminarias con tecnología led de alumbrado exterior. Rev 4–120815 (CEIIDEA). Publicaciones CEI, Madrid

29 Analysis and Risk Management in Projects …

423

3. Fiaschi D, Bandinelli R, Conti S (2012) A case study for energy issues of public buildings and utilities in small municipality: investigation of possible improvements and integration with renewables. Appl Energy 97:101–114. https://dx.doi.org/10.1016/j.apenergy.2012.03.008 4. Hermoso-Orzáez MJ, Gago-Calderón A, Rojas-Sola JI (2017) Power Quality and energy efficiency in the pre-evaluation of an outdoor lighting renewal with light-emitting diode technology: experimental study and amortization. Energies 10(7):836 5. Hermoso-Orzáez MJ (2016) Technological led´s change in public projects lighting. “swot” analysis: strengths and weaknesses. In: XX Congreso Internacional de Dirección e Ingeniería de Proyectos - CIDIP 2016 (Cartagena) 6. IDAE (2015) Requerimientos técnicos exigibles para luminarias con tecnología LED de alumbrado exterior 7. IPMA (2015) IPMA Individual competence baseline for project, programme and portfolio management. International Project Management Association. https://products.ipma.world/wpcontent/uploads/2016/03/IPMA_ICB_4_0_WEB.pdf 8. Jägerbrand AK (2015) New framework of sustainable indicators for outdoor LED (Light Emitting Diodes) lighting and SSL (Solid State Lighting). Sustainability 7(1):1028–1063. https:// dx.doi.org/10.3390/su7011028 9. Kinzey B (2015) Restoring Detroit’s Street Lighting System. (U.S. Department of Energy No. PNNL-24692). Pacific Northwest National Laboratory (PNNL), Richland, WA (US) 10. Kostic A, Djokic L (2014) Lighting research and technology. Subjective impressions under LED and metal halide lighting, vol 46 11. Lobao JA, Devezas T, Catalao JPS (2015) Energy efficiency of lighting installations: Software application and experimental validation. Energy Rep 1:110–115. https://dx.doi.org/10.1016/j. egyr.2015.04.001 12. Norma UNE – ISO 31000:2010. Gestión del Riesgo. Principios y directrices. Edicion (2010) Comité CTN 307-Gestión de riesgos. 13. Norma UNE-ISO 21500:2013 Directrices para la dirección y gestión de proyectos. Guidance on project management (Fecha Edición) ( 92013–03–20) ICS03.100.40/Comité CTN 157 – PROYECTOS Equivalencias Internacionales ISO 21500:2012 Idéntico 14. Norma UNE-EN 62198:2014. “Managing risk in projects. Application guidelines” Comité DS/1.( Edicion) (2014). ICS 03.100.01. ISBN 978 0 580 78138 4 15. Ozadowicz A, Grela J (2014) The street lighting integrated system case study, control scenarios, energy efficiency. In: IEEE Emerging Technology and Factory Automation (ETFA). Barcelona, Spain, pp 1–4. https://dx.doi.org/10.1109/ETFA.2014.7005345 16. PMI (PMBOK) (2013) Guia para la dirección y Gestión de Proyectos. (Guia PMBOKª) Project Management Book Of Knowledge (5 Edición). Project Management Institut 17. Popa DM (2018) Case study of engineering risk in automotive industry. Manag Syst Prod Eng 26(1):27–30. https://doi.org/10.2478/mspe-2018-0004 18. Purwanggono B, Margarette A (2017) Risk assessment of underpass infrastructure project based on ISO 31000 and ISO 21500 using fishbone diagram and RFMEA (project risk failure mode and effects analysis) method. In: 10TH international seminar on industrial engineering and management: sustainable development in industry and management. Colección: IOP Conference Series-Materials Science and Engineering. Volumen: 277. Número de artículo: UNSP 012039. Fecha de publicación: 2017. Tipo de documento: Proceedings Paper. Ubicación: Esa Unggul Univ, Tanjung Pandan, INDONESIA. Fecha: NOV 07-09, 2017 Patrocinador(es):Trisakti Univ; Tarumanagara Univ; Al Azhar Indonesia Univ; Atma Jaya Cathol Univ Indonesia; Pasundan Univ; Pancasila U. https://doi.org/10.1088/1757-899X/277/1/012039 19. R.A.E. Real Academia Española (Ed) (2017) El nuevo diccionario de la Real Academia Española de la Lengua Madrid : Felipe IV, 4 - 28014 Madrid - Teléfono: (34) 91 420 14 78. https://www.rae.es 20. Rehacek P (2014) Standard ISO 21500 and PMBoK® Guide for project management. Int J Eng Sci Innov Technol 3(1):2998–3295 21. Rehacek P (2017) Risk management standards for project management. Int J Adv Appl Sci 4(6):1–13. https://doi.org/10.21833/ijaas.2017.06.001

424

M. J. Hermoso-Orzáez et al.

22. Serrano-Tierz A, Martínez-Iturbe A, Guardon-Muñoz O, Santolaya-Sáenz J (2015) Análisis de ahorro energético en iluminación LED industrial: Un estudio de caso. Dyna Volumen 82 23. Šviráková E, Soukalová R (2015) Creative project management: reality modelling. In: Innovation Vision 2020: From Regional Development Sustainability to Global Economic Growth, International Business Information Management Association (IBIMA), Amsterdam, Netherlands, vol 1, pp 1085–1097. https://publikace.k.utb.cz/handle/10563/1005704. 24. Varajao J, Colomo-Palacios R, Silva H (2017) ISO 21500:2012 and PMBoK 5 processes in information systems project management. Comput Stand Interfaces. 50:216–222. https://doi. org/10.1016/j.csi.2016.09.007 25. Wijnia Y (2011) Asset risk management: issues in the design and use of the risk matrix. Engineering asset management and infrastructure sustainability. Springer, London, pp 1043– 1110

Chapter 30

Improvement of Energy Efficiency in the Building Life-Cycle. Study of Cases J. M. Piñero-Vilela, J. J. Fernández-Domínguez, A. Cerezo-Narváez, and M. Otero-Mateo

Abstract The study of energy efficiency through life cycle assessment—LCA— unlike other researches that focus on minimizing direct energy consumption only during exploitation and maintenance phase, allows to evaluate the environmental impact of a building in all its phases, considering the energy incorporated in all the processes involved, from the manufacture of its components, its construction, and subsequent use until the end of its life—dismantlement—. Beginning with the study of a couple of cases, a Sports Complex and a Center for Children Education in their drafting phases of the executive project—stage in which the design team defines the technical specifications— and in order to bring on the design not only of more efficient buildings but also more sustainable ones, a series of alternative constructive solutions—building envelope and lighting and air-conditioning facilities— are proposed to those originally prescribed, which, although they meet the minimum legal requirements, can be optimized considering not only the costs incurred in the operations and the energy efficiency in service but also the whole LCA, concluding that it is necessary to take this factor into account when defining priorities and actions in terms of energetic sustainability. Keywords Energy efficiency · Life cycle assessment · Sustainability · Environmental impact · Energy rating · Building energy efficiency label

J. M. Piñero-Vilela TEP-955 InTelPrev Research Group, School of Engineering, University of Cádiz, 10 Universidad de Cádiz Avenue, 11519 Puerto Real, Spain J. J. Fernández-Domínguez Mechanical Engineering and Industrial Design Department, School of Engineering, University of Cádiz, 10 Universidad de Cádiz Avenue, 11519 Puerto Real, Spain A. Cerezo-Narváez (B) · M. Otero-Mateo TEP-955 InTelPrev Research Group, Mechanical Engineering and Industrial Design Department, School of Engineering, University of Cádiz, 10 Universidad de Cádiz Avenue, 11519 Puerto Real, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_30

425

426

J. M. Piñero-Vilela et al.

30.1 Introduction The 2010/31/EU directive [6], establishes that from December 31st, 2018, all new publicly owned buildings must have nearly zero power consumption, and from December 31st, 2020, all new buildings must comply. However, new materials incorporated into the envelope and facilities, which are necessary to achieve nearly zero consumption, have consumed primary energy necessary in all the production processes from the extraction of raw materials up to their use on site. Nonetheless, this incorporated energy is not counted in the energy certification nor in the definition of nearly zero energy buildings [2]. The life cycle analysis—LCA— of a building allows to evaluate its environmental impact, “from cradle to grave”, identifying the consumption of primary energy in all its stages. Beyond energy consumption during the useful life of a building, the study of consumption in the manufacture, transport, and construction of the various building components, as well as in the demolition and waste transport to a landfill or a recycling plant must be taken into account [7], as summarized in Fig. 30.1. The incorporation of the LCA in the design phase of a building allows to identify the environmental impacts that it will generate, and therefore, to take appropriate measures to reduce emissions [4]. For the application of the LCA methodology in a building, the use of the necessary materials must be identified and quantified, as well as the energies consumed

Demolition

Exploitation Maintenance

Raw Material Extraction

LCA Phases

Construction

Manufacture

Transport Logistic

Fig. 30.1 Life cycle stages of a building

30 Improvement of Energy Efficiency …

427

during the whole life of the building, and the emissions and generated waste, so that the caused environmental impacts can be determined. The use of renewable/nonrenewable resources, global warming, acidification, eutrophication, damage to the ozone layer, accumulation of polluting gasses—smog—, abiotic deterioration, and waste recycling must also be considered [10]. In order to identify the energies involved and associated with the building process, from its early stages until the end of its useful life, the following definitions will be used: • Used: Energy consumed by ventilation, cooling, heating, lighting, and sanitary hot water systems. • End: Energy consumed by the user. • Primary: Energy necessary to produce final energy. • Incorporated: Energy needed for raw materials extraction, transport, treatment, and processing until the final constructive solution definition. Buildings consume 42% of the final energy and generate near 35% of the greenhouse gas emissions in the European Union [15]. Quantified environmental information of a building life cycle can be disclosed through environmental product declarations—EPD— [3] or type III eco-labels, according to the classification of the European standard EN 15,804 [1]. This will facilitate an objective, comparable and reliable communication of environmental performance, so it would become a practical instrument to achieve the objectives of the energy study of a building life cycle [2]. In parallel to the LCA, which helps characterize the energy impact of a building, the life cycle cost analysis—LCCA— represents a paradigm shift, opposing a long term vision against the traditional perspective that aims to obtain immediate profitability with a minimum initial investment, ignoring their future economic and environmental effects [8]. The LCCA is an options comparison method, so multiple parameters can be used to select an “optimal” solution. Instruments for project evaluation such as the net present value—NPV—, total investment benefit ratios, analysis of annual costs reduction, static or dynamic recovery periods, rates of return or internal rates of return—IRR— can also be used. Each building presents a series of intrinsic characteristics, so the application of the LCA is different for each case. However, there are common characteristics— materials, construction elements, facilities, use— so they can be compared based on these similarities [9]. It is also necessary to consider that to make an efficient building design in the use stage, the increase in energy incorporated to reduce consumption cannot represent an increase of the total energy consumption for the whole life cycle. The energy incorporated to decrease consumption in the use stage increases the total energy consumption, so in order to solve a problem, another can be created [11].

428

J. M. Piñero-Vilela et al.

30.2 Objectives The objective of this research is to study energy consumption, pollutant emissions, and associated costs—LCA and LCCA— throughout the life cycle of two edifications to be built in the province of Cádiz, a Sports Complex in Puerto Real and a Center for Children Education in Conil de la Frontera. In order to contrast the obtained results, two constructive states are proposed: • Standard construction solution—less efficient— in which traditional solutions of envelope—cover, facade, walls, foundations— and facilities—air-conditioning, sanitary hot water, and lighting— are defined. • Improved construction solution—more efficient— which includes passive and active alternatives more efficient both in the use phase and throughout the life cycle but more initially expensive, requiring an initial expenditure.

30.3 Case Studies The first case study—Case 1— is a Center for Children Education—CCE— located in the town of Conil de la Frontera—Cádiz— on a 770 m2 plot, developed on three floors—basement, ground floor, and upper floor— as summarized in Fig. 30.2, with a built area of approximately 750 m2 . The second case study—Case 2— is a building that houses a Sports Complex—SC— located in the municipality of Puerto Real— Cádiz— on a 1.490 m2 plot and a built area of approximately 1.350 m2 , distributed in two plants—ground floor and mezzanine— as shown in Fig. 30.3. CCE: Location in Lot

CCE: Building Section

C/ TORRE DEL PUERCO

CCE: Ground Floor

CCE: Upper Floor

Fig. 30.2 Center for Children Education—Case 1—

CCE: Basement

30 Improvement of Energy Efficiency …

429

SC: Building Section

SC: Ground Floor

SC: Mezzanine

Fig. 30.3 Sports Complex—Case 2—

The optimization of the thermal envelope—materials and building elements— and the facilities—air-conditioning, sanitary hot water, and lighting— of both buildings is studied, in order to achieve a higher energy efficiency and to contribute, through the LCA and LCCA, to the promotion of sustainability, without financial loss to owners and users.

30.3.1 Thermal Envelope In Tables 30.1 and 30.2, which summarize the case studies thermal envelopes, it is observed that after slight variations in the thickness and/or construction materials of the previous state—LESS efficient case—, the thermal transmittance of these elements is reduced in the—MORE efficient case— proposed. In both buildings, an “ETICS” system in facade has been chosen which, through insulation on the side in contact with the outside, decreases the thermal transmittance. For improvement of the basement wall, in the Children Education Center, insulation is placed on the outer face of the enclosure. In the remaining enclosures, the thickness of the insulation has been increased.

430

J. M. Piñero-Vilela et al.

Table 30.1 Thermal envelope variation in the Children Education Center case

LESS Efficient Case Materials and Construction Elements

MORE Efficient Case

U: Transmittance e: Thickness

Materials and Construction Elements

U: Transmittance e: Thickness

Facade U: 0,33 e: 30

Facade U: 0,18 e: 35

Basement wall U: 0,88 e: 30,2

Basement wall U: 0,23 e: 38,2

Slab U: 0,56 e: 20

Slab U: 0,21 e: 30

Covering U: 0,37 e: 61,7

Covering U: 0,27 e: 64,7

Transmittance -U- in W/(m2 ·K), Thickness -e- in cm Table 30.2 Thermal envelope variation in the Sports Complex Case

LESS Efficient Case

MORE Efficient Case

Materials and

U: Transmittance

Materials and

U: Transmittance

Construction Elements

e: Thickness

Construction Elements

e: Thickness

Facade U: 0,39 e: 25

Facade U: 0,18 e: 35

Slab U: 0,44 e: 20

Slab U: 0,12 e: 30

Covering U: 0,44 e: 7,5

Covering U: 0,33 e: 10

Transmittance -U- in W/(m2 ·K), Thickness -e- in cm

30 Improvement of Energy Efficiency …

431

30.3.2 Facilities 30.3.2.1

Air-Conditioning Installation

Following the Regulation for thermal installations in buildings—RITE— in Spanish [12], the air-conditioning of buildings is carried out by installing VRV systems— variable refrigerant volume systems— with heat recovery, which allows providing heat and cold at the same time. These systems have the advantage of controlling the refrigerant volume provided to the condensation-evaporation batteries, resulting in energy savings and optimized control.

30.3.2.2

Sanitary Hot Water System Installation

To meet the sanitary hot water demand—SHW— of both buildings, the adopted solution is the installation of solar panels that meet the minimum solar contribution established in Sect. 30.4, of the core document for energy savings of the building technical code CTE HE 4—Energy savings [13]. The contribution of auxiliary power in the previous state is carried out by electric boilers. In order to improve energy efficiency on the case studies, the electric boilers are replaced by biomass boilers.

30.3.2.3

Lighting Installation

The lighting installation in the previous state is solved by using fluorescent lamps in all enclosures of both buildings. For the proposed state, these fluorescent lamps are replaced by LED technology lamps, with better lighting energy efficiency values— EEV—. Other advantages of these type of lamps are: the useful life of these lamps is longer, they turn on instantly, do not contain toxic substances—mercury and gaseous pollutants, the emission of UV rays ceases and the CO2 emissions are reduced in a high percentage.

30.4 Results With the reduction of the thermal transmittance of materials and construction elements chosen to replace standardized solutions on the geographical, social, and economic context in which the buildings are placed, a reduction of the thermal loads is achieved in the different enclosures of the two proposed case studies, as shown in Tables 30.3 and 30.4, with the resulting reduction in the energy required to supply power to the air-conditioning units.

432

J. M. Piñero-Vilela et al.

Table 30.3 Thermal cooling and heating loads for the Children Education Center

Table 30.4 Thermal cooling and heating loads for the Sports Complex

Thermal loads

LESS efficient case

MORE efficient case

(W/m2 )

96.6

87.3

Heating (W/m2 )

49.1

40.7

Thermal loads

LESS efficient case

MORE efficient case

Cooling (W/m2 )

135.5

132.6

(W/m2 )

158.4

153.0

Cooling

Heating

The environmental loads—CO2 emissions and energy consumption— from all the LCA phases are assessed, thanks to the CYPECAD MEP 2018, a professional software [5], in order to quantify the results and suggest environmental improvements. Tables 30.5 and 30.6 show the results obtained for both buildings. The emissions and consumption of non-renewable primary energy of the main facilities, which influence the energy rating—heating, cooling, sanitary hot water, and lighting— were obtained using the LIDER-CALENER Unified Tool—HULC— [14]. HULC is a software developed by Associations and universities with the approval by the Ministry of Development that complies with CTE DB-HE—Basic Energy Saving Document [13]. Tables 30.7 and 30.8 show the results obtained for both buildings. Once the emissions of CO2 and the consumption of non-renewable primary energy are measured and contrasted, both building energy efficiency labels are obtained, as shown in Figs. 30.4 and 30.5. Both emissions and consumptions are halved in both cases. Table 30.5 LCA results for the Children Education Center Phases

Product

Annual amount Total 50 years

Emissions (kg Consumption CO2 ) (e/m2 )

Consumption (e/m2 ) 203.1

332,560.0

215.8

5,620.0

4.2

6,050.0

4.5

590.0

0.4

590.0

0.4

Per year

15,083.4

18.8

7,564.7

9.3

50 years

754,170.0

942.5

378,235.0

462.9

2,736.8

2.0

2,736.8

2.0

354,860.2

228.6

349,501.5

232.0

1,093,946.8

1,152.2

720,171.8

685.6

Construction

Demolition

MORE efficient case

Emissions (kg CO2 ) 330,830.0

Transport Use

LESS efficient case

30 Improvement of Energy Efficiency …

433

Table 30.6 LCA results for the Sports Complex Phases

Product Transport

LESS efficient case

MORE efficient case

Emissions (kg CO2 )

Emissions (kg Consumption CO2 ) (e/m2 )

237,840.0

82.4

313,550.0

114.8

5,090.0

2.1

6,880.0

2.8

Construction Use

Consumption (e/m2 )

890.0

0.4

890.0

0.4

Per year

22,630.7

16.1

11,521.0

8.5

50 years

1,131,535.0

805.5

576,050.0

423.4

Demolition Annual amount Total 50 years

4919.8

2.0

4919.8

2.0

266,450.7

100.6

332,841.0

126.5

1,380,274.8

892.3

902,289.8

543.4

Table 30.7 Energy rating of the Children Education Center Partial indicators

LESS efficient case

MORE efficient case

Emissions

Emissions

Consumption of non-renewable primary energy

Consumption of non-renewable primary energy

Heating

8.1

47.6

7.1

42.0

Cooling

10.0

58.8

5.1

30.2

3.7

21.8

0.2

1.0

Sanitary hot water Lighting

11.1

79.5

4.0

28.8

Annual amount

32.9

207.7

16.4

102.0

Emissions in kg CO2 /m2 year Consumption of Non-renewable Primary Energy in kWh/m2 year Table 30.8 Energy rating of the Sport Complex Partial indicators

LESS efficient case

MORE efficient case

Emissions

Emissions

Consumption of non-renewable primary energy

Consumption of non-renewable primary energy

Heating

0.4

2.2

0.4

2.2

Cooling

0.7

3.8

0.7

3.9

Sanitary hot water

4.7

27.8

0.3

1.2

Lighting

5.3

37.7

4.2

30.2

Annual amount

11.1

71.3

5.6

37.5

Emissions in kg CO2 /m2 year Consumption of Non-renewable Primary Energy in kWh/m2 year

434

J. M. Piñero-Vilela et al. LESS Efficient Case

MORE Efficient Case

Global

Energy

Indicator

Consumption

Emissions

KWh/m2 year

KgCO2/m2 year

A

Global

Energy

Indicator

Consumption

Emissions

KWh/m2 year

KgCO2/m2 year

A

102

16,4

B

B C

207,7

32,9

C

D

D

E

E

F

F

G

G

Fig. 30.4 Energy rating of the Children Education Center LESS Efficient Case

MORE Efficient Case

Global

Energy

Indicator

Consumption

Emissions

KWh/m2 year

KgCO2/m2 year

Global

Energy

Indicator

Consumption

Emissions

KWh/m2 year

KgCO2/m2 year

A

A

B

B

C

71,23

10,98

37,36

C

D

D

E

E

F

F

G

G

Fig. 30.5 Energy rating of the Sport Complex

5,47

30 Improvement of Energy Efficiency …

435

30.5 Conclusions The energy efficiency is improved based on the analysis of the previous state of the thermal envelope and principal facilities. Studies conducted for the modification and associated improvement of materials and building elements, as well as the lighting system, lead to suggest a technology change from fluorescent to LED technology. Regarding sanitary hot water facilities, the minimum solar contribution is included, and a change from electric boilers to new biomass boilers is suggested. This proposal leads to improve energy efficiency and sustainability by minimizing material costs, as summarized in Tables 30.9 and 30.10, and amortizing the initial outlay over the useful life of the building. At the Children Education Center, for an initial extra cost of 68 euros, the consumption of primary non-renewable energy, for a useful life of fifty years, is reduced by 466 euros, almost 7 times the initial increase. By contrast, at the Sports Center, almost 3.5 times the initial one —349 euros for an initial extra cost of 101 euros—. Finally, as shown in Table 30.11, savings are achieved in CO2 emissions with the construction materials and installations proposed for the most efficient cases. Just over a third reduction in total emissions is obtained. Table 30.9 Material execution budget—MEB—

MEB (e/m2 )

Children Education Center

Sports Center

LESS efficient case MORE efficient case

LESS efficient case

MORE efficient case

919.00

540.80

641.90

987.20

 (%)

+7.4%

+18.7%

Table 30.10 Energy consumption—EC—

EC (e/m2 )

Children Education Center

Sports Center

LESS efficient case

MORE efficient case

LESS efficient case

MORE efficient case

1,152.20

685.60

892.30

543.40

 (%)

−40.5%

Table 30.11 Saving of CO2 emissions

−39.1%

Emission kg CO2

Emission kg CO2

Children Education Center

Sports Center

LESS efficient

1,093,946.80

1,380,274.80

MORE efficient

720,171.80

902,289.80

Saving

373,775.00

477,985.00

 (%)

−34.2%

−34.6%

436

J. M. Piñero-Vilela et al.

References 1. AENOR (2014). UNE EN 15804+A1:2014 Sustainability of construction works. Environmental product declarations. Core rules for the product category of construction products. AENOR, Madrid 2. Assiego R (2015) Eficiencia energética en edificios desde la perspectiva de ciclo de vida. Casos de estudio. PhD thesis of the Universidad de Málaga 3. Benaviste G, Gazulla C, Fullana P, Celades I, Ros T, Zaera V, Godes B (2011) Life cycle assessment and product category rules for the construction sector. The floor and wall tiles sector case study. Informes de la Construcción 63(522):71–81. http://doi.org/10.3989/ic.10.034 4. Carabaño R, Bedoya C, Ruiz D (2014) La metodología del análisis de ciclo de vida para la evaluación ambiental en el sector de la construcción: Estado del arte. In: 1st International congress on research in construction and architectural technologies. UPM, Madrid 5. CYPE Ingenieros (2018) CYPECAD MEP, versión 2018.a 6. European Union (2010) Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings. Off J Eur Union 153:13–35. ISSN 1977-0928 7. Fullana P, Puig R (1997) Análisis del ciclo de vida. Barcelona, Rubes. ISBN 978-844970070 8. García F, Armengot J, Ramírez G (2015) Life cycle cost analysis as an economic evaluation tool for sustainable building. State of the art. Informes de la Construcción 67(537):1–8. http:// doi.org/10.3989/ic.12.119 9. Hernández JM (2013) Metodología basada en ACV para la evaluación de sostenibilidad de edificios. PhD thesis of the Universitat Politècnica de Catalunya 10. Lamana R, Hernández A (2005) Análisis del ciclo de vida. UPM, Madrid 11. Ramesk T, Prakash R, Shukla KK (2010) Life cycle energy assessment: an overview. Energy Build 42:1592–1600. https://doi.org/10.1016/j.enbuild.2010.05.007 12. Spanish Government (2007) Real Decreto 1027/2007 para la aprobación del Reglamento de Instalaciones Térmicas en los Edificios. Boletín Oficial del Estado -BOE- 207:35931–35984. ISSN 0212-033X. 13. Spanish Government (2013) Orden FOM/1635/2013 para la actualización del Documento Básico DB-HE “Ahorro de Energía”, del Código Técnico de la Edificación, aprobado por Real Decreto 314/2006. Boletín Oficial del Estado -BOE- 219:67137–67209. ISSN 0212-033X. 14. Spanish Government (2017) Herramienta Unificada LIDER-CALENER -HULC-, version 1.0.1564.1124 15. Zabalza I, Aranda JA, Scarpellini S (2012) “EnerBuilCA” project: Desarrollo de una base de datos y una herramienta de análisis de ciclo de vida de edificios adaptada a la región sudoeste de Europa. In: XI Congreso Nacional del Medio Ambiente. CONAMA Foundation, Madrid

Chapter 31

Methodology for Valuation of Water Distribution Network Projects Based on the Potential Energy Recovery M. Iglesias-Castelló, P. L. Iglesias-Rey, F. J. Martínez-Solano, and J. V. Lozano-Cortés Abstract Water distribution networks are considered as large energy consumers and usually have excess of energy that, in a classical methodology, are evaluated from the difference between the pressure and the minimum required (potential energy recovery, PRE). These excesses could be recovered either by the users (consumers) of the system (PREu ) or by the managers of the water supply systems (PREr ). This distribution depends mainly on the topography of the network. In this paper, an algorithm to determine the PREr of a water distribution network is presented. Traditionally, water distribution network sizing projects ware based on the determination of the diameters with the lower investment but satisfying the hydraulic requirements of the network. In this paper, a different sizing alternative based on the determination of the PREr is presented. This method allows defining an energy efficiency index that could be used as an indicator for comparing different solutions. The method is applied to several networks in the literature in order to validate the study. Keywords Energy recovery · Water distribution network · Sustainability

M. Iglesias-Castelló · P. L. Iglesias-Rey (B) · F. J. Martínez-Solano · J. V. Lozano-Cortés Dpto. de Ingeniería Hidráulica y Medio, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] M. Iglesias-Castelló e-mail: [email protected] F. J. Martínez-Solano e-mail: [email protected] J. V. Lozano-Cortés e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_31

437

438

M. Iglesias-Castelló et al.

31.1 Introduction The excessive use of the energy demand has caused one of the biggest energy crises worldwide, given the current heavy dependence on energy resources and the inability to meet the energy needs. But not only does the excessive use affect the availability of resources, but also their inefficient consumption produces a great environmental impact. As a consequence, the emission of GHG (Greenhouse Gas) increases. This fact has led to the emergence of different energy technologies that mitigate pollution with the implementation of measures for the control of emissions into the atmosphere. The water-energy binomial is composed of two basic resources for our society, which have been very important throughout the history of mankind, but traditionally have not been considered jointly. Throughout history, only the nexus water-power has been discussed in regard to the hydroelectric energy production. However, both resources are very connected since they have a mutual interdependence and influence in many aspects in a joint way for a sustainable and effective development of our society. Thus, the water-energy problem must be understood from a dual approach. One of the first studies to be undertaken relating the water industry is to determine if it requires any special analysis on its associated energy consumption. In other words, it is about determining the energetic implication the new demands and requirements may have in the water supply. At the same time, it is necessary to analyze whether the planned increase in energy consumption for the coming years moves along the same lines as the available resources. That is, if the increase in water use related to the production of electrical energy [10], will be compatible with the expected increase of the other water uses. Within this framework of concern for the environmental effects of energy, the water market has proved to be one of the main energy consumers in the world. In fact, approximately 7% of the energy consumed in the world is related directly or indirectly with the production, transportation, distribution, collection, and discharge of water. Although the water industry is a great consumer of energy, not all of its phases consume the same energy. Thus, the water transportation and distribution can represent between 25 and 40% of the total energy consumption; the purification processes between 4 and 40% (depending on the quality of the raw water source); treatment and purification of wastewater between 15 and 55% (depending on the level of demand of the discharged water quality); and the transportation and distribution of reused waters can consume between 3 and 15% of the total sector’s energy. In short, when a water supply network project is addressed, it must be borne in mind that the issues related to the dimensioning of the potable water transportation network can amount to up to 40% of the energy of the process. Therefore, this work will focus on the study on the development of a series of indicators that allow evaluating water distribution networks (WDN) from the energy point of view. This work is the continuation of others which in recent years have focused on the water distribution systems (WDS) energy study. Specifically, some of these studies have focused on reducing energy costs and water losses [7, 8].

31 Methodology for Valuation of Water Distribution …

439

The methodology for the evaluation of WDN projects will be centered on the concept of the system’s potentially recoverable energy (PRE). That is to say, it’s about analyzing the possible excesses of energy in the WDS and to analyze the way in which these can be recovered. In this sense, energy recovery (ER) is a subject already studied in WDN by various authors [3, 5, 12]. A part of these works focuses on using microturbines [2, 9], which are generally pumps running inversely and modified to have power generation capacity. These are called PATs (Pump as Turbines). Other alternatives for reducing energy in WDS are closely linked to the definition of sectors in the network. In a quite traditional manner, a pressure reducing valve (PRV) has been installed at the entrance of each of these sectors [11, 13]. An alternative approach based on the ER aims to replace these PRVs by elements with the ability to take advantage of dissipated energy [4, 15]. So far, there are no studies that allow defining in a clear, two-way fashion the maximum PRE for a WDS. In this sense, some methods can be found in the literature to energetically analyze a WDN [6]. As described below, in this type of approach the PRE is defined in terms of excess pressure at the consumption points, understood as the difference between the level of pressure supplied and the minimum pressure level required for operating the system. Therefore, the main focus of this work is the PRE can be recovered by WDS managers, either by installing microturbines, PATs or PRVs. On the other hand, another part of the PRE can only be recovered by users in the form of picoturbines that load small electric batteries. The ultimate goal of this work is to quantify what percentage of the PRE can be recovered in the WDN (PREr ) and what percentage could only be retrieved by users (PREu ).

31.2 Indicators and Energy Balance of a WDS Developing a good WDS energy balance [1] can be a beneficial way to detect problems that may arise in the management of water resources throughout the network. A WDS energy balance can also detect the state of the system’s pipelines or indicate how much of the PRE could be used. When implementing a WDS, it is necessary to take into account all the energy that participates in the distribution of water, from the energy provided by deposits or pumps, up to the energy consumed by end users. The analysis of the energy balance can be done from two different perspectives: the energy produced or the consumed energy (Fig. 31.1). The energy the network receives (E sum ) is supplied by the potential energy contained in tanks and reservoirs (E e ), since they have a height with respect to the consumption points and is also supplied by pumping units (E b ). On the other hand, the energy that is consumed on the network has different terms, although it is divided into two large groups: dissipated energy (E d ) and useful energy (E u ). The dissipated energy is that which is lost along the network. Within this group, it can be distinguished between:

440

M. Iglesias-Castelló et al.

Tanks

Recoverable energy

Supplied Energy

Useful Energy

Network

Minimum required energy Topography

Consumed Energy Pumps

Users

Minimum pressure

Apparent losses Energy Loss

Leaks

Tanks

Real losses Fricon

Pumps

Valves

Fig. 31.1 Energy balance of a WDS

• Real losses (E d, r ). It is the energy that dissipates and produces no benefit. There are different terms that make up these real losses: – Frictional losses (E d, f ): caused by the turbulence of the flow along the pipelines. – Losses in pumping units (E d, b ): due to the different electro-mechanical failures. It is the difference between the energy consumed and the energy transmitted. – Losses in valves (E d, v ): It is necessary for the valves to perform maneuvers for the correct functioning of the system, but they bring losses that must be accounted for. • Apparent losses (E d, a ), that energy which is not delivered to users, but cannot be considered a dissipation of energy. Apparent losses are composed of: – Losses due to leaks in the system (E d, l ): It is an energy that has been transported into the system, but that at the end is not delivered to the end users because it gets filtered by the different damages the system has. – Losses by the water accumulated in the deposits (E d, d ): When the simulation is performed, there is a potential energy in the tanks due to the level of water they have. If at the end of the simulation the level of water in the deposits is higher, it means that a part of the energy is stored in the tanks. This energy is an input energy that has not been delivered to the users. Therefore, this energy can be considered an energy loss. If at the end of the simulation the water level is lower than the initial level, it is an input energy and not dissipated. Therefore, it is a negative energy loss. The energy that is delivered to users is called useful energy (E u ). Two clearly defined parts can be distinguished within this useful energy: the energy which is needed to carry water to the topographic elevation or potential energy (E u, t ) and the pressure energy that is provided at each consumption node (E u, p ). However, a more practical differentiation of the useful energy is:

31 Methodology for Valuation of Water Distribution …

441

• Minimum energy (E min ): In this term, both the topographical energy (E u, t ) and the energy associated to the fact of having minimum pressure available in each node of consumption (E pmin ) are taken into account. • Potentially recoverable energy (PRE). It is the excess of energy that reaches the nodes and is not needed. In short, the PRE is an indicator of excess energy a system may have. Thus, a system with high PRE values supposes oversizing. This oversizing may stem from issues such as: high diameters; oversized pumps for the required performance; tanks too high for the levels of pressure required; or systems with a favorable terrain where energy has not been properly exploited. In general, any excess energy in the consumption nodes could be recovered by users. These users could install micro or picoturbines for charging batteries that then allow them to use energy, either directly or through inverters. However, depending on the topography of the network, there is a part of the PRE which allows the installation of ER devices in the WDN itself. That is to say, the installation of microturbines or PATs on certain points of the network allows to recover part of the PRE. This is called PREr . The rest of the PRE that cannot be recovered on the network is that which only users (PREu ) can recover. Mathematically, the energy balance of a WDS is expressed as E sum = E e + E b

E con

 ⎧ E d,r = E d, f + E d,b + E d,v ⎪ ⎪ ⎪ E = E d,r + E d,a → ⎪ ⎨ d E d,a = E d,l + E d,d = Ed + Eu →  ⎪ E min = E u,t + E p min ⎪ ⎪ ⎪ ⎩ E u = E u,t + E u, p = E min + PRE → PRE = PREu + PREr

(31.1) Thus, the valuation of WDN projects will be based on the breakdown of the PRE. That is, it will be necessary to break down what percentage of the PRE system managers recover (PREr ) and what percentage can only be recovered by users (PREu ). Therefore, the methodology described below focuses on the identification of the maximum PREr of a WDN.

31.3 Methodology The methodology for obtaining the PREr takes as a starting point any operating design of a WDS. That is, it is admitted that a fully defined WDN is available in such a way that the circulating flows through the pipes and the pressures in the different nodes can be determined. Moreover, in order to speak of PREr it is admitted that all

442

M. Iglesias-Castelló et al.

nodes verify that the pressure is greater than the minimum pressure required in each case. The hypothesis on which the method is based is to consider that the distribution of flows before and after performing any ER is the same. This hypothesis poses that the installation of a certain ER device automatically causes the cancellation of a certain number of pipelines in which it is not possible to install any additional ER elements. In some cases, this restriction has its origin in maintaining the distribution of flows. In other cases, this restriction is intended to ensure the compliance of minimum pressure restrictions on all points of the network. In order to illustrate the methodology, its application will be directly analyzed on an example network. Thus, the example network of Fig. 31.2, is considered to have 15 nodes of consumption which require a minimum pressure of 20 m. Figure 31.2 represents the pressure levels in the network when the pump is running. As can be seen, there are notable excesses of pressure in all consumption nodes. However, it is not possible to install any element to reduce pressure in the system since this would entail a modification of the circulating flows for some of the network pipes. An opposite case is shown in Fig. 31.3, representing the operation of the same network but in a different scenario. In this case, the pump is present and all consumption nodes are fed from the reservoir. As in the scenario in Fig. 31.2, there is an excess of pressure in all knots with respect to the minimum pressure. However, unlike the scenario in which the pump is running, here pressure reduction systems can be installed without affecting the circulating flows through the lines. In short, the first step of the methodology (see Fig. 31.4), is to cancel the sections where it is not possible to install any ER system. In general, this process is done T1 3.00 N11

N12

N13

35.23

35.98

37.55

N21

N22

N23

N24

N10

E1

36.52

38.93

40.28

47.63

48.13

0.00

N31

N32

N33

N34

36.98

36.49

39.66

43.53

N41

N42

N43

N44

37.23

35.53

40.20

41.66

Fig. 31.2 Example network. Scenario 1: operating pump

31 Methodology for Valuation of Water Distribution …

443

T1 4.41 N11

N12

N13

33.91

34.97

37.10

N21

N22

N23

N24

N10

E1

34.86

36.84

35.82

38.81

34.81

0.00

N31

N32

N33

N34

34.82

33.80

35.80

38.78

N41

N42

N43

N44

34.80

32.79

36.79

37.77

Fig. 31.3 Example network. Scenario 2: pump not running; supply from the tank

Localize Crical Node

Pressure nodes (tanks, reservoirs)

Pressure reducon in Origin Nodes

Definion of Origin Nodes (ONs)

Cancellaon of secons without the possibility of installing ER devices

Hydraulic analysis (EPANET)

Define new Origin Nodes (ONs)

Nº available Nodes > 0

Excess Pressure Calculaon

SI

NO

END

Fig. 31.4 Diagram of the proposed methodology

whenever what are referred to as Initial Origin Nodes (ION) are defined. These nodes are network points at the ends of lines where no more ER elements can be installed. Thus, the first IONs are initial pressure nodes of the model (E1 and T1 nodes in the considered example). From this point on, sections will be cancelled. These sections that are cancelled are all those which interconnect origin nodes along the flow lines defined by the simulation. The process of pipes cancellation implicitly involves the definition of new IONs. In the case of the scenario in Fig. 31.2, starting with node E1 and moving onto knot 1, all the pipes are scanned along the flow lines defined by the flows. That is why in this case all pipes get cancelled, leaving no available nodes. On the contrary, in the case of the scenario in Fig. 31.3, initially there is a single ION (node T1); therefore, no pipelines get cancelled and all nodes remain available. In this case, since no lines gets cancelled, all IONs become new origin nodes (ON).

444

M. Iglesias-Castelló et al.

Given that the ON number is greater than zero, it is necessary to perform a new network hydraulic behavior analysis by using the EPANET model. This analysis recalculates the circulating flows (which by hypothesis should not be different) and the pressures in the network nodes. This WDN analysis also allows locating the critical node, which pressure is closest to the required minimum. In the critical node, the difference between its pressure (pc ) and the required minimum pressure (pmin ) is the drop in pressure HER which all ER systems installed at that time should have H R E =

pmin pC − γ γ

(31.2)

Therefore, at that time, an ER device is installed in all the lines of the model that have as initial node one of currently active ONs. Such ER will represent a pressure drop equal to the calculated value of H ER . Once this process is completed, it is necessary to analyze again the possible lines of the network that are excluded from the possibility to install ER elements. These lines to cancel are: either a line that connects any of the ON with the critical node; or the lines where an ER installation entails a variation of the flow distribution. The lines cancelled generate new ONs allowing to repeat the process again. The results of applying the proposed method to the example network case Scenario 2 are presented below. Here, the results of the two iterations can be seen. The only ON is node T1 and no pipeline has been cancelled (Fig. 31.5. Initial iteration). In the first iteration, N42 is the critical node (with an initial pressure of 32.79 m), which represents an excess of pressure, transformable in ER, of 12.79 m. This means adding the ER of line T1-N13 (Fig. 31.5. Iteration 1). The introduction of the ER entails the necessity of cancelling the marked lines (Fig. 31.4. Iteration 1), in order to, maintain the pressure in the node N42 n in 20 m. Therefore, new nodes appear (nodes N23, N33, and N43). In the following iteration, the critical node is N44, with an initial pressure of 24.98. The reduction of the pressure in this node to 20 m entails introducing 3 ER elements in lines N23-N24, N33-34, and N43-44, each one of them with a reduction in pressure of 4,98 m. Ultimately, the final solution is the one gathered in Fig. 31.4. Final iteration. After introducing T1

N11

N12

N13

N21

N22

N23

N24

N31

N32

N33

N34

N41

N42

N43

N44

Initial iteration

Fig. 31.5 Example resolution

N13b

N11

N12

N13

N13b

T1

N21

N22

N23

N23b

N24

N34

N31

N32

N33

N33b

N34

N44

N41

N42

N43

N43b

N44

T1

N11

N12

N13

N21

N22

N23

N24

N31

N32

N33

N41

N42

N43

Iteration 1

Final iteration

31 Methodology for Valuation of Water Distribution …

445

the 3 ERs of the last iteration, all pipelines are cancelled since in none of them can an ER system be installed without altering the circulating flows. For this reason, the process defined in the methodology is completed.

31.4 Case Study The methodology proposed in the previous section has been applied to a real case. It is the network of Balerma [14] (see Fig. 31.6), an irrigation network with more than 300 consumption nodes and a total supply of 857 l/s. The network has an extension of 71.2 km and diameters from 80 to 550 mm. The great slope existing in the network generates the potential presence of ER elements in it. The network defined in the mentioned figure actually consists of four different sectors. The analysis can be carried out either individually in each sector or in the entire network. To see the results of the implementation of the described methodology, the energy analysis is presented as the final result (Table 3), as defined above. In this case, the development of the methodology entails performing 148 iterations as described previously. The end result (Fig. 31.5), is the determination of 152 different locations in which an ER system could be installed. This does not mean that all the determined elements are actually profitable in their installation, but they allow defining precisely the maximum amount of energy that can be recovered from the network. In other words, the total excess energy that is supplied has been determined. The remaining PREu is not retrievable from the network, but needs to be recovered by the users of the system. As can be appreciated in Fig. 31.7, the initial PRE value was 297.29 kW, of which a large part can be recovered from the network through the installation of suitable

Original network.

Fig. 31.6 Balerma network

Network after the ER.

446

M. Iglesias-Castelló et al.

Ed,r Ed Ed,a Econ Emin Eu

Ed,f

102.02

Ed,b

0.00

Ed,v

0.00

Ed,l

0.00

Ed,d

0.00

Eu,t

342.70

102.02

Ed,r Ed

102.02

Ed,a

0.00 893.92

Econ

494.60

Emin

Epmin 151.90 791.89

PREu 297.29

PRE

PREr ER

0.00

297.29

Eu

PRE

Ed,f

102.02

Ed,b

0.00

Ed,v

0.00

Ed,l

0.00

Ed,d

0.00

Eu,t

342.70

102.02 0.00 893.92 494.60

Epmin 151.90 791.89

PREu 44.37 PREr ER

0.00

102.02

0.00

297.29

252.67

Accumulated energy recovery

Fig. 31.7 Energy analysis of the Balerma network before and after applying the ER

100% 90% 80% 70% 60% 50% 40% 30% 20% 10% 0%

1

10 100 Number of energy recovery systems

1000

Fig. 31.8 ER accumulated in relation to the number of installed systems

systems. Nevertheless, from a strictly practical point of view, the installation of ER systems in the 152 points determined by the methodology will not be possible. Thus, an analysis of the energy generation capacity of each one allows to determine which are more adapted to install. Thus, Fig. 31.8, collects the accumulated ER according to the number of installed systems, understanding that these are installed from higher to lower power of recovery. Hence, it can be seen how the installation of 5 or 6 high power systems represents 70% of the total potentially recoverable energy. Finally, it should be noted that the method of determining the PRE and its discrimination between PREu and PREr allows defining an energy efficiency index (EEI) of the network as the ratio between the PRE and the total energy consumed. Mathematically, it can be expressed as EEI = 1 −

P RE E con

(31.3)

This index allows establishing a criterion of goodness and energy use of a WDS. Thus, the lower the excess of pressure on the nodes compared to the minimum power

31 Methodology for Valuation of Water Distribution …

447

(E u – E min ), and the lower the energy losses (E d ), the closer the mentioned index will be to one. In the case of the studied network, this entails going from an EEI of 66.7% before ER to an index of 95% after applying the methodology described.

31.5 Conclusions In order to recover the energy a series of considerable actions are required, that can significantly affect the normal operation of the network, which turns it into a complex problem. Nevertheless, being able to recover the potentially recoverable energy is decisive from several points of view. Developing a methodology for the recovery of the PREr is essential for a WDS, to be able to take advantage of the energy that is not used and to use it in other processes. In addition, the development of this methodology can be applied to any network regardless of its size, since its calculation is fast and indicates where that recovery should occur. Moreover, the statistical analysis of the obtained solution allows for a preliminary study to be made on the points on which ER could actually be implemented. Secondly, the described method allows defining an energy efficiency index for the network. This index allows assessing the level of energy use and to determine how close the proposed solution is to the minimum level of energy required in a network. Without a doubt, this index can be used to evaluate different design solutions of a network.

References 1. Cabrera E, Pardo M, Cobacho R, Cabrera E Jr (2010) Energy audit of water networks. J Water Resour Plan Manag 136(1):669–677. https://doi.org/10.1061/(ASCE)WR.1943-5452.0000077 2. Carravetta A et al (2013) PAT design strategy for energy recovery in water distribution networks by electrical regulation. Energies 6(1):411–424. https://doi.org/10.3390/en6010411 3. Corcoran L, McNabola A, Coughlan P (2016) Optimization of water distribution networks for combined hydropower energy recovery and leakage reduction. J Water Resour Plan Manag 142(2):04015045. https://doi.org/10.1061/(ASCE)WR.1943-5452.0000566 4. Giugni M, Fontana N, Ranucci A (2014) Optimal Location of PRVs and turbines in water distribution systems. J Water Resourc Plan Manag 140(9):06014004. https://doi.org/10.1061/ (ASCE)WR.1943-5452.0000418 5. Giustolisi O, Laucelli D, Berardi L (2013) Operational optimization: water losses vs. energy costs. J Hydraul Eng 139(4):410–423. https://doi.org/10.1061/(ASCE)HY.1943-7900.0000681 6. Hashemi S, Filion YR, Speight VL (2015) Pipe-level energy metrics for energy assessment in water distribution networks. Proc Eng 119(1):139–147. https://doi.org/10.1016/j.proeng.2015. 08.864. (Elsevier BV) 7. Iglesias-Rey PL et al (2016) Combining engineering judgment and an optimization model to increase hydraulic and energy efficiency in water distribution networks. J Water Resour Plan Manag 142(5):C4015012. https://doi.org/10.1061/(ASCE)WR.1943-5452.0000605

448

M. Iglesias-Castelló et al.

8. Marchi A et al (2012) (2012) Battle of the water networks II. J Water Resour Plan Manag 7:1–14. https://doi.org/10.1061/(ASCE)WR.1943-5452.0000378 9. De Marchis M et al (2014) Energy recovery in water distribution networks. Implementation of pumps as turbine in a dynamic numerical model. Proc Eng 70:439–448. https://doi.org/10. 1016/j.proeng.2014.02.049. (Elsevier BV) 10. Organisation for Economic Co-operations and Development (OECD) (2007) World Energy Outlook: China and India Insights (I. E. Agency (eds)) 11. De Paola F, Galdiero E, Giugni M (2017) Location and setting of valves in water distribution networks using a harmony search approach. J Water Resour Plan Manag 04017015. https:// doi.org/10.1061/(ASCE)WR.1943-5452.0000760 12. Pérez-Sánchez M et al (2017) Energy recovery in existing water networks: towards greater sustainability. Water (Switzerland) 9(2):1–20. https://doi.org/10.3390/w9020097 13. Prescott SL, Ulanicki B (2008) Improved control of pressure reducing valves in water distribution networks. J Hydraul Eng 134(1):56–65. https://doi.org/10.1061/(ASCE)0733-9429(200 8)134:1(56) 14. Reca J, Martínez J (2006) Genetic algorithms for the design of looped irrigation water distribution networks. Water Resour Res 42(5):1–9. https://doi.org/10.1029/2005WR004383 15. Samora I et al (2016) Energy recovery using micro-hydropower technology in water supply systems: the case study of the city of Fribourg. Water 8(8):344. https://doi.org/10.3390/w80 80344

Chapter 32

Use of Grey-Box Modeling to Determine the Air Ventilation Flows in a Room M. Macarulla, M. Casals, N. Forcada, and M. Gangolells

Abstract To determine the air ventilation flows in a room deterministic approaches based on the CO2 tracer-gas mass balance equation are usually used. Those equations do not enable to introduce to the system uncertainties such as the sensor measurement error. In this paper, the use of the stochastic differential equations based on the CO2 tracer-gas mass balance are presented to obtain the air ventilation flows in a room introducing uncertainty elements. This kind of modeling, also known as grey-box modeling, combines physical knowledge of the system and information embedded in the monitored data to identify a suitable parametrization of the differential equation used. Finally, a set of statistical tests are used to assess the model’s accuracy. Models that fulfill the tests are considered correct, and as a consequence, its parametrization too. Having the parametrization a physical meaning, it is possible to obtain the air ventilation flow of the room. Results show the viability of this kind of approach. Keywords Indoor air quality · Stochastic methods · Ventilation

M. Macarulla (B) · M. Casals · N. Forcada · M. Gangolells GRIC. Dpto. de Ingeniería de Proyectos y de La construcción. Escuela Superior de Ingenierías Industrial, Aeronáutica y Audiovisual de Terrassa. Universitat Politènica de Catalunya, C/ Colom 11, 08222 Terrassa, Spain e-mail: [email protected] M. Casals e-mail: [email protected] N. Forcada e-mail: [email protected] M. Gangolells e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_32

449

450

M. Macarulla et al.

32.1 Introduction One of the objectives in the operation phase of a building is to maintain good indoor air quality. This objective must be achieved with solutions involving low energy consumption [22]. In this context, knowing the existing levels of ventilation is essential to ensure that the ventilation system works correctly [23]. Usually, in the literature, deterministic methods are used to determine the levels of ventilation in buildings based on the mass balance equation for CO2 [4, 7, 13, 15, 20, 21, 25]. On the other hand, the stochastic approaches in this area are poorly developed [16]. In this paper, how models can be used based on the stochastic differential mass balance equation of the CO2 to determine the level of ventilation of a room will be presented. This paper presents various stochastic differential equations, and then, using the maximum likelihood method the parameters of the differential equations are obtained. The obtained models are then verified with a series of statistical tests to determine the accuracy of the models. Finally, it is discussed which model is the most indicated to determine the level of ventilation of a room.

32.2 Section Heading The levels of existing ventilation measurement tasks are difficult and expensive [7]. Currently, methodologies for measuring ventilation levels are based on observing variations of interior CO2 [3, 18, 19]. These methods use ordinary differential equations (ODE) of the mass balance of CO2 to determine the ventilation levels. This approach is considered to be deterministic since the solutions of the ODEs are functions that depend on time in a deterministic way. Consequently, it is assumed that future CO2 concentrations can be predicted in an exact way [9]. Deterministic approaches do not consider disturbances entering the system, such as the influence of non-identified or non-modeled entries nor those due to measurement noise [1, 24]. As a result, the deterministic approach does not allow considering the uncertainties affecting the system. The use of stochastic approaches allows considering the disturbances that can affect the system. The models based on stochastic differential equations (SDE), also known as “grey-box” models, combine knowledge of the physical system with intrinsic information from monitoring data [2]. Models based on stochastic differential equations can be generalized as follows: d X t = f (X t , Ut , θ, t)dt + G(θ, t)dWt

(32.1)

Ytk = h(X t , Ut , θ, t) + et

(32.2)

32 Use of Grey-Box Modeling to Determine …

451

The equation describing the behavior of the variable X t over time is composed by a drift term f (X t , U t , θ, t)dt and a diffusion term, G(θ, t)dW t . The diffusion term represents the approximations and noise introduced into the system due to unknown or unmodeled perturbations. The diffusion term is composed of a function that represents the perturbation that enters the system G(θ, t), and a standard Weiner W t process. Discrete observations over time, Y tk , include the measurement error, et which is considered distributed with a Gaussian distribution. The error in measurement represents the noise due to the accuracy of the sensor. U t represents entries in the system, and  represents the system parameters. Usually, the maximum likelihood method is used to determine  [1, 24]. With the separation of the residual error between the diffusion term and the measurement noise, the model can be validated since a precise description of the prediction is obtained. If the model is sufficiently detailed to describe the dynamics of the system, residues should not be correlated [9]. In addition, the inclusion of the diffusion term helps identifying how an insufficient model can be improved [12].

32.3 Methodology The methodology used in this paper to determine the level of ventilation in a room is divided into 3 stages: experimental design and data acquisition, modeling process, and validation process.

32.3.1 Experimental Design and Data Acquisition The room used to conduct the experimentation belongs to an office located in the building identified as TR5 on the campus of the Universitat Politècnica de Catalunya (UPC) in Terrassa (Barcelona). This office has 6 workstations being used by Ph.D. and master students. Although the maximum occupancy of the room is 6 people, during the experimentation it was 3 people. The room has a surface of 31.16 m2 and a volume of 95 m3 with an average height of 3.05 m. This room can be accessed through a door in a hallway. The room has a well-preserved aluminum window 1.4 m wide and 1.7 m high, located at 1.25 m from the ground. The room is naturally ventilated through a ventilation grille located in the access door. The ventilation grille is 0.40 m wide and 0.20 m high, and is located at 0.10 m from the ground. The room has no forced ventilation or heating or cooling systems. In addition, during the experimentation, the window was closed. The CO2 concentration in the room was measured using an Advanticsys IAQMTHCO2 sensor with a range from 0 to 3,000 ppm, with a resolution of 1 ppm, and a precision of ±2% of full scale. The sensors were calibrated and configured by the manufacturer and captured data every 15 min.

452

M. Macarulla et al.

The CO2 concentration in a room is not homogeneous because of the breathing of its occupants, the windows infiltration, and the influence of the heating and ventilation systems. According to Buli´nska et al. [3], in order to perform a meaningful measure of CO2 concentration in a room, areas near people, windows or heat fluxes should be avoided. For this reason, to perform the experiments the sensor was located in the center of the room, at ceiling height. The occupation of the room was extracted from occupancy sheets. The occupancy sheets were used to calculate the average occupation every quarter of an hour. The experimentation was carried out for 5 days in the month of May 2016. Figure 32.1 shows the captured data during experimentation and Table 32.1, shows the average environmental conditions of the experimentation period.

32.3.2 Modeling Process Models based on stochastic differential equations (SDE), also known as “grey-box” models, combine physical knowledge of the system with intrinsic data information [2]. Physical knowledge of the system is represented as a set of stochastic differential equations composed of a drift term and a diffusion term. The drift term is a function that describes the known dynamics of the system. In this study, the mass balance equation of CO2 was used [10, 14–16, 25] as the drift term (Eq. 32.3), representing the variation of CO2 concentration at a given point and time (C int ) in the room, with a volume (V t ) and a ventilation flow (Qven ): dC int Vr = (Cven − Cint ) · Q ven + G C O 2 dt

(32.3)

where C ven is the concentration of the air that enters the room due to ventilation, Qven is the natural ventilation coming from the hall, and G C O2 is the CO2 generated by the occupants of the room. Equation 32.3 assumes that the CO2 is stable and chemically inert, in addition, no absorption of CO2 is taking place in the room. Thus, the walls, the ceiling or furniture do not absorb CO2 . Finally, the above equation assumes that there is a perfect mixture of the air in the room and that ventilation is constant over time [16]. In this case, it is considered that the ventilation flow is due to natural ventilation. In this paper, G C O2 is calculated with the following equation: G C O 2 = K occ · P

(32.4)

where K occ is the CO2 exhaled by the occupants, and P is the occupancy of the room. Equation 32.4 assumes that the CO2 exhaled per occupant is constant over time, and is the same for each people. This is a reasonable hypothesis since all occupants carry out normal office tasks, sitting, reading, and writing. The increase in exhaled CO2

32 Use of Grey-Box Modeling to Determine …

453

Fig. 32.1 Data obtained from the experiment

due to the movement of the occupants entering and leaving the room is considered negligible. To complete the stochastic differential equation, the diffusion term must be added dC int =

Q ven K occ · P · dt + σ · dw (Cven − Cint ) · dt + Vr Vr

(32.5)

454 Table 32.1 Environmental conditions of the experimentation

M. Macarulla et al. Location

Parameter

Units

Value

Corridor parameters

Average temperature

ºC

23.5

Average relative humidity

%

49

Average temperature

ºC

23.1

Average relative humidity

%

51

Average temperature

ºC

14.7

Average relative humidity

%

82

Mean atmospheric pressure

hPa

1006.8

Room parameters

External parameters

Average wind speed m/s

2.1

Exterior CO2

372

ppm

where dw is a Weiner process, and σ is the incremental variance of the Weiner process. Finally, the measurement equation must be added Ytk = Cint,t k + ek

(32.6)

where Ytk is the measurement of the interior CO2 in the room at the instant t k , and ek is a white noise process that describes measurements noise. In this study, 3 models were used to determine the ventilation level of the room. The first (M1), considers that the exhalation of CO2 of the occupants and the volume of the room are known, as well as it is considered that the concentration of CO2 of the ventilation flow is constant. The second (M2) considers as known only the volume of the room in addition to considering that the concentration of CO2 of the ventilation flow is a constant. The last model (M3) considers as known only the volume of the room and uses the measured concentration of CO2 of the ventilation flow. Using the monitoring data of the CO2 inside the room, the unknown parameters of the three models were obtained with the maximum likelihood method.

32.3.3 Validation Process This study proposes the use of stochastic equations that have a physical meaning. For this reason, the first aspect to be considered in the validation of the parameters

32 Use of Grey-Box Modeling to Determine …

455

obtained is their feasibility. Whether or not the order of magnitude of the parameters obtained is consistent, will be evaluated first. Subsequently, a series of statistical tests will be used to determine if the obtained models correctly describe the dynamics of the system. These tests are based on previous studies [2, 17]. The first statistical test consists of evaluating the significance of the parameters obtained. The p-value of all parameters must be lower than 0.05; otherwise, the parameter is not significant for the model and must be eliminated [6]. The second test consists of evaluating the derivative of the objective function compared to the initial state for each of the evaluated parameters. If the obtained value is not close to 0, it means that the solution found is a local optimum, but not the real optimum [5]. The third test consists of evaluating the derivative of the penalty function in relation to the initial state for each of the evaluated parameters. If the value is significant compared to the derivative of the objective function, it implies that the initial state of a parameter is close to one of its limits [5]. The correlation matrix of the estimated parameters is used to determine if the model is over-parameterized. If the values of the diagonal are close to 1 or –1, it implies that the model is over-parameterized and eliminating some model parameter should be considered [5]. Finally, to evaluate the white noise hypothesis, the next step prediction residues are used, as well as the autocorrelation functions and the cumulated periodogram. Finally, the pure simulation residues and the mean squared error are used to determine the accuracy of the model. All statistical tests are calculated using the R CTSM-R package.

32.4 Results and Discussion The parameters obtained in all the models are feasible at the physical level (Table 32.2). The number of air changes obtained is between 0.16 and 0.21 air changes per hour. Studies in which air changes due to natural ventilation with similar environmental conditions have been measured, have obtained values between 0.21 and 0.5 Table 32.2 Results of the model’s parametrization

Location

Parameter

Units

Value

M1

Qven

m3 /h

19.97

M2 M3

h−1

0.21

m3 /h

15.51

h−1

0.16

K occ

L/h

13.53

Qven

m3 /h

20.13

h−1

0.21

L/h

12.79

Qven

K occ

456

M. Macarulla et al.

air changes per hour [15, 16, 20]. The exhalation of CO2 per person obtained is 12.79 and 13.53 L/h. These values are significantly lower than those used in the literature which are around 18.50 L/h [8, 11, 25]. The statistical tests results of model 1 indicate that all the model parameters are significant, that the solution is the real optimum and is not close to one of its limits. The correlation test also indicates that there is no correlation between the obtained parameters. Figure 32.2 shows the pure simulation errors, as well as the simulation with a 95% confidence interval. It can be appreciated how monitoring values are outside the confidence interval in many points of the time series. Figure 32.3 shows the errors of the M1 one-step ahead simulation. As can be seen in the graphic on the left side of Fig. 32.3, the errors exceed the dotted line that indicates the accuracy of the sensor. In addition, the hypothesis of the white noise of these residues is not fulfilled since the correlation function and the cumulated periodogram have values that exceed the 95% confidence interval (dotted line). Finally, the mean squared error is 86 ppm, a value greater than sensor precision. The results of the statistical tests of model 2 are also positive. These indicate that all the parameters of the model are significant, that the solution is the real optimum and that the solution is not close to one of its limits. The correlation test also indicates that there is no correlation between the obtained parameters.

Fig. 32.2 In the upper part of the graphic, the residues of the M1 pure simulation are shown; the lower part shows the simulation with confidence intervals of 95% of M1, as well as the measured data

32 Use of Grey-Box Modeling to Determine …

457

Fig. 32.3 The graphic on the left shows the errors of the M1 one-step ahead simulation; the graph in the middle is the autocorrelation function of M1 residues, and the graph on the right is the cumulated periodogram of such M1 residues

Figure 32.4 shows the pure simulation errors, as well as the simulation within a 95% confidence interval. It can be appreciated how the results are slightly better than the M1’s. Anyway, monitoring data are still not within the 95% confidence interval of the simulation. In this sense, the simulation still does not faithfully reproduce the monitoring data. Figure 32.5 displays the errors in the M2 one-step ahead simulation. It can be seen how, in the chart on the left side of Fig. 32.5, errors improve substantially but still

Fig. 32.4 In the upper part of the graphic, the residues of the M2 pure simulation are shown; the lower part shows the simulation with confidence intervals of 95% of M2, as well as the measured data

458

M. Macarulla et al.

Fig. 32.5 The graphic on the left shows the errors of the M2 one step ahead simulation; the graphic in the middle is the autocorrelation function of M2 residues, and the graphic on the right is the cumulated periodogram of such M2 residues

remain outside the dotted line that indicates the sensor accuracy. Analyzing the onestep ahead simulation errors, it can be observed the greatest errors occur at twelve o’clock noontime. When looking at the data obtained during the experimentation (Fig. 32.1), it can be observed that midday is when the concentration of CO2 on the ventilation airflow is higher. This indicates that by introducing the concentration of CO2 on the ventilation airflow as a model entry, instead of using a fixed value, the accuracy of the model should improve. In this case, a couple of points of the correlation function exceed the 95% confidence interval (dotted line), but there is a considerable improvement in relation to M1. On the other hand, all the cumulated periodogram points are within the 95% confidence interval (dotted line). For these reasons, it is reasonable to state that the white noise hypothesis of the one step ahead residues is met with the results obtained from M2. Finally, the mean squared error is 69 ppm. This value is still higher than the accuracy of the sensors. Finally, the mean squared error is 69 ppm. This value is still higher than the accuracy of the sensors. The results of the statistical tests of model 3 are also positive, indicating that all the parameters of the model are significant, that the solution is the real optimum and that the solution is not close to one of its limits. The correlation test also indicates that there is no correlation between the obtained parameters. Figure 32.6 shows the pure simulation errors, as well as the simulation with a 95% confidence interval. It can be appreciated how monitoring data are within the 95% confidence interval of the simulation. In this sense, M3 is the model which better reproduces the interior concentration of CO2 in the room. Figure 32.7 shows the errors of the one step ahead simulation of M3. In this case, errors improved significantly, but still, exceed on a couple of the time series points the dotted line that indicates the accuracy of the sensor. In this case, a couple of points of the correlation function exceed the 95% confidence interval (dotted line), but there is a considerable improvement in relation to M1 and M2. On the other hand, all the cumulated periodogram points are within

32 Use of Grey-Box Modeling to Determine …

459

Fig. 32.6 In the upper part of the graphic, the residues of the M3 pure simulation are shown; the lower part shows the simulation with confidence intervals of 95% of M3, as well as the measured data

Fig. 32.7 The graphic on the left shows the errors of the M3 one-step ahead simulation; the graphic in the middle is the autocorrelation function of M3 residues, and the graphic on the right is the cumulated periodogram of such M3 residues

the 95% confidence interval (dotted line). For these reasons, it is reasonable to state that the white noise hypothesis of the one step ahead residues is met with the results obtained from M3. Finally, the mean squared error is 41 ppm. This value is lower than the sensor precision. With all the results presented regarding M3, it can be considered that this is the model that allows reproducing more faithfully the interior concentration of CO2 in the room. Consequently, it is possible to state that the level of ventilation of a room

460

M. Macarulla et al.

can be measured by means of “grey-box” models that use a CO2 sensor in the room to be measured, a CO2 sensor in the ventilation flow and by measuring the occupancy of the room.

32.5 Conclusions The present study shows how “grey-box” models can be used to indirectly measure the ventilation level in a room, based on the CO2 mass balance equation. The results suggest that to carry out the measurement of the ventilation level, a CO2 sensor must be used to measure its concentration inside the room, a sensor for measuring the CO2 concentration in the ventilation flow, as well as to monitor the occupancy of the room, in which the measurement will be performed. This type of modeling can have multiple applications in the building sector: it can be used to carry out the monitoring of the ventilation levels of buildings in real time, it is possible to determine the occupancy of different spaces in real time and it also can be used to simulate buildings with very low computing costs. The main words in all headings (even run-in headings) begin with a capital letter. Articles, conjunctions, and prepositions are the only words which should begin with a lower case letter.

References 1. Andersen KK, Madsen H, Hansen LH (2000) Modelling the heat dynamics of a building using stochastic differential equations. Energy Build 31(1):13–24. https://doi.org/10.1016/S03787788(98)00069-3 2. Bacher P, Madsen H (2011) Identifying suitable models for the heat dynamics of buildings. Energy Build 43(7):1511–1522. https://doi.org/10.1016/j.enbuild.2011.02.005 3. Buli´nska A, Popiołek Z, Buli´nski Z (2014) Experimentally validated CFD analysis on sampling region determination of average indoor carbon dioxide concentration in occupied space. Build Environ 72:319–331. https://doi.org/10.1016/j.buildenv.2013.11.001 4. Cheong KW (2001) Airflow measurements for balancing of air distribution system—tracer-gas technique as an alternative? Build Environ 36(8):955–964. https://doi.org/10.1016/S0360-132 3(00)00046-9 5. CTSM-R Development Team (2013) Grey-box modeling of the heat dynamics of a building with CTSM-R. Available at: https://ctsm.info/. 6. CTSM-R Development Team (2015) Continuous time stochastic modeling in R user’s guide and reference manual. https://ctsm.info/ 7. Cui S, Cohen M, Stabat P, Marchio D (2015) CO2 tracer gas concentration decay method for measuring air change rate. Build Environ 84:162–169. https://doi.org/10.1016/j.buildenv.2014. 11.007 8. Dougan DS, Damiano L (2004) CO2 -based demand control ventilation. Do risks outweigh potential rewards? ASHRAE J (47):53 9. Duun-Henriksen AK, Schmidt S, Røge RM, Møller JB, Nørgaard K, Jørgensen JB, Madsen H (2013) Model identification using stochastic differential equation grey-box models in diabetes. J Diabetes Sci Technol 7(2):431–440. https://doi.org/10.1177/193229681300700220

32 Use of Grey-Box Modeling to Determine …

461

10. Heidt FD, Werner H (1986) Microcomputer-aided measurement of air change rates. Energy Build 9(4):313–320. https://doi.org/10.1016/0378-7788(86)90036-8 11. Krawczyk DA, Rodero A, Gładyszewska-Fiedoruk K, Gajewski A (2016) CO 2 concentration in naturally ventilated classrooms located in different climates—measurements and simulations. Energy Build 129:491–498. https://doi.org/10.1016/j.enbuild.2016.08.003 12. Kristensen NR, Madsen H, Jørgensen SB (2004) A method for systematic improvement of stochastic grey-box models. Comput Chem Eng 28(8):1431–1449. https://doi.org/10.1016/j. compchemeng.2003.10.003 13. Labat M, Woloszyn M, Garnier G, Roux JJ (2013) Assessment of the air change rate of airtight buildings under natural conditions using the tracer gas technique. Comparison with numerical modelling. Build Environ 60:37–44. https://doi.org/10.1016/j.buildenv.2012.10.010 14. Laussmann D, Helm D (2011) Air change measurements using tracer gases: methods and results. significance of air change for indoor air quality. In: Chemistry, emission control, radioactive pollution and indoor air quality. InTech. https://doi.org/10.5772/18600 15. Li H, Li X, Qi M (2014) Field testing of natural ventilation in college student dormitories (Beijing, China). Build Environ 78:36–43. https://doi.org/10.1016/j.buildenv.2014.04.009 16. Macarulla M, Casals M, Carnevali M, Forcada N, Gangolells M (2017) Modelling indoor air carbon dioxide concentration using grey-box models. Build Environ 117:146–153. https://doi. org/10.1016/j.buildenv.2017.02.022 17. Madsen H, Holst J (1995) Estimation of continuous-time models for the heat dynamics of a building. Energy Build 22(1):67–79. https://doi.org/10.1016/0378-7788(94)00904-X 18. Mahyuddin N, Awbi HB, Alshitawi M (2014) The spatial distribution of carbon dioxide in rooms with particular application to classrooms. Indoor Built Environ 23(3):433–448. https:// doi.org/10.1177/1420326X13512142 19. Menassa CC, Taylor N, Nelson J (2013) A framework for automated control and commissioning of hybrid ventilation systems in complex buildings. Autom Constr 30:94–103. https://doi.org/ 10.1016/j.autcon.2012.11.022 20. Ng LC, Wen J (2011) Estimating building airflow using CO2 measurements from a distributed sensor network. HVAC&R Res 17(3):344–365. https://doi.org/10.1080/10789669. 2011.572223 (Taylor & Francis Group) 21. Pantazaras A, Lee SE, Santamouris M, Yang J (2016) Predicting the CO2 levels in buildings using deterministic and identified models. Energy Build 127:774–785. https://doi.org/10.1016/ j.enbuild.2016.06.029 22. Persily A (2015) Challenges in developing ventilation and indoor air quality standards: the story of ASHRAE Standard 62. Build Environ 91:61–69. https://doi.org/10.1016/j.buildenv. 2015.02.026 23. Škrjanc I, Šubic B (2014) Control of indoor CO2 concentration based on a process model. Autom Constr 42:122–126. https://doi.org/10.1016/j.autcon.2014.02.012 24. Thavlov A, Madsen H (2015) A non-linear stochastic model for an office building with air infiltration. Int J Sustain Energy Plan Manag 7:55–66. https://doi.org/10.5278/IJSEPM.201 5.7.5 25. Zhang W, Wang L, Ji Z, Ma L, Hui Y (2015) Test on ventilation rates of dormitories and offices in university by the CO2 tracer gas method. Proc Eng 662–666. https://doi.org/10.1016/j.pro eng.2015.08.1061. Elsevier

Part VI

Rural Development and Development Cooperation Projects

Chapter 33

Development of Local Capabilities for Rural Innovation in Indigenous Communities of Mexico F. J. Morales-Flores, J. Cadena-Iñiguez, B. I. Trejo-Téllez, and V. M. Ruiz-Vera Abstract In Mexico, 68 indigenous ethnic groups have common cultural traits: use of native languages or own forms of organization. Efforts to address social deficiencies in the population have proven to be ineffective due to cultural and linguistic barriers that hinder social mobility. Obtaining material and financial resources to promote community development has been a major constraint in rural areas. A twoyear training program for Bilingual Local Managers (BLM) inside the Mayo ethnicity in Sonora, Mexico was implemented to identify local development needs, formulate projects, and design rural innovation programs linked to endogenous resources based on individual and collective initiatives. Forty-six BLM and several diagnoses identify and prioritize in localities initiatives related to management, infrastructure development, desired welfare levels, profits generation for reinvestment inside the community, and levels of initial capital contribution. Main development axes are related to agriculture, livestock, forestry, rural infrastructure, seafood small business, local language identity conservation, culture, housing and food sovereignty. BLM formalized some projects and legal society’s producers. Keywords Empowerment · Project management · Development strategies · Local resources

F. J. Morales-Flores · J. Cadena-Iñiguez (B) · B. I. Trejo-Téllez · V. M. Ruiz-Vera Programa de Postgrado en Innovación en Manejo de Recursos Naturales, Campus San Luis Potosí, Colegio de Postgraduados, Iturbide 73. 78621, Salinas, San Luis Potosí, Mexico e-mail: [email protected] F. J. Morales-Flores e-mail: [email protected] B. I. Trejo-Téllez e-mail: [email protected] V. M. Ruiz-Vera e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_33

465

466

F. J. Morales-Flores et al.

33.1 Introduction Local development is oriented towards the identification and sustainable utilization of available natural resources, as well as the increment in participation and organization of the territory inhabitants in order to achieve better living conditions [3]. The improvement of social conditions in rural or indigenous communities can be materialized in two different ways: one, to take advantage of available governmental subsidies which respond to the current government policies; or two, to foster the development of local capabilities to guide the economic, social and environmental growth as an intelligent community [7, 8]. The second way presents more difficult challenges: the alignment of the interested parties with a common vision, effort coordination, negotiation and transparency in the use of common resources in order to build a social capital and ascending governance [3]. However, this alternative represents an opportunity to preserve the identity and social direction of the communities without foreign intervention or deviations. Local capabilities development begins with the empowerment of the different actors who inhabit, use and preserve the available natural resources. To accomplish this step, the native communities must change to achieve community awareness. This communication proposes a method and the definition of a vision. The implementation of a Social Intervention Model (SIM) in rural communities can be vitally important because it promotes dialogue among rural actors as well as the intervention of government, businessmen and educational institutions [1]. It uses as a fundamental premise that “community development must be defined in and by the communities” through the participation of rural actors (including young people and adults, men and women) [1, 2]. Also, that problems must be identified through dialogue and collaborative solutions and that must be achieved in order to guide the community towards a sustainable development. In parallel, communities must consider the use and conservation of endogenous (local) natural resources to which they have access. This empowerment can be achieved with the assistance of service providers who will construct or facilitate the concrete definition of improvement initiatives. Said agents can come from the private sector or be supported by the government. Based on the above mentioned, this research describes a rural innovation program in indigenous communities based on the detection of local development needs. This is achieved through initiatives identification in rural communities of indigenous origin, thus developing a biannual training program for bilingual local managers belonging to the Mayo ethnic from South Sonora in Mexico.

33 Development of Local Capabilities for Rural Innovation …

467

33.2 Methodology and/or Case Study 33.2.1 Initiative Context During the years 2009 and 2010–2011 a program was developed to train bilingual managers in South Sonora, Mexico, with natives of the Mayo ethnic. They are Amerindian people who inhabit North Sinaloa and South Sonora in the coastal region (Fig. 33.1). The program consisted of two stages: (1) local management capacities development in young bilingual professionals during 2009. These young professionals had to be part of the indigenous communities; and (2) the design by rural actors of a rural innovation program associated to endogenous resources during 2010–2011. During local bilingual managers training the focus was on communities with mayo ethnic natives’ presence. The selection of the participating communities was defined by identifying professionals from the agricultural sector who resided in the communities selected in a sample, in order to reinforce the contextual competencies needed to develop a long term project when compared with a governmental temporal action (organizational competencies in the ICB version [6]. The bilingual managers training program formed in territorial issues, conceptual framework and SIM, ascending decision making, participative-community forum development, field methods to obtain information, initiative identification and project formulation, strategic alliances and team work, effective communication, conflict resolution, formalization of businesses and projects to obtain funding. Another topic included was the analysis of past experiences with social intervention models for

More than 50% From 50 to 80% From 80 to 90% From 90% to 95% From 95% to 99%

More than 5 technicians 3 to 4 technicians Less than 2 technicians

a)

b)

Fig. 33.1 Sonora State, Mexico: a Indigenous population distribution (2009); b Social intervention areas with the number of trained bilingual managers

468

F. J. Morales-Flores et al.

Table 33.1 Training plan for the development of capabilities of Bilingual Managers Concept imparted

Work week Theory Practice Community work

Territorial approach. Conceptual framework 1–2 and social intervention model

16

20

The Ascending Methodology: participative-community forums

2–3

36

20

30

Field methods to obtain information

4-6; 8-10

20

90

30

Project identification and formulation

5–7

32

20

30

Strategic alliances and team work

6–7

16

10

30

Effective communication and conflict resolution

10–11

16

20

30

Results delivery

12

TOTAL

16

0

0

152

180

150

management in public calls as elements in the development of a long term community vision (Table 33.1). All the above mentioned was considered once the bilingual managers completed the theoretical-practical instruction of the three first modules of the training program (Table 33.1). The integration of the local authorities (traditional governors of the Mayos peoples) allowed agreeing on the form and scope of the participative forums in each community. During 2010–2011, participative forums were conducted in each community convened by managers and local authorities (Traditional governors). Step (1) Inhabitants are invited to enunciate a community vision focusing on solutions that can be generated from society to government (increasing focus of influence). Step (2) Each attendant publically proposes an initiative, describing its characteristics, advantages and needs to execute it as an individual or community project alternative; Step (3) Each attendant indicates their personal preferences on the basis of the proposed initiatives in three levels (1st., 2nd., 3rd.) over a sheet of paper as a blackboard, during the session, considering what could be executed with the attendants. Step (4) Supported initiatives are publically counted and concrete work commitments are made, which will be coordinated by local managers. Step (5) Community work commitments are made to crystalize the supported initiatives.

33.3 Initiative Groups Construction In this communication, similar initiatives from seven different communities were grouped: Buaysiacobe, Campanichaca, the Kipor, Loma de Refugio, Maycoba, Mochipaco and Moroncarit from the Huatabampo municipality in South Sonora.

33 Development of Local Capabilities for Rural Innovation …

469

The native communities defined the attraction level of each initiative, by following two steps: • Step (a) The 3 most preferred initiatives in each consulted native community were classified by managers according to attributes related with the initiative implementation: previous existence of infrastructure (Infrastructure), the need for an own monetary contribution (Own contribution), the reach of the benefits (individual, group, communal) (Reach), the level of well-being generated (Well-being), the specific impact produced by the initiative (job generation, product manufacturing and/or service provision) (Output type), economic profit generation (Profit), as well as the attendants support as a first, second or third preference. • Step (b) rural development initiatives are classified in short and medium term strategies. As short term strategies, annual actions that lack multi-annual governmental investment commitment were identified. This strategy solves immediate needs with the subsequent release of government commitment on the beneficiary part (an individual or a family): Capital: Actions in response to individual benefit initiatives to create businesses: grocery stores, internet cafés and “informal” outlets non-accredited or recognized by the commerce chambers. Governmental commitment liberation is represented by the delivery of the support provision. Equipment: Delivery of defined goods characterized by their low cost and acquired in large batches, such as governmental consolidated purchases (equipment, packages, etc.). Governmental commitment liberation in this case is a voucher from the beneficiary stating support reception. Training: Acquisition and development of abilities through training and informal education methods. This does not imply formal acknowledgement of acquired capacities, or that the beneficiaries would begin activities immediately and compulsorily. Governmental commitment liberation is represented by the training received. As long term strategies, projects were selected that represented a multi-annual governmental commitment with the inhabitants. These projects were focused on new capabilities construction or expansion which will be used as samples of future experiences and new paradigms of collaboration and group work. The following are some of the main long term strategies: Education: Educational strategies (courses, workshops) aimed at increasing the beneficiaries’ educational level by awarding certificates and accrediting acquired capabilities to perform specific activities. Governmental commitment liberation is represented by the certificate of increased educational level. Infrastructure: Construction of buildings, meeting rooms, ejidal rooms, churches, classrooms, dwellings and drinking water wells with beneficiary engagement by providing capital to increase their capabilities. Contracting: Temporal recruitment of a professional to provide specific services for the local population, for instance: community doctor, bilingual teacher. Incubation: Equipment and capabilities’ shared acquisition using governmental funding completed with contributions of the beneficiaries. They will be part of the

470

F. J. Morales-Flores et al.

beneficiaries’ assets, as long as they agree to be part of a legal association. This strategy was generally defined as rural business incubation with local resources exploitation. Each one of the enumerated proposals was grouped in a catalogue of rural initiatives typical of native communities. A data base was constructed in an analytical manner which contained all mentioned characteristics (requirements for implementation and project type). The initiative catalogue was grouped using multivariate statistics by means of principal components analysis in order to identify similar initiatives and to design a more coherent rural innovation program (RIP). Independent characteristics were identified using a correlation matrix; groups with similar characteristics were formed, which explained the higher variability (principal components) reflected in a classification tree. Each group was described in order to make the initiative classification. The SAS programming language was used to perform this classification (SAS Institute 2004). Finally, an interest network is shown which identifies the importance of each initiative using the graph theory. It reflects the interest of the native communities to implement each initiative. The graph network methodology highlights the bilingual manager and the initiatives preferred by his community. Initiatives are presented in four areas: agriculture, community infrastructure, commerce and manufacture. The software used to create the graphs is NodeXL [4].

33.4 Results 33.4.1 Forums Development A total of 936 local actors participated in forums (71.5% women and 28.5% men). A greater engagement of women stands out (maybe due to their higher interest on the community development or because they have more flexible schedules despite their house activities). With this participant profile, local rural development groups can be formed. It is worth emphasizing that men may not have been able to go to the forums because they are usually in charge of the family support, but this does not justify their lack of interest. Forum attendees 36 to 60 years old were registered (48.61%), followed by young people aged between 19 and 35 (30.26%). An important participation (18.78%) of people older than 60 years is to be noted and in the last place, 2.3% of young people under the age of 18 participated. This ratio of young attendees is worrisome, because it reflects a limited generational replacement. Young people show little interest in their community development, so they could be opting to migrate to zones which offer better opportunities. Young people engagement is important because they contribute to adopt innovations and they are capable to incorporate themselves much easier to rural development processes [1].

33 Development of Local Capabilities for Rural Innovation …

471

33.4.2 Rural Innovation Program The initiatives chosen by forum attendees were: agriculture fostering, drinking water supply, public street lighting installation, training courses delivery, monetary capital loans, health centers installation, facilities construction, teacher recruitment for bilingual education, temporal employment sources, financing for business creation, livestock acquisition, tools acquisition, hygiene (sanitary fumigation), sport installations, urban waste management, maintenance (road improvement, dredging of rivers), businesses (family business financing), reforestation (of local vegetation), government services (police services and civil registry), transport routes, backyard activities and dwelling improvement (in descending order). The rural innovation program is constructed by focusing the preferences evidenced by rural actors with regard to the initiatives. This allows knowing their needs and interests which converge to achieve a development level that ensures project continuity and the establishment of associations as permanent organizations. Of the analyzed variables, the generated gain and the initiative output type showed a strong dependence (correlation higher than 0.7, Table 33.2). Five main components were defined which structured seven relevant characteristics (84 of explained variance): (1) the initiative nature includes the reach of the benefits (reach), the generation of economic profits (profits) and the produced impact (job generation, product manufacturing and/or service provision) (output type); (2) integrated initiative feasibility due to own monetary contribution (own contribution) and attendees support as a first option; (3) Plan B being the second preferred option; (4) previous requirements comprised by the need of a prior infrastructure; and (5) Plan B, the second option location. Details of each main component are shown in Table 33.3. Table 33.2 Correlation between initiative descriptors Initiative characteristic (a) Prior infrastructure (b) Own contribution (c) Generated well-being

a

b

c

d

e

f

g

h

1.00 −0.22

1.00

0.26

0.30

1.00

(d) Output type

−0.12

0.13

−0.47

1.00

(e) Obtained gain

−0.25

0.31

−0.45

0.76

1.00

0.45

−0.56

0.51

−0.46

−0.63

1.00

−0.06

0.23

0.04

−0.04

0.05

−0.15

1.00

(f) Reach of benefits (g) Firt option

i

(h) Second option

0.19

0.00

0.05

−0.12

−0.10

0.07

−0.07

1.00

(i) Third option

0.01

−0.10

0.06

0.08

−0.05

0.19

−0.20

0.23

1.00

472

F. J. Morales-Flores et al.

Table 33.3 Main Components (MCn) which explain the variability of the Sonora indigenous peoples initiatives Component

Own value Accumulated variance Initiatives relevant characteristics

Nature

3.0

0.331

Reach (0.51); Profit (−0.50); Output type (−0.43)

Feasibility

1.6

0.503

Own contribution (0.55); First option (0.47)

Plan B

1.2

0.639

Second option (0.60)

Prior requirements 1.0

0.745

Prior infrastructure (0.75)

Plan B

0.841

Second option (−0.59)

0.9

Finally, 11 similar initiatives groups were formed (similarity = 0.8) due to the coincidence between participants communities’ interests (Fig. 33.2). Initiatives with a limited benefit reach were differentiated (groups 2, 6, 8, 9 and 11) from initiatives with benefit reach for the population in general (1, 3, 4, 5, 7 and 10). Initiatives which generate profit (2) from other initiatives which do not generate economic profit. Groups 8 and 9 requested the improvement of the family dwelling (which shows no interest in developing a permanent organization). Groups 2 and 11 are seeking to start a business or company (showing that they want to join a more permanent organization, because they are trying to integrate themselves in the local economy, for example creating businesses such as a bakery, a haberdashery, a kindergarten, etc.); groups 1, 3, 4, 7 and 10 are interested in community services that can launch a permanent organization (transportation, drain dredging, drinking water network, public lighting, electric workshop, dairy products and cold meats shops). Rural Innovaon Program

Broader scope Small profit

Lile or null scope Lile or null profit

Reduced scope Great profit

Nature

Broader scope Very lile profit

Very reduced scope Great profit

Lile or null scope Slight profit

Feasibility

No contribuon

No contribuon

3

Maintenance Street lighng Transportaon

High contribuon

No contribuon

4

Health Government Sports

1 Construcon Training

Low contribuon

Low contribuon

No contribuon

No contribuon

High contribuon

Very high contribuon

Very high contribuon

5 Maintenance Educaon

10

7

Health Maintenance Employment Educaon

6 Agriculture Livestock

11

2

9

8

Company

Business

Dwellings

Dwellings

Fig. 33.2 Dendrogram by initiative similarity in south Sonora’s indigenous peoples

33 Development of Local Capabilities for Rural Innovation …

473

Group 5 is characterized by initiatives where the government is the community service infrastructures’ constructor. This contribution is complemented with the preference network of the South Sonora’s Mayo ethnic. The different initiatives connect the managers trained in project prototypes generation and these projects can be adapted depending on the location in which they will be developed. Common initiatives distribution tended by Bilingual Local Managers (BLM) is shown using NodeXL. 50% of the initiatives included vegetable cultivation, chicken farming, embroidery elaboration, clothing manufacturing, cattle breeding, greenhouse production, bread elaboration and sheep breeding. These productive activities were identified as priorities for income generation for living expenses and thus they must be incorporated to the rural innovation program. 50% to 80% of initiatives were activities related with shrimp farming (shrimp gathering, preparation and packaging), fishing and fish capture, fish-related activities, corn, eggs and sheep production. These activities were complemented with educative services and drinking water supply. The group of activities preferred only by 20% of the population, are activities of different nature related with individual initiatives instead of needs expressed by the entire population, such as household furniture, handbags, leather clothing items and shoes manufacturing, slaughter of animals, supplies elaboration (food, fertilizer), milk production, agrosilvopastoral operations, bird farming (other than hens) and services such as masonry, school transport service, restaurants, Internet access, management consulting services and retail trade (stationery, restaurants) and finally, housing construction, job generation (education, sports, waste management, consulting, automobile repair and cold storage of sea products) (Fig. 33.3).

Agriculture Manufacture Infrastructure Services

BLM: Bilingual local manager BLM-RDA: BLM associated to a Regional Development Agency

Fig. 33.3 Preferences expressed by the south Sonora’s Mayo ethnic, Mexico

474 Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

F. J. Morales-Flores et al. Name Final score Torres Galindo Rigoberto Poqui Félix Alma Delia Yocupicio Valenzuela Germán Carlos Sotomea Jocobi Rosalva BLM Borbón Baynori Zahira Adriana BLM-RDA Leyva López Sheila Briseida BLM Yevismea Valenzuela Celedonia Romo Moroyoqui Lizeth Guadalupe Guerrero Miranda Olivia Gastelum Chaparro Francisco Javier BLM-RDA Rodríguez Bacasehua Jerónimo BLM Muñoz Valenzuela Joaquín BLM-RDA Valenzuela Ibarra Marn Humberto BLM Escalante Gómez Francisco BLM-RDA Yocupicio Buimea Elda María BLM-RDA Maldonado Valenzuela Sonia BLM Sanaba Bajeca Estanislada García Duave Marna Isaura -

Number 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36

Name Vega Armenta Reynaldo González Espinoza Juan Teofilo Jorge Yocupicio Jocobi Marcos Rosario Valenzuela Yevismea Alberto Regino Aguilasocho Sotelo Julio Castro López Jorge Luis Choygua Varela Pedro Frías Galaviz Eligio Porllo Lavega Carlota Avalos Osuna Sergio Noel Buimea Duarte Cesar Gabriel Ana Rosa Orduño Gaeta Cota Robles Janeth Poovi Félix Socorro Valdéz Cabrera Luz Berónica Nocamea Matus Marcela Valenzuela Ozua Flor María Bacasehua Bumea Mario

Final score BLM BLM-RDA BLM-RDA BLM-RDA BLM-RDA BLM BLM BLM-RDA BLM-RDA BLM-RDA BLM BLM BLM-RDA BLM BLM-RDA

Fig. 33.4 Bilingual Managers final ranking, by their training program’s qualification level

The initiatives, which are prioritized alternatives chosen by the inhabitants, are related with monetary profit producing businesses which have a product as an output. These initiatives show the highest interest among the Mayo ethnic inhabitants. Figure 33.4 shows the final location of each trained bilingual manager, either as a Technician (–), Promoter or Coordinator of a Rural Development Agency (RDA) with the aim of managing in an orderly manner the identified and prioritized initiatives. The main criteria for the bilingual local managers training were: (a) a training program which enables specific project detection in the short term (less than a year) and high impact, as well as in the medium term (one to three years) and high impact; (b) the definition of actors, groups and communities interested in adopting and launching local initiatives to promote their own development; (c) the intervention orientation and specific actions implementation (management and commissioning) to stimulate a competitive and job generating economy; (d) the development of a new value chain with innovation inducing technologies to accelerate local jobs creation. These two stages are not reported in this communication. The application of this social model in rural areas of Mexico’s different regions has allowed the Postgraduate College to accumulate successful experiences (agriproducts producer-exporter networks) with technical, scientific and economic solvency.

33.5 Conclusions Sonora’s evaluated communities have as a priority initiative the development of an own income, and, for this reason, business projects in which housewives are interested should be implemented. This decision would increase family income because they would continue to fulfil their role as homemakers. Training and education projects with specific activities have less permanence through time and, thus, less consolidation. Furthermore, young people must be included in the different projects as

33 Development of Local Capabilities for Rural Innovation …

475

an important and fundamental part of rural development. This way, the generational replacement and migration would not have significant repercussions in the community improvement process.

References 1. Cadena-Iñiguez J, Martínez-Becerra A, López-Romero G, Trejo-Téllez B, Figueroa-Rodríguez KA, Talavera-Magaña D, Hernández-Rosas F (2010) El proceso de investigación vinculación (I + V) para la asociación empresarial en núcleos agrarios de México. Agroproductividad 3:23–30 2. De los Ríos-Carmenado I, Díaz-Puente JM, Cadena-Iñiguez J (2011a) La iniciativa LEADER como modelo de desarrollo rural: aplicación a algunos territorios de México. Agrociencia. 45:609–624 3. De los Ríos-Carmenado I, Cadena-Iñiguez J, Díaz-Puente JM (2011b) Creación de grupos de acción local para el desarrollo rural en México: enfoque metodológico y lecciones de experiencia. Agrociencia. 45:815–829 4. Hansen DL, Shneiderman B, Smith M (2010) Analyzing social media networks with NodeXL: insights from a connected World. Morgan Kaufmann. 284 p 5. SAS (2011) Statistical Analysis System/Statistical Procedures. User’s Guide SAS/STAT 14.3 User’s Guide’. North Carolina. USA. 10814 p 6. IPMA (2015) Individual competence baseline for project, programme & portfolio management, 4th edn. IPMA, International Project Management Association, The Netherlands, p 415 7. Lyon A, Hunter-Jones P, Warnaby G (2017) Are we any closer to sustainable development? Listening to active stakeholder discourses of tourism development in the Waterberg Biosphere Reserve, South Africa. Tour Manag 61:234–247 8. Lederer J, Ogwang F, Karungi J (2017) Knowledge identification and creation among local stakeholders in CDM waste composting projects: a case study from Uganda. Resour Conserv Recycl 122:339–352

Chapter 34

Adoption of Good Project Management Practices in the International Cooperation Sector in Colombia Maricela I. Montes-Guerra, H. Mauricio Diez-Silva, Hugo Fernando Castro Silva, and Torcoroma Velásquez Pérez Abstract For the International Cooperation Sector, the project is the most used financing mechanism for the transfer of resources, and given the peculiarities of the sector, the projects have a set of differentiating characteristics that catalogs them as a highly complex particular typology, which makes its successful management a permanent challenge. Despite the significant amounts of money mobilized by the sector and the importance of the objectives of these projects for aid recipient countries, the low performance rates of these projects have become a generalized result and the empirical study of the causes of this situation is scarce. This descriptive and exploratory research statistically analyzes the perception of a representative sample of project managers from the sector in Colombian Non-Governmental Organizations in order to understand the general context for the adoption of good project management practices in this sector and their impact on the success of these interventions.

M. I. Montes-Guerra (B) · H. M. Diez-Silva · H. F. Castro Silva · T. Velásquez Pérez PROCESSES AND SERVICES MANAGEMENT GROUP. Escuela Internacional de Ciencias Económicas y Administrativas, Universidad de La Sabana, Campus Universitario, Autopista Norte de Bogotá Km. 7, Puente Del Comun, Chia, Colombia e-mail: [email protected] H. M. Diez-Silva e-mail: [email protected] H. F. Castro Silva e-mail: [email protected] T. Velásquez Pérez e-mail: [email protected] H. M. Diez-Silva · H. F. Castro Silva · T. Velásquez Pérez Departamento de Proyectos. Facultad de Ingeniería, Universidad EAN, Calle 79 #11-45, Sede fl Nogal, Bogotá, Colombia Grupo de Investigación OBSERVATORIO. Escuela de Ingeniería Industrial, Universidad Pedagógica Y Tecnológica de Colombia, Calle 4 Sur, Sogamoso #15-134, Colombia Grupo GITYD, Departamento de SISTEMAS E INFORMACIÓN, Universidad Francisco de Paula Santander. Vía Acolsure Sede Algodonal, Ocaña, Colombia © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_34

477

478

M. I. Montes-Guerra et al.

The results demonstrate a low rate of adoption of project management methods and a relationship between their use and project performance. Keywords Project management · Non-governmental organizations · Development aid projects

34.1 Introduction Development cooperation and the processes used to manage its resources can be understood as the transfer of resources from donor countries to developing countries, to other territories or multilateral agencies [7], having as an essential objective the promotion of economic development and welfare in recipient territories. In this sense, development cooperation is based on equality, individual and collective, through the search for conditions that make it possible for members of a community to have the options and opportunities to achieve a decent quality of life, sustainable economic and social development, and poverty reduction [23]. To achieve its purposes, the International Cooperation sector carries out various actions, and one of them, “the project”, is perhaps one of the most used financing mechanisms for the transfer of resources. However, the unique characteristics of projects in the development or International Cooperation (IC) sector make them come to be considered as a particular typology [39], among other reasons because they involve a large number of diverse interest groups such as governments, government agencies, non-governmental organizations (NGOs), local beneficiaries and civil society, in many cases of different ethnic groups or nationalities, who differ in their perceptions and conceptions due to particular aspects of each culture such as language and values [6, 25]. In addition to the complexity in the relations between groups of interest in the IC projects, other particularities must be considered, such as: (1) IC projects cover almost all sectors of projects implementation [6]; (2) lack of a well-defined customer compared with conventional construction or engineering projects [19]; (3) they may be considered as public sector projects [38]; (4) they comprise a large number of different stakeholders [40]; (5) their environment is difficult, complex, and risky, due to conditions of poverty, inequality, and insecurity in recipient countries [30]; (6) resource scarcity is common [21]; (7) being international projects, several cultural differences must be managed [20]; and (8) its nature is focused on the generation of social welfare and not on financial gain [12]. Based on the above, it is convenient to analyze conceptual elements of project management, from which a configuration can be extrapolated for IC projects. It can be said that project management has become an essential element for projects success in all types of organizations, regardless of the industrial sector or the project size, and that this type of PM practices are seen as a means to improve the probability of achieving project goals [3, 22, 29]. The implementation of this type of practices

34 Adoption of Good Project Management Practices …

479

always represents an advantage to the organizations that implement them, in comparison to those that do not. For example, the first ones have more clarity about their projects’ objectives, they identify better the necessary resources for the execution, and they assure better accountability and results and improve the achievement of projects’ objectives [3, 17, 29, 35]. Some researchers, such as Ika and Thuillier [13], have questioned if IC projects are so different from other projects that they require a particular analysis of their success factors, and especially, if it is necessary to distinguish this type of projects’ management considering the need for specific processes, techniques, tools, and abilities. Ika et al. [13] have also stated that the answer to that question is simple: their environment is undoubtedly unique; project managers or coordinators in the development aid or international cooperation industry sector have to deal with complexity, resistance to change, aligning the agendas of a large number of stakeholders, and diverse and even contradictory expectations that make commitments very difficult to achieve [2, 4–6, 11, 18]. In this regard, the definition of IC project proposed by Johnson [16], brings the need to use specific management techniques and tools, which does not contradict the content of the Project Management Guide for Development Professionals (PMDPro), where the definition of PM for this type of project is stated as the discipline of planning, organizing, and managing resources to carry out the successful delivery of the specific project’s objective, products, and results [30]. According to this same guide, PM is science and art and includes specific competencies for a project manager which are classified as technical, leadership or interpersonal competencies, selfmanagement competencies, and those specific to the development sector such as understanding the sector’s values and paradigms, showing cultural sensitivity, and performing adequately in complex environments, among others. Despite these facts, international cooperation projects management has not been a subject of great interest for researchers, so there is little literature that relates them to the effectiveness of development aid [14]. Particularly for IC projects, little has been written about aspects related to good management practices or methodologies, successful projects, critical success factors or team management [5, 6, 12, 14, 15, 18]. Research focused on verifying the efficacy and effectiveness of international cooperation projects have concluded that the failure of these interventions is the general rule [12, 15]. Analyzing the causes of this situation, academics and practitioners in this sector have directed his studies toward policies, procedures, and mechanisms for the allocation of aid, more than in practices used to manage such projects. Notwithstanding the above, this communication explores the relationship of the project performance with the adoption of good project management practices, topic which has not been extensively studied at global and local levels, and which will allow to generate an improvement path for the way to manage these projects in order to influence their success in the short and medium term. To find the causes of the constant failure in international cooperation projects, some eyes turn to the project management tool predominantly used in the sector, the Logical Framework Methodology (LFM). Developed in 1969, for the United States

480

M. I. Montes-Guerra et al.

Agency for International Development (USAID), the Logical Framework Methodology (LFM) is the most widely used mechanism for IC project management [32]. The LFM operational tool is a matrix that integrates concepts that should be used in conjunction and in a dynamic way to describe and design an easily assessed project [32]. However, this communication addresses the issue of IC project management from a perspective broader than the use of LFM. At the contextual level, international assistance for development is currently being received in Colombia, and according to figures and reports of the Colombian Presidential Agency for Cooperation (PAC) and the Organization for Economic Cooperation and Development (OECD), the trend of cooperation funds remains positive as opposed to many other Latin American countries [1]. Despite the importance of this revenue, the management of IC projects in Colombia records a behavior similar to that described above for the sector, characterized by the inefficacy and ineffectiveness of interventions but also generating an opportunity to improve planning, control, and evaluation processes [10]. Therefore, mechanisms that might work to manage these projects should contain some peculiarities, as a result of this typology, so it is required to analyze both forms and methods that are used to generate new options from Project Management. With this literature revision frame, the following sections of this communication are dedicated to present the objectives, the methodology used to reach these objectives, the results obtained after the application of information harvesting instruments, the results analysis, and the conclusions obtained from the study.

34.2 Objectives This communication’s objective is to present a general context of good project management practices adoption, applied to the international cooperation sector in Colombia, based on a representative sample of project managers with the intention to describe methodologies, processes, techniques, and tools used to manage this type of interventions. On the other hand, this research intends to verify if a relationship exists between the use of generally accepted practices of project management and the internal and external performance of the implemented international cooperation projects in Colombia. In order to verify this relationship, techniques of inferential statistical analysis are used, through the R ®software. The results contained in this communication are useful for researchers in the project management area, financing organizations for interventions in development aid, non-governmental organizations (NGOs) implementers of projects funded with resources from international cooperation, and for managers of such projects interested in improving the performance of their projects.

34 Adoption of Good Project Management Practices …

481

34.3 Methodology This study is exploratory and descriptive [34], since it comprises the registry and interpretation of the PM process in the development aid sector; a series of variables are defined that can determine its performance, in this case, related to tools application. Likewise, the research is causal considering that it attempts to determine the influence of certain variables on another independent variable, in this case, the project performance [26]. For the purpose of being able to compare the results of this investigation with those obtained in other studies, the methodology design is similar to other related studies in different areas such as those of Patanakul et al. [28]; White and Fortune [37]; and Raz and Michael [31]; and in the IC sector, those of Montes-Guerra et al. [24]; and Golili et al. (2015). Once the review of literature was completed, a questionnaire was designed for data collection which consists of three sections: in the first one, demographic information is gathered related to the type of organization and the characteristics of their projects; the second section is dedicated to collecting data related to the application of good practices in every phase of the project’s life cycle. These questions are designed using a Likert scale where 1 corresponds to having never used a particular PM practice and 5 to having always adopted it in their interventions. Finally, the third section contains questions related to the project internal and external performance appraisal, also using a Likert scale. The instrument validation was made through the application of pilot tests and experts’ judgement. A simple random sampling was used and sample size calculation was made using the Spiegel & Stephens expression [33], obtaining that a representative sample for this case should contain a total of 62 project managers. The tabulation and data analysis either descriptive or of an independent hypothesis verification type were made in R® statistics software.

34.4 Results The data analysis results are presented. The first part of this section is dedicated to the descriptive analysis of the sample demographic characteristics, focused on NGOs; the second part is dedicated to the analysis required to answer the investigation’s first question in order to establish the degree of adoption of PM tools. To this end, the most widely used tools are identified, as well as their application in each phase of the project life cycle. The third part is focused on answering the second question of the investigation related to determining if the tools adoption degree influences the project’s success. To this end, the hypotheses of statistical dependency test results are presented.

482

M. I. Montes-Guerra et al.

34.4.1 Demographic Characteristics of the Sample The sample comprises 62 IC project managers linked to ONGs in Colombia. Regarding organization seniority, the considered NGOs have an extensive experience in the sector: 46% have dedicated between 5 and 10 years to the management of this type of projects, 21% have an experience of more than 10 years, and the remaining 32% less than five years. In terms of intervention undertaking over the past three years, 65% of the NGOs initiated between 1 and 5 projects and the remaining 35% of organizations between 6 and 10 projects. The time dedicated to the sector, as well as the number of projects managed by the organizations, justify the implementation of generally accepted MP practices. Figure 34.1 shows a summary of the sample demographic characteristics. The project’s scope of application was classified based on the thematic priority areas established by the IC strategy of the PAC. With regard to the sample composition, more than half (53%) the projects managed by the surveyed managers are related to the environmental area. A third of the projects are oriented to governance and the scope of 6% of the projects is aimed at victims, reconciliation, and human

Fig. 34.1 Sample demographic characteristics

34 Adoption of Good Project Management Practices …

483

rights. The three remaining priority areas, such as risk management, equal opportunities and economic growth and competitiveness, comprise projects that together represent 10% of the total sample. With respect to the geographical distribution of international aid recipient projects managed by the survey respondents, the greater representation is located in Antioquia, Bogota, and Norte de Santander with 23%, 21%, and 10% of the total. In a smaller proportion, follow the departments of Bolivar with 9%, Córdoba with 8%, Cauca with 6%, and Choco and Nariño, both with 4%. The remaining group of IC projects included in the sample is located in the rest of the National territory. Whereas the official and nonofficial aid funds for these projects come mainly from the United States (40%) and from member states of the European Union (58%), only 2% comes from other countries.

34.4.2 Adoption of Good Project Management Practices Research data demonstrates that the tools more frequently used by NGOs’ project managers in Colombia, are in the first place the Logical Framework Approach LFA, reporting systems, problem and alternative trees, budget control, performance indicators, and Work Breakdown Structure WBE, most of them identified as typical of the PM of the sector [30]. Secondly, those related to goal oriented project planning ZOPP, community-based participatory tools, stakeholders’ matrix and critical path can be highlighted. Finally, in the third place, the following group of PM tools was identified by the respondents: risk analysis tools, responsibility matrix, earned value analysis, critical chain, inspections and audits, and quality control statistical tools. Results are shown in Fig. 34.2. As shown in Fig. 34.2, IC project managers clearly prefer traditional tools such as the LFA confirming that it is still the planning technique most commonly used in the sector [36], which is to be expected since most of the major funding bodies include the LFA within their processes and calls [4, 8, 21]. They also favor other analysis tools to identify people’s expectations and interests adopted from psychology, sociology, and anthropology. However, standard tools derived from generally accepted good PM practices have a very low level of adoption and a large number of these tools has not been considered by project managers as it is the case of the earned value analysis, critical chain analysis, quality tools, communications matrix, among others. It is convenient to consider these findings, based on the LFA weaknesses, and the possibility to integrate it with some PM techniques that could increase its potential for project management. This low level of adoption of PM standard tools is related to the low percentage of project managers who know the sector’s general and specific standards with regard to PM. Of the respondents, 30% identify the PMBOK® standard as the most recognized in the discipline, and of them, only 10% have sought to implement it in their interventions, which represents only 3% of the total. Along the same lines, it is noteworthy that 98% of surveyed project managers claim not to know the two bodies

484

M. I. Montes-Guerra et al.

Fig. 34.2 Degree of adoption of PM tools in the IC sector

of knowledge, specific to the development aid PM which are the PMDPro and the PM4DEV, when in other latitudes, these standards are widely used [12]; Landoni & [21]. Golini et al. [9] showed in their study, mainly aimed at project managers in Europe and Africa, the existence of four clusters differentiated by the degree of tools adoption. In the first of them, NGOs are found which use a limited group of basic tools such as the LFA and reporting systems; in the second one, more analytic tools are used such as risk analysis and management, Gantt charts, and cost accounting; NGOs grouped in the third cluster apply standard PM tools associated with scope, time, and stakeholder management; while the last NGOs group is characterized by applying a broad range of more complex and analytic standard tools. NGOs analyzed in the case discussed in this communication could be located mainly in the first and second cluster of this categorization.

34.4.3 Influence of the Adoption of Good Project Management Practices on Projects Success As internal critical factors decisive for project success, those related to compliance with basic restrictions such as budget, deadlines, deliverables, and quality are considered. Figure 34.3 shows the obtained results, in which, it can be highlighted

34 Adoption of Good Project Management Practices …

485

Fig. 34.3 Projects’ internal performance

that according to project managers, it is common to find unsuccessful projects when applying the defined criteria. The variable showing the worst performance is time compliance, followed by the compliance of products and their quality, while the variable that shows the best result is related to performance under budget. These results would be consistent with the general acceptance of the poor performance of this sector projects. The criteria considered to determine success in the project external performance were: satisfaction of the beneficiary community, project sustainability once products were transferred to the beneficiaries, community participation, and project long-term impact. Although 22% of project managers declared they did not monitored any of these variables, if the external performance of the project is assessed, only in 54% of the cases the beneficiary community is satisfied, whereas 77% of the projects have a suitable long-term impact. These results are shown in Fig. 34.3. Statistical independence tests are performed to establish the possibility of a relationship between tool implementation and project internal performance success. From this relationship, an improvement opportunity could be identified for currently used IC project management methods. The variable tools degree of adoption results from the overall average of the ratings given by project managers to the use of each tool considered in the questionnaire. Table 34.3 shows the obtained results. From these results, considering a significance level of 5%, it can be stated that a relationship exists between the degree of adoption of tools and the internal performance of the project in terms of compliance with time, scope, quality, and budget. Regarding the influence of PM tools adoption in the IC sector on the external performance success of the interventions, the results of this investigation with a significance of 5% allow to conclude that there is sufficient statistical evidence to accept that a relationship exists between the use of tools and the community participation. Likewise, it has been determined that there is no relationship between the

486 Table 34.1 Independence tests: tools adoption and project internal performance

Table 34.2 Independence tests: tools adoption and project external performance

M. I. Montes-Guerra et al. Critical success factor

Chi-squared

Observed significance

Time compliance

86.339

0.0001

Quality compliance

94.752

0.0002

Deliverables compliance

86.090

0.0001

Budget compliance

118.590

0.0002

Critical success factor

Chi-squared

Observed significance

Beneficiary community satisfaction

57.943

0.0635

Sustainability after transfer

61.534

0.0724

Beneficiaries engagement

78.642

0.0201

Long-term impact:

89.107

0.0722

adoption of PM tools and the satisfaction of the community at which the project is directed. There is also no relationship with intervention sustainability nor with the long-term impact. Chi-squared values and the observed significance resulted from the test can be consulted in Table 34.2.

34.5 Conclusions In this research work, an analysis has been presented for the adoption of Project Management instruments in interventions on the International Cooperation sector in Colombia, as well as the existing relationship between that use and some specific project results. Low use rates in many elements the PM have been found, but also some important considerations on the influence that those elements have on success criteria and project performance. The results of this empirical study allow to argue that there is a low adoption level of generally accepted PM tools in the IC field in Colombia. In the maturity scale from 1 to 4 established by Golini, Kalchschmidt, and Landoni [9], Colombian NGOs can be classified in the second level at the most. Tools used by IC project managers are generally those typical of the sector and tools such as the LFA, the report system and those associated with analysis and community engagement approaches stand out.

34 Adoption of Good Project Management Practices …

487

Standard tools typical of the PM discipline are scarcely used and the low level of expertise on PM specific bodies of knowledge for development aid stands out. It is ratified that the logical framework approach is still the most commonly used planning tool in the sector, although project managers used it primarily as a means for the identification and achievement of project funding and they leave the tool as the intervention progresses to implementation and monitoring and control phases. This practice is also observed in other European geographical contexts. The project work breakdown by means of the WBS is used as a tool in the planning and implementation phases. Regarding monitoring and control tools for IC projects, tools related to performance indicators, ex-post assessments, inspections, and audits stand out. Statistical evidence allows assimilating the results of studies by Patanakul et al. [28], White and Fortune [37], Raz and Michael [31] and Papke-Shields et al. [28], in other sectors different from IC PM, in the sense that the use of tools is positively related with the internal performance of the project, although this element would not be sufficient to ensure project success. Regarding the influence of PM tools adoption on external performance, a positive relationship is found with the participation of the community which benefits from the cooperation actions. However, it is also found that there is no relationship between the use of PM tools and critical success external criteria such as beneficiaries’ satisfaction, project sustainability, and its long-term impact.

References 1. APC (2015) Documento de análisis de la Cooperación Internacional en su dimensión ambiental. Agencia Presidencial para la Cooperación, 7–14 2. Cernea M (1998) La Dimension Humaine dans les Projets de De´veloppement: les Variables Sociologiques et Culturelles. Karthala, PARIS, Paris 3. Charvat J (2003) Project management methodologies. Selecting, Implementing, and Supporting Methodologies and Processes for Projects (Vol. 1). New Jersey: Wiley 4. Crawford P, Bryce P (2003) Project monitoring and evaluation: a method for enhancing the efficiency an effectiveness for aid project implementation. Int J Project Manage 21(1):363–373 5. Diallo A, Thuillier D (2004) The success dimensions of international development projects: the perceptions of African project coordinators. Int J Project Manage 22(1):19–31 6. Diallo A, Thuillier D (2005) The success of international development projects, trust and communications: an African perspective. Int J Project Manage 23:237–252 7. EuropeAid. (2011) Guía Gestión del ciclo del proyecto. EuropeAid oficina de cooperación, Madrid, España 8. Golini R, Landoni P (2014) International development projects by non-governmental organizations: an evaluation of the need for specific project management and appraisal tools. Impact Assessment and Project Appraisal 32(2):121–135 9. Golini R, Kalchschmidt M, Landoni P (2015) Adoption of project management practices: the impact on international development projects of non-governmental organizations. Int J Project Manage 33(3):650–663 10. Grasa R (2014) La cooperación internacional para el desarrollo en Colombia. Una visión orientada al futuro. Agencia Española de Cooperación Internacional para el Desarrollo, AECID. Agencia Presidencial de Cooperación Internacional de Colombia, APC-Colombia

488

M. I. Montes-Guerra et al.

11. GTZ (2003) How successful is technical cooperation? Project results of GTZ and its partners. Retrieved April 12, 2014, from www.gtz.de/publikationen/english 12. Hermano V, López-Paredes A, Martín-Cruz N (2013) How to manage international development (ID) projects successfully. Is the PMD Pro1 Guide going to the right direction. Int J Project Manage 31:22–30 13. Ika L, Diallo A, Thuillier D (2010) Project management in the international development industry. The project coordinator’s perspective. (E. G. Limited, Ed.) Int J Managing Projects Business, 3(1):61–93 14. Ika L, Diallo A, Thuillier D (2012) Critical success factors for World Bank projects: an empirical investigation. Int J Project Manage 30:105–116 15. Ika L, Hodgson D (2014) Learning from international development projects: Blending Critical Project Studies and Critical Development Studies. Int J Project Manage 32:1182–1196 16. Johnson K (1984) Organizational structures and the development project planning sequence. Public Administration Develop 4(2):111–131 17. Kautz K, Pries - Heje, J. (1999). System development education and methodology adoption. ACM SIGCPR Comput Personnel 20:6–26 18. Khang, D., & Moe, T. (2008). Success criteria and factors for international development projects: A lifecycle based framework. (P. M. Institute, Ed.) Project Management Journal, 39(1), 72–84 19. Kamrul A, Indra G (2010) Analysis of cost and schedule performance of international development projects. Int J Project Manage 28:68–78 20. Kwak Y, Dixon C (2008) Risk management framework for pharmaceutical research and development projects. International Journal Managing Projects Business. 1:552–565 21. Landoni, p., & Corti, B. (2011). The Management of International Development Projects: Moving Toward a Standard Approach or Differentiation? (P. M. Institute, Ed.) Project Management Journal, 42(3), 45–61 22. Milosevic D, Patanakul P (2005) Standardized project management may increase development projects success. Int J Project Manage 23(3):181–192 23. Montes-Guerra M (2010) Seguimiento y control en proyectos de cooperación para el desarrollo. Universidad Pública de Navarra, Pamplona 24. Montes-Guerra M, De-Miguel A, Pérez-Ezcurdia M, Gimena Ramos F, Díez-Silva H (2015) Project Management in Development Cooperation. Non-Governmental Organizations. Innovar journal 25(56):53–67 25. Muriithi, N., & Crawford, L. (2003). Approaches to project management in Africa: Implications for international development projects. . (P. M. Institute, Ed.) International Journal of Project Manajement, 21, 309–319 26. Ouellet A (2001) Procesos de investigación: Introducción a la metodología de la investigación y las competencias pedagógicas. (1ª edición en español ed.). Bogotá. D.C., Colombia: Escuela de Administración de Negocios –EAN-. Centro de investigaciones 27. Papke-Shields K, Beise C, Quan J (2010) Do project managers practice what they preach, and does it matter to project success? Int J Project Manage 28:650–662 28. Patanakul Stevens, Iewwongcharoen Boonkiart, Bangkok, & Milosevic Dragan. (2010) An empirical study on the use of project management tools and techniques across project lifecycle and their impact on project success. 2010. The Braybrooke Press Ltd. Journal of General Management. Vol. 35. No. 3 Spring. 2010 29. Pitagorsky G (2003) The business value of embracing a unified PM methodology. Allpm.com 30. PM4NGOs (2012) Guía para el PDMPro—Gestión de proyectos para profesionales del Desarrollo (Vol. 1). PM4NGOs 31. Raz T, Michael E (2001) Use and benefits of tools for project risk management. Int J Project Manage 19(1): 9–17 32. Rosenberg L, Posner L (1979) The logical framework: a manager’s guide to a scientific approach to design and evaluation. Practical Concepts Incorporated, Washington, DC 33. Spiegel M, Stephens L (2009) Estadística (4ta, edición edn. Mc Graw-Hill, México, D.F., México

34 Adoption of Good Project Management Practices …

489

34. Tamayo & Tamayo, M. (2003) El proceso de la investigacion cientifica (Cuarta, edición edn. Limusa Noriega Editores, México, México 35. Turbit N (2005) Project management & software development methodology. The Project Perfect White Paper Collection. The Project Perfect, Australia 36. Vázquez-De Francisco MJ, Torres-Jimnenez M, Caldentey-Del Pozo P (2015) Límites del marco lógico y deficiencias. Revista Iberoamericana de Estudios de Desarrollo/Iberoamerican Journal of Development Studies 4(2):80–105 37. White D, Fortune J (2002) Current practice in project management—an empirical study. Int J Project Manage 20(1):1–11 38. Wirick D (2009) Public-Sector Project Management: Meeting the Challenges and Achieving Results. Wiley, Hboken. 39. Youker R (2003) The nature of international development projects. PMI conference 2003 (pp 202–211). Pennsylvania: Project Management Institute 40. Youker R (1999) Managing international development projects-lessons learned. Project Manage J 30(2):6–11

Chapter 35

The Logical Framework Approach, Does Its History Guarantee Its Future? R. Rodríguez-Rivero, I. Ortiz-Marcos, L. Ballesteros-Sánchez, J. Mazorra, and M. J. Sánchez-Naranjo

Abstract International Development (ID), which is often questioned for its lack of effectiveness, is a specific sector in the application of Project Management (PM) in relation to the use of projects that are undertaken to generate changes. In this case, ID seeks to have an impact on living standards of the most vulnerable groups of the population. This study reveals, by an extensive review of the most pertinent literature, the particularities and complexities of ID projects. It initiates the debate on the convenience of adapting existing tools in PM that are broadly used in, and familiar to, the industrial sector (PMI, Prince or any other) to Cooperation, or the need to improve those tools that are already in place, mainly the Logical Framework Approach (LFA), or even to design new alternatives. Due to the respect owed to a tool that has been used for nearly five decades, this research studies the development and evolution of LFA from its origins to the latest presentations, including its strengths and limitations. The proposals to improve LFA, as a result of these investigations, are re-examined in cooperation with professionals, whose valuable opinions on the matter facilitate an increase in efficiency level in ID. Keywords Project management · International development · Logical framework approach

R. Rodríguez-Rivero (B) · I. Ortiz-Marcos · L. Ballesteros-Sánchez · M. J. Sánchez-Naranjo Department of Organization Engineering, Business Administration and Statistics, ETSI Industriales, Universidad Politécnica de Madrid, C/José Gutiérrez Abascal 2, 28006 Madrid, Spain e-mail: [email protected] J. Mazorra Innovation and Technology for Development Centre, Universidad Politécnica de Madrid. Avenida Complutense S/N. Ciudad Universitaria, 28040 Madrid, Spain © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_35

491

492

R. Rodríguez-Rivero et al.

35.1 Introduction Every year, International Development (ID) provides aid to developing countries. For decades, professionals and researchers have debated whether ID aid works. Whereas supporters maintain that aid works in the long-term [30], detractors contend that aid is ineffective [7] or, even worse, that it actually is the problem [23]. In recognizing the development process as multidimensional, and in considering the aid effectiveness principles of the Paris Declaration (2005), which concerned ownership, alignment, harmonization, management for results, and mutual accountability, it becomes clear that achieving sustainable development not only involves the volume of aid given, but also how that aid is given and managed [19]. Thus, directly affects ID projects. Most international assistance that governments and Non-Governmental Development Organizations (NGDOs) provide to developing countries is given for projects [6]. These ID projects seek to improve living conditions in emerging countries by, for example, enhancing agricultural, health, or educational systems [20]. In contrast to emergency projects, ID projects do not have the objective of providing immediate assistance to populations that are affected by wars or natural disasters. They usually take place in more stable contexts with the goal of improving living conditions. This may take the form of a stronger economy or better education or health [10]. Projects have always been at the core of ID activities, as demonstrated by Hirschman’s classic book, Development Projects Observed (1967). The ID projects sector is specific, but similar to other sectors of Project Management (PM) application, in regard to the ubiquitous use of projects to deliver change [22, 32]. In this case, the intangible nature of these changes to combat poverty reduction, for example, present a special challenge in managing ID projects. The main characteristics of most ID projects are included in Table 35.1. Table 35.1 Particularities of ID Projects Particularities

Reference from the literature review

ID includes almost every sector

Diallo and Thuillier [5]

Are typically public sector projects

Ika and Hodgson [16]

Are frequently carried in difficult environments

Ika and Hodgson [16]

Involve many stakeholders in different countries (North Grisham [8] and South) Their goals are usually not visible and measurable in the Khang and Moe [18] short-term The customer is a community, and its members do not fund the project

Ahsan and Gunawan [1]

Beneficiaries are often not included in the Project design Ika [15] Tools of PM are not valid in some of these cultures

Muriithi and Crawford [24]

35 The Logical Framework Approach …

493

To consider these particularities of PM practices, some PM guidelines for NGDOmanagement of ID projects have been specifically established. The two best-known guidelines are, perhaps, PMDPro and PM4DEV. Both require the integration of the standard methodologies of PM to be included in the Project Management Body of Knowledge PMBOK® or the International Project Management Association Competence Baseline (ICB®). Previous comparisons of these methodologies, specific and standard [9, 11], showed that their tools are very similar and that all tools that are included in the PMBOK® Guide appear also in the PM4DEV and PMDPro, except for the Logical Framework Approach (LFA) and tree analysis. The LFA tool is analyzed in great detail in this work. Tree analysis is a tool for investigating cause and effect relationships that are also included in the previous steps of the LFA. LFA was developed for the United States Agency for International Development (USAID) as the 1960s, ended in response to specific kind of projects such as ID Projects. This methodology was designed to ease and guide ID projects’ design and evaluation all over the world [11], and it has maintained its essence. Academic literature and managerial experience highlight how critical the proper use of specific methodologies and tools is to the successful management of projects (e.g., [14, 25]).

35.2 Objectives and Scope The objective of this research is to understand fully the LFA in order to propose an update that highlights its strengths and reduces its limitations. Two specific objectives, one academic and one practical, support this goal. Only when these two areas merge is the work meaningful. The first specific (and academic) objective requires a study of the development of the LFA from its origin almost five decades ago to its latest presentations. The second specific (and practical) objective is to analyze the opinions of 51 professionals in ID, who are experienced in the LFA methodology. Since LFA is the tool used by aid donors and NGDOs to appraise, manage and evaluate project interventions from the beginnings of the ID [21], the improvement in this tool contributes to an increasing in the effectiveness level of ID Projects. That is the long-term objective of this work. This work covers the academic objective by an extensive review of the concerning the LFA, and the practical objective by the evaluation of the survey that addressed the ID project managers working in concert with the LFA. The next section of this paper presents the methodology used in this work. The remainder of this paper consists of a Results section and Conclusions. The former describes the evolution of the LFA, according to the literature review, and makes use of the data that the survey provided to explain, from a practical perspective, the limitations of the LFA and how changes should be implemented. The Conclusions section summarizes the research and its main contributions and limitations.

494

R. Rodríguez-Rivero et al.

35.3 Methodology The first step of the method that was employed consisted of presenting the evolution of the LFA and the academic opinion of the effectiveness of this tool. This is based on a review of more than 50 journal citation reports that specialized in PM and ID. The second step involved the design and execution of a survey of professionals who are managing ID projects. The survey had three sections. The first section sought general information about the organization. The second section sought information about the project manager’s experience in ID projects during the last five years. The last section is devoted to the use of the LFA. Tables 35.2 and 35.3 provide a complete description of the survey sample. The project managers who participated in the survey work for NGDOs (82.1%) and managed more than ten projects during the last five years (70.6%), Most of these projects (77.1%) were executed according to the Spanish Agency for International Development Cooperation (AECID). The priorities were Latin America, North Africa, and sub-Saharan Africa. The average duration of these projects was one to three years (62.7%) with an average budget of less than 100.000 e (52.9%). Table 35.2 Descriptive statistics of the organizations Type

%

No. employees

%

No. projects/year

%

NGDO

82.1

200

13.7

Table 35.3 Descriptive statistics of the project manager No. % projects in the last five years

The areas where projects were executed

%

Average duration

%

Average projects size (e)

%

500.000

Central Am. & Caribe

24.6

Asia

10.2

7.9

35 The Logical Framework Approach …

495

The questionnaire was distributed by the 17 NGDOs Autonomous Coordinators, depending on the NGDO Spanish Coordinator, which encompasses 400 organizations. The questionnaire was addressed to professionals in the private sector who are working in Spanish companies in the infrastructure and energy sectors and who work on ID projects that are funded by the main financing institutions (World Bank, OECD, etc.). Professionals who work on these projects in Spanish Universities also were consulted. The total number of respondents was 51.

35.4 Results Remembering the special features of ID projects and the respect for a tool that was used for nearly five decades, the study of the LFA to investigate the possibilities for a suitable modification is the alternative that was chosen as the research objective, instead of the design of a completely new framework or the use of traditional PM standards in ID projects.

35.4.1 The Logical Framework and Its Evolution Practical Concepts Incorporated first developed the LFA in 1969, for the United States Agency for International Development (USAID) as a project design and evaluation tool [31]. Since then, the LFA has become used widely by aid agencies and NGDOs and is now a prerequisite for funding by many of the major donor agencies. Initially, LFA was a 4 × 4 matrix that summarized the ID project’s objectives, activities, assumptions, indicators, and sources of verification (Fig. 35.1). The matrix keeps a hierarchical vertical logic where activities deliver outputs that contribute to

Fig. 35.1 LFA Matrix with the additional box of Preconditions

496

R. Rodríguez-Rivero et al.

outcomes or purposes that contribute to the overall goal. The horizontal logic also presents the progress of each objective by means of Objectively Verifiable Indicators (OVI), the Means of Verification (MOV), and the external factors or assumptions, which might interfere with execution. This first generation of LFA was criticized as being inflexible, complex, and difficult to integrate with other PM tools. Lately, in the 1980s, the German Aid Agency (GIZ) modified the LFA by creating an extended version, the Ziel Orientierte Pojeckt Plannung (ZOPP) [20]. It was necessary to address four matters before designing the project planning matrix or LFA matrix. They were stakeholder’s analysis, problem analysis, objective/solution analysis, and alternative analysis. The modified LFA is known as the second generation LFA and is recognized as being more practical and participative. The third generation of the LFA, in the 2000s, promoted creative and participative analysis and inspired a great number of publications [3]. Today, there are many versions and variations in terminology, but LFA is typically presented as a matrix that breaks down a project into its component parts in order to facilitate its management [2]. The fundamental structure and purpose of the tool has remained unchanged since its conception [4]. The approval of this tool by academics and practitioners for so long is due to its recognized power of providing a complete overview of a project in a condensed format [3]. The widespread use of the LFA provides a terminology that is shared by governments, donor agencies, contractors, and clients, which makes it easier to undertake sector studies and comparative studies in general [34]. However, despite its evolutions, the LFA presents some limitations. These include inflexibility, the unclear terminology or the unclear responsibilities in regard to project success [3], and the fact that the assumptions do not reflect the current reality [35]. These weaknesses, together with a recognition of the tool’s usefulness has encouraged some authors to suggest new versions.

35.4.2 Proposals for New Versions The fact that the LFA has survived so long and in so many organizations as a planning tool indicates its value. Nevertheless, during this time some new versions have been proposed in the literature. They appear in Table 35.4, and guided the design of the professionals in this work with the idea of comparing their proposals.

35.4.3 ID Project Managers’ Opinions In a sample of 51 ID project managers, 96% confirmed the use of the LFA. Those same project managers use LFA matrix, but the percentages vary when they are asked about the previous steps that are defined by ZOOP (stakeholder’s analysis, problem analysis, objective/solution analysis, and alternative analysis). Just 49% used the full

35 The Logical Framework Approach …

497

Table 35.4 Proposals for new presentations of the LFA Proposal

Suggested by

The inclusion of internal factors as assumptions or risks

Yamaswari et al. [35]

The division into the manageable part and non-manageable part of the project (New Logical Framework)

Pfeiffer [26]

The inclusion of influencing factors (key success criteria and factors, and risks), costs and benefits.

Ika and Lytvynov [13]

The inclusion of success measures and responsible authority, and change assumptions by risks (Millenium Logframe)

Couillard et al. [3]

The inclusion of time dimension in a tridimensional pyramid (3D-logframe)

Crawford and Bryce [4]

steps of the LFA. For example, the stakeholders’ analysis is used only by 52.9% of the respondents. Similarly, only 49% of the project manages use the preconditions box in the matrix. The results of the survey of the positive statements in the literature about the LFA are included in Table 35.5. Table 35.5 shows that practitioners think that the LFA needs to be improved (68.7%), although it provides a complete overview of the project (68.7%) and is indispensable for requesting funds (66.7%). These results agree with the opinion of academics. Most of the professionals recognize, as do academics, that the LFA is a rigid methodology (72.6%), but they do not identify that this methodology uses unclear terminology (64%). On the question of the relation of this tool to the success of the projects, most of the respondents think that it has clear responsibilities (58.8%) on it, and even meet connection with other PM tools (53%), contrary to what emerged in the literature review. The usefulness of the LFA not only in the planning and implementation phases but also during the monitoring and evaluation phases received a high level of recognition (82%). In regard to possible changes in the LFA, the introduction of risk management to the entire process is the most accepted proposal (74.5%). However, although most of the respondents believe that the Assumptions column does not consider the risks properly (68.6%), only 54% agreed with change this column by the Risk column, with a great number of nonrespondents (22%). The suggestion of better define Indicators and Means of Verification is the second most accepted proposal (68.7%). It is followed closely by the idea of introducing the time dimension (64.7%). On the other hand, the proposal of adding Cost & Benefit column is barely accepted (37.3%). There was a similar result with the suggestion of focusing only on the manageable part of the project, which received only modest support (35.3%).

498

R. Rodríguez-Rivero et al.

Table 35.5 Level of agreementa with some statements about the LFA Statement about the LFA

FD (%) D (%) A (%) FA (%) NR/DK (%)

The LFA is a rigid methodology

2

21.6

56.9

15.7

3.9

The LFA uses an unclear terminology prone to 18 wrong interpretation

46

30

4

2

The LFA has unclear responsibilities with regard to project success

5.9

52.9

31.4

7.8

2

The LFA has no integration with other PM tools and processes

5.9

47.1

37.3

2

7.8

The LFA provides a complete overview of a project in a condensed format

0

29.4

37.3

31.4

2

The LFA is indispensable for requesting funds 2

23.5

25.5

41.2

7.8

The LFA is not only useful in the planning and 0 implementation phases, but also during the monitoring and evaluation phases

16

62

20

2

The LFA considers uncertainty and risks properly in the Assumptions column

7.8

60.8

17.6

7.8

5.9

The LFA should change the Assumptions column by risks, including the importance of these risks

2

22

44

10

22

The LFA should integrate risk management in the entire process

0

13.7

54.9

19.6

11.8

The LFA should add cost and benefits columns 7.8 to complete later as the information becomes available

39.2

25.5

11.8

15.7

The LFA should include the time dimension

0

27.5

49

15.7

7.8

The LFA should better define the Indicators and the means of verification

2

17.6

47.1

21.6

11.8

The LFA should focus only on the manageable 13.7 part of the project and exclude goals and outcomes

41.2

33.3

2

9.8

The LFA needs to be improved

15.7

33.3

35.3

15.7

0

a Level

of agreement meaning: FD—Fully disagree, D—Disagree, A—Agree, FA—Fully agree, NR/DK—No response/Don’t know.

35.5 Conclusions The main purpose of this work was to analyze the LFA from the academic and practitioner’s point of view, to understand the arguments to consider an update, to learn its strengths and to identify its weaknesses. The results show that the current LFA needs improvement. Contrasting the options suggested by the literature review with those of professionals who are engaged in the sector, the three proposals that gained most acceptance were (i) including the risk management in the entire process,

35 The Logical Framework Approach …

499

(ii) better define Indicators and Means of Verification, and (iii) introduce the time dimension. The increase in accuracy in the definition of Indicators and Means of Verification is a traditional request from donors, who need to justify the investment in the project with the fulfillment of the activities, outputs, purpose, and goal. In addition, the better the indicators and their Means of Verification are defined, the easier it will be for all stakeholders to understand the project. This will ensure their greater involvement. As improving Indicators or Means of Verification columns is a retouch of the LFA, the introduction of risk management or time dimension will be an evolution. A necessary evolution upon the inclusion of uncertainty, well represented by risks and time. These concepts are related: the risks can modify the schedule, and the future time can modify the risks. Special features of ID projects make even more necessary that the risk management be introduced to consider all possible scenarios that could arise during the life cycle of the project and suitable responses to them. Some scenarios are unimaginable. For that reason, as well as the need to having good risk management, having a project team that can adapt is essential. The authors trust that the identification, assessment, and control of the risks, together with the adaptation capacity, will contribute to increasing the efficiency levels of ID projects. This research is not without limitations. Although the literature review is international, the sample of project managers was Spanish, and the number of respondents was limited to 51. Although the response rate was higher than the expected, it still is low. The data analysis of the survey is only descriptive. Future studies will attempt to extend the survey to other countries and to work with the responses from a more analytic perspective.

References 1. Ahsan K, Gunawan I (2010) Analysis of cost and schedule performance of international development projects. Int J Project Manage 28:68–78 2. Cracknell BE (2000) Evaluating development aid: Issues, problems and solutions. Sage, London 3. Couillard J, Garon S, Riznic J (2009) The logical framework approach-millennium. Project Manage J 40(4):31–44 4. Crawford L, Bryce P (2003) Project monitoring and evaluation: a method for enhancing the efficiency and effectiveness of aid project implementation. Int J Project Manage 21:363–373 5. Diallo A, Thuillier D (2004) The success dimensions of international development projects: the perceptions of African project coordinators. Int J Project Manage 22(1):19–31 6. Diallo A, Thuillier D (2005) The success of international development projects, trust and communication: an African perspective. Int J Project Manage 23:237–252 7. Easterly W (2006) The white man’s burden: why the West’s efforts to aid the rest have done so much ill and so little good. Penguin Books, New York 8. Grisham WT (2010) International Project Management: Leadership in Complex Environments. Wiley, Hoboken 9. Golini R, Landoni P (2013) International Development Projects: Peculiarities and Managerial Approaches. Project Management Institute Inc., Newtown Square, Pennsylvania

500

R. Rodríguez-Rivero et al.

10. Golini R, Kalchscmidt M, Landoni P (2015) Adoption of project management practices: The impact on international development projects of non-governmental organizations. Int J Project Manage 33:650–663 11. Hermano V, López-Paredes A, Martín-Cruz N, Pajares J (2013) How to manage international development (ID) projects successfully. Is the PMD Pro1 Guide going to the right direction? Int J Project Manage 31:22–30 12. Hirschman AO (1967) Development Projects Observed. Brookings Institution, Washington, DC 13. Ika LA, Lytvynov V (2011) The “Management-Per-Result” Approach to International Development Project Design. Project Management Journal 42(4):87–104 14. Ika LA, Diallo A, Thuillier D (2010) Project management in the international industry: the project coordinator’s perspective. International Journal of Managing Projects in Business 3(1):61–93 15. Ika LA (2012) Project management for development in Africa. Why projects are failing and what can be done about. Project Management Journal 43 (4), 27–41 16. Ika LA, Hodgson D (2014) Learning from international development projects: Blending Critical Project Studies and Critical Development Studies. Int J Project Manage 32:1182–1196 17. International Project Management Association (IPMA) (2015) Individual Competence Baseline for Project, Programme & Portfolio Management, 4th edn. IPMA, The Netherlands 18. Khang DB, Moe TL (2008) Success criteria and factors for international development projects: a lifecycle-based framework. Project Management Journal 39(1):72–84 19. Kharas H, Makino K, Jung W (2011) Catalyzing Development. Brookings Institution Press, Washington, DC 20. Landoni P, Corti B (2011) The management of international development projects: moving toward a standard approach or differentiation? Project Management Journal 42(3):45–61 21. McEvoy P, Brady M, Munck R (2016) Capacity development through international projects: a complex adaptive systems perspective. International Journal of Managing Projects in Business 9(3):528–545 22. Morris PWG (2013) Reconstructing project management reprised: a knowledge perspective. Project Management Journal 44(5):6–23 23. Moyo D (2009) Dead Aid: Why Aid Is Not Working and How There Is A Better Way for Africa. D&M Publishers Inc., Vancouver 24. Muriithi N, Crawford L (2003) Approaches to project management in Africa: implications for international development projects. Int J Project Manage 21:309–319 25. Papke-Shields KE, Beise CM, Jing Q (2010) Do project managers practice what they preach, and does it matter to project success? Int J Project Manage 28(7):650–662 26. Pfeiffer P. (2016). The New Logical Framework: A Tool for an Effective Development Project Design. Retrieved from http://www.projectmanagement.com/articles/337043/The-New-Log ical-Framework–A-Tool-for-an-Effective-Development-Project-Design on 2017 June 27. Project Management for Development Organizations (PM4DEV) (2015) Development Project Management. A methodology to manage development projects for international humanitarian assistance and relief organizations. PM4DEV 28. Project Management for NGOs (PM4NGOS) (2013) A Guide to the PMD Pro. Project Management for Development Professionals, second edition. PM4NGOs 29. Project Management Institute (PMI) (2017) A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 6th edn. Project Management Institute Inc., Newtown Square, Pennsylvania 30. Sachs JD (2005) The End of Poverty: Economic Possibilities for Our Time. Penguin Books, New York 31. Sartorius R (1996) The third-generation logical framework approach: Dynamic management for agricultural research projects. Journal of Agricultural Education and Extension 2(4):49–62 32. Shenhar A, Dvir D (2007) Reinventing Project Management. Harvard Business School Press, Boston

35 The Logical Framework Approach …

501

33. Spanish Agency for International Development Cooperation (AECID) (2012) The 4th Master Plan of the Spanish Cooperation 2013-2016. The Spanish Government, Madrid 34. The World Bank (2005) The Logframe Handbook. A Logical Framework Approach to Project Cycle Management, Washington, DC 35. Yamaswari IAC, Kazbekov J, Lautze J, Wegerich K (2016) Sleeping with the enemy? Capturing internal risks in the logical framework of a water management project. Int J Water Resour Dev 32(1):116–134

Part VII

Technologies of Information and Communications (TIC). Software Engineering

Chapter 36

The Use of the Cloud Platform to Register and Perform Intelligent Analysis of Energy Consumption Parameters in the Service and Industrial Sectors V. Rodríguez, J. García, H. Morán, and B. Martínez Abstract This communication describes the developed web system by which companies of the industrial and service sectors can access their energy consumption data (electricity, gas, and water) and perform energy efficiency analysis. Data is collected using measuring devices installed in the clients’ premises and transmitted to the analysis platform. Each client of this service, after being authenticated, can access and consult his historical consumption data, displayed in the form of graphs and personalized reports. The user can obtain consumption performance patterns through advanced data analysis. Keywords Energy efficiency · Big data · Data mining

36.1 Introduction Obtaining data related to the different aspects of energy consumption efficiency requires measuring a great number of parameters. Nowadays, a multitude of digital devices exist that allow registering continuously over time the parameters of the different supplies of a facility, especially for electricity supply. These devices are capable of recording the most common variables related to parameters such as electricity consumption, active/reactive and apparent power, lines’ voltage and intensity, as well as the consumed flow of other inputs such as water and gas. By capturing data over an extended period of time, very useful analyses can be performed that allow identifying the best possible offered rates or to evaluate the degree of utilization of the contracted options (power, penalties for reactive energy…). To ensure measurement representativeness it is necessary to sample the operation parameters during extended periods of time and with relatively high sample frequencies, thus, generating high data quantities. From this context arise the main problems that need to be resolved: how to efficiently capture and store this data volumes and how to V. Rodríguez (B) · J. García · H. Morán · B. Martínez Área de Proyectos de Ingeniería. Dpto. de Explotación Y Prospección de Minas, Universidad de Oviedo. C/Independencia 13, 33004 Oviedo, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_36

505

506

V. Rodríguez et al.

process information to extract the analyses and reports that will allow the user to make the adequate decisions. The system described here, called NeoS Analytics, is conceived as a cloud platform to allow massive processing of data generated by multiple service subscribers (hereafter clients) and also to process this data with intelligent analysis techniques in order to extract relevant information for decisionmaking. In this manner, the system will capture data generated by measuring devices installed in the clients’ premises and store them in a platform placed in a hosting service. Each service client, previously authenticated, can access the platform in order to consult his historical consumption data, presented as graphs and personalized reports and also to obtain consumption performance patterns obtained through data advanced analysis. This system is conceived for an industrial and service environment rather than a domestic environment. Typical clients of this system are industries, hotels, commercial centers, sports centers, etc. Likewise, the user that will access this system will be a person with or without technical background, but familiarized with consumption analysis and energy efficiency management of facilities.

36.2 Description of the System Architecture and Used Technologies Technologically, the system is conceived to provide a cloud service, such that the processing system receives data through the Internet coming from capture systems installed in the client’s facilities. This data is processed by a group of servers remotely hosted and clients have access to the information through an internet connection from any mobile or fixed device located anywhere. One of the most extended and popular definitions of Cloud Computing is offered by the USA National Institute of Standards and Technology (NIST), according to which “cloud computing” is a model for enabling ubiquitous, convenient, on demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [7]. Several reasons are behind the selection of this computer model for the project. In the first place, to offer ubiquitous access to reports and information analysis is considered one of the project’s key success factors. This is an essential characteristic associated with this computing paradigm. On the other hand, the companies commercializing this service would not want to be restrained by the limitations generated by depending on their own systems to deliver the service, and will prefer to take advantage of the high computer capabilities services offered nowadays. By following a traditional software model, they might be forced to maintain their own servers, to have an internet connection infrastructure adequately dimensioned to receive and process high data volumes and to have exclusively dedicated computer maintenance staff. The Cloud Computing based model allows contracting storage space with a

36 The Use of the Cloud Platform to Register …

507

specialized provider (often known as hosting). Moreover, the service provision characteristics are dynamically adjustable, automatically escalating as a function of the number of clients and services to be provided. The use of this paradigm will allow the company to contract at any given moment the computer capability suitable for each situation, as a function of the client number and the data volume to be processed, which represent a very important cost optimization. Another system requirement is to allow for a quick inclusion of new measuring devices and it must be flexible enough to allow the swift integration of new system functionalities (new report formats and new parameters analysis). To achieve these characteristics, the system architecture has been developed by following a design pattern called MVC (Model View Controller). In general, this is a software architecture pattern which separates data and the business logic of an application from the user interface and the module in charge of managing events and communications. To achieve this, MVC proposes the construction of three different components which are the model, the view, and the controller. That is, it defines components for information representation and other components for user interaction. In this project’s case, the use of this pattern allows separating the management of data originated on the measurement systems (the model layer), from the data analysis and processing part (which is integrated in the controller) and from the data visualization part (integrated in the view). Initially, MVC was developed for desktop applications but it has been broadly adopted as architecture for the design and implementation of web applications in the most important computer languages. Multiple frameworks have been developed which implement this pattern. For this project, the PHP CodeIgniter framework has been selected in its 3.10 version (“CodeIgniter Web Framework,” n.d.). The system has been developed to be executed on Linux servers with Apache Web Server (2.4.18), PHP (7.0.8) and MariaDB 10.0.28 as database. In the web application design, Bootstrap and JQuery (3.1.0) have been used for the graphic design development. The selected database manager has functionalities that allow working with the Big Data paradigm, handling large amounts of information. The system can be integrated in a distributed environment of massive information processing using the MaxScale extension (“Database Proxy, Database Router, MaxScale | MariaDB,” n.d.). This is a module that allows configuring scalable database clusters enabling load balancing, failovers managing (automatic forwarding of requests to another database in case of nonavailability), as well as the so-called sharding or partitioned method. This technology enables to distribute data among different shards (server groups which store part of the data), in order to distribute the load when performing consultations or insertions. One of the most important system parts is graph representation (consumption, scorecards, data analysis, etc.). For the development of these functionalities HighCharts and HighStock libraries have been used (“Interactive JavaScript charts for your webpage | Highcharts,” n.d.). Highcharts is a library developed in JavaScript to generate interactive graphs in Web applications. It supports several graph types such as lines, columns, bars, foot, bubbles, and cascade among others. HighStock (“Highstock product | Highcharts,” n.d.) is based on HighCharts, so it includes the

508

V. Rodríguez et al.

basic functionality of HighCharts, plus other added functionalities that enable to represent large temporal data series, including additional browsing functions, such as a temporal window selector, displacements, zooms, range selector, etc. In order to perform some of the system functions, such as the hourly rates configurator and heat maps, a technology called Data Driven Documents, specifically the D3.js library [1] has been used. It is a JavaScript library used to produce from data dynamic and interactive infographics in web navigators. It uses well-founded technologies, such as SVG, HTML5, and CSS. Essentially, this library allows handling data based documents by using web open standards so browsers can create complex visualizations without depending on proprietary software. Another of the most used components within the system is the PhpGrid library (“PHP Grid Framework—Supercharge your Development Speed,” n.d.) in order to perform all the data grids functions and the CRUD management. CRUD is an acronym of Create, Read, Update, and Delete.

36.3 Description of the System’s Functions Below, the NeoS Analytics system’s general functionalities are described. The system has two fundamental parts: the part accessible to the user (user front panel) and the administrator part (administrator front panel), from where the whole system can be configured and managed. In this communication, only the user front panel is going to be explained, making special emphasis in those more innovative system functions (Fig. 36.1). Figure 36.2 shows the consumption summary panel, of the default client facility for the current month. This information is shown initially when the user is validated and accessed the system. The upper part of the window contains the selector for the company and facility. The system is designed so that a client can manage the consumption of information from several companies which, in turn, can have several facilities, offices or locations. From the main window, the client can navigate to the application’s different options through a menu bar included under the header. The menu’s navigation options are shown in Table 36.1, along with a brief description of each option’s purpose.

36.3.1 Analysis The “Analysis” menu, as shown in Table 36.1, enables access to functions which graphically represent several energy consumption parameters. The most important variables represented are energy consumption and the associated cost for a given time. To illustrate this, Fig. 36.3 shows the screen corresponding to consumption analysis. In general, this menu’s screens follow a similar scheme and allow selecting for different periods of time (predefined or personalized), the type of energy supply to be represented (electricity, water or gas). This scheme allows grouping the information

36 The Use of the Cloud Platform to Register …

509

Fig. 36.1 System’s general scheme

Fig. 36.2 Electricity consumption summary panel

in periods of 15 min, hours, days or months and showing a table at the bottom of the screen, with the chosen period’s statistics. The system has the options to compare data between different periods of time and to simulate rates. Generated graphs can be exported as images or Excel files.

510

V. Rodríguez et al.

Table 36.1 Functions included in the main menu Menu

Function

Description

Control panel Summary Electricity Water Gas See map

Group of functions that allow accessing the control panels where consumption data of the facilities are summarized

Analysis

Consumption Cost Cost simulation Variables

Functions used to graphically represent consumption and energy inputs’ costs in different periods of time and to simulate the effect of the different energy rates

Management

Facility Management Supply Management Rates Management Alert Management

Group of menus to parameterize facilities data, energy inputs and to model applicable charges. Also allows configuring alerts

Reports

Reports visualization

Tool that allows viewing the reports and converting them to PDF format

Energy audit

Energy audit Weekly heat map Maximum, minimum and mean values Standard deviation Percentiles Lorenz curve

Group of functions aimed at performing graphical analysis in order to characterize energy consumption patterns of a facility

Smart meter

Natural light Air conditioning Phantom consumption Neutral current Power adjustment

Intelligent analysis of behavior patterns which the system performs in order to detect the appropriate use of natural light, air conditioning, phantom consumption, current shunt to neutral and the appropriate use of the contracted power

36.3.2 Management This menu groups options to parameterize facilities, to manage rates applicable to supplies and to create and manage alerts. Inside the facilities, the system allows configuring and associating measuring devices to different lines. For instance, a facility can have a measuring device for the general service connection and different other measuring devices to gauge acclimatization, illumination lines, power installations, etc. The system enables modeling complex rate structures and associates them to the facilities. Figure 36.4 shows as an example, a rate modeler capture with a 4 period rate structure. Another important function available in the system is the possibility of configuring alerts, which will send notifications to the users in case of some noteworthy occurrence. These alerts can be associated with consumptions over the threshold values

36 The Use of the Cloud Platform to Register …

511

Fig. 36.3 Consumption analysis

Fig. 36.4 Rate structures modeler

during vacations, holidays, during the night or on nonworking days. Also, consumptions over the threshold values when there is natural light, consumptions over last year mean value, power demands higher than the contracted level or penalties for reactive currents.

512

V. Rodríguez et al.

Fig. 36.5 Heat map for weekly consumption

36.3.3 Energy Audit Several visualization tools are grouped under this heading. These tools allow analyzing consumption patterns with the main purpose of generating key information for decision-making over contracting modalities and energy saving policies. Heat maps can be generated, which will enable to study the time periods when higher consumptions are produced. As an example, Fig. 36.5 shows a weekly heat map generated from consumptions in a date range selected by the user. This map will enable obtaining an image showing the days of the week when consumptions are produced and the hours when these consumptions occur. Graphs can also be generated from this module with the analysis of mean, maximum, and minimum consumptions over a period of time. These graphs will reflect consumption and mean hourly cost compared with the standard deviation, percentile analysis to determine hours in which the maximum consumption and costs are concentrated with the objective of focusing in those hours the consulting work or the Lorenz curve [4], generated from the percentile, so the more close is the curve to the diagonal, the more homogeneous are consumptions and costs. All these graphs can be generated to analyze consumption as well as to analyze costs for the chosen periods. As an example, Fig. 36.6 shows the statistical analysis of the hourly mean consumption, where the envelope curve represents the standard deviation.

36.3.4 Smart Meter This section gathers the group of intelligent analyses the system can perform to help find improvement opportunities in consumption and energy efficiency of a facility. The first analysis is aimed at assessing the appropriate use of the natural light in a facility. To achieve this goal, the system graphically represents the electricity consumption in a specific period (in the example shown in Fig. 36.7, this period is a week). It identifies on the graph the consumption during daytime (highlighted in

36 The Use of the Cloud Platform to Register …

513

Fig. 36.6 Standard deviation analysis

Fig. 36.7 Natural daylight use analysis

black) with regard to general consumption. If the facility has a measuring device connected to the illumination line, the system automatically quantifies the total amount of Kwh consumed during daylight. This analysis allows generating savings by detecting if the illumination is being used inadequately. For instance, offices in which the visualization conditions are good with natural light but even so the luminaries are on. To generate this analysis, it is not necessary to have an illumination sensor, because the system automatically makes the calculation based on the geographic coordinates of the facility and the daylight hours in every different time of the year. The analysis of the inadequate use of air-conditioning is similar to the above described analysis. This tool allows analyzing consumption related to airconditioning and identifying if the actual temperature matches the one established to use the air-conditioning or if the air-conditioning system is functioning outside of

514

V. Rodríguez et al.

Fig. 36.8 Air conditioning use analysis

regular business hours due to programing errors. Figure 36.8 shows an analysis of this type, in which it can be observed that the acclimatization systems are activated outside of business hours. In the same menu, there are tools that function in a similar manner to analyze if there are night phantom consumptions, if there are current shunts to neutral in a given period of time and if the contracted power is being properly used.

36.4 Conclusions This communication describes the technological architecture used by the developed energy analysis tool, as well as the most important functionalities implemented. Its aim is to serve as a model for similar projects to be developed in the future. Technologically, the architecture is designed to be deployed under the Cloud Computing paradigm. This article has described the technologies on which it is based. Regarding the functionalities, the tool stands out for the group of different analyses it offers, allowing clients to acquire a clear knowledge of their consumption patterns. Thus, this tool represents a key element for decision-making when implementing energy saving measures, as well as choosing the tariff modalities to contract.

References 1. Bostock M, Ogievetsky V, Heer J (2011) IEEE Trans. Vis. Comput. Graph. 17:2301–2309 2. CodeIgniter Web Framework [WWW Document], n.d. URL https://codeigniter.com/ (accessed 4.21.17) 3. Database Proxy, Database Router, MaxScale | MariaDB [WWW Document], n.d. URL https:// mariadb.com/products/mariadb-maxscale (accessed 4.21.17) 4. Gastwirth, J.L., 1971. Econom. J. Econom. Soc. 1037–1039

36 The Use of the Cloud Platform to Register …

515

5. Highstock product | Highcharts [WWW Document], n.d. URL http://www.highcharts.com/pro ducts/highstock (accessed 4.21.17) 6. Interactive JavaScript charts for your webpage | Highcharts [WWW Document], n.d. URL http:// www.highcharts.com/ (accessed 4.21.17) 7. Mell, P., Grance, T., 2011 8. PHP Grid Framework — Supercharge your Development Speed [WWW Document], n.d. URL http://www.phpgrid.org/ (accessed 4.21.17)

Chapter 37

Including Dynamic Adaptative Topology to Particle Swarm Optimization Algorithms Patricia Ruiz, Bernabé Dorronsoro, Juan Carlos de la Torre, and Juan Carlos Burguillo Abstract Particle Swarm Optimization algorithms (or PSO) have been widely studied in the Literature. It is known that they provide highly competitive results. However, they suffer from fast convergence to local optima. There exist works proposing the swarm decentralization by including some specific topologies in order to deal with this problem. These approaches highly improve the results. In this work, we propose PSO-CO, a PSO algorithm able to reduce the exploitation of the algorithm by introducing the concept of coalitions in the swarm. There is one leader in each of these coalitions, so that the particles belonging to a coalition are only influenced by their local leader, and not the global one. This mechanism allows different coalitions to explore different parts of the search space, reducing thus the convergence speed and enhancing the exploration capabilities of the algorithm. Moreover, the particles can leave a coalition and join another, facilitating the exchange of information between coalitions. For testing the efficiency of the proposed PSO-CO, we have chosen a relevant benchmark in the literature, specially designed for continuous optimization. Results show that PSO-CO highly improves the results obtained compared to classical PSO. Keywords Particle swarm optimization · Coalitions · Adaptive topology · Game theory

P. Ruiz (B) · B. Dorronsoro · J. C. de la Torre Universidad de Cádiz, Cádiz, Spain e-mail: [email protected] B. Dorronsoro e-mail: [email protected] J. C. de la Torre e-mail: [email protected] J. C. Burguillo Universidad de Vigo, Pontevedra, Spain e-mail: [email protected]

© Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_37

517

518

P. Ruiz et al.

37.1 Introduction Metaheuristics are generic optimization algorithms that have been widely studied and used in the Literature for solving complex and computationally expensive problems. The main challenge these techniques face is finding a tradeoff between the exploitative and the explorative behavior of the algorithm in the search space of the problem [1]. A too exploitative behavior focuses only on promising solutions, trying to improve them by analyzing only their neighboring solutions in-depth, and thus, with a high probability of getting stuck in a local optimum of the problem. Contrary, in a too explorative behavior the algorithm visits large areas of the search space, but does not analyze in-depth the most promising solutions, thus finding the global optimum is more complex. Many of these metaheuristics are called bioinspired as they try to solve complex problems emulating the behavior of biological systems. There exists a large number of bioinspired optimization algorithms, such as genetic algorithms [2] or ant based [3], among others. In this work, we focus on a bioinspired type of algorithm based on swarms, known as particle swarm optimization, or PSO [4]. PSO has been widely used for solving continuous optimisation problems by emulating the social behavior of bird flocking or fish schooling. These groups of animals follow a leader. The main problem associated to a PSO is its fast convergence, generally a consequence of using a centralized population (the swarm) that with high probability gets stuck in local optima. Literature reveals that one of the most promising techniques for preventing the fast convergence is decentralizing the population, which helps keeping the diversity of the solutions in the population for longer [5]. Game theory [6] analyzes possible strategies (cooperative or not) that an individual can follow in a game or a competition. In cooperative strategies, individuals collaborate in order to improve their performance. They can even obtain results they are not able to get individually when gathering into coalitions. In this work, we are proposing a new particle swarm optimisation algorithm called PSO-CO, able to mitigate the exploitative behavior of the original PSO by introducing game theory techniques for decentralizing the population, so that the diversity of the solutions increases. Particularly, we are introducing in the swarm the concept of coalitions as it was done in [7], for genetic algorithms. The performance of the proposed algorithm is compared to the original PSO on a well-known benchmark proposed in CEC 2005, for the continuous optimisation competition. The paper is structured as follows. Next section briefly explains the mechanism of particle swarm optimisation algorithms. Section 37.3 analyzes the formation of coalitions and its origin in game theory. The proposed algorithm is presented in Sect. 37.4. The experiments performed and the results obtained are shown in Sects. 37.5 and 37.6, respectively. Finally, we conclude and present the future work lines in Sect. 37.7.

37 Including Dynamic Adaptative Topology …

519

37.2 Particle Swarm Optimisation Algorithms, PSO PSO is an optimisation algorithm that works on a set of tentative solutions to the problem, called swarm, and evolves it toward the most promising regions of the search space. This evolution is done emulating the behavior of swarms, like bird flocking or fish schooling, where the members move following a leader. In PSO, a particle refers to a tentative solution to the problem (i.e., each member of the swarm). In every iteration, the position of a particle (i.e., values of the variable of the represented solution) is updated in terms of its previous position and the calculated speed, as shown in Eq. 37.1 for particle i → − → → xi (t − 1) + − vi (t) . xi (t) = −

(37.1)

The speed of each particle is influenced by the leader’s position (the position of the particle with the best fitness value, i.e. the global best position), and by the best position the particle visited (i.e. historical local best position). The speed is defined as follows: → → − → → − → vi (t − 1) + C1 · r1 · (− xli − − x i ) + C 2 · r 2 · (− x→ vi (t) = w · − gi − x i ) ,

(37.2)

→ x→ where − xli represents the best solution visited by particle i, − gi refers to the best particle of the swarm (the leader), w is the weight of the particle’s inertia (controls the balance between local and global experience), r1 and r2 are two random numbers uniformly distributed in the [0, 1] interval, and C1 and C2 are two specific parameters that control the effect of the position of the best particle and the leader. Algorithm 37.1 Pseudocode of PSO 1: 2: 3: 4: 5: 6: 7: 8: 9:

I nitialiseSwar m(); EvaluateSwar m(); num Evals = 0; while num Evals < max N um Evals do CalculateSpeeds(); U pdate Positions(); EvaluateSwar m(); end while Retur n Best Solution Found();

Algorithm 37.1 presents the pseudocode of PSO. It first creates and evaluates the initial swarm (lines 1 and 2). Then, it iterates the swarm until the stop condition, typically the maximum number of evaluation of the fitness function (line 4). In the loop, the speed of each particle of the swarm is calculated (line 5), then the positions are updated according to the previous position and the calculated speed (line 6), and all the new generated solutions are evaluated (line 7). Finally, the algorithm returns the best solution found during the search.

520

P. Ruiz et al.

37.2.1 PSO with Speed Constraint In the classical implementation of PSO, the particles that are usually located at the boundaries of the search space may leave it because of the speed and the inertia, provoking erratic movements. In our work, we are including a novel mechanism proposed in [8], that limits the particle’s speed for preventing these erratic movements. Authors proposed to include a restriction coefficient (χ ), defined in terms of a restriction factor (ϕ) χ=

2  , 2 − ϕ − ϕ 2 − 4ϕ

(37.3)

where  ϕ=

C1 + C2 1

i f C1 + C2 > 4 . i f C1 + C2 ≤ 4

(37.4)

Moreover, we include in our PSO-CO an additional mechanism to limit the accumulated speed in each variable of each particle, shown in Eq. 37.5. This mechanism was introduced by Nebro et al. [9], for multi-objective optimisation, and we are proposing it here for the first time for a mono-objective PSO. ⎧ δ j i f vi, j (t) > δ j ⎨ −δ j i f vi, j (t) ≤ −δ j , vi, j (t) = (37.5) ⎩ vi, j (t) other wise where δj =

upper _limit j − lower _limit j , 2

(37.6)

being upper _limit j and lower _limit j the maximum and minimum value, respectively, that variable j can take. Therefore, in PSO-CO the speed of the particles is calculated using Eq. 37.2. This value is multiplied by the restriction factor defined in Eq. 37.3. Finally, the resulting speed is limited as specified in Eq. 37.5.

37.3 Game Theory and Coalitions In this section, we will briefly review the main principles of game theory and also introduce the concept of coalitions, a relevant topic arose from game theory, that is widely used nowadays.

37 Including Dynamic Adaptative Topology …

521

37.3.1 Game Theory Game theory [6] provides useful mathematical tools for easily understanding the possible strategies individuals may follow when competing. This branch of mathematics is commonly used in social science (mainly economics), biology, engineering, politics, international relations, computer science or philosophy. Game theory was widely developed by many researchers in the 50s, but from 1970 it was applied to biology [10], although there are studies dating from 1930 [11]. Initial studies of game theory are devoted to analyze the competitions where a player improves at the expenses of another player: zero sum games [12]. Afterwards, game theory was mainly applied to find the equilibrium in the games, where players adopt a strategy that will probably remain unchanged. Many equilibrium theories have been developed like the well-known Nash equilibrium [13]. As we previously said, game theory has many real-world applications, including automatic agents. The application of game theoretic principles into a hot topic like agent systems has favored in-depth analyses of the social choices and computational aspects in the last years [14]. We can mainly differentiate between two branches in game theory: cooperative [15, 16] and noncooperative [17]. In noncooperative game theory, also called competitive games [18], every participant is assumed to play independently, with no communication nor collaboration with the rest. Each player chooses a strategy for improving only its own benefit. This can be applied to many real-world problems like resource allocation [19] or congestion control [20]. Cooperative game theory studies the behavior of agents that cooperate among them. These games have been widely studied and applied to different disciplines like economics or politics, but also to other domains like networking. We can find coalition games within the cooperative game theory when a set of players allies in order to improve their performance. These coalitions allow agents to achieve goals they might not be able to accomplish independently.

37.3.2 Coalitions Thanks to coalitions, multi-agent systems are able to cooperate so that they can improve their performance, accomplish tasks or even increase their benefit with respect to others. The coalition formation increases the ability of the agents for finishing their task and accomplishing their goals [21]. It is very important in distributed systems (like mobile and ubiquitous computing or e-commerce) where adapting to dynamic and variable resources or environments is crucial. It is also commonly used in sensor networks [22], communication networks [23] or e-commerce [24].

522

P. Ruiz et al.

In [25], authors divide coalition games into three categories: canonical coalition games, where a large coalition composed of all players is the optimum strategy; coalition formation games, where the structured formed depends on the gains and the expenses of the cooperation; and finally coalitional graph games, where interaction between players is ruled by a graph that defines the communication structure. In [7], it was introduced for the first time the idea of grouping individuals using game theory techniques, particularly coalitions, into the population of an evolutionary cellular algorithm. This synergy significantly improved the performance of the algorithm. The coalition formation is not a trivial process, and it has been usually tackled from a sociological point of view because it is a general aspect of the social life [26]. Both, the formation and the stability of coalitions depend on rules proposed at their creation [27]. For example, an infinite horizon game, where the coalition is formed if and only if all participants accept to be part of it, is proposed in [28, 29]. In [30], coalitions can be divided into smaller coalitions. However, a work where individuals that do not belong to any coalition can join an existing one without the permission of its member is proposed in [31]. The coalition formation has also been tackled from the point of view of game theory. However, this approximation is generally centralized and computationally intractable. In [32], the computational complexity of the problem is studied for the formation of different coalitions, assuming that the number of agents is fixed. Additionally, there is another work regarding the computational tractability of algorithms where, using a real multi-agent system, authors not only study the implementation of a distributed coalition formation algorithm but also all the problems that arise when this algorithm is applied to real-world systems. The typical coalition formation algorithms in multi-robot systems are classified into three groups [33]: (1) deterministic search algorithms, like in [34]; (2) task allocation algorithms like in PARKER’S ALLIANCE [35]; (3) evolutionary algorithms [36]. As we previously mentioned, coalition formation can be computationally intractable, because all the possible shapes of a coalition depend exponentially on the number of agents. Finding the optimum partition of a set of agents, checking the complete search space, can be computationally very expensive, thus evolutionary algorithms are generally applied to solve this problem [37, 38]. Implementing cooperation in large scale communication networks faces different challenges, such as efficiency, fairness, complexity or appropriate modeling. Coalition games have demonstrated to be a powerful tool for designing fair, robust, and practical cooperation strategies in networks. In [25], we can find both, the most relevant contributions of the state of the art that tackle the main challenges and opportunities in coalitional games, as well as the understanding and design of new communication systems, giving emphasis to the new analytical techniques and novel application scenarios. In traditional models, each agent belongs to a coalition. However, in real-world, an individual can generally participate in more than one group, performing a task in each of them. Overlapping coalitions formation are cooperative games where agents

37 Including Dynamic Adaptative Topology …

523

can belong to different coalitions simultaneously. In [39], an iterative overlapping coalition formation algorithm is presented and it is demonstrated that agents benefit from this type of formation. This kind of coalitions is also applied to sensor networks in [40]. In both studies, agents are supposed to completely cooperate, but this is not the general case, as agents tend to maximize their own benefit. Therefore, a new model of game theory, more realistic, for overlapped coalitions formation is proposed in [41, 42].

37.4 A Particle Swarm Optimisation Algorithm with Coalitions: PSO-CO In this work, we are proposing PSO-CO, a novel particle swarm optimisation algorithm, designed over a dynamic topology, that uses the mechanisms of coalition formation for decentralizing its population. By doing this, the diversity of the swarm increases, which generally prevents the algorithm from a fast convergence [43]. PSO benefits from two of the most well-known population structures: cellular and island. We introduce the concept of coalition into the swarm. The participation of each particle of the swarm in a coalition depends on some predefined rules, analyze later in this section. Once a particle belongs to a specific coalition, it is allowed to interact with all other particles belonging to that coalition, as if it were a subpopulation of an algorithm with a population structured in islands. Each coalition behaves as a PSO that searches over the different regions of the search space. Particles belong only to one coalition, and it is expected that they join the most profitable one. In the same way, particles can leave a coalition if they consider the benefit is low. When a particle leaves a coalition there are two options: (1) join another existing coalition, (2) create a new coalition. This decision depends on the expected reward. Therefore, particles are considered selfish entities, able to collaborate with other particles in order to obtain benefits. A particle that remains isolated, that is, it did not join any coalition or it is a frontier particle that just left a coalition, analyzes the quality of the coalitions of the neighboring particles (considering von Neumann neighborhood and behaving like in a cellular topology model), in order to decide whether to join one of them or not. As in any optimisation algorithm, it is necessary to keep the diversity of solutions in the population in order to obtain good results. Belonging to a large coalition of good and diverse solutions is desirable for every particle. Therefore, we will measure the quality of coalitions according to their size, as well as the quality and diversity of its solutions. This way, belonging to a coalition of high quality will be beneficial for particles.

524

P. Ruiz et al.

The basic features of our PSO-CO are detailed next: • Larger coalitions, i.e., coalitions with a higher number of particles, generally perform a better search in the solutions space. • Coalitions whose solutions have a higher fitness value, i.e., more promising solutions, usually find better solutions to the problem. • Coalitions with higher diversity explore better the search space, have a higher probability of escaping from local optima, and thus, their convergence is generally not fast. • Every coalition has a global value, the quality of the coalition, that depends on the above mentioned parameters. • Particles of each coalition modify their direction and speed according to the leader of that particular coalition, so it behaves as a PSO. • Particles, following a selfish behavior, may leave a coalition, join a neighboring one or even create another in terms of the expected reward. • Independent particles interact with particles in its von Neumann neighborhood in order to decide whether to join a coalition or not. • There exists a parameter called Independent Coefficient, I ndc , that models (using a probabilistic value) the desire of a particle of leaving a coalition and remaining independent. As already mentioned, each coalition is associated to a quality value, defined as follows: Qualit y(Ci ) = α · Si ze(Ci ) + β · V ar (Ci ) + γ · Avg(Ci )

(37.7)

where α + β + γ = 1, and these coefficients model the importance given to the size, diversity and average of the fitness value of all the solutions composing the coalition, respectively. This is the quality value used by the particles for evaluating the potential reward obtained if they join a coalition. In Algorithm 37.2, the pseudocode of the proposed PSO-CO is shown. The main difference with PSO is that the swarm is divided into different coalitions and every particle belongs to one. In that way, particles follow the leader of their own coalition instead of the leader of the swarm, promoting the explorative behavior of the algorithm. The benefits of including coalitions in PSO are: (1) reduce the convergence speed following the leader of the coalition instead of the leader of the swarm (there are more than one single leader in the swarm); (2) increase the diversity of the swarm because different coalitions explore different regions of the search space; (3) increase the probability of escaping from local optima because changes of particles from one coalition to another cause exchange of information between them.

37 Including Dynamic Adaptative Topology …

525

Algorithm 37.2 Pseudocode of PSO-CO I nitialiseSwar m(); 2: EvaluateSwar m(); BelongCoalition(); 4: num Evals = 0; while num Evals < max N um Evals do 6: for i = 1 : N umCoalitions do CalculateSpeeds(); 8: U pdate Positions(); EvaluateCoalition(); 10: end for DecideBelongCoalition(); 12: end while Retur n Best Solution Found();

37.5 Experiments In order to analyze and compare the performance of the proposed algorithm, we have chosen a set of problems widely used in the literature that was first proposed in the competition of CEC 2005, for continuous optimisation problems with real parameters [44]. This benchmark is composed of 25 minimisation problems and the mathematical description of each of them is detailed in the work. Each of these functions has different characteristics such as epistasis, multimodality or noise, among others, what make them difficult to solve. Moreover, all of them have been shifted in order to move the optimum out of the center of the search space. Functions F1 to F5 are unimodal, while the rest are multimodal. From F6 to F12 are basic modal functions, F13 and F14 are expanded functions, and from F15 to F25 are hybrid functions. The number of variables in all of them is scalable (up to 50). In this work, we are considering dimensions D = 30 and D = 50 for all problems. The parameterization used for the coalition formation is shown in Table 37.1. The values have been experimentally chosen after an exhaustive evaluation of PSO-CO. As we can see in Table 37.1, the selected values are I ndc = 0.05, α = 0.1, β =0.1 and γ = 0.8. This means that the coefficient that influences more in the evaluation is the functional medium average of the coalition, i.e., the measure of its diversity. The parameterization used in PSO-CO is presented in Table 37.2. These values have also been experimentally chosen after an exhaustive preliminary analysis, as those for the coalition formation. As detailed in Table 37.2, the size of the swarm is Table 37.1 Parameterization for the formation of coalitions

Parameter

Value

I ndc α β γ

0.05 0.1 0.1 0.8

526 Table 37.2 Parameterization for PSO-CO

P. Ruiz et al. Parameter

Value

Swarm size W C1 C2 r1 r2 Max evaluations

400 0.1 random value in [1.5, 2.5] random value in [1.5, 2.5] random value in [0, 1] random value in [0, 1] 1,000,000

400 particles, the inertia has a value of 0.1 and the other four parameters take random values for each particle in each iteration. The maximum number of generations if the optimum is not found is 2500 (i.e., 1,000,000 of evaluations).

37.6 Results The results presented in this section represent the error of the solutions reported by the algorithms, with respect to the optimal solutions to the problems (known for all of them). All results were obtained after performing 100 independent runs for every algorithm on all problems, and they are compared using Wilcoxon unpaired signed-ranks test to look for significant differences at 95% confidence level. After analyzing different statistical tests (García et al., 2009), we decided to use Wilcoxon test because we only need to perform pairwise comparisons. We show in Table 37.3, the results we obtained in our experiments for PSO and PSO-CO algorithms, for all considered problems, with 30 and 50 variables. We show the average value after 100 independent runs, as well as the standard deviation (below the average value, in smaller font). Best results are emphasized in bold font. The dark gray cells represent those cases where PSO-CO is statistically better than PSO (according to Wilcoxon test), while those cases in which PSO is better are over light gray background. We observe that PSO-CO significantly outperforms PSO, because PSO is statistically better than PSO-CO only for 4 and 2 problems out of 25, for 30 and 50 variables dimensions, respectively. However, for both dimensions, PSO-CO outperforms PSO for 17 out of the 25 problems. For dimension 30, PSO-CO is significantly better than PSO in 9 problems, and for the high dimension case, it is performing statistically better in 12 out of the 25 problems. Regarding the execution time, PSO-CO is, on average, 27.1% slower than PSO for 30 variables problems, and 27.9% in the case of the 50 variables ones. We show in Fig. 37.1, the execution times of every algorithm when solving all considered problems in our study, together with the difference (in percentage) between the two algorithms, where positive values stand for those cases where PSO is faster then PSO-CO. It can be appreciated how

37 Including Dynamic Adaptative Topology … Table 37.3 Computational results Problem 30 Variables SPSO SPSOCO F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 F17 F18 F19

4, 02E − 04 ±3, 57E − 04 2, 08E00 ±1, 19E00 8, 82E05 ±3, 80E05 1, 67E03 ±1, 05E03 5, 02E03 ±1, 88E03 5, 44E02 ±1, 36E03 4, 82E03 ±4, 78E02 6, 94E01 ±4, 84E02 1, 45E02 ±2, 68E01 2, 38E02 ±5, 82E01 3, 03E01 ±4, 15E00 1, 68E04 ±2, 11E04 2, 83E02 ±2, 75E03 1, 27E01 ±6, 86E − 01 5, 20E02 ±1, 36E02 3, 31E02 ±1, 04E02 3, 54E02 ±1, 044E02 8, 94E02 ±5, 75E01 9, 00E02 ±1, 67E00

4, 86E − 06 ±3, 55E − 06 1, 05E − 01 ±7, 67E − 02 4, 44E05 ±2, 11E05 2, 04E02 ±1, 82E02 3, 09E03 ±1, 67E03 3, 12E02 ±4, 72E02 4, 82E03 ±4, 58E02 6, 95E01 ±4, 84E02 1, 53E02 ±3, 38E01 2, 32E02 ±6, 65E01 2, 99E01 ±4, 43E00 1, 22E04 ±1, 52E04 2, 67E02 ±2, 58E03 1, 25E01 ±7, 89E − 01 5, 47E02 ±1, 36E02 3, 18E02 ±9, 43E01 3, 20E02 ±1, 09E02 8, 99E02 ±5, 58E01 9, 07E02 ±7, 87E00

527

SPSO

50 Variables SPSOCO

6, 50E − 03 ±2, 86E − 03 9, 29E01 ±4, 09E01 2, 35E06 ±9, 76E05 1, 72E04 ±5, 55E03 1, 29E04 ±2, 52E03 1, 28E03 ±2, 46E03 6, 39E03 ±5, 88E02 8, 45E01 ±6, 33E02 3, 07E02 ±4, 77E01 5, 76E02 ±1, 17E02 6, 17E01 ±4, 83E00 7, 57E04 ±7, 63E04 1, 47E03 ±1, 44E04 2, 24E01 ±5, 57E − 01 5, 61E02 ±1, 57E02 3, 68E02 ±6, 79E01 4, 21E02 ±6, 79E01 8, 95E02 ±5, 22E01 9, 00E02 ±0, 00E00

7, 65E − 05 ±4, 48E − 05 6, 46E00 ±3, 35E00 1, 20E06 ±4, 81E05 5, 55E03 ±2, 52E03 8, 84E03 ±1, 69E03 4, 07E02 ±6, 90E02 6, 40E03 ±6, 44E02 8, 49E01 ±6, 38E02 3, 25E02 ±5, 04E01 5, 62E02 ±1, 245E02 6, 01E01 ±6, 90E00 4, 74E04 ±4, 98E04 9, 37E02 ±9, 16E03 2, 21E01 ±7, 26E − 01 5, 56E02 ±1, 50E02 3, 60E02 ±7, 70E01 3, 83E02 ±7, 16E01 8, 95E02 ±5, 87E01 9, 03E02 ±1, 79E01 (continued)

528

P. Ruiz et al.

Table 37.3 (continued) Problem SPSO F20 F21 F22 F23 F24 F25

9, 00E02 ±1, 65E00 1, 55E04 ±1, 45E05 9, 94E02 ±8, 58E01 1, 06E03 ±2, 67E02 9, 99E02 ±1, 24E02 1, 49E03 ±4, 97E01

30 Variables SPSOCO 9, 04E02 ±7, 69E00 3, 41E03 ±2, 34E04 9, 32E02 ±5, 52E01 1, 07E03 ±2, 64E02 9, 97E02 ±1, 92E02 1, 52E03 ±5, 32E01

SPSO

50 Variables SPSOCO

9, 00E02 ±0, 00E00 2, 04E04 ±1, 92E05 1, 03E03 ±7, 34E01 1, 21E03 ±1, 33E02 1, 27E03 ±1, 07E02 1, 47E03 ±2, 61E01

9, 02E02 ±9, 90E00 2, 03E04 ±1, 91E05 9, 83E02 ±4, 01E01 1, 22E03 ±1, 29E02 1, 27E03 ±9, 82E01 1, 47E03 ±4, 42E01

Fig. 37.1 Execution times of PSO and PSO-CO for the considered problems

PSO is generally faster than PSO-CO, except for F8 (30 variables dimension) and F1 and F25 (both dimensions) problems. We can also see how the differences between the two algorithms can raise to up to 50%, but it happens only in cases in which run times are around a tenth of a second, so we suspect these large differences can be due to memory management, because PSO-CO has a considerably larger memory demand, compared to PSO. However, we would like to remark that in all cases run times of both algorithms are in the same order of magnitude.

37 Including Dynamic Adaptative Topology …

529

37.7 Conclusions and Future Work We propose in this work a novel swarm-based optimisation algorithm, called PSOCO, whose population is decentralized using the concept of coalitions, taken from game theory. The result is a dynamic topology similar to the islands topology but, in addition, with some features from cellular topologies for the particles in the borders. The topology decentralization induces a higher diversity in the algorithm, because the different coalitions explore different regions of the search space, generally leading to a slower convergence. PSO-CO was tested in a well-known benchmark of 25 difficult continuous optimisation problems. The obtained results show how PSO-CO is highly competitive, outperforming PSO (or not showing any statistical differences) in 21 out of the 25 problems for dimension 30, and for 23 out of the 25 problems for dimension 50. Regarding the execution times, the formation of the coalitions slows down the search process, as we could expect. However, in most cases, PSO-CO is less than 30% slower than PSO. As our most immediate work, we are performing a sensibility study to assign values to the parameters of both the algorithm and the coalitions formation mechanism, in order to analyze their influence on the accuracy of the algorithm. Moreover, we are studying the possibility of increasing the number of iterations in the most promising coalitions, in order to enhance the exploitation of those regions. As another line of future work, we will export the proposed coalitions model to other types of evolutionary algorithms. Acknowledgments The authors would like to acknowledge the Spanish MINECO-FEDER for the support provided under contracts TIN2014-60844-R (the SAVANT project) and RYC-2013-13355.

References 1. Alba E, Tomassini M (2002) Parallelism and evolutionary algorithms. IEEE Trans Evolution Comput 6(5):443–462 2. Bäck T (1996) Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press, NY 3. Dorigo M, Gambardella LM (1997) Ant colonies for the traveling salesman problem. BioSystems, 43:73–81 4. Kennedy J and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948 5. Alba E, Dorronsoro B (2008) Cellular Genetic Algorithms, ser. Operations Research/Compuer Science Interfaces. Springer-Verlag, Heidelberg 6. K. Binmore, Game theory. Mc Graw Hill, 1994 7. B. Dorronsoro, J. C. Burguillo, A. Peleteiro, and P. Bouvry, Handbook of optimization, ser. Intelligent Systems. Springer, 2013, vol. 38 8. Clerc M, Kennedy J (2002) The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation 6(1):58– 73

530

P. Ruiz et al.

9. A. J. Nebro, J. Durillo, J. Garcia-Nieto, C. Coello Coello, F. Luna, and E. Alba, “Smpso: A new pso-based metaheuristic for multi-objective optimization,” in IEEE symposium on Computational intelligence in miulti-criteria decision-making, 2009, pp. 66–73 10. Smith JM (1982) Evolution and the theory of games. Cambridge University Press 11. Fisher R (1930) The genetical theory of natural selection. Clarendon Press, Oxford 12. Morgenstern O, von Neumann J (1947) The theory of games and economic behavior. Princeton University Press 13. Nash J (1950) Equilibrium points in n-person games. Proceedings of the National Academy of Sciences of the United States of America 36(1):48–49 14. Y. Bachrach and J. S. Rosenschein, “Coalitional skill games,” in Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 2, ser. AAMAS ’08. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2008, pp. 1023–1030. [Online]. Available: http://dl.acm.org/citation.cfm?id=1402298. 1402364 15. G. Owen, Game theory. Saunders, 1968. [Online]. Available: http://books.google.es/books? id=v6lMAAAAMAAJ 16. E. Kalai, “Game theory: Analysis of conflict : By roger b. myerson, harvard univ. press, cambridge, ma, 1991. 568 pp.” Games and Economic Behavior, vol. 3, no. 3, pp. 387–391, August 1991. [Online]. Available: http://ideas.repec.org/a/eee/gamebe/v3y1991i3p387-391.html 17. H. Ehtamo, “Dynamic noncooperative game theory : Tamer basar and geert jan olsder, 2nd ed. (academic press, san diego, ca, 1995) isbn 0-12-080221-x,” Journal of Economic Dynamics and Control, vol. 21, no. 6, pp. 1113–1116, June 1997. [Online]. Available: http://ideas.repec. org/a/eee/dyncon/v21y1997i6p1113-1116.html 18. J. Nash, “Non-cooperative games,” The Annals of Mathematics, vol. 54, no. 2, pp. pp. 286–295, 1951. [Online]. Available: http://www.jstor.org/stable/1969529 19. Han Z, Liu KJR (2008) Resource Allocation for Wireless Networks: Basics, Techniques, and Applications. Cambridge University Press, New York, NY, USA 20. T. Alpcan and T. Basar, “A globally stable adaptive congestion control scheme for internetstyle networks with delay,” IEEE/ACM Trans. Netw., vol. 13, pp. 1261–1274, December 2005. [Online]. Available: http://dx.doi.org/10.1109/TNET.2005.860099 21. X. Li, “Improving multi-agent coalition formation in complex environments,” 2007 22. R. Glinton, P. Scerri, and K. Sycara, “Agent-based sensor coalition formation,” in Information Fusion, 2008 11th International Conference on, 30 2008-july 3 2008, pp. 1 –7 23. W. Saad, H. Zhu, A. Hjorungnes, D. Niyato, and E. Hossain, “Coalition formation games for distributed cooperation among roadside units in vehicular networks,” Selected Areas in Communications, IEEE Journal on, vol. 29, no. 1, pp. 48 –60, january 2011 24. P. Faratin and J. A. Rodríguez-Aguilar, Eds., Agent-Mediated Electronic Commerce VI, Theories for and Engineering of Distributed Mechanisms and Systems, AAMAS 2004 Workshop, AMEC 2004, New York, NY, USA, July 19, 2004, Revised Selected Papers, ser. Lecture Notes in Computer Science, vol. 3435. Springer, 2005 25. W. Saad, H. Zhu, M. Debbah, A. Hjorungnes, and T. Basar, “Coalitional game theory for communication networks,” Signal Processing Magazine, IEEE, vol. 26, no. 5, pp. 77 –97, september 2009 26. W. A. Gamson, “A theory of coalition formation,” American Sociological Review, vol. 26, no. 3, pp. pp. 373–382, 1961. [Online]. Available: http://www.jstor.org/stable/2090664 27. S. Yi, “Stable coalition structures with externalities,” Games and Economic Behavior, vol. 20, no. 2, pp. 201 – 237, 1997. [Online]. Available: http://www.sciencedirect.com/science/article/ pii/S0899825697905674 28. F. Bloch, “Endogenous structures of association in oligopolies,” RAND Journal of Economics, vol. 26, no. 3, pp. 537–556, Autumn 1995. [Online]. Available: http://ideas.repec.org/a/rje/ randje/v26y1995iautumnp537-556.html 29. F. Bloch, “Sequential formation of coalitions in games with externalities and fixed payoff division,” Games and Economic Behavior, vol. 14, no. 1, pp. 90–123, May 1996. [Online]. Available: http://ideas.repec.org/a/eee/gamebe/v14y1996i1p90-123.html

37 Including Dynamic Adaptative Topology …

531

30. D. Ray and R. Vohra, “Equilibrium binding agreements„” Journal of Economic Theory, vol. 73, no. 1, pp. 30–78, Mar. 1997. [Online]. Available: http://dx.doi.org/10.1006/jeth.1996.2236 31. S. Yi, Endogenous formation of coalitions in oligopoly, ser. Working paper series. Harvard University, 1992. [Online]. Available: http://books.google.com/books?id=RGF1OwAACAAJ 32. T. Shrot, Y. Aumann, and S. Kraus, “On agent types in coalition formation problems,” in Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1 - Volume 1, ser. AAMAS ’10. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2010, pp. 757–764. [Online]. Available: http:// dl.acm.org/citation.cfm?id=1838206.1838307 33. Z. Li, B. Xu, L. Yang, J. Chen, and K. Li, “Quantum evolutionary algorithm for multi-robot coalition formation,” in Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation, ser. GEC ’09. New York, NY, USA: ACM, 2009, pp. 295–302. [Online]. Available: http://doi.acm.org/10.1145/1543834.1543874 34. Shehory O, Kraus S (1998) Methods for task allocation via agent coalition formation. Artificial Intelligence 101(1):165–200 35. Parker LE (1998) Alliance: an architecture for fault tolerant multirobot cooperation. Robotics and Automation, IEEE Transactions on 14(2):220–240 36. H. Liu and J. Chen, “Multi-robot cooperation coalition formation based on genetic algorithm,” in Machine Learning and Cybernetics, 2006 International Conference on, aug. 2006, pp. 85 –88 37. J. Yang and Z. Luo, “Coalition formation mechanism in multi-agent systems based on genetic algorithms,” Applied Soft Computing, vol. 7, no. 2, pp. 561 – 568, 2007. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1568494606000421 38. Gruszczyk W, Kwasnicka H, “Coalition formation in multi-agent systems; an evolutionary approach,” in Computer Science and Information Technology, (2008) IMCSIT 2008. International Multiconference on, oct. 2008:125–130 39. O. Shehory and S. Kraus, “Formation of overlapping coalitions for precedence-ordered taskexecution among autonomous agents,” in ICMAS-96, December 1996, pp. 330–337 40. V. D. Dang, R. K. Dash, A. Rogers, and N. R. Jennings, “Overlapping coalition formation for efficient data fusion in multi-sensor networks,” in In 21st National Conference on AI (AAAI, 2006, pp. 635–640 41. G. Chalkiadakis, E. Elkind, E. Markakis, and N. R. Jennings, “Overlapping coalition formation,” in Proceedings of the 4th International Workshop on Internet and Network Economics, ser. WINE ’08. Berlin, Heidelberg: Springer-Verlag, 2008, pp. 307–321 42. G. Chalkiadakis, E. Elkind, E. Markakis, M. Polukarov, and N. Jennings, “Cooperative games with overlapping coalitions,” Journal of Artificial Intelligence Research (JAIR), vol. 39, pp. 179–216, September 2010. [Online]. Available: http://eprints.ecs.soton.ac.uk/21574/ 43. B. Dorronsoro and P. Bouvry, “Adaptive neighborhoods for cellular genetic algorithms,” in Nature Inspired Distributed Computing (NIDISC) sessions of the International Parallel and Distributed Processing Symposium (IPDPS) 2011 Workshop, 2011, pp. 383–389 44. Suganthan PN, Hansen N, Liang JJ, D. K., Y. P. Chen, A. Auger, and S. Tiwari, “Problem definitions and evaluation criteria for the cec, (2005) special session on real-parameter optimization,” in IEEE Congress on Evolutionary Computation. Nanyang Technol. Univ, Singapore, Kanpur Genet, p 2005

Part VIII

Risk Management and Safety

Chapter 38

Quantitative Analysis on Risk Assessment in Photovoltaic Installations: Case Study in the Region of Murcia and the Dominican Republic G. C. Guerrero-Liquet, M. S. García-Cascales, and J. M. Sánchez-Lozano Abstract Geographic latitudes that have high solar radiation utilization and are susceptible to the installation of photovoltaic plants must be subject to risk management and assessment to verify the viability of such plants during their useful life. To demonstrate the profitability of these projects, we present a study that allows planning mitigation responses and comparing quantifiable technical and financial data. For this purpose, a quantitative risk analysis is performed combining two decision methods (ANP-TOPSIS) to evaluate the influence of one risk on another. Through a group of experts, the effect of the risks on the photovoltaic installations from the design stage right through to the construction and start-up is determined. The risks taken into account have been identified by carrying out the process recommended by the project management guide (PMBOK) in its risk management section. In the work, the most profitable facility is selected from the comparison in countries like Spain and Dominican Republic, specifically in the Region of Murcia and Santo Domingo. This analysis helps photovoltaic developers and investors to make decisions in cases of uncertainty. Keywords Decision-making · Quantitative risk analysis · Photovoltaic systems (PV) · Analytic network process (ANP) · TOPSIS

G. C. Guerrero-Liquet (B) · M. S. García-Cascales Dpto. de Electrónica Tecnología de Computadoras y Proyectos. Universidad de Politécnica de Cartagena. Plaza del Hospital no 1, 30202 Cartagena, Murcia, Spain e-mail: [email protected] M. S. García-Cascales e-mail: [email protected] J. M. Sánchez-Lozano Centro Universitario de la Defensa de San Javier, Academia General del Aire, Universidad Politécnica de Cartagena. C/Coronel López Peña S/N, 30720 Murcia, Spain e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. L. Ayuso Muñoz et al. (eds.), Project Management and Engineering Research, Lecture Notes in Management and Industrial Engineering, https://doi.org/10.1007/978-3-030-54410-2_38

535

536

G. C. Guerrero-Liquet et al.

38.1 Introduction and Objectives Risks affecting profitability are of great importance in the financial analysis of decentralized renewable energy infrastructures. Nowadays, developers and local investors are often in charge of anticipating the risks that affect this type of projects but lack the knowledge of analysis and evaluation; and thus they need practical tools to overcome management risks. Therefore, fundamental factors such as the Net Present Value (NPV) or the time of return of the investment (Pay Back) which transmit the attractiveness of a project to investors are subject to risk analysis [2]. An additional difficulty associated with the problem is the appropriate modeling of uncertainties depending on the behavior of renewable technology sources and unexpected operational events. These include damage from natural phenomena, lack of supplies and replacements that may affect generation units, etc. These uncertainties add to those already present in the network, such as losses, poor maintenance, or energy prices. Knowing how to quantify this risk would allow investment in the project with greater confidence and reliability. Different models which translate the detailed uncertainties into their potential impact on the project’s objectives are used to apply quantitative risk analysis in a project. Estimates are usually made using different techniques: sensitivity analysis, Monte Carlo simulation, monetary value analysis, scenario analysis, decision-making, etc. [11]. There is a difference in the scope of the criteria when evaluating the effects of the options depending on if the criteria must be assessed quantitatively or qualitatively, and whether the evaluation is carried out in the framework of an estimation or forecasting process [10]. The availability of time and budget, as well as the need for qualitative or quantitative assessments of the risks and their impacts, will determine what methods are used for a particular project [11]. This paper presents a methodology that uses decision-making to obtain the economic behavior of renewable energy projects, taking into account the risk in the project’s sustainability. In order to show how this methodology can be used, two systems of connection to photovoltaic networks are analyzed: one in the capital of the Dominican Republic, Santo Domingo; and another one in a city of eastern Spain, concretely in the city of Murcia. The ANP-TOPSIS combination is applied as a tool to assess the risk of investment in renewable energy facilities. It represents the entire process of profitability of a project through a group of experts that determines the effect of the risks on photovoltaic installations from the design stage right through to the construction and start-up. The article is structured as follows: Sect. 38.2 defines the quantitative analysis of risks and links with renewable energies. The proposed decision-making model and the applied methodology are presented in Sect. 38.3. In Sect. 38.4, the application

38 Quantitative Analysis on Risk Assessment …

537

of the model to the case study is shown and the results are obtained. Finally, the conclusions obtained are detailed in Sect. 38.5.

38.2 Quantitative Analysis of Risks in Renewable Energies The quantitative analysis of risks consists of numerically analyzing the effect of identified risks on the general objectives of a project. It is used to assign risks to an individual numerical rating or to evaluate the cumulative effect of all risks affecting the project [11]. The evaluation is classified according to its importance and according to the impact and the probability of the risk occurring. Likewise, quantitative methods can be classified into two main groups; the prediction methods of series and the method of forecast chance. Among them are the Markov method or chain, the gray model and neural networks, scenario analysis, the regression method, and the Bayesian networks [9]. The quantitative analysis allows the following tasks: quantify the project results and their probabilities; evaluate the project’s sensitivity to risks and its components; evaluate the probability of achieving the project’s specific objectives and consequences; assess the risks which require greater attention to which the project is more sensitive; identify realistic and viable goals; and adopt the best decision for project management when some conditions are uncertain. There are four main components that result in the quantitative risk analysis [11]: 1. Likelihood of achieving the objectives: With the risks facing the project, it is possible to estimate the probability of achieving the project’s objectives using the results of the analysis. 2. Probabilistic analysis of the project: Estimates of potential outcomes are made, listing findings, and possible costs with their associated confidence levels. 3. Prioritized list of quantified risks: It includes the risks that represent the greatest threats or present the greatest opportunities for the project. 4. Trends in quantitative risk analysis results: As the analysis is repeated, a trend may be evident leading to conclusions affecting risk responses. Commonly used techniques cover both event-oriented and project-oriented analyses, and include the [11]: • Sensitivity analysis: This method evaluates the degree to which the uncertainty of each element of the project affects the objective being examined, when all other uncertain elements remain in their values. • The analysis of expected monetary value: Statistical concept that calculates the average result when scenarios that may or may not occur in the future are included (analysis under uncertainty).

538

G. C. Guerrero-Liquet et al.

• Simulation of project: Model that translates the project’s detailed uncertainties into its potential impact on the project’s objectives. Iterative simulations are usually performed using the Monte Carlo technique. • Expert judgment: It is required to identify the potential impacts of the risks and to assess the probability of them occurring. The expert judgment is also involved in the interpretation of the data. Experts should be able to identify the weaknesses of the tools, as well as their relative strengths. There is very little in the literature on quantitative risk analysis in renewable energies. To date, some studies such as [2] addressed the need for advanced risk management tools and presented a simulation approach to risk analysis based on a full cycle representation of project life. Some applications have been carried out to projects of solar installations such as [5] in which an autonomous photovoltaic system is presented, taking into account the risks in the sustainability of the project assumed by an investor in energy generation projects.

38.3 Proposed Decision-Making Model There are numerous existing Decision-Making Methods (MDMC). These are used successfully in different planning processes. The concepts and methods of these methodologies present a framework that incorporates multiple contradictory criteria and that involve uncertain or imprecise valuations in planning. The criteria in these methods can be qualitative and quantitative and usually involve different units of measurement [19]. The best-known models based on multi-criteria decision include the following: AHP, ANP, ELECTRE, PROMETHEE, TOPSIS, VIKOR, and DEMATEL [1]. In order to achieve the proposed objectives, we will combine two techniques of evaluation of expert judgment that are widely extended today, one developed by [13], called the Analytical Process ANP methodology and another called TOPSIS developed by [8]. In the following scheme (Fig. 38.1), we present the model that we propose for the quantitative analysis. In the literature, we have not found any work that uses ANP and TOPSIS techniques together to solve problems in the field of photovoltaic solar energy. The reason could be due to the fact that the ANP methodology has recently been introduced in the scope of multi-criteria methodology [3]. However, there are some studies in other areas where both techniques have been combined (Table 38.1). Each of the techniques used in the present study is explained below.

38 Quantitative Analysis on Risk Assessment …

539

Fig. 38.1 Proposed model outline (compiled by author)

38.3.1 Analytical Network Process ANP Analytical Network Process is a method proposed by Saaty in [12]. In ANP, the decision problem is structured as a network [13]. ANP is a generalization of the Analytical Hierarchical Process methodology by which hierarchies are replaced by networks to capture the results of dependency and feedback within and among elements AHP [12]. ANP is used as a general theory on the scale of indices that measures influence, based on a methodology that deals with dependence and feedback [20]. This method is divided into two parts. First, there is the network of criteria and sub-criteria that control the interactions and second the elements and groups that make up the influence of the network [20]. The weights or coefficients of importance of these criteria are obtained through the paired comparison judgments of homogeneous elements [19]. ANP produces more precise weights of criteria, since it allows the consideration of the dependence between the factors in the problems of decision-making. Unfortunately, ANP requires many comparisons in pairs, depending on the number and interdependence of the criteria. The weight assigned to the criterion can be estimated either from the data, or subjectively by the decision makers. It is possible to measure the consistency of decision makers’ judgments. ANP provides a measure through the consistency ratio (CR), which is an indicator of the model’s reliability [17]. The one who makes the decision based on his knowledge of the problem determines the influence of the different elements of the model. Due to the difficulty of

540

G. C. Guerrero-Liquet et al.

Table 38.1 Publications focused on ANP-TOPSIS (compiled by author) Author

Year

Journal

Shyur Huan-Jyh

2006

Applied Mathematics and COTS evaluation using Computation modified TOPSIS and ANP

Paper title

Chena Jui-Kuei; Shuo Chen I.

2010

Expert Systems with Applications

Using a novel conjunctive MCDM approach based on DEMATEL, fuzzy ANP, and TOPSIS as an innovation support system for Taiwanese higher education

Wua Cheng-Shiung, Linb Chin-Tsai; Leec Chuan

2010

International Journal of Production Economics

Optimal marketing strategy: A decision-making with ANP and TOPSIS

Aya˘ga Z; Özdemirb R.G.

2012

International Journal of Production Economics

Evaluating machine tool alternatives through modified TOPSIS and alpha-cut based fuzzy ANP

Büyüközkan G; Çifçi G.

2012

Expert Systems with Applications

A novel hybrid MCDM approach based on fuzzy DEMATEL, fuzzy ANP and fuzzy TOPSIS to evaluate green suppliers

Tavanaa M; Zandic F; Katehakisd Michael N.

2013

Information & Management

Hybrid fuzzy group ANP–TOPSIS framework for assessment of e-government readiness from a CiRM perspective

Changa K.L; Liaob Sen-Kuei; Tsengc Tzeng-Wei; Liaod Chi-Yi

2015

Asia Pacific Management An ANP based TOPSIS Review approach for Taiwanese service apartment location selection

Sakthivela G; Ilangkumaranb M; Gaikwada A.

2015

Shams Engineering Journal

A hybrid multi-criteria decision modeling approach for the best biodiesel blend selection based on ANP-TOPSIS analysis

Mortezaa Z; Mohamad Rezab F; Mohammad Seddiqc M; Shararehd P; Jamal G.

2016

Ocean and Coastal Management

Selection of the optimal tourism site using the ANP and fuzzy TOPSIS in the framework of Integrated Coastal Zone Management: A case of Qeshm Island

38 Quantitative Analysis on Risk Assessment …

541

identifying the influence of each criterion and the relative intensity, this first one is the most critical of the ANP [1]. Anyway, ANP is widely applied in situations of interdependence and captures influence, so it is postulated as one of the most viable and precise solutions for generating the weights of the selected criteria [4]. In this quantitative analysis, we use ANP to obtain the weight of the risks in the proposed model. The following describes the two steps of the model in which ANP is applied: Step 1. Build the hierarchy and structure of the problem The hierarchy will be determined by the opinion of decision makers through the exchange of ideas or other appropriate methods, such as reviews of the literature. Step 2. Determine the perspectives and weights of the criteria In this step, the decision-making committee makes a series of comparisons in pairs to establish the influence of the risks between each other and the weight of the criteria. In these comparisons, a scale of 1–9 is applied to compare the criteria according to the interdependence and each of the risks identified. These weights together with the influence calculation will be used to obtain the final result of the quantitative analysis.

38.3.2 TOPSIS The technique for the Preference Order of Similarity with the Ideal Solution (TOPSIS) was developed for the first time by Hwang and Yoon in 1981 [18]. The basic principle of the TOPSIS method is to search for a positive ideal solution with a small distance and an ideal negative solution with a greater distance [18]. In TOPSIS, the weight of each of the criteria is known a priori. However, in many real situations, clear data are inadequate to model real-life situations, since human judgment is vague and cannot be estimated with exact numerical values [18]. The ideal positive solution is a solution that maximizes performance criteria and minimizes cost criteria, while the ideal negative solution maximizes cost criteria and minimizes performance criteria [14]. The calculation process is presented as follows [17]: • • • • • • •

Step 1. Establish a decision matrix. Step 2. Normalize the decision matrix. Step 3. Calculate the normalized weighted decision matrix. Step 4. Determine the optimal positive (PIS) and negative ideal (NIS). Step 5. Calculate the separation distances of each alternative to PIS and NIS. Step 6. Calculate the relative proximity to the ideal solution. Step 7. Obtain the rank or ranking in the order of preference of the alternatives.

In order to evaluate the influence of the risks, either quantitatively or qualitatively in this analysis, ANP is applied to generate the weights of the criteria and risks. The

542

G. C. Guerrero-Liquet et al.

TOPSIS method is used to obtain the valuation of the alternatives. TOPSIS is applied because it eliminates many of the procedures that are performed in ANP and allows the system to reach a solution in a shorter time.

38.4 Application of the Proposed Model to the Study Case To demonstrate the profitability of these projects, we apply a model that allows planning mitigation responses and comparing quantifiable technical and financial data. Three experts with the proven ability make up the decision-making committee for this quantitative risk analysis. We obtained the consent of these participants to participate in the study and to authorize the publication of this work. This is with regard to the anonymously expert opinion cited in the paper. This analysis evaluated two photovoltaic installations that are in different geographic latitudes but have a high use of solar radiation. Two alternatives will be considered in which the risks that affect the profitability of the installation have already been studied. The first alternative is a grid-connected photovoltaic system, integrated into the cover of an existing parking lot at the University of Murcia (Spain). This facility was built in late 2008 and began operating and providing information in July 2009. The latitude and longitude of the solar installation are, respectively, 38° 01’12 “N and 1° 09’56” W, and the annual average daily irradiance is 5320 Wh/m2 [16]. The technical, physical, and economic characteristics of this installation are shown below in Fig. 38.2. The second alternative is a solar photovoltaic installation in the parking area of a building for administrative use, namely the central offices of a bank in the city of Santo Domingo (Dominican Republic).

Fig. 38.2 Photovoltaic data collection Murcia, Spain (compiled by author)

38 Quantitative Analysis on Risk Assessment …

543

Fig. 38.3 Photovoltaic data collection Dominican Republic (compiled by author)

The risks that affect the profitability of this photovoltaic installation have been previously identified [6]. Figure 38.3 details the technical, physical, and economic characteristics of the facility to enable expert knowledge to be extracted.

38.4.1 Selection of Risks and Criteria The criteria used for this quantitative risk analysis have been selected on the basis of 11. These are considered as risks C1: Technical aspects and risks C2: Economic factors according to the group they belong to, based on the origin of their occurrences and impacts. As mentioned above, the risk identification had been carried out through a group of five experts in a previous study [7]. In that study, the experts identified the risks of the installation through a combination of methods following the guidelines and recommendations of the PMBOK guide. In this study, it was possible to establish an order of importance for each of these risks. The risks identified by the experts were as follows: • C1.1. Atmospheric Phenomena damage: Earthquakes, hurricanes, storms, etc., which may cause damage to the facility. • C1.2. Lack of replacements and supplies: Replacement of any component or structure which prevents efficiency during the life of the installation. • C1.3. Shadows losses: Losses of solar radiation by photovoltaic installations to shadows. • C1.4. Lack of Maintenance: Effects caused by not cleaning the dust and dirt of a facility over its lifetime. • C.2.1. Lack of Financing: no bank loans or funding to support the project investment.

544

G. C. Guerrero-Liquet et al.

• C.2.2. High Pay Back: It takes a long time to recover the initial outlay invested in the production process. • C.2.3. Variability NPV: It is the route as a percentage of the project’s profitability. The net present value or net present value of an investment is an indicator of the net absolute return provided by the project. It measures the baseline and the benefit provided in absolute terms, after discounting the initial investment. • C.2.4. Maintenance Costs: Total price to pay for cleaning dirt from the solar panels over the lifetime of the installation. • C.2.5. Changes in the legislative framework: Different changes that may arise in current legislation and rules governing aids applied to this type of investment.

38.4.2 Assessment of the Weight and Interdependence of Risks Using ANP In the first place, the hierarchy diagram consists of four levels, objective of the problem, the main criteria, the risks, and the alternatives that are situated level by level, respectively, see Fig. 38.4. We begin the process by identifying the degree of influence between risks. With this scheme, a first questionnaire is carried out where the three experts indicate which risks influence the other risks. In this way, the property of interdependence between them is determined by applying ANP. After constructing the hierarchy diagram for the above problem, we compare the risks that have a greater influence of the unit for each risk using the fundamental scale of [13] 1–9. The interdependencies with which the influence matrix is obtained are thus calculated. At this stage, the consistency of each of the experts is measured.

Fig. 38.4 Diagram of hierarchies with ANP (compiled by author)

38 Quantitative Analysis on Risk Assessment …

545

The next step is to perform the binary comparison of the criteria and the subcriteria in order to determine the weight or coefficient of importance of each of the risks. This process is done for each expert through a second questionnaire. With the addition of these two results the final weight is obtained (Table 38.2), and then we proceeded to perform the evaluation stage of the alternatives.

38.4.3 Evaluation of Alternatives Using the TOPSIS Method On the basis of the TOPSIS method, the alternatives are evaluated taking into account the weights previously obtained with ANP and estimating the probability of the risks occurring, according to whether they are considered as qualitative or quantitative. The weighted quantitative risk data (C1.3, C2.2, C2.3, and C2.4) are obtained from the technical, physical, and economic characteristics of each of the facilities detailed above in Sect. 38.4. The other risks have been considered as qualitative and the data that have been weighted to perform the quantitative analysis have been obtained through the last questionnaire that was given to each one of the experts in which they had to value on a scale of 1–10 the probability of each of the risks occurring in each project. Finally, at this stage, we apply the TOPSIS methodology for these alternatives. Table 38.3 presents the process and result obtained with expert 1. This procedure was done in an identical way for the rest of the experts taking the weights of occurrence that responded in the final questionnaire. Table 38.4 shows the final results showing that the three experts have shown a very similar order of preference. With this result, it is established that alternative 1: Solar photovoltaic installation at the University of Murcia, Spain, is more profitable than Alternative 2: Solar photovoltaic installation at a bank in Santo Domingo, Dominican Republic.

38.5 Conclusions Evaluating the influence of the risks and obtaining which photovoltaic plant has the greater profitability through the quantitative analysis, we have achieved our main objective of applying multi-criteria decision tools to the process of risk management. With this quantitative analysis, investors and developers can evaluate the profitability of any project and make the best decision in cases of uncertainty using the model we propose. In this evaluation process, it is crucial that the group of experts selected have a very thorough knowledge of the topic addressed and that it maintains a high consistency. From the methodology applied, it should be pointed out that with ANP we were able to establish the weight of the identified risks, taking into account the influence that each one of them has among them. The TOPSIS methodology allowed an order

546

G. C. Guerrero-Liquet et al.

Table 38.2 Results of the risk weights with ANP (compiled by author)

4

5

3

0.970

0.106

A2

0.059

0.036

0.857

0.514

0.029

0.014

0.894

0.447

C1.3

0.032

min

20

10

C1.3

0.032

20

10

%

C1.3

0.285

0.143

0.894

0.447

C1.4

0.319

min

4

2

C1.4

0.319

4

2

Min:1, Max:10

C1.4

0.027

0.106

A+

A−

0.059

0.036 0.029

0.014

0.285

0.143

Determine the Ideal Positive Solution (PIS) and the Ideal Negative Solution (NIS)

0.027

A1

Weighting of the Decision Matrix

0.243

A2

C1.2

C1.1

min

0.069

min

0.109

A1

Weight (W)

Standardization of the Decision Matrix

1

C1.2

A2

0.069

C1.1

5

0.109

A1

Weight (W)

Construction of the Decision Matrix

4

A2

3

Min:1, Max:10

Min:1, Max:10

1

A1

C1.2

C1.1

C1 Technical risks

Valorization of Alternatives

0.190

0.076

0.190

0.076

0.928

0.371

C2.1

0.204

min

5

2

C2.1

0.204

5

2

Min:1, Max: 10

C2.1

C2 Economic risk

Table 38.3 Procedure for obtaining the ranking of alternatives expert 1 (compiled by author)

0.010

0.006

0.006

0.010

0.530

0.848

C2.2

0.012

min

5

8

C2.2

0.012

5

8

Years

C2.2

0.014

0.009

0.009

0.014

0.555

0.832

C2.3

0.017

min

20

30

C2.3

0.017

20

30

%

C2.3

0.072

0.070

0.070

0.072

0.698

0.716

C2.4

0.100

min

205.27

210.3

C2.4

0.100

205.27

0.042

0.030

0.030

0.042

0.581

0.814

C2.5

0.052

min

5

7

C2.5

0.052

5

7

Min:1, Max:10

e/Wp 210.3

C2.5

C2.4

38 Quantitative Analysis on Risk Assessment … 547

548 Table 38.4 Results of the ranking of alternatives (compiled by author)

G. C. Guerrero-Liquet et al. Distance Measures Expert 1

Expert 2

Expert 3

d + A1

0.0135

0.0510

0.0405

d + A2

0.1665

0.0714

0.1358

d – A1

0.1665

0.0762

0.1574

d – A2

0.0364

0.0282

0.1226

Relative Proximity RA1

0.925

0.599

0.795

RA2

0.179

0.283

0.474

A1 > A2

A1 > A2

Order Preferences A1 > A2

of preference of the alternatives to be obtained, taking into account quantitative and qualitative data, a fact that is primordial for an analysis of this type. In addition to this risk assessment, it is necessary to add a sensitivity analysis that demonstrates the robustness of the obtained results. Nevertheless, ANP-TOPSIS allows obtaining results in a short time, taking into account that to reach a conclusion in a quantitative analysis involves a number of very complex mathematical operations and without the use of excessively extensive surveys. Finally, we conclude that this quantitative analysis allows us to evaluate not only qualitative data, but also quantifiable data. It also allows planning mitigation and contingency responses of a project through clear estimates and forecasts. Acknowledgments This work is partially supported by projects TIN2014-55024-P from the Spanish Ministry of Science and Innovation and P11-TIC-8001 from Junta de Andalucía (including FEDER funds), and the project 19882/GERM/15 SÉNECA 2004 program in addition to the doctoral scholarship from the Ministry of Higher Education from the Dominican Republic (MESCYT) with the contract number 2564–2016, respectively.

References 1. Aragonés-Beltrán P, Chaparro-González F, Pastor-Ferrando JP (2014) An AHP (Analytic Hierarchy Process)/ANP (Analytic Network Process)-based multi-criteria decision approach for the selection of solar-thermal power plant investment projects. Energy 66(1), pp 222–238 2. Arnold U, Yildiz O (2015) Economic risk analysis of decentralized renewable energy infrastructures e A Monte Carlo Simulation approach. Renew Energy 77:227–239 3. Aya˘g Z, Özdemir RG (2012) Evaluating machine tool alternatives through modified TOPSIS and alpha-cut based fuzzy ANP. Int J Product Econom 140(2):630–636 4. Chang KL, Liao Sen-Kuei, Tseng Tzeng-Wei, Liao Chi-Yi (2015) An ANP based TOPSIS approach for Taiwanese service apartment location selection. Asia Pacific Manage Rev 20(2):49–55

38 Quantitative Analysis on Risk Assessment …

549

5. Da Silva Pereira E, Tavares Pinho J, Barros Galhardo AB, Negrao Macedo W (2014) Methodology of risk analysis by Monte Carlo Method applied to power generation with renewable energy. Renewable Energy 69 347–355 6. Guerrero-Liquet GC, Sánchez-Lozano JM, García-Cascales MS, Lamata MT, Verdegay JL (2016) Decision-Making for risk management in sustainable renewable energy facilities: a case study in the Dominican Republic. Sustainability 8(5):455 7. Guerrero-Liquet GC, Sánchez-Lozan JM, García-Cascales MS, Campos-Guzmán V (2016) Comparative Analysis of Risk in Photovolyaic Installations: Murcia-Dominican Republic. 20th International Congress on Project Management and Engineering, AEIPRO. Cartagena 13–15th July 1643–1655 8. Hwang CL, Yoon, K. (1981). Multiple Attribute Decision Methods and Applications. Springer, Berlin Heidelberg.Mahmood Shafiee. (2015). A fuzzy analytic network process model to mitigate the risks associated with offshore wind farms. Expert Systems with Applications. Volume 42, Issue 4, March, Pages 2143–2152 9. Marhavilas PK, Koulouriotis DE (2012) A combined usage of stochastic and quantitative risk assessment methods in the worksites: Application on an electric power provider. Reliability Engineering and System Safety 97:36–46 10. Mena R, Hennebel M, Yan-FuLI; Ruiz, C; Zio E. (2014) A risk-based simulation and multiobjective optimization framework for the integration of distributed renewable generation and storage. Renew Sustain Energy Rev 37:778–793 11. PMBoK (2012) Project Management Institute “Project Management Body of Bnowledge (Guide PMBOK)”. 5a Ed, Pennsylvania USA 12. Saaty, T. L. (2000). Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process. RWS Publications 13. Saaty, T.L. (2001). The Analytic Network Process: Decision Making with Dependence and Feedback. RWS Publications, 4922 Ellsworth Ave., Pittsburgh, PA 15213, 1996, completely revised and published 2001, completely revised 2004b 14. Sakthivel G; Ilangkumaran M; Gaikwad A. (2015). A hybrid multi-criteria decision modeling approach for the best biodiesel blend selection based on ANP-TOPSIS analysis. Ain Shams Engineering Journal. Volume 6, Issue 1, March, Pages 239–256 15. Sánchez-Lozano JM, García-Cascales MS, Lamata MT (2015) Evaluation of suitable locations for the installation of solar thermoelectric power plants. Comput Ind Eng 87:343–355 16. Serrano-Luján L, García-Valverde R, Espinosa N, García-Cascales MS, Sánchez-Lozano JM, Urbina A (2015) Environmental benefits of parking-integrated photovoltaics: a 222 kWp experience. Prog Photovoltaics Res Appl 23(2):253–264 17. Shyur Huan-Jyh. (2006). COTS evaluation using modified TOPSIS and ANP. Applied Mathematics and Computation. Volume 177, Issue 1, 1 June, Pages 251–259 18. Singh R.K; Benyoucef L. (2011). A fuzzy TOPSIS based approach for e-sourcing. Engineering Applications of Artificial Intelligence. Volume 24, Issue 3, April, Pages 437–448 19. Tavana M, Zandi F, Katehakis Michael N (2013) A hybrid fuzzy group ANP–TOPSIS framework for assessment of e-government readiness from a CiRM perspective. Inf Manag 50(7):383–397 20. Wu Cheng-Shiung, Lin Chin-Tsai; Lee Chuan. (2010). Optimal marketing strategy: A decisionmaking with ANP and TOPSIS. International Journal of Production Economics. Volume 127, Issue 1, September, Pages 190–196