Understanding Human Errors in Construction Industry: Where, When, How and Why We Make Mistakes (Digital Innovations in Architecture, Engineering and Construction) 3031376668, 9783031376665

This book provides an analysis of the origin and effect of human errors in engineering, using cases from everyday construction.


English · 225 [223] pages · 2023





Table of contents :
Preface
Contents
1 Human Errors in Engineering
1 Conceptualisation of Human Errors (Adapted from Moura et al. 2017)
2 Limitations to Calculate Human Errors Probabilities (Adapted from Moura et al. 2016)
References
2 Design and Construction Case Studies
1 Introduction
2 Genesis of Errors
2.1 Conceptual Errors
2.2 Modeling Errors, Misconception, Omission
2.3 Calculation Errors
2.4 Errors of Translation, Communication and Coordination
2.5 Errors of Execution
2.6 Errors Due to Use
2.7 Errors Caused by Alteration
2.8 Errors in Deconstruction
3 Perpetuation and Interception
3.1 Detection, Attention and Murphy’s Law
3.2 Recognition, Interpretation
3.3 Communication
3.4 Follow-up. From the Tip to the Real Iceberg
4 Compensation for Uncertainty
4.1 Safety Margins
4.2 Robustness and Checking
5 Correction and Cost of Errors
5.1 Correction as Part of an Optimization Process
5.2 The Cost of Correction
5.3 Insurance, Sharing Liability and Responsibility
6 Methodology of Quality Control
6.1 Preset Protocols
6.2 The Periodic Project Review
6.3 Targeted Checking
6.4 Circumspect Surveillance
6.5 Testing
6.6 Conclusion
7 Case Studies
7.1 The Use of Brittle Materials
7.2 Traditional Methods Versus Modern Materials
7.3 The Notorious Formwork Collapse
7.4 The Error of Use: No Maintenance
7.5 A Mix-up
7.6 The Crooked Mast
7.7 Vibration Problems
7.8 The Computer Glitch
7.9 The Problem with Repetitive Work
7.10 The Not so Calculated Risk
7.11 The Problem with Drawings
7.12 The Intricacy of Creating Ductile Structures
7.13 Stiffness, Brittleness and Weak Links
7.14 The Problem with Indoor Parking Structures
7.15 Destructive Temperature Gradients
7.16 The Perpetual Problem of Balconies
7.17 Earthquake Resistance in the Real World
7.18 Inadequate Substitution
7.19 Misplaced Foundations
7.20 Prestressed Concrete Railroad Ties
7.21 Stability of Masonry Walls
7.22 Water Leakage and Its Consequences
7.23 Repairs and Their Adequacy
7.24 Beyond Elastic Theory
7.25 Incomplete Structures
7.26 Post-tensioning in Building Construction
7.27 Forgotten Physics
7.28 Excavation Next to Existing Construction
7.29 Hydraulic Heave
7.30 Internal Corrosion
7.31 Problems with Isostatic Systems
7.32 The Problem with Construction Stops
7.33 Problems with Access
7.34 The Importance of the Ground Water Table
7.35 Forgotten Effects of Fire
7.36 Durability of Timber Piles
7.37 The Devil Is in the Detail
7.38 Bad Luck with Piling Technique
7.39 Problems with Brick Veneer
7.40 The Anomaly of Water
7.41 Trouble with Precast Concrete
7.42 The Risk of Single Path Tension
7.43 Anchorage of Large Forces
7.44 The Art of Load Transfer
7.45 Past Construction Methods and Their Problems
7.46 The Importance of Wind
7.47 Drilling Is Blind
7.48 Inspection Impossible
7.49 Thoughtless Modification
7.50 Extreme Weather and Vulnerable Structures
7.51 Theory and Performance
7.52 Trouble with Cracking
7.53 Thoughtless Cutting
7.54 The Strange Story of Nocturnal Noise
7.55 Precast Concrete Structures
7.56 Geothermal Installations and Their Problems
7.57 The Myth of Watertight Concrete
7.58 Shrinkage and Its Problems
7.59 No Checking
8 Conclusion
References
3 Learning from Large-Scale Accidents
1 Introduction
2 The Construction of a Dataset to Understand Human Errors (Moura et al. 2016)
3 Data Analysis Methods
3.1 SOM (Moura et al. 2017a, b)
3.2 Bayesian Approach (Morais et al. 2018)
4 Lessons Learned from Past Accidents (Moura et al. 2017a, b)
5 Using Past Accident Lessons to Mitigate Future Events (Moura et al. 2017a, b)
References
4 Insights from Psychology
1 The Underlying Causes: Psychological Factors Behind Human Errors
2 Internal Human Factors
3 Compromise of Ethical and Professional Judgement
4 Emotions Versus Cognitive Processes
5 Neurological Origin of Emotions
6 Past Experiences and Emotional Loads
7 Modulation of Emotions
8 Balance and Harmony
9 Emotional Intelligence
10 Emotional Competence and Professional Conscience
11 Cognitive Bias and Human Error
12 Human Error and the University Program
13 Analysis of the Study Cases from a Psychologist Perspective
13.1 Scaffolding and the Inclined Overpass
13.2 The Crooked Mast
13.3 A Mix-up
13.4 The Roof of the Olympic Stadium in Montreal
13.5 Earthquake Resistance in the Real World
13.6 The Problem with Repetitive Work
13.7 The Problem with Drawings
13.8 The Perpetual Problem of Balconies
13.9 Inspection Impossible
13.10 Thoughtless Cutting
13.11 The Problem with Indoor Parking Structures (Case 14)
14 Conclusion
References
5 Insights from Construction Management
1 Megaprojects in Construction as Mega Challenges
2 Multidimensional Complexity of Megaprojects
3 Expanding Risk Management
3.1 Project Management
3.2 Engineering
3.3 Communication and Collaboration
3.4 Culture
4 Reducing Complexity by Digitalisation
5 Sustainable Implementation of Risk Management
References
6 Evaluation of Case Studies/A Template Approach
1 The Work
1.1 Concept
1.2 Workmanship, Quality
1.3 Novelty
1.4 Magnitude
1.5 Modifications
1.6 Temporary Work
2 The Client
2.1 Budget Constraints
2.2 Time Pressure
2.3 Selection of the Team
2.4 Payment Morale
2.5 Client’s Participation
3 Organization
3.1 Management
3.2 Team Work
3.3 Assignment of Tasks and Responsibility
3.4 Methodology of Decision Making
3.5 Communication
4 The Human Individuals
4.1 Competence, Knowledge
4.2 Devotion, Care and Attention
4.3 Continuity
4.4 Ignorance and Learning
4.5 Delegation to Software
5 The Structure
5.1 Robustness, Resilience
5.2 Appropriateness, Suitability
5.3 Complexity
6 Use
6.1 Maintenance
6.2 Exposure
7 Checking, Quality Control, Insurance
7.1 Effectiveness
7.2 Timing
7.3 Methodology, Targeting
7.4 Response to Circumstances, Follow-up and Trouble Shooting
7.5 Resources
8 Results, Conclusion
References
Appendix Matrix of Circumstances Versus Errors


Digital Innovations in Architecture, Engineering and Construction

Raphael Moura · Franz Knoll · Michael Beer, Editors

Understanding Human Errors in Construction Industry: Where, When, How and Why We Make Mistakes

Digital Innovations in Architecture, Engineering and Construction

Series Editors:
Diogo Ribeiro, Department of Civil Engineering, Polytechnic Institute of Porto, Porto, Portugal
M. Z. Naser, Glenn Department of Civil Engineering, Clemson University, Clemson, SC, USA
Rudi Stouffs, Department of Architecture, National University of Singapore, Singapore, Singapore
Marzia Bolpagni, Northumbria University, Newcastle-upon-Tyne, UK

The Architecture, Engineering and Construction (AEC) industry is experiencing an unprecedented transformation from conventional labor-intensive activities to automation using innovative digital technologies and processes. This new paradigm also requires systemic changes focused on social, economic and sustainability aspects. Within the scope of Industry 4.0, digital technologies are a key factor in interconnecting information between the physical built environment and the digital virtual ecosystem. The most advanced virtual ecosystems allow the built environment to be simulated, enabling real-time, data-driven decision-making. This Book Series promotes and expedites the dissemination of recent research, advances, and applications in the field of digital innovations in the AEC industry. Topics of interest include but are not limited to:

– Industrialization: digital fabrication, modularization, cobotics, lean.
– Material innovations: bio-inspired, nano and recycled materials.
– Reality capture: computer vision, photogrammetry, laser scanning, drones.
– Extended reality: augmented, virtual and mixed reality.
– Sustainability and circular building economy.
– Interoperability: building/city information modeling.
– Interactive and adaptive architecture.
– Computational design: data-driven, generative and performance-based design.
– Simulation and analysis: digital twins, virtual cities.
– Data analytics: artificial intelligence, machine/deep learning.
– Health and safety: mobile and wearable devices, QR codes, RFID.
– Big data: GIS, IoT, sensors, cloud computing.
– Smart transactions, cybersecurity, gamification, blockchain.
– Quality and project management, business models, legal perspective.
– Risk and disaster management.

Raphael Moura · Franz Knoll · Michael Beer, Editors

Understanding Human Errors in Construction Industry: Where, When, How and Why We Make Mistakes

Editors

Raphael Moura, National Agency for Petroleum, Natural Gas and Biofuels, Rio de Janeiro, Brazil

Franz Knoll, NCK Inc., Montreal, QC, Canada

Michael Beer, Institute for Risk and Reliability, Leibniz University Hannover, Hannover, Germany; Institute for Risk and Uncertainty, University of Liverpool, Liverpool, UK

ISSN 2731-7269 ISSN 2731-7277 (electronic) Digital Innovations in Architecture, Engineering and Construction ISBN 978-3-031-37666-5 ISBN 978-3-031-37667-2 (eBook) https://doi.org/10.1007/978-3-031-37667-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

With this book on human error and mistakes, the authors attempt to put in perspective what has been recognized as one of the main problems humanity has with progress. Human error has become a buzzword loved by the media, who delight in assigning blame for everything that goes wrong, and rightly so: in the past few decades it has become abundantly clear that accidents, lack of quality, damage and loss of value have their origin, with little exception, in human action, its absence or its deficiency. Even where a natural disaster, 'an act of God', appears to be the cause of a mishap, the absence of prudence or the lack of precautionary measures is most often instrumental for the severity of its effects. It is becoming increasingly difficult to explain away the human influence in climate change and its association with the increasing intensity and frequency of extreme events. At this time, only the activity of the earth's crust (earthquakes, volcanism) can be thought to be independent of human influence; but then, why are settlements placed too close to the volcano, and why is fragile construction allowed in earthquake-active zones? It is not always easy or trivial to find the main culprit for things turning out badly where a complex array of circumstances unites to cause the misfortune but, especially in the reality of everyday production and operation of systems, pseudo-rules like 'Murphy's laws' take a prominent place.

Research about human error and its effects has intensified only recently, after having been carried by a few on a 'back burner' for decades in spite of its importance, both in terms of accidents and of monetary losses. The reasons for this are manifold, the main one being that in the seat of scientific research, i.e. academia, the subject was shunned as too thorny and difficult: it does not lend itself to the classical scientific method of subdividing problems into handy subquestions, which facilitates the production of numerous papers by individuals and small groups. The problem with human intervention in most endeavours is the bewildering variety of circumstances that can affect the outcome and cannot be decomposed into handy decoupled packages. A holistic approach is the only way to come to terms with the problem.

In a typical construction project or the operation of an industrial system, the intervention of a multitude of individuals or groups, of boundary conditions, of
temporal, spatial and systemic interfaces interact throughout its history to produce an outcome that, in the majority of cases, is quite congruent with the specified goal, but not so in a few instances, where predictions based on modelling the future have failed to deliver what was desired. By and large, success is limited in reality to rates such as 99/100 or 999/1000; this places the rate of failure somewhere between 10⁻² and 10⁻³. That is far from what probability theory based on the individual parameters (exposure, resistance, control, physical properties of materials, etc.) predicts, or what, as in the case of structures, has been made the basis of the prescribed numerical safety margins used in engineering calculations. In the realm of industries such as nuclear power stations, rates of failure of 10⁻⁵ and 10⁻⁶ are the common currency of theoretical work, only they are far from reality. Of the approximately 500 nuclear reactors, three or four have failed dramatically, depending on how one counts. That is of the order of 10⁻² in reality, rather than in theory. Of course, human error has been identified in each and every case to be the main cause of the failure, as in all cases of major industry failures.

In the section 'Learning from accidents' of this book, an approach is sought to treat major industrial accidents, grouping their properties in the sense of a classification, in order to make common features perceptible. Where the failure to act correctly in response to a signal is identified to be the immediate cause of a mishap, it may mostly be the fault of the system that allowed it to happen. That system was conceived by humans and is therefore prone to errors itself. Likewise, for everyday construction, the section Design and Construction Case Studies proposes a simplified classification (the 'template approach') of some 60 examples of failures or 'near misses', along with an array of circumstances that were associated with the misfortune.

A section dealing with the cognitive and psychological situation of individuals or groups participating in the construction process explores how errors occur. It is found that our daily work, attention and decision-making are very much affected by our psychological and emotional situation, and stress and distress tend to make us error-prone. A selection is made from the case studies and interpreted to illustrate these relationships.

A section dealing with construction management highlights this all-important aspect that pervades every element of the construction process: 'Every error is also an error of management' is one of the very few findings that apply 'across the board', meaning that management, invariably and essentially, is always involved with the outcome and its origin, good or bad.

This book wishes to give an impression of the present state of knowledge in the field, where it is hoped that in the near future substantial progress will be achieved in elucidating relationships among the circumstances that contribute to the genesis, the perpetuation and the ultimate manifestation of errors. By no means should the impression be created that a simple decoupling into separate features is achievable, everything being correlated in multiple ways. Where a principal element of 'wrongness' is identified to be related to insufficient knowledge, to lack of attention, to the wrong individuals assigned to tasks beyond their faculties, to time pressure, etc., it is most often accompanied by other contributing factors. However, in order to make things digestible to the informed reader, simplification was necessary and minor correlations had to be left by the wayside. Simplification implies the loss of information, and the authors have attempted to ascertain that no important correlations were sacrificed.

The database accessible at this time may seem rather insufficient to permit valid conclusions in the sense of modern science (some 100 records of accidents in the work of Moura et al., 60 or so cases in the 'template' approach used to discuss everyday construction). The anecdotal records that form the database of this book may not be as complete as desired either, some of the details having been lost in the harvesting. It is hoped that in the future, ways can be found to make a database accessible that is commensurate with the diversity of the problem. Nevertheless, the authors believe that even with the narrow database available at this time, some valid as well as fundamental conclusions can be made, such as:

• A large proportion of errors have their origin in the conceptual design of the system;
• A lack of robustness is the major reason for the severity of numerous mishaps;
• Where quality control activities are not well organized, errors will escape detection and correction.

To a seasoned practician, this may sound trivial, but it is the first time that data, however scarce, has demonstrated it to be prominently important as well as real. From a comparison of the findings in the work on large-scale industrial accidents and the everyday construction work, it becomes clear that the rate of failure in the two very different domains is of the same order of magnitude, i.e. about 10⁻² to 10⁻³. The reasons for this can only be surmised at this point. Where, in large-scale industry, much effort is invested in the detailed analysis and modelling of the system, via different scenarios that are 'thought through' by groups of experts, the inherent complexity lets particular sequences of events escape the attention of the analysts (Fukushima, Chernobyl, etc.). In the simple and seemingly more trivial circumstances of everyday construction, no great investment is possible toward optimization, time and economic pressure being prominently implicated in the error proneness.

In the background of all of this, one finds the one pervasive feature that may be the controlling agent holding the general rate of failure at the current level, regardless of technological progress: it is the social consensus that things are what they should be. A major change or correction would imply an enormous price that society at large, consciously or not, is not prepared to come up with. The industrialized countries have been living with this more or less invariable rate of failure for a century, and code committees, in charge of stipulating safety margins and methods of analysis, have been very careful to 'leave it as it is', adjusting newly created rules and methods to yield essentially the same results that the previous ones did, in order not to trigger an outcry in either direction. The question to be asked is therefore: What can be done to improve matters, given these constraints?

The answer to such questions and apparent paradoxes is the conclusion of this book:

• Intensify the analysis of the conceptual design as much and as early as possible;
• Provide the system with robustness in order to mitigate the consequences;
• Aim the control activity to where it can be effective, and tailor it appropriately in terms of timing, personnel, targeting, protocols, etc.

Montreal, Canada

Franz Knoll
Dipl. Ing. ETH, Dr. sc. techn., Dr. h.c.
NCK Inc.
[email protected]

Contents

1 Human Errors in Engineering (Raphael Moura) 1
2 Design and Construction Case Studies (Franz Knoll) 9
3 Learning from Large-Scale Accidents (Raphael Moura, Edoardo Patelli, and Caroline Morais) 117
4 Insights from Psychology (Marlène Déom) 145
5 Insights from Construction Management (Hamid Rahebi and Katharina Klemt-Albert) 173
6 Evaluation of Case Studies/A Template Approach (Franz Knoll) 193
Appendix: Matrix of Circumstances Versus Errors 213

Chapter 1

Human Errors in Engineering
Raphael Moura

1 Conceptualisation of Human Errors (Adapted from Moura et al. 2017)

The term “human error” has been coined in several different fields such as engineering, economics, psychology, design, management and sociology, with numerous interpretations and diverse objectives. Although most researchers and practitioners would (probably!) agree that human error can be generally understood as a failure to perform a certain task, the indiscriminate usage of the label “human error” to define some sort of human underperformance can be highly controversial. Hollnagel (1993, 1998), Woods et al. (2010) and Dekker (2014) claim that errors are best seen as a judgment in hindsight, or an attribution made about the behaviour of people after an event, being a quite misleading term, of limited practical use and nothing more than a tag. Conversely, Reason (1990, 2013) favoured the usage of the nomenclature, describing three necessary features to define human error: (i) plans; (ii) actions (or omissions); and (iii) consequences, surrounded by two situational factors: intention and absence of chance interference. Therefore, according to Reason, human errors can be acknowledged when an intention is reflected in a planned sequence of actions which fails to accomplish its projected outcome, with observable consequences. The plan can be flawed or the action(s) can be imperfect, and some chance agency (e.g. an act of God or force majeure) is not recognisable.

The understanding of human error is typically encapsulated by a wider concept entitled “Human Factors”. This applied discipline has grown significantly since World War II, when consideration of the human aspect was deemed necessary for achieving a realistic reliability assessment (Swain and Guttmann 1983; Dhillon 1986). Since then, the multi-disciplinary nature of human factors studies, which
focus on the relationship between humans, tasks, technologies, organisations and the surrounding environment, allowed for some variances among the views and needs of engineers, psychologists, sociologists, managers and the general public. Hollnagel (1998) suggested that the original engineering and design approach be aimed at analysing humans as components, in order to assign a human failure probability (or human error likelihood) to risk and safety assessments. Adopting a different perspective, psychologists attempted to understand mental processes and awareness mechanisms leading to erroneous actions, while sociologists were looking for flaws in the socio-technical system, usually attributing errors to management and organisational shortcomings. But how do stakeholders see errors, especially after a major accident? Two common attitudes towards human errors were distinguished by Dekker (2014). The first one is what he called the “old view”, which considers errors as causes. On the other hand, the “new view” regards human errors as consequences of accidents, effects or symptoms of some sort of organisational shortcoming. Hollnagel (1998) highlighted that human error has been seen as the cause of events (when accidents are said to be due to the human intervention), the event itself (when an action, e.g. pressing the wrong button, is said to be a human error) or the consequences (the outcome of the action is said to be an error, e.g. the driver made the error of fuelling a petrol-fuelled car with diesel, inferring a car malfunction). Reason’s characterisation of human errors turns out to be extremely beneficial, tying up two loose ends: firstly, the need for recognition and common understanding of human error, a deeply rooted concept in both technical and general public reality, serving as a useful bridge between the two worlds; and secondly, a clear and convenient definition, focused on the internal and external characteristics of the analysed subject and on the genesis of error. This approach allows for the search for profounder issues related to accidents and can help to reduce the knowledge gap among authorities, the general public and wider stakeholder groups, in order to accomplish an improved learning environment.
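Reason's criteria can also be restated as a compact checklist. The sketch below encodes the three features and the two situational factors as a small data structure so they stay distinct when discussing a concrete event; the field and function names are mine and purely illustrative, not any standard schema, and the two example events are invented.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Minimal description of an occurrence, paraphrasing Reason's criteria
    as summarised above (names are illustrative, not a standard taxonomy)."""
    had_plan: bool              # a planned sequence of actions existed
    action_taken: bool          # actions (or omissions) actually occurred
    outcome_achieved: bool      # the projected outcome was accomplished
    intentional: bool           # the actor intended the plan
    chance_interference: bool   # an act of God / force majeure was involved

def is_human_error(e: Event) -> bool:
    """Human error: an intended, executed plan fails to achieve its projected
    outcome, and no chance agency can be blamed."""
    return (e.had_plan and e.action_taken and e.intentional
            and not e.outcome_achieved and not e.chance_interference)

# A crew follows a flawed lifting plan and drops a precast panel -> human error.
print(is_human_error(Event(True, True, False, True, False)))   # True
# A correctly executed concrete pour is ruined by a freak storm -> not an error.
print(is_human_error(Event(True, True, False, True, True)))    # False
```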

2 Limitations to Calculate Human Errors Probabilities (Adapted from Moura et al. 2016)

Human Reliability Analysis (HRA) can be generally defined as a predictive tool, intended to estimate the probability of human errors and weigh the human factors contribution to the overall risk by using qualitative and/or quantitative methods. In the early 60s, the first structured method to be used by industry to quantify human error was presented by Swain (1963), which later evolved to the well-known Technique for Human Error Rate Prediction (THERP) (Swain and Guttmann 1983). This technique was initially developed to deal with nuclear plant applications, using in-built human error probabilities adjusted by performance-shaping factors and dependencies (interrelated errors) to deliver a human reliability analysis event tree.
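To make the event-tree idea concrete, the sketch below strings together a hypothetical two-step task: an operator action followed by a check, with a dependence adjustment because the checker works under the same conditions as the doer. The probabilities and the shaping multiplier are invented for illustration; the dependence equations are the ones commonly quoted in connection with the THERP handbook, but nothing here reproduces its actual data tables.

```python
# Illustrative THERP-style quantification of a two-step task:
# step 1 = operator sets a value, step 2 = a checker verifies it.
# All probabilities and multipliers below are invented for demonstration.

def adjusted_hep(nominal_hep, performance_shaping_factor):
    """Nominal human error probability scaled by a context multiplier."""
    return min(nominal_hep * performance_shaping_factor, 1.0)

def conditional_hep(hep, dependence):
    """Conditional HEP of the second action given failure of the first,
    using the dependence equations commonly quoted for THERP."""
    equations = {
        "zero": lambda p: p,
        "low": lambda p: (1 + 19 * p) / 20,
        "moderate": lambda p: (1 + 6 * p) / 7,
        "high": lambda p: (1 + p) / 2,
        "complete": lambda p: 1.0,
    }
    return equations[dependence](hep)

hep_action = adjusted_hep(0.003, performance_shaping_factor=2.0)  # e.g. time pressure
hep_check = adjusted_hep(0.01, performance_shaping_factor=1.0)
hep_check_given_error = conditional_hep(hep_check, "moderate")    # same crew, same shift

# Event tree: the undesired outcome requires the error AND the failed check.
p_undetected_error = hep_action * hep_check_given_error
print(f"P(action error)               = {hep_action:.4f}")
print(f"P(check fails | action error) = {hep_check_given_error:.4f}")
print(f"P(undetected error)           = {p_undetected_error:.2e}")
```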

[Fig. 1: The “Swiss cheese model” after Reason (1997), showing defences in depth with some holes due to active failures and other holes due to latent conditions]

Some researchers (e.g. Reason 1990; Kirwan 1997; Everdij and Blom 2013) refer to THERP as the most well-known method to assess human reliability and provide data to probabilistic safety assessments. The accident model acknowledged as the “Swiss Cheese model”, developed by Reason (1990), can be addressed as the most influential piece of work in the human factors field. It has been widely used to describe the dynamics of accident causation and explain how complex systems can fail through a combination of simultaneous factors (or as a result of the alignment of the holes of the Swiss cheese slices) (Fig. 1). Many Human Reliability Analysis subsequently developed were, to some extent, inspired by this model. Examples are the Human Factors Analysis Methodology— HFAM (Pennycook and Embrey 1993), the Sequentially Outlining and Follow-up Integrated Analysis—SOFIA (Blajev 2002), the Human Factors Analysis and Classification System—HFACS (Shappell et al. 2007), extensively used to investigate military and commercial aviation accidents, and the Systematic Occurrence Analysis Methodology—SOAM (Licu et al. 2007). The concept that accidents arise from an arrangement of latent failures, later renamed to latent conditions (Reason 1997), and active failures in complex systems demonstrated accuracy and practicality to guide prevention measures (Hopkins 1999). Reason’s studies of human errors have focused on the work environment, human control processes and safe operation of high-technology industrial systems, and included management issues and organisational factors. There are several methods to assess human performance in different domains, and the development of such tools was notably triggered by the advances in hightechnology industrial systems, particularly nuclear plants, aerospace, offshore oil and gas, military and commercial aviation, chemical and petrochemical, and navigation. Some of them were assessed by Bell and Holroyd (2009), who reported 72 different techniques to estimate human reliability and considered 35 to be potentially relevant. Further analysis highlighted 17 of these HRA tools to be of potential use for major


hazard directorates in the United Kingdom. These techniques are usually separated by generations, which basically reflect the focus of the analysis. The first-generation methods (developed between the 60s and early 90s) are mainly focused on the task to be performed by operators. Essentially, potential human erroneous actions during the task sequence are identified, and the initial probability is then adjusted by internal and external factors (performance shaping factors, errorforcing conditions, scaling factors or performance influencing factors, depending on the methodology) to deliver a final estimation of human error probabilities. The key step in this approach is selecting the critical tasks to be performed by operators, which are considered to be elements or components subjected to failure due to inborn characteristics, thus having an “inbuilt probability of failure”. These methods are widely recognised and commonly preferred by practitioners, probably because they provide a straightforward output such as an event tree or a probability value that can be directly integrated to Probabilistic Risk Assessments. Some examples are THERP, HEART (Human Error Assessment and Reduction Technique), presented by Williams (1986), and JHEDI (Justification of Human Error Data Information), introduced by Kirwan and James (1989). Alternatively, second generation techniques have been developed from late 90s and are based on the principle that the central element of human factors assessments is actually the context in which the task is performed, reducing previous emphasis on the task characteristics per se and on a hypothetical inherent human error probability. “A Technique for Human Error Analysis”—ATHEANA (Cooper et al. 1996), the Connectionism Assessment of Human Reliability (CAHR) based on Sträter (2000) and the Cognitive Reliability and Error Analysis Method (CREAM) by Hollnagel (1998) are good examples of this kind of approach, all reflecting the focus shift from tasks to context to provide a better understanding of human error and integrate engineering, social sciences and psychology concepts. More recent literature (e.g. Kirwan et al. 2005; Bell and Holroyd 2009) alludes to the Nuclear Action Reliability Assessment—NARA as the beginning of the third generation methods. However, it seems to be merely an update of first generation techniques, i.e. HEART, using more recent data from newer databases such as CORE-DATA (Gibson and Megaw 1999). All these methods provide a number of taxonomies to handle possible internal and external factors that could influence human behaviour. Modern data classification taxonomies are mostly derived from Swain’s (1982) work, in which he organised human errors in errors of omission and errors of commission, being the former a failure to execute something expected to be done (partially or entirely), while the latter can be translated as an incorrect action when executing a task or a failure to execute an action in time. The issue modelling human errors through the prediction of human behaviour during complex rare events was addressed by Rasmussen (1983), who envisioned the Skill-Rule-Knowledge (SRK) model. He differentiated three basic levels of human performance: skill-based, when automated actions follow an intention (sensory-motor behaviour); rule-based, when there is a procedure or technique guiding the action; and knowledge-based, represented by actions developed to deal with an unfamiliar situation. 
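The “defences in depth” picture of Fig. 1 can also be made numerically tangible. The toy calculation below, with entirely invented probabilities, compares the chance that three barriers are all breached when their holes are treated as independent with the chance when a shared latent condition degrades all of them at once; the latter is exactly the warning carried by Reason's latent-condition concept against naive redundancy arithmetic.

```python
from math import prod

# Invented failure probabilities for three defences (design check, site
# inspection, proof test) in a healthy and in a degraded organisation.
healthy  = [0.05, 0.10, 0.02]
degraded = [0.30, 0.50, 0.20]   # the same holes widen when latent conditions bite
p_degraded = 0.10               # chance the organisation is in the degraded state

p_naive = prod(healthy)         # holes treated as independent
p_common_cause = (1 - p_degraded) * prod(healthy) + p_degraded * prod(degraded)

print(f"All barriers breached, independence assumed : {p_naive:.1e}")
print(f"Same barriers with a shared latent condition: {p_common_cause:.1e}")
# The second figure is dominated by the common-cause term: defences that look
# comfortably redundant on paper can line up and fail together.
```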
Reason (1990) split human errors in slips and lapses, when an execution failure or an omission occurs, and mistakes, which result
from judgement processes used to select an objective, or the means to accomplish it. Later, Rasmussen’s theory was encompassed by Reason to further categorise mistakes in rule-based mistakes, when a problem-solving sequence is known, but an error choosing the right solution to deal with the signals occurs; and knowledgebased mistakes, when the problem is not under a recognisable structure thus a stored troubleshooting solution cannot be immediately applied. Reason also highlighted an alternative behaviour from a social context, called “violation”. This concept was split in exceptional and routine violations, both emerging from an intentional deviation from operating procedures, codes of practice or standards. Although the classification schemes are usually connected to the industrial domain for which they were originally developed, some of them are nonspecific (e.g. HEART) and thus have been successfully applied in a broader range of industries. Regardless of the variety of HRA methods available to enable practitioners to assess the risks associated with human error by estimating its probability, the substantially high uncertainties related to the human behavioural characteristics, interlaced with actual technology aspects and organisational context, turn this kind of evaluation into a very complicated matter, thus has been raising reasonable concern about the accuracy and practicality of such probabilities. Data collection and the availability of a meaningful dataset to feed human reliability and other approaches related to the assessment of human performance in engineering systems seems to be the most severe constraints. Many studies in the early 90s addressed these issues, and both the unavailability of data on human performance in complex systems (Swain 1990) and limitations related to the data collection process (International Atomic Energy Agency 1990) were considered to be problems extremely difficult to overcome. Therefore, some efforts to collect accident data such as the Storybuilder Project (Bellamy et al. 2007) were undertaken, aiming at the classification and statistical analysis of occupational accidents. In a contemporary review, Grabowski et al. (2009) suggests that the exponential rise of electronic records even worsened the problems related with human error data, stating that data validation, compatibility, integration and harmonization are increasingly significant challenges in maritime data analysis and risk assessments. This indicates that difficulties to find usable human error and human factors data are still a major concern, which deserves to be carefully addressed by practitioners and researchers. Moura et al. (2015) discriminated some of the difficulties that might be preventing the development of a comprehensive dataset to serve as a suitable input to human performance studies in engineering systems. Main issues can be summarised by: (i) dissimilar jargons and nomenclatures used by distinct industrial sectors are absorbed by the classification method, making some taxonomies specific to particular industries; (ii) the effort to collect human data is time-consuming, and the need for inserting the “human contribution figures” into safety studies favours the immediate use of expert elicitation, instead of a dataset; (iii) the accuracy of the collection method is very difficult to assess, and distinct sources (e.g. field data, expert elicitation, performance indicators or accident investigation reports) can lead to different results; and

(iv) the interfaces between human factors, technological aspects and the organisation are context-dependent and can be combined in numerous ways. This turns early predictions into a very difficult matter, due to the variability of the environment and the randomness of human behaviour.

References Bell J & Holroyd J., 2009. Review of human reliability assessment methods. Suffolk: HSE Books. Bellamy L.J. et al., 2007. Storybuilder — A tool for the analysis of accident reports. Reliability Engineering and System Safety 92: 735–744. Blajev, T., 2002. SOFIA (Sequentially Outlining and Follow-up Integrated Analysis) Reference Manual. Brussels: EATMP Infocentre. Cooper, S. et al., 1996. NUREG/CR-6350 - A Technique for Human Error Analysis (ATHEANA) - Technical Basis and Methodology Description. Washington, D.C.: US Nuclear Regulatory Commission Library. Dekker, S., 2014. Field Guide to Understanding ‘Human Error’. 3rd ed. Farnham: Ashgate Publishing Ltd, 2014. Dhillon, B.S., 1986. Human Reliability: With Human Factors. New York: Pergamon Press Inc. Everdij, M. & Blom, H., 2013. Safety Methods Database version 1.0 [Online]. Amsterdam: National Aerospace Laboratory (NLR). Available from: http://www.nlr.nl/downloads/safety-methods-dat abase.pdf (Accessed: 9 April 2014). Gibson W. H. & Megaw T. D., 1999. The implementation of CORE-DATA, a computerised human error probability database. Suffolk: HSE Books. Grabowski, M. et al., 2009. Human and organizational error data challenges in complex, large-scale systems. Safety Science 47: 1185–1194. Hollnagel, E, 1993 The phenotype of erroneous actions. International Journal of Man-MAchine Studies, 39 (1) 1–32. Hollnagel, E., 1998. Cognitive Reliability and Error Analysis Method. Oxford: Elsevier Science Ltd. Hopkins, A., 1999. The limits of normal accident theory, Safety Science, 32, pp. 93–102. International Atomic Energy Agency, 1990. Human Error Classification and Data Collection. Report of a technical committee meeting organised by the IAEA, Vienna, 20–24 February 1989. Vienna: INIS Clearinghouse. Kirwan, B., 1997. Validation of Human Reliability Assessment Techniques: Part 1 – Validation Issues. Safety Science 27(1): 25–41. Kirwan, B. and James, N.J., 1989. Development of a human reliability assessment system for the management of human error in complex systems. Proceedings of the Reliability ‘89, Brighton, 14–16 June: pp. 5Al2/1–5A/2/11. Kirwan, B. et al., 2005. Nuclear action reliability assessment (NARA): a data-based HRA tool, Safety & Reliability, 25, No. 2, pp. 38–45. Licu, T. et al., 2007. Systemic Occurrence Analysis Methodology (SOAM) - A “Reason”-based organisational methodology for analysing incidents and accidents. Reliability Engineering and System Safety 92: 1162–1169. Moura, R. et al., 2015. Human error analysis: Review of past accidents and implications for improving robustness of system design, Nowakowski, T. et al. (Eds), Proceedings of the 24th European Safety and Reliability Conference, 14–18 September 2014, Wroclaw. London: Taylor & Francis Group, pp. 1037–1046. Moura, R. et al., 2016. Learning from major accidents to improve system design, Safety Science Journal 84: 37–45, https://doi.org/10.1016/j.ssci.2015.11.022.

1 Human Errors in Engineering

7

Moura, R. et al., 2017. Learning from major accidents: Graphical representation and analysis of multi-attribute events to enhance risk communication, Safety Science 99: 58–70, https://doi.org/ 10.1016/j.ssci.2017.03.005. Pennycook, W., Embrey, D., 1993. ‘An operating approach to error analysis’, Proceedings of the First Biennial Canadian Conference on Process Safety and Loss Management, April, Edmonton, Canada. Rasmussen, J., 1983. Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models, IEEE Transactions on Systems, Man and Cybernetics 3, May, vol. SMC-13. Reason J., 1990 Human error. New York: Cambridge University Press. Reason, J., 1997. Managing the Risks of Organizational Accidents. Brookfield, USA: Ashgate. Reason, J., 2013. A Life in Error: From Little Slips to Big Disasters. 1st ed. Farnham: Ashgate Publishing Ltd. Shappell, S., et al., 2007. Human Error and Commercial Aviation Accidents: An Analysis Using the Human Factors Analysis and Classification System. Human Factors 49(2): 227–242. Sträter, O., 2000. Evaluation of Human Reliability on the Basis of Operational Experience. Cologne: GRS. (English translation of the Report GRS-138: Beurteilung der menschlichen Zuverlassigkeit auf Basis von Betriebserfahrung.) Swain, A., 1963. A Method for Performing Human Factors Reliability Analysis, Monograph-6851, Albuquerque: Sandia National Laboratories. Swain, A., 1982. Modeling of Response to Nuclear Power Plant Transients for Probabilistic Risk Assessment, Proceedings of the 8th Congress of the International Ergonomics Association, August, Tokyo. Swain, A., 1990. Human Reliability Analysis - Need, Status, Trends and Limitations. Reliability Engineering and System Safety 29: 301–313. Swain, A., & Guttmann, H., 1983. NUREG/CR 1278 - Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Albuquerque: Sandia National Laboratories. Williams, J.C., 1986. HEART – A Proposed Method for Assessing and Reducing Human Error. Proceedings of the 9th Advances in Reliability Technology Symposium, Bradford, 2–4 April 1986. Warrington: National Centre of Systems Reliability. Woods, D. et al., 2010. Behind Human Error. 2nd ed. Farnham: Ashgate Publishing Ltd.

Chapter 2

Design and Construction Case Studies
Franz Knoll

1 Introduction

Some or most of the considerations reviewed in the following text may appear to the reader to be nothing more than trivial recapitulations of common sense, thoughts and things that are part of the everyday activity of a seasoned building participant. This may be so, but I believe it to be worthwhile to assemble some of that common sense in one document, to serve as a basis for discussion and for the following review of case studies. From these it will become apparent that real-life scenarios, where something went wrong or amiss, are of a bewildering variety, with a number of parameters becoming important in one situation while in another context a different set needs to be considered. Likewise, the strategies for “error hunting”, to be effective, must be adapted to each situation according to its particular character.

“Errare humanum est”, the ancient conclusion of impotence and resignation in the face of human imperfection, has become an expensive one in the construction industry, where the consequences of an error are measured in large amounts of money or in severe injury and death to, mostly as it were, other humans. A globally leading school has recently, in one of its self-congratulating periodicals, been trumpeting the value of errors in research. This may be so inside facilities devoted to basic research; it is not a very useful attitude in real life, where responsibility for one’s actions is brought to bear immediately and unmitigated. It means that shrugging our shoulders and repeating a wisecrack proverb is not doing justice to the situation, and something wants to be done about this old fact of life. There is little hope that it can be made to disappear altogether, but ways and means ought to be found to minimize errors, in terms of frequency as well as effects.

The following discussion is focussed on just this, to eliminate and mitigate as much as possible the mistakes, as well as their consequences. Beginning with the circumstances where errors originate, their presumable history of perpetuation and eventual apprehension and correction is discussed, along with the reasons why it does not always turn out well. Strategies to improve the situation in general, or in particular circumstances are being explored with their application and performance at the various stages of the building process. It can be viewed as an organism, much like a living being, where infections, maladies, diseases, injuries occur and are mostly corrected or made good by the organism and its participating parts themselves, or by external agents such as doctors. The comparison does not hold in all respects, but a number of parallels exist that can teach us something about the process of healing and correction. It is the centre piece of the first part of this text, namely the forms of direct and directed intervention aimed at minimizing errors and their effect, indeed an optimization process in itself which must respect the variable elements of every context. In the second part of the text, a number of real-life scenarios are reviewed, some of them of a typical or even notoriously recurring nature, some others relating to the specific circumstances of the history of the event. As a matter of course, in many instances, anonymity must be preserved, except where earlier publications, news reports etc. have made this unnecessary. The selection of cases has been made without any specific mission in mind, except this one: To show the bewildering variety of causes, circumstances and consequences associated with each specific event. Part of the anecdotic reviews relate to events in which the author was implicated in one way or another, so the tale will reflect my subjective interpretation which in turn will reflect on the selection of features left out as immaterial or unimportant. Every older engineer will remember a number of occasions where things went wrong, some of which may have involved his/her own decisions and interventions. From the general discussion as well as from the review of the reported cases it will become evident that it may not be very useful to insist on a clear and unequivocal definition of the concept of error as it blends in with the industrial, or one might say artistic approach to construction which is essentially an optimization process, as well as, in another sense, a gamble with risks taken more or less consciously and judiciously. To pinpoint a mistake may be easy when standing in front of the debris after a collapse caused by punching shear failure. It may be a different thing when reviewing an excavation failure due to a combination of uncertainty about soil conditions, unfavourable weather, inferior workmanship, where the excavation method had been selected in an informal consensus among the knowledgeable and interested parties, where a risk was perceived but had to be accepted. Whose fault was it then? Courts have begun to recognize this problem and construction is seen more and more as a team affair, with the sorting out and apportioning of responsibility considered impossible or unnecessary. However, if something goes wrong, an error has been committed regardless of philosophical or linguistic considerations and it is the

purpose of this text once again to explore ways and means to understand, to try to avoid errors and to mitigate their effects.

2 Genesis of Errors

The construction process, starting with the first idea, through planning, design, physical production and use, to modification, “deconstruction” and clean-up, can be understood as a history of information: created, optimized, modified, translated, expressed in physical form, exploited, removed from physical reality and eventually lost. Modern obsession with nostalgia has not succeeded in completely and comprehensively recording the fate of built objects. As everyone who has had to deal with modification, restoration, conservation, certification and demolition of existing construction knows, original drawings and construction records tend to become lost following changes of ownership or administration, let alone records of alterations carried out in the meantime. Public owners are faring considerably better than private owners in this respect, but even in their archives it is sometimes impossible to find pertinent and up-to-date information.

That information takes a number of forms as it develops, like some life forms such as insects, amphibians or plants. From the proverbial hand sketch on the napkin made during a business lunch, through representation on drawings, in calculations, specifications, the physical reality and eventually history books, it embodies a similar complexity which challenges the capacity of even the most thorough and universal tools of analysis. It has a “life of its own”, and the participants, who may be seen as part of the organism, cannot command more than a partial view of it at best, limited to certain aspects.

Much like in a living organism, errors occur in the creation, translation, communication and transformation of information, with mechanisms of correction ready and at work but less than completely successful. Biological research is, at enormous expense, presently trying to understand what is happening in living beings, with errors and their correction very much in focus. Human endeavours like construction have been lagging behind badly in this respect, with the price for errors still being paid at a sorry rate. It is not research about the legitimate physical behaviour of materials which can be blamed for this, after the x-millionth piece of concrete has been crushed in the laboratory, and the studies of wind characteristics and wave dynamics, or of seismic records, which are filling books and data banks. It is the mistakes and errors human agents make in their work which have escaped scientific attention almost completely, in spite of conclusive evidence that almost everything going wrong is due to human error of some kind.

In this chapter, the circumstances of the genesis of errors shall be examined, following the stations of the construction process.


2.1 Conceptual Errors

To have an idea for the solution of a structural problem is the act of an artist, much like a composer hitting on a melody or harmony upon which to build a musical piece. Of course, this does not apply to trivial tasks such as selecting a steel section for a predetermined bending moment, or similar pieces of dimensioning where the concept is known beforehand from earlier, similar work and the task reduces to putting numbers to it. However, in most projects a concept of some novelty must be found, forming the basis for further steps. If a faulty or inferior concept is the starting point, the rest of the process will suffer, becoming more complicated, more difficult to control, more costly or faulty. Often this makes the finding of a good idea or concept a difficult task, demanding a high level of experience and insight into the creation, function and performance of structures, be it single elements or entire assemblies, including “non-structural” elements, topology and geometry, choice of materials, construction methods etc.

Many options exist at the beginning of a project, sometimes in a bewildering array of variations. It is rare, however, that a large number of options and their combinations can be analyzed thoroughly and with sufficient precision to form the basis of value engineering. Usually, the building team will short-circuit this task and select very few options, most of the selection being inspired by past experience with successful applications, the basic concept literally being pulled out of a drawer. And here is where the first serious pitfall typically lurks. The new application of a known and tried concept amounts to an imitation, which may commonly be thought to be inferior to the original. Rightly so, if it is limited to copying rather than readaptation and thoughtful adjustment. The same exact circumstances almost never apply, which is self-evident given the many parameters involved in the description of the task in the case of a real challenge: physical, contractual, economic, social, legal, administrative, even political circumstances do not usually repeat themselves precisely and in concert. However, in many cases the review of the most important ingredients quickly leads to a small selection of reasonable options, all the others being discarded for evident reasons.

For an idea to mature into a valid concept, it must be seen to satisfy to a reasonable degree all requirements and violate no limitations or restraints. If one of them escapes attention, is ignored, or is discarded out of hand without good reason, the ground is laid for an error to be generated. This is not trivial, since a systematic and consistent methodology for the analysis of newly created concepts does not exist. Much of it is left to the circumspection and associative thinking of the participants, which includes an element of “chance”, related to the partially “stochastic” or “chaotic”, i.e. uncontrollable, character of that activity, which is strongly subjective and may even include emotional elements. This may not be such a bad thing, since the alternative, a fully controlled protocol-like process, is limited by definition to the aspects and parameters that went into the creation of the protocol. This consideration will accompany much of the discussion about the elimination of errors: rigid preconceived protocols have their use, but their
scope is limited to the specific aspects they address. They will never be instrumental in the finding of ideas, concepts and solutions to real problems, or conversely, to the detection of basic faults. Conceptual errors are the most difficult to rectify because they will affect most everything that follows, much like a genetic defect which will be inherited by the offspring and disturb their well being. Conceptual errors cannot always be measured in terms of numbers or values but may for example result in excessive complexity of the work, leading in turn to oversights, omissions or incompatibilities. Complexity is one of the major enemies of good quality, hampering control of many features of the project and opening the door to Murphy’s Law because attention is spread too thinly. Errors of concept often originate in situations where the needs and necessities of some aspects or trades were introduced too late in the process. If the structural engineer is called only when the architectural drawings are finalized, chances are that awkward structural conditions were created which to deal with will require complicated, even risky solutions at a cost which could have been avoided had the advice of the structural engineer been sought earlier. Innovative projects require more and more thorough insight into the consequences of the decisions constituting the conceptual information. As an example consider long span structures: One can identify ideal ranges of spans where a certain type of structure can be built advantageously with certain materials. This range can be determined using relative unit cost per constructed area (for example for a roof) which typically results in curves as suggested in Fig. 1. Most often a relatively wide “trough” will be found, equivalent to the range of a reasonable application, beyond which the cost of the building rises quickly. To leave this range may by itself constitute a conceptual error, requiring special and complex methods of fabrication, transportation, erection, control as well as special designs of other components adjacent or in contact with the “out of range” element. Modern technology is advancing, and what may have been a severe limitation in the past may become softened up by new techniques, making things work which previously could not. This progress is incremental and very high prices have been paid for approaching or exceeding the contemporary limits (Examples: Cathédrale de Beauvais, Tacoma Narrows Bridge, Quebec Bridge, Stade Olympique de Montréal, Tower of Babel, Pyramid of Pharaoh Snefru in Saqqara and the Tower of Pisa where the angle was changed on the way up when problems started to show up). Fig. 1 Typical relationship span versus cost
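One schematic way to see why such a cost trough appears: part of the unit cost falls with span (fixed items such as supports, foundations and connections are spread over more area), while the structural material needed per unit area grows more than linearly once spans become long. The sketch below uses an entirely invented cost function of that form; the coefficients have no empirical basis and only reproduce the qualitative shape suggested by Fig. 1.

```python
# Purely qualitative cost model: unit cost = a fixed share spread over the span
# plus a material share that grows with span. All coefficients are invented.
def unit_cost(span_m, fixed=400.0, base=80.0, growth=0.004, exponent=2.2):
    return fixed / span_m + base + growth * span_m ** exponent

spans = range(10, 101, 10)
costs = [unit_cost(s) for s in spans]
best_cost, best_span = min(zip(costs, spans))
for s, c in zip(spans, costs):
    marker = "  <- near the bottom of the trough" if s == best_span else ""
    print(f"span {s:3d} m : relative unit cost {c:6.1f}{marker}")
# A broad, flat region around the minimum corresponds to the 'reasonable range';
# leaving it in either direction makes the relative cost climb quickly.
```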


The example of the Quebec Bridge which collapsed twice during construction, shows that the consequences of conceptual errors (e.g., failure to consider a collapse mode triggered by local instability) are specially severe when the limits of technology are being approached. The Titanic may be cited as another example: Modern research has revealed that not everything had been perfect with the construction of this “record” ship, with time pressure, less than ideal contractual and working conditions as well as unsuitable materials and inferior workmanship, all interfering with the intended quality of the product. Normally, the project team ought to be aware of this sort of relationship, at least in the back of the minds of the leading participants. Conceptual decisions are made at a time when the future stages of the construction process are only partially known, the gaps being mentally filled in by imagination about how things might turn out. Most of the time, experience will provide a reservoir of predictions for many aspects. Novel features however will often end up in a “learn as you go” approach. Whether or not a conceptual decision turns out to constitute an error will in this case become known only later in the process, when the consequences begin to show—and to cost. The engineers’ task has often been described as “to make the right decision on the basis of insufficient information”. This is particularly true at the beginning, when the degree of the “insufficiency” is at its maximum. Experience provides the advantage of “hindsight” on past exercises, and it is well worth it to exploit this as rationally as possible, reducing that degree. Conceptual errors are normally thought of as qualitative or “discrete” (choice of materials, topology, geometry etc.) rather than quantitative (sizing of elements, detail specification of material properties, quality control), the latter being more a task of later stages in the process named design, dimensioning etc. There is a link however between the two where a qualitative decision may lead to something which quantitatively cannot be. A recent example may illustrate this: On a tall concrete tower an extension in structural steel (“Antenna Mast”) was to be erected, in an area of high seismic activity. A conceptual decision had been taken early on not to provide post-tensioning in the concrete part for valid reasons, leaving it with a reduced overall bending stiffness due to cracking in a severe earthquake. The dynamic analysis of the structural model revealed that this exposed the Antenna Mast to forces which made it impossible to be designed (material thicknesses, bolted connection etc.) within the restrictions imposed (user requirements, availability of materials, geometry, erection methods etc.). Fortunately this was learned in time and the decision about the absence of post-tensioning could be relaxed, creating acceptable stiffness conditions. A conceptual error had been recognized and corrected in time.
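The stiffness effect in the antenna-mast example can be made tangible with the textbook single-degree-of-freedom relation T = 2π√(m/k); the numbers below are invented and unrelated to the actual tower. Halving the effective stiffness (for instance through cracking of a section without post-tensioning) lengthens the fundamental period by about 40%, which shifts the structure on the design spectrum and changes the demands handed to anything mounted on top of it.

```python
from math import pi, sqrt

# SDOF idealisation of a tall tower: T = 2*pi*sqrt(m/k). Values are illustrative.
m = 2.0e6                      # kg, effective modal mass
k_uncracked = 8.0e7            # N/m, effective lateral stiffness, gross section
for label, factor in [("uncracked", 1.0), ("cracked (50% EI)", 0.5)]:
    k = k_uncracked * factor
    T = 2 * pi * sqrt(m / k)
    print(f"{label:17s}: k = {k:.2e} N/m, fundamental period T = {T:.2f} s")
# The period ratio is 1/sqrt(0.5) ~ 1.41: once it is allowed to crack, the same
# structure responds from a different part of the response spectrum, which is
# why the post-tensioning decision fed straight back into the mast design.
```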

2.2 Modeling Errors, Misconception, Omission

Structural design and analysis in our time are mainly based on model representations of what is projected to become a future physical reality. Representations not being identical with the real thing, the degree of congruence between the two becomes a decisive parameter, "representativeness", if one wishes to pursue a linguistic atrocity. It is an all-inclusive qualification, involving completeness (no relevant property of the proposed structure and its future exposure must be left out), similarity (all properties such as topology, geometry, stiffness, limits of resistance, etc. must be introduced in such a way as to match the structural response of model and reality) as well as the identification (quantitative and qualitative) of anticipated divergence or variation (in the form of a bell curve, for example) between one and the other.

This seems trivial at first sight, thinking of a simple structural element or a typical assembly. With increasing complexity it becomes gradually less so, however, and this may not only involve physical properties but such things as future developments of the project (delays, modifications, deficiencies in material supply, qualification of personnel, quality control etc.). Even the most careful planning in ideal conditions will not prevent the future reality from diverging from the anticipated description which was used for the model representation.

Modeling usually makes use of simplification for two good reasons and one less fortunate one:

• A simple model makes it possible to review and study all its features in useful time.
• A simple model is transparent and provides digestible evidence about how a structure performs.
• At the time model analysis usually takes place, detailed information about the future structure is not available anyway with a precision justifying complex modeling.

To be representative, a simplified model must provide correct answers to all pertinent questions concerning the response of the structural system to exposure, and eventually its interaction with the environment. This brings about an important consideration: besides being representative of the structure itself, a model must reflect conditions at the boundaries, for example the characteristics of its interaction with the soil. This is notoriously infested with a degree of uncertainty considerably greater than is true for the structure itself, which is after all man-made, i.e. controllable to a degree, whereas soil conditions must frequently be accepted "as is", alterations being too costly or impossible. For this reason, among others, models may have to be varied using different sets of values to parametrically represent potential conditions. This is a further reason to keep models simple, in order to keep the data produced clear and "übersichtlich", i.e. comprehensible. Often a set of different simplifications is used to analyse different features of a design, each one targeted and adapted to a limited set of questions: if foundation settlements are being studied, the roof structure of a high-rise building need not be modeled in detail.
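Since soil conditions carry far more uncertainty than the man-made structure, one practical way of respecting this in the model is a parametric sweep over the soil parameter instead of a single "best estimate". A minimal sketch of the idea (a footing idealised as a rigid plate on an elastic subgrade; load, footing size and subgrade moduli are assumed placeholder values, not taken from the text):

    # Settlement of a rigid footing on an elastic subgrade: s = p / k_s,
    # with bearing pressure p and modulus of subgrade reaction k_s.
    # The uncertain soil parameter is swept over a plausible range.
    load_kN = 800.0                 # column load (assumed)
    footing_area_m2 = 4.0           # 2 m x 2 m footing (assumed)
    pressure_kPa = load_kN / footing_area_m2

    for k_s in (10_000.0, 30_000.0, 100_000.0):    # kN/m3, soft to stiff (placeholders)
        settlement_mm = pressure_kPa / k_s * 1000.0
        print(f"k_s = {k_s:7.0f} kN/m3 -> settlement about {settlement_mm:4.1f} mm")

The spread of results, here a factor of ten, is itself the useful answer: it shows which design decisions are robust against the soil uncertainty and which are not.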

Thanks to powerful computers, work with all-inclusive detailed models is often attempted, with the illusion that everything, including all relevant relationships, is being analysed exhaustively in this way. Analysis must include, however, besides the production of large amounts of data, its digestion through reading and interpretation, a feat which becomes difficult when the answers sought are buried in stacks of paper or masses of flickering numbers and graphs on the computer screen. This brings up the error of oversight, where the right warning lights had really been there but were drowned to invisibility in the mass of irrelevant data.

In a recent experience the results of a numerical analysis of a relatively simple structural steel assembly were submitted for review, including the representation of a large number of loading situations. It turned out, following a "back of the envelope" check, that the proposed structure was nearly unstable laterally, due to the absence of bracing and of reliable framing connections. Careful and thorough "digging" in the massive output of results would have shown that this was so, on the condition that the relevant numbers were made visible on the print-out, which they had not been.

Design consists of two complementary activities: to create a proposition for a structure and to analyse its performance. This is basically an iterative optimization process, as it may involve numerous conditions of different character to be respected by the final version. Most of the time numerical or graphic models are used to do this, physical models such as prototypes or scale models being expensive and lengthy to prepare and to vary. The seasoned designer will usually, except in novel or complex tasks, be able to "zero in" quickly on the relevant features of a structure, reducing the modeling to a set of extreme simplifications and the iteration to one or two steps. However, the risk of omission is always present and even the most experienced designer may succumb to this type of error. Some people have difficulties sleeping in the early morning hours—this may be a useful thing as it features a sort of rumination of the mental processes of the recent past and has, in this author's experience, led to a number of occasions where omissions, things forgotten, surfaced and could be "rescued".

The error of omission is at the root of a substantial proportion of mishaps, even if unforeseen events or circumstances are excluded. It comes in different guises: in the case of a high-rise building which became public, the situation of the wind acting diagonally on the square building had not been considered in the design, resulting in dangerous distress the structure would have suffered had it not been reinforced in time. The failure to properly verify the resistance of a single element on the load path, especially where it is a singular load path, has been notorious for causing accidents, often in the form of a "bad detail" where a seemingly minor part broke and triggered a progressive collapse because the design and analysis had not done justice to its real situation, including imperfect installation, cyclic stress, or deformation. The error of omission may be found at other stages of the construction process, but it appears obvious that the design activity must include a complete review of all parts and portions and all the foreseeable exposure before construction is started—this is the only time when all parts and their interaction can be seen at once.
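The cheapest defence against answers buried in stacks of paper is a global sanity check that does not depend on the model's internals. The "back of the envelope" check mentioned above was a lateral stability estimate; another generic example is verifying that the support reactions reported by the program balance the total applied load. A minimal sketch (the numbers stand for values read off a printout and are invented here):

    # Global equilibrium check on the output of a large structural model.
    # Applied loads and reported reactions would be read from the printout;
    # the values below are invented for illustration.
    applied_loads_kN = [1250.0, 980.0, 2100.0, 640.0]
    support_reactions_kN = [1180.0, 1475.0, 1995.0]

    total_load = sum(applied_loads_kN)
    total_reaction = sum(support_reactions_kN)
    imbalance = abs(total_load - total_reaction) / total_load

    print(f"load {total_load:.0f} kN, reactions {total_reaction:.0f} kN, imbalance {imbalance:.1%}")
    if imbalance > 0.01:   # anything beyond rounding deserves a second look
        print("WARNING: global equilibrium not satisfied - check loads, releases, supports")

Such a check takes minutes, is independent of the model being right, and condenses an unreadable mass of output into a single number that either reassures or alarms.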

Practically, it may often turn out unachievable to do this systematically, since not all pertinent information is simultaneously available at any one time.

The error of misrepresentation is closely related to the error of omission and in some cases may amount to the same thing. As an example, consider the case of a building erected on piles or caissons with pile caps embedded in soft soil, with no basement. A strong earthquake will impose displacements on the structure which will be communicated to it through the piles and pile caps. If the top layers of the soil are—as is probable in the circumstances—very soft as well as disturbed by the construction activity, the majority of the load will act on the piles themselves below the pile caps, exposing them to shear and bending in addition to the vertical load from gravity. If the structural model did not include this effect, an error of misrepresentation as well as omission has been committed.

Another example of misrepresentation is taken from Knoll and Vogel (2009) in "Design for Robustness", where the case of the corner column in a multistorey building is discussed, a column which relies on the lateral resistance of the frame at the perimeter of the building. If elastic analysis is used to determine the forces in the columns, an unsafe situation may be created where the corner column becomes overloaded prematurely: modeling with elastic properties does not always lead to safe designs.

More trivially, misrepresentation in modeling may be created by simple mistakes, wrong numbers, faulty support conditions. Some time ago one such mistake found its way into a legal court battle, where it became part of the judgement following lengthy and painstaking questioning.1 The mass of data which is being created to represent a structure in its mathematical model form is bound to hide mistakes—a simple probabilistic reflection shows that the more one gives in to the temptation of unlimited computer power and adds detail to the model, the more mistakes will find their way into the analysis and escape detection. Most of these data are generated with the aid of software, making it quick and easy to build the model. What is not so easy is to verify whether what was generated is what one intended it to be. In large and complex models even an experienced engineer finds it difficult to locate the modeling mistake when his "back of the envelope" calculation does not seem to agree with what the computer is saying. If that hand calculation is omitted, no one will even try to find mistakes and the numbers will be taken at face value. Stories about this state of affairs abound and will certainly continue to do so, the engineer with a pencil in his hand becoming a discontinued model.

Modeling errors likewise find their way into those hand calculations, resulting in "bad details" where the load path through a succession of elements has not been properly represented for each of them, nor for the connections between them. Often that load path is imagined to be simple and straightforward, forgetting about eccentricities, prying forces etc. due to imprecisions. If a measure of robustness, in the form of "just in case" extra strength or ductility, is not provided, the consequences may turn out to be severe. It is not always easy to foresee what will happen in the reality of construction, under time pressure.

1 See Sect. 7.15.

If means of adjustment to accommodate geometric imprecisions or thermal deformations are not provided, the construction team, or eventually the structure itself, will suffer from the incompatibilities in their own ways. Modeling that did not include a representation of long-term or cyclic deformations, at least in a simplified way, is often part of the reason why things do not work as they were thought to, with leakage, corrosion, cracking, even buckling (e.g. of veneers) setting in over time. These consequences may not always look as dramatic as an outright collapse, but in terms of lost value due to costly correction they represent a large proportion of the total cost of mistakes.

Analysis of structures with mathematical models should always be accompanied by keeping track of uncertainties, at least qualitatively gauging the consequences of potential deviations. What will happen if the rigid connection which was modeled does deform? Or what will be the consequences of a certain erection sequence on the real distribution of loads among a number of elements? Modeling is synonymous with adopting hypotheses, or assumptions. Each hypothesis must be accepted as being accompanied by uncertainty of a certain type, and it is the engineer's task to see to it that the values assumed for calculation, be it with the computer, the slide rule or the pencil, are such as not to produce unsafe results; in this case "unsafe" means causing overly optimistic, unrealistic decisions. To be on the "safe side" is often quoted as what should be done in cases of doubt or uncertainty. Cases exist, however, where a "safe side" does not exist: an increase in strength may be accompanied by an increase in weight, or in brittleness.
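The question "what will happen if the rigid connection which was modeled does deform?" can be made concrete with textbook beam formulas; the envelope of the two extreme end-fixity assumptions is the prudent design basis (the load and span below are assumed for illustration):

    # Sensitivity of beam moments to the end-fixity assumption.
    # Fixed ends:  M_end = w*L^2/12, M_mid = w*L^2/24
    # Pinned ends: M_end = 0,        M_mid = w*L^2/8
    w = 25.0   # kN/m, uniform load (assumed)
    L = 8.0    # m, span (assumed)

    fixed_end, fixed_mid = w * L**2 / 12, w * L**2 / 24
    pinned_mid = w * L**2 / 8

    print(f"rigid ends : M_end = {fixed_end:.0f} kNm, M_mid = {fixed_mid:.0f} kNm")
    print(f"pinned ends: M_end = 0 kNm,   M_mid = {pinned_mid:.0f} kNm")
    # If the nominally rigid joint rotates, the midspan moment drifts from
    # w*L^2/24 toward w*L^2/8, i.e. it can triple; a member sized for the
    # rigid model alone would then be unconservative at midspan.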

2.3 Calculation Errors

People who were educated before the arrival of electronic calculators and mass-produced computers remember what it was like to do arithmetic with a pencil and a piece of paper or a slate: one could never be sure of the result until one had checked it by redoing the calculation, or at least by "casting out nines". Mostly, mistakes were made in the elementary operations making up calculations with numbers of several digits. This is still the case, of course, when doing hand calculations. If one uses a calculator or computer, this type of mistake has mostly disappeared, electronic devices being rather trustworthy in this respect. A new type of mistake has taken its place, however: pushing the wrong button. The jury is still out on whether the change was for the better or the worse.

What may be said, though, is that hand calculation engages concentration during the entire operation, unfamiliar patterns or unexpected outcomes standing out and triggering a second look. This is not the case for automatic arithmetic, let alone large computer calculations, where the time the operation takes in the electronic device approaches zero, which leaves no time for the brain to develop a feel for what one is doing. The "thinking process" in the brain that accompanies hand calculation includes a sort of peripheral perception, like somebody walking or driving a car faintly perceives all sorts of things around himself, most of which are of no consequence to what is
intended. Some of them may, however, include a cue that relates to the process, like an obstacle, or something triggering an interest. If the "thinking" is done by an automatic device, that circumspection will not take place, as there is no time and no engagement of the brain.

Modern software offers calculation routines of increasing sophistication and inclusiveness, with handy aids to make input easier. This shortens the time one is forced to pay attention to the task, once again cutting down on the critical review process which attention includes, and it introduces a new style of mistake—it shall be called the computer glitch for this discussion. It mostly affects automated data production through instructions to the computer, and it often produces results which are so obviously wrong as to be recognized even by uninformed people. These are not the serious glitches. Far more dangerous are errors which result in conclusions that are not sufficiently bizarre to be caught. Modern safety factors for structural performance are of the order of magnitude of 1.5–4 or so, and it is easy to see that an error of a similar magnitude may escape attention even though it "eats up" the safety margin completely. Many errors leading to serious accidents are of this class. An experienced engineer may recognize a variation of 20% or even less from familiar values in similar circumstances, but if that similarity, as well as the experience, is reduced, more substantial errors may escape recognition.

In a recent case a peer review concluded, after extensive study of a project, that the general design of the bolted connections was systematically in error, by a factor of nearly 2, on the unsafe side. It had escaped the attention of the review team that all bolted connections were using twin splice plates, so that the bolts acted in double shear. Fortunately, the design was not in error but the peer review was. It illustrates the above discussion.

Complicated calculation is fertile ground for errors, and the more complex it gets the more error-prone it will be, a simple and plausible probabilistic consideration. But worse than this, error-proneness will increase more than linearly with the volume of calculation. It is well known from computer programming that the time it takes to make a programme work increases with about the 2.5th to 3rd power of its length—where length can be taken as a measure of complexity. Most of the time spent on programming does not go to the creation of the programme in the first place but to error hunting. This, in this author's experience, applies to any calculation involving human action such as writing down numbers, pushing buttons or creating numerical models. A calculation longer than a few lines must be considered wrong until checked by independent means—repetition of the same operation being quite ineffective. This is especially true with modern-style building codes and handbooks, where calculations must be based on complicated and opaque expressions involving a multitude of parameters, some of which require a set of hypothetical assumptions and complex calculations themselves. If one does not perform the same operation routinely, results are bound to be erroneous—not by orders of magnitude, because those would be readily recognized by standing out, but by factors of 2, 3, 4 etc., i.e. the most dangerous kind.
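The double-shear anecdote above turns on a factor that a long calculation can easily swallow: whether a bolt works in one or in two shear planes. A minimal sketch with nominal values (simplified, no code resistance factors; bolt size and shear strength are assumed for illustration):

    # Nominal shear capacity of a bolt: V = n_planes * A_b * f_v
    # (simplified: no resistance factors, threads excluded from the shear planes)
    import math

    d_mm = 24.0                    # bolt diameter (assumed)
    f_v_MPa = 0.6 * 800.0          # nominal shear strength ~ 0.6 x ultimate, grade 8.8 (assumed)
    A_b_mm2 = math.pi * d_mm**2 / 4.0

    for n_planes, layout in ((1, "single splice plate, single shear"),
                             (2, "twin splice plates, double shear")):
        V_kN = n_planes * A_b_mm2 * f_v_MPa / 1000.0
        print(f"{layout:34s}: V about {V_kN:4.0f} kN")
    # The ratio is exactly 2 - of the same order as the whole safety margin,
    # and therefore invisible unless somebody asks which shear planes really exist.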

Most of these empirical rules and expressions are based on some series of laboratory testing and/or on schematic models of the physical circumstances at work in the minds of the authors of the paper which led to the insertion of the formula in the rule book. Empirical approaches being what they are, based mostly on relationships of two parameters, more rarely three or four, one translates what amounts to a cloud of dots in the graphic representation into a more or less inspired algebraic relationship which often has no comprehensible relation to the physical reality being addressed. Generations of researchers have not succeeded in finding a rational description of the response of reinforced concrete to shear. Users of the rule book are therefore confronted with the task of calculating something which largely escapes imagination and comprehension.

Imagination, i.e. making up a mental picture of what is at hand, is an extremely important ingredient in the process of engineering, i.e. the creation of things that will work. To be bereft of it because one does not "understand" what one is doing is a very dangerous condition indeed. The same generations of researchers who have made building codes and handbooks exponentially more voluminous and complicated with time have commendably tried to help with the imagination, sometimes successfully, sometimes not. It is mostly left to the practical engineer to try and "catch on" to the findings of academia, without much defence in his hand. Error-proneness being the business of the practitioner, this looks like a rather sorry state of affairs, and one encounters numerous seasoned designers who will desperately fall back onto the codes and books of long ago for their calculations2 and to find out what is reasonable, and what is not. Younger engineers do not have this privilege and will be bound to believe what the computer puts out.

2 The Canadian Building Code of 1970 weighs about 600 g. The same subject matter is presently treated in documents weighing several kilogrammes.

2.4 Errors of Translation, Communication and Coordination

As the information which makes up a project at any one time is translated into different representations, from hand sketches into drawings, from calculation notes into dimensions and specifications, and eventually into physical reality, it is exposed to deleterious agents and events and to modification; some of it may become lost, forgotten, ignored, discarded. Some of it will become distorted, qualitatively or quantitatively, or both. As a rule, translation will reduce, rather than enhance, the quality of information unless it is reviewed and verified.

Modern computer-aided drafting produces drawings of a sharpness, clarity and neatness which were not achievable earlier. These drawings, though, are just as wrong as before, or even more so, because the time spent and the attention paid to their making have been much reduced. To make things worse, part of that attention must be channelled toward the operation of the computer-aided drafting system which, with its many options, is considerably more complicated than the pencil and the ruler were. Often drawings are made by technicians whose experience does not usually include much insight about the consequences of their activity; it has been replaced, at
least partially, by computer protocols inside the "black box". It has become a matter of seconds to reproduce or transfer a graphic representation of a detail from one drawing to another, from one project to the next, or to modify it. Time pressure will most often be present, which hampers the cognitive digestion of what one is doing.

Computerized drafting and reproduction has brought along a particular side-effect which amounts to a serious drawback for coordination: where in the days of hand-made drawings a full-size representation of what one was working on was laid out on the table, this has been replaced by a screen of lesser dimension and reduced resolution, so that certain elements of information, e.g. a detail section, need to be "zoomed up" to become legible. The detail will then fill the entire screen, to the exclusion of everything else relating to the same feature (e.g. plan view, orthogonal section, location and application of the detail in the assembly). To view each of these, it must be put onto the screen in turn, replacing what was displayed before. Each step taking time, it can become difficult to visualize a feature of some complexity, and incompatibilities or outright mistakes may escape attention.

In order to avoid this difficulty, some engineers still like to work with pen and paper copies of drawings. This brings along the error of scaling. Scaling dimensions from drawings with a ruler is a very quick and efficient way to perceive geometric parameters; it has become rather dangerous, however: the computer will happily print graphic representations at any scale, fitting paper or table sizes. It will also readily print numerical indications of scale, but without any verification of whether the scale indicated really corresponds to what was used for the printing. This would appear to be a very trivial pitfall, but it comes up time and again and has caused a lot of grief. It will continue doing so if a method cannot be found to avoid it. As an example, one could enforce a rule that a drawing cannot be printed bearing an indication of scale unless that indication corresponds to the scale of the printing. This was tried in this author's office, with mixed success. It requires military-style adherence to rules, which is difficult to carry out in an environment of creativity, which engineering is.

Translation of information into other forms is closely related to coordination, where data with different origins are combined—e.g. geometry specified by architects with dimensions required for strength and stability, which are determined by the structural engineer. Things may have to be adjusted to satisfy all aspects of a design, and in the process of doing this some aspects may escape attention, resulting in an error of translation as well as of coordination: for example, in order to fit into a restricted space, a beam must be reduced in size and a deficit of stiffness may be created, resulting in excessive deflections or vibrations.

Translation of data and information from one form, format or documentation into another is mostly performed to serve communication: the construction site would not know what to do with design calculations or the scribbling of the engineer, which are given to the technician to incorporate in drawings. For the intent of the engineer to reach the construction team, what must be communicated is the whole information and nothing but the (correct) information.
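The printed-scale rule attempted in the author's office lends itself to a trivial arithmetic check, whether done by hand or by script; the sketch below only illustrates the arithmetic, and the reference dimension and measured length are hypothetical:

    # Does the scale in the title block match the scale actually printed?
    # One reference dimension of known real size is measured on the sheet.
    def implied_scale(real_length_m, printed_length_mm):
        """Return the denominator N of the actual print scale 1:N."""
        return real_length_m * 1000.0 / printed_length_mm

    labeled_scale = 50        # title block claims 1:50
    real_m = 7.5              # a gridline spacing known to be 7.5 m
    printed_mm = 107.0        # the same spacing measured on the print (hypothetical)

    actual = implied_scale(real_m, printed_mm)
    if abs(actual - labeled_scale) / labeled_scale > 0.02:
        print(f"Scale mismatch: label says 1:{labeled_scale}, print is about 1:{actual:.0f}")
    else:
        print("Label and print agree - ruler measurements may be trusted")

In the example the sheet was "fitted to page" at roughly 1:70 while still labelled 1:50, exactly the trap described above.
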
Often redundant information finds its way into documentation and communication, creating confusion or outright mistakes. This is especially critical where changes and modifications must be incorporated: if the same information—e.g. the specification of a weld—is quoted in more than one
place, the likelihood is quite substantial that, in the case of modifications, one or the other place will be missed and the now erroneous information perpetuated.

2.5 Errors of Execution

Often construction takes place in less than ideal circumstances (weather, contentious working climate, time pressure etc.). Since the act of giving physical expression to a concept is itself an act of translation of information from one form into another—from drawings to poured concrete, fabricated and erected steel—it is subject to the same reservations that apply to any act of transformation. Additional sources of error are of course inherent in the execution itself, such as careless and bad workmanship, inferior materials, degradation of materials, mix-ups, accidents, breakdown of machinery, power failures etc., all of which may result in deficiencies of the product if not corrected in time.

Experience shows that this affects especially temporary work such as shoring or scaffolding, where the fact that it is temporary and will be removed shortly registers in the minds of the participants in the form of a lesser need for quality and precision. The notoriously recurring falsework collapses, which can often be traced back to worn-out material or careless assembly producing instabilities, eccentricities and local failures, bear witness to this. Often insufficient time and less than qualified staff are assigned to the verification of these assemblies (see Sect. 7.3).

At the same time it must be remembered that other trades will be busy on the same construction site with their requirements to penetrate, reduce or move the structure. Material and geometric conflicts are often resolved long after the bearing structure was put in place, almost invariably to its detriment. What makes this particularly problematic is that it often takes place unbeknownst to the structural engineer. The consequences will therefore dwell in the structure as an unknown deficiency which may, under changed circumstances, combine with other detriments (see similar considerations for the period of use in the following section).

Errors of execution are often quite visible, as well as embarrassing and costly to correct. One is therefore tempted to make them disappear from view and from the records, which, once again, will result in hidden faults. Often, also, corrections imply substantial delays and other options must be found, such as strengthening or the addition of elements. Situations come up where the consequences of the error can be gauged and found to be sufficiently compensated by the safety margins, as is often the case with material properties which do not meet specified values: the standard concrete strength is only determined with a 28-day delay which, in the case of most high-rise buildings, means that about four storeys have been added in the meantime. If it is decided to leave things to themselves, a deficiency, although a known one, is left in the structure.

Execution is the phase in the construction process where the major portion of the money is spent, typically between 85 and 90% of the total. Incentives will therefore be commensurately great to seek savings through alternative designs, methods, through
shortcuts, some of which may be approved by the client's team and some not, down to outright cheating with substitutes of inferior quality or simply by omission. One still meets representatives of construction companies who need to be persuaded that reinforcing steel is provided for a good reason and should be placed into the concrete of a foundation, or in a retaining wall. Not to buy and place the steel represents a saving, at least at construction time. The substitution of unqualified labour for tasks requiring skill is of a similar basic character and has, in some cases, become so notorious that entire protocols of certification have become necessary and customary, as for example with welding.

Protocols themselves often invite cheating, and to enforce them effectively may be beyond the powers of the engineer. In that case it may be a good idea to "design around the problem", in other words to avoid conditions where the quality of workmanship is decisive in terms of potential failure. Welding details in particular can often be designed to lessen the importance of perfection: butt welds at concentrations of tensile stress will always be critical, whereas welds on lap plates can be sized generously to provide robustness.

2.6 Errors Due to Use

Once a structure is completed, it begins its "life", being used for certain functions which were generally anticipated by the builders. In time it may change function, however, and the correlation of the design parameters such as exposure or strength, and the knowledge of the same, may become uncertain or lost.

Most styles of construction require monitoring and maintenance, without which a structure may deteriorate progressively. In certain environments the durability of materials may be limited, whether of the basic material itself or of the protective measures which were added to enhance its long-term performance. Zinc coating of steel has a limited life if left bare to exterior exposure, or in contact with materials producing a chemical reaction on contact, such as soil, polluted water or air. Most structures are designed and built for certain anticipated loading conditions relating to gravity or environmental exposure, the reality of which may depend on what happens to the structure itself, or to its surroundings.

Privately owned buildings often change hands. At every transfer of ownership, knowledge concerning the original design parameters risks getting lost or distorted, so that subsequent use may no longer respect the limits which governed the concept. Overloading or degradation may be the result. Most frequently, though, it is the lack of maintenance which constitutes the "error of use". Deleterious processes tend to negatively affect the performance of the structure over time, and may reduce its resistance to a degree where it becomes unsafe.

Maintenance, and the monitoring which must accompany it, cost money which, much like quality control, does not produce any ostensible return—only its absence will make itself felt, negatively—and it is therefore not popular with administrations
whose goal and "raison d'être" is to maximize that return. Maintenance tends to be perceived as an unwelcome expense and left unattended, delayed or outright neglected, especially in environments where personnel is replaced frequently, for example due to change of ownership… A few years from now it will be somebody else's problem… namely the consequences of the lack of maintenance. This sort of scenario has become rather common in the past few decades, with the result of a huge backlog of degraded structures, the repair and replacement of which strains budgets everywhere, and progressively more so.

There is another very serious aspect to this which links the problem of maintenance to errors of conceptual design: many structural elements disappear from view behind ceilings, wall coverings, paint etc. and their fate cannot be assessed by inspection. This has turned out to be of particular concern for the attachment details of façade elements. These are often made from heavy materials such as concrete, brick or stone, while the attachment details are made from carbon steel—and frequently involve field welding. This means that even if a protective coating was provided, it will be destroyed locally and perhaps touched up with some replacement. In any case, humidity in the form of condensation or rainwater will usually be present and cause corrosion in the long run. Very serious accidents have been caused by such circumstances, and it is fair to say that to let them take their course constitutes the error. Whether it is assigned to the design phase or to the phase of use is immaterial, but it is important to note that it is an error whose full impact will be felt progressively, as the great quantity of buildings constructed in the 1960s and 1970s reaches the age at which long-term effects such as corrosion manifest themselves. It can also be identified as a basic conceptual error, namely to make critical elements uninspectable—one could compare it to a latently hidden disease whose symptoms become perceptible only when it is too late.

2.7 Errors Caused by Alteration

Most commercial or institutional buildings will see their mechanical, electrical and plumbing systems changed or replaced every three to five decades. Typically, this will imply new routings of conduits, piping, air ducts etc., involving new penetrations of the structure. Sometimes several generations of these systems will leave their consequences, mostly amounting to a weakening of the structural system. Similar things happen in residential construction, where new expectations of the inhabitants must be accommodated, such as enlarged spaces, balconies, etc. Infrastructure is not exempt from alteration, modification or replacement.

Likewise, the exposure of structural elements may change through a change of use, through work being carried out in the neighbourhood, or through partial demolition, extension or other structural modification. If the attention of a structural engineer is engaged at all—which is often not judged to be necessary—he will usually face a situation where he does not command the necessary knowledge to design adequate measures of intervention: drawings are
lost or faulty, the structure cannot be made completely visible, information about earlier alterations is unavailable. It is easy to see that in such circumstances errors are bound to occur, and it is up to the engineer to see to it that adequate load paths, conservation of durability and stability, and robustness are provided to a sufficient degree. It is the latter property in particular with which the altered structure must be endowed, in compensation for the uncertainty which accompanies all modifications.

Often, structural alterations will involve changes in the load path for important gravity or lateral loads. This usually implies deformations, especially in cases where those principal loads continue to be present during the process of modification. Temporary instabilities may likewise occur if certain elements of the structure are removed, to be replaced or to be modified. An entire class of accidents can be traced back to structural alteration which was carried out thoughtlessly, without, or with inadequate, professional help. The professional engineer, where he is engaged to attend to a project of modification, is well advised to invest time in research on the actual condition of what is to be altered, and to compensate for information that cannot be mobilized by assuming worst-case hypotheses for his conceptual design.

2.8 Errors in Deconstruction

Demolition, or deconstruction as it is called today for political correctness, is, as measured by insurance rates, the second most risky trade in construction after tunnelling. To take a structure down piece by piece, as most often must be done, leaves it in numerous consecutive stages, some of which may be precarious where certain elements, along with their function, have been removed. That function may have included other features than bearing gravity loads, e.g. the stabilization of other portions of the structure. To understand and appreciate this for every intermediate stage is sometimes difficult, since portions of the load paths may be less than obvious or hidden from view; in other words, it is easy to be deceived.

The study of drawings is of course useful in order to gain insight, but for older structures deserving to be taken down the original drawings are usually lost, not to speak of modifications carried out during the life history of the structure; what can be seen is therefore all one knows. This is especially true for structures that have been damaged by an accident or a fire and are therefore in a precarious, impaired state to start with. In these circumstances one may prudently choose to use remote-controlled equipment rather than send humans inside or into the proximity of the impaired structure, as it may move or collapse on its own, or as a consequence of a seemingly minor intervention.

Another aspect of demolition/deconstruction concerns adjacent premises. Any uncontrolled movement of the structure being taken down may cause debris or
portions to fall sideways, causing damage or injury to whatever or whoever is within reach of the falling objects. This is one of the principal considerations for demolition methods involving remote-controlled action such as blasting or heavy machinery, e.g. the wrecking ball. The structure to be removed may have provided lateral support to neighbouring construction, such as bearing walls which may become unstable if and when floor plates are removed. Careful assessment of lateral stability during all stages must precede the demolition.

Old construction was usually created from the bottom up, setting element upon element. The structure has therefore seen some of the intermediate stages which are enacted in reverse during demolition. However, during the original building, temporary supports, bracing or tie-downs may have been installed and later removed to secure incomplete stages. This is even more true for more modern methods, which may also involve the use of inherently less forgiving materials like high-strength concrete or steel. These materials have a tendency to accumulate high amounts of elastic energy before rupturing explosively, releasing all that energy in a short moment. Traditional materials such as wooden framing, tied with metal elements like spikes or nails, as well as mortar-bound masonry, have different rupture characteristics, where the final rupture is usually preceded by large and visible deformations which are mostly nonlinear, meaning that much of the deformation energy is lost in the process to the environment rather than having accumulated in a spring-like fashion.

This author's experience includes some work with cast iron, which was used for compression elements in the nineteenth century, before the arrival of steel at useful prices. This is essentially a very brittle material, but usually rupture occurs near the connections, such as column capitals, which does not necessarily remove the entire column and its function from the load path. One particular word of caution is merited by arched structures. They are usually built with essentially brittle materials, and supported by falsework until complete. It may in some circumstances be a good idea to re-enact this temporary support in order to avoid widespread movement of the collapsing vault.

Falling structures may involve large amounts of mass which, when hitting the ground, may trigger shock waves causing large-amplitude vibration in the neighbourhood. This by itself may be sufficient grounds to avoid large-scale demolition methods.

All this taken together makes deconstruction seem a hazardous proposition, as witnessed by the above-mentioned insurance rates. It is certainly not something that can be entrusted to a newcomer, since experience must, most often, be substituted for clear-cut knowledge. Demolishers are wary people who have been through numerous scenarios where things did not turn out as presumed. In this context it is especially delicate to speak of errors, since the taking of risks is an integral part of the game and cannot be avoided. One has to be content to minimize the risk, a matter of perception most of all, trying to foresee the consequences of an intervention for which there is often no alternative.

The definition of "error" thus becomes rather fuzzy in the context of deconstruction/demolition, and therefore less than useful. One is, after all, inducing failure
of the structure, something to be avoided in every other context, and success must now be measured by the consequences of that failure, in terms of damage to everything else rather than to the structure itself, a fundamental change to the paradigm of "error". This resembles what happens in another context, i.e. the survival of structures in strong earthquakes, where one accepts a certain degree of damage only if it stays limited to the impairment of functions one has decided to dispense with, e.g. the normal use of the structure where it is not needed in the immediate aftermath of the earthquake. For structures needed for shelter or for healthcare this is not acceptable; in a similar way, for demolition, damage becomes a matter of context, and an error exists only where the goal of limiting the damage was not attained.
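The earlier remark that high-strength materials accumulate large amounts of elastic energy before rupturing can be made tangible with the elastic strain energy density u = sigma^2/(2E) stored at the limit stress; the material values below are rough, order-of-magnitude figures assumed for illustration:

    # Elastic strain energy density at the limit stress: u = sigma^2 / (2 * E)
    # Rough, order-of-magnitude material values for illustration only.
    materials = {
        "high-strength steel": (690e6, 200e9),   # (limit stress in Pa, E in Pa)
        "mild steel":          (235e6, 200e9),
        "softwood timber":     ( 30e6,  11e9),
    }
    for name, (sigma, E) in materials.items():
        u_kJ_per_m3 = sigma**2 / (2.0 * E) / 1000.0
        print(f"{name:20s}: u about {u_kJ_per_m3:5.0f} kJ/m3")
    # High-strength steel stores roughly an order of magnitude more elastic energy
    # per unit volume than mild steel and about thirty times more than timber;
    # if it fails in a brittle mode during deconstruction, that energy is released at once.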

3 Perpetuation and Interception

Information, once created and introduced into the building process, will persist and eventually take physical expression unless it is modified or replaced. Many different causes for modification or replacement exist. They are mostly related to the optimisation that accompanies the building process from the start to the very end, following the options as they come up and are recognized at every step. In this sense, the recognition and correction of errors can be seen as part and parcel of the normal course of events, where the available information is seen, reviewed, completed and adjusted continuously among the various participants.

The question then arises of what constitutes an error, because wrongness is now a matter of degree: does an incompatibility which shows up and is ironed out in the course of coordination work, e.g. among disciplines, constitute an error? Or does a sub-optimal design contain or constitute an error, keeping in mind that in most cases, with the exception of the most trivial direct dimensioning, the optimal or best solution is out of reach due to limited time, uncertainty, complexity, or a shortage of bright ideas? From this one must conclude that a clear and unequivocal definition of error does not seem to be possible and one must substitute a concept which is compatible with common-sense scenarios. So here goes: an error is an element of information which, if perpetuated, will lead to a loss through accident, impaired function or costly correction. Perpetuation is therefore an essential ingredient or property of the concept of error: a fault which has been corrected at little or no extra cost is no longer an error, neither in reality nor for this discussion.

Erroneous information, once introduced into the building process, will persist unless intercepted and modified into correct information. It is the absence of interception, therefore, which permits it to take its course. It is well worth reflecting on the reasons and circumstances of that absence. The act of intercepting an error requires agents who are able and willing to do it. Typically, this can be broken down into a
number of basic elements, each of which is a condition "sine qua non" and carries its own description, which varies with the particular scenario:

• Detection: to perceive erroneous information as what it is
• Recognition: to interpret what is seen as wrong or faulty
• Communication: to let the right party know there is something wrong
• Follow-up: to make sure corrective action takes place
• Compensation: correction of errors normally translates into work and expenditure, sometimes to be supplied by parties not related to the origin of the faulty information.

3.1 Detection, Attention and Murphy's Law

Lack of attention is most often the main reason why faulty information is not discovered in time. The human mind has been shown to work much like a network of links which "light up" when associations among different stations are triggered. These associations must follow a certain steering agent so as not to be entirely random, as explained in Part IV: Insights from psychology. Phenomenologically it is evident that this routing or channeling is in fact taking place; if it were not, we would not be able to have coherent thoughts and reasoning, or to "concentrate" on a particular subject for a useful time span. Limits to the intensity and duration of this concentration exist, though, as we all know too well, and it must be said that we are not entirely masters of the routing of thought, which may become diverted onto something outside the intended path, something which happens to call louder for our attention.

Think of the boring task of verifying detailed shop drawings of a repetitive structure, where dozens or hundreds of almost identical representations must be checked for conformity with specifications, codes and standards, and for consistency with the design concept and with each other. Everyone who has done this knows that thoughts have a tendency to wander off onto more attractive features of life, and that it takes a periodic shaking up of oneself to get the concentration back on track. In other situations the power of attention is simply overwhelmed by the quantity of information being displayed, as for example on a construction site in full progress, where all our senses are bombarded with a cacophony of impressions. It takes an act of will to direct the mind onto the targeted features, such as the positioning and fixing of reinforcing bars while this is being carried out simultaneously in several places and stages.

When checking one's own work, e.g. calculations or detail sketches, attention can be "short-circuited" by the familiar patterns one is looking at. Mistakes which have passed our eyes earlier may no longer trigger our attention and critical review. This is well known from the repeated reading of texts, especially texts written by ourselves, where even the most obvious mistakes such as missing words are no longer caught because the text has become so familiar that the mistake no longer
stands out. As well, we are not reading individual words unless we will ourselves to do so.

Construction meetings are another scenario where attention must be kept awake, even when subjects are being discussed that do not seem to relate to one's own discipline. It is perhaps a good idea not to be seated vis-à-vis an interesting member of the opposite sex while the routing of large air ducts or bunches of electrical conduits penetrating the structure is being discussed and modified.

Murphy's Law states in the bluntest form possible what we all know to be a fact of life: "If something can go wrong, it will", which can be reworded: if things are left to themselves, i.e. unattended, they will go bad, unattended meaning that they receive no attention. Very often the "going wrong" is perceptible; it can be seen, heard or smelled through symptoms, if only one is looking in the right direction at the right time. The steel bars which are missing or too short will be spotted instantly if only the inspector is present and directs his or her attention systematically to each element of the structure in turn. Sometimes it takes a closer look or the use of a tool, e.g. to spot bolts which were not tightened or which are cheap forgeries with inferior properties.

From the study of cases where mistakes were made that took effect in terms of structural failure or cost of correction, it can be seen that most often a small quantity of attention would have been sufficient to intercept the course of events. Symptoms were quite perceptible for a while, but nobody asked the pertinent question: is what I see, hear or smell correct and normal? In other cases, with which practitioners are all too familiar, the features showing the mistake have disappeared from view: the concrete was already in place when the inspector arrived to check the reinforcing steel. Attention had been planned and organized, but not effectively.

Some studies (Melchers 1982) suggest that a large proportion of mistakes are made at the design stage, where one operates mostly with mathematical models of what is to come. Analytical methods are used to verify whether what one has thought up will perform satisfactorily in various circumstances which one has also modelled mathematically. Errors such as incomplete analysis of all relevant scenarios may eventually result in performance deficits unless attention is concentrated on the appropriateness and completeness of the modelling. (For illustration see case 7.8, "The computer glitch".)

3.2 Recognition, Interpretation

Erroneous information comes in different abstract guises just like correct information, be it in the form of numbers, graphic rendering or symbols, as is often the case when the information is repetitive. In order to recognize an error as what it is, the visible representation must be translated mentally into an image of reality and put into context.

Consider the case of repetitive information such as, for example, a connection in a steel frame structure. It may be shown graphically in detail on one drawing and indicated symbolically in a map-like fashion on a general layout, say by an "A". Or, even more simplified, the note "typ(ical)" may appear on the detail drawing. One of the problems is now to make sure that it is applied in all the right locations and not in any wrong one—a matter of context. Another typical situation of a similar nature is the application of a known design (= a set of information) for which experience has shown that it worked in the past. Construction being what it is, a field with near-infinite variation of its numerous parameters, conditions and circumstances are often not precisely the same as they were in the past, and what may have worked then does not necessarily do so in the novel situation. If so, an error of context is being committed, and only the analysis of the novel application will help to eliminate it.

Most design activities use simplifications of one kind or another to represent the properties, configuration and exposure of what is to be constructed. Modeling is essentially an abstraction of the intended future reality and does not share all its features: in model form the information may look quite innocent; numbers and line drawings can be produced in large quantities and with apparent perfection by today's computers and their software. Even the colored graphics that often replace the spreadsheets of yesterday do not show whether the modeling and the hypotheses which were used are appropriate. Most often, only the reference to work of the past, or verification through "back of the envelope" style calculations, will make the wrong information stand out, to be recognized as what it is.

The information making up the construction process will eventually take physical form. This is when another avenue to recognize errors opens up, as the symptoms of something that went wrong may begin to show, for instance in the form of cracks or excessive deflections. Once again, even a crack indicating a serious problem may "melt" into a crowd of other, harmless cracks such as are normally found on concrete surfaces exposed to a harsh environment. It will take a competent inspector to recognize what the perceived symptom, i.e. the particular crack, is indicating—whether it is related to serious distress such as an imminent shear failure, or simply to the structure relieving itself of some mechanical incompatibility such as restrained shrinkage or thermal contraction (see e.g. case 7.9, the problem with repetitive work).

It is a very different matter with cracks in a steel structure, which are always related to distress and will almost invariably progress in time. A crack in a steel structure can indicate a variety of things and it will be up to the inspector or his resources to come up with an interpretation. The crack may be a material fault or may have been caused by inappropriate handling. More seriously, it may indicate bad-quality work, overloading or a fatigue problem, depending on its configuration and the exposure of the structure. In the case of cracks in a steel structure an interpretation cannot be dispensed with, because they almost invariably indicate the interruption of the load path of tensile stresses, i.e. serious distress. The causes must therefore be found and eliminated, preferably before the crack is repaired by welding or by replacing the cracked element with an intact one.

The informed and competent inspector will direct and concentrate his or her attention onto the places, elements or subsystems which are either critically important, or notorious for defects, or both. In the first case, once again, isostatic systems come to mind, where every element is a link in the one and only load path. Equally critical can be systems with brittle behaviour where, even if members are arranged in parallel fashion, the rupture and loss of resistance of one element will cause overloading of its neighbours with insufficient load redistribution, and their failure in turn. The distinction between truly ductile and essentially brittle systems is very important in this case, with ductility being a variable with values ranging from one (sudden and complete loss of resistance following purely elastic deformation) through incomplete ductility (gradual loss of resistance with deformation beyond a point of maximum) to the more or less ideal ductility of many metals and some plastics, and eventually to the even more favourable behaviour of "mild steel", which includes strain hardening. It is important to keep in mind that sudden failure can be triggered by a number of different causes and may occur even in structural systems made from ductile materials, through instability, crack propagation, fatigue, or because the quantity of material involved in plastification is insufficient ("notch effect", e.g. at bolt holes or welds).

Often the symptoms picked up even by a competent and circumspect inspector are equivocal, ambiguous or not clearly perceptible. In this case the inspector will try to obtain better information by "digging deeper": by exposing the suspicious element to better view, through testing, detailed analysis or comparison with similar circumstances. He will also tend to err on the conservative side, by causing the replacement of a defective element with an undamaged one, by strengthening, or by providing supplementary elements of stability or load paths.

3.3 Communication

Once perceived, an apparent anomaly must be communicated to the right agent, namely the one who is in a position to recognize and interpret it, and eventually the one who can cause it to be corrected. These may be different participants in the building process, or they may be positioned outside it, as for example testing laboratories or specialist experts. In case of doubt it may be a good idea to alert several of them in order to "get to the bottom" of things, i.e. to gain the correct interpretation and to trigger the right correction.

Often all of this happens after the fact, forensically, i.e. when the error or anomaly has taken effect, with symptoms alerting everybody, or by causing a structural failure. This is when funds are made available for correction or remedy, with a view to later recovery from the parties who end up being blamed. Before that, i.e. before something dramatic has happened, time and finance tend to be scarce and the means to mobilize testing and expertise may be lacking, since nobody is alerted to something being potentially wrong or amiss. Even if something is suspected, communication to
the right agents may therefore be difficult, delayed or omitted. It is up to management to see to it that such information reaches the right agent rather than getting lost. From this it becomes clear that every error which is permitted to persist has become a management error. For it is up to management to organize and direct the human agents of the building process in such a way that the error, i.e. the faulty information it represents, is intercepted and rectified in time. One may even put it more bluntly: every human error is also, and primarily, a management error: why was the incompetent individual or group put in a position of responsibility for something which was beyond their capability to process or to comprehend?

Situations have been seen where personal relations, for example an imposing boss inspiring his subordinates with the fear of losing their job, have prevented somebody from "blowing the whistle" on a perceived anomaly he or she was perhaps not certain about. Rather than exposing themselves to potential embarrassment, they let it go and events take their course, justifying their inaction to themselves with the notion that it was not within their assigned responsibility to pay attention to, and to report, what had been seen. What this amounts to is a breach of communication, and it happens in many different situations, not just of the kind described above. It is here that the disadvantage of an adversarial climate becomes obvious, with walls being put up by every party for their protection from potential litigation, made from paper filled with clever legalese, in the attempt to divest each one of as much responsibility as possible. This severely impairs the filtering and detection process which, in a collaborative climate, can clean out faulty information quite effectively. Not so when everyone is hiding behind a paper wall. Similarly, communication tends to become tenuous when one of the participants is perceived to cheat, overstepping the bounds of fairness, good faith and team spirit. This author has seen examples where collaboration was denied, to the point where this was even recorded in writing, with the dire consequences to be expected from this sort of scenario. Once a job has "gone sour" it is very difficult to bring it "back on track", and it usually implies the replacement of actors, with all the problems that brings along.

Communication in the building process very often requires more than the bare statement of the observed facts, as these may be equivocal and need interpretation. The observer sees what he sees directly, first-hand so to say, while he may not be able to find the explanation for what he observes. It is therefore essential to communicate as detailed a description as possible to a recipient who is able to find that explanation. He may be handicapped, however, by the fact that he received the observations "second-hand", i.e. through verbal, written or graphic communication, while not being able to see the reality directly. In some of the most dramatic accidents unsuccessful communication has played a major role: signs of distress were seen and reported but were not considered serious or important enough to trigger action. The communication failed in its essence, since the seriousness of the scenario was lost in transmission.
The conundrum, once again, comes from the organizational set-up where most often the person performing the inspection, the agent in charge of the interpretation, and the one making the decision on the follow-up, are not the same. If the communication is solely in paper
form, through a prepared form to be filled out by the site inspector, the situation is aggravated even more because in the context of structures and their behaviour, it is impossible to create “one size fits all” style forms which do justice to all potential observations. Some part of the information gained at inspection will always be lost in the process, unless some “personal union” can be brought about where the “finder” of the information, i.e. the inspector on site, is identical with or close to the interpreter, and eventually also the decision maker. The arrival of digital cameras has alleviated the problem to some extent, providing a visual rendering of what was seen, to the recipients of the observations without delays: “A picture tells more than a thousand words” especially if these must be recorded in standardized form, in a preset mode of classification. The facility and ease of digital photography ought to be exploited to the utmost to improve communication. Any inspection will turn up a large quantity of features and aspects of the observed object. For the follow-up it is essential that the symptoms of a serious problem are picked out: one crack indicating the beginning of a shear failure may be a member of a multitude of cracks appearing on a concrete surface—it may not even be the most visible or the widest. It is then up to the “interpreter” to identify it as what it is—the symptom of a serious problem. This can only be done by somebody who at the same time, is able to see the crack in reality, or at least on good photographs, and is in possession of sufficient engineering knowledge to know what he is looking at: The most straightforward solution to this problem is obviously to send a “competent” inspector, in other words establish personal union at least for the inspector and the interpreter—not a thing which is provided in many inspection protocols. The next best thing would then be, in a situation where knowledgeable personnel is hard to mobilize, to serve the interpreter with a set of good photographs which may raise his attention and induce him to go and inspect some situation in person, when needed, in order to obtain a more direct impression. Whichever way this is handled, the communication between observation and follow-up is one of the necessary elements of quality control, in the sense of elimination of faults and deficiencies. These may not constitute errors per se as they may be part of normal wear and tear e.g. in the context of used infrastructures, but to leave them unattended, or to lose information about them in communication will do precisely that, introduce an error in the process, as exemplified by the story of the timber trestle (see Sect. 7.4). The principal error is not the deficiency itself but the impression of the absence of a problem due to loss of information, created by insufficient communication.

3.4 Follow-up. From the Tip to the Real Iceberg Of great importance for the follow-up, i.e. the corrective intervention, is the forensic analysis of the fault and the error that caused it as it may be a conceptual or repetitive error affecting other elements of the same configuration, or in similar circumstances which so far may have escaped detection. Often this is so self-evident that it is done
as a matter of course; in other cases, it may take thoughtful detective deliberation to find and identify the elements and places which may be affected by the same error. If a defective weld is found, it may be a good idea to check in the records—if they exist—who the welder was, or what the circumstances were which caused the defect, in order to find other potentially defective welds. The same basic principle applies to all checking and verification activities, in particular where they must be, or are being, performed under an incomplete protocol, i.e. where not each and every element is visited. Once a particular type of element is found defective, it becomes mandatory to look out for its "brothers and sisters" as they may, and probably do, suffer from the same impairment. A typical case relates to a practice which was widespread in North America in industrial floor construction, where composite action was established by shear connectors which were "welded" through the corrugated steel deck, using a strong electrical current to produce fusion. It turned out that the method, while it might have been demonstrated to work satisfactorily in laboratory conditions, did not do so on the construction site, due to a number of circumstances. The engineer who checked the solidity of the weld by knocking the shear stud with a hammer would see whether he was bending it, or breaking the weld at the base. This author has performed this rather crude but effective testing on numerous occasions. Since shear connectors usually come in series, i.e. multiple elements working in parallel, checking of a random sample could be considered sufficient. However, if defective welds are found in the sample, the consequence would be to enlarge the sample, in order to come up with an appropriate assessment of the scenario. Often a substantial percentage of bad connections is found, leading to a change of method, e.g. welding all studs by hand with the welding rod rather than by fusion. Fusion welds have been found defective in other circumstances and their reliability in construction must be seriously questioned, unless a thoroughly controlled industrial protocol can be applied, eliminating all problems with corrosion residues, cleanliness, power supply, and thermal or material incompatibility. Collapses of lightweight steel structures (lightweight trusses assembled with fusion welds) have shown this to be a very serious issue, with the decision to use an unreliable welding method constituting the human error. Once again, that decision may have been based on past applications where the procedure worked, i.e. no failure occurred. However, this time a glitch slipped into the process, showing the method to be vulnerable to imperfections, i.e. unreliable and not robust. The ultimate follow-up in cases such as this, where a method is found to cause problems and risks, must be to abandon the method for a better one. This may be in contradiction with considerations of economy and speed, where pressures may arise to continue the use of the method, perhaps along with some proposition for improvement. The proof of such proposed improvements can be brought about only by a commensurate quantity and variety of applications in real construction, rather than tests in laboratory conditions, the essence being the question of robustness: if a method is vulnerable to glitches, it should not be used, especially in circumstances of isostatic load paths, or where domino-like failure modes are controlling.
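The sampling logic just described (check a random sample, and enlarge it as soon as defects appear) can be put into rough numbers. The short Python sketch below is illustrative only: the defect fractions, the sample size of 30 and the 95% detection target are assumed for the example, not taken from any standard or real project.

    import math

    def p_detect(n, p):
        # Probability that a random sample of n studs contains at least one
        # defective one, assuming a constant defect fraction p and independent draws.
        return 1.0 - (1.0 - p) ** n

    def n_required(p, confidence=0.95):
        # Smallest sample giving at least the stated probability of catching
        # one or more defects when the defect fraction is p.
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

    for p in (0.20, 0.05, 0.01):   # assumed defect fractions
        print(f"defect rate {p:.0%}: P(detect in 30 studs) = {p_detect(30, p):.2f}, "
              f"sample for 95% detection = {n_required(p)}")

With these assumed numbers, a widespread problem (one stud in five bad) is almost certain to show up even in a modest sample, while a one-percent defect rate would call for a sample of several hundred studs; this is the arithmetic behind the rule that a single bad weld found in a small sample justifies enlarging it considerably.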

It is interesting and important to note that pressures for economy and speed are often at the root of problems even though the human error might be assigned to the fabrication or manipulation process—in order to find a human scapegoat who can be blamed and punished. It is a management decision, however, to select methods and processes that produce robust systems—or not. Often the economy which appeared on paper does not survive to the end of the day, when all the problems caused by that economy have been ironed out. The follow-up must therefore include the manager, who is the only one in a position to set up the process of production and to bring about error-prone circumstances—or to avoid them. In this sense the choice of an error-prone situation itself becomes the human error, a logical conundrum which often ends up in the courts to be sorted out. More generally, we once again conclude that every human error, its genesis and, more importantly, its perpetuation, is at the same time a management error, for it is the manager who directs and organizes the building process and selects personnel for the execution as well as for quality control and assurance (in its proper sense), i.e. verification and checking. It was a management decision to place the concrete anyway even though there had not been sufficient time to check the solidity and stability of the falsework. It is up to management to select personnel for the inspection of deteriorating infrastructure where certain symptoms must be correctly interpreted lest a serious defect be permitted to persist.

4 Compensation for Uncertainty All structures are intended to possess reserve resistance to the effects of the exposure they will see, the description of which forms part of the set of hypotheses adopted for the design. Uncertainty affecting that description must be admitted to exist, and the reserve resistance is calibrated to compensate for the rate of uncertainty which is assumed. Uncertainty will also exist as to the real properties of the structure the modeling of which is made the basis for the design. Reserve resistance will need to compensate for this as well. Reserve resistance, be it in the form of strength or other properties such as deformability or ductility, comes in two modes: Safety margins as stipulated by Codes and Standards, and through a “conservative” design approach where the real exposure and structural properties are anticipated to turn out less favorable than hypothesized and specified on paper. Every engineer who is not overly sure of himself will tend to err slightly or more, on the pessimistic side in his calculations, rounding numbers, providing extra strength or load paths, or “designing around” critical situations where possible. This sort of approach amounts to the introduction of robustness in the sense of stipulations which have appeared recently in building codes. It usually creates additional initial cost over and above a minimalistic design but may turn out to be a good investment at the end of the day when the need for reserves manifests itself.

4.1 Safety Margins Safety margins as stipulated by building Codes and Standards are intended to compensate for the negative effect of discrepancies between the proposed product as hypothesized by mathematical or physical models, specifications, drawings, or through prototypes, and the future reality, i.e. its physical expression. This includes as well the modeling of the future exposure. At the time of design and elaboration of a project, considerable uncertainty exists regarding all of this. This uncertainty has been the object of much thinking, research and publication, mostly on the basis of probabilistic representation of single parameters, using data from laboratory testing or, more rarely, from forensic investigation. More recently, many have tried to supplement or replace traditional statistics and probability theory by what are called "fuzzy methods" or by using Bayesian approaches, in the attempt to improve the theoretical support for the safety margins and their format used in the design and analysis of structural models. In reality, these theoretical approaches are used to the extent that they can elucidate relationships between models and physical construction, in other words to account for "legitimate" variation of modeling and building parameters such as stiffness values used for the analysis, or properties of materials and assemblies such as strength, ductility, resistance to fatigue, etc. It has been known for some time, however, that qualitatively this approach does not do justice to the reality of construction, a fact which has been attributed to the consequences of human action and decision making, which tends to be erroneous in ways which have so far escaped scientific analysis and probabilistic quantification. Human error and its effects must be blamed for the discrepancy which persists between what the safety margins and their theoretical support are promising, i.e. a failure rate of the order of magnitude of 10⁻⁵ to 10⁻⁶, and what construction delivers: 10⁻² to 10⁻³. On the other hand, the safety margins used presently are considered to be sufficient by and large, as the result of a consensus of sorts whereby a level of safety against failure is produced which is acceptable to society—no incentive exists to come up with the means to cover the cost of higher safety margins in general, which would mostly be related to increased expenditure for materials—making structural elements more massive, or producing them more elaborately. The general level of safety margins is therefore not directly supported by scientific analysis but rather by general acceptance. This is illustrated by the fact that adjustments to the values of safety margins, although periodically reviewed by code committees, have become progressively minor—code committees, when discussing and modifying the format and application of expressions of design criteria, are careful to make sure that the results will closely match what has been found to work to satisfaction in the recent or medium past—one adheres therefore to the experience with reality, rather than to any theoretical base. From all of this one may conclude that safety margins are instrumental in producing acceptable conditions in the building industry, but they cannot compensate for the effects of gross human error. Insofar as the rate of failure in construction can be
interpreted as a measure of quality, the consensus as to what it should be has changed very little in the recent history of construction, and varies little from place to place, at least in the industrialized countries. Quality expectations are principally a matter of the general level of wealth in a society, among other factors having to do with history, and some differences of expectation may still be observed when it comes to structural safety, much less so, however, than for other aspects such as durability, appearance and precision. The effects of human errors do not follow any known law of probability except in a very general sense: gross errors tend to be more visible and will be caught and eliminated proportionally through the filtering of the building process. In a general way, then, we know that with increasing magnitude a deficiency becomes less likely to persist, and in this sense the safety margins will in effect compensate for anomalies of lesser magnitude, but not for gross defects. For the lesser anomalies, it may also be said that the limit between "legitimate" variations of, say, the strength of a structural element and a variation caused, for example, by inferior workmanship (i.e. human insufficiency) cannot be identified. One is therefore tempted to propose that all that is reasonably covered by safety margins may be considered "legitimate" and treatable by classic probability theory, including minor effects of error. This would extend to anomalies reducing real safety margins by 10% or so, e.g. in the form of temporary overloading, or of concrete strength which has turned out not to conform to specified values. At the same time a caveat must be stated: where inferior quality standards are the rule, more than a single anomaly is prone to occur, and the cumulative effect of two or several such relatively minor ones may still be fatal. From the fact that the quantitative level of safety margins has been finely tuned to the quality expectations of society at large, it does not follow, of course, that individual accidents have become acceptable to the segment of society which is directly affected—a witch hunt will still and always ensue, with someone being found guilty and dealt with accordingly. Experts can always be found who will state that some rules for design or construction have been breached and that this was the cause of things going wrong. Frequently, these "findings" will concentrate on safety margins which turned out insufficient, due to human error or neglect.
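The order-of-magnitude figures quoted above can be illustrated with a common textbook approximation in which resistance and load effect are both treated as lognormal, so that the reliability index is roughly the logarithm of the central safety factor divided by the combined coefficient of variation. The sketch below is not taken from any code; the central safety factor of 2.5 and the coefficients of variation of 0.12 and 0.18 are assumed purely for illustration.

    import math

    def p_failure(central_safety_factor, v_r=0.12, v_s=0.18):
        # Nominal failure probability for lognormal resistance and load effect,
        # given the ratio of their central values and assumed coefficients of variation.
        beta = math.log(central_safety_factor) / math.hypot(v_r, v_s)
        return 0.5 * math.erfc(beta / math.sqrt(2.0))   # standard normal tail

    print(f"intact design (factor 2.5):     {p_failure(2.5):.1e}")
    print(f"10% loss of resistance (2.25):  {p_failure(2.25):.1e}")
    print(f"resistance halved (1.25):       {p_failure(1.25):.1e}")

With these assumptions the intact design lands near 10⁻⁵, a 10% anomaly stays within the same order of magnitude, and a halved resistance jumps to roughly 10⁻¹, which is precisely the point made above: the margins absorb the lesser anomalies but are helpless against the gross ones.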

4.2 Robustness and Checking Modern building codes demand that structures be made robust in order to survive unforeseen events more or less intact, depending on the particular scenario. It has been found that this can mostly compensate for external circumstances (exposure) which were not part of the normal design in terms of character or amplitude (e.g. the proverbial runaway truck hitting a column). Structural robustness is, however, only of limited use when it comes to internal flaws, which have their origin mostly in human errors and are therefore not bound or
limited in amplitude by the laws of nature, nor are they foreseeable, because of their nearly infinite variety. The only means to counteract gross errors and defects is continuous or targeted verification and checking, and it is well worth applying the idea of robustness to this activity, since its own deficiencies translate into gaps and flaws of attention, where things are left to themselves, Murphy's Law taking its course. We have seen that the building process is in reality acting as a filter, eliminating most but not all of the faultiness of construction, to a degree silently accepted by society at large, in the sense that better quality would have a price no one is prepared to pay. However, this filtering is mostly of an informal, unsystematic character and must be so, and usually it is supplemented by organized checking which is applied more or less systematically. If the reliability of this were to be improved it would be similar to providing robustness to the system, in this case to the quality control itself, and similar modes of intervention come to mind as were found to constitute structural robustness:
• Strength. To employ more and more highly qualified personnel for the quality control will reflect directly on the expenditure for the exercise; limits will therefore make themselves felt very quickly, apart from the problem with Mr. Superman, who is likely to be less than readily available.
• Redundancy. This translates into a review by different agents, preferably as independent as possible from each other, and with different fields of competence. They will be able to perceive different aspects of the project and its elements (a short numerical sketch of this effect follows at the end of this subsection).
This may begin to sound rather tedious and cumbersome, but on an informal basis it is part of common practice, where in fact it constitutes the filtering function of the building process. In some countries and on large, novel or complex projects the function is institutionalized and follows a pre-set protocol. This author has not seen any falsifiable proof that one method is superior to another. It may well be that in certain circumstances an institutionalized validation process creates the impression that everything is under control and further alertness and verification are unnecessary. This would make the protocol counterproductive. This author remembers one case where a highly regarded controlling agent failed to discover a serious modeling error in the calculations. It was detected at a later stage by the contractor and exploited for his partisan purposes, implying considerable additional cost. The conclusion is then, once again, that no one-size-fits-all solution exists and that each case needs to be considered on the basis of its own circumstances in order to arrive at a rational set-up of the verification activity; the term rational may be too flattering, given the fuzziness which infests it, involving much personal judgement by the actors involved. Whenever a solid (and, again, falsifiable) basis for a decision is missing, personal experience and often emotionally tinged judgement are substituted by necessity, for good or ill. Dramatic events will leave their mark on an engineer's memory as on every human's, associated with an apprehension which is triggered by similarities with the past drama. If this is the case, a reasonable option
is to have somebody else contemplate the situation—a matter of robustness, now applying to the quality control itself. One ultimate goal of the quality control is completeness, which is to say that nothing has escaped attention. Similar to the fate of a structure from its inception to its end, the process of quality control is far from a straight road but rather resembles a complex network linking related events such as decision making, information being produced, translated, communicated or transformed into physical reality. As we have seen earlier, all the information is not available for examination at any one time but is continuously being created, modified, replaced, hidden or lost in the process. Means and time for quality control being finite, perfect completeness of control will effectively be out of reach and must be approached with an optimizing methodology as far as possible, much like the building process itself. The similarity does not stop here but it leads quite naturally to the requirement for robustness with precisely the same sense and purpose for both, i.e. to preserve the function of the system in the face of complexity, unpredictable exposure, limited time and financial means, and internal deficiencies. Here and there, hierarchical considerations apply, investment of effort and attention must be concentrated onto the critical features of the process in order not to leave anything important to itself.
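The benefit of redundant, independent review mentioned in the list above can be expressed in equally elementary terms. In the sketch below, which treats the building process as a chain of filters, the catch rates assigned to the individual reviews are invented for illustration; the point is only that the probability of an error surviving is the product of the individual miss rates, and that the product stops shrinking as soon as two reviews merely duplicate each other.

    from math import prod

    def survival(detection_rates):
        # Probability that an error slips past every filter, assuming the filters
        # (design review, shop-drawing check, site inspection, ...) act independently.
        return prod(1.0 - d for d in detection_rates)

    print(f"three independent reviews (70%, 50%, 60%): {survival([0.7, 0.5, 0.6]):.2f}")
    # If the second reviewer simply repeats the first one's checklist, the two act
    # as a single filter and the redundancy is largely lost:
    print(f"effectively only two looks (70%, 60%):     {survival([0.7, 0.6]):.2f}")

Independence, not the sheer number of signatures, is what makes the redundancy pay.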

5 Correction and Cost of Errors Once an error and its potential or manifest consequences have been discovered, recognized, interpreted, communicated and followed-up, the corrective intervention must be planned and undertaken. Time will be of the essence since most often the construction process is now being held up. Delays may represent substantial additional cost all by themselves and must be minimized. Sometimes it is possible to find measures to restitute sufficient performance without delays, e.g. by providing supplementary load paths in the form of additional supporting elements. Expenditure for the additional materials may be minor compared with the cost of a delay implied by corrective action on the faulty element itself. In certain cases, reinforcements are the only reasonable option as for example on a high rise building column where insufficient concrete strength is found out at a time when construction has already proceeded to several stories above.

5.1 Correction as Part of an Optimization Process Basically a project starts with the creation of a proposed solution to a problem (which may itself not or not yet be quite clear at the start). As we have seen earlier, the “solution” or “proposal” will, in the course of the construction process, “travel” through several stations, being continuously adjusted, adapted, clarified and
improved according to the requirements, options and precisions as they come up—this is the description of an iterative optimisation process. Usually, no "best", i.e. optimal, solution can be found, due to complexity, vagueness and inconsistency of the criteria and limited time, and this results in a measure of uncertainty about the aptitude of the adopted solution with respect to the system of which it is expected to become an element. Since the elaboration of such a "solution" implies a considerable amount of "educated guesswork" in a field of insufficient, contradictory and time-dependent information, it could be compared to a "guided Monte Carlo" process where proposed descriptions of a solution are advanced, to be analysed, revised and improved progressively. As we have seen before, it is not always simple and straightforward to identify an "error" at its origin as it may simply be an inconsistency which arose because of absent or insufficient knowledge about the problem itself and the boundaries and limitations of the optimisation. It was found that inadequate information becomes an error through perpetuation. Consider for example the following story from the construction of the CN Tower, Toronto (a project which included a considerable amount of innovation at the time and which, on completion, became the tallest free-standing structure, a title it held for the next 35 years). As the slipforming of the concrete shaft started in the hottest season of the year, problems were anticipated with the premature setting and hardening of the concrete, which would then tend to adhere to the moving form, "freezing" it. Therefore, a certain dosage of retarding agent was used as an admixture to the fresh concrete, with the result that the concrete did not set (i.e. become firm) in time and, because the form had to rise at a certain minimum rate, collapsed on itself when it became exposed. Advance testing had demonstrated otherwise, and only real-time use of the proposed method showed it to be unsuitable. Another experience from the same project: the tower shaft walls received a high rate of post-tensioning, using high-strength cables made from 7-wire strands. The four-inch (100 mm) diameter sheaths made from corrugated metal were to be grouted in 100 ft (30 m) lifts, with the grout being injected through a plastic pipe at the base of every lift until it began to spill out through another plastic pipe at the top of the lift. Advance testing was carried out using a 200 ft tall scaffolding tower, in order to find out about the dos and don'ts of the procedure. In order to counteract shrinkage, an expansive agent was tried in an initial test. This is a gas-producing admixture which had proven itself for horizontal ducts (in bridges or buildings). However, due to the high hydrostatic pressure in the vertical ducts, the gas was pressed out of the mix and collected near the top in the form of foam, preventing the grout from filling the duct completely. The use of an expansive agent was abandoned in consequence. The two anecdotes are representative of what happens quite frequently where an element of novelty is present. The question—and it ends up being a moot question—is: did the propositions which were advanced constitute errors? In the second case certainly not, as the desired knowledge was gained intentionally through experimental research, albeit at considerable cost. In the first story, the experimental research had not turned up the pertinent knowledge, which was obtained only on
a full scale prototype, at the cost of a delay of approximately one week, some loss of material and some work spent for cleaning up. The “error”, if it was one, had taken effect and the erroneous information was corrected without causing any further problem. A project starts with a conceptual idea which usually comprises a considerable content of imitation, of past experience lived by ourselves, or by colleagues and competitors. Construction not being an industry bogged down by a multitude of patents and proprietary hurdles to the use of each and every method and procedure, this is rarely posing a problem. The finding of a solution in terms of approaching the optimal must therefore include identification of the things which differ from the circumstances of the referenced experience. These differences are only partially known at the start and may continue manifesting themselves as surprises, giving rise to adjustments and modifications of the proposed solution. Also accounted for must be the variation (“bell curve”) of the reference application as it may have been a “fluke” that it worked. When time has run out on the optimization using model representations, materials have to be procured and fabrication and installation processes started which makes the cost of correction take a great leap since mostly, the correction of an error on something which is physically produced or built will involve, besides materials and products to be discarded, delays while workers and machinery wait idle, and additional needs for materials and work. This suggests that an “error hunting” campaign should concentrate onto the phase/time just before substantial amounts of materials, manpower and machinery are committed, contractually and physically. This is consistent with another consideration found earlier that the best timing of a thorough review of the project is as late as possible but before construction starts, in order to catch as many flaws as possible. At this time, the information representing the project is most complete and assembled in a systematic format as tender documents, or in the form of documentation—specifications and drawings—being issued for construction. Often, in more complex projects, the work is subdivided into packages, or “lots” which may be given to different contractors at different times. This makes the checking more difficult as it has to follow the “phasing” of the contracts, in particular at the interfaces of the different segments of work where consistency becomes a primary consideration. It is a good idea to have the same team, or individuals performing the verification of all aspects and “lots” so that circumspect attention is paid in a holistic way and deficiencies of coordination are caught. It is the same for the different subtrades and their interfaces where a lack of coordination can cause havoc (see Sect. 7.9 for illustration).

5.2 The Cost of Correction Unlike natural organisms in certain circumstances, things in construction will not, as a rule, improve by themselves, true to Murphy's Law. The longer one waits with remedial action to correct or cure what is going wrong, the more difficult and complicated it will become. As we have seen, erroneous information which is produced in the earlier stages of the building process is usually caught, improved or corrected in subsequent stages, as part of the construction process. This happens mostly in the planning phase, where the project consists of various types of model representations such as computer models, drawings, specifications etc. This optimization or filtering process—there is no definite distinction in practice—is at the same time a process of definition and progressive precision. Information which may exist only in the form of vague ideas at the beginning is progressively firmed up, clarified and coordinated with other information, until it has reached a state of consistency that is considered sufficient to form the basis for contracts and, eventually, to carry out the work; one may say that matters become serious at this time, since financial means and manpower of a greater magnitude (typically 10–20 times) than during planning are now being committed, materials and equipment are ordered and the public and its authorities are notified. All the traffic lights are green, everyone is ready to roll and all seems to be as it should, were it not for the erroneous information which has survived and will now begin to be "cast in stone" along with all the other features. It will then become a manifest error. Its negative value will be in proportion to the magnitude of what is being committed, as it will be measured by the cost of eventual correction (see Sect. 7.8). This cost will at the same time be a function of the work climate, which is likely to have become contentious following the discovery of an error and the evaluation of its consequences. Fingers will be pointed, lawyers and experts hired and communication will become difficult, circumstances which have intensified considerably in recent times but have always followed something that went wrong. Following the collapse of the tower of Babel, people ceased to understand each other, something which in today's setting translates into lawyers' letters, litigation and partisan expert opinions. Things must not have been very different in ancient Mesopotamia, considering the building laws as exemplified by Hammurabi's stele. In this climate of contention it is sometimes difficult to arrive at a solution for the corrective work because, following a mishap, potential or real "experts" will have a field day, with little incentive to be reasonable. Every engineer who has been seated on the wrong side of the table facing the experts of the client, being blamed for what happened—or could happen according to the experts—has learned that he is engaged in an uphill struggle—it is presently called damage control. Things must now be made super safe to the satisfaction of everyone's whims, and weren't there some other aspects of the planned work which cause "concern" and "apprehension" among the experts, the lawyers and the clients? In fact, more often than not in such circumstances, the entire design and project may be put into question and given to the experts to
criticize, unless the designer succeeds in preserving a sufficient measure of trust in his competence and good intentions. Let us not forget that an accident or a costly correction shows up with a positive sign on the spreadsheet of the GDP, as well as on the cost–benefit account of the lawyers and their experts. The only thing that may help the poor engineer who finds himself at the "wrong end of the stick" is an expert who is a colleague, one who has been, or must anticipate one day being, in the same spot with the tables turned. It may therefore be a good idea to opt for the selection of experts who are in fact designers themselves rather than professional litigation agents. To keep the experts reasonable may be in the interest of all parties since very often things do not turn out to be unequivocally and entirely the fault of one party, in our case mostly the design engineer, but some part of the negative value—never mind the GDP and its absurdities—may, at the end of the day, stick to several or all of the participants. To minimize that negative value, which is the global cost of correction plus litigation, may therefore be in the interest of several players, keeping in mind as well that complicated and lengthy legal battles are associated with delays which will equally translate into an economic negative. In this author's experience, this is all fine and consistent with common sense, except in cases where one of the parties has said goodbye to that common sense and is acting out of revenge, resentment or some other urge to hurt a competitor—which some may consider rational enough to be pursued. Cases have been seen where, in a context such as suggested above, errors have been alleged by certain interested parties in terms of a perceived lack of sufficient safety. The designer as the defendant will then bear the burden of proof that this is not so. With the complexity and volume of building regulations on an exponential curve, this will become more and more common (e.g. the Canadian Building Code, including design standards for the most common materials, has increased in volume by a factor of 15 or so from 1970 to 2020, measured by the volume of printed paper, not counting several associated specialty standards such as provisions for parking structures etc.). The question of alleged rather than manifest fault and deficiency will increasingly occupy the courts. It will also cause an increasing amount of adversity and aggravation within the profession as more and more engineering decisions are transferred from engineering judgement to complex codification. The typical consequence has of course been to computerize all of this codification, the result being a new style of error, the computer glitch, including erroneous input, misconceptions about what is happening inside the "black box", and "bugs" in the software. To catch and correct this type of error is not a simple task at all, as it will likely be buried in a mass of automatically produced data. Modern software offers to do "design", in actual fact dimensioning, in one go from the input of geometry and loading conditions to the completed detail drawings of, say, a reinforced concrete structure. The only way to effectively check and verify the results of this is through a careful application of simplified hand calculation and pattern recognition by somebody with experience in this task. This points to older engineers who were educated before the arrival of the sophisticated software. At this time they are in the process
of retiring from professional practice. It is their task and vital duty to see to it that this faculty is passed on, since academia appears to be ill-equipped to do it, given its tendency to strive for ever finer analysis of problems construed to suit this. But this is another discussion, which reaches beyond the hunt for errors. The dilemma is of course also an economic one. As witnessed by examples given in textbooks and taught in undergraduate curricula, for the selection of the properties—dimensions, materials, configuration—of common structural elements such as beams or slabs, the task has become so lengthy and tedious that it takes up several pages of complicated calculation. This translates into manpower being tied down for lengthy amounts of time unless the task is fed into the computer. To do it by hand in all the complicated detail, using obscure empirical expressions concocted in the comfortable atmosphere of academic research—time being of no concern—has become all but unaffordable in practice. Current engineering fees translate into time allotments for the "design" of one structural element of the order of magnitude of 5 minutes, while doing it following the textbook may take a full hour or more. No wonder the computer has replaced manpower for this activity, performing it in nearly zero time and at nearly zero cost. The only thing: who will make sure the information thus produced makes sense before it is fed into the building process? One option to short-circuit the complicated calculations required by the building codes and textbooks is to design everything so much more massive that all the critical issues can be dismissed out of hand. This takes some experience, however, which can only be gained by having worked one's way "on foot" through a sufficient number of examples; it will also result in substantial "overdesign" and, at the same time, a degree of robustness… Many engineers are taking this avenue, consciously or not, as a way out of the minefield the modern codes and textbooks have become. Returning to the subject of this chapter, the cost of correction for glitches originating in computer work can be staggering, as in case 7.8, even though there the error was caught, fortuitously, before causing an accident.
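A hand check of the kind advocated above need not be elaborate. The sketch below recomputes, for a simply supported beam under uniform load, the governing moment and the plastic section modulus it requires; the load, span, steel grade and the omission of lateral-torsional buckling, shear and deflection checks are all assumptions made purely so that the five-minute plausibility check has something concrete to compare against the software's proposal.

    def required_plastic_modulus(w_kN_per_m, span_m, f_y_MPa=355.0):
        # Simply supported beam: M = w*L^2/8; required W_pl = M / f_y.
        # A deliberately crude check, ignoring buckling, shear and deflection.
        m_max = w_kN_per_m * span_m ** 2 / 8.0      # kN*m
        w_pl = m_max * 1e6 / f_y_MPa                # mm^3  (1 kN*m = 1e6 N*mm)
        return m_max, w_pl / 1e3                    # W_pl in cm^3

    m_max, w_pl = required_plastic_modulus(w_kN_per_m=25.0, span_m=8.0)
    print(f"M_max = {m_max:.0f} kNm, required W_pl = {w_pl:.0f} cm3")
    # If the software proposes a section offering, say, 630 cm3 (a hypothetical value),
    # the order of magnitude is right; if it offers 150 cm3, something has gone wrong.

The value of such a check lies not in precision but in catching the factor-of-ten mistakes that a mass of automatically produced data so easily hides.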

5.3 Insurance, Sharing Liability and Responsibility Mistakes, if perpetuated long enough, as we have seen, tend to lead to losses of a magnitude which far exceeds the means at the disposal of small or medium-sized engineering enterprises, which means that any serious claim for compensation could be fatal, leading directly to bankruptcy. Because in that case full compensation cannot, as a rule, be recovered, all parties are interested in acquiring insurance. Exceptions exist where large firms may decide to "self-insure", i.e. to provide and keep sufficient capital reserves to be able to satisfy claims confirmed through negotiations or court action. These reserves will, in principle, be of the same order of magnitude as the insurance coverage bought from a financial institution would be. Insurance companies are corporations whose only goal is maximizing profit. They will try to do so with any means at their disposal, with little or no regard as to whether
it may be compatible with moral values. In the case of errors in construction, some small print in the contractual documents will open the door for their representatives to question the competence, professional conduct and good governance of the insured defendant and to translate any fault found into a reduction or refusal of the claim. If so, the entire compensation, or the part which is left to the defendant to come up with, may still exceed his means. Equally, an upper limit is usually stated in the insurance agreement. If it is exceeded, it may, once again, lead to the "exit" of the defendant, with part of the loss remaining with the claimant. This translates into an incentive for that party, and in fact for all parties engaged in the building process, to avoid or to correct errors in time, and would constitute a good reason for everybody to participate in the "error hunt". Experience shows that very often this incentive and the logic behind it exceed the reasoning power of some parties, who will then pay lip service or no service to verification and checking, trying to hide behind legal protection in the form of disclaimers of responsibility. This type of behaviour is rather typical of public institutions, something that engineers engaging in work for them are well advised to keep in mind. Public institutions are run and served by public servants whose priority is often not to help deliver a good job but to protect their position and retirement. The times when a competent, long-term chief was in charge, on whose desk the "buck stopped", are no longer with us, with responsibility being subdivided, limited and delegated so much that it is in fact getting lost. One can cut a cake into 4, 8, or maybe 16 sections, but beyond that only crumbs will remain. Similar problems may arise in large engineering organizations if they are not run by people who are engineers themselves, or who possess at least a good degree of understanding of the work the corporation is performing. Managers educated in schools of management will have a tendency to consider human resources as just a commodity, the individuals, once classified, being considered interchangeable. This author recalls a site superintendent complaining that, on the medium-sized job he was serving, he had, at about the halfway mark, met 83 different representatives of the structural engineering firm, each one showing up with an assigned responsibility limited in scope and time. One may reflect that, in practical terms, responsibility as it should exist, in terms of incentives in the minds of the participants, has been mostly lost. In the particular case nothing dramatic happened, fortuitously, except for a massive cost overrun—the bill for errors not caught and corrected in time. In cases of minor grievance it is often not considered worth it by the parties to determine precisely the rates of responsibility, read guilt, as this would translate into delays and aggravation, hurting team spirit and mutual trust. Mostly in these circumstances the cost ends up with the client—in terms of claims for extra work, once again—a good reason for all to participate in the quality control and elimination of errors.

6 Methodology of Quality Control Quality has been postulated to be measured by the absence or presence of flaws. Traditionally it applies to the final product and is used where errors are given physical expression in the form of potential or manifest distress, unacceptable risk, premature degradation, etc. and amount to a loss of value relative to a perfect product. In this context, errors which were caught and corrected in time would no longer be relevant for the assessment of quality, unless their correction amounts to a loss of value itself—it is not always possible or desirable to fully make good for something which went wrong, and parties may decide to accept a state of affairs which falls short of complete and perfect restitution. The purpose of any strategy toward the elimination of flaws is to catch and correct them as soon as possible, since the correction itself usually represents a negative value which grows with the progress of the construction process. As was seen before, the elimination of errors must function all along the construction process, at all stations and in all phases, to be effective. For the following discussion, therefore, quality control shall mean the entire activity aimed at the elimination and correction of errors, whatever the timing or phasing of each activity or intervention. As we have found before, this goes hand in hand with the iterative optimization of the project before the start of construction—and cannot clearly be distinguished from the normal process of progressive improvement. The question then becomes where, when and which resources should be assigned to the task and how their activity should be organized and directed, a task for management in the first place. For the filtering and improvement to function, the proper selection of personnel, the timing and the protocols must be adapted to maximize their effect, i.e. the timely elimination of errors and potential flaws—itself an optimization task. Several modes of action are on the menu for doing this, each one relating to particular elements or features of the scenario and each one associated with advantages and drawbacks:
• Protocols based on systematic documentation and review of the project information at certain stages.
• Periodic review of the state of the project.
• Targeted checks of the important elements and aspects.
• Circumspect surveillance.
• Testing and monitoring.
Usually, resources for quality control are limited, so that a complete, all-encompassing and repeated review is not possible. An optimal balance must therefore be found for the assignment of resources to the various tasks. In the following text, the characteristics of each mode of action shall be discussed, in the attempt to identify the elements of the construction process where each mode can be assumed to be effective. It will not be possible to state clear and generally applicable rules—there is no one-size-fits-all approach and the organization of the quality control/checking process will have to be related to the needs and merits of
each case. However, a few basic conclusions can be drawn to apply more or less generally, and to be used for the methodology.

6.1 Preset Protocols These are the most rigid, routine and inflexible mode of verification. They are also the most systematic and complete checking mode if not hindered by lack of time or of timely access to the data. Protocol checking is mostly used to establish the conformity of one representation of project information, e.g. shop drawings, with another, previous one, such as "engineering drawings" or calculations. The effectiveness of protocol checking is therefore limited to certain aspects and phases of the construction process, and only errors in these particular phases will be caught. Errors from earlier or later phases will go unnoticed. Protocol checking will often be carried out by personnel with limited understanding of engineering, with the emphasis on documentation. Shop drawings are returned with a stamp of approval which signifies that the verification agent is now sharing the responsibility for the information shown. The stamps used are usually adorned with small-print text attempting to disclaim that participation or to mitigate it, with little real significance. Checking protocols are limited in terms of detecting errors, for reasons such as those discussed above. They should not be used as the principal or the only tool of quality control, even though the quantity of documentation thus produced may be seductive, creating the impression that all is well, in the sense of "satisfiction". Checking protocols can be tedious and boring work, and the monotony of it implies the risk that errors will escape attention. For this reason, once again, it is not a good idea to rely overly much on their effect, and resources should be spared for other, more inspired modes of checking. Protocol checking can be used to establish the completeness of information (dimensions, mapping, specifications of materials, work procedures etc.) as well as the absence of redundant or contradictory information: information should appear only once in the documentation, e.g. on drawings. If it is repeated, this leads inevitably to contradiction and confusion as soon as modifications are introduced. Systematic checking should also be used to control execution, in particular details such as reinforcing steel when it is in place, bolted connections or welds. If critical, the latter should be verified with appropriate testing methods (see below for a discussion of testing procedures). The documentation produced by protocol checking helps to reconstruct the timing and substance of the information being communicated. At the same time, it may be used to sort out contentions and to assign responsibilities, although the attribution of these will usually have to be decided in negotiations, or by the courts.
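The rule stated above, that a given piece of information should appear only once, lends itself to a mechanical check. In the sketch below the document names and dimension keys are invented; the idea is simply to scan several document extracts and flag any quantity specified more than once, and above all any quantity specified with conflicting values.

    from collections import defaultdict

    def find_duplicates(documents):
        # documents: {document name: {item: value}}. Returns every item that is
        # defined in more than one document, with the values found in each.
        seen = defaultdict(dict)
        for doc, items in documents.items():
            for item, value in items.items():
                seen[item][doc] = value
        return {item: src for item, src in seen.items() if len(src) > 1}

    docs = {
        "structural drawing S-201": {"slab thickness (mm)": 250, "concrete grade": "C30/37"},
        "architectural plan A-105": {"slab thickness (mm)": 230},
        "specification 03 30 00":   {"concrete grade": "C30/37"},
    }
    for item, sources in find_duplicates(docs).items():
        tag = "CONFLICT" if len(set(sources.values())) > 1 else "duplicate"
        print(f"{tag}: {item} -> {sources}")

Such a scan does not replace the protocol check; it merely catches the contradictions that the monotony of the protocol check so easily lets through.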

6.2 The Periodic Project Review This is sometimes carried out in the form of meetings where the available information is displayed as completely as possible, the main purpose being coordination. All participants are then asked to document their agreement with what was seen. In other cases, e.g. in order to establish concordance among different disciplines—architecture, structural or mechanical engineering—documentation is sent around for approval, or correction if necessary. Site meetings have a similar goal, i.e. the coordination of modifications, as well as a late review of information before it is turned over to the construction team for execution. Mostly this is done following a cumulative record of questions and problems, identifying the parties responsible for the follow-up. Periodic reviews are the key theatre for the continuing optimization of a project. It is here that interdisciplinary problems of compatibility, value engineering and technical decisions concerning more than one discipline are dealt with. Since much of the discussion will relate to improvement and modification of the design, at least in the beginning, it is important that each discipline is represented, and represented by somebody who commands an understanding of the whole project and is able to gage the consequences of propositions and decisions. If the implications are too complex, requiring studies and calculations, or if subordinate personnel are sent to the meetings, decisions will have to be deferred until sufficient insight has been mobilized. It is mostly up to the meeting participants to decide whether decisions can be made on the spot, or not. Periodic reviews and site meetings have an important if secondary function concerning the elimination of errors and flaws. Because each modification and decision necessitates a second look at the features of the project which are or may be affected, it is here that the filtering of the process comes to bear. This is thanks to two corollary functions of this mode of review: firstly, the project receives the attention of people who were not directly involved in the checking protocols; things are looked at from a different angle. Secondly, features are moved into a different context, e.g. in order to accommodate the requirements of another discipline, or new wishes of the client or of the contractor executing the work. Both these functions are basically of a random character as far as systematic checking goes and are therefore far from complete. Nevertheless, experience shows that many serious issues have been brought to light in this fashion; often enough, stupid-sounding questions asked during a review turn out to be not so stupid at all but rather pertinent—"listen to the court jester" for lateral thinking.

6.3 Targeted Checking In terms of risk control and mitigation, this is the most effective mode of verification since it can be directed at the potential or real sources of risk, i.e. critical
elements of the load path, modes of instability, particular types of exposure, e.g. earthquakes, notorious scenarios known to cause problems—think of falsework collapses, punching shear failure—or elements of novelty, etc. Most of these situations are of course part and parcel of a normal process of designing and developing, in other words optimizing, a project, and are dealt with through analysis, calculation and professional discussion in the course of events. However, where the possible consequences of something going wrong are particularly serious, or where experience shows empirically that, for reasons known or unknown, certain unfortunate scenarios repeat themselves, it is obvious that a higher degree of attention is called for. A similar conclusion applies to features that, due to their high cost, large number of repetitive production, or high uncertainty, become critical. Simple arithmetic shows clearly that the larger the risk, the more it pays to invest in its reduction, in this case in terms of the resources appropriated to it. The design of an element which will be fabricated in large numbers is of a critical nature since the negative value of any flaw, deficiency or defect will be multiplied by that number. In industrial manufacture this is well recognized and the development of even minor elements is given high attention, with prototypes, testing and auditing, before and during mass production… The same considerations apply to the construction process, a mass-produced element being critical due to the large quantity it represents in terms of investment as well as the cost of potential correction, replacement, repair or rehabilitation. Targeted quality control must be inspired, since neither the recognition and gaging of risk nor the choice of the method of verification is usually a simple affair. To select the critical welds in a steel structure requires insight into the structural function of each element or group of elements, as well as experience with the behaviour of various types of welding. The testing method selected must be appropriate for each individual application. The compressive strength of concrete is normally not a parameter of primary importance, and some variation, even below the specified value, can be acceptable, with the exception of elements under high compressive stress such as columns in high-rise buildings. Other properties of concrete have turned out to be at the root of a host of problems which are perhaps less dramatic than a compression failure but more frequent and therefore economically more important in terms of cost or loss of value. One may think of shrinkage and its consequences, or of degradation in the presence of chlorides (deicing salt, sea water). There is of course another way to diminish risk: to "design around" critical elements, but this takes even more insight and experience. Butt welds in zones of high tension are critical to a high degree and their control must be as thorough and reliable as possible, which, in particular for field welding, translates into complexity and a high cost of testing. It is therefore certainly a good idea to avoid this type of weld, to move such welds to places of lesser stress, or to execute them in the workshop rather than on site. Complexity makes situations more critical and risky, for at least two reasons:
• Insight into the behaviour of complex elements or assemblies is made more difficult.
• Verification and testing become more difficult and labour-intensive, i.e. expensive.
For the reinforcing of concrete frames in seismically active regions, complicated and labour-intensive detailing has been proposed in order to achieve ductility. This is not to the liking of the people having to execute the details on the construction site, a situation which calls for intensified control. This in turn is associated with a corollary problem of quality control: completeness. In a high-rise building relying on moment frames for resistance and ductility, there will be many hundreds of beam-column intersections, each of which is critical since it will represent an element of weakness on the load path if not built correctly. This requires a thorough organization of control, lest one or a few of these intersections escape it, degrading the behaviour of the entire structure. It does not take a high degree of knowledge of fracture mechanics to understand that a shear failure in or near a joint can jeopardize the ductility of an entire system even though all the other joints are perfect. A chain is only as strong as its weakest link.
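The simple arithmetic invoked earlier in this subsection amounts to comparing the cost of an extra verification with the reduction in expected loss it buys. All probabilities, consequences, detection rates and checking costs in the sketch below are assumed for illustration only; the ranking, not the numbers, is the point.

    def net_benefit(p_flaw, consequence, detection, cost):
        # Expected loss avoided by a check that finds the flaw with probability
        # 'detection', minus the cost of performing the check.
        return p_flaw * consequence * detection - cost

    candidates = [
        # (name, assumed flaw probability, consequence, detection rate, checking cost)
        ("falsework stability review",     0.02,  5_000_000, 0.8, 15_000),
        ("field butt-weld testing",        0.05,  2_000_000, 0.9, 40_000),
        ("repetitive precast connection",  0.01,  8_000_000, 0.7, 25_000),
        ("routine re-check of one detail", 0.001,   200_000, 0.5, 10_000),
    ]
    for name, p, c, d, cost in sorted(candidates, key=lambda x: net_benefit(*x[1:]),
                                      reverse=True):
        print(f"{name:32s} net benefit = {net_benefit(p, c, d, cost):>9,.0f}")

A negative number at the bottom of the list does not make that check worthless; it only says that, with a finite budget, the targeted effort belongs first on the items where probability and consequence are both large.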

6.4 Circumspect Surveillance The seasoned engineer or technician, when called to review a project, or some of its aspects, will mostly rely on the recognition of patterns, at least initially. Anything that stands out as unfamiliar, i.e. looking different from what one is used to, will trigger a second look. Where something looks suspicious, one digs deeper, either to find a rational explanation for what one sees, or to establish whether or not something requires corrective action. This activity is very much related to lateral thinking, i.e. to reviewing the relations of what one sees to what is going on in the spatial, functional and temporal environment. The work team, when faced with difficulties in executing what the project documentation specifies, will adjust methods or sequence to ease the problems, and may end up doing things that differ from what the engineer or architect had in mind. It is then up to him or her to determine whether the element in question, or other features of the project, may be compromised by what is being done. Or, simply, mistakes from earlier phases of the construction process may have survived and are now being given physical expression. This can take virtually infinite variations, beginning with missing or inappropriately formed elements—think of reinforcing bars too short, too small, placed wrongly etc.—to assemblies containing modes of instability—falsework collapses come to mind—and on to improper replacements—see case No. 7.18 …. This author's memory includes the short story of a workman happily chipping away at a column carrying three storeys of a concrete structure. Taken to task when one third of the column section was already gone, his answer was that he had been instructed by his foreman to find an electrical conduit inside the column. Similar anecdotes abound, and some of them are associated with very serious accidents. They usually go back to the fact that the recognition and comprehension of the

2 Design and Construction Case Studies

51

consequences, i.e. the risk, is dissociated personally or temporally from the action itself. The numerous accidents involving collapsed excavations bear witness to this: the workman digging the hole was not aware that he was endangering himself by waiting too long with the shoring of the earth wall. It takes an experienced person, who must be there at the right time, to recognize and judge this sort of risk. This brings up an important point once again: the timing of the activity of control and supervision. An error or flaw can be caught only when and if it is perceptible in some form. In the context of site surveillance this means that surveillance must take place during the time window of visibility, e.g. when the reinforcing steel is placed completely, and before the concrete arrives. On occasion this time window shrinks to zero—an error-prone situation. Similarly, structural elements must be seen in their place before disappearing behind wall boards or veneers. Every engineer tries to forget scenes where he arrived too late to see what was going on, and it was not in his power to make a drama of it, or even if he did, the inspectability could not be retrieved. All that is left then is to hope for the best… In the case of reinforcing steel embedded in concrete, methods exist presently to detect its presence or absence in hardened concrete (radar, ultrasound, etc.); in cases of doubt, these procedures may have to be mobilized.

6.5 Testing
Some flaws or defects are perceptible only with the help of appropriate tools, tests, or over time. Some will also progress and become more serious over time until a point is reached where correction becomes urgent lest serious events are precipitated. It is not usually a good idea to wait for that point in time, since the risk of something happening suddenly and unexpectedly is always present and grows with time. Periodic observations and measurements will tell about that progress. This is especially true for rupture in tension (or shear, which usually amounts to a tensile situation) involving all kinds of materials and scenarios and normally manifesting itself in the form of cracks. Most often, problems of progressive damage and weakening are the result of errors in design, caused by misjudgement or lack of comprehension of the physical processes at work. The consequences of such errors may frequently escape notice for some time, which means that corrective intervention must take place during use of the structure, making it unwelcome, difficult and expensive. Often the symptoms seen do not directly interfere with the use, or not yet, and it is decided to wait and see rather than incur the expense of corrective intervention. Monitoring then becomes the alternative and as such the “sequitur” of the error, replacing outright correction, at least for the time being. Some faults or deficits remain imperceptible up to the sudden rupture—fatigue is one of them (see case 7.7); another is associated with internal corrosion (see case 7.30).

6.6 Conclusion
The preceding section is an attempt to elucidate the properties of errors, their origin and presumed history leading to detection, correction, or failure. The concept of error cannot be described or defined clearly and unequivocally, as it blends into other notions such as the taking of risk, unknowingly or consciously, with the concept of gambling at the extreme end of the sliding scale. As will become clear from the case studies, irrational parameters such as the diversion of attention from the task at hand, or the notion that temporary works require less care and safety, are very much at work and must be dealt with if the effect of errors is ever to be attenuated. Likewise, the ever present facts of life, restrained and profit-driven budgets and time pressure (time is also money), severely limit the effect of quality assurance, revision and checking, and auditing. These activities do not produce any direct return and therefore rarely appear as separately identified portions of budgets or mandates. Even field supervision is often taken out of the engineer’s mandate and given to some other agent in order to reduce fees. All of this amounts to a clear indication that it belongs to management to organise the work so that none of these problems will affect decision making, communication, and quality of work, in other words so that errors and faults are prevented or eliminated in time. Since every construction project is an optimisation process, it is normal that suboptimal or even plainly wrong information may be produced along the way—but this will constitute an error only if it is not corrected in time, causing loss of value, or failure. The construction process is itself a powerful filtering agent doing just that, eliminating faults and errors. It will do so, however, only if it is kept sound and healthy, i.e. free of “ills” such as incompetence, lack of time and means for quality control at all stages, and, worst of all, unresolved contentions.

7 Case Studies
The following selection of case studies, or better, anecdotal tales of mishaps and near misses comes from the experience of the author, who has been implicated in most of the reported events in one way or another. The list includes two “skeletons” from the author’s own closet where his participation was at least partially instrumental to what happened. As the reader will perceive, a wide variety of scenarios or circumstances is reviewed—this may be as close to a truly random sample as one may come by and therefore quite representative of the subject, i.e. the large quantity of instances where, in the construction industry, things turned out and still do turn out wrong, with
the cost of the consequences varying through several orders of magnitude in terms of loss of value and, in some cases, injury and loss of life. A few notorious cases are included where similar circumstances keep returning over and over (see cases 7.2, 7.3, 7.4, 7.9, 7.10, 7.16, 7.19, 7.26, 7.28, 7.32, 7.47), some of them particularly insidious, where the fault remains latent and hidden, only to lead to dramatic failure after some time. Since these scenarios keep returning, they must be related to habits and “innate” fallacies of the construction industry and the attitude of society at large. For most if not all of these recurring scenarios, time pressure can be cited as a major ingredient taking precedence over prudence and care. This can be traced back directly to the modern obsession with short-term profit, which leaves little room for doing things with the care they merit (see for example case 7.3). Some bad habits or practices may largely belong to history, and the industry has begun to recognise the cost of long-term consequences (see cases 7.9, 7.14). The enormous problem with concrete infrastructure built in the 1960s and 1970s is an example, where it was believed that concrete would last forever, and in any circumstances. The theoretical material properties were exploited to the “last drop”, safety margins were reduced to the bone, and a competition was on for minimising initial cost—the structures were left with no reserves for robustness and durability in the face of unforeseen circumstances. No maintenance seemed to be needed and so it was withheld for several decades. The error has now been recognised as the bills are coming in, and a new approach to quality and prudence has been adopted, at least by most of the public bodies owning these structures. This wisdom is not entirely new; the author recalls the story of a small bridge built in the 1970s where—for the period—the design had provided generous concrete dimensions, only to be severely criticized by a reviewing peer who proposed to reduce the dimensions drastically, as this was the fashion of the day. Fortunately, the engineer in charge of public works turned the reduction down and the design as originally proposed was adopted. The bridge is still standing intact after 40 years, without any sign of distress—in an Alpine climate where the use of deicing salt is a recurring necessity. In some scenarios, the concept of error blends into the taking of risk, where the outcome will decide whether the decision was the right one, or better, the lucky one. Mostly, risks are taken for economic reasons, i.e. to save money initially, but almost invariably, where things turn out wrong, the decision to do so is found to have been a “penny wise” one, with the cost of the consequences exceeding the hoped-for savings by a large factor. The attitude to gamble has always been with us, especially in construction. History abounds with failures where the limits of the practicable are continuously pushed back. The product wants to be higher, lighter, more elegant, and cheaper, where the double meaning of this last word applies both ways. I have tried to abstain from attributing responsibility as much as possible, as this is a very difficult matter in real life. Obviously, in some cases, it cannot be avoided that the reader perceives where the blame must be laid, but it is useful to recall what was found earlier, i.e. that every error, no matter who is associated with it, is also an error of management. It is similar to the driving of a truck. The driver will be blamed
for having caused the accident, but it was management who chose the driver and the truck, as well as imposing the itinerary, timing and work schedule.

7.1 The Use of Brittle Materials
Among the most perfectly brittle3 materials used in construction, some popular fabrics must be counted (Aramid, glass, carbon fibres). They are far more brittle than such products as concrete, masonry or very slender metal elements, which in engineering design have traditionally been treated with the necessary caution, excluding them from tasks requiring resilience or ductility. The problem with fabrics is that, depending on the weave and exposure, they possess very low tearing resistance. This compounds the problem with brittleness dramatically, leading to premature and unexpected failures through ripping, i.e. a progressive failure. The roof of the Olympic (1976) Stadium in Montreal was completed in 1987 and its replacement in 1991 (Olympic Games are held under open sky, therefore the Stadium did not need to be completed for the games and received its first roof only later). Both roofs were constructed using high strength fabric, made from Aramid (Kevlar) and glass fibres respectively, embedded in a relatively flexible matrix. Both roofs failed, due to wind and snow effects respectively, in a sudden and dramatic way, by ripping progressively. The calculations had been made using relatively high safety margins (of the order of magnitude of 3–4) against tensile rupture. The problem with this is that in the calculations of stress neither imperfections such as stress concentrations or small defects, nor the effect of the low tearing strength, are accounted for. The two effects compound each other, with the result that safety factors become meaningless since they do not relate to the relevant failure mode. This in effect constitutes the error in this case, namely that the critical mode of rupture was ignored—the fact of the low tearing resistance was known although the tools to analyse it correctly were not, much like the man who looks for his lost glasses under the street light rather than where he lost them, because it is too dark there. Both ruptures were accompanied by excessive deformations, another property of fabric structures due to their highly nonlinear behaviour, which combines badly with the brittle material characteristics. One might conclude that in this case a gamble was taken with a risk of unknown magnitude, which is typical for endeavours near the—in this case practical—limits of technology. The stadium can still not be used in the winter months as it is deemed unsafe in the case of snowfall, although the fabric roof is being continuously repaired.

3 Brittle in the structural context means an elastic (linear) load/deformation behaviour up to rupture, without residual strength or non-linear deformability.

7.2 Traditional Methods Versus Modern Materials
Traditional masonry goes back to prehistoric times and has in the past reached admirable peaks of success (Hagia Sofia, cathedrals of Europe, temples of Asia, etc.). It was also used as a construction method for vast amounts of buildings intended for more mundane purposes such as housing, fortresses, institutional buildings, and infrastructure including bridges, sewers, etc. It is based on bond among the masonry units produced by mortars of various types, mostly of the lime type—a material which was relatively easy to produce from resources that were found almost everywhere. In the harsher climates of the northern hemisphere, where high variations of temperature and humidity recur periodically, the durability of these mortars is limited; water penetration and frost action will reduce the life expectancy of masonry exposed to exterior conditions to a fraction of what it would be in, say, the Mediterranean. Nevertheless, with reasonable maintenance (replacing damaged stones and repointing the joints every few decades), the durability can be enhanced to reach one or two centuries without major overhaul. This is in contradiction with modern expectations, where maintenance is perceived as a nuisance, a necessary evil that does not produce any positive return. It will therefore often be put off to later years (where it may become somebody else’s problem), or methods are sought to deal with the durability question once and for all. One of these interventions has turned out to produce disastrous consequences since building physics was left out of consideration: masonry, especially stone masonry made with lime based mortars, possesses a certain permeability for gaseous substances such as air and water vapour—the wall can breathe, relieving itself of the water which entered in liquid form through leakage or capillary action, or as vapour condensing on cooler surfaces. This property has been jeopardized in two ways by modern construction:
• On the inside of modern buildings, one does not normally wish to look at raw masonry and the walls are therefore covered with panelling made mostly from gypsum boards, the joints being sealed with tapes and painted over. This produces a nearly impenetrable vapour barrier.
• On the outside, the joints were repointed in the 1960s–1970s with mortar made from Portland cement because of its superior strength, which once again makes a seal preventing water vapour from escaping.
This leaves the body of the wall completely enclosed in what amounts to nearly impenetrable vapour barriers. The water which enters at window sills or through cracks is trapped effectively. It will periodically freeze and thaw when the dew point moves with the temperature gradient through the wall, with the consequence that the mortar inside the wall—the original—will disintegrate; it will essentially become wet sand. In addition, the volume change when the water freezes will push the exterior face of the wall towards the outside, leaving the wall split into two more or less freestanding
wythes. This is bad news for the effective load bearing capacity, as well as for durability. Restoration work in the last decade or so has brought the problem to light. It is affecting numerous venerable and heritage buildings. The error at work can be seen as the thoughtless combination of incompatible methods and materials. This sort of problem manifests itself very often only after some time and, being latent, it can cause a large amount of damage before being recognized and corrected. New materials and methods are brought to the market and promoted vigorously in great numbers. The prudent engineer or architect will tend to wait with their application until it can be seen how they perform, especially when building physics is involved, and in climates that include frost.

7.3 The Notorious Formwork Collapse
The example examined here is just that, an example from a large number of accidents with similar causes—accidents which keep recurring persistently, often with tragic consequences. An inclined overpass was to be built with a massive solid concrete slab (approximately 1 m thick), and about one third of the concrete had been placed when a movement was perceived by the work crew, who were able to escape in time, including a worker who had been sent to look at the scaffolding, before the portion supporting the concrete already in place collapsed. The scaffolding and formwork had been completed the day before the concrete was placed, and had been inspected quickly by the designer of the bridge, an experienced engineer. The case was sorted out among the three parties mainly implicated, i.e. the engineer, the formwork contractor and the supplier of the equipment. It was settled as a civil case out of court since, luckily, nobody had come to harm. The scaffolding consisted of braced “towers” with four supports, and aluminium beams placed approximately along the contour lines, i.e. horizontally, on wooden supporting beams which rested in the box-shaped shoes at the top of the posts. On the portion of scaffolding which had not collapsed, the following observations were made:
• The X-braces of the scaffolding “towers” were attached with one-bolt connections. Some bolts were found missing.
• The aluminium beams, because they had been placed horizontally rather than in the slope, were leaning and therefore received a horizontal load component at the top flange from the weight of the concrete. They had been secured by nails bent around the bottom flange in some places, but not consistently.
• Some of the contact points in the shoes at the top of the posts were eccentric and inclined, and makeshift wooden shims had been used to adjust.
• The aluminium beams themselves had seen heavy use and showed bent and damaged flanges, general wear and tear, and some twisted parts.
• The system had been assembled quickly, and the conclusion was that a combination of causes had led to the failure.
Assemblies of this kind are more or less isostatic, with little or no redundancy, meaning that the failure of one element, e.g. a buckling post, can bring the entire system down. The multiple towers with the cross braces essentially working in tension (they are usually made from small calibre pipes which can resist little compression) form a “forest” of several hundred posts, each one of which must be braced effectively in both directions. Seen probabilistically, this is not a favourable situation and it requires thorough checking of every element, as each one is a potential culprit of disaster. In our example, this verification was carried out quickly under time pressure, and in all likelihood defects may have existed in the collapsed portion similar to what was found in the portion still standing. A fundamental error must be cited in the fact that the aluminium beams were leaning and were neither consistently nor effectively secured against toppling over, a condition of instability. Add to this the rickety condition of the assembly and the worn material, and the recipe for disaster is complete. The elements used for formwork support are usually designed and thoroughly tested for their load bearing capacity—under laboratory, i.e. ideal, conditions: no defects, no tilt, loads applied concentrically, etc. Field conditions are usually very different and the effective load capacity will be reduced from what was found in the laboratory, including the safety margins one believes to have been provided. Particular attention must be paid to the numerous potential modes of instability:
• Buckling of posts due to faulty bracing connections.
• Lateral buckling and toppling of beams.
• Web crippling at load concentrations.
• Combined instability involving two or more elements on the vertical load path.
• Eccentric load transfer at the points of support.
• Uneven loading of posts due to variable soil conditions.
• Failures due to wear and tear.

Some of these effects are difficult to avoid and can be counteracted only by providing robustness, e.g. by installing redundancy. Formwork supports are temporary structures, a fact which encourages the fallacy that lesser safety margins and attention may be acceptable. The frequency and recurrence of scaffolding collapses say otherwise.

7.4 The Error of Use: No Maintenance
Every human error is also an error of management. This is especially true for the second-to-last stage of the construction process: use. At this stage, no professionals or
contractors are primarily involved, and the owner/user is on his own to see to it that what needs to be done is done. The particular incident examined here involves a timber trestle carrying a railroad which is mostly used to transport timber logs on very highly loaded flat cars (the wheel loads of these cars may exceed those of the locomotive). This type of bridge consists of an assembly of timbers of massive dimensions (8″ to 16″, i.e. 200 to 400 mm or more) which for durability are impregnated with a suitable chemical, mostly hydrocarbon based. This does not, however, penetrate more than about 2″ (50 mm) deep, leaving the core mostly unprotected. The bridge was inspected some twenty years after its creation, and the inspection report notes that the degradation of the interior of some of the massive timbers—which were made from relatively rot resistant conifer wood—had progressed to “wet” or “wet and soft”, these being stages preceding “rot”. These conditions are established by taking cores from the timbers, reaching to the interior. In the same year, a repair crew was sent to the trestle and replaced some timbers which had progressed beyond the acceptable condition. However, their work was stopped before completion, leaving about half the bridge unattended. The bridge collapsed four years later under the weight of a heavy freight train. Apparently, the failure was triggered by the lead engine, and the front portion of the train fell into the valley, followed by a large fire. Two men lost their lives, including the engineer. The details of the collapse remain controversial, as it is often impossible to reconstruct a chain of events from the—in this case badly burnt—debris of a cataclysmic accident. However, it appears quite clear that the collapse had something to do with the degradation of the timbers of the bridge, the suspicion concentrating on a large “cap” beam directly supported by the timber piles of one of the “bents” (piers). The questions are then:
• Why was the protective treatment not adjusted to the massive timbers, the interior of which remained unprotected?
• Why were inspections not carried out more frequently, say every two years instead of every five years, or sometimes more?
• Why was the repair and replacement campaign not completed?
All these questions are directly related to management, since the decisions about directing and scheduling inspection and repair work are entirely theirs. The details of the legal battle which ensued are of little interest except that they followed the usual game plan of expertises and counter-expertises, the effect of which is mostly to obfuscate the matter so that the truth may conveniently get lost in the fog.

7.5 A Mix-up
This case involves the project of an industrial building, to be used for the assembly of machinery. Two parallel alleys with overhead cranes of different load-carrying capacity were to be provided. The client’s information was less than clear and was
communicated through several stations until it reached the work team, i.e. the individuals in charge of preparing the documentation for construction. An emotional problem prevented the technician originally assigned to prepare the drawings from doing so in time. Time pressure—which is nearly normal in construction—forced the work team to give the task to someone else, who was one station farther away from the source of information. The result was that the two crane alleys were mixed up, with the high capacity crane ending up where the lighter one was supposed to be, and vice versa. The drawings went to the fabricator and erector unchecked; the mistake was made into physical reality and was discovered only when the client started to use the facility. The lessons to be learned here are the following, all of them fairly common:
• If negative emotions interfere with the work, special attention is needed to ensure the correctness of the information being produced.
• Time pressure must not be allowed to cause the omission of verification and checking. In this case, a simple but direct review of the original documentation of the client’s intent would have been sufficient to eliminate the error.
• The fact that it was omitted constitutes an error on the part of the individuals who elaborated the construction documents, but even more so of management.
• If the line of communication is long, involving numerous relays, the original source of information must be reactivated.

7.6 The Crooked Mast
The top of a multipurpose (broadcasting, tourism, proud landmark) concrete tower was to receive a 104 m high top of structural steel, the “mast”. This structure was designed as a tapering tubular assembly of plates, joined by high strength bolted connections. The mast was assembled in place from pre-erected ring-shaped sections, the weight (and therefore the size) of which was limited by the load bearing capacity of the helicopter used for erection. A preassembly protocol had been specified to guarantee straightness and fit. Four units were assembled in the yard of the fabricator; each time the bottom unit was removed and shipped to the construction site, another one was added at the top. This method turned out quite successful except in one instance where a pronounced kink was discovered in the fully erected mast. The steel erector took it upon himself to correct the mistake, using methods from the bottom drawer of the tool box; the delay involved was not considered cause for aggravation, since the whole project was approaching the then limits of technology and some problems had been anticipated. The story behind the mishap (this author learned it only years later, when the chief engineer of the steel company had retired and the company itself was defunct) was the following: the pre-erection protocol had been followed meticulously, but at one point one of the pre-erected modules was hit by a pay loader, unbeknown to the people in charge,
dislodging some of the preassembled parts. It went to the site in this condition, causing the crookedness of the mast, which was visible to the naked eye. The reason for the mistake, its circumstances and its perpetuation can be summarized as follows:
• The driver of the pay loader abstained from notifying his superiors, presumably because he feared reprisals, e.g. losing his job. He was found out by the company’s internal detective work.
• No preset protocol is capable of dealing with unforeseen circumstances. It must be supplemented by independent control. On another tower of the same type, a similar procedure had been specified and carried out, with very mixed success. Many substantial adjustments were needed during erection.
• It is a good idea to always have a “plan B” up one’s sleeve, in case something turns out unexpected, in order to deal with eventualities quickly and effectively. It helps when the lead people have some imagination.

7.7 Vibration Problems
This case involves a small cable stayed bridge (total length 150 m), a type of structure which has turned out to be prone to difficulties. This is not because of the basic principle, which is of obvious simplicity, but because “the devil is in the details”. Everything went well at first and the bridge was opened for use by normal road traffic. However, the cumulative effects of a number of faults ultimately led to an accident scenario where the structure suffered severe damage; the bridge had to be closed for traffic and repaired at a cost comparable to the initial cost of construction, including the expense for all the legal and procedural action that accompanies such events.
• The tight schedule had caused problems of procurement, and among numerous other concessions, the substitution of an inferior class of steel had to be accepted. Instead of the specified quality, which would have had a favourable behaviour at cold temperature, a steel with a transition temperature well above the winter weather was used for the guys. With temperatures to be expected in the low −20s (°C), this turned out to be a serious problem. Forensic investigation, following the failure, turned up the fact that the steel for the bridge had been procured from 24 different sources, all with impeccable certificates…
• The guys were made from 3½ in. (89 mm) diameter solid rods, each guy consisting of four units arranged in blocks of four in section, with a spacing of 8″ (200 mm). (The use of solid rods for this purpose has recently been discouraged or prohibited by some authorities, presumably due to an accumulation of bad experience.)
• The mode of attachment at both ends of the rods did not permit rotation without secondary bending moments.
• The damping associated with vibration of the guys was nearly nonexistent, and no tying together of the four rods had been provided, because no reason had been identified to do this.
On a cold winter night (−20 °C or so), a blizzard with winds of moderate speed blowing principally across the bridge apparently caused the leeward members of each guy to vibrate vertically (wake buffeting). Because of the very low damping, the movement in and out of the wind shade of the windward rods was excited to a damaging amplitude for a number of cycles of the order of 20,000 during this particular storm. This led to low cycle fatigue failure of 4 rods in 4 different places, at the anchorage points, due to the secondary moments. The bridge did not collapse, however, since the four units at each guy provided sufficient robustness. It turned out—after the event—that similar phenomena had been known to specialists; evidently, therefore, insufficient procurement of design information was to blame, although one might argue that knowledge which exists in the head of a specialist somewhere is not readily accessible, and difficult to mobilize in the framework of a small project. In this case no important emotional component must be cited which might have influenced the course of events, and the following ingredients can be listed as having contributed to the failure:
• Inferior quality steel was substituted, due to procurement problems. This was corrected by replacing all guys with a steel of sufficient rupture energy.
• The extremely small damping was not anticipated. A system attaching the four rods of each guy to one another should have been installed, preventing the rods from vibrating individually. It was added as part of the repair.
• The anchorage detail at the ends of the guys should have permitted a certain rotation without causing secondary bending. This was corrected.
• The dynamic excitation of the leeward rods in and out of the wind shade of the windward rods (wake buffeting) was known to some specialists, but that knowledge was not mobilized for the design. The bridge being the first cable stayed bridge designed by the design team, the absence of experience with this type of structure should have raised a “caveat” and triggered a greater investment in pertinent research.
The cumulative effect of several errors caused this accident. Its consequences were limited thanks to the robustness which had been provided through redundancy (four rods for each guy).

7.8 The Computer Glitch
A service tunnel for piping and cables below an expressway tunnel, made from factory-produced precast concrete units, had to be replaced at substantial cost, due to an error in the design calculations. These had been carried out using state of the art
software of the type that does everything from calculating loads to concrete dimensions and specification of reinforcing steel. A retired engineer was charged with the calculations, due to a shortage of personnel. It must be assumed that he was not familiar with the software he was given for the task. This is why the automatic calculation of the self weight was erroneously applied to the live load as well, i.e. the weight and impact of vehicles circulating on top, involving a multiplication factor reflecting the thickness of the concrete shell, in this case 0.3 (= 0.3 m). Live loads were multiplied by this factor, causing a substantial rate of underdesign. Technically, this was a modelling error, since the live loads were not represented correctly. At the same time, possible fatigue problems with the reinforcing steel were ignored (in small span structures bearing vehicular traffic, the contribution of live loads to the total solicitation is relatively more important than that of the permanent loads, meaning that the amplitude of the loading cycles may become critical; recent research has demonstrated that potential fatigue must be considered, especially in cracked reinforced concrete). The design, which was entirely based on the computer results, was not verified until a modification made a review necessary, and only after a substantial number of the structures had been fabricated and placed. The case involved substantial cost and delays, as well as adversarial legal action and embarrassment. Not all the details of its history are therefore accessible. However, a few features may be noted to have contributed to the error:
• Substitution of personnel in situations of tight schedule must be accompanied by enhanced quality control, as the individual(s) involved may not be sufficiently familiar and experienced with the task and the tools they are assigned.
• Modern software, produced by people removed in time, location and/or communication, must not be used exclusively, i.e. without effective checking through independent assessment of the results. This may be done by very simplified and approximate methods—the problem is not imprecisions of 10% or even more, but errors which approach or exceed the amplitude of the safety margins.
It is clearly the task of management to arrange for the independent check calculation, and the major error in this case was that this was not done. It is very important that the verification is completely independent from the original calculation, in terms of method and personnel. The case at hand suggests another, mainly economic consideration: if a design decision, e.g. the reinforcing to be built into a concrete element, involves a large amount of repetition, any error permitted to persist will have consequences of a similar magnitude. It seems therefore a good idea to concentrate special care on such elements.
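A short Python sketch may make the arithmetic of this modelling error concrete; only the 0.3 m shell thickness is taken from the case, while the unit weight and the traffic load are assumed, illustrative values.

# Illustrative sketch of the modelling error (assumed values, except the 0.3 m thickness).
THICKNESS = 0.3        # m, thickness of the concrete shell (from the case)
UNIT_WEIGHT = 25.0     # kN/m^3, assumed unit weight of reinforced concrete
LIVE_LOAD = 12.0       # kN/m^2, assumed vehicle load including impact (hypothetical)

self_weight = THICKNESS * UNIT_WEIGHT                   # 7.5 kN/m^2: the thickness factor belongs here
correct_total = self_weight + LIVE_LOAD                 # the live load must enter at its full value
erroneous_total = self_weight + THICKNESS * LIVE_LOAD   # the error: live load also scaled by 0.3

print(f"correct design load:   {correct_total:.1f} kN/m^2")
print(f"erroneous design load: {erroneous_total:.1f} kN/m^2")
print(f"fraction of the intended load actually designed for: {erroneous_total / correct_total:.2f}")

With these assumed numbers, little more than half of the intended load enters the design, an error of the same order as, or larger than, the usual safety margins—exactly the kind of deviation an independent hand calculation is meant to catch.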

7.9 The Problem with Repetitive Work
This is exemplified by a type of structure which repeats itself with relatively little variation but in very large numbers. The Canadian Province of Ontario alone owns about 10,000 bridges, most of them small to medium sized structures such as highway overpasses. This type of structure saw a great peak of production in the 1960s and 1970s, when the construction of a network of express autoroutes followed the dramatic increase in the number of road vehicles. Literally hundreds of over- and underpasses were built in every province each year, thousands in the larger countries of the industrial world. The availability of qualified engineers, technicians and contractors became a problem and, although strict standards had been created and decreed, reality very often left much to be desired in terms of quality. The production of such overpasses resembles industrial mass production, of cars for example. However, this is deceptive: where the assembly line inside an industrial plant permits strict and consistent control at all times, this is very different for a construction site, which follows procedures and work sequences that vary from case to case, involving the cooperation of numerous and variable individuals and parties. They may, after a number of repetitions of similar work, become inattentive and even thoughtless, since in their mind success begins to be taken for granted. Two examples may illustrate the problem:
(a) The first example involves a one span highway overpass in post-tensioned, cast in place concrete. The concrete work had been completed and the post-tensioning cables placed and provisionally anchored—fortunately—but not stressed. The formwork contractor’s representative was apparently not aware of this condition and decided to start removing the falsework anyway. When about half of the scaffolding had been taken away, the bridge collapsed and was left “hanging” in the unstressed post-tensioning cables. Large deflections and gaping cracks accompanied the accident and it became a rather delicate engineering task to salvage the bridge, the details of which need not be discussed here.
(b) Another overpass that collapsed and killed five people tells a different story: it had been built in reinforced concrete during the frenzy of express way construction in the 1970s, and the forensic investigation on behalf of a commission of inquiry brought the following facts to light:
• The reinforcing steel specified was not consistent with the dimensions of the bridge, the vertical stirrups being too short in most places.
• The construction crew failed to correct the mistake and the bars were placed anyway, without “embracing” the longitudinal bars, creating a surface which was not crossed by any reinforcement. This led eventually to a shear failure across the entire width of the bridge.
• The bridge stood up for 36 years until a shear crack had overcome the “tensile” resistance of the concrete.
• Periodic inspections had recorded the presence of cracks, including the one which corresponded to the eventual rupture. However, it was never recognized for what it was—among numerous other cracks, most of which are unrelated to a structural deficiency, as is normal for reinforced concrete exposed to the weather.
What can be learned from the two cases:
• Repetitive work tends to make people careless, blunting attention.
• If people with lesser qualification must be put in charge, control must be sharpened. If it is not, risks will increase dramatically.
• Concrete in tension may hold out for an extended period but will eventually rupture. Cracks may give warning but must be recognized for what they are. This again requires qualified personnel.
Concrete can hide some secrets and the symptoms of certain types of distress are less than obvious; the telltales may be visible, but it takes insight to perceive the story they are telling.

7.10 The Not so Calculated Risk
Often, avoiding initial expenditure by skipping necessary studies and tests looks sufficiently attractive to become the adopted strategy. In particular, this frequently concerns the geotechnical investigation for excavation and foundation work. The consequences can be quite dramatic, in particular in terms of delay, when unexpected events such as collapsed excavations, excessive settlements, excavation uplift, or flooding must be dealt with. The specialist’s fee may quickly look like a worthwhile investment in retrospect when events take their course which could have been avoided. The example examined here is not of newsworthy drama, but it is representative of a large number of cases which add up to a substantial loss for the construction industry and its clients. On a hillside of approximately 25° slope, excavation for two adjacent residences was almost complete when a large portion of the slope slid into the excavation, causing delays and additional expenditure of approximately 1 M$. It was known that the slope was of uncertain stability (silty clay), especially following rain, but neither stabilizing measures nor a protection against precipitation had been provided. The slope of the excavation into the hillside, at approximately 45°, was found to be unsafe. The services of a geotechnical engineer were engaged after the problem had become unmanageable. The cost of an adequate stabilisation of the excavation and slope was hypothetically determined at approximately 12% of the total cost of the interventions made necessary by the accident, not counting the loss related to delays. The case is quite typical and reflects an attitude which has always been and continues to be current in construction projects; it is the gamble which is taken hoping to get away with inadequate but cheaper measures, in the sense of gaining a “free lunch”. The problem is of course that in numerous cases it works, which invites imitation: “If it was OK for those guys, why would it not work for me?”. So it
would seem that the “penny-wise” decisions will stay with us indefinitely. However, it should be remembered that the consequences of accidents are not always limited to delays and money lost. The failure of excavations has caused injury and cost lives in the past; so if one decides on a gamble, it had better not include risk to life and limb. What all this suggests is that a grey area exists where the taking of risks, intentionally, knowingly, or less consciously, blends into what is commonly considered an error, i.e. a violation of such concepts as “good practice”, “state of the art”, or official regulations such as building codes and bylaws which, due to their gigantic volume and complexity, have become quite unmanageable themselves. Grey areas will always provide fodder for the lawyers. The situation is similar to the choice one faces when deciding to pay or not to pay for parking the car on a street where the city has posted parking meters. One weighs the probability of the police turning up during one’s absence, which will cost a fine of X, against the saving one accumulates by getting away with it Y times. The city can of course do the same calculation, and will calibrate the fines accordingly.
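Written out, the parking-meter gamble is a one-line expected-value comparison; the fine X and the number of unpaid stops Y follow the text, while the detection probability q and the meter price m are added here purely for illustration.

\[ \text{expected penalty} = q\,Y\,X \qquad \text{versus} \qquad \text{saving} = Y\,m . \]

The gamble looks attractive as long as \( qX < m \); a city wishing to discourage it therefore calibrates the fine so that \( X \gtrsim m/q \). The same comparison, with much larger and far less well known numbers, underlies the decision to skip a geotechnical investigation.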

7.11 The Problem with Drawings
Often, drawings representing different stages of the building process—concept, design development, detail design, fabrication and construction drawings—are elaborated by different actors, information being transferred from one representation to the following. This information transfer implies the risk that what is being transferred is incorrect by being incomplete, redundant, or simply wrong. The prudent engineer will insist on seeing the drawings that go out for construction, in order to catch the errors which have accumulated through the various stages. If only circumstances would always permit this to be properly done. Frequently, delays in the production of these documents accumulate to the point where shortcuts become the order of the day, and the drawings go out unchecked, or only furtively so. Errors will then survive and be perpetuated into execution, leaving faults in the structure being created. A recent case may illustrate this: the roof of a gymnasium had been designed with steel wide flange beams resting on concrete columns as the primary structure. At the support points the engineer had provided stiffeners to prevent crippling (buckling) of the thin webs of the beams—typically, sections used for long span roofs with relatively modest loads are designed for maximum bending stiffness with minimized material thickness for webs and flanges, which then must be protected against local instability. Shop drawings were then produced by a team working for the fabricator, taking the information from the engineer’s drawings and adding the details to provide the necessary information to the workshop. The stiffeners specified by the engineer were left out, however. The shop drawings were submitted to the engineer for review, who failed to note that the stiffeners were missing. The beams were installed following
the information on the shop drawings and the roof collapsed following a snowfall, fortunately with no loss of life or injury. The actual snow load was approximately 25% of the load specified by the building code. The verification of shop drawings is a tedious task. Even for a small project, one is usually looking at a thick stack of paper—today their production via the computer is much less time consuming than it used to be and, being very repetitive, may even take less time than the checking, which implies looking at each and every detail since errors could have slipped in anywhere. Repetitive trivial tasks have a habit of switching off the circumspection which would have been necessary in the case at hand. Usually there are a number of aspects to be verified on a drawing:
• Does it reflect completely and correctly the information which was used as input—in this case the engineer’s drawing?
• Does it conform to codes and standard practice?
• Is the load path through the pieces shown secure for every section and for each solicitation (bending, shear, axial load, torsion)?
• Is everything specified that needs to be specified, and unequivocally?
• etc.
This would imply switching attention from one parameter to the next for every piece represented on the shop drawings, something that will rarely be carried out in reality, shortcuts being used to be able to complete the verification in useful time.

7.12 The Intricacy of Creating Ductile Structures
During the past four decades or so, ductility has emerged as a desirable property of structures in earthquake land, and it has become the dominating paradigm as well as buzzword. More recently it has been defined as the goal and essence of “capacity design”, where one tries to establish a hierarchy of structural failure with “fuse” elements having a ductile behaviour placed on the load path. This will prevent—or so one is led to believe—internal forces from exceeding limits controlled by the fuse element. By doing this, one protects the rest of the structure—which may be less than ductile—from overloading in a seismic event. The protection is bought with the sacrifice of plastic, i.e. mostly permanent, deformation at the location of the “fuse” element when it is activated, i.e. when its elastic limit is exceeded. There is a major problem with this: capacity design is a very clean, clear and easy to understand paradigm—on paper. It turns out that putting it into reality is much less clean, clear and easy to control, as the following story may illustrate. A fifteen-storey building in reinforced concrete had been designed initially for “full” ductility, following all the detailing rules which must be applied for the structure to qualify for this class. Ductile design results in the permission to reduce the design forces in the structure from what an elastic structure would experience, by substantial factors, in this case (Canadian National Building Code) a factor of 5.6.

The “fuse” element was to be the core walls near the base, forming a plastic hinge at bending moments commensurate with the reduced inertial forces produced by the codified earthquake (return period approximately 2500 years). For the plastic hinge to function properly, certain rules for the configuration and placing of reinforcing steel must be followed, especially concerning the splicing of the vertical reinforcing. The engineering drawings showed the splicing details of the reinforcing clearly and conforming to the Code requirements for the plastic hinge region of the shear walls, but construction practice being what it is, they were not followed by the fabricator and steel setters. This fact was noticed too late, i.e. when two storeys above the “plastic hinge” portion of the core walls had already been built. To change the reinforcing to its proper configuration would have caused a substantial delay, additional expenditure, as well as aggravation and embarrassment. Other solutions had to be found, and they were, since the engineers, not trusting perfection produced in the real world, had quietly dimensioned the walls to seismic loads approximately twice the magnitude of the reduced forces in a “fully ductile” structure. This permitted the building to be reclassified to “moderate ductility”, where the requirements are less stringent. The question whether the design rules for ductility are adequate or not shall not be raised here, but what needs to be pointed out is the problem of reliability which arises: does the fact that everything shown on the construction documents conforms to the rules imply that reality does so as well? For criteria such as confinement, where the absence or presence, as well as the perfectly correct configuration, of, for example, a few stirrups will decide whether or not the structure will react as intended to extreme exposure, the concept of safety margins providing some additional strength may not be appropriate (see the second example of case 7.9, where sufficient reinforcing, i.e. safety margins, had been provided but its configuration was not compatible with its purpose). More philosophically, the problem is reliability in a different sense than usually proposed, where deviations are gradual and their consequences quantifiable. In the case of the bad detail (stirrups misplaced, left out or wrongly configured) the variation in the resultant real resistance may far exceed the safety margin.
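The arithmetic behind the quiet reclassification can be sketched roughly; the factor 5.6 and the doubling come from the case itself, while the resulting value for the moderately ductile class is inferred here rather than quoted from the code.

\[ V_{\text{full}} = \frac{V_{\text{elastic}}}{5.6}, \qquad V_{\text{provided}} \approx 2\,V_{\text{full}} \approx \frac{V_{\text{elastic}}}{2.8}, \]

i.e. the walls as built were already proportioned for a force-reduction factor of roughly 2.8, which in this case corresponded to the level of demand associated with the less stringent detailing of a “moderate ductility” classification.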

7.13 Stiffness, Brittleness and Weak Links
This case concerns a large football (soccer) stadium constructed in a land of medium to strong seismicity. Lateral forces are transmitted to the foundation via the framing system shown in Fig. 2. Because of the particular configuration, it turns out that most of the forces from the upper levels (wind, inertia forces from an earthquake) end up in the diagonal “raker” beams which carry the (precast) grandstands and receive the forces mainly as axial load. This load path is therefore much stiffer than the rest of the structure, and the axial load will pass through the juncture of the “raker” beam and the framing of the first upper level, mostly in terms of horizontal shear.

Fig. 2 The missed weak link

The original design did not account for this fact, making this juncture the weak link of the entire system and rupture probable at a fraction of the design load. Presumably the reason why this was missed is that the entire design was based on an elastic computer model of the whole structure, without comprehensive checking; scanning was done for bending resistance, where the vertical member in the model representing the critical element is very short and receives little bending. Scanning for shear might have highlighted it. The error was corrected in the planning stage, but the case demonstrates that to design and build based exclusively on computer calculations is a dangerous proposition. Insight into the working of a structure cannot be dispensed with.

7.14 The Problem with Indoor Parking Structures
In a two-storey underground parking garage, the structural slab carrying the upper level collapsed after having served for 35 years. It was a punching shear failure. The clear span between columns was approximately five metres and the slab thickness 200 mm (8″), which at the time of construction was within the limit of the permissible. The front page picture which appeared the day after in the daily newspapers showed the column free standing over two storeys, without any horizontal reinforcing crossing it—the reinforcing had disappeared. Some pieces of reinforcing bars were found in the debris which had the appearance of a sharpened pencil, the tip ending in nothing; corrosion had eaten the rest. The concrete of heated indoor parking garages in countries where deicing salt is used during long winters suffers from one of the most extreme chemical exposures where it is not protected by an effective waterproof membrane. Buildings erected in the 1970s and 1980s were often not equipped with such a protection, for reasons of initial cost saving—the buildings were frequently sold quickly after construction, making the ensuing problems the burden of a successive buyer.

In this case, the problem—chloride penetration leading to corrosion and, eventually, to a shear failure—had been latent, with the concrete all by itself, as it were, resisting the shear stresses near the perimeter of the column; the reinforcing steel had become a memory. Concrete will resist some tensile stress, but cracks will grow over time, diminishing that resistance (see also case 7.5). The story is quite simple in this case and two major errors can be cited as having been instrumental for the collapse:
• The failure to provide effective protection for the concrete against chemical attack.
• The lack of monitoring and maintenance which would have brought up the problem earlier, for interpretation and correction. (The top surface of the slab had received an asphalt cover sometime after construction, effectively hiding the problem.)
The case is not isolated and similar conditions have led to similar consequences, with little variation. It is equivalent to a mortgage taken out on the future by neglecting two major measures in which a good building owner must invest. Failure to do so will be punished, although punishment may hit another party unfortunate enough to be around at the critical time, with the cost of repair and the consequences of an accident—in the case at hand, one casualty resulted from the collapse.

7.15 Destructive Temperature Gradients
Temperature variations, spatial and temporal, are everywhere; they are one of the principal agents of recycling, breaking down even the hardest stone in time. The Romans used temperature gradients for tunnelling and mining, lighting fires on the rock surfaces to be broken, which, when cooled down, would crack and disintegrate to a certain depth. Brittle building materials such as concrete behave in much the same way, which in this case is usually undesirable, however. The following story may illustrate this. A cylindrical concrete tower about 100 m tall, which was to serve as the central relay for a telephone system, was built without interior peripheral (horizontal) reinforcing. This was based on a paper by a well known leading engineer of the era where it was argued that such reinforcing, if stressed in tension, would have a tendency to snap toward the inside by straightening out locally, destroying the concrete surface. It is now known that this does not usually occur, the adhesion forces being sufficient to keep the reinforcing bars in their place. In the course of about five years, the tower developed large vertical cracks on the inside surface in four places roughly corresponding to the azimuths of the “wind rose”. The cracks opened progressively due to a ratchet-like phenomenon where, once a crack has opened to a certain width, loose particles would be caught and stuck, preventing the crack from closing again completely.

Since reinforcing steel had been provided near the outside face, no cracks were visible there, all the movement taking effect inside, from the ovalling caused by the varying temperature gradient. It was found from one summer’s measurements that, due to the low conductivity of the concrete, only longer (weekly) variations would show up on the inside, whereas near the outside (to a depth of 50 mm or so) daily, even hourly variations were seen due to sun radiation, rain and wind. Rather steep local temperature gradients would therefore exist near the outside surface, producing, when warm, horizontal tension strain on the inside. This effect would wander around the tower every sunny day. In time, the cracks became so wide that pieces of concrete were spalling and falling down, damaging the electrical installations inside the tower, and remedial action had to be taken. Since the tower needed to stay fully operational at all times, being the hub of the telephone system serving about half a million inhabitants, no construction work was allowed inside. A new tower enveloping the original one was therefore built around the outside—with inside reinforcing. The conclusions from this story are, obviously, the following:
• Do not take at face value what is written in engineering journals, especially when it is based only on theoretical considerations, and on a different context.
• Reinforced concrete and its mostly advantageous behaviour derive from the presence of reinforcing—if it is absent, the concrete is no longer reinforced and will behave differently. A minimal amount of interior reinforcing steel would have prevented the cracks from opening beyond the critical destructive width.
• It is often a good idea to provide some reserve strength beyond what calculations and theoretical considerations demand, as a measure of robustness.
In the lengthy court battle that followed, one of the points of contention concerned the foundation, which had been analysed by one of the parties using finite elements to model it. The goal was to prove that rehabilitation of the structure had been unnecessary. In lengthy and painstaking cross-examination it was found out that the reason for the results was the fact that the boundary conditions were represented wrongly, including restraints that in reality did not exist—a rather trivial matter once one has identified it, but if it is buried in a large mass of data, it is easily overlooked.

7.16 The Perpetual Problem of Balconies

The cumulative cost of rehabilitating concrete balconies amounts, over time, to a substantial percentage of the cost of these structural elements. Remedial work usually implies considerable expenditure carried under maintenance—very often, especially in "economic" building construction, the parties who participated in the construction have "disappeared" by the time the problem is discovered. The usual failure scenario is rather trivial: the reinforcing steel, which is supposed to be located near the top surface of the cantilevering concrete slab, ends up in a lower position, trampled down during the


placing of the concrete because proper supports ("chairs") were not provided in sufficient numbers (a rough estimate of the resulting loss of strength is sketched at the end of this section). Sometimes this fact is discovered during repairs for another, equally notorious problem: the degradation of the surfaces due to carbonation, chloride ion penetration and freeze/thaw action, accompanied by corrosion of the reinforcing bars once the carbonation front has reached them. It is not difficult to identify the "error" leading to the sorry scenario where repairs and corrective rehabilitation must be undertaken:

• The natural side effects of "economic" construction methods are often sloppy work by less than qualified personnel, including field supervision. This goes back to a management decision in which the lowest bid price is made the chief criterion for selecting contracting and consulting firms, sometimes followed by hard negotiations lowering the price even more.
• On the part of the executing parties—contractors, engineers, architects whose resources have been curtailed by conditions imposed by the client—directing those resources onto the targets that experience shows to be notorious trouble spots must be the rule of the day, including the didactics needed to persuade the work team of the importance of the key details, in this case providing sufficient chairs for the reinforcing bars. If matters are left to themselves, Murphy's Law will take effect.
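The strength penalty of trampled-down bars can be estimated very simply: the moment resistance of a lightly reinforced slab is roughly proportional to the effective depth of the top steel. The numbers below are illustrative assumptions, not dimensions from any particular balcony.

```python
# Minimal sketch (assumed numbers): how loss of effective depth cuts cantilever capacity.
# Moment resistance of a lightly reinforced slab is roughly proportional to the lever arm,
# i.e. approximately to the effective depth d of the top bars.
h = 200.0          # mm, assumed slab thickness
d_design = 160.0   # mm, effective depth with the bars held near the top surface on chairs
d_actual = 100.0   # mm, effective depth after the bars were trampled toward mid-depth

ratio = d_actual / d_design
print(f"remaining moment capacity ~ {ratio:.0%} of the design value")
# ~60%: the slab has silently lost about 40% of its strength, and the loss grows further
# if the bars end up below mid-depth -- all without any visible sign on the finished surface.
```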

7.17 Earthquake Resistance in the Real World

The following is a summary of a paper by Hugo Bachmann (2013). It is the sad story of a succession of human agents involved in the construction and use of a building in a location of well-known high seismicity who, in spite of state-of-the-art code requirements and first-hand practical experience, did not succeed in building it well enough to avoid its complete collapse in a strong earthquake. Translated and abbreviated from the German original:

Although the earthquake was stronger than anticipated, numerous nearby buildings survived the event. However, the six-storey headquarters of a local broadcaster collapsed, killing 115 people and injuring many more. Two major causes for the collapse were identified by the inquiry into the accident:

• The earthquake was considerably stronger than anticipated (accelerations three times the projected values).
• Human shortcomings. Had some parties acted correctly, the structure would have survived.

The partially precast concrete structure was built in 1986/87 (at a time when design methods for adequate earthquake resistance were well developed and codified). The following are the conclusions of the forensic investigation:

• The engineer in charge of the structural design had little experience with the task, and was scantily supervised by his chief, the owner of the engineering firm.

• This caused considerable shortcomings in the structural layout and details: the sole resisting element was a staircase placed near the perimeter of the building, which caused strong torsional oscillations.
• In critical locations the reinforcing steel was inadequate as well as incorrectly detailed. Numerous violations of code requirements were found.
• The city's building office is reputed to have had reservations concerning the design, but no documentation of this was found. The officer in charge is reported to have been pressured to issue the permit anyway, although important rules had not been followed.
• The construction itself was directed by a competent foreman, but he is reported to have been insufficiently supported by the company management, with the result that important details were executed inadequately: surface treatment (roughening) of prefabricated elements at joints, and detailing of reinforcing at column intersections that did not even follow the already inadequate drawings.
• A due-diligence assessment in connection with a potential sale of the building identified serious shortcomings of the design, such as insufficient anchorage of the slabs into the shear walls. This creates the risk that the floor plates would detach themselves from the core, causing the structure to come apart—which is what actually happened during the collapse. Reinforcements were installed subsequently, but only in the upper storeys, and the building authority never approved this work.
• Following an earthquake of magnitude 7.1, no serious damage was reported, although the inspection team did not include an engineer. An engineer was subsequently mandated to assess the condition of the building but was not given the drawings. He released the building for occupancy anyway, recommending further investigations which were never carried out.

The entire story is a tale of multiple human failures, starting with a faulty concept, followed by inadequate execution, and, most important, the failure to follow up when deficiencies had been identified. This exposes a notorious problem with all quality control: faults are recognized but left uncorrected, the unwelcome correction getting caught in the maze of responsibility divided, delegated, transferred, ignored and lost. Corrective action costs money and is associated with embarrassment, litigation, loss of face and reputation—the incentives to avoid it abound, for all parties involved. This is especially so in the context of seismic risk, where strong earthquakes are rare and are therefore perceived to be less than real. It may be said that every collapse due to seismic effects can be traced to serious shortcomings in the design, the execution or both. For modern construction this cannot be excused, since the basic principles for the design of earthquake-resistant structures were developed in the 1970s, followed by what are mostly refinements. Recent seismic events have confirmed this unequivocally.

7.18 Inadequate Substitution

The reasons for substituting planned materials, elements, or production and placing methods are numerous and varied: cost saving, procurement problems, resistance


to unfamiliar conditions, concerns caused by lack of experience or comprehension, time pressure, carelessness, etc. The following story includes several of these elements. The accident was fortunately limited to a scare for the participants and some cost for correcting the deficiency.

In the course of a modification to an indoor sports facility, the spectators' seating was transferred to a new structural system which included, on the load path of the suspension structure, a detail featuring a heavy-calibre steel angle. The specified piece of steel could not be found—very little choice of steel products is available "off the shelf" today, which hurts especially small interventions such as building modifications, where details must usually be worked out on short notice. The steel angle was therefore substituted by a bent plate of the specified thickness, supplied by a local fabricator equipped for the bending of plates. As it turned out, the plate had been bent "cold", i.e. with insufficient heating, and it ruptured suddenly during the transfer of load. Fortunately, the structural system had sufficient residual resistance to prevent an outright collapse, and the participants escaped with a fright. Bent plates, especially thick ones, were henceforth ruled out by the engineering firm involved. Several lessons can be learned from the example:

• Substitutions must be verified for their fitness to function—in this case in terms of strength and ductility—in place of the specified system.
• Geometric and material similarity is no guarantee of this. In this case the cold bending had caused cracks which were not visible through the paint coat, and the piece broke under stresses much lower than usual design values.
• The "sacred" schedule is often found at the origin of accidents, things being done in a hurry, forgetting about prudence and verification. Every "quick fix" bears the risk of creating a dangerous scenario.
• The design of details, especially when time is short, should treat the procurement of the specified elements as a primary parameter to be verified before releasing the design for execution.
• Substitutions should not be inferior to what was specified, in any property.

7.19 Misplaced Foundations

The following example is representative of a scenario which repeats itself persistently, indicating a systemic deficiency in the planning process. For a residential building, deep foundations had been specified since the soil immediately below the basement floor was judged to be unsatisfactory. The piles were indicated on the engineers' excavation drawing, which showed neither the outline nor the structure of the future building. Field supervision was not part of the engineers' mandate but was included in the contract of the architect, who discovered that one row of piles had been erroneously


placed away from their true position. Additional piles had to be built to receive the load from the superstructure. The scenario is rather trivial and the details may vary from case to case, but the fact that it repeats itself notoriously, every time causing additional expenditure as well as delays and embarrassment, merits a discussion. The principal root of the problem is a lack of coordination, which amounts to a gap in communication. At the time when the position of the piles is staked out in the field by the surveyor, the drawings of the future building are usually not finalised, and it is difficult to verify whether the plans of the different disciplines—in this case the geotechnical, the structural and the architectural—are free of contradictions. One might say that the various parties have not yet woken up to what is happening in the field. In the case at hand, the engineering drawings were used exclusively to position the piles, without any reference to the layout of the architectural concept. Had anybody laid the architectural and structural drawings on top of each other, the mistake would have been obvious.

7.20 Prestressed Concrete Railroad Ties

These ties (about 3 million of them) were designed and fabricated by a company making precast elements, now defunct. They lasted about 3 years or less instead of the specified 30 years. The failure mechanism was loss of prestress in the wires, caused by a combination of the following:

• Alkali reaction of the aggregate with the cement.
• Temperature incompatibility. The elements were cast in long prismatic forms (100 m or so) with the concentrically placed prestressing wires under stress, and steam cured for 24 h. Subsequently they were sawn to the desired length and brought outside (mostly in winter, at temperatures down to about −30 °C, with wind). This caused very abrupt temperature gradients between the outside surface and the interior.

Numerous wires lost their anchorage and prestress over time, which could readily be seen where their ends had sunk into the concrete. Testing the aggregate for alkali reactivity would have shown it to be unsuitable for this application. A gentler treatment might also have extended the life of the sleepers. The whole story reflects an attitude of disregard for prudence which, in the case of highly stressed concrete, frequently leads to premature deterioration and failure.


7.21 Stability of Masonry Walls

The wall enclosing the mechanical penthouse of a university building was made from 6″ (150 mm) concrete block masonry; it stood about 16′ (4.9 m) tall and was tied to the concrete structure of the penthouse roof only by a mortar bed. Over time, the part of the wall adjacent to the main fan room began to lean into the room, ostensibly due to the low pressure caused by the operating fan. The wall had to be taken down, rebuilt stronger, and anchored to the concrete structure. The penthouse is enclosed by a decorative precast concrete curtain wall, which suggests that the designer assumed no wind load would act on the block wall, forgetting about the pressure deficit caused by the mechanical equipment inside. Other cases have been seen where pressure differentials from one side of an envelope element to the other have caused failure. Usually these differentials are relatively low but, in the absence of other significant design loads, should not be forgotten. A simple measure of robustness may be sufficient to prevent trouble, at low or no additional cost. In the case discussed here, the thickness of the wall was insufficient for its height in any context and the applicable building standards had been disregarded. No engineer had been involved in the design of the wall; it was not considered to constitute a structural element.
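A crude check with assumed values (pressure, self-weight and support conditions are illustrative, not measured data from the case) shows how little pressure such a slender, unanchored wall can tolerate; with only a mortar bed at the top, a cantilever model is the pessimistic bound.

```python
# Rough stability check (illustrative assumptions) for a 150 mm block wall ~4.9 m tall,
# tied at the top only by a mortar bed and therefore modelled pessimistically as a cantilever.
t = 0.150      # m, wall thickness
h = 4.9        # m, wall height
p = 0.25e3     # Pa, assumed sustained pressure deficit from the fan (order of magnitude)
w = 2.0e3      # Pa, assumed self-weight of the wall per m2 of elevation

slenderness = h / t                # ~33, far beyond usual limits for unreinforced walls
M = p * h**2 / 2                   # N*m per m width, cantilever base moment
Z = t**2 / 6                       # m3 per m width, elastic section modulus
sigma_bend = M / Z                 # flexural stress at the base
sigma_axial = w * h / t            # compression from self-weight (helps a little)
net_tension = sigma_bend - sigma_axial
print(f"h/t = {slenderness:.0f}, net flexural tension ~ {net_tension/1e3:.0f} kPa")
# Roughly 700 kPa of net tension against a mortar bed joint whose flexural bond strength
# is typically only a few hundred kPa at best: even a modest pressure differential governs.
```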

7.22 Water Leakage and Its Consequences

The overpass crossing a six-lane expressway had been inspected periodically over three decades. It was built as a series of parallel two-span steel girders, with the concrete bridge deck engaged in composite action by shear connectors. At the ends of the bridge the steel girders are supported on elastomeric pads, with vertical plate stiffeners reinforcing the girder webs to resist the concentrated reaction at the abutment. Expansion joints had been arranged in the bridge deck near the ends of the girders. Signs of water penetration were visible on close inspection—most expansion joints leak, a persistent fact of life. The ends of the girders were difficult to access, being hidden in the narrow space between the abutment and the bridge deck. This was presumably the reason why it escaped the notice of the inspector, apparently on repeated occasions, that the bearing stiffeners of some of the girders had begun to buckle: corrosion had left them too thin to resist buckling, making the load transfer precarious. The fact was caught in the course of an audit of the inspection protocol.
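The disproportionate effect of corrosion on a bearing stiffener can be seen from a scaling argument; the thicknesses below are assumed for illustration only.

```python
# Why modest corrosion loss is disproportionately harmful to a bearing stiffener
# (illustrative assumption: uniform loss from 10 mm to 6 mm of plate thickness).
t_orig = 10.0   # mm, assumed original stiffener thickness
t_corr = 6.0    # mm, assumed remaining thickness after corrosion

area_ratio = t_corr / t_orig              # squash (yield) capacity scales with the area
buckling_ratio = (t_corr / t_orig) ** 3   # buckling load scales with I, i.e. with t**3
print(f"yield capacity ~{area_ratio:.0%}, buckling capacity ~{buckling_ratio:.0%} of original")
# ~60% vs ~22%: a 40% loss of thickness removes nearly 80% of the buckling resistance,
# which is why stiffeners at leaking joints buckle long before they would yield.
```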


Essentially, the conclusions to be noted are the following:

• Critical features of a structure should be targeted directly by an inspection. In this case the leakage of the expansion joint should have been anticipated, as well as its consequences for the steel structure below.
• It is not a good idea to make key features of a structure inaccessible, or difficult to inspect. This is a notorious problem in building construction, where a near infinite number of façade attachments are removed from view but not from water penetration, allowing deterioration to progress unchecked.
• Numerous bridges have failed because of ill-designed end supports. These are places of discontinuity, with abrupt changes of section as well as concentrated loads. Exposure to chemically aggressive water compounds these conditions.

7.23 Repairs and Their Adequacy

Another expressway overpass, built about 70 years ago, had seen repeated repairs, one of which included the replacement of the parapet, a concrete barrier of the usual height of about 1 m, paralleling a narrow sidewalk. This parapet, along with the sidewalk, was found to have become detached from the rest of the structure, leaning at a precarious angle away from the bridge and separated from it by a large crack. From the drawings of the repair project it was found that the anchorage of the newly added concrete to the bridge deck had not been designed properly: insufficient reinforcing crossed the interface. Apparently, insufficient loads had been considered to act on the newly added structural element, presumably because the raised sidewalk is supposed to keep road traffic from coming into contact with the parapet. This leaves out the push of the snow plow, as well as that of vehicles which might jump over the sidewalk, for example in winter conditions when ice makes control of a car or truck difficult.

• What can be learned from the case is that special attention must be paid to the transfer of forces across newly created interfaces (a minimal check is sketched after this list). These amount to discontinuities and surfaces of weakness, a fact that must be considered in the design concept.
• Accidental forces may act on the structure, especially in unusual conditions such as ice or snow cover. They should not be forgotten.
• Cracks are omnipresent in concrete structures, but their causes and interpretation vary widely, most of them being relatively innocuous. The significance of each observed crack must nevertheless be assessed before dismissing it as harmless. In the case at hand, the crack separating the repair from the main bridge deck must have widened progressively over time, permitting the inclination of the parapet.
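As a minimal sketch of the interface check, a simplified shear-friction estimate (force, friction coefficient and bar strength are assumed values, and code resistance factors, dowel spacing and cohesion terms are omitted) shows that only a modest amount of crossing reinforcement is needed, which makes its complete absence all the more telling.

```python
# Minimal shear-friction sketch (assumed numbers) for anchoring a new parapet to an
# existing deck: the horizontal force is dragged across the cold joint by reinforcement
# crossing the interface, V ~ mu * A_s * f_y (simplified, unfactored).
V = 30e3        # N per metre of joint, assumed horizontal push (snow plow, errant vehicle)
mu = 1.0        # assumed friction coefficient for an intentionally roughened interface
f_y = 400e6     # Pa, assumed yield strength of the dowels

A_s_req = V / (mu * f_y)                 # m2 of steel per metre of joint
print(f"required interface steel ~ {A_s_req*1e6:.0f} mm2/m")
# ~75 mm2/m here, i.e. a few small dowels per metre -- but zero or near-zero crossing
# steel, as found on the repair drawings, leaves the joint with no reliable capacity at all.
```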


7.24 Beyond Elastic Theory

This concrete structure supporting a 500-seat stadium was to be braced, against earthquake forces, by a pair of single diagonals forming a "K" on its side. This resembles an X-brace, where the member receiving compression from a lateral load gives way to buckling at a significantly lower load than the tension element, which will rupture only after considerable plastic stretching, when the breaking strength is reached. It is essentially a non-linear situation, and purely elastic considerations will yield erroneous answers, especially where the braces are very slender (l/r of about 150 in the case at hand). This must be reflected in the design of the connections, which must resist forces commensurate with the breaking strength of the tensioned brace in order to avoid sudden collapse. In the case at hand, the anchor bolts tying the braces to the substructure had been dimensioned using earthquake forces obtained from an elastic linear model, effectively making them the weakest link on the load path. Since their mode of failure—rupture of the anchor bolts—is of an essentially brittle type, with no significant deformability, this would have resulted in an unsafe situation. It was corrected at the drawing stage.
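The difference between the two design philosophies can be shown with assumed numbers (brace area, yield strength, overstrength allowance and the elastic force are illustrative, not from the project):

```python
# Capacity-design sketch (illustrative values): the connection and anchor bolts of a
# slender tension-only brace should resist the force the brace can actually deliver,
# i.e. something near its tensile strength, not the force from an elastic seismic model.
A_g = 1200e-6        # m2, assumed gross area of the brace
f_y = 350e6          # Pa, assumed yield strength
overstrength = 1.3   # assumed allowance for actual vs nominal yield and strain hardening

F_elastic = 150e3                       # N, assumed brace force from the elastic linear model
F_capacity = overstrength * A_g * f_y   # N, force the brace can impose before it fails
print(f"elastic model: {F_elastic/1e3:.0f} kN, capacity design: {F_capacity/1e3:.0f} kN")
# ~150 kN vs ~550 kN: bolts sized for the elastic force become the brittle weak link on
# the load path -- exactly the situation that was caught and corrected at the drawing stage.
```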

7.25 Incomplete Structures

This case is of a rather trivial nature, but it is representative of numerous similar events where the short duration of a temporary condition was used to justify reducing effective safety margins to values below unity. This is a gamble, where one chooses to take a risk rather than spend the money and time to do things prudently. In the case at hand events took an unfavorable turn—the bluff was called, and the bill was about $55,000, including a delay of about two weeks for cleaning up and rebuilding what had been lost. The case concerns a brick wall which was left free-standing over the weekend, in spite of a storm warning issued by the media. As it was not braced or anchored laterally, it did not resist the wind pressure and collapsed. In this case the damage was limited to material losses; nobody came to harm. It was settled by the insurance of the contractor, which will presumably increase the premium in the future to try to recover the loss—insurance companies are financial institutions without benevolent intentions. One aspect, or better, a "caveat" must be noted once again: if you decide to take the gamble of waiving prudence, make sure that the risk does not encompass bodily harm to the people involved, or to bystanders. A sign stating that access to the site is forbidden—or "at your own risk"—may not be considered sufficient by the court.


7.26 Post-tensioning in Building Construction

Post-tensioning in building construction has its own delicate aspects, as every engineer who has used it has learned. The case discussed here is typical, in a general way, of the problems which, in continuous construction, are caused by incompatible deformations due to shortening of the post-tensioned elements. If only a portion of a continuous structure is post-tensioned, the consequences for the rest of the structure may amount to a difficult situation if cracks and other effects (doors jamming, excessive deflection, out-of-plumbness, etc.) are to be avoided.

In a school building, a long-span girder (L = 12 m) was post-tensioned in order to control deflection, as the cross-section of the beam was too shallow for the span and would have deflected excessively if built in reinforced concrete. Unfortunately, the shortening of the beam due to the longitudinal post-tensioning was not considered in the design, and the beam was cast monolithically with the columns supporting it. Over time the columns were pulled out of plumb at an angle which caused them to split—they had to be replaced. In the circumstances it would have been difficult to accommodate the displacement at the ends of the beam, although structurally a sliding seat or an articulated column could have been provided. However, the architectural installations were not designed for this either; they were adjusted after the event, i.e. when the long-term deformations had petered out.

One might call the story trivial, but it is representative of numerous situations where long-term deformations have caused distress to restrained structures. Local post-tensioning is often introduced to help in cases where structural elements would suffer excessive bending deflections because they are made too shallow. In building construction this is often related to architectural constraints and may concern only one or a few elements in an otherwise adequately dimensioned assembly—one wishes to limit the height of the building rather than increase it for a single beam or girder. Post-tensioning will then remedy the problem of bending deflection, but at the price of longitudinal shortening—basically a no-win situation. In the case at hand, a conceptual error can be identified as the root of the problem: the architectural layout should have been adjusted to avoid a single long-span element in an otherwise shallow and short-span assembly. It may be surmised that the engineer had not participated in the conceptual design—an error of management once again.

Concrete construction abounds with incompatible deformations, especially of the long-term type, where the end value is reached asymptotically long after construction, sometimes taking several years. Some of this can be accommodated through distributed cracking. However, the local introduction of concentrated forces such as post-tensioning may create situations where the structure must relieve itself of the excessive stresses in an equally concentrated way, i.e. through the formation of large cracks near the application of the forces, i.e. near the anchorages. Normally these incidents are not life-threatening, but they are costly and embarrassing.
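An order-of-magnitude estimate of the shortening involved, with assumed values for prestress level, creep coefficient and shrinkage strain (none of them taken from the project), makes the incompatibility tangible:

```python
# Order-of-magnitude estimate (assumed values) of how much a 12 m post-tensioned beam
# shortens over time; creep and shrinkage are included because they act on the same
# monolithic beam-column connections as the elastic shortening.
L = 12.0          # m, beam length
E = 30e9          # Pa, assumed concrete modulus
sigma_avg = 3e6   # Pa, assumed average prestress on the cross-section
phi = 2.0         # assumed creep coefficient
eps_sh = 400e-6   # assumed long-term shrinkage strain

d_elastic = sigma_avg * L / E       # immediate elastic shortening
d_creep = phi * d_elastic           # long-term creep shortening under the prestress
d_shrink = eps_sh * L               # shrinkage shortening
total = d_elastic + d_creep + d_shrink
print(f"total shortening ~ {total*1000:.1f} mm (about {total*500:.1f} mm at each end)")
# Roughly 8-9 mm in total: trivial for a sliding seat or articulated column, but enough
# to pull out of plumb and split stocky columns cast monolithically with the beam.
```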


7.27 Forgotten Physics

In a refrigerated warehouse, the suspended ceiling collapsed during the cooling phase—from ambient temperature to −20 °C. No air pressure relief had been provided, and the lowering of the temperature caused a pressure difference of about 0.3 kPa acting on the ceiling, pulling it down. The opening of a door or a pressure relief valve would have prevented the failure, as would a stronger suspension. The fact that neither had been provided constitutes an error on the part of the engineer—if an engineer was involved at all. From his physics classes in high school he would, or should, have remembered that lowering the temperature of a gas in an enclosed space lowers the pressure of that gas, creating an inward force on the enclosure. Alternatively, the design of the ceiling could or should have provided for a moderate pressure difference. A ceiling that collapses under a pressure difference of 0.3 kPa is not a robust ceiling.
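The high-school physics in question is the ideal gas law at constant volume. The check below shows the theoretical upper bound for a perfectly sealed enclosure (ambient temperature assumed at about 20 °C); the roughly 0.3 kPa reported in the case is only a small fraction of it, relieved by leakage, yet it was still enough.

```python
# Hedged gas-law check: the pressure drop if the cooled space were fully sealed
# (isochoric cooling of an ideal gas). The case reports only ~0.3 kPa actually acting
# on the ceiling -- leakage relieves most of the deficit -- but even that small fraction
# pulled the ceiling down.
p_atm = 101.3e3   # Pa
T_warm = 293.15   # K, assumed ambient at the start of cooling (~20 degC)
T_cold = 253.15   # K, -20 degC

dp_sealed = p_atm * (1 - T_cold / T_warm)
print(f"pressure deficit in a fully sealed enclosure: ~{dp_sealed/1e3:.1f} kPa")
# ~13.8 kPa available in principle vs ~0.3 kPa needed to bring the ceiling down:
# without a relief opening, the outcome is not in doubt.
```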

7.28 Excavation Next to Existing Construction

Excavating next to existing structures is always associated with a certain risk, especially where the excavation reaches below the neighboring foundation. Several strategies exist for doing this, but experience shows that, with the exception of certain favorable conditions (e.g. sound rock), they will almost invariably be associated with settlements and/or horizontal movements of the adjacent construction, regardless of theoretical considerations. The question is then how to minimise such movements. The case considered here is not newsworthy but very typical of circumstances that repeat themselves with some variation of the incidental parameters—dimensions, history, soil conditions, shoring system, materials, etc.

The extension of a museum included excavation to a depth of about two storeys below the lowest floor of a nearby building housing a restaurant. This building was about 100 years old and had seen a long history of extensions, modifications, renovations, etc., some of which turned out to be of a less than perfect nature. To support the excavation, a shoring wall with steel wide-flange piles and wooden lagging had been selected, anchored against lateral earth pressure with post-tensioned cable anchors reaching under the restaurant building. While the excavation was proceeding, and after it was completed, cracks, separations and differential settlements started to appear in the restaurant, and repairs had to be carried out at the cost of the museum project, in a climate of apprehension concerning the safety of the building structure. Had the more expensive slurry (bentonite) wall method been selected instead of the shoring wall, the movements of the building might have been of a lesser


magnitude, but the additional expense would not necessarily have eliminated the problem completely. Since the matter was settled more or less amicably, nobody spoke of an "error" as such. The selection of the shoring method had been made by consensus of the parties involved—engineer, geotechnical specialist, excavation contractor and the museum's administration—so no party had exposed itself to the reproach of having acted erroneously. The risk had been accepted, weighing the economy of the construction method against the damage scenario to be expected, and its cost. Events could have, as they often do, taken another turn, with an adversarial situation created, inspired mainly by greed. In that case, liability, prosecution and a court case usually ensue, taking precedence over reasonable technical measures: an error has now become manifest as an object of legal proceedings, the cost of which, in terms of expert, lawyer and court fees, will be added to the GDP of the country… This also demonstrates that the definition of "error" is fuzzy at the boundaries, depending on the attitude of the parties involved, as well as on communication—in the case described above, the risk related to the deep excavation had been pointed out to the operator/owner of the neighbouring restaurant, who then cooperated with the repairs that became necessary.

7.29 Hydraulic Heave

This case involves an interplay of three main parties typically co-responsible for the outcome of the excavation phase of a project. A large excavation had to be performed in soft cohesive soil underlain by an impermeable layer, followed by permeable sand and gravel found to contain water under artesian pressure, equivalent to about the elevation of the original grade. A sheet pile wall was driven to about the depth of the cohesive layers but not into the permeable zone. Discussions were held by correspondence as to whether or not to drive it deeper, which would have caused additional cost and delay. The result was that heave was precipitated in the classic way, the artesian pressure lifting the impermeable layer, accompanied by settlements outside the excavation. All three parties involved were experienced in excavation work in this sort of situation. However, circumstances united to prevent recognition of the risk, which was underestimated by all. It was vacation time and communication between the individuals was delayed or altogether interrupted—the key person of the engineering company had left his job shortly before the event. The geotechnical engineer did not visit the site during the critical period, although this was in his mandate. He had also advised proceeding with the excavation without driving the sheet pile wall deeper.
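The classic heave check is a simple balance: the weight of the impermeable plug left below the excavation bottom against the artesian pressure on its underside. The unit weights, plug thickness and head below are assumed for illustration, not taken from the project.

```python
# Minimal uplift (heave) check with assumed numbers: the weight of the remaining
# impermeable plug below the excavation must exceed the artesian water pressure
# acting on its underside.
gamma_sat = 19.0   # kN/m3, assumed unit weight of the cohesive plug
gamma_w = 9.81     # kN/m3, water
t_plug = 3.0       # m, assumed thickness of clay left below the excavation bottom
head = 9.0         # m, assumed artesian head above the base of the plug
                   # (pressure equivalent to the original grade, as in the case)

uplift = gamma_w * head          # kPa acting on the underside of the plug
weight = gamma_sat * t_plug      # kPa of resisting overburden
FS = weight / uplift
print(f"uplift {uplift:.0f} kPa vs weight {weight:.0f} kPa, FS = {FS:.2f}")
# FS well below 1.0: unless the head is relieved (relief wells) or the sheet piles are
# driven deeper to cut off the pressure, the bottom will lift -- which is what happened.
```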


The list of circumstances which caused the accident includes the following:

• In the critical phase of the construction process, several of the key actors were not available, or only intermittently—communication was therefore interrupted.
• The risk was underestimated or dismissed by all.
• The time and money saved by adopting the risky strategy were far exceeded by the consequences in terms of delay and extra cost.
• Because of the communication problem, the gamble was not recognized for what it was.
• Responsibility for decisions was not clearly defined and could not be, especially since all three participants should have known better.

In this case, unlike the preceding one (7.28), the decision to proceed with what caused the incident was not reached by consensus but, one might say, happened by itself in a situation of lacking communication. No assessment of the risk or comparison of notes had taken place, and the incentives of the day—i.e. to proceed without delay—had won over prudence.

7.30 Internal Corrosion

This case was widely publicised, as it was an eye-opener. Over the public swimming pool in the town of Uster, Switzerland, a massive concrete ceiling had been installed, suspended from a large number of stainless steel rods anchored in the structure above. The suspension system was inspected periodically, and the collapse followed a short time after the latest inspection, carried out by an experienced engineer. The stainless steel rods showed very little corrosion on the surface, and the forensic investigation turned up the fact that in the interior of the rods local stress corrosion had progressed, triggering the progressive rupture of all the hangers. It was strongly enhanced by the continuously humid and chemically aggressive atmosphere around the hangers, in the space above the ceiling. The essence of the case is twofold:

1. The rupture was of a brittle nature because the internal corrosion was very local—the yielding which normally precedes rupture of steel was spatially so limited that it prevented redistribution of loads among the hangers, triggering a zipper-type collapse even though ample redundancy had been provided.
2. The damage had remained latent and hidden from view up to the sudden rupture; the visual inspections had not discovered the problem. Similar damage was subsequently found in other applications of tensile elements made from stainless steel. The "stainlessness" or "corrosion resistance" of chromium/nickel alloy steel is found to depend on the precise composition of the alloy. Careful selection is therefore imperative in chemically aggressive conditions, especially where inspection is difficult or impossible. The common types of stainless steel (AISI


304 and 316) were found not to perform well. The accident caused the loss of 12 lives and produced a long investigation, legal as well as technical. It is a case where knowledge about the causes of the accident was not readily available, although it may have existed in the archives and heads of research institutions. The basic error in this case is perhaps not that the destructive combination of stress corrosion and a chemically aggressive environment escaped recognition, due to a lack of available knowledge, but rather a violation of common sense: one does not place a heavy ceiling above a swimming pool, suspended by an untried structural system.

7.31 Problems with Isostatic Systems

A very common type of roof structure is a corrugated steel deck on parallel truss-like lightweight steel joists. The chord members are made from small double angles or hat-shaped bent sheet metal, the diagonals often from solid rods bent in zigzag fashion and welded to the chords over the length where they come close enough to permit welding. These welds are routinely carried out automatically, and it has been found in several instances that the quality, length or caliber of these welds was not sufficient to transmit even nominal (unfactored) forces. In one case, the roof of a warehouse collapsed under a fraction of the snow load it was supposed to have been designed for. In another case, the welds were found deficient and a large quantity of joists was rejected and had to be rewelded by hand.

This is a situation where robustness is absent. Even though a system of parallel bending members is highly redundant and as such normally considered of excellent robustness, this does not hold true where the load-bearing capacity is controlled by a failure mechanism of brittle nature (failure of welds rather than axial loads in the chord members due to overall bending): the rupture of any weld will make the resistance of the joist disappear completely, meaning that the neighboring joists will now have to resist about 50% more load which, with automatic serial fabrication, will very probably rupture the same welds there in turn (see the short sketch at the end of this section). There are three salient features of this type of failure that apply to a rather wide variety of structural systems:

• The truss-like joists are perfectly isostatic elements, where the rupture of one member or connection causes the total failure of the entire element.
• The robustness of a system of parallel bending members exists only where the critical failure mode is of a ductile character, with nonlinear deformation preceding rupture while resistance is conserved, at least to a large fraction of the peak load. This allows for load distribution to neighboring elements.
• The critical weld is located where the last diagonal (the end of the zigzag rod) is joined to the top chord, next to the support. This weld is exposed to a loading


regime involving a tension component perpendicular to the weld, in addition to the shear parallel to it. It is usually found that an eccentricity also exists with respect to the middle of the weld length, which means a stress peak at one end with a tendency to peel the weld off in zipper-like fashion. This is of course not unknown to the fabricators of such products, who base their fabrication process on tests. However, this relationship is only valid when the tests are representative of the real product in all respects, including quality control. This scenario is a good example of targeted checking where the critical detail is well known. Much of the resources allotted to verification should be concentrated on it, in order to obtain the optimum effect in eliminating faults.
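The 50% figure quoted above follows directly from a tributary-load count, assuming the deck simply re-spans to the adjacent joists when one member drops out:

```python
# Why a brittle weld failure cascades through parallel joists: when one joist drops out,
# its tributary strip is shared by the two neighbours, each picking up ~50% more load.
# (Illustrative; assumes a flexible deck that simply re-spans to the adjacent joists.)
n_joists = 10
loads = [1.0] * n_joists   # normalised tributary load per joist

k = 4                       # joist k fails; its load is split between k-1 and k+1
loads[k - 1] += loads[k] / 2
loads[k + 1] += loads[k] / 2
loads[k] = 0.0
print(loads[k - 1], loads[k + 1])   # 1.5 and 1.5
# With identical, automatically produced welds, a 50% overload on the neighbours will
# very likely break the same weld there too -- a zipper, not a benign redistribution.
```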

7.32 The Problem with Construction Stops

Numerous projects are begun and brought to some intermediate stage of construction, only to be stopped for a period of time. It may take weeks, months or years for construction to resume, if it is not abandoned altogether. The structural—and other—elements that thus become exposed to outside conditions may not have been designed for this unexpected exposure and may suffer from it if precautions are not put in place. As we have seen earlier, the outcome may decide whether an error was committed. Two stories may illustrate this, along with some related implications:

• The steel structure for an institutional building included columns made from square tubing which had been erected in the warm season, when construction was stopped—as it turned out, for many months. The columns became exposed to the effects of the cold season, which included partial filling with water; in the following winter the water froze and caused the tubular columns to burst at the weld seams. The damage was considerable, involving the replacement of numerous damaged columns. It would have been a minor matter to provide drainage ("weep") holes near the base of the columns to prevent them from filling with water—nobody thought of it in time. Following the incident, it was made a rule in the engineering firm involved to require drainage holes in all tubular members, regardless of circumstances—just in case.
• Another story, from a seemingly very different experience, has the same implication (adapted from Knoll/Vogel): Unbonded post-tensioning is still vigorously promoted; experience in practice, though, has been mixed to say the least, and in some cases outright disastrous. One of the authors of this text remembers vividly a 33-storey office building which could only just be saved from having to be condemned and torn down because loose strands had been found. Painstaking forensic detective work and extensive testing revealed that the relaxed strands were confined to one specific part of the structure, along with a reason why this was so: the construction records showed that the particular slab had been exposed to exterior

conditions for a longer period of time than planned, due to an interruption of construction having to do with financing problems.

Once again, a construction stop involved an unforeseen exposure—outside weather—which caused water to enter the ducts, probably at the anchorages. Nobody thought of providing protective measures—typically, in cases such as this, a construction stop implies a stop to all payments, so everybody involved will be busy cutting their losses, with not much thought about what might happen to the works already in place. Obviously, it would have been up to management to organise protective measures but in circumstances such as these, management has a tendency to become disorganised itself, individual actors leaving with their knowledge, the work being orphaned, and Murphy’s Law taking effect: Things left to themselves…

7.33 Problems with Access

Some types of construction require intervention after—sometimes long after—the end of the original construction. This is often due to long-term deformations causing cracking and water penetration. In some cases these interventions—injection of cracks or the like—are part of the concept, as the most economical way of dealing with the problem. However, this can only work if the elements in question are accessible when the time comes, which may interfere with the use of the premises which, after all, are built for that use rather than for the injection of cracks.

In a recent case this fact escaped attention, and the bottom plate of an institutional building was built about 3 m below the high ground water table. No problem existed with the strength of the slab to resist the water pressure; however, cracks unavoidably developed due to contraction of the concrete (shrinkage, creep, variable temperature), allowing water infiltration. These cracks and their injection had been anticipated in the concept for the construction method; in the meantime, however, a false floor had been installed over the entire surface of the plate, removing it from view and from access. It had to be removed and rebuilt to permit the waterproofing of the structure.

The story is somewhat more complicated than this brief account suggests, which may explain why this obvious consideration escaped attention: the concrete slab had already been rehabilitated, since the original version did not resist the water pressure and had heaved. Supplementary tension piles had to be introduced and a portion of the plate rebuilt. Experts were employed in a climate of contention, and the planning of the rehabilitation work was mandated in piecemeal fashion to several different parties, with the coordination of it all left out—a lack of management. The error here can be pinpointed to missing communication: even without management oversight it would have occurred to the participants that a problem was being created, had the various parties been talking to each other. But this tends to be avoided in a situation of conflict, which had already been created through earlier faults, their correction, and the assignment of liability for the associated cost.


7.34 The Importance of the Ground Water Table

This case is typical of a frequently occurring situation where the bottom of a building must be built below the ground water table, i.e. "into the water". Various methods exist for doing this, but they all have to solve the problem of water under pressure acting on the bottom and sides of the substructure. Mostly these structures are built in concrete, and the contraction of this material due to shrinkage and temperature variations creates cracks that invite the water to penetrate. In some cases membranes are installed to protect the outer surface of the concrete, in others this is not done, with mixed success in both cases. In the case at hand, a silo for storing wood chips was built about two meters "into" the ground water, and a serious leakage problem had to be dealt with at considerable cost and delay. The risk associated with these circumstances is well known and was accepted by all parties. However, as it turned out, there was no reason to build the structure at this level; it could have been constructed higher up, thus avoiding the difficulty and cost associated with the water pressure—a conceptual error of a very basic nature.

7.35 Forgotten Effects of Fire

This story is about the walls of a chimney/fireplace in a golf clubhouse. The building had been destroyed by a fire and rebuilt from the ground up, except for the masonry of the chimney and fireplace, which was left in the condition brought about by the event. The walls were built mostly from granite boulders (200–300 mm) on the front and side faces, and clay brick at the back. An engineer was called because some of the users had expressed concerns about the safety of this masonry, and rightly so: not only the mortar joints but the large stones of the wall themselves were in a state of disintegration where they could be removed in pieces by hand, leaving the walls in a precarious state in which even slight shaking by a minor earthquake could have brought them down, along with part of the building structure above. This was the consequence of the water used by the firefighters, which cooled the stones very rapidly at the outside surface. Whoever has laid a boulder, especially of crystalline petrography, in a large campfire knows that such boulders will split into shards if heated long enough and cooled rapidly, for example when thrown into a lake. Not so the brick, which was found well preserved—it had been fired at high temperature before. The error in this case was to have left the masonry in its degraded state, presumably for cost reasons, unaware of the manifest risk of collapse. The engineer's recommendation was to remove the chimney without delay, eliminating the risk.


7.36 Durability of Timber Piles

Numerous historic buildings, built in the nineteenth century or earlier, are founded on wooden piles, which at the time were the only technically viable method of creating deep foundations in soft soil. Wood, as long as it is submerged in water or encased in an impervious material such as clay, will last "forever", since the organisms causing rot do not survive without a supply of oxygen. This means that if these conditions are not, or no longer, met, a risk of deterioration exists. In the case referred to here, the head portion of the wooden piles supporting the perimeter of a 130-year-old building had, in the course of modern urban development in the neighbourhood, become exposed to the oxygen in the dried-out soil surrounding them. The original water table had been affected by deep excavations nearby and by repeated openings of the neighbouring streets—urban streets are excavated frequently to place or replace electrical installations, water pipes, sewers, etc. Thus a ground water table that had been maintained at a more or less constant position in the historic past had become variable, or permanently lowered, which led to the destruction of the piles by rot. As a consequence, the exterior bearing walls of brick and stone began to separate from the rest of the building, since no elements existed tying them to the structure of the floor plates. A large portion of these exterior walls had to be taken down and rebuilt from the bottom up, including a modern-style foundation. Most of the movement had taken place during and after the execution of a nearby construction project involving excavation to a level below the basement of the old building, which suggests a causal connection. It may have been enhanced by the vibrations related to that construction, due to demolition work and blasting. The error in this case can be identified in the decision not to consolidate the foundation of the historic building preventively and in time—it took about one year and rather dramatic displacements to persuade management that remedial action was necessary, although another portion of the building had been consolidated by underpinning a few years earlier, and for the same reason. Once again, management chose to take the gamble and lost; the cost of rebuilding the historic structure and architecture was added to the expense of consolidating the foundation.

7.37 The Devil Is in the Detail

The attachment of the sometimes quite massive elements of building façades has long been something of an orphan in structural design. Few architects and engineers wished to touch it, leaving the concept and detail design to the contractor and the engineers he might have on his staff.


The case reported here, anecdotally, is quite typical of this situation. It concerns precast concrete elements attached to a concrete structure by an assembly of small steel pieces. The precast concrete elements were to receive the weight of a masonry composition which, when put in place, caused the assembly to sag progressively until contact between the storeys was established and the entire weight of the façade was supported by the foundation substructure. The deformations associated with this were not compatible with the adjacent elements, and the entire assembly had to be dismantled and rebuilt. The engineer had calculated the steel elements of the assembly and checked them for bending and shear, but did not verify the bolted connection which, when loaded, deformed the steel pieces locally. No washer plates, or insufficient ones, had been provided under the nuts and bolt heads. An additional weakening of the steel elements was caused by oblong holes for the bolts, provided for adjusting the position of the precast elements. A sample specimen of the detail was tested forensically in the laboratory and found to give way at about one third of the unfactored design load. The computer output for the calculation of the triangular piece was 28 pages long, but the local resistance of the pieces at the bolted connection had not been addressed. The design of such details is a task requiring systematic attention to all stations of the load path. The calculations necessary for this may seem simple and trivial but cannot presently be performed by computer software. The insertion of sufficiently large "washer" plates would have eliminated the problem, but nobody thought of it.
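A crude comparison of contact stresses, with entirely assumed numbers, suggests the scale of the difference a washer plate makes; a real verification would of course also cover pull-through, local plate bending and the effect of the slotted holes.

```python
# Illustrative contact-stress comparison (all numbers assumed): a nut bearing directly on
# a thin plate beside a slotted (oblong) adjustment hole, versus bearing through a washer
# plate that spreads the load.
N = 40e3            # N, assumed load delivered by the bolt
A_no_washer = 150   # mm2, assumed net contact area of the nut beside a slotted hole
A_washer = 3000     # mm2, assumed net area of a 60 x 60 mm washer plate

print(f"without washer: {N / A_no_washer:.0f} MPa, with washer: {N / A_washer:.0f} MPa")
# ~270 MPa -- at or above the yield strength of ordinary plate -- versus ~13 MPa:
# the washer plate is a trivial piece of steel, but it is the difference between a
# connection that dishes and pulls through locally and one that carries the load as
# calculated along the rest of the load path.
```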

7.38 Bad Luck with Piling Technique

To replace a bridge that had reached the end of its useful life, new piers had to be built in a riverbed. The ground consisted of less than competent glacial and river alluvium of varied description, but mainly sand and other fine-grained materials. This made a deep foundation necessary, and it was decided to use drilled-in caisson-type piles to the required depth of about 20 m. The construction of the piles/caissons followed a well-known technique the contractor was used to, involving the rotational drilling-in of steel pipes of 700 and 900 mm diameter respectively, to be filled with concrete. The drilling in the riverbed was carried out from a raft floating on the river, and difficulties manifested themselves due to movements of the raft, involving flexure of the pipes and rotation of the raft, which made the entire assembly unstable and difficult to control. The major problem arose, however, because of hydraulic heave of the sand at the open bottom of the pipes during the excavation, since the water level inside the pipe could not be controlled—a risk that had been identified but underestimated. This caused a loosening of the soil around the pile and, in turn, the loss of shaft friction. The method had to be abandoned and another, older technique adopted instead, involving delays, adversity and, of course, cost. The choice of drilling method was demonstrated by the events to have been wrong; the gamble was lost.


Construction history includes numerous stories of this type, especially involving work in the ground, where the behaviour of the soil or rock base is far more unpredictable than one is used to with building materials such as concrete or steel, especially when the interaction with water is involved. This means that every decision is associated with risks, and the mobilisation of as much pertinent experience as possible is the necessity of the day. It is here that the meaning of error becomes unstable itself, so to say, and the history of construction is rich in events where risks were taken and decisions made that involved some quantity of gamble. In circumstances where innovation is an element, this can be seen as part of progress—one is trying to do what nobody has done before. In the case at hand, such a claim would have been difficult to invoke, as all the elements of its description had been seen before, if perhaps not in precisely the same combination. Careful evaluation of the soil base, including historic records, boreholes and laboratory testing, as well as calculations, had preceded the work, based on generally accepted hypotheses found in the literature. It is these hypotheses that are the essence of engineering work, and their evaluation will decide the outcome. Engineers mostly tend to err on the prudent side with the hypotheses they adopt—not necessarily so for contractors, who are looking for profit through cost savings—every party will act according to its own incentives. It is therefore often a good idea to try to even out the diverging attitudes by bringing about a consensus. Apparently, in the case at hand, this was not done.

7.39 Problems with Brick Veneer

The façade of a large and prestigious building came up for rehabilitation and repair after about 70 and 55 years, respectively, for two portions created to the same design but in two periods of construction. The rehabilitation followed a detailed architectural inspection and report specifying that the upper part of the façade had to be rebuilt. The façade consists of two wythes of brick, the exterior of the thickness of one clay brick (approximately 90 mm), tied to the inner wythe, two bricks thick, by tie bricks. No horizontal joints with lintels allowing vertical movement of the outside single-brick wythe relative to the back wall had been provided. The rehabilitation included the upper third of the 30 m high brick veneer, and the construction method repeated the original concept, with tie bricks linking the veneer to the back wythe. An inspection 10 years later found that most of the tie bricks in the repaired portion were either broken or made up of two half bricks—in other words a fraudulent make-believe. No metal ties of any kind were found linking the veneer to the back wall across a space of between 10 and 30 mm. The one-brick veneer was seen to be beginning to buckle away in some places, posing a threat to passers-by below. Experience with similar construction several decades earlier had shown that a problem of differential vertical expansion and contraction exists with this mode of construction, the exterior brick


being exposed to temperature variations of much greater amplitude than the sheltered interior portion of the wall. Consequently, rules were included in the building regulations to the effect that horizontal expansion joints with supporting lintels must be provided at every storey for walls higher than 11 m. This rule did not exist at the time of the original construction and was ignored in the rehabilitation campaign. Bricks can tolerate very little deformation and will break successively if deformations are imposed, which is the reason why a second set of rules was included in the building by-law: to provide metal ties between the different wythes. This had equally been ignored in the repair campaign. The fact that the two wythes sit on the same foundation at the base means that the differential thermal movement accumulates toward the top—hence the upper third had to be repaired, and was found to have relapsed to the same condition as before.

To complete the narration, it must be added that the contractor entrusted with the rehabilitation was of doubtful reputation and had been hired in less than transparent circumstances as the low bidder for the work. A few years later the company went bankrupt. Here one is looking at a combination of several contributing factors: a system which had failed in its original configuration, the tie bricks having broken, was replaced by a repair which suffered from the same deficiencies and failed again. This can be traced back to the ignorance and/or negligence of the agents entrusted with the restoration, compounded by the substandard quality of the work actually carried out, where some of the tie bricks were not what they pretended to be, having been replaced by what amounts to a forgery. Another question mark must be placed against the hiring practice of the client, who gave the work to a party of doubtful reputation. The circumstances of this happening must be seen as part of the local culture and politics, where the rules of the game often give way to considerations of a less than tidy nature.
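The differential movement that accumulates at the top of such a veneer can be estimated with a one-line calculation; the expansion coefficient and temperature difference below are assumed values for illustration.

```python
# Rough estimate (assumed values) of the differential movement accumulating at the top
# of a 30 m brick veneer that shares its foundation with the sheltered back-up wythe.
alpha_brick = 6e-6   # 1/degC, assumed thermal expansion coefficient of clay brick
dT_diff = 50.0       # degC, assumed difference in temperature swing, exterior vs interior
H = 30.0             # m, height of the veneer

delta = alpha_brick * dT_diff * H
print(f"differential movement at the top ~ {delta*1000:.0f} mm")
# ~9 mm, before adding the irreversible moisture expansion of new brick. Rigid tie bricks
# cannot follow this, which is why the rules call for soft horizontal joints on lintels at
# every storey and for flexible metal ties between the wythes.
```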

7.40 The Anomaly of Water

Water freezes and melts at about 0 °C, depending on its purity, and the volume of its solid form, ice, is about 10% greater than that of the liquid phase. In temperate or polar climates this phase change can repeat itself cyclically many times each year at ambient, i.e. exterior, exposure. This fact is at the basis of some striking differences in the fate of construction, depending on geographic location: where masonry and structures such as bridges or castles have happily survived since Roman or medieval times without much maintenance in warm countries, their equivalents in colder regions have either disappeared or had to be restored periodically, at intervals varying from a few decades to perhaps a century or so. The following case tells a story that also includes some other, less physical but nonetheless interesting aspects:


The exterior wall of an institutional building, which served as bearing wall for six floor plates and a roof, was composed of cut stones on the outside and brick backing inside. The space between was filled incompletely with a mix of mortar and small stones—a method of construction imported from Europe to North America (see also case 7.2), along with the masons, who came mostly from Italy. This type of wall has a very limited lifespan in colder climates without maintenance, because the mortar in the joints and inside the wall will suffer from cyclical wetting/drying and freezing/thawing wherever humidity penetrates. That humidity has several sources:

• Rainwater running down the outside of the wall infiltrates the mortar joints, especially where the mortar has become deficient or absent.
• Condensation due to adverse temperature gradients and high humidity of the air.
• Infiltration at openings, windowsills, or at the top where waterproofing is deficient.
• Capillarity may expand the affected volume of humidity in all directions.

In the case at hand, the wall had been repointed several times during the first 90 years or so of its existence and was partially rebuilt at that age. At the same time the roof of the building was up for repair, by a different contractor. About 15 years later, the appearance of the wall had become alarming once again, worse than before the restoration, with mortar falling out, stones being dislocated outward and disintegrating, and long vertical cracks hinting at an impending instability. What had happened?

Interpretation in circumstances such as these is usually neither easy nor unequivocal. It was determined that the main culprit was probably water infiltration in large quantities at the edge of the roof, where the flashing and waterproofing work was found deficient, the work having been divided between two contractors/teams, one for the roof, the other for the wall. Coordination and general oversight were left with the administration of the institution, people with little technical knowledge and experience. The fact that water entered the body of the wall in substantial quantities was compounded by freshly repointed and rebuilt mortar joints which impeded evaporation, leaving some of the interior of the wall permanently soaked. In winter, the temperature gradient through the wall migrates cyclically, with the freezing point visiting the interior of the wall at each turn, increasing the volume of the water by 10%—and reducing it again. This creates a ratchet-type mechanism, because the void left at melting, when the water contracts to its liquid volume, will not close again completely, with grains of mortar displaced and caught between harder particles. Stones are then pushed outward and sideways, progressing toward a disintegration of the wall. Several errors, of totally different character, can be cited at the root of the misfortune of this case:

• The work was divided into two different contracts given to two different teams, without sufficient attention to the interface between replacing the roof and restoring the masonry. This is a classic error belonging exclusively to management,

2 Design and Construction Case Studies

91

when the division of responsibility leaves out a certain portion of it unattended, with nobody in charge. • This caused a situation where the flashing and waterproofing at the top of the wall was left deficient and the wall exposed to massive water infiltration. • The repointing and partial reconstruction of the masonry sealed the surface of the wall against evaporation of the water which was now trapped inside, making the soaked condition permanent. • The fact that the type of masonry construction with an interior volume made from mortar and left-over stone fragments of uncertain behaviour, but certainly interspaced with voids of various caliber, is not suited to climates with frost cycles, has over time become a reality of life of sorts rather than an error that can be blamed on any party alive. Observation of some other buildings of this type has discovered an interesting consequence of this state of affairs: As the exterior stones are pushed out of their original position with respect to the backing, a more or less continuous void is created so that the exterior wythe has now effectively become a rain screen, much along this more modern principle. This shields the wall against massive water penetration from the outside face while accepting that some water will exist anyway in the air space inside the wall where it can and must be drained in an organized fashion. The wall which was meant to be one massive unit becomes a system with a two-tier defense— it has relieved itself of the problem. Outward movement of the stones typically stops after about 5–20 mm, and in some instances it was decided to leave it at that until such time that further consequences would show up. Drainage of the interior of the wall must however be organized, for example by providing weep holes. If it is not, or insufficiently, events will take their course as in the case described above. The general morale of this sort of stories is: The water will always find a way in, make sure it can get out. And—if you have to subdivide the work, watch the interface.
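To give a feel for the ratchet mechanism described above, the following sketch accumulates a small, unrecovered displacement over many freeze-thaw cycles. All numbers (joint thickness, recovery fraction, cycle count) are illustrative assumptions, not measurements from the case, and the simple linear model ignores the self-limiting effect that typically stops the movement after 5–20 mm.

```python
# Illustrative back-of-envelope model of the freeze-thaw "ratchet" in a soaked masonry wall.
# All values are assumptions for the sake of the sketch, not data from the case above.

water_expansion = 0.09       # water expands roughly 9-10 % in volume when it freezes
recovery_fraction = 0.9      # assumed share of each cycle's expansion recovered on thawing
joint_thickness_mm = 5.0     # assumed thickness of the soaked mortar layer behind a facing stone
cycles_per_year = 20         # assumed number of freezing crossings reaching the wall interior

residual_per_cycle_mm = joint_thickness_mm * water_expansion * (1.0 - recovery_fraction)

for years in (5, 10, 15):
    cycles = years * cycles_per_year
    print(f"{years:>2} years ({cycles:>3} cycles): cumulative outward movement "
          f"~ {cycles * residual_per_cycle_mm:.1f} mm")
```

Even with per-cycle residuals of a fraction of a millimetre, the displacement after a decade or two is of the same order as the 5–20 mm observed on comparable walls.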

7.41 Trouble with Precast Concrete

In the years and decades following the Second World War, precast concrete became the favoured method to create façades/cladding, loved especially by architects for its great versatility in creating visual aspects expressing tendencies, fashion and ideology. Some 80–90 years after its inception, the method is still alive and vigorous, and is being married to revival tendencies of classic architecture. The following case is a tale of how such a marriage can lead to miscarriage and accident in the modern circumstances of time pressure and obsession with quick profit.
On a 12 storey building housing upscale condominiums, the architect wished to create a classic appearance by providing massive cantilevering cornices at the roof, which were chosen to be made from regular density precast concrete, to be installed onto the primary reinforced concrete structure using some small steel anchors which were cast in with the structure and with the cladding elements. This involves several production agents, and at the interfaces, geometrical, temporal and organisational incompatibilities often arise which, under time pressure, result in short-cuts and unsafe interventions, leaving the work in a precarious condition and, more importantly, unknown as such, because the cutting and incompetent replacement of some anchorage elements went unreported. To add to the delicacy of the matter, the detail shown on the drawing was called "seismic"…
A short time after the installation, a precast concrete element weighing some 17,000 lbs (7.5 tons) fell onto a balcony, triggering another element of similar weight, attached to the edge of the cantilevering balcony, to fall to the ground. Fortunately, nobody was hurt, but the investigation following the accident turned up a number of places where conditions of a similar nature had to be corrected. As a matter of course, the case went into blame-game mode, where everyone is trying to hide behind concepts like: not me, he too… The worker who cut the anchor bar because it was interfering with the reinforcing steel he was placing had done so without telling his foreman, who had no time to deal with such details in the heat of the moment, with the massive element hanging from the crane, ready to be placed… Also, it was found that the local reinforcing of the concrete at the point of anchorage had been altered in the field into something less resistant, i.e. insufficient to resist the loads.
The installation and stability of cornice-type elements has always been a delicate matter, ever since architects took a delight in stone elements protruding beyond the face of the wall below. On older buildings one finds these elements held in place by the weight of stones placed on top of them, a mode of stabilisation no longer favoured and replaced by other methods wherever possible, in order to provide resistance in earthquake situations among others. Usually this is done by steel elements of small dimensions, anchored or cast into concrete or stone, with detail dimensions, such as the depth of anchorage, taken from a book, exploiting the high strength of this material: a hazardous methodology for heavy elements placed high upon building faces, unless robustness is provided. This was obviously not done in the case at hand, the loss of one anchor being sufficient to trigger collapse.
Let us not forget that the anchorage of steel elements in concrete is an extremely complex matter, still being explored in our days. Theories, codes and standards have been published in the past decade or so, and are invariably based on precisely prepared laboratory specimens. Often the reality of construction does not much resemble those laboratory conditions, and to compensate for the difference with safety factors perhaps provides some mental comfort to the builders but less so in physical reality, much in the sense of what has been aptly called "satisfiction".
If one follows the load path from the weight, wind or seismic load acting on a cantilevering element to the point of safe resistance, for example in a massive reinforced concrete element, this often recalls the experience of an obstacle course, with traps and hazards lurking everywhere:
• The anchorage in the precast concrete is usually installed at the factory and becomes invisible as soon as the concrete is placed. It relies on the bond of the concrete to the steel bar or strap over its length, which is limited by the dimensions of the concrete element, unless hooks or anchor plates are provided. These anchors are then often welded to an anchor plate which in turn is welded in the field to an intermediate spacer piece which in turn is welded to another steel piece which is anchored in the concrete of the structure.
• These anchors often interfere with the reinforcing cage of the supporting structure, into the interior of which they should reach. If they do not, one relies on the pull-out strength of the concrete "cone" or "pyramid" in plain concrete which is mobilized by the anchor, concrete that may be or become cracked sometime in the future, depending on its exposure to restrained shrinkage, chemical attack and/or freeze–thaw cycles.
A tension load associated with a cantilevering weight, or with inertia forces caused by an earthquake, therefore typically has to pass through a considerable number of stations: at least two anchorages controlled by the adhesion/pull-out strength of concrete, and several welds, some of them placed in the field. The precision of a concrete structure built for economy in time and money must be expected to be somewhat imperfect, which makes adjustments necessary. These are invariably provided by the anchorage assembly since the two concrete elements cannot be modified at the time of installation. This will usually result in conditions with eccentricities, misalignments, or the cutting of anchors already cast in, to be replaced by drilled-in bolts or bars grouted with cement or epoxy, all of this under time pressure, by personnel likely less than experienced and knowledgeable, and with questionable attitudes. Add to this the fact that the anchorage assembly most often disappears from view as soon as the precast concrete is placed, and the elements of an accident waiting to happen are evident, corrosion further degrading the effective resistance of the assembly in time. The only way out of the conundrum, with the heavy concrete element hanging out there, is to provide robustness, in terms of redundancy or overstrength, or both (see also case 7.48).
The problem with load paths involving tensile stress is quite ubiquitous in modern construction where architectural audacity is the word of the day, exploiting the theoretical possibilities offered by modern materials and methods. Practical application often produces conditions where all the theories developed by science, and validated by laboratory tests, are invalidated by the field conditions. An effective method to mitigate the hazard overhanging elements represent is to make them of light-weight materials such as thin-walled metal, plastics or hollow ceramics. Given the alternatives available today, the use of heavy precast concrete or stone elements for overhead or overhanging structures is approaching the criminal and ought to be ruled out or at least strongly discouraged. More accidents are certain to follow until architects and engineers learn more about the implications of gravity.
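To illustrate the order of magnitude involved in the pull-out ("cone" or "pyramid") mechanism mentioned above, the sketch below evaluates the basic concrete breakout expression used in modern anchorage provisions (an ACI 318 / CEB-style formula, N_b = k_c·√f'c·h_ef^1.5). The concrete strength, embedment depth and anchor arrangement are illustrative assumptions, not data from the case.

```python
import math

def concrete_breakout_capacity_N(f_c_MPa: float, h_ef_mm: float, k_c: float = 10.0) -> float:
    """Basic concrete breakout strength of a single cast-in anchor in tension,
    in the form used by modern anchorage provisions: N_b = k_c * sqrt(f'c) * h_ef**1.5 [N],
    valid for uncracked, laboratory-like conditions."""
    return k_c * math.sqrt(f_c_MPa) * h_ef_mm ** 1.5

# Illustrative values (assumptions, not data from the case):
f_c = 30.0      # concrete strength, MPa
h_ef = 100.0    # effective embedment depth, mm

n_b = concrete_breakout_capacity_N(f_c, h_ef)
element_weight_N = 7500 * 9.81   # the roughly 7.5 t cornice element mentioned above

print(f"breakout capacity per anchor ~ {n_b / 1000:.0f} kN")
print(f"element weight             ~ {element_weight_N / 1000:.0f} kN")
# Cracked concrete, edge distances, group effects and field welding all reduce the
# laboratory value further, which is exactly why redundancy and overstrength matter.
```

Even under ideal, uncracked conditions a single shallow anchor is no match for the weight of the element, which is why the load must be shared among several anchors and why the cutting of one of them can be enough to trigger collapse.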

7.42 The Risk of Single Path Tension

"One rivet is no rivet" was a pronouncement of the professor teaching us steel construction. It applies to all cases of tension elements, especially when used to resist permanent gravity loads. A certain probability is always present that the single element resisting collapse in tension happens to be the defective unit from the batch, or that something happens to it after the builders have left, for example the installation of a new mechanical system which implies penetrating the structure. The following story is the tale of a near miss which must be seen as representative of the accidents out there, waiting to happen (see for example the case of the Hyatt Hotel in Kansas City).
In a high-rise office building the second floor is suspended from the third level by hangers spaced at some 30 ft (9 m), since the floor structure at this level does not connect to the perimeter columns of the building except in two places out of fourteen. The hangers were made from posttensioning wires anchored top and bottom with "button heads", a system no longer in use but which came with the smallest anchor heads available, in order to minimize interference with the architecture. About fifty years after the construction of the building the surface of the third floor came up for renovation and installation of contemporary electronic equipment, which made the false floor installed originally unnecessary. The anchor heads of the hangers were then found, and one of them was cut off as it was perceived to be in the way of the new floor tiling. Fortuitously, the hanger that was cut was in one of the two places where a connection to the nearest perimeter column existed, by way of a short concrete beam, so that no immediate collapse followed, the structure standing up thanks to an element not apparently meant for this.
The story reached a structural engineer doing modification work on the building some time after the cutting of the anchor, who then sounded the alarm, and the subsequent investigation turned up the additional truth that these anchors did not have any fire protection: a situation of high risk discovered after some 50 years. The newly installed hangers are planned to be in pairs, providing robustness just in case, as well as being packed in a fireproofing envelope. The man operating the cutting wheel or the diamond drill will not ask any questions but do what he is told to do.
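A minimal reliability sketch makes the "one rivet is no rivet" point quantitative. The failure probability below is an illustrative assumption, and so is the independence of the two hangers in a pair; common-cause events such as fire or the man with the cutting wheel can defeat both at once, which is why the new hangers were also fireproofed.

```python
# Sketch of the value of redundancy in a tension-only load path.
# p is an assumed, illustrative probability that any one hanger is defective,
# gets cut, or otherwise loses its capacity during the life of the building.

p = 0.01

p_single = p        # a single hanger: one bad element means loss of support
p_pair = p ** 2     # a redundant pair, each able to carry the full load,
                    # assuming (optimistically) independent failures

print(f"single hanger:  P(loss of support) = {p_single:.4f}")
print(f"redundant pair: P(loss of support) = {p_pair:.6f}")
```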

7.43 Anchorage of Large Forces

When a posttensioned concrete bridge in a prestigious location, crossing a large river, began to receive the tensioning of the cables, it was noticed that the concrete immediately behind the anchor head began to split. Examination of the drawings turned up the fact that the transverse reinforcing necessary to resist this had been left out, an error generated at the design stage and perpetuated through all the subsequent phases of the construction process.
This was not a case where inadequate knowledge or the hiring of individuals or firms of doubtful skill could be cited, for a careful selection process had been enacted given the high visibility of the project. Engineers and contractors of the highest reputation attended to the work, which shows that trivial errors can happen to the best. The information, or rather the missing information to provide the transverse reinforcement, had passed before the eyes of at least four principal actors: the engineer, the contractor, the posttensioning specialist and the steel setter. This being a public project, the client, i.e. the administrations of the several levels of government involved, employed engineers and technical personnel as well, in order to control and supervise the work. At the time, quality assurance protocols had all been formalized and put in place. All of this did not prevent the omission and its consequences. The repairs turned out to be very difficult and quite visible to anybody with at least a general technical knowledge. The major part of the cost was absorbed by the insurance companies, the most prominent being the engineer's.
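To show why the omitted transverse ("bursting") reinforcement matters, the following sketch uses a common approximate design expression for the bursting force behind a posttensioning anchorage, T_burst ≈ 0.25·P·(1 − a/h), of the kind found in AASHTO-type provisions. The anchor force, plate size, member depth and steel grade are illustrative assumptions, not values from the bridge described.

```python
# Hedged estimate of the transverse (bursting) tension behind a posttensioning anchor
# and of the reinforcement it calls for. All input values are illustrative assumptions.

P_kN = 4000.0     # anchorage force of one tendon
a_mm = 300.0      # anchor plate dimension in the direction considered
h_mm = 1500.0     # member depth behind the anchorage in the same direction
f_y_MPa = 400.0   # yield strength of the transverse reinforcing bars

T_burst_kN = 0.25 * P_kN * (1.0 - a_mm / h_mm)
A_s_mm2 = T_burst_kN * 1e3 / f_y_MPa   # steel area needed at yield, ignoring safety factors

print(f"bursting force ~ {T_burst_kN:.0f} kN")
print(f"steel required ~ {A_s_mm2:.0f} mm^2 of transverse reinforcement")
# Several hundred kilonewtons of tension with no steel to carry it explains
# why the concrete behind the anchor head split as soon as stressing began.
```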

7.44 The Art of Load Transfer

Architects and commercial business owners love open spaces inside buildings, chiefly on the ground floor or in nearby storeys. For new construction this requires massive transfer structures if the upper floors rest on a narrower column grid. Many structural engineers are familiar with this situation, and during a long career they may have met the challenges it poses more than a few times, so that it has become a matter of routine. The situation becomes quite different for existing buildings that were not built with this in mind, which means that the task is now to eliminate existing columns or walls, transferring the loads, mostly gravity loads, to other paths. It is one of the more exciting moments in a structural engineer's life when the existing column is physically removed, an event the engineer is traditionally expected to attend. Several aspects must be considered and dealt with in the planning of such an event:
• The new load path, mostly through a transfer structure bringing the loads to neighbouring vertical elements, must be properly strengthened so it will successfully, and without adverse effects, resist the added loads.
• Every transfer of substantial loads is associated with deformations, according to the laws of mechanics (e.g. Hooke: "ut tensio sic vis", as the extension, so the force). Since most of the time the path the original loads have been taking is not known with precision, any such "surgery" will be associated with a measure of uncertainty, and this property must be dealt with in real life, beyond all careful modelling and analyses of the structure before and after the transfer.
Often the deformations will also take their time to manifest themselves fully, especially in structures assembled from materials such as concrete or wood, and in those with deformable connections (nails, bolts in oversized holes, slipping bolts or rivets etc.). A recent case may illustrate this, including some non-technical aspects associated with the load transfer.
In a five storey residential building, mainly built in wood, the ground floor was to be made into a retail store, necessitating the removal of a main column. The engineer who was put in charge was not new to the task and invested considerable time and thought in it. This included a concept to counteract some of the deformations (deflections of the floors supported at the base by the column which was to be removed) by jacking and shimming. In spite of this effort, an adversarial situation ensued in which the occupants of the floors above complained about annoying cracks in the walls, doors jamming etc. This had to be anticipated, for the building had seen some history affecting load paths before, as is often the case in wooden structures. Wood, being a "living" material, responds to changes in ambient humidity and stress in a nonlinear and long-term, i.e. delayed, fashion. All the careful preparation and operation of equipment counteracting the expected sagging had not been sufficient to eliminate the problem, mainly due to the uncertainty associated with the hypotheses used for the calculations. This would have needed to be communicated to all parties potentially affected; it was not, and this constitutes the major error in this story, apart from some minor flaws in the execution which may or may not have affected the outcome.
Whoever has "tinkered" with existing structures knows that a measure of "unpredictability" must be reckoned with, having to do with a multitude of factors going back to the original construction, to use and abuse, past modifications or additions, accidents such as small fires etc. The structure may date back to a time before consistent engineering calculations were current, or to inadequate rules such as were sometimes written in building codes, to be corrected later following bad experience. All this makes building modification, and especially major load transfer, into an art of its own, transcending what can be taught at school.
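The inevitability of the deformations discussed above can be made tangible with a simple elastic estimate: the reaction formerly carried by the removed column is applied to a new transfer beam, whose midspan deflection follows the textbook expression δ = PL³/48EI for a simply supported span. All input values below are assumptions for illustration, not data from the building in question, and creep, joint slip and moisture movement in a wood structure would add to the elastic figure over time.

```python
# Minimal sketch of the deflection introduced by a column removal and load transfer.
# Values are illustrative assumptions; the point is the order of magnitude, not a design.

P = 180e3      # N, load formerly carried by the removed column
L = 6.0        # m, span of the new transfer beam between the neighbouring columns
E = 200e9      # Pa, modulus of elasticity (steel transfer beam assumed)
I = 2.0e-4     # m^4, second moment of area of the assumed transfer section

delta = P * L**3 / (48 * E * I)   # midspan deflection, simply supported, point load
print(f"instantaneous midspan deflection ~ {delta * 1000:.0f} mm")
```

A couple of centimetres, propagated upward through several storeys of wood framing, is ample to crack partitions and jam doors, which is why jacking and shimming were planned and why the occupants should have been warned.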

7.45 Past Construction Methods and Their Problems

Up to the introduction of comprehensive building codes addressing design criteria and procedures, construction was mostly proprietary, i.e. in the hands of the building contractors, who used their in-house procedures. At the same time, materials saw widespread use that have since been discontinued because other materials and techniques became more competitive. One such material and construction method involved floor plates made up of small steel beams and hollow terracotta elements ("hourdis") arranged in an arching assembly. A case where this technique resulted in a very unfortunate situation involves a large 4 storey institutional building dating from about 1900.
Buildings of this age category have seen a number of remakes of the technical systems (plumbing, climatization, electricity), which typically come up every 30–40 years while the structure remains basically unaltered. This holds mostly true for the framing but less so for "two-dimensional" elements like walls and floor plates where, typically, new penetrations must be provided for pipes and air ducts, as well as for their attachment, while much of the previous generations of installation is left in place, now "dead", for economic reasons. If this process is repeated, the penetrations and the damage associated with them accumulate and may eventually result in precarious conditions, especially for construction such as the terracotta floors.
What triggered the alarm in the building in question was a piece of terracotta large enough to penetrate the ceiling of an office, landing on the desk of a staff member. A thorough investigation followed and turned up a rather scary situation. Not only was the terracotta in a sorry state of repair, with all the thoughtless massacre left by the past renewals of the technical systems, but complete gaps through the thickness of the terracotta were found in numerous places, putting in serious question the fire separation these floors were supposed to provide. A rather thorough study of possible repair methods, including practical exploratory applications, turned up no satisfactory technique, due to the fragility of the terracotta which works only when complete and essentially intact. Makeshift measures were then adopted to provide protection against the falling pieces and to re-establish the lost fire separation. The owner, a branch of government public administration, decided to try to sell the building "as is", which does not bode well, considering its condition and the fact that it is classified as being of "heritage" value, with all that involves in terms of restrictions, obligations and the associated cost, which may exceed the value of the building. It has not changed hands to this day.
To identify an error in this example is somewhat more complicated than in most other instances, but the unresolved problem with the building constitutes testimony of something that went terribly wrong. In essence it is an accumulation of several facts:
• the original construction, which produced something very vulnerable, not lending itself to modification, robustness being entirely absent.
• the thoughtless and careless interventions during the history of the building. Typically the agents planning and performing new installations do not include structural engineers, giving practically a free hand to everyone to break, drill and remove pieces of the structure, in this case the terracotta, which is very easy to break.
It may altogether be summarized as an error of use, or in this case "abuse", and very clearly an error of management, which did not see to it that the destructive interventions were performed in a controlled manner, including the "making good" of the damage that was being created.
This author has seen a good number of similar scenarios, the seriousness of degradation varying with the type of construction. The most vulnerable are floor plates made from fragile materials such as terracotta, and from highly prestressed concrete "planks", especially of the hollow core type. Every period had its own preferred method of construction, imposed by the contemporary limits of technology and, of course, economic considerations such as the minimization of initial investment/expenditure. Rather rarely can one find the medium and far future being seriously considered in the decision making. It will be somebody else's problem…

7.46 The Importance of Wind

Wind forces have most of the time been respected and provided for in the design of structures, though not always, it would appear. The nineteenth and twentieth centuries saw a large quantity of construction, especially institutional buildings like hospitals, schools or office buildings for administration. Not much was known about the real wind loads, which were considered in summary fashion, and the resistance to wind was often assigned to building elements which were necessary anyway, such as façades or partitions, without much thought, let alone analysis. Most of the time this was successful and nothing untoward happened to these buildings. As building construction grew higher and higher, this attitude became increasingly questionable and risky, but it remained in the minds of many engineers and architects, at least as a basic notion that wind loads were nothing to worry about seriously. The following story tells about the end of this state of affairs.
The building of 63 storeys, with a footprint in the form of an elongated parallelogram, became famous at first for the trouble it had with the glass curtain wall, where a large percentage of the glass was lost in the first few months during and after completion, due to a fabrication problem. What is less known is that the primary building structure consisting of the elevator cores was found to be seriously deficient when, as is usually the case when trouble arises, a complete review of the structural design was carried out. The wind loads had been considered in a simplified way for the broad side of the building, and bracing had been provided in the walls of the elevator and stair shafts to resist these loads; however, for winds on the narrow face, and for winds in a diagonal direction, no system of sufficient resistance had been provided. Wind tunnel tests of some sophistication, which had been developed in the two preceding decades, showed a serious deficiency in the longitudinal direction, somewhat influenced by the presence of other high-rise construction upwind. Additional bracing had to be introduced at great cost and delay.
A number of dramatic disasters tell a story of ignorance where, similarly to the middle ages and antiquity, construction was pushed to audacious dimensions, with dramatic failures teaching about the limits of the knowledge and technology of the period (Beauvais Cathedral, the Tacoma Narrows bridge, the cooling towers in Great Britain, etc., see also cases 7.1, 7.7). More is known presently about the real forces produced by wind, including such phenomena as vortex shedding, wake buffeting, turbulence and dynamic interaction with the structure.
The case just discussed has taught a lesson:
• For high-rise construction wind is always an important exposure. Wind blows from any and all directions. Other cases have been seen where the orthogonal selection of wind directions missed the critical diagonal one.
• Wind is of a dynamic nature, which implies that the topography upwind affects the forces acting on a structure.
• For innovative structures all the existing knowledge must be mobilized. Ignorance is no excuse.
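As a reminder of the magnitudes involved, the sketch below converts wind speed into the basic dynamic (stagnation) pressure q = 0.5·ρ·v², the starting point onto which gust, exposure and shape factors from codes or wind tunnel testing are applied. The speeds are illustrative, not data from the building discussed.

```python
# Basic dynamic wind pressure q = 0.5 * rho * v^2 for a few illustrative speeds.
# Code pressures multiply this by gust, exposure and shape factors, and the governing
# direction for a slender parallelogram footprint may well be diagonal, not orthogonal.

rho = 1.25  # kg/m^3, air density

for v_kmh in (80, 120, 160):
    v = v_kmh / 3.6            # convert km/h to m/s
    q = 0.5 * rho * v ** 2     # Pa
    print(f"{v_kmh:>3} km/h -> q ~ {q:6.0f} Pa ({q / 1000:.2f} kPa)")
```

Because q grows with the square of the speed, doubling the design wind speed quadruples the pressure, which is one reason why a simplified orthogonal treatment can fall seriously short on a tall, slender building.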

7.47 Drilling Is Blind

In the centre of an important city, a portion of the main river embankment came up for rejuvenation, along with a ban on automobile traffic alongside the river. On one of the bridges within the precinct of the project a new drainage system was to be installed, the storm water being directed toward a drain and into piping installed under the bridge. A survey of all the pipes and conduits in the concrete structure of the bridge was conducted in order to find an appropriate location to drill the hole for the drain through the bridge deck. Among the elements cast in the concrete was an 11,000 V cable encased in a pipe, the location of which was indicated on a sketch given to the contractor responsible for the work. The information was taken from a drawing obtained from the city archives; only, it was erroneous. The workman doing the drilling with a diamond-crowned hollow drill was very fortunately using abundant cooling water, which saved his life when the fuse in the nearest electric substation "blew" before the current reached him: a near miss.
In this author's experience there are several similar events, not involving electricity but other types of conduits like water mains, sewers or interior stairs, where drilling, rather than missing the built-in installations, went right through them due to erroneous information, triggering the sometimes rather dramatic effects this can have. All these cases can be classified as a group where knowledge about existing installations is either absent or erroneous. It is a good idea to mobilize modern tools such as radar or ultrasound to determine the real location of these, rather than trust existing documentation, which has been composed and updated by numerous different agents, before engaging the diamond drill. It is very difficult to feel the difference when the drilling tool hits another material; think of steel or plastic embedded in concrete, or a void.

7.48 Inspection Impossible

It was a brutal wake-up call when a lady having a coffee with her partner in an outdoor restaurant was struck and killed by a piece of precast concrete falling down from the curtain wall of a high-rise hotel. The accident was followed, as usual, by a regulation frenzy of the public administration, which passed legislation demanding inspection and review of all building façades by professionals. Only, it turned out to be impossible to follow up on these newly installed requirements within the time limit that had been legislated, for two reasons:
• the very large quantity of building envelopes; in the metropolis alone they total a surface of the order of magnitude of several square kilometers, depending on how one counts.
• the fact that most of the façade elements, especially and mostly precast concrete or masonry veneers, are anchored and supported from behind, making it very difficult or impossible to inspect the essential details tying the façade elements to the structure. In most cases, such an inspection implies opening the assembly from inside, or removal of an element, which can often not be done without destroying it in the process. Therefore, even if one carried this out, all the knowledge gained will be based on a very small sample: one or a few elements out of hundreds or thousands.
Mostly, anchorage elements were made from galvanized steel, and involved welding in many instances. Since modern curtain walls are usually designed following the "rain screen" principle, water and humidity are present behind the first layer of wall, the "rain screen", and it is easy to see that the small steel pieces will eventually lose their galvanic protection, especially where they were welded together and, officially and contractually, touched up with some corrosion protective paint, a process that is impossible to control effectively. This invites corrosion, which takes its course invisibly and at various rates of progression, depending on a number of factors.
The accident that turned into an eye opener demonstrates that we are facing an error situation in every sense, including the disastrous manifestation of the consequences. It is an error that has two major components:
• to install attachments made from a material that is well known to lose its substance and strength over time, in a position exposed to water, not to forget polluted water, in a city of our era.
• to make these details invisible and difficult or impossible to inspect without major destructive intervention.
The first component turns out to be not too difficult to correct, by using non-corrosive materials; the cost of using stainless steel is often imagined to be prohibitive, but it is not, considering the small quantity of material involved. The cost of the manpower needed to install and protect these devices is a large multiple of the price increment paid for stainless steel and is the same one way or the other. The main and principal error in this case is to have made these details impossible to inspect. Details do exist to avoid this, but they may not be compatible with architects' ideas about visual appearance. It must be surmised that a very high price will be paid in the near and medium future for this, considering the quantity of buildings aging, which includes the attachment details of the curtain wall. The bills have already started coming in, in the form of accidents, government decrees and urgent repairs.

7.49 Thoughtless Modification

This story is about the modern adaptation of old structures. The case described here is representative of numerous occasions where damage followed from a lack of comprehension of how historic construction methods work, and of how they cease to work if one interferes with their structural system.
The building was built around 1870 with bearing walls made from stone on the outside and brick inside, and wooden floor plates, a construction method that saw widespread use in North America until the early twentieth century. The building was commissioned by an order of nuns for use as part of a hospital. It was built with "donated" materials, with similar results as would presently be associated with "economic" construction. The wooden floors were supported by joists made from boards spanning between the bearing walls and a central beam supported by a line of columns. The bearing walls were built directly on soft silty clay, without any footing.
All was well until sometime in the mid-twentieth century, when a cafeteria was installed in the top storey. It had to be column free, so a new roof spanning directly from wall to wall was installed and the original columns were removed. Sometime later it was found that the wooden joists supporting the ground floor had left the pockets in the wall that supported them, making the floor plate sag suddenly, to the consternation of the administrative staff having their offices there. The two bearing walls had in effect spread apart by buckling outward. At the same time it was noticed in the 2nd storey that the walls were now leaning, both in the same direction. Emergency bracing and tie-rods had to be installed and the building evacuated. Unfortunately, the main kitchen of the hospital was located in the basement of the building. It had to be decommissioned a few days before Christmas.
The forensic interpretation can be summarized as follows: the entire construction had been of dubious quality to start with but was sufficiently solid to last about one century, until modern architecture interfered with its function. The removal of the column line in the top storey changed the path of the gravity loads, approximately doubling the vertical forces acting on the bearing walls. These loads included the weight of the snow, which in this location is considerable. It was sufficient to overcome the stability of the walls. To pinpoint the factors leading to the chain of events:
• A latent problem of low quality had not been recognized.
• This led to the overestimation, if it was estimated at all, of the real bearing capacity of the walls.
• The walls having a total thickness of about 75 cm at the base and 50 cm or so near the top, the stresses one calculates are rather low when compared to the strength of stone, brick, or even mortar as determined by laboratory test.
• This leaves out the absence of "structural integrity", with the coherence of the structure resting solely on the friction of the wood joists in their pockets in the brick wall, as well as the coherence of the wall itself, where in time the mortar of the joints will degenerate and disappear, the load transfer now being "stone to stone", at points of contact.
Fortunately, this type of construction is of a relatively forgiving character, giving ample warning of something being amiss, which permitted the timely evacuation of the building.
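The "approximate doubling" of the wall loads follows directly from tributary widths, as the minimal sketch below shows. Span and roof load are illustrative assumptions, not survey data from the building.

```python
# Tributary-width sketch: roof load carried per metre of bearing wall,
# before and after removal of the central column line. Values are assumed.

span = 10.0        # m, clear distance between the two bearing walls
w_roof = 3.5       # kN/m^2, roof dead load plus snow (snow is heavy at this location)

# With a central column line, each wall supports about a quarter of the span,
# the centre line about half.
per_m_before = w_roof * span / 4
# With the columns removed and the roof spanning wall to wall, each wall supports half.
per_m_after = w_roof * span / 2

print(f"roof load per metre of wall: before ~ {per_m_before:.1f} kN/m, after ~ {per_m_after:.1f} kN/m")
```

The extra load did not overstress the masonry in simple compression; it overcame the stability of slender, poorly tied walls, which is a different and far less forgiving limit.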

7.50 Extreme Weather and Vulnerable Structures

In the winter of 1997/98 a part of eastern Canada and adjacent areas of the US were hit by an unusual sequence of weather events, with two systems of "freezing rain" following one another within about one week. Freezing rain is a weather situation that occurs more than once every winter in the region, but this time two extreme events followed each other in less time than it usually takes for the effects to disappear. The deposit of ice on all surfaces exposed to the rain, twigs, roofs, walls and electric wires, exceeded what had been seen before, with accumulations of several inches (up to 100 mm) in thickness on the electric transmission lines. This led to the collapse of more than 10,000 masts carrying the wires and cables, including several hundred kilometres of high voltage (750 kV) lines.
The masts of these lines were of the lattice type, with some members of high slenderness ratio (l/r > 150). The structure of the towers is essentially isostatic, meaning that the loss of any one member causes the entire structure to become a mechanism, unable to resist loads. To make things worse, the towers were all of the same type; when the loading became asymmetrical due to the failure of a neighbouring mast, they were pulled down by the wires connecting them to the other neighbour still standing, like dominoes: a classic scenario of progressive collapse. It also shows the consequence of a lack of robustness: had there been "strong" points along the line, say every five or ten masts built to resist one-sided pull, the consequences of the event (some areas were left without power for more than a month) could have been greatly attenuated. The high vulnerability of slender lattice structures was demonstrated by the fact that another type of mast, made from a single tapering steel tube or concrete section, fared much better in the event; its response to overloading is much more ductile.
It is not easy to identify an error in this case beyond the discussion above. Economic and technical considerations strongly favour the use of lattice type masts, which must be installed in areas with difficult access for surface transportation and heavy equipment. Their weight and time of assembly are therefore at an important premium. On the other side of the equation is the evaluation of the risk imposed by this type of weather event, the intensity of which had not been anticipated (one spoke of a return period of one or several centuries). However, the absence of robustness, which could have been provided by "strong masts" acting as "zipper stoppers", was clearly an error of omission which could have been avoided at relatively modest cost. Equally clearly, the event takes on the character of a gamble, taking a risk with the weather, a natural exposure to something that was well known but the extreme magnitude of which is difficult to assess.
In cases like this, it is a good idea to study the behaviour of structures when they are loaded beyond the elastic limit. If such an inelastic "realm" does not exist, as in isostatic structures with highly slender members, a clear need for robustness exists. It can take various forms: try to attenuate the vulnerability of the structure itself, avoiding slender lattice types, or introduce "zipper stoppers". Heating the wires was another concept discussed at the time following the cataclysm, but durability problems are notorious with heating wires, as with all preventive systems which may lie unused for long periods of time only to become suddenly and urgently needed.
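A back-of-envelope estimate shows how dramatically glaze ice multiplies the load on a conductor. Treating the reported accretion as a uniform radial sleeve, the weight per metre follows from the annular cross-section; the conductor diameter and ice density below are illustrative assumptions.

```python
import math

# Weight of a radial glaze-ice sleeve on a transmission conductor, per metre of line.
# The conductor diameter and ice density are assumptions; the ~50 mm radial thickness
# reflects the "up to 100 mm" accumulations reported during the storm.

d_wire = 0.03       # m, conductor diameter (assumed)
t_ice = 0.05        # m, radial ice thickness
rho_ice = 900.0     # kg/m^3, glaze ice
g = 9.81            # m/s^2

r = d_wire / 2.0
a_ice = math.pi * ((r + t_ice) ** 2 - r ** 2)   # m^2, cross-section of the ice sleeve
w = a_ice * rho_ice * g                          # N per metre of conductor

print(f"ice load ~ {w:.0f} N/m, i.e. about {w / g:.0f} kg per metre of line")
```

Since a bare conductor of that size weighs only on the order of 1–2 kg per metre, the vertical load was multiplied many times over, before any unbalanced longitudinal pull from a fallen neighbour was added.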

7.51 Theory and Performance

The case discussed here is probably rather unique in its specific description but representative of a class of structures that were conceived and built following the personal understanding, or misunderstanding, the creator had when making the key design decisions. Before formal building codes existed, building practice was proprietary to individuals and companies, sometimes based on theories available at the time, or on personal impressions of how structures work, i.e. react to exposure by the environment.
One example where things went decisively wrong was discovered recently: a four storey building in concrete construction (slabs, beams and columns) where the reinforcing of the columns consisted of vertical bars tied by "stirrups" made from wire at a spacing too wide to prevent buckling of the bars, and the beams were "reinforced" by bars in a catenary configuration, with no other reinforcing. This, together with the usual degradation of the structure through the installation of several generations of plumbing, mechanical and electrical systems, left the structure in a state so precarious that the building had to be evacuated and eventually destroyed. It was observed that in numerous places the beams had separated from the columns, amounting to a discontinuity and therefore the absence of structural integrity. Also, in some of the vertical columns the reinforcing had buckled, effectively weakening the columns to a dangerous degree. The reason why the building kept standing for almost a century was thought to be the absence of a serious earthquake during its existence.
In the meantime, being located in the center of a North American city where historical buildings are scarce, the building had been classified as an example of past construction methods, along with a wish to preserve it as a historical monument. To do this turned out to be too difficult, cumbersome and costly, however, and to everyone's regret the building was taken down. It is an example of a mistake, in this case based on a misconception, which belatedly turned out to be fatal.

7.52 Trouble with Cracking

The construction of concrete basins, especially when reaching below the ground water table, has never been easy and is usually followed by corrective work. This is due to the deformational incompatibility of the concrete and the ground. The two are in contact, and friction does not allow relative movements in the plane of contact. Shrinkage and thermal contraction of the concrete can only be accommodated by cracks in the concrete when its deformability in tension is overcome. The case of a large underground parking garage considered here is a typical example of a widespread problem where the cracking of the concrete is difficult to control. It becomes aggravated when water pressure outside the concrete shell is present, causing infiltrations.
As a rule, cracking begins at the points of least tensile resistance, and nature will readily find those points, not always where predicted. The formation of a crack will weaken the concrete structure even more in this location, so that most of the subsequent contraction will accumulate here, widening the crack. Various solutions to the problem have been tried, supported by laboratory testing or theoretical considerations, to control the cracking, mostly resulting in the demand for substantial amounts of reinforcing, with mixed success; too many factors add up to a high rate of uncertainty. Another avenue to crack control has been to provide control joints where contraction may take place, keeping them open as long as needed. One hopes that doing this will prevent cracks from forming in other places. When shrinkage has taken place, the cracks are filled, preferably with an extensible material. This means that access to the concrete surface must be preserved until this is done, which may delay possession by the user by several months, shrinkage taking its time (see also case 7.33). Exposure to exterior conditions, or partial temperature control as in the case of parking garages, will add the thermal contraction to the shrinkage.
The difficulty of predicting the future behaviour of the concrete structure is sometimes perceived to be a good reason to minimize initial costs and adopt a wait-and-see strategy. It amounts to taking a chance, or in other words, a gamble. In the case at hand, the gamble did not work out, new cracks continuing to show up even years after construction, accompanied by water infiltration due to ground water pressure. Minimal reinforcing had been provided, the shrinkage rate was high because a high water-cement ratio had been used for the concrete, and the cracks had been filled with a hard resin-type compound, forcing the structure to form cracks in other places, which had in turn become the zones of least tensile resistance. This is typical and notorious for situations where a penny-wise decision was made to minimize initial expenditure, which was more than surpassed at "the end of the day" by the cost associated with corrective work and delays. It is tantamount to a decision that was proven wrong by events. One may call it an error in this particular case; it might have turned out differently in other circumstances, however, with an overall cost saving.
It is important to note in cases such as this that the uncertainty associated with the prediction of future behaviour is considerably greater than for other structural criteria, for several reasons:
• the temperature gradient in the concrete (exposed surface and interior) at the time of curing, and later
• the configuration of adhesive friction with the ground: was the concrete placed onto a drainage layer, a membrane, or a lean concrete base
• the amount, position and configuration of the reinforcing steel
• the rate and timing of shrinkage
• the provision, type and effectiveness of control joints
Some of these factors are easily controlled by the building team, some less so. It may be difficult to persuade the low bidder for the work to provide careful and effective curing of the concrete, in reality and consistently. Continuous policing may become a necessity. The cumulative uncertainty associated with all these factors must be made part of the decision making about the preferred strategy. Not to do so may constitute the fundamental error, an error of management, once again.
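An order-of-magnitude check, sketched below, shows why restrained shrinkage must crack the concrete rather than merely stress it: the tensile stress that full restraint would generate is several times the tensile strength. The strain, modulus and strength values are typical assumptions, not measurements from the garage.

```python
# Why restrained contraction ends in cracks: compare the stress full restraint would
# generate with the tensile strength of the concrete. All values are typical assumptions.

eps_shrinkage = 500e-6    # ultimate drying shrinkage strain (higher with a wet mix)
eps_thermal = 200e-6      # seasonal thermal contraction for a partially exposed slab
E_c = 30e3                # MPa, modulus of elasticity of the concrete
f_t = 3.0                 # MPa, tensile strength of the concrete

sigma_restrained = E_c * (eps_shrinkage + eps_thermal)   # ignores relief by creep
print(f"stress under full restraint ~ {sigma_restrained:.0f} MPa vs tensile strength ~ {f_t:.0f} MPa")
```

Creep and partial restraint relieve part of this, but nowhere near a factor of seven; cracks will form somewhere, and reinforcing, control joints and curing only decide where and how wide.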

7.53 Thoughtless Cutting

The following two tales report on a notorious and repetitive problem created through physical intervention on existing construction that destroys the load bearing function of certain structural elements. Typically, this occurs in the course of modifications where penetrations are needed for mechanical systems, vertical transportation, etc. Concrete structures look to some people as though they could tolerate any kind of cutting, loads finding their path in other parts of the structure. The two cases referred to in this discussion are neither newsworthy nor did they lead to manifest structural failure. However, the cost of correction in terms of money and delays was quite significant.
In the large leaning tower structure of the Olympic Stadium in Montreal, the internal concrete floor plates were to be converted from mere stiffening elements to usable office space. This implied the penetration of the concrete slabs for vertical transportation (elevators, stairs) and mechanical systems.

Because these plates were essential parts of the overall tower structure, they contained significant posttensioning in the form of high strength wires bundled into cables, of a type no longer current today (7 mm parallel wires with swaged button heads for anchorage). One of these cables (55 wires) was cut accidentally with a diamond-studded concrete saw by a worker who followed the instruction of his superiors, who had failed to notice the presence of the cable although the drawings showed it clearly. The building team was not familiar with the structural concept of the tower and had their attention focussed exclusively on the detail work of remodelling the floors. The repair/correction work turned out to be rather difficult, as it was found that the development/anchorage length by bond of a cable of this caliber inside the grouted duct was of the order of 8 m. The replacement posttensioning element had to be anchored more than this distance away from the cut.
Another case involving the cutting of a vital structural element is more trivial: it involved the installation of an air duct penetrating a massive concrete beam. Due to an oversight, this penetration had not been provided when the beam was built. It was cut, unbeknownst to the structural engineer, who discovered it a little later. The rectangular opening had all but destroyed the shear resistance of the beam, and a complicated repair concept involving fibre reinforced plastic had to be developed.
Both cases are of the same essential type, where the team doing the physical work had received directions to proceed from superiors who obviously did not know what they were doing, due to incompetence or lack of attention, or both. Aside from the cost of correction, aggravation and discomfort to some of the participants, no serious accident followed the errors. The important thing to note in the context of the two examples is that this is all they are: examples of a great number of similar circumstances, some of which have not been recognized and corrected, waiting to cause accidents. These latent deficiencies of structures represent a massive and ever-present risk, as demonstrated by structural failures that occur long after construction, triggered by degradation, increased exposure, or simply age (see cases 7.4, 7.9, 7.30, 7.49, the Morandi bridge in Genoa and numerous others).
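To appreciate what was released when the 55-wire cable was severed, the sketch below estimates the force locked into such a tendon. The wire count and 7 mm diameter are as described above; the effective prestress level is an assumption for illustration.

```python
import math

# Approximate force in a grouted posttensioning cable of 55 parallel 7 mm wires.
# The effective stress after losses is an assumed, typical value, not a measured one.

n_wires = 55
d_wire_mm = 7.0
sigma_eff_MPa = 1000.0   # assumed effective prestress after losses

area_mm2 = n_wires * math.pi / 4.0 * d_wire_mm ** 2
force_kN = area_mm2 * sigma_eff_MPa / 1000.0

print(f"tendon area ~ {area_mm2:.0f} mm^2, locked-in force ~ {force_kN:.0f} kN")
```

A force of this size, roughly two meganewtons, can only be re-anchored by bond over many metres of grouted duct, which is consistent with the ~8 m development length found and explains why the replacement element had to be anchored well beyond the cut.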

7.54 The Strange Story of Nocturnal Noise

In a recently constructed high-end residential building complex the inhabitants were roused from their sleep at night by explosion-like noises. These were traced to the connections of the concrete elements (walls, slabs) to each other, which had been designed to permit relative movements due to shrinkage and thermal expansion and contraction. Some load bearing walls were left exposed to ambient temperatures but for load transfer had to be connected to insulated parts with dowels made from a combination of medium high strength steel and stainless steel, which were designed to bend and to slide longitudinally in and out of their embedment.

It was known, and had been elucidated in a paper, that the combination of transverse and longitudinal movement of the dowels causes them to generate static friction ("stiction") in their sleeves, which creates a build-up of elastic energy that is released from time to time when the dowel suddenly slips, associated with considerable noise (up to 65 decibels were measured). It was determined that this constituted a devaluation of the building which was to be reflected in the revenue received from the inhabitants, in addition to the cost of corrective intervention.
The interesting feature of the case, besides the technical/physical aspect, is the sorting out of responsibility: the engineers and architects responsible for the design had little or no experience with the use of this mode of connection. The general ("total") contractor took the position that it was not up to him to question the basic concept of the structural design, as he was not and could not be asked to be knowledgeable about and familiar with the method, although he undertook the procurement of the dowel system from the fabricator. The fabricator/supplier of the dowel system had kept silent about the problem of the noises, with the notion that this was the subject of publications and therefore common knowledge. The case is being sorted out on the basis of a correction method which is hoped to be successful.
There is another, more philosophical aspect to the story: the introduction of a mode of connection permitting relative movement among different load resisting elements of the principal structure is an attempt to provide a system of relief from perceived geometric incompatibility, due to restrained shrinkage and thermal deformation. This incompatibility constitutes a well known and "classic" problem of construction, causing cracking and other unwelcome effects. By allowing the walls and floor plates to move more readily with respect to one another, the dowels deforming in bending and shear while transmitting horizontal and vertical loads, one solves the problem, at least partially; but most evidently, in the process of doing this, one creates a new one, much as in the context of medication causing side effects. Events will show whether the sum of the effects is of a positive nature.
It is another philosophical question whether an error was committed in this case and what its nature is. In this author's opinion the error lies in the lack of communication, which prevented the knowledge that existed about the problem from reaching the decision makers. If a new method is to be put to work, it is a good idea to do this in the form of a consensus, which demands communication. It was lacking in this case.

7.55 Precast Concrete Structures

Building structures made up from precast concrete elements have their own particular problems, which do not exist for monolithic construction. They mostly relate to the joints where loads have to pass from one element to the next on the load path, through contact or through joint details made from different materials. As a rule, these joints represent zones of weakness in the assembly where failure will occur at lower levels of loading than in the elements themselves, except that the ends of the elements may be involved in the failure, due to local concentrations of stress or inappropriate load transmission causing secondary effects. A recent case may illustrate this; it includes several features that are common to precast assemblies and must be attended to in the conceptual design, as they will limit the real resistance as well as the robustness of the entire structure.
A six-storey garage was to be constructed from precast concrete columns and beams with cast-in-place slabs. When the sixth storey was put in place, the columns in the bottom storey started to fail, their heads splitting with large vertical cracks. What had happened? The design of the columns had provided plates at the head and toe of the columns, to be welded to the vertical reinforcing. This reinforcing did not reach the steel plate and was therefore not connected; the portion of the load assigned to the reinforcing steel had to pass through the concrete under the "cap plate", causing it to split due to the horizontal tension in the end zones of the column, an effect well known from posttensioning technique. No transverse reinforcing had been provided near the ends of the columns, although it was indicated on the engineering drawings. In addition, no equalizing layer had been provided when the precast beams were installed in direct but unequal contact on the columns, due to deformation and imprecision of the casting. This introduced important eccentricities of the column load.
The simple description of the technical details does not do justice, however, to the fundamental causes of the problem, which are typical for a large quantity of commercial construction. The general contractor, who had also engaged and therefore controlled the professionals, had opted for the lowest bidder for the supply of the precast elements, a firm from a neighboring country; additionally, its price had been "beaten down" in bad negotiations, which did not bode well for the production of the building elements. The company went bankrupt soon after. The engineer, whose fee had also been minimized, was not able to have the error corrected, in spite of repeated demonstrations. Although incompetence can be cited as having caused the problem directly, the case is a sorry example of what may happen if greed takes precedence over reason and quality, an instance where the real question to be asked is "cui bono", i.e. who was profiting from the low initial cost?

7.56 Geothermal Installations and Their Problems

The massive borings necessary for the efficient exploitation of the earth's heat are not without consequences for the surrounding construction. Besides the well reported earthquakes caused by such interventions, other effects may be triggered, depending on the local soil characteristics. A recent case illustrates this rather clearly.
In an area of extremely soft soil a single-family house was built on wooden piles reaching down into the soft soil but not long enough to reach more competent layers. Calculations showed that sufficient skin (shaft) friction was mobilized in this fashion.

However, in the name of energy self-sufficiency, a geothermal system was installed, with one boring approximately 4 m distant from a corner of the new building. The borings involved the removal of an approximate total of 20 m³ of material from a depth corresponding to the lower end of the piles, in the form of a slurry, because the boring reached well below the local ground water table. Shortly after, a corner of the building started to sag, and it was discovered that the pile supporting this corner had sunk into the ground, separating itself vertically from the foundation of the building by about 20 cm. It may be debated whether the pile had sunk due to negative friction, or because it lost its point support as the surrounding soil was hollowed out by the boring nearby, or both. There is little doubt, however, that the massive removal and disturbance of soil at the critical depth had caused the wood pile to move downward by itself, no load from the building being involved since contact had been lost. The mechanism involved in the event is quite obvious and could have been foreseen had the parties involved in the work been talking to each other. The case must therefore be assigned firstly to a lack of communication, if not to incompetence.

7.57 The Myth of Watertight Concrete

The roof of a large underground garage was to receive planting, hiding it from view. The slab was cast with a specialized concrete, said to be watertight, and received no waterproofing membrane. After a short period of time, various leaky places manifested themselves, most of them clearly related to cracks caused by bending, restrained shrinkage or thermal contraction. Although, technically, the concrete might have been watertight, this was offset by the fact that the resistance of concrete to tensile stresses, whatever their origin, is severely limited, meaning that cracks will open readily when limits are exceeded. This fact had obviously escaped the builders, who believed that a structure made from such concrete would indeed be watertight. The distinction between the concrete as a material, as tested in the laboratory, and its behaviour in the real structure was missed. A costly retrofit, including removal of the planting and the application of a bituminous membrane on the top surface, was the only option to avoid further water damage to the parked vehicles.
The construction industry is still waiting for the invention of non-shrinking concrete, although some recent progress has been made towards this goal. However, this will only be part of the solution, with cracks being caused by other effects as well. The expression "watertight" is no more than an empty slogan in the context of reinforced concrete. It would be good to have it, but reality wants otherwise at this time, at least for large expanses where contraction is effectively restrained (see also cases 7.33, 7.52) and cannot be avoided.


7.58 Shrinkage and Its Problems

This case is representative of a great amount of unwarranted extra cost caused by disregard of the shrinkage contraction concrete undergoes during the hardening process. Shrinkage has been a fact of concrete work since its beginning, and contrary to the claims coming up periodically that the means to control and eliminate it have been found, we are still waiting for "non-shrink concrete" to be invented. The story told here is about an error of concept that, in various forms, keeps being repeated, causing negative value, contention and embarrassment. For the extension of a hospital, a three-storey building was created, with concrete slabs and columns at a spacing of 6 m or so. The building being located in a zone of moderate seismicity, shear walls were provided in both perpendicular directions. Two walls in the longitudinal direction were arranged near either end of the structure, enclosing staircases, aligned with each other at a distance of approximately 45 m. No control or expansion joints had been provided, and the reinforcing steel had been designed exclusively for plate bending. The result was that the slab was literally ripped apart, with large cracks forming near both staircases, where the stress caused by the restrained shrinkage contraction had overcome the tensile capacity of the concrete. Nature will always find the weak spot in a concrete structure where it will relieve itself of incompatible stress conditions, by forming cracks in very few locations, in this case two, usually where the resistance of the structure is reduced by openings such as staircases. All the deformation required to accommodate the contraction will then be concentrated at those cracks, which, as in this particular case, may often cause a safety problem, because the shear capacity of the concrete slab was lost through the formation of the cracks. Auxiliary supports had to be installed, encumbering the space below and causing delay and additional cost. The case is exceptional in that it can be explained by a single identifiable cause: the faulty arrangement of the two shear walls, which effectively and almost completely prevented the contraction of the concrete due to their rigidity in the longitudinal direction. The design concept had been developed by a young engineer with little experience and had not been reviewed by somebody more senior, making it into a "school example" of what not to do. Special provisions such as expansion or control joints might have been provided to attenuate the problem, but they come with their own drawbacks in the form of cost and delay. The easier and most straightforward solution is to change the orientation of the shear walls, with the longitudinal walls arranged near the centre of the structure rather than at opposite ends.
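To see why full restraint between stiff end walls makes cracking inevitable, a rough order-of-magnitude check can be set out; the numbers below are generic textbook values assumed for illustration, not data from the case.

```latex
% Illustrative values only (assumed, not taken from the case):
% long-term shrinkage strain, concrete modulus and tensile strength
\varepsilon_{sh} \approx 0.3\times10^{-3}, \qquad E_c \approx 30\,000\ \mathrm{MPa}, \qquad f_{ct} \approx 2\text{--}3\ \mathrm{MPa}
% Tensile stress if the shortening is fully restrained (creep relief neglected)
\sigma \approx E_c\,\varepsilon_{sh} \approx 9\ \mathrm{MPa} \gg f_{ct}
% Free shortening of a 45 m slab, which must instead be taken up at the cracks
\Delta L \approx \varepsilon_{sh}\,L \approx 0.3\times10^{-3}\times 45\ \mathrm{m} \approx 13\ \mathrm{mm}
```

Even allowing for creep relaxation, the restrained stress remains of the same order as, or above, the tensile strength, so the dozen or so millimetres of shortening concentrate at the few cracks that do form.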


7.59 No Checking

The last case of the collection relates to a story that assembles most of the elements found to be of primary importance, as demonstrated by the case matrix. The extension of a hospital was to be built not far from a lake and was to receive two basements at levels below the mean water level of the lake. The project engineer, who had been put in charge all by himself, left the company when the drawings were complete and had been given to the contractor for construction. It was discovered after he had left that the structural design suffered from a number of serious shortcomings which, if built as documented on the drawings, would have caused the structure to be seriously overstressed, including a substantial risk of collapse. What had happened?

The fact that the walls and the slab of the second basement would be exposed to substantial water pressure once the ground water table communicated with the water level of the lake had apparently been ignored. The dimensions of the walls and slabs were obviously insufficient for this condition, and no provisions had been planned to deal with the problem (pumping system and emergency overflow). These were eventually installed at great additional cost when construction had reached the ground floor.

The weight of the building was to be supported by interior columns and the perimeter walls, which transferred it to the rock base, an extremely poorly consolidated silt- and claystone. For the perimeter walls, no footing was indicated on the drawings, and calculations performed afterwards showed that the high local pressures under the foot of the wall would exceed the compressive resistance of the rock by a large margin, which in turn would cause a shear failure between the wall and the slab. The corrective action was the addition of a second wall with a footing outside, which had to be preceded by re-excavation behind the wall that had already been backfilled.

In order to save some of the time that had been lost following the discovery of the problems, the exterior walls were built with a patented system based on prefabrication combined with poured-in-place concrete, to the dimensions originally specified, which were insufficient not only for the load transfer at the foundation but also for resisting the pressure from water and backfill. This resulted in an additional weakness, because with the newly introduced system the wall was no longer continuous for bending at the levels of the slabs. The problem was solved by the addition of the second wall outside, as described above.

The columns of the building were not to be aligned on successive storeys; instead, an offset of approximately one-half of the column dimension was to be provided on each storey, for architectural reasons. The resulting eccentricities had not been considered, and no local reinforcing was provided in terms of beams, additional slab thickness and reinforcing bars, which had to be introduced later as a design modification, with a number of negative consequences for the layout.

The columns in the upper storeys were to be prefabricated with very high strength concrete, according to the fashion of the day, in order to gain some time and space.


The problem which arises where the high stresses in successive columns must be transferred through the floor plate, which is made from cast-in-place concrete of lower strength, had not been considered, and the dimensions and reinforcing, as well as the specification of the concrete for the slabs, had to be modified at considerable additional cost.

The discovery of the five major shortcomings described above led to a construction stop and substantial additional cost for materials, work and delays, which had to be sorted out among the participants and their respective insurers. This is not exceptional and will not be discussed further. What is essential in this case is the multiplicity of errors and why they were perpetuated to a stage where correction had become very expensive. The decisive error in this case is not one or several of the shortcomings that had to be corrected or compensated for, but the fact that all the design decisions (concept, dimensions, layout, etc.) were never validated and were carried into the construction phase unchecked. Nobody had bothered to take a critical look at what was documented on the drawings and to ask whether it made sense. Fortunately, the situation was recognized in time to avoid an accident, but not in time to avoid the additional cost for corrective work and delays. It would not have taken much time for an experienced engineer to catch the mistakes, for most of them were quite obvious, but this part of the quality assurance was never undertaken.
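For a sense of the magnitude of the overlooked water load, a simple hydrostatic check can be made; the 6 m head is a round figure assumed for illustration, not taken from the case.

```latex
% Assumed head of water above the lowest slab: h ~ 6 m (illustrative only)
p = \gamma_w\,h \approx 10\ \mathrm{kN/m^3} \times 6\ \mathrm{m} \approx 60\ \mathrm{kPa}
```

That is roughly 6 t/m² of uplift on the lowest slab, plus a triangular pressure distribution on the basement walls, loads that members sized for earth pressure and gravity alone cannot be expected to resist.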

8 Conclusion

In the preceding reviews of specific scenarios where things turned out wrong, or less than perfect, the author has tried to explore how errors and their consequences are brought about, and to highlight the salient features of every case. In order to keep the text as concise as possible, numerous details had to be left out, even though they would have made the tales more colourful and entertaining. To select the essential elements of a scenario is a very subjective affair, and very far from the contemporary idea of science where everything must be presented in a way that makes it reproducible and/or falsifiable. However, the matter of human error and, specifically, its effects in the construction industry is a very important subject, associated with costs of a staggering magnitude in terms of loss of life and value. It would therefore be mistaken to leave it untreated merely because today we do not have the tools to address it with the rigour modern science purports to employ when analyzing matters of nature.

The problem of ill performance of humans (in German, "Fehlleistung") is an extremely complex one, involving many aspects: the workings of cognition in our brains, social interaction, psychology, education, communication, experience, technical knowledge and access to it, organization, and the conscience, emotions and particular incentives of the individuals involved in the building process. The only approach that promises any progress must therefore be a holistic one, based on the selection of the essential and relevant features of each specific scenario. This is an analytic task to start with.


However, it will be the tool in the search for the appropriate methodology towards the elimination of errors and flaws. It is only common sense that, in order to find the best approach, one identifies the potential sources of trouble and tailors the strategy accordingly.

In modern building codes, the approach is to try to eliminate by regulation every possible error or mistake, addressing every eventuality a building process may include, with limited success, as witnessed by the reality that things keep going wrong. In so doing, however, the volume of building codes and regulations has grown to a size unmanageable in its entirety by the human mind, with the dual result that everything is becoming compartmentalized, accessible only to specialists, and that its application is turned over to computer processing. This essentially removes it from human control, making the regulation counterproductive. All the members of technical committees writing regulations and codes are presumably well-meaning people who wish to act only in the best interest of the industry. They tend to forget that the user of their texts is usually not a specialist in the field as they themselves are. To do the "right thing" he must command sufficient insight, a task which is made impossible if he is drowned in the massive quantity of regulation of today, let alone that of the foreseeable future.

It is a sign of our time, and of the politically inspired obsession with the pursuit of absolute safety, that the judgement of experienced professionals or tradesmen is being replaced by regulation. One does not wish to expose oneself to possible reproach by making decisions, but rather delegates these to codes, standards and data processing. Cases 7.13, 7.24 and 7.37 illustrate this trend and its consequences, where all the decisions making up the design of the structure were, in effect, turned over to a computer model using elastic theory and a modern building code, with the effect that essential issues relating to seismic resistance were missed. Impressive quantities of data were created which tended to hide the real problems, creating the illusion that, with a large number of load combinations, everything had been addressed and all was as it should be. Similarly, in case 7.1, very elaborate and detailed modelling had been used for the analysis, straining the computer capacity of the time. However, the critical mode of failure, progressive tearing, had been left out. Since the analytical tools were not available for this aspect, judgement would have had to be used, replacing the missing instruments. Nobody was there to take on this task.

In the section "genesis and perpetuation of errors", a clear and unequivocal definition of "error" was sought, unsuccessfully. In a number of cases the reason for this becomes apparent: the concept of error depends on the context, and language, with its inherent imprecision, does in fact do justice to this. Errors, i.e. the creation of wrong or imperfect information and decisions, are part and parcel of the optimization process that a construction project is. Their correction, i.e. their replacement by correct or better information, is the essence of that optimisation. Only where the erroneous information is allowed to persist and take damaging effect will language speak of an error. So it would seem that the concept of error must include the outcome. A hidden fault would thus not constitute an error until it takes effect.


One may think of all the instances where a deficit of safety is perceived but nothing has happened, a matter the courts are finding difficult to sort out, with conflicting expert opinions complicating matters.

Another aspect where the concept of error becomes fuzzy is the more or less conscious taking of risks. If one accepts that uncertainty is the characteristic of future events, some decisions must be based on an assessment of that uncertainty and its presumed magnitude (see for example cases 7.10, 7.28, 7.38, 7.52). Whoever has performed a risk analysis using mathematics to assess uncertainty in real scenarios has learned that in all but the very trivial cases this is a difficult task. There is nothing wrong with the mathematics, and research about appropriate "bell curves" and their treatment is abundant. However, everything has to be based on hypotheses representing future reality, and this is where experience, judgement, educated guessing and gut feeling must often take the place of sufficient data for a rational assessment. Let us not forget that in real-life, everyday construction, time is of the essence, and lengthy scientific endeavours that take months or years are not affordable in terms of time and expense. In the context of construction, the assessment of risk is often a matter of a discussion within one meeting or so, a decision being reached quickly by consensus, or by the dominant participant, on the basis of the impression created by that discussion. Does the decision so taken therefore constitute an error, or only if the outcome is unfavorable? Of course, one may say that the error was to have taken the decision quickly, on the basis of incomplete information and perhaps with emotions having played their part in promoting the decision. Often, however, taking more time might not have improved matters, since no more pertinent and precise information could have been mobilized because none was available. Or would the rule in this context have been to err in favor of prudence, even though prudence is usually associated with cost? Case 7.10, and in fact most other cases, may illustrate the dilemma associated with the decisions making up the construction process: if the consequences are ill, we perceive an error; if not, nobody will, once the risk seems to have disappeared.

This brings us back to the concept of error as it is found in common language. It may be fuzzy at the fringes, but most often circumstances permit one to identify what, how and where something went wrong, namely where it was not congruent with "good" or "normal" practice. Good practice can only prosper in an equitable climate of mutual trust and cooperation, and it is up to management to see to this. If it does not, that in itself may constitute the error. This confirms the one general conclusion which applies to each and every case, no matter its particular history: every error is also a management error, for management failed to organize things so that mistakes were effectively prevented from taking effect. It is management's task to see to it that competent firms and individuals are put in charge of the works, that relations are equitable, and that quality assurance, whatever its form, is made effective.


The matrix of the cases (see appendix), with the circumstances associated with the errors, is rather revealing, even though its database is too small to permit precise conclusions in the sense of modern science. It highlights rather persuasively, however, that:
• The most important stage for the genesis of errors is the conceptual design.
• The errors taking effect most dramatically are associated with the absence of robustness, which is really a confirmation of common sense.
• In terms of elimination and prevention of errors, the effectiveness of quality control and verification is demonstrated to be the most important component.
Based on these observations, it is tempting to conclude that, in order to optimize quality in the sense of an absence of flaws and errors, resources for quality control should be extended and directed to the things that matter, i.e. the design concept and robustness. As well, methods of quality control and verification must be tailored to the properties of each case to be most effective, once again very much corresponding to common sense, the problem being, of course, that resources are usually limited, quantitatively and, perhaps more importantly, qualitatively: supermen and superwomen are in short supply. If this text has created the impression that building, and particularly structural engineering, resembles a mine field, it is because it does. To survive in a mine field one needs to detect and disarm all the mines in time. To do this effectively and completely is the ultimate goal.

References

Bachmann (2013) Christchurch: Erkenntnisse zum Beben. Tec 21
Knoll F, Vogel T (2009) Design for Robustness. IABSE
Melchers (1982) Human Error in Simple Design Tasks. Monash University

Chapter 3

Learning from Large-Scale Accidents

Raphael Moura, Edoardo Patelli, and Caroline Morais

1 Introduction

Large-scale accidents have a multidimensional nature, arising from a wide range of contributing factors interacting in a seemingly random and sophisticated fashion to result in large-scale technological disasters. Many of these contributing factors develop from the design conception onwards, comprising technical and non-technical issues and ultimately including the alluring influence of human errors. The relevance of human factors and the impact of human errors in industrial accidents have been extensively emphasised in contemporary studies. Human error was regarded as a major contributor to more than 70% of commercial airplane hull-loss accidents (Graeber 1999). Correspondingly, according to Leveson (2004), operator errors can be considered the cause of 70–80% of accidents, given recurrent deviations between established practice and normative procedures. Considering the cost issue, a review by the United Kingdom Protection and Indemnity (P&I) Club indicated that US$541 million per year is lost by the marine industry due to human errors (Dhillon 2007). From an engineering perspective, the highly complex interaction between operators, technology and organisations is a recurring subject arising from investigations of major events.


The Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile (2011) official report on the AF-447 Rio-Paris Airbus A-330 accident of 1 June 2009 acknowledged an apparently simple equipment defect (icing of the Pitot probes) resulting from a design failure, which led to inconsistencies of the flight speed indicators. This deficiency triggered several human-related events (wrong system diagnosis and inappropriate control inputs, among others), ultimately resulting in the airplane hull loss in the Atlantic Ocean, with 228 victims. The investigation report also highlighted some intricate factors, such as the de-structuring of task-sharing in the cockpit during the response to the anomalous event, training shortcomings in a predictable flight mode (manual handling of the airplane at high altitude), and the lack of indication of the airspeed inconsistencies on the flight console, exposing a complex combination of several factors leading to the catastrophe. These major accident examples, mostly involving up-to-date technologies with numerous systems in normal operation (e.g. Airbus (2016) states that around 1,200 A330 airplanes are operated by over 100 companies, meaning that an aircraft takes off or lands somewhere every 20 s!), illustrate the complexity behind erroneous actions, mental models, technology, organisational issues, culture and the environment in high-technology industries. This highly interdisciplinary and intricate setting, together with the fact that human errors are a palpable and compelling argument to explain undesirable events, makes it a substantial challenge to learn from accidents and to understand human errors.

2 The Construction of a Dataset to Understand Human Errors (Moura et al. 2016)

The dynamics underlying critical events are so involved that some renowned accident causation theorists consider failures in complex, tightly coupled systems to be inevitable (Perrow 1999), or not prospectively foreseeable (Taleb 2007). This is due to the acknowledged difficulty of capturing and understanding all facets of socio-technical systems and all circumstances leading to catastrophes, which poses a challenge to researchers and practitioners. As a result, any method to capture lessons from accidents will have inherent limitations. Nevertheless, accident investigations can be considered one of the most valuable and reliable sources of information for future use, given that many man-hours from commissioned expert teams are devoted to an in-depth analysis of an undesirable event sequence, providing detailed insight into the genesis of industrial accidents. Therefore, some of the drawbacks can be minimised by the construction of a novel industrial accident dataset, bringing together major accident reports from different industrial backgrounds and classifying them under a common framework. The accident collection will provide a rich data source, enabling the application of data analysis methods to generate input for design improvement schemes.


The CREAM framework (Hollnagel 1998) uses a nonspecific taxonomy, adaptable to most industrial segments, and draws a natural separation between man, technology and organisation, which facilitates the classification of accidents; its terminology was therefore selected to form the structure of the new dataset. Figures 1, 2 and 3 show the dataset classification structure. The 53 factors which could have influenced each of the 238 assessed accidents are organised in the three major groups depicted in Figs. 1, 2 and 3.

The "man" group concentrates human-related phenotypes in the action sub-group, representing the possible manifestation of human errors through erroneous actions (Wrong Time, Wrong Type, Wrong Object and Wrong Place), usually made by operators in the front line. These flawed movements cover omitted or wrong actions; early, late, short, long or wrong movements, including movements in an incorrect direction or with inadequate force, speed or magnitude; and skipping one or more actions or inverting the order of actions in a sequence. Possible causes or triggering factors with human roots can be classified as Specific Cognitive Functions, i.e. the general sequence of mental mechanisms (Observe-Interpret-Plan) which leads the human being to respond to a stimulus. Also, temporary disturbances (e.g. fatigue, distraction or stress) and permanent disturbances (biases such as hypothesis fixation or the tendency to search for confirmation of previous assumptions) can be captured under the sub-groups Temporary and Permanent Person-related Functions. These are the person-related genotypes.

The second major group (Fig. 1) represents technological genotypes, associated with procedures, equipment and system failures, as well as shortcomings involving the outputs (signals and information) provided by interfaces. The last group (Fig. 3) encompasses organisational contributing factors, representing the work environment and the social context of the industrial activity. It involves latent conditions (such as a design failure), communication shortcomings, and operation, maintenance, training, quality control and management problems. Factors such as adverse ambient conditions and unfavourable working conditions (e.g. irregular working hours) are also included in this category.

This new accident dataset, named the Multi-attribute Technological Accidents Dataset (MATA-D), aims to provide researchers and practitioners with a simple and innovative interface for classifying accidents from any industrial sector, reflecting apparently dissimilar events in a comparable fashion. The binary classification of the evaluated factors (presence or absence) allows data interpretation using uncomplicated statistical methods or sophisticated mathematical models, depending on the user's requirements. The data classification results are shown in Table 1.

Fig. 1 "Technology" categorisation, adapted from Hollnagel (1998)
Fig. 2 "Man" categorisation, adapted from Hollnagel (1998)


Fig. 3 “Organisation” categorisation, adapted from Hollnagel (1998)

3 Data Analysis Methods

3.1 SOM (Moura et al. 2017a, b)

The application of data mining methods aims to disclose common structures among accidents and significant features within the major-accident dataset. Firstly, a well-known clustering approach named Self-Organising Maps (or Kohonen Maps), SOM, developed by Kohonen (1998), is applied to the MATA-D. The objective is to convert the 53-dimensional dataset (a matrix of 238 accidents holding 53 possible contributing factors each) into a low-dimensional (i.e. 2-D) array, enabling data visualisation and interpretation.


Table 1 Data classification results (factors and sub-groups)

Man
  Execution (sub-group frequency 54.60%)
    Wrong time                  35   14.70%
    Wrong type                  28   11.80%
    Wrong object                06    2.50%
    Wrong place                 75   31.50%
  Cognitive functions (sub-group frequency 47.50%; detailed in Table 2)
    Observation missed          37   15.50%
    False observation           08    3.40%
    Wrong identification        06    2.50%
    Faulty diagnosis            31   13.00%
    Wrong reasoning             27   11.30%
    Decision error              22    9.20%
    Delayed interpretation      11    4.60%
    Incorrect prediction        09    3.80%
    Inadequate plan             23    9.70%
    Priority error              17    7.10%
  Temporary person-related functions (sub-group frequency 13.00%)
    Memory failure              02    0.90%
    Fear                        05    2.10%
    Distraction                 14    5.90%
    Fatigue                     07    2.90%
    Performance variability     03    1.40%
    Inattention                 05    2.10%
    Physiological stress        02    0.80%
    Psychological stress        07    2.90%
  Permanent person-related functions (sub-group frequency 7.60%)
    Functional impairment       01    0.40%
    Cognitive style             00    0.00%
    Cognitive bias              17    7.10%

Technology
  Equipment (sub-group frequency 56.30%)
    Equipment failure          131   55.00%
    Software fault              06    2.50%
  Procedures (sub-group frequency 44.10%)
    Inadequate procedure       105   44.10%
  Temporary interface (sub-group frequency 18.90%)
    Access limitations          03    1.30%
    Ambiguous information       06    2.50%
    Incomplete information      42   17.60%
  Permanent interface (sub-group frequency 3.40%)
    Access problems             04    1.70%
    Mislabelling                04    1.70%

Organisation
  Communication (sub-group frequency 29.00%)
    Communication failure       25   10.50%
    Missing information         49   20.60%
  Organisation (sub-group frequency 94.10%)
    Maintenance failure         83   34.90%
    Inadequate quality control 144   60.50%
    Management problem          22    9.20%
    Design failure             157   66.00%
    Inadequate task allocation 143   60.10%
    Social pressure             17    7.10%
  Training (sub-group frequency 54.20%)
    Insufficient skills         86   36.10%
    Insufficient knowledge      84   35.30%
  Ambient conditions (sub-group frequency 8.80%)
    Temperature                 03    1.30%
    Sound                       00    0.00%
    Humidity                    00    0.00%
    Illumination                02    0.80%
    Other                       00    0.00%
    Adverse ambient condition   17    7.10%
  Working conditions (sub-group frequency 11.30%)
    Excessive demand            13    5.50%
    Poor work place layout      06    2.50%
    Inadequate team support     08    3.40%
    Irregular working hours     09    3.80%

Note: counts and percentages give the number and share of the 238 events in which each factor or sub-group appeared; the cognitive functions are detailed in Table 2.
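As a minimal illustration of how figures such as those in Table 1 can be derived from a binary accident-by-factor matrix, the sketch below assumes a hypothetical CSV file `mata_d.csv` with 238 rows and one 0/1 column per factor; the file name and the sub-group membership used here are assumptions for the example, not part of the published MATA-D.

```python
import pandas as pd

# Hypothetical file: 238 accident rows, binary (0/1) factor columns.
df = pd.read_csv("mata_d.csv")

n_events = len(df)                    # 238 for the dataset described above
counts = df.sum()                     # number of events in which each factor appears
percent = 100 * counts / n_events     # e.g. "Design failure" would give about 66%

summary = pd.DataFrame({"#": counts, "%": percent.round(2)})
print(summary.sort_values("#", ascending=False).head(10))

# A sub-group counts once per event if ANY of its factors is present;
# the grouping below (Execution) is illustrative only.
execution = ["Wrong time", "Wrong type", "Wrong object", "Wrong place"]
execution_freq = 100 * df[execution].any(axis=1).mean()
print(f"Execution sub-group frequency: {execution_freq:.1f}%")
```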

The representation of the dataset (a 53-dimension input space, Eq. 1) in a 2-D topographic map shows the MATA-D events organised by similarity, i.e. accidents with analogous contributing factors will be close to each other (e.g. Fig. 4). This enables the generation of clusters which can be analysed in an integrated way, revealing tendencies within a group of major accidents.

$$
A =
\begin{bmatrix}
A_{1,1} & \cdots & A_{1,53}\\
\vdots & \ddots & \vdots\\
A_{238,1} & \cdots & A_{238,53}
\end{bmatrix}
\qquad
\text{Input space: a dataset containing 238 samples} \times 53\ \text{features}
\tag{1}
$$
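To make the mapping from the 238 × 53 input space to a 2-D grid concrete, here is a minimal sketch using the open-source MiniSom library; the grid size, training parameters and the placeholder `data` array are assumptions for illustration, and the authors' own implementation may differ.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# data: 238 x 53 binary matrix (one row per accident), as in Eq. (1).
# A random placeholder is used here; the real MATA-D matrix would be loaded instead.
data = np.random.randint(0, 2, size=(238, 53)).astype(float)

som = MiniSom(x=10, y=10, input_len=53, sigma=1.5, learning_rate=0.5,
              random_seed=42)
som.random_weights_init(data)
som.train_random(data, num_iteration=5000)

# Each accident is mapped to its best-matching unit on the 2-D grid;
# accidents with similar contributing factors end up close to each other.
positions = np.array([som.winner(row) for row in data])
print(positions[:5])
```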


Fig. 4 Output space: a reorganised 2-D grid

Cluster 1 (Magenta) is the largest single group, covering 80 accidents, each enclosing between 4 and 24 contributing factors. It is largely dominated by Inadequate Task Allocation (95.0%), i.e. cases where the organisation of work is lacking due to deficient scheduling, task planning or poor rules and principles. Additionally, accidents within this cluster were deeply influenced by Design Failure (85.0%) and Inadequate Quality Control (81.3%). From an organisational perspective, factors such as Insufficient Knowledge (60.0%), Maintenance Failure (56.6%) and Missing Information (37.5%) were also significant. The most important technological contributor was the Inadequate Procedure factor with 78.7% incidence, followed by Incomplete Information (36.2%). From a human factors perspective, erroneous actions labelled as Wrong Place (when actions in a planned sequence are omitted/skipped, repeated, reversed or when an unnecessary action is taken) appeared in 52.5% of the cluster, mainly accompanied by interpretation issues, especially Faulty Diagnosis, with 26.3% incidence. Observation Missed and Wrong Reasoning were also noteworthy human-related factors, both appearing in 20% of the cluster.

Cluster 2 (Red) grouped 57 accidents, each involving from 1 to 10 contributing factors, with low mean, median and mode figures (Table 2). Organisational factors such as Inadequate Task Allocation (68.4%) and Design Failure (50.9%), as well as the technological factor Inadequate Procedure (42.1%), were the most frequent in this grouping. Some hostile ambient and working conditions were highlighted, with Adverse Ambient Conditions (14.0%) and Inadequate Workplace Layout (7.0%) standing above the overall data distribution. Decision Error (17.5%) was a noticeable human contributor, highlighting cases where workers were unable to make a decision or made the wrong choice among possible alternatives.

Table 2 Data classification results (cognitive functions)
  Cognitive function    #    %
  Observation          47   19.70
  Interpretation       79   33.20
  Planning             38   16.00
(counts give the number of the 238 events in which each cognitive function appeared)

The leading factor for Cluster 3 (Yellow), which contains 39 major accidents, is a technological aspect labelled Equipment Failure, populating 94.9% of the grouping area. This cluster also presented very strong organisational factors, as Design Failure (87.2%), Insufficient Skills (76.9%), Management Problem (23.1%) and Communication Failure (20.5%) attained their maximum values in this cluster; the Inadequate Quality Control factor was also very relevant, with 79.5% incidence. The incidence of human factors is quite substantial, with actions occurring at the wrong time (41.0%) or being of the wrong type (30.8%). These cases include omitted, premature or delayed actions, as well as the use of disproportionate force, magnitude or speed, or movement in the wrong direction. These human erroneous actions were accompanied by all three levels of cognitive functions, i.e. observation, represented by Observation Missed (23.1%); interpretation, with Wrong Reasoning (25.6%) and Decision Error (17.9%); and planning, with both Inadequate Plan (25.6%) and Priority Error (15.4%) attaining their maximum incidence. It is worth noting that the number of contributing factors for each event in this cluster fluctuated from 5 to 22.

Cluster 4 (Green) contains 62 events, each encompassing 1–6 contributing factors, with a mean of approximately 3, a median of 3 and a mode of 2 features. For most of the accidents in this cluster (i.e. 87.1%), an Equipment Failure was the most frequent contributor, followed by Inadequate Quality Control (56.5%), Design Failure (41.9%) and Maintenance Failure (41.9%).

Figures 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 and 18 show the self-organising maps for individual features. These figures detail how individual characteristics were distributed in the map after the application of the SOM algorithm. Colder colours (tending to blue) mean the absence of a feature, while warmer colours (tending to red) mean the presence of a contributing factor. Multiple intersections of warm colours in different individual SOM maps can be interpreted as an interface/relationship.
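The incidence figures quoted for each cluster (e.g. Inadequate Task Allocation in 95.0% of Cluster 1) are simply the share of that cluster's accidents in which a factor is present. A small sketch, assuming a hypothetical file with the binary factor columns plus a `cluster` label column (both assumptions, e.g. obtained by grouping the SOM best-matching units):

```python
import pandas as pd

# Hypothetical file: binary factor matrix with an added "cluster" column (1-4).
df = pd.read_csv("mata_d_with_clusters.csv")

# Mean of a 0/1 column within each cluster = fraction of events with the factor.
incidence = df.groupby("cluster").mean() * 100

print(incidence.loc[1, ["Inadequate task allocation",
                        "Design failure",
                        "Inadequate quality control"]].round(1))
```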

Figs. 5-18 (self-organising maps for individual features): Fig. 5 Inadequate task allocation; Fig. 6 Inadequate quality control; Fig. 7 Design failure; Fig. 8 Insufficient knowledge; Fig. 9 Wrong place; Fig. 10 Inadequate procedure; Fig. 11 Equipment failure; Fig. 12 Maintenance failure; Fig. 13 Insufficient skills; Fig. 14 Inadequate plan; Fig. 15 Wrong time; Fig. 16 Observation missed; Fig. 17 Wrong type; Fig. 18 Wrong reasoning

3.2 Bayesian Approach (Morais et al. 2018)

A Bayesian network (BN) allows the modelling of complex relationships among variables of very different natures. BNs are known as a systematic way of learning from experience and incorporating new evidence (deterministic or probabilistic), and Mkrtchyan et al. (2015) also proposed that, by using BNs, human reliability analysis benefits from:
• their graphical formalism of conditional probability equations: a visual representation of the conditional probability mathematics is a better way of discussing the relations and facilitates communication between engineers, psychologists and sociologists;
• a probabilistic representation of uncertainty, making the approach compatible with Probabilistic Safety Assessment; and
• the combination of different sources of information: empirical sources such as databases of events, theoretical models of human cognition, and expert judgement.

The mathematical background of Bayesian networks was previously described by Tolo et al. (2014): they are statistical models used to represent probability distributions and can provide a combined probability distribution associated with conditional dependencies, for example between an organizational factor and a cognitive function. BNs are represented by acyclic graphs, in which the nodes are connected by arcs. Child nodes have to have a causal relationship with each parent node. For example, consider, in Fig. 19, the child node "cognitive function". The probability of its occurrence is conditioned on the occurrence of its parent nodes: organization, technology and person-related functions. To establish proper causality, one must know the answer to the question: what is the probability of a cognitive function occurring when organization, technology and person-related functions all occur together? And if none of them occurs? And if only the organizational factor occurs, and no technological or person-related one does? All eight possible combinations of the three parent nodes have to be answered to establish proper causality. The conditional probability tables were obtained from the MATA-D dataset. After constructing the model below and inserting prior probabilities for parent and child nodes, the marginal probability distributions can be calculated (Fig. 20; Table 3).

Fig. 19 Example of a Bayesian network
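As an illustration of the conditional structure just described (three parent nodes feeding one child node, with all eight parent combinations specified), the sketch below uses the open-source pgmpy library; all numerical values are made up for the example and are not the conditional probability tables derived from the MATA-D.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Organisation", "CognitiveFunction"),
                         ("Technology", "CognitiveFunction"),
                         ("PersonRelated", "CognitiveFunction")])

# Prior probabilities for the parent nodes (illustrative values only).
cpd_org = TabularCPD("Organisation", 2, [[0.94], [0.06]])   # state 0 = present
cpd_tec = TabularCPD("Technology", 2, [[0.70], [0.30]])
cpd_per = TabularCPD("PersonRelated", 2, [[0.20], [0.80]])

# Child node: one column per combination of the three parents (2^3 = 8).
cpd_cog = TabularCPD(
    "CognitiveFunction", 2,
    values=[[0.60, 0.45, 0.40, 0.30, 0.35, 0.20, 0.15, 0.02],   # present
            [0.40, 0.55, 0.60, 0.70, 0.65, 0.80, 0.85, 0.98]],  # absent
    evidence=["Organisation", "Technology", "PersonRelated"],
    evidence_card=[2, 2, 2])

model.add_cpds(cpd_org, cpd_tec, cpd_per, cpd_cog)
assert model.check_model()

# Marginal probability of a cognitive function failure (cf. Table 3).
print(VariableElimination(model).query(["CognitiveFunction"]))
```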

Fig. 20 BN for MATA-D

Table 3 Marginal probabilities for human performance
  Cognitive functions
    Observation:    Observation missed 8.12e-05
    Interpretation: Faulty diagnosis 1.05e-03; Wrong reasoning 1.05e-03; Decision error 6.35e-04
    Planning:       Inadequate plan 1.22e-03; Priority error 6.50e-04
  Actions
    Wrong time 4.21e-04; Wrong type 1.73e-04; Wrong place 4.64e-04

4 Lessons Learned from Past Accidents (Moura et al. 2017a, b)

The application of the SOM algorithm resulted in the reorganisation of accidents, originally arranged in a 238 × 53 matrix, into topographic maps (Figs. 5-18). Having the accidents grouped by similarity (considering the contributing factors), it is now possible to identify common patterns and highlight key relationships within the dataset. Moreover, the results are presented through a graphical interface, allowing analysts to see the most frequent features at a glance or to gain further insight into contributing factors of particular interest (such as Design Failures, Human Erroneous Actions, Quality Control and Task Allocation).

The combination of three organisational factors (Inadequate Task Allocation, Inadequate Quality Control and Design Failure; Figs. 5, 6 and 7) occupied most of the Cluster 1 area, meaning that these aspects are leading contributors to the grouping. Human erroneous actions contribute to 70% of the cluster, Wrong Place (Fig. 9) being the most relevant one, covering more than half of the grouping. The analysis of the individual maps clearly shows that Inadequate Procedures (Fig. 10) are highly associated with this specific type of human erroneous action, meaning that incorrect, incomplete, ambiguous or interpretation-prone instructions provoked specific problems in implementing a sequence of operational movements. A deep relationship between Inadequate Procedures (Fig. 10) and Insufficient Knowledge (Fig. 8) can be perceived in Cluster 1, denoting that written instructions presumed a level of specific knowledge, needed to recognise the situation and complete the operation, which was not present in many events.

An example of this type of accident was described in a US Chemical Safety and Hazard Investigation Board (2004) safety bulletin, in which operators were assigned to a cleaning process in a petrochemicals plant. They executed a nitrogen gas purge exactly as required by the written procedures, in order to remove a hazardous mixture from the pipe. Afterwards, they started a steam purge to finish the service. However, the procedural steps were not sufficiently detailed to ensure the removal of the mixture from the pipe, especially in the low points, and failed to describe the consequences of having flushing liquid in the system. The operators were unaware of the possibility of residues remaining in the line, as well as of the chemical reactions that could occur. The steam purge heated the peroxide/alcohol mix above its thermal decomposition temperature, resulting in an explosion and fire. The SOM map exploration shows that this combination of contributing factors is not an isolated episode, but a recognisable pattern (or tendency) in Cluster 1, which should prompt the attention of risk analysts.


The correlation of further factors, such as the design failure which allowed an unnecessary low-point section in the pipe route, and the failure of the quality control to identify the low-point trap as well as the deficient procedure, is also persistent in this grouping.

As in Cluster 1, Cluster 2's most common features were Design Failure accompanied by Inadequate Task Allocation and Inadequate Procedures. However, the results for these factors are close to the overall dataset figures (65.97%, 60.07% and 44.09%, respectively) and thus cannot be considered major influences generating this clustering. A noticeable feature in this cluster was Adverse Ambient Conditions, which attained 14% in this grouping. The exploration of these events demonstrates that not only major natural events such as hurricanes, typhoons or earthquakes should be considered from a risk and safety management perspective, but also more common adverse situations like torrential rain, electrical storms and even the presence of airborne particles. A straightforward example of the latter is the case where haze from forest fires carried atmospheric particles to the intake of an air separation unit of a gas processing facility, causing an explosion and a large fire.

Equipment Failure (as shown in Fig. 11) dominates almost the whole area of Cluster 3, but, in sharp contrast with Cluster 1, the association with Maintenance Failure (Fig. 12) is no longer relevant. In fact, the analysis of the maps indicates that the equipment failure events in this grouping tend to be associated with Design Failure and/or Insufficient Skills (Fig. 13). Therefore, it can be learned that enhancing maintenance cannot be considered the only solution to minimise the possibility of equipment failures. The lack of skills (training/experience) to operate a system or equipment may well be combined with equipment faults, as well as with design shortcomings. 71.8% of the Cluster 3 area was covered by human erroneous actions, mostly Wrong Time and Wrong Type (Figs. 15 and 17). A deeper analysis of the specific cognitive functions influencing human actions can also be made. The Observation Missed, Wrong Reasoning and Inadequate Plan maps represented many cases where events or signals that were supposed to trigger an action were missed; where the operator misinterpreted a given signal or cue (a deduction or induction error); or where the mental plan or solution to an issue was incomplete or wrong. The airplane accident report mentioned in the introduction (Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile 2011) perfectly illustrates circumstances where an equipment failure due to a design shortcoming can trigger cognitive disturbances, leading to errors and ultimately to major accidents. The lack of skills of the co-pilot to handle some problems (i.e. the approach to stall) at high altitudes made him build an erroneous mental plan to react to the undesirable situation. He adopted a tactic applicable to low altitudes, destabilising the flight path with inappropriate control inputs. Therefore, these tendencies reveal that specific training, aimed at dealing with critical conditions and major hazards, is very complex and must be carefully selected.


Instead of reassuring written procedures or transmitted instructions, an effective training strategy must embrace the systemic development of a problem-solving mindset. Although most simulation and training strategies are focused on conditioning the human to expected or predictable scenarios, critical situations will demand advanced decision-making skills, and training should therefore focus on processes and techniques aimed at the identification and development of adequate operating alternatives.

Equipment Failure is also the main contributor to Cluster 4, but the accidents within this grouping presented different characteristics from those of the former cluster. The analysis of the maps for this cluster (i.e. Figs. 6, 7 and 11) unmistakably shows that equipment problems were accompanied by quality control issues and design shortcomings.

5 Using Past Accident Lessons to Mitigate Future Events (Moura et al. 2017a, b)

An example of how the accident patterns can be used to help mitigate future events is given in Table 4 (checklist for risk studies validation). Accident tendencies disclosed by the application of the SOM approach, and the consequent analysis of the maps, are being used to construct a checklist comprising common hazards, major risks and shortcomings involving the interactions between humans, technology and organisations. The checklist aims to give some insight into how these lessons learned from past accidents in different industrial environments can be used as part of a verification scheme, to ensure that recurring accident causation patterns will not escape scrutiny in new safety studies. One possible way of keeping such a checklist auditable is sketched below.
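The following sketch is an assumption of this edit, not part of the original work: it encodes checklist items as simple records so that unanswered or "No" items in a safety study become machine-countable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    number: int
    question: str
    answer: Optional[bool] = None  # True = Yes, False = No, None = not assessed
    remarks: str = ""

# Two items paraphrased from Table 4; a full list would hold all 50+ entries.
checklist = [
    ChecklistItem(1, "Premises and justifications for the design concept stated?"),
    ChecklistItem(18, "Worst-case scenarios consider domino/cascading effects?"),
]

checklist[0].answer, checklist[0].remarks = True, "See design basis memo."
open_items = [c.number for c in checklist if c.answer is not True]
print(f"Items still open or answered 'No': {open_items}")
```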

136

R. Moura et al.

Table 4 Checklist for risk studies validation No. Item 01

Were the premises, hypothesis and justifications for the chosen design concept clearly stated? Was a safer known alternative/approach to achieve the same objective discussed?

02

Are the underlying basis and limitations of the method, the origin of the input data and further assumptions (e.g. duration of an event, flammable vapour clouds expected drifts, maximum spill size, release composition) that support probabilities, scenarios and results clearly stated? Are they consistent?

03

Are events’ frequencies used in probabilistic risk analysis reliable? Are they used exclusively when historical data is comparable (e.g. same operation type, facility or equipment)? Would alternative approaches (e.g. non-frequentist) be more suitable to estimate the events’ likelihood in the study case (e.g. no sufficient past experience or previous operation data)?

04

Although some regulations prescribe periodic reviews to risk studies, there is a tendency that assessments may fall into disuse due to people, process or environmental changes in between revision deadlines. Modifications usually lead to a management of change and some sort of risk analysis, but more complex, previous deeper safety studies are not revisited at this point. Are design verifications, as-built drawings, production checks, field data collection or other approaches required to confirm/maintain trust on the major/approved risk study throughout the facilities’ lifecycle, instead of using a rigid deadline for review? Have the facility’s critical factors/performance indicators that could indicate an up-to-date and trustworthy risk assessment been identified/listed?

05

Were possible critical changes affecting the original studies (e.g. in the operational philosophy, control logic and process modernisations) acknowledged? Are the conditions with the potential to invalidate the current safety study clearly stated?

06

The safety studies must contemplate a list of recommendations and safeguards, which can be rejected on a technical basis. Is the value of the implementation of risk reduction measures clearly stated? Are the justifications for favoured alternatives or rejections consistent with the best available knowledge? Do the underlying principles for rejections contemplate safety benefits over cost matters?

07

Is the data extracted from databases and standards (as well as calculations made) logical, traceable and consistent with the operational reality?

08

Were previous assessments in analogous installations used to give some insight into the hazard identification process?

09

Were the recommendations and risk control measures previously applied to analogous facilities? Is there any feedback about their suitability from previous designers and operators?

Yes No

(continued)

3 Learning from Large-Scale Accidents

137

Table 4 (continued) No. Item 10

Safety studies have shown a tendency to fail to adequately address risks related to the system interaction with the environment as well as possible interferences among individual components and systems. Was a comprehensive and integrated approach to design and risk management achieved? Were components and systems designed and implemented in a holistic way rather than on an individual and secluded fashion? Are human factors analysis integrated with engineering studies?

11

Some high-technology facilities are likely to start their operations before the whole system and all safeguards are in place. Offshore platforms may have to adapt their process while a pipeline is not operating or a pump/compressor is not commissioned. Refineries may be designed (or obliged) to operate without some processing modules, due to technical or economic reasons. Does the risk assessment contemplate all modes of operation (e.g. commissioning, start-up, partial operation, maintenance breaks) for the facility examined? Are transitory states (e.g. warm-up and cooling down times) also considered?

12

Have the studies taken into consideration thermal properties, hydraulics and electrical/electronic parts of components, equipment and systems, not being overly focused on mechanical/structural aspects?

13

Equipment and structural failures tended to arise from problems during the material selection stage and due to poor understanding and monitoring of well-known damage mechanisms. Has the material selected for construction, equipment fixation, pipelines and support structures identified and analysed by safety studies? Was a compatibility assessment (with loads, system and environment) conducted, including thermal, chemical and electrical properties?

14

Are the specificities of the assessed facility or process clearly identified, in a way that specific risks will be identified and addressed? Where expert advice is required to assess risk, are the correspondent technical reports included in the safety studies (e.g. to assess the possibility of catastrophic failures due to stress corrosion cracking in stainless steels, or corrosion mechanisms emerging from the saturation of wet hydrocarbons with dissolved carbon dioxide and sour environments)?

15

Are risks associated with the interaction of different materials addressed (e.g. with different temperature gradients leading to deformations and ruptures or with distinct electric potential resulting in galvanic corrosion)?

16

Are major hazards, complex areas and critical operations clearly identified? Are the level of detail, the methodology to assess these problematic cases and the safeguards proposed by studies compatible with the magnitude of the risks identified?

17

Are the steps taken to construct the risk scenarios developed in a logical way? Does the study sequence lead to a clear and rational understanding of the process and its possible outcomes?

18

Does the criterion for setting accident scenarios, especially the worst-case one(s), consider common-cause, domino or cascading effects and simultaneous/ multiple scenarios?

Yes No

(continued)

138

R. Moura et al.

Table 4 (continued) No. Item 19

Are the risks associated with third-party operations (material delivering, fuelling, electrical power, water supply) addressed by the safety studies? Are these risks considered in a holistic approach, occurring simultaneously and integrated with the facility’s risks?

20

Are risks associated with auxiliary systems (e.g. cooling and heating) contemplated?

21

Is technology evolution naturally considered by safety studies? Is the increasing usage of operational and non-operational portable devices (e.g. mobile phones, tablets, cameras, smartwatches and fitness wristbands) considered, for instance, as potential ignition sources in explosive/flammable atmospheres? Does human reliability analysis and task allocation processes consider the new technologies potential to impact the performance of workers (e.g. attention shifters)?

22

Have the studies evaluated the process plant safety when experiencing the effects of partial or total failures in critical elements (e.g. emergency shutdown valves fail in the safe position)?

23

Are process changes that modify the risk level clearly identified when, for instance, safety critical equipment or systems are removed, deactivated or bypassed/inhibited for maintenance?

24

Is the availability of safeguards and further risk control/mitigation measures addressed?

25

Were critical equipment and components with limited life spam properly identified? Were replacement operations affecting safeguards and/or increasing risk addressed?

26

Is quality control an active element of the risk assessment? Is it compatible with operational requirements for systems and equipment?

27

Are suitable quality indicators proposed to verify critical system elements status? Is there an auditable failure log, to confirm that the expected performance of components and systems is maintained through time?

28

Are chemical reactions and adverse events associated with housekeeping procedures (e.g. cleaning and painting substances, dust management), inertisation processes, equipment and pipeline deposits removal and necessary tests (e.g. hydrostatic tests) contemplated by the studies?

29

Were the design and process reviewed aiming at their optimisation to avoid pocket/stagnant zones for dusts, gases, fumes and fluids (e.g. reducing elevated spaces and corners prone to dust/particles built-up or minimising lower pipeline sections subjected to particles/heavier fluids decantation)?

30

Is the necessary information supporting non-routine tasks aiming at the risk reduction (e.g. pre-operational or restart inspections) sufficiently detailed, allowing the identification of process weak-points such as deposits accumulation, valve misalignments, damaged seals and rupture disks and equipment condition after, for instance, a process halt, or after maintenance works nearby and before resuming operations?

Yes No

(continued)

3 Learning from Large-Scale Accidents

139

Table 4 (continued) No. Item 31

Are permanent cues and signals (e.g. pipeline and equipment marking to indicate content, maximum pressure and direction of flow) proposed as risk reduction measures for standard and non-standard operations? If so, is the permanent marking wear through time a factor considered?

32

“The operator” is an entity sometimes subjected to extreme variations. When human intervention is considered by safety studies, are the expected skills (e.g. practical experience, acceptable performance variability level) and knowledge (e.g. the situational awareness level and the academic level—technician, engineer, expert) clearly indicated?

33

Underperforming when conducting non-standard operations (e.g. start-up, commissioning or partial plant operations) was also a noteworthy pattern. Were situations and conditions where an enhanced level of training (skills or knowledge) or even the support of specialised companies (e.g. to control an offshore blowout) are required to keep risks controlled or to reduce the consequences of undesirable events identified?

34

Is the essential risk information and knowledge arising from safety studies, which should reach the involved personnel, identified? Are there any special provisions to ensure that critical information will be conveyed by proper means (e.g. awareness campaigns, training, written procedures, simulation exercises) and will be accessible where needed?

35

Is operational reality such as process conditions (e.g. background noise, fumes, heat, wind from exhaustion systems or alarms) considered as a possible disturbance when some sort of communication is required to convey important information?

36

Are administrative/management aspects affecting the seamless continuity of operations (e.g. loss of information due to shifts, personnel replacement or reduction) addressed during the identification of safety critical tasks hazards? Is the prospect that obvious unusual situations (e.g. seemingly small leakages, unfamiliar odours and a flange missing some screws) may not be reported to supervisors promptly, affecting the effectiveness of risk reducing measures such as process plant walkthroughs, considered?

37

Do supervisory control and data acquisition systems produce a real-time operation overview, not being excessively focused on individual parameters?

38

Were the accessibility and visibility of instruments and equipment identified as critical in the risk studies and been ensured by an examination of the design drawings? Were 3-D models and/or mock-ups used to facilitate the visualisation of complex areas and reduce the possibility of interferences/visualisation issues? Are the external critical indicators/gauges fitness to the operational environment verified (e.g. visual impairment or working issues due to snow, rain or sun radiation)?

39

Was the possibility of obstruction of water intakes, air inlets, sensors and filters (e.g. by water impurities, air particles or formation of ice) assessed? Are mitigation measures in place?


40

Have operators, as active members of the safety assessment team, examined whether the information supplied by indicators, panels and displays is sufficient? Do they have a level of training (skills and knowledge) similar to that required for the operation of the system?

41

Is there an assessment of the usefulness of the information provided by supervisory control and data acquisition systems? Are the functions and outputs clear, in particular to operators? Do they know when and how to use the information provided, or are some of the signals perceived as excessive or useless?

42

Was the need to diagnose the system status and conduct special operations from alternative places (e.g. stop the operation from outside the control room) considered?

43

Are the failure modes of supervisory control and data acquisition systems assessed as critical hazards? Is the possibility that spurious or ambiguous error messages, or insufficient or delayed information, may trigger human or automatic actions that can jeopardise the stability or integrity of the system carefully analysed? Were adequate mitigating measures put in place? (A brief illustrative sketch of one such measure, voting between redundant channels, follows the checklist.)

44

Does the risk assessment consider damage to power and control cables, pipelines and hydraulic systems, their routing, and the consequences of such damage for the supervisory control and data acquisition systems?

45

Are safety critical alarms clearly distinguishable from other operational alarms?

46

Are process facilities and hazardous materials located within a safe distance from populations, accommodation modules, administrative offices and parking spaces? Is the storage volume of hazardous substances optimised to reduce risks? Is the transportation route for hazardous materials optimised in a way that the exposure of people to risks is reduced to the minimum practical?

47

Are control rooms and survival/escape structures protected from damage and located within a safe distance from the process plants? Does the risk study consider a scenario in which the control room is lost? Is there any redundancy in place for emergency controls (e.g. fire control systems, shutdown systems)?

48

Are visual aids used as risk-reducing measures to increase the awareness level of operators? Are the arrangement and dimensions of reactors, vessels and equipment visually distinct from each other (e.g. by position, size or colour) to minimise swap-overs or inadvertent manoeuvres?

49

Is the possibility of inadvertent connection of similar electrical, mechanical and hydraulic connectors an assessed risk? Are measures in place (e.g. using different connector dimensions or distinct thread types) to minimise hazardous interchangeability among connectors, elbows and other parts belonging to different systems or functions?

50

Is the inadvertent operation of temporarily or permanently disabled components, equipment or systems considered as a risk-increasing factor? Are measures in place to enhance the visualisation of non-operational parts such as isolated valves? Are overpressure safeguards (e.g. safety valves and rupture disks) accessible and visible from the operational area of the equipment or system they are designed to protect?


51

Are ignition sources (e.g. exhausts, electrical equipment) optimised so that they are located within a safe distance from significant inventories of flammable materials (including piping), or in positions where ignition is minimised in case of leakage? Was the position of flares and vents reviewed by safety studies? Are exhaust gases routed to, and flares and vents located in, areas where the risk of ignition is minimised?

52

Are different scenarios (e.g. in distinct plant locations, with variable volumes) for pipeline and vessel leakages considered by safety studies? Are there risk-reduction strategies to limit the released inventory in case of leakage (e.g. the installation of automatic emergency shutdown valves between sections)? (A brief illustrative calculation of the inventory held between shutdown valves follows the checklist.)

53

Are safeguards prescribed by safety studies to minimise the possibility of explosive atmospheres forming in enclosed compartments (e.g. deluge or inertisation (CO2 or N2) systems; exhausts/vents)? Has the possibility of backflow in heating, refrigeration or ventilation systems been examined? Have the logic of automatic systems (e.g. automatic shutoff of air intakes after the detection of gases) and the reliability/availability of surrounding-dependent systems (e.g. positively pressurised rooms and escape routes) been assessed?

54

Are fire systems, emergency equipment, escape routes and rescue services designed to withstand the extreme conditions expected during an accident (e.g. blast, fumes and intense heat)? Are the probable effects of an accident (e.g. impacts from explosion fragments or the duration/intensity of a fire) considered when evaluating the effectiveness/survivability of these systems?

55

Are alternative emergency power sources provided? Do the safety studies assess their functionality under distinct accident scenarios (e.g. main power cuts, flood, lightning storms and local fires)? Does the transition time from main to alternative power sources pose risks that have not been considered?

56

Are a main safe escape route and further alternatives designed, including load-bearing structures such as blast walls and firewalls calculated to resist until the facility has been fully evacuated?

57

Does the escape route contain clearance warnings by means of visual and audible cues? Are local alarm switches located in adequate positions to alert the remaining workers about the best available escape route? Are emergency lighting and alarms connected to the emergency power system (or have their own battery power source)?

58

Have safety studies assessed the possibility of collisions (e.g. with cars, boats and airplanes) and external elements (e.g. projectiles from firearms) affecting equipment and the structure of the facility? Are measures in place (e.g. mechanical protection, administrative prohibitions, policing) to minimise these risks?

59

Are distances among pipelines, equipment and modules optimised to account for the volatility, temperature, pressure and other risk-increasing factors of their contents? Is the separation between adjacent elements sufficient to avoid electromagnetic interference, energy transfer or domino/cascading effects in case of failure? Were additional measures (e.g. physical separation and blast and fire protection walls) evaluated?


60

When physical separation is not possible, does the safety study evaluate whether the endurance time of surrounding equipment is sufficient to withstand the consequences of possible failure modes (e.g. a release followed by a jet fire from a failed adjacent element, for the duration of the inventory depletion time)?

61

Does the safety study treat multiple safety barriers that are prone to common-cause failures as a single barrier? Are alarms and sensors subject to the same failure modes (e.g. same power supply or same cable routing) considered as non-redundant systems? Were redundant safety barriers subjected to an independence evaluation by safety studies? (A brief illustrative calculation of the effect of common-cause failures on a redundant pair follows the checklist.)

62

Are the risk scenarios demanding automated responses (e.g. a fire alarm demanding the activation of deluge systems, or gas detection demanding the neutralisation of ignition sources) identified and assessed? Does the supervisory control and data acquisition system have the capability to interpret multiple alarms and command automated actions, or to present consistent diagnostics to operators through the interface? Is the harmonisation of automated functions and personnel actions assessed?

63

Are the position and type of sensors representative of the category of information they are intended to convey? Are failures in sensors and indicators auto-diagnosed and clearly indicated by the interface?

64

Is there a consistent assessment of safety alarms? Is the alarm precedence logic based on their safety significance? Are alarms prioritised according to how quickly personnel should respond in order to avoid undesirable consequences?

65

Is the number of simultaneous alarms considered as a risk-increasing factor capable of disturbing cognitive functions? Are less important signals and alarms reduced/suppressed (to minimise mental overburden) when the supervisory control and data acquisition system diagnoses a critical situation demanding full attention from the personnel involved? (A brief illustrative sketch of alarm shedding during an alarm flood follows the checklist.)

66

Are reduction measures for the initiation and escalation of fires and explosions proposed (e.g. reduction of ignition sources, material selection based on flammability level, ability to spread flames, generate smoke or propagate heat and the toxicity level)? Is the likelihood of ignition assessed in susceptible sections of the installation, by consistent means?


Total
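
To make item 43 more concrete: one widely used safeguard against a single spurious signal commanding an automatic action is voting between redundant measurement channels. The short Python sketch below is a minimal illustration of 2-out-of-3 (2oo3) voting; the channel states and the two-vote threshold are generic assumptions, not data from any specific system discussed here.

def two_out_of_three(trip_votes):
    # Command a trip only if at least two of the three independent channels demand it,
    # so a single spurious reading neither trips the plant nor masks a genuine demand.
    if len(trip_votes) != 3:
        raise ValueError("2oo3 voting expects exactly three channel states")
    return sum(bool(v) for v in trip_votes) >= 2

print(two_out_of_three([False, True, False]))  # one spurious vote  -> False (no trip)
print(two_out_of_three([True, True, False]))   # two genuine votes  -> True (trip)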
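
Item 52 asks whether the inventory released in case of leakage is limited, for example by automatic shutdown valves between sections. A rough, purely illustrative way to see the effect of valve spacing is to compute the liquid mass held between two valves; the bore, spacing and density below are assumed values.

import math

def segment_inventory_kg(diameter_m, length_m, density_kg_m3):
    # Mass of fluid contained in a completely filled pipe segment.
    cross_section_m2 = math.pi * (diameter_m / 2.0) ** 2
    return cross_section_m2 * length_m * density_kg_m3

# Assumed example: a 0.3 m bore liquid line carrying a fluid of 850 kg/m3.
for spacing_m in (2000.0, 1000.0, 500.0):
    tonnes = segment_inventory_kg(0.3, spacing_m, 850.0) / 1000.0
    print(f"valve spacing {spacing_m:6.0f} m -> worst-case release about {tonnes:5.1f} t")

Halving the spacing between shutdown valves halves the worst-case releasable inventory, which is the intuition behind the question.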
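
Item 61 warns against crediting two barriers as redundant when they share a failure cause. The sketch below uses the common beta-factor simplification, in which a fraction beta of each barrier's failure probability is attributed to a shared cause; the numerical values are assumptions chosen only to show the order-of-magnitude effect.

def pair_failure_probability(p_single, beta):
    # Probability that both barriers of a nominally redundant pair are failed.
    independent = ((1.0 - beta) * p_single) ** 2  # both fail for independent reasons
    common_cause = beta * p_single                # one shared cause disables both
    return independent + common_cause

p = 1.0e-2  # assumed probability of failure on demand for a single barrier
print(f"fully independent pair     : {pair_failure_probability(p, 0.0):.1e}")  # about 1e-4
print(f"pair with 10% common cause : {pair_failure_probability(p, 0.1):.1e}")  # about 1e-3

Even a modest common-cause share dominates the result, which is why item 61 asks for an explicit independence evaluation.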
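
Item 65 concerns alarm flooding. One possible, deliberately simplified shedding strategy is sketched below: when the number of simultaneous alarms exceeds an assumed threshold, only safety-critical alarms are presented to the operator (lower-priority alarms would still be logged). The tag names, priorities and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alarm:
    tag: str
    priority: int  # 1 = safety critical, 2 = important, 3 = informational

FLOOD_THRESHOLD = 10  # assumed: more than 10 simultaneous alarms counts as a flood

def alarms_to_display(active):
    # During a flood, show only the safety-critical alarms; otherwise show everything.
    if len(active) <= FLOOD_THRESHOLD:
        return list(active)
    return [a for a in active if a.priority == 1]

# Assumed upset: 20 informational alarms plus 3 safety-critical ones.
upset = [Alarm(f"PT-{i:03d}", 3) for i in range(20)]
upset += [Alarm("GD-001", 1), Alarm("FD-002", 1), Alarm("ESD-003", 1)]
print([a.tag for a in alarms_to_display(upset)])  # only the three critical tags remain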

References

Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile, 2011. Final Report on the accident on 1st June 2009 to the Airbus A330-203 [Online]. Available from: http://www.bea.aero/docspa/2009/f-cp090601.en/pdf/f-cp090601.en.pdf (Accessed: 06 November 2014).
Dhillon, B.S., 2007. Human reliability and error in transportation systems. 1st ed. London: Springer-Verlag.
Graeber, C., 1999. The role of human factors in aviation safety. Aero Magazine QTR_04 1999, pp. 23–31. Seattle: Boeing Commercial Airplanes Group.
Hollnagel, E., 1998. Cognitive Reliability and Error Analysis Method. Oxford: Elsevier Science Ltd.
Kohonen, T., 1998. The self-organizing map. Neurocomputing 21: 1–6.
Leveson, N., 2004. A new accident model for engineering safer systems. Safety Science 42: 237–270.
Mkrtchyan, L. et al., 2015. Bayesian belief networks for human reliability analysis: A review of applications and gaps. Reliability Engineering & System Safety 139: 1–16.
Morais, C. et al., 2018. Human reliability analysis: accounting for human actions and external factors through the project life cycle. European Safety and Reliability Conference (ESREL 2018), Trondheim, Norway.
Moura, R. et al., 2016. Learning from major accidents to improve system design. Safety Science 84: 37–45. https://doi.org/10.1016/j.ssci.2015.11.022.
Moura, R. et al., 2017a. Learning from accidents: interactions between human factors, technology and organisations as a central element to validate risk studies. Safety Science 99: 196–214. https://doi.org/10.1016/j.ssci.2017.05.001.
Moura, R. et al., 2017b. Learning from major accidents: Graphical representation and analysis of multi-attribute events to enhance risk communication. Safety Science 99: 58–70. https://doi.org/10.1016/j.ssci.2017.03.005.
Perrow, C., 1999. Normal Accidents: Living with High-Risk Technologies. New Jersey: Princeton University Press.
Taleb, N., 2007. The Black Swan: The Impact of the Highly Improbable. 2nd ed. London: Allen Lane.
Tolo, S. et al., 2014. Bayesian network approach for risk assessment of a spent nuclear fuel pond. In: Vulnerability, Uncertainty, and Risk: Quantification, Mitigation, and Management, pp. 598–607.

Chapter 4

Insights from Psychology

Marlène Déom

1 The Underlying Causes: Psychological Factors Behind Human Errors

Despite all the work and research carried out on human error, human behaviour and the decisions leading to erroneous actions remain methodologically difficult to grasp. For Hollnagel (1993), "the erroneous action can be regarded in terms of their manifestations or phenotype (how they appear) and with respect to their cause or genotype (mechanisms assumed to have caused the action). Causes of human error can be divided into person related and system related causes. The person related causes refer to internal conditions that cannot be directly observed and the system related causes refer to communication, equipment/interface procedures and workplace" (Hollnagel 1993). The many models based on the systems approach certainly make it possible to isolate and control the external factors linked to the conditions of the organization, but what about what E. Hollnagel describes as the internal mechanisms of the operator, responsible for decision-making and action? He specifies that these internal mechanisms, which he associates with cognitions and cognitive processes such as attention, concentration, memory and executive functions, cannot be controlled, and that for this reason the conditions of the environment in which the operator works should be modified and optimized for performance and safety purposes.

In the field of construction and engineering, many professionals have looked at the question of human error and come to the same conclusions as researchers like E. Hollnagel. To this end, Australian research shows that engineers specializing in building structures attach great importance to reducing human error as much as possible in their profession, which essentially consists of evaluating, calculating and quantifying risk. Complex constructions require the engineer to assess and estimate what could go wrong and, therefore, to make decisions to avoid any likelihood of failures occurring during construction or throughout the structure's life span. According to Nicolet (1997), human error in the context of engineering and construction is explained by three elements: equipment and processes, information, and people. As put forward by Grigoriu (1984), "human error in analysis, design, construction and use of a building is the cause of a significant number of catastrophic structural failures. Human error cannot be eliminated from the design process. However, the rate of occurrence of this error can be reduced significantly by measures of quality assurance. The measures can include checking of calculations and assumptions in analysis, inspection of construction site, and other procedures." Certainly, external factors as defined by Hollnagel can be relatively well controlled, but the fact remains that the human component of a system, organization or structure is unpredictable and difficult to control. For H. David, everything comes back to humans regardless of the positive or negative results of a system, and for F. Knoll "the human unreliability, negligence and ignorance are the direct causes of gross errors and therefore of most accidents."

2 Internal Human Factors

The error of a system would be directly caused by the many components of the human factor leading the individual to adopt one or more behaviours that go against the intended objective. The cognitive components of a person play a determining role in the decision-making and behaviour of that individual. Among the cognitions identified as internal factors of the engineer, many authors count the concept of "situation awareness" as an important component allowing the professional to make decisions in a given context. This concept is developed according to three levels of operation. The first level is the perception of environmental data. The second level concerns the understanding of the elements of the environment and consists of the integration and analysis of perceived data to understand how the information may or may not influence the intentions of the individual. Finally, the third level, which is the most elaborate in terms of cognition, consists of the extrapolation of the information perceived and understood to consider the future impact of the interacting elements of the environment on the operations of the individual or of the system. In addition to these three levels of "situation awareness," there are individual factors and operational factors that determine the quality of decision-making and the performance of the engineer. Many studies on the concept of "situation awareness" conclude that deficiencies at any of the three levels necessarily result in an incomplete or erroneous analysis of the situation, leading the engineer to adopt a behaviour that is not entirely adapted to the situation to which he or she must react, and thus to an inadequate response in relation to the situational elements.

Just like the concept of "situation awareness," which is based on cognitive processes such as attention, concentration, knowledge, memory and executive functions, professional judgement, experience, values and ethics are also among the human components that greatly influence human behaviour. As Bandura proposed decades ago, professional experience and judgement are important variables to consider in understanding human error. To this end, the Practical Guide of the Ordre des Ingénieurs du Québec (OIQ 2018) defines professional judgement as an activity of the mind that allows one to judge, assess and evaluate the conditions of a professional situation in order to develop an opinion based on knowledge, experience and competence. Professional judgement therefore makes it possible to take decisions and to develop a procedure for offering services that meet the principles of safety, ethics, deontology and competence. According to this same practical guide, professional judgement is based on the following four pillars:

1. Specialized theoretical knowledge in a specific field of engineering, acquired through theoretical learning and continuing education.
2. Professional obligations according to professional standards and professional ethics (commanding behaviours based on the professional values of the engineer, such as honesty, integrity and customer-oriented decisions).
3. Work experience.
4. Risk assessment.

These four proposed pillars refer not only to cognitive processes and cognitive intelligence, but also to complex human components such as an individual's personality, psychological processes and emotional intelligence. As mentioned in the introduction to this essay, theoretical knowledge in a given field and learning achieved through academic training and continuing education remain insufficient for adequate and professional engineering practice. Several organizations, including the Ordre des Ingénieurs du Québec, provide tools, practical guides and quality systems to educate, raise awareness, orient and guide engineers in the context of their professional practice, thus enabling them to adopt rules, standards and know-how with the objective of ensuring the quality of the professional services rendered to the public. However, despite the structure, framework and ethical rules that engineers should respect, and despite the theoretical knowledge acquired academically and in continuing education and the skills developed through practical experience, wilful or involuntary errors remain a reality that can unfortunately lead to fatal disasters. According to studies by Li et al. (2018), the psychological component of the human factor would be the most influential element in connection with human error. Several authors explain these errors, on the one hand, by the influence of environmental factors and, on the other hand, by the impact of physical and psychological factors on cognitive skills. This multi-causal view of the genesis of human error suggests, for example, that even if the engineer has all the knowledge required to make a decision in a given professional situation, the cognitive functions underlying this decision may be affected by a multitude of variables, thus making the decision of the engineer subjective and ill-adapted to the situation to which he or she is exposed. This concept has led many researchers to the observation that the ability of a person to reason and to make logical, rational decisions adapted to a given situation depends directly on the cognitive skills and intelligence of the person, which in turn are greatly influenced by the physical state, mental state and emotional intelligence of that same person.


3 Compromise of Ethical and Professional Judgement

No one is immune to the intrusion of emotions and psychological components, which inevitably cause a loss of objectivity in the perception of the environment, both in professional and personal situations. This loss of objectivity interferes with decision-making and results in the adoption of behaviours driven by emotions rather than by objective reasoning based on the four pillars of professional judgement proposed by the OIQ. In these circumstances, the components of an individual's personality can greatly contribute either to managing and regulating the emotions and maintaining a balance that allows adaptation and objective decision-making based on the referents of professional judgement, or to falling under the influence of the negative emotional charge, thus leading to subjective decision-making and the adoption of behaviours inappropriate to the situation (professional misconduct) and/or the work environment. Although cognitive intelligence is essential for the acquisition of theoretical knowledge learned through academic training, emotional intelligence, for its part, helps develop professional skills and professional judgement, including critical thinking, tolerance for uncertainty and the ability to recognize one's own limitations. According to Lafortune and Allal (2008), "professional judgement is a process that leads to decision-making, which takes into account various considerations arising from its expertise (experience and professional training). This process requires consistency and transparency. In this sense, it presupposes the collection of information using different means, the justification of the choice of means in connection with the aims or intentions and the sharing of the results of the approach in a perspective of regulation" (quotation translated from French). According to these authors, an individual's personality and the psychological and affective factors must be considered as determining actors in decision-making and in professional and critical judgement.

4 Emotions Versus Cognitive Processes

For Daniel Goleman, there is no doubt that emotions block or amplify the ability to think and plan, to learn in order to achieve a goal, to solve problems or to adapt to a given situation. "They define the limits of our ability to think, to use our innate mental capacities and, to the extent that we are motivated by the enthusiasm and the pleasure that it gives us in what we do, the emotions lead us to the road to success. In this sense, emotional intelligence is a master skill that deeply influences all other skills by stimulating or inhibiting them" (quotation translated from French). According to Goleman, emotional intelligence is as important as, if not more important than, cognitive intelligence in achieving academic, professional and interpersonal goals. To this end, Martin Seligman's theory of positive thinking clearly illustrates the influence of positive emotions on the success of an individual and, conversely, the influence of negative emotions on the failure of an individual in what they try to accomplish.


According to him, positive thinking associated with positive emotions and negative thinking associated with negative emotions are two perspectives that have a profound implication on the way an individual behaves in everyday life. To this end, many contemporary psychological theories and approaches have developed under this premise, making positive thinking the cornerstone of their philosophy of therapeutic treatment. Unfortunately, the power of a person’s positive thinking is only effective if their negative emotions are zero or weak or well managed. The key to success would therefore lie in the ability of an individual to manage their emotions and self-regulate to curb negative thinking and thus adopt behaviours adapted to the situational reality and make objective decisions based on logical and rational reasoning. Before even seeing the glass half full, the individual must first understand the internal factors that lead them to see it half empty, which actually corresponds to the concept of self-knowledge. Once these factors have been identified, the person must be able to manage and regulate them (concept of self-control) in order to gain positive thinking.

5 Neurological Origin of Emotions

According to Goleman (1995), "any conception of human nature that ignores the power of emotions would be singularly lacking in insight" (quotation translated from French). Emotions are a call to action. Etymologically, the word emotion is composed of the Latin verb motere, meaning to move, and of the prefix e, which indicates an outward movement. The meaning of the word emotion therefore suggests a tendency to act. To better understand why emotions guide our actions and clearly have a hold on our intelligence and our reason, it is essential to understand how the human brain developed through its evolution. Anatomically, the most primitive part of the brain is the brainstem surrounding the upper end of the spinal cord. The primitive brain governs vital functions such as respiration and the metabolism of the other organs of the body, as well as stereotypical reactions and movements. It is a set of programmed regulators that allow the body to function normally and survive. From the brainstem emerge the nerve centres, the headquarters of emotions. Then the upper cerebral section developed, more precisely the neocortex, which consists of a bulb of tissue forming convolutions. The hippocampus and the amygdala are the two essential parts of the olfactory brain from which the cortex first developed, and then the neocortex. Studies carried out on human development show that the neocortex, considered the "thinking brain" (the headquarters of thought and reflection), is a recent evolution of the brain, which developed after the cortex and on top of its layers. Goleman explains "that a hundred million years ago, the mammalian brain experienced a strong growth spurt. On the two thin layers of the cortex (the areas that pick up sensation, plan and coordinate movement) are several layers of brain cells that formed the neocortex. Unlike the two old layers of the cortex, the neocortex offered an extraordinary intellectual advantage" (quotation translated from French).


It is interesting to note that the neocortex is absent in some mammals and is found at certain stages in some reptiles and several mammals. However, it is highly developed in primates and presents great complexity in humans. The neocortex is composed of the frontal, occipital, parietal and temporal lobes and has several layers, the deepest of which are in synaptic connection with the thalamus, the brainstem (headquarters of emotions) and the spinal cord, thus allowing the transmission of sensory information. Each of the lobes performs separate tasks. For example, the occipital lobe contains the primary visual cortex and the temporal lobe contains the primary auditory cortex. As for the prefrontal lobe (the anterior part of the frontal lobe), neurological research suggests that it is the seat of cognitive functions including working memory, language and executive functions (planning, deductive reasoning, complex problem solving, etc.), and that the orbitofrontal lobe (the lowest part of the prefrontal lobe) is involved in affective and motivational processes, including reward-based decision-making and action control, mood control and social behaviour. Consequently, in the evolution of the human brain, the emotional brain, also called the limbic brain, existed long before the rational brain. It is clear to Goleman that emotions play a key role in the neural architecture, as many brain centres, including the neocortex, have developed from the limbic zone and expanded the range of their abilities. "Because it arises from emotional areas, the neocortex is linked to them by a myriad of circuits. This gives the centres of emotion immense power over the functioning of the rest of the brain, including the centres of thought" (quotation translated from French). Neurologically, the limbic system includes the hippocampus, which is responsible for memorizing hard facts, and the amygdala, which is located on the upper part of the brainstem and is partly responsible for assessing the emotional load or content of events. Goleman explains that the amygdala is not just about affect: this structure controls all emotions, both pleasant and unpleasant. The American neuroscientist J. LeDoux was the first to demonstrate the fundamental role of the amygdala in the affective activity of the brain. Following his discovery of a neural bundle directly connecting the thalamus to the amygdala, and thus allowing the amygdala to be supplied with information directly by the sense organs, this researcher suggests that the amygdala triggers reactions even before the neocortex has access to the information. As a result, the amygdala would react instantly upon receipt of sensory information, while the slower but better informed neocortex would produce a much more elaborate and adapted response. According to Joseph LeDoux, "the system that governs emotions can act independently of the cortex. Certain reactions and emotional memories can form without the slightest intervention of consciousness, of cognition. The amygdala stores a whole repertoire of memories and reactions that we tap into without being fully aware of it, because the path between the thalamus and the amygdala bypasses the neocortex" (quotation translated from French).


Since the primitive neural structures that make up the limbic system take precedence over the thinking brain (neocortex) in receiving sensory information, it is possible to suggest the following conclusions:

• Emotions can have a major impact on the course of an event and can greatly harm cognitive processes and reasoning based on logic, rationality and personal knowledge (professional errors, errors, omissions, etc.).
• Emotions can lead the individual to act independently of the control of the cognitive faculties, learning, knowledge and professional experience of this person (power of emotion over reason).
• The actions, behaviours and decisions generated by the limbic system (emotional brain) are unpredictable in nature and do not necessarily take into account an elaborate and complex analysis of the situation or the context to which the professional is exposed (a primitive response, poorly adapted to the given context).

6 Past Experiences and Emotional Loads

Goleman identifies the amygdala as a mnemonic container of affective memories that scrutinizes lived experiences by association, comparing an individual's past events with the current events that this person is facing. When a stimulus from a past event is related to a stimulus of the current event, the amygdala associates and assimilates these two situations as having the same emotional value and leads the individual to take action even before the neocortex has a say. The amygdala therefore leads the individual to act in the present situation on the basis of thoughts, emotions and reactions from a past experience, even though the circumstances of these two events may be only vaguely similar. The greater the emotional load of the past experience, the stronger the reaction to a similar or somewhat similar situation (trauma). This principle of assimilation by association must also be considered in relation to the development of the human neuronal and cerebral system. Research in neurology and neuropsychology establishes that the amygdala, being a primitive structure in humans, is fully developed from the very young age of the child, unlike the neocortex and more particularly the prefrontal lobe, where all cognitive processes and rational thinking are located. As the amygdala matures rapidly, the emotional burden of events is stored up independently of an elaborate analysis of the situation, leaving the child with an almost crude emotional load. In his book on emotional intelligence, Goleman explains: "according to LeDoux, the role played by the amygdala during childhood reinforces a basic principle of psychoanalysis: the nature of the relationship between the young child and their relatives deeply marks the individual. If the lessons during the first years of life have such an impact, it is because they are imprinted in the amygdala in the form of blueprints at a time when emotional life is still in its raw state, when the child is still unable to translate their experiences into words. When these memories are subsequently awakened, the individual does not have any set of thoughts enabling them to understand the reaction that overwhelms them. One of the reasons our emotional outbursts are so confusing is that they often originate at the very beginning of our life, when everything amazed us and we did not have the words to describe the events" (quotation translated from French). The amygdala is therefore a component that acts quickly on the basis of conclusions founded on emotion and on a few snippets of rudimentary sensory information, and whose decisions do not involve any reflection. Therefore, when the emotional load triggered in a given situation is great, the amygdala may elicit a reaction even before the prefrontal lobe has received confirmation of the objective facts. It is therefore not surprising to observe errors in judgement when a professional is juggling a large emotional load while making decisions or during important professional activities.

7 Modulation of Emotions

Although the limbic system and the amygdala present a primitive functioning based mainly on the emotions, particularly when the emotional load is large, the prefrontal lobe of the neocortex can interact with the emotional brain, making it possible to modulate the emotions and to analyze the situation in depth and more objectively, leading to a more appropriate and reasoned decision. As mentioned previously, the neocortex, and more precisely the prefrontal lobe, would be the headquarters of the cognitive processes involved in the management of our emotions, and therefore of our behaviours, and in the planning and organization of our actions. In the 1950s, a surgical intervention that consisted of removing parts of the prefrontal lobe with the aim of reducing emotional suffering admittedly made it possible to annihilate unpleasant emotions by disconnecting or destroying the fibres connecting the frontal lobe to the rest of the brain, but it also resulted in the complete absence of emotional life in the subjects. This procedure, known as lobotomy or leukotomy, which was performed all over the world between the 1950s and 1970s, was finally banned, not only because of its irreversibility, but also because of its drastic effects on the quality of the emotional and affective life of the individuals undergoing it, who were, following the intervention, completely deprived of the functions related to the control, coordination and execution of their behaviours and emotions, and of the functions responsible for their personality, their capacity to socialize and adapt, and their spontaneity. Goleman explains that "if the amygdala often acts as an alarm signal, the left prefrontal lobe participates in the device used by the brain to attenuate disturbing emotions: the amygdala proposes, the prefrontal lobe disposes. These connections between the prefrontal and limbic areas play a decisive role in mental life, which goes far beyond the fine tuning of emotions; they are essential to guide us when we make the big decisions of our existence" (quotation translated from French).


8 Balance and Harmony

Much like D. Goleman, Antonio Damasio, a research neurologist who focuses on the neurological origin of emotions and on the impact of these emotions on cognitive processes and decision-making, confirms, from his research on subjects with a damaged prefrontal-amygdala circuit, the importance of the link between what he calls rational intelligence and emotional intelligence in making balanced and rational professional and personal decisions. In fact, this researcher finds that emotional intelligence is just as important as rational intelligence and that, without emotional intelligence, rational intelligence cannot operate on a rational and logical basis. Contrary to what one might think, emotions are essential for rational decisions. Our emotional knowledge is a warning that allows us, from the start, to circumscribe the scope of the decision by eliminating certain options and valuing others. In conclusion, the objective is not to cancel the emotions in favour of cognition but to harmonize the two and find a balance between them. In one of his many lectures, Damasio expresses the following premise: "It's the emotion that allows you to mark things as good, bad or indifferent. When you are making a decision any day of your life, and the options you take are of course going to produce a good or a bad outcome or something in between, you don't only remember what the factual result is but also what the emotional result is, and the tandem of facts and associated emotions is critical. Of course, most of what we construct as wisdom over time is actually a result of cultivating that knowledge about how the emotions behave and what we learn from them (previous experience)."

9 Emotional Intelligence

Although the concept of emotional intelligence is relatively new in the literature, many researchers and authors have, for several decades, developed the principle of the influence of emotions on an individual's ability to adapt to a given situation. For Mayer and Salovey (1990), emotional intelligence corresponds to an individual's ability to modulate and manage their emotions. They suggest that this operation is in fact the result of the interaction between emotions and cognitions, allowing an individual to adopt adapted and considered behaviours in the personal dimension as well as in the social and professional dimensions. A pioneer in the development of the concept of emotional intelligence, Gardner (1983), for his part, proposed five personal aptitudes or skills at the basis of emotional intelligence:

• Knowledge of emotions: directly related to the principles of self-awareness and self-knowledge, this skill mainly corresponds to the ability to recognize and identify your emotions.
• Control of emotions: the ability to modulate, regulate and manage your emotions in each given situation, making it possible to take thoughtful decisions and adopt balanced and appropriate behaviours, with respect for yourself and others.
• Self-motivation: the ability to channel your emotions and to postpone the satisfaction of your desires in order to be able to accomplish the tasks, responsibilities and commitments of daily life.
• Perception of other people's emotions: stemming from the concepts of self-awareness and awareness of others, this skill consists of the ability to empathize with others. According to Gardner and many other authors, this faculty is the basis of interpersonal intelligence.
• Control of human relations: the ability to maintain harmonious and balanced relations with others while respecting yourself and others.

Gardner therefore makes a distinction between intrapersonal intelligence and interpersonal intelligence. The first faculty, the key to which is self-knowledge and introspection, allows you to control your own emotions, analyze your thoughts, make a choice among your feelings and manage your behaviour according to this choice, while the second, based on empathy, corresponds to the ability to perceive the mood, temperament, motivations and desires of the other person and to react to them in an appropriate way. Interpersonal intelligence includes the ability to organize and manage a group, the ability to negotiate solutions, the ability to build personal relationships and the ability for social analysis. Drawing on the basic tenets of Gardner's theory, Goleman describes emotional intelligence as a meta-faculty for recognizing your emotions and those of others (empathy) and being able to control and manage them in order to maintain pleasant and respectful relationships with others. He considers that emotions directly influence the ability to think, reason and plan by inhibiting, stimulating or blocking cognitive functions, thus resulting in the achievement of our goals or, conversely, in failure. According to this author, "emotional intelligence gives you a head start in the workplace. Everything suggests that people who know about their feelings, who are able to understand and control their own, to decipher those of others and to deal with them effectively, have an advantage in all areas of life, in love and in work. Because they will have acquired the habits of thought that stimulate their own productivity, people with well-developed emotional skills will have a better chance of leading their lives satisfactorily and efficiently." He argues that the set of five skills related to emotional intelligence (self-knowledge, self-regulation, motivation, empathy and social awareness) constitutes a person's character. This affective attunement, which Brian Lynch presents as a balance and harmony between the emotions of an individual and the environment in which they evolve, allows for a plasticity of mind that supports competence in human-factor and adaptive-capacity issues.


10 Emotional Competence and Professional Conscience

Goleman's five components of emotional intelligence can be broken down into 25 emotional skills without which professional performance is impossible. Emotional intelligence and self-awareness are, therefore, basic premises for the development of social awareness and professional awareness. According to him, the acquisition of emotional competence, based on emotional intelligence as described above and on the intrinsic values of the individual, is necessary and essential for the development of professional conscience. Goleman groups these 25 skills into five broad categories: self-awareness, self-control, motivation, empathy and social skills.

1. Self-awareness:
• Emotional awareness: recognizing your emotions and their impact on your behaviours.
• Self-assessment: the ability to identify and recognize your strengths and limitations (weaknesses).
• Self-confidence: the feeling of being in full control of your body, behaviour and the outside world on the one hand, and the conviction that you are more likely to succeed than to fail in what you do on the other. This skill requires a developed sense of self-esteem, self-worth and ability. People with this skill demonstrate self-confidence in human relationships, are able to take risks for what they think is right and are able to make sound decisions despite uncertainties and pressures. Affirming that normal rules and procedures can be bypassed, and having the courage to do so, is a sign of self-confidence.

2. Self-control:
• Self-control: the individual's ability to manage, modulate and control their emotions, impulses and actions.
• Reliability: the values of honesty and integrity and their application in all circumstances.
• Professional awareness: the ability of the individual to perform their work responsibly, according to best practices and in accordance with the codes and regulations of their profession. Professional conscience also includes a sense of ethics and professional judgement.
• Adaptability: the individual's ability to adapt to changes and unforeseen events.
• Innovation: intellectual flexibility, including the ability to consider new ideas, approaches and information.

3. Motivation:
• Quest for perfection: the desire to make the necessary efforts to improve and achieve a high standard of performance.
• Commitment: the ability to align with the goals of the group or organization in which the individual is involved.
• Initiative: the ability to seize opportunities that arise, all with respect for yourself and others.
• Optimism: pursuing the objectives set despite the obstacles and unforeseen events encountered.

4. Empathy:
• Understanding of others: the ability to recognize and identify the emotions and feelings of others, to respect what others are experiencing and to have a real interest in their concerns or problems.
• Passion for service: the ability to anticipate, recognize and meet the needs of others and thus strengthen their capacities.
• Enrichment of others: the ability to sense the needs and deficiencies of others and to stimulate their capacities in order to give them confidence.
• Exploitation of diversity: the ability to reconcile differences to better seize opportunities.
• Political awareness: the ability to carry out an objective reading of the emotional trends and power relations of others.

5. Social skills:
• Influence: the ability to persuade while respecting the members of a group.
• Communication: the ability to convey clear and constructive messages to the group, based on the desire and the ability to exchange ideas and concepts and to share feelings with others through words. This skill is related to a feeling of trust in others and a pleasure in being connected with others.
• Sense of mediation: the ability to negotiate and solve problems in the context of conflicts and disagreements within a group.
• Leadership: the ability to organize and guide the members of a group in a constructive manner, in the interest of and with respect for the objectives of that group. The leader's primary ability is to know how to initiate and coordinate the efforts of a network of individuals.
• Change management: the ability to manage and initiate changes, all while respecting the group's objectives.
• Bond building: the ability to develop and maintain respectful and genuine bonds with others.
• Sense of collaboration and cooperation: the ability to find a fair balance between your needs and those of others in order to achieve common goals.
• Ability to mobilize a team: the ability to create group synergy in the pursuit and achievement of collective objectives.


As suggested in the introduction, and in light of the skills described above, it becomes evident that despite having mastered the theoretical knowledge of a field such as engineering, the professional is not necessarily able to apply this knowledge in everyday practice. In addition to theoretical knowledge in engineering, the engineer must have developed many human skills essential to good professional practice such as judgement, know-how, interpersonal skills, the ability to assess risk, cognitive intelligence, emotional intelligence, deontology, ethics, adaptability, reasonable doubt, etc. These numerous components of the human factor, responsible for most errors and therefore most accidents, correspond in fact to the concept of professional conscience, which can be summarized as the commitment, the assumption of responsibilities, the motivation, the involvement and the constructive initiatives of engineers in carrying out their work, all with respect for the team with which they collaborate, their working environment (company culture) and their professional code. The general criteria often associated with professional conscience correspond to a job well done according to the ethics and professional values specific to the code and regulations of the professional environment in which the person practices and evolves. To this end, the Practice Guide for Engineers (2018, Quebec) states that professional competence goes well beyond the training required to be admitted to the practice of engineering. "It concerns the extent of the member's qualifications to carry out the mandate in all aspects. This includes the knowledge, experience, know-how and the ability to make effective use of it for the benefit of the client and the employer" (quotation translated from French). This guide suggests that professionalism, ethics and deontology are fundamental aspects of the practice of engineering, and that the professional conscience of the engineer should ultimately be based on four major values, namely competence according to three components (knowledge, know-how and interpersonal skills), a sense of ethics, responsibility and social commitment. These fundamental values, according to which the Order of Engineers of Quebec defines professional competence, involve the interaction of a multitude of values, internal resources of the individual and social and interpersonal skills, all in harmony, to react rationally in a particular situation, face the unforeseen, and ensure adaptability while evaluating the risks that the decisions taken may generate.

• Knowledge competence: mastery of the scientific and technical knowledge relevant to the engineer's practice, knowledge of the regulations and rules specific to the profession, and the broadening of this knowledge to additional subjects such as risk management. This component refers mainly to the cognitive and intellectual faculties and aptitudes of the individual in an essentially theoretical framework.
• Know-how competence: the ability to apply the rules of the art of the practice by adequately using the tools necessary for the work and by applying the best practices suggested by development and research.
• Interpersonal skills competence: mainly concerns the skills related to emotional awareness and emotional intelligence, including introspection, self-criticism, questioning, openness in interpersonal relationships, flexibility, adaptation to the system, and listening and communication with others.
• Ethics: the sense of ethics consists of a process of reflection on the meaning and consequences of the person's actions. Legault (2003) insists on the fact that integrity should be at the heart of the actions of engineers in all aspects of their work. He also insists on the values of honesty and transparency that engineers owe their clients and their professional order.
• Responsibilities: the Order of Engineers of Quebec requires engineers to answer for their choices and actions by personally vouching for their work to their client and to society. This basic value of the profession essentially implies that engineers must accept only the mandates that they consider themselves able to accomplish according to an objective recognition of their skills and knowledge. In addition, the Order specifies that the engineer is accountable to it.
• Social commitment: concerns several qualities and aptitudes of the individual on the social level and mainly refers to the "interpersonal skills" competence. The Order of Engineers of Quebec suggests skills such as acting as a responsible citizen, exercising positive leadership with colleagues, sharing knowledge and experience with those around them, putting professional skills at the service of society and carrying out professional activities according to the principles of sustainable development. This last skill implies a social conscience and a very developed empathy because of the criterion of sustainable development, meaning that the decisions and actions of the professional must not only take into account the social, economic and environmental dimensions to meet present needs, but must also consider their long-term effects so as not to compromise the needs of future generations.

Sense of Ethics and Professional Judgement: To fully understand what the sense of ethics is, it seems essential to make a clear distinction between the concepts of professional deontology and professional ethics, given the confusion that these two dimensions can raise for professionals, regardless of their fields of practice. According to the literature, a code of ethics essentially consists of the rules prescribed by the Professional Code to which a professional is subject, and under which they may be penalized if they do not respect their duties and obligations. This dimension, based on the principle of morality (good or bad), mainly concerns the knowledge that professionals have of the regulations that they should, in principle, apply in the context of their professional practice. Without listing all the ethical rules of engineers practising in Quebec, some of them have been highlighted given their relevance in relation to human error, and will be reflected in the case studies discussed in a later section of this essay.

• In section II of the Code of Ethics, concerning the duties and obligations toward the public, article 2.04 specifies that "the engineer should only express their opinion on questions relating to engineering, if this opinion is based on sufficient knowledge and on honest convictions" (quotation translated from French).
• In section III, concerning the duties and obligations toward the client, article 3.01.01 stipulates that "before accepting a mandate, the engineer must take into account the limits of their knowledge and skills as well as the means they have to perform the work"; article 3.01.03 indicates that "the engineer must refrain from practising under conditions or states likely to compromise the quality of their services"; article 3.02.01 refers to the value of integrity and stipulates that "the engineer must fulfil their professional obligations with integrity"; and article 3.02.05 requires that "the engineer must inform their client as soon as possible of any prejudicial error and error that is difficult to repair that they committed in the execution of their mandate" (quotations translated from French).
• In section IV, concerning the duties and obligations toward the profession, article 4.02.06 indicates that "the engineer called upon to collaborate with a colleague must preserve their professional independence. If they are entrusted with a task contrary to their conscience or to their principles, they can ask to be excused from it" (quotation translated from French).

The application of the few rules stated above necessarily implies skills and competences that refer more to emotional, intrapersonal and interpersonal intelligence than to knowledge competence. A good understanding and application of these articles involves not only the engineer's capacity for self-criticism and introspection, but also emotional awareness, social skills and well-developed interpersonal skills and abilities. It is therefore obvious that a young engineer graduating from university would be able to memorize their code of ethics, and likewise pass their ethics course in order to obtain their licence to practice. However, it is obviously much more difficult to apply it in everyday life, and far less likely that it will be applied successfully and judiciously to the many issues with which they will be confronted at the beginning of their career. In addition to the Code of Ethics, there is a sense of ethics, which in general relates to the concepts of good, fairness and human achievement. According to Racine et al. (1991), ethical reflection aims to determine the values that motivate professional conduct, which are updated in the Code of Ethics. "The goal of ethical reflection is to determine not the most motivating values on the subjective level but those which constitute good reasons to act in one direction or the other in our relations with others" (quotation translated from French). Legault (2003) adds that ethics also consists of a continuous reflection not only on the meaning of professional actions but also on their multiple consequences. Finally, Millès (2003) specifies, for his part, that unlike morality, which defines laws and rules according to a binary code based on what is good or bad, ethical reflection only makes sense within the constraints of a situation and makes it possible to take the best possible decision in order to act according to the virtues and values of the profession (Code of Ethics). It goes without saying that the Millès (2003) description necessarily implies that the engineer has developed an excellent capacity for adaptation and a balanced professional judgement whose basic premises are, once again, emotional, intrapersonal and interpersonal intelligence. In fact, Goleman defines professional judgement as an activity of the mind that allows one to judge, assess and evaluate the conditions of a professional situation in order to develop an opinion based on knowledge, experience and competence. Professional judgement presupposes consistency, reflexivity and tolerance of uncertainty to enable decision-making and the development of a procedure to be followed in order to offer services that meet the principles of safety, competence and ethics.

Overly structured methodologies, combined with a blind application of standardized procedures, suppress the reality of judgement in favour of compliance. Reality is the very essence of judgement, which includes a critical dimension and requires flexibility of thought. To this end, the OIQ proposes that an engineer's judgement be based on their theoretical knowledge, professional obligations, experience in the context of their practice (professional maturity) and ability to assess risk. Professional judgement cannot, therefore, be taught; it develops conditionally on the flexibility of the mind, the ability to adapt, tolerance of doubt and uncertainty, and particularly on the emotional balance of the individual, since professional judgement, just like an individual's knowledge, skills and competencies, is unfortunately not immune to the emotional load of an event and can be compromised when emotions intrude into thought and inhibit or contaminate the cognitive and mental processes of the professional.

Reasonable doubt is also a factor that can contribute to the development of professional judgement. This concept is based on the consideration of ideas from colleagues who question an engineer's initial position. In general, humans are not fond of this feeling because it exposes them to emotional instability and insecurity. However, Goleman claims that if a being is able to evolve, then doubt becomes the obligatory companion of this evolution: without questioning certainties there can be neither motivation nor criticism. Professional doubt becomes relevant and constructive in the sense that it makes it possible to review or revise an approach, a reflection or a procedure. It prevents hasty judgements based on certainties and makes it possible to weigh the pros and cons of a judgement, opinion, idea or theory. Certainty can be a barrier to discovery, innovation and openness to new ideas, and thinking that we hold the absolute truth is by no means a constructive position in a work team. Tolerance for uncertainty and reasonable doubt therefore allows professionals to reflect on and analyze various issues in depth, particularly professionals who gain assurance through their experience.

To this end, Brehaut et al. suggest that professional judgement rests on two levels of reasoning. One is more automatic, which they call intuitive reasoning; the second is more rational and relies on deliberate reflection in a given context. They explain that automatic intuitive reasoning “stems from instinctual cognitive processes or from highly practised over-learned behaviours” (Brehaut et al. 2007). According to them, intuitive reasoning therefore emerges from automatic cognitive processes conditioned by repeated exposure to professional situations or experiences which seem similar to the context in which engineers must position themselves, but which, in fact, is not necessarily the case. Engineers new to the profession are less exposed to this type of intuitive reasoning, unlike experienced engineers for whom decisions sometimes require very little reflection: “for novices every decision involves deliberative consideration of relevant signs and symptoms, whilst experts often appear to make decisions effortlessly” (Brehaut et al. 2007). “However for all its advantages, intuitive reasoning is also subject to a well-known range of biases and contextual errors” (Tversky and Kahneman 1974).
In this context, professional doubt therefore becomes essential in the practice of engineering, and Bandura already insisted, at the time, that professional judgement and experience are important criteria to consider in the understanding of human error.

11 Cognitive Bias and Human Error

As discussed previously, the presence of human beings within a technical system is essential as long as this system encounters situations or scenarios that it is not in a position to anticipate and therefore cannot adapt to in order to achieve the desired objective. However, this human presence is a double-edged sword; it can be harmful if human factors have limited capacity, or beneficial if human factors and internal resources are well developed. Elwyn Edwards proposed as early as the 1970s that the human is the least controllable and the most volatile and unpredictable variable in a system, and that the psychological components of this person are undoubtedly the most influential elements leading to human error. The analysis of these human factors therefore allows us to understand the elements contributing to the achievement of objectives or to their failure, whether on a personal, interpersonal or professional level. When these human factors are poorly developed, and when the mental and physical state of the individual impedes the objective analysis of information, the planning and communication of decisions and the ability to work in a team, the processes of thought and reasoning will lead to erroneous judgements. Cognitive biases, and affective biases caused by the intrusion of emotions, can seriously hamper a professional's abilities, skills, competencies and judgement, thus resulting in impaired reasoning and therefore in conduct, behaviour or decisions that may run against the intended objective. While a bias can affect all aspects of human reasoning in general, intuitive thinking remains the most vulnerable.

12 Human Error and the University Program

The psychological components are arguably the most influential element leading to human error. Despite the repeated efforts of researchers like Hollnagel (1993), who suggests a methodology to reduce the risk of human errors occurring in a given system, human error as Taleb (2007) presents it in his book “The Black Swan” is inevitable. He claims that our knowledge and relentless research on a subject can be refuted by one single observation or event, which he characterizes as “the black swan event”: “one single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black swan.” For Taleb, ruling out the extraordinary and focusing only on the normal, and on the knowledge we derive from it, is a mistake. In addition, he asserts that human beings are unpredictable and that no research or model can claim to be able to control and prevent human error in any given system: “If you know all possible conditions of a physical system you can in theory project its behaviour into the future. But this concerns inanimate objects. We hit a stumbling block when social matters are involved. You cannot predict how people will act. You simply assume that individuals will be rational in the future and thus act predictably.”

Although Taleb's philosophy speaks to the impossibility of controlling human error in a system due to the unpredictability of humans, the fact remains that the desire of researchers to reduce the risk of errors occurring in a system is honourable. An Australian study conducted by Ingles et al. (1983) indicates in its results that the engineering profession places great importance on experience in error reduction, and thus suggests changing engineering education to focus on professional practice at the expense of theory. Novak (1986) shares this idea, arguing that the reduction of human error in structural engineering would be possible by revising the engineering training and educational program and emphasizing the development of reliability theory, quality assurance and acceptable performance. He argues that for over a hundred years the science of engineering has been dominated by quantitative analysis, with little emphasis on the development of the engineer's human skills: “Control of human error in structures may require a broadening of the educational background of the researcher and ultimately a change in emphasis if not a revolution in the formal education of structural engineers.” Other research, such as J. Reason's studies of human error (1988), establishes an important relationship between predisposition to error and an individual's vulnerability to the stress of their environment. Similarly, research by Broadbent et al. (1982, 1986) suggests an association between increased vulnerability to stress and an individual predisposition to cognitive failure. Finally, a study conducted by Lester and Tritter (2001) on mistakes made in the professional practice of physicians concludes that it is relevant to modify the educational program to emphasize personal and interpersonal skills as well as strategies for managing emotions and uncertainty in order to reduce the risk of errors.

Despite conclusive research as to the relevance of including, in the education and training of engineers, the dimension of human psychology and the human factors involved in professional errors, little effort has been made in this regard. Emotions affect our ability to think and therefore to work; they have an impact on our professional and personal lives. The OIQ clearly states that engineers must, throughout their career, develop know-how and interpersonal skills that go far beyond initial university training. Would it not be relevant to include, in the university engineering program, psychology courses allowing candidates to develop skills and abilities related to emotional awareness, even before wanting to sensitize them to the social and interpersonal awareness suggested in their Code of Ethics? Because even before being able to develop skills related to the awareness of others, human beings must first be able to master the skills related to their own person.

13 Analysis of the Study Cases from a Psychologist's Perspective

In an effort to illustrate some of the concepts discussed in the previous sections, some of the cases described in the seven categories of F. Knoll's case matrix are discussed in order to narrow down the relevant events and identify the human factors involved in the mistakes made. The selection and analysis of the anecdotes is based on the five major categories that contain Goleman's 25 emotional skills, namely self-awareness, self-control, motivation, empathy and social skills, as well as on the professional skills connected with professional conscience, namely knowledge, know-how, interpersonal skills, responsibilities, commitment, code of ethics, sense of ethics, judgement and reasonable doubt.

13.1 Scaffolding and the Inclined Overpass

Several factors can be noted in this event, and although the final settlement concerned the three parties mentioned in the case, the experienced engineer, being under pressure, did not carry out his inspection according to the rules of the art. The skills of self-awareness (self-control) and professional conscience are deficient in the engineer, who does not seem able to perform his work in a responsible manner given the stress caused by the time factor, which contaminates his reasoning and professional judgement (cognitive bias). The engineer seems to have based his decision to proceed with the installation of the slab on his intuitive reasoning, which Brehaut et al. (2007) identify as an advantage for experienced engineers but which can, in certain cases, lead to cognitive bias and to errors: “For novices every decision involves deliberative consideration of relevant signs and symptoms, whilst experts often appear to make decisions effortlessly” (Brehaut et al. 2007). However, for all its advantages, intuitive reasoning is also subject to a well-known range of biases and contextual errors (Tversky and Kahneman 1974). Consequently, the engineer did not know how to manage the stress created by the time factor, and his great professional experience led him to make a hasty decision (intuitive reasoning), which was not based in any way on an objective evaluation of the situation, that is, of the existing methods to allow the installation of the 1-m-thick slab. Fortunately, the construction worker was vigilant and was able to save himself from the dangerous situation that the error generated. According to the information presented in the account, it is also evident that the structural elements supporting the slab, i.e. the aluminium beams, were not stable and secure: “A fundamental error must be cited in the fact that the aluminium beams were leaning and were neither consistently nor effectively secured against toppling over, a condition of instability.” It is therefore possible to assume that errors prior to the construction stage had been made and that the engineer and his team were not able to take responsibility for the mandate that had been granted to them.

This case illustrates well the Swiss cheese model (SCM) of J. Reason, whose metaphor evokes the following principle: the more holes (errors) there are in a slice (a phase of the project), the more defects or weaknesses there are in that basic element. If the holes in these slices line up from one slice (the project design phase) to the next (the project phases subsequent to design), smaller or larger accidents, depending on the size of the holes, can result.
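The metaphor can also be given a minimal numerical reading: an error causes an incident only if it passes through the “hole” in every defensive layer. The short Python sketch below uses purely hypothetical interception probabilities and is offered only as an illustration of that idea, not as part of Reason's model or of the data of this case.

```python
# Minimal numerical reading of the Swiss cheese model: an error leads to an
# incident only if every defensive layer (project phase) fails to intercept it.
# The interception probabilities below are hypothetical, for illustration only.
layers = {
    "design review": 0.90,       # probability that this layer catches the error
    "shop drawing check": 0.80,
    "site inspection": 0.70,
    "worker vigilance": 0.50,
}

p_slips_through = 1.0
for layer, p_catch in layers.items():
    p_slips_through *= (1.0 - p_catch)   # the "hole" in this slice

print(f"Probability the error passes every layer: {p_slips_through:.4f}")
# 0.10 * 0.20 * 0.30 * 0.50 = 0.0030; weakening any single layer widens its hole
# and raises this product, which is the sense in which the holes "line up".
```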

13.2 The Crooked Mast

Responsibility, integrity and honesty were clearly violated in this event because of the employee's fear of losing his job. The self-control competence, including the management of emotions, is obviously deficient in the employee who made the error. Although the consequence of his decision to remain silent about the error did not generate any major drama, the fact remains that the result certainly had an impact in terms of cost and time. The emotional load contaminated the judgement and objectivity of the employee, who prioritized his own well-being rather than acting rationally following the error, of which he was fully aware.

13.3 A Mix-up

Communication and teamwork skills were the human factors responsible for reversing the location of the cranes. The emotional problem experienced by the technician cannot be considered the problem itself but rather one of the triggers that reveal clear deficiencies in the work dynamics of the team that made the error of reversing the cranes on the plans. In fact, the lack of adaptation of the work team to the unforeseen situation of losing the technician responsible for making the shop drawings, the poor communication of information within the system responsible for responding to the mandate granted by the client, and the stress generated by the time criterion are the characteristics of the system that influenced its capacity to manage the inconsistencies of the environment in order to produce quality plans. Despite the insufficiency of details in this account, it is possible to suggest that the numerous stressors of the environment led to several cognitive and emotional biases that greatly affected the objective analysis of information on the one hand, and the planning and coordination of this information within the team on the other. As described in the theoretical section, cognitive and emotional biases can seriously hamper the cognitive abilities, skills, competencies and judgement of all members of the same team, inevitably leading to one or more human errors.

13.4 The Roof of the Olympic Stadium in Montreal

To fully understand the events that took place in the design of the roof of the Olympic stadium, details must be added to allow an analysis of the human factors responsible for the two failures of the stadium roof. Originally, the roof mandate was given to a single firm claiming to have sufficient expertise to design a retractable roof made of Kevlar (aramid fibres). Unfortunately, contrary to what Goleman defines as the characteristics necessary for adequate professional judgement, the work team responsible for the design of the first roof worked in isolation, despite the limited knowledge in the literature of the time on the behaviour and resistance of Kevlar in the context of a retractable roof. In this context, the importance of considering the visions, knowledge, questions and positions of different consultants, in order to develop an elaborate reflection on the use of this material, which was not commonly used at the time, was consciously dismissed by the work team both in the design phase and in the roof installation phase. Despite the fact that reasonable doubt is an essential factor contributing to reflection and professional judgement, the work team ignored it, resulting in a first roof rupture. Subsequently, the committee responsible for the Olympic installations consulted several experts to consider the possible options for the realization of a roof whose main characteristic was that it had to be retractable. The material proposed and selected for the second roof was a fibreglass composite. However, because of cost considerations, the committee opted for a version of fibreglass that was more affordable and therefore less resistant than what had been proposed as a solution. This decision clearly illustrates a flagrant lack of judgement given the recommendations. Soon after the installation, the second retractable roof tore under a load of snow, resulting in a significant amount of snow in free fall. Since then, due to the inferior quality of the fibreglass material chosen for budgetary reasons, the roof of the Olympic stadium has been under constant repair, resulting in costs considerably higher than those that would have been incurred if a superior fibreglass quality had been chosen initially.

13.5 Earthquake Resistance in the Real World

This story clearly illustrates serious engineering deficiencies. The lack of skills of the team of engineers responsible for the design phases, the production of plans and specifications and the execution (construction) of the building in question is obvious. The professional conscience of the project manager and the members of his team is deficient, leading to the absence of human skills such as know-how, interpersonal skills, the ability to assess risk, emotional intelligence, ethical judgement and reasonable doubt. The author also insists that many violations of the code requirements were committed by the professional team of the project, which points to a lack of professional knowledge and skills both of the members who did not comply with the requirements of the building code, and of the project manager who was unable to catch the errors of his team.

In conclusion, even if the foreman on duty seemed competent, the project as conceived and planned by the team of structural engineers was doomed to failure and cost the lives of several people. The main shortcoming, however, lies in the lack of follow-up even though serious deficiencies were identified and reported.

13.6 The Problem with Repetitive Work

According to the points raised by F. Knoll in the illustration of these two cases, several important aspects can be identified as being responsible for the collapse of these two structures: the lack of capable work teams, the large number of construction projects and the time constraints on execution. As suggested in the theoretical section of this essay, intuitive reasoning can become a two-edged sword, according to Tversky and Kahneman (1974). In both cases, and setting aside the fact that the selected teams were unable to do the job properly, which would add many variables to this analysis, the handicap of the teams at work in the design, planning and construction of infrastructure projects includes time pressure on the one hand, and the repetitiveness of the work on the other. These two stressors can greatly influence the quality of the work and the level of motivation and attention of team members. In addition, intuitive reasoning can generate, in the context of repetitive work, the illusion of satisfaction in all phases of the project, both in the production and in the verification of shop plans, leading the project manager to move forward despite the presence of flaws creating compromising or even dangerous situations. In the first case, for which, according to the information in the insurance file, regular inspections had been carried out, the origin of the error seems to lie with the inspections, which created the impression of the absence of a problem. In fact, the cracks that caused the structure to collapse some 35 years later were observed during inspections but not recognized for what they were. Unfortunately, it is not possible to determine, within the scope of this analysis, whether the engineers and/or road inspectors who carried out the inspections were experienced or not, but it is not uncommon in engineering offices to delegate the inspection task to staff with little experience, either for reasons of budget or quantity of work, particularly if the inspector is not supervised by the team leader.

13.7 The Problem with Drawings

In this case, it is possible to clearly identify the stage at which the error first occurred and then, unfortunately, was missed in the subsequent stages of the mandate. In fact, the verification of the shop drawings is a responsibility incumbent on the engineer, who must ensure the quality of the plans to be approved, including their correctness.

Although the verification of shop drawings issued for construction is, albeit tedious, an essential step in any realization, it is too rarely carried out with as much attention and interest as the phases leading up to it. In this case, it is evident that if the engineer had caught the omission in the manufacturer's drawings, the error could have been corrected. Too often, this type of error easily escapes project managers, who rely on their team of professionals and technicians to complete the mandates. In articles 2.02 and 3.03.01 of the Code of Ethics of Engineers, it is stipulated, on the one hand, that the engineer must support any measure likely to improve the quality and availability of their professional services, and on the other hand, that the engineer must demonstrate, in the practice of their profession, reasonable availability and diligence. These two articles mainly refer to the notion of professional conscience and the skills attached to it. In addition, the engineer in charge of the project must have leadership skills and good interpersonal skills to support and coach their team, which, in return, should ensure that the work to be carried out is completed properly. These skills, directly associated with emotional intelligence, are not, as described in the theoretical section of this essay, an innate dimension of the individual but a dimension that develops continuously, provided the person has personal awareness and well-developed self-control.

In the story about the roof of an arena, F. Knoll (in the previous chapter) suggests that the human error stems from a combination of lack of experience, attention and communication. However, according to the description of the events, it is possible to identify the stage at which the error was first made, and then the later stages at which this error was not caught by the design engineer but perpetuated, making it possible to identify more precisely the failings of the professionals responsible. According to the facts, the initial error lay in the design on the technical level, and in professional judgement on the human level. The candidate for the engineering profession who was assigned the responsibility of designing the connection plates was not able to perform up to the expectations of the manufacturer and especially of the project manager. However, the professional misconduct rests mainly on the manufacturer's engineer and the project manager, who did not fulfil their mandate according to the rules of the art. Communication and the junior engineer are therefore not the variables responsible for the error in this situation, but the triggers that led the manufacturer and the design engineer to assume that the connection plates had been well designed rather than to verify them and find that the design of the connection plates did not meet the relevant criteria. With reference to the definition of professional judgement, which suggests that the design engineer/project manager should apply their knowledge, skills and professional experience, taking into account the standards, rules and ethical principles of their profession, to make decisions about what should be done to serve the best interests of the client, it seems clear in this case that the design engineer/project manager lacked professional judgement and thoroughness, neglecting to check the design of the connection plates and failing to verify the drawings that were issued for construction.

13.8 The Perpetual Problem of Balconies

As proposed by F. Knoll, this situation reflects a reality too often encountered by engineers and architects having to collaborate, as best they can, with a client more concerned with their portfolio than with the quality and aesthetics of the structure they want to erect. In fact, this story illustrates well the context with which engineers are often confronted, in spite of themselves, when the client, regardless of the recommendations of the professionals, chooses the lowest bidder for the work. In the example of the balconies, the client's decision to choose the lowest bidder results in shoddy work on the part of the contractor, who omits the necessary chairs to be installed in the balconies. Neglecting to do the work properly, and not always being aware of the problematic results that this type of work generates in the medium or long term, this type of contractor is mainly motivated by the savings they can achieve, regardless of the methods specified by the professionals involved. Although the contractor's main objective might appear to be to respect the “lowest bid” issued by saving on the quality of materials and on site personnel (workers), it is also possible that this type of contractor acts consciously, without the knowledge of the client, in order to extract money from them. The human error committed by the client, privileging the monetary aspect at the expense of the quality of their project, leads them into a reality where the values of honesty, transparency, loyalty, respect and trust do not truly exist, and where the main motivations of the people selected are far from passion, the desire to do good work, empathy and professionalism. In closing, although the engineer can hardly control all the actions and decisions of the contractor on the site, it would nevertheless be relevant for the engineer to learn about this reality through their academic courses in order to know the issues to which they will be exposed from the outset of their professional practice.

13.9 Inspection Impossible

The situation illustrated in this account refers to an incident in 2009 when a concrete panel fell from the 18th floor of the façade of a building, causing the death of a woman sitting on the outdoor terrace of a restaurant in the building in question. Bill 122 of the Régie du Bâtiment (RBQ) was enacted following this unfortunate event, which revealed significant inconsistencies with regard to the maintenance, among other things, of the façades of high-rise buildings. The RBQ then stated that “concrete blocks that detach from cladding and façades that collapse or crack are all risks to public safety that can be reduced, or even avoided, by preventive, assiduous and rigorous building maintenance by the owners.” Since then, this law has obliged the owners of buildings with five or more floors above ground to carry out periodic inspections of their façades on a five-year plan and to produce a report identifying the faults, deficiencies and vulnerabilities of the façades and their evolution over time.

This inspection report must be produced by an architect or an engineer specializing in building structures and must be sent to the Régie du bâtiment in the event that some of the elements identified are considered dangerous. This is no easy task for architects and engineers specializing in building structures. The main problem is the inaccessibility of the anchors, which can only be assessed by making exploratory openings; these are costly for the owner who authorizes them and, in fact, are not representative or exhaustive of the condition of all anchors on a single façade because they are limited to a small sample. The inaccessibility of the anchors of certain building façades is a design error, and it took a fatal event for the relevance of legislation on the maintenance of the asset, and on accessible anchor types and weather-resistant materials, to be considered in order to ensure public safety. Since a complete inspection is often not possible, the law does not really solve the problem, which has the character of a time bomb.

13.10 Thoughtless Cutting

This account is an example of the type of human error possible in execution that can generate major, even dangerous, consequences if not discovered and corrected by repairing the damage. As mentioned previously, it is not uncommon for the construction team (workers and contractors) to be oblivious to the possible consequences of their decisions and actions. In this case, although the plans clearly indicated the presence of the cables in the slab, the worker's superior requested the cut without checking, asking or consulting a professional able to validate the intervention. Several hypotheses can explain the incompetence of the superior, including, among others, the cognitive and emotional biases possibly at play due to the many stressors of the environment in which the superior must work, or the absence of the reasonable doubt which is essential to review or revise an approach, a reflection or a procedure and thus prevent hasty judgements based on subjective experience and on what the superior believes to be certainties. Once again, emotional intelligence, as described by Goleman, becomes essential to allow the development of the personal, interpersonal and social skills essential to the proper functioning of an individual in a position of decision-making power within a team working on a building site, in other words, to exercising circumspection.

Although the second example once again reflects the possible danger of the actions of a team of workers on the site, the following factual information helps to understand the events that followed the error made by the contractor, and which call into question the professionalism of the design engineering team for the building concerned in this story. After the contractor committed the error of making the opening without consulting the structural engineers, those engineers refused to collaborate with the contractor to repair the beam that had lost its shear resistance capacity, essentially because of continued frustrations with the contractor. The client had to hire another engineer who was ready to accept the challenge of designing a structural support for the beam of a building that they had not designed, in the hope of giving it back the necessary shear strength.

The refusal of the design engineers to assist the client and the contractor in finding a viable solution can be considered, from an ethical point of view, as a lack of professional conscience on the part of the engineers concerned. In fact, articles 3.02.10, 3.05.01 and 3.03.04 of the Code of Ethics of Engineers of Quebec indicate not only that the engineer must show impartiality in their relations between their client and the contractors, suppliers and other people doing business with their client, but also that they must subordinate, in the exercise of their profession, their own personal interests to those of their client, obligations which, in this case, were not respected. Of course, the accumulated frustrations of the engineers with regard to the contractor guided this somewhat vindictive refusal to help remedy a problem which would, of course, not have arisen if the contractor had consulted them. Fortunately, a viable solution was found to restore the structural beam to its shear resistance capacity and thus ensure the safety of the building.

13.11 The Problem with Indoor Parking Structures (Case 14)

The case of the concrete panel that broke off the façade of a building in Montréal in 2009, causing the death of a young woman, and this case, which occurred in a parking structure in Montréal in 2008, causing the death of a vehicle driver, are the two motivating events for the adoption of Bill 122 of the Régie du bâtiment du Québec on the obligation for an owner to have the façades of buildings and underground parking structures inspected according to a five-year plan. The multi-level parking structure concerned in this event is a construction dating from 1970, at which time the Canadian standard on parking structures, CSA S413-14, was not yet in force. However, as early as the 1980s in Québec, the freeze/thaw phenomenon and the chemical reaction of de-icing salt with water and moisture contaminating the concrete and the reinforcement of parking slabs were increasingly the subject of studies, leading the Standards Council of Canada to issue a standard for new parking structures in 1994. Despite the fact that the parking structure in question was not subject to Canadian standards at the time of its construction, and that Bill 122 did not exist before the events of 2008, the fact remains that the owner did not exercise judgement as to their civil liability. In fact, the observation of deterioration in the owner's parking structure led them to hide the damage to the slab by covering it with a layer of asphalt rather than making real repairs, which would certainly have been much more expensive but would very likely have helped prevent the fatal event of 2008. Unlike the 2009 fatal event involving a building façade failure, a deterioration of which the owner was unaware because the deterioration of the anchors was not visible, the 2008 multi-level parking structure event suggests that the owner was aware of the deterioration in the parking structure, which leads us to believe that they had prioritized and privileged the economic aspect at the expense of the safety of the public.

14 Conclusion

These few case studies attempt to make the reader aware of the reality of the human factor as a cause of errors in construction and in the world of structural engineering. As highlighted for each of the selected situations, all the actors involved in the construction world, from engineers to workers, including clients and contractors, are in no way immune to committing errors, despite the existence of numerous codes, standards, rules, bodies of knowledge and principles. As mentioned by F. Knoll, the question of human error and its effects in the construction industry is a very important subject considering the costs associated with it, particularly in terms of loss of life. It would be an error in itself not to attempt to explore the possible means of mastering the human variables at play and of making each of the actors who contribute directly or indirectly to the construction industry accountable. The theoretical section of this essay clearly identifies the positive or negative impact of emotions on the cognitive processes, executive functions, reasoning and logic of the individual. The key to success lies, therefore, in the development of emotional intelligence, allowing the mastery and management of our own emotions and thus giving access to a multitude of human skills, the most advanced of which are the awareness of others, empathy, ethical judgement, adaptability, circumspection and flexibility of thought. As suggested in the “Human Error and the University Program” section of this essay, psychological components are arguably the most influential element leading to human error. On this premise, it would be essential to offer, as part of educational programs, courses and training on the essence of emotional intelligence to allow everyone to achieve a life balance in terms of physical and mental health.

References

Brehaut, J.C., et al. (2007) Cognitive and social issues in emergency medicine knowledge translation: a research agenda. Academic Emergency Medicine, 14(11), 984–990
Broadbent, D.E., et al. (1982) The Cognitive Failures Questionnaire (CFQ) and its correlates. British Journal of Clinical Psychology, 21(1), 1–16
Broadbent, D.E., et al. (1986) The Enterprise of Performance. The Quarterly Journal of Experimental Psychology, 38A, 151–162
Gardner, H. (1983) Frames of Mind: The Theory of Multiple Intelligences
Goleman, D. (1995) Emotional Intelligence. Bantam Books, New York
Grigoriu, M. (1984) Risk, Structural Engineering and Human Error. University of Waterloo Press
Hollnagel, E. (1993) The phenotype of erroneous actions. International Journal of Man-Machine Studies, 39(1), 1–32
Ingles, O.G., Lee, I.K., White, W. (1983) Geotechnical Engineering. Pitman Publishing Inc., Massachusetts
Knoll, F. Error Proneness in Construction. ICASP 11, Zürich
Knoll, F. Human Error in the Building Process: A Research Proposal. IABSE Journal J-17/82, Zürich
Lafortune, L. & Allal, L. (2008) Jugement professionnel en évaluation: pratiques enseignantes au Québec et à Genève. Presses de l'Université du Québec, Québec
Legault, G.A. (2003) Professionnalisme et délibération éthique. Presses de l'Université du Québec, Québec, 290 p
Lester, H. & Tritter, J. (2001) Medical error: a discussion of the medical construction of error and suggestions for reform of medical education to decrease error. Medical Education, 35, 855–861
Li, Y., et al. (2018) The influence of self-efficacy on human error in airline pilots: the mediating effect of work engagement and the moderating effect of flight experience. Springer Science+Business Media LLC, part of Springer Nature
Mayer, J.D. & Salovey, P. (1990) Emotional intelligence. Imagination, Cognition and Personality, 9, 188–211
Millès, J.J. (2003) L'éthique est une compétence professionnelle. Tribune, 2003
Nicolet, R.R. (1997) Commission scientifique et technique sur la gestion des barrages, Rapport Vol. 1. Bibliothèque Nationale du Québec / Bibliothèque Nationale du Canada, janvier 1997
Novak, A. (1986) Modeling Human Error in Structural Design and Construction. American Society of Civil Engineers, New York, June 4–6, 1986
OIQ (2018) Guide de pratique professionnelle. Le groupe média science inc.
Racine, L., Legault, G.A., Bégin, L. (1991) Éthique et ingénierie. McGraw-Hill, Montréal, 285 p
Reason, J. (1988) Handbook of Life Stress, Cognition and Health. John Wiley & Sons
Taleb, N.N. (2007) The Black Swan. Random House
Tversky, A. & Kahneman, D. (1974) Judgment under uncertainty: heuristics and biases. Science, 185(4157), 1124–1131

Chapter 5

Insights from Construction Management: Multidimensional Risk Management in Construction Projects

Hamid Rahebi and Katharina Klemt-Albert

1 Megaprojects in Construction as Mega Challenges

The Channel Tunnel between France and the United Kingdom (UK), Boston's Big Dig, Germany's railway project Stuttgart 21, the UK's high-speed railway project HS2 and Dubai's Burj Khalifa are some examples of a long list of large-scale construction projects. In addition to their large dimensions, all these different projects have in common construction costs of over $1 billion (Flyvbjerg 2017; Greiman 2013; HS2 Ltd. 2017; Rogers 2014; Irish 2010; Anthony 2014). With these huge amounts of investment, construction projects can deliver benefits for a globalised world and are main drivers of economic growth. Unfortunately, megaprojects are well known for cost and time overruns (Söderlund et al. 2017; Merrow 2011). Several studies have shown drastic cost and time overruns in construction projects: Morris and Hough (1987) analysed 3,500 projects and identified a cost overrun range between 40 and 200%. Furthermore, Flyvbjerg et al. (2003) identified an average cost overrun of 28% by analysing 258 infrastructure projects. The Hertie School of Governance conducted research in 2015 on German large-scale projects and identified a cost overrun range between 11 and 57% for infrastructure projects. These studies imply that there are challenges in managing large-scale construction projects. Moreover, Flyvbjerg (2017) calls the continuously bad performance of megaprojects the ‘Iron Law of megaprojects’, which says: ‘Over budget, over time, under benefits, over and over again’.

The underperformance of construction projects can be explained by several different factors. One crucial factor for achieving the planned targets is risk management. According to the PMBOK Guide (2017), project risk management includes several steps such as planning, identifying, analysing, developing strategies and monitoring risks. Risk management is an essential part of project management: on the one hand, it can help to take advantage of opportunities, and on the other hand, it enables project managers to avoid deviations from the actual goals. Sir Latham (1994), author of the Latham Report, said:

No construction project is risk free. Risk can be managed, minimised, shared, transferred or accepted. It cannot be ignored.

Latham's statement contains one important rule of which every project manager should be aware: ignoring risks can cause dramatic consequences. Moreover, risk management has many interfaces with other disciplines and aspects. For instance, how risks are managed strongly depends on the background of the risk manager. In order to establish a holistic risk management system, current challenges in risk management must be identified. Based on that, an interdisciplinary risk management system can be designed.

2 Multidimensional Complexity of Megaprojects

A reasonable explanation of how risks arise is the inherent complexity of megaprojects (Locatelli et al. 2017). A complex system is defined as one that contains several elements and relationships and can change its state (Beer 1995). Hence, in a complex system the cause-effect relationship can often only be understood retrospectively (Frahm and Rahebi 2018). Linking this explanation with construction megaprojects, it seems reasonable that several scholars describe these projects as complex systems. Overall, three different dimensions have to be taken into account. First of all, a construction megaproject is divided into different time phases, such as preparation, engineering and design, construction and operation, which means that it changes its state over time. As a second dimension, the need for different disciplines in a megaproject has to be taken into account. Thirdly, the realisation requires interaction with several different project parties such as owners, design engineers, construction managers and authorities. In other words, delivering the expected project outcome requires interdisciplinary interaction between the project participants. In addition, making decisions in construction megaprojects involves the consideration of limited information and a high degree of uncertainty. Many of these uncertainties are only recognised after they have materialised or, at least, their root causes can be identified only after the incident (Hajikazemi et al. 2016). In fact, considering the dimensions and elements of a megaproject system, it is far more than the sum of its parts. Hence, it is reasonable to describe construction megaprojects as complex phenomena. But which factors drive their complexity? Listing all complexity factors would probably be a life's task and still not be complete.

Nevertheless, several scholars have already explored the underperformance of megaprojects and developed explanations for increased complexity. According to Jünger et al. (2019), information asymmetry causes increased complexity in large-scale construction projects. They argue that, in order to deliver flexible, creative and individual solutions, project participants need to receive all required information. Unfortunately, reality has shown that information flows are partly uncontrolled and result in information asymmetry, which complicates the management of disruptions. For instance, information very often gets lost during the transition between the design and construction phases if construction managers are not involved in the design phase (Fig. 1).

Fig. 1 Multidimensional complexity in construction megaprojects (Bahlau and Klemt-Albert 2017)

One possible way of illustrating multidimensional complexity drivers in megaprojects is provided by Bahlau and Klemt-Albert (2017). Based on a literature review, they identified four clusters of complexity, which they call the four dimensions of complexity. Subsequently, the dimensions were validated and ranked through expert interviews with highly experienced project managers. The four dimensions, namely communication, engineering, project management and culture, are related to the objective model ‘triple constraint’, including the consideration of time, cost and quality targets (Fig. 2).

Fig. 2 Evaluation of the dimensions of complexity of construction projects (Bahlau and Klemt-Albert 2017)

45% of the participants identified communication as the main challenge in construction megaprojects. During the whole project lifecycle, communication between internal and external stakeholders is necessary. More precisely, a consistent and working communication strategy on different levels between direct project participants, supervisory and steering committees, public interest groups and the general public, as well as a transparent exchange of communication, is essential. The second main challenge is cultural issues. Misinterpretation, different expectations, irrational optimism in realization or a culture of error tolerance are strongly influenced by the cultural background of the project participants. Especially in megaprojects, where international cooperation is required, cultural differences can cause misunderstandings.

This is consistent with the work of Tversky and Kahneman (2017). Habits such as optimism bias and strategic misrepresentation, which cause underperformance (Tversky and Kahneman 2017), are rooted in the personality and cultural background of the project participants. According to Flyvbjerg (2017), these factors are the main root causes of megaproject underperformance. The third dimension, and the second most important challenge ranked by the experts, is project management. Classic activities of project management such as interface management, coordination processes and the steady change of contact persons, including their contractual consideration, are a main driver of complexity and cause time and cost overruns. The last main factor deals with the engineering of construction projects. The development of new and innovative technological solutions, compliance with all technical regulations and the pioneering risks of implementing unique megaprojects were categorised as another complexity driver. By dividing the complexity of construction megaprojects into four dimensions, the total complexity of megaprojects can be better understood and reduced. In order to decrease the complexity, a holistic risk management system is required which considers classic components such as engineering and project management but also soft factors including culture and communication.

3 Expanding Risk Management

Risk management is an essential part of large-scale construction projects, needed in order to manage the complexity described above and the high degree of uncertainty involved. Given the bad performance of megaprojects (Söderlund et al. 2017; Merrow 2011), several scholars argue that the current risk management approaches are not sufficient. In contrast to classic project management literature such as the PMBOK Guide (2017), Flyvbjerg (2006) argues that the biggest risk in a megaproject is the individuals involved. Aspects such as scope changes, geology or bad data models are causes, but not the root causes, of megaproject underperformance.

The root causes are of a psychological and social nature. In other words, the human factor is the key factor for achieving project objectives. A multidimensional risk management system that considers both hard and soft factors in construction projects can therefore manage the complexity of the project (Fig. 3).

Fig. 3 Expanding risk management

3.1 Project Management

Managing risks from a project management perspective is a well-established methodology for which processes and standards are widely defined. Standards such as DIN EN 31010 describe in a systematic way which steps are necessary. Based on the experience of the authors and evidence in the literature, the following steps are recommended (Fig. 4):

Fig. 4 Risk management cycle

Step 1: Risk Strategy

Successful risk management starts with a well-defined risk strategy based on clearly defined, project-specific risk plans and principles (Greiman 2013). The project context needs to be defined and is an essential part of developing a risk strategy. Aspects such as defining responsibilities, basic principles for dealing with risks, the maximum risk appetite, project objectives and the operative risk management processes are obligatory in order to execute the operative risk management steps effectively (Reformkommission Bau von Großprojekten 2015). Furthermore, the project organisation defines resources and creates an interdisciplinary risk management team which can understand the given circumstances of the project, including aspects such as the political context of the project, main stakeholders, the financing system or unique technical challenges.

Step 2: Risk Identification

After defining a risk management framework, operative steps are executed in order to increase opportunities and reduce uncertainties. The first operative step is the identification of risks. It serves as the foundation for all further steps. Risk identification can be conducted in two different ways: on the one hand, rather intuitive methods such as brainstorming or a SWOT analysis can be used, and on the other hand, more systematic methods such as checklists. Intuitive methods involve creativity, whereas systematic methods offer approved tools. A rather holistic approach combines both methods, for example by conducting a risk management workshop with project participants from different backgrounds (Girmscheid and Busch 2008). Effective risk management in construction megaprojects relies on the identification of risks in the early phases (Reformkommission Bau von Großprojekten 2015). Hence, early risk identification should consider not only the risks of that phase; risks of the construction and operation phases should already be taken into account at a reasonable level. An early and detailed risk identification enables decision makers in the early preparation stages to consider these later-arising risks and to adapt the risk management accordingly. Furthermore, risk identification is an iterative process, meaning that identification and updating should be conducted throughout the project, which allows new circumstances to be considered. Depending on the project phase, different direct risk types exist in a construction project. In addition to project-related risks, external factors or general risks which affect the project indirectly need to be considered as well. Figure 5 provides an overview of some internal and external risks.

Fig. 5 Risk identification
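To make the combination of intuitive and systematic identification more tangible, the following Python sketch collects workshop findings and checklist hits into a simple register. The categories, field names and default values are illustrative assumptions, not part of the authors' method or of any standard.

```python
from dataclasses import dataclass
from enum import Enum

class RiskOrigin(Enum):
    INTERNAL = "internal"   # project-related (design, contracts, interfaces, ...)
    EXTERNAL = "external"   # environment (authorities, market, weather, politics, ...)

class Phase(Enum):
    PREPARATION = "preparation"
    DESIGN = "engineering and design"
    CONSTRUCTION = "construction"
    OPERATION = "operation"

@dataclass
class IdentifiedRisk:
    title: str
    origin: RiskOrigin
    phase: Phase            # phase in which the risk would materialise
    source: str             # e.g. "workshop" (intuitive) or "checklist" (systematic)

# Illustrative checklist used to prompt systematic identification.
CHECKLIST = {
    RiskOrigin.INTERNAL: ["scope changes", "interface management", "contractual gaps"],
    RiskOrigin.EXTERNAL: ["permit delays", "market price escalation", "extreme weather"],
}

def identify_from_checklist(selected: dict) -> list:
    """Turn checklist hits into register entries; the phase is refined later in review."""
    register = []
    for origin, items in selected.items():
        for item in items:
            register.append(IdentifiedRisk(item, origin, Phase.CONSTRUCTION, "checklist"))
    return register

# Combine intuitive workshop findings with systematic checklist hits.
register = [IdentifiedRisk("late design freeze", RiskOrigin.INTERNAL, Phase.DESIGN, "workshop")]
register += identify_from_checklist({RiskOrigin.EXTERNAL: CHECKLIST[RiskOrigin.EXTERNAL][:2]})
print(f"{len(register)} risks identified, "
      f"{sum(r.origin is RiskOrigin.EXTERNAL for r in register)} of them external")
```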

Step 3: Risk Assessment

According to Greiman (2013), risk assessment must answer the following three questions: ‘What can go wrong? How frequently does it occur? What are the consequences?’ It contains the evaluation and classification of the identified risks. More precisely, the symptoms and root causes of each risk are determined. The main task of the assessment is to elaborate a risk forecast by considering, for each risk, its probability and impact. The risk probability describes the likelihood that a specific risk will occur. The risk impact considers the potential effect on the project objectives; the impact is differentiated into negative (threats) and positive (opportunities) (Project Management Institute 2017). In detail, the probability and impact can be assessed in qualitative and quantitative ways. A qualitative analysis is based on expert judgements: experts estimate the probability and impact by ranking the risks. Subsequently, the rankings of probability and impact are multiplied with each other to identify the most important risks. Qualitative analyses need to be objective; subjective perceptions and judgements are one of the main threats when conducting qualitative analyses, but they can be largely avoided by integrating different perspectives during the risk management process. Alternatively, quantitative assessments can be conducted based on existing data. They are more precise but also more time consuming and require realistic numbers for accurate results. Common quantitative approaches in construction projects are described in previous chapters (for example in Chap. 3) of this book. Some megaproject scholars believe in the use of reference class forecasting: by categorising the megaproject into a reference group, historical data on time and cost overruns are used to define a contingency surcharge (Flyvbjerg et al. 2018). After the analyses, the risks can be categorised. Managing all identified risks is neither effective, given their differing impacts and probabilities, nor economically viable for the construction megaproject. Therefore, a prioritisation of the main risks must be conducted. Suitable approaches are, for example, risk portfolio analysis, ABC analysis or sensitivity analysis. In general, during the long duration of a construction megaproject the circumstances can change, which means that the new environment can affect the conducted assessments. Therefore, it is recommended to update risk assessments at a regular frequency. Furthermore, risks which occur during the construction phase sometimes cannot be analysed in depth during the planning phase and remain uncertain. It is recommended to add risk supplements for these uncertainties at the beginning, based on the experience of the project participants.

Step 4: Plan Risk Response

Planning risk responses can avoid threats and includes the development of options, the selection of strategies and the agreement on actions in order to deal with risks. In general, there are four strategies for dealing with risks, namely avoidance, minimisation, transfer and acceptance. Risk avoidance includes countermeasures to eliminate the risk. Risk minimisation describes all activities to reduce the probability and/or the impact until the risk is acceptable. Risk transfer includes the shift of the risk to other parties, which is also known as risk allocation; depending on the competences of the different parties involved, transferring the risk to the best suited one is a common solution, and the parties can include design engineers, contractors or insurers. There is also the option to accept risks: if the probability or the impact is very low, risks can be accepted by the risk managers with no countermeasures planned. How risk managers respond to risks is strongly dependent on the risk tolerance. The risk tolerance defines the point at which a risk is acceptable or not. The risk tolerance of public construction megaprojects tends to be lower due to the duty to protect the public interest (Greiman 2013). Both scholars and practitioners hold the view that each risk response depends strongly on its cost-benefit relation: the effectiveness, i.e. the cost of the risks that are averted, must outweigh the investment costs (Ballou et al. 2009).
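As a concrete illustration of the probability-impact scoring, the ABC-style prioritisation and the cost-benefit check described in Steps 3 and 4, here is a short Python sketch. The scales, thresholds and monetary figures are assumptions made for the example only and do not come from the chapter or from any standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int      # expert ranking, 1 (rare) .. 5 (almost certain)
    impact: int           # expert ranking, 1 (negligible) .. 5 (severe)
    expected_loss: float  # optional monetary estimate used for the Step 4 check

    @property
    def score(self) -> int:
        # Qualitative assessment: probability and impact rankings are multiplied.
        return self.probability * self.impact

def prioritise(risks: list) -> dict:
    """ABC-style classification by score; the thresholds are assumed values."""
    ranked = sorted(risks, key=lambda r: r.score, reverse=True)
    return {
        "A (manage actively)": [r.name for r in ranked if r.score >= 15],
        "B (monitor)":         [r.name for r in ranked if 8 <= r.score < 15],
        "C (accept)":          [r.name for r in ranked if r.score < 8],
    }

def response_worthwhile(risk: Risk, measure_cost: float, loss_reduction: float) -> bool:
    """Step 4 check: the expected loss avoided must outweigh the cost of the measure."""
    return risk.expected_loss * loss_reduction > measure_cost

risks = [
    Risk("ground conditions worse than surveyed", 3, 5, expected_loss=4_000_000),
    Risk("late permit approval", 4, 3, expected_loss=1_500_000),
    Risk("minor supplier insolvency", 2, 2, expected_loss=200_000),
]

print(prioritise(risks))
# Is an additional ground investigation costing 300,000 worthwhile if it cuts the
# expected loss of the ground-condition risk by 40%?
print(response_worthwhile(risks[0], measure_cost=300_000, loss_reduction=0.4))  # True
```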

Step 4: Plan Risk Response
Planning risk responses can avert threats; it includes developing options, selecting strategies and agreeing on actions in order to deal with risks. In general, there are four strategies for dealing with risks: avoidance, minimisation, transfer and acceptance. Risk avoidance includes countermeasures to eliminate the risk. Risk minimisation describes all activities that reduce the probability and/or the impact until the risk is acceptable. Risk transfer shifts the risk to other parties, which is also known as risk allocation; depending on the competences of the parties involved, transferring the risk to the best-suited one—design engineers, contractors or insurers—is a common solution. Finally, risks can also be accepted: if the probability or the impact is very low, the risk managers may accept a risk without planning countermeasures. How risk managers respond to risks depends strongly on the risk tolerance, which defines the point at which a risk is acceptable or not. The risk tolerance of public construction megaprojects tends to be lower due to the duty to protect the public interest (Greiman 2013). Both scholars and practitioners hold the view that each risk response depends strongly on its cost–benefit relation: the cost of the risks that are avoided must outweigh the cost of the response (Ballou et al. 2009).
Step 5: Risk Controlling
In many cases not all risks can be eliminated entirely; risk controlling is then a way to steer and contain them. In other words, risk controlling describes the observation of the risk situation and the validation of the effectiveness of risk responses. In addition, risk controlling is necessary to ensure that the risk management system adapts to a dynamic environment (Boateng et al. 2016). Every decision made during the project can change the risk situation and result in the neutralisation of risk responses or the need for a new risk analysis (Girmscheid and Busch 2008). Part of risk controlling is the development of a dashboard to monitor risks. The dashboard is based on a risk database which contains all internal information about the identified risks, the analyses and the countermeasures taken. Hence, the risk database should be integrated into the overall risk management system. This means, on the one hand, that the risk controlling system is integrated into the overall commercial system of the company and, on the other, that the risk controlling of the individual part projects of the megaproject is interconnected. Advantages such as improved decision-making can be generated in this way (Zhao et al. 2015). Furthermore, in a best-case scenario the internal risk database is extended by external, publicly accessible data. In general, risk controlling requires proactivity; an essential part of it is to identify opportunities and alternatives. The ability to influence the project decreases as the project progresses, while the cost of changes increases (Wideman 1992).
Step 6: Lessons Learned and Best Practice
As the steps above have shown, risk management is an ongoing, iterative process that must be started at the beginning of the project and maintained over the whole project life cycle. Each megaproject is unique, which increases the importance of pioneering risks (Kostka and Fiedler 2016). In order to deal with new risks, Kostka and Anzinger (2015) highlight the necessity of learning. Very often, lessons learned are only summarised after the project is finished and are not used as an instrument to improve risk management during the project. As a result, the project organisation does not learn from its mistakes and cannot improve during the realisation of the project. It is recommended to conduct short, cyclical lessons learned sessions in order to benefit from mistakes. In addition, lessons learned findings need to be collected in a database to which each project participant has access, so that part projects can learn from each other. It is also useful to learn from similar megaprojects and understand their challenges in order to prevent future risks. Based on lessons learned and best practice, future mistakes can be avoided.
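A shared risk register of the kind described in Steps 5 and 6 can be sketched as a simple data structure. The field names, status values and example entry below are illustrative assumptions, not a standardised schema.

```python
# Minimal sketch of a shared risk-register entry that supports risk controlling
# (status, response, owner) and cyclical lessons learned (free-text findings).

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRecord:
    risk_id: str
    description: str
    probability: int           # qualitative rank, e.g. 1-5
    impact: int                # qualitative rank, e.g. 1-5
    owner: str                 # party the risk has been allocated to
    response: str              # avoid / minimise / transfer / accept
    status: str = "open"       # open / mitigated / occurred / closed
    last_review: date = field(default_factory=date.today)
    lessons_learned: list[str] = field(default_factory=list)

    def score(self) -> int:
        return self.probability * self.impact

# Example usage: record a risk, update it after a review and capture a lesson.
register = [
    RiskRecord("R-017", "Unexpected utilities in excavation area",
               probability=3, impact=4, owner="Contractor", response="minimise"),
]
register[0].status = "mitigated"
register[0].lessons_learned.append("Order utility surveys before tendering earthworks.")

for r in sorted(register, key=RiskRecord.score, reverse=True):
    print(r.risk_id, r.score(), r.status)
```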

3.2 Engineering
The engineering of construction megaprojects depends significantly on the type, size, location and further boundary conditions of the project. For instance, the project team of the Olympic Games Stadium 2012 in London faced different issues than the project team of MOZAL, an aluminium smelter plant project in Mozambique and one of the biggest aluminium foundries in Africa. The engineering team of the stadium had the main task of delivering the project on time (Flyvbjerg and Stewart 2016), whereas the project team of MOZAL focused on designing sustainable engineering solutions for an industrial plant (Kumar 2017). In sum, each construction megaproject is unique in its own way, which makes it difficult to generalise the engineering dimension of an expanded risk management system.

3.3 Communication and Collaboration
As implied above, communication—the most important complexity dimension in construction megaprojects—is crucial for achieving the defined project goals. Communication has two primary functions: firstly, it enables an internal exchange of information between the project members and, secondly, it strengthens the relationship with external parties. In other words, internal communication focuses on stakeholders within the project, whereas external communication concentrates on external stakeholders such as the public, media, authorities, government, customers and environmental advocates (Project Management Institute 2017) (Fig. 6). As mentioned above, especially for managing an interdisciplinary team, communication is needed to avoid information asymmetry (Jünger et al. 2019). Consequently, interfaces must be identified and controlled in order to communicate knowledge effectively and generate a sustainable information flow. Knowledge in construction megaprojects can be differentiated into explicit and implicit knowledge. Explicit knowledge is describable and standardised knowledge which can be communicated, stored and distributed easily. Implicit or tacit knowledge is unwritten and based on the expertise, experience or intuition of an individual. In construction projects, explicit knowledge relates to technical models, specifications, technical drawings or workflows. In contrast, implicit knowledge requires the exchange of engineering judgement: critical reviews, understanding relationships, deriving conclusions and developing solutions (Pathirage et al. 2008; Klemt-Albert 2018) (Fig. 7).

Fig. 6 External and internal communication in construction projects, adapted from Klemt-Albert (2020)

Fig. 7 Explicit and implicit knowledge (Klemt-Albert 2018)

Sharing knowledge relies on formal instruments such as reports and management summaries, or on informal instruments such as having a coffee together. Advanced communication helps to reduce the isolated perspectives of the project members and lets them recognise themselves as an important part of the project (Caniëls et al. 2019). It is crucial to develop a collaborative project atmosphere. So far, current construction projects are dominated by silo mentality and isolation. This is underlined by the survey conducted by Bahlau and Klemt-Albert (2017), who analysed how responsibility for missed project objectives is assigned and justified. According to the respondents—owners, design engineers and construction companies—the failure to meet objectives never lies within their own responsibility. The owner of the megaproject explains the underperformance by the interests and power of politics and the public. Design engineers attribute underperformance to missing decisions by the owner and complications with authorities. Construction companies criticise the planning performance of the design engineers. In summary, each project participant blames the others rather than accepting responsibility. This kind of 'finger pointing' can be avoided by increased collaboration and transparency between the project participants. Therefore, proactive communication involving the overall project team must be driven forward.

External communication, including external stakeholder management, is an important aspect of construction megaprojects. Richard Capka, former CEO of the Massachusetts Turnpike Authority, aptly describes megaprojects as 'public journeys' (Greiman 2013). The public journey involves many different types of stakeholders with different interests and different powers, each of whom has a legitimate interest in the megaproject. Especially after the construction phase has started, the public takes an increased interest in the project, and delays and cost overruns are punished by negative media coverage. Therefore, the construction megaproject organisation needs resources for involving external stakeholders. It is recommendable to integrate external stakeholders during the engineering phase for the sake of achieving transparency and reducing the number of project opponents.

Fig. 8 Frequency of Google searches for 'Stuttgart 21'

Nowadays, it is very common for megaprojects to maintain a strong presence in the media, including social media, in order to win the support of external stakeholders for the project objectives. One example of critical stakeholder integration is the megaproject Stuttgart 21, a railway project in Germany. The project management team was criticised for neither identifying nor responding to the potential interest of the public, and for missing the opportunity to integrate the public by sharing information or involving them during the preparation phase. According to a local newspaper, neither were the opportunities enabled by the infrastructure project communicated, nor was the public integrated into the decision-making process (Stuttgarter Zeitung 2010). Figure 8 matches the project's development and milestones against public interest, displayed as the frequency of Google searches for 'Stuttgart 21' over time. The figure shows that after the construction phase had started, interest in and disagreement with the project increased and resulted in an escalation of conflicts and demonstrations in 2010. Ultimately, the escalation led to a referendum, which caused further delays and cost increases (Nagel and Satoh 2019). Public interest thus rises only slightly after the start of construction and peaks when conflicts occur and the media report on cost and time overruns. But with the start of construction, the design and all preparations are largely completed, so that initiating scope changes at this stage causes dramatic cost and time increases. In other words, large public interest arises when the possibilities of influence are already substantially limited. This phenomenon is known as the participation paradox (Hirschner 2017; Zeit Online 2011). Derived from the investigation of several construction projects, the participation paradox for construction megaprojects is illustrated in Fig. 9.

The construction project Öresund Link, a bridge between Denmark and Sweden, was praised for effective stakeholder integration. At the beginning, the project was heavily criticised for its negative environmental impact. Environmental stakeholders, invoking the Environmental Protection Act, the Natural Resources Act and the

Fig. 9 Participation paradox for construction megaprojects (Klemt-Albert 2020)

Water Act, as well as a strong protest movement, had a significant interest in the project and criticised it. The project team recognised this threat and, one year after the construction decision, decided to hire one of the leaders of the protest movement, Björn Gillberg, to supervise compliance with the environmental regulations. Gillberg increased the number of regulations, which resulted in a cost increase for the megaproject; but communicating with the external stakeholders and integrating them into the realisation of the project reduced the number of project opponents and ensured that there was no negative environmental impact (Gatermann 2000). In conclusion, it is crucial to manage external parties by integrating stakeholders with potential power, such as the public, in early project stages in order to create transparency.

3.4 Culture
The impact of culture on organisations is a well-recognised and important field in both research and practice. Several articles have already explored the impact of culture on project management. For instance, Rowlinson and Root (1996) analysed the impact of culture using Hofstede's (1991) four dimensions and found that cultural factors in projects depend strongly on the origin of the people involved. Another study, conducted by Ankrah (2007), identified the impact of culture on construction projects; the findings show that culture can affect aspects such as participant satisfaction, innovation, learning outcomes, health and safety, and quality outcomes. Culture can thus be seen as a supportive element which enables effective risk management. The effectiveness of risk management tools is limited without the right values; risk management stands and falls with the existing project culture. Appropriate risk management is only possible by dealing with risks in an unbiased manner and by continuously maintaining the risk management system. Therefore, an open risk culture is necessary.

An open risk culture means that if an error occurs, the project participants are not afraid to report it instead of trying to hide it. Rather, it is about allowing errors and seeing them as an opportunity to learn. In the lean management philosophy, errors are treasures, because without them no one would recognise that an optimisation of the process is possible. In the context of construction megaprojects, a risk culture which allows mistakes is a significant part of the learning process during the project. Two aspects are important to drive this learning forward: senior management and leadership. Senior managers must support this way of thinking, and leaders have to convince their staff that making errors will not be a reason for punishment. In fact, they have to guide their employees in how to take advantage of mistakes that have occurred, improve the risk management system and provide resources. Consider, for instance, a controversial public megaproject facing ongoing criticism from media and citizens. The management board of the project team is under pressure and attaches great importance to avoiding any errors; in other words, it establishes a policy of zero error tolerance. If a potential error occurs, participants try to hide mistakes or difficulties and do not report them. Consequently, information about potential risks is lost—including the very information that the risk exists. Eventually, the potential threat materialises with dramatic damage and causes cost and time overruns.

4 Reducing Complexity by Digitalisation
The age of digitisation and of interconnecting physical and digital systems has already shaped many industries. The current performance of computers and the ability of engineers to develop intelligent algorithms allow the creation of many new technical and business-related solutions. In the construction industry, one main driving force is Building Information Modelling (BIM). BIM is a digital method based on 3- to n-dimensional, object-oriented building models. The digital model of the building serves as an information source and data hub for the collaboration of the project participants. At its centre is the digital registration and interconnection of all relevant data describing the physical, functional, cost- and time-related characteristics of a building. The data model is used and continuously updated across all project phases, from survey to operation, and can cover the entire life cycle. If BIM is applied correctly and intelligently, added value can be achieved through increased planning quality and improved schedule and cost reliability. This results from consistent information management and a collaborative and transparent way of working. Overall, BIM supports the achievement of classic project management objectives such as time, cost and quality. Some examples of BIM objectives are:

• Quality-assured, integrated design that meets user-specific requirements
• Time and cost stability
• Robust cost and quantity determination
• Detailed assessment and increased understanding of the project through visualisation, simulation and controlling
• Constructive, collaborative and transparent cooperation

Based on the above-mentioned approach, Bahlau and Klemt-Albert (2017) conducted an online survey about digitalisation and BIM in the construction industry. In total, 70 participants, including managers, lawyers, engineers and commercial staff from companies of different sizes, were involved. The study found that respondents rate the influence of digitalisation on the market in general as very high; for instance, engineers expect a higher influence of digitalisation than architects do. In addition, the study related the method BIM to the four dimensions of complexity described above. In a first step, the identified dimensions were validated by the participants; based on that, the respondents were asked to evaluate where the greatest improvement potential among the four clusters lies.
Digitalised Project Management
In general, BIM increases transparency by using a digital model throughout the whole life cycle. For instance, during the procurement phase of a construction megaproject, a digital model-based procurement process offers many opportunities, such as deriving information on quantities, specifications or required qualities directly from the model. In addition, it enables the project team to validate cost and time forecasts against the digital model. Hence, digital solutions can be used to design an advanced risk management system. During risk identification, analysis, assessment and controlling, it is advantageous to use the information provided by the digital model for decision-making. By analysing large amounts of data and filtering specific information, the risk management team obtains more accurate information about the construction project. In summary, BIM supports risk managers by providing reliable information which can be used for advanced decision-making.
Can Digitalisation Improve Engineering?
Technical challenges, as well as the coordination of all involved engineers, can be handled better by using an object-oriented 3D coordination model which follows defined standards. Having access to the various pieces of information integrated in the model offers engineers a new opportunity to optimise their activities. For instance, the overall quality management process benefits from the use of model checking, which allows the identification of collisions or overlaps by defining rule-based checks. So far, the model checking process is not completely automated and still requires a technical review of the model by a qualified employee. Nevertheless, using object-oriented 3D models helps project managers to understand the complexity of megaprojects. To create a good starting point, the specifications need to be formulated concisely and comprehensively.
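The rule-based model checking mentioned above can be illustrated with a small sketch. The object attributes and the two rules are invented examples and do not reproduce any particular checking software.

```python
# Minimal sketch of rule-based checking on an object-oriented building model:
# every model object is a dictionary of attributes, and each rule is a function
# that returns a list of issues found.

model = [
    # Illustrative objects only; a real BIM model would be far richer.
    {"id": "W-101", "type": "Wall", "fire_rating": "EI90", "level": "L01"},
    {"id": "D-205", "type": "Door", "fire_rating": None,   "level": "L01", "width_mm": 850},
    {"id": "D-206", "type": "Door", "fire_rating": "EI30", "level": "L02", "width_mm": 750},
]

def rule_fire_rating_present(objects):
    """Every object must carry a fire-rating attribute."""
    return [f"{o['id']}: missing fire rating"
            for o in objects if not o.get("fire_rating")]

def rule_min_door_width(objects, minimum_mm=800):
    """Doors must satisfy an assumed minimum clear width."""
    return [f"{o['id']}: width {o['width_mm']} mm below {minimum_mm} mm"
            for o in objects if o["type"] == "Door" and o["width_mm"] < minimum_mm]

issues = rule_fire_rating_present(model) + rule_min_door_width(model)
for issue in issues:
    print("CHECK FAILED:", issue)   # a qualified employee still reviews the findings
```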

Digital Solutions for Project Communication
The biggest lever for increasing communication is the use of a BIM platform, also known as a Common Data Environment (CDE) (Fig. 6). The BIM platform decreases the existing information asymmetry by providing one authoritative digital model containing all information, known as the Single Source of Truth (SSoT). All project participants can access the information they require via the platform. Hence, communication about the model is handled via the BIM platform, which reduces the amount of misinterpretation. More precisely, project participants can record issues directly in the model and define responsibilities by creating BCF (BIM Collaboration Format) issues. This allows the risk management team to receive early signals of potential risks. Furthermore, an easily understandable visualisation of the design drawings, visualisation of the interfaces between project participants and common communication training for the project participants help to reduce complexity. Both explicit and implicit communication of knowledge benefit from the use of BIM, which increases the degree of collaboration. The use of a CDE enables, on the one hand, formalised communication to standardise knowledge and, on the other, communication based on the model (Fig. 10).
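Recording issues against the model, as described above, can be sketched as a simple record. The structure is loosely inspired by BCF topics, but the field names and workflow here are simplified assumptions rather than the official BCF schema.

```python
# Minimal sketch of a model-based issue ('BCF-style' topic) recorded on a
# common data environment: the issue points at model elements, names a
# responsible party and can be picked up by the risk management team.

import uuid
from dataclasses import dataclass, field

@dataclass
class ModelIssue:
    title: str
    description: str
    author: str
    assigned_to: str                 # responsible project participant
    related_elements: list[str]      # IDs of affected model objects
    status: str = "open"             # open / in progress / resolved
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

issue = ModelIssue(
    title="Clash between duct and primary beam, level 3",
    description="Reroute duct or adjust beam opening; potential delay risk.",
    author="MEP coordinator",
    assigned_to="Structural engineer",
    related_elements=["B-3-017", "DU-3-442"],
)

# The risk management team can scan open issues as early warning signals.
open_issues = [i for i in [issue] if i.status == "open"]
print(len(open_issues), "open model issue(s)")
```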

Fig. 10 BIM platform adapted from Kaufmann and Schönbach (2019)

Fig. 11 Addressing complexity through digital solutions (Bahlau and Klemt-Albert 2017)

Establishing a Project Culture Through Digital Solutions
One main advantage of digitisation is increased transparency. In order to improve the risk culture, BIM can support the design of an ongoing, transparent risk management. As already mentioned above, establishing a risk culture which tolerates errors and uses them for continuous improvement is important to unleash the full potential of the risk management instruments. By providing visualisations, implementing cooperative processes and sharing information, a culture is established which follows values such as openness and transparency. Furthermore, using a digital model supports working in partnership and implementing an open-book policy. In addition, the information generated by using BIM increases objectivity in the project. Misinterpretation or other concerns caused by cultural aspects can be minimised by formalised and standardised information flows. In fact, digital solutions are important tools for realising complex construction projects; indeed, they are necessary for realising construction projects today. Increased user requirements and a fast-changing environment cause a higher degree of complexity. Companies in the construction industry need digital solutions to act economically and remain competitive (Fig. 11).

5 Sustainable Implementation of Risk Management
Risk management in construction megaprojects is, like a megaproject itself, a complex task. The complexity of managing risks in construction projects can be subdivided into the four dimensions of project management, engineering, communication and culture. The first two dimensions can be categorised as hard factors, whereas communication and culture are soft factors. As mentioned above, considering soft factors is as important as considering hard factors. Moreover, the interaction between humans in megaprojects should not be ignored: disregarding information about how humans behave can itself become a threat. For instance, communication with external stakeholders requires soft skills on the part of the engineers involved.

In other words, a construction megaproject has parallels to a social process. Developing a shared perception of the megaproject and focusing on the project participants throughout the project decide between success and failure. Therefore, the classic risk management process should go beyond classic controlling and be extended into a holistic risk management system. To perform risk management in today's projects, digital solutions are not only useful levers for effective risk management but indeed necessary in order to face the existing complexity. The digital method BIM enables project managers to increase the degree of collaboration and transparency by reducing the existing information asymmetry. Overall, digitalisation is an enabler for designing effective risk management for construction megaprojects. Managing risks while taking social factors into account, as well as applying digital approaches, has a common prerequisite: the project members need the qualification to work in an interdisciplinary team and to apply digital methods. Although communication is the main complexity driver, classic engineering curricula do not include training in soft skills. Future generations of engineers must be sensitised to a comprehensive understanding of projects and must learn how to interact with different engineering disciplines and to consider different cultures. In addition, the use of innovative digital solutions requires training in order to apply them in practice. Therefore, it is recommended that students are introduced during their studies to methods and software applications that promote collaboration, transparency and a culture of continuous improvement, so that the social challenges and risks that exist in megaprojects can be tackled preventively.

References

Ankrah, N.A. (2007) An investigation into the impact of culture on construction project performance. PhD Thesis, University of Wolverhampton.
Anthony, S. (2014) The Apollo 11 moon landing, 45 years on: Looking back at mankind's giant leap. Available at: https://www.extremetech.com/extreme/186600-apollo-11-moon-landing-45years-looking-back-at-mankinds-giant-leap (Accessed: 18 August 2019).
Bahlau, S. and Klemt-Albert, K. (2017) Evaluationen zu den Potentialen von Building Information Modeling. Bauingenieur, pp. 1–20.
Ballou, B., Heitger, D. and Schultz, T. (2009) Measuring the Costs of Responding to Business Risks. Management Accounting Quarterly, 10 (2).
Beer, S. (1995) The Heart of Enterprise. Manchester: Wiley.
Boateng, P., Chen, Z. and Ogunlana, S. (2016) A dynamic framework for managing the complexities of risks in megaprojects. International Journal of Technology and Management Research, 1 (5): 1–13. https://doi.org/10.1016/j.clinbiochem.2014.12.004.
Caniëls, M.C.J., Chiocchio, F. and van Loon, N.P.A.A. (2019) Collaboration in project teams: The role of mastery and performance climates. International Journal of Project Management, 37 (1): 1–13. https://doi.org/10.1016/j.ijproman.2018.09.006.
Flyvbjerg, B. (2006) From Nobel Prize to Project Management: Getting Risks Right. Project Management Journal, 37 (3): 5–15. Available at: http://flyvbjerg.plan.aau.dk/Publications2006/Nobel-PMJ2006.pdf.

Flyvbjerg, B. (2017) "Introduction: The Iron Law of Megaproject Management." In Flyvbjerg, B. (ed.) The Oxford Handbook of Megaproject Management. Oxford: Oxford University Press, pp. 1–18.
Flyvbjerg, B. and Stewart, A. (2016) The Oxford Olympics Study 2016: Cost and Cost Overrun at the Games. SSRN Electronic Journal, (July). https://doi.org/10.2139/ssrn.2804554.
Flyvbjerg, B., Ansar, A., Budzier, A., et al. (2018) Five Things You Should Know about Cost Overrun. Transportation Research Part A: Policy and Practice, 118 (December): 174–190. https://doi.org/10.1016/j.tra.2018.07.013.
Flyvbjerg, B., Bruzelius, N. and Rothengatter, W. (2003) Megaprojects and Risk: An Anatomy of Ambition. Cambridge: Cambridge University Press.
Frahm, M. and Rahebi, H. (2018) Kybernetik, Lean Digital – Für intelligente, schlanke und vernetzte Bauprojekte. 1st Edition. Frahm, M. and Rahebi, H. (eds.). Stuttgart: Amazon Kindle Publishing.
Gatermann, R. (2000) Öresundbrücke für sechs Milliarden Mark noch günstig. Available at: https://www.welt.de/print-welt/article520529/Oeresundbruecke-fuer-sechs-Milliarden-Mark-noch-guenstig.html (Accessed: 19 January 2020).
Girmscheid, G. and Busch (2008) Projektrisikomanagement in der Bauwirtschaft. Berlin: Beuth Reihe Bauwerk.
Greiman, V. (2013) Megaproject Management – Lessons on Risk and Project Management from the Big Dig. New Jersey: John Wiley & Sons, Inc.
Hajikazemi, S., Ekambaram, A., Andersen, B., et al. (2016) The Black Swan – Knowing the Unknown in Projects. Procedia – Social and Behavioral Sciences, 226: 184–192. https://doi.org/10.1016/j.sbspro.2016.06.178.
Hirschner, R. (2017) Beteiligungsparadoxon in Planungs- und Entscheidungsverfahren. Available at: www.vhw.de/fileadmin/user_upload/08_publikationen (Accessed: 1 February 2020).
Hofstede, G. (1991) Cultures and Organizations: Software of the Mind. London: McGraw-Hill.
HS2 Ltd. (2017) Investing in our economy. Available at: https://www.hs2.org.uk/why/investing-in-our-economy/ (Accessed: 18 August 2019).
Irish, J. (2010) Burj Dubai cost $1.5bn to build. Arabianbusiness.com, 4 October. Available at: https://www.arabianbusiness.com/burj-dubai-cost-1-5bn-build-27430.html.
Jünger, H.C., Bernat, G. and Weissinger, M. (2019) Baubegleitendes Störungscontrolling bei Großprojekten. Bauwirtschaft, 4: 208–218.
Kaufmann, T. and Schönbach, R. (2019) "Interdisziplinäre Kollaboration in BIM-Projekten – Ein Einblick in Lehre und Forschung am ICoM." In 36. Internationales Projektmanagement-Forum. Nürnberg, 2019.
Klemt-Albert, K. (2018) "Building Information Modeling." In Albert (ed.) Bautabellen für Ingenieure. V23 ed. Berlin: Bundesanzeiger Verlag, pp. 1.68–1.87.
Klemt-Albert, K. (2020) "Building Information Modelling – Die Baubranche zwischen Euphorie und Schicksal." In Digital Bau. Cologne, 2020.
Kostka, G. and Anzinger, N. (2015) Studie: Großprojekte in Deutschland – Zwischen Ambition und Realität. Zusammenfassung Teil I: Sektorübergreifende Analyse. Hertie School of Governance.
Kostka, G. and Fiedler, J. (2016) Large Infrastructure Projects in Germany: Between Ambition and Realities. https://doi.org/10.1007/978-3-319-29233-5.
Kumar, V. (2017) Financing the Mozal project: Case Analysis. Available at: https://de.slideshare.net/vkautomotive/financing-the-mozal-project-72534048.
Latham, M. (1994) Constructing the Team (The Latham Report). HMSO. Available at: https://constructingexcellence.org.uk/wp-content/uploads/2014/10/Constructingthe-team-The-Latham-Report.pdf.
Locatelli, G., Mancini, M. and Romano, E. (2017) Project Manager and Systems Engineer: a literature rich reflection on roles and responsibilities. International Journal of Project Organisation and Management, 9 (3): 195–216.

Merrow, E.W. (2011) Industrial Megaprojects: Concepts, Strategies and Practices for Success. Hoboken, New Jersey: Wiley.
Morris, P.W.G. and Hough, G.H. (1987) The Anatomy of Major Projects: A Study of the Reality of Project Management.
Nagel, M. and Satoh, K. (2019) Protesting iconic megaprojects. A discourse network analysis of the evolution of the conflict over Stuttgart 21. Urban Studies, 56 (8): 1681–1700. https://doi.org/10.1177/0042098018775903.
Pathirage, C., Amaratunga, D. and Haigh, R. (2008) The role of tacit knowledge in the construction industry: towards a definition. Available at: https://www.researchgate.net/publication/46388136_The_role_of_tacit_knowledge_in_the_construction_industry_towards_a_definition/stats.
Project Management Institute (2017) PMBOK Guide – A Guide to the Project Management Body of Knowledge. 6th Edition. Newtown Square: Project Management Institute, Inc.
Reformkommission Bau von Großprojekten (2015) Endbericht – Komplexität beherrschen: kostengerecht, termintreu und effizient. Berlin.
Rogers, J.D. (2014) "The American Engineers that Built the Panama Canal." In Engineering the Panama Canal. Reston, VA, 13 August 2014. American Society of Civil Engineers, pp. 112–349. https://doi.org/10.1061/9780784413739.005.
Rowlinson, S. and Root, D. (1996) The impact of culture on project management. Hong Kong. Available at: https://www.researchgate.net/publication/305397606_The_impact_of_culture_on_project_management.
Söderlund, J., Sankaran, S. and Biesenthal, C. (2017) The Past and Present of Megaprojects. Project Management Journal, 48 (6): 5–16. https://doi.org/10.1177/875697281704800602.
Tversky, A. and Kahneman, D. (2017) Judgment under Uncertainty: Heuristics and Biases. Science, 185 (4157): 1124–1131.
Wideman, R.M. (1992) Project and Program Risk Management: A Guide to Managing Project Risks and Opportunities.
Zeit Online (2011) Bürgerbeteiligung – Was Parteien von Stuttgart 21 lernen können. Zeit.de, 24 November. Available at: https://www.zeit.de/politik/deutschland/2011-11/referendum-stuttgart21-parteien/komplettansicht.
Zhao, X., Hwang, B.G. and Low, S.P. (2015) Enterprise risk management in international construction firms: Drivers and hindrances. Engineering, Construction and Architectural Management, 22 (3): 347–366. https://doi.org/10.1108/ECAM-09-2014-0117.

Chapter 6

Evaluation of Case Studies/A Template Approach

Franz Knoll

In order to learn from the case studies as a data base, the data must be organised in a way that permits common features to be observed. In this chapter, an attempt is made to do just this, i.e. to classify the various “circumstances” associated with the history of what went wrong in each case. To do as much justice to the individual cases as possible, a compromise must be made between too detailed and too generalized an evaluation. It takes the form of a “template” with seven groups/classes of features, each of which is composed of several specific circumstances.

1. The work
– Concept
– Workmanship
– Novelty
– Magnitude
– Modifications
– Temporary work
2. The client
– Budget constraints
– Time pressure
– Selection of agents
– Payment morale
– Participation
3. Organisation
– Management
– Team work
– Assignment of tasks and responsibilities
– Method of decision making
– Communication

4. Human individuals
– Competence
– Devotion, attitude/attention
– Continuity
– Learning, comprehension
– Modifications
– Delegation to software
5. The “structure” and its essential properties
– Robustness, resilience
– Suitability
– Complexity
6. Use
– Maintenance
– Exposure
7. Checking, review, quality control
– Effectiveness
– Timing
– Targeting
– Follow-up
– Resources


The result of this operation is a tabular listing of the “cases” and the relevant circumstances of the project history, in matrix form (see Appendix—Matrix of Circumstances vs. Errors), to which a few results of simple and basic arithmetic have been added, suggesting tendencies that can be perceived at this time. It must be kept in mind that, for conclusions to qualify as a significant data base in the sense of modern science, the 60 or so cases are clearly insufficient. With 30 parameters, an “order of magnitude” harvest of some 1000 cases would be needed. It is hoped that in the future it will become possible to collect such a quantity of data—it certainly exists in the memories of engineers and architects, and in the archives of insurance companies. It has turned out very difficult, however, to extract it from these repositories, for reasons that are not difficult to understand—nobody likes to talk about one's failures. Only one smaller insurance company has been found that was prepared to open its archives. A large part of the cases listed come from the memory of the author, who has been in contact with the events in one way or another. It is thought that, even with the relative paucity of data, some qualitative insight can be gained into when, where, how and why things go wrong most consistently and notoriously.

In the following text, the elements of the “template” will be reviewed. Most of them cannot be measured in the technical sense of the word, but in the matrix a value is assigned to each element, representing its relative importance for the outcome in each case:
3 = the particular circumstance was instrumental for the effect, i.e. not much would have happened in its absence.
2 = a major contribution to the event is identified, but by itself it would not have caused the failure.
1 = a relatively minor contribution is found which, in combination with others, was among the presumed causes.

Such an evaluation is of course very subjective. Given the heterogeneous nature of the features describing the circumstances of the building process, it is difficult to see how an arithmetic evaluation in the technical sense could do justice to the problem, beyond what can be read from the frequency and relative import of the contributing circumstances. The following text is an attempt to illustrate briefly the character of each of the circumstances making up the history of a construction project and its particular relationship with error-proneness.
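The “simple and basic arithmetic” applied to the matrix can be sketched as follows. The case names, circumstance labels and scores are invented for illustration and do not reproduce the actual matrix in the Appendix.

```python
# Minimal sketch of the tabulation: rows are cases, columns are circumstances,
# entries are the assigned weights 3 / 2 / 1 (0 = not a contributing factor).
# Summing each column indicates which circumstances recur most prominently.

circumstances = ["Concept", "Workmanship", "Time pressure", "Communication", "Checking"]

cases = {
    # Illustrative scores only.
    "Case A": [0, 3, 2, 1, 2],
    "Case B": [2, 1, 0, 0, 3],
    "Case C": [1, 2, 1, 2, 0],
}

totals = [sum(scores[i] for scores in cases.values())
          for i in range(len(circumstances))]
frequency = [sum(1 for scores in cases.values() if scores[i] > 0)
             for i in range(len(circumstances))]

for name, total, freq in sorted(zip(circumstances, totals, frequency),
                                key=lambda row: row[1], reverse=True):
    print(f"{name:<15} total weight {total:>2}, contributes in {freq} of {len(cases)} cases")
```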

1 The Work
The characteristics of a proposed work, “the project”, from the initial idea through the phases of planning, construction, use and eventual “deconstruction”, are of course of primary importance with respect to success, i.e. the quality of the product in terms of safety, durability, reliability, etc.

1.1 Concept
The key to every project is the conceptual design: is it appropriate, coherent, thought through, doable, economic and safe—in other words, how close is it to the (usually unattainable) optimum? The concept must consider all relevant criteria and aspects of the work, its planning, creation, use and eventual deconstruction. At the same time, economics, timing, aesthetics, the client's satisfaction and the coordination of all participating trades must be optimized, including all the uncertainties and contradictions this implies. The concept is developed in the planning phase in as much detail as possible, but it will have to be refined and adapted to unforeseen obstacles and difficulties manifesting themselves along the building process. The minimisation of the “unforeseen” is itself a part of the optimisation, since surprises and difficulties usually translate into delays and cost, or even accidents.

1.2 Workmanship, Quality
The quality of workmanship is the first criterion that comes to mind in most cases, for several reasons:
• it relates most directly to the quality of the product
• most of the time its effects are apparent visually
• poor workmanship will have a negative effect on usability, durability, function and appearance for the duration of the existence of the work
• maintenance, its cost, frequency and scope will increase with low quality
• the value in the case of a change of hands will depend directly on the quality
• low quality will often lead to contentious issues associated with the cost of sorting things out.

1.3 Novelty
If new circumstances are part of a project, a lack or deficit of experience must be accounted for. The novelty may concern new techniques, unusual materials or methods, or these may simply be new in their application, having been used previously elsewhere or in a different context. Most novel designs, if they are successful, also include some margin or reserve in terms of robustness, just in case. If they do not, the uncertainty associated with an untried concept may take effect, in the form of risk.

1.4 Magnitude
Magnitude parallels complexity in most projects, with the number of people, network connections, elements of decision and information increasing at a steep rate. This introduces a critical element into the process which enhances its proneness to error genesis and perpetuation: it becomes increasingly difficult, even for highly qualified actors, to command an effective overview. The network representing the construction process, including all phases, becomes increasingly opaque to the viewer, much in the sense of the proverb of the forest and the trees, with Murphy's law lurking everywhere and anywhere. Attempts have been made, starting in the 1970s and 80s, to delegate this problem to schematic computer processing, with mixed success. The advent of powerful artificial intelligence (AI) may take this one step further. It remains to be seen how much of a positive effect, in terms of error elimination, this will produce.

1.5 Modifications
“Ordre, contre-ordre, désordre” (“order, counter-order, disorder”)… Most if not all projects will experience modifications on the way to completion. This is part and parcel of the optimization process and includes the follow-up of error detection and correction. However, the manner in which modifications are introduced and absorbed is very important for the pursuit of quality, as is, of course, the quantity of modifications that are found necessary. It is a common experience that modifications are often introduced “on the run”, meaning they are based on “snap decisions” and not systematically recorded. An everyday discovery for engineers or architects is that things were not built in the way they are represented on the available documentation. This is a consequence of the time pressure—time is money—imposed by the modern economy, where any delay is perceived to translate into a financial loss; numerous projects are presently carried out in “fast track” mode, as it is commonly called. There is a reason for this: it may be too difficult and time-consuming to work out all the details of information on conditions and circumstances that will be met during execution, and it is decided to “learn on the go” rather than to invest in studies and exploration beforehand. This is especially important when dealing with existing structures, where documentation about conditions cannot be found and information can only be gained by sometimes destructive exploration. The potential for “surprises” must be accepted and quantified; it is usually carried under the label of “contingency”, represents a risk, and will translate into modifications and the problems these bring along.

1.6 Temporary Work
Temporary structures built to serve only weeks, months or a few years are often associated with the impression that safety, in the form of safety factors, robustness, workmanship or quality control, can be relaxed. This is a fallacy, based on the notion that the duration of exposure—to loads or to other effects such as wear and tear, rough handling, temperature gradients or imposed deformations—is proportional to the risk of failure (meaning that the shorter the period of service, the smaller the risk). Most elements of exposure are of the type that experiences a momentary peak, e.g. during the placing of concrete on a scaffolding, or for the shoring wall protecting an excavation following an exceptional rainstorm. The risk associated with this sort of event has little to do with the duration of service, but rather with the uncertainty related to the response of the structure. Scaffolding and excavation failures figure very prominently among the accidents causing loss of life and value, certainly as a consequence of the feeling that lesser safety margins and precautions are appropriate than for structures meant to serve for a number of decades or more. Similar considerations apply to machinery such as lifting devices, cranes, derricks or swing stages, which are usually of an isostatic character with a unique load path. Newcomers are often surprised at the magnitude of the loading assumptions for lifelines or suspension elements of hanging stages; these are based on experience with past failures and the fact that secondary effects are often difficult to predict and must be “lumped in” with the safety margins of the principal load case.

2 The Client
The client, in his role as the “maître d'ouvrage” (“master of the works”), must act as such, and this includes a number of things. Define (specify) what is to be bought/made/produced: if this is not done judiciously, confusion will ensue, preparing fertile ground for litigation. Hiring the team to do this (consulting architects, engineers, specialists, contractors, verification agents, etc.) has always been a source of problems where the choice is made on the basis of lowest cost (lowest bidder), mostly disregarding potential performance. Numerous public bodies are forced to do so by regulation, often without regard to qualification, competence and reputation, to the detriment of the works. Often the client will take upon himself part of the management—numerous differing models are being acted out, from a pure turn-key mode at one extreme to the traditional mode where the client's own agents act as managers, and everything in between. Difficulties have been found with every one of these options, mostly related to inconsistent management.

2.1 Budget Constraints
It has become almost the rule that a substantial discrepancy is accepted to exist between the wish list, in other words the expectation, and the budget proposed for a project, following the tendency embedded in present-day culture to try and get more for less. This has a dramatic effect on the outcome, i.e. the real performance of the making and the quality of the product, which is now a variable. If the producer faces a set of incentives that makes it attractive to compromise quality by cutting corners, he will do so for his profit or survival. Restrained budgets will tend to produce precisely that, i.e. situations where the supplier of services, be it consulting or execution, cannot afford to deliver good quality work. As a rule, the buyer will receive more or less what he is paying for, except that in real terms he will not necessarily know where the deficiencies are located, since the shortcuts the contractors choose in order to make ends meet will not consistently be disclosed but may remain hidden. Often clients will institute agents and strategies in an attempt to control the situation, but if these agents are engaged on similar terms, the success of such measures will suffer from the same illness. Greed has been part of most cultures, past and present, and it has been instrumental in almost every project where something went wrong. From the quality of construction it can be gauged how dominant its role was in the planning and execution of the work.

2.2 Time Pressure
“Time is money”: the time between the commitment of funds to pay for the work and the eventual return the completed works are producing is a quantity which must be minimized, since the money tied up in construction must be either borrowed or syphoned off from other purposes. This represents a cost, especially in times of high interest rates. Typically, therefore, clients will take their time to specify what they wish to be built, since in the planning phase relatively little money needs to be mobilized. This changes dramatically as soon as real construction takes over, where cash flow will increase by factors of 10 or more. Time will be precious now, and the pressure is on to speed up the work. Time pressure is the enemy of good quality: it will cause people to do things sloppily and keep quiet about it lest they be looked at as uncooperative. Clients who impose time pressure must recognize this relationship and the risk it brings along.

2.3 Selection of the Team
This is one of the most important decisions affecting the wellbeing of a project. Although this is obvious, many clients will give precedence to minimal cost for the fees of engineers, architects and contractors, and select accordingly. As a rule, this often turns out to be penny-wise “at the end of the day”, when the cost of the consequences of mediocre work comes in, which can be a large multiple of the savings achieved initially. Every construction process, no matter how long the planning phase was, will start with substantial uncertainty about how things will develop in reality. For reduced fees, reduced service will be given, and it is an illusion that because the participants—architects, engineers, contractors—are working to the same laws and standards, this makes them interchangeable. It leaves out the fact that the quality of the planning, i.e. how well it is thought through, has an effect on uncertainty and cost. As well, interventions will become necessary in response to unforeseen circumstances and developments during the entire process. This will require insight and effective cooperation among the players, which is only possible in a well-functioning team of competent players. It is the owner's task to see to it that the motivation of all the participants is directed onto the job, to the best possible degree. This degree will reflect the equitability of the conditions at which each participant works—if remuneration has been negotiated down to below reasonable needs, the result will reflect this.

2.4 Payment Morale
Prompt payment is part of the equitability. Most contracts state a payment schedule which is often not adhered to by the client, mostly leaving the hired party with little effective power to straighten out matters, especially in the case of consultants. This author speaks from long and varied experience: the adversity that ensues when remuneration is delayed under whatever pretext is certainly not in favour of good quality work. In a climate of contention or confrontation, good ideas do not prosper—and good ideas are needed in all phases of the work, especially in the context of troubleshooting, in other words when errors of previous phases are being discovered and must be remedied.

2.5 Client's Participation
Since construction projects are not usually things bought off the shelf or from catalogues, the contribution of the client is needed in terms of specifications for the development and planning of the desired product.

If this is not provided, or only insufficiently so, the construction team is on its own for decisions which, given the different incentives to which the participants are answering, are likely not congruent with the client's intentions and may result in misunderstandings and contention. Litigation will ensue, and the absence of clear specifications will stand against the client; in other words, he will not have a case. Often the client will look to the professionals to help with that specification, which will then be based on a consensus—a much better proposition per se. However, if taken too far, it may become deleterious in its turn, where the client interferes with the methodology of the work through untimely or inappropriate interventions. It is important that the client, as a participant in the team, acts out his role in a judicious and balanced way. His interest is mainly in the quality and function of the end product, and it is therefore generally a good idea to refrain from dictating the methodology by which the work is performed.

3 Organization
Construction projects are by their nature organisms of considerable complexity, as can readily be seen by looking at graphic representations of the network of correlations and dependencies, even for relatively simple projects, and especially concerning the sequence and timing of the various activities. In the car industry, this has been brought to near perfection under the leadership of Toyota, with their philosophy and methodology of “just in time”. The construction industry has been struggling with this goal since its beginning and, with the aid of powerful software, has made considerable progress. However, the variation of individual projects from an idealized standard model is much greater than for the more or less uniform products of the manufacturing industry, preventing the methodology from being entirely transposable from one to the other.

3.1 Management
The management of a project is usually put into the hands of a few individuals whose incentives, and therefore their priorities, are, by the nature of their relationship, not entirely congruent with those of the client. This applies of course to all the players participating in the construction process, including the personnel employed directly by the client himself. The problem can be partially neutralized by an open management model, with the various parties participating in “running the job” via discussion and periodic review. To direct an organism like a construction process takes a great amount of insight, even into the details of the project: “the devil is in the detail”. This includes the ability to recognize the critical path or paths, anticipating events that lead to delays—it is very damaging to the work to keep workers idle, waiting for conditions allowing them to work.

Since modern work is no longer based on forced labour or slavery, the attitude of the individuals executing the work is closely related to their performance. If management is failing them, their attention and, eventually, their loyalty will become diverted. Modern business practice is frequently found to be in contradiction with such traditional values as loyalty, trust and conscientiousness—values we like to associate with an idealized past which never existed. They can be found occasionally in small enterprises where the leadership rests with what one may call an artist or master tradesman in the traditional sense, i.e. someone who derives pride from the work he is performing; however, artists are, as a rule, not good business people and will therefore not have easy access to large projects, where money speaks most loudly.

3.2 Team Work
Today's practice has gone a long way from depending on general competence toward specialization, which means that even in small projects representatives from a number of different disciplines will have to be involved in the decision making. This translates into the need to form a team from the start, where all the competences are pooled together and where the interfaces are bridged by a well-structured mode of cooperation. It is a good idea to care for a smooth and productive interaction from the beginning of a project; conceptual decisions must be taken with all major players participating, in order to avoid difficult conditions for one or the other who was left out of the process initially. Such difficulties have caused great cost, as well as aggravation and loss of value. Often this remains latent for a while, until it is discovered that things do not work out as anticipated. “Acrobatics” may then be needed to save the day, where other, less costly options could have been considered if the conceptual decisions had been reached in consensus. Building always has the character of an adventure, as things are planned for the future, which involves a quantity of uncertainty. Much as in other types of adventure, such as alpinism, exploration or war, team spirit is essential. A measure of mutual trust, meaning that one can rely on each other, must exist. If it does not, risks will grow and be prone to take effect. Trust is mostly an emotional issue but very much related to such criteria as communication, thoughtfulness and team spirit. Cases have been seen where trust was shattered and communication was lost—the story of the Tower of Babel comes to mind: following the disaster, people were no longer able to communicate. Psychologists have tried to deal with this potential problem preventively, and “partnering” meetings or parties have been organized. In this author's experience, the utility of this sort of event is questionable, and the good feelings it creates do not outlast the effects of the imbibed alcohol by much.

3.3 Assignment of Tasks and Responsibility
It is a job for management to assign all tasks and responsibilities to participants who command the competence to perform them. This is not always trivial, and interfaces are frequently found between architecture and structural engineering, for example in the design of curtain walls or of other types of building envelopes, where it is not clear who commands the competence to perform the work. This involves important criteria from building physics and, in some circumstances, it has been left in limbo, with nobody competent attending to it, with very unfortunate consequences.¹ Similar situations have been found at the interface of civil work and plumbing, where it was not resolved who was to attend to which elements of the water supply and the drainage of waste and storm water. This is a matter of contractual assignment of mandates requiring careful attention, lest things fall “through the cracks”.

¹ This author remembers a situation where he ended up supplying the necessary details for the attachment of a stone façade, having to persuade management that this was a task worthy of a fee—in the minds of the managers the task did not really exist and had not been attributed to either the architect or the structural engineer.

3.4 Methodology of Decision Making
Most projects start with more or less informal “barnstorming” or “brainstorming” meetings where ideas are “tossed around” and discussed. Much of the time, the result of such sessions are the conceptual decisions, and it is important to record and document carefully what was in effect decided by general consensus. Participants who differ must be given the opportunity to question the appropriateness of the decisions, following their reflection about the consequences for the work of their own discipline but also for all interfaces with others. Ramifications that did not come to mind during “brainstorming” may do so in the early morning hours, when people are bothered by apprehension about unresolved issues, or in the course of a review against past experience. Things may become clear at this time, furthering optimisation.

3.5 Communication Communication is the lifeblood of the organism that is the construction process: information must reach its destination as completely, correctly and quickly as possible. With the advancement of technology, work is becoming more complex and difficult to oversee. All of this translates, once again, into splitting the work into ever more pieces, increasing the number and the specialisation of the participants. At the same time, the number of interfaces between the n participants of the construction process grows combinatorially, roughly as n(n-1)/2, making communication increasingly voluminous. Much communication takes place in meetings to which the key participants are invited by management. This works well for projects of limited magnitude and complexity. For larger and more complex projects, a separation into portions along boundaries, which are sometimes physical, is used. At those boundaries, a special methodology is needed to take care of all the relationships at the interface which cannot be decoupled. Once again, for larger projects this requires a vertical, hierarchical, tree-type structure in which communication travels down one twig and branch to the trunk and up again to the other branch and twig; horizontal communication no longer works well. How sound the tree is will decide the adequacy of communication.
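As a rough illustration of this growth, the following short Python sketch evaluates n(n-1)/2 for a few team sizes; the team sizes are arbitrary examples, not figures from the case studies.

```python
from math import comb

# Number of potential pairwise interfaces among n participants: n choose 2 = n(n-1)/2.
# The team sizes below are arbitrary, purely illustrative values.
for n in (5, 10, 20, 40, 80):
    interfaces = comb(n, 2)  # equivalent to n * (n - 1) // 2
    print(f"{n:3d} participants -> {interfaces:4d} potential pairwise interfaces")
```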

4 The Human Individuals Although the environment, in terms of organisational, financial, social and psychological circumstances, has a great influence on the error proneness of a project, it is after all the individual actor who commits or contributes to the error. The human actors themselves and their interaction are ultimately instrumental for the fate of a project. They take the decisions, create the information and transform it into physical reality. Recently, some of this has been delegated to electronically controlled machinery, but the software used to do this is itself a human creation and therefore bears basic characteristics similar to those of its creators, with the additional drawback that it has become increasingly complex and opaque, and therefore removed from control and verification. This has become an all-pervasive condition affecting every industrial endeavour, and it has begun to make itself felt.

4.1 Competence, Knowledge Setting aside the initial obsession with the lowest price on offer, competence is still a decisive criterion for the selection of consultants, contractors, managers etc., even if it means that laws intended to enforce acceptance of the lowest bid have to be circumvented by prequalification or preselection procedures, sometimes in clear disregard of the intent or the letter of the law. It is particularly the element of the unpredictable that demands a high level of competence and insight. These are the faculties that will decide the consequences of circumstances that were not anticipated, in other words trouble-shooting. Usually, time for this is short and quick decisions are called for.


4.2 Devotion, Care and Attention Similarly to professional competence, the attitude of the individual players will be decisive for the outcome and quality of the work. It always has been. It is a question of the incentives each human actor responds to: if they are not aligned with the good of the work, a problem exists. If an extra effort is not tied to any reward, it will likely not be made, at least not in the long run. In some situations, rewards may be of a negative nature, i.e. something to be avoided, such as the risk of getting fired. Devotion to the work translates into enhanced attention and circumspection, both decisive elements of the quest to eliminate errors and flaws. If it is blunted, people will fall back on a minimalistic mode of working, with commensurate consequences. Recent work in psychology finds that emotional factors such as humility, moral involvement and honesty are more important for the quality of the work than was thought earlier.

4.3 Continuity Every individual involved in a job accumulates knowledge about the work over time, much of which exists in undocumented form in his or her memory, to be recalled when cues come up that relate to it. This know-how is difficult to communicate. When one actor is replaced by another, much of it is lost. A new replacement will need time to learn, and during this time competent attention will have to be supplemented.2

4.4 Ignorance and Learning Numerous errors can be traced back to decisions that were made in disregard of certain circumstances the future structure would see. Often this happens through ignorance of circumstances which could have been identified if only the necessary investment in time and resources had been made. The reasons for this are manifold: time pressure, misinterpretation of the behaviour or exposure of the future works, difficulty in finding pertinent knowledge, the illusion of sufficient know-how, or simply pride.

2 This author remembers a desperate foreman on a construction site lamenting that he had just met the 87th representative of the structural engineering firm—the work was eventually completed, but at the price of considerable cost overruns. The man's desperation may have caused him to exaggerate, but it confirms the problem that large engineering firms have, where hundreds of engineers are on the payroll and are assigned to the work at hand at the whim of a management which may be removed by several echelons of hierarchy from the events on the construction site.

Every engineer will have built a network of connections after some years of practice, with colleagues, peers, teachers, students, collaborators etc., through which some knowledge beyond the personal arsenal can be mobilized quickly. There is, of course, also a great quantity of literature in learned books and articles, which has become more accessible through the internet. Experience shows that, even so, a brief conversation with somebody "in the know" will yield pertinent understanding more readily than the reading of a text written with something else in mind. This author looks back on numerous brief conversations with peers that proved more pertinent to the problem at hand than printed knowledge ever could—books do not answer questions, nor do they produce sketches to explain … Most important, however, is to be aware of the limits of one's knowledge, to recognize when a problem exceeds those limits—and if so, to get help.

4.5 Delegation to Software The increasingly detailed codification and regulation of design work, combined with the ease modern software offers the user, has had a rather dangerous consequence. More and more design decisions are delegated to automatic processing, and control of it all has been slipping from the hands and minds of the engineers responsible for those decisions. It all takes place inside a black box whose workings have become ever more sophisticated and, at the same time, less transparent. This has gone so far that occasionally the software itself becomes the object of research, in order to find out what really goes on inside the black box. Of course, the computer will readily print out or show on the screen whatever one commands it to show; it takes insight to decide whether what is seen is pertinent, complete and comprehensive—in other words, whether it makes sense.3 Hand calculations with simplified models, as well as experience, will help and are indispensable.
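A hand check of the kind recommended here can be very simple. The Python sketch below assumes a simply supported beam under uniform load, with invented numbers, and compares the closed-form midspan moment with a hypothetical value read from an analysis printout; it is a minimal illustration of the idea, not a method prescribed in the text.

```python
# Minimal order-of-magnitude hand check for a simply supported beam under
# uniform load, of the kind one might use to sanity-check black-box output.
# All numbers below are invented, illustrative values, not taken from the text.

w = 25.0e3   # uniformly distributed load [N/m] (assumed)
L = 8.0      # span [m] (assumed)
E = 30.0e9   # modulus of elasticity [Pa], e.g. concrete (assumed)
I = 4.0e-3   # second moment of area [m^4] (assumed)

M_max = w * L**2 / 8.0                   # midspan bending moment [N*m]
d_max = 5.0 * w * L**4 / (384 * E * I)   # midspan deflection [m]

software_moment = 205.0e3  # hypothetical value read from an analysis printout [N*m]

ratio = software_moment / M_max
print(f"hand check:  M_max = {M_max/1e3:.0f} kNm, deflection = {d_max*1e3:.1f} mm")
print(f"software/hand ratio = {ratio:.2f}")
if not 0.8 <= ratio <= 1.2:
    print("Discrepancy beyond ~20 %: investigate the model before trusting the output.")
```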

3 This author has seen structural analysis performed on models with tens of thousands of elements—beams, shells etc.—and hundreds of load cases, only to find that the scanning used did not show up the critical places where the structure would fail prematurely (see cases 13, 24). At the same time, materials were wasted because the modelling followed a set of rules not appropriate for the type of structure used. As an example, the torsional resistance of concrete beams, when accounted for, requires a substantial quantity of additional reinforcing over and above what is required for bending and shear. Very often it is not needed, and appropriate modelling can avoid it. The mere quantity of the output may itself form an obstacle to the understanding of the structural behaviour, the relevant numbers being buried beyond recognition in the mass of data produced.

5 The Structure Topology, geometry, dimensions and materials decide whether a structure is appropriate for its intended purpose. Most of these properties are decided in the initial phase of a project and adjusted in minor ways in later phases. Recent history has greatly augmented the menu of materials and production methods a modern designer has at his or her disposal; one or two centuries ago there was not much choice. Still today, the application of most modern materials is limited by their high cost in comparison with concrete, steel, brick or wood—this has been one of the chief reasons for the slow development of the construction industry compared to other technologies.

5.1 Robustness, Resilience Robustness can be defined as the ability to withstand exposure to unforeseen events or circumstances with limited damage. It can take different forms (see Knoll and Vogel 2009), such as reserve strength, ductility and deformability, mostly concerning the behaviour beyond the limits of elastic response. Building codes and standards have recently begun to include robustness as a mandatory requirement to be provided in all new structures. Where classic safety margins were intended to compensate for normal variations of resistance and exposure, robustness provides structures with the ability to survive circumstances that exceed the limits of the expected, just in case, albeit in a form that may include permanent deformations, cracks or other signs of distress, but not collapse. The absence of robustness has been found responsible for the severe consequences of some accidents, which would have been greatly reduced if a measure of this property had been built in.
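One of the forms of robustness named above, reserve strength, lends itself to a very crude screening. The Python sketch below is a purely illustrative example, not a procedure from Knoll and Vogel (2009); the member names, capacities, demands and the target ratio are invented.

```python
# Crude reserve-strength screening: flag members whose capacity/demand ratio
# falls below a chosen target. Names and numbers are invented placeholders.

members = {
    # name: (capacity [kN], demand under the design scenario [kN])
    "tie T1":    (450.0, 300.0),
    "column C3": (1200.0, 1150.0),
    "girder G2": (800.0, 500.0),
}

TARGET_RESERVE = 1.25  # assumed minimum capacity/demand ratio for screening

for name, (capacity, demand) in members.items():
    reserve = capacity / demand
    flag = "" if reserve >= TARGET_RESERVE else "  <-- little reserve, check robustness"
    print(f"{name:10s} reserve = {reserve:.2f}{flag}")
```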

5.2 Appropriateness, Suitability The verdict on whether the design chosen for a construction was appropriate or not will often make itself known only some time after construction—it may take years for problems to manifest themselves, depending on such factors as exposure to water, frost and thaw, chemical attack by de-icing salt, vibration, mechanical abrasion, collision etc. The temptation to decide on the basis of initial expenditure is often strong, especially in commercial construction, and structural systems or materials are selected to reduce it, regardless of the consequences. Many sins of the past are haunting the industry today, especially in the harsh climate of the northern hemisphere, where methods and materials brought from regions with more clement weather are proving to be less than suitable.


Society is facing an enormous bill for the rehabilitation of masonry and concrete construction produced in the late nineteenth and twentieth centuries, much of it built with the attitude that future generations would come up with the technology and the finances to deal with the consequences of unsuitable construction. Those future generations are us, and the question is legitimate whether we are in fact doing a better job.

5.3 Complexity Complexity may be compared to the density of a forest, where things may be hidden behind shrubs and bushes—the more branches and growth, the more places where things may hide and remain undiscovered. Complexity is the opposite of transparency; it makes things opaque and difficult to comprehend with all their implications. It means that the uncertainty about the real behaviour of a complex structure is increased and must be compensated in one way or another. The analysis of complex structures may have to make use of several different approaches in order to elicit all possible outcomes. Generous dimensions may be the answer if insight is incomplete. This is often the case when working on existing structures, where documentation is missing and access is limited.

6 Use The longest chapter in a structure's history is normally its use. The period of use, or exploitation, may vary from a few months, in the case of buildings constructed for a summer festival or exposition, to several centuries for "high-end" private or institutional buildings. The use may vary over time and may include misuse or abuse, where the structure becomes exposed to effects it was not built for, or that had not been included in the planning of its future. The structure may also become exposed to effects due to changes in its environment—a house built in a countryside setting may suddenly find itself in the middle of high-rise construction because of urban development. During the period of use, the presence of a professional engineer or architect is not self-evident: the occupant and owner are on their own, with all that this implies. They may or may not decide to involve the advice and services of a professional, unless this is made mandatory by public decree, which usually follows accidents. Most structures will experience degradation over time due to cyclic loading, corrosion, chemical alteration, abrasion or modification. This last influence has become a very serious issue as a consequence of modernisation, where the structure itself, or other systems (climatization, plumbing, electricity, etc.), are adapted to new requirements. This usually necessitates penetrating, cutting or altering structural elements or systems, and often amounts to a weakening of the structural resistance, a fact which may remain latent, hidden behind finishes, with knowledge about it lost through changes of administration or ownership, or simply through time.

6.1 Maintenance It has been said that maintenance is an "investment", suggesting that it is optional, i.e. one can carry it out, or not. The consequences of the second option are everywhere, and it is abundantly clear that the cost of maintenance, when it is finally undertaken, increases with the length of time one adhered to that option (see cases 7.4, 7.14), as does the risk of accidents (e.g. the Morandi Bridge in Genoa). The need for maintenance usually arises long after the original construction, and it is an unwelcome bother, expenditure and complication, which creates the temptation to delay it as long as possible—until it becomes somebody else's problem. Public administrations concerned with the safety of the people in or near the structures in question have taken it upon themselves to enforce maintenance work by regulation, inspection and decree, but this has met with limited success, since negligent owners have an array of means at their disposal to delay or circumvent it. Sometimes accidents are preceded by warning signs, such as small pieces of stone, brick or concrete falling, or rust causing deformation and spalling.4 In other cases degradation may remain latent and must be monitored proactively to ensure safe performance. Monitoring is therefore part and parcel of maintenance; if it is not done, danger is waiting.

4 Rust typically has five to ten times the volume of the iron (steel) from which it formed, and can exert considerable pressure. Similarly, frost action may dislodge masonry units for everyone to see, advertising the need for rehabilitation.

6.2 Exposure Real or hypothetical exposure, or knowledge about it, may change over time, due to climate change, newly acquired insight, neighbouring construction, regulation or use. Taken together with the diminishing capacity of structures to resist exposure, this has proven to be a major problem for legislative or regulatory agents such as code committees when dealing with existing construction, as witnessed by their hesitation to do so and by the sometimes exceedingly complicated procedures and rules being decreed. Exposure to environmental circumstances (wind, snow, seismic action, etc.) has been made the subject of intensive research and can be assessed with probabilistic methods, based on records of past events. Recently, climatic changes have made it necessary to review the validity of those records, which must now account for the predicted future development of climatic conditions—a subject which unfortunately has become highly charged politically.
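As a minimal sketch of such a probabilistic assessment, the following Python fragment fits a Gumbel distribution to a series of annual maximum wind speeds and reads off a 50-year return value. The wind speeds are invented placeholders, not real records, and the Gumbel model is only one common choice among extreme-value distributions.

```python
# Illustrative extreme-value estimate: fit a Gumbel distribution to annual
# maxima and read off a return value. The wind speeds below are invented,
# purely illustrative numbers, not real measurements.
import numpy as np
from scipy.stats import gumbel_r

annual_max_wind = np.array([
    24.1, 27.3, 22.8, 30.2, 26.5, 25.0, 29.1, 23.7, 28.4, 26.9,
    31.5, 24.8, 27.9, 25.6, 29.8, 23.2, 26.1, 28.0, 25.3, 27.0,
])  # m/s, one value per year (assumed)

loc, scale = gumbel_r.fit(annual_max_wind)

T = 50  # return period in years
v_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
print(f"Estimated {T}-year wind speed: {v_T:.1f} m/s")
```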

7 Checking, Quality Control, Insurance The concept of quality control is as old as mankind. As soon as construction was delegated to professional agents, the state authorities recognized the necessity of instituting a system of negative incentives in order to encourage good quality (see Hammurabi's law, for example, where punishment is specified according to the severity of the shortcomings affecting a building—at the time there was no insurance, and the builder was made wholly and personally responsible for the product of his work, with no distinction between civil and criminal law).

7.1 Effectiveness Checking, review, verification, identification of risks and interpretation of potential danger scenarios are all part of the activity called quality control (QC) or quality assurance (QA). The concept of quality assurance has mostly been perverted into something having little to do with quality.5 The discussion will therefore concentrate on quality control (QC). One may say that quality is the absence of human errors and their effects, and that quality control is aimed at their elimination. Errors and flaws come in various guises and potentially affect any and every activity in the construction process. To be effective, quality control must address all of these; it must take different forms, be performed by the right people, at the right time, and be focussed on the right elements, systems or interfaces. This is a problem of optimization, since resources will usually be limited in terms of time, personnel and qualification. Even with the greatest effort toward the elimination of flaws, the work will rarely match, to 100%, the theoretical perfection the specification spells out, and the effectiveness of quality control is measured by how close it has brought the work to that perfection.

5 Quality assurance has become an activity aimed almost exclusively at providing extensive documentation. It has turned out to give little protection against mishaps caused by human error.

7.2 Timing The question of which resources and methods to put to work on error hunting at which time—not too early, not too late—is not always a trivial matter. At the design stage, when creative work is performed in an interactive optimization process, a circumspect mode of review is called for. As things become more specific and the quantity of information representing the project grows, reviews become progressively more formalized, e.g. in the form of check lists or verification protocols. However, even in the later stages of project development, it is often fruitful to revisit basic questions, in the sense of looking back to make sure the road one is following is still the correct one. The moment when the work is turned over to the contractor for execution is a pivotal station in the process. It extends from the preparation of the contract documents to the delivery of the construction documents—drawings, specifications and instructions.

7.3 Methodology, Targeting Construction, as opposed to most manufacturing industries, is still very much a matter of work by human individuals, although automated processes are gaining ground. The more automatic processing replaces human work, the more important independent control becomes.6 To be effective, quality control and checking must be on target, i.e. they must follow every step of the building process where information is created, transformed, communicated or put into physical form. Since total control of every element is out of reach most of the time, checking will have to be aimed at the critical points in the process. Experience will tell where they are.

6 Modern techniques and tools permit easier and quicker production of documentation by cutting and pasting from a stock of stored data developed earlier. Although this has resulted in a manifold increase in productivity, it has also become a new source of error, because the thought that goes into the production remains largely related to the time spent rather than to the quantity of the product. Where the technician making a drawing by hand had many hours to consider what he or she was doing, and its implications, while tracing lines, his or her colleague today has to devote much of the attention to the operation of the computer and its keyboard while quickly assembling and adapting pieces of information imported from other jobs or from a standard arsenal. This leaves little time for the digestion and comprehension of what one is producing. Today's computer-generated drawings look perfectly neat and clear, but they can be just as wrong as those of 50 years ago, or even more so.
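One common, generic way of aiming limited checking capacity at the critical points, not a method prescribed in the text, is a simple risk-priority ranking of the items to be checked. The Python sketch below illustrates the idea; the items and the likelihood and consequence scores are invented placeholders.

```python
# Toy risk-priority ranking for allocating limited checking resources.
# Items, likelihood and consequence scores (1 = low, 5 = high) are invented,
# purely illustrative placeholders.

check_items = [
    # (description, likelihood of error, consequence if undetected)
    ("post-tensioning anchorage details", 3, 5),
    ("formwork and shoring design",       4, 5),
    ("reinforcement at slab openings",    4, 3),
    ("finish schedule",                   2, 1),
]

ranked = sorted(check_items, key=lambda item: item[1] * item[2], reverse=True)

for description, likelihood, consequence in ranked:
    print(f"priority {likelihood * consequence:2d}: {description}")
```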

7.4 Response to Circumstances, Follow-up and Trouble Shooting Once a deficiency is suspected or recognized, or has already taken effect, corrective action is called for. It may take many different forms, and the history of response varies widely from case to case. The corrective action, like the original concept, becomes the object of an optimization process, only now time pressure has become intense and lengthy analysis is frequently not an option. Once again, experience will have to be mobilized and decisions reached quickly, as a consensus of all, or most, parties involved. Corrective action usually carries a price tag, which will be an important ingredient in the optimization process. Somebody will be liable for the cost, and that party will likely try to have the last word. Nowadays it is mostly insurance companies playing that role in the drama. They will bring in their own experts to "help" with the decision, whose incentives and interests will parallel those of the party that pays them. Repairs introduced to correct a deficiency may themselves fail in turn, due to problems with adhesion, durability, material properties and compatibility. The optimization of the corrective intervention differs from that of the original project in that it now includes new aspects, such as the minimization of delays and the making good of damage and aggravation. In some societies, saving everyone's face is an all-important consideration and may force a choice of options that are less than satisfactory in other respects.

7.5 Resources Most of the work of the checking crew is of a preventive nature, so to speak, aimed at eliminating faults in the construction process before they take effect. This implies that resources have to be mobilized at or near the start of every phase. Resources must also be up to the task, meaning qualified enough to be effective. At times this may become a major problem of personnel management.

8 Results, Conclusion Setting aside the fact that the quantity of data is limited to the sixty-odd cases, a few basic conclusions can be read from the matrix (see the Appendix) showing which particular elements of the construction process turn out to be at the root of the problems causing the mishaps. The design concept stands out frequently as the origin of things going wrong, with a large proportion of cases where it was important or most important for the genesis of inappropriate decisions which became errors in the process. Bad workmanship was a contributing factor in more than half the cases, as was a deficit in the competence and attitude of the participants. A lack of robustness is responsible for numerous failures with serious consequences, which could have been avoided or attenuated had sufficient robustness been provided; it could be argued that a lack of robustness must be traced back to the conceptual design, where it was neglected. Insufficient or ineffective checking and monitoring must be cited as an important contributor in about half the cases; in effect, checking did not meet its goal in almost every circumstance where things went wrong.


The author believes that this evidence indicates where attention ought to be concentrated through pertinent organization of the construction process. This is clearly a task for management, which ought to be entrusted to agents commanding sufficient insight.

References

Knoll, F., & Vogel, T. (2009). Design for Robustness. IABSE.

Appendix

Matrix of Circumstances Versus Errors
